April 10th, 2013 13:00
NX4 reconfigure drives
I just purchased a VNXe and moved all my virtual servers over. All I have left is a CIFS server and some left-behind files. I would like to blow away all the SATA drives and create one large storage pool to run AppAssure backups on. I am having trouble figuring out how to get started. The current SATA drives hold two virtual disks; one is being used by the CIFS server, the other is blank. I am really green when it comes to NX4 configuration and would appreciate any help. Thanks, Don
Rainer_EMC
April 11th, 2013 02:00
Are you planning to use them for NAS via the Data Movers?
If yes, then you need to keep in mind the supported RAID configs.
For the NX4 with SATA they are:
2+1 RAID5
3+1 RAID5
4+1 RAID5
5+1 RAID5
RAID 1/0 (2 disk)
4+2 RAID 6
None of these fits eight disks, so I suggest you stick with what you have.
Also, just one LUN wouldn't be optimal – you want at least two, better four.
Once you have deleted all your old file systems, all space should be returned to the system pool, and you can create a large file system spanning both RAID groups if you like.
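For reference, you can check from the Control Station how much space has come back to the system pool after deleting the old file systems. Something like this should work (command syntax from memory, so double-check against the Celerra Command Reference for your DART version):

nas_fs -list
nas_pool -size -all

The first lists the remaining file systems; the second shows total and available space per storage pool (e.g. clarata_archive).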
Rainer_EMC
April 10th, 2013 14:00
I assume you are still OK with your RAID config.
Just delete all the CIFS servers, shares, and file systems.
dsmmh
April 10th, 2013 20:00
Actually, I would like to rework the RAID groups. Currently I have two RAID groups with a total of eight drives. I would like to combine them into one large group, and then out of that create one large LUN.
Thanks, Don
dsmmh
April 12th, 2013 13:00
Thanks very much for your help.
I do have one more question on this:
As I was going in and deleting the iSCSI LUNs so I can create the larger storage share, I got:
server_2 :
removeLU: lun 4 in use, can't be removed
failed to removeLU err=16
No recommended action is available. For more information
and assistance: 1. Log into http://powerlink.emc.com and go to Support >
Search Support. 2. Use the message ID or text from the error message's brief
description to search
Any suggestions?
Thanks Don
Rainer_EMC
April 12th, 2013 18:00
Either it's still used by an iSCSI server or there is a snapshot.
Check the Data Mover log for more details.
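To view the Data Mover log from the Control Station:

server_log server_2

Look for entries around the time of the failed removeLU – they should say why the LUN is considered in use (snapshot, LUN mask, or an iSCSI server still using it).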
dsmmh
April 15th, 2013 12:00
I see that I still have an iSCSI LUN, under the Sharing tab > iSCSI, using that file system. Can I just delete that iSCSI LUN?
Thanks, Don
dynamox
April 15th, 2013 12:00
You delete the iSCSI LUN first, and then delete the file system it was using.
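Roughly, that sequence from the Control Station would look like the following (target, LUN, and file system names taken from earlier in this thread – verify the exact -lun -remove syntax against the Configuring iSCSI Targets on Celerra manual before running anything):

server_iscsi server_2 -lun -remove 4 -target mcstorage
server_umount server_2 -perm /mcstorage5
nas_fs -delete mcstorage5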
Rainer_EMC
April 15th, 2013 12:00
It has been a while since I played with iSCSI on an NX4 – I think you first need to remove it before you can delete it.
See the Configuring iSCSI Targets on Celerra manual – it has procedures for deleting snapshots (if necessary).
dsmmh
April 16th, 2013 05:00
I still get an error when I try to delete the iSCSI LUN. Do I need to delete the file system's mount first?
Thanks, Don
dsmmh
April 16th, 2013 06:00
server_2 :
removeLU: lun 4 in use, can't be removed
failed to removeLU
err=16
dynamox
April 16th, 2013 06:00
What error message are you getting? Are there any snapshots associated with the iSCSI LUN, or any replication sessions?
dsmmh
April 16th, 2013 06:00
[nasadmin@mcsan ~]$ nas_fs -info mcstorage5
id = 31
name = mcstorage5
acl = 0
in_use = True
type = uxfs
worm = off
volume = v146
pool = clarata_archive
member_of = root_avm_fs_group_10
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,virtual_provision=no
deduplication = Off
stor_devs = SL7E9083700067-0007,SL7E9083700067-0006,SL7E9083700067-0008,SL7E9083700067-0019,SL7E9083700067-0018
disks = d22,d21,d20,d16,d15
disk=d22 stor_dev=SL7E9083700067-0007 addr=c0t1l15 server=server_2
disk=d22 stor_dev=SL7E9083700067-0007 addr=c16t1l15 server=server_2
disk=d21 stor_dev=SL7E9083700067-0006 addr=c0t1l14 server=server_2
disk=d21 stor_dev=SL7E9083700067-0006 addr=c16t1l14 server=server_2
disk=d20 stor_dev=SL7E9083700067-0008 addr=c0t1l12 server=server_2
disk=d20 stor_dev=SL7E9083700067-0008 addr=c16t1l12 server=server_2
disk=d16 stor_dev=SL7E9083700067-0019 addr=c16t1l9 server=server_2
disk=d16 stor_dev=SL7E9083700067-0019 addr=c0t1l9 server=server_2
disk=d15 stor_dev=SL7E9083700067-0018 addr=c0t1l8 server=server_2
disk=d15 stor_dev=SL7E9083700067-0018 addr=c16t1l8 server=server_2
Rainer_EMC
April 16th, 2013 06:00
No – it wouldn't let you, and it wouldn't help.
Please read the manual I mentioned, use the CLI commands, and post the command plus output, including any errors in the Data Mover log.
Anything else is just guessing.
dsmmh
April 16th, 2013 11:00
OK, here are the results of the snapshot list:
[nasadmin@mcsan ~]$ server_iscsi server_2 -snap -list -target mcstorage -lun 4
server_2 :
Snap Name Lun Number Target Create Time
[nasadmin@mcsan ~]$
christopher_ime
April 20th, 2013 19:00
I'll assume, then, that a LUN mask is still in place for a logged-in initiator. Can you run the following command?
server_iscsi server_2 -mask -list
If a mask is listed, you'll want to clear it with the following command:
server_iscsi server_2 -mask -clear
Basically, this is analogous to a Storage Group in a block configuration (presented from the SPs). Just as you would need to remove the LUN from the Storage Group if it were presented from the SPs, here you need to unpresent it from the host by removing the LUN mask.
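Putting the thread together, the full cleanup order would then be something like this (the -mask -clear and -lun -remove invocations may need additional target arguments – check the Configuring iSCSI Targets on Celerra manual before running them):

server_iscsi server_2 -mask -list
server_iscsi server_2 -mask -clear
server_iscsi server_2 -lun -remove 4 -target mcstorage
server_umount server_2 -perm /mcstorage5
nas_fs -delete mcstorage5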