1 Rookie
•
13 Posts
2
2375
November 4th, 2010 10:00
How do I add new SATA-II disks as a new storage group / storage pool?
I need to add a set of 5 new 1TB SATA drives to my Celerra as a new RAID 5 group, and be able to create new filesystems & iSCSI targets that are directed to that RAID 5 array, distinct and separate from the rest of my iSCSI targets/filesystems.
I've followed the instructions here (http://corpusweb130.corp.emc.com/ns/common/post_install/Config_storage_FC_NaviManager.htm) as closely as they can be matched to my Celerra NS20, and bound the newly created LUNs to a new 'Storage Group' under Navisphere Manager.
When I go into Celerra Manager, there are still only two "Storage Pools" available to choose from, the original two installed by EMC personnel when the system was brand new. Those are "clar_r5_performance" for the drive chassis filled with FC disks, and "clarata_archive" for the drive chassis full of existing SATA drives. The system was installed with a third, empty drive chassis, into which I've added the 5 new SATA drives.
I've already got many iSCSI targets / filesystems already defined and in production use on the "clarata_archive" referenced set of SATA drives, which are connected to several servers already.
I'm adding a brand new document imaging server to my network, and I want its iSCSI targets/filesystems to reside strictly on the new set of 5 disks defined as a RAID 5 array, distinct and separate from that "clarata_archive" massive blob of SATA storage. I believe I need to make a new "storage pool" that references the new array, but there doesn't seem to be any clear way of doing that. The "Pools" folder under the "Storage" folder within Celerra Manager is grayed out and does nothing.
How can I accomplish what I need here?
dynamox
9 Legend
•
20.4K Posts
1
November 4th, 2010 11:00
Did you actually create a new storage group in Navisphere? What you need to do is create a RAID 5 group (4+1), create two LUNs (one on SPA and one on SPB, same size but not to exceed 2TB), and add them to the existing Celerra storage group. Rescan each datamover; after that you can create user-defined pools and add those LUNs there. See the document "Managing Celerra Volumes and File Systems Manually".
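Roughly, the CLARiiON side of those steps looks something like this from the CLI (the RAID-group ID, LUN numbers, disk IDs and capacity below are placeholders, and exact naviseccli flags can vary by FLARE release, so treat this as an outline rather than exact syntax):
# create a 4+1 RAID 5 group from the five new drives (disks given as bus_enclosure_disk)
naviseccli -h <SPA_IP> createrg 10 0_2_0 0_2_1 0_2_2 0_2_3 0_2_4
# bind two equal LUNs on that group, one owned by each SP, each kept under 2TB
naviseccli -h <SPA_IP> bind r5 30 -rg 10 -sp a -cap 1834 -sq gb
naviseccli -h <SPA_IP> bind r5 31 -rg 10 -sp b -cap 1834 -sq gb
After that the LUNs get added to the Celerra storage group and the datamovers rescanned, as described below.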
dynamox
9 Legend
•
20.4K Posts
1
November 4th, 2010 11:00
Not immediately. If you presented them and then created a file system, then yes, the pool would grab them; but if you simply present them and rescan each datamover, you will see the dvols listed with an N in the "inuse" column. Then you can use that document to do whatever you want with them. Make sure that 2TB LUN is actually a couple of megabytes shy of being 2TB.
nas_disk -l
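The datamover rescan can be done from the Control Station with something like this (mover names assumed to be server_2 and server_3 here):
# probe for and save any new SCSI devices on each datamover
server_devconfig server_2 -create -scsi -all
server_devconfig server_3 -create -scsi -all
Then re-run the nas_disk -l above and the new dvols should show up with inuse = n.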
dynamox
9 Legend
•
20.4K Posts
0
November 4th, 2010 11:00
NS20 will not recognize any LUNs that are greater than 2TB (minus 2MB) in size.
CWFNetman
1 Rookie
•
13 Posts
0
November 4th, 2010 11:00
When I created the two LUNs, I used the "MAX" setting from the pulldown box. Was that the incorrect thing to do?
CWFNetman
1 Rookie
•
13 Posts
0
November 4th, 2010 11:00
If I add the new LUNs to the existing Celerra storage group in Navisphere, will that not automatically put them into the existing default "clarata_archive" pool?
dynamox
9 Legend
•
20.4K Posts
0
November 4th, 2010 12:00
Does it apply to all systems? The release notes only mention the NS960 and select gateway systems.
Rainer_EMC
4 Operator
•
8.6K Posts
0
November 4th, 2010 12:00
Either use MVM to build your metas manually or create a user-defined pool.
Both ways are described in the MVM / AVM manuals available on Powerlink or your doc CD.
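For the manual (MVM) route, a minimal sketch, assuming the two new LUNs show up as d14 and d15 (volume and file-system names and the stripe size are placeholders; check the manual for the recommended stripe size):
# stripe the two new dvols together (stripe size in bytes, e.g. 262144 = 256 KB)
nas_volume -name stv_img -create -Stripe 262144 d14,d15
# wrap the stripe in a metavolume
nas_volume -name mtv_img -create -Meta stv_img
# build a file system directly on the metavolume
nas_fs -name fs_img -create mtv_img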
Rainer
dynamox
9 Legend
•
20.4K Posts
0
November 4th, 2010 12:00
Good, it created ~1.8TB LUNs.
When you create FC LUNs you want to distribute them between SPA and SPB. For SATA LUNs on older CLARiiONs it used to be best practice to put LUNs from the same RAID group on the same SP; now on Celerra with CX4 it does not matter anymore. What are you trying to get to on Powerlink, that document? Message me privately and I will email it to you.
sebbyr
99 Posts
0
November 4th, 2010 12:00
Please find attached documentation for managing filesystems and pools manually. Also, you may want to review Primus emc138143.
When adding new storage to the Celerra using Navisphere (Celerra Gateway models), a set of guidelines should be followed. On a per-LUN basis there is a 2-terabyte capacity limit, so it is best to create LUNs of about 1 terabyte in size and then assign them to the Celerra; be sure to read the restrictions at the bottom of this solution.
Please note that the 2 TB LUN size limit for DART applies to NAS code releases prior to 5.6.44. For NAS code releases 5.6.44 or later, the maximum LUN size is increased to 16 TB.
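If you are not sure which NAS code you are running, it can be checked from the Control Station with:
nas_version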
Celerra Model NS Systems with CX or CX3 CLARiiON (NS500, NS600, NS700, NS20, NS40, NS80)
Number of Disks in the RAID Group | CLARiiON RAID Type | Drive Type | Storage Profile / Storage Pool | Number of LUNs per RAID Group
(4+1) | RAID 5 | Fibre Channel | Clar_r5_performance | 2 (different SP)
(8+1) | RAID 5 | Fibre Channel | Clar_r5_economy | 2 (different SP)
(2) | RAID 1 | Fibre Channel | Clar_r1 | 2 (different SP)
(2) (4) (6) or (8) | RAID 10 | Fibre Channel | Clar_r10 | 2 (different SP) or 4 (different SP), NAS code 5.6.44+
(4+2) (6+2) or (12+2) | RAID 6 | Fibre Channel | Clar_r6 | 2 (different SP) or 4 (different SP), NAS code 5.5.31+ (FLARE 26)
(4+1) (6+1) or (8+1) | RAID 5 | ATA | Clarata_archive | 2 (same SP) or 4 (same SP)
(4+1) or (8+1) | RAID 3 | ATA | Clarata_r3 | 1, 2 (same SP), 4 (same SP) or 6 (same SP)
(4+2) (6+2) or (12+2) | RAID 6 | ATA | Clarata_r6 | 2 (same SP) or 4 (same SP), NAS code 5.5.31+ (FLARE 26)
(2) (4) (6) or (8) | RAID 10 | ATA | Clarata_r10 | 2 (same SP), 4 (same SP) or 6 (same SP), NAS code 5.6.44+
(4+1) or (6+1) | RAID 5 | LCFC | Clarata_archive | 1 or 2 (same SP), CX3 only
(4+1) or (8+1) | RAID 3 | LCFC | Clarata_r3 | 2 (same SP) or 4 (same SP)
(4+2) (6+2) or (12+2) | RAID 6 | LCFC | Clarata_r6 | 2 (same SP) or 4 (same SP), NAS code 5.5.31+ (FLARE 26)
Celerra Model NS Systems with CX4 CLARiiON (NS-120, NS-480, NS-960)
Number of Disks in the RAID Group | CLARiiON RAID Type | Drive Type | Storage Profile / Storage Pool | Number of LUNs per RAID Group
(4+1) or (8+1) | RAID 5 | EFD | Clarefd_r5 | 4 (different SP) or 8 (different SP), NAS code 5.6.45+
(2) | RAID 10 | EFD | Clarefd_r10 | 4 (different SP), NAS code 5.6.45+
(4+1) | RAID 5 | Fibre Channel | Clar_r5_performance | 2 (different SP)
(8+1) | RAID 5 | Fibre Channel | Clar_r5_economy | 2 (different SP)
(2) | RAID 1 | Fibre Channel | Clar_r1 | 2 (different SP)
(2) (4) (6) or (8) | RAID 10 | Fibre Channel | Clar_r10 | 2 (different SP) or 4 (different SP), NAS code 5.6.44+
(4+2) (6+2) or (12+2) | RAID 6 | Fibre Channel | Clar_r6 | 2 (different SP) or 4 (different SP), NAS code 5.5.31+ (FLARE 26)
(4+1) (6+1) or (8+1) | RAID 5 | ATA | Clarata_archive | 2 (same SP) or 4 (same SP)
(2) (4) (6) or (8) | RAID 10 | ATA | Clarata_r10 | 2 (same SP), 4 (same SP) or 6 (same SP), NAS code 5.6.44+
(4+2) (6+2) or (12+2) | RAID 6 | ATA | Clarata_r6 | 2 (same SP) or 4 (same SP), NAS code 5.5.31+ (FLARE 26)
Celerra NX4 Systems Only
Number of Disks in the RAID Group | CLARiiON RAID Type | Drive Type | Storage Profile / Storage Pool | Number of LUNs per RAID Group
(2+1) (3+1) (4+1) or (5+1) | RAID 5 | SATA | Clarata_archive | 2 (different SP)
(2) | RAID 10 | SATA | Clarata_r10 | 2 (different SP), NAS code 5.6.44+
(4+2) | RAID 6 | SATA | Clarata_r6 | 2 (different SP)
(2+1) (3+1) (4+1) or (5+1) | RAID 5 | SAS | Clarsas_archive | 2 (different SP)
(2) | RAID 10 | SAS | Clarsas_r10 | 2 (different SP), NAS code 5.6.44+
(4+2) | RAID 6 | SAS | Clarsas_r6 | 2 (different SP)
After the LUNs are done binding, they can be added to the existing storage group. Be sure to manually select an HLU / HID above 16. After the LUNs have been added to the Storage Group on the Celerra, select rescan; the new devices will automatically be added to the existing storage pools (check that the pool is set for Automatic Extension). If the CLARiiON admin does NOT manually select an HLU / HID greater than 16, then when a rescan is conducted on the Celerra side to add this storage, the following error message will appear: "skipping reserved LUN id". The devices will not be added to the Celerra database until the CLARiiON HLU / HID assignment is corrected.
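For reference, presenting a LUN with an explicit host LUN ID can also be done from the CLI, roughly like this (the storage group name, ALU and HLU are placeholders; the same thing is available in the Navisphere GUI):
# present ALU 30 to the Celerra storage group as HLU 30, above the reserved range
naviseccli -h <SPA_IP> storagegroup -addhlu -gname <celerra_storage_group> -hlu 30 -alu 30
# then rescan from the Celerra side
server_devconfig ALL -create -scsi -all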
Note: If LUNs are being removed from the Celerra Storage Group, then this solution does not apply. Contact EMC support for proper assistance; otherwise data corruption could occur.
Another option to remove a LUN in NAS code 5.6 is to run "nas_disk -d dxx -p -unbind", which will delete the Celerra disk and automatically unbind the associated LUN on the CLARiiON side.
Refer to the following documents from Powerlink for additional information about this topic: "Implementing Automatic Volume Management with Celerra" (P/N 300-002-797, Rev A03, Version 5.5) and "Managing Celerra Volumes and File Systems with Automatic Volume Management" (P/N 300-002-689, Rev A03, Version 5.5).
1 Attachment
MgVolFSM.pdf
CWFNetman
1 Rookie
•
13 Posts
0
November 4th, 2010 12:00
Apparently it liked their size... they are 14 and 15 in this list.
[nasadmin@HOUSTON ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 APM00081900009-0000 CLSTD root_disk 1,2
2 y 11263 APM00081900009-0001 CLSTD root_ldisk 1,2
3 y 2047 APM00081900009-0002 CLSTD d3 1,2
4 y 2047 APM00081900009-0003 CLSTD d4 1,2
5 y 2047 APM00081900009-0004 CLSTD d5 1,2
6 y 32767 APM00081900009-0005 CLSTD d6 1,2
8 y 451387 APM00081900009-0011 CLSTD d8 1,2
9 y 451387 APM00081900009-0010 CLSTD d9 1,2
10 y 1878321 APM00081900009-0012 CLATA d10 1,2
11 y 1878321 APM00081900009-0013 CLATA d11 1,2
12 y 1878321 APM00081900009-0014 CLATA d12 1,2
13 y 1878321 APM00081900009-0015 CLATA d13 1,2
14 n 1878321 APM00081900009-001E CLATA d14 1,2
15 n 1878321 APM00081900009-001F CLATA d15 1,2
Curiously, all the rest of the SATA LUNs defined in this system belong only to SPB, and all of the FC LUNs except for one belong to SPA. These two newly created SATA LUNs are the only SATA ones that are spread across both SPA and SPB equally. Is this normal? (I did not set up the system, it was installed by an EMC field engineer when it was purchased.)
Also, it looks like I can't download or get to anything on PowerLink right now. All I get are mostly blank blue screens with nothing clickable on them, lots of Java errors, and "Error on Page" messages. PowerLink was working fine for me Tuesday; I guess I'll have to call them on the phone and see what's up. I tried three different browsers on two different PCs to log in :-(.
CWFNetman
1 Rookie
•
13 Posts
0
November 4th, 2010 12:00
Well, without access to the official documentation, and not really fully knowing what I'm doing outside of the web GUI interfaces on the Celerra, I did a little googling and came up with this command:
$nas_pool -create -name clar_imaging_sata -description "Imaging iSCSI LUNS" -volumes d14,d15
and it came back with:
id = 42
name = clar_imaging_sata
description = Imaging iSCSI LUNS
acl = 0
in_use = False
clients =
members = d14,d15
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = CLATA
server_visibility = server_2,server_3
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Sooooo.... I guess that means I was successful?
After doing a system rescan in Celerra Manager, my new storage pool shows up when I go to create a new file system.
I hope I did the right thing, I'm about to create a new filesystem, iSCSI target, etc, and see if I can mount it from my new document imaging server.
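Roughly, the next steps from the Control Station look something like this (names and sizes are placeholders, and the server_iscsi flags here are from memory, so double-check them against the man pages / iSCSI configuration guide for your DART release):
# file system carved from the new pool, mounted on the datamover
nas_fs -name imaging_fs -create size=500G pool=clar_imaging_sata
server_mountpoint server_2 -create /imaging_fs
server_mount server_2 imaging_fs /imaging_fs
# iSCSI target and a LUN backed by that file system (approximate syntax)
server_iscsi server_2 -target -alias imaging_t1 -create 1:np=<datamover_iscsi_ip>
server_iscsi server_2 -lun -number 0 -create imaging_t1 -size 100G -fs imaging_fs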
CWFNetman
1 Rookie
•
13 Posts
0
November 4th, 2010 13:00
When I created the LUNs using the Navisphere GUI, it would not let me create one greater than 2TB. I also manually selected 30 as the starting LUN ID. Using the "MAX" option from the pulldown menu and telling it to create two LUNs, it automatically calculated 1834.298GB per LUN, which filled all available space. The graphic shows no free space among the RAID group partitions, which is what I wanted. The newly created storage pool shows I have 3.6TB to dole out, which sounds just about right.
I'm in the process of making my filesystems, iSCSI targets and iSCSI LUNs right now, and it looks like everything is doing what I wanted.
sebbyr
99 Posts
0
November 4th, 2010 13:00
Earlier NAS codes did not accept LUNs that were greater than or equal to 2TB. You had to make them at least 1MB less than 2TB to be able to present them to the NAS. Please note that metaLUNs are definitely NOT supported on the Celerra.
Later NAS codes accept LUNs bigger than 2TB.
- Sebby Robles
dynamox
9 Legend
•
20.4K Posts
0
November 4th, 2010 13:00
For all systems, or just the NS960 and select gateways?
Rainer_EMC
4 Operator
•
8.6K Posts
0
November 5th, 2010 01:00
If you are a customer then just get a Powerlink account and download the documentation.