DELL-Sam L • Moderator • February 19th, 2013 09:00
Hello Huwy,
For the most part your setup follows best practice. The only thing I see that doesn't is that you have a LUN bound to your OS drives; best practice is not to have any LUNs bound to those drives if at all possible. As for your raid type, it depends on the data and the hosts.
Please let us know if you have any other questions.
huwy • 1 Rookie • February 19th, 2013 09:00
Thanks for coming back.
The array is used to store VMware VMs.
Dev Mgr • 4 Operator • February 19th, 2013 09:00
It looks like there are 3 raid 5's on this unit: disks 0_0 to 0_3, disks 0_4 to 0_10, and disks 1_0 to 1_6.
Depending on your usage, raid 5 may be fine. This unit does not directly offer raid 50 as an option; only with Navisphere Manager (a licensed upgrade from Navisphere Express) could you use MetaLUN striping to achieve raid 50.
Raid 10, on the other hand, is available.
The FLARE drives (0_0 to 0_3) are best used for low-IO storage.
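For a rough sense of the capacity trade-offs between these raid types, here is a minimal Python sketch (not array syntax), assuming hypothetical equal-size 1000 GB disks; the real bound capacity on the array will differ slightly:

# Usable-capacity math for the raid types mentioned above.
DISK_GB = 1000  # hypothetical drive size, adjust to your disks

def raid5_usable(disks, size=DISK_GB):
    # raid 5 gives up one disk's worth of capacity to parity per group
    return (disks - 1) * size

def raid10_usable(disks, size=DISK_GB):
    # raid 10 mirrors every disk, so only half the raw capacity is usable
    return (disks // 2) * size

def raid50_usable(groups, disks_per_group, size=DISK_GB):
    # raid 50 here means a MetaLUN striped across several raid 5 groups;
    # each underlying group still loses one disk to parity
    return groups * raid5_usable(disks_per_group, size)

print(raid5_usable(8))       # single 8-disk raid 5        -> 7000 GB
print(raid50_usable(2, 4))   # 2 x 4-disk raid 5, striped  -> 6000 GB
print(raid10_usable(8))      # 8-disk raid 10              -> 4000 GB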
What is this storage being used for?
Dev Mgr • 4 Operator • February 20th, 2013 06:00
If you are using 7200rpm drives, I'd recommend going with smaller (as in: fewer drives) raid 5's over larger raid 5's, or even going raid 10 instead. Carving a single disk group into 2 virtual disks is not the same thing.
I'd consider deleting the virtual disks and disk groups from disks 0_7 to 0_10 and physically moving those disks to the 2nd enclosure. Then unassign the hot spare from slot 1_7 and make 1_11 the hot spare instead. Now you could (in enclosure 1) do a 6-disk raid 5 and a 5-disk raid 5, or better yet, 2 4-disk raid 5's and a 3-disk raid 5; a rough capacity comparison is sketched below.
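To put rough numbers on that (purely illustrative 1000 GB disks; both layouts use the same 11 disks in enclosure 1):

# Compare the two enclosure-1 raid 5 layouts above: usable space vs the
# size of the largest group a rebuild would have to read from.
DISK_GB = 1000  # hypothetical drive size

layouts = {
    "6-disk r5 + 5-disk r5": [6, 5],
    "4-disk r5 + 4-disk r5 + 3-disk r5": [4, 4, 3],
}

for name, groups in layouts.items():
    usable = sum((g - 1) * DISK_GB for g in groups)  # one parity disk per group
    print(f"{name}: {usable} GB usable, largest group {max(groups)} disks")

The 3-group layout gives up one more disk to parity, but each group is smaller, which keeps rebuild times and the exposure to a second failure down on 7200rpm drives.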
If raid 10 yields enough storage space for you, you could do a 6-disk raid 10 on 0_4 to 0_9 and another on 1_0 to 1_5. You could even move 0_10 to 1_8, move the hot spare assignment from 1_7 to 1_8, and make that 2nd raid 10 disks 1_0 to 1_7.
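Space-wise those two raid 10 options work out like this (again assuming illustrative 1000 GB disks):

# Usable space for the raid 10 options above: two 6-disk groups, or a
# 6-disk group plus an 8-disk group after moving 0_10 and the hot spare.
DISK_GB = 1000  # hypothetical drive size

def raid10_usable(disks, size=DISK_GB):
    return (disks // 2) * size  # mirrored pairs: half of raw capacity

print(raid10_usable(6) + raid10_usable(6))  # 0_4-0_9 plus 1_0-1_5 -> 6000 GB
print(raid10_usable(6) + raid10_usable(8))  # 0_4-0_9 plus 1_0-1_7 -> 7000 GB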
If you're using 10k or 15k rpm SAS drives, a 7-drive raid 5 may be manageable.