Unsolved
9 Posts
0
35300
July 14th, 2010 08:00
Possible to move drive to different RAID Group on CX300?
I recently bought some more drives for my CX300 and wanted to move things around a bit. Right now I have it set up like this
RAID Group 0
- Disk 0-4
RAID Group 1
- Disk 5-9
RAID Group 100
- Disk 14 (Hot Spare)
I want to make RAID Group 1 contain disks 4-9 so that it has 6 drives instead of 5 and I can do a RAID 10. Can I do the following?
1) Tell Disk 4 to Copy to Hot Spare
2) Somehow remove disk 4 from RAID Group 0
3) Add disk 4 to RAID Group 1
I'll be redoing RAID Group 0 as a RAID 10 with only 4 drives a few days later, but for the time being I need RAID Group 0 to stay online.
Is this possible? Is there some other way of doing it? And yes, I realize there would not be a hot spare for a couple of days...
If the above is not possible, I suppose I could make a 4-drive RAID 10 until I can get rid of RAID Group 0, and then expand it with the two additional drives to make a 6-drive RAID 10, correct?
Any help would be appreciated greatly! Thanks
Dev Mgr
4 Operator
9.3K Posts
0
July 15th, 2010 06:00
What raid types are currently in use on raid groups 0 and 1?
OmegaZero
July 15th, 2010 08:00
They are both RAID 5.
Dev Mgr
July 15th, 2010 13:00
Though your first plan may work, I'm not a big fan of using a hot spare for more than a few hours... hot spare drives barely see any IO, so you never know whether one is good or bad (under more intense IO) until you end up depending on it.
Keep in mind that the first 5 drives (disk 0 to 4) do lose ~7GB apiece to the flare and such. This means that if you mix one of the flare drives with the others, the available disk space will be less than you would get from not using them, since the raid group sizes every member down to its smallest disk: a 10-disk RAID 10 with 146GB drives (133GB usable each) where one of the drives is a flare drive yields ~630GB instead of ~665GB.
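The capacity math above can be sketched as follows. This is a rough illustration only: it assumes the raid group sizes every member down to the smallest disk and that RAID 10 keeps half the raw total; the function name is mine, not a Navisphere API.

```python
def raid10_usable_gb(disk_sizes_gb):
    """Approximate usable capacity of a RAID 10 group (illustrative).

    Assumption: the raid group treats every member as being the size of
    its smallest disk, and mirroring halves the raw total.
    """
    smallest = min(disk_sizes_gb)
    return smallest * len(disk_sizes_gb) / 2

# 10 x 146GB drives, each with ~133GB usable, no flare drives:
all_plain = [133] * 10
# Same group, but one flare drive that lost ~7GB to the flare/PSM:
one_flare = [133] * 9 + [126]

print(raid10_usable_gb(all_plain))  # 665.0
print(raid10_usable_gb(one_flare))  # 630.0
```

The single 126GB flare drive drags every other member down to 126GB, which is why the whole group loses ~35GB rather than just 7GB.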
Also, because the flare lives on those drives, you want to avoid putting your more IO-intensive LUNs on them, as the flare puts some IO on those drives too.
Could you 'live' with a 4+4 disk raid 10 using disk 0_6 to 0_13, and then maybe a 3+3 disk raid 10 on 0_0 to 0_5?
OmegaZero
July 19th, 2010 10:00
Yeah, I forgot about the flare drives until this last weekend =/
The drives I have are all 73GB (66.64GB usable), so space is somewhat constrained. I guess before I go further, I should ask:
I don't see a LUN or anything for the flare in Navisphere, so does Navisphere even touch that? For example, right now drives 0_0 - 0_5 are RAID 5. If I kill all those LUNs and make 0_0 - 0_4 or 0_0 - 0_6 a RAID 10, will that screw up the flare? Sorry, I don't know much about how that is set up. I guess I should know that first before I go messing with this.
Dev Mgr
July 19th, 2010 11:00
One thing I overlooked is that, even if you pro-actively hotspare 0_4, you'll never be able to use that slot until you destroy the raid group.
You could, however, remove the physical disk, move it to an open slot, and then use the disk in that other slot. Keep in mind that if a flare drive is faulted, your write cache gets disabled, so you'll take a performance hit if you were to try your original plan.
As for the PSM (Persistent Storage Manager), you cannot see the 'LUNs' that make up the PSM unless you know where to look in the SPcollects (and possibly even lower level than that). So, destroying your regular/data LUNs does not affect the PSM. Some more info on the PSM can be found here: http://storagenerve.com/2009/01/06/emc-clariion-flare-code-operating-environment/.
OmegaZero
July 19th, 2010 11:00
If I destroy RAID Group 0, though, which has drives 0_0 - 0_4 in it, will that destroy the flare stuff? Or can I just destroy it and remake it as a RAID 10, since the flare runs below Navisphere?
Dev Mgr
July 19th, 2010 21:00
You can't destroy the PSM (it contains the flare and a couple of other things) unless you wander outside of Navisphere into EMC's engineering tools.
OmegaZero
July 28th, 2010 14:00
Alright, so I had an idea (it would be a lot of work, but at least things would be done right in the end). Let me double-check by posting my raid groups:
RAID Group 0: Disks 0-4 // RAID 5 // 73GB 15k
RAID Group 1: Disks 5-9 // RAID 5 // 73GB 15k
RAID Group 3: Disks 10-11 // RAID 1 // 73GB 15k
RAID Group 2: Disks 12-13 // RAID 1 // 300GB 10k
RAID Group 100: Disk 14 Hot Spare 450GB 15k
Now, RAID Group 2 does not need a lot of speed (they are for archive), so I could do this:
1) Destroy RAID Group 2 and remove drive 12
2) Use Copy to Hot Spare on drive 4
3) Swap out what was drive 12 (300GB) for drive 4
4) Put what was drive 4 (73GB) into drive slot 12
5) Make RAID 10 on drives 5-9,12 (219GB space)
6) Move data from RAID Group 0 to new RAID 10
7) Destroy RAID Group 0
8) Make RAID 10 with drives 0,1,2,3 (110GB space...?)
9) Make RAID 1 with drives 4,13 (293GB space)
Would that work or would that destroy the flare software? How do I know when the flare has been rebuilt?
Also, if I did the Copy to Hot Spare on drive 4 and then removed drive 4 and put a different drive in, would it start rebuilding the RAID group at that point? Or would it still use the hot spare until I told it to move back (which would never happen, as I'm going to destroy that RAID group soon anyway)? Guess I should read up a bit on the Copy to Hot Spare feature...
So wish I had an EMC tech to ask all these questions. Never done anything like this so I'm kinda freaked out to go start doing this without asking someone. Thanks for the great help so far!
Dev Mgr
July 29th, 2010 11:00
I started going through your steps and providing feedback on each, but I got to a point where I started to wonder why you want to shuffle physical disks around.
- the flare is hardcoded to drive bays 0 to 4; so if you start swapping disk 4 with 12, the flare will be rebuilt onto the 'new' disk that you're putting in slot 4
- disks 0 to 4 (in bus 0 enclosure 0) cannot be hotspares, so step 2 isn't an option if I'm understanding what you mean correctly
- try to keep the flare drives (0 to 4) identical models as they all lose the same size to the flare/PSM
- I'd recommend leaving the physical disks where they are (disks 0 to 11 73/15, disks 12+13 300/10, and disk 14 450/15) and maneuvering your LUNs and raid groups around them
It looks like you want to end with:
- 4-disk raid 10 on 73/15 flare drives (will yield ~119GB: (66GB - 7GB) x 2)
- 6-disk raid 10 on 73/15 non-flare drives (will yield ~200GB: 66GB x 3)
- 2-disk raid 1 on 300/10 non-flare drives (will yield ~272GB)
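Those three estimates all come from the same per-disk arithmetic. A minimal sketch, assuming round per-disk usable figures (~66GB usable on a 73GB drive, ~272GB on a 300GB drive, ~7GB flare overhead); `usable_gb` and these constants are illustrative approximations, not exact CLARiiON numbers:

```python
def usable_gb(per_disk_gb, n_disks):
    """Approximate mirrored (RAID 1 / RAID 10) capacity: half the raw total."""
    return per_disk_gb * n_disks / 2

FLARE_OVERHEAD = 7   # ~GB lost per flare drive (bays 0-4), rough figure
GB_73_USABLE = 66    # ~usable GB on a 73GB drive, rough figure
GB_300_USABLE = 272  # ~usable GB on a 300GB drive, rough figure

# 4-disk RAID 10 on flare drives:
print(usable_gb(GB_73_USABLE - FLARE_OVERHEAD, 4))  # 118.0 (~119GB)
# 6-disk RAID 10 on non-flare drives:
print(usable_gb(GB_73_USABLE, 6))                   # 198.0 (~200GB)
# 2-disk RAID 1 on the 300/10 drives:
print(usable_gb(GB_300_USABLE, 2))                  # 272.0 (~272GB)
```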
I'd suggest:
- disk 0_0 to 0_3 in a raid 10
- disk 0_5 to 0_10 in a raid 10
- disk 0_12 and 0_13 in a raid 1
- disk 0_14 as a hotspare
Leaving disk 0_4 and 0_11 unused.
How big are the LUNs that currently exist on raid group 0 and raid group 1? Depending on LUN sizes, I can see if I can figure out a way to do this on the fly, but this requires enough space to keep the source LUN and migrate it to a destination LUN.
I do assume you're running Navisphere release 24 or later (you mentioned pro-active hotsparing, which wasn't introduced until release 24, if I remember correctly).
OmegaZero
July 29th, 2010 14:00
I'm trying to move the disks around because a) it uses RAID 5, and this is a database server, so I would rather have RAID 10 (it was set up incorrectly 4 years ago), and b) I have more drives now, so I would like to take advantage of the space (there were only 10 drives in this before). The problem is that the database server is still live, so I kind of have to do this in steps and move the data around accordingly.
I know that if I used a 300GB drive in bay 4 and a 300GB drive in bay 13, they would have different sizes due to bay 4 losing ~7GB to the flare, but as I said before, this drive isn't really critical. So is there any downside to just making something like a 250GB+ LUN on a RAID 1 across those drives (by downside I mean: would it mess up the flare to have a different-size, different-speed drive as the 4th drive)?
Here's basically what I want
RAID 1: 2 x 73GB non flare for SQL Log files (.ldf). One big LUN
RAID 10: 6 x 73GB non flare for SQL data files (.mdf). Will basically be one big LUN in the end
RAID 10: 4 x 73GB flare for SQL data files (.ndf). One big LUN minus 1GB LUN for Quorum and 1GB LUN for MSDTC
RAID 1: 2 x 300GB flare/non-flare for log shipping backups of the databases (hence it's not critical). One big LUN
I've moved files around within the last few weeks. Right now:
RAID group 0 (drives 0-4) contains 200GB (SQL Data), 1GB (Quorum), and 1GB (MSDTC) LUNs - there is unused space
RAID group 1 (drives 5-9) has no data on it at all; I moved everything off so I could free the drives up and redo the RAID 5
RAID group 3 (drives 10-11) has SQL Logs (ldf) (66GB LUN)
RAID group 2 (drives 12-13) has log shipping (268GB LUN)
RAID group 100 has a 450GB hot spare
Also - you say I can't do the Copy to Hot Spare on drive 4 because it is a flare drive. So I just have to resort to pulling the drive without proactively sparing that one?
My Navisphere is 6.26.23.0.46
OmegaZero
August 5th, 2010 13:00
bump