Unsolved
3 Posts
0
61825
August 15th, 2012 15:00
PS6100 RAID10 > RAID6 expanding progress stuck at 0%
PS6100 FW5.2.2 (yes I know not absolute latest/greatest)
Switched the RAID configuration from RAID10 to RAID6 almost 24hrs ago. Progress is stuck at 0%.
Load on this shelf is very low (SANHQ shows an average of 100 IOPS). We've done this same thing before on other shelves and it's always gone much faster than this. Is it stuck? Is there any way to check what it's doing behind the scenes?
In the event log there's nothing really except for the initial two events:
Expanding disk array from 10 drives, RAID 10 to 11 drives, RAID 6.
Expanding disk array from 12 drives, RAID 10 to 12 drives, RAID 6.
DELL-Joe S
7 Technologist
•
729 Posts
0
August 16th, 2012 08:00
In the GUI, highlight the member you are expanding. On the Status tab, in the upper right-hand corner, the "RAID Status" section shows the RAID status as "expanding" and the progress as a percentage.
Also, this process can take a while.
-joe
kgbailey
3 Posts
0
August 16th, 2012 10:00
That's the percentage I was talking about as being stuck at 0%.
Anyway, after 28hrs the percentage jumped from 0 to 24%, then an hour later it jumped to 78%. (And yes, I was hitting the refresh button in the GUI, even though it does dynamically update for the most part.)
Could have sworn it went up smoothly in the past, but oh well.
Jamsnz
10 Posts
0
August 25th, 2012 23:00
We have the same problem, running the 5.2.4 firmware.
Migration from RAID 50 to RAID 6 was started at 10:30am this morning; at 5:30pm, 7 hours later, it's reporting Progress 0%. Both the GUI and the command line report the same thing...
GROUP1> mem select MEMBER2 show
_____________________________ Member Information ______________________________
Name: MEMBER2
Status: online
TotalSpace: 69.26TB
UsedSpace: 40.39TB
SnapSpace: 146.6GB
Description:
Def-Gateway: 192.168.40.1
Serial-Number: xxxxxxxxxxxxxx
Disks: 48
Spares: 1
Controllers: 2
CacheMode: write-back
Connections: 22
RaidStatus: expanding
RaidPercentage: 0%
LostBlocks: false
HealthStatus: normal
LocateMember: disable
Controller-Safe: disabled
Low-Battery-Safe: enabled
Version: V5.2.4 (R255063) (H1)
Delay-Data-Move: disable
ChassisType: 4835
Accelerated RAID Capable: no
Pool: SATA
Raid-policy: raid6
Service Tag: xxxxxxx
Product Family: PS6510
_______________________________________________________________________________
____________________________ Health Status Details ____________________________
Critical conditions::
None
Warning conditions::
None
_______________________________________________________________________________
____________________________ Operations InProgress ____________________________
ID StartTime Progress Operation Details
-- -------------------- -------- ----------------------------------------------
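The status fields in the `show` output above could be scraped if you wanted to poll progress from a script. A minimal sketch (a hypothetical helper, not an EqualLogic tool; it assumes the `RaidStatus:` and `RaidPercentage:` field names appear exactly as printed above):

```python
import re

# Sample of the "mem select MEMBER2 show" output captured above.
SHOW_OUTPUT = """\
RaidStatus: expanding
RaidPercentage: 0%
LostBlocks: false
"""

def raid_progress(text: str):
    """Extract the RAID status string and percentage from member show output."""
    status = re.search(r"RaidStatus:\s*(\S+)", text).group(1)
    pct = int(re.search(r"RaidPercentage:\s*(\d+)%", text).group(1))
    return status, pct

status, pct = raid_progress(SHOW_OUTPUT)   # -> ("expanding", 0)
```

Run periodically (e.g. over an SSH session that captures the CLI output), this would at least log whether the percentage ever moves.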
Best option is to wait for 24 hours and see if it springs to life?
Jeremy
Jamsnz
10 Posts
0
August 26th, 2012 14:00
This must be the slowest RAID expansion known to mankind: after nearly 24 hours the smallest of the arrays is only at 36%, and the SAN has been idle because it was a Sunday:
support exec raidtool
You are running a support command, which is normally restricted to PS Series Technical Support personnel. Do not use a support command without instruction from Technical Support.
Driver Status: Ok
RAID LUN 0 Ok.
14 Drives (0,14,4,6,8,1,3,5,7,9,13,11,12,2)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion yet to resume
14 Drive RAID 50 --> 14 Drive RAID 6
RAID LUN 1 Ok.
14 Drives (15,16,17,18,19,20,21,47,23,24,25,26,27,28)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion yet to resume
14 Drive RAID 50 --> 14 Drive RAID 6
RAID LUN 2 Ok.
14 Drives (29,30,31,32,33,34,35,36,37,38,39,40,41,42)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion yet to resume
14 Drive RAID 50 --> 14 Drive RAID 6
RAID LUN 3 Ok.
05 Drives (43,44,45,46,10)
RAID 6 (64KB sectPerSU)
Capacity 5,872,337,879,040 bytes
Expansion Underway (%36.24 complete)
4 Drive RAID 5 --> 5 Drive RAID 6
Available Drives List: 22
Jamsnz
10 Posts
0
August 27th, 2012 23:00
After 48 hours the RAID migration progress so far is a dismal 66%, and that's for one small array, the tiny 4-drive one, not the three main 14-drive arrays in the Member.
RAID LUN 3 Ok.
05 Drives (43,44,45,46,10)
RAID 6 (64KB sectPerSU)
Capacity 5,872,337,879,040 bytes
Expansion Underway (%66.34 complete)
4 Drive RAID 5 --> 5 Drive RAID 6
This is slower than glacial, if that's possible. So after 48 hours we still have no idea how long a RAID 50 to RAID 6 migration is going to take, as the first array isn't even complete.
I'm only reporting this because others will be wanting to do a RAID50 -> RAID6 migration to meet Dell's new August 2012 Best Practice guidelines.
Short answer is a Member might need a month to perform a migration?
Jamsnz
10 Posts
0
August 29th, 2012 16:00
Migration Day 4 - RAID 50 to RAID6
Migration of the first small 4-drive RAID5 array -> RAID6 completed last night, after 80 hours.
I spoke too soon about the Member "migration" from RAID 50 to RAID 6 taking a month.
The first "real" 14-drive array is now migrating; after 12 hours, 7% has been migrated.
By my maths that means converting each of the three 14-drive arrays (2TB drives) will take 7 days!!
OMG.
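That back-of-the-envelope maths can be sketched as follows (a hypothetical helper; it assumes the expansion proceeds at a roughly constant rate, which the later posts suggest it does):

```python
# ETA for a RAID-set conversion, extrapolating linearly from observed progress.

def eta_hours(fraction_done: float, elapsed_hours: float) -> float:
    """Hours remaining if the expansion continues at the observed rate."""
    return elapsed_hours * (1.0 - fraction_done) / fraction_done

# 7% migrated after 12 hours:
remaining = eta_hours(0.07, 12.0)          # ~159 hours still to go
total_days = (12.0 + remaining) / 24.0     # ~7.1 days per 14-drive RAID set
```

Three such RAID sets plus the 80-hour 4-drive conversion gives the ~25-day total estimated later in the thread.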
RAID LUN 0 Ok.
14 Drives (0,14,4,6,8,1,3,5,7,9,13,11,12,2)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion Underway (%7.60 complete)
14 Drive RAID 50 --> 14 Drive RAID 6
RAID LUN 1 Ok.
14 Drives (15,16,17,18,19,20,21,47,23,24,25,26,27,28)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion yet to resume
14 Drive RAID 50 --> 14 Drive RAID 6
RAID LUN 2 Ok.
14 Drives (29,30,31,32,33,34,35,36,37,38,39,40,41,42)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion yet to resume
14 Drive RAID 50 --> 14 Drive RAID 6
RAID LUN 3 Ok.
05 Drives (43,44,45,46,10)
RAID 6 (64KB sectPerSU)
Capacity 5,872,337,879,040 bytes
Available Drives List: 22
It would be quicker to drain the member, delete and re-create it as a RAID6 member, and migrate the volumes back, if I had enough space to store 70TB of course.
The GUI now reports Progress at 7%, which is odd, as I'm 7% of the way through the conversion of a single 14-drive RAID set with two more to go... but at least it's now updating, as it's just moved to 8%.
I'm hoping that now we have migrated what I assume are the "system" drives (the small RAID5 array), the migration might speed up.
Might put a bottle of Gwertz in the fridge to celebrate when this migration is complete.
Jeremy
Jamsnz
10 Posts
0
August 29th, 2012 20:00
This is still a glacial expansion...
I'd rather be able to choose how fast the migration runs: slow, medium or fast. I'd normally choose medium and accept some performance impact, as the migration is a one-off change. If the member is empty of volumes you might choose fast, so it's still a valid option.
This is very similar to a discussion I've had about drive replacement after failure taking up to 30 hours to complete. Then, lo and behold, Dell sped up their drive rebuilds in the latest firmware.
During a drive failure/replacement there is a huge risk window (of a second disk failure or a URE) if a RAID rebuild takes 30 hours. I'd rather it was done quickly, and now that is what Dell does.
The same goes for a "migration": the longer it takes, the bigger the risk of something else going wrong, for instance a drive failure/replacement during the migration. If the migration were quicker, we wouldn't even be contemplating doing a vacate to change the RAID policy.
Jamsnz
10 Posts
0
August 29th, 2012 21:00
It always made sense to mirror a failing drive to the RAID5(6) (system?) drives if there was enough space allocated for that process, as the drives could also work in parallel. Unfortunately you don't always get warned that a drive is going to fail, so an array rebuild may still be necessary.
Ideally RAID6 would still have two hot spare drives instead of one. However, a drive failure should no longer be a near-death event, and using the small RAID5(6) array would still let those drives act as a temporary hot spare, if space is available, until a replacement hot spare drive arrives.
One can only live and hope.
Jeremy
Jamsnz
10 Posts
0
August 30th, 2012 17:00
Day 5.
Migration of the first 14 drive RAID set is now at 20% after 1.5 days, so the estimate of 7 days per RAID set to migrate from RAID 50 to RAID 6 with 2TB drives appears to be accurate.
Total of 3 RAID sets (with 2TB SATA drives) = 21 days plus 3.5 days (80 hours) for the small 4 drive RAID 5->6 conversion.
The GUI shows the conversion at 20%, when in fact only one of the three main RAID sets is at 20%.
RAID LUN 0 Ok.
14 Drives (0,14,4,6,8,1,3,5,7,9,13,11,12,2)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion Underway (%20.69 complete)
14 Drive RAID 50 --> 14 Drive RAID 6
RAID LUN 1 Ok.
14 Drives (15,16,17,18,19,20,21,47,23,24,25,26,27,28)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion yet to resume
14 Drive RAID 50 --> 14 Drive RAID 6
RAID LUN 2 Ok.
14 Drives (29,30,31,32,33,34,35,36,37,38,39,40,41,42)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion yet to resume
14 Drive RAID 50 --> 14 Drive RAID 6
RAID LUN 3 Ok.
05 Drives (43,44,45,46,10)
RAID 6 (64KB sectPerSU)
Capacity 5,872,337,879,040 bytes
Available Drives List: 22
I'll give the forum a rest until something new comes to light, as it may be some time before this migration completes if the RAID6 member migration really does take 25 days.
Some estimate of how long a migration is likely to take would be a big help; if people knew a RAID6 migration might take close to a month, they might reconsider doing the migration!
With 1TB SATA drives the RAID 6 migration should take a more palatable 7-10 days.
With 3TB SATA drives presumably 50% can be added to my estimates.
Jeremy
Christian Hanse
1 Rookie
•
62 Posts
0
September 1st, 2012 02:00
It does sound like a long time, and I agree that it should be possible to select the priority the conversion runs at.
What I am a bit more curious about is the fact that not all of your customers seem to have received the information about the new RAID recommendations.
At least I have not received that information :-P
Is it sent out to non-US customers, or is it handled worldwide? My e-mail with all our serial numbers for arrays is registered on equallogic.com, so it seems like a mistake that I do not get that rather vital information.
Jamsnz
10 Posts
0
September 4th, 2012 18:00
Day 11 - First 2TB 14 drive SATA Array completed migration to RAID6, two more 14 drive arrays to go:
RAID LUN 0 Ok.
14 Drives (0,14,4,6,8,1,3,5,7,9,13,11,12,2)
RAID 6 (64KB sectPerSU)
Capacity 23,489,351,516,160 bytes
Expansion Underway (%100.00 complete)
14 Drive RAID 50 --> 14 Drive RAID 6
The GUI has gone back to showing 0% complete again, so the command line is the only way to monitor real progress.
RAID6 Migration is still expected to take 3.5+7+7+7=25 days for the four 2TB drive arrays.
Jeremy
Jamsnz
10 Posts
0
September 11th, 2012 22:00
Day 18 - Second 2TB 14 Drive Array 100% migrated to RAID6, starting on third and last Array migration.
Expected completion date is: Day 25
Jamsnz
10 Posts
0
September 18th, 2012 17:00
Migration completed, Day 24.
19/09/12 4:00:59AM (MEMBER1) Expansion of RAID LUN 2 completed in 2051276 seconds at 22365 sect/sec.
Elapsed time: 23 days, 17 hours, 50 minutes or thereabouts.
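The log line's numbers hang together, assuming 512-byte sectors. A quick sanity check of the rate and the elapsed-time conversion:

```python
# Sanity-check the completion log line, assuming 512-byte sectors.

SECTOR = 512
capacity_bytes = 23_489_351_516_160        # RAID LUN 2 capacity from raidtool
rate_sect_per_sec = 22_365                 # rate from the completion event

sectors = capacity_bytes // SECTOR                   # 45,877,639,680 sectors
predicted_seconds = sectors / rate_sect_per_sec      # ~2,051,314 s, vs. the logged 2,051,276 s

elapsed = 2_051_276
days, rem = divmod(elapsed, 86_400)
hours, rem = divmod(rem, 3_600)
minutes = rem // 60
# -> 23 days, 17 hours, 47 minutes
```

So one 14-drive, 2TB-disk RAID set really did take almost 24 days of wall-clock time per the array's own figures, consistent with the "7 days per set plus the small array" running total.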
Thanks to Dell and Equallogic, we got there in the end.