Min/Max disks required per tier for SC8000/SC4020
piedthepiper
June 8th, 2016 10:00
Hi guys,
I currently have multiple SC8000 controllers, and I'm just trying to find some solid info.
I want to know the minimum number of disks required per tier and the maximum allowed.
I can't seem to find this info anywhere for the SC8000 or the SC4020.
Any help, PDFs, or anything else would be greatly appreciated.
Cheers,
BVienneau
June 8th, 2016 11:00
That technically should work in a custom profile config... not sure if the sales process would allow a system to be sold that way, though; you'd have to check.
piedthepiper
June 8th, 2016 11:00
Ah OK, I see.
So if Tier 1 was all flash and using RAID 10, could I have a minimum of 5 drives, with 4 active and 1 hot spare?
Cheers
BVienneau
June 8th, 2016 11:00
Minimums are typically there to meet RAID type requirements, but I believe a system cannot be sold with fewer than 6 drives total. There's no maximum per tier, but there is a maximum the SAS back end can support. This varies in an SC8000 system based on the number of SAS cards, etc., but in an SC4020 that number is 192 total drives (24 internal, 168 external).
Meeting RAID Requirements:
R5-5 (5 drives active, 1 spare)
R5-9 (9 drives active, 1 spare)
R6-6 / R6-10... you can see the pattern (quick sketch below)...
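To put those minimums side by side, here is a trivial sketch (just the stripe widths listed above plus one hot spare each; the dictionary below is purely illustrative):

```python
# Minimum drive counts implied by each RAID type: active stripe width + 1 hot spare.
RAID_MINIMUMS = {
    "RAID 5-5":  {"active": 5, "spare": 1},
    "RAID 5-9":  {"active": 9, "spare": 1},
    "RAID 6-6":  {"active": 6, "spare": 1},
    "RAID 6-10": {"active": 10, "spare": 1},
}

for level, req in RAID_MINIMUMS.items():
    total = req["active"] + req["spare"]
    print(f"{level}: {req['active']} active + {req['spare']} spare = {total} drives minimum")
```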
SC8000: https://www.dell.com/learn/us/en/04/shared-content~data-sheets~en/documents~ss812_compellent_storage_center_121614.pdf
SC4020: https://www.dell.com/learn/us/en/04/business~large-business/documents~dellstorage_sc4020_spec_sheet_030714.pdf
piedthepiper
June 9th, 2016 10:00
I am curious: why would it need a custom profile config?
I was thinking something like:
Tier 1: 800 GB SLC WI drives x5, so 4 usable with 1 hot spare
The rest of the enclosure would be:
Tier 2: 3.8 TB TLC MRI drives x19, so 18 usable with 1 hot spare
and then Tier 3: 4 TB 7K drives, 12 disks with 1 hot spare
BVienneau
June 9th, 2016 14:00
In this config:
No, you wouldn't need a custom profile.
Tier 2 will require dual redundancy for drives of that size, so calculate your usable capacity based on that.
Many configs that I'd traditionally size this way I am now just doing as all-TLC, with no mix of SLC and TLC; that is, of course, when the amount of writes/overwrites isn't super crazy. If it's an "average" system, you may want to consider just going TLC, unless the writes justify the need for SLC.
piedthepiper
June 10th, 2016 08:00
Hi,
Thank you for your input.
I had sized it based on:
Tier 1: RAID 10
Tier 2: RAID 5-9
Tier 3: RAID 6-10
as I thought that is how Compellent worked: Tier 1 for writes, Tier 2 for intensive reads, and Tier 3 for archiving.
Would Tier 2 be dual redundant?
BVienneau
June 10th, 2016 08:00
If you go with the 3.8 TB SSD drives, yes.
If you went with the next size down, 1.9 TB SSD then no.
piedthepiper
June 10th, 2016 08:00
Ah, I had no idea.
Is that to do with the sheer size of the drive requiring RAID 6-10?
So originally I wrote "TLC MRI 3.8TB drives x19, so 18 usable with 1 hot spare".
Now it should be: TLC MRI 3.8 TB drives x19, so 18 usable with 1 hot spare, but with RAID 6-10 only around 80% of that capacity is actually usable, since 2 drives' worth of space in every 10-drive stripe goes to parity.
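A rough capacity check under that assumption (18 active 3.8 TB drives at the ~80% RAID 6-10 efficiency; a sketch, not a Compellent calculation):

```python
# Rough usable-capacity estimate for the proposed Tier 2 (figures assumed from the thread).
active_drives = 18          # 19 drives minus 1 hot spare
drive_size_tb = 3.8         # TLC MRI drive size
raid610_efficiency = 0.80   # RAID 6-10: 8 data + 2 parity per 10-drive stripe

raw_tb = active_drives * drive_size_tb
usable_tb = raw_tb * raid610_efficiency
print(f"Raw: {raw_tb:.1f} TB, usable at RAID 6-10: {usable_tb:.1f} TB")
# Raw: 68.4 TB, usable: ~54.7 TB
```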
piedthepiper
June 12th, 2016 07:00
I have a follow-up question. Let's say:
Disk     IOPs per drive  Size    Quantity  Hot Spare  Usable  RAW      Usable RAID 10  Total IOPs
MLC PRI  4703            1.9 TB  7         1          6       11.4 TB  5.7 TB          28k
Disk     IOPs per drive  Size    Quantity  Hot Spare  Usable  RAW     Usable RAID 5-9  Total IOPs
TLC MRI  4078            1.9 TB  61        1          60      114 TB  101.46 TB        244k
So with this setup I would have a Tier 1 and a Tier 3, and no Tier 2, as there are only 2 types of disk. Tier 3 would span 3 enclosures, so am I correct in assuming that I would need 3 hot spares as a result, i.e. 1 per enclosure for Tier 3?
Also, since Tier 3 starts where Tier 1 ends, if I wanted to expand Tier 1 I would have to add disks to the 3rd enclosure, which has free space. If I did this later on, would I need to factor in an additional hot spare for that enclosure? If that makes sense?
If the above is correct, and Tier 1 takes up 6 drive bays, would it be better/possible to leave a few of the adjacent drive bays free for expansion, to avoid the problem of buying an additional hot spare?
Thank you for your time and input!
BVienneau
June 13th, 2016 13:00
If your config is Tier 1 MLC and Tier 3 TLC, I would probably just forgo the MLCs and do an all-TLC config. If this is a new system, that'll help with software licensing, etc. It'll also simplify things in general.
IOPS: based on your chart, your Tier 1 would be 32,921 IOPS and your Tier 3 248,758 IOPS... why not just have a single tier of all TLC, which would be 277,304 IOPS (68 drives)? You are risking hitting the maximum IOPS in MLC, while the "capacity" tier would actually have greater IOPS. You are better off doing all TLC in this configuration. If you were doing SLC and TLC, that might be a different story if you have a super heavy write/overwrite application that merits it, but if it's a "typical" workload that's 20/80 or 30/70, then a single tier would suffice. Simplifying the disk config would likely net you a better-performing system.
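As a rough check of those figures (using the per-drive IOPS numbers from the chart, counting all drives including the spares, as above):

```python
# Quick comparison of raw IOPS for the mixed config vs. an all-TLC config
# (per-drive figures taken from the chart above; spares included in the drive counts).
mlc_iops_per_drive = 4703
tlc_iops_per_drive = 4078

tier1_mlc = 7 * mlc_iops_per_drive     # 32,921
tier3_tlc = 61 * tlc_iops_per_drive    # 248,758
all_tlc   = 68 * tlc_iops_per_drive    # 277,304

print(tier1_mlc, tier3_tlc, all_tlc)
```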
But to answer your questions:
1 per enclosure is the recommendation, OR one for every 24 drives. In that particular case, I think you'd be good with 1 MLC and 3 TLC spares.
I would only add more MLC disks to meet that 1-per-enclosure recommendation, as spares are global, not actually per shelf.
Also... SSDs will rebuild far faster than HDDs.
piedthepiper
June 13th, 2016 16:00
I was considering an all-TLC array; I just like the idea of tiering haha
So let's say I went with an all-TLC array with 68 drives across 3 enclosures.
With 3 hot spares, that leaves 65 active drives.
My DPACK report for the current environment shows:
Disk Throughput: 2107 MB/s
IOPS: 93,277 @ peak, 76,613 @ 95th percentile
Read/Write Ratio: 76/24
Total Usable Capacity Required: 100.51 TB
1.9 x 65 = 123.5 TB RAW
65 x 4078 = 265k RAW IOPs
Now, with a 24% write split, how would I calculate this? Basically 24% of the workload is writes, so that has to be RAID 10.
24% of 100 TB is 24 TB usable; would I need to size the array so it could handle 24 TB usable a day in RAID 10, meaning 48 TB RAW? Or am I overthinking it?
I am guessing it would revolve around my Replay profile, so if a Replay ran across every volume every 6 hours, in that configuration I would need 6 TB usable for writes, with space to grow.
So 6 TB usable in RAID 10 is 12 TB RAW.
123.5 - 12 = 111.5 TB RAW left
111.5 x 0.89 = 99.235 TB usable in RAID 5-9
6 + 99.235 = 105.235 TB usable, which exceeds the current requirement of 100.51 TB reported by DPACK?
If a Replay was only done every 24 hours (which is not recommended), then I would need 24 TB usable to be available in RAID 10.
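Here is that carve-out written out (a sketch assuming the 6-hour Replay example and the 0.89 RAID 5-9 efficiency factor used above):

```python
# Rough RAID 10 / RAID 5-9 capacity split for the 6-hour Replay example
# (assumed inputs: 65 active 1.9 TB drives, ~6 TB of new writes between Replays).
raw_tb = 65 * 1.9                                  # 123.5 TB raw
writes_between_replays_tb = 6.0                    # estimated new writes per 6-hour window

raid10_raw_tb = writes_between_replays_tb * 2      # RAID 10 mirroring doubles the raw space
raid59_raw_tb = raw_tb - raid10_raw_tb             # what's left for RAID 5-9
raid59_usable_tb = raid59_raw_tb * 0.89            # RAID 5-9 is ~89% efficient

total_usable_tb = writes_between_replays_tb + raid59_usable_tb
print(f"Usable: {total_usable_tb:.3f} TB")         # ~105.235 TB vs. the 100.51 TB DPACK requirement
```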
Now, the sales guy just quoted the RAW IOPs with no RAID 10 write penalty of 2. I understand there would be no RAID penalty for 5-9, as it's all reads.
RAID 10:
Functional IOPS = (Raw IOPS * Write % / RAID Penalty) + (Raw IOPS * Read %)
Functional IOPS = (265,000 * 0.24 / 2) + (265,000 * 0.76)
31,800 + 201,400 = 233,200 IOPs
So the array could provide roughly 233k functional IOPs with all writes going to RAID 10.
Am I correct in assuming that, because RAID 5-9 won't be used for writes in general (apart from Replays moving down), ~233k would be the total functional IOPs for the array?
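And a small helper to make that formula explicit (a sketch; the function name is just illustrative):

```python
def functional_iops(raw_iops: float, write_fraction: float, write_penalty: float) -> float:
    """Front-end IOPS estimate once the RAID write penalty is applied to the write portion."""
    read_fraction = 1.0 - write_fraction
    return raw_iops * write_fraction / write_penalty + raw_iops * read_fraction

# 65 active TLC drives at ~4078 IOPS each (~265k raw), 76/24 read/write, RAID 10 penalty of 2
print(functional_iops(265_000, 0.24, 2))   # ~233,200
```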
If I needed to expand into a 4th enclosure, I could add disks on the fly, and once they were added to the pool the array would just restripe on the fly. Since it's either 1 hot spare per enclosure or one per 24 disks, would I not need a hot spare in there until the enclosure was full? Or could I just have a hot spare in there from the get-go?
If RAID 10 kept filling up, I could use a more aggressive Replay profile (e.g. every 3 hours) and/or add more disks?
One final bit: since the TLC is all one big pool, and Compellent stripes different RAID levels across all the disks, I do not need to worry about things like RAID 10 needing an even number of disks? If I had a separate Tier 1 of SLC, for example, I would have to have an even number of usable disks (4 disks plus 1 hot spare) for correct RAID 10 usage.
Wow that was a lot to write! Thanks for taking the time to read it and respond!
BVienneau
June 15th, 2016 09:00
Did you get the "Average Daily Write" number from the DPACK? That is helpful to figure out how much data is being written. You can divide that number down based on your expected Replay profile, so if you are taking a Replay every 6 hours, then divide by 4.
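For example (a trivial sketch; the 8 TB/day value below is only a placeholder for whatever DPACK reports):

```python
# Data landing in RAID 10 between Replays = average daily writes / Replays per day.
average_daily_write_tb = 8.0                  # placeholder; use the DPACK "Average Daily Write" figure
replay_interval_hours = 6
replays_per_day = 24 / replay_interval_hours  # 4 Replays a day
writes_per_replay_tb = average_daily_write_tb / replays_per_day
print(writes_per_replay_tb)                   # 2.0 TB per 6-hour window
```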
That being said... the Compellent will fluctuate how much is allocated to R10 or R5-9 based on actual write/read activity in the system, but I get that you want to estimate how much it WILL end up carving out and size properly.
You should definitely consider the 2x IO "cost" for doing a write. Your math looks right to me.
RAID 5-9 will see some back-end writes when data converts from R10 to R5-9, but it should be fairly minimal and happens on the Replay schedule and/or daily Data Progression.
You can add hot spares as needed to meet the ratio; there's no hard requirement there. When you add drives to the disk pool, you'll kick off a RAID Rebalance and it'll redistribute all the data across all the drives in the tier. The new capacity is also immediately available; it doesn't need to wait until the Rebalance is finished.
RAID 10 full? In a single-tier system it will expand and contract the pools automatically, so you will likely never get "full" in a RAID "pool", but yes, you could take Replays and/or add more drives.
In a single pool it's not as important (especially with the number of drives you are looking at); it's going to write data so that the R10 requirement is met (data bits on 2 separate disks) and will do well with that. On a small pool of drives you just need to meet the requirements like you mentioned. Your minimum in a single larger pool is now just the R5 requirement (9 active drives, 1 spare). If 9 drives met the requirement, I might do 10 active just to have an even number of drives for R10, but it wouldn't be required. Like I said, with 60+ SSDs it'll do well in general.
piedthepiper
June 16th, 2016 11:00
Where can I find the Average Daily Write in DPACK? All I got was the info shown above, i.e. the 76/24 read/write split.
Yep, I understand that it does everything dynamically at the back end, so it fluctuates in real time; that just makes sizing all the more difficult haha
RAID 5 would only get written to when data was being moved down to it.
How is the total disk space available right away when the restriping hasn't completed?
When using larger disks, I assume RAID 10-DM would have a penalty of 3, because each write is committed 3 times for redundancy on bigger disks? How much more impact is that compared to normal RAID 10? I haven't been able to find much info on RAID 10-DM.
piedthepiper
June 16th, 2016 12:00
OK, let's go with an all-TLC array again.
Since I don't have an average write count, let's go with 8 TB, which I have gathered from looking at the customer's daily backups and seeing the daily change rate.
Disk Throughput: 2107 MB/s
IOPS: 93,277 @ peak, 76,613 @ 95th percentile
Read/Write Ratio: 76/24
Average Daily Writes: 8 TB
Total Usable Capacity Required: 100.51 TB
Disk     IOPs per drive  Size    Quantity  Hot Spare  Usable  RAW       Total RAW IOPs
TLC MRI  4078            3.8 TB  48        2          46      174.8 TB  187k
For 2 TB+ drives, Compellent forces RAID 10-DM for writes and RAID 6-10 for reads.
Daily Writes  Replay         Data per Replay  RAID Level      RAW Space Needed  RAID Penalty
8 TB          Every 4 Hours  1.34 TB          10-Dual Mirror  4.02 TB           3

RAID  Efficiency
5-9   89%
6-10  80%
Replay Profile  RAW RAID 10-DM  Usable RAID 10-DM  RAW RAID 6-10  Usable RAID 6-10  Usable Total  Functional IOPs
4 Hours         4.02 TB         1.34 TB            170.78 TB      136.6 TB          137.94 TB     157k
Functional IOPs:
Functional IOPS = (Raw IOPS * Write % / RAID Penalty) + (Raw IOPS * Read %)
Functional IOPS = (187,000 * 0.24 / 3) + (187,000 * 0.76) = 14,960 + 142,120 ≈ 157k
Disk     Drives Needed  Hot Spares  Enclosures
TLC SSD  48             2           2
How does that look? A RAID 10-DM penalty of 3 and a Replay profile every 4 hours, based on 8 TB of daily writes.
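For what it's worth, here is the arithmetic behind those tables as a quick sanity check (a sketch using the assumed 8 TB/day figure and 4-hour Replay profile; small differences from the table come from rounding):

```python
# Sanity check of the all-TLC 3.8 TB sizing above (all inputs taken from the tables).
active_drives = 46                     # 48 drives minus 2 hot spares
drive_size_tb = 3.8
iops_per_drive = 4078

raw_tb = active_drives * drive_size_tb            # 174.8 TB raw
raw_iops = active_drives * iops_per_drive         # 187,588, i.e. the "187k" above

daily_writes_tb = 8.0
replays_per_day = 24 / 4                          # a Replay every 4 hours
data_per_replay_tb = daily_writes_tb / replays_per_day    # ~1.33 TB (1.34 in the table)
raid10dm_raw_tb = data_per_replay_tb * 3          # RAID 10-DM keeps 3 copies of each write

raid610_raw_tb = raw_tb - raid10dm_raw_tb         # ~170.8 TB left for RAID 6-10
raid610_usable_tb = raid610_raw_tb * 0.80         # RAID 6-10 is ~80% efficient
usable_total_tb = data_per_replay_tb + raid610_usable_tb  # ~138 TB usable overall

# Functional IOPS with the RAID 10-DM write penalty of 3 applied to the 24% write share
functional_iops = raw_iops * 0.24 / 3 + raw_iops * 0.76   # ~157.6k (the table rounds to 157k)

print(round(usable_total_tb, 1), round(functional_iops))
```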
Does Dell have any information on its TLC performance/lifecycle/wear etc?
Once again thank you for taking the time to answer my questions
BVienneau
June 16th, 2016 13:00
I don't think the PDF report from DPACK contains the Average Daily Writes, but if you log into the web portal it should be listed with all of the other summary stats.
Your math seems about right, and yes, if you go with the 3.8 TB drives then it's R10-DM and R6-10 for you.
The disk space "just is" available immediately. My guess is that it either writes the data to available space and then flags it to be "properly" striped with the rest of the data, OR it takes priority over the other data that is restriping and gets laid out properly the first time. I haven't dug into the why; my guess is that's part of the "secret sauce."
TLC Stats: http://www.theregister.co.uk/2015/07/20/dell_arrays_drop_costs_with_threedimensional_flash_chippery/