Unsolved
John_1234
4 Posts
November 23rd, 2020 08:00
Trying to understand volumes' RAID overhead
I am very new to SAN management and I don't completely understand how the storage profiles work, but I am currently trying to understand why some of my volumes have a very large RAID overhead while others don't:
Is it because those volumes are used more often than others? Is there a way to cap this at a maximum percentage?
Origin3k
4 Operator
2.3K Posts
November 23rd, 2020 15:00
The RAID overhead is the protection of your data against drive failures. There are several factors that affect how much capacity that protection needs. On an old array from the past you had to specify a RAID level during installation and could easily calculate the required space for it. But most likely you couldn't modify anything after the initial setup, and when requirements changed over time you had no way to choose something different. That's different in the world of Compellent.
For every tier you specify Single or Dual Redundant protection, which means you can survive one or up to two simultaneous drive failures. As an example, we often choose Single Redundancy for our SSDs in Tier 1 and Dual Redundancy for the 10K SAS drives in Tier 3.
If you have 10 or 11 drives of the same type within the same tier/disk folder, the SC writes to a stripe set of 9 or 10 drives, which gives 80% (RAID 6) or 89% (RAID 5) efficiency.
If you have fewer drives of the same type within the same tier/disk folder, the SC writes to a stripe set of 5 or 6 drives, which gives 67% (RAID 6) or 80% (RAID 5) efficiency.
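To make those percentages concrete, here is a tiny back-of-the-envelope sketch (plain Python, purely my own illustration, not from any SC tool): the usable fraction is simply data disks divided by total disks in the stripe.

def parity_efficiency(stripe_width, parity_disks):
    # Usable fraction of a parity stripe: data disks / total disks.
    # RAID 5 uses 1 parity disk per stripe, RAID 6 uses 2.
    return (stripe_width - parity_disks) / stripe_width

print(f"RAID 5, 9-wide stripe : {parity_efficiency(9, 1):.0%}")   # ~89%
print(f"RAID 6, 10-wide stripe: {parity_efficiency(10, 2):.0%}")  # 80%
print(f"RAID 5, 5-wide stripe : {parity_efficiency(5, 1):.0%}")   # 80%
print(f"RAID 6, 6-wide stripe : {parity_efficiency(6, 2):.0%}")   # ~67%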
How and where the data is stored can be specified and modified during the lifetime of a volume by assigning a Storage Profile to it. You have the options Performance, Balanced, Capacity and self-configured, and the SC will always try to write as fast as possible, which most of the time means RAID 10. RAID 10 only gives you 50% efficiency when Single Redundancy is selected. If you configure Dual Redundancy it becomes a RAID 10 Dual Mirror (DM), which writes the data three times to survive two disk failures.
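The mirror numbers work the same way, only with copy counts instead of parity disks (again just my own illustration):

def mirror_efficiency(copies):
    # Usable fraction of a mirrored write: RAID 10 keeps 2 copies of the data,
    # RAID 10 Dual Mirror keeps 3, so the usable fraction is 1 / copies.
    return 1.0 / copies

print(f"RAID 10 (single redundant) : {mirror_efficiency(2):.0%}")  # 50%
print(f"RAID 10-DM (dual redundant): {mirror_efficiency(3):.0%}")  # ~33%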
Normally, writes and reads of new data go to Tier 1, and from Tier 3 the system only serves reads.
From your screenshots it looks like you have a custom Storage Profile, the Lowest Tier (Tier 3) profile, or the option "Import Data to lowest tier" enabled. The naming of the storage profiles differs between Compellent models, because the SCv20x0 only has RAID-level tiering but no Data Progression.
Conclusion: a single SC volume can live on different types of media, every type of media can have a different RAID level (protection), and the tiers can use different write strategies to deliver the best available performance.
Regards,
Joerg
John_1234
4 Posts
November 25th, 2020 05:00
Thank you for these explanations. I think I get the picture now.
Origin3k
4 Operator
2.3K Posts
November 25th, 2020 20:00
Some thoughts
Questions
Regards,
Joerg
John_1234
4 Posts
November 27th, 2020 10:00
1. SAN model: CT-SC4020
2. OS version: 6.7.11.4
3.
4. Those storage profiles were created (or are defaults?) around mid-2016 by our consultant and a Dell technician. When creating volumes we mostly never touched the default configuration, as none of my coworkers had any experience with SAN management or volume creation. We usually create one volume per "physical" disk a VM requires (one for the OS, and one for apps).
If needed:
Since we are slowly running out of space, we are currently removing replays manually to save space on our SAN. It would probably be a better idea to create a new replay profile and limit the number of replays to 1, or to swap them all to our 2h-expiration profile to reclaim space, though...
Can I easily swap the storage profile on all my volumes without overloading the system with tasks?
John_1234
4 Posts
November 27th, 2020 12:00
We are on vSphere 6.0.0 and ESX 6.0.0.
Our environment suffered from carelessness on the part of my colleagues, who had little to no knowledge of virtualization and SAN management. They mainly maintained the infrastructure, but no updates have been done in a few years now. I have been employed for 2 years and I just finished my VMware training. We plan to upgrade our ESX hosts and vSphere to 6.7 soon, but for now I want to really understand how our SAN works (storage profile and replay wise).
Regarding our SCOS update, we still have support for all of our Dell products (ESX servers and SAN).
Origin3k
4 Operator
2.3K Posts
November 27th, 2020 12:00
2. If your SC4020 is under support, please consider an upgrade to 7.4 (check whether your hosts support a recent SCOS version). With 7.3 they increased performance by up to 40%. Massive improvements over 6.x in all areas (Data Progression on demand, Distributed Sparing, ...).
3. Only the Storage Profile which you named "Tier3" is a self-defined one (check the last column).
4. Hmm... a very short time frame for your snaps. The first screenshots you posted showing your volumes show 0 snaps.
You can assign a new Storage Profile on the fly and it affects new data immediately. Old data will be moved over time, but I am not sure whether data is moved when you are low on capacity.
Question: what kind of hosts? Is it a vSphere or Hyper-V environment, or something else?