November 12th, 2015 00:00
What is the relationship between the SDS Memory and SDS capacity?
Hi experts,
I have some questions:
1. When the SDS service starts, what is its service name on the system?
2. As the number of disks increases (in other words, as SDS capacity expands), does the service use more host memory? Is there a recommended memory size?
3. When I create a volume and set its size to 200GB, the GUI shows 400GB as protected. So when I save a file to this volume, how many copies are stored in the system?
4. Does ScaleIO have a feature like deduplication?
tomer__engineer
November 12th, 2015 03:00
1. If you run on Linux -> ps -ef | grep -i sds
2. During deployment via the plugin, the plugin calculates the required SVM memory based on the ScaleIO components installed on it and the SDS capacity (if an SDS is installed on that SVM). An SDS capacity increase requires additional memory for the SVM hosting it. In the deployment wizard, at the step where you select the SDS devices, you can also declare future capacity for the SDSes; this is taken into account in the calculation I mentioned, so the SVM will support the future capacity as well, without the need to manually shut down the VM and increase its memory.
3. You created a thick 200GB volume, which means it requires 400GB in order to allow all of your 200GB to have a second copy (RAID-1). ScaleIO is block-level storage, and every block has two copies.
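The 200GB-volume-shows-400GB behavior above is just the mirroring factor at work. A minimal sketch of that arithmetic (the function name and constant are illustrative, not ScaleIO API names):

```python
# ScaleIO keeps two copies of every block (RAID-1-style mirroring),
# so a thick volume consumes (volume size x 2) of raw capacity.
COPIES = 2

def raw_footprint_gb(volume_gb, copies=COPIES):
    """Raw capacity consumed by a thick volume of the given size."""
    return volume_gb * copies

print(raw_footprint_gb(200))  # the 200GB thick volume from the question -> 400
```

This matches what the GUI reports: 400GB "protected" for a 200GB thick volume.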
4. Not in version 1.32.X
littleboy1
November 12th, 2015 18:00
Thank you for your answer.
2. What if I install on a physical server rather than a VM? How do I calculate the memory?
3. I don't quite understand. Every block has two copies, but why build RAID-1? If the pool is on SSDs, that costs a lot of resources. Suppose I create a thick 200GB volume, map it to the SDC, and put a 200GB file into the volume. How much space is actually consumed in the system? Is it 600GB, or 800GB?
4. Will it be released in the next version?
daverush
November 13th, 2015 09:00
Littleboy,
2. On a physical server, we recommend at least 500MB of RAM, preferably 2GB if the SDS manages a large capacity.
3. Let's calculate this the other direction instead.
Say we have one 1TB disk on each of 4 SDSes in a cluster; this gives 4TB total capacity for ScaleIO to use. You need spare capacity for rebuilds in the event an SDS node goes down, so subtract (from total capacity) how much one SDS node failure (a whole SDS node, not just one disk) would remove from the cluster. In a 4-node cluster, this is 25%; in a 10-node cluster, 10%.
Our example has 4 nodes, so 4TB - 25% (1TB) leaves us 3TB capacity to be used for data.
It would be a bad policy to keep just one copy of data so we replicate it, like RAID1.
We now divide the capacity by the number of copies there are, which is 2.
3TB / 2 (number of copies) = 1.5 TB capacity usable as thick volumes.
In this example the spare capacity looks very costly, but the larger the cluster, the smaller the percentage needed to tolerate X SDS failures. A 12-node cluster with four 3TB disks per node has 144TB total; 12TB go to one node's worth of sparing, and 132/2 = 66TB is usable. Increase that to tolerate 2 nodes failing (total, not concurrently), and you get 144 - 24 (spare) = 120TB, with 120/2 = 60TB usable.
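The worked examples above follow one formula: usable = (total - spare) / copies. A small sketch of that calculation, assuming spare is reserved per whole SDS node and data is mirrored with 2 copies (the function name is illustrative):

```python
def usable_tb(nodes, disks_per_node, disk_tb, spare_nodes=1, copies=2):
    """Usable thick-volume capacity after sparing and mirroring."""
    total = nodes * disks_per_node * disk_tb        # raw cluster capacity
    spare = spare_nodes * disks_per_node * disk_tb  # reserved for rebuilds
    return (total - spare) / copies                 # two copies, so halve it

print(usable_tb(4, 1, 1))      # 4-node example above: 1.5 TB
print(usable_tb(12, 4, 3))     # 12-node example: 66.0 TB
print(usable_tb(12, 4, 3, 2))  # sparing for 2 node failures: 60.0 TB
```

As the node count grows with spare_nodes fixed, the spare fraction shrinks, which is the point of the 12-node comparison.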
littleboy1
November 15th, 2015 18:00
Thanks, Rush. Sorry, I still have some doubts.
2. What if one node has 48TB of capacity? Would 2GB of RAM be enough?
3. OK, so the spare capacity is reserved for rebuilds. Could you help me understand "one block has two copies"?
You said: "You created a Thick 200GB volume, which means it requires 400GB in order to allow all your 200GB to have a second copy (RAID-1)." Why is it the second copy? Where is the first copy? Can I say all 400GB are copies? Where is the original data?
daverush
November 17th, 2015 21:00
2. 2GB should be fine for that, but I will check to be certain.
3. Consider a 100MB file written to a volume as the first copy. It is broken into smaller chunks and spread across the SDSes, so no single disk or SDS failure loses the whole file. ScaleIO then makes another copy of it (with those chunks spread across different SDSes), for a total of two instances of each data chunk.
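The chunk-and-mirror idea above can be sketched conceptually. This is not the actual ScaleIO placement algorithm, just an illustration of the invariant it maintains: every chunk lives on two different SDSes, so one node failure never loses both copies. All names here are hypothetical:

```python
import random

CHUNK_MB = 1
SDSES = ["sds1", "sds2", "sds3", "sds4"]

def place_chunks(file_mb, sdses=SDSES, copies=2):
    """Assign each chunk of a file to `copies` distinct SDS nodes."""
    placement = {}
    for chunk in range(file_mb // CHUNK_MB):
        # random.sample picks distinct nodes, so the two copies of a
        # chunk never land on the same SDS
        primary, secondary = random.sample(sdses, copies)
        placement[chunk] = (primary, secondary)
    return placement

layout = place_chunks(100)  # a 100MB file -> 100 chunks, 2 copies each
assert all(a != b for a, b in layout.values())
```

Losing one SDS then only destroys one copy of some chunks; the survivors are re-mirrored onto the spare capacity during a rebuild.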
littleboy1
November 19th, 2015 17:00
Thanks so much, I get it now; that's helpful. About the host memory: we want to build a storage domain with ScaleIO, so we need to figure out the relationship between storage capacity and memory.