
August 28th, 2015 08:00

How does ScaleIO handle configs of disparate storage sizes?

So I know that ScaleIO is generally storage pooling software, so maybe my question is redundant or non-applicable. For my home lab it would be more cost-effective to get 2 more 12-bay HDD Supermicro chassis than to fork out more money for 2 more 24-bay chassis. The thing is, I already have one 24-bay chassis that I plan to load completely with spinning drives for the storage.

Let's say I fill the 24-bay chassis with 24 x 2TB drives for 48TB of storage, and fill the other 2 servers (12 bays each) with 2TB drives as well. The config across the servers would be

48 24 24

Is this a compatible configuration, and how does ScaleIO handle the storage across devices when it is comprised of disparate amounts? For example, if you were near capacity across all 3 servers and the 48TB server went down, surely ScaleIO can't "relocate" or bring up the IO/workload on the other 2 nodes when they don't have the capacity, can it? Would there be wasted capacity on the 48TB server?


August 29th, 2015 19:00

Hi Victor,

Can you please elaborate a bit more on what you said?

You say that the value of your largest node should be used to define the minimum space reserved for rebuilds. What exactly does this mean? Is there a minimum value that ScaleIO defines, based on the storage per node, that must be kept available for rebuilds? Any further light you can shed on the topic would be most appreciated.

Thank you.    


August 30th, 2015 18:00

Hi Victor,

thanks for the clarification. That is an interesting suggestion. Applied to the nodes originally mentioned in my OP (48/24/24 TB of space), it leads to an interesting situation. My question is: when you do as they suggest, how much of the space is usable? I'm assuming 48TB is usable, since ScaleIO is a mesh of mirrors. Is that a correct assumption?

Thanks

September 1st, 2015 03:00

There are 3 options (in recommended order):

1. Use the same number of devices in all 3 servers to get 24/24/24 (and configure the spare to 24TB). This is the best option in terms of performance and disk-parallelism utilization.

2. Configure the 48TB node to use only 50% of its devices, giving an even 24/24/24 setup (and configure the spare to 24TB).

3. Keep the setup as it is (48/24/24), but keep in mind that the actual free capacity is always the total capacity minus the spare (the spare should be sized to enable a rebuild of your largest fault unit, 48TB in this case), with the remainder divided by 2 for mirroring (RAID-1).

I would go with option 1 or 2 (1 is better), and not use option 3.
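To see why option 3 wastes capacity, here is a rough back-of-the-envelope sketch of the usable-capacity formula from option 3 (total minus spare, halved for mirroring), assuming the spare is sized to the largest node. The helper function is hypothetical, not part of any ScaleIO tool:

```python
# Rough usable-capacity estimate for a ScaleIO-style mirrored pool.
# usable = (total_raw - spare) / 2, where the spare is sized to
# cover a rebuild of the largest fault unit (the biggest node).

def usable_capacity_tb(node_sizes_tb):
    total = sum(node_sizes_tb)
    spare = max(node_sizes_tb)   # spare sized to the largest fault unit
    return (total - spare) / 2   # halve for RAID-1 mirroring

print(usable_capacity_tb([24, 24, 24]))  # even layout -> 24.0 TB usable
print(usable_capacity_tb([48, 24, 24]))  # uneven layout -> 24.0 TB usable
```

Note that the uneven 48/24/24 layout yields no more usable space than 24/24/24 once the spare must cover the 48TB node, which is exactly why options 1 and 2 are preferred.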
