
March 2nd, 2016 13:00

Can somebody explain how capacity works in ScaleIO?

[Attachment: Screen Shot 2016-03-02 at 3.09.41 PM.png – capacity view of the pool]

This is my capacity. I have 9 drives of ~2 TB each, and I understand that 16.4 TB is about what that comes out to. In a RAID 1 situation, I see I have roughly 7.8 TB of usable space. Can someone explain what the 3 TB of unused space is?

Also, I only have two 2 TB volumes created and mapped to SDCs, so why do I only have 1.5 TB free?

Thanks!

Storage Pool SP1 (Id: xxxxxxxxxx) has 2 volumes and 1.5 TB (1520 GB) available for volume allocation

  The number of parallel rebuild/rebalance jobs: 2

  Rebuild is enabled and using Limit-Concurrent-IO policy with the following parameters:

  Number of concurrent IOs per device: 1

  Rebalance is enabled and using Favor-Application-IO policy with the following parameters:

  Number of concurrent IOs per device: 1, Bandwidth limit per device: 10240 KB per second

  Background device scanner: Disabled

  Zero padding is enabled

  Spare policy: 34% out of total

  Uses RAM Read Cache

  RAM Read Cache write handling mode is 'cached'

  16.4 TB (16754 GB) total capacity

  3.0 TB (3057 GB) unused capacity

  0 Bytes snapshots capacity

  7.8 TB (8000 GB) in-use capacity

  0 Bytes thin capacity

  7.8 TB (8000 GB) protected capacity

  0 Bytes failed capacity

  0 Bytes degraded-failed capacity

  0 Bytes degraded-healthy capacity

  0 Bytes unreachable-unused capacity

  0 Bytes active rebalance capacity

  0 Bytes pending rebalance capacity

  0 Bytes active fwd-rebuild capacity

  0 Bytes pending fwd-rebuild capacity

  0 Bytes active bck-rebuild capacity

  0 Bytes pending bck-rebuild capacity

  0 Bytes rebalance capacity

  0 Bytes fwd-rebuild capacity

  0 Bytes bck-rebuild capacity

  0 Bytes active moving capacity

  0 Bytes pending moving capacity

  0 Bytes total moving capacity

  5.6 TB (5696 GB) spare capacity

  7.8 TB (8000 GB) at-rest capacity

  0 Bytes decreased capacity

  Primary-reads                            1 IOPS 1.6 KB (1638 Bytes) per-second

  Primary-writes                           40 IOPS 238.6 KB (244326 Bytes) per-second

  Secondary-reads                          0 IOPS 0 Bytes per-second

  Secondary-writes                         43 IOPS 232.0 KB (237568 Bytes) per-second

  Backward-rebuild-reads                   0 IOPS 0 Bytes per-second

  Backward-rebuild-writes                  0 IOPS 0 Bytes per-second

  Forward-rebuild-reads                    0 IOPS 0 Bytes per-second

  Forward-rebuild-writes                   0 IOPS 0 Bytes per-second

  Rebalance-reads                          0 IOPS 0 Bytes per-second

  Rebalance-writes                         0 IOPS 0 Bytes per-second

Volumes summary:

  2 thick-provisioned volumes. Total size: 3.9 TB (4000 GB)

March 3rd, 2016 00:00

How many Storage Pools (SPs) do you have?

How many SDSs do you have? 3, with 3 drives in each SDS?


As you can see in the output, the spare is configured to 34% (rounded up from 33.33334%).

Spare is calculated so that the pool can survive losing one fault unit, assuming they are all equal in size (basically, your largest fault unit).

A fault unit can be a single SDS or a Fault Set (FS) made up of several SDSs.

So in your case: 16.4 TB * 34/100 = 5.6 TB of spare capacity (blue section).

You created 2 * ~2 TB thick volumes = ~4 TB, but due to the RAID 1 mirroring that actually consumes ~8 TB (green section).

What is left is ~3 TB of unused capacity, but again this needs to be divided by 2 (RAID 1), so you have only ~1.5 TB available for volume allocation (grey section).
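To make that concrete, here is a quick back-of-the-envelope check in Python using the numbers from the scli output above (just a sketch of the accounting, not any official formula; it lands close to the 1520 GB the pool reports):

# Back-of-the-envelope check of the Storage Pool numbers above
total_gb   = 16754           # 16.4 TB raw capacity
spare_pct  = 34              # spare policy from the scli output
volumes_gb = 2 * 2000        # two thick 2 TB volumes

spare_gb     = total_gb * spare_pct / 100        # ~5696 GB -> blue section
in_use_gb    = volumes_gb * 2                    # RAID 1 copies -> 8000 GB, green section
unused_gb    = total_gb - spare_gb - in_use_gb   # ~3058 GB -> grey section
available_gb = unused_gb / 2                     # ~1529 GB usable for new volumes

print(f"spare:     {spare_gb:.0f} GB")
print(f"in-use:    {in_use_gb:.0f} GB")
print(f"unused:    {unused_gb:.0f} GB")
print(f"available: {available_gb:.0f} GB")       # pool reports 1520 GB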

This is why you get better disk-space utilization by building, say, a 30 TB pool out of 10 nodes/SDSs (in that case, if they are all equal, spare only needs to be 10%) rather than out of 3 nodes. And as a bonus, you will get better performance due to ScaleIO's parallel disk utilization across all SDSs.
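And a tiny sketch (my own helper, not a ScaleIO command) of where the 34% and 10% figures come from when all fault units are equal:

import math

def spare_percent(num_fault_units: int) -> int:
    # Spare must cover the loss of one fault unit, so it is 1/N of the
    # total capacity, rounded up to a whole percent.
    return math.ceil(100 / num_fault_units)

print(spare_percent(3))   # 34 -> this 3-node lab
print(spare_percent(10))  # 10 -> the 10-node example above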


March 2nd, 2016 20:00

By default, ScaleIO reserves 10% of capacity as spare for data protection in case of unplanned downtime. It is kept unused so that rebuilds have free disk space to write into when they need it.


March 3rd, 2016 07:00

This helps! This is what I was looking for. I could never find a straight answer on how things were laid out. I assumed the green area (7.8 TB) was my available space for volumes (after RAID), not the volumes I had already configured including their RAID copies.

So since all of my SDSs have the same size drives in them, my spare capacity equals the size of one SDS.

Yes, I have a basic config with 3 servers, each running an SDS. Each server has three 2 TB (1.8 TB usable) drives. This is in my basement lab, so I can't really spin up 10 nodes... the wife would kill me.

Good information. Thanks a ton!
