12 Posts
1
February 9th, 2016 06:00
Disk capacity utilization on DDVE 2.0
When a DDVE is deployed for the 1st time we see 2 disks automatically configured: a 250GB system disk and a 10GB NVRAM disk.
No storage for customer data is configured on initial deployment of DDVE 2.0.
You will need to add a minimum of 200GB of storage for the 1st Data disk presented for customer usage. This 200GB breaks down as follows to give you usable storage: 200GB – 120GB reserved for system use – 5.6% for RAID on LUN = usable storage.
Math Example:
200GB – 120GB = 80GB – 5.6% = ~76-77GB
300GB – 120GB = 180GB – 5.6% = ~169.92 – 170GB
(The 120GB on the 1st added Data disk is for DDOS to make backup copies of licenses / registry / cores / logs etc. on the ext3 partition – this is the same as the ext3 partition on the 1st physical shelf of a standard Data Domain.)
NOTE: After the 1st Data disk has been added and the filesystem is created, you can add further Data disks in increments of 100GB. On those you will see only the 5.6% overhead for RAID on LUN; the extra capacity for the system requirements is only taken from the 1st Data disk.
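For anyone who wants to plug in their own numbers, here is a minimal sketch of the rule of thumb above in Python. It simply uses the approximate figures quoted in this post (~120GB system reservation on the 1st Data disk, ~5.6% RAID on LUN overhead); the real numbers reported by DDOS will differ slightly.

```python
# Minimal sketch of the rule of thumb above, using the approximate figures
# from this post (not exact DDOS internals).

SYSTEM_RESERVATION_GB = 120    # reserved from the 1st data disk only
RAID_ON_LUN_OVERHEAD = 0.056   # ~5.6% taken from every data disk

def first_data_disk_usable_gb(disk_gb):
    """Approximate usable space delivered by the 1st data disk."""
    return (disk_gb - SYSTEM_RESERVATION_GB) * (1 - RAID_ON_LUN_OVERHEAD)

print(round(first_data_disk_usable_gb(200), 1))  # ~75.5GB (the post quotes ~76-77GB)
print(round(first_data_disk_usable_gb(300), 1))  # ~169.9GB
```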



James_Ford
30 Posts
2
February 12th, 2016 06:00
So whilst the comments above are correct, let me clarify exactly where your space goes to try and remove any confusion:
- Add an initial 200GB data disk to DDVE - as this is the first data disk it will be partitioned as follows:
Slice 5 (used by DDVE for data storage): ~193.21GB
Slice 6 (used by DDVE for ext3 file systems to hold a copy of the system's configuration and so on): ~6.77GB
- DDVE takes ~5.64% of slice 5 for RAID on LUN, leaving ~182.32GB for data storage
- From this ~182.32GB DDVE uses a small amount of space for metadata, then splits the remainder into 1075838976-byte chunks/blocks - as a result there are 181 * 1075838976-byte blocks available for DDFS to use (~181.35GB)
- A large proportion of this ~181.35GB is taken for DDFS metadata (for example on-disk indices)
- After indices etc. DDVE is left with approximately 86 * 1075838976-byte blocks for physical data storage, i.e. ~86.17GB
- This ~86.17GB is used by the container set (CSET)
- DDVE creates 2176 * 4.5MB reserved containers which are only available for internal operations such as cleaning
- This leaves ~76.61GB available for user data and is the space displayed in 'filesys show space'
So in the above example, from an initial 200GB disk, only ~76.61GB is actually usable for data storage
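To make that arithmetic easy to follow, here is a rough replay of it in Python. All of the inputs (slice sizes, block size, container counts) are the figures quoted in this post, not values read from a live system.

```python
# Rough replay of the first-disk breakdown above; all figures are the ones
# quoted in this post, not values queried from a running DDVE.

GIB = 1024**3
BLOCK_BYTES = 1075838976                      # DDFS block size quoted above

slice5_gb = 193.21                            # slice 5: data storage
after_raid_gb = slice5_gb * (1 - 0.0564)      # ~5.64% RAID on LUN -> ~182.32GB

ddfs_gb = 181 * BLOCK_BYTES / GIB             # blocks handed to DDFS -> ~181.35GB
cset_gb = 86 * BLOCK_BYTES / GIB              # left after DDFS metadata -> ~86.17GB

reserved_gb = 2176 * 4.5 * 1024**2 / GIB      # reserved containers -> ~9.56GB
usable_gb = cset_gb - reserved_gb             # ~76.61GB ('filesys show space')

print(round(after_raid_gb, 2), round(ddfs_gb, 2), round(cset_gb, 2), round(usable_gb, 2))
```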
Remember, though, that the first disk has a big overhead in terms of:
ext3 file systems
DDFS metadata (for example on-disk indices)
These overheads don't really apply to additional data disks, so if we add a second 100GB data disk:
- DDVE uses 5.64% of this disk for RAID on LUN (as this overhead applies to all disks)
- There is a small amount of additional DDFS metadata (which uses up another couple of GB)
- The remaining space is available for user data, so the DDFS file system is able to grow from ~76.61GB usable space (with a 200GB data disk) -> ~168.66GB usable space (with 300GB of total data disk)
- As a result, from the second 100GB data disk we got ~92.05GB of usable space (i.e. basically all of it)
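The second-disk case can be sketched the same way, again only using the figures quoted above; the exact cost of the extra DDFS metadata is an assumption chosen to make the numbers line up with the ~92.05GB figure.

```python
# Additional data disks only pay the ~5.64% RAID on LUN overhead plus a small
# amount of extra DDFS metadata; the exact metadata cost below is assumed.

second_disk_gb = 100
after_raid_gb = second_disk_gb * (1 - 0.0564)   # ~94.36GB after RAID on LUN
extra_metadata_gb = 2.3                          # "a couple of GB" (assumed value)
gained_gb = after_raid_gb - extra_metadata_gb    # ~92GB, close to the ~92.05GB above

print(round(76.61 + gained_gb, 2))               # total grows to roughly ~168.7GB
```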
One final thing to point out is that the size of the DDFS metadata can change depending on the workload on the system - in certain circumstances it's entirely possible for on-disk indices to use more space than they were originally allocated. In this situation the size of the DDFS file system (as shown by 'filesys show space') will reduce slightly (i.e. the DDFS size is not fixed even if the underlying storage is not changed).
I guess the thing to remember is that there is a big overhead in space used for RAID on LUN/metadata on the first disk (commonly around ~120GB) but this overhead is much smaller on subsequent data disks. Don't expect to add a 500GB disk, for example, and get 500GB of usable space!
Ryan_Johnson
73 Posts
0
February 9th, 2016 08:00
Can you clarify what RAID on LUN is? I've assumed that DDVE would not be doing any RAID and that all redundancy would be handled by the physical storage it's running on.
AmitSinhaPM
39 Posts
0
February 9th, 2016 13:00
If you have not seen it, there is a best practices document on our community page, which I am linking here; it provides best practices for storage, networking and the virtual infrastructure in general. You will find recommendations there for best results (like the RAID 6 mentioned above).
Ryan_Johnson
73 Posts
0
February 9th, 2016 15:00
Let me see if I understand this correctly.
Are these statements accurate:
- 5.6% RAID on LUN overhead only applies to first data disk.
- 120 GB DDOS system overhead only applies to first data disk.
- All data disks have ~1% overhead (EXT3 overhead)
- License capacity counts toward disk size not usable FS size.
To maximize usable space wouldn't I want to deploy the minimum 200GB data disk then deploy others up to the capacity purchased?
For example 4TB license:
- 250 GB system
- 10 GB NVRAM
- 200 GB first data (76GB usable)
- 3896 GB 2nd data (3818GB usable)
Required virtual infrastructure disk = 4356 GB
License = 4TB
usable disk = 3894GB
The example in the best practices document is 3716GB usable. Why not gain ~4.5% usable with 2 disks?
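For what it's worth, here is a quick Python sketch of the proposed two-disk layout; it simply follows the assumptions stated above, and the 76GB / 3818GB usable figures are my estimates, not measured values.

```python
# Quick sketch of the proposed 4TB layout; the usable figures are estimates
# based on the assumptions above, not measured values.

disks_gb = {
    "system": 250,
    "nvram": 10,
    "data1": 200,    # ~76GB usable (carries the first-disk overheads)
    "data2": 3896,   # ~3818GB usable (per-disk overheads only, per the assumptions above)
}

provisioned_gb = sum(disks_gb.values())          # 4356GB of virtual infrastructure disk
usable_gb = 76 + 3818                            # 3894GB estimated usable
best_practice_gb = 3716                          # usable figure from the best practices example

print(provisioned_gb, usable_gb)                 # 4356 3894
print(usable_gb - best_practice_gb)              # ~178GB (~4.5%) more usable space
```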
I'll see if I have time this week to deploy the technical preview both ways to see if my math and understanding are correct. If I am correct, it is still easier to size and explain to customers than AVE/VDP, but still something to note. It's easy enough to understand from an engineering perspective, but pre-sales can get complicated. I suspect that when the system sizer gets updated it will always assume a ~6% overhead on all data disks.