June 21st, 2016 08:00
Thin or thick disk in VMware
Got myself a new SC4020 and I'm having a fun time trying to figure this thing out, especially how to calculate space usage.
Per the best practice document I'm supposed to use thick (lazy zeroed) disks. In my environment, though, there's a lot of development happening, so there are a lot of snapshots on the VMs.
A lot of my users will take a snapshot as soon as they get their VM, so if I make everything thick I will lose 25 GB at the VMware level right away and won't be able to put as many VMs per datastore.
Is there any problem if I go thin provisioned in VMware and end up thin on thin? In that case, does VMware become my authority on how much space I have left on the datastore?
piedthepiper
June 22nd, 2016 04:00
This is a good question, and one I'm not sure about myself.
In general you don't want to go thin on thin, since management becomes a pain, but I can see why you would do it that way.
Also, you'd want to run some kind of SCSI UNMAP command on the ESXi hosts to make sure they report back to the SAN exactly what data is free.
If you go thin and run SCSI UNMAP daily, you should be able to manage your space well; someone from support could confirm, though. Something like the sketch below would do it.
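For example, a minimal daily reclaim run from one host per datastore could look like this (the datastore name is a placeholder; -n, the number of VMFS blocks reclaimed per pass, is optional):

# Reclaim dead space on a VMFS datastore (ESXi 5.5 and later)
esxcli storage vmfs unmap -l Datastore01 -n 200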
I wrote an article on it a while ago:
darkknightuk.com/.../
crackedup
June 22nd, 2016 07:00
Right now I'm thin on thick, so it's easy: just check VMware. My LUNs get pretty low on space, but at least I'm never running out of space on the backend. With the Compellent I don't know what I'm going to do, because a 1 TB LUN can, worst case, use as much as 2 TB of space. Figuring out how much space is used/needed seems like it's going to be tricky due to the VMware snaps.
I'm tempted to go thin on thin, since with snapshots on the VMs that's going to happen anyway, and at least I won't give up half the disk space on the datastore.
kerberos451
June 22nd, 2016 14:00
Thin provision everything. Compellents are thin provisioned by design; there's nothing wrong with thin provisioning every VM as well. You take a slight performance hit during the initial inflation of a VMDK, but the benefit is you don't chew up all your storage for people who say they need a TB and then only use 100 GB. The only catch is not to over-allocate, which requires you to keep an eye on what you provision. Obviously, if you start getting close to the 90% threshold of a Compellent, you need to tell the bean counters to buy more storage or stop further production. Thin provisioning lets you actually consume close to the 90% threshold before asking for more storage, instead of merely allocating close to that threshold. It's just more efficient.
It is difficult to predict a Compellent's storage usage due to the multiple RAID types and data progression. I see upwards of 8-9 TB of change in a single day, so at one point in the day I could have 16 TB free and later in the day it would be 25 TB free.
digby180
June 28th, 2016 10:00
When you use thin provisioning, make sure your storage supports VAAI and can reclaim your space. I got in trouble when I found out that my storage doesn't support VAAI, so I am unable to reclaim dead space.
Confirming whether SCSI UNMAP is supported on a LUN
To confirm whether SCSI UNMAP is supported on a LUN, open an SSH session to a host and check the device's VAAI status with a command like this (the naa device ID is a placeholder; use your own LUN's identifier):
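esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx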
Example output on a LUN where UNMAP is supported:
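naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported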
If you get Delete Status: unsupported instead, then I don't think thin provisioning would be a good idea.
McDaidH
July 11th, 2016 11:00
It is best practice to use thick (lazy zeroed), as the Compellent will ignore the zeroed blocks and not allocate them until they actually have data in them. You can tell this by looking at the actual usage for a volume when you initially create the VMware volume. It's essentially the same as doing a VMware thin-provisioned disk.
The VM snapshots will also only use space on the SAN for blocks that are actually in use, not for empty zeroed blocks.
We have been using VMware and Compellent for over 5 years now, and all our VM volumes are created as thick (lazy zeroed). A new Compellent LUN with a new VMware volume will only use a few MB on the SAN when initially created as thick (lazy zeroed), and will only grow as data is written. You can compare the formats yourself; see the sketch below.
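If you want to test this, the three VMDK formats can be created explicitly with vmkfstools from the ESXi shell, and the volume's actual usage then compared in Storage Center (the datastore path, folder, and size are placeholders):

# Lazy-zeroed thick: VMFS reserves the space, blocks are zeroed on first write
vmkfstools -c 100G -d zeroedthick /vmfs/volumes/Datastore01/testvm/lazy.vmdk
# Eager-zeroed thick: all blocks zeroed up front
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/Datastore01/testvm/eager.vmdk
# Thin: VMFS blocks allocated only on demand
vmkfstools -c 100G -d thin /vmfs/volumes/Datastore01/testvm/thin.vmdk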
rswislocki
August 23rd, 2016 08:00
Yes, it will, without any problems.
rswislocki
August 23rd, 2016 08:00
From a VMware environment perspective (space saving), it is better to use thin-provisioned disks. With thick (lazy zeroed), the VMFS datastore reserves all of the required space, so you can create only one 100 GB disk on a 101 GB datastore.
piedthepiper
August 23rd, 2016 08:00
Dell's best practice for vSphere is to use thick (lazy zeroed), so you don't end up with thin on thin. The Compellent SAN will do thin on the SAN side anyway, right?
cpetry
October 24th, 2016 08:00
I'd like to know who wrote those best practices. If you go thick on thin, VMFS3.EnableBlockDelete will have no effect. You'll have to buy PerfectStorage 3.0 and let it write zeros to the VMs' free space so the SAN stays thin.
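For what it's worth, the zero-fill trick itself is simple; a tool like PerfectStorage mostly automates and throttles it. A minimal sketch of the same idea inside a Linux guest (the file path is arbitrary, and this will briefly fill the filesystem):

# Write zeros over the guest's free space so the array can reclaim those pages
dd if=/dev/zero of=/zerofill bs=1M || true
sync
# Remove the fill file to free the space again inside the guest
rm -f /zerofill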
Yes, Compellent is thin by design and you can't change that setting. However, what Dell doesn't tell you is that the volumes will not STAY thin.
They don't have a solution for this either. You can run esxcli storage vmfs unmap all you want, and that won't help you with the wasted space within the VMDK. That will only help you keep the datastore itself clean.
If you bring this up with Dell support they will literally ignore you and have no comment. In my opinion they recommend thick lazy on thin so they can sell you more disks.
Edit: Yeah, I've been using my SC4020s (one being all flash) for over two years now. I am using thick lazy on thin too, and after running PerfectStorage 3.0 we reclaimed a ridiculous amount of storage on these "thin provisioned" SANs.
rswislocki
October 26th, 2016 04:00
Isn't unmapping related to thin disks only?
http://cormachogan.com/2015/05/07/vsphere-6-0-storage-features-part-8-vaai-unmap-changes/#more-5023
Why should the array care about shrinking a thick VMDK if its creator chose thick on purpose?
cpetry
July 11th, 2017 20:00
Oh, I'm afraid I know exactly what I'm talking about. There are two different kinds of white space you have to worry about: the white space on the volume and the white space within the VMDK itself.
You need to go read about what the ESXi option is and how it works before you go telling people they don't know what they are talking about.
CompellentSanAd
July 11th, 2017 20:00
Yes, cpetry does not know what he is talking about. Thin disks can use SCSI UNMAP to recover white space; thick disks cannot.
cpetry
July 11th, 2017 21:00
I would highly recommend you thin provision the VMDKs and enable the VMFS3.EnableBlockDelete setting on the host(s). This has to do with how the white space works within the VMDK, not on the volume.
If you thick provision the VMDK, the white space within the VMDK will not be reported, so to speak, to the Compellent array. So the VMDK will operate much like an MSSQL database, and you'll have white space using up actual disk space. If you thin provision the VMDK and enable the above option, the underlying guest OS will report the white space to the host, and the host will then pass that information on to the underlying array. The alternative is to purchase and run PerfectDisk if you wish for the VMs' disks to be thick. Enabling the setting is a one-liner per host; see below.
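As a sketch, assuming ESXi 6.x, the setting can be checked and enabled from the ESXi shell:

# Check the current value (Int Value: 0 = disabled, 1 = enabled)
esxcli system settings advanced list -o /VMFS3/EnableBlockDelete
# Enable automatic in-guest UNMAP passthrough for thin VMDKs
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1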
This has nothing to do with running unmap against a volume.
The VMFS3.EnableBlockDelete option will have no effect on a VM if the VM's disks are thick.
In other words, the Compellent array won't stay thin if you choose thick. It will be unaware of the white space within the VMDKs themselves. That's exactly why PerfectDisk sells.
I run PerfectDisk against thick-provisioned VMs and run unmap via scripts for the volumes (snapshots will fill the volumes up). I have a buddy who just thin provisions his VMs against Dell's "recommendations", so the VMFS3.EnableBlockDelete setting works as it should.