November 24th, 2015 04:00

VNX 5400 thin pool LUN consumed capacity

Hi Admins,

We have a VNX 5400 with a 75 TB thin pool. Only thin LUNs are created from this pool. On the vSphere side, the datastores are created in the Thick Provision Lazy Zeroed format.

During a recent capacity review, I was surprised to see that the total pool LUN consumed capacity is far higher than the actual used capacity on the vSphere side.

VNX Pool used capacity: 68 TB

vSphere used capacity: 35 TB
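For reference, the figures above came from commands along these lines (the SP address, pool name, and datastore names are placeholders for our own):

  # On the VNX (Block CLI): pool capacity counters
  naviseccli -h 192.168.1.10 storagepool -list -name "Pool_0" -userCap -consumedCap -availableCap -subscribedCap

  # On an ESXi host: datastore size and free space
  esxcli storage filesystem list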

Why do we have such a huge difference in utilization? Is this by design, or are we missing something?

P.S. I already have an SR open for this, but since we are nearing a pool-full condition, I want to make sure we have enough free space.

Thanks

65 Posts

November 26th, 2015 07:00

Hello,

When you provision storage from a VNX pool to VMware datastores, the allocated space can show up in Unisphere as consumed even if no data has been written to it.

From the VNX with MCx Virtual Provisioning White Paper, page 42:

For VMware environments, the Virtual Machine File System (VMFS) has many characteristics that are thin-friendly. First, a minimal number of thin extents are allocated from the pool when a VMware file system is created on thin LUNs. Also, a VMFS Datastore reuses previously allocated blocks, which is beneficial to thin LUNs. When using RDM volumes, the file system or device created on the guest OS dictates whether the RDM volume is thin-friendly.

When creating a VMware virtual disk, LUNs can be provisioned as:

• Thick Provision Lazy Zeroed
• Thick Provision Eager Zeroed
• Thin Provision

Thick Provision Lazy Zeroed is the default and recommended virtual disk type for thin LUNs. When using this method, the storage required for the virtual disk is reserved in the Datastore, but the VMware kernel does not initialize all the blocks at creation.

The VMware kernel also provides other mechanisms for creating virtual drives that are not thin-friendly. The Thick Provision Eager Zeroed format is not recommended for thin LUNs because it performs a write to every block of the virtual disk at creation. This results in equivalent storage use in the thin pool.

When using Thin Provision, space required for the virtual disk is not allocated at creation. Instead, it is allocated and zeroed out on demand.

As of vSphere 5, there is also the ability to perform thin LUN space reclamation at the storage-system level. VMFS 5 uses the SCSI UNMAP command to return space to the storage pool when created on thin LUNs. SCSI UNMAP is used any time VMFS 5 deletes a file, such as during Storage vMotion, VM deletion, or snapshot deletion. Earlier versions of VMFS would only return the capacity at the file system level. vSphere 5 greatly simplifies the process by conducting space reclaim automatically.

In addition, features such as VMware DRS, Converter, VM Clones, Storage vMotion, Cold Migration, Templates, and vMotion are thin-friendly.

For the full paper:

https://www.emc.com/collateral/white-papers/h12204-vp-for-new-vnx-series-wp.pdf
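As a side note (this is not from the paper), the three virtual disk formats map to vmkfstools disk types, so you can compare them from the ESXi CLI; the sizes and datastore path below are just examples:

  # Thick Provision Lazy Zeroed (default): space reserved, blocks zeroed on first write
  vmkfstools -c 10g -d zeroedthick /vmfs/volumes/Datastore1/test/lazy.vmdk

  # Thick Provision Eager Zeroed: every block written at creation, so a thin pool allocates it all
  vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/Datastore1/test/eager.vmdk

  # Thin Provision: nothing allocated up front
  vmkfstools -c 10g -d thin /vmfs/volumes/Datastore1/test/thin.vmdk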

Hope this helps,

Adham

3 Apprentice • 1.2K Posts

November 24th, 2015 11:00

Have you run space reclaim against the pool LUNs (datastores)? This marks unused blocks as free and returns that space to the VNX thin pool.
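The exact command depends on your ESXi version; a rough sketch (the datastore name is just an example):

  # ESXi 5.5 and later: issue SCSI UNMAP against a datastore
  esxcli storage vmfs unmap -l Datastore1

  # ESXi 5.0/5.1: run from inside the datastore; reclaims up to 60% of free space
  cd /vmfs/volumes/Datastore1
  vmkfstools -y 60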

1 Rookie • 44 Posts

November 24th, 2015 23:00

We have not reclaimed any LUNs recently. This is the second time this issue has occurred in two months.

3 Apprentice • 1.2K Posts

November 25th, 2015 11:00

I suggest you run a reclaim. As I said above, this should free up unused blocks and return that space to the VNX thin pool.

4 Operator • 8.6K Posts

November 30th, 2015 07:00

Did you find out what your issue was?
