
August 24th, 2012 07:00

Trying to understand why striping a Filesystem in a VNX pool increases performance

I am trying to understand why striping a file system in a VNX pool increases performance.

Currently, on VNX5300, we mix Block and File in a pool.

To create a file system, I first create one thick LUN. Then I create the file system and, for "Storage Capacity", give it the exact number of MBs of the LUN I just created (I check the size with the nas_disk -l command). This automatically uses a mapped pool and creates one disk volume (which maps to the Block LUN I just created), one slice volume, one meta volume, and the file system.

EMC's recommended best practice for file systems is to stripe across the LUNs that the file systems are built on. The explanation is: "A stripe volume can achieve greater performance and higher aggregate throughput because all participating volumes can be active concurrently. In a stripe volume, a read request is made across all component volumes concurrently."

Because VNX builds volumes on top of Block LUNs that are striped in 1 GB slices across a pool of private RAID groups (and, with Inyo, rebalances those slices based on a heat map), the data is already well distributed across a lot of spindles. So the reason for the striping best practice is not disk utilization. Is the reason for striping concurrency?

If a read request is made against a single slice volume, the reads happen sequentially. The Block LUN I initially create will place my data in 1 GB slices all over the storage pool's private RAID groups, but since it all lives on one LUN, the Celerra reads the first 1 GB slice, then the next, then the next, and so on. If I had created a stripe volume instead, it would read smaller stripes concurrently from all the member volumes. I am assuming that with a single slice volume the Celerra cannot read concurrently, because it thinks it is working with just one volume, whereas if it thinks it is working with a stripe volume it will read concurrently.

Am I understanding this correctly?

And does this provide a big performance increase?
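
To make the concurrency argument concrete, here is a minimal Python sketch of the idea, not anything the Celerra actually runs; the 5 ms latency, the extent count, and the member count are made-up numbers purely for illustration.

```python
# Illustrative only: compares reading extents one at a time (single slice
# volume) with reading them in parallel across several stripe members.
import time
from concurrent.futures import ThreadPoolExecutor

IO_LATENCY = 0.005   # pretend each extent read takes 5 ms (assumed value)
EXTENTS = 40         # number of extents the file system needs to read

def read_extent(extent_id):
    time.sleep(IO_LATENCY)   # stand-in for one back-end I/O
    return extent_id

# Single slice volume: one device, so reads are issued one after another
# and the elapsed time is roughly EXTENTS * IO_LATENCY.
start = time.perf_counter()
for e in range(EXTENTS):
    read_extent(e)
print(f"sequential:           {time.perf_counter() - start:.3f}s")

# Stripe volume over 8 members: reads to different members can be in
# flight at the same time, so the elapsed time shrinks toward
# (EXTENTS / MEMBERS) * IO_LATENCY.
MEMBERS = 8
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=MEMBERS) as pool:
    list(pool.map(read_extent, range(EXTENTS)))
print(f"striped ({MEMBERS} members): {time.perf_counter() - start:.3f}s")
```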

4 Operator • 8.6K Posts • August 24th, 2012 08:00

One reason is that, just like any other SAN client, for each device the data mover uses there is a limited number of entries in the I/O queue. Once that is full, it has to wait for an I/O to complete before issuing new ones.

If you stripe across multiple dvols, you increase the number of outstanding I/Os and get more work done in parallel.
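
A rough back-of-the-envelope sketch of that queue-depth point; the per-dvol queue depth and service time below are assumed placeholders, not actual data mover values.

```python
# Illustrative only: more devices means more I/Os allowed in flight, which
# raises the throughput ceiling (Little's law: IOPS <= in-flight / service time).
QUEUE_DEPTH_PER_DVOL = 32   # assumed per-device limit on outstanding I/Os
SERVICE_TIME_S = 0.005      # assumed average time to complete one I/O

def iops_ceiling(dvols):
    outstanding = dvols * QUEUE_DEPTH_PER_DVOL
    return outstanding / SERVICE_TIME_S

for n in (1, 4, 10):
    print(f"{n:2d} dvol(s): up to {iops_ceiling(n):8.0f} IOPS before the queues are full")
```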

9 Legend • 20.4K Posts • August 24th, 2012 07:00

Multiple LUNs will be placed on different SPs, so by striping you will utilize more resources.

Considerations for VNX for File

When using Block storage pools with File, use the following recommendations:

• Create a separate pool for File LUNs.

• AVOID mixing with Block workloads.

• Pre-provision space in the storage pool

• Pro-actively create LUNs and assign them to File, so that File has available space for file-system creation and extension, checkpoints, and so on.

• Use only thick pool LUNs with File.

• DON’T use thin LUNs with File.

• DON’T use compressed LUNs with File.

• If Virtual Provisioning™ is required for VNX for File, use a thin-enabled file system on traditional or thick LUNs.

• Apply the same tiering policies to all LUNs in the storage pool.

• Allow LUNs to complete the prepare process (thick LUN slice allocation) before adding them to the File storage group. Use this command to display the status of the prepare:

• naviseccli lun -list -opDetails

When creating LUNs:

• Create approximately 1 LUN for every 4 drives in the storage pool.

• Create LUNs in even multiples of 10.

• Number of LUNs = (number of drives in the pool divided by 4), rounded up to the nearest multiple of 10 (see the worked example after this list).

• Make all LUNs the same size.

• Balance LUN ownership across SPA and SPB.
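
As a worked example of the LUN-count rule above, a minimal Python sketch; the drive counts are arbitrary illustrations, not sizing guidance for any particular array.

```python
# Minimal sketch of the sizing rule: one LUN per 4 drives, rounded up to
# the nearest multiple of 10, then balanced across the two SPs.
import math

def file_lun_count(drives_in_pool):
    raw = drives_in_pool / 4
    return math.ceil(raw / 10) * 10   # round up to a multiple of 10

for drives in (20, 45, 80):
    luns = file_lun_count(drives)
    print(f"{drives} drives -> {luns} LUNs ({luns // 2} owned by SPA, {luns // 2} by SPB)")
```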

2 Posts • September 3rd, 2012 17:00

Yes, according to an EMC tech I talked with, the answer is that you want at least 10 I/O queues, so it does not matter what size the LUNs in the pool are. Configure it so you have 10 LUNs; then you will have 10 I/O queues that the Celerra can use. Thanks.

4 Operator • 8.6K Posts • September 4th, 2012 02:00

You want to be sure the LUNs are the same size, of course; otherwise striping doesn't work very well.
