
August 22nd, 2012 23:00

storage pool history query

Hi, this is a query about storage pool history, to try and profile expected theoretical IO.

Background:

I understand that FLARE attempts to allocate disks in blocks of 5, so the number of spindles a LUN's data sits on depends on the history of the storage pool. An example may show my understanding here.

So we allocate a RAID 5 storage pool with 5 disks. In this case there is a fairly straightforward mapping between the storage pool and the private RAID group within it.

If we allocate, say, a 10 GB LUN (let's call this LUN A), then data is striped across 5 disks.

So now we expand the storage pool by 5 disks. Existing data is not restriped. If we allocate a second 10 GB LUN (let's call this LUN B), data is allocated in segments of 1 GB, so it is striped across 10 disks.

So in calculating performance, ignoring FAST for a moment:

LUN A performance is 5 x disk IOPS, with a write penalty of 4.

LUN B performance is 10 x disk IOPS, with a write penalty of 4.

So if the above understanding is correct, we need to know the layout of the 1 GB segments to get an understanding of theoretical IOPS, as simply saying it's a 10 disk storage pool is misleading.
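To show the sort of arithmetic I mean, here is a rough sketch (the 180 IOPS per disk and the 70/30 read/write mix are just numbers I have picked for the example, not vendor figures):

```python
# Rough sketch of the front-end IOPS arithmetic for a RAID 5 pool LUN.
# The 180 IOPS per disk and the 70/30 read/write mix are assumed example
# numbers only.

def raid5_frontend_iops(num_disks, disk_iops=180, read_ratio=0.7, write_penalty=4):
    """Theoretical front-end IOPS for a LUN striped across num_disks,
    assuming a RAID 5 write penalty of 4."""
    backend_iops = num_disks * disk_iops
    # Each front-end read costs 1 back-end IO; each front-end write costs
    # write_penalty back-end IOs.
    backend_cost_per_frontend_io = read_ratio * 1 + (1 - read_ratio) * write_penalty
    return backend_iops / backend_cost_per_frontend_io

print("LUN A (5 disks):  %.0f front-end IOPS" % raid5_frontend_iops(5))
print("LUN B (10 disks): %.0f front-end IOPS" % raid5_frontend_iops(10))
```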

So my questions are;

1. Is the above simplistic understanding correct?

2. If it is correct, is there a way of knowing the segment layout to calculate the theoretical IOPS for a LUN?

3. I have deliberately left out FAST, as I do not know how to model it. Any thoughts? i.e. if I have a pool with 3 different disk performance characteristics, can I model what to expect in terms of performance?


August 24th, 2012 17:00

It's very difficult. There is some hidden data in the CLI, but even that won't really tell you exactly where the LUN is sitting. Over time, all LUNs will end up spread all over the pool, so IO will balance to some degree, but if the LUN itself was created when the pool had a small number of disks, it will be bound to those disks as far as performance goes.

In VNX OE v.32, the 1GB slices are rebalanced automatically when new disks are added to the pool.  And if the VNX has the FASTVP license installed, the slices are monitored and rebalanced based on IO pattern as well (within the same tier, in addition to up and down tiers).  Unfortunately this is not the case for VNX OE v.31 and CX4 FLARE.

The recommendation for growing pools has been to start with some reasonable number of disks (15, 20, 30, etc.) and grow by the same increment each time. That way new LUNs will have about the same perceived performance as existing LUNs after pool expansion.

FASTCache can overcome much of the variability of performance for pool LUNs by accelerating reads and writes to busy blocks.


August 23rd, 2012 00:00

The answer depends a little, since the behavior has changed in the latest VNX OE 32 release. If your system is a VNX running OE v.32.x, the below is not true. For CX4 FLARE and VNX OE v.31 using THICK LUNs, this applies.

RAID5 pools use RAID5 4+1 RAID Groups

RAID10 pools use RAID10 4+4 RAID Groups

RAID6 pools use RAID6 6+2 RAID Groups

I'm assuming you are using RAID5, right?

The disk-level performance of LUNA and LUNB will each be approximately equal to that of a LUN in a single RAID5 4+1 RAID Group. The reason is that FLARE allocates the 1GB slices from the disk group with the most free capacity. Since the 5 new disks are 100% free and the first 5 disks are less than 100% free, new 1GB slices will be allocated from the new disks until their free percentage equals that of the first 5 disks. Once all disks have equal free capacity, additional slices will be allocated from all 10 disks.

Example: if you perform the following actions in this order...

Create Pool0 with 5 disks

Create LUNA of 10GB in Pool0

Expand Pool0 with 5 additional disks

Create LUNB of 10GB in Pool0

Create LUNC of 10GB in Pool0

Expand Pool0 with 10 more disks

Create LUND of 10GB in Pool0

Create LUNE of 10GB in Pool0

Create LUNF of 10GB in Pool0

LUNA = 5 disks

LUNB = 5 disks

LUNC = 10 disks

LUND = 10 disks

LUNE = 10 disks

LUNF = 20 disks

It may not perfectly end up this way in every case but it's generally what would happen.
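If it helps to see the allocation rule written down, here is a toy sketch of the logic for the first part of that example (the 500-slice capacity per 4+1 group is a number I have made up purely for illustration; the real allocator is more involved than this):

```python
# Toy model of "new 1GB slices come from the private RAID group with the
# most free capacity".  The 500-slice capacity per 4+1 group is a made-up
# illustrative number, not the real FLARE figure.

def add_raid_groups(pool, num_disks, slices_per_rg=500, disks_per_rg=5):
    """Pool creation/expansion adds one private RAID5 4+1 group per 5 disks."""
    pool.extend({"free": slices_per_rg} for _ in range(num_disks // disks_per_rg))

def create_lun(pool, size_gb):
    """Allocate 1GB slices, always taking the next slice from the emptiest group."""
    used_rgs = set()
    for _ in range(size_gb):
        rg = max(range(len(pool)), key=lambda i: pool[i]["free"])
        pool[rg]["free"] -= 1
        used_rgs.add(rg)
    return used_rgs

pool = []
add_raid_groups(pool, 5)            # Create Pool0 with 5 disks
luna = create_lun(pool, 10)         # LUNA lands on the first group only
add_raid_groups(pool, 5)            # Expand Pool0 with 5 more disks
lunb = create_lun(pool, 10)         # LUNB lands on the new, emptier group
lunc = create_lun(pool, 10)         # LUNC spreads across both groups

for name, rgs in [("LUNA", luna), ("LUNB", lunb), ("LUNC", lunc)]:
    print(name, "spread over", len(rgs) * 5, "disks")
```

The same logic carries through the later expansions in the example, though as noted the exact spread won't be perfect in every case.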

What do you mean by three different performance characteristics in the pool? Is it a FASTVP SSD/FC/SATA pool? The three tiers do not act together the same way as a single tier does. If you elaborate here a bit I could provide some more information.

Note: Thin LUNs act differently and do not scale this way.


August 23rd, 2012 01:00

Many thanks for your response.

So I am going to ignore the sub-LUN tiering part for the moment (because I can only focus sequentially!).

So regarding your example: the performance of the LUNs can be theoretically calculated because you know the history of the pool and the LUNs. Say you were approaching a VNX knowing nothing of its history, could you calculate the theoretical performance without this knowledge?
