
June 22nd, 2010 07:00

SATA Drives with Celerra and MVM

Hello All,

I've searched the forums and found that the best practice with SATA drives is to keep each RG on a single SP.

If I'm presenting two RAID 5 (6+1 in this case) RAID groups to a 960 and would like to get the best performance, I would normally create a custom stripe across the d numbers on SP A and SP B, then create a master stripe. If I create each SATA RG on its own SP, should I use the same technique as before, should I create both RGs on the same SP, or should I forget about the master stripe altogether?
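
Just to illustrate what I mean, the manual (MVM) layout would look roughly like this - purely a sketch, with made-up d numbers, volume names, and stripe depth rather than anything from my actual box:

    # d7/d8 are LUNs owned by SP A, d9/d10 by SP B (hypothetical d numbers)
    nas_volume -name stripeA -create -Stripe 32768 d7,d8
    nas_volume -name stripeB -create -Stripe 32768 d9,d10
    # the "master stripe" across the two per-SP stripes
    nas_volume -name masterStripe -create -Stripe 32768 stripeA,stripeB
    # wrap it in a user-defined pool so file systems can be built from it
    nas_pool -create -name sataPool -volumes masterStripe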

Thanks In Advance.

Chris

4 Operator

 • 

8.6K Posts

July 1st, 2010 08:00

"There was a moment with SATA where it was best practice to put them all on the same RG. It's my understanding this is no longer the case with the newer DAEs."

That was a long time ago, with old DAEs that actually used PATA disks.

Now it's no longer necessary to have all LUNs from a SATA RG assigned to the same SP.

"The '4 LUN FC stripe' is very insightful info."

That's how it works with FC drives - I think with SATA we only stripe two LUNs.

I guess for SATA that's because most customers added just one SATA DAE, so you won't get more than two RGs, and also because we expected SATA to be used more for archive purposes.

There is no general answer - it really depends on the workload. AVM tries to strike a good compromise for most workloads.

I would avoid double striping, though.

"P.S.: what performance test have definitely shown is that striping onto the same RG give worse performance than concat"

-I'm assuming this is if you create a stripe based off d#'s from a single RG? Not creating an A stripe and a B stripe from, say 2 RGs.

yes - there is a classic slide in the EMC world performance talk that show this for a single RG

I suppose what I'm taking from this is that AVM with SATA drives is more or less optimized & that MVM is best for FC configurations. Would you agree?

No - I would say that unless you really know the setup of your box and the Celerra best practices, you should be using AVM.

It's a lot easier to create configs with MVM that perform worse than AVM than the other way around.

The only other exception is if you are using an EMC blueprint - like the templates we have tested for Exchange or SQL - those are based on MVM and well tested in our labs.

Of course AVM needs a number of disks/LUNs/RGs to work well - but if you only have a few disks then your options are limited anyway.

18 Posts

June 24th, 2010 13:00

Hi Chris,

Below is an example of how AVM would create a clar_archive pool, so it sounds like you are doing it the same way?

In this case, the clar_archive pool attempts to stripe two allocated NAS disks from two different RAID groups with LUNs owned by different SPs. From this initial stripe, a slice is built (assuming that slice=yes). Any remaining space is returned to the pool.
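
If it helps, the manual equivalent of that allocation would be something along these lines - just a sketch, with made-up d numbers and sizes, assuming the usual nas_volume/nas_slice syntax:

    # two dvols from different RAID groups, LUNs owned by different SPs (hypothetical d numbers)
    nas_volume -name arch_stripe -create -Stripe 32768 d11,d12
    # build a slice of the requested size (in MB) on the stripe; leftover space stays in the pool
    nas_slice -name arch_slice -create arch_stripe 102400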


17 Posts

June 30th, 2010 08:00

Rainer,

We're not talking about CLARiiON MetaLUNs here. If you read up, the clarification was about Celerra metavolumes.

4 Operator

 • 

8.6K Posts

June 30th, 2010 08:00

Depending on the backend and drive type, AVM does create striped metavolumes when it makes sense - i.e. when you have enough LUNs that aren't on the same RAID group.

The AVM manual has a description of the selection algorithm used.

You are free to create larger striped metas with MVM - the "default" of striping 4 LUNs in the classic FC setup was chosen when performance testing showed that adding more disks didn't increase performance much.

With newer NAS codes you can also specify that you want the AVM striping behavior when creating a user-defined pool.

As always it's a compromise between performance, flexibility and I/O separation.

Rainer

P.S.: what performance tests have definitely shown is that striping within the same RG gives worse performance than concatenation.
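
To make that concrete, the two layouts being compared are roughly the following - a sketch only, with made-up d numbers, and both dvols assumed to sit in the same RG:

    # concatenated metavolume over two LUNs from the same RG - the layout that tested better
    nas_volume -name sameRG_concat -create -Meta d20,d21
    # striped volume over the same two LUNs - the layout that tested worse
    nas_volume -name sameRG_stripe -create -Stripe 32768 d20,d21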

17 Posts

June 30th, 2010 08:00

Hey Chris,

Thanks for your response, and correct me if I'm wrong here, but when the Celerra creates a filesystem it does create a metavolume; that meta is concatenated rather than striped when the pool contains multiple Celerra volumes. It was my understanding that using MVM to create a custom stripe, and thus a custom pool, lets the Celerra utilize the maximum number of spindles per filesystem and also create larger striped filesystems (slices), which is usually what you want with SATA drives since they are often used for archiving.
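
For what it's worth, the way I've been checking whether a filesystem ended up on a striped or a concatenated meta is just to walk the volume tree - assuming I'm remembering the flags right, and with made-up names here:

    # shows which volume the filesystem was built on
    nas_fs -info myFS
    # shows the volume type (meta/stripe/slice) and the client volumes underneath
    nas_volume -info v123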

Any thoughts?

4 Operator

 • 

8.6K Posts

June 30th, 2010 08:00

It creates a metavolume on the Celerra, which is something very different from a CLARiiON MetaLUN.

4 Operator

 • 

8.6K Posts

June 30th, 2010 08:00

sorry - I've got that mixed up with another thread

Rainer

17 Posts

June 30th, 2010 09:00

Thanks Rainer,

Although the wording is a little confusing, I think I'm getting the gist.

With SATA I figured we could assume a CLARiiON back end, but I should specify: it is a CX4-960 and the code is 5.6.47-11.

There was a moment with SATA where it was best practice to put them all on the same RG. It's my understanding this is no longer the case with the newer DAEs.

The '4 LUN FC stripe' is very insightful info.

I haven't used the GUI to do much in a while (and I'm also stuck in my ways), so the different options for creating storage pools are interesting, and new to me.

"P.S.: what performance test have definitely shown is that striping onto the same RG give worse performance than concat"

-I'm assuming this is if you create a stripe based off d#'s from a single RG? Not creating an A stripe and a B stripe from, say 2 RGs.

I suppose what I'm taking from this is that AVM with SATA drives is more or less optimized & that MVM is best for FC configurations. Would you agree?

17 Posts

July 13th, 2010 10:00

Thank you for your insight, Rainer.
