
November 30th, 2010 09:00

Using AVM with RAID10

We currently have an NS480, and we typically use RAID5 4+1 (clar_r5_performance). We add 20 disks at a time, create 4 RAID groups of 4+1, carve each 4+1 RAID group into 2 LUNs, and assign each LUN to an SP in the CLARiiON.

From what I have read, this optimizes the striping, and for clar_r5_performance 20 drives is a good increment when adding drives.

We are now looking at running VMware View over NFS on our NS480. I would like to use AVM with RAID10, but I am not sure how many drives should be added at one time to optimize the striping.

Can anyone point me to what that number of drives would be, or to a document that lays that out?

I would appreciate any help...

Chris

4 Operator


8.6K Posts

December 20th, 2010 09:00

Which doc specifically?

My support matrix says 2-disk RAID10 only.

111 Posts

November 30th, 2010 09:00

OK... just so I understand.

If I had 40 disks, would I create 20 RAID1 RAID groups, carve each RAID group into 2 LUNs, and present these LUNs to the Celerra?

Thanks for the quick reply

Chris

366 Posts

November 30th, 2010 09:00

Hi,

The Celerra only supports RAID 1/0 with 2-disk RAID groups, and you would usually create 2 LUNs on each RAID group.

Gustavo Barreto.

366 Posts

November 30th, 2010 09:00

Yes...exactly.

Regards,

4 Operator


8.6K Posts

November 30th, 2010 10:00

I would suggest just using the Storage Provisioning Wizard.

111 Posts

November 30th, 2010 10:00

Thanks Rainer,

I typically do not use that wizard, but I did click through it, and you still need to know what you want to accomplish in order to use it.

Would you agree that in order to use RAID 1/0 on the Celerra with 40 drives, you would create 20 two-drive RAID groups, create 2 LUNs per RAID group, and then assign these to the Celerra?

Thanks for your input

Chris

4 Operator


8.6K Posts

December 1st, 2010 01:00

Yes.

And that's exactly what the wizard will configure for you if you tell it you want maximum protection.

111 Posts

December 1st, 2010 03:00

As with clar_r5_performance under AVM, it will be looking for 4 LUNs to stripe. Assuming I am right on this point: if I use 40 drives, set it to max protection, and it creates 20 two-drive RAID groups, would it not just use the first 8 drives?

My goal is to have AVM leverage as many spindles as possible as each file system is created.

I apologize for so many follow-ups.

Chris

4 Operator


8.6K Posts

December 1st, 2010 03:00

The SPW just creates the RAID groups and LUNs and makes them visible to the Celerra; how they are used later is up to AVM or MVM.

If you want more disks involved than the AVM default for that profile, I suggest creating the stripes yourself and putting them into a user-defined storage pool.

This way you get both control over layout and performance, and the ease-of-use of pools.
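Roughly, the Celerra-side commands would look like this (the volume and pool names, the d-volume numbers, and the stripe depth are only examples; check nas_disk -list and the nas_volume/nas_pool man pages on your system):

# see which d-volumes the Celerra got from the CLARiiON LUNs
nas_disk -list

# stripe across the d-volumes backing the RAID1 LUNs (example names d7..d16)
nas_volume -name r10_stripe1 -create -Stripe 262144 d7,d8,d9,d10,d11,d12,d13,d14,d15,d16

# put the stripe into a user-defined storage pool
nas_pool -create -name r10_pool -volumes r10_stripe1 -default_slice_flag y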

Rainer

16 Posts

December 13th, 2010 19:00

Gustavo: I believe that since 5.6.46 or so, RAID10 is supported on some platforms.

I'm interested in this as well, as we have a large NS80 that is entirely RAID1, and we're seeing serious disk performance issues on our busiest file systems.

111 Posts

December 18th, 2010 07:00

I have been doing some experimentation with user-defined AVM, and I believe what you are suggesting may be best. The issue I see is the limited number of stripe members (max 8), and that the Celerra metavolumes are not striped.

I want to make sure the Celerra will not do anything unusual with the dvolumes that would affect performance.

I want to use 20 drives in RAID 1/0 to optimize performance:

Create 2 ten-drive R1/0 RAID groups.

Create 2 LUNs on each RG (LUN 1 on SPA from one RG and LUN 2 on SPB from the other RG).

Create a striped MetaLUN from LUN 1 and LUN 2 (MetaLUN1).

Present MetaLUN1 to the Celerra, and it will assign a dvolume number.

Create a user-defined storage pool with this dvolume.

I should not need to worry about template, num_stripe_members, or stripe size since I have already laid out the disks the way I want. The only option I should want is "slice by default".

Now that the disk is presented and in a pool, I can create a file system.
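For example, something like this (the file system name, size, and export options are only placeholders; the pool name is whatever the pool was called when it was created):

# create a file system from the user-defined pool and mount it on the data mover
nas_fs -name fs_view01 -create size=500G pool=<pool_name>
server_mountpoint server_2 -create /fs_view01
server_mount server_2 fs_view01 /fs_view01

# export it over NFS (options depend on your ESX hosts)
server_export server_2 -Protocol nfs -option root=<esx_host> /fs_view01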

As I need to grow, I can follow the same process I used for MetaLUN1 and just extend that user-defined storage pool (which will do a concatenation) with another dvolume, using the command:

nas_pool -name -xtend -volumes

Do you see any issues with this configuration?

Are there any options or parameters in the Celerra that may be set by default when I create the storage pool and would cause issues with this layout?

I appreciate the help. Optimizing performance with the drives I have available is the objective.

Thanks

Chris

4 Operator


8.6K Posts

December 18th, 2010 13:00

Hi Chris,

please do NOT use CLARiiON MetaLUNs with the Celerra - they are not supported.

Also, with RAID10 you need to stick to 2-drive RAID groups and stripe on the Celerra side.

I would create 10 two-disk R1 RAID groups with one LUN each and alternating SP ownership, then create a stripe volume on the Celerra and put that into a user-defined pool.
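On the CLARiiON side that would be something like the following with naviseccli (the SP address, RAID group and LUN numbers, and disk IDs are just examples, and the exact bind options can differ between FLARE releases - treat this as a sketch):

# two-disk RAID1 group with one LUN, owned by SP A
naviseccli -h <spa_ip> createrg 20 1_0_0 1_0_1
naviseccli -h <spa_ip> bind r1 200 -rg 20 -sp a

# next group gets its LUN on SP B, and so on, alternating across all 10 groups
naviseccli -h <spa_ip> createrg 21 1_0_2 1_0_3
naviseccli -h <spa_ip> bind r1 201 -rg 21 -sp b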

Rainer

111 Posts

December 18th, 2010 14:00

Thanks, Rainer, for being so responsive; I am glad I asked. I do not want to do anything that is unsupported.

To get back to basics, there are 3 types of configuration as I understand it:

system AVM

user-defined AVM

user-defined MVM

I have played with user-defined MVM, where I presented the LUNs as you suggest:

20 drives in 10 R1/0 RAID groups

2 LUNs per RG, with each LUN of a RG on a different SP (20 LUNs total)

I presented all 20 LUNs (10 on SPA / 10 on SPB) to the Celerra. I then manually created 2 stripes, each using 10 LUNs from different RGs (5 LUNs on SPA / 5 LUNs on SPB). I then created a metavolume using Celerra Manager, in which the 2 stripes are concatenated, and this process created a storage pool.

So I now have a storage pool as follows:

storage group (metavolume) = stripe1 + stripe2

As we grow, how would I add a 3rd (or more) stripe to this metavolume/storage pool?

After this storage pool is created, I know I can click Extend, but it just asks by how many MB I want to expand...

Will it automatically know to pull in that 3rd (or more) stripe I created and extend the existing metavolume? How do I get the additional stripes to become part of this storage group/metavolume?

Thanks again for your help

Chris

4 Operator


8.6K Posts

December 19th, 2010 08:00

Hi Chris,

I am not sure if you are really using a user-defined pool - the last time I did that, on 5.6, the pool needed to be created manually.

Then you add volumes to it. I am not sure what it is called in the GUI these days - I just use nas_pool and nas_volume.

My suggestion to create just one LUN is because the normal default of two LUNs is mainly there so that we are always sure it is SP-balanced and we get a number of LUNs (I/O queues) to work with.

Since you have “small” LUNs with RAID1, creating just one LUN per RG can save some head movement.

In terms of stripes, I would make that dependent on how you later plan to extend the pool. If you think you will add disks in multiples of 20, then go with a stripe containing 20 disks. If you think you will rather add only 10, then use 10.

No need to concat before putting storage into the pool – a pool does that automatically when needed.

For extension, just create new stripes and add them to the pool.
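Roughly like this (again, the names and d-volume numbers are only placeholders):

# build the next stripe from the newly presented d-volumes
nas_volume -name r10_stripe2 -create -Stripe 262144 d17,d18,d19,d20,d21,d22,d23,d24,d25,d26

# concatenate it into the existing user-defined pool
nas_pool -xtend r10_pool -volumes r10_stripe2

# then extend the file system out of the grown pool if needed
nas_fs -xtend <fs_name> size=500G pool=r10_pool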

Rainer

16 Posts

December 20th, 2010 09:00

Rainer:

I'm curious why you're saying he has to stick to 2-drive RAID groups. The docs imply that 2-, 4-, 6-, or 8-disk RAID groups are supported in 5.6.44+.

Considering the performance problems we're suffering with RAID1 mirrored pairs, I'd like to see people understand that better options are available.
