
October 7th, 2013 19:00

Celerra NS - disks, stripes, slices and metas

This is going to be a bit of a long post so thanks in advance for anyone who takes the time to read!

First-time poster and newbie to EMC products. Gotta say, after doing Linux support (web/e-mail servers, mostly) for 5 years, I haven't felt this giddy about learning something new in a long time.

We'll eventually need additional capacity over NFS, so I logged in to Navisphere and navigated to Storage ==> Volumes. I see several resource types (the ones in the subject line), and while I think I get the purpose of some of them, I sure could use some feedback/help.

Disk: This is an FC LUN presented by the CX4 to the NS. The NS sees it as a disk resource and marks it as such. I took note of one of the LUN IDs, connected to the CX4 (using Navisphere), and saw that the LUN has 6 disks and is of type RAID6. So it looks like any disk resource presented to the NS is already data-protected. I then see two disk resources combined into a stripe, two stripes combined into a slice, and several slices under a resource type "meta". I get the meta resource type; it's essentially saying "the filesystem called data1 you wanted me to create will be created on top of all resources within this container." Then, by following filesystem ==> slice(s) ==> stripe ==> disk ==> LUN ==> physical disks, we can figure out which actual physical disks are being used for the NFS share. So:

- Am I right about disk resources?

- Am I right about meta resources?

If I'm right about the above two, what's the point of everything in between, such as stripes and slices? I'm thinking one of those two is probably a way of adding more physical disks to get more IOPS, but then what about the other container? Why couldn't it just be disk ==> slice ==> meta?

I hope this makes sense to someone.  TIA.


October 8th, 2013 08:00

You have the basic idea.  I'll try to fill in some gaps.

Disks on the Data Mover side do map to LUNs on the SP side, as you've seen. So, yes, every "Disk", or dVol, is protected and is backed by more than one physical disk on the back-end.
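If you want to double-check that mapping from the Control Station, the CLI will show it. Off the top of my head (so treat the exact output columns as approximate):

    # List the Data Mover disk volumes (dVols); the storageID-devID
    # column ties each dVol back to a LUN on the CX4
    nas_disk -list

Those are the same LUN IDs you matched up in Navisphere.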

Those disks are then put together by the volume manager into stripe sets (typically in sets of 5). The idea here is to spread the load across lots of disks: the volume manager stripes data across each dVol in the stripe set, and each dVol in turn stripes across its physical disks on the back-end. (There's a manual example of this right below.)
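If you were laying this out by hand, creating a stripe over a set of dVols looks roughly like this. The names are made up, and the exact flags can vary by DART release, so check the nas_volume man page:

    # Stripe across four dVols with a 32 KB stripe depth (a common default)
    nas_volume -name stv1 -create -Stripe 32768 d7,d8,d9,d10

AVM does this same thing for you behind the scenes.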

Of course, you probably want filesystems that are smaller than, or at least not exact multiples of, the stripe sets. That's where slicing comes in. If you ultimately want a 4TB filesystem and you've got, for example, 32 x 2TB drives in your stripe set, you'd only want to use 4TB of the stripe set and keep the rest of the space for other filesystems (or expansions), so the volume manager takes a slice of the stripe set to create that space. If you create a filesystem that is not sliced, it can only use full stripe sets. There are use cases for that, but they are less common, in my experience.
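Manually, carving a slice out of that stripe would be something like this (again, hypothetical names, and the syntax is from memory):

    # Take a 4TB (4194304 MB) slice out of the stripe volume stv1;
    # the size is given in megabytes, and an optional offset can follow
    nas_slice -name slv1 -create stv1 4194304

The rest of stv1 stays free for other slices.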

Then the meta is, as you surmised, a container for a given filesystem. That container can hold multiple slices. The best way I've found to think about it is that it gives the filesystem a single abstracted view of the space. This is especially important for things like filesystem expansion: I can expand a filesystem by adding slices, but the filesystem layer just sees a single meta and thus doesn't have to be aware of the underlying structure.
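To make the layering concrete, the manual steps would look about like this (same caveats as above on names and exact syntax):

    # Wrap one or more slices in a meta volume
    nas_volume -name mtv1 -create -Meta slv1

    # Build the filesystem on top of the meta
    nas_fs -name fs01 -create mtv1

Expanding later just means adding another slice to the meta; fs01 still sees one contiguous volume.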

As you can see, there are several layers in the volume manager of the Data Mover. The good news is that for the vast majority of cases, you don't have to deal with them much at all, thanks to the Automatic Volume Manager (AVM). With AVM, you only have to provide the disks to the NAS pools (which by default will sort themselves out based on disk type, RAID type, etc.). Then you just allocate filesystems from the NAS pools and AVM does all of the middle work for you: you say, "I want a 4TB filesystem from my SATA RAID-6 capacity pool," or "I want to expand a filesystem by 1TB," and the layering is handled behind the scenes. You still have the option of creating and maintaining these layers yourself if you have a need to but, like I said, most workloads do not need that type of manual layout.
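For comparison, the AVM versions of those operations are one-liners. The pool name below is just an example (run nas_pool -list to see yours), and the size syntax may differ slightly by release:

    # Create a 4TB filesystem from a pool
    nas_fs -name fs01 -create size=4T pool=clar_r6_archive

    # Grow it by 1TB later; AVM picks the stripes and slices for you
    nas_fs -xtend fs01 size=1T pool=clar_r6_archive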

There are some good NAS courses and whitepapers on how this works.  I would browse support.emc.com for them or sign up for the NAS courses if you want to go deep on these topics.

Hope this helps.


October 8th, 2013 08:00

Great response, thank you so much for your time.

Erik
