June 22nd, 2014 05:00

VMAX Extent Distribution

To see what pool devices are bound to, we can use:

C:\>symcfg -sid 123 list -tdev -sg SG_XXX -gb

Enabled Capacity (Tracks) : 4285068096
Bound   Capacity (Tracks) :    4457040

                 S Y M M E T R I X   T H I N   D E V I C E S
------------------------------------------------------------------------------
                                         Total          Total       Compressed
       Bound      Flgs      Total      Allocated       Written      Size/Ratio
Sym  Pool Name    EMPT        GBs        GBs (%)        GBs (%)        GBs (%)
---- ------------ ---- ---------- ---------- --- ---------- --- ---------- ---
0879 FC           F..B       64.0       64.0 100       64.0 100       64.0   0
0889 SATA         F..B      128.0      127.9 100      127.9 100      127.9   0
0899 SATA         F..B       80.0        0.1   0        0.1   0        0.1   0
Total                  ---------- ---------- --- ---------- --- ---------- ---
  GBs                       272.0      192.0   0      192.0   0      192.0   0

If a device is unbound from a pool (say SATA) and then rebound to FC, the data remains intact, but the *extents* also remain on SATA. New extents will be written to the FC pool, while reading extents that are already on SATA will not move them. The two ways that I know of to bind a device to a new pool are:

Unbind and Rebind to a new pool -> Existing extents remain on original pool, new extents are written to new pool.

- Is there a way to do this without the device going offline to the host? (Presumably I cannot unbind a device while it is in a Masking View, or can I?)

- If an existing extent is written to (a single byte, say), will that whole extent be moved up to the FC pool, or will it remain on SATA?

symmigrate to a new pool -> All extents will be migrated to the new pool (I know that this is online and non-disruptive, so that's good).

With the above options in mind (i.e. a device can be bound to FC, but have a distribution of extents on FC, on SATA, or on other pools at the same time), is there a command that can show me the distribution of extents for a device?

i.e. in the above, if device 0879 had been unbound from SATA and rebound to the FC pool, 100% of its extents would remain on SATA until specific extents are written, at which time they would be moved to FC; it would be useful to be able to see that.


June 22nd, 2014 06:00

Hi Rwise,

You can nondisruptively rebind a device to a different pool without first unbinding it.

Rewriting data to an existing extent will not cause that extent to be moved.

To see the distribution of capacity/extents for your devices, just add the -detail flag to the original command you pasted above.

Thanks,

- Sean


June 22nd, 2014 07:00

Thanks Sean. Perfect. I'd used -detail before, but I guess the devices were not split between pools so I just did not see this going on.

For others that come past this post ...


For single TDEV:

symconfigure -sid 123 -cmd "rebind tdev 08A3 to pool FC;" preview -noprompt

For entire SG or DG

symconfigure -sid 123 -cmd "rebind tdev in SG SM_XXX to pool FC;" preview -noprompt

symconfigure -sid 123 -cmd "rebind tdev in DG DG_XXX to pool FC;" preview -noprompt


Any of these commands run very fast (about 5 seconds). (The examples above use the preview action to validate the syntax; replace preview with commit to actually apply the rebind.)


And then as you say, symcfg now shows distribution by pool.

C:\>symcfg -sid 123 list -tdev -sg SM_XXX -gb -detail

                    S Y M M E T R I X  T H I N  D E V I C E S
-------------------------------------------------------------------------------------
                                    Pool        Pool          Total      Compressed
      Bound      Flags      Total  Subs      Allocated        Written    Size/Ratio
Sym  Pool Name    ESPT        GBs  (%)        GBs (%)        GBs (%)        GBs (%)
---- ------------ ----- ---------- ----- ---------- --- ---------- --- ---------- ---
0879 FC           F..B        64.0     1         64 100       64.0 100       64.0   0
0889 SATA         F..B       128.0     0        127 100      127.9 100      128.0   0
     FC           -.--           -     -          0   0          -   -          0   0
0899 SATA         F..B        80.0     0          0   0         0.1  0        0.1   0
     FC           -.--           -     -          0   0          -   -          0   0
Total                   ---------- ----- ---------- --- ---------- --- ---------- ---
  GBs                        272.0     0      192.1   0      192.0   0      192.1   0


June 22nd, 2014 11:00

A couple of follow up questions ...

What is the general impact to host performance when running symmigrate between pools? Is it high, or is it generally run as a background process with low impact to the host (i.e. host operations are given a much higher priority, etc.)?

Is it safe to terminate a running symmigrate, leaving half of the extents on SATA and half on FC, say?

What is the difference between Pool Allocated and Total Written? My guess is that Pool Allocated is all the extents that are in use, while Total Written is all the non-zero extents with actual data, though I'm a bit unclear.

What is Compressed Size/Ratio? I see from some outputs that this can be higher than the Pool Allocated / Total Written (i.e. a negative compression ratio, which doesn't seem good). Is compression something we turn on/off (if so, where/how) and can manage, or is it globally on/off per array, and what is its use (going by the above, it's not good)?

Thanks.


June 22nd, 2014 14:00

Rwise,

symmigrate (VLUN migration) is generally a lower-priority task; it will actually stop migrating extents in certain situations, for example when system write pendings (basically dirty write cache pages) reach a certain level.

Yes, for thin symmigrate sessions, it is safe to terminate prior to completion. In this case, extents will remain where they are at the time of termination (e.g. half in SATA and half in FC, as you mentioned).
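For reference, a minimal sketch of checking and then terminating a session (the session name tmp is just the example used elsewhere in this thread; these commands assume Solutions Enabler is installed and the session was created with -name tmp):

```shell
# Check how far the running migration session has progressed
symmigrate -sid 123 -name tmp query

# Stop the session; extents already moved stay in the new pool,
# extents not yet moved remain in the original pool
symmigrate -sid 123 -name tmp terminate -noprompt
```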

Pool Allocated includes both written and preallocated extents, and Pool Written just accounts for extents that have actually been written to by a host. For preallocated TDEVs, you'll likely see a higher value for Pool Allocated than Pool Written.

Compression is an option of FAST VP where we can optionally compress data that hasn't been accessed in a user-defined period of time. It's only relevant if you're using FAST VP and have explicitly enabled compression. Assuming you're not using FAST VP & Compression, the cases where you're seeing unusual values in the Compression column may just be rounding errors. When displaying capacity in GB, the Pool Allocated column displays in integers (no decimal point), and the Compressed Size (and Pool Written) column displays with a single digit after the decimal point. To confirm, you could try dropping the -gb flag so it displays capacity in tracks instead of GB.
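To illustrate the rounding point, here is a small sketch (assuming the standard 64 KB VMAX FBA track size, and modeling the two column formats as integer truncation vs. one-decimal rounding; the track count is hypothetical):

```python
def tracks_to_gb(tracks, track_kb=64):
    """Convert Symmetrix FBA tracks (64 KB each) to GB."""
    return tracks * track_kb / (1024 * 1024)

# A hypothetical allocation of 2,096,700 tracks is ~127.97 GB.
gb = tracks_to_gb(2096700)

# The integer Pool Allocated column truncates this to 127, while a
# one-decimal column rounds it to 128.0 -- so a "compressed" value can
# appear larger than "allocated" purely through display rounding.
print(int(gb))       # 127
print(f"{gb:.1f}")   # 128.0
```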

Thanks,

- Sean


June 23rd, 2014 05:00

Thanks Sean,

I ran a complete symmigrate from SATA to FC for a 7 TB system:

symmigrate -sid 123 -name tmp -sg SM_XXX -tgt_pool -pool FC establish

This ran overnight, and the query showed the session at 100%:

symmigrate -sid 123 -name tmp query

However, some extents are still on SATA:

symcfg -sid 123 list -tdev -sg SM_XXX -detail


                     S Y M M E T R I X   T H I N   D E V I C E S
-------------------------------------------------------------------------------------
                                    Pool         Pool           Total      Compressed
       Bound      Flags      Total  Subs      Allocated        Written     Size/Ratio
Sym  Pool Name    ESPT      Tracks   (%)     Tracks (%)     Tracks (%)     Tracks (%)
---- ------------ ----- ---------- ----- ---------- --- ---------- --- ---------- ---
0A33 FC            F..B   32770560     2   32775564 100   32768642 100   32775564   0
     SATA          -.--          -     -       1008   0          -   -       1008   0
0A2B FC            F..B   40962720     2   33580116  82   33575113  82   33580116   0
     SATA          -.--          -     -       1020   0          -   -       1020   0

So, I ran it again this morning and the symmigrate session went to 100%, but again some devices have 1008 or 1020 tracks left on the SATA pool. Any ideas why I cannot push all data to FC exclusively? (Maybe I should open a support case, but I thought it would be good to discuss these things on the forums, where others can see them openly and compare experiences.)
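For perspective, a quick sketch of how small that leftover actually is (again assuming 64 KB FBA tracks; the figures for device 0A33 are taken from the output above):

```python
def tracks_to_gb(tracks, track_kb=64):
    """Convert Symmetrix FBA tracks (64 KB each) to GB."""
    return tracks * track_kb / (1024 * 1024)

# Device 0A33: allocated tracks on FC plus the stragglers on SATA.
total_alloc = 32775564 + 1008

leftover = tracks_to_gb(1008)    # ~0.06 GB still on SATA
pct = 100 * 1008 / total_alloc   # ~0.003% of the device's allocation

print(round(leftover, 3), round(pct, 4))
```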


June 23rd, 2014 07:00

Rwise,

This sounds like a known issue -- documented at https://support.emc.com/kb/169241. I'd recommend opening an SR -- support can confirm whether or not you're running into this issue, and if so can apply an Enginuity fix to resolve it.

Thanks,

- Sean
