Unsolved

January 26th, 2012 09:00

Experience and considerations when migrating VMs across storage?

We are looking at options for migrating existing VMFS datastores / VMs from one storage array to a virtualized storage platform (what a surprise: VPLEX is in the game). As we believe the task is a pretty generic one and we couldn't find any specific experience here on "Everything VMware At EMC",

I would like to share some thoughts around it and ask for your comments and experience.

Peter Tschabitscher and Bas Raayman here at EMC have already contributed to the discussion.

These are the facts they shared:

Since ESX 4.1, you can either resignature or persistently mount a datastore when presenting a "cloned" VMFS datastore to an ESX host.

1: When using force mount, you can only mount the datastore on one ESX(i) host at a time. Mounting on multiple hosts simultaneously is only possible after resignaturing the disk.

2: The metadata change is done by the resignature. ESX recognizes that the disk has the same content but is accessed through a different path; that is why it identifies the disk as a duplicate and won't allow force mounting it on more than one host unless you perform a resignature.

More details:

http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html

vSphere Storage Guide
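On vSphere 5.x, these two options map onto esxcli roughly as follows (a sketch; the volume label "datastore01" is hypothetical, and you run this against each host in question):

```shell
# List VMFS datastore copies (snapshots/clones) that this host has detected
esxcli storage vmfs snapshot list

# Option A: force-mount the copy with its original UUID (one host at a time);
# -n makes the mount non-persistent, so it does not survive a reboot
esxcli storage vmfs snapshot mount -n -l datastore01

# Option B: resignature the copy so every host in the cluster can mount it
esxcli storage vmfs snapshot resignature -l datastore01
```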

Here are the scenarios and considerations for VMFS / VM migration that we looked at:

  1. Storage vMotion requires resources (manpower, time) and moves all data once. Its big advantage: it is 100% online. Make sure your backup is working; if anything goes wrong, you need a way to restore.
  2. Using an encapsulated datastore can create a "real" VMFS. However, you need to re-register all VMs (OK for a small environment), and you have to consider all the information stored in the vCenter Server DB, such as resource pools, performance data, and so on, which is lost when VMs are re-registered.
  3. Force mount: not really an option in our view, as it brings the signature issues described above, which we prefer to avoid.
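For option 1, the move can also be scripted with svmotion from the vSphere CLI instead of clicking through the client (a sketch based on vSphere CLI 5.0; server, datacenter, VM, and datastore names are hypothetical):

```shell
# Relocate vm1's disks from source_ds to target_ds while the VM stays online;
# svmotion prompts for the password when it is not supplied on the command line
svmotion --server vcenter.example.com --username administrator \
         --datacenter 'DC1' \
         --vm '[source_ds] vm1/vm1.vmx:target_ds'
```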

Any other elegant ways to do a migration?

For now our agreed preference is Storage vMotion. However, we would be curious whether anybody has discovered a reliable and smooth method that avoids playing with resignaturing and/or losing vCenter Server DB information and configuration.

David

2 Intern • 225 Posts

January 31st, 2012 02:00

VPLEX encapsulation with a mirror device could be considered as a migration method for you.

The plan would look like this:

  1. Confirm that the existing source array and the planned target array are supported by the VPLEX code; you can take the ESM as a reference.
  2. You need downtime to implement the VPLEX system and re-cable the host-VPLEX-array connections.
  3. Capture the existing ESX LUNs from the existing array as one leg of a VPLEX mirror device, with the other leg on your new array.
  4. Set up synchronization between the two legs.
  5. Once synchronization completes, you can break the source leg, destroy the mirror device, and remove VPLEX.
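If you go the mirror route, the VPLEX CLI flow looks roughly like this (a sketch only: device, extent, and storage-volume names are hypothetical, and the exact command contexts and options vary by GeoSynchrony release, so check the CLI guide before running anything):

```shell
# Claim the storage volume presented by the new target array
storage-volume claim --storage-volume VPD83T3:6006016012345678

# Carve a 1:1 extent and local device from it to serve as the new leg
extent create --storage-volumes VPD83T3:6006016012345678
local-device create --name mig_leg_new --geometry raid-0 --extents extent_1

# Attach the new leg as a mirror of the encapsulated source device;
# VPLEX synchronizes the two legs in the background
device attach-mirror --device device_src_1 --mirror mig_leg_new

# After the rebuild completes, detach and discard the source leg
device detach-mirror --device device_src_1 --mirror device_src_leg --discard
```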

2 Intern • 199 Posts

January 31st, 2012 21:00

Eddy, while VPLEX might be a feasible way to do the migration, I still doubt whether your action plan will work. Since you will remove VPLEX after the sync completes, you might hit the same resignature issue when mounting the storage volumes to the ESXi hosts. The metadata in the signature might be different because the volume was previously identified as a VPLEX volume, not the native storage volume.

2 Intern • 225 Posts

February 1st, 2012 02:00

As far as I know, VPLEX does not change any metadata on a captured LUN as long as 1:1 encapsulation is used.

Eddy

2 Intern • 225 Posts

February 1st, 2012 18:00

Yeah, you are right. A device presented by VPLEX is labeled as an "Invista" device.

Do you know what the ESX resignature is made up of?

Thanks,

Eddy

2 Intern • 199 Posts

February 1st, 2012 19:00

Eddy,

here is some background information on the signature.

When a storage device contains a VMFS datastore copy, you can mount the datastore with the existing signature or assign a new signature.

Each VMFS datastore created in a storage disk has a unique UUID that is stored in the file system superblock. When the storage disk is replicated or snapshotted, the resulting disk copy is identical, byte-for-byte, with the original disk. As a result, if the original storage disk contains a VMFS datastore with UUID X, the disk copy appears to contain an identical VMFS datastore, or a VMFS datastore copy, with exactly the same UUID X.

ESXi can detect the VMFS datastore copy and display it in the vSphere Client. You can mount the datastore copy with its original UUID or change the UUID, thus resignaturing the datastore.

In brief, the signature lives at the VMFS level. It has nothing to do with the Invista device label, so it seems VPLEX might handle the migration job seamlessly, as long as the VMFS datastores are already known to the ESX hosts on the DR site.
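The byte-for-byte behavior described above can be illustrated with a toy simulation (plain files stand in for the VMFS superblock holding the UUID; no real VMFS is involved, and the UUID strings are made up):

```shell
# Toy model: a file's contents stand in for a VMFS superblock storing the UUID
workdir=$(mktemp -d)

printf 'UUID=4f5e6d7c-8a9b-0c1d' > "$workdir/original"   # the original datastore
cp "$workdir/original" "$workdir/replica"                # array-side byte-for-byte copy

# Same bytes => same UUID: this is what ESXi flags as a "VMFS datastore copy"
cmp -s "$workdir/original" "$workdir/replica" && echo "duplicate UUID detected"

# A resignature stamps a fresh UUID into the copy, making it a distinct datastore
printf 'UUID=1a2b3c4d-5e6f-7a8b' > "$workdir/replica"
cmp -s "$workdir/original" "$workdir/replica" || echo "resignatured: unique datastore"
```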

2 Intern • 225 Posts

February 1st, 2012 21:00

JingYi and Alex,

http://one.emc.com/clearspace/docs/DOC-36425 - I hope you can access this PG on encapsulating ESX volumes in VPLEX.

I think the signature would not be modified, even though VPLEX changes the device type from a CX device to an Invista device.

Thanks,

Eddy

2 Intern • 199 Posts

February 1st, 2012 22:00

The thing is, if you decommission the VPLEX and remount the storage volumes directly to the ESX cluster, will the ESX hosts treat these volumes as the original VPLEX volumes or as cloned VPLEX volumes? If the former, the volumes can be seen by all hosts without a resignature.

The difference is that in the VPLEX migration solution, the ESX hosts on the DR side already know and communicate with the datastore, while with other replication solutions the DR volumes have not been presented to the hosts yet.

2 Intern • 225 Posts

February 2nd, 2012 18:00

How does ESX determine the UUID?

I think the UUID would not be a problem, as long as the device type is not part of the UUID calculation or validation.
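As far as I understand it, these are two separate identities: the VMFS UUID is written into the superblock when the filesystem is created (or resignatured) and is read back from disk, while the device's NAA identifier comes from the array via SCSI inquiry. A copy is flagged when the device ID recorded in the VMFS metadata no longer matches the device the filesystem is actually found on. You can compare the two on a host like this (a sketch; esxcli on ESXi 5.x):

```shell
# Device identity: the NAA/VPD ID reported by the array (this changes when
# the same blocks are presented through VPLEX instead of the CX directly)
esxcli storage core device list

# Filesystem identity: the VMFS UUID stored on disk inside the datastore
# (it survives re-presentation and is only rewritten by a resignature)
esxcli storage filesystem list
```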

Thoughts?

Eddy

2 Intern • 225 Posts

March 12th, 2012 18:00

I am a bit confused by this.

A SCSI ID is the address of a physical device; a LUN is a logical device identified by a LUN ID.

Txtee, could you provide more information about it?

Thanks,

Eddy
