
Moving database from RDMs to different RDMs

September 18th, 2012 10:00

I need to relocate my SQL database/logs from 2 RDMs to 2 new RDMs.

What is the best way to do this?

Mount the new RDMs,

then use EMCopy or Open Replicator?

Then remove the old RDMs and give the new drives the old drives' letters?

Would this work?

9 Legend • 20.4K Posts

September 18th, 2012 10:00

Storage vMotion?

2 Intern • 470 Posts

September 18th, 2012 10:00

I cannot do Storage vMotion with the physical RDMs; it only relocates the mapping file.

9 Legend • 20.4K Posts

September 18th, 2012 10:00

Here is one discussion; I guess there is no Storage vMotion for physical mode RDMs.

http://communities.vmware.com/message/1987734

1K Posts

September 18th, 2012 11:00

Are the two new LUNs on the same array?

9 Legend • 20.4K Posts

September 18th, 2012 11:00

What type of arrays (source and target)?

2 Intern • 470 Posts

September 18th, 2012 11:00

No, they are different array types.

2 Intern • 470 Posts

September 18th, 2012 11:00

CLARiiON RDM to NetApp RDM.

9 Legend • 20.4K Posts

September 18th, 2012 11:00

SAN Copy is not supported between CX and NetApp, so you will need to rely on a host-based tool.

1K Posts

September 18th, 2012 11:00

Do you need the LUNs to stay as RDMs, or would it be OK if they become VMDKs?

1K Posts

September 18th, 2012 11:00

If RDM is not required, you can shut down the VM, remove the pRDMs, re-add the LUNs as vRDMs, and Storage vMotion the LUNs. When you Storage vMotion a vRDM, it is converted to a VMDK.
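For what it's worth, the re-add step can also be done from the ESXi shell. A minimal sketch, assuming a hypothetical naa identifier and datastore path (vmkfstools -r creates a virtual compatibility mode mapping file; -z would create a physical one):

     # vmkfstools -r /vmfs/devices/disks/naa.60060160abcd1234 /vmfs/volumes/datastore1/vmname/vmname_1.vmdk

Then attach vmname_1.vmdk to the VM as an existing disk and Storage vMotion it as described above.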

13 Posts

September 18th, 2012 13:00

There are multiple ways to catch that fly:

1) Add the new RDMs to the VM and use a volume manager tool (LVM for Linux / Disk Management for Windows) to mirror the LUNs (a minimal LVM sketch follows below).

     - Pros:

               Adding disks and cloning data is done online; you only have downtime when removing the old RDMs.

     - Cons:

               Performance impact during mirroring.

               If using Windows, this means using dynamic disks.
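For the Linux flavor of option 1, a minimal LVM sketch. All names here are hypothetical: assume the data lives in logical volume lv_data in volume group vg_sql, the old RDM is /dev/sdb, and the new RDM appears as /dev/sdc:

     # pvcreate /dev/sdc
     # vgextend vg_sql /dev/sdc
     # lvconvert -m1 vg_sql/lv_data /dev/sdc
     # lvs -a -o name,copy_percent vg_sql
     # lvconvert -m0 vg_sql/lv_data /dev/sdb
     # vgreduce vg_sql /dev/sdb

The first lvconvert adds a mirror leg on the new LUN; watch copy_percent reach 100 with lvs before running the second lvconvert, which drops the leg on the old LUN.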

2) Use the vSphere command line: vmkfstools.

     - Pros:

               - Works like a charm for me every time    : )

               - You have a rollback in case of any issue (the source RDM is not modified).

     - Cons: the VM must be shut down for this to take place. Downtime depends on the size of the RDMs.

          See the KB for details: http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=3443266

          It refers to cloning a VMDK to an RDM, but it works just as well from RDM to RDM and RDM to VMDK (you need to tweak the parameters accordingly; see the sketch below).

          Use the mapping file of the source RDM as the source file in the following command.

          For physical compatibility mode, run:

         # vmkfstools -i srcfile -d rdmp:/vmfs/devices/disks/identifier /vmfs/volumes/datastore/vmdir/vmname.vmdk

     (see the KB for detailed examples regarding virtual RDMs and physical RDMs).
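     For example, an RDM-to-RDM clone might look like this (a sketch only; the source mapping file name and the target naa identifier are hypothetical, and the KB remains the authoritative reference):

     # vmkfstools -i /vmfs/volumes/datastore1/vmname/vmname_1-rdmp.vmdk -d rdmp:/vmfs/devices/disks/naa.new_target_lun /vmfs/volumes/datastore1/vmname/vmname_1_new.vmdk

     For RDM to VMDK, replace the -d option with a plain disk format (e.g. -d thin) and point the destination at a datastore path.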

     Don't forget to modify your VM configuration to refer to the new RDMs after cloning.

     As a final note, don't forget that RDMs today only make sense in the context of clustering and leveraging SAN-based features (like SAN-based snapshots to protect Oracle hosts, for example)...

     Maybe a good opportunity to review your initial decisions about RDMs   ; )

Eric.

September 18th, 2012 23:00

Just to expand a bit on the feedback so far.

Another reason for using RDMs might be to overcome the 2TB – 512 byte limit of VMDK files.  Therefore, converting from pRDM to virtual disk may not be possible.  So in general, I thought I'd talk about the process.

1) If you don't need it to be an RDM (and the client wants to use this opportunity to convert to VMDK), then you can use Cold Migration or Storage vMotion (discussed in detail in point 2 below); however:

a) A single file within the datastore can only be up to 2TB – 512 bytes, even with VMFS-5; therefore, if they are using an RDM because they need to support a volume larger than 2TB, then they will need to continue to use RDMs.

b) Keep in mind, it is also dependent upon the block size of the VMFS datastore you are migrating to (which is why I emphasized "up to").  You didn't mention the version of ESX/ESXi and the version of VMFS, so I'll consider all options (a quick way to check is sketched after the breakdown below):

VMFS-3 (or VMFS-3 converted to VMFS-5)
8MB block – largest file 2TB (-512 bytes)
4MB block – largest file 1TB (-512 bytes)
2MB block – largest file 512GB (-512 bytes)
1MB block – largest file 256GB (-512 bytes)

VMFS-5 (*newly created*)
1MB only - you no longer have to choose a block size; however, a single file (VMDK) within a datastore can still only be 2TB – 512 bytes.  A single datastore extent can be ~60TB.
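If you're not sure what the target datastore is running, you can check from the ESXi shell. A sketch (the datastore name is hypothetical); vmkfstools -Ph prints the VMFS version and the file block size for a mounted volume:

     # vmkfstools -Ph /vmfs/volumes/datastore1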

2) Conversion from RDM to VMDK:

a) ESX/ESXi 4.x: conversion is only an option via Storage vMotion with Virtual Compatibility Mode RDMs, not pRDMs (which only moves the mapping file)

b) ESXi 5.x: allows migration of data and conversion to VMDK for Virtual Compatibility Mode RDMs (via Storage vMotion), and now supports conversion of pRDMs as well, but only as a Cold Migration


Also, I figured I should point out that when performing the conversion in the Migration wizard, don't select "Same format as source" or else you will have simply moved the mapping file to another VMFS datastore (and will still have an RDM when complete).  You would select Thin Provision or Thick Provision (ESXi 5.0 further breaks out Thick into Eager Zeroed and Lazy Zeroed).

3) Handling the migration of guest image

a) If you have chosen to keep the RDM intact, then you will want to remove it from the image first, as you may otherwise convert it to a VMDK file unnecessarily (as described in the step above).

b) If you have chosen to convert to VMDK, then review, as noted above, whether your scenario requires a cold migration (powered down) or is eligible for Storage vMotion.

Keep in mind that if SCSI Bus Sharing is enabled, then you have to power down the image.  This is normally enabled in a clustered environment (i.e. MSCS), which implies an RDM requirement, so you wouldn't necessarily be converting it, but I mention it as it needs to be considered.
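One way to check is to look at the VM's .vmx file from the ESXi shell (a sketch; the path is hypothetical).  Anything other than "none" for a sharedBus entry means the bus is shared:

     # grep -i sharedBus /vmfs/volumes/datastore1/vmname/vmname.vmx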

4) Migration options: pRDM -> pRDM

a) SAN Copy (not eligible in this specific scenario, as Dynamox already noted, but since it was mentioned I thought I'd provide the steps anyway; a lookup sketch for the SCSI ID step follows after this list)

- Power down image

- Note the SCSI ID for the RDM

- Remove RDM from image and unpresent from ESX server(s)

- Cold migrate the VM

- SAN Copy the LUN

- Present new LUN to ESX server(s)

- Add LUN to image as RDM using same SCSI ID as noted in step above

- Power on image

b) vmkfstools (refer to Eric's comments and the mentioned KB article)

c) Host based tools (already discussed above)
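For the "note the SCSI ID" step in (a), a sketch of confirming which device backs the RDM before removing it (the mapping file path is hypothetical).  vmkfstools -q reports the vml identifier the mapping file points to, and the listing in /vmfs/devices/disks shows which naa device that vml ID maps to:

     # vmkfstools -q /vmfs/volumes/datastore1/vmname/vmname_1-rdmp.vmdk
     # ls -l /vmfs/devices/disks/ | grep vml

The virtual SCSI ID itself (e.g. scsi1:0) is visible in the VM's settings or its .vmx file.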

Finally, I wanted to provide two relevant articles that discuss this also:

http://blogs.vmware.com/vsphere/2012/02/migrating-rdms-and-a-question-for-rdm-users.html

Migrating virtual machines with Raw Device Mappings (RDMs)
http://kb.vmware.com/kb/1005241
