

6 Posts


November 28th, 2011 16:00

No targets available for this volume

Hi all, I need a little help with this message.

I want to migrate a Windows 2003 cluster from XP to EVA storage. Here is what I did:

- I attach the volumes to OM

- Restart the server

- I start the console root, open Migrator, and I can see the disk with the label "ready"

The problem is, when I right-click and select "Migrate Volume", the program shows the message

No targets available for this volume

Any idea why I get this message?

best regards

GS

89 Posts

November 29th, 2011 08:00

Hello,

There could be any number of reasons.  Keep in mind some of the following restrictions:

  • Cannot migrate boot, system, or active partitions (must be marked inactive using diskpart)
  • Volume must have a drive letter or mount point; in clusters, multiple mount points are not supported
  • Volume must be shared by both nodes
  • Clusters require a single volume on a single partition
  • No Veritas dynamic disks
  • Target volume must be larger than the source in MSCS
  • GPT disks have some restrictions depending on your version of OM
  • Do not add target volumes to cluster resources
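For the first restriction, a partition can be marked inactive from diskpart roughly like this (the disk and partition numbers below are examples only; check yours with the list commands first):

```
diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> list partition
DISKPART> select partition 1
DISKPART> inactive
DISKPART> exit
```

Note that the active flag only exists on MBR disks, and clearing it on the wrong partition can make a system unbootable, so double-check the selection before running inactive.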

If none of these apply, check the Open Migrator Release Notes for your version for more possibilities.

Hope this helps

6 Posts

November 30th, 2011 16:00

Hi all

After a few days of looking for answers to this problem, I found the answer!

Open Migrator sent me the message

"No targets available for this volume"

because the target disk didn't have enough space. I added a GB and that's all; it worked!
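This fix matches one of the restrictions from earlier in the thread: in MSCS the target must be strictly larger than the source, and otherwise at least as large. A tiny pre-flight check along these lines (a hypothetical sketch, not part of OM itself) makes the rule concrete:

```python
def target_is_usable(source_bytes, target_bytes, require_strictly_larger=False):
    """Return True if a candidate target volume is big enough for the source.

    Open Migrator reports "No targets available for this volume" when no
    attached volume qualifies; in an MSCS cluster the target must be
    strictly larger than the source, so pass require_strictly_larger=True.
    """
    if require_strictly_larger:
        return target_bytes > source_bytes
    return target_bytes >= source_bytes


GB = 1024 ** 3

# A 5 GB source with a 6 GB target qualifies either way.
print(target_is_usable(5 * GB, 6 * GB))                         # True

# An equal-sized target fails the stricter MSCS rule.
print(target_is_usable(5 * GB, 5 * GB, require_strictly_larger=True))  # False
```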

Best regards!

GS

March 20th, 2014 13:00

I'm running into the same issue...

What am I missing here?

I have the server/client installed on both the "target" and the "source". I've attached the driver to the volume on each server that I want to migrate. When I click Migrate, it tells me "no targets available for this volume".

It's an RDM presented to a VM; that shouldn't be an issue.

It's NTFS.

It's Server 2008 R2.

It's only 5 GB.

The volume is inactive in diskpart.

What does "Volume must be shared by both nodes" mean?

1K Posts

March 21st, 2014 08:00

Another no-no is active partitions. This applies to clustered and non-clustered servers. None of the target LUNs can contain an active partition.

89 Posts

March 21st, 2014 08:00

Hello,


OM migrations involving cluster disks require that the target be attached to every node in the cluster, but not made a cluster disk.  OM will swap the source and target disks in the cluster.

Some additional notes regarding Microsoft clusters:

  • OM must be installed on one node, preferably the primary node of the volumes to be migrated
  • clustered volumes must be assigned a single drive letter or mount point
  • multiple mount points are not supported
  • only a single partition and volume can be migrated
  • target volumes must be available to all nodes in the cluster and cannot be a cluster resource
  • Note: Does not support migrating Veritas dynamic disks within an MSCS environment
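One way to sanity-check the last two points from the command line: the target disks should be visible in Disk Management on every node, but should not appear as resources in the legacy cluster.exe listing (this is a generic example, not OM-specific tooling):

```
C:\> cluster res
```

Anything shown there as a Physical Disk resource is owned by the cluster and would not be a valid OM target.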

Regarding your RDM, please make sure you're using OM within the support requirements for a virtual environment.  You can find details here: https://support.emc.com/kb/53591

March 24th, 2014 11:00

Is there a service that can be restarted so the machine doesn't have to be rebooted when attaching the filter driver?

89 Posts

March 24th, 2014 11:00

I can see there being a bit of anger if it was pulled away before detaching the filter driver. 

March 24th, 2014 11:00

It seems like on every Windows machine I've tested it on, I have. Not a big deal.

I did notice after the migration that pulling the OLD volume away seems to make the OS (Windows in this case) angry without rebooting first. Is that accurate, or did I miss a step?

I completed the migration and then pulled it away. It looks like it cut everything over from the OLD to the NEW: drive letter, data, etc. Once I pulled the volume away, it was removed from My Computer. The funny thing was, under Server Manager and the OM GUI it still showed up, and I could browse to it from Server Manager.

89 Posts

March 24th, 2014 11:00

Unfortunately, no.

You won't always have to reboot for the filter driver; it's only needed when an application (or some other process) has an open handle on the volume, preventing the filter driver install.  In those cases the OM filter driver install is suspended until the next reboot.
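If a reboot is being forced by an open handle, a third-party tool such as Sysinternals handle.exe can show which process is holding the volume open (the drive letter here is an example only):

```
C:\> handle.exe e:\
```

Closing or stopping the offending process before attaching the filter driver may avoid the reboot.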

March 24th, 2014 12:00

Makes sense... Could I use PPME to migrate between EMC and other storage vendors, assuming they're all block based? I have some test stuff I want to get off my EMC, and some now-Prod stuff to go on my EMC.

March 24th, 2014 12:00

I'm assuming it's best practice to remove the filter driver, then reboot? Trying to avoid that 2nd reboot... it's tough getting the first one scheduled, let alone the 2nd.

89 Posts

March 24th, 2014 12:00

Understandable.  Unfortunately, it's the nature of volume-based migrations.  You may want to consider a block-based migration tool like PowerPath Migration Enabler (PPME) if it meets your needs.

89 Posts

March 24th, 2014 13:00

Be aware that PP 5.7 SP2 for Windows supports non-disruptive cluster migrations and some NetApp arrays.  See the Release Notes for specifics. 

89 Posts

March 24th, 2014 13:00

In order to use PPME, the LUNs must be managed by PowerPath.  If the third-party array isn't qualified to work with PP, then it's not an option.

March 24th, 2014 13:00

In this case, I could use NetApp's DSM to manage the MPIO. I'm assuming PPME is a licensed product; I'll go through my slew of EMC licenses and see if it is.

Worst case, I have to reboot twice. Thanks for the info though.
