June 24th, 2010 09:00
moving Arrays and AIX servers on SAN Switches
All:
We are moving from Cisco to Brocade SAN switches. Both the arrays and the hosts will move, all dual-attached. The servers are AIX, RHEL, Windows and VMware.
Here's the question(s):
1. When I move the AIX servers, do I have to do an exportvg/importvg sequence like I do on HP-UX?
o i.e., moving any switch port connection in the HP-UX path from HBA to array will cause the !@#$% path to change in HP-UX: the device file name changes, HP-UX LVM won't recognize the device, and communication breaks down.
o I thought that HP-UX was the only OS that did this, but I've come to see that AIX may have the same limitation. Does it?
o I have seen forum postings from Allen Ward that indicate that this is so.
2. Will I have problems with anything else?
o RHEL
o VMware - I don't think so. I think it rescans the volumes, identifies them, and I'm okay.
o Windows, (which I don't understand and therefore hate because I started out as a "unix guy"), doesn't have this problem. I know that.
3. Can I do these migrations live, or will I have to reboot?
Stuart
dynamox
June 24th, 2010 09:00
AIX has the same problem as HP-UX, so prepare for an exportvg/importvg sequence.
RedHat - I've had mixed results doing things on the fly; I always bring them down.
VMware - no problem; ditto for Windows.
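For reference, a rough sketch of the exportvg/importvg sequence being referred to (the VG and hdisk names are made-up examples, and exact steps vary by AIX level, so treat this as an outline, not a procedure):

```shell
# Outline only -- example names; verify every step on your own system.
umount /testfs              # unmount any filesystems in the volume group
varyoffvg testvg            # deactivate the volume group
exportvg testvg             # remove the VG definition from the ODM
rmdev -dl hdisk3            # delete the old disk definition
# ... move the cables / re-zone on the new switch ...
cfgmgr                      # rediscover the disks on the new paths
importvg -y testvg hdisk5   # re-import the VG from the rediscovered disk
varyonvg testvg             # reactivate the VG
mount /testfs               # and remount
```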
StuartA461
June 24th, 2010 10:00
Dynamox:
What about "fast_fail" and "dynamic tracking" in AIX?
Stuart
dynamox
June 24th, 2010 10:00
My AIX admin does not have those enabled, so I am not sure whether they would eliminate some of the steps I mentioned.
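For anyone checking: fast_fail and dynamic tracking are attributes of the fscsi protocol device, so they can be inspected and set like this (fscsi0 is just an example instance):

```shell
# Show the current error-recovery and dynamic-tracking settings:
lsattr -El fscsi0 -a fc_err_recov -a dyn_trk
# Enable fast_fail and dynamic tracking; -P defers the change
# until the device is next reconfigured (or the host rebooted):
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyn_trk=yes -P
```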
emc_troy
June 24th, 2010 20:00
If you have PowerPath you can do it online, for example:
VG: testvg
Pseudo PV: hdiskpower6
PV (path to fabric A): hdisk3 - alive
PV (path to fabric B): hdisk4 - alive
move HBA1 & FA1 on fabric A to the new switch
# emc_cfgmgr
# powermt config
VG: testvg
Pseudo PV: hdiskpower6
PV (path to fabric A): hdisk3 - dead
PV (path to fabric A): hdisk5 - alive
PV (path to fabric B): hdisk4 - alive
Cleanup:
# powermt remove dev=hdisk3
# powermt save
# rmdev -dl hdisk3 (the -dl options remove the device and delete its definition; verify on your system)
Repeat the same for fabric B.
Please test it first if you can, it's been a while since I've done it.
The same applies to HP-UX, but there you have to make some changes to the VG, because there are no pseudo devices,
and you will get the phenomenon where the VG is configured on PVs that no longer exist but still works: PowerPath rotates the I/O to the new device files, but this can confuse system admins who are not aware of how PowerPath works.
SKT2
June 25th, 2010 06:00
On the HP-UX side,
do `vgextend vgname /dev/dsk/newctd` and `vgreduce vgname /dev/dsk/olddeadctd` before you run the powermt cleanup of the dead paths.
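Putting emc_troy's and SKT2's HP-UX notes together, the per-fabric sequence would look roughly like this (the ctd names are made-up examples; test first):

```shell
# After moving the fabric-A connections to the new switch:
ioscan -fnC disk                  # rediscover the disks on the new path
insf -e                           # create device files for the new paths
vgextend vgname /dev/dsk/c5t0d1   # add the new path's PV to the VG
vgreduce vgname /dev/dsk/c2t0d1   # drop the old, now-dead PV
powermt config                    # let PowerPath pick up the change
powermt save                      # persist the PowerPath configuration
```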
dynamox
June 28th, 2010 20:00
JTC, does it work regardless of whether the host moves to a different port on the switch or the storage array FA moves to a different port?
Thanks
RRR
June 29th, 2010 03:00
Dynamox tracking? I'd like that!
ajbarth
August 3rd, 2010 12:00
If the host moves to a different port on the same switch, there should be no issues. If it moves to a new switch, then the child devices based off the fcs adapter are affected: they sit under an fscsi device, and the fscsi device has the switch domain_id as part of its scsi_id, as seen with:
# lsattr -El fscsiX
...
scsi_id 0x870005 Adapter SCSI ID
...
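To make the scsi_id point concrete: the value is the 24-bit Fibre Channel N_Port ID, and its top byte is the switch domain, which is why moving to a switch with a different domain ID changes it. A quick decode (plain FC addressing arithmetic, nothing AIX-specific):

```python
# Decode a Fibre Channel N_Port ID (as shown in the fscsi scsi_id field):
# the 24 bits are domain (high byte), area (middle byte), port (low byte).
scsi_id = 0x870005  # value from the lsattr output above

domain = (scsi_id >> 16) & 0xFF  # switch domain ID
area = (scsi_id >> 8) & 0xFF     # area within the switch
port = scsi_id & 0xFF            # port within the area

print(f"domain=0x{domain:02x} area=0x{area:02x} port=0x{port:02x}")
# -> domain=0x87 area=0x00 port=0x05
```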