8 Posts
0
3147
November 21st, 2012 15:00
Need to remove DAE on CX4-480 flare code 4.30.x
This is the last DAE on the bus, with no drives in it; just the DAE (Bus 1, Enclosure 2), which is attached to Bus 0, Enclosure 2. I need to remove it so that we can use the DAE in another frame.
I'm concerned about the steps to remove the LCC cables. I assume the Management Console needs to be rebooted afterwards. I was hoping USM would walk me through it, but it doesn't.
Any docs/explanations would be great.
Thanks
Storagesavvy
474 Posts
0
November 21st, 2012 16:00
IF…
1.) The DAE is at the end of the loop
2.) The DAE has no installed disks
Then you can simply unplug the LCC cables from that DAE and remove it. You will need to log in to the setup page (http://spip/setup) on each SP and restart the management server in order to clear the faults.
Be sure you have NO faults on the array before you start this process.
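If you want a quick scripted version of that pre-check, here is a rough Python sketch (just the idea, not a tested procedure): it shells out to naviseccli and prints the faults report for each SP. It assumes naviseccli is in the PATH, that credentials were already stored with naviseccli -AddUserSecurity, and that your FLARE release has the faults -list command (it does on the FLARE 30-era arrays I've seen); the SP addresses are placeholders.

import subprocess

# Placeholder SP addresses - replace with your own SPA/SPB IPs or hostnames.
SP_ADDRESSES = ["spa.example.local", "spb.example.local"]

def faults_report(sp):
    # Assumes a naviseccli security file exists (naviseccli -AddUserSecurity ...),
    # so no -user/-password flags are passed here.
    out = subprocess.check_output(
        ["naviseccli", "-h", sp, "faults", "-list"],
        universal_newlines=True,
    )
    return out.strip()

for sp in SP_ADDRESSES:
    print("=== %s ===" % sp)
    print(faults_report(sp))
    # On the releases I've seen, a healthy array answers with
    # "The array is operating normally." - anything else means stop and
    # investigate before touching the LCC cables.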
Richard J Anderson
qavipnsw
8 Posts
0
November 21st, 2012 15:00
USM doesn't allow you to remove DAEs, only add them.
Joe Sullivan
Storage Administrator, Enterprise Storage
Information Services
Indiana University Health
South Campus, 1515 N Senate Ave., Indianapolis, IN 46202
317.962.5415
jsullivan4@iuhealth.org
Discover the strength at www.iuhealth.org
AnkitMehta
1.4K Posts
0
November 21st, 2012 15:00
You can download the CLARiiON Procedure Generator from Powerlink, which will give you step-by-step instructions.
Moreover, I reckon that in order to remove a blank DAE you can run the Install/Uninstall Hardware wizard from USM and that would be it!
Storagesavvy
474 Posts
0
November 21st, 2012 16:00
Technically this is possible, but I don't think the procedure is documented, and you will want to make sure EMC support is engaged.
Richard J Anderson
dynamox
9 Legend
•
20.4K Posts
2
November 21st, 2012 16:00
isn't there a procedure to remove DAEs that are not at the end of the loop, one LCC at a time?
AnkitMehta
1.4K Posts
0
November 21st, 2012 16:00
Oh sorry, my bad. I just checked CPG, and there is a procedure there; you can follow that if you like, but basically it's just what I initially thought: just remove it!
Then restart the management server to remove the ghost entries in the Navisphere/Unisphere GUI.
qavipnsw
8 Posts
0
November 21st, 2012 17:00
All done and working great!!!
Thanks for your input!
Happy T-day
Joe Sullivan
Storage Administrator, Enterprise Storage
Information Services
Indiana University Health
South Campus, 1515 N Senate Ave., Indianapolis, IN 46202
317.962.5415
jsullivan4@iuhealth.org
Discover the strength at www.iuhealth.org
Storagesavvy
474 Posts
0
November 21st, 2012 17:00
I have a customer that needed to do it on a VNX. At first support said it couldn't be done, but they later "created the procedure and tested it in the lab," so my impression is that it's not even in the internal CPG, nor is it in any public documentation. That's why I suggest getting with support if you actually need to remove a DAE in the middle of a loop.
Richard J Anderson
dynamox
9 Legend
•
20.4K Posts
0
November 21st, 2012 17:00
that procedure is not documented for customers or for EMC support?
AnkitMehta
1.4K Posts
0
November 21st, 2012 18:00
The procedure is not mentioned separately in CPG. However, it's there when you run the "Uninstall a DAE/DAE2-ATA/DAE2P/DAE3P enclosure from an active storage system" procedure in CPG.
The only thing you need to keep in mind is that when you remove the HSSDC cable from LCC A of the enclosure you want to remove, you need to plug the lower enclosure's cable into the one above it, and follow the same for LCC B.
I will post a detailed post with a diagram soon. (Sun or Monday - gtg)
dynamox
9 Legend
•
20.4K Posts
0
November 21st, 2012 18:00
yes
AnkitMehta
1.4K Posts
0
November 21st, 2012 18:00
You mean not the top DAE, but any other DAE on the array except the vault DAE?
dynamox
9 Legend
•
20.4K Posts
0
November 21st, 2012 19:00
nice, maybe you can ask glen to turn it into a document?
AnkitMehta
1.4K Posts
0
November 21st, 2012 20:00
I have access to that and I have created several Primus articles myself, but yes, I'm okay if he wants to do it!
christopher_ime
2K Posts
0
November 21st, 2012 20:00
Removing a DAE from the middle of the bus for a VNX is not possible. It has to do with the way the system automatically numbers the DAEs.
emc299019: "Can a DAE in the middle of a VNX backend bus be removed?"
[...]
The removal of a DAE in the middle position of a backend bus should only be used for the replacement of that DAE. This action will cause a data unavailability situation for the enclosure to be replaced, and any subsequent enclosures with higher numbered IDs connected on the same backend bus. The DAE cannot be removed permanently.
[...]
However, for previous generations not using the automatic numbering, the process would go as follows:
PREPARATION:
============
1) As mentioned already, there is nothing requiring that the DAE numbering be contiguous on the back-end buses (except for the VNX).
2) It goes without saying: make *absolutely* certain that every drive in the trays being removed is unbound. Look also for private LUNs. Cleanup may be messy if you leave anything orphaned. (A quick scripted check is sketched after this list.)
3) Unless you plan to re-rack during the process, I recommend verifying ahead of time that the cables reach between DAEs.
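For step 2, here is a rough Python sketch of the kind of check I mean (a sketch only: the SP address and enclosure list are placeholders, and the getdisk parsing is an assumption about how your FLARE release formats the output). It prints the State of every disk the array reports in the trays you intend to pull; you want to see only Unbound or Empty there.

import subprocess

SP = "spa.example.local"                # placeholder - either SP will do for getdisk
TARGET_ENCLOSURES = [(0, 1), (0, 2)]    # (bus, enclosure) pairs being removed

out = subprocess.check_output(
    ["naviseccli", "-h", SP, "getdisk"],
    universal_newlines=True,
)

current_disk = None
for line in out.splitlines():
    line = line.strip()
    # Disk headers look like "Bus 0 Enclosure 1  Disk 3" on the arrays I've seen.
    if line.startswith("Bus ") and "Enclosure" in line and "Disk" in line:
        parts = line.split()
        bus, enc = int(parts[1]), int(parts[3])
        current_disk = line if (bus, enc) in TARGET_ENCLOSURES else None
    elif current_disk and line.startswith("State:"):
        # Anything other than "Unbound" or "Empty" means the tray isn't ready to pull.
        print("%-30s %s" % (current_disk, line))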
EXAMPLE (removing 2 DAEs from the middle of a bus):
========
For this example, I'm going to assume you are looking to remove trays 0_1 and 0_2 from the backend bus currently hosting:
0_0, 0_1, 0_2, and 0_3
Therefore, when completed, only 0_0 and 0_3 are left behind.
SPA
====
1) If the array is running FLARE 26 or greater, it supports the lower-redirector (introduced with ALUA but technically isn't ALUA) so it will not trespass LUNs immediately as it will redirect back-end I/O via its peer.
- You don’t (necessarily) need to trespass LUNs as it will utilize a non-optimal path in the backend (hop through its peer)
2) If the array is running < FLARE 26, or the array has the (very old) single-ported SATA drives, trespass all LUNs from tray 0_3 to SPB first (so the system doesn't have to on its own)
- Verify first that the SPs aren't over 50% utilized (threshold is 60%) prior to trespassing all LUNs (a quick utilization check is sketched after these SPA steps)
3) On DAE 0_3/LCC A/port “Pri”, disconnect cable (will be cable coming from 0_2/LCC A port “Exp”)
4) On DAE 0_1/LCC A/port “Pri”, disconnect cable (will be cable coming from 0_0/LCC A port “Exp”)
5) Connect cable from DAE 0_0/LCC A/port “Exp” to DAE 0_3/LCC A/port “Pri”
** Before you continue, make sure that DAEs 0_0 and 0_3 are not faulted (0_1 and 0_2 will be), or rather that Navisphere (or Unisphere) acknowledges the redundant paths
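Since steps 1-2 (and 6-7 below) hinge on SP utilization, here is a hedged Python sketch of that check: it pulls the "Prct Busy" figure from getcontrol on each SP and compares it against the 50% guidance above. The field name is based on FLARE 30-era output, statistics logging has to be enabled for the counter to mean anything, and the addresses are placeholders; treat it as a sanity check, not gospel.

import subprocess

# Placeholder SP addresses - replace with your own.
SP_ADDRESSES = {"SPA": "spa.example.local", "SPB": "spb.example.local"}

def pct_busy(sp_ip):
    # 'getcontrol' reports a "Prct Busy" line on the releases I've seen;
    # statistics logging must be enabled for the value to be meaningful.
    out = subprocess.check_output(
        ["naviseccli", "-h", sp_ip, "getcontrol"],
        universal_newlines=True,
    )
    for line in out.splitlines():
        if line.strip().startswith("Prct Busy"):
            return float(line.split(":", 1)[1].strip())
    return None

for name, ip in SP_ADDRESSES.items():
    busy = pct_busy(ip)
    if busy is None:
        print("%s: could not read Prct Busy (is statistics logging on?)" % name)
    elif busy < 50:
        print("%s: %.1f%% busy - OK to absorb a trespass" % (name, busy))
    else:
        print("%s: %.1f%% busy - over the 50%% guidance, investigate first" % (name, busy))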
SPB
===
6) If the array is running FLARE 26 or greater, it supports the lower-redirector (introduced with ALUA but technically isn’t ALUA) so it will not trespass LUNs immediately as it will redirect back-end I/O via its peer.
- You don’t (necessarily) need to trespass LUNs as it will utilize a non-optimal path in the backend (hop through its peer)
7) If the array is running < FLARE 26, or the array has the (very old) single-ported SATA drives, trespass all LUNs from tray 0_3 to SPA first (so the system doesn't have to on its own)
- Verify first that the SPs aren't over 50% utilized (threshold is 60%) prior to trespassing all LUNs
8) On DAE 0_3/LCC B/port “Pri”, disconnect cable (will be cable coming from 0_2/LCC B port “Exp”)
9) On DAE 0_1/LCC B/port “Pri”, disconnect cable (will be cable coming from 0_0/LCC B port “Exp”)
10) Connect cable from DAE 0_0/LCC B/port “Exp” to DAE 0_3/LCC B/port “Pri”
** At this point you will likely still see 0_1 and 0_2 showing up as faulted/ghost entries. I've been able to clear those in the past by restarting the mgmt service on each SP. I've heard though of people having to wait until they are able to fully reboot each SP one at a time; however, I believe it is because something was left behind and the drives weren't fully unbound.
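One last hedged sketch, for after the management server restarts: it lists what getcrus still reports, filtered down to enclosure and LCC lines, so you can confirm the 0_1 and 0_2 ghosts are gone and the remaining LCCs look healthy. The SP address is a placeholder and the exact wording of getcrus output varies by release.

import subprocess

SP = "spa.example.local"   # placeholder - either SP will do

out = subprocess.check_output(
    ["naviseccli", "-h", SP, "getcrus"],
    universal_newlines=True,
)

for line in out.splitlines():
    # getcrus dumps enclosure, LCC, power supply and fan states; keep just the
    # enclosure headers and LCC state lines for a quick visual scan.
    if "Enclosure" in line or "LCC" in line:
        print(line.rstrip())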