April 10th, 2018 09:00
PS6000 and PS4100 with Plenty of Issues
I inherited a PS6000 and a PS4100 connected together, with two "Control Module 7" controllers and two "Control Module 12" controllers. The network had a UPS whose batteries had been bad for years, allowing power interruptions to bring the three servers containing a dozen or so VMs crashing down. A few of the VMs are damaged beyond repair, and don't ask about backups! The VM containing EqualLogic Virtual Storage Manager version 3.5.2.1 boots, but the web interface comes up with "Not connected to the server". The PS4100 has a red light flashing and I see one dead drive. The PS6000 has two controllers with all four Ethernet ports on each plugged into the switch, so all eight ports should be lit, but one controller's LEDs are dark.
Since I have no access to this mess via the web GUI, I tried the serial port. All four serial ports are alive, and the console quickly pops up a message about the array being "critical", but I haven't found the CLI commands that will really tell me what is going on. I downloaded the diagnostic data .zip file and nothing jumped out at me, but I haven't read every single file yet.
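From what I can find in the CLI help, commands along these lines look like the place to start, but I'm not sure of the exact syntax on this firmware, and <member_name> below is just a placeholder for the array's member name:

    member show                                    (lists the group members and their overall status, I believe)
    member select <member_name> show controllers   (controller status for one member, which should explain the dark LEDs)
    member select <member_name> show disks         (disk health, which should flag the failed drive in the PS4100)
    diag                                           (gathers the same diagnostic files I pulled as the .zip)

If anyone knows better commands to get a plain-English health summary from the serial console, please share them.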
I rebuilt a new vSphere environment for the VMs and upgraded the hosts to the latest 5.5 U3 that I'm licensed for. I see no Dell EqualLogic controller info in vSphere.
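If it helps anyone diagnose this, I can run the standard esxcli commands on the 5.5 hosts to show what they actually see of the arrays, something like:

    esxcli iscsi adapter list          (shows the iSCSI adapters on the host)
    esxcli iscsi session list          (shows any active iSCSI sessions to the arrays)
    esxcli storage core device list    (shows the LUNs the host can see)

Nothing EqualLogic-specific there, just whatever the hosts report, and I can post the output if it would be useful.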
When I tried to download the software again to rebuild the Storage Manager VM, I found that my warranty period has expired. When I contacted Dell directly about renewing my contract so I could download the software, they surprisingly said, "Your EQL machine was purchased in 2011 and unfortunately it is not anymore eligible for renewal. Thank you."
I can't even pay to get access to the software I need to wrest control of this mess, which shocked me! I tore the server room apart looking for CDs but found none. The VSM appears to be running, but obviously it's broken and I don't know where to turn next. At one point in vSphere I did catch a notice about controller redundancy being lost, which I assume corresponds to the four dark LEDs on the Control Module 7.
Now that I have the three servers running VMware 5.5 U3 without any errors and the new vSphere environment is running as expected, I wonder whether rebooting the SAN boxes might restore some functionality to the Storage Manager VM. I'm nervous about shutting them off given their age, but a reboot may help, maybe?
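If rebooting is the right call, my understanding is that the clean way is from the serial console rather than pulling power, roughly like this (please correct me if the syntax is off for this firmware):

    log in as grpadmin over the serial cable
    shutdown    (I believe this cleanly halts the array before powering it off)
    restart     (or this, to reboot it without a full power-down)

I'd rather not just yank the cords on hardware this old, so confirmation of the proper procedure would be appreciated.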
By the way, the first order of business was getting the UPS fixed properly, which it is, so I have good power now. The three servers are running VMware nicely; now for the SAN boxes.
Notice the "Act" light at the bottom is lit yellow, the yellow warning on the top/right box, and that the four ports on the top/left are not lit like the ones to the right. The bottom Control Module 12s are also displaying unlit LEDs.
I can be contacted directly at dkirk@eastmanoutdoors.com, or you can reply here. THANK YOU for any support, encouragement, or advice!


