2 Intern

 • 

172 Posts

19080

September 13th, 2014 07:00

Reset PS5000e

We have a single production PS6000 set up for our vSphere 5.5 environment with 4 hosts connected to it through private Dell 5424 switches. We also have a single PS5000e with 2 vsphere hosts that was our original vm environment, but we've only used it for testing since the warranty expired and we got the 6000. They are on different 5424 switches and completely separated from the production environment.

The 5000 is raid 6. I want to connect it to the production environment and set it to raid 10 to use for scratch volumes for our statistical processing windows terminal server VMs, so I need to know a couple of things:

Reformatting the 5000 - I know I can't change the raid policy from 6 to 10 directly in group manager. Is it possible to just reformat the disks so I can change to 10? I don't care about any data on the 5000. Or do I have to reset it through the serial cable like I'm setting up from scratch?

When I connect the 5000 to the production switches, will the 2 EQs detect each other? I'll probably want to join the arrays at some point, but right now I want them separate to keep it simple, and because the 5000 is out of warranty I don't want production data on it.

Also, does anyone see any problems with my plan?


Thanks.

7 Technologist

 • 

729 Posts

September 15th, 2014 08:00

If the current RAID policy does not support an online conversion, and the group has other members with sufficient free space, a member can still be converted to any other RAID policy: remove the member from the group (which resets the array), add it back to the group, and then select the new RAID policy.

The array does not support an "online" conversion from RAID 6 to RAID 10 because the new RAID policy doesn't provide the same or more usable space than the current policy. To go from R6 to R10, you will need to reset the array and reconfigure the RAID policy. Resetting the array deletes all the data and settings on it, so plan accordingly: back up the data, and back up the array settings if you plan on reusing them or want to modify the array configuration after setup (array CLI command 'save-config'; see the CLI guide).
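As a rough sketch, the serial-console sequence would look something like the following. Exact prompts, command arguments, and the confirmation wording vary by firmware version, so treat this as an outline rather than a verbatim transcript; the group prompt `grp1>` and file name are placeholders:

```
grp1> save-config my-array-config    # optional: capture current settings for reuse
grp1> reset                          # factory-resets this member; destroys ALL data
  (the array then asks you to type a confirmation string before it proceeds)
```

After the reset completes, the array behaves as if new, and running setup again lets you choose the RAID policy from scratch.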

Adding the array to the production switches should not be a problem, provided you have enough ports available (for the number of eth interfaces you plan to use on the new array) and enough bandwidth between the switches. If you are using a stacking cable or an ISL, make sure it has enough bandwidth for every array eth interface you are configuring; e.g., the PS6000 (4 eth interfaces) plus the PS5000 (3 eth interfaces) adds up to 7 Gb of potential iSCSI traffic.

Once it is wired in and the ISL is sized to handle the additional bandwidth, run setup on the PS5000, and when asked whether this array will be part of an existing group, answer no. I would suggest that you don't join the array to the production group, since this product is now EOL (end of life). If you add it to the existing group, be aware that it will be unsupported after your present contract expires, and at some point the firmware on the 5000 will be outdated, leaving you unable to update the firmware on the 6000 due to version mismatch incompatibility issues.
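For reference, the relevant part of the setup dialog looks roughly like this. Prompt wording differs between firmware versions, and the group name and IP address shown are placeholders you would replace with your own:

```
Do you want this array to be a member of an existing group? (yes/no) [no]: no
Group name []: scratchgrp        # a new group, kept separate from production
Group IP address []: 10.x.x.x    # placeholder; use an address on your iSCSI SAN
```

Answering no here creates a standalone one-member group, which is what keeps the out-of-warranty 5000 isolated from production data.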

-joe

2 Intern

 • 

172 Posts

September 19th, 2014 06:00

Thanks, guys. I reset the array yesterday, so I'm going to connect it to the production switches today. I'll create a new group for it. The only thing I'm missing by not putting in the same group is the ability to move volumes between the 2 members, right? Since it is just going to hold scratch volumes, it's not a big deal to start from scratch (Har!) and point new jobs to the new scratch volumes.

Thanks again for the info.
