
March 20th, 2013 10:00

Getting the most out of the CLARiiON CX4-120 running the latest FLARE 30 with Unisphere

Hi

This CX4 is fully populated with 120 disks, but 30 of the disks are not enabled yet (new drives):

10 SATAII drives (1 TB and 2 TB) spread around the six different DAEs, and 15 FC drives (DAE 0-7).

RAID5 groups have been made from every DAE (hot spare in the last slot), and several LUNs are presented to a large VMware solution: ten blades running ESXi 5.1 hosts with HA/DRS (Enterprise Plus) enabled, and disk I/O control is in use on the LUNs. All VMs are handled by automatic HA/DRS settings, and the vmdks are 90% thin provisioned.

"The Plan"

Instead of adding the new disks to the existing RAID5 groups and extending the LUNs, or making new RAID groups, I want to reconfigure all DAEs and transform the RAID5 groups into RAID6 (dual parity). I will make a big disk pool (no FAST Cache license) from the 25 new disks and re-use five hot spares. There will be 3 hot spares for the whole CX4.
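For reference, here is a quick back-of-the-envelope sketch (in Python) of what the RAID5-to-RAID6 conversion costs per group. The 14-disk group with 1 TB drives is just an assumed example based on the 15-slot DAE layout above (last slot a hot spare); the rest is plain arithmetic:

    # Usable-capacity cost of converting one group from RAID5 to RAID6.
    # RAID5 spends one drive's worth of capacity on parity, RAID6 spends two.

    def usable_tb(disks, parity_disks, disk_tb):
        """Usable capacity of one RAID group, ignoring formatting overhead."""
        return (disks - parity_disks) * disk_tb

    disks, disk_tb = 14, 1.0           # one DAE: 15 slots, last slot = hot spare
    r5 = usable_tb(disks, 1, disk_tb)  # RAID5: single parity
    r6 = usable_tb(disks, 2, disk_tb)  # RAID6: double parity
    print(f"RAID5: {r5:.0f} TB, RAID6: {r6:.0f} TB, "
          f"second parity drive costs {r5 - r6:.0f} TB per group")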

I will then make thin provisioned LUNs (SP cache enabled) from that pool and let the pool handle hot and not-so-hot disk I/O. As more and more VMs are storage-migrated onto this pool, the disks in the RAID5 groups will be unbound. When a minimum of 4 old 1 TB SATAII disks are free, I will replace them with 4 new SSDs and add them to a new big pool with a mix of 2 TB SATAII, 400 GB FC and 200 GB SSD drives (RAID1/0).

I will replace the old 1 TB SATAII drives with new, faster disks as needed for new disk pools. I hope to end up with three big disk pools of 30 disks each, with the rest used for small RAID1/0 groups and hot spares for the different drive types.

Since the FAST license is very expensive, I will manually tier the LUNs and control VM disk I/O from vCenter (Storage DRS).

I think this will be a good config to get the most out of a fully utilized CX4-120.

Please provide me with comments and tips


March 21st, 2013 15:00

I recommend reviewing the Best Practices guide for FLARE 30 (attached) and looking at the section on Pools. One big pool may not be the best solution for everything. Pools have overhead that makes them a bit slower than regular RAID groups; I'd look at using metaLUNs if you want the best performance. Also, with pools, if you use R6, for best performance you should add the disks to the pool in groups of 8 (6+2) so that all the internal parts of the pool have the same optimal configuration.
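To make the 6+2 rule concrete, here's a small sketch; the only number taken from the guide is the private group size of 8, the expansion sizes are made-up examples:

    # Check planned disk additions against the 6+2 private-group rule: RAID6
    # pool expansions should be multiples of 8 disks so every internal
    # private RAID group keeps the same optimal 6+2 shape.

    GROUP = 8  # 6 data + 2 parity per private RAID group

    def check_expansion(disk_count):
        if disk_count % GROUP == 0:
            return f"{disk_count} disks -> {disk_count // GROUP} full 6+2 groups: OK"
        below = (disk_count // GROUP) * GROUP
        return (f"{disk_count} disks leaves {disk_count % GROUP} straggler(s); "
                f"use {below} or {below + GROUP} disks instead")

    for n in (16, 24, 25):  # 25 is the pool size proposed above
        print(check_expansion(n))

Note that the 25-disk pool proposed above would leave one disk over under this rule; 24 or 32 keeps all the private groups at 6+2.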

One last point: when you create a pool LUN, it gets assigned an "Allocated" SP owner. The Default SP Owner is also set to this, and the Current SP Owner is set to the Default Owner. If you need to manually balance the load between SPA and SPB, you must use LUN Migration to move the LUN from SPA to SPB (for example). It is a performance issue if you just change the Default SP Owner, as you could with RAID group LUNs (where there is no performance impact). See Support Solution emc311319, "Setting the Pool LUN default and allocation ownership correctly".
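As a toy illustration of why this matters (the LUN names and sizes below are made up, and the actual moves would be done with LUN Migration per emc311319), balancing SPA/SPB for pool LUNs becomes a placement problem rather than a simple flip of the Default Owner:

    # Toy illustration: pool LUNs are pinned to their Allocated SP owner, so
    # evening out SPA/SPB means choosing LUNs to move via LUN Migration.
    # LUN names and sizes below are hypothetical.

    luns = {"lun_10": ("SPA", 500), "lun_11": ("SPA", 800),
            "lun_12": ("SPA", 300), "lun_20": ("SPB", 400)}  # (owner, GB)

    def gap():
        """Capacity owned by SPA minus capacity owned by SPB."""
        return (sum(gb for sp, gb in luns.values() if sp == "SPA")
                - sum(gb for sp, gb in luns.values() if sp == "SPB"))

    moves = []
    while True:
        g = gap()
        heavy, light = ("SPA", "SPB") if g > 0 else ("SPB", "SPA")
        # moving a LUN of size gb changes the gap by 2*gb, so only LUNs
        # with gb < |gap| actually shrink the imbalance
        viable = [(n, gb) for n, (sp, gb) in luns.items()
                  if sp == heavy and gb < abs(g)]
        if not viable:
            break
        name, gb = min(viable, key=lambda c: abs(abs(g) - 2 * c[1]))
        luns[name] = (light, gb)
        moves.append(f"LUN Migration: {name} {heavy} -> {light} ({gb} GB)")

    print(moves, "| remaining gap:", gap(), "GB")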

glen

3 Attachments

March 22nd, 2013 01:00

Thx for the reply Glen

We have looked at metaLUNs for the future re-config, and we use them today. But we are running on old drives using RAID5, we need more dynamic capacity management, and we must move thin provisioning off the VMkernel. We have several applications that do not "support" vSphere thin provisioning (several clustered VMs use RDMs).

I have read the following EMC docs:

  • h5773-clariion-best-practices-performance-availability-wp.pdf (pages 51-52, When to use RAID6)

"RAID6 offers increased protection against media failures and simultaneous doble drive failures in a parity RAIF Group. It has similar performance to RAID5, but requiers additional storage for the additional parity calculated. This additional storage is equivalent to adding an additional drive that is not available for data storage, to the RAID group.

We STRONGLY recommend using RAID 6 with high-capacity SATA drives. High capacity is 1 TB or greater in capacity. In particular, when high-capacity SATA drives are used in VP pools, they should be configured in RAID 6."

"The optimal RAID 6 group are 10 drives and 12 drives for best compromise of user capacity over capacity used for parity and performance" So your comment on one big pool was good. I think I will split it into two Pools (10+2) Medium sized groups perform well for both sequential and random workloads.

Thx for the "Setting the Pool LUN default.." pointer. That one could be a problem for this re-config!

  • CX4_Planning_Your_Basic_Storage-System_Configuration_Master_1423476.pdf (pages 53-57 + Table 17)

We "get more" out of the fully utilized old CX4 by using RAID6 ( 67% user data- 33% parity data ) The need for higher availability is high. Several SATA drive have reported bad sectores and been replaced..

Given the need to move thin provisioning off the VMkernel, and the need to control cache per LUN with more spindle I/O behind it, RAID6 pools are the recommended route.

Do you know of any solutions that have problems running RAID6 Pools?

During my VNX training we were told to use RAID5, but the trainer recommended that I use RAID6 in thin provisioned pools.

Customers with FAST Cache enabled and a mix of ATA, SATA, FC and SSD will benefit the most from big RAID6 pools. In a large enterprise this could be a problem, I can see that. But this solution is for a CX4-120.

Regards

USN


March 22nd, 2013 02:00

USN,

RAID6 will have a significant performance penalty for write I/Os (a write penalty of 6 instead of 4), but if you are using large drives, RAID6 will protect you from 2 drive failures, which RAID5 can't. It's up to you to decide what'll work for you.
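A back-of-the-envelope way to see what that penalty means for back-end load; the 2000 host IOPS and 70/30 read/write mix are assumed numbers, only the penalties of 4 and 6 come from the point above:

    # Back-end disk IOPS needed for the same host workload on RAID5 vs
    # RAID6: each host write costs N back-end IOs (N = RAID write penalty).

    def backend_iops(host_iops, read_fraction, penalty):
        reads = host_iops * read_fraction
        writes = host_iops * (1 - read_fraction)
        return reads + writes * penalty

    host_iops, read_fraction = 2000, 0.7  # assumed workload mix
    print(f"RAID5 backend IOPS: {backend_iops(host_iops, read_fraction, 4):.0f}")  # 3800
    print(f"RAID6 backend IOPS: {backend_iops(host_iops, read_fraction, 6):.0f}")  # 5000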

For example: we use RAID5 on our 1 TB drives, since it's mostly backup-to-disk data that resides there, as well as applications that aren't performance hungry. If a customer wants more data protection, MirrorView is an option. For the larger drives RAID6 is a must. I recently suffered a double drive failure... well, not exactly at the same time (1 week between them), but it was in the same RAID group, so I was lucky there. But if you have a double failure (and with large arrays this is a real possibility) you'd want your important data protected as best you can.

Have you seen this discussion: https://community.emc.com/thread/148025?start=0&tstart=0


March 22nd, 2013 08:00

One other point to remember: the CX4-120 has a single back-end bus. It is possible to overload that bus (about 320 MB/s of bandwidth) with EFDs, since all the disks and all the EFDs sit on the same bus.
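A quick headroom check against that single bus; the ~320 MB/s figure is from the point above, while the per-drive throughput is only an illustrative assumption:

    # Sum the sustained bandwidth of drives sharing the single back-end
    # bus and compare it to roughly 320 MB/s of usable bus bandwidth.

    BUS_MBPS = 320

    def bus_load(groups):
        """groups = [(drive_count, sustained MB/s per drive), ...]"""
        return sum(count * mbps for count, mbps in groups)

    # e.g. 4 EFDs pushing ~90 MB/s each under large-block sequential I/O
    load = bus_load([(4, 90)])
    print(f"{load} MB/s offered vs {BUS_MBPS} MB/s available -> "
          f"{'bus saturated' if load > BUS_MBPS else 'within limits'}")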

With SATA disks I do agree that for the best level of protection R6 is a must, especially with the older disks. Remember that over time the rate of failure on hard disks increases, just like all the light bulbs burning out around the same time if they were all originally installed at the same time.

glen
