November 18th, 2013 11:00

Reconfigure CX4-480 so Windows/Linux server can see all volumes as local drives

Environment:

a) Clariion CX4-480:

Flare Version: 04.30.000.5.517 

Navisphere Version: 7.30.11 (0.96) 

b) Celerra NS-480

Software ver 6.0.41-4

Over 70TB of raw drive space, currently 55TB of usable space.

The system is configured for CIFS. Backing it up is problematic: my company requires single file/folder restores fairly often, and since this is our only system of this kind, NDMP is out of the question. Backing it up over the network shares is way too slow.

I was thinking of connecting a Windows/Linux server to the Clariion and backing up the volume(s) that way.

My question:

How do I reconfigure the system so the Windows/Linux server can see all of the volumes as local drives?

I'm not very experienced with EMC, so please explain as plainly as possible.

Thank you.

9 Legend • 20.4K Posts

November 19th, 2013 06:00

The optimal configuration is for each HBA to be connected to both SPA and SPB. I do not know how many HBAs you have, but I will assume you have two single-port HBAs, so one HBA will connect to SPA and the other to SPB. To verify that the systems are logged in properly, go into Connectivity Status in Navisphere and verify that you see your HBAs' WWNs in the list. That's where you might have to play with the topology settings. I also assume that the operating system is loaded and the HBA drivers have been installed.
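The same check can be done from the CLI. A minimal sketch, assuming naviseccli is installed on a management host; the SP address, credentials and filter strings below are placeholders, not values from this thread:

import subprocess

SP_A = "10.0.0.1"   # placeholder SP A management IP
CREDS = ["-User", "admin", "-Password", "password", "-Scope", "0"]   # placeholder credentials

# "port -list -hba" lists every HBA UID the array has seen, along with the
# server name and whether each initiator is logged in and registered -- the
# same information as the Connectivity Status dialog.
result = subprocess.run(
    ["naviseccli", "-h", SP_A, *CREDS, "port", "-list", "-hba"],
    capture_output=True, text=True, check=True,
)

# Print just the lines that identify each initiator and its login state.
for line in result.stdout.splitlines():
    if any(key in line for key in ("HBA UID", "Server Name", "Logged In", "Registered")):
        print(line)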

9 Legend • 20.4K Posts

November 18th, 2013 21:00

You can't do that; the Celerra uses a proprietary file system (uxfs).

http://www.scribd.com/doc/140152646/01-07-09-Storage-Virtualization

9 Posts

November 19th, 2013 05:00

Just to clarify, I don't want to use the Celerra Control Station as the server that needs to see the volumes. I am willing to take the Control Station out of the equation entirely so that another server (or servers) can see the volumes.

We have another EMC system (a CX3-10c) set up as a SAN, where there is no Control Station but rather two active/passive servers connected to the storage through FC switches.

I also remember the EMC engineer mentioning that this system could be reconfigured as a SAN.

9 Legend • 20.4K Posts

November 19th, 2013 05:00

If your goal is to connect a Linux/Windows server to the CX4-480 and give it access to the LUNs currently presented to the Celerra data movers, then it cannot be done. I mean, you can, but you will see garbage and will most likely corrupt the uxfs file systems. If you just want to connect another server to the CX4, create new LUNs and present them to that server, I see no issues with that.
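For reference, that "create new LUNs and present them" step can also be scripted with naviseccli. A minimal sketch, assuming the new host's initiators are already registered and array credentials are stored with naviseccli -AddUserSecurity; the LUN number, RAID group, size, storage group name and host name are made-up placeholders:

import subprocess

SP_A = "10.0.0.1"   # placeholder SP A management IP

def navi(*args):
    # Thin wrapper; assumes credentials were stored with "naviseccli -AddUserSecurity".
    subprocess.run(["naviseccli", "-h", SP_A, *args], check=True)

# Bind a new 2 TB RAID 5 LUN (LUN 100) in RAID group 10, default-owned by SP A.
navi("bind", "r5", "100", "-rg", "10", "-cap", "2048", "-sq", "gb", "-sp", "a")

# Create a storage group, add the LUN, and connect the registered host to it.
navi("storagegroup", "-create", "-gname", "BackupHost_SG")
navi("storagegroup", "-addhlu", "-gname", "BackupHost_SG", "-hlu", "0", "-alu", "100")
navi("storagegroup", "-connecthost", "-host", "backuphost", "-gname", "BackupHost_SG", "-o")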

9 Posts

November 19th, 2013 05:00

I want to connect another server to the CX4, create (or rather recreate) the existing LUNs, and present them to this server.

The data currently on the LUNs is not needed, so I pretty much have a fresh shot at the system without worrying about losing anything.

How can I accomplish this?

Thank you.

9 Legend • 20.4K Posts

November 19th, 2013 06:00

Do you know what model of HBA? These are supported:

[Attached screenshot: list of supported HBA models]

If you are going to connect directly, you will need to play with the HBA settings to get them to log in to the CX4 (you might have to change the topology to "point to point" or something of that nature).

9 Posts

November 19th, 2013 06:00

I have the 41ES and the 42ES. So do I connect SPA to the new Windows server and set the topology to point-to-point? What else do I need to do? Is there anything on the Clariion that I have to do, such as disconnecting the Control Station?

Please advise.

Thank you for looking into this.

9 Legend • 20.4K Posts

November 19th, 2013 06:00

I see, so I assume that you have FC HBAs in your servers? Do you have Fibre Channel switch(es), or did you plan on connecting these servers directly?

9 Posts

November 19th, 2013 06:00

"I assume that you have FC HBAs in your servers" -> Servers? Are you asking about SPA and SPB, or about the server I want to attach to the storage? SPA and SPB both have extra ports on their HBAs. I do have an HBA in the server I want to attach, but I'm not certain the card is supported. I have a few ATTO Celerity cards; I hope they are supported.


"Do you have Fibre Channel switch(es) or did you plan on connecting these servers directly?" -> I have one switch, but I don't mind connecting them directly.


Thank you.

9 Posts

November 19th, 2013 07:00

I will need some time to get this set up, hopefully towards the end of the day. Once done, I will get back to you.

Thanks again for helping with this.

9 Posts

November 20th, 2013 13:00

Good news,

I configured a Windows 2008 server with a single HBA (2 ports: one for SPA and the second for SPB). As you predicted, I had to change the topology setting from the default point-to-point to arbitrated loop for the cards to log in to the Clariion.

I configured a storage group and added 10 LUNs and the newly registered host to it. The host sees 10 separate LUNs, so I spanned them all together at the OS level to create one large drive of ~18 TB.
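The host-side spanning step can be scripted with diskpart. A rough sketch, shown here for just two of the LUNs; the disk numbers, label and drive letter are placeholders, and convert/format are destructive, so verify "list disk" output before running anything like this:

import os
import subprocess
import tempfile

# Diskpart script: convert two data disks to dynamic, create a simple volume
# on the first and extend (span) it onto the second, then format and mount it.
SCRIPT = """
select disk 1
convert dynamic
select disk 2
convert dynamic
create volume simple disk=1
extend disk=2
format fs=ntfs label=CX4SPAN quick
assign letter=E
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(SCRIPT)
    script_path = f.name

subprocess.run(["diskpart", "/s", script_path], check=True)
os.remove(script_path)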

This is just a test system before I configure the real thing, so I have a few follow-up questions:

1) When I manually registered the two fiber ports I had to choose an "Initiator Type", and I chose RecoverPoint Appliance! The available choices are: CLARiiON Open, HP Auto Trespass, HP No Auto Trespass, SGI, Fujitsu Siemens, Compaq/Tru64, and RecoverPoint Appliance. I also chose Legacy failover mode 0 as the Failover Mode. What would be the best practice for the Initiator Type and Failover Mode?

2) It appears some LUNs are owned by SPA and some by SPB. Should I choose half the LUNs from each SP for the host to ensure redundancy, or does it even matter?

3) Is there a way to present one large LUN to the host instead of many smaller ones, to avoid spanning them at the OS level?

4) Do I need PowerPath or some other software to ensure failover works as needed?


Dynamax, thank you very much for guiding me this far.



9 Legend • 20.4K Posts

November 20th, 2013 13:00

Make sure to configure multipathing software on the host. Since this is a test system, I don't think you will be investing in PowerPath, so at least configure the native MPIO. You can find instructions in this doc:

Host Connectivity Guide for Windows
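A rough sketch of the native MPIO side of this (not taken from the guide above), assuming the Multipath I/O feature is already installed (e.g. via Server Manager on Windows 2008) and the script runs elevated; note that mpclaim reboots the server when it claims devices:

import subprocess

# Claim all currently attached multipath-capable devices for the Microsoft DSM.
# -r reboots the server automatically once the claim is registered.
subprocess.run(["mpclaim", "-r", "-i", "-a", ""], check=True)

# After the reboot, list the claimed devices; the CX4 LUNs should show up
# with vendor ID "DGC".
# subprocess.run(["mpclaim", "-s", "-d"], check=True)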

1) Select CLARiiON Open as the initiator type and failover mode 4 (ALUA).

2) In terms of performance, yes; not really for redundancy. If one SP fails, all of its LUNs will trespass to the partner SP.

3) Sure, take a look at the metaLUN functionality. If you search support.emc.com you will find whitepapers that describe how metaLUNs work and best practices for creating them.

Best practices for creating metaLUNs on VNX or CLARiiON arrays; how to configure CLARiiON or VNX HyperGroups.
Topics: performance issues; drive layout best practice
Products: CLARiiON; VNX Series; VNX Series with MCx
Feature: metaLUNs
MetaLUNs are a good method to improve performance by allowing a LUN to access many more drives in parallel than would be possible with a standard RG LUN or Pool. 
On VNX with MCx, metaLUNs can also be configured for symmetric active-active access, whereas Pool LUNs would still have to be active-passive or use ALUA (Asymmetric Logical Unit Access).


A huge number of permutations are possible for drive layouts within a metaLUN.  While these will provide the requested capacity, many combinations can lead to performance issues, such as the one described in knowledgebase article 49371.

VNX / CLARiiON metaLUNs can help to reduce bottlenecks by spreading the I/O load over many drives.  The more drives that are used, however, the more likely there will be problems like those described in knowledgebase article 52084.  Each component LUN will also need to be monitored by the SP, so metaLUNs that use large numbers of component LUNs will increase the SP utilization needed to manage them.

Using large metaLUNs can help boost overall performance, by spreading the performance load over multiple drives.  However, with high capacity drives, this generally means that the RAID groups will be shared by multiple applications.  This, in turn, means that the I/O performance for one application will vary according to the performance loads of other applications.  If consistent performance is needed, either use dedicated RAID groups or consider implementing Navisphere Quality of Service Manager (NQM).

PLEASE NOTE: MetaLUNs are NOT SUPPORTED for VNX/File systems.  Do not attempt to create a metaLUN and assign it to the VNX/File server (it will fail to be imported into the File system), and do not take a traditional LUN assigned to a VNX/File and convert it to a metaLUN to gain more space (it will cause the File server to crash).  Instead, create new LUNs of the same geometry and RAID protection and assign them to the File server, and use Unisphere to extend the filesystem with the new space in the storage pool.
Typical scenarios: new VNX implementation; data migration.
Matching and aligning the size of a metaLUN stripe to the write sizes from a host will help with the efficiency of I/O operations.  For that reason it is usually best to use powers of two, when deciding how many component LUNs go into a metaLUN stripe (that is, 2, 4 or 8).  The same applies to drives in the RAID group so for RAID 5 a 4+1 configuration is ideal (that is, five drives in the RAID 5 group).

While it is always best to match the RAID configuration to the expected I/O load, in large configurations this is not always practical (example, large VMFS volumes).  The following configurations are a good balance between performance, contention, and rebuild times:

RAID 5: 20 drive metaLUN stripes (4 * (4+1)), which is four RAID 5 groups, of five drives each

RAID 1/0: 24 drive metaLUN stripes (4 * (3+3)), which is four RAID 1/0 groups, of six drives each

These are examples, but it is always best to determine the application I/O load in advance and plan the LUN layout accordingly (please refer to the White Papers listed below).
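As an illustration of the 4 * (4+1) RAID 5 layout above, a striped metaLUN could be built from four equally sized component LUNs (one per RAID group) roughly as follows. The SP address, LUN numbers and name are placeholders, and the exact option spelling should be confirmed against the naviseccli reference for your FLARE release:

import subprocess

SP_A = "10.0.0.1"   # placeholder SP management IP

# Expand base LUN 100 into a striped metaLUN using LUNs 101-103 as the other
# stripe components (use -type c instead for a concatenated metaLUN).
subprocess.run(
    ["naviseccli", "-h", SP_A, "metalun", "-expand",
     "-base", "100",
     "-lus", "101", "102", "103",
     "-type", "s",
     "-name", "FileShare_Meta"],
    check=True,
)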

Here are some further best practice rules for creating metaLUNs:

  • Do use the correct RAID level for the pattern of I/O (e.g., an application generating 8 KB random writes should ideally be using RAID 1/0 LUNs).
  • Do not have more than one component LUN from any particular RAID group within the same metaLUN stripe.  This would cause the issue described in article 49371.
  • Do use drives of the same capacity and rotational speed in all the RAID groups in the metaLUN.  The RAID groups should also contain the same number of drives.
  • Do not include the vault drives in the metaLUN.
  • Do use sets of RAID groups that only contain metaLUNs that are all striped in the same way, if possible.  Having standard LUNs in the same RAID group as metaLUN components will lead to some parts of a metaLUN having uneven response times across the metaLUN.  The order in which the component LUNs are added can be changed to evenly distribute the file system load (for example, RAID Groups 1, 2, 3, and 4; then RAID Groups 4, 1, 2, and 3; etc.).  The dedicated RAID group sets for metaLUNs are sometimes referred to as CLARiiON HyperGroups.
  • Do not concatenate stripes with large numbers of drives to components with far fewer drives.  This will lead to performance varying dramatically in different parts of the same metaLUN.
  • Do name the component LUNs in such a way that they are easy to identify (see article 6871).  Numbering the components in a logical order helps in choosing the correct RAID group and default SP owner (see article 6696), although the component LUNs will be renumbered when the metaLUN is created.  The metaLUN will have its own default owner, but choosing the same default owner as all the components avoids the components being reported as trespassed in some tools.

FAST Cache and metaLUNs

FAST Cache will work well with metaLUNs and can be enabled or disabled on each component LUN (although all the component LUNs for any given metaLUN should have it either enabled or disabled, not a mixture of both).  As FAST VP will not work on metaLUNs (as these are not Pool LUNs), FAST Cache is the standard method for boosting metaLUN performance using flash drives.  See article 73184.

See the following document on support.emc.com for further reference:

'Support by Product' - VNX Series - White Papers

Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance

4) Answered above.

9 Legend • 20.4K Posts

November 21st, 2013 09:00

1) I am not sure why it complains about HAVT; maybe because it only sees one path from each HBA. If you performed your failover test and everything is fine (besides, this is a test environment), I would not worry about it too much.

2) For a general file share server (Windows), I would use a concatenated metaLUN. I hate doing host-side striping unless I have a very specific requirement with a lot of concurrent I/O; host striping would help there because you could present two LUNs, one from SPA and one from SPB, so both SPs would be handling the workload. For regular "office" file servers, a concatenated meta is good enough: it is easy to expand and doesn't have the same strict expansion requirements as a striped meta. Make sure the partition is formatted as GPT.
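For the GPT step, a minimal diskpart sketch; the disk number, label and drive letter are placeholders, and "clean" wipes the disk, so double-check "list disk" before running anything like this:

import os
import subprocess
import tempfile

# Diskpart script: initialise the large metaLUN as a GPT disk so the full
# capacity (over 2 TB) is usable, then format and mount it.
SCRIPT = """
select disk 3
clean
convert gpt
create partition primary
format fs=ntfs label=FILESHARE quick
assign letter=F
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(SCRIPT)
    script_path = f.name

subprocess.run(["diskpart", "/s", script_path], check=True)
os.remove(script_path)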

9 Posts

November 21st, 2013 09:00

Thank you this has been very helpful.

Few more questions:

Even after configuring Multipath I/O and doing a successful failover from SPA to SPB and back to SPA, the connectivity status for the Windows host shows "HAVT Issues" ->

There is insufficient information available to determine whether this server (10.X.X.X) is running an approved failover SW package. Please verify that some failover SW, such as EMC PowerPath, is installed and running on the server.

1) Is native MPIO not approved failover software? I read the doc you sent me, and it says that my FLARE version (04.30.000.5.517) should be supported.

I was able to access files on the volume during the failover and right after, so failover seems to work great. No data corruption was reported in the event logs either.

2) I'm in the process of expanding one of the volumes into a very large 17 TB metaLUN. It's going to take forever, but I'll wait to see how that works. What method of presenting the LUNs do you recommend for production general file share servers: spanning smaller LUNs at the OS level, or metaLUNs?

I can't thank you enough for your help. I needed guidance, as the volume of documentation on the subject is enormous. Thank you.

9 Legend • 20.4K Posts

November 21st, 2013 10:00

It's my pleasure; enjoy the holidays as well.
