Unsolved

May 15th, 2013 14:00

emc241591: NAS 6.0 does not support LUNs where CLARiiON Auto-Tiering, Compression, or dense Pool-based LUNs are enabled

Hello,

Can anyone verify whether the points in emc241591 are still valid for the recent NS120 DART (6.0.70.4) and FLARE code (4.30.0.5.525)?

The bulleted items below are CLARiiON FLARE 30 features that are not supported by NAS 6.0 Celerras. Currently these features are only supported by File OE 7.0 or later in the VNX product line. CLARiiON LUNs with any of the following features enabled will not be diskmarked by the Celerra, resulting in diskmark failures similar to those described in the previous Symptom statements:

  • CLARiiON FAST Auto-Tiering is not supported on LUNs used by Celerra.
  • CLARiiON Fully Provisioned Pool-based LUNs (DLUs) are not supported on LUNs used by Celerra.
  • CLARiiON LUN Compression is not supported on LUNs used by Celerra.
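For reference, a minimal sketch of how the diskmark step in question is usually run from the Celerra Control Station — `nas_disk` and `nas_diskmark` are standard Celerra commands, but the exact output and error text vary by DART release, so treat this as illustrative only:

```shell
# On the Celerra Control Station (sketch; output varies by DART release)

# List the backend disks (marked LUNs) the Celerra currently knows about
nas_disk -list

# Rescan and mark newly presented CLARiiON LUNs; LUNs with unsupported
# features (FAST auto-tiering, compression, pool LUNs on NAS 6.0) are
# the ones expected to fail at this step
nas_diskmark -mark -all
```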

4 Operator • 8.6K Posts

May 15th, 2013 15:00

FAST-based LUNs and LUNs using compression certainly aren't supported for use by the Celerra for NAS.

Pool-based LUNs – I am not sure.

Check the release notes for the latest 6.0.x

What would be the reason for asking for or using them?

4 Operator • 8.6K Posts

May 16th, 2013 12:00

Sorry – you have to go through normal performance troubleshooting and maybe a support case

Low write performance can have a number of reasons, for example:

  • Network – packets getting lost
  • Disk layout – not enough disks, or not enough LUNs for enough outstanding I/Os
  • Test tools – Windows Explorer, for example, isn't very clever; single-stream vs. aggregate performance
  • Checkpoints – COFW does make a difference; SavVols shouldn't be on FAST Cache, …
  • Client issues
  • …

I would suggest reducing to the simplest config – one interface (no trunking), no checkpoints – and starting troubleshooting from there.

You can also test the network with a simple protocol like FTP or ttcp.
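A sketch of the classic ttcp invocation, assuming ttcp is installed on both ends (iperf works similarly if it isn't), plus a quick sanity calculation on what a single link should deliver:

```shell
# Raw-TCP throughput test with classic ttcp (hypothetical hosts):
#   on the receiving side:   ttcp -r -s
#   on the sending side:     ttcp -t -s <receiver-ip>

# Sanity arithmetic: one 1 Gbit/s link tops out around 119 MiB/s, so
# ~12 MB/sec writes over a 4 x 1 GBit trunk point to a bottleneck
# elsewhere, not a saturated wire.
awk 'BEGIN { printf "%.0f\n", 1e9 / 8 / 1048576 }'
```

If the raw TCP numbers are already low, the problem is in the network path rather than the Celerra disk layout.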

P.S.: metaLUNs are not supported for NAS usage – AVM does its own striping.

2 Intern • 223 Posts

May 16th, 2013 12:00

Hi Rainier,

I wanted to use FAST VP because we have 3 tiers in the NS120 and I thought to use auto-tiering.

But I didn't read the release notes and thought the NS120 could be handled just like a VNX.

So I created standard RAID 5 (4+1) and (8+1) over 60 FC 15K disks, RAID 1/0 over 14 SATA disks, and 2 EFD drives as FAST Cache.

Now it works, I can see all the LUNs as Storage Pools in the file area.

But I have one BIG problem, perhaps you can help me.

We get only about 12 MB/sec write performance on the defined CIFS shares; read performance is good, at 90-100 MB/sec.

We have an LACP trunk over 4 x 1 GBit.

I created the LUNs for the file pool manually, as described in the best practices.

Following config:

3 x R5 (4+1) --> 2 LUNs each of equal size, one on SP A and one on SP B.

6 LUNs in the r5_performance_pool.

1 Filesystem on this pool.

In Analyzer I can see that all 15 disks are equally used, but only at about 10 MB/sec.

Any idea what I can do to get the system's write performance higher? I also edited the fastRTO setting for the Data Movers to the value 2, as I read several times in this forum.
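For what it's worth, Data Mover parameters like the fastRTO setting mentioned above are normally inspected and changed with `server_param` on the Control Station – a sketch, assuming Data Mover `server_2` and the `tcp` facility (verify the facility name and valid values against your DART release before changing anything):

```shell
# Show the current fastRTO setting on Data Mover server_2
# (sketch; facility name and valid values depend on the DART release)
server_param server_2 -facility tcp -info fastRTO

# Change it (takes effect per the parameter's reboot/online policy)
server_param server_2 -facility tcp -modify fastRTO -value 2
```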

Overnight I created some metaLUNs on other drives and will add them to the file storage; perhaps this works better...

2 Intern • 467 Posts

May 16th, 2013 17:00

You can use FLR to move files to a lower-cost tier – not quite the same as FAST, but a nice feature/option as well.

May 17th, 2013 01:00

Mark wrote:

You can use FLR to move files to a lower cost tier - not quite the same as FAST but a nice feature/option as well..

Sorry, just to clarify, did you maybe mean FMA?

2 Intern • 467 Posts

May 17th, 2013 04:00

Yup,  I did mean that.  Thanks!

4 Operator • 8.6K Posts

May 17th, 2013 04:00

I think you mean CTA (formerly called FMA or the FileMover/DHSM API)

FLR (aka CWORM) sets files into a retention state so that they can't be deleted or modified – it never moves files.

2 Intern • 467 Posts

May 17th, 2013 04:00

Yeah, good catch - It was late
