March 9th, 2011 08:00
How to provide performance when using NFS
Hi everyone,
We're currently planning a "small" virtualization project and I'm having some difficulties with the storage (NFS).
The environment will consist of 3 physical servers with ESX 4.1 (vSphere Advanced/Enterprise) and shared storage. The whole network will be redesigned, and since we want to reduce costs, NFS was chosen as the storage protocol. I introduced EMC's VNXe systems to my colleagues and it looks like we'll order a VNXe3300. Now the thing is, I don't know how to ensure that there's enough network performance. A single 1Gbit link won't provide enough bandwidth for the NFS traffic, so I thought about putting 4x1Gbit ports in an EtherChannel for load balancing. The problem is that all traffic would still go over the same 1Gbit link, because the IP hash maps one source/destination IP pair to one physical link. The solution would be to mount the NFS exports through different IPs, but how do I configure that on the storage side, and what do I have to consider when doing that?
I have no experience with NFS and I'm also new to EMC (glad that I'll finally see a VNX live), so I could really use some help here. I've already read many papers, blog posts and forum posts on the topic, but somehow I haven't found the right solution. There must be more people using NFS, but no one writes about their solution/environment.
We're also thinking about using 10Gbit adapters, but that would increase the costs too much, and I would rather have a solution with 1Gbit; that leaves more money for other things.
Fibre Channel (my favorite) would also mean higher costs, and then we would have to choose a different storage system.
Thank you very much for every answer!
Best Regards,
Max


clintonskitson
March 9th, 2011 09:00
No problem.
Yes, you're right that port channels and their native ability to balance via IP hash or TCP/UDP ports will only provide so much benefit. Keep in mind that the fan-out side of the connection will have different source IPs (the different ESX servers), so even though the destination is a single IP, some load balancing still takes place across the channel members. The other option you mentioned is adding multiple IP addresses. You can create extra interfaces that ride on top of your trunked (4x1GbE) uplink to the switch. These extra interfaces can have different IPs and can be tagged to be on different VLANs or subnets. All available exports will be advertised from the interfaces that are created.
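On the ESX side, the same approach looks roughly like this: mount the export once per storage interface IP, so the IP-hash algorithm sends each session down a different EtherChannel member. A minimal sketch using the ESX 4.x service console command `esxcfg-nas`; the IP addresses, export path, and datastore labels below are placeholders, not values from this environment:

```shell
# Mount the same NFS export through two different storage interface IPs.
# Each IP/export pair shows up on the ESX host as its own datastore, so the
# source/destination IP hash can land each mount on a different uplink.
# (IPs, export path, and labels are placeholders -- substitute your own.)
esxcfg-nas -a -o 192.168.10.11 -s /vol/datastore1 nfs-ds-a
esxcfg-nas -a -o 192.168.10.12 -s /vol/datastore1 nfs-ds-b

# List the mounted NAS datastores to verify both mounts are present
esxcfg-nas -l
```

Note that ESX treats each IP/export combination as a separate datastore, so you then distribute VMs between the datastores to spread the load.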
Hope this helps.
regnor
March 10th, 2011 04:00
Thanks for your fast answer!
I already considered that, but at the beginning we'll have about 5-7 VMs per host, so I don't think a single 1Gbit link will have enough bandwidth.
Let's say the storage has 4 different IPs, so the same NFS export is mounted 4 times on each ESX host (through the different IPs). Wouldn't it then be a lot of work to separate the VMs according to their storage needs and to monitor the performance of the individual connections?
What happens if one connection is overloaded? Would I just move a VM with Storage vMotion to a different IP (which sounds a bit crazy, since it's the same export on the same storage)?
I think the best thing will be to set up a test lab for this topic and see how it works. Does EMC offer a simulator like the one from NetApp?
clintonskitson
March 10th, 2011 14:00
Yes, EMC offers a virtual storage appliance (VSA) that demonstrates all NAS-related functionality. Anyone, EMC and non-EMC customers alike, can download it from the link below.
http://nickapedia.com/2010/11/01/new-torrent-links-vsas-and-tools/
There is also documentation below for higher-level testing, covering replication and other needs.
http://nickapedia.com/2011/02/05/how-to-uber-new-celerra-uber-vsa-guide/
And there is a link below showing some settings tweaks in the VSA OS (Red Hat) that allow for somewhat better performance.
https://community.emc.com/message/528101
Keep in mind that the VSA is meant for testing and education, and its performance is determined by the Red Hat OS and the underlying infrastructure you place it on. So you can play around with the functionality, but don't expect it to be a workhorse =)
As a side note, since Red Hat is advertising the network links to the VSA services, the VSA cannot do trunk ports. So in order to get multiple true VLANs you would need to present multiple virtual adapters. If possible, try to avoid multiple VLANs; if you need separate IP subnets, just keep them in the same VLAN and on the same virtual adapter.
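Since the VSA can't terminate a trunk itself, each VLAN it should see has to arrive untagged on its own virtual adapter; on the ESX side that means one VLAN-tagged port group per adapter, with the vSwitch handling the 802.1Q tags. A rough sketch using the ESX 4.x `esxcfg-vswitch` command; the vSwitch name, port group name, and VLAN ID are placeholders for illustration:

```shell
# Create a port group for the VSA's storage network and tag it with VLAN 10.
# The vSwitch adds/strips the 802.1Q tag, so the VSA's virtual adapter only
# ever sees untagged frames. (Names and VLAN ID are placeholders.)
esxcfg-vswitch -A "VSA-Storage" vSwitch0
esxcfg-vswitch -v 10 -p "VSA-Storage" vSwitch0

# Verify the port groups and their VLAN assignments
esxcfg-vswitch -l
```

Repeat with a second port group (and a second virtual adapter on the VSA) for each additional VLAN the appliance needs to reach.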
regnor
March 12th, 2011 05:00
Great! As soon as I've got a bit of free time I'll start playing with the VSA.
Just one last quick question: can I create multiple VLANs on one interface on the VNXe? CIFS and NFS will be in separate VLANs but go over the same interface(s).
clintonskitson
March 18th, 2011 12:00
Sorry for the delay on this. Yes, it supports 802.1Q VLAN trunking; see the spec sheet below for other details as well.
http://www.emc.com/collateral/hardware/specification-sheet/h8515-vnxe-ss.pdf