January 10th, 2013 14:00
Isilon: Best Practices for VMware vSphere 5 on Isilon
The Question
A question came up on an internal DL regarding how to address best practices of vSphere 5 on Isilon. There was some confusion about the proper way to mount presented datastores. The document that was referenced can be found here: http://simple.isilon.com/doc-viewer/1739/best-practices-guide-for-vmware-vsphere.pdf
NFS Exports, Mount Points, and Different Paths
A single NFS export can back any number of vSphere datastores, but it doesn't have to…
vSphere can mount /ifs/nfs/ as one datastore, /ifs/nfs/ds0 as a second datastore, and /ifs/nfs/ds1 as a third datastore.
As long as the vSphere host can read the mount point, and any folders beneath it, it can mount them as datastores (this is not specific to Isilon).
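The relationship above can be sketched in a few lines (the paths are the example's own, not required names):

```python
# One NFS export can back several vSphere datastores: the export root
# itself, plus one per subdirectory the host can read.
export_root = "/ifs/nfs"
datastores = [export_root, f"{export_root}/ds0", f"{export_root}/ds1"]

# Each readable path is a valid datastore candidate in its own right.
assert len(datastores) == 3
```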
A Mesh Topology
The mesh topology (mounting different datastores on each host) is a carryover from the vSphere 4 recommendations, and there are some things to remember.
A datastore mounted on a specific IP address will only be available on that IP address (not necessarily a node).
For proper vMotion, all hosts should mount datastores with the same mount point path (IP:/nfs/path), which is noted in this KB: http://kb.vmware.com/kb/1023230
Multiple nodes will present the same NFS mount, as Isilon presents the export to all IP addresses in the SmartConnect Zone.
If an export of /ifs/nfs is presented, then to balance VMs across nodes, multiple vSphere datastores will have to be attached, and the VMs to be run on each node will have to be registered on the datastore tied to that node's IP address.
Let me give an example:
A SmartConnect Zone has IPs of 192.168.1.11-192.168.1.13 (3 node cluster)
A mount point of /ifs/nfs can be mounted to any of the IPs.
Because an IP is used, /ifs/nfs on 192.168.1.11 will have a different vSphere UUID than /ifs/nfs on 192.168.1.12 or 192.168.1.13.
Each datastore (IP + NFS mount combination) is treated as a distinct datastore in vSphere.
Remember that these IPs are “tied” at any point in time to different nodes.
For the purpose of simplicity, let’s say Node 1 is 192.168.1.11, Node 2 is 192.168.1.12, and Node3 is 192.168.1.13.
Whenever a VM runs from the datastore mounted at 192.168.1.11 (/ifs/nfs), all of its traffic is directed to Node 1.
The guide states that with this connection method, a VM can be "moved" by powering it down, unregistering it from one path (removing it from inventory), and re-registering it on another path (192.168.1.12:/ifs/nfs, for example).
This does require VM downtime.
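The example above can be condensed into a short sketch. This is illustrative only: the mount-spec strings stand in for whatever vSphere actually hashes into a datastore UUID, which is not modeled here.

```python
# Three SmartConnect Zone IPs, one export: vSphere sees each (IP, export)
# pair as a distinct datastore, even though every IP serves the same
# underlying /ifs/nfs directory.
zone_ips = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]
export = "/ifs/nfs"

mount_specs = [f"{ip}:{export}" for ip in zone_ips]

# Three distinct specs -> three distinct datastores in vSphere.
assert len(set(mount_specs)) == 3
```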
Multiple IP Addresses and SmartConnect
The practice of assigning multiple IP addresses to a network pool (from an Isilon perspective) is typically associated with using a SmartConnect Zone name.
SmartConnect Basic Example:
The same cluster as above (3 nodes)
A network pool (pool0) on subnet0 is comprised of 3 IP addresses
The associated addresses would be 192.168.1.11-192.168.1.13
SmartConnect Advanced Example:
The same cluster as above (3 nodes)
A network pool (pool0) on subnet0 is comprised of 6 IP addresses, using the recommended formula of N*(N-1) the result is 3*(3-1) = 6
The associated addresses would be 192.168.1.11-192.168.1.16
Depending on the SmartConnect Advanced Connection Policy, each node could have anywhere from 1 to 4 IP addresses assigned, with a total of 6 IP addresses across the system.
If a CPU Utilization or Network Throughput policy is used, it is conceivable that one or more nodes receives only 1 IP, based on utilization.
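The recommended pool-sizing rule of thumb from the example is simple enough to capture as a helper:

```python
def smartconnect_pool_size(nodes: int) -> int:
    """Recommended IP count for a SmartConnect Advanced pool,
    using the N*(N-1) formula quoted in the post."""
    return nodes * (nodes - 1)

# The 3-node example above: 3 * (3 - 1) = 6 IP addresses.
assert smartconnect_pool_size(3) == 6
```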
vSphere 5, Single NFS Mount Names, Round Robin DNS, & More
In vSphere 5.0, a DNS name can be used to mount NFS datastores. It is recommended to leverage a single name rather than an IP address (speaking in terms of a single datastore), so that each host, regardless of the IP it resolves, views the datastore as the same one.
The fact that different hosts can be connected to different nodes (and subsequent IP addresses) is transparent to vSphere 5 as a result. I have not heard of vMotion having any issues with this configuration.
Expanding on the Examples Above
The SmartConnect Zone is named isilon.domain.local with a mount point of /ifs/nfs
Datastores should be mounted to isilon.domain.local:/ifs/nfs rather than the IP addresses that make up the SmartConnect Zone
When a vSphere host first mounts that datastore, it uses the IP address that is returned from the DNS Delegation to the SmartConnect Zone Service Subnet.
Using a Round Robin Connection Policy, it is possible that the IPs would be assigned in this fashion: Node 1 (192.168.1.11, 192.168.1.16), Node 2 (192.168.1.12, 192.168.1.15), and Node 3 (192.168.1.13, 192.168.1.14).
When vSphere Host 1 mounts isilon.domain.local:/ifs/nfs, using this policy, it will be associated with 192.168.1.11 (based on the 1st FQDN lookup)
When vSphere Host 2 mounts isilon.domain.local:/ifs/nfs, using the same policy, it will be associated with 192.168.1.12, and so on (all based on how many DNS requests for isilon.domain.local have been resolved)
Even though Node 1 has 2 IP addresses, only 1 IP address is used when the connection to the datastore is made.
When that host reboots, because Round Robin is used, it is possible for a different IP address to be resolved.
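The Round Robin walkthrough above can be simulated in a few lines. This is a simplification, assuming the resolver simply cycles through the pool in order, which is not exactly how SmartConnect tracks its rotation internally:

```python
from itertools import cycle

# Assumed dynamic-pool IPs from the 6-address example above.
pool = ["192.168.1.11", "192.168.1.12", "192.168.1.13",
        "192.168.1.14", "192.168.1.15", "192.168.1.16"]

# Round Robin hands out the next IP per DNS query, regardless of load.
resolver = cycle(pool)

# Each ESX host resolves isilon.domain.local once at mount time and is
# pinned to whichever IP that lookup returned.
hosts = ["esx1", "esx2", "esx3"]
mounts = {h: f"{next(resolver)}:/ifs/nfs" for h in hosts}
# esx1 lands on .11, esx2 on .12, esx3 on .13.
```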
It is important to ensure the Connection Policy is appropriate for the environment. Things like workload, use-case, number of nodes & number of hosts all play a factor.
Also, in the event of a node failure or SmartFail, if the IP Allocation Method is set to Dynamic, IPs are moved to available nodes. If it is set to Static, then the IPs are no longer in the SmartConnect Zone “hunt group.”
Note, with the IP Allocation Method set to Static, each node only gets a single IP address from the Pool.
Because the vSphere-mounted datastore resolves to a specific IP (based on DNS resolution), when the IP moves, the vSphere host connects to the node that now answers on the failed-over IP.
If the Rebalance Policy is set to Automatic Failback, the IPs can move back to the node once it recovers, according to the policy designated in the IP Allocation settings.
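A toy model of Dynamic IP allocation during a node failure may help. The exact placement of orphaned IPs depends on the configured failover settings; the spread-across-survivors logic here is an assumption for illustration:

```python
# IP layout from the Round Robin example above.
assignments = {
    "node1": ["192.168.1.11", "192.168.1.16"],
    "node2": ["192.168.1.12", "192.168.1.15"],
    "node3": ["192.168.1.13", "192.168.1.14"],
}

def fail_node(assignments, failed):
    """With Dynamic allocation, a failed node's IPs are re-homed to
    surviving nodes, so existing NFS mounts keep answering."""
    orphaned = assignments.pop(failed)
    survivors = list(assignments)
    for i, ip in enumerate(orphaned):
        assignments[survivors[i % len(survivors)]].append(ip)
    return assignments

fail_node(assignments, "node1")

# Every pool IP is still served by some surviving node.
all_ips = [ip for ips in assignments.values() for ip in ips]
assert len(all_ips) == 6
```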
More information from VMware on NFS and Round Robin connections
VMware’s Cormac Hogan details NFS mounted datastores and Round Robin connection methods in this VMware blog post:
http://blogs.vmware.com/vsphere/2011/12/load-balancing-with-nfs-and-round-robin-dns.html
SmartConnect is handling the Load Balancing role, albeit with the possibility of Round Robin, Connection Count, CPU Utilization, & Network Throughput as different algorithms for divvying out IP addresses for the NFS name resolution.
What's best for you?
If you are using vSphere 5.0, a combination of these will likely give you a suitable configuration.
- Using a SmartConnect Zone name as opposed to IP addresses
- An appropriate SmartConnect Advanced Connection Policy
- Dynamic IP Allocation Method and an appropriate IP Failover Policy
Where to find more on SmartConnect?
I have a couple blog posts about SmartConnect, how it works, how to configure it, and how to use it with vSphere 5 located here: http://www.jasemccarty.com/blog/?tag=smartconnect
Jase McCarty - Sr. vSpecialist
EMC
dynamox
January 10th, 2013 19:00
I have to say that it is very nice that we can now use a DNS name in vCenter instead of doing "manual" load-balancing by connecting to each individual Isilon node by its IP address. My VMware admins were very confused because they saw multiple datastores that had the same capacity, yet I was asking them to try to deploy VMs on different datastores. When doing OneFS upgrades there was no guarantee that the same IP address would end up on the same node, so after each OneFS upgrade I ended up manually moving IPs around so each datastore was on a different Isilon node. Pain!!!
I don't know about other customers, but I have a dedicated subnet for NFS connections for my ESX hosts, and we also have SmartConnect Advanced. I am actually thinking of using the "Connection Count" policy instead of "Round Robin". When nodes are being rebooted for a OneFS upgrade, there is no guarantee SmartConnect will give you an IP address of a node that has not already been connected to by another ESX host. Thoughts?
I would also like to see you/Isilon/VMware share some guidelines on "sizing" Isilon storage for VMware installations. I can't justify buying S-class nodes for VMware; would the X-series work for certain workloads?
crklosterman
April 30th, 2015 08:00
Just wanted to add some color to this old discussion that someone forwarded to me:
Best Practices today are to:
1. Use a dedicated dynamic smartconnect zone for this workflow, but artificially limit it to 1 IP per node.
2. Create 1 datastore per node in the Isilon's smartconnect zone, so for instance with a 3 node cluster that might be:
/ifs/clustername/esx/ds01
/ifs/clustername/esx/ds02
/ifs/clustername/esx/ds03
3. Perform the mounts against IP addresses, not the smartconnect zone name, so mount all 3 datastores in this case on all ESX hosts, like this:
10.111.123.5:/ifs/clustername/esx/ds01
10.111.123.6:/ifs/clustername/esx/ds02
10.111.123.7:/ifs/clustername/esx/ds03
(assuming the dynamic pool had the range .5.6.7)
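The per-node layout in steps 2 and 3 can be sketched as follows; the IPs and paths mirror the example above and are assumptions, not fixed names:

```python
# One datastore per node, each pinned to one IP from the dynamic pool.
node_ips = ["10.111.123.5", "10.111.123.6", "10.111.123.7"]
mounts = [f"{ip}:/ifs/clustername/esx/ds{n:02d}"
          for n, ip in enumerate(node_ips, start=1)]

# Every ESX host mounts all three, so each datastore's traffic goes to
# exactly one node's IP until a failover moves that IP.
assert mounts[0] == "10.111.123.5:/ifs/clustername/esx/ds01"
```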
Some people will disagree with this approach, but here is why:
Anyway just needed to add some color to the discussion. I do have a request filed to get the documentation updated to reflect these recommendations.
~Chris Klosterman
Senior Solution Architect
EMC Isilon Offer & Enablement Team
chris.klosterman@emc.com
dynamox
April 30th, 2015 08:00
Using dedicated IPs is like going 5 years back. We had an 8-node cluster where each node was mounted using its dedicated IP address; you have 8 different datastores, and VM admins get so confused about which one to use. I understand what you are saying, but I am not going back to that mess.