July 31st, 2012 11:00
PS4100X - Initial setup / configuration (vSphere 5)
Hello all
We have just bought a PS4100X with 12 x 600GB 10K SAS drives (7.2TB raw).
This device is due to replace our existing PowerVault MD3000i, which is being used in our VMware ESX environment.
What RAID / ESX datastore configuration would people recommend?
Currently we have the following set up, for example:
2 x RAID 10 disk groups with a 1TB virtual disk on each.
The two 1TB virtual disks are then presented to ESX and used by two datastores.
Cheers in advance.
john.denny73
July 31st, 2012 13:00
Thanks for this detailed information.
Based on 12 x 600GB drives, how would you suggest creating the RAID sets / LUNs? i.e. would you create 3 x 4-disk RAID 10 LUNs with a datastore on each, or would you create a mix of, say, RAID 5 and RAID 6?
I get the comment regarding splitting up into smaller volumes, but I want to ensure good RAID protection and ESX performance.
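For reference, the rough raw-capacity arithmetic I'm comparing looks like this in a few lines of Python (marketing GB only, ignoring formatting and the array's own spare/reserve overhead, so the real usable figures will come out lower):

    # Rough usable-capacity arithmetic for 12 x 600GB drives.
    # Raw/marketing GB; formatting, metadata and array reserve are ignored.
    DRIVE_GB = 600
    DRIVES = 12

    # Option 1: three 4-disk RAID 10 sets -> half of each set is usable
    raid10_usable = 3 * (4 * DRIVE_GB) // 2       # 3600 GB

    # Option 2: RAID 6 across 11 drives plus 1 hot spare -> 2 drives lost to parity
    raid6_usable = (DRIVES - 1 - 2) * DRIVE_GB    # 5400 GB

    # Option 3: RAID 50, two 6-disk RAID 5 sets striped -> 1 parity drive per set
    raid50_usable = (DRIVES - 2) * DRIVE_GB       # 6000 GB

    print("RAID 10 (3 x 4 disks): ~%d GB usable" % raid10_usable)
    print("RAID 6 + hot spare:    ~%d GB usable" % raid6_usable)
    print("RAID 50:               ~%d GB usable" % raid50_usable)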
john.denny73
August 26th, 2012 04:00
Thanks for the info.
I have created a 4.7TB array on my SAN, using RAID 6 and a hot spare.
I was advised to create volumes and present them directly to the guest OS inside the VM environment, but I was just going to create datastores, as my environment has low I/O requirements.
This being the case, are there any best practices for creating datastores, or do I just follow the ESX best practices for datastores?
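If it helps frame the question, the split I had in mind is nothing more sophisticated than this (the volume count and headroom figure are placeholder numbers, not a recommendation):

    # Illustrative only: carving a 4.7TB pool into equal datastore volumes,
    # keeping some of the pool unallocated for snapshots and growth.
    POOL_TB = 4.7
    HEADROOM = 0.10        # assumed fraction of the pool left free
    NUM_VOLUMES = 4        # placeholder volume count

    usable_tb = POOL_TB * (1 - HEADROOM)
    per_volume_tb = usable_tb / NUM_VOLUMES
    print("%d volumes of ~%.2f TB each, %.2f TB left free in the pool"
          % (NUM_VOLUMES, per_volume_tb, POOL_TB - usable_tb))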
cheers
John
john.denny73
August 26th, 2012 08:00
One thing I did wonder: if you are presenting volumes directly to the VM, the VM will have to share a vSwitch between a network port group and iSCSI traffic. Does this not affect performance? I have separate vSwitches for iSCSI traffic, with the network port groups on another vSwitch. Or did I miss something?
john.denny73
August 26th, 2012 08:00
Cheers Don, food for thought.
john.denny73
August 26th, 2012 10:00
"Ideally, you would have a separate vSwitch and physical NICs for that VM iSCSI traffic. It's not required, since in most installations, you don't max out the network bandwidth."
Didn't quite follow, sorry. Currently I have physical NICs hanging off a vSwitch for network connectivity, and additional physical NICs attached to another vSwitch that runs the iSCSI traffic.
The iSCSI traffic runs through physically separate switches, therefore a Windows 2003 server, for example, cannot see the iSCSI target on the SAN if I created a volume for it.
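Something like this rough pyVmomi sketch would list each vSwitch with its uplinks and port groups, just to show that the iSCSI vSwitch and the VM network vSwitch don't share any vmnics (the host name and credentials are placeholders, and the bindings are only one way of querying it):

    # List each vSwitch on every host with its physical uplinks and port groups,
    # to confirm the iSCSI vSwitch and the VM-network vSwitch share no vmnics.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="esx01.example.local", user="root", pwd="password",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            print(host.name)
            net = host.config.network
            for vsw in net.vswitch:
                uplinks = [p.split("-")[-1] for p in vsw.pnic]   # e.g. vmnic2
                portgroups = [pg.spec.name for pg in net.portgroup
                              if pg.spec.vswitchName == vsw.name]
                print("  %s  uplinks=%s  portgroups=%s"
                      % (vsw.name, uplinks, portgroups))
        view.Destroy()
    finally:
        Disconnect(si)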
hope that makes sense
cheers
John
john.denny73
August 26th, 2012 10:00
Also:
Where you say "In the VM settings, when you have multiple VMDKs (or RDMs), create a new Virtual SCSI adapter for each VMDK/RDM."
Is this not the standard action when you create an additional virtual disk?
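Sketched out with the pyVmomi Python bindings, my reading of that suggestion is roughly the following (the paravirtual controller type, the temporary key and the way the vm object is obtained are my own placeholder assumptions):

    # Add an extra virtual SCSI controller to a VM so the next VMDK/RDM can
    # sit on its own adapter. ESX allows at most 4 SCSI controllers per VM.
    from pyVmomi import vim

    def add_scsi_controller(vm, bus_number):
        # vm is an already-looked-up vim.VirtualMachine object
        ctrl = vim.vm.device.ParaVirtualSCSIController()
        ctrl.key = -100 - bus_number               # temporary negative key
        ctrl.busNumber = bus_number                # e.g. 1 for "SCSI controller 1"
        ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

        dev_spec = vim.vm.device.VirtualDeviceSpec()
        dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        dev_spec.device = ctrl

        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))

    # e.g. add_scsi_controller(vm, 1), then create the new VMDK/RDM on SCSI (1:0)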
cheers
John
john.denny73
September 25th, 2012 04:00
One final question: is it better to create volumes and present them to the VMs using the method above, as in "In the VM settings, when you have multiple VMDKs (or RDMs), create a new Virtual SCSI adapter for each VMDK/RDM (up to 4 SCSI controllers per VM max)",
or do it via the direct mapping on the vSwitch as you suggested: "You would create a new vSwitch and add physical NICs to it. Those NICs would be in the same subnet as the ESX iSCSI NICs are, so those VMs would be able to access the SAN. That way you don't share physical NICs for ESX iSCSI and guest iSCSI"?
I guess it depends on the applications?
cheers
John
john.denny73
September 25th, 2012 04:00
I get it now, sorry, I've been away.
So the traffic is separated by vSwitch but reachable over the network.
Sorry for being so slow.