October 19th, 2011 08:00
Combo: Tuning Latency Sensitive Workloads by VMware & VMworld PPT
Good information here, applicable to configuring VMs for Oracle, in the whitepaper "Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere Virtual Machines":
http://www.vmware.com/files/pdf/techpaper/VMW-Tuning-Latency-Sensitive-Workloads.pdf
Quote from introduction:
"The vSphere ESXi hypervisor provides a high-performance and competitive platform that effectively runs many Tier 1 application workloads in virtual machines. By default, ESXi has been heavily tuned for driving high I/O throughput efficiently by utilizing fewer CPU cycles and conserving power, as required by a wide range of workloads. However, many applications require I/O latency to be minimized, even at the expense of higher CPU utilization and greater power consumption. "
This ties in nicely with, and follows on from, Sam's VMworld presentation - VMware vSphere 5: Best Practices for Oracle RAC Virtualization
https://community.emc.com/docs/DOC-11870
This combination of a VMware whitepaper and a VMworld presentation works well for supporting discussions about virtualizing Oracle.
Thanks to Allan Robertson for the original post via email.
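If anyone wants to experiment with the VM-level advanced settings the whitepaper covers, here is a rough pyVmomi sketch of pushing one of them onto a VM - disabling virtual NIC interrupt coalescing, which I believe is one of the tweaks the paper discusses. The vCenter address, credentials and VM name below are placeholders; double-check the exact keys and values against the paper before using them:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - adjust for your environment.
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the VM by name with a simple container-view search.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "oracle-rac-node1")
view.DestroyView()

# Advanced (.vmx) options go into extraConfig; the key below is the
# virtual NIC interrupt coalescing setting as I understand it from the paper.
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="ethernet0.coalescingScheme", value="disabled"),
])
vm.ReconfigVM_Task(spec=spec)  # apply with the VM powered off, then power on

Disconnect(si)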
reseach
October 21st, 2011 00:00
The hardware tuning part is mainly about turning off all the power-saving options under BIOS Power Management and keeping the machine running at 100% performance, because these power-management features run the system in a "power saving" mode (at a lower clock frequency), which would dramatically reduce I/O performance.
The same kind of recommendation also comes up in "Improving SSD/RAID performance on X86 system".
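Along the same lines, it is worth checking that the ESXi host itself is on the High Performance power policy, not just the BIOS. A rough sketch with pyVmomi (host name and credentials are placeholders, and "static" is the short name I believe ESXi uses for the High Performance policy - verify on your build):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")
view.DestroyView()

power = host.configManager.powerSystem
print("Current policy:", power.info.currentPolicy.shortName)

# Switch to the "static" (High Performance) policy if the host is not
# already on it; this only works if the BIOS hands power control to the OS.
static = next(p for p in power.capability.availablePolicy
              if p.shortName == "static")
if power.info.currentPolicy.key != static.key:
    power.ConfigurePowerPolicy(key=static.key)

Disconnect(si)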
jweinshe
November 7th, 2011 12:00
One thing to bring attention to - in the tuning for latency-sensitive applications whitepaper you linked to, they discuss vNUMA (a feature of vSphere 5) and suggest that you take vNUMA into account when sizing the CPU resources for your workload. The whitepaper then incorrectly states "vNUMA is automatically enabled for VMs with more vCPUs than the number of cores per socket."
According to some follow-up Duncan Epping did when I inquired about this discrepancy ( http://www.yellow-bricks.com/2011/10/28/vnuma-and-vmotion/#comment-28718 ), this is only the case when your VM has at least 8 vCPUs.
Just something to keep in mind.
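A quick way to sanity-check whether vNUMA is actually being exposed to a given VM is to look at what the guest sees. A minimal sketch for a Linux guest (standard sysfs paths, nothing vSphere-specific; other guest OSes have their own tools):

import glob
import os

# Count the NUMA nodes the guest kernel sees; with vNUMA not exposed,
# even a large multi-vCPU VM will still report a single node.
nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
print("Guest sees %d NUMA node(s)" % len(nodes))
for node in nodes:
    with open(os.path.join(node, "cpulist")) as f:
        print("  %s: CPUs %s" % (os.path.basename(node), f.read().strip()))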