Unsolved
November 18th, 2010 08:00
AX4-5i + M610 Blades + Hyper-V... a very broad question about the suitability of these components for virtualization.
The environment I help administer has made some major investments in new (to us) technology in the last year and a half, and I was the guy who made the majority of the decisions about what hardware to buy.
We have never dabbled in virtualization until now, and I am concerned that maybe some of the decisions I made on hardware were... suboptimal, to put it nicely.
Most of what we serve now lives on M610 blades with dual 10k SAS drives and 12GB of RAM. For most, we went with L5630 procs (trying to keep our heat/power consumption under control, as our 'server room' is a converted office with barely adequate cooling). It's ok if you're laughing at this point :)
The storage I selected for all this is the AX4-5i. I opted to fill every shelf with 7200rpm 1TB SATA drives, because at the time this decision was made, maximum disk space per dollar was our priority.
I know that this is by no means a 'high performance' system, with 1Gb iSCSI and slow SATA drives.
Recently, some of our developers have sort of sprung their latest creations on us (does this happen in your environment?), and I'm finding that I:
#1 don't know how to spec hardware for database-driven web apps, and
#2 don't really know how to best utilize what I have, because this IS what I have, and I am stuck with it.
Our developers also (god love 'em) seem to prefer to work in bizarre, wildly diverse architectures, so having more than one such app live on the same server is out of the question in most cases. As an example, we have your typical AMP web apps, Apache/PostgreSQL/Python apps, web-enabled FileMaker apps, web-enabled FoxPro, and recently we've been told we're going to support our very first IIS/.NET/MSSQL app.
Because of this, I would really like to start virtualizing some of these apps. If I have enough processing power in one box to run 2 or 3, I'd rather do that than dedicate a server to an app that may see 100-1000 or so users connecting at a time.
Do I have hardware that can support this satisfactorily?
I know this is a very broad question, and without info on the actual level of intensity that these apps are going to get worked, it may be impossible to answer.
My biggest concern is this-
If I set up LUNs on the AX4-5i for virtual hard drives and storage, am I going to find that the combination of 1Gb iSCSI links and slow SATA drives is simply not suitable? As I mentioned, nothing we host is really visible on a national, let alone worldwide, level. I think it would be rare for anything we host to see more than 1000 concurrent connections.
Dev Mgr
November 18th, 2010 08:00
First off; what virtualization solution are you exactly using?
- Windows 2008 R2 (Core or Full) with Hyper-V enabled, or
- Hyper-V server 2008 R2
The latter isn't supported by EMC for connection to their SANs, so I'd ditch it for sure if you want to use this for a production setup that your company is going to depend on (support nightmares if you ever have a connectivity problem, and it's not unlikely that you won't be able to install PowerPath at all).
Then there's your IOPS requirements that determine if your drive configuration can sustain the performance that you need.
A 7200rpm drive gets ~80 IOPS. So, as an example, a 4+1 RAID 5 yields ~320 IOPS. This obviously assumes you aren't running anything else on these drives, or the IOPS would have to be shared between the different applications/virtual machines.
What are your IOPS requirements? While you're looking at your current setup to get this info, also look at the read-to-write ratio. If you find that 30% or more of your I/O is writes, you really should consider RAID 10 to prevent the RAID 5 parity calculation from hurting your write performance and slowing down the application/VM overall. This does cost you spindles, of course (to get the same ~320 IOPS would require 4+4 drives in RAID 10, so that's 3 extra drives).
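To make that arithmetic concrete, here's a rough sketch of the rule-of-thumb calculation with the write penalty folded in. The numbers are ballpark figures from this thread (~80 IOPS per 7200rpm spindle; a RAID 5 write costs ~4 back-end I/Os, a RAID 10 write ~2), and the 70/30 read/write mix is just an illustrative assumption, not a measurement:

```python
# Back-of-envelope host IOPS for a RAID group (illustrative, not vendor specs).
#   backend_iops = drives * per_drive_iops
#   host_iops    = backend_iops / (read_fraction + write_fraction * write_penalty)
def host_iops(drives, per_drive_iops, read_fraction, write_penalty):
    backend = drives * per_drive_iops
    write_fraction = 1.0 - read_fraction
    return backend / (read_fraction + write_fraction * write_penalty)

# 4+1 RAID 5 of 7200rpm SATA (~80 IOPS each), RAID 5 write penalty = 4
r5 = host_iops(5, 80, read_fraction=0.7, write_penalty=4)
# 4+4 RAID 10 of the same drives, RAID 10 write penalty = 2
r10 = host_iops(8, 80, read_fraction=0.7, write_penalty=2)
print(f"4+1 RAID 5: ~{r5:.0f} host IOPS, 4+4 RAID 10: ~{r10:.0f} host IOPS")
```

Note how the write penalty drags the front-end number well below the raw spindle count times 80 once writes are in the mix; that's why the read-to-write ratio matters so much here.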
Another factor is your iSCSI network. Are you using (planning to use) dedicated switches or isolated VLANs and dedicated NICs on each server, or were you planning to just put it on your LAN?
DKLoki
November 18th, 2010 08:00
I am running Server 2008 R2 with Hyper-V enabled... currently a Full install; I may try Core at some point.
I can't speak to the IOPS requirements... I haven't done any performance monitoring on these apps in their current state, but I plan to.
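For when I do get around to measuring: one way to get the IOPS and read-to-write numbers asked about above is to log the Windows PhysicalDisk counters with typeperf and crunch the CSV. A minimal sketch of the crunching step (the sample CSV below is invented for illustration; the counter paths are the standard PhysicalDisk counters):

```python
import csv, io

# Hypothetical sample of the CSV that Windows' typeperf emits for, e.g.:
#   typeperf "\PhysicalDisk(_Total)\Disk Reads/sec" ^
#            "\PhysicalDisk(_Total)\Disk Writes/sec" -sc 30 -o disk.csv
# The data values below are made up for illustration.
sample = """\
"(PDH-CSV 4.0)","\\\\HOST\\PhysicalDisk(_Total)\\Disk Reads/sec","\\\\HOST\\PhysicalDisk(_Total)\\Disk Writes/sec"
"11/18/2010 08:00:01","120.5","45.2"
"11/18/2010 08:00:02","98.0","60.1"
"11/18/2010 08:00:03","110.3","52.7"
"""

rows = list(csv.reader(io.StringIO(sample)))[1:]  # skip the header row
reads  = [float(r[1]) for r in rows]
writes = [float(r[2]) for r in rows]

avg_reads  = sum(reads) / len(reads)
avg_writes = sum(writes) / len(writes)
avg_iops   = avg_reads + avg_writes
write_pct  = 100 * avg_writes / avg_iops

print(f"avg IOPS ~{avg_iops:.0f}, writes ~{write_pct:.0f}% of I/O")
```

Sampling over a real busy period (not an idle one) is what makes these averages meaningful, and the write percentage feeds straight into the RAID 5 vs RAID 10 decision above.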
The iSCSI network is dedicated to just iSCSI network traffic and consists of the following:
Each blade has a dual 1Gb-port mezzanine card with iSCSI offload on the 'B' fabric. There are M6220 switch modules in the B slots, and the 4 ports of Storage Processors A and B of my AX4 connect directly to the external ports on these M6220s.
DKLoki
November 26th, 2010 08:00
So...
I feel like I'm in school and I'm asking someone else to do my homework for me, but here goes...
No information on the IOPS requirements of the app these servers are going to run. It's a FoxPro database, which will be accessed by perhaps 100 people at a time, max. I don't know the size of the database, number of tables, records, etc. I'm trying to work with what I have and put together something that would meet the needs of a variety of smallish databases.
I am thinking of taking 8 or 10 disks and making a RAID 10 group. From there, I will split it up into LUNs for 3 VMs.
Should I create a separate LUN for each VM, or put all 3 on one? Since separate LUNs would still be on the same RAID group, would it really make a difference?
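Running the rule-of-thumb numbers from earlier in the thread against this layout, just to sanity-check it (ballpark only: ~80 IOPS per 7200rpm spindle, RAID 10 write penalty of 2, and an assumed 70/30 read/write mix, since I haven't measured yet):

```python
# Rough per-VM I/O budget for the proposed 8- or 10-disk RAID 10 group,
# shared by 3 VMs. All figures are illustrative rules of thumb.
def raid10_host_iops(drives, per_drive_iops=80, read_fraction=0.7):
    backend = drives * per_drive_iops
    # RAID 10 write penalty = 2 back-end I/Os per host write
    return backend / (read_fraction + (1 - read_fraction) * 2)

for drives in (8, 10):
    total = raid10_host_iops(drives)
    print(f"{drives}-disk RAID 10: ~{total:.0f} host IOPS, "
          f"~{total / 3:.0f} per VM if 3 VMs share the group")
```

Splitting the group into 3 LUNs doesn't change this total; all 3 VMs draw from the same set of spindles either way, which is really what my question comes down to.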
Or how about this: if you were in my situation, what would you configure? This particular app is not high profile enough to be worth throwing more than a dozen disks at, but I want it to have acceptable performance.
I appreciate any input.