November 6th, 2011 11:00
Nexus 5010 vs MDS 9148
Imagine you are building a mostly green-field storage deployment in a Cisco "Nexus" datacenter. You are in the market for a pair of switches for a "storage fabric". Consider the following:
-FC is the preferred storage access protocol
-Your HP blade infrastructure consolidates your port density requirements, and you can't afford director-class switching availability, so you go with a 1U fabric switching option
-Your storage and network teams are the same, BUT ownership of the Nexus (non-storage) equipment will still rest mostly with outside vendors for the next three years
-You want to entertain a metro cluster storage option like VPLEX or SVC
-Your management is extremely wary of FCoE, even at the ToR access layer, especially since the savings aren't a no-brainer that would justify adopting a still-maturing technology.
What switching option would you go with?
The N5K simply doesn't have much native FC port density, so you are forced to use FCoE to reach your FC SAN as you scale out. On the other hand, investing in a strictly native FC fabric seems to limit the DC's potential to consolidate different access protocols onto one storage fabric over time.
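To make the trade-off concrete: FCoE access on an N5K boils down to mapping an FCoE VLAN to a VSAN and binding a virtual FC interface to the Ethernet port facing the CNA. A minimal NX-OS sketch, with made-up VLAN/VSAN numbers and interface IDs:

    feature fcoe
    ! map FCoE VLAN 100 to VSAN 10
    vlan 100
      fcoe vsan 10
    vsan database
      vsan 10
    ! virtual FC interface riding on the server-facing 10GbE port
    interface vfc10
      bind interface Ethernet1/10
      no shutdown
    vsan database
      vsan 10 interface vfc10
    ! the Ethernet port must trunk the FCoE VLAN
    interface Ethernet1/10
      switchport mode trunk
      switchport trunk allowed vlan 1,100

With native FC you skip all of that and just cable FC ports, which is part of what makes management nervous about the extra FCoE moving parts.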
It is in the storage admin's best interest to keep some autonomy over "his" switches, but the 5010s would still need uplinks to the N7K cores if unified I/O or a metro cluster were ever adopted.
Decisions...decisions...
giograves
November 11th, 2011 17:00
OK, no responses, so let me rephrase:
Has anyone used a 5010/5020 as a complete replacement for MDS switches? If not, what management considerations might I be missing?
Eric Bursley
November 16th, 2011 14:00
With the 50** or 55** Nexus switches, you are limited in what you can do in terms of fabric services. Using an MDS expands your fabric's capabilities to include multiple VSANs. From what I've seen, you need an MDS in your fabric to get full fabric capabilities.
For example, in a Vblock there are both MDS and Nexus switches. The array is configured for FC; the UCS servers connect to the 60** fabric interconnects, which uplink to the 5548 Nexus switches. The 5548s are then interconnected to the MDS, and VSANs are created and managed from Unified Infrastructure Manager.
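The VSAN and zoning work itself is only a handful of lines on the MDS. A rough sketch from memory, with hypothetical VSAN numbers, interfaces, and placeholder pWWNs:

    vsan database
      vsan 10 name FABRIC_A
      vsan 10 interface fc1/1
      vsan 10 interface fc1/2
    ! zone a hypothetical host HBA to an array front-end port
    zone name esx01_to_array_spa vsan 10
      member pwwn 10:00:00:00:c9:11:22:33
      member pwwn 50:06:01:60:44:55:66:77
    zoneset name fabric_a_zs vsan 10
      member esx01_to_array_spa
    zoneset activate name fabric_a_zs vsan 10

In a Vblock, UIM drives this for you, but it helps to know what is being pushed under the covers.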
jay_cuthrell
November 29th, 2011 21:00
To piggyback on Eric's points...
The cost of such a Nexus-only solution (today) is problematic. I could throw in a Nexus 1000V vs. Nexus 1010 story right now, but let's keep it storage-focused for a moment and bring it back to the network (whatever 'network' means long term) later.
To reach for a higher layer analogy... why not borrow the concepts from a pre-validated, pre-engineered, and successful design pattern that is in use today?
Historically, only the Vblock 1 (CX4-480) and Vblock 2 (VMAX) designs included MDS. Vblock 0 (NS-120) and Vblock 1U (NS-960) required an exception to gain MDS placement in the manufacturing-delivered Vblock designs. Oddly enough, customers wanted their SAN options, and the exceptions became rules. From a LAN perspective, the Nexus 5010 (Vblock 0) and Nexus 5020 (Vblock 1/1U) were optional elements in manufacturing-delivered Vblock designs.
Today, Vblock 300 (VNX) personalities are either Block or Unified, and as such there is always a pair of MDS. This is due to overwhelming customer desire for a SAN regardless of File/NAS ambitions. From a LAN perspective, the Nexus line moved ahead with Cisco's roadmap and hardware enablement in NX-OS, which meant the shift from the Nexus 50x0 to the Nexus 5548P and the currently shipping Nexus 5548UP. As for why the Nexus 5548UP isn't optional as in prior Vblock designs: since the option to convert from Block to Unified was envisioned, the dedicated ports for the VNX X-Blades were reserved on the Nexus 5548UP.
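A footnote on why the "UP" matters: any port on the 5548UP can be flipped between Ethernet and native FC, so the reserved storage ports don't need a separate FC expansion module. Roughly, from memory (the slot/port range is illustrative, a reload is required, and if I recall correctly the FC ports have to be a contiguous block at the high end of the slot):

    configure terminal
      ! convert the last eight unified ports to native FC
      slot 1
        port 25-32 type fc
      exit
    copy running-config startup-config
    reload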
Longer term (you know, like in a few days/weeks/months), there will always be progressive enhancements to Vblock, so if you didn't catch the VCE roadmap updates, let me know and I'll dig up a link. In a nutshell, though, you can expect to see unified network architectures arrive through enabling technologies and updates to the Vblock certification matrix.