
Solved!


June 19th, 2017 16:00

FAST.X - What's it really like?

I wasn't sure where to ask, but this has been bugging me for some time and I'd like to get other admins' and engineers' opinions or experience (apologies if this isn't the right community). I've been reading white papers, articles, and even discussions on this, and FAST.X (previously FTS/Federated Tiered Storage?) seems like a great solution. We have a 250F and a 200K in our environment and both have sufficient cache to support it (1.5 TB); however, when I talk to our EMC group or even consultants, they write off FAST.X/FTS as old technology that's unnecessary now with flash. I feel strongly that this isn't the case, but the only knowledge I have is theoretical, plus quick info sessions from EMC World. It seems, though, that this is where VMAX really brings value: being able to extend its features to otherwise dumb arrays. I haven't been able to find much material on it after 2015.

We manage some environments where I'd like to tier; we have XtremIO and VNX 8000s I'd like to federate behind VMAX, and I'd possibly like to start stretching clusters between our two data centers with SRDF. We also have VPLEX, but I haven't been too happy with its feature set and lack of caching (somehow migrating to it from our SVC added latency). It's certainly more complex than this, but that's the short version.

I'd like to hear people's experiences using this technology, maybe some pros and cons, or perhaps why it is or isn't worth doing.

9 Legend • 20.4K Posts

June 20th, 2017 10:00

There is a limit on how many external LUNs you can present to the DX ports per engine. When I was setting it up, it was 2048 per engine. I needed about 200 TB of logical capacity on the VMAX, so I created roughly 400 x 500 GB LUNs on XtremIO and presented 200 to each VMAX engine. Get your hands on the document "Design and Implementation Best Practices for EMC FAST.X"; it's an excellent read.
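For anyone planning a similar layout, here's a rough back-of-the-envelope version of that arithmetic as a short Python sketch. The 2048-per-engine limit, 500 GB LUN size, and two-engine spread are taken from the setup described above; the helper itself is purely illustrative (not an EMC tool), so verify the limits for your code level in the FAST.X best practices guide.

```python
# Back-of-the-envelope FAST.X external LUN layout check (illustrative only).
# Assumes the per-engine external LUN limit quoted above (2048) and an even
# spread of LUNs across engines.

import math

def plan_external_luns(target_tb, lun_gb=500, engines=2, per_engine_limit=2048):
    """Return (total_luns, luns_per_engine) needed for target_tb of capacity."""
    total_luns = math.ceil(target_tb * 1024 / lun_gb)   # e.g. 200 TB -> 410 x 500 GB
    per_engine = math.ceil(total_luns / engines)         # spread across engines
    if per_engine > per_engine_limit:
        raise ValueError(
            f"{per_engine} LUNs per engine exceeds the {per_engine_limit} limit; "
            "use larger LUNs or more engines"
        )
    return total_luns, per_engine

# The layout described above: ~200 TB as 500 GB LUNs behind a 2-engine VMAX.
print(plan_external_luns(200))   # -> (410, 205), comfortably under 2048 per engine
```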

9 Legend • 20.4K Posts

June 20th, 2017 05:00

We are using FAST.X with a VMAX 200K front-ending XtremIO. Our use case is a pretty large medical records database (>40 TB), and we need to keep multiple copies of it. The customer requires SRDF/S for DR/BC functionality, so this was the solution we purchased in 2015. It has proven to be very economical (great dedupe) and surprisingly easy to deploy and manage. The VMAX 200K was sized appropriately (memory-wise) to handle XtremIO behind it, and we configured DX ports that were zoned to the XtremIO ports. At that point I created volumes on XtremIO and presented them to the VMAX 200K as if it were just another "host". The VMAX 200K discovered those XtremIO volumes and created a brand-new SRP (SRP_XIO). That's it; now I provision VMAX devices from that SRP as if they were internal devices. When VMAX devices get deleted, I can see the logical and physical utilization on XtremIO decrease as well.
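To make the provisioning side a bit more concrete, here's a small, purely illustrative Python sketch of the kind of subscription check you might do before carving additional TDEVs out of an external SRP like SRP_XIO. The function name, the sample numbers, and the oversubscription ratio are my own assumptions for illustration, not anything FAST.X or Unisphere exposes under these names.

```python
# Illustrative subscription check before provisioning TDEVs from an external SRP
# (e.g. the SRP_XIO pool described above). Names and the oversubscription cap
# are hypothetical; real limits come from your VMAX/SRP configuration.

def can_provision(srp_usable_tb, already_subscribed_tb, new_tdevs_tb,
                  max_oversubscription=1.0):
    """Check whether new thin devices fit under the SRP's subscription cap."""
    subscribed_after = already_subscribed_tb + sum(new_tdevs_tb)
    limit = srp_usable_tb * max_oversubscription
    return subscribed_after <= limit, subscribed_after, limit

# Example: a ~200 TB external SRP with 120 TB already subscribed, adding three
# 40 TB copies of the same database. Because XtremIO dedupes near-identical
# copies so well, allowing some oversubscription (here 3:1) can be reasonable.
ok, subscribed, limit = can_provision(200, 120, [40, 40, 40],
                                      max_oversubscription=3.0)
print(ok, subscribed, limit)   # -> True 240 600.0
```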

21 Posts

June 20th, 2017 08:00

How did you go about sizing the amount of capacity you could put behind the VMAX? We're currently feeding DB workloads to XtremIO, but through VPLEX; if there's an opportunity to improve performance, I'd rather place it behind VMAX. I suppose it doesn't matter much with XtremIO, but when you presented to the VMAX, did you create a series of LUNs or just a single large LUN, and then build multiple TDEVs on top of those resources?

I appreciate the feedback; I think this would be great to implement in our environment. I have already scheduled the BIN file change to switch the port emulation and am looking forward to the process.
