April 13th, 2015 19:00
TDAT number in pools within a VP Tier - Same or uneven?
Afternoon,
I have looked through three different FAST VP guides, but can't seem to find an answer to this question. EMC charges us by how many TDATs we have tied to each thin pool, so we want to keep that number as low as possible. In each of our VP tiers we have two thin pools made up of exactly the same type of disks; however, binding and FAST make the utilization uneven between the two pools in the tier.
My suggestion is that we can then have a different number of TDATs in each pool to match its utilization and keep the total as low as possible; others suggest we should keep the numbers the same to prevent performance from being skewed. Again, I can't find any best practice advice beyond "FAST will sort it out." Does anyone have documentation on whether there is an impact from keeping uneven pool sizes within a tier?
John Fjeldberg
Quincy561
April 14th, 2015 06:00
I think the best practices have been pretty clear, at least from me :-D
8 TDATs per disk, or the minimum number needed to fully use the capacity. Use even numbers for mirrored devices.
Every disk in the pool should have the same number of TDATs, and every TDAT should be roughly the same size. Having one TDAT slightly smaller because of internal devices such as vault or SFS is OK (or all the TDATs on those drives slightly smaller).
I talked about this last year at EMC World; this topic starts at about 40:28 into the presentation.
http://vts.inxpo.com/Launch/Event.htm?DisplayItem=E127807&ShowKey=19337
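As a rough illustration of those rules (a sketch only; the device count, size, and target pool below are hypothetical or borrowed from later in this thread, and you would size TDATs to your own drives), creating a set of equal-size RAID-5 (3+1) DATA devices with symconfigure looks something like:
C:\Windows\system32>symconfigure -sid 72 -cmd "create dev count=8, size=4602, emulation=FBA, config=RAID-5, data_member_count=3, attribute=datadev, in pool MEL2_FC_01, member_state=ENABLE;" preview
Running with preview (rather than commit) just validates the request against the array without changing anything.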
Quincy561
April 14th, 2015 07:00
I see now that you are using a capacity-on-demand model. In that case I would still follow the 8-hypers-per-disk, use-all-of-the-disk method, but simply add full disks every time you need to add capacity, then rebalance when you add a new group of disks to the pool. Otherwise you are likely to end up with a significant imbalance across your drives.
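For reference, a sketch of that add-then-rebalance flow (the device range here is hypothetical; the pool name is taken from later in this thread):
C:\Windows\system32>symconfigure -sid 72 -cmd "add dev 1A00:1A1F to pool MEL2_SATA_01 type=thin, member_state=ENABLE;" commit
C:\Windows\system32>symconfigure -sid 72 -cmd "start balancing on pool MEL2_SATA_01;" commit
The balancing pass redistributes existing extents across all enabled TDATs, old and new.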
NoDecaf
April 14th, 2015 21:00
Quincy,
thanks for your reply, and I enjoyed watching your presentation. I'm not sure you fully understood the question, or perhaps I didn't ask it right. The vendor has already carved out all the TDATs/hypers possible in the system and created them in a way that is supposed to be optimal for our IO requirements. However, they have also created two thin pools for each tier, e.g. SATA_01 and SATA_02, which have both been put into one VP tier. As the thin pools fill up, we add more TDATs as needed, which is when we get charged more. What I can't find is whether it is recommended that the number of TDATs in each pool within the same VP tier be the same.
Again, appreciate your comments!
Quincy561
April 15th, 2015 05:00
Every disk pool should have disks of the same size, TDATs of roughly the same size, and the same number of TDATs per disk. When TDATs are added, a rebalance should be performed immediately, before new allocations take place.
You can have different pool configurations mixed in the same tier (for example 200GB EFDs and 400GB EFDs), but they should not be mixed in the same pool.
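One way to check whether the TDATs in a pool are evenly used after such a rebalance (pool name from this thread) is the per-device detail view, which lists each enabled DATA device with its usable and used tracks:
C:\Windows\system32>symcfg show -pool MEL2_FC_01 -thin -detail -sid 72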
Quincy561
April 16th, 2015 07:00
Maybe if I had the IMPL bin file, I could better understand the current configuration.
You can send me the serial # in a private message.
Allen Ward
April 16th, 2015 07:00
I'm going to jump in here as well because I think you are right that Quincy is kind of missing the point of the question. It's quite possibly because the configuration I think you are describing on your array doesn't make a whole lot of sense. Let me rephrase to make sure I'm understanding it correctly...
Within a single FAST VP Tier you have two pools. Those two pools contain drives of exactly the same configuration (size, technology, capacity). You want to know if there are negative performance impacts to having a different number of TDATs in each of the two pools in that tier.
What bothers me about how I understand the question is that I can't think of a logical explanation for why they would have built two pools the same in a single tier. In our environment we have two pools per tier on one of our arrays, but it is because we originally deployed with 200GB EFDs, 450GB FC, and 1TB SATA drives. When we doubled the engine count in our 20K from 4 to 8 we made a significant capacity purchase and started using 400GB EFDs, 600GB FC, and 3TB SATA drives. In order to combine the new capacity with the existing we added new pools and put those pools in the matching tiers with the existing pools. In our case the two pools in the tiers are never going to match each other because the TDATs are different sizes and one pool will grow while the other won't. We can't balance between pools, but over time FAST VP does that naturally.
Realistically, if you are letting FAST VP do its job, and you have a storage group associated with a policy that allows three tiers where each tier has two pools, then you will see devices consuming capacity across all six pools. FAST VP will shuffle things as required to keep performance optimal. The only point you should be especially careful of when working with multiple pools per tier is to "load balance" where TDEVs are initially bound. Since binding sets an "affinity" such that the bind pool is the first choice when new extents need to be allocated (and maybe the only choice if you haven't enabled bind by policy), you should avoid creating an artificial bottleneck in the array by loading up a single pool as the bind pool for everything.
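To make that concrete (a sketch only, with hypothetical TDEV ranges, and assuming I have the Enginuity 5876 attribute name right), binds can be spread across the two pools in a tier, and allocation by FAST policy can be enabled so that new extents may come from any pool in the policy rather than only the bind pool:
C:\Windows\system32>symconfigure -sid 72 -cmd "bind tdev 0A00:0A3F to pool MEL2_FC_01;" commit
C:\Windows\system32>symconfigure -sid 72 -cmd "bind tdev 0A40:0A7F to pool MEL2_FC_02;" commit
C:\Windows\system32>symconfigure -sid 72 -cmd "set symmetrix VP_allocation_by_FP=ENABLE;" commit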
Please let me know if I'm off the mark in interpreting your question. If I hit the mark, then maybe the different perspective will help Quincy see your point more clearly. He has far more engineering experience with these arrays than I do; I'm just a customer with lots of hands-on field experience, and he has access to data I don't, so he may have insight that I don't.
NoDecaf
April 16th, 2015 21:00
Allen,
You asked: "Is this by design? Does each storage group require completely different performance characteristics?"
I probably should have mentioned this is a shared system, and that it was set up to meet minimum IOPS requirements for our customers; e.g. MEL2_FSG_T1 supports somewhere between 500-600 IOPS (4K, I think) at < 10ms, or something along those lines, while the T2 tier guarantees around 300 IOPS. We also had two more systems set up in a different department, where they had the recommended one pool per tier.
Allen Ward
April 16th, 2015 21:00
I find it very odd that they configured each of your tiers with two pools of identical TDATs off identical drives. There must be some kind of story behind that, and maybe it would help answer the question.
One thing that jumps out at me from this listing, though, is that it appears you are creating individual FAST VP policies for each of the storage groups. Is this by design? Does each storage group require completely different performance characteristics? If the policies are the same, you should just have one policy and associate each of the storage groups with it. If you really do need different ones, do you need four different ones for the four storage groups? In our environment we have six policies for six very different performance scenarios. They handle several hundred storage groups among them. Two of those policies are barely ever used, and one more is only for the most demanding of applications. From the best practices the VMAX gurus tend to preach, it isn't unreasonable to think that some environments could easily get away with a single policy for everything (100/100/100).
NoDecaf
April 16th, 2015 21:00
Thanks Allen, I appreciate the help. I think you have clarified the question, and I think it's more to do with my ability to ask the question than Quincy's ability to answer it.
Anyway, after a bit more studying, it may have helped if I had specified that I was talking about FAST VP instead of FAST DP, which is where I think the confusion came in. In addition, I have included a couple of dumps below.
Now, I tried to send Q the Sym ID, but until he follows me I can't send a private message. As you can see from the FC pools, for example, they are 61%/86% full, and that's where I'm thinking I could reduce the number of TDATs in one of the pools, but I'm trying to understand what potential drawbacks that would incur. Again, the two pools in each tier are built from exactly the same type and number of drives. Let me know if I can provide any information besides what's below to clarify the situation:
C:\Windows\system32>symcfg list -pool -thin -sid
Symmetrix ID:
S Y M M E T R I X P O O L S
---------------------------------------------------------------------------
Pool Flags Dev Usable Free Used Full Comp
Name PTECSL Config Tracks Tracks Tracks (%) (%)
------------ ------ ------------ ---------- ---------- ---------- ---- ----
MEL2_EFD_01 TEFDEI RAID-5(3+1) 24793560 350712 24442848 98 0
MEL2_EFD_02 TEFDEI RAID-5(3+1) 24793560 997464 23796096 95 0
MEL2_FC_02 TFFDEI RAID-5(3+1) 277037712 106299060 170738652 61 0
MEL2_FC_01 TFFDEI RAID-5(3+1) 277037712 37930836 239106876 86 0
MEL2_SATA_01 TSFDEI RAID-6(6+2) 673283076 296513640 376769436 55 0
MEL2_SATA_02 TSFDEI RAID-6(6+2) 673283076 247521156 425761920 63 0
Total ---------- ---------- ---------- ---- ----
Tracks 1950228696 689612868 1260615828 64 0
C:\Windows\system32>symtier list -thin -sid
Symmetrix ID :
---------------------------------------------------------------------------
L I Logical Capacities (GB)
O Target n --------------------------
Tier Name C Tech Protection Emul c Enabled Free Used
--------------------- - ---- ------------ ---- - -------- -------- --------
MEL2_EFD_01 I EFD RAID-5(3+1) FBA S 3026 82 2944
MEL2_FC_01 I FC RAID-5(3+1) FBA S 33818 8803 25015
MEL2_SATA_01 I SATA RAID-6(6+2) FBA S 82188 33206 48982
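As a sanity check on these numbers (assuming the standard 64 KB FBA track): 277,037,712 tracks x 64 KB = 16,909 GB usable per FC pool, and 2 x 16,909 GB = 33,818 GB, which is exactly the MEL2_FC_01 tier's enabled capacity above. The same doubling holds for the EFD and SATA tiers, confirming that each tier contains both of its identically sized pools even though the tier carries the name of only one of them.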
C:\Windows\system32>symfast list -association -sid
Symmetrix ID :
--------------------------------------------------------------------------
Storage Group Name Policy Name Pri Flgs
R
-------------------------------- -------------------------------- --- ----
MEL2_FSG_T0 MEL2_FP_T0 2 .
MEL2_FSG_T1 MEL2_FP_T1 1 .
MEL2_FSG_T2 MEL2_FP_T2 2 .
MEL2_FSG_T3 MEL2_FP_T3 3 .
Allen Ward
April 17th, 2015 07:00
OK, but are the details of the policies assigned to each of the storage groups the same or different? If they are all the same, you should only have one policy. If they need to be different, then it needs to be the way it is.
Can you post the configuration of the FAST VP policies?
Quincy561
April 17th, 2015 07:00
Seems reasonable to have different FAST policies for different performance demands.
Allen Ward
April 17th, 2015 07:00
I don't disagree, Quincy, but when I see four storage groups, each with a policy named specifically for the SG it is associated with, it raises the question of whether the policies are actually different. That's why I'm suggesting looking at the details. With only four SGs and four policies it isn't that bad, but as arrays grow to hundreds of SGs it wouldn't work very well to have individual policies for each.
NoDecaf
April 20th, 2015 19:00
Hi Allen,
so, here are the configurations of two of the four FAST policies we use.
My reasoning for raising this question in the first place: since we want over-provisioned pools and as high a utilization as possible (as long as we meet the IOPS targets), we could run a different number of TDATs in each of the two pools within each VP tier. What concerns me more is meeting the read IOPS targets, since high over-provisioning means fewer spindles per GB.
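To put the spindle concern in rough numbers (rule-of-thumb figures, not from this system): a 15K FC drive is commonly rated at roughly 180 back-end IOPS, so a pool spread over 40 such drives tops out somewhere near 40 x 180 = 7,200 random read IOPS before cache helps. Concentrating the same data on fewer enabled TDATs/spindles lowers that ceiling proportionally, even when the capacity still fits.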
C:\Windows\system32>symfast -fp list -sid 72
---------------------------------------------
Policy Name Tiers Assocs
-------------------------------- ----- ------
MEL2_FP_T0 3 1
MEL2_FP_T1 3 1
MEL2_FP_T2 3 1
MEL2_FP_T3 3 1
C:\Windows\system32>symfast show -fp_name MEL2_FP_T1 -sid 72
Policy Name : MEL2_FP_T1
Emulation : FBA
Tiers(3)
{
-------------------------------------------------------------------------
L
Max SG O Target Flgs
Tier Name Type Percent C Tech Protection C
-------------------------------- ---- -------- - ----- ------------- ----
MEL2_EFD_01 VP 5 I EFD RAID-5(3+1) .
MEL2_FC_01 VP 42 I FC RAID-5(3+1) .
MEL2_SATA_01 VP 53 I SATA RAID-6(6+2) .
}
C:\Windows\system32>symfast show -fp_name MEL2_FP_T2 -sid 72
Policy Name : MEL2_FP_T2
Emulation : FBA
Tiers(3)
{
-------------------------------------------------------------------------
L
Max SG O Target Flgs
Tier Name Type Percent C Tech Protection C
-------------------------------- ---- -------- - ----- ------------- ----
MEL2_EFD_01 VP 2 I EFD RAID-5(3+1) .
MEL2_FC_01 VP 44 I FC RAID-5(3+1) .
MEL2_SATA_01 VP 54 I SATA RAID-6(6+2) .
}
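For what it's worth, the two dumps above do show genuinely different policies: MEL2_FP_T1 allows an associated SG up to 5/42/53 percent of its capacity on EFD/FC/SATA, while MEL2_FP_T2 allows 2/44/54, so these are not simply duplicate policies under different names.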
NoDecaf
April 22nd, 2015 18:00
Quincy,
just wondering if you had a chance to look at our system. Would you say the best practice is to keep both pools in sync, or to let each one grow according to its utilization?
Quincy561
April 23rd, 2015 06:00
Yes, I looked at it. The bin file looks normal, with the exception of having multiple pools with the same technology. Having 4 SATA pools is the one that could potentially cause issues if one pool gets really busy, especially one of the pools with only 40 drives.