August 17th, 2016 23:00
VNX LUN Migration - performance impact on source LUN
Hi,
I did a LUN migration test from the Unisphere Client UI. I wanted to see what performance impact, if any, the migration has on the source LUN, but the test results seem to show the opposite of what I expected. The performance test output is shown below for your interpretation.
Scenario:
Source LUN is a 500GB thick LUN in a new Pool1 backed by 3 x R5_4+1 SAS 10K drives.
Target LUN is a 500GB thick LUN in a new Pool2 backed by 3 x R10_4+4 SAS 10K drives.
Both pools have no other LUNs, save for the above.
No FAST Cache. No SSD.
Source LUN is presented to VMware ESXi as a VMFS datastore; a powered-on Windows VM runs on it.
Diskspd.exe is run in the VM for 10 seconds to set a baseline measurement.
Diskspd.exe is run in the VM for 1 hour (same parameters), starting at the same time as the migration, which took 3 hours to complete.
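For anyone who wants to reproduce the test, a diskspd invocation along these lines would produce the same kind of output (the block size, thread count, queue depth, read/write mix and target file path are illustrative placeholders, not the exact parameters I used):
diskspd.exe -c50G -d10 -b8K -t4 -o8 -r -w30 -L D:\testfile.dat      (10-second baseline)
diskspd.exe -c50G -d3600 -b8K -t4 -o8 -r -w30 -L D:\testfile.dat    (1-hour run during migration)
The -L switch is what produces the latency percentile breakdown shown below.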
10-sec Diskspd output:
585.39 IOPS, Average latency 1.692ms
%-ile Read (ms) Write (ms) Total (ms)
min 0.020 0.019 0.019
25th 1.006 0.030 0.038
50th 3.059 0.038 0.101
75th 4.766 0.051 3.046
90th 6.086 0.070 5.076
95th 7.615 0.085 6.059
99th 12.441 0.197 9.712
3-nines 32.488 1.962 24.127
4-nines 35.307 2.030 35.307
** truncated... 35.307ms from here onwards
1-hour Diskspd output:
63405.89 IOPS, Average latency 0.013ms
%-ile Read (ms) Write (ms) Total (ms)
min 0.000 0.000 0.000
25th 0.009 0.010 0.009
50th 0.009 0.011 0.010
75th 0.011 0.013 0.012
90th 0.017 0.019 0.018
95th 0.022 0.024 0.023
99th 0.032 0.035 0.034
3-nines 0.096 0.067 0.074
4-nines 5.118 0.903 3.963
5-nines 10.437 1.838 8.029
6-nines 32.265 8.086 25.950
7-nines 100.593 20.107 79.159
8-nines 173.966 88.929 146.311
9-nines 200.945 140.639 200.945
What explains the high IOPS and low latency during the LUN migration?
Thanks.
- KC
kcong
September 6th, 2016 01:00
I managed to do another round of measurements with hardware and software caching disabled.
Before LUN migration started:
672.27 IOPS, Average latency 1.474ms
99th %-ile = 9.232ms
3-nines = 37.597ms
4-nines = 140.384ms
max = 140.384ms
During LUN migration at the ASAP rate:
626.80 IOPS, Average latency 1.585ms
99th %-ile = 12.004ms
3-nines = 41.780ms
4-nines = 97.886ms
5-nines = 174.939ms
6-nines = 1036.318ms
max = 6772.611ms
So performance does take a slight hit on the averages, and a larger one in the latency tail.
Thanks!
storagtetalk
August 18th, 2016 10:00
Just a guess, but I think you are mostly hitting cache during the second run. During the first run you were reading largely from disk; by the second run most of your data set is in cache, so your read performance looks much better.
Write performance may have improved during the second run because the array is destaging to RAID 1/0 instead of RAID 5, which is generally more efficient for writes.
kcong
August 19th, 2016 08:00
Thank you. That makes sense. I realized the diskspd parameter for disabling hardware and software caching was left out. I need to do another round of measurements to see the real stats.
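If I am reading the diskspd documentation correctly, adding -Sh disables both software caching and hardware write caching on the target, so the rerun should look something like this (all parameters other than -Sh are illustrative placeholders again):
diskspd.exe -c50G -d3600 -b8K -t4 -o8 -r -w30 -Sh -L D:\testfile.dat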
kelleg
September 8th, 2016 08:00
Just a quick note on the speed setting for the migration: setting it to ASAP will use all the free resources on the array. If your array is not doing anything else, the migration should run as fast as possible; if your array were in production, that would impact the other workloads running on it.
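If you would rather control the copy rate from the CLI, it can be set when the migration starts and changed while it is running with naviseccli, something like the following if I remember the syntax correctly (the SP address and LUN numbers are just examples):
naviseccli -h <SP_IP> migrate -start -source 25 -dest 26 -rate medium
naviseccli -h <SP_IP> migrate -modify -source 25 -rate low
naviseccli -h <SP_IP> migrate -list
The last command shows the state and percent complete of any running migrations.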
Also, please mark this question as answered - that will help others looking for similar information.
glen