April 27th, 2018 01:00
Compellent 3.84TB RI limited to 150MB/sec
Wondering if someone can answer this. We have 10x of the read-intensive 3.84TB SSDs in our Compellent system. We will never come close to the DWPD spec of these drives. Read performance is up to spec, but we seem to be hitting a top-end write limit of about 140-150MB/sec. Is this a limit of these drives? I would figure that even if a single drive were capped at that speed, a R10 spread over many drives would increase the write/ingest rate, unless it's only using two drives in a R1 pair for writes. Kind of stumped why we are seeing writes at 140-150MB/sec and reads well over 500-600MB/sec on an SSD array of this size.
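For a rough sense of why 150MB/sec looks low for this layout, here is a back-of-envelope sketch (the per-drive write figure is an assumption for illustration, not a spec for these drives):

    # RAID 10 mirrors every write, so aggregate write bandwidth is roughly
    # the per-drive sustained write rate times half the drive count.
    drives = 10
    per_drive_write_mb_s = 400   # assumed sustained write per RI SSD (illustrative)

    expected_mb_s = (drives / 2) * per_drive_write_mb_s
    observed_mb_s = 150
    print(f"expected ~{expected_mb_s:.0f} MB/s, observed {observed_mb_s} MB/s "
          f"({observed_mb_s / expected_mb_s:.0%} of expected)")

Even with a conservative per-drive number, a ten-drive R10 should land well above 150MB/sec, which is why a single-pair bottleneck or a path limit is worth ruling out.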
edfang
April 27th, 2018 07:00
I will probably have to do that, because I know these are TLC SSDs, which have a funky ~150MB/sec limit once writes get past the cache. We see pretty good sustained write performance, but once it hits a certain point it drops down to something like 25MB/sec and hangs for a second or two; sometimes it ramps back up, and sometimes it just hovers at 130-156MB/sec.
I definitely think something is funny, because if I create a Tier 3-only volume (which is R10 on spinning disks)...
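One way to see where the cliff is: a minimal sustained-write sketch that reports throughput per interval, so the drop past the cache shows up in the numbers (the file path and sizes are placeholders; run it against a test volume, not production):

    import os, time

    PATH = "E:\\writetest.bin"   # hypothetical test file on the SAN volume
    BLOCK = 1024 * 1024          # 1 MiB sequential writes
    INTERVAL = 5                 # seconds between reports
    TOTAL = 50 * 1024**3         # 50 GiB, enough to blow past any write cache

    buf = os.urandom(BLOCK)      # incompressible data, so compression can't flatter the result
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | getattr(os, "O_BINARY", 0))
    written = window = 0
    t0 = time.monotonic()
    try:
        while written < TOTAL:
            os.write(fd, buf)
            os.fsync(fd)         # push it to the array, not just the OS cache
            written += BLOCK
            window += BLOCK
            now = time.monotonic()
            if now - t0 >= INTERVAL:
                print(f"{window / (now - t0) / 1e6:7.1f} MB/s")
                window, t0 = 0, now
    finally:
        os.close(fd)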
edfang
April 27th, 2018 08:00
Yes, very true. This was just large-block sequential write, not random; random is certainly an entirely different beast. We also have to get past the Windows Server cache, since it tends to buffer writes in memory before flushing them to disk. So small transfers look great, but you have to factor in that Windows is caching the data before it does the write via iSCSI to the SAN. But if we do a sustained one-hour transfer after the cache pools are flooded, the write speed just averages out to 150MB/sec, which we find kind of strange for a R10 over SSD.
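The Windows cache effect is easy to demonstrate: time the same transfer with and without forcing it to stable storage. A rough sketch (paths are placeholders):

    import os, time

    def timed_write_mb_s(path, nbytes, flush):
        buf = os.urandom(1024 * 1024)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | getattr(os, "O_BINARY", 0))
        t0 = time.monotonic()
        for _ in range(nbytes // len(buf)):
            os.write(fd, buf)
        if flush:
            os.fsync(fd)         # don't stop the clock until the array has the data
        elapsed = time.monotonic() - t0
        os.close(fd)
        return nbytes / elapsed / 1e6

    size = 2 * 1024**3           # 2 GiB: small enough to fit largely in RAM cache
    print("cached :", round(timed_write_mb_s("E:\\t1.bin", size, flush=False)), "MB/s")
    print("flushed:", round(timed_write_mb_s("E:\\t2.bin", size, flush=True)), "MB/s")

The first number mostly reflects RAM speed; only the second says anything about the SAN.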
Compellentuser99
April 28th, 2018 23:00
Has your case been solved?
edfang
May 1st, 2018 13:00
Yes, mostly so. We're seeing about 400-500MB/sec on writes now; not sure if it was the IWN or something else, but writes are decent. Reads we see at 800+MB/sec. We're still going to look at adding new Tier 1 WI (write-intensive) drives, though.
dwilliam62
November 7th, 2018 12:00
Hello,
If you have not already opened a support case, please do so.
However, latency will always go up when you increase block size; the data will be acknowledged later. Small block sizes provide shorter latency but less throughput, while larger block sizes provide greater throughput at longer latency.
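A worked example of that tradeoff, using Little's law with illustrative numbers (none of these are measured values from this array):

    # At a fixed queue depth, latency per IO is queue_depth / IOPS (Little's law),
    # while throughput is block_size * IOPS: bigger blocks buy throughput with latency.
    queue_depth = 8

    for block_kb, iops in [(4, 50_000), (64, 12_000), (1024, 1_500)]:
        throughput_mb_s = block_kb * iops / 1024
        latency_ms = queue_depth / iops * 1000
        print(f"{block_kb:>5} KB blocks: {throughput_mb_s:6.0f} MB/s at ~{latency_ms:.2f} ms per IO")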
What kind of throughput are you seeing, and at what latency? What block sizes are you testing?
Regards,
Don
squebel
November 7th, 2018 12:00
We're seeing a similar issue on our SC5020 on volumes that live on all-SSD. I can get very good read/write numbers for small-block I/O, random or sequential, but as soon as we start hitting it with larger blocks, the latency goes through the roof.
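To put numbers on that, one possible sweep: time individual synchronous writes at increasing block sizes and compare medians (the path is a placeholder; point it at a test volume on the SSD tier):

    import os, statistics, time

    PATH = "E:\\latency_test.bin"   # hypothetical test file
    SIZES_KB = [4, 64, 256, 1024]
    WRITES_PER_SIZE = 200

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | getattr(os, "O_BINARY", 0))
    for kb in SIZES_KB:
        buf = os.urandom(kb * 1024)
        samples = []
        for _ in range(WRITES_PER_SIZE):
            t0 = time.monotonic()
            os.write(fd, buf)
            os.fsync(fd)            # measure latency to stable storage, not to RAM
            samples.append((time.monotonic() - t0) * 1000)
        print(f"{kb:>5} KB: median {statistics.median(samples):.2f} ms")
    os.close(fd)

If median latency scales roughly linearly with block size, that is the expected tradeoff Don describes; a sudden nonlinear jump points at something else, such as queue saturation, cache exhaustion, or a path issue.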