June 13th, 2018 09:00

SCv3000 Configuration Issues

Hi All

I know, I know, I should ask my Dell rep for help with this, but I have, multiple times, and my rep just isn't listening. We worked out a config over the phone, but given previous mistakes my confidence is low, and I now fear I'll end up with a dud. An expensive dud. I'm looking to the community for practical advice.

From what I've read, the SCv3000 appears to be an ideal tool for a 'slim and trim' modernization project. I've done a LiveOptics data grab on my 4-node ESXi cluster and know my numbers. I need a bit under 8TB of usable storage. IOPS peak at 5,200 and the 95th percentile sits at a hair under 2,000 IOPS. The read/write ratio favours writes for most of the day, at roughly 70% write / 30% read, but in the evening the character flips to 70% read / 30% write, no doubt a reflection of nightly backups. Daily change is around 500GB, but growth is very low, which explains the modest 8TB requirement.

The initial offer from our rep was 28 x 1.8TB HDD, which makes little sense for an 8TB ask. I'm thinking 7 x 480GB RI SSD for T1 and 10 x 900GB 15k HDD for T2. The thinking is that if the machine is writing 500GB per day with little change in overall size, the RI SSD tier would have enough space to soak that up, with the lower tier holding whatever isn't highly active. While MU SSDs would technically be better for writes than RI SSDs, given the low IOPS requirement even RI drives are going to have a ton of headroom on the IOPS side.
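As a sanity check on that headroom claim, here's a rough sketch of the tier IOPS math. The per-drive IOPS figures are assumptions for illustration (they vary by drive model and workload), not vendor specs:

```python
# Back-of-envelope IOPS headroom check for the proposed tiering.
# Per-drive IOPS figures below are assumed placeholder values,
# not measured or vendor-quoted numbers.
T1_DRIVES_ACTIVE = 6        # 7 x 480GB RI SSD, one held as spare
SSD_IOPS_EACH = 10_000      # assumed conservative figure for an RI SSD
T2_DRIVES_ACTIVE = 9        # 10 x 900GB 15k HDD, one held as spare
HDD_15K_IOPS_EACH = 180     # assumed typical figure for a 15k HDD

t1_iops = T1_DRIVES_ACTIVE * SSD_IOPS_EACH    # 60000
t2_iops = T2_DRIVES_ACTIVE * HDD_15K_IOPS_EACH  # 1620
peak_required = 5_200                           # from LiveOptics

print(f"T1 aggregate IOPS: {t1_iops}")
print(f"T2 aggregate IOPS: {t2_iops}")
print(f"Peak covered by T1 alone: {t1_iops > peak_required}")  # True
```

Even with these rough numbers, T1 alone covers the 5,200 IOPS peak many times over, which is why RI drives look sufficient despite their lower write endurance.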

I got the impression the 8TB requirement was a low ask. I wonder if I can get the performance I need without going overboard on storage size and avoid cramming it full of SSDs and killing the project on price . . .

Any feedback would be great. I imagined the experience with Dell would have been more positive, as it had been in the past, but being offered 48TB when I need 8, I wonder where the disconnect happened.

Moderator • 7.6K Posts

June 15th, 2018 11:00

Hello FAdmin,

With the setup you are looking at, it should be fine. The only possible issue I can see is that, since you will be using tiered storage, you may run out of space on your tier 1 storage before Data Progression can start to move data to tier 2. Data Progression can take a few days to begin moving data between tiers. When you first put data on your SCv3000, all data is written to tier 1. After Data Progression has run for a few days, it will move data that is not being accessed often down to tier 2.
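A quick way to see why this matters is to estimate how many days of churn tier 1 can absorb before Data Progression catches up. This sketch assumes writes land in RAID 10 on T1 (2x raw consumption) and 6 active 480GB SSDs; both are assumptions about the eventual config, not values read from the array:

```python
# Hedged estimate: days of write churn T1 can hold before filling,
# if Data Progression has not yet demoted anything.
t1_raw_gb = 6 * 480          # active RI SSDs (7 minus 1 spare), assumed
raid10_multiplier = 2        # RAID 10: every 1 GB of data uses 2 GB raw
t1_usable_gb = t1_raw_gb / raid10_multiplier   # 1440 GB
daily_change_gb = 500        # from the LiveOptics capture

days_until_full = t1_usable_gb / daily_change_gb
print(f"T1 usable at RAID 10: {t1_usable_gb:.0f} GB")
print(f"Days of 500 GB/day churn before T1 fills: {days_until_full:.1f}")  # 2.9
```

Under those assumptions T1 holds fewer than three days of churn, which is right in the window where Data Progression may not yet have started demoting data.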

Please let us know if you have any other questions.

2 Intern • 230 Posts

June 18th, 2018 18:00

When planning a system, it's important to include not only the space to be used but also the RAID overhead. So with the disks you plan:
7 x 480GB RI SSD for T1, and 10 x 900GB 15k HDD for T2

The RI SSD tier will have 6 active and 1 spare, for 2,880 GB or 2.8125 TB of total space.
The 15k tier will have 9 active and 1 spare, for 8,100 GB or 7.910 TB of total space.

Single Redundancy
RAID 10 has 100% overhead, so every 1 TB of data uses 2 TB of disk space.
RAID 5-5 (4 data + 1 parity) has 25% overhead, so every 1 TB of data uses 1.25 TB of disk space.
RAID 5-9 (8 data + 1 parity) has 12.5% overhead, so every 1 TB of data uses 1.125 TB of disk space.

Dual Redundancy
RAID 10 Dual Mirror has 200% overhead, so every 1 TB of data uses 3 TB of disk space.
RAID 6-6 (4 data + 2 parity) has 50% overhead, so every 1 TB of data uses 1.5 TB of disk space.
RAID 6-10 (8 data + 2 parity) has 25% overhead, so every 1 TB of data uses 1.25 TB of disk space.

You may end up undersizing your system if you do not account for the actual space consumed after RAID overhead.
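The sizing math above can be sketched in a few lines: raw tier space minus spares, divided by the raw-per-TB multiplier for the chosen RAID level. The multipliers follow the data+parity layouts listed above; the tier configurations are the ones proposed in this thread:

```python
# Usable-capacity sketch: (drives - spares) * size, then divide by the
# RAID multiplier (raw space consumed per 1 TB of data).
RAID_MULTIPLIER = {
    # single redundancy
    "RAID 10": 2.0,
    "RAID 5-5": 1.25,    # 4 data + 1 parity
    "RAID 5-9": 1.125,   # 8 data + 1 parity
    # dual redundancy
    "RAID 10-DM": 3.0,
    "RAID 6-6": 1.5,     # 4 data + 2 parity
    "RAID 6-10": 1.25,   # 8 data + 2 parity
}

def usable_tb(drives, size_gb, spares, raid):
    """Usable TB for a tier after spares and RAID overhead."""
    raw_gb = (drives - spares) * size_gb
    return raw_gb / 1024 / RAID_MULTIPLIER[raid]

# Proposed tiers from the thread, assuming RAID 10 on T1 and RAID 5-9 on T2:
print(f"T1 at RAID 10:  {usable_tb(7, 480, 1, 'RAID 10'):.2f} TB usable")   # 1.41
print(f"T2 at RAID 5-9: {usable_tb(10, 900, 1, 'RAID 5-9'):.2f} TB usable") # 7.03
```

Under those assumed RAID choices the two tiers land at roughly 8.4 TB usable combined, which is tight against the 8 TB ask, illustrating the point about not sizing from raw capacity.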
