Unsolved
1 Rookie
•
61 Posts
1
5904
December 12th, 2013 20:00
What should be the stripe size for a database LUN (traditional LUN)?
Hi Guys,
What should be the stripe size for a database LUN (traditional LUN)? Is there a recommended approach from EMC for setting a particular stripe size or element size?
Also, can we adjust the same values from the database side, e.g. in an Oracle database?
Regards,
Nishant Kohli
AnkitMehta
1.4K Posts
0
December 12th, 2013 21:00
Hi Nishant,
The default stripe element size for the CLARiiON is 64 KB. However, the stripe element size is not changeable.
Stripe size is calculated by multiplying the number of data drives in the stripe by the stripe element size.
You may also refer to http://www.emc.com/collateral/hardware/white-papers/h1024-clariion-metaluns-cncpt-wp-ldv.pdf
Ankit
Nishant_Kohli
1 Rookie
•
61 Posts
0
December 12th, 2013 21:00
I am sorry, I should have specified: the question was about the VNX stripe size for a database LUN.
I want to know whether EMC has any recommendation on what the stripe size should be for database LUNs, e.g. Data_disk, Log_disk, Temp_disk, etc.
For example, if we take a 4+1 R5 we get a stripe size of 256 KB (4 * 64). Is there any recommended RAID configuration for the databases above?
Also, we can choose the element size for traditional RGs, and I know how to do that using engineering mode. However, are there any scenarios where it would be beneficial to do so?
Regards
Nishant Kohli
Storage - Guitar Center, ITO | Desk: (818) 735-8800 ext 2835
Mobile – 818 584 5956
E: Nishant.Kohli@Guitarcenter.com
Roger_Wu
4 Operator
•
4K Posts
0
December 12th, 2013 21:00
The stripe element is the amount of contiguous data stored on a single disk of the stripe. Stripe elements are measured in 512-byte blocks or in kilobytes (KB). The default stripe element size is 128 blocks, which is 64 KB. The default element size has been optimized for the operating environment; attempts to "tune" it are likely to reduce performance.
The stripe size is the amount of user data in a RAID group stripe. It does not include drives used for parity or mirroring. The stripe size is measured in KB and is calculated by multiplying the number of data disks in the stripe by the stripe element size.
For example, an eight-disk RAID 1/0 has a stripe width of four; with a 64 KB stripe element size, its stripe size is 256 KB (4 * 64 KB). A five-disk RAID 5 (4+1) with a 64 KB stripe element size also has a stripe size of 256 KB.
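To make the arithmetic concrete, here is a minimal illustrative sketch in Python (not an EMC tool; the function name and values are mine):

# Stripe size = number of data disks in the stripe * stripe element size.
# Parity and mirror disks are excluded from the data disk count.
def stripe_size_kb(data_disks, element_kb=64):
    return data_disks * element_kb

print(stripe_size_kb(4))  # 4+1 RAID 5: 4 data disks -> 256 KB
print(stripe_size_kb(4))  # 8-disk RAID 1/0, stripe width 4 -> 256 KB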
You can get more information from the whitepaper "EMC Unified Storage System Fundamentals for Performance and Availability":
https://support.emc.com/docu35535_White_Paper:_EMC_Unified_Storage_System_Fundamentals_for_Performance_and_Availability.pdf?language=en_US
Sometimes we choose "power of two" stripe sizes (64 KB, 128 KB, 256 KB...) to match the host's or application's block size, but it is best to follow the application vendor's (Oracle Database, MS Exchange, etc.) recommendations.
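As a worked example of that matching, the Oracle parameters db_block_size and db_file_multiblock_read_count determine the largest multiblock read the database issues; the values below are assumptions for illustration only:

# Assumed values; check your own instance before relying on them.
db_block_size_kb = 8          # Oracle db_block_size (8 KB pages)
multiblock_read_count = 32    # db_file_multiblock_read_count
max_read_kb = db_block_size_kb * multiblock_read_count  # 256 KB

stripe_kb = 4 * 64            # 4+1 RAID 5 with 64 KB elements
# A multiblock read equal to the stripe size can be serviced as one
# full-stripe read across all four data disks:
print(max_read_kb == stripe_kb)  # True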
Anonymous User
67 Posts
0
January 24th, 2014 22:00
For your temp and undo tablespaces, database files, and logs:
If your application creates a large amount of temp activity, placing your temporary tablespace datafiles on RAID 1 devices instead of RAID 5 devices may provide a performance benefit due to RAID 1 having superior sequential I/O performance. The same is true for undo tablespaces as well if an application creates a lot of undo activity. Further, an application that creates a large number of full table scans or index scans may benefit from these datafiles being placed on a RAID 1 device.
RAID 5 is generally recommended for database files, due to storage efficiency. However, if the write I/O is greater than 30 percent of the total I/O, then RAID 1 (with Celerra striping) may provide better performance, as it avoids hot spots and gives the best possible performance during a disk failure. Random write performance on RAID 1 can be as much as 20 percent higher than on RAID 5.
Online redo log files should be put on RAID 1 devices. You should not use RAID 5 because sequential write performance of distributed parity (RAID 5) is not as high as that of simple mirroring (RAID 1).
Further, RAID 1 provides the best data protection, and protection of online redo log files is critical for Oracle recoverability.
In some cases, placing archived redo log files on RAID 1 may be appropriate. RAID 1 for archived redo logs provides better mean time to recovery, as its sequential read performance is superior to RAID 5's.
However, due to storage efficiency, RAID 5 may be chosen. This is a tradeoff, and must be determined on a case-by-case basis.
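To summarize the placement logic above as a sketch (illustrative only; the 30 percent figure is the threshold quoted above, not a hard rule):

# Rough decision helper mirroring the guidance in this post.
def suggest_raid(file_type, write_fraction=0.0):
    if file_type in ("redo", "temp", "undo"):
        return "RAID 1"   # sequential-write heavy; mirroring preferred
    if file_type == "data":
        # RAID 5 for efficiency unless writes exceed ~30% of total I/O
        return "RAID 1" if write_fraction > 0.30 else "RAID 5"
    if file_type == "archive":
        return "RAID 1 or RAID 5"  # recovery time vs. capacity trade-off
    return "unknown"

print(suggest_raid("data", write_fraction=0.40))  # RAID 1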
Hope this helps.
thanks.
Anonymous User
67 Posts
0
January 24th, 2014 22:00
Hello Nishant.
Is there any recommended RAID configuration for the above databases?
Yes, there is.
You need to find out whether the database you are provisioning the LUNs for runs an OLTP or a DSS type of workload.
For OLTP workloads, mirrored RAID options, such as R1, R10, or R0/1, are frequently preferred over parity RAID options such as R5 or R6.
Typically, the key concern is random DBMS data page writes resulting from update transactions.
For OLTP workloads that frequently update and rewrite many single 4 KB or 8 KB database pages, each write has to be done with a parity adjustment. The old image of the DB page, as well as the corresponding 4 KB or 8 KB of parity data, frequently has to be read back in from two different disks in that RAID group. Then the bit delta between the old and revised DB page image is determined and the parity page content adjusted. Finally, the revised DB page and the adjusted parity page have to be written back to their respective disks.
Hence, parity RAID is generally avoided for extremely high update rate OLTP workloads.
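A back-of-envelope way to see the cost is the classic RAID write penalty: each small random write costs roughly 2 back-end I/Os on mirrored RAID and 4 on RAID 5 (read old data, read old parity, write new data, write new parity). A sketch with assumed per-disk numbers:

# Disk count and per-disk IOPS are assumptions for illustration.
def frontend_iops(disks, iops_per_disk, read_fraction, write_penalty):
    backend = disks * iops_per_disk
    write_fraction = 1.0 - read_fraction
    return backend / (read_fraction + write_fraction * write_penalty)

# 10 disks at 180 IOPS each, 60% reads / 40% random writes:
print(frontend_iops(10, 180, 0.6, 2))  # RAID 1/0: ~1286 IOPS
print(frontend_iops(10, 180, 0.6, 4))  # RAID 5:   ~818 IOPS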
DSS workloads, on the other hand, tend to be read-focused, because writes to the LUNs holding the database generally come from loading new data into the data warehouse. Most data warehouse loads tend to be batched together; random updates to existing data are typically rare. Even with the trend toward "close to real time" continual data loads, new data tends to be loaded as controlled batches in well-defined operational windows.
As such, the OS-level write requests the DBMS sends to the storage system tend to be bursts of large writes. In fact, for most DBMS engines supporting DW data load functions, the engine can often be configured to batch up consecutive new DB pages and send the entire batch as one coalesced big write request that spans a full RAID stripe of the storage LUN (or multiples of full stripes).
Hence, for DW purposes, parity RAID is often a good choice.
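A quick way to see why those coalesced batches help: when a write exactly covers one or more full stripes, the array can compute parity from the new data alone, with no read-back. A sketch, reusing the 4+1 R5 / 64 KB element layout from earlier in the thread:

stripe_kb = 4 * 64  # 4+1 RAID 5, 64 KB elements: 256 KB of user data

def is_full_stripe_write(write_kb, stripe_kb=256):
    # Aligned writes that are a multiple of the stripe size avoid the
    # read-modify-write parity update entirely.
    return write_kb > 0 and write_kb % stripe_kb == 0

print(is_full_stripe_write(1024))  # True: four full stripes, no parity RMW
print(is_full_stripe_write(8))     # False: single 8 KB page triggers RMW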
Thanks.
dynamox
9 Legend
•
20.4K Posts
0
January 25th, 2014 06:00
Or you could have just referenced the document you copied that info from:
https://www.emc.com/collateral/hardware/white-papers/h5548-deploying-clariion-dss-workloads-wp.pdf
Anonymous User
67 Posts
0
January 25th, 2014 07:00
Yes, I could have done that too.
I thought I'd rather explain the gist of the whole idea by posting just the relevant info.
thanks.
kelleg
4.5K Posts
0
February 11th, 2014 14:00
Was your question answered correctly? If so, please remember to mark your question Answered when you get the correct answer and award points to the person providing the answer. This helps others searching for a similar issue.
glen
mahindervmw
1 Message
0
April 6th, 2014 21:00
Really liked this post. Can you please clarify the queries below?
We are deploying a data warehouse application on a VNX8000 that has 10 x 200 GB SSD for FAST Cache and FAST VP (16 x 200 GB SSD, 158 x 600 GB 10K SAS).
1) For OS LUNs, will the preferred RAID level be RAID 10 or RAID 5?
2) For the Flash Recovery Area, which RAID level will be preferred?
kelleg
4.5K Posts
0
April 7th, 2014 13:00
Please see the following best practice white paper for the VNX2 (VNX8000) - there is a section on page 14 that addresses the question about element size for specific applications such as data warehousing.
https://support.emc.com/docu42660_VNX-Unified-Best-Practices-for-Performance-Applied-Best-Practices-Guide.pdf
glen