July 25th, 2012 06:00
Stripe depth for MPFS
hi all,
I'd like to set up a new dedicated MPFS share for some completely sequential workloads, but not all of the clients have FC cards and the admins aren't keen on doing iSCSI. I see that the recommendation for years has been to build the stripes at 256K, but does that apply mainly to situations where every client of the NAS can see the underlying disk via MPFS? The disk layout (stripe) will consist of roughly eleven 4+1 R5 groups of 15k disks on an NS960 running FLARE 30 and 6.0-55 DART code. I can't do even multiples of 8 LUNs or RAID groups without buying more disk, and 8 alone doesn't satisfy the capacity needed.
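For what it's worth, the depth gets baked in when the stripe volume is built on the Control Station, so it has to be decided up front. Roughly what the two options would look like (the volume and dVol names here are just placeholders, and the syntax is from memory, so check it against the nas_volume man page):

    # 256K depth, the long-standing MPFS recommendation (size is in bytes)
    nas_volume -name mpfs_stv1 -create -Stripe 262144 d10,d11,d12,d13

    # 32K default depth
    nas_volume -name nas_stv1 -create -Stripe 32768 d10,d11,d12,d13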
If only one or two clients can actually use the MPFS stack, and the rest are CIFS/NFS clients that can't, is the recommendation still the big stripe depth, or would the default 32K be a better fit?
Has anyone ever benchmarked these different settings? Someone suggested that the larger stripe would severely penalize the occasional reads that aren't big sequential pulls but rather "ls / dir" listings or other random file access.
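If nobody has numbers, I may just build a scratch file system at each depth and compare a big sequential pull against small random reads with fio from one of the NFS clients, something like this (mount point and sizes are placeholders):

    # large sequential read, the MPFS-style workload
    fio --name=seqread --directory=/mnt/scratch --rw=read --bs=1m --size=4g --direct=1

    # small random reads, a rough stand-in for "ls / dir" style access
    fio --name=randread --directory=/mnt/scratch --rw=randread --bs=4k --size=4g --direct=1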
thanks
afp92Tq1w012558
September 17th, 2012 01:00
Hi,
Let me check if I can get anything on this.
Thanks
Vanitha