March 19th, 2018 12:00
z/OS: Defragmentation in VP Environment
When I was a SE at EMC, I occasionally fielded questions from customers who had converted to VP. They were wondering if they should discontinue their long-established weekly or monthly DF/DSS defrag jobs, since such activity seemed pointless in an environment where the z/OS host doesn't manage the physical data and everything is striped across the entire backend.
Recently I encountered a situation on our virtually-provisioned 20K. A large DB2 catalog could not be relocated to a new SMS SG. The move job received IDCAMS RC=0028 errors indicating that a secondary extent could not be allocated. There was plenty of free space across the SG, but ISMF panels indicated the fragmentation index on all the volumes was very high, above 500. I ran a DF/DSS defrag job against the SG, and the fragmentation indices on the volumes dropped below 50. I then resubmitted the catalog move job, and it was successful; the catalog was reduced from 201 extents to 55 after the move.
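For reference, the defrag step itself was nothing exotic; it was essentially a plain ADRDSSU DEFRAG along the lines of the sketch below. The job card, volsers, and FRAGMENTATIONINDEX threshold here are placeholders rather than my exact job, so check the DFSMSdss reference for how the threshold value is interpreted at your level before reusing it.

//DEFRAGSG JOB (ACCT),'SG DEFRAG',CLASS=A,MSGCLASS=X
//* DFSMSdss DEFRAG, one command per volume in the storage group.
//* Volsers and the FRAGMENTATIONINDEX threshold are examples only;
//* see the ADRDSSU DEFRAG documentation for the threshold semantics.
//DEFRAG   EXEC PGM=ADRDSSU,REGION=0M
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFRAG DYNAM(VOL001) FRAGMENTATIONINDEX(300)
  DEFRAG DYNAM(VOL002) FRAGMENTATIONINDEX(300)
  DEFRAG DYNAM(VOL003) FRAGMENTATIONINDEX(300)
/*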
It seems to me that fragmentation, of a sort, still occurs. I wonder if the metadata constructs in cache that represent tracks on the virtual "3390" devices the host sees have a way of getting "fragmented" or losing continuity. I remember when we were showing customers the virtual devices with TF SNAP, we would remind them that the longer a SNAP volume stayed in use, the more track pointers would be needed to access the changed tracks as updates were made. I'm wondering if a similar situation occurs with virtually provisioned 3390 devices, and whether a defrag job relocates the track pointers more efficiently.
PedalHarder
March 20th, 2018 15:00
Hi David,
There is still a valid requirement for defrag from a host perspective, as you have pointed out. There is no array requirement to perform host defrag.
Depending on the array configuration, defrag may impact array performance as follows:
- In a FAST (multi-tier) environment. Since tracks / extents are being moved around, FAST may have to demote / promote based on the new locations of the hot / cold data. This can cause a lack of consistency in performance for a short period. Just something to be aware of as an explanation for variations in run times.
- Datasets that are highly fragmented can benefit performance-wise from defrag if they are read sequentially, since sequential detect is track based.
- If the volume being defragged is the source of a SNAP (copy on write), then the target can experience a higher change rate. Most mainframe environments don't over-provision, so it is generally not a capacity issue, but the performance overhead of copy-on-write should be considered as part of scheduling the defrag. If it is a CLONE (full copy), then scheduling the defrag outside the period when the background copy is taking place might also be a consideration.
david_malbuff
March 21st, 2018 06:00
Thanks!