With the vast amounts of storage space available on mainframes and their associated direct access storage devices (DASD), there’s no need to worry about drive defragmentation or space allocation, right? Right? Wrong.
The huge volume of data stored on many businesses’ DASD systems makes it even more necessary to have properly defragmented drives. The larger the volume of data on the drives, the larger the efficiency gains from good defragmentation. Secondary storage devices hold the bulk of the rarely accessed data on a mainframe. However, just because data is accessed less frequently doesn’t mean businesses can afford to store it inefficiently. Fragmented data takes longer to access and occupies more drive space than necessary.
Fragmentation occurs because many systems cannot always store data contiguously. Storage volumes become fragmented over time as files are created, moved, and deleted. By default, the system tries to fill vacant space, but when a file is too big to fit in one gap, the system stores it in another location or splits it across several. The unused gaps left behind are the fragmentation, as the sketch below illustrates.
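To make the mechanics concrete, here is a minimal toy simulation in Python. It is a conceptual sketch only: the `allocate_first_fit` helper and the block sizes are hypothetical, and real DASD allocation involves track/cylinder geometry and the VTOC, none of which are modeled here.

```python
# Toy model of free-space fragmentation on a storage volume.
# Illustrative sketch, not real DASD allocation logic.

def allocate_first_fit(free_gaps, size):
    """Place a request of `size` blocks into the first gap that fits.

    `free_gaps` is a list of (start, length) tuples. Returns the start
    of the allocation, or None if no single gap is large enough --
    exactly the situation that forces the system to split a file or
    store it somewhere non-contiguous.
    """
    for i, (start, length) in enumerate(free_gaps):
        if length >= size:
            if length == size:
                del free_gaps[i]                              # gap consumed exactly
            else:
                free_gaps[i] = (start + size, length - size)  # shrink the gap
            return start
    return None

# A 100-block volume that has seen files come and go: 40 blocks are
# free in total, but the space is chopped into four small gaps.
free_gaps = [(0, 10), (25, 5), (40, 15), (70, 10)]

total_free = sum(length for _, length in free_gaps)
print(f"Total free: {total_free} blocks in {len(free_gaps)} gaps")

# A 20-block file cannot be stored contiguously even though twice that
# much space is free.
start = allocate_first_fit(free_gaps, 20)
print(f"20-block request -> {start}")   # None: no single gap is big enough
```

Running this shows 40 blocks free but no single gap large enough for a 20-block file: the request fails, or the file has to be split into non-contiguous pieces.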
As time goes on, thousands of files end up scattered across the storage device, separated by gaps of non-contiguous free space. Locating files takes longer: each time the system hunts for a file on a fragmented volume, it has to skip over the blank sections, and those split seconds add up until the system slows down. Meanwhile, the fragments of free space keep multiplying, leaving valuable storage vacant that could be holding more data.
Defragmentation is the solution for gaps in the file system. It is the physical rearrangement of files on a storage device. Rearranging these files more efficiently allows the system to access them more quickly. Although retrieving data from DASD will never be as efficient as accessing it directly from the main system, it can get close with proper and efficient defragmentation. (For a great animated illustration of the concept, check out http://en.wikipedia.org/wiki/Defragmentation.)
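Continuing the same toy model, here is a hedged sketch of what that physical rearrangement amounts to conceptually. The `defragment` helper is hypothetical, and this is not how a real tool such as DFSMSdss actually relocates extents; it only shows how compaction coalesces scattered free space.

```python
# Toy defragmentation: slide the allocated extents together so the
# free space coalesces into one large gap. Conceptual sketch only.

def defragment(extents, volume_size):
    """Compact (start, length) extents to the front of the volume.

    Returns the relocated extents plus the single free gap left at
    the end -- the contiguous space that fragmentation had lost.
    """
    cursor = 0
    compacted = []
    for start, length in sorted(extents):
        compacted.append((cursor, length))   # move extent down to `cursor`
        cursor += length
    free_gap = (cursor, volume_size - cursor)
    return compacted, free_gap

# The occupied extents implied by the gaps above (100-block volume).
extents = [(10, 15), (30, 10), (55, 15), (80, 20)]

compacted, free_gap = defragment(extents, 100)
print(f"Free space after defrag: one gap of {free_gap[1]} blocks at {free_gap[0]}")
```

After compaction, the same 40 free blocks form one contiguous gap, so the 20-block request that failed in the earlier sketch now succeeds.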
No matter the capacity of a DASD system, the natural and persistent problems of data allocation remain, and they grow in proportion to the size of the data sets. The cost of storing data inefficiently, the manual storage allocation demanded of operators, and the production interruptions all make it obvious that every business needs a strategy for optimizing DASD space.
Unfortunately, many companies’ solution for defragmenting DASD involves consolidating space by running DFSMSdss during times of lower activity, like overnight or weekends. However, these batch processes take hours, hours during which many businesses cannot afford to have their systems down. For high-capacity mainframe systems, times of low utilization are few and far between, meaning there are not sufficient opportunities to keep DASD fragmentation in check. Mainframes that serve as the central nervous system of a business cannot afford downtime, but DASD systems that are not defragmented will suffer stunted performance and reduced storage capacity.
A better solution is a defragmentation program that keeps up with changes to the DASD, stores data efficiently, and deals effectively with space-sapping fragmentation. DSO, a DASD defragmentation tool, keeps disk fragmentation to a minimum by quickly defragmenting DASDs in the background using minimal system resources. Once it’s started, you don’t have to think about how it’s operating or what it’s doing. DSO works around the clock to maintain optimal space allocation, defragmenting DASD data without affecting your production environment, so work can go on uninterrupted. Read more about DSO here.
If you are not able to keep up with the fragmentation on your DASD systems, start a 30-day free trial of DSO to see how it will make your system more efficient.