This is my cyberhome!

June 18, 2004

A Comparison of PDS and PDSE

Filed under: Mainframes — Manish Bansal @ 12:37 pm

PDSEs (partitioned data sets extended) were first introduced by IBM in MVS/DFP 3.2 in 1989. Even though regular PDSes were adequate for everyday tasks, many IBM customers were not happy with them. One of the IBM user groups, SHARE, ran a project on MVS storage management and published a white paper that summarized the project's findings and asked for a number of improvements and new features to the then-current PDS. Another IBM user group, GUIDE, published its own requirements and asked for similar changes. IBM listened, and the result was PDSE. But first, a bit of background on the plain old PDS to understand its limitations.

A PDS is essentially made up of two parts: a directory and the members themselves. The directory is a set of contiguous 256-byte blocks at the beginning of the dataset. Each directory block contains a 2-byte count field at the beginning, followed by anywhere from 3 to 21 directory entries. There is one directory entry for each member in the PDS. Each directory entry contains an 8-byte member name (padded with spaces if needed), the starting position of the member within the PDS (in TTR addressing format), and up to 62 bytes of optional user data.

A directory block contains only as many complete entries as can fit in 254 bytes (2 bytes are reserved for the count field); the remaining bytes are left unused. The length of the user data determines how many complete entries fit in one block. The 2-byte count field holds the number of used (also called 'active') bytes, including the two bytes of the count field itself.
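
As a quick sanity check on that 3-to-21 range: the fixed part of a directory entry is 12 bytes (the 8-byte name, the 3-byte TTR, and a 1-byte indicator field), so with no user data a block holds 254/12 = 21 entries (rounded down), while with the full 62 bytes of user data each entry takes 12 + 62 = 74 bytes, leaving room for only 254/74 = 3 entries.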

This rigid directory structure was the main reason the PDS needed improving. When IBM introduced PDSE, it replaced the directory structure of the PDS with a new, flexible scheme and brought in many new features. And all this was done while keeping the PDSE backward compatible with the PDS, which means that except for very low-level (hardware-dependent) processing, users need not even be aware of which type they are dealing with.

Some of the new features introduced in PDSE, and how they compare with PDS, are given below –

  1. Expandable directory size: The number of directory blocks in a PDS is specified at the time of its creation and cannot be changed after that. The space for all the directory blocks is also allocated when the dataset is created. Let's say a PDS was allocated with a directory block count of 20, and assume that an average 256-byte directory block holds 10 directory entries. This PDS can then contain at most 20×10 = 200 members. But what if you use all of those up and want to create the 201st member? Tough luck!

    PDSE solved this problem with an indexed directory structure, in which each directory entry points to the next one. This matters because there is no longer any need to allocate all the directory blocks when the dataset is created. It also means the directory blocks need not be contiguous and need not be fixed in number; they can be interleaved with the member data blocks, and they indeed are. When you want to create new members, a new directory block is created in the next available storage and the pointers are updated.

    Note that it's only the directory blocks that grow in number. The total size of the PDSE does not grow beyond one primary extent and 123 secondary extents. In other words, the directory can expand only if there is enough space left in the dataset; the maximum size of the PDSE itself remains fixed. (A minimal allocation sketch contrasting PDS and PDSE is given after this list.)

  2. Better search and insertion: The directory entries in a PDS are stored in alphabetical order of member names. So if a new entry is to be created, all the entries coming after it have to be shifted to make room for it. This is called 'ripple stow', and it results in many I/O operations, making the whole process a lot slower. The same holds true for searching: the directory has to be scanned sequentially to locate a particular member.

    Since the directory in a PDSE is an indexed structure, it has no such performance problems. It takes about the same amount of time to find or insert a member whether its name starts with 'A' or with 'Z'.
  3. Improved sharing facilities: The locking mechanism in a PDS operates at the dataset level. If you want to update a single member of a PDS, you need exclusive access to the entire dataset; no other user or job can update any other member in that PDS during that time. In a PDSE, by contrast, access control is implemented at the member level, so two users can update two different members at the same time. Makes you wonder how people worked before PDSE came along.
  4. Better use of disk space: When a PDS member is deleted, the space it occupied is not reused for new members. Since deleting a member deletes its directory entry, the pointer to that member's location is lost, and so is the space. As members get allocated and deleted over the lifetime of a PDS, the amount of this wasted space keeps growing. This wasted space, also called PDS gas, can be as much as 40% of the total allocated space, so the PDS needs to be compressed periodically to reclaim it. The compression can be done either by typing 'Z' in front of the PDS name in ISPF or by using the IEBCOPY utility (a sample compress step is shown after this list).

    A PDSE, on the other hand, keeps reclaiming freed space automatically, using a first-fit algorithm. Issuing a 'Z' command or running IEBCOPY against a PDSE has no effect.

    Also, whenever a new member is created in a PDS, the data blocks allocated for it have to be contiguous. There is no such restriction in a PDSE, so the space reclaimed from deleted members can be handed to new or existing members. This results in much better space utilization.

  5. Improved dataset integrity: If a PDS is opened for output in sequential mode, e.g. if an IEBGENER step omits the member name and uses only the PDS name, as in

    //STEP1   EXEC PGM=IEBGENER
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD DUMMY
    //SYSUT1   DD DSN=SOME.INPUT.SEQ.FILE,DISP=SHR
    //SYSUT2   DD DSN=MY.TEST.PDS,DISP=OLD

    the entire directory would get destroyed and all the members would be lost. If a similar thing is attempted on a PDSE, the job terminates with an abend code of S213-4C and the PDSE remains intact.

    S213-4C: WHEN OPENING A PDSE DSORG=PS WAS SPECIFIED, BUT NO MEMBER WAS SPECIFIED.

  6. Hardware independence: A PDS uses an addressing scheme called TTR (Track-Track-Record), which is based on the DASD geometry. TTR addresses are written in hexadecimal; the first two bytes denote the relative track number and the third byte the record number on that track, which is where the name TTR comes from. So an address of X'002E26' means track X'002E', record X'26'. This dependence on the DASD geometry makes it very difficult to migrate a PDS from one type of DASD to another, e.g. from a 3380 to a 3390.

    The PDSE addressing scheme does not depend on the physical device geometry. It uses a 'simulated' 3-byte TTR address to locate members and records, which makes migration easier. Incidentally, this simulation of addresses places some limits on the number of members and the number of records per member in a PDSE. A TTR of X'000001' in a PDSE points to the directory. The addresses from X'000002' to X'07FFFF' point to the first record of each member, which is why there is a limit of X'07FFFF' - X'000002' + 1 = 524,286 members. The addresses from X'100001' to X'FFFFFF' point to records within each member, which is why there is a limit of X'FFFFFF' - X'100001' + 1 = 15,728,639 records per member.
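
To make point 1 concrete, here is a minimal allocation sketch (the dataset names and space figures are made up for illustration). For the PDS, the 20 directory blocks in the third SPACE subparameter are fixed for the life of the dataset; for the PDSE, DSNTYPE=LIBRARY requests the indexed directory, which grows on demand:

    //ALLOC    JOB (ACCT),'ALLOCATE',CLASS=A,MSGCLASS=X
    //STEP1   EXEC PGM=IEFBR14
    //* A classic PDS: 20 directory blocks, fixed forever at allocation
    //PDS      DD DSN=MY.TEST.PDS,DISP=(NEW,CATLG),UNIT=SYSDA,
    //            RECFM=FB,LRECL=80,SPACE=(CYL,(10,5,20))
    //* A PDSE: the directory expands as members are added, so no
    //* directory-block count is needed in SPACE
    //PDSE     DD DSN=MY.TEST.PDSE,DISP=(NEW,CATLG),UNIT=SYSDA,
    //            RECFM=FB,LRECL=80,DSNTYPE=LIBRARY,SPACE=(CYL,(10,5))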
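
And here is the IEBCOPY compress from point 4. Naming the same PDS as both input and output tells IEBCOPY to compress in place (again, the dataset name is made up):

    //COMPRESS JOB (ACCT),'PDS COMPRESS',CLASS=A,MSGCLASS=X
    //STEP1   EXEC PGM=IEBCOPY
    //SYSPRINT DD SYSOUT=*
    //MYPDS    DD DSN=MY.TEST.PDS,DISP=OLD
    //SYSIN    DD *
      COPY OUTDD=MYPDS,INDD=MYPDS
    /*

As noted above, running this step against a PDSE has no effect; the gas never accumulates in the first place.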

June 4, 2004

Verifying a VSAM file

Filed under: Mainframes — Manish Bansal @ 8:50 am

When a VSAM file is opened by a job in output mode, a flag in the VSAM catalog called 'open-for-output' gets set to ON. This flag does not get turned off until the job ends successfully, i.e. until the job closes the file normally. Likewise, if we are editing the file manually in File-AID, the flag gets set when we open the file and turned off when we close it. But say the job goes down halfway through the update, or our TSO session expires before we can close the file; the 'open-for-output' flag remains on. What happens to the file now?

The next time a job or user tries to open the file for output, the file manager trots off to the catalog to turn on the flag. But the flag is already on! So it guesses (somewhat optimistically) that some other job might have the file open, and issues a GRS (global resource serialization) enqueue on the file to find out. If the file is not open anywhere else, it concludes that the file was not closed properly during its last use, and that it's time for some catalog cleanup. Enter VERIFY.

VERIFY is a record management macro, like GET or PUT. In our case, the open processing issues an implicit verify against the file. This can be confirmed by the IEC161I-type warning messages (RC 56 and 62) in the sysout. The verify macro compares the ICF catalog information with the physical VSAM cluster: it starts reading the dataset CI by CI, beginning at the high-used RBA, and checks the HURBA value, the number of index levels, the system timestamp, and many other fields. If the verify is not successful, it issues a warning message with a return code of 116 (X'74'). Two things worth noting about the implicit verify:

  1. It will not correct the catalog information; that job is reserved for IDCAMS VERIFY. It just issues warning messages, and it is up to us to figure out what to do next. We can continue processing, but data integrity won't be guaranteed.
  2. An implicit verify is not issued if the file is being opened for input or reset processing, or if the VSAM file is a linear dataset (LDS).

IDCAMS VERIFY is an explicit verify command (a sample job is shown after the list below). When an IDCAMS VERIFY is issued against a file, it opens the file, calls the record management verify macro, and then closes the file. And it is at close time that the ICF catalog gets updated and the 'open-for-output' flag gets reset. Two things worth noting about the explicit verify:

  1. A successful verify does not guarantee that everything is fine now. The catalog statistics may still be invalid; the file might have duplicate or missing records; the HURBA may be off by a few bytes.
  2. IDCAMS VERIFY does not update the catalog or the VSAM control blocks directly. It relies on the implicit verify and VSAM close processing to do its job.
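
For reference, the explicit verify boils down to a one-line IDCAMS command (the cluster name below is made up):

    //VERIFY   JOB (ACCT),'VSAM VERIFY',CLASS=A,MSGCLASS=X
    //STEP1   EXEC PGM=IDCAMS
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD *
      VERIFY DATASET(MY.VSAM.CLUSTER)
    /*

The same command can also be entered directly from a TSO session as VERIFY DATASET('MY.VSAM.CLUSTER').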

So whether the verify is successful or not, the structural integrity of a VSAM file is not guaranteed. It's always a good idea to take standard recovery actions on such files.

March 19, 2004

EXCPs

Filed under: Mainframes — Manish Bansal @ 1:59 pm

One of my teammates remarked that the 'exception count' for his job was not increasing and that it might be in a loop. He was monitoring the job in SDSF, and apparently there was no visible activity. The discussion we had on this is the reason for this post.

EXCP stands for EXecute Channel Program; it does not stand for 'exception'. Channel programs are the I/O subsystem driver programs that do the actual data transfer between the DASD (hard disk) and system memory (core/RAM). Channels in mainframes are similar to buses in PCs: they are basically the electrical paths that carry data. So each time a trip is made to the DASD to fetch data, the EXCP count goes up by one. This is why we say a job may be looping if it keeps consuming CPU while its EXCP count is not increasing. Note that the count goes up by one for each block of data transferred, so a transfer of a single 4K block and a transfer of a single 32K block count as one EXCP each.

There is a little caveat here. If the data being fetched is in a DB2 table, the EXCP count will NOT go up even though large amounts of data are being transferred. That is because EXCPs are logged by a part of MVS called SMF (System Management Facilities), while DB2 I/Os are handled by the Media Manager (MMF), so those I/Os don't show up in the EXCP counts.
