ABCs of z/OS System Programming Volume 3 - IBM Redbooks
ibm.com/redbooks

Front cover

ABCs of z/OS System Programming
Volume 3

DFSMS, Data set basics, SMS
Storage management software and hardware
Catalogs, VSAM, DFSMStvs
Paul Rogers
Redelf Janssen
Andre Otto
Rita Pleus
Alvaro Salla
Valeria Sokal
International Technical Support Organization

ABCs of z/OS System Programming Volume 3

March 2010

SG24-6983-03
Note: Before using this information and the product it supports, read the information in “Notices” on page ix.

Fourth Edition (March 2010)

This edition applies to Version 1 Release 11 of z/OS (5694-A01) and to subsequent releases and modifications until otherwise indicated in new editions.

© Copyright International Business Machines Corporation 2010. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . xi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . xii
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . xiii
Chapter 1. DFSMS introduction . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction to DFSMS . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Data facility storage management subsystem . . . . . . . . . . . . . . . . 3
1.3 DFSMSdfp component . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 DFSMSdss component . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 DFSMSrmm component . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.6 DFSMShsm component . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7 DFSMStvs component . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Chapter 2. Data set basics . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1 Data sets on storage devices . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Data set name rules . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 DFSMSdfp data set types . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4 Types of VSAM data sets . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.5 Non-VSAM data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6 Extended-format data sets and objects . . . . . . . . . . . . . . . . . . 25
2.7 Data set striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.8 Data set striping with z/OS V1R11 . . . . . . . . . . . . . . . . . . . . 29
2.9 Large format data sets . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.10 Large format data sets and TSO . . . . . . . . . . . . . . . . . . . . . 33
2.11 IGDSMSxx parmlib member support . . . . . . . . . . . . . . . . . . . . . 35
2.12 z/OS UNIX files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.13 Data set specifications for non-VSAM data sets . . . . . . . . . . . . . 39
2.14 Locating an existing data set . . . . . . . . . . . . . . . . . . . . . . 42
2.15 Uncataloged and cataloged data sets . . . . . . . . . . . . . . . . . . . 44
2.16 Volume table of contents (VTOC) . . . . . . . . . . . . . . . . . . . . . 45
2.17 VTOC and DSCBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.18 VTOC index structure . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.19 Initializing a volume using ICKDSF . . . . . . . . . . . . . . . . . . . 50
Chapter 3. Extended access volumes . . . . . . . . . . . . . . . . . . . . . 53
3.1 Traditional DASD capacity . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2 Large volumes before z/OS V1R10 . . . . . . . . . . . . . . . . . . . . . 55
3.3 z/Architecture data scalability . . . . . . . . . . . . . . . . . . . . . 57
3.4 WLM controlling PAVs . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.5 Parallel Access Volumes (PAVs) . . . . . . . . . . . . . . . . . . . . . . 62
3.6 HyperPAV feature for DS8000 series . . . . . . . . . . . . . . . . . . . . 64
3.7 HyperPAV implementation . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.8 Device type 3390 and 3390 Model A . . . . . . . . . . . . . . . . . . . . 67
3.9 Extended access volumes (EAV) . . . . . . . . . . . . . . . . . . . . . . 69
3.10 Data sets eligible for EAV volumes . . . . . . . . . . . . . . . . . . . 71
3.11 EAV volumes and multicylinder units . . . . . . . . . . . . . . . . . . . 73
3.12 Dynamic volume expansion (DVE) . . . . . . . . . . . . . . . . . . . . . 75
3.13 Using dynamic volume expansion . . . . . . . . . . . . . . . . . . . . . 76
3.14 Command-line interface (DSCLI) . . . . . . . . . . . . . . . . . . . . . 77
3.15 Using Web browser GUI . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.16 Select volume to increase capacity . . . . . . . . . . . . . . . . . . . 79
3.17 Increase capacity of volumes . . . . . . . . . . . . . . . . . . . . . . 80
3.18 Select capacity increase for volume . . . . . . . . . . . . . . . . . . . 81
3.19 Final capacity increase for volume . . . . . . . . . . . . . . . . . . . 82
3.20 VTOC index with EAV volumes . . . . . . . . . . . . . . . . . . . . . . . 83
3.21 Device Support Facility (ICKDSF) . . . . . . . . . . . . . . . . . . . . 85
3.22 Update VTOC after volume expansion . . . . . . . . . . . . . . . . . . . 87
3.23 Automatic VTOC index rebuild - z/OS V1R11 . . . . . . . . . . . . . . . . 89
3.24 Automatic VTOC rebuild with DEVMAN . . . . . . . . . . . . . . . . . . . 91
3.25 EAV and IGDSMSxx parmlib member . . . . . . . . . . . . . . . . . . . . . 93
3.26 IGDSMSxx member BreakPointValue . . . . . . . . . . . . . . . . . . . . . 95
3.27 New EATTR attribute in z/OS V1R11 . . . . . . . . . . . . . . . . . . . . 97
3.28 EATTR parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.29 EATTR JCL DD statement example . . . . . . . . . . . . . . . . . . . . . 101
3.30 Migration assistance tracker . . . . . . . . . . . . . . . . . . . . . . 102
3.31 Migration tracker commands . . . . . . . . . . . . . . . . . . . . . . . 104
Chapter 4. Storage management software . . . . . . . . . . . . . . . . . . . 107
4.1 Overview of DFSMSdfp utilities . . . . . . . . . . . . . . . . . . . . . . 108
4.2 IEBCOMPR utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.3 IEBCOPY utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4 IEBCOPY: Copy operation . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.5 IEBCOPY: Compress operation . . . . . . . . . . . . . . . . . . . . . . . 116
4.6 IEBGENER utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.7 IEBGENER: Adding members to a PDS . . . . . . . . . . . . . . . . . . . . 119
4.8 IEBGENER: Copying data to tape . . . . . . . . . . . . . . . . . . . . . . 120
4.9 IEHLIST utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.10 IEHLIST LISTVTOC output . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.11 IEHINITT utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.12 IEFBR14 utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.13 DFSMSdfp access methods . . . . . . . . . . . . . . . . . . . . . . . . . 126
4.14 Access method services (IDCAMS) . . . . . . . . . . . . . . . . . . . . . 129
4.15 IDCAMS functional commands . . . . . . . . . . . . . . . . . . . . . . . 131
4.16 AMS modal commands . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
4.17 DFSMS Data Collection Facility (DCOLLECT) . . . . . . . . . . . . . . . . 135
4.18 Generation data group (GDG) . . . . . . . . . . . . . . . . . . . . . . . 137
4.19 Defining a generation data group . . . . . . . . . . . . . . . . . . . . 139
4.20 Absolute generation and version numbers . . . . . . . . . . . . . . . . . 141
4.21 Relative generation numbers . . . . . . . . . . . . . . . . . . . . . . . 142
4.22 Partitioned organized (PO) data sets . . . . . . . . . . . . . . . . . . 143
4.23 PDS data set organization . . . . . . . . . . . . . . . . . . . . . . . . 144
4.24 Partitioned data set extended (PDSE) . . . . . . . . . . . . . . . . . . 146
4.25 PDSE enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.26 PDSE: Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.27 Program objects in a PDSE . . . . . . . . . . . . . . . . . . . . . . . . 152
4.28 Sequential access methods . . . . . . . . . . . . . . . . . . . . . . . . 154
4.29 z/OS V1R9 QSAM - BSAM enhancements . . . . . . . . . . . . . . . . . . . 156
4.30 Virtual storage access method (VSAM) . . . . . . . . . . . . . . . . . . 158
4.31 VSAM terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4.32 VSAM: Control interval (CI) . . . . . . . . . . . . . . . . . . . . . . . 162
4.33 VSAM data set components . . . . . . . . . . . . . . . . . . . . . . . . 164
4.34 VSAM key sequenced cluster (KSDS) . . . . . . . . . . . . . . . . . . . . 166
4.35 VSAM: Processing a KSDS cluster . . . . . . . . . . . . . . . . . . . . . 167
4.36 VSAM entry sequenced data set (ESDS) . . . . . . . . . . . . . . . . . . 169
4.37 VSAM: Typical ESDS processing . . . . . . . . . . . . . . . . . . . . . . 170
4.38 VSAM relative record data set (RRDS) . . . . . . . . . . . . . . . . . . 171
4.39 VSAM: Typical RRDS processing . . . . . . . . . . . . . . . . . . . . . . 172
4.40 VSAM linear data set (LDS) . . . . . . . . . . . . . . . . . . . . . . . 173
4.41 VSAM: Data-in-virtual (DIV) . . . . . . . . . . . . . . . . . . . . . . . 174
4.42 VSAM: Mapping a linear data set . . . . . . . . . . . . . . . . . . . . . 175
4.43 VSAM resource pool . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4.44 VSAM: Buffering modes . . . . . . . . . . . . . . . . . . . . . . . . . . 177
4.45 VSAM: System-managed buffering (SMB) . . . . . . . . . . . . . . . . . . 179
4.46 VSAM buffering enhancements with z/OS V1R9 . . . . . . . . . . . . . . . 181
4.47 VSAM SMB enhancement with z/OS V1R11 . . . . . . . . . . . . . . . . . . 184
4.48 VSAM enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
4.49 Data set separation . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
4.50 Data set separation syntax . . . . . . . . . . . . . . . . . . . . . . . 188
4.51 Data facility sort (DFSORT) . . . . . . . . . . . . . . . . . . . . . . . 190
4.52 z/OS Network File System (z/OS NFS) . . . . . . . . . . . . . . . . . . . 192
4.53 DFSMS optimizer (DFSMSopt) . . . . . . . . . . . . . . . . . . . . . . . 194
4.54 Data Set Services (DFSMSdss) . . . . . . . . . . . . . . . . . . . . . . 196
4.55 DFSMSdss: Physical and logical processing . . . . . . . . . . . . . . . . 198
4.56 DFSMSdss: Logical processing . . . . . . . . . . . . . . . . . . . . . . 199
4.57 DFSMSdss: Physical processing . . . . . . . . . . . . . . . . . . . . . . 201
4.58 DFSMSdss stand-alone services . . . . . . . . . . . . . . . . . . . . . . 203
4.59 Hierarchical Storage Manager (DFSMShsm) . . . . . . . . . . . . . . . . . 204
4.60 DFSMShsm: Availability management . . . . . . . . . . . . . . . . . . . . 205
4.61 DFSMShsm: Space management . . . . . . . . . . . . . . . . . . . . . . . 207
4.62 DFSMShsm: Storage device hierarchy . . . . . . . . . . . . . . . . . . . 209
4.63 ML1 enhancements with z/OS V1R11 . . . . . . . . . . . . . . . . . . . . 211
4.64 DFSMShsm z/OS V1R11 enhancements . . . . . . . . . . . . . . . . . . . . 213
4.65 ML1 and ML2 volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
4.66 Data set allocation format and volume pool determination . . . . . . . . 217
4.67 DFSMShsm volume types . . . . . . . . . . . . . . . . . . . . . . . . . . 219
4.68 DFSMShsm: Automatic space management . . . . . . . . . . . . . . . . . . 221
4.69 DFSMShsm data set attributes . . . . . . . . . . . . . . . . . . . . . . 223
4.70 RETAINDAYS keyword . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
4.71 RETAINDAYS keyword . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
4.72 DFSMShsm: Recall processing . . . . . . . . . . . . . . . . . . . . . . . 229
4.73 Removable media manager (DFSMSrmm) . . . . . . . . . . . . . . . . . . . 231
4.74 Libraries and locations . . . . . . . . . . . . . . . . . . . . . . . . . 233
4.75 What DFSMSrmm can manage . . . . . . . . . . . . . . . . . . . . . . . . 234
4.76 Managing libraries and storage locations . . . . . . . . . . . . . . . . 237
Chapter 5. System-managed storage . . . . . . . . . . . . . . . . . . . . . 239
5.1 Storage management . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
5.2 DFSMS and DFSMS environment . . . . . . . . . . . . . . . . . . . . . . . 241
5.3 Goals and benefits of system-managed storage . . . . . . . . . . . . . . . 242
5.4 Service level objectives . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.5 Implementing SMS policies . . . . . . . . . . . . . . . . . . . . . . . . 247
5.6 Monitoring SMS policies . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.7 Assigning data to be system-managed . . . . . . . . . . . . . . . . . . . 250
5.8 Using data classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
5.9 Using storage classes . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.10 Using management classes . . . . . . . . . . . . . . . . . . . . . . . . 255
5.11 Management class functions . . . . . . . . . . . . . . . . . . . . . . . 257
5.12 Using storage groups . . . . . . . . . . . . . . . . . . . . . . . . . . 258
5.13 Using aggregate backup and recovery support (ABARS) . . . . . . . . . . . 260
5.14 Automatic Class Selection (ACS) routines . . . . . . . . . . . . . . . . 262
5.15 SMS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.16 SMS control data sets . . . . . . . . . . . . . . . . . . . . . . . . . . 266
5.17 Implementing DFSMS . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
5.18 Steps to activate a minimal SMS configuration . . . . . . . . . . . . . . 269
5.19 Allocating SMS control data sets . . . . . . . . . . . . . . . . . . . . 271
5.20 Defining the SMS base configuration . . . . . . . . . . . . . . . . . . . 273
5.21 Creating ACS routines . . . . . . . . . . . . . . . . . . . . . . . . . . 276
5.22 DFSMS setup for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . 278
5.23 Starting SMS and activating a new configuration . . . . . . . . . . . . . 280
5.24 Control SMS processing with operator commands . . . . . . . . . . . . . . 282
5.25 Displaying the SMS configuration . . . . . . . . . . . . . . . . . . . . 284
5.26 Managing data with a minimal SMS configuration . . . . . . . . . . . . . 285
5.27 Device-independent space allocation . . . . . . . . . . . . . . . . . . . 287
5.28 Developing naming conventions . . . . . . . . . . . . . . . . . . . . . . 289
5.29 Setting the low-level qualifier (LLQ) standards . . . . . . . . . . . . . 291
5.30 Establishing installation standards . . . . . . . . . . . . . . . . . . . 292
5.31 Planning and defining data classes . . . . . . . . . . . . . . . . . . . 293
5.32 Data class attributes . . . . . . . . . . . . . . . . . . . . . . . . . . 294
5.33 Use data class ACS routine to enforce standards . . . . . . . . . . . . . 295
5.34 Simplifying JCL use . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
5.35 Allocating a data set . . . . . . . . . . . . . . . . . . . . . . . . . . 297
5.36 Creating a VSAM cluster . . . . . . . . . . . . . . . . . . . . . . . . . 299
5.37 Retention period and expiration date . . . . . . . . . . . . . . . . . . 301
5.38 SMS PDSE support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
5.39 Selecting data sets to allocate as PDSEs . . . . . . . . . . . . . . . . 303
5.40 Allocating new PDSEs . . . . . . . . . . . . . . . . . . . . . . . . . . 304
5.41 System-managed data types . . . . . . . . . . . . . . . . . . . . . . . . 305
5.42 Data types that cannot be system-managed . . . . . . . . . . . . . . . . 307
5.43 Interactive Storage Management Facility (ISMF) . . . . . . . . . . . . . 309
5.44 ISMF: Product relationships . . . . . . . . . . . . . . . . . . . . . . . 310
5.45 ISMF: What you can do with ISMF . . . . . . . . . . . . . . . . . . . . . 312
5.46 ISMF: Accessing ISMF . . . . . . . . . . . . . . . . . . . . . . . . . . 314
5.47 ISMF: Profile option . . . . . . . . . . . . . . . . . . . . . . . . . . 315
5.48 ISMF: Obtaining information about a panel field . . . . . . . . . . . . . 316
5.49 ISMF: Data set option . . . . . . . . . . . . . . . . . . . . . . . . . . 318
5.50 ISMF: Volume Option . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
5.51 ISMF: Management Class option . . . . . . . . . . . . . . . . . . . . . . 320
5.52 ISMF: Data Class option . . . . . . . . . . . . . . . . . . . . . . . . . 321
5.53 ISMF: Storage Class option . . . . . . . . . . . . . . . . . . . . . . . 322
5.54 ISMF: List option . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Chapter 6. Catalogs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
6.1 Catalogs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
6.2 The basic catalog structure (BCS) . . . . . . . . . . . . . . . . . . . . 328
6.3 The VSAM volume data set (VVDS) . . . . . . . . . . . . . . . . . . . . . 330
6.4 Catalogs by function . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
6.5 Using aliases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.6 Catalog search order . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
6.7 Defining a catalog and its aliases . . . . . . . . . . . . . . . . . . . . 339
6.8 Using multiple catalogs . . . . . . . . . . . . . . . . . . . . . . . . . 342
6.9 Sharing catalogs across systems . . . . . . . . . . . . . . . . . . . . . 343
6.10 Listing a catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
6.11 Defining and deleting data sets . . . . . . . . . . . . . . . . . . . . . 347
6.12 DELETE command enhancement with z/OS V1R11 . . . . . . . . . . . . . . . 351
6.13 Backup procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
6.14 Recovery procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
6.15 Checking the integrity of an ICF structure . . . . . . . . . . . . . . . 357
6.16 Protecting catalogs . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
6.17 Merging catalogs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
6.18 Splitting a catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
6.19 Catalog performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
6.20 F CATALOG,REPORT,PERFORMANCE command . . . . . . . . . . . . . . . . . . 367
6.21 Catalog address space (CAS) . . . . . . . . . . . . . . . . . . . . . . . 369
6.22 Working with the catalog address space . . . . . . . . . . . . . . . . . 371
6.23 Fixing temporary catalog problems . . . . . . . . . . . . . . . . . . . . 373
6.24 Enhanced catalog sharing . . . . . . . . . . . . . . . . . . . . . . . . 375
Chapter 7. DFSMS Transactional VSAM Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377<br />
7.1 VSAM share options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378<br />
7.2 Base VSAM buffering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380<br />
7.3 Base VSAM locking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382<br />
7.4 CICS function shipping before VSAM RLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383<br />
7.5 VSAM record-level sharing introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384<br />
7.6 VSAM RLS overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385<br />
7.7 Data set sharing under VSAM RLS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387<br />
7.8 Buffering under VSAM RLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388<br />
7.9 VSAM RLS locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390<br />
7.10 VSAM RLS/CICS data set recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392<br />
7.11 Transactional recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394<br />
7.12 The batch window problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395<br />
7.13 VSAM RLS implementation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396<br />
7.14 Coupling Facility structures for RLS sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397<br />
7.15 Update PARMLIB with VSAM RLS parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400<br />
7.16 Define sharing control data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402<br />
7.17 Update SMS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405<br />
7.18 Update data sets with log parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408<br />
7.19 The SMSVSAM address space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410<br />
7.20 Interacting with VSAM RLS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412<br />
7.21 Backup and recovery of CICS VSAM data sets . . . . . . . . . . . . . . . . . . . . . . . . . . 415<br />
7.22 Interpreting RLSDATA in an IDCAMS LISTCAT output . . . . . . . . . . . . . . . . . . . . . . 417<br />
7.23 DFSMStvs introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419<br />
7.24 Overview of DFSMStvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421<br />
7.25 DFSMStvs use of z/OS RRMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423<br />
7.26 Atomic updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425<br />
7.27 Unit of work and unit of recovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426<br />
7.28 DFSMStvs logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427<br />
Contents vii
7.29 Accessing a data set with DFSMStvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429<br />
7.30 Application considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430<br />
7.31 DFSMStvs logging implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432<br />
7.32 Prepare for logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433<br />
7.33 Update PARMLIB with DFSMStvs parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436<br />
7.34 The DFSMStvs instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438<br />
7.35 Interacting with DFSMStvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439<br />
7.36 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442<br />
Chapter 8. Storage management hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445<br />
8.1 Overview of DASD types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446<br />
8.2 Redundant array of independent disks (RAID) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448<br />
8.3 Seascape architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450<br />
8.4 Enterprise Storage Server (ESS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453<br />
8.5 ESS universal access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455<br />
8.6 ESS major components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456<br />
8.7 ESS host adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457<br />
8.8 FICON host adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458<br />
8.9 ESS disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460<br />
8.10 ESS device adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462<br />
8.11 SSA loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464<br />
8.12 RAID-10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466<br />
8.13 Storage balancing with RAID-10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468<br />
8.14 ESS copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469<br />
8.15 ESS performance features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472<br />
8.16 IBM TotalStorage DS6000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474<br />
8.17 IBM TotalStorage DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477<br />
8.18 DS8000 hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479<br />
8.19 Storage systems LPARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481<br />
8.20 IBM TotalStorage Resiliency Family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483<br />
8.21 TotalStorage Expert product highlights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485<br />
8.22 Introduction to tape processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487<br />
8.23 SL and NL format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489<br />
8.24 Tape capacity - tape mount management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491<br />
8.25 TotalStorage Enterprise Tape Drive 3592 Model J1A. . . . . . . . . . . . . . . . . . . . . . . . 493<br />
8.26 IBM TotalStorage Enterprise Automated Tape Library 3494 . . . . . . . . . . . . . . . . 495<br />
8.27 Introduction to Virtual Tape Server (VTS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497<br />
8.28 IBM TotalStorage Peer-to-Peer VTS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499<br />
8.29 Storage area network (SAN) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501<br />
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503<br />
IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503<br />
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503<br />
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504<br />
How to get IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504<br />
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504<br />
viii ABCs of z/OS System Programming Volume 3
Notices<br />
This information was developed for products and services offered in the U.S.A.<br />
IBM may not offer the products, services, or features discussed in this document in other countries. Consult<br />
your local IBM representative for information on the products and services currently available in your area. Any<br />
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,<br />
program, or service may be used. Any functionally equivalent product, program, or service that does not<br />
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to<br />
evaluate and verify the operation of any non-IBM product, program, or service.<br />
IBM may have patents or pending patent applications covering subject matter described in this document. The<br />
furnishing of this document does not give you any license to these patents. You can send license inquiries, in<br />
writing, to:<br />
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.<br />
The following paragraph does not apply to the United Kingdom or any other country where such<br />
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION<br />
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR<br />
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,<br />
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of<br />
express or implied warranties in certain transactions, therefore, this statement may not apply to you.<br />
This information could include technical inaccuracies or typographical errors. Changes are periodically made<br />
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make<br />
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time<br />
without notice.<br />
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any<br />
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the<br />
materials for this IBM product and use of those Web sites is at your own risk.<br />
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring<br />
any obligation to you.<br />
Information concerning non-IBM products was obtained from the suppliers of those products, their published<br />
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the<br />
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the<br />
capabilities of non-IBM products should be addressed to the suppliers of those products.<br />
This information contains examples of data and reports used in daily business operations. To illustrate them<br />
as completely as possible, the examples include the names of individuals, companies, brands, and products.<br />
All of these names are fictitious and any similarity to the names and addresses used by an actual business<br />
enterprise is entirely coincidental.<br />
COPYRIGHT LICENSE:<br />
This information contains sample application programs in source language, which illustrate programming<br />
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in<br />
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application<br />
programs conforming to the application programming interface for the operating platform for which the sample<br />
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,<br />
cannot guarantee or imply reliability, serviceability, or function of these programs.<br />
© Copyright IBM Corp. 2010. All rights reserved. ix
Trademarks<br />
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines<br />
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are<br />
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US<br />
registered or common law trademarks owned by IBM at the time this information was published. Such<br />
trademarks may also be registered or common law trademarks in other countries. A current list of IBM<br />
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml<br />
The following terms are trademarks of the International Business Machines Corporation in the United States,<br />
other countries, or both:<br />
AIX®<br />
AS/400®<br />
CICS®<br />
DB2®<br />
DS6000<br />
DS8000®<br />
Enterprise Storage Server®<br />
ESCON®<br />
eServer<br />
FICON®<br />
FlashCopy®<br />
GDPS®<br />
Geographically Dispersed Parallel Sysplex<br />
Hiperspace<br />
HyperSwap®<br />
i5/OS®<br />
IBM®<br />
IMS<br />
iSeries®<br />
Language Environment®<br />
Magstar®<br />
OS/390®<br />
OS/400®<br />
Parallel Sysplex®<br />
POWER5<br />
PowerPC®<br />
PR/SM<br />
pSeries®<br />
RACF®<br />
Redbooks®<br />
Redbooks (logo) ®<br />
RETAIN®<br />
RS/6000®<br />
S/390®<br />
System i®<br />
System Storage<br />
System z®<br />
Tivoli®<br />
TotalStorage®<br />
VTAM®<br />
z/Architecture®<br />
z/OS®<br />
z/VM®<br />
z9®<br />
zSeries®<br />
The following terms are trademarks of other companies:<br />
Novell, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other<br />
countries.<br />
ACS, Interchange, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the<br />
U.S. and other countries.<br />
SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other<br />
countries.<br />
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other<br />
countries, or both.<br />
Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the<br />
United States, other countries, or both.<br />
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel<br />
Corporation or its subsidiaries in the United States and other countries.<br />
UNIX is a registered trademark of The Open Group in the United States and other countries.<br />
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.<br />
Other company, product, or service names may be trademarks or service marks of others.
Preface<br />
The ABCs of z/OS® System Programming is a thirteen-volume collection that provides an<br />
introduction to the z/OS operating system and the hardware architecture. Whether you are a<br />
beginner or an experienced system programmer, the ABCs collection provides the<br />
information that you need to start your research into z/OS and related subjects. The ABCs<br />
collection serves as a powerful technical tool to help you become more familiar with z/OS in<br />
your current environment, or to help you evaluate platforms to consolidate your e-business<br />
applications.<br />
This edition is updated to z/OS Version 1 Release 11.<br />
The contents of the volumes are:<br />
Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and z/OS<br />
delivery and installation<br />
Volume 2: z/OS implementation and daily maintenance, defining subsystems, JES2 and<br />
JES3, LPA, LNKLST, authorized libraries, Language Environment®, and SMP/E<br />
Volume 3: Introduction to DFSMS, data set basics, storage management hardware and<br />
software, VSAM, System-Managed Storage, catalogs, and DFSMStvs<br />
Volume 4: Communication Server, TCP/IP and VTAM®<br />
Volume 5: Base and Parallel Sysplex®, System Logger, Resource Recovery Services (RRS),<br />
Global Resource Serialization (GRS), z/OS system operations, Automatic Restart<br />
Management (ARM), Geographically Dispersed Parallel Sysplex (GDPS)<br />
Volume 6: Introduction to security, RACF®, Digital certificates and PKI, Kerberos,<br />
cryptography and z990 integrated cryptography, zSeries® firewall technologies, LDAP,<br />
Enterprise Identity Mapping (EIM), and firewall technologies<br />
Volume 7: Printing in a z/OS environment, Infoprint Server and Infoprint Central<br />
Volume 8: An introduction to z/OS problem diagnosis<br />
Volume 9: z/OS UNIX® System Services<br />
Volume 10: Introduction to z/Architecture®, zSeries processor design, zSeries connectivity,<br />
LPAR concepts, HCD, and HMC<br />
Volume 11: Capacity planning, performance management, RMF, and SMF<br />
Volume 12: WLM<br />
Volume 13: JES3<br />
The team who wrote this book<br />
This book was produced by a team of specialists from around the world working at the<br />
International Technical Support Organization, Poughkeepsie Center.<br />
Paul Rogers is a Consulting IT Specialist at the International Technical Support<br />
Organization, Poughkeepsie Center and has worked for IBM® for more than 40 years. He<br />
writes extensively and teaches IBM classes worldwide on various aspects of z/OS, JES3,<br />
Infoprint Server, and z/OS UNIX. Before joining the ITSO 20 years ago, Paul worked in the<br />
IBM Installation Support Center (ISC) in Greenford, England, providing OS/390® and JES<br />
support for IBM EMEA and the Washington Systems Center in Gaithersburg, Maryland.<br />
Redelf Janssen is an IT Architect in IBM Global Services ITS in IBM Germany. He holds a<br />
degree in Computer Science from the University of Bremen and joined IBM Germany in 1988.<br />
His areas of expertise include IBM zSeries, z/OS and availability management. He has written<br />
IBM Redbooks® publications on OS/390 Releases 3, 4, and 10, and z/OS Release 8.<br />
Andre Otto is a z/OS DFSMS SW service specialist at the EMEA Backoffice team in<br />
Germany. He has 12 years of experience in the DFSMS, VSAM and catalog components.<br />
Andre holds a degree in Computer Science from the Dresden Professional Academy.<br />
Rita Pleus is an IT Architect in IBM Global Services ITS in IBM Germany. She has 21 years<br />
of IT experience in a variety of areas, including systems programming and operations<br />
management. Before joining IBM in 2001, she worked for a German S/390® customer. Rita<br />
holds a degree in Computer Science from the University of Applied Sciences in Dortmund.<br />
Her areas of expertise include z/OS, its subsystems, and systems management.<br />
Alvaro Salla is an IBM retiree who worked for IBM for more than 30 years in large systems.<br />
He has co-authored many IBM Redbooks publications and spent many years teaching S/360<br />
to S/390. He has a degree in Chemical Engineering from the University of Sao Paulo, Brazil.<br />
Valeria Sokal is an MVS system programmer at an IBM customer. She has 16 years of<br />
experience as a mainframe systems programmer.<br />
The fourth edition was updated by Paul Rogers.<br />
Now you can become a published author, too!<br />
Here's an opportunity to spotlight your skills, grow your career, and become a published<br />
author - all at the same time! Join an ITSO residency project and help write a book in your<br />
area of expertise, while honing your experience using leading-edge technologies. Your efforts<br />
will help to increase product acceptance and customer satisfaction, as you expand your<br />
network of technical contacts and relationships. Residencies run from two to six weeks in<br />
length, and you can participate either in person or as a remote resident working from your<br />
home base.<br />
Find out more about the residency program, browse the residency index, and apply online at:<br />
ibm.com/redbooks/residencies.html<br />
Comments welcome<br />
Your comments are important to us!<br />
We want our books to be as helpful as possible. Send us your comments about this book or<br />
other IBM Redbooks in one of the following ways:<br />
► Use the online Contact us review Redbooks form found at:<br />
ibm.com/redbooks<br />
xii <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
► Send your comments in an e-mail to:<br />
redbooks@us.ibm.com<br />
► Mail your comments to:<br />
IBM Corporation, International Technical Support Organization<br />
Dept. HYTD Mail Station P099<br />
2455 South Road<br />
Poughkeepsie, NY 12601-5400<br />
Stay connected to IBM Redbooks<br />
► Find us on Facebook:<br />
http://www.facebook.com/pages/IBM-Redbooks/178023492563?ref=ts<br />
► Follow us on Twitter:<br />
http://twitter.com/ibmredbooks<br />
► Look for us on LinkedIn:<br />
http://www.linkedin.com/groups?home=&gid=2130806<br />
► Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks<br />
weekly newsletter:<br />
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm<br />
► Stay current on recent Redbooks publications with RSS Feeds:<br />
http://www.redbooks.ibm.com/rss.html<br />
Chapter 1. DFSMS introduction<br />
This chapter gives a brief overview of the Data Facility Storage Management Subsystem<br />
(DFSMS) and its primary functions in the z/OS operating system. DFSMS comprises a suite<br />
of related data and storage management products for the z/OS system. DFSMS is now an<br />
integral element of the z/OS operating system.<br />
DFSMS is an operating environment that helps automate and centralize the management of<br />
storage based on the policies that your installation defines for availability, performance,<br />
space, and security.<br />
The heart of DFSMS is the Storage Management Subsystem (SMS). Using SMS, the storage<br />
administrator defines policies that automate the management of storage and hardware<br />
devices. These policies describe data allocation characteristics, performance and availability<br />
goals, backup and retention requirements, and storage requirements for the system.<br />
DFSMS is an exclusive element of the z/OS operating system and is a software suite that<br />
automatically manages data from creation to expiration.<br />
DFSMSdfp is a base element of z/OS. DFSMSdfp is automatically included with z/OS.<br />
DFSMSdfp performs the essential data, storage, and device management functions of the<br />
system. DFSMSdfp and DFSMShsm provide disaster recovery functions such as Advanced<br />
Copy Services and aggregate backup and recovery support (ABARS).<br />
The other elements of DFSMS—DFSMSdss, DFSMShsm, DFSMSrmm, and<br />
DFSMStvs—are optional features that complement DFSMSdfp to provide a fully integrated<br />
approach to data and storage management. In a system-managed storage environment,<br />
DFSMS automates and centralizes storage management based on the policies that your<br />
installation defines for availability, performance, space, and security. With the optional<br />
features enabled, you can take full advantage of all the functions that DFSMS offers.<br />
1.1 Introduction to DFSMS<br />
Figure 1-1 Introduction to data management<br />
Understanding DFSMS<br />
Data management is the part of the operating system that organizes, identifies, stores,<br />
catalogs, and retrieves all the data information (including programs) that your installation<br />
uses. DFSMS is an exclusive element of the z/OS operating system. DFSMS is a software<br />
suite that automatically manages data from creation to expiration.<br />
DFSMSdfp helps you store and catalog information about DASD, optical, and tape devices so<br />
that it can be quickly identified and retrieved from the system. DFSMSdfp provides access to<br />
both record- and stream-oriented data in the z/OS environment. The z/OS operating system<br />
enables you to efficiently manage e-business workloads and enterprise transactions 24 hours<br />
a day. DFSMSdfp is automatically included with z/OS. It performs the essential data, storage,<br />
and device management functions of the system.<br />
Systems programmer<br />
As a systems programmer, you can use DFSMS data management to:<br />
► Allocate space on DASD and optical volumes<br />
► Automatically locate cataloged data sets<br />
► Control access to data<br />
► Transfer data between the application program and the medium<br />
► Mount magnetic tape volumes in the drive<br />
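The first two tasks above can be sketched in JCL. The following is a minimal, illustrative batch job (the data set name, space amounts, and DCB values are assumptions, not installation standards) that allocates and catalogs a new data set:<br />

```jcl
//ALLOCDS  JOB (ACCT),'ALLOCATE DATA SET',CLASS=A,MSGCLASS=X
//* IEFBR14 performs no processing; the DD statement below
//* drives the allocation and cataloging at step termination.
//STEP1    EXEC PGM=IEFBR14
//NEWDS    DD  DSN=MY.TEST.DATA,
//             DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,
//             SPACE=(TRK,(10,5)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```

Because DISP=(NEW,CATLG,DELETE) catalogs the data set when the step ends normally, later jobs can refer to it by name alone and let the system locate the volume through the catalog.<br />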
1.2 Data facility storage management subsystem<br />
Figure 1-2 Data Facility Storage Management Subsystem<br />
DFSMS components<br />
DFSMS is an exclusive element of the z/OS operating system. DFSMS is a software suite<br />
that automatically manages data from creation to expiration. The following elements comprise<br />
DFSMS:<br />
► DFSMSdfp, a base element of z/OS<br />
► DFSMSdss, an optional feature of z/OS<br />
► DFSMShsm, an optional feature of z/OS<br />
► DFSMSrmm, an optional feature of z/OS<br />
► DFSMStvs, an optional feature of z/OS<br />
DFSMSdfp Provides storage, data, program, and device management. It comprises<br />
components such as access methods, OPEN/CLOSE/EOV routines, catalog<br />
management, DADSM (DASD space control), utilities, IDCAMS, SMS, NFS,<br />
ISMF, and other functions.<br />
DFSMSdss Provides data movement, copy, backup, and space management functions.<br />
DFSMShsm Provides backup, recovery, migration, and space management functions. It<br />
invokes DFSMSdss for certain of its functions.<br />
DFSMSrmm Provides management functions for removable media such as tape cartridges<br />
and optical media.<br />
DFSMStvs Enables batch jobs and CICS® online transactions to update shared VSAM<br />
data sets concurrently.<br />
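As a concrete sketch of one of these elements in action, a batch job can invoke DFSMSdss through its program, ADRDSSU, to take a logical dump of a group of data sets. The data set filter and output data set names here are illustrative assumptions only:<br />

```jcl
//DSSBKUP  JOB (ACCT),'DFSMSDSS DUMP',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//* Output data set that receives the logical dump.
//OUTDD    DD  DSN=MY.BACKUP.FILE,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(50,10))
//SYSIN    DD  *
  DUMP DATASET(INCLUDE(MY.TEST.**)) -
       OUTDDNAME(OUTDD)
/*
```

The DATASET keyword requests a logical (data-set-level) dump; DFSMShsm uses this same DFSMSdss capability under the covers for some of its backup and migration functions.<br />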
Network File System<br />
The Network File System (NFS) is a distributed file system that enables users to access UNIX<br />
files and directories that are located on remote computers as though they were local. NFS is<br />
independent of machine types, operating systems, and network architectures.<br />
Importance of DFSMS elements<br />
The z/OS operating system enables you to efficiently manage e-business workloads and<br />
enterprise transactions 24 hours a day. DFSMSdfp is automatically included with z/OS.<br />
DFSMSdfp performs the essential data, storage, and device management functions of the<br />
system. DFSMSdfp and DFSMShsm provide disaster recovery functions such as Advanced<br />
Copy Services and aggregate backup and recovery support (ABARS).<br />
The other elements of DFSMS—DFSMSdss, DFSMShsm, DFSMSrmm, and<br />
DFSMStvs—complement DFSMSdfp to provide a fully integrated approach to data and<br />
storage management. In a system-managed storage environment, DFSMS automates and<br />
centralizes storage management based on the policies that your installation defines for<br />
availability, performance, space, and security. With these optional features enabled, you can<br />
take full advantage of all the functions that DFSMS offers.<br />
1.3 DFSMSdfp component

DFSMSdfp provides the following functions:
Managing storage
Managing data
Using access methods, commands, and utilities
Managing devices
Tape mount management
Distributed data access
Advanced copy services
Object access method (OAM)

Figure 1-3 DFSMSdfp functions
DFSMSdfp component

DFSMSdfp provides storage, data, program, and device management. It comprises components such as access methods, OPEN/CLOSE/EOV routines, catalog management, DADSM (DASD space control), utilities, IDCAMS, SMS, NFS, ISMF, and other functions.

Managing storage

The storage management subsystem (SMS) is a DFSMSdfp facility designed for automating and centralizing storage management. SMS automatically assigns attributes to new data when that data is created. SMS automatically controls system storage and assigns data to the appropriate storage device. ISMF panels allow you to specify these data attributes.

For more information about ISMF, see 5.43, “Interactive Storage Management Facility (ISMF)” on page 309.
Managing data

DFSMSdfp organizes, identifies, stores, catalogs, shares, and retrieves all the data that your installation uses. You can store data on DASD, magnetic tape volumes, or optical volumes. Using data management, you can complete the following tasks:
► Allocate space on DASD and optical volumes
► Automatically locate cataloged data sets
► Control access to data
► Transfer data between the application program and the medium
► Mount magnetic tape volumes in the drive
Using access methods, commands, and utilities

DFSMSdfp manages the organization and storage of data in the z/OS environment. You can use access methods with macro instructions to organize and process a data set or object. Access method services commands manage data sets, volumes, and catalogs. Utilities perform tasks such as copying and moving data. You can use system commands to display and set SMS configuration parameters, use DFSMSdfp callable services to write advanced application programs, and use installation exits to customize DFSMS.

Managing devices with DFSMSdfp

You use the Hardware Configuration Definition (HCD) to define I/O devices to the operating system and to control these devices. DFSMS manages DASD, storage control units, magnetic tape devices, optical devices, and printers. You can use DFSMS functions to manage many device types, but most functions apply specifically to one type or one family of devices.
Tape mount management

Tape mount management is a methodology for improving tape usage and reducing tape costs. It involves intercepting selected tape data set allocations through the SMS automatic class selection (ACS) routines and redirecting them to a direct access storage device (DASD) buffer. Once on DASD, these data sets can be migrated to a single tape or a small set of tapes, thereby reducing the overhead associated with multiple tape mounts.
Distributed data access with DFSMSdfp

In the distributed computing environment, applications must often access data residing on other computers in a network. Often, the most effective data access services occur when applications can access remote data as though it were local data.

Distributed FileManager/MVS is a DFSMSdfp client/server product that enables remote clients in a network to access data on z/OS systems. Distributed FileManager/MVS provides workstations with access to z/OS data. Users and applications on heterogeneous client computers in your network can take advantage of system-managed storage on z/OS, data sharing, and data security with RACF.

z/OS UNIX System Services (z/OS UNIX) provides the command interface that interactive UNIX users can use. z/OS UNIX allows z/OS programs to directly access UNIX data.
Advanced Copy Services

Advanced Copy Services includes remote and point-in-time copy functions that provide backup and recovery of data. When used before a disaster occurs, Advanced Copy Services provides rapid backup of critical data with minimal impact to business applications. If a disaster occurs at your data center, Advanced Copy Services provides rapid recovery of critical data.

Object access method

Object access method (OAM) provides storage, retrieval, and storage hierarchy management for objects. OAM also manages storage and retrieval for tape volumes that are contained in system-managed libraries.
1.4 DFSMSdss component

DFSMSdss provides the following functions:
Data movement and replication
Space management
Data backup and recovery
Data set and volume conversion
Distributed data management
FlashCopy feature with Enterprise Storage Server (ESS)
SnapShot feature with RAMAC Virtual Array (RVA)
Concurrent copy

Figure 1-4 DFSMSdss functions
DFSMSdss component

DFSMSdss is the primary data mover for DFSMS. DFSMSdss copies and moves data to help manage storage, data, and space more efficiently. It can efficiently move multiple data sets from old to new DASD. The data movement capability that is provided by DFSMSdss is useful for many other operations as well. You can use DFSMSdss to perform the following tasks.

Data movement and replication

DFSMSdss lets you move or copy data between volumes of like and unlike device types. After you create a backup with DFSMSdss, you can make additional copies of that backup. DFSMSdss can also produce multiple backup copies during a dump operation.
Space management

DFSMSdss can reduce or eliminate DASD free-space fragmentation.

Data backup and recovery

DFSMSdss provides you with host system backup and recovery functions at both the data set and volume levels. It also includes a stand-alone restore program that you can run without a host operating system.

Data set and volume conversion

DFSMSdss can convert your data sets and volumes to system-managed storage. It can also return your data to a non-system-managed state as part of a recovery procedure.
Distributed data management

DFSMSdss saves distributed data management (DDM) attributes that are associated with a specific data set and preserves those attributes during copy and move operations.

DFSMSdss also offers the FlashCopy® feature with Enterprise Storage Server® (ESS) and the SnapShot feature with RAMAC Virtual Array (RVA). FlashCopy and SnapShot function automatically, work much faster than traditional data movement methods, and are well-suited for handling large amounts of data.

Concurrent copy

When it is used with supporting hardware, DFSMSdss also provides concurrent copy capability. Concurrent copy lets you copy or back up data while that data is being used. The user or application program determines when to start the processing, and the data is copied as though no updates have occurred.
1.5 DFSMSrmm component

DFSMSrmm provides the following functions:
Library management
Shelf management
Volume management
Data set management

Figure 1-5 DFSMSrmm functions
DFSMSrmm component

DFSMSrmm manages your removable media resources, including tape cartridges and reels. It provides the following functions.

Library management

You can create tape libraries, or collections of tape media associated with tape drives, to balance the work of your tape drives and help the operators that use them.

DFSMSrmm can manage the following devices:
► A removable media library, which incorporates all other libraries, such as:
  – System-managed manual tape libraries.
  – System-managed automated tape libraries. Examples of automated tape libraries include the IBM TotalStorage® Enterprise Automated Tape Library (3494) and the IBM TotalStorage Virtual Tape Servers (VTS).
► Non-system-managed or traditional tape libraries, including automated libraries such as a library under Basic Tape Library Support (BTLS) control.
Shelf management

DFSMSrmm groups information about removable media by shelves into a central online inventory, and keeps track of the volumes residing on those shelves. DFSMSrmm can manage the shelf space that you define in your removable media library and in your storage locations.

Volume management

DFSMSrmm manages the movement and retention of tape volumes throughout their life cycle.

Data set management

DFSMSrmm records information about the data sets on tape volumes. DFSMSrmm uses the data set information to validate volumes and to control the retention and movement of those data sets.
1.6 DFSMShsm component

DFSMShsm provides the following functions:
Storage management
Space management
Tape mount management
Availability management

Figure 1-6 DFSMShsm functions
DFSMShsm component

DFSMShsm complements DFSMSdss to provide the following functions.

Storage management

DFSMShsm provides automatic DASD storage management, thus relieving users from manual storage management tasks.

Space management

DFSMShsm improves DASD space usage by keeping only active data on fast-access storage devices. It automatically frees space on user volumes by deleting eligible data sets, releasing overallocated space, and moving low-activity data to lower cost-per-byte devices, even if the job did not request tape.

Tape mount management

DFSMShsm can write multiple output data sets to a single tape, making it a useful tool for implementing tape mount management under SMS. When you redirect tape data set allocations to DASD, DFSMShsm can move those data sets to tape, as a group, during interval migration. This methodology greatly reduces the number of tape mounts on the system. DFSMShsm uses a single-file format, which improves your tape usage and search capabilities.
Availability management

DFSMShsm backs up your data, automatically or by command, to ensure availability if accidental loss of the data sets or physical loss of volumes occurs. DFSMShsm also allows the storage administrator to copy backup and migration tapes, and to specify that copies be made in parallel with the original. You can store the copies onsite as protection from media damage, or offsite as protection from site damage. DFSMShsm also provides disaster backup and recovery for user-defined groups of data sets (aggregates) so that you can restore critical applications at the same location or at an offsite location.

Attention: You must also have DFSMSdss to use the DFSMShsm functions.
1.7 DFSMStvs component

Provide transactional recovery within VSAM
RLS allows batch sharing of recoverable data sets for read
RLS provides locking and buffer coherency
CICS provides logging and two-phase commit protocols
Transactional VSAM allows batch sharing of recoverable data sets for update
Logging provided using the System Logger
Two-phase commit and backout using Recoverable Resource Management Services (RRMS)

Figure 1-7 DFSMStvs functions
DFSMStvs component

DFSMS Transactional VSAM Services (DFSMStvs) allows you to share VSAM data sets across CICS, batch, and object-oriented applications on z/OS or distributed systems. DFSMStvs enables concurrent shared updates of recoverable VSAM data sets by CICS transactions and multiple batch applications. DFSMStvs enables 24-hour availability of CICS and batch applications.

VSAM record-level sharing (RLS)

With VSAM RLS, multiple CICS systems can directly access a shared VSAM data set, eliminating the need to ship functions between the application-owning regions and file-owning regions. CICS provides the logging, commit, and backout functions for VSAM recoverable data sets. VSAM RLS provides record-level serialization and cross-system caching. CICSVR provides a forward recovery utility.

DFSMStvs is built on top of VSAM record-level sharing (RLS), which permits sharing of recoverable VSAM data sets at the record level. Different applications often need to share VSAM data sets. Sometimes the applications need only to read the data set. Sometimes an application needs to update a data set while other applications are reading it. The most complex case of sharing a VSAM data set is when multiple applications need to update the data set and all require complete data integrity.
Transaction processing provides functions that coordinate work flow and the processing of individual tasks for the same data sets. VSAM record-level sharing and DFSMStvs provide key functions that enable multiple batch update jobs to run concurrently with CICS access to the same data sets, while maintaining integrity and recoverability.
Recoverable resource management services (RRMS)

RRMS is part of the operating system and comprises registration services, context services, and recoverable resource services (RRS). RRMS provides the context and unit of recovery management under which DFSMStvs participates as a recoverable resource manager.
Chapter 2. Data set basics
A data set is a collection of logically related data; it can be a source program, a library of macros, or a file of data records used by a processing program. Data records (also called logical records) are the basic unit of information used by a processing program. By placing your data into volumes of organized data sets, you can save and process the data efficiently. You can also print the contents of a data set, or display the contents on a terminal.

You can store data on secondary storage devices, such as:
► A direct access storage device (DASD)

  The term DASD applies to disks or to a large amount of magnetic storage media on which a computer stores data. A volume is a standard unit of secondary storage. You can store all types of data sets on DASD.

  Each block of data on a DASD volume has a distinct location and a unique address, thus making it possible to find any record without extensive searching. You can store and retrieve records either directly or sequentially. Use DASD volumes for storing data and executable programs, including the operating system itself, and for temporary working storage. You can use one DASD volume for many separate data sets, and reallocate or reuse space on the volume.

► A magnetic tape volume

  Only sequential data sets can be stored on magnetic tape. Mountable tape volumes can reside in an automated tape library. For information about magnetic tape volumes, see z/OS DFSMS: Using Magnetic Tapes, SC26-7412. You can also direct a sequential data set to or from spool, a UNIX file, a TSO/E terminal, a unit record device, virtual I/O (VIO), or a dummy data set.

The Storage Management Subsystem (SMS) is an operating environment that automates the management of storage. Storage management uses the values provided at allocation time to determine, for example, on which volume to place your data set, and how many tracks to allocate for it. Storage management also manages tape data sets on mountable volumes that reside in an automated tape library. With SMS, users can allocate data sets more easily. The data sets allocated through SMS are called system-managed data sets or SMS-managed data sets.
An access method is a DFSMSdfp component that defines the technique that is used to store and retrieve data. Access methods have their own data set structures to organize data, macros to define and process data sets, and utility programs to process data sets.

Access methods are identified primarily by the way that they organize the data in the data set. For example, use the basic sequential access method (BSAM) or queued sequential access method (QSAM) with sequential data sets. However, there are times when an access method identified with one organization can be used to process a data set organized in another manner. For example, a sequential data set (not an extended-format data set) created using BSAM can be processed by the basic direct access method (BDAM), and vice versa. Another example is UNIX files, which you can process using BSAM, QSAM, basic partitioned access method (BPAM), or virtual storage access method (VSAM).

This chapter describes various data set basics:
► Data set name rules
► Data set characteristics
► Locating a data set
► Volume table of contents (VTOC)
► Initializing a volume
2.1 Data sets on storage devices

Figure 2-1 Data sets on volumes: a DASD volume (VOLSER=DASD01) containing DATASET.SEQ, DATASET.PDS, and DATASET.VSAM, and a tape volume (VOLSER=SL0001) containing DATASET.SEQ1, DATASET.SEQ2, and DATASET.SEQ3
MVS data sets

An MVS data set is a collection of logically related data records stored on one volume or a set of volumes. A data set can be, for example, a source program, a library of macros, or a file of data records used by a processing program. You can print a data set or display it on a terminal. The logical record is the basic unit of information used by a processing program.

Note: As an exception, the z/OS UNIX services component supports Hierarchical File System (HFS) data sets, where the collection is of bytes and there is no concept of logically related data records.
Storage devices

Data can be stored on a magnetic direct access storage device (DASD), magnetic tape volume, or optical media. As mentioned previously, the term DASD applies to disks or simulated equivalents of disks. All types of data sets can be stored on DASD, but only sequential data sets can be stored on magnetic tape. The types of data sets are described in 2.3, “DFSMSdfp data set types” on page 20.
DASD volumes

Each block of data on a DASD volume has a distinct location and a unique address, making it possible to find any record without extensive searching. You can store and retrieve records either directly or sequentially. Use DASD volumes for storing data and executable programs, including the operating system itself, and for temporary working storage. You can use one DASD volume for many separate data sets, and reallocate or reuse space on the volume.
The following sections discuss the logical attributes of a data set, which are specified at data set creation time in:
► DCB/ACB control blocks in the application program
► DD cards (explicitly, or through the Data Class (DC) option with DFSMS)
► An ACS Data Class (DC) routine (overridden by a DD card)

After creation, these attributes are kept in catalogs and VTOCs.
2.2 Data set name rules

Figure 2-2 Data set name rules: HARRY.FILE.EXAMPLE.DATA consists of four qualifiers; the first (HARRY) is the high-level qualifier (HLQ) and the last (DATA) is the low-level qualifier (LLQ)
Data set naming rules

Whenever you allocate a new data set, you (or MVS) must give the data set a unique name. Usually, the data set name is given as the DSNAME keyword in JCL.

A data set name can be one name segment, or a series of joined name segments. Each name segment represents a level of qualification. For example, the data set name HARRY.FILE.EXAMPLE.DATA is composed of four name segments. The first name on the left is called the high-level qualifier (HLQ), and the last name on the right is the low-level qualifier (LLQ).

Each name segment (qualifier) is 1 to 8 characters, the first of which must be alphabetic (A to Z) or national (# @ $). The remaining seven characters can be alphabetic, numeric (0 - 9), national, or a hyphen (-). Name segments are separated by a period (.).

Note: Including all name segments and periods, the length of the data set name must not exceed 44 characters. Thus, a maximum of 22 name segments can make up a data set name.
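These naming rules are simple enough to capture in a short validation routine. The following sketch, in Python rather than anything that runs on z/OS, is purely illustrative; it encodes the rules exactly as stated above: each qualifier is 1 to 8 characters, begins with an alphabetic or national character (# @ $), continues with alphanumeric, national, or hyphen characters, and the whole name (periods included) is at most 44 characters.

```python
import re

# One qualifier: first character alphabetic or national (# @ $),
# followed by up to seven characters that may also be numeric or a hyphen.
QUALIFIER = re.compile(r"[A-Z#@$][A-Z0-9#@$-]{0,7}$")

def is_valid_dsname(name: str) -> bool:
    """Check a data set name against the rules in this section."""
    if len(name) > 44:                       # total length, periods included
        return False
    qualifiers = name.upper().split(".")     # JCL folds names to uppercase
    return all(QUALIFIER.match(q) for q in qualifiers)
```

For example, `is_valid_dsname("HARRY.FILE.EXAMPLE.DATA")` accepts the name used in Figure 2-2, while a name with an empty qualifier (two adjacent periods), a qualifier longer than 8 characters, or a leading digit is rejected.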
2.3 DFSMSdfp data set types

Data set types supported:
VSAM data sets
Non-VSAM data sets
Extended-format data sets
Large format data sets
Basic format data sets
Objects
z/OS UNIX files
Virtual input/output data sets

Figure 2-3 DFSMSdfp data set types supported
DFSMSdfp data set types

The data organization that you choose depends on your applications and the operating environment. z/OS allows you to use temporary or permanent data sets, and to use several ways to organize files for data to be stored on magnetic media, as described here.

VSAM data sets

VSAM data sets are formatted differently than non-VSAM data sets. Except for linear data sets, VSAM data sets are collections of records, grouped into control intervals. The control interval is a fixed area of storage space in which VSAM stores records. The control intervals are grouped into contiguous areas of storage called control areas. To access VSAM data sets, use the VSAM access method. See also 2.4, “Types of VSAM data sets” on page 22.

Non-VSAM data sets

Non-VSAM data sets are collections of fixed-length or variable-length records, grouped into blocks, but not in control intervals. To access non-VSAM data sets, use BSAM, QSAM, or BPAM. See also 2.5, “Non-VSAM data sets” on page 23.

Extended-format data sets

You can create both sequential and VSAM data sets in extended format on system-managed DASD; this implies a 32-byte suffix at each physical record. See also 2.6, “Extended-format data sets and objects” on page 25.
Large format data sets

Large format data sets are sequential data sets that can grow beyond the size limit of 65 535 tracks (4369 cylinders) per volume that applies to other sequential data sets. Large format data sets can be system-managed or not. They can be accessed using QSAM, BSAM, or EXCP.

Large format data sets reduce the need to use multiple volumes for single data sets, especially very large ones such as spool data sets, dumps, logs, and traces. Unlike extended-format data sets, which also support greater than 65 535 tracks per volume, large format data sets are compatible with EXCP and do not need to be SMS-managed.

You can allocate a large format data set using the DSNTYPE=LARGE parameter on the DD statement, dynamic allocation (SVC 99), TSO/E ALLOCATE, or the access method services ALLOCATE command.

Basic format data sets

Basic format data sets are sequential data sets that are specified as neither extended-format nor large-format. Basic format data sets have a size limit of 65 535 tracks (4369 cylinders) per volume. They can be system-managed or not, and can be accessed using QSAM, BSAM, or EXCP.

You can allocate a basic format data set using the DSNTYPE=BASIC parameter on the DD statement, dynamic allocation (SVC 99), TSO/E ALLOCATE, the access method services ALLOCATE command, or the data class. If no DSNTYPE value is specified from any of these sources, then the default is BASIC.
Objects

Objects are named streams of bytes that have no specific format or record orientation. Use the object access method (OAM) to store, access, and manage object data. You can use any type of data in an object because OAM does not recognize the content, format, or structure of the data. For example, an object can be a scanned image of a document, an engineering drawing, or a digital video. OAM objects are stored either on DASD in a DB2® database, or on an optical or tape storage volume.

The storage administrator assigns objects to object storage groups and object backup storage groups. The object storage groups direct the objects to specific DASD, optical, or tape devices, depending on their performance requirements. You can have one primary copy of an object and up to two backup copies of an object. A Parallel Sysplex allows you to access objects from all instances of OAM and from optical hardware within the sysplex.
z/OS UNIX files

z/OS UNIX System Services (z/OS UNIX) enables applications, and even z/OS itself, to access UNIX files. UNIX applications can also access z/OS data sets. You can use the hierarchical file system (HFS), z/OS Network File System (z/OS NFS), zSeries File System (zFS), and temporary file system (TFS) with z/OS UNIX. You can use the BSAM, QSAM, BPAM, and VSAM access methods to access data in UNIX files and directories. z/OS UNIX files are byte-oriented, similar to objects.
2.4 Types of VSAM data sets

Figure 2-4 VSAM data set types
VSAM data sets

VSAM arranges records by an index key, by a relative byte address, or by a relative record number. VSAM data sets are cataloged for easy retrieval.

Key-sequenced data set (KSDS)

A KSDS VSAM data set contains records in order by a key field and can be accessed by the key or by a relative byte address. The key contains a unique value, such as an employee number or part number.

Entry-sequenced data set (ESDS)

An ESDS VSAM data set contains records in the order in which they were entered and can only be accessed by relative byte address. An ESDS is similar to a sequential data set.

Relative-record data set (RRDS)

An RRDS VSAM data set contains records in order by relative-record number and can only be accessed by this number. Relative records can be fixed length or variable length. A variable relative-record data set (VRRDS) is a type of RRDS in which the logical records can be variable length.

Linear data set (LDS)

An LDS VSAM data set contains data that can be accessed as byte-addressable strings in virtual storage. A linear data set does not have the embedded control information that other VSAM data sets hold.
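The access patterns of the record-oriented VSAM organizations can be pictured with a toy model. The sketch below is a conceptual analogy in Python, not VSAM itself: a KSDS behaves like a keyed map, an ESDS like an append-only log addressed by position (an analogue of the relative byte address), and an RRDS like a numbered array of record slots.

```python
# Toy models of VSAM record organizations (conceptual analogy only).

class KSDS:
    """Records ordered and retrieved by a unique key field."""
    def __init__(self):
        self.records = {}                # key -> record
    def write(self, key, record):
        self.records[key] = record
    def read(self, key):
        return self.records[key]

class ESDS:
    """Records kept in entry order, addressed by position (like an RBA)."""
    def __init__(self):
        self.records = []
    def write(self, record):
        self.records.append(record)      # always appended at the end
        return len(self.records) - 1     # address of the new record
    def read(self, pos):
        return self.records[pos]

class RRDS:
    """Pre-formatted slots addressed by relative record number."""
    def __init__(self, slots):
        self.slots = [None] * slots
    def write(self, rrn, record):
        self.slots[rrn] = record
    def read(self, rrn):
        return self.slots[rrn]
```

An LDS has no counterpart here because it carries no record structure at all; it is simply a byte-addressable string mapped into virtual storage.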
2.5 Non-VSAM data sets

DSORG specifies the organization of the data set as:
Physical sequential (PS)
Partitioned (PO)
Direct (DA)

Figure 2-5 Types of non-VSAM data sets: a partitioned organized data set (PDS or PDSE) named PO.DATA.SET, whose directory points to members A, B, and C, and two physical sequential data sets, SEQ.DATA.SET1 and SEQ.DATA.SET2
Data set organization (DSORG)<br />
DSORG specifies the organization <strong>of</strong> the data set as physical sequential (PS), partitioned<br />
(PO), or direct (DA). If the data set is processed using absolute rather than relative<br />
addresses, you must mark it as unmovable by adding a U to the DSORG parameter (for<br />
example, by coding DSORG=PSU). You must specify the data set organization in the DCB<br />
macro. In addition:<br />
► When creating a direct data set, the DSORG in the DCB macro must specify PS or PSU<br />
and the DD statement must specify DA or DAU.<br />
► PS is for sequential and extended format DSNTYPE.<br />
► PO is the data set organization for both PDSEs and PDSs. DSNTYPE is used to<br />
distinguish between PDSEs and PDSs.<br />
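As a hedged illustration of the first bullet (the program name MYPGM and the data set name are hypothetical): the DD statement codes DSORG=DA, while the program's DCB macro would code DSORG=PS:<br />

```jcl
//NEWDA    JOB ...
//*  The program's DCB macro codes DSORG=PS; the DD codes DSORG=DA
//STEP1    EXEC PGM=MYPGM
//OUTDD    DD DSN=MY.TEST.DIRECT,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(10,5)),
//            DCB=(DSORG=DA,RECFM=F,LRECL=80,BLKSIZE=80)
```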
Non-VSAM data sets<br />
Non-VSAM data sets are collections of fixed-length or variable-length records grouped into<br />
physical blocks (a set of logical records). To access non-VSAM data sets, an application can<br />
use BSAM, QSAM, or BPAM. There are several types of non-VSAM data sets, as follows:<br />
Physical sequential data set (PS)<br />
Sequential data sets contain logical records that are stored in physical order. New records are<br />
appended to the end of the data set. You can specify a sequential data set in extended format<br />
or not.<br />
Chapter 2. Data set basics 23
Partitioned data set (PDS)<br />
Partitioned data sets contain a directory of sequentially organized members, each of which<br />
can contain a program or data. After opening the data set, you can retrieve any individual<br />
member without searching the entire data set.<br />
Partitioned data set extended (PDSE)<br />
Partitioned data sets extended contain an indexed, expandable directory of sequentially<br />
organized members, each of which can contain a program or data. You can use a PDSE<br />
instead of a PDS. The main advantage of using a PDSE over a PDS is that a PDSE<br />
automatically reclaims the space released by a previous member deletion, without the need<br />
for a reorganization.<br />
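A hedged JCL sketch of allocating both kinds of library (names and space values are illustrative):<br />

```jcl
//ALLOCPO  JOB ...
//*  DSNTYPE selects PDS versus PDSE; a PDSE ignores the
//*  directory-block value coded in SPACE
//STEP1    EXEC PGM=IEFBR14
//PDSDD    DD DSN=MY.TEST.PDS,DISP=(NEW,CATLG),DSNTYPE=PDS,
//            UNIT=SYSDA,SPACE=(TRK,(15,5,10)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//PDSEDD   DD DSN=MY.TEST.PDSE,DISP=(NEW,CATLG),DSNTYPE=LIBRARY,
//            UNIT=SYSDA,SPACE=(TRK,(15,5,10)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```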
2.6 Extended-format data sets and objects<br />
An extended-format data set supports the following<br />
options:<br />
Compression<br />
Data striping<br />
Extended-addressability<br />
Objects<br />
Use object access method (OAM)<br />
Storage administrator assigns objects<br />
Figure 2-6 Types of extended-format data sets<br />
Extended-format data sets<br />
While sequential data sets have a maximum of 16 extents on each volume, extended-format<br />
sequential data sets have a maximum of 123 extents on each volume. Each extended-format<br />
sequential data set can have a maximum of 59 volumes, so an extended-format sequential<br />
data set can have a maximum of 7257 extents (123 times 59).<br />
An extended-format data set can occupy any number of tracks. On a volume that has more<br />
than 65,535 tracks, a basic format sequential data set still cannot occupy more than 65,535<br />
tracks.<br />
An extended-format, striped sequential data set can contain up to 4 GB (approximately four<br />
billion) blocks. The maximum size of each block is 32 760 bytes.<br />
An extended-format data set supports the following additional functions:<br />
► Compression, which reduces the space for storing data and improves I/O, caching, and<br />
buffering performance.<br />
► Data striping, which in a sequential processing environment distributes data for one data<br />
set across multiple SMS-managed DASD volumes, improving I/O performance and<br />
reducing the batch window. For example, a data set with 6 stripes is initially distributed<br />
across 6 volumes.<br />
Large data sets with high sequential I/O activity are the best candidates for striped data<br />
sets. Data sets defined as extended-format sequential must be accessed using BSAM or<br />
QSAM, not EXCP (which bypasses the access methods) or BDAM.<br />
► Extended-addressability, which enables you to create a VSAM data set that is larger than<br />
4 GB.<br />
System-managed DASD<br />
You can allocate both sequential and VSAM data sets in extended format on a<br />
system-managed DASD. Extended-format VSAM data sets also allow you to release partial<br />
unused space and to use system-managed buffering (SMB, a fast buffer pool management<br />
technique) for VSAM batch programs. You can select whether to use the primary or<br />
secondary space amount when extending VSAM data sets to multiple volumes.<br />
Objects<br />
Objects are named streams of bytes that have no specific format or record orientation. Use<br />
the object access method (OAM) to store, access, and manage object data. You can use any<br />
type of data in an object because OAM does not recognize the content, format, or structure of<br />
the data. For example, an object can be a scanned image of a document, an engineering<br />
drawing, or a digital video. OAM objects are stored on DASD in a DB2 database, on an<br />
optical drive, or on a tape storage volume.<br />
The storage administrator assigns objects to object storage groups and object backup<br />
storage groups. The object storage groups direct the objects to specific DASD, optical, or tape<br />
devices, depending on their performance requirements. You can have one primary copy of an<br />
object, and up to two backup copies of an object.<br />
2.7 Data set striping<br />
Striping is a software implementation that distributes<br />
sequential data sets across multiple 3390 volumes<br />
Data sets across multiple SMS-managed DASD volumes<br />
Improves I/O performance<br />
For example, a data set with 28 stripes is distributed<br />
across 28 volumes and therefore 28 parallel I/Os<br />
All striped data sets must be extended-format data sets<br />
Physical sequential and VSAM data sets<br />
Defining using data class and storage groups<br />
Figure 2-7 Data set striping<br />
Data striping<br />
Sequential data striping can be used for physical sequential data sets that cause I/O<br />
bottlenecks for critical applications. Sequential data striping uses extended-format sequential<br />
data sets that SMS can allocate over multiple volumes, preferably on separate channel paths<br />
and control units, to improve performance. These data sets must reside on 3390 volumes that<br />
are located on the IBM DS8000®.<br />
Sequential data striping can reduce the processing time required for long-running batch jobs<br />
that process large, physical sequential data sets. Smaller sequential data sets can also<br />
benefit because of DFSMS's improved buffer management for QSAM and BSAM access<br />
methods for striped extended-format sequential data sets.<br />
A stripe in DFSMS is the portion of a striped data set, such as an extended format data set,<br />
that resides on one volume. The records in that portion are not always logically consecutive.<br />
The system distributes records among the stripes such that the volumes can be read from or<br />
written to simultaneously to gain better performance. Whether it is striped is not apparent to<br />
the application program. Data striping distributes data for one data set across multiple<br />
SMS-managed DASD volumes, which improves I/O performance and reduces the batch<br />
window. For example, a data set with 28 stripes is distributed across 28 volumes.<br />
Extended-format data sets<br />
You can write striped extended-format sequential data sets with the maximum physical block<br />
size for the data set plus control information required by the access method. The access<br />
method writes data on the first volume selected until a track is filled. The next physical blocks<br />
are written on the second volume selected until a track is filled, continuing until all volumes<br />
selected have been used or no more data exists. Data is written again to selected volumes in<br />
this way until the data set has been created. A maximum of 59 stripes can be allocated for a<br />
data set. For striped data sets, the maximum number of extents on a volume is 123.<br />
Physical sequential and VSAM data sets<br />
The sustained data rate (SDR) has an effect only for extended-format data sets. Striping<br />
allows you to spread data across DASD volumes and controllers. The number of stripes is the<br />
number of volumes on which the data set is initially allocated. Striped data sets must be<br />
system-managed and must be in an extended format. When no volumes that use striping are<br />
available, the data set is allocated as nonstriped if EXT=P (preferred) is specified in the data<br />
class; the allocation fails if EXT=R (required) is specified in the data class.<br />
Physical sequential data sets cannot be extended if none <strong>of</strong> the stripes can be extended. For<br />
VSAM data sets, each stripe can be extended to an available candidate volume if extensions<br />
fail on the current volume.<br />
Data classes<br />
Data class attributes define space and data characteristics of data sets that are normally<br />
specified on JCL DD statements, TSO/E ALLOCATE commands, access method services<br />
(IDCAMS) DEFINE commands, dynamic allocation requests, and ISPF/PDF panels. You can<br />
use data class to allocate sequential and VSAM data sets in extended format for the benefits<br />
of compression (sequential and VSAM KSDS), striping, and large data set sizes (VSAM).<br />
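To show how a data class drives such an allocation, here is a hedged JCL sketch; the data class name DCSTRIPE and storage class name SCSTRIPE are hypothetical, assumed to request extended format (EXT=R) and a sustained data rate that yields the desired stripe count:<br />

```jcl
//STRIPED  JOB ...
//*  DCSTRIPE and SCSTRIPE are assumed SMS constructs,
//*  not names taken from this book
//STEP1    EXEC PGM=IEFBR14
//NEWDD    DD DSN=MY.TEST.STRIPED,DISP=(NEW,CATLG),
//            DATACLAS=DCSTRIPE,STORCLAS=SCSTRIPE,
//            SPACE=(CYL,(100,50)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```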
Storage groups<br />
SMS calculates the average preference weight of each storage group using the preference<br />
weights of the volumes that will be selected if the storage group is selected for allocation.<br />
Then, SMS selects the storage group that contains at least as many primary volumes as the<br />
stripe count and has the highest average weight. If there are no storage groups that meet<br />
these criteria, the storage group with the largest number of primary volumes is selected. If<br />
multiple storage groups have the largest number of primary volumes, the one with the highest<br />
average weight is selected. If there are still multiple storage groups that meet the selection<br />
criteria, SMS selects one at random.<br />
For striped data sets, ensure that there are a sufficient number of separate paths to DASD<br />
volumes in the storage group to allow each stripe to be accessible through a separate path.<br />
The maximum number of stripes for physical sequential (PS) data sets is 59. For VSAM data<br />
sets, the maximum number of stripes is 16. Only sequential or VSAM data sets can be<br />
striped.<br />
2.8 Data set striping with z/OS V1R11<br />
Allow striping volume selection to support all current and<br />
future volume preference attributes specified in SMS<br />
constructs<br />
Allow volumes that are above high allocation threshold to be<br />
eligible for selection as secondary volumes<br />
Prefer enabled volumes over quiesced volumes<br />
Prefer normal storage groups over overflow storage groups<br />
Support data set separation function<br />
Support multi-tiered storage group function that honors<br />
storage group sequence derived by ACS<br />
Support availability, accessibility, PAV and other volume<br />
preference attributes that are used in non-striping volume<br />
selection<br />
Figure 2-8 z/OS V1R11 improvements for data set striping<br />
Striping volume selection<br />
Striping volume selection is very similar to conventional volume selection. <strong>Volume</strong>s that are<br />
eligible for selection are classified as primary and secondary, and assigned a volume<br />
preference weight, based on preference attributes.<br />
Note: This support is invoked when allocating a new striped data set. <strong>Volume</strong>s are ranked<br />
by preference weight from each individual controller. This support selects the most<br />
preferred storage group that meets or closely meets the target stripe count. This allows<br />
selection from the most preferred volume from individual controllers to meet the stripe<br />
count (try to spread stripes across controllers).<br />
This support automatically activates the fast volume selection function to avoid overusing<br />
system resources.<br />
High allocation threshold<br />
With V1R11, volumes that have sufficient space for the allocation amount without exceeding<br />
the storage group HIGH THRESHOLD value are eligible for selection as secondary volumes.<br />
<strong>Volume</strong>s that do not meet all the criteria for the primary volume list are placed on the<br />
secondary list. In z/OS V1R11, the SMS striping volume selection enhancement will try to<br />
make striping allocation function for both VSAM and non-VSAM as close as possible to the<br />
conventional volume selection.<br />
Chapter 2. Data set basics 29
SMS calculates the average preference weight of each storage group using the preference<br />
weights of the volumes that will be selected if the storage group is selected for allocation.<br />
Then, SMS selects the storage group that contains at least as many primary volumes as the<br />
stripe count and has the highest average weight. If there are no storage groups that meet<br />
these criteria, the storage group with the largest number of primary volumes is selected. If<br />
multiple storage groups have the largest number of primary volumes, the one with the highest<br />
average weight is selected. If there are still multiple storage groups that meet the selection<br />
criteria, SMS selects one at random.<br />
Storage group support<br />
Normal storage groups are preferred over overflow storage groups. The storage group<br />
sequence order as specified in the ACS storage group selection routines is supported when a<br />
multi-tiered storage group is requested in the storage class.<br />
After selecting a storage group, SMS selects volumes by their preference weight. Primary<br />
volumes are preferred over secondary volumes because they have a higher preference<br />
weight. Secondary volumes are selected when there is an insufficient number of primary<br />
volumes. If there are multiple volumes with the same preference weight, SMS selects one of<br />
the volumes at random.<br />
Data set separation<br />
Data set separation allows you to designate groups of data sets in which all SMS-managed<br />
data sets within a group are kept separate, on the physical control unit (PCU) level or the<br />
volume level, from all the other data sets in the same group. To use data set separation, you<br />
must create a data set separation profile and specify the name of the profile to the base<br />
configuration. During allocation, SMS attempts to separate the data sets listed in the profile. A<br />
data set separation profile contains at least one data set separation group. Each data set<br />
separation group specifies whether separation is at the PCU or volume level, whether it is<br />
required or preferred, and includes a list of data set names to be separated from each other<br />
during allocation.<br />
<strong>Volume</strong> preference<br />
<strong>Volume</strong> preference attributes, such as availability, accessibility, and PAV capability are<br />
supported.<br />
Fast volume selection is supported, regardless of the current specification of the<br />
FAST_VOLSEL parameter. SMS will reject the candidate volumes that do not have sufficient<br />
free space for the stripe when 100 volumes have already been rejected by DADSM for<br />
insufficient space. This is to prevent the striping allocation from overusing the system<br />
resources, because an iteration of volume reselection can consume significant overhead<br />
when there are a large number of candidate volumes.<br />
2.9 Large format data sets<br />
Figure 2-9 Allocating a data set with ISPF option 3.2<br />
Large format data sets<br />
Support for large format data sets was introduced with z/OS V1R7. Large format data sets are<br />
physical sequential data sets, with generally the same characteristics as other non-extended<br />
format sequential data sets, but with the capability to grow beyond the basic format size limit<br />
of 65,535 tracks on each volume. (This is about 3,500,000,000 bytes, depending on the block<br />
size.) Large format data sets reduce the need to use multiple volumes for single data sets,<br />
especially very large ones such as spool data sets, dumps, logs, and traces. Unlike<br />
extended-format data sets, which also support greater than 65,535 tracks per volume, large<br />
format data sets are compatible with EXCP and do not need to be SMS-managed.<br />
Data sets defined as large format must be accessed using QSAM, BSAM, or EXCP.<br />
Large format data sets have a maximum of 16 extents on each volume. Each large format<br />
data set can have a maximum of 59 volumes. Therefore, a large format data set can have a<br />
maximum of 944 extents (16 times 59).<br />
A large format data set can occupy any number of tracks, without the limit of 65,535 tracks<br />
per volume. The minimum size limit for a large format data set is the same as for other<br />
sequential data sets that contain data: one track, which is about 56,000 bytes. Primary and<br />
secondary space can both exceed 65,535 tracks per volume.<br />
Large format data sets can be on SMS-managed DASD or non-SMS-managed DASD.<br />
Restriction: The following types of data sets cannot be allocated as large format data<br />
sets:<br />
► PDS, PDSE, and direct data sets<br />
► Virtual I/O data sets, password data sets, and system dump data sets<br />
Allocating data sets<br />
To process an already existing data set, first allocate it (establish a logical link between it and<br />
your program), then access the data using macros in Assembler or HLL statements to<br />
activate the access method that you have chosen. The allocation of a data set means either<br />
or both of two things:<br />
► To set aside (create) space for a new data set on a disk or tape<br />
► To establish a logical link between a job step (your program) and any data set using JCL<br />
Figure 2-9 on page 31 shows the creation of a data set using ISPF panel 3.2. Other ways to<br />
create a data set are as follows:<br />
► Access method services<br />
You can define VSAM data sets and establish catalogs by using a multifunction services<br />
program called access method services.<br />
► TSO ALLOCATE command<br />
You can issue the ALLOCATE command of TSO/E to define VSAM and non-VSAM data<br />
sets.<br />
► Using JCL<br />
A large format data set can be defined directly with JCL by specifying DSNTYPE=LARGE<br />
on the DD statement.<br />
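For example, the following hedged JCL sketch (job, step, and data set names and space values are illustrative) allocates a large format sequential data set:<br />

```jcl
//LARGE    JOB ...
//*  DSNTYPE=LARGE lets the data set grow past 65,535 tracks
//*  per volume
//STEP1    EXEC PGM=IEFBR14
//BIGDD    DD DSN=MY.TEST.LARGEDS,DISP=(NEW,CATLG),
//            DSNTYPE=LARGE,UNIT=SYSDA,
//            SPACE=(CYL,(5000,1000)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```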
Basic format data sets<br />
Basic format data sets are sequential data sets that are specified as neither extended-format<br />
nor large-format. Basic format data sets have a size limit <strong>of</strong> 65,535 tracks (4,369 cylinders)<br />
per volume. Basic format data sets can be system-managed or not. They can be accessed<br />
using QSAM, BSAM, or EXCP.<br />
You can allocate a basic format data set using the DSNTYPE=BASIC parameter on the DD<br />
statement, dynamic allocation (SVC 99), TSO/E ALLOCATE or the access method services<br />
ALLOCATE command, or the data class. If no DSNTYPE value is specified from any of these<br />
sources, then its default is BASIC.<br />
Virtual input/output data sets<br />
You can manage temporary data sets with a function called virtual input/output (VIO). VIO<br />
uses DASD space and system I/O more efficiently than other temporary data sets.<br />
You can use the BPAM, BSAM, QSAM, BDAM, and EXCP access methods with VIO data<br />
sets. SMS can direct SMS-managed temporary data sets to VIO storage groups.<br />
2.10 Large format data sets and TSO<br />
There are three types <strong>of</strong> sequential data sets:<br />
Basic format: A traditional data set existing prior to V1R7<br />
that cannot grow beyond 64 K tracks per volume<br />
Large format: A data set (introduced in V1R7) that has<br />
the capability to grow beyond 64 K tracks<br />
Extended format: An extended format data set that must<br />
be DFSMS-managed<br />
With z/OS V1R9:<br />
Updates to the following commands and service ensure<br />
that each can handle large format data sets:<br />
TSO TRANSMIT, RECEIVE<br />
PRINTDS<br />
REXX LISTDSI function<br />
CLIST LISTDSI statement<br />
REXX EXECIO command<br />
CLIST OPENFILE/GETFILE/PUTFILE I/O processing<br />
Figure 2-10 Large format data set enhancement with z/OS V1R9<br />
Sequential data sets<br />
There are three types of sequential data sets, as follows:<br />
Basic format A traditional data set, as existed prior to z/OS V1.7. These data sets<br />
cannot grow beyond 64 K tracks per volume.<br />
Large format A data set (introduced in z/OS V1.7) that has the capability to grow<br />
beyond 64 K tracks but can be very small. The significance is that after<br />
being defined as a large format data set, it can grow to over 64 K tracks<br />
without further intervention. The maximum size is x’FFFFFE’ or<br />
approximately 16 M tracks per volume.<br />
Extended format An extended format data set that must be DFSMS-managed. This<br />
means that it must have a storage class. These data sets can be<br />
striped, and can grow up to x’FFFFFFFE’ tracks per volume.<br />
Using large format data sets with z/OS V1R9<br />
These enhancements are internal, and remove the restriction in z/OS V1R7 and z/OS V1R8<br />
that prevented use of large format data sets.<br />
Updates have been made to the following commands and service to ensure that each can<br />
handle large format data sets:<br />
► TSO TRANSMIT, RECEIVE<br />
► PRINTDS<br />
► REXX LISTDSI function<br />
► CLIST LISTDSI statement<br />
► REXX EXECIO command<br />
► CLIST OPENFILE/GETFILE/PUTFILE I/O processing<br />
Restriction: Types of data sets that cannot be allocated as large format data sets are:<br />
► PDS, PDSE, and direct data sets<br />
► Virtual I/O data sets, password data sets, and system dump data sets<br />
2.11 IGDSMSxx parmlib member support<br />
Defining large format data set support in parmlib (z/OS V1R9)<br />
BLOCKTOKENSIZE(REQUIRE | NOREQUIRE)<br />
Figure 2-11 Using large format data sets<br />
Using BLOCKTOKENSIZE(REQUIRE)<br />
If your installation uses the default BLOCKTOKENSIZE(REQUIRE) setting in PARMLIB<br />
member IGDSMSxx, you can issue the following command to see the current<br />
BLOCKTOKENSIZE settings, from the MVS console:<br />
D SMS,OPTIONS<br />
REQUIRE: Requires every open for a large format data<br />
set to use BLOCKTOKENSIZE=LARGE on the DCBE<br />
macro<br />
NOREQUIRE: Allows applications to access large<br />
format data sets under more conditions without having to<br />
specify BLOCKTOKENSIZE=LARGE on the DCBE macro<br />
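A hedged sketch of the corresponding IGDSMSxx fragment; the ACDS and COMMDS data set names are placeholders, and other keywords are omitted (see z/OS MVS Initialization and Tuning Reference for the full syntax):<br />

```
SMS ACDS(SYS1.SMS.ACDS)
    COMMDS(SYS1.SMS.COMMDS)
    BLOCKTOKENSIZE(NOREQUIRE)
```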
Services updated for large format data sets<br />
The following services are updated for large format data sets when<br />
BLOCKTOKENSIZE(REQUIRE) is in effect:<br />
ALLOCATE For a new or existing large format data set, use the<br />
DSNTYPE(LARGE) keyword.<br />
REXX EXECIO Using EXECIO DISKR to read a LARGE format SEQ data set will<br />
work as long as the data set is less than 64 K trks in size; if it is<br />
greater than 64 K trks, expect an ABEND213-17. Using EXECIO<br />
DISKW to attempt to write to any LARGE format data set will fail with<br />
ABEND213-15.<br />
CLIST support Using CLIST OPENFILE INPUT/GETFILE and OPENFILE<br />
OUTPUT/PUTFILE, as follows:<br />
Using CLIST OPENFILE INPUT/GETFILE to read a LARGE format<br />
SEQ data set will work as long as the data set is less than 64 K trks in<br />
size.<br />
Using CLIST OPENFILE INPUT/GETFILE to read a LARGE format<br />
SEQ data set that is >64 K trks in size will fail with ABEND213-17 from<br />
OPENFILE INPUT.<br />
Using CLIST OPENFILE OUTPUT/PUTFILE to attempt to write to any<br />
LARGE format data set will fail with ABEND213-15 from OPENFILE<br />
OUTPUT.<br />
REXX LISTDSI With the dsname function (CLIST LISTDSI statement), LISTDSI<br />
issued against a large format data set will complete with a REXX<br />
function return code 0. If the data set is greater than 64 K trks, the<br />
data set space used (SYSUSED) returned by LISTDSI will be<br />
inaccurate.<br />
TRANSMIT/RECEIVE When issued for a large format data set, TRANSMIT will transmit the<br />
data set as long as it is less than 64 K trks in size. If the data set is<br />
greater than 64 K trks, an ABEND213-15 will occur during the transmit<br />
process, and nothing will be sent.<br />
If you RECEIVE a data set that was sent with TRANSMIT and that<br />
data set is a LARGE format, RECEIVE will be able to receive the data<br />
set as long as it is less than 64 K trks in size.<br />
Large format data sets with BLOCKTOKENSIZE(NOREQUIRE)<br />
Using CLIST OPENFILE INPUT/GETFILE to read a LARGE format<br />
SEQ data set will also work when the data set is >64 K trks in size.<br />
Using CLIST OPENFILE OUTPUT/PUTFILE to write to any large<br />
format data set will work.<br />
LISTDSI Support works the same as with BLOCKTOKENSIZE(REQUIRE).<br />
REXX LISTDSI A REXX LISTDSI dsname function or CLIST LISTDSI statement<br />
issued against a large format data set will complete with a REXX<br />
function return code 0.<br />
If the data set is >64 K trks, the data set space used<br />
(SYSUSED) returned by LISTDSI will be inaccurate.<br />
If LISTDSI is issued against a LARGE format data set that is >64 K trks<br />
and the SMSINFO operand is also specified, then LISTDSI completes<br />
with REXX function return code 0, but the space used (SYSUSED)<br />
information will not be correct.<br />
TRANSMIT/RECEIVE Issued for a LARGE format data set, TRANSMIT will transmit the data<br />
set regardless of whether it is less than or greater than 64 K trks in<br />
size. If you RECEIVE a data set that was sent with TRANSMIT and<br />
that data set is a LARGE format, RECEIVE will be able to receive the<br />
file into a BASIC data set created by RECEIVE if the transmitted file is<br />
less than 64 K trks. RECEIVE can fail if the transmitted file is >64 K<br />
trks in size, because RECEIVE will not get the correct SIZE<br />
information to allocate the received file. An ABENDx37 can occur. In<br />
all cases, any data set allocated by RECEIVE will be BASIC format,<br />
even if it is attempting to receive a large format data set. You can<br />
RECEIVE data into a preallocated LARGE format data set that you<br />
allocate of sufficient size to hold the transmitted data.<br />
PRINTDS Using PRINTDS to print a large format input data set to the spool will<br />
work normally regardless of whether the data set is less than or<br />
greater than 64 K trks in size. If you attempt to write output to a<br />
TODATASET, the data set allocated by PRINTDS will always be a<br />
normal BASIC data set. This will not be large enough to hold output<br />
from a LARGE format data set that is >64 K trks. An ABENDx37 can<br />
occur. If you preallocate a TODATASET of large format, PRINTDS will<br />
be able to successfully write to the large format TODATASET.<br />
2.12 z/OS UNIX files<br />
Following are the types of z/OS UNIX files:<br />
Figure 2-12 z/OS UNIX files<br />
z/OS UNIX<br />
z/OS UNIX System Services (z/OS UNIX) enables z/OS to access UNIX files. UNIX<br />
applications also can access z/OS data sets. z/OS UNIX files are byte-oriented, similar to<br />
objects. We differentiate between the following types of z/OS UNIX files.<br />
Hierarchical file system (HFS)<br />
You can define on DASD an HFS data set on the z/OS system. Each HFS data set contains a<br />
hierarchical file system. Each hierarchical file system is structured like a tree with subtrees,<br />
and consists of directories and all their related files.<br />
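An HFS data set can be allocated with JCL, as in this hedged sketch (the data set name and space values are illustrative; a directory value is included in SPACE because HFS allocation expects one):<br />

```jcl
//MAKEHFS  JOB ...
//*  DSNTYPE=HFS creates a hierarchical file system data set
//STEP1    EXEC PGM=IEFBR14
//HFSDD    DD DSN=OMVS.TEST.HFS,DISP=(NEW,CATLG),
//            DSNTYPE=HFS,UNIT=SYSDA,
//            SPACE=(CYL,(100,20,1))
```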
z/OS Network File System (z/OS NFS)<br />
z/OS NFS is a distributed file system that enables users to access UNIX files and directories<br />
that are located on remote computers as though they were local. z/OS NFS is independent of<br />
machine types, operating systems, and network architectures.<br />
zSeries File System (zFS)<br />
A zFS is a UNIX file system that contains one or more file systems in a VSAM linear data set.<br />
zFS is application compatible with HFS and more performance efficient than HFS.<br />
Temporary file system (TFS)<br />
A TFS is stored in memory and delivers high-speed I/O. A systems programmer can use a<br />
TFS for storing temporary files.<br />
2.13 Data set specifications for non-VSAM data sets<br />
Figure 2-13 Data set specifications for non-VSAM data sets: DATASET.TEST.SEQ1, with<br />
DSORG=PS, RECFM=FB, LRECL=80, and BLKSIZE=27920 (80-byte logical records blocked<br />
into 27920-byte blocks)<br />
Data set specifications for non-VSAM data sets<br />
A non-VSAM data set has several attributes that describe the data set. When you want to<br />
define (create) a new data set, you have to specify those values to tell the system which kind<br />
of data set you want to allocate.<br />
See also z/OS MVS JCL Reference, SA22-7597, for information about the data set<br />
specifications discussed in this section.<br />
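The attributes shown in Figure 2-13 could be supplied on a DD statement, as in this hedged sketch (job, step, and DD names are illustrative):<br />

```jcl
//DEFSEQ   JOB ...
//*  DCB attributes match Figure 2-13
//STEP1    EXEC PGM=IEFBR14
//SEQDD    DD DSN=DATASET.TEST.SEQ1,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(10,5)),
//            DCB=(DSORG=PS,RECFM=FB,LRECL=80,BLKSIZE=27920)
```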
Data set organization (DSORG)<br />
DSORG specifies the organization of the data set as physical sequential (PS), partitioned<br />
(PO) for a PDS or a PDSE, or direct (DA).<br />
Data set name type (DSNTYPE)<br />
Use the DSNTYPE parameter to specify:<br />
► For a partitioned organized (PO) data set, whether it is a:<br />
– PDS for a partitioned data set<br />
– LIBRARY for a partitioned data set extended (PDSE)<br />
► Hierarchical file system if the DSNTYPE is HFS<br />
► Large data sets (see 2.16, “<strong>Volume</strong> table <strong>of</strong> contents (VTOC)” on page 45 for more details<br />
about large data sets)<br />
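As a sketch, the DSNTYPE parameter appears on a DD statement such as the following, which allocates a new PDSE (the data set name and space values are illustrative):

```jcl
//NEWPDSE  DD  DSN=MY.TEST.PDSE,DISP=(NEW,CATLG),
//             DSNTYPE=LIBRARY,UNIT=SYSDA,
//             SPACE=(TRK,(10,5,20)),
//             DCB=(RECFM=FB,LRECL=80)
```

If DSNTYPE=PDS were coded instead (or DSNTYPE omitted, depending on installation defaults), the same statement would allocate an ordinary partitioned data set.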
Chapter 2. Data set basics 39
Logical records and blocks<br />
To an application, a logical record is a unit of information (for example, a customer, an<br />
account, or a payroll employee). It is the smallest amount of data to be processed,<br />
and it consists of fields that contain information recognized by the processing application.<br />
Logical records, when located on DASD or tape, are grouped into physical records named<br />
blocks (to save DASD space that would otherwise be lost to inter-block gaps). Each block <strong>of</strong> data on a DASD volume<br />
has a distinct location and a unique address (block number, track, and cylinder), thus making<br />
it possible to find any block without extensive sequential searching. Logical records can be<br />
stored and retrieved either directly or sequentially.<br />
DASD volumes are used for storing data and executable programs (including the operating<br />
system itself), and for temporary working storage. One DASD volume can be used for many<br />
separate data sets, and space on it can be reallocated and reused. The maximum length <strong>of</strong> a<br />
logical record (LRECL) is limited by the physical size <strong>of</strong> the media used.<br />
Record format (RECFM)<br />
RECFM specifies the characteristics <strong>of</strong> the logical records in the data set. They can have a:<br />
► Fixed length (RECFM=F) - Every record is the same size.<br />
► Variable length (RECFM=V) - The logical records can be <strong>of</strong> varying sizes; every record<br />
has a preceding record descriptor word (RDW) to describe the length <strong>of</strong> such record.<br />
► ASCII variable length (RECFM=D) - Used for ISO/ANSI tape data sets.<br />
► Undefined length (RECFM=U) - Permits processing <strong>of</strong> records that do not conform to the<br />
F or V format.<br />
F, V, or D-format logical records can be blocked (RECFM=FB, VB or DB), which means<br />
several logical records are in the same block.<br />
Spanned records are specified as VS, VBS, DS, or DBS. A spanned record is a logical record<br />
that spans two or more blocks. Spanned records can be necessary if the logical record size is<br />
larger than the maximum allowed block size.<br />
You can also specify the records as fixed-length standard by using FS or FBS, meaning that<br />
the data set contains no embedded short blocks (only the last block can be short).<br />
Logical record length (LRECL)<br />
LRECL specifies the length, in bytes, <strong>of</strong> each record in the data set. If the records are <strong>of</strong><br />
variable length or undefined length, LRECL specifies the maximum record length. For input,<br />
the field has no effect for undefined-length (format-U) records.<br />
Block size (BLKSIZE)<br />
BLKSIZE specifies the maximum length, in bytes, <strong>of</strong> the physical record (block). If the logical<br />
records are format-F, the block size must be an integral multiple <strong>of</strong> the record length. If the<br />
records are format-V, you must specify the maximum block size. If format-V records are<br />
unblocked, the block size must be 4 bytes greater than the record length (LRECL); the<br />
extra 4 bytes hold the block descriptor word (BDW). For data sets on DASD, the maximum<br />
block size is 32,760 bytes. For data sets on tape, the maximum block size is much larger.<br />
In an extended-format data set, the system adds a 32-byte suffix to each block, which is<br />
transparent to the application program.<br />
<strong>System</strong>-determined block size<br />
The system can derive the best block size to optimize DASD space, tape, and spooled data<br />
sets (the ones in printing format but stored temporarily in DASD).<br />
Space values<br />
For DASD data sets, you can specify the amount of space required in blocks, records (with an<br />
average record length), tracks, or cylinders. You can specify a primary and a secondary<br />
space allocation. When you define a new data set, only the primary allocation value is used<br />
to reserve space for the data set on DASD. Later, when the primary allocation is filled,<br />
additional space is allocated in secondary amounts (if specified). The extents can be<br />
allocated on other volumes if the data set was defined as multivolume.<br />
For example, if you allocate a new data set and specify SPACE=(TRK,(2,4)), this initially<br />
allocates two tracks for the data set. As each record is written to the data set and these two<br />
tracks are used up, the system automatically obtains four more tracks. When these four tracks<br />
are used, another four tracks are obtained. The same sequence is followed until the extent<br />
limit for the type <strong>of</strong> data set is reached.<br />
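The attributes discussed in this section come together on a DD statement when a new data set is allocated. The following sketch (the job card, data set name, and unit name are illustrative) allocates a fixed-blocked sequential data set like the one in Figure 2-13, with the SPACE=(TRK,(2,4)) request described above:

```jcl
//ALLOC    JOB ...
//STEP1    EXEC PGM=IEFBR14
//NEWDS    DD  DSN=DATASET.TEST.SEQ1,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(2,4)),
//             DCB=(DSORG=PS,RECFM=FB,LRECL=80,BLKSIZE=27920)
```

If you omit BLKSIZE, the system determines an optimal block size for you, as described in "System-determined block size."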
The procedure for allocating space on magnetic tape devices is not like allocating space on<br />
DASD. Because data sets on magnetic tape devices must be organized sequentially, each<br />
one is located contiguously. All data sets that are stored on a given magnetic tape volume<br />
must be recorded in the same density. See z/<strong>OS</strong> DFSMS Using Magnetic Tapes, SC26-7412<br />
for information about magnetic tape volume labels and tape processing.<br />
2.14 Locating an existing data set<br />
Figure 2-14 Locating an existing data set<br />
Locating a data set<br />
A catalog consists of two separate kinds of data sets: a basic catalog structure (BCS) and a<br />
VSAM volume data set (VVDS). The BCS can be considered the catalog, whereas the VVDS<br />
can be considered an extension of the volume table of contents (VTOC). Following are the<br />
terms used to locate a data set that has been cataloged:<br />
VTOC The volume table <strong>of</strong> contents is a sequential data set located in each<br />
DASD volume that describes the data set contents <strong>of</strong> this volume. The<br />
VTOC is used to find empty space for new allocations and to locate<br />
non-VSAM data sets. For all VSAM data sets, and for SMS-managed<br />
non-VSAM data sets, the VTOC is used to obtain information not kept in<br />
the VVDS. See 2.18, “VTOC index structure” on page 48.<br />
User catalog A catalog is a data set used to locate in which DASD volume the<br />
requested data set is stored; user application data sets are cataloged in<br />
this type <strong>of</strong> catalog.<br />
Master catalog This has the same structure as a user catalog, but points to system<br />
(z/<strong>OS</strong>) data sets. It also contains information about the user catalog<br />
location and any alias pointer.<br />
Alias A special entry in the master catalog pointing to a user catalog that<br />
coincides with the HLQ <strong>of</strong> a data set name. The alias is used to find in<br />
which user catalog the data set location information exists. It means that<br />
the data set with this HLQ is cataloged in that user catalog.<br />
The sequence for locating an existing data set<br />
The MVS system provides a service called LOCATE to read entries in a catalog. When z/<strong>OS</strong><br />
tries to locate an existing data set, the following sequence takes place.<br />
► The master catalog is examined first:<br />
– If it contains an entry for the data set name, the volume information is picked up and<br />
the VTOC on that volume is used to locate the data set.<br />
– If the HLQ matches an alias defined in the master catalog, the corresponding user<br />
catalog is searched. If the data set name is found there, processing proceeds as for a<br />
find in the master catalog.<br />
► Finally, the requesting program can access the data set. As you can imagine, it is<br />
impossible to keep track <strong>of</strong> the location <strong>of</strong> millions <strong>of</strong> data sets without the catalog concept.<br />
For detailed information about catalogs refer to Chapter 6, “Catalogs” on page 325.<br />
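To see what a catalog records about a data set, you can list its entry with the IDCAMS utility. The following sketch (the data set name is taken from Figure 2-14; the job card is illustrative) lists the catalog entry, including the volume serial that the catalog points to:

```jcl
//LISTC    JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  LISTCAT ENTRIES(FPITA.DATA) ALL
/*
```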
2.15 Uncataloged and cataloged data sets<br />
Uncataloged reference<br />
// DD DSN=PAY.D1,DISP=OLD,UNIT=3390,VOL=SER=MYVOL1<br />
Cataloged reference<br />
// DD DSN=PAY.D2,DISP=OLD<br />
Figure 2-15 Cataloged and uncataloged data sets<br />
Cataloged data sets<br />
When an existing data set is cataloged, z/<strong>OS</strong> obtains unit and volume information from the<br />
catalog using the LOCATE macro service. However, if the DD statement for a cataloged data<br />
set contains VOLUME=SER=serial-number, the system does not look in the catalog; in this<br />
case, you must code the UNIT parameter and the volume information yourself.<br />
Uncataloged data sets<br />
When your existing data set is not cataloged, you must know in advance its volume location<br />
and specify it in your JCL. This can be done through the UNIT and VOL=SER, as shown in<br />
Figure 2-15.<br />
See z/<strong>OS</strong> MVS JCL Reference, SA22-7597 for information about UNIT and VOL parameters.<br />
Note: We strongly recommend that you do not have uncataloged data sets in your<br />
installation because uncataloged data sets can cause problems with duplicate data and<br />
possible incorrect data set processing.<br />
2.16 <strong>Volume</strong> table <strong>of</strong> contents (VTOC)<br />
Figure 2-16 <strong>Volume</strong> table <strong>of</strong> contents (VTOC)<br />
<strong>Volume</strong> table <strong>of</strong> contents (VTOC)<br />
The VTOC is a data set that describes the contents <strong>of</strong> the direct access volume on which it<br />
resides. It is a contiguous data set; that is, it resides in a single extent on the volume and<br />
starts after cylinder 0, track 0, and before track 65,535. A VTOC's address is located in the<br />
VOLVTOC field <strong>of</strong> the standard volume label. The volume label is described in z/<strong>OS</strong> DFSMS<br />
Using Data Sets. A VTOC consists <strong>of</strong> complete tracks.<br />
The VTOC lists the data sets that reside on its volume, along with information about the<br />
location and size <strong>of</strong> each data set, and other data set attributes. It is created when the volume<br />
is initialized through the ICKDSF utility program.<br />
The VTOC is used to locate data sets on that volume. It is composed of 140-byte data set<br />
control blocks (DSCBs), of which there are seven types, shown in Table 2-1 on page 47. Each<br />
DSCB corresponds either to a data set currently residing on the volume, or to contiguous,<br />
unassigned tracks on the volume. A set of assembler macros allows a program or z/OS itself<br />
to access VTOC information.<br />
IEHLIST utility<br />
The IEHLIST utility can be used to list, partially or completely, entries in a specified volume<br />
table <strong>of</strong> contents (VTOC), whether indexed or non-indexed. The program lists the contents <strong>of</strong><br />
selected data set control blocks (DSCBs) in edited or unedited form.<br />
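A minimal IEHLIST job might look like the following sketch (the volume serial VOL123, job card, and DD names are illustrative). The anonymous DD statement makes the volume available to the utility, and FORMAT requests an edited listing of the DSCBs:

```jcl
//LISTVTOC JOB ...
//STEP1    EXEC PGM=IEHLIST
//SYSPRINT DD  SYSOUT=*
//DD1      DD  UNIT=3390,VOL=SER=VOL123,DISP=SHR
//SYSIN    DD  *
  LISTVTOC FORMAT,VOL=3390=VOL123
/*
```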
2.17 VTOC and DSCBs<br />
Figure 2-17 Data set control block (DSCB)<br />
Data set control block (DSCB)<br />
The VTOC is composed <strong>of</strong> 140-byte data set control blocks (DSCBs) that point to data sets<br />
currently residing on the volume, or to contiguous, unassigned (free) tracks on the volume<br />
(depending on the DSCB type).<br />
DSCBs also describe the VTOC itself. CVAF routines automatically construct a DSCB when<br />
space is requested for a data set on the volume. Each data set on a DASD volume has one or<br />
more DSCBs (depending on its number <strong>of</strong> extents) describing space allocation and other<br />
control information such as operating system data, device-dependent information, and data<br />
set characteristics. There are seven kinds <strong>of</strong> DSCBs, each with a different purpose and a<br />
different format number.<br />
The first record in every VTOC is the VTOC DSCB (format-4). The record describes the<br />
device, the volume the data set resides on, the volume attributes, and the size and contents <strong>of</strong><br />
the VTOC data set itself. The next DSCB in the VTOC data set is a free-space DSCB<br />
(format-5) that describes the unassigned (free) space in the full volume. The function of<br />
various DSCBs depends on whether an optional VTOC index is allocated on the volume. The<br />
VTOC index is a B-tree-like structure that makes searching the VTOC faster.<br />
Table 2-1 on page 47 describes the various types <strong>of</strong> DSCBs, taking into consideration<br />
whether the Index VTOC is in place or not.<br />
In z/<strong>OS</strong> V1R7 there is a new address space (DEVMAN) containing trace information about<br />
CVAF events.<br />
Table 2-1 DSCBs that can be found in the VTOC<br />
<br />
Type 0 - Free VTOC DSCB<br />
Function: Describes unused DSCB records in the VTOC (contains 140 bytes of binary zeros). To delete a DSCB from the VTOC, a format-0 DSCB is written over it.<br />
How many: One for every unused 140-byte record in the VTOC. The DS4DSREC field of the format-4 DSCB is a count of the number of format-0 DSCBs in the VTOC. This field is not maintained for an indexed VTOC.<br />
<br />
Type 1 - Identifier<br />
Function: Describes the first three extents of a data set or VSAM data space.<br />
How many: One for every data set or data space on the volume, except the VTOC.<br />
<br />
Type 2 - Index<br />
Function: Describes the indexes of an ISAM data set. This data set organization is old and is no longer supported.<br />
How many: One for each ISAM data set (for a multivolume ISAM data set, a format-2 DSCB exists only on the first volume).<br />
<br />
Type 3 - Extension<br />
Function: Describes extents after the third extent of a non-VSAM data set or a VSAM data space.<br />
How many: One for each data set on the volume that has more than three extents. There can be as many as 10 for a PDSE, HFS, extended format data set, or a VSAM data set component cataloged in an integrated catalog facility catalog. PDSEs, HFS, and extended format data sets can have up to 123 extents per volume. All other data sets are restricted to 16 extents per volume. A VSAM component can have 7257 extents in up to 59 volumes (123 each).<br />
<br />
Type 4 - VTOC<br />
Function: Describes the extent and contents of the VTOC, and provides volume and device characteristics. This DSCB contains a flag indicating whether the volume is SMS-managed.<br />
How many: One on each volume.<br />
<br />
Type 5 - Free space<br />
Function: On a non-indexed VTOC, describes the space on a volume that has not been allocated to a data set (available space). For an indexed VTOC, a single empty format-5 DSCB resides in the VTOC; free space is described in the index, and DS4IVTOC is normally on.<br />
How many: One for every 26 noncontiguous extents of available space on the volume for a non-indexed VTOC; for an indexed VTOC, there is only one.<br />
<br />
Type 7 - Free space for certain devices<br />
Function: Only one field in the format-7 DSCB is an intended interface. This field indicates whether the DSCB is a format-7 DSCB. You can reference that field as DS1FMTID or DS5FMTID. A character 7 indicates that the DSCB is a format-7 DSCB, and your program is not to modify it.<br />
How many: This DSCB is not used frequently.<br />
2.18 VTOC index structure<br />
Figure 2-18 VTOC index structure<br />
VTOC index<br />
The VTOC index enhances the performance <strong>of</strong> VTOC access. The VTOC index is a<br />
physical-sequential data set on the same volume as the related VTOC, created by the<br />
ICKDSF utility program. It contains an index of the data set names in format-1 DSCBs in the<br />
VTOC, together with volume free space information.<br />
Important: An indexed VTOC is required on SMS-managed volumes; on other volumes, a<br />
VTOC index is highly recommended. For additional information about SMS-managed volumes,<br />
see z/<strong>OS</strong> DFSMS Implementing <strong>System</strong>-Managed Storage, SC26-7407.<br />
If the system detects a logical or physical error in a VTOC index, the system disables further<br />
access to the index from all systems that might be sharing the volume. Then, the VTOC<br />
remains usable but with possibly degraded performance.<br />
If a VTOC index becomes disabled, you can rebuild the index without taking the volume <strong>of</strong>fline<br />
to any system. All systems can continue to use that volume without interruption to other<br />
applications, except for a brief pause during the index rebuild. After the system rebuilds the<br />
VTOC index, it automatically re-enables the index on each system that has access to it.<br />
Next, we see more details about the internal implementation <strong>of</strong> the Index VTOC.<br />
Creating the VTOC and VTOC index<br />
To initialize a volume (prepare for I/O activity), use the Device Support Facilities (ICKDSF)<br />
utility to initially build the VTOC. You can create a VTOC index at that time by using the<br />
ICKDSF INIT command and specifying the INDEX keyword.<br />
You can use ICKDSF to convert a non-indexed VTOC to an indexed VTOC by using the<br />
BUILDIX command and specifying the IXVTOC keyword. The reverse operation can be<br />
performed by using the BUILDIX command and specifying the <strong>OS</strong>VTOC keyword. For details,<br />
see Device Support Facilities User’s Guide and Reference Release 17, GC35-0033, and<br />
z/<strong>OS</strong> DFSMSdfp Advanced Services, SC26-7400.<br />
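As a sketch, a job like the following converts the VTOC of a volume to an indexed VTOC. The volume serial VOL123 and the DD name are illustrative, and the example assumes that the VTOC index data set (SYS1.VTOCIX.VOL123) has already been allocated on the volume:

```jcl
//BLDIX    JOB ...
//STEP1    EXEC PGM=ICKDSF
//SYSPRINT DD  SYSOUT=*
//VOLDD    DD  UNIT=3390,DISP=OLD,VOL=SER=VOL123
//SYSIN    DD  *
  BUILDIX DDNAME(VOLDD) IXVTOC
/*
```

Specifying OSVTOC instead of IXVTOC on the BUILDIX statement performs the reverse conversion.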
VTOC index format-1 DSCB<br />
A format-1 DSCB in the VTOC contains the name and extent information <strong>of</strong> the VTOC index.<br />
The name <strong>of</strong> the index must be 'SYS1.VTOCIX.xxxxxxxx', where xxxxxxxx conforms to<br />
standard data set naming conventions and is usually the serial number <strong>of</strong> the volume<br />
containing the VTOC and its index. The name must be unique within the system to avoid ENQ<br />
contention.<br />
VTOC index record (VIR)<br />
Device Support Facilities (ICKDSF) initializes a VTOC index into 2048-byte physical blocks<br />
named VTOC index records (VIRs). VIRs are used in several ways. A VTOC index contains<br />
the following kinds <strong>of</strong> VIRs:<br />
► VTOC index entry record (VIER) identifies the location <strong>of</strong> format-1 DSCBs and the<br />
format-4 DSCB.<br />
► VTOC pack space map (VPSM) identifies the free and allocated space on a volume.<br />
► VTOC index map (VIXM) identifies the VIRs that have been allocated in the VTOC index.<br />
► VTOC map <strong>of</strong> DSCBs (VMDS) identifies the DSCBs that have been allocated in the<br />
VTOC.<br />
2.19 Initializing a volume using ICKDSF<br />
Figure 2-19 Initializing a volume<br />
Device Support Facilities (ICKDSF)<br />
ICKDSF is a program you can use to perform functions needed for the initialization,<br />
installation, use, and maintenance <strong>of</strong> DASD volumes. You can also use it to perform service<br />
functions, error detection, and media maintenance. However, because modern volumes are<br />
virtualized, media maintenance is no longer needed.<br />
Initializing a DASD volume<br />
After you have completed the installation <strong>of</strong> a device, you must initialize and format the<br />
volume so that it can be used by MVS.<br />
You use the INIT command to initialize volumes. The INIT command writes a volume label (on<br />
cylinder 0, track 0) and a VTOC on the device for use by MVS. It reserves and formats tracks<br />
for the VTOC at the location specified by the user and for the number <strong>of</strong> tracks specified. If no<br />
location is specified, tracks are reserved at the default location.<br />
If the volume is SMS-managed, specify the STORAGEGROUP option so that this information<br />
(SMS-managed) is recorded in the format-4 DSCB.<br />
Initializing a volume for the first time in <strong>of</strong>fline mode<br />
In the example in Figure 2-19, a volume is initialized at the minimal level because neither the<br />
CHECK nor the VALIDATE parameter is specified (as recommended). Because the volume is<br />
being initialized for the first time, it must be mounted <strong>of</strong>fline (to avoid MVS data set<br />
allocations), and the volume serial number must be specified. Because the VTOC parameter<br />
is not specified, the default VTOC size is the number <strong>of</strong> tracks in a cylinder minus one. For a<br />
3390, the default location is cylinder 0, track 1, for 14 tracks.<br />
//EXAMPLE JOB<br />
//EXEC PGM=ICKDSF<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
INIT UNITADDRESS(0353) NOVERIFY -<br />
VOLID(VOL123)<br />
/*
Initializing a volume to be managed in a DFSMS environment<br />
In the following example, a volume that is to be system-managed is initialized. The volume is<br />
initialized in <strong>of</strong>fline mode at the minimal level. The VTOC is placed at cylinder 2, track 1 and<br />
occupies ten tracks. The VTOC is followed by the VTOC index. The STORAGEGROUP<br />
parameter indicates the volume is to be managed in a DFSMS environment.<br />
INIT UNIT(0353) NOVERIFY STORAGEGROUP -<br />
OWNERID(PAYROLL) VTOC(2,1,10) INDEX(2,11,5)<br />
The following example performs an online minimal initialization, and as a result <strong>of</strong> the<br />
command, an index to the VTOC is created.<br />
// JOB<br />
// EXEC PGM=ICKDSF<br />
//XYZ987 DD UNIT=3390,DISP=OLD,VOL=SER=PAY456<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
INIT DDNAME(XYZ987) NOVERIFY INDEX(X'A',X'B',X'2')<br />
/*<br />
ICKDSF stand-alone version<br />
You can run the stand-alone version <strong>of</strong> ICKDSF under any <strong>IBM</strong> <strong>System</strong> z® processor. To run<br />
the stand-alone version <strong>of</strong> ICKDSF, you IPL ICKDSF with a stand-alone IPL tape that you<br />
create under z/<strong>OS</strong>. This function allows you to initialize volumes without the need for running<br />
an operating system such as z/<strong>OS</strong>.<br />
Creating an ICKDSF stand-alone IPL tape using z/<strong>OS</strong><br />
For z/<strong>OS</strong>, the stand-alone code is in SYS1.SAMPLIB as ICKSADSF. You can load the<br />
ICKDSF program from a file on tape. The following example can be used to copy the<br />
stand-alone program to an unlabeled tape.<br />
//JOBNAME JOB JOB CARD PARAMETERS<br />
//STEPNAME EXEC PGM=IEBGENER<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD DUMMY,DCB=BLKSIZE=80<br />
//SYSUT1 DD DSNAME=SYS1.SAMPLIB(ICKSADSF),UNIT=SYSDA,<br />
// DISP=SHR,VOLUME=SER=XXXXXX<br />
//SYSUT2 DD DSNAME=ICKDSF,UNIT=3480,LABEL=(,NL),<br />
// DISP=(,KEEP),VOLUME=SER=YYYYYY,<br />
// DCB=(RECFM=F,LRECL=80,BLKSIZE=80)<br />
For details on how to IPL the stand-alone version and to see examples <strong>of</strong> the commands,<br />
refer to Device Support Facilities User’s Guide and Reference Release 17, GC35-0033.<br />
Chapter 3. Extended access volumes<br />
z/<strong>OS</strong> V1R10 introduced extended address volume (EAV), which allowed DASD storage<br />
volumes to be larger than 65,520 cylinders. The space above the first 65,520 cylinders is<br />
referred to as cylinder-managed space. The system uses extended addressing space (EAS)<br />
techniques to access tracks in cylinder-managed space. Data sets that can use<br />
cylinder-managed space are referred to as being EAS-eligible.<br />
With z/<strong>OS</strong> V1R10, only VSAM data sets are EAS-eligible. You can control whether VSAM<br />
data sets can reside in cylinder-managed space by including or excluding EAVs in particular<br />
storage groups. For non-SMS managed data sets, control the allocation to a volume by<br />
specifying a specific VOLSER or esoteric name.<br />
With z/<strong>OS</strong> V1R11, extended-format sequential data sets are now EAS-eligible. You can<br />
control whether the allocation <strong>of</strong> EAS-eligible data sets can reside in cylinder-managed space<br />
using both the methods supported in z/<strong>OS</strong> V1R10 and by using the new EATTR data set<br />
attribute keyword.<br />
© Copyright <strong>IBM</strong> Corp. 2010. All rights reserved. 53
3.1 Traditional DASD capacity<br />
Figure 3-1 Traditional DASD capacity<br />
DASD capacity<br />
Figure 3-1 shows various DASD device types. 3380 devices were used in the 1980s. Capacity<br />
went from 885 to 2,655 cylinders per volume. When storage density increased, new device<br />
types were introduced at the end <strong>of</strong> the 1980s. Those types were called 3390. Capacity per<br />
volume ranged from 1,113 to 3,339 cylinders. A special device type, the model 3390-9, was<br />
introduced to store large amounts <strong>of</strong> data that did not need very fast access. The track geometry<br />
within one device category was (and is) always the same; this means that 3380 volumes have<br />
47,476 bytes per track, and 3390 volumes have 56,664 bytes per track.<br />
Table 3-1 lists further information about DASD capacity.<br />
Table 3-1 DASD capacity<br />
<br />
Physical characteristic   3380-J  3380-E  3380-K  3390-1  3390-2  3390-3  3390-9<br />
Data Cyl/Device              885    1770    2655    1113    2226    3339   10017<br />
Track/Cyl                     15      15      15      15      15      15      15<br />
Bytes/Trk                  47476   47476   47476   56664   56664   56664   56664<br />
Bytes/Cylinder            712140  712140  712140  849960  849960  849960  849960<br />
MB/Device                    630    1260    1890     946    1892    2838    8514<br />
3.2 Large volumes before z/<strong>OS</strong> V1R10<br />
A "large volume" is larger than a 3390-9<br />
The largest possible volume has 65520 (3390) cylinders<br />
That would be a "3390-54" if it had its own device type<br />
Almost 54 GB<br />
3390-27<br />
32760 Cyls<br />
Figure 3-2 Large volume support 3390-54<br />
3390-54<br />
65520 Cyls<br />
Large volume support<br />
The <strong>IBM</strong> TotalStorage Enterprise Storage Server (ESS) initially supported custom volumes <strong>of</strong><br />
up to 10017 cylinders, the size <strong>of</strong> the largest standard volume, the 3390 model 9. This was<br />
the limit set by the operating system s<strong>of</strong>tware. The <strong>IBM</strong> TotalStorage ESS large volume<br />
support enhancement, announced in November 2001, has now increased the upper limit to<br />
65520 cylinders, approximately 54 GB. The enhancement is provided as a combination <strong>of</strong><br />
<strong>IBM</strong> TotalStorage ESS licensed internal code (LIC) changes and system s<strong>of</strong>tware changes,<br />
available for z/<strong>OS</strong> and z/VM®.<br />
Today, for example, the <strong>IBM</strong> Enterprise Storage Server emulates the <strong>IBM</strong> 3390. On an<br />
emulated disk or on a VM minidisk, the number <strong>of</strong> cylinders per volume is a configuration<br />
option. It might be less than or greater than the stated number. If so, the number <strong>of</strong> bytes per<br />
device will differ accordingly. The <strong>IBM</strong> DS6000 (machine type 1750) supports up to 32,760<br />
cylinders, and the <strong>IBM</strong> DS8000 (machine type 2107) supports up to 65,520 cylinders.<br />
Large volume support is available on z/<strong>OS</strong> operating systems, the ICKDSF, and DFSORT<br />
utilities.<br />
Large volume support must be installed on all systems in a sysplex prior to sharing data sets<br />
on large volumes. Shared system and application data sets cannot be placed on large<br />
volumes until all system images in a sysplex have large volume support installed.<br />
Chapter 3. Extended access volumes 55
Large volume support design considerations<br />
Benefits <strong>of</strong> using large volumes can be briefly summarized as follows:<br />
► They reduce storage management tasks by allowing you to define and manage smaller<br />
configurations.<br />
► They reduce the number <strong>of</strong> multivolume data sets you have to manage.<br />
► They relieve architectural constraints by allowing you to address more data within the<br />
existing 64K subchannel number limit.<br />
The size <strong>of</strong> the logical volume defined does not have an impact on the performance <strong>of</strong> the<br />
ESS subsystem. The ESS does not serialize I/O on the basis <strong>of</strong> logical devices, so an<br />
increase in the logical volume size does not affect the ESS backend performance. Host<br />
operating systems, on the other hand, serialize I/Os against devices. As more data sets<br />
reside on a single volume, there will be greater I/O contention accessing the device. With<br />
large volume support, it is more important than ever to try to minimize contention on the<br />
logical device level. To avoid potential I/O bottlenecks on devices:<br />
► Exploit the use <strong>of</strong> Parallel Access <strong>Volume</strong>s to reduce I<strong>OS</strong> queuing on the system level.<br />
► Eliminate unnecessary reserves by using WLM in goal mode.<br />
► Multiple allegiance will automatically reduce queuing on sharing systems.<br />
Parallel Access <strong>Volume</strong> (PAV) support is <strong>of</strong> key importance when implementing large<br />
volumes. PAV enables one MVS system to initiate multiple I/Os to a device concurrently. This<br />
keeps I<strong>OS</strong>Q times down and performance up even with many active data sets on the same<br />
volume. PAV is a practical “must” with large volumes. We discourage you from using large<br />
volumes without PAV. In particular, we recommend the use <strong>of</strong> dynamic PAV and HyperPAV.<br />
As volume sizes grow larger, more data and data sets reside on a single S/390 device address. Thus, the larger the volume, the greater the multisystem performance impact of serializing volumes with RESERVE processing. You need to exploit a GRS Star configuration and convert all possible RESERVEs into global ENQ requests.
56 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
3.3 zArchitecture data scalability<br />
Figure 3-3 zArchitecture data scalability (serialization granularity: reserve/release on the physical volume; access visibility: multiple allegiance, Parallel Access Volume (PAV), HyperPAV; dynamic volume expansion; 3390-3: 3,339 cylinders, 3 GB; 3390-9: 10,017 cylinders, 9 GB; 32,760 cylinders, 27 GB; 65,520 cylinders, 54 GB ("3390-54"))
DASD architecture<br />
In the past decade, as processing power has increased dramatically, considerable care has been taken to keep the amount of directly accessible data growing in proportion. Over the years, DASD volumes have increased in size by increasing the number of cylinders per volume, and thus the GB capacity.
However, the existing track addressing architecture has limited growth to relatively small GB<br />
capacity volumes. This has placed increasing strain on the 4-digit device number limit and the<br />
number of UCBs that can be defined. Before extended address volumes, the largest available volume was one with 65,520 cylinders, or approximately 54 GB, as shown in Figure 3-3.
Rapid data growth on the z/OS platform is leading to a critical problem for various clients, with a 37% compound annual growth rate of disk storage between 1996 and 2007. This growth is becoming a real constraint on z/OS data. Business resilience solutions (GDPS®, HyperSwap®, and PPRC) that provide continuous availability are also driving this constraint.
Serialization granularity<br />
Since the 1960s, shared DASD has been serialized through a sequence of RESERVE/RELEASE CCWs that are today under the control of GRS, as shown in Figure 3-3. This was a useful mechanism as long as the amount of data serialized (the granularity) was not too great. But whenever such a device grew to contain too much data, bottlenecks became an issue.
DASD virtual visibility<br />
Traditional S/390 architecture does not allow more than one I/O operation to the same S/390 device, because such devices can physically handle only one I/O operation at a time.
However, in modern DASD subsystems such as ESS, DS6000, and DS8000, the device<br />
(such as a 3390) is only a logical view. The contents <strong>of</strong> this logical device are spread in HDA<br />
RAID arrays and in caches. Therefore, it is technically possible to have more than one I/O<br />
operation towards the same logical device. Changes have been made in z/<strong>OS</strong> (in I<strong>OS</strong> code),<br />
in the channel subsystem (SAP), and in ESS, DS6000, and DS8000 to allow more than one<br />
I/O operation on the same logical device. This is called parallel I/O, and it is available in two<br />
types:<br />
► Multiple allegiance<br />
► Parallel access volume (PAV)<br />
This relief builds upon prior technologies that were implemented in part to help reduce the<br />
pressure on running out <strong>of</strong> device numbers. These include PAV and HyperPAV. PAV alias<br />
UCBs can be placed in an alternate subchannel set (z9® multiple subchannel support).<br />
HyperPAV reduces the number <strong>of</strong> alias UCBs over traditional PAVs and provides the I/O<br />
throughput required.<br />
Multiple allegiance<br />
Multiple allegiance (MA) was introduced to alleviate this constraint. It allows
serialization on a limited amount <strong>of</strong> data within a given DASD volume, which leads to the<br />
possibility <strong>of</strong> having several (non-overlapping) serializations held at the same time on the<br />
same DASD volume. This is a useful mechanism on which any extension <strong>of</strong> the DASD volume<br />
addressing scheme can rely. In other terms, multiple allegiance provides finer (than<br />
RESERVE/RELEASE) granularity for serializing data on a volume. It gives the capability to<br />
support I/O requests from multiple systems, one per system, to be concurrently active against<br />
the same logical volume, if they do not conflict with each other. Conflicts occur when two or<br />
more I/O requests require access to overlapping extents (an extent is a contiguous range <strong>of</strong><br />
tracks) on the volume, and at least one <strong>of</strong> the I/O requests involves writing <strong>of</strong> data.<br />
Requests involving writing <strong>of</strong> data can execute concurrently with other requests as long as<br />
they operate on non-overlapping extents on the volume. Conflicting requests are internally<br />
queued in the DS8000. Read requests can always execute concurrently regardless <strong>of</strong> their<br />
extents. Without the MA capability, DS8000 generates a busy indication for the volume<br />
whenever one <strong>of</strong> the systems issues a request against the volume, thereby causing the I/O<br />
requests to be queued within the channel subsystem (CSS). However, this concurrency can<br />
be achieved as long as no data accessed by one channel program can be altered through the<br />
actions <strong>of</strong> another channel program.<br />
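The conflict rule just described can be sketched as a small model (illustrative only, not DS8000 microcode): two I/O requests against the same logical volume conflict only when their extents overlap and at least one of them writes.

```python
def extents_overlap(a, b):
    """a and b are (first_track, last_track) tuples, inclusive."""
    return a[0] <= b[1] and b[0] <= a[1]

def conflicts(io1, io2):
    """Each I/O is (extent, is_write). Reads never conflict with reads."""
    (ext1, write1), (ext2, write2) = io1, io2
    return extents_overlap(ext1, ext2) and (write1 or write2)

# Two reads on overlapping extents run concurrently:
print(conflicts(((0, 99), False), ((50, 149), False)))   # False
# A write overlapping a read must be queued by the control unit:
print(conflicts(((0, 99), True), ((50, 149), False)))    # True
# A write and a read on disjoint extents run concurrently:
print(conflicts(((0, 99), True), ((100, 199), False)))   # False
```

In the real control unit, conflicting requests are queued internally rather than rejected, as the text notes.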
Parallel access volume<br />
Parallel access volume (PAV) allows concurrent I/Os to originate from the same z/<strong>OS</strong> image.<br />
Using PAV can provide significant performance enhancements in <strong>IBM</strong> <strong>System</strong> z environments<br />
by enabling simultaneous processing for multiple I/O operations to the same logical volume.<br />
z/OS V1R6 introduced an option that allows you to specify PAV capability as one of the volume selection criteria for SMS-managed data sets assigned to a storage class.
HyperPAV feature<br />
With the IBM System Storage DS8000 Turbo model and the IBM server synergy features, HyperPAV, together with PAV and multiple allegiance, can dramatically improve performance and efficiency for System z environments. With HyperPAV technology:
► z/<strong>OS</strong> uses a pool <strong>of</strong> UCB aliases.<br />
► As each application I/O is requested, if the base volume is busy with another I/O:<br />
– z/<strong>OS</strong> selects a free alias from the pool, quickly binds the alias device to the base<br />
device, and starts the I/O.<br />
– When the I/O completes, the alias device is used for another I/O on the LSS or is<br />
returned to the free alias pool.<br />
If too many I/Os are started simultaneously:<br />
► z/<strong>OS</strong> queues the I/Os at the LSS level.<br />
► When an exposure frees up that can be used for queued I/Os, they are started.<br />
► Queued I/O is done within assigned I/O priority.<br />
For each z/<strong>OS</strong> image within the sysplex, aliases are used independently. WLM is not involved<br />
in alias movement so it does not need to collect information to manage HyperPAV aliases.<br />
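The alias-handling steps above can be sketched as a minimal model (an illustration of the described behavior, not z/OS IOS code): a per-LSS pool of alias UCBs, an alias bound to a base only for the duration of one I/O, and I/Os that find the pool empty queued at the LSS level.

```python
from collections import deque

class HyperPavLss:
    def __init__(self, aliases):
        self.free_aliases = deque(aliases)  # pool of alias UCBs
        self.queued = deque()               # I/Os waiting at the LSS level

    def start_io(self, base, io):
        if self.free_aliases:
            alias = self.free_aliases.popleft()  # bind a free alias to the base
            return ("started", alias, base, io)
        self.queued.append((base, io))           # pool exhausted: queue at LSS
        return ("queued", None, base, io)

    def complete_io(self, alias):
        if self.queued:                          # reuse the alias immediately
            base, io = self.queued.popleft()
            return ("started", alias, base, io)
        self.free_aliases.append(alias)          # return alias to the free pool
        return ("idle", alias, None, None)

lss = HyperPavLss(["08F0", "08F1"])
print(lss.start_io("0801", "io-1")[0])  # started
print(lss.start_io("0802", "io-2")[0])  # started
print(lss.start_io("0801", "io-3")[0])  # queued
print(lss.complete_io("08F0")[0])       # started (a queued I/O is dispatched)
```

The device numbers are taken from Figure 3-7 later in this chapter; I/O priority ordering of the LSS queue is omitted for brevity.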
Benefits <strong>of</strong> HyperPAV<br />
HyperPAV has been designed to provide an even more efficient parallel access volume (PAV)<br />
function. When implementing larger volumes, it provides a way to scale I/O rates without the<br />
need for additional PAV alias definitions. HyperPAV exploits FICON® architecture to reduce<br />
overhead, improve addressing efficiencies, and provide storage capacity and performance<br />
improvements, as follows:<br />
► More dynamic assignment <strong>of</strong> PAV aliases improves efficiency.<br />
► The number of PAV aliases needed might be reduced, consuming fewer of the 64 K available device numbers and leaving more of them for capacity use.
3.4 WLM controlling PAVs<br />
Figure 3-4 WLM controlling PAVs
Workload Manager (WLM)<br />
In the zSeries Parallel Sysplex environments, the z/<strong>OS</strong> Workload Manager (WLM) controls<br />
where work is run and optimizes the throughput and performance <strong>of</strong> the total system. The<br />
ESS provides the WLM with more sophisticated ways to control the I/O across the sysplex.<br />
These functions include parallel access to both single-system and shared volumes, and the<br />
ability to prioritize the I/O based upon WLM goals. The combination <strong>of</strong> these features<br />
significantly improves performance in a wide variety <strong>of</strong> workload environments.<br />
Parallel Access <strong>Volume</strong> (PAV)<br />
Parallel Access <strong>Volume</strong> is one <strong>of</strong> the original features that the <strong>IBM</strong> TotalStorage Enterprise<br />
Storage Server brings specifically for z/<strong>OS</strong> operating systems, helping the zSeries running<br />
applications to concurrently share the same logical volumes.<br />
The ability to do multiple I/O requests to the same volume nearly eliminates I<strong>OS</strong> queue time<br />
(I<strong>OS</strong>Q), one <strong>of</strong> the major components in z/<strong>OS</strong> response time. Traditionally, access to highly<br />
active volumes has involved manual tuning, splitting data across multiple volumes, and more.<br />
With PAV and the Workload Manager, you can almost forget about manual performance<br />
tuning. WLM manages PAVs across all members <strong>of</strong> a sysplex, too. The ESS, in conjunction<br />
with z/<strong>OS</strong>, has the ability to meet the performance requirements on its own.<br />
WLM I<strong>OS</strong> queuing<br />
As part of the Enterprise Storage Server's implementation of parallel access volumes, the concept of base addresses versus alias addresses was introduced. While the base address
is the actual unit address <strong>of</strong> a given volume, there can be many alias addresses assigned to a<br />
base address, and any or all <strong>of</strong> those alias addresses can be reassigned to a separate base<br />
address. With dynamic alias management, WLM can automatically perform those alias<br />
address reassignments to help work meet its goals and to minimize I<strong>OS</strong> queuing.<br />
When you specify a yes value on the Service Coefficient/Service Definition Options panel,<br />
you enable dynamic alias management globally throughout the sysplex. WLM will keep track<br />
<strong>of</strong> the devices used by separate workloads and broadcast this information to other systems in<br />
the sysplex. If WLM determines that a workload is not meeting its goal due to I<strong>OS</strong> queue time,<br />
then WLM attempts to find alias devices that can be moved to help that workload achieve its<br />
goal. Even if all work is meeting its goals, WLM will attempt to move aliases to the busiest<br />
devices to minimize overall queuing.<br />
Alias assignment<br />
It is not always easy to predict which volumes should have an alias address assigned, and how many.
Your s<strong>of</strong>tware can automatically manage the aliases according to your goals. z/<strong>OS</strong> can exploit<br />
automatic PAV tuning if you are using WLM in goal mode. z/<strong>OS</strong> recognizes the aliases that<br />
are initially assigned to a base during the Nucleus Initialization Program (NIP) phase. WLM<br />
can dynamically tune the assignment <strong>of</strong> alias addresses. WLM monitors the device<br />
performance and is able to dynamically reassign alias addresses from one base to another if<br />
predefined goals for a workload are not met. WLM instructs I<strong>OS</strong> to reassign an alias.<br />
WLM goal mode management in a sysplex<br />
WLM keeps track <strong>of</strong> the devices utilized by the various workloads, accumulates this<br />
information over time, and broadcasts it to the other systems in the same sysplex. If WLM<br />
determines that any workload is not meeting its goal due to I<strong>OS</strong>Q time, WLM attempts to find<br />
an alias device that can be reallocated to help this workload achieve its goal.<br />
Through WLM, there are two mechanisms to tune the alias assignment:<br />
► The first mechanism is goal based. This logic attempts to give additional aliases to a PAV<br />
device that is experiencing I<strong>OS</strong> queue delays and is impacting a service class period that<br />
is missing its goal. To give additional aliases to the receiver device, a donor device must<br />
be found with a less important service class period. A bitmap is maintained with each PAV<br />
device that indicates the service classes using the device.<br />
► The second mechanism is to move aliases to high-contention PAV devices from<br />
low-contention PAV devices. High-contention devices will be identified by having a<br />
significant amount <strong>of</strong> I<strong>OS</strong> queue. This tuning is based on efficiency rather than directly<br />
helping a workload to meet its goal.<br />
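The two tuning mechanisms can be sketched as follows (an illustrative model only; the device records, importance values, and selection details are simplified assumptions, not actual WLM logic):

```python
def pick_goal_based_move(devices):
    """Goal-based tuning: give an alias to a device delaying a missing-goal
    service class, taken from a donor serving less important work.
    devices: dicts with 'name', 'iosq', 'aliases', 'importance'
    (1 = most important), 'missing_goal'."""
    receivers = [d for d in devices if d["missing_goal"] and d["iosq"] > 0]
    if not receivers:
        return None
    receiver = max(receivers, key=lambda d: d["iosq"])
    donors = [d for d in devices
              if d is not receiver and d["aliases"] > 0
              and d["importance"] > receiver["importance"]]
    if not donors:
        return None
    donor = min(donors, key=lambda d: d["iosq"])  # least-contended donor
    return donor["name"], receiver["name"]

def pick_efficiency_move(devices):
    """Efficiency tuning: move an alias from the least-contended to the
    most-contended device, independent of goals."""
    eligible = [d for d in devices if d["aliases"] > 0]
    donor = min(eligible, key=lambda d: d["iosq"])
    receiver = max(devices, key=lambda d: d["iosq"])
    if donor is receiver or receiver["iosq"] == 0:
        return None
    return donor["name"], receiver["name"]

devs = [
    {"name": "0801", "iosq": 40, "aliases": 1, "importance": 1, "missing_goal": True},
    {"name": "0802", "iosq": 5,  "aliases": 3, "importance": 3, "missing_goal": False},
]
print(pick_goal_based_move(devs))   # ('0802', '0801')
print(pick_efficiency_move(devs))   # ('0802', '0801')
```

In a real sysplex the equivalent decision is reached cooperatively by all WLMs, using the broadcast device-usage information described above.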
3.5 Parallel Access <strong>Volume</strong>s (PAVs)<br />
Figure 3-5 Parallel access volumes (PAVs): applications in each z/OS image do I/O to base volumes through base UCBs (0801, 0802) and statically assigned alias UCBs (08F0-08F3)
Parallel access volume (PAV)<br />
The z/OS I/O supervisor (IOS) represents a device in a unit control block (UCB). Traditionally, this I/O device does not support concurrency; it is treated as a single, serially used resource. High I/O
activity towards the same device can adversely affect performance. This contention is worst<br />
for large volumes with many small data sets. The symptom displayed is extended I<strong>OS</strong>Q time,<br />
where the I/O request is queued in the UCB. z/<strong>OS</strong> cannot attempt to start more than one I/O<br />
operation at a time to the device.<br />
The ESS and DS8000 support concurrent data transfer operations to or from the same<br />
3390/3380 devices from the same system. A device (volume) accessed in this way is called a<br />
parallel access volume (PAV).<br />
PAV exploitation requires both s<strong>of</strong>tware enablement and an optional feature on your<br />
controller. PAV support must be installed on each controller. It enables the issuing <strong>of</strong> multiple<br />
channel programs to a volume from a single system, and allows simultaneous access to the<br />
logical volume by multiple users or jobs. Reads, as well as writes to other extents, can be<br />
satisfied simultaneously. The domain <strong>of</strong> an I/O consists <strong>of</strong> the specified extents to which the<br />
I/O operation applies, which correspond to the extents of the same data set. Writes to the same domain still have to be serialized to maintain data integrity; the same applies to a read issued together with a write to that domain.
The implementation <strong>of</strong> N parallel I/Os to the same 3390/3380 device consumes N addresses<br />
in the logical controller, thus decreasing the number <strong>of</strong> possible real devices. Also, UCBs are<br />
not prepared to allow multiple I/Os, due to software product compatibility issues. Support is therefore implemented by defining multiple UCBs for the same device.
The UCBs are <strong>of</strong> two types:<br />
► Base address: This is the actual unit address. There is only one for any volume.<br />
► Alias address: Alias addresses are mapped back to a base device address. I/O scheduled<br />
for an alias is executed against the base by the controller. No physical disk space is<br />
associated with an alias address. Alias UCBs are stored above the 16 MB line.<br />
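The base/alias relationship can be sketched as a small model (illustrative only, not z/OS control-block code): a base with N statically assigned aliases supports up to N + 1 concurrent I/Os from one image, and further requests queue on the UCB, accumulating IOSQ time.

```python
class BaseDevice:
    def __init__(self, base_ucb, alias_ucbs):
        self.ucbs = [base_ucb] + list(alias_ucbs)  # all map to one logical volume
        self.active = set()
        self.iosq = []  # I/Os queued on the UCB (IOSQ delay)

    def start_io(self, io):
        for ucb in self.ucbs:
            if ucb not in self.active:
                self.active.add(ucb)   # I/O started on this exposure
                return ucb
        self.iosq.append(io)           # all exposures busy: queue the request
        return None

vol = BaseDevice("0801", ["08F0", "08F1"])
print(vol.start_io("io-1"))  # 0801
print(vol.start_io("io-2"))  # 08F0
print(vol.start_io("io-3"))  # 08F1
print(vol.start_io("io-4"))  # None (queued; IOSQ time accrues)
```

This is the static PAV picture; the HyperPAV pool model shown earlier replaces the fixed alias assignment with per-I/O binding.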
PAV benefits<br />
Workloads that are most likely to benefit from PAV functionality being available include:<br />
► <strong>Volume</strong>s with many concurrently open data sets, such as volumes in a work pool<br />
► <strong>Volume</strong>s that have a high read to write ratio per extent<br />
► <strong>Volume</strong>s reporting high I<strong>OS</strong>Q times<br />
Candidate data sets types:<br />
► Have high read to write ratio<br />
► Have many extents on one volume<br />
► Are concurrently shared by many readers<br />
► Are accessed using media manager or VSAM-extended format (32-byte suffix)<br />
PAVs can be assigned to base UCBs either:
► Manually (static), by the installation.
► Dynamically: WLM can move alias UCBs from one base UCB to another in order to:
– Balance device utilization
– Honor the goals of transactions suffering I/O delays because of long IOSQ times. All WLMs in the sysplex must agree with the movement of alias UCBs.
However, dynamic PAV still has issues:
► Any change must be agreed by all WLMs in the sysplex, using XCF communication.
► The number of aliases for one device must be equal in all z/OS systems.
► Any change implies a dynamic I/O configuration change.
To resolve these issues, HyperPAV was introduced. With HyperPAV, all alias UCBs are located in a pool and are used dynamically by IOS.
3.6 HyperPAV feature for DS8000 series<br />
Figure 3-6 HyperPAV implementation (HyperPAV reduces the number of PAV aliases needed per logical subsystem (LSS))
DS8000 feature<br />
HyperPAV is an optional feature on the DS8000 series, available with the HyperPAV indicator<br />
feature number 0782 and corresponding DS8000 series function authorization (2244-PAV<br />
HyperPAV feature number 7899). HyperPAV also requires the purchase <strong>of</strong> one or more PAV<br />
licensed features and the FICON/ESCON® Attachment licensed feature. The FICON/ESCON<br />
Attachment licensed feature applies only to the DS8000 Turbo Models 931, 932, and 9B2.<br />
Many DS8000 series users can benefit from these enhancements to PAV.
HyperPAV allows an alias address to be used to access any base on the same control unit image, on a per-I/O basis. This capability also allows separate HyperPAV hosts to use one alias to access separate bases, which reduces the number of alias addresses required to support a set of bases in a System z environment, with no latency in targeting an alias to a base. This functionality is also designed to enable applications to achieve equal or better performance than is possible with the original PAV feature alone, while using the same or fewer z/OS resources. The HyperPAV capability is offered on z/OS V1R6 and later.
HyperPAV reduces the number of PAV aliases needed per LSS by an order of magnitude while still maintaining optimal response times. This is accomplished by no longer statically binding PAV aliases to PAV bases; WLM no longer adjusts the bindings. In HyperPAV mode, PAV aliases are bound to PAV bases only for the duration of a single I/O operation, which significantly reduces the number of aliases required per LSS.
3.7 HyperPAV implementation<br />
Figure 3-7 HyperPAV implementation using a pool of aliases: applications in each z/OS image do I/O to base volumes 0801 and 0802 (base UA=01 and UA=02 in Logical Subsystem (LSS) 0800); alias UCBs 08F0-08F3 are drawn from a per-image pool
HyperPAV feature<br />
With the IBM System Storage DS8000 Turbo model and the IBM server synergy features, HyperPAV, together with PAV, multiple allegiance, and support for the IBM System z MIDAW facility, can dramatically improve performance and efficiency for System z environments.
With HyperPAV technology:<br />
► z/<strong>OS</strong> uses a pool <strong>of</strong> UCB aliases.<br />
► As each application I/O is requested, if the base volume is busy with another I/O:<br />
– z/<strong>OS</strong> selects a free alias from the pool, quickly binds the alias device to the base<br />
device, and starts the I/O.<br />
– When the I/O completes, the alias device is used for another I/O on the LSS or is<br />
returned to the free alias pool.<br />
If too many I/Os are started simultaneously:<br />
► z/<strong>OS</strong> will queue the I/Os at the LSS level.<br />
► When an exposure frees up that can be used for queued I/Os, they are started.<br />
► Queued I/O is done within assigned I/O priority.<br />
For each z/<strong>OS</strong> image within the sysplex, aliases are used independently. WLM is not involved<br />
in alias movement. Therefore, it does not need to collect information to manage HyperPAV<br />
aliases.<br />
Note: HyperPAV was introduced and integrated in z/<strong>OS</strong> V1R9 and is available in z/<strong>OS</strong><br />
V1R8 with APAR OA12865.<br />
3.8 Device type 3390 and 3390 Model A<br />
An EAV is a volume with more than 65,520 cylinders. EAV volumes increase the amount of addressable DASD storage per volume beyond 65,520 cylinders.
Figure 3-8 EAV volumes (Model 3: 3,339 cylinders, 3 GB; Model 9: 10,017 cylinders, 9 GB; Model 9: 32,760 cylinders, 27 GB; Model 9: 65,520 cylinders, 54 GB; EAV Model A: hundreds of TB)
EAV volumes<br />
An extended address volume (EAV) is a volume with more than 65,520 cylinders. An EAV<br />
increases the amount <strong>of</strong> addressable DASD storage per volume beyond 65,520 cylinders by<br />
changing how tracks on volumes are addressed. The extended address volume is the next<br />
step in providing larger volumes for z/OS; this support was first provided in z/OS V1R10. Over the years, volumes have grown by increasing the number of cylinders and thus GB capacity. However, the existing track addressing architecture limited this growth to relatively small GB capacity volumes, which put pressure on the 4-digit device number limit. Previously, the largest available volume was one with 65,520 cylinders, or approximately 54 GB. Access to these volumes includes the use of PAV, HyperPAV, and FlashCopy SE (Space-efficient FlashCopy).
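The changed track addressing can be illustrated as follows. This sketch assumes the EAV track address layout described in the DFSMS documentation, where the 16-bit head halfword of the CCCCHHHH track address carries the high-order 12 bits of a 28-bit cylinder number, with the head number in the low-order 4 bits; the helper functions are illustrative, not system code.

```python
def encode_track(cyl, head):
    """Pack a cylinder/head pair into a 32-bit CCCCHHHH track address."""
    assert 0 <= cyl < 2**28 and 0 <= head < 16
    cccc = cyl & 0xFFFF                 # low-order 16 bits of the cylinder
    hhhh = ((cyl >> 16) << 4) | head    # high 12 cylinder bits + 4-bit head
    return (cccc << 16) | hhhh

def decode_track(addr):
    """Recover (cylinder, head) from a 32-bit track address."""
    cccc = (addr >> 16) & 0xFFFF
    hhhh = addr & 0xFFFF
    cyl = ((hhhh >> 4) << 16) | cccc
    return cyl, hhhh & 0xF

# Cylinders below 65,536 keep their classic encoding:
print(decode_track(encode_track(1234, 7)))     # (1234, 7)
# A cylinder in the extended addressing space still round-trips:
print(decode_track(encode_track(200000, 14)))  # (200000, 14)
```

With 28 bits for the cylinder number, the architecture can address far more than the 65,520-cylinder limit of the classic 16-bit cylinder field.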
With EAV volumes, an architecture is implemented that provides a capacity <strong>of</strong> hundreds <strong>of</strong><br />
terabytes for a single volume. However, the first releases are limited to a volume with 223 GB<br />
or up to 262,668 cylinders.<br />
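As a quick arithmetic check of these figures: a 3390 track holds 56,664 bytes and a cylinder has 15 tracks, so the 262,668-cylinder limit works out to roughly 223 GB (decimal).

```python
BYTES_PER_TRACK = 56_664   # 3390 track capacity in bytes
TRACKS_PER_CYL = 15        # tracks per 3390 cylinder

def capacity_gb(cylinders):
    """Decimal GB for a 3390-geometry volume of the given cylinder count."""
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK / 1e9

print(round(capacity_gb(262_668)))  # 223
```

Note that the nominal model capacities (3, 9, 27, and 54 GB) are rounded labels rather than exact byte counts.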
3390 Model A<br />
A volume of this size has to be configured in the DS8000 as a 3390 Model A. However, a 3390 Model A is not always an EAV: a 3390 Model A can be configured in the DS8000 with any number of cylinders, and it is an EAV only when more than 65,520 cylinders are defined. Figure 3-8 illustrates the 3390 device types.
Note: With the 3390 Model A, the model A refers to the model configured in the DS8000. It<br />
has no association with the 3390A notation in HCD that indicates a PAV-alias UCB in the<br />
z/<strong>OS</strong> operating system. The Model “A” was chosen so that it did not imply a particular<br />
device size as previous models 3390-3 and 3390-9 did.<br />
3.9 Extended access volumes (EAV)<br />
► Increased z/OS addressable disk storage
► Provides constraint relief for applications using large VSAM and extended-format data sets
► 3390 Model A: the device can be configured to have from 1 to 268,434,453 cylinders (the architectural maximum)
► With z/OS V1R10, the size is limited to 223 GB (262,668 cylinders maximum)
► Managed by the system as a general purpose volume
► Works well for applications with large files
► PAV and HyperPAV technologies help by allowing I/O rates to scale as a volume gets larger
Figure 3-9 EAV volumes with z/OS
EAV benefit<br />
The benefit <strong>of</strong> this support is that the amount <strong>of</strong> z/<strong>OS</strong> addressable disk storage is further<br />
significantly increased. This provides relief for customers that are approaching the 4-digit<br />
device number limit by providing constraint relief for applications using large VSAM data sets,<br />
such as those used by DB2, CICS, zFS file systems, SMP/E CSI data sets, and NFS<br />
mounted data sets. This support is provided in z/<strong>OS</strong> V1R10 and enhanced in z/<strong>OS</strong> V1R11.<br />
EAV eligible data sets<br />
VSAM KSDS, RRDS, ESDS and linear data sets, both SMS-managed and<br />
non-SMS-managed, and zFS data sets (which are VSAM anyway) are supported beginning<br />
with z/<strong>OS</strong> V1R10.<br />
Extended-format sequential data sets are supported in z/OS V1R11. PDSE, basic and large format sequential, BDAM, and PDS data sets are supported on an EAV, but they are not enabled to use the extended addressing space (EAS) of an EAV. When this document references EAS-eligible data sets, it is referring to VSAM and extended-format sequential data sets.
3390 Model A<br />
With EAV volumes, an architecture is implemented that provides a capacity <strong>of</strong> hundreds <strong>of</strong><br />
terabytes for a single volume. However, the first releases are limited to a volume with 223 GB<br />
or 262,668 cylinders.<br />
Note the following points regarding extended address volumes:<br />
► Only 3390 Model A devices can be EAV.<br />
► EAV is supported starting in z/<strong>OS</strong> V1R10 and further releases.<br />
► The size is limited to 223 GB (262,668 cylinders) in z/<strong>OS</strong> V1R10 and V1R11.<br />
Important: The 3390 Model A as a device can be configured to have from 1 to<br />
268,434,453 cylinders on an <strong>IBM</strong> DS8000. It becomes an EAV if it has more than 65,520<br />
cylinders defined. With current z/<strong>OS</strong> releases, this maximum size is not supported.<br />
Using EAV volumes<br />
How an EAV is managed by the system allows it to be a general purpose volume. However,<br />
EAVs work especially well for applications with large files. PAV and HyperPAV technologies<br />
help in both these regards by allowing I/O rates to scale as a volume gets larger.<br />
3.10 Data sets eligible for EAV volumes<br />
In z/OS V1R10, the following VSAM data sets are EAS-eligible:
► All VSAM data types (KSDS, RRDS, ESDS, and linear); this covers DB2, IMS, CICS, zFS, and NFS
► SMS-managed and non-SMS-managed VSAM data sets
► VSAM data sets on an EAV that were inherited from a prior physical migration or copy
With z/OS V1R11, SMS-managed extended-format sequential data sets are also EAS-eligible.
Figure 3-10 Eligible data sets for EAV volumes
Eligible data sets<br />
EAS-eligible data sets are defined to be those that can be allocated in the extended addressing space, which is the area on an EAV located above the first 65,520 cylinders. This area is sometimes referred to as cylinder-managed space. All of the following data sets can be allocated in the extended addressing space of an EAV:
► SMS-managed VSAM (all types)<br />
► Non-SMS VSAM (all types)<br />
► zFS data sets (which are VSAM LS)<br />
– zFS aggregates are supported in an EAV environment.<br />
– zFS aggregates or file systems can reside in track-managed space or<br />
cylinder-managed space (subject to any limitations that DFSMS has).<br />
– zFS still has an architected limit <strong>of</strong> 4 TB for the maximum size <strong>of</strong> a zFS aggregate.<br />
► Database (DB2, IMS) use <strong>of</strong> VSAM<br />
► VSAM data sets inherited from prior physical migrations or copies<br />
► With z/<strong>OS</strong> V1R11: Extended-format sequential data sets that are SMS-managed.<br />
EAS non-eligible data sets<br />
An EAS-ineligible data set is a data set that may exist on an EAV but is not eligible to have<br />
extents (through Create or Extend) in the cylinder-managed space. The exceptions to EAS<br />
eligibility are as follows:<br />
► Catalogs (BCS (basic catalog structure) and VVDS (VSAM volume data set))<br />
► VTOC (continues to be restricted to within the first 64 K-1 tracks)<br />
► VTOC index<br />
► Page data sets<br />
► VSAM data sets with imbed or keyrange attributes<br />
► VSAM data sets with incompatible CA sizes<br />
Note: In a future release, some <strong>of</strong> these data sets may become EAS-eligible. All data set<br />
types, even those listed here, can be allocated in the track-managed space on a device<br />
with cylinder-managed space on an EAV volume. Eligible EAS data sets can be created<br />
and extended anywhere on an EAV. Data sets that are not eligible for EAS processing can<br />
only be created or extended in the track-managed portions <strong>of</strong> the volume.<br />
3.11 EAV volumes and multicylinder units<br />
Figure 3-11 EAV and multicylinder units<br />
Multicylinder unit<br />
A multicylinder unit (MCU) is a fixed unit <strong>of</strong> disk space that is larger than a cylinder. Currently,<br />
on an EAV volume, a multicylinder unit is 21 cylinders and the number <strong>of</strong> the first cylinder in<br />
each multicylinder unit is a multiple <strong>of</strong> 21. Figure 3-11 illustrates the EAV and multicylinder<br />
units.<br />
The cylinder-managed space is space on the volume that is managed only in multicylinder<br />
units. Cylinder-managed space begins at cylinder address 65,520. Each data set occupies an<br />
integral multiple <strong>of</strong> multicylinder units. Space requests targeted for the cylinder-managed<br />
space are rounded up to the next multicylinder unit. The cylinder-managed space only exists<br />
on EAV volumes.<br />
The 21-cylinder value for the MCU is derived from being the smallest unit that can map out<br />
the largest possible EAV volume and stay within the index architecture with a block size <strong>of</strong><br />
8,192 bytes. In addition:<br />
► It is also a value that divides evenly into the 1 GB storage segments <strong>of</strong> an <strong>IBM</strong> DS8000.<br />
► These 1 GB segments are the allocation unit in the <strong>IBM</strong> DS8000 and are equivalent to<br />
1,113 cylinders.<br />
► These segments are allocated in multiples <strong>of</strong> 1,113 cylinders starting at cylinder 65,520.<br />
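The arithmetic behind these bullets can be sketched as follows (a minimal illustration, not product code; the names are my own): the 21-cylinder MCU divides evenly into both the 1,113-cylinder (1 GB) DS8000 segment and the 65,520-cylinder boundary, and cylinder-managed space requests round up to the next MCU.<br />

```python
import math

MCU = 21            # multicylinder unit, in cylinders
SEGMENT = 1113      # 1 GB DS8000 allocation segment, in cylinders
EAS_START = 65520   # first cylinder of cylinder-managed space

def round_to_mcu(cylinders):
    """Round a cylinder-managed space request up to the next MCU."""
    return math.ceil(cylinders / MCU) * MCU

# The MCU divides evenly into both the DS8000 segment and the EAS boundary
assert SEGMENT % MCU == 0     # 1113 = 53 * 21
assert EAS_START % MCU == 0   # 65520 = 3120 * 21

print(round_to_mcu(100))      # a 100-cylinder request becomes 105 cylinders
```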
One <strong>of</strong> the more important EAV design points is that <strong>IBM</strong> maintains its commitment to<br />
customers that the 3390 track format, image size, and tracks per cylinder will remain the<br />
same as for previous 3390 model devices. An application using data sets on an EAV runs<br />
comparably to how it runs on earlier numeric 3390 models (Model 3, Model 9, and so on).<br />
The extended address volume has two managed spaces, as shown in Figure 3-11 on page 73:<br />
► The track-managed space<br />
► The cylinder-managed space<br />
Cylinder-managed space<br />
The cylinder-managed space is the space on the volume that is managed only in<br />
multicylinder units (MCUs). Cylinder-managed space begins at cylinder address 65,520 and<br />
exists only on EAV volumes. Each data set in it occupies an integral multiple <strong>of</strong><br />
multicylinder units: a space request targeted for the cylinder-managed space is rounded up<br />
to the next MCU.<br />
Data sets allocated in cylinder-managed space are described with a new type <strong>of</strong> data set<br />
control block (DSCB) in the VTOC. Tracks allocated in this space will also be addressed<br />
using the new track address. Existing programs that are not changed will not recognize these<br />
new DSCBs and therefore will be protected from seeing how the tracks in cylinder-managed<br />
space are addressed.<br />
Track-managed space<br />
The track-managed space is the space on a volume that is managed in track and cylinder<br />
increments. All volumes today have track-managed space. Track-managed space ends at<br />
cylinder address 65,519, and each data set in it occupies an integral multiple <strong>of</strong> tracks.<br />
The track-managed space allows existing programs and physical migration products to<br />
continue to work. Physical copies can be done from a non-EAV to an EAV and have those<br />
data sets remain accessible.<br />
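To make the boundary between the two managed spaces concrete, here is a small sketch (the function name is illustrative, not a DFSMS interface) that classifies a cylinder address on an EAV:<br />

```python
EAS_START = 65520  # cylinder-managed space begins at this cylinder on an EAV

def managed_space(cylinder):
    """Return which managed space a cylinder address on an EAV falls in."""
    if cylinder < 0:
        raise ValueError("cylinder must be non-negative")
    return "track-managed" if cylinder < EAS_START else "cylinder-managed"

print(managed_space(65519))  # last cylinder of track-managed space
print(managed_space(65520))  # first cylinder of cylinder-managed space
```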
3.12 Dynamic volume expansion (DVE)<br />
Figure 3-12 Using dynamic volume expansion to create EAV volumes<br />
Dynamic volume expansion<br />
The <strong>IBM</strong> <strong>System</strong> Storage DS8000 series supports dynamic volume expansion, which<br />
increases the capacity <strong>of</strong> existing zSeries volumes, while the volume remains connected to a<br />
host system.<br />
This capability simplifies data growth by allowing volume expansion without taking volumes<br />
<strong>of</strong>fline. Using DVE significantly reduces the complexity <strong>of</strong> migrating to larger volumes.<br />
Note: For the dynamic volume expansion function, volumes cannot be in Copy Services<br />
relationships (point-in-time copy, FlashCopy SE, Metro Mirror, Global Mirror, Metro/Global<br />
Mirror, and z/<strong>OS</strong> Global Mirror) during expansion.<br />
There are two methods to dynamically grow a volume:<br />
► Use the command-line interface (DSCLI)<br />
► Use a Web browser GUI<br />
Note: All systems must be at the z/<strong>OS</strong> V1R10 level or above for the DVE feature to be<br />
used when the systems are sharing the Release 4.0 Licensed Internal Microcode updated<br />
DS8000 at an LCU level. Current DS8000 Licensed Internal Microcode is Version<br />
5.4.1.1043.<br />
3.13 Using dynamic volume expansion<br />
Figure 3-13 Using dynamic volume expansion<br />
Using dynamic volume expansion<br />
It is possible to grow an existing volume with the DS8000 with dynamic volume expansion<br />
(DVE). Previously, it was necessary to use migration utilities that require an additional volume<br />
for each volume that is being expanded and require the data to be moved.<br />
A logical volume can be increased in size while the volume remains online to host systems for<br />
the following types <strong>of</strong> volumes:<br />
► 3390 model 3 to 3390 model 9<br />
► 3390 model 9 to EAV volume sizes using z/<strong>OS</strong> V1R10<br />
Dynamic volume expansion can be used to expand volumes beyond 65,520 cylinders<br />
without moving data or causing an application outage.<br />
Dynamic volume expansion is performed by the DS8000 Storage Manager and can be<br />
requested using its Web GUI. 3390 volumes may be increased in size, for example from a<br />
3390 model 3 to a model 9 or a model 9 to a model A (EAV). z/<strong>OS</strong> V1R11 introduces an<br />
interface that can be used to make requests for dynamic volume expansion <strong>of</strong> a 3390 volume<br />
on a DS8000 from the system.<br />
Note: For the dynamic volume expansion function, volumes cannot be in Copy Services<br />
relationships (point-in-time copy, FlashCopy SE, Metro Mirror, Global Mirror, Metro/Global<br />
Mirror, and z/<strong>OS</strong> Global Mirror) during expansion.<br />
3.14 Command-line interface (DSCLI)<br />
Figure 3-14 Using the command-line interface<br />
Command-line interface<br />
The chckdvol command changes the name <strong>of</strong> a count key data (CKD) base volume. It can be<br />
used to expand the number <strong>of</strong> cylinders on an existing volume. A new parameter on the<br />
chckdvol command allows a volume to be increased in size, as follows:<br />
-cap new_capacity (Optional and DS8000 only)<br />
This specifies the quantity <strong>of</strong> CKD cylinders that you want allocated to the specified volume.<br />
3380 volumes cannot be expanded. For 3390 Model A volumes (DS8000 only), the -cap<br />
parameter value can be in the range <strong>of</strong> 1 to 65,520 (increments <strong>of</strong> 1) or 65,667 to 262,668<br />
(increments <strong>of</strong> 1113). For 3390 volumes, the -cap parameter value can be in the range <strong>of</strong> 1 to<br />
65,520 (849 KB to 55.68 GB).<br />
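The -cap ranges stated above can be checked with a small sketch (the function name is illustrative; the ranges and the 1,113-cylinder increment are those documented here):<br />

```python
def valid_cap(cylinders, model_a=False):
    """Check a chckdvol -cap value against the documented ranges."""
    if not model_a:
        # Standard 3390: 1 to 65,520 cylinders
        return 1 <= cylinders <= 65520
    # 3390 Model A (DS8000 only): 1 to 65,520 in increments of 1,
    # or 65,667 to 262,668 in increments of 1,113
    if 1 <= cylinders <= 65520:
        return True
    return 65667 <= cylinders <= 262668 and (cylinders - 65667) % 1113 == 0

assert valid_cap(10017)                    # the 3390 Mod 9 example below
assert valid_cap(262668, model_a=True)     # largest supported EAV
assert not valid_cap(65521)                # beyond a standard 3390
assert not valid_cap(70000, model_a=True)  # not on a 1,113-cylinder step
```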
3390 Model 3 to 3390 Model 9<br />
The following example (also shown in Figure 3-14 on page 77) uses DSCLI to change the size <strong>of</strong> a<br />
3390 Mod 3 device, ID 0860 (actual device address D860), to a 3390 Mod 9:<br />
chckdvol –cap 10017 0860<br />
3.15 Using Web browser GUI<br />
Figure 3-15 Log in to DS8000 to access a device to create an EAV<br />
Access to the DS8000 port<br />
Each DS8000 Fibre Channel card <strong>of</strong>fers four Fibre Channel ports (port speed <strong>of</strong> 2 or 4 Gbps,<br />
depending on the host adapter). The cable connector required to attach to this card is an LC<br />
type. Each 2 Gbps port independently auto-negotiates to either 2 or 1 Gbps, and the 4 Gbps<br />
ports to 4 or 2 Gbps link speed. Each <strong>of</strong> the four ports on one DS8000 adapter can also<br />
independently be either Fibre Channel Protocol (FCP) or FICON, though the ports are initially<br />
defined as switched point-to-point FCP. Selected ports are configured to FICON<br />
automatically based on the definition <strong>of</strong> a FICON host. The personality <strong>of</strong> a port is<br />
changeable through the DS Storage Manager GUI; a port cannot be both FICON and FCP<br />
simultaneously, but it can be changed as required.<br />
A DS8000 shipped with pre-Release 3 code (earlier than Licensed Machine Code 5.3.xx.xx)<br />
can also establish the communication with the DS Storage Manager GUI using a Web<br />
browser on any supported network-connected workstation, by simply entering into the Web<br />
browser the IP address <strong>of</strong> the HMC and the port that the DS Storage Management server is<br />
listening to:<br />
http://lpar_IPaddr:8451/DS8000<br />
After accessing the GUI through a LOGON, follow these steps to access the device for which<br />
you want to increase the number <strong>of</strong> cylinders.<br />
Log in at http://lpar_IPaddr:8451/DS8000/Console, then:<br />
► Select <strong>Volume</strong>s - zSeries.<br />
► Select the storage image.<br />
► Select the LCU number (xx) where the device is, and the page number <strong>of</strong> the device.<br />
3.16 Select volume to increase capacity<br />
Figure 3-16 Select volume to increase capacity<br />
<strong>Volume</strong> to increase capacity<br />
A 3390 Model A can be defined as a new volume, or it is created when an existing volume is<br />
expanded beyond 65,520 cylinders. The number <strong>of</strong> cylinders when defining an EAV volume<br />
can be as low as 1 cylinder and up to 262,668 cylinders. In addition, expanding a 3390 type<br />
<strong>of</strong> volume that is below 65,520 cylinders to be larger than 65,520 cylinders converts<br />
the volume from a 3390 standard volume type to a 3390 Model A, and thus an extended<br />
address volume.<br />
To increase the number <strong>of</strong> cylinders on an existing volume, you can use the Web browser<br />
GUI by selecting the volume (shown as volser MLDF64 in Figure 3-16) and then using the<br />
Select Action pull-down to select Increase Capacity.<br />
3.17 Increase capacity <strong>of</strong> volumes<br />
Figure 3-17 Screen showing current maximum size available<br />
Screen showing maximum available cylinders<br />
Figure 3-17 is displayed after you select Increase Capacity. A warning message is issued<br />
regarding a possible volume type change to 3390 custom. You can then select Close<br />
Message. Notice that the panel displays the maximum number <strong>of</strong> cylinders that can be<br />
specified.<br />
After you close the message, the panel in Figure 3-18 on page 81 is displayed where you can<br />
specify the number <strong>of</strong> cylinders to increase the volume size. When you specify a new capacity<br />
to be applied to the selected volume, specify a value that is between the minimum and<br />
maximum size values that are displayed. Maximum values cannot exceed the amount <strong>of</strong> total<br />
storage that is available.<br />
Note: Only volumes <strong>of</strong> type 3390 model 3, 3390 model 9, and 3390 custom, can be<br />
expanded. The total volume cannot exceed the available capacity <strong>of</strong> the storage image.<br />
Capacity cannot be increased for volumes that are associated with Copy Services<br />
functions.<br />
3.18 Select capacity increase for volume<br />
Figure 3-18 Specifying the number <strong>of</strong> cylinders to increase the volume size<br />
Specify cylinders to increase volume size<br />
After you close the message, shown in the panel in Figure 3-17 on page 80, Figure 3-18 is<br />
displayed where you can specify the number <strong>of</strong> cylinders to increase the volume size; 75,000<br />
is shown in Figure 3-18.<br />
When you specify a new capacity to be applied to the selected volumes, specify a value that<br />
is between the minimum and maximum size values that are displayed. Maximum values<br />
cannot exceed the amount <strong>of</strong> total storage that is available.<br />
3.19 Final capacity increase for volume<br />
Figure 3-19 Capacity increase for volume<br />
Final volume cylinder increase<br />
Since 75,000 cylinders was specified on the panel shown in Figure 3-18 on page 81, the<br />
message displayed in Figure 3-19 is issued when you click OK.<br />
Click Continue in Figure 3-19, and a requested size <strong>of</strong> 75,684 cylinders is processed for<br />
the expansion <strong>of</strong> the volume.<br />
Note: Remember that the number <strong>of</strong> cylinders must follow the rules stated earlier. The<br />
MCU value is 21 cylinders because it is the smallest unit that can map out the largest<br />
possible EAV and stay within the index architecture (with a block size <strong>of</strong> 8192 bytes). It is<br />
also a value that divides evenly into the 1 GB storage segments <strong>of</strong> a DS8000. These 1 GB<br />
segments are the allocation unit in the DS8000 and are equivalent to 1113 cylinders. Data<br />
sets allocated in cylinder-managed space may have their requested space quantity<br />
rounded up to the next MCU.<br />
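The 75,684 figure is consistent with the DS8000 allocating whole 1 GB (1,113-cylinder) segments; the arithmetic below is my own sketch of that rounding, not output from the panel:<br />

```python
import math

SEGMENT = 1113  # cylinders per 1 GB DS8000 storage segment

def expanded_size(requested):
    """Round a requested volume size up to whole DS8000 segments."""
    return math.ceil(requested / SEGMENT) * SEGMENT

print(expanded_size(75000))  # the 75,000-cylinder request becomes 75,684
```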
3.20 VTOC index with EAV volumes<br />
Figure 3-20 Build a new VTOC after volume expansion<br />
New VTOC for volume expansion<br />
When a volume is dynamically expanded, the VTOC and VTOC index have to be reformatted to<br />
map the additional space. In z/<strong>OS</strong> V1R10 and earlier, this has to be done manually by the<br />
system programmer or storage administrator by submitting an ICKDSF REFVTOC job.<br />
VTOC index<br />
The VTOC index enhances the performance <strong>of</strong> VTOC access. The VTOC index is a<br />
physical-sequential data set on the same volume as the related VTOC. It consists <strong>of</strong> an index<br />
<strong>of</strong> data set names in format-1 DSCBs contained in the VTOC and volume free space<br />
information.<br />
An SMS-managed volume requires an indexed VTOC; otherwise, the VTOC index is highly<br />
recommended. For additional information about SMS-managed volumes, see z/<strong>OS</strong> DFSMS<br />
Implementing <strong>System</strong>-Managed Storage, SC26-7407.<br />
Note: You can use the ICKDSF REFORMAT REFVTOC command to rebuild a VTOC index<br />
to reclaim any no longer needed index space and to possibly improve access times.<br />
VTOC index changes<br />
The index block size increased from 2,048 bytes to 8,192 bytes for devices with<br />
cylinder-managed space, as follows:<br />
► Contents <strong>of</strong> index map records increased proportionally (more bits per record, <strong>of</strong>fsets<br />
changed).<br />
► Extended header in VIXM for free space statistics.<br />
► VPSM, track-managed space. Small (tracks and cylinders) and large (cylinders) unit map.<br />
Same but more bits.<br />
► VPSM, cylinder-managed space. Each bit in large unit map represents 21 cylinders (a<br />
MCU).<br />
► Programs that access index maps must use existing self-describing fields in the map<br />
records.<br />
► Block size is in format-1 DSCB for the VTOC index and returned by DEVTYPE macro.<br />
► Default index size computed by ICKDSF; might not be 15 tracks.<br />
This new block size is recorded in the format-1 DSCB for the index and is necessary to allow<br />
for scaling to largest sized volumes. The VTOC index space map (VIXM) has a new bit,<br />
VIMXHDRV, to indicate that new fields exist in the new VIXM extension, as follows:<br />
► The VIXM contains a new field for the RBA <strong>of</strong> the new large unit map and new space<br />
statistics.<br />
► The VIXM contains a new field for the minimum allocation unit in cylinders for the<br />
cylinder-managed space. Each extent in the cylinder-managed space must be a multiple<br />
<strong>of</strong> this on an EAV.<br />
Note: If the VTOC index size is omitted when formatting a volume with ICKDSF and the<br />
index is not preallocated, the default before this release was 15 tracks. In EAV Release 1<br />
(that is, starting with z/<strong>OS</strong> V1R10), the default size for EAV and non-EAV volumes is<br />
calculated and can be different from earlier releases.<br />
Figure: VTOC and VTOC index structure. The VTOC (format-4, format-5, and other DSCBs, plus<br />
the format-1 DSCB for the index data set SYS1.VTOCIX.nnn) points to the VTOC index (VIXM,<br />
VPSM, VMDS, and VIERs). The VTOC must reside in the "old" (track-managed) space on the<br />
volume.<br />
3.21 Device Support Facilities (ICKDSF)<br />
ICKDSF initializes a VTOC index into 2,048-byte physical blocks (8,192-byte blocks on EAV<br />
volumes) named VTOC index records (VIRs). APAR PK56092 for ICKDSF R17 provides support<br />
for the EAV function in z/<strong>OS</strong> V1R10 and changes the default size <strong>of</strong> a VTOC index<br />
when the INDEX parameter <strong>of</strong> the ICKDSF INIT command is defaulted, regardless <strong>of</strong><br />
whether you exploit the EAV function.<br />
Figure 3-21 ICKDSF facility to build the VTOC<br />
ICKDSF utility<br />
The ICKDSF utility performs functions needed for the installation, use, and maintenance <strong>of</strong><br />
<strong>IBM</strong> direct-access storage devices (DASD). You can also use it to perform service functions,<br />
error detection, and media maintenance.<br />
The ICKDSF utility is used primarily to initialize disk volumes. At a minimum, this process<br />
involves creating the disk label record and the volume table <strong>of</strong> contents (VTOC). ICKDSF can<br />
also scan a volume to ensure that it is usable, reformat all the tracks, write home<br />
addresses, and perform other functions.<br />
You can use the ICKDSF utility through two methods:<br />
► Execute ICKDSF as a job or job step using job control language (JCL). ICKDSF<br />
commands are then entered as part <strong>of</strong> the SYSIN data for z/<strong>OS</strong>.<br />
► Use Interactive Storage Management Facility (ISMF) panels to schedule ICKDSF jobs.<br />
APAR PK56092<br />
This APAR provides extended address volume (EAV) support up to 262,668 cylinders. If you<br />
define a volume greater than 262,668 cylinders, you receive the following message when<br />
running any ICKDSF command:<br />
ICK30731I X'xxxxxxx' CYLINDER SIZE EXCEEDS MAXIMUM SIZE SUPPORTED<br />
Here, xxxxxxx contains the hexadecimal size <strong>of</strong> the volume you defined.<br />
The documentation is in Device Support Facilities (ICKDSF) User's Guide and Reference<br />
Release 17, GC35-0033.<br />
ICKDSF to build a VTOC<br />
After the expansion <strong>of</strong> the volume operation completes, you need to use the ICKDSF<br />
REFORMAT REFVTOC command, shown in Figure 3-22, to adjust the volume VTOC to<br />
reflect the additional cylinders. Note that the capacity <strong>of</strong> a volume cannot be decreased.<br />
//INIT EXEC PGM=ICKDSF,PARM='NOREPLYU'<br />
//IN1 DD UNIT=3390,VOL=SER=ITSO02,DISP=SHR<br />
//SYSPRINT DD SYSOUT=*<br />
//SYSIN DD *<br />
REFORMAT DDNAME(IN1) VERIFY(ITSO02) REFVTOC<br />
/*<br />
Figure 3-22 Build a new VTOC<br />
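When many expanded volumes need their VTOCs refreshed, the job in Figure 3-22 can be generated mechanically. The sketch below (my own helper; the DD names and NOREPLYU parameter simply follow the figure) builds one such job per volume serial:<br />

```python
def refvtoc_job(volser, unit="3390"):
    """Generate an ICKDSF REFORMAT REFVTOC job for one volume,
    using the same layout as Figure 3-22."""
    return "\n".join([
        "//INIT     EXEC PGM=ICKDSF,PARM='NOREPLYU'",
        f"//IN1      DD UNIT={unit},VOL=SER={volser},DISP=SHR",
        "//SYSPRINT DD SYSOUT=*",
        "//SYSIN    DD *",
        f"  REFORMAT DDNAME(IN1) VERIFY({volser}) REFVTOC",
        "/*",
    ])

print(refvtoc_job("ITSO02"))
```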
3.22 Update VTOC after volume expansion<br />
Figure 3-23 Create a new VTOC after volume expansion<br />
New VTOC after expansion<br />
To prepare a volume for activity by initializing it, use the Device Support Facilities (ICKDSF)<br />
utility to build the VTOC. You can create a VTOC index when initializing a volume by using<br />
the ICKDSF INIT command and specifying the INDEX keyword.<br />
To convert a non-indexed VTOC to an indexed VTOC, use the BUILDIX command with the<br />
IXVTOC keyword. The reverse operation can be performed by using the BUILDIX command<br />
and specifying the <strong>OS</strong>VTOC keyword.<br />
Parameters for index rebuild<br />
To refresh a volume VTOC and index in its current format, use the ICKDSF command<br />
REFORMAT with the REFVTOC keyword. To optionally extend the VTOC and index, use the<br />
ICKDSF command REFORMAT with the EXTVTOC and EXTINDEX keywords.<br />
Run REFORMAT NEWVTOC(cc,hh,n|ANY,n) to expand the VTOC. The new VTOC will be<br />
allocated on the beginning location cc,hh with total size <strong>of</strong> n tracks. Overlay between the new<br />
and old VTOC is not allowed. If cc,hh is omitted, ICKDSF will locate the new VTOC at the first<br />
eligible location on the volume other than at the location <strong>of</strong> the old VTOC where free space<br />
with n tracks is found. n must be greater than the old VTOC size. The volume must be <strong>of</strong>fline<br />
to use the NEWVTOC parameter.<br />
Run REFORMAT EXTVTOC(n) where n is the total size <strong>of</strong> the new VTOC in tracks. There<br />
must be free space available to allow for contiguous expansion <strong>of</strong> the VTOC. If there is no<br />
free space available following the current location <strong>of</strong> the VTOC, an error message is issued. n<br />
must be greater than the old VTOC size.<br />
EXTINDEX(n) and XINDEX(n)<br />
If an index exists when you expand the VTOC, it must be deleted and rebuilt to reflect the<br />
VTOC changes. This parameter is used to specify the total track size <strong>of</strong> the index to be<br />
rebuilt. For n, substitute the decimal or hexadecimal digits (for example, X'1E') to specify the<br />
total number <strong>of</strong> tracks for the new index after expansion. If the value for n is less than the<br />
current value, the current value is used.<br />
Restriction: Valid only for MVS online volumes. Valid only when EXTVTOC is specified.<br />
3.23 Automatic VTOC index rebuild - z/<strong>OS</strong> V1R11<br />
Figure 3-24 DEVSUPxx parmlib member parameters<br />
VTOC rebuild with z/<strong>OS</strong> V1R11<br />
With z/<strong>OS</strong> V1R11, when a volume is increased in size, this is detected by the system, which<br />
then does an automatic VTOC and index rebuild. The system is informed by state change<br />
interrupts (SCIs), which are controlled with new DEVSUPxx parmlib member options as<br />
follows:<br />
► REFVTOC=ENABLE<br />
Enables the automatic REFVTOC function <strong>of</strong> the device manager. With the REFVTOC<br />
function enabled, when a volume expansion is detected, the device manager causes the<br />
volume VTOC to be rebuilt. This allows the newly added space on the volume to be used<br />
by the system.<br />
► REFVTOC=DISABLE<br />
This is the default value. It disables the automatic REFVTOC function <strong>of</strong> the device<br />
manager. With the REFVTOC function disabled, when a volume expansion is detected,<br />
the following message is issued:<br />
IEA019I dev, volser, VOLUME CAPACITY CHANGE,OLD=xxxxxxxx,NEW=yyyyyyyy.<br />
The VTOC is not rebuilt. An ICKDSF batch job must be submitted to rebuild the VTOC<br />
before the newly added space on the volume can be used, as is the case for z/<strong>OS</strong> V1R10.<br />
Invoke ICKDSF with REFORMAT/REFVTOC to update the VTOC and index to reflect the<br />
real device capacity.<br />
Note: The refresh <strong>of</strong> the index occurs under protection <strong>of</strong> an exclusive SYSTEMS ENQ<br />
macro for major name SYSZDMO, minor name DMO.REFVTOC.VOLSER.volser.<br />
90 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
3.24 Automatic VTOC rebuild with DEVMAN<br />
Use the F DEVMAN command to communicate with<br />
the device manager address space to rebuild the<br />
VTOC<br />
F DEVMAN,ENABLE(REFVTOC)<br />
F DEVMAN,{DUMP}<br />
{REPORT}<br />
{RESTART}<br />
{END(taskid)}<br />
{ENABLE(feature) }<br />
{DISABLE(feature)}<br />
{?|HELP}<br />
Figure 3-25 Rebuild the VTOC for EAV volumes with DEVMAN<br />
New with z/<strong>OS</strong> V1R11<br />
New functions for DEVMAN with z/<strong>OS</strong> V1R11<br />
Following are brief descriptions of the new parameters for the F DEVMAN command in z/OS<br />
V1R11.<br />
END(taskid) Terminates the subtask identified by taskid. The F<br />
DEVMAN,REPORT command displays the taskid for a subtask.<br />
ENABLE(feature name) Enables an optional feature. The supported features are named<br />
as follows:<br />
► REFVTOC - Use ICKDSF to automatically<br />
REFORMAT/REFVTOC a volume when it expands.<br />
► DATRACE - Capture dynamic allocation diagnostic<br />
messages.<br />
DISABLE(feature name) Disables one <strong>of</strong> the following optional features:<br />
► REFVTOC<br />
► DATRACE<br />
?|HELP Displays the F DEVMAN command syntax.<br />
Updating the VTOC following volume expansion<br />
Specifying ENABLE(REFVTOC) on the F DEVMAN command causes ICKDSF to be invoked<br />
automatically to REFORMAT/REFVTOC a volume when it has been expanded. See z/OS<br />
MVS System Commands, SA22-7627.<br />
Specify REFVTOC={ENABLE | DISABLE} in the DEVSUPxx parmlib member to indicate<br />
whether you want to enable or disable the use of ICKDSF to automatically<br />
REFORMAT/REFVTOC a volume when it is expanded. See z/OS MVS Initialization and<br />
Tuning Reference, SA22-7592.<br />
You can also use the F DEVMAN,ENABLE(REFVTOC) command after an IPL to enable<br />
automatic VTOC and index reformatting. However, update the DEVSUPxx parmlib<br />
member to ensure that the function remains enabled across subsequent IPLs.<br />
Using DEVMAN<br />
The F DEVMAN,REPORT display includes the following fields (Figure 3-26 shows the<br />
related F DEVMAN,HELP output):<br />
FMID Displays the FMID level <strong>of</strong> DEVMAN.<br />
APARS Displays any DEVMAN APARs that are installed (or the word NONE).<br />
OPTIONS Displays the currently enabled options (in the example, REFVTOC is<br />
enabled).<br />
SUBTASKS Lists the status <strong>of</strong> any subtasks that are currently executing.<br />
F DEVMAN,HELP<br />
DMO0060I DEVICE MANAGER COMMANDS:<br />
**** DEVMAN *********************************************************<br />
* ?|HELP - display devman modify command parameters<br />
* REPORT - display devman options and subtasks<br />
* RESTART - quiesce and restart devman in a new address space<br />
* DUMP - obtain a dump <strong>of</strong> the devman address space<br />
* END(taskid) - terminate subtask identified by taskid<br />
* ENABLE(feature) - enable an optional feature<br />
* DISABLE(feature)- disable an optional feature<br />
*--------------------------------------------------------------------<br />
* Optional features:<br />
* REFVTOC - automatic VTOC rebuild<br />
* DATRACE - dynamic allocation diagnostic trace<br />
**** DEVMAN *********************************************************<br />
Figure 3-26 Displaying DEVMAN modify parameters<br />
3.25 EAV and IGDSMSxx parmlib member<br />
USEEAV(YES|NO)<br />
Specifies, at the system level, whether SMS can select<br />
an extended address volume during volume selection<br />
processing<br />
Check applies to new allocations and when extending<br />
data sets to a new volume<br />
YES - EAV volumes can be used to allocate new data<br />
sets or to extend existing data sets to new volumes<br />
NO - Default - SMS does not select any EAV during<br />
volume selection<br />
SETSMS USEEAV(YES|NO)<br />
Figure 3-27 Parmlib members for using EAV volumes<br />
Extended address volume selection considerations<br />
The two new SMS IGDSMSxx parmlib member parameters need to be set before you can<br />
use extended address volumes with SMS for volume selection, as follows:<br />
► USEEAV<br />
► BreakPointValue<br />
IGDSMSxx parmlib member parameters<br />
You introduce an EAV to the system by adding it to an SMS storage group or to a non-SMS<br />
managed storage pool (specific or esoteric volume requests). Volumes can be in their own<br />
dedicated storage group or pool, or mixed with other volumes. Adding and enabling them in<br />
these pools allows the system to allocate VSAM data sets in cylinder-managed space if the<br />
USEEAV parmlib setting is set to YES. When USEEAV is set to NO, no new data sets (not<br />
even non-VSAM ones) can be created on any EAV in the system.<br />
IGDSMSxx parmlib member changes to be able to use EAV volumes<br />
USEEAV(YES|NO) specifies, at the system level, whether SMS can select an extended<br />
address volume during volume selection processing. This check applies to new allocations<br />
and when extending data sets to a new volume.<br />
USEEAV(YES|NO)<br />
YES This means that extended address volumes can be used to allocate new data sets<br />
or to extend existing data sets to new volumes.<br />
NO (Default) This means that SMS does not select any extended address volumes<br />
during volume selection. Note that data sets might still exist on extended address<br />
volumes in either the track-managed or cylinder-managed space <strong>of</strong> the volume.<br />
Note: You can use the SETSMS command to change the setting <strong>of</strong> USEEAV without<br />
having to re-IPL. This modified setting is in effect until the next IPL, when it reverts to the<br />
value specified in the IGDSMSxx member <strong>of</strong> PARMLIB.<br />
To make the setting change permanent, you must alter the value in SYS1.PARMLIB. The<br />
syntax <strong>of</strong> the operator command is:<br />
SETSMS USEEAV(YES|NO)<br />
SMS requests will not use EAV volumes if the USEEAV setting in the IGDSMSxx parmlib<br />
member is set to NO.<br />
Specific allocation requests fail. For non-specific allocation requests (for example,<br />
UNIT=SYSDA), EAV volumes are not selected; messages indicating that no space is<br />
available are returned when no non-EAV volumes are available.<br />
For non-EAS eligible data sets, all volumes (EAV and non-EAV) are equally preferred (that is,<br />
there is no preference). This is unchanged from previous releases, except that extended<br />
address volumes are rejected when the USEEAV parmlib value is set to NO.<br />
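As a rough sketch of this behavior, USEEAV(NO) simply removes EAVs from the candidate list during volume selection. The helper below is a hypothetical illustration, not an actual SMS interface:

```python
def candidate_volumes(volumes, useeav):
    """Illustrative model of USEEAV during SMS volume selection.
    volumes is a list of (volser, is_eav) pairs; with USEEAV(NO),
    extended address volumes are rejected for new allocations and
    for extends to a new volume."""
    if useeav:
        return [volser for volser, _ in volumes]
    return [volser for volser, is_eav in volumes if not is_eav]
```

If the filtered list is empty, the allocation fails with a no-space-available indication, as described above.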
3.26 IGDSMSxx member BreakPointValue<br />
BreakPointValue (0- 65520) in cylinders<br />
Value used by SMS in making volume selection<br />
decisions and subsequently by DADSM<br />
If the allocation request is less than the BreakPointValue,<br />
the system prefers to satisfy the request from free space<br />
available from the track-managed space<br />
If the allocation request is equal to or higher than the<br />
BreakPointValue, the system prefers to satisfy the<br />
request from free space available from the<br />
cylinder-managed space<br />
Figure 3-28 BreakPointValue parameter<br />
SETSMS BreakPointValue(0-65520)<br />
If the preferred area cannot satisfy the request, both<br />
areas become eligible to satisfy the requested space<br />
Using the BreakPointValue<br />
The system decides which managed space to use based on the BreakPointValue derived for<br />
the target volume of the allocation. EAV volumes are preferred over non-EAV volumes when<br />
the requested space is greater than or equal to the BreakPointValue (there is no EAV volume<br />
preference when the requested space is less than the BreakPointValue).<br />
BreakPointValue (0-65520)<br />
BreakPointValue (0-65520) is used by SMS in making volume selection decisions and<br />
subsequently by DADSM. If the allocation request is less than the BreakPointValue, the<br />
system prefers to satisfy the request from free space available from the track-managed<br />
space.<br />
If the allocation request is equal to or higher than the BreakPointValue, the system prefers to<br />
satisfy the request from free space available from the cylinder-managed space. If the<br />
preferred area cannot satisfy the request, both areas become eligible to satisfy the requested<br />
space amount.<br />
Note: The BreakPointValue is only used to direct placement on an extended address<br />
volume.<br />
Generally, SMS prefers extended address volumes for allocation requests, both VSAM and<br />
non-VSAM, that are equal to or larger than the BreakPointValue. For allocation requests that<br />
are smaller than the BreakPointValue, SMS does not have a preference.<br />
The default BreakPointValue is 10 cylinders.<br />
Note: You can use the SETSMS command to change the setting <strong>of</strong> BreakPointValue<br />
without having to re-IPL. This modified setting is in effect until the next IPL when it reverts<br />
to the value specified in the IGDSMSxx member <strong>of</strong> PARMLIB. To make the setting change<br />
permanent, you must alter the value in SYS1.PARMLIB. The syntax <strong>of</strong> the operator<br />
command is:<br />
SETSMS BreakPointValue(0-65520)<br />
<strong>System</strong> use <strong>of</strong> the BreakPointValue<br />
For an extended address volume, the system and storage group use <strong>of</strong> the BreakPointValue<br />
(BPV) helps direct disk space requests to cylinder-managed or track-managed space. The<br />
breakpoint value is expressed in cylinders and is used as follows:<br />
► When the size <strong>of</strong> a disk space request is the breakpoint value or more, the system prefers<br />
to use the cylinder-managed space for that extent. This rule applies to each request for<br />
primary or secondary space for data sets that are eligible for the cylinder-managed space.<br />
► If cylinder-managed space is insufficient, the system uses the track-managed space or<br />
uses both types <strong>of</strong> spaces.<br />
► When the size <strong>of</strong> a disk space request is less than the breakpoint value, the system<br />
prefers to use the track-managed space.<br />
► If space is insufficient, the system uses the cylinder-managed space or uses both types <strong>of</strong><br />
space.<br />
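The rules above reduce to a simple per-request threshold test. The following hypothetical helper sketches it; the real decision is made by SMS and DADSM for each primary or secondary extent:

```python
def preferred_space(request_cyls, bpv=10):
    """Illustrative BreakPointValue rule (default BPV is 10 cylinders).
    Returns the space type preferred for a disk space request; if the
    preferred area cannot satisfy it, both areas become eligible."""
    if request_cyls >= bpv:
        return "cylinder-managed"
    return "track-managed"
```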
3.27 New EATTR attribute in z/<strong>OS</strong> V1R11<br />
For all EAS eligible data sets - new data set attribute<br />
EATTR - allow a user to control data set allocation to<br />
have extended attribute DSCBs (format 8 and 9)<br />
Use the EATTR parameter to indicate whether the<br />
data set can support extended attributes<br />
To create such data sets, you can include extended<br />
address volumes (EAVs) in specific storage groups,<br />
specify an EAV on the request, or direct the allocation<br />
to an esoteric containing EAV devices<br />
By definition, a data set with extended attributes can<br />
reside in the extended address space (EAS) on an<br />
extended address volume (EAV)<br />
Figure 3-29 EATTR attribute with z/<strong>OS</strong> V1R11<br />
EATTR attribute<br />
This z/OS V1R11 support for extended format sequential data sets includes the EATTR<br />
attribute, which has been added for all data set types to allow a user to control whether a data<br />
set can have extended attribute DSCBs and thus control whether it can be allocated in the<br />
EAS.<br />
EAS-eligible data sets are defined to be those that can be allocated in the extended<br />
addressing space and have extended attributes. This is sometimes referred to as<br />
cylinder-managed space.<br />
DFSMShsm checks the data set level attribute EATTR when performing non-SMS volume<br />
selection. The EATTR data set level attribute specifies whether a data set can have extended<br />
attributes (Format 8 and 9 DSCBs) and optionally reside in EAS on an extended address<br />
volume (EAV). Valid values for EATTR are NO and OPT.<br />
For more information about the EATTR attribute, see z/<strong>OS</strong> DFSMS Access Method Services<br />
for Catalogs, SC26-7394.<br />
Using the EATTR attribute<br />
Use the EATTR parameter to indicate whether the data set can support extended attributes<br />
(format-8 and format-9 DSCBs). To create such data sets, you can include extended address<br />
volumes (EAVs) in specific storage groups, or specify an EAV on the request, or direct the<br />
allocation to an esoteric containing EAV devices. By definition, a data set with extended<br />
attributes can reside in the extended address space (EAS) on an extended address volume<br />
(EAV). The EATTR parameter can be used as follows:<br />
► It can be specified for non-VSAM data sets as well as for VSAM data sets.<br />
► The EATTR value has no effect for DISP=OLD processing, even for programs that might<br />
open a data set for OUTPUT, INOUT, or OUTIN processing.<br />
► The value on the EATTR parameter is used for requests when the data set is newly<br />
created.<br />
3.28 EATTR parameter<br />
The EATTR parameter can be used as follows:<br />
It can be specified for non-VSAM data sets as well as for<br />
VSAM data sets<br />
The EATTR value has no effect for DISP=OLD processing,<br />
even for programs that might open a data set for OUTPUT,<br />
INOUT, or OUTIN processing<br />
The value <strong>of</strong> the EATTR parameter is used for requests<br />
when the data set is newly created<br />
EATTR = OPT - Extended attributes are optional. Data<br />
sets can have extended attributes and reside in EAS<br />
(Default for VSAM data sets)<br />
EATTR = NO - No extended attributes. The data set<br />
cannot have extended attributes (format 8 and 9 DSCBs)<br />
or reside in EAS (Default for non-VSAM data sets)<br />
Figure 3-30 Using the EATTR parameter<br />
Subparameter definitions in JCL<br />
Use the EATTR JCL keyword, EATTR=OPT, to specify that the data set can have extended<br />
attribute DSCBs and can optionally reside in EAS.<br />
EATTR = OPT Extended attributes are optional. The data set can have extended attributes<br />
and reside in EAS. This is the default value for VSAM data sets if<br />
EATTR(OPT) is not specified.<br />
EATTR = NO No extended attributes. The data set cannot have extended attributes<br />
(format-8 and format-9 DSCBs) or reside in EAS. This is the default value<br />
for non-VSAM data sets; omitting EATTR is then equivalent to specifying<br />
EATTR=NO. The parameter is applicable to all data set types.<br />
Note: The EATTR specification is recorded in the format-1 or format-8 DSCBs for all data<br />
set types and volume types and is recorded in the VVDS for VSAM cluster names. EATTR<br />
is listed by IEHLIST, ISPF, ISMF, LISTCAT, and the catalog search interface (CSI).<br />
Other methods for EATTR specification<br />
Different BCP components have adapted to use the EATTR attribute introduced with z/<strong>OS</strong><br />
V1R11. This attribute is specifiable using any <strong>of</strong> the following methods:<br />
► AMS DEFINE CLUSTER and ALLOCATE<br />
On the ALLOCATE or DEFINE CLUSTER commands, allocate new data sets with<br />
parameter EATTR(OPT) to specify that the data set can have extended attribute DSCBs<br />
(format-8 and format-9) and can optionally reside in EAS.<br />
► ISMF Data Class Define panel<br />
Use the EATTR attribute on the ISMF Data Class Define panel to define a data set that<br />
can have extended attribute DSCBs and optionally reside in EAS.<br />
► ISPF<br />
ISPF supports displaying and setting the EATTR attribute for data sets that can reside on<br />
extended address volumes (EAVs). You can specify the data set level attribute EATTR on<br />
the allocation panel, Option 3.2, as shown in Figure 3-31.<br />
Figure 3-31 ISPF Option 3.2 allocation using EATTR<br />
3.29 EATTR JCL DD statement example<br />
If an EATTR value is not specified for a data set, the<br />
following default data set EATTR values are used:<br />
The default behavior for VSAM data sets is OPT<br />
The default behavior for non-VSAM data sets is NO<br />
A DD statement defines a new data set<br />
Example allocates 10,000 cylinders to the data set<br />
When the CONTIG subparameter is coded, the system<br />
allocates 10,000 contiguous cylinders on the volume<br />
//DD2 DD DSNAME=XYZ12,DISP=(,KEEP),UNIT=SYSALLDA,<br />
// VOLUME=SER=25143,SPACE=(CYL,(10000,100),,CONTIG),<br />
// EATTR=OPT<br />
Figure 3-32 JCL example using the EATTR parameter<br />
Using the EATTR parameter<br />
OPT is the default behavior for VSAM data sets if EATTR(OPT) is not specified. For<br />
non-VSAM data sets, the default is that the data set cannot have extended attribute DSCBs<br />
or reside in EAS, which is equivalent to specifying EATTR(NO) on the command.<br />
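These defaulting rules can be summarized in a short sketch. The helper functions are hypothetical illustrations of the documented behavior, not an actual API:

```python
def effective_eattr(eattr, is_vsam):
    """Illustrative EATTR defaulting: OPT is the default for VSAM
    data sets, NO for non-VSAM data sets."""
    if eattr is not None:
        return eattr                  # explicit EATTR=OPT or EATTR=NO
    return "OPT" if is_vsam else "NO"

def eas_eligible(eattr, is_vsam):
    """A data set can have extended attribute DSCBs (format 8 and 9)
    and reside in EAS only when its effective EATTR is OPT."""
    return effective_eattr(eattr, is_vsam) == "OPT"
```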
EATTR JCL example<br />
In Figure 3-32, the DD statement defines a new data set. The system allocates a primary<br />
extent of 10,000 cylinders to the data set, with secondary extents of 100 cylinders.<br />
Because the CONTIG subparameter is coded, the system allocates the 10,000 primary<br />
cylinders contiguously on the volume. EATTR=OPT indicates that the data set can be<br />
created with extended attributes. With this option, the data set can reside in the extended<br />
address space (EAS) of the volume.<br />
Note: The EATTR value has no effect for DISP=OLD processing, even for programs that<br />
might open a data set for OUTPUT, INOUT, or OUTIN processing. The value on the EATTR<br />
parameter is used for requests when the data set is newly created.<br />
3.30 Migration assistance tracker<br />
The EAV migration assistance tracker can help with:<br />
Finding programs that you might need to change if you<br />
want to support extended address volumes (EAVs)<br />
Identify interfaces that access the VTOC - they need to<br />
have EADSCB=OK specified for the following functions:<br />
OBTAIN, CVAFDIR, CVAFDSM, CVAFVSM, CVAFSEQ,<br />
CVAFFILT, OPEN to VTOC, OPEN EXCP<br />
Identify programs using new services as info messages<br />
Identify possible improper use of returned information,<br />
such as parsing 28-bit cylinder numbers in output as<br />
16-bit cylinder numbers (warning messages), for the<br />
following commands and functions:<br />
IEHLIST LISTVTOC, IDCAMS LISTCAT, IDCAMS<br />
LISTDATA PINNED, LSPACE, DEVTYPE, IDCAMS<br />
DCOLLECT<br />
Figure 3-33 Migration assistant tracker functions<br />
Migration tracker program<br />
The EAV migration assistance tracker can help you find programs that you might need to<br />
change if you want to support extended address volumes (EAVs). The EAV migration<br />
assistance tracker is an extension <strong>of</strong> the console ID tracking facility. Programs identified in<br />
this phase <strong>of</strong> migration assistance tracking will continue to fail if the system service is issued<br />
for an EAV and you do not specify the EADSCB=OK keyword for them.<br />
DFSMS provides an EAV migration assistance tracker program. The tracking <strong>of</strong> EAV<br />
migration assistance instances uses the Console ID Tracking facility provided in z/<strong>OS</strong> V1R6.<br />
The EAV migration assistance tracker helps you to do the following:<br />
► Identify select systems services by job and program name, where the invoking programs<br />
might require analysis for changes to use new services. The program calls are identified<br />
as informational instances for possible migration actions. They are not considered errors,<br />
because the services return valid information.<br />
► Identify possible instances <strong>of</strong> improper use <strong>of</strong> returned information in programs, such as<br />
parsing 28-bit cylinder numbers in output as 16-bit cylinder numbers. These instances are<br />
identified as warnings.<br />
► Identify instances of programs that will either fail or run with an informational message if<br />
they run on an EAV. These are identified as programs in error. The migration assistance<br />
tracker flags programs that use the listed functions when the target volume of the<br />
operation is non-EAV and the function invoked did not specify the EADSCB=OK keyword.<br />
Note: Being listed in the tracker report does not imply that there is a problem. It is simply a<br />
way to help you determine what to examine to see whether it needs to be changed.<br />
Error detection by the tracker<br />
Each category <strong>of</strong> errors is explained here:<br />
► Identify interfaces with access to the VTOC that are to be upgraded to have EADSCB=OK<br />
specified for the following functions:<br />
– OBTAIN<br />
– CVAFDIR<br />
– CVAFDSM<br />
– CVAFVSM<br />
– CVAFSEQ<br />
– CVAFFILT<br />
– OPEN to VTOC<br />
– OPEN EXCP<br />
► Identify programs that can use new services; these are flagged as informational messages.<br />
► Identify the possible improper use <strong>of</strong> returned information, such as parsing 28-bit cylinder<br />
numbers in output as 16-bit cylinder numbers as warning messages for the following<br />
commands and functions:<br />
– IEHLIST LISTVTOC, IDCAMS LISTCAT, IDCAMS LISTDATA PINNED<br />
– LSPACE, DEVTYPE, IDCAMS DCOLLECT<br />
General information about the tracker<br />
The tracker function allows component code to register tracking information as a text string of<br />
its choosing, up to 28 characters in length. The tracker records this as a unique instance and<br />
appends additional information to it, such as job name, program name, and a count of<br />
occurrences, so that duplicates are not recorded separately.<br />
DFSMS instances tracked by the EAV migration assistance tracker are shown in Figure 3-34.<br />
LSPACE (SVC 78)<br />
DEVTYPE (SVC 24)<br />
IDCAMS LISTDATA PINNED<br />
IEHLIST LISTVTOC<br />
IDCAMS DCOLLECT<br />
IDCAMS LISTCAT<br />
OBTAIN (SVC 27)<br />
CVAFDIR<br />
CVAFSEQ<br />
CVAFDSM<br />
CVAFFILT<br />
CVAFVSM<br />
DCB Open <strong>of</strong> a VTOC<br />
DCB Open <strong>of</strong> EAS eligible data set<br />
Figure 3-34 DFSMS instances tracked by the migration tracker<br />
3.31 Migration tracker commands<br />
SETCON command<br />
Used to activate and deactivate the Console ID<br />
Tracking facility<br />
SETCON TR=ON<br />
DISPLAY OPDATA,TRACKING command<br />
Used to display the current status <strong>of</strong> the console ID<br />
tracking facility, along with any recorded instances <strong>of</strong><br />
violations<br />
Figure 3-35 Migration tracker commands<br />
Migration tracker commands usage<br />
The tracking facility can be used with the following commands:<br />
► The SETCON command is used to activate and deactivate the Console ID Tracking<br />
facility.<br />
► The DISPLAY OPDATA,TRACKING command is used to display the current status <strong>of</strong> the<br />
Console ID Tracking facility, along with any recorded instances <strong>of</strong> violations. Sample<br />
output <strong>of</strong> this command is shown in Figure 3-37 on page 105.<br />
CNIDTRxx parmlib member<br />
The CNIDTRxx parmlib member is optional; it lists violations that have already been<br />
identified, to prevent them from being recorded again. The exclusion list is picked up when the<br />
tracker is started or through the SET command. The filters for the exclusion list are the three<br />
items that make an instance unique. You can exclude all SMS instances or just LSPACE<br />
instances, or exclude all instances created by job names that begin with BRS*. There are<br />
many more possibilities. The filters support wildcarding on these three fields.<br />
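The three-field matching with wildcards can be sketched using Python's fnmatch module. This is an illustrative model of the filtering only, not the actual parmlib parser:

```python
from fnmatch import fnmatchcase

def is_excluded(instance, exclusion_list):
    """Illustrative exclusion-list check. An instance is excluded when
    its tracking information, job name, and program name all match the
    corresponding masks of some entry; masks support * wildcards."""
    info, jobname, pgmname = instance
    return any(fnmatchcase(info, m_info)
               and fnmatchcase(jobname, m_job)
               and fnmatchcase(pgmname, m_pgm)
               for m_info, m_job, m_pgm in exclusion_list)
```

For example, an entry of ('SMS-I:3 LSPACE*', 'VTDSOIS1', '*') would exclude every LSPACE informational instance created by job VTDSOIS1, regardless of program name.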
As events are recorded by the Tracking facility, report the instance to the product owner. After<br />
the event is reported, update the parmlib member so that the instance is no longer recorded<br />
by the facility. In this way, the facility only reports new events.<br />
Tracker exclusion list<br />
The exclusion list prevents instances from being recorded. To activate an exclusion list, use<br />
the SET CNIDTR operator command, as illustrated in Figure 3-36.<br />
set cnidtr=7t<br />
IEE536I CNIDTR VALUE 7T NOW IN EFFECT<br />
SAMPLE EXCLUSION LIST 7T<br />
* Jobname Pgmname *<br />
* Tracking Information Mask Mask Mask Comments (ignored) *<br />
*----------------------------+--------+--------+----------------------+<br />
|SMS-I:3 LSPACE* |*MASTER*|IEE70110| I SMF CALLS TO LSPACE|<br />
|SMS-I:3 LSPACE* |ALLOCAS |IEFW21SD| VARY DEVICE OFFLINE |<br />
|SMS-I:3 LSPACE* |* |VTDSOIS2| VTDSOIS2 PROG CALLS |<br />
|SMS-I:3 LSPACE* |VTDSOIS1|* | VTDSOIS1 JOB CALLS |<br />
Figure 3-36 Sample tracker exclusion list in the CNIDTR7T parmlib member<br />
Tracking command example<br />
In Figure 3-37, the tracking information column identifies the instance as being<br />
DFSMS-related with the SMS prefix. The additional I, E, or W appended to SMS identifies the<br />
instance as being an informational, error, or warning event. The remaining text in the tracking<br />
information describes the event that was recorded or, for error events, the type <strong>of</strong> error that<br />
would have occurred if the function were executed on an EAV. The tracking value is a value<br />
unique to the error being recorded. JOBNAME, PROGRAM+OFF, and ASID identify what<br />
was being run at the time <strong>of</strong> the instance. Only unique instances are recorded. Duplicates are<br />
tracked by the NUM column being incremented. The tracking information, jobname, and<br />
program name fields make up a unique instance.<br />
13.21.19 SYSTEM1 d opdata,tracking<br />
13.21.19 SYSTEM1 CNZ1001I 13.21.19 TRACKING DISPLAY 831<br />
STATUS=ON,ABEND NUM=15 MAX=1000 MEM=7T EXCL=45 REJECT=0<br />
----TRACKING INFORMATION---- -VALUE-- JOBNAME PROGNAME+OFF-- ASID NUM<br />
SMS-E:1 CVAFDIR STAT082 045201 CVAFJBN CVAFPGM 756 28 4<br />
SMS-E:1 CVAFDSM STAT082 045201 CVAFJBN CVAFPGM 556 28 4<br />
SMS-E:1 CVAFFILT STAT086 04560601 CVAFJBN CVAFPGM 456 28 4<br />
SMS-E:1 CVAFSEQ STAT082 045201 CVAFJBN CVAFPGM 656 28 4<br />
SMS-E:1 DADSM OBTAIN C08001 OBTJBN OBTPGM 856 28 4<br />
SMS-E:1 DCB OPEN VSAM 113-44 01 OPENJBN OPENPGM 256 28 4<br />
SMS-E:1 DCB OPEN VTOC 113-48 01 OPENJBN OPENPGM 356 28 4<br />
SMS-I:3 DEVTYPE 02 DEVTJOB DEVTPROG CE5C 11 1<br />
SMS-I:3 IDCAMS DCOLLECT 02 DCOLLECT IDCAMS 1515 28 4<br />
SMS-I:3 LSPACE EXPMSG= 8802 VTDS0IS1 VTDS0IS2 118 28 2<br />
SMS-I:3 LSPACE MSG= 5002 ALLOCAS IEFW21SD 4CE5C 11 2<br />
SMS-I:3 LSPACE MSG= 9002 *MASTER* IEE70110 52F6 01 43<br />
SMS-W:2 IDCAMS LISTDATA PINN 03 LISTDATX IDCAMS E48E 28 2<br />
SMS-W:2 IDCAMS LISTCAT 03 LISTCAT IDCAMS 956 28 4<br />
SMS-W:2 IEHLIST LISTVTOC 03 LISTVTOC IEHLIST 1056 28 4<br />
----------------------------------------------------------------------<br />
TO REPORT THESE INSTANCES, SEND THIS MESSAGE VIA E-MAIL TO<br />
CONSOLES@US.<strong>IBM</strong>.COM. FOR ADDITIONAL INFORMATION OR TO OBTAIN A CURRENT<br />
EXCLUSION LIST, SEE APAR II13752.<br />
Figure 3-37 Tracker instance report output<br />
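The uniqueness and counting behavior visible in the report can be modeled briefly. The helpers below are hypothetical; only the SMS-I/E/W prefix convention and the three-field uniqueness rule come from the description above:

```python
from collections import Counter

def severity(tracking_info):
    """Map the I, E, or W appended to the SMS prefix to a category
    (illustrative; tracking_info looks like 'SMS-E:1 CVAFDIR ...')."""
    return {"I": "informational", "E": "error",
            "W": "warning"}[tracking_info[4]]

def record(events):
    """Illustrative deduplication: tracking information, job name, and
    program name form the unique key; duplicates bump the NUM count."""
    counts = Counter()
    for info, jobname, pgmname in events:
        counts[(info, jobname, pgmname)] += 1
    return counts
```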
Chapter 4. Storage management s<strong>of</strong>tware<br />
DFSMS is an exclusive element <strong>of</strong> the z/<strong>OS</strong> operating system and it automatically manages<br />
data from creation to expiration. In this chapter we present the following topics:<br />
► The DFSMS utility programs to assist you in organizing and maintaining data<br />
► The major DFSMS access methods<br />
► The data set organizations<br />
► An overview <strong>of</strong> the elements that comprise DFSMS:<br />
– DFSMSdfp, a base element <strong>of</strong> z/<strong>OS</strong><br />
– DFSMSdss, an optional feature <strong>of</strong> z/<strong>OS</strong><br />
– DFSMShsm, an optional feature <strong>of</strong> z/<strong>OS</strong><br />
– DFSMSrmm, an optional feature <strong>of</strong> z/<strong>OS</strong><br />
– z/<strong>OS</strong> DFSORT<br />
© Copyright <strong>IBM</strong> Corp. 2010. All rights reserved. 107
4.1 Overview <strong>of</strong> DFSMSdfp utilities<br />
IEBCOMPR: Compare records in SEQ/PDS(E)<br />
IEBCOPY: Copy/Merge/Compress/Manage PDS(E)<br />
IEBDG: Test data generator<br />
IEBEDIT: Selectively copy job steps<br />
IEBGENER: Sequential copy, generate data sets<br />
IEBIMAGE: Create printer image<br />
IEBISAM: Create, copy, backup, print ISAM data set<br />
IEBPTPCH: Print or punch SEQ/PDS(E)<br />
IEBUPDTE: Create/modify SEQ/PDS(E)<br />
IEHINITT: Write standard labels on tape volumes<br />
IEHLIST: List VTOC/PDS(E) entries<br />
IEHMOVE: Move or copy collections <strong>of</strong> data<br />
IEHPROGM: Build, maintain system control data<br />
IFHSTATR: Formats SMF records type 21 (ESV data)<br />
Figure 4-1 DFSMSdfp utilities<br />
DFSMSdfp utilities<br />
Utilities are programs that perform commonly needed functions. DFSMS provides utility<br />
programs to assist you in organizing and maintaining data. System and data set utility<br />
programs are controlled by JCL and utility control statements.<br />
The base JCL and certain utility control statements necessary to use these utilities are<br />
provided in the major discussion <strong>of</strong> the utility programs in this chapter. For more details and to<br />
help you find the program that performs the function you need, see “Guide to Utility Program<br />
Functions” in z/<strong>OS</strong> DFSMSdfp Utilities, SC26-7414.<br />
<strong>System</strong> utility programs<br />
<strong>System</strong> utility programs are used to list or change information related to data sets and<br />
volumes, such as data set names, catalog entries, and volume labels. Most functions that<br />
system utility programs can perform are accomplished more efficiently with other programs,<br />
such as IDCAMS, ISPF/PDF, ISMF, or DFSMSrmm.<br />
Table 4-1 on page 109 lists and describes system utilities. Programs that provide functions<br />
which are better performed by newer applications (such as ISMF, ISPF/PDF or DFSMSrmm<br />
or DFSMSdss) are marked with an asterisk (*) in the table.<br />
108 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
Table 4-1 System utility programs<br />
System utility | Alternate program | Purpose<br />
*IEHINITT | DFSMSrmm EDGINERS | Write standard labels on tape volumes.<br />
IEHLIST | ISMF, PDF 3.4 | List system control data.<br />
*IEHMOVE | DFSMSdss, IEBCOPY | Move or copy collections of data.<br />
IEHPROGM | Access Method Services, PDF 3.2 | Build and maintain system control data.<br />
*IFHSTATR | DFSMSrmm, EREP | Select, format, and write information about tape errors from the IFASMFDP tape.<br />
Data set utility programs<br />
You can use data set utility programs to reorganize, change, or compare data at the data set<br />
or record level. These programs are controlled by JCL statements and utility control<br />
statements.<br />
These utilities allow you to manipulate partitioned, sequential, or indexed sequential data sets,<br />
or partitioned data sets extended (PDSEs), which are provided as input to the programs. You<br />
can manipulate data ranging from fields within a logical record to entire data sets. The data<br />
set utilities included in this section cannot be used with VSAM data sets. Use the<br />
IDCAMS utility to manipulate VSAM data sets; refer to “Invoking the IDCAMS utility program”<br />
on page 130.<br />
Table 4-2 lists data set utility programs and their use. Programs that provide functions that<br />
are better performed by newer applications, such as ISMF, DFSMSrmm, or DFSMSdss, are<br />
marked with an asterisk (*) in the table.<br />
Table 4-2 Data set utility programs<br />
Data set utility | Use<br />
*IEBCOMPR, SuperC (PDF 3.12) | Compare records in sequential or partitioned data sets, or PDSEs.<br />
IEBCOPY | Copy, compress, or merge partitioned data sets or PDSEs; add RLD count information to load modules; select or exclude specified members in a copy operation; rename or replace selected members of partitioned data sets or PDSEs.<br />
IEBDG | Create a test data set consisting of patterned data.<br />
IEBEDIT | Selectively copy job steps and their associated JOB statements.<br />
IEBGENER or ICEGENER | Copy records from a sequential data set, or convert a data set from sequential organization to partitioned organization.<br />
*IEBIMAGE | Modify, print, or link modules for use with the IBM 3800 Printing Subsystem, the IBM 3262 Model 5, or the 4284 printer.<br />
*IEBISAM | Unload, load, copy, or print an ISAM data set.<br />
IEBPTPCH or PDF 3.1 or 3.6 | Print or punch records in a sequential or partitioned data set.<br />
IEBUPDTE | Incorporate changes to sequential or partitioned data sets, or PDSEs.<br />
Chapter 4. Storage management s<strong>of</strong>tware 109
4.2 IEBCOMPR utility<br />
In Example 1, PDSE1 Directory 1 contains members A, B, C, D, G, and L, and PDSE2 Directory 2<br />
contains members A through L. In Example 2, PDSE1 Directory 1 contains members A, B, C, F, H, I,<br />
and J, and PDSE2 Directory 2 contains members A, B, F, G, H, I, and J.<br />
Figure 4-2 IEBCOMPR utility example<br />
IEBCOMPR utility<br />
IEBCOMPR is a data set utility used to compare two sequential data sets, two partitioned<br />
data sets (PDS), or two PDSEs, at the logical record level, to verify a backup copy. Fixed,<br />
variable, or undefined records from blocked or unblocked data sets or members can also be<br />
compared. However, you should not use IEBCOMPR to compare load modules.<br />
Two sequential data sets are considered equal (that is, are considered to be identical) if:<br />
► The data sets contain the same number <strong>of</strong> records<br />
► Corresponding records and keys are identical<br />
Two partitioned data sets or two PDSEs are considered equal if:<br />
► Corresponding members contain the same number <strong>of</strong> records<br />
► Note lists are in the same position within corresponding members<br />
► Corresponding records and keys are identical<br />
► Corresponding directory user data fields are identical<br />
If these conditions are not all met for a given pair of data sets, the data sets are<br />
considered unequal. If records are unequal, the record and block numbers, the names of the<br />
DD statements that define the data sets, and the unequal records are listed in a message<br />
data set. Ten successive unequal comparisons stop the job step, unless you provide a routine<br />
for handling error conditions.<br />
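As a sketch of that error-handling hook, a comparison of two sequential data sets can name a user routine on the EXITS statement. The data set names and the routine name COMPERR are illustrative; the routine must reside in a library available to the job (for example, through a STEPLIB DD statement).

```
//SEQCOMP  JOB ...
//STEP1    EXEC PGM=IEBCOMPR
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSN=BACKUP.COPY1,DISP=SHR
//SYSUT2   DD DSN=BACKUP.COPY2,DISP=SHR
//SYSIN    DD *
  COMPARE TYPORG=PS
  EXITS ERROR=COMPERR
/*
```

With ERROR= specified, the routine receives control after each unequal comparison instead of the job step ending after ten successive unequal comparisons.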
Note: Load module partitioned data sets that reside on different types <strong>of</strong> devices should<br />
not be compared. Under most circumstances, the data sets will not compare as equal.<br />
A partitioned data set or partitioned data set extended can be compared only if all names in<br />
one or both directories have counterpart entries in the other directory. The comparison is<br />
made on members identified by these entries and corresponding user data.<br />
Recommendation: Use the SuperC utility instead of IEBCOMPR. SuperC is part of<br />
ISPF/PDF and the High Level Assembler Toolkit Feature. SuperC can run in the<br />
foreground as well as in batch, and its report is more useful.<br />
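A hedged sketch of a batch SuperC comparison follows. The data set names are illustrative; ISRSUPC is the ISPF-supplied batch compare program, and the DD names and PARM options shown follow its conventions.

```
//SUPERC   JOB ...
//* Line-by-line compare (LINECMP) with a delta listing (DELTAL)
//STEP1    EXEC PGM=ISRSUPC,PARM=(DELTAL,LINECMP)
//NEWDD    DD DSN=PDSE1,DISP=SHR
//OLDDD    DD DSN=PDSE2,DISP=SHR
//OUTDD    DD SYSOUT=A
```

The delta listing in OUTDD shows only the differing lines, which is generally easier to act on than the IEBCOMPR message data set.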
Examples <strong>of</strong> comparing data sets<br />
As mentioned, partitioned data sets or PDSEs can be compared only if all the names in one<br />
or both <strong>of</strong> the directories have counterpart entries in the other directory. The comparison is<br />
made on members that are identified by these entries and corresponding user data.<br />
You can run this sample JCL to compare two cataloged, partitioned organized (PO) data sets:<br />
//DISKDISK JOB ...<br />
// EXEC PGM=IEBCOMPR<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSUT1 DD DSN=PDSE1,DISP=SHR<br />
//SYSUT2 DD DSN=PDSE2,DISP=SHR<br />
//SYSIN DD *<br />
COMPARE TYPORG=PO<br />
/*<br />
Figure 4-3 IEBCOMPR sample<br />
Figure 4-2 on page 110 shows several examples <strong>of</strong> the directories <strong>of</strong> two partitioned data<br />
sets.<br />
In Example 1, Directory 2 contains corresponding entries for all the names in Directory 1;<br />
therefore, the data sets can be compared.<br />
In Example 2, each directory contains a name that has no corresponding entry in the other<br />
directory; therefore, the data sets cannot be compared, and the job step will be ended.<br />
4.3 IEBCOPY utility<br />
//COPY JOB ...<br />
//JOBSTEP EXEC PGM=IEBCOPY<br />
//SYSPRINT DD SYSOUT=A<br />
//OUT1 DD DSNAME=DATASET1,UNIT=disk,VOL=SER=111112,<br />
// DISP=(OLD,KEEP)<br />
//IN6 DD DSNAME=DATASET6,UNIT=disk,VOL=SER=111115,<br />
// DISP=OLD<br />
//IN5 DD DSNAME=DATASET5,UNIT=disk,VOL=SER=111116,<br />
// DISP=(OLD,KEEP)<br />
//SYSUT3 DD UNIT=SYSDA,SPACE=(TRK,(1))<br />
//SYSUT4 DD UNIT=SYSDA,SPACE=(TRK,(1))<br />
//SYSIN DD *<br />
COPY OUTDD=OUT1<br />
INDD=IN5,IN6<br />
SELECT MEMBER=((B,,R),A)<br />
/*<br />
Figure 4-4 IEBCOPY utility example<br />
IEBCOPY utility<br />
IEBCOPY is a data set utility used to copy or merge members between one or more<br />
partitioned data sets (PDS), or partitioned data sets extended (PDSE), in full or in part. You<br />
can also use IEBCOPY to create a backup <strong>of</strong> a partitioned data set into a sequential data set<br />
(called an unload data set or PDSU), and to copy members from the backup into a partitioned<br />
data set.<br />
IEBCOPY is used to:<br />
► Make a copy <strong>of</strong> a PDS or PDSE<br />
► Merge partitioned data sets (except when unloading)<br />
► Create a sequential form <strong>of</strong> a PDS or PDSE for a backup or transport<br />
► Reload one or more members from a PDSU into a PDS or PDSE<br />
► Select specific members <strong>of</strong> a PDS or PDSE to be copied, loaded, or unloaded<br />
► Replace members <strong>of</strong> a PDS or PDSE<br />
► Rename selected members <strong>of</strong> a PDS or PDSE<br />
► Exclude members from a data set to be copied, unloaded, or loaded (except on<br />
COPYGRP)<br />
► Compress a PDS in place<br />
► Upgrade an <strong>OS</strong> format load module for faster loading by MVS program fetch<br />
► Copy and reblock load modules<br />
► Convert load modules in a PDS to program objects in a PDSE when copying a PDS to a<br />
PDSE<br />
► Convert a PDS to a PDSE, or a PDSE to a PDS<br />
► Copy to or from a PDSE data set a member and its aliases together as a group<br />
(COPYGRP)<br />
In addition, IEBCOPY automatically lists the number <strong>of</strong> unused directory blocks and the<br />
number <strong>of</strong> unused tracks available for member records in the output partitioned data set.<br />
INDD statement<br />
This statement specifies the names of DD statements that locate the input data sets. When<br />
an INDD= appears in a record by itself (that is, not with a COPY keyword), it functions as a<br />
control statement and begins a new step in the current copy operation.<br />
INDD=[(]{DDname|(DDname,R)}[,...][)]<br />
R specifies that all members to be copied or loaded from this input data set are to replace<br />
any identically named members on the output partitioned data set.<br />
OUTDD statement<br />
This statement specifies the name <strong>of</strong> a DD statement that locates the output data set.<br />
OUTDD=DDname<br />
SELECT statement<br />
This statement selects specific members to be processed from one or more data sets by<br />
coding a SELECT statement to name the members. Alternatively, all members but a specific<br />
few can be designated by coding an EXCLUDE statement to name members not to be<br />
processed.<br />
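As a sketch of the EXCLUDE form (data set and member names are illustrative), the following job copies every member of the input data set except the two that are named:

```
//EXCLCOPY JOB ...
//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=A
//OUT1     DD DSN=TARGET.PDS,DISP=OLD
//IN1      DD DSN=SOURCE.PDS,DISP=SHR
//SYSIN    DD *
  COPY OUTDD=OUT1,INDD=IN1
  EXCLUDE MEMBER=(OLDMOD1,OLDMOD2)
/*
```

SELECT and EXCLUDE cannot both be coded for the same copy step; choose whichever names the smaller list of members.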
4.4 IEBCOPY: Copy operation<br />
Before the copy, DATA.SET1 contains members A, B, and F; DATA.SET5 contains members A and C;<br />
and DATA.SET6 contains members B, C, and D. After the copy, the DATA.SET1 directory lists<br />
members A, B, C, D, and F.<br />
Figure 4-5 IEBCOPY copy operation<br />
Copy control command example<br />
In Figure 4-5, two input partitioned data sets (DATA.SET5 and DATA.SET6) are copied to an<br />
existing output partitioned data set (DATA.SET1). In addition, all members on DATA.SET6 are<br />
copied; members on the output data set that have the same names as the copied members<br />
are replaced. After DATA.SET6 is processed, the output data set (DATA.SET1) is compressed<br />
in place. Figure 4-5 shows the input and output data sets before and after copy processing.<br />
The compress process is shown in Figure 4-7 on page 116. Figure 4-6 shows the job that is<br />
used to copy and compress partitioned data sets.<br />
//COPY JOB ...<br />
//JOBSTEP EXEC PGM=IEBCOPY<br />
//SYSPRINT DD SYSOUT=A<br />
//INOUT1 DD DSNAME=DATA.SET1,UNIT=disk,VOL=SER=111112,DISP=(OLD,KEEP)<br />
//IN5 DD DSNAME=DATA.SET5,UNIT=disk,VOL=SER=111114,DISP=OLD<br />
//IN6 DD DSNAME=DATA.SET6,UNIT=disk,VOL=SER=111115,<br />
// DISP=(OLD,KEEP)<br />
//SYSUT3 DD UNIT=SYSDA,SPACE=(TRK,(1))<br />
//SYSUT4 DD UNIT=SYSDA,SPACE=(TRK,(1))<br />
//SYSIN DD *<br />
COPY OUTDD=INOUT1,INDD=(IN5,(IN6,R),INOUT1)<br />
/*<br />
Figure 4-6 IEBCOPY with copy and compress<br />
COPY control statement<br />
In the control statement, note the following:<br />
► INOUT1 DD defines a partitioned data set (DATA.SET1) that contains three members (A,<br />
B, and F).<br />
► IN5 DD defines a partitioned data set (DATA.SET5) that contains two members (A and C).<br />
► IN6 DD defines a partitioned data set (DATA.SET6), that contains three members (B, C,<br />
and D).<br />
► SYSUT3 and SYSUT4 DD define temporary spill data sets. One track is allocated for each<br />
DD statement on a disk volume.<br />
► SYSIN DD defines the control data set, which follows in the input stream. The data set<br />
contains a COPY statement.<br />
► COPY indicates the start <strong>of</strong> the copy operation. The OUTDD operand specifies<br />
DATA.SET1 as the output data set.<br />
COPY processing<br />
Processing occurs as follows:<br />
1. Member A is not copied from DATA.SET5 into DATA.SET1 because it already exists on<br />
DATA.SET1 and the replace option was not specified for DATA.SET5.<br />
2. Member C is copied from DATA.SET5 to DATA.SET1, occupying the first available space.<br />
3. All members are copied from DATA.SET6 to DATA.SET1, immediately following the last<br />
member. Members B and C are copied even though the output data set already contains<br />
members with the same names because the replace option is specified on the data set<br />
level.<br />
The pointers in the DATA.SET1 directory are changed to point to the new members B and C.<br />
Thus, the space occupied by the old members B and C is unused.<br />
4.5 IEBCOPY: Compress operation<br />
After the copy and before the compress, the DATA.SET1 directory lists members A, B, C, D, and F,<br />
with unused space embedded where the old members B and C resided. After the compress, the<br />
members occupy contiguous space and the reclaimed space is available at the end of the data set.<br />
Figure 4-7 IEBCOPY compress operation<br />
IEBCOPY compress operation<br />
A partitioned data set will contain unused areas (sometimes called gas) where a deleted<br />
member or the old version of an updated member once resided. This unused space is only<br />
reclaimed when a partitioned data set is copied to a new data set, or after a<br />
compress-in-place operation successfully completes. Compression has no meaning for a<br />
PDSE and is ignored if requested.<br />
The simplest way to request a compress-in-place operation is to specify the same ddname for<br />
both the OUTDD and INDD parameters <strong>of</strong> a COPY statement.<br />
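The pattern just described can be sketched as follows (the data set name is illustrative; SYSUT3 and SYSUT4 are the usual spill data sets):

```
//COMPRESS JOB ...
//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=A
//INOUT1   DD DSN=DATA.SET1,DISP=OLD
//SYSUT3   DD UNIT=SYSDA,SPACE=(TRK,(1))
//SYSUT4   DD UNIT=SYSDA,SPACE=(TRK,(1))
//SYSIN    DD *
  COPY OUTDD=INOUT1,INDD=INOUT1
/*
```

Because OUTDD and INDD name the same ddname, IEBCOPY compresses DATA.SET1 in place instead of copying it.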
Example<br />
In our example in 4.4, “IEBCOPY: Copy operation” on page 114, the pointers in the<br />
DATA.SET1 directory are changed to point to the new members B and C. Thus, the space<br />
occupied by the old members B and C is unused. The members currently on DATA.SET1 are<br />
compressed in place as a result <strong>of</strong> the copy operation, thereby eliminating embedded unused<br />
space. However, be aware that a compress-in-place operation can leave the data set damaged<br />
if the process is disrupted abnormally, because members are moved within the same space.<br />
4.6 IEBGENER utility<br />
//COPY JOB ...<br />
//STEP1 EXEC PGM=IEBGENER<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSUT1 DD DSNAME=INSET,DISP=SHR<br />
//SYSUT2 DD DSNAME=OUTPUT,DISP=(,CATLG),<br />
// SPACE=(CYL,(1,1)),DCB=*.SYSUT1<br />
//SYSIN DD DUMMY<br />
Figure 4-8 IEBGENER utility<br />
Using IEBGENER<br />
IEBGENER copies records from a sequential data set or converts sequential data sets into<br />
members <strong>of</strong> PDSs or PDSEs. You can use IEBGENER to:<br />
► Create a backup copy <strong>of</strong> a sequential data set, a member <strong>of</strong> a partitioned data set or<br />
PDSE, or a UNIX <strong>System</strong> Services file such as an HFS file.<br />
► Produce a partitioned data set or PDSE, or a member <strong>of</strong> a partitioned data set or PDSE,<br />
from a sequential data set or a UNIX <strong>System</strong> Services file.<br />
► Expand an existing partitioned data set or PDSE by creating partitioned members and<br />
merging them into the existing data set.<br />
► Produce an edited sequential or partitioned data set or PDSE.<br />
► Manipulate data sets containing double-byte character set data.<br />
► Print sequential data sets or members <strong>of</strong> partitioned data sets or PDSEs or UNIX <strong>System</strong><br />
Services files.<br />
► Re-block or change the logical record length <strong>of</strong> a data set.<br />
► Copy user labels on sequential output data sets.<br />
► Supply editing facilities and exits.<br />
If RECFM and LRECL are specified for the output data set but BLKSIZE is not, jobs that call<br />
IEBGENER get a system-determined block size for the output data set. The data set is also<br />
considered to be system-reblockable.<br />
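As an illustrative sketch of the re-blocking use named above (data set names and attributes are assumptions, and the new data set is assumed to be SMS-managed so no UNIT or VOL is coded), the job below copies a data set and lets the system choose the block size:

```
//REBLOCK  JOB ...
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSN=MY.OLD.DATA,DISP=SHR
//* BLKSIZE omitted: the system determines an optimal block size
//SYSUT2   DD DSN=MY.NEW.DATA,DISP=(NEW,CATLG),
//            SPACE=(CYL,(5,1)),
//            DCB=(RECFM=FB,LRECL=80)
//SYSIN    DD DUMMY
```

SYSIN DD DUMMY requests a straight copy with no editing, so only the DCB attributes of SYSUT2 drive the re-blocking.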
Copying to z/<strong>OS</strong> UNIX<br />
IEBGENER can be used to copy from a PDS or PDSE member to a UNIX file. You also can<br />
use TSO/E commands and other copying utilities such as ICEGENER or BPXCOPY to copy a<br />
PDS or PDSE member to a UNIX file.<br />
In Figure 4-9, the data set in SYSUT1 is a PDS or PDSE member and the data set in<br />
SYSUT2 is a UNIX file. This job creates a macro library in the UNIX directory.<br />
// JOB ....<br />
// EXEC PGM=IEBGENER<br />
//SYSPRINT DD SYSOUT=*<br />
//SYSUT1 DD DSN=PROJ.BIGPROG.MACLIB(MAC1),DISP=SHR<br />
//SYSUT2 DD PATH='/u/BIGPROG/macros/special/MAC1',PATHOPTS=OCREAT,<br />
// PATHDISP=(KEEP,DELETE),<br />
// PATHMODE=(SIRUSR,SIWUSR,<br />
// SIRGRP,SIROTH),<br />
// FILEDATA=TEXT<br />
//SYSIN DD DUMMY<br />
Figure 4-9 Job to copy a PDS to a z/<strong>OS</strong> UNIX file<br />
Note: If you have the DFSORT product installed, you should be using ICEGENER as an<br />
alternative to IEBGENER when making an unedited copy <strong>of</strong> a data set or member. It may<br />
already be installed in your system under the name IEBGENER. It generally gives better<br />
performance.<br />
4.7 IEBGENER: Adding members to a PDS<br />
The sequential input contains the records for members B, D, and F, each record group ended by a<br />
LASTREC record. Utility control statements define the record groups and name the members. The<br />
existing data set directory contains members A, C, E, and G; the expanded data set directory<br />
contains all seven members.<br />
Figure 4-10 Adding members to a PDS using IEBGENER<br />
Adding members to a PDS<br />
You can use IEBGENER to add members to a partitioned data set or PDSE. IEBGENER<br />
creates the members from sequential input and adds them to the data set. The merge<br />
operation, which is the ordering <strong>of</strong> the partitioned directory, is automatically performed by the<br />
program.<br />
Figure 4-10 shows how sequential input is converted into members that are merged into an<br />
existing partitioned data set or PDSE. The left side <strong>of</strong> the figure shows the sequential input<br />
that is to be merged with the partitioned data set or PDSE shown in the middle <strong>of</strong> the figure.<br />
Utility control statements are used to divide the sequential data set into record groups and to<br />
provide a member name for each record group. The right side <strong>of</strong> the figure shows the<br />
expanded partitioned data set or PDSE.<br />
Note that members B, D, and F from the sequential data set were placed in available space<br />
and that they are sequentially ordered in the partitioned directory.<br />
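A hedged sketch of the control statements behind this operation follows (the data set names and group labels are illustrative). Each MEMBER statement names the member built from the following record group, and each RECORD IDENT statement identifies the LASTREC record that ends the group; the last member takes the remaining records.

```
//ADDMEMB  JOB ...
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSN=SEQ.INPUT,DISP=SHR
//SYSUT2   DD DSN=EXISTING.PDS,DISP=OLD
//SYSIN    DD *
  GENERATE MAXNAME=3,MAXGPS=2
  MEMBER NAME=B
GROUP1 RECORD IDENT=(7,'LASTREC',1)
  MEMBER NAME=D
GROUP2 RECORD IDENT=(7,'LASTREC',1)
  MEMBER NAME=F
/*
```

MAXNAME counts the member names (3) and MAXGPS counts the IDENT keywords (2); IDENT=(7,'LASTREC',1) matches the 7-byte literal LASTREC starting in position 1 of the input records.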
4.8 IEBGENER: Copying data to tape<br />
The data set MY.DATA on disk is copied by IEBGENER to tape as MY.DATA.OUTPUT.<br />
Figure 4-11 Copying data to tape<br />
Copying data to tape example<br />
You can use IEBGENER to copy data to tape. The example in Figure 4-11 copies the data set<br />
MY.DATA to an SL cartridge. The data set name on tape is MY.DATA.OUTPUT.<br />
//DISKTOTP JOB ...<br />
//STEP1 EXEC PGM=IEBGENER<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSUT1 DD DSNAME=MY.DATA,DISP=SHR<br />
//SYSUT2 DD DSNAME=MY.DATA.OUTPUT,UNIT=3490,DISP=(,KEEP),<br />
// VOLUME=SER=<strong>IBM</strong>001,LABEL=(1,SL)<br />
//SYSIN DD DUMMY<br />
Figure 4-12 Copying data to tape with IEBGENER<br />
For further information about IEBGENER, refer to z/<strong>OS</strong> DFSMSdfp Utilities, SC26-7414.<br />
4.9 IEHLIST utility<br />
//VTOCLIST JOB ...<br />
//STEP1 EXEC PGM=IEHLIST<br />
//SYSPRINT DD SYSOUT=A<br />
//DD2 DD UNIT=3390,VOLUME=SER=SBOXED,DISP=SHR<br />
//SYSIN DD *<br />
LISTVTOC VOL=3390=SBOXED,INDEXDSN=SYS1.VTOCIX.TOTTSB<br />
/*<br />
Figure 4-13 IEHLIST utility<br />
Using IEHLIST<br />
IEHLIST is a system utility used to list entries in the directory <strong>of</strong> one or more partitioned data<br />
sets or PDSEs, or entries in an indexed or non-indexed volume table <strong>of</strong> contents. Any number<br />
<strong>of</strong> listings can be requested in a single execution <strong>of</strong> the program.<br />
Listing a PDS or PDSE directory<br />
IEHLIST can list up to ten partitioned data set or PDSE directories at a time.<br />
The directory <strong>of</strong> a partitioned data set is composed <strong>of</strong> variable-length records blocked into<br />
256-byte blocks. Each directory block can contain one or more entries that reflect member or<br />
alias names and other attributes <strong>of</strong> the partitioned members. IEHLIST can list these blocks in<br />
edited and unedited format.<br />
The directory <strong>of</strong> a PDSE, when listed, will have the same format as the directory <strong>of</strong> a<br />
partitioned data set.<br />
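A sketch of such a directory listing request follows (the volume serial and data set name are illustrative). The LISTPDS statement names the data sets whose directories are to be listed, and the DD statement makes the volume available to IEHLIST:

```
//PDSLIST  JOB ...
//STEP1    EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=A
//DD2      DD UNIT=3390,VOLUME=SER=SBOXED,DISP=OLD
//SYSIN    DD *
  LISTPDS DSNAME=(PROJ.MACLIB),VOL=3390=SBOXED
/*
```

Up to ten data set names can be coded on DSNAME to list several directories in one statement.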
Listing a volume table <strong>of</strong> contents (VTOC)<br />
IEHLIST can be used to list, partially or completely, entries in a specified volume table <strong>of</strong><br />
contents (VTOC), whether indexed or non-indexed. The program lists the contents <strong>of</strong> selected<br />
data set control blocks (DSCBs) in edited or unedited form.<br />
4.10 IEHLIST LISTVTOC output<br />
CONTENTS OF VTOC ON VOL SBOXED<br />
THERE IS A 2 LEVEL VTOC INDEX<br />
DATA SETS ARE LISTED IN ALPHANUMERIC ORDER<br />
----------------DATA SET NAME--------------- CREATED DATE.EXP FILE TYPE SM<br />
@LSTPRC@.REXX.SDSF.#MESSG#.$DATASET 2007.116 00.000 PARTITIONED<br />
ADB.TEMP.DB8A.C0000002.AN154059.CHANGES 2007.052 00.000 SEQUENTIAL<br />
ADMIN.DB8A.C0000051.AN171805.CHANGES 2007.064 00.000 SEQUENTIAL<br />
ADMIN.DB8A.C0000062.AN130142.IFF 2007.066 00.000 PARTITIONED<br />
ADMIN.DB8A.C0000090.SMAP.T0001 2007.067 00.000 SEQUENTIAL<br />
ADMIN.DB8A.C0000091.AN153524.CHANGES 2007.067 00.000 SEQUENTIAL<br />
ADMIN.DB8A.C0000092.AN153552.SHRVARS 2007.067 00.000 PARTITIONED<br />
ADMIN.DB8A.C0000116.ULD.T0001 2007.068 00.000 SEQUENTIAL<br />
ADMIN.DB8A.C0000120.SDISC.T0001 2007.068 00.000 SEQUENTIAL<br />
ADMIN.DB8A.C0000123.AN180219.IFF 2007.068 00.000 PARTITIONED<br />
ADMIN.DB8A.C0000157.AN195422.IFF 2007.071 00.000 PARTITIONED<br />
ADMIN.DB8A.C0000204.AN120807.SHRVARS 2007.074 00.000 PARTITIONED<br />
ADMIN.DB8A.C0000231.AN185546.CHANGES 2007.074 00.000 SEQUENTIAL<br />
ADMIN.DB8A.C0000254.SERR.T0001 2007.078 00.000 SEQUENTIAL<br />
BART.CG38V1.EMP4XML.DATA 2004.271 00.000 SEQUENTIAL<br />
BART.MAZDA.IFCID180.FB80 2004.254 00.000 SEQUENTIAL<br />
BART.PM32593.D060310.T044716.DMRABSUM.UNL 2006.072 00.000 SEQUENTIAL<br />
BART.PM32593.D060317.T063831.DMRAPSUM.UNL 2006.079 00.000 SEQUENTIAL<br />
BART.PM32593.D060324.T061425.DMRABSUM.UNL 2006.086 00.000 SEQUENTIAL<br />
THERE ARE 45 EMPTY CYLINDERS PLUS 162 EMPTY TRACKS ON THIS VOLUME<br />
THERE ARE 4238 BLANK DSCBS IN THE VTOC ON THIS VOLUME<br />
THERE ARE 301 UNALLOCATED VIRS IN THE INDEX<br />
Figure 4-14 IEHLIST LISTVTOC output<br />
Obtaining the VTOC listing<br />
Running the job shown in Figure 4-13 on page 121 produces a SYSOUT very similar to that<br />
shown in Figure 4-14.<br />
If you include the keyword FORMAT in the LISTVTOC parameter, you will have more detailed<br />
information about the DASD and about the data sets, and you can also specify the DSNAME<br />
that you want to request information about. If you specify the keyword DUMP instead <strong>of</strong><br />
FORMAT, you will get an unformatted VTOC listing.<br />
Note: This information is at the DASD volume level, and does not have any interaction with<br />
the catalog.<br />
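As a sketch of the FORMAT variant, the following statement requests a detailed, formatted listing for one data set (the name is taken from the listing above):

```
//VTOCFMT  JOB ...
//STEP1    EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=A
//DD2      DD UNIT=3390,VOLUME=SER=SBOXED,DISP=SHR
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3390=SBOXED,DSNAME=(BART.CG38V1.EMP4XML.DATA)
/*
```

Omitting DSNAME formats every DSCB on the volume; coding DUMP in place of FORMAT produces the unformatted listing instead.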
4.11 IEHINITT utility<br />
Initializing Tape Cartridges<br />
Create a tape label - (EBCDIC or ASCII)<br />
Consider placement in an authorized library<br />
Figure 4-15 IEHINITT utility<br />
//LABEL JOB ...<br />
//STEP1 EXEC PGM=IEHINITT<br />
//SYSPRINT DD SYSOUT=A<br />
//LABEL1 DD DCB=DEN=2,UNIT=(TAPE,1,DEFER)<br />
//LABEL2 DD DCB=DEN=3,UNIT=(TAPE,1,DEFER)<br />
//SYSIN DD *<br />
LABEL1 INITT SER=TAPE1<br />
LABEL2 INITT SER=001234,NUMBTAPE=2<br />
/*<br />
DFSMSrmm EDGINERS - the newer alternative<br />
IEHINITT utility<br />
IEHINITT is a system utility used to place standard volume label sets onto any number <strong>of</strong><br />
magnetic tapes mounted on one or more tape units. They can be ISO/ANSI Version 3 or<br />
ISO/ANSI Version 4 volume label sets written in American Standard Code for Information<br />
Interchange (ASCII) or <strong>IBM</strong> standard labels written in EBCDIC.<br />
Attention: Because IEHINITT can overwrite previously labeled tapes regardless <strong>of</strong><br />
expiration date and security protection, <strong>IBM</strong> recommends that the security administrator<br />
use PROGRAM protection with the following sequence <strong>of</strong> RACF commands:<br />
RDEFINE PROGRAM IEHINITT ADDMEM('SYS1.LINKLIB'//NODPADCHK) UACC(NONE)<br />
PERMIT IEHINITT CLASS(PROGRAM) ID(users or group) ACCESS(READ)<br />
SETROPTS WHEN(PROGRAM) REFRESH<br />
(Omit REFRESH if you did not have this option active previously.)<br />
IEHINITT should be moved into an authorized password-protected private library, and<br />
deleted from SYS1.LINKLIB.<br />
To further protect against overwriting the wrong tape, IEHINITT asks the operator to verify<br />
each tape mount.<br />
Examples <strong>of</strong> IEHINITT<br />
In the example in Figure 4-16, two groups <strong>of</strong> serial numbers, (001234, 001235, 001236, and<br />
001334, 001335, 001336) are placed on six tape volumes. The labels are written in EBCDIC<br />
at 800 bits per inch. Each volume labeled is mounted, when it is required, on a single 9-track<br />
tape unit.<br />
//LABEL3 JOB ...<br />
//STEP1 EXEC PGM=IEHINITT<br />
//SYSPRINT DD SYSOUT=A<br />
//LABEL DD DCB=DEN=2,UNIT=(tape,1,DEFER)<br />
//SYSIN DD *<br />
LABEL INITT SER=001234,NUMBTAPE=3<br />
LABEL INITT SER=001334,NUMBTAPE=3<br />
/*<br />
Figure 4-16 IEHINITT example to write EBCDIC labels in different densities<br />
In Figure 4-17, serial numbers 001234, 001244, 001254, 001264, 001274, and so forth are<br />
placed on eight tape volumes. The labels are written in EBCDIC at 800 bits per inch. Each<br />
volume labeled is mounted, when it is required, on one <strong>of</strong> four 9-track tape units.<br />
//LABEL4 JOB ...<br />
//STEP1 EXEC PGM=IEHINITT<br />
//SYSPRINT DD SYSOUT=A<br />
//LABEL DD DCB=DEN=2,UNIT=(tape,4,DEFER)<br />
//SYSIN DD *<br />
LABEL INITT SER=001234<br />
LABEL INITT SER=001244<br />
LABEL INITT SER=001254<br />
LABEL INITT SER=001264<br />
LABEL INITT SER=001274<br />
LABEL INITT SER=001284<br />
LABEL INITT SER=001294<br />
LABEL INITT SER=001304<br />
/*<br />
Figure 4-17 IEHINITT Place serial number on eight tape volumes<br />
DFSMSrmm EDGINERS utility<br />
The EDGINERS utility program verifies that the volume is mounted before writing a volume<br />
label on a labeled, unlabeled, or blank tape. EDGINERS checks security and volume<br />
ownership, and provides auditing. DFSMSrmm must know that the volume needs to be<br />
labeled. If the volume to be labeled is not yet defined, EDGINERS defines it to DFSMSrmm and<br />
can create RACF volume security protection.<br />
Detailed procedures for using the program are described in z/<strong>OS</strong> DFSMSrmm<br />
Implementation and Customization Guide, SC26-7405.<br />
Note: DFSMSrmm is an optional priced feature <strong>of</strong> DFSMS. That means that EDGINERS<br />
can only be used when DFSMSrmm is licensed. If DFSMSrmm is licensed, <strong>IBM</strong><br />
recommends that you use EDGINERS for tape initialization instead <strong>of</strong> using IEHINITT.<br />
4.12 IEFBR14 utility<br />
//DATASETS JOB ...<br />
//STEP1 EXEC PGM=IEFBR14<br />
//DD1 DD DSN=DATA.SET1,<br />
// DISP=(OLD,DELETE,DELETE)<br />
//DD2 DD DSN=DATA.SET2,<br />
// DISP=(NEW,CATLG),UNIT=3390,<br />
// VOL=SER=333001,<br />
// SPACE=(CYL,(12,1,1),),<br />
// DCB=(RECFM=FB,LRECL=80)<br />
Figure 4-18 IEFBR14 program<br />
IEFBR14 program<br />
IEFBR14 is not a utility program. It is a two-instruction program that clears register 15, thus<br />
passing a return code of 0, and then branches to the address in register 14, which returns control<br />
to the system. In other words, it is a dummy program. It can be used in a step to force<br />
MVS (specifically, the initiator) to process the JCL and execute functions such as the<br />
following:<br />
► Checking all job control statements in the step for syntax<br />
► Allocating direct access space for data sets<br />
► Performing data set dispositions like creating new data sets or deleting old ones<br />
Note: Although the system allocates space for data sets, it does not initialize the new data<br />
sets. Therefore, any attempt to read from one <strong>of</strong> these new data sets in a subsequent step<br />
may produce unpredictable results. Also, we do not recommend allocation <strong>of</strong> multi-volume<br />
data sets while executing IEFBR14.<br />
In the example in Figure 4-18, the first DD statement, DD1, deletes the old data set DATA.SET1.<br />
The second DD statement creates and catalogs a new partitioned data set named DATA.SET2.<br />
4.13 DFSMSdfp access methods<br />
DFSMSdfp provides several access methods for<br />
formatting and accessing data, as follows:<br />
Figure 4-19 DFSMSdfp access methods<br />
Access methods<br />
An access method is a programming interface between programs and their data. It is in<br />
charge of interfacing with the Input/Output Supervisor (IOS), the z/OS code that starts the I/O<br />
operation. An access method makes the physical organization of data transparent to you by:<br />
► Managing data buffers<br />
► Blocking and de-blocking logical records into physical blocks<br />
► Synchronizing your task and the I/O operation (wait/post mechanism)<br />
► Writing the channel program<br />
► Optimizing the performance characteristics <strong>of</strong> the control unit (such as caching and data<br />
striping)<br />
► Compressing and decompressing I/O data<br />
► Executing s<strong>of</strong>tware error recovery<br />
In contrast to other platforms, z/<strong>OS</strong> supports several types <strong>of</strong> access methods and data<br />
organizations.<br />
An access method defines the organization by which the data is stored and retrieved. DFSMS<br />
access methods have their own data set structures for organizing data, macros, and utilities<br />
to define and process data sets. It is an application choice, depending on the type <strong>of</strong> access<br />
(sequential or random), to allow or disallow insertions and deletions, to pick up the most<br />
adequate access method for its data.<br />
126 ABCs of z/OS System Programming Volume 3
► Basic partitioned access method (BPAM)
► Basic sequential access method (BSAM)
► Object access method (OAM), through the OSREQ interface
► Queued sequential access method (QSAM)
► Virtual storage access method (VSAM)

DFSMS also supports the basic direct access method (BDAM) for coexistence with previous operating systems.
Access methods are identified primarily by the data set organization to which they apply. For example, you can use the basic sequential access method (BSAM) with sequential data sets. However, there are times when an access method identified with one data organization type can be used to process a data set organized in a different manner. For example, a sequential data set (not an extended format data set) created using BSAM can be processed by the basic direct access method (BDAM), and vice versa.
Basic direct access method (BDAM)
BDAM arranges records in any sequence your program indicates, and retrieves records by actual or relative address. If you do not know the exact location of a record, you can specify a point in the data set where a search for the record is to begin. Data sets organized this way are called direct data sets.

Optionally, BDAM uses hardware keys. Hardware keys are less efficient than the optional software keys in VSAM KSDS.

Note: Because BDAM tends to require the use of device-dependent code, it is not a recommended access method. In addition, using keys is much less efficient than in VSAM. BDAM is supported by DFSMS only to enable compatibility with other IBM operating systems.
Basic partitioned access method (BPAM)
BPAM arranges records as members of a partitioned data set (PDS) or a partitioned data set extended (PDSE) on DASD. You can use BPAM to view a UNIX directory and its files as though it were a PDS. You can view each PDS, PDSE, or UNIX member sequentially with BSAM or QSAM. A PDS or PDSE includes a directory that relates member names to locations within the data set. Use the PDS, PDSE, or UNIX directory to retrieve individual members. A member is a sequential file contained in the PDS or PDSE data set. When members contain load modules (executable code in a PDS) or program objects (executable code in a PDSE), the directory contains program attributes that are required to load and rebind the member. Although UNIX files can contain program objects, program management does not access UNIX files through BPAM.

For information about partitioned organized data sets, see 4.22, “Partitioned organized (PO) data sets” on page 143, and subsequent sections.
Basic sequential access method (BSAM)
BSAM arranges logical records sequentially in the order in which they are entered. A data set that has this organization is a sequential data set. Blocking, de-blocking, and I/O synchronization are done by the application program; this is basic access. You can use BSAM with the following data types:
► Sequential data sets
► Extended-format data sets
► z/OS UNIX files

See also 4.28, “Sequential access methods” on page 154.
Queued sequential access method (QSAM)
QSAM arranges logical records sequentially in the order that they are entered to form sequential data sets, which are the same as those data sets that BSAM creates. QSAM anticipates the need for records based on their order and, to improve performance, reads these records into main storage before they are requested; this is called queued access. QSAM blocks and de-blocks logical records into physical blocks, and it guarantees synchronization between the task and the I/O operation. You can use QSAM with the same data types as BSAM. See also 4.28, “Sequential access methods” on page 154.
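Because QSAM handles blocking, de-blocking, and buffering itself, an application typically influences it only through DCB attributes. A hedged JCL sketch (data set name and attribute values are illustrative) that asks QSAM to keep five buffers filled ahead of the reader:

```
//* BUFNO=5 lets QSAM read ahead into five buffers,
//* overlapping I/O with processing
//INPUT    DD  DSN=MY.SEQ.DATA,DISP=SHR,
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920,BUFNO=5)
```

The program simply issues GET requests for logical records; QSAM de-blocks the 27920-byte physical blocks into 80-byte records transparently.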
Object Access Method (OAM)
OAM processes very large named byte streams (objects) that have no record boundary or other internal orientation, such as image data. These objects can be recorded in a DB2 database or on an optical storage volume. For information about OAM, see z/OS DFSMS Object Access Method Application Programmer’s Reference, SC35-0425, and z/OS DFSMS Object Access Method Planning, Installation, and Storage Administration Guide for Object Support, SC35-0426.
Virtual Storage Access Method (VSAM)
VSAM is an access method that has several ways of organizing data, depending on the application’s needs.

VSAM arranges and retrieves logical records by an index key, by relative record number, or by relative byte address (RBA). A logical record has an RBA, which is the relative byte address of its first byte in relation to the beginning of the data set. VSAM is used for direct, sequential, or skip-sequential processing of fixed-length and variable-length records on DASD. VSAM data sets (also named clusters) are always cataloged. There are five types of cluster organization:
► Entry-sequenced data set (ESDS)
This contains records in the order in which they were entered. Records are added to the end of the data set and can be accessed sequentially or randomly through the RBA.
► Key-sequenced data set (KSDS)
This contains records in ascending collating sequence of the contents of a logical record field called the key. Records can be accessed by the contents of that key, or by RBA.
► Linear data set (LDS)
This contains data that has no record boundaries. Linear data sets contain none of the control information that other VSAM data sets do. Data-in-virtual (DIV) is an optional intelligent buffering technique that includes a set of assembler macros that provide buffering access to VSAM linear data sets. See 4.41, “VSAM: Data-in-virtual (DIV)” on page 174.
► Relative record data set (RRDS)
This contains logical records in relative record number order; the records can be accessed sequentially or randomly based on this number. There are two types of relative record data sets:
– Fixed-length RRDS: logical records must be of fixed length.
– Variable-length RRDS: logical records can vary in length.

A z/OS UNIX file (HFS or zFS) can be accessed as though it were a VSAM entry-sequenced data set (ESDS). Although UNIX files are not actually stored as entry-sequenced data sets, the system attempts to simulate the characteristics of such a data set. To identify or access a UNIX file, specify the path that leads to it.
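As a preview of the IDCAMS commands covered in the sections that follow, a KSDS with a 10-byte key could be defined with a sketch like this (cluster name, volume, and space values are illustrative):

```
 DEFINE CLUSTER (NAME(MY.SAMPLE.KSDS) -
        INDEXED -
        KEYS(10 0) -
        RECORDSIZE(80 200) -
        CYLINDERS(5 1) -
        VOLUMES(VOL001)) -
        DATA (NAME(MY.SAMPLE.KSDS.DATA)) -
        INDEX (NAME(MY.SAMPLE.KSDS.INDEX))
```

INDEXED requests KSDS organization, KEYS(10 0) describes a 10-byte key at offset 0 in each record, and RECORDSIZE gives the average and maximum record lengths. An ESDS would use NONINDEXED instead, and an RRDS would use NUMBERED.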
4.14 Access method services (IDCAMS)

Access method services is a utility to:
– Define VSAM data sets
– Maintain catalogs
Command interface to manage VSAM and catalogs:
– Functional commands
– Modal commands
Invoke IDCAMS utility program:
– Using JCL jobs
– From a TSO session
– From a user program
Figure 4-20 Access method services
Access method services
You can use the access method services utility (also known as IDCAMS) to establish and maintain catalogs and data sets (VSAM and non-VSAM). It is used mainly to create and manipulate VSAM data sets. IDCAMS has other functions (such as catalog updates), but it is most closely associated with the use of VSAM.
Access method services commands
There are two types of access method services commands:
Functional commands  Used to request the actual work (for example, defining a data set or listing a catalog)
Modal commands       Allow the conditional execution of functional commands (to make it look like a language)

All access method services commands have the following general structure:
COMMAND parameters ... [terminator]
The command defines the type of service requested; the parameters further describe the service requested; the terminator indicates the end of the command statement.
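For instance, a LISTCAT functional command follows this structure (the data set name is illustrative):

```
 LISTCAT ENTRIES(MY.SAMPLE.KSDS) ALL
```

Here LISTCAT is the command, and ENTRIES(...) and ALL are its parameters, requesting that all catalog information for the named entry be listed.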
Time Sharing Option (TSO) users can use functional commands only. For more information about modal commands, refer to z/OS DFSMS Access Method Services for Catalogs, SC26-7394.
The automatic class selection (ACS) routines (established by your storage administrator) and the associated SMS classes eliminate the need to use many access method services command parameters. The SMS environment is discussed in more detail in Chapter 5, “System-managed storage” on page 239.
Invoking the IDCAMS utility program
When you want to use an access method services function, enter a command and specify its parameters. Your request is decoded one command at a time; the appropriate functional routines perform all services required by that command.

You can call the access method services program in the following ways:
► As a job or jobstep
► From a TSO session
► From within your own program

TSO users can run access method services functional commands from a TSO session as though they were TSO commands.

For more information, refer to “Invoking Access Method Services from Your Program” in z/OS DFSMS Access Method Services for Catalogs, SC26-7394.

As a job or jobstep
You can use JCL statements to call access method services. PGM=IDCAMS identifies the access method services program, as shown in Figure 4-21.
//YOURJOB  JOB  YOUR INSTALLATION'S JOB ACCOUNTING DATA
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD   SYSOUT=A
//SYSIN    DD   *

    access method services commands and their parameters

/*
Figure 4-21 JCL statements to call IDCAMS
From a TSO session
You can use TSO with VSAM and access method services to:
► Run access method services commands
► Run a program to call access method services

Each time you enter an access method services command as a TSO command, TSO builds the appropriate interface information and calls access method services. You can enter one command at a time. Access method services processes the command completely before TSO lets you continue processing. Except for ALLOCATE, all the access method services functional commands are supported in a TSO environment.

To use IDCAMS and certain of its parameters from TSO/E, you must update the IKJTSOxx member of SYS1.PARMLIB. Add IDCAMS to the list of authorized programs (AUTHPGM). For more information, see z/OS DFSMS Access Method Services for Catalogs, SC26-7394.

From within your own program
You can also call the IDCAMS program from within another program and pass the command and its parameters to the IDCAMS program.
4.15 IDCAMS functional commands

ALTER: alters attributes of existing data sets
DEFINE CLUSTER: creates/catalogs VSAM clusters
DEFINE GENERATIONDATAGROUP: defines GDG data sets
DEFINE PAGESPACE: creates page data sets
EXPORT: exports VSAM DS, AIX, or ICF catalog
IMPORT: imports VSAM DS, AIX, or ICF catalog
LISTCAT: lists catalog entries
REPRO: copies VSAM, non-VSAM, and catalogs
VERIFY: corrects end-of-file information for VSAM clusters in the catalog
Figure 4-22 Functional commands
IDCAMS functional commands
Table 4-3 lists and describes the functional commands.

Table 4-3 Functional commands
Command                     Functions
ALLOCATE                    Allocates VSAM and non-VSAM data sets.
ALTER                       Alters attributes of data sets, catalogs, tape library entries, and tape volume entries that have already been defined.
BLDINDEX                    Builds alternate indexes (AIX®) for existing VSAM base clusters.
CREATE                      Creates tape library entries and tape volume entries.
DCOLLECT                    Collects data set, volume usage, and migration utility information.
DEFINE ALIAS                Defines an alternate name for a user catalog or a non-VSAM data set.
DEFINE ALTERNATEINDEX       Defines an alternate index for a KSDS or ESDS VSAM data set.
DEFINE CLUSTER              Creates KSDS, ESDS, RRDS, VRRDS, and linear VSAM data sets.
DEFINE GENERATIONDATAGROUP  Defines a catalog entry for a generation data group (GDG).
DEFINE NONVSAM              Defines a catalog entry for a non-VSAM data set.
DEFINE PAGESPACE            Defines an entry for a page space data set.
DEFINE PATH                 Defines a path directly over a base cluster or over an alternate index and its related base cluster.
DEFINE USERCATALOG          Defines a user catalog.
DELETE                      Deletes catalogs, VSAM clusters, and non-VSAM data sets.
DIAGNOSE                    Scans an integrated catalog facility basic catalog structure (BCS) or a VSAM volume data set (VVDS) to validate the data structures and detect structure errors.
EXAMINE                     Analyzes and reports the structural consistency of either an index or data component of a KSDS VSAM data set cluster.
EXPORT                      Disconnects user catalogs, and exports VSAM clusters and ICF catalog information about the cluster.
EXPORT DISCONNECT           Disconnects a user catalog.
IMPORT                      Connects user catalogs, and imports a VSAM cluster and its ICF catalog information.
IMPORT CONNECT              Connects a user catalog or a volume catalog.
LISTCAT                     Lists catalog entries.
PRINT                       Prints VSAM data sets, non-VSAM data sets, and catalogs.
REPRO                       Performs the following functions:
                            ► Copies VSAM and non-VSAM data sets, user catalogs, master catalogs, and volume catalogs.
                            ► Splits ICF catalog entries between two catalogs.
                            ► Merges ICF catalog entries into another ICF user or master catalog.
                            ► Merges tape library catalog entries from one volume catalog into another volume catalog.
SHCDS                       Lists SMSVSAM recovery associated with subsystems spheres and controls that allow recovery of a VSAM RLS environment.
VERIFY                      Causes a catalog to correctly reflect the end of a data set after an error occurred while closing a VSAM data set. The error might have caused the catalog to be incorrect.
For a complete description of all AMS commands, see z/OS DFSMS Access Method Services for Catalogs, SC26-7394.
4.16 AMS modal commands

IF-THEN-ELSE command sequence: controls command execution on the basis of condition codes returned by previous commands
NULL command: specifies that no action be taken
DO-END command sequence: specifies more than one functional access method services command and its parameters
SET command: resets condition codes
CANCEL command: terminates processing of the current sequence of commands
PARM command: specifies diagnostic aids and printed output options
Figure 4-23 AMS modal commands
AMS modal commands
With access method services (AMS), you can set up jobs to execute a sequence of modal commands with a single invocation of IDCAMS. Modal command execution depends on the success or failure of prior commands.

Using modal commands
Figure 4-23 lists and briefly describes the AMS modal commands, which are used for the conditional execution of functional commands:
► The IF-THEN-ELSE command sequence controls command execution on the basis of condition codes.
► The NULL command causes the program to take no action.
► The DO-END command sequence specifies more than one functional access method services command and its parameters.
► The SET command resets condition codes.
► The CANCEL command ends processing of the current job step.
► The PARM command chooses diagnostic aids and options for printed output.
Note: These commands cannot be used when access method services is run in TSO. See z/OS DFSMS Access Method Services for Catalogs, SC26-7394, for a complete description of the AMS modal commands.
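A hedged sketch of what such a modal sequence might look like in an IDCAMS SYSIN stream (the cluster name and attributes are illustrative): the DEFINE runs only if the preceding DELETE ended with condition code 0.

```
 DELETE MY.SAMPLE.KSDS CLUSTER
 IF LASTCC = 0 -
   THEN -
     DEFINE CLUSTER (NAME(MY.SAMPLE.KSDS) INDEXED -
            KEYS(10 0) RECORDSIZE(80 200) -
            CYLINDERS(5 1) VOLUMES(VOL001))
   ELSE -
     SET MAXCC = 8
```

LASTCC holds the condition code of the last functional command; SET MAXCC forces the step's final return code, so downstream JCL COND tests can react to the failure.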
Commonly used single job step command sequences
A sequence of commands commonly used in a single job step includes DELETE-DEFINE-REPRO or DELETE-DEFINE-BLDINDEX, as follows:
► You can specify either a data definition (DD) name or a data set name with these commands.
► When you refer to a DD name, allocation occurs at job step initiation. The allocation can result in a job failure if a command such as REPRO follows a DELETE-DEFINE sequence that changes the location (volser) of the data set. (Such failures can occur with either SMS-managed data sets or non-SMS-managed data sets.)

Avoiding potential command sequence failures
To avoid potential failures with a modal command sequence in your IDCAMS job, perform one of the following tasks:
► Specify the data set name instead of the DD name.
► Use a separate job step to perform any sequence of commands (for example, REPRO, IMPORT, BLDINDEX, PRINT, or EXAMINE) that follow a DEFINE command.
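For example, a DELETE-DEFINE-REPRO sequence that refers to the data sets by name rather than by DD name (all names illustrative) sidesteps the early-allocation pitfall, because INDATASET and OUTDATASET are allocated dynamically when REPRO executes:

```
 DELETE MY.SAMPLE.ESDS CLUSTER
 DEFINE CLUSTER (NAME(MY.SAMPLE.ESDS) NONINDEXED -
        RECORDSIZE(80 200) CYLINDERS(5 1) VOLUMES(VOL001))
 REPRO INDATASET(MY.INPUT.SEQ) OUTDATASET(MY.SAMPLE.ESDS)
```

Because no DD statements name these data sets, nothing is allocated at step initiation, and the REPRO sees the data set where the DEFINE just placed it.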
4.17 DFSMS Data Collection Facility (DCOLLECT)

DCOLLECT functions:
– Capacity planning
– Active data sets
– VSAM clusters
– Migrated data sets
– Backed-up data sets
– SMS configuration information
Figure 4-24 Data Collection Facility
DFSMS Data Collection Facility (DCOLLECT)
The DFSMS Data Collection Facility (DCOLLECT) is a function of access method services. The IDCAMS DCOLLECT command collects DASD performance and space occupancy data in a sequential file that you can use as input to other programs or applications.

An installation can use this command to collect information about:
► Active data sets: DCOLLECT provides data about space use and data set attributes and indicators on the selected volumes and storage groups.
► VSAM data set information: DCOLLECT provides specific information relating to VSAM data sets residing on the selected volumes and storage groups.
► Volumes: DCOLLECT provides statistics and information about volumes that are selected for collection.
► Inactive data: DCOLLECT produces output for DFSMShsm-managed data (inactive data management), which includes both migrated and backed-up data sets.
– Migrated data sets: DCOLLECT provides information about space utilization and data set attributes for data sets migrated by DFSMShsm.
– Backed-up data sets: DCOLLECT provides information about space utilization and data set attributes for every version of a data set backed up by DFSMShsm.
► Capacity planning: Capacity planning for DFSMShsm-managed data (inactive data management) includes the collection of both DASD and tape capacity planning information.
– DASD capacity planning: DCOLLECT provides information and statistics for volumes managed by DFSMShsm (ML0 and ML1).
– Tape capacity planning: DCOLLECT provides statistics for tapes managed by DFSMShsm.
► SMS configuration information: DCOLLECT provides information about the SMS configurations. The information can come from an active control data set (ACDS), from a source control data set (SCDS), or from the active configuration.

Data is gathered from the VTOC, VVDS, and DFSMShsm control data sets for both managed and non-managed storage. ISMF provides the option to build the JCL necessary to execute DCOLLECT.
DCOLLECT example
With the sample JCL shown in Figure 4-25 you can gather information about all volumes belonging to storage group STGGP001.

//COLLECT2 JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//OUTDS    DD DSN=USER.DCOLLECT.OUTPUT,
//            STORCLAS=LARGE,
//            DSORG=PS,
//            DCB=(RECFM=VB,LRECL=644,BLKSIZE=0),
//            SPACE=(1,(100,100)),AVGREC=K,
//            DISP=(NEW,CATLG,KEEP)
//SYSIN    DD *
    DCOLLECT -
         OFILE(OUTDS) -
         STORAGEGROUP(STGGP001) -
         NODATAINFO
/*
Figure 4-25 DCOLLECT job to collect information about all volumes in one storage group

Many clients run DCOLLECT on a time-driven basis triggered by automation. In such a scenario, avoid the peak hours, because DCOLLECT causes a heavy access rate against the VTOCs and catalogs.
4.18 Generation data group (GDG)

User catalog
GDG base: ABC.GDG (Limit = 5)
GDS:
  ABC.GDG.G0001V00 (-4)  oldest
  ABC.GDG.G0002V00 (-3)
  ABC.GDG.G0003V00 (-2)
  ABC.GDG.G0004V00 (-1)
  ABC.GDG.G0005V00 ( 0)  newest
Volume VOLABC: ABC.GDG.G0001V00, ABC.GDG.G0003V00, ABC.GDG.G0004V00
Volume VOLDEF: ABC.GDG.G0002V00, ABC.GDG.G0005V00
Figure 4-26 Generation data group (GDG)
Generation data group
A generation data group (GDG) is a catalog function that makes it easier to process data sets that contain the same type of data but at different update levels; for example, the same data set is produced every day, each day with different data. You can catalog successive updates or generations of these related data sets as a GDG. Each data set within a GDG is called a generation data set (GDS) or generation.

Within a GDG, the generations can have like or unlike DCB attributes and data set organizations. If the attributes and organizations of all generations in a group are identical, the generations can be retrieved together as a single data set.

Generation data sets can be sequential, PDSs, or direct (BDAM). Generation data sets cannot be PDSEs, UNIX files, or VSAM data sets. The same GDG may contain SMS and non-SMS data sets.

There are usability benefits to grouping related data sets using a function such as GDG. For example, the catalog management routines can refer to the information in a special index called a generation index in the catalog, and as a result:
► All data sets in the group can be referred to by a common name.
► z/OS is able to keep the generations in chronological order.
► Outdated or obsolete generations can be automatically deleted from the catalog by z/OS.

Another benefit is the ability to reference a new generation using the same JCL.
A GDS has sequentially ordered absolute and relative names that represent its age. The catalog management routines use the absolute generation name in the catalog; older data sets have smaller absolute numbers. The relative name is a signed integer used to refer to the most current (0) generation, the next most current (-1) generation, and so forth. See also 4.20, “Absolute generation and version numbers” on page 141 and 4.21, “Relative generation numbers” on page 142 for more information about this topic.

A generation data group (GDG) base is allocated in a catalog before the GDSs are cataloged. Each GDG is represented by a GDG base entry. Use the access method services DEFINE command to allocate the GDG base (see also 4.19, “Defining a generation data group” on page 139).

The GDG base is a construct that exists only in a user catalog; it does not exist as a data set on any volume. The GDG base is used to maintain the generation data sets (GDS), which are the real data sets.

The number of GDSs in a GDG depends on the limit you specify when you create a new GDG in the catalog.
GDG example
In our example in Figure 4-26 on page 137, the limit is 5. That means the GDG can hold a maximum of five GDSs. Our data set name is ABC.GDG. You can access the GDSs by their relative names; for example, ABC.GDG(0) corresponds to the absolute name ABC.GDG.G0005V00, ABC.GDG(-1) corresponds to generation ABC.GDG.G0004V00, and so on. The relative number can also be used to catalog a new generation (+1), which will be generation number 6 with an absolute name of ABC.GDG.G0006V00. Because the limit is 5, the oldest generation (G0001V00) is rolled off if you define a new one.
Rolled in and rolled off
When a GDG contains its maximum number of active generation data sets in the catalog, as defined in the LIMIT parameter, and a new GDS is rolled in at the end-of-job step, the oldest generation data set is rolled off from the catalog. If a GDG is defined using DEFINE GENERATIONDATAGROUP EMPTY and is at its limit, then when a new GDS is rolled in, all the currently active GDSs are rolled off.

The parameters you specify on the DEFINE GENERATIONDATAGROUP IDCAMS command determine what happens to rolled-off GDSs. For example, if you specify the SCRATCH parameter, the GDS is scratched from the VTOC when it is rolled off. If you specify the NOSCRATCH parameter, the rolled-off generation data set is re-cataloged as rolled off and is disassociated from its generation data group.

GDSs can be in a deferred roll-in state if the job never reached the end-of-step or if they were allocated as DISP=(NEW,KEEP) and the data set is not system-managed. However, GDSs in a deferred roll-in state can be referred to by their absolute generation numbers. You can use the IDCAMS command ALTER ROLLIN to roll in these GDSs.
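For instance, rolling such a deferred generation into the GDG might look like this one-line IDCAMS sketch (the absolute generation name is illustrative):

```
 ALTER ABC.GDG.G0006V00 ROLLIN
```

After the command completes, the generation becomes an active GDS of the group and can again be referenced by a relative generation number.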
For further information about generation data groups, see z/OS DFSMS: Using Data Sets, SC26-7410.
4.19 Defining a generation data group

//DEFGDG1  JOB ...
//STEP1    EXEC PGM=IDCAMS
//GDGMOD   DD DSNAME=ABC.GDG,DISP=(,KEEP),
//            SPACE=(TRK,(0)),UNIT=DISK,VOL=SER=VSER03,
//            DCB=(RECFM=FB,BLKSIZE=2000,LRECL=100)
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
    DEFINE GENERATIONDATAGROUP -
         (NAME(ABC.GDG) -
         EMPTY -
         NOSCRATCH -
         LIMIT(255))
/*
Figure 4-27 Defining a GDG
Defining a generation data group
The DEFINE GENERATIONDATAGROUP command creates a catalog entry for a generation data group (GDG).

Figure 4-28 shows the JCL to define a GDG.

//DEFGDG1  JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
    DEFINE GENERATIONDATAGROUP -
         (NAME(ABC.GDG) -
         NOEMPTY -
         NOSCRATCH -
         LIMIT(5) )
/*
Figure 4-28 JCL to define a GDG catalog entry
The DEFINE GENERATIONDATAGROUP command defines the GDG base catalog entry ABC.GDG. (Figure 4-27 also depicts the VTOC of volume VSER03, which contains the model DSCB ABC.GDG and available space.)
The parameters are:<br />
NAME This specifies the name of the GDG, ABC.GDG. Each GDS in the group will have the<br />
name ABC.GDG.GxxxxVyy, where xxxx is the generation number and yy is the<br />
version number. See “Absolute generation and version numbers” on page 141.<br />
NOEMPTY This specifies that only the oldest generation data set is to be uncataloged<br />
when the maximum is reached (recommended).<br />
EMPTY This specifies that all GDSs in the group are to be uncataloged when the group<br />
reaches the maximum number of data sets (as specified by the LIMIT<br />
parameter) and one more GDS is added to the group.<br />
NOSCRATCH This specifies that when a data set is uncataloged, its DSCB is not to be<br />
removed from its volume's VTOC. Therefore, even if a data set is uncataloged,<br />
its records can be accessed when it is allocated to a job step with the<br />
appropriate JCL DD statement.<br />
LIMIT This specifies that the maximum number of GDG data sets in the group is 5.<br />
This parameter is required.<br />
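The EMPTY and NOEMPTY roll-off rules can be sketched as a small simulation (illustrative Python, not an actual catalog interface; the function name and list representation are invented for this sketch):<br />

```python
# Illustrative sketch of GDG roll-off behavior; not an actual catalog API.
def add_generation(catalog, new_gds, limit, empty):
    """Add a GDS to the GDG catalog list, honoring the LIMIT parameter.

    empty=True  -> EMPTY:   uncatalog ALL old generations when limit is exceeded.
    empty=False -> NOEMPTY: uncatalog only the oldest generation.
    """
    catalog.append(new_gds)
    if len(catalog) > limit:
        if empty:
            del catalog[:-1]      # keep only the newest generation
        else:
            del catalog[0]        # drop only the oldest
    return catalog

gdg = ["G0001V00", "G0002V00", "G0003V00"]
print(add_generation(list(gdg), "G0004V00", limit=3, empty=False))
# NOEMPTY: ['G0002V00', 'G0003V00', 'G0004V00']
print(add_generation(list(gdg), "G0004V00", limit=3, empty=True))
# EMPTY:   ['G0004V00']
```

With NOSCRATCH, a generation dropped from this list would still have its DSCB in the VTOC and could be re-referenced with the appropriate JCL DD statement.<br />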
Figure 4-29 shows a generation data set defined within the GDG by using JCL statements.<br />
//DEFGDG2 JOB ...<br />
//STEP1 EXEC PGM=IEFBR14<br />
//GDGDD1 DD DSNAME=ABC.GDG(+1),DISP=(NEW,CATLG),<br />
// SPACE=(TRK,(10,5)),VOL=SER=VSER03,<br />
// UNIT=DISK<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
/*<br />
Figure 4-29 JCL to define a generation data set<br />
The job DEFGDG2 allocates space and catalogs a GDG data set in the newly-defined GDG.<br />
The job control statement GDGDD1 DD specifies the GDG data set in the GDG.<br />
Creating a model DSCB<br />
As Figure 4-27 on page 139 shows, you can also create a model DSCB with the same name<br />
as the GDG base. You can provide initial DCB attributes when you create your model;<br />
however, you need not provide any attributes now. Because only the attributes in the data set<br />
label are used, allocate the model data set with SPACE=(TRK,0) to conserve direct access<br />
space. You can supply initial or overriding attributes creating and cataloging a generation.<br />
Only one model DSCB is necessary for any number of generations. If you plan to use only<br />
one model, do not supply DCB attributes when you create the model. When you subsequently<br />
create and catalog a generation, include necessary DCB attributes in the DD statement<br />
referring to the generation. In this manner, any number <strong>of</strong> GDGs can refer to the same model.<br />
The catalog and model data set label are always located on a direct access volume, even for<br />
a magnetic tape GDG.<br />
Restriction: You cannot use a model DSCB for system-managed generation data sets.<br />
140 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
4.20 Absolute generation and version numbers<br />
A.B.C.G0001V00 Generation Data Set 1, Version 0, in generation data group A.B.C<br />
A.B.C.G0009V01 Generation Data Set 9, Version 1, in generation data group A.B.C<br />
Figure 4-30 Absolute generation and version numbers<br />
Absolute generation and version numbers<br />
An absolute generation and version number is used to identify a specific generation of a<br />
GDG. The same GDS may have different versions, which are maintained by your installation.<br />
The version number allows you to perform normal data set operations without disrupting the<br />
management of the generation data group. For example, if you want to update the second<br />
generation in a three-generation group, then replace generation 2, version 0, with<br />
generation 2, version 1. Only one version is kept for each generation.<br />
The generation and version number are in the form GxxxxVyy, where xxxx is an unsigned<br />
four-digit decimal generation number (0001 through 9999) and yy is an unsigned two-digit<br />
decimal version number (00 through 99). For example:<br />
► A.B.C.G0001V00 is generation data set 1, version 0, in generation data group A.B.C.<br />
► A.B.C.G0009V01 is generation data set 9, version 1, in generation data group A.B.C.<br />
The number of generations and versions is limited by the number of digits in the absolute<br />
generation name; that is, there can be 9,999 generations. Each generation can have 100<br />
versions. The system automatically maintains the generation number.<br />
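The GxxxxVyy naming rule can be expressed as a short formatting helper (illustrative Python; `gds_name` is an invented name, not a system service):<br />

```python
def gds_name(base, generation, version=0):
    """Build the absolute name base.GxxxxVyy for a generation data set."""
    if not (1 <= generation <= 9999):
        raise ValueError("generation must be 0001 through 9999")
    if not (0 <= version <= 99):
        raise ValueError("version must be 00 through 99")
    return f"{base}.G{generation:04d}V{version:02d}"

print(gds_name("A.B.C", 1))      # A.B.C.G0001V00
print(gds_name("A.B.C", 9, 1))   # A.B.C.G0009V01
```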
You can catalog a generation using either absolute or relative numbers. When a generation is<br />
cataloged, a generation and version number is placed as a low-level entry in the generation<br />
data group. To catalog a version number other than V00, you must use an absolute<br />
generation and version number.<br />
4.21 Relative generation numbers<br />
A.B.C.G0005V00 = A.B.C(-1)<br />
A.B.C.G0006V00 = A.B.C(0)<br />
DEFINE NEW GDS<br />
A.B.C.G0007V00 = A.B.C(+1)<br />
Figure 4-31 Relative generation numbers<br />
Relative generation numbers<br />
As an alternative to using absolute generation and version numbers when cataloging or<br />
referring to a generation, you can use a relative generation number. To specify a relative<br />
number, use the generation data group name followed by a negative integer, a positive<br />
integer, or a zero (0), enclosed in parentheses; for example, A.B.C(-1), A.B.C(+1), or<br />
A.B.C(0).<br />
The value of the specified integer tells the operating system what generation number to<br />
assign to a new generation data set, or it tells the system the location of an entry representing<br />
a previously cataloged old generation data set.<br />
When you use a relative generation number to catalog a generation, the operating system<br />
assigns an absolute generation number and a version number of V00 to represent that<br />
generation. The absolute generation number assigned depends on the number last assigned<br />
and the value of the relative generation number that you are now specifying. For example, if<br />
in a previous job, generation A.B.C.G0006V00 was the last one cataloged, and you now<br />
specify A.B.C(+1), the generation being cataloged is assigned the number G0007V00.<br />
Though any positive relative generation number can be used, a number greater than 1 can<br />
cause absolute generation numbers to be skipped for a new generation data set. For<br />
example, if you have a single step job and the generation being cataloged is a +2, one<br />
generation number is skipped. However, in a multiple step job, one step might have a +1 and<br />
a second step a +2, in which case no numbers are skipped. The mapping between relative<br />
and absolute numbers is kept until the end of the job.<br />
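The relative-to-absolute mapping described above can be sketched as follows (illustrative Python; the real mapping is maintained by the system until end of job, and this helper is an invented approximation):<br />

```python
# Sketch of how relative generation numbers map to absolute numbers
# within one job; the real mapping is kept by the system until end of job.
def resolve(last_cataloged, relative, job_map):
    """Resolve a relative generation number to an absolute GxxxxVyy name.

    last_cataloged: highest absolute generation number before this job.
    relative: 0 or negative refers to existing generations; positive
              creates a new one, remembered in job_map for reuse.
    """
    if relative <= 0:
        return f"G{last_cataloged + relative:04d}V00"
    if relative not in job_map:
        job_map[relative] = last_cataloged + relative
    return f"G{job_map[relative]:04d}V00"

job = {}
print(resolve(6, 0, job))    # G0006V00  (current generation)
print(resolve(6, -1, job))   # G0005V00
print(resolve(6, +1, job))   # G0007V00  (new generation)
print(resolve(6, +1, job))   # G0007V00  (same number within the job)
```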
4.22 Partitioned organized (PO) data sets<br />
DIRECTORY<br />
Figure 4-32 Partitioned organized (PO) data set<br />
Partitioned organized (PO) data sets<br />
Partitioned data sets are similar in organization to a library and are often referred to this way.<br />
Normally, a library contains a great number of “books,” and sorted directory entries are used<br />
to locate them.<br />
In a partitioned organized data set, the “books” are called members, and to locate them, they<br />
are pointed to by entries in a directory, as shown in Figure 4-32.<br />
The members are individual sequential data sets and can be read or written sequentially, after<br />
they have been located through the directory. Then, the records of a given member are written<br />
or retrieved sequentially.<br />
Partitioned data sets can only exist on DASD. Each member has a unique name, one to eight<br />
characters in length, and is stored in a directory that is part of the data set.<br />
The main benefit of using a PO data set is that, without searching the entire data set, you can<br />
retrieve any individual member after the data set is opened. For example, in a program library<br />
(always a partitioned data set) each member is a separate program or subroutine. The<br />
individual members can be added or deleted as required.<br />
There are two types of PO data sets:<br />
► Partitioned data set (PDS)<br />
► Partitioned data set extended (PDSE)<br />
4.23 PDS data set organization<br />
Advantages of the PDS organization:<br />
► Easier management: processed by member or as a whole; members can be concatenated<br />
and processed as sequential files<br />
► Space savings: small members fit in one DASD track<br />
► Good usability: easily accessed via JCL, ISPF, and TSO<br />
Required improvements for the PDS organization:<br />
► Release space when a member is deleted, without the need to compress<br />
► Expandable directory size<br />
► Improved directory and member integrity<br />
► Better performance for directory search<br />
► Improved sharing facilities<br />
Figure 4-33 PDS data organization<br />
Partitioned data set (PDS)<br />
A PDS is stored in only one DASD device. It is divided into sequentially organized members,<br />
each described by one or more directory entries. It is an MVS data organization that offers<br />
useful features such as:<br />
► Easier management: grouping related data sets under a single name makes MVS data<br />
easier to manage. Files stored as members of a PDS can be processed individually, or all<br />
the members can be processed as a unit.<br />
► Space savings: small members fit in just one DASD track.<br />
► Good usability: members of a PDS can be used as sequential data sets, and they can be<br />
concatenated with sequential data sets. They are also easy to create with JCL or ISPF,<br />
and easy to manipulate with ISPF utilities or TSO commands.<br />
However, there are requirements for improvement regarding PDS organization:<br />
► There is no mechanism to reuse the area that contained a deleted or rewritten member.<br />
This unused space must be reclaimed by use of the IEBCOPY utility function called<br />
compression. See “IEBCOPY utility” on page 112 for information about this utility.<br />
► Directory size is not expandable, causing an overflow exposure. The area for members<br />
can grow using secondary allocations. However, this is not true for the directory.<br />
► A PDS has no mechanism to prevent a directory from being overwritten if a program<br />
mistakenly opens it for sequential output. If this happens, the directory is destroyed and all<br />
the members are lost.<br />
Also, PDS DCB attributes can easily be changed by mistake. If you add a member whose<br />
DCB characteristics differ from those of the other members, you change the DCB<br />
attributes of the entire PDS, and all the old members become unusable. In this case, a<br />
workaround exists in the form of code to correct the problem.<br />
► Better directory search time: entries in the directory are physically ordered by the collating<br />
sequence of the names of the members they point to. Any insertion may cause a full<br />
rearrangement of the entries.<br />
There is also no index to the directory entries. The search is sequential, using CKD<br />
format. If the directory is big, the I/O operation takes more time. To minimize this, strong<br />
directory buffering (library lookaside, LLA) is used for load modules in z/OS.<br />
► Improved sharing facilities: to update or create a member of a PDS, you need exclusive<br />
access to the entire data set.<br />
All these improvements must preserve almost total compatibility, at the program level and the<br />
user level, with the old PDS.<br />
Allocating space for a PDS<br />
To allocate a PDS, specify PDS in the DSNTYPE parameter and the number of directory<br />
blocks in the SPACE parameter, in either the JCL or the SMS data class. You must specify the<br />
number of directory blocks, or the allocation fails.<br />
If your data set is large, or if you expect to update it extensively, it might be best to allocate a<br />
large space. A PDS cannot occupy more than 65,535 tracks and cannot extend beyond one<br />
volume. If your data set is small or is seldom changed, let SMS calculate the space<br />
requirements to avoid wasted space or wasted time used for recreating the data set.<br />
Space for the directory is expressed in 256-byte blocks. Each block contains from 3 to 21<br />
entries, depending on the length <strong>of</strong> the user data field. If you expect 200 directory entries,<br />
request at least 10 blocks. Any unused space on the last track of the directory is wasted<br />
unless there is enough space left to contain a block of the first member.<br />
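The sizing rule of thumb (at most 21 entries per 256-byte block, so 200 entries need at least 10 blocks) amounts to a simple ceiling division (illustrative Python; the helper name is invented):<br />

```python
import math

def min_directory_blocks(entries, entries_per_block=21):
    """Minimum number of 256-byte directory blocks, assuming the best
    case of 21 entries per block (no user data in the entries)."""
    return math.ceil(entries / entries_per_block)

print(min_directory_blocks(200))  # 10 blocks for 200 expected entries
```

If the entries carry user data, fewer entries fit per block (as few as 3), so plan with a smaller `entries_per_block` value in that case.<br />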
The following DD statement defines a new partitioned data set:<br />
//DD2 DD DSNAME=OTTO.PDS12,DISP=(,CATLG),<br />
// SPACE=(CYL,(5,2,10),,CONTIG)<br />
The system allocates five cylinders to the data set, of which ten 256-byte blocks are for the<br />
directory. Because the CONTIG subparameter is coded, the system allocates the five<br />
cylinders contiguously on the volume. The secondary allocation of two cylinders is used<br />
when the data set needs to expand beyond the five-cylinder primary allocation.<br />
4.24 Partitioned data set extended (PDSE)<br />
Figure 4-34 PDSE structure: creation (DSNTYPE=LIBRARY in JCL or via the SMS data class),<br />
conversion (DFSMSdss CONVERT between PDS and PDSE), and use (BSAM, QSAM, BPAM)<br />
Partitioned data set extended (PDSE)<br />
Partitioned data set extended (PDSE) is a type of data set organization that improves on the<br />
partitioned data set (PDS) organization. It has an improved indexed directory structure and a<br />
different member format. You can use PDSEs for source (program and text) libraries, macros,<br />
and program object libraries (program object is the name of executable code when stored in a PDSE).<br />
Logically, a PDSE directory is similar to a PDS directory. It consists of a series of directory<br />
records in a block. Physically, it is a set of “pages” at the front of the data set, plus additional<br />
pages interleaved with member pages. Five directory pages are initially created at the same<br />
time as the data set.<br />
New directory pages are added, interleaved with the member pages, as new directory entries<br />
are required. A PDSE always occupies at least five pages of storage.<br />
The directory is like a KSDS index structure (KSDS is covered in 4.34, “VSAM key sequenced<br />
cluster (KSDS)” on page 166), making a search much faster. It cannot be overwritten by being<br />
opened for sequential output.<br />
If you try to add a member with DCB characteristics that differ from those of the rest of the<br />
members, you get an error.<br />
There is no longer a need for a PDSE data set to be SMS-managed.<br />
Advantages <strong>of</strong> PDSEs<br />
PDSE advantages when compared with PDS are:<br />
► The size of a PDSE directory is flexible and can expand to accommodate the number of<br />
members stored in it (the size of a PDS directory is fixed at allocation time).<br />
► PDSE members are indexed in the directory by member name. This eliminates the need<br />
for time-consuming sequential directory searches holding channels for a long time.<br />
► The logical requirements of the data stored in a PDSE are separated from the physical<br />
(storage) requirements of that data, which simplifies data set allocation.<br />
► PDSE automatically reuses space, without the need for an IEBCOPY compress. A list of<br />
available space is kept in the directory. When a PDSE member is updated or replaced, it is<br />
written in the first available space. This is either at the end of the data set, or in a space in<br />
the middle of the data set marked for reuse. For example, by moving or deleting a PDSE<br />
member, you free space that is immediately available for the allocation of a new member.<br />
This makes PDSEs less susceptible to space-related abends than are PDSs. This space<br />
does not have to be contiguous. The objective of the space reuse algorithm is to avoid<br />
extending the data set unnecessarily.<br />
► The number of PDSE members stored in the library can be large or small without concern<br />
for performance or space considerations.<br />
► You can open a PDSE member for output or update without locking the entire data set.<br />
The sharing control is at the member level, not the data set level.<br />
► Updating a member in place is possible with both PDSs and PDSEs. But with PDSEs,<br />
you can extend the size of members, and the integrity of the library is maintained<br />
while simultaneous changes are made to separate members within the library.<br />
► The maximum number of extents of a PDSE is 123; a PDS is limited to 16.<br />
► PDSEs are device-independent because they do not contain information that depends on<br />
location or device geometry.<br />
► All members of a PDSE are re-blockable.<br />
► PDSEs can contain program objects built by the program management binder that cannot<br />
be stored in PDSs.<br />
► You can share PDSEs within and across systems using PDSESHARING(EXTENDED) in<br />
the IGDSMSxx member in the SYS1.PARMLIB. Multiple users are allowed to read PDSE<br />
members while the data set is open. You can extend the sharing to enable multiple users<br />
on multiple systems to concurrently create new PDSE members and read existing<br />
members (see also “SMS PDSE support” on page 302).<br />
► For performance reasons, directories can be buffered in dataspaces and members can be<br />
buffered in hiperspaces by using the MSR parameter in the SMS storage class.<br />
► Replacing a member without replacing all <strong>of</strong> its aliases deletes all aliases.<br />
► An unlimited number of tracks per volume is now available for PDSEs; that is, more than<br />
the previous limit of 65,535.<br />
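The first-available-space reuse behavior described above can be pictured as a first-fit free-list allocator (a toy model in Python; the actual PDSE algorithm is internal to DFSMS and more sophisticated):<br />

```python
# Toy model of PDSE space reuse: freed page ranges are recorded and the
# first large-enough gap is reused before the data set is extended.
def allocate(free_list, end, pages):
    """Return (start_page, new_end) for a new member: first-fit from
    free_list, else extend at the current end. free_list holds
    (start, length) gaps, which need not be contiguous with each other."""
    for i, (start, length) in enumerate(free_list):
        if length >= pages:
            if length == pages:
                free_list.pop(i)
            else:
                free_list[i] = (start + pages, length - pages)
            return start, end              # reused; data set not extended
    return end, end + pages                # no gap fits; extend the data set

free = [(10, 3), (20, 8)]
print(allocate(free, 40, 5))   # (20, 40): reuses the 8-page gap
print(free)                    # [(10, 3), (25, 3)]
print(allocate(free, 40, 6))   # (40, 46): no gap big enough, extend
```

The design goal mirrors the text: extend the data set only when no freed space is large enough, which is why deleting or moving a member immediately makes its pages reusable.<br />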
Restriction: You cannot use a PDSE for certain system data sets that are opened in the<br />
IPL/NIP time frame.<br />
4.25 PDSE enhancements<br />
PDSE, two address spaces:<br />
► SMXC, in charge of PDSE serialization<br />
► SYSBMAS, the owner of DS and HS buffering<br />
z/OS V1R6 combines both in a single address space<br />
called SMSPDSE and improves the following:<br />
► Reducing excessive ECSA usage<br />
► Reducing re-IPLs due to system hangs in failure or<br />
CANCEL situations<br />
► Providing tools for monitoring and diagnosis through the<br />
VARY SMS,PDSE,ANALYSIS command<br />
Finally, a restartable SMSPDSE1 is in charge of all<br />
allocated PDSEs, except the ones in the LNKLST,<br />
which remain controlled by SMSPDSE<br />
Figure 4-35 PDSE enhancements<br />
PDSE enhancements<br />
Recent enhancements have made PDSEs more reliable and available, correcting problems<br />
that caused re-IPLs due to a hang, deadlock, or out-of-storage condition.<br />
Originally, in order to implement PDSE, two system address spaces were introduced:<br />
► SMXC, in charge of PDSE serialization.<br />
► SYSBMAS, the owner of the data space and hiperspace buffering.<br />
z/OS V1R6 combines SMXC and SYSBMAS into a single address space called SMSPDSE.<br />
This improves overall PDSE usability and reliability by:<br />
► Reducing excessive ECSA usage (by moving control blocks into the SMSPDSE address<br />
space)<br />
► Reducing re-IPLs due to system hangs in failure or CANCEL situations<br />
► Providing storage administrators with tools for monitoring and diagnosis through the VARY<br />
SMS,PDSE,ANALYSIS command (for example, determining which systems are using a<br />
particular PDSE)<br />
However, the SMSPDSE address space is usually non-restartable because of the possible<br />
presence of long-lived PDSE data sets in the LNKLST concatenation, so any hang<br />
condition can cause an unplanned IPL. To fix this, a new, restartable address space,<br />
SMSPDSE1, is in charge of all allocated PDSEs except the ones in the LNKLST.<br />
SMSPDSE1 is created if the following parameters are defined in IGDSMSxx:<br />
► PDSESHARING(EXTENDED)<br />
► PDSE_RESTARTABLE_AS(YES)<br />
z/OS V1R8 enhancements<br />
With z/OS V1R8, the following improvements for PDSEs are introduced:<br />
► SMSPDSE and SMSPDSE1 support 64-bit addressing, allowing more concurrently<br />
opened PDSE members.<br />
► The buffer pool can be retained beyond PDSE close, improving performance through<br />
fewer buffer pool deletions and creations. This is done by setting<br />
PDSE1_BUFFER_BEYOND_CLOSE(YES) in the IGDSMSxx member in parmlib.<br />
► You can dynamically change the amount of virtual storage allocated to hiperspaces for<br />
SMSPDSE1 through the SETSMS PDSE1HSP_SIZE(nnn) command.<br />
4.26 PDSE: Conversion<br />
Using DFSMSdss<br />
COPY DATASET(INCLUDE -<br />
(MYTEST.**) -<br />
BY(DSORG = PDS)) -<br />
INDY(SMS001) -<br />
OUTDY(SMS002) -<br />
CONVERT(PDSE(**)) -<br />
RENAMEU(MYTEST2) -<br />
DELETE<br />
Figure 4-36 Converting PDS to PDSE<br />
Converting a PDS data set to a PDSE<br />
You can use IEBCOPY or DFSMSdss COPY to convert PDS to PDSE, as shown in Figure 4-36.<br />
We recommend using DFSMSdss.<br />
You can convert the entire data set or individual members, and also back up and restore<br />
PDSEs. By using the DFSMSdss COPY function with the CONVERT and PDS keywords, you<br />
can convert a PDSE back to a PDS. This is especially useful if you need to prepare a PDSE<br />
for migration to a site that does not support PDSEs. When copying members from a PDS load<br />
module library into a PDSE program library, or vice versa, the system invokes the program<br />
management binder component.<br />
Many types of libraries are candidates for conversion to PDSE, including:<br />
► PDSs that are updated often, and that require frequent and regular reorganization<br />
► Large PDSs that require specific device types because of the size of allocation<br />
Converting PDSs to PDSEs is beneficial, but be aware that certain data sets are unsuitable<br />
for conversion to, or allocation as, PDSEs because the system does not retain the original<br />
block boundaries.<br />
Using DFSMSdss<br />
In Figure 4-36, the DFSMSdss COPY example converts all PDSs with the high-level qualifier<br />
of “MYTEST” on volume SMS001 to PDSEs with the high-level qualifier of “MYTEST2” on<br />
volume SMS002. The original PDSs are then deleted. If you use dynamic allocation, specify<br />
INDY and OUTDY for the input and output volumes. However, if you define ddnames for<br />
the volumes, use the INDD and OUTDD parameters.<br />
Using IEBCOPY<br />
To copy one or more specific members using IEBCOPY, as shown in Figure 4-36 on<br />
page 150, use the SELECT control statement. In the following example, IEBCOPY copies<br />
members A, B, and C from USER.PDS.LIBRARY to USER.PDSE.LIBRARY:<br />
//INPDS DD DISP=SHR,<br />
// DSN=USER.PDS.LIBRARY<br />
//OUTPDSE DD DSN=USER.PDSE.LIBRARY,<br />
// DISP=OLD<br />
//SYSIN DD *<br />
COPY OUTDD=OUTPDSE<br />
INDD=INPDS<br />
SELECT MEMBER=(A,B,C)<br />
For more information about DFSMSdss, see z/<strong>OS</strong> DFSMSdss Storage Administration Guide,<br />
SC35-0423, and z/<strong>OS</strong> DFSMSdss Storage Administration Reference, SC35-0424.<br />
4.27 Program objects in a PDSE<br />
Functions to create, update, execute, and access<br />
program objects in PDSEs:<br />
► Load module format<br />
► Binder (replaces the linkage editor)<br />
► Program fetch<br />
► DESERV internal interface function<br />
► AMASPZAP<br />
► A set of utilities, such as IEWTPORT, which builds<br />
transportable programs from program objects and<br />
vice versa<br />
► Coexistence between PDS and PDSE load module<br />
libraries in the same system<br />
Figure 4-37 Program objects in PDSE<br />
Program objects in PDSE<br />
Program objects are created automatically when load modules are copied into a PDSE.<br />
Likewise, program objects are automatically converted back to load modules when they are<br />
copied into a partitioned data set. Note that certain program objects cannot be converted into<br />
load modules because they use features of program objects that do not exist in load modules.<br />
A load module is an executable program that the binder stores in a PDS. A program object is<br />
an executable program that the binder stores in a PDSE.<br />
Load module format<br />
For accessing a PDS directory or member, most PDSE interfaces are indistinguishable from<br />
PDS interfaces. However, PDSEs have a different internal format, which gives them increased<br />
usability. Each member name can be eight bytes long. The primary name for a program<br />
object can be eight bytes long. Alias names for program objects can be up to 1024 bytes long.<br />
The records of a given member of a PDSE are written or retrieved sequentially. You can use a<br />
PDSE in place of a PDS to store data, or to store programs in the form of program objects. A<br />
program object is similar to a load module in a PDS. A load module cannot reside in a PDSE<br />
and be used as a load module. One PDSE cannot contain a mixture of program objects and<br />
data members.<br />
The binder<br />
The binder is the program that processes the output of language translators and compilers<br />
into an executable program (load module or program object). It replaced the linkage editor<br />
and batch loader. The binder converts the object module output of language translators and<br />
compilers into an executable program unit that can either be stored directly into virtual storage<br />
for execution or stored in a program library (PDS or PDSE).<br />
MVS program fetch<br />
Most of the loading functions are transparent to the user. When the executable code is in a<br />
library, the program management loader component knows whether the program being<br />
loaded is a load module or a program object by the source data set type:<br />
► If the program is being loaded from a PDS, it calls IEWFETCH (integrated as part of the<br />
loader) to do what it has always done.<br />
► If the program is being loaded from a PDSE, a new routine is called to bring in the program<br />
using data-in-virtual (DIV). The loading is done using special loading techniques that can<br />
be influenced by externalized options.<br />
The program management loader extends the services of the program fetch component by<br />
adding support for loading program objects. The program management loader reads both<br />
program objects and load modules into virtual storage and prepares them for execution. It<br />
relocates any address constants in the program to point to the appropriate areas in virtual<br />
storage and supports 24-bit, 31-bit, and 64-bit addressing ranges. All program objects loaded<br />
from a PDSE are page-mapped into virtual storage. When loading program objects from a<br />
PDSE, the loader selects a loading mode based on the module characteristics and<br />
parameters specified to the binder when you created the program object. You can influence<br />
the mode with the binder FETCHOPT parameter. The FETCHOPT parameter allows you to<br />
select whether the program is completely preloaded and relocated before execution, or<br />
whether pages <strong>of</strong> the program can be read into virtual storage and relocated only when they<br />
are referenced during execution.<br />
DESERV directory service<br />
The directory service DESERV supports both PDS and PDSE libraries. You can issue<br />
DESERV in your Assembler program for either PDS or PDSE directory access, but you must<br />
pass the DCB address. It does not default to a predefined search order, as does BLDL (the<br />
old directory service).<br />
Members of a PDSE program library cannot be rewritten, extended, or updated in place.<br />
When updating program objects in a PDSE program library, the AMASPZAP service aid<br />
invokes the program management binder, which creates a new version of the program rather<br />
than updating the existing version in place.<br />
IEWTPORT utility<br />
The transport utility (IEWTPORT) is a program management service with very specific and<br />
limited function. It obtains (through the binder) a program object from a PDSE and converts it<br />
into a transportable program file in a sequential (nonexecutable) format. It also reconstructs<br />
the program object from a transportable program file and stores it back into a PDSE (through<br />
the binder).<br />
PDS and PDSE coexistence<br />
You can use a PDSE in place of a PDS to store data, or to store programs in the form of<br />
program objects. PDSEs and PDSs are processed using the same access methods<br />
(BSAM, QSAM, BPAM) and macros, but you cannot use EXCP because of the data set's<br />
internal structures.<br />
4.28 Sequential access methods<br />
Sequential access data organizations:<br />
► Physical sequential<br />
► Extended format<br />
– Compressed data sets<br />
– Data striped data sets<br />
► z/OS UNIX files<br />
These organizations are accessed by the sequential access methods, basic sequential<br />
access method (BSAM) and queued sequential access method (QSAM).<br />
Figure 4-38 Sequential access methods<br />
Access methods
An access method defines the technique that is used to store and retrieve data. Access methods have their own data set structures to organize data, macros to define and process data sets, and utility programs to process data sets. Access methods are identified primarily by the data set organization. For example, use the basic sequential access method (BSAM) or queued sequential access method (QSAM) with sequential data sets. However, there are times when an access method identified with one organization can be used to process a data set organized in a different manner.
Physical sequential
There are two sequential access methods, basic sequential access method (BSAM) and queued sequential access method (QSAM), and just one sequential organization. Both methods access data organized in a physical sequential manner; the physical records (containing logical records) are stored sequentially in the order in which they are entered.

An important performance factor in sequential access is buffering. If you allow enough buffers, QSAM is able to minimize the number of SSCH (Start Subchannel) instructions by packaging the data transfer of many physical blocks into the same I/O operation (through CCW command chaining). This function considerably decreases the total I/O connect time. Another key point is the look-ahead function for reads, that is, reading in advance records that are not yet required by the application program.
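As an illustration of why command chaining matters, the sketch below (plain Python with a hypothetical function name; it models only the counting, not the actual channel subsystem) shows how chaining several block transfers into one channel program cuts the number of SSCH operations:

```python
import math

def start_subchannels(total_blocks: int, buffers_per_io: int) -> int:
    """Number of SSCH (Start Subchannel) operations needed to move
    total_blocks physical blocks when up to buffers_per_io block
    transfers are chained into one channel program via CCW chaining."""
    return math.ceil(total_blocks / max(1, buffers_per_io))

# With one buffer per I/O, 1000 blocks cost 1000 SSCHs; chaining
# 20 buffers per channel program reduces that to 50.
```

The same arithmetic explains the look-ahead benefit: blocks read ahead in a chained channel program are already in storage when the application asks for them.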
Extended format data sets
A special type of this organization is the extended format data set. Extended format data sets have a different internal storage format from sequential data sets that are not extended (fixed block with a 32-byte suffix). This storage format gives extended format data sets additional usability and availability characteristics, as explained here:
► They can be allocated in the compressed format (referred to as a compressed format data set). A compressed format data set is a type of extended format data set that has an internal storage format that allows for data compression.
► They allow data striping, that is, a multivolume sequential file where data can be accessed in parallel.
► They are able to recover from padding error situations.
► They can use the system-managed buffering (SMB) technique.
Extended format data sets must be SMS-managed and must reside on DASD. You cannot use an extended format data set for certain system data sets.
z/OS UNIX files
Another type of this organization is the Hierarchical File System (HFS). HFS files are POSIX-conforming files that reside in an HFS data set. They are byte-oriented, rather than record-oriented as MVS data sets are. They are identified and accessed by specifying the path leading to them. Programs can access the information in HFS files through z/OS UNIX system calls, such as open(pathname), read(file descriptor), and write(file descriptor). Programs can also access the information in HFS files through the MVS BSAM, QSAM, and VSAM (Virtual Storage Access Method) access methods. When using BSAM or QSAM, an HFS file is simulated as a multi-volume sequential data set. When using VSAM, an HFS file is simulated as an ESDS. Note the following points about HFS data sets:
► They are supported by standard DADSM create, rename, and scratch.
► They are supported by DFSMShsm for dump/restore and migrate/recall if DFSMSdss is used as the data mover.
► They are not supported by IEBCOPY or the DFSMSdss COPY function.
QSAM and BSAM
BSAM arranges records sequentially in the order in which they are entered. A data set that has this organization is a sequential data set. The user organizes records with other records into blocks. This is basic access. You can use BSAM with the following data types:
► Basic format sequential data sets, which before z/OS V1R7 were known as sequential data sets, or more accurately as non-extended-format sequential data sets
► Large format sequential data sets
► Extended-format data sets
► z/OS UNIX files

QSAM arranges records sequentially in the order that they are entered to form sequential data sets, which are the same as those data sets that BSAM creates. The system organizes records with other records. QSAM anticipates the need for records based on their order. To improve performance, QSAM reads these records into storage before they are requested. This is called queued access. You can use QSAM with the following data types:
► Basic format sequential data sets, which before z/OS V1R7 were known as sequential data sets, or more accurately as non-extended-format sequential data sets
► Large format sequential data sets
► Extended-format data sets
► z/OS UNIX files
4.29 z/OS V1R9 QSAM - BSAM enhancements
DFSMS provides the following enhancements to BSAM and QSAM:
► Long-term page fixing for BSAM data buffers with the FIXED=USER parameter
► BSAM and QSAM support for the MULTACC parameter
► QSAM support for the MULTSDN parameter
Figure 4-39 QSAM and BSAM enhancements with z/OS V1R9
Long-term page fixing for BSAM data buffers
To improve performance, in z/OS V1.9 BSAM allows certain calling programs to specify that all their BSAM data buffers have been page fixed. This specification frees BSAM from the CPU-intensive work of fixing and freeing the data buffers itself. The only restrictions are:
► The calling program must be APF-authorized, or be in system key or supervisor state.
► The format of the data set must be basic format, large format, PDS, or extended format.
The DCBE macro option FIXED=USER must be coded to indicate that the calling program has done its own page fixing for all BSAM data buffers.

Note: Compressed format data sets are not supported.

BSAM and QSAM support for MULTACC
In z/OS V1R9, the MULTACC parameter of the DCBE macro is expanded to optimize performance for tape data sets with BSAM, and to support QSAM with optimized performance for both tape and DASD data sets. The calculations that are used to optimize the performance for BSAM with DASD data sets are also enhanced. When dealing with a tape data set, OPEN supports MULTACC for BSAM and QSAM.
BSAM support
For BSAM in z/OS V1R9, if you code a nonzero MULTACC value, OPEN calculates a default number of READ or WRITE requests that you are suggesting the system queue more efficiently. OPEN calculates the number of BLKSIZE-length blocks that can fit within 64 KB, then multiplies that value by the MULTACC value. If the block size exceeds 32 KB, then OPEN uses the MULTACC value without modification (this can happen only if you are using LBI, the large block interface). The system then tries to defer starting I/O requests until you have issued this number of READ or WRITE requests for the DCB. BSAM never queues (defers) more READ or WRITE requests than the NCP value set in OPEN.

For DASD, BSAM works as documented:
► If you code a nonzero MULTACC value, OPEN calculates a default number of read or write requests that you are suggesting the system queue more efficiently.
► The system tries to defer starting I/O requests until you have issued this many read or write requests for the DCB.

Note: BSAM never queues or defers more read or write requests than the number of channel programs (NCP) value set in OPEN.
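The OPEN calculation described above can be sketched as follows. This is an illustration with a hypothetical function name, not the exact system algorithm:

```python
def bsam_defer_limit(blksize: int, multacc: int, ncp: int) -> int:
    """Suggested number of READ/WRITE requests BSAM accumulates before
    starting I/O: blocks-per-64-KB times MULTACC, capped at NCP."""
    if multacc == 0:
        return 1                      # no deferral requested
    if blksize > 32 * 1024:           # possible only with LBI
        queued = multacc              # MULTACC used without modification
    else:
        queued = (64 * 1024 // blksize) * multacc
    return min(queued, ncp)           # never defer more than NCP requests

# For a 3390 half-track block size of 27998 bytes and MULTACC=2,
# two blocks fit in 64 KB, so OPEN suggests queuing 4 requests.
```

The same shape applies to QSAM in the next subsection, except that the result counts buffers and the cap is the BUFNO value in effect rather than NCP.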
QSAM support
For QSAM in z/OS V1R9, if you code a nonzero MULTACC value, OPEN calculates a default number of buffers that you are suggesting the system queue more efficiently. OPEN calculates the number of BLKSIZE-length blocks that can fit within 64 KB, then multiplies that value by the MULTACC value. If the block size exceeds 32 KB, then OPEN uses the MULTACC value without modification (this can happen only if you are using LBI, the large block interface). The system then tries to defer starting I/O requests until that number of buffers has been accumulated for the DCB. QSAM never queues (defers) more buffers than the BUFNO value that is in effect.
QSAM support for the MULTSDN parameter
You can use the MULTSDN parameter of the DCBE macro with QSAM. In previous releases, QSAM ignored the MULTSDN parameter. This new support allows the system to calculate a more efficient default value for the DCB BUFNO parameter, and reduces the situations in which you need to specify a BUFNO value. You can use MULTSDN to give OPEN a hint so that it can calculate a better default value for QSAM BUFNO than 1, 2, or 5, without having to depend on device information such as blocks per track or number of stripes. QSAM accepts a MULTSDN value for the following data sets:
► Tape data sets
► DASD data sets of the following types:
– Basic format
– Large format
– Extended format (non-compressed)
– PDS
For these supported data set types, the system uses MULTSDN to calculate a more efficient value for BUFNO when the following conditions are true:
► The MULTSDN value is not zero.
► DCBBUFNO has a value of zero after completion of the DCB OPEN exit routine.
► The data set block size is available.
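The three conditions above combine into a simple predicate, sketched here with hypothetical names (not system code):

```python
def system_computes_bufno(multsdn: int, dcbbufno: int,
                          blksize_known: bool) -> bool:
    """True when OPEN uses MULTSDN to derive a default BUFNO:
    MULTSDN is nonzero, DCBBUFNO is still zero after the DCB OPEN
    exit routine, and the data set block size is available."""
    return multsdn != 0 and dcbbufno == 0 and blksize_known
```

Note that coding any nonzero DCBBUFNO, even in the OPEN exit, suppresses the MULTSDN calculation entirely.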
4.30 Virtual storage access method (VSAM)
Figure 4-40 Virtual storage access method
Virtual storage access method (VSAM)
VSAM is a DFSMSdfp component used to organize data and maintain information about the data in a catalog. VSAM arranges records by an index key, relative record number, or relative byte address. VSAM is used for direct or sequential processing of fixed-length and variable-length records on DASD. Data that is organized by VSAM is cataloged for easy retrieval and is stored in one of five types of data sets.

There are two major parts of VSAM:
► Catalog management - The catalog contains information about the data sets.
► Record management - VSAM can be used to organize records into five types of data sets, that is, the VSAM access method function:
– Key-sequenced data sets (KSDS) contain records in ascending collating sequence. Records can be accessed by a field, called a key, or by a relative byte address.
– Entry-sequenced data sets (ESDS) contain records in the order in which they were entered. Records are added to the end of the data set.
– Linear data sets (LDS) contain data that has no record boundaries. Linear data sets contain none of the control information that other VSAM data sets do. Linear data sets must be cataloged in a catalog.
– Relative record data sets (RRDS) contain records in relative record number order, and the records can be accessed only by this number. There are two types of relative record data sets:
Fixed-length RRDS: the records must be of fixed length.
Variable-length RRDS (VRRDS): the records can vary in length.
VSAM data sets
The primary difference among these types of data sets is the way in which their records are stored and accessed. VSAM arranges records by an index key, by relative byte address, or by relative record number. Data organized by VSAM must be cataloged and is stored in one of five types of data sets, depending on a choice made by the application designer.

z/OS UNIX files can be accessed as though they are VSAM entry-sequenced data sets (ESDS). Although UNIX files are not actually stored as entry-sequenced data sets, the system attempts to simulate the characteristics of such a data set. To identify or access a UNIX file, specify the path that leads to it.

Any type of VSAM data set can be in extended format. Extended-format data sets have a different internal storage format than data sets that are not extended. This storage format gives extended-format data sets additional usability characteristics and possibly better performance due to striping. You can choose for an extended-format key-sequenced data set to be in the compressed format. Extended-format data sets must be SMS-managed. You cannot use an extended-format data set for certain system data sets.
4.31 VSAM terminology
► Logical record
– Unit of application information in a VSAM data set
– Designed by the application programmer
– Can be of fixed or variable size
– Divided into fields, one of which can be a key
► Physical record
► Control interval
► Control area
► Component
► Cluster
► Sphere
Figure 4-41 VSAM terminology
Logical record
A logical record is a unit of application information used to store data in a VSAM cluster. The logical record is designed by the application programmer from the business model. The application program, through a GET, requests that a specific logical record be moved from the I/O device to memory in order to be processed. Through a PUT, the specific logical record is moved from memory to an I/O device. A logical record can be of a fixed size or a variable size, depending on the business requirements.

The logical record is divided into fields by the application program, such as the name of the item, its code, and so on. One or more contiguous fields can be defined to VSAM as a key field, and a specific logical record can be retrieved directly by its key value.

Logical records of VSAM data sets are stored differently from logical records in non-VSAM data sets.
Physical record
A physical record is device-dependent and is a set of logical records moved during an I/O operation by just one CCW (Read or Write). VSAM calculates the physical record size in order to optimize use of the track space (to avoid many gaps) at the time the data set is defined. All physical records in VSAM have the same length. A physical record is also referred to as a physical block, or simply a block. A physical record may contain control information along with the logical records.
Control interval (CI)
VSAM stores records in control intervals. A control interval is a continuous area of direct access storage that VSAM uses to store data records and control information that describes the records. Whenever a record is retrieved from direct access storage, the entire control interval containing the record is read into a VSAM I/O buffer in virtual storage. The desired record is then transferred from the VSAM buffer to a user-defined buffer or work area.

Control area (CA)
The control intervals in a VSAM data set are grouped together into fixed-length contiguous areas of direct access storage called control areas. A VSAM data set is actually composed of one or more control areas. The number of control intervals in a control area is fixed by VSAM. The maximum size of a control area is one cylinder, and the minimum size is one track of DASD storage. When you specify the amount of space to be allocated to a data set, you implicitly define the control area size. Refer to 4.32, “VSAM: Control interval (CI)” on page 162, for more information.

Component
A component in systems with VSAM is a named, cataloged collection of stored records, such as the data component or index component of a key-sequenced file or alternate index. A component is a set of CAs. It is the VSAM terminology for an MVS data set, and a component has an entry in the VTOC. An example of a component is the data set containing only the data for a KSDS VSAM organization.

Cluster
A cluster is a named structure consisting of a group of related components. VSAM data sets can be defined with either the DEFINE CLUSTER command or the ALLOCATE command. The cluster is a set of components that have a logical binding between them. For example, a KSDS cluster is composed of the data component and the index component. The concept of a cluster was introduced to make the JCL used to access VSAM more flexible. To access a KSDS normally, just use the cluster’s name on a DD card. If instead you want special processing of just the data, use the data component name on the DD card.

Sphere
A sphere is a VSAM cluster and its associated data sets. The cluster is originally defined with the access method services ALLOCATE command, the DEFINE CLUSTER command, or through JCL. The most common use of the sphere is to open a single cluster. The base of the sphere is the cluster itself.
4.32 VSAM: Control interval (CI)
LR = Logical record
RDF = Record definition field
CIDF = Control interval definition field
Figure 4-42 Control interval format
Control interval (CI)
The control interval is a concept that is unique to VSAM. A CI is formed by one or several physical records (usually just one). It is the fundamental building block of every VSAM file. A CI is a contiguous area of direct access storage that VSAM uses to store data records and control information that describes the records. A CI is the unit of information that VSAM transfers between the storage device and main storage during one I/O operation. Whenever a logical record is requested by an application program, the entire CI containing the logical record is read into a VSAM I/O buffer in virtual storage. The desired logical record is then transferred from the VSAM buffer to a user-defined buffer or work area (if in move mode).

Based on the CI size, VSAM calculates the best size for the physical block in order to make better use of the 3390/3380 logical track. The CI size can be from 512 bytes to 32 KB. The contents of a CI depend on the cluster organization. A KSDS CI consists of:
► Logical records, stored from the beginning to the end of the CI.
► Free space, into which data records can be inserted or lengthened.
► Control information, which is made up of two types of fields:
– One control interval definition field (CIDF) per CI. The CIDF is a 4-byte field that contains information about the amount and location of free space in the CI.
– Several record definition fields (RDFs) describing the logical records. An RDF is a 3-byte field that describes the length of a logical record. For fixed-length records there is a single pair of RDFs: one contains the record length, and the other the number of consecutive records of that length.
The size of CIs can vary from one component to another, but all the CIs within the data or index component of a particular cluster data set must be of the same length. The CI contents and properties may vary, depending on the data set organization. For example, an LDS does not contain CIDFs or RDFs in its CIs. All of the bytes in an LDS CI are data bytes.
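Using the field sizes just given (one 4-byte CIDF and, for fixed-length records, one pair of 3-byte RDFs), the record capacity of a CI can be estimated. This sketch uses a hypothetical function name and ignores free-space reservation; it is an illustration, not a VSAM algorithm:

```python
def ci_record_capacity(ci_size: int, record_len: int) -> int:
    """Fixed-length logical records that fit in one CI after reserving
    the control information: a 4-byte CIDF plus two 3-byte RDFs."""
    usable = ci_size - 4 - 2 * 3
    return usable // record_len

# A 4 KB CI holds (4096 - 10) // 100 = 40 records of 100 bytes.
```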
Spanned records
Spanned records are logical records that are larger than the CI size. They are needed when the application requires very long logical records. To have spanned records, the file must be defined with the SPANNED attribute at the time it is created. Spanned records are allowed to extend across, or “span,” control interval boundaries, but not beyond control area limits. The RDFs describe whether the record is spanned or not.

A spanned record always begins on a control interval boundary and fills one or more control intervals within a single control area. A spanned record does not share its CIs with any other records; in other words, the free space at the end of the last segment is not filled with the next record. This free space is only used to extend the spanned record.
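A rough sizing sketch: if we assume about 10 bytes of control information per CI segment (an assumption for illustration; the exact per-segment overhead is not stated above), the number of CIs a spanned record fills is:

```python
import math

ASSUMED_CTL_BYTES = 10  # assumption: CIDF plus RDFs per CI segment

def spanned_segments(record_len: int, ci_size: int) -> int:
    """CIs occupied by one spanned record, under the stated overhead
    assumption. The whole last CI is reserved for the record."""
    usable = ci_size - ASSUMED_CTL_BYTES
    return math.ceil(record_len / usable)

# A 10,000-byte record in 4 KB CIs spans ceil(10000 / 4086) = 3 CIs.
```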
Control area (CA)
The control area is also a concept unique to VSAM. A CA is formed by two or more CIs put together into a fixed-length contiguous area of direct access storage. A VSAM data set is composed of one or more CAs. In most cases, a CA is the size of a 3390/3380 cylinder. The minimum size of a CA is one track. The CA size is implicitly defined when you specify the size of a data set at data set definition.

CAs are needed to implement the concept of splits. The size of a VSAM file is always a multiple of the CA size, and VSAM files are extended in units of CAs.
Splits
CI splits and CA splits occur as a result of data record insertions (or of increasing the length of an already existing record) in KSDS and VRRDS organizations. If a logical record is to be inserted (in key sequence) and there is not enough free space in the CI, the CI is split. Approximately half the records in the CI are transferred to a free CI provided in the CA, and the record to be inserted is placed in the original CI.

If there are no free CIs in the CA and a record is to be inserted, a CA split occurs. Half the CIs are moved to the first available CA at the end of the data component. This movement creates free CIs in the original CA, and the record to be inserted then causes a CI split.
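The CI-split step can be sketched as a toy (hypothetical names, ordinary Python lists standing in for CIs; it simplifies the text above slightly by inserting the new key into whichever CI now covers its key range):

```python
def ci_split(ci: list, free_ci: list, new_key, max_records: int) -> None:
    """If the key-ordered CI is full, move its upper half of records to
    a free CI in the same CA, then insert new_key in the proper half."""
    if len(ci) < max_records:         # room available: plain insert
        ci.append(new_key)
        ci.sort()
        return
    half = len(ci) // 2
    free_ci.extend(ci[half:])         # upper half moves to the free CI
    del ci[half:]
    target = free_ci if new_key > ci[-1] else ci
    target.append(new_key)
    target.sort()
```

After a split, the physical order of CIs no longer matches the key order, which is why the sequence set (described in the next sections) needs horizontal pointers.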
4.33 VSAM data set components
Figure 4-43 VSAM data set components
VSAM data set components
A component is an individual part of a VSAM cluster. Each component has a name, an entry in the catalog, and an entry in the VTOC. There are two types of components, the data component and the index component. Some VSAM organizations (such as ESDS, RRDS, and LDS) have only the data component.

Data component
The data component is the part of a VSAM cluster, alternate index, or catalog that contains the data records. All VSAM cluster organizations have a data component.

Index component
The index component is a collection of records containing data keys and pointers (relative byte addresses, or RBAs). The data keys are taken from a fixed, defined field in each data logical record. The keys in the index logical records are compressed (rear and front), and the RBA pointers are compacted. Only the KSDS and VRRDS VSAM data set organizations have an index component.

Using the index, VSAM is able to retrieve a logical record from the data component when a request is made randomly for a record with a certain key. A VSAM index can consist of more than one level (a multilevel tree). Each level contains pointers to the next lower level. Because there are random and sequential types of access, VSAM divides the index component into two parts: the sequence set and the index set.
Sequence set
The sequence set is the lowest level of the index, and it points directly (through an RBA) to the data CIs in a CA of the data component. Each sequence set index logical record:
► Occupies one index CI.
► Maps one CA in the data component.
► Contains a pointer and high-key information for each data CI.
► Contains horizontal pointers from one sequence set CI to the next-keyed sequence set CI. These horizontal pointers are needed because of the possibility of splits, which make the physical sequence different from the logical collating sequence by key.
Index set
The records in all levels of the index above the sequence set are called the index set. An entry in an index set logical record consists of the highest possible key in an index record in the next lower level, and a pointer to the beginning of that index record. The highest level of the index always contains a single index CI.

The structure of VSAM prime indexes is built to create a single index record at the lowest level of the index. If there is more than one sequence-set-level record, VSAM automatically builds another index level.
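The way a key is located through the index set down to a sequence-set entry can be sketched with ordinary Python lists standing in for index records (hypothetical structures; real VSAM index records are key-compressed, as described earlier):

```python
import bisect

def ksds_locate(index_record, key):
    """Follow (high_key, pointer) entries downward. A pointer is either
    a child index record (a list) or a data-CI identifier; return the
    identifier of the data CI whose key range may contain key."""
    high_keys = [hk for hk, _ in index_record]
    i = bisect.bisect_left(high_keys, key)
    if i == len(index_record):
        return None                   # key is above the highest key
    ptr = index_record[i][1]
    return ksds_locate(ptr, key) if isinstance(ptr, list) else ptr

# Two index levels over sequence-set records mapping three data CIs each:
seq1 = [(7, "D1"), (14, "D2"), (38, "D3")]
seq2 = [(43, "D4"), (57, "D5"), (67, "D6")]
root = [(38, seq1), (67, seq2)]
```

For example, a direct GET for key 23 descends from the root entry with high key 38 to the sequence-set entry for data CI "D3", which is then read and searched for the record.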
Cluster
A cluster is the combination of the data component (data set) and the index component (data set) for a KSDS. The cluster provides a way to treat the index and data components as a single entity with its own name. Use of the word cluster instead of data set is recommended.
Alternate index (AIX)
The alternate index is a VSAM function that allows the logical records of a KSDS or ESDS to be accessed sequentially and directly by more than one key field. The cluster that holds the data is called the base cluster; an AIX cluster is then built from the base cluster. Alternate indexes eliminate the need to store the same data in different sequences in multiple data sets for the purposes of various applications. Each alternate index is a KSDS cluster consisting of an index component and a data component.

The records in the AIX index component contain the alternate key and the RBA pointing to the alternate index data component. The records in the AIX data component contain the alternate key value itself and all the primary keys corresponding to that alternate key value (pointers to data in the base cluster). The primary keys in the logical record are in ascending sequence within an alternate index value.

Any field in the base cluster record can be used as an alternate key. It can also overlap the primary key (in a KSDS), or any other alternate key. The same base cluster may have several alternate indexes, each with a different alternate key. There may be more than one primary key value for the same alternate key value. For example, the primary key might be an employee number and the alternate key might be the department name; obviously, the same department name may have several employee numbers.
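The employee example boils down to a mapping from each alternate key to an ascending list of primary keys. The sketch below (ordinary Python with made-up names) illustrates the idea in the spirit of what BLDINDEX produces; it is not the utility itself:

```python
from collections import defaultdict

def build_alternate_index(base_records: dict, alt_field: str) -> dict:
    """For each base-cluster record, extract the alternate key and
    collect the primary keys that carry it, in ascending order."""
    aix = defaultdict(list)
    for primary_key in sorted(base_records):
        record = base_records[primary_key]
        aix[record[alt_field]].append(primary_key)
    return dict(aix)

# Employees keyed by employee number; department is the alternate key.
emps = {103: {"dept": "SALES"}, 101: {"dept": "IT"}, 102: {"dept": "SALES"}}
```

Here build_alternate_index(emps, "dept") associates "SALES" with the primary keys 102 and 103, matching the department-name example above.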
An AIX cluster is created with the IDCAMS DEFINE ALTERNATEINDEX command and is then populated by the BLDINDEX command. Before a base cluster can be accessed through an alternate index, a path must be defined. A path provides a way to gain access to the base data through a specific alternate index. To define a path, use the DEFINE PATH command. The utility used to issue this command is discussed in 4.14, “Access method services (IDCAMS)” on page 129.

Sphere
A sphere is a VSAM cluster and the data sets of its associated AIX clusters.
4.34 VSAM key sequenced cluster (KSDS)
Figure 4-44 Key sequenced cluster (KSDS): cluster component VSAM.KSDS with data component VSAM.KSDS.DATA and index component VSAM.KSDS.INDEX
VSAM KSDS cluster<br />
In a KSDS, logical records are placed in the data set in ascending collating sequence by key.<br />
The key contains a unique value, which determines the record's collating position in the<br />
cluster. The key must be in the same position (offset) in each record.<br />
The key field must be contiguous and each key’s contents must be unique. After it is<br />
specified, the value <strong>of</strong> the key cannot be altered, but the entire record may be deleted.<br />
When a new record is added to the data set, it is inserted in its logical collating sequence by<br />
key.<br />
A KSDS has a data component and an index component. The index component keeps track<br />
of the used keys and is used by VSAM to retrieve a record from the data component quickly<br />
when a request is made for a record with a certain key.<br />
A KSDS can have fixed or variable length records.<br />
A KSDS can be accessed in sequential mode, direct mode, or skip sequential mode (meaning<br />
that you process sequentially, but directly skip portions of the data set).<br />
166 ABCs of z/OS System Programming Volume 3<br />
4.35 VSAM: Processing a KSDS cluster<br />
Figure 4-45 Processing an indexed VSAM cluster: Direct access<br />
Processing a KSDS cluster<br />
A KSDS has an index that relates key values to the relative locations in the data set. This<br />
index is called the prime index. It has two uses:<br />
► Locate the collating position when inserting records<br />
► Locate records for retrieval<br />
When initially loading a KSDS data set, records must be presented to VSAM in key sequence.<br />
This loading can be done through the IDCAMS VSAM utility named REPRO. The index for a<br />
key-sequenced data set is built automatically by VSAM as the data set is loaded with records.<br />
When a data CI is completely loaded with logical records, free space, and control information,<br />
VSAM makes an entry in the index. The entry consists of the highest possible key in the data<br />
control interval and a pointer to the beginning of that control interval.<br />
When accessing records sequentially, VSAM refers only to the sequence set. It uses a<br />
horizontal pointer to get from one sequence set record to the next record in collating<br />
sequence.<br />
Request for data direct access<br />
When accessing records directly, VSAM follows vertical pointers from the highest level of the<br />
index down to the sequence set to find vertical pointers to the requested logical record.<br />
Figure 4-45 shows how VSAM searches the index when an application issues a GET for a<br />
logical record with key value 23.<br />
The sequence is as follows:<br />
1. VSAM scans the index record in the highest level of the index set for a key that is greater<br />
than or equal to 23.<br />
2. The entry 67 points to an index record in the next lower level. In this index record, VSAM<br />
scans for an entry with a key that is greater than or equal to 23.<br />
3. The entry 38 points to the sequence set that maps the CA holding the CI containing the<br />
logical record.<br />
4. VSAM scans the sequence set record with the highest key, searching for a key that is<br />
greater than or equal to 23.<br />
5. The entry 26 points to the data component CI that holds the desired record.<br />
6. VSAM searches the CI for the record with key 23. VSAM finds the logical record and gives<br />
it to the application program.<br />
If VSAM does not find a record with the desired key, the application receives a return code<br />
indicating that the record was not found.<br />
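The walk that steps 1 through 6 describe is a descent through sorted index levels. As an illustration of the algorithm only (not VSAM internals), the sketch below models each index record as a sorted list of (high key, pointer) entries, using the key values of Figure 4-45; only the first control area is populated:

```python
import bisect

# Index entries are (high_key, pointer) pairs in ascending key order, as in
# Figure 4-45. A pointer leads either to a lower index level or, from the
# sequence set, to a data CI (a sorted list of record keys).
sequence_set = [(7, [2, 5, 7]), (11, [8, 9]), (14, [12, 13, 14]),
                (21, [15, 16, 19]), (26, [22, 23, 26]), (38, [31, 35, 38])]
index_set = [(38, sequence_set)]          # records for higher CAs omitted

def vsam_get(key):
    """Walk from the index set down to a data CI; return the key or None."""
    level = index_set
    while True:
        high_keys = [hk for hk, _ in level]
        i = bisect.bisect_left(high_keys, key)     # first high key >= key
        if i == len(high_keys):
            return None                            # key beyond highest key
        pointer = level[i][1]
        if isinstance(pointer[0], tuple):
            level = pointer                        # descend one index level
        else:
            return key if key in pointer else None # scan the data CI

print(vsam_get(23))  # 23
```

For key 23 the search follows high key 38 to the sequence set, then high key 26 to the data CI [22, 23, 26], mirroring steps 3 through 6 above; a missing key (for example, 24) falls through to None, mirroring the not-found return code.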
168 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
4.36 VSAM entry sequenced data set (ESDS)<br />
Figure 4-46 VSAM entry sequenced data set (ESDS)<br />
VSAM entry sequenced data set (ESDS)<br />
An entry sequenced data set (ESDS) is comparable to a sequential data set. It contains<br />
fixed-length or variable-length records. Records are sequenced by the order of their entry in<br />
the data set, rather than by a key field in the logical record. All new records are placed at the<br />
end of the data set. An ESDS cluster has only a data component.<br />
Records can be accessed sequentially or directly by relative byte address (RBA). When a<br />
record is loaded or added, VSAM indicates its relative byte address (RBA). The RBA is the<br />
offset of the first byte of the logical record from the beginning of the data set. The first record<br />
in a data set has an RBA of 0; the second record has an RBA equal to the length of the first<br />
record, and so on. The RBA of a logical record depends only on the record's position in the<br />
sequence of records. The RBA is always expressed as a fullword binary integer.<br />
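Because each record's RBA is simply the sum of the lengths of the records that precede it, the idea can be shown with trivial arithmetic. This simplified sketch ignores CI free space and control information:

```python
def rba_sequence(record_lengths):
    """Return the RBA of each record: the running total of preceding lengths."""
    rbas, offset = [], 0
    for length in record_lengths:
        rbas.append(offset)        # this record starts where the last one ended
        offset += length
    return rbas

# Three records of 100, 80, and 120 bytes loaded into an empty ESDS
print(rba_sequence([100, 80, 120]))  # [0, 100, 180]
```

The first record always lands at RBA 0, and each later RBA depends only on the lengths of the records in front of it, as the text states.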
Although an entry-sequenced data set does not contain an index component, alternate<br />
indexes are allowed. You can build an alternate index with keys to keep track of these RBAs.<br />
4.37 VSAM: Typical ESDS processing<br />
Figure 4-47 Typical ESDS processing (ESDS)<br />
Typical ESDS processing<br />
For an ESDS, two types of processing are supported:<br />
► Sequential access (the most common)<br />
► Direct (or random) access, which requires the program to supply the RBA of the record<br />
Skip sequential processing is not allowed.<br />
Existing records can never be deleted. If the application wants to delete a record, it must flag<br />
that record as inactive. As far as VSAM is concerned, the record is not deleted. Records can<br />
be updated, but without a length change.<br />
ESDS organization is well suited to sequential processing of variable-length records; when an<br />
occasional direct (random) access by key is needed, an alternate index (AIX) cluster can provide it.<br />
4.38 VSAM relative record data set (RRDS)<br />
Figure 4-48 VSAM relative record data set (RRDS)<br />
Relative record data set<br />
A relative record data set (RRDS) consists of a number of preformatted, fixed-length slots.<br />
Each slot has a unique relative record number, and the slots are sequenced by ascending<br />
relative record number. Each (fixed-length) record occupies a slot, and it is stored and<br />
retrieved by the relative record number of that slot. The position of a data record is fixed; its<br />
relative record number cannot change.<br />
An RRDS cluster has a data component only.<br />
Loading an RRDS in random order requires a user program that implements that logic.<br />
4.39 VSAM: Typical RRDS processing<br />
Figure 4-49 Typical RRDS processing<br />
Processing RRDS data sets<br />
The application program supplies the relative record number of the target record. VSAM is<br />
able to find its location very quickly by using a formula that takes into consideration the<br />
geometry of the DASD device. The relative record number is always used as the search<br />
argument. For an RRDS, three types of processing are supported:<br />
► Sequential processing<br />
► Skip-sequential processing<br />
► Direct processing; in this case, the randomization routine is supplied by the application<br />
program<br />
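The formula VSAM uses also accounts for DASD geometry, but its core is plain slot arithmetic. A simplified sketch, assuming fixed-length slots and a fixed number of slots per CI (device geometry deliberately ignored):

```python
def locate_slot(rrn, slots_per_ci, slot_length):
    """Map a relative record number to (CI number, byte offset in that CI).
    Simplified model: real VSAM also factors in CI size and DASD geometry."""
    if rrn < 1:
        raise ValueError("relative record numbers start at 1")
    ci_number = (rrn - 1) // slots_per_ci             # 0-based CI in the data set
    offset = ((rrn - 1) % slots_per_ci) * slot_length  # byte position inside the CI
    return ci_number, offset

# Record 26 with 5 slots per CI and 100-byte slots (layout as in Figure 4-49)
print(locate_slot(26, 5, 100))  # (5, 0): the first slot of the sixth CI
```

Because the slot position follows directly from the relative record number, no index is needed, which is why direct retrieval by relative record number is so fast.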
4.40 VSAM linear data set (LDS)<br />
Figure 4-50 VSAM linear data set (LDS)<br />
VSAM linear data set (LDS)<br />
A linear data set is a VSAM data set with a control interval size from 4096 bytes to 32768<br />
bytes, in increments of 4096 bytes. An LDS has no imbedded control information in its CI, that<br />
is, no record definition fields (RDFs) and no control interval definition fields (CIDFs).<br />
Therefore, all LDS bytes are data bytes. Logical records must be blocked and deblocked by<br />
the application program (although logical records do not exist, from the point of view of<br />
VSAM).<br />
IDCAMS is used to define a linear data set. An LDS has only a data component. An LDS data<br />
set is just a physical sequential VSAM data set made up of 4 KB physical records, but with a<br />
revolutionary buffering technique called data-in-virtual (DIV).<br />
A linear data set is processed as an entry-sequenced data set, with certain restrictions.<br />
Because a linear data set does not contain control information, it cannot be accessed as<br />
though it contained individual records. You can access a linear data set with the DIV macro. If<br />
you use DIV to access the data set, the control interval size must be 4096; otherwise, the data<br />
set cannot be processed.<br />
When a linear data set is accessed with the DIV macro, it is referred to as the data-in-virtual<br />
object or the data object.<br />
For information about how to use data-in-virtual, see z/OS MVS Programming: Assembler<br />
Services Guide, SA22-7605.<br />
4.41 VSAM: Data-in-virtual (DIV)<br />
Figure 4-51 Data-in-virtual (DIV)<br />
Data-in-virtual (DIV)<br />
You can access a linear data set using these techniques:<br />
► VSAM<br />
► DIV, if the control interval size is 4096 bytes. The data-in-virtual (DIV) macro provides<br />
access to VSAM linear data sets.<br />
► Window services, if the control interval size is 4096 bytes.<br />
Data-in-virtual (DIV) is an optional and unique buffering technique used for LDS data sets.<br />
Application programs can use DIV to map a data set (or a portion of a data set) into an<br />
address space, a data space, or a hiperspace. An LDS cluster is sometimes referred to as a<br />
DIV object. After the environment is set up, the LDS cluster looks to the application like a<br />
table in virtual storage, with no need to issue I/O requests.<br />
Data is read into main storage by the paging algorithms only when that block is actually<br />
referenced. During RSM page-steal processing, only changed pages are written to the cluster<br />
in DASD. Unchanged pages are discarded since they can be retrieved again from the<br />
permanent data set.<br />
DIV is designed to improve the performance of applications that process large files<br />
non-sequentially and process them with significant locality of reference. It reduces the<br />
number of I/O operations that are traditionally associated with data retrieval. Likely candidates<br />
are large arrays or table files.<br />
4.42 VSAM: Mapping a linear data set<br />
Figure 4-52 Mapping a linear data set<br />
Mapping a linear data set<br />
To establish a map from a linear data set to a window (a program-provided area in multiples of<br />
4 KB on a 4 KB boundary), the program issues:<br />
► DIV IDENTIFY to introduce (allocate) a linear data set to data-in-virtual services.<br />
► DIV ACCESS to cause a VSAM open for the data set and indicate access mode<br />
(read/update).<br />
► GETMAIN to allocate a window in virtual storage where the LDS will be mapped totally or<br />
in pieces.<br />
► DIV MAP to enable the viewing of the data object by establishing an association between<br />
a program-provided area (window) and the data object. The area may be in an address<br />
space, data space, or hiperspace.<br />
No actual I/O is done until the program references the data in the window. The reference will<br />
result in a page fault which causes data-in-virtual services to read the data from the linear<br />
data set into the window.<br />
DIV SAVE can be used to write out changes to the data object. DIV RESET can be used to<br />
discard changes made in the window since the last SAVE operation.<br />
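A rough, non-z/OS analogy for the IDENTIFY/ACCESS/MAP/SAVE sequence is memory-mapping a file: the bytes appear as ordinary storage, pages are read on first reference, and an explicit flush plays the role of DIV SAVE. The sketch below uses Python's mmap on a temporary file purely as an analogy; it is not DIV itself:

```python
import mmap
import os
import tempfile

def mapped_update():
    """Update a file through a memory-mapped window, flush, and read it back."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"\x00" * 4096)              # a 4 KB "linear data set" analog
        with open(path, "r+b") as f:
            window = mmap.mmap(f.fileno(), 4096)  # analog of DIV MAP: file as storage
            window[0:4] = b"ABCD"                 # reference/update through the window
            window.flush()                        # analog of DIV SAVE: write changes back
            window.close()
        with open(path, "rb") as f:
            return f.read(4)                      # changes persisted to the file
    finally:
        os.close(fd)
        os.remove(path)

print(mapped_update())  # b'ABCD'
```

As with DIV, no explicit read or write calls appear in the update path: touching the window triggers the paging machinery, and the flush corresponds to saving changed pages to the object.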
4.43 VSAM resource pool<br />
A VSAM resource pool is formed by:<br />
► I/O control blocks<br />
► A buffer pool (a set of equal-sized buffers)<br />
A VSAM resource pool can be shared by VSAM clusters, improving the effectiveness of<br />
these buffers.<br />
Four types of VSAM buffering techniques exist:<br />
► Non-shared resource (NSR)<br />
► Local shared resource (LSR)<br />
► Global shared resource (GSR)<br />
► Record-level sharing (RLS)<br />
Figure 4-53 VSAM resource pool<br />
VSAM resource pool<br />
Buffering is one of the key aspects as far as I/O performance is concerned. A VSAM buffer is<br />
a virtual storage area where the CI is transferred during an I/O operation. In VSAM KSDS<br />
there are two types of buffers: buffers for data CIs and buffers for index CIs. A buffer pool is a<br />
set of buffers with the same size. A resource pool is a buffer pool with several control blocks<br />
describing the pool and describing the clusters with CIs in the resource pool.<br />
The objective of a buffer pool is to avoid I/O operations in random accesses (due to re-visiting<br />
data) and to make these I/O operations more efficient in sequential processing, thereby<br />
improving performance.<br />
For more efficient use of virtual storage, buffer pools can be shared among clusters using<br />
locally or globally shared buffer pools. There are four types of resource pool management,<br />
called modes, defined according to the technique used to manage them:<br />
► Not shared resources (NSR)<br />
► Local shared resources (LSR)<br />
► Global shared resources (GSR)<br />
► Record-level shared resources (RLS)<br />
These modes can be declared in the ACB macro of the VSAM data set (MACRF keyword)<br />
and are described in the following section.<br />
176 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
4.44 VSAM: Buffering modes<br />
Figure 4-54 VSAM LSR buffering mode<br />
VSAM buffering modes<br />
The VSAM buffering modes that you can use are NSR, LSR, GSR, and RLS.<br />
Non-shared resource (NSR)<br />
Non-shared resource (NSR) is the default VSAM buffering technique. It has the following<br />
characteristics:<br />
► The resource pool is implicitly constructed at data set open time.<br />
► The buffers are not shared among VSAM data sets; only one cluster has CIs in this<br />
resource pool.<br />
► Buffers are located in the private area.<br />
► For sequential reads, VSAM uses the read-ahead function: when the application finishes<br />
processing half the buffers, VSAM schedules an I/O operation for that half of the buffers.<br />
This continues until a CA boundary is encountered; the application must wait until the last<br />
I/O to the CA is done before proceeding to the next CA. The I/O operations are always<br />
scheduled within CA boundaries.<br />
► For sequential writes, VSAM postpones the writes to DASD until half the buffers are filled<br />
by the application. Then VSAM schedules an I/O operation to write that half of the buffers<br />
to DASD. The I/O operations are always scheduled within CA boundaries.<br />
► CIs are discarded as soon as they are used, using a sequential algorithm to keep CIs in<br />
the resource pool.<br />
► There is dynamic addition <strong>of</strong> strings. Strings are like cursors; each string represents a<br />
position in the data set for the requested record.<br />
► For random access there is no look-ahead, but the algorithm is still the sequential one.<br />
NSR is used by high-level languages. Since buffers are managed by a sequential algorithm,<br />
NSR is not the best choice for random processing. For applications using NSR, consider<br />
using system-managed buffering, discussed in 4.45, “VSAM: System-managed buffering<br />
(SMB)” on page 179.<br />
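The deferred-write behavior described above for NSR sequential output can be modeled with a few lines of arithmetic. This is a simplified illustration of the idea, not VSAM internals, and it ignores CA boundaries:

```python
def deferred_writes(num_records, num_buffers):
    """Count the I/O operations an NSR-style deferred write would issue:
    records accumulate in buffers and are written out half a pool at a time."""
    half = num_buffers // 2
    ios, pending = 0, 0
    for _ in range(num_records):
        pending += 1
        if pending == half:       # half the buffers are full: schedule one I/O
            ios += 1
            pending = 0
    if pending:                   # final partial flush at close time
        ios += 1
    return ios

# 100 records through 10 buffers: one I/O per 5 records
print(deferred_writes(100, 10))  # 20
```

Writing 100 records costs only 20 I/O operations instead of 100, which is the point of postponing the writes until half the pool is filled.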
Local shared resource (LSR)<br />
An LSR resource pool is suitable for random processing, not for sequential processing. LSR<br />
has the following characteristics:<br />
► Shared among VSAM clusters accessed by tasks in the same address space.<br />
► Located in the private area and ESO hiperspace. With hiperspace, VSAM buffers are<br />
located in expanded storage to improve the processing of VSAM clusters. With<br />
z/Architecture, ESO hiperspaces are mapped in main storage.<br />
► Explicitly constructed with the BLDVRP macro, before the OPEN.<br />
► Managed by the least recently used (LRU) algorithm, that is, the most referenced CIs<br />
are kept in the resource pool. This is very suitable for random processing.<br />
► LSR expects that CIs in buffers are re-visited.<br />
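The LRU management that LSR applies can be sketched as a small cache that keeps the most recently referenced CIs and evicts the least recently used one when the pool is full. This is illustrative only, not the VSAM implementation:

```python
from collections import OrderedDict

class BufferPool:
    """Tiny LRU model of an LSR resource pool: CI number -> buffered CI."""
    def __init__(self, size):
        self.size = size
        self.buffers = OrderedDict()
        self.reads = 0                            # I/Os that went to DASD

    def get_ci(self, ci_number):
        if ci_number in self.buffers:
            self.buffers.move_to_end(ci_number)   # re-visit: mark most recent
            return self.buffers[ci_number]
        self.reads += 1                           # miss: simulate a DASD read
        if len(self.buffers) == self.size:
            self.buffers.popitem(last=False)      # evict least recently used CI
        self.buffers[ci_number] = f"CI-{ci_number}"
        return self.buffers[ci_number]

pool = BufferPool(size=2)
for ci in (1, 2, 1, 1, 3, 1):
    pool.get_ci(ci)
print(pool.reads)  # 3: each distinct CI read once; the re-visits hit the pool
```

The re-visits of CI 1 never touch DASD, which is exactly the access pattern LSR is designed for.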
Global shared resource (GSR)<br />
GSR is similar to the LSR buffering technique. GSR differs from LSR in the following ways:<br />
► The buffer pool is shared among VSAM data sets accessed by tasks in multiple address<br />
spaces in the same z/OS image.<br />
► Buffers are located in CSA.<br />
► The code using GSR must be in the supervisor state.<br />
► Buffers cannot use hiperspace.<br />
► The separate index resource pools are not supported for GSR.<br />
GSR is not commonly used by applications, so consider using VSAM RLS instead.<br />
Record-level sharing (RLS)<br />
Record-level sharing (RLS) is the implementation of VSAM data sharing. Record-level<br />
sharing is discussed in detail in Chapter 7, “DFSMS Transactional VSAM Services” on<br />
page 377.<br />
For more information about NSR, LSR, and GSR, refer to 7.2, “Base VSAM buffering” on<br />
page 380 and also to the IBM Redbooks publication VSAM Demystified, SG24-6105.<br />
4.45 VSAM: <strong>System</strong>-managed buffering (SMB)<br />
► Only for SMS-managed extended format data sets and NSR buffering mode<br />
► Requested through RECORD_ACCESS_BIAS in the data class or the ACCBIAS<br />
subparameter of AMP in the JCL DD statement<br />
► For ACCBIAS equal to SYSTEM, VSAM decides based on the MACRF parameter of the<br />
ACB and the MSR in the storage class<br />
► Determines the optimum number of index and data buffers<br />
► For random access, VSAM changes the buffering management algorithm from NSR to LSR<br />
Figure 4-55 System-managed buffering (SMB)<br />
System-managed buffering (SMB)<br />
SMB is a feature of VSAM that was introduced in DFSMS V1R4. SMB enables VSAM to:<br />
► Determine the optimum number of index and data buffers<br />
► Change the buffering algorithm declared in the application program's ACB MACRF<br />
parameter, from NSR (sequential) to LSR (least recently used - LRU)<br />
Usually, SMB allocates many more buffers than are allocated without SMB. Performance<br />
improvements can be dramatic with random access (particularly when few buffers were<br />
available). The use of SMB is transparent to the application; no application changes are<br />
needed.<br />
SMB is available to a data set when all the following conditions are met:<br />
► It is an SMS-managed data set.<br />
► It is in extended format (DSNTYPE = EXT in the data class).<br />
► The application opens the data set for NSR processing.<br />
SMB is invoked or disabled through one of the following methods:<br />
1. The Record Access Bias data class field.<br />
2. The ACCBIAS subparameter of AMP in the JCL DD statement. JCL information takes<br />
precedence over data class information.<br />
SMB processing techniques<br />
If all the required conditions are met, SMB is invoked when the option SYSTEM or an SMB<br />
processing technique is specified in the fields described previously. SMB is disabled when<br />
USER is entered instead (USER is the default). Because JCL information takes precedence<br />
over data class information, installations can enable or disable SMB for individual executions.<br />
The processing technique information is needed because SMB must choose an adequate<br />
algorithm for managing the CIs in the resource pool. SMB accepts the ACB MACRF options<br />
when the I/O operation is requested. For this reason, the installation must accurately specify<br />
the processing type, through the ACCBIAS options:<br />
► Direct Optimized (DO)<br />
SMB optimizes for totally random record access. When this technique is used, VSAM<br />
changes the buffering management from NSR to LSR.<br />
► Direct Weighted (DW)<br />
The majority is direct access to records, with some sequential.<br />
► Sequential Optimized (SO)<br />
Totally sequential access.<br />
► Sequential Weighted (SW)<br />
The majority is sequential access, with some direct access to records.<br />
When SYSTEM is used in JCL or in the data class, SMB chooses the processing technique<br />
based on the MACRF parameter of the ACB.<br />
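As an illustration only, the relationship between the access pattern and the four techniques can be pictured as a small decision table. The percentage input below is invented for the sketch; SMB really derives its choice from the ACB MACRF options and storage class values, not from a fraction:

```python
def choose_accbias(direct_fraction):
    """Map an (assumed) fraction of direct requests to an SMB technique.
    Purely illustrative thresholds, not SMB's real decision logic."""
    if direct_fraction == 1.0:
        return "DO"   # Direct Optimized: totally random access
    if direct_fraction == 0.0:
        return "SO"   # Sequential Optimized: totally sequential access
    if direct_fraction >= 0.5:
        return "DW"   # Direct Weighted: mostly direct, some sequential
    return "SW"       # Sequential Weighted: mostly sequential, some direct

print([choose_accbias(f) for f in (1.0, 0.75, 0.25, 0.0)])
```

The point of the table is the same as in the text: the more the workload leans toward direct access, the more index buffers and LSR-style management pay off; the more sequential it is, the more read-ahead buffers matter.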
For more information about the use <strong>of</strong> SMB, refer to VSAM Demystified, SG24-6105.<br />
4.46 VSAM buffering enhancements with z/OS V1R9<br />
► Provide a way to limit the storage SMB DO uses for a large number of data sets at once,<br />
without changing the JCL for each job step<br />
► The JCL AMP parameter SMBVSP limits the amount of virtual buffer space; the data<br />
class SMBVSP value can be specified using ISMF<br />
► The optional JCL AMP keyword MSG=SMBBIAS requests a VSAM open message<br />
(IEC161I) indicating which SMB access bias is actually used for a particular component<br />
being opened<br />
Figure 4-56 VSAM buffering enhancements with z/OS V1R9<br />
VSAM system-managed buffering enhancements<br />
The JCL AMP parameter SMBVSP keyword lets you limit the amount of virtual buffer space to<br />
acquire for direct optimized processing when opening a data set. Before z/OS V1R9,<br />
changing that value required editing the JCL statement, which was not practical when running<br />
a batch job. In z/OS V1R9, VSAM provides a simpler and more efficient way of modifying the<br />
SMBVSP value, by specifying it for a data class using ISMF. The system-managed buffering<br />
(SMB) field on the ISMF DATA CLASS DEFINE/ALTER panel lets you specify the value in<br />
kilobytes or megabytes. SMB then uses the specified value for any data set defined to that<br />
data class. With this method, the effect of modifying the SMBVSP keyword is no longer limited<br />
to one single job step, and no longer requires editing individual JCL statements.<br />
JCL AMP keyword<br />
In addition, a new JCL AMP keyword, MSG=SMBBIAS, lets you request a message that<br />
displays the record access bias that is specified on the ACCBIAS keyword or chosen by SMB<br />
in the absence <strong>of</strong> a user selection. The IEC161I message is issued for each data set that is<br />
opened. The new keyword is optional and the default is to not issue a message. Avoid using<br />
the keyword when a large number <strong>of</strong> data sets are opened in quick succession.<br />
AMP=('subparameter[,subparameter]...')<br />
AMP='subparameter[,subparameter]...'<br />
//DS1 DD DSNAME=VSAMDATA,AMP=('BUFSP=200,OPTCD=IL,RECFM=FB',<br />
// 'STRNO=6,MSG=SMBBIAS')<br />
MSG=SMBBIAS<br />
When you specify MSG = SMBBIAS in a JCL DD statement, the system issues message<br />
IEC161I to indicate which access bias SMB has chosen. The default is no message.<br />
IEC161I (return code 001)<br />
rc[(sfi)]-ccc,jjj,sss,ddname,dev,volser,xxx,dsname, cat<br />
The first IEC161I (return code 001) message indicates the access bias used by SMB. The sfi<br />
field can be:<br />
► DO - Direct Optimized<br />
► DW - Direct Weighted<br />
► SO - Sequential Optimized<br />
► SW - Sequential Weighted<br />
► CO - Create optimized<br />
► CR - Create Recovery<br />
When you code MSG=SMBBIAS in your JCL to request a VSAM open message, the<br />
message indicates which SMB access bias is actually used for the component being opened:<br />
15.00.02 SYSTEM1 JOB00028 IEC161I<br />
001(DW)-255,TESTSMB,STEP2,VSAM0001,,,SMB.KSDS,,<br />
IEC161I SYS1.MVSRES.MASTCAT<br />
15.00.02 SYSTEM1 JOB00028 IEC161I 001(0000002B 00000002 00000000<br />
00000000)-255,TESTSMB,STEP2,<br />
IEC161I VSAMDATA,,,SMB.KSDS,,SYS1.MVSRES.MASTCAT<br />
SMB overview
System-managed buffering (SMB), a feature of DFSMSdfp, supports batch application
processing.
SMB uses formulas to calculate the storage and buffer numbers needed for a specific access
type. SMB takes the following actions:
► It changes the defaults for processing VSAM data sets. This enables the system to take
better advantage of current and future hardware technology.
► It initiates a buffering technique to improve application performance. The technique is one
that the application program does not specify. You can choose or specify any of the four
processing techniques that SMB implements:
Direct Optimized (DO) The DO processing technique optimizes for totally random
record access. This is appropriate for applications that
access records in a data set in totally random order. This
technique overrides the user specification for nonshared
resources (NSR) buffering with a local shared resources
(LSR) implementation of buffering.
Sequential Optimized (SO) The SO technique optimizes processing for record access
that is in sequential order. This is appropriate for backup
and for applications that read the entire data set or a large
percentage of the records in sequential order.
Direct Weighted (DW) The majority is direct processing, some is sequential. DW
processing provides the minimum read-ahead buffers for
sequential retrieval and the maximum index buffers for
direct requests.
Sequential Weighted (SW) The majority is sequential processing, some is direct. This
technique uses read-ahead buffers for sequential requests
and provides additional index buffers for direct requests.
The read-ahead will not be as large as the amount of data
transferred with SO.
4.47 VSAM SMB enhancement with z/OS V1R11
Performance of SMB suffers when too few index buffers are
allocated at open for small data sets that grow over time
SMB (during VSAM OPEN) calculates how to increase
index buffer space
Users now do not have to close and reopen data sets to
obtain more index buffers
Changes to the SMBVSP parameter
SMBVSP=nnK or SMBVSP=nnM
Specifies the amount of virtual buffer space to acquire for
direct optimized processing when opening the data set
Figure 4-57 VSAM SMB performance enhancement
SMB performance
Performance of System-managed buffering (SMB) Direct Optimized access bias was
adversely affected when a VSAM data set continued to grow. This is because the original
allocation for index buffer space becomes increasingly deficient as the data set size
increases. For data buffer space, this problem is avoided by using the SMBVSP subparameter
of the JCL AMP parameter. For index buffer space, however, the only way to adjust the
allocation was to close and reopen the data set. Changes have been made in z/OS V1R11
VSAM to avoid the necessity of closing and reopening the data set.
SMBVSP parameter
Prior to z/OS V1R11, when SMBVSP was used to specify the amount of storage for SMB
Direct Optimized access bias, the value was used by VSAM OPEN to calculate the number of
data buffers (BUFND). The number of index buffers (BUFNI), in contrast, was calculated by
VSAM OPEN based on the current high-used CI. That is, it was based upon the data set size
at open time.
With z/OS V1R11, VSAM OPEN calculates the BUFNI to be allocated using 20% of the value
of SMBVSP, or the data set size, if the calculation using it actually yields a higher BUFNI. The
use of SMBVSP in calculating BUFND remains unchanged. Now, as a KSDS grows, provision
can be made for a better storage allocation for both data and index buffers by use of the
SMBVSP parameter.
The value nn is 1 to 2048000 (kilobytes) or 1 to 2048 (megabytes).
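The V1R11 choice described above can be sketched as a small calculation. This is only an illustrative model of the documented rule (use whichever of the two calculations yields the higher BUFNI); the exact values VSAM OPEN derives from SMBVSP and the high-used CI are internal, and the per-buffer size used here is an assumption.

```python
def choose_bufni(smbvsp_bytes, bufni_from_dataset_size, index_buffer_size=4096):
    """Illustrative model of the z/OS V1R11 BUFNI choice for SMB Direct
    Optimized: compare a value derived from 20% of SMBVSP against the
    value derived from the data set size at open time, and use whichever
    is higher. index_buffer_size is a hypothetical per-buffer size."""
    bufni_from_smbvsp = (smbvsp_bytes * 20 // 100) // index_buffer_size
    return max(bufni_from_smbvsp, bufni_from_dataset_size)

# A small KSDS opened with SMBVSP=2048M: the SMBVSP-derived value wins,
# so a data set that grows after OPEN still has ample index buffers.
print(choose_bufni(2048 * 1024 * 1024, bufni_from_dataset_size=50))

# Pre-V1R11-style behavior falls out when SMBVSP is small: the
# data-set-size calculation dominates, as it always did before.
print(choose_bufni(1024, bufni_from_dataset_size=500))
```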
SMBVSP is only a parameter for SMB Direct Optimized access bias. It can be specified in the
SMS DATACLAS construct or through JCL as shown in Figure 4-58.
//STEP000 EXEC PGM=TVC15
//SYSPRINT DD SYSOUT=*
//KSDS0001 DD DSN=PROD.KSDS,DISP=SHR,AMP=('ACCBIAS=DO,SMBVSP=2048M')
Figure 4-58 SMBVSP specification in JCL
Note: For further details about SMB, see z/OS DFSMS Using Data Sets, SC26-7410. For
further details about how to invoke SMB and about specifying Direct Optimized (DO) and
SMBVSP values in a DATACLAS construct, see z/OS DFSMS Storage Administration
Reference (for DFSMSdfp, DFSMSdss, DFSMShsm), SC26-7402. For information about
specification with JCL, see z/OS MVS JCL Reference, SA22-7597.
SMBVSP parameter considerations
You can use the SMBVSP parameter to restrict the size of the pool that is built for the data
component or to expand the size of the pool for the index records. The SMBHWT parameter
can be used to provide buffering in Hiperspace in combination with virtual buffers for the
data component.
The value of the SMBHWT parameter is used as a multiplier of the virtual buffer space for
Hiperspace buffers. This can reduce the size required for an application region, but it does
have implications for processor cycle requirements: all application requests must be satisfied
through a virtual buffer address. If the required data is in a Hiperspace buffer, the data must
be moved to a virtual buffer after "stealing" a virtual buffer and moving that buffer's contents
to a least recently used (LRU) Hiperspace buffer.
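To make the buffer-steal mechanics above concrete, here is a toy model of a virtual buffer pool backed by a Hiperspace pool. This is our own sketch for illustration: the pool sizes, names, and LRU policy details are assumptions, not VSAM internals.

```python
from collections import OrderedDict

class TwoLevelBufferPool:
    """Toy model of virtual buffers backed by Hiperspace buffers.
    Every request must end up in a virtual buffer; a hit in the
    Hiperspace level costs extra buffer-to-buffer copies, which is the
    processor-cycle implication mentioned in the text."""

    def __init__(self, virtual_slots, hiper_slots):
        self.virtual = OrderedDict()   # CI number -> data, kept in LRU order
        self.hiper = OrderedDict()
        self.virtual_slots = virtual_slots
        self.hiper_slots = hiper_slots
        self.moves = 0                 # buffer-to-buffer copies performed

    def get(self, ci):
        if ci in self.virtual:         # already in a virtual buffer
            self.virtual.move_to_end(ci)
            return self.virtual[ci]
        if ci in self.hiper:           # Hiperspace hit: copy up
            data = self.hiper.pop(ci)
            self.moves += 1
        else:                          # miss: simulate a read from DASD
            data = 'CI-%d' % ci
        if len(self.virtual) >= self.virtual_slots:
            # Steal the least recently used virtual buffer and demote
            # its contents to a Hiperspace buffer.
            old_ci, old_data = self.virtual.popitem(last=False)
            self.moves += 1
            self.hiper[old_ci] = old_data
            if len(self.hiper) > self.hiper_slots:
                self.hiper.popitem(last=False)
        self.virtual[ci] = data
        return data

pool = TwoLevelBufferPool(virtual_slots=2, hiper_slots=4)
for ci in (1, 2, 3, 1):
    pool.get(ci)
print(pool.moves)  # each Hiperspace round trip added copy work
```

Rereferencing CI 1 after it was demoted costs two moves (one to copy it back up, one to demote the stolen buffer's contents), which is why the text flags the processor-cycle cost of Hiperspace buffering.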
4.48 VSAM enhancements
Data compression for KSDS
Extended addressability
Record-level sharing (RLS)
System-managed buffering (SMB)
Data striping and multi-layering
DFSMS data set separation
Free space release
Figure 4-59 VSAM enhancements
VSAM enhancements
The following list presents the major VSAM enhancements since DFSMS V1R2. For the
majority of these functions, extended format is a prerequisite. The enhancements are:
► Data compression for KSDS - This is useful for improving I/O mainly for write-once,
read-many clusters.
► Extended addressability - This allows data components larger than 4 GB. The limitation
was caused by an RBA field of 4 bytes; RBA now has an 8-byte length.
► Record-level sharing (RLS) - This allows VSAM data sharing across z/OS systems in a
Parallel Sysplex.
► System-managed buffering (SMB) - This improves the performance of random NSR
processing.
► Data striping and multi-layering - This improves sequential access performance through
parallel I/Os to several volumes (stripes).
► DFSMS data set separation - This allows the allocation of clusters in distinct physical
control units.
► Free space release - As with non-VSAM data sets, the free space that is not used at the
end of the data component can be released at deallocation.
4.49 Data set separation
SMS-managed data sets within a group are kept separate
On a physical control unit (PCU) level or
Volume level
From all the other data sets in the same group
Poor performance and single point of failure may occur
when critical data sets are allocated on the same volumes
During data set allocation, SMS attempts to separate data
sets listed in a separation group onto different extent pools
In the storage subsystem and volumes
Facility to separate critical data sets onto different extent
pools and volumes
Reduces I/O contention and single point of failure
Figure 4-60 Data set separation
Data set separation with z/OS V1R11
Data set separation allows you to designate groups of data sets in which all SMS-managed
data sets within a group are kept separate, on the physical control unit (PCU) level or the
volume level, from all the other data sets in the same group.
When allocating new data sets or extending existing data sets to new volumes, SMS volume
selection frequently calls SRM to select the best volumes. Unfortunately, SRM may select the
same set of volumes that currently have the lowest I/O delay. Poor performance or single
points of failure may occur when a set of functionally related critical data sets is allocated onto
the same volumes. SMS provides a function to separate critical data sets, such as DB2
partitions, onto different volumes to prevent DASD hot spots and reduce I/O contention.
Volume separation groups
z/OS V1R11 provides a solution by expanding the scope of the data set separation function,
previously available at the PCU level only, to the volume level. The user defines the volume
separation groups in a data set separation profile. During data set allocation, SMS attempts
to separate data sets that are specified in the same separation group onto different extent
pools and volumes.
This provides a facility for an installation to separate functionally related critical data sets onto
different extent pools and volumes for better performance and to avoid single points of failure.
Important: Use data set separation only for a small set of mission-critical data.
4.50 Data set separation syntax
Data set separation function syntax:
Earlier syntax supports only PCU separation
{SEPARATIONGROUP | SEP}
{FAILLEVEL | FAIL}({PCU | NONE})
{DSNLIST | DSNS | DSN}(data-set-name[,data-set-name,...])
New syntax supports both PCU and volume separation
{SEPARATIONGROUP | SEP}({PCU | VOL})
TYPE({REQ | PREF})
{DSNLIST | DSNS | DSN}(data-set-name[,data-set-name,...])
Wildcard characters (* and %) are supported for DSN at
the third-level qualifier (for example, A.B.*)
Earlier syntax for PCU separation continues to function
Both earlier and new syntax can coexist in the same separation profile
Figure 4-61 Defining separation profiles and syntax
Data set profile for separation
To use data set separation, you must create a data set separation profile and specify the
name of the profile in the base configuration. During allocation, SMS attempts to separate the
data sets listed in the profile.
A data set separation profile contains at least one data set separation group. Each data set
separation group specifies whether separation is at the PCU or volume level and whether it is
required or preferred. It also includes a list of data set names to be separated from each other
during allocation.
Restriction: You cannot use data set separation when allocating non-SMS-managed data
sets or during use of full volume copy utilities such as PPRC.
Separation profile
The syntax for the data set separation profiles is defined as follows:
SEPARATIONGROUP(PCU) This indicates that separation is on the PCU level.
SEPARATIONGROUP(VOLUME) This indicates that separation is on the volume level.
VOLUME may be abbreviated as VOL.
TYPE(REQUIRED) This indicates that separation is required. SMS fails the
allocation if the specified data set or data sets cannot be
separated from other data sets on the specified level
(PCU or volume). REQUIRED may be abbreviated as
REQ or R.
TYPE(PREFERRED) This indicates that separation is preferred. SMS allows
the allocation if the specified data set or data sets cannot
be separated from other data sets on the specified level.
SMS allocates the data sets and issues an allocation
message that indicates that separation was not honored
for a successful allocation. PREFERRED may be
abbreviated as PREF or P.
DSNLIST(data-set-name[,...]) This specifies the names of the data sets that are to be
separated. The data set names must follow the naming
convention described in z/OS MVS JCL Reference,
SA22-7597. You can specify the same data set name in
multiple data set separation groups. Wildcard characters
are supported, beginning with the third qualifier. For
example, you can specify XYZ.TEST.* but not XYZ.*.
The wildcard characters are as follows:
► * - This indicates that either a qualifier or one or
more characters within a qualifier can occupy that
position. An asterisk (*) can precede or follow a set
of characters.
► ** - This indicates that zero (0) or more qualifiers can
occupy that position. A double asterisk (**) cannot
precede or follow any characters; it must be
preceded or followed by either a period or a blank.
► % - This indicates that exactly one alphanumeric or
national character can occupy that position. You can
specify up to eight % characters in each qualifier.
Note: If only one data set name is specified with DSNLIST, the data set name must contain
at least one wildcard character.
Earlier syntax
The following earlier form of the syntax for SEPARATIONGROUP is tolerated by z/OS V1R11.
It supports separation at the PCU level only.
SEPARATIONGROUP|SEP
FAILLEVEL|FAIL ({PCU|NONE})
DSNLIST|DSNS|DSN (data-set-name[,data-set-name,...])
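The wildcard rules described above can be illustrated with a small matcher. This sketch is our own interpretation of the rules, for illustration only; it does not reproduce SMS's real matching, and it does not enforce the restriction that wildcards begin at the third qualifier.

```python
import re

def _qualifier_regex(qual):
    """Turn one mask qualifier into a regex: '*' matches a run of
    characters within the qualifier, '%' matches exactly one
    alphanumeric or national character."""
    parts = []
    for ch in qual:
        if ch == '*':
            parts.append('[A-Z0-9@#$]*')
        elif ch == '%':
            parts.append('[A-Z0-9@#$]')
        else:
            parts.append(re.escape(ch))
    return ''.join(parts)

def dsn_matches(mask, dsname):
    """True if dsname matches the separation-group mask."""
    def match(mquals, nquals):
        if not mquals:
            return not nquals
        head, rest = mquals[0], mquals[1:]
        if head == '**':
            # '**' matches zero or more whole qualifiers.
            return any(match(rest, nquals[i:]) for i in range(len(nquals) + 1))
        if not nquals:
            return False
        if re.fullmatch(_qualifier_regex(head), nquals[0]):
            return match(rest, nquals[1:])
        return False
    return match(mask.split('.'), dsname.split('.'))

print(dsn_matches('XYZ.TEST.*', 'XYZ.TEST.DATA'))   # True
print(dsn_matches('XYZ.TEST.*', 'XYZ.PROD.DATA'))   # False
print(dsn_matches('A.B.D%TA', 'A.B.DATA'))          # True
print(dsn_matches('A.B.**', 'A.B'))                 # True
```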
4.51 Data facility sort (DFSORT)
Descending order: Paula, Miriam, Marie, Enete, Dovi, Cassio, Carolina, Ana
Figure 4-62 DFSORT example
Data facility sort (DFSORT)
The DFSORT licensed program is a high-performance data arranger for z/OS users. Using
DFSORT you can sort, merge, and copy data sets using EBCDIC, z/Architecture decimal, or
binary keys. It also helps you to analyze data and produce detailed reports using the
ICETOOL utility or the OUTFIL function. DFSORT is an optional feature of z/OS.
DFSORT, together with DFSMS and RACF, forms the strategic product base for the evolving
system-managed storage environment. DFSORT is designed to optimize the efficiency and
speed with which operations are completed through synergy with processor, device, and
system features (for example, memory objects, Hiperspace, data space, striping,
compression, extended addressing, DASD and tape device architecture, processor memory,
and processor cache).
DFSORT example
The simple example in Figure 4-62 illustrates how DFSORT merges data sets by combining
two or more files of sorted records to form a single data set of sorted records.
You can use DFSORT to do simple application tasks such as alphabetizing a list of names, or
you can use it to aid complex tasks such as taking inventory or running a billing system. You
can also use DFSORT's record-level editing capability to perform data management tasks.
For most of the processing done by DFSORT, the whole data set is affected. However, certain
forms of DFSORT processing involve only certain individual records in that data set.
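The merge operation described above combines already-sorted inputs into one sorted output in a single pass. DFSORT itself is driven through JCL and control statements, so the following fragment is purely an illustration of the concept, using hypothetical name lists:

```python
import heapq

# Two input files whose records are already in ascending order,
# as a merge operation requires.
file_a = ['Ana', 'Carolina', 'Dovi', 'Paula']
file_b = ['Cassio', 'Enete', 'Marie', 'Miriam']

# heapq.merge reads both inputs once and yields a single sorted
# stream, which is what a MERGE does with sorted data sets.
merged = list(heapq.merge(file_a, file_b))
print(merged)
```

Because the inputs are already ordered, the merge never needs to hold more than one record per input in memory, which is why merging is cheaper than re-sorting the combined data.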
Source data set: Marie, Carolina, Ana, Cassio, Dovi, Miriam, Enete, Paula
Ascending order: Ana, Carolina, Cassio, Dovi, Enete, Marie, Miriam, Paula
DFSORT functions
DFSORT adds the ability to do faster and easier sorting, merging, copying, reporting, and
analysis of your business information, as well as versatile data handling at the record, fixed
position/length or variable position/length field, and bit level. While sorting, merging, or
copying data sets, you can also:
► Select a subset of records from an input data set. You can include or omit records that
meet specified criteria. For example, when sorting an input data set containing records of
course books from many different school departments, you can sort the books for only one
department.
► Reformat records, add or delete fields, and insert blanks, constants, or binary zeros. For
example, you can create an output data set that contains only certain fields from the input
data set, arranged differently.
► Sum the values in selected records while sorting or merging (but not while copying). In the
example of a data set containing records of course books, you can use DFSORT to add up
the dollar amounts of books for one school department.
► Create multiple output data sets and reports from a single pass over an input data set. For
example, you can create a different output data set for the records of each department.
► Sort, merge, include, or omit records according to the collating rules defined in a selected
locale.
► Alter the collating sequence when sorting or merging records (but not while copying). For
example, you can have the lowercase letters collate after the uppercase letters.
► Sort, merge, or copy Japanese data if the IBM Double Byte Character Set Ordering
Support (DBCS Ordering, the 5665-360 Licensed Program, Release 2.0, or an equivalent
product) is used with DFSORT to process the records.
DFSORT and ICETOOL
DFSORT has utilities such as ICETOOL, which is a multipurpose DFSORT utility that uses
the capabilities of DFSORT to perform multiple operations on one or more data sets in a
single step.
Tip: You can use DFSORT's ICEGENER facility to achieve faster and more efficient
processing for applications that are set up to use the IEBGENER system utility. For more
information, see z/OS DFSORT Application Programming Guide, SC26-7523.
DFSORT customization
Specifying the DFSORT customization parameters is an important task for z/OS system
programmers. Depending on these parameters, DFSORT may use significant system
resources such as CPU, I/O, and especially virtual storage. Uncontrolled use of virtual
storage may cause IPLs due to the lack of available slots in page data sets. Plan to use the
IEFUSI z/OS exit to control products such as DFSORT.
For articles, online books, news, tips, techniques, examples, and more, visit the z/OS
DFSORT home page:
http://www-1.ibm.com/servers/storage/support/software/sort/mvs
4.52 z/OS Network File System (z/OS NFS)
AIX, UNIX, Sun Solaris, HP/UX, and other NFS clients and servers interoperate with z/OS NFS
Figure 4-63 DFSMS Network File System
z/OS Network File System (z/OS NFS)
The z/OS Network File System is a distributed file system that enables users to access UNIX
files and directories that are located on remote computers as though they were local. NFS is
independent of machine types, operating systems, and network architectures. Use the NFS
for file serving (as a data repository) and file sharing between platforms supported by z/OS.
Clients and servers
A client is a computer or process that requests services in the network. A server is a
computer or process that responds to a request for service from a client. A user accesses a
service, which allows the use of data or other resources.
Figure 4-63 illustrates the client-server relationship:
► The upper center portion shows the DFSMS NFS address space server; the lower portion
shows the DFSMS NFS address space client.
► The left side of the figure shows various NFS clients and servers that can interact with the
DFSMS NFS server and client.
► In the center of the figure is the Transmission Control Protocol/Internet Protocol (TCP/IP)
network used to communicate between clients and servers.
With the NFS server, you can remotely access z/OS conventional data sets or z/OS UNIX
files from workstations, personal computers, and other systems that run client software for the
Sun NFS version 2 protocols, the Sun NFS version 3 protocols, and the WebNFS protocols
over a TCP/IP network.
The z/OS NFS server acts as an intermediary to read, write, create, or delete z/OS UNIX files
and MVS data sets that are maintained on an MVS host system. The remote MVS data sets
or z/OS UNIX files are mounted from the host processor to appear as local directories and
files on the client system.
This server makes the strengths of a z/OS host processor (storage management,
high-performance disk storage, security, and centralized data) available to the client
platforms.
With the NFS client, you can give basic sequential access method (BSAM), queued
sequential access method (QSAM), virtual storage access method (VSAM), and z/OS UNIX
users and applications transparent access to data on systems that support the Sun NFS
version 2 protocols and the Sun NFS version 3 protocols.
The Network File System can be used for:
► File sharing between platforms
► File serving (as a data repository)
Supported clients for the NFS server
The z/OS NFS client supports all servers that implement the server portion of the Sun NFS
Version 2 and Version 3 protocols. The z/OS NFS client does not support NFS version 4.
Tested clients for the z/OS NFS server, using the NFS version 4 protocol, are:
► IBM RS/6000® AIX version 5.3
► Sun Solaris version 10
► Enterprise Linux® 4
► Windows® 2000/XP with Hummingbird Maestro 9 and Maestro 10
Other client platforms should work as well because NFS version 4 is an industry-standard
protocol, but they have not been tested by IBM.
NFS client software for other IBM platforms is available from other vendors. You can also
access the NFS server from non-IBM clients that use the NFS version 2 or version 3 protocol,
including:
► DEC stations running DEC ULTRIX version 4.4
► HP 9000 workstations running HP/UX version 10.20
► Sun PC-NFS version 5
► Sun workstations running SunOS or Sun Solaris version 2.5.3
For further information about NFS, refer to z/OS Network File System Guide and Reference,
SC26-7417, and visit:
http://www-1.ibm.com/servers/eserver/zseries/zos/nfs/
4.53 DFSMS Optimizer (DFSMSopt)
Figure 4-64 DFSMS Optimizer: Data set performance summary
DFSMS Optimizer (DFSMSopt)
The DFSMS Optimizer gives you the ability to understand how you manage storage today.
With that information, you can make informed decisions about how you should manage
storage in the future.
These are the analyzers you can use:
► Performance analyzer
► Management class analyzer
► I/O trace analyzer
DFSMS Optimizer uses input data from several sources in the system and processes it using
an extract program that merges the data and builds the Optimizer database.
By specifying different filters, you can produce reports that help you build a detailed storage
management picture of your enterprise. With the report data, you can use the charting facility
to produce color charts and graphs.
The DFSMS Optimizer provides analysis and simulation information for both SMS and
non-SMS users. The DFSMS Optimizer can help you maximize storage use and minimize
storage costs. It provides methods and facilities for you to:
► Monitor and tune DFSMShsm functions such as migration and backup
► Create and maintain a historical database of system and data activity
► Fine-tune an SMS configuration by performing in-depth analysis of:
– Management class policies, including simulations and cost/benefit analysis using your
storage component costs
– Storage class policies for SMS data, with recommendations for both SMS and
non-SMS data
– High I/O activity data sets, including recommendations for placement and simulation for
cache and expanded storage
– Storage hardware performance of subsystems and volumes, including I/O rate,
response time, and caching statistics
► Simulate potential policy changes and understand the costs of those changes
► Produce presentation-quality charts
For more information about the DFSMS Optimizer, see DFSMS Optimizer User's Guide and
Reference, SC26-7047, or visit:
http://www-1.ibm.com/servers/storage/software/opt/
4.54 Data Set Services (DFSMSdss)
//JOB2 JOB accounting information,REGION=nnnnK
//STEP1 EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=A
//DASD1 DD UNIT=3390,VOL=(PRIVATE,SER=111111),DISP=OLD
//TAPE DD UNIT=3490,VOL=SER=TAPE02,
// LABEL=(1,SL),DISP=(NEW,CATLG),DSNAME=USER2.BACKUP
//SYSIN DD *
DUMP INDDNAME(DASD1) OUTDDNAME(TAPE) -
DATASET(INCLUDE(USER2.**,USER3.*))
Figure 4-65 DFSMSdss backing up and restoring volumes and data sets
Data Set Services (DFSMSdss)
DFSMSdss is a direct access storage device (DASD) data and space management tool.
DFSMSdss works on DASD volumes only in a z/OS environment. You can use DFSMSdss to
do the following:
► Copy and move data sets between volumes of like and unlike device types.
Note: Like devices have the same track capacity and number of tracks per cylinder (for
example, 3380 Model D, Model E, and Model K). Unlike DASD devices have different
track capacities (for example, 3380 and 3390), a different number of tracks per cylinder,
or both.
► Dump and restore data sets, entire volumes, or specific tracks.
► Convert data sets and volumes to and from SMS management.
► Compress partitioned data sets.
► Release unused space in data sets.
► Reduce or eliminate DASD free-space fragmentation by consolidating free space on a
volume.
► Implement concurrent copy or FlashCopy in ESS or DS8000 DASD controllers.
Backing up and restoring volumes and data sets with DFSMSdss
You can use the DFSMSdss DUMP command to back up volumes and data sets, and you can
use the DFSMSdss RESTORE command to recover them. You can make incremental
backups of your data sets by specifying a data set DUMP command with RESET and filtering
on the data-set-changed indicator.
The DFSMShsm component of DFSMS provides automated incremental backup, interactive
recovery, and an inventory of what it backs up. If DFSMShsm is used, you should use
DFSMSdss for volume backup of data sets not supported by DFSMShsm and for dumping
SYSRES and special volumes such as the one containing the master catalog, as shown in
Figure 4-65 on page 196. If DFSMShsm is not installed, you can use DFSMSdss for all
volume and data set backups.
4.55 DFSMSdss: Physical and logical processing
Figure 4-66 DFSMSdss physical and logical processing
DFSMSdss: physical and logical processing
Before you begin using DFSMSdss, you should understand the difference between logical
processing and physical processing and how to use data set filtering to select data sets for
processing. DFSMSdss can perform two kinds of processing when executing COPY, DUMP, and
RESTORE commands:
► Logical processing operates against data sets independently of physical device format.
► Physical processing moves data at the track-image level and operates against volumes,
tracks, and data sets.
Each type of processing offers different capabilities and advantages.
During a restore operation, the data is processed the same way it was dumped, because
physical and logical dump tapes have different formats. If a data set is dumped logically, it is
restored logically; if it is dumped physically, it is restored physically. A data set restore
operation from a full volume dump is a physical data set restore operation.
4.56 DFSMSdss: Logical processing<br />
Figure 4-67 DFSMSdss logical processing<br />
Logical processing<br />
A logical copy, dump, or restore operation treats each data set and its associated information<br />
as a logical entity, and processes an entire data set before beginning the next one.<br />
Each data set is moved by tracks from the source device and is potentially written to the target<br />
device as a set <strong>of</strong> data records, allowing data movement between devices with different track<br />
and cylinder configurations. Checking of data record consistency is not performed during the<br />
dump operation.<br />
DFSMSdss performs logical processing if:<br />
► You specify the DATASET keyword with the COPY command. A data set copy is always a<br />
logical operation, regardless <strong>of</strong> how or whether you specify input volumes.<br />
► You specify the DATASET keyword with the DUMP command, and either no input volume is<br />
specified, or LOGINDDNAME, LOGINDYNAM, or STORGRP is used to specify input<br />
volumes.<br />
► The RESTORE command is performed, and the input volume was created by a logical dump.<br />
Catalogs and VTOCs are used to select data sets for logical processing. If you do not specify<br />
input volumes, the catalogs are used to select data sets for copy and dump operations. If you<br />
specify input volumes using the LOGINDDNAME, LOGINDYNAM, or STORGRP keywords on<br />
the COPY or DUMP command, DFSMSdss uses VTOCs to select data sets for processing.<br />
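As an illustrative sketch (data set and volume names are placeholders), a logical data set dump that selects its data sets from a specific input volume via LOGINDYNAM, and therefore via the VTOC, might look like the following:

```jcl
//LOGDUMP  JOB (ACCT),'LOGICAL DUMP',CLASS=A
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//OUTDD    DD  DSN=BACKUP.VOLABC.LOGICAL,UNIT=TAPE,
//             DISP=(NEW,KEEP)
//SYSIN    DD  *
    DUMP DATASET(INCLUDE(ABC.**)) -
         LOGINDYNAM(VOLABC) -
         OUTDDNAME(OUTDD)
/*
```

Because the DATASET keyword is used and the input volume is supplied with LOGINDYNAM, DFSMSdss performs a logical dump; omitting the input volume would cause the catalog, rather than the VTOC, to be used for data set selection.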
When to use logical processing<br />
Use logical processing for the following situations:<br />
► Data is copied to an unlike device type.<br />
Logical processing is the only way to move data between unlike device types.<br />
► Data that may need to be restored to an unlike device is dumped.<br />
► Data must be restored the same way it is dumped.<br />
This is particularly important to bear in mind when making backups that you plan to retain<br />
for a long period <strong>of</strong> time (such as vital records backups). If a backup is retained for a long<br />
period <strong>of</strong> time, it is possible that the device type it originally resided on will no longer be in<br />
use at your site when you want to restore it. This means you will have to restore it to an<br />
unlike device, which can be done only if the backup has been made logically.<br />
► Aliases <strong>of</strong> VSAM user catalogs are to be preserved during copy and restore functions.<br />
Aliases are not preserved for physical processing.<br />
► Unmovable data sets or data sets with absolute track allocation are moved to different<br />
locations.<br />
► Multivolume data sets are processed.<br />
► VSAM and multivolume data sets are to be cataloged as part <strong>of</strong> DFSMSdss processing.<br />
► Data sets are to be deleted from the source volume after a successful dump or copy<br />
operation.<br />
► Both non-VSAM and VSAM data sets are to be renamed after a successful copy or restore<br />
operation.<br />
► You want to control the percentage <strong>of</strong> space allocated on each <strong>of</strong> the output volumes for<br />
copy and restore operations.<br />
► You want to copy and convert a PDS to a PDSE or vice versa.<br />
► You want to copy or restore a data set with an undefined DSORG to an unlike device.<br />
► You want to keep together all parts <strong>of</strong> a VSAM sphere.<br />
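For example, the PDS-to-PDSE conversion mentioned above can be requested on a logical COPY. This is a sketch only; the data set names and the output volume serial are placeholders:

```jcl
//PDS2PDSE JOB (ACCT),'COPY CONVERT',CLASS=A
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
    COPY DATASET(INCLUDE(MY.SOURCE.PDS)) -
         OUTDYNAM(TGT001) -
         CONVERT(PDSE(**)) -
         RENAMEUNCONDITIONAL(MY.TARGET.PDSE)
/*
```

CONVERT(PDSE(**)) asks DFSMSdss to convert all selected partitioned data sets to PDSEs during the copy, and RENAMEUNCONDITIONAL gives the converted copy a new name so that the source PDS is left intact.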
4.57 DFSMSdss: Physical processing<br />
Figure 4-68 DFSMSdss physical processing<br />
Physical processing<br />
Physical processing moves data based on physical track images. Because data movement is<br />
carried out at the track level, only target devices with track sizes equal to those <strong>of</strong> the source<br />
device are supported. Physical processing operates on volumes, ranges <strong>of</strong> tracks, or data<br />
sets. For data sets, it relies only on volume information (in the VTOC and VVDS) for data set<br />
selection, and processes only that part <strong>of</strong> a data set residing on the specified input volumes.<br />
DFSMSdss performs physical processing if:<br />
► You specify the FULL or TRACKS keyword with the COPY or DUMP command. This results in<br />
a physical volume or physical tracks operation.<br />
Attention: Take care when invoking the TRACKS keyword with the COPY and RESTORE<br />
commands. The TRACKS keyword should be used only for a data recovery operation.<br />
For example, you can use it to “repair” a bad track in the VTOC or a data set, or to<br />
retrieve data from a damaged data set. You cannot use it in place <strong>of</strong> a full-volume or a<br />
logical data set operation. Doing so can destroy a volume or impair data integrity.<br />
► You specify the DATASET keyword on the DUMP command and input volumes with the<br />
INDDNAME or INDYNAM parameter. This produces a physical data set dump.<br />
► The RESTORE command is executed and the input volume is created by a physical dump<br />
operation.<br />
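As a sketch (the volume serial and data set names are placeholders), a physical full-volume dump might look like the following; the FULL keyword is what makes this a physical operation:

```jcl
//FULLDUMP JOB (ACCT),'FULL DUMP',CLASS=A
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//INVOL    DD  UNIT=3390,VOL=SER=CACSW3,DISP=OLD
//OUTDD    DD  DSN=BACKUP.CACSW3.FULL,UNIT=TAPE,
//             DISP=(NEW,KEEP)
//SYSIN    DD  *
    DUMP FULL -
         INDDNAME(INVOL) -
         OUTDDNAME(OUTDD)
/*
```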
When to use physical processing<br />
Use physical processing when:<br />
► Backing up system volumes that you might want to restore with a stand-alone DFSMSdss<br />
restore operation.<br />
Stand-alone DFSMSdss restore supports only physical dump tapes.<br />
► Performance is an issue.<br />
Generally, the fastest way (measured by elapsed time) to copy or dump an entire volume is<br />
by using a physical full-volume command. This is primarily because minimal catalog<br />
searching is necessary for physical processing.<br />
► Substituting one physical volume for another or recovering an entire volume.<br />
With a COPY or RESTORE (full volume or track) command, the volume serial number <strong>of</strong> the<br />
input DASD volume can be copied to the output DASD volume.<br />
► Dealing with I/O errors.<br />
Physical processing provides the capability to copy, dump, and restore a specific track or<br />
range <strong>of</strong> tracks.<br />
► Dumping or copying between volumes <strong>of</strong> the same device type but different capacity.<br />
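The volume substitution case above can be sketched as a physical full-volume copy; COPYVOLID copies the input volume serial to the output volume (the volume serials shown are placeholders):

```jcl
//VOLCOPY  JOB (ACCT),'VOLUME COPY',CLASS=A
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
    COPY FULL -
         INDYNAM(SRC001) -
         OUTDYNAM(TGT001) -
         COPYVOLID
/*
```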
4.58 DFSMSdss stand-alone services<br />
Figure 4-69 DFSMSdss stand-alone services<br />
DFSMSdss stand-alone services<br />
The DFSMSdss stand-alone restore function is a single-purpose program. It is designed to<br />
allow the system programmer to restore vital system packs during disaster recovery without<br />
relying on an MVS environment.<br />
Stand-alone services can perform either a full-volume restore or a tracks restore from dump<br />
tapes produced by DFSMSdss or DFDSS and <strong>of</strong>fers the following benefits:<br />
► Provides user-friendly commands to replace the previous control statements<br />
► Supports <strong>IBM</strong> 3494 and 3495 Tape Libraries, and 3590 Tape Subsystems<br />
► Supports IPLing from a DASD volume, in addition to tape and card readers<br />
► Allows you to predefine the operator console to be used during stand-alone services<br />
processing<br />
For detailed information about the stand-alone service and other DFSMSdss information,<br />
refer to z/<strong>OS</strong> DFSMSdss Storage Administration Reference, SC35-0424, and z/<strong>OS</strong><br />
DFSMSdss Storage Administration Guide, SC35-0423, and visit:<br />
http://www-1.ibm.com/servers/storage/s<strong>of</strong>tware/sms/dss/<br />
4.59 Hierarchical Storage Manager (DFSMShsm)<br />
Figure 4-70 DFSMShsm<br />
Hierarchical Storage Manager (DFSMShsm)<br />
Hierarchical Storage Manager (DFSMShsm) is a disk storage management and productivity<br />
product for managing low activity and inactive data. It provides backup, recovery, migration,<br />
and space management functions as well as full-function disaster recovery support.<br />
DFSMShsm improves disk use by automatically managing both space and data availability in<br />
a storage hierarchy.<br />
Availability management is used to make data available by automatically copying new and<br />
changed data sets to backup volumes.<br />
Space management is used to manage DASD space by enabling inactive data sets to be<br />
moved <strong>of</strong>f fast-access storage devices, thus creating free space for new allocations.<br />
DFSMShsm also provides for other supporting functions that are essential to your<br />
installation's environment.<br />
For further information about DFSMShsm, refer to z/<strong>OS</strong> DFSMShsm Storage Administration<br />
Guide, SC35-0421 and z/<strong>OS</strong> DFSMShsm Storage Administration Reference, SC35-0422,<br />
and visit:<br />
http://www-1.ibm.com/servers/storage/s<strong>of</strong>tware/sms/hsm/<br />
4.60 DFSMShsm: Availability management<br />
Figure 4-71 DFSMShsm availability management<br />
Availability management<br />
DFSMShsm backs up your data, automatically or by command, to ensure availability if<br />
accidental loss <strong>of</strong> the data sets or physical loss <strong>of</strong> volumes should occur. DFSMShsm also<br />
allows the storage administrator to copy backup and migration tapes, and to specify that<br />
copies be made in parallel with the original. You can store the copies on site as protection<br />
from media damage, or <strong>of</strong>fsite as protection from site damage. DFSMShsm also provides<br />
disaster backup and recovery for user-defined groups <strong>of</strong> data sets (aggregates) so that you<br />
can restore critical applications at the same location or at an <strong>of</strong>fsite location.<br />
Note: You must also have DFSMSdss to use the DFSMShsm functions.<br />
Availability management ensures that a recent copy <strong>of</strong> your DASD data set exists. The<br />
purpose <strong>of</strong> availability management is to ensure that lost or damaged data sets can be<br />
retrieved at the most current possible level. DFSMShsm uses DFSMSdss as a fast data<br />
mover for backups. Availability management automatically and periodically performs functions<br />
that:<br />
1. Copy all the data sets on DASD volumes to tape volumes<br />
2. Copy the changed data sets on DASD volumes (incremental backup) either to other DASD<br />
volumes or to tape volumes<br />
DFSMShsm minimizes the space occupied by the data sets on the backup volume by using<br />
compression and stacking.<br />
Tasks for availability management functions<br />
The tasks for controlling automatic availability management <strong>of</strong> SMS-managed storage require<br />
adding DFSMShsm commands to the ARCCMDxx parmlib member and specifying attributes<br />
in the SMS storage classes and management classes. It is assumed that the storage classes<br />
and management classes have already been defined.<br />
The attribute descriptions explain the attributes to be added to the previously defined storage<br />
groups and management classes. Similarly, the descriptions <strong>of</strong> DFSMShsm commands relate<br />
to commands to be added to the ARCCMDxx member <strong>of</strong> SYS1.PARMLIB.<br />
Two groups <strong>of</strong> tasks are performed for availability management: dump tasks and backup<br />
tasks. Availability management comprises the following functions:<br />
► Aggregate backup and recovery (ABARS)<br />
► Automatic physical full-volume dump<br />
► Automatic incremental backup<br />
► Automatic control data set backup<br />
► Command dump and backup<br />
► Command recovery<br />
► Disaster backup<br />
► Expiration <strong>of</strong> backup versions<br />
► Fast replication backup and recovery<br />
Command availability management<br />
Commands cause availability management functions to occur, resulting in the following<br />
conditions:<br />
► One data set to be backed up.<br />
► All changed data sets on a volume to be backed up.<br />
► A volume to be dumped.<br />
► A backed-up data set to be recovered.<br />
► A volume to be restored from a dump and forward recovered from later incremental<br />
backup versions. Forward recovery is a process <strong>of</strong> updating a restored volume by<br />
applying later changes as indicated by the catalog and the incremental backup versions.<br />
► A volume to be restored from a dump copy.<br />
► A volume to be recovered from backup versions.<br />
► A specific data set to be restored from a dump volume.<br />
► All expired backup versions to be deleted.<br />
► A fast replication backup version to be recovered from a copy pool.<br />
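The conditions above are triggered by DFSMShsm storage-administrator commands such as the following sketch; the data set name, volume serial, and dump class are placeholders:

```text
BACKDS PAY.MASTER.DATA                            Back up one data set
BACKVOL VOLUMES(PRIM01)                           Back up changed data sets on a volume
BACKVOL VOLUMES(PRIM01) DUMP(DUMPCLASS(ONSITE))   Dump a volume
RECOVER PAY.MASTER.DATA                           Recover a backed-up data set
RECOVER * TOVOLUME(PRIM01) FROMDUMP               Restore a volume from a dump copy
EXPIREBV EXECUTE                                  Delete expired backup versions
```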
4.61 DFSMShsm: Space management<br />
Figure 4-72 DFSMShsm space management<br />
Space management<br />
Space management is the function <strong>of</strong> DFSMShsm that allows you to keep DASD space<br />
available for users in order to meet the service level objectives for your system. The purpose<br />
<strong>of</strong> space management is to manage your DASD storage efficiently. To do this, space<br />
management automatically and periodically performs functions that:<br />
1. Move low activity data sets (using DFSMSdss) from user-accessible volumes to<br />
DFSMShsm volumes<br />
2. Reduce the space occupied by data on both the user-accessible volumes and the<br />
DFSMShsm volumes<br />
DFSMShsm improves DASD space usage by keeping only active data on fast-access storage<br />
devices. It automatically frees space on user volumes by deleting eligible data sets, releasing<br />
overallocated space, and moving low-activity data to lower cost-per-byte devices, even if the<br />
job did not request tape.<br />
Space management functions<br />
The DFSMShsm space management functions are:<br />
► Automatic primary space management <strong>of</strong> DFSMShsm-managed volumes, which includes:<br />
– Deletion <strong>of</strong> temporary data sets<br />
– Deletion <strong>of</strong> expired data sets<br />
– Release <strong>of</strong> unused, over-allocated space<br />
– Migration to DFSMShsm-owned migration level 1 (ML1) volumes (compressed)<br />
► Automatic secondary space management <strong>of</strong> DFSMShsm-owned volumes, which includes:<br />
– Migration-level cleanup, including deletion <strong>of</strong> expired migrated data sets and migration<br />
control data set (MCDS) records<br />
– Moving migration copies from migration level 1 (ML1) to migration level 2 (ML2)<br />
volumes<br />
► Automatic interval migration, initiated when a DFSMShsm-managed volume exceeds a<br />
specified threshold<br />
► Automatic recall <strong>of</strong> user data sets back to DASD volumes, when referenced by the<br />
application<br />
► Space management by command<br />
► Space-saving functions, which include:<br />
– Data compaction and data compression. Compaction provides space savings through<br />
fewer gaps and less control data. Compression provides a more compact way to store<br />
data.<br />
– Partitioned data set (PDS) free space compression.<br />
– Small data set packing (SDSP) data set facility, which allows small data sets to be<br />
packaged in just one physical track.<br />
– Data set reblocking.<br />
It is possible to have more than one z/<strong>OS</strong> image sharing the same DFSMShsm policy. In this<br />
case one <strong>of</strong> the DFSMShsm images is the primary host and the others are secondary. The<br />
primary HSM host is identified by H<strong>OS</strong>T= in the HSM startup and is responsible for:<br />
► Hourly space checks<br />
► During auto backup: CDS backup, backup <strong>of</strong> ML1 data sets to tape<br />
► During auto dump: Expiration <strong>of</strong> dump copies and deletion <strong>of</strong> excess dump VTOC copy<br />
data sets<br />
► During secondary space management (SSM): Cleanup <strong>of</strong> MCDS, migration volumes, and<br />
L1-to-L2 migration<br />
If you are running your z/<strong>OS</strong> HSM images in a sysplex (parallel or basic), you can use<br />
secondary host promotion to allow a secondary image to assume the primary image's tasks if<br />
the primary host fails. Secondary host promotion uses XCF status monitoring to execute the<br />
promotion. To indicate a system as a candidate, issue:<br />
SETSYS PRIMARYH<strong>OS</strong>T(YES)<br />
and<br />
SSM(YES)<br />
4.62 DFSMShsm: Storage device hierarchy<br />
Figure 4-73 Storage device hierarchy<br />
Storage device hierarchy<br />
A storage device hierarchy consists <strong>of</strong> a group <strong>of</strong> storage devices that have different costs for<br />
storing data, different amounts <strong>of</strong> data stored, and different speeds <strong>of</strong> accessing the data,<br />
processed as follows:<br />
Level 0 volumes A volume that contains data sets that are directly accessible by the<br />
user. The volume may be either DFSMShsm-managed or<br />
non-DFSMShsm-managed.<br />
Level 1 volumes A volume owned by DFSMShsm containing data sets that migrated<br />
from a level 0 volume.<br />
Level 2 volumes A volume under control <strong>of</strong> DFSMShsm containing data sets that<br />
migrated from a level 0 volume, from a level 1 volume, or from a<br />
volume not managed by DFSMShsm.<br />
DFSMShsm storage device management<br />
DFSMShsm uses the following three-level storage device hierarchy for space management:<br />
► Level 0 - DFSMShsm-managed storage devices at the highest level <strong>of</strong> the hierarchy; these<br />
devices contain data directly accessible to your application.<br />
► Level 1 and Level 2 - Storage devices at the lower levels <strong>of</strong> the hierarchy; level 1 and<br />
level 2 contain data that DFSMShsm has compressed and optionally compacted into a<br />
format that you cannot use. Devices at this level provide lower cost per byte storage and<br />
usually slower response time. Usually L1 is in a cheaper DASD (or the same cost, but with<br />
the gain <strong>of</strong> compression) and L2 is on tape.<br />
Note: If you have a DASD controller that compresses data, you can skip level 1 (ML1)<br />
migration because the data in L0 is already compacted/compressed.<br />
Defining management class migration attributes<br />
To DFSMShsm, a data set occupies one <strong>of</strong> two distinct states in storage:<br />
Primary Also known as level 0, the primary state indicates that users can directly<br />
access a data set residing on a volume.<br />
Migrated Users cannot directly access data sets that have migrated from the primary<br />
state to a migrated state. To be accessed, the data sets must be recalled to<br />
primary storage. A migrated data set can reside on either migration level 1<br />
(usually permanently mounted DASD) or migration level 2 (usually tape).<br />
A data set can move back and forth between these two states, and it can move from level 0 to<br />
migration level 2 (and back) without passing through migration level 1. Objects do not migrate.<br />
Movement back to level 0 is known as recall.<br />
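Migration and recall can also be driven by TSO user commands. As a sketch (the data set name is a placeholder), the following commands migrate a data set to ML1, migrate it directly to ML2, and recall it back to level 0:

```text
HMIGRATE 'USER1.OLD.DATA'
HMIGRATE 'USER1.OLD.DATA' MIGRATIONLEVEL2
HRECALL  'USER1.OLD.DATA'
```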
4.63 ML1 enhancements with z/<strong>OS</strong> V1R11<br />
Figure 4-74 ML1 enhancements with z/<strong>OS</strong> V1R11<br />
ML1 enhancements<br />
Beginning in V1R11, DFSMShsm enables ML1 overflow volumes to be selected for migration<br />
processing, in addition to their current use for data set backup processing. DFSMShsm<br />
enables these ML1 overflow volumes to be selected for migration or backup <strong>of</strong> large data<br />
sets, with the determining size values specified by a new parameter <strong>of</strong> the SETSYS command.<br />
Use the new ML1OVERFLOW parameter with the subparameter <strong>of</strong> DATASETSIZE(dssize) to<br />
specify the minimum size that a data set must be in order for DFSMShsm to prefer ML1<br />
overflow volume selection for migration or backup copies.<br />
In addition, DFSMShsm removes the previous ML1 volume restriction against migrating or<br />
backing up a data set whose expected size after compaction (if active and used) is greater<br />
than 65,536 tracks. The new limit for backed up or migrated copies is equal to the maximum<br />
size limit for the largest volume available.<br />
ML1OVERFLOW option<br />
ML1OVERFLOW is an optional parameter specifying the following:<br />
► The minimum data set size for ML1 OVERFLOW volume preference<br />
► The threshold for ML1 OVERFLOW volume capacity for automatic secondary space<br />
management migration from ML1 OVERFLOW to ML2 volumes<br />
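In ARCCMDxx, both values are set on a single SETSYS command. The numbers below are simply the defaults described in this section, shown as a sketch:

```text
SETSYS ML1OVERFLOW(DATASETSIZE(2000000) THRESHOLD(80))
```

DATASETSIZE(2000000) makes data sets of 2000000 KB or larger prefer ML1 OVERFLOW volumes, and THRESHOLD(80) lets automatic secondary space management migrate the OVERFLOW pool to ML2 volumes once the pool is 80% full.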
DATASETSIZE(dssize)<br />
This specifies the minimum size, in K bytes, <strong>of</strong> the data set for which an ML1 OVERFLOW<br />
volume is preferred for migration or backup. This includes invoking inline backup, the HBACKDS<br />
or BACKDS commands, or the ARCHBACK macro.<br />
This parameter has the same meaning when applied to SMS-managed or<br />
non-SMS-managed DASD volumes or data sets. If you do not specify this parameter on any<br />
SETSYS command, DFSMShsm uses the default value of 2000000 K bytes. If the calculated<br />
size of a data set being backed up or<br />
migrated is equal to or greater than dssize, then DFSMShsm prefers the OVERFLOW ML1<br />
volume with the least amount <strong>of</strong> free space that still has enough remaining space for the<br />
data set.<br />
If the calculated size <strong>of</strong> the data set is less than the minimum size specified in dssize, then<br />
DFSMShsm prefers the NOOVERFLOW ML1 volume with the maximum amount <strong>of</strong> free<br />
space and least number <strong>of</strong> users.<br />
OVERFLOW and NOOVERFLOW volumes<br />
DFSMShsm can create backup copies to either ML1 backup OVERFLOW volumes or<br />
NOOVERFLOW volumes. Use the SETSYS ML1OVERFLOW(DATASETSIZE(dssize)) command to<br />
specify the minimum size in K bytes (where K=1024) <strong>of</strong> the data set for which an ML1<br />
OVERFLOW volume is preferred for the migration or backup copy.<br />
For data sets smaller than 58 K tracks, DFSMShsm allocates a basic sequential format data<br />
set for the backup copy. For data sets larger than 58 K tracks, DFSMShsm allocates a large<br />
format sequential data set for the backup copy.<br />
Basic or large format sequential data sets will prefer OVERFLOW or NOOVERFLOW<br />
volumes based on the SETSYS ML1OVERFLOW(DATASETSIZE(dssize)) value. If there is<br />
not enough free space on the NOOVERFLOW or OVERFLOW volume for a particular backup<br />
copy, then DFSMShsm tries to create the backup on a OVERFLOW or NOOVERFLOW<br />
volume, respectively. If the data set is too large to fit on a single ML1 volume (OVERFLOW or<br />
NOOVERFLOW), then the migration or backup fails.<br />
4.64 DFSMShsm z/<strong>OS</strong> V1R11 enhancements<br />
Figure 4-75 OVERFLOW volumes used for backup<br />
OVERFLOW or NOOVERFLOW use <strong>of</strong> level 1 volumes<br />
Using these types of volumes for data set migration and backup works as follows:<br />
► OVERFLOW and NOOVERFLOW are mutually exclusive optional subparameters <strong>of</strong> the<br />
MIGRATION parameter that you use to specify how a level 1 volume is considered during<br />
selection for placement <strong>of</strong> a data set migration or backup version.<br />
► OVERFLOW specifies that the volume is considered if either <strong>of</strong> the following are true:<br />
– The data you are migrating or backing up is larger than a given size, as specified on the<br />
SETSYS ML1OVERFLOW(DATASETSIZE(dssize)) command.<br />
– DFSMShsm cannot allocate enough space on a NOOVERFLOW volume.<br />
► NOOVERFLOW specifies that the volume is considered with other level 1 volumes for<br />
migration data and backup versions of any size.<br />
► Defaults: If you are adding a migration volume to DFSMShsm, the default is<br />
NOOVERFLOW. If you are changing the attributes <strong>of</strong> a volume and do not specify<br />
either subparameter, the overflow attribute is not changed.<br />
Migration and backup to tape<br />
Using the DFSMShsm ML1 enhancements, the installation now can:<br />
► Back up and migrate data sets larger than 64 K to disk as large format sequential (LFS)<br />
data sets.<br />
► Utilize OVERFLOW volumes for migrated data sets and backup copies.<br />
► Specify data set size eligibility for OVERFLOW volumes and ML1 OVERFLOW volume<br />
pool threshold.<br />
Therefore, data sets larger than 64 K tracks no longer have to be directed to tape. Such data<br />
sets will be allocated as LFS data sets regardless of whether they are on an OVERFLOW<br />
or NOOVERFLOW volume. OVERFLOW volumes can be used as a repository for larger data<br />
sets. You can now customize your installation to exploit the OVERFLOW volume pool<br />
according to a specified environment.<br />
Installation considerations<br />
A coexistence APAR will be required to enable downlevel DFSMShsm to tolerate migrated or<br />
backup copies <strong>of</strong> LFS format DFSMShsm data sets. APAR OA26330 enables large format<br />
sequential migration and backup copies to be processed on lower level<br />
installations.<br />
Downlevel DFSMShsm installations (pre-V1R11) will be able to recall and recover data sets<br />
from V1R11 DFSMShsm LFS migration or backup copies. For OVERFLOW volumes on<br />
lower level systems, recalls and recovers will be successful. Migrations from lower level<br />
systems to the V1R11 OVERFLOW volumes will not be allowed because the OVERFLOW<br />
volumes will not be included in the volume selection process.<br />
Large data sets can now migrate and be backed up to ML1 DASD as large format<br />
sequential HSM data sets. OVERFLOW volumes are now used for migration in addition to<br />
backup. We anticipate that few installations used OVERFLOW volumes before, but<br />
installations that did use them will need to take migration actions.<br />
4.65 ML1 and ML2 volumes<br />
Figure 4-76 ML1 and ML2 volumes<br />
DFSMShsm migration levels<br />
With z/<strong>OS</strong> V1R11, larger data sets (>58 K tracks) will now be DFSMShsm-managed as large<br />
format sequential data sets when migrated or backed up. OVERFLOW volumes can now be<br />
used for migration and backup. An installation can adjust the values that direct data sets to<br />
OVERFLOW versus NOOVERFLOW volumes and the threshold <strong>of</strong> the OVERFLOW volume<br />
pool.<br />
If you back up or migrate data sets to ML1 OVERFLOW volumes, you can specify the<br />
percentage <strong>of</strong> occupied space that must be in the ML1 OVERFLOW volume pool before the<br />
migration <strong>of</strong> data sets to ML2 volumes occurs during automatic secondary space<br />
management.<br />
Migration volume levels 1 or 2<br />
There is a check to see whether any command-initiated level 1-to-level 2 migration is running.<br />
If so, the level 1-to-level 2 migration of automatic secondary space management will not start.<br />
There is no reason to run two level 1-to-level 2 migrations at the same time.<br />
Chapter 4. Storage management s<strong>of</strong>tware 215
Before automatic secondary space management starts level 1-to-level 2 migration, it performs<br />
the following checks:<br />
► It performs a space check on all ML1 volumes. If any NOOVERFLOW ML1 volume has<br />
an occupancy that is equal to or greater than its high threshold, then all eligible data sets<br />
migrate from the NOOVERFLOW ML1 volume to ML2 volumes.<br />
For OVERFLOW volumes, the percentage <strong>of</strong> occupied space for each ML1 OVERFLOW<br />
volume will be calculated and then DFSMShsm will compare the average value with the<br />
specified threshold. Secondary space management is started for all eligible ML1<br />
OVERFLOW data sets if the overflow volume pool threshold is met or exceeded.<br />
If you choose to use OVERFLOW volumes, you can specify the average overflow volume<br />
pool capacity threshold at which you want automatic secondary space management to<br />
migrate ML1 OVERFLOW volumes to ML2 volumes by using the SETSYS ML1OVERFLOW<br />
command.<br />
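As a sketch of that command, an installation might specify the minimum data set size and the overflow pool threshold together (both values here are illustrative, not recommendations):<br />

```
SETSYS ML1OVERFLOW(DATASETSIZE(200000) THRESHOLD(80))
```

With this setting, data sets of 200000 KB or more prefer ML1 OVERFLOW volumes, and automatic secondary space management starts moving data from the overflow pool to ML2 volumes once the pool averages 80% full.<br />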
ML1 and ML2 volumes<br />
The default setting for the ML1 OVERFLOW volume pool threshold is 80%. This requires that<br />
the occupied space of the entire overflow volume pool reach 80% before any data is moved<br />
off of OVERFLOW volumes to ML2 volumes.<br />
ML2 volumes can be either DASD or tape. The TAPEMIGRATION parameter <strong>of</strong> the SETSYS<br />
command specifies what type <strong>of</strong> ML2 volume is used. The SETSYS command for DFSMShsm<br />
host 2 specifies ML2 migration to tape.<br />
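A minimal sketch of directing ML2 migration to tape (ML2TAPE is one of the documented choices for this parameter):<br />

```
SETSYS TAPEMIGRATION(ML2TAPE)
```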
Note: If you want DFSMShsm to perform automatic migration from ML1 to ML2 volumes,<br />
you must specify the thresholds <strong>of</strong> occupancy parameter (<strong>of</strong> the ADDVOL command) for the<br />
ML1 volumes.<br />
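A hedged example of an ADDVOL command that defines an ML1 volume with its occupancy threshold (the volume serial, unit type, and threshold value are hypothetical):<br />

```
ADDVOL MIG001 UNIT(3390) MIGRATION(MIGRATIONLEVEL1) THRESHOLD(90)
```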
4.66 Data set allocation format and volume pool determination<br />
Figure 4-77 Data set allocation for migration and backup<br />
The figure shows three data sets (500 tracks, 40 K tracks, and 100 K tracks) being migrated or<br />
backed up from L0 to the ML1 NOOVERFLOW and ML1 OVERFLOW volume pools and on to ML2,<br />
with SETSYS ML1OVERFLOW(DATASETSIZE(200000) THRESHOLD(80)) in effect. (2 GB is approximately 36 K tracks.)<br />
With z/<strong>OS</strong> V1R11, Figure 4-77 shows how three different size data sets are processed when<br />
migrating or backing up to DASD or tape using OVERFLOW and NOOVERFLOW ML1<br />
volumes.<br />
► The 500 track data set (top, red arrow), when migrating, is allocated on ML1<br />
NOOVERFLOW volumes as a basic format sequential data set. This is unchanged from<br />
existing HSM processing.<br />
► The 40 K track data set (middle, blue arrow) migrates to an OVERFLOW volume because<br />
it is larger than the specified DATASETSIZE parameter and the migration copy will be a<br />
basic format sequential data set because its expected size is smaller than 58 K tracks.<br />
► The 100 K track data set (bottom, green arrow) is allocated as a large format sequential<br />
data set on a volume in the ML1 OVERFLOW volume pool.<br />
Backing up an individual data set<br />
Use the SETSYS ML1OVERFLOW(DATASETSIZE(dssize)) command to specify the minimum size in K bytes<br />
(where K=1024) of a data set for which an ML1 OVERFLOW volume is preferred for the<br />
migration or backup copy.<br />
For data sets smaller than 58 K tracks, DFSMShsm allocates a basic sequential format data<br />
set for the backup copy.<br />
For data sets larger than 58 K tracks, DFSMShsm allocates a large format sequential data set<br />
for the backup copy. Basic or large format sequential data sets will prefer OVERFLOW or<br />
NOOVERFLOW volumes based on the SETSYS ML1OVERFLOW(DATASETSIZE(dssize))<br />
value.<br />
If there is not enough free space on the NOOVERFLOW or OVERFLOW volume for a<br />
particular backup copy, DFSMShsm tries to create the backup on an OVERFLOW or<br />
NOOVERFLOW volume respectively. If the data set is too large to fit on a single ML1 volume<br />
(OVERFLOW or NOOVERFLOW), then the migration or backup fails.<br />
Important: In processing these commands, DFSMShsm first checks the management<br />
class for the data set to determine the value <strong>of</strong> the ADMIN OR USER COMMAND<br />
BACKUP attribute.<br />
If the value <strong>of</strong> the attribute is BOTH, a DFSMShsm-authorized user can use either <strong>of</strong> the<br />
commands, and a non-DFSMShsm-authorized user can use the HBACKDS command to<br />
back up the data set.<br />
If the value <strong>of</strong> the attribute is ADMIN, a DFSMShsm-authorized user can use either <strong>of</strong> the<br />
commands to back up the data set, but a non-DFSMShsm-authorized user cannot back up<br />
the data set. If the value <strong>of</strong> the attribute is NONE, the command backup cannot be done.<br />
4.67 DFSMShsm volume types<br />
Figure 4-78 DFSMShsm volume types<br />
The figure shows the DFSMShsm volume types: Level 0, Migration Level 1, Migration Level 2,<br />
Daily Backup, Spill Backup, Dump volumes, Aggregate backup, and Fast replication.<br />
DFSMShsm volume backup<br />
Backing up an individual cataloged data set is performed in the same way as for<br />
SMS-managed data sets. However, to back up individual uncataloged data sets, issue the<br />
following commands:<br />
BACKDS dsname UNIT(unittype) VOLUME(volser)<br />
HBACKDS dsname UNIT(unittype) VOLUME(volser)<br />
The HBACKDS form <strong>of</strong> the command can be used by either non-DFSMShsm-authorized or<br />
DFSMShsm-authorized users. The BACKDS form <strong>of</strong> the command can be used only by<br />
DFSMShsm-authorized users. The UNIT and VOLUME parameters are required because<br />
DFSMShsm cannot locate an uncataloged data set without being told where it is.<br />
<strong>Volume</strong> types<br />
DFSMShsm supports the following volume types:<br />
► Level 0 (L0) volumes contain data sets that are directly accessible to you and the jobs you<br />
run. DFSMShsm-managed volumes are those L0 volumes that are managed by the<br />
DFSMShsm automatic functions. These volumes must be mounted and online when you<br />
refer to them with DFSMShsm commands.<br />
► Migration level 1 (ML1) volumes are DFSMShsm-supported DASD on which DFSMShsm<br />
maintains your data in DFSMShsm format. These volumes are normally permanently<br />
mounted and online. They can be:<br />
– <strong>Volume</strong>s containing data sets that DFSMShsm migrated from L0 volumes.<br />
– <strong>Volume</strong>s containing backup versions created from a DFSMShsm BACKDS or HBACKDS<br />
command. Backup processing requires ML1 volumes to store incremental backup and<br />
dump VTOC copy data sets, and as intermediate storage for data sets that are backed<br />
up by data set command backup.<br />
► Migration level 2 (ML2) volumes are DFSMShsm-supported tape or DASD on which<br />
DFSMShsm maintains your data in DFSMShsm format. These volumes are normally not<br />
mounted or online. They contain data sets migrated from ML1 volumes or L0 volumes.<br />
► Daily backup volumes are DFSMShsm-supported tape or DASD on which DFSMShsm<br />
maintains your data in DFSMShsm format. These volumes are normally not mounted or<br />
online. They contain the most current backup versions <strong>of</strong> data sets copied from L0<br />
volumes. These volumes may also contain earlier backup versions <strong>of</strong> these data sets.<br />
► Spill backup volumes are DFSMShsm-supported tape or DASD on which DFSMShsm<br />
maintains your data sets in DFSMShsm format. These volumes are normally not mounted<br />
or online. They contain earlier backup versions <strong>of</strong> data sets, which were moved from<br />
DASD backup volumes.<br />
► Dump volumes are DFSMShsm-supported tape. They contain image copies <strong>of</strong> volumes<br />
that are produced by the full volume dump function <strong>of</strong> DFSMSdss (write a copy <strong>of</strong> the<br />
entire allocated space <strong>of</strong> that volume), which is invoked by DFSMShsm.<br />
► Aggregate backup volumes are DFSMShsm-supported tape. These volumes are normally<br />
not mounted or online. They contain copies <strong>of</strong> the data sets <strong>of</strong> a user-defined group <strong>of</strong><br />
data sets, along with control information for those data sets. These data sets and their<br />
control information are stored as a group so that they can be recovered (if necessary) as<br />
an entity by an aggregate recovery process (ABARS).<br />
► Fast replication target volumes are contained within SMS copy pool backup storage<br />
groups. They contain the fast replication backup copies <strong>of</strong> DFSMShsm-managed volumes.<br />
Beginning with z/<strong>OS</strong> V1R8, fast replication is done with a single command.<br />
New function with z/<strong>OS</strong> V1R7<br />
With z/<strong>OS</strong> V1R7, the maximum number <strong>of</strong> data sets stored by DFSMShsm on tape is one<br />
million; previously, it was 330000.<br />
Also in z/<strong>OS</strong> V1R7, a new command V SMS,VOLUME is introduced. It allows you to change the<br />
state <strong>of</strong> the DFSMShsm volumes without having to change and reactivate the SMS<br />
configuration using ISMF.<br />
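As an illustrative sketch of that command (the volume serial and the chosen state are hypothetical):<br />

```
V SMS,VOLUME(SMS001),QUIESCE
```

This quiesces the volume without requiring an SMS configuration change through ISMF.<br />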
4.68 DFSMShsm: Automatic space management<br />
Figure 4-79 DFSMShsm automatic space management (migration)<br />
The figure shows data set ABC.FILE1, unreferenced for 10 days, being migrated from a Level 0<br />
volume to a Level 1 volume, where it is stored as HSM.HMIG.ABC.FILE1.T891008.I9012.<br />
Automatic space management (migration)<br />
Automatic space management prepares DASD space for the addition <strong>of</strong> new data by freeing<br />
space on the DFSMShsm-managed volumes (L0) and DFSMShsm-owned volumes (ML1).<br />
The functions associated with automatic space management can be divided into two groups,<br />
namely automatic volume space management and automatic secondary space management,<br />
as explained here.<br />
Automatic volume space management<br />
Primary Invoked on a daily basis, it cleans L0 volumes by deleting expired and<br />
temporary data sets and releasing allocated but unused space (scratch).<br />
During automatic primary space management, DFSMShsm can<br />
process a maximum <strong>of</strong> 15 volume migration tasks concurrently.<br />
This activity consists <strong>of</strong> the deletion <strong>of</strong> temporary data sets, deletion <strong>of</strong><br />
expired data sets, the release <strong>of</strong> unused and overallocated space, and<br />
migration. Each task processes its own separate user DASD volume.<br />
The storage administrator selects the maximum number <strong>of</strong> tasks that<br />
can run simultaneously, and specifies which days and the time <strong>of</strong> day<br />
the tasks are to be performed.<br />
In z/OS V1R8, there is a specific task to scratch data sets. If, after that,<br />
the free space is still below the threshold, data sets are moved (under<br />
control of the management class) from L0 to ML1/ML2 volumes.<br />
Interval migration Executed each hour throughout the day, as needed for all storage<br />
groups. In interval migration, DFSMShsm performs a space check on<br />
each DFSMShsm volume being managed. A volume is considered<br />
eligible for interval migration based on the AUTOMIGRATE and<br />
THRESHOLD settings <strong>of</strong> its SMS storage group.<br />
Automatic interval migration is an option that invokes migration when<br />
DFSMShsm-managed volumes become full during high activity<br />
periods. If the storage administrator chooses this option, DFSMShsm<br />
automatically checks the level <strong>of</strong> occupancy <strong>of</strong> all<br />
DFSMShsm-managed volumes periodically.<br />
If the level <strong>of</strong> occupancy for any volume exceeds a given threshold,<br />
DFSMShsm automatically performs a subset <strong>of</strong> the space<br />
management functions on the volume. Select a threshold that can be<br />
exceeded only when your installation’s activity exceeds its usual peak.<br />
For volumes requiring interval migration, DFSMShsm can process up<br />
to 15 volume migration tasks concurrently.<br />
During automatic interval migration on a volume, the expired data sets<br />
are deleted, then the largest eligible data sets are moved first so that<br />
the level <strong>of</strong> occupancy threshold can be reached sooner. Data sets are<br />
not migrated from ML1 to ML2 volumes during interval migration.<br />
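The controls described above correspond to SETSYS parameters; a sketch, as they might appear in the DFSMShsm startup commands (enabling interval migration and the task count shown are illustrative choices):<br />

```
SETSYS INTERVALMIGRATION
SETSYS MAXMIGRATIONTASKS(10)
```

The first enables the hourly space check on managed volumes; the second allows up to 10 concurrent volume migration tasks (the maximum is 15).<br />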
Automatic secondary space management<br />
Automatic secondary space management deletes expired data sets from ML1/ML2, then<br />
moves data sets (under control <strong>of</strong> the management class) from ML1 to ML2 volumes. It<br />
should complete before automatic primary space management so that the ML1 volumes will<br />
not run out <strong>of</strong> space. Since z/<strong>OS</strong> 1.6 there is the possibility <strong>of</strong> multiple secondary space<br />
management (SSM) tasks.
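A hedged sketch of enabling multiple SSM tasks with the SETSYS MAXSSMTASKS parameter (the subtask counts shown are illustrative):<br />

```
SETSYS MAXSSMTASKS(CLEANUP(3) TAPEMOVEMENT(2))
```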
4.69 DFSMShsm data set attributes<br />
► Active copy: a backup version within the number of backup copies specified by the management class or SETSYS value<br />
► Retained copy: a backup copy that has rolled off from being an active copy, but has not yet met its retention period<br />
► Management class retention period: the maximum number of days to maintain a backup copy<br />
► RETAINDAYS, new with z/OS V1R11: the minimum number of days to maintain a backup copy (this value takes precedence)<br />
Figure 4-80 DFSMShsm management of active and retained copies<br />
Active and retained copies<br />
DFSMShsm maintains backup copies as active backup copies and retained backup copies.<br />
Active copies are the backup copies that have not yet rolled <strong>of</strong>f. Retained copies are the<br />
backup copies that have rolled <strong>of</strong>f from the active copies, but have not yet reached their<br />
retention periods.<br />
With z/<strong>OS</strong> V1R11, DFSMShsm can maintain a maximum <strong>of</strong> 100 active copies. DFSMShsm<br />
can maintain more than enough retained copies for each data set to meet all expected<br />
requirements. Active and retained copies are as follows:<br />
Active copies Active copies are the set <strong>of</strong> backup copies created that have not yet<br />
rolled <strong>of</strong>f. The number <strong>of</strong> active copies is determined by the SMS<br />
management class or SETSYS value. The maximum number <strong>of</strong> active<br />
copies will remain 100.<br />
Retained copies Retained copies are the set <strong>of</strong> backup copies that have rolled <strong>of</strong>f from<br />
the active copies and have not yet reached their retention periods. A<br />
nearly unlimited number <strong>of</strong> retained copies for each data set can be<br />
maintained.<br />
Management class retention period<br />
The Retention Limit value is a required value that limits the use <strong>of</strong> retention period (RETPD)<br />
and expiration date (EXPDT) values that are explicitly specified in JCL, are derived from<br />
management class definitions or are explicitly specified in the <strong>OS</strong>REQ STORE macro. If the<br />
value <strong>of</strong> a user-specified RETPD or EXPDT is within the limits specified in the Retention Limit<br />
field, it is saved for the data set. For objects, only RETPD is saved.<br />
The default retention limit is NOLIMIT. If you specify zero (0), then a user-specified or data<br />
class-derived EXPDT or RETPD is ignored. If users specify values that exceed the maximum<br />
period, then the retention limit value overrides not only their values but also the expiration<br />
attribute values. The retention limit value is saved. ISMF primes the Retention Limit field with<br />
what you specified the last time.<br />
RETAINDAYS keyword<br />
To specify a retention period for a copy <strong>of</strong> a backup data set, using the new RETAINDAYS<br />
keyword, you can use one <strong>of</strong> the following methods:<br />
► (H)BACKDS command<br />
► ARCHBACK macro<br />
► ARCINBAK program<br />
The RETAINDAYS value must be an integer in the range <strong>of</strong> 0 to 50000, or 99999 (the “never<br />
expire” value).<br />
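As a sketch, the (H)BACKDS form might be used as follows (the data set names and values are hypothetical):<br />

```
BACKDS USERA.PAYROLL.DATA RETAINDAYS(366)
HBACKDS USERA.TEST.DATA RETAINDAYS(0)
```

The first backup copy is retained for at least 366 days; the second can expire the same day it is created and is never managed as a retained copy.<br />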
4.70 RETAINDAYS keyword<br />
► DFSMShsm uses the RETAINDAYS value to determine when a backup copy is to be rolled off and during EXPIREBV processing<br />
► The RETAINDAYS keyword specified on the data set backup request determines how long a data set backup copy is kept for both SMS-managed and non-SMS-managed data sets<br />
► You can keep an individual backup copy for either a shorter or longer than normal period of time by specifying the RETAINDAYS keyword on the BACKDS command<br />
► RETAINDAYS controls the minimum number of days that a backup copy of a data set is maintained<br />
Figure 4-81 Use <strong>of</strong> RETAINDAYS keyword with z/<strong>OS</strong> V1R11<br />
RETAINDAYS keyword with z/<strong>OS</strong> V1R11<br />
With z/<strong>OS</strong> V1R11, each SMS-managed data set's backup versions are processed based on<br />
the value <strong>of</strong> the RETAINDAYS keyword on the BACKDS command, if specified, and the<br />
attributes specified in the management class associated with the data set (or in the default<br />
management class if the data set is not associated with a management class). If a<br />
management class name is found but there is no definition for the management class,<br />
DFSMShsm issues a message and does not process the cataloged backup versions <strong>of</strong> that<br />
data set.<br />
DFSMShsm compares the number <strong>of</strong> backup versions that exist with the value <strong>of</strong> the<br />
NUMBER OF BACKUPS (DATA SET EXISTS) attribute in the management class. If there are<br />
more versions than requested, the excess versions are deleted if the versions do not have a<br />
RETAINDAYS value or the RETAINDAYS value has been met, starting with the oldest. The<br />
excess versions are kept as excess active versions if they have an un-met RETAINDAYS<br />
value. These excess versions will then be changed to retained backup versions when a new<br />
version is created.<br />
Starting with the now-oldest backup version and ending with the third-newest version,<br />
DFSMShsm calculates the age <strong>of</strong> the version to determine if the version should be expired. If<br />
a RETAINDAYS value was specified when the version was created, then the age is compared<br />
to the retain days value. If RETAINDAYS was not specified, then the age is compared to the<br />
value of the RETAIN DAYS EXTRA BACKUPS attribute in the management class. If the age<br />
<strong>of</strong> the version meets the expiration criteria, then the version is expired.<br />
The second-newest version is treated as though it had been created on the same day as the<br />
newest backup version. Therefore, the second-newest version (newest EXTRA backup copy)<br />
is not expired until the number <strong>of</strong> retention days specified by the RETAINDAYS value or the<br />
number <strong>of</strong> days specified in RETAIN DAYS EXTRA BACKUPS attribute in the management<br />
class have passed since the creation <strong>of</strong> the newest backup version. (The management class<br />
value is only used if RETAINDAYS was not specified for the version).<br />
EXPIREBV processing and RETAINDAYS<br />
For EXPIREBV processing, the RETAINDAYS value takes precedence over all existing<br />
criteria when expiring active and retained backup copies for SMS data sets and non-SMS<br />
data sets. EXPIREBV needs to be run to delete backup versions with RETAINDAYS values<br />
that have been met.<br />
The EXPIREBV command is used to delete unwanted backup and expired ABARS versions <strong>of</strong><br />
SMS-managed and non-SMS-managed data sets from DFSMShsm-owned storage. The<br />
optional parameters <strong>of</strong> the EXPIREBV command determine the deletion <strong>of</strong> the backup versions<br />
<strong>of</strong> non-SMS-managed data sets. The management class attributes determine the deletion <strong>of</strong><br />
backup versions <strong>of</strong> SMS-managed data sets. The management class fields Retain Extra<br />
Versions and Retain Only Version determine which ABARS versions or incremental backup<br />
versions are deleted. The RETAINDAYS parameter specified on the data set backup request<br />
determines how long a data set backup copy is kept for both SMS-managed and<br />
non-SMS-managed data sets.<br />
BACKDS command<br />
The BACKDS command creates a backup version <strong>of</strong> a specific data set. When you enter the<br />
BACKDS command, DFSMShsm does not check whether the data set has changed or has met<br />
the requirement for frequency <strong>of</strong> backup. When DFSMShsm processes a BACKDS<br />
command, it stores the backup version on either tape or the ML1 volume with the most<br />
available space.<br />
With z/<strong>OS</strong> V1R11, the RETAINDAYS keyword is an optional parameter on the BACKDS<br />
command specifying a number <strong>of</strong> days to retain a specific backup copy <strong>of</strong> a data set. If you<br />
specify RETAINDAYS, number <strong>of</strong> retain days is a required parameter that specifies a<br />
minimum number <strong>of</strong> days (0-50000) that DFSMShsm retains the backup copy. If you specify<br />
99999, the data set backup version never expires. Any value greater than 50000 (and other<br />
than 99999) causes a failure with an ARC1605I error message. A retain days value <strong>of</strong> 0<br />
indicates that:<br />
► The backup version might expire within the same day that it was created if EXPIREBV<br />
processing takes place or when the next backup version is created.<br />
► The backup version is kept as an active copy before roll-off occurs.<br />
► The backup version is not managed as a retained copy.<br />
4.71 RETAINDAYS keyword<br />
► 99999: If you specify 99999, the data set backup version never expires.<br />
► 50000+: Any value greater than 50000 (and other than 99999) causes a failure with an ARC1605I error message.<br />
► 0: A retain days value of 0 indicates that:<br />
– The backup version might expire within the same day that it was created if EXPIREBV processing takes place or when the next backup version is created.<br />
– The backup version is kept as an active copy before roll-off occurs.<br />
– The backup version is not managed as a retained copy.<br />
Restriction: RETAINDAYS applies only to cataloged data sets.<br />
Figure 4-82 RETAINDAYS keyword<br />
RETAINDAYS parameters<br />
The value of RETAINDAYS can be in the range of 0 - 50000, which corresponds to a<br />
maximum of approximately 136 years. If you specify 99999, the data set backup version never<br />
expires. Any value greater than 50000 (and other than 99999) causes a failure with error<br />
message ARC1605I. A retain days value of 0 indicates that:<br />
► The backup version expires when the next backup copy is created.<br />
► The backup version might expire within the same day that it was created if EXPIREBV<br />
processing takes place.<br />
► The backup version is kept as an active copy before roll-<strong>of</strong>f occurs.<br />
► The backup version is not managed as a retained copy.<br />
Note: You can use the RETAINDAYS keyword only with cataloged data sets. If you specify<br />
RETAINDAYS with an uncataloged data set, then BACKDS processing fails with the<br />
ARC1378I error message.<br />
EXPIREBV command and RETAINDAYS<br />
When you enter the EXPIREBV command, DFSMShsm checks the retention days for each<br />
active backup copy for each data set, starting with the oldest backup version and ending with<br />
the third newest version.<br />
If the version has a specified retention days value, then DFSMShsm calculates the age <strong>of</strong> the<br />
version, compares the age to the value <strong>of</strong> the retention days, and expires the version if it has<br />
met its RETAINDAYS. The second-newest version is treated as though it had been created on<br />
the same day as the newest backup version, and is not expired unless the number <strong>of</strong><br />
retention days specified by RETAINDAYS have passed since the creation <strong>of</strong> the newest<br />
backup version. EXPIREBV does not process the newest backup version until it meets both<br />
the management class retention values, and the RETAINDAYS value.<br />
Note: For non-SMS-managed data sets, the RETAINDAYS value takes precedence over<br />
any <strong>of</strong> the EXPIREBV parameters.<br />
EXPIREBV processing<br />
During EXPIREBV processing, DFSMShsm checks the retention days for each retained<br />
backup copy. The retained copy is identified as an expired version if it has met its retention<br />
period. The EXPIREBV DISPLAY command displays the backup versions that have met their<br />
RETAINDAYS value. The EXPIREBV EXECUTE command deletes the backup versions that have<br />
met their RETAINDAYS value.<br />
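For example, the two forms named above might be issued as follows (a sketch):<br />

```
EXPIREBV DISPLAY
EXPIREBV EXECUTE
```

The DISPLAY form only lists the backup versions that would be deleted; EXECUTE deletes them.<br />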
Restriction: RETAINDAYS applies only to cataloged data sets.<br />
4.72 DFSMShsm: Recall processing<br />
Figure 4-83 Recall processing<br />
The figure shows the migration copy HSM.HMIG.ABC.FILE1.T891008.I9012 on a Level 1<br />
volume being recalled to a Level 0 volume as ABC.FILE1.<br />
Automatic recall<br />
Automatic recall returns a migrated data set from an ML1 or ML2 volume to a<br />
DFSMShsm-managed volume when you refer to it. When a user refers to the data set,<br />
DFSMShsm reads the system catalog for the volume serial number. If the volume serial<br />
number is MIGRAT, DFSMShsm finds the migrated data set, recalls it to a<br />
DFSMShsm-managed volume, and updates the catalog with the real volser. The result is a<br />
data set that resides on a user volume in a user-readable format. The recall can also be<br />
requested by a DFSMShsm command.<br />
Recall returns a migrated data set to a user L0 volume. The recall is transparent and the<br />
application does not need to know that it happened or where the migrated data set resides. To<br />
provide applications with quick access to their migrated data sets, DFSMShsm allows up to<br />
15 concurrent recall tasks. RMF monitor III shows delays caused by the recall operation.<br />
The MVS allocation routine discovers that the data set is migrated when, while accessing the<br />
catalog, it finds the word MIGRAT instead <strong>of</strong> the volser.<br />
Command recall<br />
Command recall returns your migrated data set to a user volume when you enter the HRECALL<br />
DFSMShsm command through an ISMF panel or by directly keying in the command.<br />
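A sketch of a command recall for the data set shown in Figure 4-83 (the TSO form; quoting follows TSO conventions):<br />

```
HRECALL 'ABC.FILE1'
```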
ACS routines<br />
For both automatic and command recall, DFSMShsm working with SMS invokes the<br />
automatic class selection (ACS) routines. Data sets that were not SMS-managed at the time<br />
they were migrated may be recalled as SMS-managed data sets. The ACS routines<br />
determine whether the data sets should be recalled as SMS-managed, and if so, the routines<br />
select the classes and storage groups in which the data sets will reside. The system chooses<br />
the appropriate volume for the data sets.<br />
DFSMShsm working without SMS returns a migrated data set to a DFSMShsm-managed<br />
non-SMS level 0 volume with the most free space.<br />
4.73 Removable media manager (DFSMSrmm)<br />
Figure 4-84 DFSMSrmm<br />
The figure shows an IBM 3494 tape library and a Virtual Tape Server managed by DFSMSrmm.<br />
DFSMSrmm<br />
In your enterprise, you store and manage your removable media in several types of media libraries. For example, in addition to your traditional tape library (a room with tapes, shelves, and drives), you might have several automated and manual tape libraries. You probably also have both onsite libraries and offsite storage locations, also known as vaults or stores.

With the DFSMSrmm functional component of DFSMS, you can manage your removable media as one enterprise-wide library (single image) across systems. Because of the need for global control information, these systems must have access to shared DASD volumes. DFSMSrmm manages your installation's tape volumes and the data sets on those volumes. DFSMSrmm also manages the shelves where volumes reside in all locations except in automated tape library data servers.

DFSMSrmm manages all tape media (such as cartridge system tapes and 3420 reels), as well as other removable media you define to it. For example, DFSMSrmm can record the shelf location for optical disks and track their vital record status; however, it does not manage the objects on optical disks.
Library management
DFSMSrmm can manage the following devices:
► A removable media library, which incorporates all other libraries, such as:
– System-managed manual tape libraries
– System-managed automated tape libraries
– Non-system-managed or traditional tape libraries, including automated libraries such as a library under Basic Tape Library Support (BTLS) control
Examples of automated tape libraries include the IBM TotalStorage Enterprise Automated Tape Library (3494) and the IBM TotalStorage Virtual Tape Server (VTS).
Shelf management
DFSMSrmm groups information about removable media by shelves into a central online inventory, and keeps track of the volumes residing on those shelves. DFSMSrmm can manage the shelf space that you define in your removable media library and in your storage locations.

Volume management
DFSMSrmm manages the movement and retention of tape volumes throughout their life cycle.

Data set management
DFSMSrmm records information about the data sets on tape volumes. DFSMSrmm uses the data set information to validate volumes and to control the retention and movement of those data sets.
For more information about DFSMSrmm, see z/OS DFSMSrmm Guide and Reference, SC26-7404, and z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405, and visit:
http://www-1.ibm.com/servers/storage/software/sms/rmm/
4.74 Libraries and locations

Figure 4-85 Libraries and locations

Libraries and locations
You decide where to store your removable media based on how often the media is accessed and for what purpose it is retained. For example, you might keep volumes that are frequently accessed in an automated tape library data server, and you probably use at least one storage location to retain volumes for disaster recovery and audit purposes. You might also have locations where volumes are sent for further processing, such as other data centers within your company or those of your customers and vendors.

DFSMSrmm automatically records information about data sets on tape volumes so that you can manage the data sets and volumes more efficiently. When all the data sets on a volume have expired, the volume can be reclaimed and reused. You can optionally move volumes that are to be retained to another location.

DFSMSrmm helps you manage your tape volumes and shelves at your primary site and storage locations by recording information in a DFSMSrmm control data set.
4.75 What DFSMSrmm can manage

► Removable media library
– System-managed tape libraries
• Automated tape libraries
• Manual tape libraries
– Non-system-managed tape libraries or traditional tape libraries
► Storage locations
– Installation-defined
– DFSMSrmm built-in
• Local
• Distant
• Remote

Figure 4-86 What DFSMSrmm can manage
What DFSMSrmm can manage
In this section we discuss libraries and storage locations that can be managed by DFSMSrmm.

Removable media library
A removable media library contains all the tape and optical volumes that are available for immediate use, including the shelves where they reside. A removable media library usually includes other libraries:
► System-managed libraries, such as automated or manual tape library data servers
► Non-system-managed libraries, containing the volumes, shelves, and drives not in an automated or a manual tape library data server

In the removable media library, you store your volumes in “shelves,” where each volume occupies a single shelf location. This shelf location is referred to as a rack number in the DFSMSrmm TSO subcommands and ISPF dialog. A rack number matches the volume’s external label. DFSMSrmm uses the external volume serial number to assign a rack number when adding a volume, unless you specify otherwise. The volume serial you define to DFSMSrmm must be one to six alphanumeric characters; the rack number must be six alphanumeric or national characters.
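As a sketch (the volume serial and rack number are hypothetical), the RMM TSO subcommands to add a volume, assign it a rack number, and display what was recorded might look like this:

```
RMM ADDVOLUME A00123 RACK(A00123) STATUS(SCRATCH)
RMM LISTVOLUME A00123
```

ADDVOLUME records the volume in the DFSMSrmm control data set and assigns it the shelf location A00123; LISTVOLUME displays the recorded volume details.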
System-managed tape library
A system-managed tape library is a collection of tape volumes and tape devices defined in
the tape configuration database. The tape configuration database is an integrated catalog facility user catalog marked as a volume catalog (VOLCAT), containing tape volume and tape library records. A system-managed tape library can be either automated or manual:
► An automated tape library is a device consisting of robotic components, cartridge storage areas (or shelves), tape subsystems, and controlling hardware and software, together with the set of tape volumes that reside in the library and can be mounted on the library tape drives. The IBM automated tape libraries are the automated IBM 3494 and IBM 3495 Library Dataservers.
► A manual tape library is a set of tape drives and the set of system-managed volumes the operator can mount on those drives. The manual tape library provides more flexibility, enabling you to use various tape volumes in a given manual tape library. Unlike the automated tape library, the manual tape library does not use the library manager. With the manual tape library, a human operator responds to mount messages that are generated by the host and displayed on a console. This manual tape library implementation completely replaces the IBM 3495-M10 implementation; IBM no longer supports the 3495-M10.

You can have several automated tape libraries or manual tape libraries. You use an installation-defined library name to define each automated tape library or manual tape library to the system. DFSMSrmm treats each system-managed tape library as a separate location or destination.
Since z/OS 1.6, a new OPTION command in the EDGRMMxx parmlib member, together with the VLPOOL command, provides better support for the client/server environment.

z/OS 1.8 DFSMSrmm introduces an option to provide tape data set authorization independent of the RACF TAPEVOL and TAPEDSN options. This option allows you to use RACF generic DATASET profiles for both DASD and tape data sets.
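With that option in effect, protecting tape data sets reduces to ordinary RACF DATASET profile administration. The profile and group names below are hypothetical, and the mechanism for enabling tape authorization through DATASET profiles is installation-dependent:

```
ADDSD 'PROD.BACKUP.**' UACC(NONE)
PERMIT 'PROD.BACKUP.**' ID(OPERTAPE) ACCESS(READ)
SETROPTS GENERIC(DATASET) REFRESH
```

The same generic profile then controls access to PROD.BACKUP.* data sets whether they reside on DASD or on tape.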
Non-system-managed tape library
A non-system-managed tape library consists of all the volumes, shelves, and drives not in an automated tape library or manual tape library. You might know this library as the traditional tape library that is not system-managed. DFSMSrmm provides complete tape management functions for the volumes and shelves in this traditional tape library. Volumes in a non-system-managed library are defined by DFSMSrmm as being “shelf-resident.”

All tape media and drives supported by z/OS are supported in this environment. Using DFSMSrmm, you can fully manage all types of tapes in a non-system-managed tape library, including 3420 reels and 3480, 3490, and 3590 cartridge system tapes.
Storage location
Storage locations are not part of the removable media library because the volumes in storage locations are not generally available for immediate use. A storage location is comprised of shelf locations that you define to DFSMSrmm. A shelf location in a storage location is identified by a bin number. Storage locations are typically used to store removable media that are kept for disaster recovery or vital records. DFSMSrmm manages two types of storage locations: installation-defined storage locations and DFSMSrmm built-in storage locations.

You can define an unlimited number of installation-defined storage locations, using any eight-character name for each storage location. Within an installation-defined storage location, you can define the type or shape of the media in the location. You can also define the bin numbers that DFSMSrmm assigns to the shelf locations in the storage location. You can request DFSMSrmm shelf-management when you want DFSMSrmm to assign a specific shelf location to a volume in the location.
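Installation-defined storage locations are declared in the EDGRMMxx parmlib member. As a sketch (the location and media names are hypothetical, and the exact LOCDEF operands should be checked against the DFSMSrmm documentation), a shelf-managed vault might be defined like this:

```
LOCDEF LOCATION(VAULT1) TYPE(STORAGE) -
       MEDIANAME(CART3590) MANAGEMENTTYPE(BINS)
```

MANAGEMENTTYPE(BINS) requests shelf-management, so DFSMSrmm assigns a bin number to each volume moved into VAULT1.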
You can also use the DFSMSrmm built-in storage locations: LOCAL, DISTANT, and REMOTE. Although the names of these locations imply their purpose, they do not mandate their actual location. All volumes can be in the same or separate physical locations. For example, an installation can have the LOCAL storage location onsite as a vault in the computer room, the DISTANT storage location can be a vault in an adjacent building, and the REMOTE storage location can be a secure facility across town or in another state.

DFSMSrmm provides shelf-management for storage locations so that storage locations can be managed at the shelf location level.
4.76 Managing libraries and storage locations

Figure 4-87 DFSMSrmm: managing libraries and storage locations
Managing libraries and storage locations
DFSMSrmm records the complete inventory of the removable media library and storage locations in the DFSMSrmm control data set, which is a VSAM key-sequenced data set. In the control data set, DFSMSrmm records all changes made to the inventory (such as adding or deleting volumes), and also keeps track of all movement between libraries and storage locations. DFSMSrmm manages the movement of volumes among all library types and storage locations. This lets you control where a volume (and hence, a data set) resides, and how long it is retained.

DFSMSrmm helps you manage the movement of your volumes and the retention of your data over their full life, from initial use to the time they are retired from service. Among the functions DFSMSrmm performs for you are:
► Automatically initializing and erasing volumes
► Recording information about volumes and data sets as they are used
► Expiration processing
► Identifying volumes with high error levels that require replacement

To make full use of all the DFSMSrmm functions, you specify installation setup options and define retention and movement policies. DFSMSrmm provides utilities to implement the policies you define. Since z/OS 1.7, DFSMSrmm enterprise enablement allows high-level languages to issue DFSMSrmm commands through Web services.
z/OS V1R8 enhancements
DFSMSrmm helps you manage the shelves in your tape library and storage locations, simplifying the tasks of your tape librarian. When you define a new volume in your library, you can request that DFSMSrmm shelf-manage the volume by assigning the volume a place on the shelf. You also have the option to request a specific place for the volume. Your shelves are easier to use when DFSMSrmm manages them in pools. Pools allow you to divide your shelves into logical groups where you can store volumes. For example, you can have a different pool for each system that your installation uses. You can then store the volumes for each system together in the same pool.

You can define shelf space in storage locations. When you move volumes to a storage location where you have defined shelf space, DFSMSrmm checks for available shelf space and then assigns each volume a place on the shelf if you request it. You can also set up DFSMSrmm to reuse shelf space in storage locations.
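To sketch the movement side (the volume serial and target location are hypothetical), an RMM subcommand such as the following sets a volume's required location; when shelf space is defined there, DFSMSrmm assigns the volume a bin number as part of the move:

```
RMM CHANGEVOLUME A00123 LOCATION(DISTANT)
```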
Chapter 5. System-managed storage
As your business expands, so does your need for storage to hold your applications and data, and so does the cost of managing that storage. Storage cost includes more than the price of the hardware, with the highest cost being the people needed to perform storage management tasks. If your business requires transaction systems, the batch window can also be a high cost. Additionally, you must pay for staff to install, monitor, and operate your storage hardware devices, for electrical power to keep each piece of storage hardware cool and running, and for floor space to house the hardware. Removable media, such as optical and tape storage, cost less per gigabyte (GB) than online storage, but they require additional time and resources to locate, retrieve, and mount.

To allow your business to grow efficiently and profitably, you need to find ways to control the growth of your information systems and use your current storage more effectively.

With these goals in mind, in this chapter we present:
► A description of the z/OS storage-managed environment
► An explanation of the benefits of using a system-managed environment
► An overview of how SMS manages a storage environment based on installation policies
► A description of how to set up a minimal SMS configuration and activate a DFSMS subsystem
► A description of how to manage data using a minimal SMS configuration
► A description of how to use the Interactive Storage Management Facility (ISMF), an interface for defining and maintaining storage management policies
5.1 Storage management

Figure 5-1 Managing storage with DFSMS
Storage management
Storage management involves data set allocation, placement, monitoring, migration, backup, recall, recovery, and deletion. These activities can be done either manually or by using automated processes.

Managing storage with DFSMS
The Data Facility Storage Management Subsystem (DFSMS) is part of the base z/OS operating system and performs the essential data, storage, program, and device management functions of the system. DFSMS is the central component of both system-managed and non-system-managed storage environments.

The DFSMS software product, together with hardware products and installation-specific requirements for data and resource management, is the key to system-managed storage in a z/OS environment.

The heart of DFSMS is the Storage Management Subsystem (SMS). Using SMS, the storage administrator defines policies that automate the management of storage and hardware devices. These policies describe data allocation characteristics, performance and availability goals, backup and retention requirements, and storage requirements for the system. SMS governs these policies for the system, and the Interactive Storage Management Facility (ISMF) provides the user interface for defining and maintaining the policies.
5.2 DFSMS and the DFSMS environment

Figure 5-2 SMS environment
DFSMS functional components
DFSMS is a set of products, and one of these products, DFSMSdfp, is mandatory for running z/OS. DFSMS is part of the base z/OS operating system, where it performs the essential data, storage, program, and device management functions of the system. DFSMS is the central component of both system-managed and non-system-managed storage environments.

DFSMS environment
The DFSMS environment consists of a set of hardware and IBM software products which together provide a system-managed storage solution for z/OS installations.

DFSMS uses a set of constructs, user interfaces, and routines (using the DFSMS products) that allow the storage administrator to better manage the storage system. The core logic of DFSMS, such as the Automatic Class Selection (ACS) routines, ISMF code, and constructs, is located in DFSMSdfp. DFSMShsm and DFSMSdss are involved in the management class construct.

In this environment, the Resource Access Control Facility (RACF) and Data Facility Sort (DFSORT) products complement the functions of the base operating system. RACF provides resource security functions, and DFSORT adds the capability for faster and more efficient sorting, merging, copying, reporting, and analyzing of business information.

The DFSMS environment is also called the SMS environment.
Chapter 5. System-managed storage 241
5.3 Goals and benefits of system-managed storage

► Simplified data allocation
► Improved allocation control
► Improved I/O performance management
► Automated DASD space management
► Automated tape/optical space management
► Improved data availability management
► Simplified conversion of data to different device types

Figure 5-3 Benefits of system-managed storage
Benefits of system-managed storage
With the Storage Management Subsystem (SMS), you can define performance goals and data availability requirements, create model data definitions for typical data sets, and automate data backup. Based on installation policies, SMS can automatically assign those services and data definition attributes to data sets when they are created. IBM storage management-related products determine data placement, manage data backup, control space usage, and provide data security.

Goals of system-managed storage
The goals of system-managed storage are:
► To improve the use of the storage media (for example, by reducing out-of-space abends and providing a way to set a free-space requirement)
► To reduce the labor involved in storage management by centralizing control, automating tasks, and providing interactive controls for storage administrators
► To reduce the user's need to be concerned with physical details, performance, space, and device management; users can focus on using data instead of managing it

Simplified data allocation
System-managed storage enables users to simplify their data allocations. For example, without using the Storage Management Subsystem, a z/OS user has to specify the unit and volume on which the system is to allocate the data set. The user also has to calculate the
amount of space required for the data set in terms of tracks or cylinders. This means the user has to know the track size of the device that will contain the data set.

With system-managed storage, users can let the system select the specific unit and volume for the allocation. They can also specify size requirements in terms of megabytes or kilobytes. This means the user does not need to know anything about the physical characteristics of the devices in the installation.
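The contrast can be sketched in JCL; the data set names, volume, and attributes below are hypothetical:

```
//* Non-SMS: the user names the device and counts cylinders
//OLDWAY  DD DSN=USER1.OLD.DATA,DISP=(NEW,CATLG),
//           UNIT=3390,VOL=SER=PROD01,SPACE=(CYL,(10,5))
//* SMS: space is stated in logical terms (80-byte records,
//* 500K primary, 100K secondary); SMS selects the volume
//NEWWAY  DD DSN=USER1.NEW.DATA,DISP=(NEW,CATLG),
//           RECFM=FB,LRECL=80,SPACE=(80,(500,100)),AVGREC=K
```

In the SMS form, the unit and volume are omitted entirely; the ACS routines assign the classes and storage group that determine placement.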
Improved allocation control
System-managed storage enables you to set a requirement for free space across a set of direct access storage device (DASD) volumes. You can then provide adequate free space to avoid out-of-space abends; the system automatically places data on a volume containing adequate free space. You can also set a threshold for scratch tape volumes in tape libraries, to ensure that enough cartridges are available in the tape library for scratch mounts.
Improved Input/Output (I/O) performance management
System-managed storage enables you to improve DASD I/O performance across the installation. At the same time, it reduces the need for manual tuning by defining performance goals for each class of data. You can use cache statistics recorded in System Management Facility (SMF) records to help evaluate performance. You can also improve sequential performance by using extended sequential data sets. The DFSMS environment makes the most effective use of the caching abilities of storage controllers.
Automated DASD space management
System-managed storage enables you to automatically reclaim space that is allocated to old and unused data sets or objects. You can define policies that determine how long an unused data set or object is allowed to reside on primary storage (storage devices used for your active data). You can have the system remove obsolete data by migrating the data to other DASD, tape, or optical volumes, or you can have the system delete the data. You can also release allocated but unused space that is assigned to new and active data sets.
Automated tape space management
Mount management: System-managed storage lets you fully use the capacity of your tape cartridges and automate tape mounts. Using the tape mount management (TMM) methodology, DFSMShsm can fill tapes to their capacity. With 3490E, 3590, 3591, and 3592 tape devices, Enhanced Capacity Cartridge System Tape, recording modes such as 384-track and EFMT1, and the improved data recording capability, you can increase the amount of data that can be written on a single tape cartridge.

Tape: System-managed storage lets you exploit the device technology of new devices without having to change the JCL UNIT parameter. In a multi-library environment, you can select the drive based on the library where the cartridge or volume resides. You can use the IBM TotalStorage Enterprise Automated Tape Library (3494 or 3495) to automatically mount tape volumes and manage the inventory in an automated tape library. Similar functionality is available in a system-managed manual tape library. If you are not using SMS for tape management, you can still access the IBM TotalStorage Enterprise Automated Tape Library (3494 or 3495) using Basic Tape Library Support (BTLS) software.
Automated optical space management
System-managed storage enables you to fully use the capacity of your optical cartridges and to automate optical mounts. Using a 3995 Optical Library Dataserver, you can automatically mount optical volumes and manage the inventory in an automated optical library.
Improved data availability management
System-managed storage enables you to provide separate backup requirements for data residing on the same DASD volume. Thus, you do not have to treat all data on a single volume the same way.

You can use DFSMShsm to automatically back up your various types of data sets and use point-in-time copy to maintain access to critical data sets while they are being backed up. Concurrent copy, virtual concurrent copy, SnapShot, and FlashCopy, along with backup-while-open, have an added advantage in that they avoid invalidating a backup of a CICS VSAM KSDS due to a control area or control interval split.

You can also create a logical group of data sets, so that the group is backed up at the same time to allow recovery of the application defined by the group. This is done with the aggregate backup and recovery support (ABARS) provided by DFSMShsm.

Simplified conversion of data to other device types
System-managed storage enables you to move data to new volumes without requiring users to update their job control language (JCL). Because users in a DFSMS environment do not need to specify the unit and volume that contain their data, it does not matter to them whether their data resides on a specific volume or device type. This allows you to easily replace old devices with new ones.

You can also use system-determined block sizes to automatically reblock physical sequential and partitioned data sets that can be reblocked.
5.4 Service level objectives

► What performance objectives are required by data
► When and how to back up data
► Whether data sets should be kept available for use during backup or copy
► How to manage backup copies kept for disaster recovery
► What to do with data that is obsolete or seldom used

Figure 5-4 Service level objectives
Service level objectives
To allow your business to grow efficiently and profitably, you want to find ways to control the growth of your information systems and use your current storage more effectively.

In an SMS-managed storage environment, your enterprise establishes centralized policies for how to use your hardware resources. These policies balance your available resources with your users' requirements for data availability, performance, space, and security.

The policies defined in your installation represent decisions about your resources, such as:
► What performance objectives are required by the applications accessing the data
Based on these objectives, you can better exploit caching and data striping. By tracking data set I/O activities, you can make better decisions about data set caching policies and improve overall system performance. For object data, you can track transaction activities to monitor and improve OAM's performance.
► When and how to back up data (incremental or total)
Determine the backup frequency, the number of backup versions, and the retention period by consulting user group representatives. Be sure to consider whether certain data backups need to be synchronized. For example, if the output data from application A is used as input for application B, you must coordinate the backups of both applications to prevent logical errors in the data when they are recovered.
► Whether data sets are to be kept available for use during backup or copy
You can store backup data sets on DASD or tape (this does not apply to objects). Your choice depends on how fast the data needs to be recovered, media cost, operator cost, floor space, power requirements, air conditioning, the size of the data sets, and whether you want the data sets to be portable.
► How to manage backup copies kept for disaster recovery (locally or in a vault)
Back up related data sets in aggregated tapes. Each application is to have its own, self-contained aggregate of data sets. If certain data sets are shared by two or more applications, you might want to ensure application independence for disaster recovery by backing up each application that shares the data. This is especially important for shared data in a distributed environment.
► What to do with data that is obsolete or seldom used
Data is obsolete when it has exceeded its expiration dates and is no longer needed. To select obsolete data for deletion using DFSMSdss, issue the DUMP command with the DELETE parameter, and direct OUTDDNAME to a DUMMY data set.
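A job of that shape might look like the following sketch; the date filter is hypothetical, and the exact BY filtering syntax should be verified against the DFSMSdss documentation:

```
//DELOLD   EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//DUMMY    DD DUMMY
//SYSIN    DD *
 DUMP DATASET(INCLUDE(**) BY((REFDT,LT,*,-365))) -
   OUTDDNAME(DUMMY) DELETE PURGE
/*
```

Because the output DD is DUMMY, nothing is actually dumped; the data sets selected by the filter are simply deleted.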
The purpose of a backup plan is to ensure the prompt and complete recovery of data. A well-documented plan identifies the data that requires backup, the levels required, responsibilities for backing up the data, and the methods to be used.
246 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
5.5 Implementing SMS policies<br />
Figure 5-5 Creating SMS policies
Implementing SMS policies<br />
To implement a policy for managing storage, the storage administrator defines classes <strong>of</strong><br />
space management, performance, and availability requirements for data sets. The storage<br />
administrator uses:<br />
Data class Data classes are used to define model allocation characteristics for<br />
data sets.<br />
Storage class Storage classes are used to define performance and availability goals.<br />
Management class Management classes are used to define backup and retention<br />
requirements.<br />
Storage group Storage groups are used to create logical groups <strong>of</strong> volumes to be<br />
managed as a unit.<br />
ACS routines Automatic Class Selection (ACS) routines are used to assign class<br />
and storage group definitions to data sets and objects.<br />
For example, the administrator can define one storage class for data entities requiring high<br />
performance, and another for those requiring standard performance. Then, the administrator<br />
writes Automatic Class Selection (ACS) routines that use naming conventions, or other
criteria of the installation's choice, to automatically assign the defined classes to data as
that data is created. These ACS routines can then be validated and tested.
When the ACS routines are in place and the classes (also referred to as constructs) are
assigned to the data, SMS applies the policies defined in those classes for the life of the
data. Additionally, devices with various characteristics can be pooled together into
storage groups, so that new data can be automatically placed on the devices that best meet
its needs.
DFSMS facilitates all <strong>of</strong> these tasks by providing menu-driven panels with the Interactive<br />
Storage Management Facility (ISMF). ISMF panels make it easy to define classes, test and<br />
validate ACS routines, and perform other tasks to analyze and manage your storage. Note<br />
that many <strong>of</strong> these functions are available in batch through the NaviQuest tool.<br />
5.6 Monitoring SMS policies<br />
► Monitor DASD use
► Monitor data set performance
► Decide when to consolidate free space on DASD
► Set policies for DASD or tape
► Use reports to manage your removable media
Figure 5-6 Monitoring your SMS policies
Monitoring SMS policies<br />
After storage administrators have established the installation's service levels and<br />
implemented policies based on those levels, they can use DFSMS facilities to see if the<br />
installation objectives have been met. Information on past use can help to develop more<br />
effective storage administration policies and manage growth effectively. The DFSMS<br />
Optimizer feature can be used to monitor, analyze, and tune the policies.<br />
5.7 Assigning data to be system-managed<br />
Figure 5-7 How to be system-managed
How to be system-managed<br />
Using SMS, you can automate storage management for individual data sets and objects, and<br />
for DASD, optical, and tape volumes. Figure 5-7 shows how a data set, object, DASD volume,<br />
tape volume, or optical volume becomes system-managed. The numbers shown in<br />
parentheses are associated with the following notes:<br />
1. A DASD data set is system-managed if you assign it a storage class. If you do not assign a<br />
storage class, the data set is directed to a non-system-managed DASD or tape volume,<br />
one that is not assigned to a storage group.<br />
2. You can assign a storage class to a tape data set to direct it to a system-managed tape<br />
volume. However, only the tape volume is considered system-managed, not the data set.<br />
3. Objects are also known as byte-stream data, and this data is used in specialized<br />
applications such as image processing, scanned correspondence, and seismic<br />
measurements. Object data typically has no internal record or field structure and, after it is<br />
written, the data is not changed or updated. However, the data can be referenced many<br />
times during its lifetime. Objects are processed by OAM. Each object has a storage class;<br />
therefore, objects are system-managed. The optical or tape volume on which the object<br />
resides is also system-managed.<br />
4. Tape volumes are added to tape storage groups in tape libraries when the tape data set is<br />
created.<br />
5.8 Using data classes<br />
► Record and space attributes: key length and offset; record format; record length;
record organization; space (primary, secondary, average record, average value)
► Volume and VSAM attributes: compaction; control interval size; media type and
recording technology; percent free space; retention period or expiration date; share
options (cross-region, cross-system); volume count
► Data set attributes: backup-while-open; data set name type; extended addressability;
extended format; initial load (speed, recovery); log and logstream ID; record access
bias; reuse; space constraint relief and reduce space up to %; spanned/nonspanned
Figure 5-8 Using data classes
Using data classes<br />
A data class is a collection <strong>of</strong> allocation and space attributes that you define. It is used when<br />
data sets are created. You can simplify data set allocation for the users by defining data<br />
classes that contain standard data set allocation attributes. You can use data classes with<br />
both system-managed and non-system-managed data sets. However, a variety <strong>of</strong> data class<br />
characteristics, like extended format, are only available for system-managed data sets.<br />
Data class attributes define space and data characteristics that are normally specified on JCL<br />
DD statements, TSO/E ALLOCATE command, IDCAMS DEFINE commands, and dynamic<br />
allocation requests. For tape data sets, data class attributes can also specify the type of
cartridge, the recording method, and whether the data is to be compacted. Users then need only
specify the appropriate data classes to create standardized data sets.<br />
You can assign a data class through:<br />
► The DATACLAS parameter <strong>of</strong> a JCL DD statement, ALLOCATE or DEFINE commands.<br />
► Data class ACS routine to automatically assign a data class when the data set is being<br />
created. For example, data sets with the low-level qualifiers LIST, LISTING, OUTLIST, or<br />
LINKLIST are usually utility output data sets with similar allocation requirements, and can<br />
all be assigned the same data class.<br />
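For example, a minimal sketch of assigning a data class on a JCL DD statement (the data set and class names are illustrative assumptions; the class must exist in the active SMS configuration):

```jcl
//LISTOUT  DD  DSN=MYUSER.MONTHEND.LISTING,
//             DISP=(NEW,CATLG),DATACLAS=DCLIST
```

Attributes such as RECFM, LRECL, and SPACE then come from the DCLIST data class, and any of them can still be overridden on the same DD statement.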
You can override various data set attributes assigned in the data class, but you cannot<br />
change the data class name assigned through an ACS routine.<br />
Even though data class is optional, we usually recommend that you assign data classes to<br />
system-managed and non-system-managed data. Although the data class is not used after<br />
the initial allocation <strong>of</strong> a data set, the data class name is kept in the catalog entry for<br />
system-managed data sets for future reference.<br />
Note: The data class name is not saved for non-system-managed data sets, although the<br />
allocation attributes in the data class are used to allocate the data set.<br />
For objects on tape, we recommend that you do not assign a data class through the ACS<br />
routines. To assign a data class, specify the name <strong>of</strong> that data class on the SETOAM command.<br />
If you change a data class definition, the changes only affect new allocations. Existing data<br />
sets allocated with the data class are not changed.<br />
5.9 Using storage classes<br />
Figure 5-9 Choosing volumes that meet availability requirements
Using storage classes<br />
A storage class is a collection <strong>of</strong> performance goals and availability requirements that you<br />
define. The storage class is used to select a device to meet those goals and requirements.<br />
Only system-managed data sets and objects can be assigned to a storage class. Storage
classes free users from having to know the physical characteristics of storage devices
and from manually placing their data on appropriate devices.
Some <strong>of</strong> the availability requirements that you specify to storage classes (such as cache and<br />
dual copy) can only be met by DASD volumes attached through one <strong>of</strong> the following storage<br />
control units or a similar device:<br />
► 3990-3 or 3990-6<br />
► RAMAC Array Subsystem<br />
► Enterprise Storage Server (ESS)<br />
► DS6000 or DS8000<br />
Figure 5-9 shows storage control unit configurations and their storage class attribute values.<br />
With a storage class, you can assign a data set to dual copy volumes to ensure continuous<br />
availability for the data set. With dual copy, two current copies <strong>of</strong> the data set are kept on<br />
separate DASD volumes (by the control unit). If the volume containing the primary copy <strong>of</strong> the<br />
data set is damaged, the companion volume is automatically brought online and the data set<br />
continues to be available and current. Remote copy works in the same way, except that the
two volumes reside in distinct control units, generally at remote locations.
You can use the ACCESSIBILITY attribute <strong>of</strong> the storage class to request that concurrent<br />
copy be used when data sets or volumes are backed up.<br />
You can specify an I/O response time objective with storage class by using the millisecond<br />
response time (MSR) parameter. During data set allocation, the system attempts to select the<br />
closest available volume to the specified performance objective. Also along the data set life,<br />
through the use MSR, DFSMS dynamically uses the cache algorithms as DASD Fast Write<br />
(DFW) and Inhibit Cache Load (ICL) in order to reach the MSR target I/O response time. This<br />
DFSMS function is called dynamic cache management.<br />
To assign a storage class to a new data set, you can use:<br />
► The STORCLAS parameter <strong>of</strong> the JCL DD statement, ALLOCATE or DEFINE command<br />
► Storage class ACS routine<br />
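As a sketch, a TSO/E ALLOCATE command that requests a storage class might look like this (the data set and class names are illustrative assumptions):

```
ALLOCATE DATASET('PAY.TRANS.DATA') NEW CATALOG -
         SPACE(10,5) CYLINDERS -
         STORCLAS(FASTIO)
```

The storage class ACS routine can still override or remove this requested class.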
For objects, the system uses the performance goals you set in the storage class to place the<br />
object on DASD, optical, or tape volumes. The storage class is assigned to an object when it<br />
is stored or when the object is moved. The ACS routines can override this assignment.<br />
Note: If you change a storage class definition, the changes affect the performance service<br />
levels <strong>of</strong> existing data sets that are assigned to that class when the data sets are<br />
subsequently opened. However, the definition changes do not affect the location or<br />
allocation characteristics <strong>of</strong> existing data sets.<br />
5.10 Using management classes<br />
Figure 5-10 Using management classes
Using management classes<br />
A management class is a collection <strong>of</strong> management attributes that you define. The attributes<br />
defined in a management class are related to:<br />
► Expiration date<br />
► Migration criteria<br />
► GDG management<br />
► Backup <strong>of</strong> data set<br />
► Object Class Transition Criteria<br />
► Aggregate backup<br />
Management classes let you define management requirements for individual data sets, rather<br />
than defining the requirements for entire volumes. All the data set functions described in the<br />
management class are executed by DFSMShsm and DFSMSdss programs. Figure 5-11 on<br />
page 257 shows the sort <strong>of</strong> functions an installation can define in a management class.<br />
To assign a management class to a new data set, you can use:<br />
► The MGMTCLAS parameter <strong>of</strong> the JCL DD statement, ALLOCATE or DEFINE command<br />
► The management class ACS routine to automatically assign management classes to new<br />
data sets<br />
The ACS routine can override the management class specified on the JCL DD statement or on
the ALLOCATE or DEFINE command. You cannot override individual management class attributes
through JCL or command parameters.
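For example, a sketch of an IDCAMS DEFINE that requests SMS classes explicitly (all data set and class names are illustrative assumptions, and the ACS routines can still override them):

```jcl
//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINE CLUSTER (NAME(APPL1.ORDERS.KSDS) -
         INDEXED KEYS(8 0) RECORDSIZE(100 200) -
         MEGABYTES(10 5) -
         DATACLASS(DCVSAM) -
         MANAGEMENTCLASS(MCSTD) -
         STORAGECLASS(SCFAST))
/*
```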
If you do not explicitly assign a management class to a system-managed data set or object,<br />
the system uses the default management class. You can define your own default management<br />
class when you define your SMS base configuration.<br />
Note: If you change a management class definition, the changes affect the management<br />
requirements <strong>of</strong> existing data sets and objects that are assigned that class. You can<br />
reassign management classes when data sets are renamed.<br />
For objects, you can:<br />
► Assign a management class when it is stored, or<br />
► Assign a new management class when the object is moved, or<br />
► Change the management class by using the OAM Application <strong>Programming</strong> Interface<br />
(<strong>OS</strong>REQ CHANGE function)<br />
The ACS routines can override this assignment for objects.<br />
5.11 Management class functions<br />
► Allow early migration for old generations of a GDG
► Delete selected old or unused data sets from DASD volumes
► Release allocated but unused space from data sets
► Migrate unused data sets to tape or DASD volumes
► Specify how often to back up data sets, and whether concurrent copy should be used for backups
► Specify how many backup versions to keep for data sets
► Specify how long to save backup versions
► Specify the number of versions of ABARS to keep and how to retain those versions
► Establish the expiration date/transition criteria for objects
► Indicate if automatic backup is needed for objects
Figure 5-11 Management class functions
Management class functions<br />
By classifying data according to management requirements, an installation can define unique<br />
management classes to fully automate data set and object management. For example:<br />
► Control the migration of CICS user databases, DB2 user databases, and archive logs.
► Manage test systems and their associated data sets.
► Manage IMS archive logs.
► Specify that DB2 image copies, IMS image copies, and change accumulation logs be
written to primary volumes and then migrated directly to migration level 2 tape volumes.
► For objects, define when an object is eligible for a change in its performance objectives or<br />
management characteristics. For example, after a certain number <strong>of</strong> days an installation<br />
might want to move an object from a high performance DASD volume to a slower optical<br />
volume.<br />
Management class can also be used to specify that the object is to have a backup copy<br />
made when the OAM Storage Management Component (<strong>OS</strong>MC) is executing.<br />
When changing a management class definition, the changes affect the management<br />
requirements <strong>of</strong> existing data sets and objects that are assigned to that class.<br />
5.12 Using storage groups<br />
Figure 5-12 Grouping storage volumes for specific purposes
Storage groups<br />
A storage group is a collection <strong>of</strong> storage volumes and attributes that you define. The<br />
collection can be a group <strong>of</strong>:<br />
► <strong>System</strong> paging volumes<br />
► DASD volumes<br />
► Tape volumes<br />
► Optical volumes<br />
► Combination <strong>of</strong> DASD and optical volumes that look alike<br />
► DASD, tape, and optical volumes treated as a single object storage hierarchy<br />
Storage groups, along with storage classes, help reduce the requirement for users to<br />
understand the physical characteristics <strong>of</strong> the storage devices which contain their data.<br />
In a tape environment, you can also use tape storage groups to direct a new tape data set to<br />
an automated or manual tape library.<br />
DFSMShsm uses various storage group attributes to determine whether the volumes in the<br />
storage group are eligible for automatic space or availability management.<br />
Figure 5-12 shows an example <strong>of</strong> how an installation can group storage volumes according to<br />
their objective. In this example:<br />
► SMS-managed DASD volumes are grouped into storage groups so that primary data sets,<br />
large data sets, DB2 data, IMS data, and CICS data are all separated.<br />
► The VIO storage group uses system paging volumes for small temporary data sets.<br />
► The TAPE storage groups are used to group tape volumes that are held in tape libraries.<br />
► The OBJECT storage group can span optical, DASD, and tape volumes.<br />
► The OBJECT BACKUP storage group can contain either optical or tape volumes within<br />
one OAM invocation.<br />
► Some volumes are not system-managed.<br />
► Other volumes are owned by DFSMShsm for use in data backup and migration.<br />
DFSMShsm migration level 2 tape cartridges can be system-managed if you assign them<br />
to a tape storage group.<br />
Note: A storage group is assigned to a data set only through the storage group ACS<br />
routine. Users cannot specify a storage group when they allocate a data set, although they<br />
can specify a unit and volume.<br />
Whether or not to honor a user’s unit and volume request is an installation decision, but we<br />
recommend that you discourage users from directly requesting specific devices. It is more<br />
effective for users to specify the logical storage requirements <strong>of</strong> their data by storage and<br />
management class, which the installation can then verify in the ACS routines.<br />
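As a sketch, a storage group ACS routine might route allocations based on the storage class that was assigned (the routine logic and group names are illustrative assumptions):

```
PROC STORGRP
  SELECT
    WHEN (&STORCLAS = 'FASTIO')
      SET &STORGRP = 'SGFAST'
    OTHERWISE
      /* candidate list: SMS selects a volume from these groups */
      SET &STORGRP = 'SGPRIME','SGLARGE'
  END
END
```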
For objects, there are two types <strong>of</strong> storage groups, OBJECT and OBJECT BACKUP. An<br />
OBJECT storage group is assigned by OAM when the object is stored; the storage group<br />
ACS routine can override this assignment. There is only one OBJECT BACKUP storage<br />
group, and all backup copies <strong>of</strong> all objects are assigned to this storage group.<br />
SMS volume selection<br />
SMS determines which volumes are used for data set allocation by developing a list <strong>of</strong> all<br />
volumes from the storage groups assigned by the storage group ACS routine. <strong>Volume</strong>s are<br />
then either removed from further consideration or flagged as one of the following:
Primary <strong>Volume</strong>s online, below threshold, that meet all the specified criteria in the<br />
storage class.<br />
Secondary <strong>Volume</strong>s that do not meet all the criteria for primary volumes.<br />
Tertiary When the number <strong>of</strong> volumes in the storage group is less than the number <strong>of</strong><br />
volumes that are requested.<br />
Rejected <strong>Volume</strong>s that do not meet the required specifications. They are not<br />
candidates for selection.<br />
SMS starts volume selection from the primary list; if no volumes are available, SMS selects<br />
from the secondary; and, if no secondary volumes are available, SMS selects from the<br />
tertiary list.<br />
SMS interfaces with the system resource manager (SRM) to select from the eligible volumes<br />
in the primary list. SRM uses device delays as one <strong>of</strong> the criteria for selection, and does not<br />
prefer a volume if it is already allocated in the jobstep. This is useful for batch processing<br />
when the data set is accessed immediately after creation.<br />
SMS does not use SRM to select volumes from the secondary or tertiary volume lists. It uses<br />
a form <strong>of</strong> randomization to prevent skewed allocations in instances such as when new<br />
volumes are added to a storage group, or when the free space statistics are not current on<br />
volumes.<br />
For a striped data set, when multiple storage groups are assigned to an allocation, SMS<br />
examines each storage group and selects the one that <strong>of</strong>fers the largest number <strong>of</strong> volumes<br />
attached to unique control units. This is called control unit separation. After a storage group<br />
has been selected, SMS selects the volumes based on available space, control unit<br />
separation, and performance characteristics if they are specified in the assigned storage<br />
class.<br />
5.13 Using aggregate backup and recovery support (ABARS)<br />
Figure 5-13 ABARS
Aggregate backup and recovery support (ABARS)<br />
Aggregate backup and recovery support (ABARS), also called application backup and recovery
support, is a command-driven process to back up and recover any user-defined group of data
sets that are vital to your business. An aggregate group is a collection <strong>of</strong> related data sets and<br />
control information that has been pooled to meet a defined backup or recovery strategy. If a<br />
disaster occurs, you can use these backups at a remote or local site to recover critical<br />
applications.<br />
The user-defined group <strong>of</strong> data sets can be those belonging to an application, or any<br />
combination <strong>of</strong> data sets that you want treated as a separate entity. Aggregate processing<br />
enables you to:<br />
► Back up and recover data sets by application, to enable business to resume at a remote<br />
site if necessary<br />
► Move applications in a non-emergency situation in conjunction with personnel moves or<br />
workload balancing<br />
► Duplicate a problem at another site<br />
You can use aggregate groups as a supplement to using management class for applications<br />
that are critical to your business. You can associate an aggregate group with a management<br />
class. The management class specifies backup attributes for the aggregate group, such as<br />
the copy technique for backing up DASD data sets on primary volumes, the number <strong>of</strong><br />
aggregate versions to retain, and how long to retain versions. Aggregate groups simplify the<br />
control <strong>of</strong> backup and recovery <strong>of</strong> critical data sets and applications.<br />
Although SMS must be used on the system where the backups are performed, you can<br />
recover aggregate groups to systems that are not using SMS, provided that the groups do not<br />
contain data that requires that SMS be active, such as PDSEs. You can use aggregate groups<br />
to transfer applications to other data processing installations, or to migrate applications to<br />
newly-installed DASD volumes. You can transfer the application's migrated data, along with its<br />
active data, without recalling the migrated data.<br />
5.14 Automatic Class Selection (ACS) routines<br />
Figure 5-14 Using ACS routines
Using Automatic Class Selection routines<br />
You use automatic class selection (ACS) routines to assign classes (data, storage, and<br />
management) and storage group definitions to data sets, database data, and objects. You<br />
write ACS routines using the ACS language, which is a high-level programming language.<br />
Once written, you use the ACS translator to translate the routines to object form so they can<br />
be stored in the SMS configuration.<br />
The ACS language contains a number <strong>of</strong> read-only variables, which you can use to analyze<br />
new data allocations. For example, you can use the read-only variable &DSN to make class<br />
and group assignments based on data set or object collection name, or &LLQ to make<br />
assignments based on the low-level qualifier <strong>of</strong> the data set or object collection name.<br />
With z/<strong>OS</strong> V1R6, you can use a new ACS routine read-only security label variable,<br />
&SECLABL, as input to the ACS routine. A security label is a name used to represent an<br />
association between a particular security level and a set <strong>of</strong> security categories. It indicates<br />
the minimum level <strong>of</strong> security required to access a data set protected by this pr<strong>of</strong>ile.<br />
Note: You cannot alter the value <strong>of</strong> read-only variables.<br />
You use the four read-write variables to assign the class or storage group you determine for<br />
the data set or object, based on the routine you are writing. For example, you use the<br />
&STORCLAS variable to assign a storage class to a data set or object.<br />
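A minimal storage class ACS routine sketch that uses these read-only and read-write variables (the filter list, qualifiers, and class names are illustrative assumptions):

```
PROC STORCLAS
  FILTLIST UTILOUT INCLUDE(LIST,LISTING,OUTLIST,LINKLIST)
  SELECT
    WHEN (&LLQ = &UTILOUT)
      /* utility output: leave it non-system-managed */
      SET &STORCLAS = ''
    WHEN (&HLQ = 'PAY')
      SET &STORCLAS = 'FASTIO'
    OTHERWISE
      SET &STORCLAS = 'STANDARD'
  END
END
```

Setting &STORCLAS to a null value directs the data set to a non-system-managed volume, as described in "Processing order of ACS routines" below.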
For a detailed description <strong>of</strong> the ACS language and its variables, see z/<strong>OS</strong> DFSMSdfp<br />
Storage Administration Reference, SC26-7402.<br />
For each SMS configuration, you can write as many as four routines: one each for data class,<br />
storage class, management class, and storage group. Use ISMF to create, translate, validate,<br />
and test the routines.<br />
Processing order <strong>of</strong> ACS routines<br />
Figure 5-14 on page 262 shows the order in which ACS routines are processed. Data can<br />
become system-managed if the storage class routine assigns a storage class to the data, or if<br />
it allows a user-specified storage class to be assigned to the data. If this routine does not<br />
assign a storage class to the data, the data cannot reside on a system-managed volume.<br />
Because data allocations, whether dynamic or through JCL, are processed through ACS<br />
routines, you can enforce installation standards for data allocation on system-managed and<br />
non-system-managed volumes. ACS routines also enable you to override user specifications<br />
for data, storage, and management class, and requests for specific storage volumes.<br />
You can use the ACS routines to determine the SMS classes for data sets created by the<br />
Distributed FileManager/MVS. If a remote user does not specify a storage class, and if the<br />
ACS routines decide that the data set is not to be system-managed, then the Distributed<br />
FileManager/MVS terminates the creation process immediately and returns an error reply<br />
message to the source. Therefore, when you construct your ACS routines, consider the<br />
potential data set creation requests <strong>of</strong> remote users.<br />
5.15 SMS configuration<br />
Figure 5-15 Defining the SMS configuration
SMS configuration<br />
An SMS configuration is composed <strong>of</strong>:<br />
► A set <strong>of</strong> data class, management class, storage class, and storage group<br />
► ACS routines to assign the classes and groups<br />
► Optical library and drive definitions<br />
► Tape library definitions<br />
► Aggregate group definitions<br />
► SMS base configuration, which contains information such as:
– Default management class<br />
– Default device geometry<br />
– The systems in the installation for which the subsystem manages storage<br />
The SMS configuration is stored in SMS control data sets, which are VSAM linear data sets. You must define the control data sets before activating SMS. SMS uses the following types of control data sets:
► Source Control Data Set (SCDS)
► Active Control Data Set (ACDS)
► Communications Data Set (COMMDS)
264 ABCs of z/OS System Programming Volume 3
SMS complex (SMSplex)

A collection of systems or system groups that share a common configuration is called an SMS complex. All systems in an SMS complex share an ACDS and a COMMDS. The systems or system groups that share the configuration are defined to SMS in the SMS base configuration. The ACDS and COMMDS must reside on a shared volume, accessible to all systems in the SMS complex.
5.16 SMS control data sets

Figure 5-16 SMS control data sets
SMS control data sets

SMS stores its class and group definitions, translated ACS routines, and system information in three control data sets.

Source Control Data Set (SCDS)

The Source Control Data Set (SCDS) contains SMS classes, groups, and translated ACS routines that define a single storage management policy, called an SMS configuration. You can have several SCDSs, but only one can be used to activate the SMS configuration.

Use the SCDS to develop and test your configuration. Before activating a new configuration, retain at least one prior configuration so that you can fall back to it in case of error. The SCDS is never used to manage allocations.
Active Control Data Set (ACDS)

The ACDS is the system's active copy of the current SCDS. When you activate a configuration, SMS copies the existing configuration from the specified SCDS into the ACDS. By using copies of the SMS classes, groups, volumes, optical libraries, optical drives, tape libraries, and ACS routines rather than the originals, you can change the current storage management policy without disrupting it. For example, while SMS uses the ACDS, you can:
► Create a copy of the ACDS
► Create a backup copy of an SCDS
► Modify an SCDS
► Define a new SCDS
We recommend that you have extra ACDSs in case a hardware failure causes the loss of your primary ACDS. The ACDS must reside on a shared device, accessible to all systems, to ensure that they share a common view of the active configuration. Do not place the ACDS on the same device as the COMMDS or SCDS: both the ACDS and COMMDS are needed for SMS operation across the complex, and separating them protects against hardware failure. Also create a backup ACDS in case of hardware failure or accidental data loss or corruption.
Communications Data Set (COMMDS)

The Communications Data Set (COMMDS) contains the name of the ACDS and storage group volume statistics. It enables communication between SMS systems in a multisystem environment. The COMMDS also contains space statistics, SMS status, and MVS status for each system-managed volume.

The COMMDS must reside on a shared device accessible to all systems. However, do not allocate it on the same device as the ACDS. Create a spare COMMDS in case of a hardware failure or accidental data loss or corruption. SMS activation fails if the COMMDS is unavailable.
5.17 Implementing DFSMS

Figure 5-17 SMS implementation phases
Implementing DFSMS

You can implement SMS to fit your specific needs. You do not have to implement and use all of the SMS functions; rather, you can implement the functions you are most interested in first. For example, you can:
► Set up a storage group to exploit only the functions provided by extended format data sets, such as striping, system-managed buffering (SMB), and partial release.
► Put data in a pool of one or more storage groups, and assign policies at the storage group level, to implement DFSMShsm operations in stages.
► Exploit VSAM record level sharing (RLS).
DFSMS implementation phases

There are five major DFSMS implementation phases:
► Enabling the software base
► Activating the storage management subsystem
► Managing temporary data
► Managing permanent data
► Managing tape data

In this book, we present an overview of the steps needed to activate, and manage data with, a minimal SMS configuration, without affecting your JCL or data set allocations. To implement DFSMS in your installation, however, see z/OS DFSMS Implementing System-Managed Storage, SC26-7407.
5.18 Steps to activate a minimal SMS configuration

► Allocate the SMS control data sets
► Define the GRS resource names for the SMS control data sets
► Define the system group
► Define a minimal SMS configuration:
  – Create the SCDS base data set
  – Create classes, storage groups, and respective ACS routines
► Define the SMS subsystem to z/OS
► Start SMS and activate the SMS configuration

Figure 5-18 Steps to activate a minimal SMS configuration
Steps to activate a minimal SMS configuration

Activating a minimal configuration lets you experience managing an SMS configuration without affecting your JCL or data set allocations. This establishes an operating environment for the storage management subsystem, without data sets becoming system-managed.

The minimal SMS configuration consists of the following elements:
► A base configuration
► A storage class definition
► A storage group containing at least one volume
► A storage class ACS routine
► A storage group ACS routine

All of these elements are required for a valid SMS configuration, except for the storage class ACS routine.
The steps needed to activate the minimal configuration are presented in Figure 5-18. When implementing DFSMS, starting with a minimal configuration allows you to:
► Gain experience with ISMF applications for the storage administrator, because you use ISMF applications to define and activate the SMS configuration.
► Gain experience with the operator commands that control the operation of SMS-managed resources.
► Learn how the SMS base configuration can affect allocations for non-system-managed data sets. The base configuration contains installation defaults for data sets:
  – For non-system-managed data sets, you can specify default device geometry to ease the conversion from device-dependent space calculations to the device-independent method implemented by SMS.
  – For system-managed data sets, you can specify a default management class to be used for data sets that are not assigned a management class by your management class ACS routine.
► Use simplified JCL.
► Implement allocation standards, because you can develop a data class ACS routine to enforce your standards.
5.19 Allocating SMS control data sets

//ALLOC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER(NAME(YOUR.OWN.SCDS) LINEAR VOLUME(D65DM1) -
         TRK(25 5) SHAREOPTIONS(2,3)) -
         DATA(NAME(YOUR.OWN.SCDS.DATA))
  DEFINE CLUSTER(NAME(YOUR.OWN.ACDS) LINEAR VOLUME(D65DM2) -
         TRK(25 5) SHAREOPTIONS(3,3)) -
         DATA(NAME(YOUR.OWN.ACDS.DATA))
  DEFINE CLUSTER(NAME(YOUR.OWN.COMMDS) LINEAR VOLUME(D65DM3) -
         TRK(1 1) SHAREOPTIONS(3,3)) -
         DATA(NAME(YOUR.OWN.COMMDS.DATA))

Figure 5-19 Using IDCAMS to create SMS control data sets
Calculating the SCDS and ACDS sizes

The size of the ACDS and SCDS may allow constructs for up to 32 systems. Be sure to allocate sufficient space for the ACDS and SCDS, because insufficient ACDS size can cause errors such as failing SMS activation. See z/OS DFSMSdfp Storage Administration Reference, SC26-7402, for the formula used to calculate the appropriate SMS control data set size.

Calculating the COMMDS size

The size of the communications data set (COMMDS) increased in DFSMS 1.3, because the amount of space required to store system-related information for each volume increased. To perform a precise calculation of the COMMDS size, use the formula provided in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
Defining the control data sets

After you have calculated their respective sizes, define the SMS control data sets using access method services. The SMS control data sets are VSAM linear data sets, and you define them using the IDCAMS DEFINE command, as shown in Figure 5-19. Because these data sets are allocated before SMS is activated, space is allocated in tracks; allocations in KBs or MBs are only supported when SMS is active.

Specify SHAREOPTIONS(2,3) only for the SCDS. This lets one update-mode user operate simultaneously with other read-mode users between regions.
Specify SHAREOPTIONS(3,3) for the ACDS and COMMDS. These data sets must be shared between systems that are managing a shared DASD configuration in a DFSMS environment.

Define GRS resource names for active SMS control data sets

If you plan to share SMS control data sets between systems, consider the effects of multiple systems sharing these data sets. Access is serialized by the use of RESERVE, which locks out access to the entire volume from other systems until the task using the resource issues a RELEASE. This is undesirable, especially when there are other data sets on the volume.

A RESERVE is issued when SMS is updating:
► The COMMDS with space statistics, when the time interval specified in the IGDSMSxx PARMLIB member expires
► The ACDS, due to changes in the SMS configuration

Place the resource name IGDCDSXS in the RESERVE conversion RNL as a generic entry to convert the RESERVE/RELEASE to an ENQ/DEQ. This minimizes delays due to contention for resources and prevents deadlocks associated with the VARY SMS command.
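As an illustration of that conversion entry, a GRSRNLxx PARMLIB member might contain a definition along these lines (a sketch; the member suffix and any surrounding RNLDEF entries are installation-specific assumptions):

```
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(IGDCDSXS)
```

With such an entry in the conversion RNL, RESERVEs issued against the IGDCDSXS resource name are converted to global ENQs, so other data sets on the volume remain accessible while SMS updates its control data sets.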
Important: If there are multiple SMS complexes within a global resource serialization complex, be sure to use unique COMMDS and ACDS data set names to prevent false contention.

For information about allocating COMMDS and ACDS data set names, see z/OS DFSMS Implementing System-Managed Storage, SC26-7407.
5.20 Defining the SMS base configuration

Figure 5-20 Minimal SMS configuration
Protecting the DFSMS environment

Before defining the SMS base configuration, you have to protect access to the SMS control data sets, programs, and functions. For example, various functions in ISMF are related only to storage administration tasks, and you must protect your storage environment from unauthorized access. You can protect the DFSMS environment with RACF.

RACF controls access to the following resources:
► System-managed data sets
► SMS control data sets
► SMS functions and commands
► Fields in the RACF profile
► SMS classes
► ISMF functions
For more information, see z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

Defining the system group

A system group is a group of systems within an SMS complex that have similar connectivity to storage groups, libraries, and volumes. When a Parallel Sysplex name is specified and used as a system group name, the name applies to all systems in the Parallel Sysplex except for those systems in the Parallel Sysplex that are explicitly named in the SMS base configuration. The system group is defined using ISMF when you define the base configuration.
Defining the SMS base configuration

After creating the SCDS data set with IDCAMS and setting up security for the DFSMS environment, you use the ISMF Control Data Set option to define the SMS base configuration, which contains information such as:
► Default management class
► Default device geometry
► The systems in the installation for which SMS manages storage using that configuration

To define a minimal configuration, you must do the following:
► Define a storage class.
► Define a storage group containing at least one volume. (The volume does not have to exist, as long as you do not direct allocations to either the storage group or the volume.)
► Create their respective ACS routines.
Defining a data class and a management class, and creating their respective ACS routines, is not required for a valid SCDS. However, because of the importance of the default management class, we recommend that you include it in your minimal configuration.

For a detailed description of SMS classes and groups, see z/OS DFSMS Implementing System-Managed Storage, SC26-7407.

The DFSMS product tape contains a set of sample ACS routines. The appendix of z/OS DFSMSdfp Storage Administration Reference, SC26-7402, contains sample definitions of the SMS classes and groups that are used in the sample ACS routines. The starter set configuration can be used as a model for your own SCDS. For a detailed description of base configuration attributes and how to use ISMF to define its contents, see z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
Defining the storage class

You must define at least one storage class name to SMS. Because a minimal configuration does not include any system-managed volumes, no performance or availability information need be contained in the minimal configuration's storage class. Specify an artificial storage class, NONSMS. This class is later used by the storage administrator to create non-system-managed data sets on an exception basis.

In the storage class ACS routine, the &STORCLAS variable is set to a null value to prevent users from coding a storage class in JCL before you want to have system-managed data sets.

You define the class using ISMF. Select Storage Class in the primary menu. Then you can define the class, NONSMS, in your configuration in one of two ways:
► Select option 3 Define in the Storage Class Application Selection panel. The CDS Name field must point to the SCDS you are building.
► Select option 1 Display in the Storage Class Application Selection panel. The CDS Name field must point to the starter set SCDS. Then, in the displayed panel, use the COPY line operator to copy the definition of NONSMS from the starter set SCDS to your own SCDS.
Defining the storage group

You must define at least one pool storage group name to SMS, and at least one volume serial number in this storage group; a storage group with no volumes defined is not valid. Use a nonexistent volume serial number, to prevent JCL errors from jobs that access data sets through a specific volume serial number.
Defining a nonexistent volume lets you activate SMS without having any system-managed volumes. No data sets are system-managed at this time. This condition provides an opportunity to experiment with SMS without any risk to your data.

Define a storage group (for example, NOVOLS) in your SCDS. A name like NOVOLS is useful because you know it does not contain valid volumes.

Defining the default management class

Define a default management class and name it STANDEF to correspond with the entry in the base configuration. We recommend that you specifically assign all system-managed data to a management class. If you do not supply a default, DFSMShsm uses two days on primary storage, and 60 days on migration level 1 storage, as the default.

No management classes are assigned while the minimal configuration is active. This default is defined here to prepare for the managing permanent data implementation phase. The management class, STANDEF, is defined in the starter set SCDS. You can copy its definition to your own SCDS in the same way as the storage class, NONSMS.
5.21 Creating ACS routines

Figure 5-21 Sample ACS routines for a minimal SMS configuration
Creating ACS routines

After you define the SMS classes and group, develop their respective ACS routines. For a minimal SMS configuration, the storage class ACS routine assigns a null storage class, as shown in the sample storage class ACS routine in Figure 5-21. The storage class ACS routine ensures that the storage class read/write variable is always set to null. This prevents users from externally specifying a storage class on their DD statements (STORCLAS keyword), which would cause the data set to become system-managed before you are ready.

The storage group ACS routine never runs if a null storage class is assigned; therefore, no data sets are allocated as system-managed by the minimal configuration. However, you must code a trivial storage group ACS routine to satisfy the SMS requirements for a valid SCDS. After you have written the ACS routines, use ISMF to translate them into executable form.
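As a sketch of what such routines can look like, minimal storage class and storage group ACS routines along the following lines assign a null storage class and a trivial storage group. (This is an illustration, not the starter set source; the storage group name NOVOLS follows the earlier example and is an assumption about your configuration.)

```
PROC STORCLAS                /* Storage class ACS routine        */
  SET &STORCLAS = ''         /* Always assign a null class       */
END

PROC STORGRP                 /* Storage group ACS routine        */
  SET &STORGRP = 'NOVOLS'    /* Trivial assignment; never runs   */
                             /* while the storage class is null  */
END
```

Because the storage class routine always returns null, the storage group routine is present only to satisfy SCDS validity checking.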
Follow these steps to create a data set that contains your ACS routines:
1. If you do not have the starter set, allocate a fixed-block PDS or PDSE with LRECL=80 to contain your ACS routines. Otherwise, start with the next step.
2. On the ISMF Primary Option Menu, select Automatic Class Selection to display the ACS Application Selection panel.
3. Select option 1 Edit. When the next panel is shown, enter in the Edit panel the name of the PDS or PDSE data set you want to create to contain your source ACS routines.
Translating the ACS routines

The translation process checks the routines for syntax errors and converts the code into an ACS object. If the code translates without any syntax errors, the ACS object is stored in the SCDS. To translate:
1. From the ISMF ACS Application Selection Menu panel, select 2 Translate.
2. Enter your SCDS data set name, the PDS or PDSE data set name containing the ACS source routines, and a data set name to hold the translate output listing. If the listing data set does not exist, it is created automatically.
Validating the SCDS

When you validate your SCDS, you verify that all classes and groups assigned by your ACS routines are defined in the SCDS. To validate the SCDS:
1. From the ISMF Primary Option Menu panel, select Control Data Set and press Enter.
2. Enter your SCDS data set name and select 4 Validate.

For more information, see z/OS DFSMS: Using the Interactive Storage Management Facility, SC26-7411.
5.22 DFSMS setup for z/OS

Figure 5-22 DFSMS setup for z/OS
DFSMS setup for z/OS

In preparation for starting SMS, update the following PARMLIB members to define SMS to z/OS:

IEASYSxx Verify the suffix of the IEFSSNyy member in use, and add the SMS=zz parameter, where zz is the IGDSMSzz member name suffix.

IEFSSNyy You can activate SMS only after you define the SMS subsystem to z/OS. To define SMS to z/OS, you must place a record for SMS in the IEFSSNyy PARMLIB member. IEFSSNyy defines how z/OS activates the SMS address space. You can code an IEFSSNyy member with keyword or positional parameters, but not both. We recommend using keyword parameters, and placing the SMS record before the JES2 record in IEFSSNyy so that SMS starts before the JES2 subsystem.

IGDSMSzz For each system in the SMS complex, you must create an IGDSMSzz member in SYS1.PARMLIB. The IGDSMSzz member contains SMS initialization control information. The suffix has a default value of 00.

Every SMS system must have an IGDSMSzz member in SYS1.PARMLIB that specifies a required ACDS and COMMDS control data set pair. This ACDS and COMMDS pair is used if the COMMDS of the pair does not point to another COMMDS.
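As a hedged sketch of these three updates, using zz as the IGDSMSzz suffix, the member entries might look as follows. (The keyword-format IEFSSN record, the PROMPT setting, and the data set names are illustrative assumptions; check your release's syntax before use.)

```
IEASYSxx:  SMS=zz

IEFSSNyy:  SUBSYS SUBNAME(SMS) INITRTN(IGDSSIIN) INITPARM('ID=zz,PROMPT=YES')

IGDSMSzz:  SMS ACDS(YOUR.OWN.ACDS) COMMDS(YOUR.OWN.COMMDS)
```

The IGDSMSzz fragment names the required ACDS and COMMDS pair; additional IGDSMSzz parameters (such as the statistics interval) can be added as needed.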
If the COMMDS points to another COMMDS, the referenced COMMDS is used. This referenced COMMDS might contain the name of an ACDS that is different from the one specified in the IGDSMSzz member. If so, the name of the ACDS is obtained from the COMMDS rather than from the IGDSMSzz member, to ensure that the system is always running under the most recent ACDS and COMMDS.

If the COMMDS of the pair refers to another COMMDS during IPL, a more recent COMMDS has been used. SMS uses the most recent COMMDS to ensure that you cannot IPL with a down-level configuration.

The data sets that you specify for the ACDS and COMMDS pair must be the same for every system in an SMS complex. Whenever you change the ACDS or COMMDS, update the IGDSMSzz member for every system in the SMS complex so that it specifies the same data sets.

IGDSMSzz has many parameters. For a complete description of the SMS parameters, see z/OS MVS Initialization and Tuning Reference, SA22-7592, and z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
5.23 Starting SMS and activating a new configuration

Figure 5-23 Starting SMS and activating a new SMS configuration
Starting SMS

To start SMS, which starts the SMS address space, use either of these methods:
► With SMS=zz defined in IEASYSxx and SMS defined as a valid subsystem, IPL the system. This starts SMS automatically.
► With SMS defined as a valid subsystem to z/OS, IPL the system, and start SMS later using the SET SMS=zz MVS operator command.

For detailed information, see z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
Activating a new SMS configuration

Activating a new SMS configuration means copying the configuration from the SCDS to the ACDS and into the SMS address space. The SCDS itself is never considered active. Attempting to activate an ACDS that is not valid results in an error message.

You can manually activate a new SMS configuration in two ways. Note that SMS must be active before you use either of these methods:
1. Activating an SMS configuration from ISMF:
  – From the ISMF Primary Option Menu panel, select Control Data Set.
  – In the CDS Application Selection panel, enter your SCDS data set name and select 5 Activate, or enter the ACTIVATE command on the command line.
2. Activating an SMS configuration from the operator console, by entering the command:

SETSMS {ACDS(YOUR.OWN.ACDS)} {SCDS(YOUR.OWN.SCDS)}

Activating the configuration means that information is brought into the SMS address space from the ACDS.

To update the current ACDS with the contents of an SCDS, specify the SCDS parameter only. If you want to both specify a new ACDS and update it with the contents of an SCDS, enter the SETSMS command with both the ACDS and SCDS parameters specified.

The ACTIVATE command, which runs from the ISMF CDS application, is equivalent to the SETSMS operator command with the SCDS keyword specified.

If you use RACF, you can enable storage administrators to activate SMS configurations from ISMF by defining the FACILITY class profile STGADMIN.IGD.ACTIVATE.CONFIGURATION and issuing PERMIT commands for each storage administrator.
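As a hedged example, the profile and permissions might be set up with RACF commands along these lines (the group name STGADMIN is an assumed storage administrator group; your installation's naming and refresh conventions may differ):

```
RDEFINE  FACILITY STGADMIN.IGD.ACTIVATE.CONFIGURATION UACC(NONE)
PERMIT   STGADMIN.IGD.ACTIVATE.CONFIGURATION CLASS(FACILITY) -
         ID(STGADMIN) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH
```

READ access to this profile is what allows a storage administrator to run the ISMF ACTIVATE function.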
5.24 Control SMS processing with operator commands

SETSMS - SET SMS=xx - VARY SMS - DISPLAY SMS - DEVSERV

DEVSERV P,CF00,9
IEE459I 16.14.23 DEVSERV PATHS 614
UNIT DTYPE M CNT VOLSER CHPID=PATH STATUS
RTYPE SSID CFW TC DFW PIN DC-STATE CCA DDC ALT CU-TYPE
CF00,33903 ,O,000,ITSO02,80=+ 81=+ 82=+ 83=+
2105 89D7 Y YY. YY. N SIMPLEX 00 00 2105
CF01,33903 ,O,000,TOTSP8,80=+ 81=+ 82=+ 83=+
2105 89D7 Y YY. YY. N SIMPLEX 01 01 2105
CF02,33903 ,O,000,NWCF02,80=+ 81=+ 82=+ 83=+
2105 89D7 Y YY. YY. N SIMPLEX 02 02 2105
CF03,33903 ,O,000,O38CAT,80=+ 81=+ 82=+ 83=+
2105 89D7 Y YY. YY. N SIMPLEX 03 03 2105
CF04,33903 ,O,000,O35CAT,80=+ 81=+ 82=+ 83=+
2105 89D7 Y YY. YY. N SIMPLEX 04 04 2105
CF05,33903 ,O,000,WITP00,80=+ 81=+ 82=+ 83=+
2105 89D7 Y YY. YY. N SIMPLEX 05 05 2105
CF06,33903 ,O,000,MQ531A,80=+ 81=+ 82=+ 83=+
2105 89D7 Y YY. YY. N SIMPLEX 06 06 2105
CF07,33903 ,O,000,NWCF07,80=+ 81=+ 82=+ 83=+
2105 89D7 Y YY. YY. N SIMPLEX 07 07 2105
CF08,33903 ,O,000,TOTMQ5,80=+ 81=+ 82=+ 83=+
2105 89D7 Y YY. YY. N SIMPLEX 08 08 2105
************************ SYMBOL DEFINITIONS ************************
O = ONLINE                + = PATH AVAILABLE

Figure 5-24 SMS operator commands
Controlling SMS processing using operator commands

The DFSMS environment provides a set of z/OS operator commands to control SMS processing. The VARY, DISPLAY, DEVSERV, and SET commands are MVS operator commands that support SMS operation.

SETSMS This command changes a subset of SMS parameters from the operator console without changing the active IGDSMSxx PARMLIB member. For example, you can use this command to activate a new configuration from an SCDS. The MVS operator must use SETSMS to recover from ACDS and COMMDS failures. For an explanation of how to recover from ACDS and COMMDS failures, see z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

SET SMS=zz This command starts SMS, if it has not already been started and is defined as a valid MVS subsystem. The command also:
  – Changes options set in the IGDSMSxx PARMLIB member
  – Restarts SMS if it has terminated
  – Updates the SMS configuration

Table 5-1 on page 283 lists the differences between the SETSMS and SET SMS commands.
VARY SMS This command changes storage group, volume, library, or drive status. You can use this command to:
  – Limit new allocations to a volume or storage group
  – Enable a newly-installed volume for allocations

DISPLAY SMS Refer to “Displaying the SMS configuration” on page 284.

DEVSERV This command displays information for a device. Use it to display the status of extended functions in operation for a given volume that is attached to a cache-capable 3990 storage control. An example of the output of this command is shown in Figure 5-24 on page 282.
Table 5-1 Comparison of SETSMS and SET SMS commands

When and how to use the command:
  – SET SMS=xx: Initializes SMS parameters and starts SMS if SMS is defined but not started at IPL. Changes SMS parameters when SMS is running.
  – SETSMS: Changes SMS parameters only when SMS is running.

Where the parameters are entered:
  – SET SMS=xx: In the IGDSMSxx PARMLIB member.
  – SETSMS: At the console.

What default values are available:
  – SET SMS=xx: Default values are used for non-specified parameters.
  – SETSMS: No default values; non-specified parameters remain unchanged.
For more information about operator commands, see z/OS MVS System Commands, SA22-7627.
Chapter 5. System-managed storage 283
5.25 Displaying the SMS configuration

D SMS,SG(STRIPE),LISTVOL
IGD002I 16:02:30 DISPLAY SMS 581
STORGRP  TYPE    SYSTEM= 1 2 3 4
 STRIPE  POOL            + + + +
VOLUME  UNIT    SYSTEM= 1 2 3 4   STORGRP NAME
MHLV11          D D D D           STRIPE
MHLV12          D D D D           STRIPE
MHLV13          D D D D           STRIPE
MHLV14          D D D D           STRIPE
MHLV15          + + + +           STRIPE
SBOX28  6312    + + + +           STRIPE
SBOX29  6412    + + + +           STRIPE
SBOX3K  6010    + + + +           STRIPE
SBOX3L  6108    + + + +           STRIPE
SBOX3M  6204    + + + +           STRIPE
SBOX3N  6309    + + + +           STRIPE
SBOX30  6512    + + + +           STRIPE
SBOX31  6013    + + + +           STRIPE
***************************** LEGEND *****************************
. THE STORAGE GROUP OR VOLUME IS NOT DEFINED TO THE SYSTEM
+ THE STORAGE GROUP OR VOLUME IS ENABLED
- THE STORAGE GROUP OR VOLUME IS DISABLED
* THE STORAGE GROUP OR VOLUME IS QUIESCED
D THE STORAGE GROUP OR VOLUME IS DISABLED FOR NEW ALLOCATIONS ONLY
Q THE STORAGE GROUP OR VOLUME IS QUIESCED FOR NEW ALLOCATIONS ONLY
> THE VOLSER IN UCB IS DIFFERENT FROM THE VOLSER IN CONFIGURATION
SYSTEM 1 = SC63   SYSTEM 2 = SC64   SYSTEM 3 = SC65   SYSTEM 4 = SC70

Figure 5-25 Displaying the SMS configuration
Displaying the SMS configuration

You can display the SMS configuration in two ways:
► Using the ISMF Control Data Set application: enter ACTIVE in the CDS Name field and select option 1, Display.
► Using the DISPLAY SMS operator command, which shows volumes, storage groups, libraries, drives, SMS configuration information, SMS trace parameters, SMS operational options, OAM information, OSMC information, and cache information. Enter this command to:
  – Confirm that the system-managed volume status is correct
  – Confirm that SMS starts with the proper parameters

The DISPLAY SMS command has many variations. To learn about the full functionality of this command, see z/OS MVS System Commands, SA22-7627.
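The per-system status flags in the display can be decoded mechanically using the legend in Figure 5-25. The following Python sketch is purely illustrative (it is not an IBM tool, and the line parsing makes simplifying assumptions about the display format):

```python
# Decode the per-system status flags of a D SMS,SG(...),LISTVOL volume
# line, using the legend from Figure 5-25.
LEGEND = {
    ".": "not defined to the system",
    "+": "enabled",
    "-": "disabled",
    "*": "quiesced",
    "D": "disabled for new allocations only",
    "Q": "quiesced for new allocations only",
    ">": "VOLSER in UCB differs from VOLSER in configuration",
}

def decode_volume_line(line, systems):
    """Parse one VOLUME line of the display into (volser, {system: status})."""
    fields = line.split()
    volser = fields[0]
    # A unit address may or may not follow the volser (compare MHLV11 with
    # SBOX28 in the sample output); status flags are single legend characters.
    flags = [f for f in fields[1:] if f in LEGEND]
    return volser, {sys: LEGEND[flag] for sys, flag in zip(systems, flags)}

systems = ["SC63", "SC64", "SC65", "SC70"]
volser, status = decode_volume_line("MHLV11 D D D D STRIPE", systems)
print(volser, status["SC63"])  # MHLV11 disabled for new allocations only
```

Reading the flags this way makes it easy to confirm, for a whole storage group, that the volume status is what you expect on every system in the sysplex.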
5.26 Managing data with a minimal SMS configuration

Device-independence space allocation
System-determined block size
Use ISMF to manage volumes
Use simplified JCL to allocate data sets
Manage expiration date
Establish installation standards and use data class ACS routine to enforce them
Manage data set allocation
Use PDSE data sets

Figure 5-26 Managing data with minimal SMS configuration
Managing data allocation

After the SMS minimal configuration is active, your installation can exploit SMS capabilities that give you experience with SMS and help you plan for full DFSMS exploitation with system-managed data sets.

Inefficient space usage and poor data allocation cause problems for space and performance management. In a DFSMS environment, you can enforce good allocation practices to help reduce these problems. The following sections highlight how to exploit SMS capabilities.

Using data classes to standardize data allocation

You can define data classes containing standard data set allocation attributes. Users then only need to specify the appropriate data class name to create standardized data sets. To override values in the data class definition, they can still provide specific allocation parameters.

The data class can be determined from the user-specified value on the DATACLAS parameter (DD statement, TSO ALLOCATE, DYNALLOC macro), from a RACF default, or by the ACS routines. ACS routines can also override user-specified or RACF default data classes.
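The selection order just described can be sketched as follows. This is an illustrative Python model only; real ACS routines are written in the SMS ACS language, and the class names here are invented:

```python
# Illustrative sketch of data class selection: a user-specified DATACLAS
# or a RACF default supplies the initial value, and the data class ACS
# routine can override either.
def select_data_class(user_dataclas, racf_default, acs_routine):
    initial = user_dataclas or racf_default   # user value wins over RACF default
    return acs_routine(initial) or initial    # ACS routine may override the result

# Hypothetical ACS logic: force a standard class for anything non-standard.
def acs(dc):
    return "DCSTD" if dc not in ("DCSTD", "DCLIB") else None

print(select_data_class("DCLIB", "DCDEF", acs))  # DCLIB (user value kept)
print(select_data_class(None, "DCDEF", acs))     # DCSTD (ACS overrides RACF default)
```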
You can override a data class attribute (but not the data class itself) using JCL or dynamic allocation parameters. DFSMS usually does not change values that are explicitly specified, because doing so would alter the original meaning and intent of the allocation. There is one exception: if it is clear that a PDS is being allocated (DSORG=PO or DSNTYPE=PDS is specified) and no directory space is indicated in the JCL, the directory space from the data class is used.

Users cannot override the data class attributes of dynamically allocated data sets if you use the IEFDB401 user exit.

For additional information about data classes, see also 5.8, “Using data classes” on page 251. For sample data classes, descriptions, and ACS routines, see z/OS DFSMS Implementing System-Managed Storage, SC26-7407.
5.27 Device-independence space allocation

//ALLOC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALLOCATE -
    DSNAME('FILE.PDSE') -
    NEW -
    DSNTYPE(LIBRARY)

Figure 5-27 Device independence

(Figure 5-27 also shows SMS placing the new data set FILE.PDSE on the system-managed volume VOLSMS.)
Ensuring device independence

The base configuration contains a default unit that corresponds to a DASD esoteric (such as SYSDA). Default geometry for this unit is specified in bytes/track and tracks/cylinder for the predominant device type in the esoteric. If users specify the esoteric, or do not supply the UNIT parameter for new allocations, the default geometry converts space allocation requests into device-independent units, such as KBs and MBs. This quantity is then converted back into device-dependent tracks, based on the default geometry.
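The conversion back to tracks can be sketched as below. This is an illustrative model only, assuming 3390 geometry (56,664 bytes per track, 15 tracks per cylinder); the system's actual computation depends on the default geometry defined in the base configuration:

```python
import math

# Convert a device-independent space request (in bytes) back into
# device-dependent tracks, using assumed 3390 default geometry.
BYTES_PER_TRACK = 56664
TRACKS_PER_CYL = 15

def request_to_tracks(size_bytes):
    """Smallest whole number of tracks that holds the requested bytes."""
    return math.ceil(size_bytes / BYTES_PER_TRACK)

# A 1 MB request becomes 19 tracks on this geometry.
print(request_to_tracks(1 * 1024 * 1024))  # 19
```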
System-determined block size

During allocation, DFSMSdfp can assign a block size that is optimal for the device. When you allow DFSMSdfp to calculate the block size for the data set, you are using a system-determined block size. System-determined block sizes can be calculated for system-managed and non-system-managed primary storage, VIO, and tape data sets.

Using a system-determined block size provides:
► Device independence, because you do not need to know the track capacity to allocate space efficiently
► Optimized space usage
► Improved I/O performance
► Simplified JCL, because you do not need to code the BLKSIZE parameter
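The idea behind the calculation can be sketched for the simplest case. This is an assumption-laden illustration, not DFSMSdfp's actual algorithm (which also considers RECFM, device type, and data set type): for fixed-length records on a 3390, the system-determined block size is essentially the largest multiple of LRECL that fits in a half track, taken here as 27,998 bytes:

```python
# Illustrative sketch: largest multiple of LRECL that fits in the assumed
# 3390 half-track block limit of 27998 bytes (RECFM=FB case only).
HALF_TRACK_3390 = 27998

def sd_blksize_fb(lrecl, limit=HALF_TRACK_3390):
    return (limit // lrecl) * lrecl

print(sd_blksize_fb(80))   # 27920 -- the familiar value for LRECL=80
```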
You take full advantage of system-managed storage when you use system-managed data sets and allow the system to place data on the most appropriate device in the most efficient way (see Figure 5-27 on page 287). In the DFSMS environment, you control volume selection through the storage class and storage group definitions you create, and through ACS routines. This means that users do not have to specify volume serial numbers with the VOL=SER parameter, or code a specific device type with the UNIT= parameter, in their JCL. With a large number of volumes in an installation, the SMS volume selection routine can take a long time; a fast volume selection routine, which improves the best-fit and not-best-fit algorithms, was introduced in z/OS 1.8.

When converting data sets for use in DFSMS, users do not have to remove these parameters from existing JCL, because volume and unit information can be ignored by ACS routines. (However, you should work with users to evaluate UNIT and VOL=SER dependencies before conversion.)

If you keep the VOL=SER parameter for a non-SMS volume but are trying to access a system-managed data set, SMS might not find the data set. All SMS-managed data sets (the ones with a storage class) must reside on a system-managed volume.
5.28 Developing naming conventions

Setting the high-level qualifier standard

First character (type of user):
A - Accounting Support
D - Documentation
E - Engineering
F - Field Support
M - Marketing Support
P - Programming
$ - TSO userid

Second character (type of data):
P - Production data
D - Development data
T - Test data
M - Master data
U - Update data
W - Work data

Remaining characters: project name, code, or userid (for example, 3000 = project code)

Figure 5-28 Setting data set HLQ conventions
Developing a data set naming convention

Whenever you allocate a new data set, you (or the operating system) must give the data set a unique name. Usually, the data set name is given as the dsname in JCL. A data set name can be one name segment, or a series of joined name segments; see also 2.2, “Data set name rules” on page 19.

You should implement a naming convention for your data sets. Although a naming convention is not a prerequisite for DFSMS conversion, it lets you use DFSMS more efficiently. You can also reduce the cost of storage management significantly by grouping data that shares common management requirements. Naming conventions are an effective way of grouping data. They also:
► Simplify service-level assignments to data
► Facilitate writing and maintaining ACS routines
► Allow data to be mixed in a system-managed environment while retaining separate management criteria
► Provide a filtering technique that is useful with many storage management products
► Simplify the data definition step of aggregate backup and recovery support

Most naming conventions are based on the HLQ and LLQ of the data set name. Other levels of qualifiers can be used to identify generation data sets and database data. They can also be used to help users identify their own data.
Using a high-level qualifier (HLQ)

Use the HLQ to identify the owner or owning group of the data, or to indicate the data type. Do not embed information that is subject to frequent change in the HLQ, such as department number, application location, output device type, job name, or access method. Set a standard within the HLQ. Figure 5-28 on page 289 shows examples of naming standards.
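With a convention like the one in Figure 5-28, the HLQ becomes machine-decodable, which is exactly what makes ACS routine filtering easy. Here is an illustrative Python decoder for that sample convention:

```python
# Decode an HLQ that follows the sample convention in Figure 5-28:
# first character = type of user, second character = type of data,
# remaining characters = project name, code, or userid.
USER_TYPE = {
    "A": "Accounting Support", "D": "Documentation", "E": "Engineering",
    "F": "Field Support", "M": "Marketing Support", "P": "Programming",
    "$": "TSO userid",
}
DATA_TYPE = {
    "P": "Production data", "D": "Development data", "T": "Test data",
    "M": "Master data", "U": "Update data", "W": "Work data",
}

def decode_hlq(hlq):
    return (USER_TYPE.get(hlq[0], "unknown"),
            DATA_TYPE.get(hlq[1], "unknown"),
            hlq[2:])  # project name, code, or userid

print(decode_hlq("PP3000"))  # ('Programming', 'Production data', '3000')
```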
5.29 Setting the low-level qualifier (LLQ) standards

LLQ naming standards:

          ExpDays Max            Migrate   Cmd/            Retain  Retain
Low-Level Non-    Ret    Partial Days      Auto    No.GDG  Backup Backup Days    Day
Qualifier usage   Period Release Non-usage Migrate Primary Freqcy Versns OnlyBUP ExtraBUP
ASM...... NOLIM   NOLIM  YES     15        BOTH    --      0      5      1100    120
CLIST.... NOLIM   NOLIM  YES     15        BOTH    --      0      5      1100    120
COB*..... NOLIM   NOLIM  YES     15        BOTH    --      0      5      1100    120
CNTL..... NOLIM   NOLIM  YES     15        BOTH    --      0      5      1100    120
DATA..... 400     400    YES     15        BOTH    --      2      2      400     60
*DATA.... 400     400    YES     15        BOTH    --      2      2      400     60
FOR*..... NOLIM   NOLIM  YES     15        BOTH    --      0      5      1100    120
INCL*.... NOLIM   NOLIM  YES     15        BOTH    --      0      5      1100    120
INPUT.... 400     400    YES     15        BOTH    --      2      2      1100    120
ISPROF... 400     400    YES     30        BOTH    --      0      2      60      30
JCL...... NOLIM   NOLIM  YES     15        BOTH    --      0      5      1100    120
LIST*.... 2       2      YES     NONE      NONE    --      NONE   NONE   --      --
*LIST.... 2       2      YES     NONE      NONE    --      NONE   NONE   --      --
LOAD*.... 400     400    YES     15        BOTH    --      1      2      --      --
MACLIB... 400     400    YES     15        BOTH    --      1      2      400     60
MISC..... 400     400    YES     15        BOTH    --      2      2      400     60
NAMES.... NOLIM   NOLIM  YES     15        BOTH    --      0      5      1100    120
OBJ*..... 180     180    YES     7         BOTH    --      3      1      180     30
PLI...... NOLIM   NOLIM  YES     15        BOTH    --      0      5      1100    120

Figure 5-29 Setting the LLQ standards
Setting the low-level qualifier (LLQ) standards

The LLQ determines the contents and storage management processing of the data. You can use LLQs to identify data requirements for:
► Migration (data sets only)
► Backup (data sets and objects)
► Archiving (data sets)
► Retention or expiration (data sets and objects)
► Class transitions (objects only)
► Release of unused space (data sets only)

Mapping storage management requirements to data set names is especially useful in a system-managed environment. In an environment without storage groups, data with differing requirements is often segregated onto separate volumes that are monitored and managed manually. LLQ data naming conventions allow data to be mixed together in a system-managed environment and still retain separate management criteria.

Figure 5-29 shows examples of how you can use LLQ naming standards to indicate the storage management processing criteria.

The first column lists the LLQ of a data set name. An asterisk indicates where a partial qualifier can be used. For example, LIST* indicates that only the first four characters of the LLQ must be LIST; valid qualifiers include LIST1, LISTING, and LISTOUT. The remaining columns show the storage management processing information for the data listed.
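The partial-qualifier matching just described is ordinary wildcard matching, so a standards table like Figure 5-29 can be resolved mechanically. The following Python sketch is illustrative only; the policy strings are invented stand-ins for the real column values:

```python
from fnmatch import fnmatch

# Resolve a data set's LLQ against naming standards in the style of
# Figure 5-29, honoring partial qualifiers such as LIST* and *LIST.
# First match wins; policies here are invented summaries.
STANDARDS = [
    ("LIST*", "expire after 2 days, no backup"),
    ("*LIST", "expire after 2 days, no backup"),
    ("LOAD*", "expire after 400 days, 2 backup versions"),
    ("JCL",   "no expiration, 5 backup versions"),
]

def policy_for(dsname):
    llq = dsname.split(".")[-1]          # low-level qualifier
    for pattern, policy in STANDARDS:
        if fnmatch(llq, pattern):
            return policy
    return "default policy"

print(policy_for("PP3000.PAYROLL.LISTING"))  # expire after 2 days, no backup
print(policy_for("PP3000.PAYROLL.JCL"))      # no expiration, 5 backup versions
```

ACS routines use the same kind of filtering (with `&LLQ` masks) to assign management criteria by name.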
5.30 Establishing installation standards

Based on user needs
Improve service to users
Better transition to SMS-managed storage
Use service level agreement

Figure 5-30 Establishing installation standards

Establishing installation standards

Establishing standards, such as naming conventions and allocation policies, helps you manage storage more efficiently and improves service to your users. With standards in place, your installation is better prepared to make a smooth transition to system-managed storage. Negotiate with your user group representatives to agree on the specific policies for the installation, how soon you can implement them, and how strongly you enforce them.

You can simplify storage management by limiting the number of data sets and volumes that cannot be system-managed.

Service level agreement

Document negotiated policies in a service level agreement. We recommend that you develop the following documents before you start to migrate permanent data to system management:
► A storage management plan that documents your storage administration group's strategy for meeting storage requirements
► Formal service level agreements that describe the services that you agree to provide
5.31 Planning and defining data classes

Data class attributes (what does the data set look like?): data set type, record length, block size, space requirements, expiration date, and VSAM attributes.

Sources of a data class (DC A, DC B, DC C in the figure): a user-specified data class (DD statement), the data class ACS routine, or a RACF default.

Figure 5-31 Planning and defining data classes
Planning and defining data classes

After you establish your installation's standards, use your service level agreement (SLA) as a reference when planning your data classes. SLAs identify users' current allocation practices and their requirements. For example:
► Based on user requirements, create a data class to allocate standard control libraries.
► Create a data class to supply the default value of a parameter, so that users do not have to specify a value for that parameter in the JCL or dynamic allocation.

Have data class names indicate the type of data to which they are assigned; this makes it easier for users to identify the template they need to use for an allocation.

You define data classes using the ISMF data class application. Users can access the Data Class List panel to determine which data classes are available and the allocation values that each data class contains.

Figure 5-32 on page 294 contains information that can help with this task. For more information about planning and defining data classes, see z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
5.32 Data class attributes

Data class name and data class description (DC)
Data set organization (RECORG) and data set name type (DSNTYPE)
Record format (RECFM) and logical record length (LRECL)
Key length (KEYLEN) and offset (KEYOFF)
Space attributes (AVGREC, AVG VALUE, PRIMARY, SECONDARY, DIRECTORY)
Retention period or expiration date (RETPD or EXPDT)
Number of volumes the data set can span (VOLUME COUNT)
Allocation amount when extending a VSAM extended data set
Control interval size for VSAM data components (CISIZE DATA)
Percentage of control interval or control area free space (% FREESPACE)
VSAM share options (SHAREOPTIONS)
Compaction option for data sets (COMPACTION)
Tape media (MEDIA TYPE)

Figure 5-32 Data class attributes
Data class attributes

You can specify data class space attributes to control wasted DASD space. For example:
► Use the primary space value to specify the total amount of space initially required for output processing. The secondary allocation allows automatic extension with additional space as the data set grows, and does not waste space by overallocating the primary quantity. You can also use data class space attributes to relieve users of the burden of calculating how much primary and secondary space to allocate.
► The COMPACTION attribute specifies whether data is to be compressed on DASD if the data set is allocated in the extended format. The COMPACTION attribute alone also allows you to use the improved data recording capability (IDRC) of your tape device when allocating tape data sets. To use the COMPACTION attribute on DASD, the data set must be system-managed, because this attribute demands an extended format data set.
► The following attributes are used for tape data sets only:
  – MEDIA TYPE allows you to select the mountable tape media cartridge type.
  – RECORDING TECHNOLOGY allows you to select the format to use when writing to that device.
  – The read-compatible special attribute indicator in the tape device selection information (TDSI) allows an 18-track tape to be mounted on a 36-track device for read access. The attribute increases the number of devices that are eligible for allocation when you are certain that no more data will be written to the tape.

For detailed information about specifying data class attributes, see z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
5.33 Use data class ACS routine to enforce standards

Examples of standards to be enforced:
Prevent extended retention or expiration periods
Prevent specific volume allocations, unless authorized (you can control allocations to spare, system, database, or other volumes)
Require valid naming conventions for permanent data sets

Figure 5-33 Using data class (DC) ACS routine to enforce standards

Using data class (DC) ACS routine to enforce standards

After you start DFSMS with the minimal configuration, you can use the data class ACS routine to automate or simplify storage allocation standards if you currently:
► Use manual techniques to enforce standards
► Plan to enforce standards before implementing DFSMS
► Use DFSMSdfp or MVS installation exits to enforce storage allocation standards

The data class ACS routine provides an automatic method for enforcing standards, because it is called for both system-managed and non-system-managed data set allocations. Standards are enforced automatically at allocation time, rather than through manual techniques after allocation.

Enforcing standards optimizes data processing resources, improves service to users, and positions you for implementing system-managed storage. You can fail requests, or issue warning messages, to users who do not conform to standards. Consider enforcing the following standards in your DFSMS environment:
► Prevent extended retention or expiration periods.
► Prevent specific volume allocations, unless authorized. For example, you can control allocations to spare, system, database, or other volumes.
► Require valid naming conventions before implementing DFSMS system management for permanent data sets.
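ACS routines are written in the SMS ACS language, not Python; purely as an illustration, the following sketch expresses the same three checks. The retention maximum and the naming rule are invented for the example:

```python
import re

# Illustrative standards check in the spirit of a data class ACS routine.
# MAX_RETPD and the HLQ pattern are assumed installation choices, not
# real SMS values (the HLQ rule follows the Figure 5-28 convention).
MAX_RETPD = 1100
VALID_HLQ = re.compile(r"^[ADEFMP$][PDTMUW]")

def check_allocation(dsname, retpd=0, volser=None, authorized=False):
    """Return a list of standards violations; empty means the request passes."""
    errors = []
    if retpd > MAX_RETPD:
        errors.append(f"retention period {retpd} exceeds {MAX_RETPD}")
    if volser and not authorized:
        errors.append(f"specific volume {volser} not allowed")
    if not VALID_HLQ.match(dsname.split(".")[0]):
        errors.append("HLQ does not follow the naming convention")
    return errors

print(check_allocation("PP3000.TEST.DATA", retpd=9999, volser="SYS001"))
```

In a real ACS routine, a failing request would be rejected with `EXIT CODE(nn)` and a `WRITE` message rather than a returned list.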
5.34 Simplifying JCL use

DEFINING A NEW DATA SET (SMS)

//DD1 DD DSN=PAY.D3,
//    DISP=(NEW,CATLG)

Figure 5-34 Using SMS capabilities to simplify JCL

Use simplified JCL

After you define and start using data classes, several JCL keywords can help you simplify the task of creating data sets and make the allocation process more consistent. It is also possible to allocate VSAM data sets through JCL without IDCAMS assistance.

For example, with data classes in use, you have less need for the JCL keywords UNIT, DCB, and AMP. When you start using system-managed data sets, you do not need the JCL VOL keyword at all.

JCL keywords used in the DFSMS environment

You can use JCL keywords to create VSAM and non-VSAM data sets. For a detailed description of the keywords and their use, see z/OS MVS JCL User’s Guide, SA22-7598.

The following sections present sample jobs that exemplify the use of JCL keywords when:
► Creating a sequential data set
► Creating a VSAM cluster
► Specifying a retention period
► Specifying an expiration date
5.35 Allocating a data set

//NEWDATA DD DSN=FILE.SEQ1,
// DISP=(,CATLG),
// SPACE=(50,(5,5)),AVGREC=M,
// RECFM=VB,LRECL=80

Figure 5-35 Allocating a sequential data set
Creating and allocating data sets

The words create and allocate, when applied to data sets, are often used as synonyms in MVS. However, they are not the same:
► To create (on DASD) means to assign space in the VTOC to be used for a data set (sometimes create implies cataloging the data set). A data set is created in response to DISP=NEW on a DD card in JCL.
► To allocate means to establish a logical relationship between the request for the use of the data set within the program (through the use of a DCB or ACB) and the data set itself on the device where it is located.

Figure 5-35 shows an example of JCL used to create a data set in a system-managed environment.

These are characteristics of the JCL in a system-managed environment:
► The LRECL and RECFM parameters are independent keywords. This makes it easier to override individual attributes that are assigned default values by the data class.
► In the example, the SPACE parameter is coded with the average number of bytes per record (50), the number of records required for the primary data set allocation (5 M), and the number for the secondary data set allocation (5 M). These are the values that the system uses to calculate the least number of tracks required for the space allocation.
► The AVGREC attribute indicates the scale factor for the primary and secondary allocation values. In the example, an AVGREC value of M indicates that the primary and secondary values of 5 are each to be multiplied by 1 048 576.
► For system-managed data sets, the device-dependent volume serial number and unit information is no longer required, because the volume is assigned within a storage group selected by the ACS routines.
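The SPACE=(50,(5,5)),AVGREC=M request from Figure 5-35 can be worked through numerically. This sketch is illustrative: the track conversion assumes 3390 geometry (56,664 bytes per track) and ignores block and inter-block overheads, which the system's real calculation accounts for:

```python
import math

# SPACE=(50,(5,5)),AVGREC=M: average record length 50 bytes, primary and
# secondary quantities of 5, each scaled by the AVGREC factor.
AVGREC_SCALE = {"U": 1, "K": 1024, "M": 1048576}

def space_request(avg_reclen, quantity, avgrec, bytes_per_track=56664):
    nbytes = avg_reclen * quantity * AVGREC_SCALE[avgrec]
    tracks = math.ceil(nbytes / bytes_per_track)  # ignores block overheads
    return nbytes, tracks

nbytes, tracks = space_request(50, 5, "M")
print(nbytes)  # 262144000 bytes requested for the primary allocation
print(tracks)
```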
Overriding data class attributes with JCL

In a DFSMS environment, the JCL to allocate a data set is simpler and has no device-dependent keywords.

Table 5-2 lists the attributes a user can override with JCL.

Table 5-2 Data class attributes that can be overridden by JCL

JCL DD statement keyword                                      Use for
RECORG, KEYLEN, KEYOFF                                        VSAM only
RECFM                                                         Sequential (PO or PS)
LRECL, SPACE, AVGREC, RETPD or EXPDT, VOLUME (volume count)   All data set types
DSNTYPE                                                       PDS or PDSE

For more information about data classes, refer to 5.8, “Using data classes” on page 251 and 5.32, “Data class attributes” on page 294.

As previously mentioned, a data set does not have to be system-managed in order to use a data class. An installation can take advantage of a minimal SMS configuration to simplify JCL use and manage data set allocation.

For information about managing data allocation, see z/OS DFSMS: Using Data Sets, SC26-7410.
5.36 Creating a VSAM cluster

//VSAM DD DSN=NEW.VSAM,
// DISP=(,CATLG),
// SPACE=(1,(2,2)),AVGREC=M,
// RECORG=KS,KEYLEN=17,KEYOFF=6,
// LRECL=80

Figure 5-36 Creating a VSAM cluster with JCL

(Figure 5-36 also shows the resulting cluster NEW.VSAM with its components NEW.VSAM.DATA and NEW.VSAM.INDEX.)
Creating a VSAM data set using JCL

In the DFSMS environment, you can create temporary and permanent VSAM data sets using JCL with either of the following:
► The RECORG parameter of the JCL DD statement
► A data class

You can use JCL DD statement parameters to override various data class attributes; see Table 5-2 on page 298 for those related to VSAM data sets.

Important: Regarding DISP=(OLD,DELETE): in an SMS environment, the VSAM data set is deleted at unallocation. In a non-SMS environment, the VSAM data set is kept.

A data set with a disposition of MOD is treated as a NEW allocation if it does not already exist; otherwise, it is treated as an OLD allocation.

In a non-SMS environment, a VSAM cluster can be created only through IDCAMS. In Figure 5-36, NEW.VSAM refers to a KSDS VSAM cluster.
Considerations when specifying space for a KSDS

The space allocation for a VSAM entity depends on the level of the entity being allocated:
► If allocation is specified at the cluster or alternate index level only, the amount needed for the index is subtracted from the specified amount. The remainder of the specified amount is assigned to the data component.
► If allocation is specified at the data level only, the specified amount is assigned to data. The amount needed for the index is in addition to the specified amount.
► If allocation is specified at both the data and index levels, the specified data amount is assigned to data and the specified index amount is assigned to the index.
► If secondary allocation is specified at the data level, secondary allocation must be specified at the index level or the cluster level.

You cannot use certain parameters in JCL when allocating VSAM data sets, although you can use them in the IDCAMS DEFINE command.
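The first three rules above can be expressed compactly. This is an illustrative model with abstract space units; `index_need` stands for the amount the system itself calculates for the index component:

```python
# Illustrative sketch of the KSDS space-assignment rules listed above.
def ksds_space(index_need, cluster=None, data=None, index=None):
    """Return (data_amount, index_amount) for a KSDS allocation."""
    if data is None and cluster is not None:
        # Cluster level only: the index amount comes out of the total.
        return cluster - index_need, index_need
    if data is not None and index is None:
        # Data level only: the index amount is added on top.
        return data, index_need
    if data is not None and index is not None:
        # Both levels specified: use the amounts as given.
        return data, index
    raise ValueError("no allocation amount specified")

print(ksds_space(2, cluster=100))       # (98, 2)
print(ksds_space(2, data=100))          # (100, 2)
print(ksds_space(2, data=100, index=5)) # (100, 5)
```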
5.37 Retention period and expiration date<br />
//RETAIN DD DSN=DEPTM86.RETPD.DATA,<br />
// DISP=(,CATLG),RETPD=365<br />
//RETAIN DD DSN=DEPTM86.EXPDT.DATA,<br />
// DISP=(,CATLG),EXPDT=2006/013<br />
Figure 5-37 Retention period and expiration date<br />
Managing retention period and expiration date<br />
The RETPD and EXPDT parameters specify retention period and expiration date. They apply<br />
alike to system-managed and non-system-managed data sets. They control the time during<br />
which a data set is protected from being deleted by the system. The first DD statement in<br />
Figure 5-37 protects the data set from deletion for 365 days. The second DD statement in<br />
Figure 5-37 protects the data set from deletion until January 13, 2006.<br />
The VTOC entry for non-VSAM and VSAM data sets contains the expiration date as declared<br />
in the JCL, the TSO ALLOCATE command, the IDCAMS DEFINE command, or in the data class<br />
definition. The expiration date is placed in the VTOC either directly from the date<br />
specification, or after it is calculated from the retention period specification. The expiration<br />
date in the catalog entry exists for information purposes only. If you specify the current date or<br />
an earlier date, the data set is immediately eligible for replacement.<br />
You can use a management class to limit or ignore the RETPD and EXPDT parameters given<br />
by a user. If a user specifies values that exceed the maximum allowed by the management<br />
class definition, the retention period is reset to the allowed maximum. For an expiration date<br />
beyond the year 1999, use the format YYYY/DDD. For more information about using<br />
management class to control retention period and expiration date, see z/<strong>OS</strong> DFSMShsm<br />
Storage Administration Guide, SC35-0421.<br />
Important: The expiration dates 99365, 99366, 1999/365, and 1999/366 are special values<br />
that mean the data set never expires.<br />
Chapter 5. <strong>System</strong>-managed storage 301
5.38 SMS PDSE support<br />
Figure 5-38 SMS PDSE support<br />
SMS PDSE support<br />
With the minimal SMS configuration, you can use PDSE data sets. A PDSE<br />
does not have to be system-managed. For information about PDSE, refer to 4.24, “Partitioned<br />
data set extended (PDSE)” on page 146.<br />
If you have DFSMS installed, you can extend PDSE sharing to enable multiple users on<br />
multiple systems to concurrently create new PDSE members and read existing members.<br />
Using the PDSESHARING keyword in the SYS1.PARMLIB member, IGDSMSxx, you can<br />
specify:<br />
► NORMAL. This allows multiple users to read any member <strong>of</strong> a PDSE.<br />
► EXTENDED. This allows multiple users to read any member or create new members <strong>of</strong> a<br />
PDSE.<br />
All systems sharing PDSEs need to be upgraded to DFSMS to use the extended PDSE<br />
sharing capability.<br />
After updating the IGDSMSxx member <strong>of</strong> SYS1.PARMLIB, you need to issue the SET SMS<br />
ID=xx command for every system in the complex to activate the sharing capability. See also<br />
z/<strong>OS</strong> DFSMS: Using Data Sets, SC26-7410 for information about PDSE sharing.<br />
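As a sketch, an IGDSMSxx member that enables extended sharing might contain the following lines. The ACDS and COMMDS data set names are installation-specific placeholders, not required names; PDSESHARING is the parameter relevant here:<br />

```
SMS ACDS(SYS1.SMS.ACDS)
    COMMDS(SYS1.SMS.COMMDS)
PDSESHARING(EXTENDED)
```

After changing the member, SET SMS ID=xx must still be issued on every sharing system, as described above.<br />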
Although SMS supports PDSs, consider converting these to the PDSE format. Refer to 4.26,<br />
“PDSE: Conversion” on page 150 for more information about PDSE conversion.<br />
5.39 Selecting data sets to allocate as PDSEs<br />
Figure 5-39 Selecting a data set to allocate as PDSE<br />
Selecting a data set to allocate as PDSE<br />
As a storage administrator, you can code appropriate ACS routines to select data sets to<br />
allocate as PDSEs and prevent inappropriate PDSs from being allocated or converted to<br />
PDSEs.<br />
By using the &DSNTYPE read-only variable in the ACS routine for data-class selection, you<br />
can control which PDSs are to be allocated as PDSEs. The following values are valid for<br />
DSNTYPE in the data class ACS routines:<br />
► &DSNTYPE = 'LIBRARY' for PDSEs.<br />
► &DSNTYPE = 'PDS' for PDSs.<br />
► &DSNTYPE is not specified. This indicates that the allocation request is provided by the<br />
user through JCL, the TSO/E ALLOCATE command, or dynamic allocation.<br />
If you specify a DSNTYPE value in the JCL, and a separate DSNTYPE value is also specified<br />
in the data class selected by ACS routines for the allocation, the value specified in the data<br />
class is ignored.<br />
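A minimal data class ACS routine sketch follows; it honors an explicit user request for a PDSE and defaults likely libraries to PDSE. The filter patterns and the data class name DCPDSE are assumptions for illustration, not names from this book:<br />

```
PROC DATACLAS
  /* Hypothetical filter for library-type data sets */
  FILTLIST LIBS INCLUDE(**.LOADLIB,**.SRCLIB)
  IF &DSNTYPE = 'LIBRARY' THEN
    /* User explicitly requested a PDSE */
    SET &DATACLAS = 'DCPDSE'
  ELSE
    IF &DSN = &LIBS AND &DSNTYPE NE 'PDS' THEN
      /* Default matching libraries to a PDSE data class */
      SET &DATACLAS = 'DCPDSE'
END
```

DCPDSE would be a data class defined with LIBRARY in the Data Set Name Type field. Remember that a DSNTYPE value specified in JCL overrides the value in the selected data class.<br />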
5.40 Allocating new PDSEs<br />
//ALLOC EXEC PGM=IDCAMS<br />
//SYSPRINT DD SYSOUT=*<br />
//SYSIN DD *<br />
ALLOCATE -<br />
DSNAME('FILE.PDSE') -<br />
NEW -<br />
DSNTYPE(LIBRARY)<br />
Figure 5-40 Allocating a PDSE data set<br />
Allocating new PDSEs<br />
You can allocate PDSEs only in an SMS-managed environment. The PDSE data set does not<br />
have to be system-managed. To create a PDSE, use:<br />
► The DSNTYPE keyword in JCL, or in the TSO or IDCAMS ALLOCATE command<br />
► A data class with LIBRARY in the Data Set Name Type field<br />
You use DSNTYPE(LIBRARY) to allocate a PDSE, or DSNTYPE(PDS) to allocate a PDS.<br />
Figure 5-40 shows IDCAMS ALLOCATE used with the DSNTYPE(LIBRARY) keyword to<br />
allocate a PDSE.<br />
A PDS and a PDSE can be concatenated in JCL DD statements, or by using dynamic<br />
allocation, such as the TSO ALLOCATE command.<br />
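As an alternative to IDCAMS, a JCL DD statement can request a PDSE directly with the DSNTYPE keyword. The data set name and space values below are illustrative placeholders only:<br />

```
//NEWLIB   DD DSN=MY.NEW.PDSE,DISP=(NEW,CATLG),
//            DSNTYPE=LIBRARY,
//            SPACE=(CYL,(10,5,20)),
//            RECFM=FB,LRECL=80
```

Coding DSNTYPE=PDS instead would request a conventional partitioned data set.<br />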
5.41 <strong>System</strong>-managed data types<br />
Figure 5-41 System-managed data types<br />
Data set types that can be system-managed<br />
Now that you have experience with SMS using the minimal SMS configuration, you can plan<br />
the implementation of system-managed data sets. First, you need to know which data sets<br />
can be system-managed and which cannot.<br />
These are common types <strong>of</strong> data that can be system-managed. For details on how these data<br />
types can be system-managed using SMS storage groups, see z/<strong>OS</strong> DFSMS Implementing<br />
<strong>System</strong>-Managed Storage, SC26-7407.<br />
Temporary data Data sets used only for the duration <strong>of</strong> a job, job step, or terminal<br />
session, and then deleted. These data sets can be cataloged or<br />
uncataloged, and can range in size from small to very large.<br />
Permanent data Data sets consisting of interactive data, TSO user data sets, and the<br />
ISPF/PDF libraries you use during a terminal session. Data sets<br />
classified in this category are typically small, and are frequently<br />
accessed and updated.<br />
Batch data Data that is classified as either online-initiated, production, or test.<br />
Online-initiated data is created by background jobs that an online<br />
facility (such as TSO) generates.<br />
Production batch refers to data created by specialized applications<br />
(such as payroll), that can be critical to the continued operation <strong>of</strong><br />
your business or enterprise.<br />
Test batch refers to data created for testing purposes.<br />
VSAM data Data organized with VSAM, including VSAM data sets that are part <strong>of</strong><br />
an existing database.<br />
Large data For most installations, large data sets occupy more than 10 percent <strong>of</strong><br />
a single DASD volume. Note, however, that what constitutes a large<br />
data set is installation-dependent.<br />
Multivolume data Data sets that span more than one volume.<br />
Database data Data types usually having varied requirements for performance,<br />
availability, space, and security. To accommodate special needs,<br />
database products have specialized utilities to manage backup,<br />
recovery, and space usage. Examples include DB2, IMS, and CICS<br />
data.<br />
<strong>System</strong> data Data used by MVS to keep the operating system running smoothly. In<br />
a typical installation, 30 to 50 percent <strong>of</strong> these data sets are high<br />
performance and are used for cataloging, error recording, and other<br />
system functions.<br />
Because these critical data sets contain information required to find<br />
and access other data, they are read and updated frequently, <strong>of</strong>ten by<br />
more than one system in an installation. Performance and availability<br />
requirements are unique for system data. The performance <strong>of</strong> the<br />
system depends heavily upon the speed at which system data sets<br />
can be accessed. If a system data set such as a master catalog is<br />
unavailable, the availability <strong>of</strong> data across the entire system and<br />
across other systems can be affected.<br />
Some system data sets can be system-managed if they are uniquely<br />
named. These data sets include user catalogs. Place other system<br />
data sets on non-system managed volumes. The system data sets<br />
which are allocated at MVS system initialization are not<br />
system-managed, because the SMS address space is not active at<br />
initialization time.<br />
Object data Also known as byte-stream data, this data is used in specialized<br />
applications such as image processing, scanned correspondence, and<br />
seismic measurements. Object data typically has no internal record or<br />
field structure and, after it is written, the data is not changed or<br />
updated. However, the data can be referenced many times during its<br />
lifetime.<br />
5.42 Data types that cannot be system-managed<br />
Figure 5-42 Data types that cannot be system-managed<br />
Data types that cannot be system-managed<br />
All permanent DASD data under the control <strong>of</strong> SMS must be cataloged in integrated catalog<br />
facility (ICF) catalogs using the standard search order. The catalogs contain the information<br />
required for locating and managing system-managed data sets.<br />
When data sets are cataloged, users do not need to know which volumes the data sets reside<br />
on when they reference them; they do not need to specify unit type or volume serial number.<br />
This is essential in an environment with storage groups, where users do not have private<br />
volumes.<br />
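To illustrate the point, compare the two DD statements below; the data set and volume serial names are made up. The cataloged reference needs no unit or volume information, whereas an uncataloged data set must supply both:<br />

```
//CATLGD  DD DSN=PAY.MASTER.DATA,DISP=OLD
//UNCAT   DD DSN=PAY.OLD.DATA,DISP=OLD,
//           UNIT=3390,VOL=SER=PRV001
```

In a storage group environment only the first form is workable, because users have no way of knowing which SMS-selected volume holds the data set.<br />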
Some data cannot be system-managed, as described here:<br />
► Uncataloged data<br />
Objects, stored in groups called collections, must have their collections cataloged in ICF<br />
catalogs because they, and the objects they contain, are system-managed data. The<br />
object access method (OAM) identifies an object by its collection name and the object's<br />
own name.<br />
An object is described only by an entry in a DB2 object directory. An object collection is<br />
described by a collection name catalog entry and a corresponding OAM collection<br />
identifier table entry. Therefore, an object is accessed by using the object's collection<br />
name and the catalog entry.<br />
When objects are written to tape, they are treated as tape data sets and OAM assigns two<br />
tape data set names to the objects. Objects in an object storage group being written to<br />
tape are stored as a tape data set named OAM.PRIMARY.DATA. Objects in an object<br />
backup storage group being written to tape are stored as a tape data set named<br />
OAM.BACKUP.DATA. Each tape containing objects has only one tape data set, and that<br />
data set has one <strong>of</strong> the two previous names. Because the same data set name can be<br />
used on multiple object-containing tape volumes, the object tape data sets are not<br />
cataloged.<br />
If you do not already have a policy for cataloging all permanent data, it is a good idea to<br />
establish one now. For example, you can enforce standards by deleting uncataloged data<br />
sets.<br />
► Uncataloged data sets<br />
Data sets that are not cataloged in any ICF catalog. To locate such a data set, you need to<br />
know the volume serial number of the volume on which the data set resides. Because SMS<br />
information is stored in the catalog, uncataloged data sets are not supported.<br />
► Data sets in a jobcat or stepcat<br />
Data set LOCATEs using JOBCATs or STEPCATs are not permitted for system-managed<br />
data sets. You must identify the owning catalogs before you migrate these data sets to<br />
system management. The ISPF/PDF SUPERC utility is valuable for scanning your JCL<br />
and identifying any dependencies on JOBCATs or STEPCATs.<br />
► Unmovable data sets<br />
Unmovable data sets cannot be system-managed. These data sets include:<br />
– Data sets identified by the following data set organizations (DSORGs):<br />
Partitioned unmovable (POU)<br />
Sequential unmovable (PSU)<br />
Direct access unmovable (DAU)<br />
Indexed-sequential unmovable (ISU)<br />
– Data sets with user-written access methods<br />
– Data sets containing processing control information about the device or volume on<br />
which they reside, including:<br />
Absolute track data that is allocated in absolute DASD tracks or on split cylinders<br />
Location-dependent direct data sets<br />
All unmovable data sets must be identified and converted for use in a system-managed<br />
environment. For information about identifying and converting unmovable data sets, see<br />
z/<strong>OS</strong> DFSMSdss Storage Administration Guide, SC35-0423.<br />
5.43 Interactive Storage Management Facility (ISMF)<br />
Panel Help<br />
-------------------------------------------------------------------------------<br />
ISMF PRIMARY OPTION MENU - z/<strong>OS</strong> DFSMS V1 R6<br />
Enter Selection or Command ===><br />
Select one <strong>of</strong> the following options and press Enter:<br />
0 ISMF Pr<strong>of</strong>ile - Specify ISMF User Pr<strong>of</strong>ile<br />
1 Data Set - Perform Functions Against Data Sets<br />
2 <strong>Volume</strong> - Perform Functions Against <strong>Volume</strong>s<br />
3 Management Class - Specify Data Set Backup and Migration Criteria<br />
4 Data Class - Specify Data Set Allocation Parameters<br />
5 Storage Class - Specify Data Set Performance and Availability<br />
6 Storage Group - Specify <strong>Volume</strong> Names and Free Space Thresholds<br />
7 Automatic Class Selection - Specify ACS Routines and Test Criteria<br />
8 Control Data Set - Specify <strong>System</strong> Names and Default Criteria<br />
9 Aggregate Group - Specify Data Set Recovery Parameters<br />
10 Library Management - Specify Library and Drive Configurations<br />
11 Enhanced ACS Management - Perform Enhanced Test/Configuration Management<br />
C Data Collection - Process Data Collection Function<br />
L List - Perform Functions Against Saved ISMF Lists<br />
P Copy Pool - Specify Pool Storage Groups for Copies<br />
R Removable Media Manager - Perform Functions Against Removable Media<br />
X Exit - Terminate ISMF<br />
Use HELP Command for Help; Use END Command or X to Exit.<br />
Figure 5-43 ISMF Primary Option Menu panel<br />
Interactive Storage Management Facility<br />
The Interactive Storage Management Facility (ISMF) helps you analyze and manage data and<br />
storage interactively. ISMF is an Interactive <strong>System</strong> Productivity Facility (ISPF) application.<br />
Figure 5-43 shows the first ISMF panel, the Primary Option Menu.<br />
ISMF provides interactive access to the space management, backup, and recovery services<br />
<strong>of</strong> the DFSMShsm and DFSMSdss functional components <strong>of</strong> DFSMS, to the tape<br />
management services <strong>of</strong> the DFSMSrmm functional component, as well as to other products.<br />
DFSMS introduces the ability to use ISMF to define attributes <strong>of</strong> tape storage groups and<br />
libraries.<br />
A storage administrator uses ISMF to define the installation's policy for managing storage by<br />
defining and managing SMS classes, groups, and ACS routines. ISMF then places the<br />
configuration in an SCDS. You can activate an SCDS through ISMF or an operator command.<br />
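For example, a storage administrator could activate a validated SCDS from the console with the SETSMS operator command; the data set name here is an installation-specific placeholder:<br />

```
SETSMS SCDS(SYS1.SMS.SCDS)
```

The same activation can be done from the ISMF Control Data Set application (option 8).<br />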
ISMF is menu-driven, with fast paths for many <strong>of</strong> its functions. ISMF uses the ISPF data-tag<br />
language (DTL) to give its functional panels on workstations the look <strong>of</strong> common user access<br />
(CUA) panels and a graphical user interface (GUI).<br />
5.44 ISMF: Product relationships<br />
Figure 5-44 ISMF product relationships<br />
ISMF product relationships<br />
ISMF works with the following products:<br />
► Interactive <strong>System</strong> Productivity Facility/Program Development Facility (ISPF/PDF), which<br />
provides the edit, browse, data, and library utility functions.<br />
► TSO/Extensions (TSO/E), TSO CLISTs and commands.<br />
► DFSMS, which consists <strong>of</strong> five functional components: DFSMSdfp, DFSMShsm,<br />
DFSMSdss, DFSMSrmm and DFSMStvs. ISMF is designed to use the space<br />
management and availability management (backup/recovery) functions provided by those<br />
products.<br />
► Data Facility SORT (DFSORT), which provides the record-level functions.<br />
► Resource Access Control Facility (RACF), which provides the access control function for<br />
data and services.<br />
► Device Support Facilities (ICKDSF) to provide the storage device support and analysis<br />
functions.<br />
► <strong>IBM</strong> NaviQuest for MVS 5655-ACS.<br />
NaviQuest is a testing and reporting tool that speeds and simplifies the tasks associated<br />
with DFSMS initial implementation and ongoing ACS routine and configuration<br />
maintenance. NaviQuest assists storage administrators by allowing more automation <strong>of</strong><br />
storage management tasks. More information about NaviQuest can be found in the<br />
NaviQuest User's Guide.<br />
NaviQuest provides:<br />
– A familiar ISPF panel interface to functions<br />
– Fast, easy, bulk test-case creation<br />
– ACS routine and DFSMS configuration-testing automation<br />
– Storage reporting assistance<br />
– Additional tools to aid with storage administration tasks<br />
– Batch creation <strong>of</strong> data set and volume listings<br />
– Printing <strong>of</strong> ISMF LISTs<br />
– Batch ACS routine translation<br />
– Batch ACS routine validation<br />
5.45 ISMF: What you can do with ISMF<br />
Figure 5-45 What you can do with ISMF<br />
What you can do with ISMF<br />
ISMF is a panel-driven interface. Use the panels in an ISMF application to:<br />
► Display lists with information about specific data sets, DASD volumes, mountable optical<br />
volumes, and mountable tape volumes<br />
► Generate lists <strong>of</strong> data, storage, and management classes to determine how data sets are<br />
being managed<br />
► Display and manage lists saved from various ISMF applications<br />
ISMF generates a data list based on your selection criteria. Once the list is built, you can use<br />
ISMF entry panels to perform space management or backup and recovery tasks against the<br />
entries in the list.<br />
As a user performing data management tasks against individual data sets or against lists <strong>of</strong><br />
data sets or volumes, you can use ISMF to:<br />
► Edit, browse, and sort data set records<br />
► Delete data sets and backup copies<br />
► Protect data sets by limiting their access<br />
► Recover unused space from data sets and consolidate free space on DASD volumes<br />
► Copy data sets or DASD volumes to the same device or another device<br />
► Migrate data sets to another migration level<br />
► Recall data sets that have been migrated so that they can be used<br />
► Back up data sets and copy entire volumes for availability purposes<br />
► Recover data sets and restore DASD volumes, mountable optical volumes, or mountable<br />
tape volumes<br />
You cannot allocate data sets from ISMF. Data sets are allocated from ISPF, from TSO, or<br />
with JCL statements. ISMF provides the DSUTIL command, which enables users to get to<br />
ISPF and toggle back to ISMF.<br />
5.46 ISMF: Accessing ISMF<br />
Panel Help<br />
----------------------------------------------------------------------------<br />
ISMF PRIMARY OPTION MENU - z/<strong>OS</strong> DFSMS V1 R8<br />
Enter Selection or Command ===><br />
Select one <strong>of</strong> the following options and press Enter:<br />
0 ISMF Pr<strong>of</strong>ile - Specify ISMF User Pr<strong>of</strong>ile<br />
1 Data Set - Perform Functions Against Data Sets<br />
2 <strong>Volume</strong> - Perform Functions Against <strong>Volume</strong>s<br />
3 Management Class - Specify Data Set Backup and Migration Criteria<br />
4 Data Class - Specify Data Set Allocation Parameters<br />
5 Storage Class - Specify Data Set Performance and Availability<br />
6 Storage Group - Specify <strong>Volume</strong> Names and Free Space Thresholds<br />
7 Automatic Class Selection - Specify ACS Routines and Test Criteria<br />
8 Control Data Set - Specify <strong>System</strong> Names and Default Criteria<br />
9 Aggregate Group - Specify Data Set Recovery Parameters<br />
10 Library Management - Specify Library and Drive Configurations<br />
11 Enhanced ACS Management - Perform Enhanced Test/Configuration Management<br />
C Data Collection - Process Data Collection Function<br />
L List - Perform Functions Against Saved ISMF Lists<br />
P Copy Pool - Specify Pool Storage Groups for Copies<br />
R Removable Media Manager - Perform Functions Against Removable Media<br />
X Exit - Terminate ISMF<br />
Use HELP Command for Help; Use END Command or X to Exit.<br />
Figure 5-46 ISMF Primary Option Menu panel for storage administrator mode<br />
Accessing ISMF<br />
How you access ISMF depends on your site.<br />
► You can add an option for ISMF to the ISPF Primary Option Menu. You then start an<br />
ISMF session by typing that option in the Option field of the ISPF/PDF Primary<br />
Option Menu.<br />
► To access ISMF directly from TSO, use the command:<br />
ISPSTART PGM(DGTFMD01) NEWAPPL(DGT)<br />
There are two Primary Option Menus, one for storage administrators, and another for end<br />
users. Figure 5-46 shows the menu available to storage administrators; it includes additional<br />
applications not available to end users.<br />
Option 0 controls the user mode or the type <strong>of</strong> Primary Option Menu to be displayed. Refer to<br />
5.47, “ISMF: Pr<strong>of</strong>ile option” on page 315 for information about how to change the user mode.<br />
The ISMF Primary Option Menu example assumes installation <strong>of</strong> DFSMS at the current<br />
release level. For information about adding the DFSORT option to your Primary Option Menu,<br />
see DFSORT Installation and Customization Release 14, SC33-4034.<br />
5.47 ISMF: Pr<strong>of</strong>ile option<br />
Panel Help<br />
------------------------------------------------------------------<br />
ISMF PROFILE OPTION MENU<br />
Enter Selection or Command ===><br />
Select one <strong>of</strong> the following options and Press Enter:<br />
0 User Mode Selection<br />
1 Logging and Abend Control<br />
2 ISMF Job Statement<br />
3 DFSMSdss Execute Statement<br />
4 ICKDSF Execute Statement<br />
5 Data Set Print Execute Statement<br />
6 IDCAMS Execute Statement<br />
X Exit<br />
Figure 5-47 ISMF PROFILE OPTION MENU panel<br />
Setting the ISMF pr<strong>of</strong>ile<br />
Figure 5-47 shows the ISMF Pr<strong>of</strong>ile Option Menu panel, option 0 from the ISMF Primary<br />
Menu. Use this menu to control the way ISMF runs during the session. You can:<br />
► Change the user mode from user to storage administrator, or from storage administrator to<br />
user<br />
► Control ISMF error logging and recovery from abends<br />
► Define statements for ISMF to use in processing your jobs, such as:<br />
– JOB statements<br />
– DFSMSdss<br />
– Device Support Facilities (ICKDSF)<br />
– Access Method Services (IDCAMS)<br />
– PRINT execute statements in your pr<strong>of</strong>ile<br />
You can select ISMF or ISPF JCL statements for processing batch jobs.<br />
5.48 ISMF: Obtaining information about a panel field<br />
HELP------------------- DATA SET LIST LINE OPERATORS -------------------HELP<br />
COMMAND ===><br />
Use ENTER to see the line operator descriptions in sequence or choose them<br />
by number:<br />
1 LINE OPERATOR 10 DUMP 20 HRECOVER<br />
Overview 11 EDIT 21 MESSAGE<br />
2 ALTER 12 ERASE 22 RELEASE<br />
3 BROWSE 13 HALTERDS 23 RESTORE<br />
4 CATLIST 14 HBACKDS 24 SECURITY<br />
5 CLIST 15 HBDELETE 25 SORTREC<br />
6 COMPRESS 16 HDELETE 26 TSO Commands and CLISTs<br />
7 CONDENSE 17 HIDE 27 VSAMREC<br />
8 COPY 18 HMIGRATE 28 VTOCLIST<br />
9 DELETE 19 HRECALL<br />
Figure 5-48 Obtaining information using Help command<br />
Using the Help program function key<br />
On any ISMF panel, you can use the ISMF Help program function key (PF key) to obtain<br />
information about the panel you are using and the panel fields. By positioning the cursor in a<br />
specific field, you can obtain detailed information related to that field.<br />
Figure 5-48 shows the panel you reach when you press the Help PF key with the cursor in the<br />
Line Operator field <strong>of</strong> the panel shown in Figure 5-49 on page 317 where the arrow points to<br />
the data set. The Data Set List Line Operators panel shows the commands available to enter<br />
in that field. If you want an explanation about a specific command, type the option<br />
corresponding to the desired command and a panel is displayed showing information about<br />
the command function.<br />
You can exploit the Help PF key, when defining classes, to obtain information about what you<br />
have to enter in the fields. Place the cursor in the field and press the Help PF key.<br />
To see and change the assigned functions to the PF keys, enter the KEYS command in the<br />
Command field.<br />
Panel List Dataset Utilities Scroll Help<br />
------------------------------------------------------------------------------<br />
DATA SET LIST<br />
Command ===> Scroll ===> HALF<br />
Entries 1-1 <strong>of</strong> 1<br />
Enter Line Operators below: Data Columns 3-5 <strong>of</strong> 39<br />
LINE ALLOC ALLOC % NOT<br />
OPERATOR DATA SET NAME SPACE USED USED<br />
---(1)---- ------------(2)------------ --(3)--- --(4)--- -(5)-<br />
----------> ROGERS.SYS1.LINKLIB -------- -------- ---<br />
---------- ------ ----------- BOTTOM OF DATA ----------- ------ ----<br />
Figure 5-49 Data Set List panel<br />
5.49 ISMF: Data set option<br />
Figure 5-50 Data Set List panel using action bars<br />
Data Set Selection Entry panel<br />
When you select option 1 (Data Set) from the ISMF Primary Option Menu, you get to the Data<br />
Set Selection Entry Panel. There you can specify filter criteria to get a list <strong>of</strong> data sets, as<br />
follows:<br />
2 Generate a new list from criteria below<br />
Data Set Name . . . 'MHLRES2.**'<br />
Figure 5-50 shows the data set list generated for the generic data set name MHLRES2.**.<br />
Data Set List panel<br />
You can use line operators to execute tasks with individual data sets. Use list commands to<br />
execute tasks with a group <strong>of</strong> data sets. These tasks include editing, browsing, recovering<br />
unused space, copying, migrating, deleting, backing up, and restoring data sets. TSO<br />
commands and CLISTs can also be used as line operators or list commands. You can save a<br />
copy <strong>of</strong> a data set list and reuse it later.<br />
If ISMF is unable to obtain certain information required to check whether a data set meets the<br />
specified selection criteria, that data set is also included in the list. Missing information is<br />
indicated by dashes in the corresponding column.<br />
The Data Fields field shows how many fields are in the list. You can navigate through these<br />
fields using the Right and Left PF keys. The figure also shows the use of the action bar.<br />
5.50 ISMF: <strong>Volume</strong> Option<br />
Figure 5-51 <strong>Volume</strong> Selection Entry panel<br />
<strong>Volume</strong> option<br />
Selecting option 2 (<strong>Volume</strong>) from the ISMF Primary Option Menu takes you to the <strong>Volume</strong> List<br />
Selection Menu panel, as follows:<br />
VOLUME LIST SELECTION MENU<br />
Enter Selection or Command ===> ___________________________________________<br />
1 DASD - Generate a List <strong>of</strong> DASD <strong>Volume</strong>s<br />
2 Mountable Optical - Generate a List <strong>of</strong> Mountable Optical <strong>Volume</strong><br />
3 Mountable Tape - Generate a List <strong>of</strong> Mountable Tape <strong>Volume</strong>s<br />
Selecting option 1 (DASD) displays the <strong>Volume</strong> Selection Entry Panel, shown in part (1) <strong>of</strong><br />
Figure 5-51. Using filters, you can select a <strong>Volume</strong> List Panel, shown in part (2) <strong>of</strong> the figure.<br />
<strong>Volume</strong> List panel<br />
The volume application constructs a list <strong>of</strong> the type you choose in the <strong>Volume</strong> List Selection<br />
Menu. Use line operators to do tasks with an individual volume. These tasks include<br />
consolidating or recovering unused space, copying, backing up, and restoring volumes. TSO<br />
commands and CLISTs can also be line operators or list commands. You can save a copy <strong>of</strong><br />
a volume list and reuse it later. With the list <strong>of</strong> mountable optical volumes or mountable tape<br />
volumes, you can only browse the list.<br />
5.51 ISMF: Management Class option<br />
Figure 5-52 Management Class Selection Menu panel<br />
Management Class Application Selection panel<br />
The first panel (1) in Figure 5-52 shows the panel displayed when you select option 3<br />
(Management Class) from the ISMF Primary Option Menu. Use this option to display, modify,<br />
and define options for the SMS management classes. It also constructs a list <strong>of</strong> the available<br />
management classes.<br />
Management Class List panel<br />
The second panel (2) in Figure 5-52 shows the management class list generated by the filters<br />
chosen in the previous panel, using option 1 (List). Note how many data columns are<br />
available. You can navigate through them using the right and left PF keys.<br />
To view the commands you can use in the LINE OPERATOR field (marked with a circle in the<br />
figure), place the cursor in the field and press the Help PF key.<br />
5.52 ISMF: Data Class option<br />
Figure 5-53 Data Class Application Selection panel<br />
Displaying information about data classes<br />
The first panel (1) in Figure 5-53 is the panel displayed when you choose option 4 (Data<br />
Class) from the ISMF Primary Option Menu. Use this option to define the way data sets are<br />
allocated in your installation.<br />
Data class attributes are assigned to a data set when the data set is created. They apply to<br />
both SMS-managed and non-SMS-managed data sets. Attributes specified in JCL or<br />
equivalent allocation statements override those specified in a data class. Individual attributes<br />
in a data class can be overridden by JCL, TSO, IDCAMS, and dynamic allocation statements.<br />
Data Class List panel<br />
The second panel (2) in Figure 5-53 is the Data Class List generated by the filters specified in<br />
the previous panel.<br />
Entering the DISPLAY line command in the LINE OPERATOR field, in front <strong>of</strong> a data class name,<br />
displays the information about that data class, without requiring you to navigate using the<br />
right and left PF keys.<br />
5.53 ISMF: Storage Class option<br />
Figure 5-54 Storage Class Application Selection panel<br />
Storage Class Application Selection panel<br />
The first panel (1) in Figure 5-54 shows the Storage Class Application Selection panel that is<br />
displayed when you select option 5 (Storage Class) <strong>of</strong> the ISMF Primary Option Menu.<br />
The Storage Class Application Selection panel lets the storage administrator specify<br />
performance objectives and availability attributes that characterize a collection <strong>of</strong> data sets.<br />
For objects, the storage administrator can define the performance attribute Initial Access<br />
Response Seconds. A data set or object must be assigned to a storage class in order to be<br />
managed by DFSMS.<br />
Storage Class List panel<br />
The second panel (2) in Figure 5-54 shows the storage class list generated by the filters<br />
specified in the previous panel.<br />
You can specify the DISPLAY line operator next to any class name on a class list to generate a<br />
panel that displays values associated with that particular class. This information can help you<br />
decide whether you need to assign a new DFSMS class to your data set or object.<br />
If you determine that a data set you own should be associated with a different management<br />
class or storage class, and you have the authorization, you can use the ALTER line operator<br />
against a data set list entry to specify another storage class or management class.<br />
5.54 ISMF: List option<br />
Panel List Dataset Utilities Scroll Help<br />
------------------------------------------------------------------------------<br />
DATA SET LIST<br />
Command ===> Scroll ===> HALF<br />
Entries 1-1 <strong>of</strong> 1<br />
Enter Line Operators below: Data Columns 3-5 <strong>of</strong> 39<br />
LINE ALLOC ALLOC % NOT<br />
OPERATOR DATA SET NAME SPACE USED USED<br />
---(1)---- ------------(2)------------ --(3)--- --(4)--- -(5)-<br />
ROGERS.SYS1.LINKLIB -------- -------- ---<br />
---------- ------ ----------- BOTTOM OF DATA ----------- ------ ----<br />
Figure 5-55 Saved ISMF Lists panel<br />
ISMF lists<br />
After obtaining a list (data set, data class, or storage class), you can save the list by typing<br />
SAVE listname in the Command field. To see the saved lists, use option L (List) in the<br />
ISMF Primary Option Menu.<br />
The List Application panel displays a list <strong>of</strong> all lists saved from ISMF applications. Each entry<br />
in the list represents a list that was saved. If there are no saved lists to be found, the ISMF<br />
Primary Option Menu panel is redisplayed with the message that the list is empty.<br />
You can reuse and delete saved lists. From the List Application, you can reuse lists as though<br />
they were created from the corresponding application. You can then use line operators and<br />
commands to tailor and manage the information in the saved lists.<br />
To learn more about the ISMF panel, see z/<strong>OS</strong> DFSMS: Using the Interactive Storage<br />
Management Facility, SC26-7411.<br />
Chapter 6. Catalogs<br />
A catalog is a data set that contains information about other data sets. It provides users with<br />
the ability to locate a data set by name, without knowing where the data set resides. By<br />
cataloging data sets, your users will need to know less about your storage setup. Thus, data<br />
can be moved from one device to another, without requiring a change in JCL DD statements<br />
that refer to an existing data set.<br />
Cataloging data sets also simplifies backup and recovery procedures. Catalogs are the<br />
central information point for data sets; all VSAM data sets must be cataloged. In addition, all<br />
SMS-managed data sets must be cataloged.<br />
DFSMS allows you to use catalogs for any type <strong>of</strong> data set or object. Many advanced<br />
functions require the use <strong>of</strong> catalogs, for example, the storage management subsystem.<br />
Multiple user catalogs contain information about user data sets, and a single master catalog<br />
contains entries for system data sets and user catalogs.<br />
In z/<strong>OS</strong>, the component that controls catalogs is embedded in DFSMSdfp and is called<br />
Catalog Management. Catalog Management has one address space for itself named Catalog<br />
Address Space (CAS). This address space is used for buffering and to store control blocks,<br />
together with code.<br />
The modern catalog structure in z/<strong>OS</strong> is called the integrated catalog facility (ICF). All data<br />
sets managed by the storage management subsystem (SMS) must be cataloged in an ICF<br />
catalog.<br />
Most installations depend on the availability <strong>of</strong> catalog facilities to run production job streams<br />
and to support online users. For maximum reliability and efficiency, catalog all permanent<br />
data sets and create catalog recovery procedures to guarantee continuous availability in<br />
z/<strong>OS</strong>.<br />
6.1 Catalogs<br />
Figure 6-1 The ICF catalog structure<br />
Catalogs<br />
Catalogs, as mentioned, are data sets containing information about other data sets, and they<br />
provide users with the ability to locate a data set by name, without knowing the volume where<br />
the data set resides. This means that data sets can be moved from one device to another,<br />
without requiring a change in JCL DD statements that refer to an existing data set.<br />
Cataloging data sets also simplifies backup and recovery procedures. Catalogs are the<br />
central information point for VSAM data sets; all VSAM data sets must be cataloged. In<br />
addition, all SMS-managed data sets must be cataloged. Activity towards the catalog is much<br />
more intense in a batch/TSO workload than in a CICS/DB2 workload, where the majority <strong>of</strong><br />
data sets are allocated at CICS/DB2 initialization time.<br />
The integrated catalog facility (ICF) structure<br />
An integrated catalog facility (ICF) catalog is a structure that replaced the former MVS CVOL<br />
catalog. As a catalog, it describes data set attributes and indicates the volumes on which a<br />
data set is located. ICF catalogs are allocated by the catalog address space (CAS), a system<br />
address space for the DFSMSdfp catalog function.<br />
A catalog consists <strong>of</strong> two separate kinds <strong>of</strong> data sets:<br />
► A basic catalog structure (BCS) - the BCS can be considered the catalog.<br />
► A VSAM volume data set (VVDS) - the VVDS can be considered an extension <strong>of</strong> the<br />
volume table <strong>of</strong> contents (VTOC).<br />
Basic catalog structure (BCS)<br />
The basic catalog structure (BCS) is a VSAM key-sequenced data set. It uses the data set<br />
name <strong>of</strong> entries to store and retrieve data set information. For VSAM data sets, the BCS<br />
contains volume, security, ownership, and association information. For non-VSAM data sets,<br />
the BCS contains volume, ownership, and association information. When we talk about a<br />
catalog, we usually mean the BCS.<br />
The VVDS can be considered an extension <strong>of</strong> the volume table <strong>of</strong> contents (VTOC). The<br />
VVDS is volume-specific, whereas the complexity <strong>of</strong> the BCS depends on your definitions.<br />
The relationship between the BCS and the VVDS is many-to-many. That is, a BCS can point<br />
to multiple VVDSs and a VVDS can point to multiple BCSs.<br />
VSAM volume data set (VVDS)<br />
The VSAM volume data set (VVDS) is a data set that describes the characteristics <strong>of</strong> VSAM<br />
and system-managed data sets residing on a given DASD volume; it is part <strong>of</strong> a catalog.<br />
The VVDS contains VSAM volume records (VVRs) that hold information about VSAM data<br />
sets residing on the volume. The VVDS also contains non-VSAM volume records (NVRs) for<br />
SMS-managed non-VSAM data sets on the volume. If an SMS-managed non-VSAM data set<br />
spans volumes, then only the first volume contains an NVR for that data set.<br />
The system automatically defines a VVDS with 10 tracks primary and 10 tracks secondary<br />
space, unless you explicitly define it.<br />
6.2 The basic catalog structure (BCS)<br />
Figure 6-2 Basic catalog structure<br />
Basic catalog structure (BCS)<br />
The basic catalog structure (BCS) is a VSAM key-sequenced data set. It uses the data set<br />
name <strong>of</strong> entries to store and retrieve data set information. For VSAM data sets, the BCS<br />
contains volume, security, ownership, and association information. For non-VSAM data sets,<br />
the BCS contains volume, ownership, and association information.<br />
In other words, the BCS portion <strong>of</strong> the ICF catalog contains the static information about the<br />
data set, the information that rarely changes.<br />
Every catalog consists <strong>of</strong> one BCS and one or more VVDSs. A BCS does not “own” a VVDS;<br />
that is, more than one BCS can have entries for a single VVDS. Every VVDS that is<br />
connected to a BCS has an entry in the BCS. For example, Figure 6-2 shows a possible<br />
relationship between a BCS and three VVDSs on three disk volumes.<br />
For non-VSAM data sets that are not SMS-managed, all catalog information is contained<br />
within the BCS. For other types <strong>of</strong> data sets, there is other information available in the VVDS.<br />
BCS structure<br />
The BCS contains the information about where a data set resides. That can be a DASD<br />
volume, tape, or other storage medium. Related information in the BCS is grouped into<br />
logical, variable-length, spanned records related by key. The BCS uses keys that are the data<br />
set names (plus one character for extensions).<br />
One control interval can contain multiple BCS records. To reduce the number <strong>of</strong> I/Os<br />
necessary for catalog processing, logically-related data is consolidated in the BCS.<br />
A catalog can have data sets cataloged on any number <strong>of</strong> volumes. The BCS can have as<br />
many as 123 extents on one volume. One volume can have multiple catalogs on it. All the<br />
necessary control information is recorded in the VVDS residing on that volume.<br />
Master catalog<br />
A configuration <strong>of</strong> catalogs depends on a master catalog. A master catalog has the same<br />
structure as any other catalog. What makes it a master catalog is that all BCSs are cataloged<br />
in it, as well as certain data sets called system data sets (for instance, SYS1.LINKLIB and<br />
other “SYS1” data sets). Master catalogs are discussed in “The master catalog” on page 332.<br />
Catalogs <strong>of</strong>fer advantages including improved performance, capability, usability, and<br />
maintainability. The catalog information that requires the most frequent updates is physically<br />
located in the VVDS on the same volume as the data sets, thereby allowing faster access. A<br />
catalog request is expedited because fewer I/O operations are needed. Related entries, such<br />
as a cluster and its alternate index, are processed together.<br />
6.3 The VSAM volume data set (VVDS)<br />
Three types <strong>of</strong> entries in a VVDS:<br />
► One VSAM volume control record (VVCR), which contains control information about the<br />
BCSs that have data sets on this volume<br />
► Multiple VSAM volume records (VVRs), which contain information about the VSAM data<br />
sets on that volume<br />
► Multiple non-VSAM volume records (NVRs), which contain information about the<br />
SMS-managed non-VSAM data sets on that volume<br />
The VVDS is a VSAM entry-sequenced data set (ESDS) with the data set name<br />
SYS1.VVDS.Vvolser, and it can be defined explicitly or implicitly.<br />
Figure 6-3 The VSAM volume data set<br />
The VSAM volume data set (VVDS)<br />
The VSAM volume data set (VVDS) contains additional catalog information (not contained in<br />
the BCS) about the VSAM and SMS-managed non-VSAM data sets residing on the volume<br />
where the VVDS is located. Every volume containing any VSAM or any SMS-managed data<br />
sets must have a VVDS on it. The VVDS acts as a kind <strong>of</strong> VTOC extension for certain types <strong>of</strong><br />
data sets. A VVDS can have data set information about data sets cataloged in distinct BCSs.<br />
Entry types in the VVDS<br />
There are three types <strong>of</strong> entries in a VVDS. They are:<br />
► VSAM volume control records (VVCR)<br />
– First logical record in a VVDS<br />
– Contain information for management <strong>of</strong> DASD space and the names <strong>of</strong> the BCSs that<br />
have data sets on the volume<br />
► VSAM volume records (VVR)<br />
– Contain information about a VSAM data set residing on the volume<br />
– Number <strong>of</strong> VVRs varies according to the type <strong>of</strong> data set and the options specified for<br />
the data set<br />
– Also included are data set characteristics, SMS data, extent information<br />
– There is one VVR describing the VVDS itself<br />
► Non-VSAM volume record (NVR)<br />
– Equivalent to a VVR for SMS-managed non-VSAM data sets<br />
– Contains SMS-related information<br />
VVDS characteristics<br />
The VVDS is a VSAM entry-sequenced data set (ESDS) that has a 4 KB control interval size.<br />
The hexadecimal RBA <strong>of</strong> a record is used as its key or identifier.<br />
A VVDS is recognized by the restricted data set name:<br />
SYS1.VVDS.Vvolser<br />
Volser is the volume serial number <strong>of</strong> the volume on which the VVDS resides.<br />
You can explicitly define the VVDS using IDCAMS, or it is implicitly created after you define<br />
the first VSAM or SMS-managed data set on the volume.<br />
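As a sketch, an explicit VVDS definition is an IDCAMS DEFINE CLUSTER for the reserved data set name; in this example the volume serial PROD01 and the job card are placeholders:<br />

```jcl
//DEFVVDS  JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER -
    ( NAME(SYS1.VVDS.VPROD01) -
      NONINDEXED -
      TRACKS(10 10) -
      VOLUMES(PROD01) )
/*
```

Explicitly defining the VVDS lets you choose an allocation larger than the implicit default before any VSAM or SMS-managed data sets arrive on the volume.<br />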
VVDSSPACE keyword<br />
Prior to z/<strong>OS</strong> V1R7, the default space allocation was TRACKS(10,10), which could be too small<br />
for sites that use custom 3390 volumes (those larger than a 3390-9). With z/<strong>OS</strong> V1R7,<br />
there is a new VVDSSPACE keyword <strong>of</strong> the F CATALOG command, as follows:<br />
F CATALOG,VVDSSPACE(primary,secondary)<br />
An explicitly defined VVDS is not related to any BCS until a data set or catalog object is<br />
defined on the volume. As data sets are allocated on the VVDS volume, each BCS with<br />
VSAM data sets or SMS-managed data sets residing on that volume is related to the VVDS.<br />
VVDSSPACE indicates that the catalog address space is to use the values specified as the<br />
primary and secondary allocation amounts, in tracks, for an implicitly defined VVDS. The default<br />
value is ten tracks for both the primary and secondary values. The specified values are<br />
preserved across a catalog address space restart, but are not preserved across an IPL.<br />
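For example, to raise the implicit VVDS allocation to 30 tracks primary and secondary (the values here are purely illustrative), an operator can enter:<br />

```
F CATALOG,VVDSSPACE(30,30)
```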
6.4 Catalogs by function<br />
Figure 6-4 The master catalog<br />
Catalogs by function<br />
By function, the catalogs (BCSs) can be classified as master catalog and user catalog. A<br />
particular case <strong>of</strong> a user catalog is the volume catalog, which is a user catalog containing only<br />
tape library and tape volume entries.<br />
There is no structural difference between a master catalog and a user catalog. What sets a<br />
master catalog apart is how it is used, and which data sets are cataloged in it. For example,<br />
the same catalog can be the master catalog in one z/<strong>OS</strong> system and a user catalog in another.<br />
The master catalog<br />
Each system has one active master catalog. One master catalog can be shared between<br />
various MVS images. It does not have to reside on the system residence volume (the one that<br />
is IPLed).<br />
The master catalog for a system must contain entries for all user catalogs and their aliases<br />
that the system uses. Also, all SYS1 data sets must be cataloged in the master catalog for<br />
proper system initialization.<br />
Important: To minimize update activity to the master catalog, and to reduce the exposure<br />
to breakage, only SYS1 data sets, user catalog connector records, and the aliases pointing<br />
to those connectors are to be in the master catalog.<br />
During a system initialization, the master catalog is read so that system data sets and<br />
catalogs can be located.<br />
Identifying the master catalog for IPL<br />
At IPL, you must indicate the location (volser and data set name) <strong>of</strong> the master catalog. This<br />
information can be specified in one <strong>of</strong> two places:<br />
► SYS1.NUCLEUS member SYSCATxx (default is SYSCATLG)<br />
► SYS1.PARMLIB/SYSn.IPLPARM member LOADxx. This method is recommended.<br />
For more information see z/<strong>OS</strong> MVS Initialization and Tuning Reference, SA22-7592.<br />
Determine the master catalog on a running system<br />
You can use the IDCAMS LISTCAT command for a data set with a high-level qualifier (HLQ) <strong>of</strong><br />
SYS1 to determine the master catalog on a system. Because all data sets with an HLQ <strong>of</strong><br />
SYS1 are to be in the master catalog, the catalog shown in the LISTCAT output is the master<br />
catalog.<br />
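As a sketch, such a LISTCAT job might look as follows; the job card is a placeholder, and the IN-CAT line of the output names the master catalog:<br />

```jcl
//LISTC    JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES(SYS1.PARMLIB)
/*
```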
For information about the IDCAMS LISTCAT command, see also 6.10, “Listing a catalog” on<br />
page 345.<br />
If you do not want to run an IDCAMS job, you can run LISTCAT as a line command in ISPF<br />
option 3.4. List the SYS1.PARMLIB and type listc ent(/), as shown in Figure 6-5.<br />
Menu Options View Utilities Compilers Help<br />
DSLIST - Data Sets Matching SYS1.PARMLIB Row 1 <strong>of</strong> 5<br />
Command ===> Scroll ===> PAGE<br />
Command - Enter "/" to select action Message <strong>Volume</strong><br />
------------------------------------------------------------------------------<br />
listc ent(/)1.PARMLIB LISTC RC=0 O37CAT<br />
SYS1.PARMLIB.BKUP SBOX01<br />
SYS1.PARMLIB.INSTALL Z17RB1<br />
SYS1.PARMLIB.OLD00 SBOX01<br />
SYS1.PARMLIB.POK Z17RB1<br />
***************************** End <strong>of</strong> Data Set list ****************************<br />
Figure 6-5 Example for a LISTCAT in ISPF option 3.4<br />
Note: The forward slash (/) specifies to use the data set name on the line where the<br />
command is entered.<br />
This command produces output similar to the following example:<br />
NONVSAM ------- SYS1.PARMLIB<br />
IN-CAT --- MCAT.SANDBOX.Z17.SBOX00<br />
User catalogs<br />
The difference between the master catalog and the user catalogs is in the function. User<br />
catalogs are to be used to contain information about your installation cataloged data sets<br />
other than SYS1 data sets. There are no set rules as to how many or what size to have; it<br />
depends entirely on your environment.<br />
Cataloging data sets for two unrelated applications in the same catalog creates a single point<br />
<strong>of</strong> failure for them that otherwise might not exist. Assessing the impact <strong>of</strong> an outage <strong>of</strong> a given<br />
catalog can help you determine whether it is too large or affects too many applications.<br />
6.5 Using aliases<br />
Figure 6-6 Using aliases<br />
Using aliases<br />
Aliases are used to tell catalog management which user catalog your data set is cataloged in.<br />
First, you place a pointer to a user catalog in the master catalog through the IDCAMS DEFINE<br />
UCAT command. Next, you define an appropriate alias name for a user catalog in the master<br />
catalog. Then, match the high-level qualifier (HLQ) <strong>of</strong> your data set with the alias. This<br />
identifies the appropriate user catalog to be used to satisfy the request.<br />
In Figure 6-6, all data sets with an HLQ <strong>of</strong> PAY have their information in the user catalog<br />
UCAT1 because in the master catalog there is an alias PAY pointing to UCAT1.<br />
The data sets with an HLQ <strong>of</strong> DEPT1 and DEPT2, respectively, have their information in the<br />
user catalog UCAT2 because in the master catalog there are aliases DEPT1 and DEPT2<br />
pointing to UCAT2.<br />
Note: Aliases can also be used with non-VSAM data sets in order to create alternate<br />
names to the same data set. Those aliases are not related to a user catalog.<br />
To define an alias, use the IDCAMS command DEFINE ALIAS. An example is shown in 6.7,<br />
“Defining a catalog and its aliases” on page 339.<br />
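For instance, using the names from Figure 6-6, the alias PAY could be related to user catalog UCAT1 as in this sketch; UCAT1 stands in for the real user catalog data set name:<br />

```
DEFINE ALIAS -
  ( NAME(PAY) -
    RELATE(UCAT1) )
```

After this definition, a request for any data set with the HLQ PAY is directed to UCAT1.<br />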
Multilevel aliases<br />
You can augment the standard catalog search order by defining multilevel catalog aliases. A<br />
multilevel catalog alias is an alias <strong>of</strong> two or more high-level qualifiers. You can define aliases<br />
<strong>of</strong> up to four high-level qualifiers.<br />
However, the multilevel alias facility is only to be used when a better solution cannot be found.<br />
The need for the multilevel alias facility can indicate data set naming conventions problems.<br />
For more information about the multilevel alias facility, see z/<strong>OS</strong> DFSMS: Managing Catalogs,<br />
SC26-7409.<br />
6.6 Catalog search order<br />
Figure 6-7 Catalog search order for a LOCATE request<br />
Catalog search order<br />
LOCATE is an SVC that calls catalog management asking for a data set name search. Most<br />
catalog searches are based on catalog aliases. Alternatives to catalog aliases are available<br />
for directing a catalog request, specifically the JOBCAT and STEPCAT DD statements; the<br />
CATALOG parameter <strong>of</strong> access method services; and the name <strong>of</strong> the catalog. JOBCAT and<br />
STEPCAT are no longer allowed beginning with z/<strong>OS</strong> V1R7.<br />
Search order for catalogs for a data set define request<br />
For the system to determine where a data set is to be cataloged, the following search order is<br />
used to find the catalog:<br />
1. Use the catalog named in the IDCAMS CATALOG parameter, if coded.<br />
2. If the data set is a generation data set, the catalog containing the GDG base definition is<br />
used for the new GDS entry.<br />
3. If the high-level qualifier is a catalog alias, use the catalog identified by the alias or the<br />
catalog whose name is the same as the high-level qualifier <strong>of</strong> the data set.<br />
4. If no catalog has been identified yet, use the master catalog.<br />
Defining a cataloged data set<br />
When you specify a catalog in the IDCAMS CATALOG parameter, and you have appropriate<br />
RACF authority to the FACILITY class pr<strong>of</strong>ile STGADMIN.IGG.DIRCAT, then the catalog you<br />
specify is used. For instance:<br />
DEFINE CLUSTER (NAME(PROD.PAYROLL) CATALOG(SYS1.MASTER.ICFCAT))<br />
This command defines the data set PROD.PAYROLL in catalog SYS1.MASTER.ICFCAT. You<br />
can use RACF to prevent the use <strong>of</strong> the CATALOG parameter and restrict the ability to define<br />
data sets in the master catalog.<br />
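As an illustration only, a security administrator might set up the FACILITY class profile with RACF commands such as the following; the group name STGADMIN is a placeholder for whatever group holds storage administration authority:<br />

```
RDEFINE FACILITY STGADMIN.IGG.DIRCAT UACC(NONE)
PERMIT STGADMIN.IGG.DIRCAT CLASS(FACILITY) ID(STGADMIN) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH
```

With UACC(NONE), users outside the permitted group cannot use the CATALOG parameter to direct a define to an arbitrary catalog.<br />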
Search order for locating a data set<br />
Base catalog searches on the catalog aliases. When appropriate aliases are defined for<br />
catalogs, the high-level qualifier <strong>of</strong> a data set name is identical to a catalog alias and identifies<br />
the appropriate catalog to be used to satisfy the request.<br />
However, alternatives to catalog aliases are available for directing a catalog request,<br />
specifically the CATALOG parameter <strong>of</strong> access method services and the name <strong>of</strong> the catalog.<br />
The following search order is used to locate the catalog for an already cataloged data set:<br />
1. Use the catalog named in the IDCAMS CATALOG parameter, if coded. If the data set is not<br />
found there, fail the job.<br />
2. If the data set is a generation data set, the catalog containing the GDG base definition is<br />
searched for the GDS entry.<br />
3. If not found, and the high-level qualifier is an alias for a catalog, or is itself the name <strong>of</strong> a<br />
catalog, search that catalog. If the data set is not found there, fail the job.<br />
4. Otherwise, search the master catalog.<br />
Note: For SMS-managed data sets, JOBCAT and STEPCAT DD statements are not<br />
allowed and cause a job failure. Also, they are not suggested even for non-SMS data sets,<br />
because they can cause conflicted information. Therefore, do not use them and keep in<br />
mind that they have been phased out starting with z/<strong>OS</strong> V1R7.<br />
To use an alias to identify the catalog to be searched, the data set must have more than one<br />
data set qualifier.<br />
For information about the catalog standard search order also refer to z/<strong>OS</strong> DFSMS:<br />
Managing Catalogs, SC26-7409.<br />
6.7 Defining a catalog and its aliases<br />
Figure 6-8 JCL to create a basic catalog structure<br />
Defining a catalog<br />
You can use the IDCAMS to define and maintain catalogs. See also 4.14, “Access method<br />
services (IDCAMS)” on page 129. Defining a master catalog or user catalog is basically the<br />
same.<br />
//DEFCAT JOB ...<br />
//DEFCAT EXEC PGM=IDCAMS<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
DEFINE USERCATALOG -<br />
( NAME(OTTO.CATALOG.TEST) -<br />
MEGABYTES(15 15) -<br />
VOLUME(VSF6S4) -<br />
ICFCATALOG -<br />
FREESPACE(10 10) -<br />
STRNO(3) ) -<br />
DATA( CONTROLINTERVALSIZE(4096) -<br />
BUFND(4) ) -<br />
INDEX( BUFNI(4) )<br />
/*<br />
Figure 6-9 Sample JCL to define a BCS<br />
The example in Figure 6-9 on page 339 shows the JCL you can use to define a user catalog.<br />
The catalog defined, OTTO.CATALOG.TEST, is placed on volume VSF6S4 and is allocated<br />
with 15 MB of primary and 15 MB of secondary space.<br />
Use the access method services command DEFINE USERCATALOG ICFCATALOG to define the<br />
basic catalog structure (BCS) <strong>of</strong> an ICF catalog. Using this command you do not specify<br />
whether you want to create a user or a master catalog. How to identify the master catalog to<br />
the system is described in 6.4, “Catalogs by function” on page 332.<br />
A connector entry to this user catalog is created in the master catalog, as the listing in<br />
Figure 6-10 shows.<br />
LISTING FROM CATALOG -- CATALOG.MVSICFM.VVSF6C1<br />
USERCATALOG --- OTTO.CATALOG.TEST<br />
HISTORY<br />
RELEASE----------------2<br />
VOLUMES<br />
VOLSER------------VSF6S4 DEVTYPE------X'3010200F'<br />
VOLFLAG------------PRIME<br />
ASSOCIATIONS--------(NULL)<br />
Figure 6-10 Listing <strong>of</strong> the user catalog connector entry<br />
The attributes <strong>of</strong> the user catalog are not defined in the master catalog. They are described in<br />
the user catalog itself and its VVDS entry. This is called the self-describing record. The<br />
self-describing record is given a key <strong>of</strong> binary zeros to ensure it is the first record in the<br />
catalog. There are no associations (aliases) yet for this user catalog. To create associations,<br />
you need to define aliases.<br />
To define a volume catalog (for tapes), use the parameter VOLCATALOG instead <strong>of</strong> ICFCATALOG.<br />
See z/<strong>OS</strong> DFSMS Access Method Services for Catalogs, SC26-7394, for more detail.<br />
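As an illustrative sketch only (the catalog name follows the common SYS1.VOLCAT.VGENERAL convention for a general volume catalog, and the volume is an example), a tape volume catalog definition differs from Figure 6-9 mainly in the VOLCATALOG parameter:<br />

```
DEFINE USERCATALOG -
   ( NAME(SYS1.VOLCAT.VGENERAL) -
     MEGABYTES(1 1) -
     VOLUME(VSF6S4) -
     VOLCATALOG )
```

A volume catalog holds only tape library and tape volume entries, so it is typically much smaller than a catalog for DASD data sets.<br />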
Defining a BCS with a model<br />
When you define a BCS or VVDS, you can use an existing BCS or VVDS as a model for the<br />
new one. The attributes <strong>of</strong> the existing data set are copied to the newly defined data set<br />
unless you explicitly specify another value for an attribute. You can override any <strong>of</strong> a model's<br />
attributes.<br />
If you do not want to change or add any attributes, you need only supply the entry name <strong>of</strong> the<br />
object being defined and the MODEL parameter. When you define a BCS, you must also<br />
specify the volume and space information for the BCS.<br />
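For example, the following sketch (the new catalog name and volume are illustrative) copies the attributes of the catalog defined in Figure 6-9 into a new BCS, supplying only the required volume and space values:<br />

```
DEFINE USERCATALOG -
   ( NAME(OTTO.CATALOG.TEST2) -
     MODEL(OTTO.CATALOG.TEST) -
     MEGABYTES(15 15) -
     VOLUME(VSF6S5) )
```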
For further information about using a model, see z/<strong>OS</strong> DFSMS: Managing Catalogs,<br />
SC26-7409.<br />
Defining aliases<br />
To use a catalog, the system must be able to determine which data sets are to be defined in<br />
that catalog. The simplest way to accomplish this is to define aliases in the master catalog for<br />
the user catalog. Before defining an alias, carefully consider the effect the new alias has on<br />
old data sets. A poorly chosen alias can make other data sets inaccessible.<br />
You can define aliases for the user catalog in the same job in which you define the catalog by<br />
including DEFINE ALIAS commands after the DEFINE USERCATALOG command. You can use<br />
conditional operators to ensure the aliases are only defined if the catalog is successfully<br />
defined. After the catalog is defined, you can add new aliases or delete old aliases.<br />
Catalog aliases are defined only in the master catalog, which contains an entry for the user<br />
catalog. The number <strong>of</strong> aliases a catalog can have is limited by the maximum record size for<br />
the master catalog.<br />
You cannot define an alias if a data set cataloged in the master catalog has the same<br />
high-level qualifier as the alias. The DEFINE ALIAS command fails with a Duplicate data set<br />
name error. For example, if a catalog is named TESTE.TESTSYS.ICFCAT, you cannot define<br />
the alias TESTE for any catalog.<br />
Use the sample SYSIN for an IDCAMS job in Figure 6-11 to define aliases TEST1 and<br />
TEST2.<br />
DEFINE ALIAS -<br />
(NAME(TEST1) -<br />
RELATE(OTTO.CATALOG.TEST))<br />
DEFINE ALIAS -<br />
(NAME(TEST2) -<br />
RELATE(OTTO.CATALOG.TEST))<br />
Figure 6-11 DEFINE ALIAS example<br />
These definitions result in the following entries in the master catalog (Figure 6-12).<br />
ALIAS --------- TEST1<br />
IN-CAT --- CATALOG.MVSICFM.VVSF6C1<br />
HISTORY<br />
RELEASE----------------2<br />
ASSOCIATIONS<br />
USERCAT--OTTO.CATALOG.TEST<br />
...<br />
ALIAS --------- TEST2<br />
IN-CAT --- CATALOG.MVSICFM.VVSF6C1<br />
HISTORY<br />
RELEASE----------------2<br />
ASSOCIATIONS<br />
USERCAT--OTTO.CATALOG.TEST<br />
Figure 6-12 Listing an ALIAS in the master catalog<br />
Both aliases have an association to the newly defined user catalog. If you now create a new<br />
data set with an HLQ <strong>of</strong> TEST1 or TEST2, its entry will be directed to the new user catalog.<br />
Also, the listing <strong>of</strong> the user catalog connector now shows both aliases; see Figure 6-13.<br />
USERCATALOG --- OTTO.CATALOG.TEST<br />
HISTORY<br />
RELEASE----------------2<br />
VOLUMES<br />
VOLSER------------VSF6S4 DEVTYPE------X'3010200F'<br />
VOLFLAG------------PRIME<br />
ASSOCIATIONS<br />
ALIAS----TEST1<br />
ALIAS----TEST2<br />
Figure 6-13 Listing <strong>of</strong> the user catalog connector entry<br />
6.8 Using multiple catalogs<br />
Figure 6-14 Using multiple catalogs (diagram: CATALOG.MASTER defines aliases PROD, TEST, USER1, and USER2; CATALOG.PROD holds the PROD.** data sets, CATALOG.TEST holds the TEST.** data sets, and CATALOG.USERS holds the USER1.** and USER2.** data sets)<br />
Using multiple catalogs<br />
Multiple catalogs on multiple volumes can perform better than fewer catalogs on fewer<br />
volumes, because there is less interference between requests to the same catalog. For<br />
example, a single shared catalog can be locked out by another system in the sysplex; this<br />
situation can occur when another application issues a RESERVE against the catalog volume<br />
that has nothing to do with catalog processing. Another reason is that the load is spread<br />
across more volumes, and thus more I/O can be in progress concurrently.<br />
Tip: Convert all intra-sysplex RESERVEs into global ENQs through the conversion RNL.<br />
Independent <strong>of</strong> the number <strong>of</strong> catalogs, use the virtual lookaside facility (VLF) for buffering the<br />
user catalog CIs. The master catalog CIs are naturally buffered in the catalog address space<br />
(CAS). Multiple catalogs can reduce the impact <strong>of</strong> the loss <strong>of</strong> a catalog by:<br />
► Reducing the time necessary to recreate any given catalog<br />
► Allowing multiple catalog recovery jobs to be in process at the same time<br />
Recovery from a pack failure is dependent on the total amount <strong>of</strong> catalog information about a<br />
volume, regardless <strong>of</strong> whether this information is stored in one catalog or in many catalogs.<br />
When using multiple user catalogs, consider grouping data sets under different high-level<br />
qualifiers. You can then spread them over multiple catalogs by defining aliases for the various<br />
catalogs.<br />
6.9 Sharing catalogs across systems<br />
Figure 6-15 Sharing catalogs (diagram: MVS 1 and MVS 2 each maintain a cache of catalog records; the shared catalog resides on a shared device, is defined with SHAREOPTIONS(3 4), and current buffer information is guaranteed by using the VVR for serialization)<br />
Sharing catalogs across systems<br />
A shared catalog is a basic catalog structure (BCS) that is eligible to be used by more than<br />
one system. It must be defined with SHAREOPTIONS(3 4), and reside on a shared volume. A<br />
DASD volume is initialized as shared using the MVS hardware configuration definition (HCD)<br />
facility.<br />
Note: The device must be defined as shared to all systems that access it.<br />
If several systems have the device defined as shared and other systems do not, then catalog<br />
corruption will occur. Check with your system programmer to determine shared volumes.<br />
Note that it is not necessary to have the catalog actually be shared between systems; the<br />
catalog address space assumes it is shared if it meets the criteria stated. All VVDSs are<br />
defined as shared. Tape volume catalogs can be shared in the same way as other catalogs.<br />
By default, catalogs are defined with SHAREOPTIONS(3 4). You can specify that a catalog is<br />
not to be shared by defining the catalog with SHAREOPTIONS(3 3). Only define a catalog as<br />
unshared if you are certain it will not be shared. Place unshared catalogs on volumes that<br />
have been initialized as unshared. Catalogs that are defined as unshared and that reside on<br />
shared volumes will become damaged if referred to by another system.<br />
If you need to share data sets across systems, it is advisable that you share the catalogs that<br />
contain these data sets. A BCS catalog is considered shared when both <strong>of</strong> the following are<br />
true:<br />
► It is defined with SHAREOPTIONS (3 4).<br />
► It resides on a shared device, as defined at HCD.<br />
Attention: To avoid catalog corruption, define a catalog volume on a shared UCB and set<br />
catalog SHAREOPTIONS to (3 4) on all systems sharing a catalog.<br />
Using SHAREOPTIONS 3 means that VSAM does not issue the ENQ SYSVSAM SYSTEMS<br />
for the catalog; SHAREOPTIONS 4 means that the VSAM buffers need to be refreshed.<br />
You can check whether a catalog is shared by running the operator command:<br />
MODIFY CATALOG,ALLOCATED<br />
A flag in the catalog indicates whether the catalog is shared.<br />
If a catalog is not really shared with another system, move the catalog to an unshared device<br />
or alter its SHAREOPTIONS to (3 3). To prevent potential catalog damage, never place a<br />
catalog with SHAREOPTIONS (3 3) on a shared device.<br />
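As a sketch of the second option (the catalog name is illustrative, and this assumes the catalog is genuinely accessed by only one system), the share options can be changed with the IDCAMS ALTER command:<br />

```
ALTER OTTO.CATALOG.TEST -
   SHAREOPTIONS(3 3)
```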
A shared catalog contains one VVR that is used as a log by the catalog management of<br />
every system accessing the catalog. This log is used to guarantee the coherency of the<br />
catalog buffers in each z/OS system.<br />
Guaranteeing current information<br />
For performance reasons, CIs contained in a master catalog are buffered in the catalog<br />
address space (CAS). Optionally (and it is highly advisable), user catalogs are buffered in a<br />
VLF data space. Before each search of the buffers for a match, the log VVR is checked with a<br />
single I/O operation. If the catalog was updated by another system, the information in the<br />
buffer must be refreshed by reading the data from the volume.<br />
The checking also affects performance because, to maintain integrity, for every catalog<br />
access a special VVR in the shared catalog must be read before using the cached version <strong>of</strong><br />
the BCS record. This access implies a DASD reserve and I/O operations.<br />
To avoid having I/O operations to read the VVR, you can use enhanced catalog sharing<br />
(ECS). For information about ECS, see 6.24, “Enhanced catalog sharing” on page 375.<br />
Checking also ensures that the control blocks for the catalog in the CAS are updated. This<br />
occurs if the catalog has been extended or otherwise altered from another system. This<br />
checking maintains data integrity.<br />
6.10 Listing a catalog<br />
Use the IDCAMS LISTCAT command to extract<br />
information from the BCS and VVDS for:<br />
Aliases<br />
User catalog connectors in the master catalog<br />
Catalog self-describing records<br />
VSAM data sets<br />
Non-VSAM data sets<br />
Library entries and volume entries <strong>of</strong> a volume<br />
catalog<br />
Generation data sets<br />
Alternate index and path for a VSAM cluster<br />
Page spaces<br />
Figure 6-16 Listing a catalog<br />
Requesting information from a catalog<br />
You can list catalog records using the IDCAMS LISTCAT command, or the ISMF line operator<br />
command CATLIST. CATLIST produces the same output as LISTCAT. With z/<strong>OS</strong> V1R8, the<br />
LISTCAT processing has performance improvements.<br />
You can use the LISTCAT output to monitor VSAM data sets, including catalogs. The statistics<br />
and attributes listed can be used to help determine whether to reorganize, re-create, or<br />
otherwise alter a VSAM data set to improve performance or avoid problems.<br />
The LISTCAT command can be used in many variations to extract information about a<br />
particular entry in the catalog. It extracts the data from the BCS and VVDS.<br />
LISTCAT examples<br />
LISTCAT examples for monitoring catalogs include:<br />
► List all ALIAS entries in the master catalog:<br />
LISTCAT ALIAS CAT(master.catalog.name)<br />
This command provides a list <strong>of</strong> all aliases that are currently defined in your master<br />
catalog. If you need information only about one specific alias, use the keyword<br />
ENTRY(aliasname) and specify ALL to get detailed information. For sample output <strong>of</strong> this<br />
command, see Figure 6-12 on page 341.<br />
► List a user catalog connector in the master catalog:<br />
LISTCAT ENT(user.catalog.name) ALL<br />
You can use this command to display the volser and the alias associations <strong>of</strong> a user<br />
catalog as it is defined in the master catalog. For sample output <strong>of</strong> this command, see<br />
Figure 6-13 on page 341.<br />
► List the catalog’s self-describing record:<br />
LISTCAT ENT(user.catalog.name) CAT(user.catalog.name) ALL<br />
This gives detailed information about a user catalog, such as attributes, statistics, extent<br />
information, and more. Because the self-describing record is in the user catalog, you must<br />
specify the name of the user catalog in the CAT statement. If you do not use the CAT<br />
keyword, only the user catalog connector information from the master catalog is listed as<br />
in the previous example. Figure 6-17 shows sample output for this command.<br />
LISTING FROM CATALOG -- CATALOG.MVSICFU.VVSF6C1<br />
CLUSTER ------- 00000000000000000000000000000000000000000000<br />
HISTORY<br />
DATASET-OWNER-----(NULL) CREATION--------2004.260<br />
RELEASE----------------2 EXPIRATION------0000.000<br />
BWO STATUS--------(NULL) BWO TIMESTAMP-----(NULL)<br />
BWO---------------(NULL)<br />
PROTECTION-PSWD-----(NULL) RACF----------------(NO)<br />
ASSOCIATIONS<br />
DATA-----CATALOG.MVSICFU.VVSF6C1<br />
INDEX----CATALOG.MVSICFU.VVSF6C1.CATINDEX<br />
DATA ------- CATALOG.MVSICFU.VVSF6C1<br />
HISTORY<br />
DATASET-OWNER-----(NULL) CREATION--------2004.260<br />
RELEASE----------------2 EXPIRATION------0000.000<br />
ACCOUNT-INFO-----------------------------------(NULL)<br />
PROTECTION-PSWD-----(NULL) RACF----------------(NO)<br />
ASSOCIATIONS<br />
CLUSTER--00000000000000000000000000000000000000000000<br />
ATTRIBUTES<br />
KEYLEN----------------45 AVGLRECL------------4086 BUFSPACE-----------11776 CISIZE--------------4096<br />
RKP--------------------9 MAXLRECL-----------32400 EXCPEXIT----------(NULL) CI/CA----------------180<br />
BUFND------------------4 STRNO------------------3<br />
SHROPTNS(3,4) SPEED UNIQUE NOERASE INDEXED NOWRITECHK NOIMBED NOREPLICAT<br />
UNORDERED NOREUSE SPANNED NOECSHARE ICFCATALOG<br />
STATISTICS<br />
REC-TOTAL--------------0 SPLITS-CI-------------34 EXCPS------------------0<br />
REC-DELETED--------11585 SPLITS-CA--------------0 EXTENTS----------------1<br />
REC-INSERTED-----------0 FREESPACE-%CI---------10 SYSTEM-TIMESTAMP:<br />
REC-UPDATED------------0 FREESPACE-%CA---------10 X'0000000000000000'<br />
REC-RETRIEVED----------0 FREESPC----------3686400<br />
ALLOCATION<br />
SPACE-TYPE------CYLINDER HI-A-RBA---------3686400<br />
SPACE-PRI--------------5 HI-U-RBA----------737280<br />
SPACE-SEC--------------5<br />
VOLUME<br />
VOLSER------------VSF6C1 PHYREC-SIZE---------4096 HI-A-RBA---------3686400 EXTENT-NUMBER----------1<br />
DEVTYPE------X'3010200F' PHYRECS/TRK-----------12 HI-U-RBA----------737280 EXTENT-TYPE--------X'00'<br />
VOLFLAG------------PRIME TRACKS/CA-------------15<br />
EXTENTS:<br />
LOW-CCHH-----X'00110000' LOW-RBA----------------0 TRACKS----------------75<br />
HIGH-CCHH----X'0015000E' HIGH-RBA---------3686399<br />
INDEX ------ CATALOG.MVSICFU.VVSF6C1.CATINDEX<br />
...<br />
Figure 6-17 Example <strong>of</strong> a LISTCAT output for a user catalog<br />
► Listing a VSAM or non-VSAM data set:<br />
LISTCAT ENT(data.set.name) ALL<br />
The output for a VSAM data set looks the same as in Figure 6-17 (remember, a catalog is<br />
a VSAM data set). For a non-VSAM data set, the output is much shorter.<br />
You can use the LISTCAT command to list information for other catalog entries as well. For<br />
information about LISTCAT, see z/<strong>OS</strong> DFSMS Access Method Services for Catalogs,<br />
SC26-7394.<br />
6.11 Defining and deleting data sets<br />
Figure 6-18 Deleting a data set (diagram: user catalog UCAT1 contains its self-describing record and entries for TEST1.A, TEST1.B, and TEST2.A; the VTOC and VVDS on the volume also describe these data sets)<br />
//DELDS JOB ...<br />
//STEP1 EXEC PGM=IDCAMS<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
DELETE TEST1.A<br />
/*<br />
Define data sets<br />
There are many ways to define a data set. To “define” a data set means to create an entry for<br />
it in the VTOC and to catalog it in an ICF catalog (BCS plus VVDS). Examples are using<br />
IDCAMS DEFINE CLUSTER to create a VSAM data set, or using a JCL DD statement to<br />
define a non-VSAM data set. If you define a VSAM data set or an SMS-managed non-VSAM<br />
data set, an entry is created in the BCS, in the VTOC, and in the VVDS.<br />
Since z/OS V1R7, an attempt to define a page data set in a catalog that is not pointed to by<br />
the running master catalog fails with an IDCAMS message, instead of being executed and<br />
causing problems later.<br />
Delete data sets<br />
To “delete” a data set implies uncataloging it. To “scratch” a data set means to remove it from<br />
the VTOC. You can delete a data set, for example, by running an IDCAMS DELETE job, by<br />
using a JCL DD statement disposition, or by entering the DELETE command next to the data<br />
set name in ISPF option 3.4.<br />
The default <strong>of</strong> the DELETE command is scratch, which means the BCS, VTOC, and VVDS data<br />
set entries are erased. By doing that, the reserved space for this data set on the volume is<br />
released. The data set itself is not overwritten until the freed space is reused by another data<br />
set. You can use the parameter ERASE for an IDCAMS DELETE if you want the data set to be<br />
overwritten with binary zeros for security reasons.<br />
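For example, the following sketch deletes a VSAM cluster and overwrites its components (ERASE applies to VSAM objects; the cluster name is illustrative, not from the figures):<br />

```
DELETE TEST1.KSDS -
   CLUSTER -
   ERASE
```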
Note: When you delete a data set, the BCS, VVDS, and VTOC entries for the data set are<br />
removed. If you later recover a BCS, there can be BCS entries for data sets which have<br />
been deleted. In this case, the data sets do not exist, and there are no entries for them in<br />
the VVDS or VTOC. To clean up the BCS, delete the BCS entries.<br />
Delete aliases<br />
To simply delete an alias, use the IDCAMS DELETE ALIAS command, specifying the alias you<br />
are deleting. To delete all the aliases for a catalog, use EXPORT DISCONNECT to disconnect the<br />
catalog. The aliases are deleted when the catalog is disconnected. When you again connect<br />
the catalog (using IMPORT CONNECT), the aliases remain deleted.<br />
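As a sketch using the names defined earlier in this chapter, a single alias can be deleted explicitly, or all of a catalog's aliases can be removed by disconnecting the catalog:<br />

```
DELETE TEST1 ALIAS

EXPORT OTTO.CATALOG.TEST DISCONNECT
```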
Delete the catalog entry<br />
To delete only the catalog entry, you can use the DELETE NOSCRATCH command; the VVDS and<br />
VTOC entries are not deleted. The deleted entry can be reinstated with the DEFINE<br />
RECATALOG command, as shown in Figure 6-19.<br />
//DELDS JOB ...<br />
//STEP1 EXEC PGM=IDCAMS<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
DELETE TEST1.A N<strong>OS</strong>CRATCH /* deletes TEST1.A from the BCS */<br />
DEFINE NONVSAM - /* redefines TEST1.A into the BCS */<br />
(NAME(TEST1.A) -<br />
DEVICETYPE(3390)-<br />
VOLUMES(VSF6S3) -<br />
RECATALOG -<br />
)<br />
/*<br />
Figure 6-19 Delete N<strong>OS</strong>CRATCH, define RECATALOG<br />
Delete VVR or NVR records<br />
When the catalog entry is missing and the data set remains on the DASD, you can use<br />
DELETE VVR for VSAM data sets and DELETE NVR for non-VSAM SMS-managed data sets<br />
to remove the entry from the VVDS. You can only use these commands if there is no entry in<br />
the BCS for the data set.<br />
//DELDS JOB ...<br />
//S1 EXEC PGM=IDCAMS<br />
//SYSPRINT DD SYSOUT=*<br />
//DD1 DD VOL=SER=VSF6S3,UNIT=3390,DISP=OLD<br />
//SYSIN DD *<br />
DELETE TEST1.A -<br />
FILE(DD1) -<br />
NVR<br />
/*<br />
Figure 6-20 Delete the VVDS entry for a non-VSAM data set<br />
Important: When deleting a VSAM KSDS, you must issue a DELETE VVR for each of its<br />
components, the data and the index.<br />
Delete generation data groups<br />
In this example, a generation data group (GDG) base catalog entry is deleted from the<br />
catalog. The generation data sets associated with GDGBASE remain unaffected in the<br />
VTOC, as shown in Figure 6-21.<br />
//DELGDG JOB ...<br />
//S1 EXEC PGM=IDCAMS<br />
//SYSPRINT DD SYSOUT=*<br />
//SYSIN DD *<br />
DELETE TEST1.GDG -<br />
GENERATIONDATAGROUP -<br />
RECOVERY<br />
/*<br />
Figure 6-21 Delete a GDG base for recovery<br />
The DELETE command with keyword RECOVERY removes the GDG base catalog entry from the<br />
catalog.<br />
Delete an ICF catalog<br />
When deleting an ICF catalog, take care to specify whether you want to delete only the<br />
catalog, or whether you want to delete all associated data. The following examples show how<br />
to delete a catalog.<br />
► Delete with recovery<br />
In Figure 6-22, a user catalog is deleted in preparation for replacing it with an imported<br />
backup copy. The VVDS and VTOC entries for objects defined in the catalog are not<br />
deleted and the data sets are not scratched, as shown in the JCL.<br />
//DELET13 JOB ...<br />
//STEP1 EXEC PGM=IDCAMS<br />
//DD1 DD VOL=SER=VSER01,UNIT=3390,DISP=OLD<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
DELETE -<br />
USER.CATALOG -<br />
FILE(DD1) -<br />
RECOVERY -<br />
USERCATALOG<br />
/*<br />
Figure 6-22 Delete a user catalog for recovery<br />
RECOVERY specifies that only the catalog data set is deleted, without deleting the objects<br />
defined in the catalog.<br />
► Delete an empty user catalog<br />
In Figure 6-23 on page 350, a user catalog is deleted. A user catalog can be deleted when<br />
it is empty; that is, when there are no objects cataloged in it other than the catalog's<br />
volume. If the catalog is not empty, it cannot be deleted unless the FORCE parameter is<br />
specified.<br />
//DELET6 JOB ...<br />
//STEP1 EXEC PGM=IDCAMS<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
DELETE -<br />
USER.CATALOG -<br />
PURGE -<br />
USERCATALOG<br />
/*<br />
Figure 6-23 Delete an empty user catalog<br />
Important: The FORCE parameter deletes all data sets in the catalog. The DELETE command<br />
deletes both the catalog and the catalog's user catalog connector entry in the master<br />
catalog.<br />
Delete a migrated data set<br />
A migrated data set is a data set moved by DFSMShsm to a cheaper storage device to make<br />
room in your primary DASD farm. Catalog management recognizes that a data set is<br />
migrated by the MIGRAT volser in its catalog entry. A migrated data set can be DELETE<br />
SCRATCH (TSO DELETE issues the DFSMShsm HDELETE command) or N<strong>OS</strong>CRATCH.<br />
Where:<br />
SCRATCH This means that the non-VSAM data set being deleted from the catalog is to<br />
be removed from the VTOC <strong>of</strong> the volume on which it resides. When<br />
SCRATCH is specified for a cluster, alternate index, page space, or data<br />
space, the VTOC entries for the volumes involved are updated to reflect the<br />
deletion <strong>of</strong> the object.<br />
N<strong>OS</strong>CRATCH This means that the non-VSAM data set being deleted from the catalog is to<br />
remain in the VTOC <strong>of</strong> the volume on which it resides, or that it has already<br />
been scratched from the VTOC. When N<strong>OS</strong>CRATCH is specified for a<br />
cluster, page space, alternate index, or data space, the VTOC entries for the<br />
volumes involved are not updated.<br />
To execute the DELETE command against a migrated data set, you must have the RACF group<br />
ARCCATGP defined. In general, to allow certain authorized users to perform these operations<br />
on migrated data sets without recalling them, perform the following steps:<br />
1. Define a RACF catalog maintenance group named ARCCATGP.<br />
ADDGROUP (ARCCATGP)<br />
2. Connect the desired users to that group.<br />
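For example, with the RACF CONNECT command (the user ID is illustrative):<br />

```
CONNECT (USER01) GROUP(ARCCATGP)
```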
Only when such a user is logged on under group ARCCATGP does DFSMShsm bypass the<br />
automatic recall for UNCATALOG, RECATALOG, and DELETE/N<strong>OS</strong>CRATCH requests for<br />
migrated data sets. For example, the following LOGON command demonstrates starting a<br />
TSO session under ARCCATGP. For further information about ARCCATGP group, see z/<strong>OS</strong><br />
DFSMShsm Implementation and Customization Guide, SC35-0418.<br />
LOGON userid | password GROUP(ARCCATGP)<br />
To delete a migrated data set that is not recorded in the DFSMShsm control data sets,<br />
execute a DELETE NOSCRATCH command for the data set to clean up the ICF catalog.<br />
6.12 DELETE command enhancement with z/<strong>OS</strong> V1R11<br />
IDCAMS DELETE command is enhanced to include a<br />
new function called DELETE MASK<br />
Allows users to specify the data set name selection<br />
criteria desired with a mask-entry-name and a keyword<br />
“MASK”<br />
A mask-entry-name (also called a filter key) can have<br />
two consecutive asterisks (**) or one or more percent (%) signs<br />
Two consecutive asterisks represent zero or more<br />
characters<br />
% sign is the replacement for any character in that same<br />
relative position<br />
MASK keyword is the keyword to turn on the new feature<br />
Figure 6-24 DELETE command enhancement with z/<strong>OS</strong> V1R11<br />
DELETE command<br />
The DELETE command deletes catalogs, VSAM data sets, non-VSAM data sets, and objects.<br />
With z/<strong>OS</strong> V1R11, the IDCAMS DELETE command is enhanced to include a new function<br />
called DELETE MASK. It allows users to specify the data set name selection criteria desired<br />
with a mask-entry-name and the keyword MASK. A mask-entry-name (also called a filter<br />
key) can have two consecutive asterisks (**) or one or more percent signs (%).<br />
The two consecutive asterisks represent zero or more characters and are not limited in the<br />
number of levels. For example, A.B.** means all data set names with two or more levels with<br />
A and B as their first and second qualifiers, respectively. The percent sign is the replacement<br />
for any single character in that same relative position. For example, ABCDE matches the<br />
mask-entry ‘A%%DE’, but not ‘A%DE’.<br />
The MASK keyword is the keyword to turn on the new feature; for example:<br />
DELETE A.B.** MASK<br />
DELETE A.BC.M%%K MASK<br />
NOMASK is the keyword to turn the new function <strong>of</strong>f. The default is NOMASK.<br />
If more than one entry is to be deleted, the list <strong>of</strong> entrynames must be enclosed in<br />
parentheses. The maximum number <strong>of</strong> entrynames that can be deleted is 100. If the MASK<br />
keyword is specified, then only one entryname can be specified. This entryname is also<br />
known as the mask filter key.<br />
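For example (the data set names are illustrative), several entries can be deleted in one command without MASK by enclosing the entrynames in parentheses:<br />

```
DELETE (TEST1.A TEST1.B TEST2.A)
```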
Note: The command will result in error if there is more than one mask filter key specified in<br />
one command.<br />
DELETE command examples<br />
Following are examples <strong>of</strong> how generic-level DELETE works given the following data sets:<br />
1) AAA.BBB.CCC.DDD<br />
2) AAA.BBB.CCC.DDD<br />
3) AAA.BBB.CCC.DDD.EEE<br />
4) AAA.BBB.CCC<br />
5) BBB.DDD.AAC.BBC.EEE<br />
6) BBB.DDD.ABC.BBC.EEE<br />
7) BBB.DDD.ADC.BBDD.EEEE<br />
8) BBB.DDD.ADC.BCCD.EEEE<br />
► DELETE AAA.* results in the deletion of no data sets.<br />
► DELETE AAA.BBB.* results in the deletion of data set #4.<br />
► DELETE AAA.BBB.*.DDD results in the deletion of data sets #1 and #2.<br />
► DELETE AAA.BBB.*.DDD.EEE results in the deletion of data set #3.<br />
► DELETE AAA.** MASK results in the deletion of data sets #1, #2, #3, and #4.<br />
► DELETE BBB.DDD.** MASK results in the deletion of data sets #5, #6, #7, and #8.<br />
► DELETE BBB.DDD.A%C.BBC.EEE MASK results in the deletion of data sets #5 and #6.<br />
► DELETE BBB.DDD.ADC.B%%%.EEEE MASK results in the deletion of data sets #7 and #8.<br />
Note: When a generic level name is specified, an asterisk (*) can represent only one<br />
qualifier of a data set name. When a filter key of double asterisks (**) is specified with the<br />
MASK parameter, the key can represent multiple qualifiers within a data set name. The<br />
double asterisks (**) can precede or follow a period, and must be preceded or followed by<br />
either a period or a blank.<br />
For a masking filter key of percent signs (%), one to eight percent signs can be specified in<br />
each qualifier. A data set name ABCDE matches the mask entry-name ‘A%%DE’, but does<br />
not match ‘A%DE’.<br />
MASK keyword<br />
The DELETE MASK command allows you to specify many variations of a data set name on a<br />
single deletion, using new wildcard characters and rules that give more flexibility in selecting<br />
the data sets to be deleted. A mask can contain an asterisk (*), two consecutive asterisks (**),<br />
or a percent sign (%).<br />
6.13 Backup procedures<br />
Backing up a BCS<br />
IDCAMS EXPORT command<br />
DFSMSdss logical dump command<br />
DFSMShsm BACKDS command<br />
Backing up a VVDS<br />
Back up the full volume<br />
Back up all data sets described in the VVDS<br />
Figure 6-25 Backup procedures for catalogs<br />
Backup procedures<br />
The two parts <strong>of</strong> an ICF catalog, the BCS and the VVDS, require separate backup<br />
techniques. The BCS can be backed up like any other data set. Only back up the VVDS as<br />
part <strong>of</strong> a volume dump. The entries in the VVDS and VTOC are backed up when the data sets<br />
they describe are:<br />
► Exported with IDCAMS<br />
► Logically dumped with DFSMSdss<br />
► Backed up with DFSMShsm<br />
Important: Because catalogs are essential system data sets, it is important that you<br />
maintain backup copies. The more recent and accurate a backup copy, the less impact a<br />
catalog outage will have on your installation.<br />
Backing up a BCS<br />
To back up a BCS you can use one <strong>of</strong> the following methods:<br />
► The access method services EXPORT command<br />
► The DFSMSdss logical DUMP command<br />
► The DFSMShsm BACKDS command<br />
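For example, the following DFSMSdss job (the data set names are illustrative) creates a
logical dump of a BCS:

//DUMPBCS JOB ...
//STEP1 EXEC PGM=ADRDSSU
//BACKUP DD DSN=BACKUP.ICFCAT.PROJECT1,UNIT=TAPE,
//       DISP=(NEW,CATLG),LABEL=(1,SL)
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  DUMP DATASET(INCLUDE(SYS1.ICFCAT.PROJECT1)) -
       OUTDDNAME(BACKUP)
/*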
Chapter 6. Catalogs 353
You can later recover the backup copies using the same utility used to create the backup:<br />
► The access method services IMPORT command for exported copies<br />
► The DFSMSdss RESTORE command for logical dump copies<br />
► The DFSMShsm RECOVER command for DFSMShsm backups<br />
The copy created by these utilities is a portable sequential data set that can be stored on a
tape or direct access device, which can be of a different device type than the one containing
the source catalog.
When these commands are used to back up a BCS, the aliases <strong>of</strong> the catalog are saved in<br />
the backup copy. The source catalog is not deleted, and remains as a fully functional catalog.<br />
The relationships between the BCS and VVDSs are unchanged.<br />
You cannot permanently export a catalog by using the PERMANENT parameter <strong>of</strong> EXPORT.<br />
The TEMPORARY option is used even if you specify PERMANENT or allow it to default.<br />
Figure 6-26 shows you an example for an IDCAMS EXPORT.<br />
//EXPRTCAT JOB ...<br />
//STEP1 EXEC PGM=IDCAMS<br />
//RECEIVE DD DSNAME=CATBACK,UNIT=(TAPE,,DEFER),<br />
// DISP=(NEW,KEEP),VOL=SER=327409,LABEL=(1,SL)<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
EXPORT -<br />
USER.CATALOG -<br />
OUTFILE(RECEIVE) -<br />
TEMPORARY<br />
/*<br />
Figure 6-26 JCL to create a backup <strong>of</strong> a BCS using IDCAMS EXPORT<br />
Note: You cannot use IDCAMS REPRO or other copying commands to create and recover<br />
BCS backups.<br />
Backing up a master catalog<br />
A master catalog can be backed up like any other BCS. Use IDCAMS, DFSMSdss, or
DFSMShsm for the backup. Another way to provide a backup for the master catalog is to<br />
create an alternate master catalog. For information about defining and using an alternate<br />
master catalog, see z/<strong>OS</strong> DFSMS: Managing Catalogs, SC26-7409.<br />
Also make periodic volume dumps <strong>of</strong> the master catalog's volume. This dump can later be<br />
used by the stand-alone version <strong>of</strong> DFSMSdss to restore the master catalog if you cannot<br />
access the volume from another system.<br />
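For example, a full-volume dump of the master catalog's volume might look like this (the
volume serial and output data set name are illustrative):

//DUMPVOL JOB ...
//STEP1 EXEC PGM=ADRDSSU
//DASD DD UNIT=3390,VOL=SER=MCAT01,DISP=OLD
//TAPE DD DSN=FULLDUMP.MCAT01,UNIT=TAPE,
//     DISP=(NEW,CATLG),LABEL=(1,SL)
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  DUMP FULL INDDNAME(DASD) OUTDDNAME(TAPE)
/*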
Backing up a VVDS<br />
Do not back up the VVDS as a data set to provide for recovery. To back up the VVDS, back up<br />
the volume containing the VVDS, or back up all data sets described in the VVDS (all VSAM<br />
and SMS-managed data sets). If the VVDS ever needs to be recovered, recover the entire<br />
volume, or all the data sets described in the VVDS.<br />
You can use either DFSMSdss or DFSMShsm to back up and recover a volume or individual<br />
data sets on the volume.<br />
6.14 Recovery procedures<br />
Figure 6-27 Recovery procedures
Recovery procedures<br />
Before you run the recovery procedures mentioned in this section, see 6.23, “Fixing<br />
temporary catalog problems” on page 373.<br />
Normally, a BCS is recovered separately from a VVDS. A VVDS usually does not need to be<br />
recovered, even if an associated BCS is recovered. However, if you need to recover a VVDS,<br />
and a BCS resides on the VVDS’s volume, you must recover the BCS as well. If possible,<br />
export the BCS before recovering the volume, and then recover the BCS from the exported<br />
copy. This ensures a current BCS.<br />
Before recovering a BCS or VVDS, try to recover single damaged records. If damaged<br />
records can be rebuilt, you can avoid a full recovery.<br />
Single BCS records can be recovered using the IDCAMS DELETE and DEFINE commands as<br />
described in 6.11, “Defining and deleting data sets” on page 347. Single VVDS and VTOC<br />
records can be recovered using the IDCAMS DELETE command and by recovering the data<br />
sets on the volume.<br />
The way you recover a BCS depends on how it was saved (see 6.13, “Backup procedures” on<br />
page 353). When you recover a BCS, you do not need to delete and redefine the target<br />
catalog unless you want to change the catalog's size or other characteristics, or unless the<br />
BCS is damaged in such a way as to prevent the usual recovery.<br />
Aliases to the catalog can be defined if you use DFSMSdss, DFSMShsm, or if you specify<br />
ALIAS on the IMPORT command. If you have not deleted and redefined the catalog, all existing<br />
aliases are maintained, and any aliases defined in the backup copy are redefined if they are<br />
not already defined.<br />
Lock the BCS before you start recovery so that no one else has access to it while you recover<br />
the BCS. If you do not restrict access to the catalog, users might be able to update the<br />
catalog during recovery or maintenance and create a data integrity exposure. The catalog<br />
also will be unavailable to any system that shares the catalog. You cannot lock a master<br />
catalog.<br />
After you recover the catalog, update the BCS with any changes which have occurred since<br />
the last backup, for example, by running IDCAMS DEFINE RECATALOG for all missing entries.<br />
You can use the access method services DIAGN<strong>OS</strong>E command to identify certain<br />
unsynchronized entries.<br />
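For example, a missing VSAM entry might be recataloged as follows (the names are
illustrative; with RECATALOG, catalog management rebuilds the BCS entry from information
already in the VVDS):

//RECAT EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  DEFINE CLUSTER( -
    NAME(PROJECT1.MISSING.KSDS) -
    VOLUMES(VOL001) -
    RECATALOG) -
    CATALOG(SYS1.ICFCAT.PROJECT1)
/*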
The Integrated Catalog Forward Recovery Utility<br />
You also can use the Integrated Catalog Forward Recovery Utility (ICFRU) to recover a<br />
damaged catalog to a correct and current status. This utility uses SMF records that record<br />
changes to the catalog, updating the catalog with changes made since the BCS was backed<br />
up. The SMF records are used by ICFRU as a log database. Use ICFRU to avoid the loss of
catalog data even after recovery.
Recovery step by step<br />
Follow these steps to recover a BCS using the IDCAMS IMPORT command:<br />
1. If the catalog is used by the job scheduler for any batch jobs, hold the job queue for all job<br />
classes except the one you use for the recovery.<br />
2. Lock the catalog using the IDCAMS ALTER LOCK command.<br />
3. Use ICFRU to create an updated version <strong>of</strong> your last EXPORT backup.<br />
4. Import the most current backup copy <strong>of</strong> the BCS (which contains the BCS's aliases as<br />
they existed when the backup was made). For example, use this JCL:<br />
//RECOVER EXEC PGM=IDCAMS<br />
//BACKCOPY DD DSN=BACKUP.SYS1.ICFCAT.PROJECT1,DISP=OLD<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
IMPORT INFILE(BACKCOPY) -<br />
OUTDATASET(SYS1.ICFCAT.PROJECT1) -<br />
ALIAS -<br />
LOCK<br />
5. If you did not run step 3, manually update the catalog with the changes made between the<br />
last backup and the time <strong>of</strong> error, for example by using IDCAMS DEFINE RECATALOG.<br />
6. Use IDCAMS DIAGN<strong>OS</strong>E and EXAMINE commands to check the contents and integrity <strong>of</strong> the<br />
catalog (see 6.15, “Checking the integrity on an ICF structure” on page 357).<br />
7. If the catalog is shared by other systems and was disconnected there for recovery, run
IDCAMS IMPORT CONNECT ALIAS on those systems to reconnect the user catalog to the<br />
master catalog.<br />
8. Unlock the catalog using IDCAMS ALTER UNLOCK.<br />
9. Free the job queue if you put it on hold.<br />
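The reconnection in step 7 might look like the following on each sharing system (catalog
name, device type, and volume serial are illustrative):

//CONNECT EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  IMPORT CONNECT ALIAS -
    OBJECTS((SYS1.ICFCAT.PROJECT1 -
      DEVICETYPE(3390) -
      VOLUMES(VOL001)))
/*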
For further information about recovery procedures, see z/<strong>OS</strong> DFSMS: Managing Catalogs,<br />
SC26-7409. For information about the IDCAMS facility, see z/<strong>OS</strong> DFSMS Access Method<br />
Services for Catalogs, SC26-7394.<br />
6.15 Checking the integrity on an ICF structure<br />
Figure 6-28 Errors in an ICF structure
Two types <strong>of</strong> errors<br />
There are two types <strong>of</strong> errors in the ICF structure that cause the need for recovery. They are:<br />
► An error in the structural integrity <strong>of</strong> the BCS or VVDS as VSAM data sets - VSAM error<br />
Errors in the structure <strong>of</strong> a BCS as a VSAM KSDS or the VVDS as a VSAM ESDS data set<br />
usually mean that the data set is broken (logically or physically). The data set no longer<br />
has a valid structure that the VSAM component can handle. VSAM does not care about<br />
the contents <strong>of</strong> the records in the BCS or VVDS.<br />
► An error within the data structure <strong>of</strong> a BCS or VVDS - catalog error<br />
The VSAM structure <strong>of</strong> the BCS or VVDS is still valid. VSAM has no problems accessing<br />
the data set. However, the content <strong>of</strong> the single records in the BCS or VVDS does not<br />
conform with the catalog standards. The information in the BCS and VVDS for a single<br />
data set can be unsynchronized, thereby making the data set inaccessible.<br />
VSAM errors<br />
There are two kinds of VSAM errors that can occur with your BCS or VVDS:
► Logical errors<br />
The records on the DASD volume still have valid physical characteristics like record size or<br />
CI size. The VSAM information in those records is wrong, like pointers from one record to<br />
another or the end-<strong>of</strong>-file information.<br />
► Physical errors<br />
The records on the DASD volume are invalid; for example, they are <strong>of</strong> a wrong length.<br />
Reasons can be an overlay <strong>of</strong> physical DASD space or wrong extent information for the<br />
data set in the VTOC or VVDS.<br />
When errors in the VSAM structure occur, they are in most cases logical errors for the BCS.<br />
Because the VVDS is an entry-sequenced data set (ESDS), it has no index component.<br />
Logical errors for an ESDS are unlikely.<br />
You can use the IDCAMS EXAMINE command to analyze the structure <strong>of</strong> the BCS. As<br />
explained previously, the BCS is a VSAM key-sequenced data set (KSDS). Before running the<br />
EXAMINE, run an IDCAMS VERIFY to make sure that the VSAM information is current, and<br />
ALTER LOCK the catalog to prevent update from others while you are inspecting it.<br />
With the parameter INDEXTEST, you analyze the integrity <strong>of</strong> the index. With parameter<br />
DATATEST, you analyze the data component. If only the index test shows errors, you might<br />
have the chance to recover the BCS by just running an EXPORT/IMPORT to rebuild the index. If<br />
there is an error in the data component, you probably have to recover the BCS as described<br />
in 6.14, “Recovery procedures” on page 355.<br />
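For example, the following commands (the catalog name is illustrative) lock the catalog,
refresh its VSAM information, and then test both the index and the data component:

//EXAMCAT EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  ALTER SYS1.ICFCAT.PROJECT1 LOCK
  VERIFY DATASET(SYS1.ICFCAT.PROJECT1)
  EXAMINE NAME(SYS1.ICFCAT.PROJECT1) -
    INDEXTEST DATATEST
/*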
Catalog errors<br />
By catalog errors we mean errors in the catalog information <strong>of</strong> a BCS or VVDS, or<br />
unsynchronized information between the BCS and VVDS. The VSAM structure <strong>of</strong> the BCS is<br />
still valid, that is, an EXAMINE returns no errors.<br />
Catalog errors can make a data set inaccessible. Sometimes it is sufficient to delete the<br />
affected entries, sometimes the catalog needs to be recovered (see 6.14, “Recovery<br />
procedures” on page 355).<br />
You can use the IDCAMS DIAGN<strong>OS</strong>E command to validate the contents <strong>of</strong> a BCS or VVDS. You<br />
can use this command to check a single BCS or VVDS and to compare the information<br />
between a BCS and multiple VVDSs.<br />
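As an example, the following command (names are illustrative) checks a BCS and compares
its entries against the VVDS on volume VOL001:

//DIAGCAT EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  DIAGNOSE ICFCATALOG -
    INDATASET(SYS1.ICFCAT.PROJECT1) -
    COMPAREDS(SYS1.VVDS.VVOL001)
/*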
For various DIAGN<strong>OS</strong>E examples, see z/<strong>OS</strong> DFSMS Access Method Services for Catalogs,<br />
SC26-7394.<br />
6.16 Protecting catalogs<br />
Figure 6-29 Protecting catalogs with RACF
Protecting catalogs<br />
The protection <strong>of</strong> data includes:<br />
► Data security: the safety <strong>of</strong> data from theft or intentional destruction<br />
► Data integrity: the safety <strong>of</strong> data from accidental loss or destruction<br />
Data can be protected either indirectly, by preventing access to programs that can be used to<br />
modify data, or directly, by preventing access to the data itself. Catalogs and cataloged data<br />
sets can be protected in both ways.<br />
To protect your catalogs and cataloged data, use the Resource Access Control Facility<br />
(RACF) or a similar product.<br />
Authorized program facility (APF) protection<br />
The authorized program facility (APF) limits the use <strong>of</strong> sensitive system services and<br />
resources to authorized system and user programs.<br />
For information about using APF for program authorization, see z/<strong>OS</strong> MVS <strong>Programming</strong>:<br />
Authorized Assembler Services Guide, SA22-7608.<br />
All IDCAMS load modules are contained in SYS1.LINKLIB, and the root segment load<br />
module (IDCAMS) is link-edited with the SETCODE AC(1) attribute. These two characteristics<br />
ensure that access method services executes with APF authorization.<br />
Because APF authorization is established at the job step task level, access method services<br />
is not authorized if invoked by an unauthorized application or terminal monitor program.<br />
RACF authorization checking<br />
RACF provides a s<strong>of</strong>tware access control measure you can use in addition to or instead <strong>of</strong><br />
passwords. RACF protection and password protection can coexist for the same data set.<br />
To open a catalog as a data set, you must have ALTER authority and APF authorization.<br />
When defining an SMS-managed data set, the system only checks to make sure the user has<br />
authority to the data set name and SMS classes and groups. The system selects the<br />
appropriate catalog, without checking the user's authority to the catalog. You can define a<br />
data set if you have ALTER or OPERATIONS authority to the applicable data set pr<strong>of</strong>ile.<br />
Deleting any type <strong>of</strong> RACF-protected entry from a RACF-protected catalog requires ALTER<br />
authorization to the catalog or to the data set pr<strong>of</strong>ile protecting the entry being deleted. If a<br />
non-VSAM data set is SMS-managed, RACF does not check for DASDVOL authority. If a<br />
non-VSAM, non-SMS-managed data set is being scratched, DASDVOL authority is also<br />
checked.<br />
For ALTER RENAME, the user is required to have the following two types <strong>of</strong> authority:<br />
► ALTER authority to either the data set or the catalog<br />
► ALTER authority to the new name (generic pr<strong>of</strong>ile) or CREATE authority to the group<br />
Be sure that RACF pr<strong>of</strong>iles are correct after you use REPRO MERGECAT or CNVTCAT on a<br />
catalog that uses RACF pr<strong>of</strong>iles. If the target and source catalogs are on the same volume,<br />
the RACF pr<strong>of</strong>iles remain unchanged.<br />
Tape data sets defined in an integrated catalog facility catalog can be protected by:<br />
► Controlling access to the tape volumes<br />
► Controlling access to the individual data sets on the tape volumes<br />
Pr<strong>of</strong>iles<br />
To control the ability to perform functions associated with storage management, define<br />
pr<strong>of</strong>iles in the FACILITY class whose pr<strong>of</strong>ile names begin with STGADMIN (storage<br />
administration). For a complete list <strong>of</strong> STGADMIN pr<strong>of</strong>iles, see z/<strong>OS</strong> DFSMSdfp Storage<br />
Administration Reference, SC26-7402. Examples <strong>of</strong> pr<strong>of</strong>iles include:<br />
STGADMIN.IDC.DIAGN<strong>OS</strong>E.CATALOG<br />
STGADMIN.IDC.DIAGN<strong>OS</strong>E.VVDS<br />
STGADMIN.IDC.EXAMINE.DATASET<br />
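For example, a storage administration group could be given access to one of these functions
with RACF commands such as the following (the group name STGADM is an example):

RDEFINE FACILITY STGADMIN.IDC.EXAMINE.DATASET UACC(NONE)
PERMIT STGADMIN.IDC.EXAMINE.DATASET CLASS(FACILITY) ID(STGADM) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH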
6.17 Merging catalogs<br />
Figure 6-30 Merging catalogs
Merging catalogs<br />
You might find it beneficial to merge catalogs if you have many small or seldom-used<br />
catalogs. An excessive number <strong>of</strong> catalogs can complicate recovery procedures and waste<br />
resources such as CAS storage, tape mounts for backups, and system time performing<br />
backups.<br />
Merging catalogs is accomplished in much the same way as splitting catalogs (see 6.18,<br />
“Splitting a catalog” on page 363). The only difference between splitting catalogs and merging<br />
them is that in merging, you want all the entries in a catalog to be moved to another catalog,<br />
so that you can delete the obsolete catalog.<br />
Use the following steps to merge two integrated catalog facility catalogs:<br />
1. Use ALTER LOCK to lock both catalogs.<br />
2. Use LISTCAT to list the aliases for the catalog you intend to delete after the merger:<br />
//JOB ...<br />
//S1 EXEC PGM=IDCAMS<br />
//SYSPRINT DD SYSOUT=*<br />
//DD1 DD DSN=listcat.output,DISP=(NEW,CATLG),<br />
// SPACE=(TRK,(10,10)),<br />
// DCB=(RECFM=VBA,LRECL=125,BLKSIZE=629)<br />
//SYSIN DD *<br />
LISTC ENT(catalog.name) ALL -<br />
OUTFILE(DD1)<br />
/*<br />
3. Use EXAMINE and DIAGN<strong>OS</strong>E to ensure that the catalogs are error-free. Fix any errors<br />
indicated (see also “Checking the integrity on an ICF structure” on page 357).<br />
4. Use REPRO MERGECAT without specifying the ENTRIES or LEVEL parameter. The<br />
OUTDATASET parameter specifies the catalog that you are keeping after the two catalogs<br />
are merged. Here is an example:<br />
//MERGE6 JOB ...<br />
//STEP1 EXEC PGM=IDCAMS<br />
//DD1 DD VOL=SER=VSER01,UNIT=DISK,DISP=OLD<br />
// DD VOL=SER=VSER02,UNIT=DISK,DISP=OLD<br />
// DD VOL=SER=VSER03,UNIT=DISK,DISP=OLD<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
REPRO -<br />
INDATASET(USERCAT4) -<br />
OUTDATASET(USERCAT5) -<br />
MERGECAT -<br />
FILE(DD1)<br />
FROMKEY(WELCHA.C.*) TOKEY(WELCHA.E.*)<br />
/*<br />
Important: This step can take a long time to complete. If the MERGECAT job is cancelled,<br />
then all merged entries so far remain in the target catalog. They are not backed out in<br />
case the job fails. See “Recovering from a REPRO MERGECAT Failure” in z/<strong>OS</strong><br />
DFSMS: Managing Catalogs, SC26-7409, for more information about this topic.<br />
Since z/<strong>OS</strong> V1R7, REPRO MERGECAT provides the capability to copy a range <strong>of</strong> records from<br />
one user catalog to another. It allows recovery <strong>of</strong> a broken catalog by enabling you to copy<br />
from one specific key to another specific key just before where the break occurred and<br />
then recover data beginning after the break. Refer to the parameters FROMKEY/TOKEY in the<br />
previous example.<br />
5. Use the listing created in step 2 to create a sequence <strong>of</strong> DELETE ALIAS and DEFINE ALIAS<br />
commands to delete the aliases <strong>of</strong> the obsolete catalog, and to redefine the aliases as<br />
aliases <strong>of</strong> the catalog you are keeping.<br />
The DELETE ALIAS/DEFINE ALIAS sequence must be run on each system that shares the<br />
changed catalogs and uses a separate master catalog.<br />
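For each alias, the sequence might look like the following, assuming an alias PROJ1 and a
retained catalog named USERCAT5:

  DELETE PROJ1 ALIAS
  DEFINE ALIAS(NAME(PROJ1) RELATE(USERCAT5))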
6. Use DELETE USERCATALOG to delete the obsolete catalog. Specify RECOVERY on the<br />
DELETE command.<br />
7. If your catalog is shared, run the EXPORT DISCONNECT command on each shared system to
remove unwanted user catalog connector entries.
8. Use ALTER UNLOCK to unlock the remaining catalog.<br />
You can also merge entries from one tape volume catalog to another using REPRO MERGECAT.<br />
REPRO retrieves tape library or tape volume entries and redefines them in a target tape volume<br />
catalog. In this case, VOLUMEENTRIES needs to be used to correctly filter the appropriate<br />
entries. The LEVEL parameter is not allowed when merging tape volume catalogs.<br />
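For example, the following job (catalog names and the volume serial prefix are illustrative)
moves the entries for tape volumes whose serials begin with A0 into another tape volume
catalog (volume entry names consist of the letter V followed by the volume serial):

//MERGEVC JOB ...
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  REPRO INDATASET(SYS1.VOLCAT.VGENERAL) -
        OUTDATASET(SYS1.VOLCAT.VNEW) -
        VOLUMEENTRIES(VA0*) -
        MERGECAT
/*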
6.18 Splitting a catalog<br />
Figure 6-31 Splitting a catalog
Splitting catalogs<br />
You can split a catalog to create two catalogs or to move a group <strong>of</strong> catalog entries if you<br />
determine that a catalog is either unacceptably large or that it contains too many entries for<br />
critical data sets.<br />
If the catalog is unacceptably large (that is, large enough that a catalog failure would leave
too many entries inaccessible), you can split the catalog into two catalogs. If the catalog is of
an acceptable size but contains entries for too many critical data sets, you can simply move
entries from one catalog to another.
To split a catalog or move a group <strong>of</strong> entries, use the access method services REPRO MERGECAT<br />
command. Use the following steps to split a catalog or to move a group <strong>of</strong> entries:<br />
1. Use ALTER LOCK to lock the catalog. If you are moving entries to an existing catalog, lock it<br />
as well.<br />
2. If you are splitting a catalog, define a new catalog with DEFINE USERCATALOG LOCK (see also<br />
“Defining a catalog and its aliases” on page 339).<br />
3. Use LISTCAT to obtain a listing <strong>of</strong> the catalog aliases you are moving to the new catalog.<br />
Use the OUTFILE parameter to define a data set to contain the output listing (see also<br />
“Merging catalogs” on page 361).<br />
4. Use EXAMINE and DIAGN<strong>OS</strong>E to ensure that the catalogs are error-free. Fix any errors<br />
indicated (see also “Checking the integrity on an ICF structure” on page 357).<br />
5. Use REPRO MERGECAT to split the catalog or move the group <strong>of</strong> entries. When splitting a<br />
catalog, the OUTDATASET parameter specifies the catalog created in step 2. When<br />
moving a group <strong>of</strong> entries, the OUTDATASET parameter specifies the catalog which is to<br />
receive the entries.<br />
Use the ENTRIES or LEVEL parameters to specify which catalog entries are to be<br />
removed from the source catalog and placed in the catalog specified in OUTDATASET.<br />
In the following example all entries that match the generic name VSAMDATA.* are moved<br />
from catalog USERCAT4 to USERCAT5.<br />
//MERGE76 JOB ...<br />
//STEP1 EXEC PGM=IDCAMS<br />
//DD1 DD VOL=SER=VSER01,UNIT=DISK,DISP=OLD<br />
// DD VOL=SER=VSER02,UNIT=DISK,DISP=OLD<br />
// DD VOL=SER=VSER03,UNIT=DISK,DISP=OLD<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSIN DD *<br />
REPRO -<br />
INDATASET(USERCAT4) -<br />
OUTDATASET(USERCAT5) -<br />
ENTRIES(VSAMDATA.*) -<br />
MERGECAT -<br />
FILE(DD1)<br />
/*<br />
Important: This step can take a long time to complete. If the MERGECAT job is cancelled,<br />
all merged entries so far will remain in the target catalog. They are not backed out in<br />
case the job fails. See “Recovering from a REPRO MERGECAT Failure” in z/<strong>OS</strong><br />
DFSMS: Managing Catalogs, SC26-7409, for more information about this topic.<br />
6. Use the listing created in step 3 to create a sequence <strong>of</strong> DELETE ALIAS and DEFINE ALIAS<br />
commands for each alias. These commands delete the alias from the original catalog, and<br />
redefine them as aliases for the catalog which now contains entries belonging to that alias<br />
name.<br />
The DELETE ALIAS/DEFINE ALIAS sequence must be run on each system that shares the<br />
changed catalogs and uses a separate master catalog.<br />
7. Unlock both catalogs using ALTER UNLOCK.<br />
6.19 Catalog performance<br />
Factors that influence catalog performance:
► Main factor: the amount of I/O
► Cache catalogs to decrease the number of I/Os
► Use Enhanced Catalog Sharing
► No user data sets in the master catalog
► Options for DEFINE USERCATALOG
► Convert the SYSIGGV2 resource to avoid enqueue contention and deadlocks between systems
► Eliminate the use of JOBCAT/STEPCAT

Figure 6-32 Catalog performance
Catalog performance<br />
Performance is not the main consideration when defining catalogs. It is more important to<br />
create a catalog configuration that allows easy recovery <strong>of</strong> damaged catalogs with the least<br />
amount <strong>of</strong> system disruption. However, there are several options you can choose to improve<br />
catalog performance without affecting the recoverability <strong>of</strong> a catalog. Remember that in an<br />
online environment, such as CICS/DB2, the number <strong>of</strong> data set allocations is minimal and<br />
consequently the catalog activity is low.<br />
Factors affecting catalog performance<br />
The main factors affecting catalog performance are the amount of I/O required for the catalog
and the subsequent amount of time it takes to perform that I/O. These factors can be reduced
by buffering catalog records in special buffer pools that are used only for catalogs. Other
factors are the size and usage of the master catalog, the options that were used to define a
catalog, and the way you share catalogs between systems.
Buffering catalogs<br />
The simplest method of improving catalog performance is to use a buffer to maintain catalog
records in the CAS private address space or in a VLF data space. Two types of buffer are
available exclusively for catalogs. The in-storage catalog (ISC) buffer is contained within the<br />
catalog address space (CAS). The catalog data space buffer (CDSC) is separate from CAS<br />
and uses the z/<strong>OS</strong> VLF component, which stores the buffered records in a data space. Both<br />
types <strong>of</strong> buffer are optional, and each can be cancelled and restarted without an IPL.<br />
Chapter 6. Catalogs 365
The two types of buffer keep catalog records in storage, which avoids the I/O that would
otherwise be necessary to read the records from DASD. There are several things to take into
consideration when deciding which kind of buffer to use for which catalog. See z/OS DFSMS:
Managing Catalogs, SC26-7409, for more information about buffering. Another kind <strong>of</strong><br />
caching is using enhanced catalog sharing to avoid I/Os to read the catalog VVR. Refer to<br />
6.24, “Enhanced catalog sharing” on page 375 for more information about this topic.<br />
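The CDSC is activated through a COFVLFxx member in SYS1.PARMLIB that defines the VLF
class IGGCAS and names the catalogs to be cached. A minimal sketch follows (the catalog
name and MAXVIRT value are examples only; MAXVIRT is specified in 4 KB blocks):

CLASS NAME(IGGCAS)              /* VLF class for catalog data space caching */
  EMAJ(SYS1.ICFCAT.PROJECT1)    /* catalog eligible for caching             */
  MAXVIRT(256)                  /* data space size, in 4 KB blocks          */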
Master catalog<br />
If the master catalog only contains entries for catalogs, catalog aliases, and system data sets,<br />
the entire master catalog is read into main storage during system initialization. Because the<br />
master catalog, if properly used, is rarely updated, the performance <strong>of</strong> the master catalog is<br />
not appreciably affected by I/O requirements. For that reason, keep the master catalog small<br />
and do not define user data sets into it.<br />
Options for defining a user catalog<br />
There are several options you can specify when you define a user catalog that have an impact<br />
on the performance. The options are:<br />
► STRNO - Specifies the number of concurrent requests.
► BUFFERSPACE - Required buffer space for your catalog. It is determined by catalog<br />
management, but you can change it.<br />
► BUFND - Number <strong>of</strong> buffers for transmitting data between virtual and DASD. The default is<br />
STRNO+1, but you can change it.<br />
► BUFNI - Number <strong>of</strong> buffers for transmitting index entries between virtual and auxiliary<br />
storage. Default is STRNO+2.<br />
► FREESPACE - An adequate value allows catalog updates without an excessive number of
control interval and control area splits.
For more information about these values, see z/<strong>OS</strong> DFSMS Access Method Services for<br />
Catalogs, SC26-7394.<br />
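These options are specified on the DEFINE USERCATALOG command; for example (the
names and values are illustrative only):

//DEFCAT EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  DEFINE USERCATALOG( -
    NAME(SYS1.ICFCAT.PROJECT2) -
    MEGABYTES(15 15) -
    VOLUME(VOL002) -
    ICFCATALOG -
    STRNO(3) -
    BUFND(4) -
    BUFNI(5) -
    FREESPACE(10 10))
/*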
Since z/OS V1R7, catalog auto-tuning runs every 10 minutes and temporarily adjusts the
number of data buffers, index buffers, and VSAM strings for catalogs. When any modification
occurs, message IEC391I is issued with the new values. This function is enabled by default,
but can be disabled with the F CATALOG,DISABLE(AUTOTUNING) command.
Convert SYSIGGV2 resource<br />
Catalog management uses the SYSIGGV2 reserve when serializing access to catalogs. The<br />
SYSIGGV2 reserve is used to serialize the entire catalog BCS component across all I/O as<br />
well as to serialize access to specific catalog entries.<br />
If the catalog is shared only within one GRSplex, convert the SYSIGGV2 resource to a global<br />
enqueue to avoid reserves on the volume on which the catalog resides. If you are not<br />
converting SYSIGGV2, you can have ENQ contentions on those volumes and even run into<br />
deadlock situations.<br />
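The conversion is done with an RNL definition in the GRSRNLxx parmlib member, sketched
as follows:

RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSIGGV2)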
Important: If you share a catalog with a system that is not in the same GRS complex, do<br />
not convert the SYSIGGV2 resource for this catalog. Sharing a catalog outside the<br />
complex requires reserves for the volume on which the catalog resides. Otherwise, you will<br />
break the catalog. For more information, see z/<strong>OS</strong> MVS Planning: Global Resource<br />
Serialization, SA22-7600.<br />
6.20 F CATALOG,REPORT,PERFORMANCE command<br />
The F CATALOG,REPORT,PERFORMANCE command can be used to examine:
► Certain events that occur in the catalog address space
► These events represent points at which catalog code calls some function outside of the
catalog component, such as enqueues, I/O, or allocations
► All such events are tracked, except for lock manager requests and GETMAIN/FREEMAIN
activity

Figure 6-33 F CATALOG,REPORT,PERFORMANCE command
F CATALOG,REPORT,PERFORMANCE command<br />
You can use the F CATALOG command to list information about catalogs currently allocated to<br />
the catalog address space. Sometimes you need this information so that you can use another<br />
MODIFY command to close or otherwise manipulate a catalog in cache.<br />
The command displays information about the performance of specific events that catalog processing invokes. Each line shows the number of times (nnn) that event has occurred since IPL or since the statistics were last reset with F CATALOG,REPORT,PERFORMANCE(RESET), and the average time for each occurrence (nnn.nnn). The unit of measure for the average time (unit) is milliseconds (MSEC), seconds (SEC), or hours, minutes, and seconds (hh:mm:ss.th).
Note: Other forms of the REPORT command provide information about various aspects of
the catalog address space.<br />
The F CATALOG,REPORT,CACHE command also provides rich information about the use of catalog buffering. It lists general information about catalog cache status for all catalogs currently active in the catalog address space. The report shows information useful in evaluating the catalog cache performance for the listed catalogs.
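The report commands described above, including the reset of the performance statistics, can be entered from a console:

```
F CATALOG,REPORT,PERFORMANCE
F CATALOG,REPORT,PERFORMANCE(RESET)
F CATALOG,REPORT,CACHE
```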
Chapter 6. Catalogs 367
Catalog report output<br />
This MODIFY command is very important for performance analysis. Become familiar with the meaning of each counter to understand what can be done to improve catalog performance. The counters can be zeroed by using the RESET parameter. The F CATALOG,REPORT,PERFORMANCE command can be used to examine certain events that occur in the catalog address space. These events represent points at which catalog code calls a function outside of the catalog component, such as enqueues, I/O, or allocation. All such events are tracked, except for lock
manager requests and GETMAIN/FREEMAIN activity. Figure 6-34 shows a catalog<br />
performance report.<br />
F CATALOG,REPORT,PERFORMANCE<br />
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE<br />
IEC359I CATALOG PERFORMANCE REPORT<br />
*CAS***************************************************<br />
* Statistics since 23:04:21.14 on 01/25/2010 *<br />
* -----CATALOG EVENT---- --COUNT-- ---AVERAGE--- *<br />
* Entries to Catalog 607,333 3.632 MSEC *<br />
* BCS ENQ Shr Sys 651,445 0.055 MSEC *<br />
* BCS ENQ Excl Sys 302 0.062 MSEC *<br />
* BCS DEQ 1,051K 0.031 MSEC *<br />
* VVDS RESERVE CI 294,503 0.038 MSEC *<br />
* VVDS DEQ CI 294,503 0.042 MSEC *<br />
* VVDS RESERVE Shr 1,482K 0.045 MSEC *<br />
* VVDS RESERVE Excl 113 0.108 MSEC *<br />
* VVDS DEQ 1,482K 0.040 MSEC *<br />
* SPHERE ENQ Excl Sys 49 0.045 MSEC *<br />
* SPHERE DEQ 49 0.033 MSEC *<br />
* CAXWA ENQ Shr 144 0.006 MSEC *<br />
* CAXWA DEQ 144 0.531 MSEC *<br />
* VDSPM ENQ 651,816 0.005 MSEC *<br />
* VDSPM DEQ 651,816 0.005 MSEC *<br />
* BCS Get 63,848 0.095 MSEC *<br />
* BCS Put 24 0.597 MSEC *<br />
* BCS Erase 11 0.553 MSEC *<br />
* VVDS I/O 1,769K 0.625 MSEC *<br />
* VLF Delete Minor 1 0.019 MSEC *<br />
* VLF Define Major 84 0.003 MSEC *<br />
* VLF Identify 8,172 0.001 MSEC *<br />
* RMM Tape Exit 24 0.000 MSEC *<br />
* OEM Tape Exit 24 0.000 MSEC *<br />
* BCS Allocate 142 15.751 MSEC *<br />
* SMF Write 106,367 0.043 MSEC *<br />
* IXLCONN 2 107.868 MSEC *<br />
* IXLCACHE Read 2 0.035 MSEC *<br />
* MVS Allocate 116 19.159 MSEC *<br />
* Capture UCB 39 0.008 MSEC *<br />
* SMS Active Config 2 0.448 MSEC *<br />
* RACROUTE Auth 24,793 0.080 MSEC *<br />
* RACROUTE Define 7 0.066 MSEC *<br />
* Obtain QuiesceLatch 606,919 0.001 MSEC *<br />
* ENQ SYSZPCCB 27,980 0.005 MSEC *<br />
* DEQ SYSZPCCB 27,980 0.003 MSEC *<br />
* Release QuiesceLatch 606,919 0.000 MSEC *<br />
* Capture to Actual 149 0.014 MSEC *<br />
*CAS***************************************************<br />
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED<br />
Figure 6-34 Catalog performance report<br />
6.21 Catalog address space (CAS)<br />
Figure 6-35 Catalog address space (CAS)
The catalog address space<br />
Catalog functions are performed in the catalog address space (CAS). The job name of the catalog address space is CATALOG.
As soon as a user requests a catalog function (for example, to locate or define a data set), the CAS gets control to handle the request. When it has finished, it returns the requested data to the user. A catalog task that handles a single user request is called a service task; a service task is assigned to each user request. The minimum number of available service tasks is specified in the SYSCATxx member of SYS1.NUCLEUS (or the LOADxx member of SYS1.PARMLIB). A table called the CRT keeps track of these service tasks.
The CAS contains all information necessary to handle a catalog request, such as control block information about all open catalogs, alias tables, and buffered BCS records.
During the initialization of an MVS system, all user catalog names identified in the master
catalog, their aliases, and their associated volume serial numbers are placed in tables in<br />
CAS.<br />
You can use the MODIFY CATALOG operator command to work with the catalog address space.<br />
See also 6.22, “Working with the catalog address space” on page 371.<br />
Since z/OS 1.8, the maximum number of parallel catalog requests is 999, as defined in the
SYSCAT parmlib member. Previously it was 180.<br />
Restarting the catalog address space<br />
Restart the CAS only as a final option, when an IPL is your only other choice. A system failure caused by catalogs, or a CAS storage shortage due to FREEMAIN failures, might require you to use MODIFY CATALOG,RESTART to restart CAS in a new address space.
Never use RESTART to refresh catalog or VVDS control blocks or to change catalog<br />
characteristics. Restarting CAS is a drastic procedure, and if CAS cannot restart, you will<br />
have to IPL the system.<br />
When you issue MODIFY CATALOG,RESTART, the CAS mother task is abended with abend code<br />
81A, and any catalog requests in process at the time are redriven.<br />
The restart of CAS in a new address space is transparent to all users. However, even when all requests are redriven successfully and receive a return code of zero (0), the system might
produce indicative dumps. There is no way to suppress these indicative dumps.<br />
Since z/OS 1.6, the F CATALOG command has new options:
TAKEDUMP This option causes the CAS to issue an SVCDUMP using the proper<br />
options to ensure that all data needed for diagnosis is available.<br />
RESTART This option prompts the operator for additional information with the<br />
following messages:<br />
► IEC363D IS THIS RESTART RELATED TO AN EXISTING CATALOG PROBLEM<br />
(Y OR N)?<br />
If the response to message IEC363D is N, the restart continues; if the<br />
response is Y, another prompt is issued.<br />
► IEC364D HAS AN SVC DUMP OF THE CATALOG ADDRESS SPACE ALREADY<br />
BEEN TAKEN (Y OR N)?
6.22 Working with the catalog address space<br />
Use the MODIFY CATALOG command<br />
To list information such as:<br />
Settings<br />
Catalogs currently open in the CAS<br />
Performance statistics<br />
Cache statistics<br />
Service tasks<br />
Module information<br />
To interact with the CAS by:<br />
Changing settings<br />
Ending or abending service tasks<br />
Restarting the CAS<br />
Closing or unallocating catalogs<br />
Figure 6-36 Working with the CAS<br />
Working with the catalog address space<br />
You can use the MODIFY CATALOG command to extract information from the CAS and to interact with the CAS. This command can be used in many variations. In this section, we provide an overview of parameters to be aware of when maintaining your catalog environment. This command is further discussed in 6.23, “Fixing temporary catalog problems” on page 373. For a discussion of the entire functionality of the MODIFY CATALOG command, see z/OS DFSMS: Managing Catalogs, SC26-7409.
Examples of the MODIFY CATALOG command:
► MODIFY CATALOG,REPORT<br />
The catalog report lists information about the CAS, such as the service level, the catalog address space ID, service task limits, and more. Since z/OS 1.7, this command generates message IEC392I, which identifies the top three holders of CATALOG service tasks, possibly indicating a lockup.
► MODIFY CATALOG,OPEN<br />
This command lists all catalogs that are currently open in the CAS. It shows whether the catalog is locked or shared, and the type of buffer used for the catalog. A catalog is opened after an IPL or catalog restart when it is referenced for the first time. It remains open until it is manually closed by the MODIFY CATALOG command or it is closed because the maximum number of open catalogs has been reached.
► MODIFY CATALOG,LIST<br />
The LIST command shows all service tasks that are currently active handling a user request. Normally, the active phase of a service task is of short duration, so no tasks are listed. This command helps you identify tasks that might be the cause of catalog problems when performance slows down.
► MODIFY CATALOG,DUMPON(rc,rsn,mm,cnt)<br />
Use this command to get a dump of the catalog address space when a specific catalog error occurs. This catalog error is identified by the catalog return code (rc), the reason code (rsn), and the name of the module that sets the error (mm). You can also specify, in the cnt parameter, for how many of the rc/rsn combinations a dump is to be taken; the default is one. The module identifier corresponds to the last two characters of the catalog module name. For example, the module identifier is A3 for IGG0CLA3. You can substitute two asterisks (**) for the module name if you do not know it or do not care about it.
► MODIFY CATALOG,ENTRY(modulename)<br />
This command lists the storage address, FMID, and level of a catalog module (CSECT). If you do not specify a module name, all catalog CSECTs are listed.
► MODIFY CATALOG,REPORT,PERFORMANCE<br />
The output of this command shows significant catalog performance numbers, such as the number and duration of ENQs and DEQs for the BCS and VVDS. Use the command to identify performance problems.
► MODIFY CATALOG,REPORT,CACHE<br />
The cache report lists the cache type and statistics for each catalog that is open in the CAS.
► MODIFY CATALOG,RESTART<br />
Use this command to restart the catalog address space. Take this action only in error<br />
situations when the other option is an IPL. See also 6.21, “Catalog address space (CAS)”<br />
on page 369, for more information about this topic.<br />
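A console session that exercises several of these variations might look like the following sketch; the DUMPON return code, reason code, and count values are illustrative only:

```
F CATALOG,REPORT
F CATALOG,OPEN
F CATALOG,LIST
F CATALOG,ENTRY(IGG0CLA3)
F CATALOG,DUMPON(38,10,**,2)
F CATALOG,REPORT,PERFORMANCE
```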
6.23 Fixing temporary catalog problems<br />
Fixing temporary catalog problems involves:
Rebuilding information in BCS or VVDS control blocks
Closing or unallocating a BCS
Closing or unallocating a VVDS
Determining tasks that cause performance problems
Displaying GRS information about catalog resources and devices
Displaying information about catalog service tasks
Ending service tasks
Figure 6-37 Fixing temporary catalog problems
Fixing temporary catalog problems<br />
This section explains how to rebuild information in the catalog address space, as well as how<br />
to obtain information about and recover from performance slowdowns.<br />
Rebuild information about a BCS or VVDS<br />
Occasionally, the control blocks for a catalog kept in the catalog address space might be<br />
damaged. You might think the catalog is damaged and in need <strong>of</strong> recovery, when only the<br />
control blocks need to be rebuilt. If the catalog appears damaged, try rebuilding the control<br />
blocks first. If the problem persists, recover the catalog.<br />
Use the following commands to close or unallocate a BCS or VVDS in the catalog address<br />
space. The next access to the BCS or VVDS reopens it and rebuilds the control blocks.<br />
► MODIFY CATALOG,CLOSE(catalogname) - Closes the specified catalog but leaves it allocated.
► MODIFY CATALOG,UNALLOCATE(catalogname) - Unallocates a catalog; if you do not specify a catalog name, then all catalogs are unallocated.
► MODIFY CATALOG,VCLOSE(volser) - Closes the VVDS for the specified volser.
► MODIFY CATALOG,VUNALLOCATE - Unallocates all VVDSs; you cannot specify a volser, so try to use VCLOSE first.
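For example, to rebuild the control blocks for a suspect catalog and for the VVDS on one volume (the catalog name and volser here are hypothetical):

```
F CATALOG,CLOSE(UCAT.PROD01)
F CATALOG,VCLOSE(SMS001)
```

The next reference to the catalog or VVDS reopens it and rebuilds the control blocks.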
Recover from performance slowdowns
Sometimes the performance of the catalog address space slows down, and catalog requests take a long time or even hang. There can be various reasons for such situations. For example, if a volume on which a catalog resides is reserved by another system, all requests from your system to that catalog wait until the volume is released.
Generally, the catalog component uses two resources for serialization:<br />
► SYSIGGV2 to serialize on the BCS<br />
► SYSZVVDS to serialize on the VVDS<br />
Delays or hangs can occur if the catalog needs one of these resources and it is already held by someone else, for example, by the CAS of another system. You can use the following commands to display global resource serialization (GRS) data:
► D GRS,C - Displays GRS contention data for all resources, who is holding a resource, and<br />
who is waiting.<br />
► D GRS,RES=(resourcename) - Displays information for a specific resource.<br />
► D GRS,DEV=devicenumber - Displays information about a specific device, such as whether it<br />
is reserved by the system.<br />
Route these commands to all systems in the sysplex to get an overview of hang situations.
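For example, to check contention across the sysplex (the generic resource name and the device number shown are illustrative; the RO command routes a command to all systems):

```
RO *ALL,D GRS,C
D GRS,RES=(SYSIGGV2,*)
D GRS,DEV=0A30
```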
When you have identified a catalog address space holding a resource for a long time, or when the GRS output does not show you anything but you still have catalog problems, you can use the following command to get detailed information about the catalog service tasks:
► MODIFY CATALOG,LIST - Lists the currently active service tasks, their task IDs, duration, and<br />
the job name for which the task is handling the request.<br />
Watch for tasks with a long duration. You can obtain detailed information about a specific task by running the following command for a specific task ID:
► MODIFY CATALOG,LISTJ(taskid),DETAIL - Shows detailed information about a service task,<br />
for example if it is waiting for the completion <strong>of</strong> an ENQ.<br />
If you identify a long-running task that is in a deadlock situation with another task (on another<br />
system), you can end and redrive the task to resolve the lockout. The following commands<br />
help you to end a catalog service task:<br />
► MODIFY CATALOG,END(taskid),REDRIVE - End a service task and redrive it.<br />
► MODIFY CATALOG,END(taskid),NOREDRIVE - Permanently end the task without redriving.<br />
► MODIFY CATALOG,ABEND(taskid) - Abnormally end a task which cannot be stopped by<br />
using the END parameter.<br />
You can use the FORCE parameter for these commands if the address space that the service<br />
task is operating on behalf <strong>of</strong> has ended abnormally. Use this parameter only in this case.<br />
You can also try to end the job for which the catalog task is processing a request.<br />
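The slowdown-recovery flow described above might look like this sketch, where task ID 00A4 is hypothetical:

```
F CATALOG,LIST
F CATALOG,LISTJ(00A4),DETAIL
F CATALOG,END(00A4),REDRIVE
```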
For more information about the MODIFY CATALOG command and fixing temporary catalog<br />
problems, see z/OS DFSMS: Managing Catalogs, SC26-7409.
6.24 Enhanced catalog sharing<br />
Conditions for ECS:
ECSHARING in IDCAMS DEFINE/ALTER
Active connection to ECS CF structure
Activate ECS through MVS command:
MODIFY CATALOG,ECSHR(AUTOADD)
Figure 6-38 DFSMS enhanced catalog sharing
Enhanced catalog sharing (ECS)<br />
In 6.9, “Sharing catalogs across systems” on page 343, sharing catalogs across multiple systems is discussed. A catalog uses a specific VVR to serialize access from multiple systems. Reading serialization information from the VVR on DASD causes a significant amount of I/O overhead. However, it is better to have this VVR than to discard the catalog buffers when in shared catalog mode. This way of sharing is called VVDS mode.
Most of the overhead associated with shared catalogs is eliminated if you use enhanced catalog sharing (ECS). ECS uses a cache Coupling Facility structure to keep the special VVR, and the structure (as defined in the CFRM policy) also keeps a copy of updated records. No I/O is necessary to read the catalog VVR to verify updates, and modifications are likewise kept in the Coupling Facility structure, avoiding further I/O. ECS saves about 50% in elapsed time and greatly reduces ENQs and reserves.
Implementing enhanced catalog sharing<br />
Perform these steps to implement ECS:<br />
1. Define a Coupling Facility cache structure with the name SYSIGGCAS_ECS in the CFRM<br />
couple data set and activate this CFRM policy.<br />
This action connects all ECS-eligible systems to the ECS structure.<br />
2. Define or alter your existing catalogs with the attribute ECSHARING using the IDCAMS<br />
DEFINE or ALTER commands.<br />
The ECSHARING attribute makes a catalog eligible for sharing using the ECS protocol,<br />
as opposed to the VVDS/VVR protocol. The catalog can still be used in VVDS mode.<br />
3. Activate all eligible catalogs for ECS sharing.<br />
You can use the command MODIFY CATALOG,ECSHR(AUTOADD) on one system to activate all<br />
ECS-eligible catalogs throughout the sysplex for ECS sharing. They are automatically<br />
activated at their next reference. You can manually add a catalog to use ECS by running<br />
the command MODIFY CATALOG,ECSHR(ENABLE,catname), where catname is the name of the
catalog you want to add.<br />
Only catalogs that were added are shared in ECS mode. The command MODIFY<br />
CATALOG,ECSHR(STATUS) shows you the ECS status for each catalog, as well as whether it is<br />
eligible and already activated.<br />
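As an illustration of steps 2 and 3, an IDCAMS job followed by console commands might look like the following; the job card and the user catalog name UCAT.PROD01 are hypothetical:

```
//ALTECS   JOB  (ACCT),'ENABLE ECS',CLASS=A,MSGCLASS=H
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  ALTER UCAT.PROD01 ECSHARING
/*
```

After the catalogs are made eligible, F CATALOG,ECSHR(AUTOADD) activates them sysplex-wide, and F CATALOG,ECSHR(STATUS) verifies the result.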
Restrictions for ECS mode usage<br />
The following restrictions apply to ECS mode usage:<br />
► You cannot use ECS mode from one system and VVDS mode from another system<br />
simultaneously to share a catalog. You will get an error message if you try this.<br />
Important: If you attempt to use a catalog that is currently ECS-active from a system<br />
outside the sysplex, the request might break the catalog.<br />
► No more than 1024 catalogs can currently be shared using ECS from a single system.<br />
► All systems sharing the catalog in ECS mode must have connectivity to the same<br />
Coupling Facility, and must be in the same global resource serialization (GRS) complex.<br />
► When you use catalogs in ECS mode, convert the resource SYSIGGV2 to a SYSTEMS<br />
enqueue. Otherwise, the catalogs in ECS mode will be damaged.<br />
For more information about ECS, see z/OS DFSMS: Managing Catalogs, SC26-7409. For information about defining Coupling Facility structures, see z/OS MVS Setting Up a Sysplex,
SA22-7625.<br />
Chapter 7. DFSMS Transactional VSAM<br />
Services<br />
DFSMS Transactional VSAM Services (DFSMStvs) is an enhancement to VSAM RLS access<br />
that enables multiple batch update jobs and CICS to share access to the same data sets.<br />
DFSMStvs provides two-phase commit and backout protocols, as well as backout logging and<br />
forward recovery logging. DFSMStvs provides transactional recovery directly within VSAM.<br />
As an extension of VSAM RLS, DFSMStvs enables any job or application that is designed for
data sharing to read-share or write-share VSAM recoverable data sets. VSAM RLS provides<br />
a server for sharing VSAM data sets in a sysplex. VSAM RLS uses Coupling Facility-based<br />
locking and data caching to provide sysplex-scope locking and data access integrity.<br />
DFSMStvs adds logging, commit, and backout processing.<br />
To understand DFSMStvs, it is necessary to first review base VSAM information and VSAM<br />
record-level sharing (RLS).<br />
In this chapter we cover the following topics:<br />
► Review of base VSAM information and CICS concepts
► Introduction to VSAM RLS<br />
► Introduction to DFSMStvs<br />
© Copyright IBM Corp. 2010. All rights reserved. 377
7.1 VSAM share options<br />
Share options are an attribute of the data set
SHAREOPTIONS(crossregion,crosssystem)<br />
SHAREOPTIONS(1,x)<br />
Figure 7-1 VSAM share options<br />
Share options as a data set attribute<br />
The share options of a VSAM data set are specified as a parameter of the IDCAMS DEFINE CLUSTER command that creates the data set. They specify how a component or cluster can be shared among users.
SHAREOPTIONS (crossregion,crosssystem)<br />
The cross-region share options specify the amount of sharing allowed among regions within
the same system or multiple systems. Cross-system share options specify how the data set is<br />
shared among systems. Use global resource serialization (GRS) or a similar product to<br />
perform the serialization.<br />
SHAREOPTIONS (1,x)<br />
The data set can be shared by any number of users for read access (open for input), or it can be accessed by only one user for read/write access (open for output). If the data set is open for output by one user, a read or read/write request by another user will fail. With this option, VSAM ensures complete data integrity for the data set. When the data set is already open for RLS processing, any request to open the data set for non-RLS access will fail.
SHAREOPTIONS (2,x)<br />
The data set can be shared by one user for read/write access, and by any number of users for read access. If the data set is open for output by one user, another open for output request will fail, but a request for read access will succeed. With this option, VSAM ensures write
integrity. If the data set is open for RLS processing, non-RLS access for read is allowed.<br />
VSAM provides full read and write integrity for its RLS users, but no read integrity for non-RLS<br />
access.<br />
SHAREOPTIONS (3,x)<br />
The data set can be opened by any number of users for read and write requests. VSAM does not ensure any data integrity. It is the responsibility of the users to maintain data integrity by using enqueue and dequeue macros. This setting does not allow any type of non-RLS access while the data set is open for RLS processing.
For more information about VSAM share options, see z/OS DFSMS: Using Data Sets, SC26-7410.
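The share options are set when the data set is defined. As a sketch, the following IDCAMS statement (the data set name and volume are hypothetical) creates a KSDS with SHAREOPTIONS(2,3):

```
  DEFINE CLUSTER (NAME(HLQ.SAMPLE.KSDS) -
         INDEXED -
         KEYS(8 0) -
         RECORDSIZE(80 100) -
         SHAREOPTIONS(2 3) -
         CYLINDERS(1 1) -
         VOLUMES(VOL001))
```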
Chapter 7. DFSMS Transactional VSAM Services 379
7.2 Base VSAM buffering<br />
Three types of base VSAM buffering
NSR - non-shared resources<br />
LSR - local shared resources<br />
GSR - global shared resources<br />
Specified in the VSAM ACB macro:<br />
MACRF=(NSR/LSR/GSR)<br />
Figure 7-2 Base VSAM buffering<br />
Three types of base VSAM buffering
Before VSAM RLS, only three buffering techniques were available to the user who opens the data set.
NSR - non-shared resources<br />
For NSR, data buffers belong to a particular request parameter list (RPL). When an<br />
application uses an RPL (for example, for a direct GET request), VSAM manages the buffers<br />
as follows:<br />
1. For record 1000, VSAM locates the CI containing the record and reads it into the local<br />
buffer in private storage.<br />
2. If the next GET request is for record 5000, which is in a separate CI from record 1000,<br />
VSAM overwrites the buffer with the new CI.<br />
3. Another GET request for record 1001, which is in the same CI as record 1000, causes<br />
another I/O request for the CI to read it into the buffer, because it had been overlaid by the<br />
second request for record 5000.<br />
LSR - local shared resources<br />
For LSR, data buffers are managed so that the buffers will not be overlaid. Before opening the<br />
data set, the user builds a resource pool in private storage using the BLDVRP macro. The<br />
LSR processing is as follows:<br />
1. A GET request for record 1000 reads in the CI containing the record in one buffer of the
buffer pool.<br />
2. The next GET request for record 5000, which is in a separate CI, will read in the CI in<br />
another, separate buffer in the same buffer pool.<br />
3. A third GET request for record 1001 can be satisfied by using the CI that was read in for<br />
the request for record 1000.<br />
GSR - global shared resources<br />
GSR uses the same concept as LSR. The only difference is that with GSR, the buffer pool and VSAM control blocks are built in common storage and can be accessed by any address space in the system.
For more information about VSAM buffering techniques, refer to 4.44, “VSAM: Buffering modes” on page 177.
MACRF=(NSR/LSR/GSR)<br />
The access method control block (ACB) describes an open VSAM data set. A subparameter of the ACB macro is MACRF, in which you can specify the buffering technique to be used by VSAM. For LSR and GSR, you need to run the BLDVRP macro before opening the data set to create the resource pool.
For information about VSAM macros, see z/OS DFSMS: Macro Instructions for Data Sets,
SC26-7408.<br />
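In assembler terms, the combination described above might be sketched as follows; the names, buffer sizes, and string counts are illustrative, and the operands should be checked against z/OS DFSMS: Macro Instructions for Data Sets:

```
         BLDVRP BUFFERS=(4096(8)),KEYLEN=8,STRNO=4,TYPE=LSR
MYACB    ACB   AM=VSAM,DDNAME=MYVSAM,MACRF=(KEY,DIR,IN,LSR)
         OPEN  (MYACB)
```

BLDVRP builds the LSR resource pool before the data set is opened; the ACB then requests keyed direct input using that shared pool.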
7.3 Base VSAM locking<br />
Figure 7-3 Example of LSR serialization
Serialization of base VSAM
Base VSAM serializes on a CI level. Multiple users attempting to access the same CI to read<br />
different records either defer on the CI or are returned an exclusive control conflict error by<br />
VSAM. Large CIs with many records per CI, or applications that repeatedly access the same<br />
CI, can have a performance impact due to retrying <strong>of</strong> exclusive control conflict errors or<br />
waiting on a deferred request.<br />
Example of VSAM LSR serialization
In the example in Figure 7-3, two RPLs need to read and write different records in the same<br />
CI. RPL_1 obtains access to the buffer containing the CI first and locks it for update<br />
processing against Record B. RPL_2 fails with an exclusive control conflict error on this CI<br />
although it needs to update a different record.<br />
As Figure 7-3 shows, the scope of this serialization is a single LSR buffer pool, the granularity is the control interval, and ownership belongs to the RPL.
7.4 CICS function shipping before VSAM RLS<br />
AOR = Application Owning Region
FOR = File Owning Region
Figure 7-4 CICS function shipping before VSAM RLS
CICS function shipping<br />
Prior to VSAM RLS, a customer information control system (CICS) VSAM data set was<br />
owned and directly accessed by one single CICS. Shared access across CICS Application<br />
Owning Regions (AORs) to a single VSAM data set was provided by CICS function shipping.<br />
With function shipping, one CICS File Owning Region (FOR) accesses the VSAM data sets<br />
on behalf of other CICS regions (see Figure 7-4).
Problems<br />
There are several problems with this kind of CICS configuration:
► The CICS FOR is a single point of failure.
► Multiple-system performance is not acceptable.
► There is a lack of scalability.
Over time the FORs became a bottleneck because CICS environments became increasingly<br />
complex. CICS required a solution to have direct shared access to VSAM data sets from<br />
multiple CICSs.<br />
Chapter 7. DFSMS Transactional VSAM Services 383
7.5 VSAM record-level sharing introduction<br />
Figure 7-5 Parallel Sysplex CICS with VSAM RLS (CICS AORs on System 1 through
System n access a shared VSAM data set through a VSAM RLS instance on each system
and the coupling facility)
Why VSAM record-level sharing (RLS)<br />
The solution for the problems inherent in CICS function shipping is VSAM record-level<br />
sharing. This is a major extension to base VSAM, and although it was designed for use by<br />
CICS, it can be used by any application. It provides an environment where multiple CICSs can<br />
directly access a shared VSAM data set (Figure 7-5).<br />
VSAM record-level sharing (RLS) is a method of access to your existing VSAM files that
provides full read and write integrity at the record level to any number of users in your Parallel
Sysplex.
Benefits of VSAM RLS
The benefits of VSAM RLS are:
► Enhances cross-system data sharing - scope is sysplex<br />
► Improves performance and availability in CICS and also non-CICS VSAM environments<br />
► Provides data protection after a system failure<br />
► Provides automation for data recovery<br />
► Provides full read/write integrity to your existing VSAM files; the user does not need to<br />
serialize using ENQ/DEQ macros<br />
► Allows CICS to register as a recoverable subsystem, which will automate recovery<br />
processing as well as protect the data records to be recovered<br />
7.6 VSAM RLS overview<br />
► VSAM RLS enables multiple address spaces on multiple systems to access recoverable
VSAM data sets at the same time
► VSAM RLS involves support from multiple products
► The level of sharing is determined by whether the data set is recoverable or not
► The Coupling Facility is used for sharing
► Supported data set types: key-sequenced (KSDS), entry-sequenced (ESDS),
relative-record (RRDS), and variable-length relative-record (VRRDS)
Figure 7-6 VSAM RLS overview
Multiple access on recoverable data sets<br />
VSAM RLS is a data set access mode that enables multiple address spaces, CICS<br />
application-owning regions on multiple systems, and batch jobs to access recoverable VSAM<br />
data sets at the same time.<br />
With VSAM RLS, multiple CICS systems can directly access a shared VSAM data set,<br />
eliminating the need to ship functions between the application-owning regions and file-owning<br />
regions. CICS provides the logging, commit, and backout functions for VSAM recoverable<br />
data sets. VSAM RLS provides record-level serialization and cross-system caching. CICSVR<br />
provides a forward recovery utility.<br />
Multiple products involved<br />
VSAM RLS processing involves support from multiple products:<br />
► CICS Transaction Server<br />
► CICS VSAM Recovery (CICSVR)<br />
► DFSMS<br />
Level of sharing
The level of sharing that is allowed between applications is determined by whether or not a
data set is recoverable; for example:<br />
► Both CICS and non-CICS jobs can have concurrent read or write access to<br />
nonrecoverable data sets. There is no coordination between CICS and non-CICS, so data<br />
integrity can be compromised.<br />
► Non-CICS jobs can have read-only access to recoverable data sets concurrently with<br />
CICS jobs, which can have read or write access.<br />
Coupling Facility overview<br />
The Coupling Facility (CF) is a shareable storage medium. It is licensed internal code (LIC)<br />
running in a special type of PR/SM logical partition (LPAR) in certain zSeries and S/390
processors. It can be shared by the systems in one sysplex only. A CF makes data sharing<br />
possible by allowing data to be accessed throughout a sysplex with assurance that the data<br />
will not be corrupted and that the data will be consistent among all sharing users.<br />
VSAM RLS uses a Coupling Facility to perform data-set-level locking, record locking, and<br />
data caching. VSAM RLS uses the conditional write and cross-invalidate functions of the
Coupling Facility cache structure, thereby avoiding the need for control interval (CI) level<br />
locking.<br />
VSAM RLS uses the Coupling Facility caches as store-through caches. When a control<br />
interval of data is written, it is written to both the Coupling Facility cache and the direct access
storage device (DASD). This ensures that problems occurring with a Coupling Facility cache<br />
do not result in the loss of VSAM data.
Supported data set types<br />
VSAM RLS supports access to these types of data sets:
► Key-sequenced data set (KSDS)<br />
► Entry-sequenced data set (ESDS)<br />
► Relative-record data set (RRDS)<br />
► Variable-length relative-record data set cluster (VRRDS)<br />
VSAM RLS also supports access to a data set through an alternate index, but it does not<br />
support opening an alternate index directly in RLS mode. Also, VSAM RLS does not support<br />
access through an alternate index to data stored under z/OS UNIX System Services.
Extended format, extended addressability, and spanned data sets are supported with VSAM<br />
RLS. Compression is also supported.<br />
VSAM RLS does not support:<br />
► Linear data sets (LDS)<br />
► Keyrange data sets<br />
► KSDS with an imbedded index (defined with IMBED option)<br />
► Temporary data sets<br />
► Striped data sets<br />
► Catalogs and VVDSs<br />
Keyrange data sets and the IMBED attribute for a KSDS are obsolete. You cannot define new<br />
data sets as keyrange or with an imbedded index anymore. However, there still might be old<br />
data sets with these attributes in your installation.<br />
7.7 Data set sharing under VSAM RLS<br />
► Share options are largely ignored under VSAM RLS
► Exception: SHAREOPTIONS(2,x)
– One user can have the data set open for non-RLS read/write access and any number
of users for non-RLS read
– Or any number of users can have the data set open for RLS read/write and any
number of users for non-RLS read
► Non-CICS access for data sets opened by CICS in RLS mode
– Allowed for non-recoverable data sets
– Not allowed for recoverable data sets
Figure 7-7 Data set sharing under VSAM RLS
Share options are largely ignored under VSAM RLS<br />
The VSAM share options specification applies only when non-RLS access such as NSR,
LSR, or GSR is used. The share options are ignored when the data set is open for RLS
access. Record-level sharing always assumes multiple readers and writers of the data set,
and VSAM RLS ensures full data integrity. While a data set is open for RLS access, non-RLS
requests to open the data set fail.
Exception: SHAREOPTIONS(2,x)<br />
For non-RLS access, SHAREOPTIONS(2,x) is handled as usual: one user can have the
data set open for read/write access and multiple users can have it open for read access only.
VSAM does not provide data integrity for the readers.
If the data set is open for RLS access, non-RLS opens for read are still possible. This is the
only share option setting for which a non-RLS request to open the data set does not fail while
the data set is already open for RLS processing. VSAM does not provide data integrity for the
non-RLS readers.
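For reference, share options are an attribute of the data set, set at DEFINE time or changed later with IDCAMS ALTER. The following is only an illustrative sketch; the data set name is an assumption:

```jcl
//ALTSHR   JOB (ACCT),'SET SHAREOPTIONS',CLASS=A,MSGCLASS=H
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  /* Cross-region 2, cross-system 3: one non-RLS writer OR */
  /* many non-RLS readers; non-RLS reads remain possible   */
  /* while the data set is open for RLS processing.        */
  ALTER PROD.CUSTOMER.KSDS SHAREOPTIONS(2 3)
/*
```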
Non-CICS access<br />
RLS access from batch jobs to data sets that are open by CICS depends on whether the data<br />
set is recoverable or not. For recoverable data sets, non-CICS access from other applications
(that do not act as a recoverable resource manager) is not allowed.
See 7.10, “VSAM RLS/CICS data set recovery” on page 392 for details.<br />
7.8 Buffering under VSAM RLS<br />
Figure 7-8 Buffering under VSAM RLS (on each system, CICS read/write users and batch
read-only users open the data set with MACRF=RLS; the VSAM RLS server, also known as
SMSVSAM, keeps its buffer pool in an SMSVSAM data space and uses the coupling facility
for coherency)
New VSAM buffering technique MACRF=RLS<br />
We have already discussed NSR, LSR, and GSR (refer to 7.2, “Base VSAM buffering” on
page 380). RLS is another method of buffering, which you specify in the MACRF parameter
of the ACB macro. RLS and NSR/LSR/GSR are mutually exclusive.
Buffer pools in the data space
Unlike with NSR, LSR, or GSR, the VSAM buffers reside in a data space, not in private or
global storage of a user address space. Each image in the sysplex has one large local buffer
pool in the data space.
The first request for a record after the data set is opened for RLS processing causes an I/O
operation to read in the CI that contains this record. A copy of the CI is stored in the cache
structure of the Coupling Facility and in the buffer pool in the data space.
Buffer coherency<br />
Buffer coherency is maintained through the use of Coupling Facility (CF) cache structures
and the XCF cross-invalidation function. For the example in Figure 7-8, that means:<br />
1. System 1 opens the VSAM data set for read/write processing.
2. System 1 reads in CI1 and CI3 from DASD; both CIs are stored in the cache structure in
the Coupling Facility.
3. System 2 opens the data set for read processing.
4. System 2 needs CI1 and CI4; CI1 is read from the CF cache, CI4 from DASD.
5. System 1 updates a record in CI1 and CI3; both copies of these CIs in the CF are
updated.
6. XCF notices the change of these two CIs and invalidates the copy of CI1 for System 2.
7. System 2 needs another record from CI1; it notices that its buffer was invalidated and
reads in a new copy of CI1 from the CF.
For further information about cross-invalidation, see z/OS MVS Programming: Sysplex
Services Guide, SA22-7617.
The VSAM RLS Coupling Facility structures are discussed in more detail in 7.14, “Coupling<br />
Facility structures for RLS sharing” on page 397.<br />
7.9 VSAM RLS locking<br />
Figure 7-9 Example of VSAM RLS serialization (scope = sysplex; granularity = record;
ownership = CICS transaction or batch job)
VSAM RLS serialization
7.3, “Base VSAM locking” on page 382 presents an example of LSR serialization. The
granularity under LSR, NSR, or GSR is a control interval, whereas VSAM RLS serializes at
the record level. With VSAM RLS, it is possible to concurrently update separate records in the
same control interval. Record locks for UPDATE are always exclusive. Record locks for read
depend on the level of read integrity.
Levels of read integrity
There are three levels of read integrity, as explained here:
► NRI (no read integrity)<br />
This level tells VSAM not to obtain a record lock on the record accessed by a GET or<br />
POINT request. This avoids the overhead of record locking. This is sometimes referred to
as a dirty read because the reader might see an uncommitted change made by another<br />
transaction.<br />
Even with this option specified, VSAM RLS still performs buffer validity checking and<br />
refreshes the buffer when the buffer is invalid.<br />
► CR (consistent read)<br />
This level tells VSAM to obtain a shared lock on the record that is accessed by a GET or<br />
POINT request. It ensures that the reader does not see an uncommitted change made by<br />
another transaction. Instead, the GET or POINT request waits for the change to be<br />
committed or backed out. The request also waits for the exclusive lock on the record to be<br />
released.<br />
► CRE (consistent read explicit)<br />
This level has a meaning similar to that of CR, except that VSAM RLS holds the shared
lock on the record until the end of the unit of recovery, or unit of work. This option is only
available to CICS or DFSMStvs transactions. VSAM RLS does not understand
end-of-transaction for non-CICS or non-DFSMStvs usage.
The type of read integrity is specified either in the ACB macro or in the JCL DD statement:
► ACB RLSREAD={NRI|CR|CRE}
► //DD1 DD DSN=datasetname,RLS={NRI|CR|CRE}
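As a concrete sketch of the JCL form, the following batch job requests RLS access with consistent read. The job, program, and data set names are illustrative assumptions, not taken from this book:

```jcl
//RLSREAD  JOB (ACCT),'RLS CR READ',CLASS=A,MSGCLASS=H
//*-------------------------------------------------------------
//* MYREADER is a hypothetical user program that reads the KSDS.
//* RLS=CR on the DD statement requests RLS access with
//* consistent read: each GET obtains a shared record lock, so
//* uncommitted updates by other transactions are never seen.
//*-------------------------------------------------------------
//STEP1    EXEC PGM=MYREADER
//SYSPRINT DD  SYSOUT=*
//INFILE   DD  DSN=PROD.CUSTOMER.KSDS,DISP=SHR,RLS=CR
```

Coding RLS=NRI instead would read without record locks (a dirty read).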
Example situation<br />
In our example in Figure 7-9 on page 390 we have the following situation:<br />
1. CICS transaction Tran1 obtains an exclusive lock on Record B for update processing.<br />
2. Transaction Tran2 obtains an exclusive lock for update processing on Record E, which is in<br />
the same CI.<br />
3. Transaction Tran3 needs a shared lock on Record B for consistent read; it has to wait
until the exclusive lock held by Tran1 is released.
4. Transaction Tran4 does a dirty read (NRI); it does not have to wait, because no lock is
obtained in that case.
With NRI, Tran4 can read the record even though it is held exclusively by Tran1. There is no<br />
read integrity for Tran4.<br />
CF lock structure<br />
RLS locking is performed in the Coupling Facility through the use of a CF lock structure
(IGWLOCK00) and the XES locking services.<br />
Contention<br />
When contention occurs on a VSAM record, the request that encountered the contention<br />
waits for the contention to be removed. The lock manager provides deadlock detection. When<br />
a lock request is in deadlock, the request is rejected, resulting in the VSAM record<br />
management request completing with a deadlock error response.<br />
7.10 VSAM RLS/CICS data set recovery<br />
► Recoverable data sets: defined as LOG(UNDO) or LOG(ALL) in the catalog
– UNDO: backout logging performed by CICS
– ALL: both backout and forward recovery logging; LOG(ALL) data sets must also have
a LOGSTREAMID (forward recovery log) defined in the catalog
► Non-recoverable data sets: defined as LOG(NONE) in the catalog; no logging performed
by CICS
Figure 7-10 Recoverable data sets
Recoverable data set<br />
VSAM record-level sharing introduces a VSAM data set attribute called LOG. With this<br />
attribute a data set can be defined as recoverable or non-recoverable. A data set whose log<br />
parameter is undefined or NONE is considered non-recoverable. A data set whose log<br />
parameter is UNDO or ALL is considered recoverable. For recoverable data sets, a log of
changed records is maintained to commit and back out transaction changes to a data set.<br />
A data set is considered recoverable if the LOG attribute has one of the following values:
► UNDO<br />
The data set is backward recoverable. Changes made by a transaction that does not<br />
succeed (no commit was done) are backed out. CICS provides the transactional recovery.<br />
See also 7.11, “Transactional recovery” on page 394.<br />
► ALL<br />
The data set is both backward and forward recoverable. In addition to the logging and<br />
recovery functions provided for backout (transactional recovery), CICS records the image<br />
of changes to the data set after they are made. The forward recovery log records are
used by forward recovery programs and products such as CICS VSAM Recovery
(CICSVR) to reconstruct the data set in the event of hardware or software damage to the
data set. This is referred to as data set recovery. For LOG(ALL) data sets, both types of
recovery are provided: transactional recovery and data set recovery.
For LOG(ALL) you need to define a logstream in which changes to the data sets are<br />
logged.<br />
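To make this concrete, here is a hedged sketch of defining a forward-recoverable KSDS with IDCAMS. The cluster name, log stream name, and space values are illustrative assumptions, and LOG/LOGSTREAMID apply to SMS-managed data sets:

```jcl
//DEFRCV   JOB (ACCT),'DEFINE LOG ALL',CLASS=A,MSGCLASS=H
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  /* LOG(ALL) makes the KSDS backward AND forward recoverable; */
  /* LOGSTREAMID names the forward recovery log stream.        */
  DEFINE CLUSTER (NAME(PROD.CUSTOMER.KSDS)     -
         INDEXED KEYS(8 0) RECORDSIZE(100 200) -
         CYLINDERS(10 5)                       -
         LOG(ALL)                              -
         LOGSTREAMID(PROD.CUSTOMER.FWDLOG))
/*
```

An existing data set could instead be made backward recoverable only with, for example, ALTER PROD.CUSTOMER.KSDS LOG(UNDO).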
Non-recoverable data sets
A data set whose LOG parameter is undefined or NONE is considered non-recoverable.
Non-CICS access to recoverable and non-recoverable data sets<br />
VSAM RLS supports non-recoverable files. Non-recoverable means CICS does not do<br />
transactional recovery (logging, commit, backout). VSAM RLS provides record locking and file<br />
integrity across concurrently executing CICS and batch applications. Transactional recovery<br />
is not provided. This is because VSAM RLS does not provide undo logging and two-phase<br />
commit/backout support. Most transactions and batch jobs are not designed to use this form<br />
of data sharing.
Non-CICS read/write access for recoverable data sets that are open by CICS is not allowed.<br />
The recoverable attribute means that when the file is accessed in RLS mode, transactional<br />
recovery is provided. With RLS, the recovery is only provided when the access is through<br />
CICS file control, so RLS does not permit a batch (non-CICS) job to open a recoverable file<br />
for OUTPUT.<br />
Transactional recovery is described in 7.11, “Transactional recovery” on page 394.<br />
7.11 Transactional recovery<br />
Figure 7-11 contrasts a committed transaction with a failed transaction that is backed out.
The committed flow of Trans1 is:
Read Record 1 (lock Record 1, log Record 1)
Write Record 1'
Read Record 2 (lock Record 2, log Record 2)
Write Record 2'
Commit (update log, release locks)
Record 1 becomes Record 1'; Record 2 becomes Record 2'.
Figure 7-11 Transactional recovery
CICS transactional recovery for VSAM recoverable data sets<br />
During the life of a transaction, its changes to recoverable resources are not seen by other
transactions. The exception is the no-read integrity (NRI) option, with which you might see
uncommitted changes.
Exclusive locks that VSAM RLS holds on the modified records cause other transactions that<br />
have read-with-integrity requests and write requests for these records to wait. After the<br />
modifying transaction is committed or backed out, VSAM RLS releases the locks and the<br />
other transactions can access the records.<br />
If the transaction fails, its changes are backed out. This capability is called transactional<br />
recovery.<br />
The CICS backout function removes changes made to the recoverable data sets by a<br />
transaction. When a transaction abnormally ends, CICS performs a backout implicitly.<br />
Example<br />
In our example in Figure 7-11, transaction Trans1 is complete (committed) after Record 1 and
Record 2 are updated. Transactional recovery ensures that either both changes are made or
no change is made. When the application requests commit, both changes are made
atomically. In the case of a failure after updating Record 1, the change to this record is
backed out. This applies only to recoverable data sets, not to non-recoverable ones.
The failed-transaction flow (back out) shown in Figure 7-11 is:
Read Record 1 (lock Record 1, log Record 1)
Write Record 1'
---------------------------------- Failure ----------------------------------
Read log record for Record 1
Re-lock Record 1
Write Record 1 (restore the before-image)
Commit (update log, release locks)
Record 1 remains Record 1: the change is backed out.
7.12 The batch window problem<br />
► Batch window: a period of time in which CICS access to recoverable data sets is
quiesced so batch jobs can run
► Requires taking a backup of the data set
► Batch updates are then performed
► A forward recovery backup is taken, if needed
► When finished, CICS access to the data set is re-enabled
Figure 7-12 Batch window problem
Batch window<br />
The batch window is a period of time in which online access to recoverable data sets must be
disabled. During this time, no transaction processing can be done. This is normally necessary
to run batch jobs or other utilities that do not properly support recoverable data, even if those
utilities themselves use RLS access. Therefore, to allow these jobs or utilities to safely update
the data, it is first necessary to make a copy of the data. In the event that the batch job or
utility fails or encounters an error, this copy can be safely restored and online access can be
re-enabled. If the batch job completes successfully, the updated copy of
the data set can be safely used because only the batch job had access to the data while it<br />
was being updated. Therefore, the data cannot have been corrupted by interference from<br />
online transaction processing.<br />
Quiescing a data set from RLS processing<br />
Before updating a recoverable data set in non-RLS mode, quiesce the data set around the
sysplex. This ensures that no RLS access can occur while non-RLS applications are updating
those data sets. The quiesced state is stored in the ICF catalog. After a quiesce has
completed, all CICS files associated with the data set are closed. A quiesced data set can be
opened in non-RLS mode only if no retained locks are present. After the data set has been
quiesced from RLS processing, it can be opened again in RLS mode only after it is
unquiesced.
See 7.20, “Interacting with VSAM RLS” on page 412 for information about how to quiesce and<br />
unquiesce a data set.<br />
7.13 VSAM RLS implementation<br />
► Update the CFRM policy to define lock and cache structures
► Update SYS1.PARMLIB(IGDSMSxx) with RLS parameters
► Define sharing control data sets (SHCDSs)
► Update the SMS configuration for cache sets
► Update data sets with LOG(NONE/UNDO/ALL) and LOGSTREAMID
Figure 7-13 VSAM RLS configuration changes
VSAM RLS configuration changes<br />
There are a few configuration changes necessary to run VSAM RLS. They include:<br />
► Update the Coupling Facility resource manager (CFRM) policy to define lock and cache<br />
structures.<br />
► Update SYS1.PARMLIB(IGDSMSxx) with VSAM RLS parameters.<br />
► Define new sharing control data sets (SHCDSs).<br />
► Update SMS configuration for cache sets and assign them to a storage group.<br />
► Update data sets with the attribute LOG(NONE/UNDO/ALL) and optionally assign a<br />
LOGSTREAMID.<br />
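As an illustrative sketch of the SHCDS step (the data set name, size, and volume are assumptions): sharing control data sets are VSAM linear data sets, conventionally named with a SYS1.DFPSHCDS prefix:

```jcl
//DEFSHCDS JOB (ACCT),'DEFINE SHCDS',CLASS=A,MSGCLASS=H
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  /* SHCDSs hold VSAM RLS sharing and recovery state; */
  /* define at least two active SHCDSs plus a spare.  */
  DEFINE CLUSTER (NAME(SYS1.DFPSHCDS.WRKLD1.VSHCDS1) -
         LINEAR CYLINDERS(10 10)                     -
         SHAREOPTIONS(3 3)                           -
         VOLUMES(SHR001))
/*
```

The new SHCDS can then be brought into use with an operator command of the form V SMS,SHCDS(shcdsname),NEW.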
7.14 Coupling Facility structures for RLS sharing<br />
Two types of Coupling Facility structures are needed by VSAM RLS:
► Lock structure
– Maintains record locks and other DFSMSdfp serializations
– Enforces protocol restrictions for VSAM RLS data sets
– Required structure name: IGWLOCK00
► Cache structures
– Multiple cache structures are possible; at least one is necessary
– Provide a level of storage between DASD and local memory
Figure 7-14 VSAM RLS Coupling Facility structures
Structures in a Coupling Facility<br />
A CF stores information in structures. z/OS recognizes three structure types: cache, list, and
lock. It provides a specific set of services for each of the structure types to allow the
manipulation of data within the structure. VSAM RLS needs cache and lock structures for data
sharing and high-speed serialization.
Lock structure<br />
In a Parallel Sysplex, you need only one lock structure for VSAM RLS because only one<br />
VSAM sharing group is permitted. The required name is IGWLOCK00.<br />
The lock structure is used to:<br />
► Enforce the protocol restrictions for VSAM RLS data sets<br />
► Maintain the record-level locks and other DFSMSdfp serializations<br />
Ensure that the Coupling Facility lock structure has universal connectivity so that it is<br />
accessible from all systems in the Parallel Sysplex that support VSAM RLS.<br />
Tip: For high-availability environments, use a nonvolatile Coupling Facility for the lock<br />
structure. If you maintain the lock structure in a volatile Coupling Facility, a power outage<br />
can cause a failure and loss of information in the Coupling Facility lock structure.
Cache structures<br />
Coupling Facility cache structures provide a level of storage hierarchy between local memory
and DASD cache.
They are also used as a system buffer pool, with cross-invalidation being done (see 7.8,
“Buffering under VSAM RLS” on page 388).<br />
Each Coupling Facility cache structure is contained in a single Coupling Facility. You may<br />
have multiple Coupling Facilities and multiple cache structures.<br />
The minimum size of the cache structure is 10 MB.
Sizing the lock and cache structure<br />
For information about sizing the CF lock and cache structure for VSAM RLS, see:<br />
► z/OS DFSMStvs Planning and Operation Guide, SC26-7348
► z/OS DFSMSdfp Storage Administration Reference, SC26-7402
A sizing tool known as CFSIZER is also available on the IBM Web site at:
http://www-1.ibm.com/servers/eserver/zseries/cfsizer/vsamrls.html<br />
Defining Coupling Facility structures<br />
Use CFRM policy definitions to specify an initial and maximum size for each Coupling Facility<br />
structure. DFSMS uses the initial structure size you specify in the policy each time it connects<br />
to a Coupling Facility cache structure.<br />
//STEP10 EXEC PGM=IXCMIAPU<br />
//SYSPRINT DD SYSOUT=A<br />
//SYSABEND DD SYSOUT=A<br />
//SYSIN DD *<br />
DATA TYPE(CFRM) REPORT(YES)<br />
DEFINE POLICY NAME(CFRM01) REPLACE(YES)<br />
STRUCTURE NAME(CACHE01)<br />
SIZE(70000)<br />
INITSIZE(50000)<br />
PREFLIST(CF01,CF02)<br />
STRUCTURE NAME(CACHE02)<br />
SIZE(70000)<br />
INITSIZE(50000)<br />
PREFLIST(CF01,CF02)<br />
STRUCTURE NAME(IGWLOCK00)<br />
SIZE(30000)<br />
INITSIZE(15000)<br />
PREFLIST(CF01,CF02)<br />
/*<br />
Figure 7-15 Example of defining VSAM RLS CF structures
Displaying Coupling Facility structures<br />
You can use the system command DISPLAY XCF to view your currently defined Coupling<br />
Facility structures. An example of an XCF display of the lock structure is in Figure 7-16 on
page 399.<br />
D XCF,STR,STRNAME=IGWLOCK00<br />
IXC360I 10.00.38 DISPLAY XCF 337<br />
STRNAME: IGWLOCK00<br />
STATUS: ALLOCATED<br />
TYPE: LOCK<br />
POLICY INFORMATION:<br />
POLICY SIZE : 28600 K<br />
POLICY INITSIZE: 14300 K<br />
POLICY MINSIZE : 0 K<br />
FULLTHRESHOLD : 80<br />
ALLOWAUTOALT : NO<br />
REBUILD PERCENT: 75<br />
DUPLEX : ALLOWED<br />
PREFERENCE LIST: CF1 CF2<br />
ENFORCEORDER : NO<br />
EXCLUSION LIST IS EMPTY<br />
ACTIVE STRUCTURE<br />
----------------<br />
ALLOCATION TIME: 02/24/2005 14:22:56<br />
CFNAME : CF1<br />
COUPLING FACILITY: 002084.IBM.02.000000026A3A
PARTITION: 1F CPCID: 00<br />
ACTUAL SIZE : 14336 K<br />
STORAGE INCREMENT SIZE: 256 K<br />
ENTRIES: IN-USE: 0 TOTAL: 33331, 0% FULL<br />
LOCKS: TOTAL: 2097152<br />
PHYSICAL VERSION: BC9F02FD EDC963AC<br />
LOGICAL VERSION: BC9F02FD EDC963AC<br />
SYSTEM-MANAGED PROCESS LEVEL: 8<br />
XCF GRPNAME : IXCLO001<br />
DISPOSITION : KEEP
ACCESS TIME : 0<br />
NUMBER OF RECORD DATA LISTS PER CONNECTION: 16<br />
MAX CONNECTIONS: 4<br />
# CONNECTIONS : 4<br />
CONNECTION NAME ID VERSION SYSNAME JOBNAME ASID STATE<br />
---------------- -- -------- -------- -------- ---- ----------------<br />
SC63 01 000100B0 SC63 SMSVSAM 0009 ACTIVE<br />
SC64 02 000200C6 SC64 SMSVSAM 000A ACTIVE<br />
SC65 03 000300DD SC65 SMSVSAM 000A ACTIVE<br />
SC70 04 00040035 SC70 SMSVSAM 000A ACTIVE<br />
Figure 7-16 Example of XCF display of structure IGWLOCK00
7.15 Update PARMLIB with VSAM RLS parameters<br />
PARMLIB parameters to support VSAM RLS<br />
RLSINIT<br />
CF_TIME<br />
DEADLOCK_DETECTION<br />
RLS_MaxCfFeatureLevel<br />
RLS_MAX_POOL_SIZE<br />
SMF_TIME<br />
Figure 7-17 PARMLIB parameters to support VSAM RLS<br />
New PARMLIB parameters to support VSAM RLS<br />
The SYS1.PARMLIB member IGDSMSxx includes several parameters that support the<br />
Coupling Facility. With the exception of RLSINIT, these parameters apply across all systems
in the Parallel Sysplex. The parameter values specified for the first system that was activated<br />
in the sysplex are used by all other systems in the sysplex.<br />
The following IGDSMSxx parameters support VSAM RLS:<br />
► RLSINIT({NO|YES})<br />
This specifies whether to start the SMSVSAM address space as a part <strong>of</strong> the system.<br />
initialization<br />
► CF_TIME(nnn|3600)<br />
This indicates the number <strong>of</strong> seconds between recording SMF type 42 records with<br />
subtypes 15, 16, 17, 18, and 19 for the CF (both cache and lock structures).<br />
► DEADLOCK_DETECTION(iiii|15,kkkk|4)<br />
– This specifies the deadlock detection intervals used by the Storage Management<br />
Locking Services.<br />
– iiii - This is the local detection interval, in seconds.<br />
– kkkk - This is the global detection interval, which is the number <strong>of</strong> iterations <strong>of</strong> the local<br />
detection interval that must be run until the global deadlock detection is invoked.<br />
400 ABCs of z/OS System Programming Volume 3
► RLS_MaxCfFeatureLevel({A|Z})
This specifies the method that VSAM RLS uses to determine the size of the data that is placed in the CF cache structure.
► RLS_MAX_POOL_SIZE({nnnn|100})
This specifies the maximum size in megabytes of the SMSVSAM local buffer pool.
► SMF_TIME({YES|NO})
This specifies that the SMF type 42 records are created at the SMF interval time, and that all of the indicated records are synchronized with SMF and RMF data intervals.
For more information about VSAM RLS parameters, see z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
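Taken together, the IGDSMSxx parameters described above might look like the following fragment. This is an illustrative sketch only: the ACDS and COMMDS data set names are assumptions, and the parameter values are examples rather than recommendations.

```
SMS  ACDS(SYS1.SMS.ACDS)              /* assumed ACDS name            */
     COMMDS(SYS1.SMS.COMMDS)          /* assumed COMMDS name          */
     RLSINIT(YES)                     /* start SMSVSAM at IPL         */
     CF_TIME(3600)                    /* SMF 42 CF recording interval */
     DEADLOCK_DETECTION(15,4)         /* local 15s, global 4 cycles   */
     RLS_MAX_POOL_SIZE(100)           /* local buffer pool max, in MB */
     SMF_TIME(YES)                    /* sync with SMF/RMF intervals  */
```

Because only RLSINIT is system-specific, a single shared IGDSMSxx member normally yields consistent RLS values across the sysplex.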
7.16 Define sharing control data sets

Figure 7-18 VSAM RLS sharing control data sets (diagram not reproduced: CICS read/write and batch read-only users on each system access the SMSVSAM address space and data space, which share the Coupling Facility structures IGWLOCK00, CACHE01, and CACHE02 as well as the primary, secondary, and spare SHCDSs)
VSAM RLS sharing control data sets
The sharing control data set (SHCDS) is designed to contain the information required for DFSMS to continue processing with a minimum of unavailable data and no corruption of data when failures occur. The SHCDS data can be either SMS-managed or non-SMS-managed.

The SHCDS contains the following:
► Name of the CF lock structure in use
► System status for each system or failed system instance
► Time that the system failed
► List of subsystems and their status
► List of open data sets using the CF
► List of data sets with unbound locks
► List of data sets in permit non-RLS state

Defining sharing control data sets
The SHCDS is a logically partitioned VSAM linear data set. Consider the following for the SHCDS allocation:
► At a minimum, define and activate two SHCDSs and at least one spare SHCDS for recovery purposes to ensure duplexing of your data.
► Place the SHCDSs on volumes with global connectivity, because VSAM RLS processing is only available on those systems that currently have access to the active SHCDSs.
► The SHCDSs must not be shared outside the sysplex.
► Use SHAREOPTIONS(3,3).
► If SMS-managed, use a storage class with guaranteed space for the SHCDSs.
► Use the naming convention SYS1.DFPSHCDS.qualifier.Vvolser, where:
– qualifier is a 1- to 8-character qualifier of your choice.
– volser is the volume serial number of the volume on which the data set resides.
► All SHCDSs are to be of the same size.
► An SHCDS can have extents only on the same volume.

Both the primary and secondary SHCDS contain the same data. With the duplexing of the data, VSAM RLS ensures that processing can continue if VSAM RLS loses the connection to one SHCDS or the control data set gets damaged. In that case, you can switch the spare SHCDS to active.

Note: The SMSVSAM address space needs RACF UPDATE authority to SYS1.DFPSHCDS.

JCL to allocate the SHCDSs
You can use the sample JCL in Figure 7-19 to allocate your SHCDSs.
//STEP01   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER( -
    NAME(SYS1.DFPSHCDS.PRIMARY.VTOTSMA) -
    VOLUMES(TOTSMA) -
    MEGABYTES(10 10) -
    LINEAR -
    SHAREOPTIONS(3,3) -
    STORAGECLASS(GSPACE))
  DEFINE CLUSTER( -
    NAME(SYS1.DFPSHCDS.SECONDRY.VTOTCAT) -
    VOLUMES(TOTCAT) -
    MEGABYTES(10 10) -
    LINEAR -
    SHAREOPTIONS(3,3) -
    STORAGECLASS(GSPACE))
  DEFINE CLUSTER( -
    NAME(SYS1.DFPSHCDS.SPARE.VTOTSMS) -
    VOLUMES(TOTSMS) -
    MEGABYTES(10 10) -
    SHAREOPTIONS(3,3) -
    LINEAR -
    STORAGECLASS(GSPACE))
Figure 7-19 Allocating VSAM RLS SHCDSs
To calculate the size of the sharing control data sets, follow the guidelines provided in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
Tip: Place the SHCDSs on separate volumes to maximize availability. Avoid placing SHCDSs on volumes for which there might be extensive volume reserve activity.

SHCDS operations
Use the following commands to activate your newly defined SHCDSs for use by VSAM RLS.
► For the primary and secondary SHCDS, use:
VARY SMS,SHCDS(SHCDS_name),NEW
► For the spare SHCDS, use:
VARY SMS,SHCDS(SHCDS_name),NEWSPARE
To display the SHCDSs in use in an active configuration, use:
D SMS,SHCDS
Figure 7-20 displays an example of a SHCDS list in a sysplex.

D SMS,SHCDS
IEE932I 539
IGW612I 17:10:12 DISPLAY SMS,SHCDS
Name Size %UTIL Status Type
WTSCPLX2.VSBOX48 10800Kb 4% GOOD ACTIVE
WTSCPLX2.VSBOX52 10800Kb 4% GOOD ACTIVE
WTSCPLX2.VSBOX49 10800Kb 4% GOOD SPARE
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
Figure 7-20 Example of SHCDS display

To logically delete either an active or a spare SHCDS, use:
VARY SMS,SHCDS(SHCDS_name),DELETE

Note: In the VARY SMS,SHCDS commands, the SHCDS name is not fully qualified. SMSVSAM takes as a default the first two qualifiers, which must always be SYS1.DFPSHCDS. You specify only the last two qualifiers as the SHCDS name.
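For example, to activate the three SHCDSs defined in Figure 7-19, only the last two qualifiers of each data set name are specified on the commands:

```
VARY SMS,SHCDS(PRIMARY.VTOTSMA),NEW
VARY SMS,SHCDS(SECONDRY.VTOTCAT),NEW
VARY SMS,SHCDS(SPARE.VTOTSMS),NEWSPARE
```

SMSVSAM prefixes each name with SYS1.DFPSHCDS to form the fully qualified data set name.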
7.17 Update SMS configuration

Figure 7-21 Example of SMS configuration with cache sets (diagram not reproduced: the SMS base configuration maps cache set names to CF cache structure names; cache set PUBLIC1 maps to CACHE01 and CACHE02, PUBLIC2 to CACHE02 and CACHE03, and PAYROLL to CACHE03 and PAYSTRUC; storage classes CICS1, CICS2, and CICS3 reference cache sets PUBLIC1, PUBLIC2, and PAYROLL respectively, while storage class NORLS has a blank cache set name and therefore no VSAM RLS capability)
Update the SMS configuration
For DFSMSdfp to use the CF for VSAM RLS, after defining one or more CF cache structures to MVS you must also add them to the SMS base configuration.

Define cache sets
In 7.14, “Coupling Facility structures for RLS sharing” on page 397 we discuss how to define CF cache structures. You now need to add the CF cache structures to the DFSMS base configuration. To do so, you need to associate them with a cache set name.

The following steps describe how to define a cache set and how to associate the cache structures with the cache set:
1. From the ISMF primary option menu for storage administrators, select option 8, Control Data Set.
2. Select option 7, Cache Update, and make sure that you specified the right SCDS name. (The SCDS is the SMS source control data set; do not confuse it with the SHCDS.)
3. Define your CF cache sets (see Figure 7-22 on page 406).
CF CACHE SET UPDATE PAGE 1 OF 1
Command ===>
SCDS Name : SYS1.SMS.SCDS
Define/Alter/Delete CF Cache Sets: ( 001 Cache Sets Currently Defined )
Cache Set CF Cache Structure Names
PUBLIC1 CACHE01 CACHE02
PUBLIC2 CACHE02 CACHE03
PAYROLL CACHE03 PAYSTRUC
F1=Help F2=Split F3=End F4=Return F7=Up F8=Down F9=Swap
F10=Left F11=Right F12=Cursor
Figure 7-22 CF cache update panel in ISMF

SMS storage class changes
After defining your cache sets, you need to assign them to one or more storage classes (SC) so that data sets associated with those SCs are eligible for VSAM RLS by using Coupling Facility cache structures.

Follow these steps to assign the CF cache sets:
1. Select option 5, Storage Class, from the ISMF primary option menu for storage administrators.
2. Select option 3, Define, and make sure that you specified the right SCDS name.
3. On the second page of the STORAGE CLASS DEFINE panel, enter the name of the Coupling Facility cache set you defined in the base configuration (see Figure 7-23 on page 407).
4. On the same panel, enter values for direct and sequential weight. The higher the value, the more important it is that the data be assigned more cache resources.
STORAGE CLASS DEFINE Page 2 of 2
Command ===>
SCDS Name . . . . . : SYS1.SMS.SCDS
Storage Class Name : CICS1
To DEFINE Storage Class, Specify:
Guaranteed Space . . . . . . . . . N (Y or N)
Guaranteed Synchronous Write . . . N (Y or N)
Multi-Tiered SG . . . . . . . . . . (Y, N, or blank)
Parallel Access Volume Capability N (R, P, S, or N)
CF Cache Set Name . . . . . . . . . PUBLIC1 (up to 8 chars or blank)
CF Direct Weight . . . . . . . . . 6 (1 to 11 or blank)
CF Sequential Weight . . . . . . . 4 (1 to 11 or blank)
F1=Help F2=Split F3=End F4=Return F7=Up F8=Down F9=Swap
F10=Left F11=Right F12=Cursor
Figure 7-23 ISMF storage class definition panel, page 2

Note: Be sure to change your Storage Class ACS routines so that RLS data sets are assigned the appropriate storage class.

More detailed information about setting up SMS for VSAM RLS is in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
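A Storage Class ACS routine change of the kind mentioned in the note above could be sketched as follows. This fragment is purely illustrative: the FILTLIST pattern and data set naming convention are assumptions, and your installation's routine will differ.

```
PROC STORCLAS                               /* Storage Class ACS routine */
  FILTLIST RLSDSNS INCLUDE(CICSPROD.RLS.**) /* assumed RLS DSN pattern   */
  SELECT
    WHEN (&DSN = &RLSDSNS)
      SET &STORCLAS = 'CICS1'     /* SC with a CF cache set: RLS-capable */
    OTHERWISE
      SET &STORCLAS = 'NORLS'     /* SC with blank cache set: no RLS     */
  END
END
```

Data sets matching the filter receive a storage class whose CF cache set makes them eligible for VSAM RLS; all others get a class with a blank cache set name.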
7.18 Update data sets with log parameters

Two new parameters to support VSAM RLS:
LOG(NONE|UNDO|ALL)
NONE - no data set recovery
UNDO - data set is backward recoverable
ALL - data set is backward and forward recoverable
LOGSTREAMID(logstreamname)
Specifies the name of the CICS forward recovery log stream for data sets with LOG(ALL)
Figure 7-24 New parameters to support VSAM RLS

New parameters to support VSAM RLS
The two new parameters LOG and LOGSTREAMID are stored in the ICF catalog. You can use the IDCAMS DEFINE and ALTER commands to set the data set LOG attribute and to assign a log stream name.

Another way to assign the LOG attribute and a LOGSTREAMID is to use a data class that has those values already defined.

The LOG parameter is described in detail in 7.10, “VSAM RLS/CICS data set recovery” on page 392.

Use the LOGSTREAMID parameter to assign a CICS forward recovery log stream to a data set that is forward recoverable.

JCL to define a cluster with LOG(ALL) and a LOGSTREAMID
The example in Figure 7-25 on page 409 shows you how to define a VSAM data set that is eligible for RLS processing and that is forward recoverable.
//LABEL    JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME (OTTI.IV4A62A1.RLS) -
    CYL (1 1) -
    RECSZ (5000 10009) -
    CISZ (20000) -
    IXD -
    KEYS (8 0) -
    FREESPACE(40 40) -
    SHR (2,3) -
    REUSE -
    BWO(TYPECICS) -
    LOG(ALL) -
    LOGSTREAMID(CICS.IV4A62A1.DFHJ05) -
    STORAGECLASS(CICS1) -
    )
/*
Figure 7-25 Sample JCL to define a recoverable VSAM data set for RLS processing
For more information about the IDCAMS DEFINE and ALTER commands, see z/OS DFSMS Access Method Services for Catalogs, SC26-7394.
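To make an existing VSAM data set forward recoverable without redefining it, IDCAMS ALTER can set the same attributes. The following sketch reuses the data set and log stream names from Figure 7-25; substitute your own names:

```
//ALTLOG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER OTTI.IV4A62A1.RLS -
        LOG(ALL) -
        LOGSTREAMID(CICS.IV4A62A1.DFHJ05)
/*
```

Both attributes are recorded in the ICF catalog, just as they would be on DEFINE.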
JCL to define a log stream
You can use the JCL in Figure 7-26 to define a log stream.

//LABEL    JOB ...
//STEP010  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
  DELETE LOGSTREAM NAME(CICS.IV4A62A1.DFHJ05)
  DEFINE LOGSTREAM
         NAME(CICS.IV4A62A1.DFHJ05)
         DASDONLY(YES)
         STG_SIZE(50)
         LS_SIZE(25)
         HIGHOFFLOAD(80)
         LOWOFFLOAD(0)
/*
Figure 7-26 JCL to define a log stream

For information about the IXCMIAPU utility, see z/OS MVS Setting Up a Sysplex, SA22-7625.
7.19 The SMSVSAM address space

Figure 7-27 SMSVSAM address space (diagram not reproduced: on each system, the RLS server is the SMSVSAM address space with its SMSVSAM and MMFSTUFF data spaces; its RLS clients are subsystems such as the CICS1 and CICS2 regions and batch jobs such as HSM; all servers share the Coupling Facility structures IGWLOCK00, CACHE01, and CACHE02)
The SMSVSAM address space
SMSVSAM is the MVS job name of the VSAM RLS address space. It is started automatically at IPL time if RLSINIT(YES) is specified in the IGDSMSxx member of SYS1.PARMLIB. Another way to start it is by operator command (see 7.20, “Interacting with VSAM RLS” on page 412).

The SMSVSAM address space needs to be started on each system where you want to exploit VSAM RLS. It is responsible for centralizing all processing necessary for cross-system sharing, which includes one connect per system to the XCF lock, cache, and VSAM control block structures.

The SMSVSAM address space owns two data spaces:
► SMSVSAM
This contains VSAM RLS control blocks and a system-wide buffer pool.
► MMFSTUFF
This collects activity monitoring information that is used to produce SMF records.

Terminology
We use the following terms to describe an RLS environment:
► RLS server
The SMSVSAM address space is also referred to as the RLS server.
► RLS client
Any address space that invokes an RLS function that results in a program call to the SMSVSAM address space is called an RLS client. Those address spaces can be CICS regions as well as batch jobs.
► Recoverable subsystem
A subsystem is an RLS client address space that registers with the SMSVSAM address space as an address space that will provide transactional and data set recovery. CICS, for example, is a recoverable subsystem.
► Batch job
An RLS client address space that does not first register with SMSVSAM as a recoverable subsystem is called a batch job. An example of such a batch job is HSM.
7.20 Interacting with VSAM RLS

Interacting with VSAM RLS includes:
► Use of SETSMS commands to change the IGDSMSxx specifications
► Use of VARY SMS commands to start and stop the RLS server
► Use of display commands to get information about the current VSAM RLS configuration
► Activating and displaying the SHCDS
► Quiescing and unquiescing a data set from RLS processing
► Changing the size of an XCF structure
► Listing and changing recovery information
Figure 7-28 Interacting with VSAM RLS

Interacting with VSAM RLS
This section provides a brief overview of several commands you need to know to monitor and control the VSAM RLS environment.

SETSMS command
Use the SETSMS command to overwrite the PARMLIB specifications for IGDSMSxx. The syntax is:
SETSMS CF_TIME(nnn|3600)
       DEADLOCK_DETECTION(iiii,kkkk)
       RLSINIT
       RLS_MAXCFFEATURELEVEL({A|Z})
       RLS_MAX_POOL_SIZE(nnnn|100)
       SMF_TIME(YES|NO)
For information about these PARMLIB values refer to 7.15, “Update PARMLIB with VSAM RLS parameters” on page 400.
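As an illustration, the following commands change two of these values dynamically; the values shown are arbitrary examples, not recommendations:

```
SETSMS RLS_MAX_POOL_SIZE(200)
SETSMS DEADLOCK_DETECTION(10,6)
```

The first enlarges the SMSVSAM local buffer pool maximum to 200 MB; the second sets local deadlock detection to every 10 seconds with global detection on every sixth local cycle.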
VARY SMS command
Use the VARY SMS command to control SMSVSAM processing.
► Start the VSAM RLS server address space on a single system.
If not started during IPL because of RLSINIT(NO) in the IGDSMSxx PARMLIB member, you can start the RLS server manually with:
V SMS,SMSVSAM,ACTIVE
► Stop the RLS server address space.
If the SMSVSAM address space fails, it is automatically restarted. If you want to stop the SMSVSAM address space permanently on a single system, use:
V SMS,SMSVSAM,TERMINATESERVER
Use this command only for specific recovery scenarios that require the SMSVSAM server to be down and to not restart automatically.
► Fall back from RLS processing.
A detailed procedure for falling back from RLS processing is described in z/OS DFSMSdfp Storage Administration Reference, SC26-7402. Read it before using the following command:
V SMS,SMSVSAM,FALLBACK
This command is used as the last step in the disablement procedure to fall back from SMSVSAM processing.
► Further VARY SMS commands.
There are additional VARY SMS commands available to interact with VSAM RLS. They are:
V SMS,CFCACHE(cachename),ENABLE|QUIESCE
      CFVOL(volid),ENABLE|QUIESCE
      MONDS(dsname[,dsname...]),ON|OFF
      SHCDS(shcdsname),NEW|NEWSPARE|DELETE
      SMSVSAM,SPHERE(spherename),ENABLE
      FORCEDELETELOCKSTRUCTURE
Refer to z/OS MVS System Commands, SA22-7627 for information about these commands.

Display commands
There are several display commands available to provide RLS-related information.
► Display the status of the SMSVSAM address space:
DISPLAY SMS,SMSVSAM{,ALL}
Specify ALL to see the status of all the SMSVSAM servers in the sysplex.
► Display information about the Coupling Facility cache structure:
DISPLAY SMS,CFCACHE(CF_cache_structure_name|*)
► Display information about the Coupling Facility lock structure IGWLOCK00:
DISPLAY SMS,CFLS
This information includes the lock rate, lock contention rate, false contention rate, and average number of requests waiting for locks.
► Display XCF information for a CF structure:
DISPLAY XCF,STR,STRNAME=structurename
This provides information such as status, type, and policy size for a CF structure.
To learn about other DISPLAY commands, see z/OS MVS System Commands, SA22-7627.

Activate and display the sharing control data sets
Refer to 7.16, “Define sharing control data sets” on page 402 for information about how to activate and display the VSAM RLS sharing control data sets.

Quiesce and unquiesce a data set from RLS processing
In 7.12, “The batch window problem” on page 395 we discuss the reasons for quiescing data sets from RLS processing.

There are several ways to quiesce or unquiesce a data set:
► Use the CICS command:
CEMT SET DSNAME(dsname) QUIESCED|UNQUIESCED
► Use an equivalent SPI command in a user program.
► Use the system command:
VARY SMS,SMSVSAM,SPHERE(dsname),QUIESCE|ENABLE

The quiesce status of a data set is set in the catalog and is shown in an IDCAMS LISTCAT output for the data set. See 7.22, “Interpreting RLSDATA in an IDCAMS LISTCAT output” on page 417 for information about interpreting LISTCAT outputs.

Changing the size of an XCF structure
Use the following command to change the size of a Coupling Facility structure:
SETXCF START,ALTER,STRNAME=CF_cachestructurename,SIZE=newsize
This new size can be larger or smaller than the size of the current CF cache structure, but it cannot be larger than the maximum size specified in the CFRM policy. The SETXCF START,ALTER command will not work unless the structure's ALLOW ALTER indicator is set to YES.
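For example, to grow a cache structure named CACHE01 (one of the structure names used in the earlier figures) to 16 MB, an operator might enter the following; the size value, expressed in units of 1 KB, is illustrative only:

```
SETXCF START,ALTER,STRNAME=CACHE01,SIZE=16384
```

The request is rejected if 16384 K exceeds the SIZE specified for the structure in the CFRM policy.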
List and change recovery information
You can use the IDCAMS command SHCDS to list and change recovery information kept by the SMSVSAM server. It also resets VSAM record-level sharing settings in catalogs, allowing for non-RLS access or fallback from RLS processing. To learn about the individual parameters for this command, see z/OS DFSMS Access Method Services for Catalogs, SC26-7394.

Important: This section simply provides an overview of commands that are useful for working with VSAM RLS. Before using these commands (other than the DISPLAY command), read the official z/OS manuals carefully.
7.21 Backup and recovery of CICS VSAM data sets

Backup tool DFSMSdss:
Supports backup-while-open processing
BWO type TYPECICS allows backing up the data set while it is open in CICS
BWO of forward recoverable data sets allows a recovery tool to use this backup for forward recovery
Recovery of a broken data set:
Non-recoverable data sets: lost updates; data in the backup can be inconsistent
Backward recoverable data sets: lost updates; data in the backup are consistent
Forward recoverable data sets: no lost updates; data are consistent
Figure 7-29 Backup and recovery of CICS VSAM data sets
Backup tool DFSMSdss
DFSMSdss supports backup-while-open (BWO) serialization, which can perform backups of data sets that are open for update for long periods of time. It can also perform a logical data set dump of these data sets even if another application has them serialized.

Backup-while-open is a better method than using SHARE or TOLERATE(ENQFAILURE) for dumping CICS VSAM file-control data sets that are in use and open for update.

When you dump data sets that are designated by CICS as eligible for backup-while-open processing, data integrity is maintained through serialization interactions between:
► CICS (database control program)
► VSAM RLS
► VSAM record management
► DFSMSdfp
► DFSMSdss

Backup-while-open
In order to allow DFSMSdss to take a backup while your data set is open by CICS, you need to define the data set with the BWO attribute TYPECICS or assign a data class with this attribute.
► TYPECICS
Use TYPECICS to specify BWO in a CICS environment. For RLS processing, this activates BWO processing for CICS. For non-RLS processing, CICS determines whether
to use this specification or the specification in the CICS FCT. The BWO type is stored in the ICF catalog.

Backup-while-open of a forward recoverable data set
If you use DFSMSdss BWO processing for a forward recoverable data set, CICS logs the start and end of the copy/backup operation. The data set can then be fully recovered from this backup.

For information about BWO processing, see z/OS DFSMSdss Storage Administration Reference, SC35-0424.
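As a sketch, a DFSMSdss logical data set dump of the cluster defined in Figure 7-25 might look like the following. The output data set name, volume, and space values are assumptions for illustration:

```
//DSSBKUP  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//BACKUP   DD DSN=BACKUP.IV4A62A1.RLS,DISP=(NEW,CATLG),
//            UNIT=3390,VOL=SER=BKP001,SPACE=(CYL,(5,5))
//SYSIN    DD *
  DUMP DATASET(INCLUDE(OTTI.IV4A62A1.RLS)) -
       OUTDDNAME(BACKUP)
/*
```

Because the cluster was defined with BWO(TYPECICS), the dump can proceed even while the data set is open for update in CICS.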
Recover a CICS VSAM data set
Sometimes it is necessary to recover a data set, for example if it becomes broken.
► Recovery of a non-recoverable data set
Data sets with LOG(NONE) are considered non-recoverable. To recover such a data set, restore the last backup of the data set. All updates to this data set after the backup was taken are lost. If the backup was taken after a transaction failed (did not commit), the data in the backup might be inconsistent.
► Recovery of a backward recoverable data set
Data sets with the LOG(UNDO) attribute are considered backward recoverable. To recover such a data set, restore the last backup of the data set. All updates to this data set after the backup was taken are lost. The data in the backup is consistent.
► Recovery of a forward recoverable data set
Data sets with LOG(ALL) and a log stream assigned are forward recoverable. Restore the last backup of the data set. Then run a tool like CICS VSAM Recovery (CICSVR), which uses the forward recovery log to reapply all committed updates made up to the point at which the data set became broken. No updates are lost.
7.22 Interpreting RLSDATA in an IDCAMS LISTCAT output

CLUSTER ------- OTTI.IV4A62A1.RLS
IN-CAT --- CATALOG.MVSICFU.VO260C1
HISTORY
DATASET-OWNER-----(NULL) CREATION--------2005.122
RELEASE----------------2 EXPIRATION------0000.000
SMSDATA
STORAGECLASS ------CICS1 MANAGEMENTCLASS-CICSRLSM
DATACLASS --------(NULL) LBACKUP ---0000.000.0000
BWO STATUS------00000000 BWO TIMESTAMP---00000 00:00:00.0
BWO-------------TYPECICS
RLSDATA
LOG ------------------ALL RECOVERY REQUIRED --(NO)
VSAM QUIESCED -------(NO) RLS IN USE ---------(YES)
LOGSTREAMID--------------CICS.IV4A62A1.DFHJ05
RECOVERY TIMESTAMP LOCAL-----X'0000000000000000'
RECOVERY TIMESTAMP GMT-------X'0000000000000000'
Figure 7-30 Sample RLSDATA in an IDCAMS LISTCAT output
RLSDATA in an IDCAMS LISTCAT output
The RLSDATA in the output of an IDCAMS LISTCAT job shows you the RLS status of the data set and recovery information.

You can use the sample JCL in Figure 7-31 to run an IDCAMS LISTCAT job.

//LABEL    JOB ...
//S1       EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTC ENT(OTTI.IV4A62A1.RLS) ALL
/*
Figure 7-31 Sample JCL for IDCAMS LISTCAT

RLSDATA
RLSDATA contains the following information:
► LOG
This field shows you the type of logging used for this data set. It can be NONE, UNDO, or ALL.
► RECOVERY REQUIRED
This field indicates whether the sphere is currently in the process of being forward recovered.
► VSAM QUIESCED
If this value is YES, the data set is currently quiesced from RLS processing, so no RLS access is possible until it is unquiesced (see also 7.20, “Interacting with VSAM RLS” on page 412).
► RLS IN USE
If this value is YES, it means either:
– The data set was last opened for RLS access, or
– The data set is not opened for RLS processing but is recoverable and either has retained locks protecting updates or is in a lost locks state.

Note: If the RLS-IN-USE indicator is on, it does not mean that the data set is currently in use by VSAM RLS. It simply means that the last successful open was for RLS processing.
Non-RLS open will always attempt to call VSAM RLS if the RLS-IN-USE bit is on in the catalog. This bit is a safety net to prevent non-RLS users from accessing a data set which can have retained or lost locks associated with it.
The RLS-IN-USE bit is set on by RLS open and is left on after close. This bit is only turned off by a successful non-RLS open or by the IDCAMS SHCDS CFRESET command.

► LOGSTREAMID
This value tells you the forward recovery log stream name for this data set if the LOG attribute has the value ALL.
► RECOVERY TIMESTAMP
The recovery time stamp gives the time the most recent backup was taken when the data set was accessed by CICS using VSAM RLS.

All LISTCAT keywords are described in Appendix B of z/OS DFSMS Access Method Services for Catalogs, SC26-7394.
7.23 DFSMStvs introduction

DFSMStvs addresses key requirements from customers
Objective: provide transactional recovery within VSAM
VSAM RLS allows batch sharing of recoverable data sets for read
VSAM RLS provides locking and buffer coherency
CICS provides logging and two-phase commit protocols
DFSMStvs allows batch sharing of recoverable data sets for update
Logging provided using the MVS system logger
Two-phase commit and back out using MVS recoverable resource management services
Figure 7-32 DFSMS Transactional VSAM Services (DFSMStvs) introduction
Why use DFSMS Transactional VSAM Services (DFSMStvs)
The following key requirements were expressed by customers running CICS applications in RLS mode:
► Address the batch window problem for CICS and batch sharing of VSAM data sets.
In 7.12, “The batch window problem” on page 395, we discuss the problem with the batch window. Customers reported batch windows ranging from two to ten hours. The programs run during the batch window consisted of both in-house applications and vendor-written applications.
► Allow batch update sharing concurrent with CICS use of recoverable data.
► Allow multiple batch update programs to run concurrently.
► Allow programs to interact with multiple resource managers such as IMS and DB2.
► Allow full transactional read/write access from new Java programs to existing VSAM data.

Objective of DFSMStvs
The objective of DFSMStvs is to provide transactional recovery directly within VSAM. It is an extension to VSAM RLS. It allows any job or application that is designed for data sharing to read/write share VSAM recoverable files.
Chapter 7. DFSMS Transactional VSAM Services 419
DFSMStvs is a follow-on capability based on VSAM RLS. VSAM RLS supports CICS as a transaction manager, providing sysplex data sharing of VSAM recoverable files when they are accessed through CICS. CICS provides the necessary unit-of-work management, undo/redo logging, and commit/back out functions. VSAM RLS provides the underlying sysplex-scope locking and data access integrity.

DFSMStvs adds logging and commit/back out support to VSAM RLS. DFSMStvs requires and supports the recoverable resource management services (RRMS) component as the commit or sync point manager.

DFSMStvs provides a level of data sharing with built-in transactional recovery for VSAM recoverable files that is comparable to the data sharing and transactional recovery support for databases provided by DB2 and IMS DB.
7.24 Overview of DFSMStvs

DFSMStvs enhances VSAM RLS to provide data recovery capabilities such as
Transactional recovery
Data set recovery
DFSMStvs does not perform forward recovery
DFSMStvs uses
RRMS to manage the unit of recovery (UR)
System logger to manage the log streams
Undo log
Shunt log
Forward recovery logs
Log of logs
VSAM RLS manages locking and buffer coherency
Allows atomic commit of changes - all or nothing
DFSMStvs provides peer recovery

Figure 7-33 DFSMStvs overview
Enhancements of VSAM RLS
DFSMStvs enhances VSAM RLS to perform data recovery in the form of:
► Transactional recovery (see “Transactional recovery” on page 394)
► Data set recovery (see “VSAM RLS/CICS data set recovery” on page 392)

Before DFSMStvs, those two types of recovery were supported only by CICS. CICS performs transactional recovery for data sets defined with a LOG parameter of UNDO or ALL.

For forward recoverable data sets (LOG(ALL)), CICS also records updates in a log stream for forward recovery. CICS itself does not perform forward recovery; it performs only logging. For forward recovery you need a utility such as CICS VSAM Recovery (CICSVR).

Like CICS, DFSMStvs provides transactional recovery and logging. Without DFSMStvs, batch jobs cannot perform transactional recovery and logging. That is the reason batch jobs were granted only read access to a data set that was opened by CICS in RLS mode. A batch window was necessary to run batch updates for CICS VSAM data sets.

With DFSMStvs, batch jobs can perform transactional recovery and logging concurrently with CICS processing. Batch jobs can now update data sets while they are in use by CICS. No batch window is necessary any more.
Like CICS, DFSMStvs does not perform data set forward recovery.

Components used by DFSMStvs
The following components are involved in DFSMStvs processing:
► RRMS to manage the unit of recovery
DFSMStvs uses recoverable resource management services (RRMS) to manage the unit of recovery that is needed for transactional recovery. More information about RRMS is in the next section.
► System logger to manage log streams
There are three kinds of logs used by DFSMStvs:
– Undo log: Used for transactional recovery (backout processing)
– Forward recovery log: Used to log updates against a data set for forward recovery
– Shunt log: Used for long-running or failed units of recovery
You can also define a log of logs to help automate forward recovery.
These logs are maintained by the MVS system logger. For information about the various log types, see 7.28, “DFSMStvs logging” on page 427.
► VSAM RLS - used for record locking and buffering
The need for VSAM RLS is discussed in prior sections.

Atomic commit of changes
DFSMStvs allows atomic commit of changes. That means that if multiple updates to various records are necessary to complete a transaction, either all updates are committed or, if the transaction does not complete, none are. See also 7.11, “Transactional recovery” on page 394 and 7.26, “Atomic updates” on page 425.

Peer recovery
Peer recovery allows DFSMStvs to perform recovery for a failed DFSMStvs instance, cleaning up any work that was left in an incomplete state and clearing retained locks that resulted from the failure.

For more information about peer recovery, see z/OS DFSMStvs Planning and Operation Guide, SC26-7348.
7.25 DFSMStvs use of z/OS RRMS

z/OS RRMS:
- Registration services
- Context services
- Resource recovery services (RRS)

Figure 7-34 DFSMStvs and RRMS

(The figure shows DFSMStvs and other recoverable resource managers receiving prepare/commit and rollback requests from z/OS RRMS.)
Recoverable resource management services (RRMS)
z/OS provides recoverable resource management services (RRMS), comprising:
► Registration services
► Context services
► Resource recovery services (RRS), which acts as sync point manager

Role of resource recovery services (RRS)
RRS provides the sync point services and is the most important component from a DFSMStvs use perspective.

DFSMStvs is a recoverable resource manager. It is not a commit or sync point manager. DFSMStvs interfaces with the z/OS sync point manager (RRS).

When an application issues a commit request directly to z/OS, or indirectly through a sync point manager that interfaces with the z/OS sync point manager, DFSMStvs is invoked to participate in the two-phase commit process. Other resource managers (such as DB2) whose recoverable resources were modified by the transaction are also invoked by the z/OS sync point manager, thus providing a commit scope across the multiple resource managers.

RRS is a system-wide commit coordinator. It enables transactions to update protected resources managed by many resource managers.
It is RRS that provides the means to implement two-phase commit, but a resource manager must also use registration services and context services in conjunction with resource recovery services.

Two-phase commit
The two-phase commit protocol is a set of actions used to make sure that an application program either makes all changes to the resources represented by a single unit of recovery (UR), or makes no changes at all. The protocol verifies that either all changes or no changes are applied, even if one of the elements (such as the application, the system, or the resource manager) fails. The protocol allows for restart and recovery processing to take place after system or subsystem failure.

For a discussion of the term unit of recovery, see 7.27, “Unit of work and unit of recovery” on page 426.
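The protocol can be sketched in a few lines. The Python below is an illustrative model of the coordinator and resource manager roles only; RRS and real resource managers are far more involved, and all names here are ours:

```python
# Sketch of two-phase commit: the coordinator (RRS's role) polls every
# resource manager in phase 1, then commits everywhere or backs out
# everywhere in phase 2. Illustrative model, not a z/OS interface.

class ResourceManager:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "in-flight"

    def prepare(self):
        # Phase 1: vote yes only if the changes can be hardened.
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def backout(self):
        self.state = "backed out"

def sync_point(resource_managers):
    """Coordinator logic: all changes are applied, or none are."""
    if all(rm.prepare() for rm in resource_managers):
        for rm in resource_managers:
            rm.commit()        # phase 2: everyone commits
        return "committed"
    for rm in resource_managers:
        rm.backout()           # phase 2: everyone backs out
    return "backed out"

rms = [ResourceManager("tvs"), ResourceManager("db2", can_commit=False)]
assert sync_point(rms) == "backed out"
assert all(rm.state == "backed out" for rm in rms)
```

A single "no" vote in phase 1 forces every participant to back out, which is what guarantees the all-or-nothing property across resource managers.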
7.26 Atomic updates

Figure 7-35 Example of an atomic update

(The figure shows a transaction transferring $100 between two accounts. In the completed transaction, one account is debited from $200 to $100 and the other is credited from $700 to $800. In the incomplete transaction, the $700 account is credited to $800 but the $200 account is left unchanged.)
Atomic updates
A transaction is known as atomic when an application changes data in multiple resource managers as a single transaction, and all of those changes are accomplished through a single commit request by a sync point manager. If the transaction is successful, all the changes are committed. If any piece of the transaction is not successful, all changes are backed out. An atomic instant occurs when the sync point manager in a two-phase commit process logs a commit record for the transaction.

Also see 7.11, “Transactional recovery” on page 394 for information about recovering an uncompleted transaction.
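The $100 transfer of Figure 7-35 can be modeled as follows. This Python sketch assumes a simple in-memory undo list of before-images; it is illustrative only and not how DFSMStvs is implemented:

```python
# Sketch of an atomic transfer: every change records a before-image,
# so an incomplete transaction can be backed out to its starting state.

class Account:
    def __init__(self, balance):
        self.balance = balance

class Transaction:
    """All-or-nothing updates: changes vanish unless committed."""
    def __init__(self):
        self.undo = []

    def update(self, account, amount):
        self.undo.append((account, account.balance))  # before-image
        account.balance += amount

    def commit(self):
        self.undo.clear()        # changes become permanent

    def backout(self):
        for account, before in reversed(self.undo):
            account.balance = before
        self.undo.clear()

a, b = Account(200), Account(700)
txn = Transaction()
txn.update(a, -100)
# Simulate a failure before the credit side of the transfer runs:
txn.backout()
assert (a.balance, b.balance) == (200, 700)  # nothing changed
```

Had both the debit and the credit run before `commit()`, both would persist; because the transaction was incomplete, the debit alone is backed out and neither account moves.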
7.27 Unit of work and unit of recovery

Start of program     synchronized implicit
  update 1
  update 2    } A
commit               synchronized explicit
  update 3
  update 4
  update 5    } B
commit               synchronized explicit
  update 6    } C
End of program       synchronized implicit

(A, B, and C each mark a unit of recovery.)

Figure 7-36 Unit of recovery example
Unit of work and unit of recovery
A unit of work (UOW) is the term used in CICS publications for a set of updates that are treated as an atomic set of changes.

RRS uses unit of recovery (UR) to mean much the same thing. Thus, a unit of recovery is the set of updates between synchronization points. There are implicit synchronization points at the start and at the end of a transaction. Explicit synchronization points are requested by an application within a transaction or batch job. It is preferable to use explicit synchronization for greater control of the number of updates in a unit of recovery.

Changes to data are durable after a synchronization point. That means that the changes survive any subsequent failure.

In Figure 7-36 there are three units of recovery, noted as A, B, and C. The synchronization points between the units of recovery are either:
► Implicit - at the start and end of the program
► Explicit - when requested by commit
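The partitioning shown in Figure 7-36 can be expressed in a few lines. This Python sketch is purely illustrative, treating the program as a list of steps:

```python
# Sketch: explicit commits partition a program's updates into units of
# recovery, mirroring Figure 7-36 (A = updates 1-2, B = 3-5, C = 6).

program = ["update 1", "update 2", "commit",
           "update 3", "update 4", "update 5", "commit",
           "update 6"]  # end of program is an implicit sync point

units, current = [], []
for step in program:
    if step == "commit":
        units.append(current)   # explicit synchronization point
        current = []
    else:
        current.append(step)
if current:
    units.append(current)       # implicit sync point at end of program

assert len(units) == 3          # units of recovery A, B, and C
assert units[2] == ["update 6"]
```

A failure inside unit C would back out only update 6; units A and B are already durable because their sync points completed.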
7.28 DFSMStvs logging

Figure 7-37 DFSMStvs logging

(The figure shows CICS regions and DFSMStvs instances on each system in the sysplex writing through their local system logger to log streams held in Coupling Facility list structures: one undo log log stream per CICS region and per DFSMStvs instance, plus a merged CICS/DFSMStvs forward recovery log stream. The Coupling Facility also holds the cache and lock structures used by VSAM RLS.)
DFSMStvs logging
DFSMStvs logging uses the z/OS system logger, and its design is similar to the design of CICS logging. Forward recovery log streams for VSAM recoverable files are shared across CICS and DFSMStvs: CICS logs changes made by CICS transactions, and DFSMStvs logs changes made by its callers.

The system logger
The system logger (IXGLOGR) is an MVS component that provides a rich set of services that allow another component or an application to write, browse, and delete log data. The system logger is used because it can merge log entries from many z/OS images into a single log stream, where a log stream is simply a set of log entries.

Types of logs
There are various types of logs involved in DFSMStvs (and CICS) logging:
► Undo logs (mandatory, one per image) - tvsname.IGWLOG.SYSLOG
The backout or undo log contains images of changed records for recoverable data sets as they existed prior to being changed. It is used for transactional recovery to back out uncommitted changes if a transaction fails.
► Shunt logs (mandatory, one per image) - tvsname.IGWSHUNT.SHUNTLOG
The shunt log is used when backout requests fail and for long-running units of recovery.
► Log of logs (optional, shared with CICS and CICSVR) - default name is CICSUSER.CICSVR.DFHLGLOG
The log of logs contains copies of log records that are used to automate forward recovery.
► Forward recovery logs (optional, shared with CICS) - name of your choice
The forward recovery log is used for data sets defined with LOG(ALL). It is used for forward recovery and needs to be defined in the LOGSTREAMID parameter of the data set. For more information, see 7.10, “VSAM RLS/CICS data set recovery” on page 392.

The system logger writes log data to log streams. The log streams are put in list structures in the Coupling Facility (except for DASDONLY log streams).

As Figure 7-37 on page 427 shows, you can merge forward recovery logs for use by CICS and DFSMStvs. You can also share a forward recovery log among multiple VSAM data sets. You cannot share an undo log between CICS and DFSMStvs; you need one per image.

For information about how to define log streams and list structures, refer to 7.32, “Prepare for logging” on page 433.

Note: DFSMStvs needs UPDATE RACF access to the log streams.
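What "merging log entries from many images into a single log stream" means can be illustrated with a small sketch. The Python below uses `heapq.merge` as a stand-in for the system logger's behavior; the entry layout is invented for the example:

```python
# Sketch: entries written by two systems interleave into one
# time-ordered log stream (illustrative; not the IXGLOGR API).
import heapq

# (timestamp, writing system, payload) - already in order per system
sys1 = [(1, "SYS1", "update A"), (4, "SYS1", "update C")]
sys2 = [(2, "SYS2", "update B"), (3, "SYS2", "commit")]

log_stream = list(heapq.merge(sys1, sys2))  # merged by timestamp
assert [entry[0] for entry in log_stream] == [1, 2, 3, 4]
```

The merged order is what lets a single forward recovery log stream serve CICS regions and DFSMStvs instances across the whole sysplex, while each entry still records which system wrote it.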
7.29 Accessing a data set with DFSMStvs

The following table lists the type of open resulting from the parameters specified. The left side shows the type of data set and the type of open; the column headings indicate the RLS option specified.

Data Set Type & Type of OPEN        NRI        CR         CRE
Recoverable, Open for Input         VSAM RLS   VSAM RLS   DFSMStvs
Recoverable, Open for Output        DFSMStvs   DFSMStvs   DFSMStvs
Non-recoverable, Open for Input     VSAM RLS   VSAM RLS   DFSMStvs
Non-recoverable, Open for Output    VSAM RLS   VSAM RLS   DFSMStvs

Figure 7-38 Accessing a data set with DFSMStvs
Data set access with DFSMStvs
In 7.9, “VSAM RLS locking” on page 390, we discuss the MACRF options NRI, CR, and CRE. CRE gives DFSMStvs access to VSAM data sets open for input or output. CR or NRI gives DFSMStvs access to VSAM recoverable data sets only for output.

You can modify an application to use DFSMStvs by specifying RLS in the JCL or the ACB and having the application access a recoverable data set using either open for input with CRE or open for output from a batch job.

The table in Figure 7-38 shows in which cases DFSMStvs is invoked. DFSMStvs is always invoked if:
► RLS and CRE are specified in the JCL or the MACRF parameter of the ACB macro. CRE is also known as repeatable read.
► RLS is specified in the JCL or the MACRF parameter of the ACB macro, the data set is recoverable, and it is opened for output (update processing). This allows DFSMStvs to provide the necessary transactional recovery for the data set.
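The table in Figure 7-38 reduces to a small decision function. The Python sketch below encodes the two rules just stated; the function name and signature are ours, not a system interface:

```python
# Sketch of the Figure 7-38 decision table: given the read-integrity
# option, whether the data set is recoverable, and the open mode,
# which component services the open?

def access_mode(option, recoverable, open_for_output):
    if option == "CRE":
        return "DFSMStvs"      # CRE (repeatable read) always -> DFSMStvs
    if recoverable and open_for_output:
        return "DFSMStvs"      # NRI/CR + recoverable data set + output
    return "VSAM RLS"          # every other combination

assert access_mode("CRE", False, False) == "DFSMStvs"
assert access_mode("NRI", True, True) == "DFSMStvs"
assert access_mode("CR", True, False) == "VSAM RLS"
assert access_mode("NRI", False, True) == "VSAM RLS"
```

The four assertions correspond to one cell in each row of the table.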
7.30 Application considerations

Break processing into a series of transactions

Figure 7-39 Application considerations
Application considerations
For an application to participate in transactional recovery, it must first understand the concept of a transaction. It is not a good idea simply to modify an existing batch job to use DFSMStvs with no further change, because this causes the entire job to be seen as a single transaction. As a result, locks would be held and log records would need to exist for the entire life of the job. This can cause a tremendous amount of contention for the locked resources. It can also cause performance degradation as the undo log becomes exceedingly large.

Break processing into a series of transactions
To exploit the DFSMStvs capabilities, break application processing down into a series of transactions. Have the application issue frequent sync points by invoking RRS commit and backout processing (MVS callable services SRRCMIT and SRRBACK). For information about RRS, see 7.25, “DFSMStvs use of z/OS RRMS” on page 423.

RLS and DFSMStvs provide isolation until commit/backout. Consider the following rules:
► Share locks on records accessed with repeatable read.
► Hold write locks on changed records until the end of a transaction.
► Use commit to apply all changes and release all locks.
► Information extracted from shared files must not be used across commit/backout for the following reasons:
– You need to re-access the records.
– You cannot get a record before a sync point and update it after.
– Do not position to a record before a sync point and access it after.

Modify the program or JCL to request DFSMStvs access
You need to change the ACB MACRF or JCL specifications for a data set to request DFSMStvs access. See the previous section for more information.
Prevent contention within a unit of recovery
If the application uses multiple RPLs, care must be taken in how they are used. Using various RPLs to access the same record can cause lock contention within a UR.

Handle potential loss of positioning at sync point
The batch application must have a built-in method of tracking its processing position within a series of transactions. One potential method of doing this is to use a VSAM recoverable file to track the job's commit position.

Handle all work that is part of one UR under the same context
For information about units of recovery, see 7.27, “Unit of work and unit of recovery” on page 426. Design your application to handle work that is part of one unit of recovery under the same context.

Do not use file backup/restore as a job restart technique
Today's batch applications that update VSAM files in a non-shared environment can create backup copies of the files to establish a restart/recovery point for the data. If the batch application fails, the files can be restored from the backup copies, and the batch jobs can be re-executed. This restart/recovery procedure cannot be used in a data sharing DFSMStvs environment, because restoring the point-in-time backup erases changes made by other applications sharing the data set.

Instead, the batch application must have a built-in method of tracking its processing position within a series of transactions. One potential method of doing this is to use a VSAM recoverable file to track the job's commit position. When the application fails, any uncommitted changes are backed out.

The already-committed changes cannot be backed out, because they are already visible to other jobs or transactions. In fact, it is possible that the records changed by a previously committed UR were changed again by other jobs or transactions. Therefore, when the job is rerun, it is important that it determine its restart point and not attempt to redo any changes it had committed before the failure.

For this reason, it is important that jobs and applications using DFSMStvs be written to execute as a series of transactions and use a commit point tracking mechanism for restart.
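The commit point tracking technique can be sketched as follows. This Python model is an illustration of the design only: the `srrcmit` parameter stands in for the real SRRCMIT callable service, and the commit position is assumed to be recorded in the same unit of recovery as the data updates so both are hardened together:

```python
# Sketch of restart via commit point tracking: the job records its
# position as part of each unit of recovery, so a rerun resumes after
# the last committed record instead of redoing committed work.

def run_batch(records, state, apply_update, srrcmit, fail_at=None):
    start = state.get("committed", 0)           # restart point
    for i in range(start, len(records)):
        if fail_at is not None and i == fail_at:
            raise RuntimeError("job failed")    # uncommitted UR backed out
        apply_update(records[i])
        state["committed"] = i + 1              # tracked in the same UR...
        srrcmit()                               # ...and hardened at commit

applied = []
state = {}
try:
    run_batch(["r0", "r1", "r2", "r3"], state, applied.append,
              srrcmit=lambda: None, fail_at=2)
except RuntimeError:
    pass                                        # first run dies at r2
run_batch(["r0", "r1", "r2", "r3"], state, applied.append,
          srrcmit=lambda: None)                 # rerun picks up at r2
assert applied == ["r0", "r1", "r2", "r3"]      # no record applied twice
```

Restoring a backup instead would have reapplied r0 and r1, clobbering any changes other sharers made to those records in the meantime.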
7.31 DFSMStvs logging implementation

Prepare for logging
Update CFRM policy to define structures for the system logger
Define log structures and log streams
Update data sets with LOG(NONE/UNDO/ALL) and LOGSTREAMID
Update SYS1.PARMLIB(IGDSMSxx) with DFSMStvs parameters
Ensure RACF authorization to the log streams
Update SMS configuration

Figure 7-40 DFSMStvs configuration changes
DFSMStvs configuration changes
To run DFSMStvs you need to have set up VSAM RLS already, because DFSMStvs uses the locking and buffering techniques of VSAM RLS to access the VSAM data sets. Additionally, you need to make the following changes:
► Prepare for logging.
– Update the CFRM policy to define structures for the system logger.
– Define log streams.
– Update data sets with LOG(NONE/UNDO/ALL) and LOGSTREAMID.
This is described in more detail in 7.32, “Prepare for logging” on page 433.
► Update SYS1.PARMLIB(IGDSMSxx) with DFSMStvs parameters.
For the necessary PARMLIB changes, see 7.33, “Update PARMLIB with DFSMStvs parameters” on page 436.
► Ensure that DFSMStvs has RACF authorization to the log streams.
To provide logging, DFSMStvs needs RACF UPDATE authority to the log streams. See z/OS DFSMStvs Planning and Operation Guide, SC26-7348, for more information.
► Update the SMS configuration.
Recoverable data sets must be SMS-managed. Therefore, you must change your SMS configuration to assign a storage class to those data sets. In addition, you can optionally change your data class with the BWO, LOG, and LOGSTREAMID parameters.
7.32 Prepare for logging

Update CFRM policy to add list structures for use by DFSMStvs
Update LOGR policy to add Coupling Facility structures needed for all log streams
Define log streams for use by DFSMStvs as:
Undo log
Shunt log
Log of logs
Forward recovery logs
Update data sets with LOG(NONE/UNDO/ALL) and LOGSTREAMID

Figure 7-41 Prepare for logging
Prepare for logging
You have to define log structures in two places: the Coupling Facility resource management (CFRM) policy and the system logger LOGR policy. A policy is a couple data set. Furthermore, you need to define log streams for the various kinds of logs used by DFSMStvs. If a data set is forward recoverable, you also need to assign a log stream for forward recovery to this data set.

Update the CFRM policy
The CFRM policy is used to divide Coupling Facility space into structures, and it enables you to define how MVS manages Coupling Facility resources. In 7.14, “Coupling Facility structures for RLS sharing” on page 397, we describe how to define CF structures for use by VSAM RLS. You need to run a similar job to either define a new CFRM policy or update an existing CFRM policy with the structures that are used by the logger.

Sample JCL you can use to define a new CFRM policy is shown in Figure 7-42 on page 434.
//LABEL JOB ...
//STEP10 EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//SYSIN DD *
DATA TYPE(CFRM) REPORT(YES)
DEFINE POLICY NAME(CFRM02) REPLACE(YES)
STRUCTURE NAME(LOG_IGWLOG_001)
SIZE(10240)
INITSIZE(5120)
PREFLIST(CF01,CF02)
STRUCTURE NAME(LOG_IGWSHUNT_001)
SIZE(10240)
INITSIZE(5120)
PREFLIST(CF01,CF02)
STRUCTURE NAME(LOG_IGWLGLGS_001)
SIZE(10240)
INITSIZE(5120)
PREFLIST(CF01,CF02)
STRUCTURE NAME(LOG_FORWARD_001)
SIZE(10240)
INITSIZE(5120)
PREFLIST(CF01,CF02)
/*
Figure 7-42 Example of defining structures in the CFRM policy
Update the LOGR policy
You must also define the Coupling Facility structures in the LOGR policy. The system logger component manages log streams based on the policy information that installations place in the LOGR policy.

Multiple log streams can write data to a single Coupling Facility structure. This does not mean that the log data is merged; the log data stays segregated according to log stream.

Figure 7-43 shows how to define the structures in the LOGR policy.
//LABEL JOB ...
//STEP10 EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//SYSIN DD *
DATA TYPE(LOGR) REPORT(YES)
DEFINE STRUCTURE NAME(LOG_IGWLOG_001)
LOGSNUM(10) MAXBUFSIZE(64000)
AVGBUFSIZE(4096)
DEFINE STRUCTURE NAME(LOG_IGWSHUNT_001)
LOGSNUM(10) MAXBUFSIZE(64000)
AVGBUFSIZE(4096)
DEFINE STRUCTURE NAME(LOG_IGWLGLGS_001)
LOGSNUM(10) MAXBUFSIZE(64000)
AVGBUFSIZE(4096)
DEFINE STRUCTURE NAME(LOG_FORWARD_001)
LOGSNUM(20) MAXBUFSIZE(64000)
AVGBUFSIZE(4096)
/*
Figure 7-43 Defining structures to the LOGR policy
Define log streams
Each DFSMStvs instance requires its own pair of system logs: a primary (undo) log and a secondary (shunt) log. You must define these two logs before you can use DFSMStvs. Additionally, you can define forward recovery logs and a log of logs.

For the various types of log streams that are used by DFSMStvs, refer to 7.28, “DFSMStvs logging” on page 427. A log stream is a VSAM linear data set that simply contains a collection of data. To define log streams, you can use the example JCL in Figure 7-44.
//LABEL JOB ...
//STEP10 EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//SYSIN DD *
DATA TYPE(LOGR) REPORT(YES)
DEFINE LOGSTREAM NAME(IGWTV001.IGWLOG.SYSLOG)
STRUCTNAME(LOG_IGWLOG_001)
LS_SIZE(1180)
LS_DATACLAS(dataclas) LS_STORCLAS(storclas)
STG_DUPLEX(YES) DUPLEXMODE(COND)
HIGHOFFLOAD(80) LOWOFFLOAD(60)
DIAG(YES)
DEFINE LOGSTREAM NAME(IGWTV001.IGWSHUNT.SHUNTLOG)
STRUCTNAME(LOG_IGWSHUNT_001)
LS_SIZE(100)
LS_DATACLAS(dataclas) LS_STORCLAS(storclas)
HIGHOFFLOAD(80) LOWOFFLOAD(0)
DIAG(YES)
/*
Figure 7-44 Example of defining log streams
Defining log streams
Each log stream is assigned to a structure previously defined in the LOGR policy. You can assign multiple log streams to the same structure, for example, for the forward recovery logs. If you were using CICS forward recovery in the past, you do not need to define new log streams for forward recovery: CICS and DFSMStvs can share the same forward recovery logs.

Attention: Log streams are single-extent VSAM linear data sets and need SHAREOPTIONS(3,3). The default is SHAREOPTIONS(1,3), so you must alter the share options explicitly by running IDCAMS ALTER.
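A job along the following lines can apply the change. This is a sketch only: the job name and data set name are placeholders, and you would substitute the name of the log stream data set to be altered:

```
//ALTERSO  JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER 'your.logstream.data.set' -
        SHAREOPTIONS(3 3)
/*
```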
Update data set attributes
You must update your VSAM data sets with the LOG and LOGSTREAMID parameters in order to use DFSMStvs. If you were using VSAM RLS already and do not want to change the kind of recovery, no changes are necessary. See also 7.18, “Update data sets with log parameters” on page 408.
7.33 Update PARMLIB with DFSMStvs parameters

SMS ACDS(acds) COMMDS(commds)
INTERVAL(nnn|15) DINTERVAL(nnn|150)
REVERIFY(YES|NO) ACSDEFAULTS(YES|NO)
SYSTEMS(8|32) TRACE(OFF|ON)
SIZE(nnnnnK|M) TYPE(ALL|ERROR)
JOBNAME(jobname|*) ASID(asid|*)
SELECT(event,event....) DESELECT(event,event....)
DSNTYPE(LIBRARY|PDS)

VSAM RLS
RLSINIT(NO|YES) RLS_MAX_POOL_SIZE(nnn|100)
SMF_TIME(NO|YES) CF_TIME(nnn|3600)
BMFTIME(nnn|3600) CACHETIME(nnn|3600)
DEADLOCK_DETECTION(iii|15,kkk|4) RLSTMOUT(nnn|0)

DFSMStvs
SYSNAME(sys1,sys2....) TVSNAME(nnn1,nnn2....)
TV_START_TYPE(WARM|COLD,WARM|COLD...) AKP(nnn|1000,nnn|1000)
LOG_OF_LOGS(logstream) QTIMEOUT(nnn|300)
MAXLOCKS(max|0,incr|0)

Figure 7-45 PARMLIB parameters to support DFSMStvs
New PARMLIB parameters to support DFSMStvs<br />
There are a few new parameters you can specify in the PARMLIB member IGDSMSxx to<br />
support DFSMStvs. Note that DFSMStvs requires VSAM RLS. For a description <strong>of</strong> the VSAM<br />
RLS parameters, see 7.15, “Update PARMLIB with VSAM RLS parameters” on page 400.<br />
These are the new parameters for DFSMStvs:<br />
► SYSNAME(sysname[,sysname]...)<br />
– This identifies the systems on which you want to run DFSMStvs.<br />
– This specifies the names of the systems to which the DFSMStvs instance names of the<br />
TVSNAME parameter apply.<br />
► TVSNAME(nnn[,nnn]...)<br />
– This specifies the identifiers of the DFSMStvs instances on the systems you specified<br />
in the SYSNAME parameter.<br />
– If only one TVSNAME is specified, this parameter applies only to the system on which<br />
the PARMLIB member is read; in this case, no SYSNAME is required.<br />
– The number of sysnames and the number of tvsnames must be the same.<br />
► TV_START_TYPE({WARM|COLD}[,{WARM|COLD}]...)<br />
– This specifies the start type for the single DFSMStvs instances as they are listed in the<br />
TVSNAME parameter.<br />
436 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
► AKP(nnn[,nnn]...)<br />
– This specifies the activity keypoint trigger values for the DFSMStvs instances as they<br />
appear in the TVSNAME parameter.<br />
– AKP is the number of logging operations between keypoints.<br />
– Activity keypointing enables DFSMStvs to delete records from the undo or shunt log<br />
that are no longer involved in active units of recovery.<br />
► LOG_OF_LOGS(logstream)<br />
– This specifies the name of the log stream that is used as a log of logs.<br />
► QTIMEOUT({nnn|300})<br />
– This specifies the amount of time the DFSMStvs quiesce exits allow to elapse before<br />
concluding that a quiesce event cannot be completed successfully.<br />
► MAXLOCKS({max|0},{incr|0})<br />
– max is the maximum number of unique lock requests that a single unit of recovery can<br />
make; when this value is reached, the system will issue a warning message.<br />
– incr is an increment value.<br />
– After max is reached, each time the number of locks increases by the incr value,<br />
another warning message is issued.<br />
For information about these PARMLIB parameters, see z/OS MVS Initialization and Tuning<br />
Reference, SA22-7592.<br />
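Putting the parameters together, an IGDSMSxx member for a two-system sysplex might contain lines like these (all data set, system, and log stream names are illustrative):<br />

```
SMS  ACDS(SYS1.SMS.ACDS)  COMMDS(SYS1.SMS.COMMDS)
RLSINIT(YES)
SYSNAME(SYSA,SYSB)
TVSNAME(1,2)
TV_START_TYPE(WARM,WARM)
AKP(1000,1000)
LOG_OF_LOGS(PLEX1.LOG.OF.LOGS)
QTIMEOUT(300)
MAXLOCKS(0,0)
```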
7.34 The DFSMStvs instance<br />
Figure 7-46 Sysplex with two active DFSMStvs instances (CICS regions and batch jobs on two systems, each with R/W access to a recoverable data set in the coupling facility through SMSVSAM, with DFSMStvs instances IGWTV001 and IGWTV002)<br />
The DFSMStvs instance<br />
DFSMStvs runs in the SMSVSAM address space as an instance. The name of the<br />
DFSMStvs instance is always IGWTVnnn, where nnn is a unique number per system<br />
as defined for TVSNAME in the PARMLIB member IGDSMSxx. This number can range from<br />
0 to 255. You can define up to 32 DFSMStvs instances, one per system. An example is<br />
IGWTV001.<br />
As soon as an application that does not act as a recoverable resource manager has RLS<br />
access to a recoverable data set, DFSMStvs is invoked (see also 7.29, “Accessing a data set<br />
with DFSMStvs” on page 429). DFSMStvs calls VSAM RLS (SMSVSAM) for record locking<br />
and buffering. With DFSMStvs built on top of VSAM RLS, full sharing of recoverable files<br />
becomes possible. Batch jobs can now update the recoverable files without first quiescing<br />
CICS' access to them.<br />
As a recoverable resource manager, CICS interacts directly with VSAM RLS.<br />
7.35 Interacting with DFSMStvs<br />
Interacting with DFSMStvs includes:<br />
► Use of SETSMS commands to change the IGDSMSxx specifications<br />
► Use of the SET SMS=xx command to activate a new IGDSMSxx member<br />
► Use of VARY SMS commands<br />
► Use of display commands to get information about the current DFSMStvs configuration<br />
► The IDCAMS command SHCDS<br />
Figure 7-47 Operator commands to interact with DFSMStvs<br />
Interacting with DFSMStvs<br />
This section provides a brief overview of several commands that were introduced particularly<br />
for displaying and changing DFSMStvs-related information. You can use those commands in<br />
addition to the commands we already described in 7.20, “Interacting with VSAM RLS” on<br />
page 412.<br />
SETSMS command<br />
Use the SETSMS command to overwrite the PARMLIB specifications for IGDSMSxx. The syntax<br />
is:<br />
SETSMS AKP(nnn|1000)<br />
QTIMEOUT(nnn|300)<br />
MAXLOCKS(max|0,incr|0)<br />
These are the only DFSMStvs PARMLIB specifications you can overwrite using the SETSMS<br />
command. For information about these parameters, see 7.33, “Update PARMLIB with<br />
DFSMStvs parameters” on page 436.<br />
SET SMS=xx command<br />
The command SET SMS=xx causes the IGDSMSxx member of SYS1.PARMLIB to be read. It<br />
allows you to activate another IGDSMSxx member, where xx specifies the last two digits of<br />
the member. You can change DFSMStvs parameters in IGDSMSxx that you cannot change<br />
with the SETSMS command, and activate the changes afterwards by running SET SMS=xx.<br />
Changes to parameters other than AKP, QTIMEOUT, and MAXLOCKS take effect the next<br />
time DFSMStvs restarts.<br />
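As an illustration (the member suffix and value are made up), the first command below changes the quiesce timeout on the fly, and the second activates a different IGDSMSxx member:<br />

```
SETSMS QTIMEOUT(120)
SET SMS=02
```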
VARY SMS command<br />
The VARY SMS command has the following functions for DFSMStvs:<br />
► Enable, quiesce, or disable one or all DFSMStvs instances:<br />
VARY SMS,TRANVSAM(tvsname|ALL),{QUIESCE|Q}<br />
{DISABLE|D}<br />
{ENABLE|E}<br />
This command is routed to all systems in the sysplex. If you specify ALL, it affects all<br />
DFSMStvs instances. Otherwise, it only affects the instance specified by tvsname. If<br />
QUIESCE is specified, DFSMStvs completes the current work first, but does not accept any<br />
new work. If DISABLE is specified, DFSMStvs stops processing immediately.<br />
► Enable, quiesce, or disable DFSMStvs access to a specified logstream:<br />
VARY SMS,LOG(logstream),{QUIESCE|Q}<br />
{DISABLE|D}<br />
{ENABLE|E}<br />
Quiescing or disabling the DFSMStvs undo or shunt logstream is equivalent to quiescing<br />
or disabling DFSMStvs processing respectively. Quiescing or disabling the log of logs has<br />
no influence on DFSMStvs processing. Quiescing or disabling a forward recovery log<br />
causes all attempts to process data sets which use this log stream to fail.<br />
► Start or stop peer recovery processing for a failed instance <strong>of</strong> DFSMStvs:<br />
VARY SMS,TRANVSAM(tvsname),PEERRECOVERY,{ACTIVE}<br />
{ACTIVEFORCE}<br />
{INACTIVE}<br />
This command applies only to the system on which it is issued. That system will then be<br />
responsible for performing all peer recovery processing for a failed DFSMStvs instance.<br />
For a discussion of the term peer recovery, see 7.24, “Overview of DFSMStvs” on<br />
page 421.<br />
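As illustrations (the instance and log stream names are made up), the following commands quiesce one instance, disable access to a forward recovery log, and activate peer recovery on the issuing system:<br />

```
VARY SMS,TRANVSAM(IGWTV001),QUIESCE
VARY SMS,LOG(PLEX1.FWD.LOG),DISABLE
VARY SMS,TRANVSAM(IGWTV001),PEERRECOVERY,ACTIVE
```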
Display command<br />
There are a few display commands you can use to get information about DFSMStvs.<br />
► To display common DFSMStvs information:<br />
DISPLAY SMS,TRANVSAM{,ALL}<br />
This command lists information about the DFSMStvs instance on the system where it was<br />
issued. To get information from all systems, use ALL. This information includes the name and<br />
state of the DFSMStvs instance, values for AKP, start type, and qtimeout, and also the<br />
names, types, and states <strong>of</strong> the used log streams.<br />
► To display information about a particular job that uses DFSMStvs:<br />
DISPLAY SMS,JOB(jobname)<br />
The information about the particular job includes the current job step, the current ID, and<br />
status of the unit of recovery used by this job.<br />
► To display information about a particular unit of recovery currently active within the<br />
sysplex:<br />
DISPLAY SMS,URID(urid|ALL)<br />
This command provides information about a particular UR in the sysplex or about all URs<br />
of the system on which this command was issued. If ALL is specified, you do not obtain<br />
information about shunted URs and URs that are restarting. The provided information<br />
includes the age and status of the UR, the job name with which this UR is associated, and the<br />
current step within the job.<br />
► To display entries currently contained in the shunt log:<br />
DISPLAY SMS,SHUNTED,{SPHERE(sphere)|<br />
URID(urid|ALL)}<br />
Entries are moved to the shunt log when DFSMStvs is unable to finish processing a sync<br />
point for them. There are several reasons this might occur; an I/O error is one of the<br />
possible causes. Depending on what was specified, you get information for a particular<br />
VSAM sphere, a particular UR, or for all shunted URs in the sysplex.<br />
► To display information about logstreams that DFSMStvs is currently using:<br />
DISPLAY SMS,LOG(logstream|ALL)<br />
If ALL is specified, information about all log streams in use is provided from the system on<br />
which the command is issued. The output includes the status and type of the log stream,<br />
the job name and URID of the oldest unit of recovery using the log, and also a list of all<br />
DFSMStvs instances that are using the log.<br />
► To display information about a particular data set:<br />
DISPLAY SMS,DSNAME(dsn)<br />
Use this command to display jobs currently accessing the data set using DFSMStvs<br />
access on the systems within the sysplex.<br />
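Concrete invocations of the display commands above might look like this (job and data set names are illustrative):<br />

```
DISPLAY SMS,TRANVSAM,ALL
DISPLAY SMS,JOB(BATCH01)
DISPLAY SMS,URID(ALL)
DISPLAY SMS,SHUNTED,URID(ALL)
DISPLAY SMS,LOG(ALL)
DISPLAY SMS,DSNAME(TEST.VSAM.KSDS)
```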
IDCAMS command SHCDS<br />
The IDCAMS command SHCDS was enhanced to display and modify recovery information<br />
related to DFSMStvs. For more information about this command, see z/OS DFSMS Access<br />
Method Services for Catalogs, SC26-7394.<br />
Important: This section provides only an overview of the new operator commands for<br />
working with DFSMStvs. Before using these commands (other than the DISPLAY<br />
command), read the official z/OS manuals carefully.<br />
7.36 Summary<br />
Figure 7-48 Chapter summary (base VSAM: functions and limitations; VSAM RLS: functions and limitations; DFSMStvs: functions)<br />
Summary<br />
In this chapter we showed the limitations of base VSAM that made it necessary to develop<br />
VSAM RLS. We then described the limitations of VSAM RLS that led to its enhancement with<br />
the functions provided by DFSMStvs.<br />
► Base VSAM<br />
– VSAM does not provide read or read/write integrity for share options other than 1.<br />
– User needs to use enqueue/dequeue macros for serialization.<br />
– The granularity of sharing on a VSAM cluster is at the control interval level.<br />
– Buffers reside in the address space.<br />
– Base VSAM does not support CICS as a recoverable resource manager; a CICS file<br />
owning region is necessary to ensure recovery.<br />
► VSAM RLS<br />
– Enhancement <strong>of</strong> base VSAM.<br />
– User does not need to serialize; this is done by RLS locking.<br />
– Granularity of sharing is record level, not CI level.<br />
– Buffers reside in the data space and Coupling Facility.<br />
– Supports CICS as a recoverable resource manager (CICS logging for recoverable data<br />
sets); no CICS file owning region is necessary.<br />
– Does not act as a recoverable resource manager (provides no logging for recoverable<br />
data sets).<br />
– Sharing of recoverable data sets that are in use by CICS with non-CICS applications<br />
like batch jobs is not possible.<br />
► DFSMStvs<br />
– Enhancement of VSAM RLS.<br />
– Uses VSAM RLS to access VSAM data sets.<br />
– Acts as a recoverable resource manager, thus provides logging.<br />
– Enables non-CICS applications like batch jobs to share recoverable data sets with<br />
CICS.<br />
Chapter 8. Storage management hardware<br />
The use of DFSMS requires storage management hardware that includes both direct access<br />
storage device (DASD) and tape device types. In this chapter we provide an overview of both<br />
storage device categories and a brief introduction to RAID technology.<br />
For many years, DASDs have been the most used storage devices on IBM eServer zSeries<br />
systems and their predecessors, delivering the fast random access to data and high<br />
availability that customers have come to expect.<br />
We cover the following types of DASD:<br />
► Traditional DASD (such as 3380 and 3390)<br />
► Enterprise Storage Server (ESS)<br />
► DS6000 and DS8000<br />
The era of tapes began before DASD was introduced. During that time, tapes were used as<br />
the primary application storage medium. Today customers use tapes for such purposes as<br />
backup, archiving, or data transfer between companies.<br />
The following types of tape devices are described:<br />
► Traditional tapes like 3480 and 3490<br />
► IBM Magstar® 3590 and 3592<br />
► Automated tape library (ATL) 3494<br />
► Virtual tape server (VTS)<br />
We also briefly explain the storage area network (SAN) concept.<br />
8.1 Overview <strong>of</strong> DASD types<br />
Traditional DASD<br />
3380 Models J, E, K<br />
3390 Models 1, 2, 3, 9<br />
DASD based on RAID technology and Seascape<br />
architecture<br />
Enterprise Storage Server (ESS)<br />
DS6000 and DS8000 series<br />
Figure 8-1 Overview <strong>of</strong> DASD types<br />
Traditional DASD<br />
In the era of traditional DASD, the hardware consisted of controllers like 3880 and 3990,<br />
which contained the necessary intelligent functions to operate a storage subsystem. The<br />
controllers were connected to S/390 systems through parallel or ESCON channels. Behind a<br />
controller there were several model groups of the 3390 that contained the disk drives. Based<br />
on the models, these disk drives had various capacities per device. Within each model group,<br />
the various models provide either four, eight, or twelve devices. All A-units come with four<br />
controllers, providing a total of four paths to the 3990 Storage Control. At that time, you were<br />
not able to change the characteristics of a given DASD device.<br />
DASD based on RAID technology<br />
With the introduction of the RAMAC Array in 1994, IBM first introduced storage subsystems<br />
for S/390 systems based on RAID technology. We discuss the various RAID implementations<br />
in Figure 8-2 on page 448.<br />
The more modern IBM DASD products, such as Enterprise Storage Server (ESS), DS6000,<br />
DS8000, and DASD from other vendors, emulate IBM 3380 and 3390 volumes in geometry,<br />
capacity of tracks, and number of tracks per cylinder. This emulation makes all the other<br />
entities think they are dealing with real 3380s or 3390s. These entities include data<br />
processing staff who do not work directly with storage, as well as JCL, MVS commands, open<br />
routines, access methods, IOS, and channels. One benefit of this emulation is that it allows<br />
DASD manufacturers to implement changes in the real disks, including the geometry of tracks<br />
and cylinders, without affecting the way those components interface with DASD. From an<br />
operating system point of view, device types will always be 3390s, sometimes with much<br />
higher numbers of cylinders, but 3390s nonetheless.<br />
ESS technology<br />
The IBM TotalStorage Enterprise Storage Server (ESS) is the IBM disk storage server,<br />
developed using IBM Seascape architecture. The ESS provides functionality to the family of<br />
e-business servers, and also to non-IBM (that is, Intel®-based and UNIX-based) families of<br />
servers. Across all of these environments, the ESS features unique capabilities that allow it to<br />
meet the most demanding requirements of performance, capacity, and data availability that<br />
the computing business requires. See 8.4, “Enterprise Storage Server (ESS)” on page 453 for<br />
more information about this topic.<br />
Seascape architecture<br />
The Seascape architecture is the key to the development of the IBM storage products.<br />
Seascape allows IBM to take the best of the technologies developed by the many IBM<br />
laboratories and integrate them, thereby producing flexible and upgradeable storage<br />
solutions. This Seascape architecture design has allowed the IBM TotalStorage Enterprise<br />
Storage Server to evolve from the initial E models to the succeeding F models, and to the<br />
later 800 models, each featuring new, more powerful hardware and functional enhancements,<br />
and always integrated under the same successful architecture with which the ESS was<br />
originally conceived. See 8.3, “Seascape architecture” on page 450 for more information.<br />
Note: In this publication, we use the terms disk or head disk assembly (HDA) for the real<br />
devices, and the terms DASD volumes or DASD devices for the logical 3380/3390s.<br />
8.2 Redundant array <strong>of</strong> independent disks (RAID)<br />
Figure 8-2 Redundant array of independent disks (RAID): RAID-1 mirroring (primary and alternate copies), RAID-3 (dedicated parity disk), and RAID-5 (data plus distributed parity)<br />
RAID architecture<br />
Redundant array of independent disks (RAID) is a direct access storage architecture where<br />
data is recorded across multiple physical disks with parity separately recorded, so that no loss<br />
of access to data results from the loss of any one disk in the array.<br />
RAID breaks the one-to-one association of volumes with devices. A logical volume is now the<br />
addressable entity presented by the controller to the attached systems. The RAID unit maps<br />
the logical volume across multiple physical devices. Similarly, blocks of storage on a single<br />
physical device may be associated with multiple logical volumes. Because a logical volume is<br />
mapped by the RAID unit across multiple physical devices, it is now possible to overlap<br />
processing for multiple cache misses to the same logical volume because cache misses can<br />
be satisfied by separate physical devices.<br />
The RAID concept involves many small computer system interface (SCSI) disks replacing a<br />
big one. The major RAID advantages are:<br />
► Performance (due to parallelism)<br />
► Cost (SCSI are commodities)<br />
► zSeries compatibility<br />
► Environment (space and energy)<br />
However, RAID increased the chances of malfunction due to media and disk failures and the<br />
fact that the logical device is now residing on many physical disks. The solution was<br />
redundancy, which wastes space and causes performance problems such as the “write penalty” and<br />
“free space reclamation.” To address this performance issue, large caches are implemented.<br />
Note: The ESS storage controllers use the RAID architecture that enables multiple logical<br />
volumes to be mapped on a single physical RAID group. If required, you can still separate<br />
data sets on a physical controller boundary for the purpose of availability.<br />
RAID implementations<br />
Except for RAID-1, each manufacturer sets the number of disks in an array. An array is a set<br />
of logically related disks, where a parity applies.<br />
Various implementations certified by the RAID Architecture Board are:<br />
RAID-1 This is simple disk mirroring, like dual copy.<br />
RAID-3 This has an array with one dedicated parity disk and just one I/O request at a<br />
time, with intra-record striping. It means that the written physical block is striped<br />
and each piece (together with the parity) is written in parallel in each disk of the<br />
array. The access arms move together. It has a high data rate and a low I/O rate.<br />
RAID-5 This has an array with one distributed parity (there is no dedicated disk for<br />
parities). It does I/O requests in parallel with extra-record striping, meaning each<br />
physical block is written on a single disk. The access arms move independently. It<br />
has strong caching to avoid write penalties; that is, four disk I/Os per write.<br />
RAID-5 has a high I/O rate and a medium data rate. RAID-5 is used by the IBM<br />
2105 controller with 8-disk arrays in the majority of configurations.<br />
RAID-5 does the following:<br />
► It reads data from an undamaged disk. This is one single disk I/O operation.<br />
► It reads data from a damaged disk, which implies (n-1) disk I/Os, to recreate<br />
the lost data, where n is the number of disks in the array.<br />
► For every write to an undamaged disk, RAID-5 does four I/O operations to<br />
store a correct parity block; this is called a write penalty. This penalty can be<br />
relieved with strong caching and a slice-triggered algorithm (coalescing disk<br />
updates from cache into a single parallel I/O).<br />
► For every write to a damaged disk, RAID-5 does n-1 reads and one parity<br />
write.<br />
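The parity arithmetic behind these I/O counts is plain XOR. The following sketch (ordinary Python, not z/OS code) shows the four logical I/Os of a RAID-5 small write, and the reconstruction of a lost block from the surviving disks:<br />

```python
# A minimal sketch of RAID-5 XOR parity (illustrative only).
# The four logical I/Os of a small write are marked in the comments.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe on a 4-disk array: three data blocks plus one parity block.
data = [b"\x11", b"\x22", b"\x33"]
parity = xor_blocks(xor_blocks(data[0], data[1]), data[2])

# Small write to data[1] without touching the rest of the stripe:
old_data = data[1]          # I/O 1: read old data block
old_parity = parity         # I/O 2: read old parity block
new_data = b"\x7f"
data[1] = new_data          # I/O 3: write new data block
parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
                            # I/O 4: write new parity block

# After losing data[1], XOR the surviving blocks with parity to recreate it
# (the n-1 reads described above).
recovered = xor_blocks(xor_blocks(data[0], data[2]), parity)
assert recovered == new_data
```

The write penalty disappears for full-stripe writes, where new parity can be computed entirely from the new data; this is what the caching and coalescing described above try to approximate.<br />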
RAID-6 This has an array with two distributed parities and I/O requests in parallel with<br />
extra-record striping. Its access arms move independently (Reed-Solomon P-Q<br />
parity). The write penalty is greater than in RAID-5, with six I/Os per write.<br />
RAID-6+ This is without write penalty (due to log-structured file, or LFS), and has<br />
background free-space reclamation. The access arms all move together for<br />
writes. It is used by the RVA controller.<br />
RAID-10 This is a RAID architecture designed to combine the performance of striping<br />
with the redundancy of mirroring. RAID-10 is optionally implemented in the<br />
IBM 2105.<br />
Note: Data striping (stripe sequential physical blocks in separate disks) is sometimes<br />
called RAID-0, but it is not a real RAID because there is no redundancy, that is, no parity<br />
bits.<br />
8.3 Seascape architecture<br />
Powerful storage server<br />
Snap-in building blocks<br />
Universal data access<br />
Storage sharing<br />
Data copy sharing<br />
Network<br />
Direct channel<br />
Shared storage transfer<br />
True data sharing<br />
Figure 8-3 Seascape architecture<br />
Seascape architecture<br />
The IBM Enterprise Storage Server’s “architecture for e-business” design is based on the IBM<br />
storage enterprise architecture, Seascape. The Seascape architecture defines<br />
next-generation concepts for storage by integrating modular building block technologies from<br />
IBM, including disk, tape, and optical storage media, powerful processors, and rich software.<br />
Integrated Seascape solutions are highly reliable, scalable, and versatile, and support<br />
specialized applications on servers ranging from PCs to supercomputers. Virtually all types<br />
of servers can concurrently attach to the ESS, including iSeries® and AS/400® systems. As a<br />
result, ESS can be the external disk storage system <strong>of</strong> choice for AS/400 as well as iSeries<br />
systems in heterogeneous SAN environments.<br />
Seascape has three basic concepts:<br />
► Powerful storage server<br />
► Snap-in building blocks<br />
► Universal data access<br />
DFSMS provides device support for the IBM 2105 Enterprise Storage Server (ESS), a<br />
high-end storage subsystem. The ESS storage subsystem succeeded the 3880, 3990, and<br />
9340 subsystem families. Designed for mid-range and high-end environments, the ESS gives<br />
you large capacity, high performance, continuous availability, and storage expandability. You<br />
can read more about ESS in 8.4, “Enterprise Storage Server (ESS)” on page 453.<br />
The ESS was the first of the Seascape architecture storage products to attach directly to IBM<br />
System z and open system platforms. The Seascape architecture products come with<br />
integrated storage controllers. These integrated storage controllers allow the attachment <strong>of</strong><br />
physical storage devices that emulate 3390 Models 2, 3, and 9, or provide 3380 track<br />
compatibility mode.<br />
Powerful storage server<br />
The storage system is intelligent and independent, and it can be reached by channels or<br />
through the network. It is powered by a set of fast RISC processors.<br />
Snap-in building blocks<br />
Each Seascape product comprises building blocks, such as:<br />
► Scalable n-way RISC server, PCI-based. This provides the logic of the storage server.<br />
► Memory cache from RISC processor memory.<br />
► Channel attachments, such as FC-AL, SCSI, ESCON, FICON and SSA.<br />
► Network attachments, such as Ethernet, FDDI, TR, and ATM.<br />
► These attachments can also implement functions, that is, a mix of network interfaces (to<br />
be used as a remote and independent storage server) and channel interfaces (to be used<br />
as a storage controller interface).<br />
► Software building blocks, such as an AIX subset, Java applications, and Tivoli® Storage<br />
Manager. High-level language (HLL) is more flexible than microcode, and is easier to write<br />
and maintain.<br />
► Storage adapters, for mixed storage devices technologies.<br />
► Storage device building blocks, such as serial disk (7133), 3590 tape (Magstar), and<br />
optical (3995).<br />
► Silos and robots (3494).<br />
Universal data access<br />
Universal data access allows a wide array of connectivity, such as z/OS, UNIX, Linux, and<br />
OS/400®, to common data. There are three types of universal access: storage sharing, data<br />
copy sharing, and true data sharing, as explained here.<br />
► Storage sharing<br />
Physical storage (DASD or tape) is statically divided into fixed partitions available to a<br />
given processor. It is not a software function. The subsystem controller knows which<br />
processors own which storage partitions. In a sense, only capacity is shared, not data; one<br />
server cannot access the data of the other server. It is required that the manual<br />
reassignment of storage capacity between partitions be simple and nondisruptive.<br />
The benefits are:<br />
– Purchase higher quantities with greater discounts<br />
– Only one type of storage to manage<br />
– Static shifting of capacity as needed<br />
The drawbacks are:<br />
– Higher price for SCSI data<br />
– Collocation at 20 meters of the SCSI servers<br />
– No priority concept between z/<strong>OS</strong> and UNIX/NT I/O requests<br />
► Data copy sharing<br />
Data copy sharing is an interim data replication solution (waiting for a true data sharing)<br />
done by data replication from one volume accessed by a platform to another volume<br />
accessed by another platform. The replication can be done through software or<br />
hardware. There are three ways to implement data copy sharing:<br />
– Network: Via network data transfer, such as SNA or TCP/IP. This method has<br />
drawbacks, such as CPU and network overhead; it is still slow and expensive for<br />
massive data transfer.<br />
– Direct channel: Direct data transfer between the processors involved using channel or<br />
bus capabilities, referred to as bulk data transfer.<br />
– Shared storage transfer: Writing an intermediate flat file by software into the storage<br />
subsystem cache, that is read (and translated) by the receiving processor, so the<br />
storage is shared.<br />
► True data sharing<br />
For data sharing between multiple platforms for read/write of a single copy that addresses<br />
the complex issues of mixed data types, file structures, databases, and SCPs, there is no<br />
available solution.<br />
8.4 Enterprise Storage Server (ESS)<br />
Two 6-way RISC processors (668 MHZ)<br />
4.8 GB/sec of aggregate bandwidth<br />
Up to 32 ESCON/SCSI/mixed<br />
Up to 16 FICON and FCP channels<br />
Up to 64 GB of cache<br />
2 GB of NVS cache<br />
18.2/36.4/72.8/145.6 GB capacity disk options<br />
Up to 55.9 TB capacity<br />
8 x 160 MB/sec SSA loops<br />
10,000 rpm and 15,000 rpm disk options<br />
Connects to SAN<br />
RAID-5 or RAID-10<br />
Figure 8-4 Enterprise Storage Server model 800<br />
Enterprise Storage Server (ESS)<br />
The IBM Enterprise Storage Server (ESS) is a high-performance, high-availability, high-capacity storage subsystem. It contains two six-way RISC processors (668 MHz) with up to 64 GB of cache and 2 GB of non-volatile storage (NVS) to protect from data loss during power outages.
Connectivity to <strong>IBM</strong> mainframes is through up to 32 ESCON channels and up to 16 FICON<br />
channels. For other platforms, such as <strong>IBM</strong> <strong>System</strong> i®, UNIX, or NT, the connectivity is<br />
through up to 32 SCSI interfaces.<br />
Cache<br />
Cache is used to store both read and write data to improve ESS performance to the attached<br />
host systems. There is the choice <strong>of</strong> 8, 16, 24, 32, or 64 GB <strong>of</strong> cache. This cache is divided<br />
between the two clusters <strong>of</strong> the ESS, giving the clusters their own non-shared cache. The<br />
ESS cache uses ECC (error checking and correcting) memory technology to enhance<br />
reliability and error correction <strong>of</strong> the cache. ECC technology can detect single- and double-bit<br />
errors and correct all single-bit errors. Memory scrubbing, a built-in hardware function, is also<br />
performed and is a continuous background read <strong>of</strong> data from memory to check for correctable<br />
errors. Correctable errors are corrected and rewritten to cache. To protect against loss <strong>of</strong> data<br />
on a write operation, the ESS stores two copies <strong>of</strong> written data, one in cache and the other in<br />
NVS.<br />
NVS cache<br />
NVS is used to store a second copy <strong>of</strong> write data to ensure data integrity if there is a power<br />
failure or a cluster failure and the cache copy is lost. The NVS <strong>of</strong> cluster 1 is located in cluster<br />
2 and the NVS <strong>of</strong> cluster 2 is located in cluster 1. In this way, in the event <strong>of</strong> a cluster failure,<br />
the write data for the failed cluster will be in the NVS <strong>of</strong> the surviving cluster. This write data is<br />
then de-staged at high priority to the disk arrays. At the same time, the surviving cluster will<br />
start to use its own NVS for write data, ensuring that two copies <strong>of</strong> write data are still<br />
maintained. This ensures that no data is lost even in the event <strong>of</strong> a component failure.<br />
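As an illustration of the two-copy write protection just described, here is a minimal Python model (hypothetical names and structure, not IBM microcode): a write completes only after it is stored in the owning cluster's cache and in the partner cluster's NVS, and a surviving cluster destages the failed cluster's writes from its own NVS.

```python
# Simplified model of ESS fast write: copy 1 goes to the owning cluster's
# cache, copy 2 to the partner cluster's NVS; on a cluster failure, the
# survivor destages the failed cluster's writes from its own NVS.
class Cluster:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # volatile read/write cache
        self.nvs = {}     # battery-backed NVS holding the partner's writes

def write(track, data, owner, partner):
    owner.cache[track] = data     # copy 1: owning cluster's cache
    partner.nvs[track] = data     # copy 2: partner cluster's NVS
    return "complete"             # host sees I/O complete; destage is later

def fail_over(survivor):
    # Destage the failed cluster's write data held in the survivor's NVS
    # at high priority; the survivor then uses its own NVS for new writes.
    destaged = dict(survivor.nvs)
    survivor.nvs.clear()
    return destaged

c1, c2 = Cluster("cluster1"), Cluster("cluster2")
write("volA/track5", b"payload", c1, c2)
recovered = fail_over(c2)         # cluster 1 fails; cluster 2 destages
print(recovered)
```

The point of placing each cluster's NVS in the other cluster is visible in the sketch: whichever cluster fails, its second copy is, by construction, in the surviving hardware.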
ESS Model 800<br />
The ESS Model 800 has a 2 GB NVS. Each cluster has 1 GB <strong>of</strong> NVS, made up <strong>of</strong> four cards.<br />
Each pair <strong>of</strong> NVS cards has its own battery-powered charger system that protects data even<br />
if power is lost on the entire ESS for up to 72 hours. This model has the following<br />
enhancements:<br />
► Model 800 allows 4.8 GBps <strong>of</strong> aggregate bandwidth.<br />
► On the disk side, the ESS has eight Serial Storage Architecture (SSA) loops, each with a rate of 160 MBps for accessing the disks. See "SSA loops" on page 464 for more information about this topic.
► ESS implements RAID-5 or RAID-10 for availability and has eight disks in the majority <strong>of</strong><br />
the arrays. See “RAID-10” on page 466 for more information about this topic.<br />
► Four disk sizes of 18.2, 36.4, 72.8, and 145.6 GB, which can be intermixed. The ESS
maximum capacity is over 55.9 TB with a second frame attached.<br />
ESS Model 750<br />
The ESS 750 is intended for smaller enterprises that need the enterprise-level advanced<br />
functions, reliability, and availability <strong>of</strong>fered by the Model 800, but at a lower entry cost and<br />
size. It is specifically designed to meet the high demands <strong>of</strong> medium-sized mainframe<br />
environments, and for this reason it is closely tied in with the <strong>IBM</strong> z890 <strong>of</strong>fering.<br />
The ESS 750 has capabilities similar to the ESS 800. The ESS Model 750 consists <strong>of</strong> two<br />
clusters, each with a two-way processor and 4 or 8 GB cache. It can have two to six Fibre<br />
Channel/FICON or ESCON host adapters. The storage capacity ranges from a minimum <strong>of</strong><br />
1.1 TB up to a maximum <strong>of</strong> 4 TB. A key feature is that the ESS 750 is upgradeable,<br />
non-disruptively, to the ESS Model 800, which can grow to more than 55 TB <strong>of</strong> physical<br />
capacity.<br />
Note: Effective April 28, 2006, <strong>IBM</strong> withdrew from marketing the following products:<br />
► <strong>IBM</strong> TotalStorage Enterprise Storage Server (ESS) Models 750 and 800<br />
► <strong>IBM</strong> Standby Capacity on Demand for ESS <strong>of</strong>fering<br />
For information about replacement products, see 8.16, “<strong>IBM</strong> TotalStorage DS6000” on<br />
page 474 and 8.17, “<strong>IBM</strong> TotalStorage DS8000” on page 477.<br />
SCSI protocol<br />
Although we do not cover other platforms in this publication, we provide here a brief overview<br />
<strong>of</strong> the SCSI protocol. The SCSI adapter is a card on the host. It connects to a SCSI bus<br />
through a SCSI port. There are two types of SCSI supported by the ESS:
► SCSI Fast Wide with 20 MBps<br />
► Ultra SCSI Wide with 40 MBps<br />
8.5 ESS universal access<br />
Storage consolidation
Storage sharing solution with dynamic reallocation
PPRC available for XP and UNIX - human control through Web interface
StorWatch support
Figure 8-5 ESS universal access<br />
ESS universal access<br />
ESS is a product designed to implement storage consolidation that puts all <strong>of</strong> your enterprise<br />
data under the same cover. This consolidation is the first step in achieving server<br />
consolidation, that is, to put all <strong>of</strong> your enterprise applications under the same z/<strong>OS</strong> cluster.<br />
PPRC support<br />
<strong>IBM</strong> includes a Web browser interface called TotalStorage Enterprise Storage Server (ESS)<br />
Copy Services. The interface is part <strong>of</strong> the ESS subsystem and can be used to perform<br />
FlashCopy and PPRC functions.<br />
Many <strong>of</strong> the ESS features are now available to non-zSeries platforms, such as PPRC for<br />
Windows XP and UNIX, where the control is through a Web interface.<br />
StorWatch support<br />
On the software side, there is StorWatch, a range of products for UNIX/XP that does what DFSMS and automation do for System z. The TotalStorage Expert, formerly marketed as
StorWatch Expert, is a member <strong>of</strong> the <strong>IBM</strong> and Tivoli <strong>System</strong>s family <strong>of</strong> solutions for<br />
Enterprise Storage Resource Management (ESRM). These are <strong>of</strong>ferings that are designed to<br />
complement one another, and provide a total storage management solution.<br />
TotalStorage Expert is an innovative s<strong>of</strong>tware tool that gives administrators powerful, yet<br />
flexible storage asset, capacity, and performance management capabilities to centrally<br />
manage Enterprise Storage Servers located anywhere in the enterprise.<br />
8.6 ESS major components<br />
Figure 8-6 ESS major components<br />
ESS Model 800 major components<br />
Figure 8-6 shows an <strong>IBM</strong> TotalStorage Enterprise Storage Server Model 800 and its major<br />
components. As you can see, the ESS base rack consists <strong>of</strong> two clusters, each with its own<br />
power supplies, batteries, SSA device adapters, processors, cache and NVS, CD drive, hard<br />
disk, floppy disk, and network connections. Both clusters have access to any host adapter<br />
card, even though they are physically spread across the clusters.<br />
At the top <strong>of</strong> each cluster is an ESS cage. Each cage provides slots for up to 64 disk drives,<br />
32 in front and 32 at the back.<br />
This storage box has two enclosures:<br />
► A base enclosure, with:<br />
– Two 3-phase power supplies<br />
– Up to 128 disk drives in two cages<br />
– Feature #2110 for Expansion Enclosure attachment<br />
– Host adapters, cluster processors, cache and NVS, and SSA device adapters<br />
► An expansion enclosure, with:<br />
– Two 3-phase power supplies<br />
– Up to 256 disk drives in four cages for additional capacity<br />
8.7 ESS host adapters<br />
Host adapter bays<br />
4 bays<br />
4 host adapters per bay<br />
ESCON host adapters<br />
Up to 32 ESCON links<br />
2 ESCON links per host adapter<br />
2 Gb FICON host adapters<br />
Up to 16 FICON links<br />
1 FICON link per host adapter<br />
Auto speed detection - 1 Gb or 2 Gb<br />
SCSI host adapters<br />
Up to 32 SCSI bus connections<br />
2 SCSI ports per host adapter<br />
2 Gb Fibre Channel host adapters<br />
Up to 16 Fibre Channel links<br />
1 Fibre Channel port per host adapter<br />
Auto speed detection - 1 Gb or 2 Gb<br />
Figure 8-7 Host adapters<br />
ESS host adapters<br />
The ESS has four host adapter bays, two in each cluster. Each bay supports up to four host<br />
adapter cards. Each <strong>of</strong> these host adapter cards can be for FICON, ESCON, SCSI, or Fibre<br />
Channel server connection. Figure 8-7 lists the main characteristics <strong>of</strong> the ESS host<br />
adapters.<br />
Each host adapter can communicate with either cluster. To install a new host adapter card,<br />
the bay must be powered <strong>of</strong>f. For the highest path availability, it is important to spread the host<br />
connections across all the adapter bays. For example, if you have four ESCON links to a host,<br />
each connected to a separate bay, then the loss <strong>of</strong> a bay for upgrade only impacts one <strong>of</strong> the<br />
four connections to the server. The same is also valid for a host with FICON connections to<br />
the ESS.<br />
Similar considerations apply for servers connecting to the ESS by means <strong>of</strong> SCSI or Fibre<br />
Channel links. For open system servers, the Subsystem Device Driver (SDD) program that<br />
comes standard with the ESS can be installed on the connecting host servers to provide<br />
multiple paths or connections to handle errors (path failover) and balance the I/O load to the<br />
ESS.<br />
The ESS connects to a large number <strong>of</strong> servers, operating systems, host adapters, and SAN<br />
fabrics. A complete and current list is available at the following Web site:<br />
http://www.storage.ibm.com/hards<strong>of</strong>t/products/ess/supserver.htm<br />
Adapters can be intermixed<br />
Any combination <strong>of</strong> host<br />
adapter cards up to a<br />
maximum <strong>of</strong> 16<br />
8.8 FICON host adapters<br />
FICON host adapters<br />
Up to 16 FICON host adapters<br />
One port with an LC connector type<br />
per adapter (2 Gigabit Link)<br />
Long wave or short wave<br />
Up to 200 MB/sec full duplex<br />
Up to 10 km distance with long wave<br />
and 300 m with short wave<br />
Each host adapter communicates<br />
with both clusters<br />
Each FICON channel link can<br />
address all 16 ESS CU images<br />
Logical paths<br />
256 CU logical paths per FICON port<br />
4096 logical paths per ESS<br />
Addresses<br />
16,384 device addresses per channel<br />
FICON distances<br />
10 km distance (without repeaters)<br />
100 km distance (with extenders)<br />
Figure 8-8 FICON host adapter<br />
FICON host adapters<br />
FICON, or Fibre Connection, is based on the standard Fibre Channel architecture, and
therefore shares the attributes associated with Fibre Channel. This includes the common<br />
FC-0, FC-1, and FC-2 architectural layers, the 100 MBps bidirectional (full-duplex) data<br />
transfer rate, and the point-to-point distance capability <strong>of</strong> 10 kilometers. The ESCON<br />
protocols have been mapped to the FC-4 layer, the Upper Level Protocol (ULP) layer, <strong>of</strong> the<br />
Fibre Channel architecture. All this provides a full-compatibility interface with previous S/390<br />
s<strong>of</strong>tware and puts the zSeries servers in the Fibre Channel industry standard.<br />
FICON versus ESCON<br />
FICON goes beyond ESCON limits:<br />
► A higher addressing limit: from 1024 device addresses per channel to up to 16,384 (a maximum of 4096 devices is supported within one ESS).
► Up to 256 control unit logical paths per port.<br />
► A FICON channel to the ESS allows multiple concurrent I/O connections (an ESCON channel supports only one I/O connection at a time).
► Greater channel and link bandwidth: FICON has up to 10 times the link bandwidth of ESCON (1 Gbps full duplex, compared to 200 Mbps half duplex), and more than four times the effective channel bandwidth.
► FICON path consolidation using switched point-to-point topology.<br />
► Greater unrepeated fiber link distances (from 3 km for ESCON to up to 10 km, or 20 km<br />
with an RPQ, for FICON).<br />
These characteristics allow simpler and more powerful configurations. The ESS supports up<br />
to 16 host adapters, which allows for a maximum <strong>of</strong> 16 Fibre Channel/FICON ports per<br />
machine, as shown in Figure 8-8 on page 458.<br />
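The logical-path and device counts quoted in this section are simple products of the per-port and per-control-unit limits; a quick cross-check (all figures taken from the text):

```python
# Cross-check of the FICON addressing figures quoted in this section.
ports = 16                        # maximum FICON host adapters, one port each
cu_paths_per_port = 256           # control-unit logical paths per FICON port
logical_paths = ports * cu_paths_per_port
print(logical_paths)              # 4096 logical paths per ESS

cu_images = 16                    # ESS control-unit images per FICON link
devices_per_cu_image = 256        # unit addresses per control-unit image
print(cu_images * devices_per_cu_image)  # 4096 devices within one ESS
```

The 16,384 device addresses per channel quoted earlier is the FICON channel limit; a single ESS exposes at most 4096 of them (16 control-unit images of 256 devices each).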
Each Fibre Channel/FICON host adapter provides one port with an LC connector type. The<br />
adapter is a 2 Gb card and provides a nominal 200 MBps full-duplex data rate. The adapter<br />
will auto-negotiate between 1 Gb and 2 Gb, depending upon the speed <strong>of</strong> the connection at<br />
the other end <strong>of</strong> the link. For example, from the ESS to a switch/director, the FICON adapter<br />
can negotiate to 2 Gb if the switch/director also has 2 Gb support. The switch/director to host<br />
link can then negotiate at 1 Gb.<br />
Host adapter cards<br />
There are two types <strong>of</strong> host adapter cards you can select: long wave (feature 3024), and short<br />
wave (feature 3025). With long-wave laser, you can connect nodes at distances <strong>of</strong> up to<br />
10 km (without repeaters). With shortwave laser, you can connect at distances <strong>of</strong> up to<br />
300 m. These distances can be extended using switches or directors.<br />
8.9 ESS disks<br />
Eight-packs<br />
Set <strong>of</strong> 8 similar capacity/rpm disk drives packed<br />
together<br />
Installed in the ESS cages<br />
Initial minimum configuration is 4 eight-packs<br />
Upgrades are available in increments of 2 eight-packs
Maximum <strong>of</strong> 48 eight-packs per ESS with expansion<br />
Disk drives<br />
18.2 GB 15,000 rpm or 10,000 rpm<br />
36.4 GB 15,000 rpm or 10,000 rpm<br />
72.8 GB 10,000 rpm<br />
145.6 GB 10,000 rpm<br />
Eight-pack conversions<br />
Capacity and/or RPMs<br />
Figure 8-9 ESS disks<br />
ESS disks<br />
With a number <strong>of</strong> disk drive sizes and speeds available, including intermix support, the ESS<br />
provides a great number <strong>of</strong> capacity configuration options.<br />
The maximum number <strong>of</strong> disk drives supported within the <strong>IBM</strong> TotalStorage Enterprise<br />
Storage Server Model 800 is 384, with 128 disk drives in the base enclosure and 256 disk<br />
drives in the expansion rack. When configured with 145.6 GB disk drives, this gives a total<br />
physical disk capacity <strong>of</strong> approximately 55.9 TB (see Table 8-1 on page 461 for more details).<br />
Disk drives<br />
The minimum available configuration <strong>of</strong> the ESS Model 800 is 582 GB. This capacity can be<br />
configured with 32 disk drives <strong>of</strong> 18.2 GB contained in four eight-packs, using one ESS cage.<br />
All incremental upgrades are ordered and installed in pairs <strong>of</strong> eight-packs; thus the minimum<br />
capacity increment is a pair <strong>of</strong> similar eight-packs <strong>of</strong> either 18.2 GB, 36.4 GB, 72.8 GB, or<br />
145.6 GB capacity.<br />
The ESS is designed to deliver substantial protection against data corruption, not just relying<br />
on the RAID implementation alone. The disk drives installed in the ESS are the latest<br />
state-of-the-art magnetoresistive head technology disk drives that support advanced disk
functions such as disk error correction codes (ECC), Metadata checks, disk scrubbing, and<br />
predictive failure analysis.<br />
Eight-pack conversions<br />
The ESS eight-pack is the basic unit <strong>of</strong> capacity within the ESS base and expansion racks. As<br />
mentioned before, these eight-packs are ordered and installed in pairs. Each eight-pack can<br />
be configured as a RAID 5 rank (6+P+S or 7+P) or as a RAID 10 rank (3+3+2S or 4+4).<br />
The <strong>IBM</strong> TotalStorage ESS Specialist will configure the eight-packs on a loop with spare<br />
DDMs as required. Configurations that include drive size intermixing can result in the creation<br />
<strong>of</strong> additional DDM spares on a loop as compared to non-intermixed configurations. Currently<br />
there is the choice <strong>of</strong> four new-generation disk drive capacities for use within an eight-pack:<br />
► 18.2 GB/15,000 rpm disks<br />
► 36.4 GB/15,000 rpm disks<br />
► 72.8 GB/10,000 rpm disks<br />
► 145.6 GB/10,000 rpm disks<br />
Also available is the option to install eight-packs with:<br />
► 18.2 GB/10,000 rpm disks or<br />
► 36.4 GB/10,000 rpm disks<br />
The eight disk drives assembled in each eight-pack are all <strong>of</strong> the same capacity. Each disk<br />
drive uses the 40 MBps SSA interface on each <strong>of</strong> the four connections to the loop.<br />
It is possible to mix eight-packs <strong>of</strong> various capacity disks and speeds (rpm) within an ESS, as<br />
described in the following sections.<br />
Use Table 8-1 as a guide for determining the capacity <strong>of</strong> a given eight-pack. This table shows<br />
the capacities <strong>of</strong> the disk eight-packs when configured as RAID ranks. These capacities are<br />
the effective capacities available for user data.<br />
Table 8-1 Disk eight-pack effective capacity chart (gigabytes)

Effective usable capacity (2) is shown for RAID 10 and RAID 5 (3) ranks.

Disk size   Physical (raw)   RAID 10            RAID 10        RAID 5         RAID 5
            capacity         3+3+2S Array (4)   4+4 Array (5)  6+P+S Array (6) 7+P Array (7)
18.2        145.6            52.50              70.00          105.20          122.74
36.4        291.2            105.12             140.16         210.45          245.53
72.8        582.4            210.39             280.52         420.92          491.08
145.6       1,164.8          420.78             561.04         841.84          982.16
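As a rough cross-check of Table 8-1, a rank's effective capacity is the number of data disks times the usable capacity per disk; the per-disk figure of roughly 17.53 GB for an 18.2 GB DDM is inferred from the table itself (formatting overhead), not an IBM specification:

```python
# Rough cross-check of Table 8-1: a rank's effective capacity equals
# (number of data disks) x (usable capacity per disk). The ~17.53 GB
# usable figure for an 18.2 GB DDM is inferred from the table.
data_disks = {"3+3+2S": 3, "4+4": 4, "6+P+S": 6, "7+P": 7}
usable_per_disk = 17.53           # approximate formatted 18.2 GB DDM capacity
for array, n in data_disks.items():
    # each result lands within a few hundred MB of the 18.2 GB row
    print(array, round(n * usable_per_disk, 2))
```

The same scaling explains the other rows: doubling the DDM size doubles every effective-capacity column.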
8.10 ESS device adapters<br />
SSA 160 Device Adapters<br />
4 DA pairs per subsystem<br />
4 x 40 MB/sec loop data rate<br />
2 loops per device adapter pair<br />
Figure 8-10 Device adapters<br />
ESS device adapters<br />
Device adapters (DA) provide the connection between the clusters and the disk drives. The<br />
ESS Model 800 implements faster Serial Storage Architecture (SSA) device adapters than its<br />
predecessor models.<br />
The ESS Storage Server Model 800 uses the latest SSA160 technology in its device adapters<br />
(DAs). With SSA 160, each <strong>of</strong> the four links operates at 40 MBps, giving a total nominal<br />
bandwidth <strong>of</strong> 160 MBps for each <strong>of</strong> the two connections to the loop. This amounts to a total <strong>of</strong><br />
320 MBps across each loop. Also, each device adapter card supports two independent SSA<br />
loops, giving a total bandwidth <strong>of</strong> 320 MBps per adapter card. There are eight adapter cards,<br />
giving a total nominal bandwidth capability <strong>of</strong> 2,560 MBps. See 8.11, “SSA loops” on<br />
page 464 for more information about this topic.<br />
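The bandwidth arithmetic in this paragraph can be reproduced directly (all figures from the text):

```python
# Reproducing the SSA160 bandwidth arithmetic from this section.
link_rate = 40                    # MBps per SSA link
links = 4                         # links per connection to the loop
per_connection = link_rate * links                 # 160 MBps per connection
per_loop = 2 * per_connection                      # two connections: 320 MBps
loops_per_adapter = 2
per_adapter = loops_per_adapter * per_connection   # 320 MBps per adapter card
adapters = 8
total = adapters * per_adapter
print(per_connection, per_loop, per_adapter, total)  # 160 320 320 2560
```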
SSA loops<br />
One adapter from each pair <strong>of</strong> adapters is installed in each cluster, as shown in Figure 8-10.<br />
The SSA loops are between adapter pairs, which means that all the disks can be accessed by<br />
both clusters. During the configuration process, each RAID array is configured by the <strong>IBM</strong><br />
TotalStorage ESS Specialist to be normally accessed by only one <strong>of</strong> the clusters. If a cluster<br />
failure occurs, the remaining cluster can take over all the disk drives on the loop.<br />
RAID 5 and RAID 10
RAID 5 and RAID 10 are managed by the SSA device adapters. RAID 10 is explained in detail in 8.12, "RAID-10" on page 466.
Disk drives per loop
Each loop supports up to 48 disk drives, and each adapter pair supports up to 96 disk drives. There are four adapter pairs, supporting up to 384 disk drives in total.
Figure 8-10 on page 462 shows a logical representation of a single loop with 48 disk drives (RAID ranks are actually split across two eight-packs for optimum performance). In the figure you can see six RAID arrays: four RAID 5, designated A to D, and two RAID 10 (one 3+3+2S and one 4+4).
8.11 SSA loops<br />
SSA operation<br />
4 links per loop<br />
2 read and 2 write<br />
simultaneously in each direction<br />
40 MB/sec on each link<br />
Loop availability<br />
Loop reconfigures itself<br />
dynamically<br />
Spatial reuse<br />
Up to 8 simultaneous<br />
operations to local group <strong>of</strong><br />
disks (domains) per loop<br />
Figure 8-11 SSA loops<br />
SSA operation<br />
SSA is a high performance, serial connection technology for disk drives. SSA is a full-duplex<br />
loop-based architecture, with two physical read paths and two physical write paths to every<br />
disk attached to the loop. Data is sent from the adapter card to the first disk on the loop and<br />
then passed around the loop by the disks until it arrives at the target disk. Unlike bus-based<br />
designs, which reserve the whole bus for data transfer, SSA only uses the part <strong>of</strong> the loop<br />
between adjacent disks for data transfer. This means that many simultaneous data transfers can take place on an SSA loop, and it is one of the main reasons that SSA performs so much better than SCSI. This simultaneous transfer capability is known as "spatial reuse."
Each read or write path on the loop operates at 40 MBps, providing a total loop bandwidth <strong>of</strong><br />
160 MBps.<br />
Loop availability<br />
The loop is a self-configuring, self-repairing design that allows genuine hot-plugging. If the<br />
loop breaks for any reason, then the adapter card will automatically reconfigure the loop into<br />
two single loops. In the ESS, the most likely scenario for a broken loop is if the actual disk<br />
drive interface electronics fails. If this happens, the adapter card dynamically reconfigures the<br />
loop into two single loops, effectively isolating the failed disk. If the disk is part <strong>of</strong> a RAID<br />
array, the adapter card will automatically regenerate the missing disk using the remaining<br />
data and parity disks to the spare disk. After the failed disk is replaced, the loop is<br />
automatically reconfigured into full duplex operation and the replaced disk becomes a new<br />
spare.<br />
Spatial reuse<br />
Spatial reuse allows domains to be set up on the loop. A domain means that one or more<br />
groups <strong>of</strong> disks belong to one <strong>of</strong> the two adapter cards, as is the case during normal<br />
operation. The benefit <strong>of</strong> this is that each adapter card can talk to its domains (or disk groups)<br />
using only part <strong>of</strong> the loop. The use <strong>of</strong> domains allows each adapter card to operate at<br />
maximum capability because it is not limited by I/O operations from the other adapter.<br />
Theoretically, each adapter card can drive its domains at 160 MBps, giving 320 MBps<br />
throughput on a single loop. The benefit <strong>of</strong> domains can reduce slightly over time, due to disk<br />
failures causing the groups to become intermixed, but the main benefits <strong>of</strong> spatial reuse will<br />
still apply.<br />
If a cluster fails, then the remaining cluster device adapter owns all the domains on the loop,<br />
thus allowing full data access to continue.<br />
8.12 RAID-10<br />
RAID-10 configurations:<br />
First RAID-10 rank<br />
configured in the loop will be:<br />
3 + 3 + 2S<br />
Additional RAID-10 ranks<br />
configured in the loop will be<br />
4 + 4<br />
For a loop with an intermixed<br />
capacity, the ESS will assign<br />
two spares for each capacity.<br />
This means there will be one<br />
3+3+2S array per capacity<br />
Figure 8-12 RAID-10<br />
RAID-10<br />
RAID-10 is also known as RAID 0+1 because it is a combination <strong>of</strong> RAID 0 (striping) and<br />
RAID 1 (mirroring). The striping optimizes the performance by striping volumes across<br />
several disk drives (in the ESS Model 800 implementation, three or four DDMs). RAID 1 is the<br />
protection against a disk failure provided by having a mirrored copy <strong>of</strong> each disk. By<br />
combining the two, RAID 10 provides data protection and I/O performance.<br />
Array<br />
A disk array is a group <strong>of</strong> disk drive modules (DDMs) that are arranged in a relationship, for<br />
example, a RAID 5 or a RAID 10 array. For the ESS, the arrays are built upon the disks <strong>of</strong> the<br />
disk eight-packs.<br />
Disk eight-pack<br />
The physical storage capacity of the ESS is provided by the disk eight-packs.
These are sets <strong>of</strong> eight DDMs that are installed in pairs in the ESS. Two disk eight-packs<br />
provide for two disk groups, with four DDMs from each disk eight-pack. These disk groups<br />
can be configured as either RAID-5 or RAID-10 ranks.<br />
Spare disks<br />
The ESS requires that a loop have a minimum <strong>of</strong> two spare disks to enable sparing to occur.<br />
The sparing function <strong>of</strong> the ESS is automatically initiated whenever a DDM failure is detected<br />
on a loop and enables regeneration <strong>of</strong> data from the failed DDM onto a hot spare DDM.<br />
A hot spare pool of two DDMs, provided by the one 3+3+2S (RAID 10) array, is created for each drive size on an SSA loop. Therefore, if only one drive size is installed on a loop, only two spares are required. The hot sparing function is managed at the SSA loop level. SSA will spare to a larger-capacity DDM on the loop in the very uncommon situation that no spares of the failed drive's capacity are available on the loop.
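The spare-assignment rule above (the first RAID-10 rank configured for a given drive capacity on a loop is 3+3+2S and supplies that capacity's two spares; later ranks of that capacity are 4+4) can be sketched as a hypothetical helper:

```python
# Hypothetical helper illustrating the sparing rule above: the first RAID-10
# rank configured for a given drive capacity on a loop is 3+3+2S (it supplies
# the loop's two spares for that capacity); later ranks are 4+4.
def raid10_layout(capacities_in_config_order):
    seen = set()
    layout = []
    for cap in capacities_in_config_order:
        if cap in seen:
            layout.append((cap, "4+4"))
        else:
            seen.add(cap)                 # first rank of this capacity
            layout.append((cap, "3+3+2S"))
    return layout

print(raid10_layout([18.2, 18.2, 36.4, 18.2]))
```

With intermixed capacities, the helper shows why each capacity on a loop ends up with exactly one 3+3+2S array.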
Figure 8-12 on page 466 shows the following:
1. In eight-pack pair 1, the array consists of three data drives mirrored to three copy drives. The remaining two drives are used as spares.
2. In eight-pack pair 2, the array consists of four data drives mirrored to four copy drives.
8.13 Storage balancing with RAID-10<br />
Figure 8-13 Storage balancing with RAID-10<br />
Logical Storage Subsystem<br />
The Logical Storage Subsystem (LSS), also known as the Logical Subsystem, is a logical<br />
structure that is internal to the ESS. It is a logical construct that groups up to 256 logical<br />
volumes (logical volumes are defined during the logical configuration procedure) <strong>of</strong> the same<br />
disk format (CKD or FB), and it is identified by the ESS with a unique ID. Although the LSS<br />
relates directly to the logical control unit (LCU) concept <strong>of</strong> the ESCON and FICON<br />
architectures, it does not directly relate to SCSI and FCP addressing.<br />
ESS storage balancing<br />
For performance reasons, try to allocate storage on the ESS so that it is equally balanced<br />
across both clusters and among the SSA loops. One way to accomplish this is to assign two<br />
arrays (one from loop A and one from loop B) to each Logical Subsystem. To achieve this you<br />
can follow this procedure when configuring RAID-10 ranks:<br />
1. Configure the first array for LSS 0/loop A. This will be a 3 + 3 + 2S array.<br />
2. Configure the first array for LSS 1/loop B. This will also be a 3 + 3 + 2S array.<br />
3. Configure the second array for LSS 1/loop A. This will now be a 4 + 4.
4. Configure the second array for LSS 0/loop B. This will also now be a 4 + 4.<br />
Figure 8-13 illustrates the results <strong>of</strong> this configuration procedure.<br />
468 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3<br />
8.14 ESS copy services<br />
Concurrent Copy: local point-in-time copy (uses a sidefile and the data mover)
FlashCopy: local point-in-time copy
PPRC: synchronous remote copy up to 103 km
PPRC-XD: non-synchronous remote copy over continental distances
XRC: asynchronous remote copy over unlimited distances (uses the data mover)
Figure 8-14 ESS copy services
DFSMS copy services<br />
DFSMS provides Advanced Copy Services that include a hardware and s<strong>of</strong>tware solution to<br />
help you manage and protect your data. These solutions help ensure that your data remains<br />
available 24 hours a day, seven days a week. Advanced Copy Services provide solutions to<br />
disaster recovery, data migration, and data duplication. Many <strong>of</strong> these functions run on the<br />
<strong>IBM</strong> TotalStorage Enterprise Storage Server (ESS). With DFSMS, you can perform the<br />
following data management functions:<br />
► Use remote copy to prepare for disaster recovery<br />
► Move your PPRC data more easily<br />
Remote copy provides two options that enable you to maintain a current copy <strong>of</strong> your data at<br />
a remote site. These two options are used for disaster recovery and workload migration:<br />
► Extended remote copy (XRC)<br />
► Peer-to-peer remote copy (PPRC)<br />
There are two types of copy:
► A point-in-time copy, in which the latest updates to the primary are not included. It is used for fast backups and data replication in general. The ESS examples are Concurrent Copy and FlashCopy.
► Mirroring, a continuous copy in which all updates are mirrored as quickly as possible. It is used for disaster recovery and planned outages. The ESS examples are the PPRC services and XRC.
Peer-to-peer remote copy (PPRC)<br />
PPRC is a hardware solution that provides rapid and accurate disaster recovery, as well as workload movement and device migration. Updates made on the primary DASD
volumes are synchronously shadowed to the secondary DASD volumes. The local storage<br />
subsystem and the remote storage subsystem are connected through a communications link<br />
called a PPRC path. You can use one <strong>of</strong> the following protocols to copy data using PPRC:<br />
► ESCON<br />
► Fibre Channel Protocol<br />
Note: Fibre Channel Protocol is supported only on ESS Model 800 with the appropriate<br />
licensed internal code (LIC) level and the PPRC Version 2 feature enabled.<br />
PPRC provides a synchronous volume copy across ESS controllers. The copy is made from one controller (the one with the primary logical device) to the other (with the secondary logical device). It is synchronous because the task performing the I/O regains control only after the copy to the secondary has been confirmed. There is a performance penalty for distances longer than 10 km. PPRC is used for disaster recovery, device migration, and workload migration; for example, it enables you to switch to a recovery system in the event of a disaster in an application system.
You can issue the CQUERY command to query the status <strong>of</strong> one volume <strong>of</strong> a PPRC volume pair<br />
or to collect information about a volume in the simplex state. The CQUERY command is<br />
modified and enabled to report on the status <strong>of</strong> S/390-attached CKD devices.<br />
See z/<strong>OS</strong> DFSMS Advanced Copy Services, SC35-0428, for further information about the<br />
PPRC service and the CQUERY command.<br />
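As a sketch, a CQUERY invocation from TSO might look like the following example; the device number X'0F02' is purely illustrative:

```
CQUERY DEVN(X'0F02') VOLUME FORMAT
```

The VOLUME option reports the PPRC state of the addressed volume (for example, simplex, duplex, or pending), and FORMAT requests formatted rather than hexadecimal output. See the manual referenced above for the complete syntax.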
Peer-to-peer remote copy extended distance (PPRC-XD)<br />
When you enable the PPRC extended distance feature (PPRC-XD), the primary and recovery<br />
storage control sites can be separated by long distances. Updates made to a PPRC primary<br />
volume are sent to a secondary volume asynchronously, thus requiring less bandwidth.<br />
If you are trying to decide whether to use synchronous or asynchronous PPRC, consider the<br />
differences between the two modes:<br />
► When you use synchronous PPRC, no data is lost between the last update at the primary system and the recovery site, but synchronous copying increases the impact on applications and uses more resources for copying data.
► Asynchronous PPRC using the extended distance feature reduces the impact on applications that write to primary volumes and uses fewer resources for copying data, but data might be
lost if a disaster occurs. To use PPRC-XD as a disaster recovery solution, customers need<br />
to periodically synchronize the recovery volumes with the primary site and make backups<br />
to other DASD volumes or tapes.<br />
PPRC Extended Distance (PPRC-XD) is a non-synchronous version <strong>of</strong> PPRC. This means<br />
that host updates to the source volume are not delayed by waiting for the update to be<br />
confirmed in the secondary volume. It also means that the sequence <strong>of</strong> updates on the<br />
secondary volume is not guaranteed to be the same as on the primary volume.<br />
PPRC-XD is an excellent solution for:<br />
► Remote data copy<br />
► Remote data migration<br />
► Offsite backup<br />
► Transmission <strong>of</strong> inactive database logs<br />
► Application disaster recovery solutions based on periodic point-in-time (PiT) copies <strong>of</strong> the<br />
data, if the application tolerates short interruptions (application quiesce)<br />
PPRC-XD can operate at very long distances (such as continental distances), well beyond<br />
the 103 km supported for PPRC synchronous transmissions—and with minimal impact on the<br />
application. The distance is limited only by the network and channel extender technology<br />
capabilities.<br />
Extended remote copy (XRC)<br />
XRC combines hardware and s<strong>of</strong>tware to provide continuous data availability in a disaster<br />
recovery or workload movement environment. XRC provides an asynchronous remote copy<br />
solution for both system-managed and non-system-managed data to a second, remote<br />
location.<br />
XRC relies on the <strong>IBM</strong> TotalStorage Enterprise Storage Server, <strong>IBM</strong> 3990, RAMAC Storage<br />
Subsystems, and DFSMSdfp. The 9393 RAMAC Virtual Array (RVA) does not support XRC<br />
for source volume capability.<br />
XRC relies on the system data mover, which is part <strong>of</strong> DFSMSdfp. The system data mover is<br />
a high-speed data movement program that efficiently and reliably moves large amounts <strong>of</strong><br />
data between storage devices. XRC is a continuous copy operation, and it is capable <strong>of</strong><br />
operating over long distances (with channel extenders). It runs unattended, without<br />
involvement from the application users. If an unrecoverable error occurs at your primary site,<br />
the only data that is lost is data that is in transit between the time when the primary system<br />
fails and the recovery at the recovery site.<br />
You can implement XRC with one or two systems. Let us suppose that you have two systems:<br />
an application system at one location, and a recovery system at another. With these two<br />
systems in place, XRC can automatically update your data on the remote disk storage<br />
subsystem as you make changes to it on your application system. You can use the XRC<br />
suspend/resume service for planned outages. You can still use this standard XRC service on<br />
systems attached to the ESS if these systems are installed with the toleration or transparency<br />
support.<br />
Coupled Extended Remote Copy (CXRC) allows XRC sessions to be coupled together to<br />
guarantee that all volumes are consistent across all coupled XRC sessions. CXRC can<br />
manage thousands <strong>of</strong> volumes. <strong>IBM</strong> TotalStorage XRC Performance Monitor provides the<br />
ability to monitor and evaluate the performance <strong>of</strong> a running XRC configuration.<br />
Concurrent copy<br />
Concurrent copy is an extended function that enables data center operations staff to generate<br />
a copy or a dump <strong>of</strong> data while applications are updating that data. Concurrent copy delivers<br />
a copy <strong>of</strong> the data, in a consistent form, as it existed before the updates took place.<br />
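As an illustrative sketch (the job name, volume serial, and backup data set name are hypothetical), a DFSMSdss full-volume dump can request concurrent copy with the CONCURRENT keyword:

```
//CCDUMP   JOB (ACCT),'CC DUMP',CLASS=A
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//TAPE     DD DSN=BACKUP.PRD001.DUMP,UNIT=3490,
//            DISP=(NEW,CATLG),LABEL=(1,SL)
//SYSIN    DD *
  DUMP FULL INDYNAM(PRD001) OUTDDNAME(TAPE) -
       CONCURRENT
/*
```

Depending on the release and options in effect, DFSMSdss can fall back to a normal dump if a concurrent copy session cannot be established.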
FlashCopy service<br />
FlashCopy is a point-in-time copy services function that can quickly copy data from a source<br />
location to a target location. FlashCopy enables you to make copies <strong>of</strong> a set <strong>of</strong> tracks, with the<br />
copies immediately available for read or write access. This set <strong>of</strong> tracks can consist <strong>of</strong> an<br />
entire volume, a data set, or just a selected set <strong>of</strong> tracks. The primary objective <strong>of</strong> FlashCopy<br />
is to create a copy <strong>of</strong> a source volume on the target volume. This copy is called a<br />
point-in-time copy. Access to the point-in-time copy <strong>of</strong> the data on the source volume is<br />
through reading the data from the target volume. The actual point-in-time data that is read<br />
from the target volume might or might not be physically stored on the target volume. The ESS<br />
FlashCopy service is compatible with the existing service provided by DFSMSdss. Therefore,<br />
you can invoke the FlashCopy service on the ESS with DFSMSdss.<br />
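For example, a DFSMSdss full-volume copy can request FlashCopy through the FASTREPLICATION keyword; this is a sketch, not a complete job, and the volume serials are hypothetical:

```
//FCOPY    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY FULL INDYNAM(SRC001) OUTDYNAM(TGT001) -
       COPYVOLID FASTREPLICATION(REQUIRED)
/*
```

FASTREPLICATION(REQUIRED) fails the copy if fast replication such as FlashCopy cannot be used, whereas FASTREPLICATION(PREFERRED) allows DFSMSdss to fall back to traditional data movement.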
8.15 ESS performance features<br />
Priority I/O queuing<br />
Custom volumes<br />
Improved caching algorithms<br />
FICON host adapters<br />
Enhanced CCWs<br />
Figure 8-15 ESS performance features<br />
I/O priority queuing<br />
Prior to ESS, I<strong>OS</strong> kept the UCB I/O pending requests in a queue named I<strong>OS</strong>Q. The priority<br />
order <strong>of</strong> the I/O request in this queue, when the z/<strong>OS</strong> image is in goal mode, is controlled by<br />
Workload Manager (WLM), depending on the transaction owning the I/O request. There was<br />
no concept <strong>of</strong> priority queuing within the internal queues <strong>of</strong> the I/O control units; instead, the<br />
queue regime was FIFO.<br />
With the ESS, this priority concept is also available within the control unit; I/O priority queuing in the ESS has the following properties:
► I/O can be queued with the ESS in priority order.<br />
► WLM sets the I/O priority when running in goal mode.<br />
► There is I/O priority for systems in a sysplex.<br />
► Each system gets a fair share.<br />
Custom volumes<br />
Custom volumes allow you to define small 3390 or 3380 volumes, which reduces contention on a volume. Custom volumes are designed for high-activity data sets. Careful size planning is required.
Improved caching algorithms<br />
With its effective caching algorithms, <strong>IBM</strong> TotalStorage Enterprise Storage Server Model 800<br />
can minimize wasted cache space and reduce disk drive utilization, thereby reducing its<br />
back-end traffic. The ESS Model 800 has a maximum cache size of 64 GB, and the NVS
standard size is 2 GB.<br />
The ESS manages its cache in 4 KB segments, so for small data blocks (4 KB and 8 KB are common database block sizes), minimal cache is wasted. In contrast, large cache segments can exhaust cache capacity when filling up with small random reads. Thus the ESS, with its smaller cache segments, avoids wasting cache space on the small record sizes that are common in interactive applications.
This efficient cache management, together with the powerful ESS Model 800 back end (optional 15,000 rpm drives, enhanced SSA device adapters, and twice the bandwidth of previous models to access the larger 2 GB NVS and the larger 64 GB cache option), delivers greater throughput while sustaining cache-speed response times.
FICON host adapters<br />
FICON extends the <strong>IBM</strong> TotalStorage Enterprise Storage Server Model 800’s ability to deliver<br />
bandwidth potential to the volumes that need it, when they need it.<br />
Performance enhanced channel command words (CCWs)<br />
For z/OS environments, the ESS supports channel command words (CCWs) that reduce the overhead associated with the previous (3990) CCW chains. With these CCWs, the ESS can read or write more data with fewer CCWs; chains that use the old CCWs are converted to the new ones whenever possible. The cooperation of z/OS software and the ESS provides the most significant benefits for application performance. In particular, the new Read Track Data and Write Track Data CCWs combine tasks into fewer CCWs, allowing more data to be read or written per CCW, and z/OS uses them to reduce ESCON protocol overhead for multiple-record transfer chains. Measurements on 4 KB records using an EXCP channel program showed a 15% reduction in channel overhead for the Read Track Data CCW.
8.16 <strong>IBM</strong> TotalStorage DS6000<br />
Enterprise-class storage solutions, maximum configuration, configured weight, and maximum power consumption: ESS 750 (75.25 in. frame): up to 32 HDDs, 5 TB; 2,322 lbs; 4.83 kVA. DS6000 (3U, 5.25 in., in a 19 in. rack): up to 224 HDDs, 67 TB; 125 lbs; 0.69 kVA controller, 0.48 kVA expansion unit.
Figure 8-16 <strong>IBM</strong> TotalStorage DS6000<br />
<strong>IBM</strong> TotalStorage DS6000<br />
The <strong>IBM</strong> TotalStorage DS6000 series is designed to deliver the resiliency, performance, and<br />
many <strong>of</strong> the key features <strong>of</strong> the <strong>IBM</strong> TotalStorage Enterprise Storage Server (ESS) in a small,<br />
modular package.<br />
The DS6000 series <strong>of</strong>fers high scalability and excellent performance. With the DS6800<br />
(Model 1750-511), you can install up to 16 disk drive modules (DDMs). The minimum storage<br />
capability with 8 DDMs is 584 GB. The maximum storage capability with 16 DDMs for the<br />
DS6800 model is 4.8 TB. If you want to connect more than 16 disks, you can use up to 13<br />
DS6000 expansion units (Model 1750-EX1) that allow a maximum <strong>of</strong> 224 DDMs per storage<br />
system and provide a maximum storage capability <strong>of</strong> 67 TB.<br />
DS6000 specifications<br />
Table 8-2 summarizes the DS6000 features.<br />
Table 8-2 DS6000 specifications<br />
Controllers: Dual active
Max cache: 4 GB
Max host ports: 8 ports; 2 Gb FC/FICON
Max hosts: 1024
Max storage/disks: 224
Disk types: FC 10K rpm: 146 GB, 300 GB; FC 15K rpm: 73 GB
Max expansion modules: 13
Max disk loops: 4 (2 dual redundant)
Max LUNs: 8192 (up to 2 TB LUN size)
RAID levels: 5, 10
RAID array sizes: 4 or 8 drives
Operating systems: z/OS, i5/OS®, OS/400, AIX, Sun Solaris, HP-UX, VMware, Microsoft® Windows, Linux
Packaging: 3U controller and expansion drawers
Power consumption: Controller: 0.69 kVA; Expansion drawer: 0.48 kVA
Modular scalability<br />
The DS6000 is modularly scalable, with optional expansion enclosure, to add capacity to help<br />
meet your growing business needs. The scalability comprises:<br />
► Flexible design to accommodate on demand business environments<br />
► Ability to make dynamic configuration changes<br />
– Add disk drives in increments <strong>of</strong> 4<br />
– Add storage expansion units<br />
► Scale capacity to over 67 TB<br />
DS6800 (Model 1750-511)<br />
The DS6800 is a self-contained 3U enclosure that can be mounted in a standard 19-inch<br />
rack. The DS6800 comes with authorization for up to 16 internal FC DDMs, <strong>of</strong>fering up to 4.8<br />
TB <strong>of</strong> storage capability. The DS6800 allows up to 13 DS6000 expansion enclosures to be<br />
attached. A storage system supports up to 224 disk drives for a total <strong>of</strong> up to 67.2 TB <strong>of</strong><br />
storage.<br />
The DS6800 <strong>of</strong>fers the following features:<br />
► Two FC controller cards.<br />
► PowerPC® 750GX 1 GHz processor.<br />
► 4 GB <strong>of</strong> cache.<br />
► Two battery backup units (one per each controller card).<br />
► Two AC/DC power supplies with embedded enclosure cooling units.
► Eight 2 Gbps device ports.<br />
► Connectivity with the availability <strong>of</strong> two to eight Fibre Channel/FICON host ports. The host<br />
ports auto-negotiate to either 2 Gbps or 1 Gbps link speeds.<br />
► Attachment to 13 DS6000 expansion enclosures.<br />
DS6000 expansion enclosure (Model 1750-EX1)<br />
The 3U DS6000 expansion enclosure can be mounted in a standard 19-inch rack. The front <strong>of</strong><br />
the enclosure contains the docking sites where you can install up to 16 DDMs.<br />
The DS6000 expansion enclosure contains the following features:<br />
► Two expansion controller cards. Each controller card provides the following:<br />
– Two 2 Gbps inbound ports<br />
– Two 2 Gbps outbound ports<br />
– One FC switch per controller card<br />
► Controller disk enclosure that holds up to 16 FC DDMs<br />
► Two AC/DC power supplies with embedded enclosure cooling units
► Supports attachment to DS6800<br />
8.17 <strong>IBM</strong> TotalStorage DS8000<br />
ESS 800: 75.25 in. x 54.5 in. frame
DS8000: 76 in. x 33.25 in. frame; up to 6X the ESS base Model 800; physical capacity from 1.1 TB up to 192 TB; machine type 2107
Figure 8-17 IBM TotalStorage DS8000
<strong>IBM</strong> TotalStorage DS8000<br />
<strong>IBM</strong> TotalStorage DS8000 is a high-performance, high-capacity series <strong>of</strong> disk storage that is<br />
designed to support continuous operations. DS8000 series models (machine type 2107) use<br />
the <strong>IBM</strong> POWER5 server technology that is integrated with the <strong>IBM</strong> Virtualization Engine<br />
technology. DS8000 series models consist <strong>of</strong> a storage unit and one or two management<br />
consoles, two being the recommended configuration. The graphical user interface (GUI) or<br />
the command-line interface (CLI) allows you to logically partition storage (create storage<br />
LPARs) and use the built-in Copy Services functions. For high availability, hardware<br />
components are redundant.<br />
The current physical storage capacity <strong>of</strong> the DS8000 series system can range from 1.1 TB to<br />
192 TB <strong>of</strong> physical capacity, and it has an architecture designed to scale to over 96 petabytes.<br />
DS8000 models<br />
The DS8000 series <strong>of</strong>fers various choices <strong>of</strong> base and expansion models, so you can<br />
configure storage units that meet your performance and configuration needs.<br />
► DS8100<br />
The DS8100 (Model 921) features a dual two-way processor complex and support for one<br />
expansion frame.<br />
► DS8300<br />
The DS8300 (Models 922 and 9A2) features a dual four-way processor complex and<br />
support for one or two expansion frames. The Model 9A2 supports two <strong>IBM</strong> TotalStorage<br />
<strong>System</strong> LPARs (Logical Partitions) in one storage unit.<br />
The DS8000 expansion frames (Models 92E and 9AE) expand the capabilities <strong>of</strong> the base<br />
models. You can attach the Model 92E to either the Model 921 or the Model 922 to expand<br />
their capabilities. You can attach the Model 9AE to expand the Model 9A2.<br />
8.18 DS8000 hardware overview<br />
2-Way (Model 8100)<br />
Two dual processor servers<br />
Up to 128GB Cache<br />
8 to 64 2Gb FC/FICON – 4 to 32 ESCON Ports<br />
16 to 384 HDD<br />
Intermixable 73GB 15Krpm, 146/300GB 10Krpm<br />
Physical capacity from 1.1TB up to 115TB<br />
4-Way (Model 8300)<br />
Two four processor servers<br />
Up to 256GB Cache<br />
8 to 128 2Gb FC/FICON – 4 to 64 ESCON Ports<br />
16 to 640 HDD<br />
Intermixable 73GB 15Krpm, 146/300GB 10Krpm<br />
Physical capacity from 1.1TB up to 192TB<br />
Figure 8-18 DS8000 models<br />
DS8100 (Model 921)<br />
The <strong>IBM</strong> TotalStorage DS8100, which is Model 921, <strong>of</strong>fers features that include the following:<br />
► Dual two-way processor complex<br />
► Up to 128 disk drives, for a maximum capacity <strong>of</strong> 38.4 TB<br />
► Up to 128 GB <strong>of</strong> processor memory (cache)<br />
► Up to 16 fibre-channel/FICON or ESCON host adapters<br />
The DS8100 model can support one expansion frame. With one expansion frame, you can<br />
expand the capacity <strong>of</strong> the Model 921 as follows:<br />
► Up to 384 disk drives, for a maximum capacity <strong>of</strong> 115.2 TB<br />
DS8300 (Models 922 and 9A2)<br />
<strong>IBM</strong> TotalStorage DS8300 models (Model 922 and Model 9A2) <strong>of</strong>fer higher performance and<br />
capacity than the DS8100. The Model 9A2 also enables you to create two storage system<br />
LPARs (or images) within the same storage unit.<br />
Both DS8300 models <strong>of</strong>fer the following features:<br />
► Dual four-way processor complex<br />
► Up to 128 disk drives, for a maximum capacity <strong>of</strong> 38.4 TB<br />
► Up to 256 GB <strong>of</strong> processor memory (cache)<br />
► Up to 16 fibre-channel/FICON or ESCON host adapters<br />
The DS8300 models can support either one or two expansion frames. With expansion<br />
frames, you can expand the Model 922 and 9A2 as follows:<br />
► With one expansion frame, you can support the following expanded capacity and number<br />
<strong>of</strong> adapters:<br />
– Up to 384 disk drives, for a maximum capacity <strong>of</strong> 115.2 TB<br />
– Up to 32 fibre-channel/FICON or ESCON host adapters<br />
► With two expansion frames, you can support the following expanded capacity:<br />
– Up to 640 disk drives, for a maximum capacity <strong>of</strong> 192 TB<br />
8.19 Storage systems LPARs<br />
Workload A and Workload B each run against their own logical partition (Logical Partition A and Logical Partition B) in the DS8000, with dedicated LUNs, host adapters, RAID adapters, and a share of the N-way SMP.
Figure 8-19 Storage systems LPARs<br />
LPAR overview<br />
A logical partition (LPAR) is a subset <strong>of</strong> logical resources that is capable <strong>of</strong> supporting an<br />
operating system. It consists <strong>of</strong> CPUs, memory, and I/O slots that are a subset <strong>of</strong> the pool <strong>of</strong><br />
available resources within a system. These resources are assigned to the logical partition.<br />
Isolation between LPARs is provided to prevent unauthorized access between partition<br />
boundaries.<br />
Storage systems LPARs<br />
The DS8300 Model 9A2 exploits LPAR technology, allowing you to run two separate storage<br />
server images.<br />
Each Storage <strong>System</strong> LPAR has access to:<br />
► 50% <strong>of</strong> the processors<br />
► 50% <strong>of</strong> the processor memory<br />
► Up to 16 host adapters<br />
► Up to 320 disk drives (up to 96 TB <strong>of</strong> capacity)<br />
With these separate resources, each Storage <strong>System</strong> LPAR can run the same or other<br />
versions <strong>of</strong> microcode, and can be used for completely separate production, test, or other<br />
unique storage environments within this single physical system. This can enable storage<br />
consolidations where separate storage subsystems were previously required, helping to<br />
increase management efficiency and cost effectiveness.<br />
DS8000 addressing capability<br />
Table 8-3 shows the DS8000 addressing capability in comparison to an ESS 800.
Table 8-3 Addressing capability comparison<br />
ESS 800 DS8000 DS8000 w/LPAR<br />
Max logical subsystems 32 255 510<br />
Max logical devices 8 K 64 K 128 K<br />
Max logical CKD devices 4 K 64 K 128 K<br />
Max logical FB devices 4 K 64 K 128 K<br />
Max N-Port logins/port 128 509 509<br />
Max N-Port logins 512 8 K 16 K<br />
Max logical paths/FC port 256 2 K 2 K<br />
Max logical paths/CU image 256 512 512<br />
Max path groups/CU image 128 256 256
8.20 <strong>IBM</strong> TotalStorage Resiliency Family<br />
Copy services<br />
FlashCopy ®<br />
Mirroring<br />
Metro Mirror (Synchronous PPRC)<br />
Global Mirror (Asynchronous PPRC)<br />
Metro/Global Copy (two or three-site Asynchronous<br />
Cascading PPRC)<br />
Global Copy (PPRC Extended Distance)<br />
Global Mirror for zSeries (XRC) – DS6000 can be<br />
configured as an XRC target only<br />
Metro/Global Mirror for zSeries (three-site solution<br />
using Synchronous PPRC and XRC) – DS6000 can<br />
be configured as an XRC target only<br />
Figure 8-20 The <strong>IBM</strong> TotalStorage Resiliency Family<br />
The <strong>IBM</strong> TotalStorage Resiliency Family<br />
The <strong>IBM</strong> TotalStorage Resiliency Family is a set <strong>of</strong> products and features that are designed to<br />
help you implement storage solutions that keep your business running 24 hours a day, 7 days<br />
a week.<br />
These hardware and s<strong>of</strong>tware features, products, and services are available on the <strong>IBM</strong><br />
TotalStorage DS6000 and DS8000 series and <strong>IBM</strong> TotalStorage ESS Models 750 and 800. In<br />
addition, a number <strong>of</strong> advanced Copy Services features that are part <strong>of</strong> the <strong>IBM</strong> TotalStorage<br />
Resiliency family are available for the DS6000 and DS8000 series. The <strong>IBM</strong> TotalStorage DS<br />
Family also <strong>of</strong>fers systems to support enterprise-class data backup and disaster recovery<br />
capabilities. As part <strong>of</strong> the <strong>IBM</strong> TotalStorage Resiliency Family <strong>of</strong> s<strong>of</strong>tware, <strong>IBM</strong> TotalStorage<br />
FlashCopy point-in-time copy capabilities back up data in the background and allow users<br />
nearly instant access to information about both source and target volumes. Metro and Global<br />
Mirror capabilities create duplicate copies <strong>of</strong> application data at remote sites. High-speed<br />
data transfers help to back up data for rapid retrieval.<br />
Copy Services<br />
Copy Services is a collection <strong>of</strong> functions that provides disaster recovery, data migration, and<br />
data duplication functions. Copy Services runs on the DS6000 and DS8000 series and<br />
supports open systems and zSeries environments.<br />
Copy Services functions also are supported on the previous generation <strong>of</strong> storage systems,<br />
the <strong>IBM</strong> TotalStorage Enterprise Storage Server.<br />
Copy Services include the following types <strong>of</strong> functions:<br />
► FlashCopy, which is a point-in-time copy function<br />
► Remote mirror and copy functions (previously known as Peer-to-Peer Remote Copy or<br />
PPRC), which includes:<br />
– <strong>IBM</strong> TotalStorage Metro Mirror (previously known as Synchronous PPRC)<br />
– <strong>IBM</strong> TotalStorage Global Copy (previously known as PPRC Extended Distance)<br />
– <strong>IBM</strong> TotalStorage Global Mirror (previously known as Asynchronous PPRC)<br />
► z/<strong>OS</strong> Global Mirror (previously known as Extended Remote Copy or XRC)<br />
For information about copy services, see 8.14, “ESS copy services” on page 469.<br />
Metro/Global Copy function<br />
The Metro/Global Copy function allows you to cascade a PPRC pair with a PPRC-XD pair<br />
such that the PPRC secondary also serves as the PPRC-XD primary. In this configuration, a<br />
primary and secondary pair is established with the secondary located in a nearby site,<br />
protected from primary site disasters. The secondary volume for the PPRC-XD copy can be<br />
located thousands <strong>of</strong> miles away and continue to be updated if the original primary location<br />
suffered a disaster.<br />
Metro/Global Mirror function<br />
The Metro/Global Mirror function enables a three-site, high availability disaster recovery<br />
solution. It combines the capabilities <strong>of</strong> both Metro Mirror and Global Mirror functions for<br />
greater protection against planned and unplanned outages.<br />
484 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
8.21 TotalStorage Expert product highlights<br />
Figure 8-21 TotalStorage Expert<br />
TotalStorage Expert<br />
TotalStorage Expert is an innovative s<strong>of</strong>tware tool that gives administrators powerful but<br />
flexible storage asset, capacity, and performance management capabilities to centrally<br />
manage Enterprise Storage Servers located anywhere in the enterprise.<br />
<strong>IBM</strong> TotalStorage Expert has two available features:<br />
► The ESS feature, which supports ESS<br />
► The ETL feature, which supports Enterprise Tape Library products<br />
The two features are licensed separately. There are also upgrade features for users <strong>of</strong><br />
StorWatch Expert V1 with either the ESS or the ETL feature, or both, who want to migrate to<br />
TotalStorage Expert V2.1.1.<br />
TotalStorage Expert is designed to augment commonly used <strong>IBM</strong> performance tools such as<br />
Resource Management Facility (RMF), DFSMS Optimizer, AIX Performance Toolkit, and<br />
similar host-based performance monitors. While these tools provide performance statistics<br />
from the host system’s perspective, TotalStorage Expert provides statistics from the ESS and<br />
ETL system perspective.<br />
By complementing other performance tools, TotalStorage Expert provides a more<br />
comprehensive view <strong>of</strong> performance; it gathers and presents information that provides a<br />
complete management solution for storage monitoring and administration.<br />
TotalStorage Expert helps storage administrators by increasing the productivity <strong>of</strong> storage<br />
resources.<br />
The ESS is ideal for businesses with multiple heterogeneous servers, including zSeries,<br />
UNIX, Windows NT®, Windows 2000, Novell NetWare, HP/UX, Sun Solaris, and AS/400<br />
servers.<br />
With Version 2.1.1, the TotalStorage ESS Expert is packaged with the TotalStorage ETL<br />
Expert. The ETL Expert provides performance, asset, and capacity management for the three<br />
<strong>IBM</strong> ETL solutions:<br />
► <strong>IBM</strong> TotalStorage Enterprise Automated Tape Library, described in “<strong>IBM</strong> TotalStorage<br />
Enterprise Automated Tape Library 3494” on page 495.<br />
► <strong>IBM</strong> TotalStorage Virtual Tape Server, described in “Introduction to Virtual Tape Server<br />
(VTS)” on page 497.<br />
► <strong>IBM</strong> TotalStorage Peer-to-Peer Virtual Tape Server, described in “<strong>IBM</strong> TotalStorage<br />
Peer-to-Peer VTS” on page 499.<br />
Both tools can run on the same server, share a common database, efficiently monitor storage<br />
resources from any location within the enterprise, and provide a similar look and feel through<br />
a Web browser user interface. Together they provide a complete solution that helps optimize<br />
the potential <strong>of</strong> <strong>IBM</strong> disk and tape subsystems.<br />
8.22 Introduction to tape processing<br />
Figure 8-22 Introduction to tape processing<br />
Tape volumes<br />
The term tape refers to volumes that can be physically moved. Only sequential data sets<br />
can be stored on tape. Tape volumes can be sent to a safe or to other data processing centers.<br />
Internal labels are used to identify magnetic tape volumes and the data sets on those<br />
volumes. You can process tape volumes with:<br />
► <strong>IBM</strong> standard labels<br />
► Labels that follow standards published by:<br />
– International Organization for Standardization (ISO)<br />
– American National Standards Institute (ANSI)<br />
– Federal Information Processing Standard (FIPS)<br />
► Nonstandard labels<br />
► No labels<br />
Note: Your installation can install a bypass for any type <strong>of</strong> label processing; however, the<br />
use <strong>of</strong> labels is recommended as a basis for efficient control <strong>of</strong> your data.<br />
<strong>IBM</strong> standard tape labels consist <strong>of</strong> volume labels and groups <strong>of</strong> data set labels. The volume<br />
label, identifying the volume and its owner, is the first record on the tape. The data set label,<br />
identifying the data set and describing its contents, precedes and follows each data set on the<br />
volume:<br />
► The data set labels that precede the data set are called header labels.<br />
► The data set labels that follow the data set are called trailer labels. They are almost<br />
identical to the header labels.<br />
► The data set label groups can include standard user labels at your option.<br />
Usually, the formats <strong>of</strong> ISO and ANSI labels, which are defined by the respective<br />
organizations, are similar to the formats <strong>of</strong> <strong>IBM</strong> standard labels.<br />
Nonstandard tape labels can have any format and are processed by routines you provide.<br />
Unlabeled tapes contain only data sets and tape marks.<br />
8.23 SL and NL format<br />
Figure 8-23 SL and NL format (TM = tape mark)<br />
Using tape with JCL<br />
In the job control statements, you must provide a data definition (DD) statement for each data<br />
set to be processed. The LABEL parameter <strong>of</strong> the DD statement is used to describe the data<br />
set's labels.<br />
Other parameters <strong>of</strong> the DD statement identify the data set, give volume and unit information<br />
and volume disposition, and describe the data set's physical attributes. You can use a data<br />
class to specify all <strong>of</strong> your data set's attributes (such as record length and record format), but<br />
not data set name and disposition. Specify the name <strong>of</strong> the data class using the JCL keyword<br />
DATACLAS. If you do not specify a data class, the automatic class selection (ACS) routines<br />
assign a data class based on the defaults defined by your storage administrator.<br />
An example <strong>of</strong> allocating a tape data set using DATACLAS in the DD statement <strong>of</strong> the JCL<br />
statements follows. In this example, TAPE01 is the name <strong>of</strong> the data class.<br />
//NEW      DD DSN=DATASET.NAME,UNIT=TAPE,DISP=(,CATLG,DELETE),<br />
//            DATACLAS=TAPE01,LABEL=(1,SL)<br />
Describing the labels<br />
You specify the type <strong>of</strong> labels by coding one <strong>of</strong> the subparameters <strong>of</strong> the LABEL parameter as<br />
shown in Table 8-4 on page 490.<br />
Table 8-4 Types <strong>of</strong> labels<br />
Code Meaning<br />
SL <strong>IBM</strong> Standard Label<br />
AL ISO/ANSI/FIPS labels<br />
SUL Both <strong>IBM</strong> and user header or trailer labels<br />
AUL Both ISO/ANSI/FIPS and user header or trailer labels<br />
NSL Nonstandard labels<br />
NL No labels, but the existence <strong>of</strong> a previous label is verified<br />
BLP Bypass label processing. The data is treated in the same manner as though NL had been<br />
specified, except that the system does not check for an existing volume label. The user is<br />
responsible for the positioning.<br />
If your installation does not allow BLP, the data is treated exactly as though NL had been<br />
specified. Your job can use BLP only if the Job Entry Subsystem (JES) through Job class,<br />
RACF through TAPEVOL class, or DFSMSrmm(*) allow it.<br />
LTM Bypass a leading tape mark, if encountered, on unlabeled tapes from VSE.<br />
Note: If you do not specify the label type, the operating system assumes that the data set<br />
has <strong>IBM</strong> standard labels.<br />
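As an illustration of the label subparameters in Table 8-4, the following sketch reads the<br />
second data set from an unlabeled tape. The data set name, unit name, and volume serial are<br />
hypothetical; because the tape has no labels, the LABEL file sequence number is the only<br />
means of positioning to the data set.<br />
//READNL   DD DSN=TAPE.INPUT.FILE2,UNIT=TAPE,VOL=SER=T00001,<br />
//            DISP=(OLD,KEEP),LABEL=(2,NL)<br />
If your installation permits it, coding LABEL=(2,BLP) instead would position to the same file<br />
without verifying that the volume is unlabeled.<br />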
8.24 Tape capacity - tape mount management<br />
Figure 8-24 Tape capacity (3480 = 200 MB, 3490 = 800 MB, 3590 = 10,000 MB, 3592 = 300,000 MB)<br />
Tape capacity<br />
The capacity of a tape depends on the device type that is recording it. 3480 and 3490 tapes<br />
are physically the same cartridges. The <strong>IBM</strong> 3590 and 3592 high performance cartridge tapes<br />
are not compatible with the 3480, 3490, or 3490E drives. 3490 units can read 3480 cartridges,<br />
but cannot record in 3480 format, and 3480 units cannot read or write as a 3490.<br />
Tape mount management<br />
Using DFSMS and tape mount management can help you reduce the number <strong>of</strong> both tape<br />
mounts and tape volumes that your installation requires. The volume mount analyzer reviews<br />
your tape mounts and creates reports that provide you with information you need to effectively<br />
implement the tape mount management methodology recommended by <strong>IBM</strong>.<br />
Tape mount management allows you to efficiently fill a tape cartridge to its capacity and gain<br />
full benefit from improved data recording capability (IDRC) compaction, 3490E Enhanced<br />
Capability Magnetic Tape Subsystem, 36-track enhanced recording format, and Enhanced<br />
Capacity Cartridge <strong>System</strong> Tape. By filling your tape cartridges, you reduce your tape mounts<br />
and even the number <strong>of</strong> tape volumes you need.<br />
With an effective tape cartridge capacity <strong>of</strong> 2.4 GB using 3490E and the Enhanced Capacity<br />
Cartridge <strong>System</strong> Tape, DFSMS can intercept all but extremely large data sets and manage<br />
them with tape mount management. By implementing tape mount management with DFSMS,<br />
you might reduce your tape mounts by 60% to 70% with little or no additional hardware<br />
required. Therefore, the resulting tape environment can fully exploit integrated cartridge<br />
loaders (ICL), IDRC, and 3490E.<br />
Tape mount management also improves job throughput because jobs are no longer queued<br />
up on tape drives. Approximately 70% <strong>of</strong> all tape data sets queued up on drives are less than<br />
10 MB. With tape mount management, these data sets reside on DASD while in use. This<br />
frees up the tape drives for other allocations.<br />
Tape mount management recommends that you use DFSMShsm to do interval migration to<br />
SMS storage groups. You can use ACS routines to redirect your tape data sets to a tape<br />
mount management DASD buffer storage group. DFSMShsm scans this buffer on a regular<br />
basis and migrates the data sets to migration level 1 DASD or migration level 2 tape as soon<br />
as possible, based on the management class and storage group specifications.<br />
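The redirection mentioned above is implemented in the installation's automatic class selection<br />
(ACS) routines. The following storage group routine fragment is only a sketch: the storage<br />
group name TMMBUF is hypothetical, and a production routine would typically apply further<br />
filters (data set name patterns, size, and expiration) before redirecting tape allocations<br />
to the DASD buffer.<br />
PROC STORGRP<br />
  /* Route eligible tape allocations to the TMM DASD buffer */<br />
  IF &UNIT = 'TAPE' THEN<br />
    SET &STORGRP = 'TMMBUF'<br />
END<br />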
Table 8-5 lists the capacities and transfer rates of <strong>IBM</strong> tape products since 1952.<br />
Table 8-5 Tape capacity <strong>of</strong> various <strong>IBM</strong> products<br />
Year Product Capacity (MB) Transfer rate (KB/s)<br />
1952 <strong>IBM</strong> 726 1.4 7.5<br />
1953 <strong>IBM</strong> 727 5.8 15<br />
1957 <strong>IBM</strong> 729 23 90<br />
1965 <strong>IBM</strong> 2401 46 180<br />
1968 <strong>IBM</strong> 2420 46 320<br />
1973 <strong>IBM</strong> 3420 180 1,250<br />
1984 <strong>IBM</strong> 3480 200 3,000<br />
1989 <strong>IBM</strong> 3490 200 4,500<br />
1991 <strong>IBM</strong> 3490E 400 9,000<br />
1992 <strong>IBM</strong> 3490E 800 9,000<br />
1995 <strong>IBM</strong> 3590 Magstar 10,000 (uncompacted) 9,000 (uncompacted)<br />
1999 <strong>IBM</strong> 3590E Magstar 20,000 (uncompacted) 14,000<br />
2000 <strong>IBM</strong> 3590E Magstar XL Cartridge 20,000/40,000 (dependent on Model B or E) 14,000<br />
2003 <strong>IBM</strong> 3592 TotalStorage Enterprise Tape Drive 300,000 (for high capacity requirements) or 60,000 (for fast data access requirements) 40,000<br />
Note: z/<strong>OS</strong> supports tape devices starting from D/T 3420.<br />
For further information about tape processing, see z/<strong>OS</strong> DFSMS Using Magnetic Tapes,<br />
SC26-7412.<br />
8.25 TotalStorage Enterprise Tape Drive 3592 Model J1A<br />
The <strong>IBM</strong> 3592 is an "Enterprise Class" tape drive:<br />
► Data rate: 40 MB/s (without compression)<br />
► Capacity: 300 GB per cartridge (without compression); a 60 GB format provides fast access to data<br />
► Dual ported 2 Gbps Fibre Channel interface: autonegotiates (1 or 2 Gbps, fabric or loop support); options may be hard-set at the drive<br />
► Drive designed for automation solutions: small form factor; cartridge form factor similar to 3590 and 3490<br />
► Improved environmentals<br />
Figure 8-25 TotalStorage Enterprise Tape Drive 3592 Model J1A<br />
<strong>IBM</strong> 3592 tape drive<br />
The <strong>IBM</strong> 3592 tape drive is the fourth generation <strong>of</strong> high capacity and high performance tape<br />
systems. It was announced in September 2003 and connects to <strong>IBM</strong> eServer zSeries<br />
systems by way <strong>of</strong> the TotalStorage Enterprise Tape Controller 3592 Model J70 using<br />
ESCON or FICON links. The 3592 system is the successor <strong>of</strong> the <strong>IBM</strong> Magstar 3590 family <strong>of</strong><br />
tape drives and controller types.<br />
The <strong>IBM</strong> 3592 tape drive can be used as a standalone solution or as an automated solution<br />
within a 3494 tape library.<br />
Enterprise class tape drive<br />
The native data transfer rate increases to 40 MBps, compared to 14 MBps in a 3590<br />
Magstar. The uncompressed amount of data that fits on a single cartridge increases to<br />
300 GB and is used for scenarios where high capacity is needed. The tape drive has a<br />
second option, where you can store a maximum of 60 GB per tape. This option is used<br />
whenever fast access to tape data is needed.<br />
Dual ported 2 Gbps Fibre Channel interface<br />
This tape drive generation connects to the tape controller 3592 Model J70 using Fibre<br />
Channel. A SCSI connection, as used in 3590 configurations, is no longer supported. However,<br />
if you connect a 3590 Magstar tape drive to a 3592 controller, a SCSI connection is possible.<br />
Drive designed for automation solutions<br />
The drive has a smaller form factor. Thus, you can integrate more drives into an automated<br />
tape library. The cartridges have a similar form factor to the 3590 and 3490 cartridge, so they<br />
fit into the same slots in a 3494 automated tape library.<br />
Improved environmentals<br />
By using a smaller form factor than 3590 Magstar drives, you can put two 3592 drives in place<br />
<strong>of</strong> one 3590 drive in the 3494. In a stand-alone solution you can put a maximum <strong>of</strong> 12 drives<br />
into one 19-inch rack, managed by one controller.<br />
8.26 <strong>IBM</strong> TotalStorage Enterprise Automated Tape Library 3494<br />
Figure 8-26 3494 tape library<br />
<strong>IBM</strong> 3494 tape library<br />
Tape storage media can provide low-cost data storage for sequential files, inactive data, and<br />
vital records. Because <strong>of</strong> the continued growth in tape use, tape automation has been seen<br />
as a way <strong>of</strong> addressing an increasing number <strong>of</strong> challenges.<br />
Various solutions providing tape automation, including the following, are available:<br />
► The Automatic Cartridge Loader on <strong>IBM</strong> 3480 and 3490E tape subsystems provides quick<br />
mounts of scratch volumes (a scratch volume contains no valid data and is used for output).<br />
► The Automated Cartridge Facility on the Magstar 3590 tape subsystem, working with<br />
application s<strong>of</strong>tware, can provide a 10-cartridge mini-tape library.<br />
► The <strong>IBM</strong> 3494, an automated tape library dataserver, is a device consisting <strong>of</strong> robotics<br />
components, cartridge storage areas (or shelves), tape subsystems, and controlling<br />
hardware and s<strong>of</strong>tware, together with the set <strong>of</strong> tape volumes that reside in the library and<br />
can be mounted on the library tape drives.<br />
► The Magstar Virtual Tape Server (VTS) provides volume stacking capability and exploits<br />
the capacity and bandwidth <strong>of</strong> Magstar 3590 technology.<br />
3494 models and features<br />
<strong>IBM</strong> 3494 <strong>of</strong>fers a wide range <strong>of</strong> models and features, including the following:<br />
► Up to 96 tape drives<br />
► Support through the Library Control Unit for attachment <strong>of</strong> up to 15 additional frames,<br />
including the Magstar VTS, for a total <strong>of</strong> 16 frames, not including the High Availability unit<br />
► Cartridge storage capacity <strong>of</strong> 291 to 6145 tape cartridges<br />
► Data storage capacity <strong>of</strong> up to 1.84 PB (Petabytes) <strong>of</strong> uncompacted data and 5.52 PB <strong>of</strong><br />
compacted data (at a compression rate <strong>of</strong> 3:1)<br />
► Support for the High Availability unit that provides a high level <strong>of</strong> availability for tape<br />
automation<br />
► Support for the <strong>IBM</strong> Total Storage Virtual Tape Server<br />
► Support for the <strong>IBM</strong> Total Storage Peer-to-Peer VTS<br />
► Support for the following tape drives:<br />
– <strong>IBM</strong> 3490E Model F1A tape drive<br />
– <strong>IBM</strong> 3490E Model CxA tape drives<br />
– <strong>IBM</strong> Magstar 3590 Model B1A tape drives<br />
– <strong>IBM</strong> Magstar 3590 Model E1A tape drives<br />
– <strong>IBM</strong> Magstar 3590 Model H1A tape drives<br />
– <strong>IBM</strong> TotalStorage Enterprise Tape Drive 3592 Model J1A<br />
► Attachment to and sharing by multiple host systems, such as <strong>IBM</strong> eServer zSeries,<br />
iSeries, pSeries®, S/390, RS/6000, AS/400, HP, and Sun processors<br />
► Data paths through FICON, fibre channels, SCSI-2, ESCON, and parallel channels<br />
depending on the tape subsystem installed<br />
► Library management commands through RS-232, a local area network (LAN), and<br />
parallel, ESCON, and FICON channels<br />
8.27 Introduction to Virtual Tape Server (VTS)<br />
VTS models:<br />
► Model B10 VTS<br />
► Model B20 VTS<br />
► Peer-to-Peer (PtP) VTS (up to twenty-four 3590 tape drives)<br />
VTS design (single VTS):<br />
► 32, 64, 128, or 256 3490E virtual devices<br />
► Tape volume cache: analogous to DASD cache; data access through the cache; dynamic space management; cache hits eliminate tape mounts<br />
► Up to twelve 3590 tape drives (the real 3590 volumes contain up to 250,000 virtual volumes per VTS)<br />
► Stacked 3590 tape volumes managed by the 3494<br />
Figure 8-27 Introduction to VTS<br />
VTS introduction<br />
The <strong>IBM</strong> Magstar Virtual Tape Server (VTS), integrated with the <strong>IBM</strong> Tape Library<br />
Dataservers (3494), delivers an increased level <strong>of</strong> storage capability beyond the traditional<br />
storage products hierarchy. The host s<strong>of</strong>tware sees VTS as a 3490 Enhanced Capability<br />
(3490E) Tape Subsystem with associated standard (CST) or Enhanced Capacity Cartridge<br />
<strong>System</strong> Tapes (ECCST). This virtualization <strong>of</strong> both the tape devices and the storage media to<br />
the host allows for transparent utilization <strong>of</strong> the capabilities <strong>of</strong> the <strong>IBM</strong> 3590 tape technology.<br />
Along with the introduction of the <strong>IBM</strong> Magstar VTS, <strong>IBM</strong> introduced new views of volumes<br />
and devices, because the host system and the hardware have different knowledge of volumes<br />
and devices. Using a VTS subsystem, the host application writes tape data to virtual<br />
devices. The volumes created by the hosts are called virtual volumes and are physically<br />
stored in a tape volume cache that is built from RAID DASD.<br />
VTS models<br />
These are the <strong>IBM</strong> 3590 drives you can choose:<br />
► For the Model B10 VTS, four, five, or six 3590-B1A/E1A/H1A can be associated with VTS.<br />
► For the Model B20 VTS, six to twelve 3590-B1A/E1A/H1A can be associated with VTS.<br />
Each ESCON channel in the VTS is capable <strong>of</strong> supporting 64 logical paths, providing up to<br />
1024 logical paths for Model B20 VTS with sixteen ESCON channels, and 256 logical paths<br />
for Model B10 VTS with four ESCON channels. Each logical path can address any <strong>of</strong> the 32,<br />
64, 128, or 256 virtual devices in the Model B20 VTS.<br />
Each FICON channel in the VTS can support up to 128 logical paths, providing up to 1024<br />
logical paths for the Model B20 VTS with eight FICON channels. With a Model B10 VTS, 512<br />
logical paths can be provided with four FICON channels. As with ESCON, each logical path<br />
can address any <strong>of</strong> the 32, 64, 128, or 256 virtual devices in the Model B20 VTS.<br />
Note: Intermixing FICON and SCSI interfaces is not supported.<br />
Tape volume cache<br />
The <strong>IBM</strong> TotalStorage Peer-to-Peer Virtual Tape Server appears to the host processor as a<br />
single automated tape library with 64, 128, or 256 virtual tape drives and up to 250,000 virtual<br />
volumes. The configuration <strong>of</strong> this system has up to 3.5 TB <strong>of</strong> Tape <strong>Volume</strong> Cache native<br />
(10.4 TB with 3:1 compression), up to 24 <strong>IBM</strong> TotalStorage 3590 tape drives, and up to 16<br />
host ESCON or FICON channels. Through tape volume cache management policies, the VTS<br />
management s<strong>of</strong>tware moves host-created volumes from the tape volume cache to a Magstar<br />
cartridge managed by the VTS subsystem. When a virtual volume is moved from the tape<br />
volume cache to tape, it becomes a logical volume.<br />
VTS design<br />
VTS looks like an automated tape library with thirty-two 3490E drives and 50,000 volumes in<br />
37 square feet. Its major components are:<br />
► Magstar 3590 (three or six tape drives) with two ESCON channels<br />
► Magstar 3494 Tape Library<br />
► Fault-tolerant RAID-1 disks (36 GB or 72 GB)<br />
► RISC Processor<br />
VTS functions<br />
VTS provides the following functions:<br />
► Thirty-two 3490E virtual devices.<br />
► Tape volume cache (implemented in a RAID-1 disk) that contains virtual volumes.<br />
The tape volume cache consists <strong>of</strong> a high performance array <strong>of</strong> DASD and storage<br />
management s<strong>of</strong>tware. Virtual volumes are held in the tape volume cache when they are<br />
being used by the host system. Outboard storage management s<strong>of</strong>tware manages which<br />
virtual volumes are in the tape volume cache and the movement <strong>of</strong> data between the tape<br />
volume cache and physical devices. The size <strong>of</strong> the DASD is made large enough so that<br />
more virtual volumes can be retained in it than just the ones currently associated with the<br />
virtual devices.<br />
After an application modifies and closes a virtual volume, the storage management<br />
s<strong>of</strong>tware in the system makes a copy <strong>of</strong> it onto a physical tape. The virtual volume remains<br />
available on the DASD until the space it occupies reaches a predetermined threshold.<br />
Leaving the virtual volume in the DASD allows for fast access to it during subsequent<br />
requests. The DASD and the management <strong>of</strong> the space used to keep closed volumes<br />
available is called tape volume cache. Performance for mounting a volume that is in tape<br />
volume cache is quicker than if a real physical volume is mounted.<br />
► Up to six 3590 tape drives; the real 3590 volumes contain logical volumes. The installation<br />
sees up to 50,000 volumes.<br />
► Stacked 3590 tape volumes managed by the 3494. VTS fills each tape cartridge up to 100%<br />
of its capacity: by putting multiple virtual volumes into a stacked volume, VTS uses all of the<br />
available space on the cartridge. VTS uses <strong>IBM</strong> 3590 cartridges when stacking volumes.<br />
VTS is expected to provide a ratio <strong>of</strong> 59:1 in volume reduction, with dramatic savings in all<br />
tape hardware items (drives, controllers, and robots).<br />
8.28 <strong>IBM</strong> TotalStorage Peer-to-Peer VTS<br />
Figure 8-28 <strong>IBM</strong> TotalStorage Peer-to-Peer VTS<br />
Peer-to-Peer VTS<br />
<strong>IBM</strong> TotalStorage Peer-to-Peer Virtual Tape Server, an extension <strong>of</strong> <strong>IBM</strong> TotalStorage Virtual<br />
Tape Server, is specifically designed to enhance data availability. It accomplishes this by<br />
providing dual volume copy, remote functionality, and automatic recovery and switchover<br />
capabilities. With a design that reduces single points <strong>of</strong> failure (including the physical media<br />
where logical volumes are stored), <strong>IBM</strong> TotalStorage Peer-to-Peer Virtual Tape Server<br />
improves system reliability and availability, as well as data access. To help protect current<br />
hardware investments, existing <strong>IBM</strong> TotalStorage Virtual Tape Servers can be upgraded for<br />
use in this new configuration.<br />
<strong>IBM</strong> TotalStorage Peer-to-Peer Virtual Tape Server consists <strong>of</strong> new models and features <strong>of</strong><br />
the 3494 Tape Library that are used to join two separate Virtual Tape Servers into a single,<br />
interconnected system. The two virtual tape systems can be located at the same site or at<br />
separate sites that are geographically remote. This provides a remote copy capability for<br />
remote vaulting applications.<br />
<strong>IBM</strong> TotalStorage Peer-to-Peer Virtual Tape Server appears to the host <strong>IBM</strong> eServer zSeries<br />
processor as a single automated tape library with 64, 128, or 256 virtual tape drives and up to<br />
500,000 virtual volumes. The configuration <strong>of</strong> this system has up to 3.5 TB <strong>of</strong> Tape <strong>Volume</strong><br />
Cache native (10.4 TB with 3:1 compression), up to 24 <strong>IBM</strong> TotalStorage 3590 tape drives,<br />
and up to 16 host ESCON or FICON channels.<br />
In addition to the 3494 VTS components B10, B18, and B20, the Peer-to-Peer VTS consists<br />
<strong>of</strong> the following components:<br />
► The 3494 virtual tape controller model VTC<br />
The VTC in the Virtual Tape Frame 3494 Model CX1 provides interconnection between<br />
two VTSs with the Peer-to-Peer Copy features, and provides two host attachments for the<br />
PtP VTS. There must be four (for the Model B10 or B18) or eight (for the Model B20 or<br />
B18) VTCs in a PtP VTS configuration. Each VTC is an independently operating,<br />
distributed node within the PtP VTS, which continues to operate during scheduled or<br />
unscheduled service <strong>of</strong> another VTC.<br />
► The 3494 auxiliary tape frame model CX1<br />
The Model CX1 provides the housing and power for two or four 3494 virtual tape<br />
controllers. Each Model CX1 can be configured with two or four Model VTCs. There are<br />
two power control compartments, each with its own power cord, to allow connection to two<br />
power sources.<br />
Peer-to-Peer copy features<br />
Special features installed on 3494 Models B10, B18, and B20 in a Peer-to-Peer configuration<br />
provide automatic copies <strong>of</strong> virtual volumes. These features can be installed on existing VTS<br />
systems to upgrade them to a Peer-to-Peer VTS.<br />
VTS advanced functions<br />
As with a stand-alone VTS, the Peer-to-Peer VTS has the option to install additional features<br />
and enhancements to existing features. These new features are:<br />
► Outboard policy management: Outboard policy management enables the storage<br />
administrator to manage SMS data classes, storage classes, management classes, and<br />
storage groups at the library manager or the 3494 specialist.<br />
► Physical volume pooling: With outboard policy management enabled, you are able to<br />
assign logical volumes to selected storage groups. Storage groups point to primary<br />
storage pools. These pool assignments are stored in the library manager database. When<br />
a logical volume is copied to tape, it is written to a stacked volume that is assigned to a<br />
storage pool as defined by the storage group constructs at the library manager.<br />
► Tape volume dual copy: With advanced policy management, storage administrators have<br />
the facility to selectively create dual copies <strong>of</strong> logical volumes within a VTS. This function<br />
is also available in the Peer-to-Peer environment. At the site or location where the second<br />
distributed library is located, logical volumes can also be duplexed, in which case you can<br />
have two or four copies <strong>of</strong> your data.<br />
► Peer-to-Peer copy control<br />
There are two types of copy operations:<br />
– Immediate, which creates a copy of the logical volume in the companion connected<br />
virtual tape server prior to completion of a rewind/unload command. This mode<br />
provides the highest level of data protection.<br />
– Deferred, which creates a copy of the logical volume in the companion connected<br />
virtual tape server as activity permits after receiving a rewind/unload command. This<br />
mode provides protection that is superior to most currently available backup schemes.<br />
► Tape volume cache management: Prior to the introduction of these features, there was no<br />
way to influence cache residency. As a result, all data written to the TVC was pre-migrated<br />
using a first-in, first-out (FIFO) method. With the introduction of this function, you now have<br />
the ability to influence the time that virtual volumes reside in the TVC.<br />
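The cache management behavior described above can be sketched as a simple model. This is an illustrative sketch only, not the actual VTS implementation: the class name, the two preference groups ("migrate first" versus "keep in cache"), and the volume serial numbers are invented for the example.<br />

```python
from collections import deque

# Illustrative model of tape volume cache (TVC) residency policy.
# Group 0 volumes are removed from cache first; group 1 volumes stay
# resident longer. Within each group, eviction remains FIFO, as in the
# original behavior described above.

class TapeVolumeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pg0 = deque()  # "migrate first" volumes
        self.pg1 = deque()  # "keep in cache" volumes

    def write(self, volser, preference_group=1):
        """Write a virtual volume; evict by policy if the cache is full."""
        while len(self.pg0) + len(self.pg1) >= self.capacity:
            self.evict_one()
        (self.pg0 if preference_group == 0 else self.pg1).append(volser)

    def evict_one(self):
        # All group 0 volumes leave the cache before any group 1 volume;
        # within a group, eviction is first-in, first-out.
        if self.pg0:
            return self.pg0.popleft()
        return self.pg1.popleft()

cache = TapeVolumeCache(capacity=3)
cache.write("V00001", preference_group=0)
cache.write("V00002", preference_group=1)
cache.write("V00003", preference_group=1)
evicted = cache.evict_one()  # the group 0 volume leaves first
```

Without the policy, eviction would be pure FIFO across all volumes; the preference groups are what let you influence how long a virtual volume remains in the TVC.<br />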
500 <strong>ABCs</strong> <strong>of</strong> z/<strong>OS</strong> <strong>System</strong> <strong>Programming</strong> <strong>Volume</strong> 3
8.29 Storage area network (SAN)<br />
Figure 8-29 Storage area network (SAN)<br />
Storage area network<br />
The Storage Network Industry Association (SNIA) defines a SAN as a network whose primary<br />
purpose is the transfer of data between computer systems and storage elements. A SAN<br />
consists of a communication infrastructure, which provides physical connections, and a<br />
management layer, which organizes the connections, storage elements, and computer<br />
systems so that data transfer is secure and robust. The term SAN is usually (but not<br />
necessarily) identified with block I/O services rather than file access services. It can also be a<br />
storage system consisting of storage elements, storage devices, computer systems, and<br />
appliances, plus all control software, communicating over a network.<br />
SANs today are usually built using Fibre Channel technology, but the concept of a SAN is<br />
independent of the underlying type of network.<br />
The major potential benefits of a SAN can be categorized as:<br />
► Access<br />
Benefits include longer distances between processors and storage, higher availability, and<br />
improved performance (because I/O traffic is offloaded from a LAN to a dedicated<br />
network, and because Fibre Channel is generally faster than most LAN media). Also, a<br />
larger number of processors can be connected to the same storage device, compared to<br />
typical built-in device attachment facilities.<br />
Chapter 8. Storage management hardware 501
► Consolidation<br />
Another benefit is replacement of multiple independent storage devices by fewer devices<br />
that support capacity sharing; this is also called disk and tape pooling. SANs provide the<br />
ultimate in scalability because software can allow multiple SAN devices to appear as a<br />
single pool of storage accessible to all processors on the SAN. Storage on a SAN can be<br />
managed from a single point of control. Controls over which hosts can see which storage<br />
(called zoning and LUN masking) can be implemented.<br />
► Protection<br />
LAN-free backups occur over the SAN rather than the (slower) LAN, and server-free<br />
backups can let disk storage “write itself” directly to tape without processor overhead.<br />
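The LUN masking idea mentioned above can be illustrated with a small sketch. The WWPNs and LUN numbers below are invented for the example; in a real SAN, masking is enforced in the storage controller (and zoning in the switches), not in application code.<br />

```python
# Illustrative LUN masking table: each host port (WWPN) may address
# only the LUNs listed for it. All identifiers here are hypothetical.
masking_table = {
    "10:00:00:00:c9:aa:bb:01": {0, 1, 2},  # production host
    "10:00:00:00:c9:aa:bb:02": {3},        # backup host
}

def visible_luns(host_wwpn):
    """Return the set of LUNs this host port is permitted to address."""
    return masking_table.get(host_wwpn, set())

def can_access(host_wwpn, lun):
    # A host that is not in the table sees no storage at all.
    return lun in visible_luns(host_wwpn)
```

The point of the sketch is the access model: every host is physically connected to the shared pool, but each one is shown only its own slice of it.<br />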
There are various SAN topologies built on Fibre Channel networks:<br />
► Point-to-Point<br />
A simple dedicated link provides high-speed interconnection between two nodes.<br />
► Arbitrated loop<br />
The Fibre Channel arbitrated loop offers relatively high bandwidth and connectivity at a<br />
low cost. For a node to transfer data, it must first arbitrate to win control of the loop. Once<br />
the node has control, it is free to establish a virtual point-to-point connection with another<br />
node on the loop. After this point-to-point (virtual) connection is established, the two nodes<br />
consume all of the loop’s bandwidth until the data transfer operation is complete. Once the<br />
transfer is complete, any node on the loop can then arbitrate to win control of the loop.<br />
► Switched<br />
Fibre Channel switches function in a manner similar to traditional network switches to<br />
provide increased bandwidth, scalable performance, an increased number of devices,<br />
and, in certain cases, increased redundancy.<br />
Multiple switches can be connected to form a switch fabric capable of supporting a large<br />
number of host servers and storage subsystems. When switches are connected, each<br />
switch’s configuration information has to be copied into all the other participating switches.<br />
This is called cascading.<br />
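The arbitrated-loop behavior described above can be modeled in a few lines. This is a deliberately simplified sketch: the real protocol (FC-AL) arbitrates with ARB primitives and AL_PA priorities, whereas here "winning" simply means the loop was idle. The node names are invented.<br />

```python
# Simplified model of an arbitrated loop: only one node pair may use
# the loop at a time, and a node must win arbitration before opening
# its virtual point-to-point connection.

class ArbitratedLoop:
    def __init__(self):
        self.owner = None  # node currently holding the loop, if any

    def arbitrate(self, node):
        """A node tries to win the loop; it fails while a transfer is active."""
        if self.owner is None:
            self.owner = node
            return True
        return False

    def transfer(self, src, dst):
        # The winning pair consumes all loop bandwidth until the
        # transfer completes, then the loop is released for the next
        # round of arbitration.
        assert self.owner == src, "must win arbitration before transferring"
        self.owner = None
        return (src, dst)

loop = ArbitratedLoop()
loop.arbitrate("node_a")           # node_a wins the idle loop
busy = loop.arbitrate("node_b")    # denied while node_a holds the loop
loop.transfer("node_a", "node_c")  # transfer done, loop released
```

The contrast with the switched topology is visible in the model: on a loop, a second pair must wait for the release, while a switch fabric carries concurrent transfers.<br />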
FICON and SAN<br />
From a zSeries perspective, FICON is the protocol that is used in a SAN environment. A<br />
FICON infrastructure may be point-to-point or switched, using ESCON directors with FICON<br />
bridge cards or FICON directors to provide connections between channels and control units.<br />
FICON uses Fibre Channel transport protocols, and so uses the same physical fiber.<br />
Today, zSeries supports a 2 Gbps link data rate. The 2 Gbps links are available for native<br />
FICON, FICON CTC, cascaded directors, and Fibre Channel (FCP) channels on the FICON<br />
Express cards on z800, z900, and z990 only.<br />
Related publications<br />
The publications listed in this section are considered particularly suitable for a more detailed<br />
discussion of the topics covered in this book.<br />
IBM Redbooks publications<br />
For information about ordering these publications, see “How to get IBM Redbooks<br />
publications” on page 504. Note that some of the documents referenced here may be<br />
available in softcopy only.<br />
► VSAM Demystified, SG24-6105<br />
► DFSMStvs Overview and Planning Guide, SG24-6971<br />
► DFSMStvs Presentation Guide, SG24-6973<br />
► z/OS DFSMS V1R3 and V1R5 Technical Guide, SG24-6979<br />
Other publications<br />
These publications are also relevant as further information sources:<br />
► z/OS DFSMStvs Administration Guide, GC26-7483<br />
► Device Support Facilities User’s Guide and Reference Release 17, GC35-0033<br />
► z/OS MVS Programming: Assembler Services Guide, SA22-7605<br />
► z/OS MVS System Commands, SA22-7627<br />
► z/OS MVS System Messages, Volume 1 (ABA-AOM), SA22-7631<br />
► DFSMS Optimizer User’s Guide and Reference, SC26-7047<br />
► z/OS DFSMStvs Planning and Operating Guide, SC26-7348<br />
► z/OS DFSMS Access Method Services for Catalogs, SC26-7394<br />
► z/OS DFSMSdfp Storage Administration Reference, SC26-7402<br />
► z/OS DFSMSrmm Guide and Reference, SC26-7404<br />
► z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405<br />
► z/OS DFSMS Implementing System-Managed Storage, SC26-7407<br />
► z/OS DFSMS: Managing Catalogs, SC26-7409<br />
► z/OS DFSMS: Using Data Sets, SC26-7410<br />
► z/OS DFSMS: Using the Interactive Storage Management Facility, SC26-7411<br />
► z/OS DFSMS: Using Magnetic Tapes, SC26-7412<br />
► z/OS DFSMSdfp Utilities, SC26-7414<br />
► z/OS Network File System Guide and Reference, SC26-7417<br />
► DFSORT Getting Started with DFSORT R14, SC26-4109<br />
► DFSORT Installation and Customization Release 14, SC33-4034<br />
► z/OS DFSMShsm Storage Administration Guide, SC35-0421<br />
► z/OS DFSMShsm Storage Administration Reference, SC35-0422<br />
► z/OS DFSMSdss Storage Administration Guide, SC35-0423<br />
► z/OS DFSMSdss Storage Administration Reference, SC35-0424<br />
► z/OS DFSMS Object Access Method Application Programmer’s Reference, SC35-0425<br />
► z/OS DFSMS Object Access Method Planning, Installation, and Storage Administration<br />
Guide for Object Support, SC35-0426<br />
► Tivoli Decision Support for OS/390 System Performance Feature Reference Volume I,<br />
SH19-6819<br />
► Device Support Facilities (ICKDSF) User's Guide and Reference, GC35-0033-35<br />
► z/OS DFSORT Application Programming Guide, SC26-7523<br />
► z/OS MVS JCL Reference, SA22-7597<br />
Online resources<br />
These Web sites and URLs are also relevant as further information sources:<br />
► For articles, online books, news, tips, techniques, examples, and more, visit the z/OS<br />
DFSORT home page:<br />
http://www-1.ibm.com/servers/storage/support/software/sort/mvs<br />
► For more information about DFSMSrmm, visit:<br />
http://www-1.ibm.com/servers/storage/software/sms/rmm/<br />
How to get IBM Redbooks publications<br />
You can search for, view, or download books, Redpapers, Hints and Tips, draft publications<br />
and Additional materials, as well as order hardcopy books or CD-ROMs, at this Web site:<br />
ibm.com/redbooks<br />
Help from IBM<br />
IBM Support and downloads<br />
ibm.com/support<br />
IBM Global Services<br />
ibm.com/services<br />
Back cover<br />
The ABCs of z/OS System Programming is a thirteen-volume collection that<br />
provides an introduction to the z/OS operating system and the hardware<br />
architecture. Whether you are a beginner or an experienced system programmer,<br />
the ABCs collection provides the information that you need to start your research<br />
into z/OS and related subjects. The ABCs collection serves as a powerful technical<br />
tool to help you become more familiar with z/OS in your current environment, or<br />
to help you evaluate platforms to consolidate your e-business applications.<br />
This edition is updated to z/OS Version 1 Release 11.<br />
The contents of the volumes are:<br />
Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and<br />
z/OS delivery and installation<br />
Volume 2: z/OS implementation and daily maintenance, defining subsystems,<br />
JES2 and JES3, LPA, LNKLST, authorized libraries, Language Environment, and<br />
SMP/E<br />
Volume 3: Introduction to DFSMS, data set basics, storage management hardware<br />
and software, VSAM, System-Managed Storage, catalogs, and DFSMStvs<br />
Volume 4: Communication Server, TCP/IP, and VTAM<br />
Volume 5: Base and Parallel Sysplex, System Logger, Resource Recovery Services<br />
(RRS), Global Resource Serialization (GRS), z/OS system operations, Automatic<br />
Restart Management (ARM), Geographically Dispersed Parallel Sysplex (GDPS)<br />
Volume 6: Introduction to security, RACF, digital certificates and PKI, Kerberos,<br />
cryptography and z990 integrated cryptography, zSeries firewall technologies,<br />
LDAP, and Enterprise Identity Mapping (EIM)<br />
Volume 7: Printing in a z/OS environment, Infoprint Server and Infoprint Central<br />
Volume 8: An introduction to z/OS problem diagnosis<br />
Volume 9: z/OS UNIX System Services<br />
Volume 10: Introduction to z/Architecture, zSeries processor design, zSeries<br />
connectivity, LPAR concepts, HCD, and HMC<br />
Volume 11: Capacity planning, performance management, RMF, and SMF<br />
Volume 12: WLM<br />
Volume 13: JES3<br />
SG24-6983-03 ISBN 0738434094<br />
INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION<br />
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE<br />
IBM Redbooks are developed by the IBM International Technical Support Organization.<br />
Experts from IBM, Customers and Partners from around the world create timely technical<br />
information based on realistic scenarios. Specific recommendations are provided to help<br />
you implement IT solutions more effectively in your environment.<br />
For more information:<br />
ibm.com/redbooks<br />