Part_0 - Mainframe DATA Conversion Alternatives
       - Plan A: convert Sequential & Indexed files retaining existing layouts
         to get off the mainframe as soon as possible
       - Plan B: convert flat files to RDBMS tables
       - Plan C: follow Plan A, then use Plan B for desired files on your own
         schedule (after the initial conversion to get off the mainframe ASAP)
       - Data Conversion Features Overview
Part_1 - Preparation for Conversion
       - Directory Setup
Part_2 - Testing & Debugging Aids
       - Cross-References, Jobflow reports, job logging, etc.
       - uvhd, cobmap, dtree, uvcp, scan/replace, rename scripts
       - many other scripts & utilities valuable during conversions
       - several of these can be run using the supplied test files
       - get familiar with these, so you will know when they can save you time
       - Tips for users new to Unix/Linux
       - Micro Focus COBOL error codes listed here for your convenience
Part_3 - Environmental file listings
       - profile for Unix/Linux & Windows/SFU
       - you may have to make minor modifications
Part_4 - Mainframe Data Transfer & Conversion to ASCII
       - preserving packed decimal & binary fields
       - correcting zoned signs to Micro Focus COBOL standards
       - Part 4 procedures generate uvcopy conversion jobs from all copybooks
         to convert ALL data files in a directory
       - inserts actual data filenames (vs copybook names) via a control file
         created by extracting all data filenames from JCL & appending file
         info from LISTCAT (recsize, filetype, indexed keys)
Part_5 - Converting 1 data file at a time
       - 'gencnv51' script to generate a uvcopy job from a copybook & insert
         the data filename (vs copybook name)
       - uses the control file (created in Part 4) to get file type & keys,
         but this step could be done manually if the control file is not available
Part_6 - Complex file conversions
       - multi-record-type files, redefined records
       - occurs with mixed data types
       - variable length files
Part_7 - Variable Length Record Files
       - RDW (Record Descriptor Word) variable length files
       - investigating RDW files with uvhd
       - converting EBCDIC RDW files to ASCII using uvhd, uvcp, & uvcopy varfix11
       - creating table summaries of record sizes found in RDW files
Part_9 - Summary of utilities & scripts used in data file conversion
       - listings of some scripts
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
MVSDATA.htm  - MVS DATA Conversion (this document)
VSEDATA.htm  - VSE DATA Conversion
             - greatly enhanced in 2007 for the City of Lynn Valley conversion
             - good info re conversion of variable length files
               to IDXFORMAT8 for Micro Focus COBOL
             - could apply to MVS as well as VSE
MVSJCL.htm   - MVS JCL Conversion to Korn shell scripts
VSEJCL.htm   - VSE JCL Conversion
MvsJclPerl.htm - MVS JCL Conversion to Perl scripts
MVSCOBOL.htm - MVS COBOL Conversion
VSECOBOL.htm - VSE COBOL Conversion
DATAcnv1.htm - Simplified DATA conversion (1 file at a time)
             - translate EBCDIC to ASCII, preserving packed fields
               & correcting zoned signs to Micro Focus COBOL standards
             - converting to all-text delimited files for loading RDBMSs
Owen Townsend, UV Software, 4667 Hoskins Rd., North Vancouver BC, V7K2R3
Tel: 604-980-5434 Fax: 604-980-5404
Note that you might use DATAcnv1.htm for your first few data file conversions, since the DATAcnv1 procedures are much simpler than the MVSDATA/VSEDATA procedures.
You will need the MVSDATA/VSEDATA conversion procedures for major conversion projects where you need the automation provided by their control files.
The MVSDATA/VSEDATA procedures allow you to reconvert all data files with one command. You may need to re-transfer & reconvert data files several times during a major conversion & certainly on the 'go live' weekend.
uvdata51, uvdata52, uvdata53, uvdata54, uvdata31, uvdata41, uvdata42, uvdata43, uvdata44
In Fall 2004, the MVS JCL conversion package and this documentation were significantly enhanced. Test/Demo JCL, PROCs, COBOL programs, & DATA files are now provided, for you to run the conversion procedures & verify your results match the results listed in the documentation.
The Test/Demo conversion sessions will give you a clear understanding of these conversion procedures & make it easier for you to get started on your own conversions. If you are reading this on the web site (www.uvsoftware.ca), the test/demo documentation will make it easier for you to understand these conversion concepts.
If you have not yet purchased the Vancouver Utilities conversion package, UV Software invites you to send samples of your JCL; we will convert them & return the results by email attachment. Please be sure to include all referenced PROCs and any library members containing SORT keys, etc.
UV Software offers 2 strategies for converting mainframe data files for use on Unix/Linux systems. We are concerned here with the Sequential & Indexed files used on the mainframe, not with existing mainframe DBMS's. Software for converting mainframe DBMS tables is usually supplied by the vendor.
UV Software supplies powerful utilities to convert the Sequential & Indexed files, automatically from the COBOL copybooks, allowing for complex files, with multiple record types, occurs, etc.
Plan 'A' is to retain existing record layouts, which allows you to convert quickly, since COBOL program logic needs no changes.
Plan 'B' is to convert data files to pipe delimited text files for loading RDBMS tables. Any packed/binary fields are unpacked & edited with signs & decimal points as required for loading RDBMS tables. We also automatically generate the SQL Loader control files from the COBOL copybooks.
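The Plan B idea of slicing fixed-layout records into pipe-delimited fields can be sketched with standard awk. The record layout below (account 5 bytes, name 10 bytes, amount 7 bytes) is purely illustrative, not an actual copybook; real conversions also unpack packed/binary fields, which this sketch does not attempt.

```shell
# Hypothetical 22-byte record layout: acct(5) name(10) amount(7)
echo '00123JONES     0005000' |
awk '{ printf "%s|%s|%s\n",
         substr($0,1,5),      # account number
         substr($0,6,10),     # customer name
         substr($0,16,7) }'   # amount
# -> 00123|JONES     |0005000
```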
We recommend plan A to get off the mainframe as quickly as possible. Then you can convert files to RDBMS tables depending on your priorities & timetable.
MVSDATA.doc provides the detailed step-by-step instructions for converting an IBM EBCDIC mainframe to Unix/Linux, using Vancouver Utilities from UV Software Inc.
9a. Mainframe JCL is automatically converted to Unix/Linux scripts. Mainframe file assignments for COBOL programs are converted to export the external name for Micro Focus COBOL. Any mainframe DATA and SORT utilities are converted to the Vancouver Utility Unix/Linux equivalent (uvcp or uvsort). Note that the Unix or Linux system sort cannot be used for mainframe file types since it does not support fixed record lengths with no linefeeds.
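The point about linefeeds is easy to demonstrate: mainframe fixed-length records carry no record terminators, so line-oriented tools see no records at all. A minimal illustration (the /tmp filename is just for the demo):

```shell
# Three 4-byte fixed-length records, no linefeed terminators (mainframe style)
printf 'rec3rec1rec2' > /tmp/fixed.$$
wc -l < /tmp/fixed.$$     # prints 0 -- a line-oriented sort has nothing to split on
rm /tmp/fixed.$$
```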
9b. JCL Converted scripts simplified. The JCL conversion has been designed to make the resulting Unix/Linux scripts as easy as possible to read, to debug, and to maintain in future.
9c. Unix/Linux functions have been written to provide the essential functionality of the MVS IBM cataloged generation files. Please see exportgen0,1,p,x,all which are discussed & listed beginning on page 5J0 of MVSJCL.htm#5J0
uvlpLS13 - prints Landscape Simplex at 13 cpi to fit 132 cols on 11" wide
         - prints at 8.1 lpi to fit 60 lines in 8 1/2" deep
uvlpLD13 - prints Landscape Duplex (else same as above)
The JCL converter generates print commands for both laser printing (uvlpLS13, etc.) & easy spooler (#llp). By default the easy spooler command is disabled (commented out with '#').
$PT #llp -dlp01 -ohold -onob -fPLAIN $DD_REPORT
$PT uvlpLS13 $DD_REPORT
We recommend you uncomment the easy spooler command for continuous forms (cheques, labels, etc.) & remove the laser print command, or leave the commands as is to laser-print all stock reports. Also note that both commands are disabled for programmer testing but enabled for operator immediate printing via environment variables in their profiles (export PT=":" or export PT="").
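The $PT mechanism works because ':' is the shell's no-op builtin: with PT=":" every print command expands to ': command args' and does nothing, while PT="" leaves the command active. A minimal sketch (echo stands in for a real print command):

```shell
# Programmer profile: PT=":" turns print commands into no-ops
PT=":"
$PT echo "report sent to printer"    # expands to ': echo ...' -- prints nothing

# Operator profile: PT="" leaves the command active
PT=""
$PT echo "report sent to printer"    # runs normally
```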
The Vancouver Utilities & the conversion tools & scripts may now be run on Windows systems, using SFU, UWIN, or CYGWIN (Unix/Linux emulators for Windows).
There is also a low-cost version of Vancouver Utilities for native Windows, but that package does not include the conversion tools, since the conversions depend on many Unix-style scripts that can only be run on Windows with the aid of an emulator such as SFU, UWIN, CYGWIN, or KMS Tools.
Please see WindowsSFU.htm for important notes on installing SFU & VU. Please see CygwinUwin.htm for important notes on installing CYGWIN & UWIN for the Vancouver Utilities.
The various JCL & COBOL conversions illustrated in this documentation have been tested on SFU, UWIN, & CYGWIN as well as on Unix & Linux. Notes are made where there is any difference.
COBOL compile scripts are different since the MicroFocus COBOL compiler is different (Server Express for Unix/Linux vs Net Express for Windows). The script to compile 1 program at a time is 'mfcbl1' for unix/linux, and 'mfnxcbl1' for SFU/UWIN/CYGWIN (see listings in MVSCOBOL.htm). To compile all programs in a directory the scripts are 'mfcblA' & 'mfnxcblA'.
JCL/scripts use functions (exportgen0, exportgen1, etc.) to simulate mainframe generation files. These complex functions require the Korn shell (1993 version). They will not work under 'bash', which is the default shell for CYGWIN, nor under 'pdksh' (public domain ksh), which is supplied with CYGWIN and aliased as 'ksh'. Alternate versions that work with bash & pdksh were written in 2004.
Please see pages F9 & F10 of the VU installation guide install.htm#F9 to download ksh93 from www.kornshell.com, and add the required files to the CYGWIN /bin.
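Before running converted scripts on CYGWIN, it is worth checking which 'ksh' is actually on the PATH. The `${.sh.version}` parameter exists only in real ksh93; pdksh and bash reject it. A hedged sketch of such a check:

```shell
# If 'ksh' understands ${.sh.version}, it is ksh93; pdksh/bash reject it
if ksh -c 'echo "${.sh.version}"' >/dev/null 2>&1; then
  echo "ksh looks like ksh93 - exportgen* functions should work"
else
  echo "warning: ksh is missing or not ksh93 - exportgen* functions may fail"
fi
```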
Web site: https://www.microfocus.com
Please contact: mailto:clete.werts@microfocus.com
Micro Focus COBOL Server Express version 2.2 or later is required to compile the variable length Indexed file handler into uvcopy & uvsort. This is not required for fixed length Indexed files, which are supported by the supplied D-ISAM handler, compatible with C-ISAM and Micro Focus COBOL IDXFORMAT1.
Web site: https://www.bytedesigns.com
Please contact: mailto:sales@ByteDesigns.com
uvcopy, uvsort, etc are linked with the D-ISAM from Byte Designs. D-ISAM is a file handler for Fixed length Indexed files compatible with C-ISAM and MicroFocus COBOL IDXFORMAT1.
Please contact mailto:liberate@inreach.com
Morada supplies an RPG compiler for Unix/Linux systems. For details please see rpg2unix.htm in the Mainframe Conversion Library of this UV Software web site. The Morada web site is not yet available.
Web site: https://www.uneclipse.com
Please contact: mailto:mchard@uneclipse.com
SPF/UX is a Unix/Linux version of the IBM ISPF (Interactive System Productivity Facility) available on most IBM mainframes.
1A1. Directory Setup on Unix/Linux (see 'dtree' reports)
     - simple directory design for initial testing
     - see ADMjobs.htm#Part_2 for more complex alternate designs
1B1. RUNLIBS, RUNDATA, & CNVDATA environment variables in profiles
     - allow programmers access to testlibs, testdata, & conversion superdirs
     - allow operators access to production libraries & data using the same
       JCL/scripts (no changes required)
We will set up directories for our application libraries and data. We will first present a basic design for a single company with integrated applications. See 'Part_7' for some alternative designs for organizations with multiple companies &/or multiple separate applications on the same machine.
/p1                <-- /p1 file system mount point
:----testlibs      - test-libraries & test-data
:----testdata
/p2                <-- /p2 file system mount point
:----prodlibs      - production-libraries & production-data
:----proddata
/p3                <-- /p3 file system mount point
:----backup        - backup & restore directories
:----restore
/p4                <-- /p4 file system mount point
:----cnvdata
:    :----d0ebc    - data conversion directories
:    :----d1ebc
:    :----d2asc
:    :----cpys
:    :----maps
:    :----pfx1
:    :----pfx2
:    :----pfx3
:    :----tmp
Note that the 'RUNLIBS' & 'RUNDATA' definitions in user profiles determine which libraries & data directories will be used. Using the basic design above, the definitions would be:
export PRODLIBS=/p2/prodlibs   <-- for PRODuction operators
============================
export PRODDATA=/p2/proddata
============================
export TESTLIBS=/p1/testlibs   <-- for programmer conversion & TESTing
============================
export TESTDATA=/p1/testdata
============================
cdl='cd $RUNLIBS'   <-- aliases make it easy to switch between libs & data
cdd='cd $RUNDATA'
cdc='cd $CNVDATA'
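A stub profile then points RUNLIBS/RUNDATA at either the test or the production set. The USERCLASS variable below is purely hypothetical (real stubs are simply edited per user); the paths follow the mount-point layout above.

```shell
# Hypothetical stub-profile fragment: pick libs/data by user class.
# USERCLASS is an assumed variable, not part of the distributed profiles.
USERCLASS=programmer
if [ "$USERCLASS" = "operator" ]; then
  export RUNLIBS=/p2/prodlibs RUNDATA=/p2/proddata   # production set
else
  export RUNLIBS=/p1/testlibs RUNDATA=/p1/testdata   # test set
fi
echo "RUNLIBS=$RUNLIBS RUNDATA=$RUNDATA"
```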
/p2/proddata
:-----ap        <-- directories created for topnodes of filenames
:-----ar
:-----gl
:-----py
:-----jobctl    <-- standard directories shared by all applications
:-----joblog
:-----jobtmp
:-----rpts
:-----sysout
:-----tmp
:-----wrk
These directory illustrations are created by the 'dtree' script and show only directories (no files). But in the following illustrations, I will show a few data files to ensure your complete understanding.
When we convert mainframe data files, we use the top-node as a sub-directory within 'proddata' (path defined by $RUNDATA). We also convert to lower case. Here are a few examples:
AR.CUSTOMER.MASTER   <-- Mainframe file naming conventions
AR.SALES.ITEMS
GL.ACCOUNT.MASTER
GL.ACCOUNT.TRANS
/p2/proddata/ar/customer.master
/p2/proddata/ar/sales.items
/p2/proddata/gl/account.master
/p2/proddata/gl/account.trans
/p2/proddata
:-----ar
:     :-----customer.master
:     :-----sales.items
:-----gl
:     :-----account.master
:     :-----account.trans
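The renaming convention above (lowercase the name, top node becomes a subdirectory) can be sketched with standard shell parameter expansion:

```shell
# Convert mainframe filenames to the unix convention described above
for f in AR.CUSTOMER.MASTER GL.ACCOUNT.TRANS; do
  lc=$(echo "$f" | tr 'A-Z' 'a-z')   # lowercase the whole name
  top=${lc%%.*}                      # top node  -> subdirectory (ar, gl)
  rest=${lc#*.}                      # remainder -> filename within subdir
  echo "$top/$rest"
done
# -> ar/customer.master
# -> gl/account.trans
```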
The following pages will show some alternatives to this basic design, using only these 2 subdirs & 4 files for illustration purposes.
The JCL converter inserts a call to the 'jobset51' function at the beginning of each converted JCL/script, as explained in MVSJCL.doc. If you examine the 'jobset51' function listed on page 3C1 of MVSJCL.htm#3C1, you will see that RUNDATA & RUNLIBS are referenced as follows:
cd $RUNDATA        #change to working dir for production (or test)
#==========
DX=$RUNLIBS/cblx   #setup path for loading programs
#===============
Here are a few lines extracted from one of the demo JCL/scripts, 'jar100.jcl', listed at MVSJCL.htm#1C1. Note line numbers 10, 67, & 70 on the right side.
jobset51                               # call function to setup: directory equates, etc  #10
#=======
......
exportfile NALIST ap/vendor.namelist   #67
#===================================
cobrun $ANIM $DX/cap100                #70
#======================
Since jobset51 changes to $RUNDATA & since all file definitions in the script are relative (no absolute pathnames beginning with /), then the effective full pathname for the data file will be:
exportfile NALIST $RUNDATA/ap/vendor.namelist       <-- filenames relative to $RUNDATA
#============================================
exportfile NALIST /p2/proddata/ap/vendor.namelist   <-- expanded example
#================================================
Also note that the COBOL program is called via 'cobrun $ANIM $DX/cap100' & since jobset51 defines 'DX=$RUNLIBS/cblx', the expansion will be:
cobrun $ANIM $RUNLIBS/cblx/cap100       <-- program names relative to $RUNLIBS
#================================
cobrun $ANIM /p2/prodlibs/cblx/cap100   <-- expanded example
#====================================
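These expansions can be checked directly in the shell, using the example paths from the basic design above:

```shell
# How the relative names expand once jobset51 has set the environment
RUNDATA=/p2/proddata
RUNLIBS=/p2/prodlibs
DX=$RUNLIBS/cblx                      # set by jobset51
echo "$RUNDATA/ap/vendor.namelist"    # effective data file path
echo "$DX/cap100"                     # effective program path
# -> /p2/proddata/ap/vendor.namelist
# -> /p2/prodlibs/cblx/cap100
```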
Since you probably studied the JCL conversion prior to the data file conversion, we will first review the test/demo libraries used in MVSJCL.htm.
The next page will illustrate just the essential libraries required for generating & executing the data file conversion jobs.
The single '*' indicates libraries with supplied test/demo files. The double '**' indicates libraries required for data file conversion job generation & execution. These are isolated on the next page with a few additions (pfx1, pfx2, pfx3).
/home/mvstest             <-- setup user 'mvstest' & copy subdirs from uvadm/mvstest
:-----testlibs
:     :--*--cbl0          - COBOL programs ('*' means files present)
:     :-----cbl1          - cleaned up, cols 1-6 & 73-80 cleared, etc
:     :-----cbl2          - cnvMF5 converts mainframe COBOL to Micro Focus COBOL
:     :-----cbls          - copy here (standard source library) before compiling
:     :-----cblst         - cobol source listings from compiles
:     :-----cblx          - compiled COBOL programs (.int's)
:     :--**-cpy0          - for COBOL copybooks
:     :-----cpy1          - cleaned up, cols 1-6 & 73-80 cleared, etc
:     :-----cpy2          - cnvMF5 converts mainframe COBOL to Micro Focus COBOL
:     :-----cpys          - copy here (standard copybook library)
:     :-----ftp           - subdir for FTP transfers
:     :--**-ctl           - conversion control files (jclunixop5, datafiles51)
:     :-----include1      - provided but test/demo does not use
:     :--*--jcl0          - test/demo JCLs supplied
:     :-----jcl1          - intermediate conversion 73-80 cleared
:     :-----jcl2          - PROCs expanded from procs
:     :-----jcl3          - JCLs converted to Korn shell scripts
:     :-----jcls          - copy here manually 1 by 1 during test/debug
:     :--**-maps          - 'cobmaps' record layouts generated from copybooks
:     :-----pf            - uvcopy jobs to replace utilities (easytrieve, etc)
:     :--*--proc0         - test/demo PROCs supplied
:     :-----procs         - will be merged with jcl1, output to jcl2
:     :-----sf            - for misc scripts you may wish to write
:     :--*--sfun          - korn shell functions (jobset5, logmsg, etc)
:     :-----tmp           - tmp subdir used by various conversions
:     :-----xref          - cross-references (see MVSJCL.htm#Part_9)
/home/mvstest
:-----cnvdata             <-- Data conversion superdir
:     :-----d0ebc         <-- EBCDIC datafiles FTP'd from mainframe
:     :     :---AR.DATA.FILE1            - UPPER case
:     :     :---GL.DATA.FILE2.G9999V00   - GDG filename convention on mainframe
:     :     :---GL.DATA.FILE3
:     :-----d1ebc         <-- EBCDIC files renamed to unix standards
:     :     :---ar.data.file1            - lowercase
:     :     :---gl.data.file2_000001     - GDG filename convention for unix
:     :     :---gl.data.file3
:     :-----d2asc         <-- ASCII converted datafiles
:     :     :---ar.data.file1
:     :     :---gl.data.file2_000001
:     :     :---gl.data.file3
:     :-----ctl           - conversion control files
:     :-----cpys          - COBOL copybooks
:     :-----maps          - 'cobmaps' record layouts generated from copybooks
:     :-----pfx1          - uvcopy jobs to convert EBCDIC to ASCII
:                         - generated from COBOL copybooks by utility 'uvdata51'
:                         - do not have actual datafile names or indexed keys
:                           (since this info not in copybooks)
:     :-----pfx2          - uvcopy jobs with actual datafile names & indexed keys
:                         - encoded by utility 'uvdata52' using ctl/datacnv54
:                         - datacnv54 info extracted from LISTCAT by catgut1 & catgut2
:     :-----pfx3          - uvcopy jobs copied from pfx2 & modified for various reasons
:                         - for Multi Record Type files (insert code to test types)
:                         - to split Multi Record Types to separate files, etc
:                         - copying to pfx3 protects your manual change code from
:                           being overwritten if jobs are regenerated
:                         - will execute the jobs from pfx3
/home/mvstest/testdata/   <-- test data file superdir ($RUNDATA)
:-----ar                  <-- topnodes/subdirs (ar, gl for illustrations)
:     :---data.file1
:     :---data.file2      <-- subdirs/datafiles copied here for testing
:     :---...etc...       - refreshed whenever required
:-----gl
:     :---data.file1
:     :---data.file2_000001
:     :---...etc...
:-----xx                  <-- variable no of subdirs depending on topnodes
:     :---data.file1
:     :---data.file2
:     :---...etc...
:-----ftp
:-----jobtmp
:-----reports             <-- common subdirs used by all topnode applications
:-----tape
:-----wrk
:-----tmp
The above is a simplified design for learning & initial testing. See Part_7 for alternate directory designs that might be more suited to production environments.
This document is intended to make you aware of the many conversion, testing, & debugging aids supplied by the Vancouver Utilities that should help you convert mainframes to Unix/Linux.
I believe you will find many of these aids essential to the success of your conversion, testing, & debugging.
These aids were originally in several documents (MVSJCL, MVSCOBOL, MVSDATA, etc.). In January 2008, a separate document (CNVaids) was created to avoid duplication; the original documents now link to CNVaids.htm.
Many of these aids are illustrated using supplied test/demo files & you can run many of these once you have installed the Vancouver Utilities. These 'practice sessions' will help you when you start working on your own conversions of JCL, COBOL,& DATA files.
The intention is to give you a short introduction to the various utilities available, and then give you a link to the complete documentation which could be in various other books.
CNVaids.htm#1A1 Profiles (stub_profile, common_profile)
CNVaids.htm#1B1 aliases
CNVaids.htm#1C1 Rename scripts
CNVaids.htm#1D1 dtree
CNVaids.htm#1E1 llr
CNVaids.htm#1F1 statdir1
CNVaids.htm#1G1 diff
CNVaids.htm#1G2 alldiff
CNVaids.htm#1H1 grep
CNVaids.htm#1I1 dos2unix
CNVaids.htm#1I1 unix2dos
CNVaids.htm#1J1 Vancouver Utility backup scripts scheduled by cron
CNVaids.htm#2A1 uvlp__
CNVaids.htm#2B1 listall
CNVaids.htm#2C1 spreadA
CNVaids.htm#2D1 cleanup
CNVaids.htm#2E1 verifytext
CNVaids.htm#2F1 grepsum1
CNVaids.htm#2G1 scan/replace
CNVaids.htm#3A1 cross-refs summary
CNVaids.htm#3B1 - xrefall generate ALL Cross-Ref reports
CNVaids.htm#3B2 - xref... generate any 1 Cross-Ref report
CNVaids.htm#3C1 - xcobcopy1  list all COPYBOOKS in any 1 PROGRAM
CNVaids.htm#3C1 - xcobcopy2  crossref all PROGRAMS copying any 1 COPYBOOK
CNVaids.htm#3C2 - xcobcall1  list all CALLED-PROGRAMs in each PROGRAM
CNVaids.htm#3C2 - xcobcall2  crossref all PROGRAMS calling any 1 CALLED-PROGRAM
CNVaids.htm#3C3 - xcobfile2  crossref all PROGRAMS using each external file
CNVaids.htm#3C4 - xcobsql1   list all SQL Includes in any 1 PROGRAM
CNVaids.htm#3C4 - xcobsql2   crossref all PROGRAMS using any 1 SQL Include
CNVaids.htm#3D1 - xkshfile1  list all DATAFILES used in any 1 ksh SCRIPT
CNVaids.htm#3D1 - xkshfile2  crossref all SCRIPTS using any 1 DATAFILE
CNVaids.htm#3E1 - xkshprog1  list all PROGRAMs executed in any 1 ksh SCRIPT
CNVaids.htm#3E1 - xkshprog2  crossref all SCRIPTS executing any 1 PROGRAM
statallmvs1, statallvse1, statmvsjcl1, statvsejcl1, statksh1
CNVaids.htm#4C1 - statlogin1
CNVaids.htm#4D1 - table2
CNVaids.htm#4D2 - table3d
CNVaids.htm#4E1 - tblexts1
CNVaids.htm#4F1 - recsizes1
CNVaids.htm#3A1 - COBOL cross-refs documented with JCL/script cross-refs
CNVaids.htm#5B1 - statcbl1
CNVaids.htm#5C1 - cobfil51
CNVaids.htm#5C1 - Animation
CNVaids.htm#5D1 - cobmap1
CNVaids.htm#5F1 - Micro Focus COBOL 'file status' error codes
CNVaids.htm#5F2 - Micro Focus COBOL 'run time' error codes
https://supportline.microfocus.com/Documentation/books/sx40sp1/smpubb.htm
CNVaids.htm#6A1 - jobflow51
CNVaids.htm#6B1 - joblog
CNVaids.htm#6C1 - separate datafiles
CNVaids.htm#6D1 - lastgenr
CNVaids.htm#6E1 - getEtime
CNVaids.htm#7A1 uvhd
CNVaids.htm#7B1 uvhdcob
CNVaids.htm#7B2 uvhdc
CNVaids.htm#7C1 uvcp
CNVaids.htm#7D1 uvcpF2L
CNVaids.htm#7D2 uvcpL2F
CNVaids.htm#7E1 CMPjobs
CNVaids.htm#7F1 listrec2
CNVaids.htm#7F2 listhex2
CNVaids.htm#7G1 gentest1
CNVaids.htm#7H1 vtocr1
- create VTOC report for files converted from mainframe
- provides record counts, indexed keys, etc
  (information not displayed by the usual unix/linux tools)
- see sample report & operating instructions in MVSDATA.htm
CNVaids.htm#7I1 uvsort
CNVaids.htm#7J1 uxcp
CNVaids.htm#Part_8 pre-programmed jobs (written in uvcopy code)
CNVaids.htm#8B1 - tabfix1
CNVaids.htm#8C1 - tolower
CNVaids.htm#8D1 - toascii
CNVaids.htm#8E1 - scand2
CNVaids.htm#8F1 - acum1
CNVaids.htm#8G1 - cmrpt1
CNVaids.htm#8H1 - selectf1
CNVaids.htm#8J1 - splitjclproc1
CNVaids.htm#8K1 - splitcblcpy1
CNVaids.htm#9A1 vi editor tutorial for users new to unix/linux
CNVaids.htm#9B1 Work in your working directory & address files thru subdirs
CNVaids.htm#9B2 setup a 'tmp/' subdir in your working directories
3A1. Conversion support files - Overview
3A2. Vancouver Utilities 'uvadm' directories & contents
3B1. Profiles - profiles & related files in /home/uvadm/env/...
     - listed beginning at ADMjobs.htm#1C1
     stub_profile   - copied to homedirs, renamed as .bash_profile or .profile
                    - defines RUNLIBS/RUNDATA depending on programmer/operator
     common_profile - defines PATHs using RUNLIBS/RUNDATA
     bashrc/kshrc   - required if console logging, to define aliases
3B2. modifying stub_profiles for your site
3C0. Functions used in converted JCL/scripts
     - jobset51, logmsg1, stepctl51, exportfile, exportgen0, exportgen1
     - only jobset51 is listed here since it is vital to datafile access
     - see the other functions listed at MVSJCL.htm#Part_3
       & MVSJCL.htm#Part_5 for the GDG functions
3C1. sfun/jobset51
     - a call to this function is inserted at the beginning of all scripts
       (via the ctl/jclunixop51 control file)
     - performs a change directory to $RUNDATA
     - defines the work & print subdirs ($DW & $DP)
     - ensures the $DW/nullfile is present
     - defines the default value of $ANIM options to COBOL programs
     - jobset51 is listed here since it references the RUNLIBS & RUNDATA
       environment variables defined in the profile
3D1. Logins for programmers & production operators
     - ensure programmers & operators are in the same group (apps) to share
       access to the common sets of libraries & datafiles
     - ensure umask 002 so directories are 775 & files are 664
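The umask 002 recommendation can be verified directly: new directories come out 775 and new files 664, both group-writable. A minimal sketch (the /tmp name is just for the demo):

```shell
# umask 002: dirs default to 777 & ~002 = 775, files to 666 & ~002 = 664
umask 002
d=/tmp/umaskdemo.$$
mkdir "$d"
touch "$d/f"
ls -ld "$d" "$d/f"    # drwxrwxr-x ... / -rw-rw-r-- ...
rm -r "$d"
```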
There are several 'conversion support files' that you should be aware of and that you may possibly need to modify to obtain optimum conversions.
I have categorized them into the groups shown above (profiles, functions, GDG functions, control files, scripts, uvcopy jobs).
Here in Part 3 we will list or give references to the more important control files that you may have to modify to optimize your conversion.
In Part_9 we will list or give references to the scripts, uvcopy jobs,& C programs, that you should not have to modify. If you think you need changes to these, please contact UV Software.
The next page illustrates the contents of the Vancouver Utilities distribution & identifies the subdirectories housing these groups.
There are certainly other important subdirs & file groups (such as src & bin, where the JCL converter is found), but you should not have to modify them.
/home/uvadm
:-----batDOS
:-----bin
:-----cobs
:-----ctl            <-- control files used in JCL conversion
:-----dat1
:-----doc
:-----dochtml
:-----env            <-- environmental profiles ADMjobs.htm#1C1
:-----hdr
:-----htmlcode
:-----lib
:-----pf             <-- uvcopy jobs used by JCL conversion (& much more)
:     :-----adm
:     :-----demo
:     :-----IBM
:     :-----util
:-----sf             <-- scripts used by JCL conversion (& much more)
:     :-----adm
:     :-----demo
:     :-----IBM
:     :-----util
:-----sfun           <-- functions for JCL/scripts & GDG files
:-----src
:-----srcf
:-----tf
:-----tmp
:-----mvstest
:     :-----testlibs
:     :     :-----archive
:     :     :-----cbl0
:     :     :-----cpy0
:     :     :-----Csub
:     :     :-----ctl
:     :     :-----jcl0
:     :     :-----pf
:     :     :-----proc0
:     :     :-----sf
:     :     :-----sfun
:     :-----tmp
:     :-----xref
:     :-----testdata
:     :     :-----ar, gl
:     :     :-----jobctl, joblog, jobtmp
:     :     :-----rpts, sysout, tmp, wrk
/home/uvadm/env            <-- profiles provided here
:-----stub_profile         - copy/rename to .profile (ksh) or .bash_profile (bash)
:                          - defines RUNLIBS/RUNDATA for programmers & operators
:-----common_profile       - common profile (called by stub_profile)
:                            defines PATHs etc using $RUNLIBS/$RUNDATA
:-----root_profile         - profile for root, copy to /root/.bash_profile (RedHat)
:                            to access Vancouver Utility scripts & uvcopy jobs

/home/appsadm/env          <-- setup user 'appsadm' & copy from /home/uvadm/env/*
:-----stub_profile         - customize & copy to homedirs .profile or .bash_profile
:-----common_profile       - common profile (called by stub_profile)
Mainframe conversion sites should setup an application administrator userid 'appsadm', copy /home/uvadm/env/* to /home/appsadm/env,& customize profiles there depending on the locations of their libraries & data.
Do NOT customize profiles in /home/uvadm/env/... because they would be overwritten when a new version of Vancouver Utilities is installed.
We recommend the concept of 'stub' & 'common' profiles. The shell profile in each user's homedir is a 'stub' that calls the 'common_profile' stored in /home/appsadm/env/...
Note that stub profiles must call 'common_profile' using '.' (dot execution), which means the 'export's made in the common_profile will still be effective on return to the users profile.
This system is a big advantage for any site with multiple users: the sysadmin can update common_profile once in one place & the changes are effective for all users.
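The reason the stub must use '.' (dot execution) is that the sourced file runs in the current shell, so its exports persist on return. A minimal demonstration, with a temp file standing in for common_profile:

```shell
# Dot execution runs the file in the current shell -- exports survive
tmp=/tmp/common_profile.$$
echo 'export COMMON_SET=yes' > "$tmp"
. "$tmp"                        # same shell; COMMON_SET remains set
echo "COMMON_SET=$COMMON_SET"   # -> COMMON_SET=yes
rm "$tmp"
```

Running the file as a child process instead (`sh "$tmp"`) would discard the exports when the child exits.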
See more explanations at: https://www.uvsoftware.ca/admjobs.htm#1B2
ADMjobs.htm#1C1 - stub_profile
  - distributed in /home/uvadm/env/...
  - copy (to user homedirs) & rename depending on the desired shell
    (.bash_profile for bash, .profile for ksh)
  - modify RUNLIBS/RUNDATA differently for programmers or operators
    define RUNLIBS as testlibs for programmers OR prodlibs for operators
    define RUNDATA as testdata for programmers OR proddata for operators
  - contains user aliases, preferences, console logging on or off
  - modify TERM & 'stty erase' character depending on user's terminal
  - modify UVLPDEST to define a laser printer near the user
  - calls common_profile
ADMjobs.htm#1C2 - common_profile, called by stub_profile
  - defines search PATHs to libraries & data based on $RUNLIBS & $RUNDATA
  - distributed in /home/uvadm/env/...
  - you should copy to /home/appsadm/env/ & customize there
    (to avoid overwriting when new versions of VU are installed)
  - contains most items, allows updates in 1 place for all
  - modify TERM & 'stty erase' character depending on user's terminal
    (distribution has TERM=linux & stty erase '^?')
  - the common_profile should be correct for the majority of users
    & the stub profiles can be modified for the exceptions
  - change 'COBDIR' depending on where you have installed Micro Focus COBOL
ADMjobs.htm#1C5 - bashrc - 'rc file' distributed in /home/uvadm/env/...
  - copy (to user homedirs) & rename depending on the desired shell
    (.bashrc for bash, .kshrc for ksh)
  - master version supplied without the '.' for visibility
  - required if you invoke another shell level (console logging script)
  - carries aliases & umask which get lost on another shell level
  - you should customize & store in /home/appsadm/env/...
export TERM=linux       # TERM - modify depending on your terminal
#================       #        (vt100,xterm,at386,ansi,etc)

stty erase '^?'         # erase char - modify depending on your terminal
#==============         #   '^?' for linux/at386, '^H' for vt100,ansi,xterm
export UVLPDEST="-dlp0"   <-- change 'lp0' to your laser printer
=======================
export COBDIR=/home/cobadm/cobol   <-- change for your site
================================
Please see ADMjobs.htm re setting up profiles in /home/appsadm/env. ADMjobs.htm recommends setting up an 'appsadm' account to store the profiles so they do not get overwritten when a new version of Vancouver Utilities is installed.
ADMjobs.htm recommends copying /home/uvadm/env/* to /home/appsadm/env/... Then make any site specific modifications in appsadm/env. One significant change is to modify the stub profiles to call the common profile from appsadm not uvadm. See the details in ADMjobs.htm.
You can run the test/demo JCL conversions & executions in Part_1 without setting up appsadm, but you definitely should set up appsadm & modify the profiles before starting your own JCL conversions.
The most important thing would be to modify RUNLIBS & RUNDATA depending on where you decide to store your Libraries & Data for Testing & Production.
3C1. | jobset51 - setup job environment for scripts converted from IBM MVS JCL |
- JCL converter inserts a call to this function at begin script | |
- changes directory to $RUNDATA, sets up subdirs, etc |
3C3. | logmsg1 - display console messages with date:time:JOBID2: stamps |
- this function used instead of 'echo's
3C5. | exportfile - function to export Logical-FileName=Physical-FileName |
- for the following COBOL program (select external LFN)
- also displays filename for the console log (using logmsg1)
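The exportfile function body is not listed in this doc; as a rough, hypothetical sketch of the idea (NOT the supplied VU code -- the names & message format here are illustrative):

```shell
# hypothetical sketch of an 'exportfile'-style function:
# export Logical-FileName=Physical-FileName for the next COBOL program
# (SELECT ... EXTERNAL resolves the LFN from the environment)
# & echo the assignment for the console log (the real one uses logmsg1)
exportfile() {
    lfn=$1 pfn=$2
    export "$lfn=$pfn"
    # log with a time stamp, e.g. hh:mm:ss:JOBID2: LFN=PFN
    echo "$(date +%H:%M:%S):${JOBID2:-NOJOB}: $lfn=$pfn"
}

JOBID2=PAY010                              # sample job id
exportfile CUSTMAS ar/customer.master      # sample LFN/PFN pair
```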
5J0. | exportgen_ functions to emulate GDG files on unix/linux |
5J1. | exportgen0 - get the latest generation for input |
5J2. | exportgen1 - get the next generation for output |
5J3. | exportgenall - get all generations for input |
# jobset51 - setup job environment for UNIX scripts converted from MVS JCL
#          - this KORN shell function stored in $RUNLIBS/sfun
#          - see more doc at: www.uvsoftware.ca/mvsjcl.htm
#          - see alternative code at end to allow concurrent job scheduling
#            (could uncomment & replace code to setup JOBTMP & SYSOUT subdirs)
#
# function-name here must match physical filename (s/b same as line 1 above)
function jobset51 {

# Function 'jobset51' is called at the beginning of each JCL/script
# - inserted by JCL converter from the ctl/jclunixop5 control file
# - RUNLIBS & RUNDATA are exported in user profiles & are VITAL
#   - to define Libraries & Data dirs for 'test' or 'production'
#
# export RUNLIBS=/home/mvstest/testlibs   #<-- for MVSJCL test/demo libs
# export RUNDATA=/home/mvstest/testdata   #<-- for MVSJCL conversion testdata
#
# export RUNLIBS=/p2/prodlibs             #<-- example for production libraries
# export RUNDATA=/p2/proddata             #<-- example for production data
#
# ** change history **
#
#Jun08/07 - export all symbols (RLX,MSTR,TAPE,etc) in case needed by uvcopy job
#         - such as getgdg51, getgdg52, gdgctlupok1, etc
#           called by exportgen0, exportgen1, jobend51
#May23/07 - add code to support exportgen1 preserve gen#s if job abends
#         - create $JTMP/subdirs to hold exportgen1 new files until Normal EOJ
#         - at Normal EOJ, move $JTMP/subdir/files back to $RUNDATA/subdirs
#         - see below where $JTMP/subdirs are created (add your subdirs ?)
#Jul17/06 - jobset51 enhanced similar to jobset61 in UVjobs61.pm (Perl version)
#         - to shift init code (restart etc) from JCL/script to jobset51
#
# ** coding for function jobset51 { ... **
#
cd $RUNDATA          #change to working dir for production (or test)
cdstatus=$?          # capture status of cd $RUNDATA
if ((cdstatus))      # ERR if cdstatus not zero (OK)
   then echo "cd \$RUNDATA ($RUNDATA) failed in jobset51"
        echo "- investigate, RUNDATA definition in profiles"
        echo "- enter to exit"; read reply; exit 91; fi
#
# cd $RUNDATA means all files are referenced relative to $RUNDATA
# - allows switching between production & test files by changing $RUNDATA
# - JCL converter has converted HLQs of DSNs to subdirs within $RUNDATA
# - High Level Qualifiers might represent applications (ar,gl,mstr,tape,etc)
#
export RLX=$RUNLIBS/cblx   #setup path for loading COBOL programs
# - COBOL programs are found via $RLX/progname in JCL/scripts
# - JCL/scripts are found via $RUNLIBS/jcls in the profile PATH
# - this allows programs to have same names as JCL/scripts
export RLJ=$RUNLIBS/java   #setup path for loading JAVA programs
#
# define subdirs referenced in JCL by $symbols
export AR=ar        # some users prefer $symbols for data subdirs
export GL=gl        # gl subdir test files to demo GDG system
export MSTR=mstr    # alternative to top.node for subdirs/files
export TAPE=tape    # tape files (on mainframe, now disc on unix)
export PRMS=parms   # control card library files (members)
export WRK=wrk      # interstep temp/work files or see $JTMP
#
# ensure $WRK/nullfile present
if [ ! -f $WRK/nullfile ]; then >$WRK/nullfile; fi
#
if [ -z "$ANIM" ]; then ANIM=-F; fi
# 'ANIM=-F' inhibits non-numeric field checks (could change to +F ?)
# - cobol programs are called via ---> cobrun $ANIM $RLX/progname <---
#
# make subdir for inter-step work files & instream data files
#   $RUNDATA/$JTMP/tempworkfilename (where JTMP=jobtmp/JOBNAME)
export JTMP=jobtmp/${JOBID2}
if [[ ! -d $JTMP ]]; then mkdir $JTMP; fi
#
# make subdir for SYSOUT files (or any file w/o DSN=...)
#   $RUNDATA/$SYOT/step#_SYSOUT (where SYOT=sysout/JOBNAME)
export SYOT=sysout/${JOBID2}
if [[ ! -d $SYOT ]]; then mkdir $SYOT; fi
#
# $JTMP & $SYOT are vital - ensure created successfully else exit
if [[ -d $JTMP && -d $SYOT ]]; then :
   else echo "$JTMP &/or $SYOT failed creation (in jobset51)"
        echo "- investigate (permissions?, JTMP/SYOT dirnames changed?)"
        echo "- enter to exit"; read reply; exit 92; fi
#
#Note - $JTMP & $SYOT subdirs (for jobtmp & SYSOUT) may be date/time stamped
#     - if multiple copies of same job must run at same time
#     - see alternative coding at the end of this file
#     - included as #comments that you could activate & replace above code
#
#Note - in code above we create $JTMP & $SYOT if not already existing
#     - we also want to clean out any prior run files if they do exist
#     - BUT not on a 'RESTART', see following line of code below (near end)
#       ---> else rm -f $JTMP/*; rm -f $SYOT/*; fi <---
#
if [[ -z "$RUNDATE" ]]; then RUNDATE=$(date +%Y%m%d); fi; export RUNDATE
logmsg1 "Begin Job=$JOBID2"
logmsg1 "$scriptpath"
logmsg1 "Arguments: $args"
logmsg1 "RUNLIBS=$RUNLIBS"
logmsg1 "RUNDATA=$RUNDATA"
logmsg1 "JTMP=$JTMP SYOT=$SYOT"
logmsg1 "RUNDATE=$RUNDATE"
export JSTEP=S0000 XSTEP=0
integer JCC=0 SCC=0 LCC=0;
if [[ -n "$step" ]]; then STEP="$step"; fi
if [[ -z "$STEP" ]]; then export STEP=S0000; fi
if [[ $STEP != S[0-9][0-9][0-9][0-9] ]]
   then logmsg1 "STEP=$STEP invalid"; exit 91; fi
#
#May23/07 - code added to support exportgen1 preserve gen#s if job abends
# - ensure $JTMP/subdirs exist to hold exportgen1 new files until Normal EOJ
# - at Normal EOJ, will move $JTMP/subdir/files back to $RUNDATA/subdirs
# - subdirs here are for mvstest demos at www.uvsoftware.ca/mvsjcl.htm#1A1
#Note - need to create user set of subdirs here to avoid err on file test
#     - and below after 'rm -fr $JTMP/*' for non-restart
if [[ ! -d $JTMP/ar ]]; then mkdir $JTMP/ar; fi
if [[ ! -d $JTMP/gl ]]; then mkdir $JTMP/gl; fi
if [[ ! -d $JTMP/mstr ]]; then mkdir $JTMP/mstr; fi
if [[ ! -d $JTMP/tape ]]; then mkdir $JTMP/tape; fi
#
#May23/07 - should be no files in exportgen1 subdirs here at begin job
Far=$(ls $JTMP/ar); Fgl=$(ls $JTMP/gl);
Fmstr=$(ls $JTMP/mstr); Ftape=$(ls $JTMP/tape);
Fall="$Far$Fgl$Fmstr$Ftape"
if [[ -n "$Fall" ]]; then
   echo "files exist in jobtmp/GDG subdirs (from prior Abterm ?)"
   echo "----- will display&prompt for move back to RUNDATA/subdirs OR erase?"
   echo "$Fall"
   echo "------ prior run (abterm) GDG files displayed above"
   #
   # prompt y/n to move back or not
   reply=x
   until [[ "$reply" == "y" || "$reply" == "n" ]]
   do echo "jobtmp/subdir/files will always be removed before run starts"
      echo "reply y - to move back to RUNDATA/subdirs - for restart ?"
      echo "reply n - to NOT move back (will erase) - to rerun from beginning ?"
      echo "OR - may need to kill job & investigate ?"
      read reply
   done
   # if reply y - move files back
   if [[ "$reply" == "y" ]]; then
      echo "moving $JTMP/... files back to $RUNDATA/..."
      if [[ -n "$Far" ]];   then ls -l $JTMP/ar;   mv -i $JTMP/ar/* ar; fi
      if [[ -n "$Fgl" ]];   then ls -l $JTMP/gl;   mv -i $JTMP/gl/* gl; fi
      if [[ -n "$Fmstr" ]]; then ls -l $JTMP/mstr; mv -i $JTMP/mstr/* mstr; fi
      if [[ -n "$Ftape" ]]; then ls -l $JTMP/tape; mv -i $JTMP/tape/* tape; fi
   fi
fi
#
if [[ $STEP != S0000 ]]
   then logmsg1 "**restarting** at STEP=$STEP"
   else rm -fr $JTMP/*; rm -fr $SYOT/*;
        # ensure $JTMP/subdirs exist to hold exportgen1 new files until Normal EOJ
        # at Normal EOJ, will move $JTMP/subdir/files back to $RUNDATA/subdirs
        # subdirs here are for mvstest demos at www.uvsoftware.ca/mvsjcl.htm#1A1
        #Note - need to create user set of subdirs in 2 places, above for file test
        #     - and here after 'rm -fr $JTMP/*' for non-restart
        if [[ ! -d $JTMP/ar ]]; then mkdir $JTMP/ar; fi
        if [[ ! -d $JTMP/gl ]]; then mkdir $JTMP/gl; fi
        if [[ ! -d $JTMP/mstr ]]; then mkdir $JTMP/mstr; fi
        if [[ ! -d $JTMP/tape ]]; then mkdir $JTMP/tape; fi
fi
alias goto="<<${STEP}=A"
uvtime W1D0 $JTMP/jobbgn
#
return 0
}
#---------------------------------------------------------------------------
#Note - $JTMP & $SYOT subdirs (for jobtmp & SYSOUT) may be date/time stamped
#     - if multiple copies of same job must run at same time
#     Here is alternative coding (as #comments) at the end of the file
#     - that you could uncomment/activate & replace corresponding code above
#
## version#1 (for JOBTMPs) - date/time stamped (allows jobs 1 second apart)
## make subdir for inter-step work files & instream data files
##   $RUNDATA/$JTMP/tempworkfilename_yymmddHHMMSS (where JTMP=jobtmp/JOBNAME)
#export JTMP=jobtmp/${JOBID2}_$(date +%y%m%d%H%M%S)
#if [[ ! -d $JTMP ]]; then mkdir $JTMP;
#   else echo "jobset51 aborted to not share jobtmp subdir $JTMP, please retry"
#        exit 5; fi
#
## version#1 (for SYSOUTs) - date/time stamped (allows jobs 1 second apart)
## make subdir for SYSOUT files (or any file w/o DSN=...)
##   $RUNDATA/$SYOT/step#_SYSOUT_yymmddHHMMSS (where SYOT=sysout/JOBNAME)
#export SYOT=sysout/${JOBID2}_$(date +%y%m%d%H%M%S)
#if [[ ! -d $SYOT ]]; then mkdir $SYOT;
#   else echo "jobset51 aborted to not share sysout subdir $SYOT, please retry"
#        exit 5; fi
#
#---------------------------------------------------------------------------
## version#2 (for JOBTMPs) - date/time stamped (incrementing if same second)
## --------- allows for multiples of same job starting at the same second
## make date stamped subdir for inter-step work files & instream data files
##   $RUNDATA/jobtmp/JOBNAME_datetime/tempworkfiles
#export JTMP=jobtmp/${JOBID2}_$(date +%y%m%d%H%M%S)
#if [[ ! -d $JTMP ]]; then mkdir $JTMP;
#   else ymdHM=$(date +%y%m%d%H%M); SS=61;
#        until [[ ! -d jobtmp/${JOBID2}_$ymdHM$SS ]]
#        do ((SS++)); done
#        export JTMP=jobtmp/${JOBID2}_$ymdHM$SS
#        mkdir $JTMP
#fi
#
## version#2 (for SYSOUTs) - date/time stamped (incrementing if same second)
## --------- allows for multiples of same job starting at the same second
## make date stamped subdir for SYSOUT files (or any file w/o DSN=...)
##   $RUNDATA/sysout/JOBNAME_datetime/sysoutfiles
#export SYOT=sysout/${JOBID2}_$(date +%y%m%d%H%M%S)
#if [[ ! -d $SYOT ]]; then mkdir $SYOT;
#   else ymdHM=$(date +%y%m%d%H%M); SS=61;
#        until [[ ! -d sysout/${JOBID2}_$ymdHM$SS ]]
#        do ((SS++)); done
#        export SYOT=sysout/${JOBID2}_$ymdHM$SS
#        mkdir $SYOT
#fi
#---------------------------------------------------------------------------
export RUNLIBS=/home/mvstest/testlibs/
export RUNDATA=/home/mvstest/testdata/
Also note the 'cdl' & 'cdd' aliases to easily change to the libraries & data directories matching their current RUNLIBS & RUNDATA.
alias cdl='cd $RUNLIBS'
alias cdd='cd $RUNDATA'
For batch production, operators might use system ID logins (apay,arcv,etc) in order to access the appropriate set of libraries & data files. At this point the conversion programmers would use only their personal logins & reserve the system ID logins for production. This allows the apay,arcv,etc console logs to reflect production.
Usually 1 login per system is sufficient for running production jobs, since the Unix/Linux system is so much faster than the mainframe. One operator can of course run multiple systems concurrently on multiple login screens. An operator can't run a job on the wrong login, since the JCL would not be found in the PATH ($RUNLIBS/jcls in the profile). Even if you had the same jobname in different systems, the job would not run, because it would not find the data files, which depend on RUNDATA in the profile.
Note the profile for apay,arcv,ordr,etc includes code to prevent operators from logging in more than once to any 1 system. This maintains the integrity of the console log files. (Actually it can be allowed if you really need it but it is not recommended)
Another login is useful for operators to modify JCL, check print files, etc. But this 2nd login should not be a duplicate of the system login; rather it can be their own personal login. Be sure to modify their personal login profiles to disable the PATH to the jcls (remove $RUNLIBS/jcls). This prevents them from running jobs under their personal logins, which might not have RUNLIBS & RUNDATA set correctly.
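A sketch of that profile change for a personal login (the exact PATH editing is up to your site; this version simply filters out any PATH entry ending in /jcls, the jcls dirname used by the profiles in this doc):

```shell
# personal-login profile fragment: drop $RUNLIBS/jcls from PATH so
# production JCL/scripts cannot be launched from this login
PATH=$(printf '%s\n' "$PATH" | tr ':' '\n' | grep -v '/jcls$' | paste -sd: -)
export PATH
```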
Part 4 procedures generate uvcopy conversion jobs from ALL copybooks to convert ALL data files in a directory. The actual datafilenames (vs copybooknames) are inserted using a control file created by extracting all datafilenames from all JCL & appending file info from LISTCAT (recsize, filetype, indexed keys).
4A1. | overview Converting mainframe DATA files to Unix/Linux |
4B1. | Creating copybook record layouts to generate conversion jobs |
4B2. | sample 'cobmap' record layout generated from COBOL copybook |
4B3. | sample 'uvcopy job' (generated from 'cobmap') to convert datafile |
4B4. | sample cobmap for a file with redefined records (multiple record types) |
4B5. | sample uvcopy job for a file with multiple record types |
- requires editing to insert code to test 'record type' & skip to | |
appropriate bank of auto-generated instructions
4C1. | DATA conversion directories required |
4D1. | DATA filename conventions for JCL/scripts |
4E1. | Extracting datafile info from Mainframe LISTCAT report |
4F1. | Six sources for datafile info (record-sizes, file types, etc) |
- samples of six source control file & combined Data control file |
4G1. | Directories Required for Data Conversion |
4G2. | FTP mainframe files, copy/rename to unix/script standards |
4G3. | Generate components required for Data Conversion control file |
4G4. | Combining components into DATA conversion control file |
4G5. | Generate All jobs to convert ALL Data Files |
4G6. | Rename jobs for datafiles & insert on fili1=... & filo1=... (vs copybooks) |
4G7. | Executing All jobs to convert All datafiles |
4G8. | Correcting & Re-Executing data conversion jobs |
4G9. | Relevant subdirs for Converting & copying to $TESTDATA subdirs |
4H1. | Intro to scripts to generate All data conversion jobs |
4H2. | Overview of scripts to generate conversion jobs |
4H3. | Preparing to run gencnv5A & gencnv5B |
4H4. | Running gencnv5A/gencnv5B to generate conversion jobs |
4H5. | copy generated jobs to pfx3 & modify ? |
4H6. | Executing All data conversion jobs |
Copying converted data files to $TESTDATA |
This section will illustrate how to convert mainframe EBCDIC files to ASCII files, preserving packed/binary fields and correcting signed fields.
Mainframe EBCDIC data files are probably from COBOL applications, and are probably flat sequential or indexed files, fixed length records without linefeeds.
These conversions are driven by the COBOL 'copybooks' (record layouts). The copy-books are first converted into 'cobmaps' which have field start, end, length,& type on the right side (see COBaids1.doc).
The heart of this process is 'uvdata51', a uvcopy utility which reads the 'cobmap's,& generates a uvcopy job for each file to be converted.
The 'uvcopy jobs' when 1st created from the copybook/cobmaps will use the copybook name for the I/O data file names, and will not have any indexed keys specified (since that information is not available in the copybook).
The data filenames & indexed keys can be automatically inserted (by uvdata52) from the 'LISTCAT' information file transferred from the mainframe.
The 1 manual job required is to edit the data conversion control file, coding the copybook filename in the space provided. The copybook name may be omitted if you know the data file has no packed or binary fields and no zoned signs in unpacked numeric fields.
For these files the 'skeleton2' conversion job is supplied which will simply translate the entire record from EBCDIC to ASCII. The skeleton2 job will be modified with the data filenames, the record size,& any indexed keys using the data conversion control file (generated from the LISTCAT report).
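As an illustration of what 'translate the entire record' means, 'dd conv=ascii' performs a comparable whole-record EBCDIC-to-ASCII translation. This is only a simplification: the real skeleton2 uvcopy job uses its own translate table & also applies the record size & indexed keys from the control file.

```shell
mkdir -p d1ebc d2asc
# 3-byte EBCDIC sample ("ABC" = X'C1C2C3') standing in for a real file
printf '\301\302\303' > d1ebc/sample.file
# whole-record translate, EBCDIC to ASCII (dd's built-in table)
dd if=d1ebc/sample.file of=d2asc/sample.file conv=ascii 2>/dev/null
cat d2asc/sample.file; echo    # -> ABC
```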
The objective here is to be able to convert all data files with 1 command, which is important for the 'go live' weekend. It is also convenient for the possibly several re-transfers & reconverts during the conversion & testing period (6 months to 1 year).
The copybooks are used to automatically generate data conversion 'uvcopy jobs' in the pfx3 sub-directory. Some jobs in pfx3/... may need manual editing for files that have multiple record-types (usually few). But when all edits & testing are complete, we can execute all data conversion jobs using the 'uvcopyxx' script, which runs 'uvcopy' for all jobs in the subdir.
uvcopyxx 'pfx3/*'
=================
COBOL copybooks are required by the jobs that generate the conversion jobs, except for files that meet the following conditions: no packed or binary fields, & no zoned signs in unpacked numeric fields.
For these files a 'skeleton2' conversion job is supplied which will simply translate the entire record from EBCDIC to ASCII. The skeleton2 job will be modified with the data filenames, the record size,& any indexed keys using the data conversion control file (generated from the mainframe LISTCAT report).
I recommend that you print all your cobmaps (record layouts) for the data files being converted. They are invaluable during data conversion, testing, debugging, & validation. Please see the example on the next page & note the field start & end bytes on the right hand side.
#1. login: testlibs --> homedir /home/mvstest/testlibs
#2. ls -l - review relevant libs cbls & maps
cpys - copybooks transferred to Unix/Linux
     - converted for reserved words & lower-cased
maps - 'cobmap's generated from copybooks in cpys
     - record layouts with field start/end/length calculated
       & coded on right hand side
#3. uvcopyx cobmap1 cpys maps uop=q0i7p0
    ====================================
    - create All record-layouts for All copybooks
#4. rmzf maps - remove zero length cobmaps
    =========   (caused by procedure copybooks vs record field descriptions)
#5. export UVLPDEST="-dlaserxx"   <-- establish Laser printer destination
    ===========================       - if default (in profile) not desired
#5a. uvlpd12 maps    <-- print all cobmaps at 12 cpi Simplex
     ============
--- OR ---
#5b. uvlpd12D maps   <-- print all cobmaps at 12 cpi Duplex
     =============
Please see the sample cobmap for 'custmas' listed on the next page --->
The above commands generate & print all cobmaps in the subdir, but you can easily generate & print any 1 cobmap any time you require. For example to generate/print a cobmap directly from subdir 'cpys', use the following:
uvcopy cobmap1,fili1=cpys/custmas,filo1=maps/custmas
====================================================
prompts for vi/lp/uvlp12 --> uvlp12   <-- to print at 12cpi on dflt printer
                             ======
uvlp12 maps/custmas   <-- or enter null above & print separately like this
===================
<--- see the instructions to generate 'cobmap's on the previous page.
cobmap1  start-end bytes for cobol record fields      200504271116  pg# 0001
cpys/custmas.cpy                             RCSZ=00256  bgn   end  lth typ
* custmas - cobol copybook for customer.master file mvsjcl
  10 cm-cust            pic 9(6).            0000  0005  006 n  6
  10 cm-delete          pic x(4).            0006  0009  004
  10 cm-nameadrs.
     20 cm-name         pic x(25).           0010  0034  025
     20 cm-adrs         pic x(25).           0035  0059  025
     20 cm-city         pic x(16).           0060  0075  016
     20 filler001       pic x.               0076  0076  001
     20 cm-prov         pic x(2).            0077  0078  002
     20 filler002       pic x.               0079  0079  001
     20 cm-postal       pic x(10).           0080  0089  010
  10 cm-telephone       pic x(12).           0090  0101  012
  10 cm-contact         pic x(18).           0102  0119  018
  10 cm-thisyr-sales    pic s9(7)v99 comp-3 occurs 12.
                                             0120  0124  005pns  9
  10 cm-lastyr-sales    pic s9(7)v99 comp-3 occurs 12.
                                             0180  0184  005pns  9
  10 filler003          pic x(16).           0240  0255  016
 *RCSZ=00256                                             0256
# ar.customer.master - uvcopy job, EBCDIC to ASCII, preserve packed, fix signs
opr='ar.customer.master - uvcopy code generated from copybook: custmas '
uop=q0
was=a33000b33000
fili1=d1ebc/ar.customer.master,rcs=00256,typ=RSF
filo1=d2asc/ar.customer.master,rcs=00256,typ=ISFl1,isks=(000,006d)
@run
       opn    all
loop   get    fili1,a0
       skp>   eof
       mvc    b0(00256),a0       move rec to outarea before field prcsng
       tra    b0(00256)          translate entire outarea to ASCII
# ---  <-- insert R/T tests here for redefined records
       mvc    b120(60),a120      pns cm-thisyr-sales
       mvc    b180(60),a180      pns cm-lastyr-sales
put1   put    filo1,b0
       skp    loop
eof    cls    all
       eoj
See the complete directory trees required to perform mainframe conversions starting on page '1A1' in this doc, but here are selections of the relevant subdirs involved in just the data file conversion.
/home/mvstest/cnvdata/  <-- Data conversion superdir ($CNVDATA)
 :
 :-----d0ebc  <-- EBCDIC data files from mainframe by FTP binary
 :       :---DATA.FILE1        - filenames UPPER case
 :       :---DATA.FILE2(0)     - GDGs indicated by suffix (0) or G/V
 :       :---FILE3.G1234V00    - GDG Generation/Version
 :       :---...etc...
 :-----d1ebc  <-- EBCDIC filenames changed to unix/VU standards
 :       :---data.file1          - filenames translated to lower case
 :       :---data.file2_000001   - GDG files identified by trailing '_'
 :       :---data.file3_000001   - whether from (0) or G1234V00
 :       :---...etc..
 :-----d2asc  <-- subdir to receive ASCII conversions
 :       :---data.file1          - data translated to ASCII
 :       :---data.file2_000001   - preserving packed fields, fixing zoned signs
 :       :---data.file3_000001   - copy from here to refresh testdata subdirs
 :       :---...etc...
 :
 :-----ctl   - conversion control files
 :-----cpys  - COBOL copybooks
 :-----maps  - 'cobmaps' record layouts generated from copybooks
 :
 :-----pfx1  - uvcopy jobs to convert EBCDIC to ASCII
 :           - generated from COBOL copybooks by utility 'uvdata51'
 :           - do not have actual datafile names or indexed keys
 :             (since this info not in copybooks)
 :
 :-----pfx2  - uvcopy jobs with actual datafile names & indexed keys
 :           - encoded by utility 'uvdata52' using control file ctl/datacnv54
 :           - datacnv54 info was extracted from LISTCAT, JCL, Excel, etc
 :
 :-----pfx3  - uvcopy jobs copied from pfx2 & modified for various reasons
 :           - for Multi Record Type files (insert code to test types)
 :           - to split Multi Record Types to separate files, etc
 :           - copying to pfx3 protects your manual change code from
 :             being overwritten if jobs are regenerated
 :           - will execute the jobs from pfx3
You can create these subdirs with the 'cnvdatadirs' script
#1. cd /home/mvstest <-- change to MVSJCL test user
#2. mkdir cnvdata <-- make superdir
#3. cd cnvdata <-- change into it
#4. cnvdatadirs   <-- make all subdirs required for data conversion
    ===========
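If you want to see what directories are involved, the same effect can be sketched with plain mkdir (subdir names per the tree above; 'cnvdata' here is a stand-in for /home/mvstest/cnvdata):

```shell
# create the data-conversion subdirs by hand, equivalent in effect
# to what the tree above shows cnvdatadirs providing
mkdir -p cnvdata && cd cnvdata
mkdir -p d0ebc d1ebc d2asc ctl cpys maps pfx1 pfx2 pfx3
```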
'pfx3' is in PFPATH in the profile, so you may execute the uvcopy conversion jobs without specifying the path to the jobs in pfx3 (see PFPATH defined in the profile listed at ADMjobs.htm#1C3).
/home/mvstest/testdata/  <-- test data file superdir ($RUNDATA)
 :
 :-----ar  <-- topnodes/subdirs (ar,gl for illustrations)
 :       :---data.file1
 :       :---data.file2          <-- subdirs/datafiles copied here for testing
 :       :---...etc...               - refreshed whenever required
 :-----gl
 :       :---data.file1
 :       :---data.file2_000001
 :       :---data.file3_000001
 :       :---...etc...
 :-----xxx  <-- variable no of subdirs depending on topnodes
 :       :---data.file1
 :       :---data.file2
 :       :---...etc...
 :
 :-----ftp
 :-----jobtmp
 :-----reports  <-- common subdirs used by all topnode aplctns
 :-----tape
 :-----wrk
 :-----tmp
The above is a simplified design for learning & initial testing. See Part_7 for alternate directory designs that might be more suited to production environments.
Before documenting the Operating Instructions to convert the files, we will try to help you understand the process & the various control files involved.
DB2.PROD.DWPARM(QUDW30)
DB2.PROD.DWPARM(QUDW33)
DBDPCRM.DBD40700.$CRM028.SCAN(0)
DBDPCSM.CSM781.$CHP14.EFDS.MERGE(0)
DBDPDW.PDW828.$SCA01.CNSM.ACT.XRF.HIST.SRT
DBDPODS.DBDPDM1.$MQQUEUE.INFO002
DBDPODS.DBDTRH.$TRNHST.SQL.APR0306
DBDPCSM.CSMUNLK3.$MDBCID.IN.PDW.APR0107
DBDPCSM.CSMSRT.$WELLS.NRAD.G0587V00
DBDPDW.DBDFTP..DAILY.G2981V00
DBDPDW.DBDFTP..DAILY.G2982V00
db2.prod.dwparm@qudw30
db2.prod.dwparm@qudw33
dbdpcrm.dbd40700._crm028.scan_000001
dbdpcsm.csm781._chp14.efds.merge_000001
dbdpdw.pdw828._sca01.cnsm.act.xrf.hist.srt
dbdpods.dbdpdm1._mqqueue.info002
dbdpods.dbdtrh._trnhst.sql.apr0306
dbdpcsm.csmunlk3._mdbcid.in.pdw.apr0107
dbdpcsm.csmsrt._wells.nrad_000001
dbdpdw.dbdftp..daily_000001
dbdpdw.dbdftp..daily_000002
Files are copied from d0ebc to d1ebc, renaming them to Unix/Linux & Vancouver Utility JCL/script standards.
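A simplified sketch of that rename pass follows (lower-casing & mapping '$'/'#' to '_' only; the GDG suffix renumbering shown in the samples above needs the generation logic of the real rename script & is not attempted here):

```shell
mkdir -p d0ebc d1ebc
: > 'd0ebc/DB2.PROD.$PARM1'     # sample file standing in for an FTP'd DSN
for f in d0ebc/*; do
    base=${f##*/}
    # lower-case & map '$'/'#' to '_'  (DB2.PROD.$PARM1 -> db2.prod._parm1)
    new=$(printf '%s\n' "$base" | tr 'A-Z$#' 'a-z__')
    cp "$f" "d1ebc/$new"
done
ls d1ebc
```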
As well as transferring the data files, we need to transfer & reformat the LISTCAT information (filenames, record sizes, key size/locations, etc).
This information can be used to assist the data file conversion process. The LISTCAT report is 132 columns wide, so we have split it into left & right sides.
This assumes the MVS LISTCAT report was transferred to the unix/linux system & stored in conversion ctl subdir as 'ctl/listcat0'
Datafile conversion jobs are generated from COBOL copybooks, but the copybooks don't include actual data filenames or indexed key specifications.
We will here describe the process of extracting the datafilenames & indexed key info from the LISTCAT report for later insertion into the data conversion jobs (generated from the copybooks).
Utility job 'catdata51' extracts desired info (DSNs, record-sizes, keys, etc) & writes 'ctl/datacat51'.
Then 'catdata52' sorts ctl/datacat51 to ctl/datacat52, while converting the filenames to the Vancouver Utility JCL/script standards.
LISTCAT is the best source of information about your data files, but if it is not available, we have 5 other sources as described on page '4F1'.
The next best source is the datajcl51 control file created by extracting the DSNs from All JCL, sorting,& dropping duplicates to get 1 entry per unique datafilename. File information (record-sizes,etc) is collected & encoded on keywords on the right side of each filename.
+-3/31/05 AT 06.17      VSAM CATALOG LISTING  VOLUME SEQUENCE      PAGE 001
- DATASET NAME          HOW ALLOCATED         NUM SPLITS PERCT UNUSED
                        VOLUME  ALLOC PRI SEC EXT  CI  CA  FULL TRK-BLK

0 BILLPAY.RECON.FILE1
    TC4BE615.VSAMDSET.DFD05090.TBCCA1FE.TC4BE615
                        APPL01  T      24  12   7  95   4

0 TB.MASTER.FILE
    T5088C48.VSAMDSET.DFD05088.TBCC8187.T5088C48 -
                        APPL01  T      48  12   5 335   4  100
    T50892CA.VSAMDSET.DFD05088.TBCC8187.T50892CA
                        APPL01  T       1   1   1  75

0 SIGN.MASTER.FILE
    T79A758C.VSAMDSET.DFD02272.TB84D546.T79A758C -
                        APPL02  C       4   1 100
    T79A7D48.VSAMDSET.DFD02272.TB84D546.T79A7D48 -
                        APPL02  T       1   1  41
                 TOTAL  FREESPACE   CI   REC  SH R F --KEY--  CREATED
               RECORDS   CI   CA  SIZE   LEN  E T LEN  POS    DATE

BILLPAY.RECON.FILE1
                                  2048   200  13 R M          05.090

TB.MASTER.FILE
                22,318   10   10  4096    94  33 R K  14   0  05.088
                     9            4096  4089  33 R    14   0  05.088

SIGN.MASTER.FILE
                 1,572   10   10  4096  1040  33 R K  25   0  02.272
                     5            4096  4089  33 R    25   0  02.272
PY.PAYROLL.MASTER       rca=00128 rcm=00239 typ=IDXf8v cntr=15946788 src=_L_ key=(0000,0016)
PY.TIME.CARDS           rca=00080 rcm=00080 typ=RSF    cntr=00000102 src=_L_
PYTEST.PAYROLL.MASTER   rca=00256 rcm=00384 typ=IDXf8v cntr=00000537 src=_L_ key=(0010,0022)
PYTEST.PAYROLL.MASTER2  rca=00128 rcm=00256 typ=IDXf8v cntr=00006597 src=_L_ key=(0000,0011)
py.payroll.master       rca=00128 rcm=00239 typ=IDXf8v src=_L_ key=(0000,0016)
py.time.cards           rca=00080 rcm=00080 typ=RSF    src=_L_
pytest.payroll.master   rca=00256 rcm=00384 typ=IDXf8v src=_L_ key=(0010,0022)
pytest.payroll.master2  rca=00128 rcm=00256 typ=IDXf8v src=_L_ key=(0000,0011)
The first sample above is ctl/datacat51 (output of catdata51, DSNs as extracted from the LISTCAT report); the second is ctl/datacat52 (output of catdata52, filenames converted to Vancouver Utility JCL/script standards).
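Each line of these control files is a filename followed by keyword=value pairs, which can be picked apart with standard shell tools. An illustrative sketch (utilities such as uvdata52 do this internally; the sample line is from the datacat52 listing above):

```shell
# pull the record size, file type & key spec out of one control-file line
line='py.payroll.master rca=00128 rcm=00239 typ=IDXf8v src=_L_ key=(0000,0016)'
for word in $line; do
    case $word in
        rca=*) rca=${word#rca=} ;;   # record size (avg)
        typ=*) typ=${word#typ=} ;;   # file type
        key=*) key=${word#key=} ;;   # indexed key (start,length)
    esac
done
echo "rca=$rca typ=$typ key=$key"   # -> rca=00128 typ=IDXf8v key=(0000,0016)
```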
Source input is a mainframe LISTCAT report FTP'd to unix/linux & stored/renamed in $TESTLIBS/ctl/listcat0 (see sample above)
#1. FTP LISTCAT report to unix/linux & store as ctl/listcat0
    ========================================================
#2. uvcopy catdata51,fili1=ctl/listcat0,filo1=ctl/datacat51
    =======================================================
    - extract file info: AVGLRECL, MAXLRECL, RKP, KEYLEN, REC-TOTAL
    - code file info as keywords=... on right side of filename
    - leave DSN as is (so we can see original)
    - next step (catdata52) will modify GDGs to trailing_
#3. uvcopy catdata52,fili1=ctl/datacat51,filo1=ctl/datacat52 ======================================================== - translate filenames to lower case - convert any embedded '$ '#' to '_'s - modify GDG (0), (+1), etc to trailing_
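The filename rules applied by catdata52 can be sketched with standard unix tools. This is an illustration only - 'vu_rename' is a hypothetical helper, not part of the UV software, and it handles only the parenthesized GDG suffixes - but it applies the same rules: lower case, '$' & '#' to '_', GDG suffix to a trailing '_'.

```shell
# Hypothetical helper illustrating the catdata52 filename rules
# (not the actual uvcopy job): lower-case the name, change '$' & '#'
# to '_', & reduce a GDG suffix such as (0) or (+1) to a trailing '_'.
vu_rename() {
  printf '%s\n' "$1" \
    | tr 'A-Z' 'a-z' \
    | tr '$#' '__' \
    | sed 's/([+-]*[0-9][0-9]*)$/_/'
}

vu_rename 'DBDPCSM.CSM781.$CHP14.EFDS.MERGE(+1)'
# dbdpcsm.csm781._chp14.efds.merge_
```

The sample output above matches entry #04 in the jcldata52 listing shown later on this page.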
#4. uvlp18 ctl/datacat52 <-- list LISTCAT extracted info (optional) ==================== - 18 cpi fits 132 cols on 8 1/2 wide
#5. uvcopy ctldata53,fili1=ctl/datajcl52,fili2=ctl/datacat52,fili3=ctl/dataxl152 ,fili4=ctl/dataxl252,fili5=ctl/dataedt52,fili6=ctl/datacnv52,filo7=ctl/datactl53 ============================================================================ - combine 6 inputs (JCL,LISTCAT,Excel#1,Excel#2,Edited,datacnv) - create null files for any sources not available - sort files together & drop duplicates on sort output - collect significant file info (keyword=...) on right side of filenames
#1. Login mvstest --> /home/mvstest or yourself --> homedir
#2. cdl (alias cdl='cd $RUNLIBS') --> /home/mvstest/testlibs
#3. mkdir cat1 cat2 <-- make subdirs for LISTCAT files ===============
#4. FTP multiple LISTCAT files --> cat1/...
#5. uvcopyx catdata51 cat1 cat2 uop=q0i7 ==================================== - extract file info: AVGLRECL, MAXLRECL, RKP, KEYLEN, REC-TOTAL - code file info as keywords=... on right side of filename
#6. uvcopy catdata52,fili1=ctl/datacat51,filo1=ctl/datacat52 ======================================================== - translate filenames to lower case - convert any embedded '$ '#' to '_'s - modify GDG (0), (+1), etc to trailing_
#7. uvlp18 ctl/datacat52 <-- list LISTCAT extracted info (optional) ==================== - 18 cpi fits 132 cols on 8 1/2 wide
#5. uvcopy ctldata53,fili1=ctl/datajcl52,fili2=ctl/datacat52,fili3=ctl/dataxl152 ,fili4=ctl/dataxl252,fili5=ctl/dataedt52,fili6=ctl/datacnv52,filo7=ctl/datactl53 ============================================================================ - combine 6 inputs (JCL,LISTCAT,Excel#1,Excel#2,Edited,datacnv) - create null files for any sources not available - sort files together & drop duplicates on sort output - collect significant file info (keyword=...) on right side of filenames
For JCL conversion, only source #1 is mandatory; you can make null files for the other 5 sources.
Here are a few lines from each of the 6 info files that might be used to supply record-sizes, file types, keyloc/keylen, copybooknames, etc for both JCL & DATA file conversion.
#01 DB2.PROD.DWPARM(QUDW33) cntf=0002 rca=00000 rcm=00000 typ=RSF src=J___________ job=pdwkscix prg=DSNUTILB
#02 DBDPCSM.CSM288.$KM001.BATCH.%%MDY cntf=0014 rca=00080 rcm=00080 typ=RSF src=Jr__________ job=csm288km prg=CSM28800
#03 DBDPCSM.CSM288.$KM001.SYNTAX.%%MDY cntf=0007 rca=00080 rcm=00080 typ=RSF src=Jr__________ job=csm288km prg=CSM28800
#04 DBDPCSM.CSM781.$CHP14.EFDS.MERGE(+1) cntf=0001 rca=00000 rcm=00000 typ=RSF src=J___________ job=csm781m prg=CSM78100
#05 DBDPCSM.CSMUNLK3.$MDBCID.IN.PDW.%%MDY cntf=0005 rca=00147 rcm=00147 typ=RSF src=Jr__________ job=csmunlk3 prg=SORT
#06 DBDPCSM.CSMUNLK3.$MDBCID.OUT.%%MDY cntf=0003 rca=00000 rcm=00000 typ=RSF src=J___________ job=csmunlk3 prg=IEFBR14
#07 DBDPDW.DBDFTP.$ABA.MONTHLY(0) cntf=0001 rca=00000 rcm=00000 typ=RSF src=J___________ job=pdw202p1 prg=PDW20200
#08 DBDPCSM.CSMSRT.$WELLS.G0587V00 cntf=0001 rca=00753 rcm=00753 typ=RSF src=Jr__________ job=csmwel_1 prg=SYNCSORT
#09 DBDPDW.PDW202.$ADDR.USG.WKY(+1) cntf=0001 rca=00039 rcm=00039 typ=RSF src=Jr__________ job=pdw202p1 prg=PDW20200
#10 DBDPDW.PDW828.$SCA01.CNSM.ACT.XRF.HIST.SRT cntf=0003 rca=00039 rcm=00039 typ=RSF src=Jr__________ job=pdw828h1 prg=SORT
Utility jcldata52 copies datajcl51 to datajcl52, changing filename conventions from mainframe to the standards adopted for the Vancouver Utility ksh scripts.
#01 db2.prod.dwparm@qudw33 rca=00000 rcm=00000 typ=RSF src=J___________ job=pdwkscix prg=DSNUTILB
#02 dbdpcsm.csm288._km001.batch.%%MDY rca=00080 rcm=00080 typ=RSF src=Jr__________ job=csm288km prg=CSM28800
#03 dbdpcsm.csm288._km001.syntax.%%MDY rca=00080 rcm=00080 typ=RSF src=Jr__________ job=csm288km prg=CSM28800
#04 dbdpcsm.csm781._chp14.efds.merge_ rca=00000 rcm=00000 typ=RSF src=J___________ job=csm781m prg=CSM78100
#05 dbdpcsm.csmunlk3._mdbcid.in.pdw.%%MDY rca=00147 rcm=00147 typ=RSF src=Jr__________ job=csmunlk3 prg=SORT
#06 dbdpcsm.csmunlk3._mdbcid.out.%%MDY rca=00000 rcm=00000 typ=RSF src=J___________ job=csmunlk3 prg=IEFBR14
#07 dbdpdw.dbdftp._aba.monthly_ rca=00000 rcm=00000 typ=RSF src=J___________ job=pdw202p1 prg=PDW20200
#08 dbdpdw.dbdftp._aba.monthly_ rca=00000 rcm=00000 typ=RSF src=J___________ job=pdw202p1 prg=PDW20200
#09 dbdpdw.pdw202._addr.usg.wky_ rca=00039 rcm=00039 typ=RSF src=Jr__________ job=pdw202p1 prg=PDW20200
#10 dbdpdw.pdw828._sca01.cnsm.act.xrf.hist.srt rca=00039 rcm=00039 typ=RSF src=Jr__________ job=pdw828h1 prg=SORT
#01 rca=00000 rcm=00000 typ=RSF src=J___________ job=pdwkscix prg=DSNUTILB
#02 rca=00080 rcm=00080 typ=RSF src=Jr__________ job=csm288km prg=CSM28800
#03 rca=00080 rcm=00080 typ=RSF src=Jr__________ job=csm288km prg=CSM28800
#04 rca=00000 rcm=00000 typ=RSF src=J___________ job=csm781m prg=CSM78100
#05 rca=00147 rcm=00147 typ=RSF src=Jr__________ job=csmunlk3 prg=SORT
#06 rca=00000 rcm=00000 typ=RSF src=J___________ job=csmunlk3 prg=IEFBR14
#07 rca=00000 rcm=00000 typ=RSF src=J___________ job=pdw202p1 prg=PDW20200
#08 rca=00000 rcm=00000 typ=RSF src=J___________ job=pdw202p1 prg=PDW20200
#09 rca=00039 rcm=00039 typ=RSF src=Jr__________ job=pdw202p1 prg=PDW20200
#10 rca=00039 rcm=00039 typ=RSF src=Jr__________ job=pdw828h1 prg=SORT
** sample input#2 - datacat52
py.payroll.master      rca=00128 rcm=00239 typ=IDXf8v src=_L_ key=(0000,0016)
py.time.cards          rca=00080 rcm=00080 typ=RSF    src=_L_
pytest.payroll.master  rca=00256 rcm=00384 typ=IDXf8v src=_L_ key=(0010,0022)
pytest.payroll.master2 rca=00128 rcm=00256 typ=IDXf8v src=_L_ key=(0000,0011)
aoe.tfi132._473bkp_ cpy=t0473w.cpy rca=_____ rcm=_____
dbdpcsm.csm501._aif01.batch2 cpy=nasc354.cpy rca=_____ rcm=_____
dbdpcsm.csmftp._aif01.srcin.aif1 cpy=shrf025.cpy rca=_____ rcm=_____
dbdpdw.dbd714._jpalog.weekly.&weekdate cpy=____________ rca=00200 rcm=00200
dbdpdw.dbd714._n905s.%%mdy cpy=t0905o.cpy rca=00184 rcm=00184
dbdpdw.dbd715._j1214.weekly.&weekdate cpy=t1214i.cpy rca=_____ rcm=_____
dbdpdw.dbd715._nformfi.%%mdy.@03del cpy=____________ rca=00132 rcm=00132
dbdpdw.dbd900._copy.chkclln.evnt cpy=____________ rca=00080 rcm=00080
dbdpnas.nas258._aif01.batch.aif1 cpy=csmf501.cpy rca=_____ rcm=_____
db2.prod.dwparm@qudw30 cpy= rca=_____ rcm=_____ src=______Yi__
db2.prod.dwparm@qudw33 cpy= rca=_____ rcm=_____ src=______Yi__
dbdpcrm.dbd40700._crm028.scan_ cpy= rca=_____ rcm=_____ src=______Yi__
dbdpcsm.convert._sca02.efdbcid_ cpy= rca=_____ rcm=_____ src=______Yi__
dbdpcsm.csm288._km001.batch.%%MDY cpy= rca=_____ rcm=_____ src=______Yi__
dbdpcsm.csm288._km001.batch.%%MDY cpy= rca=_____ rcm=_____ src=______Yi__
dbdpcsm.csm781._chp14.efds.merge_ cpy= rca=_____ rcm=_____ src=______Yi__
dbdpcsm.csm781._chp14.efds.merge_ cpy= rca=_____ rcm=_____ src=______Yi__
dbdpcsm.csm783._chx01.efdbcid_ cpy= rca=_____ rcm=_____ src=______Yi__
# dataedt52 - manually edited additions to datafile info
#           - this file used for JCL conversions
# Create with editor & store in $TESTLIBS/ctl/dataedt52
# Will be merged with 5 other info source files
#   (datajcl52,datacat52,dataxl152,dataxl252,datacnv52+dataedt52)
#   to create datactl53 & datactl53I Indexed file used by JCL converter
#   (these '#' comment lines will be dropped on merge)
#
# Edit these filenames to match filenames in converter output
# - lower case, any '$' in mainframe filenames entered as '_'s
# - GDG files ID'd by trailing '_' underscores
# - keywords may follow filenames after at least 1 space
# - rca=...,rcm=... should be 5 digits
#
dbdpnas.nas258._aif01.batch.aif1   rca=00080 rcm=00080 typ=RSF
dbdpnas.nas258._aif01.batch.bkp1_  rca=00080 rcm=00080 typ=RSF
db2.prod.dwparm@qudw33 src=__________D_
dbdpcsm.csm288._km001.batch.%%MDY src=__________D_
dbdpcsm.csm288._km001.batch.%%MDY src=__________D_
dbdpcsm.csm288._km001.batch.%%MDY src=__________D_
dbdpcsm.csm783._chx01.efdbcid_ src=__________Db
dbdpcsm.csmunlk3._mdbcid.in.pdw.%%MDY src=__________D_
dbdpcsm.csmunlk3._mdbcid.in.pdw.%%MDY src=__________D_
dbdpcsm.csmunlk3._mdbcid.in.pdw.%%MDY src=__________D_
dbdpdw.dbdftp._aba.monthly_ src=__________D_
dbdpdw.dbdftp._aba.monthly_ src=__________D_
dbdpdw.dbdftp._aba.monthly_ src=__________D_
dbdpods.dbd353._odsf013.extract src=__________Db
dbdpods.dbdchp01._weekly.valid src=__________D_
dbdpods.dbdsrt._rod197b.deduped.dly_ src=__________Dp
dbdpods.dbdsrt._rod197b.deduped.dly_ src=__________Dp
dbdpods.dbdsrt._rod197b.deduped.dly_ src=__________Dp
The data file conversion process will extract relevant information from the JCL conversion control file '$TESTLIBS/ctl/datactl53' & add it to the DATA conversion control file '$CNVDATA/ctl/datacnv53' (copied to datacnv54).
Here is a sample of 'ctl/datactl53' which is loaded into an Indexed file (ctl/datactl53I) to supply file info (record-sizes, etc) to the JCL converter. See primary documentation for ctl/datactl53 in MVSJCL.htm vs this MVSDATA.doc.
#01 db2.prod.dwparm@qudw33 cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_____Yi__D_ job=pdwkscix prg=DSN
#02 dbdpcsm.csm288._km001.batch.%%MDY cpy=nasc354.cpy rca=_____ rcm=_____ typ=RSF src=J___X_Yi__D_ job=pdmkeyen prg=NAS
#03 dbdpcsm.csm288._km001.batch.%%MDY cpy=nasc354.cpy rca=_____ rcm=_____ typ=RSF src=J___X_Yi__D_ job=pdmkeyen prg=NAS
#04 dbdpcsm.csm781._chp14.efds.merge_ cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_____Yi__D_ job=pdw100pe prg=PDW
#05 dbdpcsm.csmunlk3._mdbcid.in.pdw.%%MDY cpy=____________ rca=00147 rcm=00147 typ=RSF src=J___XrYi__D_ job=pdm825mp prg=SOR
#06 dbdpcsm.csmunlk3._mdbcid.in.pdw.%%MDY cpy=____________ rca=00147 rcm=00147 typ=RSF src=J___XrYi__D_ job=pdm825mp prg=SOR
#07 dbdpdw.dbdftp._aba.monthly_ cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_________D_ job=pdw202p1 prg=PDW
#08 dbdpdw.dbdftp._aba.monthly_ cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_________D_ job=pdw202p1 prg=PDW
#09 dbdpdw.pdw202._addr.usg.wky_ cpy=____________ rca=00039 rcm=00039 typ=RSF src=Jr____Yi__Dp job=pdw301p1 prg=PDW
#10 dbdpdw.pdw828._sca01.cnsm.act.xrf.hist cpy=____________ rca=00039 rcm=00039 typ=RSF src=Jr____Yi__Dp job=pdw828h1 prg=PDW
#01 cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_____Yi__D_ job=pdwkscix prg=DSNUTILB
#02 cpy=nasc354.cpy rca=_____ rcm=_____ typ=RSF src=J___X_Yi__D_ job=pdmkeyen prg=NAS25300
#03 cpy=nasc354.cpy rca=_____ rcm=_____ typ=RSF src=J___X_Yi__D_ job=pdmkeyen prg=NAS25300
#04 cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_____Yi__D_ job=pdw100pe prg=PDW21000
#05 cpy=____________ rca=00147 rcm=00147 typ=RSF src=J___XrYi__D_ job=pdm825mp prg=SORT
#06 cpy=____________ rca=00147 rcm=00147 typ=RSF src=J___XrYi__D_ job=pdm825mp prg=SORT
#07 cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_________D_ job=pdw202p1 prg=PDW20200
#08 cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_________D_ job=pdw202p1 prg=PDW20200
#09 cpy=____________ rca=00039 rcm=00039 typ=RSF src=Jr____Yi__Dp job=pdw301p1 prg=PDW20200
#10 cpy=____________ rca=00039 rcm=00039 typ=RSF src=Jr____Yi__Dp job=pdw828h1 prg=PDW82800
We have shown the ten line sample twice since the lines are too long for hard-copy documentation. Filenames are omitted on the 2nd display, but you can relate the two using the inserted sequence#s.
We are illustrating these sample control files here before documenting the conversion Operating Instructions to help you understand the process. When you later study the operating instructions, it may clarify things if you refer back to these control file samples.
#01 db2.prod.dwparm@qudw33 cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_____Yi__D_ job=pdwkscix prg=DSN
#02 dbdpcsm.csm288._km001.batch.sep0107 cpy=nasc354.cpy rca=_____ rcm=_____ typ=RSF src=J___X_Yi__D_ job=pdmkeyen prg=NAS
#03 dbdpcsm.csm288._km001.batch.sep0207 cpy=nasc354.cpy rca=_____ rcm=_____ typ=RSF src=J___X_Yi__D_ job=pdmkeyen prg=NAS
#04 dbdpcsm.csm781._chp14.efds.merge_000001 cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_____Yi__D_ job=pdw100pe prg=PDW
#05 dbdpcsm.csmunlk3._mdbcid.in.pdw.apr0107 cpy=____________ rca=00147 rcm=00147 typ=RSF src=J___XrYi__D_ job=pdm825mp prg=SOR
#06 dbdpcsm.csmunlk3._mdbcid.in.pdw.apr0207 cpy=____________ rca=00147 rcm=00147 typ=RSF src=J___XrYi__D_ job=pdm825mp prg=SOR
#07 dbdpdw.dbdftp._aba.monthly_000001 cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_________D_ job=pdw202p1 prg=PDW
#08 dbdpdw.dbdftp._aba.monthly_000002 cpy=____________ rca=_____ rcm=_____ typ=RSF src=J_________D_ job=pdw202p1 prg=PDW
#09 dbdpdw.pdw202._addr.usg.wky_000001 cpy=____________ rca=00039 rcm=00039 typ=RSF src=Jr____Yi__Dp job=pdw301p1 prg=PDW
#10 dbdpdw.pdw828._sca01.cnsm.act.xrf.hist cpy=____________ rca=00039 rcm=00039 typ=RSF src=Jr____Yi__Dp job=pdw828h1 prg=PDW
Utility 'cnvdata53' creates datacnv53 by reading datacnv52 (datafilenames to be converted), looking up the Indexed file datactl53I, & appending the information collected by the JCL conversion procedures.
Note that the number of lines in datacnv52 & datacnv53 would usually be much smaller than for datactl53 - because datactl53 contains all datafilenames extracted from all JCL, whereas datacnv52/datacnv53/datacnv54 will reflect only the number of datafiles transferred from the mainframe for conversion.
The filenames in datactl53 match the filenames used in the JCL & these are changed in datacnv54 to match the actual filenames in the data directories.
Here is the short version of the directories illustrated earlier on page '4C1'.
/home/mvstest
:-----cnvdata        <-- Data conversion superdir ($CNVDATA)
:     :-----d0ebc  - EBCDIC data files from mainframe by FTP binary
:     :-----d1ebc  - EBCDIC filenames changed to unix/VU standards
:     :-----d2asc  - subdir to receive ASCII conversions
:     :-----ctl    - conversion control files
:     :-----cpys   - COBOL copybooks
:     :-----maps   - 'cobmaps' record layouts generated from copybooks
:     :-----pfx1   - uvcopy jobs generated from COBOL copybooks
:     :-----pfx2   - uvcopy jobs with datafile names (vs copybooknames)
:     :-----pfx3   - uvcopy jobs copied from pfx2 before modify/execute
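If you are setting up a new conversion superdir, the subdirs above can be created with a few lines of shell. This is a sketch only; $CNVDATA would normally be set in your profile, so the demo default below is an assumption for illustration.

```shell
# Create the $CNVDATA subdirs shown above.
# CNVDATA is normally set in your profile; demo default if unset:
CNVDATA=${CNVDATA:-/tmp/cnvdata.demo}
for d in d0ebc d1ebc d2asc ctl cpys maps pfx1 pfx2 pfx3; do
  mkdir -p "$CNVDATA/$d"
done
```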
cpys ---------> maps ---------> pfx1 ---------> pfx2 ----------> pfx3
      cobmap1         uvdata51        uvdata53           cp & vi
mainframe----->d0ebc----------->d1ebc---------->d2asc---------->$TESTDATA/TN/...
          FTP       cpd0d1rename      uvdata51        copy2nodes
                    copy/rename       generate jobs   copy to TopNode subdirs
gencnv5A - generates the data conversion control file (ctl/datacnv53)
gencnv5B - generates the data conversion jobs (in pfx2/...)
gencnv51 - generates 1 uvcopy job to convert 1 data file (see Part 5)
Use of the 'scripts' is documented later, starting on page '4H1', but 1st we will present the 'step by step' method for data conversion.
Follow the 'step by step' procedures for your 1st data conversion, because this will give you a better understanding of the process & it will be easier to detect any problems that can occur whenever procedures are run for the 1st time at a new site.
After you have used the step by step method to verify the process at your site, you can then use the 'script' method to perform the several re-conversions that will be required before you go live.
#00. Transfer mainframe DATA files ---> $CNVDATA/d0ebc
#01. cdc ---> alias 'cd $CNVDATA' ---> /home/mvstest/cnvdata for example ============================
#02. ls d0ebc >tmp/ls_d0ebc ====================== - create mainframe filenames to be converted to unix/linux conventions
#03. uvcopy mksfd0d1,fili1=tmp/ls_d0ebc,filo1=sf/cpd0d1rename ======================================================== - make script to copy d0ebc to d1ebc, changing filenames from mainframe conventions to unix/linux VU standards GDG filename(0) --> filename_000001, etc
#04. sf/cpd0d1rename <-- execute script to copy/rename ===============
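The renaming performed by the generated cpd0d1rename script follows the pattern below. This is a simplified sketch, not the mksfd0d1 output itself; 'gdg_rename' is a hypothetical helper, & the real generated script also numbers multiple generations of the same base name (_000002, _000003, etc).

```shell
# Sketch of the d0ebc-->d1ebc renaming rule (illustration only):
# lower-case the mainframe name & change a GDG suffix such as (0)
# to _000001.  The real generated script numbers multiple
# generations of the same base name in sequence.
gdg_rename() {
  printf '%s\n' "$1" | tr 'A-Z' 'a-z' | sed 's/(.*)$/_000001/'
}

gdg_rename 'GL.DATA.FILE1(0)'    # gl.data.file1_000001
```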
#03a. uvcopy mksfd0d1 <-- shorter & easier equivalent of #03 above =============== - filenames default as shown on #03 above
#05. cdl ---> alias 'cd $TESTLIBS' --> /app/risk/testlibs at eFunds ============================
#06. uvcopy cnvdata51,fild1=$CNVDATA/d1ebc,filo2=ctl/datacnv51 ========================================================= - determine which files have packed or binary fields by scanning 1st 5000 bytes of EBCDIC datafiles for x'0C' & x'00' - write text file of all datafilenames with code 'Dp' or 'Db'
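The test made by cnvdata51 can be imitated with standard tools. This is a rough sketch, not the utility itself - 'classify' is a hypothetical function, & a real packed/binary test would be more discriminating than a simple byte scan.

```shell
# Rough imitation of the cnvdata51 test: scan the 1st 5000 bytes of
# a file for x'0C' (packed decimal sign) or x'00' (binary) & report
# a code similar to the Dp/Db flags seen in the control files.
classify() {
  hex=$(head -c 5000 "$1" | od -An -tx1 | tr '\n' ' ')
  case "$hex" in
    *' 0c '*) echo Dp ;;   # packed field suspected
    *' 00 '*) echo Db ;;   # binary field suspected
    *)        echo D_ ;;   # character data only
  esac
}
```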
#07. uvcopy cnvdata52,fili1=ctl/datacnv51,filo2=ctl/datacnv52 ======================================================== change filenames to match other datafile info jobs (jcldata51,etc) - translate datafilenames from UPPER to lower case - GDG filename(0) or .G1234V00 ---> filename_ (trailing '_') date stamped filename.mmmddyy ---> filename.%%MDY
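The date-stamp rule in step #07 can be illustrated in shell. 'norm_date' is a hypothetical helper, not the cnvdata52 job; it shows only the .mmmddyy case, generalizing a date-stamped name to .%%MDY so one control file entry matches any generation.

```shell
# Illustration of the cnvdata52 date-stamp rule (sketch only):
# a filename ending in .mmmddyy (e.g. .sep0107) is generalized
# to .%%MDY - 3 letters followed by 4 digits at end of name.
norm_date() {
  printf '%s\n' "$1" \
    | sed 's/\.[a-z][a-z][a-z][0-9][0-9][0-9][0-9]$/.%%MDY/'
}

norm_date 'dbdpcsm.csm288._km001.batch.sep0107'
# dbdpcsm.csm288._km001.batch.%%MDY
```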
#08. uvcopy ctldata53,fili1=ctl/datajcl52,fili2=ctl/datacat52 ,fili3=ctl/dataxl152,fili4=ctl/dataxl252,fili5=ctl/dataedt52 ,fili6=ctl/datacnv52,filo7=ctl/datactl53 =================================================================== - combine 6 inputs: JCL, LISTCAT, Excel#1, Excel#2, Edited, datainfo (create null files if not used)
#09. uvsort "fili1=ctl/datactl53,rcs=159,typ=LST,key1=0(44) ,filo1=ctl/datactl53I,typ=ISF,isk1=0(44)" ====================================================== - create Indexed file used by JCL & DATA conversions
#10. uvcopy cnvdata53,fili1=ctl/datacnv52,filr2=ctl/datactl53I ,filo2=ctl/datacnv53 ========================================================= - create data conversion control file - extracts cpy=..., rca=..., rcm=..., key=... from ctl/datactl53I - may also include copybooknames (originally on Excel spreadsheet)
#10a. uvcopy cnvdata53 <-- same but easier (files default as above) ================
#11. cp -f ctl/datacnv53 ctl/datacnv54 ================================= - copy cnvdata53 output to alternate file before manual edits - protection in case cnvdata53 rerun (would lose manual edits)
#12. uvlpL14 ctl/datacnv54 s2 <-- list Landscape at 14 cpi space 2 ======================== - listing will help you research & write in missing copybooks
#13. vi ctl/datacnv54 <-- Edit DATA conversion control file ================ - add any missing copybooks, record-sizes, keys, etc - copybooknames required for files with packed/binary
#14. cdc ---> alias 'cd $CNVDATA' ---> /app/risk/cnvdata for example ============================ - change over to data conversion superdir to perform data conversion
#15. cp $TESTLIBS/ctl/datacnv54 ctl/ =============================== - copy edited data conversion control file from $TESTLIBS/ctl
#16. uvsort "fili1=ctl/datacnv54,rcs=159,typ=LST,key1=0(44) ,filo1=ctl/datacnv54I,typ=ISF,isk1=0(44)" ====================================================== - create Indexed file used to generate DATA conversion jobs - for 1 file at a time, datacnv54 (seqntl) used to gen All jobs
#17. cp $TESTLIBS/cpys/* cpys ======================== - copy all copybooks from $TESTLIBS/cpys to $CNVDATA/cpys
#18. uvcopyx cobmap1 cpys maps uop=q0i7p0 ==================================== - generate cobmaps (record layouts) from COBOL copybooks
#19. uvcopyx uvdata51 maps pfx1 uop=q0i7 =================================== - generate data conversion uvcopy jobs from cobmaps
#20. cp $UV/pf/IBM/skeleton2 pfx1 ============================ - provide 'translate only' uvcopy job, in case copybook missing - OK if no packed or binary fields
#21. uvcopy uvdata52,fili1=ctl/datacnv54,fild2=pfx1,fild3=pfx2,uop=r1s1t2 ==================================================================== - complete the uvcopy data conversion jobs - insert datafilenames (vs copybook names) - if Indexed, change file type & insert keyloc(keylen)
uop=r1s1t2 - option defaults r1 - override rcs=... on fili1 (from cobmap) using ctlfile rcs=... if typ=RSF (N/A to typ=RDW) s0 - leave input typ as coded from prior job (usually typ=RSF) s1 - set input typ from ctlfile typ=... s2 - force input file typ=RDWz2 t0 - leave output typ as coded from prior job (uvdata51 typ=RSF, genpipe1,genverify1,genacum2 typ=LSTt) t2 - set output typ=RDWz2 (unconditionally if no t1 bit) t1 - (t1+t2=t3) set output typ=RDWz2 ONLY IF input typ=RDW__ User OPtion (uop) defaults = q1r1s1t2 -->null to accept or enter/override --> <-- enter null to accept defaults
#22. cp pfx2/* pfx3 ============== - copy completed uvcopy data conversion jobs to alternate subdir - before any possible modifications & executions (protection in case uvdata52 rerun by mistake, would overwrite edits)
You might execute all jobs now if your datafiles are not too huge & if you are anxious to see some results.
Some jobs may need to be modified & re-executed. You can rerun those requiring manual fixups later, after you do the research & manual fixups required.
#23. uvcopyxx 'pfx3/*' <-- execute All jobs to convert All datafiles =================
Most mainframe conversion sites choose to use the topnode as a sub-directory on the unix/linux target systems. UV software provides the 'copy2nodes' script to facilitate this.
#24. rm -rf $TESTDATA/* <-- remove all subdirs from testdata dir ==================
#25. mvsdatadirs <-- recreate the basic subdirs required in testdata =========== - jobtmp, joblog, rpts, sysout, tmp, wrk
#26. copy2nodes d2asc $TESTDATA ========================== - copy all converted datafiles, creating subdirs from topnodes (subdir created when new topnode 1st encountered)
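The copy2nodes behavior can be sketched in portable shell. This is an illustration only - 'copy2nodes_sketch' is a hypothetical function, not the supplied UV script - but it shows the idea: the topnode (text before the 1st '.') becomes a subdir of the target, created when 1st encountered, & the rest of the name becomes the file.

```shell
# Hypothetical sketch of a copy2nodes-style copy: for each file in
# the source dir, make the topnode a subdir of the target & copy the
# remainder of the filename there (e.g. ar.data.file1 -> ar/data.file1).
copy2nodes_sketch() {
  src=$1; tgt=$2
  for f in "$src"/*; do
    [ -f "$f" ] || continue
    name=${f##*/}             # strip directory path
    top=${name%%.*}           # topnode, e.g. 'ar'
    rest=${name#*.}           # remainder, e.g. 'data.file1'
    mkdir -p "$tgt/$top"      # subdir created on 1st encounter
    cp "$f" "$tgt/$top/$rest"
  done
}
```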
Some jobs will need changes before executing correctly - typically files with multiple record types, redefined records, or variable length records.
I recommend you list the data conversion jobs (store in a 3 ring binder). You can write in the changes as you do the research for record type testing.
#24. uvlpd12D pfx3 <-- optional list all generated uvcopy conversion jobs ============= - some may need manual corrections - for files with multi record types or variable length
#25. vi pfx3/... <-- modify data conversion jobs as required ===========
#26. cp -rf pfx3 pfx3.bak ==================== - backup data conversion jobs - in case you later need to recover R/T test code from old to new
/p4/cnvdata/         <-- Data conversion superdir ($CNVDATA)
:-----cpys  - COBOL copybooks
:-----ctl   - conversion control files
:-----maps  - 'cobmaps' record layouts generated from copybooks
:-----pfx1  - conversion jobs generated from COBOL copybooks by 'uvdata51'
:-----pfx2  - conversion jobs with actual datafile names & indexed keys
:-----pfx3  - uvcopy jobs copied from pfx2 & modified for multi R/T files
:           - copying to pfx3 protects your manual change code from
:             being overwritten if jobs are regenerated
:           - will execute the uvcopy jobs from pfx3
/p4/cnvdata/         <-- DATA files in same superdir as Generation subdirs
:-----d0ebc          <-- EBCDIC data files FTP'd from mainframe
:     :---AR.DATA.FILE1          - filenames UPPER case
:     :---GL.DATA.FILE1(0)       - GDG file suffixes (0), G1234V00, etc
:     :---GL.DATA.FILE2.G1234V00 - GDG file suffixes (0), G1234V00, etc
:     :---...etc...
:-----d1ebc          <-- EBCDIC data filenames changed to unix stds
:     :---ar.data.file1          - filenames lower case
:     :---gl.data.file1_000001   - GDG file suffixes changed to VU conventions
:     :---gl.data.file2_000001   - GDG file suffixes changed to VU conventions
:     :---...etc...
:-----d2asc          <-- to receive ASCII conversions
:     :---ar.data.file1          - ASCII data files from most recent conversion
:     :---gl.data.file1_000001   - copy from here to refresh testdata subdirs
:     :---gl.data.file2_000001   - copy from here to refresh testdata subdirs
:     :---...etc...
/p1/testdata/        <-- $TESTDATA ($RUNDATA) for testing
:-----ar             - topnodes/subdirs (ar,gl for illustrations)
:     :---data.file1 - subdirs/datafiles copied here for testing
:     :---...etc...  - refreshed whenever required from cnvdata
:-----gl
:     :---data.file1_000001
:     :---data.file2_000001
:     :---...etc...
Then script 'copy2nodes' copies all files from $CNVDATA/d2asc to the subdirs in $TESTDATA/... making subdirs as required from the topnode of the filenames.
Note that the datafiles in $CNVDATA/d2asc topnodes 'ar' & 'gl' are converted to subdirs when copied to $TESTDATA/...
You must use the 'step by step' method for your 1st data conversion (as documented on the preceding pages), because that will give you an understanding of the process & it will be easier to detect any problems that can occur whenever procedures are run for the 1st time at a new site.
After you have used the step by step method to verify the process at your site, you can then use the 'script' method to perform the several re-conversions that will be required before you go live.
cpys ---------> maps ---------> pfx1 ---------> pfx2 ----------> pfx3/...
      cobmap1         uvdata51        uvdata52           cp & vi (data cnvt jobs)
                                  (ctl file req'd)
The data conversion jobs are generated from the COBOL copybooks, but we also need a control file to relate the copybooks to the datafilenames, and also to supply file types, indexed keys, etc.
The mainframe 'LISTCAT' report is the best place to get this information. We can transfer the mainframe file & extract the information into a control file used in the conversion job generations. See procedures beginning on page '4E1' thru page '4F6'.
mainframe               unix/linux
LISTCAT -----> ctl/listcat0 ---------> ctl/datacat51 ----------> ctl/datacnv54I
          FTP                catdata51               (many steps omitted)
Before presenting the exact 'operating instructions', we will give you an overview of the process. You may see listings of the scripts used to generate all conversion jobs later on pages '9A1' & '9A2'.
#01. Login as appsadm or yourself
#02. cdc --> alias 'cd $CNVDATA' --> /app/risk/cnvdata for example ===========================
#03. Save anything desired from prior generations. You should save pfx3 to recover any manual coding for files with multiple record types.
#03a. mkdir pfx3.bak <-- make subdir if not already existing
#03b. cp -f pfx3/* pfx3.bak <-- save prior jobs to recover R/T coding =====================
#04. Remove any old conversion outputs
#04a. rm -f cpys/* maps/* pfx1/* pfx2/* =================================
#04b. rm -f d1ebc/* d2asc/* =====================
#05. gencnv5A all ============ - generate the data conversion control file (ctl/datacnv53) - also copies d0ebc to d1ebc changing filenames to JCL/script standards
#06. Edit the control file adding any missing copybooks & recordsizes. It is OK to have missing copybooks & record sizes for files that you know have no fields with packed, binary, or unpacked signs. You do not need to code the record-sizes if the copybook is coded.
#06a. cp ctl/datacnv54 ctl/datacnv54.old ================================== - save any old control file for possible recovery of previously coded copybooks & record-sizes
#06b. cp ctl/datacnv53 ctl/datacnv54 ============================== - copy the newly generated control file to the name used by gencnv5B
#06c. vi ctl/datacnv54 ================ - edit the control file adding any missing copybooks & recordsizes for files that have any fields with packed, binary, or unpacked signs.
#07. gencnv5B all ============ - generate the data conversion jobs in pfx2/...
#08a. cp -f pfx3/* pfx3.bak ===================== - 1st backup prior jobs in case you need to recover coding for files with multiple record types.
#08b. rm -f pfx3/* <-- remove any prior jobs ============
#08c. cp pfx2/* pfx3 <-- copy newly generated jobs to pfx3 ==============
#09. vi pfx3/... =========== - perform any edits required for files with multiple record types
#10. uvcopyxx 'pfx3/*' ================= - execute all jobs to convert all files from d1ebc to d2asc
#10a. uvcopy pfx3/dbdpods.dbdsrt._rod197b.deduped.dly_000001 ====================================================== - example of running 1 job to convert 1 data file (vs All above)
Most mainframe conversion sites choose to use the topnode as a sub-directory on the unix/linux target systems. UV software provides the 'copy2nodes' script to facilitate this.
You should first remove any old topnode directories from $TESTDATA. Often there is a pattern that makes this easy. For example if the topnodes were dbdpcsm, dbdpdw, dbdpods, etc - you could use prefix dbd*.
#11. rm -rf $TESTDATA/dbd* ===================== - remove any old topnode dirs & datafile contents - OR remove all subdirs & recreate as shown below 11a & 11b
#11a. rm -rf $TESTDATA/* <-- remove all subdirs from testdata dir ==================
#11b. mvsdatadirs <-- recreate the basic subdirs required in testdata =========== - jobtmp, joblog, rpts, sysout, tmp, wrk
#12. copy2nodes d2asc $TESTDATA ========================== - copy all converted datafiles, creating subdirs from topnodes (subdir created when new topnode 1st encountered)
Part 4 procedures generate uvcopy conversion jobs from ALL copybooks to convert ALL data files in a directory. The actual datafilenames (vs copybooknames) are inserted using a control file created by extracting all datafilenames from all JCL & appending file info from LISTCAT (recsize, filetype, indexed keys).
Part 5 documents the procedures to convert 1 data file at a time. Script gencnv51 will generate 1 uvcopy job from 1 copybook & insert the datafilename (vs copybookname). 'gencnv51' uses the control file (created in Part 4) to get file type & keys, but this step could be done manually if the control file is not available.
Part 5 presents the operating instructions to convert 1 file at a time without much explanation. Please see the beginning of Part_4 for explanations.
5A1. | Converting 1 file at a time - Overview |
5B1. | Directories, scripts,& utilties used to generate data conversion jobs |
5B2. | Preparation to generate a data conversion job |
- ensure copybook in $CNVDATA/cpys | |
- ensure control file includes entry to relate copybookname & datafilename |
5B3. | Running script 'gencnv51' to generate 1 data conversion job |
- copying the generated script from pfx2/... to pfx3 | |
- modifying the data conversion job (if necessary) |
5B4. | Executing the data conversion job to convert 1 datafile |
- copying the converted datafile over to $TESTDATA/... | |
- executing the previously converted JCL/script to access the | |
newly converted datafile | |
- capturing the console log for post test investigation |
Part 5 will use script 'gencnv51' to generate uvcopy data conversion jobs for 1 file at a time. Gencnv51 executes 3 steps as follows:

   cobmap1  - convert the COBOL copybook into the 'cobmap' (record layout)
   uvdata51 - convert the cobmap (record layout) into the uvcopy job
   uvdata53 - insert the actual datafilename & indexed keys from the control file
When 1st created from the copybook/cobmap, the uvcopy jobs will use the copybook name for the I/O data file names, and will not have any indexed keys specified (since that information is not available in the copybook).
The data filenames & indexed keys can be automatically inserted (by uvdata53) from a control file 'ctl/datacnv54I' which was created in Part 4 from the 'LISTCAT' information file transferred from the mainframe.
If you do not want to use the control file to insert the actual datafile names into the generated job, you could use the procedures documented in DATAcnv1.htm.
The procedures in DATAcnv1.htm code the datafilenames the same as the copybooknames. They also code the name of the generated job the same as the copybookname.
Actually those procedures are the same as the 1st 2 steps shown here above (running cobmap1 & uvdata51, but omitting uvdata53).
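The 3 generation steps (cobmap1, uvdata51, uvdata53) can be sketched as a dry-run wrapper. This is a hypothetical illustration only: the uvcopy argument layout for the uvdata53 step is an assumption (see the gencnv51 script itself for the real commands), & nothing is executed - the commands are just echoed for review.

```shell
# Hypothetical dry-run sketch of the 3 steps gencnv51 performs for 1 copybook.
# NOTE: the uvdata53 argument layout below is an assumption - this function
#       only echoes the commands, it does not run them.
gencnv51_steps() {
  cpy=$1    # copybook name without the .cpy suffix
  echo "uvcopy cobmap1,fili1=cpys/$cpy.cpy,filo1=maps/$cpy"
  echo "uvcopy uvdata51,fili1=maps/$cpy,filo1=pfx1/$cpy"
  echo "uvcopy uvdata53,fili1=pfx1/$cpy,filo1=pfx2/$cpy"
}
gencnv51_steps tdw011ft    # list the commands for copybook tdw011ft.cpy
```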
Script 'gencnv51' will generate 1 job to convert 1 data file.
It is more efficient to generate all jobs (vs 1 at a time), but you might use this procedure when you add another datafile or discover a mistake caused by the control file specifying the wrong copybook.
cpys --cobmap1--> maps --uvdata51--> pfx1 --uvdata53--> pfx2 --cp & vi--> pfx3

d0ebc --copy/rename--> d1ebc --generated job--> d2asc --copy to TopNode subdir--> $TESTDATA/TNsub/...
To illustrate these operating instructions for 1 file at a time we will use datafile 'dbdpdw.pdw200._addr.dly_000001' & copybook 'tdw011ft.cpy'.
The control file entry is shown below (split onto 2 lines here), with all keywords on the right.
dbdpdw.pdw200._addr.dly_000001 cpy=tdw011ft.cpy rca=00039 rcm=00039 typ=RSF ============================================================================ src=JrDpXrYi__ job=pdw301x1 prg=PDW20000 ========================================
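Given such an entry, the copybook for a datafile can be looked up with a small helper (a hypothetical sketch, not a UV Software utility - it assumes the blank-separated keyword=value layout shown above):

```shell
# Hypothetical lookup: print the cpy= value for a given datafilename
# from the blank-separated control file layout shown above.
lookup_cpy() {    # $1 = datafilename, $2 = control file
  awk -v df="$1" '$1 == df {
    for (i = 2; i <= NF; i++)
      if ($i ~ /^cpy=/) { sub(/^cpy=/, "", $i); print $i }
  }' "$2"
}
```

For the sample entry, `lookup_cpy dbdpdw.pdw200._addr.dly_000001 ctl/datacnv54` would print tdw011ft.cpy.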
#01. Login as appsadm or yourself
#02. cdc --> alias 'cd $CNVDATA' --> /app/risk/cnvdata for example ===========================
#03. Ensure the datafile to be converted is stored in d1ebc/... & the filename matches the control file entry.
#03a. cp d0ebc/DBDPDW.PDW200.$ADDR.DLY(0) d1ebc/dbdpdw.pdw200._addr.dly_000001 ========================================================================
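The renaming convention in step #03a (lowercase the mainframe name, translate '$' to '_', & expand the GDG generation '(0)' to the '_000001' suffix) can be sketched as a small helper. This is a hypothetical illustration of the convention, not a supplied utility:

```shell
# Hypothetical sketch of the mainframe-to-unix datafile renaming convention:
# lowercase, '$' becomes '_', GDG generation '(0)' becomes '_000001'.
mf2unix() {
  printf '%s\n' "$1" | tr 'A-Z' 'a-z' | sed -e 's/\$/_/g' -e 's/(0)$/_000001/'
}
mf2unix 'DBDPDW.PDW200.$ADDR.DLY(0)'    # -> dbdpdw.pdw200._addr.dly_000001
```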
#04. Ensure the correct copybook is stored (cpys/tdw011ft.cpy for our example). Make any manual changes in the master copybook library in $TESTLIBS/cpys & then copy over to $CNVDATA (where we are now).
#04a. vi $TESTLIBS/cpys/tdw011ft.cpy <-- modify master copy if required ==============================
#04b. cp $TESTLIBS/cpys/tdw011ft.cpy cpys <-- copy over to $CNVDATA ===================================
#05. Update the control file if necessary to specify correct copybook
#05a. vi ctl/datacnv54 ================
#06. Reload Indexed file used by script gencnv51 to generate 1 job at a time.
#06a. uvsort "fili1=ctl/datacnv54,rcs=159,typ=LST,key1=0(44)\
      ,filo1=ctl/datacnv54I,typ=ISF,isk1=0(44)"
      =======================================================
#07. gencnv51 tdw011ft.cpy dbdpdw.pdw200._addr.dly_000001
     ====================================================
     - generate 1 job to convert 1 datafile from copybook & control file
     - output job stored in pfx2/... & named same as datafile

#08a. cp -f pfx3/* pfx3.bak
      =====================
      - 1st save prior job(s) in case you need to recover coding for
        files with multiple record types
      - OK & easier to backup all files in the directory (vs keying a long filename)

#08b. cp pfx2/dbdpdw.pdw200._addr.dly_000001 pfx3/
      ============================================
      - copy the newly generated job to pfx3 before modification (if required) & execution

#09. vi pfx3/...
     ===========
     - perform any edits required for files with multiple record types
#10. uvcopy pfx3/dbdpdw.pdw200._addr.dly_000001
     ==========================================
     - execute 1 job to convert 1 file from d1ebc to d2asc
Most mainframe conversion sites choose to use the topnode as a sub-directory on the unix/linux system.
#11. cp d2asc/dbdpdw.pdw200._addr.dly_000001 $TESTDATA/dbdpdw/pdw200._addr.dly_000001
     ================================================================================
     - copy 1 converted datafile to the $TESTDATA topnode subdir
#12. cdd <-- might change to $TESTDATA so you can examine datafiles
#13. pdw200x1.ksh <-- execute script (that might use datafile above) ============
#14. joblog1 pdw200x1.ksh <-- OR use script 'joblog1' to capture log ====================
#15. vi joblog/pdw200x1.log <-- examine the console log ======================
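Steps #14 & #15 lend themselves to a quick scripted check of the captured console log. Here is a hypothetical helper (not part of the supplied scripts) that counts log lines mentioning errors or file status codes:

```shell
# Hypothetical post-test helper: count lines in a captured console log
# (e.g. joblog/pdw200x1.log) that mention errors or COBOL file status.
log_errors() {
  grep -Eci 'error|file status' "$1"
}
```

e.g. `log_errors joblog/pdw200x1.log` prints the number of suspect lines (grep exits nonzero when the count is 0, which here is the result you want).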
6A1. Manual changes required for files with multiple record types
6B1. Manual changes required for files with 'occurs' & mixed data types.
     Enhanced in Jan2002 to code loops to convert 90% of occurs situations.
6C1. 'occurs2' job to scan for & report occurs within occurs. Use the report to
     check data conversion jobs (subdir pfx2) for possible manual corrections.
6F1. Expanding data file record layouts (optional)
     - conversion jobs generated automatically, based on new & old versions
       of the copybook
     - unpack packed fields (just remove comp-3 on new copybook)
     - expand date fields, inserting century 19 or 20 by window 50
     - expand any field length & even rearrange field sequence
6G1. Verify data conversion & modify/rerun jobs as required
     - using uvhd & uvhdcob/uvhdc
Here is a copybook with multiple record types, to illustrate the manual changes required. Shown below is the 'cobmap' generated from the copybook (as described previously).
Note the Record Type code in byte 15 (column 16) & the various REDEFINED records corresponding to the Record Type codes.
Please see the uvcopy conversion job generated from this copybook (cobmap):
   page '6A4' - original conversion job BEFORE changes
   page '6A5' - conversion job AFTER changes (insert Record Type tests)
cobmap1  start-end bytes for cobol record fields          200006071213  pg# 0001
cpys/paltran2                                  RCSZ=0063   bgn-end   lth typ

** paltran2 - sample of multi r/t file for uvdoc
 10 pal-tran-rec.
    15 tkey1.
       20 tacct                      pic x(9).             000-0008  009
       20 titem                      pic 9(3).             009-0011  003 n 03
       20 ttxyr                      pic 9(2).             012-0013  002 n 02
       20 ttxhlf                     pic x(1).             014-0014  001
       20 ttrnt                      pic x(1).             015-0015  001
** ttrnt = record type 'A/E/P,L/R,T,D,V' (see redefarea)
       20 ttrndt.
          25 ttrndm                  pic 9(2).             016-0017  002 n 02
          25 ttrndd                  pic 9(2).             018-0019  002 n 02
          25 ttrndy                  pic 9(2).             020-0021  002 n 02
       20 trecno                     pic 9(1).             022-0022  001 n 01
    15 ttrnbt                        pic x(5).             023-0027  005
    15 ttrnsq                        pic 9(4).             028-0031  004 n 04
    15 ttrnsr redefines ttrnsq       pic x(4).             028-0031  004
    15 ttrncd                        pic x(1).             032-0032  001
    15 ttrnin                        pic x(1).             033-0033  001
    15 redefarea                     pic x(29).            034-0062  029
** redefarea redefined below for r/t's 'a/e/p,l/r,t,d,v'
**---------------------------------------------------------
** for pp payment, pp adjustment, pp an types 'P/A/E'
    15 tpae-1 redefines redefarea.
       20 tfppd                      pic s9(7)v99 comp-3.  034-0038  005pns 09
       20 tpenpd                     pic s9(7)v99 comp-3.  039-0043  005pns 09
       20 tlvypd                     pic s9(8)v99 comp-3.  044-0049  006pns 10
       20 tintpd                     pic s9(7)v99 comp-3.  050-0054  005pns 09
       20 tvendp                     pic 9(7).             055-0061  007 n 07
       20 tvenpr redefines tvendp    pic x(7).             055-0061  007
       20 tvendp-r2 redefines tvendp.
          25 tasmt                   pic s9(09) comp-3.    055-0059  005pns 09
          25 filler001               pic x(02).            060-0061  002
       20 tvendp-r3 redefines tvendp.
          25 filler002               pic x(05).            055-0059  005
          25 treason                 pic x(02).            060-0061  002
       20 filler003                  pic x(01).            062-0062  001
**----------------------------------------------------------
** for license payments & adjustments types 'L/R
    15 tpae-2 redefines redefarea.
       20 tlfeep                     pic s9(3)v99 comp-3.  034-0036  003pns 05
       20 tcntl                      pic x(6).             037-0042  006
       20 tvendl                     pic 9(7).             043-0049  007 n 07
       20 tvenlr redefines tvendl    pic x(7).             043-0049  007
       20 tlfee-r                    pic s9(03)v99 comp-3. 050-0052  003pns 05
       20 tuser-r                    pic x(08).            053-0060  008
       20 tlicf                      pic x(01).            061-0061  001
       20 treason-r                  pic x(01).            062-0062  001
**----------------------------------------------------------
** proration record trans type 'T'
    15 tpae-3 redefines redefarea.
       20 tfpen                      pic s9(07)v99 comp-3. 034-0038  005pns 09
       20 tppen                      pic s9(07)v99 comp-3. 039-0043  005pns 09
       20 tlevy                      pic s9(08)v99 comp-3. 044-0049  006pns 10
       20 tint                       pic s9(07)v99 comp-3. 050-0054  005pns 09
       20 tprtxper                   pic 9(02).            055-0056  002 n 02
       20 tnwtxper                   pic 9(02).            057-0058  002 n 02
       20 tterm                      pic x(04).            059-0062  004
**----------------------------------------------------------
** trans type 'D': decal data
    15 tpae-4 redefines redefarea.
       20 tdecal-d                   pic x(06).            034-0039  006
       20 tuser-d                    pic x(08).            040-0047  008
       20 filler004                  pic x(15).            048-0062  015
**----------------------------------------------------------
** trans type 'V': tax relief credit
    15 tpae-5 redefines redefarea.
       20 ttrncd-v                   pic x(01).            034-0034  001
       20 ttxrelief-v                pic s9(09)v99 comp-3. 035-0040  006pns 11
       20 tuser-v                    pic x(08).            041-0048  008
       20 tint-v                     pic s9(07)v99.        049-0057  009 ns 09
       20 filler005                  pic x(05).            058-0062  005
*RCSZ=0063                                                           0063
opr='ipaltran paltran - generated by cobmap1,uvdata51,uvdata52'
uop=q0
was=a25000b25000
fili1=d1ebc/ipaltran,rcs=00063,typ=RSF
filo1=d2asc/ipaltran,rcs=00063,typ=ISFl1,isks=(0,23n)
@run
        opn    all
loop    get    fili1,a0                skp> eof
        mvc    b0(00063),a0            move rec to outarea before field prcsng
        tra    b0(00063)               translate entire outarea to ASCII
# ---   <-- insert R/T tests here for redefined records
        mvc    b34(21),a34             packed tfppd:tintpd
        mvc    b55(5),a55              packed tasmt
        skp    put1
# ---   * redef, code r/t test, ok if char/num unsign
# ---   * redef, code r/t test, ok if char/num unsign
typ__   mvc    b34(3),a34              packed tlfeep
        mvc    b50(3),a50              packed tlfee-r
        skp    put1
# ---   * redef, code r/t test, ok if char/num unsign
typ__   mvc    b34(21),a34             packed tfpen:tint
        skp    put1
# ---   * redef, code r/t test, ok if char/num unsign
# ---   * redef, code r/t test, ok if char/num unsign
typ__   mvc    b35(6),a35              packed ttxrelief-v
        trt    b49(9),$trtsea          num-sign tint-v
        skp    put1
#
put1    put    filo1,b0
        skp    loop
eof     cls    all
        eoj
opr='ipaltran paltran - generated by cobmap1,uvdata51,uvdata52'
uop=q0
was=a25000b25000
fili1=d1ebc/ipaltran,rcs=00063,typ=RSF
filo1=d2asc/ipaltran,rcs=00063,typ=ISFl1,isks=(0,23n)
@run
        opn    all
loop    get    fili1,a0                skp> eof
        mvc    b0(00063),a0            move rec to outarea before field prcsng
        tra    b0(00063)               translate entire outarea to ASCII
#------------------------------------------------------------------------
# ---   <-- insert R/T tests here for redefined records
# R/T tests inserted by OT June 1/00
        tst    b15(1),'typPAE'         PP Pay, Adjust, Exon ?
        skp=   typPAE
        tst    b15(1),'typLR'          License, Lic Adj ?
        skp=   typLR
        cmc    b15(1),'typT'           Proration ?
        skp=   typT
        cmc    b15(1),'typD'           Decal data ?
        skp=   put1                    <-- no packed for type D
        cmc    b15(1),'Vtyp'           Tax Relief Credit ?
        skp=   typV
# invalid R/T - show record data, errmsg,& go output (translate only)
#             - assuming no packed or signed fields
        msg    b0(63)
        msgw   'Invalid R/T col 16 not P/A/E,L/R,T,D,V: assume all char'
        skp    put1
#---------------------------------------------------------------------
typPAE  mvc    b34(21),a34             packed tfppd:tintpd
        mvc    b55(5),a55              packed tasmt
        skp    put1
#---------------------------------------------------------------------
typLR   mvc    b34(3),a34              packed tlfeep
        mvc    b50(3),a50              packed tlfee-r
        skp    put1
#---------------------------------------------------------------------
typT    mvc    b34(21),a34             packed tfpen:tint
        skp    put1
#---------------------------------------------------------------------
typV    mvc    b35(6),a35              packed ttxrelief-v
        trt    b49(9),$trtsea          num-sign tint-v
        skp    put1
#---------------------------------------------------------------------
put1    put    filo1,b0
        skp    loop
eof     cls    all
        eoj
Occurs data including a mixture of character, packed, or unpacked signed data may require manual changes to the uvcopy jobs which are generated automatically from the COBOL copybooks (via cobmap1, uvdata51, & uvdata52).
The conversion was enhanced in Dec 2001, so manual changes are no longer required in most cases, but may be required for more complex situations. Jobs with a single level of occurs will generate correctly, but jobs with multiple levels of occurs should be checked for correctness.
We will illustrate several variations of occurs with same data or mixed data. You should study & understand these examples so you will be able to inspect your own conversion jobs & recognize when manual corrections are required.
Please see the 'occurs2' utility on page '6B3' which will scan for & report occurs within occurs, so that you can check the corresponding data conversion jobs in pfx2 for possible manual corrections.
ex#1, ex#2, ex#3, ex#4, ex#5, ex#5B
These examples begin 3 pages ahead --->
The operating instructions to generate uvcopy jobs from copybooks were previously presented on page '4G1' (all copybooks) & '5B1' (1 at a time).
Here is a brief review of the operating instructions, using the 1st of the 5 occurs examples (see next page).
#1. uvcopy cobmap1,fili1=cpys/apxr2681.cpy,filo1=maps/apxr2681 ========================================================== - convert the COBOL copybook into the 'cobmap' (record layout)
#2. uvcopy uvdata51,fili1=maps/apxr2681,filo1=pfx1/apxr2681 ====================================================== - convert the cobmap (record layout) into the uvcopy job
A 3rd step (uvdata52) is required to insert the correct data filenames as the uvcopy job is copied from pfx1 to pfx2. This requires the control file to relate the copybook name to the data filename. Since the control file has all names, uvdata52 is usually used to generate all jobs for all copybooks.
If you have already prepared the control file, you can use the 'gencnv51' script to run all 3 steps. For example, if the datafile name for the apxr2681 copybook were 'sales.product.master':
gencnv51 apxr2681 sales.product.master =====================================
This would generate the conversion job in subdir pfx2 complete with correct data filenames.
For the 5 occurs examples we will show only the maps & the pfx1 version of the generated job (since the pfx2 version contains the same instructions).
See the 5 occurs examples on page '6B4' to '6B13' --->
This job scans the record layout maps for occurs within occurs & creates a report including the relevant lines. These situations (occurs within occurs) may require manual changes to the data conversion jobs generated by the cobmap1/uvdata51/uvdata52 procedures.
The following assumes the COBOL copybooks are in /p1/testlibs/cpys. COBOL copybooks must be converted to 'cobmaps' via cobmap1 (see COBaids.doc). If this has already been performed, you can skip steps #2 & #3 below.
#1. cd /p1/testlibs - change to source code directory
#2. mkdir maps - make a directory to receive 'cobmaps'
#3. uvcopyx cobmap1 cpys maps uop=q0i7t2p0 - convert COBOL copybooks to cobmaps
#4. mkdir tmp (or rm tmp/*) - make tmp dir for output reports
#5. uvcopyx occurs2 maps tmp uop=q0i7 =================================
#6. cat tmp/* >rpts/occurs2.rpt - combine the output reports into 1 file
#7. uvlp12 rpts/occurs2.rpt
0007   05 m-e-npa-code-area2 occurs 50 times.
          10 m-e-npa-area2            pic x(3).              0000 0002 003
0010      10 monthly-equip-line2 occurs 5 times.
             15 monthly-equip-bucket2 pic s9(11)v99 comp-3.  0003 0009 007pns13
0014   EOF: maps/armmea21
0007   05 m-e-npa-code-area2 occurs 50 times.
          10 m-e-npa-area2            pic x(3).              0000 0002 003
          10 m-e-npa-area2-sum        pic s9(11)v99 comp-3.  0003 0009 007pns13
0011      10 monthly-equip-line2 occurs 5 times.
             15 monthly-equip-code2   pic x(3).              0010 0012 003
             15 monthly-equip-bucket2 pic s9(11)v99 comp-3.  0013 0019 007pns13
0016   EOF: maps/armmea22
Use the report to check the data conversion jobs (in /p1/testlibs/pfx2) for possible manual changes required. Cobmap1 & uvdata51 were enhanced in Jan 2002 to generate code loops to convert occurs data properly most of the time, but you should check for possible exceptions.
This example illustrates 'occurs same data' followed by 'occurs mixed data'.
For 'occurs same data PACKED' cobmap1 calculates the total length for all contiguous packed fields & uvdata51 will generate 1 'mvc' (no instruction loop required).
For 'occurs MIXED data' uvdata51 will generate an instruction code loop, which generates correctly, and no manual changes are required.
cobmap1  start-end bytes for cobol record fields          200202041811  pg# 0001
cpys/apxr2681.cpy   crbc2680-data-are          RCSZ=16149   bgn   end   lth typ

 01 crbc2680-data-area.
    03 crbc2680-key-data.
       05 ws-num-rpt                 pic x(10).            0000  0009  010
       05 ws-pe-center               pic x(01).            0010  0010  001
       05 ws-level-type              pic x(02).            0011  0012  002
       05 ws-dte-sub-key             pic 9(06).            0013  0018  006 n  6
*BGNOCCURSS:--p:00021*00006=00126:00019-00144:1:
       05 ws-cyc-occs occurs 21.
          10 ws-cyc-prcs-stmp        pic 9(11) comp-3.     0019  0024  006pn 11
*ENDOCCURSS:--p:00021*00006=00126:00019-00144:1:
    03 crbc2680-first-time-flag      pic 9.                0145  0145  001 n  1
    03 ws-crbc2680-dte-mo-prcs       pic 99.               0146  0147  002 n  2
    03 ws-crbc2680-cd-pe             pic x.                0148  0148  001
    03 ws-crbc2680-save-area.
       05 ws-type-cust.
*BGNOCCURSM:c-p:01000*00016=16000:00149-16148:1:
          10 ws-prod-fncl occurs 1000 times.
             15 ws-agg-fncl          pic x(4).             0149  0152  004
             15 ws-qty-prod-fncl comp-3 pic s9(9).         0153  0157  005pns  9
             15 ws-amt-prod-fncl comp-3 pic s9(11)v99.     0158  0164  007pns 13
*ENDOCCURSM:c-p:01000*00016=16000:00149-16148:1:
*RCSZ=16149                                                           16149
See the uvcopy job (generated from above cobmap) on the next page --->
opr='JOBNAME apxr2681 - genby: cobmap1,uvdata51,uvdata52'
uop=q0
was=a33000b33000
fili1=d1ebc/apxr2681,rcs=16149,typ=RSF
filo1=d2asc/apxr2681,rcs=16149,typ=RSF
@run
        opn    all
loop    get    fili1,a0                skp> eof
        mvc    b0(16149),a0            move rec to outarea before field prcsng
        tra    b0(16149)               translate entire outarea to ASCII
# ---   <-- insert R/T tests here for redefined records
#BGNOCCURSS:--p:00021*00006=00126:00019-00144:1:
        mvc    b19(126),a19
#BGNOCCURSS:--p:00021*00006=00126:00019-00144:1:
#ENDOCCURSS:--p:00021*00006=00126:00019-00144:1:
#
#BGNOCCURSM:c-p:01000*00016=16000:00149-16148:1:
        mvn    $ra,0
        mvn    $rj,0
man010  nop
        mvc    ba153(12),aa153         000*012pns ws-qty-prod-fncl:ws-amt-prod-fncl
        add    $ra,00016
        add    $rj,00001
        cmn    $rj,01000               skp< man010
#ENDOCCURSM:c-p:01000*00016=16000:00149-16148:1:
#
put1    put    filo1,b0
        skp    loop
eof     cls    all
        eoj
mvc b19(126),a19 021*006pn ws-cyc-prcs-stmp
Note that 'OCCURSSAMEDATA' data does not require generation of an instruction loop. Instructions are generated with the total length (21*6 = 126 bytes) & without the necessity of register addressing (b19 vs ba19).
mvc ba153(12),aa153 000*012pns ws-qty-prod-fncl:ws-amt-prod-fncl
Note that 'OCCURSMIXED' data requires instruction loop to be generated. Instruction operands are generated with register addressing (rgstr 'a' here). Instructions are generated with the length of contiguous packed fields (5+7 = 12 bytes).
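The register loop is equivalent to stepping the field offset by the occurs entry length (16) on each pass. The first few effective addresses can be reproduced with plain shell arithmetic (an illustration only - base offset 153 & the lengths are from the apxr2681 example):

```shell
# Reproduce the effective addresses of the first 3 (of 1000) loop passes:
# base offset 153, occurs entry length 16, contiguous packed length 12.
occurs_addrs() {
  ra=0 j=0
  while [ $j -lt 3 ]; do
    echo "mvc b$((153 + ra))(12),a$((153 + ra))"
    ra=$((ra + 16)) j=$((j + 1))
  done
}
occurs_addrs    # -> b153, b169, b185: the addresses 'ba153' resolves to
                #    as register 'a' holds 0, 16, 32
```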
This illustrates 'occurs within occurs'.
cobmap1  start-end bytes for cobol record fields          200202041811  pg# 0001
cpys/armmea21.cpy   monthly-equip-are          RCSZ=01900   bgn   end   lth typ

* monthly hold equipment table for the armr report.
 01 monthly-equip-area2.
*BGNOCCURSM:c-p:00050*00038=01900:00000-01899:1:
    05 m-e-npa-code-area2 occurs 50 times.
       10 m-e-npa-area2              pic x(3).             0000  0002  003
*BGNOCCURSS:--p:00005*00007=00035:00003-00037:2:
       10 monthly-equip-line2 occurs 5 times.
          15 monthly-equip-bucket2   pic s9(11)v99 comp-3. 0003  0009  007pns13
*ENDOCCURSS:--p:00005*00007=00035:00003-00037:2:
*ENDOCCURSM:c-p:00050*00038=01900:00000-01899:1:
*RCSZ=01900                                                            1900
See the uvcopy job (generated from above cobmap) on the next page --->
Here is the uvcopy job generated for the 'occurs within occurs' example above.
opr='JOBNAME armmea21 - genby: cobmap1,uvdata51,uvdata52'
uop=q0
was=a33000b33000
fili1=d1ebc/armmea21,rcs=01900,typ=RSF
filo1=d2asc/armmea21,rcs=01900,typ=RSF
@run
        opn    all
loop    get    fili1,a0                skp> eof
        mvc    b0(01900),a0            move rec to outarea before field prcsng
        tra    b0(01900)               translate entire outarea to ASCII
# ---   <-- insert R/T tests here for redefined records
#
#BGNOCCURSM:c-p:00050*00038=01900:00000-01899:1:
        mvn    $ra,0
        mvn    $rj,0
man010  nop
#BGNOCCURSS:--p:00005*00007=00035:00003-00037:2:
        mvc    ba3(35),aa3             050*005*007pns monthly-equip-bucket2
#ENDOCCURSS:--p:00005*00007=00035:00003-00037:2:
        add    $ra,00038
        add    $rj,00001
        cmn    $rj,00050               skp< man010
#ENDOCCURSM:c-p:00050*00038=01900:00000-01899:1:
#
put1    put    filo1,b0
        skp    loop
eof     cls    all
        eoj
<--- please relate these generated instructions to the cobmap on prior page
Note that the 'pic x(3)' in the outer occurs does not need to generate any instructions, since it is character data & already translated by the 'tra' of the entire record.
The inner occurs data is all packed which forces the outer loop to be coded as 'OCCURSMIXED', which generates an outer instruction loop containing one 'mvc' with total length of inner packed fields:
mvc ba3(35),aa3 050*005*007pns monthly-equip-bucket2 ==========================================================================
This illustrates 'occurs within occurs, both with mixed data (char & packed)'.
cobmap1  start-end bytes for cobol record fields          200202041811  pg# 0001
cpys/armmea22.cpy   monthly-equip-are          RCSZ=03000   bgn   end   lth typ

* monthly hold equipment table for the armr report.
 01 monthly-equip-area2.
*BGNOCCURSM:c-p:00050*00060=03000:00000-02999:1:
    05 m-e-npa-code-area2 occurs 50 times.
       10 m-e-npa-area2              pic x(3).             0000  0002  003
       10 m-e-npa-area2-sum          pic s9(11)v99 comp-3. 0003  0009  007pns13
*BGNOCCURSM:c-p:00005*00010=00050:00010-00059:2:
       10 monthly-equip-line2 occurs 5 times.
          15 monthly-equip-code2     pic x(3).             0010  0012  003
          15 monthly-equip-bucket2   pic s9(11)v99 comp-3. 0013  0019  007pns13
*ENDOCCURSM:c-p:00005*00010=00050:00010-00059:2:
*ENDOCCURSM:c-p:00050*00060=03000:00000-02999:1:
*RCSZ=03000                                                            3000
See the uvcopy job (generated from above cobmap) on the next page --->
Here is the uvcopy job for the cobmap shown above (occurs within occurs, both with mixed data).
opr='JOBNAME armmea22 - genby: cobmap1,uvdata51,uvdata52'
uop=q0
was=a33000b33000
fili1=d1ebc/armmea22,rcs=03000,typ=RSF
filo1=d2asc/armmea22,rcs=03000,typ=RSF
@run
        opn    all
loop    get    fili1,a0                skp> eof
        mvc    b0(03000),a0            move rec to outarea before field prcsng
        tra    b0(03000)               translate entire outarea to ASCII
# ---   <-- insert R/T tests here for redefined records
#
#BGNOCCURSM:c-p:00050*00060=03000:00000-02999:1:
        mvn    $ra,0
        mvn    $rj,0
man010  nop
        mvc    ba3(7),aa3              050*007pns m-e-npa-area2-sum
#
#BGNOCCURSM:c-p:00005*00010=00050:00010-00059:2:
        mvn    $rb,0
        mvn    $rk,0
man020  nop
        mvc    bb13(35),ab13           050*005*007pns monthly-equip-bucket2
        add    $rb,00010
        add    $rk,00001
        cmn    $rk,00005               skp< man020
#ENDOCCURSM:c-p:00005*00010=00050:00010-00059:2:
#
        add    $ra,00060
        add    $rj,00001
        cmn    $rj,00050               skp< man010
#ENDOCCURSM:c-p:00050*00060=03000:00000-02999:1:
#
put1    put    filo1,b0
        skp    loop
eof     cls    all
        eoj
<--- please relate these generated instructions to the cobmap on prior page
Note that the outer loop uses register 'a' to loop thru the packed data fields & register 'j' to count loops & test end. Note that the inner loop uses register 'b' to loop thru the packed fields & register 'k' to count loops & test end.
This illustrates 'occurs within occurs', but all the same (packed) data. The total length is calculated correctly & no manual changes are required.
cobmap1  start-end bytes for cobol record fields          200202041811  pg# 0001
cpys/apr52821.cpy                              RCSZ=02063   bgn   end   lth typ

* subscript definition:
 03 cs-rec-cumltv-rpt-52082.
    05 cs-key-data.
       10 cs-num-rpt                 pic x(10).            0000  0009  010
       10 cs-pe-center               pic x(01).            0010  0010  001
       10 cs-level-type              pic x(02).            0011  0012  002
       10 cs-dte-sub-key             pic 9(06).            0013  0018  006 n  6
*BGNOCCURSS:c--:00021*00006=00126:00019-00144:1:
    05 cs-cyc-occs occurs 21.
       10 cs-cyc-prcs-stmp           pic 9(06).            0019  0024  006 n  6
*ENDOCCURSS:c--:00021*00006=00126:00019-00144:1:
    05 cs-dte-mo-prcs                pic 99.               0145  0146  002 n  2
    05 cs-unappld-ret-ck-pmt         pic s9(11)v99.        0147  0159  013 ns13
    05 cs-pmt-rck-wo-adj             pic s9(11)v99.        0160  0172  013 ns13
*BGNOCCURSS:--p:00003*00630=01890:00173-02062:1:
    05 cs-live-final-bad-debt occurs 3 times.
*BGNOCCURSS:--p:00010*00063=00630:00173-00802:2:
       10 cs-agg-amt-arrs-occ-10 occurs 10 times.
          20 cs-amt-arrs pic s9(11)v99 comp-3 occurs 9.    0173  0179  007pns13
*
*ENDOCCURSS:--p:00010*00063=00630:00173-00802:2:
*ENDOCCURSS:--p:00003*00630=01890:00173-02062:1:
*RCSZ=02063                                                            2063
See the uvcopy job (generated from above cobmap) on the next page --->
Here is the uvcopy job for the cobmap shown above (occurs within occurs, but all fields the same packed data). The total length is calculated correctly & no manual changes are required.
opr='JOBNAME apr52821 - genby: cobmap1,uvdata51,uvdata52'
uop=q0
was=a33000b33000
fili1=d1ebc/apr52821,rcs=02063,typ=RSF
filo1=d2asc/apr52821,rcs=02063,typ=RSF
@run
        opn    all
loop    get    fili1,a0                skp> eof
        mvc    b0(02063),a0            move rec to outarea before field prcsng
        tra    b0(02063)               translate entire outarea to ASCII
# ---   <-- insert R/T tests here for redefined records
#BGNOCCURSS:c--:00021*00006=00126:00019-00144:1:
#ENDOCCURSS:c--:00021*00006=00126:00019-00144:1:
        trt    b147(26),$trtsea        ns cs-unappld-ret-ck-pmt:cs-pmt-rck-wo-adj
#BGNOCCURSS:--p:00003*00630=01890:00173-02062:1:
        mvc    b173(1890),a173
#BGNOCCURSS:--p:00003*00630=01890:00173-02062:1:
#BGNOCCURSS:--p:00010*00063=00630:00173-00802:2:
#ENDOCCURSS:--p:00010*00063=00630:00173-00802:2:
#ENDOCCURSS:--p:00003*00630=01890:00173-02062:1:
put1    put    filo1,b0
        skp    loop
eof     cls    all
        eoj
<--- please relate these generated instructions to the cobmap on prior page
Note that no instruction loops are required since the occurs contain all SAME data & no MIXED data.
trt b147(26),$trtsea ns cs-unappld-ret-ck-pmt:cs-pmt-rck-wo-adj
Note that the 2 signed fields (2*13=26 bytes) are not within any occurs & the 2 fields are combined as is normal.
mvc b173(1890),a173 003*010*063pns cs-amt-arrs
Note that the 2nd & 3rd occurs are nested with packed data, but we still do not require an instruction loop because we can generate 1 'mvc' with the total length of all packed fields within the 2 nested occurs.
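The 1890-byte length of that single 'mvc' is simply the product of the nested occurs counts & the packed field length, which can be checked with shell arithmetic:

```shell
# 3 occurs * 10 occurs * 9 occurs * 7 bytes packed = 1890 bytes,
# the length coded in the generated instruction: mvc b173(1890),a173
echo $((3 * 10 * 9 * 7))    # -> 1890
```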
Here is a 2nd example of occurs within occurs within occurs, all fields same type (packed data). The total length is calculated correctly & no manual changes are required.
cobmap1  start-end bytes for cobol record fields          200202041811  pg# 0001
cpys/apr88ccy.cpy   ipc-r91-83                 RCSZ=01522   bgn   end   lth typ

 01 ipc-r91-83.
    10 c83-num-rpt pic x(10) value 'C9X-R91-83'.           0000  0009  010
*BGNOCCURSS:--p:00002*00756=01512:00010-01521:1:
    10 c83-p2-occs-2 occurs 2 times.
*BGNOCCURSS:--p:00006*00126=00756:00010-00765:2:
       15 c83-p2-occs-11 occurs 6 times.
          20 c83-p2-xfer-fr-cntr     pic 9(7) comp-3.      0010  0013  004pn  7
          20 c83-p2-xfer-fr-amt      pic s9(7)v99 comp-3.  0014  0018  005pns 9
*BGNOCCURSS:--p:00006*00009=00054:00019-00072:3:
          20 c83-p2-xfer-to-occs-11 occurs 6 times.
             25 c83-p2-xfer-to-cntr  pic 9(7) comp-3.      0019  0022  004pn  7
             25 c83-p2-xfer-to-amt   pic s9(7)v99 comp-3.  0023  0027  005pns 9
*ENDOCCURSS:--p:00006*00009=00054:00019-00072:3:
          20 c83-p2-rcvd-by-cntr     pic 9(7) comp-3.      0073  0076  004pn  7
          20 c83-p2-rcvd-by-amt      pic s9(7)v99 comp-3.  0077  0081  005pns 9
*BGNOCCURSS:--p:00006*00009=00054:00082-00135:3:
          20 c83-p2-rcvd-fr-occs-11 occurs 6 times.
             25 c83-p2-rcvd-fr-cntr  pic 9(7) comp-3.      0082  0085  004pn  7
             25 c83-p2-rcvd-fr-amt   pic s9(7)v99 comp-3.  0086  0090  005pns 9
*ENDOCCURSS:--p:00006*00009=00054:00082-00135:3:
*ENDOCCURSS:--p:00006*00126=00756:00010-00765:2:
*ENDOCCURSS:--p:00002*00756=01512:00010-01521:1:
*RCSZ=01522                                                            1522
See the uvcopy job (generated from above cobmap) on the next page --->
Here is the uvcopy job for the cobmap shown above (occurs within occurs, but all fields the same type - packed data). The total length is calculated correctly & no manual changes are required.
opr='JOBNAME apr88ccy - genby: cobmap1,uvdata51,uvdata52'
uop=q0
was=a33000b33000
fili1=d1ebc/apr88ccy,rcs=01522,typ=RSF
filo1=d2asc/apr88ccy,rcs=01522,typ=RSF
@run
        opn    all
loop    get    fili1,a0                skp> eof
        mvc    b0(01522),a0            move rec to outarea before field prcsng
        tra    b0(01522)               translate entire outarea to ASCII
# ---   <-- insert R/T tests here for redefined records
#BGNOCCURSS:--p:00002*00756=01512:00010-01521:1:
        mvc    b10(1512),a10
#BGNOCCURSS:--p:00002*00756=01512:00010-01521:1:
#BGNOCCURSS:--p:00006*00126=00756:00010-00765:2:
#BGNOCCURSS:--p:00006*00009=00054:00019-00072:3:
#ENDOCCURSS:--p:00006*00009=00054:00019-00072:3:
#BGNOCCURSS:--p:00006*00009=00054:00082-00135:3:
#ENDOCCURSS:--p:00006*00009=00054:00082-00135:3:
#ENDOCCURSS:--p:00006*00126=00756:00010-00765:2:
#ENDOCCURSS:--p:00002*00756=01512:00010-01521:1:
put1    put    filo1,b0
        skp    loop
eof     cls    all
        eoj
<--- please relate these generated instructions to the cobmap on prior page
Note that we have 3 nested occurs, but with all packed data, so we can generate only 1 instruction with the total length of 1512 bytes.
mvc b10(1512),a10 #BGNOCCURSS:--p:00002*00756=01512:00010-01521:1: ================================================================================
The uvdata51 utility (which generates the data conversion jobs from cobmaps) provides option 'c1' to generate separate instructions for OCCURSSAMEDATA packed vs the default of generating 1 instruction for all packed fields within the outermost SAMEDATA occurs.
Please compare apr88ccy on this page (generated with option c1) to the version on the previous page (generated without option c1). Note that the separate instructions are generated as #comments & the work is still performed by the 1 mvc generated from the OCCURSSAMEDATA control record.
The option is provided in case you find a complex situation that is not converted correctly by the 1 instruction generated from OCCURSSAMEDATA, possibly because it is within an outer OCCURSMIXEDDATA. But I have not noticed any incorrectly generated jobs yet (based on visual checking of 500 conversion jobs in the arcv system).
opr='JOBNAME apr88ccy - genby: cobmap1,uvdata51,uvdata52'
was=a33000b33000
fili1=d1ebc/apr88ccy,rcs=01522,typ=RSF
filo1=d2asc/apr88ccy,rcs=01522,typ=RSF
@run
     opn   all
loop get   fili1,a0              skp> eof
     mvc   b0(01522),a0          move rec to outarea before field prcsng
     tra   b0(01522)             translate entire outarea to ASCII
# --- <-- insert R/T tests here for redefined records
#BGNOCCURSS:--p:00002*00756=01512:00010-01521:1:
     mvc   b10(1512),a10  #BGNOCCURSS:--p:00002*00756=01512:00010-01521:1:
#BGNOCCURSS:--p:00006*00126=00756:00010-00765:2:
#    mvc   b10(108),a10   002*006*009pn c83-p2-xfer-fr-cntr:c83-p2-xfer-fr-amt
#BGNOCCURSS:--p:00006*00009=00054:00019-00072:3:
#    mvc   b19(648),a19   002*006*006*009pn c83-p2-xfer-to-cntr:c83-p2-xfer-to-am
#ENDOCCURSS:--p:00006*00009=00054:00019-00072:3:
#    mvc   b73(108),a73   002*006*009pn c83-p2-rcvd-by-cntr:c83-p2-rcvd-by-amt
#BGNOCCURSS:--p:00006*00009=00054:00082-00135:3:
#    mvc   b82(648),a82   002*006*006*009pn c83-p2-rcvd-fr-cntr:c83-p2-rcvd-fr-am
#ENDOCCURSS:--p:00006*00009=00054:00082-00135:3:
#ENDOCCURSS:--p:00006*00126=00756:00010-00765:2:
#ENDOCCURSS:--p:00002*00756=01512:00010-01521:1:
put1 put   filo1,b0
     skp   loop
eof  cls   all
     eoj
Note that all packed fields are preserved by 1 instruction generated from the outermost Occurs Same Data Packed control record in the cobmap.
mvc b10(1512),a10 #BGNOCCURSS:--p:00002*00756=01512:00010-01521:1:
Option c1 generates separate instructions (4 in this case) from the packed field groups, but these are #commented out.
This job scans the record layout maps for occurs within occurs & creates a report including the relevant lines. These situations (occurs within occurs) may require manual changes to the data conversion jobs generated by the cobmap1/uvdata51/uvdata52 procedures.
The following assumes the COBOL copybooks are in /p1/testlibs/cpys. COBOL copybooks must be converted to 'cobmaps' via cobmap1 (see COBaids.doc). If this has already been done, you can skip steps #2 & #3 below.
#1. cd /p1/testlibs - change to source code directory
#2. mkdir maps - make a directory to receive 'cobmaps'
#3. uvcopyx cobmap1 cpys maps uop=q0i7t2p0 - convert COBOL copybooks to cobmaps
#4. mkdir tmp (or rm tmp/*) - make tmp dir for output reports
#5. uvcopyx occurs2 maps tmp uop=q0i7
    =================================

#6. cat tmp/* >rpts/occurs2.rpt    - combine all reports into 1 file

#7. uvlp12 rpts/occurs2.rpt        - print the combined report
0007    05 m-e-npa-code-area2        occurs 50 times.
          10 m-e-npa-area2           pic x(3).               0000 0002 003
0010      10 monthly-equip-line2     occurs 5 times.
            15 monthly-equip-bucket2 pic s9(11)v99 comp-3.   0003 0009 007pns13
0014    EOF: maps/armmea21
0007    05 m-e-npa-code-area2        occurs 50 times.
          10 m-e-npa-area2           pic x(3).               0000 0002 003
          10 m-e-npa-area2-sum       pic s9(11)v99 comp-3.   0003 0009 007pns13
0011      10 monthly-equip-line2     occurs 5 times.
            15 monthly-equip-code2   pic x(3).               0010 0012 003
            15 monthly-equip-bucket2 pic s9(11)v99 comp-3.   0013 0019 007pns13
0016    EOF: maps/armmea22
Use the report to check the data conversion jobs (in /p1/testlibs/pfx2) for possible manual changes required. Cobmap1 & uvdata51 were enhanced in Jan 2002 to generate code loops to convert occurs data properly about 90% of the time, but you must check for possible exceptions.
Conversion jobs to unpack all packed fields can be generated automatically (based on new & old copybooks). You can copy/rename the copybook & modify.
There are jobs to automatically unpack packed fields (by removing 'comp-3's), & expand date fields (by inserting century 19 or 20 using a window of 50). You can manually expand any field length & even rearrange field sequence.
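The century-window rule mentioned above (window 50) can be sketched in Python. The function name & pivot parameter below are illustrative only, not the actual uvcopy implementation:

```python
def expand_date(yymmdd, pivot=50):
    """Expand a 6-digit yymmdd date to 8 digits using a fixed century
    window: 2-digit years below the pivot get century 20, all others
    get century 19 (an illustration of 'window 50')."""
    yy = int(yymmdd[:2])
    century = "20" if yy < pivot else "19"
    return century + yymmdd
```

So "491231" expands into century 20 & "501231" into century 19 with the default pivot of 50.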
** Directory Setup to Unpack fields **

/p4
:-----cnvdata
:     :-----d1ebc        <-- mainframe DATA loaded from tapes
:     :     :-----apay
:     :     :-----arcv
:     :     :-----ordr   <-- use ordr example basic conversion
:     :-----d2asc        <-- data AFTER conversion to ASCII
:     :     :-----apay
:     :     :-----arcv
:     :     :-----ordr   <-- ordr will copy to /p2/proddata/ordr
:     :-----d3unpk       <-- data AFTER conversion to all UNPACKED
:     :     :-----            (only ordr will be unpacked)
:     :     :-----ordr   <-- will copy to /p2/proddata/ordr
/p2
:-----testdata
:     :-----dtls
:     :-----mstr         <-- master files copied from conversion dirs
:     :-----reports
:     :-----tape
:     :-----ftp
:     :-----upsi
:     :-----wrk
The above assumes a separate filesystem is available for the conversion steps:

   d1ebc  ->  d2asc  ->  d3unpk
   EBCDIC ->  ASCII  ->  unpacked
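As a rough illustration of setting up the conversion tree shown above (the /tmp/cnvdemo base & stage/application names are examples only, standing in for the real /p4 paths):

```python
from pathlib import Path

# Build the 3-stage conversion tree: one subdir per application
# under each of the EBCDIC, ASCII, and unpacked stage directories.
base = Path("/tmp/cnvdemo")
for stage in ("d1ebc", "d2asc", "d3unpk"):
    for app in ("apay", "arcv", "ordr"):
        (base / "cnvdata" / stage / app).mkdir(parents=True, exist_ok=True)
```

In practice a few `mkdir -p` commands accomplish the same thing; the point is simply that every application subdir must exist in every stage before the generated conversion jobs run.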
You could do these conversions in the same filesystem as the ordr files (/p2). I suggest setting up subdirs d1ebc, d2asc, d3unpk as shown on next page --->
This alternative places the conversion directory in the same filesystem as the resulting converted data file directory (named 'ordr' here). This is more convenient, but requires more space for the duplicate files.
/p4
:-----cnvdata
:     :-----d1ebc    - original IBM EBCDIC datafiles
:     :-----d2asc    - converted to ASCII, packed fields preserved
:     :-----d3unpk   - optional, used if unpacking packed fields
:     :-----mstr     - master files copied from conversion dirs
The following illustrates the essential libraries required to generate the uvcopy conversion jobs (starting from the COBOL copybooks).
/p1
:-----testlibs
:     :-----ordr
:     :     :-----cpys   - COBOL copybooks
:     :     :-----maps   - copybook maps (created by cobmap1)
:     :     :-----pfx1   - convert skeleton jobs created from maps
:     :     :-----pfx2   - convert jobs completed with real filenames
:     :     :-----ctl    - LISTCAT control file stored here (listcat0)
:     :                    supplies filenames vs copybook names
:     :     :-----cpyu   - COBOL copybooks unpacked layouts
:     :     :-----mapu   - copybook maps for unpacked layouts
:     :     :-----mapsI  - Indexed versions of maps above (packed)
:     :     :-----pfr1   - unpack skeleton jobs created from maps
:     :     :-----pfr2   - unpack jobs completed with real filenames
#0. login ordr --> /p1/testlibs/ordr
#1. uvcopyx unpack1 cpys cpyu uop=q0i7
    ==================================
    - create new copybooks with 'comp-3's removed

#2. vi cpyu/??????
    ==============
    - optional other changes ??
    - increase lengths on any other fields desired
    - increasing dates will automatically insert century

#3a. uvcopyx cobmap1 cpys maps uop=q0i7p0
     ====================================
     - create record-layouts from copybooks, for ORIGINAL data files
     - this step already done on initial conversion EBCDIC to ASCII

#3b. uvcopyx cobmap1 cpyu mapu uop=q0i7p0
     ====================================
     - create record-layouts from copybooks, for UNPACKED data files

#4a. rmzf mapu    - remove zero length files (for any procedure copybooks)
#4b. rmzf maps

#5. uvcopyx reform1 maps mapsI uop=q0i7
    ===================================
    - load indexed files from original copybooks
    - so reform2 can lookup original field defs from new field defs

#6. uvcopyxr reform2 mapu pfr1 mapsI uop=q0i7
    =========================================
    - generate uvcopy skeleton reformat jobs from copybook layouts

#7. uvcopy uvdata52,fili1=ctl/ctlfile1,fild2=pfr1,fild3=pfr2,uop=????
    ==================================================================
    - complete the uvcopy skeleton job, by encoding the I/O data path
      directories (& indexed keys if any)
    - you will be prompted for the I/O data directories (take defaults)
      enter input data path default  --> d1ebc
      enter output data path default --> d2asc
    - see the options on page '4G6'
>>End of REFORMjobs Generation, see EXECUTION on the next page --->
Note |
#8. export CNVDATA=/p4/cnvdata/d2asc     - setup path to input data files
    ================================       (ASCII but may be packed)

    export CNVDATA=/p4/cnvdata/d3unpk    - setup path to output data files
    =================================      (for unpacked files)

#9a. uvcopyxx 'pfr2/*'       - execute all uvcopy jobs to convert all files
     =================

#9b. uvcopyxx 'pfr2/gl*'     - execute all reformat jobs for GL data files
     ===================

#9c. uvcopy pfr2/glmaster    - execute a specific job to convert a specific file
     ====================

#10. cp /p4/d3unpk/ordr/* /p2/proddata/ordr
     ======================================
     - initial copy of converted data to ordr subdir for testing
     - d3unpk allows us to refresh files during testing period

#8. cd /p2/cnvdata/ordr      - change to data directory
    ===================

#9. export CNVDATA=/home/mvstest/cnvdata    - setup path to data subdirs & files
    ====================================      /d2asc input & /d3unpk output
                                              - not required if in profile

#10. uvcopyxx '$RUNLIBS/pfr2/*'    - execute all uvcopy jobs to convert all files
     ==========================

#11. cp d3unpk/* ordr        - initial copy converted data to ordr subdir
     ================        - repeat to refresh files during testing period
The 'uvhd' utility is invaluable for this type of work, & 'uvhdcob' is even better (it shows COBOL fieldnames beside the data). First we will present an example of uvhd, based on the 'ipalvval' file (converted on the previous page).

We would first look up the record size of ipalvval in our listing of the LISTCAT info (ctl/ctlfile1) created earlier; see page '4E2' of this writeup.
The list states rcs=127, & this works for the EBCDIC mainframe data file, but for the converted ASCII data we must specify rcs=128, because this is an indexed file, & indexed files for Unix/Linux MicroFocus COBOL have 1 extra byte at the end of each record (x'0A' = good record, x'00' = deleted record).
Also note that for the EBCDIC file, we will use the 'a' option (r127a) to translate the character line to ASCII. We can still see the EBCDIC codes on the 'zones' & 'digits' lines (see next page).
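A minimal sketch of the trailer-byte rule described above (x'0A' = good record, x'00' = deleted). The function name is hypothetical, & a real MicroFocus indexed file also contains index structures this does not attempt to read:

```python
def count_indexed_records(path, rcs):
    """Count good (x'0A') vs deleted (x'00') records in the data portion
    of a fixed-length indexed file, where rcs already includes the
    1 extra trailer byte at the end of each record."""
    good = deleted = 0
    with open(path, "rb") as f:
        while True:
            rec = f.read(rcs)
            if len(rec) < rcs:
                break                 # ignore any partial tail
            if rec[-1] == 0x0A:
                good += 1
            elif rec[-1] == 0x00:
                deleted += 1
    return good, deleted
```

For ipalvval you would call it with rcs=128 (the 127 data bytes plus the trailer byte).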
#1a. login: ordr    --> homedir /p1/testlibs/ordr (apay,arcv,ordr,etc)
#1b. . aliass       (dot aliass required if logging)
#2. cd /p4/cnvdata/d1ebc - change to d1ebc superdir
#3. uvhd ordr/ipalvval r127a    - examine the mainframe EBCDIC data file
    ========================
    --> i1 <--                  - print the 1st record

#4. cd /p4/cnvdata/d2asc        - change to the Unix/Linux superdir

#5. uvhd ordr/ipalvval r128     - examine the Unix/Linux ASCII data file
    =======================
    --> i1 <--                  - print the 1st record
See the EBCDIC & ASCII print outs on the next page --->
#2. cd /p4/cnvdata/d1ebc        - change to d1ebc superdir
#3. uvhd ordr/ipalvval r127a    - examine the mainframe EBCDIC data file
    --> i1 <--                  - print the 1st record
/p3/cnvdata/d1ebc/ordr/ipalvval fsz=2805430 rsz=127 totrecs=22090 current=1
             10        20        30        40        50        60
 r#  1
 b#  0 0123456789012345678901234567890123456789012345678901234567890123
     0 0169   055      AMERICAN       RAMBLER SED 4D 440    0.....01J22
       FFFF444FFF444444CDCDCCCD4444444DCDCDCD4ECC4FC4FFF4444F00010FFDFF
       0169000055000000145993150000000914235902540440440000000000C01122
    64 2K
       FD4444444444444444444444444444444444444444444444444444444444444
       220000000000000000000000000000000000000000000000000000000000000
#4. cdd                         - change to ASCII data dir (using alias)
#5. uvhd ordr/ipalvval r128     - examine the Unix/Linux ASCII data file
    --> i1 <--                  - print the 1st record
/p3/cnvdata/d2asc/ordr/ipalvval fsz=2827520 rsz=128 totrecs=22090 current=1
             10        20        30        40        50        60
 r#  1
 b#  0 0123456789012345678901234567890123456789012345678901234567890123
     0 0169   055      AMERICAN       RAMBLER SED 4D 440    0.....01q22
       3333222333222222444544442222222544444525442342333222230001033733
       01690000550000001D52931E000000021D2C5203540440440000000000C01122
    64 2r                                                             .
       3722222222222222222222222222222222222222222222222222222222222220
       220000000000000000000000000000000000000000000000000000000000000A
Please relate this to previous listings of the cobmap maps/palvval, and the conversion job pfx2/ipalvval. Relevant conversion instructions are:
tra   b0(00127)           translate entire outarea to ASCII
mvc   b54(5),a54          packed wasmt
trt   b60(6),$trtsea      num-sign wdslpc:wdslva
The 'tra' instruction translates the entire record, & the packed fields are then overlaid with their original bytes from the input area (since translation would destroy them). The translation of unpacked signed fields is corrected by the 'trt' instruction with translate table '$trtsea'.
input  fields: '1J222K' = 11- & 2222-  (EBCDIC overpunched neg fields)
output fields: '1q222r' = 11- & 2222-  (ASCII overpunched neg fields)
In EBCDIC, negative signs are '}' for 0 & 'J'-'R' for 1-9.
In ASCII, negative signs are 'p'-'y' for 0-9.
Without the 'trt' we would be left with the EBCDIC-style sign characters '}' & 'J'-'R' from the standard 'tra' ASCII translate, which would be all wrong for MicroFocus COBOL.
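The sign correction can be sketched in Python. The mapping below follows the '}'/'J'-'R' to 'p'-'y' rule just described; the function is an illustration of the idea, not the actual $trtsea translate table:

```python
# EBCDIC negative overpunch chars: '}' = 0-, 'J'..'R' = 1- .. 9-
# ASCII  negative overpunch chars: 'p'..'y' = 0- .. 9-
EBC_NEG = "}JKLMNOPQR"
ASC_NEG = "pqrstuvwxy"
FIX_SIGN = str.maketrans(EBC_NEG, ASC_NEG)

def fix_zoned_sign(field):
    """Correct the sign byte (last character) of a zoned-decimal field
    after a plain EBCDIC->ASCII character translate, mimicking the
    effect of the 'trt ... $trtsea' instruction on negative fields."""
    return field[:-1] + field[-1].translate(FIX_SIGN)
```

Applied to the example above, '1J' becomes '1q' (11-) & '222K' becomes '222r' (2222-); fields whose last byte is a plain digit are left unchanged.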
datafile=d1ebc/custmas1 bytes=8192 rsz=256 totrecs=32 current=1
cobmapfile=maps/custmas1 today=199902151608 datalastmod=1999011612
rec# 1
fieldname        occurs bgn end typ<------ data (hex if typ=p/b) ----->
cm-num                    0   5 n   132588
cm-delete                 6   9
cm-name                  10  34     GEECOE GENERATOR SERVICES
cm-adrs                  35  59     UNIT 170 - 2851 SIMPSON
cm-city                  60  75     RICHMOND
cm-prov                  77  78     BC
cm-postal                80  89     V6X2R2
cm-phone                 90 101     604-278-4488
cm-contact              102 119
cm-thisyr-sales  012    120 124pns  000004680C
cm-lastyr-sales  012    180 184pns  000005360C
cm-thisyr-xft           240 244pns  4120202038
cm-lastyr-xft           245 249pns  3731303036
null=next,r#=rec,b#=byte,+/-recs,s=search,u=update,p=print,w=write
,q=quit,?=help -->
This utility is excellent for verifying converted data, especially if you have unpacked packed fields since this changes record layouts and there is more possibility of errors. You can see at a glance whether the data fields agree with the COBOL copybook definitions.
This utility requires both the datafile name & the COBOL copybook 'map', which would be awkward for the mainframe to Unix/Linux file designs where data & libraries are in separate file systems, for example:
uvhdcob /p2/proddata/ordr/ipaltran /p2/prodlibs/ordr/paltran ============================================================
This awkward problem is solved by the 'uvhdc' script, which looks up a control file to determine the copybook for the specified data file. The uvhdc script uses $RUNLIBS & $RUNDATA (defined in the user's profile) to access the control file, the copybook map, & the datafile. This means we can display any data file from anywhere, without knowing the copybook name, using a very short command, for example:
uvhdc ipaltran <-- run 'uvhdcob' from anywhere w/o specifying copybook ==============
uvhd ipaltran <-- run 'uvhd' from anywhere w/o specifying pathname ==============
The 'uvhdc' script is listed at uvhdcob.htm#J1. Before you can use it, you must load the indexed control file. A sample control file is listed at uvhdcob.htm#J2; load it into an indexed file as follows:
uvcopy loadctlI,fili1=ctl/ctlfile1,filo1=ctl/ctlfile1I ======================================================
7A1. | Introduction to Variable Length RDW files |
7B1. | testRDWe test file to demo RDW files |
7C1. | Investigating RDW files with 'uvhd' |
7D1. | converting EBCDIC RDW files to ASCII - with 'uvhd' |
- uvhd interactive utility, easy to use |
7E1. | converting EBCDIC RDW files to ASCII - with 'uvcp' |
7F1. | converting EBCDIC RDW files to ASCII - with 'uvcopy varfix11' |
- varfix11 batch utility, better for high volume conversions |
7G1. | creating table summary stats of record sizes in variable length files |
uvcopy job 'varstat1' Operating Instructions & sample report |
7X0. | Listings of uvcopy jobs used in Part 7 |
- varstat1, varfix11, | |
- LST2RDW1, RDW2LST1, varfix11 |
RDW (Record Descriptor Word) variable length files are often used to FTP variable length EBCDIC files from the mainframe to unix/linux/windows systems. uvcp provides for 2 types of RDW files - standard & non-standard (option z2).
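The two RDW layouts can be sketched with a small Python reader. This is an illustration of the prefix rules described in the listings later in this Part (the standard 4-byte prefix carries a big-endian length that includes the prefix; the non-standard z2 form has a 2-byte prefix whose length excludes the prefix):

```python
import struct

def read_rdw(path, z2=False):
    """Yield the data portion of each record in an RDW file.
    Standard RDW: 4-byte prefix, big-endian length in the 1st 2 bytes,
    length INCLUDES the prefix (bytes 3 & 4 are x'0000', unused).
    Non-standard (z2): 2-byte prefix, length EXCLUDES the prefix."""
    hdr = 2 if z2 else 4
    with open(path, "rb") as f:
        while (p := f.read(hdr)):
            if len(p) < hdr:
                break                         # partial prefix at EOF
            (rsz,) = struct.unpack(">H", p[:2])
            datalen = rsz if z2 else rsz - hdr
            yield f.read(datalen)
```

Note that the record data is still EBCDIC at this point; the reader only walks the prefixes, which is exactly why ordinary Unix text tools cannot process these files.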
dat1/testLST |
|
dat1/testRDW |
|
dat1/testRDWe |
|
We will illustrate how to convert RDW EBCDIC files to text or fixed length files using uvhd, uvcp, & uvcopy.

We will first investigate the testRDWe file using 'uvhd'. We cannot use Unix tools (vi, lp, etc) since the file is EBCDIC & contains binary record sizes.
uvhd dat1/testRDWe as3    <-- examine the RDW test file
======================
- option 'a' translates character line to ASCII
- option 's3' to space between scale & 3 line groups
             10        20        30        40        50        60
 r#  1 0123456789012345678901234567890123456789012345678901234567890123
     0 ....DELL00 - Dell Inc.  ....HP0000 - Hewlett Packard....IBM000 -
       0100CCDDFF464C8994C984440100CDFFFF464C8A98AA4D8898980200CCDFFF46
       080045330000045330953B000C008700000008563533071321940C0092400000
    64  International Business Machines....MFC000 - Micro Focus COBOL
       4C9A8998A899894CAA898AA4D888898A0200DCCFFF464D88994C98AA4CDCDD44
       0953595139651302429552204138955200004630000004939606634203626300
   128 ....MS0000 - Microsoft Corp.....REDHAT - Red Hat Linux  ....SUN0
       0100DEFFFF464D8899A98A4C99940100DCCCCE464D884C8A4D89AA440200EEDF
       0C0042000000049396266303697B0C0095481300095408130395470004002450
   192 00 - Sun Microsystems Ltd   ....UVSI00 - UV Software Inc.
       FF464EA94D8899AAAA89A4DA84440200EEECFF464EE4E98AA8984C984444
       00000245049396282354203340000000452900000450266361950953B000
The entire file is only 252 bytes & contains 8 short variable length records.
You cannot display RDW files with 'vi' because vi cannot handle binary data. There are no LineFeeds to separate the records, so the entire file appears as 1 long line (unless there just happened to be an x'0A' in a length field, which vi would interpret as a LineFeed).
Option 'z' would tell uvhd to look for the 'RDW' record prefixes & show 1 record at a time, but we did not specify option 'z' above, and the default is to show any file in 256 byte blocks (4 groups of three 64-byte lines: characters, zones, & digits).
See the next page, where we will specify option 'z' to show RDW files 1 record at a time. You can press enter to browse forward until EOF is reached. Then you could enter '1' to return to the beginning of the file.
uvhd dat1/testRDWe za    <-- display RDW file using option 'z'
=====================
- option 'a' to display character line in ASCII
- null entries browse forward til EOF reached
             10        20        30        40        50        60
 r#  1 0123456789012345678901234567890123456789012345678901234567890123
     0 ....DELL10 - Dell Inc.
       0100CCDDFF464C8994C98444
       080045331000045330953B00

             10        20        30        40        50        60
 r#  2 0123456789012345678901234567890123456789012345678901234567890123
    24 ....HP0010 - Hewlett Packard
       0100CDFFFF464C8A98AA4D889898
       0C00870010000856353307132194

             10        20        30        40        50        60
 r#  3 0123456789012345678901234567890123456789012345678901234567890123
    52 ....IBM010 - International Business Machines
       0200CCDFFF464C9A8998A899894CAA898AA4D888898A
       0C009240100009535951396513024295522041389552

             10        20        30        40        50        60
 r#  4 0123456789012345678901234567890123456789012345678901234567890123
    96 ....MFC010 - Micro Focus COBOL
       0200DCCFFF464D88994C98AA4CDCDD44
       00004630100004939606634203626300

             10        20        30        40        50        60
 r#  5 0123456789012345678901234567890123456789012345678901234567890123
   128 ....MS0010 - Microsoft Corp.
       0100DEFFFF464D8899A98A4C9994
       0C0042001000049396266303697B

             10        20        30        40        50        60
 r#  6 0123456789012345678901234567890123456789012345678901234567890123
   156 ....REDH10 - Red Hat Linux
       0100DCCCFF464D884C8A4D89AA44
       0C00954810000954081303954700

             10        20        30        40        50        60
 r#  7 0123456789012345678901234567890123456789012345678901234567890123
   184 ....SUN010 - Sun Microsystems Ltd
       0200EEDFFF464EA94D8899AAAA89A4DA8444
       040024501000024504939628235420334000

             10        20        30        40        50        60
 r#  8 0123456789012345678901234567890123456789012345678901234567890123
   220 ....UVSI10 - UV Software Inc.
       0200EEECFF464EE4E98AA8984C984444
       0000452910000450266361950953B000
Note |
|
On this page we will show you how to convert EBCDIC RDW files to ASCII text using 'uvhd'. On following pages we will do the same with 'uvcp' & 'uvcopy'.
We will specify uvhd options 'za3p4y7' which mean:
z |
|
a3 |
|
a1 |
|
a2 |
|
p4 |
|
y7 |
|
y1 |
|
y2 |
|
y4 |
|
uvhd dat1/testRDWe za3p4y7    <-- display 1st record & wait for command
==========================
             10        20        30        40        50        60
 r#  1 0123456789012345678901234567890123456789012345678901234567890123
     0 ....DELL10 - Dell Inc.
       0100CCDDFF464C8994C98444
       080045331000045330953B00
---> w99999 <-- write all records (8) to tmp/testRDWe_yymmdd_hhmmssW
                - output filename will be date/time stamped
                - On Dec 18, 2007 at 12:15 --> tmp/testRDWe_071218_121500W
---> q <--- quit uvhd
vi tmp/testRDWe_071218_121500W
==============================

vi tmp/*00W    <-- shortcut to display desired file
===========
DELL00 - Dell Inc.
HP0000 - Hewlett Packard
IBM000 - International Business Machines
MFC000 - Micro Focus COBOL
MS0000 - Microsoft Corp.
REDHAT - Red Hat Linux
SUN000 - Sun Microsystems Ltd
UVSI00 - UV Software Inc.
We will now use 'uvcp' to perform the conversion from EBCDIC RDW to ASCII.
Instead of variable length text, we will create fixed length output records that are required for most COBOL input files. For this demo, we will convert to fixed 64 byte records & insert a LineFeed in the last byte so we can have it both ways (fixed length for COBOL, but LFs allow investigation with vi).
uvcp "fili1=dat1/testRDWe,rcs=64,typ=RDW,filo1=tmp/testRDWa,typ=RST,tra=0(64)"
==============================================================================
tra=0(64) |
|
typ=RST |
|
uvhd tmp/testRDWa r64h2    <-- display output file with uvhd
=======================
- option r64 for record size
- option 'h2' for hex display to see LineFeed
- only 1st record shown below
             10        20        30        40        50        60
 r#  1 0123456789012345678901234567890123456789012345678901234567890123
     0 DELL10 - Dell Inc.                                             .
       4444332224666246622222222222222222222222222222222222222222222220
       45CC100D045CC09E3E000000000000000000000000000000000000000000000A
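What the uvcp command above does can be approximated in Python. This is a sketch only: cp037 is assumed as the EBCDIC code page (your mainframe may use another), and the function name is illustrative:

```python
import struct

def rdw_to_fixed(src, dst, rcs=64):
    """Read standard RDW records (4-byte prefix, big-endian length that
    includes the prefix), translate the data from EBCDIC to ASCII, pad
    with blanks to a fixed length, and put a LineFeed in the last byte
    so COBOL gets fixed records but 'vi' can still read the file."""
    with open(src, "rb") as fi, open(dst, "wb") as fo:
        while (p := fi.read(4)):
            if len(p) < 4:
                break
            (rsz,) = struct.unpack(">H", p[:2])
            data = fi.read(rsz - 4).decode("cp037")        # EBCDIC -> str
            rec = data.encode("ascii", "replace").ljust(rcs - 1)[:rcs - 1]
            fo.write(rec + b"\n")
```

This mirrors the uvcp options: rcs=64 plays the role of rcs=64 with typ=RST, & the decode step plays the role of tra=0(64). Packed or binary fields would be destroyed by the decode, just as the text warns for the real utilities.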
'varfix11' is a uvcopy utility job to convert any RDW variable length file to a specified fixed length (large enough to hold largest variable record).
On your mainframe migration, you may have EBCDIC RDW & BDW/RDW files to be transferred to & converted on your unix/linux system.
For a mainframe migration you would not want to use the interactive 'uvhd' for hundreds of files. Here we will illustrate a batch utility 'varfix11' that you could use to convert varlth RDW EBCDIC files to ASCII fixed records.
Note that this job cannot be used for EBCDIC files with packed/binary fields. For those you need the procedures documented in 'Part_4' of this MVSDATA.doc.
uvcopy varfix11,fili1=dat1/testRDWe,filo1=tmp/testRDWaf,uop=a1b1h1r64t1 =======================================================================
Note |
|
uop=a0b0h1r2048t0   - option defaults
    a0    - no translate (input EBCDIC with packed/binary fields)
    a1    - translate from EBCDIC to ASCII
    b0    - do NOT convert nulls to blanks
    b1    - DO convert nulls to ASCII blanks
    b2    - DO convert nulls to EBCDIC blanks
    h0    - drop the 4 byte binary record-size headers
    h1    - replace 4 byte binary recsize hdr with numeric chars
    r2048 - output fixed records 2048 bytes
    r8192 - max output fixed size 8192 bytes
    t0    - do NOT insert LineFeed in last byte of record
    t1    - DO insert LineFeed in last byte of record
null to accept or re-specify (1 or more) -->

071128:172717:varfix11: EOF fili01 rds=16 size=252: dat1/testRDWe
071128:172717:varfix11: EOF filo01 wrts=8 size=512: tmp/testRDWaf
cat tmp/testRDWaf <-- display output file =================
0024DELL10 - Dell Inc.
0028HP0010 - Hewlett Packard
0044IBM010 - International Business Machines
0032MFC010 - Micro Focus COBOL
0028MS0010 - Microsoft Corp.
0028REDH10 - Red Hat Linux
0036SUN010 - Sun Microsystems Ltd
0032UVSI10 - UV Software Inc.
'varstat1' is a uvcopy job that you can run to create a summary table of record-sizes found in all records of a variable file typ=RDW or typ=RDWz2 (variable length records with 4 byte or 2 byte prefixes).
#1. cd $UV <-- change to /home/uvadm
#2. uvcopy varstat1,fili1=d0ebc/testRDW,filo2=rpts/testRDW_recsizes ===============================================================
#3. vi rpts/testRDW_recsizes <-- inspect report ========================
varstat1  2007/12/19_18:10:29  record-sizes in dat1/testRDW
                  tbl#001 pg#001     -argument-
   line#      count   %  record-sizes
       1          1  12  00020
       2          3  37  00024
       3          2  25  00028
       4          1  12  00032
       5          1  12  00040
                  8*100  *TOTAL*
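The tallying that varstat1 performs can be approximated in Python for a standard RDW file. This is a sketch of the idea (count how many records occur at each size), not the uvcopy job itself:

```python
import struct
from collections import Counter

def record_size_stats(path):
    """Tally record sizes in a standard RDW file: each record has a
    4-byte prefix whose 1st 2 bytes hold a big-endian length that
    includes the prefix. Returns {record_size: count} sorted by size."""
    sizes = Counter()
    with open(path, "rb") as f:
        while (p := f.read(4)):
            if len(p) < 4:
                break
            (rsz,) = struct.unpack(">H", p[:2])
            sizes[rsz] += 1
            f.seek(rsz - 4, 1)            # skip over the record data
    return dict(sorted(sizes.items()))
```

Such a summary is useful before conversion, e.g. to choose a fixed output record size large enough for the largest variable record.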
#1. cd $CNVDATA <-- change to conversion superdir
#2. mkdir tmp1 <-- make tmp1 subdir if not existing #2a. rm -f tmp1/* <-- OR remove all files if tmp1 already exists
#3. uvcopyx varstat1 d0ebc tmp1 uop=q0i7,rop=r0 =========================================== - create table summary recsize counts for all files in d0ebc subdir - output reports in tmp1 with same names as datafiles in d0ebc
#4. uvlpd12 tmp1 <-- print all reports in tmp1 subdir ============
Note |
|
The 'uvcopy jobs' used in this section are listed on the following pages. You might need to modify them for complex variable length conversions.
7X1. | varstat1 - create table summary of record sizes in variable length files |
- listed further below |
7X2. | varfix11 - convert variable length BDW/RDW files to Fixed Length files |
- might be used to convert mainframe EBCDIC files to ASCII | |
- BUT not if packed/binary present (see Part_4). |
# varstat1 - create summary table of record-sizes used in variable file
#          - for variable lth records with 4 byte hdrs (standard RDW)
#          - by Owen Townsend, Dec 12, 2007
#Dec19/07 - replace getvr2 subrtn with file typ=RDW & RDWz2 (optional)
#
# ** create report for 1 file (for testing) **
#
# uvcopy varstat1,fili1=d0ebc/datafilename,filo2=tmp1/reportname
# ==============================================================
#
# ** create reports for all files in directory **
#
# 1. cd $CNVDATA        <-- change to conversion superdir
#    - subdir d0ebc contains EBCDIC var lth files
#
# 2. mkdir tmp1         <-- make tmp1 subdir if not existing
# 2a. rm -f tmp1/*      <-- OR remove all files if tmp1 already exists
#
# 3. uvcopyx varstat1 d0ebc tmp1 uop=q0i7,rop=r0
#    ===========================================
#    - create table summary recsize counts for all files in d0ebc subdir
#    - output reports in tmp1 with same names as datafiles in d0ebc
#
# 4. uvlpd12 tmp1       <-- print all reports in tmp subdir
#    ============
#
# ** sample report **
#
# varstat1 2006/12/17_18:15:17 record-sizes in d0ebc/E2121656
#                 tbl#001 pg#001     -argument-
#    line#      count   %  record-sizes
#        1     10,552  16  00046
#        2      4,451   7  00065
#        3     23,347  37  00066
#        4        367   0  00068
#        5     21,010  33  00083
#        - - - etc - - -
#       18          3   0  00218
#       19        441   0  00233
#       20        813   1  00239
#              62,115*100  *TOTAL*
#
# This job designed for variable lth records typ=RDW & typ=RDWz2
# typ=RDW (standard RDW)  - 4 byte record prefixes
#   - binary recsize in 1st 2 bytes (3rd & 4th x'0000' unused)
#   - binary recsize of data (includes the 4 byte prefix)
#   - blk/rec lth binary BIG-END format, need 's'witch option for INTEL
# typ=RDWz2 (non-std RDW) - 2 byte record prefixes (2 nulls omitted)
#   - binary recsize does not include the 2 byte prefix
#
opr='$jobname - create summary table of record-sizes used in variable file'
opr='uop=q1z0 - option defaults (message only, see uop=... in job)'
opr='      z0 - RDW 4 byte prefix, recsize 1st 2 BIG-END binary'
opr='      z2 - RDW 2 byte prefix, recsize 1st 2 BIG-END binary'
uop=q1z0                # option defaults
rop=r1                  # option to prompt for report disposition
was=a16384
fili1=?d0ebc/input,rcs=8192,typ=RDW        #Dec19/07 typ=RDW replaces getvr
filo1=?tmp1/$jobname.rpt,typ=LSTt,rcs=128
@run
#Dec19/07 - replace subrtn getvr with typ=RDW & RDWz2
#         - for nonstd 2 byte prefix vs std 4 bytes
#      if uop=z2, append file option z2 for typ=RDWz2
       tsb   o26(1),x'02'          uop z2 for nonstd RDW ?
       skp!  1
       cata8 $fili1+180(15),'z2'
       opn   all
#
# begin loop to get records & build summary table of recsizes
# - to be dumped at end of file
man20  get   fili1,a0              get next RDW record data
       skp>  eof                   (cc set > at EOF)
       mvn   c0(5),$rv             cnvrt binary recsize to numeric
       tblt1f1 c0(5),'record-sizes'  build table in memory
       skp   man20                 repeat get loop until EOF
#
# EOF - close files & end job
eof    mvfv1 f0(80),'record-sizes in $fili1'
       tbpt1s1 filo1,f0(50)        dump table from memory to file
       cls   all
       eoj
#
# getvr - subrtn to get records from IBM std variable length file
#Dec19/07 - getvr replaced by file typ=RDW & typ=RDWz2
#         - subrtn saved (for interest) in $UV/pf/util/getvr
## folwng instrns modified (see above) for typ=RDW
##man20 bal   getvr                set rgstrs a & b to next record
##      mvn   c0(5),a0(2bs)        cnvrt binary recsize to numeric
# varfix11 - convert RDW variable length file    test Aug 24/05
#          - convert RDW variable length file to fixed length records
#          - prior to 2nd job that will convert EBCDIC to ASCII
#            OR compare 2 files uvcmpVA1/uvcmpVE1 (fixlth max 4096)
#          - by Owen Townsend, UV Software
#          - originally for Laval, Dec 2005
#
#Dec19/07 - replace subrtn getvr with typ=RDW & add option z2
#           for nonstd 2 byte prefix vs std 4 bytes (binary recsize 1st 2)
#         - subrtn code preserved in $UV/pf/util/getv for interest
#
# Enhanced Oct 2007 for Sungard, options added
# - a1=translate to ASCII, h1=cnvt hdr to numerics, r4096=fixed output recsize
# - b1=convert nulls to ASCII blanks, t1=terminate with LineFeed
#
# ** Operating Instructions **
#
# uvcopy varfix11,fili1=d0ebc/infile,filo1=d1ebc/outfile,uop=a1h1r4096
# ====================================================================
# - convert data file from variable to fixed 4096 byte records
#
# option a1    - translate from EBCDIC to ASCII
# option h1    - replace 4 byte binary recsize hdr with recsize numerics
# option r4096 - output fixed records 4096 bytes
#              - large enough to hold largest variable lth record
#
# This 1 uvcopy job 'varfix11' can be used for all data files
# - to convert from variable to fixed
# - drops 4 byte variable recsize headers, or retain converted to numerics
# - null fill records to fixed recsize spcfd by option r
#
# ** uses for varfix11 **
#
# Convert variable length to fixed length to facilitate:
#
# 1. CONVERTING datafiles from EBCDIC to ASCII (preserving packed/binary)
#    - could convert back to variable length after conversion
#    - see examples in VSEDATA.doc
#
# 2. COMPARING 2 datafiles
#    - see script uvcmpVE1 to compare 2 varlth EBCDIC datafiles
#    - this varfix11 used to convert both files to fixed max lth in tmp1/tmp2
#    - followed by utility 'uvcmp1' to compare & create report in rpts/...
#
#        ** sample variable-length RDW files **
#
# uvhd dat1/testRDWe a      <-- display demo file EBCDIC varlth RDWz4
# ====================       - option 'a' to show char lines in ASCII
#                              (zones & digits lines show EBCDIC codes)
#
#                  10        20        30        40        50        60
# r# 1       0123456789012345678901234567890123456789012345678901234567890123
#         0  ....DELL10 - Dell Inc.  ....HP0010 - Hewlett Packard....IBM010 -
#            0100CCDDFF464C8994C984440100CDFFFF464C8A98AA4D8898980200CCDFFF46
#            080045331000045330953B000C008700100008563533071321940C0092401000
#        64  International Business Machines....MFC010 - Micro Focus COBOL
#            4C9A8998A899894CAA898AA4D888898A0200DCCFFF464D88994C98AA4CDCDD44
#            0953595139651302429552204138955200004630100004939606634203626300
#       128  ....MS0010 - Microsoft Corp.....REDH10 - Red Hat Linux ....SUN0
#            0100DEFFFF464D8899A98A4C99940100DCCCFF464D884C8A4D89AA440200EEDF
#            0C0042001000049396266303697B0C0095481000095408130395470004002450
#       192  10 - Sun Microsystems Ltd   ....UVSI10 - UV Software Inc.
#            FF464EA94D8899AAAA89A4DA84440200EEECFF464EE4E98AA8984C984444
#            10000245049396282354203340000000452910000450266361950953B000
#
# uvhd dat0/testRDWe z4a1   <-- option 'z4' to process RDWz4 files
# =======================    - 'a1' to translate char line to ASCII
#                            - hexadecimal zones & digits show EBCDIC values
#
#                  10        20        30        40        50        60
# r# 1       0123456789012345678901234567890123456789012345678901234567890123
#         0  ....DELL10 - Dell Inc.
#            0100CCDDFF464C8994C98444
#            080045331000045330953B00
#
# Note - uvhd displays 1st record & prompts for commands
#      - enter w999 to write all records & q to quit:
#
# --> w999a1t6r64    <-- Write all records to the tmp/... subdir
#      - filename date/time stamped ex: tmp/testRDWe_yymmdd_hhmmssW
#      - On Nov 17/07 3:30 PM might be: tmp/testRDWe_071117_153059W
#      a1  - translate output records to ASCII
#      t6  - insert LineFeed terminators after last non-blank
#            (t6=t2+t4, t2=LineFeed, t4=insert after last non-blank)
#      r64 - max size for output records
#      - 'y7' to insert CR+LF after last non-blank
# --> q              <-- quit uvhd
#
# cat tmp/*59W    <-- display output text records
# ============     - use *59W to save a lot of keystrokes
#
# DELL10 - Dell Inc.
# HP0010 - Hewlett Packard
# IBM010 - International Business Machines
# MFC010 - Micro Focus COBOL
# MS0010 - Microsoft Corp.
# REDH10 - Red Hat Linux
# SUN010 - Sun Microsystems Ltd
# UVSI10 - UV Software Inc.
#
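The zones & digits lines in the uvhd display above are the EBCDIC hex codes of each byte. You can check such bytes with Python's built-in EBCDIC codec; cp037 (US/Canada EBCDIC) matches this demo file, though a real mainframe may use another codepage (cp500, cp1047, ...).

```python
# zones C,C,D,D,F,F over digits 4,5,3,3,1,0 = bytes C4 C5 D3 D3 F1 F0
ebcdic = bytes([0xC4, 0xC5, 0xD3, 0xD3, 0xF1, 0xF0])
print(ebcdic.decode("cp037"))              # -> DELL10
print("DELL10".encode("cp037").hex())      # -> c4c5d3d3f1f0
```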
opr='$jobname - convert variable length file to fixed length records'
opr='uop=a0b0h1r4096t0z4  - option defaults'
opr='   a0 - no translate (for EBCDIC packed/binary fields)'
opr='   a1 - translate from EBCDIC to ASCII'
opr='   b0 - do NOT convert nulls to blanks'
opr='   b1 - DO convert nulls to ASCII blanks'
opr='   b2 - DO convert nulls to EBCDIC blanks'
opr='   h0 - drop the 4 byte binary record-size headers'
opr='   h1 - replace 4 byte binary recsize hdr with numeric chars'
opr='   r4096 - output fixed records 4096 bytes'
opr='   r8192 - max output fixed size 8192 bytes'
opr='   t0 - do NOT insert LineFeed in last byte of record'
opr='   t1 - DO insert LineFeed in last byte of record'
opr='   z4 - RDW 4 byte prefix, recsize 1st 2 BIG-END binary'
opr='   z2 - RDW 2 byte prefix, recsize 1st 2 BIG-END binary'
uop=q1a0b0h1r4096t0z4           # option defaults
was=a32768b32768c32768          # increase areas a,b,c from dflt 1024
fili1=?d0ebc/filename,rcs=8192,typ=RDWz4   # uop z2 will change to option z2
filo1=?d1ebc/filename,rcs=8192,typ=RSF
@run
#Dec19/07 - replace subrtn getvr with typ=RDWz4 & typ=RDWz2
#         - typ=RDWz2 for nonstd 2 byte prefix vs std 4 bytes
#      if uop=z2, change file option z4 to z2
       tsb   o26(1),x'02'           uop z2 for nonstd RDW ?
       skp!  1
       cata8 $fili1+180(15),'z2'
#
       opn   all                    open files
       mvn   $rf,$uopbr             load fixed outsize in rgstr 'f'
       mvn   $rg,$rf                transfer to $rg
       sub   $rg,1                  -1 byte to insert LF in last byte
#
# begin loop to read variable length records until EOF
# - using get typ=RDW to get var lth records depending on size in prefix
# - write fixed length records padded to max 8192 with EBCDIC blanks
##man20 bal  getvr                  <--Dec19/07 change to get file typ=RDW
man20  get   fili1,b0               get next record depending on varlth prfx
       skp>  eof
#
#      test option to translate EBCDIC to ASCII
       tsb   o1(1),x'01'            trnslt EBCDIC to ASCII ?
       skp!  1
       tra   b0($rf8192)
#
#      presume option h1 to replace 4 byte recsize hdr with numerics
##     mvn   c0(4),a0(2bs)          <--Dec19/07 chg to $rv
       mvn   c0(4),$rv              cnvt binary recsize to ASCII numerics
       tsb   o1(1),x'01'            trnslt EBCDIC to ASCII ?
       skp=  1
       tre   c0(4)                  no - trnslt ASCII recsize to EBCDIC
       mvc   c4($rf8192),b0         move data to follow recsize
#
#      test option h0/h1 drop recsizehdr or replace with numerics ?
       tsb   o8(1),x'01'            replace recsize hdr w numerics ?
       skp=  1
       mvc   c0($rf8192),b0         no - drop recsize hdr
#
#      test option b1 to convert nulls to ASCII blanks
man40  tsb   o2(1),x'01'            convert nulls to ASCII blanks ?
       skp!  1
       rep   c0($rf8192),x'00',x'20'
#      test option b2 to convert nulls to EBCDIC blanks
       tsb   o2(1),x'02'            convert nulls to EBCDIC blanks ?
       skp!  1
       rep   c0($rf8192),x'00',x'40'
#
#      test option t1 to insert Line-Feed Terminator in last byte of record
       tsb   o20(1),x'01'           insert LF in last byte ?
       skp!  1
       mvc   cg0(1),x'0A'           Yes - insert LF in last byte
#
#      write current record to output file & return to get next
man48  put   filo1,c0($rf8192)      write record to file#1 out
       skp   man20                  return to get next record
#
#      EOF - close files & end job
eof    cls   all
       eoj
#
These scripts generate uvcopy jobs to convert MainFrame EBCDIC data files to ASCII files, preserving packed & binary fields, and correcting the signs in zoned (unpacked) numeric fields to Micro Focus COBOL standards.
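The zoned-sign correction can be illustrated with a short Python sketch. This is not the generated uvcopy code; it assumes the common Micro Focus ASCII signed-DISPLAY convention (positive trailing digit stays '0'-'9', negative becomes 'p'-'y', i.e. 0x70 + digit), and the function name is hypothetical.

```python
def zoned_to_mf_ascii(field: bytes) -> bytes:
    """Translate an EBCDIC zoned-decimal field to MF-COBOL style ASCII.

    EBCDIC zones: F = unsigned, C = positive, D (or B) = negative,
    carried in the high nibble of the last byte."""
    ascii_digits = field.decode("cp037")          # EBCDIC -> ASCII characters
    last = field[-1]
    zone, digit = last >> 4, last & 0x0F
    if zone in (0xD, 0xB):                        # EBCDIC negative zones
        sign_byte = bytes([0x70 + digit])         # 'p'..'y' = negative
    else:                                         # 0xC / 0xF treated as positive
        sign_byte = bytes([0x30 + digit])         # '0'..'9' = positive
    return ascii_digits[:-1].encode("ascii") + sign_byte
```

For example, EBCDIC x'F1F2D3' (-123 truncated to -12 3, i.e. "12" with negative 3) becomes ASCII `12s`, while x'F1F2C3' (+123) becomes `123`.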
9A1. gencnv5A - copy datafiles from d0ebc/... to d1ebc/..., modifying filenames
               from mainframe standards to VU unix/linux conventions
     - translate filenames to lower case
     - convert GDG filenames from G9999V00 to trailing '_' underscore
     - generate the control file to relate copybooknames & datafilenames

9A2. gencnv5B - 2nd part of scripts to generate conversion jobs for ALL files
     - run cobmap1, uvdata51, & uvdata52 to create jobs for ALL files
       in the copybook directory & all datafiles in the control file
     - the copybooknames in the jobs generated by uvdata51 are changed
       to the datafilenames supplied by the control file

9A3. gencnv51 - run cobmap1, uvdata51, & uvdata53 to create 1 job for 1 file
     - uvdata53 looks up Indexed file datacnv54I to relate the
       copybookname to the datafilename
     - this assumes that you have already run gencnv5A & gencnv5B,
       since datacnv54I is loaded by gencnv5B

The uvcopy jobs run by these scripts are:

cobmap1  - convert COBOL copybooks to 'cobmaps' (record layouts)
uvdata51 - generate the data conversion uvcopy jobs from the cobmaps
uvdata52 - insert the datafilenames from the control file (ALL files, gencnv5B)
uvdata53 - insert the datafilename for 1 file, via Indexed control file
           datacnv54I (gencnv51)
The KORN shell scripts are stored in the /home/uvadm/sf/IBM/... directory & you can examine or print them as follows, using gencnv5A as an example:
vi /home/uvadm/sf/IBM/gencnv5A        <-- examine gencnv5A
uvlp12 /home/uvadm/sf/IBM/gencnv5A    <-- print gencnv5A
The uvcopy jobs are stored in the /home/uvadm/pf/IBM/... directory & you can examine or print them as follows, using uvdata51 as an example:
vi /home/uvadm/pf/IBM/uvdata51        <-- examine uvdata51
uvlp12 /home/uvadm/pf/IBM/uvdata51    <-- print uvdata51
#!/bin/ksh
# gencnv5A - Generate uvcopy jobs to convert All data files
#          - see /home/uvadm/doc/MVSDATA.doc or www.uvsoftware.ca/mvsdata.htm
#
echo "gencnv5A - script to convert All datafiles in d0ebc-->d1ebc-->d2asc"
echo "         - part A of 2 parts (gencnv5A & gencnv5B)"
echo "*gencnv5A - copies files from d0ebc to d1ebc, modifying filenames"
echo "          & generates the data conversion control file"
echo "          - which needs to be edited with copybooknames & record-sizes"
echo " gencnv5B - generates the uvcopy data conversion jobs from copybooks"
echo "          & control file, leaves the generated jobs in pfx2"
echo "          - you must copy to pfx3 & add code for multi record type files"
echo "          - then may execute all jobs using uvcopyxx 'pfx3/*' script"
echo " "
echo "usage: gencnv5A all"
echo "       ============"
echo "datafiles must have been FTP'd from mainframe & stored in d0ebc/..."
echo " "
echo "d0ebc---------->d1ebc----------------->d2asc--------->$TESTDATA/TNsub/..."
echo "   copy/rename    generated conversions     copy to TopNode subdir"
echo " "
echo "This script generates/executes a subscript to copy/rename datafiles"
echo " to unix/linux Vancouver Utility JCL/script standards"
echo " - lower case, any '$' & '#' chars changed to '_' underscores"
echo " - GDG files with (0) or G1234V00 suffixes changed to _000001"
echo " - PDS modules changed from library(module) to library@module"
echo " "
echo "This script then generates the data conversion control file datacnv53"
echo " - to be copied to ctl/datacnv54 & edited with copybooks & record-sizes"
echo " "
if [[ -d d0ebc && "$1" == "all" ]]; then :
else echo "usage: gencnv5A all"
     echo "       ============"
     echo " - arg1 must be 'all'"
     echo " - d0ebc must be in current directory \$CNVDATA=$CNVDATA"
     exit 9; fi
#
#Note - the 2 digit#s below match the #s in the step by step Op Instrns
#     - in MVSDATA.doc#Part_4 or www.uvsoftware.ca/mvsdata.htm#Part_4
#
echo "Enter to capture filenames in d0ebc & make script sf/cpd0d1rename"
echo "- to copy d0ebc/* to d1ebc, renaming to JCL/script standards"
read reply
#
ls d0ebc >tmp/ls_d0ebc
#=====================
#02 - list datafilenames in d0ebc & capture for next step
#
uvcopy mksfd0d1,fili1=tmp/ls_d0ebc,filo1=sf/cpd0d1rename,uop=q0i7,rop=r0
#=======================================================================
#03 - make script to copy d0ebc to d1ebc, changing filenames
#     from mainframe conventions to unix/linux VU standards
#     GDG filename(0) --> filename_000001, etc
echo "- enter to execute script to copy/rename datafiles (sf/cpd0d1rename)"
echo "- will 1st remove all files from output directory d1ebc/..."
read reply
rm -f d1ebc/*
#============
#04a - remove all old files from output directory
#
sf/cpd0d1rename
#==============
#04b - execute script to copy/rename files from d0ebc to d1ebc
#      - changing names from mainframe as required for VU JCL/scripts
#
cd $TESTLIBS
#===========
#05 - change to libraries superdir (JCL & COBOL conversions)
#
echo "- enter to generate control files datacnv51,52,datactl53,datacnv53"
echo "- merging info from datajcl52,datacat52,dataxl152,dataxl252,dataedt52"
echo "- changed to \$TESTLIBS (most control files are in \$TESTLIBS)"
read reply
#
uvcopy cnvdata51,fild1=$CNVDATA/d1ebc,filo2=ctl/datacnv51,uop=q0i7,rop=r0
#========================================================================
#06 - determine which files have packed or binary fields
#     by scanning 1st 5000 bytes of EBCDIC datafiles for x'0C' & x'00'
#     - write text file of all datafilenames with code 'Dp' or 'Db'
echo "- enter to generate ctl/datacnv52 (packed/binary field indicators)"
read reply
#
uvcopy cnvdata52,fili1=ctl/datacnv51,filo2=ctl/datacnv52,uop=q0i7,rop=r0
#=======================================================================
#07 - change filenames to match other datafile info jobs (jcldata51,etc)
#     - translate datafilenames from UPPER to lower case
#     - GDG filename(0) or .G1234V00 ---> filename_ (trailing '_')
#       date stamped filename.mmmddyy ---> filename.%%MDY
echo "- enter to generate ctl/datactl53 for JCL & DATA conversions"
echo "- merging 6 files: datajcl52,datacat52,dataxl152,dataxl252,dataedt52,datacnv52"
read reply
#
uvcopy ctldata53,fili1=ctl/datajcl52,fili2=ctl/datacat52,fili3=ctl/dataxl152\
      ,fili4=ctl/dataxl252,fili5=ctl/dataedt52,fili6=ctl/datacnv52\
      ,filo7=ctl/datactl53,uop=q0i7,rop=r0
#=============================================================================
#08 - combine 6 inputs: JCL, LISTCAT, Excel1, Excel#2, Edited, datainfo
#     - must have created null files for sources not used
#
echo "- enter to sort & load indexed file ctl/datactl53I (for JCL convert)"
read reply
#
uvsort "fili1=ctl/datactl53,rcs=191,typ=LST,key1=0(44)\
      ,filo1=ctl/datactl53I,typ=ISF,isk1=0(44)"
#======================================================
#09 - create Indexed file used by JCL & DATA conversions
echo "- enter to generate the DATA conversion control file ctl/datacnv53"
read reply
#
uvcopy cnvdata53,fild1=${CNVDATA}/d1ebc,filr2=ctl/datactl53I\
      ,filo3=ctl/datacnv53,uop=q0i7,rop=r0
#============================================================
#10 - create data conversion control file
#     - extracts cpy=..., rca=..., rcm=..., key=... from ctl/datactl53I
#     - may also include copybooknames (originally on Excel spreadsheet)
#
#note - datafilenames in the 'DATA conversion control file' ctl/datacnv53
#     - different than in the 'JCL conversion control file' ctl/datactl53
#     - datacnv53 GDG names have ..._000001 suffix vs ..._ (trailing '_')
#     - datacnv53 dated files have ...mmmddyy vs %%MDY
#
echo "- enter to print the data conversion control file Landscape 14cpi"
echo "- to be edited with any missing copybooks & record-sizes"
read reply
#
uvlp14L ctl/datacnv53 s2
#=======================
#11 - list Landscape at 14 cpi space 2
#     - listing will help you research & write in missing copybooks
#
cp ctl/datacnv53 $CNVDATA/ctl/
#=============================
#12 - copy generated control file from \$TESTLIBS/ctl to \$CNVDATA/ctl
#     - to be edited with any missing copybooks & record-sizes
#
echo "ctl/datacnv53 generated in both \$TESTLIBS/ctl/... & \$CNVDATA/ctl/..."
echo "- you must copy/rename \$CNVDATA/ctl/datacnv53 to datacnv54"
echo "- then edit with any missing copybooks & record-sizes"
echo "  then run gencnv5B to generate uvcopy data conversion jobs in pfx2/..."
#
exit 0
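The copy/rename rules that gencnv5A applies (via the generated sf/cpd0d1rename script) can be sketched as a Python function. This is an illustration of the rules quoted above, not the generated script itself; the function name is hypothetical.

```python
import re

def mf_to_unix_name(name: str) -> str:
    """Rename a mainframe datafile to the VU unix/linux convention:
    lower case, '$' & '#' -> '_', GDG suffixes '(0)' or '.G9999V00'
    -> '_000001', PDS library(module) -> library@module."""
    name = name.lower()
    name = name.replace("$", "_").replace("#", "_")
    name = re.sub(r"\.g\d{4}v00$", "_000001", name)   # GDG generation suffix
    name = re.sub(r"\(0\)$", "_000001", name)         # GDG relative (0)
    name = re.sub(r"\((\w+)\)$", r"@\1", name)        # PDS library(module)
    return name
```

For example, `PROD.PAY$MSTR.G1234V00` becomes `prod.pay_mstr_000001` and `MY.LIB(MOD1)` becomes `my.lib@mod1`.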
#!/bin/ksh
# gencnv5B - Generate uvcopy jobs to convert All data files
#          - see /home/uvadm/doc/MVSDATA.doc or www.uvsoftware.ca/mvsdata.htm
echo "gencnv5B - script to convert All datafiles in d0ebc-->d1ebc-->d2asc"
echo "         - part B of 2 parts (gencnv5A & gencnv5B)"
echo " gencnv5A - copies files from d0ebc to d1ebc, modifying filenames"
echo "          & generates the data conversion control file"
echo "          - which needs to be edited with copybooknames & record-sizes"
echo "*gencnv5B - generates the uvcopy data conversion jobs from copybooks"
echo "          & control file, leaves the generated jobs in pfx2"
echo "          - you must copy to pfx3 & add code for multi record type files"
echo "          - then may execute all jobs using uvcopyxx 'pfx3/*' script"
echo " "
echo "usage: gencnv5B all"
echo "       ============"
echo "cpys ---------> maps ---------> pfx1 ---------> pfx2 ----------> pfx3"
echo "       cobmap1        uvdata51        uvdata52         cp & vi"
echo "d0ebc-------------->d1ebc----------------->d2asc--------->$TESTDATA/TNsub/..."
echo "  copied by gencnv5A   generated conversions    copy to TopNode subdir"
echo " "
echo "gencnv5A (prior script) - created ctl/datacnv53 in \$TESTLIBS & \$CNVDATA"
echo " - You must have:"
echo "1. changed to \$CNVDATA=$CNVDATA"
echo "2. copied ctl/datacnv53 to ctl/datacnv54"
echo "3. edited ctl/datacnv54 with any missing copybooks & record-sizes"
echo "gencnv5B (this script) - will generate data conversion jobs"
echo " - from copybooks & edited control file ctl/datacnv54"
echo " "
if [[ -f ctl/datacnv54 && -d d1ebc && "$1" == "all" ]]; then :
else echo "usage: gencnv5B all"
     echo "       ============"
     echo "- arg1 must be 'all'"
     echo "- ctl/datacnv54 must be present"
     echo "- d1ebc subdir must be present in curdir \$CNVDATA=$CNVDATA"
     exit 9; fi
echo "Enter to copy all copybooks from \$TESTLIBS/cpys to \$CNVDATA/cpys"
read reply
#
cp -f $TESTLIBS/cpys/* cpys
#==========================
#17 - copy all copybooks from $TESTLIBS/cpys (master) to $CNVDATA/cpys
echo "Enter to execute cobmap1 to convert copybooks to cobmaps"
read reply
#
uvcopyx cobmap1 cpys maps uop=q0i7p0
#====================================
#18 - generate cobmaps (record layouts) from COBOL copybooks
rmzf maps
#========
#18b - remove any null files in maps caused by procedure copybooks
echo "Enter to execute uvdata51 to convert cobmaps to uvcopy jobs"
read reply
#
uvcopyx uvdata51 maps pfx1 uop=q0i7
#===================================
#19 - generate data conversion uvcopy jobs from cobmaps
echo "Enter to copy 'skeleton2' from $UV/pf/IBM/..."
echo " - in case copybook missing (OK if no packed/binary fields)"
read reply
#
cp $UV/pf/IBM/skeleton2 pfx1
#===========================
#20 - provide 'translate only' uvcopy job, in case copybook missing
#     - OK if no packed or binary fields
echo "Enter to execute uvdata52 to insert datafilenames from control file"
read reply
#
uvcopy uvdata52,fili1=ctl/datacnv54,fild2=pfx1,fild3=pfx2,uop=q1i3
#=================================================================
#21 - complete the uvcopy data conversion jobs
#     - insert datafilenames (vs copybook names)
#     - if Indexed, change file type & insert keyloc(keylen)
echo "Enter to load ctl/datacnv54 into Indexed file ctl/datacnv54I"
echo " - for later use with 'gencnv51' to convert 1 file at a time"
read reply
#
uvsort "fili1=ctl/datacnv54,rcs=191,typ=LST,key1=0(44)\
      ,filo1=ctl/datacnv54I,typ=ISF,isk1=0(44)"
#======================================================
#16 - create Indexed file used to generate DATA conversion jobs
#     - for 1 file at a time, datacnv54 (seqntl) used to gen All jobs
#
echo "All uvcopy jobs generated in pfx2 & named same as datafilename"
echo "- you must copy to pfx3 (& modify if multi R/T's) before executing"
echo "- remove old files & execute All jobs with script 'uvcopyxx'"
echo "---> rm -f d2asc/*       <-- remove any old files from output dir"
echo "     ============="
echo "---> uvcopyxx 'pfx3/*'   <-- uvcopyxx requires pattern in single quotes"
echo "     ================="
echo "Then use 'copy2nodes' script to copy files from d2asc/* to \$TESTDATA"
echo "---> copy2nodes \$CNVDATA/d2asc \$TESTDATA"
echo "     ==================================="
echo "      copy2nodes $CNVDATA/d2asc $TESTDATA"
echo "     ==================================="
exit 0
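The uvsort step above loads the 191-byte control records into an Indexed file keyed on columns 0-43 (the datafilename), so later lookups by name are direct. A hypothetical equivalent in Python is a dict keyed on the 44-byte name field; the field contents shown are illustrative, not the real datacnv54 layout.

```python
def load_ctl(lines):
    """Index fixed-width control records by their 44-char datafilename key."""
    ctl = {}
    for rec in lines:
        key = rec[:44].rstrip()      # datafilename, blank-padded to 44 cols
        ctl[key] = rec[44:]          # remaining info (cpy=, rca=, key=, ...)
    return ctl

ctl = load_ctl(["prod.pay.mstr".ljust(44) + "cpy=paymstr rca=0256 typ=IDXf1",
                "prod.gl.trans".ljust(44) + "cpy=gltrans rca=0100 typ=RSF"])
print(ctl["prod.pay.mstr"])          # -> cpy=paymstr rca=0256 typ=IDXf1
```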
#!/bin/ksh
# gencnv51 - Generate uvcopy job to convert 1 data file EBCDIC to ASCII
#          - see /home/uvadm/doc/MVSDATA.doc or www.uvsoftware.ca/mvsdata.htm
#Oct2007 - new version using uvdata53, which uses datacnv54I (indexed)
#
echo "gencnv51 - generate uvcopy job to convert 1 data file EBCDIC to ASCII"
echo "         - run from CNVDATA containing: ctl,cpys,maps,pfx1,pfx2,pfx3"
echo " "
echo "usage: gencnv51 CopyBookName DataFileName"
echo "       ================================="
echo "ex: gencnv51 tdw011ft.cpy dbdpdw._addr.usg.dly_000001"
echo "    ==================================================="
echo " "
echo "cpys ---------> maps ---------> pfx1 ---------> pfx2 ----------> pfx3"
echo "       cobmap1        uvdata51        uvdata53         cp & vi"
echo " "
echo "d0ebc----------->d1ebc------------>d2asc----------->$TESTDATA/TNsub/..."
echo "   copy/rename      generated job       copy to TopNode subdir"
echo " "
echo "datafilename should have matching entry in ctl/datacnv54I"
echo " - to get filetype, indexed keys, topnode changes (else correct with vi)"
echo "datafile must have been FTP'd from mainframe to d0ebc & copied to d1ebc"
echo "- changing name to VU JCL/script standards (lower case, '$' to '_')"
echo "- GDG files with (0) or G1234V00 suffixes changed to _000001"
echo " "
echo "Enter to execute cobmap1, uvdata51,& uvdata53"; read reply
if [[ ! -f "cpys/$1" ]]; then
   echo "usage: gencnv51 copybook.cpy data.file.name"
   echo "       ===================================="
   echo " - copybook not found in cpys/$1"
   exit 9; fi
#
if [[ ! -f "d1ebc/$2" ]]; then
   echo "usage: gencnv51 copybook.cpy data.file.name"
   echo "       ===================================="
   echo " - data.file.name not found in d1ebc/$2"
   exit 9; fi
#
cfn="$1"       # capture copybookname from arg1
dfn="$2"       # capture datafilename from arg2
cfx=${1%\.*}   # strip any extension from copybook name
#
uvcopy cobmap1,fili1=cpys/$cfn,filo1=maps/$cfx,uop=q0i7p0
#========================================================
#
uvcopy uvdata51,fili1=maps/$cfx,filo1=pfx1/$cfx,uop=q0i7
#=======================================================
#
uvcopy uvdata53,fili1=pfx1/$cfx,filr2=ctl/datacnv54I,filo3=pfx2/$dfn
#===================================================================
#
echo "uvcopy job generated in pfx2 & named same as datafilename"
echo "- must copy to pfx3 (& modify if multi R/T's) before executing"
echo "- do not copy yet if you need to retrieve R/T code from old version"
echo "- copy to pfx3 now (& display) y/n ?"
read reply
if [[ "$reply" == "y" ]]; then cp -f pfx2/$dfn pfx3 && cat pfx3/$dfn; fi
exit 0
#