Part_1 | - Overview - convert data to pipe|delimited to load SQL tables
          - Setup your profile to perform conversions
          - Setup data conversion super-dir $CNVDATA
          - Script to setup data conversion sub-dirs
          - SQL Server directories to execute table create & load
          - Transfer mainframe DataFiles & CopyBooks to unix/linux for conversion
          - Rename copybooks same as datafiles if only 1 datafile per copybook
          - Split copybooks for multi R/T or Occurs Depending on for multi tables
          - create & print cobmaps (record layouts) from copybooks
          - generate uvcopy jobs to pipe|delimit datafiles
          - generate batch-files to create & load SQL tables
          - Investigating DATA files prior to conversion
          - Split DATA files with Occurs Depending or Multi R/Ts
          - Executing generated uvcopy jobs to pipe|delimit DATA files
          - List samples of pipe delimited data
          - zip results for transfer to production machine
Part_2 | - Alternative procedures if more than 1 DataFile per CopyBook (GDG)
          (Part 1 procedures simpler, but allow only 1 DataFile per CopyBook)
          - Setting up a Control-File to relate datafiles to copybooks
          - Generated conversion jobs named for datafiles (not copybooks as Part 1)
Part_3 | - Sample listings of CopyBooks, generated conversion jobs, & scripts
          - Operating Instructions for processing 1 file at a time
            (use for rerun/correction when multiple large files would be slower)
          - Note that Part1 & Part2 instructions process ALL files in directory
            (more efficient & instructions shorter than for 1 file at a time)
Part_4 | - Listings of Vancouver Utility scripts & utilities used here, examples:
          gensqljobs1  - perform all steps to GENERATE jobs for pipe|delimiting
          execsqljobs1 - EXECUTE all jobs to pipe|delimit all data-files
          zipsql1      - zip generated results to transfer/execute on alt machine
          insertseq1   - insert sequence# in copybook for SQL Primary Key Integer
          - cnvdatadirs, cleanupcpy, uvlp12, uvlpd12, uvlpd12D
          - renameL, rename-XX, rename+X
          - uvcopyx, uvcopyx2, uvcopyx4, uvcopyxx
          - cobmap1, uvdata51, uvdata52, genpipe1, sqlcreate2, listpipe1
          - keep1, varstat1, tableRT1
          - uvhd, uvlist, uvcopy
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
SQL_convert.doc <-- This document, file conversions for SQL Server
DATAcnv1.htm | - Simplified DATA conversion (1 file at a time)
               - Translate EBCDIC to ASCII, preserving packed fields
                 & correcting zoned signs to MicroFocus COBOL standards
               - converting char fields to ASCII, preserving packed/binary
                 (same record layouts to run same COBOL programs on Unix/Linux)
               - And/Or converting to all text delimited for loading RDBMS's
MVSDATA.htm | - MVS DATA Conversion |
- also includes JCL & COBOL conversion |
VSEDATA.htm | - VSE DATA Conversion |
- also includes JCL & COBOL conversion |
1A1. | Converting Mainframe DATA - Overview |
1A2. | Conversions performed by uvdata51 & uvdata52 |
1A3. | Actions performed by genpipe1 & sqlcreate2 |
1B1. | setup your profile to perform conversions |
setup data conversion super-dir cnvdataXX
add 'export CNVDATA=...' to your profile
1B2. | script to setup data conversion sub-dirs |
1B3. | directories to execute table create & load batch-files on SQL SERVER |
1C1. | Overview - subdirs/procedures used to convert DATA files to SQL tables |
- convert COBOL copybooks to 'cobmaps' (Record Layouts)
- use cobmaps to generate uvcopy jobs to convert EBCDIC to ASCII
- use cobmaps to generate uvcopy jobs to convert DATA to pipe|delimited
- use cobmaps to generate batch-files to create & load tables
- Executing generated uvcopy jobs to convert EBCDIC to ASCII
- Executing generated uvcopy jobs to convert DATA to pipe|delimited
- Executing generated batch-files to create & load tables on SQL Server
1D1. | Overview - Operating Instructions for Data Conversion |
1D2. | FTP files to be converted from mainframe to unix |
1E1. | Notes re data-file-names & copybook-names |
Changing copybook-names to match data-file-names
1E2. | Cleanup copybooks & list to assist with editing changes |
1E3. | Copybook rename & separate occurs depending segments |
Rename CopyBooks same as DataFiles
1E4. | Create (edit) Separate copybooks for Occurs depending & Multi R/T files |
1E5. | Modify copybooks as required for SQL tables |
1E6. | Combine cc,yy,mm,dd fields into 1 8 byte date field |
1F1. | Return point for copybook corrections & rerun generations/executions |
scripts gensqljobs1 & execsqljobs1 generate all jobs & execute all jobs
- do NOT use the 1st time; the step by step procedures are better for detecting any problems
1G1. | create & print cobmaps for pipe|delim & SQL scripts |
1G2. | Generate uvcopy jobs to pipe|delimit Data files |
1G3. | Generate Batch-Files to Create & Load SQL Tables |
1G4. | Extract osql commands to create & load tables
1H1. | Investigating DATA files prior to conversion |
Notes re data-file-names & copybook-names
1H2. | Print samples of DATA files to aid conversion |
1H3. | Print statistics for data files (optional) |
1I1. | Insert sequence# for primary key integer in SQL tables |
1I2. | Split DATA files with Occurs Depending or Multi R/Ts |
1J1. | Executing generated uvcopy jobs to pipe|delimit DATA files |
1J2. | List samples of pipe delimited data |
List pipe|delim samples for 1 file at a time
List pipe|delim samples for ALL files in directory |
1K1. | zip results for transfer to production machine |
This documents the data file conversion for loading relational databases (Microsoft SQL Server).
These conversions are driven by the COBOL 'copybooks' (record layouts). The copybooks are first converted into 'cobmaps', which have field start, end, length, & type on the right side (see COBaids1.doc).
Vancouver Utilities (uvdata51 & genpipe1) read the 'cobmap's and generate uvcopy jobs for each file to be converted & pipe delimited.
Another utility (sqlcreate2) reads the 'cobmaps' and generates scripts (SQL batch files) to create & load SQL tables from the pipe delimited data.
The 'uvcopy jobs' when 1st created from the copybook/cobmaps will use the copybook name for the I/O data file names, since we do not know datafilenames from the copybook & 1 copybook could be used for several datafiles.
The one manual job required is to edit the data conversion control file, coding the copybook filename in the space provided. The copybook name may be omitted if you know the data file has no packed or binary fields and no zoned signs in unpacked numeric fields.
For these files the 'skeleton2' conversion job is supplied, which will simply translate the entire record from EBCDIC to ASCII. The skeleton2 job will be modified with the data filenames, the record size, & any indexed keys using the data conversion control file (generated from the LISTCAT report).
Note that the Operating Instructions in Part 1 process ALL files in the directories (All CopyBooks or All DataFiles or All Generated Scripts, etc). This is much more efficient than processing 1 file at a time.
However, the instructions for processing 1 file at a time will be presented in Part 3. They might be needed for rerun/correction, when re-processing multiple large files would be slower than redoing just the file in question. The instructions for All files are often used because they require fewer keystrokes than for 1 file at a time.
It is only the DataFile processing where the processing time is significant. Generating various jobs & scripts from copybooks is so fast it does not matter whether you process 1 at a time or all in the directory.
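As a concrete picture of what 'pipe|delimiting' means, here is a minimal shell sketch. The field positions, filenames, and /tmp paths below are illustrative assumptions only; the real jobs are generated by genpipe1 from the cobmap field start/end/length columns.

```shell
# Hypothetical 24 byte fixed-length ASCII record:
#   cols 1-10 name, 11-16 amount, 17-24 ccyymmdd date
printf 'JONES     00125020240115\n' > /tmp/fixed.dat

# cut each field at its layout offsets & rejoin with '|' delimiters
while IFS= read -r rec; do
  name=$(printf '%s' "$rec" | cut -c1-10)
  amount=$(printf '%s' "$rec" | cut -c11-16)
  date=$(printf '%s' "$rec" | cut -c17-24)
  printf '%s|%s|%s\n' "$name" "$amount" "$date"
done < /tmp/fixed.dat > /tmp/pipe.dat

cat /tmp/pipe.dat    # JONES     |001250|20240115
```

The generated uvcopy jobs do the same slicing, but also unpack packed/binary fields & translate EBCDIC, which this sketch ignores.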
COBOL copybooks are required by the jobs that generate the conversion jobs, except for files with no packed or binary fields and no zoned signs in unpacked numeric fields:
For these files a 'skeleton2' conversion job is supplied, which will simply translate the entire record from EBCDIC to ASCII. The skeleton2 job will be modified with the data filenames, the record size, & any indexed keys using the data conversion control file (generated from the mainframe LISTCAT report).
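The whole-record translate that skeleton2 performs can be pictured with dd's EBCDIC-to-ASCII table. This is only a rough analogue (skeleton2 is a uvcopy job, not dd), and it is safe exactly under the condition stated above: every byte is translated, so packed/binary fields would be destroyed.

```shell
# EBCDIC bytes for 'ABC123' (hex C1 C2 C3 F1 F2 F3, written in octal)
printf '\301\302\303\361\362\363' > /tmp/rec.ebc

# translate every byte from EBCDIC to ASCII -- no field awareness
dd conv=ascii if=/tmp/rec.ebc of=/tmp/rec.asc 2>/dev/null

cat /tmp/rec.asc    # ABC123
```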
#1. append the VU .profile on to the end of your .profile
#1a. Login as yourself --> your homedir
#1b. vi .profile <-- edit your profile
#1c. G <-- goto last line of your profile
#1d. :r /opt/uvsoftware/env/VUprofile <-- append Vancouver Utility profile
#1e. :wq! <-- write & quit
#2. log off & back on to make profile changes effective
#3. verify VU profile changes
#3a. echo $PATH <-- note PATH to uvadm/bin, uvadm/sf, etc
#4. mkdir cnvdata <-- make superdir for conversions =============
#5. cd cnvdata <-- change into superdir ==========
#6. cnvdatadirs <-- make subdirs for data conversions =========== - see next page --->
<![CDATA[
#!/bin/ksh
# cnvdatadirs - script to make sub-dirs for data conversions
#             - see DATAcnv1.doc & VSEDATA.doc
mkdir archive
mkdir d0ebc     # EBCDIC data files FTP'd from mainframe (variable length RDW)
mkdir d1ebc     # EBCDIC data files converted to Fixed Length for conversion
mkdir d2asc     # files converted to ASCII with same record layout
mkdir d4pipe    # data files converted to '|' pipe delimited format
mkdir cpy0      # original copybooks from mainframe (UPPER case)
mkdir cpy1      # converted lower case (except in quotes), 1-6 & 73-80 cleared
mkdir cpy2      # copybooks split to separate copybooks for occurs segments
mkdir cpys      # copied, modified for SQL, then generate maps,pfx1,pfp1,sql
mkdir ctl       # control files for datafile conversions
mkdir ftp       # subdir to receive files to be converted (& copybooks)
mkdir listpipe  # subdir to receive 1st 3 pipe|delim recs listed for validation
mkdir map1      # prelim record layouts from copybooks (start,end,length,type)
mkdir maps      # final record layouts gen from copybooks (start,end,length,type)
mkdir pfx1      # uvcopy jobs generated by gencnvA, convert EBCDIC to ASCII
mkdir pfx2      # uvcopy jobs completed with datafilenames (vs copybooknames)
mkdir pfx3      # Ebcdic->Ascii jobs modified for multi R/T if any, execute here
mkdir pfp1      # uvcopy jobs generated by genpipe1, convert to pipe delimited
mkdir pfp2      # uvcopy jobs copied from pfp1 & modified for multi R/Ts, etc
mkdir pfp3      # pipe|delim jobs modified for multi R/T if any, execute here
mkdir pf        # special uvcopy jobs for varlth records, occurs depending, etc
mkdir sf        # Script Files subdir for misc (as required) scripts
mkdir sqlTC1    # scripts to CREATE SQL tables (generated from COBOL copybooks)
mkdir sqlTC2    # sqlTC1 scripts duplicated here - if multi-tables per copybook
mkdir sqlTL1    # scripts to LOAD SQL tables (generated from COBOL copybooks)
mkdir sqlTL2    # sqlTL1 scripts duplicated here - if multi-tables per copybook
mkdir stats     # statistics (various summary tables from varstat1,etc)
mkdir tmp       # temporary miscellaneous intermediate files
mkdir unzip     # unzip files from clients, then move to appropriate subdir
mkdir zip       # copy files here, then zip files before sending to clients
exit 0
]]>
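Step #1 of the profile setup above appends lines that amount to something like this sketch. The /opt/uvadm paths and the cnvdata location are illustrative assumptions; the real values come from /opt/uvsoftware/env/VUprofile.

```shell
# Hypothetical profile additions -- adjust paths to your install
export CNVDATA=$HOME/cnvdata                     # data conversion super-dir
export PATH=$PATH:/opt/uvadm/bin:/opt/uvadm/sf   # VU commands & scripts
alias cdc='cd $CNVDATA'                          # shortcut used in every procedure
echo "CNVDATA=$CNVDATA"
```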
sf/osql_create.bat
sf/osql_load.bat
C:\cnvdata
 :-----osql_create.bat    <-- script to CREATE All tables on SQL Server
 :-----osql_load.bat      <-- script to LOAD All tables on SQL Server
 :
 :-----pipe               <-- subdir for all pipe|delimited files
 :       :-----aaaaa.dat    - FTP'd from unix/linux system
 :       :-----bbbbb.dat
 :       :-----...etc...
 :-----sqlTC              <-- subdir for all batch-files to CREATE all tables
 :       :-----aaaaa        - FTP'd from unix/linux system
 :       :-----bbbbb
 :       :-----...etc...
 :-----sqlTL              <-- subdir for all batch-files to LOAD all tables
 :       :-----aaaaa        - FTP'd from unix/linux system
 :       :-----bbbbb
 :       :-----...etc...
Mainframe --FTP--> cpy0 --cleanup--> cpy1 --cp/split/rename--> cpy2 --copy & rename as datafile--> cpys

cpys --cobmap1--> maps --uvdata51--> pfx1 [ --uvdata52--> pfx2 ] --cp & vi--> pfx3

cpys --cobmap1--> maps --genpipe1--> pfp1 [ --uvdata52--> pfp2 ] --cp & vi--> pfp3

cpys --cobmap1--> maps --sqlcreate2--> sqlTC1 [ --uvdata56--> sqlTC2 ] --select/vi--> sf/osql_create.bat

maps --sqlcreate2--> sqlTL1 [ --uvdata56--> sqlTL2 ] --select/vi--> sf/osql_load.bat

MAINFRAME --FTP--> d1ebc --uvcopy jobs gen by uvdata51--> d2asc --uvcopy jobs gen by genpipe1--> d4pipe --FTP--> SQL Server C:\cnvdata\pipe

sqlTC1/2 --select/vi--> sf/osql_create.bat --FTP--> SQL Server C:\cnvdata\osql_create.bat

sqlTL1/2 --select/vi--> sf/osql_load.bat --FTP--> SQL Server C:\cnvdata\osql_load.bat

sqlTC1/2 --FTP batch-files--> C:\cnvdata\sqlTC --osql_create.bat (gen by sqlcreate2)--> create Tables

sqlTL1/2 --FTP batch-files--> C:\cnvdata\sqlTL --osql_load.bat (gen by sqlcreate2)--> load Tables
We will present the 'CONCISE' operating instructions here in Part 1 where you can easily find them. BUT, if you are working on your 1st conversion, do NOT attempt to use them without studying the illustrated samples of DataFiles, CopyBooks, generated uvcopy jobs, scripts, etc, listed in Part_3.
After each procedure we will give you a link where you can see an illustration of the input/output for that procedure. Do not execute procedures until you understand them.
In some cases there will be manual editing to be done before you can execute a procedure. The main examples of these are editing the control file with the copybooknames for each datafile & editing the generated conversion jobs with record type testing for files with multiple record types.
Data conversion jobs are generated automatically from record layouts. No manual changes are required, EXCEPT for data files with 'multiple record types' or 'occurs depending on' segments.
When creating pipe|delimited files for loading SQL tables, you will need to write uvcopy jobs to split multi R/T data-files & Occurs depending on segments to separate files for loading separate tables. See samples in Part_2.
When converting EBCDIC to ASCII retaining the same layouts for running COBOL programs (same as on the mainframe), you will not need to write uvcopy split jobs, but you will need to modify the auto-generated jobs for multi R/T files, inserting code to test the R/T field & branch to the appropriate bank of auto-generated conversion instructions. See examples in DATAcnv1.htm.
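The split idea itself is simple. Here is a sketch assuming the records are already ASCII text and the R/T code is a hypothetical first byte; real EBCDIC files with packed fields need a uvcopy job instead, as described above.

```shell
# three records, R/T code in column 1 ('A' or 'B')
printf 'Aalpha-record-1\nBbeta-record-1\nAalpha-record-2\n' > /tmp/multirt.dat

# route each record to a separate file named for its R/T byte,
# so each file can load its own table
awk '{ rt = substr($0, 1, 1); print > ("/tmp/rt_" rt ".dat") }' /tmp/multirt.dat

wc -l < /tmp/rt_A.dat    # count of type-A records (two here)
wc -l < /tmp/rt_B.dat    # count of type-B records (one here)
```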
These instructions assume that you have FTP'd & stored the datafiles & record layouts in the subdirs illustrated on the previous page - EBCDIC files in d1ebc/. The output subdirs should be empty (d2asc,cpys,maps,pfx123,pfp123,etc).
#1. Login --> your homedir =====
#2. cdc <-- alias cdc='cd $CNVDATA' === - change to data conversion superdir to perform data conversion
#3. cd d0ebc <-- change into subdir to receive data files from mainframe ========
#4. ftp mainframe IP# =================
#4a. --> enter userid/password when prompted
#4b. --> binary        <-- ensure BINARY mode
#4c. --> cd _____      <-- change to subdir if required
#4d. --> get _____     <-- get data files desired
#4e. --> bye           <-- end FTP session
#4f. ls -l d0ebc <-- confirm FTP DATA files received OK ===========
#5. cd ../cpy0 <-- change to subdir to receive copybooks from mainframe ==========
#6. ftp mainframe IP# =================
#6a. --> enter userid/password when prompted
#6b. --> binary        <-- ensure BINARY mode
#6c. --> cd _____      <-- change to subdir if required
#6d. --> get _____     <-- get copybooks desired
#6e. --> bye           <-- end FTP session
#6f. ls -l cpy0 <-- confirm FTP copybooks received OK ==========
#01. Login --> your homedir =====
#02. cdc <-- alias cdc='cd $CNVDATA' === - change to data conversion superdir to perform data conversion
#03. ls -l d0ebc    <-- list filenames of data-files (on the screen)
     ===========
#03a. lslp d0ebc    <-- print filenames of data-files (hardcopy on laser printer)
      ==========        ('lslp' prints to laser via uvlp12 script)
#04. ls -l cpy0     <-- list filenames of copybooks (on the screen)
     ==========
#04a. lslp cpy0     <-- print filenames of copybooks (hardcopy on laser printer)
      =========         ('lslp' prints to laser via uvlp12 script)
#01. Login          --> your homedir
     =====
#02. cdc            <-- alias cdc='cd $CNVDATA'
     ===                - change to data conversion superdir
#03. cleanupcpy cpy0 cpy1   <-- cleanup mainframe COBOL copybooks
     ====================       - clear cols 1-6 & 73-80
                                - translate to lower case (except in quotes)
#04. uvcopyx cobmap1 cpy1 map1 uop=q0i7p0 <-- generate record layouts ==================================== from COBOL copybooks
#05. uvlpd12 map1 <-- print all copybook/maps in subdir map1/... ============ - do NOT print cpy1/... - DO print map1/... with the vital layout info
#05a. uvlpd12D map1 <-- Alternative if your copybooks are voluminous ============= - uvlpd12D prints Duplex (vs uvlpd12 Simplex)
The conversion operating procedures are simplified if we can assign the same names to data files & their corresponding copybook. We can not do this if there are multiple data-files for 1 copybook (as for GDG generation files).
For multi-files per copybook, we will create a control file to relate each data-file name to its copybook-name. We then run utility 'uvdata52' to replicate the conversion job (created from copybooks) for each data-file in the control file, & name the conversion job from data-file-name (vs copybook-name). We will present those alternate Op/Instns in Part_2.
Here we will assume 1 data-file per copybook & rename the copybook to match the data-file (removing any extensions).
#01. Login          --> your homedir
     =====
#02. cdc            <-- alias cdc='cd $CNVDATA'
     ===                - change to data conversion superdir
#03. renameL cpy1     <-- translate all filenames to Lower case
     ============         (easier to work with especially on unix/linux)

#04. rename-XX cpy1   <-- remove any extension present (.cpy, .txt, etc)
     ==============

#05. cd cpy1          <-- change into subdir to make renames simpler

#06a. mv copybooknameA datafilenameA   <-- rename copybook to match its datafile
#06b. mv copybooknameB datafilenameB
#06c. mv copybooknameC datafilenameC
#06d. mv ---etc--- ---etc---
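renameL and rename-XX are Vancouver Utility scripts (listed in Part_4). A plain-shell sketch of the same two actions, lower-casing names and then dropping extensions, would be (demo directory and filenames are made up):

```shell
mkdir -p /tmp/cpy1.demo && cd /tmp/cpy1.demo
touch CUSTFILE.cpy NLFILE.TXT               # sample copybook names

for f in *; do
  lc=$(printf '%s' "$f" | tr 'A-Z' 'a-z')   # lower-case (the renameL idea)
  base=${lc%.*}                             # strip extension (the rename-XX idea)
  [ "$f" = "$base" ] || mv "$f" "$base"
done

ls    # now: custfile nlfile
```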
For 'multi R/T files', create separate copybooks by extracting redefined areas to separate copybooks. The entire 01 record might be redefined, or may be a root common to all R/Ts in which case you need to duplicate the root ahead of each R/T redefined.
Copybooks with 'occurs depending on' must be split into multiple copybooks. Assign occurs segment names by suffixing an identifier to the copybook/datafile name, for example: 'nlfile' might become nlroot, nlcredit, nlpastdue, nldiligence, etc. The 'root' segment will omit the occurs depending segments.
Copybooks for Fixed-length files without multi R/T's might need no changes.
But if you wish to insert a sequence# for database 'primary key' and 'foreign key' columns, then you could insert the '**sql:...' controls as shown below.
The '**sql:...' controls are used by 'sqlcreate2', which converts the copybooks to SQL table create & load batch scripts.
For copybooks with no redefined splits or for root of split files: - insert 10 digit seq# for SQL primary key - precede with '**sql...' control as follows:
**sql:next='bigint not null primary key identity(1,1)'
   03  xxroot-key   pic s9(10).
   ======================================================
   03  filler       pic x(10).
For copybooks of redefined split files: - insert 10 digit seq# for 'primary key' preceded with sql: field type control - insert 10 digit seq# for 'foreign key' preceded with sql: field type control
**sql:next='int not null primary key identity(1,1)'
   03  xxredef-id   pic s9(10).
**sql:next='bigint foreign key references nmroot(sql_key)'
   03  xxroot-key   pic s9(10).
   ==========================================================
   03  filler       pic x(10).
The 10 byte filler provides separation from original record data and provides space in case you wish to insert items such as original occurs# for variable length datafiles with occurs depending on segments.
#01. Login          --> your homedir
     =====
#02. cdc            <-- alias cdc='cd $CNVDATA'
     ===                - change to data conversion superdir
#03. cp cpy1/* cpy2   <-- copy copybooks before splitting occurs segments
     ==============
#04. vi cpy2/* <-- inspect/edit copybooks ========= - split variable lth occurs segments to separate copybooks
#04a. --> ma                          <-- mark 'a' begin segment
#04b. --> mb                          <-- mark 'b' end segment
#04c. --> :'a,'b w cpy2/segmentxx     <-- write separate segments to cpy2/...
      =========================
      --> etc for other segments
#05. rm cpy2/original-occurs-copybooks ================================= - remove original copybooks that have been split to multiple segments - for example, remove 'nlfile', now split to nlroot,nlcredit,nlpastdue,etc
#06. vi cpy2/* <-- modify copybooks as required for SQL tables =========
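The marked 'a,'b writes in vi can also be scripted with sed once the segment line ranges are known. The copybook and the line numbers below are hypothetical; check your own layouts, and note a real split segment may also need level numbers adjusted.

```shell
# hypothetical copybook: lines 1-2 are the root, lines 3-4 the occurs segment
cat > /tmp/nlfile.cpy <<'EOF'
01  nl-record.
    03  nl-acct     pic 9(08).
    03  nl-credit   occurs 1 to 99 times depending on nl-cr-cnt.
        05  nl-cr-amt   pic s9(7)v99 comp-3.
EOF

sed -n '1,2p' /tmp/nlfile.cpy > /tmp/nlroot     # root copybook, occurs omitted
sed -n '3,4p' /tmp/nlfile.cpy > /tmp/nlcredit   # occurs segment copybook
```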
**sql:next='bigint not null primary key identity(1,1)'
   03  xxroot-key   pic s9(10).
   ======================================================

1a. For copybooks of redefined split files:
    - insert 10 digit seq# for primary key & foreign key as follows:

**sql:next='int not null primary key identity(1,1)'
   03  xxredef-id   pic s9(10).
**sql:next='bigint foreign key references nmroot(sql_key)'
   03  xxroot-key   pic s9(10).
   ==========================================================
COBOL applications often define date fields with a group-name followed by multiple 2 byte fields, which would be pipe|delimited as separate 2 byte fields. For SQL, it is probably desirable to combine the 4 x 2 byte fields into one 8 byte field, which will then be coded as 'datetime' by the sqlcreate2 job. Consider the following example:
09  NL-START.
    11  NL-START-CCYY.
        13  NL-START-CC    PIC 9(02).
        13  NL-START-YY    PIC 9(02).
    11  NL-START-MM        PIC 9(02).
    11  NL-START-DD        PIC 9(02).
09 NL-START-ccyymmdd pic 9(8). <-- desired alternative =================================
nl_start_dt_ccyymmdd datetime default null =========================================== - sqlcreate2 will then code as above
We provide a special uvcopy utility to modify the copybook cc,yy,mm,dd fields as shown above. The utility is named 'ccyymmdd1' & you would execute as follows:
#01. Login --> your homedir =====
#02. cdc <-- alias cdc='cd $CNVDATA' === - change to data conversion superdir
#03. uvcopyx ccyymmdd1 cpy2 cpys uop=q0i7 ==================================== - combine multi date subfields into 1 8 byte date field as shown above
#03a. cp cpy2/* cpys <-- Alternative (if ccyymmdd1 NOT run) ==============
The 1st time you convert a new set of copybooks & datafiles, you should follow the step by step procedures before & after this point. After completing the 1st conversions & loading the database tables, you will probably find some problems that can be corrected by modifying the copybooks & reconverting.
After making these copybook corrections, you can use scripts 'gensqljobs1' & 'execsqljobs1' to shortcut the re-generation & re-conversion steps. But do NOT use these scripts on the 1st generation/conversion of a new set of files, because you can better detect & correct any problems using the step by step procedures.
#01. Login --> your homedir =====
#02. cdc <-- alias cdc='cd $CNVDATA' === - change to data conversion superdir
#03. vi cpys/* <-- correct copybooks as required =========
#04. gensqljobs1 <-- Re-Generate all jobs&scripts for conversion to pipe|delim =========== - special script combines procedures from '1G1' thru '1G4'
#05a. uvcopyxx 'pfd/*' <-- run uvcopy jobs to split redefines to sep files ================ - see details on page '1I2'
#05b. runall sfd <-- run scripts to split redefines to separate files ========== - some files do not need the more powerful uvcopy job
#06. execsqljobs1 <-- Re-Execute all jobs&scripts to convert to pipe|delimited ============ - special script combines procedures from '1J1' thru '1K2'
See gensqljobs1 & execsqljobs1 listed on page '4A1' & '4A2'.
The procedures on '1E1' - '1E4' have created the copybook/maps (record layouts) required to generate pipe|delimited files & the SQL batch scripts to create & load the SQL tables. Copybooks with 'occurs depending on' have been split into separate copybooks for each segment.
#01. Login --> your homedir =====
#02. cdc <-- alias cdc='cd $CNVDATA' === - change to data conversion superdir
#03. uvcopyx cobmap1 cpys maps uop=q0i7p0 ==================================== - convert COBOL copybooks to record layouts (field start/end on right) - see sample 'cobmap' in Part_3.
#04. uvlpd12 maps <-- print all copybook/maps in subdir maps/... ============ - do NOT print cpys/... - DO print maps/... with the vital layout info
#04a. uvlpd12D maps <-- Alternative if your copybooks are voluminous ============= - uvlpd12D prints Duplex (vs uvlpd12 Simplex)
If desired/required, you could 1st convert the EBCDIC datafiles to ASCII files with the same layout (not pipe-delimited). You would do this if you intend to run the mainframe COBOL programs on Unix/Linux with no major changes.
But if the mainframe datafiles are all being converted to pipe-delimited for loading SQL tables, then we can skip 'uvdata51', which converts EBCDIC to ASCII with the same record layout (preserving any packed/binary fields).
#05. uvcopyx uvdata51 maps pfx1 uop=q0i7 =================================== - convert cobmaps to data conversion 'uvcopy jobs' - may not need for pipe delimited & SQL create/load tables
----------------------- End Optional uvdata51 procedure ----------------------
#06. uvcopyx genpipe1 maps pfp1 uop=q0i7a1d4
     =======================================
     - convert cobmaps to uvcopy jobs to pipe|delimit the data files
     - option 'a1' translates EBCDIC to ASCII
       - option 'a0' if data input is already ASCII (via uvdata51 jobs)
     - option 'd4' to generate 'bal dat8n' after 8 digit dates to edit
       'ccyy/mm/dd' & convert '|00000000|' to '||'
       (sqlcreate2 will code these fields as 'datetime default null')
     - see sample output in Part_3.
#07. cp -r pfp3 pfp3.old <-- backup pipe|delimit jobs from prior conversion =================== - in case of any manual edits (usually not required)
#08. cp pfp1/* pfp3 <-- copy pipe delimit jobs to pfp3 (execution subdir) ============== (to standardize execution in case uvdata52 used)
The next step 'uvdata52' is required only if you have multiple Data-Files for any 1 copybook. You do not need it if you have only 1 datafile per copybook and you can make the names the same.
See Part_2 for all the procedures required to allow multiple Data-files per copybook. Part_2 shows you how to edit a control-file to relate datafile-names to copybook-names. uvdata52 uses the control-file to replicate the 1 uvcopy job generated from the copybook & name the multiple outputs for the data-file vs the 1 copybook.
#09. uvcopy uvdata52,fili1=ctl/ctlfile54,fild2=pfp1,fild3=pfp2,uop=r1s1t0 ==================================================================== - insert datafilenames (vs copybook names) in pipe|delimit jobs - required only if you have multiple datafiles for any 1 copybook and you have prepared a control-file to relate datafiles to copybooks
#10. cp pfp2/* pfp3 <-- copy completed jobs to pfp3 (execution subdir) ============== - consistent with or without control-file procedures
----------------------- End Optional uvdata52 procedure ----------------------
#11. vi pfp3/* <-- edit if required (usually not for SQL gens) ========= - required for multi R/T's for non-SQL gens
#12. uvlpd12 pfp3 <-- print all pipe|delimit jobs to confirm gen OK ============ & assist with any changes required
#01. Login --> your homedir =====
#02. cdc <-- alias cdc='cd $CNVDATA' === - change to data conversion superdir
#03a. vi ctl/overridexx <-- create override table for SQL data types ================= - used by sqlcreate2 to modify copybook defaults
#03b. cp ctl/overridexx ctl/override99 <-- copy to filename coded in sqlcreate2 ================================
#04. uvcopyx4 sqlcreate2 maps sqlTC1 sqlTL1 uop=q0i7d0 ================================================= - convert cobmaps to batch-files to create & load SQL tables - option 'd0' converts 8 digit dates to 'datetime' SQL field type - I/O datafiles coded as copybook names - see sample output illustrated in Part_3.
#05a. uvlpd12 sqlTC1 <-- print all batch-files to create SQL tables ==============
#05b. uvlpd12 sqlTL1 <-- print all batch-files to load SQL tables ==============
# override99 - table of SQL field-type overrides
#            - by Owen Townsend, UV Software, Dec 20/2010
#            - uvcopy job sqlcreate2 looks up table for match on name
#              & replaces normal logic field-type from table
#            - 00-19 = filename, 20-49 = fieldname, 50-80 = override field-type
#        1         2         3         4         5         6         7
#234567890123456789012345678901234567890123456789012345678901234567890123456789
acinstitution      ac_dl_date                    numeric (6,0)
acinstitution      ac_inst_bank_acct             varchar (25)
bdemployeradrs     bd_ea_phone                   numeric (10,0)
bdroot             bd_bank_transit_num           numeric (9,0)
bdroot             bd_grad_dt_cym                numeric (6,0)
cfroot             cf_interest                   numeric (9,3)
dltracking         dl_tr_start_dt                numeric (6,0)
dltracking         dl_tr_end_dt                  numeric (6,0)
dlroot             dl_appl_amt                   decimal (9,2)
dlroot             dl_past_int                   decimal (9,2)
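The lookup sqlcreate2 applies can be sketched in awk using the column positions documented in the table header. This is a simplification of the real uvcopy logic; the single-line table below is fabricated for the sketch.

```shell
# one override line, laid out in the documented columns
# (filename cols 1-19, fieldname cols 20-49, override type cols 50+)
printf '%-19s%-30s%s\n' dlroot dl_appl_amt 'decimal (9,2)' > /tmp/override99

otype=$(awk -v f=dlroot -v n=dl_appl_amt '{
    fn = substr($0, 1, 19);  gsub(/ +$/, "", fn)   # filename, trailing blanks trimmed
    fd = substr($0, 20, 30); gsub(/ +$/, "", fd)   # fieldname
    if (fn == f && fd == n) print substr($0, 50)   # override field-type
}' /tmp/override99)

echo "$otype"    # decimal (9,2)
```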
The batch-files to create & load SQL tables include comments specifying the commands to execute them. We can extract those commands into '.bat' scripts to be FTP'd & executed on the Windows SQL Server machine. See the completed samples on page '1K1' & '1K2'.
#01. Login --> your homedir
     =====

#02. cdc                    <-- alias cdc='cd $CNVDATA'
     ===                      - change to data conversion superdir

#03a. mkdir tmp1 tmp2       <-- create subdirs for outputs
      ===============

#03b. rm tmp1/* tmp2/*      <-- OR remove all files if subdirs already exist
      ================

#04a. uvcopy keep1d,fild1=sqlTC2,fild2=tmp1,arg1=osql
      ===============================================
      - select the 'osql' commands to CREATE tables from all batch-files
      - writes a 1 line file in tmp1/... for each batch-file in sqlTC2/...

#04b. uvcopy keep1d,fild1=sqlTL2,fild2=tmp2,arg1=osql
      ===============================================
      - select the 'osql' commands to LOAD tables from all batch-files
      - writes a 1 line file in tmp2/... for each batch-file in sqlTL2/...

#05a. cat tmp1/* >sf/osql_create.bat
      ==============================
      - concatenate all create commands into a script to create all tables

#05b. cat tmp2/* >sf/osql_load.bat
      ============================
      - concatenate all load commands into a script to load all tables

#06a. vi sf/osql_create.bat  <-- edit script to remove '/*' & '*/' comments
      =====================
      --> :%s/^\/\*//        <-- remove '/*' from beginning of lines
      --> :%s/\*\/$//        <-- remove '*/' from end of lines
      - see completed batch script to Create tables on page '1K1'

#06b. vi sf/osql_load.bat    <-- edit script to remove '--' comments
      ===================
      --> :%s/^-- //         <-- remove '-- ' from beginning of lines
      - see completed batch script to Load tables on page '1K2'
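The same comment stripping can be scripted non-interactively with 'sed'; a sketch equivalent to the vi substitutions above (the sample batch-file contents shown here are made-up stand-ins):

```shell
# sample inputs - stand-ins for the batch-files built in steps #05a/#05b
mkdir -p sf
printf '/*osql -E -Q "create table x"*/\n' > sf/osql_create.bat
printf -- '-- osql -E -Q "bulk insert x"\n' > sf/osql_load.bat

# strip '/*' & '*/' markers (same edits as the :%s commands in #06a/#06b)
sed -e 's|^/\*||' -e 's|\*/$||' sf/osql_create.bat > sf/osql_create.tmp &&
  mv sf/osql_create.tmp sf/osql_create.bat
# strip leading '-- ' markers from the load script
sed -e 's|^-- ||' sf/osql_load.bat > sf/osql_load.tmp &&
  mv sf/osql_load.tmp sf/osql_load.bat

cat sf/osql_create.bat sf/osql_load.bat
```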
Use uvhd to print a few records from each file to determine the format (fixed-length, variable-length RDW, or text with CR/LF record terminators). The uvhd print samples will help you verify the formats & determine the conversion steps required:
#1. Login --> your homedir
    =====

#2. cdc                      <-- alias cdc='cd $CNVDATA'
    ===                        - change to data conversion superdir to perform data conversion

#3. ls -l d0ebc              <-- list filenames to be converted
    ===========                  (previously FTP'd from mainframe)

#4a. renameL d0ebc           <-- translate all filenames to Lower case
     =============               (easier to work with, especially on unix/linux)

#4b. rename-XX d0ebc         <-- remove any extension present (.dat, .txt, etc)
     ===============           - see above re: same names for datafiles & copybooks

#5. uvhd d0ebc/filexx az4    <-- investigate file - assuming RDW
    =====================      - option 'a' translates character line to ASCII
                               - option 'z4' RDW (recsize binary 1st 2 bytes)

    --- OR ---

#5a. uvhd d0ebc/filexx ar256 <-- investigate file - if FIXED-LENGTH
     =======================   - option 'a' translates character line to ASCII
                               - option 'r256' for Record-size 256 (default)
#6. Print the 1st few records with uvhd, initiated above & prompting for
    commands. Use 'i3' Immediate print if your utilities & profile are setup as
    recommended (your profile has export UVLPDEST=-dprintername near you).
    uvhd will call uvlist script 'uvlp12' to print the specified no of records.

#6a. Alternatively (if your profile is not setup with a printer destination),
     use 'p3' to Print 3 records to a file for subsequent printing by other means.
#7. --> i3         <-- Immediate print 1st 3 records
    ===

#7a. --> p3        <-- Print 1st 3 records to a file 'tmp/filexx_yymmdd_hhmmssP'
     ===               (date/time stamped in the tmp subdir)

#7b. --> q         <-- quit uvhd

#7c. lp tmp/filexx_yymmdd_hhmmssP   <-- print uvhd formatted records using 'lp'
     ============================       or whatever print utility is available
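As a quick cross-check on option 'z4' without uvhd, the RDW prefix can be decoded with standard utilities. A minimal sketch, assuming (as stated above) the record size is a big-endian binary halfword in the 1st 2 of the 4 prefix bytes; the sample file here is fabricated:

```shell
# sample RDW file: 4 byte prefix x'01900000' = record size 400 (big-endian)
mkdir -p d0ebc
printf '\001\220\000\000' > d0ebc/filexx

# read the 1st 2 prefix bytes & show them as a decimal record size
dd if=d0ebc/filexx bs=1 count=2 2>/dev/null | od -An -tu1 |
  awk '{ print "1st record size:", $1 * 256 + $2 }'
```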
You can run 'uvcopy varstat1' on RDW files to create summary tables of record counts for each record-size. You can run 'uvcopy tableRT1' to create summary tables of record counts for each record-type.
#1. Login --> your homedir
    =====

#2. cdc              <-- alias cdc='cd $CNVDATA'
    ===                - change to data conversion superdir to perform data conversion

#3. ls -l d0ebc      <-- list filenames to be converted
    ===========          (previously FTP'd from mainframe)

#4. mkdir stats      <-- make subdir to receive reports (if not already present)
    ===========

#5. uvcopy varstat1,fili1=d0ebc/filexx    <-- run varstat1 for 1 file at a time
    ==================================
    - create table counts of records by record-size (assuming filexx is RDW)
    - report will be written to stats/filexx.var - see sample report in Part_3

#5a. uvcopyx varstat1 d0ebc      <-- OR, run varstat1 for ALL files in d0ebc/...
     ======================
     - any non-RDW files will cause an errmsg & the script will continue to the next file

#6. uvcopy tableRT1,fili1=d0ebc/filexx,filo1=stats/filexx.typ,uop=a12b3r400t1
    =========================================================================
    - sample job to table summarize record counts by Record-Type bytes 12-14
    - option 'a12' for R/T start (0 relative) & 'b3' for R/T length
    - option 'r400' specifies Record-Size 400 (or use 'z4' for RDW)
    - option 't1' to translate EBCDIC records to ASCII (so you can read the report)
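For line-delimited ASCII data, the record-type tally that 'tableRT1' produces can be approximated with awk. A sketch only; the real job also handles the EBCDIC translation & RDW files, which awk cannot (the sample data below is made up):

```shell
# sample fixed-length records as text lines, Record-Type in bytes 12-14
printf '111111111111AAAxxxx\n111111111111AAAyyyy\n111111111111BBBzzzz\n' > datafile

# count records by the 3 byte R/T at offset 12 (awk substr is 1 relative)
awk '{ cnt[substr($0, 13, 3)]++ }
     END { for (rt in cnt) print rt, cnt[rt] }' datafile | sort
```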
Page '1E4' suggested inserting '*sql:...' controls & sequence#s at the beginning of the copybooks for database primary & foreign keys. The '*sql...' controls are used by 'sqlcreate2', which converts the copybooks to SQL table create & load batch scripts.
For copybooks with no redefined splits, or for the root of split files:
 - insert a 10 digit seq# for the SQL primary key
 - precede it with a '**sql...' control as follows:

     **sql:next='bigint not null primary key identity(1,1)'
     03  xxroot-key      pic s9(10).
     ======================================================
     03  filler          pic x(10).
For copybooks of redefined split files:
 - insert a 10 digit seq# for the 'primary key', preceded with an sql: field type control
 - insert a 10 digit seq# for the 'foreign key', preceded with an sql: field type control

     **sql:next='int not null primary key identity(1,1)'
     03  xxredef-id      pic s9(10).
     **sql:next='bigint foreign key references nmroot(sql_key)'
     03  xxroot-key      pic s9(10).
     ==========================================================
     03  filler          pic x(10).
The 10 byte filler separates the original record data and provides space in case you wish to insert items such as the original occurs# for variable length datafiles with occurs depending on segments.
You will need to write customized 'uvcopy jobs' if you have variable length datafiles with 'occurs depending on' segments or fixed-length datafiles with 'multiple redefined record types'. See examples of these uvcopy jobs in Part_3.
There is of course a 'learning curve' required to write these jobs, but UV Software can do this for you or provide training if you wish to become self-reliant.
Note that 'uvcopy' is ideally suited to working with mainframe type datafiles on unix/linux/windows systems, because it knows about mainframe concepts such as: packed decimal, zoned signs, Indexed files (Micro Focus COBOL compatible), RDW files (binary record size prefixes), EBCDIC/ASCII translates, etc. The standard system utilities do not provide facilities to handle these constructs.
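For example, COMP-3 packed decimal stores 2 digits per byte with the sign in the final nibble ('d' = negative, 'c'/'f' = positive). A rough sketch decoding the 3 byte field x'12345C' (= +12345) with standard utilities, just to illustrate the format that uvcopy handles natively:

```shell
# hex dump the packed bytes, then split off the sign nibble
printf '\022\064\134' | od -An -tx1 | tr -d ' \n' |
  awk '{ sign = (substr($0, length($0), 1) == "d") ? "-" : ""
         print sign substr($0, 1, length($0) - 1) }'
# prints: 12345
```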
The examples in Part_3 illustrate that uvcopy can do very complex work with only a few instructions vs the many more that COBOL would require. uvcopy is parameter driven; no compiles required, just edit & run.
For Fixed-Length data records without multiple record-types, we provide a pre-programmed uvcopy utility job 'insertseq1'. It will prompt for the record-size if you do not enter it on the command line.

You must therefore run this utility separately for each file (vs all files in a directory). Here is an example for 'fileA' with a record size of 400:
#01. Login --> your homedir
     =====

#02. cdc             <-- alias cdc='cd $CNVDATA'
     ===               - change to data conversion superdir to perform data conversion

#03a. uvcopy insertseq1,fili1=d0ebc/fileA,filo1=d1ebc/fileA,uop=r400
      ==============================================================
      - copy fileA from d0ebc to d1ebc inserting 10 byte seq# & 10 byte filler
      - output records will then be 420 bytes

#03b. uvcopy insertseq1,fili1=d0ebc/fileB,filo1=d1ebc/fileB,uop=r192
      ==============================================================
      - output records for fileB will be 212 bytes

#03c. ...etc... for all fixed-length datafiles w/o multi R/Ts
      =======================================================
See 'insertseq1' uvcopy utility job listed on page '4B1'
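The effect of insertseq1 can be pictured with a line-oriented sketch: prefix every record with a 10 digit sequence# plus 10 bytes of filler. (The real job operates on binary fixed-length EBCDIC records of the option 'r' size, not text lines; the sample data here is made up.)

```shell
# sample input: 2 fixed-length 'records' as text lines
mkdir -p d0ebc d1ebc
printf 'recA....\nrecB....\n' > d0ebc/fileA

# prefix each record with a 10 digit seq# + 10 byte blank filler
awk '{ printf "%010d%10s%s\n", NR, "", $0 }' d0ebc/fileA > d1ebc/fileA
head -1 d1ebc/fileA
```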
d0ebc ---------> d1ebc -----------> d2asc -----------> d4pipe -----> SQL Server
   pfd/* jobs       pfx3/* jobs        pfp3/* jobs        FTP C:\cnvdata\pipe
   hand written     gen by uvdata51    gen by genpipe1
#01. Login --> your homedir
     =====

#02. cdc                 <-- alias cdc='cd $CNVDATA'
     ===                   - change to data conversion superdir to perform data conversion

#03a. uvcopyxx 'pfd/*'   <-- run uvcopy jobs to split redefines to separate files
      ================     - see samples in Part_3

#03b. runall sfd         <-- run scripts to split redefines to separate files
      ==========           - some files do not need the more powerful uvcopy job

#04a. ls -l pfp3         <-- confirm pipe|delimit jobs already generated
      ==========

#04b. uvcopyxx 'pfp3/*'  <-- execute jobs to convert All files to pipe|delimited
      =================    - see samples in Part_3
#05a. ls -l d4pipe          <-- display output filenames
      ============            - do they have '.dat' extensions ?

#05b. rename+X d4pipe .dat  <-- append suffix '.dat' on all files in d4pipe/...
      ====================    - required by SQL Server table loader
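If the rename+X script is not at hand, the same '.dat' suffixing can be done with a plain shell loop; a sketch of what rename+X does in this step (the sample files below are empty stand-ins):

```shell
# sample output files from the pipe|delimit jobs
mkdir -p d4pipe
: > d4pipe/filea
: > d4pipe/fileb

# append '.dat' to every file in d4pipe/ not already suffixed
for f in d4pipe/*; do
  case "$f" in
    *.dat) ;;                    # already suffixed - leave as is
    *)     mv "$f" "$f.dat" ;;
  esac
done
ls d4pipe
```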
We will use the uvcopy 'listpipe1' utility, which by default lists the 1st 3 records in the file. First it lists the raw pipe delimited data record. Then it lists the data fields on separate lines with the copybook name.
#06. mkdir listpipe          <-- make subdir to receive print files
     ==============              (if not already present)

#07. uvcopy listpipe1,fili1=d4pipe/filexx.dat,fili2=maps/filexx
            ,filo1=listpipe/filexx
     ==========================================================

#07a. uvlp12 listpipe/filexx <-- print the file formatted by listpipe1
      ======================

#08. uvcopyx2 listpipe1 d4pipe maps listpipe uop=q0i7
     ================================================

#08a. uvlpd12 listpipe       <-- print ALL files formatted by uvcopyx2 listpipe1
      ================
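The detail lines of listpipe1 can be approximated with awk: each '|' separated field of the 1st record on its own line, numbered. (listpipe1 additionally shows the copybook field names from maps/...; the sample record here is fabricated.)

```shell
# sample pipe|delimited record (field contents are made up)
mkdir -p d4pipe
printf '0000000001|JONES|123.45\n' > d4pipe/filexx.dat

# list the fields of record 1, one per line with a field number
head -n 1 d4pipe/filexx.dat |
  awk -F'|' '{ for (i = 1; i <= NF; i++) printf "%3d %s\n", i, $i }'
```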
You will need to transfer the generated pipe|delimited files & the scripts for creating & loading the database tables to the SQL Server machine. We will simplify the transfer by zipping the files.

We will also transfer the copybooks, record layouts, uvcopy jobs, & the listpipe samples. The SQL Server DBA will find the record layouts & the listpipe samples useful for validating the pipe|delimited files.
See the subdirs being zipped on the next page (listing of zipsql1 script).
#01. Login --> your homedir
     =====

#02. cdc          <-- alias cdc='cd $CNVDATA'
     ===            - change to data conversion superdir

#03. zipsql1      <-- zip multiple subdirs (see list above) to subdir zip/...
     =======

#04. cd zip       <-- change into zip/ for FTP to alternate machine
     ======

#05. ftp altmachine IP#
     ==================

#05a. userid ----> xxxxx
#05b. password --> xxxxx
#05c. binary
#05d. cd ???
#05e. prompt      <-- to allow mput *
#05f. mput *      <-- send all files with 1 command
# zipsql1 - zip results of generating pipe|delim & SQL create/load scripts
#         - by Owen Townsend, UV Software, October 2010
#
echo "zip results of generating pipe|delim & SQL create/load scripts"
#
if [[ -d cpys && -d maps && -d pfp3 && -d sqlTC1 && -d sqlTL1 && -d d4pipe &&\
      -d listpipe ]]; then :
else echo "subdirs required: cpys,maps,pfp3,sqlTC1,sqlTL1,d4pipe,listpipe"
     exit 99; fi
#
cd cpys
zip -r ../zip/cpys.zip .
cd ../maps
zip -r ../zip/maps.zip .
cd ../pfp3
zip -r ../zip/pfp3.zip .
cd ../sqlTC1
zip -r ../zip/sqlTC1.zip .
cd ../sqlTL1
zip -r ../zip/sqlTL1.zip .
cd ../d4pipe
zip -r ../zip/d4pipe.zip .
cd ../listpipe
zip -r ../zip/listpipe.zip .
cd ../pf
zip -r ../zip/pf.zip .
cd ..
ls -l zip
exit 0
>>>>>>>>>>>>>> to be supplied
>>>>>>>>>>>>>> to be supplied
Part 4 will list the names of all Vancouver Utility scripts & uvcopy utilities used in the generation of the uvcopy jobs that perform the pipe|delimits and scripts that create & load the SQL tables.
We will list the contents of only a few of these utilities & scripts, since some are quite lengthy. We will flag those listed with an '*'.
Do not confuse the uvcopy utility jobs in Part 4 with the uvcopy job examples given in Part_3. These Part 4 'uvcopy utility jobs' are supplied with the Vancouver Utility package to generate the 'uvcopy worker jobs' which depend on the user's copybook record layouts.
4A0.  Vancouver Utility (VU) Scripts used in generating pipe|delimit files
      - cnvdatadirs, cleanupcpy, uvlp12, uvlpd12, uvlpd12D
      - renameL, rename-XX, rename+X
      - uvcopyx, uvcopyx2, uvcopyx4, uvcopyxx

4A1.  * gensqljobs1 - perform all steps to GENERATE jobs for pipe|delimit

4A2.  * execsqljobs1 - EXECUTE all jobs to pipe|delimit all data-files

4A3.  * zipsql1 - zip generated results for transfer to execute on alt machine

4B0.  VU uvcopy utility jobs used in generating pipe|delimit files
      - cobmap1, uvdata51, uvdata52, genpipe1, sqlcreate2, listpipe1
      - keep1d, varstat1, tableRT1

4B1.  * insertseq1 - insert sequence# in copybook for SQL Primary Key Integer

4C0.  Vancouver Utility "C" programs (used in these pipe|delimit procedures)
      - uvhd, uvlist, uvcopy
      - see more description on page '4C0'
cnvdatadirs

cleanupcpy

uvlp12

uvlpd12

uvlpd12D

renameL

rename-XX

rename+X

uvcopyx

uvcopyx2

uvcopyx4

uvcopyxx
*gensqljobs1  - run All procedures to generate all jobs for pipe|delimits
              - intended for reruns after copybook corrections
              - should perform step by step procedures 1st time with new files

*execsqljobs1 - EXECUTE all jobs to pipe|delimit all data-files
              - intended for reruns after copybook corrections
              - should perform step by step procedures 1st time with new files

*zipsql1      - zip pipe|delimit outputs for transfer to alternate machine
# gensqljobs1 - script to GENERATE jobs to pipe|delimit files for SQL tables
#             - also generates batch-files to create/load tables on SQL Server
#
# gensqljobs1  #<-- call this script to GENERATE pipe|delim jobs & batch-files
# ===========
# execsqljobs1 #<-- separate following script to EXECUTE generated jobs
#
# These scripts intended for Re-Generation & Re-execution after copybook changes
# You should 1st perform the step by step conversions/executions as documented
# in www.uvsoftware.ca/sqlconvert.htm to better detect & correct problems
# These scripts assume:
#
# 1. You are in $CNVDATA superdir with subdirs:
#    - cpys, maps, pfp1, pfp3, sqlTC1, sqlTL1, zip
# 2. You must have completed preliminary development for files & copybooks
#    as documented in SQLconvert.doc up to page '1E4' (where this script begins)
#    - redefined record types split to separate copybooks
#    - occurs depending segments split to separate copybooks
#    - sequence# field inserted for SQL primary key integer (if desired)
#
echo "Generate/Execute scripts to convert EBCDIC datafiles to pipe|delimited"
echo "files for loading SQL Server tables"
#
echo "#1. convert copybooks to cobmaps (layouts), enter=continue"; read reply;
uvcopyx cobmap1 cpys maps uop=q0i7p0      #<-- convert copybooks to record layouts
#===================================
#
echo "#2. convert cobmaps to pipe|delimit uvcopy jobs"; read reply;
uvcopyx genpipe1 maps pfp1 uop=q0i7a1d4   #<-- convert cobmaps to uvcopy jobs
#======================================
# option 'a1' translates EBCDIC to ASCII
# option 'a0' if data inputs ASCII (via uvdata51 jobs)
# option 'd4' to generate 'bal dat8n' after 8 digit dates
#   to edit 'ccyy/mm/dd' & convert '|00000000|' to '||'
#   (sqlcreate2 will code these fields as 'datetime default null')
#
# backup old uvcopy pipe|delimit jobs (pfp3) & copy from pfp1 (new gens)
echo "#3. copy generated jobs pfp1/... to pfp3/... for execution"; read reply;
rm -rf pfp3.old/*
mv pfp3 pfp3.old
mkdir pfp3
#
cp pfp1/* pfp3/   #<-- copy pipe delimit jobs to pfp3 (execution subdir)
#==============
ls -l pfp3        #<-- list jobs to be executed
#
echo "#4. convert cobmaps to scripts to create & load SQL tables"; read reply;
uvcopyx4 sqlcreate2 maps sqlTC1 sqlTL1 uop=q0i7d0
#================================================
# convert cobmaps to batch-files to create & load SQL tables
# option 'd1' to convert 8 digit dates to 'datetime' SQL field type
exit 0
# execsqljobs1 - EXECUTE jobs to convert EBCDIC datafiles
#                to pipe|delimited files for loading SQL Server tables
#
# execsqljobs1 #<-- call this script to EXECUTE pipe|delimit & listpipe
# ============
#
# gensqljobs1 - separate preceding script to GENERATE jobs & scripts
#             - also generates batch-files to create/load tables on SQL Server
#
# These scripts intended for Re-Generation & Re-execution after copybook changes
# You should 1st perform the step by step conversions/executions as documented
# in www.uvsoftware.ca/sqlconvert.htm to better detect & correct problems
# These scripts assume:
#
# 1. You are in $CNVDATA superdir with subdirs:
#    - cpys, maps, pfp1, pfp3, sqlTC1, sqlTL1, zip
#
# 2. You must have completed preliminary development for files & copybooks
#    as documented in SQLconvert.doc up to page '1E4' (where this script begins)
#    - redefined record types split to separate copybooks
#    - occurs depending segments split to separate copybooks
#    - sequence# field inserted for SQL primary key integer (if desired)
#
echo "Generate/Execute scripts to convert EBCDIC datafiles to pipe|delimited"
echo "files for loading SQL Server tables"
#
echo "#1. Execute pipe|delimit jobs d1ebc/... --> d4pipe/..."; read reply;
uvcopyxx 'pfp3/*'  #<-- execute All jobs, convert All files to pipe|delimited
#================
#
echo "#2. create sample listings 1st 3 records in each file"; read reply;
uvcopyx2 listpipe1 d4pipe maps listpipe uop=q0i7,rop=r0
#======================================================
#
echo "#3. create zip file results for transfer to alt machines"; read reply;
zipsql1   #<-- script, create zip files, to transfer/execute on alt machine
#======
exit 0
# zipsql1 - zip results of generating pipe|delim & SQL create/load scripts
#         - by Owen Townsend, UV Software, October 2010
#
echo "zip results of generating pipe|delim & SQL create/load scripts"
#
if [[ -d cpys && -d maps && -d pfp3 && -d sqlTC1 && -d sqlTL1 && -d d4pipe &&\
      -d listpipe ]]; then :
else echo "subdirs required: cpys,maps,pfp3,sqlTC1,sqlTL1,d4pipe,listpipe"
     exit 99; fi
#
cd cpys
zip -r ../zip/cpys.zip .
cd ../maps
zip -r ../zip/maps.zip .
cd ../pfd
zip -r ../zip/pfd.zip .
cd ../pfp3
zip -r ../zip/pfp3.zip .
cd ../sqlTC1
zip -r ../zip/sqlTC1.zip .
cd ../sqlTL1
zip -r ../zip/sqlTL1.zip .
cd ../d4pipe
zip -r ../zip/d4pipe.zip .
cd ../listpipe
zip -r ../zip/listpipe.zip .
cd ..
ls -l zip
exit 0
cobmap1

uvdata51

uvdata52

genpipe1

sqlcreate2

listpipe1

keep1d

varstat1

tableRT1
*insertseq1   - insert 10 byte sequence# at beginning of fixed-length datafiles
              - user option to specify record-size
              - sample usage on page '1I1'
# insertseq1 - insert sequence# at beginning of fixed-length records
#            - option for any record size
#            - by Owen Townsend, UV Software, October 2010
#
# Used to prepare fixed-lth EBCDIC records for conversion to pipe|delimited
# - 10 byte sequence# will become bigint primary key in SQL table
# - 10 byte filler follows to separate original record data
# - output records will be 20 bytes bigger than input (spcfd by option 'r')
#
#   uvcopy insertseq1,fili1=d0ebc/filexx,filo1=d1ebc/filexx,uop=r192
#   ================================================================
#
opr='uop=r99999 - default options'
opr='    r99999 - default invalid, forces valid entry'
uop=q1r99999
was=a8192b8192
fili1=?d0ebc/filexx,typ=RSF,rcs=8192
filo1=?d1ebc/filexx,typ=RSF,rcs=8192
@run
       opn    all                  open all files
       mvn    $rr,$uopbr           store recsize option in register 'r'
       mvn    $rs,$rr              copy to register 's'
       add    $rs,20               + 20 for output recsize
       cmn    $rr,8192             user recsize valid ?
       skp<   man20
       can    'cancelled - recsize option r must be < 8192'
#
# begin loop to get/process/put records til EOF reached
man20  get    fili1,a0($rr8192)    get next record
       skp>   eof                  (cc set > at EOF)
       add    $ca1,1               increment seq#
       clr    b0($rs8192),x'40'    clear output record area
       mvn    b0(10ze),$ca1        insert sequence# (Zoned Ebcdic)
       mvc    b20($rr8000),a0      append input record data
       put    filo1,b0($rs8000)    write output record
       skp    man20                return to get next record
#
# EOF - close files & end job
eof    cls    all
       eoj
#
uvhd

uvlist

uvcopy