Part_0 | - Introduction & Overview |
- ADMjobs essential to Vancouver Utility conversions (JCL,COBOL,DATA) | |
- recommended References & Books for Unix/Linux system Administration | |
- recommended downloads useful for mainframe conversion sites |
Part_1 | - install Vancouver Utilities - brief review |
(should already be installed following instructions in install.htm). | |
- setting up profiles for appsadm, programmers,& operators | |
- using 'stub' profiles in home dirs, calling a common profile | |
- a 'common_profile' makes site administration much easier | |
- 'stub_profile's (renamed as .profile or .bash_profile) in each user's | |
homedir allows them to code their preferences | |
- setup 'appsadm' (applications administrator) homedir /home/appsadm | |
to hold profiles modified for site, site specific scripts,crontabs,etc | |
- Preparations for UV Software Onsite Training & Conversion Assistance |
Part_2 | - RAID Arrays, Partitioning,& File System Design |
- directory structures suggested for testing & production | |
- environmental variables RUNLIBS & RUNDATA allow testing & production | |
on 1 machine without having to change any JCL/scripts | |
- alternative designs for multiple sets of libraries & data | |
possibly for organizations with multiple companies on 1 machine |
Part_3 | - Backup & Restore |
- samples of production Data & Libraries that need to be backed up | |
- suggested directories for on-disc backup & restore | |
- manual commands reviewed (cp -r, tar, cpio) | |
- sample scripts for backup & restore (to disc or tape) |
Part_4 | - Advanced Backup & Restore |
- complete system scheduled by 'cron' | |
- backup directory trees defined by $PRODDATA, $PRODLIBS,& $HOMEDIRS | |
- 2 days backup on-disc unzipped & instantly available | |
- zipped versions of nightly backups maintained on-disc for 40 days | |
- 1st of month zip file maintained on-disc for 15 months | |
- 1st of year zip file maintained on-disc for 15 months | |
- nightly zip files written to tape by cron & tapes cycled over 30 days | |
- tapes stored onsite in fireproof vault | |
- end of month tape taken offsite & new tape inserted in rotation |
Part_5 | - using 'cron' (automatic job scheduling) |
- to run backups & other jobs (nightly, weekly, monthly) | |
- sample crontabs for users & root | |
- killuser2 crontab & script to kill users who did not log off | |
- scheduling scripts by cron & capturing joblogs mailed to appsadm |
Part_6 | - Console Logging |
- capturing & processing console logs for viewing & printing | |
- uses unix/linux 'script' command to capture both displays & entries | |
- replaces mainframe console logs | |
- also documents 'joblog' scripts for programmers to capture logs | |
for 1 job at a time for test/debug. |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Part_7 | - useful scripts for unix/linux/uvadm administrators |
- over 500 Korn shell scripts included with the Vancouver Utilities | |
- in Part 7, we have selected a few of the scripts that are most | |
useful to unix/linux/uvadm administrators. | |
- these scripts do in seconds what could take hours to do manually. | |
ex: renaming files: add/remove suffixes, lower/UPPER case, etc | |
- you can see many other scripts in scripts1.htm |
Part_8 | - Networking & System Administration |
- Sample Network (at UV Software) |
- 3 PCs on a LAN/router & DSL modem to ISP |
- RHEL 5.1, RHEL 3.0,& Windows XP |
- /etc/hosts file with IP Addresses & Host-names |
- Setup router access to ISP |
- network-scripts (/etc/sysconfig/network-scripts/ifcfg-eth0) |
- setup static IP#s for computer, gateway,& DNS1/DNS2 |
- Lookup IP Addresses or Domain Names (reverse lookup) |
- using unix/linux command line tools such as nslookup, host,& dig |
- using a GUI web browser, try sites such as whatismyipaddress.com |
- PING & 'pingall' script to determine the IP#s used on your router |
- 'nmap' to determine the device or O/S at any given IP# |
- FTP, SSH samples |
- PUTTY - SSH (Secure SHell) Terminal Emulator for Windows & Unix/Linux |
- SAMBA - Linux file-server for Windows PCs |
- sample samba configuration file |
- need to disable SELinux & iptables for SAMBA to work |
- Investigate /var/log/dmesg bootup message file |
- to determine device name assigned to the DAT tape drive |
- Mounting USB memory devices |
- determining USB device name for the mount command, by investigating |
/dev/..., /var/log/messages,& /var/log/dmesg |
- Unix/Linux system log files - /var/log/messages, dmesg, utmp, wtmp |
- Commands to access log file information - who, w, finger, last, lastlog, utmpdump |
- Sample outputs from: who, w,& finger |
- Sample outputs from: last & lastlog |
- Using utmpdump to convert /var/run/utmp (binary file) to an ASCII file |
- followed by uvlist filter to reduce multi-blanks to fit lines on screen |
- Disc Monitoring (df, du, statdir1) |
- Killing hung-up jobs (ps & kill demo) |
- Running BackGround jobs: jobs (status), fg %1 (foreground), ^Z (suspend), bg %1 (restart), kill %1 |
- Messaging (wall, write, mail) |
- TOP - Unix/Linux system performance analysis tool |
- msmtp - send email from scripts scheduled by cron at night, to managers at home, |
to alert them of serious errors |
- Sending unix/linux PCL files to a network printer from Windows |
- create PCL files on unix/linux & download to Windows with 'winscp' |
- net use lpt1 \\computername\printername /persistent:yes |
- copy /b filename.pcl LPT1: |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
ADMjobs.doc discusses several subjects that are vital to any successful unix or linux installation (filesystems, backup/restore, cron, console logging,etc).
ADMjobs.doc is intended to be used with the following items, which document conversion of IBM mainframes to unix & linux.
JCLcnv1demo.htm - DEMO conversions, sample JCL, scripts, executions
JCLcnv2real.htm - comprehensive instructions for REAL conversions
JCLcnv3aids.htm - conversion AIDS (cross-references,tips,mass changes,etc)
MvsJclPerl.htm - MVS JCL Conversion to Perl script
MVSCOBOL.htm | - converting mainframe COBOL to Micro Focus or AIX COBOL |
DATAcnv1.htm | - converting mainframe MVS data EBCDIC to ASCII |
VSEJCL.htm | - converting mainframe VSE JCL to Korn shell scripts |
We will emphasize the importance of the automatic backup system. This should be setup very soon after installation so that you have protection from inadvertently wiping out your programs & jobs (before you become a Unix/Linux expert).
It is a great comfort to know that every night your libraries & data are automatically backed up to the DAT tape & also saved to alternate disc directories for convenient & immediate recovery.
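As a preview (a minimal sketch only - the real backup scripts are documented in Part_3 & Part_4), a nightly on-disc + tape backup boils down to something like this, assuming the PRODLIBS/PRODDATA/BACKUP/TAPERWD variables shown later in common_defines ('1C4'):

   #!/bin/ksh
   # nightly backup sketch - illustration only, see Part_3 & Part_4 for the real scripts
   ymd=$(date +%Y%m%d)                               # date stamp for the backup names
   tar -czf $BACKUP/prodlibs_$ymd.tar.gz $PRODLIBS   # zipped copy of production libraries
   tar -czf $BACKUP/proddata_$ymd.tar.gz $PRODDATA   # zipped copy of production data
   tar -cf  $TAPERWD $PRODLIBS $PRODDATA             # & a copy to the DAT tape drive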
Part_1 will document setting up user profiles for uvadm, appsadm, programmers, & operators. This repeats some of the install.htm instructions, but the profiles are vital & closely tied to the rest of this document (backup scripts, console logging, etc).
Owen Townsend, UV Software, 4667 Hoskins Rd., North Vancouver BC, V7K2R3
Tel: 604-980-5434 Fax: 604-980-5404
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
www.redhat.com -> support -> documentation -> Red Hat Enterprise Linux |
Here are a few downloads relevant to mainframe conversions to Unix/Linux.
1. | https://www.kornshell.com - Korn shell 1993 version |
Look for the binary matching your Unix/linux architecture
2. | https://www.chiark.greenend.org.uk - putty terminal emulator for Windows |
See more info & configuration on pages '8F1' - 8F3.
3. | https://winscp.sourceforge.net/eng - winscp WINdows Secure file CoPy to unix/linux |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
1A1. | Install Vancouver Utilities - brief review |
(should already be installed following instructions in install.htm). | |
'uvadm' homedir contents |
1B1. | profiles provided in /home/uvadm/env |
- to be copied & modified in /home/appsadm/env | |
1B2. | profiles are vital to Unix/Linux & mainframe conversions |
- split to 2 components for greater flexibility & reduced maintenance | |
(stub_profile & common_profile) | |
1B3. | Advantages to UV Software profile conventions |
1C0. | profile listing summary |
1C1. | 'stub_profile' - modify & copy to user homedirs |
- rename as .bash_profile for bash, .profile for ksh | |
- copy to /home/appsadm/env/... & modified for your site | |
- defines RUNLIBS as testlibs for programmers OR prodlibs for operators | |
- defines RUNDATA as testdata for programmers OR proddata for operators |
1C2. | 'common_profile' - called by 'stub_profile' |
- defines search PATHs to libraries & data based on $RUNLIBS & $RUNDATA |
1C3. | Optional additions to common_profile for DB2,Oracle,mySQL,COBOL-IT,RPG |
1C4. | 'common_defines' - called by 'common_profile' |
- defines TESTLIBS,TESTDATA,PRODLIBS,PRODDATA for backup/restore scripts |
1C5. | 'bashrc' - modify & copy to user homedirs |
- rename as .bashrc (for bash), or .kshrc (for ksh) | |
- required for console logging to preserve aliases & umask |
1C6. | Recommended permissions for directories & files that must be shared |
by groups of programmers & operators (as in mainframe conversions). | |
- 775 for directories, 664 for files, 002 umask in profiles | |
- programmers & operators in a common group (suggest 'apps') | |
- extending security to the group level |
1C7. | stub_profile_cronlogdemo - to capture log files for cron jobs |
- see pages '5I1' - '5K6' |
1C8. | stub.ini - alternative profile for schedulers such as cron & control-M |
- called at beginning of each JCL/script | |
to define RUNDATA & call common.ini | |
- copy/rename for different systems with different 'RUNDATA's |
1C9. | common.ini - called by stub.ini to reduce code duplication |
- to setup search PATHs to JCL/scripts, COBOL programs, etc | |
- usually RUNLIBS is common & only RUNDATA varies by system |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
1D1. | 'appsadm' homedir subdirs desired |
1D2. | Op. Instrns. to setup appsadm account & create subdirs |
1D3. | copy profiles from uvadm/env & modify in appsadm/env |
1D4. | copy modified stub_profiles to your user homedirs |
1D5. | modify common_profile, in appsadm/env, called by user stub_profiles |
1E1. | Preparations for UV Software Onsite Training & Conversion Assistance |
1F1. | Training Plan for converting mainframe JCL/COBOL/DATA to Unix/Linux |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
The Vancouver Utilities should already have been installed following the instructions in install.htm, but here is a much shortened version assuming Linux (see install guide for other unix O/S's).
#1. login as 'root'
#2. groupadd apps                           <-- setup group 'apps', if not already setup
    =============
    - OR use whatever groupID you wish
    - BUT see notes below in '-g apps' paragraph

#3. useradd -m -g apps -s /bin/bash uvadm   <-- setup user 'uvadm'
    =====================================

#4. passwd uvadm                            <-- setup password desired
    ============

#5. chmod 755 /home/uvadm                   <-- allow other users to copy files from uvadm/...
    =====================
    - required for many Vancouver Utility procedures
#6. exit (logout from root)
This assumes UV Software has supplied you with a userid/password to download 'uvadm.zip' from the UV Software web site.
#1, Login as 'uvadm' --> /home/uvadm
#2. sftp uvsoft2@uvsoftware.ca   <-- Secure FTP userid 'uvsoft2'
    ==========================
    #2a. passwd --> xxxxxxx
    #2b. get uvadm.zip
    #2e. bye

#3. unzip uvadm.zip
    ===============
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#4. cp env/stub_profile_uv .bash_profile   <-- copy/rename for bash shell
    ====================================

#5a. vi .bash_profile            <-- modify stub profile now or later ?
     ================
     - see optional changes at '1D4'

#5b. vi common_profile_uv        <-- modify common profile now or later ?
     ====================
     - see optional changes at '1D4'

#6. exit                         <-- logout & back in to make new profile effective
    ====
#7. Login uvadm --> /home/uvadm
After unzip, the stub_profiles & common profiles are available in /home/uvadm/env/ and you can copy the stub_profile over .bash_profile.
See profiles listed beginning on page '1C0'. Note that the stub_profile (must be renamed as .bash_profile in homedir) calls the 'common_profile' from /home/uvadm/env/common_profile. A common_profile greatly reduces system administration since PATH's etc can be defined in 1 place for use by all users.
Only uvadm will call the common_profile from /home/uvadm. We will soon setup the 'appsadm' user & copy /home/uvadm/env/... to /home/appsadm. All other stub_profiles call the common_profile from /home/appsadm/env/common_profile. This allows you to install new versions of uvadm without disrupting the common_profile called by other users - Important since the common_profile usually is modified considerably depending on site requirements.
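For reference, a bare-bones sketch of that appsadm setup (see '1D1' - '1D5' for the full operating instructions; the commands below follow the same pattern as the uvadm setup above & are only an illustration):

   #1. login as 'root'
   #2. useradd -m -g apps -s /bin/bash appsadm   <-- setup user 'appsadm' in group 'apps'
   #3. passwd appsadm                            <-- setup password desired
   #4. exit (logout from root), then login as 'appsadm'
   #5. mkdir env; cp /home/uvadm/env/* env       <-- copy profiles to /home/appsadm/env
                                                     for site customization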
#8. ccuvall LNX H64 uvlib64.a disamLNX64.a
    ======================================
    - compile Vancouver Utilities on Linux Intel 64 bit machine
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
/home/uvadm
 :-----batDOS   - BAT files for VU on native Windows
 :-----binDOS   - binaries for VU on native Windows (cc by lcc-win32)
 :-----bin      <-- binaries (uvcopy,uvsort,etc) distros are RedHat Linux
 :-----binSFU   - binaries for SFU (Services For Unix) on Windows
 :-----ctl      - control files for various purposes
 :-----dat1     - test data files
 :-----doc      - Vancouver Utilities documentation (text)
 :-----dochtml  - documentation in HTML (same as on www.uvsoftware.ca)
 :-----env      <------ profiles for Unix/Linux, SFU, Cygwin,& Uwin
 :-----hdr      - hdr files for C compiles
 :-----htmlcode - merged into dochtml when text converted to HTML
 :-----lib      - libraries for C compiles (subfunctions,DISAM,etc)
 :-----mvstest  <-- test/demos for MVS JCL/COBOL mainframe conversions
 :     :-----...     - many subdirs omitted, see JCLcnv1demo.htm#3B2
 :-----perlm    <-- Perl Modules (support JCL conversions to Perl scripts)
 :-----perls    <-- Perl Scripts (few, most scripts are ksh in sf/.../...)
 :-----pf       <-- Parameter Files for uvcopy & uvqrpg
 :     :-----adm     - administrative jobs
 :     :-----demo    - demo jobs
 :     :-----IBM     - IBM mainframe conversion jobs
 :     :-----util    - utility jobs
 :-----sf       <-- Script Files
 :     :-----adm     - administrative scripts
 :     :-----demo    - demo scripts
 :     :-----IBM     - IBM mainframe conversion scripts
 :     :-----util    - utility scripts
 :-----sfun     - ksh functions (used in converted JCL/scripts)
 :-----src      <-- Vancouver Utilities C source code
 :-----srcf     - C source for various sub-functions
 :-----tf       - test files for various examples in doc
 :-----tmp      - tmp subdir (test/demo outputs)
 :-----vsetest  <-- test/demos for VSE JCL/COBOL mainframe conversions
       :-----...     - many subdirs omitted, see VSEJCL.htm
The profiles (listed on the following pages) are intended to be used with uvadm, appsadm, mvstest,& vsetest. You will need only minor changes to use for your programmers & operators.
The /home/uvadm sub-directories are illustrated here to clarify the procedures required should you find reasons to modify any of the Vancouver Utility scripts or uvcopy jobs or programs at your site.
Note that the uvadm subdirs for 'sf' (script files) & 'pf' (uvcopy parameter files, also called uvcopy jobs) are further divided into subdirectories as shown above, but there is no need to subdivide sf & pf in appsadm or your homedir.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
/home/uvadm/env <-- profiles provided here
 :-----stub_profile_uv    - copy/rename to .profile (ksh) or .bash_profile (bash)
 :                        - defines RUNLIBS/RUNDATA for programmers & operators
 :-----common_profile_uv  - common profile (called by stub_profile)
 :                          defines PATH's etc using $RUNLIBS/$RUNDATA

/home/appsadm/env <-- setup user 'appsadm' & copy from /home/uvadm/env/*
 :-----stub_profile_ABC   - customize & copy to homedirs .profile or .bash_profile
 :-----common_profile_ABC - common profile (called by stub_profile)
You should setup an application administrator userid 'appsadm', copy /home/uvadm/env/* to /home/appsadm/env,& customize profiles there depending on the locations of their libraries & data. Do NOT customize profiles in /home/uvadm/env/... because they would be overwritten when a new version of Vancouver Utilities is installed.
We recommend the concept of 'stub' & 'common' profiles. The shell profile in each user's homedir is a 'stub' that calls the 'common_profile' which is stored in /home/appsadm/env/...
Note that stub profiles must call 'common_profile' using '.' (dot execution), which means the 'export's made in the common_profile will still be effective on return to the user's profile.
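For example, inside a stub profile:

   . /home/appsadm/env/common_profile    # dot execution - runs in the login shell, so the
                                         #   PATHs, umask,& aliases it sets remain in effect
   # /home/appsadm/env/common_profile    # running it as a command would start a subshell
                                         #   & everything it sets would be lost on return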
This system is a big advantage for any site with multiple users: the sysadmin can update the common_profile in 1 place & those changes are effective for all users.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Profiles are vital to the success of any Unix/Linux site and especially for sites converting from a mainframe. You can not find much practical advice on 'how to setup user profiles' in books or on the internet, so here are my methods which have been proven successful at over 50 conversion sites.
UV Software supplies the recommended profiles with the Vancouver Utilities, or you can save them from https://www.uvsoftware.ca/admjobs.htm#1C1 & 1C2. BUT, before you try to use them, it is important to understand the concepts. I assume the reader has some basic understanding of profile functions.
The most important profile function is to define search 'PATH's to scripts, programs,& (indirectly) to data-files. Some other functions are to define aliases, terminal types,& to capture console logs.
Without direction, an inexperienced unix programmer would probably define everything (PATHs,aliases,etc) in the profile in his home directory. Then the 1st programmer's profile might be copied to the homedirs of other programmers working on the same system.
You can see a big problem developing - when they need to change search PATHs, etc, they would have to update the multiple profiles in the homedirs of all programmers & operators.
Here is a better system to overcome the problem described above, and to provide many other benefits described further below. The solution is to split the profile in 2 parts (stub_profile & common_profile).
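A bare-bones illustration of the split (see the full listings at '1C1' & '1C2'; the directory names are only the examples used in this document):

   # stub profile - copied to each user's homedir as .profile (ksh) or .bash_profile (bash)
   export RUNLIBS=$HOME/testlibs1        # programmer stub points at test libraries
   export RUNDATA=$HOME/testdata1        # operator stub would point at prodlibs/proddata
   . /home/appsadm/env/common_profile    # shared common_profile builds the PATHs from these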
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
The PATH to JCL/scripts depends on $RUNLIBS (testlibs or prodlibs), example:
export PATH=$PATH:$RUNLIBS/jcls <-- PATH to JCL/scripts (in common_profile) ===============================
'$RUNDATA' determines data-file locations indirectly as follows:
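A simplified sketch of that indirection (the exportfile & jobset functions generated into the converted JCL/scripts handle the details - see JCLcnv1demo.htm; the filename shown is only an illustration):

   # in the user stub profile:  export RUNDATA=$HOME/testdata1    (or /p2/proddata for operators)
   # in a converted JCL/script: exportfile CUSTMAS data1/ar.custmas.master
   # at run time the file is accessed under $RUNDATA/...,
   #   so the same JCL/script reads test or production data
   #   depending only on how the profile defined RUNDATA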
The benefits of this system are HUGE: search PATHs & other settings are maintained in 1 place for all users, and the same JCL/scripts can run against test or production libraries & data simply by changing RUNLIBS/RUNDATA in the stub_profiles.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
1C1. stub_profile_uv - distributed in /home/uvadm/env/...
- copy (to user homedirs) & rename depending on the desired shell
(.bash_profile for bash, .profile for ksh)
- modify RUNLIBS/RUNDATA differently for programmers or operators
- calls common_profile

1C2. common_profile_uv - distributed in /home/uvadm/env/...
- defines search PATHs to libraries & data based on $RUNLIBS & $RUNDATA
defined in the stub_profiles of programmers & operators
(see suggested directory designs in ADMjobs.doc Part 2)
- allows updates in 1 place to affect all users
- modify TERM & 'stty erase' character depending on most common terminal
(distribution has TERM=linux & stty erase '^?')
1C3. | Optional additions to common_profile for DB2,Oracle,mySQL,COBOL-IT,RPG |
1C4. | common_defines - define TESTLIBS,TESTDATA,PRODLIBS,PRODDATA for backups |
- optional, can call common_defines at end common_profile | |
- modify depending on your site |
1C5. bashrc - 'rc file' distributed in /home/uvadm/env/...
- copy (to user homedirs) & rename depending on the desired shell
(.bashrc for bash, .kshrc for ksh)
- master version supplied without the '.' for visibility
- required if you invoke another shell level (console logging script)
- carries aliases & umask which get lost on another shell level
- you should customize & store in /home/appsadm/env/...
1C6. | Recommended Permissions for Directories & Files |
- 775 for directories & 664 for files | |
- to allow programmers access to common libraries & data |
1C7. | stub_profile_cronlogdemo - alternate to stub_profile |
- for logs by mail from cron jobs, see '5I1' - '5K7' |
1C8. | stub.ini - alternative profile for schedulers such as cron & control-M |
- called at beginning of each JCL/script | |
to define RUNDATA & call common.ini | |
- copy/rename for different systems with different 'RUNDATA's |
1C9. | common.ini - called by stub.ini to reduce code duplication |
- to setup search PATHs to JCL/scripts, COBOL programs, etc | |
- usually RUNLIBS is common & only RUNDATA varies by system |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# bash_profile_uv - bash_profile for Vancouver Utilities (same as stub_profile_uv) # - stub_profile calls 'common_profile' # - by Owen Townsend, update Feb 2020 # # bash/stub_profile_ABC - users should copy/rename their version of stub_profile # store their master copy in /home/appsadm/env/... # copy to user homedirs, renaming .profile or .bash_profile # common_profile_ABC - copy/rename their common_profile to /home/appsadm/env/... # called by their stub_profiles from $APPSADM/env/... # # bash_profile & common_profile - distributed in $UV/env/... # - copy to $APPSADM/env/... (/home/appsadm/env/...) & modify for your site # - do not modify profiles in $UV because new versions of uvadm would overwrite # - see stub/common profile listings at uvsoftware.ca/install.htm#A6 & A7 # # ** define RUNLIBS/RUNDATA/CNVDATA & call common_profile ** # # stub_profile defines RUNLIBS/RUNDATA for common_profile to define PATHs to libs & data # --> Modify definitions depending on your situation: # # 1. Mainframe JCL/COBOL/DATA Migrations - libs/data in homedir for testing/training # export RUNLIBS=$HOME/testlibs1 RUNDATA=$HOME/testdata1 CNVDATA=$HOME/cnvdata1 # ============================================================================= # 2. Mainframe JCL/COBOL/DATA Migrations - conversion/production, separate file systems # export RUNLIBS=/p1/apps/testlibs1 RUNDATA=/p2/apps/testdata1 CNVDATA=/p3/apps/cnvdata1 # ============================================================================= # - appended digit '1' for future possible alternates testlibs2/testdata2/cnvdata2,etc # - RUNDATA could be defined differently for different programmers for test phase # - see uvsoftware.ca/jclcnv1demo.htm#1B3 RUNLIBS/RUNDATA defines for migrations # # 3. Vancouver Utilities Demos/Tutorials - libs/data in $HOME/demo, default Feb2020+ export RUNLIBS=$HOME/demo RUNDATA=$HOME/demo CNVDATA=$HOME/demo #============================================================== # - may define RUNLIBS/RUNDATA/CNVDATA the same for small projects (not major migrations) # - see uvsoftware.ca/uvdemos2.htm#1B5 more about RUNLIBS/RUNDATA stub/common profiles # # 4. Programmer testing/development projects, Example uvsoftware.ca/uvdemos2.htm#Part_9 # export RUNLIBS=$HOME/testisam RUNDATA=$HOME/testisam CNVDATA=$HOME/testisam # ============================================================================= # export APPSADM=/home/appsadm #Jan2020 APPSADM def in .profile_bash (for WSL Windows Subsystem on Linux) CALLER=$(cat /proc/$PPID/comm) echo "Executing--> \$HOME=$HOME/.bash_profile (copied/renamed from \$APPSADM/env/bash_profile_uv)" echo " - Vancouver Utilities stub_profile in login homedir, will call common_profile" echo " - LOGNAME=$LOGNAME HOME=$HOME PWD=$PWD APPSADM=$APPSADM CALLER=$CALLER" echo "Calling--> . $APPSADM/env/common_profile_uv" # . /home/appsadm/env/common_profile_uv # common_profile called from /home/appsadm/env/... #==================================== # - NOT from /home/uvadm/env/... # - must setup appsadm to store common_profile, so not lost when uvadm updated # - see more at www.uvsoftware.ca/install.htm#A4 echo "HOSTNAME=$HOSTNAME LOGNAME=$LOGNAME APPSADM=$APPSADM" echo "RUNLIBS=$RUNLIBS RUNDATA=$RUNDATA" # # ** misc items that user may need to override common_profile defs ** # export TERM=linux # TERM - modify depending on your terminal # stty erase '^?' # erase char - modify depending on your terminal # stty intr '^C' # interrupt ^C, (probably already default ?) 
# export UVLPDEST="-dlp0" # default destination for uvlp(uvlist) scripts # # change to a printer near you & un-comment # # ** user aliases, etc ** # alias l='ls -l' # save keystrokes on very often used commands # - see common_profile for several more aliases # - add more here depending on user preferences # # ** TEST or PRODuction ** # export TESTPROD=P000 # P___ for PRODuction export TESTPROD=T000 # T___ for TEST # - PRODuction profiles TESTPROD=P*, developer TEST profiles TESTPROD=T* # - JCL/scripts can test $TESTPROD to control various differences desired # - used to determine if programmer 'T'esting or 'P'roduction # - bytes 2,3,4 of P/T___ reserved for future use as required # if [[ "$TESTPROD" == P* ]] <-- test only 1st byte for Test/Prod # if [[ "$TESTPROD" != T* ]] <-- assume Production if not Test #Note - Test/Prod code relevant only to mainframe migration JCL/scripts sites # - migration sites would move this code up prior to calling common_profile # so common_profile could modify PATH,etc depending on Test/Production # # ** Console Logging - optional ** # - uncomment 9 '##' lines below to activate console logging # - must setup subdirs matching $LOGNAME in $LOGDIR/log1/...,log2/...,log3/... # (usually LOGDIR=$APPSADM in common_profile) # - subdirs log1,log2,log3 hold logfiles for: current file, month, lastmonth # - see details at www.uvsoftware.ca/admjobs.htm#Part_6 # - console logging for production operators to capture entire logon session # - programmers can use the 'joblog1' script to capture log for 1 job at a time ## login1 || exit 2 # exit here if 2nd login ## logfixA $LOGNAME # process log1 file to log2 (to allow read/print) ## echo "--> logview <-- execute logview script to see prior console logs" ## echo "logging requires .bashrc/.kshrc with PS1='<@$HOST1:$LOGNAME:$PWD >'" ## echo "logging requires $LOGNAME subdirs in \$LOGDIR/log1 & log2" ## if [[ -d $LOGDIR/log1/$LOGNAME && ( -f .kshrc || -f .bashrc) ]]; then ## echo "script $LOGDIR/log1/$LOGNAME/$(date +%y%m%d_%H%M%S)" ## exec script $LOGDIR/log1/$LOGNAME/$(date +%y%m%d_%H%M%S) ## fi # 'exec script' must be the last non-comment line in the profile # 'script' disables aliases & umask 002 - put in .bashrc/.kshrc to be effective # ============================== # cp $APPSADM/env/bashrc .bashrc # copy to your homedir restoring correct name # ============================== # After uvadm installed at $UV (/home/uvadm or /opt/uvsw/uvadm) # - setup appsadm at $APPSADM (/home/appsadm or /opt/uvsw/appsadm) # - copy $UV/env/* $APPSADM/env # - modify $APPSADM/env/common_profile & stub_profile for your site # - copy $APPSADM.env/stub_profile to user .profiles # Then all user profiles call common_profile from $APPSADM/env/... # to prevent loss of customized common_profile when new version uvadm installed
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# common_profile_uv - for Vancouver Utilities by Owen Townsend, updated Feb 2020 # - to be '.' dot executed by user .profile or .bash_profile # - see www.uvsoftware.ca/install.htm # # common_profile_uv - UV Software's suggested common_profile # - defines search PATHs based on $RUNLIBS & $RUNDATA # which must be defined in user's .profile or .bash_profile # before calling this commmon_profile # - copy to $APPSADM/env (/home/appsadm/env) for site customizations # common_profile_ABC - users should copy/rename their version of common_profile # & store their version in /home/appsadm/env/... # # bash_profile_uv - also copy to $APPSADM/env (/home/appsadm/env) for site customizations # bash_profile_ABC - might append suffix to identify your customized versions # - then copy to user homedirs & renamed as .profile or .bash_profile # stub_profile_uv - alternate name for bash_profile_uv (same contents) # - stub_profile defines RUNLIBS/RUNDATA/CNVDATA for common_profile to modify PATHs # - users may define depending on their current project (migrations,testing,development) # export RUNLIBS=$HOME/testlibs RUNDATA=$HOME/testdata CNVDATA=$HOME/cnvdata # export RUNLIBS=$HOME/demo RUNDATA=$HOME/demo CNVDATA=$HOME/demo # # bash_profile & common_profile - distributed in $UV/env/... (usually /home/uvadm/env) # - copy to $APPSADM/env/... (/home/appsadm/env/...) & modify for your site # - do not modify profiles in $UV because new versions of uvadm would overwrite # - see stub/common profile listings at uvsoftware.ca/install.htm#A6 & A7 # # ** begin code for common_profile ** echo " " echo "Executing--> \$APPSADM/env/common_profile_uv (APPSADM=$APPSADM)" CALLER=$(cat /proc/$PPID/comm) echo "LOGNAME=$LOGNAME HOME=$HOME APPSADM=$APPSADM CALLER=$CALLER" # export UV=/home/uvadm # UV homedir symbol used below export APPSADM=/home/appsadm # 1st def in .bash_profile (redef here for saftey) export LOGDIR=$APPSADM # console logging subdirs log1,log2,log3 #Jan2020 - also define APPSADM in .profile_bash (for Windows Subsystem on Windows WSL) # # setup PATH for Vancouver Utilities programs & scripts (uvadm & appsadm) # - append onto system PATH, using symbols defined above ($UV, $APPSADM, etc) export PATH=$PATH:./sf:$HOME/bin:$HOME/sf:$APPSADM/bin:$APPSADM/sf:$RUNLIBS/sf export PATH=$PATH:$UV/bin:$UV/sf/adm:$UV/sf/demo:$UV/sf/util:$UV/sf/IBM:$UV/help export PATH=$PATH:/usr/sbin # add system dir for sendmail, etc #Note - APPSADSM appears before UV so user modified scripts/jobs in APPSADM # can be stored in $APPSADM & be found prior to original versions in $UV # - $UV/sf subdirectoried to adm,demo,util,IBM (April2003+) # # setup 'PFPATH' for uvcopy & uvqrpg interpreter to find Parameter Files (jobs) export PFPATH=./pf,$RUNLIBS/pf,$RUNLIBS/pfs export PFPATH=$PFPATH,$HOME/pf,$UV/pf/adm,$UV/pf/demo,$UV/pf/util,$UV/pf/IBM # - use symbol $UV (defined above) to shorten PFPATH definition # - UV/pf/... 
follows RUNLIBS,APPSADM,HOME to allow user duplicate names # - uvcopy accepts ',' delimiters as well as ':' in case of SFU on Windows # # setup PATH & FPATH for JCL/scripts converted from mainframe Vancouver Utils # - see www.uvsoftware.ca/jclcnv1demo.htm or www.uvsoftware.ca/vsejcl.htm export PATH=$PATH:$RUNLIBS/jcls:$RUNLIBS/jts:$RUNLIBS/jus # Apr10/18 - adding extra JCL/script subdirs jts & jus # #Jun02/2018 - adding user written db2 utility scripts to the PATH # - named as mainframe utilities - dsntiaul,dsnutilb,dsnuproc,etc export PATH=$PATH:$RUNLIBS/db2s/ # # FPATH - defines directory of Korn shell functions (called by some VU scripts) # - examples: jobset12, exportgen0, exportgen1, # export FPATH=$UV/sfun # functions distributed in /home/uvadm/sfun/... export FPATH=$APPSADM/sfun # copy to /home/appsadm/sfun/... for possible customization # export FPATH=$RUNLIBS/sfun # OR to $RUNLIBS for more flexibility if required #Feb2020 - FPATH changed back to $APPSADM # #Mar14/12 - define 'GDGCTL' location of gdgctl51I.dat/.idx # - see doc at www.uvsoftware.ca/jclcnv4gdg.htm#5G1 if [[ -z "$GDGCTL" ]]; then export GDGCTL=$RUNDATA/ctl; fi #<-- set default # - see GDG control file discussed at www.uvsoftware.ca/jclcnv4gdg.htm#5A2 # # Define CTLMAPDIR for uvhdcob (display COBOL copybook fieldnames beside data fields) # - see www.uvsoftware.ca/uvhdcob.htm#Part_5 # - for uvhdc1 script, $UV/ctl/ctlfile_uvhdc1, $UV/mf/maps/copybooks export CTLMAPDIR=$HOME/mf/maps #<-- uvhdc1 demos /home/uvadm/dat1/... & /home/uvadm/maps/... # export CTLMAPDIR=$RUNLIBS/maps #<-- comment out above defaults this for uvhdc2 export COBMAPDIR=$RUNLIBS/maps # for uvhdcob (display data with fieldnames) export UVHDCOBROP=m45 # uvhdcob display 45 lines # export UVHDROP=l64 # uvhd display 64 chars/line - default export UVHDROP=l100 # uvhd display 100 chars/line (if screen allows) # # Indexed file extension controls for Vancouver Utilities export ISDATEXT=".dat" # .dat/.idx Indexed files for uvsort,uvcopy,uvcp,etc # # uvsort,etc expects .dat on data partition of ISAM files # # COBOL equivalent is 'IDXNAMETYPE=2' in $EXTFH/extfh.cfg # # ISDATEXT new way to control DISAM .dat extension Apr2010 export DISAMEXT="dat" # DISAMEXT old way prior to Apr2010 # # - omit both or set null if you want NO .dat extension # # printer destinations for VU laser printing scripts # - modify UVLPDEST to the network printer closest to you export UVLPDEST="-dMS610USB" # default dest for uvlp(uvlist) scripts export UVLPOPTN="-onobanner" # for unix/linux (SFU does not allow) export UVHDPRINT=uvlp16 # script for uvhd 'i' immediate print command export UVHDPWIDE=uvlp14L # script for uvhd 'iprint' Landscape 100 chs/line #----------------------------------------------------------------------------- # # ** TERM, erase, interupt, etc ** stty erase '^?' # erase char - modify depending on your terminal # # '^?' for linux/at386, '^H' for vt100,ansi,xterm # stty -icrnl # ensure CR x'0D' omitted & only LF x'0A' inserted # stty intr '^C' # interrupt ^C, (probably already default ?) 
# # ** UV Recommended items ** umask 002 # permissions 775 dirs, 664 files ulimit -f 25000000 # set max filesize to 25 gig set -o ignoreeof # disallow logoff via control D (use exit) trm=$(tty) # capture terminal device for PS1 export trmv=${trm#/dev/} # remove prefix /dev/ export HOSTNAME # should already be set export HOST1=${HOSTNAME%%.*} # extract 1st segment of $HOSTNAME export PS1='<@$HOST1:$LOGNAME:$PWD> ' export EDITOR=vi # for Korn shell history export VISUAL=vi # for Korn shell history export HISTSIZE=5000; # Korn shell history file size export TD8=$(date +%Y%m%d) export TD6=$(date +%y%m%d) export EM=$HOME/em # convenience for Owen (EMail directory) export EMTD6=$HOME/em/$TD6 # convenience for Owen (EMail directory) # # ** aliases ** # alias commands to prompt for overwrite (highly recommended) # - use option '-f' when you have many files (rm -f tmp/*, etc) alias rm='rm -i' # confirm removes alias mv='mv -i' # confirm renames alias cp='cp -i' # confirm copy overwrites alias l='ls -l' # save keystrokes alias lsd='ls -l $1 | grep ^d' # list directories only alias vi='vim' # use vim for Linux alias more='less' # less is way better than more alias grep='grep -nHd skip' # ensure filename & line# on matching lines alias uname='uname -a' # ensure -a on uname (All info) alias cdl='cd $RUNLIBS' # quick access to LIBS superdir alias cdlc='cd $RUNLIBS/ctl' # quick access to LIBS/control-files alias cdd='cd $RUNDATA' # DATA superdir alias cdc='cd $CNVDATA' # data CONVERSION superdir alias cdk='cd $CMPDATA' # data COMPARISON superdir alias cdm='cd $RUNLIBS/jclmods' # quick access to Alternate RUNLIBS alias cde='cd $EM' # EMail directory alias cdem='cd $EM/$TD6' # EMail directory for today alias mdem='mkdir $EM/$TD6; cd $EM/$TD6' # make Email dir & change into it alias mkem='mkdir $EM/$TD6; cd $EM/$TD6; touch ${TD6}a; vi ${TD6}a;' # aliases - ineffective if console logging activated (in user stub profile) # - ifso, place aliases in .bashrc (or .kshrc, for ksh) # #------------------------------------------------------------------------- # Verify that critical environmental variables have been defined # (by stub_profile or this common_profile) if [[ "$UV" = "" || "$APPSADM" = "" ]]; then echo "UV=$UV or APPSADM=$APPSADM not defined" echo "- enter to continue"; read $reply; fi if [[ "$RUNLIBS" = "" || "$RUNDATA" = "" ]]; then echo "RUNLIBS=$RUNLIBS or RUNDATA=$RUNDATA not defined" echo "- enter to continue"; read $reply; fi #Dec15/10 - set LOGMSGACK, activate ACK option in logmsg2 in JCL/scripts export LOGMSGACK=n #------------------------------------------------------------------------- # # ** Micro Focus COBOL 2.2 update2 Eclipse on RHEL 7 June2015 ** # export COBDIR=/opt/microfocus/VisualCOBOL # export JAVA_HOME=/usr/local/java32 # export PATH=$COBDIR/bin:$JAVA_HOME/bin:$PATH # export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$COBDIR/lib:$JAVA_HOME/lib # export COBCPY=$COBDIR/cpylib # export CLASSPATH=$COBDIR/lib/mfcobol.jar:$COBDIR/lib/mfcobolrts.jar:$COBDIR/lib/mfsqljvm.jar # export COBMODE=64 # export EXTFH=$UV/ctl/extfh.cfg # file handler options IDXNAMETYPE=2 FILEMAXSIZE=8 # # ** AIX COBOL ** # set default file type for JCL converter to AIX COBOL # - other code at http://www.uvsoftware.ca/admjobs.htm#1C3 or $UV/env/archive/ # export COBRTOPT=FILESYS=QSAM # converted JCL/scripts allow override via cft=XXX, for example: # exportfile CUSTMAS data1/ar.custmas.master #cft=QSAM <-- as generated # exportfile CUSTMAS data1/ar.custmas.master cft=STL <-- uncomment & change type # # ** Microsoft SQL Server 
** # see www.uvsoftware.ca/sqldemo.htm#Part_6 # export PATH=$PATH:/opt/mssql-tools/bin # export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/microsoft/msodbcsql/lib64 # export ODBCSQL="ODBC Driver 13 for SQL Server" # export DATABASE=testdb # $(DATABASE) used in table create & load scripts # export ODBCINI=/etc/odbc.ini # export ODBCSYSINI=/etc/ #<-- Directory with ODBC config (not File odbcinst.ini) # # ** GNU COBOL testing Oct 2019 ** # export GCHOME=/home/gcobol # export GCDIR=/home/gcobol/cob # # ** optional for WSL (Windows Subsystem for Linux) ** # Example#1 - export WINUSER using wslpath to get windows %USERPROFILE% (in C:\USERS\...) ## WINUSER=$(wslpath $(cmd.exe /C "echo %USERPROFILE%")) ## export WINUSER=$(echo $WINUSER | tr -d '\r') ## echo "WINUSER=$WINUSER" # Example#2 - setup variables for Both Windows & Linux using 'WSLENV' (translates path differences) ## C:\uvadm> set UVADM=C:\uvadm <-- set variable in windows ## set WSLENV=UVADM/p <-- WSLENV sets up variable UVADM for Both Windows & Linux # C:\uvadm> echo %UVADM% --> shows value "C:\uvadm" as expected
# C:\uvadm> wsl <-- run WSL (or bash) # /mnt/c/uvadm> <-- now running Linux, prompt changed as per common_profile # /mnt/c/uvadm> echo $UVADM <-- test, see if same variable now shows Linux path # /mnt/c/uvadm> /mnt/c/uvadm --> proves $UVADM on Linux equivalent of %UVADM% on Windows # # ** define directories for uvcopy mailx1 or mutt1 ** # export MAILDATA=maildata #<-- input data files # export MAILMSGS=mailmsgs #<-- MSG files created for input to mailx utility # export MAILSCRIPTS=mailscripts #<-- scripts created to execute mailx utility # # ** defines for QuikJobs & Easytrieves (converted to uvcopy) ** # export QJS=$RUNLIBS/qjs # export EZTS=$RUNLIBS/ezts # # ** optional software ** # - Micro Focus COBOL, AIX COBOL, GNU COBOL, COBOL-IT # - Microsoft SQL Server Oracle mySQL Morada RPG # - WSL (Windows Subsystem for Linux) # #Note - code removed for seldom used items, Feb 2016 # - but prior version saved in $UV/env/archive/common_profile_uv_20160215 # - contains items that may need to be recovered, such as: # - Micro Focus COBOL, COBOL-IT, Oracle, MySQL, Morada RPG, SQL Server # - see listings at www.uvsoftware.ca/admjobs.htm#1C1 & 1C2,1C3,etc #------------------------- end of common_profile ---------------------------
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
The above common_profile includes code for Micro Focus COBOL & Microsoft SQL Server.
Here are optional additions to the common_profile for AIX_COBOL_DB2, Oracle, mySQL, COBOL-IT,& RPG. If you have Vancouver Utilities installed, you can also get them from /home/uvadm/env/archive/...
export LIBPATH=$LIBPATH:/opt/IBM/db2/V9.5/lib32
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/IBM/db2/V9.5/lib32
export DEVSQLLIB=/home/devinst/sqllib   # added Jun20/14 George helped
# . $DEVSQLLIB/db2profile               # added Jun20/14
#======================
# - added DEVSQLLIB & ./db2profile here in common_profile_immd
#   OR could be in compile scripts aixcbl1SQL & aixcblASQL ??
#   OR put the connect in the compile scripts ??
# - added LIBPATH & LD_LIBRARY_PATH as above for AIX COBOL compile scripts
#   see $UV/sf/IBM/aixcbl1SQL & aixcblASQL
#   to access "DB2 SQL coprocessor services module" (libdb2.a)
#
# - following SYSLIB defs are duplicated in compile script cnvaix1
# - not sure which is best ?
export SYSLIB=/opt/IBM/db2/V9.5/include/cobol_a
export SYSLIB=$SYSLIB:/usr/mqm/inc
export SYSLIB=$SYSLIB:/usr/lpp/cics/include
export SYSLIB=$SYSLIB:$cwd/cpys         # user copybook dir
export COBPATH=$RUNLIBS/cblx            # compiled executables for this user
#
#Jul16/2014 - compile script aixcbl1 qualifier $DB2SCHEMA1 on --> db2 prep ...
export DB2SCHEMA1=DBATIMNA
export ODBCINI=/usr/local/etc/odbc.ini   # user DSN (Data Source Name)
export ODBCSYSINI=/usr/local/etc         # system file directory
export LD_LIBRARY_PATH=/usr/local/lib32:/usr/local/lib:/usr/lib:$LD_LIBRARY_PATH
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# - you must NOT define ORACLE_HOME until after server install
#   - since the install determines ORACLE_HOME & tells you
export ORACLE_OWNER=oracle
export ORACLE_SID=demo1
export ORACLE_BASE=/h41/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1   #<-- unset for install
#Note - #comment out above ORACLE_HOME if installing newer versions
#     - install process will tell you what new ORACLE_HOME is
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
export ORACLE_UNQNAME=orcl
alias cdoh='cd $ORACLE_HOME'    # alias for quick cd to $ORACLE_HOME
alias cdob='cd $ORACLE_BASE'    # alias for quick cd to $ORACLE_BASE
export OH=$ORACLE_HOME          # handy for file copies
export OB=$ORACLE_BASE
#
#-------------------------------------------------------------------------
# see www.uvsoftware.ca/sqldemo.htm for Oracle, DB2,& MySQL installs
export DB2DIR=/h23/db2/v95
export PATH=$PATH:$DB2DIR/bin:$DB2DIR/adm
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$DB2DIR/lib
#-------------------------------------------------------------------------
# ODBC must be used for MySQL & may be used for Oracle & DB2
export ODBCINI=/usr/local/etc/odbc.ini   # user DSN (Data Source Name)
export ODBCSYSINI=/usr/local/etc         # system file directory
export LD_LIBRARY_PATH=/usr/local/lib32:/usr/local/lib:/usr/lib:$LD_LIBRARY_PATH
export COBOLITDIR=/opt/cobol-it-64
export COBOLIT_LICENSE=/opt/cobol-it-64/license/citlicense.xml
export COBITOPT=$UV/ctl/cobdirectives      # compiler options & Directives
export COB_CONFIG_DIR=$COBOLITDIR/share/cobol-it/config
export COB_COPY_DIR=$COBOLITDIR/share/cobol-it/copy
export COBCPY=$RUNLIBS/cpys                # copybook search, compile script overrides
                                           # COBCPY compatible with Micro Focus COBOL
# export COB_LIBRARY_PATH=$RUNLIBS/cblx
source $COBOLITDIR/bin/cobol-it-setup.sh   # add to PATH, LD_LIBRARY_PATH, etc
export RPGADM=/home/rpgadm                            # Morada RPG compiler homedir
export PATH=$PATH:$RPGADM/bin                         # append RPG bin to PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$RPGADM/lib   # for Morada RPG
export SIGN_OVERRIDES=UN-7    # x'70' neg zone signs for RPG programs
export RPGCDIR=$RPGADM        # alternate def for compile scripts ?
# - see more about Morada RPG at www.uvsoftware.ca/vserpg.htm
# - see Part 8 for additional env-vars to generate RPG distribution
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Note - common_defines is optional & may be called at the end of common_profile (see '1C3'/'1C4' above). Copy it to /home/appsadm/env/... & modify the directory & tape-device definitions for your site.
# common_defines - define environmental variables for Vancouver Utilities
#                - PRODLIBS,PRODDATA,TESTLIBS,TESTDATA,& TAPE backup devices
#
# - this file distributed in /home/uvadm/env/common_defines
# - copy to /home/appsadm/env/... before modifying for your site
#   (won't lose your changes when new version of Vancouver Utilities installed)
#
#Mar2010 - common_defines now called by common_profile
#        - required by some scripts to define:
#          PRODLIBS,PRODDATA,TESTLIBS,TESTDATA,TAPE devices for backups, etc
#
#Pre-Mar2010 - stub_profile called both 'common_defines' & 'common_profile'
#            - stub_profile now calls only 'common_profile'
#
export TESTLIBS=/p1/testlibs      #<-- examples for user sites
export TESTDATA=/p1/testdata      #    - see overrides for 'mvstest' below
export PRODLIBS=/p2/prodlibs
export PRODDATA=/p2/proddata
export BACKUP=/p3/backup
export RESTORE=/p3/restore
export HOMEDIRS=/home             # HOMEDIRS=/export/home for SUN solaris
#
export TAPERWD=/dev/st0           # rewind tape device for Linux SCSI
export TAPENRW=/dev/nst0          # NO rewind tape device for Linux SCSI
#
# override above examples with 'mvstest' definitions for testing at UV Software
# user implementation would delete following & modify above
# - depending on user site disc partitioning & file design
#Oct29/2010 - at OldMutual
export TESTLIBS=/usr/home/mvstest/testlibsYP
export TESTDATA=/usr/home/mvstest/testdataYP
export BACKUP=/h33/backup         #<-- for testing at UV Software
export RESTORE=/h33/restore
#------------------------- end of common_defines ---------------------------
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# bashrc - initialization file for the bash shell (vs kshrc for Korn shell)
#        - file stored at /home/uvadm/env/...
#        - '.' of '.bashrc' omitted for visibility
#NOTE - copy to your homedir & rename .bashrc(bash/linux) or .kshrc(ksh/unix)
#
# - aliases coded here, as well as in .bash_profile(linux) or .profile(unix)
# - useful if console logging via 'script' command (see ADMjobs.doc)
# - aliases & umask in profile get lost by the 'script' console logging command
# - this saves having to remember '. aliases' after login when logging
# - could also code functions here
#
# alias commands to prompt for overwrite (highly recommended)
# - use option '-f' when you have many files (rm -f tmp/*, etc)
alias rm='rm -i'         # confirm removes
alias mv='mv -i'         # confirm renames
alias cp='cp -i'         # confirm copy overwrites
alias rmf='rm -f'        # force removes (no prompts)
alias mvf='mv -f'        # force renames (no prompts)
alias cpf='cp -f'        # force copies (no prompts)
#
# aliases for quick 'cd's to commonly accessed directories
# - requires env-vars RUNLIBS, RUNDATA, CNVDATA in your profile
alias cdl='cd $RUNLIBS'  # quick access to libs superdir
alias cdd='cd $RUNDATA'  # quick access to data superdir
alias cdc='cd $CNVDATA'  # quick access to data conversion superdir
#
# misc aliases
alias l='ls -l'          # save keystrokes
alias md='mkdir'
alias rd='rmdir'
alias vi='vim'           # use vim (vs vi)
alias grep='grep -n'     # ensure -n option used on grep
#
# set umask, which also gets lost when console logging
umask 002                # ensure dirs 775 & files 664
# for logging, PS1 prompt must begin with '<@'
export PS1='<@$HOST1:$LOGNAME:$PWD> '
#--------------------------- end of bashrc ---------------------------------
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
The profiles set umask 002, which means directories created by users of these profiles will have permissions 775 & files will have permissions 664.
Also be sure to assign a common group-ID (we suggest 'apps') to the programmers & operators who are working on a common set of directories & files (JCL, COBOL,& DATAfiles).
Unix normally defaults umask 022, which means subdirs would be 755 & files 644, which would not allow users to write in directories created by other members of the team.
Making umask 002 (dirs 775 files 664) & ensuring all team members in a common group allows team members to write into a common set of directories for JCL, COBOL,& DATA. We are in effect extending security to the group.
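For example (illustrative names only, run by a user in group 'apps'):

   umask 002              # as set in the common_profile (& in .bashrc for console logging)
   mkdir jcls             # new directory created drwxrwxr-x (775) - group 'apps' may add files
   touch jcls/testjob     # new file created -rw-rw-r-- (664) - group 'apps' may update it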
Be sure to copy .bashrc or .kshrc to the homedirs of anybody using console logging. Console logging is activated by uncommenting the '##' lines at the bottom of the stub_profile. The 'script' command invokes another level of the shell, which loses the aliases & 'umask' set in the common_profile; .bashrc/.kshrc restores those aliases & the umask.
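For example, following the copy noted in the profile listing above:

   cp /home/appsadm/env/bashrc $HOME/.bashrc    # copy to your homedir, restoring the '.' name
                                                # (rename to .kshrc if your shell is ksh)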
Nightly batch jobs could fail due to files with bad permissions or group. Nightly batch jobs are scheduled by a crontab owned by 'appsadm' (see crontabs in 'Part_5'). Files with bad permissions might be FTP'd to the site, or somebody may have used 'root' to copy a file & forgotten to fix permissions.
See 'chmod_custom1' at '7K9', a sample script that could be run before the nightly batch jobs to ensure permissions of 775/664 & group 'apps' on all data directories/files. You could also reset the owner to 'appsadm' if you want to see who changed what files during the day (or reset the owner less frequently). This sample script has hard-coded directories & permissions for reliability. You would customize it for your site.
Note that 'root' should be used only when necessary (fixing permissions, etc). It is too dangerous to run application scripts with root privileges. Of course the chmod_custom1 script must be scheduled by a root crontab, but all batch jobs would be scheduled by 'appsadm' crontabs. And appsadm shares group 'apps' with all operators & programmers who access the data files.
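A much-simplified sketch of what such a script does (the distributed 'chmod_custom1' at '7K9' hard-codes your site's actual directories; /p2/proddata below is just the example directory from common_defines):

   #!/bin/ksh
   # chmod_custom1 sketch - simplified illustration only, see the real script at '7K9'
   # - scheduled by a root crontab before the nightly batch jobs
   chgrp -R apps /p2/proddata                        # ensure the common group 'apps'
   find /p2/proddata -type d -exec chmod 775 {} \;   # directories 775
   find /p2/proddata -type f -exec chmod 664 {} \;   # files 664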
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# stub_profile_cronlogdemo - file distributed in /home/uvadm/env/...
#                          - to be copied to /home/appsadm/env/...
#
# Special version of profile to demo capturing logs from jobs run by cron
# - defines RUNLIBS & RUNDATA as /home/mvstest/testlibs & testdata
#   ==============================================================
# - see www.uvsoftware.ca/admjobs.htm#5I1 - 5K6
#
# This stub_profile_cronlogdemo called directly by 'cronscript1'
# - which is scheduled by 'crontab2' & 'crontabtest2'
# - since 'cron' environment has NO profile to setup PATHs, etc
#
# Define RUNLIBS/RUNDATA & call common_profile
export RUNLIBS=/home/mvstest/testlibs   #<-- define for user 'mvstest'
export RUNDATA=/home/mvstest/testdata
. /home/appsadm/env/common_profile      #<-- common_profile from $APPSADM/env
#=================================
#
# We have dropped a lot of explanatory #cmts here in cronlogdemo version
# - see explanatory #cmts in original /home/uvadm/env/stub_profile
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# stub.ini - .ini file to define RUNDATA/RUNLIBS for for control-M # (vs programmers who use stub_profile & common_profile) # - by Owen Townsend, created Feb 2010, modified Feb 2011 # - this is the master template stored at /usr/home/appsadm/env/stub.ini # - you must copy/rename/modify for various systems, XWO for example: # # 1. cd $APPSADM <-- change to /usr/home/appsadm/ # =========== # 2. cp env/stub.ini env/xwo.ini <-- copy template for new system # =========================== # 3. vi env/xwo.ini <-- modify as follows: # ============== # 3a. Change name at top (#comment only, but important to prevent confusion) # for this example, change 'stub.ini' to 'xwo.ini' # 3b. Change name on RUNDATA & RUNLIBS definition near end of this stub.ini # see export RUNDATA=... & export RUNLIBS=... # # 4. cdl <-- return to your $RUNLIBS (/usr/home/xy35068/testlibsXWO example) # # 5. vi ctl/jclunixop51 <-- update JCL converter control file # ================== - was copied from $UV/ctl by 'copymvsctls' # line 71 call stub.ini file (see xwo.ini example below) # # 6. convert the subsystem JCL to scripts jcl0->jcl1->jcl2->jcl3->jcls # # 7. when debugged, copy jcls/* to control-M scripts/... for example: # cp jcls/* zacomup104:/c01/apps/bt00108/xwo/control/scripts/ # # ** 1st 10 lines JCL/scripts ** # #001 #!/bin/ksh #002 ##XWOA003 JOB WO1000F1,LOAD,CLASS=S,MSGCLASS=P,REGION=0M #003 export JOBID2=XWOA003; scriptpath="$0"; args="$*" #004 if [[ -z "$JOBID1" ]]; then export JOBID1=$JOBID2; fi #005 for arg in $args; do if [[ "$arg" == *=* ]]; then export $arg; fi; done #006 integer JCC=0 SCC=0 LCC=0 # init step status return codes #007 autoload jobset51 jobset52 jobend51 jobabend51 logmsg1 logmsg2 stepctl51 #008 autoload exportfile exportgen0 exportgen1 exportgenall exportgenx #009 . $APPSADM/env/xwo.ini #<-- ensure stub.ini changed to correct system.ini #010 jobset51 # call function for JCL/script initialization # # - JCL converter inserts 1st 10 lines of output scripts from lines 64-72 # of ctl/jclunixop51, before executing the JCL converter for each system, # line 71 must be changed from stub.ini to xwo.ini, or whatever (xpp,xpt,etc) # #071 . $APPSADM/env/stub.ini #<-- ensure stub.ini changed to correct system.ini #071 . $APPSADM/env/xwo.ini #<-- stub.ini changed to xwo.ini (matching JCL) # ======================= # # ** control-M profile defs required ** # # export TESTPROD=P000 - control-M P000, developers T000 # ==================== # export APPSADM=/usr/home/appsadm - for scripts to find this stub.ini # ================================ (see near end this file) # export UV=/usr/home/uvadm # ========================= # # $APPSADM/env/...ini files... <-- .ini files stored here # $APPSADM/env/xwo.ini <-- .ini file for testing # $APPSADM/env/???.ini <-- various systems (other than xwo ?) # $APPSADM/env/common.ini <-- common file called by all stub.ini files # # common.ini - defines items common to all systems (PATHs to libraries) # - called near end of this file (after RUNDATA & RUNLIBS defined) # - saves duplications, easier maintenance # - see exact coding in $APPSADM/common.ini # #------------------------------------------------------------------ # # For programmer conversion & testing, the .ini is ignored so he can # convert & test in testlibs/testdata vs production ctontrol/... 
# and no JCL/scripts need be changed when copied to production # # export TESTPROD=P000 <-- for production under control-M # ==================== or operator manual commands # - must be coded in control-M & operator profiles for production # - .ini file will be activated to point to production libs/data # for the system to which the particular JCL/script belongs # # export TESTPROD=T000 <-- for programmer conversion & testing # ==================== # - coded in programmer profile for conversion & testing # - disables the .ini file defs for RUNLIBS/RUNDATA # - RUNLIBS/RUNDATA in programmer profile will define testlibs/testdata # - programmers do not need to access different sets of libs/data during # one login session (can modify profile, logoff/logon for next system) # # test for Production (by control-M) or Test (by developer) # if [[ "$TESTPROD" == P* ]]; then export RUNDATA=/c01/apps/bt00108/xwo/control #=========================================== export RUNLIBS=/c01/apps/bt00108/xwo/control #=========================================== . $APPSADM/env/common.ini #======================== fi return 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# common.ini - called by various stub.ini files # - for operators & control-M # (vs programmers who use stub_profile & common_profile) # - by Owen Townsend, 1st developed Feb 2010 # - .ini files & profiles stored in /usr/home/appsadm/env # # stub.ini - master template stored at /usr/home/appsadm/env/stub.ini # - copy & rename for system (example: xwo.ini) # - stub.ini must define RUNDATA/RUNLIBS for desired system # # xwo.ini - sample .ini file for 'xwo' system # ___.ini - define RUNDATA & RUNLIBS for the target system # - stub.ini calls this common.ini to define libraries # PATH to scripts based on $RUNLIBS # # ** essential subdirs in $RUNLIBS & $RUNDATA ** # # Developers require following subdirs n their separate $RUNLIBS & $RUNDATA # - using example for xy35068 & XWO system: # # export RUNLIBS=/usr/home/xy35068/testlibsXWO/ # $RUNLIBS/jcls/... <-- converted JCL/scripts # $RUNLIBS/scripts/... - renamed for control_M # $RUNLIBS/cblx/... <-- any executable COBOL programs ? # # export RUNDATA=/usr/home/xy35068/testdataXWO/ # $RUNDATA/ctl/gdgctl51 <-- GDG control file (generations) # $RUNDATA/jobtmp/... <-- temp files & new GDGs created during run # $RUNDATA/jobmsgs/... <-- job progress status msgs # $RUNDATA/obslog/... <-- Object Star logs subdir by day # $RUNDATA/sysout/... <-- SYSOUT files from COBOL DISPLAYs # $RUNDATA/tmp/... <-- used by sort for merge files # # Above subdirs are combined for control-M, for XWO on zactomup104: # # export RUNLIBS=/c01/apps/bt00108/xwo/control # ============================================ # export RUNDATA=/c01/apps/bt00108/xwo/control # ============================================ # # ** control-M profile defs required ** # # export TESTPROD=P000 (developers have TESTPROD=T000) # ==================== # export APPSADM=/usr/home/appsadm - for scripts to find this stub.ini # ================================ (see near end this file) # export UV=/usr/home/uvadm - for Vancouver Utilities # ========================= # # Verify that critical environmental variables have been defined # (by stub_profile or this common_profile) if [[ "$UV" = "" || "$APPSADM" = "" ]]; then echo "UV=$UV or APPSADM=$APPSADM not defined" echo "- enter to exit"; read $reply; exit 99; fi if [[ "$RUNLIBS" = "" || "$RUNDATA" = "" ]]; then echo "RUNLIBS=$RUNLIBS or RUNDATA=$RUNDATA not defined" echo "- enter to exit"; read $reply; exit 99; fi # # define PATHs common to all systems & all .ini files) #Oct2010 - example export PATH=/usr/bin:/usr/sbin export PATH=$PATH:$RUNLIBS/scripts:$RUNLIBS/jcls # - converted JCL in scripts/ subdir for control-M & jcls/ for developers # export PATH=$PATH:$UV/bin: #<-- path to uvsort,uvcp,etc export PATH=$PATH:$UV/sf/adm:$UV/sf/demo:$UV/sf/util:$UV/sf/IBM # # uvcopy interpreter finds Parameter Files via $PFPATH export PFPATH=$UV/pf/adm:$UV/pf/demo:$UV/pf/util:$UV/pf/IBM export PFPATH=$PFPATH:$RUNLIBS/pf export PFPATH=$PFPATH:$APPSADM/pf # export FPATH=$APPSADM/sfun #<-- ksh functions used by JCL/scripts # export LOGMSGACK=n # disable ACK option in logmsg2 in JCL/scripts # # define path to COBOL programs, COBOL programs should not use PATH # - because mainframe JCL/scripts & COBOL programs could have same names export RLX=$RUNLIBS/cblx #<-- path for loading COBOL programs # export GDGCTL=$RUNDATA/ctl #<-- default location # export GDGCTL=$APPSADM/ctl #<-- could change to this ? 
#Mar14/12 - allow gdgctl51I.dat/.idx & GDGmkdirs to be located anywhere
#         - vs $RUNDATA/ctl, see doc at www.uvsoftware.ca/jclcnv1demo.htm#3I1 &/or 7I1
#
# For Micro Focus COBOL Server Express
# export COBDIR=/usr/lib/cobol
# export PATH=$PATH:$COBDIR/bin
# export LD_LIBRARY_PATH=$COBDIR/lib:$LD_LIBRARY_PATH
# export LANG=en_US                    # fix animator display carets vs data
# export EXTFH=$RUNLIBS/ctl/extfh.cfg  # file handler configuration
#
# Indexed file extension controls for Vancouver Utilities
export ISDATEXT=".dat"  # .dat/.idx Indexed files for uvsort,uvcopy,uvcp,etc
#                       # uvsort,etc read/write .dat on typ=ISF data partition
#                       # COBOL equivalent is 'IDXNAMETYPE=2' in $EXTFH/extfh.cfg
#
# printer destinations for VU laser printing scripts
export UVLPDEST="-dricoh"    # default dest for uvlp(uvlist) scripts at 1 site
export UVLPOPTN="-onobanner" # for unix/linux (SFU does not allow)
#-------------------------------------------------------------------------
#
# ** For Object Star **
#
export HURON=/usr/home/objstar/tibco/osb
export HURONDIR=$HURON/database/huron.dir
export OS_ROOT=$HURON
export PATH=$PATH:$HURON/bin:/d01/work
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HURON/sharedlib
ulimit -n 8000
# modify OBSDOB depending on server 101=OSXD,120=OSXF,104=OSXP
export OBSDOB=OSXD
# Object Star userid,password,library (temporary userid/passwd for test)
export OBSUSER="U=CT04819"
export OBSPASS="P=CS00144"
export OBSLIB="L=CT04819"
# see 'osBatch' command in JCL/scripts as follows:
# osBatch dob=$OBSDOB EE=BATCH SEA=L R=.... BROWSE DSDIR=$RUNDATA\
#    $OBSUSER $OBSPASS $OBSLIB\
#    SESSIONLOG=$RUNDATA/obslog/$OBSDAY/${JOBID1}_${JSTEP}_$(date +%y%m%d_%H%M%S)
# setup OBSLOGDIR, this code also in jobset51 in case user login past midnight
export OBSLOGDIR=$RUNDATA/obslog/$(date +%y%m%d)
if [[ -d $RUNDATA/obslog ]]; then
   if [[ ! -d $OBSLOGDIR ]]; then mkdir $OBSLOGDIR; fi
fi
#-------------------------------------------------------------------------
#
# ** For Connect:Direct **
#
export CDNDM=/usr/home/cdadmin/cdunix/ndm
export PATH=$PATH:$CDNDM/bin
export NDMAPICFG=$CDNDM/cfg/cliapi/ndmapi.cfg
export CDOPTNS="-x"   # $CDOPTNS inserted on 'direct' command in JCL/scripts
#  '-x' shows command in stdout, might chg to '-n' to inhibit CD stdout
export CDSERVER=CD.ZAOMNT02
#Nov14/10 - connect:direct template at end of ctl/jclunixop51 has
#           'process snode=$CDSERVER' to allow prgmr test & oprtr production
#           by changing profile above, without changing JCL/scripts
#Feb2011 - changed to using different directory on windows CD server
#          JCL/scripts define C:D output files twice & use 1 of P/T files
#          - depending on $TESTPROD (def in stub.ini) P/T for Production/Testing
#
#-------------------------------------------------------------------------
#
# Misc Recommended items
# umask 002 for permissions 775 on directories & 664 on files
# - so operator or control-M would not need to have root privileges
#   operators working on a common set of dirs/files
#   must have umask 002 & be in the same group ('apps' used for testing)
umask 002              # permissions 775 dirs, 664 files
set -o ignoreeof       # disallow logoff via ctl D (use exit)
HOST1=$(uname -n)      # add to PS1 prompt if desired
export PS1='<@$HOST1:$LOGNAME:$PWD> '
export EDITOR=vi       # for Korn shell history
export VISUAL=vi       # for Korn shell history
export HISTSIZE=1000;  # Korn shell history file size
#-------------------------------------------------------------------------
# alias commands to prompt for overwrite (highly recommended)
# alias rm='rm -i'     # confirm removes
# alias mv='mv -i'     # confirm renames
# alias cp='cp -i'     # confirm copy overwrites
# alias l='ls -l'      # save keystrokes
#-------------------------------------------------------------------------
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
We recommend you setup a login/userid 'appsadm' to serve as the applications administrator for the unix/linux site. The appsadm home directory would hold various scripts, crontabs, log files, etc used in application administration. Here are some suggested sub-directories:
/home/appsadm
:------bin   - binaries for site developed/modified programs
:--UV--ctl   - control files for converting JCL & COBOL & GDG files
:--UV--env   - profiles copied from /home/uvadm/env/...
:              modify appropriately for your site
:------log1  - console logging files (currently active)
:      :-----user1 - sub-directoried by user login
:      :-----user2,etc
:------log2  - console logging files (for current month)
:      :-----user1
:      :-----user2,etc
:------log3  - console logging files (for last month)
:------logs  - console logs from nightly 'cron' scripts
:
:------pf    <-- uvcopy jobs developed/modified by site admin
:------sf    <-- shell scripts developed/modified by appsadm
:--UV--sfun  <-- functions for JCL/scripts (jobset51,exportgen0,etc)
:------src   - source for any programs developed/modified by appsadm
:------tmp
Note:
Do not confuse 'appsadm' (applications administrator userid/login) with 'uvadm' (the Vancouver Utilities administrator userid/login). Please see the uvadm subdirs illustrated on page '1A3'.
One important purpose of appsadm is to hold the modified versions of control files, profiles, scripts & uvcopy jobs that you need to customize at your site. Copy files you need to change from /home/uvadm/... to /home/appsadm/... Do NOT copy yet, see copy commands on page '1D3' after subdir setup on '1D2'.
cp /home/uvadm/ctl/*  /home/appsadm/ctl
cp /home/uvadm/env/*  /home/appsadm/env
cp /home/uvadm/sfun/* /home/appsadm/sfun
This protects you from losing your customized versions when you install a future new version of Vancouver Utilities, which would overwrite /home/uvadm.
Note that the recommended profile (listed previously) searches PATH & PFPATH of appsadm before uvadm, so any scripts & uvcopy jobs that you modify will be found before any of the original scripts/jobs.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
We will use 'useradd' (command line method) here, but you can use the GUI sysadm screen if you prefer. In any case you have to login as 'root' to setup a new user, but be sure to login as 'appsadm' before you setup the profile, make subdirs,& copy any files.
#1. login as 'root'
#2. groupadd apps <-- setup group 'apps', if not already setup
    =============     (when uvadm was setup in install.htm)

#3a. useradd -m -g apps -s /bin/bash appsadm <-- setup user 'appsadm'
     =======================================
#3b. useradd -m -d /export/home/appsadm -g apps -s /bin/bash appsadm
     ===============================================================
     - must specify '-d ...' homedir option for SUN Solaris

#4. passwd appsadm <-- setup password desired
    ==============
#5. chmod 755 /home/appsadm <-- allow other users to copy files from appsadm/...
    =======================     - required for many Vancouver Utility procedures
'-m' is the option to create the home directory (/home/appsadm).
'-g apps' assigns the group. Assign it as you wish, but it is VERY important that you assign the same group as for uvadm, and the programmers, analysts, & other users who are going to use the Vancouver Utilities & share information on your UNIX system. This is also related to the recommended system permissions for file read/write/execute, which extends security to the 'group' level, using 'umask 002'.
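As a quick check of the umask recommendation, the snippet below (a minimal sketch you can run from any test login; the /tmp path is just an example) shows the permissions that result from 'umask 002': new directories get 775 & new files get 664, so anyone in the shared 'apps' group can read & write them.

     umask 002                       # group-writable permissions from here on
     mkdir /tmp/umask_test           # new directory -> drwxrwxr-x (775)
     touch /tmp/umask_test/file1     # new file      -> -rw-rw-r-- (664)
     ls -ld /tmp/umask_test /tmp/umask_test/file1
     rm -r /tmp/umask_test           # clean up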
'-s /bin/bash' specifies the 'bash' shell (the default on Linux systems). For Unix I recommend 'ksh' if 'bash' not available.
These shells are much superior to 'sh' (Bourne shell, default on some Unix systems). The 'history' feature of bash & ksh is reason enough to upgrade.
The Korn shell is recommended for all scripts - 1st line is '#!/bin/ksh'. All scripts used in the installation procedures have been verified under ksh. The JCL converters create 'ksh' shells since they use some features that are lacking in the 'bash' shell. But 'bash' is easier to use as the login shell.
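Since the converted JCL runs under ksh while bash is the usual login shell, it is worth confirming that ksh is actually installed before running any converted scripts. A minimal check (the 'yum' line is an RHEL-style assumption, run as root):

     command -v ksh || echo "ksh not installed"   # show path to ksh, if any
     # yum install ksh                            # RHEL example - assumes a yum repository is available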
While you are still logged in as 'root', you might as well setup the other user logins you will require. For example, if you plan on running the test/demos described in JCLcnv1demo.htm or VSEJCL.htm, you will need to setup 'mvstest' or 'vsetest'.
#4a. useradd -m -g apps -s /bin/bash mvstest <-- for MVS JCL test/demos
     =======================================
#4b. passwd mvstest <-- setup password desired
     ==============
#4c. chmod 755 /home/mvstest <-- allow file copy between user accounts
     ========================
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Assuming that 'root' has already created the appsadm account, we will now setup subdirs, copy files from /home/uvadm,& modify as required.
#1. login as 'appsadm' --> /home/appsadm
#2a. mkdir bin ctl env log1 log2 log3 logs pf sf sfun src tmp
     ========================================================
     - setup subdirs, see '1D1'

#2b. mkdir log1/oper1 log2/oper1 log3/oper1 log1/oper2 log2/oper2 log3/oper2 etc
     ===========================================================================
     - setup subdirs matching logins that will be using console logging
#3. cp /home/uvadm/ctl/* ctl <-- copy control files from uvadm to appsadm ========================
#4. cp /home/uvadm/sfun/* sfun <-- copy functions from uvadm to appsadm ========================== - for JCL/scripts (jobset51,exportgen0,etc)
#5. cp /home/uvadm/env/* env <-- copy profiles from uvadm to appsadm =========================
bashrc             - bash 'rc' aliases req'd if console logging
kshrc              - rename as kshrc for Korn shell (vs bash shell)
stub_profile_ABC   - stub profile (rename to .profile or .bash_profile)
                   - copy to /home/appsadm/env & modify
                   - modify RUNLIBS/RUNDATA for programmers & operators
common_profile_ABC - common profile (called by stub_profile)
                     defines PATH's etc using $RUNLIBS/$RUNDATA
stub_profile_test  - could make diff versions for prgmrs & oprtrs
stub_profile_prod  - for copying to homedirs of new users
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#6. vi env/stub_profile_ABC <-- modify for your site ======================= - modify 'stub_profile' - define RUNLIBS/RUNDATA for programmers & operators
export RUNLIBS=$HOME/testlibs    #<-- initial values for training
export RUNDATA=$HOME/testdata
=============================
export RUNLIBS=/p1/apps/testlibs #<-- later change for conversion project
export RUNDATA=/p1/apps/testdata
#7. vi env/common_profile_ABC <-- modify for your site =========================
#7a. change 'COBDIR' to wherever you installed Micro Focus COBOL - COBDIR defined in the supplied common_profile - as the default location for Micro Focus COBOL install which is:
export COBDIR=/opt/microfocus/cobol ===================================
#7b. Modify TERM & 'stty erase' character depending on user's terminal (distribution has TERM=linux & stty erase '^?')
export TERM=linux # TERM - modify depending on your terminal ================= # (vt100,xterm,at386,ansi,etc)
stty erase '^?' # erase char - modify depending on your terminal =============== # '^?' for linux/at386, '^H' for vt100,ansi,xterm
#7c. Modify UVLPDEST to a central laser printer at your site.
export UVLPDEST="-dlp0" <-- change 'lp0' to your laser printer =======================
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
For most unix/linux OS's I think you can simply copy the supplied stub_profile to user homedirs, overwriting the default .profile or .bash_profile that is created when you setup new users (via useradd or the GUI).
But if desired, you could read in the default .profile or .bash_profile at the beginning of the VU supplied stub_profile, before copying to user homedirs.
#10c. vi env/stub_profile_ABC <-- additional (optional) change
      =======================
      - read in the '.profile' from your OS (SUN,HP,AIX,etc)
        at the beginning of the supplied stub_profile
      - write & quit
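If you prefer not to paste the OS default profile into the stub_profile, an alternative (a sketch only; the /etc/skel path is an assumption & varies by OS) is to source the distribution default from the top of the stub_profile:

     # top of stub_profile - pick up the OS default settings first (path is an assumption)
     if [[ -f /etc/skel/.bash_profile ]]; then
        . /etc/skel/.bash_profile
     fi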
#10d. Do not define 'COBDIR' in user profiles. COBDIR should be defined only in the 'common_profile', since there is usually no need to have different COBDIRs for different users. See #14 on '1D7'.
export COBDIR=/opt/microfocus/cobol ===================================
If you are performing JCL conversions, you will need different versions of the stub_profile for programmers & operators. It is convenient to setup stub_profile_test & stub_profile_prod in /home/appsadm/env/... and then you can simply copy the appropriate version to the homedirs of your programmers & operators.
The main difference is the definition of RUNLIBS & RUNDATA which are intended to point to the 'test' or 'prod' libraries & data appropriate for the user (programmer or operator).
#11. login appsadm --> /home/appsadm ============= - we should already be in /home/appsadm with stub_profile in 'env'
#11a. cp env/stub_profile_ABC env/stub_profile_ABC_test =================================================
#11b. cp env/stub_profile_ABC env/stub_profile_ABC_prod =================================================
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
The supplied stub_profile defines RUNLIBS/RUNDATA as follows:
export RUNLIBS=$HOME/testlibs
#============================
export RUNDATA=$HOME/testdata
#============================
These definitions work well for the JCL conversion test/demo jobs documented in JCLcnv1demo.htm#Part_2, but you would modify for your own conversions depending on where you plan to store your own JCLs, COBOLs, & Data files.
For example the file system design described in Part_2. defines the following libraries & data file locations.
export TESTLIBS=/p1/apps/testlibs
export TESTDATA=/p1/apps/testdata
export PRODLIBS=/p2/apps/prodlibs
export PRODDATA=/p2/apps/proddata
export BACKUP=/p3/apps/backup
export RESTORE=/p3/apps/restore
export CNVDATA=/p4/apps/cnvdata
These SYMBOLS could be defined in a 'common_defines' & used in the stub_profile. You could then define RUNLIBS/RUNDATA as follows:
export RUNLIBS=$TESTLIBS <-- in stub_profile_test for programmers
export RUNDATA=$TESTDATA

export RUNLIBS=$PRODLIBS <-- in stub_profile_prod for operators
export RUNDATA=$PRODDATA
'common_defines' was made optional in Feb 2010 (to reduce complexity) so the definitions are now:
export RUNLIBS=/p1/apps/testlibs <-- in stub_profile_test for programmers
export RUNDATA=/p1/apps/testdata

export RUNLIBS=/p2/apps/prodlibs <-- in stub_profile_prod for operators
export RUNDATA=/p2/apps/proddata
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
If desired, you could setup master copies of the stub_profile for programmers & operators & copy them to the homedir of your programmers & operators, renaming as .bash_profile for bash (or .profile for ksh).
#13a. cp env/stub_profile_ABC_test /home/prgmr1/.bash_profile ======================================================= ... etc for other programmers ...
#13b. cp env/stub_profile_ABC_prod /home/oper1/.bash_profile ====================================================== ... etc for other operators ...
#14. cp env/bashrc /home/user1/.bashrc ================================== ... etc for other users (programmers & operators) ...
'.bashrc' should be copied to the homedirs of any programmers & operators who might use 'console logging' (see Part_6). Console logging is activated by uncommenting the 'script' command at the end of the profile. 'script' starts another level of the shell, which causes any 'aliases' & 'umask' settings made in the profile to be lost.
See bashrc listed on page '1C5'. It contains the same aliases & umask as the common_profile (listed on page '1C2').
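If you want a quick reminder of what that .bashrc carries, here is a minimal sketch of the relevant lines (see the supplied env/bashrc for the actual file); it simply restores the umask & aliases that the 'script' sub-shell would otherwise lose:

     # minimal .bashrc sketch - re-establish settings lost under 'script'
     umask 002            # permissions 775 dirs, 664 files
     alias rm='rm -i'     # confirm removes
     alias mv='mv -i'     # confirm renames
     alias cp='cp -i'     # confirm copy overwrites
     alias l='ls -l'      # save keystrokes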
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
UV Software recommends 3 to 6 weeks of onsite training & assistance to quick-start your conversion. In the first 3 or 4 weeks, we can usually do the training & convert all the JCL, COBOL,& DATA and get you started testing. The testing & parallel running could take 6 months or a year for large sites. To get optimum results from the onsite visit, please ensure the following preparations are made.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Note:
For a faster and more efficient conversion, please send us all JCL, PROCs, Parms, COBOL,& Copybooks several weeks prior to the scheduled on-site conversion. We will spend up to 1 week (at no extra charge) investigating & optimizing the conversions for your particular coding habits.
No DATA files are required. But if desired, you could send the data files for just 1 small standalone system for us to run in parallel & return the reports for you to compare to the mainframe.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
UV Software recommends a 4 or 5 week on-site visit to train the conversion team in using the Vancouver Utilities to convert mainframe JCL, COBOL,& DATA to Korn shell scripts, Micro Focus COBOL,& ASCII data files for Unix/Linux systems.
See more details at uvprices.htm#I1 thru I10
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
2A1. | Installing Red Hat Enterprise Linux on a server |
2A2. | Partitioning discs on RAID arrays |
2A3. | Raid Arrays - RAID1 for O/S, RAID5 for applications |
2A4. | Making File Systems on Partitions (with 'fdisk') |
2A5. | Make filesystems, Label, make mount points, mount partitions |
2A6. | Edit /etc/fstab to mount partitions on reboots |
2B1. | Installing Red Hat Enterprise Linux workstation on a laptop |
- dual boot with Windows 7 | |
2B2. | Sub-Partitioning the Red Hat Partition |
- assigning space to the extended partitions | |
2B3. | Modifying grub.conf (boot loader configuration file) |
- increase OS choice time & change OS descriptions |
2C0. | File System Design (making directories on file systems) |
2C1. | TestLibs & TestData directories (for conversion & testing) |
2C2. | ProdLibs & ProdData directories (for production) |
2C3. | Backup & Restore directories in separate file systems |
2D1. | File Design Principles |
2D2. | RUNLIBS & RUNDATA - concepts & advantages |
2E1. | programmer & operator homedirs & logins |
- .profile RUNLIBS & RUNDATA point to test or production |
2F0. | Alternative directory designs for multiple sets of libraries & data |
- possibly for organizations with multiple companies |
2G1. | Tape Drives for backup & mainframe data exchange |
- DDS4 DAT 20/40 GB SCSI to backup application libraries & data | |
- 3480/3490 tape drives for mainframe data transfer | |
- may need 3480/3490 for data continuing exchange with external sites | |
- SCSI controller cards required |
2H1. | Setup Summary for Unix/Linux Hardware & Software |
- RHEL O/S, Vancouver Utilities, Micro Focus COBOL, etc |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
UV Software recommends Red Hat Enterprise Linux (current version 5.4).
As of July 2008, UV Software is using RHEL 5.1 for software development. We ordered the DVD media kit & 1st year standard support for only $349. 'standard support' provides telephone &/or internet support.
For production sites, you might want 'premium support' but I have good experience with 'standard support'.
Since the media kit does not include hard-copy documentation, we recommend you download & print the following 2 manuals from the Red Hat website.
https://www.redhat.com/docs/manuals/enterprise <-- RHEL manuals
We printed Duplex on 3 hole punched paper on our 35 ppm laser printer and mounted in 3 ring binders.
UV Software is now using RHEL 7.1 on an HP Z420 workstation with 4 x 1 TB discs, so this section needs considerable updating as of Spring 2015+.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Here is a suggested RAID & Partition design that might be used in a typical small server, such as the Dell 2800, using Red Hat Enterprise Linux. Our example will configure 6 * 73 gig SCSI discs into 2 RAID arrays as follows:
sda - RAID1 array (2 * 73 gig discs, mirrored) =  73 gig effective - for the O/S
sdb - RAID5 array (4 * 73 gig discs)           = 220 gig effective - for applications
The Red Hat O/S install procedure provides the 'disc druid' GUI program that makes it easy to setup the desired partitions. I suggest the RAID1 O/S array (sda) would be partitioned as follows:
/boot     1 gig
/ (root) 12 gig
/swap     4 gig
/usr     16 gig
/tmp      4 gig
/var      4 gig  <-- logfiles, etc
/opt      4 gig  <-- OPTional software (Micro Focus COBOL, etc)
/home    10 gig  <-- user homedirs
/home2   10 gig  <-- might be used for /home backups
/home3   06 gig  <-- misc, reserve
         ------------
total    73 gig
Each user will of course have a /home directory, but the home dirs should not be used for production data or libraries (which should always be maintained on the RAID5 array as documented in the next section below).
You can also use /home dirs for smaller software packages, for example:
/home/uvadm - Vancouver Utilities /home/appsadm - Applications Administrator
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
I suggest the RAID5 array 'sdb' might be partitioned into 4 file systems of about 55 gig each (assuming the 4 * 73 gig discs provide 220 gig effective storage of the 292 gig physical disc storage). We will name the partitions p1,p2,p3,p4 and we intend to use them as follows:
/p1 - test-libraries & test-data
/p2 - production-libraries & production-data
/p3 - backup & restore
/p4 - conversion data
This partitioning makes our backups & restores easier to handle. Another significant point is that a runaway program cannot fill up the whole file system, since it would be confined to the production or test data partition.
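You can see the effect of this separation at a glance with 'df'; each application partition reports its own free space, so a filled /p1 or /p2 never touches the O/S file systems:

     df -h /p1 /p2 /p3 /p4   # size, used,& available space for each application partition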
The vendor (Dell, etc) might install the OS on the RAID1 array, but would not usually partition the RAID5 application array. It is easier to partition sdb during OS install on sda, since there is usually a GUI program to partition discs during the OS install.
On Red Hat Enterprise, the GUI program 'disc druid' cannot be used after the OS install, but we can use the command line tools 'fdisk' & 'mkfs'. You can look up the 'man' pages to see how to run these commands, but briefly:
Note that Intel hardware & OS's such as Linux only allow 4 primary partitions on a disc (or RAID array). They do allow partition#4 to be specified as an 'extended' partition & subdivided into many more partitions. But we will assign the 4 primary partitions as 55 gig each.
Here is a summary of the steps required to partition our disc array. The detailed instructions follow on the next few pages.
#1. fdisk /dev/sdb <-- partition the disc - see details on following page
#2. mkfs.ext3 /dev/sdb1,2,3,4 <-- make file systems
#3. e2label /dev/sdb1 p1,p2,p3,p4 <-- assign Labels to disc partitions
#4. mkdir /p1 /p2 /p3 /p4 <-- make mount points
#5. vi /etc/fstab <-- setup mount commands for reboots
Note that some of the sample printouts are from my partitioning of one 147 gig non-RAID disc vs the 4 * 73 gig disc RAID array at a customer site.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#1. fdisk /dev/sdb <-- start interactive session ============== - to partition sdb
#1a. n         <-- make 'n'ew partition sdb1 (55 gig)
     --> p     <-- 'p'rimary partition
     --> 1     <-- first cylinder
     --> 4400  <-- last cylinder

#2b. n         <-- make 'n'ew partition sdb2 (55 gig)
     --> p     <-- 'p'rimary partition
     --> 4401  <-- first cylinder
     --> 8800  <-- last cylinder

#2c. n         <-- make 'n'ew partition sdb3 (55 gig)
     --> p     <-- 'p'rimary partition
     --> 8801  <-- first cylinder
     --> 13200 <-- last cylinder

#2d. n         <-- make 'n'ew partition sdb4 (55 gig)
     --> p     <-- 'p'rimary partition
     --> 13201 <-- first cylinder
     --> 17849 <-- last cylinder
#2e. p <-- print partition table
Disk /dev/sdb: 146.8 GB, 146815737856 bytes
255 heads, 63 sectors/track, 17849 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End     Blocks    Id  System
/dev/sdb1             1      4400   35342968+   83  Linux
/dev/sdb2          4401      8800   35343000    83  Linux
/dev/sdb3          8801     13200   35343000    83  Linux
/dev/sdb4         13201     17849   37343092+   83  Linux
#2f. w <-- write partition table & exit
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Actually, we will use 'mkfs.ext3', which is the Red Hat version of 'mkfs' that will create a journalling file system (fstype=ext3).
#2a. mkfs.ext3 /dev/sdb1 <-- make ext3 file system on sdb1
#2b. mkfs.ext3 /dev/sdb2     ... etc ...
#2c. mkfs.ext3 /dev/sdb3
#2d. mkfs.ext3 /dev/sdb4
#3a. e2label /dev/sdb1 p1
#3b. e2label /dev/sdb2 p2
#3c. e2label /dev/sdb3 p3
#3d. e2label /dev/sdb4 p4
Next we need to make 'mount points' (empty directories at the / root level) for mounting our newly created file systems.
#4. mkdir /p1 /p2 /p3 /p4 <-- make mount points for file systems =====================
#5a. mount /dev/sdb1 /p1 <-- mount file system /dev/sdb1 on /p1
#5b. mount /dev/sdb2 /p2     ... etc ...
#5c. mount /dev/sdb3 /p3
#5d. mount /dev/sdb4 /p4
#5e. mount -l <-- display all mounted filesystems ========
/dev/sda1 on /boot type ext3 (rw) [/boot1]
/dev/sda2 on / type ext3 (rw) [/]
/dev/sda3 on /home type ext3 (rw) [/home]
/dev/sda4 on /var type ext3 (rw) [/var]
/dev/sdb1 on /p1 type ext3 (rw) [/p1]
/dev/sdb2 on /p2 type ext3 (rw) [/p2]
/dev/sdb3 on /p3 type ext3 (rw) [/p3]
/dev/sdb4 on /p4 type ext3 (rw) [/p4]
The above 'mount' commands are not permanent. We need to edit /etc/fstab to have the mounts performed automatically on subsequent reboots. Please see the next page.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#6. vi /etc/fstab =============
label            mount-point  file-type  options   dump fsck
============================================================
LABEL=/          /            ext3       defaults  1 1
LABEL=/boot1     /boot        ext3       defaults  1 2
LABEL=/p1        /p1          ext3       defaults  1 2  <-- add
LABEL=/p2        /p2          ext3       defaults  1 2  <-- add
LABEL=/p3        /p3          ext3       defaults  1 2  <-- add
LABEL=/p4        /p4          ext3       defaults  1 2  <-- add
LABEL=/home      /home        ext3       defaults  1 2
proc             /proc        proc       defaults  0 0
LABEL=/var       /var         ext3       defaults  1 2
LABEL=SWAP-sda3  swap         swap       defaults  0 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Here is how I added Red Hat Linux to my Windows 7 PC. In Jan 2010 I bought an HP Pavilion DV6-2043 laptop with 4 gig memory, 500 gig disc,& Windows 7 O/S (from Costco for $900).
In May 2010, I ordered Red Hat Enterprise workstation 5.4 with the intention to dual boot on my Windows 7 laptop. I loaded the Red Hat DVD & rebooted, but it still booted Windows 7 (ignoring the DVD).
I fixed that problem by rebooting into BIOS setup (hit F10 after power-on) and changing the boot order to DVD 1st, hard-disc 2nd.
I re-booted from my Red Hat DVD into Linux setup, but got 'No Space available'. So I re-booted into Windows-7 & investigated the hard-disc as follows:
Control Panel --> Admin Tools --> Computer Mngmnt --> Storage --> Disc Mngmnt =============================================================================
System       -    .2 gig (200 MB)
C:           - 450.0 gig
D:(recovery) -  10.0 gig
HP_TOOLS     -    .1 gig (100 MB)
Note that there was no free space & all 4 partitions are occupied (max 4 partitions per disc). We must free up at least 1 partition & recover some space to make it big enough for Red Hat.
We can use Windows 7 Disc Management to delete the D:recovery partition and shrink the C: drive to get the space required. It is OK to delete the recovery partition if you have already created the recovery DVDs (3 discs) or if you have the Windows 7 Install DVD.
Right-clicking on a partition gives options including 'Delete Volume' & 'Shrink Volume'.
So I deleted D: (the recovery partition) & shrank C:. The shrink option told me the max shrink was 215 gig. I chose to shrink by 140 gig to leave some space for Windows applications.
Note that deleting & shrinking partitions works well on Windows 7. Previously you might have had to use special software such as 'Partition Magic'.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
I then re-booted from the Red Hat DVD & when prompted for disc partitioning, I chose 'custom layout' & assigned partitions as follows:
/boot     1 gig
/ (root) 10 gig
/swap     4 gig
/usr     16 gig
/tmp      4 gig
/var      4 gig  <-- logfiles, etc
/opt      4 gig  <-- OPTional software (Micro Focus COBOL, etc)
/home    20 gig  <-- user homedirs
/home2   20 gig  <-- might be used for /home backups
/home3   20 gig  <-- reserve for future use
/home4   20 gig  <-- reserve for future use
/home5   20 gig  <-- reserve for future use
         ------------
total   140 gig
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
You might want to modify 'grub.conf' (Linux boot loader configuration file) to increase time to choose O/S (Linux or Windows-7) & change O/S descriptions. I increased choice time from 5 to 10 seconds & changed description of 2nd choice from 'other' to 'Windows 7'.
#1. Boot into Red Hat Linux & login as root.
#2. ls -l /boot <-- list files in the /boot partition =========== and look for 'grub'
-rw-r--r-- 1 root root   65937 Aug 18  2009 config-2.6.18-164.el5
drwxr-xr-x 2 root root    4096 May 14 16:49 grub
-rw------- 1 root root 2633188 May 13 17:51 initrd-2.6.18-164.el5.img
-rw-r--r-- 1 root root 2491311 May 13 18:20 initrd-2.6.18-164.el5kdump.img
drwx------ 2 root root   16384 May 13 10:45 lost+found
-rw-r--r-- 1 root root  108707 Aug 18  2009 symvers-2.6.18-164.el5.gz
-rw-r--r-- 1 root root 1225101 Aug 18  2009 System.map-2.6.18-164.el5
-rw-r--r-- 1 root root 1932316 Aug 18  2009 vmlinuz-2.6.18-164.el5
#3. ls -l /boot/grub <-- list files in the grub subdir ================ and look for grub.conf
#4. vi /boot/grub/grub.conf <-- edit the grub configuration file =======================
# grub.conf generated by anaconda
#          kernel /vmlinuz-version ro root=/dev/sda11
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
timeout=10                      #<-- I changed from 5 to 10 seconds
splashimage=(hd0,4)/grub/splash.xpm.gz
# hiddenmenu                    #<-- I #commented 'hiddenmenu' out
title Red Hat Enterprise Linux Client (2.6.18-164.el5)
        root (hd0,4)
        kernel /vmlinuz-2.6.18-164.el5 ro root=LABEL=/ rhgb quiet crashkernel=128M@16M
        initrd /initrd-2.6.18-164.el5.img
title Windows 7                 #<-- I changed from 'other' to 'Windows 7'
        rootnoverify (hd0,0)
        chainloader +1
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
In the previous section, we setup a RAID array, partitioned it, and made file systems on each partition (p1,p2,p3,p4). After adding directories for our application libraries and data, our file systems will look something like the following:
/root
/...                <-- unix/linux O/S directories
/home               <-- home directories
:----user1
:----etc---
:
/p1/apps            <---- /p1 file system mount point
:-----testlibs      <-- RUNLIBS=$TESTLIBS=/p1/apps/testlibs
:     :-----cbls      - COBOL programs
:     :-----jcls      - JCL/scripts
:     :---etc---      - see other subdirs at ADMjobs.htm#2C1
:-----testdata      <---- RUNDATA=$TESTDATA=/p1/apps/testdata
:     :-----mstr      - data files (or use topnodes as subdirs)
:     :-----jobtmp    - job temporary files
:     :---etc---      - see other subdirs at ADMjobs.htm#2C2
/p2/apps            <---- /p2 file system mount point
:-----prodlibs      <-- RUNLIBS=$PRODLIBS=/p2/apps/prodlibs
:     :-----cbls      - COBOL programs (production)
:     :-----jcls      - JCL/scripts (production)
:     :---etc---
:-----proddata
:     :-----mstr      - data files (or use topnodes as subdirs)
:     :-----jobtmp    - job temporary files
/p3/apps            <---- /p3 file system mount point
:-----backup          - backup & restore directories
:-----restore
/p4/apps            <---- /p4 file system mount point
:-----cnvdata         - data conversion directories
:     :----d1ebc      - EBCDIC data files from mainframe
:     :----d2asc      - converted to ASCII (preserving packed)
It is important to make separate file systems for our application libraries & data. You should not use /home directories for these; home dirs are for user files & small software packages, & keeping libraries & data in their own file systems simplifies backups & prevents a runaway program from filling /home.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
/p1/apps/testlibs
:--*--cbl0   - COBOL programs ('*' means files present)
:-----cbl1   - cleaned up, cols 1-6 & 73-80 cleared, etc
:-----cbl2   - cnvMF5 converts mainframe COBOL to MicroFocus COBOL
:-----cbls   - copy here (standard source library) before compiling
:-----cblx   - compiled COBOL programs (.int's)
:--*--parm0  - control cards (SORT FIELDS, etc)
:-----parms  - control cards with 73-80 cleared
:--*--cpy0   - for COBOL copybooks
:-----cpy1   - cleaned up, cols 1-6 & 73-80 cleared, etc
:-----cpy2   - cnvMF5 converts mainframe COBOL to MicroFocus COBOL
:-----cpys   - copy here (standard copybook library)
:--*--jcl0   - test/demo JCLs supplied
:-----jcl1   - intermediate conversion 73-80 cleared
:-----jcl2   - PROCs expanded from procs
:-----jcl3   - JCLs converted to Korn shell scripts
:-----jcls   - copy here manually 1 by 1 during test/debug
:--*--proc0  - test/demo PROCs supplied
:-----procs  - will be merged with jcl1, output to jcl2
:-----prns   - .prn files from MS WORD 'print to a file' for overlays
:-----ovls   - overlays to print forms+data (see pcloverlay & uvoverlay)
:-----rpts   - for optional statistics reports
:-----sf     - for misc scripts you may wish to write
:-----tmp    - tmp subdir used by various conversions
/p1/apps/testdata
:-----ap        <-- directories created for topnodes of data filenames
:-----ar            (Accounts Payable, Accounts Receivable, etc)
:-----gl
:-----py
:-----jobctl    <-- working directories shared by all applications
:-----joblog
:-----jobtmp
:-----rpts      <-- reports created by COBOL programs
:-----rptsovls  <-- some reports can be printed with overlays (see uvoverlay)
:-----sysout
:-----tmp
:-----wrk
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
/p2/apps/prodlibs
:-----cbls   - end point for converted COBOL programs
:-----cblst  - cobol source listings from compiles
:-----cblx   - compiled COBOL programs (.int's)
:-----cpys   - converted, ready for compiles
:-----jcl3   - JCLs converted to Korn shell scripts
:-----jcls   - copy here manually 1 by 1 during test/debug
:-----prns   - .prn files from MS WORD 'print to a file' for overlays
:-----ovls   - overlays to print forms+data (see pcloverlay & uvoverlay)
:-----pf     - uvcopy jobs to replace utilities (easytrieve,etc)
:-----sf     - for misc scripts you may wish to write
:-----tmp    - tmp subdir used by various conversions
Please compare these production libraries to the conversion & testing libraries on the preceding page. Note that many original mainframe & intermediate conversion subdirs have been dropped, retaining only the fully converted subdirs of COBOL programs, copybooks,& JCL(now Korn shell scripts).
/p2/apps/proddata
:-----ap        <-- directories created for topnodes of data filenames
:-----ar            (Accounts Payable, Accounts Receivable, etc)
:-----gl
:-----py
:-----jobctl    <-- working directories shared by all applications
:-----joblog
:-----jobtmp
:-----rpts      <-- reports created by COBOL programs
:-----rptsovls  <-- some reports can be printed with overlays (see uvoverlay)
:-----sysout
:-----tmp
:-----wrk
Also note that we have added some subdirs, such as 'prns' & 'ovls' in prodlibs and 'rptsovls' in proddata. These are used to form overlays that might be used to print some of your reports created by your COBOL programs.
See uvoverlay.htm which documents these procedures & gives an example of printing letters with a letterhead overlay.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
/p3/apps/backup
:-----homedirs         <-- $HOMEDIRS backup from last night
:     :-----appsadm      - showing only 1 user to save lines
:     :     :-----ctl    & showing only a few subdirs in 1st user
:     :     :-----env
:     :     :-----logs
:     :     :-----...
:-----homedirs-1       <-- $HOMEDIRS backup from 2 nights ago
:     :-----...same as above...
:-----proddata         <-- $PRODDATA backup from last night
:     :-----ap
:     :-----ar
:     :-----gl
:     :-----rpts
:     :-----wrk
:-----proddata-1       <-- $PRODDATA backup from 2 nights ago
:     :-----...same as above...
:-----prodlibs         <-- $PRODLIBS backups from last night
:     :-----cbls
:     :-----cpys
:     :-----ctl
:     :-----jcls
:     :-----parms
:-----prodlibs-1       <-- $PRODLIBS backup from 2 nights ago
:     :-----...same as above...
:-----zip              <-- last night's backup (only)
:     :-----homedirs_070529_0301.zip
:     :-----proddata_070529_0302.zip  <-- sample for May 29/2007
:     :-----prodlibs_070529_0303.zip
:     :-----...
:-----Day              <-- Daily backups in .zip files for last 40 days
:     :-----homedirs_070419_0301.zip
:     :-----proddata_070419_0302.zip  <-- 40 days ago = April 19/2007
:     :-----prodlibs_070419_0303.zip
:     :-----...(39 sets not shown)
:-----Month            <-- Monthly backups in .zip files for last 15 months
:     :-----homedirs_060201_0301.zip
:     :-----proddata_060201_0302.zip  <-- 15 months ago = Feb 1/2006
:     :-----prodlibs_060201_0303.zip
:     :-----...(14 sets not shown)
:-----Year             <-- Yearly backups in .zip files for last 7 years
:     :-----homedirs_000501_0301.zip
:     :-----proddata_000501_0302.zip  <-- 7 years ago = Jan 1/2000
:     :-----prodlibs_000501_0303.zip
:     :-----...(6 sets not shown)
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
RUNLIBS & RUNDATA are the 2 critical environmental variables (defined in the profile of programmers & operators) that point to the appropriate Libraries & Data for Testing & Production (prodlibs/proddata or testlibs/testdata).
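To make the idea concrete, here is a minimal sketch of how a converted JCL/script picks up those 2 variables at run time (the file name is a hypothetical example; the real scripts do this via the jobset function & exportfile):

     cd $RUNDATA                         # data file paths are then relative to test or prod data
     export CUSTMAS=ar/customer.master   # hypothetical file definition from the converted JCL
     echo "running scripts from $RUNLIBS, data in $RUNDATA"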
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Environmental variables RUNLIBS/RUNDATA defined in profiles of programmers & operators could allow small sites to use 1 machine for testing & production (larger sites would use separate machines for testing & production).
/home               <-- home directories
:----uvadm          <-- Vancouver Utilities
:    :-----...        - about 25 subdirs, see page '9X9'
:----appsadm        <-- Applications Administrator
:    :-----...        - about 10 subdirs, see page '9X9'
:----prgmr1
:----prgmr2,3,4,etc
:----oper1
:----oper2,3,4,etc
/p1/apps            <---- /p1 file system mount point
:-----testlibs      <-- RUNLIBS=$TESTLIBS=/p1/apps/testlibs
:     :-----cbls      - COBOL programs
:     :-----jcls      - JCL/scripts
:     :---etc---      - see other subdirs at ADMjobs.htm#2C1
:-----testdata      <---- RUNDATA=$TESTDATA=/p1/apps/testdata
:     :-----mstr      - data files (or use topnodes as subdirs)
:     :-----jobtmp    - job temporary files
:     :---etc---      - see other subdirs at ADMjobs.htm#2C2
/p2/apps            <---- /p2 file system mount point
:-----prodlibs      <-- RUNLIBS=$PRODLIBS=/p2/apps/prodlibs
:     :-----cbls      - COBOL programs (production)
:     :-----jcls      - JCL/scripts (production)
:     :---etc---
:-----proddata
:     :-----mstr      - data files (or use topnodes as subdirs)
:     :-----jobtmp    - job temporary files
/p3/apps            <---- /p3 file system mount point
:-----backup          - backup & restore directories
:-----restore
/p4/apps            <---- /p4 file system mount point
:-----cnvdata         - data conversion directories
:     :----d1ebc      - EBCDIC data files from mainframe
:     :----d2asc      - converted to ASCII (preserving packed)
RUNLIBS & RUNDATA are assigned in the profiles, which were discussed & listed beginning on page '9X9'. 'stub_profile's should define RUNLIBS & RUNDATA for use by the 'common_profile' as shown below:
export RUNLIBS=/p1/apps/testlibs <-- stub_profile_test for programmers export RUNDATA=/p1/apps/testdata
export RUNLIBS=/p2/apps/prodlibs <-- stub_profile_prod for operators export RUNDATA=/p2/apps/proddata
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
export TESTLIBS=/p1/apps/testlibs
export TESTDATA=/p1/apps/testdata
export PRODLIBS=/p2/apps/prodlibs
export PRODDATA=/p2/apps/proddata
export BACKUP=/p3/apps/backup
export RESTORE=/p3/apps/restore
export CNVDATA=/p4/apps/cnvdata
export RUNLIBS=/p1/apps/testlibs <-- stub_profile_test for programmers export RUNDATA=/p1/apps/testdata
export RUNLIBS=/p2/apps/prodlibs <-- stub_profile_prod for operators export RUNDATA=/p2/apps/proddata
The profiles also define several aliases that make it easy for programmers & operators to get to various frequently used directories. 'cdl' & 'cdd' are especially convenient.
alias cdl='cd $RUNLIBS' <-- /p1/apps/testlibs(prgmr) or /p2/apps/prodlibs(oprtr)
alias cdd='cd $RUNDATA' <-- /p1/apps/testdata(prgmr) or /p2/apps/proddata(oprtr)
alias cdb='cd $BACKUP' <-- /p3/apps/backup
alias cdr='cd $RESTORE' <-- /p3/apps/restore
alias cdc='cd $CNVDATA' <-- /p4/apps/cnvdata
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
On pages '2A2'-2A4, we setup a RAID array, partitioned it, and made file systems on each partition (p1,p2,p3,p4). On pages '2C1'-2C3, we suggested a basic directory design for application libraries & data (testlibs, testdata, proddata, prodlibs,& backups).
In this section we will present some alternatives in case you need multiple sets of libraries & data.
We will follow the basic design with some alternative designs for organizations with multiple companies &/or multiple separate applications on the same machine.
/p1/apps           <-- /p1 file system mount point
:----testlibs        - test-libraries & test-data
:----testdata
/p2/apps           <-- /p2 file system mount point
:----prodlibs        - production-libraries & production-data
:----proddata
/p3/apps           <-- /p3 file system mount point
:----backup          - backup & restore directories
:----restore
/p4/apps           <-- /p4 file system mount point
:----cnvdata         - data conversion directories
:    :----d1ebc
:    :----d2asc
From here on, we will illustrate basic & alternative designs only for 'proddata'. The libraries could have a similar design, or they may be more integrated (if the same programs & scripts are used for different companies).
Note that the 'RUNLIBS' & 'RUNDATA' definitions in the profiles will facilitate these alternatives. For production operators using the basic design above, the definitions would be:
export PRODLIBS=/p2/apps/prodlibs =================================
export PRODDATA=/p2/apps/proddata =================================
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
/p2/apps/proddata
:-----ap      <-- directories created for topnodes of filenames
:-----ar
:-----gl
:-----py
:-----jobctl  <-- standard directories shared by all applications
:-----joblog
:-----jobtmp
:-----rpts
:-----sysout
:-----tmp
:-----wrk
These directory illustrations are created by the 'dtree' script and show only directories (no files). But in the following illustrations, I will show a few data files to ensure your complete understanding.
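If you do not have the 'dtree' script handy, a rough equivalent (a sketch only, not the VU script) is to list just the directories with 'find':

     find /p2/apps/proddata -type d | sort   # directory names only, roughly what dtree illustrates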
When we convert mainframe data files, we use the top-node as a sub-directory within 'proddata' (path defined by $RUNDATA). We also convert to lower case. Here are a few examples:
AR.CUSTOMER.MASTER  <-- Mainframe file naming conventions
AR.SALES.ITEMS
GL.ACCOUNT.MASTER
GL.ACCOUNT.TRANS
/p2/apps/proddata/ar/customer.master
/p2/apps/proddata/ar/sales.items
/p2/apps/proddata/gl/account.master
/p2/apps/proddata/gl/account.trans
/p2/apps/proddata
:-----ar
:     :-----customer.master
:     :-----sales.items
:-----gl
:     :-----account.master
:     :-----account.trans
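The renaming rule is easy to reproduce in the shell; here is a minimal sketch (not the JCL converter itself) that derives the unix path from a mainframe dataset name by lower-casing it & splitting off the top-node:

     dsn=AR.CUSTOMER.MASTER
     lc=$(echo "$dsn" | tr 'A-Z' 'a-z')   # ar.customer.master
     top=${lc%%.*}                        # ar  (top-node becomes the subdir)
     rest=${lc#*.}                        # customer.master
     echo "$RUNDATA/$top/$rest"           # /p2/apps/proddata/ar/customer.master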
The following pages will show some alternatives to this basic design, using only these 2 subdirs & 4 files for illustration purposes.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
We will describe these alternate designs as 'multi-company' but of course these designs could apply to divisions, departments, or whatever is relevant.
/p2/apps/aaco           <-- company 'aa'
:-----proddata
:     :-----ar
:     :     :-----customer.master
:     :     :-----sales.items
:     :-----gl
:     :     :-----account.master
:     :     :-----account.trans
/p2/apps/bbco           <-- company 'bb'
:-----proddata
:     :-----ar
:     :     :-----customer.master
:     :     :-----sales.items
:     :-----gl
:     :     :-----account.master
:     :     :-----account.trans
/p2/apps/prodlibs
:     :-----cbls
:     :-----jcls
:     :------etc-
The above design could be used when there is no interaction required between the companies. When processing 'aaco', there is no need to access any files in 'bbco' & vice-versa. RUNDATA in operator profiles would be defined as 1 of the following 2:
export RUNDATA=/p2/apps/aaco/proddata
=====================================
- - - OR - - -
export RUNDATA=/p2/apps/bbco/proddata
=====================================
But RUNLIBS could define the same set of libraries if the same programs & JCL/scripts could be used for both companies:
export RUNLIBS=/p2/apps/prodlibs ================================
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
In this design, we will code the company directory following 'proddata' rather than preceding. This still does not allow company file sharing.
You might use format#1 (previous page) if you wanted to be able to backup all data for any 1 company into 1 archive.
You might use format#2 (this page) if you wanted to backup all data for all companies into 1 archive.
/p2/apps/proddata
:-----aaco              <-- company 'aa'
:     :-----ar
:     :     :-----customer.master
:     :     :-----sales.items
:     :-----gl
:     :     :-----account.master
:     :     :-----account.trans
:-----bbco              <-- company 'bb'
:     :-----ar
:     :     :-----customer.master
:     :     :-----sales.items
:     :-----gl
:     :     :-----account.master
:     :     :-----account.trans
/p2/apps/prodlibs
:     :-----cbls
:     :-----jcls
:     :------etc-
Like format#1, this design does not allow interaction between the companies if the same set of JCL/scripts is used for both companies. RUNDATA in operator profiles would be defined as 1 of the following 2:
export RUNDATA=/p2/apps/proddata/aaco
=====================================
- - - OR - - -
export RUNDATA=/p2/apps/proddata/bbco
=====================================
RUNLIBS could define the same set of libraries if the same programs & JCL/scripts could be used for both companies:
export RUNLIBS=/p2/apps/prodlibs ================================
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
We will discuss this under 2 scenarios, depending on whether the mainframe file naming allowed for multi-company or whether multi-companies are to be added after the conversion from mainframe to unix/linux.
In this scenario some data filenames might be:
AACO.AR.CUSTOMER.MASTER  <-- Mainframe files for company AACO
AACO.GL.ACCOUNT.MASTER
BBCO.AR.CUSTOMER.MASTER  <-- Mainframe files for company BBCO
BBCO.GL.ACCOUNT.MASTER
/p2/apps/proddata/aaco/ar.customer.master  <-- JCL converter default names
/p2/apps/proddata/aaco/gl.account.master     - subsystem (ar,gl) part of filename
/p2/apps/proddata/bbco/ar.customer.master
/p2/apps/proddata/bbco/gl.account.master
/p2/apps/proddata/aaco/ar/customer.master  <-- JCL converter option
/p2/apps/proddata/aaco/gl/account.master     - make subdirs from top 2 nodes
/p2/apps/proddata/bbco/ar/customer.master    (vs just the top-node)
/p2/apps/proddata/bbco/gl/account.master
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
To add a 2nd company after conversion (& allow inter-company datafile access), you would probably clone a 2nd set of JCL/scripts, renaming & modifying as required.
UV Software has the tools to automate the changes required.
export CUSTMAS=ar/customer.master <-- original JCL conversion =================================
export CUSTMAS=aaco/ar/customer.master <-- modified file definition ======================================
Remember that all JCL/scripts change directory at the beginning. Line 9 of all converted JCL/scripts calls common function 'jobset41', which performs 'cd $RUNDATA'. So the effective datafile path name becomes:
export CUSTMAS=/p2/apps/proddata/aaco/ar/customer.master <-- effective file def ========================================================
Actually the JCL converter uses the 'exportfile' function rather than the native 'export' (to display filenames on console), so this would be
exportfile CUSTMAS /p2/apps/proddata/aaco/ar/customer.master <-- effective def ============================================================
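For readers who have not seen it, 'exportfile' behaves much like 'export' plus a console display; a minimal sketch of the idea (the real function supplied in the VU sfun library may do more):

     exportfile() {                # sketch only - illustrates the export + display idea
        export "$1"="$2"           # define the ddname-style variable
        echo "$1=$2"               # echo the filename so it shows in the console/job log
     }
     exportfile CUSTMAS /p2/apps/proddata/aaco/ar/customer.master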
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
You might select an LTO tape drive (high speed, high capacity) to backup the entire system. RAID makes system failure very unlikely, so I suggest regular backups only for the application libraries and data using cheaper DAT tapes. I suggest the following supplier for backup tape drives.
www.coastalmicrosupply.com
sales@coastalmicrosupply.com
1830 Bickford Ave. #101
Snohomish WA 98290
888-763-7274
I suggest the following DAT tape drives. These drives are priced like used, but are often new (excess stock never sold). The cartridges are only $5 each.
HP C5683 DDS4 20/40 GB SCSI 68 pin internal  $110
   (ask for external version if your server does not provide internal slots)
Quantum STD2401LW-S 20/40 GB DDS4 SCSI 68 pin internal $110
You may need a 3480/3490 tape drive to load the data tapes from your mainframe. Some sites may need a 3480/3490 for continuing data exchange with other sites. I suggest the following supplier & tape drives:
www.comco0inc.com
Brian Gillette
BJ.Gillette@comco.org
2211 Grant St.
Bettendorf, IA 52722 USA
800-432-8638
IBM 3490E reconditioned & certified  $2,795.00
- reads both 3480 & 3490, writes only 3490
- highly recommended if you do NOT need to write 3480
Fujitsu M2483/5H reconditioned & certified  $2,495.00
- reads both 3480 & 3490, writes both 3480 & 3490
- recommended if you DO need to write 3480
I recommend the following SCSI controller cards for the tape drives. You get them from Coastal Micro or Comco. These are PCI cards for tower PCs & servers.
Adaptec 2944UW SCSI controller 68 pin only
Adaptec 29160N SCSI controller 50 & 68 pin
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
3A1. | Backup/Restore - Introduction & Overview |
3B1. | Backup & Restore Directories |
3C1. | using 'cp -r' to backup/restore directories & files (disc to disc) |
3C2. | using 'cpio' to backup/restore directories & files (disc to disc) |
3C3. | using 'tar' to backup/restore (disc to tape & tape to disc) |
3C4. | using 'cpio' to backup/restore (disc to tape & tape to disc) |
3D0. | summary of Disc backup/restore scripts |
- copy directory trees to empty directories | |
- safer than manual commands |
3D1. | copycpio1 - copy current directory tree to an empty directory |
3D2. | copycpio2 - copy a directory tree to your current empty directory |
3D3. | sortcpio - sort all filenames in an input directory tree |
- before copying to an output empty directory | |
- so files are written into directories in filename sequence | |
- 'ls' shows files in sequence, but many other commands do not |
3E0. | summary of Tape backup/restore scripts |
3E1. | backupT1 - backup any 1 directory tree to tape using find & cpio |
---------- using hard-coded tape rewind device | |
- may need to modify for other devices & other unix systems | |
- rewind allows only 1 archive on tape (vs other no-rewind scripts) |
3E2. | backupT1NRW - backup any 1 directory tree to tape using find & cpio |
------------- using no rewind, may stack multiple archives on 1 tape |
3E3. | backupT2 - backup multi directory trees to multi archives on tape |
---------- sample backup script to multi-archive tape using cpio | |
- use 'restoreT1' to restore any 1 archive to a work area | |
(for investigation & extract of desired files) |
3E4. | restoreT1 - restore any 1 archive from tape to an empty work space |
----------- for investigation & extract of desired files | |
- you must specify full path name of restore directory & | |
it must match your current directory & it must be empty |
3F1. | Restoring Backup tapes to a New System |
- restore homedirs, production Libraries,& production Data | |
from multi-archive tape created by backupT2 |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Part 3 is an introduction to basic unix/linux backup & restore.
The scripts here in Part 3 can be used standalone with no dependencies on site conventions, profiles,or environmental variables.
After you understand these simple backup commands & scripts, please see Part_4, which presents an advanced system for backups automatically scheduled by 'cron'.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Part_2 suggested some file designs that might be used for testing & production. Here is a short summary, with the emphasis on production backup & restore.
/home
:----uvadm
:    :----sf/adm/...  <-- backup/restore scripts provided by UV Software
:----appsadm
:    :----sf/...      <-- backup/restore scripts copied to /home/appsadm/sf/
:                         - customized as required for your site
/p2/apps              <---- /p2 file system mount point
:----proddata         <-- PRODuction DATA
:    :-----ap
:    :-----ar           <-- multiple subdirs in proddata
:    :-----gl
:    :-----jobtmp
:    :-----rpts
:    :-----sysout
:    :-----tmp
:    :-----wrk
:----prodlibs         <-- PRODuction LIBS
:    :-----cbls
:    :-----cpys
:    :-----ctl          <-- multiple subdirs in prodlibs
:    :-----jcls
:    :-----parms
/p3/apps              <---- /p3 file system mount point
:----backup             - backup directories (on-disc)
:    :----proddata      - backup dir for proddata
:    :      ...         - multiple subdirs in backup/proddata
:    :----prodlibs      - backup dir for prodlibs
:    :      ...         - multiple subdirs in backup/prodlibs
:    :----...
:----restore          <-- restore directories (from tape)
:    :----proddata      - restore area for proddata
:    :      ...         - multiple subdirs in restore/proddata
:    :----prodlibs      - restore area for prodlibs
:    :      ...         - multiple subdirs in restore/prodlibs
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
To illustrate the backup/restore commands, let's assume we wish to backup /p2/apps/proddata directory to /p3/apps/backup/proddata. Please see the dtree diagrams listed previously, but here is a partial view.
/p2
:----proddata         <-- production data superdir
:    :----ap
:    :----ar            <-- subdirs (from mainframe file top-nodes)
:    :----gl
:    :----py
/p3
:----backup           <-- backup directories
:    :----proddata    <-- proddata Backup before Night batch
:    :    :----ap
:    :    :----ar
:    :    :----gl
Before backups such as this, you should remove all old files from the destination. Otherwise, files that have since been removed from production would remain in the backup directory, which could cause serious errors if that backup were later restored.
#1. cd /p2/apps/proddata <-- change to input file superdir ====================
#2. rm -rf /p3/apps/backup/proddata/* <-- remove all old files & subdirs =================================
#3. cp -rf * /p3/apps/backup/proddata <-- copy all proddata files to proddata =================================
Option '-r' (recursive) is vital here, to copy all files from all subdirs. Option 'f' (force) inhibits any overwrite prompts in case you are using alias cp='cp -i' in your .profile (recommended). Actually we can omit 'f' here since we 1st removed all files from the destination.
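Because step #2 uses 'rm -rf', a small guard in any backup script you write is worthwhile; this sketch (assuming the same directory names as above) refuses to run unless the destination really is under the backup directory:

     dest=/p3/apps/backup/proddata          # destination used in the example above
     if [[ ! -d "$dest" || "$dest" != /p3/apps/backup/* ]]; then
        echo "refusing to clear $dest"; exit 1
     fi
     rm -rf "$dest"/*                       # now safe(r) to clear the old backup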
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
We will use the same example as illustrated on the previous page.
#1. cd /p2/apps/proddata <-- change to input file superdir ====================
#2. rm -rf /p3/apps/backup/proddata/* <-- remove all old files & subdirs =================================
#3. find . -print | cpio -pdmv /p3/apps/backup/proddata <-- backup all proddata ===================================================
i=input, o=output, p=pass (copy to destination directory)
c - compatible ASCII headers (allows transfer to other machines)
d - directory creation as required
m - modification times will be retained
v - verbose (displays filenames copied)
B - block size 5120 (vs 512), use 'Q' for 65K if your system allows
I - Input device name follows (vs standard Input)
O - Output device name follows (vs standard Output)
These scripts are safer than manual commands since they verify that you are positioned where you say you are & that the output directory is empty.
Script 'copycpio1' ensures you are positioned in the INPUT directory & that the output directory is empty.
Script 'copycpio2' ensures you are positioned in the OUTPUT directory & that the output directory is empty.
#1. cd /p2/apps/proddata                   <-- change to input directory
#2. ls -l /p3/apps/backup/proddata | more  <-- ensure outdir OK to erase
#3. rm -rf /p3/apps/backup/proddata/*      <-- clear output directory
#4. copycpio1 /p2/apps/proddata /p3/apps/backup/proddata <-- do it
    ====================================================
Please see the copycpio1 & copycpio2 scripts listed on pages '3D1' & '3D2'.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Our Tape backups will use the 1st SCSI Tape device for Linux systems (/dev/st0). You may have to modify depending on your version of Unix/Linux, and which particular tape device you wish to use.
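Before relying on /dev/st0, you can confirm that the kernel actually sees a tape drive there; 'mt' will report drive status if the device exists (the no-rewind device /dev/nst0 matters later when stacking multiple archives):

     mt -f /dev/st0  status    # rewind device - status of the 1st SCSI tape drive
     mt -f /dev/nst0 status    # no-rewind device for multi-archive tapes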
#1. cd /p2/apps/proddata <-- change to input superdir to be backed up ====================
#2. tar cvf /dev/st0 . <-- backup to tape from '.' (current dir) ==================
c | create a new archive (used for the backup above) |
v | verbose - display each filename as it is processed |
f | the next argument is the archive filename or tape device (/dev/st0 here) |
x | extract files from the archive (used for the restore below) |
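Before restoring, you can also list the tape contents without extracting anything, using tar's 't' (table of contents) option (the filename in the grep is just one of the sample files used in this doc):

tar tvf /dev/st0                        <-- list all filenames on the tape (no restore)
tar tvf /dev/st0 | grep customer.master <-- check whether a particular file is on the tape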
Let's assume we only want to restore a few files from the backup tape. We will restore the entire tape to our designated restore area in the /p3 filesystem. Then we can copy the desired files over to the production filesystem (/p2/apps/proddata/...).
It is possible to specify the desired files on the restore command, but it is awkward, & you often discover that you need additional files, which then require additional time-consuming passes of the tape - i.e. you will wish you had restored everything the 1st time.
#1. cd /p3/apps/restore/proddata <-- change to the restore area ============================
#2. rm -rf * <-- remove all old files ======== - naked '*' can be dangerous - be sure you are in the right place (via pwd)
#3. tar xvf /dev/st0 <-- restore all files from tape to current dir ================
#4. cp selected-files /p2/apps/proddata/... <-- recover desired files =======================================
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#1. cd /p2/apps/proddata <-- change to input file superdir ====================
#2. find . -print | cpio -ocvBO /dev/st0 <-- backup all files & subdirs to tape ====================================
We will assume the same situation as in the 'tar' example on the previous page. We only want to recover a few files, but we will restore the entire tape to a designated restore work area,& then selectively copy the desired files to the production directories.
#1. cd /p3/apps/restore/proddata <-- change to the restore area ============================
#2. rm -rf * <-- remove all old files ======== - ensure you are in right place (pwd)
#3. cpio -icvdmBI/dev/st0 <-- restore all files from tape to current dir =====================
#4. cp selected-files /p2/apps/proddata/...   <-- recover desired files
    =====================================
Option 'B' of '-ocvBO' above specifies block-size 5120 bytes
Option 'C' can specify a desired block-size, but timing is not much different as long as you specify at least 5120 (512 is definitely slower).
Larger block sizes might use less tape, but we have not measured by how much. Here is an example specifying a block-size of 512,000 bytes:
#2. find . -print | cpio -ocvBO /dev/st0 -C512000
    ============================================
    - backup all files & subdirs to tape writing blocks of 512,000 bytes
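As with tar, you can list a cpio tape without restoring it, by combining 't' (table of contents) with the 'i' (input) option:

cpio -itvBI /dev/st0             <-- list all filenames in the tape archive (no restore)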
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
3D0. | summary of Disc backup/restore scripts |
- copy directory trees to empty directories | |
- safer than manual commands |
3D1. | copycpio1 - copy current directory tree to an empty directory |
3D2. | copycpio2 - copy a directory tree to your current empty directory |
3D3. | sortcpio - sort all filenames in an input directory tree |
- before copying to an output empty directory | |
- so files are written into directories in filename sequence | |
- 'ls' shows files in sequence, but many other commands do not |
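The core of sortcpio (listed on page '3D3') is simply a 'sort' inserted between the find & the cpio. For a quick one-off copy you could do the same manually (the destination must already exist & be empty, as with the other copies above):

cd /p2/apps/proddata                                        <-- input directory
find . -print | sort | cpio -pdmv /p3/apps/backup/proddata  <-- copy in filename sequence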
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# copycpio1 - Korn shell script from UVSI stored in: /home/uvadm/sf/util/
# copycpio1 - copy a directory tree to a 2nd directory
#           - input may have sub-directories to any level
#           - output will have exactly the same tree structure
#           - you must be in input directory
#           - output directory must be empty
#           - you must specify full pathname of indir & outdir
#           - fail-safe, paranoid script
#           - also see copycpio2, similar to this *copycpio1, except:
#*copycpio1 - requires you to be in input directory
# copycpio2 - requires you to be in output directory
#
# Example - copy /p2/apps/proddata/... to /p3/apps/backup/proddata
#
# 1. cd /p2/apps/proddata                   <-- change to input directory
# 2. ls -l /p3/apps/backup/proddata | more  <-- ensure outdir OK to erase
# 3. rm -rf /p3/apps/backup/proddata/*      <-- clear output directory
# 4. copycpio1 /p2/apps/proddata /p3/apps/backup/proddata <-- do it
#    ====================================================
#
d1="$1"; d2="$2";
if [[ -d "$d1" && -d "$d2" ]]; then :
else echo "usage: copycpio1 indir outdir"
     echo "       ======================"
     echo " - arg1 & arg2 must be directories (input & output)"
     exit 91; fi
#
dpath=$(pwd)          # capture current directory path
if [[ $dpath != $d1 ]]
   then echo "you must be in the input directory $d1"; exit 92; fi
#
ls $2 >/tmp/copycpio_emptytest
if [[ -s /tmp/copycpio_emptytest ]]
   then echo "the output directory must be empty $d2";exit 93; fi
#
echo "copy $d1 to $d2 OK ? (or kill)"; read reply
cd $d1                # <-- change to indir
#
find . -print | cpio -pdmv $d2   # <-- copy all files to outdir
#=============================
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# copycpio2 - Korn shell script from UVSI stored in: /home/uvadm/sf/util/
# copycpio2 - copy a directory tree to a 2nd directory
#           - input may have sub-directories to any level
#           - output will have exactly the same tree structure
#           - you must be in output directory & it must be empty
#           - you must specify full pathname of outdir & it must match .
#           - fail-safe, paranoid script
#           - also see copycpio1, similar to this *copycpio2, except:
# copycpio1 - requires you to be in input directory
#*copycpio2 - requires you to be in output directory
#
# Example - copy /p2/apps/proddata/... to /p3/apps/backup/proddata
#
# 1. cd /p3/apps/backup/proddata          <-- change to output directory
# 2. ls -l | more                         <-- ensure outdir OK to erase
# 3. rm -rf *                             <-- clear output directory
# 3a. rm -rf /p3/apps/backup/proddata/*   <-- safer (avoid unqualified '*')
# 4. copycpio2 /p2/apps/proddata /p3/apps/backup/proddata <-- do it
#    ====================================================
#
d1="$1"; d2="$2";
if [[ -d "$d1" && -d "$d2" ]]; then :
else echo "usage: copycpio2 indir outdir"
     echo "       ======================"
     echo " - arg1 & arg2 must be directories (input & output)"
     exit 91; fi
#
dpath=$(pwd)          # capture current directory path
if [[ $dpath != $d2 ]]
   then echo "you must be in output directory (& it must be empty) $d2"; exit 92; fi
#
ls . >/tmp/copycpio_emptytest
if [[ -s /tmp/copycpio_emptytest ]]
   then echo "the current directory (output) must be empty $d2";exit 93; fi
#
echo "copy $d1 to current dir $d2 OK ? (or kill job)"; read reply
cd $d1                # <-- change to indir
#
find . -print | cpio -pdmv $d2   # <-- copy all files to outdir
#=============================
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# sortcpio - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/
# sortcpio - sort filenames & copy a directory to a 2nd directory
#          - so filenames will be in sequence for backups, etc
#            (ls sorts filenames, but not tar, cpio, du, etc)
#          - input may have sub-directories to any level
#          - output will have exactly the same tree structure
#            except directory names & filenames will be sorted
#
if [[ -d $1 && -d $2 ]]; then :
else echo "usage: sortcpio indir outdir (outdir empty)"
     echo "       ====================="; exit 1; fi
#
echo "current directory must be outside of indir & have tmp subdir - OK ?";read
echo "outdir can be anywhere outside of indir & must be empty - OK ?";read
#
cwd=$(pwd)
cd $1; d1=$(pwd); cd $cwd
cd $2; d2=$(pwd); cd $cwd
if [[ ! -d tmp ]];then mkdir tmp; fi
cd $d1
#
find . -print >$cwd/tmp/files1
#=============================
sort -o$cwd/tmp/files2 $cwd/tmp/files1
#=====================================
cat $cwd/tmp/files2 | cpio -pdmv $d2
#===================================
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
3E1. | backupT1 - backup any 1 directory tree to tape using find & cpio |
---------- using hard-coded tape rewind device | |
- may need to modify for other devices & other unix systems | |
- rewind allows only 1 archive on tape (vs other no-rewind scripts) |
3E2. | backupT1NRW - backup any 1 directory tree to tape using find & cpio |
------------- using no rewind, may stack multiple archives on 1 tape | |
- using hard-coded tape device | |
- may need to modify for other devices & other unix systems | |
- see backupNtape using $variables for tape devices |
3E3. | backupT2 - backup multi directory trees to multi archives on tape |
---------- sample backup script to multi-archive tape using cpio | |
- use 'restoreT1' to restore any 1 archive to a work area | |
(for investigation & extract of desired files) | |
- using hard-coded tape device | |
- may need to modify for other devices & other unix systems | |
- see backupNtape using $variables for tape devices |
3E4. | restoreT1 - restore any 1 archive from tape to an empty work space |
----------- for investigation & extract of desired files | |
- you must specify full path name of restore directory & | |
it must match your current directory & it must be empty |
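The tape scripts summarized above rely on the 'mt' command to position the tape. Remember that the rewind device (/dev/st0) rewinds the tape when the command closes, so positioning must be done on the no-rewind device (/dev/nst0). The basic operations, using the Linux SCSI device names from these examples, are:

mt -f /dev/st0  status           <-- report drive status & current file number
mt -f /dev/st0  rewind           <-- rewind to the beginning of the tape
mt -f /dev/nst0 fsf 2            <-- forward space 2 archives (no-rewind device)
mt -f /dev/st0  offline          <-- rewind & unload (eject) the tape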
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# backupT1 - backup any 1 directory tree to DAT tape using find & cpio
#          - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/
#
# usage: backupT1 directory
#        ==================
#
if [ ! -d "$1" ]; then echo "usage: backupT1 directory"; exit 9; fi
#
echo "backup $1 to DAT tape - OK ?"; read reply
#
cd $1
find . -print | cpio -ocvBO/dev/st0
exit 0
#
#note - tape device shown is the 1st SCSI device on a Linux system
#     - you must modify for other devices & other unix systems
#
# See more advanced backup/restores at: www.uvsoftware.ca/admjobs.htm#Part_4
# - easier to install backup scripts at new sites
# - easier to make changes when directories or tape devices change
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# backupT1NRW - backup any 1 directory tree to DAT tape using find & cpio
#               with no rewind, so you can stack multiple archives on 1 tape
#             - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/
#
# usage: backupT1NRW directory
#        =====================
#
if [ ! -d "$1" ]; then echo "usage: backupT1NRW directory"; exit 9; fi
#
echo "backup $1 to DAT tape - OK ?"; read reply
#
cd $1
find . -print | cpio -ocvBO/dev/nst0
exit 0
#Note - tape device is NO Rewind for 1st SCSI Tape on Linux
#     - you must modify for other devices & other unix systems
#
# See more advanced backup/restores at: www.uvsoftware.ca/admjobs.htm#Part_4
# - easier to install backup scripts at new sites
# - easier to make changes when directories or tape devices change
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# backupT2 - backup multiple directory trees to multiple archives on DAT tape
#          - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/
#
# usage: backupT2
#        ========
#
# This script is a sample backup script to multi-archive tape using cpio
# - also see 'restoreT1' to restore any 1 archive to a work area
#   (for investigation & extract of desired files)
#
# assign symbols for REWIND/NOREWIND devices
# - for easier changes for various unix/linux systems (also see note at end)
TAPERWD=/dev/st0      # rewind tape device for Linux SCSI
TAPENRW=/dev/nst0     # NO rewind tape device for Linux SCSI
#
#note - tape device shown is the 1st SCSI device on a Linux system
#     - you must modify for other devices & other unix systems
#     - also see backupNtape showing how tape devices can be defined in 1 place
#
echo "backupT2 - backup proddata, prodlibs,& home/dirs to DAT tape OK"
read reply
mt -f $TAPERWD rewind            # rewind for Linux
#
cd /p2/apps/prodlibs
echo "backup /p2/apps/prodlibs - archive #0 (1st on tape)"
find . -print | cpio -ocvBO/$TAPENRW
#
cd /p2/apps/proddata
echo "backup /p2/apps/proddata - archive #1 (2nd on tape)"
find . -print | cpio -ocvBO/$TAPENRW
#
cd /home
echo "backup all /home/...dirs - archive #2 (3rd on tape)"
find . -print | cpio -ocvBO/$TAPENRW
#
echo "backups complete - rewinding tape"
mt -f $TAPERWD rewind            # rewind for Linux
exit 0
#
# See more advanced backup/restores at: www.uvsoftware.ca/admjobs.htm#Part_4
# - easier to install backup scripts at new sites
# - easier to make changes when directories or tape devices change
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# restoreT1 - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/
# restoreT1 - restore any 1 archive from tape to an empty work space
#             for investigation & extract of desired files
#
#usage: 1. change to directory where files are to be restored
#       2. remove any old files (directory must be empty)
#       3. restoreT1 tapefile#(0,1,2,etc) directory
#          ========================================
#sample: 1. cd /p2/apps/prodlibs          <-- change to PRODuction LIBrarieS
#        2. rm -fr *                      <-- remove old files & subdirs
#        3. restoreT1 0 /p2/apps/prodlibs <-- restore from 1st file (archive #0)
#           ============================   - use 1 for 2nd archive, 2 for 3rd, etc
#
# - you must specify full path name of restore directory &
#   it must match your current directory & it must be empty
# assign symbols for REWIND/NOREWIND devices
# - for easier changes for various unix/linux systems (also see note at end)
TAPERWD=/dev/st0      # rewind tape device for Linux SCSI
TAPENRW=/dev/nst0     # NO rewind tape device for Linux SCSI
# capture command arguments & verify integer
fno="$1"              # capture tape file number (0 relative)
cdir="$2"
dpath=$(pwd)          # capture current directory path
echo "restoreT1: arg1=file#=$fno, arg2=curdirpath=$cdir"
if ((fno < 0 || fno > 99))
   then echo "USAGE: restoreT1 tapefile#(0,1,2,etc) curdirfullpath";
        echo "       arg1 invalid - tapefile# (0 relative 0-99)"
        exit 91; fi
if [[ ! -d "$cdir" ]]
   then echo "USAGE: restoreT1 tapefile#(0,1,2,etc) curdirfullpath";
        echo "       arg2 invalid - must be full path to current(restore) directory"
        echo "       - restore directory must be empty & you must be in it"
        exit 92; fi
if [[ $dpath != $cdir ]]
   then echo "you must be in the directory specified"; exit 92; fi
ls . >/tmp/restore_emptytest
if [[ -s /tmp/restore_emptytest ]]
   then echo "the current directory must be empty";exit 93; fi
#
echo "restore tapefile# $fno to current directory ($cdir) OK ?"; read reply
echo "- are you sure tapefile# $fno correct for $cdir ? (kill/rekey if not)"
read reply
mt -f $TAPERWD rewind     # ensure tape rewound (for Linux)
mt -f $TAPENRW fsf $fno   # forward space to desired archive (for Linux)
cpio -icvdmBI $TAPENRW    # restore spcfd archive to cur dir
#=====================
echo "tapefile# $fno restored to $cdir, rewinding tape"
mt -f $TAPERWD rewind     # rewind tape after (for Linux)
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
With RAID, you may never lose a disc, but at some point you will want to move to new hardware. Here is the procedure to restore the backup tape archives created by backupT2 (listed on page '3E3').
#3a. cd /home
#3b. mt -f /dev/st0 rewind; mt -f /dev/nst0 fsf 2   <-- rewind, then forward space 2 files (to 3rd archive)
#3c. cpio -icvdmBI /dev/nst0                        <-- restore 3rd archive (homedirs)
#4a. groupadd apps
#4b. adduser -d/home/uvadm -gapps uvadm
#4c. adduser -d/home/appsadm -gapps appsadm
#4d. adduser -d/home/userxxx -gapps userxxx
#4_. - - - - - etc - - - - -
#6a. Logon appsadm                                  --> /home/appsadm
#6b. cdl                                            --> $PRODLIBS (/p2/apps/prodlibs on page '2C0' example)
#6c. mt -f /dev/st0 rewind; mt -f /dev/nst0 fsf 0   <-- rewind, forward space 0 files (1st archive)
#6d. cpio -icvdmBI /dev/nst0                        <-- restore 1st archive (production libraries)
#6e. cdd                                            --> $PRODDATA (/p2/apps/proddata on page '2C0' example)
#6f. mt -f /dev/st0 rewind; mt -f /dev/nst0 fsf 1   <-- rewind, forward space 1 file (2nd archive)
#6g. cpio -icvdmBI /dev/nst0                        <-- restore 2nd archive (production data)
#7a. Logon appsadm          --> /home/appsadm
#7b. cdl                    --> $PRODLIBS (/p2/apps/prodlibs on page '2C0' example)
#7c. restoreT1 0 $PRODLIBS  <-- restore 1st archive (production libraries)
#7d. cdd                    --> $PRODDATA (/p2/apps/proddata on page '2C0' example)
#7e. restoreT1 1 $PRODDATA  <-- restore 2nd archive (production data)
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
4A1. | Backup/Restore - Introduction & Overview |
4A2. | Advantages of this backup system |
4B1. | Backup & Restore Directories |
4C1. | Directories for advanced Backup & Restore |
4D1. | cronbackupNight - crontab file to run backup scripts |
----------------- backupNight, backupMonth, backupYear, backupNtape | |
- supplied in /home/uvadm/env/... (after Vancouver Utilities installed) | |
- copy to /home/appsadm/env/... & modify as required for your site | |
- this crontab & the backup scripts should run under the 'appsadm' user id | |
- much safer than running under root | |
- appsadm should be in same group as users who created files & directories | |
& permissions 775 for directories & 664 for files (umask 002 in profiles) | |
- use a separate crontab for root if you need to backup system files |
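The group & permission requirements mentioned in 4D1 normally come for free when every user (including appsadm) picks up umask 002 from the common_profile. For trees that were created earlier under root or a stricter umask, a one-time fix along these lines may be needed (paths follow the Part 2 examples):

umask 002                                            <-- in common_profile: new files 664, new dirs 775
chown -R appsadm:apps /p3/apps/backup                <-- give the backup tree to appsadm (run as root)
find /p3/apps/backup -type d -exec chmod 775 {} \;   <-- directories writable by the group
find /p3/apps/backup -type f -exec chmod 664 {} \;   <-- files writable by the group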
4E1. | cron mail with console logs of backups |
- when backup scripts run under cron, console msgs are mailed to the | |
user owning the crontab (which should be appsadm). | |
- sample mail after cron runs backupNight, backupMonth,& backupNtape |
4F0. | Advanced backup scripts |
- using environmental variables for directories & tape devices | |
to minimize changes when installing backup scripts at new sites | |
($HOMEDIRS, $PRODDATA, $PRODLIBS, $BACKUP, $TAPERWD, $TAPENRW) |
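Here is a sketch of the kind of assignments the advanced scripts expect to find in $APPSADM/env/common_profile. The actual paths & devices are site-specific; the values below simply follow the Part 2 directory examples & the Linux SCSI tape names used throughout this doc:

export HOMEDIRS=/home                    # user home directories
export PRODDATA=/p2/apps/proddata        # production data
export PRODLIBS=/p2/apps/prodlibs        # production libraries (jcls, cbls, parms, etc)
export BACKUP=/p3/apps/backup            # on-disc backup superdir (zip, Day, Month, Year)
export RESTORE=/p3/apps/restore          # restore work area (from tape)
export TAPERWD=/dev/st0                  # rewind tape device (Linux SCSI)
export TAPENRW=/dev/nst0                 # NO-rewind tape device (Linux SCSI)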
4F1. | backupPROD - backup $PRODDATA & $PRODLIBS |
------------ intended to be run by cron at 3 AM (but could run manually) | |
- removes 2 days ago backup proddata-1 & prodlibs-1 | |
- renames yesterdays proddata,prodlibs (append -1) | |
- makes new empty output dirs for today's backups | |
- runs copycpio1 to copy $PRODDATA to $BACKUP/proddata | |
- runs copycpio1 to copy $PRODLIBS to $BACKUP/prodlibs | |
- zips $PRODDATA & $PRODLIBS to date stamped files in $BACKUP/zip/... | |
proddata_yymmdd_HHMM.zip & prodlibs_yymmdd_HHMM.zip | |
- then copy .zip files to $BACKUP/Day/... | |
- last 40 days accumulated, older files dropped by backupPurge1 (or 2) |
4F2. | backupTEST - backup $TESTDATA & $TESTLIBS |
- similar to backupPROD (for $PRODDATA & $PRODLIBS) | |
- use backupTEST during conversion period | |
- use backupPROD after you go into PRODuction |
4F3. | backupHOME - backup all /home/... directories |
- removes 2 days ago homedirs-1, renames yesterday to homedirs-1 | |
- run copycpio1 to copy all homedirs to $BACKUP/homedirs | |
- zips homedirs to $BACKUP/zip/homedirs_yymmdd_HHMM.zip | |
- copy .zip file to $BACKUP/Day/... |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
4F4. | backupMonth - copy current backup zip files to Month backup directory |
------------- for cron on 1st of month at 4 AM (but could run manually) | |
- preceded by backupNight, which backed up $PRODDATA $PRODLIBS $HOMEDIRS | |
& zipped them to $BACKUP/zip/... with date/time stamps | |
- this script simply copies contents of $BACKUP/zip/* to $BACKUP/Month |
4F5. | backupYear - copy current backup zip files to Year backup directory |
------------ for cron on Jan 1 at 4:30 AM (but could run manually) | |
- preceded by backupNight, backed up $PRODDATA $PRODLIBS $HOMEDIRS | |
& zipped them to $BACKUP/zip/... with date/time stamps | |
- this script simply copies contents of $BACKUP/zip/* to $BACKUP/Year |
4F6. | backupTapeA - backup $BACKUP/zip/... files to tape (DAT or LTO) |
------------- run by cron at 5 AM, following Nightly backups to disc | |
- backup files already zipped by backupPROD/TEST/HOME into $BACKUP/zip/... | |
(prodlibs_yymmdd_HHMM.zip, ...etc... to homedirs_yymmdd_HHMM.zip) | |
- if required restore files to $RESTORE/zip/... using 'restoreTapeA' script | |
- then copy desired .zip file to appropriate area & unzip | |
($RESTORE/prodlibs, $RESTORE/proddata, $RESTORE/homedirs) | |
- after restore, unzip, investigate, & copy desired files back to | |
$PRODDATA/..., or $PRODLIBS/..., or /home/... |
4F7. | restoreTapeA - restore from tape to an empty work space |
-------------- for investigation & extract of desired files | |
- you must specify full path name of restore directory & | |
it must match your current directory & it must be empty | |
- we minimize necessity to modify backup scripts for new sites | |
by using environmental variables $TAPERWD, $TAPENRW | |
- SCSI tape devices for Linux might be TAPERWD=/dev/st0 & TAPENRW=/dev/nst0 | |
- tape devices for the site are assigned in $APPSADM/env/common_profile | |
- these scripts '.' dot-execute $APPSADM/.bash_profile & common_profile |
4F8. | backupPurge1 - purge backup files older than specified limits |
-------------- for cron each night at 2:30 AM (but could run manually) | |
- removes files from $BACKUP/Day older than 40 days | |
- removes files from $BACKUP/Month older than 15 months | |
- removes files from $BACKUP/Year older than 7 years | |
- aging based on unix directory entry last modification dates | |
which could be wrong if files copied without -p option | |
- alternative backupPurge2 aging based on _yymmdd_ embedded in filenames |
4F9. | backupPurge2 - purge backup files older than specified limits |
-------------- for cron each night at 2:30 AM (but could run manually) | |
- removes files from $BACKUP/Day older than 40 days | |
- removes files from $BACKUP/Month older than 15 months | |
- removes files from $BACKUP/Year older than 7 years | |
- aging based on _yymmdd_ embedded in filenames, vs backupPurge1 | |
where aging based on unix directory entry last modification dates | |
which could be wrong if files copied without -p option |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
4F10. purgeold2 - purge files from a directory older than a specified no of days
      ----------- based on the dates embedded in the filenames
      - uvcopy job called by the backupPurge2 script to remove:
        - Daily backup files older than 40 days
        - Monthly backup files older than 450 days
        - Yearly backup files older than 2555 days
      - alternate backupPurge1 uses directory entry dates vs embedded dates
        - which could be wrong if files copied without -p option
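The supplied purge is a uvcopy job (purgeold2), but the idea can be sketched in a few lines of shell. This illustration is hypothetical & assumes GNU 'date' for the "40 days ago" arithmetic plus the _yymmdd_ stamps created by the backup scripts:

#!/bin/ksh
# purgeday_sketch - illustration only (the supplied job is the uvcopy 'purgeold2')
cutoff=$(date -d "40 days ago" +%y%m%d)   # oldest yymmdd stamp to keep in $BACKUP/Day
cd $BACKUP/Day || exit 9
for f in *_??????_????.zip; do
  [[ -f $f ]] || continue                 # skip if nothing matches the pattern
  stamp=${f#*_}                           # strip the prefix up to the 1st '_'
  stamp=${stamp%%_*}                      # keep only the yymmdd portion
  if [[ $stamp < $cutoff ]]; then
     echo "purging $f (older than $cutoff)"
     rm -f $f
  fi
done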
Part_3 was an education on unix/linux basic backup & restore. The scripts were simple & could be used standalone with no dependencies on site conventions, profiles,or environmental variables.
Here in Part 4 ('https://www.uvsoftware.ca/admjobs.htm#Part_4'), we will present an advanced backup/restore system which uses 'cron' to automatically schedule backups to disc & to tape.
The scripts are well commented & listed in Part 4, so new users should be able to learn something about unix/linux Korn shell scripts & crontabs.
Since the backup scripts are scheduled by crontabs owned by 'appsadm', you must ensure that appsadm can write to the backup directories. If you originally created them as root, you can fix the permissions & ownership as follows:
#1. chmod 775 $BACKUP               <-- set permissions
    =================
#2. chown appsadm:apps $BACKUP      <-- set owner & group
    ==========================
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Part_2 suggested some file system designs that might be used for testing & production. Here is a short summary, with the emphasis on production backup & restore.
/home
:----uvadm
:    :----sf/adm/...   <-- backup/restore scripts provided by UV Software
:----appsadm
:    :----sf/...       <-- backup/restore scripts copied to /home/appsadm/sf/
:                            - customized as required for your site
/p1/apps             <---- /p1 file system mount point
:----testdata        <-- TEST DATA
:    :-----ap            (subdirs same as proddata below)
:----testlibs        <-- TEST LIBrarieS
:    :-----cbls          (subdirs same as prodlibs below)
/p2/apps             <---- /p2 file system mount point
:----proddata        <-- PRODuction DATA
:    :-----ap
:    :-----ar        <-- multiple subdirs in proddata
:    :-----gl
:----prodlibs        <-- PRODuction LIBrarieS
:    :-----cbls
:    :-----cpys      <-- multiple subdirs in prodlibs
:    :-----jcls

/p3/apps             <---- /p3 file system mount point
:----backup            - backup directories (on-disc)
:    :----proddata     - backup dir for proddata (unzipped, quick restore)
:    :      ...        - multiple subdirs in backup/proddata
:    :----prodlibs     - backup dir for prodlibs (unzipped, quick restore)
:    :      ...        - multiple subdirs in backup/prodlibs
:    :-----zip       <-- last night's backup (only)
:    :     :-----proddata_070529_0302.zip  <-- sample for May 29/2007
:    :-----Day       <-- Daily backup .zips last 40 days
:    :     :-----proddata_070419_0302.zip  <-- sample for April 19/2007
:    :-----Month     <-- Monthly backup .zips last 15 months
:    :     :-----proddata_060201_0302.zip  <-- sample for Feb 1/2006
:    :-----Year      <-- Yearly backup .zips last 7 years
:    :     :-----proddata_000101_0302.zip  <-- sample for Jan 1/2000
:----restore         <-- restore directories (from tape)
:    :----proddata     - restore area for proddata
:    :      ...        - multiple subdirs in restore/proddata
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
/p3/apps/backup
:-----homedirs       <-- $HOMEDIRS backup from last night
:     :-----appsadm      - showing only 1 user to save lines
:     :     :-----ctl      & showing only a few subdirs in 1st user
:     :     :-----logs
:     :     :-----...
:-----homedirs-1     <-- $HOMEDIRS backup from 2 nights ago
:     :-----...same as above...
:-----proddata       <-- $PRODDATA backup from last night
:     :-----ap
:     :-----ar
:     :-----gl
:-----proddata-1     <-- $PRODDATA backup from 2 nights ago
:     :-----...same as above...
:-----prodlibs       <-- $PRODLIBS backup from last night
:     :-----cbls
:     :-----cpys
:     :-----jcls
:-----prodlibs-1     <-- $PRODLIBS backup from 2 nights ago
:     :-----...same as above...
:-----testdata       <-- $TESTDATA backup from last night
:-----testdata-1     <-- $TESTDATA backup from 2 nights ago
:-----testlibs       <-- $TESTLIBS backup from last night
:-----testlibs-1     <-- $TESTLIBS backup from 2 nights ago
:-----zip            <-- last night's backup (only)
:     :-----homedirs_070529_0301.zip
:     :-----proddata_070529_0302.zip  <-- sample for May 29/2007
:     :-----prodlibs_070529_0303.zip
:     :-----...
:-----Day            <-- Daily backups in .zip files for last 40 days
:     :-----homedirs_070419_0301.zip
:     :-----proddata_070419_0302.zip  <-- 40 days ago = April 19/2007
:     :-----prodlibs_070419_0303.zip
:     :-----...(39 sets not shown)
:-----Month          <-- Monthly backups in .zip files for last 15 months
:     :-----homedirs_060201_0301.zip
:     :-----proddata_060201_0302.zip  <-- 15 months ago = Feb 1/2006
:     :-----prodlibs_060201_0303.zip
:     :-----...(14 sets not shown)
:-----Year           <-- Yearly backups in .zip files for last 7 years
:     :-----homedirs_000101_0301.zip
:     :-----proddata_000101_0302.zip  <-- 7 years ago = Jan 1/2000
:     :-----prodlibs_000101_0303.zip
:     :-----...(6 sets not shown)
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# cronbackupNight - crontab file to run backup scripts # - Nightly1, Monthly1, Yearly # - by Owen Townsend, UV Software, May 19/2007 # - see doc at www.uvsoftware.ca/admjobs.htm#Part_4 # # - crontab file supplied with Vancouver Utilities in /home/appsadm/sf/adm/... # - I suggest you setup userid 'appsadm' to house your crontabs & cron scripts # then copy supplied file to /home/appsadm/sf/cronbackupNight & modify as reqd # - see backup scripts (backupPROD,backupTEST,backupHOME,backupMonth,backupYear) # - these should also be copied to /home/appsadm/sf/... & modified there as reqd # - this crontab & the backup scripts should run under the 'appsadm' user id # - much safer than running under root # - works OK if appsadm in same group as users who created files & directories # & if permissions 775 for directories & 664 for files (umask 002 in profiles) # - use a separate crontab for root if you need to backup system files # # ** updating & installing crontab files ** # # - we recommend you store this crontab file in /home/appsadm/sf/... # update it (with vi) when required & re-install, using the crontab command # (IE - the master copy is in /home/appsadm/sf/... not the installed version) # - log on as 'appsadm' to issue the 'crontab' command # (crontab command installs crontabs only for the logged on user) # - the 'crontab' command (#4 below) copies the specified crontab file # to the 'real' crontab file which would be: /var/spool/cron/appsadm # - Before 1st use, you must logon as root, & add the 'appsadm' # userid to: /etc/cron.allow if it exists. If it does not exist # you can use cron unless your userid exists in /etc/cron.deny # # 1. logon as appsadm --> /home/appsadm # 2. vi sf/cronbackupNight - edit this file as required # 3. crontab -r - remove old crontabs for user (appsadm) # (ok since this file is the master copy) # 4. crontab sf/cronbackupNight - activate new crontab for appsadm # 5. crontab -l - list crontab onfile to confirm installation # # arguments to crontab are as follows: # Minute Hour DayofMth MthofYr DayofWeek <----command----> # 00 3 * * 2-6 #<-- codes used for 1st cmd below # # 00 3 * * 2-6 /home/appsadm/sf/backupPROD #Nightly backup proddata & prodtest #======================================= 15 3 * * 2-6 /home/appsadm/sf/backupTEST #Nightly backup testdata & testtest #======================================= # 30 3 * * 2-6 /home/appsadm/sf/backupHOME #Nightly backup homedirs #======================================= # 00 4 01 * * /home/appsadm/sf/backupMonth #Monthly on 1st of month at 4 AM #======================================= # 30 4 01 01 * /home/appsadm/sf/backupYear #Yearly on Jan 1 at 4:30 AM #======================================= # 00 5 * * 2-6 /home/appsadm/sf/backupNtape #Nightly backup to tape #======================================== #Note - Nightly backups for Monday to Friday after 12AM coded as 2-6 (Tues-Sat)
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
When scripts are run by cron, the console messages are automatically 'mailed' to the owner of the crontab file (appsadm in this case).
This is a great convenience & the appsadm administrator should check this mail to ensure no problems occurred in overnight backups & any other cron jobs.
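To read this mail, log on as appsadm & use any mail reader pointed at the local mailbox, for example (the mailbox path shown is the usual location on the Linux systems described here & may differ on other unix systems):

mail                                      <-- read appsadm's local mail (cron joblogs)
grep -i error /var/spool/mail/appsadm     <-- quick scan of the mailbox file for error messages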
From appsadm@uvsoft3.uvsoft.ca Sun May 20 11:30:05 2007
Return-Path: <appsadm@uvsoft3.uvsoft.ca>
Received: from uvsoft3.uvsoft.ca (uvsoft3.uvsoft.ca [127.0.0.1])
        by uvsoft3.uvsoft.ca (8.12.10/8.12.10) with ESMTP id l4KIU5dc007947
        for <appsadm@uvsoft3.uvsoft.ca>; Sun, 20 May 2007 11:30:05 -0700
Received: (from appsadm@localhost)
        by uvsoft3.uvsoft.ca (8.12.10/8.12.10/Submit) id l4KIU0ix007914
        for appsadm; Sun, 20 May 2007 11:30:00 -0700
Date: Sun, 20 May 2007 11:30:00 -0700
Message-Id: <200705201830.l4KIU0ix007914@uvsoft3.uvsoft.ca>
From: root@uvsoft3.uvsoft.ca (Cron Daemon)
To: appsadm@uvsoft3.uvsoft.ca
Subject: Cron <appsadm@localhost> /home/appsadm/sf/backupNight #Nightly backup to disc
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/appsadm>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=appsadm>
Status: RO
backupNight - backup $PRODDATA, $PRODLIBS,& $HOMEDIRS
 - removes 2 days ago backup proddata-1, prodlibs-1, homedirs-1
 - renames yesterdays proddata,prodlibs,homedirs (append -1)
 - makes new empty output dirs for today's backups
 - changes in turn into each input directory
 - runs the copycpio1 script to copy all subdirs & files to backup dir
enter to proceed (will not stop here if run under cron)
copy /home/mvstest/testdata to /home5/backup/proddata OK ? (or kill)
/home5/backup/proddata/./ap
/home5/backup/proddata/./ar
/home5/backup/proddata/./ar/customer.master
/home5/backup/proddata/./ar/customer.nameadrs.list100
/home5/backup/proddata/./ar/sales.items
- - - lines removed for this illustration - - -
/home5/backup/proddata/./pysave/test5a
317 blocks
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
copy /home/mvstest to /home5/backup/prodlibs OK ? (or kill)
/home5/backup/prodlibs/./cbls
/home5/backup/prodlibs/./cbls/car100.cbl
/home5/backup/prodlibs/./cbls/car120.cbl
- - - lines removed for this illustration - - -
/home5/backup/prodlibs/./parms/pgl200s1
/home5/backup/prodlibs/./parms/ppy200s2
3760 blocks
copy /home/appsadm to /home5/backup/homedirs OK ? (or kill)
/home5/backup/homedirs/./env
/home5/backup/homedirs/./env/common_profile
/home5/backup/homedirs/./env/stub_profile
- - - lines removed for this illustration - - -
/home5/backup/homedirs/./sfun/exportfile
/home5/backup/homedirs/./sfun/jobset5
/home5/backup/homedirs/./sfun/logmsg1
784 blocks
adding: ap/ (stored 0%)
adding: ar/ (stored 0%)
adding: ar/customer.master (deflated 66%)
adding: ar/customer.nameadrs.list100 (deflated 54%)
adding: ar/sales.items (deflated 70%)
- - - lines removed for this illustration - - -
adding: pysave/test5a (stored 0%)
adding: cbls/ (stored 0%)
adding: cbls/car100.cbl (deflated 64%)
adding: cbls/car120.cbl (deflated 65%)
adding: cbls/car130.cbl (deflated 64%)
- - - lines removed for this illustration - - -
adding: sf/backupPurge2 (deflated 62%)
adding: sf/backupYear (deflated 57%)
backupNight completed, files in $BACKUP/zip are:
total 808
-rw-rw-r--  1 appsadm users 143823 May 20 11:30 homedirs_070520_1130.zip
-rw-rw-r--  1 appsadm users  66054 May 20 11:30 proddata_070520_1130.zip
-rw-rw-r--  1 appsadm users 594983 May 20 11:30 prodlibs_070520_1130.zip
accumulated files in $BACKUP/Day (/home5/backup/Day) = 3
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
From appsadm@uvsoft3.uvsoft.ca Sun May 20 11:35:01 2007
Return-Path: <appsadm@uvsoft3.uvsoft.ca>
Received: from uvsoft3.uvsoft.ca (uvsoft3.uvsoft.ca [127.0.0.1])
        by uvsoft3.uvsoft.ca (8.12.10/8.12.10) with ESMTP id l4KIZ0dc007976
        for <appsadm@uvsoft3.uvsoft.ca>; Sun, 20 May 2007 11:35:00 -0700
Received: (from appsadm@localhost)
        by uvsoft3.uvsoft.ca (8.12.10/8.12.10/Submit) id l4KIZ0H5007967
        for appsadm; Sun, 20 May 2007 11:35:00 -0700
Date: Sun, 20 May 2007 11:35:00 -0700
Message-Id: <200705201835.l4KIZ0H5007967@uvsoft3.uvsoft.ca>
From: root@uvsoft3.uvsoft.ca (Cron Daemon)
To: appsadm@uvsoft3.uvsoft.ca
Subject: Cron <appsadm@localhost> /home/appsadm/sf/backupMonth #Monthly on 1st of month at 4 AM
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/appsadm>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=appsadm>
Status: RO
backupMonth - copy current backup zip files to Month backup directory
- intended for cron on 1st each month at 4 AM (but could run manually)
- preceded by backupNight, backed up $PRODDATA $PRODLIBS $HOMEDIRS
  & zipped them to $BACKUP/zip/... with date/time stamps
- this script simply copies contents of $BACKUP/zip/* to $BACKUP/Month
- see more documentation at: www.uvsoftware.ca/admjobs.htm#Part_4
enter to proceed (will not stop here if run under cron)
stty: standard input: Invalid argument
stty: standard input: Invalid argument
backupMonth completed, files in $BACKUP/zip are:
total 808
-rw-rw-r--  1 appsadm users 143823 May 20 11:30 homedirs_070520_1130.zip
-rw-rw-r--  1 appsadm users  66054 May 20 11:30 proddata_070520_1130.zip
-rw-rw-r--  1 appsadm users 594983 May 20 11:30 prodlibs_070520_1130.zip
accumulated files in $BACKUP/Month are:
total 808
-rw-rw-r--  1 appsadm users 143823 May 20 11:35 homedirs_070520_1130.zip
-rw-rw-r--  1 appsadm users  66054 May 20 11:35 proddata_070520_1130.zip
-rw-rw-r--  1 appsadm users 594983 May 20 11:35 prodlibs_070520_1130.zip
file count in $BACKUP/Month (/home5/backup/Month) = 3
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
From appsadm@uvsoft3.uvsoft.ca Sun May 20 11:46:19 2007
Return-Path: <appsadm@uvsoft3.uvsoft.ca>
Received: from uvsoft3.uvsoft.ca (uvsoft3.uvsoft.ca [127.0.0.1])
        by uvsoft3.uvsoft.ca (8.12.10/8.12.10) with ESMTP id l4KIkDdc008030
        for <appsadm@uvsoft3.uvsoft.ca>; Sun, 20 May 2007 11:46:19 -0700
Received: (from appsadm@localhost)
        by uvsoft3.uvsoft.ca (8.12.10/8.12.10/Submit) id l4KIj0p4008021
        for appsadm; Sun, 20 May 2007 11:45:00 -0700
Date: Sun, 20 May 2007 11:45:00 -0700
Message-Id: <200705201845.l4KIj0p4008021@uvsoft3.uvsoft.ca>
From: root@uvsoft3.uvsoft.ca (Cron Daemon)
To: appsadm@uvsoft3.uvsoft.ca
Subject: Cron <appsadm@localhost> /home/appsadm/sf/backupNtape #Nightly backup to tape
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/appsadm>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=appsadm>
Status: RO
stty: standard input: Invalid argument
stty: standard input: Invalid argument
backup $BACKUP/zip/* to tape ($BACKUP=/home5/backup)
- intended for cron at 5 AM (but could be run manually)
- follows backupNight which created date stamped .zip files
  /home5/backup/zip/homedirs_yymmdd_HHMM.zip
  /home5/backup/zip/prodlibs_yymmdd_HHMM.zip
  /home5/backup/zip/proddata_yymmdd_HHMM.zip
- enter to backup all files in $BACKUP/zip/* to tape
.
./proddata_070520_1130.zip
./prodlibs_070520_1130.zip
./homedirs_070520_1130.zip
158 blocks
backups complete - rewinding & unloading tape
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
4F0. | Advanced backup scripts |
- using environmental variables for directories & tape devices | |
to minimize changes when installing backup scripts at new sites | |
($HOMEDIRS, $PRODDATA, $PRODLIBS, $BACKUP, $TAPERWD, $TAPENRW) |
4F1. | backupPROD - backup $PRODDATA & $PRODLIBS |
- intended to be run by cron at 3 AM (but could run manually) | |
- removes 2 days ago backup proddata-1 & prodlibs-1 | |
- renames yesterdays proddata,prodlibs (append -1) | |
- makes new empty output dirs for today's backups | |
- runs copycpio1 to copy $PRODDATA to $BACKUP/proddata | |
- runs copycpio1 to copy $PRODLIBS to $BACKUP/prodlibs | |
- zips $PRODDATA & $PRODLIBS to date stamped files in $BACKUP/zip/... | |
proddata_yymmdd_HHMM.zip & prodlibs_yymmdd_HHMM.zip | |
- then copy .zip files to $BACKUP/Day/... | |
- last 40 days accumulated, older files dropped by backupPurge1 (or 2) |
4F2. | backupTEST - backup $TESTDATA & $TESTLIBS |
- similar to backupPROD (for $PRODDATA & $PRODLIBS) | |
- use backupTEST during conversion period | |
- use backupPROD after you go into PRODuction |
4F3. | backupHOME - backup all /home/... directories |
- removes 2 days ago homedirs-1, renames yesterday to homedirs-1 | |
- run copycpio1 to copy all homedirs to $BACKUP/homedirs | |
- zips homedirs to $BACKUP/zip/homedirs_yymmdd_HHMM.zip | |
- copy .zip file to $BACKUP/Day/... |
4F4. | backupMonth - copy current backup zip files to Month backup directory |
- for cron on 1st of month at 4 AM (but could run manually) | |
- preceded by backupNight, which backed up $PRODDATA $PRODLIBS $HOMEDIRS | |
& zipped them to $BACKUP/zip/... with date/time stamps | |
- this script simply copies contents of $BACKUP/zip/* to $BACKUP/Month |
4F5. | backupYear - copy current backup zip files to Year backup directory |
- for cron on Jan 1 at 4:30 AM (but could run manually) | |
- preceded by backupNight, backed up $PRODDATA $PRODLIBS $HOMEDIRS | |
& zipped them to $BACKUP/zip/... with date/time stamps | |
- this script simply copies contents of $BACKUP/zip/* to $BACKUP/Year |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
4F6. | backupTapeA - backup $BACKUP/zip/... files to tape (DAT or LTO) |
- run by cron at 5 AM, following Nightly backups to disc | |
- backup files already zipped by backupPROD/TEST/HOME into $BACKUP/zip/... | |
(prodlibs_yymmdd_HHMM.zip, ...etc... to homedirs_yymmdd_HHMM.zip) | |
- if required restore files to $RESTORE/zip/... using 'restoreTapeA' script | |
- then copy desired .zip file to appropriate area & unzip | |
($RESTORE/prodlibs, $RESTORE/proddata, $RESTORE/homedirs) | |
- after restore, unzip, investigate, & copy desired files back to | |
$PRODDATA/..., or $PRODLIBS/..., or /home/... |
4F7. | restoreTapeA - restore from tape to an empty work space |
- for investigation & extract of desired files | |
- you must specify full path name of restore directory & | |
it must match your current directory & it must be empty | |
- we minimize necessity to modify backup scripts for new sites | |
by using environmental variables $TAPERWD, $TAPENRW | |
- SCSI tape devices for Linux might be TAPERWD=/dev/st0 & TAPENRW=/dev/nst0 | |
- tape devices for the site are assigned in $APPSADM/env/common_profile | |
- these scripts '.' dot-execute $APPSADM/.bash_profile & common_profile |
4F8. | backupPurge1 - purge backup files older than specified limits |
- removes files from $BACKUP/Day older than 40 days | |
- removes files from $BACKUP/Month older than 15 months | |
- removes files from $BACKUP/Year older than 7 years | |
- aging based on unix directory entry last modification dates | |
which could be wrong if files copied without -p option | |
- alternative backupPurge2 aging based on _yymmdd_ embedded in filenames |
4F9. | backupPurge2 - purge backup files older than specified limits |
- removes files from $BACKUP/Day older than 40 days | |
- removes files from $BACKUP/Month older than 15 months | |
- removes files from $BACKUP/Year older than 7 years | |
- aging based on _yymmdd_ embedded in filenames, vs backupPurge1 | |
where aging based on unix directory entry last modification dates | |
which could be wrong if files copied without -p option |
4F10. purgeold2 - purge files from a directory older than a specified no of days
      - based on the dates embedded in the filenames
      - uvcopy job called by the backupPurge2 script to remove old backups
      - alternate backupPurge1 uses directory entry dates vs embedded dates
        - which could be wrong if files copied without -p option
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # backupPROD - backup $PRODDATA, $PRODLIBS (nigtly backup) # - intended to be run by cron at 3 AM (but could run manually) # - also see backupTEST for $TESTDATA & $TESTLIBS # - by Owen Townsend, UV Software, May 18/2007 # - see doc at www.uvsoftware.ca/admjobs.htm#Part_4 # # backupPROD <-- no arguments required # ========== # #Note - we will minimize necessity to modify backup scripts for new sites # - by using environmental variables $PRODDATA,$PRODLIBS,$BACKUP # - variables defined in /home/appsadm/env/common_profile (with profiles) # assuming site admin followed advice in www.uvsoftware.ca/admjobs.htm#Part_1 # - will '.' dot-execute site admin's .bash_profile to define $SYMBOLs & PATHs # export APPSADM=/home/appsadm # define site admin superdir #=========================== (only absolute path in these scripts) . $APPSADM/.bash_profile # define $PRODDATA,$PRODLIBS,$BACKUP,etc #=========================== # echo "backupPROD - backup \$PRODDATA & \$PRODLIBS" echo "PRODDATA=$PRODDATA PRODLIBS=$PRODLIBS" echo " - intended to be run by cron at 3 AM (but could run manually)" echo " - removes 2 days ago backup proddata-1, prodlibs-1" echo " - renames yesterdays proddata,prodlibs (append -1)" echo " - makes new empty output dirs for todays backups" echo " - changes in turn into each input directory" echo " - runs the copycpio1 script to copy all subdirs & files to backup dir" echo "enter to proceed (will not stop here if run under cron)"; read reply # #Note - this script may be run by cron at 3 AM (but could run manually) # - see recommendations at: www.uvsoftware.ca/admjobs.htm#Part_4 & Part 5 # - crontab file should be invoked by user 'appsadm' (not root) # - appsadm must belong to same group as users creating files to be backed up # - all directories must have perms 775 & all files 664 # (they will if using the profiles recommended in ADMjobs Part 1) # # verify that env-vars set OK & required subdirs present if [[ -d $PRODDATA && -d $PRODLIBS && -d $BACKUP/zip && -d $BACKUP/Day ]]; then : else echo "\$PRODDATA=$PRODDATA" echo "\$PRODLIBS=$PRODLIBS" echo "\$BACKUP/zip=$BACKUP/zip" echo "\$BACKUP/Day=$BACKUP/Day" echo "\$APPSADM=$APPSADM" echo "1 or more of above directories NOT found:" echo "- or \$variables not set by \$APPSADM/.bash_profile" exit 99; fi #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# we will maintain 2 days backup of unzipped data & libraries # - for quick recovery if missing files discovered early (else unzip backups) rm -rf $BACKUP/proddata-1 # remove 2 days ago backups rm -rf $BACKUP/prodlibs-1 mv $BACKUP/proddata $BACKUP/proddata-1 # change name of yesterdays backups mv $BACKUP/prodlibs $BACKUP/prodlibs-1 mkdir $BACKUP/proddata # make new dirs for todays backups mkdir $BACKUP/prodlibs # # now execute copycpio1 script from each superdir to be backed up # - must be in directory to be backed up & directory must be empty cd $PRODDATA # change into input superdir copycpio1 $PRODDATA $BACKUP/proddata # backup all levels of subdirs & files #=================================== cd $PRODLIBS copycpio1 $PRODLIBS $BACKUP/prodlibs #=================================== # # now zip the backups # clear old files from zip subdir & zip today's backups into it rm -f $BACKUP/zip/prod* #====================== cd $BACKUP/proddata zip -r $BACKUP/zip/proddata_$(date +%y%m%d_%H%M).zip . #===================================================== cd $BACKUP/prodlibs zip -r $BACKUP/zip/prodlibs_$(date +%y%m%d_%H%M).zip . #===================================================== # # copy today's zip files into the Day backup dir # - backupPurge script will remove files older than 40 days cp $BACKUP/zip/* $BACKUP/Day #=========================== # echo "backupPROD completed, files in \$BACKUP/zip are:" ls -l $BACKUP/zip days=$(ls $BACKUP/Day | wc -l) echo "accumulated files in \$BACKUP/Day ($BACKUP/Day) = $days" exit 0
'backupTEST' is the same as backupPROD (listed above), but backs up testdata & testlibs (vs proddata & prodlibs).
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # backupHOME - backup $HOMEDIRS (nigtly backup) # - intended to be run by cron at 3 AM (but could run manually) # - by Owen Townsend, UV Software, Sep18/2009 # - backupHOME now separate from backupPROD & backupTEST # - see doc at www.uvsoftware.ca/admjobs.htm#Part_4 # # backupHOME <-- no arguments required # ========== # #Note - we will minimize necessity to modify backup scripts for new sites # - by using environmental variables $HOMEDIRS & $BACKUP # - variables defined in /home/appsadm/env/common_profile (with profiles) # assuming site admin followed advice in www.uvsoftware.ca/admjobs.htm#Part_1 # - will '.' dot-execute site admin's .bash_profile to define $SYMBOLs & PATHs # export APPSADM=/home/appsadm # define site admin superdir #=========================== (only absolute path in these scripts) . $APPSADM/.bash_profile # define $HOMEDIRS, $BACKUP, etc #=========================== # echo "backupHOME - backup \$HOMEDIRS=$HOME to \$BACKUP=$BACKUP" echo " - intended to be run by cron at 3 AM (but could run manually)" echo " - removes 2 days ago backup & renames yesterdays to homedirs-1" echo " - makes new empty output homedirs for todays backups" echo " - changes into \$HOMEDIRS=$HOMEDIRS" echo " - runs the copycpio1 script to copy all subdirs & files to backup dir" echo "enter to proceed (will not stop here if run under cron)"; read reply # #Note - this script may be run by cron at 3 AM (but could run manually) # - see recommendations at: www.uvsoftware.ca/admjobs.htm#Part_4 & Part 5 # - crontab file should be invoked by user 'appsadm' (not root) # - appsadm must belong to same group as users creating files to be backed up # - all directories must have perms 775 & all files 664 # (they will if using the profiles recommended in ADMjobs Part 1) # # verify that env-vars set OK & required subdirs present if [[ -d $HOMEDIRS && -d $BACKUP/zip && -d $BACKUP/Day ]]; then : else echo "\$HOMEDIRS=$HOMEDIRS" echo "\$BACKUP/zip=$BACKUP/zip" echo "\$BACKUP/Day=$BACKUP/Day" echo "\$APPSADM=$APPSADM" echo "1 or more of above directories NOT found:" echo "- or \$variables not set by \$APPSADM/.bash_profile" exit 99; fi #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# we will maintain 2 days backup of unzipped homedirs # - for quick recovery if missing files discovered early (else unzip backups) rm -rf $BACKUP/homedirs-1 # remove 2 days ago backups mv $BACKUP/homedirs $BACKUP/homedirs-1 # change name of yesterdays backups mkdir $BACKUP/homedirs # make new dirs for todays backups # # now execute copycpio1 script from superdir to be backed up # - must be in directory to be backed up & directory must be empty cd $HOMEDIRS # change into input superdir copycpio1 $HOMEDIRS $BACKUP/homedirs # backup all levels of subdirs & files #=================================== # # now zip the backups # clear old files from zip subdir & zip today's backups into it rm -f $BACKUP/zip/homedirs* cd $BACKUP/homedirs zip -r $BACKUP/zip/homedirs_$(date +%y%m%d_%H%M).zip . #===================================================== # # copy today's zip files into the Day backup dir # - backupPurge script will remove files older than 40 days cp $BACKUP/zip/* $BACKUP/Day #=========================== # echo "backupHOME completed, files in \$BACKUP/zip are:" ls -l $BACKUP/zip days=$(ls $BACKUP/Day | wc -l) echo "accumulated files in \$BACKUP/Day ($BACKUP/Day) = $days" exit 0 #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # backupMonth - copy current backup zip files to Month backup directory # - intended for cron on 1st each month at 4 AM (could run manually) # - by Owen Townsend, UV Software, May 18/2007 # - see doc at www.uvsoftware.ca/admjobs.htm#Part_4 # # backupMonth <-- no arguments required # =========== # echo "backupMonth - copy current backup zip files to Month backup directory" echo "- intended for cron on 1st each month at 4 AM (but could run manually)" echo "- preceded by backupNight, backed up \$PRODDATA \$PRODLIBS \$HOMEDIRS" echo " & zipped them to \$BACKUP/zip/... with date/time stamps" echo "- this script simply copies contents of \$BACKUP/zip/* to \$BACKUP/Month" echo "- see more documentation at: www.uvsoftware.ca/admjobs.htm#Part_4" echo "enter to proceed (will not stop here if run under cron)"; read reply # #Note - we will minimize necessity to modify backup scripts for new sites # - by using environmental variables $PRODDATA,$PRODLIBS,$BACKUP # - variables defined in /home/appsadm/env/common_profile (with profiles) # assuming site admin followed advice in www.uvsoftware.ca/admjobs.htm#Part_1 # - will '.' dot-execute site admin's .bash_profile to define $SYMBOLs & PATHs # export APPSADM=/home/appsadm # define site admin superdir #=========================== (only absolute path in these scripts) . $APPSADM/.bash_profile # define $PRODDATA,$PRODLIBS,$BACKUP,etc #=========================== # # verify that env-vars set OK & required subdirs present if [[ -d $BACKUP/zip && -d $BACKUP/Month ]]; then : else echo "\$BACKUP/zip=$BACKUP/zip" echo "\$BACKUP/Month=$BACKUP/Month" echo "\$APPSADM=$APPSADM" echo "1 or more of above directories NOT found:" echo "- or \$variables not set by \$APPSADM/.bash_profile" exit 99; fi # # copy today's (1st of month) zip files into the Month backup dir # - backupPurge script will remove files older than 15 months cp -i $BACKUP/zip/* $BACKUP/Month #================================ # echo "backupMonth completed, files in \$BACKUP/zip are:" ls -l $BACKUP/zip echo "accumulated files in \$BACKUP/Month are:" ls -l $BACKUP/Month months=$(ls $BACKUP/Month | wc -l) echo "file count in \$BACKUP/Month ($BACKUP/Month) = $months" exit 0 #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # backupYear - copy current backup zip files to Year backup directory # - for cron on Jan 1 each year at 4:30 AM (could run manually) # - by Owen Townsend, UV Software, May 18/2007 # - see doc at www.uvsoftware.ca/admjobs.htm#Part_4 # # backupYear <-- no arguments required # =========== # echo "backupYear - copy current backup zip files to Year backup directory" echo "- intended for cron on Jan 1 each year at 4:30 AM (could run manually)" echo "- preceded by backupNight, backed up \$PRODDATA \$PRODLIBS \$HOMEDIRS" echo " & zipped them to \$BACKUP/zip/... with date/time stamps" echo "- this script simply copies contents of \$BACKUP/zip/* to \$BACKUP/Year" echo "- see more documentation at: www.uvsoftware.ca/admjobs.htm#Part_4" echo "enter to proceed (will not stop here if run under cron)"; read reply # #Note - we will minimize necessity to modify backup scripts for new sites # - by using environmental variables $PRODDATA,$PRODLIBS,$BACKUP,etc # - variables defined in /home/appsadm/env/common_profile (with profiles) # assuming site admin followed advice in www.uvsoftware.ca/admjobs.htm#Part_1 # - will '.' dot-execute site admin's .bash_profile to define $SYMBOLs & PATHs # export APPSADM=/home/appsadm # define site admin superdir #=========================== (only absolute path in these scripts) . $APPSADM/.bash_profile # define $PRODDATA,$PRODLIBS,$BACKUP,etc #=========================== # # verify that env-vars set OK & required subdirs present if [[ -d $BACKUP/zip && -d $BACKUP/Year ]]; then : else echo "\$BACKUP/zip=$BACKUP/zip" echo "\$BACKUP/Year=$BACKUP/Year" echo "\$APPSADM=$APPSADM" echo "1 or more of above directories NOT found:" echo "- or \$variables not set by \$APPSADM/.bash_profile" exit 99; fi # # copy today's (1st of year) zip files into the Year backup dir # - backupPurge script will remove files older than 7 years cp -i $BACKUP/zip/* $BACKUP/Year #=============================== # echo "backupYear completed, files in \$BACKUP/zip are:" ls -l $BACKUP/zip echo "accumulated files in \$BACKUP/Year are:" ls -l $BACKUP/Year years=$(ls $BACKUP/Year | wc -l) echo "file count in \$BACKUP/Year ($BACKUP/Year) = $years" exit 0 #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # backupTapeA - backup Nightly to tape (DAT or DLT) # - this script stored in: /home/appsadm/sf/backupTapeA # - run by crontab file stored at: /home/appsadm/sf/cronbackupNight # # usage: backupTapeA - no args required # =========== # # - this script run by cron at 5 AM, following Nightly backups to disc # - files already zipped by backupNight script as follows: # # 1. $BACKUP/zip/homedirs_yymmdd_HHMM.zip # 2. $BACKUP/zip/prodlibs_yymmdd_HHMM.zip # 3. $BACKUP/zip/proddata_yymmdd_HHMM.zip # # - use 'restore1' to restore files to $RESTORE/zip/... # - then copy desired .zip file to appropriate area & unzip # ($RESTORE/homedirs, $RESTORE/prodlibs, or $RESTORE/proddata) # - clear out any old files before restore, # - after restore, unzip, investigate, & copy desired files back to # /home/..., $PRODDATA/..., or $PRODLIBS/... # #Note - we will minimize necessity to modify backup scripts for new sites # - by using environmental variables $BACKUP, $TAPERWD, $TAPENRW # - variables defined in /home/appsadm/env/common_profile (with profiles) # assuming site admin followed advice in www.uvsoftware.ca/admjobs.htm#Part_1 # - will '.' dot-execute site admin's .bash_profile to define $SYMBOLs & PATHs # export APPSADM=/home/appsadm # define site admin superdir #=========================== (only absolute path in these scripts) . $APPSADM/.bash_profile # define $PRODDATA,$PRODLIBS,$BACKUP,etc #=========================== # # verify that env-vars set OK & required subdirs present if [[ -d $BACKUP/zip && -d $APPSADM && -c $TAPERWD && -c $TAPENRW ]]; then : else echo "\$BACKUP/zip=$BACKUP/zip" echo "\$APPSADM=$APPSADM" echo "\$TAPERWD=$TAPERWD # rewind tape Linux SCSI" echo "\$TAPENRW=$TAPENRW # NO rewind tape Linux SCSI" echo "1 or more of above directories or devices NOT found:" echo "- or \$variables not set by \$APPSADM/.bash_profile" exit 99; fi # #Note - tape devices for site are assigned in \$APPSADSM/env/common_profile # - 1st SCSI tape device for Linux would be as follows: # TAPERWD=/dev/st0 # rewind tape device for Linux SCSI # TAPENRW=/dev/nst0 # NO rewind tape device for Linux SCSI #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
echo "backup \$BACKUP/zip/* to tape (\$BACKUP=$BACKUP)" echo "- intended for cron at 5 AM (but could be run manually)" echo "- follows backupNight which created date stamped .zip files " echo "$BACKUP/zip/homedirs_yymmdd_HHMM.zip" echo "$BACKUP/zip/prodlibs_yymmdd_HHMM.zip" echo "$BACKUP/zip/proddata_yymmdd_HHMM.zip" echo "- enter to backup all files in \$BACKUP/zip/* to tape" read reply # Note - will not wait for reply when run by cron # mt -f $TAPERWD rewind # ensure tape rewound # cd $BACKUP/zip # find . -print | cpio -ocvBO/$TAPENRW #=================================== # # could append other backups on end of $BACKUP/zip archive ?? # echo "backups complete - rewinding & unloading tape" mt -f $TAPERWD rewind # rewind tape mt -f $TAPERWD offline # unload tape exit 0 #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # restoreTapeA - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/ # restoreTapeA - restore cpio tape to an empty work space # - this job restores all files from the 1st archive on tape # restoreTape1 <-- alternate job for multi-archive tapes # #usage: 1. change to directory where files are to be restored # 2. remove any old files (directory must be empty) # 3. restoreTapeA directory # ====================== # #sample 1. cd $RESTORE/zip # 2. rm -fr * # 3. restoreTapeA $RESTORE/zip # ========================= # # - you must specify full path name of restore directory & # it must match your current directory & it must be empty # #Note - we minimize necessity to modify backup scripts for new sites # - by using environmental variables $TAPERWD, $TAPENRW # - variables defined in /home/appsadm/env/common_profile (with profiles) # assuming site admin followed advice in www.uvsoftware.ca/admjobs.htm#Part_1 # - will '.' dot-execute site admin's .bash_profile to define $SYMBOLs & PATHs # export APPSADM=/home/appsadm # define site admin superdir #=========================== (only absolute path in these scripts) . $APPSADM/.bash_profile # define $PRODDATA,$PRODLIBS,$BACKUP,etc #=========================== # # verify that env-vars set OK & required subdirs present if [[ -d $APPSADM && -c $TAPERWD && -c $TAPENRW ]]; then : else echo "\$APPSADM=$APPSADM" echo "\$TAPERWD=$TAPERWD # rewind tape Linux SCSI" echo "\$TAPENRW=$TAPENRW # NO rewind tape Linux SCSI" echo "1 or more of above directories or devices NOT found:" echo "- or \$variables not set by \$APPSADM/.bash_profile" exit 99; fi # #Note - tape devices for site are assigned in $APPSADSM/env/common_profile # - 1st SCSI tape device for Linux would be as follows: # TAPERWD=/dev/st0 # rewind tape device for Linux SCSI # TAPENRW=/dev/nst0 # NO rewind tape device for Linux SCSI #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# capture command arguments & verify
cdir="$1"                     # capture arg1 directory path
dpath=$(pwd)                  # capture current directory path
echo "restoreTapeA: arg1=curdirpath=$cdir"
#
if [[ ! -d "$cdir" ]]
   then echo "USAGE: restoreTapeA curdirfullpath";
        echo " arg1 invalid - must be full path to current(restore) directory"
        echo " - restore directory must be empty & you must be in it"
        exit 92; fi
#
if [[ $dpath != $cdir ]]
   then echo "you must be in the directory specified"; exit 92; fi
#
ls . >/tmp/restore_emptytest
if [[ -s /tmp/restore_emptytest ]]
   then echo "the current directory must be empty";exit 93; fi
#
echo "restore tape to current directory ($cdir) OK ?"; read reply
#
mt -f $TAPERWD rewind         # ensure tape rewound
cpio -icvdmBI $TAPENRW        # restore spcfd archive to cur dir
#=====================
echo "tape restored to $cdir, rewinding tape"
mt -f $TAPERWD rewind         # rewind tape after
exit 0
#
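After restoreTapeA has recovered the zip files into $RESTORE/zip, you usually only want a few members back. A minimal sketch (not one of the supplied scripts; the 'gl/*' pattern & the name 'restoreZipPick' are illustrative assumptions) showing how you might list a restored zip & extract selected files into $RESTORE/proddata before copying them back to $PRODDATA:

#!/bin/ksh
# restoreZipPick - sketch only: selectively extract members from a restored zip
. /home/appsadm/.bash_profile            # define $RESTORE (as in the scripts above)
cd $RESTORE/zip || exit 91
zip1=$(ls proddata_*.zip | tail -1)      # newest restored proddata zip
unzip -l $zip1                           # list members to find the files you want
unzip $zip1 'gl/*' -d $RESTORE/proddata  # extract only gl/... (pattern is an example)
# - inspect under $RESTORE/proddata, then copy the desired files back to $PRODDATA/...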
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# backupPurge1 - purge backup files older than specified limits
#              - intended for cron each night at 2:30 AM (could run manually)
#              - by Owen Townsend, UV Software, May 18/2007
#              - see doc at www.uvsoftware.ca/admjobs.htm#Part_4
#
# backupPurge1 <-- no arguments required
# ============
#
echo "backupPurge1 - purge backup files older than specified limits"
echo "- intended for cron each night at 2:30 AM (but could run manually)"
echo "- followed by backupNight, backupMonth(1st of month), backupYear(Jan 1)"
echo "- removes files from \$BACKUP/Day older than 40 days"
echo "- removes files from \$BACKUP/Month older than 15 months"
echo "- removes files from \$BACKUP/Year older than 7 years"
echo "- see more documentation at: www.uvsoftware.ca/admjobs.htm#Part_4"
echo "enter to proceed (will not stop here if run under cron)"; read reply
#
#Note - we will minimize necessity to modify backup scripts for new sites
#     - by using environmental variables $PRODDATA,$PRODLIBS,$BACKUP
#     - variables defined in /home/appsadm/env/common_profile (with profiles)
#       assuming site admin followed advice in www.uvsoftware.ca/admjobs.htm#Part_1
#     - will '.' dot-execute site admin's .bash_profile to define $SYMBOLs & PATHs
#
export APPSADM=/home/appsadm   # define site admin superdir
#===========================     (only absolute path in these scripts)
. $APPSADM/.bash_profile       # define $PRODDATA,$PRODLIBS,$BACKUP,etc
#===========================
#
# verify that env-vars set OK & required subdirs present
if [[ -d $BACKUP/Day && -d $BACKUP/Month && -d $BACKUP/Year ]]; then :
else echo "\$BACKUP/Day=$BACKUP/Day"
     echo "\$BACKUP/Month=$BACKUP/Month"
     echo "\$BACKUP/Year=$BACKUP/Year"
     echo "\$APPSADM=$APPSADM"
     echo "1 or more of above directories NOT found:"
     echo "- or \$variables not set by \$APPSADM/.bash_profile"
     exit 99; fi
#
# now remove files older than desired limits
find $BACKUP/Day -mtime +40 -exec rm -fr {} \;
#=============================================
find $BACKUP/Month -mtime +450 -exec rm -fr {} \;
#================================================
find $BACKUP/Year -mtime +2555 -exec rm -fr {} \;
#================================================
#
echo "after backupPurge1, files in \$BACKUP/Day are:"; ls -l $BACKUP/Day
echo "after backupPurge1, files in \$BACKUP/Month are:"; ls -l $BACKUP/Month
echo "after backupPurge1, files in \$BACKUP/Year are:"; ls -l $BACKUP/Year
days=$(ls $BACKUP/Day | wc -l); echo "file count in \$BACKUP/Day = $days"
months=$(ls $BACKUP/Month | wc -l); echo "file count \$BACKUP/Month = $months"
years=$(ls $BACKUP/Year | wc -l); echo "file count in \$BACKUP/Year = $years"
exit 0
#
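Before letting cron run the purge, you may want to preview exactly which files the age limits would select. A minimal dry-run sketch (an assumption, not one of the supplied scripts) that prints the candidates without removing anything:

. /home/appsadm/.bash_profile             # define $BACKUP as above
find $BACKUP/Day   -mtime +40   -print    # files backupPurge1 would remove
find $BACKUP/Month -mtime +450  -print
find $BACKUP/Year  -mtime +2555 -print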
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # backupPurge2 - purge backup files older than specified limits # - intended for cron each night at 2:30 AM (could run manually) # - by Owen Townsend, UV Software, May 18/2007 # - see doc at www.uvsoftware.ca/admjobs.htm#Part_4 # # backupPurge2 <-- no arguments required # ============ # # backupPurge2 is an alternate to backupPurge1 which uses find & -mtime: # # find $BACKUP/Day -mtime +40 -exec rm -fr {} \; #=================================================== # find & -mtime would not work if files were copied/moved (without -p option) # backupPurge2 will calculate age based on _yymmdd_ embedded in filenames. # echo "backupPurge2 - purge backup files older than specified limits" echo "- intended for cron each night at 2:30 AM (but could run manually)" echo "- followed by backupNight, backupMonth(1st of month), backupYear(Jan 1)" echo "- removes files from \$BACKUP/Day older than 40 days" echo "- removes files from \$BACKUP/Month older than 15 months" echo "- removes files from \$BACKUP/Year older than 7 years" echo "- see more documentation at: www.uvsoftware.ca/admjobs.htm#Part_4" echo "enter to proceed (will not stop here if run under cron)"; read reply # export APPSADM=/home/appsadm # define site admin superdir #=========================== (only absolute path in these scripts) . $APPSADM/.bash_profile # define $PRODDATA,$PRODLIBS,$BACKUP,etc #=========================== # verify that env-vars set OK & required subdirs present if [[ -d $BACKUP/Day && -d $BACKUP/Month && -d $BACKUP/Year ]]; then : else echo "\$BACKUP/Day=$BACKUP/Day" echo "\$BACKUP/Month=$BACKUP/Month" echo "\$BACKUP/Year=$BACKUP/Year" echo "\$APPSADM=$APPSADM" echo "1 or more of above directories NOT found:" echo "- or \$variables not set by \$APPSADM/.bash_profile" exit 99; fi # call uvcopy job 'purgeold2' to remove files, based on age calculations # - using _yymmdd_ embedded in filenames & specified no of days old # - purging independent of dir entry dates (in case files moved w/o -p option) # uvcopy purgeold2,fild1=$BACKUP/Day,arg1=40,uop=q0i7f1 #==================================================== uvcopy purgeold2,fild1=$BACKUP/Month,arg1=450,uop=q0i7f1 #======================================================= uvcopy purgeold2,fild1=$BACKUP/Year,arg1=2555,uop=q0i7f1 #======================================================= echo "after backupPurge2, files in \$BACKUP/Day are:"; ls -l $BACKUP/Day echo "after backupPurge2, files in \$BACKUP/Month are:"; ls -l $BACKUP/Month echo "after backupPurge2, files in \$BACKUP/Year are:"; ls -l $BACKUP/Year days=$(ls $BACKUP/Day | wc -l); echo "file count in \$BACKUP/Day = $days" months=$(ls $BACKUP/Month | wc -l); echo "file count \$BACKUP/Month = $months" years=$(ls $BACKUP/Year | wc -l); echo "file count in \$BACKUP/Year = $years" exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# purgeold2 - purge files from a directory older than a specified no of days # - based on the dates embedded in the filenames # - by Owen Townsend, UV Software, May 19/2007 # - see documentation at www.uvsoftware.ca/admjobs.htm#Part_4 # # uvcopy purgeold2,fild1=directory,arg1=days # ========================================== # # This uvcopy job is called by the backupPurge2 script # (alternate backupPurge1 uses directory entry dates vs embedded dates) # The backuppurge2 script calls this job to remove: # - Daily backup files older than 40 days # - Monthly backup files older than 450 days # - Yearly backup files older than 2555 days # # uvcopy purgeold2,fild1=$BACKUP/Day,arg1=40 # uvcopy purgeold2,fild1=$BACKUP/Month,arg1=450 # uvcopy purgeold2,fild1=$BACKUP/Year,arg1=2555 # # ** sample filenames in $BACKUP/Day ** # # homedirs_070501_0841.zip <-- # proddata_070501_0841.zip <--dropped if run on 070610 # prodlibs_070501_0841.zip <-- # # homedirs_070502_0841.zip # proddata_070502_0841.zip # prodlibs_070502_0841.zip # - - - - etc - - - - # homedirs_070610_0841.zip # proddata_070610_0841.zip # prodlibs_070610_0841.zip # # ** why purgeold2 (backupPurge2 vs backupPurge1) ** # # backupPurge1 drops files older than specified dates # - based on directory entry dates using commands such as # # find $BACKUP/Day -mtime +40 -exec rm -fr {} \; # ================================================== # # But backupPurge1 method will not work if files copied w/o -p option # - which might happen if files moved or system moved to new machine # - backupPurge2 using this job & embedded dates is independent of dir entries #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
opr='uop=f0 - option defaults' opr=' f0 - interactive, prompts for remove (rm option -i)' opr=' f1 - no prompts, use force option (-f)' uop=q1f0 # option defaults fild1=?backupdir,typ=DIR,rcs=80 @run opn all open directory to prove present # # calc purge date yymmdd for current date - arg1 no of days mvn d0(5),$arg1(5) store no of days from $arg1 mvc d10(6),$yymmdd store current date yymmdd datcn d20(5),d10(6) convert current yymmdd to days since 1900 mvn d30(5),d20(5) move to 2nd area (in case debug display) sub d30(5),d0(5) current days since 1900 - days specified datnc d40(6),d30(5) convert days since 1900 to yymmdd # # begin loop to read filenames from directory & remove if older than spcfd man20 get fild1,a0(80) get next directory entry skp> man90 (cc set > at EOD) skp< man20 (cc set < if directory vs file) add $ca1,1 count files scn a0(50),'_' scan to 1st '_' prior to yymmdd skp! err1 mvu b0(7),ax1,'_' move yymmdd until ending '_' skp! err1 cmcp b0(6),'######' 6 numerics ? skp! err1 man28 cmc b0(6),d40(6) extracted date < older than current date ? skp=> man20 # # date extracted from filename is < (older than) calculated purge date man30 mvf b0(250),'rm -f ' setup remove command & clear area cmn $uopbf,1 rm -f option ? (vs prompt -i option) skp=> 1 mvf b0(250),'rm -i ' change to interactive prompt option mvu b6(100),$fild1,x'00' insert directory name cat b0(200),'/' append '/' separator cat b0(200),a0(50) append filename man36 sys b0(200) execute remove command add $ca2,1 count remove (attempts) skp man20 return to get next filename # # EOD - close & end job man90 cls all close all open msgv1 '$ca2 removes from $ca1 files in directory $fild1' eoj end of job # # Error if filename not as expected '_yymmdd_' embedded in filename # example: proddata_070501_0841.zip err1 msg a0(50) display current directory name msg 'proddata_070501_0841.zip <-- sample format expected' msgw 'filename does not contain _yymmdd_, enter to bypass' skp man20 #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
5A1. | Introduction & Overview |
5B1. | crontab_appsadm1 - sample of a crontab file, setup by site administrator |
to perform backups & run nightly, weekly, monthly jobs. | |
- runs backupTape Mon-Fri at 3AM to backup data & libraries | |
- runs nightly1, clean tmp dirs, process console logs, etc | |
- runs monthly1, save log2 files in log3 & init log2 for new month |
3E3. | backupT2 - backup proddata, prodlibs, homedirs to tape |
- this script already listed in Part_3 | |
4F4. | backupNtape - backup proddata, prodlibs, homedirs to tape |
- alternate version, already listed in Part_4 |
5C1. | nightly1 - runs each night Monday thru Friday |
- saves prodlibs,proddata,& home dirs to alternate disc filesystem | |
- backs up prodlibs,proddata,& home dirs to separate archives on | |
the DAT tape (allows separate restores by archive#). |
5C2. | cleantmps - subscript called by nightly1 to clear tmp subdirs |
5D1. | weekly1 - runs on Sunday morning (or whenever you decide) |
- clears various subdirs: jobtmp, sysout, tmp, wrk, etc | |
- clears report files older than 15 days |
5E1. | monthly1 - runs on the 1st of each month |
- moves all console log files from log2 to log3 | |
- then removes all log2 files for re-accumulation in the new month |
5F1. | crontab_user - sample crontab file for users |
- sample just 'exit's in case they forgot to log off | |
- closes the console logging file to prevent loss | |
- required before crontab_appsadm1/nightly1 process log files |
5F2. | crontab_root - sample crontab file for root |
- kill users who did not logoff before 12:30 AM (or whatever) | |
- set perms 775/664 & owner:group appsadm:apps on all | |
$PRODDATA subdirs/files to ensure no batch job failures | |
- reboot every Sunday at 1AM |
5G1. | killuser2 - script to kill users who did not logoff |
5H1. | setperms1 - script to set permissions on all subdirs(775) & all files(664) |
within $PRODDATA, $PRODLIBS, $TESTDATA,& $TESTLIBS | |
- run by cron before nightly batch processing | |
- prevents job failures due to new files with bad permissions |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
5I1. | job logging via 'mail' under 'cron' |
crontabs & scripts to demo log capture by mail |
5I2. | appsadm subdirs for cron logs by mail |
JCL/scripts & DATA files used for cron log mail tests |
5I3. | Setup appsadm to demo cron logging by mail |
5I4. | test cron job log capture via mail |
5I5. | observations in 'mvstest' directories |
5J1. | results after 2 cycles cronscript1/cronmailsave1 |
list log files captured from cronscript1 |
5J3. | inspect contents of log files |
5K1. | listings of crontabs & scripts used cron/log/mail demo |
5K2. | crontab2 - schedule cronscript1 & cronmailsave1 |
5K3. | crontabtest2 - schedule cronscript1/cronmailsave1 every 2 minutes |
5K4. | cronscript1 - executing JCL/script jgl100.ksh |
5K5. | jgl100.ksh - JCL/script executed by cronscript1 |
5K6. | stub_profile_cronlogdemo |
- special version of profile to demo capturing logs from jobs run by cron | |
- for appsadm, defines RUNLIBS&RUNDATA as /home/mvstest/testlibs&testdata |
5K7. | cronlog1 - display msgs & append to $APPSADM/cronlog1/yymmdd_HHMMSS_$JOBID2 |
- function called by cronscript1 |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
'cron' is the Unix/Linux facility to automatically run scripts at specified times (daily, weekly, monthly, or whatever).
Crontab files specify the scripts to be executed & the times at which to run them. Each user may have their own crontab file. See the 'crontab' activation procedures documented within the 'crontab_appsadm1' sample file (listed on the next page).
I recommend the applications administrator (appsadm) set up a 'crontab' to perform the application backups of libraries & data files, and to run other jobs that can be auto-scheduled (nightly, weekly, monthly, etc).
Note that 'cron' jobs run under the crontab owner's userid, but the user's profile is not automatically executed to set up PATHs, etc. You will notice that the various sample scripts (nightly1, weekly1, monthly1) perform a dot '.' execution of /home/appsadm/.bash_profile to set up PATHs.
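The pattern is the same in every cron-run script shown in this Part; a minimal skeleton (a sketch only, the comments mark where your own commands would go):

#!/bin/ksh
# skeleton for any script run by cron (cron does NOT run a login profile for us)
export APPSADM=/home/appsadm
. $APPSADM/.bash_profile          # '.' dot execute to define PATH, $PRODDATA, $PRODLIBS, etc
export PATH=$PATH:$RUNLIBS/jcls   # if jcls not already in PATH (as in nightly1 below)
# ... your backup or batch commands here ...
exit 0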
The appsadm cron jobs (backups & nightly scripts) do not need 'root' privileges, since appsadm is in the same group as the production operators who create the files. It does mean that appsadm cannot back up the unix/linux system files.
It is probably not necessary to back up the unix/linux system nightly, since system files rarely change. But if you do wish to back up the system nightly, you would set up a separate crontab owned by root to do so.
The system backup could be written as a separate archive at the end of the same tape just written by the appsadm crontab, or to a different tape if you have a multi-tape system, or to a different tape device if you have multiple tape drives.
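If you do choose to append a root system backup to the same tape, the no-rewind device lets you space past the appsadm archive & write a second archive behind it. A minimal sketch (the directory list 'etc root home/appsadm' is only an illustrative assumption; you would schedule it from crontab_root after backupTapeA has finished):

#!/bin/ksh
# sketch only - append a system-files archive after the appsadm archive (archive #2)
. /home/appsadm/env/common_profile     # define $TAPERWD,$TAPENRW (as setperms1 does)
mt -f $TAPERWD rewind                  # start of tape
mt -f $TAPENRW fsf 1                   # space forward past archive #1 (the nightly zips)
cd /
find etc root home/appsadm -print | cpio -ocvB -O $TAPENRW   # write archive #2
mt -f $TAPERWD rewind; mt -f $TAPERWD offline                # rewind & unload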
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# crontab_appsadm1 - crontab file to run various scripts # (backups,cleanups,application JCL/scripts) # - nightly1, weekly, monthly1 # - store this file at /home/appsadm/sf/... # # - see ADMjobs.doc or www.uvsoftware.ca/admjobs.htm for crontab samples # - sample crontab file supplied in /home/uvadm/sf/adm/crontab_appsadm1 # - I suggest you setup userid 'appsadm' to house your crontabs & cron scripts # then copy to /home/appsadm/sf/crontab_appsadm1 & modify as required # # - you must log on as 'appsadm' to update crontabs # (crontab affects crons only for the logged on user) # - the 'crontab' command (#4 below) copies the specified crontab file # to the 'real' crontab file which would be: /var/spool/cron/appsadm # - Before 1st use, you must logon as root, & add the 'appsadm' # userid to: /etc/cron.allow if it exists. If it does not exist # you can use cron unless your userid exists in /etc/cron.deny # # suggested procedures for updating crontab file (for appsadm) are: # 1. logon as appsadm --> /home/appsadm # 2. vi sf/crontab_appsadm1 - edit this file as required # 3. crontab -r - remove all old crontab lines for appsadm # (OK since this file is always the source) # 4. crontab sf/crontab_appsadm1 - activate new crontab for appsadm # 5. crontab -l - list crontab onfile to confirm installation # #----------------------------------------------------------------------- # arguments to crontab are as follows: # minute hour day-of-mth mth-of-yr day-of-week <----command----> # 00 3 * * 2-6 /home/appsadm/sf/backupTape # Nightly backup to tape 3AM #======================================= # 30 3 * * 2-6 /home/appsadm/sf/nightly1 # Nightly cleanup 3:30 AM #===================================== # - nightly1 calls logfixN to fix console logs for viewing/printing # - cleanup tmp subdirs in homedirs, prodlibs, proddata # 00 4 * * 0 /home/appsadm/sf/weekly1 # Weekly Sunday 4 AM #================================== # - remove report subdirs older than 15 days # 00 5 01 * * /home/appsadm/sf/monthly1 # Monthly (1st day at 5 AM) #==================================== # - monthly1 calls logfixM to copy /home/appsadm/log2/... to log3 # & clear log2/... subdirs # #------------------------ end crontab_appsadm1 --------------------------
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# nightly1 - nightly processing for user applications
#          - by Owen Townsend, UV Software, updated Nov 2009
#
# This 'nightly1' script run by 'crontab_appsadm1'
# - runs various scripts (cleantmps, logfixN, vtocrpts, etc)
# - see scripts & crontabs stored in /home/appsadm/sf/...
#
export APPSADM=/home/appsadm
cd $APPSADM                      # change to /home/appsadm (above env/ & log/ subdirs)
. $APPSADM/.bash_profile         # '.' dot execute profile for PATH's,perms,etc
#=======================        #  (RUNLIBS/RUNDATA + common_profile)
export PATH=$PATH:$RUNLIBS/jcls  # might need this if jcls not already in PATH
#
# clean out tmp subdir contents in homedirs, prodlibs, proddata, etc
cleantmps
#========
# run 'logfixN' to process logfiles for any users who did not logoff
# - killuser2 (run by crontab_root prior to this) has closed their logfiles
logfixN
#======
#
#Note - could run series of batch jobs here
# jgl100.ksh   #<-- could run a demo job for testing
# ==========     - see jgl100.ksh listed at www.uvsoftware.ca/admjobs.htm#5K5
#
#Note - console logs for jobs run under cron are 'mailed' to the crontab owner
#       which is 'appsadm', so appsadm can login each morning & read the mail
#       to see if the nightly jobs had any errors
# BUT - would be nice if we could save the mail as date/time stamped files
#       in case appsadm forgets & we want to examine the history
# YES - we can do it, see crontab2 & cronmailsave1 in ADMjobs.doc 5K1 & 5K4
#     - crontab2 schedules cronmailsave1 to run after nightly jobs
#       to read the mail & save in date/time stamped files in appsadm/cronlog2
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # cleantmps - clean out all 'tmp' subdirs # - in homedirs, prodlibs, proddata, etc # - by Owen Townsend, Dec26/05, at LNPF # - store these scripts & crontab_appsadm1 in /home/appsadm/sf/... # # - cleantmps called by 'nightly1', which is scheduled by crontab_appsadm1 # - nightly performs '.' (source) execute of appsadm .bash_profile # to setup $symbols $PRODLIBS, $PRODDATA for use below # # clean out tmp subdir contents in homedirs, prodlibs, proddata, etc rm -f /home/*/tmp/* rm -f $PRODLIBS/tmp/* rm -f $PRODDATA/tmp/* rm -fr $PRODDATA/jobtmp/* exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# weekly1 - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/
# weekly1 - sample script run by 'cron' weekly Sunday at 3AM or whatever
#         - makes weekly on-disc backup of PRODLIBS & PRODDATA
#
# - copy this (/home/uvadm/sf/adm/weekly1) to your /home/appsadm/sf/...
#   & modify depending on site requirements
# - this script can be auto scheduled by 'cron'
# - see sample crontab file /home/uvadm/sf/adm/crontab_appsadm1, that you can
#   copy to your /home/appsadm/sf/... & modify as required
#
# - perform '.' (source) execute of appsadm .bash_profile to set up $symbols
# - see $symbols below ($PRODLIBS, $PRODDATA, $BACKUP)
#
export APPSADM=/home/appsadm
cd $APPSADM                      # change to /home/appsadm (above env/ & log/ subdirs)
. $APPSADM/.bash_profile         # '.' dot execute profile for PATH's,perms,etc
#=======================        #  (RUNLIBS/RUNDATA + common_profile)
export PATH=$PATH:$RUNLIBS/jcls  # might need this if jcls not already in PATH
#
#----------------------------------------------------------------------
# copy PRODLIBS & PRODDATA to weekly on-disc backup directories
# - 1st remove all prior week backup files
rm -rf $BACKUP/prodlibsBW/*
cp -r $PRODLIBS $BACKUP/prodlibsBW
rm -rf $BACKUP/proddataBW/*
cp -r $PRODDATA $BACKUP/proddataBW
#
#----------------------------------------------------------------------
# clean out various temp subdirs
# note - jobtmp & sysout are subdirectoried (use option 'r')
rm -fr $PRODDATA/jobtmp/*      # clear all subdirs & files from jobtmp
rm -fr $PRODDATA/sysout/*      # clear all subdirs & files from sysout
rm -f  $PRODDATA/tmp/*         # clear all files from tmp
rm -f  $PRODDATA/wrk/*         # clear all files from wrk
#
# clear files in subdir 'rpts' older than 15 days
find $PRODDATA/rpts/* -ctime +15 -exec rm -r {} \;
#=================================================
#
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# monthly1 - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/
# monthly1 - sample script run by 'cron' early on the 1st of each month
#
# - copy this (/home/uvadm/sf/adm/monthly1) to /home/appsadm/sf/...
#   & modify depending on site requirements
# - this script can be auto scheduled by 'cron'
# - see sample crontab file /home/uvadm/sf/adm/crontab_appsadm1
#   - copy to your site's /home/appsadm/sf/... & modify as required
#
# - this sample runs 'logfixM' logfile monthly processing
#   - see descriptions in /home/uvadm/sf/logfixM
# - could add more monthly processing to this script
#
# establish appsadm PATH & execute common_profile_prod
# - to define: PATH, PFPATH, PRODLIBS, PRODDATA, BACKUP dirs, etc
#
export APPSADM=/home/appsadm
cd $APPSADM                      # change to /home/appsadm (above env/ & log/ subdirs)
. $APPSADM/.bash_profile         # '.' dot execute profile for PATH's,perms,etc
#=======================        #  (RUNLIBS/RUNDATA + common_profile)
export PATH=$PATH:$RUNLIBS/jcls  # might need this if jcls not already in PATH
#
logfixM    # save last months log files in log3 & clear log2 for this month
#======
backupBM   # make monthly backups of proddata & prodlibs
#=======
vtocshift  # shift vtoc report subdirs (vtoc2->vtoc3,vtoc1->vtoc2,clear vtoc1)
#========
#
# could add more scripts here ???
exit 0
#
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# crontab_user - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/ # crontab_user - sample crontab file for users # - this sample only 'exit's in case you forgot to log off # - to close the console logging file to prevent loss # - required before crontab_appsadm1/nightly1 processes log files # # The 'console log' file is created by unix 'script' command at end of profile # - see 'console logging' at www.uvsoftware.ca/admjobs.htm#Part_6 # - also see 'crontab_root' to kill users who did not log off # # ** Op. Instrns. for console logging users ** # # 1. login with your userid --> /home/userid/ # 2. mkdir sf - make directory if not already made # 3. cp /home/uvadm/sf/adm/crontab_user sf/crontab_userid # ==================================================== # - copy supplied crontab_user to your subdir & rename with your userid # # 4. vi sf/crontab_userid - edit this file if desired # - could add actions other than 'exit' # 5. crontab sf/crontab_userid - activate new crontab for your userid # ========================= # 6. crontab -l - list crontab file to confirm installation # # minute hour day-of-mth mth-of-yr day-of-week <----command----> # 51 01 * * * exit #<-- 1st exit exits 'script' (console logging) #=============== (closes the console log file) 52 01 * * * exit #<-- 2nd exit exits your shell #=============== # #BUT - this is obsoleted by crontab_root, which runs killuser2 script # to kill all ksh & bash shell users who forgot to log off #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# crontab_root - crontab file to run under root # - by Owen Townsend, UV Software, May 2008 # - supplied in /home/uvadm/sf/adm/crontab_root # - copy to /home/appsadm/sf/crontab_root before customization & activation # - see details at www.uvsoftware.ca/admjobs.htm#Part_5 # # Only for backups, cleanups, etc that require 'root' # - see crontab_apps1 for backups & applications (daily,weekly,monthly,etc) # - 'crontab_apps1' runs under 'appsadm' (not root) & much safer # # Actual crontab for root depends on the unix/linux OS # - you need to find it & add desired lines # OR, you could use the following procedure so you could maintain # all crontabs in 1 place /home/appsadm/sf/... # - after you have retrieved & combined root crontabs with this file # - for RHEL 5.1 there is no root crontab, so just copy this # # suggested procedures for updating root crontab: # 1. logon as root & cd to /home/appsadm # 2. crontab -l (if 1st setup) - list to see if any crontab exists for root # 3. crontab -l >sf/crontab_root - ifso, redirect to sf/appsadm # 4. vi sf/crontab_root - edit this file as required # 4a. :r /home/uvadm/sf/adm/crontab_root <-- append this supplied file # 5. crontab -r - remove old crontab file for root # 6. crontab sf/crontab_root - activate new crontab for root # 7. crontab -l - list crontab file to confirm installation # # minute hour day-of-mth mth-of-yr day-of-week <----command----> # 15 00 * * * /home/appsadm/sf/killuser2 all # kill users at 12:15AM every night #========================================= # - killuser2 kills all 'ksh' or 'bash' users who did not log off # - also see killuser1 to interactively kill any 1 specified userid # 30 00 * * 2-6 /home/appsadm/sf/setperms1 all # set perms at 12:30AM Tues-Sat #=========================================== # At 12:30 AM Tues-Sat, run setperms1 to set permissions on data & libraries # - ensure directories 775, data-files 664, script-files 775 # - ensure owner:group appsadm:apps (see details in setperms1 script) # 00 01 * * 0 /sbin/shutdown -g0 -y -i6 # reboot at 01:00AM Sunday #==================================== (above command for traditional unix) # reboot Sunday 1 AM -g0(no wait) -y(auto reply y to prompt) -i6(reboot) # 00 01 * * 0 /sbin/shutdown -r now #<-- can use this for Linux # ================================= # #Note: Add commands here to bring up software packages, such as: # - Micro Focus COBOL license manager # - Online systems (MTO, unikix, CICS6000, etc) #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # killuser2 - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/ # killuser2 - kill users who did not logout # - by Owen Townsend, www.uvsoftware.ca, March 2002 # # - intended to be run by crontab_root at 11:45 PM or whenever # - killuser2 is run before logfixN which processes user console logs # - this kills user 'script' & closes the script output file # - also see 'killuser1' script for interactive use to kill any 1 user # #usage: killuser2 all # ============= # # verify arg1 'all' if [ "$1" != "all" ]; then echo "killuser2 arg1 must be 'all' "; exit 1; fi # # redirect ps -f output to a tmp file # - use '-o' output option for 3 fields only (COMMAND, PID,& RUSER) ps -e -ocomm -opid -oruser >/tmp/psef #==================================== # ** sample output lines ** # COMMAND PID RUSER # init 1 root # bash 9022 root # bash 9077 uvadm # bash 9118 uvbak # ps 9226 uvadm # # - open the file & read back into variables for easier manipulation exec 3< /tmp/psef # open file #3 # x=0; y=0 while read -u3 comm pid ruser do if [[ ("$comm" == ksh || "$comm" == bash) && ("$ruser" != root) ]] then let x=x+1 kill -9 $pid if [[ $? == 0 ]]; then kok=OK; let y=y+1; else kok=NAK; fi echo "#$x kill $comm $pid $ruser - $kok" fi done exec 3<&- # close file #3 echo "$x kills attempted, $y killed OK $(date)" #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # setperms1 - set permissions on subdirs & files under a specified superdir # - using 'find' to process all levels of directories & files # - followed by chmod 775 for any bin/* & script/* dirs # - by Owen Townsend, UV Software, May 2008 # - see complete doc at www.uvsoftware.ca/admjobs.htm#Part_5 # # This script run may be run by a root crontab prior to nightly batch runs # - script supplied in /home/uvadm/sf/adm/setperms1 # - should setup user 'appsadm' & copy this to /home/appsadm/sf/setperms1 # # This script run by crontab_root stored at /home/appsadm/sf/crontab_root # - see complete listing at www.uvsoftware.ca/admjobs.htm#5F1 # - here is just the crontab command line to run this script: # # 30 00 * * 2-6 /home/appsadm/sf/setperms1 all # ============================================ # # This script intended as part of the Vancouver Utility mainframe conversions # - to ensure no bad permissions get into the DATA & Library file systems # - see DATA & LIBS directories suggested in www.uvsoftware.ca/admjobs.htm#2C0 # (ex: p1/apps/testlibs, p1/apps/testdata, p2/apps/prodlibs, p2/apps/proddata) # - use $symbols $TESTLIBS, $TESTDATA, $PRODLIBS, $PRODDATA # - defined in /home/appsadm/env/common_profile # # After using find to set perms for all subdirs(775) & all files(664), within # library superdirs, we must follow with 'chmod 775' for any bin/script subdirs # - this script assumes these are called 'bin', 'sf',& 'jcls' (VU conversions) # - you must modify if you use different names or setup additional bin/scripts # # Ensure arg1 is 'all' (protection against inadvertent entry of 'setperms1') echo "setperms1 - set perms on all subdirs(775) & all files(664)" echo " - within \$PRODLIBS, \$PRODDATA, \$TESTLIBS, \$TESTDATA" echo " - as defined in /home/appsadm/env/common_profile" echo " - this script can be scheduled by /home/appsadm/sf/crontab_root" if [[ "$1" != "all" ]]; then echo "usage: setperms1 all" echo " =============" echo " - arg1 must be 'all'" exit 90; fi # # '.' execute common_profile to get superdir locations . /home/appsadm/env/common_profile #================================= #
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
echo "set perms for all subdirs(775) & files(664) in \$PRODDATA=$PRODDATA" test -d "$PRODDATA" || (echo "\$PRODDATA directory not defined"; exit 91); # find "$PRODDATA" -type d -exec chmod 775 {} \; find "$PRODDATA" -type f -exec chmod 664 {} \; #============================================= # echo "set perms for all subdirs(775) & files(664) in \$PRODLIBS=$PRODLIBS" test -d "$PRODLIBS" || (echo "\$PRODLIBS directory not defined"; exit 92); # find "$PRODLIBS" -type d -exec chmod 775 {} \; find "$PRODLIBS" -type f -exec chmod 664 {} \; #============================================= # # restore 775 for executable files (in bin, sf, jcls) echo "restore perms 775 for executables (bin,sf,jcls) in \$PRODLIBS=$PRODLIBS" test -d "$PRODLIBS"/bin && chmod 775 "$PRODLIBS"/bin/* test -d "$PRODLIBS"/sf && chmod 775 "$PRODLIBS"/sf/* test -d "$PRODLIBS"/jcls && chmod 775 "$PRODLIBS"/jcls/* # echo "set perms for all subdirs(775) & files(664) in \$TESTDATA=$TESTDATA" test -d "$TESTDATA" || (echo "\$TESTDATA directory not defined"; exit 93); # find "$TESTDATA" -type d -exec chmod 775 {} \; find "$TESTDATA" -type f -exec chmod 664 {} \; #============================================= # echo "set perms for all subdirs(775) & files(664) in \$TESTLIBS=$TESTLIBS" test -d "$TESTLIBS" || (echo "\$TESTLIBS directory not defined"; exit 94); # find "$TESTLIBS" -type d -exec chmod 775 {} \; find "$TESTLIBS" -type f -exec chmod 664 {} \; #============================================= # # restore 775 for executable files (in bin, sf, jcls) echo "restore perms 775 for executables (bin,sf,jcls) in \$TESTLIBS=$TESTLIBS" test -d "$TESTLIBS"/bin && chmod 775 "$TESTLIBS"/bin/* test -d "$TESTLIBS"/sf && chmod 775 "$TESTLIBS"/sf/* test -d "$TESTLIBS"/jcls && chmod 775 "$TESTLIBS"/jcls/* # #------------------------------------------------------- # Set Owner & Group - could #comment if sure not a problem # - this protects for somebody using root & forgetting to reset owner:group chown -R appsadm:apps "$PRODDATA" chown -R appsadm:apps "$PRODLIBS" chown -R appsadm:apps "$TESTDATA" chown -R appsadm:apps "$TESTLIBS" # exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Several of our customers have nightly batch jobs scheduled by cron. Some of them have reported batch shift failures due to file permissions. To understand how we use cron to schedule batch jobs, please see: https://www.uvsoftware.ca/admjobs.htm#Part_5
You could run a night shift from a crontab owned by 'root' & never have a permissions failure, but this would be extremely dangerous. One wrong use of 'rm *' could wipe out your system. Running as appsadm/apps protects your system.
Our suggested crontabs & scripts are owned by 'appsadm' in group 'apps', the same group as the operators & programmers who work with the production data & libraries. Permissions must be 775 for directories & 664 for files, which extends access to the group level.
Batch failures can occur if a day shift operator/programmer creates a file with the wrong permissions or group & this file is later used by the nightly batch scripts. FTP'd files can have the wrong permissions. Somebody might use 'root' to fix something & forget to reset permissions/owner/group.
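A complementary precaution (an assumption on my part, not one of the supplied scripts) is to set the umask in /home/appsadm/env/common_profile so that files created by users in group 'apps' default to the required 664/775. FTP'd files & work done as root can still slip through, which is what the setperms1 approach described next is for.

# candidate line for /home/appsadm/env/common_profile (sketch only)
umask 002     # new files default to 664, new directories to 775 (group-writable)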
We can prevent these failures if we set up a cron script (setperms1) to set permissions & group before the nightly batch jobs are scheduled. The setperms1 script & the crontab used to schedule it must of course run under 'root' to be able to change permissions & groups. See the crontab_root example at https://www.uvsoftware.ca/admjobs.htm#5F2. Here is the essential line:
 30 01 * * 2-6 /home/appsadm/sf/setperms1 all   # fix permissions
 #============================================    on data & libraries
 # minute hour day-of-mth mth-of-yr day-of-week <----command---->
The crontab above (owned by root) schedules setperms1 at 1:30 AM Tues-Sat. The crontab below (owned by appsadm) schedules nightly1 at 1:45 AM Tues-Sat.
 45 01 * * 2-6 /home/appsadm/sf/nightly1 all    # schedule nightly batch jobs
 #===========================================
See the full 'setperms1' script at https://www.uvsoftware.ca/admjobs.htm#5H1, but here are the essential lines:
 . /home/appsadm/env/common_profile
 #=================================
 find $PRODDATA -type d -exec chmod 775 {} \;
 #===========================================
 find $PRODDATA -type f -exec chmod 664 {} \;
 #===========================================
 chown -R appsadm:apps $PRODDATA
 #==============================
'PRODDATA' is the super directory containing all production subdirs & files, and is defined in 'common_profile' (see '1C2'), for example:
 export PRODDATA=/p2/apps/proddata
 =================================
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Part_6 documents 'console logging' for logged-on users, BUT it does not work for jobs scheduled by cron, because the 'script' command (uncommented at the end of the stub_profile) is designed to work only for login sessions. See the 'script' command documented on page '6C1'.
However, we have an alternate solution based on the fact that jobs scheduled by cron 'mail' any console output to the user who issued the 'crontab'. The following pages show you how to capture the mail into date_time stamped log files.
5K1. | crontab2 - schedules cronscript1 at 2 AM Tuesday - Saturday |
- ALSO schedules 'cronmailsave1' at 3 AM to capture mail for log | |
- you could use this as a model for your production cron jobs | |
BUT - we will use crontabtest2 for our tests (see next below) | |
- schedules cronscript1 every 2 minutes |
5K2. | crontabtest2 - variation for testing at UV Software |
- schedules script 'cronscript1' every 2 minutes (even minutes) | |
(cronscript1 calls 'jgl100.ksh', JCL converted to script) | |
- ALSO schedules 'cronmailsave1' every 2 minutes (odd minutes) | |
to capture mail from cronscript1 into a date_time stamped file | |
(for easy testing with minimal wait for results) |
5K3. | cronscript1 - script executing JCL/scripts to be logged |
- runs demo JCL/script /home/mvstest/testlibs/jcls/jgl100.ksh | |
- jgl100.ksh writes a GDG file in $RUNDATA/gl/... |
5K4. | cronmailsave1 - script called by crontab2 & crontabtest2 |
to save mail from prior crontab2/cronscript1 | |
into date_time stamped files in /home/appsadm/cronlog2/... |
5K5. | jgl100.ksh - JCL/script called by cronscript1 to demo joblogs by cron mail |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
/home/appsadm
   :-----cronlog1    <-- 1 line status msgs (optional)
   :        :-----090423_174201_cronscript1
   :        :-----090423_174401_cronscript1
   :        :-----
   :-----cronlog2    <-- console logs captured by mail from cron
   :        :-----090423_174301_cronscript1
   :        :-----090423_174501_cronscript1
   :        :-----
   :-----env         <-- profiles called by cronscript1
   :        :-----stub_profile
   :        :-----common_profile
   :        :-----stub_profile_cronlogdemo
   :        :-----
   :-----sf          <-- crontabs & scripts for demo & production models
   :        :-----cronscript1
   :        :-----cronmailsave1
   :        :-----crontab2
   :        :-----crontabtest2
/home/mvstest
   :-----testdata
   :        :-----gl
   :        :      :-----account.acntlist_000001   <-- jgl100.ksh writes GDG file
   :        :      :-----account.acntlist_000002   <-- existing generations
   :        :      :-----account.acntlist_000003
   :        :      :-----                          <-- observe creation of new GDGs
   :-----testlibs
   :        :-----cbls        <-- COBOL programs
   :        :      :-----cgl100.cbl
   :        :-----jcls        <-- JCL/scripts (converted from mainframe JCL)
   :        :      :-----jgl100.ksh
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#1. Login as appsadm   --> /home/appsadm
    ================

#2a. mkdir cronlog1   <-- subdir for 1 line msgs via cronlog1 function
#2b. mkdir cronlog2   <-- subdir for date_time stamped logs captured from cron mail
#2c. mkdir env        <-- subdir for profiles (copied from /home/uvadm/env/...)
#2d. mkdir sf         <-- subdir for crontabs & scripts from /home/uvadm/sf/adm/...

#3. cp /home/uvadm/sf/adm/cron* sf   <-- copy crontabs & scripts from $UV
    ==============================

#4. cp /home/uvadm/env/* env   <-- copy profiles from $UV to /home/appsadm/env
    ========================     - for this demo & site specific customization

#5. vi env/stub_profile   <-- examine profile, listed on '1C1'
    ===================     - modify for appsadm (see page '1D4')
                            - customize as required for your site
                            - will later change TESTLIBS/TESTDATA to
                              PRODLIBS/PRODDATA for production

#6. cp env/stub_profile .bash_profile
    =================================
    - copy 'appsadm' version of stub_profile to the actual '.bash_profile'
      (assuming bash/linux, copy to '.profile' for unix Korn shell)

#7. logoff & log back on to make the new .bash_profile effective

#8. vi env/stub_profile_cronlogdemo   <-- examine profile for cronlogdemo
    ===============================     - should not need changes for testing
                                          in /home/mvstest/...
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
The following test instructions assume the appsadm setup steps on the preceding page have been completed:
#1. Login as appsadm   --> /home/appsadm
    ================

#2. crontab sf/crontabtest2   <-- start cron for appsadm
    =======================     - see listing on page '5K1'

#3. Wait for next 'EVEN' minute & then check for log/mail files created

#4a. l cronlog1   <-- list message files in cronlog1/ from sf/cronscript1
     ==========     - should see msgs from cronscript1 (run on EVEN minutes)

#4b. l cronlog2   <-- list mail files captured by cronmailsave1 date_time stamped
     ==========     - will be none until ODD minute

#5. Wait for next 'ODD' minute & then check for log/mail files created

#6a. l cronlog1   <-- Re-list message files in cronlog1/ from sf/cronscript1
     ==========     - should now be some (after cronmailsave1 scheduled)

#6b. l cronlog2   <-- Re-list mail files captured by cronmailsave1
     ==========     - will be more after next ODD minute

#7. Wait for 1 more cycle of #3 - #6

#8. crontab -r   <-- Remove/deactivate the crontab (for appsadm)
    ==========
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
You can also login as 'mvstest' on another screen to observe results of running JCL/scripts by cron. The demo script 'cronscript1' runs 'jgl100.ksh' which writes a GDG file. You should be able to see a new generation created every 2 minutes.
#1. Login as mvstest   --> /home/mvstest
    ================

#2. cdd   alias='cd $TESTDATA'   --> /home/mvstest/testdata
    ===

#3. l gl   <-- list gl subdir prior to 1st crontab execution of jgl100.ksh
    ====     - account.acntlist_ demo file distributed with 3 generations

    /home/mvstest
       :-----testdata
       :        :-----gl
       :        :      :-----account.acntlist_000001
       :        :      :-----account.acntlist_000002   <-- existing generations
       :        :      :-----account.acntlist_000003

#4. l gl   <-- list gl subdir AFTER 1st crontab execution of jgl100.ksh
    ====

       :        :      :-----account.acntlist_000004   <-- jgl100.ksh writes 4th generation

#5. l gl   <-- list gl subdir AFTER 2nd crontab execution of jgl100.ksh
    ====

       :        :      :-----account.acntlist_000005   <-- jgl100.ksh writes 5th generation
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#1. Login as appsadm   --> /home/appsadm
    ================

#2. ls cronlog1 cronlog2   <-- list log files captured from cronscript1
    ====================

    /home/appsadm
       :-----cronlog1    <-- 1 line status msgs (optional)
       :        :-----090423_174201_cronscript1
       :        :-----090423_174401_cronscript1
       :        :-----
       :-----cronlog2    <-- console logs captured by mail from cron
       :        :-----090423_174301_cronscript1
       :        :-----090423_174501_cronscript1
       :        :-----

#3. cat cronlog1/*   <-- display contents of /home/appsadm/cronlog1/...
    ==============     - 1 line status msgs created by logcron1 function
                       - optionally coded in scripts triggered by cron

    090423_174201_cronscript1: cronscript1 - test running scripts via crontab
    090423_174201_cronscript1: cronscript1 - end running scripts via crontab

    090423_174401_cronscript1: cronscript1 - test running scripts via crontab
    090423_174401_cronscript1: cronscript1 - end running scripts via crontab
Note - interpreting the date_time stamps in the cronlog1 messages above:

      17:42:01 - cronscript1 begins  (1st of 2 cycles tested)
      17:42:01 - cronscript1 ends    (in same second)

      17:44:01 - cronscript1 begins  (2nd of 2 cycles, 2 minutes later)
      17:44:01 - cronscript1 ends    (in same second)
#4. l cronlog2   <-- list log files from crontabtest2/cronscript1
    ==========     - date_time stamped by cronmailsave1

    090423_174301_cronscript1
    090423_174501_cronscript1
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#5a. vi cronlog2/*   <-- inspect contents of log files
     =============
From appsadm@uvsoft4.uvsoftware.ca Thu Apr 23 15:19:01 2009
Return-Path: <appsadm@uvsoft4.uvsoftware.ca>
Received: from uvsoft4.uvsoftware.ca (localhost [127.0.0.1])
        by uvsoft4.uvsoftware.ca (8.13.8/8.13.8) with ESMTP id n3NMJ1w4006673
        for <appsadm@uvsoft4.uvsoftware.ca>; Thu, 23 Apr 2009 15:19:01 -0700
Received: (from appsadm@localhost)
        by uvsoft4.uvsoftware.ca (8.13.8/8.13.8/Submit) id n3NMJ18c006672;
        Thu, 23 Apr 2009 15:19:01 -0700
Date: Thu, 23 Apr 2009 15:19:01 -0700
Message-Id: <200904232219.n3NMJ18c006672@uvsoft4.uvsoftware.ca>
From: root@uvsoft4.uvsoftware.ca (Cron Daemon)
To: appsadm@uvsoft4.uvsoftware.ca
Subject: Cron <appsadm@uvsoft4> /home/appsadm/sf/cronmailsave1
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/appsadm>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=appsadm>
X-Cron-Env: <USER=appsadm>
Status: R
Mail version 8.1 6/6/93.  Type ? for help.
"/var/mail/appsadm": 2 messages 2 new
>N  1 root@uvsoft4.uvsoftw  Thu Apr 23 15:17  26/1121  "Cron <appsadm@uvsoft4"
 N  2 root@uvsoft4.uvsoftw  Thu Apr 23 15:18  56/3469  "Cron <appsadm@uvsoft4"
"/home/appsadm/cronlog2/cronscript1" [Appended]
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#5b. vi cronlog2/*   <-- inspect contents of log files
     =============     - continued
stty: standard input: Invalid argument stty: standard input: Invalid argument rm: cannot lstat `/home/appsadm/mbox': No such file or directory 090423_174201_cronscript1: cronscript1 - test running scripts via crontab clear TESTDATA temp subdirs before tests or batch shift - easier to investigate any problems (unclutered by old files) - jobtmp, tmp, wrk, rpts, sysout, joblog, jobctl - this script does 'cd $TESTDATA' so you can run from anywhere - clear TESTDATA=/home/mvstest/testdata y/n ? all files removed from: jobtmp,tmp,wrk,rpts,sysout,joblog,jobctl 090423:174201:JGL100: Begin Job=JGL100 090423:174201:JGL100: /home/mvstest/testlibs/jcls/jgl100.ksh 090423:174201:JGL100: Arguments: 090423:174201:JGL100: RUNLIBS=/home/mvstest/testlibs 090423:174201:JGL100: RUNDATA=/home/mvstest/testdata 090423:174201:JGL100: JTMP=/home/mvstest/testdata/jobtmp/JGL100 SYOT=/home/mvstest/testdata/sysout/JGL100 090423:174201:JGL100: RUNDATE=20090423 090423:174201:JGL100: ******** Begin Step S0010 cgl100 (#1) ******** 090423:174201:JGL100: EOF fili01 rds=3 size=75: /home/mvstest/testdata/jobtmp/JGL100/gtmp/0010I_gl_account.master_ 090423:174201:JGL100: EOF filr01 rds=1 upds=1 size=10240: /home/mvstest/testdata/ctl/gdgctl51I 090423:174201:JGL100: EOF filo02 wrts=1 size=51: /home/mvstest/testdata/jobtmp/JGL100/gtmp/0010G0_gl_account.master_ 090423:174201:JGL100: gen0: ACCTMAS=gl/account.master_000003 insize=8720 090423:174201:JGL100: EOF fili01 rds=3 size=81: /home/mvstest/testdata/jobtmp/JGL100/gtmp/0010O_gl_account.acntlist_ 090423:174201:JGL100: EOF filr01 rds=1 upds=1 size=10240: /home/mvstest/testdata/ctl/gdgctl51I 090423:174201:JGL100: EOF filo02 wrts=1 size=128: /home/mvstest/testdata/jobtmp/JGL100/gtmp/0010G1_gl_account.acntlist_ 090423:174201:JGL100: gen+1: ACTLIST=/home/mvstest/testdata/jobtmp/JGL100/GDG/gl/account.acntlist_000004 gens=8 090423:174201:JGL100: file: SYSOUT=/home/mvstest/testdata/sysout/JGL100/S0010_SYSOUT bytes= 090423:174201:JGL100: Executing--> cobrun -F /home/mvstest/testlibs/cblx/cgl100 090423:174201:JGL100: Job Times: Begun=17:42:01 End=17:42:01 Elapsed=00:00:00 090423:174201:JGL100: moving /home/mvstest/testdata/jobtmp/JGL100/GDG/subdir/files back to $RUNDATA/subdirs/ `/home/mvstest/testdata/jobtmp/JGL100/GDG/gl/account.acntlist_000004' -> `gl/account.acntlist_000004' 090423:174201:JGL100: EOF filr01 rds=5 upds=1 size=10240: /home/mvstest/testdata/ctl/gdgctl51I 090423:174201:JGL100: JobEnd=Normal, StepsExecuted=1, LastStep=S0010 090423_174201_cronscript1: cronscript1 - end running scripts via crontab
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# crontab2 - crontab file sample for user modification/implementation # - schedules sample script 'cronscript1' at 2 AM Tues-Sat # - ALSO schedules 'cronmailsave1' at 3 AM to capture mail for job log # - by Owen Townsend, April 2009 # - see documentation at www.uvsoftware.ca/admjobs.htm#Part_5 # # crontabtest2 - variation for testing at UV Software # - test running JCL/scripts by cron & capturing logs in mail # - schedules script 'cronscript1' every 2 minutes (even minutes) # AND schedules 'cronmailsave1' every 2 minutes (odd minutes) # for testing with minimal wait for results # # cronscript1 - script executing JCL/scripts to be logged # - runs demo JCL/script /home/mvstest/testlibs/jcls/jgl100.ksh # - jgl100.ksh writes a GDG file in $RUNDATA/gl/... # # cronmailsave1 - script called by this crontab2 # to save mail from prior crontab2/cronscript1 # - script might be as follows: # echo "save * $APPSADM/cronlog2/$CRONDT_$JOBID1" | mail # #minute hour day-of-mth mth-of-yr day-of-week <----command----> 00 2 * * 2-6 /home/appsadm/sf/cronscript1 # 2 AM Tues-Sat #======================================== # #Note - jobs run under cron send mail to user (appsadm) # - We will capture the mail for a joblog (into a date_time stamped file) # - BUT, we have to do this AFTER cron session ends (with a separate cron) # - NOW, run the mail capture script at 3 AM # #minute hour day-of-mth mth-of-yr day-of-week <----command----> 00 3 * * 2-6 /home/appsadm/sf/cronmailsave1 # 3 AM Tues-Sat #========================================== # Also see crontabtest2 which schedules cronscript1 & cronmailsave1 # - every 2 minutes for easy testing (until you disable via 'crontab -r') # See crontabs documented at www.uvsoftware.ca/admjobs.htm#Part_5 #------------------------ end crontab2 --------------------------
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# crontabtest2 - test running JCL/scripts by cron & capturing logs in mail # - schedule cronscript1(EVEN minutes) & cronmailsave1(ODD minutes) # - every 2 minutes for testing with minimal wait for results # - by Owen Townsend, April 2009 # - see documentation at www.uvsoftware.ca/admjobs.htm#Part_5 # #*crontabtest2 - *THIS schedules cronscript1 every EVEN minute # AND schedules cronmailsave1 every ODD minute # # cronscript1 - script executing JCL/scripts to be logged # - runs demo JCL/script /home/mvstest/testlibs/jcls/jgl100.ksh # - jgl100.ksh writes a GDG file in $RUNDATA/gl/... # #*crontabtest2 - *THIS ALSO schedules cronmailsave1 every odd minute # - to save the 'mail' from prior crontab2/cronscript1 # in date_time stamped file # ($APPSADM/cronlog2/yymmdd_HHMMSS_jobname) # # cronmailsave1 - script called by this crontab2/crontabtest2 # to save mail from prior cronscript1 # - script might be as follows: # echo "save * $APPSADM/cronlog2/$CRONDT_$JOBID1" | mail # # schedule cronscript1 every even minute #minute hour day-of-mth mth-of-yr day-of-week <----command----> 00,02,04,06,08,10,12,14 * * * * /home/appsadm/sf/cronscript1 16,18,20,22,24,26,28,30 * * * * /home/appsadm/sf/cronscript1 32,34,36,38,40,42,44,46 * * * * /home/appsadm/sf/cronscript1 48,50,52,54,56,58 * * * * /home/appsadm/sf/cronscript1 # #Note - jobs run under cron send mail to user (appsadm) # - We will capture the mail for a joblog (into a date_time stamped file) # - BUT, we have to do this AFTER cron session ends (with a separate cron) # - SO NOW, run the mail capture script every 2 minutes on the odd minute # # schedule cronmailsave1 every odd minute #minute hour day-of-mth mth-of-yr day-of-week <----command----> 01,03,05,07,09,11,13,15 * * * * /home/appsadm/sf/cronmailsave1 17,19,21,23,25,27,29,31 * * * * /home/appsadm/sf/cronmailsave1 33,35,37,39,41,43,45,47 * * * * /home/appsadm/sf/cronmailsave1 49,51,53,55,57,59 * * * * /home/appsadm/sf/cronmailsave1 # scripts run every 2 minutes (until you disable via 'crontab -r') # - see crontabs documented at www.uvsoftware.ca/admjobs.htm#Part_5 #------------------------ end crontabtest2 --------------------------
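On Vixie/cronie cron (as shipped with RHEL) the minute lists above can also be written with step values; the long comma lists in the supplied file work on any cron. An equivalent sketch, assuming your cron supports the '/' step syntax:

#minute hour day-of-mth mth-of-yr day-of-week <----command---->
*/2    * * * * /home/appsadm/sf/cronscript1     # every EVEN minute
1-59/2 * * * * /home/appsadm/sf/cronmailsave1   # every ODD minute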
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh # cronscript1 - test/demo running scripts via crontab # - by Owen Townsend, April 2009 # - demo running 'mvstest' JCL/scripts under 'cron' & capturing log via 'mail' # - script, crontab,& profile distributed in /home/uvadm/testlibs/sf/... # - setup user appsadm & copy test crontabs & scripts to /home/appsadm/sf/... # - also copy profiles from /home/uvadm/env/... to /home/appsadm/env/... # - must setup subdirs /home/appsadm/cronlog1, cronlog2 # - all documented at www.uvsoftware.ca/admjobs.htm#Part_5 # # crontab2 or crontabtest2 - schedules cronscript1 # - every 2 minutes for testing with minimal wait for results #*cronscript1 - *THIS script executing JCL/scripts to be logged # - runs demo JCL/script /home/mvstest/testlibs/jcls/jgl100.ksh # - jgl100.ksh writes a GDG file in $RUNDATA/gl/... # # Note - scripts scheduled by crontab do not have benefit of a login profile # - must dot execute profile to set PATHs to find scripts & programs # - also allows us to call using just the jobnames (vs full path names) # export APPSADM=/home/appsadm # homedir for crontabs & scripts # . $APPSADM/env/stub_profile # uncmt for production cron job mail logs #============================ # next line active for test/demo . $APPSADM/env/stub_profile_cronlogdemo # for demo ADMjobs.doc#5I1-5K7 #====================================== autoload logcron1 # function to log msgs to $APPSADM/cronlog1 #================ # found via export FPATH=... set in profile export CRONDT=$(date +%y%m%d_%H%M%S) # date_time for logcron1 function export JOBID1=cronscript1 # jobid for date_time_jobid msg prefix #======================== rm $APPSADM/mbox # remove old mail for appsadm (owner crontab running this) #================ # jobs run under cron will send mail to user (appsadm) #Note - you can copy/rename this demo script for your production cron scripts # - replace commands between BEGIN/END with your commands # - replace all instances of demo name (cronscript1) with your scriptname # #---------------------------- BEGIN user jobs -------------------------------- # logcron1 "$JOBID1 - test running scripts via crontab" testdatainit2 #<-- optional script to clear any old files in temp subdirs jgl100.ksh # test/demo job documented at www.uvsoftware.ca/mvsjcl.htm#1E3 #========= # - runs demo JCL/script /home/mvstest/testlibs/jcls/jgl100.ksh # # - jgl100.ksh writes a GDG file in $RUNDATA/gl/... # # - you can replace these testjobs with your production jobs logcron1 "$JOBID1 - end running scripts via crontab" #----------------------------- END user jobs --------------------------------- #Note - jobs run under cron will send mail to user (appsadm) # - save mail (as date_time stamped file) after cron session as follows: # echo "save * $APPSADM/cronlog2/$CRONDT_$JOBID1" | mail #======================================================= # - BUT, you have to do this AFTER cron session ends (separate cron) exit 0
#!/bin/ksh # cronmailsave1 - script called by crontab2 or crontabtest2 # to save mail from cronscript1 # - by Owen Townsend, April 2009 # - see doc at www.uvsoftware.ca/admjobs.htm#Part_5 # # crontab2 or crontabtest2 # - schedules cronscript1 every EVEN minute # - every 2 minutes for testing with minimal wait for results # # cronscript1 - script executing JCL/scripts to be logged # - runs demo JCL/script /home/mvstest/testlibs/jcls/jgl100.ksh # - jgl100.ksh writes a GDG file in $RUNDATA/gl/... # # crontab2 or crontabtest2 # - runs script to save the 'mail' from prior cronscript1 # in date_time stamped file every ODD minute # in $APPSADM/cronlog2/yymmdd_HHMMSS_jobname # #*cronmailsave1 - script called by crontab2/crontabtest2 (every ODD minute) # to save mail from prior crontab2/cronscript1 # APPSADM=/home/appsadm #<-- location of mail save subdir (cronlog2) JOBID1=cronscript1 #<-- jobname for suffix on mail save file CRONDT=$(date +%y%m%d_%H%M%S) #<-- date_time stamp for mail save file # # use 'echo' to pipe 'save' & 'delete' commands to 'mail' #======================================================= echo "save * $APPSADM/cronlog2/${CRONDT}_$JOBID1" | mail echo "delete *" | mail #======================================================= #Note - jobs run under cron send mail to user (appsadm) # - BUT, you have to do this AFTER cron session ends (via separate cron) # - see more explanations at www.uvsoftware.ca/admjobs#Part_5 #------------------------ end cronmailsave1 -------------------------- exit 0
#!/bin/ksh ##JGL100 JOB (1234),'LIST GL MASTER CHART OF ACCOUNTS' export JOBID2=JGL100; scriptpath="$0"; args="$*" if [[ -z "$JOBID1" ]]; then JOBID1=JGL100; fi; export JGL100 for arg in $args; do if [[ "$arg" == *=* ]]; then export $arg; fi; done integer JCC=0 SCC=0 LCC=0 # init step status return codes autoload jobset51 jobend51 jobabend51 logmsg1 logmsg2 stepctl51 autoload exportfile exportgen0 exportgen1 exportgenall exportgenx jobset51 # call function for JCL/script initialization goto S0000=A # * MVS JCL CONVERSION DEMO - PROCs & GDG files ##STEPA EXEC PGL100,HLQ=GL,YEAREND=2003 #<-PROC1call ##PGL100 PROC HLQ=GL,YEAREND=2002 HLQ="GL";YEAREND="2002"; ##STEPA EXEC PGL100,HLQ=GL,YEAREND=2003 #<-PROC1exp HLQ="GL";YEAREND="2003"; # * LIST G/L CHART OF ACCOUNTS FROM ACCOUNT.MASTER #1======================= begin step#S0010 CGL100 ======================== S0010=A JSTEP=S0010; ((XSTEP+=1)); SCC=0; LCC=0; alias goto=""; logmsg2 "******** Begin Step $JSTEP cgl100 (#$XSTEP) ********" stepctl51 # test oprtr jcpause/jcclear ##STEPA EXEC PGM=CGL100,REGION=1024K,PARM=&YEAREND export PROGID=cgl100 export PARM="2003" exportgen0 0 ACCTMAS gl/account.master_ exportgen1 +1 ACTLIST $JGDG/gl/account.acntlist_ #exportgen1 $JGDG/subdir/tempfiles restored to outdir at Normal EOJ exportfile SYSOUT $SYOT/${JSTEP}_SYSOUT logmsg2 "Executing--> cobrun $ANIM $RLX/cgl100" #3---------------------------------------------------------------------- cobrun $ANIM $RLX/cgl100 #4---------------------------------------------------------------------- LCC=$?; S0010C=$LCC; ((SCC+=LCC)); ((JCC+=LCC)); S0010R=1; alias goto=""; if ((S0010C != 0)) then logmsg2 "ERR: step#$JSTEP cgl100 abterm $SCC" alias goto="<<S9900=A"; fi goto #/=*.#PEND1 PGL100 #8====================================================================== S9000=A jobend51 #move any GDG files from jobtmp/GDG/subdirs to RUNDATA/subdirs logmsg2 "JobEnd=Normal, StepsExecuted=$XSTEP, LastStep=$JSTEP" exit 0 #jclunix51 ver:20110116 a1b2c0d3e2f3g1i1j0k3l20m4n3o0p0r0s0t1u1w0x0y1z0 #9====================================================================== S9900=A logmsg2 "JobEnd=AbTerm, JCC=$JCC,StepsX/L=$XSTEP/$JSTEP" RV ACK jobabend51 #report GDGs NOT moved from jobtmp/GDG/subdirs to outdirs exit $JCC
# stub_profile_cronlogdemo - file distributed in /home/uvadm/env/... # - to be copied to /home/appsadm/env/... # # Special version of profile to demo capturing logs from jobs run by cron # - defines RUNLIBS & RUNDATA as /home/mvstest/testlibs & testdata # ============================================================== # - see www.uvsoftware.ca/admjobs.htm#5I1 - 5K6 # # This stub_profile_cronlogdemo called directly by 'cronscript1' # - which is scheduled by 'crontab2' & 'crontabtest2' # - since 'cron' environment has NO profile to setup PATHs, etc # # Define RUNLIBS/RUNDATA & call common_profile export RUNLIBS=/home/mvstest/testlibs #<-- define for user 'mvstest' export RUNDATA=/home/mvstest/testdata . /home/appsadm/env/common_profile #<-- common_profile from $APPSADM/env #================================= # # We have dropped a lot of explanatory #cmts here in cronlogdemo version # - see explanatory #cmts in original /home/uvadm/env/stub_profile
# logcron1 - function to log msgs from jobs run by cron # - by Owen Townsend, April 23/2009 # # Prints to screen & append to file: $APPSADM/cronlog1/yymmdd_HHMMSS_$JOBID1 # - prefix messages with date_time:$JOBID1 # - use cron to remove $APPSADM/cronlog1/... older than 10 days ? # - calling script should define CRONDT & JOBID1 for output filename # # logcron1 "x---message---x" <-- sample command in calling JOBXXX # 051011_124700_JOBXXX x---msg---x <-- sample output (at 12:47 Oct 11/05) # function logcron1 { if [[ $CRONDT == "" ]]; then CRONDT=$(date +%y%m%d_%H%M%S); fi NOWDT=$(date +%y%m%d_%H%M%S) msg="${NOWDT}_$JOBID1: $1" print "$msg" # append msg to the cronlog1/file print $msg >>$APPSADM/cronlog1/${CRONDT}_$JOBID1 }
6A1. | joblog script for programmer test/debug |
- capture log for 1 job at a time |
6B1. | Console logging for production |
- capture all console activity for entire batch shift |
6C1. | Activating console logging |
- uncomment 4 lines at end of provided user profile | |
- setup subdirs to capture console logs for each user |
6D1. | Console log collection directories |
6E1. | Console logging demo/illustration |
- activate console logging, run JCL/script jgl100.ksh | |
- logoff & back on to process log file & show results | |
(vs un-processed log file) |
6S0. | scripts & uvcopy jobs used to process console logs |
6S1. | joblog script for programmer test/debug |
6S2. | logfixA - script to process console log |
- activated by user logoff/logon |
6S3. | logview - list logfiles & allow pick by number |
6S4. | logfixN - script to process console log for users who forgot to logoff |
- Nightly script scheduled by cron |
6S5. | logfixM - Month end script scheduled by cron |
- copies current month subdir log2 to log3 | |
& clears log2 for new month |
6U1. | logfix1 - uvcopy job to process logfiles (from log1 to log2) |
- removes screen control chars to allow viewing & printing |
'Console Logging' captures everything that happens on the screen, including operator commands & replies to prompts (highly recommended for production).
'Job Logging' captures the console output (only) for 1 job at a time, which is better for the programmers, because it can be inspected immediately.
'Console Logging' requires some setup (documented on the following pages). Until console logging is activated, anybody can use the 'joblog' script to capture the console log for 1 job at a time, for example:
#1. cdd <-- change to $TESTDATA superdir
#2. joblog jar100.ksh <-- use joblog to run script jar100.ksh ================== & capture console log in joblog/jar100.log
#3. uvlp12 joblog/jar100.log <-- print the log ========================
Script 'joblog' writes the output into subdir joblog/... under $RUNDATA (subdir 'joblog/' is created if not already present). It does not matter where you are when you run joblog, because the JCL/script is found via $RUNLIBS set in your profile,& the log is always written to $RUNDATA/joblog/...
The script captures the JCL/script screen displays into a file using the unix/linux 'tee' command. The log filename is created by dropping the '.ksh' extension from the jobname & then appending '.log'.
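The core of the technique is just a few lines of Korn shell. Here is a minimal sketch (the full 'joblog' script, listed later in this section, adds the usage check, creation of the joblog/ subdir,& the view/print/save prompt):

 jclksh=jar100.ksh                         # JCL/script name passed as arg1
 jf=${jclksh%.*}                           # drop the .ksh extension --> jar100
 jlf=$jf.log                               # append .log --> jar100.log
 $jclksh 2>&1 | tee $RUNDATA/joblog/$jlf   # run the job & copy stdout+stderr to the logfile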
You can see 'joblog' listed at ADMjobs.htm#6S1. You can see all scripts in /home/uvadm/sf/IBM/...
Most UNIX systems have a 'script' command which can be used to capture all activity on any terminal.
This would be especially important to mainframe users as a replacement for the system console log, but 'script' can be employed by any user to capture their own console log, for example:
script logfile ==============
All console I/O will be captured into the named file until you log off. To make effective use of this facility, I suggest the following:
1 - Start the script automatically as the last command in your .profile (this way you can not forget)
2 - Assign the script filename as the current date & time (using the $(date) command - see below)
3 - Setup a separate directory to hold your logfiles
4 - Accumulate your logfiles for a month (or whatever period suits you), in directory log1 for example. At the end of the month copy log1 to log2, back up log1 to tape or diskette, and remove all files from log1 for the new month (a minimal sketch of this rotation follows this list).
5 - You should login only once on the userid you intend to use for logging, because your current logfile would be destroyed by the 2nd login. (see logfixA & logview scripts listed later in this section).
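Here is a minimal sketch of the month-end rotation suggested in item 4 (directory names as suggested above; the tape device name is site-specific,& the site-customized version of this idea is the 'logfixM' script shown later in this section):

 cd $HOME                 # parent dir of your log directories
 rm -rf log2              # discard the copy kept from 2 months ago
 cp -rp log1 log2         # keep last month's logfiles in log2
 tar cvf /dev/st0 log1    # backup log1 to tape (or diskette)
 rm -f log1/*             # clear log1 for the new month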
The last command in your .profile might then be:
exec script log1/$(date +%y%m%d_%H%M%S) =======================================
exec script log1/060219_075200 - sample expansion
==============================
Your site administrator might want to keep log files for all users in 1 directory, with sub-directories for each user, in which case the following might be appropriate:
exec script $LOGDIR/log1/$LOGNAME/$(date +%y%m%d_%H%M%S) ========================================================
I suggest that LOGDIR=/home/appsadm (the site administrator's home dir). An export for LOGDIR is coded in the profile (listed in Part_1).
You activate console logging by uncommenting the 9 '##' lines marked near the end of the profile provided (in /home/uvadm/env/stub_profile_uv).
See the complete listing beginning on page '1C1'; here are the last few lines:
#-------------------------------------------------------------------------- # ** Console Logging - optional ** # - uncomment 9 '##' lines below to activate console logging # - must setup subdirs matching $LOGNAME in $APPSADM/log1/...,log2/...,log3/... # - subdirs log1,log2,log3 hold logfiles for: current file, month, lastmonth # - see details at www.uvsoftware.ca/admjobs.htm#Part_6 # - console logging for production operators to capture entire logon session # - programmers can use the 'joblog' script to capture log for 1 job at a time ## login1 || exit 2 # exit here if 2nd login ## logfixA $LOGNAME # process log1 file to log2 (to allow read/print) ## echo "--> logview <-- execute logview script to see prior console logs" ## echo "logging requires .bashrc/.kshrc with PS1='<@$HOST1:$LOGNAME:$PWD >'" ## echo "logging requires $LOGNAME subdirs in \$APPSADM/log1 & log2" ## if [[ -d $APPSADM/log1/$LOGNAME && ( -f .kshrc || -f .bashrc) ]]; then ## echo "script $APPSADM/log1/$LOGNAME/$(date +%y%m%d_%H%M%S)" ## exec script $APPSADM/log1/$LOGNAME/$(date +%y%m%d_%H%M%S) ## fi # 'exec script' must be the last non-comment line in the profile # 'script' disables aliases & umask 002 - put in .bashrc/.kshrc to be effective # ============================ # cp $APPSADM/env/kshrc .kshrc # copy to your homedir restoring correct name # ============================ #--------------------------- end of stub_profile ---------------------------
You also need to set up a subdir for each user, before they login for the 1st time after uncommenting the 9 '##' lines at the end of their profile.
The 'appsadm' user might set up the directories as follows:
#0. user appsadm login ---> /home/appsadm
#1. mkdir log1 log2 log3 <-- 1 time only
#2. mkdir log1/userxx log2/userxx log3/userxx ========================================= - setup subdirs for each user to be console logged
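If you have several users to set up, a short loop avoids repeating the mkdir commands (the usernames below are placeholders for your own console-logged userids):

 cd /home/appsadm
 for u in mvstest userxx useryy
 do mkdir -p log1/$u log2/$u log3/$u
 done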
See descriptions of log1,log2,log3 on the next page:
The script files are created as $LOGDIR/log1/$LOGNAME/yymmdd_HHMMSS
$LOGDIR is defined in profiles as: export LOGDIR=/home/appsadm & contains 3 logfile directories as follows:
log1 | - raw logfiles for the current/latest logon session of each user |
- written at login by the unix/linux 'script' command |
log2 | - current month's logfiles, processed by 'logfixA' (uvcopy job logfix1) |
- screen control characters removed to allow viewing & printing |
log3 | - last month's logfiles, copied from log2 by 'logfixM' on the 1st of |
each month (log2 is then cleared to re-accumulate for the new month) |
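For example, with console logging active for user 'mvstest', the tree under $LOGDIR would look something like this (the date_time stamped filenames are illustrative):

 /home/appsadm
     log1/mvstest/100328_135236    <-- raw logfile, current/latest session
     log2/mvstest/100328_015236    <-- processed by logfixA, current month
     log3/mvstest/100225_081502    <-- processed, last month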
'logfixA' in your profile performs the filtering. It is activated simply by logging off & back on; then use the 'logview' script to see the results.
The advantage of logfixA in the .profile is that you can see your filtered log files without waiting for the nightly cron job or running logfix manually.
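You can also run logfixA & logview by hand when needed. A usage sketch, based on the scripts' own comments (both are listed at '6S2' & '6S3' below):

 logfixA mvstest    <-- process /home/appsadm/log1/mvstest/* to log2/mvstest/*
 ===============
 logview mvstest    <-- list the processed logfiles & pick one by number to 'vi'
 ===============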
We assume you have performed the JCL conversions in JCLcnv1demo.htm#Part_3 (or VSEJCL.htm#Part_1).
We will demo a vital feature of the Vancouver Utility console logging system - removing screen control characters to facilitate viewing & printing.
After the preparations on this page, the demo on the next page will:
#1. Login as appsadm --> /home/appsadm
#2. setup subdirs to collect & process console log files as per page '6D1'.
#2a. mkdir log1/mvstest <-- captures log from current login session ================== #2b. mkdir log2/mvstest <-- collects processed logs for current month ================== - screen control characters removed for view/print
#3. exit
#4. Login as mvstest --> /home/mvstest
#5. activate console logging (if not already done as per page '6C1').
#5a. vi .bash_profile     <-- edit your profile
     ================       - uncomment the 9 '##' lines at end of profile (see page '6C1')
#6. exit
#1. Login as mvstest --> /home/mvstest
#2. cdl alias cdl='cd $RUNLIBS' --> /home/mvstest/testlibs ===
#3. vi jcls/jgl100.ksh <-- inspect JCL/script ==================
#4. cdd alias cdd='cd $RUNDATA' --> /home/mvstest/testdata ===
#5. jgl100.ksh <-- execute JCL/script ========== - see listing at JCLcnv1demo.htm#2D1 or '5K5' in this doc
#6. l gl <-- list I/O files in subdir gl/... ====
#7. exit
#8. Log back in to process log file - see 'logfixA' called near end of profile (listed on page '6C1')
#9. logview <-- list processed log files (/home/appsadm/log2/mvstest/...) ======= #9a. 1 <-- select latest file (file #1, numbering from latest) === - invokes 'vi' editor #9b. :q <-- quit vi to return to logview list === #9c. q <-- quit logview ===
<@:mvstest:/home/mvstest> cdl
<@:mvstest:/home/mvstest/testlibs> vi jcl3/jgl100.ksh [36m#!/bin/ksh ##JGL100 JOB 'LIST GL MASTER CHART OF ACCOUNTS'[0m [33m" if[0m JGL100 <--Note: 'vi' reduced to 5 lines [25;1H -->> MAX LOG LINES: vi=8, .ksh=8000, other=2000 <<--
<@:mvstest:/home/mvstest/testlibs> cdd
<@:mvstest:/home/mvstest/testdata> jgl100.ksh 100328:135304:JGL100: Begin Job=JGL100 100328:135304:JGL100: /home/mvstest/testlibs/jcls/jgl100.ksh 100328:135304:JGL100: Arguments: 100328:135304:JGL100: RUNLIBS=/home/mvstest/testlibs 100328:135304:JGL100: RUNDATA=/home/mvstest/testdata 100328:135304:JGL100: JTMP=jobtmp/JGL100 SYOT=sysout/JGL100 100328:135304:JGL100: RUNDATE=20100328 100328:135304:JGL100: ******** Begin Step S0010 cgl100 (#1) ******** 100328:135304:JGL100: gen0: ACCTMAS=gl/account.master_000003insize=13952 100328:135304:JGL100: gen+1: ACTLIST=jobtmp/JGL100/GDG/gl/account.acntlist_000004 gens=8 100328:135304:JGL100: file: SYSOUT=sysout/JGL100/S0010_SYSOUT bytes= 100328:135304:JGL100: Executing--> cobrun -F /home/mvstest/testlibs/cblx/cgl100 100328:135304:JGL100: Job Times: Begun=13:53:04 End=13:53:04Elapsed=00:00:00 100328:135304:JGL100: moving jobtmp/JGL100/GDG/subdir/files back to $RUNDATA/subdirs/ `jobtmp/JGL100/GDG/gl/account.acntlist_000004' -> `gl/account.acntlist_000004' 100328:135304:JGL100: EOF filr01 rds=5 upds=1 size=10240: /home/mvstest/testdata/ctl/gdgctl51I 100328:135304:JGL100: JobEnd=Normal, StepsExecuted=1, LastStep=S0010
<@:mvstest:/home/mvstest/testdata> l gl total 104 -rw-rw-r-- 1 mvstest apps 7303 Sep 29 08:41 account.acntlist_000001 -rw-rw-r-- 1 mvstest apps 7303 Mar 28 13:42 account.acntlist_000002 -rw-rw-r-- 1 mvstest apps 13952 Sep 27 10:50 account.master_000001 -rw-rw-r-- 1 mvstest apps 13952 Sep 27 10:51 account.master_000002 -rw-rw-r-- 1 mvstest apps 13952 Sep 27 10:52 account.master_000003 -rw-rw-r-- 1 mvstest apps 1600 Apr 23 2009 account.tran1 -rw-rw-r-- 1 mvstest apps 1600 Apr 23 2009 account.trans_000001 -rw-rw-r-- 1 mvstest apps 1600 Apr 23 2009 account.trans_000002 -rw-rw-r-- 1 mvstest apps 1600 Apr 23 2009 account.trans_000003
<@:mvstest:/home/mvstest/testdata> exit exit Script done on Sun 28 Mar 2010 01:53:31 PM PDT
Here are just the 1st few lines of the un-processed console log file, to show you how essential the log-processing step (logfixA & uvcopy job logfix1) is.
Note that the 'logfixA' script (calling uvcopy job logfix1) removes the garbage created by screen control characters. logfix1 recognizes 'vi' commands and reduces the output to the 1st 5 lines (compare the processed log on the previous page to the unprocessed log listed below).
vi /home/appsadm/log1/mvstest/100328_015236 =========================================== - attempt to view un-processed logfile directly in log1/mvstest/... - before processing to log2/mvstest/... - logfile name date/time stamped yymmdd_HHMMSS
Script started on Sun 28 Mar 2010 01:52:36 PM PDT <@:mvstest:/home/mvstest> cdl <@:mvstest:/home/mvstest/testlibs> vi jcl3/jgl100.ksh ![1;25r![?25h![?8c![?25h![?0c![27m![24m![0m![H![J![?25l![?1c![25;1H"jcl3/jgl100.ksh" 51L, 2391C![1;1H![1m![36m#!/bin/ksh ##JGL100 JOB (1234),'LIST GL MASTER CHART OF ACCOUNTS'![0m ![1m![33mexport![0m ![1m![36mJOBID2![0m=JGL100![1m![33m;![0m ![1m![36mscriptpath![0m=![1m![33m"![0m![1m![34m$0![0m![1m![33m";![ if![0m ![1m![31m[[![0m ![1m![33m-z![0m ![1m![33m"![0m![1m![34m$JOBID1![0m![1m![33m"![0m ![1m![31m]]![0m![1m![33m;![0m ![1m![33m integer![0m ![1m![36mJCC![0m=![1m![35m0![0m ![1m![36mSCC![0m=![1m![35m0![0m ![1m![36mLCC![0m=![1m![35m0![0m ![1m![36m # init st ![1m![33mautoload![0m jobset51 jobset52 jobend51 jobabend51 logmsg1 logmsg2 stepctl51 ![1m![33mautoload![0m exportfile exportgen0 exportgenp exportgenq exportgenr exportgenall ![1m![33mautoload![0m exportgen1 exportgen2 exportgen3 exportgenx jobset51 ![1m![36m # call function for JCL/script initialization![0m goto ![1m![36mS0000![0m=A -------- remaining lines removed, see processed logs on prior page ---------
If you attempt to edit or print the log files directly, you will have problems due to screen control escape sequences from COBOL program displays, vi editor sessions, etc. Another problem is extraneous voluminous data from various commands (ls, vi, cat, more, programs).
'logfix1' is a uvcopy job that solves these problems by copying the log files, dropping the escape sequences,& inserting LineFeeds as required to ensure that no lines are longer than 80 columns.
'logfix1' has options to reduce voluminous command output by scanning for the known symbol '<@' at the beginning of each user prompt. PS1 in the profile is modified accordingly, for example: export PS1='<@${PWD}> '
'logfix1' has options to set the maximum lines kept for each type of command: option 'v#' limits 'vi' output (for example 'v5' keeps only the 1st 5 lines), option 'j#' limits JCL/scripts (identified by the '.ksh' suffix; a large value such as 'j8000' is effectively unlimited),& option 'm#' limits all other commands. The distributed defaults are v8, j8000,& m2000.
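Normally 'logfixA' runs logfix1 for you, but you can also run it on a single captured logfile yourself. A sketch (run from /home/appsadm so the relative log1/log2 paths resolve; the filename is illustrative; option q0 suppresses the option-change prompt):

 uvcopy logfix1,fili1=log1/mvstest/100328_015236,filo1=log2/mvstest/100328_015236,uop=q0
 ========================================================================================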
Some of the logfix scripts & uvcopy jobs are listed on the following pages:
6S1. | joblog - capture console log for 1 job at a time (vs logon to logoff) |
- for programmers to see log immediately | |
vs production operators where we want to capture entire shift |
6S2. | logfixA - process log files from log1 to log2 to enable editing/printing |
- run by .profile, to see your latest logfile, just logout/login | |
& use the 'logview' script to vi by logfile sequence# | |
- new job as of June 98, recommended alternative to logfixN |
xxx: logfixB |
|
6S3. | logview - convenient script to list your logfiles & allow you pick desired |
file by number (1=latest). Calls the 'vi' editor. | |
- Short & long (A & B) versions of each file are available. | |
(short version truncates command responses to 3 lines) | |
- You could subsequently list a desired logfile, for example: | |
uvlp12 /home/appsadm/log2/uvadm/980808:1335A |
6S4. | logfixN - nightly cron to process any files from log1 to log2 |
- runs after killuser2 has killed any users who did not log off | |
- killuser2 closes their log files to prevent losing last buffer |
6S5. | logfixM - scheduled on the 1st of each month (by crontab5) |
- script copies log2 to log3 & cleans out log2 for the new month. |
6U1. | logfix1 - uvcopy job to process logfiles, bypasses 'vi' editor displays. |
- executed by logfixA script which is executed by your .profile. |
You can use various UNIX utilities to investigate the console log files, provided you explore the processed files in log2 (rather than the unprocessed files in log1).
vi | - view/edit a processed logfile in log2/... |
- (or use the 'logview' script to list & pick files by number) |
uvlp12 | - laser print a processed logfile at 12 cpi, for example: |
|
uvlp12 /home/appsadm/log2/uvadm/031115:092400 =============================================
# joblog - run a JCL/script & capture a log file (via tee) # - names the log file by dropping the .ksh & appending .log # - writes the logfile into subdir 'joblog' (creates it if not present) # - prompts for command to view/print (vi,cat,more,uvlp12,etc) # &/or optionally (save) with a date/time stamp # # This script intended only for test/debug when console logging not activated # - Console logging is better because it captures everything that happens, # including operator commands & replies to prompts. # - to activate console logging, see: www.uvsoftware.ca/admjobs.htm#Part_6 # jclksh="$1" # capture the script filename with extension (jclname.ksh) if [[ -f $RUNLIBS/jcls/$jclksh || -f $RUNLIBS/ksh/$jclksh ]]; then : else echo "usage: joblog jclname.ksh [args]" echo " =========================" echo " - arg1 must be a script in the PATH" exit; fi # # setup joblog directory pathname & make joblog subdir if not existing #Jan08/2014 - ensure joblog written to $RUNDATA, regardless of where run jld=$RUNDATA/joblog # setup joblog directory pathname if [[ ! -d $jld ]]; then mkdir $jld; fi # # create logfilename by dropping .ksh & appending .log jf=${jclksh%\.*} # drop extension .ksh from JCL/script filename jlf=$jf.log # add extension .log to create logfilename # $jclksh $* 2>&1 | tee $jld/$jlf #============================== # "$*" include all arguments following jobname.ksh echo "enter command to view, print, and/or save logfile" echo "logfile: $jld/$jlf" echo "--> vi,cat,more,uvlp12,etc, and/or 'save' to date/time stamp" read reply if [[ "$reply" == *save* ]]; then cp $jld/$jlf $jld/${jlf}_$(date +%y%m%d_%H%M%S); fi cmd=${reply%save*} if [[ -n "$cmd" ]]; then $cmd $jld/$jlf; fi exit 0 #
#!/bin/ksh # logfixA - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/ # logfixA - convert all script log files from log1 to log2 & remove log1 file # - removes control chars & limits 'vi' & other lengthy outputs # #usage: logfixA logsubdir #default: logfixA $LOGNAME <-- usually subdir=$LOGNAME # # - logfixA is run from the .profile at login time, to process the prior logfile # which was closed when you logged out, so you can display it via logview # - the last 3 lines in the .profile should be as follows: # # 1. login1 || exit 2 # prevents 2nd login (would destroy logfiles) # 2. logfixA $LOGNAME # process log1 file to log2 (to allow read/print) # 3. exec script $LOGDIR/log1/$LOGNAME/$(date +%y%m%d:%H%M%S) # # - logfixA allows only 1 login per userid (recommended for console logging) # - but see 'logfixB' if you really want to allow multiple logins per userid # - use the 'logview' script to view your logfiles (requires no arguments) # - logview lists your logfiles & prompts for sequence# of logfile to vi # - 1st logoff & back on if you want to see your latest activity # job=logfixA # setup jobname for echo msgs # ensure logfile parent directory defined & change to it if [ ! -d "$LOGDIR" ]; then echo "$job - LOGDIR undefined (usually = \$APPSADM)"; exit 9; fi cd $LOGDIR # change to parent directory of logfile subdirs #========= subdir="$1" # capture arg1 (default to $LOGNAME if not specified) echo "$job - convert $subdir log files for viewing & printing " ## if [[ -z "$subdir" ]]; then subdir=$LOGNAME; fi if [[ ! -d "log1/$subdir" ]]; then echo "usage: $job $subdir <-- arg1 must be subdir in $LOGDIR/log1/ & log2/" echo " ===========" exit 99; fi # echo "files in log1/$subdir listed below:" ls -l log1/$subdir for i in log1/$subdir/* do if [[ -s $i ]]; then f=${i##*/} uvcopy logfix1,fili1=$i,filo1=log2/$subdir/${f},uop=q0i7 #======================================================= let x=x+1 echo "file# $x $i converted to log2/$subdir/$f" # remove input file from log1, unless LOGFIXDEBUG=Y if [[ "$LOGFIXDEBUG" != "Y" ]]; then rm -f $i; fi fi done echo "$job - $x files converted from log1/$subdir to log2/$subdir for vi/uvlp12" echo "$job - use 'logview' script to list & view logfiles (no prmtrs reqd)" exit 0
#!/bin/ksh # logview - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/ # logview - list user's log2 files & view specified files # #usage: logview [ user ] <-- user defaults to $LOGNAME # ================ # # This script does an 'ls -l' of log2/user files # & prompts for seq# to 'vi' (counting backwards on ls -l list) # Repeats the ls -l & re-prompts until user enters 'q' # 2nd arg optional to print to specified destination (or default) # - must be no space between '-d' & printer destination # -> 1 - vi last log file, :q to quit vi & redisplay file list # -> 3 -d - print 3rd last log file to default printer # -> 3 -dlp09 - print 3rd last log file on lp09 # -> 0 - quit logview # if [[ "$1" = "" ]]; then user=$LOGNAME; else user="$1"; fi logdir=$LOGDIR/log2/$user # setup logdir for use below tmpd=$HOME/tmp if [ ! -d $tmpd ]; then mkdir $tmpd; fi typeset lognames[200] # array for up to 200 files integer fil=1 # force at least 1 list & prompt while (($fil > 0)) # if user reply > 0 do ls -l $logdir # show log files to user ls -1r $logdir > $tmpd/logfiles # logfile names in reverse order # read logfile names into an array integer n=1 exec 3< $tmpd/logfiles # open log while read -u3 logname # read current name do lognames[$n]=$logname # add current filename to array n=n+1 done exec 3<&1 # close file dscrptr 3 echo "enter 1,2,3,etc to vi logfile (counting backwards) 0 to quit" echo " - follow file# with -ddest to print for example--> 1 -dlpt1 " read fil prt # read users response fil=file#, prt dest (optional) if (( $fil < 1 || $fil > 9 )); then break; fi if [[ "$prt" = "" ]]; then vi $logdir/${lognames[$fil]} # vi specified file# else if [[ "$prt" == -d* ]]; then dest="$prt"; else dest=$UVLPDEST; fi if [[ "$dest" == *lpt* ]]; then I=i1; else I=i0; fi uvlist $logdir/${lognames[$fil]} p60$I t1c13 | lp -onobanner $dest fi done exit 0
#!/bin/ksh # logfixN - convert all script log files from $APPSADM/log1 to $APPSADM/log2 # # This 'logfixN' run by cron to fix logs of users who did not logoff # - part of 'nightly1' which is scheduled by 'crontab_appsadm1' # - these scripts & crontabs distributed by UVSI in /home/uvadm/sf/adm/... # - should be customized & stored for execution at /home/appsadm/sf/... # # Note - 'logfixA' is run by user profiles to fix logs during the day # - users logoff & backon to process logs, for viewing by logview script # - 'logfixN' run by crontab_appsadm1 fixes logs of users who did not logoff # - crontab_root has already KILLed users who did not logoff # #usage: logfixN #<-- no arguments required # ======= # # - removes control chars & inserts LFs to enable log file view & print # - copies all files for all users from log1/users/... to log2/users/... # - then removes all from log1/users/... # - this script is run by 'nightly1' which is scheduled by crontab_appsadm1 # # To initially setup logging for a user: # - setup a subdir matching his login/userid in log1/...,log2/...,log3/... # - remove '#' from last 4 lines of his .bash_profile (stub_profile_prod) # - see stub_profiles & common_profiles supplied in /home/uvadm/env/... # export APPSADM=/home/appsadm # define APPSADM cd $APPSADM # change to parent directory of logfile subdirs # JOBID=logfixN # setup scriptname for echos echo "$JOBID - convert script log files for editing & printing" echo "- convert all files from $APPSADM/log1/user/... to $APPSADM/log2/user/..." x=0 for d in log1/* do s=${d##*/} # capture subdir let x=x+1; y=0 # count subdirs & reset file ctr for subdir for f in $d/* # for each file in current subdir do if [[ -s $f ]]; then g=${f##*/} # capture file element name h=log2/$s/$g # setup output filename uvcopy logfix1,fili1=$f,filo1=$h # convert current logfile rm -f $f # remove file from log1 let y=y+1 echo "$x/$y - $f converted to log2/$s & deleted from $d" fi done done echo "$JOBID - $x subdirs converted from $APPSADM/log1 to $APPSADM/log2" exit
#!/bin/ksh # logfixM - Korn shell script from UVSI stored in: /home/uvadm/sf/adm/ # logfixM - console log file processing - called by monthly1 script # which is scheduled by CRON on the 1st of each month # - moves all log files (created by UNIX 'script' command) # from log2 to log3 & then removes all files from log2 # - UVSI suggests you setup userid 'appsadm' for aplctns admin # export APPSADM=/home/appsadm # define apps admin homedir #=========================== # $APPSADM should be defined in all user homedirs as the LOGDIR # - with subdirs as follows: # # log1 - daily log written here by UNIX script commands # - each user has a subdir containing his date/time stamped logfiles # for example: log1/$LOGNAME/yymmdd:HHMM # log1/owen/950615:0710 # log1/gordon/950615:1030 # log2 - current month log files # - converted by logfixA to allow editing & printing # - copied over to log3 on the 1st of each month # - then cleaned out for re-accumulation # log3 - last month log files # - provides log history up to 2 months ago (at end of mth) # cd $APPSADM # change to $APPSADM homedir JOBID=logfixM # setup JOBID for multi use below (easier to clone/rename) # echo "$JOBID - remove all subdirs/files from $APPSADM/log3" rm -rf log3/* # remove all subdirs & files from log3 echo "$JOBID - copy all subdir/files in $APPSADM/log2 to $APPSADM/log3" cp -rp log2/* log3 # copy all log2 subdirs & files to log3 echo "$JOBID - remove all files from all user subdirs in $APPSADM/log2" rm -f log2/*/* # remove all files from all log2 users subdirs echo "$JOBID - remove all files from all user subdirs in $APPSADM/log1" rm -f log1/*/* # remove all files from all log1 users subdirs exit 0 #
# logfix1 - uvcopy Parameter File from UVSI stored in: /home/uvadm/pf/util/ # logfix1 - fix the console log created by the 'script' command # - called by script 'logfixA' invoked at end of user profile's # #example: uvcopy logfix1,fili1=tmp/demolog1,filo1=tmp/demolog2 # ==================================================== # # - see console logging documented in ADMjobs.doc # 'script' is used to capture console logs for each user # - this job makes the log files much easier to use as follows: # - removes screen control 'escapes' that interfere with viewing/printing # (causing overprinting, long lines,& lost data) # - unprintable characters are translated to x'00' & squeezed out # - drop blank lines # # - option 'f' to remove screen control codes (in addition to escapes) # (drop data from 1st '[' on a line to the last '[' on a line) # - depend on unique PS1 prompt pattern in the profile # must have: export PS1='<@${PWD}> ' # # - option j# to limit '*.ksh' displays to spcfd max lines # - option v# to limit 'vi' displays to spcfd max lines # - option m# to limit all other command displays to spcfd max lines # Note - must have option f1 or f3 to activate options j,v,m # - option 'f0' processes all lines (ensures you dont lose anything) # opr='$jobname - fix script/console log, remove escapes & vi displays' opr='uop=q1f3j8000m2000v10 - option defaults' opr=' q0 - do not prompt to allow option changes' opr=' q1 - prompt to allow option changes' opr=' f0 - filter option off (show 1st 256 all lines)' opr=' f1 - filter option on - limit output for JCL,vi,other' opr=' f2 - drop vi escape codes' opr=' from 1st "[" on a line to last "[" on a line' opr=' j8000 - for "*.ksh" cmds - limit output to 8000 lines' opr=' v8 - for " vi " cmds - limit output to 8 lines' opr=' m2000 - for other cmds - limit output to 2000 lines' opr='options j,v,m require option f1+ & depend upon unique pattern in PS1' opr='export PS1="<@$PWD> " <-- PS1 in the profile must contain "<@" ID' opr='IE - option f0 disables optns j,v,m & all lines are processed' uop=q1f3j8000m2000v8 # option defaults was=g20000 # allow 20000 input area in case of long lines w/o LFs fili1=?input,rcs=16000,typ=LSTe1 #<-- see note in getr re optn e1/n1 filo1=?tmp/output,rcs=512,typ=LSTt #
@run opn all # setup translate table for later use to remove unprintable characters mvc p0(256),$trt neutral trslt tbl to w/s clr p0(32),x'00' clear ctls low clr p126(130),x'00' clear ctls high incldng tilde & del mvc p10(1),x'0A' restore LineFeed # # store prompt ID from arg1 & calc length, also with 1 LF preceding mvf c0(20),'<@' may change here if required scnr c0(20),>' ' scan back to 1st nonblnk for lth mvn $rc,$rx save dsp to LNB add $rc,1 +1 for length mvc c40(1),x'0A' insert 1 LF prior to prompt ID mvc c41(20),c0 mvn $rd,$rc to calc rep lth add $rd,1 +1 for 1 LF inserted # # transfer $uopbj,$uopbm,$uopbv to $cb1,$cb2,$cb3 for MAX LOG LINES msg mvn $cb1,$uopbj mvn $cb2,$uopbm mvn $cb3,$uopbv #
# begin loop to get/process/put records until EOF # getr subrtn gets lines up to 8000 bytes, but discards anything > 256 man20 bal getr get next line skp> man90 # # test option f1 to limit outputs for vi, JCL, other man22 cmn $uopbf,1 limit outputs for vi,JCL,other ? skp< man66 # # test for PS1 prompt & options to limit outputs depending on command man24 scne1z1 aa0(256),c0($rc20) PS1 prompt (usual '<@') skp! man60 man25 mvn $ca1,0 clear ctr lines for current PS1 mvn $ca2,$uopbm presume max lines for other cmds scne1 aa0(256),'> ' scan to end of PS1 prompt skp! man60 scn aa0(30),' vi ' ' vi ' within 30 bytes of prompt ? skp= man30 scn aa0(30),' logview ' or logview within 30 bytes of prompt ? skp= man30 scn aa0(30),'.ksh' JCL/ksh/script (within 30 bytes) ? skp= man34 skp man60 # # vi - set max lines from option v man30 mvn $ca2,$uopbv skp man60 # # JCL/script - set max lines from option j man34 mvn $ca2,$uopbj skp man60 # # common point to output & return to get next line # - output inhibited if line ctr for current cmd > max set at PS1 prompt man60 add $ca1,1 count lines since last PS1 prompt cmn $ca1,$ca2 vi lines > max ? skp> man20 skp= man70 man66 put filo1,a0(256) write current line to output log skp man20 return to get next # # Insert notes re max lines reached for vi, .ksh,& other man70 mvfv1 m0(80),'-->> MAX LOG LINES: vi=$cb1, .ksh=$cb2, other=$cb3 <<--' put filo1,m0(80) skp man20 # # end of file - close files & end job man90 cls all eoj #
#---------------------------- getr -------------------------------------- # getr - subrtn to get next line # - allow 16000 bytes & discard anything > 256 #Dec2003 - option LSTe1 chngd to n1 & n1 added to get in getr subrtn # - option e1/n1 inhibits errmsg if get data > op2 area # - option e1 left on LSTe1 for Henrico new logfix w/o new uvcopy getr getn1 fili1,g0(16000) optn n1 no errmsgstop if data > op2 skp> getr9 # # translate any remaining control chars to nulls & squeeze left mvc h0(80),g0 save 1st 80 for escape test below trt g0(8000),p0 trslt ctl chars to nulls sqz g0(8000),x'00' squeeze out nulls sqzc1 g60(8000),' ' squeeze multi blanks to 1 # sqzc1 above starts at col 60 to retain spacing for most lines, but # squeeze out multiple blanks for long lines (screen displays?) # # drop any blank lines cmc g0(256),' ' all blank line ? skp= getr # # ensure PS1 prompt (ID by <@) starts on a new line rep g0(256),c0($rc20),c40($rd22) # # optionally drop escape codes (escape itself x'1B' drop by above trt/sqz) # not perfect - drops from 1st '[' on line to last '[' on a line getr4 mvc a0(256),g0 presume option off & move all tsb o6(1),x'02' drop escape sequences ? skp! getr8 getr5 cmn $uopbf,2 drop escape codes [....[ ? skp< getr8 scn h0(80),x'1B' any escapes in current line ? skp! getr8 getr6 clr a0(512),' ' clear delivery area mvu a0(256),g0,'[' move until '[' found (if any) skp! getr8 mvn $ra,$rx save ptr to 1st [ in perm rgstr 'a' scnr g0(256),'[' scan from right for last '[' on line mvn $rg,$rx save ptr to last [ in perm rgstr 'g' mvc aa0(256),gg0 move rmndr of line (dropping [...[) # getr8 ret= getr9 ret> #-------------------------- end of logfix1 ------------------------------
7A1. | Introduction & overview |
7B1. | Listing files & directories sorted by name,date,size etc |
llm - sorted by filename, same as 'ls -l | more', saves keystrokes | |
llt - sorted by creation date, latest first | |
lls - sorted by file size, biggest first | |
... - several more, all scripts pipe to more, enter for next screen |
7C1. | scripts to Count Files, Lines,& KB in Directories |
1. cfl - Count Lines in 1 File | |
2. cfd - Count Files,Lines,KB in 1 Directory | |
3. cfdt - Totals-Only version of Count Files,Lines,& KiloBytes in a Directory | |
4. cfdpf - Count Files in a Directory with a Pattern [or not] in filenames | |
5. cfdpl - Count Files in a Directory with a Pattern [or not] on any line in any file | |
6. cfdd - Count Files,Lines,& KB in ALL Sub-Dirs in a Super-Directory | |
7. cfddt - Count Files,Lines,& KB in ALL Sub-Dirs in a Super-Directory Totals-Only | |
8. cfddf - Count Files,Lines,KB in ALL Sub-Dirs in a Super-Dir + 1st few files | |
9. cfdmm - List Directory: File#,Lines,Minsize,Maxsize,Minrec#,Maxrec#, Dir/Filename |
7D1. | rename - these scripts will rename all files in a directory |
saving hours of manual 'mv' commands | |
- 20 scripts to perform various conditional renames | |
- add/remove/change extensions, prefixes,& embedded patterns | |
- rename to UPPER case, lower case, etc |
7E1. | aliases - useful aliases for user profiles |
- alias rm,mv,cp to add option 'i' (are you sure) | |
- aliases for quick 'cd' to long frequently used pathnames |
7F1. | alldiff - powerful script to confirm the results of mass changes |
to entire directories of JCL/scripts or COBOL source programs | |
- it employs the marvelous unix/linux system 'diff' utility, | |
repeating it for each pair of files found in the directory. |
7G1. | dtree - draw directory tree from any specified starting directory |
- great for documentation |
7H1. | statlogin1 - create table summary of logins |
- number of logins for each user in past few months |
7I1. | devicemod1 - allow user access to tape & diskette |
- setup rc5.d init script to chmod 666 /dev/st0,nst0,sde | |
udev rules - rules for devices accessed by users | |
- sample file /etc/udev/rules.d/70-local.rules | |
- Red Hat recommended alternative to devicemod1 (see above) |
7I2. | /etc/rc.d/rc.local - Boot time startup script |
- easier than the awkward coding in rc4.d (S999xxx & K999xxx) | |
- my example to start Micro Focus license mgr | |
and change permissions on DAT tape devices |
7J1. | findowner - find files for a specified owner |
7J2. | findgrpnw - find files with No Group Write permissions |
- no group write permissions could cause scripts to fail | |
when a group of programmers are working on a common project |
7J3. | findgrpnwfix - find No Group Write perms & FIX |
7K1. | chmod1 - change permissions on entire directory trees using 'find' |
- may specify permissions for directories & files | |
- we recommend 775 for directories & 664 for datafiles | |
- uses the unix/linux 'find' command to process all subdirs & files | |
from a specified starting directory | |
- after this script you would have to manually fix any executable | |
'program' & 'script' files |
7K4. | chmod3 - change permissions on entire directory trees |
- looks for subdir names identifying programs or scripts | |
to set executable permissions on files within these subdirs | |
- using 'recursion' to process all levels of sub-directories | |
- specify permissions for files, directories,& programs/scripts |
7K8. | chmod_custom1 - script to be run by cron to fix permissions for batch jobs |
- nightly batch jobs could fail due to files with bad perms | |
- this script must run under a root crontab to change perms | |
but nightly batch jobs run under crontab owned by appsadm | |
(too dangerous to run application scripts with root privileges) | |
- this example hard-codes directories & permissions for reliability | |
(you would customize for your site) |
There are over 500 Korn shell scripts included in the Vancouver Utilities. After installation these will be in /home/uvadm/sf, which is organized into the sub-directories shown in the PATH below (extracted from the common_profile).
export UV=/home/uvadm
export PATH=$PATH:$UV/sf/adm:$UV/sf/demo:$UV/sf/util:$UV/sf/IBM
For Part 7 of ADMjobs, we have selected a few of the scripts that are most useful to unix/linux/uvadm administrators.
These scripts can save you a lot of time. They can do in seconds what could take you hours to do manually.
See many of these scripts listed at https://uvsoftware.ca/scripts1.htm =======================================================================
Note |
|
l |
|
llm |
|
lla |
|
llc |
|
lld |
|
llr |
|
llt |
|
lltr |
|
lls |
|
llsr |
|
llu |
|
lslp |
|
spreadA |
|
llc $UV/sf/util <-- list Vancouver Utility script filenames =============== - with File & Line counts (25 files/screen) llc /home/uvadm/sf/util <-- '$UV' usually /home/uvadm ======================= - but could be different at your installation
File# Lines 1 43 -rwxrwxr-x 1 uvadm apps 1929 Jan 20 17:29 sf/util/acum1 2 21 -rwxrwxr-x 1 uvadm apps 900 Jan 20 17:29 sf/util/allcancel 3 29 -rwxrwxr-x 1 uvadm apps 828 Jan 20 17:29 sf/util/allchmod --- 412 lines omitted --- 416 59 -rwxrwxr-x 1 uvadm apps 2729 Jan 20 17:29 sf/util/xvsesli2 416 files, 15878 total lines in directory sf/util
You can see a help screen for each script, by entering the script-name only, without its required arguments (omit the directory). See these scripts listed on pages https://uvsoftware.ca/scripts1.htm#3A1 - 3H1
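For example (the script chosen is arbitrary; they all behave the same way):

 cfd           <-- entered with no arguments, the script displays its help/usage screen
 ===
 cfd dat1      <-- entered with its required directory argument, it runs normally
 ========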
#0a. Login userxx ---> /home/userxx #0b. mkdir demo <-- make demo/ dir in your homedir #0c. cp -r $UV/demo/* demo <-- copy demo dirs/files to your demo/... dir #0d. cd demo <--- change into the demo/directory
#1. cfl $UV/src/uvcopy.c <-- count lines in 1 file (uvcopy source program) ==================== cfl /home/userxx/src/uvcopy.c - count lines in 1 file 24400 lines, 1072 KB in /home/userxx/src/uvcopy.c Report left in --> /home/userxx/demo/rpts/_home_userxx_src_uvcopy.c_cfl
#2. cfd dat1 <-- Count Files,Lines,& KiloBytes in the dat1/ directory ======== cfd dat1 - Count Files,Lines,KB in Directory File# Lines KB Directory/Filename 20190726:1027 1 335 32 dat1/CanadaMPs 2 13 4 dat1/CanadaProvinces 3 8 4 dat1/nameadrs1 4 305 28 dat1/UScities 5 539 40 dat1/UScongress 6 50 4 dat1/USstates ****** 1250 112 *Totals* in Directory /home/userxx/demo/dat1 Report left in --> /home/userxx/demo/rpts/dat1_cfd
#3. cfdt dat1 <-- Count Files,Lines,& KiloBytes in dat1/... Totals-Only ========= cfdt dat1 - Count Files,Lines,& KB in Directory - Totals-Only Files Lines KB Directory/Filename 20190726:1027 6 1250 112 *Totals-Only* in Directory /home/userxx/demo/dat1 Report left in --> /home/userxx/demo/rpts/dat1_cfdt
#4. cfdpf dat1 'Canada*' <-- Count Files,Lines,KB for filenames begining with 'Canada' ==================== cfdpf dat1 Canada* - Count Files,Lines,KB in Directory with Pattern [not] in filenames File# Lines KB Directory/Filename 20190726:1027 1 335 32 dat1/CanadaMPs 2 13 4 dat1/CanadaProvinces ****** 348 36 *Totals* in dat1 with pattern "Canada*" Report left in --> /home/userxx/demo/rpts/dat1_cfdpf_Canada_
** 5. cfdpl - Count Files,Lines,KB with/without Pattern any line any file in dir **
#5. cfdpl dat1 'Washington' <-- Count Files,Lines,KB with 'Washington' on any line in any file ======================= cfdpl dat1 Washington - Count Files,Lines,KB in Directory with Pattern [not] on any line in file File# Lines KB Directory/Filename 20190726:1027 1 305 28 dat1/UScities 2 50 4 dat1/USstates ****** 355 32 *Totals* in /home/userxx/demo/dat1 with pattern "Washington" Report left in --> /home/userxx/demo/rpts/dat1_cfdpl_Washington
#6. cfdd $UV/sf <-- Count Files,Lines,& KB in all Sub-Dirs of $UV/sf/... =========== cfdd /home/userxx/sf - Count Files,Lines,& KB in SubDirs of SuperDir Dir# Files Lines KB SubDir/ParentDirectory 20190726:1027 1 227 6502 944 /home/userxx/sf/adm 2 86 2501 352 /home/userxx/sf/demo 3 308 17102 1436 /home/userxx/sf/IBM 4 541 21323 2248 /home/userxx/sf/util ****** 1162 47428 4980 *Totals* for SubDirs of SuperDir /home/userxx/sf/... Report left in --> /home/userxx/demo/rpts/_home_userxx_sf_cfdd
#7. cfddt $UV/sf <-- Count Files,Lines,& KB in all Sub-Dirs of $UV/sf/... Totals-Only ============ cfddt /home/userxx/sf - Count Files,Lines,KB in SubDirs of Super-Dir - Totals-Only Dirs Files Lines KB SubDir/ParentDir 20190726:1027 4 1162 47428 4980 *Totals* for SubDirs of SuperDir /home/userxx/sf/... Report left in --> /home/userxx/demo/rpts/_home_userxx_sf_cfddt
#8. cfddf backup 2 <-- cfddf - for all Sub-Dirs in backup Dir + 1st 2 files of each subddir ============== cfddf backup 2 - Count Files,Lines,KB in SubDirs of SuperDir + 1st few files Dir# Files Lines KB SubDir/ParentDirectory 20190726:1027 1 8 1299 268 backup/archive 1 1299 28 backup/archive/CanadaMPs.csv 2 1299 4 backup/archive/ftpsall 2 6 1250 116 backup/dat1 1 1250 32 backup/dat1/CanadaMPs 2 1250 4 backup/dat1/CAprovinces 3 24 211 108 backup/dat2 1 211 4 backup/dat2/accents1 2 211 4 backup/dat2/accents2 ****** 38 2760 492 *Totals* for SubDirs of SuperDir /home/userxx/demo/backup/... Report left in --> /home/userxx/demo/rpts/backup_cfddf
#9. cfdmm dat1 <-- report Files,Lines,KB, Min/Max RecSize & RecNumbers in directory dat1/... ========== cfdmm dat1 - list File#,Lines,Minsize,Maxsize,Minrec#,Maxrec#,Dir/Filename File# Lines KB MinRsiz MaxRsiz MinRnum MaxRnum Directory/FileName 1 335 32 95 95 1 335 dat1/CanadaMPs 2 13 4 14 34 13 5 dat1/CanadaProvinces 3 8 4 70 75 6 7 dat1/nameadrs1 4 305 28 75 82 2 1 dat1/UScities 5 539 40 71 74 2 422 dat1/UScongress 6 50 4 13 23 15 40 dat1/USstates 6 1250 112 13 95 *Totals* in dat1 Report left in --> /home/userxx/demo/rpts/dat1_cfdmm
You can see the actual scripts listed in https://uvsoftware.ca/scripts1.htm. The following "+" links will go to scripts1.htm.
When you want to see a script's code, it is best to 'right-click' on the desired link & open it in a new tab, so you do not get lost in scripts1.htm without an easy way back.
5A1+ | v12 - help screen of scripts |
- to count Lines, Files,& KiloBytes in directories | |
- optionally qualified by specified patterns | |
- sample outputs for each script |
5B1+ | cfl - Count Lines in 1 File |
5C1+ | cfd - Count Files,Lines,KB in 1 Directory |
5D1+ | cfdt - Totals-Only version of Count Files,Lines,& KiloBytes in a Directory |
5E1+ | cfdpf - Count Files in a Directory with a Pattern [or not] in filenames |
5F1+ | cfdpl - Count Files in a Directory with a Pattern [or not] on any line in any file |
5G1+ | cfdd - Count Files,Lines,& KB in ALL Sub-Dirs in a Super-Directory |
5H1+ | cfddt - Count Files,Lines,& KB in ALL Sub-Dirs in a Super-Directory Totals-Only |
5I1+ | cfddf - Count Files,Lines,KB in ALL Sub-Dirs in a Super-Dir + 1st few files |
5J1+ | cfdmm - List Directory: File#,Lines,Minsize,Maxsize,Minrec#,Maxrec#, Dir/Filename |
You can see a help screen for each script, by entering the script-name only, without its required arguments (omit the directory). See these scripts listed on pages https://uvsoftware.ca/scripts1.htm#5B1 - 5E1
These scripts will rename all files in a directory, saving hours of manual 'mv' commands. We will present 20 scripts to perform various renames (UPPER case, lower case, add/remove/change extensions, prefixes,& patterns).
rename-A |
|
renameAA |
|
renameB |
|
renameL |
|
renameP |
|
rename-P |
|
rename+P |
|
renameParens |
|
renameU |
|
renameU1 |
|
renameX |
|
rename-X |
|
rename.X |
|
rename+X |
|
rename-X1 |
|
rename+X1 |
|
rename-X2 |
|
Note |
|
When we convert mainframe files to unix/linux, we will change filenames from UPPER case to lower case. This would be very laborious if we had to manually enter an 'mv' command for each file, but very quick using the 'renameL' script.
We will illustrate using just 3 filenames, but the directory could contain hundreds of long filenames.
#1. ls datadir ==========
E2123001.ITAXE.BANQTAXE E2123002.ITAXE.TAXATION E2123003.ITAXE.TRANSDAM
#2. mv E2123001.ITAXE.BANQTAXE e2123001.itaxe.banqtaxe <-- the HARD way mv E2123002.ITAXE.TAXATION e2123002.itaxe.taxation mv E2123003.ITAXE.TRANSDAM e2123003.itaxe.transdam
#2a. renameL datadir <-- the EASY way ===============
#3. ls datadir ==========
e2123001.itaxe.banqtaxe e2123002.itaxe.taxation e2123003.itaxe.transdam
Here is a listing of just 1 of the 'rename' scripts provided. You can see the others in /home/uvadm/sf/util/... You can also see most of these on the web site at: www.uvsoftware.ca/scripts1.htm
#!/bin/ksh
# renameL - Korn shell script from UVSI stored in: /home/uvadm/sf/util/
# renameL - rename an entire directory of filenames to lower case
#
echo "rename all filenames in subdir to lower case"
if [ -d "$1" ]; then :
else echo "usage: renameL directory <-- arg1 must be a directory"
     echo "       ================="
     exit 1; fi
#
reply="n"
until [ "$reply" = "y" ]
do echo "will rename all files in $1 to lower case OK ? y/n"
   read reply
done
#
x=0; y=0
for i in $1/*
do let x=x+1
   f=${i##*/}
   typeset -l g=$f
   if [[ $g != $f ]]; then
      mv -i $1/$f $1/$g
      let y=y+1
      echo "file# $y (of $x) $1/$f - renamed to: $1/$g"
   fi
done
echo "total $x files in ${1}, $y renamed to lower case"
exit 0
Here are 'alias' commands extracted from the common_profile which is listed on page '1C2'.
# alias UNIX commands to prompt for overwrite (highly recommended) # - use \rm, \mv, \cp, when you have many files & know what you are doing # - '\' tells UNIX to ignore the alias & use native UNIX command alias rm='rm -i' # confirm removes alias mv='mv -i' # confirm renames alias cp='cp -i' # confirm copy overwrites alias l='ls -l' # saves a lot of keystrokes alias cdd='cd $RUNDATA' # quick access to data dir alias cdl='cd $RUNLIBS' # quick access to libs (same as cd) alias cdc='cd $CNVDATA' # quick access to data conversion superdir
The 1st 3 (rm='rm -i',etc) are recommended for unix/linux beginners & experts since it is so easy to wipe out files unintentionally. When you do have multiple files to remove (using the '*' wildcard), you can disable the prompt by preceding the command with a backslash or using option '-f'.
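For example, when removing a batch of files with a wildcard, either form below skips the 'are you sure' prompt for each file (the paths are illustrative):

 \rm tmp/*.bak     <-- '\' bypasses the rm='rm -i' alias for this 1 command
 =============
 rm -f tmp/*.bak   <-- or option '-f' forces removal without prompting
 ===============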
The 'cdd', 'cdl',& 'cdc' aliases are a great convenience for your programmers & operators, because the recommended directory design has libraries & data in different filesystems,& the paths would be long & awkward to key often.
export TESTLIBS=/p1/apps/testlibs #<-- defs in common_defines export TESTDATA=/p1/apps/testdata export PRODLIBS=/p2/apps/prodlibs export PRODDATA=/p2/apps/proddata
export RUNLIBS=$TESTLIBS #<-- defs in .profile or .bash_profile export RUNDATA=$TESTDATA --- OR --- export RUNLIBS=$PRODLIBS export RUNDATA=$PRODDATA
alias cdd='cd $RUNDATA' #<-- defs in common_profile alias cdl='cd $RUNLIBS'
With the above aliases in your profile, you can switch between your libraries & data with 3 character commands:
cdd <-- change to your data files superdir === cdl <-- change to your library files superdir ===
The 'alldiff2' script will run the unix 'diff' utility on all pairs of files in 2 subdirs. The unix 'diff' command is a marvelous thing. It shows you the differences between any 2 text files.
I recommend you use 'alldiff2' whenever you make changes to the JCL converter or the control files (same applies to the COBOL converter & its search/replace).
First save the existing scripts as jcl3.old, remake jcl3, rerun the converter, & then run alldiff2 to verify that the changes made are the changes you intended & nothing else has gotten screwed up.
You can use the 'newold' script to change the name & make the new subdir.
#1a. Login as yourself or appsadm #1b. cdl ---> $TESTLIBS
#3. newold jcl3 <-- script to 'mv jcl3 jcl3.old' & 'mkdir jcl3' ===========
#4. jclxx41 jcl2 jcl3 <-- reconvert all JCL in jcl2 to ksh in jcl3 =================
#5. alldiff2 jcl3.old jcl3 <-- run alldiff2 script ====================== - saves jcl3.dif in the tmp subdir
#6. uvlp12 tmp/jcl3.dif <-- print the jcl3.dif file =================== - or vi, more, etc
37c37 < exportgen0 E211801 $TAPE/tu.f01.e211801.adrpos_ --- > exportgen0 E211801 $TAPE2/tu.f01.e211801.adrpos_ diff file# 10 - jcl3.old/... vs jcl3/24599j04.ksh
97c97 < exportgen1 E212990 $TAPE/tu.f01.e211801.adrpos_ --- > exportgen1 E212990 $TAPE2/tu.f01.e211801.adrpos_ diff file# 78 - jcl3.old/... vs jcl3/28401j04.ksh
2 different of 280 files compared jcl3.old to jcl3
Note |
|
#!/bin/ksh # alldiff2 - Korn shell script from UVSI stored in: /home/uvadm/sf/util/ # alldiff2 - script to compare all text files in 1 directory to a 2nd directory # - bypass non-text files & provide audit trail with file-counts # # alldiff variations: # alldiff - displays filename only if differences exist # alldiff1 - displays filename even if no differences #*alldiff2 - same as alldiff, but in addition # - redirects output to tmp/dir2.dif & prompts to view/print/etc # d1="$1"; d2="$2"; if [[ -d "$d1" && -d "$d2" ]]; then : else echo "USAGE: alldiff2 dir1 dir2" echo " ==================" exit 1; fi d2b=$(basename $d2) # get basename of dir2 (drop any preceding /path/...) log=tmp/$d2b.dif # make name for output log file >$log #init logfile in tmp subdir w same name as dir2 + .dif x=0; y=0; for i in $d1/* do let x=x+1 typ=$(file $i) if [[ $typ == *text* || $typ == *script* || $typ == *spreadsheet* ]]; then f=${i##*/} diff -b $d1/$f $d2/$f >>$log if [[ $? -gt 0 ]]; then echo "diff file# $x - $d1/... vs $d2/$f" >>$log echo " " >>$log let y=y+1 fi else echo " file# $x $i - NOT a text/script file" >>$log fi done lines=$(wc -l $log) # capture line count echo "$y different of $x files compared $d1 to $d2" >>$log echo "$y diff of $x files in $d1 & $d2, report is: $lines" echo "--> use uvlp12,uvlp14,uvlp16 to laser print at 12,14,16 cpi" echo "--> enter command (vi,cat,more,uvlp12,etc, or null)" read ans if [[ ! "$ans" = "" ]]; then $ans $log fi exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
The 'dtree' script will draw a directory tree from any specified starting directory (great for documentation).
For example, here are the 1st 40 lines of the dtree for /lib on my Red Hat system.
#1. dtree /lib >tmp/lib <-- create dtree diagram for /lib ===================
#2. more tmp/lib <-- display /lib dtree ============
/lib :-----evms :-----i686 :-----iptables :-----kbd : :-----consolefonts : : :-----partialfonts : :-----consoletrans : :-----keymaps : : :-----amiga : : :-----atari : : :-----i386 : : : :-----azerty : : : :-----dvorak : : : :-----fgGIod : : : :-----include : : : :-----qwerty : : : :-----qwertz : : :-----include : : :-----mac : : : :-----all : : : :-----include : : :-----sun : :-----unimaps :-----lsb :-----modules : :-----2.4.21-4.EL : : :-----kernel : : : :-----arch : : : : :-----i386 : : : : : :-----kernel : : : :-----crypto : : : :-----drivers : : : : :-----addon : : : : : :-----aep : : : : : :-----bcm : : : : : :-----cipe : : : : : :-----megarac : : : : : :-----qla2200 : : : : :-----block : : : : :-----cdrom
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# dtree - Korn shell script from UVSI stored in: /home/uvadm/sf/util/
# dtree - list a directory tree
#       - contributed by Howard Lobsinger (Peacock Engineering, Montreal)
#
#usage: dtree directory
#       ===============
#
D=${1:-`pwd`}
(cd $D; pwd)
find $D -type d -print | sort | sed -e "s,^$D,,"\
 -e "/^$/d"\
 -e "s,[^/]*/\([^/]*\)$,\:-----\1,"\
 -e "s,[^/]*/,: ,g" | more
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# statlogin1 - table summary of user logins by month & userid
#            - based on /var/log/messages
#            - by Owen Townsend, UV Software, Nov 26/2007
#
# uvcopy statlogin1,fili1=/var/log/messages,filo1=stats/login.rpt1
# ================================================================
# uvcopy statlogin1    <-- same & easier (files default as shown above)
# =================
#
#Note - above works if you have updated root's profile to run uvcopy
#     - if not, use following procedure:
#1. mkdir stats tmp            <-- make subdirs in your working directory
#2. su root                    <-- Switch User to root & enter password
#3. cp /var/log/messages tmp   <-- copy messages file to tmp subdir
#4. chmod 777 tmp/messages     <-- change permissions on messages file
#5. uvcopy statlogin1,fili1=tmp/messages,filo1=stats/login.rpt1
#   ===========================================================
#6. vi stats/login.rpt1        <-- view report
#7. uvlp12 stats/login.rpt1    <-- print report
#
# ** sample input /var/log/messages **
#
# Oct 23 06:11:51 uvsoft3 login[16341]: session opened for user laval4 by LOGIN(uid=0)
# Oct 23 06:11:51 uvsoft3 -- laval4[16341]: LOGIN ON tty5 BY laval4
# Oct 23 06:13:08 uvsoft3 ftpd[16342]: FTP LOGIN FROM 192.168.0.2, uvsoft2.uvsoft.ca (laval4)
# Oct 23 07:15:00 uvsoft3 login[16341]: session closed for user laval4
# Oct 23 07:15:23 uvsoft3 login[16516]: authentication failure; logname=LOGIN uid=0 euid=0 tty=tty5 ruser= rhost= user=mvstest
# Oct 23 07:15:33 uvsoft3 login[16516]: session opened for user mvstest by LOGIN(uid=0)
# Oct 23 07:15:33 uvsoft3 -- mvstest[16516]: LOGIN ON tty5 BY mvstest
# Oct 23 21:00:42 uvsoft3 shutdown: shutting down for system halt
#
#Note - the code (on next page) scans for 'session opened' & if found
#     - then scans for ' user ', extracts following word (userid)
#     - moves month (1st 3 bytes) & userid together for table argument
#     - see vital instructions 'tbl' (build table) & 'tbp' (print table)
#
# ** sample output report **
#
# statlogin1  2007/11/26_21:24:24  logins by month & userid
# tbl#001        pg#001            -argument-
#   line#  count    %   mth login
#       1     13    5   Nov efunds2
#       2     13    5   Nov laval4
#       3     23   10   Nov mvstest
#       4     27   12   Nov root
#       5     26   11   Nov uvadm
#       6     26   11   Nov uvbak
#       7      8    3   Oct efunds2
#       8      7    3   Oct laval4
#       9     10    4   Oct mvstest
#      10     11    4   Oct root
#            222*100    *TOTAL*
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
rop=r1                  # option to prompt for report disposition at EOJ
fili1=/var/log/messages,typ=LST,rcs=256
filo1=stats/login.rpt1,typ=LSTt,rcs=128
@run
        opn    all
#
# begin loop to get & process messages lines until EOF
man20   get    fili1,a0              get next line of messages
        skp>   man90                 (cc set > at EOF)
        sqzc1  a0(256),' '           ensure only 1 blank between words
#
# scan for 'session opened' & 'user' login
# table user logins by month & by month+day
        scn    a0(100),' session opened '
        skp!   man20
        scn    a0(100),' user '
        skp!   man20
        clr    b0(500),' '           clear workarea
        mvu    b0(25),ax6,' '        store user login until ending blank
        mvc    b100(3),a0            store mth (1st 3 bytes)
        mvc    b104(25),b0           follow with userid
        tblt1f4 b100(32),'mth login'
        skp    man20                 return to get next line
#
# EOF - dump tables, close files, prompt for report view (rop=r1), end job
man90   tbpt1  filo1,'logins by month & userid'
        cls    all
        eoj
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# devicemod1 - allow user access to some devices (tape & diskette)
#            - by Owen Townsend, June 2008
# This script distributed with Vancouver Utilities at
#   /home/uvadm/sf/adm/devicemod1
#
# Setup to run script at boot time as follows:
# 1. login as root                          --> /root
# 2. mkdir sf                               # make /root/sf (if not already present)
# 3. cp /home/uvadm/sf/adm/devicemod1 sf    # copy to /root/sf subdir
# 4. cd /etc/rc5.d                          # change to run level 5 init script directory
# 5. ln -s /root/sf/devicemod1 S99xxdevicemod1
#    =========================================
chmod 666 /dev/st0     # SCSI tape (rewind)
chmod 666 /dev/nst0    # SCSI tape non-rewind
chmod 666 /dev/sde     # diskette on USB
Red Hat recommends using 'udev rules' to set desired modes on devices (alternative to devicemod1 init script above).
The udev default rules are stored at /etc/udev/rules.d/50-udev.rules. Red Hat recommends that you do NOT change that file, but rather create a new file with overrides (see the sample '/etc/udev/rules.d/70-local.rules' listed below).
# 70-local.rules - by Owen Townsend, UV Software, June 2008
#                - allow user access to DAT tape & USB diskette
# /etc/udev/rules.d/50-udev.rules   <-- system default rules stored here
# /etc/udev/rules.d/70-local.rules  <-- this file overrides defaults 0660
#
KERNEL=="st0",   GROUP="disk", MODE="0666"
KERNEL=="nst0",  GROUP="disk", MODE="0666"
KERNEL=="st0m",  GROUP="disk", MODE="0666"
KERNEL=="nst0m", GROUP="disk", MODE="0666"
KERNEL=="sde",   GROUP="disk", MODE="0666"
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
RHEL 5 & 6 provide a better way to run boot time startup scripts (easier than the above example of calling S99xxdevicemod1 from /etc/rc5.d). You may now add your startup commands to /etc/rc.d/rc.local. Here is mine, with 2 items added on Feb27/2012.
#!/bin/sh
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
#
#Feb27/12/OT - start Micro Focus License mgr
cd /opt/microfocus/mflmf
sh mflmman
#
#Feb27/12/OT - allow users to access DAT tape
chmod 666 /dev/st0     # SCSI tape (rewind)
chmod 666 /dev/nst0    # SCSI tape non-rewind
#
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# findowner - find files owned by specified user
#
if [[ -d "$1" && -n "$2" ]]; then :
else echo "usage: findowner directory username"
     echo "       ============================"
     echo "example: findowner /u2/apps/data root"
     echo "         ============================"
     echo " - list files owned by root (& change manually to appsadm ?)"
     echo " - arg1 must be a directory, arg2 must be a useraccountname"
     echo " "
     echo "example2: findowner . root"
     echo "          ================"
     echo " - use '.' if you are above the subdirs to be searched"
     exit 1; fi
#
find $1 -user $2 -print
#======================
# - find all files owned by spcfd user & list (for manual change?)
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# findgrpnw - find files without group write permission
#
if [[ ! -d "$1" ]]; then
     echo "usage: findgrpnw directory"
     echo "       ==================="
     echo "example: findgrpnw /u2/apps/data"
     echo "         ======================="
     echo " - list files without group write permission"
     echo " - arg1 must be a directory, will search all files beneath"
     echo " "
     echo "example2: findgrpnw . "
     echo "          ============"
     echo " - use '.' if you are above the subdirs to be searched"
     exit 1; fi
#
find $1 ! -perm /g+w -exec ls -l {} \;
#=====================================
# - find all files without group write permission (for manual change?)
exit 0
#!/bin/ksh
# findgrpnwfix - find files without group write permission & add group write
#
if [[ ! -d "$1" ]]; then
     echo "usage: findgrpnwfix directory"
     echo "       ======================"
     echo "example: findgrpnwfix /u2/apps/data"
     echo "         =========================="
     echo " - find files without group write permission & fix"
     echo " - arg1 must be a directory, will search all files beneath"
     echo " "
     echo "example2: findgrpnwfix . "
     echo "          ==============="
     echo " - use '.' if you are above the subdirs to be searched"
     exit 1; fi
#
find $1 ! -perm /g+w -exec chmod g+w {} \;
#=========================================
# - find all files without group write permission & add group write perm
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
This script will change permissions on entire directory trees, using 'find' to process all levels of sub-directories. This script can save hours of manual investigation & correction.
For projects such as mainframe conversions, there is usually a team of programmers who must be able to read & write on a common set of directories & files (JCL/COBOL libraries & datafiles).
We recommend 775 for directories & 664 for files. This extends security to the 'group' level & all team members must be in the same group. Scripts would be 775 since they are files with the execute bit on.
This script solves a very significant problem that I frequently encounter when I arrive onsite to assist customer conversions.
If the site administrator did not initially setup the profiles with umask 002 (permissions 775/664 for directories/files), then the other programmers will be very frustrated when they attempt to work on the shared directories of JCL, COBOL,& DATA files.
If you are interested, you can see my recommended profiles in 'Part_1'. The profiles consist of 3 files (stub_profile, common_profile, & bashrc). 'umask' is specified in both the stub_profile & bashrc. Note that the stub_profile is copied to the homedirs & renamed '.bash_profile' & bashrc is copied & renamed as '.bashrc'.
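Here is a minimal sketch of the relevant lines only (the location of the common_profile is an assumption; use whatever your Part_1 setup specifies):

# .bash_profile (stub_profile copied to each user's homedir)
umask 002                          # create dirs as 775 & files as 664 (group write)
. /home/appsadm/common_profile     # assumed location of the site common profile

# .bashrc (bashrc copied to each user's homedir)
umask 002                          # repeated here so sub-shells (ex: console logging) keep it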
You must login or su to 'root' to run this script since it changes permissions. In the instructions below, I have included an 'export' to add the appsadm & uvadm script subdirs to root's PATH (or you could add this permanently in root's profile for future use).
#1. su root <-- switch to root
#2. export PATH=$PATH:/home/appsadm/sf:/home/uvadm/sf/adm ===================================================== - add to PATH, so root can find chmod1 (in either appsadm or uvadm)
#3. chmod1 directory dir-perms file-perms <-- command format =====================================
#3a. chmod1 directory 775 664 <-- recommended permissions ========================
#4. exit <-- exit from root asap ====
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
We have provided a test directory '/home/uvadm/tf/test_chmod' that you can use to test/demo the chmod1 script. If you don't have the root password, you could test in your own home dir as follows:
#1. login as yourself --> /home/userxx
#2. mkdir tmp1 <-- make a subdir to receive demo files ==========
#3. cp -pr /home/uvadm/tf/test_chmod tmp1 <-- copy test dirtree to tmp1/apps/... ===================================== - option 'p' Preserves current permissions - option 'r' (Recursive) copies all levels of sub-directories
#4. ls -lR tmp1 <-- display test dir tree BEFORE chmod1 ===========
tmp1: total 16 drwxr-xr-x 2 uvtest users 4096 Dec 22 17:32 data -rw-r--r-- 1 uvtest users 7 Sep 22 23:54 file1 drwxr-x--- 2 uvtest users 4096 Dec 22 17:29 programs drwxr-x--- 2 uvtest users 4096 Dec 22 17:29 scripts
tmp1/apps/data: total 8 -rw-r----- 1 uvtest users 11 Sep 22 23:54 datafile1 -rw-r----x 1 uvtest users 11 Dec 22 17:32 datafile2
#5. chmod1 tmp1 775 664 <-- execute chmod1 ===================
#6. ls -lR tmp1 <-- display test dir tree AFTER chmod1 ===========
tmp1: total 16 drwxrwxr-x 2 uvtest users 4096 Dec 22 17:32 data -rw-rw-r-- 1 uvtest users 7 Sep 22 23:54 file1 drwxrwxr-x 2 uvtest users 4096 Dec 22 17:29 programs drwxrwxr-x 2 uvtest users 4096 Dec 22 17:29 scripts
tmp1/apps/data: total 8 -rw-rw-r-- 1 uvtest users 11 Sep 22 23:54 datafile1 -rw-rw-r-- 1 uvtest users 11 Dec 22 17:32 datafile2
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# chmod1 - change permissions on subdirs & files under a specified superdir
#        - using 'find' to process all levels of directories & files
#        - need to manually add 'x' perms on any bin/* & script/* dirs
#        - by Owen Townsend, April 26/2006
#
# chmod1 directory dir-perms file-perms   <-- command format
# ======================================
# chmod1 directory 775 664                <-- recommended permissions
# =========================
#
# - also see alternatives chmod2 & chmod3 (to this *chmod1)
#*chmod1 - change perms on dirs & files, using 'find'
# chmod2 - change perms on dirs & files, using 'recursion'
# chmod3 - change perms on dirs & files, using 'recursion'
#        - sets 'x' perm by testing for known names of bin/ & script/ subdirs
#
# After running chmod1 or chmod2, you must manually fix permissions on
# executable programs & scripts via -->chmod 775 bin/*; chmod 775 scripts/*
#
# capture arguments & force perms integers
dir="$1";
typeset -i dperm="$2";
typeset -i fperm="$3";
#
# ensure arg1 is directory & length of perms are 3 digits
dpl=${#dperm}; fpl=${#fperm};
#
if [[ -d "$dir" ]] && ((dpl==3 && fpl==3)); then :
else echo "usage: chmod1 directory dir-perms file-perms"
     echo "       ====================================="
     echo "example: chmod1 dirxx 775 664"
     echo "         ===================="
     echo " - arg1 must be dir, args 2 & 3 must be 3 digits"
     exit 90; fi
#
echo -n "chmod1: set perms $dperm/$fperm from: $dir - enter to continue"
read reply
#
find $dir -type d -exec chmod $dperm {} \;
#=========================================
#
find $dir -type f -exec chmod $fperm {} \;
#=========================================
#
echo "chmod1: perms set $dperm/$fperm for all subdirs & files within: $dir"
exit 0
#
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
One problem with 'chmod1' is that files in any executable program or script directories will get permissions 664 & they must be 775 to execute.
'chmod3' is an alternate script provided for this problem, but it is not perfect. It looks for any subdirs whose names contain 'bin', 'program', 'script', or 'sf' (my preferred short name for Script File subdirs).
#1. login as yourself --> /home/userxx
#2. mkdir tmp3 <-- make a subdir to receive demo files ==========
#3. cp -pr /home/uvadm/tf/test_chmod tmp3 <-- copy test dirtree to tmp3/apps/... =====================================
#4. ls -lR tmp3 <-- display test dir tree BEFORE chmod3 ===========
tmp3: total 16 drwxr-xr-x 2 uvtest users 4096 Dec 22 17:32 data -rw-r--r-- 1 uvtest users 7 Sep 22 23:54 file1 drwxr-x--- 2 uvtest users 4096 Dec 22 17:29 programs drwxr-x--- 2 uvtest users 4096 Dec 22 17:29 scripts
tmp3/apps/data: total 8 -rw-r----- 1 uvtest users 11 Sep 22 23:54 datafile1 -rw-r----x 1 uvtest users 11 Dec 22 17:32 datafile2
tmp3/apps/programs: total 8 -rwxr-x--- 1 uvtest users 127 Dec 22 16:53 program1 -rwxr-x--x 1 uvtest users 127 Dec 22 17:29 program2
tmp3/apps/scripts: total 8 -rwxr-x--- 1 uvtest users 125 Dec 22 16:53 script1 -rwxr-x--x 1 uvtest users 125 Dec 22 17:29 script2
#5. chmod3 tmp3 775 664 775 <-- execute chmod3 =======================
#6. ls -lR tmp3 <-- display test dir tree AFTER chmod3 ===========
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
tmp3: total 16 drwxrwxr-x 2 uvtest users 4096 Dec 22 17:32 data -rw-rw-r-- 1 uvtest users 7 Sep 22 23:54 file1 drwxrwxr-x 2 uvtest users 4096 Dec 22 17:29 programs drwxrwxr-x 2 uvtest users 4096 Dec 22 17:29 scripts
tmp3/apps/data: total 8 -rw-rw-r-- 1 uvtest users 11 Sep 22 23:54 datafile1 -rw-rw-r-- 1 uvtest users 11 Dec 22 17:32 datafile2
tmp3/apps/programs: total 8 -rwxrwxr-x 1 uvtest users 127 Dec 22 16:53 program1 -rwxrwxr-x 1 uvtest users 127 Dec 22 17:29 program2
tmp3/apps/scripts: total 8 -rwxrwxr-x 1 uvtest users 125 Dec 22 16:53 script1 -rwxrwxr-x 1 uvtest users 125 Dec 22 17:29 script2
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# chmod3 - change permissions on subdirs & files under a specified superdir
#        - using 'recursion' to process all levels of subdirs
#        - adds 'x' perms on any bin/* & script/* dirs, by testing
#          for known names of bin & script subdirs
#        - by Owen Townsend, June 13/2006
#
# chmod3 directory dir-perms file-perms xfile-perms   <-- command format
# ==================================================
# chmod3 directory 775 664 775                        <-- recommended permissions
# =============================
# - also see alternatives chmod1 & chmod2 (to this *chmod3)
# chmod1 - change perms on dirs & files, using 'find'
# chmod2 - change perms on dirs & files, using 'recursion'
#*chmod3 - change perms on dirs & files, using 'recursion'
# After running this script, you must manually fix permissions on executable
# programs & scripts via -->chmod 775 bin/*; chmod 775 scripts/*
# - if you can't modify the script to recognize all your bin & script dirs
#
# capture arguments & force perms integers
dir="$1";
typeset -i dperm="$2";
typeset -i fperm="$3";
typeset -i xperm="$4";
# ensure arg1 is directory & length of perms are 3 digits
dpl=${#dperm}; fpl=${#fperm}; xpl=${#xperm};
if [[ -d "$dir" ]] && ((dpl==3 && fpl==3 && xpl==3)); then :
else echo "usage: chmod3 directory dir-perms file-perms xfile-perms"
     echo "       ================================================="
     echo "example: chmod3 dirxx 775 664 775"
     echo "         ========================"
     echo " - arg1 must be dir, args 2,3,4 must be 3 digits"
     exit 90; fi
echo -n "chmod3: Begin directory $dir - enter to continue"
read reply
integer nd=0 nf=0 nxf=0
for df in $dir/*
{ if [[ -d $df ]]
  then chmod $dperm $df; ((nd+=1))
  elif [[ $df == *script* || $df == *sf* || $df == *bin* || $df == *program* ]]
  then chmod $xperm $df; ((nxf+=1))
  else chmod $fperm $df; ((nf+=1))
  fi
  # if current entry in current dir is a subdir
  # - call this script again to process its files & subdirs
  # - using 'recursion' to process all existing levels of subdirs
  # - but do not call if directory is empty
  if [[ -d $df ]]; then
     ls $df >/tmp/chmod3
     if [[ -s /tmp/chmod3 ]]; then
        chmod3 $df $dperm $fperm $xperm
        #==============================
     fi
  fi
}
echo "chmod3: End dir $dir, subdirs=$nd, files=$nf, xfiles=$nxf"
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Note that the pattern test in chmod3 (listed above) identifies subdirs with executable files if their names contain 'bin', 'program', 'script', or 'sf'. You could change &/or add patterns depending on the conventions at your site.
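For example, here is a small stand-alone sketch of that test with one extra pattern added ('*jcls*' is only an illustration, for a site that keeps converted JCL/scripts in jcls subdirs):

#!/bin/ksh
# sketch - classify a path the way the chmod3 pattern test does, with '*jcls*' added
df="$1"
if [[ $df == *script* || $df == *sf* || $df == *bin* || $df == *program* || $df == *jcls* ]]
then echo "$df --> matches an executable pattern, chmod3 would apply xfile-perms (ex: 775)"
else echo "$df --> no match, chmod3 would apply file-perms (ex: 664)"
fi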
Also note that when you wish to change any script in /home/uvadm/sf/..., you should 1st copy it to /home/appsadm/sf/... and modify it there.
/home/appsadm/sf is in the PATH prior to /home/uvadm/sf, so your modified script will be found before the original uvadm script. This way, you will not lose your changes when you install a new version of uvadm.
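A minimal sketch of that copy-and-modify workflow, using chmod3 as the example (the /home/uvadm/sf/adm location is assumed from the chmod1 instructions earlier, & the PATH line shown is only an illustration of the ordering your common_profile should already provide):

cp /home/uvadm/sf/adm/chmod3 /home/appsadm/sf/    # copy the original before changing it
vi /home/appsadm/sf/chmod3                        # modify the appsadm copy for your site
chmod 775 /home/appsadm/sf/chmod3                 # scripts need the execute bit
# in the common_profile, appsadm/sf must precede uvadm/sf, for example:
# export PATH=/home/appsadm/sf:/home/uvadm/sf:/home/uvadm/sf/adm:$PATH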
If you are using 'console logging' (as documented in 'Part_6'), then you must ensure you have setup '.bashrc' with umask 002 in the user homedirs.
The umask 002 in the .bash_profile is lost when console logging is activated, because logging uses the unix/linux 'script' command, which starts another level of the shell. '.bashrc' solves this problem (see the listing on page '1C5').
Even if you are not currently a customer of UV Software, you are welcome to download scripts such as chmod3 that are listed in the web documentation.
Your web browser will allow you to save this page & you can then cut it out & store it in your scripts directory. Remember to change permissions to 775.
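For example (the saved file name & target directory here are hypothetical - adjust for your site):

# after cutting the chmod3 listing out of the saved web page into a file:
mv ~/chmod3.txt ~/sf/chmod3     # move it into your scripts directory (hypothetical paths)
chmod 775 ~/sf/chmod3           # restore execute permission (scripts must be 775)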
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Nightly batch jobs could fail due to files with bad permissions or group. Nightly batch jobs are scheduled by a crontab owned by 'appsadm' (see the crontabs in 'Part_5'). Files with bad permissions might be FTP'd to the site, or somebody may have used 'root' to copy a file & forgotten to fix permissions.
See the 'chmod_custom1' sample script (page '7K9'), which could be run before the nightly batch jobs to ensure all data directories/files have permissions 775/664 & group 'apps'. You could also reset the owner to 'appsadm' if you want to see who changed what files during the day (or reset the owner less frequently). This sample script has hard-coded directories & permissions for reliability; you would customize it for your site.
Note that 'root' should be used only when necessary (fixing permissions, etc). It is too dangerous to run application scripts with root privileges. Of course the chmod_custom1 script must be scheduled by a root crontab, but all batch jobs would be scheduled by 'appsadm' crontabs. And appsadm shares group 'apps' with all operators & programmers who access the data files.
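Here is a hypothetical root crontab entry for such a pre-batch cleanup (the time, script path,& log file are assumptions - see Part_5 for the crontab conventions used at conversion sites):

# root crontab entry (edit with 'crontab -e' as root) - hypothetical example
# min hour dom mon dow  command
30 18 * * 1-5 /home/appsadm/sf/chmod_custom1 /p2/apps/proddata >/home/appsadm/tmp/chmod_custom1.log 2>&1
# note - under cron there is no terminal, so the script's 'enter to continue'
#        prompts read end-of-file & fall through (or remove the prompts for cron use)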
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#!/bin/ksh
# chmod_custom1 - script to fix perms/owner/group in a directory tree
#               - fixes all subdirs & files at all levels down the tree
#               - manually restore execute perms on any script/program subdirs
#               - by Owen Townsend, UV Software, January 2010
# See hard-coded perms/owner/group in code further below
# This script must run as 'root', BUT do NOT forget to exit root after use
# - could schedule via a 'crontab' owned by root, see ADMjobs.htm#Part_5
# - assumes your root profile PATH modified to access UV scripts
#
# 1. Login as root
#
# 2. chmod_custom1 directory              <-- arg1 must be a directory
#
# 2a. chmod_custom1 /p2/apps/proddata     <-- examples
#     ===============================
# 2b. chmod_custom1 /p2/apps/prodlibs
#     ===============================
#
# 3a. chmod 775 /p2/apps/prodlibs/jcls    <-- restore execute perms on JCL/scripts
#     ================================
# 3b. chmod 775 /p2/apps/prodlibs/sf      <-- restore any other script/program dirs
#     ==============================
#
# 4. exit                                 <-- exit root
#
# Sample script to fix perms/owner/group on all subdirs/files in directory tree
# - with hard-coded perms/owner/group
#   (chown to appsadm optional & #commented out below)
# - might use as emergency fix if somebody used root to copy files
#
#Note - UV conversion sites should be run by users, NOT 'root' (too dangerous)
#     - root perms/group would cause batch jobs to fail (cron nightly jobs)
#     - all directories should be 775
#     - all files should be 664, except programs & scripts must be 775 (execute)
#     - user profiles umask 002 (vs dflt 022) allow group share read/write
#     - converted JCL/scripts run by users with recommended profile/umask
#       should always write directories/files with 775/664 permissions
#     - all users running scripts must be setup with common group (ex: appsadm)
#       so any user in the group has read/write access to directories & files
#
#Potential Problems:
# 1. Files imported (by email,FTP,CD,etc) may have wrong permissions
# 2. Somebody might operate as 'root' & create files with wrong perms & group
#
#VITAL - use 'root' ONLY to fix permissions,owners,groups on imported files
#      - NEVER use 'root' to create any files in the application directories
#        (will cause batch jobs to fail, possibly run by cron at night)
#
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# ** sample hard-coded script to fix perms/groups **
#
dperm="775"; fperm="664";   #<-- hard-coded perms for Dirs & Files
owner="appsadm";            #<-- hard-coded owner (optional may #cmt out below)
group="apps";               #<-- hard-coded group
dir="$1";                   #<-- capture directory from arg1
#
# ensure arg1 is a directory
if [[ ! -d "$dir" ]]; then
     echo "usage: chmod_custom1 directory          <-- arg1 must be a directory"
     echo "       ======================="
     echo "ex: chmod_custom1 /p2/apps/proddata     <-- example"
     echo "    ==============================="
     echo "DIRperms=$dperm, FILEperms=$fperm, owner=$owner, group=$group"
     exit 90; fi
#
echo "DIRperms=$dperm, FILEperms=$fperm, owner=$owner, group=$group"
echo "set perms, owner, group on $dir - enter to continue"
read reply
#
find $dir -type d -exec chmod $dperm {} \;
#=========================================
# - set perms ($dperm) on all subdirs in the directory tree
#
find $dir -type f -exec chmod $fperm {} \;
#=========================================
# - set perms ($fperm) on all files in the directory tree
#
chgrp -R $group $dir    #<-- set group on all subdirs/files in tree
#===================
#
chown -R $owner $dir    #<-- set owner on all subdirs/files in tree
#===================
#Note - might #comment out above chown (to see who created files)
#
echo "perms set for all subdirs & files within: $dir"
echo "DIRperms=$dperm, FILEperms=$fperm, owner=$owner, group=$group"
echo "Note - must restore execute perms on any program/script subdirs"
echo "     - examples below, could hard-code here ??"
echo "chmod 775 /p2/apps/testlibs/cblx/*   <-- COBOL programs"
echo "chmod 775 /p2/apps/testlibs/rpgx/*   <-- RPG programs"
echo "chmod 775 /p2/apps/testlibs/jcls/*   <-- JCL/scripts"
echo "chmod 775 /p2/apps/testlibs/sf/*     <-- other misc scripts"
echo "- enter to acknowledge & end script"; read reply
exit 0
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
8A1. | Sample Network (at UV Software) |
- 3 PCs on a LAN/router & DSL modem to ISP | |
- RHEL 5.1, RHEL 3.0,& Windows XP |
8A2. | /etc/hosts - convert Host-names to IP addresses on the LAN |
/etc/resolv.conf - specify the Domain Name Servers IP addresses | |
- DNS computers at my ISP convert host names to IPs for the internet WAN |
8A3. | setup router access to ISP |
8A4. | network-scripts (/etc/sysconfig/network-scripts/ifcfg-eth0) |
- setup static IP#s for computer, gateway,& DNS1/DNS2 |
8B1. | Lookup IP Addresses or Domain Names (reverse lookup) |
- using unix/linux command line tools such as nslookup, host,& dig | |
- using a GUI web browser, try sites such as whatismyipaddress.com. |
8C1. | using 'ping' to investigate communication problems |
8C2. | script 'pingall' to determine the IP#s used on your router |
8C3. | 'nmap' to determine the device or O/S at any given IP# |
8D1. | FTP - sample session, transfer files between my computers |
8E1. | SSH - sample session, unzip uvweb.zip previously FTP'd to the web site |
8F1. | PUTTY - SSH (Secure SHell) Terminal Emulator for Windows & Unix/Linux |
- free download from www.chiark.greenend.org.uk |
8G1. | SAMBA - Linux file-server for Windows PCs |
- sample samba configuration file |
8H1. | Investigate /var/log/dmesg bootup message file |
- to determine device name assigned to the DAT tape drive |
8I1. | Mounting USB memory devices |
Determining USB device name for the mount command, by investigating | |
/dev/..., /var/log/messages, & /var/log/dmesg |
8J1. | Unix/Linux system log files |
- /var/log/messages, dmesg, utmp, wtmp | |
Commands to access log file information | |
- who, w, finger, last, lastlog, utmpdump | |
8J2. | Sample outputs from: who, w,& finger |
8J3. | Sample outputs from: last & lastlog |
8J4. | using 'utmpdump' to convert /var/run/utmp (binary file) to an ASCII file |
- followed by uvlist filter to reduce multi-blanks to fit lines on screen |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
8K1. | using 'uvhd' to investigate /var/log/wtmp |
8K2. | search all login records for userid 'uvadm' |
8K3. | select all uvadm login records to a separate file |
8K4. | convert the separate file to a text file (using utmpdump) |
8L1. | Disc Monitoring (df, du, statdir1) |
8L2. | System Information (free, uname) |
8M1. | Killing hung-up jobs (ps & kill demo) |
8N1. | Running jobs in the BackGround |
--> sleep 100 & <-- sleep 100 seconds | |
--> jobs (status), fg %1 (foreground), ^Z (background), bg %1 (restart) |
8N2. | testjobs1 - script to test/demo running jobs in the background |
- displays msg every 15 seconds, try jobs, fg %1, ^Z, bg %1, kill |
8O1. | Messaging (wall, write, mail) |
8P1. | TOP - Unix/Linux system performance analysis tool |
8Q1. | meminfo - how to determine system memory & usage |
8R1. | msmtp - send email from scripts scheduled by cron at night, |
to managers at home, to alert them of serious errors. |
8S1. | sending unix/linux PCL files to a network printer from Windows |
- create PCL files on unix/linux & download to Windows with 'winscp' | |
- assign a network printer to DOS LPT1 | |
net use lpt1 \\computername\printername /persistent:yes | |
======================================================= | |
- copy print file to network printer with the '/b' (binary) option | |
copy /b filename.pcl LPT1: | |
========================== |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
As an example, I will describe the small network I use at UV Software (3 PC's on a LAN/router with a DSL modem to my ISP).
google.com yahoo.com msn.com etc.com | | | | --------------- Internet --------------- | | Web server ------- DNS server ------- Mail server uvsoftware.ca uniserve.com owen@uvsoftware.ca (my ISP) | WAN DNS Dynamic IPs | ========================================================================= LAN No DNS static IPs | | modem | ------------------------- router --------------------------- | | 192.168.0.1 | | | | | | 192.168.0.4 192.168.0.3 192.168.0.2 192.168.0.101 HP XW9400 HP Kayak HP Pavilion Lexmark T642 RHEL 5.1 RHEL 3.0 Windows XP 45 ppm Laser
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# /etc/hosts - on 'uvsoft4' RHEL 5.1 at UV Software 2008
# Do not remove the following line, or various programs
# that require network functionality will fail.
::1            localhost6.localdomain6 localhost6
127.0.0.1      localhost
192.168.0.4    uvsoft4  uvsoft4.uvsoftware.ca
192.168.0.3    uvsoft3
192.168.0.2    uvsoft2
192.168.0.1    gateway  router
nameserver 216.113.192.5    <-- old Uniserve nameservers
nameserver 216.113.192.6
nameserver 4.2.2.1    <-- nameservers after switch to Webfaction
nameserver 4.2.2.2
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Use a web browser (FireFox or IE) to setup the router to access your ISP. Here is the procedure for my D-Link router:
1. | enter https://192.168.0.1 in the web browser address |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# /etc/sysconfig/network-scripts/ifcfg-eth0
# - on xw9400 RHEL 6 at UV Software, Dec 18/2011
# - did not work as setup by system -->admin --> preferences --> network connections
# I edited this file - for 'static' protocol (#commented out NM_CONTROLLED & ONBOOT)
# - assigning IP addresses for: computer, gateway,& DNS1/DNS2
DEVICE="eth0"
# NM_CONTROLLED="yes"
# ONBOOT="no"
ONBOOT=yes
TYPE=Ethernet
# BOOTPROTO=none
BOOTPROTO=static
IPADDR=192.168.1.4
# PREFIX=24
GATEWAY=192.168.1.254
DNS1=192.168.1.254
DNS2=8.8.8.8
# DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
HWADDR=00:1C:C4:18:61:60
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
You can look up IP Addresses or Domain Names (reverse lookup) using the unix/linux command line utilities nslookup, host,& dig. On a GUI browser, you could use web sites such as whatismyipaddress.com.
nslookup uvsoftware.ca ======================
Server:         4.2.2.1
Address:        4.2.2.1#53
Non-authoritative answer:
Name:   uvsoftware.ca
Address: 174.133.234.43
This tells me that my domain name (uvsoftware.ca) IP address is 174.133.234.43. It also tells me that the DNS server answering the query is 4.2.2.1 (on port 53).
My IP address is dynamically assigned by my ISP, but might never change if I never turn off my router & modem. Note my computers on my LAN address the router/gateway as 192.168.0.1, but my ISP addresses my router from the outside world as 174.133.234.43.
nslookup 174.133.234.43 =======================
Server:         4.2.2.1
Address:        4.2.2.1#53
Non-authoritative answer:
43.234.133.174.in-addr.arpa     name = 2b.ea.85ae.static.theplanet.com.
My ISP is webfaction.com, which contracts its web servers from "The Planet" in Dallas, USA. I highly recommend webfaction.com if you are looking for an ISP that lets you use all the unix/linux tools (ssh, sftp, etc).
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
host uvsoftware.ca ==================
uvsoftware.ca has address 174.133.234.43
uvsoftware.ca mail is handled by 10 mx8.webfaction.com.
host 174.133.234.43 ===================
43.234.133.174.in-addr.arpa domain name pointer 2b.ea.85ae.static.theplanet.com.
"reverse lookup" does not give "UV Software" because I use an ISP webfaction.com who contract their services from "The Planet".
dig uvsoftware.ca =================
; <<>> DiG 9.3.4-P1 <<>> uvsoftware.ca
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38793
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;uvsoftware.ca.                 IN      A
;; ANSWER SECTION:
uvsoftware.ca.          3438    IN      A       174.133.234.43
;; Query time: 31 msec
;; SERVER: 4.2.2.1#53(4.2.2.1)
;; WHEN: Sat Oct 29 10:52:43 2011
;; MSG SIZE  rcvd: 47
dig uvsoftware.ca +short ========================
174.133.234.43
dig -x 174.133.234.43 +short ============================
2b.ea.85ae.static.theplanet.com.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
'ping' is usually the 1st tool used to investigate problems communicating with other computers. For example if I had problems FTP'ing or browsing to my web site I might use the following ping command:
ping uvsoftware.ca ==================
PING uvsoftware.ca (216.113.194.1) 56(84) bytes of data.
64 bytes from www6.uniserve.com (216.113.194.1): icmp_seq=1 ttl=61 time=7.27 ms
64 bytes from www6.uniserve.com (216.113.194.1): icmp_seq=2 ttl=61 time=8.41 ms
64 bytes from www6.uniserve.com (216.113.194.1): icmp_seq=3 ttl=61 time=7.75 ms
  --- control-C to interrupt ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 7.275/7.814/8.414/0.466 ms
If the above ping using the host-name failed, then I would use the IP address (to see if the Domain Server might be down vs a connection problem).
ping -c2 216.113.194.1 ======================
PING 216.113.194.1 (216.113.194.1) 56(84) bytes of data.
64 bytes from 216.113.194.1: icmp_seq=1 ttl=61 time=7.52 ms
64 bytes from 216.113.194.1: icmp_seq=2 ttl=61 time=7.84 ms
  --- Count option '-c2' quits after 2 pings ---
2 packets transmitted, 2 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 6.924/7.430/7.840/0.380 ms
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
'pingall' is a script to determine the IP#s used on your router. The following will scan all IP#s from 192.168.0.1 to 192.168.0.8 & report which IP#s are alive. The script is listed following the sample report below. You could cut & paste it if you do not have Vancouver Utilities installed.
pingall 192.168.0 1 10 ======================
ip=192.168.0.1 <-- alive
ip=192.168.0.2 <-- alive
ip=192.168.0.3 <-- alive
ip=192.168.0.4 <-- alive
ip=192.168.0.5
ip=192.168.0.6
ip=192.168.0.7
ip=192.168.0.8
pingall ended --> 4 alive
# pingall - ping a range of IP addresses
#         - by Owen Townsend, UV Software, Dec 2012
# using options to speed up ping
#   -c1 - sends 1 ping only
#   -W1 - wait only 1 second for response
#note - can not kill this script with control-C
#     - kill from a root login, 'ps -u userid' to get process# to kill
ip0="$1"; ip1="$2"; ip2="$3"
if [[ $# -ne 3 ]]; then
   echo "ping range of IP addresses to detect devices on your router"
   echo "example: pingall 192.168.0 1 10"
   echo "         ======================"
   echo "- ping from 192.168.0.1 to 192.168.0.10"
   echo "- arg1=1st 3 nodes, arg2=4th node start, arg3=4th node end"
   echo "- router 1st 3 nodes might be 192.168.1.__ or 10.10.0.__"
   echo "- use 'nmap -O 192.168.0.__' to determine device at IP#__"
   exit 99; fi
ip=$ip1; ok=0
>tmp/pingall.log
until [[ $ip -gt $ip2 ]]
do ping -c1 -W1 -t1 $ip0.$ip > /dev/null 2> /dev/null
   if [ $? -eq 0 ]; then
      echo "ip=$ip0.$ip <-- alive"
      echo "ip=$ip0.$ip <-- alive" >>tmp/pingall.log
      ((ok+=1))
   else
      echo "ip=$ip0.$ip "
      echo "ip=$ip0.$ip " >>tmp/pingall.log
   fi
   ((ip+=1))
done
echo "pingall ended --> $ok alive"
echo "pingall ended --> $ok alive" >>tmp/pingall.log
echo "- results also captured in file: tmp/pingall.log"
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
'nmap' is a network exploration utility with many options (see the man pages). You will probably have to download it from the internet. Here is how I downloaded it on my Red Hat Linux system.
yum install nmap <-- download & install nmap =================
Here is an example to determine the OS (device) at IP# 192.168.0.3 - using option 'O' (UPPER case alpha letter). You need to be root to use option 'O'.
nmap -O 192.168.0.3 ===================
Starting Nmap 5.51 ( https://nmap.org ) at 2012-12-18 17:43 PST
Nmap scan report for 192.168.0.3
Host is up (0.0024s latency).
Not shown: 999 closed ports
PORT   STATE SERVICE
80/tcp open  http
MAC Address: 68:7F:74:5B:46:C1 (Cisco-Linksys)
Device type: VoIP adapter
Running: Sipura embedded
OS details: Sipura SPA-1001 or SPA-3000 VoIP adapter
Network Distance: 1 hop
OS detection performed. Report incorrect results at https://nmap.org/submit/
Nmap done: 1 IP address (1 host up) scanned in 2.27 seconds
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Here is a sample FTP session to transfer a file between my 2 Linux computers on my LAN (3 PCs & a router). I captured the session by turning on 'console logging' as explained in Part_6.
Script started on Sat 14 Jun 2008 09:29:07 AM PDT
<@:owen:/home/owen> ftp 192.168.0.3 =============== Connected to 192.168.0.3. Name (192.168.0.3:owen): owen 331 Password required for owen. Password: 230 User owen logged in. Remote system type is UNIX. Using binary mode to transfer files.
ftp> put stub_profile ================ local: stub_profile remote: stub_profile 200 PORT command successful. 150 Opening BINARY mode data connection for stub_profile. 226 Transfer complete. 8538 bytes sent in 8.2e-05 seconds (1e+05 Kbytes/s)
ftp> dir === 200 PORT command successful. 150 Opening ASCII mode data connection for /bin/ls. total 60 -rw------- 1 owen 349 Jun 13 20:50 .bash_history -rwxrwxr-x 1 users 8858 Apr 26 14:17 .bash_profile -rw-r----- 1 owen 8538 Jun 14 09:26 stub_profile drwxrwxr-x 2 owen 4096 Jun 13 12:35 tmp 226 Transfer complete.
ftp> bye === 221 Goodbye. <@:owen:/home/owen> exit ==== Script done on Sat 14 Jun 2008 09:30:48 AM PDT
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Script started on Sat 14 Jun 2008 05:16:01 PM PDT
<@:owen:/home/owen> ftp www.uvsoftware.ca ===================== Connected to uvsoftware.ca. 220 www6.uniserve.ca FTP server ready Name (www.uvsoftware.ca:owen): wd-uvsoft 331 Password required for wd-uvsoft. Password: 230 User wd-uvsoft logged in. Remote system type is UNIX. Using binary mode to transfer files. ftp> cd public_WWW ============= 250 CWD command successful. ftp> put uvweb.zip ============= local: uvweb.zip remote: uvweb.zip 150 Opening BINARY mode data connection for uvweb.zip 226 Transfer complete. 2765834 bytes sent in 38 seconds (72 Kbytes/s) ftp> dir === 150 Opening ASCII mode data connection for file list -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 465893 Jun 13 19:07 admjobs.htm -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 483264 Jun 13 19:07 cmpjobs.htm -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 340703 Jun 13 19:07 cnvaids.htm - - - 70 lines omitted - - - -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 2765834 Jun 14 17:15 uvweb.zip - - - 10 lines omitted - - - -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 132637 Jun 13 19:07 windowsdos.htm -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 47649 Jun 13 19:07 wordjobs.htm -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 77710 Jun 13 19:07 xrefjobs.htm 226 Transfer complete. ftp> bye 221 Goodbye.
<@:owen:/home/owen> exit Script done on Sat 14 Jun 2008 05:18:27 PM PDT
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
To reload my web-site I first create a 'zip' file (about 100 HTML documents). Next I FTP the zip file into the public_WWW directory for www.uvsoftware.ca. Then I need to 'unzip' the zip file.
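The zip step itself is just the following (the directory holding the HTML documents & the file pattern are assumptions about my setup):

cd /home/uvadm/doc       # hypothetical directory holding the ~100 HTML documents
zip uvweb.zip *.htm      # package them into one zip file for the FTP upload
#===================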
My ISP does not allow 'TELNET', but does allow 'SSH' (which is more secure). Here is the SSH session captured by console logging (see Part_6).
Script started on Sat 14 Jun 2008 09:34:16 AM PDT
<@:owen:/home/owen> ssh wd-uvsoft@www.uvsoftware.ca Password: wd-uvsoft@www6$ ls -l ===== total 3 lrwxr-xr-x 1 root wheel 27 Aug 2 2003 public_CGI -> /u/www/public_CGI/wd-uvsoft drwxr-x--x 4 wd-uvsoft wd-uvsoft 3072 Jun 14 19:10 public_WWW wd-uvsoft@www6$ cd public_WWW ============= wd-uvsoft@www6$ unzip uvweb.zip =============== wd-uvsoft@www6$ ls -l ===== total 16435 -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 465893 Jun 14 19:07 admjobs.htm -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 483264 Jun 14 19:07 cmpjobs.htm -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 340703 Jun 14 19:07 cnvaids.htm - - - many lines omitted - - - -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 132637 Jun 14 19:07 windowsdos.htm -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 47649 Jun 14 19:07 wordjobs.htm -rw-rw-r-- 1 wd-uvsoft wd-uvsoft 77710 Jun 14 19:07 xrefjobs.htm
wd-uvsoft@www6$ exit ==== logout Connection to www.uvsoftware.ca closed.
<@:owen:/home/owen> exit ==== Script done on Sat 14 Jun 2008 09:38:32 AM PDT
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
'putty' is a free terminal emulator for Windows, written by Simon Tatham, that may be downloaded from https://www.chiark.greenend.org.uk.
'putty' uses the SSH (Secure SHell) protocol, which is more secure than 'telnet'. We recommend putty for mainframe conversion projects where programmers & operators will be using Windows PCs online to a Unix/Linux system.
https://www.chiark.greenend.org.uk/~sgtatham/putty/ | |
==================================================== | |
- putty.zip <-- download | |
- putty-0.60-installer.exe <-- download |
#3. putty-0.60-installer.exe <-- execute ======================== - installs the following programs into C:\program files\putty\...
putty.exe | - the SSH/Telnet terminal emulator itself |
pscp.exe | - command line secure file copy (SCP) |
psftp.exe | - command line secure FTP (SFTP) client |
plink.exe | - command line connection tool (for use in scripts) |
pageant.exe | - SSH authentication agent (holds your keys in memory) |
puttygen.exe | - utility to generate SSH key pairs |
The install creates a putty icon on the desktop & adds menu items to programs.
control panel --> system --> advanced --> environmental variables
system variables --> PATH --> edit append ';C:\program files\putty'
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Executing putty (double click putty icon on desktop) displays the configuration screen. I have listed the menu below with items to be modified in UPPER case.
Session - Logging Terminal - Keyboard, Bell. Features Window - APPEARANCE, Behaviour, Translation, Selection, COLOURS Connection - Data, Proxy, Telnet, Rlogin, SSH, Serial
HOST-NAME or IP ADDRESS ______________ <-- enter '192.168.0.4' Port: 22 Connection Type: raw, telnet, rlogin, *SSH, serial Saved session: _______________________ <-- enter 'uvsoft4' Load, Save, Delete <-- enter Save, then Load Open, Cancel <-- enter Open
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
The default login screen is small, black & white, and ugly. We can make putty much more pleasant to work with as follows:
2a. click Appearance --> Change
2b. select Courier, Bold, 14 point
3a. click Colours --> Modify
3b. select Default Background --> Yellow
3c. select Default Foreground --> Black
3d. click Apply
5a. click upper left corner of screen --> displays config menu
5b. click Change Settings --> displays list of saved sessions
5c. highlight the session name just modified
5d. click Save & exit
On future sessions, loading the saved session (uvsoft4 in my example) will bring up the login screen with your desired colours & size.
For multiple sessions, use the 'Duplicate Sessions' option on the putty reconfig menu.
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Some of UV Software's customers use Samba to allow Windows PC access to files on the Linux machine. Mainframe conversion sites might provide access to the COBOL reports now created on the Linux machine.
Please see the sample Samba configuration file listed 2 pages ahead -->
#1. Login as root
#2. cd /etc/samba
#3. mv smb.conf smb.conf.orig <-- save original =========================
#4. cp /home/uvadm/env/smb.conf . <-- copy VU sample =============================
#5. vi smb.conf <-- modify smb.conf for your site =========== - change user-names, directories, etc for your site
#6. smbpasswd -a userid <-- add Samba users =================== - userid already existing as a unix user
#7. service smb start <-- start Samba service now for testing =================
#8. chkconfig smb on <-- setup to start Samba whenever system booted ================ (run levels 2,3,4,5)
9. chkconfig --level 2345 iptables off ===================================
Note | - iptables (the Linux firewall) had to be disabled before Samba would work properly |
| - my router has a built-in firewall, so I rely on it instead |
10. System --> Administration --> SELinux mngmnt --> disable ======================================================== - check the box 'Relabel on next boot' & reboot Linux
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
#1. start Windows Explorer
#2. Tools --> Map Network drive
Drive: Z: \\uvsoft4\root Folder: root Reconnect at logon: check
Drive: Y: \\uvsoft4\home Folder: home Reconnect at logon: check
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
# smb.conf - Samba configuration file
# /etc/samba/smb.conf        <-- location on Red Hat Enterprise Linux
# /home/uvadm/env/smb.conf   <-- sample distributed with Vancouver Utilities
# - this shorter sample created June 2008, by Owen Townsend (UV Software)
[global]
   workgroup = uvsoftware
   printing = CUPS
   printcap name = CUPS
   disable spoolss = yes
   show add printer wizard = no
   passdb backend = smbpasswd
[home]
   comment = uvsoft4 /home directories
   path = /home
   valid users = uvadm, uvbak, uvext
   read only = no
   create mask = 0775
   directory mask = 0775
   force group = apps
[root]
   comment = uvsoft4:root files
   path = /
   valid users = uvadm, uvbak, uvext
   read only = yes
#
# Install & modify for your site as follows:
#
# 1. Login as root
# 2. cd /etc/samba
# 3. mv smb.conf smb.conf.orig        <-- save original
# 4. cp /home/uvadm/env/smb.conf .    <-- copy VU sample
# 5. vi smb.conf                      <-- modify smb.conf for your site
#    - change user-names, directories, etc for your site
#
# 6. smbpasswd -a userid    <-- to add new Samba users
#    ===================        - userid already existing as a unix user
#
# 7. service smb start      <-- restart after config file changes
#    =================          (easier than using separate stop/start)
#
# 8. chkconfig smb on       <-- start Samba when system booted
#    ================           (run levels 2,3,4,5)
#
# 9. chkconfig --level 2345 iptables off
#    ===================================
#
# I had to disable iptables (firewall) before samba would work properly
# - my router has a built in firewall
# Also had to turn off SELinux, using the GUI mngmnt tool as follows:
#
#10. System --> Administration --> SELinux mngmnt --> disable
#    ========================================================
#    - check the box 'Relabel on next boot' & reboot Linux
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
When I installed a DAT tape device & an Adaptec SCSI controller, I checked to ensure the system recognized these devices at system boot time & to determine the device name assigned to the DAT tape drive.
grep -i 'adaptec' /var/log/dmesg ================================
Unix/Linux saves the last boot messages in /var/log/dmesg (563 lines on my Red Hat 5.1). Here is a sample from my last boot (lines 1-12 & 461-472).
Bootdata ok (command line is ro root=LABEL=/ rhgb quiet)
Linux version 2.6.18-92.el5xen (brewbuilder@ls20-bc2-13.build.redhat.com) (gcc version 4.1.2 20071124 (Red Hat 4.1.2-41)) #1 SMP Tue Apr 29 13:31:30 EDT 2008
BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 00000000f39b5000 (usable)
On node 0 totalpages: 997813
  DMA zone: 997813 pages, LIFO batch:31
DMI 2.5 present.
ACPI: RSDP (v002 HP ) @ 0x00000000000e9e10
ACPI: XSDT (v001 HPQOEM SLIC-WKS 0x20070625 0x00000000) @ 0x00000000dffca474
- - - - - 400 lines omitted - - - - -
scsi8 : Adaptec AIC7XXX EISA/VLB/PCI SCSI HBA DRIVER, Rev 7.0
        <Adaptec 29160N Ultra160 SCSI adapter>
        aic7892: Ultra160 Wide Channel A, SCSI Id=7, 32/253 SCBs
  Vendor: HP        Model: C1537A        Rev: L111
  Type:   Sequential-Access              ANSI SCSI revision: 02
 target8:0:3: Beginning Domain Validation
 target8:0:3: FAST-10 SCSI 10.0 MB/s ST (100 ns, offset 32)
scsi 8:0:3:0: Attached scsi generic sg6 type 1
ACPI: PCI Interrupt Link [LMC0] enabled at IRQ 16
GSI 24 sharing vector 0x29 and IRQ 24
ACPI: PCI Interrupt 0000:00:08.0[A] -> Link [LMC0] -> GSI 16 (level, high) -> IRQ 24
st 8:0:3:0: Attached scsi tape st0
 - - - - - 100 lines omitted - - - - -
grep -i 'adaptec' /var/log/dmesg ================================
scsi8 : Adaptec AIC7XXX EISA/VLB/PCI SCSI HBA DRIVER, Rev 7.0
        <Adaptec 29160N Ultra160 SCSI adapter>
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Unix/Linux systems might not automatically mount USB memory devices (as does Windows). You can mount devices manually if you know the device name & are logged in as root. For example if you know the device name is 'sdf1' you could mount as follows:
mount /dev/sdf1 /mnt (/dev/sdf1 is a USB memory device on Owen's system) ====================
ls -l /mnt <-- display any files on the memory stick ==========
cp /etc/passwd /mnt <-- copy a file to the memory stick ===================
If you have some understanding of Unix/Linux devices you can list /dev/... & probably guess which device is the USB memory stick. I know that USB devices are treated as SCSI devices & that SCSI device names are /dev/sd... On my system I can list all SCSI devices & get the response shown below:
ls -l /dev/sd* ==============
/dev/sda     <-- 1st hard disc (a)
/dev/sda1    <-- 1st partition on 1st disc
/dev/sda2    <-- 2nd partition on 1st disc
/dev/sda3
/dev/sda4
/dev/sdb     <-- 2nd hard disc (b)
/dev/sdb1    <-- 1st partition on 2nd disc
/dev/sdb2
  --etc--
/dev/sdd4    <-- 4th partition on 4th disc (d)
/dev/sde     <-- floppy disc
/dev/sdf     <-- USB
/dev/sdf1    <-- 1st USB device
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Another way is to plug in the device & watch for a system response on the system console (console screen tty1 Alt-Function-1).
If you do not have access to the system console, you can use 'dmesg' which displays all system messages since the last boot (from /var/log/dmesg). To see just the last few lines pipe the output into 'tail'.
dmesg | tail ============
  Vendor: SanDisk   Model: Cruzer Mini   Rev: 0.2
  Type:   Direct-Access                  ANSI SCSI revision: 02
SCSI device sdf: 1000944 512-byte hdwr sectors (512 MB)
sdf: Write Protect is off
sdf: assuming drive cache: write through
SCSI device sdf: 1000944 512-byte hdwr sectors (512 MB)
sdf: Write Protect is off
sdf: assuming drive cache: write through
 sdf: sdf1
sd 12:0:0:0: Attached scsi removable disk sdf
sd 12:0:0:0: Attached scsi generic sg7 type 0
From the above, you can guess that the USB device is '/dev/sdf1'.
Another alternative might be to search (vi,grep,tail) /var/log/dmesg or /var/log/messages for patterns you know represent the USB devices. BUT, you need 'root' permissions to access those files directly.
grep 'USB' /var/log/dmesg <-- requires root access =========================
grep 'USB' /var/log/messages <-- requires root access ============================
In fact, on my Red Hat Enterprise 5.1, the USB memory device is automatically mounted on /media/disk, but I would not have known this by searching messages because it is not recorded.
mount /dev/sdf1 /media/disk <-- automatic mount on my RHEL 5.1 ===========================
mount /dev/sdf1 /mnt <-- can also mount like this ==================== (if you don't know auto mount point)
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
Here is a summary of system log files that are most relevant to the casual Unix/Linux user (vs system administrator geeks).
/var/log/messages - records system events at boot time & ongoing
                  - boot, shutdown, device recognitions, hardware faults,
                    software faults, server events (FTP, Samba, etc)
                  - text file (may inspect/search with vi, grep, etc)
/var/log/dmesg    - records system boot/restart messages
                  - the info you see flying up the screen on bootup
                  - text file (may inspect/search with vi, grep, etc)
/var/run/utmp     - currently logged in users (1 record per user)
                  - userid, login date/time, terminal, process id
                  - binary file (many x'00' bytes, fixed recsize 384)
                  - can NOT inspect/search with vi, grep, etc
                  - interrogated by several Unix/Linux system utilities
                    (who, w, finger, utmp-dump)
                  - may also use 'uvhd' to investigate (see page '8K1')
/var/log/wtmp     - same as /var/run/utmp, but includes last 300 logins
                    & reboots in date/time sequence
                  - interrogated by Unix/Linux system utilities 'last' & 'lastlog'
                  - may also use 'uvhd' to investigate (see page '8K1')
who | - display users currently logged in |
w | - display users currently logged in & what they are doing |
finger | - display user info (userid, terminal, login time) |
last | - display login history (last 300 logins from /var/log/wtmp) |
lastlog | - display last login info (1 line per user) |
utmpdump | - dump /var/run/utmp or /var/log/wtmp (binary) to ASCII text |
uvlist | - VU list utility, used here as a filter (option 'c9') to reduce multi-blanks |
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
who <-- display users currently logged in ===
root     tty1     2008-08-31 07:00
uvadm    tty2     2008-08-31 07:00
uvbak    tty3     2008-08-31 07:00
mvstest  tty4     2008-08-31 07:19
root     :0       2008-08-31 06:47
root     pts/1    2008-08-31 10:04 (:0.0)
w <-- display users currently logged in & what they are doing ===
10:22:03 up 3:35, 6 users, load average: 0.04, 0.04, 0.00
USER     TTY    FROM    LOGIN@   IDLE    JCPU    PCPU   WHAT
=====================================================================
root     tty1   -       07:00    3:38    0.03s   0.03s  -bash
uvadm    tty2   -       07:00    0.00s   0.01s   0.00s  w
uvbak    tty3   -       07:00    3:21m   0.00s   0.00s  -bash
mvstest  tty4   -       07:19    2:59m   0.00s   0.00s  -bash
root     :0     -       06:47    ?xdm?   13.74s  0.04s  /usr/bin/gnome-
root     pts/1  :0.0    10:04    17:04   0.01s   0.01s  bash
finger <-- display user info (userid, terminal, login time) ======
Login    Name   Tty    Idle  Login Time    Office  Office Phone
========================================================================
mvstest         tty4   3:05  Aug 31 07:19
root     root   tty1      9  Aug 31 07:00
root     root   *:0          Aug 31 06:47
root     root   pts/1    23  Aug 31 10:04  (:0.0)
uvadm           tty2         Aug 31 07:00
uvbak           tty3   3:27  Aug 31 07:00
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
last <-- display login history (last 300 logins, multiple lines per user) ====
root     pts/1   :0.0              Sun Aug 31 10:04   still logged in
mvstest  tty4                      Sun Aug 31 07:19   still logged in
uvbak    tty3                      Sun Aug 31 07:00   still logged in
uvadm    tty2                      Sun Aug 31 07:00   still logged in
root     tty1                      Sun Aug 31 07:00   still logged in
  - - - 290 lines omitted - - -
uvadm    tty2                      Mon Jul 21 10:37 - down   (10:50)
root     tty1                      Mon Jul 21 09:39 - down   (11:48)
reboot   system boot  2.6.18-92.1.6.el  Mon Jul 21 07:38     (13:49)
wtmp begins Sat Jul 19 19:57:15 2008
lastlog -t7 <-- display last login info (1 line per user) ============ - for users login within past 7 days
Username   Port   From   Latest
=========================================================================
root       tty1          Sun Aug 31 07:00:45 -0700 2008
uvadm      tty2          Sun Aug 31 07:00:50 -0700 2008
mvstest    tty4          Sun Aug 31 07:19:05 -0700 2008
uvbak      tty3          Sun Aug 31 07:00:55 -0700 2008
vsetest    tty5          Sat Aug 30 07:43:39 -0700 2008
uvext      tty4          Thu Aug 28 20:50:49 -0700 2008
Goto: Begin this doc , End this doc , Index this doc , Contents this library , UVSI Home-Page
'utmpdump' dumps /var/run/utmp or /var/log/wtmp (binary files) to ASCII text files, but the result has many blanks between fields, & looks ugly. So, we will redirect the output of utmpdump to a tmp/file & then use 'uvlist' (as a filter), with option 'c9' to reduce multi-consecutive blanks to 1.
#1. /usr/sbin/utmpdump /var/run/utmp >tmp/utmp ========================================== - dump /var/run/utmp to an ASCII file (tmp/utmp)
#2. uvlist tmp/utmp -c9i1 >tmp/utmp2 ================================ - option 'c9' reduces multi-consecutive blanks to 1 - option 'i1' inhibits the laser-printer control codes that uvlist normally writes on its 1st output line
#3. cat tmp/utmp2 <-- display filtered output =============
/home/uvadm/tmp/utmp1 now=080831:1721 uvadm pg# 1
[8] [00420] [si  ] [        ] [        ] [ ] [0.0.0.0 ] [Sun Aug 31 06:47:05 2008 PDT]
[2] [00000] [~~  ] [reboot  ] [~       ] [ ] [0.0.0.0 ] [Sun Aug 31 06:47:05 2008 PDT]
[1] [20021] [~~  ] [runlevel] [~       ] [ ] [0.0.0.0 ] [Sun Aug 31 06:47:05 2008 PDT]
[8] [02339] [l5  ] [        ] [        ] [ ] [0.0.0.0 ] [Sun Aug 31 06:47:30 2008 PDT]
[7] [03974] [1   ] [root    ] [tty1    ] [ ] [0.0.0.0 ] [Sun Aug 31 07:00:45 2008 PDT]
[7] [03980] [2   ] [uvadm   ] [tty2    ] [ ] [0.0.0.0 ] [Sun Aug 31 07:00:50 2008 PDT]
[7] [03982] [3   ] [uvbak   ] [tty3    ] [ ] [0.0.0.0 ] [Sun Aug 31 07:00:55 2008 PDT]
[7] [03985] [4   ] [mvstest ] [tty4    ] [ ] [0.0.0.0 ] [Sun Aug 31 07:19:05 2008 PDT]
[6] [03988] [5   ] [LOGIN   ] [tty5    ] [ ] [0.0.0.0 ] [Sun Aug 31 06:47:30 2008 PDT]
[6] [03989] [6   ] [LOGIN   ] [tty6    ] [ ] [0.0.0.0 ] [Sun Aug 31 06:47:30 2008 PDT]
[5] [03990] [x   ] [        ] [        ] [ ] [0.0.0.0 ] [Sun Aug 31 06:47:30 2008 PDT]
[8] [04030] [mF  ] [        ] [        ] [ ] [0.0.0.0 ] [Sun Aug 31 06:47:31 2008 PDT]
[7] [04110] [:0  ] [root    ] [:0      ] [ ] [0.0.0.0 ] [Sun Aug 31 06:47:40 2008 PDT]
[8] [00000] [/1  ] [root    ] [pts/1   ] [ ] [0.0.0.0 ] [Sun Aug 31 10:32:28 2008 PDT]
Now let's investigate /var/log/wtmp using 'uvhd' (the Vancouver Utility for investigating binary files). Note the difference between 'utmp' & 'wtmp'. Both files store user logins & events such as reboot, shutdown,& runlevel changes. Both are binary files with fixed record size 384 bytes.
/var/run/utmp - stores logins only for currently logged in users
/var/log/wtmp - stores login HISTORY (last 300 logins & shutdown/reboots)
uvhd /var/log/wtmp r384 <-- investigate wtmp (recsize=384) =======================
            10        20        30        40        50        60
 r#      1  0123456789012345678901234567890123456789012345678901234567890123
      0     ....05..~...............................~~..runlevel............
            0000330070000000000000000000000000000000770077666766000000000000
            10000500E0000000000000000000000000000000EE0025EC565C000000000000
     64     ............2.6.18-92.1.6.el5xen................................
            0000000000003232332332323266376600000000000000000000000000000000
            0000000000002E6E18D92E1E6E5C585E00000000000000000000000000000000
    128     ................................................................
            0000000000000000000000000000000000000000000000000000000000000000
            0000000000000000000000000000000000000000000000000000000000000000
    192     ................................................................
            0000000000000000000000000000000000000000000000000000000000000000
            0000000000000000000000000000000000000000000000000000000000000000
    256     ................................................................
            0000000000000000000000000000000000000000000000000000000000000000
            0000000000000000000000000000000000000000000000000000000000000000
    320     .......................H<.......................................
            000000000000000000008A843900000000000000000000000000000000000000
            00000000000000000000B928C0B0000000000000000000000000000000000000
uvhd /var/log/wtmp r384 <-- startup uvhd for /var/log/wtmp (recsize 384) ======================= - will display 1st record (same as above) - not shown here to save space
--> s 44(5),'uvadm' <-- search for records with userid 'uvadm' - displays 1st record found as follows:
 r#     74  0123456789012345678901234567890123456789012345678901234567890123
  28032     ........tty2............................2...uvadm...............
            0000800077730000000000000000000000000000300077666000000000000000
            70006F004492000000000000000000000000000020005614D000000000000000
     64     ................................................................
            0000000000000000000000000000000000000000000000000000000000000000
            0000000000000000000000000000000000000000000000000000000000000000
    128     ................................................................
            0000000000000000000000000000000000000000000000000000000000000000
            0000000000000000000000000000000000000000000000000000000000000000
    192     ................................................................
            0000000000000000000000000000000000000000000000000000000000000000
            0000000000000000000000000000000000000000000000000000000000000000
    256     ................................................................
            0000000000000000000000000000000000000000000000000000000000000000
            0000000000000000000000000000000000000000000000000000000000000000
    320     ....................A..H9.......................................
            000000000000000080004C843E00000000000000000000000000000000000000
            00000000000000006F0019489640000000000000000000000000000000000000
found--> s 44(5),'uvadm' <-- at byte# 44 of record# 74 =====================================================
--> ss <-- may use 'ss' to repeat the search ===
You could use 'ss' to repeat the search for the next matching record, until you reach the end of the file. We will not show any more matching records here.
We will demo the select/write command on the next page -->
--> w9999 44(5),'uvadm' <-- Write all records with 'uvadm' in bytes 44-48 =================== to a tmp/file
            10        20        30        40        50        60
 r#   1340  0123456789012345678901234567890123456789012345678901234567890123
 514176     ........tty2............................2...uvadm...............
            0000800077730000000000000000000000000000300077666000000000000000
            70005F004492000000000000000000000000000020005614D000000000000000
            ----- bytes 64-319 omitted to save space -----
    320     .......................H.8......................................
            0000000000000000000011A4E300000000000000000000000000000000000000
            0000000000000000000050888890000000000000000000000000000000000000
w9999 44(5),'uvadm' 30 written, tmp/wtmp_080817_151157W =======================================================
We can now examine the selected records as follows:
uvhd tmp/wtmp_080817_153211W r384 <-- examine selected records (user 'uvadm') =================================
uvhd filename=/home/uvadm/tmp/wtmp_080817_153211W options=r384
     lastmod=2008081715 today=20080817153306 print=p1
     rec#=1 rcount=30 filesize=11520 recsize=384 fsize%rsize(remainder)=0
            10        20        30        40        50        60
 r#      1  0123456789012345678901234567890123456789012345678901234567890123
      0     ........tty2............................2...uvadm...............
            0000800077730000000000000000000000000000300077666000000000000000
            70006F004492000000000000000000000000000020005614D000000000000000
            ----- bytes 64-319 omitted to save space -----
    320     ....................A..H9.......................................
            000000000000000080004C843E00000000000000000000000000000000000000
            00000000000000006F0019489640000000000000000000000000000000000000
We can now use 'utmpdump' to display the selected records in user friendly format with the binary times converted to a readable format.
utmpdump tmp/wtmp_080817_153211W ================================
[7] [03974] [2 ] [uvadm   ] [tty2    [Mon Jul 21 10:37:05 2008 PDT]
[7] [03936] [2 ] [uvadm   ] [tty2    [Tue Jul 22 07:39:50 2008 PDT]
[7] [03974] [2 ] [uvadm   ] [tty2    [Wed Jul 23 10:07:30 2008 PDT]
 - - - - - 24 records omitted to save space - - - - -
[7] [03971] [2 ] [uvadm   ] [tty2    [Fri Aug 15 06:32:59 2008 PDT]
[7] [03973] [2 ] [uvadm   ] [tty2    [Sat Aug 16 08:06:10 2008 PDT]
[7] [03973] [2 ] [uvadm   ] [tty2    [Sun Aug 17 04:48:37 2008 PDT]
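If you only need a quick look at the login history for 1 user (without uvhd's
select/write), here is a hedged alternative sketch using standard tools (the
userid 'uvadm' is just an example):

   /usr/sbin/utmpdump /var/log/wtmp 2>/dev/null | grep '\[uvadm ' | tail -5
   # or simply:
   last uvadm | head -5      # login history for 1 user from the same wtmp file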
This item (using 'uvhd' to investigate Unix/Linux system files) was published in the Linux Gazette in Sept 2008. See https://linuxgazette.net.
df <-- disk free (Owen's HP XW9400) === - showing only Scsi Disk 'A' (omitting sdb,sdc,sdd)
Filesystem            Size  Used  Avail  Use%  Mounted on
=========================================================
/dev/sda2             8.2G  4.4G   3.4G   57%  /
/dev/sda1             200M   30M   160M   16%  /boot
/dev/sda5              11G  436M   9.2G    5%  /home
/dev/sda6              11G  1.1G   8.7G   11%  /home2
/dev/sda7              11G  6.1G   3.7G   63%  /home3
/dev/sda8              11G  391M   9.3G    5%  /home4
/dev/sda9              11G  193M   9.5G    2%  /home5
/dev/sda10            4.1G  292M   3.6G    8%  /var
du /home/uvadm/* <-- disc usage for all subdirs in uvadm/... ================
439k  archive
132k  batDOS
4.2M  bin
 --- 21 subdirs omitted ---
7.5M  src
8.2k  tmp
3.3M  vsetest
 66M  total
statdir1 /home/uvadm <-- Vancouver Utility script ==================== (more info than du)
statdir1 - FileCounts & DiscUsage for SubDirs in ParentDir: /home/uvadm
statdir1 uvadm >stats/uvadm.stats    Tue Sep 2 06:07:48 PDT 2008
===============================================================
#1  Files=0000062 SubDirs=0003 KB=0000428 - /home/uvadm/archive
#2  Files=0000031 SubDirs=0000 KB=0000128 - /home/uvadm/batDOS
#3  Files=0000021 SubDirs=0000 KB=0004044 - /home/uvadm/bin
 - - - 21 subdirs omitted - - -
#25 Files=0000057 SubDirs=0000 KB=0007312 - /home/uvadm/src
#26 Files=0000003 SubDirs=0000 KB=0000016 - /home/uvadm/tmp
#27 Files=0000188 SubDirs=0095 KB=0003156 - /home/uvadm/vsetest
Total Files=2845, SubDirs=217, KB=63824, for ParentDir=/home/uvadm
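You could also monitor disc space automatically & mail a warning when any file
system fills up. A minimal hedged sketch (the 90% threshold & recipient 'appsadm'
are only examples, & this could be scheduled nightly by cron, see Part_5):

   #!/bin/ksh
   # dfalert1 - hypothetical sketch (not a Vancouver Utilities script)
   #          - mail appsadm when any filesystem reaches 90% use
   df -P | awk 'NR>1 && $5+0 >= 90 {print $6, $5}' | while read mnt pct
   do echo "WARNING: $mnt is $pct full" | mail -s"df alert: $mnt $pct" appsadm
   done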
free <-- system command to show internal memory free & used ====
              total       used       free     shared    buffers     cached
=========================================================================
Mem:        3983360    1059856    2923504          0      84444     460028
-/+ buffers/cache:      515384    3467976
Swap:       6144852          0    6144852
uname -a <-- display Unix/Linux system information ======== - OS version, etc
Linux uvsoft4 2.6.18-92.1.10.el5xen #1 SMP x86_64 x86_64 x86_64 GNU/Linux =========================================================================
Sometimes you need to kill a hung-up process. This is most easily done if you can login as root on another screen. Here is a demo:
#1. Login as yourself (uvadm in my case)
#2. sleep 300 <-- run something =========
#3. Login as root
#4. ps -u uvadm <-- display processes for user 'uvadm' ===========
PID TTY TIME CMD 4845 tty2 00:00:00 bash 6280 tty2 00:00:00 sleep
#5a. kill 6280 <-- kill the hung-up process =========
#5b. kill 4845 <-- OR kill the user's shell =========
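If you prefer to kill by process name rather than by PID, here is a hedged
alternative sketch, assuming the 'pkill' utility (procps package) is installed:

   pkill -u uvadm sleep      # kill processes named 'sleep' owned by user 'uvadm'
   # pkill -KILL -u uvadm    # or kill everything owned by the user (like #5b above)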
You probably know that you can run jobs in the background by terminating the command with an '&'. If you are not familiar with the various job control features of the KORN shell you might like to try the following:
--> sleep 100 &             <-- run some simple jobs for testing
--> sleep 200 &
--> sleep 300 &

--> jobs                    <-- request status of jobs
[3] + Running    sleep 300 &    - response to jobs request
[2] - Running    sleep 200 &
[1]   Running    sleep 100 &
    - the jobs are assigned a job# starting with 1 & incrementing
      as more jobs are run simultaneously
    - the most recent job is marked with '+' & the prior job with '-'

--> fg %2                   <-- bring job #2 to the foreground
sleep 200 &                     - displays the original command for #2

--> ctl Z                   <-- suspend job #2
[2] + Stopped    sleep 200 &    - goes to the background in stopped state

--> jobs                    <-- request job status
[2] + Stopped    sleep 200 &    - note: job #2 stopped
[3] - Running    sleep 300 &
[1]   Running    sleep 100 &

--> bg %2                   <-- restart job #2

--> kill %1                 <-- kill job #1

--> jobs                    <-- request job status
[2] + Running    sleep 200 &    - note: job #2 running
[3] - Running    sleep 300 &    - note: job #1 was killed
The 'SHIFT ZZ' command in the 'vi' editor writes out & quits.
A common mistake is to hit 'CTL ZZ' instead which will put your 'vi' editor job into the background & return you to the shell.
The solution is to use the 'fg' command to bring your editor job back into the foreground, but if you do not know this, it is PANIC time.
Here is a simple script you might play with to test/demo running jobs in the background. Similar to using sleep directly, but this script outputs a message every 15 seconds.
# testjobs1 - test unix/linux job control
#           - by Owen Townsend, UV Software, Feb 3/2011
#
# testjobs1 &     <-- run in background
# ===========
# jobs            <-- display status of background jobs (running/stopped)
# fg %1           <-- bring job #1 to foreground
# ^Z              <-- control-Z to put into background (will be stopped)
# bg %1           <-- restart job
# kill %1         <-- kill job #1
#
while ((1))
do sleep 15
   echo "testjobs1 - test jobs running in background"
   echo "try: -->jobs -->fg %1 -->^Z -->bg %1 -->kill %1"
done
uvcopy testbg1 &        <-- run uvcopy job to test background jobs
================            - similar to script above

vi $UV/pf/adm/testbg1   <-- stored here if you wish to inspect code
By default, output messages from background jobs will come to your screen, possibly interspersed with messages from the foreground job (which might be the editor or whatever).

Any input request by a background job will cause that job to be stopped by the UNIX operating system. When you terminate the current foreground job, you will get a message giving the name of the stopped background job, but you might not know what the input request was because it may have rolled off the screen.

The solution to this problem might be to use the 'stty tostop' system command, which causes background jobs to be stopped when they request output as well as input. You could put this command in the .profile of operators who will be running multiple production jobs.
--> stty tostop <-- put your terminal in 'tostop' mode =========== - background jobs stop on output as well as input
The best solution to the problem is "Do Not run jobs in the background". In my experience of mainframe conversions, the unix/linux machines were so much faster than the mainframes that background jobs were unnecessary.
wall       <-- initiate wall, type message, enter control-D to end
====
Hello All users - this is a 'wall' message from uvadm
- testing wall for my documentation
Thanks, Owen
^D
Note: all logged-in users receive the message prefixed with a broadcast header:

      Broadcast message from uvadm (tty2) (Tue Sep 2 12:04:57 2008)
write appsadm    <-- initiate write, type message, control-D to end
=============
Hello appsadm, this is a 'write' from uvadm
- testing 'write' for my documentation
Bye, Owen
^D
mail -s'testing mail' appsadm    <-- initiate mail, type message, control-D to end
=============================
Hello appsadm, this is mail from uvadm
- testing 'mail' for my documentation
Bye Owen
^D
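'wall' also reads its message from standard input, so a script can broadcast a
warning to all logged-in terminals. A minimal hedged sketch (the wording & the
10 minute delay are only illustrations):

   echo "Nightly backup starts in 10 minutes - please log off" | wall
   sleep 600      # wait 10 minutes before starting the backup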
top <-- initiate 'top' (display shown below) ===
top - 12:36:28 up  5:42,  6 users,  load average: 0.56, 0.22, 0.11
Tasks: 169 total,   3 running, 166 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.4%us,  0.1%sy,  0.0%ni, 99.2%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   3983360k total,  1816140k used,  2167220k free,   115316k buffers
Swap:  6144852k total,        0k used,  6144852k free,  1146996k cached
  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7516 mvstest  25   0  4360  868  536 R   99  0.0  0:18.78  uvcopy (loop1)
 7517 vsetest  22   0  4360  892  552 R   47  0.0  0:06.57  uvcopy (loop2)
    1 root     15   0 10328  708  592 S    0  0.0  0:00.40  init
    2 root     RT  -5     0    0    0 S    0  0.0  0:00.00  migration/0
    3 root     34  19     0    0    0 S    0  0.0  0:00.01  ksoftirqd/0
    4 root     RT  -5     0    0    0 S    0  0.0  0:00.00  watchdog/0
    5 root     RT  -5     0    0    0 S    0  0.0  0:00.01  migration/1
    6 root     34  19     0    0    0 S    0  0.0  0:00.00  ksoftirqd/1
    7 root     RT  -5     0    0    0 S    0  0.0  0:00.00  watchdog/1
    8 root     10  -5     0    0    0 S    0  0.0  0:00.06  events/0
    9 root     10  -5     0    0    0 S    0  0.0  0:00.00  events/1
   10 root     10  -5     0    0    0 S    0  0.0  0:00.00  khelper
   11 root     10  -5     0    0    0 S    0  0.0  0:00.00  kthread
   13 root     10  -5     0    0    0 S    0  0.0  0:00.00  xenwatch
   14 root     10  -5     0    0    0 S    0  0.0  0:00.00  xenbus
   17 root     10  -5     0    0    0 S    0  0.0  0:00.00  kblockd/0
   18 root     10  -5     0    0    0 S    0  0.0  0:00.00  kblockd/1
   19 root     20  -5     0    0    0 S    0  0.0  0:00.00  kacpid
#1a. Login as mvstest
#1b. uvcopy loop1 <-- run cpu bound job ============
#2a. Login as vsetest
#2b. uvcopy loop2 <-- run I/O bound job ============
#3. Login as uvadm
#4. top >tmp/top1    <-- run top & redirect output to a file
    =============
    --> q            <-- quit after 1 or 2 seconds
#5. uvcopy unscreen1[,fili1=tmp/top1,filo1=tmp/top2] ================================================
#6. vi tmp/top2 <-- inspect output ===========
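If your version of 'top' supports batch mode ('-b'), you can avoid the screen
escape sequences entirely & skip the unscreen1 step. A hedged alternative sketch:

   top -b -n1 >tmp/top3     # -b = batch mode (plain text, no screen escapes)
                            # -n1 = 1 iteration, then top exits
   vi tmp/top3              # inspect - no filtering needed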
See the 3 uvcopy jobs listed on the following pages:
# loop1 - uvcopy job to do nothing but hang up in code loop
#       - to test/demo 'top' performance analysis tool
#
# uvcopy loop1[,uop=t300]
# =======================
#
opr='uop=t300 - option default'
opr='    t300 - loop for 300 seconds'
uop=q1t300                      # option defaults
@run
# begin loop to test loop time reached
# $time1 = time job starts (unix epoch time, seconds since 1970)
# $time2 = time updated by the 'tim' instruction
man20   tim    update $time2 with current time
        mvn    $ca1,$time2      current time to work ctr
        sub    $ca1,$time1      - time job started
        cmn    $ca1,$uopbt      compare diff to option time ?
        skp<   man20
        eoj
# loop2 - uvcopy job to generate I/O activity
#       - to test/demo 'top' performance analysis tool
#       - this job writes a specified no of records
#       - writes a neutral translate table 256 bytes codes x'00' - x'FF'
#       - output file at UVSI /h24/tmp/loop2_output
#       - /h24 is an empty 35 gig file system (other than tmp subdir)
#
# uvcopy loop2[,uop=n4000000][,filo1=/h24/tmp/loop2_testfile]
# ===========================================================
#
opr='uop=n4000000 - option default'
opr='    n4000000 - write 4 million records (1 Gig)'
uop=q1n4000000                  # option defaults
filo1=?/h24/tmp/loop2_testfile,rcs=256,typ=RSF
@run
        opn    filo1            open the output file
# begin loop to write records until spcfd# (option n) reached
man20   put    filo1,$trt       write neutral translate table
        add    $ca1,1           count records written
        cmn    $ca1,$uopbn      reached spcfd# ?
        skp<   man20
        cls    filo1            close file
        eoj                     end job
# unscreen1 - remove screen control escape sequences
#           - by Owen Townsend, UV Software, Sept 3/2008
#
# This job searches for the escape x'1B' start char & removes until 'm' or blank
# I created this job so I could show screens in my text documentation
# For example, the 'top' utility creates a screen loaded with escape sequences
#
# 1. top >tmp/top1
#    --> q          <-- quit top
#
# 2. uvcopy unscreen1,fili1=tmp/top1,filo1=tmp/top2
#    ==============================================
#    - 2 lines of input/output shown below (escapes shown as '!')
#
# ![7m  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND ![0;10m![39;49m![K
# ![0;10m  1 root 15   0 10328  708  592 S    0  0.0 0:00.40 init ![0;10m![39;49m
#
#   PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
#     1 root     15   0 10328  708  592 S    0  0.0  0:00.40  init
#
was=a5000b5000
fili1=tmp/top1,rcs=1024,typ=LST
filo1=tmp/top2,rcs=1024,typ=LSTt
@run
        opn    all
# begin loop to get/process/put records until EOF
man20   get    fili1,a0(1024)   get next input record
        skp>   man90            (cc set > at EOF)
#
# copy input area 'a' to output area 'b' removing escape sequences
        clr    b0(1024),' '     clear output area
        mvn    $ra,0            init rgstr 'a' ptr to area 'a'
        mvn    $rb,0            init rgstr 'b' ptr to area 'b'
#
# begin loop to copy data until escape x'1B' found
man30   mvue3  bb0(1024),aa0,x'1B'   move until next escape found
        skp!   man40
        scne1m aa0(20),'m '     scan to end code 'm' or ' '
        add    $ra,1            bypass the 'm' or ' '
        skp    man30            repeat loop til no more escapes found
#
man40   put    filo1,b0(1024)   write out result
        skp    man20            return to get next line
#
# EOF - close files & end job
man90   cls    all
        eoj
You can obtain system memory size & memory usage by displaying /proc/meminfo. Here are the results from my Red Hat Enterprise Linux 5.1 installed on my HP xw9400 workstation with 4 GB memory:
cat /proc/meminfo =================
MemTotal:      3983360 kB
MemFree:       2802296 kB
Buffers:        143692 kB
Cached:         502656 kB
SwapCached:          0 kB
Active:         433628 kB
Inactive:       454720 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:      3983360 kB
LowFree:       2802296 kB
SwapTotal:     6144852 kB
SwapFree:      6144852 kB
Dirty:             104 kB
Writeback:           0 kB
AnonPages:      242104 kB
Mapped:          67024 kB
Slab:            90192 kB
PageTables:      22640 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   8136532 kB
Committed_AS:  2032136 kB
VmallocTotal: 34359738367 kB
VmallocUsed:      3740 kB
VmallocChunk: 34359734559 kB
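If a script needs a single figure from /proc/meminfo (for example, MemFree as a
percentage of MemTotal), here is a minimal sketch:

   awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2}
        END {printf "MemFree %d kB = %d%% of MemTotal %d kB\n", f, f*100/t, t}' /proc/meminfo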
The objective here is to send email from a script, scheduled by cron at night, to managers at home, to alert them of serious errors.
In this section, we will show you how to use 'msmtp' to send error messages from cron scripts to managers at home using their internet email addresses (vs their unix/linux system login accounts, which are the usual destinations of the 'mail' utility without special 'sendmail' configurations).
It is easy to send mail from a unix/linux script to any other user account on the unix/linux system, but this mail will not be delivered to the internet without special configuration of the mail transport agent (sendmail by default).
We do not want our implementation of 'msmtp' to interfere with mail & sendmail. 'mail' (user interface) passes the mail to 'sendmail' (transport agent), which (without special configuration), delivers the mail ONLY to other login user accounts on the local unix/linux system, and NOT to the internet.
In Part_5, we illustrated how to schedule JCL/scripts with 'cron', which will automatically mail the console messages to the owner of the 'crontab' file, which we suggest should be 'appsadm'. Each morning appsadm can read his mail to see the console log from nightly jobs.
The 'msmtp' utility is a simple way to send mail to the internet (vs mail & sendmail which send only to local users without complex configurations).
Some msmtp setups show you how to disable 'sendmail' & have 'mail' call 'msmtp', but we need the standard mail/sendmail to deliver console logs from cron jobs to appsadm (as described above & on pages '5I1' - 5K4).
https://msmtp.sourceforge.net <-- Sourceforge project page for 'msmtp' =============================
msmtp-1.4.31.tar.bz2 - latest version as of December 2013 ==================== - download to /root/Downloads/...
4. cp Downloads/msmtp-1.4.31.tar.bz2 . - might copy out of Downloads ? ===================================
5. tar -xjf msmtp-1.4.31.tar.bz2 <-- extracts to /root/msmtp-1.4.31/... =============================
6. cd msmtp-1.4.31 <-- change into subdir ===============
7. vi INSTALL <-- read install instructions ==========
8. ./configure <-- configure msmtp for your machine ===========
9. make <-- make msmtp ====
10. make install <-- install msmtp ============
You must set up the msmtp config file in the home directories of users who wish to use msmtp. You can copy the msmtp 'example' file & modify it with the values required by your email provider (account, host, port, user, password). Here are my procedures, followed by a listing of the edited result.
#3. cp /root/msmtp-1.4.31/doc/msmtprc-system.example .msmtprc =========================================================
#4. vi .msmtprc ===========
# .msmtprc - must be named '.msmtprc' in the user homedir
#          - 'msmtp' user config file in /home/owen/.msmtprc
# msmtprc_owen - alternate filename for visibility (same contents)
#          - testing msmtp by Owen Townsend, UV Software, Dec 2013
#
# Download MSMTP pkg from https://msmtp.sourceforge.net
# - msmtp-1.4.31.tar.bz2 as of Dec 2013
# - uncompress with 'tar xjf' into /root/msmtp/...
# - see INSTALL to configure, make,& make install
# This msmtprc created from /root/msmtp/doc/msmtprc-user.example
# - copied/renamed to /home/owen/.msmtprc
# - permissions must be 0600
# - omitting many optional configuration commands
#
# 'msmtp' is an alternate to the default 'sendmail'
# - specified in the config file of the mail client ('mail' or 'mutt')
#
# set sendmail="/usr/local/bin/msmtp"   <-- in .mailrc of user homedir
# ===================================       or .muttrc of user homedir
# 'mutt' is a more capable mail client than 'mail'
# See ftp://ftp.mutt.org/mutt - download mutt-1.5.22.tar.gz
#
defaults
logfile ~/msmtp.log

account webfaction
host smtp.webfaction.com
# port 465
from owen@uvsoftware.ca
auth on
user uvsoft
password xxxxxx

account default : webfaction

# Note - I #commented out 'port 465' (caused 'remote protocol error')
#      - port seems to default OK
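Before involving 'mail' or 'mutt', you might test msmtp by itself. A hedged
sketch, assuming the account above (remember the config must be mode 0600, &
the address is only an example):

   chmod 600 ~/.msmtprc      # msmtp refuses a group/world-readable config
   printf "To: owen@uvsoftware.ca\nSubject: msmtp direct test\n\ntest body\n" |
     /usr/local/bin/msmtp owen@uvsoftware.ca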
'mail' is the default 'mail client' (front-end program to format the mail message to be sent thru a mail agent such as 'sendmail' or better 'msmtp').
'mail's default agent is 'sendmail', but we can change it by setting up a '.mailrc' as follows:
#1. Login --> /home/owen
#2. vi .mailrc <-- create .mailrc as follows: ==========
# .mailrc - setup Nov04/2013 to test msmtp set sendmail="/usr/local/bin/msmtp" #==================================
#1. echo "mail/msmtp message#1" | mail -s"mail/msmtp test#1" owen@uvsoftware.ca ===========================================================================
#2. echo "mail/msmtp message#2" | mail -s"mail/msmtp test#2 with attachment" \ -a .mailrc owen@uvsoftware.ca ==========================================================================
Next, we will show you how to download & test an alternate mail client, 'mutt', which has advantages over 'mail'. I particularly like the '-i' option, which allows you to specify an input file for the message body instead of standard input.
www.mutt.org <-- project page for 'mutt' ============
www.mutt.org/download.html <-- click download goes here ==========================
mutt-1.5.22.tar.gz <-- latest version as of December 2013 ================== - download to /root/Downloads
4. cp Downloads/mutt-1.5.22.tar.gz . - might copy out of Downloads ? =================================
5. gunzip mutt-1.5.22.tar.gz <-- unzips to tar file =========================
6. tar xvf mutt-1.5.22.tar <-- extracts to mutt-1.5.22/... =======================
7. cd mutt-1.5.22 <-- change into subdir ==============
8. vi INSTALL <-- read install instructions ==========
9. ./configure <-- configure mutt for your machine ===========
10. make install <-- install mutt ============
'mutt's default agent is 'sendmail', but we can change it by setting up a '.muttrc' as follows:
#1. Login --> /home/owen
#2. vi .muttrc <-- create .muttrc as follows: ==========
# .muttrc - config file for mutt (in homedir of user)
#         - /home/owen/.muttrc for Owen's test Nov 2013
# muttrc_owen - I used this as a visible name
#
# See ftp://ftp.mutt.org/mutt - download mutt-1.5.22.tar.gz
# - copy to /root/mutt & gunzip, tar xvf, configure, make install
# - 'mutt' is a better mail client than 'mail'
# - but mail works if you create .mailrc with 'set sendmail="/usr/local/bin/msmtp"'
#
# These set's from the sourceforge msmtp documentation, paragraph 10.3
set sendmail="/usr/local/bin/msmtp"
set use_from=yes
set realname="Owen Townsend"
set from=owen@uvsoftware.ca
set envelope_from=yes
#
# ** Easy ways to test mutt (or mail) with 'msmtp' **
#
# 1. echo "test mutt & msmtp" | mutt -s "test mutt/msmtp" owen@uvsoftware.ca
#    =======================================================================
#
# 2. echo "attach .muttrc" | mutt -a /home/owen/.muttrc -- owen@uvsoftware.ca
#    ========================================================================
#    Note - must separate attachments & recipient addresses with '--'
#
# 3. mutt -i /home/owen/.muttrc -s ".muttrc in body" owen@uvsoftware.ca </dev/null
#    =============================================================================
#    Note - '-i' inserts the body from a file & </dev/null replaces normal input
#         - '-i' is a useful option only in 'mutt' (not in 'mail')
#1. echo "mutt/msmtp message#1" | mutt -s"mutt/msmtp test#1" owen@uvsoftware.ca ===========================================================================
#2. echo "mutt/msmtp message#2" | mutt -s"mutt/msmtp test#2 with attachment" \ -a .muttrc -- owen@uvsoftware.ca ==========================================================================
Note: '-i' inserts the message body from a file & '</dev/null' replaces the normal
standard input ('-i' is available in 'mutt', not in 'mail').

#3. mutt -i".muttrc" -s"mutt/msmtp test#3 -i" owen@uvsoftware.ca </dev/null =======================================================================
Here is a good example of how you might use mutt/msmtp - in a JCL/script that might be scheduled by 'cron' at night. I will use 'jar100.ksh', which is a demo job listed in full at JCLcnv1demo.htm#2A1.
Here are the last few lines containing the 'Normal' & 'Abnormal' job termination points. These lines are common to all JCLs converted to scripts by the JCL converter. I have inserted the 'mutt' call as marked by '-->'.
If the job had been run using 'joblog jar100.ksh', then the log is joblog/jar100.log & we can use '-i' to include it in the email body & '-a' to attach the JCL/script to the email.
#!/bin/ksh
# jar100.ksh - accounts receivable processing
# ---- 25 lines omitted ----
LCC=$?        <-- capture COBOL return code, set to 99 to force failure
# ---- 5 lines omitted ----
S9000=A
 jobend51
 logmsg2 "JobEnd=Normal, StepsExecuted=$XSTEP, LastStep=$JSTEP"
 exit 0
S9900=A
 logmsg2 "ERR: Terminated Abnormally,JCC=$JCC,Step=$JSTEP"
 RV ACK
--> mutt -s "$JOBID2 AbTerm" -i joblog/$jobid2.log \
         -a $RUNLIBS/jcls/jar100.ksh -- owen@uvsoftware.ca </dev/null
    #================================================================
 jobabend51
 exit $JCC
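You could package the same idea as a small function & call it from the abnormal
termination point of any converted JCL/script. A hedged sketch (the function name
& recipient are only illustrations, not part of the converted scripts):

   # mail_abterm jobid joblogfile scriptfile    <-- hypothetical helper
   mail_abterm() {
     mutt -s "$1 AbTerm on $(hostname)" -i "$2" -a "$3" -- owen@uvsoftware.ca </dev/null
   }
   # example call (from the S9900 abnormal termination point):
   # mail_abterm $JOBID2 joblog/$jobid2.log $RUNLIBS/jcls/jar100.ksh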
I used the 'mvstest' user & demo files documented at JCLcnv1demo.htm#Part_4.
#1. Login as 'mvstest' --> /home/mvstest
#2. cdl --> /home/mvstest/testlibs (cdl is alias 'cd $RUNLIBS')
#3. vi jcls/jar100.ksh      <-- edit jar100.ksh, making the changes below:
    ==================
    #3a. mutt ...           <-- insert the mutt instruction (see above)
    #3b. LCC=99             <-- change the COBOL return code to 99 to force failure
    #3c. :wq!               <-- write & quit the editor
#4. joblog jar100.ksh <-- run the job (with logging) =================
#5. switch to a GUI internet browser to see if the error message was received OK.
Sometimes I am working on a client's unix/linux machine with no printer configured on the unix/linux machine, but there are printers on the network. Usually I am using 'putty' (terminal emulator) on a Windows 7 laptop to access unix/linux and can also access printers on the network.
I can download the PCL files from unix/linux to my laptop's Windows directories with 'winscp' and then send them to a printer on the network as follows (using a DOS command window):
net use lpt1 \\computername\printername /persistent:yes =======================================================
For example, my computername was 'OWEN-PC' & I had configured a network printer named 'lexmarkT652' so my command was:
net use LPT1 \\OWEN-PC\lexmarkT652 /persistent:yes ==================================================
copy /b filename.pcl LPT1: ==========================
When at a client site, I might want to print my documentation using my 'uvlist' utility (indirectly with 1 of the many 'uvlp' scripts calling uvlist). For example to create the PCL file for the 'uvlist' documentation:
2. mkdir docpcl <-- make dir to receive PCL version of doc ============
3. uvlp12Dpcl doc/uvlist.doc docpcl ================================ - creates output as docpcl/uvlist_doc.pcl - .doc changed to _doc since the only period allowed is the one in '.pcl' - (a multi-file sketch follows step 6 below)
5. net use LPT1 \\OWEN-PC\lexmarkT652 /persistent:yes ===================================================
6. copy /b uvlist_doc.pcl LPT1: ============================
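On the unix/linux side, you could create PCL versions of several doc files in one
pass (step 3 above) before downloading them with winscp. A minimal hedged sketch
(the doc filenames are only examples):

   mkdir -p docpcl
   for f in doc/uvlist.doc doc/uvhd.doc doc/uvcopy3.doc
   do uvlp12Dpcl $f docpcl      # creates docpcl/<name>_doc.pcl for each file
   done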
#!/bin/ksh
# uvlp12Dpcl - Korn shell script from UVSI stored in: /home/uvadm/sf/util/
# uvlp12Dpcl - print a file at 12 cpi DUPLEX (90 chars on 8 1/2 x 11)
#            - pg hdngs with: filename, mod-date, today-date, page#s
#            - for HP laserjet printers & compatibles
#            - alt version of uvlp12D, to create a file (do NOT pipe to printer)
#            - so output file can be taken to a windows machine
#            - could use verypdf to convert PCL to .pdf if necessary
#
#usage: uvlp12Dpcl filename outdir
#       ==========================
#
if [[ -f "$1" && -d "$2" ]]; then :
else echo "usage: uvlp12Dpcl UVdocfile outdir"
     echo "       ==========================="
     echo "ex:    uvlp12Dpcl doc/uvlist.doc docpcl"
     echo "       ================================"
     echo "       - output will be docpcl/uvlist_doc.pcl"
     exit 1; fi
#
d1f1x=$1; d2=$2;
f1x=$(basename $d1f1x)
f2=$(echo $f1x | tr '.' '_')
d2f2x=$d2/$f2.pcl
#Note - must change '.'s to '_' in filenames for windows PCL print
#     - the only '.' must be on the '.pcl' extension
#
uvlist $d1f1x p60 t4d1c12n-240 >$d2f2x
#======================================================
#note - option 't4' for alternate tray on my Lexmark t642
#     - option 't1' for main tray #1 (t2 & t3 are manual & envelope)
#     - I use t4 for script 'uvlp12D' for my Duplex paper   <-- this script
#     - I use t1 for script 'uvlp12' for my Simplex paper   <-- alternate
uvln=$(basename $0)
linesbf=$(wc -l $1); linesb=${linesbf% *}; lines=${linesb##* };
echo "$uvln printing $1 on $UVLPDEST, lines=$lines"
exit 0