Tivoli Workload Automation
Version 8.5
Overview
SC32-1256-07
Note
Before using this information and the product it supports, read the information in “Notices” on page 95.
This edition applies to version 8, release 5 of IBM Tivoli Workload Automation (program numbers 5698-A17,
5698-WSH, and 5698-WSE) and to all subsequent releases and modifications until otherwise indicated in new
editions.
This edition replaces SC32-1256-06.
© Copyright International Business Machines Corporation 1991, 2008.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Figures . . . . . . . . . . . . . . . v

About this publication . . . . . . . . vii
 What is new in this release . . . . . . . . vii
 What is new in this publication . . . . . . vii
 Who should read this publication . . . . . . vii
 What this publication contains . . . . . . viii
 Publications . . . . . . . . . . . . . viii
 Accessibility . . . . . . . . . . . . . ix
 Tivoli technical training . . . . . . . . . ix
 Support information . . . . . . . . . . ix
 How to read the syntax diagrams . . . . . . ix

| Chapter 1. Summary of enhancements . 1
| New Tivoli Workload Scheduler and Tivoli Dynamic Workload Console features . . 1
|  Installation improvements . . . . . . . . 1
|  Variable tables . . . . . . . . . . . . 2
|  Workload Service Assurance . . . . . . . 3
|  Problem determination and troubleshooting enhancements . . 5
|  Enhanced integration features . . . . . . . 5
| New Tivoli Dynamic Workload Console features . . 5
| Tivoli Workload Scheduler for Applications features . . 5
|  Support for Internet Protocol version 6 (IPv6) . . 6
|  New installation launchpad . . . . . . . . 6
|  Displaying and rerunning of SAP process chains . 6
|  Balancing of SAP workload using server groups . 6
|  Exporting of SAP factory calendars . . . . . 6
|  Definition of internetwork dependencies and event rules based on SAP events . . 7
|  Definition of event rules based on IDoc records . . 7
| Summary of product enhancements for Tivoli Workload Scheduler for z/OS . . 8

Chapter 2. Interoperability notes . . . . 11
 Interoperability of the components . . . . . . 11
| Interoperability tables . . . . . . . . . . 12

Chapter 3. Overview of Tivoli Workload Automation . . 15
 The state-of-the-art solution . . . . . . . . 15
  Comprehensive workload planning . . . . . 16
  Centralized systems management . . . . . 16
  Systems management integration . . . . . 16
  Automation . . . . . . . . . . . . 18
  Workload monitoring . . . . . . . . . 18
  Automatic workload recovery . . . . . . 19
  Productivity . . . . . . . . . . . . 19
 Business solutions . . . . . . . . . . . 19
 User productivity . . . . . . . . . . . 19
 Growth incentive . . . . . . . . . . . 19
 How Tivoli Workload Automation benefits your staff . . 20
  Role of the scheduling manager as the focal point . . 20
  Role of the operations manager . . . . . . 21
  A powerful tool for the shift supervisor . . . 21
  Role of the application programmer . . . . 21
  Console operators . . . . . . . . . . 21
  Workstation operators . . . . . . . . . 21
  End users and the service desk . . . . . . 22
 Summary . . . . . . . . . . . . . . 22

Chapter 4. Tivoli Workload Automation and ITUP . . 23
 The ITUP processes . . . . . . . . . . . 23
 Service execution and workload management . . 23
 Managing workload with Tivoli Workload Automation . . 24

| Chapter 5. Who performs workload management . . 27

Chapter 6. A business scenario . . . . 29
 The company . . . . . . . . . . . . . 29
 The challenge . . . . . . . . . . . . . 31
 The solution . . . . . . . . . . . . . 32
  Typical everyday scenarios . . . . . . . 36
   Managing the workload . . . . . . . 37
   Monitoring the workload . . . . . . . 39
   Managing the organization of the IT infrastructure . . 40
 The benefits . . . . . . . . . . . . . 41

Chapter 7. Tivoli Workload Scheduler . . 43
 Overview . . . . . . . . . . . . . . 43
  What is Tivoli Workload Scheduler . . . . 43
  The Tivoli Workload Scheduler network . . . 44
  Manager and agent types . . . . . . . 45
  Topology . . . . . . . . . . . . . 45
  Networking . . . . . . . . . . . . 46
  Tivoli Workload Scheduler components . . . 47
  Tivoli Workload Scheduler scheduling objects . . 48
  The production process . . . . . . . . 50
 Scheduling . . . . . . . . . . . . . 51
  Defining scheduling objects . . . . . . . 51
  Creating job streams . . . . . . . . . 51
  Setting job recovery . . . . . . . . . 51
  Integration with Tivoli Enterprise Data Warehouse . . 52
 Running production . . . . . . . . . . 52
  Running the plan . . . . . . . . . . 52
  Running job streams . . . . . . . . . 53
  Monitoring . . . . . . . . . . . . 54
  Controlling with IBM Tivoli Monitoring . . 54
  Reporting . . . . . . . . . . . . . 55
  Auditing . . . . . . . . . . . . . 55
|  Using event-driven workload automation . . 56
For information about the APARs that this release addresses, see the Tivoli
Workload Scheduler Download Document at
http://www.ibm.com/support/docview.wss?rs=672&uid=swg24018908.
Note: Changed or added text with respect to the previous version is marked by a
vertical bar in the left margin.
v Data processing (DP) operations managers and their technical advisors who are
evaluating the product or planning their scheduling service
v Individuals who require general information for evaluating, installing, or using
the product.
Publications
Full details of Tivoli Workload Scheduler publications can be found in Tivoli
Workload Automation: Publications. This document also contains information on the
conventions used in the publications.
A glossary of terms used in the product can be found in Tivoli Workload Automation:
Glossary.
Accessibility
Accessibility features help users with a physical disability, such as restricted
mobility or limited vision, to use software products successfully. With this product,
you can use assistive technologies to hear and navigate the interface. You can also
use the keyboard instead of the mouse to operate all features of the graphical user
interface.
For full information with respect to the Job Scheduling Console, see the
Accessibility Appendix in the Tivoli Workload Scheduler: Job Scheduling Console User’s
Guide.
For full information with respect to the Tivoli Dynamic Workload Console, see the
Accessibility Appendix in the Tivoli Workload Scheduler: User's Guide and Reference.
Tivoli technical training
For Tivoli technical training information, refer to the IBM Tivoli Education Web
site:
http://www.ibm.com/software/tivoli/education
Support information
If you have a problem with your IBM software, you want to resolve it quickly. IBM
provides the following ways for you to obtain the support you need:
v Searching knowledge bases: You can search across a large collection of known
problems and workarounds, Technotes, and other information.
v Obtaining fixes: You can locate the latest fixes that are already available for your
product.
v Contacting IBM Software Support: If you still cannot solve your problem, and
you need to work with someone from IBM, you can use a variety of ways to
contact IBM Software Support.
For more information about these three ways of resolving problems, see the
appendix on support information in Tivoli Workload Scheduler: Troubleshooting Guide.
How to read the syntax diagrams
[Example syntax diagram showing the AVAIL, DEVIATION, QUANTITY, CREATE, and
TRACE keywords: AVAIL(KEEP|RESET) and QUANTITY(amount|RESET) default to KEEP,
CREATE(YES|NO) defaults to YES, DEVIATION takes an amount, and TRACE(trace
level) defaults to 0.]
Read the syntax diagrams from left to right and from top to bottom, following the
path of the line.
v Optional items appear below the main path:
  [Syntax diagram: STATEMENT with an optional item below the main path.]
v An arrow returning to the left above the item indicates an item that you can
repeat. If a separator is required between items, it is shown on the repeat arrow.
v If you can choose from two or more items, they appear vertically in a stack.
– If you must choose one of the items, one item of the stack appears on the
main path.
– If choosing one of the items is optional, the entire stack appears below the
main path:
  [Syntax diagram: STATEMENT with a stack of optional choice 1 and optional
  choice 2 below the main path.]
– A repeat arrow above a stack indicates that you can make more than one
choice from the stacked items:
  [Syntax diagram: STATEMENT with a repeatable stack of optional choice 1,
  optional choice 2, and optional choice 3 below the main path.]
v Parameters that are above the main line are default parameters:
  [Syntax diagram: STATEMENT with default above the main line and alternative
  below it, followed by option 1 and option 2, each with its default value above
  the line and an alternative value in parentheses below it.]
| Installation improvements
| The installation process is significantly improved by the following enhancements:
| Common shared architecture for Tivoli Workload Scheduler and Tivoli Dynamic
| Workload Console
| Starting from this version, Tivoli Workload Scheduler and the Tivoli
| Dynamic Workload Console are capable of reciprocal discovery. When one
| of them is found already installed, the installer for the other offers the
| option to optimize the installation with shared middleware and a
| common installation layout. If this option is selected, the two components
| share the same embedded WebSphere Application Server. As a result,
| disk space and memory consumption are greatly reduced, and access to
| the administration tools used to browse and collect administration
| information is much simpler.
| The installation interfaces for the two components now share the same
| look and feel. The consistency in messages and labels for these two
| components is greatly improved. In addition, all the log and trace files will
| be located in the same directory. Problem determination and
| troubleshooting are greatly simplified by this and by the possibility to
| share the same tools to manage the installed middleware.
| The revised Planning and Installation Guide now includes all the installation
| documentation for both Tivoli Workload Scheduler and Tivoli Dynamic
| Workload Console, including installation troubleshooting and messages.
| Enhanced installation launchpad
| The installation launchpad is an alternative way to install or upgrade
| Tivoli Workload Scheduler and Tivoli Dynamic Workload Console, or
| install Job Scheduling Console and Tivoli Enterprise Console. It is a
| powerful tool that guides you through the installation process by
| providing information about the shared infrastructure and simplifying
| Variable tables
| Variable tables are new scheduling objects that can help you to reuse job and job
| stream definitions.
| In Tivoli Workload Scheduler version 8.5, what were previously called global
| parameters are now referred to as variables. You can place your defined variables
| into variable tables, which are simply collections of variables. You can define
| variables with the same name and different values and save them in different
| tables. You can then use these tables in job stream and workstation definitions and
| in run cycles to provide different input values according to which table is in use.
| You can use variable tables in the definitions of run cycles, job streams, and
| workstations. This means that you can specify a table to be used for resolving
| variables when workload instances are generated from these objects. Likewise, you
| can specify the variable_table_name.variable_name combination in submit
| operations.
| There is one default variable table, and this is where all the variables previously
| defined in the database are placed upon migration from previous versions. If
| you choose not to use variable tables, all the variables you define are nevertheless
| included in the default variable table. The default table is used to resolve
| variables if no other tables are defined or specified together with the variables in
| the job definitions.
| The benefit of using variable tables is that you can more easily replace variable
| input data in job definitions used as templates for jobs running in more than one
| job stream. This is explained by the following examples:
| v You run the same job stream on Mondays and Fridays, but on Mondays it uses
| different input files from Fridays, because the same work is run on different
| data. To do this, you create a single job stream with two run cycles, referencing
| two different variable tables. You then define file dependencies on jobs using
| variables, forcing instances generated on Mondays to use different variable
| values from the instances generated on Fridays.
| v For security reasons, a job must run in a specific job stream with restricted
| permissions compared to the ones that are normally applied to the same job in
| other job streams. An easy way to accomplish this, when specifying the user
| logon in the job definition, is to use a variable name and then set the
| unrestricted user value for that variable name in the default table, while setting
| the user with restricted permissions as a value for the same variable name in a
| separate table. By assigning this table to the job stream that needs to run the job
| with restricted permissions, you can address the problem without having to
| create a duplicate job definition for that specific job stream.
| v A number of jobs use the SQLPLUS utility to run SQL reports on Oracle databases.
| Jobs that run on different workstations might need to find SQLPLUS in different
| installation paths. Hard-coding the path to the Oracle instance in the task string
| of each job would bind each job to a workstation and require a change in the
| task string if the job definition is moved from one workstation to another. You
| can define a specific variable table to each workstation on which Oracle is
| installed, define a variable for the Oracle installation path, and use that variable
| in the task string of each job that runs SQLPLUS. At plan time, each task string
| is determined by resolving the Oracle installation path against the variable
| table that is assigned to the workstation on which each job runs.
| v You want to test the scheduling workload on production machines without
| running real jobs, but just test ones. To do this, you use variables in the
| definition of the task string of each job, and then set up two different variable
| tables: one with the real tasks and one with test tasks. When you set the first
| table as default and you extend the plan with JnextPlan, you are actually
| creating the production plan. When you set the second table as default, at the
| next plan extension you are creating the test plan. By having the ability to
| manage different tables of variables, you no longer need to change all values in
| job definitions, and do not lose the values to be restored when you decide to
| switch back to the production plan.
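The Monday/Friday example above might be set up with definitions along the following lines. This is only a sketch: the object names are invented, and the exact composer syntax for variable tables and variable references should be checked against the Tivoli Workload Scheduler: User's Guide and Reference.

```
VARTABLE TABLE_MON
 MEMBERS
  INPUTFILE "/data/monday/input.dat"
END

VARTABLE TABLE_FRI
 MEMBERS
  INPUTFILE "/data/friday/input.dat"
END
```

Each of the two run cycles in the job stream then references one of the tables, and the jobs declare their file dependency through the variable (shown here as ^INPUTFILE^, a reference syntax that should also be verified), so that instances generated on Mondays and on Fridays resolve to different files. In submit operations you can also qualify a variable with its table, as in TABLE_FRI.INPUTFILE.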
| Workload Service Assurance
| Job schedulers can use the Tivoli Workload Scheduler command line or the Tivoli
| Dynamic Workload Console to flag jobs as mission-critical and to specify their
| deadlines. A critical job and all its predecessors make up what is called a critical
| network. At planning time, Tivoli Workload Scheduler calculates the start time of
| the critical job and of each of its predecessors starting from the critical job deadline
| and estimated duration. While the plan runs, this information is dynamically kept
| up-to-date based on how the plan is progressing. If a predecessor, or the critical job
| itself, is becoming late, Tivoli Workload Scheduler automatically prioritizes its
| submission and promotes it to get more system resources and thus meet its
| deadline.
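In composer terms, flagging a job as mission-critical with a 5 a.m. deadline might look like the following sketch. The workstation, job stream, and job names are invented, and the exact keywords should be verified in the reference documentation.

```
SCHEDULE CPU1#NIGHTLY
ON RUNCYCLE DAILY "FREQ=DAILY;"
:
CPU1#BILLING
 CRITICAL
 DEADLINE 0500
END
```

At planning time, the scheduler works backward from the deadline through every predecessor of CPU1#BILLING to compute the critical start times that are then tracked while the plan runs.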
| The Tivoli Dynamic Workload Console provides specialized views for tracking the
| progress of critical jobs and their predecessors. Job schedulers and operators can
| access the views from the Dashboard or by creating Monitor Critical Jobs tasks.
| The initial view lists all critical jobs for the engine, showing the status: normal,
| potential risk, or high risk. From this view, an operator can navigate to see:
| v The hot list of jobs that put the critical deadline at risk.
| v The critical path.
| v Details of all critical predecessors.
| v Details of completed critical predecessors.
| v Job logs of jobs that have already run.
| Using the views, operators can monitor the progress of the critical network, find
| out about current and potential problems, release dependencies, and rerun jobs.
| For example:
| 1. To flag a job as critical and follow it up, the job scheduler opens the Workload
| Designer in the Tivoli Dynamic Workload Console, marks the specific job as
| critical, and sets the deadline to 5 a.m.
| When JnextPlan is run, the critical start dates for this job, and all the jobs that
| are identified as its predecessors, are calculated.
| 2. To track a specific critical job, the operator proceeds as follows:
| a. The operator checks the dashboards and sees that there are critical jobs
| scheduled on one of the engines.
| b. He clicks the link to get a list of the critical jobs.
| The specific job shows a Potential Risk status.
| c. He selects the job and clicks Hot List to see the predecessor job or jobs that
| are putting the critical job at risk.
| One of the predecessor jobs is listed as being in error.
| d. He selects the job and clicks Job log.
| The log shows that the job failed because of incorrect credentials for a
| related database.
| e. After discovering that the database password was changed that day, he
| changes the job definition in the symphony file and reruns the job.
| f. When he comes back to the dashboard, he notices that there are no longer
| any jobs in potential risk. Also, the critical jobs list that was opened when
| clicking on the potential risk link no longer shows the critical job after the
| job is rerun.
| g. The job is now running after being automatically promoted, getting higher
| priority for submission and system resources.
| h. No further problems need fixing, and the critical job finally completes at
| 4:45 a.m.
| IBM Support Assistant can be downloaded for free from the IBM software support
| Web page.
| See IBM Tivoli Workload Scheduler: Integrating with Other Products for information on
| how Tivoli Workload Scheduler integrates with these and other products.
|
| New Tivoli Dynamic Workload Console features
| Tivoli Dynamic Workload Console version 8.5 includes modeling and topology
| functions. You can now use a single graphical user interface to model and monitor
| your workload environment.
| A Workload Designer was added for modeling, covering all the use cases related to
| creating and maintaining the definition of the automation workload, such as jobs,
| job streams, resources, variable tables, workstation classes, and their dependencies.
| You can now use the Tivoli Dynamic Workload Console to define Tivoli Workload
| Scheduler topology objects, covering all the use cases related to the definition and
| maintenance of the scheduling environment, such as workstations and domains.
|
| Tivoli Workload Scheduler for Applications features
| This section describes the features introduced with Tivoli Workload Scheduler for
| Applications version 8.4 Fix Pack 1:
| v “Support for Internet Protocol version 6 (IPv6)” on page 6
| v “New installation launchpad” on page 6
| v “Displaying and rerunning of SAP process chains” on page 6
| v “Balancing of SAP workload using server groups” on page 6
| v “Exporting of SAP factory calendars” on page 6
| The launchpad must be accessed with one of the following Web browsers:
| v Mozilla, version 1.7 and later
| v Firefox, version 1.0 and later
| v Microsoft Internet Explorer Version 5.5 and later (for Windows operating
| systems only)
| Scheduler composer command line. As a consequence, you can import the calendar
| definitions from SAP into the Tivoli Workload Scheduler database, so that the
| Tivoli Workload Scheduler calendars become synchronized with the existing
| SAP factory calendars.
| To define SAP events as internetwork dependencies, XBP versions 2.0 and 3.0 are
| supported, with the following differences:
| XBP version 2.0
| SAP events can release Tivoli Workload Scheduler internetwork dependencies
| only if the dependencies are created or checked before the SAP event is raised.
| The event history is ignored; therefore, an SAP event raised before the
| internetwork dependency is created is not considered.
| XBP version 3.0 (supported by SAP NetWeaver 7.0 with SP 9, or later)
| Only the SAP events stored in the SAP event history table are considered by
| Tivoli Workload Scheduler to check for internetwork dependencies resolution.
| As a prerequisite, the SAP administrator must create the appropriate event
| history profiles and criteria on the target SAP system.
| You can also create event rules based on one or more SAP events, to define a set of
| actions that are to run when specific event conditions occur. The definition of an
| event rule correlates events and triggers actions. Whenever you define an event
| rule based on an SAP event in your Tivoli Workload Scheduler plan, that event is
| monitored by Tivoli Workload Scheduler.
| Monitoring SAP events is allowed only if you use XBP version 3.0, or later. Tivoli
| Workload Scheduler monitors two types of SAP events:
| Events defined by the SAP system
| The events that are triggered automatically by system changes, for example
| when a new operation mode is activated. This type of event cannot be modified
| by the user.
| Events defined by the user
| The events that are triggered by ABAP or external processes, for example when
| a process triggers an SAP event to signal that external data has arrived and
| must be read by the SAP system.
|
| Summary of product enhancements for Tivoli Workload Scheduler for
| z/OS
| This section lists the product enhancements released by PTF for Tivoli Workload
| Scheduler for z/OS version 8.3 since March 2007.
| PK40356 - End-to-end Password Encryption
| This fix enhances the way security is handled for Tivoli Workload
| Scheduler end-to-end users scheduling on Windows workstations. This
| enhancement provides two ways to avoid having the Windows password
| in plain text when defining the USRPSW parameter of the USRREC
| initialization statement for scheduling on Windows workstations:
| Centralized management of users and passwords for Windows
| workstations
| An encryption utility encrypts any user password specified in the
| USRREC definitions. The new EQQBENCR member in the SEQQSAMP
| library provides a JCL sample to run the utility.
| Handling of users and passwords locally on the Windows workstation
| You can select this solution by using the LOCALPSW(YES) parameter
| of the TOPOLOGY initialization statement. It enables the fault-tolerant
| agent to look for users and passwords in a local file, at run time.
| Use the new users utility on the Windows workstation to create and
| manage Windows users' credentials locally.
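Before this fix, a USRREC definition carried the Windows password in plain text, along the lines of the following sketch (the workstation, user name, and password values are invented):

```
USRREC USRCPU(FTW1)
       USRNAM('twsuser')
       USRPSW('mypassword')
```

With this enhancement, you can instead specify in USRPSW the encrypted value produced by the encryption utility, or set LOCALPSW(YES) in the TOPOLOGY statement so that the fault-tolerant agent looks up the credentials in a local file at run time.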
| PK40969 - Using TCP/IP connection among components
| This fix provides TCP/IP support for the communication between the
| following product components:
| Controller and tracker
| Provides new parameters in the ROUTOPTS and TRROPTS initialization
| statements to define TCP/IP destinations.
| Controller and data store
| Provides new parameters in the FLOPTS and DSTOPTS initialization
| statements to define TCP/IP destinations.
| Remote dialog and server
| The EQQXLUSL panel flow allows users to define the remote host
| name and port number of a server that specifies PROTOCOL(TCP) in
| the SERVOPTS initialization statement.
| Programming Interfaces (PIF) and server
| With the INIT initialization statement and INIT PIF request, users
| can define the remote host name and port number of a server that
| specifies PROTOCOL(TCP) in the SERVOPTS initialization statement.
| The TCP/IP connection support includes Batch Command Interface
| Tool (BCIT), Operation Control Language (OCL), and Batch Loader.
| PK41519 - Tivoli Workload Scheduler for z/OS Reporting
| This fix provides back-end support for the Tivoli Dynamic Workload
| Console reporting feature, collecting historical workload data and archiving
| it in dedicated tables in a DB2 database. The console process retrieves the
| archived data and returns it using the Business Intelligence and Reporting
| Tools (BIRT) Report Viewer.
| Using Reporting with Tivoli Workload Scheduler for z/OS, you archive
| historical data about jobs and workstation workloads in the DB2 database.
| The historical data consists of old current plan data collected at daily
| planning time.
| For example, you can extend or replan your current plan more than once a
| day but decide to run the archiving process only once, selecting the new
| ARCHIVE option from the daily planning dialog. This triggers the
| submission of a batch job that archives the historical data in the DB2
| database, using Java Data Base Connectivity (JDBC) services.
| PK46296 - Virtual workstation
| This fix improves the scheduler workload balancing and monitoring of
| system availability. It automatically directs the submission of workload to
| different destinations, removing the need to associate a workstation with a
| specific destination. You can define a list of destinations for the submission
| of workload and the scheduler distributes the workload to
| automatically-selected active destinations, according to a round-robin
| scheduling approach.
| You can activate this new capability by specifying the new virtual option at
| workstation definition level. This option is allowed for computer
| workstations with the automatic reporting attribute, and is supported by
| all the interfaces available to define, modify, and monitor workstations.
| Using virtual workstations the scheduler distributes the workload across
| your trackers evenly, thus avoiding bottlenecks when submitting or
| running jobs. In fact, the scheduler splits the workload among the available
| destinations, so that the Job Entry System (JES) and Workload Manager
| (WLM) do not find overloaded input queues when selecting jobs for their
| action.
| PK46532 - NOERROR improvement
| This fix improves the NOERROR specification rules by extending the set of
| conditions that determine whether a job is to be treated as not in error.
| It provides the following new capabilities:
| v Extended format for the error code entry, both in the NOERROR statement
| and in the NOERROR parameter of the JTOPTS statement. In particular, you
| can:
| – Specify a range of conditions rather than a single condition.
| – Use relational operators in the specified conditions.
| v A new extended operation status, which indicates that the job error code
| matched a NOERROR condition.
| v Listing of the currently active NOERROR statements, using the new LSTNOERR
| option of the MODIFY console command directed at the scheduler.
| PK58520 - Dynamic critical path support
| This fix provides the dynamic handling of the critical path calculated by
| the daily planning batch jobs process.
| The critical path is the path, within a network of jobs, with the least slack
| time.
| The slack time, in a critical job predecessor path, is the amount of time that
| processing of the predecessor jobs can be delayed without exceeding the
| deadline of a critical job. It is the spare time calculated using the deadline,
| input arrival, and duration settings of predecessor jobs.
| The new capabilities include:
| All domain managers and agents of Tivoli Workload Scheduler, version 8.5 can
| coexist with the following:
| v All domain managers and agents of Tivoli Workload Scheduler, version 8.2, 8.2.1,
| 8.3, and 8.4
| v Tivoli Workload Scheduler for z/OS, versions 8.1, 8.2, and 8.3
| v Tivoli Workload Scheduler for Applications, versions 8.2, 8.2.1, 8.3, and 8.4
| v Tivoli Workload Scheduler for Virtualized Data Centers, version 8.2
| Notes:
| 1. Basic interoperability is provided with the GA versions of 8.2, 8.2.1, 8.3, and
| 8.4, but to obtain full interoperability, apply the latest fix packs to these
| versions to take advantage of the improvements in functionality and
| performance that they provide.
| 2. Any new features in version 8.5 that require information from another node
| might not work correctly if the other node is not at version 8.5. However, the
| planning and modeling features provided with the version 8.3 master domain
| manager can be immediately implemented in a mixed network without
| requiring an upgrade of the version 8.2 and 8.2.1 agents to version 8.3 and
| later.
| 3. The change in the job stream instance naming convention, introduced with
| Tivoli Workload Scheduler version 8.3, imposes the following restrictions when
| issuing command-line commands against a plan generated on a master running
| Tivoli Workload Scheduler version 8.3 and later from Tivoli Workload
| Scheduler version 8.2, or 8.2.1 agents:
| v You must use the @ (at) symbol as the first character for the job stream
| instance identifier. For example, the job stream running on workstation CPU1
| with identifier 0AAAAAAAAAAAAY3 must be identified in the conman command
| line as follows:
| CPU1#@AAAAAAAAAAAAY3
| v You cannot use the follows keyword when you add a dependency to a job
| or a job stream when you submit a command or a file as a job.
| v You cannot use the into keyword to specify the job stream where the job
| must be added when you submit a command or a file as a job.
| For example, to display the information about the job2 job included in the
| job stream instance having 0AAAAAAAAAAAAT1 as identifier and running on
| workstation CPU1, run the following command on Tivoli Workload Scheduler
| version 8.2, or 8.2.1 agents:
| sj CPU1#@AAAAAAAAAAAAT1.job2
| These changes will also be seen in reports, logs, and any other places where job
| stream names are printed or displayed.
| 4. In mixed environments, the dumpsec utility must not be used on a version 8.2
| or 8.2.1 agent with a version 8.5 security file, because it causes a core dump. If
| you need to dump the security file to a flat file, do it on a version 8.5 agent.
| Interoperability tables
| The following tables show what associations are possible among component
| versions for:
| v Tivoli Workload Scheduler
| v Tivoli Workload Scheduler for Applications
| v Tivoli Workload Scheduler for z/OS
| v Distributed connector
| v z/OS connector
| v Job Scheduling Console
| v Tivoli Dynamic Workload Console
| v Tivoli Dynamic Workload Broker
| Note: Tivoli Workload Scheduler and Distributed connector versions must always
| be aligned to the fix pack level.
| Table 3 shows which versions of Tivoli Workload Scheduler for z/OS and of the
| z/OS connector, Job Scheduling Console, and Tivoli Dynamic Workload Console
| can work together:
| Table 3. Interoperability table for Tivoli Workload Scheduler for z/OS
|
| Tivoli Workload Scheduler for z/OS 8.1
|   z/OS connector: 8.3, 8.3.02 and later
|   Job Scheduling Console: 8.4.x, 8.3.x
|   Tivoli Dynamic Workload Console: not supported
|
| Tivoli Workload Scheduler for z/OS 8.2 (with or without APAR PK33565)
|   z/OS connector: 8.3, 8.3.02 and later (8.3.04 is required to work with
|   Tivoli Dynamic Workload Console version 8.5)
|   Job Scheduling Console: 8.4.x, 8.3.x
|   Tivoli Dynamic Workload Console: 8.5, 8.4, 8.3
|
| Tivoli Workload Scheduler for z/OS 8.3
|   z/OS connector: 8.3.02 and later (8.3.04 is required to work with Tivoli
|   Dynamic Workload Console version 8.5)
|   Job Scheduling Console: 8.3.02 and later. Note: The new functions of
|   Tivoli Workload Scheduler for z/OS version 8.3 are supported only on
|   versions 8.3.02 and later.
|   Tivoli Dynamic Workload Console: 8.5, 8.4, 8.3. Note: Version 8.3 of the
|   scheduler is used in compatibility mode with 8.2. This means that even
|   though Tivoli Workload Scheduler for z/OS version 8.3 is installed, only
|   the version 8.2 functions can be used.
|
| Tivoli Workload Scheduler for z/OS 8.3 + APAR PK41519, or 8.3 + APAR PK46296
|   z/OS connector: 8.3.03 and later (8.3.04 is required to work with Tivoli
|   Dynamic Workload Console version 8.5)
|   Job Scheduling Console: 8.3.02 and later. Note: The new functions of
|   Tivoli Workload Scheduler for z/OS version 8.3 are supported only on
|   versions 8.3.02 and later.
|   Tivoli Dynamic Workload Console: 8.5, 8.4, 8.3. Note: The features added
|   with APARs PK46296 (Virtual workstations) and PK41519 (Reporting) are
|   supported only on versions 8.4 and later.
|
| Tivoli Workload Scheduler for z/OS 8.3 + APAR PK58520
|   z/OS connector: 8.3.04 and later
|   Job Scheduling Console: 8.3.02 and later. Note: The new functions of
|   Tivoli Workload Scheduler for z/OS version 8.3 are supported only on
|   versions 8.3.02 and later.
|   Tivoli Dynamic Workload Console: 8.5, 8.4, 8.3. Note: The Dynamic
|   Critical Path (Workload Service Assurance) feature added with APAR
|   PK58520 is supported only on versions 8.4.01 and later.
|
| Note: Tivoli Dynamic Workload Broker and Tivoli Workload Scheduler for
| Applications can be accessed from Tivoli Workload Scheduler for z/OS in
| end-to-end configurations with Tivoli Workload Scheduler.
Users and owners of DP services are also making more use of batch services than
ever before. The batch workload tends to increase each year at a rate slightly below
the increase in the online workload. Combine this with the increase in data use by
batch jobs, and the end result is a significant increase in the volume of work.
Furthermore, there is a shortage of people with the required skills to operate and
manage increasingly complex DP environments. The complex interrelationships
between production activities, between manual and machine tasks, have become
unmanageable without a workload management tool.
When the portfolio interfaces with other system management products, it forms
part of an integrated automation and systems management platform for your DP
operation.
The portfolio interfaces directly with some of the z/OS products as well as with a
number of other IBM products to provide a comprehensive, automated processing
facility and an integrated approach for the control of complex production
workloads.
NetView. NetView is the IBM platform for network management and automation.
You can use the interface for Tivoli Workload Scheduler for z/OS with NetView to
pass information about the work that is being processed. The portfolio lets you
communicate with NetView in conjunction with the production workload
processing. Tivoli Workload Scheduler for z/OS can also pass information to
NetView for alert handling in response to situations that occur while processing
the production workload. NetView can automatically trigger Tivoli Workload
Scheduler for z/OS to perform actions in response to these situations using a
variety of methods. Tivoli Workload Scheduler/NetView is a NetView application
that gives network managers the ability to monitor and diagnose Tivoli Workload
Scheduler networks from a NetView management node. It includes a set of
submaps and symbols to view Tivoli Workload Scheduler networks
topographically and determine the status of job scheduling activity and critical
Tivoli Workload Scheduler processes on each workstation.
Resource Object Data Manager (RODM). RODM provides a central location for
storing, retrieving, and managing the operational resource information needed for
network and systems management. You can map a special resource to a RODM
object. This lets you schedule the production workload considering actual resource
availability, dynamically updated.
Tivoli Decision Support for z/OS (Decision Support). Decision Support helps you
effectively manage the performance of your system by collecting performance data
in a DATABASE 2 (DB2®) database and presenting the data in a variety of formats
for use in systems management. Decision Support uses data from Tivoli Workload
Scheduler for z/OS to produce summary and management reports about the
production workload, both planned and actual results.
Output Manager for z/OS. Output Manager helps customers increase productivity and reduce the
costs of printing by providing a means for storing and handling reports in a z/OS
environment. When a dialog user requests to view a job log or to automatically
rebuild the JCL for a step-level restart, Tivoli Workload Scheduler for z/OS
interfaces with Output Manager. This interface removes the requirement to
duplicate job log information, saving both CPU cycles and direct access storage
device (DASD) space.
Resource Access Control Facility (RACF®). RACF is the IBM product for data
security. You can use RACF as the primary tool to protect your Tivoli Workload
Scheduler for z/OS services and data at the level required by your enterprise. With
RACF 2.1 and later, you can use a Tivoli Workload Scheduler for z/OS reserved
resource class to protect your resources.
Tivoli System Automation for z/OS (SA z/OS). SA z/OS initiates automation
procedures that perform operator functions to manage z/OS components, data
sets, and subsystems. SA z/OS includes an automation feature for Tivoli Workload
Scheduler for z/OS. You can define an automation workstation in Tivoli Workload
Scheduler for z/OS to handle system automation operations with a specific set of
options.
Chapter 3. Overview 17
Overview
Tivoli Business Systems Manager. Tivoli Business Systems Manager receives the
objects generated by the portfolio and seamlessly integrates these objects with all
other business objects monitored by Tivoli Business Systems Manager.
Besides these IBM products, there are many products from other software vendors
that work with or process data from the portfolio.
Automation
By automating management of your production workload with the portfolio, you
can minimize human errors in production workload processing and free your staff
for more productive work. The portfolio helps you plan, drive, and control the
processing of your production workload. These are important steps toward
automation and unattended operations. Whether you are running one or more
systems at a single site, or at several distributed sites, the portfolio helps you
automate your production workload by:
v Coordinating all shifts and production work across installations of all sizes, from
a single point of control
v Automating complex and repetitive operator tasks
v Dynamically modifying your production workload schedule in response to
changes in the production environment (such as urgent jobs, changed priorities,
or hardware failures) and then managing the workload accordingly
v Resolving workload dependencies
v Managing utilization of shared resources
v Tracking each unit of work
v Detecting unsuccessful processing
v Displaying status information and instructions to guide operations personnel in
their work
v Interfacing with other key IBM products to provide an integrated automation
platform
The portfolio lets you centralize and integrate control of your production workload
and reduces the number of tasks that your staff need to perform.
Workload monitoring
Besides providing a single point of control for the production workload across
your systems, the portfolio:
v Monitors the production workload in real time, providing operations staff with
the latest information on the status of the workload so that they can react
quickly when problems occur.
v Provides security interfaces that ensure the protection of your services and data.
v Enables manual intervention in the processing of work.
v Reports the current status of your production workload processing.
v Provides reports that can serve as the basis for documenting your service level
agreements with users. Your customers can see when and how their work is to
be processed.
Productivity
The portfolio represents real productivity gains by ensuring fast and accurate
performance through automation. Many of today’s automation solutions quote
unrealistic productivity benefits. Some of the tasks automated should never be
performed, or certainly not as often as they are by automation. Because of this, it is
difficult to correlate real productivity benefits to your enterprise.
The tasks the portfolio performs not only have to be performed, but have to be
performed correctly, every time, and as quickly as possible. Many of these tasks,
traditionally performed by DP professionals, are tedious and as a result prone to
error. With the portfolio, your DP staff can use their time more efficiently.
Business solutions
The portfolio provides business solutions by:
v Driving production according to your business objectives
v Automating the production workload to enhance company productivity
v Providing you with information about current and future workloads
v Managing a high number of activities efficiently.
User productivity
Your DP staff and end users can realize significant productivity gains through the
portfolio’s:
v Fast-path implementation.
v Immediate response to dialog requests for workload status inquiries. Users are
provided with detailed real-time information about production workload
processing so that they can detect and promptly correct errors.
v Automation of operator tasks such as error recovery and data set cleanup.
v Job Scheduling Console with its easy-to-use graphical user interface and
sophisticated online help facilities.
Growth incentive
As you implement automation and control you can manage greater production
workload volumes. The portfolio supports growth within your DP operation.
This section describes how the portfolio can directly benefit your DP staff.
Role of the application programmer
The portfolio can be a valuable tool for application development staff when they
are doing the following:
v Packaging new applications for the production environment
v Testing new JCL in final packaged form
v Testing new applications and modifying existing ones
Console operators
The portfolio can free console operators from the following time-consuming tasks:
v Starting and stopping started tasks
v Preparing JCL before job submission
v Submitting jobs
v Verifying the sequence of work
v Reporting job status
v Performing data set cleanup in recovery and rerun situations
v Responding to workload failure
v Preparing the JCL for step-level restarts.
Workstation operators
The portfolio helps workstation operators do their work by providing the
following:
v Complete and timely status information
v Up-to-date ready lists that prioritize the work flow
v Online assistance in operator instructions.
End users and the service desk
The help desk can use the Tivoli Dynamic Workload Console in the same way to
answer queries from end users about the progress of their workload processing.
Summary
Tivoli Workload Automation communicates with other key IBM products to
provide a comprehensive, automated processing facility and an integrated solution
for the control of all production workloads. Here are the benefits that the portfolio
offers you:
Increased automation
Increases efficiency and uses DP resources more effectively, resulting in
improved service levels for your customers.
Improved systems management integration
Provides a unified solution to your systems management problems.
More effective control of DP operations
Lets you implement change and manage growth more efficiently.
Increased availability
Is made possible by automatic workload recovery.
Opportunities for growth
Are made possible by your ability to manage greater workload volumes.
Investment protection
Is made easier by building on your current investment in z/OS and
allowing existing customers to build on their existing investment in
workload management.
Improved customer satisfaction
Is achieved thanks to higher levels of service and availability, fewer errors,
and faster response to problems.
Greater productivity
Results because repetitive, error-prone tasks are automated and operations
personnel can use their time more efficiently.
Integration of multiple operating environments
Provides a single controlling point for the cooperating systems that
comprise your DP operation.
The processes described in ITUP are strongly aligned with the Information
Technology Infrastructure Library (ITIL) which is based on best practices observed
within the IT industry. ITIL provides high-level guidance of what to implement,
but not how to implement. ITUP contains detailed process diagrams and
descriptions to help users understand processes and their relationships, making
ITIL recommendations easier to follow.
ITUP is based on the IBM Process Reference Model for IT (PRM-IT), which was
jointly developed by IBM Global Services and Tivoli. PRM-IT provides detailed
process guidance for all activities that fall under the office of the CIO, including,
but not limited to, IT Service Management.
Workload management has the target to maximize the utilization of task execution
resources and to minimize the total time that is required to deliver the output of
task processing. This activity operates at both a macro-level and micro-level to
prepare work schedules and to pre-process work items where necessary so that the
delivery resources can be matched to the demands of the flow of work in an
optimal fashion.
The minimum set of object definitions that are required to produce a workload
consists of a workstation, a job, and a job stream. Other required scheduling objects
might be predefined and exist by default.
A job is the representation of a task (an executable file, program, or command) that
is scheduled and launched by the scheduler. The job is run by a workstation and,
after running, has a status that indicates if the run was successful or not. A job
definition can specify information on what to do whenever its run was not
successful. Jobs not included in a job stream have no attributes for running; they
are only the description of a task, with a definition of how to perform it in a form
that is known to the specified workstation.
A job stream is a container that organizes related jobs in terms of run time,
sequencing, concurrency limitations, repetitions, priority, resource assignment, and
so on. Job streams are the macro elements of the workload that you manage.
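These concepts can be pictured with a minimal sketch, in Python and purely for illustration: the class names, attributes, and status values are invented for this example and are not the product's actual object model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Job:
    """A task definition: what to run, and what to do if the run fails."""
    name: str
    command: str                           # executable file, program, or command
    recovery_action: Optional[str] = None  # e.g. "rerun", "continue", "stop"
    status: str = "READY"                  # updated after the job runs

@dataclass
class JobStream:
    """A container that organizes related jobs for scheduling."""
    name: str
    priority: int = 10
    jobs: List[Job] = field(default_factory=list)

    def add(self, job: Job) -> "JobStream":
        self.jobs.append(job)
        return self

# A stream of related jobs, one of which specifies a recovery action.
stream = JobStream("ORDERS_DAILY").add(
    Job("LOAD_ORDERS", "/opt/apps/load_orders.sh", recovery_action="rerun"))
```

A job on its own is only a description; it is the job stream that gives it a scheduling context.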
The scheduling plan is the to-do list that tells Tivoli Workload Scheduler or Tivoli
Workload Scheduler for z/OS what jobs to run, and what dependencies must be
satisfied before each job is launched. Tivoli Workload Scheduler or Tivoli Workload
Scheduler for z/OS builds the plan using the elements that are stored in the
scheduling database.
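As a rough illustration of how a plan is assembled from the objects stored in the scheduling database, consider the following sketch. It is not the scheduler's real algorithm; the dictionary layout and all names are invented.

```python
# Illustrative sketch only: extract a day's plan from job streams stored in a
# scheduling "database" (a plain dict here).
def build_plan(database, day):
    plan = []
    for stream in database["job_streams"]:
        if day in stream["run_days"]:
            for job in stream["jobs"]:
                # each job enters the plan held until its dependencies clear
                plan.append({"stream": stream["name"], "job": job,
                             "state": "HOLD"})
    return plan

db = {"job_streams": [
    {"name": "DAILY_BACKUP", "run_days": {"MON", "TUE", "WED", "THU", "FRI"},
     "jobs": ["BACKUP_DB2", "VERIFY_BACKUP"]},
    {"name": "WEEKLY_REORG", "run_days": {"SUN"}, "jobs": ["REORG_DB2"]},
]}
monday_plan = build_plan(db, "MON")   # the two backup jobs, no reorg
```

The point is only that the plan is derived data: it is rebuilt from the stored definitions rather than maintained by hand.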
The running of a plan requires tracking to identify possible problems that can
impact the effective delivery of the work products. It is possible to perform the
tracking from a Web-based Java interface, the Tivoli Dynamic Workload Console,
on either of the platforms (z/OS and distributed). As an alternate interface to the
Tivoli Dynamic Workload Console on the z/OS platform you can also use the ISPF
panel interface, and on the distributed platforms you can use the command-line
interface.
The company
Fine Cola is a medium-sized enterprise that produces and distributes soft drinks to
retailers across the country. It owns a production plant and several strategically
located distribution centers. The primary customers of Fine Cola are foodstore
chains, and the quantity and size of their orders are usually regular and stable.
Order quantities, however, peak in the warmer season and during holidays.
Moreover, in the mid term, Fine Cola wants to increase its business by gaining
market share in other countries. Fine Cola's sales people are always keen to place new
orders and increase the customer portfolio. These characteristics determine Fine
Cola's production and distribution processes. Production and distribution can be
broken down into ongoing subprocesses or phases which are constantly interlocked
with each other. They are:
Inventory
Underlies the entire production process. The raw materials database is
sized on the production levels supplemented by minimum safety levels.
The production levels are in turn based on the order quantity for the
specific period.
Ordering
Raw material quantity levels must be available to production according to
the preset production levels. Orders must be planned and issued in
advance to take into account delivery times by third-party suppliers.
Production
General production levels are planned for well in advance based on
customer orders. Production is regularly increased by an additional five
percent to provide the capability to honor unplanned-for orders.
Supply
From the production plant the soft drinks are transported to the
distribution centers according to the customer delivery schedules.
Delivery
The last phase of the process. Fine Cola sodas are delivered from the
distribution centers to the customer shelves.
Inventory, ordering, and production take place in the production plant. Supply
takes place from the production plant to the distribution centers. Delivery takes
place from the distribution centers to the end destinations.
These phases are tightly bound to each other. While each soda placed on the shelf
might be regarded as the outcome of a specific sequence that starts with inventory
and terminates with delivery, all phases are actually constantly interwoven. In fact,
the same data is shared in one way or another by all or most phases, and
applications are designed to carry on the daily operations and set up future ones.
Fine Cola uses the following databases for running the above-mentioned
subprocesses:
Customer Orders
Contains all orders for the upcoming period from Fine Cola's customer
base. Provides input to:
v Inventory
Raw Materials
Contains the quantities in stock of the raw materials required to produce
Fine Cola's sodas. From here, orders are dispatched to suppliers when
stock levels reach a pre-set minimum. Receives input from:
v Production Volumes
Production Volumes
Contains the quantities of sodas that are to be produced daily according to
order volumes. Provides input to:
v Inventory
v Raw Materials
Receives input from:
v Inventory
Inventory
Contains the quantities in stock of the finished product. Is monitored to
verify that the quantities in stock are sufficient to honor the orders of a
specific time interval. Provides input to and receives input from:
v Production Volumes
v To Supply
To Supply
Contains the quantities of sodas that must be sent periodically from the
manufacturing plant to the distribution centers to satisfy foodstore orders
for the upcoming period. Provides input to:
v Inventory
v To Deliver
To Deliver
Contains the quantities that are to be delivered from each distribution
center to the foodstores in its area. Provides input to:
v Customer Orders
Receives input from:
v To Supply
These core applications are highly relevant for the profitability of the company and
also directly influence customer satisfaction.
To create added value and exceed customers' expectations, the company must
strengthen integration with business applications and provide complete scheduling
capabilities and tighter integration with enterprise applications.
The challenge
Currently the databases are not automatically integrated with each other and need
continual human intervention to be updated. This affects Fine Cola’s operations
because:
v The process as a whole is onerous and prone to error.
v The interfaces between phases are slow and not very efficient.
The company realizes it needs to better integrate with the distribution centers
because processing is extremely slow during regular office hours in the warmer
seasons and during holidays. Users experience applications freezing, often taking
considerable time before being available for them to use again. This lack of
integration is causing problems for the organization in terms of lost productivity,
while applications come back online. This is a problem because the interruption of
important processing is not acceptable when the company wants to expand the
business. The response time for service level agreements (SLAs) must continue to
be met if a resource goes down, a workstation breaks, or there is urgency for
maintenance, and even more during peak periods even if the resources are
geographically distributed. On the other hand, the company does not want to buy
new IT resources (hardware, software, applications) because they would not be
used during the other periods of the year.
Fine Cola realizes that their main weakness lies in their processing. They need to
implement a solution that:
v Integrates the data behind their processing workflow from inventory to
distribution. This makes it possible to automatically trigger the daily operations
without much need for human intervention. It also gives Fine Cola complete
control over the entire business process, reducing human intervention only to
exception handling.
v Integrates external data coming from third parties, such as selected customers
and raw material suppliers, into their process flow. Such data is provided to Fine
Cola in several formats and from different applications and should be integrated
into Fine Cola's databases in a seamless manner.
v Enables daily backups of their data as well as subsequent reorganization of the
DB2 database with as little impact as possible on their processes. Processing of
data collected online during the previous day is the next step.
v Optimizes capacity across the IT infrastructure and runs a high workload, much
more than before, using shared resources, even if the resources are
geographically distributed.
v Ensures 24x7x365 availability of critical business services. Disaster recovery
plans are no longer sufficient because the business requires recovery within a
couple of hours, not days. Recovering from last night's tapes and recapturing lost
transactions after a system or application failure is no longer a viable option for
the company in a highly competitive market.
v Has very low probability of failure leading to maximum system reliability.
The main company goal at this time is to obtain an integrated workload solution
that can entirely choreograph its business application tasks. This means solutions
that optimize capacity across the IT infrastructure and run a tremendous workload,
much more than before, using fewer resources. For example, if the company has a
problem and a primary server does not process the workload, the company wants
to automate the quick redistribution of system resources to process workloads and
scale up or down for flawless execution. In this way the company reduces costs
because it speeds recovery time, no matter what the source. The goal is to have an
integrated workload solution that can choreograph business application tasks.
The solution
Fine Cola decides that one important step toward improving their process
execution is to adopt a solution based on automatic and dynamic workload
scheduling. The solution is based on a choice that strengthens integration with
business applications to run the following tasks:
v Read data from one database to update other databases.
v Read data from external applications, process it, and add it to the appropriate
databases.
v Provide the information necessary for the operation of every phase.
v Trigger some of the phases when predetermined thresholds are reached.
v Back up their data without interrupting production.
After analyzing the workload management products available on the market, Fine
Cola has chosen to use IBM Tivoli Workload Scheduler together with IBM Tivoli
Dynamic Workload Broker, because they can:
v Optimize and automate the tasks to process their applications and dynamically
adapt their processing in response to changes in the environment.
v Plan, choreograph, and schedule required changes to applications to minimize
the impact of changes on critical production workloads, and ensure that
workload processes are updated to reflect changes throughout asset life cycles.
v Minimize the total amount of time that is required to deliver the output of the
task resolution processes.
v Handle dependencies between tasks, data, and external applications so that the
entire workload can be managed homogeneously in the same process flow.
v Create a policy-based view of workflow automation, not just workload
automation, but cross-enterprise workflow, and direct that workflow across the
enterprise while planning, scheduling, managing, and monitoring all these
activities, dynamically tuning the cross-enterprise capacity to support this dynamic
view of workloads.
v Automatically transfer entire workloads across multiple platforms, and update
policies across multiple platforms.
v Balance between the ability to provide sophisticated planning, testing,
choreographing, monitoring, and adaptation of workload processes with fault
tolerance and redundancy for high availability of the scheduling infrastructure,
while minimizing server and network resource requirements.
v Perfectly integrate with each other.
Tivoli Workload Scheduler operates at both a macro-level and micro-level to
prepare work schedules and to preprocess work items where necessary so that the
delivery resources can be matched to the demands of the flow of work in an
optimal fashion.
Tivoli Dynamic Workload Broker dynamically routes workload to the best available
resources based on application requirements and business policies. Moreover, it
optimizes the use of IT computing resources according to SLAs.
Jobs that run as a unit (such as a weekly backup application), along with the times,
priorities, and other dependencies that determine the exact order in which the jobs
run, are grouped into job streams.
Fine Cola's job streams are collections of jobs that are grouped for organizational
purposes. The jobs of any particular job stream are related because they:
v Operate toward the completion of related tasks. For example, the jobs of
Jobstream100 run tasks designed to convert incoming customer orders into
operational data.
v Might be dependent on each other. Some jobs might have to wait for the
completion of predecessor jobs before they can start running. The jobs are
usually laid out in a sequence where the outcome of a predecessor is fed to a
successor job.
v Share the same programs, applications, and databases.
v Share the same timeframes within the planning period.
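The predecessor/successor relationship described above is essentially a dependency graph: before launching jobs, a scheduler must order them so that every predecessor runs before its successors. A minimal sketch of that ordering step, with invented job names, might look like this:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Map each job to the set of predecessor jobs it follows.
follows = {
    "LOAD_ORDERS": set(),
    "VALIDATE_ORDERS": {"LOAD_ORDERS"},
    "UPDATE_INVENTORY": {"VALIDATE_ORDERS"},
    "UPDATE_PRODUCTION": {"VALIDATE_ORDERS"},
}

# static_order() yields a launch order in which every predecessor
# precedes all of its successors.
order = list(TopologicalSorter(follows).static_order())
```

This is only the ordering logic; a real scheduler also accounts for times, priorities, and resource availability when deciding what actually runs next.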
Figure 2 shows how to connect Tivoli Workload Scheduler and Tivoli Dynamic
Workload Broker.
Figure 2. Integration between Tivoli Workload Scheduler and Tivoli Dynamic
Workload Broker. The figure shows the Tivoli Workload Scheduler master domain
manager and database, the Job Scheduling Console, the Tivoli Dynamic Workload
Broker server, database, and Web Console client, and a domain manager running
the common agent and workload agent.
Using Tivoli Workload Scheduler, Fine Cola's business process is structured in the
following way:
Fine Cola sets up a long-term plan that encompasses the entire workload, spanning
job streams that run on a daily basis and job streams that have other recurrences.
From the long term plan, a current plan is extracted at the beginning of every time
unit. The time period of the current plan can be chosen to vary from some hours to
several days. Fine Cola has chosen to set their current plan on a daily basis. At the
start of every day a new daily plan is built by their workload scheduling software:
data is taken from the long term plan and from the daily plan of the previous day
to include any jobs that might not have completed.
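The carry-forward step can be pictured with a small sketch. It is illustrative only: the statuses and job names are invented, not the product's internals.

```python
# Build today's plan from the long-term plan entries for today plus any
# jobs from yesterday's plan that did not complete successfully.
def build_daily_plan(todays_jobs, yesterdays_plan):
    carried = [entry for entry in yesterdays_plan
               if entry["status"] != "SUCC"]       # not successful: carry over
    fresh = [{"job": name, "status": "READY"} for name in todays_jobs]
    return carried + fresh

yesterday = [{"job": "BACKUP_DB2", "status": "SUCC"},
             {"job": "REORG_DB2", "status": "ABEND"}]   # failed yesterday
today = build_daily_plan(["LOAD_ORDERS", "UPDATE_INVENTORY"], yesterday)
```

The failed REORG_DB2 job is carried into the new plan ahead of today's fresh work, so nothing scheduled is silently lost across plan boundaries.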
The company must also ensure that during peak periods the jobs in the critical
path are run in the required time frame. To ensure this they converted some jobs
from static definition to dynamic definition to manage the extra orders using Tivoli
Dynamic Workload Broker. With Tivoli Dynamic Workload Broker the company
can:
v Manage the automatic discovery of available resources in the scheduling
environment with their characteristics and relationships.
v Assign to the job the appropriate resources for running, based on the job
requirements and on the administration policies.
v Optimize the use of the resources by assigning to the job the required resources
based on the SLA.
v Manage and control the resource consumption and load.
v Dispatch jobs to target resources that meet the requirements to run the job.
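The matching of job requirements to discovered resources can be sketched as a simple filter. This is illustrative only; the attribute names and the load threshold are invented for the example.

```python
# Keep only resources that satisfy the job's requirements and are not
# already overloaded, as a dynamic broker's matching step might.
def eligible(resources, requirements, max_load=0.8):
    return [r for r in resources
            if r["os"] == requirements["os"]
            and r["ram_gb"] >= requirements["ram_gb"]
            and r["load"] < max_load]

pool = [
    {"name": "inv01", "os": "linux", "ram_gb": 16, "load": 0.3},
    {"name": "inv02", "os": "linux", "ram_gb": 4,  "load": 0.2},  # too little RAM
    {"name": "ord01", "os": "linux", "ram_gb": 32, "load": 0.9},  # overloaded
]
targets = eligible(pool, {"os": "linux", "ram_gb": 8})
```

In the real product the requirements, policies, and SLAs are richer than a single filter, but the principle is the same: jobs are dispatched only to resources that currently meet their requirements.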
The Tivoli Workload Scheduler relational database contains the information related
to the jobs, the job streams, the workstations where they run, and the time
specifications that rule their operation. On the other hand, the Tivoli Dynamic
Workload Broker database contains information about the current IT environment,
the resource real time performance, and load data. It also stores the job definitions
and keeps track of resources assigned to each job.
In this way, Fine Cola's scheduling analyst can create and change any of these
objects at any time and Fine Cola's IT administrator can dynamically assign the
best set of resources to match allocation requests based on the defined policies,
without any impact on the business.
He can also ensure the correct concurrent or exclusive use of the resources across
the different jobs according to resource characteristics. If the resource request
cannot be immediately satisfied, the IT administrator, using Tivoli Dynamic
Workload Broker, can automatically queue the request until changes in resource
utilization or in the environment allow it to be satisfied.
The workload scheduling plan can be changed as quickly and dynamically as the
business and operational needs require. The scheduling analyst makes full use of
the trial and forecast planning options available in the scheduler to adjust and
optimize workload scheduling and, as a consequence, Fine Cola's line of
operations.
Moreover, using Tivoli Dynamic Workload Broker the company can rapidly adapt
to the increase in workload during peak periods, driving the requirement for
workload virtualization, that is, the ability to manage and control the workload so
that it can be split, routed to appropriate resources and capacity, and dynamically
moved around in logical resource pools.
If a resource is not available, the defined SLA continues to be met because job
processing is restarted from the point where the failure happened.
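Restarting from the point of failure, rather than from the beginning, can be pictured like this. The sketch is illustrative; the step names and result flags are invented.

```python
# Return the steps whose successful completion has not been recorded,
# so completed work is not rerun after a failure.
def steps_to_resume(steps, results):
    done = {step for step, ok in results.items() if ok}
    return [step for step in steps if step not in done]

steps = ["EXTRACT", "TRANSFORM", "LOAD"]
results = {"EXTRACT": True, "TRANSFORM": False}  # failure during TRANSFORM
remaining = steps_to_resume(steps, results)      # EXTRACT is not rerun
```

Skipping the completed EXTRACT step is what keeps the recovery fast enough for the SLA to still be met.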
Figure 3 on page 39 shows how the Fine Cola company can dynamically manage
its workload using Tivoli Workload Scheduler and Tivoli Dynamic Workload
Broker while satisfying the SLA response time.
Figure 3. How to satisfy SLA response time during peak periods using Tivoli
Workload Scheduler and Tivoli Dynamic Workload Broker. The figure contrasts
the static way, in which the SLA for the job cannot be met if an unplanned order
arrives, with the dynamic way, in which Tivoli Dynamic Workload Broker
automatically discovers the available resources, following the policy of
maximizing resource utilization with an availability target of 99.00%.
After an internal analysis, the application specialist finds that there is a broken
execution path that must be fixed. The expected time for resolution is three hours,
including a hot fix and a regression test.
One hour later, however, the operations analyst realizes that even if the application
support team works overtime, the fix will not be completed before the end of the
day and it will be impossible to close the daily processing today. He checks the
status of the dependent jobs and sets a target time to have the hot fix loaded into
production during the night.
To find a solution to the potential problem and achieve the goals set for workload
processing, without buying additional resources, using Tivoli Dynamic Workload
Broker, he proceeds in the following way:
1. He performs an automatic discovery of the resources available in the
scheduling domain with their characteristics and relationships.
2. He finds a pool of resources in the Inventory department that meet the SLA to
run the jobs. These resources have the required RAM, microprocessor, operating
system, and application environments to run the new job stream and will be
used at half their capacity during Christmas.
Without Tivoli Dynamic Workload Broker he could not dynamically adapt the
new workload processing to match load requirements with business policies
and priorities, and resource availability and capacity. The only way to solve the
problem would be to buy new hardware to run the added job streams, increasing
the cost of the IT management infrastructure without optimizing the use of the
existing resources.
3. He determines, based on the policies and job dispatching, how many new
resources are required to run the new job stream.
4. He manages the definition of business-oriented performance goals for the entire
domain of servers, provides an end-to-end view of actual performance relative
to those goals, and manages the server resource allocation and load to meet the
performance goals.
5. He identifies the required resources and reaches an agreement with the
Inventory department manager to share them between the two departments.
6. He defines a new logical resource in which he outlines the machines that are
shared between the departments.
7. He communicates to the Ordering department the new agreement with the
resource optimization.
8. Now he can guarantee that jobs run within the time frame according to
policies, rules, and the planned availability of resources. In this way he can
also satisfy the optimization policy to maximize resource utilization.
9. The scheduling analyst now builds a feasible production plan.
Using Tivoli Dynamic Workload Broker he met the constraints imposed by
rules and policies and achieved SLA goals, optimizing execution time,
throughput, cost, and reliability.
The benefits
By adopting a workload scheduling strategy, and in particular by using Tivoli
Workload Scheduler and Tivoli Dynamic Workload Broker, Fine Cola is
experiencing significant and immediate benefits, such as:
v The successful integration of all its manufacturing and distribution processes.
Because of how Fine Cola implemented their new processing flow, every
customer order is active from the time a customer service representative receives
it until the loading dock ships the merchandise and finance sends an invoice.
Now orders can be tracked more easily, and manufacturing, inventory, and
shipping among many different locations can be coordinated simultaneously. If
an unplanned order arrives, it can be easily managed in the new dynamic IT
infrastructure.
v The standardization and speeding up of the manufacturing process.
Tivoli Workload Scheduler has helped to automate many of the steps of Fine
Cola's manufacturing process. This results in time savings and increased
productivity.
v Reduced inventory.
The manufacturing process flows more smoothly, and this improves visibility of
the order fulfillment process inside the company. This can lead to reduced
inventory of the raw materials used, and can help better plan deliveries to
customers, reducing the finished goods inventory at the warehouses and
shipping docks.
v Optimized IT infrastructure.
The dynamic allocation of IT resources maximizes workload throughput across
the enterprise, reducing costs, improving performance, and aligning IT with
business needs and service demands.
v Guaranteed fault tolerance and high availability.
Tivoli Dynamic Workload Broker can recover from server, agent, and
communication failures and can restart processing from the point where the
failure occurred.
Overview
The next sections provide an outline of Tivoli Workload Scheduler.
| With either interface you can manage all Tivoli Workload Scheduler plan
| and database objects.
Using multiple domains reduces network traffic by limiting the communication
required between the master domain manager and the other computers.
Every time the production plan is created or extended, the master domain manager
creates a production control file, named Symphony. Tivoli Workload Scheduler is
then restarted in the network, and the master domain manager sends a copy of the
new production control file to each of its automatically linked agents and
subordinate domain managers. The domain managers, in turn, send copies to their
automatically linked agents and subordinate domain managers.
Once the network is started, scheduling messages like job starts and completions
are passed from the agents to their domain managers, through the parent domain
managers to the master domain manager. The master domain manager then
broadcasts the messages throughout the hierarchical tree to update the production
control files of domain managers and fault-tolerant agents running in Full Status
mode.
Topology
A key to choosing how to set up Tivoli Workload Scheduler domains for an
enterprise is the concept of localized processing. The idea is to separate or localize
the enterprise's scheduling needs based on a common set of characteristics.
Networking
The following questions will help in making decisions about how to set up your
enterprise’s Tivoli Workload Scheduler network. Some questions involve aspects of
your network, and others involve the applications controlled by Tivoli Workload
Scheduler. You may need to consult with other people in your organization to
resolve some issues.
v How large is your Tivoli Workload Scheduler network? How many computers
does it hold? How many applications and jobs does it run?
The size of your network will help you decide whether to use a single domain
or the multiple domain architecture. If you have a small number of computers,
or a small number of applications to control with Tivoli Workload Scheduler,
there may not be a need for multiple domains.
v How many geographic locations will be covered in your Tivoli Workload
Scheduler network? How reliable and efficient is the communication between
locations?
This is one of the primary reasons for choosing a multiple domain architecture.
One domain for each geographical location is a common configuration. If you
choose a single domain architecture, you will be more reliant on the network to
maintain continuous processing.
v Do you need centralized or decentralized management of Tivoli Workload
Scheduler?
A Tivoli Workload Scheduler network, with either a single domain or multiple
domains, gives you the ability to manage Tivoli Workload Scheduler from a
single node, the master domain manager. If you want to manage multiple
locations separately, you can consider the installation of a separate Tivoli
Workload Scheduler network at each location. Note that some degree of
decentralized management is possible in a stand-alone Tivoli Workload
Scheduler network by mounting or sharing file systems.
v Do you have multiple physical or logical entities at a single site? Are there
different buildings, and several floors in each building? Are there different
departments or business functions? Are there different applications?
These may be reasons for choosing a multi-domain configuration. For example, a
domain for each building, department, business function, or each application
(manufacturing, financial, engineering, etc.).
v Do you run applications, like SAP R/3, that will operate with Tivoli Workload
Scheduler?
If they are discrete and separate from other applications, you may choose to put
them in a separate Tivoli Workload Scheduler domain.
v Would you like your Tivoli Workload Scheduler domains to mirror your
Windows domains?
This is not required, but may be useful.
v Do you want to isolate or differentiate a set of systems based on performance or
other criteria?
This may provide another reason to define multiple Tivoli Workload Scheduler
domains to localize systems based on performance or platform type.
v How much network traffic do you have now?
If your network traffic is manageable, the need for multiple domains is less
important.
v Do your job dependencies cross system boundaries, geographical boundaries, or
application boundaries? For example, does the start of Job1 on CPU3 depend on
the completion of Job2 running on CPU4?
The degree of interdependence between jobs is an important consideration when
laying out your Tivoli Workload Scheduler network. If you use multiple
domains, you should try to keep interdependent objects in the same domain.
This will decrease network traffic and take better advantage of the domain
architecture.
v What level of fault-tolerance do you require?
An obvious disadvantage of the single domain configuration is the reliance on a
single domain manager. In a multi-domain network, the loss of a single domain
manager affects only the agents in its domain.
On any computer running Tivoli Workload Scheduler there are a series of active
management processes. They are started as a system service, or by the StartUp
command, or manually from the Job Scheduling Console. The following are the
main processes:
Netman
The network management process that establishes network connections
between remote mailman processes and local Writer processes.
Mailman
The mail management process that sends and receives inter-CPU messages.
Batchman
The production control process. Working from Symphony, the production
control file, it runs job streams, resolves dependencies, and directs jobman
to launch jobs.
Writer The network writer process that passes incoming messages to the local
mailman process.
Jobman
The job management process that launches and tracks jobs under the
direction of batchman.
Starting from version 8.3, the Tivoli Workload Scheduler scheduling objects are
stored in a relational database. This results in a significant improvement, in
comparison with previous versions, of how objects are defined and managed in the
database. Each object can now be managed independently without having to use
lists of scheduling objects like calendars, parameters, prompts and resources. The
command syntax used to define and manage these objects has also become direct
and powerful.
Tivoli Workload Scheduler administrators and operators work with these objects
for their scheduling activity:
Workstation
Also referred to as CPU. Usually an individual computer on which jobs
and job streams are run. Workstations are defined in the Tivoli Workload
Scheduler database as a unique object. A workstation definition is required
for every computer that executes jobs or job streams in the Tivoli Workload
Scheduler network.
Workstation class
A group of workstations. Any number of workstations can be placed in a
class. Job streams and jobs can be assigned to execute on a workstation
class. This makes replication of a job or job stream across many
workstations easy.
Job A script or command that is run on the user's behalf and controlled by
Tivoli Workload Scheduler.
Job stream
A list of jobs that run as a unit (such as a weekly backup application),
along with run cycles, times, priorities, and other dependencies that
determine the exact order in which the jobs run.
Calendar
A list of scheduling dates. Each calendar can be assigned to multiple job
streams. Assigning a calendar to a job stream causes that job stream to run
on the dates specified in the calendar. A calendar can be used as an
inclusive or as an exclusive run cycle.
Run cycle
A cycle that specifies the days that a job stream is scheduled to run. Run
cycles are defined as part of job streams and may include calendars that
were previously defined. There are three types of run cycles: a Simple run
cycle, a Weekly run cycle, or a Calendar run cycle (commonly called a
calendar). Each type of run cycle can be inclusive or exclusive. That is,
each run cycle can define the days when a job stream is included in the
production cycle, or when the job stream is excluded from the production
cycle.
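The interaction of inclusive and exclusive run cycles can be sketched as follows. This is an illustrative model in Python, not the product's scheduling code; the rule predicates and dates are invented for the example:

```python
from datetime import date, timedelta

def scheduled_days(start, end, inclusive, exclusive):
    """Select the dates in [start, end] matched by at least one
    inclusive run cycle and by no exclusive run cycle.
    Run cycles are modeled here as predicates date -> bool."""
    days, d = [], start
    while d <= end:
        if any(r(d) for r in inclusive) and not any(r(d) for r in exclusive):
            days.append(d)
        d += timedelta(days=1)
    return days

# A weekly run cycle (every Friday) combined with an exclusive
# calendar of company holidays.
every_friday = lambda d: d.weekday() == 4
holidays = {date(2008, 12, 26)}
run = scheduled_days(date(2008, 12, 1), date(2008, 12, 31),
                     [every_friday], [lambda d: d in holidays])
```

With this input, the job stream is scheduled on the Fridays of the month except the holiday.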
Prompt
An object that can be used as a dependency for jobs and job streams. A
prompt must be answered affirmatively for the dependent job or job
stream to launch. There are two types of prompts: predefined and ad hoc.
An ad hoc prompt is defined within the properties of a job or job stream
and is unique to that job or job stream. A predefined prompt is defined in
the Tivoli Workload Scheduler database and can be used by any job or job
stream.
Resource
An object representing either physical or logical resources on your system.
Once defined in the Tivoli Workload Scheduler database, resources can be
used as dependencies for jobs and job streams. For example, you can
define a resource named tapes with a unit value of two. Then, define jobs
that require two available tape drives as a dependency. Two such jobs cannot run
concurrently, because each job, while it runs, holds both units of the tapes
resource.
| Variable and variable table
| A variable can be used to substitute values in scheduling objects contained
| in jobs and job streams; that is, in JCL, logon, prompt dependencies, file
| dependencies, and recovery prompts. The values are replaced in the job
| scripts at run time. Variables are global (that is, they can be used on any
| agent of the domain) and are defined in the database in groups called
| variable tables.
| Parameter
| A parameter can be used to substitute values in jobs and job streams just
| like global variables. The difference is that a parameter is defined on the
| specific workstation where the related job is to run, so its effect is
| local to that workstation rather than global. Parameters cannot be used
| when scripting extended agent jobs.
User On Windows workstations, the user name specified in the Logon field of a
job definition must have a matching user definition. The definitions
provide the user passwords required by Tivoli Workload Scheduler to
launch jobs.
Event rule
A scheduling event rule defines a set of actions that are to run upon the
occurrence of specific event conditions. The definition of an event rule
correlates events and triggers actions. When you define an event rule, you
specify one or more events, a correlation rule, and the one or more actions
that are triggered by those events. Moreover, you can specify validity
dates, a daily time interval of activity, and a common time zone for all the
time restrictions that are set.
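A much-simplified sketch of how an event rule correlates events and triggers actions; the rule structure and names below are illustrative, not the product's event-rule definition language:

```python
def evaluate_rule(rule, events):
    """Fire the rule's actions only when every event condition in
    the rule has occurred (a much-simplified correlation check,
    ignoring validity dates and time intervals)."""
    seen = {e["name"] for e in events}
    if all(cond in seen for cond in rule["events"]):
        return rule["actions"]
    return []

# Hypothetical rule: when a file is created and a job completes,
# submit a cleanup job stream.
rule = {"events": ["FileCreated", "JobCompleted"],
        "actions": ["submit CLEANUP"]}
```

Only the combination of both events triggers the action; a single event does not.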
You can control how jobs and job streams are processed with the following
attributes:
Dependencies
Conditions that must be satisfied before a job or job stream can run. You
can set the following types of dependencies:
v A predecessor job or job stream must have completed successfully.
v One or more specific resources must be available.
v Access to specific files must be granted.
v An affirmative response to a prompt.
Time constraints
Conditions based on time, such as:
v The time at which a job or job stream should start.
v The time after which a job or job stream cannot start.
v The repetition rate at which a job or job stream is to be run within a
specified time slot.
Job priority
A priority system according to which jobs and job streams are queued for
execution.
Job fence
A filter defined for workstations. Only jobs and job streams whose priority
exceeds the job fence value can run on a workstation.
Limit Sets a limit to the number of jobs that can be launched concurrently on a
workstation.
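Taken together, job priority, the job fence, and the limit act as a filter on the job queue. The model below is an illustrative sketch only, with invented job names:

```python
def launchable(queue, fence, limit, running):
    """Return the names of queued jobs that may start now: priority
    must exceed the workstation fence, and the total number of
    running jobs must stay within the workstation limit."""
    eligible = sorted((j for j in queue if j["priority"] > fence),
                      key=lambda j: j["priority"], reverse=True)
    free_slots = max(limit - running, 0)
    return [j["name"] for j in eligible[:free_slots]]

queue = [{"name": "payroll", "priority": 50},
         {"name": "backup",  "priority": 10},
         {"name": "report",  "priority": 30}]
# Fence 20 filters out backup; limit 2 with one job already
# running leaves a single free slot for the highest-priority job.
```

Raising the fence or lowering the limit shrinks the set of jobs that can start.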
All of the required information for that production period is placed into a
production control file named Symphony. During the production period, the
production control database is continually being updated to reflect the work that
needs to be done, the work in progress, and the work that has been completed. A
copy of the Symphony file is sent to all subordinate domain managers and to all
the fault-tolerant agents in the same domain. The subordinate domain managers
distribute their copy to all the fault-tolerant agents in their domain and to all the
domain managers that are subordinate to them, and so on down the line. This
causes fault-tolerant agents throughout the network to continue processing even if
the network connection to their domain manager is down. From the graphical
interfaces or the command line interface, the operator can view and make changes
in the current production by making changes in the Symphony file.
Tivoli Workload Scheduler processes monitor the production control database and
make calls to the operating system to launch jobs as required. The operating
system runs the job, and in return informs Tivoli Workload Scheduler whether the
job completed successfully.
Scheduling
Scheduling can be accomplished either through the Tivoli Workload Scheduler
command line interface or one of the two graphical interfaces.
Job streams can be defined as draft. A draft job stream is not considered when
resolving dependencies and is not added to the production plan. It is included
only after the draft keyword is removed from its definition and the JnextPlan
command is run to add it to the preproduction plan and so to the production plan.
Another option is to define a recovery job that can be run in the place of the
original job if it completes unsuccessfully. The recovery job must have been
defined previously. Processing stops if the recovery job does not complete
successfully.
Running production
Production consists of taking the definitions of the scheduling objects from the
database, together with their time constraints and their dependencies, and building
and running the production control file.
You use the JnextPlan command on the master domain manager to generate the
production plan and distribute it across the Tivoli Workload Scheduler network.
To generate and start a new production plan, Tivoli Workload Scheduler performs
the following steps:
1. Updates the preproduction plan with the objects defined in the database that
were added or updated since the last time the plan was created or extended.
2. Retrieves from the preproduction plan the information about the job streams to
run in the specified time period and saves it in an intermediate production
plan.
3. Includes in the new production plan the uncompleted job streams from the
previous production plan.
4. Creates the new production plan and stores it in a file named Symphony.
5. Distributes a copy of the Symphony file to the workstations involved in the
new production plan processing.
6. Logs all the statistics of the previous production plan into an archive.
7. Updates the job stream state in the preproduction plan.
The newly generated Symphony file is distributed first to the top domain's
fault-tolerant agents and to the domain managers of the child domains, and then
down the tree to all subordinate domains.
Each fault-tolerant agent that receives the production plan can continue processing
even if the network connection to its domain manager goes down.
This means that during job processing, each fault-tolerant agent has its own copy
of the Symphony file updated with the information about the jobs it is running (or
that are running in its domain and child domains if the fault-tolerant agent is
full-status or a domain manager), and the master domain manager (and backup
master domain manager if defined) has the copy of the Symphony file that
contains all updates coming from all fault-tolerant agents. In this way the
Symphony file on the master domain manager is kept up-to-date with the jobs still
to run, the jobs running, and the jobs already completed.
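The distribution of the Symphony file down the domain tree can be sketched as a simple recursive walk. The domain and agent names below are invented for illustration:

```python
def distribute(node, symphony, copies=None):
    """Copy the Symphony file to this node, then recursively to its
    subordinate domain managers and fault-tolerant agents."""
    if copies is None:
        copies = {}
    copies[node["name"]] = symphony
    for child in node.get("children", []):
        distribute(child, symphony, copies)
    return copies

# Invented two-domain network below the master domain manager.
network = {"name": "MASTERDM", "children": [
    {"name": "DM_EUROPE", "children": [{"name": "FTA_PARIS"}]},
    {"name": "DM_USA",    "children": [{"name": "FTA_DALLAS"}]},
]}
copies = distribute(network, "Symphony")
```

Because every node keeps its own copy, an agent can continue processing even if the link to its domain manager is lost.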
After the production plan is generated for the first time, it can be extended to the
next production period with the JnextPlan command. The Symphony file is
refreshed with the latest changes and redistributed throughout the network.
While the job stream is in the plan, and has not completed, you can still modify
any of its components. That is, you can modify the job stream properties, the
properties of its jobs, their sequence, the workstation or resources they use, and so
on, to satisfy last-minute requirements.
You can also hold, release, or cancel a job stream, as well as change the maximum
number of jobs within the job stream that can run concurrently. You can change the
priority previously assigned to the job stream and release the job stream from all
its dependencies.
Last-minute changes to the current production plan include submitting jobs and
job streams that are already defined in the Tivoli Workload Scheduler database
but were not included in the plan. You can also submit jobs defined ad hoc. These
jobs are submitted to the current plan but are not stored in the database.
Starting from version 8.3, you can create and manage multiple instances of the
same job stream over a number of days or at different times within the same day.
This means that the same plan can contain more than one instance of a job stream
with the same name. Each job stream instance is identified by the job stream
name, the name of the workstation where it is scheduled to run, and the start
time defined in the preproduction plan.
Monitoring
Monitoring is done by listing plan objects in either graphical user interface. Using
lists, you can see the status of all or of subsets of the following objects in the
current plan:
v Job stream instances
v Job instances
v Domains
v Workstations
v Resources
v File dependencies
v Prompt dependencies
You can also use these lists to manage some of these objects. For example, you can
reallocate resources, link or unlink workstations, kill jobs, or switch the domain
manager.
Additionally, you can monitor the daily plan with Tivoli Business Systems
Manager, an object-oriented systems management application, integrated with
Tivoli Workload Scheduler, that provides monitoring and event management of
resources, applications, and subsystems.
Tivoli Workload Scheduler integrates with IBM Tivoli Monitoring. A set of IBM
Tivoli Monitoring resource models, tailored to check the status of scheduling
resources, is included with Tivoli Workload Scheduler.
By adding this set of resource models to the IBM Tivoli Monitoring default
resource models set, you can add these resource models to monitoring profiles and
distribute them to the profile subscribers where the scheduling resources are.
Within the monitoring profile you can define the following items:
v Which resource models you want to distribute, and, for each resource model:
– When an alarm is triggered
– Which response severity is assigned to the triggered alarm
– Who is notified about the alarm and how
– Whether a program is run in response to a triggered alarm
– Whether events are to be sent to Tivoli Enterprise Console in response to
triggered alarms
Reporting
As part of the pre-production and post-production processes, reports are generated
which show summary or detail information about the previous or next production
day. These reports can also be generated ad hoc. The following reports are
available:
v Job details listing
v Prompt listing
v Calendar listing
v Parameter listing
v Resource listing
v Job History listing
v Job histogram
v Planned production schedule
v Planned production summary
v Planned production detail
v Actual production summary
v Actual production detail
v Cross reference report
In addition, during production, a standard list file (STDLIST) is created for each
job instance launched by Tivoli Workload Scheduler. Standard list files contain
header and trailer banners, echoed commands, and errors and warnings. These
files can be used to troubleshoot problems in job execution.
Auditing
An auditing option helps track changes to the database and the plan.
For the database, all user modifications, except for the delta of the modifications,
are logged. If an object is opened and saved, the action is logged even if no
modification is made.
For the plan, all user modifications to the plan are logged. Actions are logged
whether or not they are successful.
Audit files are logged to a flat text file on individual machines in the Tivoli
Workload Scheduler network. This minimizes the risk of audit failure due to
network issues and allows a straightforward approach to writing the log. The log
formats are basically the same for both the plan and the database. The logs consist
of a header portion which is the same for all records, an “action ID”, and a section
of data which varies according to the action type. All data is stored in clear text
and formatted to be readable and editable from a text editor such as vi or notepad.
| Tivoli Workload Scheduler includes a set of predefined event and action plug-ins,
| but also provides a software development kit with samples and templates for your
| application programmers to develop their own plug-ins.
Setting security
Security is accomplished with the use of a security file that contains one or more
user definitions. Each user definition identifies a set of users, the objects they are
permitted to access, and the types of actions they can perform.
A template file is installed with the product. Edit the template to create the user
definitions, then compile and install it with a utility program to create a new
operational security file. After it is installed, you can make further modifications
by creating an editable copy with another utility.
Each workstation in a Tivoli Workload Scheduler network has its own security file.
An individual file can be maintained on each workstation, or a single security file
can be created on the master domain manager and copied to each domain
manager, fault-tolerant agent, and standard agent.
The Tivoli Workload Scheduler administrator must plan how authentication will be
used within the network:
v Use one certificate for the entire Tivoli Workload Scheduler network.
v Use a separate certificate for each domain.
v Use a separate certificate for each workstation.
Also, for all the workstations that have this attribute set to ON, the commands to
start or stop the workstation, or to retrieve the standard list, are transmitted
through the domain hierarchy instead of opening a direct connection between the
master (or domain manager) and the workstation.
If you prefer the traditional security model, you can still use it by not activating
the global variable.
Once configured, time zones can be specified for start and deadline times within
jobs and job streams.
The access method is a program that the hosting workstation runs whenever
Tivoli Workload Scheduler, through either its command line or one of the
graphical interfaces, interacts with the external system. IBM Tivoli Workload
Scheduler for Applications includes the following access methods:
v Oracle e-Business Suite access method (MCMAGENT)
v PeopleSoft access method (psagent)
v R/3 access method (r3batch)
To launch and monitor a job on an extended agent, the host runs the access
method, passing it job details as command line options. The access method
communicates with the external system to launch the job and returns the status of
the job.
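The host-to-access-method handoff can be sketched as assembling a command line from the job details. The option names produced below are illustrative, not the real flags of r3batch or any other access method:

```python
def access_method_argv(method, job_details):
    """Assemble the command line that the hosting workstation would
    run to launch a job on the external system. Option names here
    are invented for illustration."""
    argv = [method]
    for option, value in job_details.items():
        argv += ["-" + option, str(value)]
    return argv

argv = access_method_argv("r3batch", {"job": "BILLING", "user": "sapops"})
```

The access method then communicates with the external system and reports the job status back on its standard output.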
Figure 5 shows how these elements fit together in the case of a typical extended
agent configuration.
Structure
Tivoli Workload Scheduler for z/OS consists of a base product, the agent, and a
number of features. Every z/OS system in your complex requires the base product.
One z/OS system in your complex is designated the controlling system and runs
the engine feature. Only one engine feature is required, even when you want to
start standby engines on other z/OS systems in a sysplex.
Tivoli Workload Scheduler for z/OS with Tivoli Workload Scheduler addresses
your production workload in the distributed environment. You can schedule,
control, and monitor jobs in Tivoli Workload Scheduler from Tivoli Workload
Scheduler for z/OS. For example, in the current plan, you can specify jobs to run
on workstations in Tivoli Workload Scheduler.
The workload on other operating environments can also be controlled with the
open interfaces provided with Tivoli Workload Scheduler for z/OS. Sample
programs using TCP/IP or an NJE/RSCS (network job entry/remote spooling
communication subsystem) combination show you how you can control the
workload on environments that at present have no scheduling feature.
Additionally, national language features let you see the dialogs and messages in
the language of your choice. These languages are currently available:
v English
v German
v Japanese
v Spanish
Concepts
In managing production workloads, Tivoli Workload Scheduler for z/OS builds on
several important concepts.
Plans. Tivoli Workload Scheduler for z/OS constructs operating plans based on
user-supplied descriptions of the DP operations department and its production
workload. These plans provide the basis for your service level agreements and give
you a picture of the status of the production workload at any point in time. You
can simulate the effects of changes to your production workload, calendar, and
installation by generating trial plans.
Tivoli Workload Scheduler for z/OS schedules work based on the information you
provide in your job stream descriptions.
Tivoli Workload Scheduler for z/OS supports a range of work process types, called
workstations, that map the processing requirements of any task in your production
workload. Each workstation supports one type of activity. This gives you the
flexibility to schedule, monitor, and control any type of DP activity, including the
following:
v Job setup, both manual and automatic
v Job submission
v Started-task actions
v Communication with the NetView program
v Print jobs
v Manual preprocessing or postprocessing activity
You can plan for maintenance windows in your hardware and software
environments. Tivoli Workload Scheduler for z/OS helps you perform a controlled
and incident-free shutdown of the environment, preventing last-minute
cancellation of active tasks. You can choose to reroute the workload automatically
during any outage, planned or unplanned.
Tivoli Workload Scheduler for z/OS tracks jobs as they are processed at
workstations and dynamically updates the plan with real-time information on the
status of jobs. You can view or modify this status information online using the
workstation ready lists in the dialog.
You can define dependencies for jobs when a specific processing order is required.
When Tivoli Workload Scheduler for z/OS manages the dependent relationships
for you, the jobs are always started in the correct order every time they are
scheduled. A dependency is called internal when it is between two jobs in the same
job stream, and external when it is between two jobs in different job streams.
You can work with job dependencies graphically from the Tivoli Job Scheduling
Console.
Tivoli Workload Scheduler for z/OS lets you serialize work based on the status of
any DP resource. A typical example is a job that uses a data set as input, but must
not start until the data set is successfully created and loaded with valid data. You
can use resource serialization support to send availability information about a DP
resource to Tivoli Workload Scheduler for z/OS.
Tivoli Workload Scheduler for z/OS keeps a record of the state of each resource
and its current allocation status. You can choose to hold resources if a job
allocating the resources ends abnormally. You can also use the Tivoli Workload
Scheduler for z/OS interface with the Resource Object Data Manager (RODM) to
schedule jobs according to real resource availability. You can subscribe to RODM
updates in both local and remote domains.
Tivoli Workload Scheduler for z/OS lets you subscribe to data set activity on z/OS
systems. The data set triggering function of Tivoli Workload Scheduler for z/OS
automatically updates special resource availability when a data set is closed. You
can use this notification to coordinate planned activities or to add unplanned work
to the schedule.
Calendars. Tivoli Workload Scheduler for z/OS uses information about when the
job departments work, so that job streams are not scheduled to run on days when
processing resources are not available (for example, Sundays and holidays). This
information is stored in a calendar. Tivoli Workload Scheduler for z/OS supports
multiple calendars for enterprises where different departments have different work
days and non-working days. Different groups within a business operate according
to different calendars.
Business processing cycles. Tivoli Workload Scheduler for z/OS uses business
processing cycles, or periods, to calculate when your job streams run, for example,
weekly, or every 10th working day. Periods are based on the business cycles of
your customers. Tivoli Workload Scheduler for z/OS supports a range of periods
for processing the different job streams in your production workload.
When you define a job stream, you specify when it is planned to run, using a run
cycle, which can be:
v A rule with a format such as
ONLY the SECOND TUESDAY of every MONTH
EVERY FRIDAY in the user-defined period SEMESTER1
where the words in upper case are selected from lists of ordinal numbers, names
of days, and common calendar intervals or period names.
v A combination of period and offset. For example, an offset of 10 in a monthly
period specifies the 10th day of each month.
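The period-and-offset arithmetic is simple to illustrate. This sketch assumes, for simplicity, a period defined by its start date with the first day counted as day 1; it is an illustration, not the product's calendar logic:

```python
import datetime

def run_date(period_start: datetime.date, offset: int) -> datetime.date:
    """Day number `offset` within a period, counting the
    first day of the period as offset 1."""
    return period_start + datetime.timedelta(days=offset - 1)

# An offset of 10 in the monthly period starting 1 June
# gives the 10th day of the month.
print(run_date(datetime.date(2008, 6, 1), 10))  # 2008-06-10
```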
Long-term planning
The long-term plan is a high-level schedule of your anticipated production
workload. It lists, by day, the instances of job streams to be run during the period
of the plan. Each instance of a job stream is called an occurrence. The long-term
plan shows when occurrences are to run, as well as the dependencies that exist
between the job streams. You can view these dependencies graphically on your
workstation as a network, to check that work has been defined correctly. The plan
can help you in forecasting and planning for heavy processing days. The
long-term-planning function can also produce histograms showing planned
resource use for individual workstations during the plan period.
You can use the long-term plan as the basis for documenting your service level
agreements. It lets you relate service level agreements directly to your production
workload schedules so that your customers can see when and how their work is to
be processed.
The long-term plan provides a window to the future. You can decide how far into
the future, from one day to four years. You can also produce long-term plan
simulation reports for any future date. Tivoli Workload Scheduler for z/OS can
automatically extend the long-term plan at regular intervals. You can print the
long-term plan as a report, or you can view, alter, and extend it online using the
dialogs.
Detailed planning
The current plan is the center of Tivoli Workload Scheduler for z/OS processing. It
drives the production workload automatically and provides a way to check its
status. The current plan is produced by batch jobs that extract from the
long-term plan, together with the job details, the occurrences that fall within
a specified period of time. The current plan thus selects a window from the
long-term plan and makes the jobs ready to be run. Jobs are started subject to
the defined restrictions (for example, dependencies, resource availability, or
time dependencies).
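The window-extraction step can be sketched conceptually. The occurrence names and dates are invented, and the long-term plan is reduced to a list of scheduled occurrences; this is an illustration of the idea, not the product's batch processing:

```python
import datetime

# Long-term plan entries: (job stream occurrence, scheduled start).
long_term_plan = [
    ("PAYROLL#1", datetime.datetime(2008, 6, 9, 6, 0)),
    ("BILLING#1", datetime.datetime(2008, 6, 9, 22, 0)),
    ("PAYROLL#2", datetime.datetime(2008, 6, 11, 6, 0)),
]

def extract_window(plan, start, end):
    """Occurrences whose start falls inside the current-plan window."""
    return [occ for occ, when in plan if start <= when < end]

# A rolling 2-day current plan starting 9 June.
window_start = datetime.datetime(2008, 6, 9)
current_plan = extract_window(long_term_plan, window_start,
                              window_start + datetime.timedelta(days=2))
print(current_plan)  # PAYROLL#2 falls outside the 2-day window
```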
The current plan is a rolling plan that can cover several days. A common method
is to cover 1 to 2 days with regular extensions each shift. Production workload
processing activities are listed by minute.
You can either print the current plan as a report, or view, alter, and extend it
online, by using the dialogs.
Tivoli Workload Scheduler for z/OS also provides manual control facilities, which
are described in “Manual control and intervention” on page 69.
By saving a copy of the JCL for each separate run, or occurrence, of a particular job
in its plans, Tivoli Workload Scheduler for z/OS prevents the unintentional reuse
of temporary JCL changes, such as overrides.
Job tailoring. Tivoli Workload Scheduler for z/OS provides automatic job
tailoring functions that edit jobs for you. This can reduce your dependency on
time-consuming and error-prone manual editing of jobs. Tivoli Workload Scheduler
for z/OS job tailoring provides:
v Automatic variable substitution
v Dynamic inclusion and exclusion of inline job statements
v Dynamic inclusion of job statements from other libraries or from an exit
For jobs submitted on a z/OS system, these job statements are z/OS JCL.
Scheduler JCL tailoring directives can be included in jobs that are submitted on
other operating systems, such as AIX®/6000.
Variables can be substituted in specific columns, and you can define verification
criteria to ensure that invalid strings are not substituted. Special directives
supporting a variety of date formats used by job stream programs let you
dynamically define the required format and change it multiple times for the same
job. You can define arithmetic expressions to calculate values such as the current
date plus four work days.
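A work-day calculation of this kind can be sketched as follows. This is an illustration only; it skips Saturdays and Sundays, whereas the product's calendars would also skip site-defined holidays:

```python
import datetime

def add_work_days(start: datetime.date, n: int) -> datetime.date:
    """Advance n work days, skipping Saturdays and Sundays.
    (A real calendar would also skip site-defined free days.)"""
    day = start
    while n > 0:
        day += datetime.timedelta(days=1)
        if day.weekday() < 5:          # Mon=0 .. Fri=4
            n -= 1
    return day

# Friday 6 June 2008 plus four work days is Thursday 12 June.
print(add_work_days(datetime.date(2008, 6, 6), 4))  # 2008-06-12
```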
Recovery of jobs and started tasks. Automatic recovery actions for failed jobs are
specified in user-defined control statements. Parameters in these statements
determine the recovery actions to be taken when a job or started task ends in error.
(Figure: when a job in a user application ends in error, the scheduler analyzes
the error and determines the restart action: restart an earlier job, restart
the failing job, run a recovery job, perform automatic catalog cleanup,
continue, or do nothing.)
Restart and cleanup. You can use restart and cleanup to catalog, uncatalog, or
delete data sets when a job ends in error, or when you need to rerun a job. Data
set cleanup handles JCL in the form of in-stream JCL, in-stream procedures, and
cataloged procedures on both local and remote systems. This function can be
initiated automatically by Tivoli Workload Scheduler for z/OS or manually by
using the panels. Tivoli Workload Scheduler for z/OS resets the catalog to the
status that it was before the job ran for both generation data set groups (GDGs)
and for DD allocated data sets contained in JCL. In addition, restart and cleanup
supports the use of Removable Media Manager in your environment.
Restart at both the step- and job-level is also provided in the Tivoli Workload
Scheduler for z/OS panels. It manages resolution of generation data group (GDG)
names, JCL containing nested INCLUDEs or PROC, and IF-THEN-ELSE
statements. Tivoli Workload Scheduler for z/OS also automatically identifies
problems that can prevent successful restart, and suggests the “best restart
step.”
You can browse the job log or request a step-level restart for any z/OS job or
started task even when there are no catalog modifications. The job-log browse
functions are also available for the workload on other operating platforms, which
is especially useful for those environments that do not support an SDSF-like
facility. If you use a SYSOUT archiver, for example RMDS, you can interface with
it from Tivoli Workload Scheduler for z/OS and so prevent duplication of job log
information.
These facilities are available to you without the need to make changes to your
current JCL.
Tivoli Workload Scheduler for z/OS gives you an enterprise-wide data set cleanup
capability on remote agent systems.
Tivoli Workload Scheduler for z/OS uses the VTAM® Model Application Program
Definition feature and z/OS system symbols to ease configuration and operation
in a sysplex environment, giving you a single-system view of the sysplex.
Starting, stopping, and managing your engines and agents does not require you
to know which z/OS image in the sysplex they are actually running on.
(Figure 7: a controlling scheduler and controlled schedulers, one acting as a
hot standby, in a z/OS Parallel Sysplex, connected through XCF and shared DASD.)
Hot standby. Tivoli Workload Scheduler for z/OS provides a single point of
control for your z/OS production workload. If this controlling system fails, Tivoli
Workload Scheduler for z/OS can automatically transfer the controlling functions
to a backup system within a Parallel Sysplex® (see Figure 7). Through XCF, Tivoli
Workload Scheduler for z/OS can automatically maintain production workload
processing during system or connection failures.
Tivoli Workload Scheduler for z/OS can record status information from both local and remote processors.
When status information is reported from remote sites in different time zones,
Tivoli Workload Scheduler for z/OS makes allowances for the time differences.
Status inquiries
With the ISPF dialogs or with the Job Scheduling Console, you can make queries
online and receive timely information on the status of the production workload.
Time information that is displayed by the dialogs is in the local time of the dialog
user. Using the dialogs, you can request detailed or summary information on
individual job streams, jobs, and workstations, as well as summary information
concerning workload production as a whole. You can also display dependencies
graphically as a network at both job stream and job level. Status inquiries:
v Provide you with overall status information that you can use when considering
a change in workstation capacity or when arranging an extra shift or overtime
work.
v Help you supervise the work flow through the installation; for example, by
displaying the status of work at each workstation.
v Help you decide whether intervention is required to speed up the processing of
specific job streams. You can find out which job streams are the most critical.
You can also check the status of any job stream, as well as the plans and actual
times for each job.
v Help you to check information before making modifications to the plan. For
example, you can check the status of a job stream and its dependencies before
deleting it or changing its input arrival time or deadline. See “Modifying the
current plan” for more information.
v Provide you with information on the status of processing at a particular
workstation. Perhaps work that should have arrived at the workstation has not
arrived. Status inquiries can help you locate the work and find out what has
happened to it.
You can modify the current plan online. For example, you can:
v Include unexpected jobs or last-minute changes to the plan. Tivoli Workload
Scheduler for z/OS then automatically creates the dependencies for this work.
v Manually modify the status of jobs.
v Delete occurrences of job streams.
v Graphically display job dependencies before you modify them.
v Modify the data in job streams, including the JCL.
v Respond to error situations by:
– Rerouting jobs
– Rerunning jobs or occurrences
– Completing jobs or occurrences
– Changing jobs or occurrences
v Change the status of workstations by:
– Rerouting work from one workstation to another
– Modifying workstation reporting attributes
– Updating the availability of resources
In addition to using the dialogs, you can modify the current plan from your own
job streams using the program interface or the application programming interface.
You can also trigger Tivoli Workload Scheduler for z/OS to dynamically modify
the plan using TSO commands or a batch program. This adds unexpected work
automatically to the plan.
Security
Today, DP operations increasingly require a high level of data security, particularly
as the scope of DP operations expands and more people within the enterprise
become involved. Tivoli Workload Scheduler for z/OS provides complete security
and data integrity within the range of its functions. It provides a shared central
service to different user departments even when the users are in different
companies and countries. Tivoli Workload Scheduler for z/OS provides a high
level of security to protect scheduler data and resources from unauthorized access.
With Tivoli Workload Scheduler for z/OS, you can easily organize, isolate, and
protect user data to safeguard the integrity of your end-user applications (see
Figure 8 on page 71). Tivoli Workload Scheduler for z/OS can plan and control the
work of many user groups, and maintain complete control of access to data and
services.
Figure 8. Security: RACF controls TSO user access to scheduler data, JCL, and
the audit trail for user departments such as finance, sales, and manufacturing
Audit trail
With the audit trail, you can define how you want Tivoli Workload Scheduler for
z/OS to log accesses (both reads and updates) to scheduler resources. Because it
provides a history of changes to the databases, the audit trail can be extremely
useful for staff who work on debugging and problem determination.
A sample program is provided for reading audit-trail records. The program reads
the logs for a period that you specify and produces a report detailing changes that
have been made to scheduler resources.
If you have RACF Version 2 Release 1 installed, you can use the Tivoli Workload
Scheduler for z/OS reserved resource class to manage your Tivoli Workload
Scheduler for z/OS security environment. This means that you do not have to
define your own resource class by modifying RACF and restarting your system.
Data integrity during submission: Tivoli Workload Scheduler for z/OS can
ensure the correct security environment for each job it submits, regardless of
whether the job is run on a local or a remote system. Tivoli Workload Scheduler
for z/OS lets you create tailored security profiles for individual jobs or groups of
jobs.
The engine is the focal point of control and information. It contains the controlling
functions, the dialogs, and the scheduler’s own batch programs. Only one engine is
required to control the entire installation, including local and remote systems (see
Figure 9 on page 73).
(Figure 9: the active controller, with hot-standby controllers, connected to
trackers on z/OS and OS/390 images in the sysplex, to the connectors in the
WebSphere Application Server, and to domain managers, z/OS agents, and
distributed agents.)
The agent runs as a z/OS subsystem and interfaces with the operating system
(through JES2 or JES3, and SMF), using the subsystem interface and the operating
system exits. The agent monitors and logs the status of work, and passes the status
information to the engine via shared DASD, XCF, or ACF/VTAM.
You can use the z/OS cross-system coupling facility (XCF) to connect your local
z/OS systems. Instead of being passed to the controlling system using shared
DASD, work status information is passed directly through XCF connections. XCF
lets you use all the production-workload-restart facilities and its hot standby
function. See “Automatic recovery and restart” on page 66.
Remote systems
The agent on a remote z/OS system passes status information about the
production work in progress to the engine on the controlling system. All
communication between Tivoli Workload Scheduler for z/OS subsystems on the
controlling and remote systems is done through ACF/VTAM.
Tivoli Workload Scheduler for z/OS lets you link remote systems using
ACF/VTAM networks. Remote systems are frequently used locally “on premises”
to reduce the complexity of the data processing (DP) installation.
The server is a separate address space, started and stopped either automatically by
the engine or by the user through the z/OS start command. There can be more
than one server for an engine.
If the dialogs or the PIF applications run on the same z/OS system where the
target engine is running, the server might not be involved.
In the Tivoli Workload Scheduler for z/OS current plan, you can specify jobs to
run on workstations in Tivoli Workload Scheduler. Tivoli Workload Scheduler for
z/OS passes the job information to the Tivoli Workload Scheduler Symphony file,
which is then distributed to Tivoli Workload Scheduler to run the jobs in the
current plan. In turn, Tivoli Workload Scheduler reports the status of
running and completed jobs back to the current plan for monitoring in Tivoli
Workload Scheduler for z/OS.
| Tivoli Dynamic Workload Broker integrates with the common agent services
| infrastructure to retrieve information about the available workstations in the
| environment. The goal of common agent services is to reduce the cost of
| ownership by minimizing customer effort, by reducing the complexity of the
| deployment, and by using resources more efficiently. To accomplish this, the
| common agent services architecture provides a shared infrastructure for managing
| the computer systems in your environment. This infrastructure has the following
| parts:
| common agent
| Each managed system in your deployment runs the common agent
| software, which provides a shared runtime for product agents and a single
| implementation of services used by multiple products.
| agent manager
| One server in your deployment runs the agent manager, which is a
| network service that provides authentication and authorization services
| and maintains a registry of configuration information about the managed
| systems in your environment.
| resource manager
| Each Tivoli management application that uses the common agent services
| architecture provides software called a resource manager, which uses the
| services of the agent manager to communicate securely with and to obtain
| information about the managed systems running the common agent
| software. The resource manager uses the services of the common agent to
| deploy and run its software on managed systems.
| The agent manager provides the Tivoli Dynamic Workload Broker server with a list
| of the common agents installed on the workstations in the environment. Based on
| this list, you decide on which workstations you want to install the Workload
| Agent. By installing the Workload Agent on a workstation, you add the
| workstation as a resource to the Tivoli Dynamic Workload Broker environment.
| The agent manager also enables secure connections via SSL between managed
| systems in your deployment by providing authentication and authorization
| services and issuing certificates.
| When the Workload Agent is installed, it communicates directly with the Tivoli
| Dynamic Workload Broker server. Together with the Agent, you also install the
| related features. A feature is a functional capability associated with the Workload
| Agent. The features are stored on the server and are available for installation and
| update on the target workstations in the environment using the Tivoli Dynamic
| Workload Console. By installing or removing features, you add or remove
| functional capabilities from the target workstation.
| The Workload Agent constantly monitors the available resources, keeping their
| status up to date. The information about resources in the IT environment is stored
| in the Resource Repository database.
| You can create and submit jobs in Tivoli Workload Scheduler, or you can create
| them in Tivoli Dynamic Workload Broker, using the Job Brokering Definition
| Console. The Job Brokering Definition Console provides an easy-to-use graphical
| interface for creating and editing jobs based on the Job Submission Description
| Language (JSDL) schema.
| When a job is submitted, Tivoli Dynamic Workload Broker checks the job
| requirements, the available resources and the related characteristics and submits
| the job to the resource that best meets the requirements.
| Tivoli Dynamic Workload Broker guarantees load balancing across resource pools
| over time. Newly provisioned resources are automatically discovered and
| integrated in the scheduling environment so that jobs can run automatically on
| these resources.
| You can also establish an affinity relationship between two or more jobs when you
| want them to run on the same resource, for example when the second job must use
| the results generated by the previous job. If the resource is not available, the affine
| job is held until the resource is available again. You can define affinity between
| two or more jobs using the Web console or the command line interface.
| The Tivoli Dynamic Workload Console is now the strategic user interface for the
| Tivoli Workload Automation suite of products and includes support for the latest
| functions and enhancements available with the scheduling engines. It replaces the
| Job Scheduling Console, whose functional contents are not being extended beyond
| those of version 8.4.
| The Tivoli Dynamic Workload Console is a light, powerful and user-friendly single
| point of operational control for the entire scheduling network. It allows for single
| sign-on and authentication to one or many schedulers, is highly scalable, and
| provides real-time monitoring, management and reporting of enterprise workloads.
| It also greatly simplifies report creation and customization.
| You can access the Tivoli Dynamic Workload Console from any computer in your
| environment using a Web browser, through either the secure HTTPS protocol or HTTP.
| The first and main actions you perform when you connect to the Tivoli Dynamic
| Workload Console are:
| Creating a connection to a scheduling engine (Tivoli Workload Scheduler or
| Tivoli Workload Scheduler for z/OS)
| You type the details (such as IP address, user name, and password) to
| access a scheduling engine, and, optionally, a database to operate with
| objects defined in plans or stored in the database. You can also define new
| scheduling objects in the database.
| From the Tivoli Dynamic Workload Console you can access the current
| plan, a trial plan, a forecast plan, or an archived plan for the distributed
| environment or the current plan for the z/OS environment.
| You might want to access the database to perform actions against objects
| stored in it or generate reports showing historical or statistical data.
| In addition, working both on the database and on plans, you can create
| and run event rules to define and trigger actions that you want to run in
| response to events occurring on Tivoli Workload Scheduler nodes.
| Creating tasks to manage scheduling objects in the plan
| You specify some filtering criteria to query a list of scheduling objects
| whose attributes satisfy the criteria you specified. Starting from this list,
| you can navigate and modify the content of the plan, switching between
| objects, opening more lists and accessing other plans or other Tivoli
| Workload Scheduler or Tivoli Workload Scheduler for z/OS environments.
Overview
The Tivoli Job Scheduling Console for the Tivoli Workload Automation portfolio is
an interactive interface for creating, modifying, and deleting objects in the product
database. It also helps you monitor and control objects scheduled in the current
plan.
The Job Scheduling Console helps you to work with Tivoli Workload Scheduler for
z/OS and with Tivoli Workload Scheduler. You can work with these products
simultaneously from the same graphical console.
To run the console, you only have to be able to log into a scheduling engine
through a connector. This means that you can manage plan and database objects
from any system, including a laptop, on which the Job Scheduling Console is
installed and from which you can reach, over TCP/IP, a server running the connector
for Tivoli Workload Scheduler or for Tivoli Workload Scheduler for z/OS.
Connectors manage the traffic between the Job Scheduling Console and the job
schedulers. Connectors are installed separately on a Tivoli management server and
on managed nodes that have access to the scheduler.
If you plan to use the Job Scheduling Console to schedule the workload with Tivoli
Workload Scheduler for z/OS, you need to install the Tivoli Workload Scheduler
for z/OS connector. If you plan to use the Job Scheduling Console to schedule the
workload with Tivoli Workload Scheduler, you need to install the Tivoli Workload
Scheduler connector.
Extensions, built into the Job Scheduling Console, extend its base scheduling
functions to specific scheduling functions of Tivoli Workload Scheduler for z/OS
and of Tivoli Workload Scheduler.
For each of these functions, you can use a list creation mechanism to list database
or plan objects that you select according to filtering criteria. Filtering criteria
narrow a list down to selected objects that you want to work with. You can list
objects without using filtering criteria. In this case, the list displays all the existing
objects of a kind. You can use both pre-defined lists that are packaged with the Job
Scheduling Console and lists that you create.
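The list mechanism can be sketched conceptually. The object attributes and values below are invented; the sketch only shows how filtering criteria narrow a list, and how an empty filter lists every object of a kind:

```python
# A few plan objects with hypothetical attributes.
jobs = [
    {"name": "PAYJOB1", "status": "Error"},
    {"name": "BILL01",  "status": "Complete"},
    {"name": "PAYJOB2", "status": "Error"},
]

def make_list(objects, **criteria):
    """No criteria: list everything; otherwise keep only the
    objects whose attributes satisfy every criterion."""
    return [o for o in objects
            if all(o.get(k) == v for k, v in criteria.items())]

print([j["name"] for j in make_list(jobs, status="Error")])
print(len(make_list(jobs)))  # no filter lists all the objects
```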
Work begins by selecting a Tivoli Workload Scheduler for z/OS engine. A popup
window lists the actions that are available for the engine. The same actions can be
performed by clicking the corresponding icons at the top of the window. The icons
display contextually with the engine.
Scheduler tasks
From the Job Scheduling Console, you can define and manage the following
objects in the scheduler database:
v Job streams
v Jobs
v Workstations
v Resources
Modifying a job stream involves adding, deleting, or modifying any of the jobs
that comprise it, along with the dependencies and run cycles. You can also delete
an entire job stream.
Job stream definitions are stored in the job scheduler databases. To browse or
update job streams you have created, you must make and run a list of job streams
in the database.
Jobs are stored in the job scheduler database as parts of job streams. To browse,
update, or delete a job definition, you must list the parent job stream in the
database.
Resource definitions are stored in the job scheduler database. To browse, update, or
delete a resource definition, you must make and run a list of resources in the
database.
Operator tasks
From the Job Scheduling Console, you can monitor and control the following
objects in the current plan:
v Job stream instances
v Job instances
v Workstations
v Resources
To monitor and control these objects, you must first display them in a list in the
Job Scheduling Console.
You can also change the status of a job stream instance to Waiting or to Complete.
Scheduler tasks
From the Job Scheduling Console, you can define and manage the following
objects in the scheduler database:
v Job streams
v Jobs
v Calendars
v Prompts
v Parameters
v Domains
v Workstations and workstation classes
v Resources
v Users
You can create, modify, and delete prompts in the Tivoli Workload Scheduler
database.
You can create, modify, and delete parameters in the Tivoli Workload Scheduler
database.
You can create, modify, and delete domain definitions in the Tivoli Workload
Scheduler database.
You can list workstations defined in the scheduler database, selected according to
filtering criteria, and browse or modify their properties. You can also delete
workstations from the database.
If a job stream is defined on a workstation class, each job added to the job stream
must be defined either on a single workstation or on the exact same workstation
class that the job stream was defined on.
You can list resources defined in the scheduler database, selected according to
filtering criteria, and browse or modify their properties. You can also delete
resources from the database.
Operator tasks
From the Job Scheduling Console you can monitor and control the following
objects in the daily plan:
v Job stream instances
v Job instances
v Workstations
v Domains
v File dependencies
v Prompt dependencies
v Resource dependencies
To monitor and control these objects, you must first display them in a list.
You can also select a different plan to use, other than the current plan.
In addition, you can browse job logs, get job outputs (STDLST), and link or unlink workstations.
Common tasks
The Common view is an additional selection at the bottom of the tree view of the
scheduling engines. It provides the ability to list job and job stream instances in a
single view and regardless of the scheduling engine, thus furthering integration for
workload scheduling on the mainframe and the distributed platforms.
You can display common plan lists in the main Job Scheduling Console window.
With these you can run common lists of job and job stream instances from all the
engines displayed in the Job Scheduling tree.
You can list job or job stream instances from all the engines to which the Job
Scheduling Console is connected. As it is for individual engines, default lists are
provided, but you can also create and save filtered lists that respond to your
needs.
The Common view implementation considers only the common properties of job
and job stream instances. This means that you can filter your queries only on
common characteristics and the resulting lists will have only columns that display
the common attributes.
You can select the engines to query. You can also modify the objects by
selecting the actions that are allowed by the specific scheduler engine.
Tivoli Workload Scheduler for z/OS also creates the production plan for the
distributed network and sends it to the domain managers and to the
directly-connected agents. The domain managers send a copy of the plan to each of
their agents and subordinate domain managers for execution.
The Tivoli Workload Scheduler domain managers function as the broker systems
for the distributed network by resolving all dependencies for their subordinate
managers and agents. They send their updates (in the form of events) to Tivoli
Workload Scheduler for z/OS so that it can update the plan accordingly. Tivoli
Workload Scheduler for z/OS handles its own jobs and notifies the domain
managers of all the status changes of the Tivoli Workload Scheduler for z/OS jobs
that involve the Tivoli Workload Scheduler plan. In this configuration, the domain
managers and all the distributed agents recognize Tivoli Workload Scheduler for
z/OS as the master domain manager and notify it of all the changes occurring in
their own plans. At the same time, the agents are not permitted to interfere with
the Tivoli Workload Scheduler for z/OS jobs, because they are viewed as running
on the master that is the only node that is in charge of them.
Tivoli Workload Scheduler for z/OS also allows you to access job streams
(schedules in Tivoli Workload Scheduler) and add them to the current plan in
Tivoli Workload Scheduler for z/OS. In addition, you can build dependencies
among Tivoli Workload Scheduler for z/OS job streams and Tivoli Workload
Scheduler jobs. From Tivoli Workload Scheduler for z/OS, you can monitor and
control the distributed agents.
In the Tivoli Workload Scheduler for z/OS current plan, you can specify jobs to
run on workstations in the Tivoli Workload Scheduler network. Tivoli Workload
Scheduler for z/OS passes the job information to the Symphony file in the Tivoli
Workload Scheduler for z/OS server, which in turn passes the Symphony file to
the Tivoli Workload Scheduler domain managers to distribute and process.
In turn, Tivoli Workload Scheduler reports the status of running and completed
jobs back to the current plan for monitoring in the Tivoli Workload Scheduler for
z/OS engine.
(Figure: the Tivoli Workload Scheduler plan is extracted from the Tivoli
Workload Scheduler for z/OS plan on the z/OS master domain manager, MASTERDM,
and distributed to the subordinate domain managers, DMZ, DMA, and DMB, and
their fault-tolerant agents; a light version of the plan is distributed to the
standard agents.)
Distributed agents
A distributed agent is a computer running Tivoli Workload Scheduler on which
you can schedule jobs from Tivoli Workload Scheduler for z/OS. Examples of
distributed agents are the following: standard agents, extended agents,
fault-tolerant agents, and domain managers.
Distributed agents replace tracker agents in Tivoli Workload Scheduler for z/OS.
The distributed agents help you schedule on non-z/OS systems with a more
reliable, fault-tolerant and scalable agent.
In the Tivoli Workload Scheduler for z/OS plan, the logical representation of a
distributed agent is called a fault-tolerant workstation.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corporation in the United States, other countries,
or both. If these and other IBM trademarked terms are marked on their first
occurrence in this information with a trademark symbol (® or ™), these symbols
indicate U.S. registered or common law trademarks owned by IBM at the time this
information was published. Such trademarks may also be registered or common
law trademarks in other countries. A current list of IBM trademarks is available on
the Web at "Copyright and trademark information" at http://www.ibm.com/legal/copytrade.shtml.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product, and service names might be trademarks or service marks
of others.
Index

A
accessibility ix
advanced program-to-program communication (APPC) 69
alerts, passing to NetView 66
API (application programming interface) 69
APPC (advanced program-to-program communication) 69
application
  definition of 62
application programmer 21
application programming interface (API) 69
audit-trail facility 71
authority checking 72
automatic
  job submission 66
  status checking 68
  status reporting 69
automatic job and started-task recovery 66, 68
automation 18
availability 19

B
backup domain manager 45
backup master 45
backup system 68
batchman 47
benefits 15, 22
business processing cycle 64

C
calendar 49
  definition of 63
CICS 17
Common Programming Interface for Communications (CPI-C) 69
Composer 47
configurations 72
Conman 48
connector 43
  for Tivoli Workload Scheduler 43
console operator 21
controlled systems 73
controlling system
  description 72
  recovery of 68
conventions used in publications viii
CPI-C (Common Programming Interface for Communications) 69
cross-system coupling facility (XCF) 66, 68, 73
current plan 64
customers, queries from 22

D
Data Facility Hierarchical Storage Manager (DFHSM) 17
Decision Support 17
dependencies
  defining 63
  graphic display of 63
DFHSM (Data Facility Hierarchical Storage Manager) 17
domain manager 45

E
education
  See Tivoli technical training
end users, queries from 22
extended agent 45

F
fault-tolerant agent 45
file dependency 54

G
global options 56
glossary viii
graphic display of dependencies 63

H
helpdesk 22

I
IMS 17
integration 16
ISPF (Interactive System Productivity Facility)
  dialog 65

J
job completion checker (JCC) 69
job dependencies
  See operation dependencies
job log 84
job recovery
  automatic 66
  manual 70
Job Scheduling Console
  accessibility ix
job streams 74
job submission
  automatic 66
  manual 70
job tailoring 66
jobman 47

L
local options 56
long-term plan 64

M
mailman 47
manual status control 71
master domain 44
master domain manager 45
monitoring the workload 18
multi-tier architecture 93

N
national language features 61
netman 47
NetView
  alerts 66
  description of 16
  RODM 17
network agent 45

O
occurrences 64
operation dependencies 63
operations manager 21
operator, workstation 21
Output Manager for z/OS 17

P
parameter 49
periods 64
PIF (program interface) 69
PIF applications 74
plan
  current 64
  definition of 64
  detailed 64
  long-term 64
  modification of 70
  trial 62
  types 61
planning
  trial plans 62
production control file 44
production period 50
production workload restart 66, 68
program interface (PIF) 69
prompt 49, 85
prompt dependency 54
publications viii

S
SA for z/OS Automation Feature 17
SAF (system authorization facility) 72
schedule 64
scheduling manager 20
security 72
shift supervisor 21
simulation with trial plans 62
special resources
  definition of 63
standard agent 45
standard list file 55
status checking, automatic 68
status control
  manual 69, 70
status inquiries 69
status reporting
  automatic 69
  from heterogeneous environments 69
  from user programs 69
step-level restart 67
symphony 44, 50
syntax diagrams, how to read ix
SYSOUT, checking of 69
system authorization facility (SAF) 72
system automation commands
  tailoring 66
System Automation for z/OS 17
System Automation z/OS (SA/zOS) 17
system failures 66
Systems Application Architecture Common Programming Interface for Communications 69

T
technical training
  See Tivoli technical training
time restrictions 83
Tivoli Business Systems Manager 54
Tivoli Dynamic Workload Console
  accessibility ix
Tivoli Information Management for z/OS 17
Tivoli technical training ix
Tivoli Workload Scheduler 61, 74

W
work submission, automatic 66
Workload Manager (WLM) 16, 68
workload monitoring 18
workload restart 66, 68
workstation
  changing the status of 70
  definition of 62
  operator 21
workstation class 48
writer 47

X
XCF (cross-system coupling facility) 66, 68, 73