Vpar Training
Partitions
August 2003
                NES2-VPARS-vC
Legal Notices
The information in this document is subject to change without notice.
Restricted Rights Legend. Use of this manual supplied for this class is restricted to this class only. Additional copies of the
manuals may be made for security and back-up purposes only. Resale of the manuals in their present form or with alterations,
is expressly prohibited.
HEWLETT-PACKARD COMPANY
3000 Hanover Street
Palo Alto, California 94304
U.S.A.
Copyright Notices. © copyright 2002 Hewlett-Packard Company, all rights reserved. Reproduction, adaptation, or translation
of this document without prior written permission is prohibited.
Preface
Using Virtual Partitions (vPars), a supported HP-UX server or hardware partition can be subdivided into multiple logical
servers, each running its own copy of HP-UX 11i. Each vPar provides partial isolation from application or operating system
faults in other vPars. Each vPar behaves as if it were a separate standalone system and can run different patch levels and
versions of HP-UX 11i.
Slide 1-1:
                                                        Module 1 introduction
                                                        NES2-VPARvC
HP-Restricted
Slide 1-2:
               29/08/2003                                   HP restricted                                                   2
Typically, servers within an enterprise run at only 40 to 60 percent of full capacity. Processing peaks occur when,
for example, calculations are executed, but this does not happen on a daily basis. At the same time, applications that
require the same resources at the same time must be prevented from coming into conflict. The solution in the past was
logical but inefficient: different applications were installed on different servers. The end result was servers spending
much of their time idle.
To eliminate this, Hewlett-Packard provides a broad portfolio of flexible and comprehensive partitioning solutions. This
includes hard partitions, virtual partitions and resource partitions that enable application environments to be isolated and to
function independently of each other. At the same time, however, they guarantee the highest flexibility by ensuring that
resources can be shifted quickly and easily during operations.
The concept of partitioning provides an impetus for server consolidation, as it enables the simultaneous use of several
operating systems and applications on one hardware platform. Through the use of partition software, the server is divided into
several virtual units (partitions). An independent version of HP-UX 11i (or higher) can then be installed on each partition,
along with any required applications. In this way, the contrary performance curves of different servers can be readily
accommodated on one server.
Three advantages result from this approach. Firstly, resource usage is substantially improved, which reduces the Total Cost of
Ownership (TCO). Secondly, security is increased, as the failure of an operating system or an application in one partition does
not impact the remaining partitions, while intervention and restoration of the affected partition can also be
completed in complete isolation. Thirdly, vPars provide greater flexibility, as several different applications, and even various
versions of the HP-UX operating system, can be supported on one server platform. Additionally, with virtual partitioning
(vPars), partition processor allocation can be dynamically reconfigured during operation - without the need for a restart of either
the partition or the entire system.
Slide 1-3:
isolation
Notes:
Partitioning allows greater flexibility in the use of system resources; however, there is a trade-off between the amount of
flexibility on offer and the level of isolation. Not all customer requirements are likely to demand the same compromise
between these two objectives. To handle this, HP offers a range of partitioning options which are collectively known as the HP
partitioning continuum.
Solutions based on separate systems provide a high degree of fault isolation but lack flexibility, whereas scheduling solutions
within a single instance of HP-UX can provide outstanding flexibility but have little fault isolation capability.
Hard partition solutions: There are two hardware-based partitioning options: Hyperplex, which uses a centrally managed
network of separate systems, and nPartitions (node partitions). Hyperplex is not really relevant to this course and so will not be
discussed any further.
Node Partitions: HP's Superdome, rp8400, and rp7410 platforms are based on a modular set of components, with a cell, or
cell board, being the basic building block of the system. A cell consists of a symmetric multi-processor (SMP) containing up
to 4 processors and up to 16 GB of main memory. Each cell can optionally be connected to a 12-slot PCI card cage. An
nPartition consists of one or more cells that communicate coherently over a high-bandwidth, low-latency crossbar fabric.
A single physical server can be configured as one or more nPartitions, each with dedicated resources. Hardware fault isolation
ensures that a fault occurring in nPartition #1 will have no impact on the availability or performance of applications executing
in nPartition #2.
Value proposition: With nPartitions, an HP rp7410, rp8400, or Superdome server can be configured either as one large
symmetric multiprocessor or as several independent nPartitions. Using independent nPartitions has the following advantages:
•     electrical isolation (i.e., errors in the servicing of one nPartition incur zero impact on other nPartitions)
•     improved system utilization (i.e., spare capacity can be configured as one or more nPartitions to cater to additional
      development, test, or production environments until the capacity is required)
•     enhanced flexibility by supporting multiple environments (i.e., multiple, independent OS instances executing within the
      same physical server)
•     increased uptime (i.e., each nPartition can be serviced independently, including addition of resources and reconfiguration,
      without affecting other partitions)
Software solutions
Software-based solutions: Whilst hardware-based solutions have the advantage of providing protection down to the hardware
layer, they provide less flexibility and involve reboots to perform reconfiguration. Software solutions can provide more
flexibility at the cost of less isolation.
HP-UX virtual partitions: This is the partitioning system that this class is concerned with. Here multiple copies of HP-UX
11i run within a single hardware environment, either a single system or nPartition.
The downside is that, since these copies of HP-UX exist within a single hardware environment, problems at the hardware level
can affect all the partitions.
On the upside, the virtual partition environment allows finer granularity and some dynamic reconfiguration.
Resource allocation solutions: Both nPars and vPars create separate partitions which run individual copies of HP-UX. The
partitioning continuum also contains solutions where the resources within a single instance of HP-UX can be allocated under
managed control, rather than simply on an “on demand” basis.
Processor sets: psets are an HP-UX feature that allows the customer to group processors together within a single OS
instance. Representing a group of processors within a system, psets provide a mechanism for dynamic CPU resource
management, enabling users to partition large systems into more than one virtual machine in terms of processor resources
only. An application assigned to a pset will only utilize processors within that assigned pset, ensuring CPU resource isolation
for applications and users. Each pset requires that a minimum of one processor be assigned; each system may therefore have
multiple psets configured.
NOTE                  HP-UX processor sets do not provide fault isolation against operating system failures, nor do they
                      provide multiple OS instances. They can, however, help provide fault isolation against a misbehaving
                      application attempting to use too much CPU time.
Value proposition: As technology scales ever-greater numbers of processors within a single system, customers need to
realize the benefits that server consolidation offers. This does, however, require improved resource management to control the
manner in which processor resources are utilized by users and applications. Although, as we will see, HP-UX provides
share-based allocation of processor resources with PRM, that facility does not allow the assignment of a dedicated set of
processors to a specific set of applications, so applications cannot take advantage of processor locality.
Certain applications, such as the Oracle Database Resource Manager (ODBRM), require dedicated resources without
interference from other applications.
This functionality is provided by psets, facilitating the partitioning of processors in order to group applications within a pset.
Resource management based on processor sets is completely hardware platform independent and can be used on any
multi-processor system running HP-UX 11i or above.
Features of PRM: The Process Resource Manager (PRM) includes the following features:
•     management of the most critical shared server resources (CPU, real memory, and disk I/O bandwidth) without hardware
      duplication
•     support for resource allocation policies for both online and batch applications executing within an nPartition, a vPar,
      or a single server node
•     fine-grained CPU allocation, i.e., CPUs can be allocated on a percentage or share basis
•     application independence, i.e., because resource allocation is transparent to the application, applications require no
      modification to execute on a server whose resources are allocated by PRM
•     dynamic PRM (re)configuration, i.e., the configuration of PRM groups does not require a system reboot
•     automated policy changes, i.e., applications do not need to be restarted when resource allocation policies are changed
•     hierarchical resource allocation, where a PRM group's resources are divided among its subgroups
•     integration with psets, i.e., instead of having only percentages or shares of CPUs, a resource group can have dedicated
      processors, in addition to dedicated memory, allocated by associating it with a pset
•     integration with UNIX accounting
•     integration with HP GlancePlus Pak
Why use PRM? PRM is ideal for consolidating applications that co-exist well within a single HP-UX image yet require
guaranteed resources, e.g., multiple SAP R/3 instances. In addition, PRM can also be used to cap resource consumption, thus
preventing runaway processes. The scenario variation within a single partition (e.g., online during the day, backup and batch
windows at night) often requires changes in resource allocation in order to adhere to defined SLOs (e.g., between 7 am and 7
pm, backup applications are eligible for 0% CPU utilization; however, between 7 pm and 7 am the following day, backup
processes are eligible for 60% CPU utilization).
PRM's dynamic configuration features enable the system administrator to create multiple configurations representing resource
and time-based policies.
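The day/night scenario above can be sketched as a tiny policy selector. The group names and percentages below are invented for illustration and are not real PRM configuration syntax; a real deployment would hold one PRM configuration per time window and switch between them.

```python
# Illustrative sketch of time-based resource policies, in the spirit of
# PRM's multiple configurations. Group names and percentages are invented
# for this example.

DAY_POLICY = {"online": 80, "batch": 20, "backup": 0}     # 7 am - 7 pm
NIGHT_POLICY = {"online": 30, "batch": 10, "backup": 60}  # 7 pm - 7 am

def policy_for_hour(hour):
    """Return the CPU allocation table (percentages) in force at the given hour (0-23)."""
    return DAY_POLICY if 7 <= hour < 19 else NIGHT_POLICY

# During the online day, backup gets 0% of the CPU...
assert policy_for_hour(12)["backup"] == 0
# ...but during the batch/backup window it is eligible for 60%.
assert policy_for_hour(22)["backup"] == 60
```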
Moreover, PRM is easy to use because PRM groups are configured through a Java™-based graphical user interface, which
executes either as a standalone Java™ application or within a browser. System administration is simplified because multiple
servers can be configured within a single GUI session.
Workload Manager (WLM): The software-based solutions in the HP partitioning continuum all offer the capability for
dynamic reconfiguration. This allows changes to be made to the allocation of resources on live systems to react to the
performance needs of the users and the business. HP's WLM product provides a means of automatically tuning vPars, psets,
and PRM resources to meet service-level objectives.
Value proposition: With HP-UX WLM, the system administrator creates one or more service-level objectives (SLOs) for
defined workloads consisting of applications. Each SLO is assigned a priority, in addition to metric or resource usage goals. As
the applications execute, HP-UX WLM compares the performance metrics or usage against the defined goals, automatically
adjusting the CPU entitlements (the amounts of CPU available to the workloads) to achieve each goal.
WLM allows you to run the system at 100% utilization and still guarantee the performance of your mission-critical
applications. This is accomplished by putting one or more critical applications that have performance requirements on a
system along with many lower-priority workloads that have none. WLM will allocate resources to ensure that the critical
applications get what they need to meet their performance requirements, while allocating spare CPU cycles to the
lower-priority workloads. WLM thus maximizes the use of CPU resources while ensuring that the most critical applications
perform according to the defined SLOs.
Thus, utilization of HP-UX WLM enables the following:
•     multiple applications can be consolidated so as to utilize excess capacity while ensuring that the highest priority
      applications still have access to the resources they need during peak times
•     system resources can be dynamically re-allocated in response to changing priorities, conditions that change over time,
      resource demand, and application performance
Features of WLM: In order to facilitate the sharing of excess capacity between applications, HP-UX WLM provides the
ability to:
•    prioritize workloads on a single system or across an MC/Serviceguard cluster, adjusting the workloads' CPU resources
     based on their goals
•    manage by service-level objectives
•    adjust resource allocations by automatically enabling or disabling SLOs based on time of day, system events, or
     application metrics
•    automatically allocate resources upon MC/Serviceguard package failover
•    ensure critical workloads have sufficient resources to perform at desired levels
•    set and manage user expectations for performance
•    run multiple workloads on a single system and maintain the performance of each workload
•    monitor resource consumption by applications or users through HP GlancePlus or PRM tools
•    set minimum and maximum amounts of CPU available to a workload
•    automatically allocate CPU resources in order to achieve the desired SLOs
•    set real memory and disk bandwidth entitlements (guaranteed minimums) to fixed levels
SLOs can be entitlement-based or goal-based. With an entitlement-based SLO, HP-UX WLM simply tries to grant the
associated workload a certain amount of the CPU. With goal-based SLOs, HP-UX WLM actively changes the associated
workload's CPU allocation to best meet the SLO. These SLOs are based on one of two goal types:
•    metric goals: goals based on a metric, such as having at least x transactions per minute or a response time under y seconds
•    usage goals: goals based on how efficiently workloads use their CPU allocations. If a workload is not using a certain
     amount of its allocation, its allocation is decreased; similarly, if a workload is using a high percentage of its allocation, the
     allocation is increased
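The usage-goal behaviour described above is essentially a feedback loop. The sketch below illustrates the idea with invented thresholds, step sizes, and bounds; WLM's actual allocation algorithm is internal to the product.

```python
def adjust_allocation(alloc, used, lo=0.3, hi=0.8, step=5, min_alloc=10, max_alloc=90):
    """Nudge a workload's CPU allocation (in percent) toward its actual usage.

    If the workload uses less than `lo` of its allocation, shrink the
    allocation; if it uses more than `hi`, grow it. All numbers here are
    invented for illustration only.
    """
    utilization = used / alloc
    if utilization < lo:
        alloc = max(min_alloc, alloc - step)   # under-used: give CPU back
    elif utilization > hi:
        alloc = min(max_alloc, alloc + step)   # heavily used: grant more CPU
    return alloc

assert adjust_allocation(50, 10) == 45   # under-used: allocation shrinks
assert adjust_allocation(50, 45) == 55   # heavily used: allocation grows
assert adjust_allocation(50, 30) == 50   # within the band: unchanged
```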
Why use WLM? The major benefit of using HP-UX WLM is its ability to manage service-level objectives. The most benefit
will be gained by users who meet one or more of the following conditions:
•    execute more than one workload (e.g., multiple database servers, or a database server and an application server)
     concurrently within a single HP-UX (11.0 or later) instance
•    schedule workloads that can be prioritized
•    have an important workload that includes a metric goal, alongside alternate, discretionary workloads
•    need consistent performance from applications under varying application and system loads
•    have implemented MC/Serviceguard and need to ensure proper prioritization of workloads in the event of a failover
Slide 1-4:
HP-UX virtual partitioning allows a single server or a hard partition to be subdivided into a number of virtual partitions, each
running its own independent copy of HP-UX.
Because each virtual partition runs independently, each can run a different version of HP-UX. HP-UX 11.11 is the
first release to support the vPars functionality, but different release levels of HP-UX 11.11 can be run within different
partitions. When future releases of HP-UX for HP PA-based systems arrive, it is planned to allow these to run within
vPars alongside current releases.
Unlike the nPar environment, where the partitions are built by the hardware/firmware out of cells, vPars are built by a software
layer, which can partition:
•     CPUs
•     Memory
•     IO - at the level of Local Bus Adapters (LBAs); in all the currently supported systems this equates to the level of the PCI
      cards, as all the current systems have one PCI slot per LBA.
When configuring CPUs in vPars, they can either be bound to a partition or be floating (or unbound). These floating CPUs can
be dynamically moved between partitions, either manually or automatically under the control of a tool like WLM.
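A manual move of a floating CPU might look like the following sequence (the partition names are examples only):

```shell
# Move one floating CPU from vpar2 to vpar1 (names are examples):
vparmodify -p vpar2 -d cpu::1     # delete one CPU from vpar2
vparmodify -p vpar1 -a cpu::1     # add one CPU to vpar1
vparstatus -p vpar1 -v            # verify the new CPU assignment
```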
Since each vPar runs its own independent instance of HP-UX, software running in one partition is isolated from software
running in another. If software in one partition needs to communicate with another partition, it must do so via the network. This
level of isolation allows partitions to be individually rebooted, reconfigured, crashed, etc., without affecting other partitions.
This isolation is only at the software level; a failure or problem at the hardware level can affect all partitions. So, for example,
a failure such as an HPMC will result in all the partitions failing. Also, performance problems associated with
backplane/memory performance will affect all partitions sharing the same hardware resources.
One of the major problems with consolidating multiple applications onto single large shared servers has been optimizing the
operating system to efficiently run the multiple applications, since these will often have different or even conflicting
requirements. In the vPars environment, where different applications can be run within separate partitions, each with an
individual copy of HP-UX, this problem goes away: the partitions can be tuned separately, patched separately, and managed
separately.
Components of vPars
Slide 1-5:
•            Device drivers: a number of device drivers are needed to allow partitions to communicate with the monitor and the virtual
             console. A driver is also needed to allow the monitor to communicate with the real console, which is owned by one of the
             virtual partitions.
•            Daemons: the vPars environment runs a couple of daemons to manage the partitioning environment and maintain the
             vPars partition database (VPDB).
Supported systems
Slide 1-6:
Different releases of vPars add support for different hardware platforms, so in practice it is probably a good idea to check
the “Virtual Partitions Ordering and Configuration Guide”. This can be found on http://docs.hp.com; however, that is not
always the latest version. Internally, within the HP network, you should be able to find the latest version on
http://esp.mayfield.hp.com.
•            The initial release of vPars (A.01), from November 2001, only supported vPars running on the L3000 (now rp5470) and
             N4000 (rp7400) systems
•            A.02.00, June 2002 release added support for
— HP Superdome servers
— nPartitions
      — vparmgr GUI
      — tighter integration with Ignite/UX (which is now vPar-aware)
•     A.02.01, September 2002 release added support for:
      — multi-cabinet nPartitions
      — pay-per-use % utilization tool, T2351AA
      — Veritas Volume Manager RAID-5 support as an alternative to the RAID 4Si controller card
Patch PHSS_28764 resolved some issues:
•     L3000 (rp54XX) and N4000 (rp7400) systems fitted with PA-8700 CPUs (> 600 MHz) could not boot vpmon.
•     Boot performance problems, where one partition booting needed exclusive access to IODC, potentially causing
      other partitions to stall.
All current releases of vPars support only HP PA-RISC based systems. According to the vPars roadmap (again, you can look
for it on ESP), support for IPF-based systems is not due until HP-UX 11i v3, in late 2004 to the first half of 2005.
Slide 1-7:
                    –     MCOE
                    –     TCOE
                    –     EOE
                    –     Base
The vPars software is currently only supported on HP-UX 11.11. All HP-UX 11.11 operating environments are supported for
use with vPars.
Boot disks
Slide 1-8:
Boot disks
The vPars environment supports nearly all of the disk interfaces supported by HP-UX on the platforms running vPars.
Table 1-1              Disk interfaces listed as supported for use with HP-UX vPars
A4800A                 Single-port PCI FWD SCSI-2 card for HP 9000 servers
A5158A                 One-port PCI 2x Fibre Channel adapter (2x PCI, 1 Gb FC)
A5159A                 Dual-port PCI FWD SCSI-2 card for HP 9000 servers
A5838A                 PCI dual-port Ultra2 Wide SCSI and dual-port 10/100Base-T HBA; boot support
                       is provided by the A.02.02 vpmon for virtual partitions only. This card cannot be
                       used to boot the system either into standalone HP-UX or into vpmon.
Also, the Superdome IOX (I/O Expander) is supported as of the vPars A.02.01 release, not only for data but also for boot and
dump. This means that a Superdome vPar can be booted off an I/O card configured in the I/O expander chassis, provided the
boot cards are currently supported by vPars.
Table 1-2              Disk interfaces not listed as supported for use with HP-UX vPars
(part number unknown)  PCI Ultra320 SCSI HBA; such a card is believed to exist, but it is not
                       mentioned in the June 2003 version of the configuration guide
Mass storage devices: Storage devices supported for the above PCI interface cards, and otherwise supported for boot and
dump within HP-UX (on servers supported with vPars), are supported for boot/dump within a vPars environment.
SAN interconnects: Where SAN environments are used, there are many more possible combinations of equipment. The vPars
lab has not, and cannot, test all possible combinations of systems, HBAs, FC switches, and storage systems. Currently, only
the following interconnect devices are supported with vPars:
• FC S10 hub
Slide 1-9:
Product interaction
             iCOD 5.0
             MC/Service Guard
             Work Load manager
             Ignite-UX
             Ignite-UX Recovery and Expert Recovery
             UPS
             Real-time clock
             Kernel crash dump analyzer
             Support Tools
             Ignite-UX and other Curses Applications
The vPars software has some interactions with other HP-UX software.
iCOD: As of vPars release A.02.01 and iCOD version B.05.00, vPars can work in conjunction with iCOD systems. The vPars
software CD contains iCOD B.05.00.
MC/ServiceGuard: vPars can be used, and are supported, as part of ServiceGuard clusters. However, when planning any highly
available solution, you need to be careful to avoid single points of failure. So, although so-called “cluster in a box”
solutions, where all the nodes of a cluster are inside a single system, are supported, this should not be viewed as a real highly
available solution, since a single hardware failure would result in the loss of the whole cluster.
Virtual partitions within a cluster are not a problem for high availability as long as fewer than a quorum of the members are
virtual partitions of the same hardware environment.
Workload Manager (WLM): From WLM release A.02.00, on the June 2002 applications release, WLM supports global
workload management across virtual partitions.
Ignite-UX: Ignite versions starting with B.3.4.115 and before B.3.7 need to be modified to support virtual partitions. All the
virtual partitions within one system/nPar share a common physical memory map, so it is not possible for all the HP-UX
kernels to start at address zero (or 128 KB, actually). When a virtual partition is loaded, vpmon relocates it to start at a
different physical address. HP-UX kernels, including the install kernel WINSTALL, need to be relocatable. Before B.3.7, the
supplied WINSTALL file was not relocatable, and so a new version of the file needed to be downloaded from
http://www.software.hp.com.
Starting with Ignite version B.3.7, which shipped with the June 2002 releases, Ignite has had a suitable WINSTALL file as
standard.
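A quick way to check which Ignite-UX revision is installed on a server is swlist:

```shell
# List the installed Ignite-UX product revision; B.3.7 or later ships a
# relocatable WINSTALL as standard.
swlist -l product Ignite-UX
```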
Currently there is no tape boot support in A.02.02 vPars. It is not possible to boot a virtual partition from a
make_tape_recovery tape. However, it may be possible to boot the whole system from the tape, and so to perform a
tape-based recovery in standalone mode (i.e., without vPars/vpmon running).
Virtual partitions can be booted from an Ignite-UX server, so no such restriction applies when using make_net_recovery.
UPS: A UPS typically has a single connection to an HP-UX server; under vPars this connection will only be connected to a
single virtual partition. In the event of a power failure exhausting the capacity of the UPS, by default only the attached virtual
partition will be shut down. There is no option to the standard shutdown command to shut down all partitions, so arrangements
will need to be made to make sure all partitions are shut down correctly.
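One possible arrangement (a sketch only, not a supported feature of vPars) is for the partition attached to the UPS to shut down its peer partitions over the network before halting itself; the partition hostnames here are invented:

```shell
# Run from the vPar attached to the UPS when the low-battery event arrives.
# Assumes the peer partitions are reachable by hostname and that remsh
# access for root has been configured between the partitions.
for peer in vpar2 vpar3
do
    remsh "$peer" -l root /sbin/shutdown -h -y 0
done
/sbin/shutdown -h -y 0
```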
Real-time clock: Each virtual partition can have its own time settings. However, the system possesses only one real-time
clock, so when the time is set from within HP-UX using the date command, rather than modifying the hardware's real-time
clock, vpmon stores the difference between the hardware clock's time and the time set by the user.
Should the time be checked from the boot console handler (BCH), it will not reflect the time as set from HP-UX.
If the time is set from BCH, this will change the times of all partitions, but not to the correct time: the partitions still have
their own stored offsets between the hardware clock's time and the time they expect to see. As we will see in a later module,
vpmon provides a toddriftreset command to reset all these time offsets.
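The bookkeeping described above can be modelled as a stored per-partition offset against the single hardware clock. This is only an illustration of the behaviour, not vpmon's actual implementation:

```python
class ClockModel:
    """Toy model of vpmon-style per-partition time offsets (times in seconds)."""

    def __init__(self, hw_time):
        self.hw_time = hw_time      # the one real-time clock
        self.offsets = {}           # per-partition offset from the hardware clock

    def set_time(self, vpar, wanted):
        # date(1) inside a vPar: store the difference, leave the RTC alone
        self.offsets[vpar] = wanted - self.hw_time

    def get_time(self, vpar):
        return self.hw_time + self.offsets.get(vpar, 0)

    def toddriftreset(self):
        # the vpmon command that clears all stored offsets
        self.offsets.clear()

clk = ClockModel(hw_time=1000)
clk.set_time("vpar1", 1060)          # vpar1 sets its clock 60 s ahead
assert clk.get_time("vpar1") == 1060
assert clk.get_time("vpar2") == 1000 # other partitions are unaffected
clk.hw_time += 500                   # BCH changes the RTC: offsets persist,
assert clk.get_time("vpar1") == 1560 # so every partition's time shifts too
clk.toddriftreset()
assert clk.get_time("vpar1") == 1500
```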
Crash dump analysis: When looking at HP-UX kernel dumps from a virtual partition system, it is necessary to make sure that
the tools understand relocatable kernels. As well as the HP-UX kernels inside vPars being able to dump, it is also possible
for vpmon itself to produce a dump.
A later module discusses dump analysis, so we will wait until then to talk about it.
Support tools: It is important to make sure that the version of the diagnostic support tools supports vPars environments.
Screen/curses-based applications: Applications that control the terminal screen, such as vi or the terminal user interface
(TUI) versions of sam or swinstall, can have their screens disrupted by use of the virtual console in vPars. For example,
if you are performing some task within sam and decide you need to go to another virtual partition's console using ^A, you
could leave the screen in an odd mode. Also, on returning to the original partition's console, the screen will not automatically
be redrawn. Most curses-based applications allow the screen to be redrawn using “^L”.
Other virtual console problems include applications that expect “^A” as part of their own input. EMACS mode for command
input in the shell, for instance, uses “^A” to go to the start of the line.
Stable storage settings and setboot: Using the HP-UX setboot command, a system administrator can normally set the primary
and alternate boot paths for a system. These are stored in stable storage (NVRAM). On a system running vPars, boot paths
are virtual partition parameters and are stored in the VPDB. When setboot is used from within a virtual partition
running under the control of vpmon, it works on these boot settings and not the ones in stable storage.
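For example (the hardware path below is invented for illustration):

```shell
setboot                      # display the current primary/alternate boot paths
setboot -p 0/0/2/0/0.6.0     # set the primary boot path; under vpmon this
                             # updates the vPar's entry in the VPDB rather
                             # than the value in stable storage
```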
The LIF boot area: Normally, when an HP-UX system boots, the system firmware loads ISL from the LIF boot area. ISL then
reads the AUTO file, which tells the system how to proceed with the boot. Normally the AUTO file just contains the command
hpux, so ISL loads the HPUX secondary loader from the LIF area, which in turn loads and boots the HP-UX kernel
/stand/vmunix.
Virtual partitions are not booted in this way: the vpmon monitor replaces ISL and the HP-UX secondary loader, loading and
booting the kernels for the different partitions.
The monitor itself, /stand/vpmon, however does need to be loaded by ISL and the secondary loader HPUX.
mkboot: The mkboot -a command no longer needs to be used to set boot options for the HP-UX kernel; these are now
specified as part of the configuration of the virtual partition. Instead, mkboot -a is used to change the AUTO LIF file so that
/stand/vpmon is loaded instead of the regular HP-UX kernel.
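For example, assuming the boot disk is /dev/rdsk/c1t2d0 (an invented path; check your own configuration first):

```shell
mkboot -a "hpux /stand/vpmon" /dev/rdsk/c1t2d0   # make ISL load the vPars monitor
lifcp /dev/rdsk/c1t2d0:AUTO -                    # display the resulting AUTO file
mkboot -a "hpux /stand/vmunix" /dev/rdsk/c1t2d0  # revert to standalone HP-UX
```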
This will be covered when we look at installing and configuring vPars.
Shutdown and reboot: The normal HP-UX shutdown and reboot commands are not particularly aware of the vPars
environment. They are used to shut down or reboot an instance of HP-UX; it does not matter whether that copy of HP-UX is
on a standalone system or in a virtual partition.
When shutting down a virtual partition, it is important to remember that other partitions might still be running on the system,
so do not just switch the system off unless you know that no other partition is running.
The /stand filesystem: vPars creates a number of files in the /stand filesystem of every virtual partition, which increases the
space requirements. The default size for a /stand filesystem is now 304 MB.
SCSI card settings: Configuring SCSI initiator IDs or other settings on PCI SCSI cards is normally done from the BCH
service menu. On a vPars system, accessing BCH requires that all partitions be shut down and the monitor rebooted. To
avoid this level of interruption, vPars ships with a command, scsiutil, that allows SCSI cards belonging to one partition to be
configured from another while the owning partition is down.
Licensing
Slide 1-10:
Licensing
•               VPARBASE is a $0 downloadable demo version. It allows customers to try out vPars but has restrictions on the
                configuration that can be set up:
Slide 2-1:
HP-Restricted
Module objectives
Slide 2-2:
Module objectives
               Understand
                • The architecture of a system running standalone HP-UX
                • The architecture of a system running HP-UX under vPars
                • The hardware resources managed and partitioned by vPars
                • What needs to change for a system to run virtual partitions
                • How an HP-UX system boots with and without vPars
                • Console operation
                • Hardware resources
                      –     CPUs, floating and bound
                      –     MEMory
                      –     IO
                • vPar management & security
In this module of the class we aim to get an understanding of:
•              The system architecture of both the software and hardware environments of an HP-UX system with and without virtual
               partitions
•              The changes needed to the HP-UX kernel to be able to run within the virtual partition environment
•              The console with the vPars environment
•              How vPars partitions up the hardware resources within the system
•              The management and security implications of working with vPars
Slide 2-3:
               [Slide diagram: a standalone (non-vPars) system — an SBA with multiple LBAs hosting LAN, SCSI and FC
               cards, plus the core I/O with GSP, serial console, CD-ROM and the root and root-mirror disks]
This slide shows a non virtual partition system. Here all the resources of the system are controlled by a single instance of
HP-UX. All of these resources are available to applications running within this instance of HP-UX.
Slide 2-4:
               [Slide diagram: the software organized as layers — applications (App 1 to App 4) running on HP-UX, which
               runs on the firmware (PDC & IODC)]
When looking at the organization of the software within the system it can be useful to view it as a series of layers:
•              Firmware: often overlooked when looking at the software on the system is the “firmware” built into the actual hardware
               of the system. On PA systems the firmware is known as Processor Dependent Code (PDC) and IO Dependent
               Code (IODC). The operating system needs to interact with this firmware layer to carry out many common tasks.
•              HP-UX Operating Environment: primarily the kernel, but also all the other parts of HP-UX such as the daemons, etc.
•              Applications: many applications can run within a single instance of HP-UX. However this is not always
               desirable, as we discussed in Module 1.
Slide 2-5:
               [Slide diagram: the same layer stack with the vPar monitor inserted between the firmware (PDC & IODC) and
               the HP-UX kernels, each kernel running its own applications (App 1 to App 4)]
Virtual partitioning is added to the system as a new layer. It sits underneath the HP-UX kernel and is able to divide up the
resources of the system between a number of virtual partitions, each of which runs its own instance of HP-UX. The vPar
monitor (VPMON) only allows the individual instances of HP-UX to see the parts of the system assigned to them.
Once the partitions are loaded and their HP-UX kernels are running, vpmon is not involved in the actual access to memory or
IO devices. But when the kernel is scanning for the available hardware, vpmon intercepts the accesses and limits what
hardware is visible to the partition.
Slide 2-6:
               [Slide diagram: the same hardware as Slide 2-3, with each LBA labelled 1, 2 or 3 to show which virtual
               partition it is assigned to; each partition has its own root disk and LAN, SCSI or FC cards]
From the hardware perspective, different pieces of hardware can be assigned to different partitions. CPUs, memory and Local
Bus Adapters (LBAs) can be assigned to virtual partitions. It is not necessary to assign all the hardware resources; some can
be kept back for future use.
When assigning LBAs to partitions, all the hardware associated with the LBA will be available exclusively to that partition.
It is not possible to dynamically add LBAs to partitions, so it can be useful to add unused, empty LBAs to partitions, since this
allows the HP-UX 11.11 online-add functionality to be used to dynamically add interface cards to partitions at a later date. We
will discuss this in more detail in the planning module.
Slide 2-7:
               [Slide diagram: resources assigned to vPars (CPUs, memory, LBAs) versus resources owned by vpmon
               (SBAs, PDH/PDC, the IO TLB and some memory)]
CPUs, memory & LBAs are assigned to virtual partitions; other hardware resources belong to vpmon:
•            System Bus Adapters (SBAs): the partitioning of IO resources is performed at the level of the LBAs. The SBAs can not
             be assigned to any partition and are managed by vpmon.
•            PDC/PDH: in a standalone (non-partitioned) system the HP-UX kernel communicates with the underlying firmware,
             PDC, and Processor Dependent Hardware (PDH). PDC provides vital information to HP-UX, but it also exposes more
             system information to the operating system than is safe in a virtual partition. So vpmon intercepts all PDC calls made by
             HP-UX kernels, and emulates each call as appropriate for the virtual partition making the PDC call.
•            IO TLB/PDIR: the IO system needs to be able to perform virtual-to-physical address translations, and this is performed
             using hardware IO TLBs backed up by a software IO Page Directory (IO PDIR). The IO TLB hardware is incorporated
             into the SBA and so is common between vPars.
•    Memory: vpmon needs some memory itself. It can take this from one of the virtual partitions. See the discussion on
     memory resources for more information.
Slide 2-8:
               [Slide diagram: the kernel with the vcn/vcs virtual console drivers and the vPars PSM added, the vPar
               Monitor (owning the I/O TLB and I/O PDIR) inserted between the kernel and the hardware, and some
               functionality moved from memory management into the monitor]
This figure shows how the system environment is changed by the introduction of vPars. Notice that some components have
been added to the kernel (the Virtual Console devices and drivers, for example), and that others are actually removed from the
kernel. The new pieces are shown in pale blue, or light gray if you’re looking at a black and white printout :-).
Physical Memory
Perhaps the biggest change that needs to happen to the HP-UX kernel is not actually shown on the slide. When running
standalone, all the physical memory on the system is available to the single instance of HP-UX. When virtual partitions are
introduced this is no longer true; all the partitions must share a common physical address space. The vpmon allocates different
areas of physical memory to each of the different partitions. Normally the system expects to start at the beginning of memory.
The first page is used to communicate between the kernel and the underlying firmware.
The kernel then normally starts at 128KB. If a number of kernels are running within one system, they obviously can not all
start at this address, so vpmon relocates each kernel.
•     Relocation: Since multiple images of the HP-UX kernel have to reside in memory, having a fixed load address for the
      kernel (0x20000) doesn’t work any more. The vPars kernel is compiled with special linker options to be relocatable. The
      memory location of the kernel isn’t known until the kernel is actually loaded.
      This doesn’t affect the kernel itself, except for some elements of the kernel that are not relocatable (assembly language
      routines, for example). They assume that they can do 32-bit arithmetic on kernel addresses. Due to this, the kernel has to
      have its text segment loaded below the 2GB mark in memory. Note that with very large kernels — those that have been
      tuned for certain application loads, for example — this may limit the number of vPars that can be run on a system.
•     Page Zero: Computer systems that conform to the PA-RISC I/O Architecture have an area of low memory called Page
      Zero. Page Zero is a collection of fields and structures for communicating information between the operating system and
      firmware; it starts at physical address zero and extends upward. This memory area is initialized by firmware with platform
      specific information that the kernel needs in order to boot, reset, or recover from error conditions. There are also areas that
      describe I/O devices such as boot device, console, and keyboard. Some of the Page Zero locations are written by the
      kernel to tell the firmware where to vector to when a particular condition occurs (one such condition is Transfer of
      Control).
      When vPars was first released, each vPar -- that is, each instance of HP-UX -- needed its own copy of Page Zero. To
      accomplish this, HP-UX had to stop assuming that Page Zero resided at physical location zero. This was done by
      changing all hard-coded Page Zero addresses to macros that use a base address and offset. The base address is kept in the
      new kernel global variable page_zero, which is patched by the vPar monitor when loading the kernel. Hence, each vPar
      has its own “virtualized” Page Zero.
      The monitor fills in the virtualized Page Zero with values expected by each vPar -- PDC entry point, memory
      configuration, that sort of thing. The monitor manages the “real” Page Zero at address zero. That Page Zero is initialized
      by firmware and updated by the monitor. That’s because the monitor must handle chores like TOC handling where the
      source of the TOC may be initiated by the system administrator via a ctrl-B TC.
•     Kernel Data: In addition to the global page_zero, the monitor patches several other globals because of relocation
      (firstaddr and imm_start_pfn, to name two). Two other patched globals are cpu_arch_is_2_0 and
      vemon_present; the latter is patched to 1, so you can check it to see if you’re booted as a vPar.
vPar Monitor
The vPar monitor manages the hardware resources, loads kernels, and emulates global platform resources to create the illusion
that each individual vPar is a complete HP-UX system. At the heart of the monitor is the partition database that tracks what
resources are associated with which vPar. When the vPar monitor is running, the master copy of this database is kept in the
monitor. All changes to the partition database are also reflected to a file on each vPar’s boot disk to ensure that the partition
configuration is preserved across system reboots.
The monitor code is loaded from the file /stand/vpmon on the system boot device in the same way as a normal HP-UX
kernel would be loaded from the file /stand/vmunix.
Once the vPar kernels are up and running, the monitor is mostly idle; it’s invoked by the running vPars when HP-UX makes
calls to firmware (which are intercepted by the monitor), when the configuration of the vPars is changed (which is managed by
the monitor) or when the operating system is shutting down. The vPar that owns the physical console will take over
management of the device, so the monitor prompt is no longer available once the vPars are launched. Nonetheless, console I/O
from any vPar still goes through the monitor.
Within the kernel’s I/O subsystem, a Platform Support Module isolates I/O functionality that is specific to a hardware platform
or “virtual” platform like the vPars monitor. In a vPars-enabled kernel, the Virtual Environment PSM, or ve_psm, moderates
the sharing of I/O TLBs between vPars, redirects the kernel’s firmware requests to the monitor, and encapsulates the
pseudo-driver vpmon that gives users access to vPars functionality. When not running in a vPar environment, the vPar PSM is
mostly dormant and doesn’t affect the normal operation of the system.
Virtual Console
Given the scarcity of PCI slots in most systems, it is unreasonable to assume that each vPar will have its own serial port to use
for a console device. Therefore vPars use a virtual console device that all the vPars share.
The Virtual CoNsole (vcn) driver implements a virtual serial port that HP-UX can use as its console. Unless a hardware
console is specifically specified in the partition database, vcn is used. All console I/O to the vcn driver is transferred to the
monitor where it is buffered until it can be handled by a Virtual Console Slave (vcs) driver that has established a logical
connection to that vPar. The vcs driver may or may not be in the same vPar as the vcn driver. The monitor provides a special
inter-vPar communication path specifically for console I/O.
Other changes were scattered across machdep, process management, and virtual memory management to support the
following features: CPU migration, I/O TLB sharing, Page Zero emulation, and global purge TLB synchronization.
Slide 2-9:
                 •   PDC
                 •   ISL
                 •   Read auto file
                 •   HPUX (boot loader)
                 •   vmunix
•              PDC: the system starts running using the firmware, PDC & IODC. It goes through various phases such as testing the
               system and selecting a Monarch CPU, and then, using the options stored in stable storage, it starts looking for a console
               and a bootable device. From the boot device it accesses a Logical Interchange Format (LIF) filesystem.
•              ISL: the first software loaded is the Initial System Loader (ISL), which is an operating-system-independent loader. ISL is
               not only used for loading HP-UX; it is also used to load MPE, and any other operating system that runs on PA-RISC
               hardware. Consequently ISL does not know how to load an HP-UX kernel from a Unix filesystem.
•              AUTO, ISL reads the LIF file AUTO to find out what it is supposed to do next.
•     HPUX: the secondary loader that is then able to access an HFS filesystem and load a kernel is called HPUX. The AUTO
      file instructs ISL to invoke HPUX. By default the HPUX secondary loader then loads and starts the /stand/vmunix kernel,
      without passing any further options to it.
      The secondary loader can be used to pass various options to the kernel, such as selecting an init run level or selecting
      maintenance mode for the different volume managers.
      Once the kernel file has been loaded, HPUX's job is not complete. Initially the kernel has no IO capability but needs to
      access various files from the /stand filesystem, so at this stage it gets HPUX to load them for it.
•     VMUNIX: the secondary loader's primary job is to load and start the kernel file, normally /stand/vmunix. This is then
      responsible for starting the rest of the operating system, once it has initialized itself.
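As an illustration of the secondary loader options mentioned above (a sketch only; the available options are described in the
hpux(1M) manual page), on a standalone system you might type at the ISL prompt:

      ISL> hpux                           boot the default kernel, /stand/vmunix
      ISL> hpux /stand/vmunix.prev        boot an alternate kernel file
      ISL> hpux -is                       boot to single-user mode (init run level s)
      ISL> hpux -lm                       boot with LVM in maintenance mode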
Slide 2-10:
                • PDC
                • ISL
                • Read auto file
                • HPUX (boot loader)
                • Vpmon
                •   Read vpdb
                •   Start the partitions
                •   Load vmunix
                •   Start vmunix
                •   Start the vPar daemons
For a vPars system certain changes need to take place in the boot sequence. The boot proceeds as before up to the point where
ISL reads the AUTO file. This is really the first opportunity to make changes: before this point the only choice is where to load
ISL from, but the AUTO file is fully configurable. We will start the discussion from here:
•             AUTO: normally the file just contains the single-word command “HPUX”. This invokes the secondary loader, whose
              default behavior is to load /stand/vmunix. In the vPars environment, however, we need to start up the Virtual
              Partition Monitor (VPMON) before loading the kernel. The monitor is stored in the file /stand/vpmon; this file is in the
              HFS /stand filesystem and looks sufficiently like an HP-UX kernel that the secondary loader is able to load it.
              So in order to have the vPars monitor loaded automatically we need to use “mkboot -a“ to modify the AUTO file to load
              /stand/vpmon.
•     HPUX, the secondary loader is able to load files other than the default /stand/vmunix in standalone environments. We
      occasionally use this ability to boot alternate kernels such as /stand/vmunix.prev. For the vPars environment we get
      HPUX to load the monitor /stand/vpmon.
•     VPMON, as we discussed in the previous module, the vPars monitor is the heart of the vPars environment. It is
      responsible for:
      — Loading the vPars database /stand/vpdb by default. As with the early HP-UX kernel, vpmon, has no file IO
        capability and so uses the HPUX secondary loader to load the file.
      — Setting up the partitions
      — Loading their kernel files, and assisting them during early initialization.
      During a normal boot, ISL and the secondary loader HPUX are responsible for loading and starting the HP-UX kernel. In
      the vPars environment this role is taken over by vpmon. When booting the partitions there is no ISL> prompt; options that
      would normally be passed to the secondary loader, such as kernel file selection, run levels or LVM options, must now be
      selected either as options for the partition stored inside the configuration database or passed as options to the vPars
      commands for booting the partitions.
      By default vpmon starts in interactive mode and does not load or boot any partitions. In order to perform an automatic
      boot, the “-a” option needs to be passed to vpmon. This is typically done in the AUTO file by using the mkboot
      command.
      — CPU: for the processors it is possible to select which actual processors are assigned to a partition, or alternatively you
        can simply allow vpmon to choose. As we will see shortly, under vPars CPUs can either be bound or unbound; only
        the bound CPUs can be explicitly assigned. It is always up to vpmon to select the unbound ones.
      — Memory: again, it is possible to configure which ranges of physical memory will be used by which partitions.
        However, with current configurations there is little point in doing so. Normally vpmon will select which ranges of
        memory to assign to which partitions, and also from which partitions it is going to steal its own working space.
      — IO: when configuring partitions it is necessary to allocate the IO resources to be used by the partition; vpmon can not
        make this choice for you.
•     Loading and starting the partitions' kernels: in a standalone environment the HPUX secondary loader is
      responsible for loading and starting the kernel, but it can only run once, and it was used to load the monitor. The monitor
      can not simply load a kernel into memory in the same way as the secondary loader does; the kernel needs to be relocated
      to its allocated start address.
•     Daemons: once the HP-UX kernel has initialized, it runs init, which in turn starts rc. For a system running under vPars
      there are three new startup scripts that relocate the /stand/vmunix file so that it agrees with the kernel
      loaded into memory, handle any monitor crash dumps, and start the vPar daemons. We will look at these in later modules.
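As a sketch of the AUTO file change described above (the disk device path is an example only; use your actual boot device),
the mkboot command and a check of the result might look like:

      # mkboot -a "hpux /stand/vpmon -a" /dev/rdsk/c1t2d0
      # lifcp /dev/rdsk/c1t2d0:AUTO -
      hpux /stand/vpmon -a

The -a option to vpmon makes the monitor boot its partitions automatically instead of stopping at the interactive MON> prompt.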
Slide 2-11:
Virtual console
                                                              [Slide: Control-A typed at the console toggles between
                                                              the partitions]
Most modern computer rooms do not have enough space for consoles for all the systems they have. Also, users don't want to
have to go down to the data center just to access the console. This has led to developments such as the GSP and Secure Web
Console. Also, given the scarcity of PCI slots in most systems, it is unreasonable to assume that each vPar will have its own
serial port or GSP to use for a console device. Therefore vPars use a virtual console device that all the vPars share.
From the console, the user can toggle between the different partitions using “Control-A”.
If the system's Monarch CPU is not currently being used by a virtual partition then, as well as being able to access the console
of each of the active partitions, you will also be able to get to the MON> prompt of the monitor. There is no significant
advantage in being able to access the monitor's prompt in a production environment, but for this training class it will be useful.
Virtual Console
Slide 2-12:
Virtual console
                                                     [Slide diagram: the virtual console — vpar1 owns the physical
                                                     console; its tty driver and the vconsd daemon connect through
                                                     the vPar Monitor, which multiplexes console I/O for vpar1,
                                                     vpar2 and vpar3]
The vPars Virtual Console provides a mechanism for console multiplexing (sharing) among vPars. Key concepts to remember:
The virtual console module multiplexes all vPars using a virtual console to one actual hardware console; thus, the virtual
console is shared by all vPars. The vPar which is assigned the actual console hardware is responsible for transmitting
information to the actual console, and for receiving information from the console keyboard. Only one vPar may logically
connect to the actual console at any given time. A special character (ctrl-A) typed at the console changes this logical
connection from one vPar to the next, in a round-robin fashion.
The Virtual CoNsole (vcn) driver implements a virtual serial port that HP-UX can use as its console. Unless a hardware
console is specifically called out in the partition database, vcn is used. All console I/O to the vcn driver is transferred to the
monitor, where it is buffered until it can be handled by a Virtual Console Slave (vcs) driver that has established a logical
connection to that vPar. The vcs driver may or may not be in the same vPar as the vcn driver. The monitor provides a special
inter-vPar communication path specifically for console I/O.
The vconsd daemon connects the vcs driver to the tty driver that is attached to the actual console hardware. This kernel daemon is
necessary because there are times when the console hardware isn’t ready to accept new data, and the calling process has to
sleep; drivers aren’t supposed to sleep on the interrupt stack, so vconsd serves as a “sleepable” interface between vcs and the
tty driver. The daemon waits for data to become available in either direction (either to the console or from the keyboard), and
it passes the data on to the appropriate recipient.
If the vPar that owns the hardware console is not running, the vPar monitor emulates the vcs driver and manages the console
device, using console IODC.
In this diagram, the physical console device is owned by vpar1. When any of the vPars have console output, they send it to
their vcn driver, which sends it to the monitor, where it is buffered. As shown in the slide, the monitor in turn sends the output
to vpar1’s vcs driver. Under normal circumstances, the monitor keeps track of which kernel instance is currently “connected”
to the console, and will copy incoming/outgoing data from/to its per-instance buffer from/to the monitor’s buffer, which will
then be copied from/to the physical console.
In addition to the portion of the monitor which manages the virtual console, there are three major components in the virtual
console module. They are the Virtual CoNsole pseudo-device driver (vcn), the Virtual Console Slave pseudo-device driver
(vcs), and the Virtual Console Kernel Daemon (vconsd). These drivers and daemon were covered on the previous slide.
The device file for the virtual console master (/dev/vcn) effectively points to /dev/console.
The device file for the virtual console slave (/dev/vcs) is a feed into the physical console, but only on the vPar that owns the
physical console hardware. That means that if you have sufficient privilege to open /dev/vcs on the vPar that owns the I/O path
with the console hardware, writing to /dev/vcs is the same as typing on the virtual console, no matter which vPar owns it.
NOTE: when using ioscan, both the vcn and vcs devices show up with NO_HARDWARE. Don't worry; this is normal.
CPU resources
Slide 2-13:
CPU resources
                • CPUs can be
                     –     Bound to a partition
                     –     Unbound (or floating)
Virtual partitions need a number of physical processors, and these CPUs can be configured in two ways:
• Bound CPUs:
               — are attached to a partition and can not be moved whilst the partition is active.
               — can be used to process IO interrupts.
               — There must be at least one bound CPU in every partition.
               — The vparcreate and vparmodify commands refer to bound CPUs as the “minimum number” of CPUs.
      — You can choose which CPUs to allocate as bound CPUs to a particular partition; it can be helpful in the labs to avoid
        allocating the monarch CPU. In the real world it is useful when working with cell-based systems, such as the
        SuperDome, rp8400 & rp7410, as certain inter-CPU operations are quicker between two CPUs on the same cell, or, in
        the case of larger SuperDomes, between CPUs on cells attached to the same crossbar.
•     Unbound/Floating CPUs:
      — can be added to or removed from active partitions, and so offer more flexibility in terms of managing CPU-intensive
        workloads.
      — can not be used for IO processing.
We will cover choosing between bound and unbound CPUs in the planning module later in the class.
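As a sketch of how the bound/unbound distinction appears in the commands (the partition name vpar1 and the hardware path
are examples only; see the vparmodify(1M) manual page for the exact syntax):

      # vparmodify -p vpar1 -m cpu::3        set the total CPU count; CPUs above the bound minimum float
      # vparmodify -p vpar1 -m cpu:::2       set the minimum (bound) CPU count; the partition must be down
      # vparmodify -p vpar1 -a cpu:41        bind the specific CPU at hardware path 41 to the partition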
Monarch CPUs
On multiprocessor systems, there are certain jobs it makes most sense to have one CPU do. At boot time the firmware
selects a Monarch CPU to perform these jobs; for instance, the monarch drives the boot process. On a standalone system this
CPU will then go on to become the monarch of the HP-UX instance. On a vPars system, each partition needs its own monarch
CPU; it will be the lowest-numbered bound CPU in the partition.
The system's monarch processor is also significant to the behavior of the console. Normally, when using ^A to toggle between
the partitions, only the active partition consoles are accessible. If, however, the system's monarch processor is not currently in
use by an active partition then the monitor's MON> prompt is also accessible.
Memory Resources
Slide 2-14:
Memory resources
As well as assigning CPUs to partitions, they will also need a certain amount of physical memory. Memory resources are
assigned to partitions in 64MB increments. In the planning module we discuss how much memory needs to be assigned to
partitions, but basically a partition will need the same amount of memory as a standalone system would require to handle the
same workload.
Normally, when configuring memory resources for a partition, it is only necessary to specify the total amount of memory
required by the partition. The vpmon monitor then determines which actual parts of the physical memory map are assigned to
which partitions. Generally a partition will get a 64MB area low in the memory map, and the rest higher up. Certain parts of
the kernel are limited to running at physical addresses below 4GB, and with very large kernel configurations this could
potentially limit the number of virtual partitions running on a system.
As well as having vpmon choose the memory areas, it is also possible to explicitly choose which ranges of memory get
assigned to which partitions. However, with current system configurations there is little to be gained by doing this, since all
CPUs have equal memory access times to all the system's memory. Even cell-based systems such as the SuperDome, which
have Non-Uniform Memory Architectures (NUMA), currently interleave memory accesses across all the cells within the hard
partition, cancelling out the NUMA nature of the implementation. If at some future date the NUMA nature of the systems
were exposed, then there would be performance benefits to localizing memory and CPUs within partitions.
The virtual partitions are built on top of vpmon, which also needs some memory. However, when planning your partitions you
can largely ignore this and allocate all the physical memory within the system to the partitions. Vpmon will steal the space it
requires from one or more of the partitions.
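As a sketch of specifying memory by total amount only (the partition name and sizes are examples; see the vparcreate(1M)
manual page for the full syntax), creating a partition with 2GB and later growing it might look like:

      # vparcreate -p keswick -a mem::2048      request 2GB in total; vpmon chooses the physical ranges
      # vparmodify -p keswick -a mem::1024      add a further 1GB; the partition must be down to change memory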
Here we have a system with 6GB of RAM and 3 virtual partitions with 2GB each
# vparstatus
[Virtual Partition]
                                                                                                        Boot
Virtual Partition Name                     State Attributes Kernel Path                                 Opts
============================== ===== ========== ========================= =====
keswick                                    Up        Dyn,Auto      /stand/vmunix
carlisle                                   Up        Dyn,Auto      /stand/vmunix
settle                                     Up        Dyn,Auto      /stand/vmunix
If, however, we look at the memory section of the dmesg output, you will see that they are not all equal.
Carlisle
Memory Information:
      physical page size = 4096 bytes, logical page size = 4096 bytes
      Physical: 2078720 Kbytes, lockable: 1559280 Kbytes, available: 1797088 Kbytes
Settle
Memory Information:
      physical page size = 4096 bytes, logical page size = 4096 bytes
      Physical: 2097152 Kbytes, lockable: 1574104 Kbytes, available: 1814304 Kbytes
Keswick
Memory Information:
      physical page size = 4096 bytes, logical page size = 4096 bytes
      Physical: 2093056 Kbytes, lockable: 1572812 Kbytes, available: 1809640 Kbytes
Here we can see that the partition Carlisle is 18MB short, Settle has the requested 2GB, and Keswick is 4MB short. Which partitions lose memory to vpmon, and the order in which the physical memory gets allocated, appear to vary between boots.
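The arithmetic behind these figures can be sketched in a few lines of shell (the Kbyte values are taken from the dmesg output above; each partition requested 2GB = 2097152 Kbytes):

```shell
# Requested size: 2GB = 2097152 Kbytes; physical figures from dmesg above.
requested=2097152
for entry in carlisle:2078720 settle:2097152 keswick:2093056; do
    name=${entry%%:*}
    physical=${entry##*:}
    # shortfall in MB (1MB = 1024 Kbytes)
    echo "$name is $(( (requested - physical) / 1024 ))MB short"
done
```

Across the three partitions the total shortfall is 22MB, which is the memory taken by vpmon.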
Slide 2-15:
IO resources
When planning and configuring IO resources, the important thing to remember is that vPars works at the level of the Local Bus Adapter (LBA). All the systems that currently support vPars have one PCI slot for each LBA.
It is important to remember that everything attached through a single LBA is going to be assigned to a single virtual partition. You cannot take a 4-port LAN card and assign the 4 ports to 4 different virtual partitions - this DOES NOT WORK.
The System Bus Adapters (SBAs) are not assigned to individual virtual partitions.
As well as assigning LBAs as IO resources to virtual partitions, you also assign boot devices as IO resources. You can either do this as you create a virtual partition, or Ignite can do it using setboot during the installation.
Slide 2-16:
Virtual partitions act like a series of separate systems: they cannot see each other's files, memory, etc. They will each have their own configuration files, such as /etc/passwd. If a user needs to be able to access more than one partition then, just as with separate systems, they will need an entry in the password file of each partition, or you will need to run some form of network-based authentication system, such as LDAP.
This behavior of vPars gives very good isolation between the different partitions. However, there are two issues which could affect the security of the vPars environment if the different partitions are in different management domains:
•     The vPar commands, the commands for managing the vPars environment such as vparstatus, can be used by root in any virtual partition. This allows root in one partition to carry out operations against another partition. When all the partitions are managed by the same group, within a co-operative environment, this can be good. However, if the different partitions are managed by different, non-co-operating teams this could be a problem. Potentially you could have a situation like “My system's going slow, I'll steal some of your CPUs”.
•     Shared console: using a “^A” on the console, you can toggle between the different active partitions. When you move away from a partition you are not automatically logged off; in fact the system is unaware that you have moved away, or later come back. If two partitions were managed by different groups, and both were to have console access, then they could “bump” each other and gain access to things like root sessions left on the console.
These two limitations mean that vPars is probably best suited to environments where management of the partitions is performed by a single team of administrators.
Slide 2-17:
Questions
Preface
Ignite/UX is the system installation tool for HP-UX 11. We are going to use Ignite to install HP-UX into the virtual partitions we create, so it is useful to have some understanding of it. This is not intended as an Ignite class, just a very brief introduction covering the aspects of Ignite that we are going to use in the class.
Slide 3-1:
Slide 3-2:
Module objectives
•     What Ignite is
•     What it can and cannot do
•     How to set up a simple Ignite server to perform installations using Software Distributor
•     How to create a golden image, and configure it into Ignite
Slide 3-3:
This is a class about virtual partitioning, not Ignite, but we are going to be using Ignite to install HP-UX into our virtual
partitions so it helps to have some understanding of Ignite.
Slide 3-4:
What is Ignite
Ignite/UX is the installation environment for HP-UX 11 (it was also back-ported to HP-UX 10.20). It has many capabilities, including:
•     Installs can be performed from tape, CD, or the network; in fact it is possible to configure multiple software sources and install from all three.
•     The main methods of performing installations use Software Distributor (IEEE 1387.2) format media, as this is the format that HP-UX is shipped in, and it provides the most flexibility in terms of which software products are installed. However, Ignite/UX is also capable of installing software in simple tar and cpio formats. If installing from multiple sources, Ignite can control the order in which loads take place.
               Ignite can also execute custom programs/scripts which could then access any other format that was required.
      One major benefit of Ignite being able to handle formats other than Software Distributor is that systems can be cloned by
      producing a backup of one customized system and then using this backup as an Ignite source to install other systems. This
      allows systems to be replicated easily, complete with all local customisations. These archive based installations are much
      faster to deploy since they do not need to run all the customisation and patching scripts associated with a normal
      installation.
•     Ignite sessions are controlled by configuration files that describe how the system is to be configured, where software is coming from, and what software is available. You can create these configuration files by hand, but Ignite also ships with utilities that can produce customized ones - in particular, a configuration file that allows replication of the current system.
•     This last feature is used to allow Ignite to produce disaster recovery images, allowing a system to be re-installed in the event of losing it. These should not be confused with backups; they are intended to allow the system to be re-built to the extent that it is able to then handle the main backups.
Slide 3-5:
Ignite is for performing HP-UX system installations. It is not able to add additional software to an already installed system,
and it is not able to update an existing system.
There are three main types of installation formats used:
•     Software Distributor based installs. These are the default, since this is the format in which HP ships HP-UX.
•     Golden images, or archive based installs. Here a simple backup of a system is used to perform installation of other systems. This is a very useful way of cloning systems and providing standard builds.
•     Recovery images. As well as providing the installation facilities, Ignite also provides a pair of commands, make_tape_recovery and make_net_recovery. Since, as of vPars A.02.02, tape based installs are not supported for virtual partitions, we will not be looking at make_tape_recovery.
   Both of these recovery tools make a golden image of the current system, and produce a customized Ignite configuration
   file that is able to reproduce the current system. These can then be used to rebuild the system in the event of a total
   system failure.
   It is not impossible to use the archive created by make_net_recovery to clone the system onto other hardware, but this
   generally involves messing around with Ignite's directory structures and can be prone to errors. If you want to clone the
   system, it is generally better to produce a real golden image and use that.
Slide 3-6:
When you first run the Ignite GUI (or TUI) it will take you through a configuration wizard and allow you to set up and
configure Software Distributor depots. You can also choose the “Run Tutorial/Server Setup” option from the “Actions” menu.
However here we are going to perform the setup from the command line.
         1. Make sure that you have enough disk space. By default Ignite is going to put its depots under
            /var/opt/ignite/depots. You can easily put them anywhere else, but this is the default. A complete copy of all the
            HP-UX 11.11 core OS & apps CDs for the June 2003 shipment works out at about 3.5GB. You probably don't need quite
            that much, as not many people install every single application in all languages.
         2. Check the source of your software. Are you copying it from the CDs or DVDs, or from an existing network based
            depot?
   As well as the main HP-UX Operating Environment you will also need to provide the vPars software. If you install into a
   virtual partition and do not include the vPars software, the partition will fail to boot when Ignite first boots using the
   kernel it has built: without the vPars software the kernel is not relocatable.
   root@keswick[] swlist -l depot -s donald
   # Initializing...
   # Target "donald" has the following depot(s):
       /var/spool/sw
       /cdrom
       /depots/0112/11.00/gnu
       /depots/addons
       /depots/patches
       /depots/0303/11.11
       /depots/0306/11.11
       /depots/vPars/A.02.02
       /var/opt/ignite/depots/recovery_cmds
   root@keswick[]
   The two depots we need are /depots/0306/11.11 and /depots/vPars/A.02.02.
 3. Copy the depots. Ignite provides the make_depots command to perform this task, or you can use the standard swcopy
    command.
   root@keswick[] make_depots -r B.11.11 -s donald:/depots/0306/11.11
   If there are multiple depots that need to be copied then the command can be issued multiple times.
   root@keswick[] make_depots -r B.11.11 -s donald:/depots/vPars/A.02.02
 4. Configuration files. Once the depots have been copied onto the Ignite server, Ignite needs to be configured to know
    about them.
   Ignite configuration files can be built using the make_config command. You will probably find that you have two
   depots:
   •    /var/opt/ignite/depots/Rel_B.11.11/core
   •    /var/opt/ignite/depots/Rel_B.11.11/apps
    Each depot that Ignite references gets a separate configuration file, but for the default depots we have just created, a
    single run of make_config will make both configuration files.
   root@keswick[] make_config -r B.11.11
   NOTE:         make_config can sometimes take a long time to complete. Please be
                 patient!
    This will then make a pair of files in the directory /var/opt/ignite/data/Rel_B.11.11; the files are named
    apps_cfg & core_cfg. Earlier versions of Ignite then required that these two were added into Ignite's INDEX file.
 5. INDEX file. Ignite keeps track of all of its configuration files in the file /var/opt/ignite/INDEX, which can be managed
    using the manage_index command. However, the file is a very simple text file and can be edited directly. Earlier releases
    of Ignite needed the config files produced by make_config to be added manually to the INDEX file, either using
    manage_index or directly with a text editor; more recent versions add these configuration files automatically.
     root@top[Rel_B.11.11] cd /var/opt/ignite/
     root@top[ignite] more INDEX
     # /var/opt/ignite/INDEX
     # This file is used to define the Ignite-UX configurations
     # and to define which config files are associated with each
     # configuration.        See the ignite(5), instl_adm(4), and
     # manage_index(1M) man pages for details.
     #
     # NOTE: The manage_index command is used to maintain this file.
     #          Comments, logic expressions and formatting changes are not
     #          preserved by manage_index.
     #
     # WARNING: User comments (lines beginning with '#' ), and any user
     #              formatting in the body of this file are _not_ preserved
     #              when the version of Ignite-UX is updated.
     #
     cfg "HP-UX B.11.11 Default" {
                description "This selection supplies the default system configuration th
     at HP supplies for the B.11.11 release."
                "/opt/ignite/data/Rel_B.11.11/config"
                "/opt/ignite/data/Rel_B.11.11/hw_patches_cfg"
                "/var/opt/ignite/data/Rel_B.11.11/apps_cfg"
                "/var/opt/ignite/data/Rel_B.11.11/core_cfg"
                "/var/opt/ignite/config.local"
     }
     As we can see, our two new configuration files have been added to the list for the “HP-UX B.11.11 Default” section.
     Ignite also provides a command “instl_adm -T” to test the INDEX file and the configuration files that it lists.
Slide 3-7:
(Slide diagram: bdf, make_sys_image, archive_impact, then create an Ignite configuration file, ignite.)
Golden images, or archives, are the simplest and fastest way of cloning systems using Ignite. A golden image is simply a
backup of a system, in either tar or cpio format. This can be configured into Ignite and used as the basis of further installs.
Unlike the SD based install above, though, we are going to need to produce a configuration file by hand, with only a
template to help us.
To produce a good golden image we need to:
         1. Install & customize a system. One of the beauties of installing using golden images is that you can install the system
            already customized the way you like it, with all the software you need installed, not just the HP supplied software.
            This works for the majority of software, but some applications unfortunately tie themselves to a system during
            installation, for instance by including a hostname in a config file. In these cases you might need to add a customization
            script to sort out the problem.
 2. Make the archive, Ignite provides a command make_sys_image that produces the golden image archive.
      root@settle[] /opt/ignite/data/scripts/make_sys_image -s 15.0.104.43 -d /golden -l 2 -m t -c g
      make_sys_image is not in the normal PATH and, as the directory name implies, it is a shell script which you can of
      course customize.
     -s specifies the Ignite server that you are going to save the image onto.
     -d /golden, This is the directory on the server that the image will be saved in. The name of the file defaults to being the
     current system name, with an appropriate suffix depending on compression type selected.
     Our system is called “settle”, so the file will be /golden/settle.gz.
     The client, that is being archived, needs to be able to remsh to the server, although not necessarily as root. See the -r
     option.
      -l 2, before generating the archive make_sys_image can strip off configuration information that is really relevant to the
      system's identity. The idea is to clone systems, but they do need some differences. The -l 2 option is the default and strips
      this identity information off; -l 1 would leave it in the archive. See the man page for more details.
     -m t, use tar format, make_sys_image actually uses pax, which can write in either tar or cpio format.
     -c g, compress the archive using gzip.
 3. Archive impact. When installing software using SD, a lengthy analysis phase is needed; most of this time is spent
    working out how much disk space will be required. When using Ignite, the Ignite configuration files store the details of
    disk space used, so as to avoid the analysis phase and to allow the filesystems to be sized appropriately. For an SD
    based install, make_config works out these impact values. For an archive based install Ignite provides the
    archive_impact command. The output of this command needs to be saved, so that it can be included in the configuration
    file we are going to produce next.
     root@keswick[golden] /opt/ignite/lbin/archive_impact -t -g -l 1 settle.gz >> settle.imp
     Again the command is not in the normal bin directories.
     -t, the archive was created in tar format.
     -g, the archive was compressed with gzip
     -l 1, work out the space used per directory at a depth of 1, i.e. only work out space used at the top level of directories.
      When creating filesystems in an Ignite session, if we were to try to create a filesystem for /opt/myApp, Ignite would not
      know how much space was needed at this level; the space requirements (impacts) were only worked out for one level of
      directories. So Ignite would know how much space was required under /opt, but not under /opt/myApp.
      By choosing -l 2, the impacts would record how much space was required for each second level directory like
      /opt/myApp. The downside is that you end up with many more impact lines to include in the configuration file.
      The output is then saved so that it can be included in the configuration file.
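The depth behaviour can be illustrated with a short sketch. The /opt total below is the real figure from the settle archive used later; the split between /opt/myApp and /opt/local is invented for illustration:

```shell
# Hypothetical split of the settle archive's /opt impact (1244509 Kb)
# between two second-level directories; the split itself is invented.
opt_myApp=500000
opt_local=744509
echo "-l 1 gives one line:  impacts = \"/opt\" $(( opt_myApp + opt_local ))Kb"
echo "-l 2 gives two lines: impacts = \"/opt/myApp\" ${opt_myApp}Kb"
echo "                      impacts = \"/opt/local\" ${opt_local}Kb"
```

With -l 1 Ignite can only size /opt as a whole; with -l 2 it could also size a separate filesystem for /opt/myApp.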
Next a configuration file needs to be created and added into the INDEX file.
Slide 3-8:
         1. Configuration file. The configuration file is produced by hand, based on a template. This configuration file needs to tell
            Ignite where it can get software from and how, a “software source” stanza, and then what software is available from that
            source, a “software selection” stanza.
               Start by copying the template
               root@keswick[golden] cd /var/opt/ignite/data/Rel_B.11.11
               root@keswick[Rel_B.11.11] cp /opt/ignite/data/examples/core11.cfg golden.cfg
               NOTE: there are two example core configuration files, core.cfg and core11.cfg. You need the core11.cfg file for
               HP-UX 11.00 & 11.11 systems; the core.cfg file is for use with HP-UX 10 and will not work with HP-UX 11, as it
               does not understand 32-bit & 64-bit systems.
          #     ftp_source = "anonymous@15.1.54.123:iux"
          #     remsh_source = "user@15.1.54.123"
     }
      sw_source, optionally change this. This is not just a comment; it is the name of this software source. The later software
      selection stanza will need to know the name of the source it is to use. If you are intending to have multiple software
      sources for core archives then you will need to rename all but one.
      description, optional, used to describe this software source.
      load_order, controls the order in which software is loaded: 0 first, then 1, and so on. Core OS images need to have a
      load order of 0.
      source_format, this is an archive based install, rather than an SD based one.
      source_type, vPars can currently only be installed from a network based Ignite server. For CD or tape based
      installations it might be necessary to change the media between booting and starting to load the archive.
      nfs_source, Ignite can load the archive over the network using NFS, ftp or remsh; we will use NFS. You need to set the
      source to the IP address of the archive server; this is normally the Ignite server but does not need to be.
      Then, starting on line 120, there are the example software selection (sw_sel) stanzas. The core11.cfg file has two
      example sw_sel sections, a 32-bit example and a 64-bit one. We are obviously not interested in the 32-bit example, so it
      can just be deleted; the 64-bit one needs to be edited. It starts off as
     init sw_sel "golden image - 64 bit OS" {
          description = "English HP-UX 11.00 CDE - 64 Bit OS"
          sw_source = "core archive"
          sw_category = "HPUXEnvironments"
          archive_type = gzip cpio
          archive_path = "B.11.00_64bitCDE.gz"
          impacts = "/"                421Kb
          impacts = "/sbin" 30086Kb
          impacts = "/opt"          78654Kb
          impacts = "/usr" 276420Kb
          impacts = "/var"          10059Kb
          exrequisite += "golden image - 32 bit OS"
          visible_if       = can_run_64bit
     } = (!can_run_32bit)
     sw_sel, the name of the software selection.
     sw_source, the name of the software source this software is to be loaded from. This needs to match the name set in the
     sw_source stanza above.
      archive_type, describes the compression and file format of the archive. We opted to use a gzipped tar archive, so this
      will need to be changed to match.
     archive_path, the filename of the archive relative to the directory specified in the sw_source above.
    impacts, these describe the amount of space required under the different directories. This section needs to be replaced
    with the output from the archive_impact command.
    exrequisite, any software selection that this software cannot be loaded with. We don't really need one of these.
   visible_if, controls whether this option is visible to the user in the Ignite GUI session. In this case make the option visible
   (therefore available) to any system that can run 64bit HP-UX. “can_run_64bit” is a special variable created by Ignite.
    Equals true: if a sw_sel is given the value of true it will be the default software selection in the Ignite session. In this
    case the special variable can_run_32bit is being tested, so if the system cannot run 32-bit HP-UX, this becomes the
    default software selection.
   We then need to change the stanza to something like:
   init sw_sel "vPars golden image 1" {
        description = "English HP-UX 11.11 vPars configuration"
        sw_source = "vPar golden core archive"
        sw_category = "HPUXEnvironments"
        archive_type = gzip tar
        archive_path = "settle.gz"
        impacts = "/" 15Kb
        impacts = "/.ssh" 1Kb
        impacts = "/dev" 13Kb
        impacts = "/etc" 41839Kb
        impacts = "/home" 1Kb
        impacts = "/opt" 1244509Kb
        impacts = "/sbin" 37222Kb
        impacts = "/stand" 1093Kb
        impacts = "/usr" 1222755Kb
        impacts = "/var" 320043Kb
        visible_if        = can_run_64bit
   } = (!can_run_32bit)
    Starting on line 172 (original line numbers, before we start editing the file) is a section that sets an Ignite variable,
    _hp_os_bitness, that Ignite needs. If you deleted the 32-bit sw_sel stanza then the 32-bit version needs deleting here as well.
   (sw_sel "golden image - 32 bit OS") {
        _hp_os_bitness = "32"
   }
   The 64bit version needs to have the software selection name changed to match the one we used above
   (sw_sel "vPars golden image 1") {
        _hp_os_bitness = "64"
   }
     The configuration file is now complete and needs adding into the INDEX file.
 2. INDEX file. With the SD example above, make_config produced the configuration files we needed and added them into
    the default HP-UX 11.11 section of the INDEX file. For our golden image example, though, you need to manually add the
    new configuration file into the INDEX.
     Probably the easiest way to do this is to copy the default HP-UX 11.11 stanza and edit it.
     cfg "HP-UX B.11.11 Default" {
                description "This selection supplies the default system configuration th
     at HP supplies for the B.11.11 release."
                "/opt/ignite/data/Rel_B.11.11/config"
                "/opt/ignite/data/Rel_B.11.11/hw_patches_cfg"
                "/var/opt/ignite/data/Rel_B.11.11/apps_cfg"
                "/var/opt/ignite/data/Rel_B.11.11/core_cfg"
                "/var/opt/ignite/config.local"
     }
      cfg, the name of the configuration group. This is the name shown in the “Configurations” options menu in the Ignite GUI.
     /opt/ignite/data/Rel_B.11.11/config, supplies all the configuration information to know about which disks to choose,
     what filesystems to make... you probably need this unless you know a lot more about Ignite than is covered in the class.
     /opt/ignite/data/Rel_B.11.11/hw_patches_cfg, makes sure that the patches get loaded. We don’t need this since the
     patches are already in the archive we are using.
     /var/opt/ignite/data/Rel_B.11.11/apps_cfg & core_cfg, are the configuration files produced by make_config for the SD
     case above. We don’t need to copy these.
      /var/opt/ignite/config.local, holds local configuration information. You can use this file to specify things like the DNS
      domain name, the addresses of name servers, the timezone, the language, etc.
      So we can start by copying this stanza and then editing it:
     cfg "HP-UX B.11.11 vPars" {
                description "HP-UX Virtual Partition configurations."
                "/opt/ignite/data/Rel_B.11.11/config"
                "/var/opt/ignite/data/Rel_B.11.11/golden.cfg"
                "/var/opt/ignite/config.local"
     }
      We could have added the golden.cfg file to the existing stanza, but the patch configuration file in there would have
      resulted in Ignite attempting to reload all the patches on top of the archive.
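If you prefer to script this step, the new stanza can be appended with a here-document. A sketch that writes to a working copy; on the real server you would target /var/opt/ignite/INDEX and then verify the result with "instl_adm -T" (or use manage_index instead):

```shell
# Work on a copy of the INDEX file; fall back to an empty file if no
# INDEX exists in the current directory (as in this sketch).
cp INDEX INDEX.new 2>/dev/null || : > INDEX.new
cat >> INDEX.new <<'EOF'
cfg "HP-UX B.11.11 vPars" {
        description "HP-UX Virtual Partition configurations."
        "/opt/ignite/data/Rel_B.11.11/config"
        "/var/opt/ignite/data/Rel_B.11.11/golden.cfg"
        "/var/opt/ignite/config.local"
}
EOF
```

The quoted 'EOF' delimiter stops the shell from expanding anything inside the stanza, so it is appended verbatim.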
Slide 3-9:
Questions
Preface
Before starting to create virtual partitions it is important to know why you are creating them, how many are needed and what
resources they need. This module takes a look at planning for virtual partitions.
Slide 4-1:
Slide 4-2:
Module objectives
•     Examining the customer's requirements: why is the system being partitioned, how many partitions are required, and
      what resources are they going to need?
•     The number of partitions different systems can support.
•     As well as the maximum number, what are likely to be sensible limits?
•     When working on cell based systems, which also support hardware partitions, how to plan the two partitioning
      environments together.
•     Creating partition plans.
•     Looking at some examples.
Slide 4-3:
Outside the classroom it is important to look at why systems are being partitioned. You need to know what applications are to
be run on the system, and what resources the different applications need in terms of CPUs, memory and disk space.
Once you know which applications you are supporting, it is then important to understand whether there are any interactions
between the different applications. Once the system is partitioned, if applications in different partitions need to communicate
they will need to do so via the network. The different partitions need to be treated as if they were separate systems; from the
point of view of HP-UX they are separate systems. If applications need to communicate more closely, for example through
shared memory or files, then separating them using virtual partitions is not going to work. If being able to control their
resource usage is important, then other products from the Partitioning Continuum, such as Processor Sets or PRM, might be
more appropriate. Even applications that use network type communication can, if they are in the same partition, short-cut the
physical network and gain better performance.
Of course, the reason for having partitioning products is that quite often applications share the same system but do not
need to. They would run just as well on separate systems; in fact quite often they would run better, or be much easier to
manage, on separate systems. This is where tools like virtual partitions can add value.
Slide 4-4:
•     A console device; this would normally be the GSP. For the cell based systems this can be shared between the hard
      partitions.
•     A bootable disk device and interface. The SCSI/LAN combo card is not able to act as a system boot device.
•     An installation device. The configuration guide states that a CD/DVD device is required, but all these systems can be
      booted across the network for installation from an Ignite/UX server, if one is available.
The HP-UX Virtual partitioning software currently has a limitation of 8 virtual partitions in a single hardware environment
(system or nPar).
•     One or more CPUs. The virtual partitioning system does not provide virtual hardware; it allows you to allocate the
      physical CPUs present within the system across the virtual partitions. When allocating CPUs, remember that CPUs can
      be either bound or unbound/floating. We will discuss this more shortly.
•     Some memory. The configuration guide mentions a minimum of 256MB, but you would probably not want to run many
      applications with that amount of RAM. Mostly I would expect a minimum of 1GB for a virtual partition running real
      applications.
•     A boot device. Starting with the A.02.02 release of the virtual partitioning software this can be a SCSI/LAN combo
      card, or any other supported system boot device. The only card supported for system boot but not supported by vPars is
      the RAID 4Si card.
      One of the virtual partitions shares its boot device with the system. All other partitions must have separate boot devices
      on separate LBAs.
•     In addition to a boot disk, the virtual partition will require enough disk resources to be able to run its applications and
      to provide the required level of availability.
•     A LAN card.
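As a rough planning aid, the per-partition minimums above can be totalled up. A sketch for a hypothetical three-partition configuration (the 1GB figure is the practical minimum suggested above, not a product limit):

```shell
npars=3
# Each virtual partition needs at least one CPU, a boot device on its
# own LBA, and a LAN card; the configuration guide's memory minimum is
# 256MB, with ~1GB as a practical figure.
echo "CPUs:      at least $npars"
echo "Memory:    $(( npars * 256 ))MB absolute minimum, $(( npars * 1024 ))MB practical"
echo "Boot LBAs: $npars"
echo "LAN cards: $npars"
```

Remember that one partition shares its boot device with the system, and that vpmon's own memory comes out of the partitions' allocations.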
Slide 4-5:
                    –     Yes
                          • Great, move on to the planning stage
                    –     No
                          • Reduce the number of partitions
                                  or
                          • Can we upgrade the system?
The first part of planning for partitioning is to identify the applications, and group them into sets that need, or would
benefit from, being run within the same partition.
Next it is an issue of looking at the hardware resources available: how many partitions can the hardware support?
One of the benefits of partitioning is that it can reduce the overall hardware requirements, but this is compared with running
everything in separate systems. Compared with running all the applications within a single system/partition there can be an
increase in the required hardware resources. As we discussed on the previous page, there are minimum requirements for each
virtual partition.
CPUs, there could be an increase in the required number of CPUs due to the requirement for each partition to have at least one.
On a non-partitioned system a number of small applications might in effect share a processor. If these same applications are
each placed in individual partitions, then they will require at least one each.
Memory, each partition will have its own copy of the HP-UX kernel. This will impose a small overhead on a partitioned system as
it will now have several copies of the kernel, rather than just one. Remember however that the amount of memory used by the
kernel is in part a function of the amount of RAM available, so rather than one large kernel you will have a number of smaller
ones.
Also there is the overhead of the memory required by vpmon. Remember the example from module 2, where across the 3
partitions we were 22MB short on the memory actually allocated.
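The scale of this overhead can be seen with a little arithmetic. A sketch in shell, where the 2GB-per-partition sizes are assumed for illustration and only the 22MB shortfall comes from the module 2 example:

```shell
# Illustrative only: the 2GB-per-partition sizes are assumed; the 22MB
# shortfall is the figure quoted from the module 2 example.
requested=$((2048 * 3))        # three partitions each asking for 2GB
granted=$((requested - 22))    # vpmon's own footprint eats into the total
echo "requested=${requested}MB granted=${granted}MB short=$((requested - granted))MB"
```

This prints requested=6144MB granted=6122MB short=22MB: small per system, but it is a fixed cost paid on top of every partitioning plan.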
Disk interfaces/HBAs, in practice each partition is likely to need at least 2 disk interfaces in order to provide a reasonable
amount of resilience. On a non-partitioned system, disk interfaces would typically be shared between applications. They cannot
be shared between virtual partitions.
Disks, here there are two issues: the overhead of having multiple copies of the HP-UX operating system, and the granularity of
the disk hardware. The latter is likely to be the larger overhead, unless disk arrays are being shared between multiple systems,
such as in a SAN environment.
LAN cards, in most systems LAN cards will be shared by all the applications running on the system. Where multiple LAN
interfaces are used it is normally to provide resilience or to attach to separate networks; under vPars each partition will need at
least one LAN card of its own.
It is possible that there will not be sufficient hardware resources to create all the desired virtual partitions; if this is the case
there are two possible solutions.
Slide 4-6:
In terms of the ultimate limit on the number of virtual partitions that a system can support, there are now two limitations.
Slide 4-7:
The first recommendation from the configuration guide for the number of partitions in a system is that the number should be
less than or equal to half the number of CPUs. In practice for many applications the number of partitions could be lower still,
depending on the CPU and IO requirements of the partitions.
Slide 4-8:
             •General rule
              • Bound CPUs ≥ floating CPUs
                    –     Your mileage may vary
CPU resources can be added to partitions in one of two ways:
•            Bound processors are attached to a partition in a way that does not allow them to be moved whilst the partition is up and
             running. This allows the HP-UX kernel to use these processors to handle device interrupts. All partitions must have at
             least 1 bound CPU; the first bound CPU will act as the monarch processor for the partition.
•            Unbound or floating processors are not so strongly tied to the partition and so can be added and removed from partitions
             whilst the partition is active. This allows greater flexibility in handling workloads. Unfortunately these processors cannot
             be used to handle device interrupts.
When configuring a partition you need to decide how to allocate the CPU resources. As a general first rule for performance the
recommendation is that you should have at least as many bound CPUs as unbound ones. However, like all issues with
performance, “your mileage may vary”; the first answer to all performance questions is “It depends”.
In trying to determine the best balance between bound and unbound CPUs it is best to use the CPU screen in glance, or even
better the by-CPU report in GPM. These are able to show the amount of CPU time being spent in interrupt handling. High
percentage figures could indicate that more bound CPUs are needed.
Generally where there is a high IO load on the system, more bound CPUs will be needed. High-throughput interfaces such
as Fibre Channel and Gigabit Ethernet can create large interrupt handling loads.
For systems with a more compute-intensive load, such as number-crunching systems, there might be little call for bound CPUs,
and the increased flexibility gained by using floating CPUs would be of more benefit.
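The flexibility of floating CPUs comes from being able to move them at run time with vparmodify. A sketch, assuming the A.02.02 vparmodify syntax, using the partition names from the examples later in this module:

```shell
# Sketch only: move one floating CPU from keswick to settle while both
# partitions stay up. Only unbound (floating) CPUs can move this way;
# bound CPUs require the partition to be down.
vparmodify -p keswick -d cpu::1    # drop keswick's total CPU count by one
vparmodify -p settle  -a cpu::1    # raise settle's total CPU count by one
```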
Slide 4-9:
              • For cell based systems you may need to consider the hard
                partitioning as well.
                    – Hard partitions (nPars) are less flexible than vPars
                    + Hard partitions (nPars) provide more isolation than vPars
+ By using nPars and vPars you can have more than 8 partitions
When working with cell-based systems such as the SuperDome, Keystone (rp8400) and Matterhorn (rp7410), there is a
choice of two partitioning schemes available: hard partitions (node partitions/nPars) and virtual partitions (vPars).
Hard partitions provide better isolation between partitions. With nPars the isolation is at both the hardware and software
level.
Virtual partitions provide more flexibility, both in initial configuration and at run time.
In deciding between the two it is worth remembering that they can be combined. It is not an all or nothing choice. By
combining both partitioning schemes you can go beyond the limitation of 8 virtual partitions. Also for MC/ServiceGuard
“cluster in a box” solutions you can provide some hardware fault isolation, whilst still having some of the flexibility of virtual
partitions.
If your eventual plan is to use both virtual partitioning and hard partitioning then you should design the hard partitions
first. The configuration of hard partitions can affect the overall performance of the system. Follow the recommendations in the
configuration guide.
Once the hard partitions have been laid out, then the detailed planning of the virtual partitions can be performed. With current
system configurations all the memory within a hard partition is interleaved, so that the memory access performance for any
CPU to any memory is equal. The selection of memory ranges is not important, but as well as CPU-to-memory accesses, there
are also CPU-to-CPU cache transfers when different CPUs are trying to access the same memory. Here there can be advantages
in choosing CPUs from the same cell for the same virtual partition.
When configuring CPUs by hardware path, only bound CPUs can be selected. For A.02.02 vPars, vpmon appears to select the
processors following the bound ones for the unbound ones.
Slide 4-10:
When creating a partitioning plan it is a good idea to start with a full list of the available hardware. If the system is already
installed with HP-UX you can use “ioscan -f” to produce a list.
In this example we look initially at an N-Class (rp7400) system with 4 CPUs, that is going to have three virtual partitions.
When mapping out the hardware from the ioscan -f output, remember we are only interested initially in the hardware resources
that vPars works with, so this is mostly the CPUs and LBAs, but we also need to determine the boot devices for the partitions.
(Hardware map excerpt from the slide: CPUs at hardware paths 45, 101 and 109.)
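The ioscan listing can be cut down to just the classes vPars deals with. A minimal sketch; the filter assumes the usual ioscan format where the device class is the first column, and the `vpar_hw` helper name is our own:

```shell
# Sketch: filter a saved "ioscan -f" listing down to the hardware that
# vPars works with. "processor" covers the CPUs, "ba" the local bus
# adapters (LBAs), and "disk" helps identify candidate boot devices.
vpar_hw() {
    awk '$1 == "processor" || $1 == "ba" || $1 == "disk"' "$@"
}
# Typical use on the live system:
#   ioscan -f > ioscan.out && vpar_hw ioscan.out
```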
Note the allocation of an empty LBA to the “keswick” partition; this is aimed at increasing flexibility. The virtual partition
environment does not allow changes to IO resources on live partitions, so adding new IO resources requires the partition to be
shut down first. However, HP-UX 11.11 supports OLAR, so this can be used to add IO devices to an already running system, as
long as the LBA is already assigned to the partition.
Slide 4-11:
Worksheet excerpts from the slide, one per partition:
•     CPU minimum 1, unbound 1, max unset; Memory 2GB; IO LBA 0/0; Static No; kernel /stand/vmunix
•     CPU minimum 1, unbound 0, max unset; Memory 2GB; Static No; kernel /stand/vmunix; boot options unset
•     CPU minimum 1, unbound 0, max unset; Memory 2GB; Static No; kernel /stand/vmunix; boot options unset
Slide 4-12:
Once we have the worksheets for the virtual partitions we just need to create them. Virtual partitions are created with the
command vparcreate.
•     -p, you must provide a partition name; its maximum length is 239 characters! The partition name is used in the vpar commands.
      For other commands you will use the hostname of the partition. Whilst vPars is quite happy for the names to be different,
      in practice this is likely to cause confusion and lead to errors.
      So please set the vPar name to the hostname you are intending to use inside the partition.
•     -B boot_attr, controls whether the partition autoboots, when vpmon is started with the “-a” option.
      — auto, [default] when the boot_attr is set to auto, the partition automatically boots when vpmon starts with the
        “-a” option.
      — manual, the partition will need to be manually booted.
•     -S static_attr, controls whether changes can be made to the partition.
•     -a resource, adds a resource to the partition. The resource specifications are:
      — -a cpu::Normal number, this is the normal total number of CPUs, bound + unbound.
      — -a cpu:::min[:max], the minimum number is the number of bound CPUs. The maximum number can be used to limit
        the number of unbound CPUs being added dynamically at run time.
      — -a cpu:HW PATH, for bound CPUs (and only bound CPUs) you can explicitly choose the CPUs to be allocated to
        the virtual partition. It appears that vpmon chooses the next available CPU after the bound processors for any
        unbound CPUs.
      — -a mem::MEM SIZE, specifies the total amount of memory in megabytes to be allocated to a partition.
      — -a mem:::base:range, this option allows the memory assigned to a virtual partition to be chosen; here range MB
        starting at address base will be assigned. This option can be specified multiple times to assign multiple memory
        ranges. With current systems this option is of limited use.
      — -a io:HW_PATH, the hardware path of an LBA to be allocated to the virtual partition.
      — -a io:HW_PATH:boot, the hardware path to the boot disk for the virtual partition. You can also specify “altboot”,
        for the alternative boot device. For the initial partition the boot device needs to be specified. If additional partitions
        are to be installed using Ignite, this can be left off during the creation of the partition, as Ignite will set it later for you.
It can be very helpful to put the vparcreate command into a shell script, for a number of reasons:
•     The commands themselves can end up being quite long, with lots of -a’s; putting them into a script can make checking
      them easier.
•     You often want to create more than one partition. With the command in a file, it can be easier to copy and edit a previous
      example than to try and remember all the options again.
•     It can be useful to see later on how the partition was created.
•     If you ever want to re-create the partition then you can simply re-run the script.
Over the last couple of slides we have looked at planning a set of three partitions in an N-Class system. Next let us look at
actually creating those partitions. Our worksheets provide all the information needed. The first partition, keswick, is to have 2
CPUs, one bound, 2GB of RAM, the core IO board, a LAN board and the fibre channel disks.
#!/usr/bin/sh
#############
#
#            Script to create the virtual Partition "keswick"
#
             vparcreate -p keswick                                       \
                         -o "-lq"                                        \
                         -a cpu::2                                       \
                         -a cpu:::1                                      \
                         -a mem::2048                                    \
                         -a io:0/0                                       \
                         -a io:0/0/2/0.6:boot                            \
                         -a io:0/0/2/1.6:altboot                         \
                         -a io:0/2                                       \
                         -a io:0/12                                      \
                         -a io:1/4                                       \
                         -a io:1/10
The boot path, autoboot and static flags are all left at their defaults; we do not often need to change them.
The other two partitions “settle” and “carlisle”, each have 1 CPU, 2GB of memory, two FWD SCSI cards with dual attached
disk arrays for boot and alternate boot, and a LAN board.
#!/usr/bin/sh
#############
#
#           Script to create the virtual Partition "settle"
#
            vparcreate -p settle                                    \
                       -a cpu::1                                    \
                       -a mem::2048                                 \
                       -a io:0/8                                    \
                       -a io:0/8/0/0.3.0:altboot                    \
                       -a io:0/10                                   \
                       -a io:0/10/0/0.3.0:boot                      \
                       -a io:1/0
#!/usr/bin/sh
#############
#
#           Script to create the virtual Partition "carlisle"
#
            vparcreate -p carlisle                                  \
                       -a cpu::1                                    \
                       -a mem::2048                                 \
                       -a io:0/4                                    \
                       -a io:0/4/0/0.2.0:altboot                    \
                       -a io:0/5                                    \
                       -a io:0/5/0/0.2.0:boot                       \
                       -a io:1/2
Since there is only one processor, vpars can deduce that it must be bound :-).
Slide 4-13:
This is one of the lab systems. For the planning there is no need to worry about the applications this system is to run; the reason
for partitioning it is to learn about partitioning. We are going to aim to create 3 partitions on these systems. To allow lab
questions to explore the behavior from the monitor prompt we are going to make sure that the system’s monarch processor (HW
path 33) is not bound to a partition. There is no direct way to do this, so instead we will assign the other three processors as
bound ones.
Worksheet excerpts from the slide, one per partition:
•     vp8a: CPU minimum 1, unbound 1, max unset, bound HW path 37; Memory 768MB; IO LBA 0/0; Static No;
      kernel /stand/vmunix
•     vp8b: CPU minimum 1, unbound 0, max unset, bound HW path 97; Memory 512MB; Static No; kernel /stand/vmunix;
      boot options unset
•     vp8d: CPU minimum 1, unbound 0, max unset, bound HW path 101; Memory 768MB; Static No; kernel /stand/vmunix;
      boot options unset
NOTE             The third partition is called vp8d rather than vp8c, the hostnames for the lab systems use vp?c for the GSP
                 of the system.
vp8a
#!/usr/bin/sh
#############
#
#           Script to create the virtual Partition "vp8a"
#
            vparcreate -p vp8a                     \
                       -o "-lq"                    \
                       -a cpu::2                   \
                       -a cpu:::1                  \
                       -a cpu:37                   \
                       -a mem::768                 \
                       -a io:0/0                   \
                       -a io:0/0/1/1.0.0:boot      \
                       -a io:0/0/2/0.0.0:altboot
vp8b
#!/usr/bin/sh
#############
#
#           Script to create the virtual Partition "vp8b"
#
            vparcreate -p vp8b                     \
                       -a cpu::1                   \
                       -a cpu:97                   \
                       -a mem::512                 \
                       -a io:0/1                   \
                       -a io:0/5                   \
                       -a io:0/5/0/0.10.0:boot
vp8d
#!/usr/bin/sh
#############
#
#           Script to create the virtual Partition "vp8d"
#
            vparcreate -p vp8d                                        \
                       -a cpu::1                                      \
                       -a cpu:101                                     \
                       -a mem::768                                    \
                       -a io:0/8                                      \
                       -a io:0/10                                     \
                       -a io:0/12
In this example there is no boot IO resource specified; this partition is going to be installed by Ignite as a running partition, so
Ignite will set the boot device as part of the installation.
Example plans
Slide 4-14:
In this example we have a SD3200 SuperDome divided into 3 hard partitions. The larger first partition is used to run 3
separate database applications. These would benefit from running independently, but the CPU workload changes between the
applications and the administrators need to be able to move resources between them, live. This is not possible using hard
partitions, but it is using virtual partitions. So the customer wants to divide the larger first partition into 3 virtual partitions.
Example plans
Slide 4-15:
Having started with the plan of the hardware, the virtual partition worksheets can be produced.
Worksheet excerpts from the slide, one per partition:
•     vpar11: CPU minimum 4, unbound 2, max 8, bound HW paths 0/10, 0/11, 0/12, 0/13; Memory 6144; IO LBA 0/0/0;
      boot 0/0/1/0/0.0; altboot 0/0/2/0/0.0; Static No; kernel /stand/vmunix
•     vpar12: CPU minimum 4, unbound 2, max 8, bound HW paths 1/10, 1/11, 1/12, 1/13; Memory 6144; IO LBA 1/0/0;
      boot 1/0/1/0/0.0; altboot 1/0/2/0/0.0; Static No; kernel /stand/vmunix
•     vpar13: CPU minimum 2, unbound 2, max unset; Memory 4096; Static No; kernel /stand/vmunix
For vpar11 & vpar12 we want to keep the bound CPUs on the same cells, so it is a good idea to specify their HW paths. For
vpar13 this is not likely to be an issue.
From the worksheets we can create the scripts to make the partitions. Here you can clearly see the advantage of putting the
vparcreate command inside a script.
vpar11
#!/usr/bin/sh
#############
#
#           Script to create the virtual Partition "vpar11"
#
            vparcreate -p vpar11                     \
                      -o "-lq"                       \
                      -a cpu::6                      \
                      -a cpu:::4:8                   \
                      -a cpu:0/10                    \
                      -a cpu:0/11                    \
                      -a cpu:0/12                    \
                      -a cpu:0/13                    \
                      -a mem::6144                   \
                      -a io:0/0/0                    \
                      -a io:0/0/1                    \
                      -a io:0/0/1/0.0.0:boot         \
                      -a io:0/0/2                    \
                      -a io:0/0/2/0.0.0:altboot      \
                      -a io:0/0/4                    \
                      -a io:0/0/6                    \
                      -a io:0/0/8                    \
                      -a io:0/0/9                    \
                      -a io:0/0/10                   \
                      -a io:0/0/11                   \
                      -a io:0/0/12
vpar12
#!/usr/bin/sh
#############
#
#
            vparcreate -p vpar12                                        \
                       -o "-lq"                                        \
                       -a cpu::6                                       \
                       -a cpu:::4:8                                    \
                       -a cpu:1/10                                     \
                       -a cpu:1/11                                     \
                       -a cpu:1/12                                     \
                       -a cpu:1/13                                     \
                       -a mem::6144                                    \
                       -a io:1/0/0                                     \
                       -a io:1/0/1                                     \
                       -a io:1/0/2                                     \
                       -a io:1/0/4                                     \
                       -a io:1/0/6                                     \
                       -a io:1/0/8                                     \
                       -a io:1/0/9                                     \
                       -a io:1/0/10                                    \
                       -a io:1/0/11                                    \
                       -a io:1/0/12
For vpar12 we will install it using Ignite directly into the virtual partition, so Ignite can control the boot disk, and once we have
configured mirroring we can use setboot to specify the alternate boot device as usual.
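Assuming the standard setboot syntax, and taking the altboot hardware path from vpar12's worksheet, this last step can be sketched as:

```shell
# Sketch, run from inside the installed vpar12 once mirroring is set up:
# record the mirror disk as the alternate boot path. The path is the
# altboot path from the worksheet above.
setboot -a 1/0/2/0/0.0
```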
vpar13
#!/usr/bin/sh
#############
#
#           Script to create the virtual Partition "vpar13"
#
            vparcreate -p vpar13                     \
                      -o "-lq"                       \
                      -a cpu::4                      \
                      -a cpu:::2                     \
                      -a mem::4096                   \
                      -a io:0/0/3                    \
                      -a io:0/0/14                   \
                      -a io:1/0/3                    \
                      -a io:1/0/14
Questions
Slide 4-16:
Questions
Preface
This module takes a look at installing the virtual partition software and creating and installing virtual partitions. Before starting
you need an initial installation of HP-UX 11.11. There are several possible methods of installing HP-UX for the different
partitions; it can be done from within the virtual partitioning environment or as a series of standalone systems. As we will
discuss there are a number of drawbacks to the standalone approach, so this module will concentrate on performing the installs
from within vPars.
Slide 5-1:
                                                       Module 5 Installation
                                                       NES2-VPARvC
HP-Restricted
Module objectives
Slide 5-2:
Module objectives
Slide 5-3:
When installing HP-UX for virtual partitions there are several possible approaches. Before you can install the virtual
partitioning software you must have one copy of HP-UX installed and running. Typically we use this as the basis of our first
virtual partition. For the other virtual partitions you need to choose an installation technique. These additional HP-UX
installations can either be performed on the system standalone or into the partition under vpmon.
When installing HP-UX standalone there are a number of issues to be aware of:
•            When working outside of the virtual partition environment, obviously, no other virtual partitions will be running. The
             whole system is effectively down.
             So, only one partition can be installed at a time in this way.
• During the installations all the system hardware is visible, which can lead to problems such as:
      — You need to manually make sure that you do not overwrite the disks that should belong to another partition.
      — You need to make sure that you only use hardware resources that will ultimately belong to the planned partition.
      — Ignite will want to size the swap area based on the total RAM size of the whole system, rather than the planned
        partition. Kernel parameters might also get tuned on this basis.
      — The ioconfig file in the installation will know about all the system’s hardware resources, not just the resources that
        will ultimately belong to the planned partition. This will lead to instance numbers being based on the overall system’s
        hardware rather than the partition’s hardware.
          This might at first sight seem like a benefit, but if you make use of OLAR, then these system-wide instance numbers
          will soon break down and you will get duplicates where perhaps you were not expecting them.
      — Network PPA numbers can change when the system is rebooted and brought up under the control of vPars. This is
        a common cause of boot failures on initial partitions.
For this reason it is recommended that generally the instances of HP-UX in virtual partitions should be installed into the virtual
partitions themselves, whilst the system is running under vpmon.
There are however some cases where you might choose to perform installations outside of vPars:
•     There is no Ignite server, nor is there the possibility of creating one. With the size of modern hard disks I think this is
      unlikely to be the case.
•     The copies of HP-UX are already installed on the disks, either because hard partitions are being migrated to virtual
      partitions, or because existing standalone systems are being consolidated onto new hardware under virtual partitions.
If you are performing the installations outside of the control of vpmon, then it is just necessary to install the vPars software
into each instance of HP-UX. You cannot boot an HP-UX kernel that is not relocatable into a virtual partition.
The HP-UX virtual partitioning software works in conjunction with Ignite. The vparboot command can boot a partition from
an Ignite server; it even has a shortcut built in, in case the current partition is the Ignite server (handy for our labs at least). By
installing HP-UX directly into the partition you:
•     Avoid problems with overwriting the disks that should belong to other partitions. The partition can only see the resources
      assigned to it.
•     Let the automatic sizing capabilities of Ignite pick the right sizes for the partition’s memory.
•     Get the instance numbers of the devices assigned in a more normal fashion.
•     Get LAN card PPA numbers set up correctly.
The rest of the module will concentrate on this form of installation.
Slide 5-4:
             Congratulations,
                          you are now running a virtual partition
Before starting the actual installation, do your planning. You need to know which devices are to belong to which partitions.
Once you are running in a virtual partition you cannot see which IO cards are plugged into the LBAs that do not belong to
your partition. Hopefully as part of your planning stage you have worked out complete worksheets for the partitions, so the
actual creation of the partitions should be straightforward.
To install and startup your first partition you need to:
•            Install an initial copy of HP-UX; this might already have been done. Make sure that it is only using hardware resources
             that will belong to your initial partition.
•            Use swinstall to install the vPars software. It is not on the Core OS or Applications CDs; at present it is only provided on a
             separate CD. This will build a vPar-capable kernel and reboot the system.
•     Once the system has rebooted, you can use the vparcreate command to create the first virtual partition. If during the
      planning stage you wrote the vparcreate scripts, you can now just execute the appropriate one.
•     The system’s boot command needs to be changed to boot “/stand/vpmon -a”, rather than the normal HP-UX kernel.
      The “mkboot -a” command is used to change the boot command in the LIF AUTO file.
•     You can now reboot the system, and hopefully it should boot up under the control of the virtual partitioning environment.
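The boot-command change in these steps can be sketched as follows; the disk device file is a hypothetical example, not taken from the text:

```shell
# Sketch: rewrite the LIF AUTO file on the boot disk so the system boots
# the vPars monitor rather than the plain HP-UX kernel. The device file
# c1t6d0 is hypothetical - substitute the real boot disk.
mkboot -a "hpux /stand/vpmon -a" /dev/rdsk/c1t6d0

# The AUTO file can be read back with lifcp to confirm the change:
lifcp /dev/rdsk/c1t6d0:AUTO -
```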
Slide 5-5:
Before you can install any other partitions, you need to decide on your installation strategy. We are going to concentrate on
installations using Ignite. At vPars A.02.02 a virtual partition cannot be booted from CD, DVD or tape. To install HP-UX
directly into a virtual partition you need to use an Ignite server. If you do not have one already then now is a good time to set
one up.
In order to use your Ignite server to install virtual partitions you must make the HP-UX Virtual Partitioning software available
to the Ignite server. If you install HP-UX into a virtual partition without including the vPars software it will fail to boot, and
you will have to restart the installation!
If you are using an existing Ignite server then there are some version issues to be aware of. Current Ignite versions B.4.XXX
do not have any issues, but:
•     Versions starting at B.3.4.115, but before B.3.7.XX, need to have a modified version of the WINSTALL installation kernel
      added. The command
      /usr/ccs/bin/elfdump -r /opt/ignite/boot/WINSTALL | grep -i relocation
      can be used to find out whether you have a suitable WINSTALL file. If the above command produces no output then your
      WINSTALL file is not relocatable and will not work in a virtual partition.
•     Versions from B.3.7.XX onwards are already capable of working within a virtual partition.
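The version check above can be wrapped into a small helper; a sketch, where the function simply reads the elfdump output on stdin and the `winstall_ok` name and message texts are our own:

```shell
# Sketch: report whether a WINSTALL kernel is relocatable, based on the
# elfdump check quoted above. Reads "elfdump -r" output on stdin.
winstall_ok() {
    if grep -i relocation >/dev/null; then
        echo "relocatable: usable in a virtual partition"
    else
        echo "not relocatable: update Ignite before installing vPars"
    fi
}
# Typical use on the Ignite server:
#   /usr/ccs/bin/elfdump -r /opt/ignite/boot/WINSTALL | winstall_ok
```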
Slide 5-6:
If you are starting with a completely clean system, then it will need HP-UX installing first. Just make sure that you only use
hardware that will belong to the initial partition.
Once HP-UX is installed then you need to install the vPars software. Remember this is not on the standard CDs; it is
currently only shipped on a dedicated CD.
Slide 5-7:
                 • You need to
                      –     specify the partition name
                      –     The resources, CPU, MEMory & IO that will belong to
                            the virtual partition.
•              -p, you must provide a partition name; its maximum length is 239 characters! The partition name is used in the vpar commands.
               For other commands you will use the hostname of the partition. Whilst vPars is quite happy for the names to be different,
               in practice this is likely to cause confusion and lead to errors.
               So please set the vPar name to the hostname you are intending to use inside the partition.
• -B boot_attr, controls whether the partition autoboots, when vpmon is started with the “-a” option.
      — auto, [default] when the boot_attr is set to auto, the partition automatically boots when vpmon starts with the
        “-a” option.
      — manual, the partition will need to be manually booted.
•     -S static_attr, controls whether changes can be made to the partition.
•     -a resource, adds a resource to the partition. The resource specifications are:
      — -a cpu::Normal number, this is the normal total number of CPUs, bound + unbound.
      — -a cpu:::min[:max], the minimum number is the number of bound CPUs. The maximum number can be used to limit
        the number of unbound CPUs being added dynamically at run time.
      — -a cpu:HW PATH, for bound CPUs (and only bound CPUs) you can explicitly choose the CPUs to be allocated to
        the virtual partition. It appears that vpmon chooses the next available CPU after the bound processors for any
        unbound CPUs.
      — -a mem::MEM SIZE, specifies the total amount of memory in megabytes to be allocated to a partition.
      — -a mem:::base:range, this option allows the memory assigned to a virtual partition to be chosen. Here range MB
        starting at address base will be assigned. This option can be specified multiple times to assign multiple memory
        ranges. With current systems this option is of limited use.
      — -a io:HW_PATH, the hardware path of an LBA to be allocated to the virtual partition.
      — -a io:HW_PATH:boot, the hardware path to the boot disk for the virtual partition. You can also specify “altboot”,
        for the alternative boot device. For the initial partition the boot device needs to be specified. If additional partitions
        are to be installed using Ignite, this can be left off during the creation of the partition, as Ignite will set it later for
        you.
It can be very helpful to put the vparcreate command into a shell script, for a number of reasons:
•     The commands themselves can end up being quite long, with lots of -a options. Putting them into a script can make
      checking them easier.
•     You often want to create more than one partition. With the command in a file, it is easier to copy and edit a previous
      example than to try to remember all the options again.
#!/usr/bin/sh
#############
#
#           Script to create the virtual partition "keswick"
#
            vparcreate -p keswick                                      \
                        -o "-lq"                                       \
                        -a cpu::2                                      \
                        -a cpu:::1                                     \
                        -a mem::2048                                   \
                        -a io:0/0                                      \
                        -a io:0/0/2/0.6:boot                           \
                        -a io:0/0/2/1.6:altboot                        \
                        -a io:0/2                                      \
                        -a io:0/12                                     \
                        -a io:1/4                                      \
                        -a io:1/10
The boot path, autoboot and static flags are all left at their defaults; we do not often need to change them.
CPU resources
Slide 5-8:
               • -a cpu::Normal Number
                    –     Bound + Floating CPUs
               • -a cpu:hardware_path
                    –     For bound CPUs you can choose which processors
29/08/2003 HP restricted 8
When creating partitions we need to specify the number of processors available. This is done using the “-a cpu” option to the
vparcreate command.
•            -a cpu::Normal number, this is the normal total number of CPUs, bound + unbound.
•            -a cpu:::min[:max], the minimum number is the number of bound CPUs. The maximum number can be used to limit the
             number of unbound CPUs being added dynamically at run time.
•            -a cpu:HW PATH, for bound CPUs (and only bound CPUs) you can explicitly choose the CPUs to be allocated to the
             virtual partition. It appears that vpmon chooses the next available CPU after the bound processors for any unbound CPUs.
Slide 5-9:
                 • -a mem::Size
                      –     Size in MB
                 • -a mem:::base:range
                      –     Choose which RAM
                 • -a io:HW_PATH
                      –     Hardware Path to an LBA
                 • -a io:HW_PATH:boot (or altboot)
                       –     Hardware Path to the primary or secondary boot disk
As well as specifying the number of processors, you also need to provide memory and IO devices. Again, these are added as
resources to the vparcreate command with -a options.
•              -a mem::MEM SIZE, specifies the total amount of memory in megabytes to be allocated to a partition.
•              -a mem:::base:range, this option allows the memory assigned to a virtual partition to be chosen. Here range MB
               starting at address base will be assigned. This option can be specified multiple times to assign multiple memory ranges.
               With current systems this option is of limited use.
•              -a io:HW_PATH, the hardware path of an LBA to be allocated to the virtual partition.
•              -a io:HW_PATH:boot, the hardware path to the boot disk for the virtual partition. You can also specify “altboot” for
               the alternate boot device. For the initial partition the boot device needs to be specified. If additional partitions are to be
               installed using Ignite, this can be left off during the creation of the partition, as Ignite will set it later for you.
Slide 5-10:
                    vparcreate -p vp8a                                                          \
                                   -o "-lq"                                                     \
                                   -a cpu::2                                                    \
                                   -a cpu:::1                                                   \
                                   -a cpu:37                                                    \
                                   -a mem::768                                                  \
                                   -a io:0/0                                                    \
                                   -a io:0/0/1/1.0.0:boot                                       \
                                   -a io:0/0/2/0.0.0:altboot
Here we have an example of creating the initial partition for one of the lab machines at TPEC.
In this example the two internal disks have been configured using mirroring, and are the only two disks in the volume group,
so it is sensible to boot with the -lq option.
This partition normally has two processors, one of which is bound. You do not specify the number of unbound ones, just the
total number and the number of bound processors. In this example there is no maximum number.
The partition is to have 768MB of memory.
The only LBA assigned to the partition connects to the Core IO board. Since the boot disk is mirrored, the two halves have been
configured as primary and secondary boot disks.
The autoboot and static flags, along with the kernel path, have all been left at their defaults; these do not often need changing.
Slide 5-11:
The LIF AUTO file controls the system boot once ISL has been loaded. Normally for an HP-UX system the file contains the
single word “HPUX” to start the secondary loader with no options. For a vPars system we need to have the loader start vpmon.
The command “mkboot -a” is used to change the AUTO file.
mkboot -a “hpux /stand/vpmon -a” /dev/rdsk/c1t0d0
The “-a” option to vpmon tells it to start all the partitions with the auto boot flag set, normally all virtual partitions.
If PDC has a configured alternate boot path then the mkboot command needs to be run against that disk as well.
The system is now ready to be rebooted and should now restart in virtual partition mode.
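To cover the alternate boot path case as well, both disks can be handled in one small script. The device file names below are examples only, not taken from the lab systems; substitute the actual boot disk paths, and note that the script only prints the mkboot commands as a dry run:

```shell
# Device files for the primary and PDC alternate boot disks -- example
# paths only; substitute the paths for your own system.
PRI=/dev/rdsk/c1t0d0
ALT=/dev/rdsk/c2t0d0

# Print the mkboot commands rather than running them (a dry run);
# drop the echo to actually rewrite the AUTO file on both disks.
for d in "$PRI" "$ALT"; do
    echo mkboot -a '"hpux /stand/vpmon -a"' "$d"
done
```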
Slide 5-12:
The system should then reboot in virtual partition mode; if it does not, module 8 looks at problems booting vPar systems.
Before then, however, there are a couple of things that might cause problems for a system that was installed outside of
virtual partitions and then moved into vPars. This is the case with the initial partition, and can also be the case where you
decided not to use the Ignite install option:
•             One common problem in labs is that the network card numbers can change. This is particularly an issue when the initial
              standalone system was not configured on lan0. The main symptom you see is that the networking startup fails, and then
              the system hangs in the NFS client stage of booting. If this happens, break into the script using “^\” then fix the
              /etc/rc.config.d/netconf file and reboot.
•             Other potential problems are that the system was using hardware resources that were not assigned to the partition. Did you
              forget to add all the disk interfaces?
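For the network card renumbering problem above, the entries to check in /etc/rc.config.d/netconf look like the following. The interface name, index and addresses are illustrative values only, not taken from the lab systems:

```shell
# Excerpt from /etc/rc.config.d/netconf -- illustrative values only.
# After the move into vPars the card may get a new instance number, so
# an entry configured for, say, lan1 may need to become lan0.
INTERFACE_NAME[0]="lan0"
IP_ADDRESS[0]="15.0.104.40"
SUBNET_MASK[0]="255.255.248.0"
```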
Slide 5-13:
               • Use vparboot -I, to boot the partition from the ignite server
               #     vparboot -p vp8b -I WINSTALL
Once the system is running on its initial partition, you can create the other partitions. Hopefully during the planning stage you
produced vparcreate shell scripts for your other partitions. If you did not do any planning, you are probably now in trouble,
since you will not know which IO resources are on which hardware paths.
Creating the partitions just sets up their entries in the vpdb and in vpmon. To be able to use the partitions they need to be
installed. The vparboot command, which is used to boot virtual partitions, has an install option, “-I”.
If the current partition is the Ignite server then the only value that needs to be passed with the “-I” is the name of the install
kernel to use, WINSTALL. If the Ignite server is elsewhere then you must specify its IP address followed by a comma (yes, a
comma, not a colon) then the full path name of the install kernel.
vparboot -p vp8b -I 15.0.104.38,/opt/ignite/boot/WINSTALL
Questions
Slide 5-14:
Questions
Preface
Now that we have a virtual partition environment it is time to start taking a look at it. In this module we will look at the
components of vPars.
Slide 6-1:
                                                       Module 6 Monitor
                                                       NES2-VPARvC
HP-Restricted
Module objectives
Slide 6-2:
Module objectives
               25/08/2003                                            HP restricted                                             2
In this module of the class we aim to get an understanding of:
•              The monitor, which is the heart of the vPars environment. The software that makes it all work.
•              The virtual partition database, which tells the monitor what to do.
•              The daemons used by the virtual partitions.
•              The drivers that are added to the HP-UX kernel to support the new vPar functionality.
Slide 6-3:
                           • Daemons
                                – vpard
                                – vphbd
                                – vconsd
                           • Drivers
                                – vcn
                                – vcs
                                – vpar
                                – vpar_driver
                           • Device files
                                – /dev/vpmon
                                – /dev/vcn
                                – /dev/vcs
The major components of vPars are the monitor, vpmon, and its database, vpdb. In addition to these, vPars uses:
•            Daemons. In Greek mythology the daemons were the helpers of the gods. In Unix, daemons fulfill a similar role: they act
             as helpers for the system, doing work that the kernel does not. Typically they are standalone Unix programs, although they
             can be parts of the kernel itself, when it needs to be able to behave more like a normal program.
             For vPars there are three daemons: vpard, vphbd and vconsd.
      All the virtual partitions run vpard & vphbd, but only the virtual partition that owns the physical console runs the special
      kernel daemon vconsd.
•     Drivers, when the virtual partition software is installed the “vpars” driver is added to the system file. This driver has
      dependency entries in /usr/conf/master.d/vpar to include vcn, vcs & vpar_driver.
      — vcn, the virtual console driver, used by all partitions to write data to the console, vcn directs all the data through
        vpmon towards the actual console.
      — vcs, on the partition that owns the actual console, vcs reads the buffered-up console output from vpmon.
      — vpar, the vpar CDIO, installs the ve_psm. See the advanced troubleshooting module for more details.
      — vpar_driver, the driver for the /dev/vpmon device file, this driver is called vpmon in the output of lsdev. The
        vPar commands use the device file /dev/vpmon to access ioctl functions in this driver to interact with vpmon, to
        perform tasks such as create and remove virtual partitions.
•     Device files, the device drivers are accessed through device files.
      — /dev/vcn, the device file for the virtual console master, it effectively points to /dev/console
      — /dev/vcs, the device file for the virtual console slave, acts as a feed into the physical console, but only on the vPar that
        owns the physical console hardware. That means that if you have sufficient privilege to open /dev/vcs on the vPar that
        owns the I/O path with the console hardware, writing to /dev/vcs is the same as typing on the virtual console, no
        matter which vPar owns it.
      — /dev/vpmon, this device provides access to the memory-resident instance of the vPars monitor. It is used by the vPars
        commands.
      There is no device file associated with the driver vpar, it is not really a device driver. Drivers are also the mechanism used
      to include functionality into the kernel, in this case it includes the vPar PSM, platform support module, that allows the
      HP-UX kernel to know that it is running inside a virtual partition, and how it can then perform various tasks. This is
      discussed a little in the advanced troubleshooting module.
Slide 6-4:
The monitor
• A “special” kernel
The monitor is the heart of the vPars environment; it is the software that makes it all work. In some ways it is a mini operating
system, under which HP-UX runs.
The HP-UX kernel normally expects to talk to the firmware of the underlying system by making calls to PDC functions. When
running standalone, these calls go directly to the firmware. When running in a virtual partition, the underlying layer is vpmon. The
monitor emulates the PDC functions from the firmware so that the kernel can act as though it were running in a normal
fashion. However vpmon answers the PDC calls in such a way as to only let the HP-UX instances see the resources allocated
to their partition.
The monitor is loaded from the boot disk of the system in just the same way that the HP-UX kernel, vmunix, normally is.
There is a copy of vpmon in the /stand area of all of the partitions, so it can be loaded from other disks, so long as the other
partition in question does not boot from a SCSI/LAN combo card, since this card lacks the required firmware to support the
boot processes.
Once vpmon has been loaded into memory it reads the partition database from the disk it was itself loaded from. The database
is then held in memory. The monitor does not have any disk IO capability, nor does it own any disks, so once the database has
been loaded into memory, that is where it stays as far as vpmon is concerned. The monitor cannot write this information back
to disk.
Once partitions have booted they start a daemon, vpard, that then reads the database information from vpmon’s memory and
stores the current configuration in the /stand/vpdb file of the current partition. In this way the partition database is
synchronized between all the partitions, and if need be can be read at boot time from any one of the partitions.
Since vpmon does not write any information back to disk, it does not need to be shut down in any particular way; it has no
information to lose, so you can just turn the system off when you are finished with it. Of course, if HP-UX is running in a
partition, it does have information to lose and must be correctly shut down, but not the monitor.
A reboot command is provided to allow the system to be reloaded, without the need to turn it off and back on again.
One other issue arises from vpmon not being able to write the database back to disk itself: you cannot make any
configuration changes from the monitor. All changes must be made from a partition, since only a partition is able to perform
the necessary disk updates to make the changes persistent.
Slide 6-5:
The database
              • Is in a proprietary format
                    –     Can not be read by other database programs
The monitor, vpmon, does the work; the partition database, /stand/vpdb, tells it what to do, listing the partitions’ resources
and other configuration information.
The database is read from the /stand/vpdb file (by default) on the boot disk. You can have alternative database files; the
“-D” option to vpmon and most of the vPar commands allows the use of alternative files. However, they must reside in the
/stand filesystem, since they will be read using the HP-UX boot loader’s file IO capability, and this secondary loader does not
understand LVM (other than to find the start of the user data area), VxVM or VxFS. It is only able to read files from an HFS
filesystem where that is a contiguous volume at the start of a disk.
The vPars database is created by the first vparcreate command issued; subsequent commands will then work with the file.
Once HP-UX is running under vPars it starts a daemon, vpard, that synchronizes the local disk copy of the database with the
real working version inside vpmon.
NOTE                 It is only through the vpard daemon that the /stand/vpdb file is kept up to date. It is therefore important to
                     realize that if changes are made to the configuration whilst the partition that owns the normal boot disk is
                     down, then the copy of /stand/vpdb that is normally read will not have been updated. If the whole system
                     were then to be restarted the configuration changes would be lost.
As a general rule, only make changes when this partition is up. If you need to make changes that require this partition to be
shut down, such as adding memory or LBAs, then restart this partition as soon as possible.
Database synchronization
Slide 6-6:
Database synchronisation
               Diagram: the monitor database inside vpmon is synchronized by the vpard daemon running in each partition
               (vpar1 to vpar4) to each partition’s /stand/vpdb file, or to an alternative database file.
At boot time the database is read from the /stand/vpdb file by vpmon, and then exists in memory within vpmon. The
partitions run a daemon, vpard, that reads the current configuration information from vpmon’s memory and writes it to the
/stand/vpdb file.
As shipped, vpard runs every 5 seconds, so every 5 seconds the database file can be updated. If you were to remove or
damage the file, within 5 seconds it would be replaced with a new good copy. This has changed from the A.02.01 version,
where, although vpard ran every 5 seconds, it only updated the disk-based copy when commands like vparstatus were run,
or when the vpard daemon was shut down.
Alternative databases
Slide 6-7:
Alternate databases
               Diagram: vpar2, vpar3 and vpar4 each synchronize the alternate database file /stand/myDB through their
               vpard daemons.
It is possible to have alternate databases, either for backup reasons or because the system hardware is used for different
configurations at different times.
Copies of the database can be made simply by copying the file. Just remember that, in order to be used, the file must reside in
the /stand filesystem.
Alternative databases can also be created using the “-D” option to the vparcreate command. Be aware that this does not
warn you if you ask to make an alternative database file outside of /stand.
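Since vparcreate gives no warning, a simple pre-flight check in a wrapper script can catch this mistake. check_db_path here is a hypothetical helper, not part of the vPars product:

```shell
# Hypothetical helper: warn when a proposed alternate database path is
# outside /stand, where vpmon's boot-time loader can never read it.
check_db_path() {
    case "$1" in
        /stand/*) echo "ok: $1" ;;
        *)        echo "WARNING: $1 is outside /stand" ;;
    esac
}

check_db_path /stand/myDB
check_db_path /var/tmp/myDB
```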
When booting you can choose an alternative database from the ISL prompt using
ISL> hpux vpmon -D myDB
If the current database fails to load, an alternative can also be read from the MON> prompt using the read command.
Slide 6-8:
Daemon, vpard
The main vPars daemon is vpard; it is used to communicate with vpmon and to maintain the local disk copy of the vPar
database.
It is started from an rc script at run level 1 and configured using a script in the /etc/rc.config.d directory. From here you
can control whether it runs, and how often.
Slide 6-9:
Daemon, vphbd
Before vpard is started another vPars daemon is kicked off, vphbd. This is responsible for letting vpmon know that our
partition is alive.
When vphbd first starts, vpmon will change the state of our partition to UP. Should it die, then vpmon assumes the partition is
being shut down and changes the state to SHUT. During normal running, the vphbd heartbeat daemon runs every 360 seconds
(6 minutes) to let vpmon know our partition is still alive. Should vpmon miss 10 heartbeats from a partition (yes, 1 hour’s
worth) it will mark the partition as HUNG.
Slide 6-10:
vPar states
From here we can see that the partition can have various states; during booting the partition will go through several of these:
•             DOWN, the partition has been shutdown or not yet loaded into memory.
•             LOAD, when the partition is loaded using the vparboot command from HP-UX or vparload from the monitor. vpmon
              loads the kernel into the partition and starts it. During this phase, the state is marked as LOAD.
•             BOOT, once the kernel has been loaded vpmon can start it and hand over control, once control has been handed over the
              partition is moved to the BOOT state.
•             UP, after the kernel has finished booting, the system goes through all the normal userland boot procedures, mostly using rc.
              Early in the proceedings of rc, at run level 1, vphbd is started and vpmon moves our state to UP.
These are the normal states during start up. There are also two abnormal states:
•     CRASH, where the system has died due to a Panic, HPMC or TOC.
•     HUNG, where vpmon has seen no communication with vphbd for 1 hour.
The current states of partitions can be seen from HP-UX using the vparstatus command, and from the monitor prompt
using the vparinfo command.
Slide 6-11:
Boot process
               Diagram: the boot chain runs from PDC (consulting NVRAM), through ISL (reading the AUTO file), to the
               HPUX loader, which loads vpmon and the vpdb; each partition then boots HP-UX and runs its rc scripts.
During a normal boot the system progresses from the firmware, PDC, through to letting you log in.
For a system running vPars the AUTO file is changed so that the secondary loader HPUX loads vpmon rather than the normal
HP-UX kernel vmunix.
Once vpmon has been loaded it loads the vpdb from the current disk and starts the partitions. HP-UX in a partition boots in
much the same way as it would stand alone, but there are some additional rc scripts:
•             vparinit, the first vPar script, checks whether there has been a monitor crash and recovers the monitor dump if required.
              It also makes sure that the kernel is relocatable and relocates it to the correct address if need be.
•             vphbd, starts the heartbeat daemon
•             vpard, starts the main vPar daemon.
Slide 6-12:
The monitor vpmon has a user interface that is accessible through the console. This user interface and its MON> prompt are
accessible when:
•             vpmon is first booted, if the “-a” option is not used. By default vpmon boots into interactive mode.
•             When all the partitions are shutdown, the monitor is left running on the console.
•             If the system’s monarch CPU is not running within a virtual partition, then toggling through the virtual partitions using
              “^A” will give access to the MON> prompt as well as all consoles of the active partitions.
              It should be stressed that for normal customers there is not much benefit in being able to access the MON> prompt.
              Bound processors can be assigned to partitions explicitly by the user, using the -a cpu:HW_PATH option to
              vparcreate for instance. If the user does not specify the hardware paths of the CPUs then the monitor chooses the
              paths, starting from the lowest available processor, thus assigning the system’s monarch CPU.
    For unbound CPUs there is no way to explicitly select them; the monitor always chooses, taking the next available
    processor after the bound ones.
Question time
Slide 6-13:
Questions
Preface
Once we have created the virtual partitions, we need to know how to administer the environment, particularly how to
exploit the promised flexibility.
Slide 7-1:
HP-Restricted
Module objectives
Slide 7-2:
Module objectives
•              The different command environments available. Virtual partitions are mostly managed from within any one of the
               partitions, from the normal HP-UX command line (or GUI) environment. In addition to this, vpmon also provides a user
               interface; certain tasks can be performed from there.
               We need to understand the commands that exist in the different environments, because unfortunately they are not the
               same, even when they are doing the same job.
•              How to see what is happening within the virtual partition environment, both in terms of the configuration and also the
               current state.
•              Creating virtual partitions. Yes, we have covered this already, but it is listed in this module so that all the commands
               are covered in a single place; that way you only have to look in one place.
•     Once partitions have been created, they can be changed; we need to know how.
•     When changing virtual partitions, some changes can be made live; other changes can only be made whilst the partition is
      down.
•     As well as being able to create partitions, they can also be deleted.
•     Normal standalone HP-UX instances are booted from ISL, in most cases automatically. With virtual partitions, this is not
      the case, instead they are booted from vpmon. We need to look at how to control the boot process and how the different
      recovery options work.
Display information
Slide 7-3:
Displaying information
                 • From HP-UX
                      –     vparstatus               displays status and config.
                      –     ioscan                   displays hardware config
                 • From vpmon
                      –     vparinfo                 displays resources and status
                      –     scan                     lists hardware
                      –     log                      “dmesg” for vpmon
                      –     cbuf                     console buffer
When looking at our vPars environment there are a number of commands that can be used, either from the HP-UX command
line, or from vpmon’s MON> prompt.
From HP-UX
The main place to look at the environment is from the HP-UX command line, and from here there are two main commands we
will use: vparstatus and ioscan.
      There is no way to use ioscan to look at the configuration of other partitions, but for the current partition it gives full details
      of the IO hardware available.
      For processors, it will list not only all the CPUs currently assigned to the partition, whether bound or floating, but also
      all the CPUs that were not bound to any other partition at the time the current partition booted. These are the processors
      that could be re-assigned to the partition dynamically as floating CPUs if required. If a processor is not listed here it
      cannot be migrated in at a later date without rebooting the partition first.
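As a sketch of the processor check, the ioscan output can be captured and counted. The output lines below are illustrative only, in the usual ioscan -fkC processor column layout; real instance numbers and hardware paths will differ:

```shell
# Illustrative output of "ioscan -fkC processor", captured as text here
# so the counting step can be shown on any system.
scan='processor   0  101  processor CLAIMED PROCESSOR Processor
processor   1  105  processor CLAIMED PROCESSOR Processor
processor   2  109  processor CLAIMED PROCESSOR Processor'

# Count the CPUs visible to this partition (assigned CPUs plus any that
# were free at boot time and could be migrated in as floaters).
count=$(echo "$scan" | grep -c '^processor')
echo "$count processors visible"
```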
From vpmon
For the MON> prompt there are more commands available, but only because each command is more limited.
•     vparinfo, lists the hardware resources and the partitions to which they are assigned; it also lists the partitions and their
      current state. This command is apparently intended only for internal HP use.
•     scan, similar to vparinfo, it displays the hardware resources.
•     log, displays vpmon’s message buffer, in much the same way that dmesg displays the HP-UX kernel’s message buffer.
•     cbuf, displays the console buffer of a virtual partition. The shared virtual console used with vPars buffers up the console
      output sent by the different partitions. When toggling to a partition the buffered data is then sent to the physical console.
      The cbuf command allows you to view this output. Remember that you cannot toggle to the console of a partition if it is
      down.
Over the next few slides we will look at these commands in more detail.
Slide 7-4:
vparstatus
                 • options
                      -p Displays status of a specified virtual partition
                      -D Displays information from the alternate database
                         rather than from the monitor database
                      -A Displays information about available resources
                         that have not been assigned
                      -M Displays information in a machine readable format
                      -w Displays the name of the current virtual partition
                      -e Displays the monitor's event log, a circular buffer
                         8 Kbytes in length
                      -R Displays Processor Information Module (PIM) data from
                         the most recent reset of the specified virtual partition
                      -v Display verbose information
                      -V Display version number (of output format!)
The main command for looking at the vPar environment is vparstatus. When used on its own with no options it gives the
current status of the whole system and the partitions, along with a brief summary of the configurations of the partitions:
root@keswick[]
From here we can see that all three partitions are in the up state, that they allow dynamic changes and autoboot, and that they
boot from /stand/vmunix. The second section of output gives the resource summary; the only issue to watch out for here is
that the number in the IO resources column is not a count of LBAs, as it also includes the boot & altboot IO resources.
As well as the default output the vparstatus command can be used to get other information or to present the standard
information in other ways.
•     -A, display available resources rather than the resources assigned to partitions.
      root@keswick[] vparstatus -A
      [Unbound CPUs (path)]:       <none>
      [Available CPUs]:      0
•     -M, the normal output format of vparstatus is designed to be human readable. It is not the easiest format to process
      using shell scripts, etc. The -M option gives a more easily machine-readable format.
      root@keswick[] vparstatus -Mp keswick
      keswick:Up:Dynamic,Autoboot:/stand/vmunix::1/4;101;;109:0.0,0.0.2.0.6.0.0.0.0.0
      BOOT,0.0.2.1.6.0.0.0.0.0 ALTBOOT:;2048:N
      root@keswick[]
•     -e, displays vpmon's message buffer; this is like dmesg for the monitor. The equivalent MON> command is log.
      root@keswick[] vparstatus -e
      INFO:CPU0:MON:[10:40:45 8/27/2003 GMT] VPAR Monitor version 0.4 started
      INFO:CPU0:MON:Version string: @(#) $Revision: vpmon:                 vw: -proj      selectors:
      CUP11.11_BL2002_0723 -- jholly_a02_02 'cup_vpar_pib3' 'cup_jholly_a02_02'                 Tue D
      ec 17 14:54:36 PST 2002 $
      INFO:CPU0:MON:Partition keswick monarch set to 101
      INFO:CPU2:MON:[10:41:17 8/27/2003 GMT] keswick is active
      INFO:CPU2:MON:PDC_STABLE return size = 3f0
      INFO:CPU2:MON:[10:42:03 8/27/2003 GMT] keswick is up
•     -R displays the most recent PIM data, either for the current partition or, if used with the -p option, for the specified
      partition.
      root@keswick[] vparstatus -Rp settle
      ******** TOC -- Processor 1: HPA 0xfffffffffed2d000              Entity ID: 3 *********
      General Registers 0 - 31
      00-03   0000000000000000      0000000008b7f080       0000000000000000      0000000000000000
      04-07   00000000015403c8      000000000153d000       000000000153d9b0      00000000089c0880
      08-11   0000000000000001      0000000008c930a0       0000000008b7c880      0000000008b7d080
      12-15   0000000008b7f080      0000000008b7f080       0000000008b7d080      0000000008b81880
      16-19   0000000001540268      0000000000000000       0000000000000000      0000000000000000
      20-23   0000000000000000      0000000000000000       0000000000000005      0000000000000001
      24-27   0000000000000000      0000000000000002       000000000beeb000      0000000008b87080
      28-31   0000000000000000      0000000003ffc290       0000000003ffc2c0      0000000000046df1
      Control Registers 0 - 31
      00-03   000000007af8bc27      0000000000000000       0000000000000000      0000000000000000
      04-07   0000000000000000      0000000000000000       0000000000000000      0000000000000000
      08-11   000000000000e893      000000000000a3c2       00000000000000c0      000000000000003e
      12-15   0000000000000000      0000000000000000       000000000800a000      fffffff0ffffffff
      16-19   0000014d5f1610a1      0000000000000000       000000000815b2e4      00000000e934a8ac
      20-23   0000000010340005      000000004f53c1d8       000000ff0824ff1f      8000000000000000
      24-27   000000000153d000      00000000080177f0       0000000000000000      000000004003b828
      28-31   0000000000000005      0000014d5ed34582       0000000000007c4f      0000000003ffc2c0
      Space Registers 0 - 7
      00-03   000000000821ac00      0000000008238800       000000000a2adc00      0000000000000000
      04-07   0000000000000000      00000000ffffffff       000000000aa5c000      0000000000000000
root@keswick[]
      [CPU Details]
      Min/Max:     1/4
      Bound by User [Path]:
      Bound by Monitor [Path]:             45
      Unbound [Path]:
      [IO Details]
         0.8
         0.10
         1.0
         0.10.0.0.3.0.0.0.0.0             BOOT
         0.8.0.0.3.0.0.0.0.0             ALTBOOT
      [Memory Details]
      Specified [Base       /Range]:
                   (bytes) (MB)
      Total Memory (MB):          2048
      root@keswick[]
      Here we can see all the configuration options for the partition. You also get to see which CPUs are bound and unbound,
      and, for the bound CPUs, which were chosen using -a cpu:HW_PATH options ("Bound by User") and which the
      monitor chose.
•     -V, uppercase V gives the version of the output format used by vparstatus. There were some output format changes
      between vPars A.01.00 & A.02.00; in order for scripts to reliably process the output, they need to be able to confirm that
      all the data is where they expect and that nothing has moved.
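To illustrate how the -M format can be consumed, here is a small sketch that pulls fields out of the keswick line shown earlier. The sample line is hard-coded for illustration; a real script would read the output of vparstatus -M directly.

```shell
# Sample line in vparstatus -M format (the keswick line from above).
line='keswick:Up:Dynamic,Autoboot:/stand/vmunix::1/4;101;;109:0.0,0.0.2.0.6.0.0.0.0.0 BOOT,0.0.2.1.6.0.0.0.0.0 ALTBOOT:;2048:N'

# Fields are colon-separated: name, state, flags, kernel, boot options, ...
name=$(echo "$line" | cut -d: -f1)
state=$(echo "$line" | cut -d: -f2)
kernel=$(echo "$line" | cut -d: -f4)

echo "$name is $state, boots $kernel"
```

The same field positions are used by the awk script shown later in this module.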
Slide 7-5:
Monitor commands
              • vparinfo [partition]
                    –     Displays resource and partition information from the
                          monitor
              • scan
                    –     Displays all hardware found by the monitor
                          It lists the LBAs, not what is plugged into them
              • log
                    –     Displays the monitor's log, like dmesg
              • cbuf
                    –     Displays the console buffer for a partition
The previous slide covered the vparstatus command used from the HP-UX prompt; here we are looking at the options available
from vpmon's MON> prompt. Whereas the vparstatus command could give all the information, from the MON> prompt a
number of separate commands are used.
•    vparinfo, displays either the available resources and the partition states, or the resources of the specified partition.
     Note that when specifying a partition there is no -p option.
     MON> vparinfo
     keswick (up)
     carlisle (down)
     settle (up)
The TYPE & SV_MODEL values are not documented in the vPars literature. For the TYPE values
     — 0, is a processor
     — 1, is the memory
     — 7, is an SBA, oddly these are shown as being unassigned and assigned to all partitions. Here the SBAs also include
       the bus adapter connecting the CPUs to the Merced buses of the N-Class.
      — 13, is an LBA, there is no indication as to what is plugged into an LBA, the HP-UX kernel uses claim functions in the
        device drivers to find out what is plugged into the different slots.
      For the SV_MODEL values
      — 4, is a processor
      — 9, is the memory
      — 10, is an LBA
      — 12, is an SBA
•     scan, the scan command, like vparinfo, lists the hardware paths of the IO resources, but it lists all the paths found and
      shows which partition they belong to.
•     log, views vpmon's message buffer; from the HP-UX command line this is accessed using vparstatus -e.
•     cbuf, views the console buffer of a partition. This does not appear to be accessible from the HP-UX command line.
Slide 7-6:
Creating vpars
                 • You need to
                      –     specify the partition name
                      –     The resources, CPU, MEMory & IO that will belong to the
                            virtual partition.
•              -p, you must provide a partition name; its maximum length is 239 characters! The partition name is used in the vPar commands.
               For other commands you will use the hostname of the partition. Whilst vPars is quite happy for the names to be different,
               in practice this is likely to cause confusion and lead to errors.
               So please set the vPar name to the hostname you are intending to use inside the partition.
• -B boot_attr, controls whether the partition autoboots when vpmon is started with the "-a" option.
      — auto, [default] the partition automatically boots when vpmon starts with the "-a" option.
      — manual, the partition will need to be booted manually.
•     -S static_attr, controls whether changes can be made to the partition.
      — dynamic, [default] the partition can be modified.
      — static, the partition cannot be modified until the attribute is set back to dynamic.
•     -a rsrc, adds a resource (CPU, memory or IO) to the partition. The rsrc formats are:
      — -a cpu::num, this is the normal total number of CPUs, bound + unbound.
      — -a cpu:::min[:max], the minimum number is the number of bound CPUs. The maximum number can be used to limit
        the number of unbound CPUs being added dynamically at run time.
      — -a cpu:HW_PATH, for bound CPUs (and only bound CPUs) you can explicitly choose the CPUs to be allocated to
        the virtual partition. It appears that vpmon chooses the next available CPU after the bound processors for any
        unbound CPUs.
      — -a mem::MEM_SIZE, specifies the total amount of memory, in megabytes, to be allocated to the partition.
      — -a mem:::base:range, this option allows the memory assigned to a virtual partition to be chosen. Here range MB
        starting at address base will be assigned. This option can be specified multiple times to assign multiple memory
        ranges. With current systems this option is of limited use.
      — -a io:HW_PATH, the hardware path of an LBA to be allocated to the virtual partition.
      — -a io:HW_PATH:boot, the hardware path to the boot disk for the virtual partition. You can also specify “altboot”,
        for the alternative boot device. For the initial partition the boot device needs to be specified. If additional partitions
        are to be installed using Ignite, this can be left off during the creation of the partition, as Ignite will set this later for
        you.
It can be very helpful to put the vparcreate command into a shell script, for a number of reasons:
•     The commands themselves can end up being quite long with lots of -a’s. Putting them into a script can make checking
      them easier.
•     You often want to create more than one partition. With the command in a file, it can be easier to copy and edit a previous
      example than to try and remember all the options again.
#!/usr/bin/sh
#############
#
#      Script to create the virtual Partition "keswick"
#
      vparcreate -p keswick             \
           -o "-lq"                     \
           -a cpu::2                    \
           -a cpu:::1                   \
           -a mem::2048                 \
           -a io:0/0                    \
           -a io:0/0/2/0.6:boot         \
           -a io:0/0/2/1.6:altboot      \
           -a io:0/2                    \
           -a io:0/12                   \
           -a io:1/4                    \
           -a io:1/10
The boot path, autoboot and static flags are all left at their defaults; we do not often need to change them.
Slide 7-7:
The major advantage of virtual partitions is their flexibility: not only can they be easily created, once created they can easily
be changed to suit your changing needs. Virtual partitions are changed using the vparmodify command.
This allows all the options that were initially set during creation to be modified.
Slide 7-8:
When making changes to partitions, you will find that some changes can be performed on live partitions whilst others require
that the partition is shut down first. Where changes can be made on live partitions, most of the options affect boot time, so
will not really come into effect until the partition is next rebooted. Other live modifications result in immediate changes.
Module objectives
Slide 7-9:
Vparmodify command
               • vparmodify -p vp_name
                     [-B boot_attr] [-D db_file] [-S static_attr]
                          [-b kernel_path] [-o boot_opts]
                          [-P new_vp_name]
                          [-a rsrc]... [-m rsrc]... [-d rsrc]...
Slide 7-10:
The key change that can be made to live partitions is to add and remove floating CPUs.
CPUs can only be added to a partition if
•             they are available. The HP-UX virtual partition environment works with physical CPUs, not virtual ones; a CPU can only
              belong to one partition at a time.
•             the number does not exceed the maximum specified. It is possible to impose a maximum number of processors on a
              partition, to control the amount of flexibility.
•             the processors to be added were not bound to another partition at the time this partition booted. At boot time, the
              HP-UX kernel can see its own bound CPUs and also all the unbound CPUs in the system. These CPUs are allocated
              space in the kernel's CPU management table. They are also visible in the output of ioscan. CPUs that were not seen at boot
              time, and are therefore not visible to ioscan, cannot be added to the partition.
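Assuming the constraints above are met, adding and removing a floating CPU on a live partition is a one-line vparmodify in each direction; the partition name and count here are illustrative, and the commands obviously only run on a vPars system.

```shell
# Illustrative only (requires a running vPars system):
vparmodify -p settle -a cpu::1    # add one floating CPU to the live partition
vparmodify -p settle -d cpu::1    # give it back again
```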
Slide 7-11:
Floating or unbound CPUs can be worked on live, but other changes require the partition to be in the down state. These
changes include:
Slide 7-12:
As with bound CPUs, changes to memory configurations, either the amount of memory or the specified ranges of memory
allocated, require the partition to be shut down first.
Modifying IO resources
Slide 7-13:
Modifying IO resources
Using vparmodify to change IO resources again needs the partition to be in the down state. This is true both for adding or
removing LBAs and for changing boot devices.
The contents of an LBA can be changed using the HP-UX 11.11 OLAR capability, so when planning partitions, if there are several
empty LBAs it can be worth adding some to our partitions.
It is also possible to change the boot devices of partitions in the normal HP-UX way, using the setboot command. When
using setboot, HP-UX makes a call to PDC to change the boot devices, normally in stable storage; but when running in a
partition, vpmon implements these PDC calls as changes to the partition database. Using the setboot command, only the
boot device of the current partition can be changed.
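For example, from inside a partition, the primary boot path might be changed with setboot; the path here is illustrative, and under vPars the underlying PDC call lands in the partition database rather than stable storage.

```shell
# Illustrative only: set the primary boot path of the current partition.
setboot -p 0/0/2/0.6.0.0.0.0.0
```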
Slide 7-14:
               • Name changes
                     # vparmodify –p keswick –P newlands
               • Autoboot
                     # vparmodify –p keswick –B manual
               • Boot kernel
                     # vparmodify –p keswick –b WINSTALL
               • Boot options
                     # vparmodify –p keswick –o “-lq”
               • Static flag
                     # vparmodify –p keswick –S dynamic –a cpu::4 –S static
All the other settings of partitions can also be changed using vparmodify. When performing multiple changes within one
command, the command is processed left to right; so in the last example we have a case where a partition that is currently
marked as static is changed to dynamic, allowing 4 more CPUs to be added. Once this has happened, the
partition is returned back to its static condition.
Slide 7-15:
As we have seen, many of the changes require the target partition to be shut down. The command then needs to be issued from
another partition. Where there are several partitions this is easy, but if there were only one partition it would not be possible.
Changes may need to be made where only one partition exists, for instance if you had made a mistake during your initial setup
and assigned all the memory to your initial partition. There are two possible ways to make the changes. One would be to
reboot the system and bring it back up outside the control of vpmon.
The other approach would be to make use of an alternative database. The alternative database could be created either by
•             copying the existing database file, and then using vparmodify against this copy.
              # cp /stand/vpdb /stand/tmpDB
              # vparmodify -D /stand/tmpDB -p keswick -m mem::2048
•     using the -D option to vparcreate to create the partition in the correct configuration. If the initial vparcreate was
      issued from a shell script, then the script can be copied and edited.
The trick is now to replace /stand/vpdb with the new database /stand/tmpDB. If you were to copy it over, the next time
vpard ran (so within the next 5 seconds) your change would be overwritten, as the daemon makes sure that the disk-based
copy is in sync with vpmon's memory-based master.
The easiest way around this problem is to substitute the file from single-user mode, when vpard is not running and its control
script is not going to get re-executed.
Once the /stand/vpdb file has been substituted, the partition can be shut down and the monitor rebooted.
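A rough outline of the whole procedure, assuming the memory-change example above, might look like this; it is a sketch of the steps just described, not a runnable script.

```shell
# Outline only: swap in a modified partition database.
cp /stand/vpdb /stand/tmpDB                          # work on a copy
vparmodify -D /stand/tmpDB -p keswick -m mem::2048   # change the copy
# boot the partition into single-user mode, so vpard is not running, then:
cp /stand/tmpDB /stand/vpdb                          # substitute the database
# finally shut the partition down and reboot the monitor
```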
Slide 7-16:
If the scripts used to create partitions are to be used either as reference material or to re-create partitions, then they need to be
kept up to date with any changes that are applied to the partition using vparmodify. One way to do this would be to remember
to edit the file each time vparmodify is run, but this could prove error prone. Another approach would be to use a shell script to
convert the output of vparstatus into the script needed to make the partition. If you remember, the vparstatus command
has a -M option to ease its use in scripts.
The following is an example script using awk to produce vparcreate scripts. Note it does not support the max CPUs attribute or
specifying memory ranges.
#!/usr/bin/awk -f
BEGIN {
FS=":";
while ( "vparstatus -M" | getline ) {
          partName=$1;
          state=$2;
          flags=$3;
          kernel=$4;
          bootOpts=$5;
          cpus=$6;
          IO=$7;
          MEM=$8;
          fn=partName ".vparcreate";
          print fn;
          printf("vparcreate -p %s \\\n",partName) >fn;
          n=split(cpus,subf,";");
          n2=split(subf[1],subf2,"/");                         # min/max
          minCPUs=subf2[1];
          maxCPUs=subf2[2];                                    # script ignores max
          printf("\t-a cpu::%s \\\n",minCPUs) >fn;
          nIOr=split(IO,IOr,",");                              # IO resources
          for (i=1; i<=nIOr; i++) {
                    sub(/ BOOT$/,":boot",IOr[i]);
                    sub(/ ALTBOOT$/,":altboot",IOr[i]);
                    printf("\t-a io:%s \\\n",IOr[i]) >fn;
          }
          nMem=split(MEM,memf,";");                            # total MB is the last subfield
          printf("\t-a mem::%s\n",memf[nMem]) >fn;
          close(fn);
}
}
Slide 7-17:
Deleting a vPar
               • vparremove -p vp_name
                     [-D db_file] [-f]
As well as creating virtual partitions you can also remove them, using the vparremove command. In order to remove a partition
it must be in the down state.
When a partition is removed, only its definition in the partition database is removed. No changes are made to any of the
resources that it owned. Most particularly, the disks are not modified in any way. If the partition were to be recreated, it could
just be booted and used straight away without needing to reinstall HP-UX.
Slide 7-18:
               • Commands
                     –     vparboot – from HP-UX
                     –     vparload – from vpmon
               • Tasks to consider
                     –     Normal boots
                     –     Manual boots
                     –     Alternate devices
                     –     Alternate kernels
                     –     Single user & maintenance mode
Once you have created virtual partitions they need to be booted. For standalone HP-UX this is done by ISL and the hpux
boot loader; these are not used to boot virtual partitions. In this subsection we will look at the commands for booting virtual
partitions, the different boot scenarios, and issues with shutting down partitions.
Slide 7-19:
              Normal boots
                • vparboot -p vp_name
                     [-b kernel_path] [-o boot_opts] [-B boot_addr]
              Boot for install
                • vparboot -p vp_name -I ignite_kernel
                # vparboot –p settle
                     –     Boots settle using its normal options
                # vparboot –p settle –b /stand/vmunix.prev –o “-is”
                     –     Boots settle in single user mode from previous kernel
                # vparboot –p carlisle –I
                     15.0.104.38,/opt/ignite/boot/WINSTALL
                     –     Install carlisle from an Ignite server
To boot a virtual partition we use the vparboot command. This command has two main forms: either to boot the partition off
its own disks for normal operation, or to boot from an Ignite server.
Slide 7-20:
                • vparload -all
                  vparload -auto
                  vparload -ppartition_name
                     [-bkernelpath][-oboot_options][-Bhardware_path]
It is also possible to boot virtual partitions from the monitor prompt, using the vparload command. As well as booting
individual partitions using the -p option, vparload can also boot
•             all the partitions, irrespective of their autoboot setting. The -all option performs this task.
•             all partitions configured to boot automatically. The -auto option is used for this.
When booting individual partitions, you can use the -b, -o & -B options from vparboot. The only exception is that with the
-B option full hardware paths must be used; it does not evaluate the pri & alt options.
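For example, booting a single partition from the monitor with an alternate kernel might look like this (kernel path as in the vparboot example on the previous slide):

```
MON> vparload -p settle -b /stand/vmunix.prev
```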
Slide 7-21:
               • Reboot
                     –     You can reboot partitions like normal systems
                     –     You can reboot the monitor
                            • If a partition is running, reboot asks for confirmation
The HP-UX shutdown and reboot commands are not aware of vPars; they work within their copies of HP-UX just as they
would on a standalone system. Consequently, to shut down all the partitions you need to log in to each one in turn and issue a
separate shutdown command.
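Since there is no single command to shut everything down, one approach is a small loop over the partition hostnames (assumed, as recommended earlier, to match the vPar names); remsh and the host list here are illustrative.

```shell
# Illustrative only: shut down each partition in turn from a management host.
for host in keswick settle carlisle
do
    remsh "$host" /sbin/shutdown -h -y 0
done
```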
The monitor does not write its own information to disk; it relies on the partitions running vpard to save the database to
disk. Since the monitor does not write to disk, it does not have any information to lose, and it can safely be switched off once no
partitions are running.
If the monitor needs to be restarted, it does provide a reboot command; if any partitions are running, the monitor's reboot
command will ask for confirmation. Rebooting the monitor whilst a partition is running will have the same effect as resetting
it.
Slide 7-22:
                     # vparreset –p carlisle –t
                     Reset virtual partition carlisle? [n] y
In the event of a system failing to respond we sometimes need to reset or TOC the system. On an HP9000 server this is normally
done by entering console mode using ^B, and then issuing either the RS or TC command, depending on whether a dump is
required.
When the system is running under vpmon,
•             tc will TOC each of the partitions and then the monitor, giving a monitor dump in addition to a series of HP-UX dumps,
              one for each of the partitions.
•             rs will reset all the partitions and the monitor and reboot the system, with no dumps being performed.
The vparreset command can also be used to either TOC or reset an individual partition.
Slide 7-23:
Slide 7-24:
Manual booting
Virtual partitions can be manually booted using either the vparboot command from HP-UX or vparload from the monitor.
It is also possible to pass vparload commands as options to vpmon, and so access it from the ISL prompt.
Slide 7-25:
If the primary boot device for a partition fails, it is possible to boot from alternative boot devices by using the -B option to
either vparboot or vparload.
The vparload command does not provide the alt options; you need to know the hardware path to the other disks. One feature of
vpmon that can be useful here is that it provides file access commands to access the HFS boot filesystem on the disk from
which vpmon was booted.
Slide 7-26:
If the normal boot kernel /stand/vmunix is broken in some way, the -b option can be used to specify an alternative
kernel. For the partition that owns vpmon's normal boot disk, the ls command can be used to look for alternate kernels; for
other partitions no such help is available.
As with standalone HP-UX, if you are booting from a kernel file other than /stand/vmunix it is normally best to boot into
single-user mode. Quite a few commands assume that the system was booted from /stand/vmunix and will not run correctly
without knowing the actual boot file.
Slide 7-27:
When running standalone, boot options can be passed to the kernel from the ISL prompt. Virtual partitions do not boot in this
way, so these options need to be passed through the -o option to vparboot or vparload.
These commands think they know the valid options and will only allow these to be passed. Unfortunately they do not currently
accept the -vm option needed to boot VxVM into maintenance mode.
Other tasks
Slide 7-28:
Other tasks
In addition to the major administration tasks we have already covered, there are a number of other areas where the virtual
partitioning software affects our administration of the system.
•             vPars provides a command to set the SCSI card parameters for down partitions without the need to go all the way down to
              BCH.
•             There is some information that is only relevant when running within a virtual partition; vparextract provides a means to
              access this.
•             As well as getting dumps from HP-UX kernels when they fail, it is also possible to get dumps from vpmon. We will cover
              this topic in detail in the advanced troubleshooting module, but we will have a brief look at the subject here.
•     In standalone HP-UX systems, the kernel starts at the beginning of memory. Where several instances of HP-UX exist in
      the same system, they obviously cannot all live at the same address; the kernel therefore needs to be relocated to a different
      start address. Since many commands expect the /stand/vmunix file to provide details of the addresses of data objects
      in the kernel's memory, the kernel file needs to be updated to know where it is currently residing.
•     In order for different partitions to have different time settings, vpmon has to manage the relationship between a partition's
      view of time and the hardware clock. If the hardware clock is changed, then vpmon's stored differences might need
      clearing.
Slide 7-29:
With older SCSI cards there were DIP switches or jumper settings on the boards to make changes to things like SCSI addresses
or termination modes. For the PCI-based cards, these settings are done in software, but this is done from BCH. Accessing BCH
requires that the system is down. On a standalone system this is not such an issue, since HP-UX doesn't know how to deal with
settings changing on a live system, and so HP-UX needs to be shut down anyway.
For a vPars system, accessing BCH requires that all the partitions are shut down, and that the monitor is rebooted. This is a
major inconvenience, and reduces the usefulness of the vPars environment.
To avoid this need to access BCH, vPars provides the vparutil command. This allows the SCSI settings to be checked
or set from HP-UX. In order to change the settings, the target card's partition must be down.
Not all of vparutil's options seem to work on all platforms.
If a SCSI card is not in the SCSI tables because its settings have never been changed, then it cannot be viewed using the -g
option.
Once a -s has been used, the card is added to PDC's SCSI tables and becomes visible.
root@settle[] vparutil -a
Option not supported
root@settle[] vparutil -s         0/4 -i 7
root@settle[] vparutil -g 0/4
Device: 0/4      Scsi ID: 7       Scsi Rate: Default (driver determined)
Attempting to set parameters for a card in a live partition will result in an error:
root@settle[] vparutil -g 0/8
Error while making firmware call. Check device path and retry.
root@settle[] vparutil -s 0/8 -i 7
Vpar which owns specified device is not in down state.
root@settle[]
Slide 7-30:
The manual page for the vparextract command proclaims
vparextract - extract memory images from a running virtual partition systems
but really it just provides access to additional information relevant to a virtual partition or a vPars system.
•             vparextract -b, provides the boot path where vpmon was loaded from, and therefore also where vpdb was loaded from.
              root@settle[] vparextract -b
              disk(0/0/2/0.6.0.0.0.0.0)
•             vparextract -k, provides the start address/relocation address of the current partition.
              root@settle[] vparextract -k
              0x8000000
•     vparextract -l, extracts vpmon’s event log, just like vparstatus -e does.
•     vparextract -d, this option does not appear in either the man page or the command's usage message. It handles
      the case where /stand on the boot disk is mirrored. On previous versions of vPars, or when running standalone, there is a
      problem with a mirrored /stand, in that vpmon's dump only wrote to one of the disks. HP-UX's attempts to read the
      dump file could be satisfied from either disk, resulting in some of the data coming from the wrong disk. This option is
      used by the vparinit startup script.
Managing dumps
Slide 7-31:
Managing dumps
If vpmon fails, or the whole system is TOC'd, vpmon can dump core. Rather than dumping into a dedicated dump area or
a partition's swap space, vpmon dumps core into a special file in the /stand area on the disk it booted from. There are a few
issues with this.
•             The file /stand/vpmon.dmp is not a normal file; it must be created by the vpardump command with either the -i or -f
              options. The file needs to be contiguous on disk.
•             On mirrored systems the dump is only written to one half of the mirror.
The startup script vparinit normally performs vpardump for you.
We will cover this topic in more detail in the advanced trouble shooting module.
Slide 7-32:
At boot time the startup script vparinit uses vparextract -k to find the relocation address of the partition. It then uses nm
to find the current text start address of the kernel; if these do not match, it uses vparreloc -a to patch the
/stand/vmunix file to the start address that vpmon has assigned to the partition.
The kernel file should therefore always be relocated to the correct start address. However, if the /stand/vmunix file is ever
recreated from somewhere else, such as a backup, then it might be necessary to relocate it back to the correct address.
vparreloc -f /stand/vmunix can be used to check whether the kernel is relocatable.
root@carlisle[] vparreloc -f /stand/vmunix
The source vmunix file /stand/vmunix is relocatable
root@carlisle[]
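The check that vparinit performs can be sketched roughly as follows; this is an outline of the description above, not the actual startup script, and the nm symbol name used here is hypothetical.

```shell
# Outline of the vparinit relocation check (not the real script).
reloc=$(vparextract -k)                           # address assigned by vpmon
start=$(nm -x /stand/vmunix | awk '/text_start/ {print $1; exit}')  # hypothetical symbol
if [ "$reloc" != "$start" ]
then
    vparreloc -a "$reloc" /stand/vmunix           # patch the kernel to match
fi
```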
Slide 7-33:
Normally in HP-UX, when you change the time in a partition, it changes the hardware clock of the system. For virtual
partitions, though, this approach won't work, since different partitions can be set to different times. If the hardware clock is
reset from PDH, you can use this command to clear the differences stored by vpmon.
This command does not affect currently running partitions.
Slide 7-34:
In this module of the class we aim to get an understanding of:
Slide 7-35:
Questions
Preface
A lot of externally-reported problems are simple misconfiguration issues. This "low-hanging fruit" can be checked and
resolved very quickly, so it is a good starting point for troubleshooting. We will then take a look at issues with starting and
running vPars.
Slide 8-1:
Slide 8-2:
Module objectives
Slide 8-3:
A lot of customer problems are actually pretty straightforward configuration issues, like trying to use new hardware on a
release that doesn’t yet support it. You can head these off with some quick checks, so it’s always a good idea to do them first.
Note that much of the information in this module is taken from the HP-UX Virtual Partitions Ordering and Configuration
Guidelines, available on the web at docs.hp.com. That information is constantly changing, so this module -- what you’re
reading right now -- may be out of date. When in doubt, download the latest copy.
Slide 8-4:
The main class of software problem seen with vPars is just whether the software’s installed correctly. But before we can check
for correct installation, we have to know what we’re checking for.
The vPars product comes in two distinct flavors:
•     VPARSBASE: the base product. It's free, and downloadable from the web. It's mainly intended as an evaluation tool for
      customers, and has limited functionality: users can only create two vPars on a system or nPar, with the first vPar having
      one CPU, which means CPU migration doesn't work. This software isn't supported, but a kernel running
      under it still has support.
•     T1335AC: the full product. This one costs money, and adds in the full vPars functionality and support. You get the ability
      to make additional partitions, reconfigure them, and migrate CPUs dynamically.
To see which flavor you’ve got installed -- and hence, whether it’s supported or not -- use swlist:
# /usr/sbin/swlist -l bundle
Initializing...
...
  T1335AC               A.02.01.00                      HP-UX Virtual Partitions
This system obviously has the full product installed. Here’s an example of the base product:
# /usr/sbin/swlist -l bundle
Initializing...
...
  VPARSBASE             A.02.01.00                      HP-UX Virtual Partitions - Base
Alternatively, you can check by looking at the VirtualPartition product. The base product has a VPAR-MON fileset, while
the full version has a VPAR-MON2 fileset. Here’s a system with the base product:
# /usr/sbin/swlist -l fileset VirtualPartition
Initializing...
...
# VirtualPartition                      A.02.01.00                     HP-UX Virtual Partitions Functionality
  VirtualPartition.VPAR-KRN             A.02.01.00                     Virtual Partition Kernel Files
  VirtualPartition.VPAR-MON             A.02.01.00                     Virtual Partition Monitor
  VirtualPartition.VPAR-RUN             A.02.01.00                     Virtual Partition User Space Commands
And here’s one with the full product:
# /usr/sbin/swlist -l fileset VirtualPartition
Initializing...
...
# VirtualPartition                      A.02.01.00                     HP-UX Virtual Partitions Functionality
  VirtualPartition.VPAR-KRN             A.02.01.00                     Virtual Partition Kernel Files
  VirtualPartition.VPAR-MON2            A.02.01.00                     Virtual Partition Monitor
  VirtualPartition.VPAR-RUN             A.02.01.00                     Virtual Partition User Space Commands
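Since the VPAR-MON versus VPAR-MON2 fileset name is the distinguishing detail, the check lends itself to a quick script. A minimal sketch, not part of the vPars product; the heredoc just simulates captured swlist output (on a real system you would pipe swlist directly into the grep tests):

```shell
# Simulate captured "swlist -l fileset VirtualPartition" output (sample only).
cat > /tmp/vpar_filesets.txt <<'EOF'
  VirtualPartition.VPAR-KRN     A.02.01.00   Virtual Partition Kernel Files
  VirtualPartition.VPAR-MON2    A.02.01.00   Virtual Partition Monitor
  VirtualPartition.VPAR-RUN     A.02.01.00   Virtual Partition User Space Commands
EOF

# VPAR-MON2 means the full product; plain VPAR-MON means the base product.
if grep -q 'VPAR-MON2' /tmp/vpar_filesets.txt; then
    echo "full product (T1335AC)"
elif grep -q 'VPAR-MON' /tmp/vpar_filesets.txt; then
    echo "base product (VPARSBASE)"
else
    echo "vPars monitor fileset not found"
fi
```

The order of the tests matters: VPAR-MON2 also matches the string VPAR-MON, so the full-product check must come first.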
There’s currently no documented way to tell which vPars flavor is installed just from looking at a kernel crash dump or a
monitor crash dump. Explaining how it’s done might enable customers to convert the free product into the full product.
Slide 8-5:
A.01
  • First release
  • Support for rp5470 (L3000), rp7400 (N4000)
  • "Post-release" support for rp5405
A.02.01
  • Support for Superdome
  • Integration with iCOD
  • vparmgr GUI
  • "Post-release" support for rp8400 (Keystone), IOX expander
A.02.02
  • Support for rp7405/rp7410 (Matterhorn), A5838A "combo card"
As of this writing, there are three different releases of the vPars software, each one adding functionality and support for new
processors and hardware.
This slide shows a summary of those new features. For details on each release, check the Installing and Managing HP-UX
Virtual Partitions documents for each release; they’re available at
http://docs.hp.com/hpux/11i/index.html#Virtual%20Partitions.
Slide 8-6:
                    • A.02.01:
                    # vparstatus -V
                    Version 1.1                              Not a typo
                    # swlist –l fileset VirtualPartition
                    # VirtualPartition A.02.01.00 HP-UX Virtual Partitions
                    • A.02.02:
                    # vparstatus -V
                    Version 1.1
                    # swlist –l fileset VirtualPartition
                    # VirtualPartition A.02.02.00 HP-UX Virtual Partitions
You can check for the installed release one of two ways:
•            swlist: This is the best approach. By definition, each new release has to have a new product revision number -- the third
             field in the swlist output.
•            vparstatus: Using the -V (yes, that’s a capital letter) option to vparstatus prints the version of the command’s output
             format. Right now, you can use it to differentiate between releases, but the output format isn’t necessarily tied to a release
             -- a new release might not change it, or a simple patch could change it.
Although it’s not officially documented, you can also get version information from either a kernel crash dump or a monitor
crash dump. From a vPar kernel crash dump, look for the symbol vpar_dvr_version. This variable has 16 bits of major
version and 16 bits of minor version. This was added for the second release of vPars, so if it’s not there, you’ve got A.01. In
A.02.01, the major version is 0 and the minor version is 1.
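The packing is simple enough to decode by hand with shell arithmetic. A sketch; the raw value below is illustrative, chosen to match the A.02.01 case of major 0, minor 1:

```shell
# vpar_dvr_version packs 16 bits of major version above 16 bits of minor version.
v=$(( 0x00000001 ))              # illustrative raw 32-bit value from a crash dump
major=$(( (v >> 16) & 0xFFFF ))  # top 16 bits
minor=$(( v & 0xFFFF ))          # bottom 16 bits
echo "vpar_dvr_version = $major.$minor"
```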
Slide 8-7:
The previous slides have been background information, leading up to the main problem with software configuration on vPars:
the software’s not installed correctly. The two main culprits are:
•            Not installing all the required patches, so that the new kernels aren’t relocatable
•            Installing all the required patches, but having something go wrong at install time so the patches aren’t completely installed
(Actually, there's a third "most common" problem: not using the appropriate WINSTALL kernel with Ignite-UX, but we said
up front we're not covering Ignite-UX. Check the config guide for details; the short answer is: make sure you're using
Ignite-UX version B.3.7.X or later.)
We’ve already seen how to check if the software’s installed -- use swlist.
To check if it’s completely installed, use the -a state option to swlist. This should print the string “configured” for all
the vPars software and patches; in fact, every fileset on your system should be marked configured, or it’s not completely
installed. If you’ve got some filesets that aren’t configured, check the SD logs to figure out what happened.
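One quick way to spot unconfigured filesets is to filter the state listing. A hedged sketch; the awk filter below runs on simulated output (the fileset name SOMEPATCH.SOME-FS is purely hypothetical), since the real check would pipe swlist -l fileset -a state into the same filter:

```shell
# Simulated "swlist -l fileset -a state" output: fileset name, then SD state.
cat > /tmp/sw_state.txt <<'EOF'
VirtualPartition.VPAR-KRN configured
VirtualPartition.VPAR-RUN configured
SOMEPATCH.SOME-FS installed
EOF

# Print any fileset whose state isn't "configured" -- these need attention.
awk '$2 != "configured" { print $1 " is only \"" $2 "\"" }' /tmp/sw_state.txt
```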
These two checks should be sufficient, but sometimes you have to check even further to see if there’s a problem with the
installation, or if your kernel “lost” the patches -- by booting a backup kernel, copying a kernel over from another system, or
restoring it from backup. You can confirm if the patches are in the kernel by running the what command on it.
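The what(1) check works because installed patches embed SCCS-style "@(#)" identification strings in the kernel. A sketch of the idea, simulated here with a text file standing in for the kernel (the ID strings shown are purely illustrative):

```shell
# what(1) scans a file for "@(#)" identification strings. On a live system:
#   what /stand/vmunix | grep -i patch
# Here a text file with sample ID strings stands in for /stand/vmunix.
printf '@(#) some_driver.c: base module\n@(#) PATCH: illustrative patched module\n' \
    > /tmp/fake_vmunix_ids

grep -i 'patch' /tmp/fake_vmunix_ids
```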
The complete list of patches bundled with each vPars release can be found in KMine; in particular, newsletter UVPARGD004
contains the list for version A.02.01.
Slide 8-8:
# vparstatus -w
vparstatus: Error: Virtual partition monitor not running.
# vparreloc -f /stand/vmunix
Didn't find necessary dynamic sections needed for relocation
ERROR: The source vmunix file /stand/vmunix is not relocatable
Another common software issue is, frankly, whether vPars is running or not.
To tell whether you’re booted standalone or as a vPar, use the vparstatus command with the -w option. This prints out the
name of the local virtual partition -- that is, the one in which the command is executed. If you’re booted standalone, the
monitor’s not running, so the command will fail.
To tell whether your current kernel can be booted under vPars, you’ll have to check if it’s relocatable. Use the vparreloc
command with the -f option.
Slide 8-9:
That’s it for the quick software checks. The hardware checks are basically whether the hardware’s supported or not.
First off, the processors. As we saw in the previous module, the monitor emulates PDC; thus, it has to be compatible with each
processor's implementation of PDC. Adding support for new processors entails PDC compatibility testing and potential
monitor changes, so not all processors or versions of PDC are (or will be) supported.
•              The standard way to check for what processor you have is the model command. That’s what the checkinstall script
               for the vPars product (namely, the VirtualPartition fileset) uses. If the output of model doesn’t match one of a set of
               hard-coded strings, you won’t be able to install vPars. People have been known to work around such restrictions, so it’s
               still a good idea to check.
      There's one case where this model string check could fail. To check for the rp5470, the checkinstall script only
      checks whether the model string starts with "9000/800/L". That implies that vPars will successfully install on an
      L1000, L1500, or L2000, even though vPars isn't supported on them.
      If all you have is a kernel core dump, you can’t very well run the model command. Fortunately, you can still get the
      model information -- it’s in the crash dump’s INDEX file:
      # grep model crash.0/INDEX
      modelname 9000/800/L3000-5x
•     To check for the appropriate firmware revisions, you can use the boot console handler (BCH). After interrupting the
      monitor boot, go to the information menu, and query the firmware revision. You can do this in one line from the Main
      Menu prompt with “in fv”, as shown below:
      Main Menu: Enter command or menu > in fv
FIRMWARE INFORMATION
Slide 8-10:
              The monitor uses IODC to boot & dump vPars, so not all I/O
                cards will work for those purposes
The exceptions -- that is, which I/O hardware is not supported -- depend on whether you’re using the card for booting or
dumping vPars. Since the monitor uses IODC to do its I/O, there’s significant testing (and potential code changes) to make
sure that the IODC on a particular card interacts cleanly with the monitor.
If you’re not using the card for boot or for dump, vPars doesn’t impose any restrictions, so it should work as well as it does
standalone. Well, almost. The only exception is the A5856A RAID 4si SCSI controller: it can cause an HPMC when a vPar is
reset via vparreset, so it’s not supported at all in a vPar; see JAGae44143 for details.
You can check what the boot device is by using the vparstatus and ioscan commands. For example:
# vparstatus -v -p vpar1 | grep BOOT
0.0.2.1.6.0.0 BOOT
# ioscan -H 0.0.2.1
Slide 8-11:
                Methodology:
                How to Troubleshoot the Monitor
                The vPars monitor boots like a standalone HP-UX kernel.
                Hardware problem?
                  • Troubleshoot the disk connection
                  • Boot from an alternate path
                Software problem?
                  • Boot an alternate monitor (if available)
                  • Boot /stand/vmunix (without the monitor)
From a troubleshooting standpoint, the most important thing to remember about the monitor is that it boots just like a
standalone kernel. That is, the HP-UX boot loader hpux reads the executable from the /stand/ file system, loads it into
memory, and launches it. The only significant difference is the name of the executable; instead of /stand/vmunix, it’s
/stand/vpmon.
That means that the same techniques you use to troubleshoot a failed standalone kernel boot come into play for the monitor as
well.
When trying to figure out why the monitor won’t boot, the most straightforward approach is to watch the console messages
from either the Boot Console Handler (BCH) or ISL.
•     For example, if the problem appears to be based on hardware -- such as BCH or ISL being unable to communicate with
      the disk with /stand/ on it -- then you can check the disk connections, or try one of the alternate disk paths. Remember,
      each vPar has its own boot disk, so there will be multiple /stand/ directories, each with its own copy of the monitor.
•     If BCH and ISL can contact the disk, then the monitor boot failure probably isn’t a hardware issue, unless it’s an
      unsupported disk. In this case, read the console messages. If it’s a problem with the monitor executable itself, you can try
      booting an alternate monitor -- either a backup copy in /stand/ if you have one, or /stand/vpmon from another boot
      disk, as noted in the previous paragraph.
•     If you can’t boot an alternate monitor, the next step is to try booting the system standalone -- that is, boot
      /stand/vmunix instead of /stand/vpmon. If /stand/vmunix successfully boots, you’ve isolated the problem to
      vPars, and not a generic system problem. At that point, you can effect recovery -- by running swverify to confirm that
      the vPars products are not corrupt, copying the monitor over from an identical system, or reinstalling the vPars products.
•     If you can’t boot the system standalone, then it’s probably not a vPars issue. Once you’ve tried to boot off all your
      alternate disks -- both as a vPar and standalone -- there’s not much else you can do. It may be time for the support media
      or a reinstall.
The following slides show some problems you might run across.
Slide 8-12:
                             Booting...
                             Failed to initialize.
                             ENTRY_INIT
                             Status = -7
                             Failed to initialize.
                             Main Menu: Enter command or menu >
This example shows a system boot disk that’s either not connected, or just can’t be accessed by the BCH. This isn’t a vPars
problem, but a hardware issue. It could be caused by bad cabling, old firmware revisions, unsupported disk type, that sort of
thing.
Troubleshooting hardware problems is beyond the scope of this course, since hardware really isn’t part of the vPars product.
Nonetheless, the next slide shows our approach to recovery.
Slide 8-13:
                             Main Menu: Enter command or menu > sea ipl                 Find a bootable disk
                             Searching for device(s) with bootable media
                             This may take several minutes.
                                 Path# Device Path (dec)        Device Path (mnem) Device Type and Utilities
                                 ----- -----------------        ------------------ -------------------------
                                 P0    0/0/1/1.2                intscsib.2         Random access media
                                                                                     IPL
                                 P1     0/0/1/1.0               intscsib.0         Random access media
                                                                                     IPL
HARD Booted.
ISL>
If the disk doesn't appear to be working, you can do a BCH search for any media with an initial program loader (IPL). That
should give you a list of disks with usable LIF volumes.
Be sure to interact with ISL, so you can check two things:
1. The AUTO file in the LIF volume may not be set to run /stand/vpmon. By default, the autoboot file just contains the
   string "hpux" with no options, which the boot loader will interpret as a request to run /stand/vmunix. So be sure to run
   the ISL lsautofl command to list the contents of the autoboot file, and override it if necessary. For example:
      ISL> lsa
      hpux
      ISL>
2. Another reason to interact with ISL is that the disk may not have any file system on it, let alone a /stand/ file system. If
   the disk does hold the right type of file system, that file system may not contain a vpmon executable. Be sure to run the
   hpux ls command from ISL to get a listing of the /stand/ file system, and confirm that there's a /stand/vpmon in it.
   For example, this disk has no vpmon on it:
      ISL> hpux ls
      Ls
      : disk(0/0/1/1.2.0.0.0.0.0.0;0)/.
      lost+found      system          kernel          ioconfig        bootconf
      build           vmunix          dlkm            system.d        rootconf
Once you find a disk with /stand/vpmon on it, you can boot it directly from ISL.
Slide 8-14:
                           Booting...
                           Boot IO Dependent Code (IODC) revision 1
                           Deal with this the same way as you would a bad disk.
                           One difference: you may be able to load the monitor
                           from the original disk.
                           ISL> hpux (0/0/1/1.0.0)/stand/vpmon          Specify disk path to monitor
                           Boot
                           : disk(0/0/1/1.0.0;0)/stand/vpmon
                           614400 + 168736 + 16898800 start 0x23000
MON>
This example shows another monitor boot failure, again not attributable to vPars.
In this case, the LIF on the boot disk is corrupt, so the system firmware can't find ISL, the program that launches the boot
loader hpux (which then launches the monitor). Here it's not the monitor that failed to boot, but ISL that failed to load
or boot.
As far as troubleshooting is concerned, you can handle this the same way as you did for a bad disk: find a disk that has a valid
LIF, and boot from it.
This scenario is different from the previous example in a significant way: you can still access the disk. If you’re comfortable
with ISL and the boot loader command syntax, you can load ISL and run hpux from one disk, but hunt for the monitor on
another disk. Just specify the alternate disk’s path after the boot loader’s ls command:
Ls
: disk(0/0/1/1.0;0)/stand
lost+found      ioconfig                      bootconf               system
system.d        build                         vmunix                 dlkm
rootconf        krs_temp                      system.prev            dlkm.vmunix.prev
vpar1           vpar2                         vpar2_new              vpdb_geff
vpdb_old        vpmon.orig                    core
Since this disk has a /stand/vpmon on it, you can either reset the system and boot from that alternate path, or you can use
the same addressing trick to boot this alternate monitor -- even if the alternate disk doesn’t have a LIF. As long as the alternate
disk has a /stand file system on it, the boot loader from the original disk can get to it, load it, and execute it. The syntax is
shown on the slide. One caveat: the monitor will still try to load the vPars database /stand/vpdb from the original disk path.
Slide 8-15:
                           Boot
                           : disk(0/0/1/1.0.0;0)/stand/vpmon
                           Exec failed: Exec format error                          Corrupt monitor
                           Boot
                           : disk(0/0/1/1.0.0;0)/stand/vpmon
                           disk(0/0/1/1.0.0;0)/stand/vpmon: cannot open, or not executable
                           Exec failed: No such file or directory
                                                                                   Missing monitor
If you can get to ISL, the next potential failure is if the monitor executable itself -- /stand/vpmon -- can't be executed by the
loader, because it's corrupt, missing, or has the wrong permissions. This scenario is pretty much the same as having a
corrupt or missing /stand/vmunix on a standalone system. Recovery is also pretty much the same.
Slide 8-16:
1. Use the boot loader's ls command to see if there's a backup copy of the monitor, and try to load that.
2. If there isn't a backup copy, check the other boot disks -- remember, there's one for every vPar. You can use the same
   search process that you did for a bad disk or corrupt LIF.
3. If you still can't boot an alternate monitor, the next step is to boot the system without virtual partitions. That's right, just
   boot /stand/vmunix. Even though it's a relocatable kernel, it can still boot outside of vPars. That kernel will suddenly
   be able to see all the memory, processors, and I/O hardware, but that shouldn't have any ill effects. At least you'll be up
   and running.
Of course, this just gets you booted. Then you’ll need to track down how the monitor got corrupted or removed. In the
meantime, you have several possibilities for recovery, listed in order of decreasing likelihood of success:
•     You can restore your good copy of /stand/vpmon from your system backup. You do have one, right?
•     You can reinstall the vPars software.
•     You can copy over a working /stand/vpmon from one of the other vPars, as long as they’re at exactly the same patch
      level and release. That situation is not as common as you may think.
•     You can change the boot path permanently to point to the disk path where you loaded the alternate monitor. Of course, that
      doesn’t really fix the problem; you’re still missing a monitor.
Slide 8-17:
                              Boot
                              : disk(0/0/1/1.0.0;0)/stand/vpmon
                              614400 + 168736 + 16898800 start 0x23000
                              ERROR loading /stand/vpdb (-4)                       vpmon error
                              Welcome to VPMON (type ‘?’ for a list of commands)
Another monitor boot-time problem is a missing or corrupt vPars database file. We’ve booted the monitor successfully, but it
doesn’t have a database. You still get a monitor prompt, but you may have an error message immediately before the prompt.
You won’t find anything in the monitor log.
If you try to load a kernel from here, nothing will happen, since the monitor doesn’t think any vPars exist.
To recover, you need to find an alternate database. You can use the monitor’s ls command to search /stand for a backup
database, or you can use your newly-acquired disk-hunting skills to look through all the other disks. Once you’ve found a
potential database, use the readdb command from the monitor to read in that database file, as shown on the slide.
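The slide itself isn't reproduced here, but the sequence looks roughly like the following sketch -- a hedged illustration in which the backup file name is hypothetical, and readdb is assumed to take the database file path as its argument:

```
MON> ls /stand
...  vpdb.backup  ...
MON> readdb /stand/vpdb.backup
```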
If you’re unable to find a suitable database, you can boot standalone, as discussed earlier.
Once you’ve worked around the immediate problem, you still need to restore your database (and hopefully figure out what
happened to it). If you booted from an alternate disk, using the same database file name, the database will be automatically
synchronized across the partitions when they boot and run the vPar daemon vpard.
If you used an alternate database, you can just copy that to /stand/vpdb and reboot the system. This assumes that this
alternate database is the one you want to keep. If it’s not -- it’s a “fail-safe” configuration, or it’s a little out of date -- then you
might not want to propagate this database across the other partitions. In that case you can rename the database file -- when the
rest of the partitions boot, they will sync based on the alternate database’s name, leaving /stand/vpdb intact. If you can find
a good database on one of the partitions, you can copy that one to /stand/vpdb and reboot the system.
If, unfortunately, you’ve lost the database completely and had to boot standalone, your options depend on how careful you
were when you set up your partitions:
•     If you’ve been diligent in your system backups, you can restore /stand/vpdb from backup.
•     If you saved the commands you used to create the initial partitions as a script (as recommended in the prerequisite vPars
      class), you can recreate the vPars by running that script.
•     You can try to repair a damaged /stand/vpdb, by dumping it out with the GSE tool vpdbdump, available from
      http://wtec.cup.hp.com/~hpux/vm-pm/vpars/tools.htm (no, it’s not part of the vPars product), and then editing it with adb.
      Obviously, this requires a level of expertise that you’re not going to get from this class. Note that as of this writing,
      vpdbdump has to run from the current directory (.), or it generates a “cannot open” error.
      Note that you could dump the database out by using the vparstatus -D command, but that bails if the file’s corrupt --
      which is the problem you’re trying to fix.
•     You can refer to your partition plan, and rebuild the partitions.
Slide 8-18:
              Methodology:
              How to Troubleshoot the kernel
                           Make sure it’s a vPar problem:
                                 • Read the console boot messages or command errors
                                 • Query the monitor
                                 • Boot /stand/vmunix (without the monitor)
                           Hardware problem?
                                 • Boot from an alternate disk path
                           Software problem?
                                 • Boot an alternate kernel (if available)
                                 • Copy a kernel over from another vPar
                                 • Reinstall
Sometimes the monitor comes up just fine, but a particular vPar won’t boot. The approach to troubleshooting a kernel boot
failure is a little different. In this situation, you’ve got a monitor running that can give you extra information; you may also
have other vPars running, which may have information as well.
One other thing to note is that the kernel boot failure may or may not be a vPars problem.
The methodology for debugging kernel boot failures under vPars is mostly a matter of gathering information:
If booting standalone fails, then you can probably eliminate vPars as the source of the problem. And you’ve also stepped
outside the scope of this course. Nonetheless, there are options available within vPars to make kernel recovery easier.
As a general rule, the method of recovering from a kernel boot failure depends on whether it’s an apparent hardware problem
or software problem.
If it’s hardware, like you can’t access the disk, you’ll need to try an alternate path, or find another disk to boot from.
If you can access the vPar’s boot disk, but for some reason can’t run the kernel, that’s typically a software problem. In that
case, there are three general steps:
Slide 8-19:
The vPars monitor has a number of built-in commands, easily printed by entering help at the monitor prompt:
MON> help
Supported Commands:
?                Print list of commands
cat              Dump contents of file to screen
cbuf             Dump contents of console buffer
getauto          Print the AUTO file
help             Print list of commands
lifls            List files in LIF directory
log              View the event log
ls               List files in a directory
•     log: dumps out the monitor’s event log. This is typically where you should start looking.
•     ls: lists the files in a UFS directory. By default, it lists the contents of the /stand/ directory. It takes a subset of the
      options available to the real ls(1) command, just like the boot loader’s ls command. This one is very handy for finding
      alternate kernels and databases.
•     scan: lists all the hardware recognized by the monitor. That includes memory, bus converters, processors and bus bridges,
      but nothing below the LBA level. It also shows which vPar, if any, each piece of hardware is assigned to. This is
      particularly useful for validating I/O addresses and confirming vPar ownership.
•     vparinfo: displays information about the named vPar -- its assigned hardware, boot paths, memory requirements, and
      boot options. If you don’t specify a vPar name, it prints the unassigned hardware and memory, as well as the names of the
      defined vPars.
The other commands are helpful as well, but these four are the ones you’ll use most to troubleshoot.
Slide 8-20:
                • vparextract [-b|-k|-l]
                     -b: monitor boot path
                     -k: kernel’s relocation address
                     -l: Dumps entire monitor log
If you’ve used vPars at all, you’re already familiar with the vparstatus command. There are a few options useful for
troubleshooting that you may not have used before:
•     -v: You've probably used this option already; it just displays a lot more information.
•             -e: This option prints the monitor's event log, but it’s limited to a circular buffer roughly 4K bytes long. It’s a subset of the
              log printed by the monitor’s log command, and is helpful if you can’t get to the monitor prompt.
•             -w: This option prints the name of the local vPar -- that is, the one in which the command is executed. We saw this in the
              previous module; you can use this to see if you’re booted standalone or as a vPar.
•             -R: This option prints any available Processor Internal Memory (PIM) data for the named vPar. If you did a soft reset of
              that vPar -- an emulated Transfer of Control via vparreset -t -- this will print PIM for all that vPar’s processors at the
              time they were TOC’ed.
•     -A: This option lists the resources -- I/O paths, memory, and processors -- that haven’t been assigned to any vPars. This is
      more helpful during run-time than at boot-time.
•     -D: This option uses an alternate database file.
A command that you may not be familiar with is the vparextract command. It communicates directly with the monitor, and
thus only works when the monitor is running -- that is, when you’re booted under vPars. Its options are:
•     -b: This option prints the path to the disk from where the monitor booted.
•     -k: This option prints the address to which the currently running kernel was relocated. It’s not too useful.
•     -l: This option prints the monitor's entire event log, as opposed to the last 4K printed by vparstatus -e. Like
      vparstatus -e, it’s helpful if the monitor prompt’s not accessible.
Module objectives
Slide 8-21:
              MON> log
              INFO:CPU1:MON:[18:16:26 9/9/2002 GMT] vpar1 is active
              INFO:CPU1:MON:PDC_STABLE return size = 3f0
              INFO:CPU1:MON:[18:17:19 9/9/2002 GMT] vpar1 is up
              [Maybe some warnings about TOC vectors, which can be ignored]
When a partition boots successfully, these are the kinds of messages you should see, in either the monitor log or from the
vparstatus command.
In the monitor log, you may see some PDC warnings about corrupted TOC vectors, but those are documented in JAGae44546
and can be ignored.
While the vPar boots, you can see its state change via vparstatus. The state should proceed from Load to Boot to Up. If it
returns to Down, then you’ve got a boot problem. Unfortunately, the vparstatus command doesn’t usually give you any
more information than that.
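A watch loop over the boot states can be sketched like this. The get_state function here is a stand-in that replays a canned Load -> Boot -> Up sequence; real code would parse vparstatus output and sleep between polls:

```shell
# Stand-in for polling `vparstatus -p vpar1`; replays a canned
# Load -> Boot -> Up sequence instead of querying the monitor.
STATES="Load Boot Up"
get_state() {
    n=$1
    set -- $STATES
    eval echo \"\${$n}\"
}

watch_boot() {
    tick=1
    while :; do
        state=$(get_state "$tick")
        echo "state=$state"
        case $state in
            Up)   echo "boot complete"; return 0 ;;
            Down) echo "boot failed - check the monitor log"; return 1 ;;
        esac
        tick=$((tick + 1))      # a real loop would sleep between polls
    done
}

watch_boot
```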
Let’s look at a few examples of when things go wrong.
Slide 8-22:
Here’s an example of what happens when the vPar’s boot disk isn’t accessible. It’s nearly impossible for this situation to occur
if the vPar’s boot path is the same as the monitor’s; if the monitor could be loaded from this disk, the kernel should be loadable
as well.
Note that the monitor gives a hint of what’s wrong, but if you tried to boot using the vparboot command on another vPar,
you don’t get much in the way of error messages.
Slide 8-23:
              To recover:
                     • Try the alternate path
                     • Make sure the LBA is valid (could be user error)
                     • Go back to BCH to check the disk
To figure out what went wrong, we go to the monitor log, using the monitor’s log command, or either the vparextract -l
command or the vparstatus -e command from a booted vPar. The logs tell us that this is not a vPars problem, but a disk
access problem: IODC was unable to talk to the disk.
To recover:
•             First, try booting from the vPar’s alternate boot path, using vparboot with the -B alt option (if you’re booting from
              another vPar) or vparload -B alternate_path (if you’re booting from the monitor).
•             If that fails, check if the LBA is valid and belongs to the vPar being booted. You can check the validity of the LBA using
              the monitor’s scan command. The scan should also show if the LBA belongs to the vPar being booted, but you can also
              check with either the vparinfo command from the monitor or, if you’re on another partition, with the vparstatus -v
              command. Of course, this only works to the LBA level, which doesn’t include the disk.
      If the LBA isn’t valid or assigned to this vPar, you’ve likely got a user error in setting up the path to the boot device.
•     If the LBA is owned by the unbootable vPar, then you can look for the disks under it. Assuming you planned ahead, you’ll
      have a hard copy of the output of the ioscan command for this vPar. You can check that listing for any alternate disks
      under that LBA, as well as any other LBAs assigned to this vPar. You can then use the -B option to vparboot and try to
      boot from those disks, in case one of them happens to be the boot disk.
•     Finally, if you didn’t plan ahead, you may have to go all the way back to the BCH to try to talk to the disk, and that entails
      shutting down all the partitions and resetting the entire system. Once you’ve gotten back to the BCH, you can do a BCH
      search along the path for the disk and/or ISL.
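The first recovery step can be sketched as picking the right command for where you’re sitting. This only composes and prints the command lines (nothing is executed); the vPar name and hardware path are hypothetical, and the option spellings follow the text above -- check vparboot(1m) and the monitor help for the exact syntax:

```shell
# Compose the recovery boot command without executing anything.
boot_cmd() {
    vpar=$1; where=$2; path=$3
    case $where in
        vpar)    echo "vparboot -p $vpar -B $path" ;;
        monitor) echo "vparload -B $path" ;;
    esac
}

# From another running vPar, try the alternate boot path:
boot_cmd vpar1 vpar alt
# From the monitor prompt, name the alternate path explicitly:
boot_cmd vpar1 monitor 0/0/2/0.6.0
```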
Slide 8-24:
Here’s an example of what happens when the vPars kernel isn’t accessible. This can happen on any vPar, even if it’s the one
the monitor booted from.
Once again, you can see that the monitor indicates the load failed, but the vparboot command just indicates that the boot was
attempted.
Slide 8-25:
                     • Reboot and use the loader’s ls command from ISL to look for a
                       backup kernel
                     • Copy the kernel over from another vPar
Back to the monitor log. In this case, we can see that the load simply failed, with no IODC errors.
To recover:
•             Once again, you can try booting from the vPar’s alternate boot path, using vparboot with the -B alt option or
              vparload from the monitor.
•             If that fails, look around on the vPar’s /stand file system for an alternate kernel and boot that, using the -b option to
              either vparboot or vparload. When looking for alternate kernels, the monitor’s ls command will come in handy, but
              it will only list files on the disk from which the monitor was loaded. If you’re trying to boot a kernel from another disk, ls
              won’t help.
      If you booted the monitor from a different disk, you can still use the boot loader’s ls command. Unfortunately, the boot
      loader hpux is run from ISL, and that will require a system reboot; you’ll have to shut down all the currently running
      vPars and boot back up to ISL. The advantage here is that once you’re at the ISL prompt, the hpux command can access
      all the disks, regardless of whether they have a LIF volume or not.
•     If you still can’t find a kernel that will boot, you may be able to copy a kernel over from another partition. However, you
      can’t do a direct copy -- even though the disks are connected to the same system, they belong to different partitions.
      To do a copy between partitions, you can “virtually” disconnect the disk from one vPar, and connect it to another. The
      process is a lot like mounting a standalone system’s boot disk on another system; the only difference is that you
      “virtually” recable the boot disk, not physically.
      The details on how to do this are covered in two response center engineering notes (RCENs): KBRC00009026 and
      KBRC00009029. The first one explains how to do the “recabling” on a system with only two vPars; in that case, you boot
      standalone and vgimport the “bad” vPar’s root disk. The second RCEN is for a system with three or more vPars -- the
      difference being that with more than two vPars, you don’t have to reboot the entire system, but use vparmodify to move
      the root disk’s I/O path between vPars.
•     Worst case, you may have to reinstall the vPar.
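The “virtual recabling” flow for three or more vPars can be sketched as a dry run. Each step is only printed, never executed; the vPar names and I/O path are hypothetical, the vgimport arguments are deliberately elided, and the authoritative procedure is the RCENs named above:

```shell
# Dry run only: print the recabling steps rather than run them.
recable_steps() {
    bad=$1; helper=$2; path=$3
    echo "vparmodify -p $bad -d io:$path     # detach the disk's path from the bad vPar"
    echo "vparmodify -p $helper -a io:$path  # attach it to a running vPar"
    echo "ioscan -fnC disk                   # (on $helper) locate the disk"
    echo "vgimport ...                       # import the bad vPar's root VG"
}

recable_steps vpar1 vpar2 0/0/2/0
```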
Module objectives
Slide 8-26:
In this example, you’ve got a kernel that isn’t relocatable, so the monitor can’t load it. You may see this if someone took a
shortcut installing the system, or installed using an older version of Ignite-UX -- the infamous “WINSTALL” problem.
As before, the monitor tells the story, but vparboot doesn’t. Let’s check the log.
Slide 8-27:
              To recover:
                     • Boot standalone, if it’s feasible
                         – Otherwise, treat it like a missing/corrupt kernel
Again, the monitor log tells you what the problem is.
To get the vPar running, you could use the same tricks as we did for a missing or corrupt kernel. However, in this case, the
kernel isn’t corrupt or missing -- it just won’t boot under the monitor. In this case you can boot the kernel standalone. If you
can reset the system, this is the easiest approach to take. To boot standalone, reset the machine via the monitor’s reset
command (after shutting down the other partitions, of course); when you come up to BCH, instead of booting the monitor,
boot /stand/vmunix.
Once the system’s up, either by booting an alternate kernel, or by booting standalone, you can confirm that the kernel’s not
relocatable, using the vparreloc command:
# vparreloc -f /stand/vmunix
Didn't find necessary dynamic sections needed for relocation
ERROR: The source vmunix file /stand/vmunix is not relocatable
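If you script this check, branch on the command’s exit status. In this sketch, check_reloc is a stub standing in for `vparreloc -f <kernel>`; it treats any file containing a marker string as relocatable, which is an invention for testability:

```shell
# "check_reloc" stands in for `vparreloc -f <kernel>`; the stub treats
# any file containing the marker string RELOCATABLE as relocatable.
check_reloc() {
    grep -q RELOCATABLE "$1"
}

kernel=/tmp/vmunix.$$
echo "NOT-MOVABLE" > "$kernel"

if check_reloc "$kernel"; then
    echo "kernel is relocatable - bootable under the monitor"
else
    echo "kernel is not relocatable - boot it standalone"
fi
rm -f "$kernel"
```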
Then recover as described above: boot the kernel standalone if feasible; otherwise, treat it like a missing or corrupt kernel.
Slide 8-28:
              MON> log
              WARNING:CPU0:MON:Monarch cpu information not found for partition 0
              ERROR:CPU0:MON:Unable to boot vpar1: no monarch CPU.
Here’s an example of a kernel that can’t boot because there aren’t enough required resources.
In this particular case, the vPar was unable to boot because there weren’t enough CPUs available to meet its minimum needs.
Another failure could be insufficient memory.
This kind of problem can happen if you build the database while you’re still standalone (that is, without vPars running) and
then reboot the system under vPars.
Another possible cause is copying over an “oversubscribed” database from another system.
A third scenario is the diagnostics subsystem deconfiguring a resource due to errors, such as bad memory or a CPU getting
numerous LPMCs. For the LPMC case, note that all the diagnostic monitor does is mark the CPU -- the actual deconfiguration
won’t take effect until the system is rebooted, so you won’t see the lack of CPU resources until the system -- not the vPar --
reboots.
So, what can you do when you’re short of resources? You’ll have to figure out what the vPar “wants,” using vparstatus,
and compare that to what’s available. See what resource is missing, and then determine if it’s really missing, or if the vPar’s
oversubscribed.
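The wanted-versus-available comparison boils down to arithmetic. In a real script the minimum would come from the vPar’s cpu:::min setting and the available count from vparstatus -A; both are hard-coded sample numbers in this sketch:

```shell
# Compare a vPar's CPU minimum against the free pool.
check_cpus() {
    wanted=$1; avail=$2
    if [ "$avail" -lt "$wanted" ]; then
        echo "short by $((wanted - avail)) CPU(s): lower the minimum or free up CPUs"
        return 1
    fi
    echo "enough CPUs available"
}

check_cpus 4 2 || true    # oversubscribed
check_cpus 2 2            # satisfiable
```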
Slide 8-29:
The upshot of this is that all instances of HP-UX are potentially in contention for a single resource -- the system firmware --
and the monitor has to serialize their requests. Specifically, the monitor uses a spinlock to make sure only one instance is using
PDC/IODC at a time. If two or more vPars are trying to make a PDC or IODC call simultaneously, one of the vPars will be
blocked pending resolution of the intervening call. If the PDC or IODC call is quick, you shouldn’t see any noticeable delay.
If, however, the IODC call takes a “long” time (for example, when booting through SAN switches, as noted in the Superdome
delta training), then a vPar that wants to do a simple PDC call may have to wait a while for the booting vPar to finish its IODC
call. These delays become very noticeable, with the result that all currently running vPars appear to hang for multiple 15-30
second periods while the vPar is booting. Clearly, this impacts the message that vPars are separate, and do not interact with
each other.
As we’ll see in the next module, a vPar doesn’t do many firmware calls once it’s up and running, so this impact should be
minor. However, there are a couple of diagnostic daemons that increase the congestion. First, a chassis code daemon called
cclogd tracks all the chassis codes generated throughout the system. When a vPar boots, it generates a number of chassis
codes, which the cclogd on each of the already-booted vPars tries to read -- using PDC. This increases the contention for the
firmware lock on the vPars that are already booted. A.02.02 alleviates this, by making the chassis log reads return immediately
if the firmware lock is already held; any “missed” chassis codes will be read the next time cclogd does its PDC call.
A second diagnostics daemon, diagmond, does a periodic poll for suspended PCI cards, an operation requiring several PDC
calls. Originally this poll was done every five seconds, then lengthened to every minute. Since the only purpose of the poll is
to take the devices out of the system map if the card remains suspended for a long period of time, the check was changed to
once every 90 minutes. This change is independent of vPars, available in PHKL_28252.
If a customer’s seeing this type of problem, you can work around it by single-threading your boots -- that is, booting one vPar
at a time, manually. You can cut back on the PDC congestion by disabling cclogd and diagmond via stm; however, there
are some consequences -- see the cclogd(1m) and diagmond(1m) man pages for details.
The general performance issue is being tracked via change request JAGae33708. Fixing it will require changes to diagnostics,
firmware, and the vPars monitor. In the meantime, make sure that you follow the configuration guidelines for any EMC drives
and Brocade switches. These are documented in the Ordering and Configuration guidelines paper from docs.hp.com.
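Single-threading the boots amounts to a simple serialized loop. Here boot_one and state_of are stubs; real code would run vparboot, parse vparstatus, and sleep between polls:

```shell
# Boot one vPar at a time, waiting for Up before starting the next,
# to avoid piling up PDC/IODC calls behind the firmware lock.
boot_one() { echo "booting $1"; STATE=Up; }
state_of() { echo "$STATE"; }

serial_boot() {
    for vp in "$@"; do
        STATE=Load
        boot_one "$vp"
        until [ "$(state_of)" = "Up" ]; do
            :   # real code: sleep 5
        done
        echo "$vp is Up"
    done
}

serial_boot vpar1 vpar2 vpar3
```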
Slide 8-30:
Once a vPars kernel is up and running, the vPars monitor switches from an active role to more of a reactive role. Most of the
vPars software is idle, waiting for some sort of request. There are only four reasons why the monitor may start running on a
well-behaved system:
•             The kernel may make firmware calls to PDC or IODC, so the monitor has to emulate the firmware.
•             The user or kernel may read from or write to the console device, so the console drivers vcn and vcs have to manage
              console input and output.
•             The vPars daemons are running, and will do occasional system calls to the kernel, which translate into downcalls for the
              monitor. However, both daemons are sleeping most of the time, and only run every 5 seconds (for vpard) or 6 minutes
              (the default for vphbd).
•     The user may run vPars commands to change the configuration; these result in downcalls from the kernel to the monitor,
      and potential events from the monitor to a kernel (for example, to tell the kernel that it’s acquiring a migrating CPU).
What this means is that vPars shouldn’t spontaneously fail. More often than not, you’ll find that the user ran a vPar command,
and didn’t get the results he or she expected; and in fact, this is the main type of run-time “problem” seen by customers. For
that reason, in this module we’ll focus mainly on collecting the system state.
We’ll also look at some examples of what happens when those other monitor users fail -- such as the daemons dying or the
console failing.
Slide 8-31:
              Methodology:
              How to Troubleshoot at Run-time
              Collect system and vPar status:
                     •     Monitor, if it’s accessible
                     •     vparstatus
                     •     Any error messages
                     •     Shell history (see what the user was doing)
              Ask:
                     •     What did the customer expect to happen?
                     •     Does it make sense?
As in boot-time troubleshooting, the debugging methodology is mainly information gathering. Use the monitor and vPars
commands to figure out what the system configuration is, and if any errors were logged.
Given that most of the run-time problems seen so far are either user misunderstanding or at least user-triggered, there’s one
additional source of data to factor in: what the customer did. To that end, ask the customer what he or she was doing (or trying
to do), what the expected result was, and what actually happened.
Once you’ve collected all your data, make sure that what the customer’s trying to do:
•             Makes sense
•             Is supported
•             Is supported on this release
Don’t skip over that last item. As of this writing, there are three releases of the vPars software (A.01, A.02.01, and A.02.02),
each with two variants (free and purchased). Every one of those releases fixes (and perhaps introduces) defects; in addition,
each one adds support for new processors and I/O hardware. For example, release A.01 doesn’t support Superdome, but
A.02.01 does.
Intermingled with those releases, other subsystems like iCOD and Ignite-UX have become vPar-aware, so the interactions
between vPars and those products have changed as well.
All the new features and product interactions for each release are documented in the HP-UX Virtual Partitions Ordering and
Configuration Guidelines. Rather than clutter this training with all that material, and render it almost immediately out of date,
we’ll refer to that document, available on the web at docs.hp.com.
Slide 8-32:
Monitor commands
To collect system information at run-time, you use the same monitor commands you used at boot-time.
There is one additional command that we haven’t covered that’s helpful during run-time debugging:
•             cbuf: displays the contents of the console buffer of the named vPar. This prints any data buffered in the virtual console
              for a particular vPar, that hasn’t yet been printed.
              When a vPar (let’s call it vpar1) sends something to its virtual console, the vcn driver copies the data into the monitor,
              which sends it to the partition owning the physical console (call it vpar2, or it could be the monitor itself). That data stays
              buffered in the monitor until the virtual console is switched (via ctrl-A) to vpar1, at which time the buffer is written to
              the display, and flushed. If vpar1 “goes away” -- that is, if it hangs or gets reset -- before control of the console is switched
              to it, that data remains buffered.
   This comes in handy if the vPar appears to be hung -- you can try switching the console to the hung vPar and see if there’s
   any console output. If there’s not, you can use the cbuf command to see if there’s any data buffered in the monitor.
Slide 8-33:
               • vparextract [-b|-k|-l]
                     -b: monitor boot path
                     -k: kernel’s relocation address
                     -l: Dumps entire monitor log
There aren’t any extra commands to help with information gathering at run-time. We stick with vparstatus and vparextract,
just like we used when troubleshooting kernel boot problems.
Let’s look at some examples.
Slide 8-34:
Here’s an example based on an actual customer call. In this case, the user added some new hardware, assigned it to a vPar, but
the hardware couldn’t be seen from that vPar.
Using the monitor’s scan command and vparstatus, we can confirm that the user didn’t actually assign the hardware to
any partition -- it’s still unassigned.
Of course, the resolution was to bring down vpar1 and assign the path:
vpar2# vparmodify -p vpar1 -a io:0/1
On reboot, things were as the customer expected them to be.
Slide 8-35:
This next example isn’t truly troubleshooting, but shows how to recognize a partition hang. There’s nothing logged in the
monitor event log, but both the monitor’s vparinfo command and the vparstatus command show the vPar state as Hung.
Here’s how the monitor decides if a vPar is hung:
The vPar heartbeat daemon vphbd uses a downcall to send a heartbeat message to the monitor on a defined interval, specified
in the configuration file /etc/rc.config.d/vparhb. That interval, VPHBD_DELAY, defaults to 360 seconds, or six
minutes. If the vPar misses 10 beats -- an hour, if you’re using the default interval -- the monitor marks the vPar Hung.
Since the heartbeat daemon is a user-space process, you can’t tell for certain that the vPar marked Hung is locked up -- only
that the daemon hasn’t sent a heartbeat in 10 intervals. So don’t assume that a Hung state means the kernel’s hung. Try
switching consoles to it, logging in over the network, or using a hardwired terminal to figure out if it’s really hung or if there’s
just a problem with the heartbeat daemon.
If the vPar is well and truly hung, you can do a soft or hard reset to it to recover, as documented in the next module.
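The hang-detection arithmetic is easy to sanity-check from the config file. This sketch mimics the /etc/rc.config.d/vparhb format with a temp file; the 10-missed-beats threshold is the figure given above:

```shell
# How long until the monitor marks a vPar Hung, from VPHBD_DELAY.
cfg=/tmp/vparhb.$$
echo "VPHBD_DELAY=360" > "$cfg"

. "$cfg"
hang_secs=$((VPHBD_DELAY * 10))
echo "marked Hung after $hang_secs seconds ($((hang_secs / 60)) minutes)"
rm -f "$cfg"
```

With the default 360-second interval, that works out to the hour mentioned above; shorten VPHBD_DELAY and the monitor notices hangs sooner, at the cost of more heartbeat downcalls.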
Slide 8-36:
Here’s what you’ll see if the vPar’s not really hung, but the heartbeat daemon merely died for some reason.
If the heartbeat daemon process is terminated gracefully (that is, by sending it a SIGTERM, as is done at system shutdown
time), it sends a final “goodbye” message to the monitor; on receiving such a message, the monitor sets the vPar’s state to
Shut. This prevents any dynamic configuration changes of that vPar until it has returned to the Down state.
The monitor no longer expects to receive any heartbeats from that vPar, so its state never changes from Shut. In other words,
if you kill the daemon with a simple “kill pid_of_vphbd”, then that vPar appears to be shutting down to the other vPars
and the monitor. That’s the only effect -- the vPar is still up and running, it’s just that it looks to the vPars subsystem like it’s
perpetually shutting down.
If, on the other hand, the heartbeat daemon is terminated with extreme prejudice (by sending it a SIGKILL), the daemon has
no chance to send a “goodbye” message. Suddenly, the heartbeat daemon isn’t sending any more heartbeat messages to the
monitor. This looks exactly like a hung vPar. But it’ll still take 10 missed heartbeats -- again, an hour by default -- for the
monitor to mark it Hung.
If either of these cases comes up, you can restart the heartbeat daemon using the same script that starts it up at boot-time:
# /sbin/init.d/vparhb start
This will immediately bring the vPar back into the Up state.
Slide 8-37:
              If vpard dies:
                     • No warnings in monitor log or vparstatus
                     • Database file won’t get updated automatically
                     • Database file will be updated from monitor when vpard is restarted –
                       e.g., at boot
                     • Only a problem if the monitor boots from this vPar
              To restart vpard:
                     # /sbin/init.d/vpard start
So what happens if the other vPars daemon, vpard, dies?
There are no indications -- from either the monitor or vparstatus -- that the daemon’s gone away. All that happens is that
the database file for this particular vPar (in /stand/vpdb) won’t get updated every 5 seconds. If the system configuration
changes, that copy of the database will become stale.
Is this a problem? Not necessarily. Since you have other vPars that have a “fresh” copy of the database, the changes aren’t
really being lost. The only time the system could boot with stale configuration data is if the monitor read in this stale copy. In
other words, a stale database file is only a problem if that database is on the monitor’s boot disk. In that case, you could lose
some configuration changes made while the vPar daemon was not running.
On a side note, vpard isn’t the only program that updates the database file on disk. The vparstatus command, as it reads
configuration information from the monitor, updates its local copy of the database file. Since you usually run vparstatus
after making configuration changes, that shrinks the window of failure even more.
In fact, when there was a defect involving vpard failing to update /stand/vpdb, the workaround was to run vparstatus
periodically (by the way, that defect’s long been fixed).
Like the heartbeat daemon, you can restart the vPar daemon using its rc startup script:
# /sbin/init.d/vpard start
Database updates resume immediately with no ill effects.
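A staleness check on /stand/vpdb can be reduced to comparing timestamps. To keep this sketch portable, the timestamps are passed in as plain numbers rather than read from the file, and the 60-second threshold is an arbitrary choice -- generous next to vpard’s 5-second update cycle:

```shell
# Flag a possibly stale database file by its last-update age.
stale_check() {
    last_update=$1; now=$2
    age=$((now - last_update))
    if [ "$age" -gt 60 ]; then
        echo "vpdb looks stale (${age}s old) - is vpard running?"
    else
        echo "vpdb looks fresh"
    fi
}

stale_check 1000 1005    # updated 5 seconds ago
stale_check 1000 1300    # no update for 5 minutes
```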
Slide 8-38:
CPU migration
The only truly run-time reconfiguration you can do on a vPar is CPU migration, and the main problem there is user
misunderstanding about which CPUs can be migrated. There was a defect with “losing” a CPU in a vPars environment after
removing a vPar using vparremove, but that’s long been fixed (documented in JAGae05529, fixed pre-release).
So what makes a CPU unable to migrate? You’ve probably seen all this before: it depends on whether the CPU is bound or
unbound (also known as floating).
Here’s some background: At boot-time, the kernel assigns I/O interrupts across all the available CPUs. What’s good about that
is that it distributes the interrupt-processing load, helping performance. What’s bad about it is that those interrupts can’t be
reassigned on the fly; the kernel currently doesn’t have a mechanism to reallocate I/O interrupts, so you have to reboot to
reassign them.
The result is that all CPUs that are assigned I/O interrupts are “trapped” -- they are stuck to the partition they were booted in.
That’s what a bound CPU is: a CPU that was assigned to the vPar early in the boot process, and had I/O interrupts assigned to
it.
The monitor is the thing that assigns the CPUs to a vPar, and here’s how it does the assignment:
 1. When a vPar is about to boot, the monitor reserves min CPUs, where min is the minimum number of CPUs assigned to
    the vPar (when you typed cpu:::min:max). If there are fewer than min CPUs available, it reserves 1.
 2. The monitor then starts assigning CPUs to the vPar. First, it uses any CPUs that you assigned by hardware path to this
    vPar, using cpu:path. These are obviously bound.
 3. Then it adds any unassigned CPUs, until it reaches the CPU minimum count (from cpu:::min). All of these CPUs are
    bound as well. If there aren’t enough CPUs to reach the minimum count, the vPar will boot with what it can get, and you
    won’t see any warnings in any logs; in fact, the monitor will update the database with a new minimum count -- silently
    changing what you had configured.
 4. Finally it adds any remaining unassigned CPUs, until it reaches the vPar’s CPU count, from cpu::cnt. All of these
    CPUs are unbound. Again, if there aren’t enough CPUs to reach the total count, the vPar will boot with whatever it can
    get.
When the vPar’s kernel boots, it sees both the bound and unbound CPUs assigned to the vPar, as well as the unbound ones that
aren’t; it does downcalls to the monitor to get each one’s status. Only the bound CPUs are then used for I/O interrupts.
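The assignment order above can be sketched with counts alone. The real monitor works on specific hardware paths, and the silent database update on a shortfall isn’t modeled here:

```shell
# Sketch of the monitor's CPU assignment order.
assign_cpus() {
    by_path=$1   # CPUs named with cpu:path (bound)
    min=$2       # cpu:::min
    cnt=$3       # cpu::cnt (total)
    free=$4      # unassigned CPUs available

    bound=$by_path
    # Step 3: top up with free CPUs until min is reached (or the pool runs dry).
    while [ "$bound" -lt "$min" ] && [ "$free" -gt 0 ]; do
        bound=$((bound + 1)); free=$((free - 1))
    done

    # Step 4: remaining free CPUs float, up to the total count.
    unbound=0
    while [ $((bound + unbound)) -lt "$cnt" ] && [ "$free" -gt 0 ]; do
        unbound=$((unbound + 1)); free=$((free - 1))
    done

    echo "bound=$bound unbound=$unbound"
}

assign_cpus 1 2 4 5    # plenty of free CPUs
assign_cpus 0 4 4 2    # silent shortfall: boots with only 2 bound CPUs
```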
Slide 8-39:
cpu::cnt
Here’s how you can read the vparstatus output to figure out which CPUs were assigned, and why.
•             The CPUs specified by hardware path (“one colon”) are called out as Bound by User. Obviously, these are bound, and
              can’t be migrated.
•             The CPUs added to reach the minimum count (“three colons”) are marked as Bound by Monitor. These too can’t be
              migrated.
•             The CPUs added to reach the CPU count (“two colons”) are of course Unbound, and can be dynamically migrated.
At this point, you should be able to tell how to unbind CPUs. Don’t forget the vPar has to be in the Down state to do this.
•     If you want to unbind a CPU that you’ve bound by hardware path (“one colon”), use vparmodify -d cpu:path.
      Realize that this won’t reduce the number of bound CPUs -- you haven’t changed the minimum. It also won’t change the
      total number of CPUs (“two colons”).
•     If you want to reduce the number of bound CPUs (“three colons”), lower the minimum count using vparmodify -m
      cpu:::min. Again, this won’t reduce the total CPU count.
One last note about modifying the configuration: the options to vparmodify take effect from left to right. If you choose to do
multiple changes in a single command, make sure they make sense if done in order. For example, if you try to change both the
minimum count and the total count, such that the new total is less than the original minimum, make sure you change the
minimum first:
# vparmodify -p vpar1 -m cpu:::min -m cpu::cnt
Better yet, do it as separate commands.
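The left-to-right rule can be illustrated with a toy validator. It replays a change list the same way vparmodify processes its options, requiring min <= cnt at every intermediate step; the "min=N"/"cnt=N" notation is invented for this sketch:

```shell
# Apply changes left to right; reject any intermediate state
# where min exceeds cnt.
apply_changes() {
    min=$1; cnt=$2; shift 2
    for change in "$@"; do
        key=${change%%=*}; val=${change#*=}
        case $key in
            min) min=$val ;;
            cnt) cnt=$val ;;
        esac
        if [ "$min" -gt "$cnt" ]; then
            echo "rejected: min=$min > cnt=$cnt after '$change'"
            return 1
        fi
    done
    echo "ok: min=$min cnt=$cnt"
}

# Shrinking from min=2,cnt=4 down to min=1,cnt=1:
apply_changes 2 4 cnt=1 min=1 || true   # fails: cnt drops below min first
apply_changes 2 4 min=1 cnt=1           # works: min is lowered first
```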
Slide 8-40:
A couple of slides ago, you saw which CPUs are visible to each vPar: the ones assigned to it, and the floating unbound ones.
What the vPar can’t see are the CPUs bound to another vPar. It will never be able to migrate those CPUs, even if those CPUs
are unbound or the vPar is removed; the only way the CPUs will become visible to the vPar is if they’re unbound when the
vPar is booted.
This explains a problem noted in an appendix to the Installing and Managing HP-UX Virtual Partitions manual.
For example, here’s an existing configuration (output trimmed for readability):
vpar1# vparstatus -v
Name:         vpar1
[CPU Details]
Min/Max: 2/3
Slide 8-41:
One problem that users run into is that the monitor prompt suddenly seems to go away when one of the vPars is booted. The
reason is that the interactive monitor runs on the monarch CPU. The monarch CPU is typically the one with the lowest
hardware path.
Whenever the monarch CPU is unassigned and the console client has been “round-robinned” to the monitor, it displays the
monitor prompt (“MON>”). It then waits for user input. However, if the monarch gets assigned to a vPar, it’s doing work on
behalf of the vPar -- it’s no longer free to manage the console, so the “MON>” prompt goes away.
The monitor will start being interactive again when either the monarch is unassigned -- that is, migrated out of a vPar -- or
when the vPar owning the monarch CPU is reset.
In addition, there is one known case where the virtual console will ignore keyboard input, even though console output
continues to work. By way of background: console keyboard input is normally handled by the system monarch CPU, as long
as it’s not assigned to a vPar; the monarch constantly queries the console hardware for input. If the system monarch CPU isn’t
available, then the monarch CPU on the vPar that owns the console hardware takes over, using vconsd (if it’s up) or IODC (if
it’s down).
So here’s the problem scenario: assume the system monarch is assigned to a vPar, so it’s not available to manage the console.
If the vPar that owns the console hardware goes down, and its monarch CPU is then migrated to a running partition, there’s no
longer a CPU in charge of handling console input -- none of the vPars can access the hardware, and the system monarch’s not
free to block waiting for input. Console output continues to work because that’s handled within the monitor, and the CPU that
makes the downcall acts as the monitor and does the output.
To get the console back without rebooting the entire system, restart the vPar that owns the console hardware. Since you can’t
get to the monitor, you’ll have to use vparboot from another running vPar.
Slide 8-42:
Performance Problems
As we discussed at the beginning of this module, the vPars monitor isn’t doing very much once the vPar is up and running. The
kernel is given its subset of the system’s CPUs, memory, and I/O, and left to run pretty much on its own. The monitor doesn’t
steal any processing power from the kernel, nor does it dynamically consume any of the vPar’s memory. So you shouldn’t see
any significant run-time performance hit under vPars.
That said, there are a few areas where vPars has been seen to affect performance. There’s a whole separate class on vPar
performance and configuration, so we’ll just lightly touch on them here.
First, there’s the firmware bottleneck that we mentioned in the boot-time module. Once a vPar kernel is up and running, it only
does one PDC call of note on its own: PDC_CHASSIS (or its PAT equivalent) to update the chassis display (diagnostics does
its own PDC calls). The kernel updates the chassis every five seconds to show the run queue length and number of processors
-- for example, “F21F”. If some other vPar is actively booting or dumping, that chassis update may have to contend for the
firmware spinlock; worst case (using poorly configured SAN switches or booting multiple vPars at once) that contention has
been known to produce mini-hangs of 15-30 seconds.
Second, there’s the issue of I/O interrupts only being handled by bound CPUs. It sounds like a great idea to try to maximize
your floating CPUs -- you can shuffle your processing power around at will. However, by minimizing the number of bound
CPUs, you’re limiting the number of CPUs that can handle I/O interrupts. If you’re running an I/O intensive application mix,
you could swamp the small number of bound CPUs with I/O interrupts, and hurt your I/O performance. The current guideline
is to have at least one bound CPU for each unbound CPU -- that is, the ratio of unbound to bound CPUs should not exceed 1.
Finally, there’s a question of balance. In addition to balancing your floating and bound processors, don’t forget memory and
I/O. For example, if you have a system with 8 processors and 8 GB of memory, don't configure a vPar with 512 MB of
memory and 4 processors. That is, make sure there is balance to the system configuration.
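To make these sizing guidelines concrete, here’s a small sketch in Python. This is not an HP tool; the function name, the warning messages, and the "memory share should be at least half the CPU share" threshold are our own illustration of the guidelines above (unbound-to-bound ratio not exceeding 1, and a roughly balanced configuration).

```python
def check_vpar_balance(bound, unbound, vpar_mem_gb, vpar_cpus,
                       sys_mem_gb, sys_cpus):
    """Return a list of warnings for a proposed vPar configuration."""
    warnings = []
    # Guideline 1: at least one bound CPU per unbound CPU, i.e. the
    # unbound-to-bound ratio should not exceed 1.
    if unbound > bound:
        warnings.append("too few bound CPUs to service I/O interrupts")
    # Guideline 2 (illustrative threshold): the vPar's share of memory
    # should be roughly in line with its share of CPUs.
    cpu_share = vpar_cpus / sys_cpus
    mem_share = vpar_mem_gb / sys_mem_gb
    if mem_share < cpu_share / 2:
        warnings.append("memory allocation is small for the CPU count")
    return warnings

# The example from the text: on an 8-CPU / 8 GB system, a vPar with
# 4 CPUs but only 512 MB of memory is flagged as unbalanced.
print(check_vpar_balance(bound=2, unbound=2, vpar_mem_gb=0.5,
                         vpar_cpus=4, sys_mem_gb=8, sys_cpus=8))
```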
But again, performance isn’t the subject of this class -- that’s a topic for another class.
Slide 8-43:
Questions
Preface
Panics happen, even within -- and sometimes because of -- vPars. In this module, we’ll look at how to collect vPars dumps,
both of a vPars kernel and the monitor, how to force said dumps, and how to run the dump analysis tools, but we’ll start with
some theory.
Slide 9-1:
Slide 9-2:
Module objectives
                • Internals overview
                      –     Outline the main components of vPars
                      –     Describe the changes to the HP-UX kernel
                      –     Name the vPar daemons
                • Dump time problem
                      –     Diagnose dump time problems
                      –     Configure the monitor to collect dump images
                      –     Analyze a monitor dump
                • Changes to HP-UX dumps
                      –     Describe what’s different about a kernel dump from a vPar
                      –     Use crashinfo to analyze a kernel crash dump
                      –     Determine if the dump was likely to be caused by vPars
Notes:
Slide 9-3:
[Figure: the classic HP-UX kernel block diagram -- Device Drivers, GIO, and PSMs in the I/O subsystem, alongside Process Management, Memory Management, and the I/O TLB / I/O PDIR, all layered above the Hardware.]
Here’s the classic HP-UX kernel block diagram, with a bit of added detail in the I/O subsystem. Items that are not of particular
interest in the vPars environment are represented in gray, while those that are directly affected by the introduction of vPars are
shown in green.
You probably know all of the following text already, but in case you don’t, there are four primary subsystems in the kernel.
Process Management
This subsystem is responsible for managing all of the running processes and threads in the kernel, maintaining information
about process and thread resources, and scheduling threads.
Memory Management
This subsystem manages memory in such a way as to allow each process to have its own virtual address space. Traditionally, this
address space is private to a particular process but HP-UX has evolved to the point where much of this address space is shared.
Kernel threads alter the privacy of process data but the traditional view is as stated. Other memory resources are global to all
processes.
All of these memory structures are managed in such a way that allows the sum of process address space to be greater than the
amount of physical memory in the system. This is the theory behind the concept of virtual address space.
File Systems
The file system is responsible for managing data on various types of mass storage. The file system has provisions to manage
different types of file system layouts such as UFS, NFS, and VxFS.
I/O
The I/O system is primarily made up of device drivers which are specifically designed for a particular type of device or
interface. The I/O system defines general principles of how interrupt-driven devices can talk to the system and how to
determine which driver is to be called for a given task. Platform Support Modules (PSMs) provide specific routines for
different hardware (both real and virtual) platforms.
Slide 9-4:
[Figure: the same kernel block diagram with the vPars additions -- the vcn/vcs console drivers and the vpmon pseudo-driver in the kernel, and the vPars PSM inserted between the PSMs and the Hardware.]
This figure shows how the system environment is changed by the introduction of vPars. Notice that some components have
been added to the kernel (the Virtual Console devices and drivers, for example), and that others are actually removed from the
kernel. The new pieces are shown in gold, or lighter gray if you’re looking at a black and white printout.
vPar Monitor
The vPar monitor manages the hardware resources, loads kernels, and emulates global platform resources to create the illusion
that each individual vPar is a complete HP-UX system. At the heart of the monitor is the partition database that tracks what
resources are associated with which vPar. When the vPar monitor is running, the master copy of this database is kept in the
monitor. All changes to the partition database are also reflected to a file on each vPar’s boot disk to ensure that the partition
configuration is preserved across system reboots.
The monitor code is loaded from the file /stand/vpmon on the system boot device in the same way as a normal HP-UX
kernel would be loaded from the file /stand/vmunix.
Once the vPar kernels are up and running, the monitor is mostly idle; it’s invoked by the running vPars when HP-UX makes
calls to firmware (which are intercepted by the monitor), when the configuration of the vPars is changed (which is managed by
the monitor), or when the operating system is shutting down. The vPar that owns the physical console will take over
management of the device, so the monitor prompt is no longer available once the vPars are launched; nonetheless, console I/O
from any vPar still goes through the monitor.
Within the kernel’s I/O subsystem, a Platform Support Module isolates I/O functionality that’s specific to a hardware platform
or “virtual” platform like the vPars monitor. In a vPars-enabled kernel, the Virtual Environment PSM, or ve_psm, moderates
the sharing of I/O TLBs between vPars, redirects the kernel’s firmware requests to the monitor, and encapsulates the
pseudo-driver vpmon that gives users access to vPars functionality. When not running in a vPar environment, the vPar PSM is
mostly dormant and doesn’t affect the normal operation of the system.
Virtual Console
Given the scarcity of PCI slots in most systems, it is unreasonable to assume that each vPar will have its own serial port to use
for a console device. Therefore vPars use a virtual console device that all the vPars share.
The Virtual CoNsole (vcn) driver implements a virtual serial port that HP-UX can use as its console. Unless a hardware
console is explicitly specified in the partition database, vcn is used. All console I/O to the vcn driver is transferred to the
monitor where it is buffered until it can be handled by a Virtual Console Slave (vcs) driver that has established a logical
connection to that vPar. The vcs driver may or may not be in the same vPar as the vcn driver. The monitor provides a special
inter-vPar communication path specifically for console I/O.
Other changes were scattered across machdep, process management, and virtual memory management to support the
following features: CPU migration, I/O TLB sharing, Page Zero emulation, and global purge TLB synchronization.
Slide 9-5:
[Figure: block-level structure of the vPar monitor -- including its Init/UI and I/O subsystems -- and its relation to the HP-UX kernel.]
This figure shows the block-level structure of the vPar monitor and its relation to the HP-UX kernel.
At the top of the software stack are the vPar commands which allow the system administrator to create and configure virtual
partitions. The commands that add, modify, or delete resources were modeled after the partition manager commands for
Superdome. This is intended to reinforce HP’s commitment toward a continuum of partitioning solutions.
The vPar commands are built upon a set of library functions that allow user commands to safely and consistently interact with
vPars. This allows a function to be written once and leveraged into commands on an as needed basis.
The library routines interact with the vPar monitor through an ioctl interface in the kernel. The vPar PSM has a pseudo driver
that exports open, close, and ioctl system calls for the device /dev/vpmon. The ioctl interface in the vPar PSM decodes the
requested operation and makes a monitor downcall if necessary.
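The decode-and-dispatch step in the vPar PSM’s ioctl entry point can be sketched as a table lookup. This is an illustration only, not HP’s actual code; the request-code names, their values, and the handler set are invented for the example.

```python
# Hypothetical ioctl request codes for /dev/vpmon (names and values
# invented for illustration; the real codes are private to the PSM).
VPAR_GET_STATUS, VPAR_MODIFY, VPAR_BOOT = 1, 2, 3

def monitor_downcall(op, arg):
    """Stand-in for the PDC-based downcall into the monitor."""
    return ("downcall", op, arg)

# The ioctl interface decodes the requested operation and decides
# whether a monitor downcall is actually needed.
DISPATCH = {
    VPAR_GET_STATUS: lambda arg: monitor_downcall("status", arg),
    VPAR_MODIFY:     lambda arg: monitor_downcall("modify", arg),
    VPAR_BOOT:       lambda arg: monitor_downcall("boot", arg),
}

def vpmon_ioctl(cmd, arg):
    handler = DISPATCH.get(cmd)
    if handler is None:
        raise ValueError("EINVAL: unknown ioctl command")
    return handler(arg)

print(vpmon_ioctl(VPAR_GET_STATUS, "vpar1"))
```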
vPar-enabled Kernel
The next level in the vPar hierarchy is the vPar-enabled kernel. As mentioned on the previous slide, most of the changes were
put into two new modules called the vPar platform support module (PSM) and virtual console. Some changes were necessary
to allow sharing of resources across vPars.
Emulation
The vPar monitor provides the facilities to partition resources among the vPar kernels as described by the partition database. In
some cases, the vPar monitor has to emulate certain functionality when the task to be performed is at a finer granularity than
the system or architecture can support. In these cases, the vPar monitor provides functionality to assist in management
policies.
Downcalls
A downcall is an operation that a client vPar performs when it needs to communicate with the underlying monitor or with
another vPar via the monitor. The downcall operation allows the vPar OS to provide information to or request services from
the monitor. The vPar monitor services are split into four relatively independent internal subsystems: supervisory services
(sup), console I/O services (cio), boot device services (bio), and vPar configuration database management (dbm). The
downcall is implemented as an hversion-dependent Processor Dependent Code (PDC) call which is intercepted and decoded
by the monitor.
Loader
The vPar monitor contains a boot loader that is equipped to determine where to load a vPar’s kernel based on the memory
specification found in the partition database. The loader determines where to find the kernel from the partition database
(hardware path to the boot device and file path to the kernel) and examines the kernel to determine its link address. If the vPar
owns the memory where the kernel is to be loaded and it’s of sufficient size then the loader reads the kernel image into that
memory. The loader can also relocate kernels to available memory if the address range that the kernel has been linked at is
unavailable.
The loader also patches several kernel symbols with information pertaining to vPars.
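The loader’s placement decision can be sketched as follows. This is a simplified model of what the text describes, under our own assumptions: the function name, the range representation, and the first-fit relocation policy are illustrative, not taken from the monitor’s source.

```python
def choose_load_address(link_addr, kernel_size, owned_ranges):
    """Pick a physical load address for a vPar kernel.

    owned_ranges: list of (base, size) memory ranges owned by the vPar.
    Prefer the kernel's link address; otherwise relocate into the
    first owned range large enough.  Returns (address, relocated).
    """
    # Load at the link address if the vPar owns that memory.
    for base, size in owned_ranges:
        if base <= link_addr and link_addr + kernel_size <= base + size:
            return link_addr, False
    # Otherwise relocate the kernel into available owned memory.
    for base, size in owned_ranges:
        if size >= kernel_size:
            return base, True
    raise MemoryError("no owned range large enough for the kernel")

# A kernel linked at 0x20000 whose link range belongs to another vPar
# gets relocated into this vPar's own memory.
print(choose_load_address(0x20000, 0x400000, [(0x10000000, 0x8000000)]))
```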
Events
The vPar monitor contains an event subsystem that notifies the vPar kernel of an activity that needs attention.
The notification for the event is delivered via an external interrupt for virtual console slave (vcs), virtual console (vcn), CPU
migration, vPar daemon, and platform events or by vectoring to a specific routine in the case of transfer of control (TOC).
Services
The vPar monitor has a set of internal routines that are generically termed monitor services. The services can be lumped into
the following categories: lock structure and locking primitives, macro definitions for character manipulation, assertion
definitions, string manipulation functions, memory copy and comparison functions, memory allocation functions, and print
and debugging functions.
File System
The vPar monitor has a primitive file system that allows it to access preexisting directories and files on UFS, LIF, NFS, and
RAM file systems. This functionality was leveraged from the HP-UX boot loader and is used by the vPar monitor when
loading the HP-UX kernel. The kernel uses the vPar monitor file system, early in the boot process, to read files from the boot
disk before the kernel has initialized its file system. The vPar monitor also uses the file system to read the partition database
off of the initial boot device. The initial boot device is the device that was used to bootstrap the monitor.
Console
The vPar monitor provides a virtual console that buffers characters while the partition is not connected to the real console.
When the partition owns the console, its buffered characters are displayed on the physical console, along with any characters
echoed while typing on the keyboard.
Resources
The vPar monitor has the responsibility of managing resources at a finer granularity than the platform was designed for. The
vPar monitor acts like another layer of firmware to make the sharing of hardware possible even though the vPar never notices
it. This is the essence of what the partition database is for. Resource functions provided by the vPar monitor include: shared
I/O TLB and physical memory management, and CPU migration.
Platform
The functions that deal with the discovery and configuration of platform resources are generically referred to as platform
services.
I/O
The vPar monitor contains an I/O subsystem that allows it to read and write files to a specific list of supported devices. The I/O
subsystem uses drivers built into the platform to perform the actual transfer of data. These drivers are known as I/O dependent
code (IODC) and are provided with the platform, typically as firmware, for core I/O and mass storage devices. The core
devices that are supported are networking, mass storage, console, and keyboard.
Init/UI
The tasks performed during monitor boot are analogous to a subset of the discovery and initialization tasks that happen during
the HP-UX kernel boot. A sequence of functions is called by the monitor to allocate and initialize the various global locks,
construct a physical memory table based on the information in the address map returned by firmware, discover and initialize
the underlying hardware in the system, construct the native module tree, rendezvous the processors in the system, read the
partition database, and create the virtual partitions.
The vPar monitor has a user interface that allows the system administrator to issue vPar commands related to boot and
configuration. The monitor user interface should be used sparingly and it is expected that virtually all commands issued to the
monitor will originate from user space in a vPar.
Slide 9-6:
Before the introduction of vPars, the HP-UX kernel could assume a number of things about the system environment that may
no longer be true. For example, it knew that it would always be loaded at a certain address in memory, and that there would be
no other instances of HP-UX that could affect memory contents.
To minimize changes to the kernel, vPars either emulates or virtualizes much of the system environment:
•            The I/O TLB and PDIRs are managed within the monitor and the vPar PSM.
•            System firmware (PDC and IODC) is emulated, so that the boot-time hardware discovery by the kernel is limited to what
             hardware is assigned to the vPar.
•            Nonvolatile memory and the clock, which are system-wide resources, are kept in the partition database. The kernel
             accesses these through firmware, which is emulated.
•     The console is managed by a set of pseudo-drivers, vcn and vcs, which isolate the high-level console driver from the
      underlying console multiplexing.
•     Shutdown handling, including panics and HPMCs, normally uses PDC to reset all the processors. This is trapped by the
      PDC emulation in the monitor, which limits the scope of the reset.
This level of emulation kept the changes to the kernel to a minimum, and encapsulated most of the real changes in the PSM
and console drivers.
Slide 9-7:
             Relocation
                    • Kernel linked to be placed anywhere in memory
                    • Text still needs to be below 2GB mark
             Page Zero
                    • Addressing changed to base+offset to allow relocation
                    • Fields patched so that vPar monitor can intercept calls to
                      PDC, interrupts
             Kernel Data
                    • Some data patched by monitor; some promoted to
                      global, to allow access across modules
             Drivers
                    • vPars PSM and vcn/vcs drivers to enable console
                      multiplexing, kernel downcalls
Despite all the efforts to minimize vPars’ modifications, some things in the kernel did have to change. The changes center
around four areas:
•            Relocation: Since multiple images of the HP-UX kernel have to reside in memory, having a fixed load address for the
             kernel (0x20000) doesn’t work any more. The vPars kernel is compiled with special linker options to be relocatable. The
             memory location of the kernel isn’t known until the kernel is actually loaded.
This doesn’t affect most of the kernel, but some elements (assembly language routines, for example) are not relocatable:
they assume that they can do 32-bit arithmetic on kernel addresses. Because of this, the kernel has to have its text
segment loaded below the 2GB mark in memory. Note that with very large kernels (those that have been tuned for certain
application loads, for example) this may limit the number of vPars that can be run on a system.
•     Page Zero: Computer systems that conform to the PA-RISC I/O Architecture have an area of low memory called Page
      Zero. Page Zero is a collection of fields and structures for communicating information between the operating system and
      firmware; it starts at physical address zero and extends upward. This memory area is initialized by firmware with platform
      specific information that the kernel needs in order to boot, reset, or recover from error conditions. There are also areas that
      describe I/O devices such as boot device, console, and keyboard. Some of the Page Zero locations are written by the
      kernel to tell the firmware where to vector when a particular condition occurs (one such condition is Transfer of Control).
      When vPars came out, each vPar -- that is, each instance of HP-UX -- needed its own copy of Page Zero. To accomplish
      this, HP-UX had to stop assuming that Page Zero resided at physical location zero. This was done by changing all
      hard-coded Page Zero addresses to macros that use a base address and offset. The base address is kept in the new kernel
      global variable page_zero, which is patched by the vPar monitor when loading the kernel. Hence, each vPar has its own
      “virtualized” Page Zero.
      The monitor fills in the virtualized Page Zero with values expected by each vPar -- PDC entry point, memory
      configuration, that sort of thing. The monitor manages the “real” Page Zero at address zero. That Page Zero is initialized
      by firmware and updated by the monitor. That’s because the monitor must handle chores like TOC handling where the
      source of the TOC may be initiated by the system administrator via a ctrl-B TC.
•     Kernel Data: In addition to the global page_zero, the monitor patches several other globals because of relocation
      (firstaddr and imm_start_pfn, to name two). Two other patched globals are cpu_arch_is_2_0 and
      vemon_present; the latter is patched to 1, so you can check it to see if you’re booted as a vPar.
•     Drivers: Obviously, there’s new code for vPars in the kernel. The designers tried to keep the modifications to a minimum;
      for those changes that were more drastic, the modifications and new functionality were either packaged into the vPar PSM
      or the virtual console drivers.
As mentioned earlier, other minor changes were scattered across machdep, process management, and virtual memory
management to support CPU migration, I/O TLB sharing, and global purge TLB synchronization.
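The base+offset Page Zero scheme described above can be modeled in a few lines. This is a sketch of the idea only: the field offset used here is an example value, not the architected Page Zero layout, and `pz_addr` merely mimics what the kernel’s Page Zero macros compute.

```python
# Example Page Zero field offset (illustrative value only; the real
# layout is defined by the PA-RISC I/O architecture).
MEM_PDC = 0x388   # hypothetical offset of the PDC entry point field

def pz_addr(page_zero_base, offset):
    """What the kernel's Page Zero macros now compute: base + offset,
    instead of a hard-coded low-memory address."""
    return page_zero_base + offset

# Standalone HP-UX: page_zero is 0, so the macros still resolve to the
# traditional low-memory addresses.
assert pz_addr(0x0, MEM_PDC) == 0x388
# Under vPars: the monitor patches the kernel global page_zero to point
# at this vPar's virtualized copy, and the same macros follow it there.
print(hex(pz_addr(0x4000, MEM_PDC)))
```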
Slide 9-8:
[Figure: PDC emulation flow -- a kernel PDC call enters the monitor, which identifies the calling CPU and checks whether the call is a downcall; downcalls are routed to the monitor services: Supervisory, Console I/O, Boot I/O, and vPar Database Mgmt.]
PDC Emulation
PDC provides vital information to HP-UX, but it also exposes more system information to the operating system than is safe in
a Virtual Partition environment. The vPar monitor will intercept all PDC calls made by HP-UX kernels, and emulate each call
as appropriate for the vPar making the PDC call. This is referred to as PDC emulation.
The monitor patches the PDC entry point on the vPar’s virtualized Page Zero, so when the vPar does a PDC call, the monitor
is invoked instead. The monitor can choose how to deal with each type of PDC call:
• Emulate: Perform the actions needed to emulate this PDC call, without actually invoking PDC.
•     Pass-Thru: Perform minimal pre-processing, and then perform the actual PDC call. An example of what minimal
      pre-processing could be performed for this case would be the verification of the r_addr buffer, ensuring that it is owned by
      the calling vPar.
•     Filter: Perform the actual PDC call, but return only information about the resources owned by the calling vPar.
Some PDC calls may not be emulated at all -- for example, calls with illegal options. These will return an error.
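The three handling strategies can be sketched as a per-call dispatch table. The classification of specific PDC calls below is our own guess for illustration (the real table lives inside the monitor), and the handler signature is invented.

```python
# Per-call emulation strategies (the assignment of calls to strategies
# here is illustrative, not taken from the monitor's actual table).
EMULATE, PASS_THRU, FILTER = "emulate", "pass-thru", "filter"

STRATEGY = {
    "PDC_CHASSIS": PASS_THRU,   # goes through to real firmware
    "PDC_PROC":    FILTER,      # report only this vPar's CPUs
    "PDC_TOD":     EMULATE,     # answered by the monitor itself
}

def handle_pdc_call(call, vpar, real_pdc, owned):
    strategy = STRATEGY.get(call)
    if strategy is None:
        return "error"                      # e.g. illegal options
    if strategy == EMULATE:
        return ("emulated", call, vpar)     # never touches firmware
    result = real_pdc(call)                 # the actual PDC call
    if strategy == FILTER:
        # Return only resources owned by the calling vPar.
        result = [r for r in result if r in owned]
    return result

fake_pdc = lambda call: ["cpu0", "cpu1", "cpu2", "cpu3"]
print(handle_pdc_call("PDC_PROC", "vpar2", fake_pdc, {"cpu2", "cpu3"}))
```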
Kernel Downcalls
A downcall is the mechanism that a vPar kernel uses to communicate with the underlying monitor, or with another vPar kernel
through the monitor. The downcall operation allows the vPar kernel to provide information to or request services from the
monitor.
From the vPar kernel’s point of view, downcalls are implemented as an Hversion-dependent PDC call, called
PDC_VIRTUAL_ENV. Specifics about the downcall being made are encoded into one or more of the PDC call’s arguments.
The downcalls are split into the same four categories introduced earlier: supervisory services (sup), console I/O services (cio), boot device services (bio), and vPar configuration database management (dbm).
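A sketch of how a downcall might be packed into a PDC_VIRTUAL_ENV argument and decoded by the monitor, using the four subsystems named earlier (sup, cio, bio, dbm). The bit layout is invented for illustration; the real encoding is private to the monitor and the vPar PSM.

```python
# The four downcall subsystems from the text.
SUBSYSTEMS = {"sup": 0, "cio": 1, "bio": 2, "dbm": 3}

def encode_downcall(subsystem, opcode):
    """Pack a downcall into a single PDC argument word
    (hypothetical layout: subsystem in the high bits)."""
    return (SUBSYSTEMS[subsystem] << 16) | opcode

def decode_downcall(arg):
    """What the monitor does on the other side of the PDC call."""
    names = {v: k for k, v in SUBSYSTEMS.items()}
    return names[arg >> 16], arg & 0xFFFF

arg = encode_downcall("cio", 7)     # e.g. a console I/O request
print(decode_downcall(arg))
```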
Slide 9-9:
Virtual Console
[Figure: three vPars (vpar1, vpar2, vpar3) share the console through the vPar Monitor; vpar1 owns the Physical Console, with vconsd and the tty driver connecting the vcs driver to the console hardware.]
The vPars Virtual Console provides a mechanism for console multiplexing (sharing) among vPars. Key concepts to remember:
The virtual console module multiplexes all vPars using a virtual console to one actual hardware console; thus, the virtual
console is shared by all vPars. The vPar which is assigned the actual console hardware is responsible for transmitting
information to the actual console, and for receiving information from the console keyboard. Only one vPar may logically
connect to the actual console at any given time. A special character (ctrl-A) typed at the console changes this logical
connection from one vPar to the next, in a round-robin fashion.
The Virtual CoNsole (vcn) driver implements a virtual serial port that HP-UX can use as its console. Unless a hardware
console is specifically called out in the partition database, vcn is used. All console I/O to the vcn driver is transferred to the
monitor, where it is buffered until it can be handled by a Virtual Console Slave (vcs) driver that has established a logical
connection to that vPar. The vcs driver may or may not be in the same vPar as the vcn driver. The monitor provides a special
inter-vPar communication path specifically for console I/O.
The vconsd connects the vcs driver to the tty driver which is attached to the actual console hardware. This kernel daemon is
necessary because there are times when the console hardware isn’t ready to accept new data, and the calling process has to
sleep; drivers aren’t supposed to sleep on the interrupt stack, so vconsd serves as a “sleepable” interface between vcs and the
tty driver. The daemon waits for data to become available in either direction (either to the console or from the keyboard), and
it passes the data on to the appropriate recipient.
If the vPar that owns the hardware console is not running, the vPar monitor emulates the vcs driver and manages the console
device, using console IODC.
In this diagram, the physical console device is owned by vpar1. When any of the vPars have console output, they send it to
their vcn driver, which sends it to the monitor, where it is buffered. As shown in the slide, the monitor in turn sends the output
to vpar1’s vcs driver. Under normal circumstances, the monitor keeps track of which kernel instance is currently “connected”
to the console, and will copy incoming/outgoing data from/to its per-instance buffer from/to the monitor’s buffer, which will
then be copied from/to the physical console.
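The per-vPar buffering and round-robin connection switching described above can be modeled in a short sketch. The class name, buffer policy, and method names are our own assumptions; this only illustrates the multiplexing idea, not the monitor’s implementation.

```python
from collections import deque

class ConsoleMux:
    """Sketch of the monitor's console multiplexing: each vPar's vcn
    output is buffered per-vPar; only the currently connected vPar's
    buffer is drained to the physical console, and ctrl-A rotates the
    connection round-robin."""
    def __init__(self, vpars):
        self.buffers = {v: deque() for v in vpars}
        self.order = list(vpars)
        self.connected = self.order[0]

    def vcn_write(self, vpar, text):
        self.buffers[vpar].append(text)      # buffered in the monitor

    def ctrl_a(self):
        # Move the logical connection to the next vPar.
        i = self.order.index(self.connected)
        self.connected = self.order[(i + 1) % len(self.order)]

    def drain_to_console(self):
        # Hand the connected vPar's buffered output to the vcs driver.
        buf = self.buffers[self.connected]
        out = "".join(buf)
        buf.clear()
        return out

mux = ConsoleMux(["vpar1", "vpar2", "vpar3"])
mux.vcn_write("vpar2", "hello from vpar2\n")
mux.ctrl_a()                                 # now connected to vpar2
print(mux.drain_to_console())
```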
Slide 9-10:
• /dev/vcn
    – Virtual console master device (vcn driver)
Daemons
• /sbin/vphbd
    – Sends heartbeat to monitor every 360 seconds
• /sbin/vpard
    – Synchronizes in-memory database with /stand/vpdb
• vconsd
    – Transfers data between vcs driver and local tty
vPars introduces several new drivers with their device files, and three new daemons.
/dev/vpmon
This device provides access to the memory-resident instance of the vPars monitor. It’s used by the vPars commands.
In addition to the portion of the monitor which manages the virtual console, there are three major components in the virtual
console module. They are the Virtual CoNsole pseudo-device driver (vcn), the Virtual Console Slave pseudo-device driver
(vcs), and the Virtual Console Kernel Daemon (vconsd). These drivers and daemon were covered on the previous slide.
The device file for the virtual console master (/dev/vcn) effectively points to /dev/console.
The device file for the virtual console slave (/dev/vcs) is a feed into the physical console, but only on the vPar that owns the
physical console hardware. That means that if you have sufficient privilege to open /dev/vcs on the vPar that owns the I/O path
with the console hardware, writing to /dev/vcs is the same as typing on the virtual console, no matter which vPar owns it.
/sbin/vphbd
The vPar heartbeat daemon is a user-space process that periodically performs a downcall to the monitor. It “pings” the monitor
every 360 seconds by default, so the monitor can tell if that particular vPar is up and running. If the vPar misses ten heartbeats,
the monitor changes the vPar’s state to Hung.
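The monitor-side health check follows directly from those two numbers (a 360-second default interval and a ten-heartbeat limit). The function below is a sketch of that logic only; the name and the integer-interval arithmetic are our own.

```python
def vpar_state(last_heartbeat, now, interval=360, missed_limit=10):
    """Monitor-side view of a vPar's health: mark the vPar Hung once
    `missed_limit` heartbeat intervals pass without a ping from vphbd
    (illustrative sketch, not the monitor's actual code)."""
    missed = (now - last_heartbeat) // interval
    return "Hung" if missed >= missed_limit else "Up"

print(vpar_state(last_heartbeat=0, now=3599))   # 9 intervals missed
print(vpar_state(last_heartbeat=0, now=3600))   # 10 intervals missed
```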
/sbin/vpard
A copy of the virtual partition database is maintained on the boot disk of every partition. The purpose of vpard is to
synchronize the contents of the in-memory copy of the vPar monitor’s database with the local on-disk copy. It takes a
command-line argument (-i) that determines the interval between updates, in seconds. The default is five seconds.
When a vPar makes a change to the live database, the change is first made to the in-memory copy of the database and is then
synchronized to the disk copy of the vPar initiating the change. The vpards running on the other vPars see the changed
database at their next poll interval, and propagate that change to their local disk.
Note the obvious side effect: if a vPar is not up, it’s not running a vpard -- so it’s not updating its local copy of the database in
/stand/vpdb. Thus, if you’re making configuration changes, a “best practice” is to make sure that as many vPars as possible
are up. Another best practice is to perform configuration changes on the vPar that owns the monitor’s boot path (since its local
database will be the one used by the monitor).
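The synchronization flow above, including the side effect of a down vPar holding a stale copy, can be modeled with a small sketch. The generation counter is our own device for representing "has the master copy changed"; class and method names are invented.

```python
class VparDatabase:
    """Sketch of vpard synchronization: the monitor holds the master
    copy in memory; each *running* vPar's vpard polls it and updates
    /stand/vpdb on its own boot disk (modeled here as a generation
    number per vPar)."""
    def __init__(self, vpars):
        self.master_gen = 0
        self.disk_copies = {v: 0 for v in vpars}   # per-vPar /stand/vpdb
        self.running = set(vpars)

    def modify(self, initiator):
        self.master_gen += 1                       # in-memory copy first
        self.disk_copies[initiator] = self.master_gen  # then its own disk

    def vpard_poll(self):
        # At the next poll interval, every running vPar's vpard pulls
        # the master copy to its local disk; down vPars stay stale.
        for v in self.running:
            self.disk_copies[v] = self.master_gen

db = VparDatabase(["vpar1", "vpar2", "vpar3"])
db.running.remove("vpar3")                         # vpar3 is down
db.modify("vpar1")
db.vpard_poll()
print(db.disk_copies)                              # vpar3 is now stale
```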
Slide 9-11:
Boot-time Changes
In a standalone boot -- that is, without the vPars monitor -- firmware selftests the processors and memory, finds the boot disk,
and loads ISL, which loads the hpux boot loader, which loads the standalone kernel. In the vPars environment, the hpux boot
loader loads the monitor /stand/vpmon. The monitor goes through most of the same boot-time initialization that the HP-UX
kernel does: allocating and initializing various global locks, constructing a physical memory table based on the address map
returned by firmware, creating a native module tree based on the hardware it discovers, initializing that hardware, and
rendezvousing all the processors in the system. The monitor then reads the partition database from the disk (that is, the disk
from which the system booted) and creates the virtual partitions, assigning memory, I/O hardware, and processors according to
the database and available hardware.
When requested, the monitor loads and launches the vPar kernels. You’ll notice that there’s no boot-time selftest, ISL, or hpux
boot loader when booting a kernel under vPars -- just the monitor, which does almost exactly what the hpux boot loader does.
HP-UX kernels under vPars actually boot faster, because the lengthy hardware selftests don’t have to happen.
The monitor loads the kernel into memory, relocating it if necessary (that is, if the kernel’s load point and size aren’t within the
assigned address range for the vPar). The monitor sets up the vPar’s virtualized Page Zero, initializes it, and patches the
kernel’s image in memory to set various globals. Finally, it boots the vPar.
As far as the kernel is concerned, not much changes in the boot process. The main difference is its environment: rather than
being launched by the hpux boot loader, the kernel has been launched by the monitor, which supplies all the same facilities
(like early access to the /stand file system) and information that the hpux boot loader did. Kernel initialization is the same:
the kernel does all its own hardware discovery using firmware -- but that firmware is emulated by the monitor, so the kernel
only finds the hardware that the monitor wants it to find.
There are a couple of differences in the device discovery and initialization:
•     The kernel can see the CPUs that are bound to it, and the CPUs that are floating, but only the bound ones appear to be
      enabled.
•     When the vcs driver is attached, it checks the virtualized Page Zero to see if its console path is set. If so, the physical
      console is owned by this vPar, so it fills out a static array called vcs_boot_dev[] with information about the console.
      When vcn is attached, it sets cons_mux_dev to the vcn driver. When vconsd starts up, it checks if vcs_boot_dev[]
      is filled out -- if not, it exits; otherwise, it grabs the tty information from vcs_boot_dev[] and begins its daemonic
      work.
•     The Ike PSM calls the vPars PSM to determine which portion of any shared I/O PDIRs this vPar can use; the other entries
      are marked busy within this vPar (since they’re owned by other vPars).
Late in the kernel boot process, /etc/rc runs, and it runs all the rc scripts. vPars has three rc scripts:
•     /sbin/init.d/vparinit: This initializes the vPars environment. It saves and analyzes any monitor crash dump,
      makes the kernel relocatable if it isn't already, rebuilds the vPars device files, tries to make sure there's enough space for
      a monitor dump, and updates the kernel's recorded load address if the monitor had to relocate the kernel.
•     /sbin/init.d/vparhb: This starts the heartbeat daemon vphbd. After opening /dev/vpmon, it sends a “hello”
      message to the monitor, and begins sending heartbeat messages. When the monitor gets a “hello” message -- which is just
      a downcall -- it migrates floating CPUs to that vPar to meet its total count.
•     /sbin/init.d/vpard: This starts the database synchronization daemon vpard, which opens the monitor device file
      /dev/vpmon, then enters a loop, checking if the on-disk copy of the database needs to be updated and sleeping.
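The vpard sync loop described above can be sketched in shell. This is a toy simulation, not vpard itself: the file names and "database" contents are invented, and the real daemon talks to the monitor through /dev/vpmon rather than comparing ordinary files.

```shell
memdb=$(mktemp); diskdb=$(mktemp)
printf 'vpar1 cpu=2\n' > "$memdb"      # stand-in for the monitor's current view
printf 'vpar1 cpu=1\n' > "$diskdb"     # stand-in for a stale on-disk copy

# One pass of the loop: rewrite the disk copy only when it differs.
sync_once() {
    if ! cmp -s "$memdb" "$diskdb"; then
        cp "$memdb" "$diskdb"          # the real vpard writes via /dev/vpmon
        echo updated
    else
        echo clean
    fi
}

first=$(sync_once)                     # copies differ on the first pass
second=$(sync_once)                    # already in sync on the second
echo "$first $second"
rm -f "$memdb" "$diskdb"
```

The real daemon sleeps between passes (see the five-second sync interval discussed later in this module); the sketch just runs two passes back to back.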
Module objectives
Slide 9-12: Halt-time Changes
29/08/2003 HP restricted 12
Like the boot path, the shutdown path for the kernel hasn’t changed very much because of vPars. Again, most of the change is
in the kernel’s environment. There aren’t any new system calls or commands to shut down the system -- you still use the
standard shutdown and reboot commands. There is some special kernel code for vPars in the shutdown and reboot path, but
it’s mainly to ensure that CPU resets are done via PDC rather than directly sending broadcast resets. The algorithms in the
kernel are unchanged.
The reason for using PDC calls to reset the processors is that the PDC call is emulated by the monitor. The monitor recognizes
the call as a shutdown request, idles only the CPUs belonging to the vPar in question, and frees up that vPar's I/O resources,
CPUs, and memory. It also takes over the virtual console if that vPar owned the physical console hardware, and makes sure
that any locked resource (specifically the data TLB purge lock) is released.
As far as dumps are concerned, the kernel doesn’t see anything different. It still uses IODC to copy its memory image to the
dump device. However, since IODC is loaded via PDC calls, and since the monitor emulates PDC, the IODC that the dumping
kernel uses is supplied by the monitor, and is just a stub to the monitor’s I/O routines. Those routines use locks to make sure
that only one vPar (actually, only one CPU in the system) uses IODC at a time, and then use the original IODC to perform the
actual writes of memory to the dump device.
We do have a couple of new items, thanks to vPars. First, there are now monitor crash dumps in addition to kernel crash
dumps. The dump mechanism is similar -- IODC -- but where the dump ends up and how it’s analyzed are a little different.
We’ll cover this in more detail in a later module.
The second change is how you reset a kernel. If you do a soft reset (akin to typing ctrl-B TC at the GSP prompt), that
branches to a TOC handler defined in Page Zero. Since the monitor owns the true Page Zero, this branches into the monitor,
which TOCs all the kernels and itself.
If you want to reset a single vPar, either “hard” or “soft,” you use the vparreset command, which does a downcall to the
monitor. For a soft reset, the monitor branches to the vPar’s virtualized Page Zero TOC handler.
Slide 9-13:
The vPar software attempts to isolate each vPar, making each partition wholly separate from the others, and unaffected by
them. But that isolation does have limits.
Hardware Limitations
Remember that the partitions are virtual, by definition; the partitions are managed by software, so hardware problems aren’t
contained like they are on an nPar. Malfunctioning hardware can bring down the monitor, and hence the entire system -- for
example, a processor HPMC, detected by either a vPar or by the monitor, will crash all the vPars and the monitor.
In addition, some of the hardware in the system is shared, so the vPars are not totally separate; the sharing of the I/O PDIR is
one example.
This sharing of system hardware resources can lead to bottlenecks, if the resource isn’t designed to be shared. Two examples
are the system firmware, which expects only one CPU to access it at a time, and the Purge Data TLB operation on some
systems, which can only be executed on one CPU at a given time. The monitor has to arbitrate between the vPars -- typically
with spinlocks -- so one vPar may have to wait while another uses that resource.
Finally, on the hardware side, vPars doesn’t add any new hardware fault tolerance over a standalone HP-UX kernel.
Software Limitations
Slide 9-14:
                                                           No!
                     • Memory image written by IODC to raw disk
                           –   Typically primary swap, same as always
                           –   Monitor adds layer of IODC abstraction
                     • Dump space configured same as standalone
                     • savecrash copies disk image to /var/adm/crash
                           –   Happens as part of rc processing at boot
                           –   Should work even if rebooted standalone
With the introduction of vPars, we now have two different kinds of dumps: kernel crash dumps and monitor crash dumps.
The mechanism of setting up, generating, and preserving a kernel crash dump -- that is, a crash dump produced by one of the
virtual partitions crashing -- is pretty much the same as a standalone system.
When the kernel crashes either standalone or under vPars, it saves a copy of the system memory to a dump device, based on
the dump configuration you’ve set up via the crashconf command. By default, the dump device is primary swap. The kernel
uses IODC to write the data -- the only difference under vPars is that the monitor inserts itself between the kernel and IODC to
force single-threading of firmware operations across all the vPars.
The next time the kernel boots, the savecrash command saves the memory image on disk into the file system, typically
under /var/adm/crash. This is no different in a vPars environment. In fact, savecrash should work exactly the same if
the kernel is booted standalone.
This last feature is helpful if you have a vPar that crashes repeatedly after dumps are configured, but before the dump can be
saved; if the crash only happens under vPars, you can boot the kernel standalone, and savecrash will save the dump.
In summary, the mechanism of configuring, generating, and saving kernel crash dumps doesn’t change under vPars -- the only
difference is that the monitor arbitrates IODC usage between the vPars.
Slide 9-15:
The second difference is how the dump space is configured. As you know, the monitor understands how to traverse UFS file
systems -- it has to read in /stand/vpdb and /stand/vmunix for all of the vPars. What the monitor doesn’t understand is
how to create files -- how to allocate disk space and inodes. Thus, you have to preallocate the space for the dump file. (This
conveniently sidesteps the issue of /stand/ not having enough space for the monitor dump when it happens.)
The third difference is how the dump image gets copied to its final destination. Instead of /var/adm/crash, the monitor
crash dump is saved to /var/adm/crash/vpar. Like kernel crash dump processing, this happens at boot time; there isn’t
any dependency on the monitor to save the monitor crash dump, so the save works even if the system’s booted standalone.
Finally, dump analysis of a monitor dump is different.
Let’s look at why the monitor crashes, how you configure monitor crash dumps, how they are saved on reboot, and how you
analyze them.
Slide 9-16:
               • Monitor Panic
                     –     Monitor detects uncorrectable internal error
               • Unhandled Exception
                     –     Monitor gets fault, trap, or interrupt in real mode
               • Unexpected Transfer of Control (“ctrl-B TC”)
                     –     User forces a crash dump to be taken
               • High Priority Machine Check (HPMC)
                     –     Hardware detects internal malfunction
                     –     Can be detected by vPar or by the monitor
The monitor may crash and save a memory image for any of four reasons. Quoting from the vPars Internals class, they are:
Monitor Panic: The vPar monitor calls the internal function panic() when it discovers a fatal error condition that cannot be
contained. This is the typical failure mode when the vPar monitor is the first entity to detect the fatal problem.
Unhandled Exception: An unhandled exception occurs when a fault, trap, or interrupt occurs on a processor while the
monitor is executing code. Since the monitor runs in real-mode with interrupts turned off, there should be no interruptions. If
there are, state is collected to diagnose the problem and panic() is called. For example, if memory corruption causes the
monitor to dereference an unaligned pointer, an unaligned data reference trap will occur. The monitor will handle this trap by
logging the interesting fault information and calling panic().
Unexpected Transfer of Control: The monitor generates TOCs during normal operation to reset processors either as part of a
vPar reset or to migrate a CPU from one vPar to another. If the monitor is the initiator of the TOC, this is known as an expected
TOC. If the TOC is generated by an entity other than the vPar monitor, such as the system administrator via the ctrl-B TC
keyboard command, then the monitor has no advance warning that a TOC is going to occur. This is known as an unexpected
TOC and is assumed to be a request to collect state and shut down the system. Unexpected TOCs are often used to diagnose a
system that appears to be hung.
High Priority Machine Check: An HPMC occurs when a processor detects a fatal error condition in hardware. The cause of
the fatal error can be either software or hardware, but is detected by hardware first. Since it is possible that an HPMC was
caused by a software error, a crash dump is often required to diagnose the root of the problem.
Slide 9-17:
When it crashes, the monitor writes its memory image to /stand/vpmon.dmp. Because the monitor doesn’t know how to
create new files, vpmon.dmp has to exist before the monitor crashes; in other words, the disk space for the file has to be
preallocated.
To preallocate the file, use the vpardump command with the -q option, and either the -i or -f option. Both -i and -f create
and initialize /stand/vpmon.dmp; however, the -f option allocates space for a full dump. A full dump includes additional
information like the I/O Page Directory (IOPDIR), so the dump file will be somewhat larger. The -q option skips trying to
analyze any dump that may already be saved in /stand/vpmon.dmp.
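The preallocation that vpardump performs amounts to reserving a fixed-size file before any crash can occur, since the crashing monitor can only overwrite existing blocks. Here's a toy illustration using dd on a scratch file; the size and path are arbitrary stand-ins for the real /stand/vpmon.dmp, whose size vpardump computes itself.

```shell
dmp=$(mktemp)
# Reserve 64 KB of disk space up front by writing zeroes into the file.
dd if=/dev/zero of="$dmp" bs=1024 count=64 2>/dev/null
size=$(wc -c < "$dmp")
echo "reserved $size bytes"
rm -f "$dmp"
```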
You shouldn’t normally have to use vpardump to preallocate the dump file. There’s a startup script that’s run from rc at boot
time, called /sbin/init.d/vparinit, which does it for you. That script defaults to a normal -- that is, not full -- dump, but
you can change that behavior using vparinit’s configuration file in /etc/rc.config.d/vparinit; reading that file is
an exercise for the reader (it’s very straightforward).
The only time you should have to create the dump file by hand is if someone’s removed it. You can run the vpardump
command as shown, or you can run the vparinit script yourself, as discussed on the next slide; the script does more than just
create the dump file, but it won’t harm the vPar.
Slide 9-18:
So the monitor crashes, saving a dump into /stand/vpmon.dmp, on the disk from which the monitor was booted.
Nothing else happens, dump-wise, until an HP-UX kernel boots. That is, the monitor doesn’t do anything to preserve this
particular dump, and will gladly overwrite it if no one saves it -- much like a kernel will overwrite primary swap if you forget
to run savecrash.
For a monitor crash dump, there’s no savecrash program -- it’s all done by the vPars rc script, /sbin/init.d/vparinit.
Here’s what vparinit does to preserve the monitor crash dump; this isn’t the exact order, but what happens conceptually:
         1. It checks to see if /stand/vpmon and /stand/vpmon.dmp exist. If not, there’s obviously no dump to be preserved.
         2. Otherwise, it checks if /stand/vpmon.dmp has dump data in it, or if the file’s been cleared. If there’s no dump data,
            there’s nothing to preserve.
Slide 9-19:
              Output contains:
               • Processor stack trace, if executing in the monitor
               • Spinlocks held within the monitor
               • Monitor event log
               • Processor tombstones (PIM)
Now that you have a monitor crash dump, how do you analyze it? Since it’s not a kernel crash dump, you can’t use the same
tools. Instead, use vpardump.
With no options, vpardump performs a monitor dump analysis on the dump in /stand/vpmon.dmp, using the monitor
/stand/vpmon. This is exactly how vparinit generates the summary in /var/adm/crash/vpar/summary.count.
If you want to analyze a different monitor crash dump, you can give the names of the monitor executable and the dump file
explicitly. vpardump is reasonably smart about what you want done. For example, if you wanted to analyze the monitor dump
in /var/adm/crash/vpar/vpmon.dmp.4, using the monitor executable preserved in
/var/adm/crash/vpar/vpmon.4, any of these would work:
# vpardump /var/adm/crash/vpar/vpmon.4 /var/adm/crash/vpar/vpmon.dmp.4
# vpardump /var/adm/crash/vpar/vpmon.4
The analysis output contains:
•     The monitor boot command, dump file path, and monitor version string
•     The time and reason for the crash
•     A table of contents for the dump -- where in the dump file to find the monitor’s memory image, its memory segment table,
      and the IOPDIRs (only for a full dump)
•     The name and status of all the partitions
•     A list of all the CPUs in the system, showing:
Processor Information
-----------------------------------------------------------------------------
Processor 0: Entity ID=0, bound to vpar1, running in monitor
        VPAR Monitor Locks Held:
            I/O and Filesystem Lock (acquired in unknown_function, tag=0x1000000058098)
        Per-Processor Data Pointer (CR24) = 0x123a68
        Stack Pointer (GR30) = 0xae920
        Instruction Pointer: io_owned_by_vp+0x1e0
        Stack:
              0x00083948 (io_owned_by_vp+0x1e0)
              0x00029c30 (iodc_verify_ownership+0x60)
              0x0002534c (dev_read+0x6c)
              0x0005844c (consin+0x3b4)
              0x00041e3c (interact+0xf4)
              0x00041d10 (sched_interact+0x20)
              0x0007915c (launch_processor+0x4c)
Processor Tombstones
-----------------------------------------------------------------------------
PROCESSOR 0 (Entity ID 0):
General Registers 0 - 31
00-03 0000000000000000 00000000049ac880 0000000000000000 0000000000000000
04-07 00000000007c6790 00000000007c0000 00000000007c09b0 0000000004a6ffe8
08-11 0000000004a99118 00000000048d0880 00000000049a9080 00000000049ab080
12-15 00000000049a9080 00000000049ab080 00000000049ab080 0000000004922840
16-19 0000000000000000 0000000000000016 000000000000002e 0000000000000002
20-23 000000000000019e 0000000000000000 0000000000000000 0000000000000002
24-27 0000000000000003 00000000007c0000 0000000000000000 00000000049b2080
28-31 0000000000000001 0000000000000000 00000000102da310 0000000000000000
Control Registers 0 - 31
00-03 0000000078596328           0000000000000000   0000000000000000    0000000000000000
04-07 0000000000000000           0000000000000000   0000000000000000    0000000000000000
08-11 000000000000e75d           000000000000a244   00000000000000c0    0000000000000032
12-15 0000000000000000           0000000000000000   000000000400a000    fffffff0ffffffff
16-19 000056ea144255bc           0000000000000000   00000000043c6ea4    000000008abf4050
20-23 0000000010340001           00000000f03c11d8   000000ff0804941f    8000000000000000
24-27 00000000007c0000           0000000000000000   0000000000000000    00000000400f2210
28-31 0000000000000000           000056ea13e26843   0000000004143c68    00000000102da310
Space Registers 0 - 7
00-03 0000000001a1f800           000000000613ec00   000000000298ec00    0000000000000000
04-07 0000000000000000           00000000ffffffff   0000000003268800    0000000000000000
Slide 9-20:
              To reset a vPar:
                # vparreset -p vpar_name [-h|-t] [-q] [-f]
               -h: hard reset (no dump)
                     –     Emulates a hard reset from GSP
               -t: soft reset (dump created)
                     –     Emulates a Transfer of Control
               -q: skips PIM output for vPar’s processors
               -f: skips request for user confirmation
On a hung standalone system, if you want to force a kernel dump, you either type ctrl-B TC or use some machine-dependent
mechanism to force a Transfer of Control. Under vPars, you don’t want to do that, because it affects the entire system, and not
just a vPar. Forcing a Transfer of Control through the GSP or hardware brings down the monitor and all vPars, as noted on the
next page.
If you want to force a single vPar to dump, use the vparreset command with the -t option. This will emulate a Transfer of
Control to the named vPar.
By default, vparreset will print out the contents of each CPU’s registers as it’s TOCed. If you want to avoid all that output,
use the -q option.
Also by default, vparreset prompts you to confirm that you really, really do want to TOC that vPar. If you really, really do
want to TOC that vPar, but you don’t want the confirmation dialog -- for example, if this is part of a script -- use the -f option
and vparreset will skip the confirmation.
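As a toy sketch, the -q and -f behavior described above boils down to flag handling like this (the function name and output format are invented; the real command then makes the downcall to the monitor):

```shell
# confirm=yes means "prompt the user"; pim=yes means "print per-CPU registers".
parse_flags() {
    confirm=yes; pim=yes
    for arg in "$@"; do
        case $arg in
            -f) confirm=no ;;   # skip the confirmation prompt
            -q) pim=no ;;       # skip the PIM/register output per CPU
        esac
    done
    echo "confirm=$confirm pim=$pim"
}

result=$(parse_flags -t -q -f)
echo "$result"
```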
In case you haven’t noticed, you can’t TOC a vPar from the monitor. If you only have one vPar running, it can TOC itself via
vparreset. However, if there’s only one vPar, and it’s hung, and you want to TOC it, you’ll either have to bring up another
vPar, or TOC the entire system; the next page shows how.
Slide 9-21:
Here’s the case where you do want to type ctrl-B TC: when you want to force the monitor to generate a monitor crash dump.
The only hitch is that the monitor will TOC all the vPars before it TOCs itself. That means you’ll have a whole lot of dumping
going on. You can avoid those vPar dumps by simply shutting the individual vPars down before doing the ctrl-B TC.
When you TOC the monitor, you’ll see a new crash processing interface. It’s covered in the Installing and Managing HP-UX
Virtual Partitions manual, and looks something like this:
Virtual Partition Activity at Time of Crash
partition 0 (vpar1): active
partition 1 (vpar2): active
partition 2 (vpar3): down
Slide 9-22:
              Workaround:
               • Bypass mirroring
                     –     Remove the mirror temporarily
                     –     Read the raw disk (messy)
               • Documented in JAGae44442, fixed in A.02.02
As noted earlier, when it crashes, the monitor writes out its dump image using IODC. This causes a problem in a mirrored
environment, such that the saved monitor dump is corrupt.
The defect, documented in JAGae44442 and fixed in A.02.02, states that the dump of the vPar monitor (vpmon.dmp) will be
corrupt if the boot disk -- that is, /stand -- is mirrored. As a result, the summary file resulting from analysis by vpardump
doesn’t contain expected data.
When the monitor dumps, it writes to the preallocated file /stand/vpmon.dmp using its built-in UFS file system code. Since
the monitor doesn’t understand mirroring, it only writes to one disk in a mirrored logical volume. LVM is unaware of this
inconsistency, so as the vPar boots and copies /stand/vpmon.dmp to /var/adm/crash/vpar, LVM alternates reads
between the two mirror copies. Since one mirror copy is the saved monitor dump and the other copy is empty -- that is, zeroes
-- every other block is read as all zeroes, which leaves a corrupt copy of the monitor dump in /var/adm/crash/vpar. As a
result, the summary file generated by vpardump -a is useless.
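You can simulate the corruption pattern in shell: write "dump" data to one copy, leave the other zero-filled, and alternate reads between the two copies the way LVM does. The block size and contents here are arbitrary, chosen only to make the every-other-block pattern visible.

```shell
m1=$(mktemp); m2=$(mktemp); out=$(mktemp)
printf 'AAAABBBBCCCCDDDD' > "$m1"                    # the "real" dump: 4 blocks of 4 bytes
dd if=/dev/zero of="$m2" bs=4 count=4 2>/dev/null    # the untouched mirror copy: all zeroes

# Read block N from alternating copies, the way LVM balances mirror reads.
for blk in 0 1 2 3; do
    src=$m1                                          # even blocks from the written copy
    if [ $((blk % 2)) -eq 1 ]; then src=$m2; fi      # odd blocks from the zeroed copy
    dd if="$src" of="$out" bs=4 count=1 skip="$blk" seek="$blk" conv=notrunc 2>/dev/null
done

corrupt=$(tr '\0' '.' < "$out")                      # show NUL bytes as dots
echo "$corrupt"
rm -f "$m1" "$m2" "$out"
```

Every other block of the reassembled file is zeroes, which is exactly why the saved copy in /var/adm/crash/vpar is useless.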
There are a few workarounds possible, as documented in the defect report. You can remove the mirror temporarily and resave
the monitor dump, or get the file from the UFS file system some other way. One method is to directly dd the logical volume
from the “good” mirror’s disk to another empty logical volume, then fsck and mount it. Another possibility is to use a tool that
understands the UFS layout and extract the file directly from the boot disk. Again, all this is in the defect report for
JAGae44442. Your best bet, however, is to install A.02.02.
Slide 9-23:
                           To recover:
                            • Do a manual savecrash
                            • Do a soft reset to initialize scratch RAM ahead of time
There has also been one issue seen with kernel dumps and vPars. It has to do with how the dump is saved after reboot.
When the kernel crashes, it saves its memory image off to the dump device(s). Then it tucks away some information about the
dump: specifically where the dump header is -- which disk it’s on, and where on the disk it’s located. The kernel uses a PDC
call to save this in non-volatile memory, a portion of Stable Storage called “scratch RAM.”
On reboot, the savecrash program reads this information from scratch RAM to decide if there’s a dump to be saved.
Unfortunately, there is not enough non-volatile memory to store the crash information for each vPar. So, like the rest of Stable
Storage, the scratch RAM is emulated by the monitor, and kept in the vPar database. When the kernel saves the dump header's
location, it actually saves it in the vPar database. Thus, when savecrash runs, it reads the emulated scratch RAM, gets the
dump header location, and saves the dump image.
As you know by now, the vPar database gets synced to disk by vpard every five seconds. You might expect a problem here:
when the kernel's crashing, it doesn't run vpard, since it's too busy, well, crashing, so it seems the scratch RAM update should
be lost on the crashing vPar. In practice it isn't -- the monitor holds the authoritative copy of the database, so when the vPar
reboots, it gets the updated database, and things work as expected.
This doesn’t work in one case: when the vPar and the monitor go down together. That happens when there’s an HPMC or a
ctrl-B TC on the console. In that case, the scratch RAM doesn’t get saved anywhere. When the vPar comes back up,
savecrash looks for the dump header location, and thinks there’s no dump to save.
One nice thing about the kernel’s dump algorithm is that it will dump to the same location if you don’t add new dump devices.
So if a partition dumped previously due to a kernel panic, the crash parameters are already saved in the scratch RAM record. If
you crash once -- without the monitor crashing too -- you’re okay. If you want to be sure that scratch RAM is set up right, you
can force a dump by doing a soft reset (vparreset -t) on every vPar after installation or dump device reconfiguration.
If you run into this situation after the fact, you can run savecrash manually to save the dump. The crashconf command
will give you the device and offset of the dump header, so you can give those to savecrash with the -D and -O options,
respectively:
# savecrash -r -f -D disk_device -O offset
The -r option forces the dump to be saved, even if there’s no dump header. The -f option makes savecrash run in the
foreground.
Slide 9-24:
Not much!
The good news about kernel dump analysis under vPars is that most things still work as before. You can still use the standard
dump analysis tools crashinfo, p4, and q4.
The only issue is making sure you have recent enough versions of the tools. Older versions didn’t understand The Big Change
for vPars -- making the kernel relocatable. For example, here’s what an “old” q4 would print:
# q4 .
@(#) q4 $Revision: 1.79a $ $Date: 97/09/08 12:00:22 $ 0$
q4: (warning) the kernel and core release versions don't match
q4: (warning) kernel: @(#)     $Revision: vmunix:    vw: -proj    selectors: CUP
I80_BL2000_1108 -c 'Vw for CUPI80_BL2000_1108 build' -- cupi80_bl2000_1108
'CUPI80_BL2000_1108' Wed N
q4: (warning) core:   @(#)     $Revision: vmunix:    vw: -proj    selectors: CUP
Slide 9-25:
Here’s a general list of what’s changed. We’re not going to go through every data structure and global variable -- this is just a
short tally of the most “interesting” differences, and those you might see when running crashinfo.
Far and away, the biggest change to vPars kernels is that they’re relocatable. Before vPars, the kernel was hard-coded to load
at 0x20000; after vPars, since multiple kernel images have to reside in memory, each vPar’s kernel is loaded at a different
address. In fact, there’s no guarantee that a kernel will be loaded to the same address twice -- it’ll certainly move if you change
the memory allotment for a vPar.
This relocation is dealt with transparently by the dump analysis tools, so it’s a non-issue.
Another visible change is that the I/O page directory (IOPDIR) is no longer in the kernel dump, but is now saved in the
monitor crash dump. This might be a concern if you’re debugging I/O problems. The HP-UX GSE has some details on the
problem here: http://wtec.cup.hp.com/~hpux/hpux-wtec/vm-pm/Meetings/vPars_IO.htm. Note that this page is
password-protected, so if you’re not a member of the GSE team, you’ll need to get their permission.
There are a few new global variables. One, vemon_present, is simply a boolean, patched to 1 by the vPars monitor when the
kernel boots. This is the easiest way to tell if a kernel was booted under vPars.
Another global, called page_zero, points to the address of Page Zero. In a dump from a standalone kernel -- or a vPars kernel
booted standalone -- this will contain a 0, since there’s no need to virtualize Page Zero. This is another way to determine if the
kernel was booted under the vPars monitor or not, and in fact is how crashinfo figures it out.
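A sketch of the test crashinfo effectively performs, using faked values (a real tool reads page_zero and vemon_present out of the dump, and the address here is invented):

```shell
page_zero=$((0x7ff00000))   # faked; nonzero means Page Zero was virtualized
vemon_present=1             # faked; patched to 1 by the monitor at boot

if [ "$page_zero" -ne 0 ] || [ "$vemon_present" -eq 1 ]; then
    verdict="booted under vPars monitor"
else
    verdict="booted standalone"
fi
echo "$verdict"
```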
In addition, you’ll see the new virtual console drivers vcn and vcs in a kernel dump.
Finally, you’ll notice some things are, for lack of a better term, different. The most obvious is that some CPUs in the dump may
be disabled. These are the unassigned floating CPUs -- they’re visible to all the vPars, but they’re idle, so they won’t log crash
events.
Slide 9-26:
Slide 9-27:
Later, in the Crash Events section, crashinfo again notes the floating CPU, and that there won’t be a crash event for it:
Note: We have 1 disabled cpu !
As this is a vpars system you should not get a crash event
for the disabled cpu.
Then, when listing stack traces for all the processors, crashinfo will note the lack of a trace from the disabled processor:
Processor #1
Can't find the mpinfo[0].prochpa into the crash event table
You shouldn’t need to worry about these messages, since they’re a consequence of floating CPUs.
Slide 9-28:
              crashinfo/q4/p4 fails:
              # crashinfo
              Warning: some pages can't be artificially mapped
              Can't get base_pdir
              Continuing without translation
              HP-UX 0.00 not supported
There is a situation where a kernel dump is saved properly, but can’t be analyzed by any dump tools. It’s caused by
“on-the-fly” relocation.
When the monitor boots a kernel, it tries to place the kernel in memory at the kernel’s relocation address. If that address isn’t
available -- either because some other vPar has it or because that physical memory’s bad -- the monitor loads the kernel at
some address that is available: the kernel is relocated “on the fly” into a different physical address. Once the kernel’s up and
running its rc scripts, the vparinit script will tweak /stand/vmunix with vparreloc, so that the kernel’s load address
matches reality.
This can cause a problem if there's a kernel memory dump waiting to be saved. Since vparinit runs before the savecrash rc
script, the saved vmunix will have a different load address than the saved crash dump image. That really confuses the dump
analysis tools. For example,
# q4 /var/adm/crash/crash.0
...
Error: requested page does not exist on target machine.
q4: (warning) failing page number = 0x8b1c status - -2
q4: (error) can not read symbol pdirhash_type from core file
quit
p4 and crashinfo have the same type of problem:
# crashinfo /var/adm/crash/crash.0
...
Kernel TEXT pages not requested in crashconf
Will use an artificial mapping from /var/adm/crash/crash.0/vmunix TEXT pages
Warning: some pages can't be artificially mapped
Can't get base_pdir
Continuing without translation
HP-UX 0.00 not supported !!!
As of this writing, the lab is still working on an approach to recovery. Basically, you have to vparreloc the saved kernel in
/var/adm/crash to the address that matches the dump; unfortunately, that address isn’t saved anywhere. The worst case
scenario is for you to try all the possible kernel load addresses -- starting at 64MB and finishing at 2GB less 64MB.
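That worst-case search might look like the loop below, assuming 64 MB alignment of the candidate load addresses (the text doesn't state the step size, and try_address is an invented stand-in for "vparreloc the saved vmunix to this address, then re-run the analysis tools and see if they succeed"):

```shell
mb=$((1024 * 1024))

# Stand-in predicate: pretend 512 MB happens to be the address that works.
try_address() { [ "$1" -eq $((512 * mb)) ]; }

addr=$((64 * mb))                       # start at 64 MB
limit=$((2 * 1024 * mb - 64 * mb))      # finish at 2 GB less 64 MB
found=""
while [ "$addr" -le "$limit" ]; do
    if try_address "$addr"; then
        found=$addr
        break
    fi
    addr=$((addr + 64 * mb))
done
echo "matched load address: $found"
```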
Slide 9-29:
Now that you know what’s different about vPars kernel dumps, you’ll probably want to know if a kernel crash dump was
caused by vPars. Just because a kernel was running in a vPars environment when it crashed, you can’t assume that vPars is at
fault -- all the old causes of panics haven’t been eradicated yet.
Although there’s no way to guarantee that a crash was triggered by vPars, there are some general principles you can apply:
•     If the kernel was executing vPars code when it crashed: If you have a stack trace, you can usually tell if we’re in the
      vPars PSM or vPars support code. Most of the vPars functions in the kernel start with the prefix “ve_” (short for “virtual
      environment”) or “vpar”. The virtual console drivers start with “vcn” for the master and “vcs” for the slave. The virtual
      console daemon functions start with “vconsd”.
      If the system panicked in a function starting with one of those prefixes, or several of them appear in the stack trace, it’s a
      good bet that vPars is involved in the crash.
•     If a vPars command was running when it crashed: If one of the vPars commands like vparmodify is the current
      process, or ran just a couple of ticks ago, vPars may be the trigger for the crash. You normally only run these commands
      when you’re booting, shutting down, or changing the configuration, so if one of them is running, the vPars subsystem is
      active.
      Don’t be hasty to assume that vPars was the trigger just because a vPar command appears in the process list. The vPars
      daemons -- vpard, vphbd, and vconsd -- normally start at boot, and never exit. In fact, the vPar daemon vpard should
      always have run within the last 5 seconds.
•     If the kernel was executing a PDC call when it crashed: Since the vPars monitor emulates PDC calls, any PDC call
      from a vPars kernel will be executing monitor code. In addition, any requests from the kernel to the monitor -- the
      downcalls -- are implemented through the PDC call interface.
      So if the kernel appears to be performing a PDC call, it’s talking to the monitor.
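A quick way to apply the prefix rule above to a stack trace is a grep over the frame names. The trace below is invented for illustration:

```shell
# Two of these four frames carry vPars prefixes (ve_ and vcn).
trace='panic+0x14
ve_do_downcall+0x88
vcn_write+0x3c
hpstreams_put+0x20'

# Count frames whose function name starts with a vPars prefix.
hits=$(printf '%s\n' "$trace" | grep -cE '^(ve_|vpar|vcn|vcs|vconsd)')
echo "vPars frames in trace: $hits"
```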
Slide 9-30:
Questions