

CHAPTER 1 Installation Overview

The following sections explain the supported installations of the APM analysis server and agents.

Quick Start Installation and Configuration


Aternity APM supports an analysis server and agent installation with a basic configuration so you can get
up and running immediately.
For instructions on how to perform a quick start installation and configuration of the analysis server and
agents, see “Installation/Configuration Quick Start“

Analysis Server Installation


The analysis server is the central component of APM that processes performance data generated by
monitored web pages and APM agents installed on monitored systems. The analysis server also hosts the
APM web interface.
For the Software as a Service (SaaS) offering of the analysis server, no installation is required.
For the on-premises version of the analysis server, the following material describes the different approaches
to install the analysis server:
 The easiest way to install the analysis server is as a self-contained virtual appliance, as described in
“Analysis Server Installation: Virtual Appliance“
 You can also install the analysis server on a dedicated Linux system, as described in “Analysis Server
Installation: Linux System“.

Agent Installation
APM agents monitor application and system performance. Install APM agent software on systems you
want to monitor, as described in the following topics:
 “Agent Installation: Windows“
 “Unattended Agent Installation: Windows“
 “Agent Installation: Unix-Like Operating Systems“
 “Unattended Agent Installation: Unix-Like Operating Systems“
 “Deploying Agents in Cloud-Native Environments“

Aternity APM Version 11.4.3



Initial Configuration
In addition to installing the agent, you also need to perform the following initial configuration:
 “Synchronizing Agent Time With the Analysis Server“
 “Enabling Instrumentation of Java Processes“

Where To Go From Here


Determine which installation of the analysis server and agents is best suited to your environment and
follow the appropriate links listed above. To get up and running quickly, see the
“Installation/Configuration Quick Start“.



CHAPTER 2 Installation/Configuration Quick Start

The following sections explain how to quickly install and configure APM to monitor your environment:
 “Installing the Analysis Server“
 “Installing the Agent Software“
 “Instrumenting Processes to Monitor“
 “Creating a Transaction Type for Your Application“

Installing the Analysis Server


The quickest way to get up and running with AppInternals is to perform an Open Virtual Appliance (OVA)
installation. For information on how to perform an OVA installation, see “Analysis Server Installation:
Virtual Appliance“

Installing the Agent Software


After you have completed an OVA installation of the Analysis Server and logged into the WebUI, as
explained in the previous section, install the agent software on systems you want to monitor.
Once started, the agents connect to the analysis server and the analysis server automatically begins
harvesting data on key operating system resources (CPU, memory, and networking).
For information on how to install agents, see the following:
 “Agent Installation: Windows“
 “Agent Installation: Unix-Like Operating Systems“


Instrumenting Processes to Monitor


In the analysis server, go to Configure > Agent List to see which agents have connected to the analysis server.
Click the server that is running an application you want to monitor. In the Agent Details screen, click
the edit icon in the row for the application to open the “Edit Processes to Instrument Dialog“. Click
the Instrument option and Save:

Restart the application to start monitoring.

Creating a Transaction Type for Your Application


A transaction type is a series of patterns that matches interesting transactions in your environment. You
specify one or more patterns and associate them with a friendly transaction type name. (See the
“Transaction Types“ topic for more information on transaction types.)
It is important to create a transaction type to get started. You need at least one transaction type to populate
the “Top Transaction Types Card“ and “Transaction Types Tab“ in the APM interface. In addition, you will
filter by the transaction type to focus on transactions related to activity you want to monitor.
For web applications, a good ‘catch-all’ transaction type is one that matches all front-end transactions. The
following steps create a transaction type that matches all ASP.NET pages for the “TradeFast” application:

1) In the analysis server, go to Configure > Transaction Types.

2) In the “Transaction Types Screen“, click Add a Transaction Type.

3) The “Define a Transaction Type Screen“ opens. Supply a transaction type name (TradeFast).


4) In the “Add Criteria“ area choose url in the Match any of these list and supply a pattern of
http://*/tradefast/*.aspx*:

5) Click Save. The new transaction type appears in the Manage Transaction Types screen.
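The URL pattern above uses glob-style wildcards, where * matches any run of characters. As an illustrative sketch only (the host name and URLs below are hypothetical, and shell case patterns merely share the same * semantics as the pattern field), you can preview which requests such a pattern would match:

```shell
# Illustration: shell case patterns use the same * wildcard semantics,
# so this previews which hypothetical URLs the pattern would match.
pattern='http://*/tradefast/*.aspx*'
for url in 'http://web01/tradefast/quote.aspx' \
           'http://web01/tradefast/order.aspx?id=7' \
           'http://web01/other/page.aspx'; do
  case "$url" in
    $pattern) echo "match:    $url" ;;
    *)        echo "no match: $url" ;;
  esac
done
# Prints:
#   match:    http://web01/tradefast/quote.aspx
#   match:    http://web01/tradefast/order.aspx?id=7
#   no match: http://web01/other/page.aspx
```

Note that the trailing * lets the pattern also match URLs with query strings, such as the second example.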

Note: Right-Click Values to Create Transaction Types


A quick way to create transaction types is to right-click values in the “Search Tab“ or in the “Transaction
Details“ window. See “Right-Clicking Values in Analysis Screens“ in the “Transaction Types“ topic for
details.


Enjoy Your Data!


In a few minutes, transaction data for your instrumented applications will appear in the APM interface. To
filter by the transaction type you just created, double-click the bubble in the “Top Transaction Types Card“.
This adds a global filter for the transaction type in the “Filter Area“ and filters out transaction data you are
not interested in:

Where to Go From Here


For more information on working with APM and monitoring your environment, see “Data Analysis Tabs
and Metric Data Views“.
For more information on advanced configurations, see “Analysis Server System Configuration“.



CHAPTER 3 Analysis Server Installation: Virtual
Appliance

This section describes installing the APM analysis server as a virtual appliance. The virtual appliance
contains a virtual machine (VM) with the analysis server already installed. It is packaged in Open
Virtualization Format (OVF) and distributed as an Open Virtual Appliance (OVA) file.
Installing the analysis server as a virtual appliance is the primary approach for installation. See “Installation
Overview“ for other installation options.

Planning The Installation


Before you begin the installation, identify a system where the analysis server will be installed and the
systems running Java and .NET applications that you want to monitor. These are the systems where you
will install Aternity APM agents.

Installation Prerequisites
The System Requirements document, available from the APM support page, details supported virtual
environments and resource requirements.

Downloading and Importing the Analysis Server Virtual Appliance


To download and import the Analysis Server Virtual Appliance into a supported virtual environment,
follow these steps:

Note—For a list of supported virtual environments, see the System Requirements document on the support
site.


1. Download the analysis server OVA file from the APM support site:
https://support.riverbed.com/content/support/software/steelcentral-ap/appinternals.htm

2. Import the analysis server virtual appliance into a supported virtual environment by using the
VMware OVF Tool with the following syntax:
ovftool -ds=[Name of data store on ESXi host] ../[AppInternals OVA file]
vi://[adminuser]:[password]@[esxi host]

The following example shows how to use the OVF tool with the 11.0 OVA:
C:\Program Files\VMware\VMware OVF Tool>ovftool.exe -ds=datastore1 -nw="VM Network"
c:\$RVBD\AI10\Software\appinternals-vm-11.0.0.22.ova vi://kafka@192.168.1.45

Opening OVA source: c:\$RVBD\AI10\Software\appinternals-vm-11.0.0.22.ova

The manifest validates

Enter login information for target vi://192.168.1.45/

Username: kafka

Password: ***************

Opening VI target: vi://kafka@192.168.1.45:443/

Deploying to VI: vi://kafka@192.168.1.45:443/

Transfer Completed

Completed successfully

Note—When you use URIs as locators, you must escape special characters using the % sign followed
by the special character’s ASCII hex value.

For example, if you use an at sign (@) in your password, it must be escaped with %40 as in: vi://foo:b
%40r@hostname, and a backslash in a Windows domain name (\) can be specified as %5c.
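To look up the hex value for any character you need to escape, a small shell sketch (the escape function name is ours, not part of any Riverbed or VMware tooling):

```shell
# Sketch: printf's "leading-quote" numeric conversion ('X) yields the
# ASCII code of X, so this prints the %-escape for a single character.
escape() { printf '%%%02X' "'$1"; }
escape '@'    # prints %40
echo
escape '\'    # prints %5C
echo
```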

For more information on using the OVF tool, see the following documentation from VMware:
https://www.vmware.com/support/developer/ovi

Provisioning the Virtual Machine


After you import the virtual appliance, make sure it is adequately provisioned. The analysis server will not
function correctly unless you provision its virtual machine with sufficient resources.
Merely configuring adequate resources is not sufficient. It is important to make sure the resources are
exclusively available for the analysis server virtual machine.
Make sure the virtual machine has “reservations” that guarantee the configured resource allocations for the
virtual machine as follows:
 For CPU, the reservation is specified in MHz. Specify a reservation equivalent to the configured
number of cores. For example, for the default of 2 cores, specify 5998 MHz.
 For memory, specify a reservation that matches the configured memory. For example, for the default of
8 GB, specify 8192 MB.


 For disk, specify a High share to ensure maximum disk I/O.
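The reservation arithmetic can be sketched as follows; the per-core MHz figure is an assumption inferred from the 2-core / 5998 MHz example above, not a fixed product value, and will differ with your host CPU:

```shell
# Sketch of the reservation math above; mhz_per_core is inferred from
# the 2-core / 5998 MHz example, not a fixed product value.
cores=2
mhz_per_core=2999
mem_gb=8
echo "CPU reservation:    $((cores * mhz_per_core)) MHz"   # 5998 MHz
echo "Memory reservation: $((mem_gb * 1024)) MB"           # 8192 MB
```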

Starting the Virtual Machine and Configuring the Network


Before you configure the Analysis Server network interface, make sure you know whether or not you want
to use a static IP address, and what fully-qualified hostname is appropriate for your environment.

1. After the appliance is deployed, start the appliance from the virtual environment hypervisor.

2. A command line console appears and starts the virtual machine. When the login prompt appears, log
in with the user name admin and password riverbed-default:
Riverbed AppInternals Virtual Appliance
---------------------------------------

Login with the default credentials:


Username: admin
Password: riverbed-default

---------------------------------------

appinternals login: admin


Password: riverbed-default

3. After you log in successfully, the CLI wizard asks whether to use DHCP to configure the
network interface:

Note—If you cancel the wizard with CTRL/D, you can configure the network later using the CLI
command “networkcfg“.

.
.
.
Successfully logged into AppInternals admin interface

Press the '?' key for list of available commands

Network Configuration
----------------------------
Press CTRL+D to cancel

Current Configuration:
Interface configured for DHCP

Do you wish to use DHCP (y/n):

– If you choose y (yes), the wizard asks whether to obtain DNS from DHCP.
Obtain DNS from DHCP (y/n)

If you choose y (yes), the wizard prompts you for a fully-qualified hostname.

If you choose n (no), the wizard first prompts you for one or more DNS servers, and then for a
fully-qualified hostname.


Note—The default host name for the analysis server virtual machine is
appinternals.local. You need to supply a value suitable to your environment, so you should
discuss this with your Network Administrator.

It is important to supply a fully-qualified host name that will identify the analysis server. In
addition, the host name should be unlikely to change, because the host name is used elsewhere in
the analysis server and by APM agents. If the host name changes, you will need to reconfigure
multiple areas as described in the description of the “Hostname“ setting in the “Network
Configuration Screen“.

The virtual machine then requests an IP address from a DHCP server in your environment.
– If you choose n (no) to using DHCP, the wizard prompts you to enter an IP address, subnet mask,
default gateway, one or more DNS servers, and finally a fully-qualified hostname.
IP Address: 10.46.35.65
Subnet Mask: 255.255.255.0
Default Gateway: 10.46.35.11
DNS Server(s) (comma separated): 10.46.34.13

4. Once the network has been successfully configured, the wizard exits and displays the Analysis Server
CLI prompt.


Logging In to the Analysis Server WebUI


After setting up the Analysis Server network configuration, log in to the Analysis Server WebUI following
these steps:

1. In a supported browser, enter either the host name or IP address that you configured in “Starting the
Virtual Machine and Configuring the Network“:
– https://<fully-qualified-hostname>
– https://<ipaddress>

Note—DNS servers take some time to update their settings, so it might be preferable to log into the
Analysis Server WebUI for the first time using the IP address instead of the fully-qualified hostname.
To find the IP address, follow these steps:
a) At the AppInternals CLI console, enter the following command, which will bring you to the root
prompt:
my_server> enable
my_server#

b) At the root prompt, enter the following command and hit the TAB key twice, which will autofill the
interface name:
my_server# show interface name TAB TAB eth0

c) Hit the RETURN key, and the command returns the network configuration information for the
interface, including the IP address, which you can then use to log into the Analysis Server WebUI:
Interface eth0 state:
Up: Yes
Interface type: ethernet
IP address: 10.46.85.26

2. The first time a browser connects, APM presents the self-signed security certificate created during
the installation, so the browser generates a warning. This is expected behavior. Follow the
steps in the browser to ignore the warning and proceed to the login screen.

Note—To avoid the warning, replace the default certificate in the “TLS Certificate Configuration
Screen“.

3. When the login screen appears, enter the default user name/password -- admin/riverbed-default.
After you log in, you will be prompted to (optionally) change the password.


Configuring The Server to Run in a Cluster (Optional)


To configure the APM analysis server to run in a cluster environment, see “Installing the Analysis Server
on Cluster Nodes“.

Changing the Authentication Service Server (Optional)


SteelCentral Authentication Service provides authentication and authorization to Aternity products and
components. The analysis server includes its own Authentication Service and uses it by default.
If your environment already has a SteelCentral Authentication Service installation, you can configure the
analysis server to use it instead. This is useful because it enables single sign-on with other components that
use it. See the configuration topic “Upgrading the Analysis Server from the CLI“ for details.

Where To Go From Here


The next step in setting up APM is to install agent software on systems you want to monitor, as explained
in the following sections:
 “Agent Installation: Windows“
 “Unattended Agent Installation: Windows“
 “Agent Installation: Unix-Like Operating Systems“
 “Unattended Agent Installation: Unix-Like Operating Systems“



CHAPTER 4 Analysis Server Installation: Linux
System

This section describes installing the APM analysis server on supported Linux systems. Typically, you do not
install the analysis server as described here. It is easier to import the analysis server virtual appliance into
a supported virtual environment as described in “Analysis Server Installation: Virtual Appliance“.
However, in some environments, it makes sense to install the analysis server on a dedicated Linux system:
 Environments that allow only specific standardized virtual machine images. In these environments,
installing the analysis server as a virtual appliance may not be allowed at all, or may require security
audits that will delay deployment of APM.
 Environments that limit resources (such as CPU cores, memory, or high-performance storage) allocated
to a virtual machine. In large deployments, this prevents providing sufficient resources for the analysis
server to support many agent systems.
 Environments that require additional software installed on the system hosting the analysis server. This
is not possible using the virtual appliance.
In these environments, you can instead install the analysis server on a dedicated Linux system as described
in this section.
The installation creates a restricted environment to isolate the analysis server from the system where it is
installed. After installation, you can access the analysis server’s environment as described in “Accessing the
Restricted Environment with the appinternals-bash Command“.

Installation Prerequisites
The following are required before installing the Analysis Server on Linux.

Ensure That All System Requirements Are Satisfied


For a list of supported Linux operating systems and disk, memory, and CPU requirements, see the System
Requirements document on the support page.

Install on a Dedicated System With a Stable Hostname


For optimum performance and to avoid unpredictable behavior, you must install on a system dedicated to
running the analysis server.


Also, the host name for the system where you install the analysis server should be unlikely to change, since
it is used in the analysis server and by APM agents.
If you change the host name after it is initially set during the analysis server installation, you will likely have
to change it in the following places:
 On every system that has the APM agent installed, as described in the documentation topic Changing
the Agent’s Analysis Server in the Agent System Configuration material.
 In the Collection Server Address setting in the Collector Configuration screen of the analysis server
interface.
 If you replace the default security certificate in the TLS Certificate Configuration screen with a
certificate that specifies a host name, you will need a new certificate with the new host name.

Ensure That Dedicated Ports Are Available


The analysis server uses the following dedicated ports, and the installation checks for their availability.
Ensure that they are available before installing the analysis server:
 80 (HTTP for web interface): you can specify a different port during the installation
 443 (HTTPS for web interface): you can specify a different port during the installation
 2222 (SSH tunnels for analysis server cluster deployments)
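A quick way to pre-check these ports is sketched below. It is our illustration, not something the installer provides, and it relies on bash's /dev/tcp (a refused connection suggests nothing is listening on the port):

```shell
# Sketch: probe the dedicated ports before installing. A refused
# connection means nothing is listening, i.e. the port is likely free.
# Uses bash's /dev/tcp, so no extra tools are required.
for port in 80 443 2222; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is in use"
  else
    echo "port $port is free"
  fi
done
```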

System Changes Made by the Installation


Installing the analysis server creates files in system directories and makes a number of other system
changes, including the following:
 Creates appinternals directories in /etc and /var/log
 Creates the appinternals group and users to own files and run processes
 Modifies firewall rules to open required external ports
 Adds commands to /usr/bin: appinternals-cli, appinternals-bash, appint_launcher, appint_support,
and uninstall-appinternals

Installing the Analysis Server

Note: Installing Automatically Reboots the System


Be aware that the installation automatically reboots the system after it completes. There is no prompt to
acknowledge or cancel.

The installation kit is distributed as a gzip-compressed tar file. The file is named
appint-linux-<version>.tar.
Download the installation from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html.


Put the file in a temporary directory, such as /tmp:


[root@152 tmp]# ls -al appint*
-rw-r--r--. 1 root root 735027462 Feb 19 13:16 appint-linux-11.0.1.10.tar

Decompressing and Extracting Installation Files


Before running the installer, you must decompress and extract the installation files. For example:
[root@152 tmp]# tar -xvzf appint-linux-11.0.1.10.tar
./appint-image-11.0.1.10.txz
./appint_installer
./appint-linux-host-11.0.1.10.x86_64.rpm
./manifest.txt

Verifying Target Directories for Binaries and Data


The installer requires the locations of two target directories and checks the free space on the partitions
they are mounted on:
 Binary directory: The installer puts the analysis server binary, temporary, and log files in this directory.
The installer checks for a minimum of 10 GB of space available.
 Data directory: The analysis server uses this directory for transaction trace and metric data files
uploaded from APM agent systems. The installer checks for a minimum of 40 GB of space available.
You can override the disk space requirements with the --force argument on the command line or in response
to interactive prompts during the installation.
Decide on the directories you will use before installing the analysis server. You can specify the location of
the binaries and data directories in response to interactive prompts or on the command line with the
--bin-dir and --data-dir arguments. If the directories do not exist, the interactive installer prompts to create
them. The --force command line argument also creates them.
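The installer's space checks can be previewed with a sketch like the following (GNU df assumed; the check_space helper and the /tmp stand-in directory are ours):

```shell
# Sketch: pre-check free space on the partition backing a target
# directory, mirroring the installer's minimums (10 GB for --bin-dir,
# 40 GB for --data-dir). Assumes GNU df.
check_space() {   # usage: check_space <dir> <needed-GB>
  avail=$(df -BG --output=avail "$1" | tail -n 1 | tr -dc '0-9')
  if [ "$avail" -ge "$2" ]; then
    echo "$1: OK (${avail} GB free, need $2 GB)"
  else
    echo "$1: insufficient (${avail} GB free, need $2 GB)"
  fi
}
check_space /tmp 10   # substitute your intended binary directory
```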

Using Command Line Arguments for Unattended Installations


You can use command line arguments to bypass interactive prompts:

Command Line Argument Specifies

--bin-dir Binary directory

--data-dir Data Directory

--http-port HTTP port for web interface

--https-port HTTPS port for web interface

--force Overrides disk space requirement and creates binary and data
directories if they do not exist

For example:
[root@0C3 tmp]# ./appinternals_installer --bin-dir /appinternals-binaries --data-dir /appinternals-data --force
2018/03/08 13:20:00 CmdArgs.go:119: Force creation of /appinternals-binaries


2018/03/08 13:20:00 CmdArgs.go:119: Force creation of /appinternals-data


2018/03/08 13:20:00 main.go:260: Installing appint-image-11.0.1.10.txz to '/appinternals-binaries'
2018/03/08 13:20:00 CmdArgs.go:297: Writing configurations
2018/03/08 13:20:00 main.go:81: Unpacking tar...
2018/03/08 13:20:55 main.go:85: Done
2018/03/08 13:20:55 main.go:93: Installing host rpm appint-linux-host-11.0.1.10.x86_64.rpm
2018/03/08 13:21:11 main.go:67: Created symlink from /etc/systemd/system/multi-user.target.wants/appinternals.service to /etc/systemd/system/appinternals.service.

2018/03/08 13:21:11 main.go:96: done


2018/03/08 13:21:11 main.go:180: System is now being rebooted...

Running the Installer Interactively


Extracting the .tar file creates the appinternals_installer file. Run that file to install the analysis server.

1) Run the appinternals_installer file as root. Unless you supply target directories on the command line
(see “Verifying Target Directories for Binaries and Data“), the installer prompts for them. If the
directories do not exist, the installer prompts to create them:
[root@143 tmp]# ./appinternals_installer
USAGE: appinternals_installer --bin-dir <dir> --data-dir <dir> \
--http-port <port> --https-port <port> --force

--bin-dir - AppInternals application binaries + tmp space


--data-dir - AppInternals application data store
--http-port - AppInternals http server port (default is 80)
--https-port - AppInternals https server port (default is 443)
--force - ignore errors and force installation

If no command line parameters are present, parameters will


be prompted for.

bin-dir free space recommendation is 10GB


data-dir free space recommendation is 40GB

!! System will be rebooted after installation !!


No command line given, enter the values below
Binary Directory (def /opt) : /appinternals-binaries
/appinternals-binaries does not exist. Creating may violate 10GB space constraint.
Create /appinternals-binaries ? (y/n) : y
Data Directory (def /data/appint) : /appinternals-data
/appinternals-data does not exist. Creating may violate 40GB space constraint.
Create /appinternals-data ? (y/n) : y

2) The installer checks that the target directories have enough space. If there is not enough space in either
directory, it gives a warning message and prompts to continue:
Not enough free space on '/appinternals-data', need 40GB
Force Use of /appinternals-data ? (y/n) :y

3) The installer prompts for open network ports for the analysis server web interface. Accept the defaults
or supply a value to override the defaults:
Http Port (def 80) :
Https Port (def 443) :


4) The installer installs the software and reboots the system:


2018/03/08 14:03:13 main.go:260: Installing appint-image-11.0.1.10.txz to '/appinternals-binaries'
2018/03/08 14:03:13 CmdArgs.go:297: Writing configurations
2018/03/08 14:03:13 main.go:81: Unpacking tar...
2018/03/08 14:04:07 main.go:85: Done
2018/03/08 14:04:07 main.go:93: Installing host rpm appint-linux-host-11.0.1.10.x86_64.rpm
2018/03/08 14:04:27 main.go:67: Created symlink from /etc/systemd/system/multi-user.target.wants/appinternals.service to /etc/systemd/system/appinternals.service.

2018/03/08 14:04:27 main.go:96: done


2018/03/08 14:04:27 main.go:180: System is now being rebooted...

5) After the system reboots, log in and perform “Post-Installation Tasks“.

Post-Installation Tasks

Deleting Installation Files


After the installation, you can safely delete the installation .tar file and the files extracted from it. For
example:
[root@143 tmp]# ls -al
total 1440176
-rwxr-xr-x 1 zeus zeus 2330641 Feb 23 00:35 appinternals_installer
-rw-r--r-- 1 zeus zeus 734980464 Feb 23 00:35 appint-image-11.0.1.10.txz
-rw-r--r-- 1 root root 538 Feb 23 10:02 appint_install.log
-rw-r--r--. 1 root root 736624033 Feb 23 09:16 appint-linux-11.0.1.10.tar
-rw-r--r-- 1 zeus zeus 778176 Feb 23 00:35 appint-linux-host-11.0.1.10.x86_64.rpm
-rw-r--r-- 1 zeus zeus 104 Feb 23 00:35 manifest.txt
[root@143 tmp]# rm -f appint* manifest.txt

Verifying Startup
The installation starts the analysis server when the system reboots at the end of the installation. Confirm it is
running with the systemctl status appinternals -l command. If the analysis server has started successfully,
the output ends with messages about the Tomcat application server starting:
[root@143 tmp]# systemctl status appinternals -l
.
.
.
Feb 23 10:03:06 143 appint-linux-host[766]: yarder-core-monitor STARTING
Feb 23 10:03:06 143 appint-linux-host[766]: Starting appinternals-sysupgraded... Ok
Feb 23 10:03:07 143 appint-linux-host[766]: Tomcat started.
Feb 23 10:03:07 143 appint-linux-host[766]: Waiting for LDAP to initialize
Feb 23 10:03:40 143 appint-linux-host[766]: Tomcat start successfulTomcat started.
Feb 23 10:03:40 143 appint-linux-host[766]: Tomcat start successful
Feb 23 10:03:40 143 appint-linux-host[766]: Loopback devices:/dev/loop0
[root@143 tmp]#

Confirm that it is fully started by logging in to the analysis server user interface. For details, see “Logging
In to the Analysis Server WebUI“ in the “Analysis Server Installation: Virtual Appliance“ material.


Configuring Time Synchronization with Agents


As described in “Synchronizing Agent Time With the Analysis Server“, APM requires that times on agent
systems and the analysis server be synchronized through a network time service (such as Network Time
Protocol (NTP) or Windows Time Service).
When the analysis server is installed on a dedicated Linux system, configure NTP servers so that the system
time is synchronized with the agents.
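A simple way to think about the check is as a clock-offset comparison. The sketch below uses illustrative epoch values; in practice you would compare the output of `date +%s` on the analysis server and on each agent system (for example over ssh):

```shell
# Sketch: a clock-offset check between the analysis server and an agent.
# The epoch values are illustrative stand-ins for `date +%s` output
# collected from each system.
server_epoch=1519399386
agent_epoch=1519399391
offset=$((agent_epoch - server_epoch))
echo "clock offset: ${offset} seconds"   # clock offset: 5 seconds
```

A persistent non-zero offset indicates that NTP is not configured or not synchronizing on one of the systems.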

Configuring The Server to Run in a Cluster (Optional)


To configure the APM analysis server to run in a cluster environment, see “Installing the Analysis Server
on Cluster Nodes“.

Changing the Authentication Service Server (Optional)


SteelCentral Authentication Service provides authentication and authorization to Aternity products and
components. The analysis server includes its own Authentication Service and uses it by default.
If your environment already has a SteelCentral Authentication Service installation, you can configure the
analysis server to use it instead. This is useful because it enables single sign-on with other components that
use it. See the configuration topic “Upgrading the Analysis Server from the CLI“ for details.

Accessing the CLI with the appinternals-cli Command


The APM analysis server provides a set of administrative commands available through a command line
interface (CLI). (See the “Command Line Interface Reference“ for details on the commands.)
To access the CLI after installing the analysis server on a dedicated Linux system, run the appinternals-cli
command as root. The warning message is expected the first time you run the command:
Last login: Fri Feb 23 16:19:53 2018 from 10.18.35.119
[root@nhx2-ol7-5 ~]# appinternals-cli
Warning: Permanently added '169.254.1.2' (RSA) to the list of known hosts.
Successfully logged into AppInternals admin interface
System is running AppInternals version 11.0.1.10

Press the '?' key for a list of available commands

nhx2-ol7-5 >

Accessing the Restricted Environment with the appinternals-bash Command


The installation creates a Linux namespace to isolate the analysis server from the system where it is
installed. The analysis server runs in its own environment, where the process tree, networking interfaces,
mount points, and other resources do not affect resources outside of the namespace. Similarly, the
installation uses “chroot” to restrict the analysis server’s access to the file system.
To access this restricted environment after installing the analysis server on a dedicated Linux system, run
the appinternals-bash command as root. This can be useful for troubleshooting.


The following example shows the contents of the binaries directory specified during installation (see
“Verifying Target Directories for Binaries and Data“) before issuing the appinternals-bash command. After
issuing the appinternals-bash command, note that the same contents now appear as the root (/) directory:
[root@05A zeus]# ls /appinternals-binaries/appint/
appint_ctl appint_env bin boot dev etc home lib lib64 media mnt opt proc root sbin
selinux srv sys tmp usr var
[root@05A zeus]# appinternals-bash
bash-4.1# ls /
appint_ctl appint_env bin boot dev etc home lib lib64 media mnt opt proc root sbin
selinux srv sys tmp usr var

Removing the Analysis Server


To remove the analysis server, run the uninstall-appinternals command as root. This command runs the
script of the same name in /usr/bin. Uninstalling deletes all files and performance data.

Running uninstall-appinternals
1) Run the uninstall-appinternals command as root:
[root@0BF zeus]# uninstall-appinternals

SteelCentral AppInternals Uninstaller
----------------------------------------

You are about to remove Riverbed SteelCentral
AppInternals from this system.

This uninstaller will remove all application code,
configuration, and performance data.

Please make sure you backup your system prior to running
this uninstaller if you wish to recover your data later.
Once the software is uninstalled, the performance data
is no longer recoverable.

Do you wish to continue (y/n)? Default is NO :y

2) Specify y to continue. Do not simply press Enter: the default answer is NO, and the uninstall will
exit. The uninstall command generates several messages but requires no further intervention:
Uninstalling
Shutting down system.
nginx-internal: stopped
nginx-external: stopped
odb_server: stopped
silo_watcher: stopped
wsproxy2: stopped
ferryman3: stopped
serial: stopped
yarder-appinternals: stopped
silo_dispatch: stopped
yarder-core: stopped
yarder-core-monitor: stopped
collector:ferryman: stopped
agentconfig:negotiator: stopped
sensor: stopped
Stopping supervisord: Shut down
Waiting roughly 120 seconds for /var/run/supervisord.pid to be removed after child processes
exit
Supervisord exited as expected in under 2 seconds
Stopping appinternals-sysupgraded Ok


Tomcat stopped.
Tomcat Stopped Successfully
Killing Tomcat with the PID: 183
The Tomcat process has been killed.
Tomcat Stopped Successfully
Stopping sshd: [ OK ]
Shutting down system logger: [ OK ]
Terminate Remaining Processes
Killing remaining processes touching installation
Terminating host process
Removed symlink /etc/systemd/system/multi-user.target.wants/appinternals.service.
libsemanage.semanage_direct_remove_key: Removing last appinternals-ol-7 module (no other appinternals-ol-7 module exists at another priority).

Where To Go From Here


The next step in setting up APM is to install agent software on systems you want to monitor, as explained
in the following sections:
 “Agent Installation: Windows“
 “Unattended Agent Installation: Windows“
 “Agent Installation: Unix-Like Operating Systems“
 “Unattended Agent Installation: Unix-Like Operating Systems“



CHAPTER 5 Agent Installation: Windows

The agent and one or more sub-agents are responsible for the actual collection and communication of metric
and transaction trace data on each system being monitored by APM.
On Windows, you run an interactive installer to install the agent and specify the analysis server you want
it to report to. This section describes that process. You can also install the agent without user intervention,
as described in “Unattended Agent Installation: Windows“.

Installation Prerequisites
For a list of supported Windows operating systems and disk, Java, and .NET requirements, see the System
Requirements document on the support page.

Installing Agent Software


The agent installation takes approximately 5 minutes.
You must run the installer on Windows as administrator. Even if you are logged in to the Administrator
account, depending on User Account Control (UAC) settings, you may be prompted for administrator
credentials to run the installer.
The software is distributed as a self-extracting executable file that you download from the analysis server
web interface “Install Agents Screen“. You can also download the file from the APM support page.
The self-extracting executable file is named AppInternals_Agent_<version>_Win.exe. To start the
installation, click on the file.

Installing Visual C++ Packages May Require Reboot


The installer requires certain Visual C++ packages. These do not have to be installed before running the
agent installer. As described here, the installer will install them if they are not already available.


After the installer starts, it examines your system to see if prerequisite software is installed. These
prerequisites include up to four Visual C++ packages. If the agent software installer finds they are not on
the system, it automatically installs them for you. After each prerequisite package installs, if you had files
open during the installation that the Visual C++ installer must update, you are asked to reboot the system.
You are given the option to restart later.
 If you choose Restart Now, the system reboots immediately.
 If you choose Restart Later, the installer exits and does not update required files or start the agent
installation.
After either choice, when the system reboots, the prerequisite installer updates those files that were open
and proceeds with the next installer. After the last prerequisite package is installed and the reboot occurs,
the agent software installation continues automatically.

Information Required for Installation


The following table summarizes the information requested by the Windows agent installer.

Information Requested Notes

Installation directory The installer creates the Panorama directory and subdirectories in the parent
directory you specify. The default is C:\. You can specify a different directory.
On-premises or SaaS? Choose whether the agent will connect to an analysis server installed in your
environment (“on premises”) or to the Software as a Service (SaaS) offering of
the analysis server (SteelCentral SaaS).
The SaaS option requires a SaaS customer ID. If you do not have a customer ID,
do not choose this option, since you cannot complete the installation without
it. One way to obtain a customer ID is to register for a free trial. After
registering, you receive a user name and password to log in to the SaaS
analysis server. The customer ID is on the “Install Agents Screen“ of the SaaS
analysis server.

On premises: Analysis server location -- If you choose on premises, the installation prompts you for the
system name, fully-qualified domain name (FQDN), or IP address for the analysis server in your
environment. If the analysis server name changes after you install the agent, you must change this name
as described in the configuration topic “Changing the Agent’s Analysis Server“.

SaaS: Customer ID -- If you choose SaaS, the installation prompts you for the SaaS customer ID copied
from the Install Agents screen of the SaaS analysis server.

Proxy server? Choose whether the agent will use a proxy server to connect to the analysis
server system. If so, you must supply the system name, fully-qualified domain
name (FQDN), or IP address for the proxy server.
You can change the proxy server details after installation as described in the
configuration topic “Proxy Server Configuration“.

Proxy Server Authentication? If you specify a proxy server, choose whether the proxy server requires
authentication. If so, supply a user name, password, and (optional) realm. The
agent supports proxy servers that use Basic and Digest authentication. It does
not support NTLM authentication.


Enable Instrumentation Automatically? -- Choose whether to automatically enable instrumentation of
Java and .NET Core processes.
Selecting this option starts the Riverbed Process Injection Driver (RPID). Using
RPID is easiest and recommended. If you do not choose this option, you must
manually enable instrumentation after the installation completes. For more
information, see “Enabling Instrumentation of Java Processes“ and “Enabling
Instrumentation of .NET Processes“.

Start services? The installer gives you the option to not start the Windows services for the
agent and sub-agents. By default, the installation starts the services, which is
easier.

Post-Installation Tasks
This section describes tasks you may have to perform after the installation.

Starting the Agent Controller Windows Service


The installation creates the Riverbed SteelCentral AppInternals Agent Controller Windows service that
starts the agent and sub-agents. Unless you override the default, the installation starts this service
automatically. The service is configured to start automatically when the system next reboots. See the
“Controlling Agent Software on Windows Systems“ configuration topic for more details.

Enabling Instrumentation
If you did not enable instrumentation of Java and .NET Core processes as part of the installation, you must
enable it manually after the installation completes.
For more information, see “Enabling Instrumentation of Java Processes“ and “Enabling Instrumentation of
.NET Processes“.


Instrumenting and Monitoring Processes (Analysis Server)


You specify which processes you want to instrument in the analysis server web interface. Log in using the
host name or IP address specified during the analysis server installation:
http://<Analysis_Server_UI>

Log in with the user name of admin and the default password of riverbed-default.
In the Configure menu, navigate to the “Agent List Screen“. You should see the agent system listed with
the icon for new agents ( ). Click the server that is running an application that you want to monitor. In the
Agent Details screen, click the edit icon ( ) in the row for the application to open the “Edit Processes to
Instrument Dialog“. Select the Instrument option and click Save.

Agent System: Restarting the Application


Restart the processes you instrumented in the previous step. This loads the APM monitoring library and
begins generating application performance data.
The steps to restart vary by application. You can restart Windows web applications with the iisreset
command:
C:\Windows\system32>iisreset

Attempting stop...
Internet services successfully stopped
Attempting start...
Internet services successfully restarted

C:\Windows\system32>


Agent System: Install Npcap (NPM Sub-Agent and SaaS Analysis Server Only)
The NPM sub-agent reports network activity visible to the system where the agent is installed. This
sub-agent works only with the SaaS analysis server.
The agent installation does not include the Npcap library that this sub-agent requires. You can download
the Npcap library at https://nmap.org/npcap/. After installing Npcap, restart the AppInternals agent.
For the sub-agent to work, Npcap must be available on the Windows system. For more information, see the
“NPM Sub-Agent“ topic in “Environmental Metrics and Sub-Agents Reference“.
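As an illustration, restarting the agent after installing Npcap can be done by stopping and starting its Windows service from an elevated prompt. The sketch below only assembles the commands; the service name is taken from the “Post-Installation Tasks“ section of this guide and should be verified in the Windows Services utility, so treat it as an assumption.

```shell
# Sketch only: build the restart commands for the agent controller service.
# The service name is an assumption based on this guide; confirm it in the
# Windows Services utility before running from an elevated prompt.
svc="Riverbed SteelCentral AppInternals Agent Controller"
restart_cmd="net stop \"$svc\" && net start \"$svc\""
echo "$restart_cmd"
```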

Confirming Application User Access to the Panorama/ Directory


Users that start applications must have read and write access to some APM directories.
For example, to create transaction trace files, the user that starts the application being monitored must have
write access to the <installdir>\Panorama\hedzup\mn\userdata\captures directory. If, after trying to
instrument an application, there are no transaction trace files written to the captures\ directory, directory
access may be the problem.
For .NET applications, use the ValidateDotNetInstall.exe utility (in the
<installdir>\Panorama\hedzup\mn\support directory) to check access to directories. Among other
things, the utility checks whether the user that started it has access to required directories. In the following
example, the utility generates red messages for directories with access problems:

For Java applications, check the application user’s permissions to the directories and give the user the
necessary permission. (See https://technet.microsoft.com/en-us/library/cc771586(v=ws.11).aspx for
details on viewing effective permissions.)

Synchronizing Time with the Analysis Server


The APM installation assumes that the system time on all systems (agents and analysis server) is
synchronized. Before the agent begins generating performance data, make sure time is synchronized as
described in the “Synchronizing Agent Time With the Analysis Server“ topic.
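As a hedged example, one common way to force a one-time synchronization on a Windows agent is the built-in w32tm utility; whether this is appropriate depends on how time is managed in your environment, and this guide does not prescribe it. The sketch below just assembles the command:

```shell
# Sketch: the w32tm resync command to run on the Windows agent as
# administrator. /nowait returns without waiting for the resync to finish.
sync_cmd="w32tm /resync /nowait"
echo "Run on the agent: $sync_cmd"
```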


Reviewing the Log File


The installation creates an MSI<identifier>.log file in the %temp% directory (typically, this resolves to
C:\Users\<username>\AppData\Local\Temp). You can review the log to confirm details of the
installation or troubleshoot problems.
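A minimal sketch for locating the newest install log, assuming a Unix-style shell on the Windows system (such as Git Bash) where %temp% is exposed as $TEMP:

```shell
# Sketch: list MSI*.log files in the temp directory, newest first, and
# report the most recent one (or a placeholder if none exist).
log_dir="${TEMP:-/tmp}"
newest=$(ls -t "$log_dir"/MSI*.log 2>/dev/null | head -n 1)
result="${newest:-no MSI log found}"
echo "$result"
```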

Removing Agent Software


You must run the program to remove agent software as administrator. Even if you are logged in to the
Administrator account, depending on User Account Control (UAC) settings, you may be prompted for
administrator credentials to run the installer.
To remove agent software, use the Add or Remove Programs utility. Double-click Riverbed SteelCentral
AppInternals Agent to run the uninstall program.
Before making any changes, the uninstall program checks for running Java and .NET processes that APM
is monitoring. If it detects any, it displays a message panel listing the processes, and you must manually
stop those instrumented processes before you can continue. For example:

Stop the processes before continuing. You can stop ASP.NET applications by using the Windows Services
utility to stop the World Wide Web Publishing service. Click Retry to continue.
If the uninstall program tries to delete files that are open, it will prompt you to restart the system after it
finishes. Restarting the system closes the files and allows them to be deleted. After the uninstall completes,
you may need to manually delete the Panorama directory and any remaining descendants.
Make sure that the Windows Services utility is not open when you remove APM components. If the
Services utility is open, the uninstall program may not be able to completely remove services.

Upgrading Agent Software


Upgrades to Version 10.0 or later from earlier versions of APM agent software (called “managed node” or
“collector”) are not supported.

Note: Before you begin an upgrade, refer to the release notes for any upgrade-specific information.


Upgrades preserve data and configuration settings while updating executable and program files. For any
version-specific upgrade considerations, see the Release Notes available from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html
Use the same self-extracting executable file to upgrade that you use for “Installing Agent Software“. The
self-extracting executable file is named AppInternals_Agent_<version>_Win.exe. To start the upgrade,
click on the file.
Before making any changes, the installer checks for running Java and .NET processes that APM is
monitoring. If it detects any, it displays a message panel listing the processes, and you must manually stop
those instrumented processes before you can continue. For example:

Stop the processes before continuing. You can stop ASP.NET applications by using the Windows Services
utility to stop the World Wide Web Publishing service. Click Retry to continue.
The installer automatically detects if there is a previous version of APM agent software. Click Install to
upgrade from that version:


If the upgrade detects running processes that lock files it needs to replace, it prompts you to stop them. You
can click Ignore to continue:

“Repair”: Re-running the Installer to Replace Files


You can run the installer again after installing the current release to replace APM program files.
When the installer detects an existing installation with the same version and build number, it gives you the
option to “repair” the agent. A repair re-installs only files that are missing or damaged.
After you click Next in the Welcome pane, select Repair and click Next.



CHAPTER 6 Unattended Agent Installation:
Windows

Overview
The agent installation executable (AppInternals_Agent_<version>_Win.exe) is implemented using the
Microsoft Windows installer and embeds a standard MSI installation package.
This section describes how to automate the installation of agent software by specifying arguments to the
agent installation executable. There are two approaches:
 Specify installation options directly on the command line as arguments to the agent installation
executable. This approach does not require third-party tools but the command line can become
complex.
 Extract the MSI file from the executable and use a third-party tool to specify installation options in an
MSI transform file. You then specify MSI and the transform (.mst) files as arguments to the Windows
installer executable, msiexec. This approach requires additional steps but allows deployment using
tools such as Microsoft System Center Configuration Manager.
See the following links for general information about the Windows installer:
 http://en.wikipedia.org/wiki/Windows_Installer
 http://unattended.sourceforge.net/installers.php
 http://msdn.microsoft.com/en-us/library/cc185688%28VS.85%29.aspx

Installation Prerequisites
For a list of supported Windows operating systems and disk, Java, and .NET requirements, see the System
Requirements document on the support page.


Agent Installers May Each Require Reboot


An installation (or upgrade) invokes more than one installer: one or more installers for prerequisite Visual
C++ software packages (if they are not already installed), and an installer for the agent software.
During any of these unattended installations, if there are one or more files open that the installer needs to
update, the system must reboot for those files to be updated. These installers respond differently to finding
the files open:
 The prerequisite Visual C++ installers will not reboot automatically. If one of them finds open files that
it must update, the installer exits without rebooting. In this case, the installation is not “unattended”
because you must manually reboot. After the next reboot, the prerequisite installer updates those files,
then the unattended agent software installation begins automatically. If an unattended installation
appears to have failed (the agent software did not install), it might be because the prerequisite installer
has exited and is waiting for you to reboot the system.
 In contrast, if the agent software installer finds you have files open that it must update, unlike the
prerequisite installer, this installer automatically reboots to update those files and complete the
installation.
Preventing Automatic Reboot of Agent
You can prevent an automatic reboot of the agent after the installation by including
REBOOT=ReallySuppress in the Windows installer command (for details, refer to
http://msdn.microsoft.com/en-us/library/aa371101%28v=vs.85%29.aspx).
Note—Suppressing Auto-Reboot Results in Incomplete Installation—If you add
REBOOT=ReallySuppress to your installation script, when a reboot is required to complete the
install, the installer exits without completing the installation or indicating why it has exited. You must
then reboot the system to ensure the installation is complete. To handle this behavior, you could have
the script send you a notification when a reboot is required, then schedule those reboots at an
appropriate time.
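For illustration, an unattended command line with the automatic reboot suppressed might look like the following sketch; the version number, analysis server address, and log path are placeholders, not values from this guide.

```shell
# Sketch: unattended install with the automatic reboot suppressed. If a
# reboot is required, the installer exits without completing; reboot the
# system manually later to finish the installation.
install_cmd='AppInternals_Agent_10.0.00168_Win.exe /s /v"O_SI_ANALYSIS_SERVER_HOST=10.46.35.218 REBOOT=ReallySuppress /L*v c:\temp\AgentInstallLog.log /qn"'
printf '%s\n' "$install_cmd"
```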

Specifying Installation Options on the Command Line


You can specify installation options directly on the command line as arguments to the agent installation
executable (AppInternals_Agent_<version>_Win.exe).
For example, this minimal command installs the agent to the default directory without enabling any
sub-agents or creating a log file:
AppInternals_Agent_10.0.00168_Win.exe /s /v"/qn"

Installing the agent software from the command line in this manner completes without any user interaction
or indication of progress. Use the options described in “Installation Options Reference“ to control any
aspect of the agent installation that you can specify interactively through the installer user interface.

General Syntax
In general, use the following syntax to invoke the agent installation executable from the command line:
installation-executable /s /v" [ install-option ]… [ /Llogginglevel \"logfilespec\" ] /qn"


/s

The /s argument suppresses display of the file extraction installer panel.

/v

The /v argument specifies installation options to pass to the Windows installer, msiexec. Supply all
options on the same line as the installation executable file name. In general, use upper case option
names and option values. (Option names are not case sensitive but option values are case sensitive and
the installer requires most to be all upper case.) See “Installation Options Reference“ for details. See
the following link for details on msiexec switches: http://support.microsoft.com/kb/227091

install-option

These options specify details of the agent installation. There are command-line options that correspond
to every setting that you can specify interactively through the installer user interface. See “Installation
Options Reference“ for details.

[ /Llogginglevel \"logfilespec\" ]

The logging level and file specification for the log file for the installation. The path to the log file
location must already exist. The installer does not create the directory structure for the log file.

The backward slashes ( \ ) are escape characters for the double quotation marks ( " ) that enclose the
log file specification.

The backward slashes and double quotation marks are always allowed, but required only if the log file
specification contains spaces. The most detailed logging level specification is /L*v. For example:
/L*v \"c:\temp\Agent Install Log.log\"

See the following link for details on the different msiexec logging levels and options:
http://technet.microsoft.com/en-us/library/cc759262%28v=ws.10%29.aspx#BKMK_SetLogging

/qn

Runs the Windows installer without user intervention and suppresses display of user interface screens
(/qn is a mnemonic for "quiet mode, no user interface"). The Windows installer supports other /q
options. For example, specify /qf ("quiet mode, full user interface"), or omit the /q argument altogether,
and the installer will display the user interface screens filled in with option values supplied on the
command line. This is a good way to validate command line options.

See the following link for additional /q options:


http://technet.microsoft.com/en-us/library/cc759262%28v=ws.10%29.aspx#BKMK_SetUI
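The general syntax above can be sketched by assembling the pieces in a shell; the option values here are illustrative placeholders (see “Installation Options Reference“ for the real options), and the escaped quotation marks around the log path are needed because it contains spaces.

```shell
# Sketch: compose the unattended command line from its parts.
exe='AppInternals_Agent_10.0.00168_Win.exe'
opts='INSTALLDIR=c:\Riverbed O_SI_ANALYSIS_SERVER_HOST=10.46.35.218'
log='/L*v \"c:\temp\Agent Install Log.log\"'   # escaped quotes: path has spaces
cmd="$exe /s /v\"$opts $log /qn\""
printf '%s\n' "$cmd"
```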

Enabling Instrumentation
If the “O_SI_AUTO_INSTRUMENT“ option is set to “true”, instrumentation of Java and .NET Core
processes is enabled as part of the installation. If you do not enable instrumentation as part of the
installation, you must enable it manually after the installation completes.
For more information, see “Enabling Instrumentation of Java Processes“ and “Enabling Instrumentation of
.NET Processes“.


New Installation
There are command-line options for each setting that you can specify through the interactive GUI
installer. For a new installation, supply values for required options and for options whose defaults you
want to override. See “Installation Options Reference“ for details on the options.
The following command line example creates a log file and uses the following options:
 “INSTALLDIR“ -- specifies a different installation directory (c:\Riverbed).
 “O_SI_ANALYSIS_SERVER_HOST“ -- specifies the IP address for the on-premises analysis server.
 “O_SI_AUTO_INSTRUMENT“ -- enables instrumentation of Java and .NET Core processes.
AppInternals_Agent_10.0.00168_Win.exe /s /v"INSTALLDIR=c:\Riverbed
O_SI_ANALYSIS_SERVER_HOST=10.46.35.218 O_SI_AUTO_INSTRUMENT="true" /L*v
c:\temp\AgentInstallLog.log /qn"

Changing an Existing Installation


This section describes changing an existing installation:
 Upgrading from an earlier release (see “Upgrading“)
 Repairing to replace files created by the installation (see “Repairing“)

Stopping Instrumented Java and .NET Processes


The installer checks for running Java and .NET processes that APM is monitoring. You must manually stop
such instrumented processes before upgrading or repairing an existing installation. Running processes
prevent the installer from replacing files it needs to replace.
Unless the “O_SI_SKIP_SCAN“ option is set to TRUE, the installer exits if it detects instrumented processes.
Check the web interface “Process List Screen“ to see which processes you need to stop.
You can stop ASP.NET applications by using the Windows Services utility to stop the World Wide Web
Publishing service.
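For example, the World Wide Web Publishing service can also be stopped from an elevated command prompt. The short service name W3SVC in the sketch below is an assumption not stated in this guide; confirm it in the Windows Services utility first.

```shell
# Sketch: command to stop ASP.NET applications before upgrading or
# repairing. W3SVC is the conventional short name of the World Wide Web
# Publishing service (an assumption; verify on your system).
stop_cmd="net stop W3SVC"
echo "Run as administrator: $stop_cmd"
```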

Upgrading
When you upgrade, the installation automatically preserves data and configuration settings while updating
executable and program files. For any version-specific upgrade considerations, see the Release Notes
available from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html
Upgrades to Version 10.0 or later from earlier versions of APM agent software (called “managed node” or
“collector”) are not supported.
To upgrade a Version 10.0 or later installation to a later release, run the agent executable without any
options. The following example upgrades the agent installation and generates a log file:
AppInternals_Agent_10.0.00168_Win.exe /s /v"/L*v c:\temp\AgentUpgradeLog.log /qn"

Note: Manually Stop Instrumented Processes Before Upgrading 


You must manually stop instrumented processes before upgrading. See “Stopping Instrumented Java and
.NET Processes“.


Repairing
You can run the installer on an existing agent installation with the same version and build number. Use
only the REINSTALL=ALL and REINSTALLMODE=vamus options to replace files created by the previous
installation without adding or removing sub-agents. This “repair” operation can be useful to restore files
that were corrupted or accidentally deleted. It preserves files created after the installation.
The following example repairs an agent installation:
AppInternals_Agent_10.0.00168_Win.exe /s /v"REINSTALL=ALL REINSTALLMODE=vamus /L*v
c:\temp\AgentRepairLog.log /qn"

Remove an Existing Installation


Use the /x argument to remove an existing installation. The following example removes the installation and
creates a verbose log file:
AppInternals_Agent_10.0.00168_Win.exe /x /s /v"/L*v c:\temp\AgentUnInstallLog.log /qn"


Specifying Installation Options in an MSI Transform


The MSI format lets you modify or customize an installer by creating a transform (with a file extension of
.mst). A transform is another way to specify values for the same installation options you can specify on the
command line. (See “Installation Options Reference“ for details on the options.)
To create a transform, you first extract the MSI installation package from the agent installation executable.
Then, use a third-party tool like ORCA (Microsoft), InstallShield (Flexera) or InstEd (Instedit) to create the
transform. ORCA is most common and is free from Microsoft. The others are licensed products.
Once you create the transform, run msiexec and specify the extracted MSI and transform files to install the
agent software.
This section describes these steps in more detail.

Prerequisites
Before you begin an unattended installation, be sure that the Microsoft Visual C++ 2010 Redistributable
Package (x86) and Microsoft Visual C++ 2010 Redistributable Package (x64) are installed on the target
systems. You can verify they are installed by opening the control panel, selecting Programs and Features,
and looking for Version 10.0.40219 of each in the program list:

If you need to install them, you can download these packages from Microsoft for free from this site:
http://search.microsoft.com/en-us/DownloadResults.aspx?FORM=DLC&ftapplicableproducts=^%22Developer%20Tools%22&sortby=+weight

Extracting the MSI file


Invoke the agent installation executable with the /a argument. For example:
AppInternals_Agent_10.0.00168_Win.exe /a


The /a argument creates a “server image” that includes the MSI file without actually installing the agent
software. The installer displays a series of panels, including one where you specify the directory for the
server image:

The installer creates the .msi file in the specified directory, named MNInstall.msi.

Creating the Transform from the MSI File


You use a third-party tool to modify the MSI file extracted in the previous step. This section shows
Microsoft Orca. See the following link for more details on Orca:
https://msdn.microsoft.com/en-us/library/aa370557%28v=vs.85%29.aspx
Use the third-party tool to modify properties that correspond to the installation options described in
“Installation Options Reference“.

Note: Do Not Modify Any Other Properties: Tools such as Orca allow modifying any property in an MSI
installation package. Only modify properties with the O_SI prefix, and the “INSTALLDIR“ property. The
agent installer does not support modifying other properties.

Open the MSI file in Orca and click the Property table. Click the Property column in the table in the right
pane to sort by the property name and group together the O_SI properties that you can modify. Click the
Transform > New Transform menu option. To change values, click the cell in the Value column for the
property you want to modify.
Some installation options (such as “INSTALLDIR“) do not have corresponding properties. To create the
property, click the Tables > Add Row… menu option and supply the property name and value.
After you modify and create properties, click the Transform > Generate Transform menu option and
supply a file name for the .mst file.


The following example shows a transform in Orca with changed properties to connect to the analysis server:

Running msiexec with the MSI and Transform Files


To install the agent software with the changes specified in the transform file, run the Windows installer
executable, msiexec, with the MSI and transform (.mst) files as arguments. The following example specifies
the transform shown in the previous step:
msiexec /i MNInstall.msi TRANSFORMS=analysisServerDetails.mst /L*v
c:\temp\AgentInstallTransformLog.log /qf

This example passes /qf as an argument to display the installer interface screens filled in with option values
supplied in the transform file. Examining these screens is a good way to validate that the transform has the
options you want.

Installation Options Reference


This section describes options that control aspects of the agent installation. The options correspond to
settings exposed in the graphical user interface (GUI) version of the installer.
In general, use upper case option names and option values. (Option names are not case sensitive but option
values are case sensitive and the installer requires most to be all upper case.)

INSTALLDIR

Description Specifies the destination directory for a new agent installation. The directory does not have
to exist. The installation creates the Panorama directory and subdirectories in the parent
directory you specify.

Valid Values Any valid local Window directory specification. The specification must include a local drive
letter. Do not specify a mapped drive or UNC path.
If you specify this option within the /v argument to the installation executable (see “General
Syntax“) and the directory specification contains spaces, enclose it with \" (see the example
below). The backward slashes ( \ ) are escape characters for the double quotation marks ( " )
that must enclose the directory specification. The escape characters are not required (or
allowed) if you specify the option directly as an argument to msiexec (see “Running msiexec
with the MSI and Transform Files“).

Default Value C:\

Example INSTALLDIR=d:\ 
INSTALLDIR=\"c:\Riverbed Technology\"


O_SI_ANALYSIS_SERVER_HOST

Description Specifies the location of the on-premises APM analysis server. (This option has no effect for
installations that specify the SaaS analysis server by setting
“O_SI_SAAS_ANALYSIS_SERVER_ENABLED“.) The agent connects to this system to
initiate communications. Use this option for new installations only. See “Changing the
Agent’s Analysis Server“ for instructions on changing this value after the agent has been
installed.

Valid Values The system name, fully-qualified domain name (FQDN), or IP address for the analysis
server.

Default Value none

Example O_SI_ANALYSIS_SERVER_HOST="myserver.example.com" 
O_SI_ANALYSIS_SERVER_HOST="10.46.35.218"

O_SI_ANALYSIS_SERVER_PORT

Description Specifies the port on which the APM analysis server is listening. The agent connects to this
port to initiate communications. See “Changing the Agent’s Analysis Server“ for instructions
on changing this value after the agent has been installed.

Valid Values Valid port number for the analysis server.

Default Value 80

Example O_SI_ANALYSIS_SERVER_PORT="8051"

O_SI_ANALYSIS_SERVER_SECURE

Description Specifies whether the connection to the APM analysis server should be secure.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_ANALYSIS_SERVER_SECURE="true"

O_SI_AUTO_INSTRUMENT

Description Specifies whether to enable automatic instrumentation of Java and .NET Core processes on
Windows systems. When set to “true”, the agent installation starts the Riverbed Process
Injection Driver (RPID).
Note: If you do not want all Java and .NET Core processes enabled for instrumentation, do
not set this option to “true”. After the installation completes, you can manually configure
processes to be instrumented with the rpictrl utility. For more information, see “Automatic Process Instrumentation on Windows“.

Valid Values true false (values are not case sensitive)

Default Value false (any value other than “true” will disable automatic instrumentation)

Example O_SI_AUTO_INSTRUMENT="true"


O_SI_CUSTOMER_ID

Description This option applies only if “O_SI_SAAS_ANALYSIS_SERVER_ENABLED“ is set to “true”. It specifies the Customer ID you received from Aternity.

Valid Values Hexadecimal string identifier provided by Aternity.

Default Value none

Example O_SI_CUSTOMER_ID="5e6f1281-162d-11a6-a7c8-ab267ed63dc8"

O_SI_PROXY_SERVER_AUTHENTICATION

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ is set to “true”. It specifies whether the proxy server requires authentication.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_PROXY_SERVER_AUTHENTICATION="true"

O_SI_PROXY_SERVER_ENABLED

Description Whether the agent will use a proxy server to connect to the analysis server system.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_PROXY_SERVER_ENABLED="true"

O_SI_PROXY_SERVER_HOST

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ is set to “true”. It specifies the location of the proxy server.

Valid Values The system name, fully-qualified domain name (FQDN), or IP address for the proxy server.

Default Value none

Example O_SI_PROXY_SERVER_HOST="myproxyserver.example.com" 
O_SI_PROXY_SERVER_HOST="10.46.35.238"


O_SI_PROXY_SERVER_PASSWORD

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ and “O_SI_PROXY_SERVER_AUTHENTICATION“ are set to “true”. It specifies the password for the user.

Valid Values Password for the proxy user.

Default Value none

Example O_SI_PROXY_SERVER_PASSWORD="myproxypassword"

O_SI_PROXY_SERVER_PORT

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ is set to “true”. It specifies the port on which the proxy server is listening.

Valid Values Valid port number for the proxy server.

Default Value none

Example O_SI_PROXY_SERVER_PORT="8080"

O_SI_PROXY_SERVER_REALM

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ and “O_SI_PROXY_SERVER_AUTHENTICATION“ are set to “true”. It specifies the authentication realm for the proxy server, if applicable.

Valid Values Name of an authentication realm.

Default Value none

Example O_SI_PROXY_SERVER_REALM="myproxyrealm"

O_SI_PROXY_SERVER_USER

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ and “O_SI_PROXY_SERVER_AUTHENTICATION“ are set to “true”. It specifies a user for the proxy server.

Valid Values Valid user name for the proxy server.

Default Value none

Example O_SI_PROXY_SERVER_USER="myproxyuser"


O_SI_SAAS_ANALYSIS_SERVER_ENABLED

Description Specifies whether the agent will connect to the Software as a Service (SaaS) offering of the
analysis server.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_SAAS_ANALYSIS_SERVER_ENABLED="true"

O_SI_SKIP_SCAN

Description Specifies whether the installer scans for running Java and .NET processes that AppInternals
is monitoring. By default, the installer checks for such instrumented processes and, if it
detects any, exits. Set this option to TRUE to skip the scan. This is useful only if the scan fails
and prevents installation when in fact there are no instrumented processes running on the
system.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_SKIP_SCAN="true"

O_SI_START_SERVICES

Description Whether to start Windows services for the DSA and sub-agents after the installation
completes.

Valid Values START DONTSTART (values are not case sensitive)

Default Value START (any value other than START will cause the services NOT to start)

Example O_SI_START_SERVICES=DONTSTART

REINSTALL

Description Used only to modify or repair an existing installation. For Microsoft documentation on
REINSTALL, see:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa371175%28v=vs.85%29.aspx

Valid Values ALL (values are not case sensitive)

Default Value none

Example REINSTALL=ALL


REINSTALLMODE

Description Used only to modify or repair an existing installation. For Microsoft documentation on
REINSTALLMODE, see:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa371182%28v=vs.85%29.aspx

Valid Values vamus

Default Value None

Example REINSTALLMODE=vamus



CHAPTER 7 Agent Installation: Unix-Like
Operating Systems

The agent and its sub-agents are responsible for the collection and communication of metric and transaction
trace data on each system being monitored by APM.
APM supports the following Unix-like operating systems: AIX, Linux, and Solaris.
On these platforms, you run an installation script locally on each agent. The script prompts you for required
information. You can run the script as root or as the user you choose as the agent administrative user. You
can also install the agent software in silent mode, as described in “Unattended Agent Installation: Unix-Like
Operating Systems“.
This chapter describes using the script as well as steps before and after installation.

Installation Prerequisites
The following are required before installing the AppInternals agent on Linux and Unix.

Ensure That All System Requirements Are Satisfied


For a list of supported Linux and UNIX operating systems and disk, Java, and .NET requirements, see the
System Requirements document on the support page.

Create a Dedicated Administrative Account


Before installing the agent software, identify an operating system user account to be the APM
administrative user. This user will run the agent software. The user name for the dedicated account can be
any name, but it must exist before you install. The installation creates the Panorama directory and its
contents using that account as owner. In addition, you must use the agent utility (see “Controlling Agent Software on Unix-Like Systems“ in the Agent Installation: Unix-Like Operating Systems section of the documentation) from that account.
You can install the agent software from the root account or from the administrative account:
 If you run the installation script as root, it prompts you for the name of an account to be the
administrative user.


 If you run the installation script from a non-root account, that account will automatically be made the
administrative user. The account must have rights to create the Panorama directory in the parent
directory you specify or the installation fails. In the following example, the user admin did not have
rights to create the directory:
Where do you want to create the Panorama installation directory:/opt
/bin/mkdir: cannot create directory `/opt/Panorama': Permission denied
Creating /opt/Panorama/hedzup/mn failed. Check that admin has write access in /opt/Panorama
Type 'y' to try again or 'n' to exit:

The APM administrative user should be in the same group as users who run Java applications that APM
will monitor. If multiple applications are run by different users, those users should share the same
group. This avoids permission issues:
 The user account that starts Java processes that APM monitors must have read and write access to
specific AppInternals directories. The installation creates those directories with group read and write
permissions.
 The APM administrative user must have read access to the transaction trace data generated by
applications. These files are created by the user account that starts the Java process. By default, the
JIDA sub-agent creates the files with permissions that limit access to group members and exclude
others from reading the files. You can change the permissions with which JIDA creates the files in the
UNIX File Creation Permissions settings of the Configuration Settings screen in the analysis server
interface .
Use the ps command to see what users are currently running Java applications. The following ps command
identifies jboss and weblogic as users running Java applications:
$ ps -ef | grep java
jboss 1469 1400 3 16:49 ? 00:01:39 java -D[Standalone] -server
weblogic 1879 1780 1 16:49 ? 00:00:46/usr/java/jdk1.7.0_60/bin/java

Use the id command to determine what user group those users belong to. The following example shows
that they belong to the webadm group, and that a third user, webadm, also belongs to that group.
$ id jboss
uid=498(jboss) gid=502(webadm) groups=502(webadm),0(root)
$ id weblogic
uid=497(weblogic) gid=502(webadm) groups=502(webadm)
$ whoami
webadm
$ id webadm
uid=502(webadm) gid=502(webadm) groups=502(webadm)

In this example, the webadm user would be the appropriate choice for the AppInternals administrative
user. In other cases, you may need to create a new account in the appropriate group.
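The ps and id checks above can be scripted. The sketch below is not part of the product; it extracts the unique user names that run java from ps -eo user,comm style output (the canned input mirrors the example above):

```shell
#!/bin/sh
# Sketch: print the unique users running "java", given input in the
# format produced by `ps -eo user,comm` (user name, then command name).
list_java_users() {
  awk '$2 == "java" { print $1 }' | sort -u
}

# Illustrative canned input matching the example above:
printf 'USER COMMAND\njboss java\nweblogic java\nroot sshd\n' | list_java_users
```

On a live system, pipe real output through the function (ps -eo user,comm | list_java_users) and then run id on each name it prints to find a common group.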

32-Bit Libraries Are Required for 32-bit JRE (Linux-only)


On supported versions of Linux, if APM will monitor Java applications that run in a 32-bit JRE, the Linux
system must contain a 32-bit version of the C++ runtime library.
Before installing the agent, verify that the following 32-bit packages are installed.
 glibc


 libstdc++

Note: The x86_64 installation media contains both 32-bit (denoted by i686) and 64-bit (denoted by x86_64) versions. Be sure to specify the 32-bit version by including i686 in the package file name.
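On RPM-based Linux distributions, one way to sketch this check is with rpm. This is an assumption about your package manager, not a step from this guide (Debian-style systems would use dpkg with the :i386 architecture suffix instead):

```shell
#!/bin/sh
# Sketch: report whether the 32-bit (i686) glibc and libstdc++ packages
# are installed on an RPM-based system. If rpm itself is unavailable,
# the package is reported as unverifiable rather than failing.
check_i686() {
  if rpm -q "$1.i686" >/dev/null 2>&1; then
    echo "$1.i686 present"
  else
    echo "$1.i686 missing (or rpm unavailable)"
  fi
}

check_i686 glibc
check_i686 libstdc++
```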

Installing Agent Software


The installation script is distributed as a gzip-compressed file that includes a self-extracting .tar file with the
agent software.
Download the agent installation script from the analysis server web interface “Install Agents Screen“. You
can also download the file from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html.
The installation script is named as follows:
 AIX: AppInternals_Agent_<version>_Aix.gz
 Linux: AppInternals_Agent_<version>_Linux.gz
 Solaris: AppInternals_Agent_<version>_Solaris.gz
The agent installation takes around 10 minutes. Examples in this section show installing the agent on Linux.
The steps are the same on other platforms.
You must decompress the file before running it. Running gunzip or a similar utility creates the same file
name with no extension. For example:
[root@nhv1-rh6-1 tmp]# dir
AppInternals_Agent_10.1.0.47_Linux.gz
[root@nhv1-rh6-1 tmp]# gunzip AppInternals_Agent_10.1.0.47_Linux.gz
[root@nhv1-rh6-1 tmp]# dir
AppInternals_Agent_10.1.0.47_Linux

If you plan to install the agent on multiple systems, copy the extracted script to a disk shared across the network if possible. This makes installation on multiple systems more convenient.
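As an illustrative sketch of staging the script from a central machine (host names and the kit version are placeholders), the loop below prints the copy commands as a dry run; removing the echo would perform the copies:

```shell
#!/bin/sh
# Sketch: print the commands that would stage the extracted installer
# on several hosts. Hosts and kit name are placeholders.
KIT=AppInternals_Agent_10.1.0.47_Linux
for host in web01 web02 web03; do
  echo scp "$KIT" "root@$host:/tmp/"
done
```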
You must run the installation script as root or from the account you want to use as the agent administrator
(see “Installing Agent Software“). If you run the script from a non-root account, the agent software will not
be configured to start automatically when the system reboots. See “Enabling Agent Software to Start on
System Startup“ for details on doing this after the installation.


Information Required for Installation


The following list summarizes the information requested by the agent installation script.

Temporary and installation directories: The installation script prompts for the location of two directories.
• A temporary directory where the installation script extracts files. The default
is /tmp/. The installation creates the temporary directory (named
AppInternals_Xpert_Collector_<version>_<os>) in the parent directory you
specify. This directory is deleted after the installation.
• The actual installation directory. The installation script creates the Panorama
directory and subdirectories in the parent directory you supply.
If you run the installation from an account other than root, the account must
have privileges to create both directories, or the installation fails.
If you provide a relative path, the script appends it to the current directory and
creates the directory there. If you provide an absolute path, the script creates the
directory there.

On-premises or SaaS? Choose whether the agent will connect to an analysis server installed in your
environment (“on premises”) or to the Software as a Service (SaaS) offering of
the analysis server (SteelCentral SaaS).
The SaaS option requires a SaaS customer ID. If you do not have a customer ID,
do not choose this option, since you cannot complete the installation without it.
One way to obtain a customer ID is to register for a free trial. After registering,
you receive a user name and password to log in to the SaaS analysis server. The
customer ID is on the “Install Agents Screen“ of the SaaS analysis server.

Administrator user account You receive this prompt only if you run the installation script as root (otherwise,
the account that started the script will be the administrative user account).
Supply the user name for the existing account you will use to run the agent and
sub-agents on this system. See the “Installing Agent Software“ for more details.

On premises (analysis server location): If you choose on premises, the installation prompts you for the system name, fully-qualified domain name (FQDN), or IP address for the analysis server in your environment. If the analysis server name changes after you install the agent, you must change this name as described in “Changing the Agent’s Analysis Server“.

SaaS (customer ID): If you choose SaaS, the installation prompts you for the SaaS customer ID copied from the Install Agents screen of the SaaS analysis server.

Proxy server? Choose whether the agent will use a proxy server to connect to the analysis
server system. If so, you must supply the system name, fully-qualified domain
name (FQDN), or IP address for the proxy server.
You can change the proxy server details after installation as described in the
configuration topic “Proxy Server Configuration“ .

Proxy Server Authentication? If you specify a proxy server, choose whether the proxy server requires
authentication. If so, supply a user name, password, and (optional) realm. The
agent supports proxy servers that use Basic and Digest authentication. It does
not support NTLM authentication.


Whether to display a summary of options for enabling instrumentation: An important configuration task after installing is to enable instrumentation for Java processes you want to monitor. There are different approaches to enabling instrumentation manually, and you are prompted during the installation to have a summary of these options displayed after the installation finishes. For more information, see “Enabling Instrumentation of Java Processes“.

Linux only (whether to enable automatic system-wide instrumentation): You can choose to enable instrumentation of Java and .NET Core processes only during a Linux installation run as root. This option is not supported on UNIX. For a list of supported Linux systems, see the System Requirements document on the support page.
You receive this prompt under the following conditions:
• You are installing the agent on a Linux system.
• You run the installation script as root.
• You choose to display the summary of options for enabling instrumentation.
To enable instrumentation during a Linux installation, answer yes (y) to the
enable automatic instrumentation prompt.
Note: Enabling instrumentation of Java and .NET Core processes on UNIX is a
post-installation task. For more information, see “Enabling Instrumentation of
Java Processes“ and “Enabling Instrumentation of .NET Processes“.

Installing as root
Respond to prompts with the “Information Required for Installation“. The prompts indicate default values
in brackets [ ]. The following example shows required input and responses in bold.
[root@nhv1-rh6-1 tmp]# ./AppInternals_Agent_10.0.00163_Linux

Riverbed SteelCentral AppInternals Setup Menu:

1) Install
2) Upgrade
3) Exit

Select one of the options above and press enter:


1
The installation creates a temporary directory (AppInternals_Xpert_Collector_10.0.00163_Linux_Kit). Where do you want to create it [/tmp]?

The installation extracts files to the temporary directory:


Riverbed SteelCentral AppInternals tar-file extracted successfully
Uncompressed instrumentation.tar.gz in /tmp/AppInternals_Xpert_Collector_10.0.00163_Linux_Kit
Unpacking instrumentation...
Extracted instrumentation.tar to /tmp/AppInternals_Xpert_Collector_10.0.00163_Linux_Kit/hedzup/mn
Removed instrumentation.tar from /tmp/AppInternals_Xpert_Collector_10.0.00163_Linux_Kit/hedzup/mn
Uncompressed jre.tar.Z in /tmp/AppInternals_Xpert_Collector_10.0.00163_Linux_Kit
Unpacking the Java Runtime Environment (JRE)...
Extracted jre.tar to /tmp/AppInternals_Xpert_Collector_10.0.00163_Linux_Kit/hedzup/mn
Removed jre.tar from /tmp/AppInternals_Xpert_Collector_10.0.00163_Linux_Kit/hedzup/mn

After it extracts files to the temporary directory, the installation displays the following prompts:
 On-premises or SteelCentral SaaS analysis server
 Administrator account


 Installation directory
 Analysis server the agent will connect to (for the SaaS analysis server, the customer ID)
 Whether the agent will use a proxy server (if so, whether it requires authentication)
 Whether to display a summary of options for enabling instrumentation
Choose On-Premises or Saas:

1) On-Premises Analysis Server


2) SteelCentral SaaS Analysis Server
This option requires a SaaS customer ID. If you do not have a customer ID,
do not choose this option, since you cannot complete the installation
without it. Obtain a customer ID by registering here:

http://www.riverbed.com/steelcentral/steelcentral-saas-early-access-signup.html

After registering, you receive a user name and password to log in to the
SaaS analysis server. Your customer ID is on the "Install Agents" screen.

Select one of the options above and press enter [1]:

******************** Starting Riverbed SteelCentral AppInternals Installation ********************

Enter existing user account to be the Riverbed SteelCentral AppInternals administrator:webadm


Where do you want to create the Panorama installation directory:/opt

Riverbed SteelCentral AppInternals will be installed in /opt/Panorama. Continue? [y/n]:y


Please enter the Analysis Server Hostname by nodename, FQDN, or IP address: 10.46.35.245
Will this agent be using a Proxy Server to connect to the Analysis Server? [y/n]:n
Do you want to review options to enable Java monitoring at the end of the install? [y/n]:y
.
.
.

For root installations on Linux only, if you chose to display the options for enabling instrumentation, there
is a final prompt about whether to enable instrumentation system-wide. If you respond yes to the prompt,
the installation enables instrumentation, as follows:
********* Options for Enabling Java Monitoring ("Instrumentation") *********
*** Option 1: ***
Define the JAVA_TOOL_OPTIONS environment variable in your profile file (such as ~/.profile):
export JAVA_TOOL_OPTIONS="-agentpath:/opt/Panorama/hedzup/mn/\$LIB/librpilj.so $JAVA_TOOL_OPTIONS"

*** Option 2: ***


Define the LD_PRELOAD environment variable in your profile file (such as ~/.profile):
export LD_PRELOAD="/opt/Panorama/hedzup/mn/\$LIB/librpil.so $LD_PRELOAD"

*** Option 3: ***


Modify JVM startup options to add -agentpath:
64-bit JVM:
-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so
32-bit JVM:
-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj.so

*** Option 4: ***


Enable instrumentation automatically system-wide. This is
easiest but adds an entry to the /etc/ld.so.preload system file.
Enable instrumentation automatically system-wide? [no]: yes
Process injection already disabled.
Successfully uninstalled process injection library.


Successfully installed process injection library.


Successfully enabled process injection.

Riverbed SteelCentral AppInternals install completed successfully.

******************** Riverbed SteelCentral AppInternals Post Install Steps ********************

1) Start the Riverbed SteelCentral Agent and Sub-Agents. Log in as 'webadm' and run:
/opt/Panorama/hedzup/mn/bin/agent start

Installing as Non-root
Respond to prompts with the “Information Required for Installation“. The prompts indicate default values
in brackets []. The following example shows required input and responses in bold.
-bash-4.1$ ./AppInternals_Agent_10.0.00176_Linux

Riverbed SteelCentral AppInternals Setup Menu:

1) Install
2) Upgrade
3) Exit

Select one of the options above and press enter:

1
The installation creates a temporary directory (AppInternals_Xpert_Collector_10.0.00176_Linux_Kit). Where do you want to create it [/tmp]?

At this point, the installation extracts files to the temporary directory and generates several messages:
Riverbed SteelCentral AppInternals tar-file extracted successfully
Uncompressed instrumentation.tar.gz in /tmp/AppInternals_Xpert_Collector_10.0.00176_Linux_Kit
Unpacking instrumentation...
.
.
.

After it extracts files to the temporary directory, the installation displays remaining prompts:
 On-premises or SteelCentral SaaS analysis server
 Installation directory
 Analysis server the agent will connect to (for the SaaS analysis server, the customer ID)
 Whether the agent will use a proxy server (if so, whether it requires authentication)
 Whether to display a summary of options for enabling instrumentation
Choose On-Premises or Saas:

1) On-Premises Analysis Server


2) SteelCentral SaaS Analysis Server
This option requires a SaaS customer ID. If you do not have a customer ID,
do not choose this option, since you cannot complete the installation
without it. Obtain a customer ID by registering here:

http://www.riverbed.com/steelcentral/steelcentral-saas-early-access-signup.html

After registering, you receive a user name and password to log in to the
SaaS analysis server. Your customer ID is on the "Install Agents" screen.

Select one of the options above and press enter [1]:


******************** Starting Riverbed SteelCentral AppInternals Installation ********************

Where do you want to create the Panorama installation directory:/home/webadm

Riverbed SteelCentral AppInternals will be installed in /home/webadm/Panorama. Continue? [y/n]:y


Please enter the Analysis Server Hostname by nodename, FQDN, or IP address: 10.46.35.218
Will this agent be using a Proxy Server to connect to the Analysis Server? [y/n]:n
Do you want to review options to enable Java monitoring at the end of the install? [y/n]:y

After you respond to those prompts, the installation completes without further intervention. At the end, it
summarizes options for enabling instrumentation (if you chose to display them) and other
“Post-Installation Tasks“.
*** Option 1: ***
Define the JAVA_TOOL_OPTIONS environment variable in your profile file (such as ~/.profile):
export JAVA_TOOL_OPTIONS="-agentpath:/opt/Panorama/hedzup/mn/\$LIB/librpilj.so $JAVA_TOOL_OPTIONS"

*** Option 2: ***


Define the LD_PRELOAD environment variable in your profile file (such as ~/.profile):
export LD_PRELOAD="/opt/Panorama/hedzup/mn/\$LIB/librpil.so $LD_PRELOAD"

*** Option 3: ***


Modify JVM startup script to add -agentpath:
64-bit JVM:
-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so
32-bit JVM:
-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj.so

*** Option 4: ***


Enable instrumentation automatically system-wide. This is
easiest but adds an entry to the /etc/ld.so.preload system file.
Run /opt/Panorama/install_mn/install_root_required.sh as root to enable instrumentation system-wide

Riverbed SteelCentral AppInternals install completed successfully.

******************** Riverbed SteelCentral AppInternals post install steps ********************

1) Log in as root and run /opt/Panorama/install_mn/install_root_required.sh


This script prompts to:
- Change permissions to run the NPM sub-agent (recommended, SaaS analysis server only)
- Enable automatic startup on system reboot (recommended)
- Enable automatic Java instrumentation system-wide (optional)
2) Start the Riverbed SteelCentral Agent and Sub-Agents. Run:
/opt/Panorama/hedzup/mn/bin/agent start

bash-4.1$

Post-Installation Tasks
This section describes tasks to complete after installation.


Non-Root Installs: Run install_root_required.sh


If you ran the installation script from an account other than root, you must run
<installdir>/Panorama/install_mn/install_root_required.sh after the installation completes.
The script prompts you for the following:
 “Enabling NPM Sub-Agent (SaaS Analysis Server Only)“
 “Enabling Agent Software to Start on System Startup“
 “Enabling Instrumentation (Agent)“

Enabling NPM Sub-Agent (SaaS Analysis Server Only)


If you installed the agent software from the root account, the installation script automatically performs this
step.
This option in the install_root_required.sh script enables the “NPM Sub-Agent“. You must enable the
sub-agent for it to report network data. This option applies only on Linux agents that connect to the
Software as a Service (SaaS) offering of the analysis server (in other words, if you chose SaaS in the
“On-premises or SaaS?“ prompt). Otherwise, it has no effect.
This option sets ownership and permissions on the npm_agent and npm_worker executable files in
<installdir>/Panorama/hedzup/mn/bin. It changes the owner to root and sets the set user ID (SUID)
permission bit. For example:
[root@nhv1-rh6-2 install_mn]# ./install_root_required.sh
Change permissions to run the NPM sub-agent (recommended, SaaS analysis server only)? [y|n]: y
.
.
.
Owner of /opt/Panorama/hedzup/mn/bin/npm_agent changed to 'root'
Modified 'setuid' for /opt/Panorama/hedzup/mn/bin/npm_agent
Owner of /opt/Panorama/hedzup/mn/bin/npm_worker changed to 'root'
Modified 'setuid' for /opt/Panorama/hedzup/mn/bin/npm_worker
[root@nhv1-rh6-2 install_mn]# ls -al ../hedzup/mn/bin/npm_*
-rwsr-xr-x 1 root ccusers 6814575 Nov 14 12:53 ../hedzup/mn/bin/npm_agent
-rwsr-xr-x 1 root ccusers 7156433 Nov 14 12:53 ../hedzup/mn/bin/npm_worker

Enabling Agent Software to Start on System Startup


If you installed the agent software from the root account, the installation script automatically performs this
step.
This option in the install_root_required.sh script configures an initialization script that starts the agent
software automatically when the system reboots. For example:
[root@nhv1-rh6-2 install_mn]# ./install_root_required.sh
Change permissions to run the NPM sub-agent (recommended, SaaS analysis server only)? [y|n]: n
Enable automatic startup on system reboot (recommended)? [y|n]: y
Enable automatic Java instrumentation system-wide (optional)? [y|n]:n
Installing the system-level automatic startup/shutdown script and links...
Successfully added appinternals as a service


Starting the Agent Software


The installation script does not start the agent software. Log into the user account specified during the
installation and use the agent start command:
bash-4.1$ ./agent start
Starting the Agent...
[started] PID: 27431 Name: dsa Command: [./dsa, -d]
[started] PID: 27436 Name: agentrt Command: [./agentrt]
[started] PID: 27449 Name: npm Command: [./npm_agent]
[started] PID: 27459 Name: osda Command: [./os_agent]

See the “Controlling Agent Software on Unix-Like Systems“ configuration topic for details on the agent utility.

Configuring Processes to be Monitored


Once the agent installation is complete, there is additional configuration to “instrument” (load the APM
monitoring library) and generate performance data for processes you want to monitor.

Enabling Instrumentation (Agent)


If you did not select this option during a Linux installation or installed on UNIX, you must manually enable
instrumentation for Java and .NET Core processes before they can be instrumented and monitored. For
more information, see “Enabling Instrumentation of Java Processes“ and “Enabling Instrumentation of
.NET Processes“.


Instrumenting and Monitoring Processes (Analysis Server)


You specify which processes you want to instrument in the analysis server web interface. Log in using the
host name or IP address specified during the analysis server installation:
http://<Analysis_Server_UI>

Log in with the user name of admin and the default password of riverbed-default.
In the Configure menu, navigate to the “Agent List Screen“. You should see the agent system listed with the icon for new agents. Click the server that is running an application that you want to monitor. In the Agent Details screen, click the edit icon in the row for the application to open the “Edit Processes to Instrument Dialog“. Click the Instrument option and Save.

Agent System: Restarting the Application


Restart the processes you instrumented in the previous step. This loads the APM monitoring library and
begins generating application performance data. The steps to restart vary by application.

Troubleshooting Instrumentation
For information on resolving issues with instrumentation, see “Instrumentation Techniques and
Troubleshooting“.

Confirming Application User Access to the Panorama/ Directory


As described in the System Requirements document (available from the APM support page), users that start
applications must have read and write access to some APM directories. Although the installation creates
the Panorama/ directory tree with appropriate group access, the application users may not have access to
the parent directory of the Panorama/ directory even if they are in the same group as the administrative
user.


Check to make sure that each user that starts an application can access the Panorama/ directory. For each
user, log in as that user and verify that the user has access to the Panorama folder. In this example, the jboss
user does not:
[jboss]$ ls -l /home/webadm/Panorama
ls: cannot access /home/webadm/Panorama: Permission denied
[jboss]$ ls -l /home/webadm
ls: cannot access /home/webadm: Permission denied
[jboss]$ ls -l /home
drwxr-xr-x. 4 jboss jboss 4096 Jul 26 17:33 jboss
drwx------. 3 webadm webadm 4096 Jul 26 17:30 webadm
drwxr-xr-x. 2 weblogic weblogic 4096 Jul 26 17:34 weblogic

Log in as the user that owns the parent directory (webadm in this example) and correct the permissions as
necessary. For example:
[webadm]$ chmod 770 /home/webadm

Synchronizing Time with the Analysis Server


The APM installation assumes that the system time on all systems (agents and analysis server) is
synchronized. Before the agent begins generating performance data, make sure time is synchronized as
described in the “Synchronizing Agent Time With the Analysis Server“ topic.
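The skew check itself can be scripted. The sketch below is an illustration, not part of the product: it converts an HTTP Date header (such as the one the analysis server returns) into epoch seconds and compares it with the local clock. It assumes GNU date.

```shell
# Sketch: clock skew, in seconds, between this host and an HTTP Date
# header value (one-second resolution; GNU date assumed).
skew_seconds() {
  # $1: an RFC 1123/2822 date string, e.g. "Tue, 04 Jun 2024 12:00:00 GMT"
  echo $(( $(date -ud "$1" +%s) - $(date -u +%s) ))
}

# Against a live analysis server, you would feed it the header from:
#   curl -sI http://<Analysis_Server_UI> | sed -n 's/^[Dd]ate: //p'
# Self-check: comparing the local clock with itself yields (about) zero.
skew_seconds "$(date -u -R)"
```

If the absolute skew is more than a second or two, synchronize the clocks (for example, with NTP or chrony) before relying on agent data.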

Reviewing the Log File


The installation script creates log files in the <installdir>/Panorama/hedzup/mn/temp directory. You can
review the log files to confirm details of the installation or troubleshoot problems. The file name for the log
file is AppInternals_Xpert_install.<timestamp>.
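For example, a quick way to pull up the most recent installation log (a sketch; the path assumes the default /opt install directory, so adjust LOGDIR to match your <installdir>):

```shell
# Find the newest agent installation log, if any.
# Assumption: default install directory /opt; override LOGDIR as needed.
LOGDIR="${LOGDIR:-/opt/Panorama/hedzup/mn/temp}"
latest=$(ls -t "$LOGDIR"/AppInternals_Xpert_install.* 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  echo "Most recent install log: $latest"
else
  echo "No install logs found in $LOGDIR"
fi
```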

Removing Agent Software


Before removing the agent, you must manually stop Java applications that APM is monitoring. Check the
web interface “Agent Details Screen“ to see which processes you need to stop. On the agent, make sure you
stop processes that show as Instrumented.
You must be logged in as root or as the user specified as the agent administrator (see “Installing Agent
Software“) to remove or upgrade agent software. However, with a non-root user, the installation script first
checks for the presence of files that require root access to delete. If any of the following files are present, the
installation script will not allow non-root users to proceed:
 On Linux only, the librpil.so shared object in the /lib and /lib64 system directories. These files allow
automatic instrumentation of Java applications when they start.
 The appinternals script in the /etc/init.d/ system directory. This script starts APM automatically
when the system reboots.
In this case, you must either run the installation script as root, or run the script
<installdir>/Panorama/uninstall_mn/uninstall_root_required.sh as root. This script removes the files
requiring root access. After you run uninstall_root_required.sh as root, you can remove or upgrade the
agent as the non-root agent administrator.
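Before attempting a non-root uninstall or upgrade, you can check for these files yourself. This sketch (not part of the product) mirrors the installer's check on Linux:

```shell
# Check for installer-created files that only root can delete (Linux paths
# from the list above).
needs_root=0
for f in /lib/librpil.so /lib64/librpil.so /etc/init.d/appinternals; do
  if [ -e "$f" ]; then
    echo "found: $f"
    needs_root=1
  fi
done
if [ "$needs_root" -eq 1 ]; then
  echo "Run uninstall_root_required.sh as root before a non-root uninstall."
else
  echo "No root-owned APM files found; a non-root uninstall can proceed."
fi
```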
To remove agent software, run the uninstall script in the <installdir>/Panorama/uninstall_mn directory.
[root@nhv1-rh6-2 uninstall_mn]# ./uninstall
Do you want to uninstall Riverbed SteelCentral AppInternals 10.6.0.607? [y/n]:y


Before making any changes, the uninstall checks for running Java processes that APM is monitoring. You
must manually stop such instrumented processes before you can continue. If the uninstall detects any, it
lists their process identifiers. For example:
The uninstall script found process that AppInternals is monitoring. You must stop the process to
continue.

PID 11611

Answer 'y' to continue or 'n' to terminate Riverbed SteelCentral AppInternals uninstall [y/n]:y

Stop the processes before continuing. The uninstall should finish without further intervention.
Successfully disabled process injection.
Process injection already disabled.
Successfully uninstalled process injection library.

Stopping all Riverbed SteelCentral AppInternals components...


stopping the DSA...
stopping the DSA...
stopping the DSA...
stopping the DSA...
[ok]
[stopped] dsa
[stopped] os_agent
[stopped] agentrt_agent
[stopped] npm_agent
Uninstalling the system-level automatic startup/shutdown script and links...
Successfully disabled appinternals service
/etc/init.d/appinternals was removed successfully

Riverbed SteelCentral AppInternals uninstall completed successfully.

******************** Riverbed SteelCentral AppInternals Post Uninstall Steps ********************


Stop all Java processes that were being monitored by AppInternals (they lock files you want to remove).

Change to different directory and run /bin/rm -rf /opt/Panorama to completely remove Riverbed SteelCentral AppInternals files

The uninstall script deletes the <installdir>/Panorama/hedzup/mn directory and its descendants. You
must manually delete the Panorama directory and any remaining descendants.

Upgrading Agent Software


Upgrades to Version 10.0 from earlier versions of APM agent software (called “managed node” or
“collector”) are not supported.

Note: Before upgrading agents, refer to the release notes for any upgrade-specific information.

The Upgrade option of the installation script replaces executable and program files for agent software while
preserving configuration settings and data. For any version-specific upgrade considerations, see the
Release Notes available from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html


Before upgrading the agent, you must manually stop Java applications that APM is monitoring. Check the
web interface “Agent Details Screen“ to see which processes you need to stop. On the agent, you must stop
processes that show as Instrumented.
You must be logged in as root or as the user specified as the agent administrator (see “Installing Agent
Software“) to remove or upgrade agent software. However, with a non-root user, the installation script first
checks for the presence of files that require root access to delete. If any of the following files are present, the
installation script will not allow non-root users to proceed:
 On Linux only, the librpil.so shared object in the /lib and /lib64 system directories. These files allow
automatic instrumentation of Java applications when they start.
 The appinternals script in the /etc/init.d/ system directory. This script starts APM automatically
when the system reboots.
In this case, you must either run the installation script as root, or run the script
<installdir>/Panorama/uninstall_mn/uninstall_root_required.sh as root. This script removes the files
requiring root access. After you run uninstall_root_required.sh as root, you can remove or upgrade the
agent as the non-root agent administrator.
As described in “Installing Agent Software“, decompress the installation script before upgrading. Choose
the Upgrade option, and the script prompts for the location of a temporary directory to extract files. The
following example shows required input and responses in bold.
bash-4.1$ ./AppInternals_Agent_10.6.0.607_Linux

Riverbed SteelCentral AppInternals Setup Menu:

1) Install
2) Upgrade
3) Exit

Select one of the options above and press enter:

2
The installation creates a temporary directory (AppInternals_Agent_10.6.0.607_Linux_Kit). Where do
you want to create it? [/tmp]:

At this point, the upgrade extracts files to the temporary directory. It then prompts you to continue:
Stop all Java processes that were being monitored by AppInternals (they lock files you want to remove).
Do you want to upgrade Riverbed SteelCentral AppInternals from 10.6.0.600 to 10.6.0.607? [y/n]:y

Before making any changes, the upgrade checks for running Java processes that APM is monitoring. You
must manually stop such instrumented processes before you can continue. If the upgrade detects any, it lists
their process identifiers. For example:
The upgrade script found process that AppInternals is monitoring. You must stop the process to
continue.

PID 11611

Answer 'y' to continue or 'n' to terminate Riverbed SteelCentral AppInternals upgrade [y/n]:y

Stop the processes before continuing. The upgrade should finish without further intervention.
Stopping all Riverbed SteelCentral AppInternals components...
stopping the DSA...
stopping the DSA...
[ok]
[stopped] dsa
[stopped] os_agent

[stopped] agentrt_agent
[stopped] npm_agent
Successfully disabled process injection.
Process injection already disabled.
Successfully uninstalled process injection library.
Riverbed SteelCentral AppInternals file transfer to /opt/Panorama/ was successful
Installing the system-level automatic startup/shutdown script and links...
Successfully added appinternals as a service
Process injection already disabled.
Successfully uninstalled process injection library.
Successfully installed process injection library.
Successfully enabled process injection.

Riverbed SteelCentral AppInternals upgrade completed successfully.

******************** Riverbed SteelCentral AppInternals post upgrade steps ********************

1) Start the Riverbed SteelCentral Agent and Sub-Agents. Log in as 'webadm' and run:
/opt/Panorama/hedzup/mn/bin/agent start

bash-4.1$

Monitoring Docker Containers


Docker is open-source software that automates the deployment of Linux applications inside software
containers. You customize base Docker images to create new images that add applications and make any
other changes you want. You then create a container that uses the new image.
To monitor applications that run in Docker containers, install the APM agent on the Docker host system.
The single agent installation can monitor any number of containers running on the host. For details, see
“Monitoring Docker Containers“ in the “Deploying Agents in Cloud-Native Environments“ configuration
topic.



CHAPTER 8 Unattended Agent Installation:
Unix-Like Operating Systems

This section describes how to automate the installation of agent software by specifying options in a
response file and invoking the agent installer with the -silentinstall and -silentupgrade arguments.

Installation Prerequisites
For a list of installation prerequisites, see “Installation Prerequisites“ in “Agent Installation: Unix-Like
Operating Systems“.

Downloading and Decompressing the Agent Installation


Download the agent installation from the APM download page: 
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html.
The installation script is a gzip-compressed file named as follows:
 AIX: AppInternals_Agent_<version>_Aix.gz
 Linux: AppInternals_Agent_<version>_Linux.gz
 Solaris: AppInternals_Agent_<version>_Solaris.gz
Examples in this section are for Linux.
Decompress the file with gunzip or a similar utility. This creates the same file name with no extension. For
example:
[root@nhv1-rh6-1 tmp]# ls -al AppInternals_Agent_10.0.1.7_Linux*
-rwxr-xr-x 1 root root 82281878 Jun 25 11:39 AppInternals_Agent_10.0.1.7_Linux.gz
[root@nhv1-rh6-1 tmp]# gunzip AppInternals_Agent_10.0.1.7_Linux.gz
[root@nhv1-rh6-1 tmp]# ls -al AppInternals_Agent_10.0.1.7_Linux*
-rwxr-xr-x 1 root root 82550447 Jun 25 11:39 AppInternals_Agent_10.0.1.7_Linux
[root@nhv1-rh6-1 tmp]#


Creating a Response File


The -silentinstall argument to the agent installer requires that you specify the name of a response file. This
section describes extracting the APM kit to a temporary directory and adapting a template to create the
response file.
However, once you have a response file, you can simply use it by invoking the agent installer directly
without extracting kit files.

1) Use the -extract argument with the agent installer to extract the kit contents to a temporary directory
you specify. For example, to extract to /tmp:
[root@nhv1-rh6-1 tmp]# ./AppInternals_Agent_10.0.1.7_Linux -extract 
The installation creates a temporary directory (AppInternals_Agent_10.0.1.7_Linux_Kit). Where
do you want to create it? [/tmp]:/tmp 
Riverbed SteelCentral AppInternals tar-file extracted successfully
.
.
.
[root@nhv1-rh6-1 tmp]#

2) Edit the install_mn/install.properties.template file in the extracted directory. You must at least supply
values for the “INSTALLDIR“ and (for on-premises analysis servers)
“O_SI_ANALYSIS_SERVER_HOST“ options.

3) Save the file to a convenient location and name. For example, /tmp/install.properties.
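As a concrete illustration, a deployment script could generate a minimal response file like this. The host, port, and install directory shown are placeholder values; the option names and value format are those documented in “Installation Options Reference“.

```shell
# Write a minimal response file for an unattended on-premises install.
# All values below are examples; adjust them for your environment.
cat > /tmp/install.properties <<'EOF'
INSTALLDIR="/opt"
O_SI_ANALYSIS_SERVER_HOST="myserver.example.com"
O_SI_ANALYSIS_SERVER_PORT="8051"
O_SI_ANALYSIS_SERVER_SECURE="false"
O_SI_USERACCOUNT="webadm"
EOF
```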

Enabling the Instrumentation of Java and .NET Core Processes

APM can enable the instrumentation of Java and .NET Core processes during a Linux root
installation if you set the “O_SI_AUTO_INSTRUMENT“ option to “true.”
For a list of supported Linux systems, see the System Requirements on the support page.

Note: Enabling instrumentation during a UNIX installation is not supported.

If you do not set the “O_SI_AUTO_INSTRUMENT“ option to “true”, are performing a non-root
installation on Linux, or are installing on a UNIX system, you must manually enable the instrumentation of
Java and .NET Core processes after the installation completes.
For more information, see “Enabling Instrumentation of Java Processes“ and “Enabling Instrumentation of
.NET Processes“.

Running the Installer With the Response File


The following example shows running the install script from the extracted install_mn directory. It uses the
-silentinstall argument and specifies the response file /tmp/install.properties:


[root@nhv1-rh6-1 install_mn]# ./install -silentinstall /tmp/install.properties

If you already have a response file available, you do not have to extract kit files. Simply specify the agent
installer with the -silentinstall argument and the response file. For example:
[root@nhv1-rh6-1 tmp]# ./AppInternals_Agent_10.0.00176_Linux -silentinstall ./install.properties
Uncompressed instrumentation.tar.gz in //AppInternals_Xpert_Collector_10.0.00176_Linux_Kit
.
.
.

Upgrading an Existing Installation with -silentupgrade

Note: Before upgrading, refer to the release notes for any upgrade-specific information.

Use the -silentupgrade argument to upgrade an existing installation. As with a new installation, you need
a response file. Extracting the kit as described in “Creating a Response File“ also creates a template for
upgrades that you can adapt, in install_mn/upgrade.properties.template.
Edit that template file to change any properties you want and save it to a convenient location. For example,
/tmp/upgrade.properties. Run the upgrade installer with the -silentupgrade argument and the response
file.
The following example shows running the upgrade script (NOT the install script) from the extracted
install_mn directory. It uses the -silentupgrade argument and specifies the response file
/tmp/upgrade.properties:
[root@127 tmp]# ./AppInternals_Agent_10.11.0.588_Linux_Kit/install_mn/upgrade -silentupgrade
./upgrade.properties 
Uncompressed instrumentation.tar.gz in /tmp/AppInternals_Agent_10.11.0.588_Linux_Kit

.
.
.

Or, you can simply specify the agent installer with the -silentupgrade argument and the response file:
[root@110 tmp]# ./AppInternals_Agent_10.11.0.588_Linux -silentupgrade upgrade.properties

Installation Options Reference


Some of the options correspond to prompts in the interactive installer and accept the same values as
described in “Information Required for Installation“.
Enclose option values in double-quotation marks.


INSTALLDIR

Description Specifies the destination directory for the agent installation. The installation script creates the
Panorama directory and subdirectories in the parent directory you supply. If you run the
installation from an account other than root, the account must have privileges to create
directories in this directory, or the installation fails.

Valid Values Any valid directory

Default Value none

Example INSTALLDIR="/opt"

O_SI_ANALYSIS_SERVER_HOST

Description Specifies the location of the on-premises APM analysis server. (This option has no effect for
installations that specify the SaaS analysis server by setting
“O_SI_SAAS_ANALYSIS_SERVER_ENABLED“.) The agent connects to this system to
initiate communications. Use this option for new installations only. See “Changing the
Agent’s Analysis Server“ for instructions on changing this value after the agent has been
installed.

Valid Values The system name, fully-qualified domain name (FQDN), or IP address for the analysis
server.

Default Value none

Example O_SI_ANALYSIS_SERVER_HOST="myserver.example.com" 
O_SI_ANALYSIS_SERVER_HOST="10.46.35.218"

O_SI_ANALYSIS_SERVER_PORT

Description Specifies the port on which the APM analysis server is listening. The agent connects to this
port to initiate communications. See “Changing the Agent’s Analysis Server“ for instructions
on changing this value after the agent has been installed.

Valid Values Valid port number for the analysis server.

Default Value 80

Example O_SI_ANALYSIS_SERVER_PORT="8051"

O_SI_ANALYSIS_SERVER_SECURE

Description Specifies whether the connection to the APM analysis server should be secure.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_ANALYSIS_SERVER_SECURE="true"


O_SI_AUTO_INSTRUMENT

Description Supported only for root installations on Linux. Specifies whether to enable
automatic system-wide instrumentation of Java and .NET Core
processes on supported Linux systems. For a list of supported Linux systems, see the System
Requirements on the support page.
If you do not set this option to true, you will need to enable instrumentation manually after
the installation completes. For more information, see “Enabling Instrumentation of Java
Processes“ and “Enabling Instrumentation of .NET Processes“.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_AUTO_INSTRUMENT="true"

O_SI_CUSTOMER_ID

Description This option applies only if “O_SI_SAAS_ANALYSIS_SERVER_ENABLED“ is set to “true”. It
specifies the Customer ID you received from Aternity.

Valid Values Hexadecimal string identifier provided by Aternity.

Default Value none

Example O_SI_CUSTOMER_ID="5e6f1281-162d-11a6-a7c8-ab267ed63dc8"

O_SI_EXTRACTDIR

Description Specifies the directory to temporarily extract files for the agent installation.

Valid Values Any existing directory to which the user has write permission.

Default Value /tmp

Example O_SI_EXTRACTDIR="~/"

O_SI_PROXY_SERVER_AUTHENTICATION

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ is set to “true”. It specifies
whether the proxy server requires authentication.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_PROXY_SERVER_AUTHENTICATION="true"


O_SI_PROXY_SERVER_ENABLED

Description Whether the agent will use a proxy server to connect to the analysis server system.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_PROXY_SERVER_ENABLED="true"

O_SI_PROXY_SERVER_HOST

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ is set to “true”. It specifies
the location of the proxy server.

Valid Values The system name, fully-qualified domain name (FQDN), or IP address for the proxy server.

Default Value none

Example O_SI_PROXY_SERVER_HOST="myproxyserver.example.com" 
O_SI_PROXY_SERVER_HOST="10.46.35.238"

O_SI_PROXY_SERVER_PASSWORD

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ and
“O_SI_PROXY_SERVER_AUTHENTICATION“ are set to “true”. It specifies the password
for the user.

Valid Values Password for the proxy user.

Default Value none

Example O_SI_PROXY_SERVER_PASSWORD="myproxypassword"

O_SI_PROXY_SERVER_PORT

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ is set to “true”. It specifies
the port on which the proxy server is listening.

Valid Values Valid port number for the proxy server.

Default Value none

Example O_SI_PROXY_SERVER_PORT="8080"

O_SI_PROXY_SERVER_REALM

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ and
“O_SI_PROXY_SERVER_AUTHENTICATION“ are set to “true”. It specifies the
authentication realm for the proxy server, if applicable.

Valid Values Name of an authentication realm.

Default Value none

Example O_SI_PROXY_SERVER_REALM="myproxyrealm"

O_SI_PROXY_SERVER_USER

Description This option applies only if “O_SI_PROXY_SERVER_ENABLED“ and
“O_SI_PROXY_SERVER_AUTHENTICATION“ are set to “true”. It specifies a user for the
proxy server.

Valid Values Valid user name for the proxy server.

Default Value none

Example O_SI_PROXY_SERVER_USER="myproxyuser"

O_SI_SAAS_ANALYSIS_SERVER_ENABLED

Description Specifies whether the agent will connect to the Software as a Service (SaaS) offering of the
analysis server.

Valid Values true false (values are not case sensitive)

Default Value false

Example O_SI_SAAS_ANALYSIS_SERVER_ENABLED="true"

O_SI_USERACCOUNT

Description Specifies the user name for the existing account you will use to run the agent and sub-agents
on this system. See the “Installing Agent Software“ prerequisite for more details.
This value is ignored unless you run the installation script as root. (If you run it as a
non-root user, that user is automatically the administrative user.)

Valid Values Any valid user name (the account must have privileges to create directories in the directory
specified by “INSTALLDIR“).

Default Value No default for root installs; <currentuser> for non-root installs

Example O_SI_USERACCOUNT="john.doe"
O_SI_USERACCOUNT="jdoe"



CHAPTER 8 Deploying Agents in Cloud-Native
Environments

This chapter describes how to deploy APM agents in cloud-native environments. In these environments,
servers that APM monitors can be cloned many times, are created and destroyed frequently, and each
instantiation typically has a different host name and IP address.
The following sections explain how to deploy the agent in supported cloud-native environments:
– “Configuring Processes Running in PaaS Environments to Be Instrumented on Initial Startup“
– “Creating tags with the tags.yaml File for Agents Running in PaaS Environments“
– “Monitoring Docker Containers“
– “Monitoring Kubernetes and OpenShift Environments When Installing Agents on Individual
Nodes“
– “Deploying The Agent on Kubernetes Using a DaemonSet“
– “Monitoring VMware Tanzu Applications“

Configuring Processes Running in PaaS Environments to Be Instrumented on Initial Startup

In Platform-as-a-Service (PaaS) environments (such as container environments), applications are deployed
before the virtual machines are actually provisioned and started.
In such environments, deployment scripts are responsible for installing and configuring applications that
the virtual machine will host. For APM to monitor such an application, those scripts need to also install
agent software and configure processes for instrumentation.
This section describes configuring agents so that processes you want to monitor will be automatically
instrumented when they first start on the agent system. This configuration requires both of the following
tasks:
 Download a configuration file from the analysis server that has the specific configuration settings for
the application processes that you want to monitor. See “Creating Configuration Files for Agents
Running in PaaS Environments“.
 Create a mapping file that specifies a process name and corresponding configuration file. See
“Mapping Processes to Configuration Files“.
For PaaS environments, write deployment scripts that create configuration and mapping files as described
here. This approach to configuration is required only in PaaS environments. However, it can be useful in

any environment where you want to avoid initial configuration in the analysis server interface and having
to restart processes.

Note: After Initial Startup, Make Changes in the Analysis Server Interface 
Creating configuration and mapping files as described here is useful only for the initial agent startup. After
the initial startup, the agent ignores those files if any user changes the corresponding process
configuration in the “Agent Details Screen“ or configuration settings in the “Define a Configuration
Screen“.

Creating Configuration Files for Agents Running in PaaS Environments


Create configurations in the analysis server interface “Define a Configuration Screen“ with the settings you
want to apply to processes. Use the Edit menu’s “Download“ option to create configuration files with those
settings. The Download operation creates a file using the name <configuration_name>.json.

Copy the configuration files to the <installdir>/Panorama/hedzup/mn/userdata/config directory on the
agent system. As described in “Creating the initial-mapping File“, the config: property in the
initial-mapping file specifies a configuration file name.

Mapping Processes to Configuration Files


The initial-mapping file specifies processes that you want to instrument when the agent starts and a
corresponding configuration file.

Identifying Process Names


When the AppInternals agent starts, it discovers processes that are running on the system and assigns them
names. The include: property in the initial-mapping file must specify a value that will match this name.
Use the following include: values to match commonly-encountered processes:

This include: Value Matches Processes For

Websphere_* IBM WebSphere Application Server

Weblogic_* Oracle WebLogic Server

IIS_* Typical Microsoft Internet Information Services (IIS) application pools

IIS* General entry for IIS. (Simply use IIS* if you do not need to differentiate between IIS variants.)

Tomcat* Apache Tomcat

BizTalk* Microsoft BizTalk Server

JBoss* JBoss application server (both standalone and cluster)

For other processes, you can find the process name in the analysis server interface. Look in the Discovered
As value for the processes in the “Agent Details Screen“.


Creating the initial-mapping File


The agent installation creates a template file named initial-mapping.template in the
<installdir>\Panorama\hedzup\mn\userdata\config directory. The template contains commented-out
examples with the process names listed in “Identifying Process Names“. Adapt the file to suit your
purposes and save it as initial-mapping.
The initial-mapping file specifies process and configuration file names as include config property pairs:
include:processname config:configurationfilename

Note the following:


 You can specify a wildcard at the start or end of a process name:
include:Tomcat*

include:*glassfish*

 Do not put spaces between the include and config properties, the colon ( : ) separator, and the
corresponding names.
 The configuration file name can include the .json file extension, but it is not required. If the file name
includes spaces, enclose it in quotation marks ( " ). Otherwise, quotation marks are not allowed.
 Precede comments with the hash character (number sign) #. Any line starting with # is ignored.
The sample file has a single entry that instruments IIS processes. Adapt the sample to suit your purposes.
The following example changes the configuration file for IIS processes and adds an entry for the prunsrv
process. Both entries specify configuration files created using the analysis server interface and downloaded
as described in “Creating Configuration Files for Agents Running in PaaS Environments“:
include:IIS_* config:EUE_enabled.json
include:prunsrv config:default_config.json
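In a PaaS deployment script, the same file can be generated at provision time. The sketch below mirrors the example entries above; the relative CONFIG_DIR is used here only so the sketch is self-contained — on an agent the directory is <installdir>/Panorama/hedzup/mn/userdata/config.

```shell
# Generate an initial-mapping file from a deployment script.
# Assumption: CONFIG_DIR stands in for <installdir>/Panorama/hedzup/mn/userdata/config.
CONFIG_DIR="${CONFIG_DIR:-./Panorama/hedzup/mn/userdata/config}"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/initial-mapping" <<'EOF'
# Lines starting with # are ignored.
include:IIS_* config:EUE_enabled.json
include:prunsrv config:default_config.json
EOF
```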

Creating tags with the tags.yaml File for Agents Running in PaaS Environments

Typically, you create tags in the analysis server interface, in the “Agent Details Screen“. See the “Tags“
section in that topic for details on using tags.
This section describes creating tags by editing a file on the agent system. This approach is less convenient
than using the analysis server interface, because you must have access to the agent system. But it is useful
in the following cases:
 In dynamic environments where you want tags to take effect when agents first start.
 Agents earlier than APM Version 10.9 do not support creating tags in the analysis server interface.
Specify tags as key-value pairs in the <installdir>\Panorama\hedzup\mn\userdata\config\tags.yaml file
on the agent system.

Creating the tags.yaml File


For APM agents Version 10.9 and later, creating tags in the “Agent Details Screen“ generates the tags.yaml
file.


In addition, the agent installation creates a template file named tags.yaml.template in the
<installdir>\Panorama\hedzup\mn\userdata\config\ directory. Adapt the file to suit your purposes and
save it as tags.yaml.
You do not need to restart the agent or (for Version 10.9 agents and later) applications for changes to
tags.yaml to take effect.

Note: Agents Earlier than Version 10.9 Propagate Tags Less Frequently 
For Agents earlier than Version 10.9, changes to tags do not propagate to transaction trace files (the source
of data for the Application Map and Search tabs and the Transaction Details window) until the sub-agent
starts a new trace file or the application is restarted.

tags.yaml Syntax
Specify tags using this syntax:
key : value

Specify multiple values for a key by enclosing a comma-separated list in brackets:


key : [ value1, value2, value3 ]

Note the following:


 You must supply a space between the key and the colon ( : ), and the colon and the value.
 Keys and values can contain letters, numbers, and spaces. Other characters are not allowed.
 Precede comments with the hash character (number sign) #. Any line starting with # is ignored.
 Specify an empty string for a value as "".
 Keys are case sensitive. Tag1 : foo and tag1 : foo result in separate tags.
 Changes to the tags.yaml file take effect without restarting the agent.

tags.yaml Examples
The following examples show some tags. Adapt them to create the tags.yaml file:
# Spaces are allowed in both the key and value:
logical server : Tier 1
OS : Windows 7
# Create a tag without a value:
NoValue : ""
# Assign multiple values to a single key:
MultiValue : [one, two]
# Same value for different keys:
FooKey : Bar
FooKey2 : Bar

Here is how the tags in the previous example would appear in the Tags column of the table in the “Servers Tab“.

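As a quick check of the syntax rules above, you can write the example entries to a file and validate them with a short shell sketch. The file location here is arbitrary and stands in for <installdir>/Panorama/hedzup/mn/userdata/config/tags.yaml:

```shell
# Write a tags.yaml using the example entries (path is arbitrary for this sketch).
cat > ./tags.yaml <<'EOF'
# Spaces are allowed in both the key and value:
logical server : Tier 1
OS : Windows 7
# Create a tag without a value:
NoValue : ""
# Assign multiple values to a single key:
MultiValue : [one, two]
# Same value for different keys:
FooKey : Bar
FooKey2 : Bar
EOF

# Verify that every non-comment, non-blank line has a space on both sides of the colon.
bad=0
while IFS= read -r line; do
  case "$line" in
    ''|'#'*) continue ;;                 # skip blanks and comments
  esac
  case "$line" in
    *' : '*) ;;                          # "key : value" form -- OK
    *) echo "malformed: $line"; bad=1 ;;
  esac
done < ./tags.yaml
echo "validation exit: $bad"
```

Running the sketch prints `validation exit: 0` when every entry is well formed.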
Monitoring Docker Containers

Docker is open-source software that automates the deployment of Linux applications inside software
containers. You customize base Docker images to create new images that add applications and make any
other changes you want. You then create a container that uses the new image.
See the following links for more information on Docker:
• Docker overview: https://docs.docker.com/engine/understanding-docker/
• Introductory tutorial: https://docs.docker.com/engine/tutorials/dockerizing/

Running the Agent on the Docker Host or in Containers

APM supports two approaches for monitoring applications that run in Docker containers:
• Install and run the agent on the Docker host. A single installation monitors all the containers you
want on that host. This is the recommended approach. Use it whenever you have access to
the Docker host. See “Recommended: Running the Agent on the Docker Host System“ for details.
• Build a Docker image that installs and runs the agent in every container you want to monitor. Use this
approach only when you do not have access to the Docker host.
This approach has the following drawbacks:
– Slower container startup, since the agent must start before the monitored application
– Additional processes for the agent run in each container
– Higher overall overhead, since potentially many agents run on a single host
– Higher disk space usage for each monitored container
– Requires rebuilding all monitored Docker images to upgrade agent software
See “Alternative: Running the Agent in Docker Containers“ for details.

Recommended: Running the Agent on the Docker Host System

The recommended approach to monitor applications that run in Docker containers is to install the APM
agent on the Docker host system. The single agent installation can monitor any number of containers
running on the host.
This section describes the following:
• “One Time Setup on the Docker Host System“
• “Starting the Instrumented Container with the Panorama Directory Mounted“
• “Troubleshooting“
• “How Docker Containers Appear in the APM Interface“

One Time Setup on the Docker Host System

Installing the Agent

This step is the same as for any agent deployment. See the “Agent Installation: Unix-Like Operating
Systems“ installation topic for details on installing agent software.
Install the agent on a Docker host and specify the analysis server that will receive data from containers
running on that host.
The account that runs the agent must be root (or in the docker group). Otherwise, the agent will not have
privileges to populate all the “Docker Container Tags“.
The Docker host must be running a supported Linux version. See the System Requirements for specific
patches, maintenance levels, and other details of supported operating system versions for installing agent
software. The system requirements are available from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html
After installation, start the agent, as described in “Starting the Agent Software“.
Examples in this section assume the agent is installed in /opt.

Configuring Processes to Be Instrumented on Container Startup

You need to configure files on the Docker host so that processes you want to monitor are automatically
instrumented when their containers start. This configuration requires both of the following tasks, described
in “Configuring Processes Running in PaaS Environments to Be Instrumented on Initial Startup“ earlier in this
section:
• Download a configuration file from the analysis server that has the specific configuration settings for
the application processes that you want to monitor. See “Creating Configuration Files for Agents
Running in PaaS Environments“.
• Create the initial-mapping file that specifies a process name and corresponding configuration file. See
“Mapping Processes to Configuration Files“ for details. For example, here is an initial-mapping file
entry for Tomcat processes using the config.json configuration file:
include:Tomcat* config:config.json
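The two files can be put in place with commands like the following sketch. The directory path mirrors the agent layout used in this section, and the empty config.json is only a placeholder for the file you download from the analysis server:

```shell
# Config directory used by the agent examples in this section (adjust to your install).
CONFIG_DIR=${CONFIG_DIR:-./Panorama/hedzup/mn/userdata/config}
mkdir -p "$CONFIG_DIR"

# config.json must be downloaded from the analysis server; an empty placeholder
# stands in for it in this sketch.
touch "$CONFIG_DIR/config.json"

# Map Tomcat processes to that configuration file.
cat > "$CONFIG_DIR/initial-mapping" <<'EOF'
include:Tomcat* config:config.json
EOF
```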

Creating an Instrumented Version of the Docker Image

To create an instrumented version of the Docker image that you want to monitor, run a script that creates a
Dockerfile, then build the instrumented image with that Dockerfile.

Running the createDockerfile.sh Script

The agent installation creates the script /opt/Panorama/hedzup/mn/bin/createDockerfile.sh on the
Docker host. The script generates a Dockerfile in the directory you specify and copies supporting files to
subdirectories.
The script accepts the following arguments:

Argument              Meaning

Target directory      Directory for the generated Dockerfile and supporting files needed to build the
                      instrumented version of the Docker image. If this directory does not exist, the
                      script prompts whether it should create it.

Image to instrument   An existing Docker image to instrument. The script uses this value as the argument
                      to the FROM instruction in the generated Dockerfile. Include a tag value as
                      appropriate. (Docker documentation on FROM) The script does not require this value,
                      but if you omit it, you must edit the generated Dockerfile and supply it.

User                  The user name specified for the Dockerfile USER instruction in the existing base
                      Docker image. This value is optional if the base image does not specify a USER
                      value. If you omit this argument, the script uses a value of root.
                      (Docker documentation on USER)

Supply argument values on the command line following the -y argument. If omitted, the script prompts for
values.
The following example shows creating an instrumented version of a Docker image,
zeus.run/app/3tier/tomcat. This example runs the script without arguments on the command line so that
it prompts for values.

1) Use the docker images command to see details about the image:
[root@11A opt]# cd /opt/Panorama/hedzup/mn/bin
[root@11A bin]# docker images
REPOSITORY                  TAG   IMAGE ID      CREATED      SIZE
zeus.run/app/3tier/tomcat   2.0   89ee1c3ab841  7 days ago   776.5 MB
[root@11A bin]#

2) Run the script and respond to the prompts. Note that the image value includes the 2.0 tag.
[root@11A bin]# ./createDockerfile.sh
/opt/Panorama/hedzup/mn/bin /opt/Panorama/hedzup/mn/bin
Directory to create Dockerfile in: /opt/docker-instrument-3tier-tomcat
Do you want to create: /opt/docker-instrument-3tier-tomcat [y/N]? y
Specify a Docker image and version for the FROM command (optional): zeus.run/app/3tier/tomcat:2.0
If your Docker image has a USER command, enter its username (optional):

==============================================================================================
Creating: /opt/docker-instrument-3tier-tomcat/Dockerfile
Using: FROM zeus.run/app/3tier/tomcat:2.0
Using: USER root
To create your instrumented image, use a command like this:
docker build -t zeus.run/app/3tier/tomcat-instr:2.0 /opt/docker-instrument-3tier-tomcat
==============================================================================================
[root@11A bin]#

3) Confirm that the script created the specified directory with a Dockerfile and a supporting instr/ subdirectory.
[root@11A bin]# ls -al /opt/docker-instrument-3tier-tomcat
total 16
drwxr-xr-x 3 root root 4096 Jun 6 12:45 .
drwxr-xr-x 6 root root 4096 Jun 8 09:47 ..
-rw-r--r-- 1 root root 993 Jun 6 12:45 Dockerfile
drwxr-xr-x 4 root root 4096 Jun 6 12:45 instr
Here is a parallel example that shows supplying the arguments on the command line following the -y
argument:
[root@11A bin]# ./createDockerfile.sh -y /opt/docker-instrument-3tier-tomcat-CL zeus.run/app/3tier/tomcat:2.0
/opt/Panorama/hedzup/mn/bin /opt/Panorama/hedzup/mn/bin

==============================================================================================
Creating: /opt/docker-instrument-3tier-tomcat-CL/Dockerfile
Using: FROM zeus.run/app/3tier/tomcat:2.0
Using: USER root

To create your instrumented image, use a command like this:
docker build -t zeus.run/app/3tier/tomcat-instr:2.0 /opt/docker-instrument-3tier-tomcat-CL
==============================================================================================

Building the Instrumented Image

Use the docker build command to create the instrumented version of the Docker image. Use the -t option
to specify the name for the new, instrumented image, and specify the directory containing the Dockerfile
that the script generated (Docker documentation on docker build).
The following example uses the original image name with -instr appended. This preserves a recognizable
name and makes it easy to see at a glance whether the instrumented version is running.
[root@11A bin]# docker build -t zeus.run/app/3tier/tomcat-instr:2.0 /opt/docker-instrument-3tier-
tomcat
Sending build context to Docker daemon 1.501 MB
Step 1/4 : FROM zeus.run/app/3tier/tomcat:2.0
.
.
.
---> 89ee1c3ab841
Step 2/4 : USER root
---> Running in 96e13c0285e8
---> 77088499b0da
Removing intermediate container 96e13c0285e8
Step 3/4 : COPY ./instr /opt/Panorama/hedzup/mn/bin/./..
---> bf7a7decdc5c
Removing intermediate container ba08d06db1c0
Step 4/4 : RUN /opt/Panorama/hedzup/mn/bin/./../bin/rpictrl.sh install && /opt/Panorama/hedzup/
mn/bin/./../bin/rpictrl.sh enable
---> Running in 2407888d7bfa
Process injection already disabled.
Successfully uninstalled process injection library.
Successfully installed process injection library.
Successfully enabled process injection.
---> 0c29f7b28e10
Removing intermediate container 2407888d7bfa

Successfully built 0c29f7b28e10

Use the docker images command to confirm that the instrumented image built:
[root@11A bin]# docker images
REPOSITORY                        TAG   IMAGE ID      CREATED          SIZE
zeus.run/app/3tier/tomcat-instr   2.0   0c29f7b28e10  12 seconds ago   778.6 MB
zeus.run/app/3tier/tomcat         2.0   89ee1c3ab841  7 days ago       776.5 MB
Opening Network Ports for Containers to Access the Docker Host

Note: Only Required For Containers That Use The Bridge Network
This section describes steps for opening network ports for containers that use the Docker bridge network.
These steps are not required for containers that use the host network (containers that start with the docker
run --network=host option).

By default, new Docker containers are automatically connected to the Docker bridge network. In this case,
monitored processes will not have access to APM ports (Docker documentation on container networking).
Without access, no environmental or transaction trace data will be written to the agent on the Docker host.
When this happens, the JIDA sub-agent writes log messages similar to the following in
<installdir>\Panorama\hedzup\mn\ on the Docker host:
java.io.IOException: Trouble getting port from DSA: java.net.NoRouteToHostException: No route to
host (Host unreachable)

The following iptables commands open required ports on the Docker host. These commands require root
access. The ports need to be opened only for the Docker network (for containers, in other words).

Note: The iptables command supports a -n option, which prevents DNS lookups.

# Port for initial connection to DSA
iptables -I INPUT -p tcp --dport 2111 -i docker0 -j ACCEPT

# Port for initial connection to AgentRT
iptables -I INPUT -p tcp --dport 7072 -i docker0 -j ACCEPT

# Single fixed port for subsequent connections
iptables -I INPUT -p tcp --dport 33000 -i docker0 -j ACCEPT

# Port for initial connection to Profiler Sockets
iptables -I INPUT -p tcp --dport 7073 -i docker0 -j ACCEPT

The third command opens a single port (33000) for subsequent connections. You must also configure the DSA
to use the same fixed port. To configure the DSA to use a fixed port, stop the DSA and edit the file
<installdir>\Panorama\hedzup\mn\data\dsa.xml. Change the value of the DsaDaDataPort attribute
and restart the DSA. For example:
<Attribute name="DsaDaDataPort" value="33000"/>
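Assuming the attribute appears exactly as in the example above, a sed one-liner can set the port. This sketch creates a sample dsa.xml first so it can run anywhere; on a real host, skip that step, stop the DSA, and edit the existing file at its installed location:

```shell
# Real location: <installdir>/Panorama/hedzup/mn/data/dsa.xml; a local path is used here.
DSA_XML=${DSA_XML:-./dsa.xml}

# Create a sample file for this sketch; on a real host the file already exists.
cat > "$DSA_XML" <<'EOF'
<Dsa>
  <Attribute name="DsaDaDataPort" value="0"/>
</Dsa>
EOF

# Set DsaDaDataPort to the single fixed port opened for subsequent connections,
# keeping a .bak copy of the original file.
sed -i.bak 's/name="DsaDaDataPort" value="[0-9]*"/name="DsaDaDataPort" value="33000"/' "$DSA_XML"
grep 'DsaDaDataPort' "$DSA_XML"
```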

Note: Avoid opening a large range of ephemeral ports because it slows down the startup of containers.
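Because the four iptables commands above differ only in the destination port, a small generator sketch can write them into a script for review before you run it with root privileges. The script name is arbitrary:

```shell
# Ports used by the commands above: DSA, AgentRT, fixed data port, Profiler Sockets.
PORTS="2111 7072 33000 7073"

# Emit one iptables rule per port into a reviewable script.
OUT=./open-apm-ports.sh
echo '#!/bin/sh' > "$OUT"
for port in $PORTS; do
  echo "iptables -I INPUT -p tcp --dport $port -i docker0 -j ACCEPT" >> "$OUT"
done
chmod +x "$OUT"
cat "$OUT"
```

Review the generated file, then run it as root on the Docker host.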

Starting the Instrumented Container with the Panorama Directory Mounted

Use the docker run command to start the instrumented Docker container. You must add the -v (--volume)
option to bind mount the agent installation directory on the host into the container
(Docker documentation on docker run).
For example:
[root@11A zeus]# docker run -d -p 8888:8080 -v /opt/Panorama:/opt/Panorama
zeus.run/app/3tier/tomcat-instr:2.0
79e23b4b2fb81154cec7390d5ff8791f1ba0a90583ea0f025d82e4d57905930f

The docker ps command confirms that the container started:
[root@11A zeus]# docker ps
CONTAINER ID  IMAGE                                COMMAND                 CREATED        STATUS        PORTS                   NAMES
79e23b4b2fb8  zeus.run/app/3tier/tomcat-instr:2.0  "/bin/sh -c '/opt/sta"  7 seconds ago  Up 6 seconds  0.0.0.0:8888->8080/tcp  focused_stonebraker

In Docker swarm mode, specify the --mount type=bind option on the docker service create command
(Docker documentation on swarm mode). For example:
docker service create \
--mount type=bind,source=/opt/Panorama,destination=/opt/Panorama \
-p 8080:8080 -p 9990:9990 zeus.run/app/3tier/tomcat:2.0

Troubleshooting
This section describes common problems and the configuration changes that work around them.

Host Firewall Prevents Bridged Network Access to APM Ports
This is a common initial setup problem. Use iptables commands on the Docker host to open the ports. See
“Opening Network Ports for Containers to Access the Docker Host“ for details.

Container Process Runs As Root Instead Of Correct User
This occurs if you omit the “User“ argument when “Running the createDockerfile.sh Script“.
Edit the Dockerfile to uncomment commands and specify the correct user name. The Dockerfile has
instructions embedded in it that make the steps clear:
#======================================================================
#### Uncomment/edit these lines if your image uses a non-root user ####
# RUN groupadd root -g 0 \
#     && usermod -G root <container-user>
# USER <container-user>
Container Cannot Determine the Docker Host IP address
Instrumented processes in containers start with the JIDA sub-agent. Normally, JIDA automatically
determines the IP address of the Docker host where the agent DSA process is running.
However, in secure environments, you may need to manually specify the Docker host IP address. Use one
of the following methods.
• Add the IP address of the Docker host to the initial-mapping file (see “Configuring Processes to Be
Instrumented on Container Startup“). This method has the advantage of being specified external to the
Docker container. For example:
include:Tomcat* config:config.json
docker.dsa.host=1.2.3.4

• Add the RVBD_DSAHOST environment variable to the docker run command using the -e (--env)
option. This method has the advantage of being set in the Docker container. For example:
docker run -d -p 8888:8080 \
  -e RVBD_DSAHOST=1.2.3.4 \
  -v /opt/Panorama:/opt/Panorama \
  zeus.run/app/3tier/tomcat-instr:2.0

If you use both methods, the RVBD_DSAHOST environment variable takes precedence.

Missing Docker Container Tags
The account that runs the agent must be root (or in the docker group). Otherwise, the agent will not have
privileges to populate all the “Docker Container Tags“. Only the “container id“ tag will be retrieved.

How Docker Containers Appear in the APM Interface
The agent installed on the Docker host appears in the APM interface like any other agent. For
instance, it appears as an entry in the “Agent List Screen“, can be configured in the “Agent Details Screen“,
and appears in the “Servers Tab“ like other agents.
By comparison, instrumented containers are not full-fledged agents. They do not appear in the Agent List
screen and do not have their own Agent Details screen for configuration. The APM interface exposes
instrumented containers and their processes as described here.
In general, the Docker icon indicates containers and instrumented processes running in them.

Agent Details Screen
The “List of Processes to Instrument and Assign JMX and WMI Custom Metric Configurations“ in the Agent
Details screen for the Docker host shows instrumented processes running in containers. The Docker icon
indicates a process running in a container. Click the process to see the Docker image name and container ID.
Application Map Tab

The “Application Map Tab“ shows Docker hosts as “Server Nodes“ elements, like any other agents. It
shows Docker containers within the server node and application instances within the containers.
Servers Tab
The Servers tab shows Docker containers in the Server column of its table, denoted by the Docker icon and
a name formatted as specified in the “Docker Container Display Name Screen“. The Tags column shows
special “Docker Container Tags“ that give additional information about the container (as well as any
user-defined tags). Click the expander to see all values.
Docker Container Tags

Container tags are a special type of “Tags“. They are automatically generated by the agent and provide
details about individual Docker containers:
• Unlike user-defined tags, container tags are not visible in the Tags area of the Agent List or Agent
Details screen.
• You cannot add or modify container tags.
• However, as with any tags, you can filter on container tags by right-clicking in the “Server“ column
of the Servers tab table, and use them in the “Search Tab“ “tag“, “tag.name“, and “tag.value“ search
fields.
• Special container search fields match the values of some of the container tags (see the following table).
The following table summarizes the automatically-generated container tags.

Tag Name             Description                                                             Example

container hostname   Host name of the Docker container, as specified by the docker run       container hostname=3tier-tomcat-0C4
                     --hostname argument. (The Docker default for the container
                     hostname is the short container ID.)

container id         Short Docker container ID.                                              container id=7da037511b76

container image      Name of the Docker image, including the Docker tag. The                 container image=3tier/tomcat-instr:2.0
                     “container.image“ search field matches the value of this tag.

container name       Name of the Docker container, as specified by the docker run            container name=condescending_mestorf
                     --name argument (or automatically created by Docker). The
                     “container.name“ search field matches the value of this tag.

container type       Type of container. For Docker containers, the value is always           container type=Docker
                     Docker. The “container.type“ search field matches the value of
                     this tag.

docker hostname      Host name of the Docker host where the agent is installed. The         docker hostname=0C4.internal.zeus
                     “container.parent“ search field matches the value of this tag.
                     The value of this tag is the same as the docker name tag, without
                     the domain.

docker name          Fully-qualified domain name (FQDN) of the Docker host where the        docker name=0C4.internal.zeus.run
                     agent is installed.

Alternative: Running the Agent in Docker Containers
As described in “Running the Agent on the Docker Host or in Containers“, this approach is not
recommended. Use it if you do not have access to the Docker host.
With this approach, as part of the Docker image customizations, you can install and configure the APM
Linux agent. The agent will report environmental (operating system) data and monitor instrumented
applications that start in containers that use the customized image.
The following table lists agent files to include in a Docker build. Typically, you put these files in the same
directory as the Dockerfile for building the image.

File                                    Description

AppInternals_Agent_<version>_Linux.gz   Gzip-compressed agent installation script. Obtain this file from the
                                        APM download page.

install.properties                      Response file that specifies agent installation options. As described
                                        in “Unattended Agent Installation: Unix-Like Operating Systems“, the
                                        -silent argument to the agent installer specifies this file.

<configuration_name>.json               Configuration file with instrumentation settings for the application
                                        processes that you want to monitor. See “Configuring Processes Running
                                        in PaaS Environments to Be Instrumented on Initial Startup“ for details
                                        on downloading configuration files from the analysis server.
initial-mapping                         Mapping file that specifies the processes that you want to instrument
                                        when the agent starts and a corresponding configuration file.

tags.yaml                               Tag file with custom identifiers that categorize servers that APM is
                                        monitoring.

The following example shows an excerpt of a Dockerfile that adds these files and installs the agent using
the -silent argument:
# Install Appinternals agent
WORKDIR /opt
ADD appinternals_agent_latest_linux.gz .
ADD install.properties .
RUN gunzip appinternals_agent_latest_linux.gz
RUN chmod +x appinternals_agent_latest_linux
RUN ./appinternals_agent_latest_linux -silent install.properties
RUN rm -rf appinternals_agent_latest_linux

# Add instrumentation configuration and initial-mapping files 
ADD config.json /opt/Panorama/hedzup/mn/userdata/config/
ADD initial-mapping /opt/Panorama/hedzup/mn/userdata/config/
ADD tags.yaml /opt/Panorama/hedzup/mn/userdata/config/

The Docker container must run dsactl start to start the agent when it launches:
• The agent must start before the application that will be monitored.
• Add a 2-second delay between starting the agent and starting the application. This allows the agent to
write files required by the JIDA sub-agent when it starts with the application.
See the “Controlling Agent Software on Unix-Like Systems“ configuration topic for details on dsactl. See
https://docs.docker.com/engine/admin/using_supervisord/ for details on using Supervisor with
Docker.
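As a sketch of that startup order, an entrypoint script like the following starts the agent, waits two seconds, and then hands control to the monitored application. The dsactl path is an assumption based on the /opt/Panorama/hedzup/mn/bin directory used elsewhere in this section; adjust it to your installation:

```shell
# Write the entrypoint script (a sketch); in a Dockerfile you would COPY it into
# the image and set it as the ENTRYPOINT.
cat > ./entrypoint.sh <<'EOF'
#!/bin/sh
# DSACTL defaults to an assumed agent install location; override for testing.
DSACTL=${DSACTL:-/opt/Panorama/hedzup/mn/bin/dsactl}

# Start the agent before the monitored application.
"$DSACTL" start

# Give the agent time to write the files the JIDA sub-agent needs at startup.
sleep 2

# Replace this shell with the monitored application (passed as arguments).
exec "$@"
EOF
chmod +x ./entrypoint.sh
```

For example, `ENTRYPOINT ["/entrypoint.sh"]` followed by the application command would preserve the required ordering.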
After you start a Docker container that uses the customized image, the agent connects to the APM analysis
server specified by the O_SI_CUSTOMER_ID option in the response file (install.properties in the previous
example). The agent appears in the analysis server interface with the Docker container ID as the agent and
server host name.
Monitoring Kubernetes and OpenShift Environments When Installing Agents on Individual Nodes
Kubernetes is an open-source system for automating deployment, scaling and management of
containerized applications. OpenShift is a platform as a service (PaaS) built around Docker containers
orchestrated and managed by Kubernetes. See the following links for more information:
• Wikipedia article on Kubernetes
• Wikipedia article on OpenShift
APM supports monitoring applications running in Kubernetes and OpenShift environments that use
Docker as the container runtime. This section describes deploying agents in these environments.

Setup on Kubernetes Nodes
Kubernetes nodes are Docker hosts. As with “Monitoring Docker Containers“, you install agents on each
Kubernetes node that will deploy pods running the application you want to monitor.

Installing the Agent
This step is the same as for any agent deployment. See the “Agent Installation: Unix-Like Operating
Systems“ installation topic for details on installing agent software.
Install the agent on each Kubernetes node and specify the analysis server that will report data from
containers running in pods on that node:
• Typically, the same analysis server will be the target for data from all nodes.
• Do not enable automatic instrumentation during the agent installation. (Instead, see “Configuring
Pods to Instrument Applications“.) When prompted to enable instrumentation, select no:
Enable instrumentation automatically system-wide? [no]: no
After the installation is complete, start the agent, as described in “Starting the Agent Software“.

Note: Examples in this section assume that the agent is installed in /opt.

Changing the Configuration Settings to Use on Pod Startup
The agent installation on a Kubernetes node automatically creates a default+config.json configuration file
and an initial-mapping file that specifies it in the <installdir>/Panorama/hedzup/mn/userdata/config
directory. Any applications that start in pods on that node will use the settings in default+config.json
automatically.
This behavior reduces the manual intervention required for monitoring applications. Typically, however,
you will want to change the configuration to use different settings.
To change the configuration, download a configuration file from the analysis server with settings you want
and modify the initial-mapping file to specify that configuration file. These steps are described in
“Configuring Processes Running in PaaS Environments to Be Instrumented on Initial Startup“ earlier in this
topic.

Configuring Pods to Instrument Applications
In Kubernetes environments, you configure pods to instrument applications that run in them. (This is in
contrast to standard Docker environments, where you configure images to instrument individual
containers.) Follow the steps described here after installing the agent on Kubernetes nodes.
For a pod to instrument applications, it must define environment variables and mount the agent installation
directory (/opt/Panorama) as a host path volume accessible by the pod.
The following table summarizes the environment variables and their values. Except for
JAVA_TOOL_OPTIONS, use the values as shown here (and in “Defining Environment Variables“). For
JAVA_TOOL_OPTIONS, change /opt to the agent installation directory on the Kubernetes node.

Environment Variable   Value                                                  Meaning

JAVA_TOOL_OPTIONS      -agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so   Java option to load the AppInternals
                                                                              profiler library

RVBD_AGENT_FILES       1                                                      Enable agent communication via the
                                                                              profiler sockets process on the
                                                                              agent host

RVBD_AGENT_PORT        7073                                                   Port to use for sockets communication

RVBD_DSAHOST           valueFrom:                                             Field reference that dynamically
                         fieldRef:                                            obtains the agent host name at pod
                           apiVersion: v1                                     build time
                           fieldPath: status.hostIP
There are multiple approaches for configuring pods. The following sections show using the OpenShift web
console to modify an existing application’s deployment with the required changes:
• “Defining Environment Variables“
• “Mounting the Agent Installation Directory“
• “Confirming Instrumentation“

Defining Environment Variables
In the OpenShift web console, select the project (in this example, the project is doc-example). In the
Applications menu, choose Deployments and then the deployment for the application (in this example, the
application is openjdk-app). In the Actions menu, choose Edit YAML.

58 Aternity APM Version 11.4.3


Deploying Agents in Cloud-Native Environments

In the editing window that opens, find the containers: section. If there is not already an - env: list item under
it, paste the bold text in the following example under the containers: section. If there is already an - env: list
item, omit the first line of bold text (- env:).

Note: Begin the stanza with - env: if delineating between each container in your deployment and this is the
first property of the container. However, if this is not the first property for the container, use env: instead
(without the initial dash).

spec:
  containers:
    - env:
        - name: JAVA_TOOL_OPTIONS
          value: '-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so'
        - name: RVBD_AGENT_FILES
          value: '1'
        - name: RVBD_AGENT_PORT
          value: '7073'
        - name: RVBD_DSAHOST
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
Indentation is important in YAML. The editing window performs some validation, and you can also use a
validator such as https://codebeautify.org/yaml-validator.
Click Save in the editing window. Open the Environment tab to confirm that the variables were defined as
expected.

Mounting the Agent Installation Directory
You also edit the YAML configuration to mount the agent installation directory. In the Actions menu, choose
Edit YAML and add two stanzas:

1) In the spec: section, add the following volumes: specification, at the same level as the containers:
section.
volumes:
  - hostPath:
      path: /opt/Panorama
      type: ''
    name: appint

2) In the containers: section, add the following volumeMounts: specification, at the same level as the
- env: list item:
volumeMounts:
  - mountPath: /opt/Panorama
    name: appint

The following example shows both changes in context:
spec:
  volumes:
    - hostPath:
        path: /opt/Panorama
        type: ''
      name: appint
  containers:
    - env:
        - name: JAVA_TOOL_OPTIONS
          value: '-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so'
        - name: RVBD_AGENT_FILES
          value: '1'
        - name: RVBD_AGENT_PORT
          value: '7073'
        - name: RVBD_DSAHOST
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
      volumeMounts:
        - mountPath: /opt/Panorama
          name: appint

When you save changes, the order of the sections at a given level in the YAML is rearranged alphabetically.
This is expected.

Confirming Instrumentation
After you change the yaml deployment configuration, OpenShift automatically redeploys pods for the
application.
In the web console Applications menu, choose Pods. The pod for the application you modified should
show a status of Running and all of its containers ready.
After a few minutes, transaction data should appear in the analysis server interface. In the Search tab,
search for the project name (doc-example in this case).
server.tag = 'container namespace=doc-example'

Upgrading the Agent Installed Directly on Kubernetes Nodes
Upgrade agent software on each Kubernetes node as described in “Upgrading Agent Software“.
You can upgrade without stopping any applications that the agent is monitoring. However, you must
restart pods running those applications for them to be instrumented using the upgraded agent version.
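With kubectl, a rolling restart of the affected deployments restarts those pods without editing their specs. This sketch only writes the commands to a file for review; the deployment name openjdk-app is the example application used earlier and stands in for your own:

```shell
# Hypothetical deployment name; substitute your own.
DEPLOYMENT=${DEPLOYMENT:-openjdk-app}

# Write the restart commands to a reviewable script instead of running them directly.
{
  echo "kubectl rollout restart deployment/$DEPLOYMENT"
  echo "kubectl rollout status deployment/$DEPLOYMENT"
} > ./restart-pods.sh
cat ./restart-pods.sh
```

Run the generated file against your cluster; the rollout status command waits until the redeployed pods are ready.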
Deploying The Agent on Kubernetes Using a DaemonSet
In Kubernetes, unless otherwise configured, a DaemonSet ensures that all nodes run a copy of a pod. As
more nodes are added to the cluster, DaemonSet pods are added to them, and as nodes are removed
from the cluster, the corresponding DaemonSet pods are also removed and garbage collected. For more
information on DaemonSets, see the Kubernetes documentation.
DaemonSets are a very efficient way to deploy a monitoring agent in a Kubernetes environment, since the
agent is automatically installed in a DaemonSet pod when a new node is created, then destroyed and
garbage collected when a node is removed.
A DaemonSet obviates the need to manually deploy and manage the agent on every node in a Kubernetes
cluster, which greatly improves scalability and maintainability. Additionally, DaemonSets support
deploying in a managed environment, such as a Platform as a Service (PaaS), where you do not have
access to the individual hosts.
The following diagram shows how the AppInternals agent is deployed as a DaemonSet on a Kubernetes
cluster.

The following sections explain how to install and upgrade an AppInternals agent in a DaemonSet:
 “Deploying the AppInternals Agent as a DaemonSet (No Helm Charts)“
 “Deploying the AppInternals Agent as a DaemonSet (Helm Charts)“
 “Upgrading the Agent in a Kubernetes DaemonSet“

Before You Begin


Before you deploy the AppInternals agent in a Kubernetes cluster as a DaemonSet, you must ensure that
you have the following:


 Access to a Docker Registry
 A running Kubernetes cluster
 Access to the Kubernetes cluster using kubectl
 A configured Analysis Server that is up and running that the agents can connect to
 Your Customer ID (if using a SaaS Analysis Server)
 Your Kubernetes Cluster name (if you would like to tag your containers with this information)
 An agent configuration specific to your application, which you create with the AppInternals Agent
Configuration UI. (This requires a running Analysis Server and is only necessary if you want to
specify a particular config for your instrumented applications.)
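You can sanity-check most of these prerequisites from the command line before starting; the commands below are a sketch, and the registry placeholder matches the convention used later in this chapter:

```shell
# Quick prerequisite checks before deploying the DaemonSet:
kubectl cluster-info                  # is the Kubernetes cluster reachable?
kubectl get nodes                     # are the nodes registered and Ready?
docker login <your-registry-server>   # are your Docker Registry credentials valid?
```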

Deploying the AppInternals Agent as a DaemonSet (No Helm Charts)


To deploy the AppInternals agent as a DaemonSet on a Kubernetes cluster without using a Helm Chart,
follow these steps:

1. Log in to your local system as a user with access to docker and kubectl.

2. Host the Riverbed Docker Image by following these steps:

2.1. Download the agent tar.gz file from the Riverbed Support site, where it is referred to as the
Aternity APM for Kubernetes Kit.

The tar.gz file has a name like rvbd_agent_VERSION.tar.gz and contains the following files,
necessary for configuring and deploying a DaemonSet and instrumenting an example Apache Tomcat
application:

• The Riverbed AppInternals agent image tar file — rvbd_agent_VERSION.tar

• A yaml template for the agent DaemonSet — rvbd-agent.yaml

• A template for the agent DaemonSet properties — agent-daemonset-env.properties

• A template for the instrumentation properties — rvbd-instrumentation-env.properties

• A yaml template for setup, including the service account — rvbd-agent-setup.yaml

• A yaml application template for Apache Tomcat instrumentation — tomcat-app.yaml

Note—The example yaml and properties files in this document are for informational purposes only.
Use the template files that ship with the agent to create your site-specific files.

2.2. Decompress and untar the agent tar file by entering the following command:
# tar zxvf rvbd_agent_VERSION.tar.gz

2.3. Change your working directory to the directory that holds the agent tar file, by entering the
following command:
# cd rvbd_agent_VERSION

2.4. Host the agent Docker Image.

The following is an example of loading the docker image and pushing it to a private docker registry,
where:

• <your-registry-server> is your Docker Registry FQDN
• <your-name> is your Docker username
• <your-pword> is your Docker password
• <your-repository> is the image path in your registry

# docker login -u <your-name> -p <your-pword> <your-registry-server>
# docker load -i rvbd_agent_VERSION.tar
# docker tag rvbd_agent:VERSION <your-repository>:VERSION
# docker push <your-repository>:VERSION

Note—You will need the location of the image in the registry later when you edit the
rvbd-agent.yaml file in step 3.5.

3. Configure the Riverbed DaemonSet on the Kubernetes cluster by following these steps:

3.1. Log in to your local system as a user with access to docker and kubectl.

3.2. (Optional) Although not required, Riverbed recommends that you create a "riverbed" namespace,
which will provide better logical isolation between user applications and Riverbed components.
The example rvbd-agent-setup.yaml file includes a section to create the "riverbed"
namespace. For information on when to use namespaces and how to create them, see Namespaces
in the Kubernetes documentation.

After you have enabled the riverbed namespace in the rvbd-agent-setup.yaml file, enter the
following command to create the namespace in Kubernetes:
# kubectl create namespace riverbed

3.3. Create a Kubernetes service account by doing the following:

3.3.1 Using the text editor of choice, edit the rvbd-agent-setup.yaml template file to make
any necessary changes specific to your site. The file looks like this:

rvbd-agent-setup.yaml

Note—This example yaml file is for informational purposes only. Cutting and pasting may not
preserve formatting, which could result in the yaml file not validating. Additionally, the contents of
the actual file could change. Therefore, ensure that you use the template file that ships with the agent.

apiVersion: v1
kind: Namespace
metadata:
  name: riverbed
---
apiVersion: v1
kind: Secret
metadata:
  name: rvbd-secret
  namespace: riverbed
  labels:
    app: "rvbd"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rvbd-agent
rules:
- apiGroups:
  - ""
  resources:
  - services
  - events
  - endpoints
  - pods
  - nodes
  - componentstatuses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  - rvbdtoken              # Kubernetes event collection state
  - rvbd-leader-election   # Leader election token
  verbs:
  - get
  - update
- apiGroups:               # To create the leader election token
  - ""
  resources:
  - configmaps
  verbs:
  - create
- nonResourceURLs:
  - "/version"
  - "/healthz"
  verbs:
  - get
- apiGroups:               # Kubelet connectivity
  - ""
  resources:
  - nodes/metrics
  - nodes/spec
  - nodes/proxy
  verbs:
  - get
---
# You need to use this account for your rvbd-agent daemonset
kind: ServiceAccount
apiVersion: v1
metadata:
  name: rvbd-agent
  namespace: riverbed
---
# Your admin user needs the same permissions to be able to grant them
# Easiest way is to bind your user to the cluster-admin role
# See https://cloud.google.com/container-engine/docs/role-based-access-control#setting_up_role-based_access_control
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rvbd-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rvbd-agent
subjects:
- kind: ServiceAccount
  name: rvbd-agent
  namespace: riverbed
# ----------------------- EOF ----------------------------

3.3.2 Run the following kubectl create command to create the service account from the yaml
file:
# kubectl create -f rvbd-agent-setup.yaml
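You can confirm the setup objects were created before proceeding; a minimal check (assuming the names in the template above) looks like this:

```shell
# Verify the objects created from rvbd-agent-setup.yaml:
kubectl get serviceaccount rvbd-agent -n riverbed
kubectl get clusterrole rvbd-agent
kubectl get clusterrolebinding rvbd-agent
```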

3.4. Create a Kubernetes ConfigMap by doing the following:

3.4.1 Using the text editor of choice, edit the template file agent-daemonset-env.properties
and ensure that the values are correct for your site. The file is set up for a SaaS environment,
and looks like this:

agent-daemonset-env.properties
RVBD_SAAS_ANALYSIS_SERVER_ENABLED=true
RVBD_ANALYSIS_SERVER_HOST=collector-1.steelcentral.net
RVBD_CUSTOMER_ID=customer_id
RVBD_ANALYSIS_SERVER_PORT=443
RVBD_MAX_INSTR_LOGS=500
#RVBD_APP_CONFIG=new-config
#RVBD_LOGICAL_SERVER=tag_value
#container_metrics=true
# ---------------------- EOF ---------------------------------------

* If you have an on-premises environment, set RVBD_SAAS_ANALYSIS_SERVER_ENABLED=false
and set RVBD_ANALYSIS_SERVER_HOST to the FQDN of your Analysis Server.

* If you have an alternate configuration file, set RVBD_APP_CONFIG=<name of your alternate
config file>. Ensure that the alternate config exists on the Analysis Server before
setting it here; otherwise AppInternals will not load a config file.

3.4.2 Using the kubectl create command, create a ConfigMap from the properties file, as
follows:
# kubectl create configmap agent-daemonset-env
--from-env-file=./agent-daemonset-env.properties -n riverbed
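You can inspect the resulting ConfigMap to verify that every property was picked up:

```shell
# Confirm the ConfigMap exists and contains the expected key/value pairs:
kubectl get configmap agent-daemonset-env -n riverbed -o yaml
```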

3.5. Create the Riverbed agent as a Kubernetes DaemonSet by doing the following:

3.5.1 Edit the template file rvbd-agent.yaml and ensure that the values are correct for your
site, especially the location of the docker registry (noted in step 2.4.). The file looks like this:

rvbd-agent.yaml

Note—This example yaml file is for informational purposes only. Cutting and pasting may not
preserve formatting, which could result in the yaml file not validating. Additionally, the contents of
the actual file could change. Therefore, ensure that you use the template file that ships with the agent.

68 Aternity APM Version 11.4.3


Deploying Agents in Cloud-Native Environments

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rvbd-agent
  namespace: riverbed
spec:
  selector:
    matchLabels:
      app: rvbd-agent
  updateStrategy:
    type: RollingUpdate # Supported only in Kubernetes version 1.6 or later
  template:
    metadata:
      labels:
        app: rvbd-agent
      name: rvbd-agent
    spec:
      serviceAccountName: rvbd-agent
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - image: repo/agent:11.4.2.521
        name: rvbd-agent
        securityContext:
          capabilities: {}
          privileged: false
        ports:
        - containerPort: 2111
          hostPort: 2111
          name: dsaport
          protocol: TCP
        - containerPort: 7071
          hostPort: 7071
          name: daport
          protocol: TCP
        - containerPort: 7072
          hostPort: 7072
          name: agentrtport
          protocol: TCP
        - containerPort: 7073
          hostPort: 7073
          name: profilerport
          protocol: TCP
        - containerPort: 7074
          hostPort: 7074
          name: cmxport
          protocol: TCP
        envFrom:
        - configMapRef:
            name: agent-daemonset-env
        env:
        - name: RVBD_DSAHOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        resources:
          requests:
            memory: "2G"
            cpu: "200m"
          limits:
            memory: "2G"
            cpu: "200m"
        volumeMounts:
        - name: rvbd-files
          mountPath: /host/opt/Panorama
        - name: dockersocket
          mountPath: /var/run/dockersocket
        - name: pks-dockersocket
          mountPath: /var/run/pks-dockersocket
        - name: procdir
          mountPath: /host/proc
          readOnly: true
        - name: cgroups
          mountPath: /host/sys/fs/cgroup
          readOnly: true
      imagePullSecrets:
      - name: regcred
      volumes:
      - hostPath:
          path: /opt/Panorama
        name: rvbd-files
      - hostPath:
          path: /var/run/docker.sock
        name: dockersocket
      - hostPath:
          path: /var/vcap/data/sys/run/docker/docker.sock
        name: pks-dockersocket
      - hostPath:
          path: /proc
        name: procdir
      - hostPath:
          path: /sys/fs/cgroup
        name: cgroups
      tolerations:
      - operator: "Exists"
        effect: "NoSchedule"
      - operator: "Exists"
        effect: "NoExecute"
#----------------------------- EOF ---------------------

3.5.2 (Optional) If the Docker image is from a private repository, a secret needs to be created for
pulling the image in the above yaml file (refer to the imagePullSecrets spec).

To create the secret, enter the following command, where <your-registry-server> is your
Private Docker Registry FQDN (https://index.docker.io/v1/ for DockerHub),
<your-name> is your Docker username, <your-pword> is your Docker password,
<your-email> is your Docker email:
# kubectl create secret docker-registry regcred --docker-server=<your-registry-server>
--docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
-n riverbed

3.5.3 Run the following command to create the DaemonSet:


# kubectl create -f rvbd-agent.yaml


Note—During DaemonSet startup, the AppInternals agent is installed as a DaemonSet Agent Pod on
each Kubernetes cluster Node, and the Profiler binaries are placed in the hostPath directory to be
shared by user Pods. As a result, the DaemonSet must be running before instrumentation can be
enabled.
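Before moving on, you can verify that the DaemonSet is running with one agent pod per node (assuming the riverbed namespace and the rvbd-agent name from the template):

```shell
# The DESIRED and READY counts should match the number of schedulable nodes:
kubectl get daemonset rvbd-agent -n riverbed
# Each rvbd-agent pod should show a status of Running:
kubectl get pods -n riverbed -o wide
```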

4. Enable Java and .NET Core instrumentation by following these steps:

4.1. Using the text editor of choice, edit the template file
rvbd-instrumentation-env.properties, and ensure that the values are correct for your
site. The file looks like this:

rvbd-instrumentation-env.properties
# Common properties
AIX_INSTRUMENT_ALL=1
RVBD_AGENT_FILES=1

# JAVA Instrumentation
JAVA_TOOL_OPTIONS=-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so

# .NET Core Instrumentation
CORECLR_ENABLE_PROFILING=1
CORECLR_PROFILER={CEBBDDAB-C0A5-4006-9C04-2C3B75DF2F80}
CORECLR_PROFILER_PATH=/opt/Panorama/hedzup/mn/lib/libAwDotNetProf64.so
DOTNET_ADDITIONAL_DEPS=/opt/Panorama/hedzup/mn/install/dotnet/additionalDeps/Riverbed.AppInternals.DotNetCore/
DOTNET_SHARED_STORE=/opt/Panorama/hedzup/mn/install/dotnet/store/
#------------------------ EOF -------------------------------------

4.2. Using the Kubernetes kubectl create command, create a rvbd-instrumentation-env
ConfigMap as follows:
# kubectl create configmap rvbd-instrumentation-env
--from-env-file=./rvbd-instrumentation-env.properties

5. Deploy your instrumented application. These steps use Apache Tomcat as an example:

5.1. Ensure that the application pod's network policy allows TCP connections from the application
pod to the following ports on the DaemonSet agent pod: 2111, 7071, 7072, 7073, 7074.
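If network policies restrict traffic from your application pods, an egress rule along these lines would allow the required agent ports. This is a sketch only; the policy name, namespace, and pod labels below are illustrative and are not part of the shipped templates:

```yaml
# Hypothetical egress policy for an application namespace; adjust the
# namespace and podSelector labels to match your deployment.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rvbd-agent-egress
  namespace: doc-example
spec:
  podSelector:
    matchLabels:
      app: tomcat-app
  policyTypes:
  - Egress
  egress:
  - ports:
    - {protocol: TCP, port: 2111}
    - {protocol: TCP, port: 7071}
    - {protocol: TCP, port: 7072}
    - {protocol: TCP, port: 7073}
    - {protocol: TCP, port: 7074}
```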

5.2. Understand the variables you need to set in the application yaml file.

Three variables in the example Apache Tomcat application yaml file – RVBD_APP_CONFIG,
RVBD_APP_INSTANCE, and RVBD_CT_tagname – are of particular importance:
• RVBD_APP_CONFIG

If your application has a different config from the default config specified in the
agent-daemonset-env.properties file, specify that config in the application’s yaml file.
For example:
- name: RVBD_APP_CONFIG

value: WeatherServiceConfig


Note—If you change the name of the application's config file or create another config file for the
application in the Analysis Server WebUI, you must change the value setting for
RVBD_APP_CONFIG using the name of the new config file, and then restart the application.

• RVBD_APP_INSTANCE

An instrumented application has a "default instance" that is computed based on the class name
and the type of application server, which can be non-intuitive, such as
Tomcat__apps_wsse_wss2-D_603. You can use the RVBD_APP_INSTANCE variable to specify a
more descriptive instance for the application (such as WeatherService, for example) so it is
easier to recognize the application instance in the Process List and Instance tab of the WebUI.
For example:
- name: RVBD_APP_INSTANCE

value: WeatherService

Note—Specifying the new instance name in the application yaml file instead of the WebUI, which
is node-specific, ensures that every instance of the application deployed in the Kubernetes cluster
will report the new instance name in the WebUI.

• RVBD_CT_tagname

Creates a “tagname”. The “tagname” shows up as a container tag in the WebUI, displaying the
value you set this variable to. You can define this variable multiple times. Note that any
underscores in the tagname are parsed to white space.
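For example, a container spec might define tags like the following (the tag names and values here are illustrative, not from the shipped templates):

```yaml
# Each RVBD_CT_* variable becomes a container tag in the WebUI;
# underscores in the name are displayed as spaces ("app_tier" -> "app tier").
env:
- name: RVBD_CT_app_tier
  value: frontend
- name: RVBD_CT_region
  value: us-east
```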

5.3. Using the text editor of choice, edit the template file tomcat-app.yaml and ensure that the
values are correct for your application.

• Specify the volume that contains the instrumentation files

The instrumentation files need to be mounted from a hostpath, typically /opt/Panorama.


You must specify the host volume in the Deployment/spec/template/spec section of the
application yaml file.

• Specify the mount volume in the instrumented container

Each instrumented container needs to mount the volume that contains the instrumentation files.
Specify VolumeMount in the Deployment/spec/template/spec/container section of the
application yaml file.

• Add rvbd-instrumentation-env as the configMapRef for the instrumented container.

The instrumented application must add our configmap with the environment variables for
instrumentation in the Deployment/spec/template/spec/container section of the application
yaml file.

• Specify the RVBD_DSAHOST address in the Deployment/spec/template/spec/container/env


section of the application yaml file.

The tomcat-app.yaml template file looks like this:

tomcat-app.yaml


Note—This example yaml file is for informational purposes only. Cutting and pasting may not
preserve formatting, which could result in the yaml file not validating. Additionally, the contents of
the actual file could change. Therefore, ensure that you use the template file that ships with the agent.

Note—Certain sections of the file, including the RVBD variable settings, may need to change for
site-specific instrumentation.

---
kind: List
apiVersion: v1
metadata:
  name: tomcat-app-service-example
items:
- kind: Service
  apiVersion: v1
  metadata:
    name: tomcat-app-service
  spec:
    selector:
      name: tomcat-app
    ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
    type: NodePort
- kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    labels:
      app: tomcat-app
    name: tomcat-app
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: tomcat-app
    template:
      metadata:
        labels:
          app: tomcat-app
      spec:
        containers:
        - image: docker.io/tomcat
          name: tomcat-app-container
          ports:
          - containerPort: 8080
          envFrom:
          - configMapRef:
              name: rvbd-instrumentation-env
          env:
          - name: RVBD_DSAHOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
          #- name: RVBD_APP_CONFIG
          #  value: new-config
          #- name: RVBD_APP_INSTANCE
          #  value: MyTomcatApp
          volumeMounts:
          - mountPath: /opt/Panorama
            name: rvbd-files
        volumes:
        - hostPath:
            path: /opt/Panorama
          name: rvbd-files
# ----------------------- EOF -------------------

5.4. Deploy your application by entering the following Kubernetes kubectl create command:
# kubectl create -f <YOUR_APPLICATION_YAML_FILE>
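You can check that the example application came up, using the names from the tomcat-app.yaml template:

```shell
# The application pod should show a status of Running:
kubectl get pods -l app=tomcat-app
# The NodePort service should be listed with its assigned port:
kubectl get service tomcat-app-service
```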

6. The DaemonSet should now be deployed, and the AppInternals agent should be harvesting
instrumentation information from the applications on the Kubernetes nodes. To verify that the
DaemonSet is running, log in to the Analysis Server (default UID/PW: admin/riverbed-default) and
view the Server and Instances tabs.


7. If you defined RVBD_CT_tagname variables, as described in step 5.2., they appear as container
tags in the WebUI.

Deploying the AppInternals Agent as a DaemonSet (Helm Charts)


To deploy the AppInternals agent as a DaemonSet on a Kubernetes cluster using Helm Charts, follow these
steps:

1. Log in to your local system as a user with access to docker and kubectl.

2. Download and install helm. For more information, see the Helm documentation.


3. Host the Riverbed Docker Image by following these steps:

3.1. Download the agent tar.gz file from the Riverbed Support site, where it is referred to as the
Aternity APM for Kubernetes Kit.

The tar.gz file has a name like rvbd_agent_VERSION.tar.gz and contains the following files,
necessary for configuring and deploying a DaemonSet with helm and instrumenting an example
Apache Tomcat application:

• The Riverbed AppInternals agent image tar file — rvbd_agent_VERSION.tar

• A yaml application template for Apache Tomcat instrumentation — tomcat-app.yaml

• AppInternals-specific Helm Charts that reside in the ./helm directory

Note—The example yaml and properties files in this document are for informational purposes only.
Use the template files that ship with the agent to create your site-specific files.

3.2. Decompress and untar the agent tar file by entering the following command:
# tar zxvf rvbd_agent_VERSION.tar.gz

3.3. Change your working directory to the directory that holds the agent tar file, by entering the
following command:
# cd rvbd_agent_VERSION

3.4. Host the agent Docker Image.

The following is an example of loading the docker image and pushing it to a private docker registry,
where:

• <your-registry-server> is your Docker Registry FQDN
• <your-name> is your Docker username
• <your-pword> is your Docker password
• <your-repository> is the image path in your registry

# docker login -u <your-name> -p <your-pword> <your-registry-server>
# docker load -i rvbd_agent_VERSION.tar
# docker tag rvbd_agent:VERSION <your-repository>:VERSION
# docker push <your-repository>:VERSION

Note—You will need the location of the image in the registry later when you install the Helm Chart
in step 4.5.

4. Configure the Riverbed DaemonSet on the Kubernetes cluster using a Helm Chart by following these
steps:

4.1. Log in to your local system as a user with access to docker, helm, and kubectl.

4.2. (Optional) Although not required, Riverbed recommends that you create a "riverbed" namespace,
which will provide better logical isolation between user applications and Riverbed components.
For information on when to use namespaces and how to create them, see Namespaces in the
Kubernetes documentation.


Enter the following command to create the namespace in Kubernetes:


# kubectl create namespace riverbed

4.3. (Optional) If the Docker image is from a private repository, you need to create a secret for
pulling the image (referenced by the image.pullSecrets parameter in step 4.5.).

To create the secret, enter the following command, where <your-registry-server> is your Private
Docker Registry FQDN (https://index.docker.io/v1/ for DockerHub), <your-name> is your
Docker username, <your-pword> is your Docker password, <your-email> is your Docker email:
# kubectl create secret docker-registry regcred --docker-server=<your-registry-server>
--docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
-n riverbed

Note—During DaemonSet startup, the AppInternals agent is installed as a DaemonSet Agent Pod on
each Kubernetes cluster Node, and the Profiler binaries are placed in the hostPath directory to be
shared by user Pods. As a result, the DaemonSet must be running before instrumentation can be
enabled.

4.4. Determine the release name before installing the Helm Chart. Note that your working directory
should still be the directory that holds the agent tar file: rvbd_agent_VERSION, which also
contains the agent’s Helm Chart.

4.4.1 The installation of a Helm Chart varies slightly between versions 2 and 3 of Helm. Determine
the version of Helm that is installed by entering the following command:
# helm version

• If Helm v2 is installed, use the following command to specify the “release name”:
# helm install --name appint --namespace <Your-namespace> <parameter specification>
./helm

• If Helm v3 is installed, use the following format to specify the “release name”:
# helm install appint -n <Your-namespace> <parameter specification> ./helm

4.5. Once the release name has been specified, install the Helm Chart using a command like the
following, replacing the <variables> with the values specific to your site. Note this example uses
a minimum configuration on Helm V3 installed on a SaaS server:
# helm install --debug appint -n riverbed \
  --set image.name="<SIDECAR_IMAGE>" \
  --set image.pullSecrets[0]=regcred \
  --set analysisServer.host="<YOUR_ANALYSIS_SERVER>" \
  ./helm

The Helm Chart supports a number of parameters, including image.name, image.pullSecrets,
analysisServer.host, and analysisServer.customerID.

You specify each parameter using the --set key=value[,key=value] argument to the helm
install command. For example:
# helm install --name appint --namespace riverbed --set analysisServer.customerID="<YOUR_CUSTOMER_ID>" ./helm

Alternatively, you can create a yaml file that specifies the values for the parameters, and then pass
the yaml file name as an argument to the helm install command. For example,
# helm install --name appint --namespace riverbed -f values.yaml ./helm

4.6. Verify the Helm Chart installation succeeded by entering the following command:
# helm list -n riverbed

If successful, the output lists the appint release with a status of deployed.

5. (Optional) Add a config map to the Application Namespace to enable Java and .NET Core
instrumentation.

The Agent Daemonset Helm Chart automatically adds a config map (called
appint-instrumentation-env) to the default namespace of your cluster.


However, if your instrumented application is not running in the default namespace, you need to make
the config map available in the application's namespace (<Your App Namespace>) by entering the
following command:
# kubectl get configmap appint-instrumentation-env --namespace=default --export -o yaml |
kubectl apply --namespace=<Your App Namespace> -f -

6. Deploy your instrumented application. These steps use Apache Tomcat as an example:

6.1. Ensure that the application pod's network policy allows TCP connections from the application
pod to the following ports on the DaemonSet agent pod: 2111, 7071, 7072, 7073, 7074

6.2. Understand the variables you need to set in the application yaml file.

Three variables in the example Apache Tomcat application yaml file – RVBD_APP_CONFIG,
RVBD_APP_INSTANCE, and RVBD_CT_tagname – are of particular importance:
• RVBD_APP_CONFIG

If your application has a different config from the default config specified in the
agent-daemonset-env.properties file, specify that config in the application’s yaml file.
For example:
- name: RVBD_APP_CONFIG

value: WeatherServiceConfig

Note—If you change the name of the application's config file or create another config file for the
application in the Analysis Server WebUI, you must change the value setting for
RVBD_APP_CONFIG using the name of the new config file, and then restart the application.

• RVBD_APP_INSTANCE

An instrumented application has a "default instance" that is computed based on the class name
and the type of application server, which can be non-intuitive, such as
Tomcat__apps_wsse_wss2-D_603. You can use the RVBD_APP_INSTANCE variable to specify a
more descriptive instance for the application (such as WeatherService, for example) so it is
easier to recognize the application instance in the Process List and Instance tab of the WebUI.
For example:
- name: RVBD_APP_INSTANCE

value: WeatherService

Note—Specifying the new instance name in the application yaml file instead of the WebUI, which
is node-specific, ensures that every instance of the application deployed in the Kubernetes cluster
will report the new instance name in the WebUI.

• RVBD_CT_tagname

Creates a “tagname”. The “tagname” shows up as a container tag in the WebUI, displaying the
value you set this variable to. You can define this variable multiple times. Note that any
underscores in the tagname are parsed to white space.

6.3. Using the text editor of choice, edit the template file tomcat-app.yaml and ensure that the
values are correct for your application.

• Specify the volume that contains the instrumentation files


The instrumentation files need to be mounted from a hostpath, typically /opt/Panorama.


You must specify the host volume in the Deployment/spec/template/spec section of the
application yaml file.

• Specify the mount volume in the instrumented container

Each instrumented container needs to mount the volume that contains the instrumentation files.
Specify VolumeMount in the Deployment/spec/template/spec/container section of the
application yaml file.

• Add appint-instrumentation-env as the configMapRef for the instrumented container.

The instrumented application must add our configmap with the environment variables for
instrumentation in the Deployment/spec/template/spec/container section of the application
yaml file.

• Specify the RVBD_DSAHOST address in the Deployment/spec/template/spec/container/env


section of the application yaml file.

The tomcat-app.yaml template file looks like this:

tomcat-app.yaml

Note—This example yaml file is for informational purposes only. Cutting and pasting may not
preserve formatting, which could result in the yaml file not validating. Additionally, the contents of
the actual file could change. Therefore, ensure that you use the template file that ships with the agent.

Note—Certain sections of the file, including the RVBD variable settings, may need to change for
site-specific instrumentation.

---
kind: List
apiVersion: v1
metadata:
  name: tomcat-app-service-example
items:
- kind: Service
  apiVersion: v1
  metadata:
    name: tomcat-app-service
  spec:
    selector:
      name: tomcat-app
    ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
    type: NodePort
- kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    labels:
      app: tomcat-app
    name: tomcat-app
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: tomcat-app
    template:
      metadata:
        labels:
          app: tomcat-app
      spec:
        containers:
        - image: docker.io/tomcat
          name: tomcat-app-container
          ports:
          - containerPort: 8080
          envFrom:
          - configMapRef:
              name: appint-instrumentation-env
          env:
          - name: RVBD_DSAHOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
          #- name: RVBD_APP_CONFIG
          #  value: new-config
          #- name: RVBD_APP_INSTANCE
          #  value: MyTomcatApp
          volumeMounts:
          - mountPath: /opt/Panorama
            name: rvbd-files
        volumes:
        - hostPath:
            path: /opt/Panorama
          name: rvbd-files
# ----------------------- EOF -------------------

6.4. (Optional) Although the Helm Chart automatically deploys your application, you can use the
following Kubernetes kubectl create command to deploy your application manually:
# kubectl create -f <YOUR_APPLICATION_YAML_FILE>

7. The DaemonSet should now be deployed, and the AppInternals agent should be harvesting
instrumentation information from the applications on the Kubernetes nodes. To verify that the
DaemonSet is running, log in to the Analysis Server (default UID/PW: admin/riverbed-default) and
view the Server and Instances tabs.

8. If you defined RVBD_CT_tagname variables, as described in step 6.2., they appear as container
tags in the WebUI.


Upgrading the Agent in a Kubernetes DaemonSet


AppInternals supports the dynamic Kubernetes RollingUpdate strategy, which the Kubernetes
documentation explains as follows: “With RollingUpdate update strategy, after you update a DaemonSet
template, old DaemonSet pods will be killed, and new DaemonSet pods will be created automatically, in a
controlled fashion.”
To upgrade agents installed as a DaemonSet in a Kubernetes cluster using the RollingUpdate strategy,
follow these steps:

1. Follow step 1. and step 2. in the section “Deploying the AppInternals Agent as a DaemonSet (No
Helm Charts)“ to host the new agent image on the Docker Registry.

2. Using the text editor of choice, update the rvbd-agent.yaml file to point to the new version of the
agent. For more information on working with this file, see step 3.5. in “Deploying the AppInternals
Agent as a DaemonSet (No Helm Charts)“.

Note—The ConfigMap does not need to be updated. The same settings can be used for the newer
agent.

3. Initiate the Kubernetes RollingUpdate of the DaemonSet by entering the following command:

• Not using Helm Charts


# kubectl apply -f rvbd-agent.yaml

• Using Helm Charts

For information on how to upgrade Helm Charts, see the helm documentation here.
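When not using Helm Charts, you can watch the rolling update progress with kubectl. A minimal sketch, again assuming the DaemonSet name rvbd-agent; the fallback keeps it runnable without cluster access:

```shell
# Hedged sketch: follow the RollingUpdate after 'kubectl apply -f rvbd-agent.yaml'.
progress=$(kubectl rollout status daemonset/rvbd-agent 2>/dev/null \
           || echo "no cluster access from this host")
echo "$progress"
kubectl rollout history daemonset/rvbd-agent 2>/dev/null || true   # past revisions
```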

Monitoring VMware Tanzu Applications


VMware Tanzu contains a distribution of Cloud Foundry, an application platform as a service (PaaS).
APM supports monitoring applications running in VMware Tanzu environments as well as the ability to
monitor the VMs (i.e., Diego cells) upon which the applications are running.


For VMware Tanzu deployments, the agent is available directly on the VMware Tanzu Network. Download
the agent .pivotal file here:
https://network.pivotal.io/products/riverbed-appinternals
Documentation for deploying the agent and monitoring applications in VMware Tanzu is also hosted on
VMware Tanzu:
https://docs.pivotal.io/partners/riverbed-appinternals/



CHAPTER 10 Synchronizing Agent Time With the
Analysis Server

APM requires that times on agent systems and the analysis server be synchronized through a network time
service (such as Network Time Protocol (NTP) or Windows Time Service).
This section describes time synchronization considerations for the agent.
For the on-premises version of the analysis server, you may also have to specify similar settings in the
“Date/Time Configuration Screen“. (If the analysis server is installed on a dedicated Linux system, do not
use those settings to configure NTP. They will overwrite any NTP configuration already configured for the
system.)
For the Software as a Service (SaaS) version of the analysis server, the analysis server is always
automatically synchronized with an external time service.

Problems Resulting from Unsynchronized Times


Basic operation of APM does not rely on synchronization. The analysis server collects data from agents and
processes it regardless of any time difference. However, unless times are synchronized, the analysis server
will have problems, including the following:
 Misalignment of metrics in graphs: Any graphs that show metric values over time require close
synchronization of the system times on each agent. If times are not synchronized, metric values will
not be plotted correctly along the time axis of the graph.
 Failure to detect related cross-tier transactions: A large time difference—greater than three
hours—between any agents will also affect how APM detects related cross-tier transactions. The
analysis server limits its detection of related transactions to within three hours of that transaction. If an
agent has a time skew greater than three hours, its transactions will not be included in the processing
to detect related transactions.
 Failure to retrieve related AppResponse data: If the analysis server is configured to use network data
from AppResponse appliances (as described in “AppResponse Appliances Screen“), a time difference
greater than +/- five minutes between agents and the appliance will cause missing data from
AppResponse.
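A quick way to estimate the skew between an agent and another host is to compare epoch seconds over SSH. A minimal sketch, where apm-server is a hypothetical host name; the fallback makes the sketch runnable offline:

```shell
# Hedged sketch: measure clock skew between this agent and a remote host.
local_now=$(date -u +%s)
remote_now=$(ssh -o BatchMode=yes -o ConnectTimeout=3 apm-server date -u +%s 2>/dev/null \
             || echo "$local_now")
skew=$((local_now - remote_now))
echo "clock skew: ${skew}s"   # keep well inside the 3-hour and 5-minute limits above
```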


Confirm That the System Time Is Synchronized


This section explains how to do the following on UNIX/Linux and Windows operating systems:
 Check whether the system time is synchronized and, if so, with which time servers
 If the time is not synchronized, how to specify a time server and start synchronization

UNIX/Linux

Check That System Time Is Synchronized


Use a command such as ntpstat or ntpq to check if NTP is running and which time servers it is using.
In this example, ntpstat shows that the system is using the time server on system 70.35.113.44:
# ntpstat
synchronised to NTP server (70.35.113.44) at stratum 3
time correct to within 189 ms
polling server every 1024 s

If NTP is not running, ntpstat displays results similar to this:


# ntpstat
unsynchronised
time server re-starting
polling server every 64 s

The ntpq -p command shows more detail:


# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
+ns.tx.primate.n 204.123.2.72 2 u 954 1024 377 66.734 -5.904 3.935
+70.35.113.44 129.6.15.28 2 u 460 1024 377 91.817 0.249 2.077
+206.51.211.152 206.51.211.153 4 u 960 1024 377 50.319 -0.554 5.437
*fairy.mattnordh 200.98.196.212 2 u 963 1024 377 52.607 -5.722 9.449

(See the following link for details on ntpq output:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s1-Checking_the_Status_of_NTP.html)

Start Time Synchronization


If NTP is not running, edit the /etc/ntp.conf file. Add a server command if necessary. (This may not be
necessary. The default file contains server commands that specify valid NTP servers.)
Start the NTP daemon by running ntpd and verify that it is running:
# which ntpd
/usr/sbin/ntpd
# ntpd
# netstat -an | grep 123
udp 0 0 192.168.175.23:123 0.0.0.0:*
udp 0 0 127.0.0.1:123 0.0.0.0:*
udp 0 0 0.0.0.0:123 0.0.0.0:*
udp 0 0 ::1:123 :::*
udp 0 0 :::123 :::*


Use the chkconfig command to see if the NTP daemon, ntpd, is configured to run when the system boots.
In this example, it is not:
# /sbin/chkconfig --list ntpd
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off

If necessary, use chkconfig to enable ntpd to run when the system boots. The example below accepts the
default, which enables ntpd for run levels 2 through 5:
# /sbin/chkconfig --list ntpd
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
# /sbin/chkconfig ntpd on
# /sbin/chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
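Note that ntpd and chkconfig reflect older init-based distributions. On newer systemd-based systems, the usual equivalents are chrony (or systemd-timesyncd) and timedatectl, sketched here as an assumption about your distribution; the fallbacks keep the commands safe where those tools are absent:

```shell
# Hedged sketch: check and enable time synchronization on systemd-based systems.
sync_state=$(timedatectl show -p NTPSynchronized 2>/dev/null || echo "NTPSynchronized=unknown")
echo "$sync_state"                                   # NTPSynchronized=yes means in sync
systemctl enable --now chronyd 2>/dev/null || true   # or: systemctl enable --now ntpd
```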

Windows

Check That System Time Is Synchronized


Use the w32tm command-line tool to see if the system time is currently synchronized. In this example, the
value of the ntpserver entry indicates the time is synchronized with system myntpserver:
C:\>w32tm /dumpreg /subkey:parameters

Value Name Value Type Value Data


--------------------------------------------------------------------

ServiceMain REG_SZ SvchostEntry_W32Time


ServiceDll REG_EXPAND_SZ C:\WINDOWS\system32\w32time.dll
Type REG_SZ NTP
ntpserver REG_SZ myntpserver,0x1

(The preceding command returns the contents of the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\Parameters key in the
Windows registry.)

Start Time Synchronization


If the system time is not synchronized, use another w32tm command to specify a network time source:
C:\>w32tm /config /syncfromflags:manual /manualpeerlist:myntpserver,0x1 /update
The command completed successfully.
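On current Windows versions, w32tm can also report live synchronization state. The lines below only echo the commands for reference (run them in a Windows Command Prompt); both are standard w32tm options:

```shell
# Hedged note: Windows-side commands, echoed here so the sketch runs anywhere.
cmd_status='w32tm /query /status'   # shows source, stratum, and last successful sync
cmd_resync='w32tm /resync'          # forces an immediate synchronization
printf '%s\n%s\n' "$cmd_status" "$cmd_resync"
```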



CHAPTER 11 Enabling Instrumentation of Java
Processes

Overview
This chapter describes how to configure the agent system so that Java processes load the APM profiler
library and instrumentation is enabled. Unless instrumentation is enabled, processes will not load the
profiler library and will not be monitored.
APM monitors Java applications by loading a profiler library into processes when they start as follows:
 The profiler library determines if a user configured a process for instrumentation in the “Agent Details
Screen“. Only processes with the Instrument option selected (see “Instrument Option (Restart
Required)“) will be instrumented.
 For those processes, monitoring begins once the JVM startup completes. APM generates transaction
trace data according to the settings specified in the corresponding “Define a Configuration Screen“ for
the process.
The following sections explain how to enable and troubleshoot Java instrumentation:
 “Enabling Java Instrumentation on Windows“
 “Enabling Java Instrumentation on Linux“
 “Enabling Java Instrumentation on Solaris“
 “Adding -agentpath to Java Command Line“
 “Using JidaRegister.exe to Instrument Java Processes“
 “Using Environment Variables to Instrument Java Processes“
 “Troubleshooting Java Instrumentation Issues“


Enabling Java Instrumentation on Windows


Aternity AppInternals supports instrumentation of Java and .NET Core applications on Windows. For
information on supported operating system versions, see the System Requirements document on the support page.
The easiest way to enable instrumentation of Java applications is during the agent installation, where you
are prompted to Enable Instrumentation Automatically in the agent installation, as follows:

Note—If you select Enable Instrumentation Automatically, all Java and .NET processes are enabled for
instrumentation. After the installation completes, you then choose which processes to instrument and
monitor in the “Agent List Screen“.

If you do not want all Java and .NET processes enabled for instrumentation, do not select this feature during
the installation. After the installation completes, you can manually configure processes to be instrumented
with the rpictrl utility. For more information, see “Automatic Process Instrumentation on Windows“, as
well as “Adding -agentpath to Java Command Line“ and “Using JidaRegister.exe to Instrument Java
Processes“.

Enabling Java Instrumentation on Linux


To enable Java instrumentation on Linux, do one of the following:
 During the agent installation, as described in “Linux only: Whether to enable automatic system-wide
instrumentation“.


 After the agent installation, by using the script
<installdir>/Panorama/install_mn/install_root_required.sh to enable
instrumentation automatically for all processes on the system. You must run this script as root.

Note: Enabling Instrumentation Automatically During Installation


Agent installations on Linux can enable instrumentation automatically when they run as root. See “Linux
only: Whether to enable automatic system-wide instrumentation“ in the installation documentation.
Unattended installations enable instrumentation automatically if they specify the
“O_SI_AUTO_INSTRUMENT“ option.

The script has other options for post-installation tasks on Unix-like operating systems. (See “Non-Root
Installs: Run install_root_required.sh“ in the installation documentation.)
Choose the last option to enable instrumentation automatically. For example:
[root@nhv1-rh6-2 install_mn]# ./install_root_required.sh
Change permissions to run the NPM sub-agent (recommended, SaaS analysis
server only)? [y|n]: n
Enable automatic startup on system reboot (recommended)? [y|n]: n
Enable automatic Java instrumentation system-wide (optional)? [y|n]:y
Successfully installed process injection library
Process injection already enabled.
Successfully enabled process injection library
After you enable instrumentation system-wide, select the Java processes you want to instrument (via
the “Instrument Option (Restart Required)“ option in the “Agent Details Screen“) and restart the
applications. After restarting, the applications will load the profiler library and they will appear in the
Agent Details screen as Instrumented.
Enabling system-wide instrumentation does the following:
– Copies or creates symbolic links of the librpil*.so shared objects from
<installdir>/Panorama/hedzup/mn/lib and <installdir>/Panorama/hedzup/mn/lib64 as
follows:
* For Red Hat, SUSE, CentOS: in the /lib and /lib64 system directories, respectively
* For Ubuntu: in the /lib/i386-linux-gnu, /lib/i686-linux-gnu, and /lib/x86_64-linux-gnu
directories, respectively
– Adds the following entry to the system file /etc/ld.so.preload:
* For Red Hat, SUSE, CentOS:
/$LIB/librpil.so

* For Ubuntu:
/lib/${PLATFORM}-linux-gnu/librpil.so
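You can verify the result of enabling system-wide instrumentation by inspecting /etc/ld.so.preload. A minimal, read-only sketch that is safe to run anywhere:

```shell
# Hedged sketch: check whether the librpil.so preload entry is present.
preload=$(cat /etc/ld.so.preload 2>/dev/null || true)
case "$preload" in
    *librpil.so*) msg="process injection preload entry present" ;;
    *)            msg="no librpil.so entry in /etc/ld.so.preload" ;;
esac
echo "$msg"
```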

An alternative to running <installdir>/Panorama/install_mn/install_root_required.sh is to run the
script <installdir>/Panorama/hedzup/mn/bin/rpictrl.sh. Run it twice, with the install and enable
arguments. You must run the script as root:


[root@nhv1-rh6-1 bin]# ./rpictrl.sh install


Process injection already disabled.
Successfully uninstalled process injection library.
Successfully installed process injection library.
[root@nhv1-rh6-1 bin]# ./rpictrl.sh enable
Successfully enabled process injection.
To disable system-wide instrumentation, run the script
<installdir>/Panorama/hedzup/mn/bin/rpictrl.sh with the disable argument:
[root@nhv1-rh6-1 bin]# ./rpictrl.sh disable
Successfully disabled process injection.

The disable operation removes the entry from /etc/ld.so.preload.


Any time system-wide instrumentation is enabled or disabled, APM writes a message to the file
/var/log/messages. For example:
Jun 25 15:51:30 nhv1-rh6-1 root: Successfully disabled process injection.
Jun 25 15:52:29 nhv1-rh6-1 root: Successfully enabled process injection.

Enabling Java Instrumentation on Solaris


On the Solaris operating system, you must enable instrumentation for Java applications and restart
them before they will appear in the “List of Processes to Instrument and Assign JMX and WMI Custom
Metric Configurations“ of the “Agent Details Screen“.
For an agent that is running on Solaris, if you do not see a Java application that you want to monitor in the
Agent Details screen, make sure you complete these steps:

1) Enable instrumentation as described in this section. On Solaris, use one of the following approaches:
– Set the “JAVA_TOOL_OPTIONS“ environment variable.
– “Adding -agentpath to Java Command Line“

2) Restart the application. This will cause its process to appear in the Processes to Instrument list.

3) Once the application appears in the Processes to Instrument list, select the Instrument? option.

4) Restart the application again. APM will begin monitoring it.

Adding -agentpath to Java Command Line


Works on: All platforms (AIX, Linux, Solaris, Windows)
You can specify the -agentpath option directly in the Java command that starts the application in which you
want to load the profiler library. You cannot use this approach for native applications with “embedded”
JVMs that do not use the Java launcher (java.exe) command line.


Specify the 64-bit or 32-bit version of the library and the APM installation directory for the system:

Environment                -agentpath Value and Example

Windows 32-bit JVM         <installdir>\Panorama\hedzup\mn\bin\rpilj.dll
                           -agentpath:C:\Panorama\hedzup\mn\bin\rpilj.dll

Windows 64-bit JVM         <installdir>\Panorama\hedzup\mn\bin\rpilj64.dll
                           -agentpath:C:\Panorama\hedzup\mn\bin\rpilj64.dll

AIX 32-bit JVM             <installdir>/Panorama/hedzup/mn/lib/librpilj.a
                           (library suffix is .a)
                           -agentpath:/opt/Panorama/hedzup/mn/lib/librpilj.a

AIX 64-bit JVM             <installdir>/Panorama/hedzup/mn/lib/librpilj64.a
                           (library suffix is .a)
                           -agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.a

Linux/Solaris 32-bit JVM   <installdir>/Panorama/hedzup/mn/lib/librpilj.so
                           (library suffix is .so)
                           -agentpath:/opt/Panorama/hedzup/mn/lib/librpilj.so

Linux/Solaris 64-bit JVM   <installdir>/Panorama/hedzup/mn/lib/librpilj64.so
                           (library suffix is .so)
                           -agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so
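For an application launched directly with the java command on Linux, the option goes before the application arguments. A hedged sketch that only composes the command line, assuming the default /opt install directory and a 64-bit JVM (myapp.jar is a placeholder for your application):

```shell
# Hedged sketch: compose a java command line with -agentpath (not executed here).
installdir=/opt
launch="java -agentpath:${installdir}/Panorama/hedzup/mn/lib/librpilj64.so -jar myapp.jar"
echo "$launch"
```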

Because -agentpath is configured for each process, it avoids a potential problem with the system-wide
JAVA_TOOL_OPTIONS variable that is set on Windows as described in “Using JidaRegister.exe to Instrument Java
Processes“. With that approach, if there is a mismatch between the 32-bit or 64-bit library specified by
JAVA_TOOL_OPTIONS and the JVM that loads it, the JVM will not start.
However, you must know where to specify the -agentpath options for the application. This varies by
application and may be in a startup script, user interface, or a configuration file.
For example, for Tomcat on Windows, you specify the option in the Tomcat properties dialog, in the Java
tab:


After you specify the -agentpath option, select the Java processes you want to instrument (via the
“Instrument Option (Restart Required)“ option in the “Agent Details Screen“) and restart the applications.
After restarting, the applications will load the profiler library and they will appear in the Agent Details
screen as Instrumented.

Using JidaRegister.exe to Instrument Java Processes


Works on: Windows
This utility sets the system-wide JAVA_TOOL_OPTIONS environment variable, which specifies the
-agentpath Java option to load the APM profiler library. Note the following:
 JAVA_TOOL_OPTIONS attempts to load the profiler library in all Java processes that start on the
system.
 If JAVA_TOOL_OPTIONS specifies the 64-bit library, applications that use a 32-bit JVM will not start
(and vice-versa).

On Windows, run the <installdir>\Panorama\hedzup\mn\bin\JidaRegister.exe program to set
JAVA_TOOL_OPTIONS as a system-wide environment variable.
Run JidaRegister.exe and respond to the prompts:
 Whether to instrument (load the profiler library) in Java processes. Specify y to proceed.
 Whether to load the 64-bit (specify y) or 32-bit version (specify n) of the library.
The following example specifies the 64-bit library:
C:\Panorama\hedzup\mn\bin>JidaRegister.exe
Running as user "NHX1-W2K8R2-12\Administrator"
Do you want to instrument your Java applications? [y|n]:y
Do your applications use a 64-bit JRE? [y|n]:y
Setting system environment variable:
JAVA_TOOL_OPTIONS -agentpath:C:\Panorama\hedzup\mn\bin\rpilj64.dll
done.
A log file has been written to C:\Panorama\hedzup\mn\log\JidaRegister.log
C:\Panorama\hedzup\mn\bin>

Once you run JidaRegister.exe, select the Java processes you want to instrument (via the “Instrument
Option (Restart Required)“ option in the “Agent Details Screen“) and restart the corresponding Windows
services. For example, for Tomcat:


After restarting the services, the applications will load the profiler library and they will appear in the Agent
Details screen as Instrumented. For example, for Tomcat:

Keep in mind the following points when using JidaRegister.exe:


 If JAVA_TOOL_OPTIONS had already been set with options unrelated to APM, JidaRegister.exe
preserves those options and adds the option to load the profiler library at the beginning of the variable.
 Use the JidaRegister.exe with the uninstall argument to remove APM options from
JAVA_TOOL_OPTIONS:
C:\Panorama\hedzup\mn\bin>JidaRegister.exe uninstall
Running as user "NHX1-W2K8R2-12\Administrator"
Unsetting environment variables...A log file has been written to C:\Panorama\hedzup\mn\log\Ji
daRegister.log

When you remove agent software on Windows (see “Removing Agent Software“), the uninstall
program runs JidaRegister.exe with the uninstall argument.
 Once you choose the 64-bit or 32-bit library, any JVM that does not match that choice will fail to start.
For example, after choosing the 64-bit library, running a 32-bit JVM fails:
C:\Users\Administrator>java -version
Picked up JAVA_TOOL_OPTIONS: -agentpath:C:\Panorama\hedzup\mn\bin\rpilj64.dll
Error occurred during initialization of VM
Could not find agent library C:\Panorama\hedzup\mn\bin\rpilj64.dll in absolute path, with err
or: Can't load AMD 64-bit .dll on a IA 32-bit platform


 Although Windows services started after running JidaRegister.exe pick up changes to
JAVA_TOOL_OPTIONS, other processes may not. For example, new Windows Command Prompt
processes will not, so any applications restarted from the command line will not load the profiler
library. You must reboot the system to reliably propagate changes from running JidaRegister.exe to all
processes. You can propagate the changes to some (but not all) new processes by clicking OK in the
Windows Environment Variables window:

Using Environment Variables to Instrument Java Processes


The following sections explain how to use environment variables to configure the instrumentation of Java
processes.

LD_PRELOAD
Works on: Linux
The LD_PRELOAD environment variable is set at the process level by modifying the application startup
script or at the user level by modifying a user profile (such as ~/.profile), and loads the profiler library
in all processes within the scope of the variable.

Note: If agent software is removed without removing the option that loads the library from LD_PRELOAD,
Java processes will start but generate errors.


The LD_PRELOAD environment variable specifies an absolute path to shared objects that processes will
load before any other library. Unlike “JAVA_TOOL_OPTIONS“, LD_PRELOAD affects all processes that
start in the scope of the environment variable, not just Java processes.
Typically, you set LD_PRELOAD in the startup script for applications. You could also add it to a user profile
(such as ~/.profile) so that it would automatically affect all processes started by that user.
Set LD_PRELOAD as follows:
export LD_PRELOAD="<installdir>/Panorama/hedzup/mn/\$LIB/librpil.so $LD_PRELOAD"

Note the following:


 Replace <installdir> with the directory where the APM agent software is installed.
 Do not change the \$LIB reference in the path. It is a Dynamic String Token (DST) that resolves to
either the <installdir>/Panorama/hedzup/mn/lib/ or <installdir>/Panorama/hedzup/mn/lib64/
directory. This means you can use the same LD_PRELOAD value for both 32-bit and 64-bit JVMs.
The following example shows a script that adds LD_PRELOAD and starts multiple Tomcat servers:
[root@jidatest opt]# more begin_apps.sh
export LD_PRELOAD="/opt/Panorama/hedzup/mn/\$LIB/librpil.so $LD_PRELOAD"
/opt/apache-tomcat-t1Client/bin/startup.sh jidatest1
/opt/apache-tomcat-t2Flight/bin/startup.sh jidatest2
/opt/apache-tomcat-t3Schedl/bin/startup.sh jidatest3
[root@jidatest opt]#

After you set LD_PRELOAD, select the Java processes you want to instrument (via the “Instrument Option
(Restart Required)“ option in the “Agent Details Screen“) and restart the applications. After restarting, the
applications will load the profiler library and they will appear in the Agent Details screen as Instrumented.
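You can confirm that a running process actually mapped the preload library by inspecting its /proc maps. A minimal sketch; it uses this shell's own PID only so the commands run anywhere, so substitute the PID of your Java process:

```shell
# Hedged sketch: count librpil mappings for a process via /proc/<pid>/maps.
pid=$$                     # placeholder: use your Java process ID instead
maps=$(grep -c librpil /proc/$pid/maps 2>/dev/null)
maps=${maps:-0}
echo "librpil mappings for PID $pid: $maps"   # 0 means the library was not preloaded
```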

JAVA_TOOL_OPTIONS
Works on: AIX, Linux, Solaris (on Windows, use “Using JidaRegister.exe to Instrument Java Processes“).
Use the JAVA_TOOL_OPTIONS environment variable to specify the -agentpath Java option to load the
APM profiler library. Java processes that are in the scope of the environment variable will add the option
when they start.
Typically, you set JAVA_TOOL_OPTIONS in the startup script for applications. You could also add it to a
user profile (such as ~/.profile) so that it would automatically affect all processes started by that user.
After you set JAVA_TOOL_OPTIONS, select the Java processes you want to instrument (via the
“Instrument Option (Restart Required)“ option in the “Agent Details Screen“) and restart the applications.
After restarting, the applications will load the profiler library and they will appear in the Agent Details
screen as Instrumented.
The specific value you supply for the -agentpath Java option varies depending on the operating system and
whether the application uses a 32-bit or 64-bit JVM. See the “Using JAVA_TOOL_OPTIONS on AIX“,
“Using JAVA_TOOL_OPTIONS on Linux“, and “Using JAVA_TOOL_OPTIONS on Solaris“ sections for
operating-system-specific details.


On Unix-like operating systems, JAVA_TOOL_OPTIONS has negative side effects. Because of these,
consider using the approaches described in “Adding -agentpath to Java Command Line“ or
“LD_PRELOAD“ (Linux only) instead. JAVA_TOOL_OPTIONS has these negative side effects:
 It generates an additional message to standard error (stderr) about loading the APM library. This may
cause issues in applications that, for example, treat any output on standard error as a problem. The
following example (on “Using JAVA_TOOL_OPTIONS on Linux“) uses java -version
and redirects its standard error output to a file (2> show.stderr), then shows the additional
message in the file:
bash-4.1$ echo $JAVA_TOOL_OPTIONS
-agentpath:/opt/Panorama/hedzup/mn/$LIB/librpilj.so
bash-4.1$ java -version 2> show.stderr
bash-4.1$ more show.stderr
Picked up JAVA_TOOL_OPTIONS: -agentpath:/opt/Panorama/hedzup/mn/$LIB/librpilj.so

java version "1.6.0_24"


OpenJDK Runtime Environment (IcedTea6 1.11.1) (rhel-1.45.1.11.1.el6-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)

 If the APM agent software is removed without removing the applicable -agentpath option from
JAVA_TOOL_OPTIONS, Java processes within the scope of JAVA_TOOL_OPTIONS will no longer
start.

Using JAVA_TOOL_OPTIONS on AIX


On AIX, you must specify different values for JAVA_TOOL_OPTIONS depending on whether applications
use a 32-bit JVM or a 64-bit JVM:
For 32-bit JVMs, set JAVA_TOOL_OPTIONS as follows (this example uses the Bourne shell):
export JAVA_TOOL_OPTIONS="-agentpath:<installdir>/Panorama/hedzup/mn/lib/librpilj.a
$JAVA_TOOL_OPTIONS"

For 64-bit JVMs:


export JAVA_TOOL_OPTIONS="-agentpath:<installdir>/Panorama/hedzup/mn/lib/librpilj64.a
$JAVA_TOOL_OPTIONS"

Note the following:


 Replace <installdir> with the directory where the APM agent software is installed.
 The library file suffix is .a (not .so, as on other Unix-like operating systems).
 If JAVA_TOOL_OPTIONS specifies the 64-bit library, applications within the scope of the variable
that use a 32-bit JVM will not start (and vice-versa). For example, after specifying the 64-bit library,
running a 32-bit JVM fails:
bash-4.2# echo $JAVA_TOOL_OPTIONS
-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.a
bash-4.2# java -version
JVMJ9TI001E Agent library /opt/Panorama/hedzup/mn/lib/librpilj64.a could not be opened (
0509-022 Cannot load module /opt/Panorama/hedzup/mn/lib/librpilj64.a.
0509-026 System error: Cannot run a file that does not have a valid format.)
JVMJ9VM015W Initialization error for library j9jvmti23(-3): JVMJ9VM009E J9VMDllMain failed
Could not create the Java virtual machine.

You can use the ps eww command (see this IBM link for details) to verify that JAVA_TOOL_OPTIONS is in
effect for a particular Java process. In the following example, it is not:
bash-4.2# ps -e | grep java # find the process ID:
4915358 - 14:53 java
6488266 - 8:17 java
7078070 - 0:41 java
8650978 - 0:49 java
9699554 - 24:55 java


bash-4.2# ps eww 6488266 | tr ' ' '\n' | grep = | sort


-Dfile.encoding=UTF-8
-Dfile.encoding=UTF-8
AIXTHREAD_SCOPE=S
AUTHSTATE=files
A__z=!
HOME=/var/adm/pconsole
IBM_COREDIR=/var/log/pconsole/
IBM_HEAPDUMPDIR=/var/log/pconsole/
IBM_JAVACOREDIR=/var/log/pconsole/
IBM_JAVA_COMMAND_LINE=/usr/java5/bin/java
IBM_JVM_AIXTHREAD_SCOPE_NEW_VALUE=S
IBM_JVM_CHANGED_ENVVARS_6488266=LIBPATH,AIXTHREAD_SCOPE,LDR_CNTRL
IBM_JVM_LDR_CNTRL_NEW_VALUE=MAXDATA=0XA0000000@DSA
IBM_JVM_LIBPATH_NEW_VALUE=/usr/java5/jre/bin:/usr/java5/jre/bin/classic:/usr/java5/jre/bin
JAVA_HOME=/usr/java5
LANG=C
.
.
.

Using JAVA_TOOL_OPTIONS on Linux


On Linux, set JAVA_TOOL_OPTIONS as follows:
export JAVA_TOOL_OPTIONS="-agentpath:<installdir>/Panorama/hedzup/mn/\$LIB/librpilj.so
$JAVA_TOOL_OPTIONS"

Note the following:


 Replace <installdir> with the directory where the APM agent software is installed.
 Do not change the \$LIB reference in the path. It is a Dynamic String Token (DST) that resolves to
either the <installdir>/Panorama/hedzup/mn/lib/ or <installdir>/Panorama/hedzup/mn/lib64/
directory. This means you can use the same JAVA_TOOL_OPTIONS value for both 32-bit and 64-bit
JVMs.
The following example shows a script that adds JAVA_TOOL_OPTIONS and starts multiple Tomcat
servers:
[root@jidatest opt]# more begin_apps.sh
export JAVA_TOOL_OPTIONS="-agentpath:/opt/Panorama/hedzup/mn/\$LIB/librpilj.so $JAVA_TOOL_OPTIONS"
/opt/apache-tomcat-t1Client/bin/startup.sh jidatest1
/opt/apache-tomcat-t2Flight/bin/startup.sh jidatest2
/opt/apache-tomcat-t3Schedl/bin/startup.sh jidatest3
[root@jidatest opt]#

You can use the strings -a command (see http://linux.die.net/man/1/strings for more detail) to verify that
JAVA_TOOL_OPTIONS is in effect for a particular Java process. For example:
[root@nhx2-rh6-1 ~]# ps -ef | grep java # find the process ID:
root 11230 1 0 Dec04 ? 00:10:08 /usr/bin/java -Djava.util.logging.config.file=/op
t/apache-tomcat-6.0.41/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassL
oaderLogManager -Djava.endorsed.dirs=/opt/apache-tomcat-6.0.41/endorsed -classpath /opt/apache-to
mcat-6.0.41/bin/bootstrap.jar -Dcatalina.base=/opt/apache-tomcat-6.0.41 -Dcatalina.home=/opt/apac
he-tomcat-6.0.41 -Djava.io.tmpdir=/opt/apache-tomcat-6.0.41/temp org.apache.catalina.startup.Boot
strap start
[root@nhx2-rh6-1 ~]# strings -a /proc/11230/environ | grep JAVA_TOOL_OPTIONS
JAVA_TOOL_OPTIONS=-agentpath:/opt/Panorama/hedzup/mn/$LIB/librpilj.so


Using JAVA_TOOL_OPTIONS on Solaris


On Solaris, you must specify different values for JAVA_TOOL_OPTIONS depending on whether
applications use a 32-bit JVM or a 64-bit JVM:
For 32-bit JVMs, set JAVA_TOOL_OPTIONS as follows (this example uses the Bourne shell):
export JAVA_TOOL_OPTIONS="-agentpath:<installdir>/Panorama/hedzup/mn/lib/librpilj.so
$JAVA_TOOL_OPTIONS"

For 64-bit JVMs:


export JAVA_TOOL_OPTIONS="-agentpath:<installdir>/Panorama/hedzup/mn/lib/librpilj64.so
$JAVA_TOOL_OPTIONS"

Note the following:


 Replace <installdir> with the directory where the APM agent software is installed.
 If JAVA_TOOL_OPTIONS specifies the 64-bit library, applications within the scope of the variable
that use a 32-bit JVM will not start (and vice-versa). For example, after specifying the 64-bit library,
running a 32-bit JVM fails:
bash-4.3$ echo $JAVA_TOOL_OPTIONS
-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so
bash-4.3$ /usr/jdk1.6.0_45/jre/bin/java -version
Picked up JAVA_TOOL_OPTIONS: -agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so
Error occurred during initialization of VM
Could not find agent library /opt/Panorama/hedzup/mn/lib/librpilj64.so in absolute path, with
error: ld.so.1: java: fatal: /opt/Panorama/hedzup/mn/lib/librpilj64.so: wrong ELF class: ELF
CLASS64 (Possible cause: architecture word width mismatch)

You can use the pargs -e command (see this Oracle link for more detail) to verify that
JAVA_TOOL_OPTIONS is in effect for a particular Java process. In the following example, it is not and
pargs -e does not return any results:
bash-3.00# ps -ef | grep java # find the process ID:
root 25998 25959 0 10:48:04 pts/4 0:00 grep java
noaccess 1582 1 0 Aug 31 ? 106:51 /usr/java/bin/java -server -Xmx128m -XX:+UsePa
rallelGC -XX:ParallelGCThreads=4
bash-3.00# pargs -e 1582 | grep JAVA_TOOL_OPTIONS
bash-3.00#

Troubleshooting Java Instrumentation Issues


The following sections explain how to troubleshoot some common Java instrumentation issues. For
information on troubleshooting general instrumentation issues, see “Instrumentation Techniques and
Troubleshooting“.

Symptoms of Not Enabling Java Instrumentation


The APM analysis server cannot detect if Java instrumentation is enabled for specific processes. As a result,
unless Java instrumentation is enabled, the user interface in some cases shows a misleading status for Java
processes in the “Agent Details Screen“.

12 Aternity APM Version 11.4.3



When users choose to instrument Java processes (via the “Instrument Option (Restart Required)“ in the “Agent Details Screen“) for which instrumentation has not been enabled, they will not be instrumented. The status will incorrectly show as Awaiting Restart, but restarting the affected processes will have no effect until you enable instrumentation as described here (a Tomcat application, for example, behaves this way).

You correct this confusing behavior when you enable instrumentation as described here and restart the application:

• If the process had already been selected for instrumentation, APM will instrument it and it will appear in the Agent Details screen as Instrumented.
• If the process had NOT been selected for instrumentation, the status will appear as Running. Selecting the Instrument? option will change the status to Awaiting Restart. However, because instrumentation has now been enabled, the status is correct: restarting the application really will cause APM to instrument it.
A reliable way to determine if a particular Java process is instrumented is to check on the agent system for
the presence of a JIDA sub-agent log file that corresponds to the process. The log files are in the directory
<installdir>\Panorama\hedzup\mn\log. Their names include the process name and time the process
started. For the Tomcat process in the previous example, the log file name would be similar to the following:
DA-JIDA_Tomcat_TOMCAT7_20150610130750_5908.txt
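This check can be scripted; the following sketch assumes a Linux/Unix agent, with the install directory and process name as placeholders you must adjust:

```shell
# Sketch: look for a JIDA sub-agent log file for a given process name.
# INSTALLDIR and PROC are placeholders; set them for your system.
INSTALLDIR=${INSTALLDIR:-/opt}
PROC=${PROC:-Tomcat}
LOGDIR="$INSTALLDIR/Panorama/hedzup/mn/log"
if ls "$LOGDIR"/DA-JIDA_"$PROC"_* >/dev/null 2>&1; then
  echo "$PROC appears to be instrumented (JIDA log found)"
else
  echo "no JIDA log found for $PROC in $LOGDIR"
fi
```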



CHAPTER 12 Enabling Instrumentation of .NET Processes

Overview
Riverbed SteelCentral AppInternals supports instrumentation of applications running on .NET Core and the
.NET Framework.
.NET Framework is only supported on Windows.
.NET Core is supported on both Windows and Linux, and can be deployed as FDD, FDE, or SCD.
For information on specific supported operating systems and versions, see the System Requirements
document on the support page.
The following sections explain how to enable and troubleshoot .NET instrumentation:
 “Enabling .NET Core Instrumentation on Windows“
 “Enabling .NET Core Instrumentation On Linux“
 “Instrumenting Framework-Dependent Deployment (FDD) and Framework Dependent Executables
(FDE) Applications“
 “Instrumenting Self-Contained Deployment (SCD) Applications“
 “Troubleshooting .NET Instrumentation Issues“

Enabling .NET Core Instrumentation on Windows


Aternity AppInternals supports instrumentation of .NET Core 2.0 (and later) applications on Windows.
For information on supported Windows versions, see the System Requirements document on the
support page.
The following sections explain how to enable .NET Core instrumentation on Windows:
 “Enabling .NET Core Instrumentation During a Windows Agent Installation“


 “Enabling .NET Core Instrumentation After a Windows Agent Installation“

Note—Riverbed's SteelCentral AppInternals Data Adapter feature uses Microsoft's DotNet Core to collect
performance data from Customer's instrumented DotNet Core applications. As such, Microsoft may collect
data from Customer's instrumented DotNet Core applications, and Microsoft's collection and use of such
data is subject to Microsoft's privacy statement located at
http://go.microsoft.com/fwlink/?LinkID=528096.

Enabling .NET Core Instrumentation During a Windows Agent Installation

The easiest way to enable instrumentation of .NET Core applications is during the agent installation, where
you are prompted to Enable Instrumentation Automatically.

Note—If you select Enable Instrumentation Automatically, all Java and .NET processes are enabled for
instrumentation. After the installation completes, you then choose which processes to instrument and
monitor in the “Agent List Screen“.

Enabling .NET Core Instrumentation After a Windows Agent Installation


If you chose not to enable instrumentation of .NET Core during the agent installation on Windows, you
must enable instrumentation of .NET Core processes manually after the agent installation completes by
doing one of the following:
 “Running the RPID Utility“
 “Adding Environment Variables to the Windows Registry“


Running the RPID Utility


For information on running the RPID utility to enable instrumentation of .NET Core processes on Windows,
see “Starting and Enabling RPID“.

Adding Environment Variables to the Windows Registry


To enable instrumentation of .NET Core on Windows, you can manually add variables to the Windows
registry, as follows:

1. Add the following environment variables to the Windows registry that enable .NET Core
instrumentation, using the command line, a batch script, or the System Settings on the Control Panel.

Note—Incorrectly entering or mistyping these environment variables may cause unpredictable
behavior. Therefore, we recommend that you use the RPID utility to enter these variables for you. For
more information, see “Starting and Enabling RPID“.

Note—PANORAMA_HOME is a special APM variable that resolves to the root directory of the agent
installation (for example, C:\ on Windows).

CORECLR_ENABLE_PROFILING=1

CORECLR_PROFILER={CEBBDDAB-C0A5-4006-9C04-2C3B75DF2F80}

CORECLR_PROFILER_PATH=PANORAMA_HOME\Panorama\hedzup\mn\lib\libAwDotNetProf64.so

DOTNET_ADDITIONAL_DEPS=PANORAMA_HOME\Panorama\hedzup\mn\install\dotnet\additionalDeps\Riverbed.AppInternals.DotNetCore

DOTNET_SHARED_STORE=PANORAMA_HOME\Panorama\hedzup\mn\install\dotnet\store

2. Log into the Analysis Server WebUI as admin.

3. In the “Agent Details Screen“ on the Analysis Server WebUI, select the .NET Core processes you want
to instrument and restart the applications.

Note—After restarting, the applications will load the profiler library and they will appear in the Agent
Details screen as Instrumented. You then choose which processes to instrument and monitor in the
“Agent List Screen“.
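For illustration only, one way to script step 1 from an elevated command prompt is with reg add against the machine-wide environment key. This is a hedged sketch, not a Riverbed-documented procedure (the key path shown is the standard Windows location for system environment variables); the RPID utility remains the recommended approach:

```shell
:: Sketch only (Windows cmd, run elevated). Writes the machine-wide
:: environment values; services and processes pick them up after a restart.
set "ENVKEY=HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment"
reg add "%ENVKEY%" /v CORECLR_ENABLE_PROFILING /t REG_SZ /d "1" /f
reg add "%ENVKEY%" /v CORECLR_PROFILER /t REG_SZ /d "{CEBBDDAB-C0A5-4006-9C04-2C3B75DF2F80}" /f
reg add "%ENVKEY%" /v CORECLR_PROFILER_PATH /t REG_SZ /d "PANORAMA_HOME\Panorama\hedzup\mn\lib\libAwDotNetProf64.so" /f
```

The remaining two variables (DOTNET_ADDITIONAL_DEPS and DOTNET_SHARED_STORE) would be added the same way, with the values shown in step 1.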

Enabling .NET Core Instrumentation On Linux


Aternity AppInternals supports instrumentation of .NET Core 2.0 (and later) applications on Linux. For
information on supported operating system versions, see the System Requirements document on the support page.
The following sections explain how to enable .NET Core instrumentation on Linux:
 “Enabling .NET Core Instrumentation During a Linux Agent Installation“
 “Enabling .NET Core Instrumentation After a Linux Agent Installation“


Enabling .NET Core Instrumentation During a Linux Agent Installation


The easiest way to enable instrumentation of .NET Core applications is during the agent installation, where
you are prompted to Enable Instrumentation Automatically.

Note—If you select Enable Instrumentation Automatically, all Java and .NET processes are enabled for
instrumentation. After the installation completes, you then choose which processes to instrument and
monitor in the “Agent List Screen“.

Enabling .NET Core Instrumentation After a Linux Agent Installation


If you chose not to enable instrumentation of .NET Core during the agent installation on Linux, you must
enable instrumentation of .NET Core processes manually after the agent installation completes by doing
one of the following:
 “Rerunning the install_root_required.sh Script“
 “Adding Environment Variables to the .profile File“

Rerunning the install_root_required.sh Script


To instrument .NET Core processes after the agent installation completes, run the
install_root_required.sh script, as follows:

1. Log in as root or become superuser on the Linux system where the agent was installed.


2. Change your working directory to <installdir>/Panorama/install_mn/, replacing <installdir> with the agent directory (typically /opt), as follows:

# cd /opt/Panorama/install_mn

3. Restart the installation script by running install_root_required.sh

# ./install_root_required.sh

4. You are presented with a series of options. Choose the last option to enable instrumentation
automatically, as in the following example.
[root@nhv1-rh6-2 install_mn]# ./install_root_required.sh

Change permissions to run the NPM sub-agent (recommended, SaaS analysis server only)? [y|n]: n

Enable automatic startup on system reboot (recommended)? [y|n]: n

Enable automatic Java and .NET Core instrumentation system-wide (optional)? [y|n]:y

Successfully installed process injection library

Process injection already enabled.

Successfully enabled process injection library

Adding Environment Variables to the .profile File


To instrument .NET Core processes by adding environment variables to the shell .profile file, follow
these steps:

1. Log in as root or become superuser on the Linux system where the agent was installed.

2. Change your working directory to the root install directory of the agent, typically /opt.

3. Open the shell .profile file for editing, and add the following lines:

Note—Incorrectly entering or mistyping these environment variables may cause unpredictable
behavior. Therefore, we recommend that you use the install_root_required.sh script to add
these variables to the Linux system. For more information, see “Rerunning the install_root_required.sh
Script“.

Note—${PANORAMA_HOME} is a special APM variable that resolves to the root directory of the agent
installation, typically /opt.

export CORECLR_ENABLE_PROFILING=1

export CORECLR_PROFILER={CEBBDDAB-C0A5-4006-9C04-2C3B75DF2F80}

export CORECLR_PROFILER_PATH=${PANORAMA_HOME}/Panorama/hedzup/mn/lib/libAwDotNetProf64.so

export DOTNET_ADDITIONAL_DEPS=${PANORAMA_HOME}/Panorama/hedzup/mn/install/dotnet/additionalDeps/Riverbed.AppInternals.DotNetCore

export DOTNET_SHARED_STORE=${PANORAMA_HOME}/Panorama/hedzup/mn/install/dotnet/store


4. Write and quit the file.

5. Log into the Analysis Server WebUI as admin.

6. In the “Agent Details Screen“ on the Analysis Server WebUI, select the .NET Core processes you want
to instrument and restart the applications.

Note—After restarting, the applications will load the profiler library and they will appear in the “Agent
Details Screen“ as Instrumented. You then choose which processes to instrument and monitor in
the “Agent List Screen“.
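As a quick sanity check after editing .profile, you can confirm that the profiler library the exports point at actually exists. This is a sketch; PANORAMA_HOME is assumed to default to /opt, per the note above:

```shell
# Sketch: verify the CoreCLR profiler library path resolves to a real file.
PANORAMA_HOME=${PANORAMA_HOME:-/opt}
LIB="$PANORAMA_HOME/Panorama/hedzup/mn/lib/libAwDotNetProf64.so"
if [ -f "$LIB" ]; then
  echo "profiler library present: $LIB"
else
  echo "profiler library missing: $LIB (check PANORAMA_HOME)"
fi
```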

Instrumenting Framework-Dependent Deployment (FDD) and Framework-Dependent Executable (FDE) Applications

As described in Microsoft documentation, FDD and FDE require .NET Core on the target system. The
applications contain only their own code and any third-party dependencies that are outside of the .NET
Core libraries.
There are no special steps to instrument .NET Core FDD and FDE applications. When the application starts,
the APM agent discovers it and it appears in the analysis server Agent Details screen Processes to
Instrument list. Edit the process settings, select the Instrument option, and click Save.

Restart the application and the dotNet sub-agent will monitor it.

Instrumenting Self-Contained Deployment (SCD) Applications


As described in Microsoft documentation, self-contained deployment (SCD) applications do not rely on the
presence of shared .NET Core components on the system. All components, including the .NET Core
libraries and the .NET Core runtime, are included with the application.


To instrument and monitor SCD applications, add a reference to the dotNet sub-agent .NET Core library.
The NuGet package for this library is distributed as part of the agent installation. Add a reference to the
package, which the installation creates here:
<installdir>\Panorama\hedzup\mn\install\packages\Riverbed.AppInternals.DotNetCore.<agentversion>.nupkg

If necessary, copy the package to a location convenient to your development environment. Add a reference
to the package in the csproj file for the application, as described in Microsoft documentation:
 Using VisualStudio
 Using the dotnet CLI
For example, using the CLI:
dotnet add package Riverbed.AppInternals.DotNetCore -v 10.15.542 -s c:\panorama\hedzup\mn\install\packages

This command adds the following PackageReference element to the application’s csproj file:
<PackageReference Include="Riverbed.AppInternals.DotNetCore" Version="10.15.542" />

After adding the package reference, restore, publish, and distribute the application as usual. The APM agent
must be installed on systems where the SCD application runs.
The version and build number of the DotNetCore NuGet package must match those of the APM agent
installed on the system where the SCD application runs. If the versions do not match, the application will
not be instrumented.
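Because a mismatch silently prevents instrumentation, it can be worth checking before publishing. The following sketch extracts the version from the package file name; the file name and the agent version value are illustrative placeholders:

```shell
# Sketch: compare the NuGet package version to the agent version.
# AGENT_VER is a placeholder; obtain the real value from your agent install.
PKG="Riverbed.AppInternals.DotNetCore.10.15.542.nupkg"
AGENT_VER="10.15.542"
PKG_VER=$(echo "$PKG" | sed 's/^Riverbed\.AppInternals\.DotNetCore\.//; s/\.nupkg$//')
if [ "$PKG_VER" = "$AGENT_VER" ]; then
  echo "versions match: $PKG_VER"
else
  echo "version mismatch: package $PKG_VER vs agent $AGENT_VER"
fi
```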

Troubleshooting .NET Instrumentation Issues


The following sections explain how to troubleshoot some common .NET instrumentation issues. For
information on troubleshooting general instrumentation issues, see “Instrumentation Techniques and
Troubleshooting“.

SOAP Headers No Longer Enabled By Default During Instrumentation


In order to prevent unpredictable behavior from applications that do not parse SOAP headers correctly,
AppInternals no longer adds SOAP headers as part of Java and .NET instrumentation.
In the unlikely event that a lack of SOAP headers prevents your instrumented tiers from stitching together
– especially with applications that use named pipes and TCP/IP, although HTTPS applications may also be
affected – do the following to add SOAP headers as part of instrumentation:

1. Log into the WebUI as administrator.

2. Go to CONFIGURE->Agents->Configurations->Define a Configuration->Configuration Settings

3. In the Configuration Overrides box, enter the following string to enable SOAP headers when
instrumenting your application:
{ "trace.injectedheaders.webservice": true }

The process data collector now reports average CPU instead of total CPU across all cores, eliminating totals
that were greater than 100%.


Agent Install Did Not “Enable Instrumentation Automatically”


If the agent installation did not specify Enable Instrumentation Automatically, the application will start
but will never be instrumented.
Symptoms:
 Even after choosing the Instrument option in the APM interface Agent Details screen, and restarting
the application, its status will show as Awaiting Restart. Restarting the application will have no effect.
 The agent rpictrl status command will show that process injection is disabled:
C:\Users\Administrator>rpictrl status

Riverbed Process Injection Control


Copyright 2018. Riverbed Technology. All rights reserved.
Version: 11.0.1.10
Driver Version: 11.0.1.10

Start type: disabled


Status: stopped

Solution:
Run the agent rpictrl enable command:
C:\Users\Administrator>rpictrl enable
Succesfully enabled process injection.

Running Old, Unsupported Version of .NET Core


The dotNet sub-agent supports .NET Core 2.0 or higher.
If you are running an earlier version of .NET Core, you may see the following behavior:
 When the application starts, the APM agent will not discover it and it will not appear in the analysis
server Agent Details screen Processes to Instrument list.
 The <installdir>\Panorama\hedzup\mn\log\SetNetHome.log file will have this message:
Profiling is not supported with this version of DotNet Core: 1.1.x

To fix this problem, install .NET Core 2.0 or higher.
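The check reduces to comparing the runtime's major version against 2. A sketch using sample version strings (on a real system you would feed in the output of dotnet --version instead):

```shell
# Sketch: classify .NET Core version strings as supported (2.0+) or not.
# The version strings below are sample values, not probed from a runtime.
for VER in 1.1.2 2.0.9 3.1.0; do
  MAJOR=${VER%%.*}   # keep only the leading major-version component
  if [ "$MAJOR" -ge 2 ]; then
    echo "$VER: supported"
  else
    echo "$VER: unsupported - install .NET Core 2.0 or higher"
  fi
done
```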

Monitoring Fails if “IIS Management Scripts and Tools” Not Installed


On systems running Windows 2008 or later, the dotNet sub-agent fails to instrument the Microsoft Internet
Information Server (IIS) applications unless the "IIS Management Scripts and Tools" option is installed. In
this case, the sub-agent generates an error similar to the following in the DotNetAgent-Service.txt log file:
05/21/2014 04:41:34 PM, , , ERROR, Error, failed to retrieve
instance name.
05/21/2014 04:41:34 PM, , , ERROR, WmiManager: No web site
with Id='312401' found.
05/21/2014 04:41:34 PM, , , ERROR, at
DotNetAgentSvc.WebManagers.WmiManager.GetSiteName(String path, String siteIdentifier, Boolean
useIISappName)


Follow these steps to install this option:

1) From the Start menu, click Server Manager (or choose Administrative Tools and then Server
Manager).

2) In the left navigation pane of the Server Manager, expand Roles, and then right-click Web Server (IIS)
and click Add Role Services.

3) The Select Role Services wizard opens. Scroll down to the IIS Management Scripts and Tools option.
If the option does not show (Installed) next to it, select it and click Next.

4) In the Confirm Installation Selections panel, click Install.
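As an alternative to the wizard, the same role service can usually be added from an elevated command prompt. This is a hedged sketch (the feature and role-service names are the standard Microsoft identifiers, but verify them on your Windows release):

```shell
:: Sketch only (Windows, elevated prompt): enable IIS Management Scripts
:: and Tools without the Server Manager wizard.
dism /online /enable-feature /featurename:IIS-ManagementScriptingTools

:: Or, on Windows Server with PowerShell:
::   Install-WindowsFeature Web-Scripting-Tools
```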



CHAPTER 13 Upgrading the Analysis Server, Agents, and Authentication

Overview
The following sections explain how to perform upgrades of AppInternals components:
 “Upgrading the Analysis Server“
 “Upgrading Agents“
 “Upgrading Authentication“

Upgrading the Analysis Server


AppInternals supports the following upgrades of the Analysis Server:
 “Upgrading the OVA from Version 10“
 “Upgrading the LRE from Version 10“
 “Upgrading Analysis Server Clusters from Version 10“

Note—Before upgrading the Analysis Server, review the release notes which are available on the support
page.


Upgrading the OVA from Version 10


In version 11 of the Analysis Server, the OVA's embedded operating system changed from Scientific Linux to
CentOS. As a result, upgrading a version 10 OVA to version 11 requires both an operating system upgrade
and data migration, which cannot be accomplished from the Software Upgrade Screen in the WebUI or by
using the CLI, and must instead be done from VMware ESXi running the version 10 OVA.

Important—The upgrade does not preserve the admin password, and resets it to riverbed-default. After
the upgrade completes, you can reset the password the first time you log into the WebUI as admin.

To upgrade a version 10 OVA to version 11, follow these steps:

1. Download the upgrade ISO file from the support page. In this example we will use
appinternals_upgrader_ks_v11.0.1.20.

2. Open VMware ESXi running the AppInternals version 10 OVA to be upgraded. In this example, we
will be upgrading from 10.21.0.

3. Upload the ISO upgrade image file to a datastore on ESXi, by doing the following:

3.1. In the Navigator panel on the left, click on Storage. The Storage screen appears in the right panel
with the Datastores tab highlighted.

3.2. Select a datastore to hold the ISO upgrade image. In this example, we will use datastore1.

If none exists, create a new datastore by clicking on New Datastore under the Datastore tab, and
then select that datastore to hold the image.

3.3. Once the datastore has been selected, click on the Datastore browser button at the top of the
screen. The Datastore browser appears with the datastore you selected to hold the ISO image
highlighted.


3.4. Upload the ISO upgrade image to the datastore by clicking the Upload button, browsing to the
ISO upgrade file on your system, and then clicking the file.

The ISO upgrade file then uploads to the datastore you selected.

3.5. When the ISO upgrade file finishes uploading, click the Close button to exit the Datastore
browser, which returns you to the main ESXi screen.

4. Shut down the VM that hosts the version 10 OVA you intend to upgrade by selecting Virtual
Machines->VMToBeUpgraded->Power->Power Off in the left-most vertical Navigator pane. Replace
VMToBeUpgraded with the name of the VM that hosts the version 10 OVA.

A warning message appears reminding you that you may lose data if you power off the OVA, and
asking if you want to continue.

5. Click the Yes button to continue.

6. Create a virtual CD drive to hold the ISO upgrade image by following these steps:

6.1. In the left Navigator panel of the main ESXi window under Virtual Machines, click on the virtual
machine that you shut down in step 4.

6.2. At the top of the right panel of the main ESXi window, click the Edit button, which brings up the
Edit Settings window with the Virtual Hardware tab highlighted.

6.3. Under the Virtual Hardware and VM Options tabs, select Add other device->CD/DVD drive.

A New CD/DVD drive option appears at the bottom of the left panel.

6.4. Click on the triangle to the left of the New CD/DVD Drive entry to expand it.

6.5. In the right panel, select Datastore ISO file from the Host device pull-down menu.


The Datastore Browser appears.

6.6. In the left panel of the Datastore browser, click on the storage area you created to hold the ISO
upgrade file in step 3. The ISO upgrade file that you uploaded to that datastore appears in the
right panel.

6.7. In the right panel, click on the ISO upgrade file to highlight it.

6.8. Click the Select button at the bottom right of the DataStore Browser.

The DataStore Browser disappears and you are returned to the Edit Settings menu.

7. From the Edit Settings menu, create a boot delay on the virtual CD drive you created in step 6, by
following these steps:


7.1. Click on the VM Options tab at the top of the Edit Settings window.

7.2. Click on Boot Options in the left vertical panel.

7.3. In the right panel, in the milliseconds dialog box under Whenever the virtual machine is powered
on or reset, delay the boot by, enter 20000 to set a 20-second delay.

7.4. In the Choose which firmware should be used to boot the virtual machine section, BIOS should
be selected by default. If not, select it.

7.5. Click the Save button. The Edit Settings screen disappears and the main ESXi window displays.

8. In the Navigator panel, right-click the virtual machine you are upgrading, then select
Power->Power On.

The VM console activates.

9. To enable the VM to boot from the virtual CD-ROM drive you created in step 6 that holds the ISO
upgrade image, enter the BIOS on the VM console by pressing the ESCAPE key.

The BIOS appears, displaying the Boot Menu.

10. Using the down arrow, select CD-ROM Drive and press RETURN to begin the boot sequence.

The AppInternals upgrade message is displayed, and the OVA upgrade begins.

Note—The console displays progress messages as the upgrade continues. The more data that is being
migrated, the longer the upgrade takes to complete.

11. The upgrade does not preserve the admin password. When the upgrade completes, log into the WebUI
as admin using riverbed-default as the password. You are immediately presented with a change-password
pop-up where you can change the default admin password back to what it was before the
upgrade.

12. After the upgrade completes, you must manually migrate SCAS authentication data to the new AAA
authentication by following the steps in “Upgrading Authentication“.

Upgrading the LRE from Version 10


In version 11 of the Analysis Server, owing to enhancements to the LRE, the upgrade from Version 10 cannot
be accomplished from the Software Upgrade Screen in the WebUI or by using the CLI, and must be done
with a special upgrade file and command.

Important—The upgrade does not preserve the admin password, and resets it to riverbed-default. After
the upgrade completes, you can reset the password the first time you log into the WebUI as admin.

To upgrade a version 10 LRE to version 11, follow these steps:

1. On the system where the version 10 LRE to be upgraded is installed, log in as root or become superuser.

2. Create a directory to hold the upgrade file, as follows:


# mkdir /usr/tmp/LRE_V11Upgrade

3. Download the version 10 to version 11 LRE upgrade tar file from the support page and place it in the
upgrade directory you created in the previous step. In this example, we will use
appint-linux-11.0.0.63.tar.

4. Change your working directory to /usr/tmp/LRE_V11Upgrade by entering the following command:


# cd /usr/tmp/LRE_V11Upgrade

5. Untar the version 10 to version 11 LRE upgrade file, by entering the following command:
# tar -xf appint-linux-11.0.0.63.tar

6. Initiate the upgrade by entering the following command:


# ./appinternals_upgrade -y

When the upgrade completes, your system will be rebooted for the changes to take effect.

7. After the upgrade completes, you must manually migrate SCAS authentication data to the new AAA
authentication by following the steps in “Upgrading Authentication“.

Upgrading Analysis Server Clusters from Version 10


To upgrade your Analysis Server cluster environment from version 10, follow these steps:


Note—AppInternals only supports cluster upgrades from 10.18* or higher Analysis Servers.

1) On the controller node, confirm that the version of the analysis server is 10.18* or higher, as follows:
73D (config) # sh version
Current version: 10.18.2061

2) On all cluster nodes, download the upgrade file, as explained in step 1 through step 4 in “Upgrading
the LRE from Version 10“.

3) On the controller node, shut down cluster services on all nodes with the cluster stop CLI command:
73D (config) # cluster stop
Do you want to stop services on all the cluster nodes? (Yes/No) yes
Stopping cluster services on role parser (port 10280)
Restarting local services on role parser (port 10280)
Stopping cluster services on role indexer (port 10380)
Restarting local services on role indexer (port 10380)
Stopping cluster services on role primary_ui (port 10180)
Restarting local services on role primary_ui (port 10180)
Stopping cluster services on role controller (port 8080)
Restarting local services on role controller (port 8080)
Done

4) On all existing cluster nodes, start the upgrades at the same time by following step 4 through step 6 in
the previous section, “Upgrading the LRE from Version 10“.

Note—All nodes of the cluster must be upgraded. A mixed environment is not supported.

5) Wait approximately 10 minutes for the upgrades to complete.

6) On the controller node, run the cluster start CLI command:


73D # cluster start
Do you want to restart Analysis Server services on all the cluster nodes? (Yes/No) yes
.
.
.

7) Wait approximately 10 minutes for cluster services to finish starting.

8) On all cluster nodes, log in to the web interface. Check the CONFIGURE > System Status screen to
confirm that all icons are green (indicating that the cluster upgraded successfully).

9) If you could not log in to the web interface of any cluster node, or if the System Status screen for any
cluster node did not show all green icons:
a) Reboot ALL the cluster nodes.
b) On the controller node, enter the CLI and run the cluster start CLI command.

10) After the upgrade completes, you must manually migrate SCAS authentication data to the new AAA
authentication by following the steps in “Upgrading Authentication“.


Upgrading Agents
The following sections explain how to upgrade agents that are installed on Windows and Linux/Unix
systems, and in dynamic environments.

Upgrading Agents on Windows Systems


 For information on how to upgrade agents interactively, see “Upgrading Agent Software“.
 For information on how to perform an unattended upgrade of agents, see “Upgrading“.

Upgrading Agents on Linux/Unix Systems


 For information on how to upgrade agents interactively, see “Upgrading Agent Software“.
 For information on how to perform an unattended upgrade of agents, see “Upgrading an Existing
Installation with -silentupgrade“.

Upgrading Agents in Dynamic Environments


For information on how to upgrade agents deployed in dynamic environments, like Docker containers,
Kubernetes, and VMware Tanzu, see the following:
 “Upgrading the Agent Installed Directly on Kubernetes Nodes“
 “Upgrading the Agent in a Kubernetes DaemonSet“
 “Monitoring VMware Tanzu Applications“

Upgrading Authentication
In version 11 of the Analysis Server, SCAS authentication has been replaced with the SteelCentral
Authentication and Authorization (AAA) service.
To preserve authentication after upgrading the Analysis Server from version 10 to version 11, you must
upgrade users defined in SCAS, as well as LDAP group and user mappings and any LDAP servers that
were configured, to AAA by following these steps:

Note—Once the SCAS/LDAP upgrade is complete, any local or LDAP user that could authenticate before
the upgrade will still be able to authenticate; however, every user will need to define a new password, since
passwords cannot be migrated.

1. Complete the upgrade from version 10 to version 11 of the Analysis Server, as explained in “Upgrading
the Analysis Server“.

2. After the upgrade is complete, log into the CLI as admin and access the configuration commands, as
follows:
V11host> enable


V11host# configure terminal

V11host(config) #

3. Use the following three CLI configure commands to upgrade SCAS and LDAP to AAA:

• show scas status - Shows whether the data needed for a user conversion is available. If it is, the
command returns “SCAS data preserved”, and you can then migrate the data and
delete it using the next two commands.

If no SCAS data is found, the command returns “SCAS data deleted”, in which case you must set
up authentication by following the instructions in “Creating Local Accounts And Setting Up LDAP
and SAML Authentication“.

• scas migrate - Migrates users found in SCAS and LDAP to the AAA scheme.

• scas drop - Deletes old SCAS data. This cannot be undone. Deleting old SCAS data saves at least 250
MB on disk, more if there are many users.

Note—In version 11, AppInternals also supports SAML. For information on configuring SAML, adding
local users, and setting up LDAP authentication, see “Creating Local Accounts And Setting Up LDAP and
SAML Authentication“.
