Installation
The following sections explain the supported installations of the APM analysis server and agents.
Agent Installation
APM agents monitor application and system performance. Install APM agent software on systems you
want to monitor, as described in the following topics:
“Agent Installation: Windows“
“Unattended Agent Installation: Windows“
“Agent Installation: Unix-Like Operating Systems“
“Unattended Agent Installation: Unix-Like Operating Systems“
“Deploying Agents in Cloud-Native Environments“
Initial Configuration
In addition to installing the agent, you also need to perform the following initial configuration:
“Synchronizing Agent Time With the Analysis Server“
“Enabling Instrumentation of Java Processes“
The following sections explain how to quickly install and configure APM to monitor your environment:
“Installing the Analysis Server“
“Installing the Agent Software“
“Instrumenting Processes to Monitor“
“Creating a Transaction Type for Your Application“
3) The “Define a Transaction Type Screen“ opens. Supply a transaction type name (TradeFast).
4) In the “Add Criteria“ area choose url in the Match any of these list and supply a pattern of
http://*/tradefast/*.aspx*:
5) Click Save. The new transaction type appears in the Manage Transaction Types screen.
This section describes installing the APM analysis server as a virtual appliance. The virtual appliance
contains a virtual machine (VM) with the analysis server already installed. It is packaged in Open
Virtualization Format (OVF) and distributed as an Open Virtual Appliance (OVA) file.
Installing the analysis server as a virtual appliance is the primary approach for installation. See “Installation
Overview“ for other installation options.
Installation Prerequisites
The System Requirements document, available from the APM support page, details supported virtual
environments and resource requirements.
1. Download the analysis server OVA file from the APM support site:
https://support.riverbed.com/content/support/software/steelcentral-ap/appinternals.htm
2. Import the analysis server virtual appliance into a supported virtual environment by using the
VMware OVF Tool with the following syntax:
ovftool -ds=[Name of data store on ESXi host] ../[AppInternals OVA file]
vi://[adminuser]:[password]@[esxi host]
The following example shows how to use the OVF tool with the 11.0 OVA:
C:\Program Files\VMware\VMware OVF Tool>ovftool.exe -ds=datastore1 -nw="VM Network"
c:\$RVBD\AI10\Software\appinternals-vm-11.0.0.22.ova vi://kafka@192.168.1.45
Username: kafka
Password: ***************
Transfer Completed
Completed successfully
Note—When you use URIs as locators, you must escape special characters using the % sign followed
by the special character’s ASCII hex value.
For example, if you use an at sign (@) in your password, it must be escaped with %40 as in: vi://foo:b
%40r@hostname, and a backslash in a Windows domain name (\) can be specified as %5c.
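For example, a hypothetical ovftool command for a user whose password is p@ss (the host name and data
store below are placeholders) escapes the at sign as %40:
ovftool -ds=datastore1 appinternals-vm-11.0.0.22.ova vi://admin:p%40ss@esxi-host.example.com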
For more information on using the OVF tool, see the following documentation from VMware:
https://www.vmware.com/support/developer/ovi
1. After the appliance is deployed, start the appliance from the virtual environment hypervisor.
2. A command line console appears and starts the virtual machine. When the login prompt appears, log
in with the user name admin and the password riverbed-default:
Riverbed AppInternals Virtual Appliance
---------------------------------------
3. After you log in successfully, the CLI wizard prompts you whether or not to use DHCP to configure the
network interface:
Note—If you cancel the wizard with CTRL+D, you can configure the network later using the CLI
command “networkcfg“.
.
.
.
Successfully logged into AppInternals admin interface
Network Configuration
----------------------------
Press CTRL+D to cancel
Current Configuration:
Interface configured for DHCP
– If you choose y (yes), the wizard prompts you whether or not to obtain DNS from DHCP.
Obtain DNS from DHCP (y/n)
If you choose y (yes), the Wizard prompts you for a fully-qualified hostname.
If you choose n (no), the Wizard first prompts you for one or more DNS servers, and then for a
fully-qualified hostname.
Note—The default host name for the analysis server virtual machine is
appinternals.local. You need to supply a value suitable to your environment, so you should
discuss this with your network administrator.
It is important to supply a fully-qualified host name that will identify the analysis server. In
addition, the host name should be unlikely to change, because the host name is used elsewhere in
the analysis server and by APM agents. If the host name changes, you will need to reconfigure
multiple areas as described in the description of the “Hostname“ setting in the “Network
Configuration Screen“.
The virtual machine then requests an IP address from a DHCP server in your environment.
– If you choose n (no) to using DHCP, the wizard prompts you to enter an IP address, subnet mask,
default gateway, one or more DNS servers, and finally a fully-qualified hostname.
IP Address: 10.46.35.65
Subnet Mask: 255.255.255.0
Default Gateway: 10.46.35.11
DNS Server(s) (comma separated): 10.46.34.13
4. Once the network has been successfully configured, the wizard exits and displays the Analysis Server
CLI prompt.
1. In a supported browser, enter either the host name or IP address that you configured in “Starting the
Virtual Machine and Configuring the Network“:
– https://<fully-qualified-hostname>
– https://<ipaddress>
Note—DNS servers take some time to update their settings, so it might be preferable to log into the
Analysis Server WebUI for the first time using the IP address instead of the fully-qualified hostname.
To find the IP address, follow these steps:
a) At the AppInternals CLI console, enter the following command, which will bring you to the root
prompt:
my_server> enable
my_server#
b) At the root prompt, enter the following command and hit the TAB key twice, which will autofill the
interface name:
my_server# show interface name TAB TAB eth0
c) Hit the RETURN key, and the command returns the network configuration information for the
interface, including the IP address, which you can then use to log into the Analysis Server WebUI:
Interface eth0 state:
Up: Yes
Interface type: ethernet
IP address: 10.46.85.86
2. When a browser connects for the first time, APM presents a self-signed security certificate created
during installation, and the browser generates a warning. This is expected behavior. Follow the
steps in the browser to ignore the warning and proceed to the login screen.
Note—To avoid the warning, replace the default certificate in the “TLS Certificate Configuration
Screen“.
3. When the login screen appears, enter the default user name/password -- admin/riverbed-default.
After you log in, you will be prompted to (optionally) change the password.
This section describes installing the APM analysis server on supported Linux systems. Typically, you do not
install the analysis server as described here. It is easier to import the analysis server virtual appliance into
a supported virtual environment as described in “Analysis Server Installation: Virtual Appliance“.
However, in some environments, it makes sense to install the analysis server on a dedicated Linux system:
• Environments that allow only specific standardized virtual machine images. In these environments,
installing the analysis server as a virtual appliance may not be allowed at all, or may require security
audits that delay deployment of APM.
• Environments that limit the resources (such as CPU cores, memory, or high-performance storage)
allocated to a virtual machine. In large deployments, these limits can prevent you from providing
sufficient resources for the analysis server to support many agent systems.
• Environments that require additional software installed on the system hosting the analysis server. This
is not possible using the virtual appliance.
In these environments, you can instead install the analysis server on a dedicated Linux system as described
in this section.
The installation creates a restricted environment to isolate the analysis server from the system where it is
installed. After installation, you can access the analysis server’s environment as described in “Accessing the
Restricted Environment with the appinternals-bash Command“.
Installation Prerequisites
The following are required before installing the Analysis Server on Linux.
Also, the host name for the system where you install the analysis server should be unlikely to change, since
it is used in the analysis server and by APM agents.
If you change the host name after it is initially set during the analysis server installation, you will likely have
to change it in the following places:
• On every system that has the APM agent installed, as described in the documentation topic Changing
the Agent’s Analysis Server in the Agent System Configuration material.
• In the Collection Server Address setting in the Collector Configuration screen of the analysis server
interface.
• If you replace the default security certificate in the TLS Certificate Configuration screen with a
certificate that specifies a host name, you will need a new certificate with the new host name.
The installation kit is distributed as a gzip-compressed tar file. The file is named
appint-linux-<version>.tar.
Download the installation from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html.
--force Overrides disk space requirement and creates binary and data
directories if they do not exist
For example:
[root@0C3 tmp]# ./appinternals_installer --bin-dir /appinternals-binaries --data-dir /appinternal
s-data --force
2018/03/08 13:20:00 CmdArgs.go:119: Force creation of /appinternals-binaries
1) Run the appinternals_installer file as root. Unless you supply target directories on the command line
(see “Verifying Target Directories for Binaries and Data“), the installer prompts for them. If the
directories do not exist, the installer prompts to create them:
[root@143 tmp]# ./appinternals_installer
USAGE: appinternals_installer --bin-dir <dir> --data-dir <dir> \
--http-port <port> --https-port <port> --force
2) The installer checks that the target directories have enough space. If there is not enough space in either
directory, it gives a warning message and prompts to continue:
Not enough free space on '/appinternals-data', need 40GB
Force Use of /appinternals-data ? (y/n) :y
3) The installer prompts for open network ports for the analysis server web interface. Accept the defaults
or supply a value to override the defaults:
Http Port (def 80) :
Https Port (def 443) :
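For example, a single non-interactive invocation based on the usage shown earlier might supply the
directories, override the default web ports, and force creation of missing directories (the port values here
are placeholders):
./appinternals_installer --bin-dir /appinternals-binaries --data-dir /appinternals-data --http-port 8080 --https-port 8443 --force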
Post-Installation Tasks
Verifying Startup
The analysis server starts automatically after the system reboot that occurs as part of the installation.
Confirm it is running with the systemctl status appinternals -l command. If the analysis server has started
successfully, the output ends with messages about the Tomcat application server starting:
[root@143 tmp]# systemctl status appinternals -l
.
.
.
Feb 23 10:03:06 143 appint-linux-host[766]: yarder-core-monitor STARTING
Feb 23 10:03:06 143 appint-linux-host[766]: Starting appinternals-sysupgraded... Ok
Feb 23 10:03:07 143 appint-linux-host[766]: Tomcat started.
Feb 23 10:03:07 143 appint-linux-host[766]: Waiting for LDAP to initialize
Feb 23 10:03:40 143 appint-linux-host[766]: Tomcat started.
Feb 23 10:03:40 143 appint-linux-host[766]: Tomcat start successful
Feb 23 10:03:40 143 appint-linux-host[766]: Loopback devices:/dev/loop0
[root@143 tmp]#
Confirm that it is fully started by logging in to the analysis server user interface. For details, see “Logging
In to the Analysis Server WebUI“ in the “Analysis Server Installation: Virtual Appliance“ material.
The following example shows the contents of the binaries directory specified during installation (see
“Verifying Target Directories for Binaries and Data“) before issuing the appinternals-bash command. After
issuing the appinternals-bash command, note that the same contents now appear as the root (/) directory:
[root@05A zeus]# ls /appinternals-binaries/appint/
appint_ctl appint_env bin boot dev etc home lib lib64 media mnt opt proc root sbin
selinux srv sys tmp usr var
[root@05A zeus]# appinternals-bash
bash-4.1# ls /
appint_ctl appint_env bin boot dev etc home lib lib64 media mnt opt proc root sbin
selinux srv sys tmp usr var
Running uninstall-appinternals
1) Run the uninstall-appinternals command as root:
[root@0BF zeus]# uninstall-appinternals
2) Type y to continue. Do not press Enter alone; it is interpreted as n for the next prompt and the
uninstall exits. The uninstall command generates several messages but requires no further
intervention:
Uninstalling
Shutting down system.
nginx-internal: stopped
nginx-external: stopped
odb_server: stopped
silo_watcher: stopped
wsproxy2: stopped
ferryman3: stopped
serial: stopped
yarder-appinternals: stopped
silo_dispatch: stopped
yarder-core: stopped
yarder-core-monitor: stopped
collector:ferryman: stopped
agentconfig:negotiator: stopped
sensor: stopped
Stopping supervisord: Shut down
Waiting roughly 120 seconds for /var/run/supervisord.pid to be removed after child processes
exit
Supervisord exited as expected in under 2 seconds
Stopping appinternals-sysupgraded Ok
Tomcat stopped.
Tomcat Stopped Successfully
Killing Tomcat with the PID: 183
The Tomcat process has been killed.
Tomcat Stopped Successfully
Stopping sshd: [ OK ]
Shutting down system logger: [ OK ]
Terminate Remaining Processes
Killing remaining processes touching installation
Terminating host process
Removed symlink /etc/systemd/system/multi-user.target.wants/appinternals.service.
libsemanage.semanage_direct_remove_key: Removing last appinternals-ol-7 module (no other appi
nternals-ol-7 module exists at another priority).
The agent and one or more sub-agents are responsible for the actual collection and communication of metric
and transaction trace data on each system being monitored by APM.
On Windows, you run an interactive installer to install the agent and specify the analysis server you want
it to report to. This section describes that process. You can also install the agent without user intervention,
as described in “Unattended Agent Installation: Windows“.
Installation Prerequisites
For a list of supported Windows operating systems and disk, Java, and .NET requirements, see the System
Requirements document on the support page.
Installation directory
The installer creates the Panorama directory and subdirectories in the parent directory you specify. The
default is C:\. You can specify a different directory.
On-premises or SaaS?
Choose whether the agent will connect to an analysis server installed in your environment (“on premises”)
or to the Software as a Service (SaaS) offering of the analysis server (SteelCentral SaaS).
The SaaS option requires a SaaS customer ID. If you do not have a customer ID, do not choose this option,
since you cannot complete the installation without it. One way to obtain a customer ID is to register for a
free trial. After registering, you receive a user name and password to log in to the SaaS analysis server. The
customer ID is on the “Install Agents Screen“ of the SaaS analysis server.
On premises: Analysis server location
If you choose on premises, the installation prompts you for the system name, fully-qualified domain name
(FQDN), or IP address for the analysis server in your environment. If the analysis server name changes
after you install the agent, you must change this name as described in the configuration topic “Changing
the Agent’s Analysis Server“.
SaaS: Customer ID
If you choose SaaS, the installation prompts you for the SaaS customer ID copied from the Install Agents
screen of the SaaS analysis server.
Proxy server?
Choose whether the agent will use a proxy server to connect to the analysis server system. If so, you must
supply the system name, fully-qualified domain name (FQDN), or IP address for the proxy server. You can
change the proxy server details after installation as described in the configuration topic “Proxy Server
Configuration“.
Proxy Server Authentication?
If you specify a proxy server, choose whether the proxy server requires authentication. If so, supply a user
name, password, and (optional) realm. The agent supports proxy servers that use Basic and Digest
authentication. It does not support NTLM authentication.
Enable Instrumentation Automatically?
Choose whether to automatically enable instrumentation of Java and .NET Core processes. Selecting this
option starts the Riverbed Process Injection Driver (RPID). Using RPID is easiest and recommended. If you
do not choose this option, you must manually enable instrumentation after the installation completes. For
more information, see “Enabling Instrumentation of Java Processes“ and “Enabling Instrumentation of
.NET Processes“.
Start services?
The installer gives you the option to not start the Windows services for the agent and sub-agents. By
default, the installation starts the services, which is easier.
Post-Installation Tasks
This section describes tasks you may have to perform after the installation.
Enabling Instrumentation
If you did not enable instrumentation of Java and .NET Core processes as part of the installation, you must
enable it manually after the installation completes.
For more information, see “Enabling Instrumentation of Java Processes“ and “Enabling Instrumentation of
.NET Processes“.
Log in with the user name of admin and the default password of riverbed-default.
In the Configure menu, navigate to the “Agent List Screen“. You should see the agent system listed with
the icon for new agents. Click the server that is running an application that you want to monitor. In the
Agent Details screen, click the edit icon in the row for the application to open the “Edit Processes to
Instrument Dialog“. Click the Instrument option and Save:
For Java applications, check the application user’s permissions to the directories and give the user the
necessary permission. (See https://technet.microsoft.com/en-us/library/cc771586(v=ws.11).aspx for
details on viewing effective permissions.)
Stop the processes before continuing. You can stop ASP.NET applications by using the Windows Services
utility to stop the World Wide Web Publishing service. Click Retry to continue.
If the uninstall program tries to delete files that are open, it will prompt you to restart the system after it
finishes. Restarting the system closes the files and allows them to be deleted. After the uninstall completes,
you may need to manually delete the Panorama directory and any remaining descendants.
Make sure that the Windows Services utility is not open when you remove APM components. If the
Services utility is open, the uninstall program may not be able to completely remove services.
Note: Before you begin an upgrade, refer to the release notes for any upgrade-specific information.
Upgrades preserve data and configuration settings while updating executable and program files. For any
version-specific upgrade considerations, see the Release Notes available from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html
Use the same self-extracting executable file to upgrade that you use for “Installing Agent Software“. The
self-extracting executable file is named AppInternals_Agent_<version>_Win.exe. To start the upgrade,
run the file.
Before making any changes, the installer checks for running Java and .NET processes that APM is
monitoring. You must manually stop such instrumented processes before you can continue. If the installer
detects any, it displays a message panel with a list of processes. For example:
Stop the processes before continuing. You can stop ASP.NET applications by using the Windows Services
utility to stop the World Wide Web Publishing service. Click Retry to continue.
The installer automatically detects if there is a previous version of APM agent software. Click Install to
upgrade from that version:
If the upgrade detects running processes that lock files it needs to replace, it prompts you to stop them. You
can click Ignore to continue:
Overview
The agent installation executable (AppInternals_Agent_<version>_Win.exe) is implemented using the
Microsoft Windows installer and embeds a standard MSI installation package.
This section describes how to automate the installation of agent software by specifying arguments to the
agent installation executable. There are two approaches:
Specify installation options directly on the command line as arguments to the agent installation
executable. This approach does not require third-party tools but the command line can become
complex.
Extract the MSI file from the executable and use a third-party tool to specify installation options in an
MSI transform file. You then specify MSI and the transform (.mst) files as arguments to the Windows
installer executable, msiexec. This approach requires additional steps but allows deployment using
tools such as Microsoft System Center Configuration Manager.
See the following links for general information about the Windows installer:
http://en.wikipedia.org/wiki/Windows_Installer
http://unattended.sourceforge.net/installers.php
http://msdn.microsoft.com/en-us/library/cc185688%28VS.85%29.aspx
Installation Prerequisites
For a list of supported Windows operating systems and disk, Java, and .NET requirements, see the System
Requirements document on the support page.
Installing the agent software from the command line in this manner completes without any user interaction
or indication of progress. Use the options described in “Installation Options Reference“ to control any
aspect of the agent installation that you can specify interactively through the installer user interface.
General Syntax
In general, use the following syntax to invoke the agent installation executable from the command line:
installation-executable /s /v" [ install-option ]… [ /Llogginglevel \"logfilespec\" ] /qn"
/s
/v
The /v argument specifies installation options to pass to the Windows installer, msiexec. Supply all
options on the same line as the installation executable file name. In general, use upper case option
names and option values. (Option names are not case sensitive but option values are case sensitive and
the installer requires most to be all upper case.) See “Installation Options Reference“ for details. See
the following link for details on msiexec switches: http://support.microsoft.com/kb/227091
install-option
These options specify details of the agent installation. There are command-line options that correspond
to every setting that you can specify interactively through the installer user interface. See “Installation
Options Reference“ for details.
[ /Llogginglevel \"logfilespec\" ]
The logging level and file specification for the log file for the installation. The path to the log file
location must already exist. The installer does not create the directory structure for the log file.
The backward slashes ( \ ) are escape characters for the double quotation marks ( " ) that enclose the
log file specification.
The backward slashes and double quotation marks are always allowed, but required only if the log file
specification contains spaces. The most detailed logging level specification is /L*v. For example:
/L*v \"c:\temp\Agent Install Log.log\"
See the following link for details on the different msiexec logging levels and options:
http://technet.microsoft.com/en-us/library/cc759262%28v=ws.10%29.aspx#BKMK_SetLogging
/qn
Runs the Windows installer without user intervention and suppresses display of user interface screens
(/qn is a mnemonic for "quiet mode, no user interface"). The Windows installer supports other /q
options. For example, specify /qf ("quiet mode, full user interface"), or omit the /q argument altogether,
and the installer will display the user interface screens filled in with option values supplied on the
command line. This is a good way to validate command line options.
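For example, a hypothetical variation of the new-installation command shown under “New Installation“
below replaces /qn with /qf so that you can review the pre-filled installer screens before committing to the
values:
AppInternals_Agent_10.0.00168_Win.exe /s /v"INSTALLDIR=c:\Riverbed O_SI_ANALYSIS_SERVER_HOST=10.46.35.218 /qf"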
Enabling Instrumentation
If the “O_SI_AUTO_INSTRUMENT“ option is set to “true”, instrumentation of Java and .NET Core
processes is enabled as part of the installation. If you do not enable instrumentation as part of the
installation, you must enable it manually after the installation completes.
For more information, see “Enabling Instrumentation of Java Processes“ and “Enabling Instrumentation of
.NET Processes“.
New Installation
There are command-line options for each setting that you can specify through the interactive GUI
installer. For a new installation, supply values for required options and options whose default you
want to override. See “Installation Options Reference“ for details on the options.
The following command line example creates a log file and uses the following options:
“INSTALLDIR“ -- specifies a different installation directory (c:\Riverbed).
“O_SI_ANALYSIS_SERVER_HOST“ -- specifies the IP address for the on-premises analysis server.
“O_SI_AUTO_INSTRUMENT“ -- enables instrumentation of Java and .NET Core processes.
AppInternals_Agent_10.0.00168_Win.exe /s /v"INSTALLDIR=c:\Riverbed
O_SI_ANALYSIS_SERVER_HOST=10.46.35.218 O_SI_AUTO_INSTRUMENT="true" /L*v
c:\temp\AgentInstallLog.log /qn"
Upgrading
When you upgrade, the installation automatically preserves data and configuration settings while updating
executable and program files. For any version-specific upgrade considerations, see the Release Notes
available from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html
Upgrades to Version 10.0 from earlier versions of APM agent software (called “managed node” or
“collector”) are not supported.
To upgrade a Version 10.0 or later installation to a later release, run the agent executable without any
installation options. The following example upgrades the agent installation and generates a log file:
AppInternals_Agent_10.0.00168_Win.exe /s /v"/L*v c:\temp\AgentUpgradeLog.log /qn"
Repairing
You can run the installer on an existing agent installation with the same version and build number. Use
only the REINSTALL=ALL and REINSTALLMODE=vamus options to replace files created by the previous
installation without adding or removing sub-agents. This “repair” operation can be useful to restore files
that were corrupted or accidentally deleted. It preserves files created after the installation.
The following example repairs an agent installation:
AppInternals_Agent_10.0.00168_Win.exe /s /v"REINSTALL=ALL REINSTALLMODE=vamus /L*v
c:\temp\AgentRepairLog.log /qn"
Prerequisites
Before you begin an unattended installation, be sure that the Microsoft Visual C++ 2010 Redistributable
Package (x86) and Microsoft Visual C++ 2010 Redistributable Package (x64) are installed on the target
systems. You can verify they are installed by opening the control panel, selecting Programs and Features,
and looking for Version 10.0.40219 of each in the program list:
If you need to install them, you can download these packages from Microsoft for free from this site:
http://search.microsoft.com/en-us/DownloadResults.aspx?FORM=DLC&ftapplicableproducts=^%22De
veloper%20Tools%22&sortby=+weight
The /a argument creates a “server image” that includes the MSI file without actually installing the agent
software. The installer displays a series of panels, including one where you specify the directory for the
server image:
The installer creates the .msi file in the specified directory, named MNInstall.msi.
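For example, a command such as the following (using the installer file name from the earlier examples)
starts the administrative extraction and then displays the panels described above:
AppInternals_Agent_10.0.00168_Win.exe /a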
Note: Do Not Modify Any Other Properties: Tools such as Orca allow modifying any property in an MSI
installation package. Modify only properties with the O_SI prefix and the “INSTALLDIR“ property. The
agent installer does not support modifying other properties.
Open the MSI file in Orca and click the Property table. Click the Property column in the table in the right
pane to sort by the property name and group together the O_SI properties that you can modify. Click the
Transform > New Transform menu option. To change values, click the cell in the Value column for the
property you want to modify.
Some installation options (such as “INSTALLDIR“) do not have corresponding properties. To create the
property, click the Tables > Add Row… menu option and supply the property name and value.
After you modify and create properties, click the Transform > Generate Transform menu option and
supply a file name for the .mst file.
The following example shows a transform in Orca with changed properties to connect to the analysis server:
Passing /qf as an argument to the Windows installer displays the installer interface screens filled in with
option values supplied in the transform file. Examining these screens is a good way to validate that the
transform has the options you want.
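The msiexec command itself uses standard Windows Installer syntax. A sketch, assuming the extracted
package is MNInstall.msi and the generated transform was saved as agent-options.mst (a hypothetical file
name) in the same directory:
msiexec /i MNInstall.msi TRANSFORMS=agent-options.mst /qf /L*v "c:\temp\AgentInstallLog.log"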
INSTALLDIR
Description Specifies the destination directory for a new agent installation. The directory does not have
to exist. The installation creates the Panorama directory and subdirectories in the parent
directory you specify.
Valid Values Any valid local Windows directory specification. The specification must include a local drive
letter. Do not specify a mapped drive or UNC path.
If you specify this option within the /v argument to the installation executable (see “General
Syntax“) and the directory specification contains spaces, enclose it with \" (see the example
below). The backward slashes ( \ ) are escape characters for the double quotation marks ( " )
that must enclose the directory specification. The escape characters are not required (or
allowed) if you specify the option directly as an argument to msiexec (see “Running msiexec
with the MSI and Transform Files“).
Example INSTALLDIR=d:\
INSTALLDIR=\"c:\Riverbed Technology\"
O_SI_ANALYSIS_SERVER_HOST
Description Specifies the location of the on-premises APM analysis server. (This option has no effect for
installations that specify the SaaS analysis server by setting
“O_SI_SAAS_ANALYSIS_SERVER_ENABLED“.) The agent connects to this system to
initiate communications. Use this option for new installations only. See “Changing the
Agent’s Analysis Server“ for instructions on changing this value after the agent has been
installed.
Valid Values The system name, fully-qualified domain name (FQDN), or IP address for the analysis
server.
Example O_SI_ANALYSIS_SERVER_HOST="myserver.example.com"
O_SI_ANALYSIS_SERVER_HOST="10.46.35.218"
O_SI_ANALYSIS_SERVER_PORT
Description Specifies the port on which the APM analysis server is listening. The agent connects to this
port to initiate communications. See “Changing the Agent’s Analysis Server“ for instructions
on changing this value after the agent has been installed.
Default Value 80
Example O_SI_ANALYSIS_SERVER_PORT="8051"
O_SI_ANALYSIS_SERVER_SECURE
Description Specifies whether the connection to the APM analysis server should be secure.
Example O_SI_ANALYSIS_SERVER_SECURE="true"
O_SI_AUTO_INSTRUMENT
Description Specifies whether to enable automatic instrumentation of Java and .NET Core processes on
Windows systems. When set to “true”, the agent installation starts the Riverbed Process
Injection Driver (RPID).
Note: If you do not want all Java and .NET Core processes enabled for instrumentation, do
not set this option to “true”. After the installation completes, you can manually configure
processes to be instrumented with the rpictrl utility. For more information, see “Automatic
Process Instrumentation on Windows“
Default Value false (any value other than “true” will disable automatic instrumentation)
Example O_SI_AUTO_INSTRUMENT="true"
O_SI_CUSTOMER_ID
Example O_SI_CUSTOMER_ID="5e6f1281-162d-11a6-a7c8-ab267ed63dc8"
O_SI_PROXY_SERVER_AUTHENTICATION
Example O_SI_PROXY_SERVER_AUTHENTICATION="true"
O_SI_PROXY_SERVER_ENABLED
Description Whether the agent will use a proxy server to connect to the analysis server system.
Example O_SI_PROXY_SERVER_ENABLED="true"
O_SI_PROXY_SERVER_HOST
Valid Values The system name, fully-qualified domain name (FQDN), or IP address for the proxy server.
Example O_SI_PROXY_SERVER_HOST="myproxyserver.example.com"
O_SI_PROXY_SERVER_HOST="10.46.35.238"
O_SI_PROXY_SERVER_AUTHENTICATION
Example O_SI_PROXY_SERVER_AUTHENTICATION="true"
O_SI_PROXY_SERVER_PASSWORD
Example O_SI_PROXY_SERVER_PASSWORD="myproxypassword"
O_SI_PROXY_SERVER_PORT
Example O_SI_PROXY_SERVER_PORT="8080"
O_SI_PROXY_SERVER_REALM
Example O_SI_PROXY_SERVER_REALM="myproxyrealm"
O_SI_PROXY_SERVER_USER
Example O_SI_PROXY_SERVER_USER="myproxyuser"
O_SI_SAAS_ANALYSIS_SERVER_ENABLED
Description Specifies whether the agent will connect to the Software as a Service (SaaS) offering of the
analysis server.
Example O_SI_SAAS_ANALYSIS_SERVER_ENABLED="true"
O_SI_SKIP_SCAN
Description Specifies whether the installer scans for running Java and .NET processes that AppInternals
is monitoring. By default, the installer checks for such instrumented processes and, if it
detects any, exits. Set this option to TRUE to skip the scan. This is useful only if the scan fails
and prevents installation when in fact there are no instrumented processes running on the
system.
Example O_SI_SKIP_SCAN="true"
O_SI_START_SERVICES
Description Whether to start Windows services for the DSA and sub-agents after the installation
completes.
Default Value START (any value other than START will cause the services NOT to start)
Example O_SI_START_SERVICES=DONTSTART
REINSTALL
Description Used only to modify or repair an existing installation. For Microsoft documentation on
REINSTALL, see:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa371175%28v=vs.85%29.as
px
Example REINSTALL=ALL
REINSTALLMODE
Description Used only to modify or repair an existing installation. For Microsoft documentation on
REINSTALLMODE, see:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa371182%28v=vs.85%29.as
px
Example REINSTALLMODE=vamus
The agent and its sub-agents are responsible for the collection and communication of metric and transaction
trace data on each system being monitored by APM.
APM supports the following Unix-like operating systems: AIX, Linux, and Solaris.
On these platforms, you run an installation script locally on each agent system. The script prompts you for required
information. You can run the script as root or as the user you choose as the agent administrative user. You
can also install the agent software in silent mode, as described in “Unattended Agent Installation: Unix-Like
Operating Systems“.
This chapter describes using the script as well as steps before and after installation.
Installation Prerequisites
The following are required before installing the AppInternals agent on Linux and Unix.
If you run the installation script from a non-root account, that account will automatically be made the
administrative user. The account must have rights to create the Panorama directory in the parent
directory you specify or the installation fails. In the following example, the user admin did not have
rights to create the directory:
Where do you want to create the Panorama installation directory:/opt
/bin/mkdir: cannot create directory `/opt/Panorama': Permission denied
Creating /opt/Panorama/hedzup/mn failed. Check that admin has write access in /opt/Panorama
Type 'y' to try again or 'n' to exit:
The APM administrative user should be in the same group as users who run Java applications that APM
will monitor. If there are multiple applications that are run by different users, they should share the same
group. This avoids permission issues:
• The user account that starts Java processes that APM monitors must have read and write access to
specific AppInternals directories. The installation creates those directories with group read and write
permissions.
• The APM administrative user must have read access to the transaction trace data generated by
applications. These files are created by the user account that starts the Java process. By default, the
JIDA sub-agent creates the files with permissions that limit access to group members and exclude
others from reading the files. You can change the permissions with which JIDA creates the files in the
UNIX File Creation Permissions settings of the Configuration Settings screen in the analysis server
interface.
Use the ps command to see what users are currently running Java applications. The following ps command
identifies jboss and weblogic as users running Java applications:
$ ps -ef | grep java
jboss 1469 1400 3 16:49 ? 00:01:39 java -D[Standalone] -server
weblogic 1879 1780 1 16:49 ? 00:00:46/usr/java/jdk1.7.0_60/bin/java
Use the id command to determine what user group those users belong to. The following example shows
that they belong to the webadm group, and that a third user, webadm, also belongs to that group.
$ id jboss
uid=498(jboss) gid=502(webadm) groups=502(webadm),0(root)
$ id weblogic
uid=497(weblogic) gid=502(webadm) groups=502(webadm)
$ whoami
webadm
$ id webadm
uid=502(webadm) gid=502(webadm) groups=502(webadm)
In this example, the webadm user would be the appropriate choice for the AppInternals administrative
user. In other cases, you may need to create a new account in the appropriate group.
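For example, on Linux you might create a dedicated administrative account whose primary group is the
webadm group shown above (the account name appadmin is a placeholder; adapt it to your environment):
useradd -m -g webadm appadmin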
libstdc++
Note: The x86_64 installation media contains both 32-bit (denoted by i686) and 64-bit (denoted by x86_64)
versions. Be sure to specify the 32-bit version by including i686 in the file name:
If you plan to install the agent on multiple systems, copy the extracted script to a disk shared across the
network if possible. This makes installation on multiple systems more convenient.
You must run the installation script as root or from the account you want to use as the agent administrator
(see “Installing Agent Software“). If you run the script from a non-root account, the agent software will not
be configured to start automatically when the system reboots. See “Enabling Agent Software to Start on
System Startup“ for details on doing this after the installation.
Temporary and installation directories
The installation script prompts for the location of two directories.
• A temporary directory where the installation script extracts files. The default is /tmp/. The installation
creates the temporary directory (named AppInternals_Xpert_Collector_<version>_<os>) in the parent
directory you specify. This directory is deleted after the installation.
• The actual installation directory. The installation script creates the Panorama directory and
subdirectories in the parent directory you supply.
If you run the installation from an account other than root, the account must have privileges to create both
directories, or the installation fails.
If you provide a relative path, the script appends it to the current directory and creates the directory there.
If you provide an absolute path, the script creates the directory there.
On-premises or SaaS?
Choose whether the agent will connect to an analysis server installed in your environment (“on premises”)
or to the Software as a Service (SaaS) offering of the analysis server (SteelCentral SaaS).
The SaaS option requires a SaaS customer ID. If you do not have a customer ID, do not choose this option,
since you cannot complete the installation without it. One way to obtain a customer ID is to register for a
free trial. After registering, you receive a user name and password to log in to the SaaS analysis server. The
customer ID is on the “Install Agents Screen“ of the SaaS analysis server.
Administrator user account
You receive this prompt only if you run the installation script as root (otherwise, the account that started
the script will be the administrative user account). Supply the user name for the existing account you will
use to run the agent and sub-agents on this system. See “Installing Agent Software“ for more details.
On premises: Analysis server location
If you choose on premises, the installation prompts you for the system name, fully-qualified domain name
(FQDN), or IP address for the analysis server in your environment. If the analysis server name changes
after you install the agent, you must change this name as described in “Changing the Agent’s Analysis
Server“.
SaaS: Customer ID
If you choose SaaS, the installation prompts you for the SaaS customer ID copied from the Install Agents
screen of the SaaS analysis server.
Proxy server?
Choose whether the agent will use a proxy server to connect to the analysis server system. If so, you must
supply the system name, fully-qualified domain name (FQDN), or IP address for the proxy server. You can
change the proxy server details after installation as described in the configuration topic “Proxy Server
Configuration“.
Proxy Server Authentication?
If you specify a proxy server, choose whether the proxy server requires authentication. If so, supply a user
name, password, and (optional) realm. The agent supports proxy servers that use Basic and Digest
authentication. It does not support NTLM authentication.
Whether to display a summary of options for enabling instrumentation
An important configuration task after installing is to enable instrumentation for Java processes you want
to monitor. There are different approaches to enabling instrumentation manually, and you are prompted
during the installation to have a summary of these options displayed after the installation finishes. For
more information, see “Enabling Instrumentation of Java Processes“.
Linux only: Whether to enable automatic system-wide instrumentation
You can choose to enable instrumentation of Java and .NET Core processes only during a Linux installation
as root. This option is not supported on UNIX. For a list of supported Linux systems, see the System
Requirements document on the support page.
You receive this prompt under the following conditions:
• You are installing the agent on a Linux system.
• You run the installation script as root.
• You choose to display the summary of options for enabling instrumentation.
To enable instrumentation during a Linux installation, answer yes (y) to the enable automatic
instrumentation prompt.
Note: Enabling instrumentation of Java and .NET Core processes on UNIX is a post-installation task. For
more information, see “Enabling Instrumentation of Java Processes“ and “Enabling Instrumentation of
.NET Processes“.
Installing as root
Respond to prompts with the “Information Required for Installation“. The prompts indicate default values
in brackets [ ]. The following example shows required input and responses in bold.
[root@nhv1-rh6-1 tmp]# ./AppInternals_Agent_10.0.00163_Linux
1) Install
2) Upgrade
3) Exit
After it extracts files to the temporary directory, the installation displays the following prompts:
• On-premises or SteelCentral SaaS analysis server
• Administrator account
• Installation directory
• Analysis server the agent will connect to (for the SaaS analysis server, the customer ID)
• Whether the agent will use a proxy server (if so, whether it requires authentication)
• Whether to display a summary of options for enabling instrumentation
Choose On-Premises or Saas:
http://www.riverbed.com/steelcentral/steelcentral-saas-early-access-signup.html
After registering, you receive a user name and password to log in to the
SaaS analysis server. Your customer ID is on the "Install Agents" screen.
For root installations on Linux only, if you chose to display the options for enabling instrumentation, there
is a final prompt about whether to enable instrumentation system-wide. If you respond yes to the prompt,
the installation enables instrumentation, as follows:
********* Options for Enabling Java Monitoring ("Instrumentation") *********
*** Option 1: ***
Define the JAVA_TOOL_OPTIONS environment variable in your profile file (such as ~/.profile):
export JAVA_TOOL_OPTIONS="-agentpath:/opt/Panorama/hedzup/mn/\$LIB/librpilj.so $JAVA_TOOL_OPTIO
NS"
1) Start the Riverbed SteelCentral Agent and Sub-Agents. Log in as 'webadm' and run:
/opt/Panorama/hedzup/mn/bin/agent start
Installing as Non-root
Respond to prompts with the “Information Required for Installation“. The prompts indicate default values
in brackets []. The following example shows required input and responses in bold.
-bash-4.1$ ./AppInternals_Agent_10.0.00176_Linux
1) Install
2) Upgrade
3) Exit
1
The installation creates a temporary directory (AppInternals_Xpert_Collector_10.0.00176_Linux_Kit
). Where do you want to create it [/tmp]?
At this point, the installation extracts files to the temporary directory and generates several messages:
Riverbed SteelCentral AppInternals tar-file extracted successfully
Uncompressed instrumentation.tar.gz in /tmp/AppInternals_Xpert_Collector_10.0.00176_Linux_Kit
Unpacking instrumentation...
.
.
.
After it extracts files to the temporary directory, the installation displays remaining prompts:
• On-premises or SteelCentral SaaS analysis server
• Installation directory
• Analysis server the agent will connect to (for the SaaS analysis server, the customer ID)
• Whether the agent will use a proxy server (if so, whether it requires authentication)
• Whether to display a summary of options for enabling instrumentation
Choose On-Premises or Saas:
http://www.riverbed.com/steelcentral/steelcentral-saas-early-access-signup.html
After registering, you receive a user name and password to log in to the
SaaS analysis server. Your customer ID is on the "Install Agents" screen.
After you respond to those prompts, the installation completes without further intervention. At the end, it
summarizes options for enabling instrumentation (if you chose to display them) and other
“Post-Installation Tasks“.
*** Option 1: ***
Define the JAVA_TOOL_OPTIONS environment variable in your profile file (such as ~/.profile):
export JAVA_TOOL_OPTIONS="-agentpath:/opt/Panorama/hedzup/mn/\$LIB/librpilj.so $JAVA_TOOL_OPTIO
NS"
bash-4.1$
Post-Installation Tasks
This section describes tasks to complete after installation.
See the “Controlling Agent Software on Unix-Like Systems“ configuration topic for details.
Log in with the user name of admin and the default password of riverbed-default.
In the Configure menu, navigate to the “Agent List Screen“. You should see the agent system listed with
the icon for new agents. Click the server that is running an application that you want to monitor. In the
Agent Details screen, click the edit icon in the row for the application to open the “Edit Processes to
Instrument Dialog“. Click the Instrument option and Save:
Troubleshooting Instrumentation
For information on resolving issues with instrumentation, see “Instrumentation Techniques and
Troubleshooting“.
Check to make sure that each user that starts an application can access the Panorama/ directory. For each
user, log in as that user and verify that the user has access to the Panorama folder. In this example, the jboss
user does not:
[jboss]$ ls -l /home/webadm/Panorama
ls: cannot access /home/webadm/Panorama: Permission denied
[jboss]$ ls -l /home/webadm
ls: cannot access /home/webadm: Permission denied
[jboss]$ ls -l /home
drwxr-xr-x. 4 jboss jboss 4096 Jul 26 17:33 jboss
drwx------. 3 webadm webadm 4096 Jul 26 17:30 webadm
drwxr-xr-x. 2 weblogic weblogic 4096 Jul 26 17:34 weblogic
Log in as the user that owns the parent directory (webadm in this example) and correct the permissions as
necessary. For example:
[webadm]$ chmod 770 /home/webadm
Before making any changes, the uninstall checks for running Java processes that APM is monitoring. You
must manually stop such instrumented processes before you can continue. If the uninstall detects any, it
lists their process identifiers. For example:
The uninstall script found process that AppInternals is monitoring. You must stop the process to
continue.
PID 11611
Answer 'y' to continue or 'n' to terminate Riverbed SteelCentral AppInternals uninstall [y/n]:y
Stop the processes before continuing. The uninstall should finish without further intervention.
Successfully disabled process injection.
Process injection already disabled.
Successfully uninstalled process injection library.
Change to different directory and run /bin/rm -rf /opt/Panorama to completely remove Riverbed Ste
elCentral AppInternals files
The uninstall script deletes the <installdir>/Panorama/hedzup/mn directory and its descendants. You
must manually delete the Panorama directory and any remaining descendants.
Note: Before upgrading agents, refer to the release notes for any upgrade-specific information.
The Upgrade option of the installation script replaces executable and program files for agent software while
preserving configuration settings and data. For any version-specific upgrade considerations, see the
Release Notes available from the APM support page:
https://support.riverbed.com/content/support/software/opnet-performance/appinternals-xpert.html
Before upgrading the agent, you must manually stop Java applications that APM is monitoring. Check the
web interface “Agent Details Screen“ to see which processes you need to stop. On the agent, you must stop
processes that show as Instrumented.
You must be logged in as root or as the user specified as the agent administrator (see “Installing Agent
Software“) to remove or upgrade agent software. However, with a non-root user, the installation script first
checks for the presence of files that require root access to delete. If any of the following files are present, the
installation script will not allow non-root users to proceed:
• On Linux only, the librpil.so shared object in the /lib and /lib64 system directories. These files allow
automatic instrumentation of Java applications when they start.
• The appinternals script in the /etc/init.d/ system directory. This script starts APM automatically
when the system reboots.
In this case, you must either run the installation script as root, or run the script
<installdir>/Panorama/uninstall_mn/uninstall_root_required.sh as root. This script removes the files
requiring root access. After you run uninstall_root_required.sh as root, you can remove or upgrade the
agent as the non-root agent administrator.
As described in “Installing Agent Software“, decompress the installation script before upgrading. Choose
the Upgrade option, and the script prompts for the location of a temporary directory to extract files. The
following example shows required input and responses in bold.
bash-4.1$ ./AppInternals_Agent_10.6.0.607_Linux
1) Install
2) Upgrade
3) Exit
2
The installation creates a temporary directory (AppInternals_Agent_10.0.1187_Linux_Kit). Where do
you want to create it? [/tmp]:
At this point, the upgrade extracts files to the temporary directory. It then prompts you to continue:
Stop all Java processes that were being monitored by AppInternals (they lock files you want to re
move).
Do you want to upgrade Riverbed SteelCentral AppInternals from 10.6.0.600 to 10.6.0.607? [y/n]:y
Before making any changes, the upgrade checks for running Java processes that APM is monitoring. You
must manually stop such instrumented processes before you can continue. If the upgrade detects any, it lists
their process identifiers. For example:
The upgrade script found process that AppInternals is monitoring. You must stop the process to
continue.
PID 11611
Answer 'y' to continue or 'n' to terminate Riverbed SteelCentral AppInternals upgrade [y/n]:y
Stop the processes before continuing. The upgrade should finish without further intervention.
Stopping all Riverbed SteelCentral AppInternals components...
stopping the DSA...
stopping the DSA...
[ok]
[stopped] dsa
[stopped] os_agent
[stopped] agentrt_agent
[stopped] npm_agent
Successfully disabled process injection.
Process injection already disabled.
Successfully uninstalled process injection library.
Riverbed SteelCentral AppInternals file transfer to /opt/Panorama/ was successful
Installing the system-level automatic startup/shutdown script and links...
Successfully added appinternals as a service
Process injection already disabled.
Successfully uninstalled process injection library.
Successfully installed process injection library.
Successfully enabled process injection.
1) Start the Riverbed SteelCentral Agent and Sub-Agents. Log in as 'webadm' and run:
/opt/Panorama/hedzup/mn/bin/agent start
bash-4.1$
This section describes how to automate the installation of agent software by specifying options in a
response file and invoking the agent installer with the -silentinstall and -silentupgrade arguments.
Installation Prerequisites
For a list of installation prerequisites, see “Installation Prerequisites“ in “Agent Installation: Unix-Like
Operating Systems“.
1) Use the -extract argument with the agent installer to extract the kit contents to a temporary directory
you specify. For example, to extract to /tmp:
[root@nhv1-rh6-1 tmp]# ./AppInternals_Agent_10.0.1.7_Linux -extract
The installation creates a temporary directory (AppInternals_Agent_10.0.1.7_Linux_Kit). Where
do you want to create it? [/tmp]:/tmp
Riverbed SteelCentral AppInternals tar-file extracted successfully
.
.
.
[root@nhv1-rh6-1 tmp]#
2) Edit the install_mn/install.properties.template file in the extracted directory. You must at least supply
values for the “INSTALLDIR“ and (for on-premises analysis servers)
“O_SI_ANALYSIS_SERVER_HOST“ options.
3) Save the file to a convenient location and name. For example, /tmp/install.properties.
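A minimal response file for an on-premises analysis server might contain only the required options; the
values below are placeholders taken from the option reference later in this chapter:
INSTALLDIR="/opt"
O_SI_ANALYSIS_SERVER_HOST="myserver.example.com"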
If you do not set the “O_SI_AUTO_INSTRUMENT“ option to “true”, are performing a non-root
installation on Linux, or are installing on a UNIX system, you must manually enable the instrumentation of
Java and .NET Core processes after the installation completes.
For more information, see “Enabling Instrumentation of Java Processes“ and “Enabling Instrumentation of
.NET Processes“.
If you already have a response file available, you do not have to extract kit files. Simply specify the agent
installer with the -silentinstall argument and the response file. For example:
[root@nhv1-rh6-1 tmp]# ./AppInternals_Agent_10.0.00176_Linux -silentinstall ./install.properties
Uncompressed instrumentation.tar.gz in //AppInternals_Xpert_Collector_10.0.00176_Linux_Kit
.
.
.
Note: Before upgrading, refer to the release notes for any upgrade-specific information.
Use the -silentupgrade argument to upgrade an existing installation. As with a new installation, you need
a response file. Extracting the kit as described in “Creating a Response File“ also creates a template for
upgrades that you can adapt, in install_mn/upgrade.properties.template.
Edit that template file to change any properties you want and save it to a convenient location. For example,
/tmp/upgrade.properties. Run the upgrade installer with the -silentupgrade argument and the response
file.
The following example shows running the upgrade script (NOT the install script) from the extracted
install_mn directory. It uses the -silentupgrade argument and specifies the response file
/tmp/upgrade.properties:
[root@127 tmp]# ./AppInternals_Agent_10.11.0.588_Linux_Kit/install_mn/upgrade -silentupgrade
./upgrade.properties
Uncompressed instrumentation.tar.gz in /tmp/AppInternals_Agent_10.11.0.588_Linux_Kit
.
.
.
Or, you can simply specify the agent installer with the -silentupgrade argument and the response file:
[root@110 tmp]# ./AppInternals_Agent_10.11.0.588_Linux -silentupgrade upgrade.properties
INSTALLDIR
Description Specifies the destination directory for the agent installation. The installation script creates the
Panorama directory and subdirectories in the parent directory you supply. If you run the
installation from an account other than root, the account must have privileges to create
directories in this directory, or the installation fails.
Example INSTALLDIR="/opt"
O_SI_ANALYSIS_SERVER_HOST
Description Specifies the location of the on-premises APM analysis server. (This option has no effect for
installations that specify the SaaS analysis server by setting
“O_SI_SAAS_ANALYSIS_SERVER_ENABLED“.) The agent connects to this system to
initiate communications. Use this option for new installations only. See “Changing the
Agent’s Analysis Server“ for instructions on changing this value after the agent has been
installed.
Valid Values The system name, fully-qualified domain name (FQDN), or IP address for the analysis
server.
Example O_SI_ANALYSIS_SERVER_HOST="myserver.example.com"
O_SI_ANALYSIS_SERVER_HOST="10.46.35.218"
O_SI_ANALYSIS_SERVER_PORT
Description Specifies the port on which the APM analysis server is listening. The agent connects to this
port to initiate communications. See “Changing the Agent’s Analysis Server“ for instructions
on changing this value after the agent has been installed.
Default Value 80
Example O_SI_ANALYSIS_SERVER_PORT="8051"
O_SI_ANALYSIS_SERVER_SECURE
Description Specifies whether the connection to the APM analysis server should be secure.
Example O_SI_ANALYSIS_SERVER_SECURE="true"
O_SI_AUTO_INSTRUMENT
Description Specifies whether the installation automatically enables system-wide instrumentation of Java and
.NET Core processes (root installations on Linux only).
Example O_SI_AUTO_INSTRUMENT="true"
O_SI_CUSTOMER_ID
Description Specifies the customer ID that identifies your account to the SaaS analysis server. Use this option
together with “O_SI_SAAS_ANALYSIS_SERVER_ENABLED“.
Example O_SI_CUSTOMER_ID="5e6f1281-162d-11a6-a7c8-ab267ed63dc8"
O_SI_EXTRACTDIR
Description Specifies the directory to temporarily extract files for the agent installation.
Valid Values Any existing directory to which the user has write permission.
Example O_SI_EXTRACTDIR="~/"
O_SI_PROXY_SERVER_AUTHENTICATION
Example O_SI_PROXY_SERVER_AUTHENTICATION="true"
O_SI_PROXY_SERVER_ENABLED
Description Whether the agent will use a proxy server to connect to the analysis server system.
Example O_SI_PROXY_SERVER_ENABLED="true"
O_SI_PROXY_SERVER_HOST
Valid Values The system name, fully-qualified domain name (FQDN), or IP address for the proxy server.
Example O_SI_PROXY_SERVER_HOST="myproxyserver.example.com"
O_SI_PROXY_SERVER_HOST="10.46.35.238"
O_SI_PROXY_SERVER_PASSWORD
Example O_SI_PROXY_SERVER_PASSWORD="myproxypassword"
O_SI_PROXY_SERVER_PORT
Example O_SI_PROXY_SERVER_PORT="8080"
O_SI_PROXY_SERVER_REALM
Example O_SI_PROXY_SERVER_REALM="myproxyrealm"
O_SI_PROXY_SERVER_USER
Example O_SI_PROXY_SERVER_USER="myproxyuser"
O_SI_SAAS_ANALYSIS_SERVER_ENABLED
Description Specifies whether the agent will connect to the Software as a Service (SaaS) offering of the
analysis server.
Example O_SI_SAAS_ANALYSIS_SERVER_ENABLED="true"
O_SI_USERACCOUNT
Description Specifies the user name for the existing account you will use to run the agent and sub-agents
on this system. See the “Installing Agent Software“ prerequisite for more details.
This value is ignored unless you run the installation script as root. (If you run it as a
non-root user, that user is automatically the administrative user.)
Valid Values Any valid user name (the account must have privileges to create directories in the directory
specified by “INSTALLDIR“).
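To illustrate how these options combine, the following sketch shows a response file for an agent that reports to the SaaS analysis server through an authenticating proxy. All values are the placeholder examples from the option descriptions above; replace them with your site's values.
INSTALLDIR="/opt"
O_SI_SAAS_ANALYSIS_SERVER_ENABLED="true"
O_SI_CUSTOMER_ID="5e6f1281-162d-11a6-a7c8-ab267ed63dc8"
O_SI_PROXY_SERVER_ENABLED="true"
O_SI_PROXY_SERVER_HOST="myproxyserver.example.com"
O_SI_PROXY_SERVER_PORT="8080"
O_SI_PROXY_SERVER_AUTHENTICATION="true"
O_SI_PROXY_SERVER_USER="myproxyuser"
O_SI_PROXY_SERVER_PASSWORD="myproxypassword"
O_SI_AUTO_INSTRUMENT="true"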
This chapter describes how to deploy APM agents in cloud-native environments. In these environments,
servers that APM monitors can be cloned many times, are created and destroyed frequently, and each
instantiation typically has a different host name and IP address.
The following sections explain how to deploy the agent in supported cloud-native environments:
– “Configuring Processes Running in PaaS Environments to Be Instrumented on Initial Startup“
– “Creating tags with the tags.yaml File for Agents Running in PaaS Environments“
– “Monitoring Docker Containers“
– “Monitoring Kubernetes and OpenShift Environments When Installing Agents on Individual
Nodes“
– “Deploying The Agent on Kubernetes Using a DaemonSet“
– “Monitoring VMware Tanzu Applications“
Use this approach in PaaS environments and in any environment where you want to avoid initial
configuration in the analysis server interface and having to restart processes.
Note: After Initial Startup, Make Changes in the Analysis Server Interface
Creating configuration and mapping files as described here is useful only for the initial agent startup. After
the initial startup, the agent ignores those files if any user changes the corresponding process
configuration in the “Agent Details Screen“ or configuration settings in the “Define a Configuration
Screen“.
IIS* General entry for IIS. (Simply use IIS* if you do not need to differentiate between IIS variants.)
For other processes, you can find the process name in the analysis server interface. Look in the Discovered
As value for the processes in the “Agent Details Screen“:
include:*glassfish*
Do not put spaces between the include and config properties, the colon ( : ) separator, and the
corresponding names.
The configuration file name can include the .json file extension, but it is not required. If the file name
includes spaces, enclose it in quotation marks ( " ). Otherwise, quotation marks are not allowed.
Precede comments with the hash character (number sign) #. Any line starting with # is ignored.
The sample file has a single entry that instruments IIS processes. Adapt the sample to suit your purposes.
The following example changes the configuration file for IIS processes and adds an entry for the prunsrv
process. Both entries specify configuration files created using the analysis server interface and downloaded
as described in “Creating Configuration Files for Agents Running in PaaS Environments“:
include:IIS_* config:EUE_enabled.json
include:prunsrv config:default_config.json
In addition, the agent installation creates a template file named tags.yaml.template in the
<installdir>\Panorama\hedzup\mn\userdata\config\ directory. Adapt the file to suit your purposes and
save it as tags.yaml.
You do not need to restart the agent or (for Version 10.9 agents and later) applications for changes to
tags.yaml to take effect.
Note: Agents Earlier than Version 10.9 Propagate Tags Less Frequently
For Agents earlier than Version 10.9, changes to tags do not propagate to transaction trace files (the source
of data for the Application Map and Search tabs and the Transaction Details window) until the sub-agent
starts a new trace file or the application is restarted.
tags.yaml Syntax
Specify tags using this syntax:
key : value
tags.yaml Examples
The following examples show some tags. Adapt them to create the tags.yaml file:
# Spaces are allowed in both the key and value:
logical server : Tier 1
OS : Windows 7
# Create a tag without a value:
NoValue : ""
# Assign multiple values to a single key:
MultiValue : [one, two]
# Same value for different keys:
FooKey : Bar
FooKey2 : Bar
Here is how the tags in the previous example would appear in the Tags column of the table in the “Servers
Tab“:
Build a Docker image that installs and runs the agent on every container you want to monitor. Use this
approach only when you do not have access to the Docker host:
Download a configuration file from the analysis server that has the specific configuration settings for
the application processes that you want to monitor. See “Creating Configuration Files for Agents
Running in PaaS Environments“.
Create the initial-mapping file that specifies a process name and corresponding configuration file. See
“Mapping Processes to Configuration Files“ for details. For example, here is an initial-mapping file
entry for Tomcat processes using the config.json configuration file:
include:Tomcat* config:config.json
Argument Meaning
Target directory Directory for the generated Dockerfile and supporting files needed to build the instrumented version of the Docker image. If
this directory does not exist, the script will prompt whether it should create it.
Image to instrument An existing Docker image to instrument. This script uses this value as the argument to the FROM instruction in the generated
Dockerfile. Include a tag value as appropriate. (Docker documentation on FROM)
The script does not require this value, but if you omit it, you must edit the generated Dockerfile and supply it.
User The user name specified for the Dockerfile USER instruction in the existing base Docker image. This value is optional if the base
image does not specify a USER value. If you omit this argument, the script uses a value of root.
(Docker documentation on USER)
Supply argument values on the command line following the -y argument. If omitted, the script prompts for
values.
The following example shows creating an instrumented version of a Docker image,
zeus.run/app/3tier/tomcat. This example runs the script without arguments on the command line so that
it prompts for values.
1) Use the docker images command to see details about the image:
[root@11A opt]# cd /opt/Panorama/hedzup/mn/bin
[root@11A bin]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
zeus.run/app/3tier/tomcat 2.0 89ee1c3ab841 7 days ago 776.5
MB
[root@11A bin]#
2) Run the script and respond to the prompts. Note that the image value includes the 2.0 tag.
[root@11A bin]# ./createDockerfile.sh
/opt/Panorama/hedzup/mn/bin /opt/Panorama/hedzup/mn/bin
Directory to create Dockerfile in: /opt/docker-instrument-3tier-tomcat
Do you want to create: /opt/docker-instrument-3tier-tomcat [y/N]? y
Specify a Docker image and version for the FROM command (optional): zeus.run/app/3tier/tomcat
:2.0
If your Docker image has a USER command, enter its username (optional):
=============================================================================================
=
Creating: /opt/docker-instrument-3tier-tomcat/Dockerfile
Using: FROM zeus.run/app/3tier/tomcat:2.0
Using: USER root
To create your instrumented image, use a command like this:
docker build -t zeus.run/app/3tier/tomcat-instr:2.0 /opt/docker-instrument-3tier-tomcat
=============================================================================================
=
[root@11A bin]#
3) Confirm that the script created the specified directory with a Dockerfile and supporting instr/ subdirectory.
[root@11A bin]# ls -al /opt/docker-instrument-3tier-tomcat
total 16
drwxr-xr-x 3 root root 4096 Jun 6 12:45 .
drwxr-xr-x 6 root root 4096 Jun 8 09:47 ..
-rw-r--r-- 1 root root 993 Jun 6 12:45 Dockerfile
drwxr-xr-x 4 root root 4096 Jun 6 12:45 instr
Here is a parallel example that shows supplying the arguments on the command line following the -y
argument:
[root@11A bin]# ./createDockerfile.sh -y /opt/docker-instrument-3tier-tomcat-CL zeus.run/app/3tie
r/tomcat:2.0
/opt/Panorama/hedzup/mn/bin /opt/Panorama/hedzup/mn/bin
==============================================================================================
Creating: /opt/docker-instrument-3tier-tomcat-CL/Dockerfile
Using: FROM zeus.run/app/3tier/tomcat:2.0
Using: USER root
To create your instrumented image, use a command like this:
docker build -t zeus.run/app/3tier/tomcat-instr:2.0 /opt/docker-instrument-3tier-tomcat-CL
==============================================================================================
Use the docker images command to confirm that the instrumented image was built:
[root@11A bin]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZ
E
zeus.run/app/3tier/tomcat-instr 2.0 0c29f7b28e10 12 seconds ago 778
.6 MB
Note: Only Required For Containers That Use The Bridge Network
This section describes steps for opening network ports for containers that use the Docker bridge network.
These steps are not required for containers that use the host network (containers that start with the docker
run --network=host option).
By default, new Docker containers are automatically connected to the Docker bridge network. In this case,
monitored processes will not have access to APM ports (Docker documentation on container networking).
Without access, no environmental or transaction trace data will be written to the agent on the Docker host.
When this happens, the JIDA sub-agent writes log messages similar to the following in
<installdir>\Panorama\hedzup\mn\ on the Docker host:
java.io.IOException: Trouble getting port from DSA: java.net.NoRouteToHostException: No route to
host (Host unreachable)
The following iptables commands open required ports on the Docker host. These commands require root
access. The ports need to be opened only for the Docker network (for containers, in other words).
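The exact rules depend on your environment; the following is a minimal sketch rather than the product's required rule set. It assumes the default Docker bridge subnet (172.17.0.0/16), the agent ports listed later in this chapter (2111 and 7071 through 7074), and the fixed DSA data port of 33000 configured below.
# iptables -I INPUT -p tcp -s 172.17.0.0/16 --dport 2111 -j ACCEPT
# iptables -I INPUT -p tcp -s 172.17.0.0/16 --dport 7071:7074 -j ACCEPT
# iptables -I INPUT -p tcp -s 172.17.0.0/16 --dport 33000 -j ACCEPT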
Note: The iptables command supports a -n option, which prevents DNS lookups.
The last command opens a single port for subsequent connections. You must also configure the DSA to use
the same fixed port. To configure the DSA to use a fixed port, stop the DSA and edit the file
<installdir>\Panorama\hedzup\mn\data\dsa.xml. Change the value of the DsaDaDataPort attribute
and restart the DSA. For example:
<Attribute name="DsaDaDataPort" value="33000"/>
Note: Avoid opening a large range of ephemeral ports because it slows down the startup of containers.
When you run a container from the instrumented image, use the docker run -v (--volume) option to bind
mount the agent installation directory on the host into the container (Docker documentation on docker run).
For example:
[root@11A zeus]# docker run -d -p 8888:8080 -v /opt/Panorama:/opt/Panorama
zeus.run/app/3tier/tomcat-instr:2.0
79e23b4b2fb81154cec7390d5ff8791f1ba0a90583ea0f025d82e4d57905930f
In Docker swarm mode, specify the --mount type=bind option on the docker service create command
(Docker documentation on swarm mode). For example:
docker service create \
--mount type=bind,source=/opt/Panorama,destination=/opt/Panorama \
-p 8080:8080 -p 9990:9990 zeus.run/app/3tier/tomcat:2.0
Troubleshooting
This section describes common problems and the configuration changes that work around them.
Add the RVBD_DSAHOST environment variable to the docker run command using the -e (--env)
option. This method has the advantage of being set in the docker container. For example:
docker run -d -p 8888:8080 \
-e RVBD_DSAHOST=1.2.3.4 \
-v /opt/Panorama:/opt/Panorama \
zeus.run/app/3tier/tomcat-instr:2.0
If you use both methods, the RVBD_DSAHOST environment variable takes precedence.
Servers Tab
The Servers tab shows Docker containers in the Server column of its table, denoted by the Docker icon and
a name formatted as specified in the “Docker Container Display Name Screen“. The Tags column shows
special “Docker Container Tags“ that give additional information about the container (as well as any
user-defined tags). Click the expander to see all values.
docker hostname: Host name of the Docker host where the agent is installed (this value is used by the
“container.parent“ search field). Example: docker hostname=0C4.internal.zeus
File Description
AppInternals_Agent_<version>_Linux.gz Gzip-compressed agent installation script. Obtain this file from the APM download page.
install.properties Response file that specifies agent installation options. As described in “Unattended Agent Installation: Unix-Like Operating Systems“, the -silent argument to the agent installer specifies this file.
<configuration_name>.json Configuration file with instrumentation settings for the application processes that you want to monitor. See “Configuring Processes Running in PaaS Environments to Be Instrumented on Initial Startup“ for details on downloading configuration files from the analysis server.
initial-mapping Mapping file that specifies the processes that you want to instrument when the agent starts and a corresponding configuration file.
tags.yaml Tag file with custom identifiers that categorize servers that APM is monitoring.
The following example shows an excerpt of a Dockerfile that adds these files and installs the agent using
the -silent argument:
# Install Appinternals agent
WORKDIR /opt
ADD appinternals_agent_latest_linux.gz .
ADD install.properties .
RUN gunzip appinternals_agent_latest_linux.gz
RUN chmod +x appinternals_agent_latest_linux
RUN ./appinternals_agent_latest_linux -silent install.properties
RUN rm -rf appinternals_agent_latest_linux
# Add instrumentation configuration and initial-mapping files
ADD config.json /opt/Panorama/hedzup/mn/userdata/config/
ADD initial-mapping /opt/Panorama/hedzup/mn/userdata/config/
ADD tags.yaml /opt/Panorama/hedzup/mn/userdata/config/
The Docker container must run dsactl start to start the agent when it launches:
– The agent must start before the application that will be monitored.
– Add a 2-second delay between starting the agent and starting the application. This allows the agent to
write files required by the JIDA sub-agent when it starts with the application.
See the “Controlling Agent Software on Unix-Like Systems“ configuration topic for details on dsactl. See
https://docs.docker.com/engine/admin/using_supervisord/ for details on using Supervisor with
Docker.
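One way to meet these requirements without Supervisor is a small container entrypoint script. The following is a sketch only: the dsactl path assumes the agent is installed in /opt, and the catalina.sh path assumes a standard Tomcat image, so adjust both for your image.
#!/bin/sh
# Start the APM agent first (path assumes the agent is installed in /opt).
/opt/Panorama/hedzup/mn/bin/dsactl start
# Give the agent a moment to write the files the JIDA sub-agent needs at application startup.
sleep 2
# Start the monitored application in the foreground (path assumes a standard Tomcat image).
exec /usr/local/tomcat/bin/catalina.sh run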
After you start a Docker container that uses the customized image, the agent connects to the APM analysis
server specified in the response file (install.properties in the previous example), by the
“O_SI_ANALYSIS_SERVER_HOST“ option for on-premises servers or by “O_SI_SAAS_ANALYSIS_SERVER_ENABLED“
and “O_SI_CUSTOMER_ID“ for the SaaS analysis server. The agent appears in the analysis server interface
with the Docker container ID as the agent and server host name:
After the installation is complete, start the agent, as described in “Starting the Agent Software“.
Note: Examples in this section assume that the agent is installed in /opt.
There are multiple approaches for configuring pods. The following sections show using the OpenShift web
console to modify an existing application’s deployment with the required changes:
“Defining Environment Variables“
“Mounting the Agent Installation Directory“
“Confirming Instrumentation“
In the editing window that opens, find the containers: section. If there is not already an - env: list item under
it, paste the bold text in the following example under the containers: section. If there is already an - env: list
item, omit the first line of bold text (- env:).
Note: Begin the stanza with - env: if you are delineating between containers in your deployment and this is
the first property of the container. However, if this is not the first property of the container, use env: instead
(without the initial dash).
spec:
  containers:
    - env:
        - name: JAVA_TOOL_OPTIONS
          value: '-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so'
        - name: RVBD_AGENT_FILES
          value: '1'
        - name: RVBD_AGENT_PORT
          value: '7073'
        - name: RVBD_DSAHOST
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
Indentation is important in yaml. The editing window performs some validation, and you can use a
validator such as https://codebeautify.org/yaml-validator.
Click Save in the editing window. Open the Environment tab to confirm that the variables were defined as
expected:
1) In the spec: section, add the following volumes: specification, at the same level as the containers:
section.
volumes:
  - hostPath:
      path: /opt/Panorama
      type: ''
    name: appint
2) In the containers: section, add the following volumeMounts: specification, at the same level as the
- env: list item:
volumeMounts:
  - mountPath: /opt/Panorama
    name: appint
With both changes in place, the end of the containers: section looks like this:
          value: '1'
        - name: RVBD_AGENT_PORT
          value: '7073'
        - name: RVBD_DSAHOST
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
      volumeMounts:
        - mountPath: /opt/Panorama
          name: appint
When you save changes, sections at a given level in the yaml are rearranged alphabetically. This is
expected.
Confirming Instrumentation
After you change the yaml deployment configuration, OpenShift automatically redeploys pods for the
application.
In the web console Applications menu, choose Pods. The pod for the application you modified should
show a status of Running with all of its containers ready:
After a few minutes, transaction data should appear in the analysis server interface. In the Search tab,
search for the project name (doc-example in this case).
server.tag = 'container namespace=doc-example'
The following sections explain how to install and upgrade an AppInternals agent in a DaemonSet:
“Deploying the AppInternals Agent as a DaemonSet (No Helm Charts)“
“Deploying the AppInternals Agent as a DaemonSet (Helm Charts)“
“Upgrading the Agent in a Kubernetes DaemonSet“
1. Log in to your local system as a user with access to docker and kubectl.
2.1. Download the agent tar.gz file from the Riverbed Support site, where it is referred to as the
Aternity APM for Kubernetes Kit.
The tar.gz file has a name like rvbd_agent_VERSION.tar.gz and contains the files necessary for
configuring and deploying a DaemonSet and instrumenting an example Apache Tomcat application.
Note—The example yaml and properties files in this document are for informational purposes only.
Use the template files that ship with the agent to create your site-specific files.
2.2. Decompress and untar the agent tar file by entering the following command:
# tar zxvf rvbd_agent_VERSION.tar.gz
2.3. Change your working directory to the directory that holds the agent tar file, by entering the
following command:
# cd rvbd_agent_VERSION
2.4. Load the agent docker image and push it to a private docker registry, as shown in the sketch following this note.
Note—You will need the location of the image in the registry later when you edit the
rvbd-agent.yaml file in step 3.5.
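The image file name, the loaded image name, and the registry location in the following sketch are placeholders (the actual names depend on the contents of your kit and your registry), so adapt them to your site:
# docker load -i rvbd_agent_VERSION.tar
# docker tag rvbd/agent:VERSION myregistry.example.com:5000/rvbd/agent:VERSION
# docker push myregistry.example.com:5000/rvbd/agent:VERSION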
3. Configure the Riverbed DaemonSet on the Kubernetes cluster by following these steps:
3.1. Log in to your local system as a user with access to docker and kubectl.
3.2. (Optional) Although not required, Riverbed recommends that you create a "riverbed" namespace,
which will provide better logical isolation between user applications and Riverbed components.
The example rvbd-agent-setup.yaml file includes a section to create the "riverbed"
namespace. For information on when to use namespaces and how to create them, see Namespaces
in the Kubernetes documentation.
After you have enabled the riverbed namespace in the rvbd-agent-setup.yaml file, enter the
following command to create the namespace in Kubernetes:
# kubectl create namespace riverbed
3.3.1 Using the text editor of choice, edit the rvbd-agent-setup.yaml template file to make
any necessary changes specific to your site. The file looks like this:
rvbd-agent-setup.yaml
Note—This example yaml file is for informational purposes only. Cutting and pasting may not
preserve formatting, which could result in the yaml file not validating. Additionally, the contents of
the actual file could change. Therefore, ensure that you use the template file that ships with the agent.
apiVersion: v1
kind: Namespace
metadata:
  name: riverbed
---
apiVersion: v1
kind: Secret
metadata:
  name: rvbd-secret
  namespace: riverbed
  labels:
    app: "rvbd"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rvbd-agent
rules:
- apiGroups:
  - ""
  resources:
  - services
  - events
  - endpoints
  - pods
  - nodes
  - componentstatuses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- nonResourceURLs:
  - "/version"
  - "/healthz"
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - nodes/spec
  - nodes/proxy
  verbs:
  - get
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: rvbd-agent
  namespace: riverbed
---
# Your admin user needs the same permissions to be able to grant them
# See https://cloud.google.com/container-engine/docs/role-based-access-control#setting_up_role-based_access_control
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rvbd-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rvbd-agent
subjects:
- kind: ServiceAccount
  name: rvbd-agent
  namespace: riverbed
3.3.2 Run the following kubectl create command to create the service account from the yaml
file:
# kubectl create -f rvbd-agent-setup.yaml
3.4.1 Using the text editor of choice, edit the template file agent-daemonset-env.properties
and ensure that the values are correct for your site. The file is set up for a SaaS environment,
and looks like this:
agent-daemonset-env.properties
RVBD_SAAS_ANALYSIS_SERVER_ENABLED=true
RVBD_ANALYSIS_SERVER_HOST=collector-1.steelcentral.net
RVBD_CUSTOMER_ID=customer_id
RVBD_ANALYSIS_SERVER_PORT=443
RVBD_MAX_INSTR_LOGS=500
#RVBD_APP_CONFIG=new-config
#RVBD_LOGICAL_SERVER=tag_value
#container_metrics=true
3.4.2 Using the kubectl create command, create a ConfigMap from the properties file, as
follows:
# kubectl create configmap agent-daemonset-env
--from-env-file=./agent-daemonset-env.properties -n riverbed
3.5. Create the Riverbed agent as a Kubernetes DaemonSet by doing the following:
3.5.1 Edit the template file rvbd-agent.yaml and ensure that the values are correct for your
site, especially the location of the docker registry – returned in step 2.4. The file looks like this:
rvbd-agent.yaml
Note—This example yaml file is for informational purposes only. Cutting and pasting may not
preserve formatting, which could result in the yaml file not validating. Additionally, the contents of
the actual file could change. Therefore, ensure that you use the template file that ships with the agent.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rvbd-agent
  namespace: riverbed
spec:
  selector:
    matchLabels:
      app: rvbd-agent
  updateStrategy:
  template:
    metadata:
      labels:
        app: rvbd-agent
      name: rvbd-agent
    spec:
      serviceAccountName: rvbd-agent
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - image: repo/agent:11.4.2.521
        name: rvbd-agent
        securityContext:
          capabilities: {}
          privileged: false
        ports:
        - containerPort: 2111
          hostPort: 2111
          name: dsaport
          protocol: TCP
        - containerPort: 7071
          hostPort: 7071
          name: daport
          protocol: TCP
        - containerPort: 7072
          hostPort: 7072
          name: agentrtport
          protocol: TCP
        - containerPort: 7073
          hostPort: 7073
          name: profilerport
          protocol: TCP
        - containerPort: 7074
          hostPort: 7074
          name: cmxport
          protocol: TCP
        envFrom:
        - configMapRef:
            name: agent-daemonset-env
        env:
        - name: RVBD_DSAHOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        resources:
          requests:
            memory: "2G"
            cpu: "200m"
          limits:
            memory: "2G"
            cpu: "200m"
        volumeMounts:
        - name: rvbd-files
          mountPath: /host/opt/Panorama
        - name: dockersocket
          mountPath: /var/run/dockersocket
        - name: pks-dockersocket
          mountPath: /var/run/pks-dockersocket
        - name: procdir
          mountPath: /host/proc
          readOnly: true
        - name: cgroups
          mountPath: /host/sys/fs/cgroup
          readOnly: true
      imagePullSecrets:
      - name: regcred
      volumes:
      - hostPath:
          path: /opt/Panorama
        name: rvbd-files
      - hostPath:
          path: /var/run/docker.sock
        name: dockersocket
      - hostPath:
          path: /var/vcap/data/sys/run/docker/docker.sock
        name: pks-dockersocket
      - hostPath:
          path: /proc
        name: procdir
      - hostPath:
          path: /sys/fs/cgroup
        name: cgroups
      tolerations:
      - operator: "Exists"
        effect: "NoSchedule"
      - operator: "Exists"
        effect: "NoExecute"
3.5.2 (Optional) If the Docker image is from a private repository, a secret needs to be created for
pulling the image in the above yaml file (refer to the imagePullSecrets spec).
To create the secret, enter the following command, where <your-registry-server> is your
Private Docker Registry FQDN (https://index.docker.io/v1/ for DockerHub),
<your-name> is your Docker username, <your-pword> is your Docker password,
<your-email> is your Docker email:
# kubectl create secret docker-registry regcred --docker-server=<your-registry-server>
--docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
-n riverbed
Note—During DaemonSet startup, the AppInternals agent is installed as a DaemonSet Agent Pod on
each Kubernetes cluster Node, and the Profiler binaries are placed in the hostPath directory to be
shared by user Pods. As a result, the DaemonSet must be running before instrumentation can be
enabled.
4.1. Using the text editor of choice, edit the template file
rvbd-instrumentation-env.properties, and ensure that the values are correct for your
site. The file looks like this:
rvbd-instrumentation-env.properties
# Common properties
AIX_INSTRUMENT_ALL=1
RVBD_AGENT_FILES=1
# JAVA Instrumentation
JAVA_TOOL_OPTIONS=-agentpath:/opt/Panorama/hedzup/mn/lib/librpilj64.so
CORECLR_ENABLE_PROFILING=1
CORECLR_PROFILER={CEBBDDAB-C0A5-4006-9C04-2C3B75DF2F80}
CORECLR_PROFILER_PATH=/opt/Panorama/hedzup/mn/lib/libAwDotNetProf64.so
DOTNET_ADDITIONAL_DEPS=/opt/Panorama/hedzup/mn/install/dotnet/additionalDeps/Riverbed.AppInternals.DotNetCore/
DOTNET_SHARED_STORE=/opt/Panorama/hedzup/mn/install/dotnet/store/
5. Deploy your instrumented application. These steps use Apache Tomcat as an example:
5.1. Ensure that the application pod's network policy allows TCP connections from the application
pod to the following ports on the DaemonSet agent pod: 2111, 7071, 7072, 7073, 7074.
5.2. Understand the variables you need to set in the application yaml file.
Three variables in the example Apache Tomcat application yaml file – RVBD_APP_CONFIG,
RVBD_APP_INSTANCE, and RVBD_CT_tagname – are of particular importance:
• RVBD_APP_CONFIG
If your application has a different config from the default config specified in the
agent-daemonset-env.properties file, specify that config in the application’s yaml file.
For example:
- name: RVBD_APP_CONFIG
value: WeatherServiceConfig
Note—If you change the name of the application's config file or create another config file for the
application in the Analysis Server WebUI, you must change the value of RVBD_APP_CONFIG to the
name of the new config file, and then restart the application.
• RVBD_APP_INSTANCE
An instrumented application has a "default instance" that is computed based on the class name
and the type of application server, which can be non-intuitive, such as
Tomcat__apps_wsse_wss2-D_603. You can use the RVBD_APP_INSTANCE variable to specify a
more descriptive instance for the application (such as WeatherService, for example) so it is
easier to recognize the application instance in the Process List and Instance tab of the WebUI.
For example:
- name: RVBD_APP_INSTANCE
value: WeatherService
Note—Specifying the new instance name in the application yaml file instead of the WebUI, which
is node-specific, ensures that every instance of the application deployed in the Kubernetes cluster
will report the new instance name in the WebUI.
• RVBD_CT_tagname
Creates a “tagname”. The “tagname” shows up as a container tag in the WebUI, displaying the
value you set this variable to. Multiple defines of this variable are permitted. Note that any
underscores in the tagname are parsed to white space.
5.3. Using the text editor of choice, edit the template file tomcat-app.yaml and ensure that the
values are correct for your application.
Each instrumented container needs to mount the volume that contains the instrumentation files.
Specify volumeMounts in the Deployment/spec/template/spec/containers section of the
application yaml file.
The instrumented application must also reference the instrumentation ConfigMap with the
environment variables for instrumentation in the Deployment/spec/template/spec/containers
section of the application yaml file.
tomcat-app.yaml
Note—This example yaml file is for informational purposes only. Cutting and pasting may not
preserve formatting, which could result in the yaml file not validating. Additionally, the contents of
the actual file could change. Therefore, ensure that you use the template file that ships with the agent.
Note—The bold text indicates sections the user may need to change for site-specific instrumentation,
as well as setting RVBD variables.
---
kind: List
apiVersion: v1
metadata:
  name: tomcat-app-service-example
items:
- kind: Service
  apiVersion: v1
  metadata:
    name: tomcat-app-service
  spec:
    selector:
      name: tomcat-app
    ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
    type: NodePort
- kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    labels:
      app: tomcat-app
    name: tomcat-app
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: tomcat-app
    template:
      metadata:
        labels:
          app: tomcat-app
      spec:
        containers:
        - image: docker.io/tomcat
          name: tomcat-app-container
          ports:
          - containerPort: 8080
          envFrom:
          - configMapRef:
              name: rvbd-instrumentation-env
          env:
          - name: RVBD_DSAHOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
          #- name: RVBD_APP_CONFIG
          #  value: new-config
          #- name: RVBD_APP_INSTANCE
          #  value: MyTomcatApp
          volumeMounts:
          - mountPath: /opt/Panorama
            name: rvbd-files
        volumes:
        - hostPath:
            path: /opt/Panorama
          name: rvbd-files
5.4. Deploy your application by entering the following Kubernetes kubectl create command:
# kubectl create -f <YOUR_APPLICATION_YAML_FILE>
6. The DaemonSet should now be deployed, and the AppInternals agent should be harvesting instrumentation
information from the applications on the Kubernetes Nodes. To verify that the DaemonSet is
running, log in to the Analysis Server (default UID/PW: admin/riverbed-default), and view the
Servers and Instances tabs as follows:
7. If you defined RVBD_CT_tagname variables, as described in step 5.2., this is how they appear in the
WebUI:
1. Log in to your local system as a user with access to docker and kubectl.
2. Download and install Helm. For more information, see the Helm documentation.
3.1. Download the agent tar.gz file from the Riverbed Support site, where it is referred to as the
Aternity APM for Kubernetes Kit.
The tar.gz file has a name like rvbd_agent_VERSION.tar.gz and contains the files necessary for
configuring and deploying a DaemonSet with helm and instrumenting an example Apache Tomcat
application.
Note—The example yaml and properties files in this document are for informational purposes only.
Use the template files that ship with the agent to create your site-specific files.
3.2. Decompress and untar the agent tar file by entering the following command:
# tar zxvf rvbd_agent_VERSION.tar.gz
3.3. Change your working directory to the directory that holds the agent tar file, by entering the
following command:
# cd rvbd_agent_VERSION
3.4. Load the agent docker image and push it to a private docker registry, as shown in the example in
“Deploying the AppInternals Agent as a DaemonSet (No Helm Charts)“.
Note—You will need the location of the image in the registry later when you install the Helm Chart
in step 4.5 (the image.name parameter).
4. Configure the Riverbed DaemonSet on the Kubernetes cluster using a Helm Chart by following these
steps:
4.1. Log in to your local system as a user with access to docker, helm, and kubectl.
4.2. (Optional) Although not required, Riverbed recommends that you create a "riverbed" namespace,
which will provide better logical isolation between user applications and Riverbed components.
For information on when to use namespaces and how to create them, see Namespaces in the
Kubernetes documentation.
4.3. (Optional) If the Docker image is from a private repository, a secret needs to be created for pulling
the image in the above yaml file (refer to the imagePullSecrets spec).
To create the secret, enter the following command, where <your-registry-server> is your Private
Docker Registry FQDN (https://index.docker.io/v1/ for DockerHub), <your-name> is your
Docker username, <your-pword> is your Docker password, <your-email> is your Docker email:
# kubectl create secret docker-registry regcred --docker-server=<your-registry-server>
--docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
-n riverbed
Note—During DaemonSet startup, the AppInternals agent is installed as a DaemonSet Agent Pod on
each Kubernetes cluster Node, and the Profiler binaries are placed in the hostPath directory to be
shared by user Pods. As a result, the DaemonSet must be running before instrumentation can be
enabled.
4.4. Determine the release name before installing the Helm Chart. Note that your working directory
should still be the directory that holds the agent tar file: rvbd_agent_VERSION, which also
contains the agent’s Helm Chart.
4.4.4 The installation of a Helm Chart varies slightly between versions 2 and 3 of Helm. Determine
the version of Helm that is installed by entering the following command:
# helm version
• If Helm v2 is installed, use the following command to specify the “release name”:
# helm install --name appint --namespace <Your-namespace> <parameter specification>
./helm
• If Helm v3 is installed, use the following format to specify the “release name”:
# helm install appint -n <Your-namespace> <parameter specification> ./helm
4.5. Once the release name has been specified, install the Helm Chart using a command like the
following, replacing the <variables> with the values specific to your site. Note that this example
uses a minimum configuration with Helm v3 and a SaaS analysis server:
# helm install --debug appint -n riverbed \
--set image.name="<SIDECAR_IMAGE>" \
--set image.pullSecrets[0]=regcred \
--set analysisServer.host="<YOUR_ANALYSIS_SERVER>" \
./helm
You specify each parameter using the --set key=value[,key=value] argument to the helm
install command. For example:
# helm install --name appint --namespace riverbed --set analysisServer.customerID
="<YOUR_CUSTOMER_ID>" ./helm
Alternatively, you can create a yaml file that specifies the values for the parameters, and then pass
the yaml file name as an argument to the helm install command. For example,
# helm install --name appint --namespace riverbed -f values.yaml ./helm
4.6. Verify the Helm Chart installation succeeded by entering the following command:
# helm list -n riverbed
5. (Optional) Add a config map to the Application Namespace to enable Java and .NET Core
instrumentation.
The Agent Daemonset Helm Chart automatically adds a config map (called
appint-instrumentation-env) to the default namespace of your cluster.
However, if your instrumented application is not running in the default namespace, you need to make
the config map available in the application's namespace (<Your App Namespace>) by entering the
following command:
# kubectl get configmap rvbd-instrumentation-env --namespace=default --export -o yaml |
kubectl apply --namespace=<Your App Namespace> -f -
6. Deploy your instrumented application. These steps use Apache Tomcat as an example:
6.1. Ensure that the application pod's network policy allows TCP connections from the application
pod to the following ports on the DaemonSet agent pod: 2111, 7071, 7072, 7073, 7074
6.2. Understand the variables you need to set in the application yaml file.
Three variables in the example Apache Tomcat application yaml file – RVBD_APP_CONFIG,
RVBD_APP_INSTANCE, and RVBD_CT_tagname – are of particular importance:
• RVBD_APP_CONFIG
If your application has a different config from the default config specified in the
agent-daemonset-env.properties file, specify that config in the application’s yaml file.
For example:
- name: RVBD_APP_CONFIG
value: WeatherServiceConfig
Note—If you change the name of the application's config file or create another config file for the
application in the Analysis Server WebUI, you must change the value of RVBD_APP_CONFIG to the
name of the new config file, and then restart the application.
• RVBD_APP_INSTANCE
An instrumented application has a "default instance" that is computed based on the class name
and the type of application server, which can be non-intuitive, such as
Tomcat__apps_wsse_wss2-D_603. You can use the RVBD_APP_INSTANCE variable to specify a
more descriptive instance for the application (such as WeatherService, for example) so it is
easier to recognize the application instance in the Process List and Instance tab of the WebUI.
For example:
- name: RVBD_APP_INSTANCE
value: WeatherService
Note—Specifying the new instance name in the application yaml file instead of the WebUI, which
is node-specific, ensures that every instance of the application deployed in the Kubernetes cluster
will report the new instance name in the WebUI.
• RVBD_CT_tagname
Creates a “tagname”. The “tagname” shows up as a container tag in the WebUI, displaying the
value you set this variable to. Multiple defines of this variable are permitted. Note that any
underscores in the tagname are parsed to white space.
6.3. Using the text editor of choice, edit the template file tomcat-app.yaml and ensure that the
values are correct for your application.
Each instrumented container needs to mount the volume that contains the instrumentation files.
Specify volumeMounts in the Deployment/spec/template/spec/containers section of the
application yaml file.
The instrumented application must also reference the instrumentation ConfigMap with the
environment variables for instrumentation in the Deployment/spec/template/spec/containers
section of the application yaml file.
tomcat-app.yaml
Note—This example yaml file is for informational purposes only. Cutting and pasting may not
preserve formatting, which could result in the yaml file not validating. Additionally, the contents of
the actual file could change. Therefore, ensure that you use the template file that ships with the agent.
Note—The bold text indicates sections the user may need to change for site-specific instrumentation,
as well as setting RVBD variables.
---
kind: List
apiVersion: v1
metadata:
  name: tomcat-app-service-example
items:
- kind: Service
  apiVersion: v1
  metadata:
    name: tomcat-app-service
  spec:
    selector:
      name: tomcat-app
    ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
    type: NodePort
- kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    labels:
      app: tomcat-app
    name: tomcat-app
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: tomcat-app
    template:
      metadata:
        labels:
          app: tomcat-app
      spec:
        containers:
        - image: docker.io/tomcat
          name: tomcat-app-container
          ports:
          - containerPort: 8080
          envFrom:
          - configMapRef:
              name: appint-instrumentation-env
          env:
          - name: RVBD_DSAHOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
          #- name: RVBD_APP_CONFIG
          #  value: new-config
          #- name: RVBD_APP_INSTANCE
          #  value: MyTomcatApp
          volumeMounts:
          - mountPath: /opt/Panorama
            name: rvbd-files
        volumes:
        - hostPath:
            path: /opt/Panorama
          name: rvbd-files
6.4. (Optional) Whereas the Helm Chart automatically deploys your application, you can use the
following Kubernetes kubectl create command to deploy your application manually:
# kubectl create -f <YOUR_APPLICATION_YAML_FILE>
7. The DaemonSet should now be deployed, and the AppInternals agent should be harvesting instrumentation
information from the applications on the Kubernetes Nodes. To verify that the DaemonSet is
running, log in to the Analysis Server (default UID/PW: admin/riverbed-default), and view the
Servers and Instances tabs as follows:
8. If you defined RVBD_CT_tagname variables, as described in step 6.2., this is how they appear in the
WebUI:
1. Follow step 1. and step 2. in the section “Deploying the AppInternals Agent as a DaemonSet (No
Helm Charts)“ to host the new agent image on the Docker Registry.
2. Using the text editor of choice, update the rvbd-agent.yaml file to point to the new version of the
agent. For more information on working with this file, see step 3.5. in “Deploying the AppInternals
Agent as a DaemonSet (No Helm Charts)“.
Note—The ConfigMap does not need to be updated. The same settings can be used for the newer
agent.
3. Initiate the Kubernetes RollingUpdate of the DaemonSet by entering the following command:
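A minimal sketch of such a command, assuming the DaemonSet was created from the edited rvbd-agent.yaml in the riverbed namespace (the exact command may differ at your site):
# kubectl apply -f rvbd-agent.yaml -n riverbed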
For information on how to upgrade Helm Charts, see the Helm documentation.
For VMware Tanzu deployments, the agent is available directly on the VMware Tanzu Network. Download
the agent .pivotal file here:
https://network.pivotal.io/products/riverbed-appinternals
Documentation for deploying the agent and monitoring applications in VMware Tanzu is also hosted on
VMware Tanzu:
https://docs.pivotal.io/partners/riverbed-appinternals/
APM requires that times on agent systems and the analysis server be synchronized through a network time
service (such as Network Time Protocol (NTP) or Windows Time Service).
This section describes time synchronization considerations for the agent.
For the on-premises version of the analysis server, you may also have to specify similar settings in the
“Date/Time Configuration Screen“. (If the analysis server is installed on a dedicated Linux system, do not
use those settings to configure NTP. They will overwrite any NTP configuration already configured for the
system.)
For the Software as a Service (SaaS) version of the analysis server, the analysis server is always
automatically synchronized with an external time service.
UNIX/Linux
Use the chkconfig command to see if the NTP daemon, ntpd, is configured to run when the system boots.
In this example, it is not:
# ./sbin/chkconfig --list ntpd
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
If necessary, use chkconfig to enable ntpd to run when the system boots. The example below accepts the
default, which enables ntpd for run levels 2 through 5:
# ./sbin/chkconfig --list ntpd
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
# ./sbin/chkconfig ntpd on
# ./sbin/chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
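The chkconfig command applies to SysV-style init systems. On distributions that use systemd, a comparable check, enablement, and start would look like the following sketch (this assumes the time service is named ntpd; on some distributions it is chronyd):
# systemctl is-enabled ntpd
# systemctl enable --now ntpd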
Windows
Overview
This chapter describes how to configure the agent system for Java processes to load the APM profiler library
to enable instrumentation. Unless instrumentation is enabled, processes will not load the profiler library
and will not be monitored.
APM monitors Java applications by loading a profiler library into processes when they start as follows:
The profiler library determines if a user configured a process for instrumentation in the “Agent Details
Screen“. Only processes with the Instrument option selected (see “Instrument Option (Restart
Required)“) will be instrumented.
For those processes, monitoring begins once the JVM startup completes. APM generates transaction
trace data according to the settings specified in the corresponding “Define a Configuration Screen“ for
the process.
The following sections explain how to enable and troubleshoot Java instrumentation:
“Enabling Java Instrumentation on Windows“
“Enabling Java Instrumentation on Linux“
“Enabling Java Instrumentation on Solaris“
“Adding -agentpath to Java Command Line“
“Using JidaRegister.exe to Instrument Java Processes“
“Using Environment Variables to Instrument Java Processes“
“Troubleshooting Java Instrumentation Issues“
Note—If you select Enable Instrumentation Automatically, all Java and .NET processes are enabled for
instrumentation. After the installation completes, you then choose which processes to instrument and
monitor in the “Agent List Screen“.
If you do not want all Java and .NET processes enabled for instrumentation, do not select this feature during
the installation. After the installation completes, you can manually configure processes to be instrumented
with the rpictrl utility. For more information, see “Automatic Process Instrumentation on Windows“, as
well as “Adding -agentpath to Java Command Line“ and “Using JidaRegister.exe to Instrument Java
Processes“.
The script has other options for post-installation tasks on Unix-like operating systems. (See “Non-Root
Installs: Run install_root_required.sh“ in the installation documentation.)
Choose the last option to enable instrumentation automatically. For example:
[root@nhv1-rh6-2 install_mn]# ./install_root_required.sh
Change permissions to run the NPM sub-agent (recommended, SaaS analysis
server only)? [y|n]: n
Enable automatic startup on system reboot (recommended)? [y|n]: n
Enable automatic Java instrumentation system-wide (optional)? [y|n]:y
Successfully installed process injection library
Process injection already enabled.
Successfully enabled process injection library
After you enable instrumentation system-wide, select the Java processes you want to instrument (via
the “Instrument Option (Restart Required)“ option in the “Agent Details Screen“) and restart the
applications. After restarting, the applications will load the profiler library and they will appear in the
Agent Details screen as Instrumented.
Enabling system-wide instrumentation does the following:
– Copies or creates symbolic links of the librpil*.so shared objects in
<installdir>/Panorama/hedzup/mn/lib and <installdir>/Panorama/hedzup/mn/lib64 as
follows:
* For Red Hat, SUSE, CentOS: in the /lib and /lib64 system directories, respectively
* For Ubuntu: in the /lib/i386-linux-gnu, /lib/i686-linux-gnu, and /lib/x86_64-linux-gnu
directories, respectively
– Adds the following entry to the system file /etc/ld.so.preload:
* For Red Hat, SUSE, CentOS:
/$LIB/librpil.so
* For Ubuntu:
/lib/${PLATFORM}-linux-gnu/librpil.so
1) Enable instrumentation as described in this section. On Solaris, use one of the following approaches:
– Set the “JAVA_TOOL_OPTIONS“ environment variable.
– “Adding -agentpath to Java Command Line“
2) Restart the application. This will cause its process to appear in the Processes to Instrument list.
3) Once the application appears in the Processes to Instrument list, select the Instrument? option.
Specify the 64-bit or 32-bit version of the library and the APM installation directory for the system:
Because -agentpath is configured for each process, it avoids a potential problem with the Windows
system-wide JAVA_TOOL_OPTIONS variable set by running “Using JidaRegister.exe to Instrument Java
Processes“. With that approach, if there is a mismatch between the 32-bit or 64-bit library specified by
JAVA_TOOL_OPTIONS and any JVM that loads it, the JVM will not start.
However, you must know where to specify the -agentpath options for the application. This varies by
application and may be in a startup script, user interface, or a configuration file.
For example, for Tomcat on Windows, you specify the option in the Tomcat properties dialog, in the Java
tab:
After you specify the -agentpath option, select the Java processes you want to instrument (via the
“Instrument Option (Restart Required)“ option in the “Agent Details Screen“) and restart the applications.
After restarting, the applications will load the profiler library and they will appear in the Agent Details
screen as Instrumented.
Once you run JidaRegister.exe, select the Java processes you want to instrument (via the “Instrument
Option (Restart Required)“ option in the “Agent Details Screen“) and restart the corresponding Windows
services. For example, for Tomcat:
After restarting the services, the applications will load the profiler library and they will appear in the Agent
Details screen as Instrumented. For example, for Tomcat:
When you remove agent software on Windows (see “Removing Agent Software“), the uninstall
program runs JidaRegister.exe with the uninstall argument.
Once you choose the 64-bit or 32-bit library, any JVM that does not match that choice will fail to start.
For example, after choosing the 64-bit library, running a 32-bit JVM fails:
C:\Users\Administrator>java -version
Picked up JAVA_TOOL_OPTIONS: -agentpath:C:\Panorama\hedzup\mn\bin\rpilj64.dll
Error occurred during initialization of VM
Could not find agent library C:\Panorama\hedzup\mn\bin\rpilj64.dll in absolute path, with err
or: Can't load AMD 64-bit .dll on a IA 32-bit platform
LD_PRELOAD
Works on: Linux
The LD_PRELOAD environment variable is set at the process level by modifying the application startup
script or at the user level by modifying a user profile (such as ~/.profile), and loads the profiler library
in all processes within the scope of the variable.
Note: If agent software is removed without removing the option that loads the library from LD_PRELOAD,
Java processes will start but generate errors.
The LD_PRELOAD environment variable specifies an absolute path to shared objects that processes will
load before any other library. Unlike “JAVA_TOOL_OPTIONS“, LD_PRELOAD affects all processes that
start in the scope of the environment variable, not just Java processes.
Typically, you set LD_PRELOAD in the startup script for applications. You could also add it to a user profile
(such as ~/.profile) so that it would automatically affect all processes started by that user.
Set LD_PRELOAD as follows:
export LD_PRELOAD="<installdir>/Panorama/hedzup/mn/\$LIB/librpil.so $LD_PRELOAD"
After you set LD_PRELOAD, select the Java processes you want to instrument (via the “Instrument Option
(Restart Required)“ option in the “Agent Details Screen“) and restart the applications. After restarting, the
applications will load the profiler library and they will appear in the Agent Details screen as Instrumented.
JAVA_TOOL_OPTIONS
Works on: AIX, Linux, Solaris (on Windows, use “Using JidaRegister.exe to Instrument Java Processes“).
Use the JAVA_TOOL_OPTIONS environment variable to specify the -agentpath Java option to load the
APM profiler library. Java processes that are in the scope of the environment variable will add the option
when they start.
Typically, you set JAVA_TOOL_OPTIONS in the startup script for applications. You could also add it to a
user profile (such as ~/.profile) so that it would automatically affect all processes started by that user.
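For example, on Linux with the agent installed in /opt, the setting shown later in this section could be added to a startup script or profile as follows (a sketch; adjust the installation directory for your system):
export JAVA_TOOL_OPTIONS="-agentpath:/opt/Panorama/hedzup/mn/\$LIB/librpilj.so"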
After you set JAVA_TOOL_OPTIONS, select the Java processes you want to instrument (via the
“Instrument Option (Restart Required)“ option in the “Agent Details Screen“) and restart the applications.
After restarting, the applications will load the profiler library and they will appear in the Agent Details
screen as Instrumented.
The specific value you supply for the -agentpath Java option varies depending on the operating system and
whether the application uses a 32-bit or 64-bit JVM. See the “Using JAVA_TOOLS_OPTIONS on AIX“,
“Using JAVA_TOOLS_OPTIONS on Linux“, and “Using JAVA_TOOLS_OPTIONS on Solaris“ sections for
operating-system specific details.
On Unix-like operating systems, JAVA_TOOL_OPTIONS has negative side effects. Because of these,
consider using the approaches described in “Adding -agentpath to Java Command Line“ or
“LD_PRELOAD“ (Linux only) instead. JAVA_TOOL_OPTIONS has these negative side effects:
It generates an additional message to standard error (stderr) about loading the APM library. This may
cause issues in applications that, for example, treat any output on standard error as a sign of a
problem. The following example (on “Using JAVA_TOOLS_OPTIONS on Linux“) uses java -version
and redirects its standard error output to a file (2> show.stderr), then shows the additional
message in the file in bold:
bash-4.1$ echo $JAVA_TOOL_OPTIONS
-agentpath:/opt/Panorama/hedzup/mn/$LIB/librpilj.so
bash-4.1$ java -version 2> show.stderr
bash-4.1$ more show.stderr
Picked up JAVA_TOOL_OPTIONS: -agentpath:/opt/Panorama/hedzup/mn/$LIB/librpilj.so
If the APM agent software is removed without removing the applicable -agentpath option from
JAVA_TOOL_OPTIONS, Java processes within the scope of JAVA_TOOL_OPTIONS will no longer
start.
You can use the ps eww command (see the IBM documentation for details) to verify that JAVA_TOOL_OPTIONS is in
effect for a particular Java process. In the following example, it is not:
bash-4.2# ps -e | grep java # find the process ID:
4915358 - 14:53 java
6488266 - 8:17 java
7078070 - 0:41 java
8650978 - 0:49 java
9699554 - 24:55 java
You can use the strings -a command (see http://linux.die.net/man/1/strings for more detail) to verify that
JAVA_TOOL_OPTIONS is in effect for a particular Java process. For example:
[root@nhx2-rh6-1 ~]# ps -ef | grep java # find the process ID:
root 11230 1 0 Dec04 ? 00:10:08 /usr/bin/java -Djava.util.logging.config.file=/op
t/apache-tomcat-6.0.41/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassL
oaderLogManager -Djava.endorsed.dirs=/opt/apache-tomcat-6.0.41/endorsed -classpath /opt/apache-to
mcat-6.0.41/bin/bootstrap.jar -Dcatalina.base=/opt/apache-tomcat-6.0.41 -Dcatalina.home=/opt/apac
he-tomcat-6.0.41 -Djava.io.tmpdir=/opt/apache-tomcat-6.0.41/temp org.apache.catalina.startup.Boot
strap start
[root@nhx2-rh6-1 ~]# strings -a /proc/11230/environ | grep JAVA_TOOL_OPTIONS
JAVA_TOOL_OPTIONS=-agentpath:/opt/Panorama/hedzup/mn/$LIB/librpilj.so
You can use the pargs -e command (see the Oracle documentation for more detail) to verify that
JAVA_TOOL_OPTIONS is in effect for a particular Java process. In the following example, it is not, and
pargs -e does not return any results:
bash-3.00# ps -ef | grep java # find the process ID:
root 25998 25959 0 10:48:04 pts/4 0:00 grep java
noaccess 1582 1 0 Aug 31 ? 106:51 /usr/java/bin/java -server -Xmx128m -XX:+UsePa
rallelGC -XX:ParallelGCThreads=4
bash-3.00# pargs -e 1582 | grep JAVA_TOOL_OPTIONS
bash-3.00#
When users choose to instrument Java processes (via the “Instrument Option (Restart Required)“ in the
“Agent Details Screen“) for which instrumentation has not been enabled, they will not be instrumented. The
status will incorrectly show as Awaiting Restart, but restarting the affected processes will have no effect
until you enable instrumentation as described here. For example, for a Tomcat application:
You correct this confusing behavior when you enable instrumentation as described here and restart the
application: If the process had already been selected for instrumentation, APM will instrument it and it
will appear in the Agent Details screen as Instrumented. For example, for Tomcat:
If the process had NOT been selected for instrumentation, the status will appear as Running. Selecting
the Instrument? option will change the status to Awaiting Restart. However, because instrumentation
had been enabled, the status is now correct. Restarting the application really will cause APM to
instrument it.
A reliable way to determine if a particular Java process is instrumented is to check on the agent system for
the presence of a JIDA sub-agent log file that corresponds to the process. The log files are in the directory
<installdir>\Panorama\hedzup\mn\log. Their names include the process name and time the process
started. For the Tomcat process in the previous example, the log file name would be similar to the following:
DA-JIDA_Tomcat_TOMCAT7_20150610130750_5908.txt
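For example, on a Linux agent installed in /opt, a quick way to list the JIDA log files is a command like the following (a sketch; filter on your process name as needed):
# ls /opt/Panorama/hedzup/mn/log | grep DA-JIDA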
Overview
Riverbed SteelCentral AppInternals supports instrumentation of applications running on .NET Core and the
.NET Framework.
.NET Framework is only supported on Windows.
.NET Core is supported on both Windows and Linux, and can be deployed as FDD, FDE, or SCD.
For information on specific supported operating systems and versions, see the System Requirements
document on the support page.
The following sections explain how to enable and troubleshoot .NET instrumentation:
“Enabling .NET Core Instrumentation on Windows“
“Enabling .NET Core Instrumentation On Linux“
“Instrumenting Framework-Dependent Deployment (FDD) and Framework Dependent Executables
(FDE) Applications“
“Instrumenting Self-Contained Deployment (SCD) Applications“
“Troubleshooting .NET Instrumentation Issues“
Note—Riverbed's SteelCentral AppInternals Data Adapter feature uses Microsoft's DotNet Core to collect
performance data from Customer's instrumented DotNet Core applications; as such, Microsoft may collect
data from Customer's instrumented DotNet Core applications, and Microsoft's collection and use of such
data is subject to Microsoft's privacy statement located at
http://go.microsoft.com/fwlink/?LinkID=528096.
Note—If you select Enable Instrumentation Automatically, all Java and .NET processes are enabled for
instrumentation. After the installation completes, you then choose which processes to instrument and
monitor in the “Agent List Screen“.
1. Add the following environment variables, which enable .NET Core instrumentation, to the Windows
registry by using the command line, a batch script, or the System settings in the Control Panel.
Note—PANORAMA_HOME is a special APM variable that resolves to the root directory of the agent
installation.
CORECLR_ENABLE_PROFILING=1
CORECLR_PROFILER={CEBBDDAB-C0A5-4006-9C04-2C3B75DF2F80}
CORECLR_PROFILER_PATH=PANORAMA_HOME\Panorama\hedzup\mn\lib\libAwDotNetProf64.so
DOTNET_ADDITIONAL_DEPS=PANORAMA_HOME\Panorama\hedzup\mn\install\dotnet\additionalDeps\Riverbed.AppInternals.DotNetCore
DOTNET_SHARED_STORE=PANORAMA_HOME\Panorama\hedzup\mn\install\dotnet\store
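For example, from an elevated command prompt you could set the variables system-wide with setx /M, as in the following sketch (replace PANORAMA_HOME with the actual agent root directory if your environment does not resolve it automatically):
setx /M CORECLR_ENABLE_PROFILING 1
setx /M CORECLR_PROFILER {CEBBDDAB-C0A5-4006-9C04-2C3B75DF2F80}
setx /M CORECLR_PROFILER_PATH PANORAMA_HOME\Panorama\hedzup\mn\lib\libAwDotNetProf64.so
setx /M DOTNET_ADDITIONAL_DEPS PANORAMA_HOME\Panorama\hedzup\mn\install\dotnet\additionalDeps\Riverbed.AppInternals.DotNetCore
setx /M DOTNET_SHARED_STORE PANORAMA_HOME\Panorama\hedzup\mn\install\dotnet\store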
3. In the “Agent Details Screen“ on the Analysis Server WebUI, select the .NET Core processes you want
to instrument and restart the applications.
Note—After restarting, the applications will load the profiler library and they will appear in the Agent
Details screen as Instrumented. You then choose which processes to instrument and monitor in the
“Agent List Screen“.
Note—If you select Enable Instrumentation Automatically, all Java and .NET processes are enabled for
instrumentation. After the installation completes, you then choose which processes to instrument and
monitor in the “Agent List Screen“.
1. Log in as root or become superuser on the Linux system where the agent was installed.
# cd /opt/Panorama/install_mn
# ./install_root_required.sh
4. You are presented with a series of options. Choose the last option to enable instrumentation
automatically, as in the following example.
[root@nhv1-rh6-2 install_mn]# ./install_root_required.sh
Change permissions to run the NPM sub-agent (recommended, SaaS analysis server only)? [y|n]: n
Enable automatic Java and .NET Core instrumentation system-wide (optional)? [y|n]:y
1. Log in as root or become superuser on the Linux system where the agent was installed.
2. Change your working directory to the root install directory of the agent, typically /opt.
3. Open the shell .profile file for editing, and add the following lines:
Note—${PANORAMA_HOME} is a special APM variable that resolves to the root directory of the agent
installation, typically /opt.
export CORECLR_ENABLE_PROFILING=1
export CORECLR_PROFILER={CEBBDDAB-C0A5-4006-9C04-2C3B75DF2F80}
export CORECLR_PROFILER_PATH=${PANORAMA_HOME}/Panorama/hedzup/mn/lib/libAwDotNetProf64.so
export DOTNET_ADDITIONAL_DEPS=${PANORAMA_HOME}/Panorama/hedzup/mn/install/dotnet/additionalDeps/Riverbed.AppInternals.DotNetCore
export DOTNET_SHARED_STORE=${PANORAMA_HOME}/Panorama/hedzup/mn/install/dotnet/store
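After saving the file, you can confirm that the variables are visible to the shell that launches the application. A minimal check, assuming a Bourne-style shell, might look like the following:
# . ./.profile
# env | grep -E 'CORECLR|DOTNET_'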
6. In the “Agent Details Screen“ on the Analysis Server WebUI, select the .NET Core processes you want
to instrument and restart the applications.
Note—After restarting, the applications will load the profiler library and they will appear in the “Agent
Details Screen“ as Instrumented. You then choose which processes to instrument and monitor in
the “Agent List Screen“.
Restart the application and the dotNet sub-agent will monitor it.
To instrument and monitor SCD applications, add a reference to the dotNet sub-agent .NET Core library.
The NuGet package for this library is distributed as part of the agent installation. Add a reference to the
package, which the installation creates here:
<installdir>\Panorama\hedzup\mn\install\packages\Riverbed.AppInternals.DotNetCore.<agentversion>.nupkg
If necessary, copy the package to a location convenient to your development environment. Add a reference
to the package in the csproj file for the application, as described in Microsoft documentation:
Using Visual Studio
Using the dotnet CLI
For example, using the CLI:
dotnet add package Riverbed.AppInternals.DotNetCore -v 10.15.542 -s c:\panorama\hedzup\mn\install\packages
This command adds the following PackageReference element to the application’s csproj file:
<PackageReference Include="Riverbed.AppInternals.DotNetCore" Version="10.15.542" />
After adding the package reference, restore, publish, and distribute the application as usual. The APM agent
must be installed on systems where the SCD application runs.
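For example, a typical restore and publish from the project directory might look like the following sketch, where linux-x64 is only an illustrative runtime identifier; substitute the identifier for your target platform:
dotnet restore
dotnet publish -c Release -r linux-x64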
The version and build number of the DotNetCore NuGet package must match that of the APM agent
installed on the system where the SCD application runs. If the versions do not match, the application will
not be instrumented.
3. In the Configuration Overrides box, enter the following string to enable SOAP headers when
instrumenting your application:
{ "trace.injectedheaders.webservice": true }
The process data collector now reports average CPU instead of total CPU across all cores, eliminating totals
that were greater than 100%.
Solution:
Run the agent rpictrl enable command:
C:\Users\Administrator>rpictrl enable
Successfully enabled process injection.
1) From the Start menu, click Server Manager (or choose Administrative Tools and then Server
Manager).
2) In the left navigation pane of the Server Manager, expand Roles, and then right-click Web Server (IIS)
and click Add Role Services.
3) The Select Role Services wizard opens. Scroll down to the IIS Management Scripts and Tools option.
If the option does not show (Installed) next to it, select it and click Next.
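If you prefer the command line, the same role service can usually be enabled with DISM. The following is a sketch and assumes the feature name IIS-ManagementScriptingTools applies to your Windows Server version:
dism /online /enable-feature /featurename:IIS-ManagementScriptingTools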
Overview
The following sections explain how to perform upgrades of AppInternals components:
“Upgrading the Analysis Server“
“Upgrading Agents“
“Upgrading Authentication“
Note—Before upgrading the Analysis Server, review the release notes which are available on the support
page.
Important—The upgrade does not preserve the admin password, and resets it to riverbed-default. After
the upgrade completes, you can reset the password the first time you log into the WebUI as admin.
1. Download the upgrade ISO file from the support page. In this example, we will use
appinternals_upgrader_ks_v11.0.1.20.
2. Open the VMware ESXi host that is running the AppInternals version 10 OVA to be upgraded. In this
example, we will be upgrading from 10.21.0.
3. Upload the ISO upgrade image file to a datastore on ESXi, by doing the following:
3.1. In the Navigator panel on the left, click on Storage. The Storage screen appears in the right panel
with the Datastores tab highlighted.
3.2. Select a datastore to hold the ISO upgrade image. In this example, we will use datastore1.
If none exists, create a new datastore by clicking on New Datastore under the Datastore tab, and
then select that datastore to hold the image.
3.3. Once the datastore has been selected, click on the Datastore browser button at the top of the
screen. The Datastore browser appears with the datastore you selected to hold the ISO image
highlighted.
3.4. Upload the ISO upgrade image to the datastore by clicking the Upload button, browsing to the
ISO upgrade file on your system, and then clicking the file.
The ISO upgrade file then uploads to the datastore you selected.
3.5. When the ISO upgrade file finishes uploading, click the Close button to exit the Datastore
browser, which returns you to the main ESXi screen.
4. Shut down the VM that hosts the version 10 OVA you intend to upgrade by selecting Virtual
Machines->VMToBeUpgraded->Power->Power Off in the left-most vertical Navigator pane. Replace
VMToBeUpgraded with the name of the VM that hosts the version 10 OVA.
A warning message appears reminding you that you may lose data if you power off the OVA, and
asking if you want to continue.
6. Create a virtual CD drive to hold the ISO upgrade image by following these steps:
6.1. In the left Navigator panel of the main ESXi window under Virtual Machines, click on the virtual machine you are upgrading.
6.2. At the top of the right panel of the main ESXi window, click the Edit button, which brings up the
Edit Settings window with the Virtual Hardware tab highlighted.
6.3. Under the Virtual Hardware and VM Options tabs, select Add other device->CD/DVD drive.
A New CD/DVD drive option appears at the bottom of the left panel.
6.4. Click on the triangle to the left of the New CD/DVD Drive entry to expand it.
6.5. In the right panel, select Datastore ISO file from the Host device pull-down menu.
6.6. In the left panel of the Datastore browser, click on the storage area you created to hold the ISO
upgrade file in step 3. The ISO upgrade file that you uploaded to that datastore appears in the
right panel.
6.7. In the right panel, click on the ISO upgrade file to highlight it.
6.8. Click the Select button at the bottom right of the DataStore Browser.
The DataStore Browser disappears and you are returned to the Edit Settings menu.
7. From the Edit Settings menu, create a boot delay on the virtual CD drive you created in step 6., by
following these steps:
7.1. Click on the VM Options tab at the top of the Edit Settings window.
7.3. In the right panel, in the milliseconds dialog box under Whenever the virtual machine is powered
on or reset, delay the boot by, enter 20000 to set a 20-second delay.
7.4. In the Choose which firmware should be used to boot the virtual machine section, BIOS should
be selected by default. If not, select it.
7.5. Click the Save button. The Edit Settings screen disappears and the main ESXi window displays.
8. In the Navigator panel, right-click the virtual machine you are upgrading, then select
Power->Power On.
9. To enable the VM to boot from the virtual CD-ROM you created in step 6. that holds the ISO upgrade
image, enter the BIOS on the VM console by pressing the ESCAPE key.
10. Using the down arrow, select CD-ROM Drive and press RETURN to begin the boot sequence.
The AppInternals upgrade message is displayed, and the OVA upgrade begins.
Note—The console displays progress messages as the upgrade continues. The more data that is being migrated, the longer the upgrade takes.
11. The upgrade does not preserve the admin password. When the upgrade completes, log into the WebUI
as admin using riverbed-default as the password. You are immediately presented with a change
password pop-up where you can change the default admin password to what it was before the
upgrade.
12. After the upgrade completes, you must manually migrate SCAS authentication data to the new AAA
authentication by following the steps in “Upgrading Authentication“.
Important—The upgrade does not preserve the admin password, and resets it to riverbed-default. After
the upgrade completes, you can reset the password the first time you log into the WebUI as admin.
1. On the system where the version 10 LRE is to be upgraded, log in as root or become superuser.
3. Download the version 10 to version 11 LRE upgrade tar file from the support page and place it in the
upgrade directory you created in the previous step. In this example, we will use
appint-linux-11.0.0.63.tar
5. Untar the version 10 to version 11 LRE upgrade file, by entering the following command:
# tar -xf appint-linux-11.0.0.63.tar
When the upgrade completes, your system will be rebooted for the changes to take effect.
7. After the upgrade completes, you must manually migrate SCAS authentication data to the new AAA
authentication by following the steps in “Upgrading Authentication“.
Note—AppInternals supports cluster upgrades only from 10.18* or higher Analysis Servers.
1) On the controller node, confirm that the version of the analysis server is 10.18* or higher, as follows:
73D (config) # sh version
Current version: 10.16.2061
2) On all cluster nodes, download the upgrade file, as explained in step 1. through step 4. in “Upgrading
the LRE from Version 10“.
3) On the controller node, shut down cluster services on all nodes with the cluster stop CLI command:
73D (config) # cluster stop
Do you want to stop services on all the cluster nodes? (Yes/No) yes
Stopping cluster services on role parser (port 10280)
Restarting local services on role parser (port 10280)
Stopping cluster services on role indexer (port 10380)
Restarting local services on role indexer (port 10380)
Stopping cluster services on role primary_ui (port 10180)
Restarting local services on role primary_ui (port 10180)
Stopping cluster services on role controller (port 8080)
Restarting local services on role controller (port 8080)
Done
4) On all existing cluster nodes, start the upgrades at the same time by following step 4. through step 6. in
the previous section “Upgrading the LRE from Version 10“.
Note—All nodes of the cluster must be upgraded. A mixed environment is not supported.
8) On all cluster nodes, log in to the web interface. Check the CONFIGURE > System Status screen to
confirm that all icons are green (indicating that the cluster upgraded successfully).
9) If you could not log in to the web interface of any cluster node, or if the System Status screen for any
cluster node did not show all green icons:
a) Reboot ALL the cluster nodes.
b) On the controller node, enter the CLI and run the cluster start CLI command.
10) After the upgrade completes, you must manually migrate SCAS authentication data to the new AAA
authentication by following the steps in “Upgrading Authentication“.
Upgrading Agents
The following sections explain how to upgrade agents that are installed on Windows and Linux/Unix
systems, and in dynamic environments.
Upgrading Authentication
In version 11 of the Analysis Server, SCAS authentication has been replaced with the SteelCentral
Authentication and Authorization (AAA) service.
To preserve authentication after upgrading the Analysis Server from version 10 to version 11, you must
upgrade users defined in SCAS, as well as LDAP group and user mappings and any LDAP servers that
were configured, to AAA by following these steps:
Note—Once the SCAS / LDAP upgrade is complete, any local or LDAP user that could authenticate before
the upgrade will still be able to authenticate; however, every user will need to define a new password, since
passwords cannot be migrated.
1. Complete the upgrade from version 10 to version 11 of the Analysis Server, as explained in “Upgrading
the Analysis Server“.
2. After the upgrade is complete, log into the CLI as admin and access the configuration commands, as
follows:
V11host> enable
V11host(config) #
3. Use the following three CLI configure commands to upgrade SCAS and LDAP to AAA:
• show scas status - Shows whether the data to do a user conversion is available or not. If
successful, the command returns “SCAS data preserved”, and you can then migrate the data and
delete it using the next two commands.
If no SCAS data is found, the command returns “SCAS data deleted”, in which case you must set
up authentication by following the instructions in “Creating Local Accounts And Setting Up LDAP
and SAML Authentication“.
• scas migrate - Migrates users found in SCAS and LDAP to the AAA scheme.
• scas drop - Deletes old SCAS data. This cannot be undone. Deleting old SCAS data saves at least 250
MB on disk, more if there are a lot of users.
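Put together, a typical migration session might look like the following sketch; the prompt and status text follow the examples above:
V11host(config) # show scas status
SCAS data preserved
V11host(config) # scas migrate
V11host(config) # scas drop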
Note—In version 11, AppInternals also supports SAML. For information on configuring SAML, adding
local users, and setting up LDAP authentication, see “Creating Local Accounts And Setting Up LDAP and
SAML Authentication“.