
F5 BIG-IP

Local Traffic Manager


A Practical Guide with Lab Exercises

By Ehsan Hosseini
LinkedIn: linkedin.com/in/ehsan-hosseini-networksecurity
Table of Contents

- About F5

- Load Balancer

- Proxy

- Lab Setup and Network Configuration

- BIG-IP Initial Configurations

- Nodes

- Pool and Pool Members

- Virtual Servers

- Load Balancing Methods

- Priority Activation Group

- Fallback Host

- Monitoring

- NAT

- Profiles

- Policy

- Packet Filter

- TCPdump

- Logging

- High Availability
About
F5 is an American company that introduced its first product, called BIG-IP, in 1997. This product was a Load
Balancer, responsible for distributing traffic load across servers. Over time, the company's activities expanded,
and it began offering products in the field of cybersecurity.
In this document, we will explore the features of this product that provides Load Balancer capabilities.
Another name for F5’s product is Application Delivery Controller, because in addition to Load Balancing, F5
offers several other features such as:

• SSL Offloading
• DDoS Protection
• Web Caching
• Web Application Firewall
• Application Acceleration, and more.
This is why the product is referred to as an Application Delivery Controller: it not only distributes traffic but
also enables control over it.
There is also another concept known as ADN or Application Delivery Network, which we will also explain.
ADN is essentially a set of technologies designed to guarantee:

• Availability
• Optimization
• Security
• Application performance enhancement
ADN consists of several components:
• ADC (Application Delivery Controller): Technologies used to manage the distribution of traffic load.
• Load Balancer: Distributes incoming traffic across servers.
• WAF (Web Application Firewall): Protects web applications against cyberattacks.
• Data Compression: Reduces data size to improve transmission speed and optimize bandwidth usage.
• SSL Offloading: Manages traffic encryption to reduce the processing load on servers.
• Caching: Temporarily stores data to enhance speed and optimize resource usage.
• Traffic Shaping: Prioritizes traffic to ensure faster processing of high-priority data.
It can be said that today, F5 Networks is one of the leading companies in the field of ADN.
Load Balancer
Load Balancing is essentially the act of distributing traffic load across multiple destinations.
For example, imagine we have a single web server that provides services to users. In this case, all user traffic
would be directed to that one server for processing.

This can lead to increased processing load on the server, which may result in application slowdowns or even
service outages.
Now, suppose instead of using a single application instance, we use multiple identical instances and direct user
requests across several servers. The management of traffic distribution to these servers is handled by a
component called a Load Balancer. For better understanding, refer to the diagram below:
In our example, the Load Balancer functions as a Reverse Proxy. We will discuss different types of proxies
later, but in brief, a Reverse Proxy works by receiving user requests and then forwarding them to backend
servers on behalf of the user.
So, generally, we use a Load Balancer when we want to distribute incoming traffic across multiple servers.
This way, even if one of the servers goes offline, the Load Balancer will stop forwarding traffic to it and continue
distributing the load among the remaining servers.
Proxy
By definition, a proxy is something that represents or acts on behalf of someone else.
In the previous example, we mentioned that the Load Balancer acts as a Proxy. This means user requests are
first sent to the Load Balancer, and the recipient of these requests is effectively the Load Balancer’s address.
The Load Balancer then forwards these requests to the servers based on a predefined algorithm.
However, note that when the servers receive these requests, the source address they see will be that of the Load
Balancer, not the original client addresses.

Next, we will examine the different types of proxies.


Forward Proxy
This type of proxy is used when we want to send internal network requests to external destinations via a Proxy
Server.
In this method, clients forward their traffic to the Proxy Server, which then sends the traffic out to the internet
or other external networks. In some cases, the Proxy Server can also be configured to perform caching, which
helps reduce the overall traffic load on the network.

Reverse Proxy
This type of proxy operates in the opposite manner to a Forward Proxy. It receives requests from external users
and then forwards those requests to internal servers. Keep in mind that the response from the internal server is
also first sent to the Reverse Proxy, which then forwards it to the original client.
Half Proxy
In this method, traffic is first sent by the client and received by the proxy server. After initial processing, the
proxy forwards the request to the destination server. Similarly, the server’s response is first received by the
proxy and then forwarded to the client.
However, note that after the initial connection is established, the proxy no longer fully intermediates the traffic
between the client and the server. It typically performs only basic tasks such as NAT, Port Mapping, and similar
functions.

Full Proxy
In this method, the proxy server acts as a complete intermediary within the traffic flow. That is, the client's
request is sent to the proxy’s IP address, where the proxy receives it. A separate session is established between
the client and the proxy server.
After performing any necessary checks, the proxy server creates a new session with the target server and
forwards the client’s request through this new session with the server.
F5 BIG-IP
BIG-IP was the first product of F5, introduced in 1997 as a basic Load Balancer. However, today BIG-IP is a
modular product, and Load Balancing is just one of the features of the LTM (Local Traffic Manager) module
of this device. BIG-IP includes various modules, each with specific functions, including LTM, ASM
(Application Security Manager), GTM (Global Traffic Manager), APM (Access Policy Manager), AFM
(Advanced Firewall Manager), FPS, and others.
When you use the BIG-IP product, whether in software or hardware form, you can activate and use any number
of these modules.
Now, let’s take a deeper look at this product. Like other network devices, BIG-IP runs on a Linux-based
operating system called TMOS (Traffic Management Operating System). You can interact with this operating
system through the Linux Shell (Bash), TMOS Shell (TMSH), or the BIG-IP Configuration Utility (a graphical
user interface).
The Bash shell is used for general Linux administrative tasks such as process management, file access, and
system setting changes. TMSH, on the other hand, is a command-line environment specifically designed for F5
BIG-IP, which allows you to configure BIG-IP and view statistics.
Finally, BIG-IP provides a web-based graphical interface, where most of the configuration and management
tasks are performed.
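As a quick illustration, the commands below show how you might move from Bash into TMSH and inspect the device. This is a sketch only; the objects listed will depend entirely on your own configuration:

```shell
# From the Bash shell, enter the TMOS Shell
tmsh

# Inside TMSH: inspect the system and LTM configuration
show sys version
list ltm virtual
show ltm pool
```

Most TMSH commands follow a verb-module-object pattern (e.g., create ltm pool, modify net vlan), which we will see again in later sections.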
LAB and BIG-IP Basic Setup
In this section, we will perform the basic configuration of LTM and then provide some details about the lab
setup. For running the scenario, we are using the PNET LAB environment, but you can also use other tools like
EVE-NG or GNS3. In the image below, you can see the scenario being used:

Before configuring the devices, we first need to complete the initial setup of BIG-IP. It's important to note that
configuring F5 in a lab environment requires a license, which you must obtain in advance by creating an account
on the official site.
Note: You can search online for the process to acquire a license.
Now, in your lab environment, power up the BIG-IP and enter the CLI (Command Line Interface):

As shown in the image above, the first step is to log in to the device using the default credentials. The default
username is root, and the default password is default. Once the specified values are entered, you will be
prompted to set a new password.
In the lower part of the image, it is indicated that no license has been applied yet, and the BIG-IP device is in
Standalone mode.
Next, type the config command to assign an IP address to the device:
As seen in the image above, in order to use the device's web interface, we need to assign an IP to the device.
Click OK to proceed to the next page:

On the next page, you will be asked to select the IP version; we will be using IPv4.
In the next page, the IP Address will be displayed:

Since BIG-IP has obtained an IP address using DHCP, you can select Yes here, or choose No to manually assign
an IP to the device:
After that, enter the management IP in your browser to access the BIG-IP web panel:
Then, enter the username as admin and the password you set earlier. After this, you will be prompted to set a
new password for the admin user:
Once authentication is completed, you will be directed to the Setup Utility page, where you can perform the
initial device configuration and activate the license:

As shown in the image above, no license is currently active on the device. So, the first step is to proceed with
the Wizard to activate the license. Click Next to continue:
In this section, you need to activate the license. Click Activate, and the following window will appear:
As shown in the image above, the Activation Method will be set to Automatic, which requires the device to
have internet access.
In this LAB, we choose the Manual option. Also, the Base Registration Key obtained from the F5 site should
be entered in the corresponding field. Click Next to continue:

On the next page, copy the text from the Dossier section and click on Click here to access…:
Note that instead of copying the text, you can choose Download/Upload File to upload the Dossier as a file for
activation.
Now, go to the activate.f5.com website and paste the copied text in the Enter Your Dossier field, then click
Next:
In the EULA (End User License Agreement) section, accept all terms and click Next:

Finally, your license will be generated, and you must copy it and apply it to the BIG-IP device:
Next, in the Resource Provisioning section, you can enable the required modules on your device:

As seen in the image above, the name of each module along with the required RAM and Disk space is specified.
Additionally, it shows which modules are active with the current license.
There is also a Provisioning section for selecting and enabling the desired modules. This section has four
options:
• Dedicated: When this option is selected, the allocated resources are reserved solely for that module.
Even if the module does not use them, the full specified resources remain exclusively available to it.
• Nominal: This option assigns the minimum resources to the module, but if additional resources are
needed, it will allocate more. Typically, Nominal is the best choice.
• Minimal: This assigns the least amount of resources to the module, with no additional resources
allocated if needed.
• None: This means the module is disabled.
In the image below, you can see the status of resources on your device:

Additionally, there is a Management (MGMT) option related to the amount of memory allocated for device
management. In the LAB environment, we will use the Small option:

Then, click Next.


On this page, we deal with the Certificate Management of the device, where you can either create a new
Self-Signed Certificate or import an existing certificate.
Click Next to proceed to the Platform section:

Note: Make sure to enter the Hostname in FQDN format.


On this page, you can change the assigned IP address if necessary, as well as adjust the Time Zone, which we
have changed in this section.
At the bottom of the page, there is an option called SSH IP Allow, where you can specify the range of IP
addresses that are allowed to access the management panel:
After clicking Next, we will enter the Network section. Here, you can either choose Next to continue the basic
configuration process and configure the remaining settings, or choose Finished and manually configure the
necessary settings. We will choose Finished here.

Now, the initial configuration is complete, and we can continue discussing the lab setup:
Next, let's proceed with the lab configuration. Looking at the lab diagram, you will see that two interfaces are
connected to a switch. In the BIG-IP structure, one interface should be assigned to the External side, which
handles incoming traffic from external users, and another interface should be assigned to the Internal side,
which connects to the internal servers. One way to do this is to assign an IP address to each interface.
To begin, we will navigate to the Interfaces list:

By clicking the option above, you will see the list of interfaces:
By clicking the name of an interface, you can access its settings:

However, as seen in the image above, there is no IP Address option here. This means that, unlike other devices,
we cannot assign an IP address directly from the interface settings.
Note: In BIG-IP devices, interface names are displayed in a different format. As shown in the image above, the
format follows the slot number followed by a dot and then the interface number (e.g., 1.1 or 1.2). The image
below shows an example of interface numbers in a physical device:

To configure this, we need to define two parameters: VLAN and Self-IP. So, first, we will configure the VLAN.
To do this, go to the VLAN section and create a new VLAN:
As shown in the image above, we specify a name for the VLAN and select the interface we want to use for the
External side. Additionally, there is an option called Tagging, where you can choose between Tagged and
Untagged. Since we are assigning the IP address directly to the physical interface, there is no need for
tagging. In the next example, we will also explore the Tagged option.
Now, you can click Finish and either select Create VLAN again or click Repeat to create another VLAN.
Now, we have created two VLANs: one for Internal and one for External. The VLAN creation and assignment
to interfaces have been completed. The next step is to define the IP Address.
To do this, go to the Self-IP section and define the desired IP address:

As shown in the image below, enter the necessary values:


As seen in the image above, we have set the VLAN to External. The next option is Traffic Group, where we
choose Non-Floating. What's the difference between Floating and Non-Floating? For now, it is enough to know
that if the IP address needs to be shared (e.g., when using HA), it should be Floating. Otherwise, Non-Floating
is the option to choose.

The Internal IP address is also configured similarly:

Another point to consider when defining a Self-IP is the Port Lockdown option:

This is a security feature for accepting management traffic. You can select one of the available options. It is
recommended to choose Allow none for interfaces used for user traffic. Otherwise, choose Allow Custom and
select the necessary services.
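The same VLAN and Self-IP objects can also be created from TMSH. The sketch below uses interface numbers and addresses assumed from this lab; verify the exact syntax against your TMOS version:

```shell
# Untagged VLANs bound directly to the physical interfaces
create net vlan EXTERNAL interfaces add { 1.1 { untagged } }
create net vlan INTERNAL interfaces add { 1.2 { untagged } }

# Self-IPs on each VLAN; "allow-service none" corresponds to
# the Port Lockdown option "Allow None" in the GUI
create net self SELF-EXT address 192.168.51.1/24 vlan EXTERNAL allow-service none
create net self SELF-INT address 192.168.50.1/24 vlan INTERNAL allow-service none
```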
Now, as shown in the image below, if we assign an IP to the E0/0 interface, we will see that Layer 3 connectivity
between the switch and BIG-IP is established via the External interface:

Another method is to use a Virtual Interface. In this case, we create a Port Channel between the switch and
BIG-IP to take advantage of Link Redundancy.
The first step is to create two VLANs on the switch for internal and external communication with BIG-IP:

Then, we will add the two interfaces to the Port Channel:

Next, change the Operational Mode of the interface to Trunk:

Now, we need to define Link Aggregation on the BIG-IP and configure the interfaces.
The first step is to define a Trunk Interface in the Trunk section:
After clicking Create, a window like the one below will appear:

In the image above, you can specify the Load Balancing method for the PortChannel interface, where the default
option is suitable for this case.

So, we created a PortChannel interface of type Trunk, which has two members.
Next, we need to define the necessary VLANs:

In the image above, instead of selecting a physical interface, we selected the created Port Channel, and under
Tagging, we chose Tagged and assigned the TAG value to the External VLAN. The same should be done for
the Internal VLAN.
Now, go to the Self-IP section, create the required IP addresses, and assign them to the relevant VLANs:
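For reference, the Trunk, tagged VLANs, and Self-IPs described above could be created from TMSH roughly as follows. The trunk name, VLAN tags (50/51, matching the switch SVIs), and addresses are assumptions for this lab:

```shell
# Link aggregation: a Trunk with two member interfaces, LACP enabled
create net trunk PO1 interfaces add { 1.1 1.2 } lacp enabled

# Tagged VLANs carried over the trunk
create net vlan EXTERNAL tag 51 interfaces add { PO1 { tagged } }
create net vlan INTERNAL tag 50 interfaces add { PO1 { tagged } }

# Self-IPs assigned to the tagged VLANs
create net self SELF-EXT address 192.168.51.1/24 vlan EXTERNAL allow-service none
create net self SELF-INT address 192.168.50.1/24 vlan INTERNAL allow-service none
```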

If we check the Port Channel status on the switch, everything should be correct:

Now, go to the switch and create two SVI interfaces for VLANs 50 and 51:

Finally, test the connectivity between the switch and BIG-IP:

The next step is to configure the Access Layer Switch that the servers are connected to. The first step is to
configure the ports on the server side:
In our design, the servers' Gateway is configured on the SW-Dist switch. So, the link between the two switches
should be defined as a Trunk:

Now, apply the specified Gateway IP addresses on the servers:

From the servers, we should be able to ping the Gateway IP address configured on the switch:

Next, we need to establish the communication between the servers and BIG-IP. We know that clients will send
requests to BIG-IP, and the response must also be returned to BIG-IP.
On the servers, the Default Gateway is configured, but on the BIG-IP side, we need to specify the servers' range
using Static Routes.
To do this, go to the Route section and click Add:

As shown, a new route with the specified Gateway (the Internal-BIG-IP interface on the switch) is defined:

Similarly, configure the other routes for different networks:

With this configuration, the servers should be able to ping the Internal interface on BIG-IP.

Also, create a Default Route towards the EXTERNAL VLAN:
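In TMSH, the static and default routes above might be entered as shown below. The server network and gateway addresses are hypothetical, since they come from the lab diagram; substitute your own values:

```shell
# Static route to the server network via the internal-side switch SVI
create net route SERVERS network 192.168.60.0/24 gw 192.168.50.2

# Default route out through the EXTERNAL VLAN toward the firewall
create net route DEFAULT network default gw 192.168.51.2
```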


The final step is configuring the Firewall. In this example, the clients from outside the network are in the
10.1.1.0/24 range, and the link between the switch and the firewall uses a 192.168.20.0/30 range:

First, configure the switch interface with the specified IP address:

Then, configure the interfaces connected to the firewall:

On the switch, add a Default Route towards the firewall:

Since web requests should be directed to the BIG-IP External address, this range must be routable by the
firewall:
Finally, the last task is to define the VIP (Virtual IP), or Destination NAT. Clients do not know the internal
network IPs, so we must introduce some external addresses that will represent the services provided by BIG-
IP.

For example, as shown in the image above, if a client sends a request to the address 10.1.1.10, the firewall will
forward it to 192.168.51.10, which corresponds to the EXTERNAL-BIG-IP range. We will discuss more about
these External addresses later.
In fact, what we have done so far is just the network configuration and lab preparation; we have not yet
discussed the LTM features. In the next section, we will proceed with the LTM configuration on BIG-IP.
Nodes
Next, we are going to examine the functionality of the LTM. First, take a look at the image below:

As shown in the image above, the client intends to send traffic to the server. However, since the server is not
directly accessible and can only be reached through the BIG-IP, the client must send the traffic to the LTM.
Therefore, on the External side, addresses must be available to receive traffic, and as illustrated, the packet is
sent from the client’s source address to the External destination address.
When the LTM receives the client’s request, it must forward the traffic on its own behalf to the server. So, the
first step is to change the destination address to that of the server. Also, since the server is unaware of the client’s
address, the source address must be changed to the Internal address.
One important point is that the BIG-IP must have both the External address and the server address defined, so
that if it receives traffic destined for the External address, it can forward it on its own behalf to the server. In
the LTM, server addresses are defined as Nodes.
To define a Node, go to the Local Traffic section and open the Nodes menu:

Currently, no Nodes have been defined, so click on Create to open the window shown below:
This process must be repeated for all servers:
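The equivalent TMSH commands are shown below as a sketch; the node names and server addresses are assumptions, since the actual values appear only in the lab screenshots:

```shell
# Define each backend server as a Node
create ltm node SRV1 address 192.168.60.11
create ltm node SRV2 address 192.168.60.12
```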
Pool and Pool Members
As we mentioned earlier, LTM allows us to use the Load Balancing feature. To use this feature, the servers must
be grouped together so that user traffic can be distributed across the servers in that group. To better understand
this, take a look at the image below:

As shown in the image above, we have placed several servers in a group named Group1. Now, we distribute
client requests among these servers.
Note: The method used for load distribution will be discussed later.
So, the servers, or rather the Nodes we have defined, need to be grouped. This group is referred to
as a Pool. When we create a Pool and add our Nodes to it, we must also specify the service of each Node. A
Node together with its associated service is called a Pool Member.
To define a Pool, go to the Pools section and click on Create:

Once we click Create, a window similar to the one shown below will appear, where we need to enter the required
information:
First, we need to choose a name for the Pool, and in this case, we use Group1. Next, we must specify the Health
Monitor. What is a Health Monitor?
When we define a group of servers in a Pool, traffic is distributed among these servers. But suppose one of the
servers fails for any reason and is unable to provide service. The LTM uses the Health Monitor to detect this
and will stop sending traffic to that Node.
In the Load Balancing Method section, we select the load distribution algorithm, which by default is Round
Robin. Now we must select the Nodes and their respective services. This can be done using an IP address,
FQDN, or by choosing from the existing Node list. Additionally, in the Service Port section, we select the
service provided by the server, and finally, we click Add.

As shown in the image above, three Pools have been defined, and their Health Monitor status is also in a good
state. We will discuss Health Monitors in detail in a separate chapter.
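From TMSH, the same Pool could be created in one command. This is a sketch; the member names assume the hypothetical Nodes from the previous section:

```shell
# Pool with an HTTP health monitor, Round Robin, and two members on port 80
create ltm pool Group1 monitor http load-balancing-mode round-robin members add { SRV1:80 SRV2:80 }
```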
Virtual Server
The next topic is about VS or Virtual Server. This parameter essentially defines an external address for which
the BIG-IP will listen. It also creates a mapping between the external address and the Pool that should receive
the traffic.
So, the client's request will actually be sent to this address, the same external IP we referred to in previous
examples.

To define a VS, go to the LTM menu, select Virtual Servers, and click on Create.

In the first step, we must assign a name to the VS. Then select Standard as the type. In the Source Address field,
we leave it blank because clients can come from any range. The Destination Address is the IP to which client
requests will be sent. In this example, we choose 192.168.51.10. In the Service Port, we enter 80, meaning BIG-
IP will listen on port 80 for IP 192.168.51.10.
In the HTTP Profile (Client) section, we select HTTP. Generally, in LTM terminology, the client is the one
sending the request, and the server or node is the one responding.
In the next section, we can allow incoming traffic for a specific VLAN or all VLANs.

In a lab environment, it's typical to select All VLANs and Tunnels.


The next option is Source Address Translation. This determines whether the LTM should modify the source
address of the traffic and if so, what it should be changed to.

• None: No NAT is done; traffic is sent to the server with the client's original IP.
• Auto Map: NATs the traffic using the LTM's outgoing interface IP.
• SNAT: Allows traffic to be NATed to a specific IP or range.
In this example, we use Auto Map, so traffic will be sent with a source IP of 192.168.50.1.
Finally, in the Default Pool section, we choose which Pool this VS is associated with; in our example, Group1.
This means any traffic to 192.168.51.10 will be distributed among the servers in Group1.
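As a reference, the same Virtual Server could be created from TMSH. The VS name is a hypothetical choice; the destination and pool match this example:

```shell
# Standard Virtual Server listening on the external address, port 80,
# with an HTTP profile, Auto Map source NAT, and Group1 as the default pool
create ltm virtual VS-HTTP destination 192.168.51.10:80 ip-protocol tcp profiles add { http } source-address-translation { type automap } pool Group1
```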
Now that the Virtual Server is created, we are ready to send traffic. However, in real-world scenarios, clients
may not directly access BIG-IP's external addresses. Therefore, on your firewall, you'll need to use a VIP to
change the destination IP.

For example, if a request from PC1 is sent to 10.1.1.10, the firewall rewrites the destination to 192.168.51.10
and forwards it to BIG-IP. BIG-IP then sends the traffic to the servers in the defined Pool using its own source
IP:

From the VS section, selecting Statistics will allow you to view the traffic stats on the Virtual Server or Virtual
IP level:
By clicking Statistic Type, you can examine different types of data. For instance, selecting Pool shows detailed
traffic information per Pool and Pool Member:
Load Balancing Methods
As we’ve seen, when multiple nodes are added to a pool, traffic is distributed among them. By default, the
Round Robin method is used. This section explores various load-balancing algorithms available in LTM.
Load-balancing algorithms are mainly divided into two types: Static and Dynamic.

• Static algorithms use predefined parameters to distribute traffic, without taking real-time server
performance into account.
• Dynamic algorithms consider server performance and resource usage, distributing traffic accordingly.

Round Robin
This default method distributes traffic evenly among all pool members. It's ideal when all servers have similar
resources.

For example, with this method, traffic is evenly distributed among the servers in Group1:
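The behavior can be sketched in a few lines of Python. The server names are placeholders, not the actual lab hosts:

```python
from itertools import cycle

# Hypothetical member names standing in for the Group1 servers
pool = ["srv1", "srv2", "srv3"]

# Round Robin: each new connection goes to the next member in order,
# wrapping around once every member has been used
rr = cycle(pool)
assignments = [next(rr) for _ in range(6)]
print(assignments)  # → ['srv1', 'srv2', 'srv3', 'srv1', 'srv2', 'srv3']
```

Each member receives exactly the same number of connections over time, which is why this method suits pools of identically sized servers.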
Ratio
The Ratio method allows you to manually assign weights to Pool Members, so more traffic is directed to servers
with higher weight values.

This method is useful when servers have different resource capacities. To configure it:

• Set the Ratio value for each Pool Member.


• Change the Load Balancing Method to Ratio.
Set Method to Ratio (Member):

In our example, we set the Ratio for Server1 to 10 and for Server2 to 3. After sending traffic, you can observe:

Traffic is distributed roughly in a 3 to 1 ratio between the two servers, as expected.
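The proportions can be verified with a small Python sketch. Real BIG-IP scheduling interleaves members rather than batching them, but the resulting shares are the same; the names and weights mirror the example above:

```python
# Hypothetical weights mirroring the example: Server1 ratio 10, Server2 ratio 3
ratios = {"srv1": 10, "srv2": 3}

# Simplified weighted scheduling: each member appears in the schedule
# as many times as its ratio value
schedule = [name for name, weight in ratios.items() for _ in range(weight)]

# Fraction of connections each member receives
share = {name: schedule.count(name) / len(schedule) for name in ratios}
print(share)  # srv1 receives 10/13 of the connections, srv2 receives 3/13
```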


Least Connections
The next method is Least Connections, which is a Dynamic algorithm. It directs more traffic to the server with
the fewest current connections. It’s ideal when server capacities are relatively equal.
This method can be applied at either the Pool Member or Node level.
If the number of current connections is equal, it behaves like Round Robin.
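The selection logic amounts to picking the minimum of the current connection counts; a minimal sketch with hypothetical counts:

```python
# Hypothetical snapshot of active connections per member
connections = {"srv1": 4, "srv2": 2, "srv3": 7}

def pick_least_connections(conns):
    """Return the member with the fewest active connections."""
    return min(conns, key=conns.get)

target = pick_least_connections(connections)
connections[target] += 1  # the new connection now counts against that member
print(target)  # → srv2
```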

Fastest
This method sends traffic to the server with the fastest response or connection time at Layer 7. It’s useful when
there is a noticeable difference in server response times.
This method can also be configured at either the Application or Node level:
Observed
This method works similarly to Ratio, but the ratio values are automatically determined by BIG-IP by observing
Layer 4 connections. It can also be applied at the Member or Node level:

Predictive
As the name suggests, Predictive anticipates future server performance based on current and historical response
times. This helps achieve more stable and optimized load distribution.

Dynamic Ratio
In this method, server resources such as CPU, Memory, and processes are monitored by BIG-IP. BIG-IP also
performs traffic distribution based on the information collected from the servers. That is, according to the level
of resource usage by the servers, a Ratio value is assigned to them.
The question here is: how can BIG-IP access information about the server's resources? The solution is to use
SNMP or WMI services.
After monitoring the server resources, a Ratio value is assigned to them, and the server with lower resource
usage will receive a higher Ratio value, resulting in more traffic being sent to that server.
To do this, go to the Monitoring section and click on Create:

In the window that opens, select the Type based on your needs; in our example, it is SNMP. Also specify the
desired Community value:

Then, from the Nodes section, the Default Monitor value must be changed to the desired one:
Finally, the load balancing method should be changed to Dynamic Ratio:
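The TMSH equivalent might look like the sketch below. The monitor name is hypothetical, and the exact attribute names for the SNMP-DCA monitor should be verified against your TMOS version:

```shell
# SNMP-based monitor that collects CPU/memory metrics from the nodes
create ltm monitor snmp-dca SNMP-MON community public

# Derive member ratios automatically from the collected metrics
modify ltm pool Group1 load-balancing-mode dynamic-ratio-member
```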
Load Balancing method based on Pool Member and Node
Earlier, we mentioned that load balancing methods can be applied based on Node or Pool Member. But what is
the difference between these two?
We can add services other than HTTP in the LTM. For example, in Group 1, we can create a Pool of type FTP
in addition to HTTP. Now, when we set the method based on Node, all Connections to that Node are considered
and will affect the traffic distribution to that server. But when the method is of type Member, only the specific
service defined in that Pool is considered, not the other Connections to that server.
For better understanding, refer to the image below:

As shown in the above image, we have two servers in a Pool. If another HTTP Connection is received for this
group, BIG-IP must decide to which Node it should be sent, assuming the method is Least Connections.
If the method is Node-based, Server1 has more total Connections than Server2, so the traffic will be
sent to Server2.
But if the method is Member-based, the number of HTTP Connections to Server1 is lower, and the traffic will be
sent to this server.
Priority Group Activation
This mechanism effectively lets us keep a server in standby mode, so that if other servers fail, it is
brought into operation and traffic is distributed to it as well.
To implement the scenario, we added another server to Group1:

In this example, we want to set the Priority Group value for SRV1 and SRV2 to 10, and set this parameter for
SRV6 to 1:
Now, in the Load Balancing section, we set Priority Group Activation to Less than, with a value of 2:

This means traffic will be distributed among the servers in the highest Priority Group, i.e., SRV1 and SRV2.
If the number of active members drops below two, meaning one of the servers goes offline, the server in the
next-highest Priority Group will be brought online.
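In TMSH, this configuration might look like the sketch below; the member names are the lab's assumed servers:

```shell
# SRV1/SRV2 in priority group 10, the standby SRV6 in group 1
modify ltm pool Group1 members modify { SRV1:80 { priority-group 10 } SRV2:80 { priority-group 10 } SRV6:80 { priority-group 1 } }

# Activate the next priority group when fewer than 2 members are available
modify ltm pool Group1 min-active-members 2
```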

As shown, no traffic is sent to SRV6 unless the number of active servers is less than two. So, for testing, we
take one server offline and expect the traffic to be distributed to SRV6 as well:
Fallback Host
This is another feature of LTM that can be used when needed. Imagine all Pool servers go offline and there is
practically no server to respond to client requests. In this case, client traffic can be redirected to another address.
The Fallback feature can be configured in the HTTP Profile. For this, go to the Profile section:

From the Services section, select HTTP:

Then select the "http" option, which is also the default option:
Then, as shown below, specify the Fallback Host with the desired value:

Note: In this case, due to the lab environment, we used the default profile, but the proper method is to create a
new Profile.
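Creating such a dedicated profile can be sketched in tmsh (the profile name is an assumption):

```shell
# New HTTP profile inheriting from the default, with the VIP of Group2 as fallback
tmsh create ltm profile http http_fallback defaults-from http fallback-host 10.1.1.11
```

The profile would then be attached to the Virtual Server in place of the default http profile.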
Now, we expect that if all servers in Group1 fail, the traffic will be redirected to server 10.1.1.11, which is the
VIP for Group2.
First, we take all Group1 servers offline, and then test the connectivity:
As observed, the redirection was performed correctly, and by examining the client request details, we can see
that the initial request was redirected by BIG-IP to another address.
Monitoring
We’ve already become somewhat familiar with the concept of monitoring in LTM. Using monitoring, it’s
possible to observe the status of a server—this can be at the Node, Pool Member, or Pool level. If a server does
not respond to LTM’s monitoring, it will be considered Offline by the LTM.

Monitoring Methods
In general, monitoring methods are divided into three categories, as shown in the image below:

Simple Monitoring
In this monitoring method, only the Up or Down status of the server is checked—either the server is Reachable
or considered Offline. This method can only monitor a Node and does not support monitoring a Pool Member.

Active Monitoring
This method is based on sending an Application Request. If no Application Response is received from the
server, the status of the Node or Pool Member will be marked as Offline. Traffic will then be sent to another
Node, or if the entire Pool is down, it will be redirected to another Pool.

Passive Monitoring
In this method, BIG-IP does not send any traffic to the server. Instead, it monitors the traffic sent from or to the
server to determine its status. If the server does not respond to a new or existing connection, BIG-IP will
consider that server Offline.
Types of Monitoring
There are two types of monitoring supported by BIG-IP LTM:

Health Monitoring
This type of monitoring is used solely to check whether a server is alive or not. If a server is down, no traffic
will be sent to it. This method simply verifies whether a server is capable of providing service.

Performance Monitor
This monitoring type checks the load and resource consumption of a server. This helps optimize system resource
usage and enables traffic load balancing based on each server’s resource consumption. One method we
previously explored for resource monitoring is SNMP DCA, which uses the SNMP protocol to retrieve
information about server resource usage.
Now, let’s explore various monitoring methods through scenarios.

Address Check Monitors


In this method, packets are sent only to the destination IP address to confirm the target system is alive. This can
be based on ICMP, Gateway ICMP, or TCP Echo. In all of these, only the network connectivity is verified, and
the application behavior is not examined.
For example, we set the Monitors value for a Node to ICMP:
By capturing BIG-IP’s traffic, we will see that ICMP packets are continuously sent from BIG-IP to the server’s
address.

Note that we set the Monitor value to ICMP for a specific Node. If we want to use this method for all Nodes,
we must change the Default Monitor value:

Then use the desired value for the Default Monitors:
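The equivalents in tmsh might look like this (the node address is an assumption from the lab):

```shell
# ICMP monitor on a single node
tmsh modify ltm node 192.168.100.1 monitor icmp

# ICMP as the default monitor for all nodes
tmsh modify ltm default-node-monitor rule icmp
```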


Service Check Monitors
In this method, it is verified whether a service is available. This is done by creating a connection based on IP
and Port. The process is simple: a connection is created using a specified IP and Port, and then it is closed. If a
Pool Member cannot respond to BIG-IP’s connection request and the connection fails, BIG-IP will stop sending
traffic to that server.
To configure this method, go to the Pool section, select the desired Pool Member, and set the Configuration
option to Advanced:

Now, by analyzing the traffic, we will see that a connection is initiated from BIG-IP and then immediately
closed:

Alternatively, instead of an established connection, the Half Open method can be used. This time, we want to
apply it at the Pool level. To do this, we set the Member to Inherit From Pool:
Then in the next step, we set the Health Monitors value to TCP Half Open and observe the result:

You will see that after receiving the SYN+ACK message from the server, BIG-IP resets the connection by
sending a RST packet.
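The same pool-level assignment can be sketched in tmsh using the built-in monitor:

```shell
# Half-open check: SYN out, then RST after the server's SYN+ACK
tmsh modify ltm pool Group1 monitor tcp_half_open
```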
Now we disable the WEB service on the server to see the result:
Content Check Monitors
In this method, the application behavior is directly monitored. That is, the traffic is analyzed at the application layer, and if
the content of the server’s response is correct, LTM allows user traffic to be sent to the web server. Monitoring is not
limited to application data; many parameters can be checked, such as verifying an HTTP Status
Code.
Practically anything available at the application layer can be inspected:

For this, we must define our custom monitoring parameter. Go to the Monitors section and click Create:

In the window that opens, enter a name and set the Type to HTTP:
In this page, the Send String value can even be a specific file.
Then in the Receive String, specify the value the web application is expected to return. This will be set as the
monitoring method for the server:

This setup will work correctly as long as the server’s response contains the specified Receive String.
To test, we change the String value in the index.php file and observe the result:
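A custom HTTP monitor of this kind can be sketched in tmsh (the send and receive strings are assumptions for the lab page):

```shell
# HTTP monitor that requests index.php and expects a known string in the body
tmsh create ltm monitor http http_content defaults-from http \
    send "GET /index.php HTTP/1.1\r\nHost: lab.local\r\nConnection: close\r\n\r\n" \
    recv "Welcome"

# Apply it to the pool
tmsh modify ltm pool Group1 monitor http_content
```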
Test Monitor
The next topic is the Test Monitor feature. Before applying a Monitor, you can test and review it once. To do
this, select your Monitor and go to the Test tab:

BIG-IP Status Icon


The next topic in the monitoring discussion is the types of status icons used in BIG-IP. These are explained
below:

This icon means the system or service is healthy and all tests have been passed successfully.

This indicates that the object is not available and cannot be used.

This represents an unknown status. One situation where this may occur is when no Health Monitor has been
applied to the object.

This icon indicates the object is temporarily unavailable, and may automatically return to green status
without admin intervention.

A black circle means the user has manually disabled an object that was previously available.

This icon indicates the user has disabled an object that was already unavailable.

These are usually seen when the parent object has been disabled.
Monitor Instance
When a Monitoring Object is in use, you can view its instances and their status from the Instances section:

And finally, the status of services can also be reviewed from the Network Map section, and information about
the monitored services can be observed:

By clicking on this icon, a new page opens and displays the status of all Pools along with the Pool Members:

By clicking on each Object, more information about it can be obtained:


SNAT in BIG-IP
Earlier, we briefly explained why NAT should be enabled. In this section, we will review the types of SNAT in
LTM.
There are three main methods that we will examine.

SNAT Automap
In this method, the traffic is NATed to the outbound interface address (Self IP) when sent to the server.
To activate this method, it is enough to select the Auto Map option when configuring the Virtual Server, as
shown below.

SNAT Pool
In this case, we will have an IP Pool consisting of several addresses, and the incoming traffic will be translated
to these IP Pool addresses and sent to the server.
To configure this, first, we must define the Pool List. To do so, go to the Address Translation section and select
SNAT Pool List:

In the above image, click on Create to open a window for creating the Pool List:
As shown in the image above, create your Pool and go to the related Virtual Server:

As you can see in the image above, first set the type to SNAT and then choose the created Pool from the SNAT
Pool option.
Now, if we check the user-side traffic, we will see what source address the traffic is sent to the server with:

SNAT Address
In this method, all traffic is NATed to a single address and sent to the server. This address does not need to be
defined as a Self IP.
This method is similar to using a Pool, but in the Pool, only one address is included, and all traffic is translated
to that same address.
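The SNAT variants can be sketched in tmsh (the virtual server name and pool addresses are assumptions):

```shell
# 1) Automap: translate the client source to a Self IP of the egress VLAN
tmsh modify ltm virtual vs_group1 source-address-translation { type automap }

# 2) SNAT pool: translate to addresses drawn from a defined pool
tmsh create ltm snatpool snat_pool1 members add { 192.168.100.50 192.168.100.51 }
tmsh modify ltm virtual vs_group1 source-address-translation { type snat pool snat_pool1 }
```

A SNAT Address behaves like a SNAT pool containing a single member.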

NAT List
There is one more NAT method, which works exactly like Static NAT on Cisco devices. You may not deal with it
often, but knowing it is useful.
The mechanism works in a way that you can translate an internal address to an external one. For example,
when your server wants to send traffic to the outside, you can use this option and translate the traffic to the
desired address upon exit. Note that this NAT type is bidirectional, and when traffic is sent to the external
address from outside, the traffic will be directly sent to the server.
To configure, select the NAT List option and click on Create:

In the opened window, set the NAT Address to the external address and the Origin Address to the real address
of the server. With this setup, the server’s traffic when exiting is translated to 192.168.51.100, and if a user
sends a request to this address from outside, the traffic will be sent to the 192.168.100.3 server.

Note: If a user sends traffic to the 192.168.51.100 destination from outside, the traffic will be sent directly to
the 192.168.100.3 server without checking the Virtual Server.
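The same static NAT can be sketched in tmsh:

```shell
# Bidirectional 1:1 NAT: server 192.168.100.3 appears externally as 192.168.51.100
tmsh create ltm nat nat_srv3 translation-address 192.168.51.100 originating-address 192.168.100.3
```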
To test this scenario, we will define a new VIP on the firewall as shown below:
Now, if traffic is sent to 10.1.1.100, the firewall will change the destination address to 192.168.51.100 and
finally send it to the BIG-IP. If our configuration is correct, we expect to have traffic from the source address
10.1.1.1 in the internal network to the 192.168.100.3 server:

You can see that the configuration works correctly, and the traffic is sent to the server with the real user address
without checking the Virtual Server or using Auto Map.

SNAT Translation List


This is the last topic in the NAT section. This part displays the configuration details of the defined NATs. That
is, if SNAT has been previously defined, you can configure its details from this section:
Options such as enabling or disabling, enabling ARP Response for a specified address, setting Connection Limit
for the address, and configuring Idle Timeout are available in this section.
Profile
One of the other tools used for better and more accurate network management is the use of Profiles in LTM. In
general, a profile can be considered an object that can manage traffic behavior, which leads to improved
performance and increased network capacity. By default, there are several profiles available in LTM that can
be used, but you can also define your own custom profiles. In the image below, you can see the types of profiles
in LTM:

Persistence Profile
The first type we want to examine is the Persistence Profile. Its mechanism works in a way that client traffic is
always sent to a single server. Every time the client sends a request, that request is sent only to the same server
and is not distributed among other Pool servers.
Note: Keep in mind that the first request will be distributed using the Load Balancing mechanism, and after
that, all client traffic will be directed to the same server.
Persistence can be based on values such as Cookie, Source, Destination, Hash, SSL, and so on.
To view these values, we go to the Profile section and select the Persistence tab:
As an example scenario we want to configure, we will use Cookie. When a client sends traffic to the BIG-IP,
LTM will associate the client's Cookie with the server it sends traffic to. From then on, any traffic sent with that
specific Cookie will only be directed to that same server.
However, Cookie-based Persistence configuration can be done in four ways, which we will review next.

Cookies Insert
In this method, when the Response packet is sent from the server to the client, BIG-IP inserts the Cookie value
and sends it back in the Response packet to the client. If the same client sends another request to the server, it
will use that same Cookie. In the image below, you can see that the Cookie value starts with BIGipServer,
followed by the Pool Name and then the Pool Member, indicating that this Cookie was created by BIG-IP.
Finally, the Cookie expiration time is also specified.

To test this, select the Cookie type and in the new window, set the Cookie Method to HTTP Cookie Insert.
Next, apply it to the Virtual Server by selecting the Cookie value in the Default Persistence Profile tab:

In the image below, you can see how the Cookie value is formed, and from now on, every time the client sends
traffic, the request will be directed to the same first server:
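A sketch of the same configuration in tmsh (profile and virtual server names are assumptions):

```shell
# Cookie persistence profile using the Insert method
tmsh create ltm persistence cookie cookie_insert defaults-from cookie method insert

# Apply it as the default persistence profile of the virtual server
tmsh modify ltm virtual vs_group1 persist replace-all-with { cookie_insert }
```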
Cookie Rewrite
In this method, the Response packet is sent from the server to the client, and the server sets an empty Cookie
value. BIG-IP will then rewrite that Cookie to its own generated Cookie value. This method is illustrated in the
image below:

Cookie Passive
In this method, when the client sends traffic to the server, the server sets the Cookie value and sends it back to
the client. However, BIG-IP does not make any changes to the Cookie value.
Cookie Hash
In this method, the Cookie value sent by the server is hashed by BIG-IP and then sent to the client. In the client’s
future requests, the Cookie value is used, and BIG-IP knows to which Pool Member the traffic should be sent.

Source Address Persistence


In this method, the Source Address can be used for persisting the user’s traffic. This means that as long as traffic
is coming from a specific source address, BIG-IP will send it to the same server.
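A source-address persistence profile can also be sketched in tmsh (names and the timeout value are assumptions):

```shell
# Persist on the client source address for 300 seconds
tmsh create ltm persistence source-addr src_persist defaults-from source_addr timeout 300
tmsh modify ltm virtual vs_group1 persist replace-all-with { src_persist }

# Current persistence records can also be listed from the CLI
tmsh show ltm persistence persist-records
```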
But this time, we want to create a profile ourselves. For this, go to the Persistence section and click on Create:

In the next step, set the Persistence Type to Source Address Affinity and set the Parent Profile to Source_Addr.
If needed, you can choose Custom and modify some values as shown in the image below:
Now, apply your profile in the Virtual Server settings:

We now want to check the result. For this, go to the Statistics section and set the Statistic Type to Persistence
Record:

However, as shown in the image, it is not possible to view this statistic. To enable it, you need to apply the
specified command in the BIG-IP environment:

Now, if you use Refresh again, the output will be displayed as shown below:
SSL Profile
One of the very useful features in BIG-IP is this profile. With it, traffic is transmitted securely over the
internet, and the working mechanism is simple: the Certificate used to secure the traffic is installed on BIG-IP,
and the TLS negotiation takes place between the user and BIG-IP. A secure channel is then established between
the client and BIG-IP to carry the traffic. When the encrypted traffic reaches BIG-IP, it is decrypted and sent
on to the server as Clear Text. For this, we use the Client SSL Profile.
However, we can also send traffic securely between BIG-IP and the server by using the Server SSL Profile.

To run the scenario, the first step is to import your certificate into BIG-IP. For this, we use the SSL Certificate
List in the Certificate Management section:

In this section, you can see the list of certificates already in BIG-IP. To import your file, click on Import and
select your certificate file. However, in a lab environment, we will use the Create option and generate a Self-
Signed certificate.
Then, fill in the required information as shown in the image below:
Next, go to the Profiles section and create a new Client SSL:

In the new window, set the Name and set the Parent Profile to Clientssl. Then, enable the Custom option under
the Certificate Key Chain section:
Next, click on Add to open a window like the one shown below:

Then, specify the created Certificate and Key values as shown above.
Next, go to the Virtual Server and create a new one as shown in the image below:

As shown in the image above, the Service Port is set to 443, which means HTTPS. Then, under the SSL Profile
(Client) section, select your created profile:
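A tmsh sketch of the same profile (certificate, key, and profile names are assumptions; on recent TMOS versions the cert-key-chain syntax is used instead of the plain cert/key properties):

```shell
# Client-side SSL profile using the self-signed certificate created above
tmsh create ltm profile client-ssl lab_clientssl defaults-from clientssl \
    cert lab.crt key lab.key
```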
Now, if traffic is sent from the client to the server over HTTPS, it will be encrypted using the Self-Signed
Certificate, and because of that, the browser will display a warning:

Now, traffic from the client up to BIG-IP will be encrypted:

But after reaching F5 and decrypting the traffic, it will be sent to the server as Clear Text:
HTTP Redirect
So far, we have created one Virtual Server for HTTP traffic and one for HTTPS. In this section, we want to talk
about redirecting HTTP traffic to HTTPS.
Most users, when trying to open a website, do not specify the Schema (i.e., HTTP or HTTPS) but only enter
the site name in the browser. So how does the traffic get sent over HTTPS? This is where the concept of Redirect
comes in.
When the server receives HTTP traffic, it sends an HTTP Response with code 302 or 301 to the client, and in
that response, the Location field is set to the site's address with the HTTPS protocol.

As shown in the image above, the Location field points to https://au.yahoo.com.


To configure this mechanism in BIG-IP, we can use iRule. This document is not intended to explain iRule in
depth, but we will give a brief overview for using it in Redirects.
iRule is a scripting language specific to F5 devices that helps in defining policies, traffic modifications, and
better traffic management.
To define an iRule, go to the iRule section and create a new iRule:

As shown in the image above, there are default values present. We want to create our own rule to redirect traffic:
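A minimal redirect iRule of this kind (equivalent in effect to the built-in _sys_https_redirect rule) looks like:

```tcl
when HTTP_REQUEST {
    # Send the client a 301 to the same host and URI over HTTPS
    HTTP::respond 301 Location "https://[HTTP::host][HTTP::uri]"
}
```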

Now, simply apply the created iRule in your policy. To do this, go to the Virtual Server section and then to the
Resources tab:
Then, as shown above, click on Manage to see the list of available iRules for this Virtual Server:

Now, select the created iRule as shown above, and that's it. From now on, any HTTP traffic matching this
Virtual Server will be redirected to the new HTTPS-based address.
Note that this option already exists in the default iRule and did not require us to define it, but we created it for
understanding purposes.
BIG-IP Policies
In BIG-IP, using LTM Policies allows for more granular control over the traffic on a Virtual Server. The use of
LTM Policies involves three main steps:

• Creating a Draft Policy.


• Publishing the Draft Policy.
• Applying the Published Policy to a Virtual Server.

As shown in the image above, creating a Draft Policy includes two main parts:
• First, creating the policy, which includes the policy name, description, and strategy.
• Second, defining Rules, which essentially consist of Conditions and Actions.
To create a policy, navigate to the LTM > Policies section:

As you can see in the image above, there are currently no Draft or Published policies. So, the first step is to
create a Draft Policy. Click on the Create button to open a window like the one below:
At the start, define a name for the policy and, if necessary, enter a description. The next option is Strategy,
which includes three options: All, First, and Best. These options define how the policy applies to matched
traffic:
• All: All matching rules will be applied.
• First: Only the first matching rule will be applied and executed.
• Best: the single rule that best matches the traffic will be applied.
In this example, we choose First and click on Create Policy.

Next, we must define the required Rules. Click on Create to open the Rule definition window:
The definition of a Rule has two main parameters: first, defining the Condition, and second, defining the Action.
To define each one, you must click on the icon shown in the image. In the first step, we want to create a
Condition:

As shown in the image above, a condition can be defined based on various parameters. In this example, we
intend to define a condition based on the TCP parameter:
In this section, we can specify what we want to inspect up to layer 4. Here, we will also select the “Address”
option, which refers to the IP Address.

In the next section, we will define whether the condition matches or does not match.

In this section, we need to specify the IP value that we want to use in the condition. This value can be defined
manually, in which case we will use the Any Of option. Alternatively, we can use predefined lists, in which case
we will select the in datagroup option. We will discuss Datagroup in more detail later.
In this section, we have used the Any Of option to manually define the value:

As shown in the image above, we enter the desired values in the specified box and then click Add. In the last
part, we need to specify where the defined parameter will be located:

In this example, we want the value to be checked in the Request packet. So, we select the Request option:
As seen in the image above, by clicking on the specified icon, we can create another condition for the Rule.
Now that we have defined the Condition, the next step is to define the desired Action:

In this section, we can define various Actions when traffic matches the condition. In this example, we want that
if the client’s address equals 10.1.1.1, the traffic is reset by LTM and a log is also created. So, as shown below,
we will define two Actions:

The image above shows the output of the created Rule. Keep in mind that we have defined only one Rule, and
we can define multiple Rules in our Policy.
Simply click Create again and repeat the previous process to define a new Rule based on your needs.
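The draft/publish lifecycle can also be sketched in tmsh (the policy name is an assumption; rule syntax varies by TMOS version):

```shell
# Draft policies live under the Drafts folder
tmsh create ltm policy Drafts/block_client strategy first-match

# ... define rules (via the GUI or tmsh) ...

# Publishing moves the draft into the Published Policies list
tmsh publish ltm policy Drafts/block_client
```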
Go back to the Policy List, and you will see that a Draft Policy has been created. The next step is to Publish it.
So, select the Policy and choose the Publish option:

Then, our policy will be placed in the Published Policies section:

The final step is applying the policy, which, as mentioned earlier, must be applied to the Virtual Host. So, in the
desired Virtual Host, go to the Resources tab and select Manage in the Policies section:
Then, we will move the created policy from Available to Enabled:

Now, to test this policy, we will send traffic from the client system to the server and check the output:

You can see that the client request has been Reset by LTM.
Now, we want to check if the Action to create a log is working correctly as well:
The next topic is modifying a Policy. Let's assume we want to make changes to the created policy.

As you can see, you cannot normally modify the Rules once they are created. To do this, you must first create
a Draft of the policy, make the changes in the Draft, and then publish the Draft again.

The final topic of the LTM Policies section relates to Data Group, which we encountered while defining
conditions in the policy. A Data Group can include three different types. You can define String, IP Address, and
Integer values. To define it, select Data Group List from the iRules section:
As shown in the image above, default values exist, but we can define our own Data Group. To do this, click
Create to open a window like the one below:

In the first step, you need to specify a name for your list and then determine its type. However, as shown in the
image above, there is also an option called External File, which allows you to define values in the form of a
text file.
But we selected the Address option, and the following page appears:
In the Address section, specify the IP value, and in the Value section, you can provide an optional description.
Keep in mind that the use of Value is optional.
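In tmsh, an address Data Group can be sketched as follows (the name and record are assumptions):

```shell
# Internal (non-file) data group of type IP with one record
tmsh create ltm data-group internal blocked_ips type ip records add { 10.1.1.1/32 { } }
```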
Now, you can use the created Data Group in your Policy:
Packet Filter
Packet Filter is another mechanism for controlling incoming traffic to LTM. This mechanism is very similar to
the Access Policy in a firewall. Based on the criteria we specify, LTM can block incoming traffic. These criteria
could include Source IP, Destination IP, or Destination Port.
To use this feature, you must first enable it on your device. To do this, go to Network and then Packet Filter:

As shown in the image above, this option is disabled by default. When you enable it, additional options appear,
including the action to take for traffic that does not match any Packet Filter rule:

• Accept means the traffic is accepted.


• Discard means the traffic is dropped without informing the client.
• Reject means the packet is dropped, but the client is notified.
In the Option section, there are two options:
The first option determines whether filtering should also be applied to packets with an existing session.
The second option determines whether an ICMP Error should be sent to the client if the packet is Rejected.
Next, we have Exemptions, as shown in the following image:

• The Always accept ARP option ensures that ARP packets are always received and accepted, regardless
of the Packet Filter rules.
• Always accept important ICMP ensures that critical ICMP packets, such as Echo, Reply, Destination
Unreachable, etc., are received.
• The MAC Address, IP Address, and VLAN options in Exemption allow specific MAC addresses, IPs,
or VLANs to be exempt from Packet Filter checks.
The next step is defining a Packet Filter Rule. As seen in the image below, there are no rules defined by default:

So, select Create to open a window like the one below:


Fill in the Configuration section as shown in the image above. As seen, we set the Action to Reject and the
incoming traffic interface to VLAN_External.

In the Filter Expression section, we define the conditions. In this example, we specified that traffic with IP
protocol, from 10.1.1.1 to 192.168.51.10, and destination port 443 will be rejected by BIG-IP.
Note that our Filter Expression Method is Build Expression, where we created the values using the available
menus. However, we could also have used the Expression Text method to define the parameters in the following
text format:
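As an illustration only (the exact expression is an assumption; packet filter expressions follow tcpdump/BPF syntax), the equivalent of the conditions above would be something like:

```
( ip ) and ( src host 10.1.1.1 ) and ( dst host 192.168.51.10 ) and ( dst port 443 )
```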
Now, we’ll send traffic from the client system to the server, and as shown in the image below, the traffic is
dropped by BIG-IP through the Packet Filter, and an RST message is sent to the client.

To view the created logs, go to System, select Log, and choose Packet Filter to view the logs on the system:

You can also use the CLI to view these logs:


TCP Dump in BIG-IP
In BIG-IP, you can use the tcpdump utility for Packet Capture, for both incoming and outgoing
traffic. Relevant filters can be applied to narrow the capture to the traffic of interest.

For example, we captured all traffic on the VLAN_External interface in the CLI using this tool.
Now, let’s refine the filter. For example, let’s capture traffic from 10.1.1.1 to 192.168.51.10 on destination port
80:
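Such a capture might be taken with a command like the following (the interface name follows the lab topology):

```shell
# Capture on the external VLAN, no name resolution, filtered by source, destination, and port
tcpdump -ni VLAN_External src host 10.1.1.1 and dst host 192.168.51.10 and dst port 80
```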

The output will look like the following image:

If we want more detailed information about the packets, we can use -v (verbose mode):

For further understanding of this tool and the switches available, you can refer to the following address:
https://www.tcpdump.org/manpages/tcpdump.1.html
Logging
One of the important aspects of BIG-IP configuration is Log management. Using logs provides valuable
information about the device’s operation. This data can be useful for better device management,
troubleshooting, and monitoring network traffic. Like other network devices, log storage can be either Local or
Remote.
• Local Logging means that logs are stored on the device’s disk.
• Remote Logging means that logs are sent to a Log Server.
In the Local Logging mechanism, all logs are stored in the /var/log directory and can be viewed.

For example, to view LTM logs, you can open the ltm file, as shown in the image below:

You can also view system and module logs in this way.
Additionally, you can view logs using the tmsh environment. For example, to view LTM logs, use the following
command:
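One way to view the LTM log from the tmsh shell is:

```shell
# Page through the LTM log file from tmsh
tmsh show sys log ltm
```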

To view logs through the GUI, you can go to System > Log to see system and LTM logs:
The next topic is using another server for storing and managing logs. Logs can be sent to a pool of servers. In
BIG-IP, there is a mechanism called HSL (High-Speed Logging), which is preferred by F5 for logging.
To define a log server using the traditional method, go to the Remote Logging section, as shown in the image:

Then, like the image below, save the log server information.
As shown in the image, the Remote IP and Remote Port correspond to the Syslog server’s address and port.
However, the Local IP is optional and specifies the source address when sending logs to the server.
In the following image, from the captured network traffic, you can see that BIG-IP sends the log traffic to the
specified server address.

However, with this method, you cannot specify the severity level of the logs. To enable filtering for log sending,
you need to use HSL. HSL involves the configuration of three parts:
1. Log Filters: Specifies the log levels you want to send.
2. Log Destinations: Specifies the server address to which the logs should be sent.
3. Log Publishers: Groups log destinations into a Log Publisher.
First, define the Syslog servers in the LTM section by specifying the relevant Nodes for the servers.
Then, create a Pool, as shown in the image below:

In the created pool, assign the Syslog servers as the Nodes. Be sure to set the Service Port to the port configured
on the log server.
Next, define the Log Destination in the System section:

Click Create to define a new Log Destination, then set the Type to Remote High-Speed Log and choose the
pool we defined earlier:
Now, define the Destination in the Publisher.

Click Create to open the Publisher configuration window:

Next, define the Log Filter. To do this, click on Log Filter and create a new one:
In the open window, specify the information related to the Log Publisher.

One key point is the Severity section, where you can specify the log level to be sent:

Also, in the Source section, you can specify which source or module the logs will come from.
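The whole HSL chain can be sketched in tmsh (names, addresses, and the port are assumptions; the pool port must match the log server):

```shell
# Pool of syslog servers
tmsh create ltm pool syslog_pool members add { 192.168.100.20:514 }

# High-speed log destination pointing at the pool
tmsh create sys log-config destination remote-high-speed-log hsl_dest pool-name syslog_pool

# Publisher grouping the destination
tmsh create sys log-config publisher hsl_pub destinations add { hsl_dest }

# Filter selecting which logs (by severity) go to the publisher
tmsh create sys log-config filter hsl_filter level info publisher hsl_pub
```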
High Availability
As we have learned so far, BIG-IP is a very important device in the flow of network traffic. If BIG-IP goes
offline, all the services behind it will also become unavailable. In operational environments, it is crucial that
service delivery is never disrupted. Therefore, we strive to implement redundancy at all levels, including
services and infrastructure. The F5 device is no exception, and redundancy must be incorporated into its
implementation. The redundancy solution is known as HA (High Availability). This means placing two devices
side by side, so if the first device goes offline, the second device will take over. This can significantly reduce
downtime.
In BIG-IP, two devices can be configured in an Active/Standby mode. In this mode, one device is actively
responsible for network traffic, while the second device only monitors the first device's status and synchronizes
with the active device. If the first device goes offline, the second device will immediately take over.

However, there is another working mode called Active/Active, where both devices simultaneously manage
network traffic.
One point in HA configuration that you should be aware of is that both devices must have the same OS, license,
and chassis type, and be of the same model.
To configure an HA scenario, we added another BIG-IP device to our lab environment and connected it to the
switch as shown in the image below.

The only configuration done on BIG-IP2 is licensing it. However, the next point is that a link should be
established between the two devices under the name HA Link. This interface is used for synchronization and
checking the "alive" status of the active system.
Before configuring, a Layer 3 link should be used between the two devices. So, we defined a VLAN for these
two links:
We then used an IP range between the two devices:
Another point is that network configuration (such as VLANs and Self IPs) is not included in the synchronization
process between the two devices and must be applied manually on the second device.

As you can see, we applied the internal and external addresses on BIG-IP2 as shown in the image above. To
better understand the addresses, you can refer to the image below:
Now, we want to configure HA. The first parameter that needs to be configured is a shared address between the two devices: the Floating IP we discussed earlier in this document. A floating IP is a single address that always follows whichever device is currently active. One floating address is defined on the Internal side and one on the External side:

The resulting addresses are shown below; note that both BIG-IP devices share the same Floating IP addresses.
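For reference, floating Self IPs can also be created with tmsh. The VLAN names and addresses below are assumptions for this lab:

```
# Floating Self IPs belong to traffic-group-1, the default floating traffic group
tmsh create net self int_float address 192.168.20.10/24 vlan internal traffic-group traffic-group-1
tmsh create net self ext_float address 192.168.10.10/24 vlan external traffic-group traffic-group-1
```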

We had already configured the Node, Pool, and Virtual Server. Now, we proceed to configure HA. If we go to
the Device Overview section, we will see that our BIG-IP device is in Standalone mode.
In the Device section, only the BIG-IP1 device is displayed.

In this section, click on the device name and go to the Config Sync section:

As shown, we need to specify the interface through which we want to synchronize the configuration. We do the
exact same on BIG-IP2.
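The same ConfigSync setting can be applied via tmsh; the device name and HA Self IP below are lab assumptions:

```
# Use the HA Self IP as the ConfigSync address (run the equivalent on each device)
tmsh modify cm device bigip1.lab.local configsync-ip 10.10.10.1
```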
Next, we go to the Failover Network tab, which is related to managing the failover process and configuration
transfer. For transferring information, we can use either Unicast or Multicast methods, and we will use the
Unicast method:
As shown in the image above, click on the Add option to display a window like the one below:

Here, we used the HA and Management interfaces.
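A tmsh equivalent registers both unicast failover addresses on the device object; the IPs and device name are lab assumptions:

```
# Failover heartbeats over the HA Self IP and the management IP
tmsh modify cm device bigip1.lab.local unicast-address { { ip 10.10.10.1 } { ip 192.168.1.245 } }
```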


The next section is the Mirroring configuration. With mirroring enabled, session state is continuously copied from the Active device to the Standby device, so the Standby device already knows the status of current sessions. If the Active device fails, existing sessions do not need to be re-established:

As shown in the image above, we have used the HA interface for this section as well.
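The mirroring address can likewise be set with tmsh; the device name and IP are lab assumptions:

```
# Primary connection-mirroring address on the HA VLAN
tmsh modify cm device bigip1.lab.local mirror-ip 10.10.10.1
```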
The next topic is configuring Device Trust. The two BIG-IP devices must trust each other, and this trust is established via certificates. However, the trust only needs to be initiated from one side: once one device adds the other with valid credentials, the certificate exchange establishes trust on both devices.
Now, go to the Device Trust Members section and add BIG-IP2 there:

In the Add option, enter the IP address of the other device along with the credentials:

If the information entered is correct, the BIG-IP2 information will be displayed as shown in the image below:
Then, click on Device Certificate Matches to display the device name:

Next, click on Add Device:
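The GUI steps above can also be performed in one tmsh command, run on BIG-IP1 only. The peer IP, device name, and credentials below are lab assumptions:

```
# Add BIG-IP2 (here 192.168.1.246) to the Root trust domain as a CA device
tmsh modify cm trust-domain Root ca-devices add { 192.168.1.246 } \
    name bigip2.lab.local username admin password <password>
```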

In the next step, we need to define a Sync-Failover device group, which handles both configuration synchronization and failover between the two devices. To do this, go to the Device Group section and click on Create:
In the new page, specify the members and then define the Sync Type.

The Sync Type can be one of the three options:


• Automatic with Incremental Sync: configuration changes are synchronized to the peer automatically.
• Manual with Incremental Sync: only the changes are synchronized, but the administrator must trigger the sync manually.
• Manual with Full Sync: the entire configuration is transferred each time, triggered manually by the administrator.
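A device group with automatic incremental sync can also be created from tmsh; the group and device names are lab assumptions:

```
# Sync-Failover group containing both devices, with automatic sync
tmsh create cm device-group HA_Group devices add { bigip1.lab.local bigip2.lab.local } \
    type sync-failover auto-sync enabled
```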
Now, if we go back to the Overview section, we will see that both devices are displayed, but the configuration
has not been synced between the two devices because BIG-IP2 does not have any configurations other than the
initial ones.
In the above configuration, we want the configuration from BIG-IP1 to be pushed to this group. In other words,
the same configuration will be applied to BIG-IP2.
If we are on BIG-IP2, we need to select the second option, Pull the most recent configuration, so that the existing
configurations on BIG-IP1 will be applied to the device.
After completing the process, we will see that both devices are now in sync:
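The push and pull operations described above map to the following tmsh commands, assuming the device group is named HA_Group:

```
# On BIG-IP1: push this device's configuration to the group
tmsh run cm config-sync to-group HA_Group

# Or, on BIG-IP2: pull the most recent configuration from the group
tmsh run cm config-sync from-group HA_Group
```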

Another point is that if we want to swap the Active role on the devices, we simply click on the device name in
the Devices section and select Force to Standby at the bottom of the page to perform the role switch.
In our example, BIG-IP2 is the Active device, and we want BIG-IP1 to take over this role.
To do this, go to the Device section, click on BIG-IP2:

Then, click on Force to Standby:

The change will be made, and BIG-IP1 will take over the Active role:
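The same role switch can be triggered from the CLI of the Active unit:

```
# Run on the currently Active unit (BIG-IP2 in this example) to demote it to Standby
tmsh run sys failover standby
```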
