
provide a layer of security, as each virtual machine is isolated from the others, reducing the

risk of system-wide failures and malware spread.

3.6 Network emulator


A network emulator allows for the simulation of network environments on a single system,
mimicking the behavior of real-world network setups. These tools are essential for develop-
ers and researchers to test and analyze network protocols, applications, and configurations
under controlled, replicable conditions.

Mininet
Mininet is a network emulator [27]. It runs a collection of end-hosts, switches, routers, and
links on a single Linux kernel. It uses lightweight virtualization to make a single system look
like a complete network, running the same kernel, system, and user code. A Mininet host
behaves just like a real machine and can run arbitrary programs (including anything that is
installed on the underlying Linux system). The programs you run can send packets through
what seems like a real Ethernet interface, with a given link speed and delay. Packets get pro-
cessed by what looks like a real Ethernet switch, router, or middlebox, with a given amount
of queueing. When two programs, like an iPerf3 client and server, communicate through
Mininet, the measured performance should match that of two (slower) native machines.

Common Open Research Emulator


The Common Open Research Emulator (CORE) is also a network emulator that allows users
to create complex network topologies using a mix of real and virtualized network nodes
[42]. Running on a Linux-based system, CORE leverages virtualization technologies such
as Linux network namespaces and Virtual Ethernet devices to simulate networks within a
single operating system kernel. This approach allows each emulated node in CORE to behave
as if it were a real computer on a network, complete with its own separate process space
and network stack. In CORE, users can configure end-hosts, routers, switches, and various
network links with specific properties such as bandwidth, delay, and loss characteristics.

3.7 Network Performance Measurement


Network performance measurement involves the systematic monitoring and evaluation of
various network parameters such as bandwidth, latency, throughput, and packet loss. Tools
like Wireshark and iPerf3 are commonly used for this purpose.

Wireshark
Wireshark is a tool for capturing, analyzing, and troubleshooting network traffic [10], includ-
ing packets related to HIP. Its protocol dissectors allow for the examination of HIP packets,
aiding in the understanding of their structure and behavior. An investigation into this domain
was conducted by A. Gajdošík and P. Kaňuch, as detailed in their research presented at the
2021 44th International Conference on Telecommunications and Signal Processing [13]. Their
study examines HIP, focusing on the protocol’s packets and identifiers through the lens of
Wireshark. By employing Wireshark’s packet-capturing capabilities, the researchers studied
the vulnerabilities within HIP implementations, aiming to bolster security measures. Wire-
shark is therefore utilized in this thesis to capture and analyze network traffic, aiming to
identify potential misconfigurations in protocols.
ICMP, the Internet Control Message Protocol, is a network protocol primarily used for
diagnostic and control purposes within IP networks [18]. ICMP is used for testing network
reachability through ping operations, which can be observed in Wireshark. An echo request
is sent to a target,
and an echo reply is expected in return. Successful echo replies indicate that the target is both
reachable and responsive. This process not only confirms basic connectivity but also helps in
assessing the latency between two network points.
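This reachability check is easy to automate. The sketch below, a hypothetical illustration rather than code from the thesis, extracts the per-reply round-trip times from ping output with a regular expression; the sample output text is fabricated for the example and assumes the common Linux ping format.

```python
import re

# Fabricated sample output in the common Linux ping format.
ping_output = """\
64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=0.482 ms
64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=0.391 ms
64 bytes from 192.168.1.101: icmp_seq=3 ttl=64 time=0.405 ms
"""

def extract_rtts(output):
    """Return the list of round-trip times (in ms) reported by ping."""
    return [float(m) for m in re.findall(r'time=([\d.]+) ms', output)]

rtts = extract_rtts(ping_output)
print(f"replies: {len(rtts)}, avg RTT: {sum(rtts) / len(rtts):.3f} ms")
```

A missing reply simply produces no `time=` line, so `len(rtts)` compared against the number of requests also gives a crude loss estimate.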

iPerf3
iPerf3 is an open-source tool designed to measure the maximum internet bandwidth or per-
formance of a network link [41]. It facilitates network testing by allowing administrators to
evaluate bandwidth, delay, jitter, and data loss. This is achieved by transmitting data be-
tween two computers on the network, one acting as a server to receive data and the other as
a client to send it. The installation of iPerf3 software on both devices enables the measure-
ment of data flow, from which iPerf3 calculates key network metrics such as bandwidth and
packet loss [1]. iPerf3 is highly configurable, supporting both TCP and UDP protocols, and
allowing adjustments to settings like maximum segment size, buffer length, and TCP win-
dow size. These capabilities not only help in monitoring network performance but also aid
in optimizing the network by tweaking these parameters.
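When iPerf3 runs are scripted, its `-J` flag emits the report as JSON, which is easier to parse than the human-readable output. A minimal sketch follows; the field names track iPerf3's JSON schema, but the document is trimmed to the fields read below and the values are illustrative.

```python
import json

# Trimmed-down, illustrative sample of `iperf3 -c <host> -J` output;
# only the fields used below are shown.
sample = """
{
  "end": {
    "sum_sent":     {"bytes": 52428800, "seconds": 10.0, "bits_per_second": 41943040.0},
    "sum_received": {"bytes": 51380224, "seconds": 10.0, "bits_per_second": 41104179.2}
  }
}
"""

report = json.loads(sample)
# The receiver-side summary is what actually crossed the link.
received = report["end"]["sum_received"]
throughput_mbps = received["bits_per_second"] / 1e6
print(f"receiver-side throughput: {throughput_mbps:.1f} Mbit/s")
```

Parsing JSON avoids the regular-expression scraping needed for the plain-text report.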

In this thesis, we employ iPerf3, an upgrade from the original iPerf tool. Previous research
by Hardin (2023) examines the reliability of network simulation results using Mininet and
iPerf. Their study underscores that, despite their widespread adoption in both academic and
industrial settings, Mininet and iPerf can sometimes produce unreliable outcomes. Particu-
larly concerning is the discovery that reported throughputs may surpass the maximum ca-
pacities of the simulated network links, a discrepancy that becomes stark with the use of large
TCP receive window sizes and is most evident when simulations involve a single emulated
link. This indicates that even basic network configurations can lead to flawed results.

To address these inaccuracies, Hardin suggests adjustments to the TCP window size to align
more closely with the actual link capacity and recommends conducting measurements over
shorter periods to more accurately reflect throughput fluctuations. Additionally, the study
highlights the critical need for meticulous configuration and validation of simulation param-
eters to prevent misleading conclusions. Hardin also advocates for a reassessment of prior
studies utilizing Mininet and iPerf3 in light of these insights.

Buffer Size
Buffer size refers to the amount of data that can be held in memory during the transmission
and reception processes in networking [5]. In the context of TCP, buffers are used at both
the sender and receiver ends. These buffers store data temporarily as it is sent or before it
is processed after being received. The main purpose of buffer size is to handle variability in
data transfer rates and delays in networking environments. Buffers help in smoothing out
these inconsistencies, providing a way to hold data until it can be processed or sent out. The
Bandwidth-Delay Product (BDP) is a measure of how much data can be "in flight" on the
network, the maximum amount of data that should be sent out before an acknowledgment is
received. Ideally, the buffer size should be large enough to hold at least the BDP amount of
data. If the buffer size is too small relative to the BDP, the network cannot be fully utilized,
leading to underutilized bandwidth and reduced throughput.
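The relationship can be made concrete with a small worked example; the link figures below are illustrative, not taken from the thesis setup.

```python
def bandwidth_delay_product(bandwidth_bps, rtt_seconds):
    """BDP in bytes: link capacity times round-trip time, divided by 8 bits/byte."""
    return bandwidth_bps * rtt_seconds / 8

# Illustrative link: 100 Mbit/s capacity with a 10 ms round-trip time.
bdp_bytes = bandwidth_delay_product(100e6, 0.010)
print(f"BDP = {bdp_bytes:.0f} bytes ({bdp_bytes / 1024:.1f} KiB)")
# A send or receive buffer smaller than this cannot keep the link full.
```

Here the sender must be able to keep 125,000 bytes unacknowledged in flight; a smaller buffer forces it to idle while waiting for acknowledgments.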

Buffer size is a fundamental factor in determining network throughput because it directly


affects how data is handled during transmission. Adequate buffer sizing ensures that net-
works can operate efficiently, handle variable speeds and loads, and minimize disruptions
like packet loss and retransmissions. Therefore, optimizing buffer sizes based on network
characteristics and expected traffic patterns is a key task for network administrators aiming
to maximize throughput and performance.

TCP Receive Window (RWND)


The Receive Window (RWND) is a crucial component in TCP used for controlling flow and
managing congestion [1]. It essentially determines the amount of data that can be sent to a
receiver before an acknowledgment is required. This window size is dynamically adjusted
by the receiver based on its available buffer space and communicated to the sender using
the window size field in the TCP header. The optimal size of the RWND is influenced by
the Bandwidth-Delay Product, which is the product of the link capacity (bandwidth) and
the round-trip time (RTT). The BDP represents the maximum amount of data that can be "in
flight" on the network without being acknowledged. If the RWND is set too low compared to
the BDP, it can severely limit the throughput, as the sender will need to wait for acknowledg-
ments more frequently before sending more data. To maximize TCP throughput, the RWND
should be large enough to accommodate a full bandwidth-delay product.
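On Linux, the buffer that backs the advertised receive window can be requested per socket through the standard `SO_RCVBUF` option. A minimal sketch, where the target value is the hypothetical 125,000-byte BDP from above; note that the kernel typically doubles the requested size for bookkeeping overhead and caps it at `net.core.rmem_max`.

```python
import socket

# Hypothetical BDP (100 Mbit/s x 10 ms RTT) used to size the receive buffer
# so the advertised window can cover a full bandwidth-delay product.
target = 125_000

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, target)

# The kernel reports the effective (possibly doubled or capped) size.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested {target} bytes, effective {effective} bytes")
sock.close()
```

iPerf3's `-w` option performs essentially this call on its test sockets, which is why the window-size sweeps described later act directly on the receive buffer.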
4 Method

To gain a comprehensive understanding of the proposed version of HIPv2, a thorough exam-
ination of relevant resources such as RFCs and prior research works was conducted, including
the report authored by Gustaf Bodemar et al. in the fall of 2023 [6].

In striving to enhance the low throughput of the current version, a deeper understanding
of both PyHIP and Mininet was deemed necessary. While the Mininet manual [27] served
as a useful resource for grasping the tool’s fundamentals, it was more challenging to locate
specific instructions for implementing certain features not addressed in the manual.

4.1 Literature Review and Requirements


Before the set-up and testing, a thorough review of the HIP-VPLS documentation, including
relevant RFCs and Internet drafts, was required. This section covers the necessary tool
installations, technologies, and tool operation procedures.

1. Hardware requirements: Computer with at least 8 GB of RAM, Apple Silicon (M1) Mac-
book

2. Software: Mininet version 2.3.0

3. Configuration: UTM settings: 2 CPUs, 4096 MB RAM, Ubuntu 22.04.3, Mininet: cus-
tom topology with required switches and hosts

4.2 Setting up Mininet


Firstly the selection of Mininet over CORE for the project was primarily influenced by the
existing familiarity with Mininet. This pre-existing knowledge allowed for an expedited de-
ployment and efficient management of the network simulation, crucially reducing develop-
mental timelines. Such familiarity not only streamlined the integration of specific require-
ments into Mininet but also enhanced productivity and effectiveness in achieving the desired
outcomes of the project. To set up a virtual network environment for network experiments

and research, it was decided to use Mininet within UTM as a virtual machine (see Sec-
tion 3.5 for an explanation). UTM uses qemu [43], an emulator, to enable virtualization on
Apple devices. Because the KVM (Kernel-based Virtual Machine) accelerator is not available
on this platform, UTM instead employs the TCG (Tiny Code Generator) accelerator. Once the
virtual machine was set up and a suitable
Linux distribution was installed, we proceeded with the installation of Mininet. This in-
volved downloading the Mininet package and running the installation commands, ensuring
that all dependencies were correctly installed.

4.3 Integrating HIP-VPLS in Mininet


First, the environment is prepared by updating the system and installing Python along with
Python3-pip. This setup enables the installation of necessary Python libraries. Next, the
required libraries are installed, including pyCryptoDome for cryptographic frameworks, in-
terfaces for network interface management, and numpy for numerical operations. With the
environment ready, the HIP-VPLS repository is downloaded from GitHub [21]. This repos-
itory contains the Python implementation of HIPv2, which is integrated with Mininet. For
details on the setup, see Code Listing 4.1.
sudo apt-get update
sudo apt-get install python3
sudo apt-get install python3-pip

sudo pip3 install pycryptodome
sudo pip3 install interfaces
sudo pip3 install numpy

sudo apt-get install openvswitch-switch
sudo apt-get install openvswitch-testcontroller
sudo ln -s /usr/bin/ovs-testcontroller /usr/bin/controller
killall ovs-testcontroller

git clone https://github.com/mininet/mininet.git
sudo PYTHON=python3 mininet/util/install.sh -a
git clone https://github.com/strangebit-io/hip-vpls.git
Code Listing 4.1: Code for setting up and integrating HIP-VPLS in Mininet.

Formal verification
Verifying HIP establishment is crucial, as it ensures secure and authenticated communication
between host and client, which is fundamental to the objectives of this research. It is essential
to note that the framework itself will not flag errors within the code when there is no real
connection between client and host. Hence, the use of Wireshark becomes important for
formal verification throughout the integration, testing, and optimization of HIP-VPLS. To
achieve the first part of the validation, Wireshark was employed to conduct a thorough
analysis of the ICMP protocol within the HIP-VPLS network.

This analysis was critical for identifying and verifying the operational responsiveness of the
network. The ICMP traffic ensures that communication between hosts is verifiable. To
achieve the second part of the validation, Wireshark was deployed to validate the HIP com-
munication channels established between hosts. This validation is verified by observing HIP
followed by an Encapsulating Security Payload (ESP) in Wireshark captures, see Section 3.2,
ensuring the encapsulation and encryption of data packets.
4.4 Testing of current PyHIP implementation
For this thesis, the HIP-VPLS topology was employed using a Python script within the
Mininet environment. This script establishes a network topology that includes routers
(r1, r2, r3, r4), switches (s1, s2, s3, s4, s5), and hosts (h1, h2, h3, h4). Each router is connected
to a switch, and each switch links to a single host. Since the investigation focuses primarily
on two hosts, configurations associated with other hosts, switches, and routers were either
commented out or removed to tailor the setup to the specific scenario under study. Figure 4.1
illustrates the adapted topology.

[Diagram: hosts H1 (192.168.1.100), H2 (192.168.1.101), H3 (192.168.1.102), and
H4 (192.168.1.103) connect through switches S1 and S2 to routers R1 and R2, which are
linked over the HIP interface.]

Figure 4.1: Mininet topology with HIP connection.

Initially, the script included commands to launch a switching daemon on each router (e.g.,
info(net[’r1’].cmd(’cd router1 && python3 switchd.py &’))), which were
crucial for router operation. However, these commands did not function as expected, leading
to these lines being commented out in the initial setup phase. This decision was a pragmatic
adjustment made to troubleshoot and optimize the script’s functionality during implementation
testing.

To address the research questions, the effectiveness of the current implementation was ex-
plored through a systematic approach to performance testing using the network testing tool,
iPerf3. The objective is to assess how varying buffer lengths and TCP window sizes influence
network throughput. Iterative testing with iPerf3 on the initial HIP-VPLS was conducted to
identify configurations that could later optimize throughput.

The Python script was developed within the HIP-VPLS setup to automate the process of ad-
justing and testing various combinations of iPerf3’s buffer length (-l, 1000 to 5000 bytes)
and TCP window size (-w, 8000 to 1,024,000 bytes) parameters. This script initializes
a Mininet instance with a basic network topology consisting of two hosts connected via a
switch. It then starts an iPerf3 server on one host and runs an iPerf3 client on the other host
with varying parameters, capturing the resulting throughput for each test. The initial version
of the script displays test results in the terminal.

Recognizing the need for a more structured analysis, the script was enhanced to log test re-
sults to a CSV file, enabling detailed data analysis and visualization. Additionally, a plotting
script was introduced to visualize the relationship between buffer length and throughput,
providing visual feedback on how these parameters affect network performance.
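Once the results are in CSV form, the best-performing configuration can be recovered with the standard library alone. The sketch below is illustrative: the header matches the script's output format, but the excerpt rows and values are invented for the example.

```python
import csv
import io

# Illustrative excerpt of the CSV produced by the test script; the header
# matches the script's "BufferLength,TCPWindowSize,Throughput" format.
sample_csv = """BufferLength,TCPWindowSize,Throughput
2000,8000,28.2
3000,128000,41.1
3000,256000,39.8
"""

# Skip rows where iPerf3 produced no throughput figure ("N/A").
rows = [r for r in csv.DictReader(io.StringIO(sample_csv))
        if r["Throughput"] != "N/A"]
best = max(rows, key=lambda r: float(r["Throughput"]))
print(f"best: -l {best['BufferLength']} -w {best['TCPWindowSize']} "
      f"at {best['Throughput']} Mbits/sec")
```

The same structure feeds directly into a plotting script, with one line per buffer length and the window size on the x-axis.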

4.5 Throughput optimization


In the pursuit of optimizing throughput, strategic adjustments were made to the crypto-
graphic framework to enhance efficiency. Initially relying on the PyCryptoDome library, the
potential for further throughput improvements was recognized by leveraging the advanced
capabilities offered by the cryptography library.

The process of changing cryptographic libraries began with SHA-1 and progressed through
SHA-256 and SHA-384 to explore enhanced throughput capabilities. This step-by-step transition
was aimed at methodically evaluating the impact of each algorithm on network performance.
Drawing on insights from our previous testing of combinations of TCP window sizes and
buffer lengths, we could then detect possible throughput improvements.
5 Results

The results presented in this section showcase the scripts written for testing, together with
their output, which is presented graphically to show what the tool produces when run.

5.1 Integrating HIP-VPLS in Mininet


Wireshark was employed to capture and analyze the network traffic, providing insights into
how HIP-VPLS components are configured and interact within the network. Screenshots
from Wireshark illustrate the dynamic behavior of network traffic and highlight how virtual
hosts are interconnected within the HIP-VPLS framework. This setup ensures that communi-
cation between hosts is not only secure but also verifiable, adhering to the protocols outlined
in Section 3.2 and 3.7.

The ICMP traces, as shown in Figure 5.1, demonstrated successful echo requests and replies
between hosts, confirming the network’s operational responsiveness.

Figure 5.1: ICMP trace between two hosts within the Mininet environment, demonstrating
successful echo requests and replies. This trace confirms the operational responsiveness of
the network under the HIP-VPLS configuration.

The ESP traces, as shown in Figure 5.2, provided evidence of HIP-VPLS’s ability to maintain
continuous and secure communication between nodes in this virtualized environment. The
ESP trace confirms the operational integrity and security mechanisms implemented within
the HIP-VPLS configuration.

Figure 5.2: ESP trace between two hosts within the Mininet environment, demonstrating
successful encapsulation and encryption of data packets.

5.2 Testing of current PyHIP implementation


The modified script served as an automated method to configure a network environment for
performance testing, significantly reducing the setup time compared to manual configuration;
see Code Listing 5.1. Commands automate the deployment of switchd.py on routers r1
and r2. Initially, there could be delays in establishing a HIP connection between the host and
the network. To address this, a connectivity check was integrated into the script (see code
row 11 in Code Listing 5.1), verifying the network links before proceeding with further testing.

The script initiates by starting an iPerf3 server on host h2, which is critical for measuring the
bandwidth between network points. Then the script automates network performance testing
by using the iPerf3 tool to evaluate various configurations of buffer lengths and TCP window
sizes. It starts by writing column headers to an output file. Within nested loops, it tests all
combinations of buffer lengths and TCP window sizes. For each configuration, it logs the test
parameters, executes the iPerf3 command on host h1 to send data to host h2, and logs the result.
The throughput measured in Mbits/sec is extracted from the iPerf3 output and recorded in
the file. If no throughput data is found, it records N/A. After completing the tests, the script
sends a command to host h2 to terminate the iPerf3 server. This structured approach ensures
detailed logging of network performance under varying conditions and data collection for
further analysis.
1
2  ########################################
3  ##### code from original setup here ####
4  ########################################
5      self.addLink(h, s)
6  def iperfTest(net, output_file, l_range=(1000, 5001, 1000), w_range=(8000, 16000, 32000, 64000, 128000, 256000, 512000, 1024000)):
7
8      # Start the iperf server on h2
9      net['r1'].cmd('cd router1 && python3 switchd.py &')
10     net['r2'].cmd('cd router2 && python3 switchd.py &')
11     print("Checking connectivity from h1 to h2")
12     pingResult = net['h1'].cmd(f"ping -c 40 {net['h2'].IP()}")
13     print(pingResult)
14     net['h2'].cmd('iperf3 -s &')
15
16     with open(output_file, 'w') as f:
17         f.write("BufferLength,TCPWindowSize,Throughput\n")
18
19         for l_val in range(*l_range):
20             for w_val in w_range:
21                 info(f"Testing with -l {l_val} and -w {w_val}\n")
22                 result = net['h1'].cmd(f"iperf3 -c {net['h2'].IP()} -f m -b 1000M -l {l_val} -w {w_val}")
23                 info(f"Result: {result}\n")
24                 matches = re.findall(r'\s([\d\.]+)\sMbits/sec', result)
25                 if matches:
26                     throughput = matches[-1]
27                     f.write(f"{l_val},{w_val},{throughput}\n")
28                 else:
29                     f.write(f"{l_val},{w_val},N/A\n")
30
31     net['h2'].cmd('kill %iperf3')
32 from time import sleep
33 ########################################
34 ##### code from original setup here ####
35 ########################################
36     iperfTest(net, "iperf_results.csv")  # Running iperfTest within the network
37     CLI(net)
38     net.stop()
39
40 if __name__ == '__main__':
41     setLogLevel('info')
42     run()
Code Listing 5.1: Code for testing of the current implementation.

The results of this testing are illustrated in Figure 5.3. The graph represents each buffer
length with a distinct line, visually outlining the relationship between TCP window size and
throughput performance. Notably, the lowest recorded throughput is 28.2 Mbits/sec, occur-
ring at a buffer length of 2000 bytes and a TCP window size of 8000 bytes. Conversely, the
highest throughput is 41.1 Mbits/sec, observed at a buffer length of 3000 bytes and a TCP
window size of 128000 bytes. From the results depicted in Figure 5.3, we establish a baseline
that will be used for future comparisons. The combination of a buffer length of 3000
bytes and a TCP window size of 128000 bytes has therefore been selected as the iPerf3 test
parameters for the optimization section of this thesis.

5.3 Throughput optimization


The modified version of the PyHIP framework has been uploaded to GitHub [9]. Initially,
when transitioning the library from PyCryptoDome to Cryptography, it was crucial to import
the necessary modules to ensure that the code could handle cryptographic operations, see
Code Listing 5.2.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.utils import Prehashed
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.backends import default_backend
Code Listing 5.2: Code for importing the necessary modules for cryptography operations.

Figure 5.3: Performance Analysis of the Existing Implementation of PyHIP.

In the initial versions of the cryptographic algorithm classes for SHA-1, SHA-256(ECD), and
SHA-384, the structure of the signing (sign) and verification (verify) methods was consistent
across all instances. Each class began by generating a hash of the input data, which was then
used in conjunction with a private or public key to either sign or verify the data. Exception
handling was incorporated within the verification methods to address cases where the verifi-
cation failed, thereby signaling potential compromises in data integrity or invalid signatures.
See Figure 5.4 for an illustration of how each unmodified version of the SHA family appeared
before the change.

Beyond the change from the PyCryptoDome library to the cryptography library itself, the re-
vised versions of the cryptographic classes for SHA-1, SHA-256(ECD), and SHA-384 demon-
strate significant changes from their original counterparts, particularly in their approach to
cryptographic operations and error handling, see Figure 5.5. They all use the Prehashed class
to indicate to the ECDSA function that the data provided is already hashed. This standard-
izes how the ECDSA signatures are applied across different hash functions. The verification
methods in each class handle exceptions similarly by catching InvalidSignature to denote a
verification failure, which ensures error handling across different cryptographic standards.
The results of the changed algorithms can be seen in the graphical representation in
Figure 5.6, which provides a visualization of the change in throughput achieved by transitioning
from PyCryptoDome to the cryptography library. The initial throughput of 41.1 Mbps that
was established by the baseline iPerf3 test (see Figure 5.3) serves as a benchmark for the com-
parisons. Upon modifying the SHA libraries, a change in throughput could be noticed, with
the combination of SHA-384 and SHA-256 showing the most significant improvement of 59
Mbps.

class ECDSASHA1Signature(Signature):
    ALG_ID = 0x9;

    def __init__(self, key):
        self.key = key;

    def sign(self, data):
        h = SHA1.new(data)
        signer = DSS.new(self.key, 'fips-186-3')
        return signer.sign(h);

    def verify(self, sig, data):
        h = SHA1.new(data)
        verifier = DSS.new(self.key, 'fips-186-3')
        try:
            verifier.verify(h, bytes(sig))
            return True
        except ValueError as e:
            return False

Figure 5.4: Unmodified PyHIP version of SHA-1 using the PyCryptoDome library.

class ECDSASHA1Signature:
    def __init__(self, key):
        self.key = key;

    def sign(self, data):
        digest = hashes.Hash(hashes.SHA1(), backend=default_backend())
        digest.update(data)
        hash = digest.finalize()

        signature = self.key.sign(
            hash,
            ec.ECDSA(Prehashed(hashes.SHA1()))
        )
        return signature

    def verify(self, signature, data):
        digest = hashes.Hash(hashes.SHA1(), backend=default_backend())
        digest.update(data)
        hash = digest.finalize()
        try:
            self.key.verify(
                signature,
                hash,
                ec.ECDSA(Prehashed(hashes.SHA1()))
            )
            return True
        except InvalidSignature:
            return False

Figure 5.5: Modified PyHIP version of SHA-1 using the cryptography library.

To more clearly highlight the performance improvements over the baseline, we have con-
structed a table that displays the percentage increase in throughput. This approach effectively
demonstrates the significant impact resulting from the transition from the PyCryptoDome li-
brary to the Cryptography library. Table 5.1 provides a direct comparison, illustrating the
enhancements achieved through this change.
[Bar chart: Throughput variations across different cryptographic algorithm modifications
(Mbits/sec): Baseline of PyHIP 41.1; SHA-1 56.9; SHA-256(ECD) 55.2; SHA-1 + SHA-256(ECD)
57.8; SHA-384 57.4; SHA-384 + SHA-1 58.4; SHA-384 + SHA-1 + SHA-256 57.2;
SHA-384 + SHA-256(ECD) 59.0.]

Figure 5.6: Figure illustrating the results of iPerf3 performance throughput measurements for
various cryptographic algorithms, following the transition from the PyCryptoDome library
to the Cryptography library.

Cryptographic Algorithm         Percentage Increase (%)
Baseline of PyHIP               (reference)
SHA-1                           38.44%
SHA-256(ECD)                    34.31%
SHA-1 + SHA-256(ECD)            40.63%
SHA-384                         39.66%
SHA-384 + SHA-1                 42.09%
SHA-384 + SHA-1 + SHA-256       39.17%
SHA-384 + SHA-256(ECD)          43.55%

Table 5.1: Percentage increase in throughput for various cryptographic algorithms compared
to the baseline ("PyHIP baseline" algorithm, 41.1 Mbits/sec)
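The percentage figures follow from the reported throughput measurements by the standard relative-increase formula, as a quick check shows (the 41.1 and 59.0 Mbits/sec values are taken from the results above):

```python
def percent_increase(baseline, measured):
    """Relative throughput gain over the baseline, in percent."""
    return (measured - baseline) / baseline * 100

baseline_mbps = 41.1  # PyHIP baseline throughput
best_mbps = 59.0      # SHA-384 + SHA-256(ECD) after the library change

gain = percent_increase(baseline_mbps, best_mbps)
print(f"improvement: {gain:.2f}%")  # prints: improvement: 43.55%
```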
6 Discussion

In this section, we analyze and discuss the implications of the experimental findings, criti-
cally examining their trustworthiness and linking them to established theories and the re-
search questions posed at the beginning of this thesis. It is important to note that while the
findings in Section 5 are not subjected to statistical tests to determine significance, they can
provide indicative trends that suggest differences in performance. The subsequent analysis
will interpret these trends, focusing on content coverage to address the research questions
of this thesis. Moreover, we will reflect on how the outcomes align with future work in a
wider context.

6.1 Results Analysis


The results are illustrated in Figures 5.1, 5.2, 5.3, and 5.6 and Table 5.1. This section discusses
what these results entail in relation to the aim of the thesis and the research questions. Let
us begin with the first research question:

1. To what extent can the performance of a HIP-VPLS environment simulated


within Mininet be effectively evaluated?

Based on the results referenced in Section 5, it was possible to evaluate the performance of a
HIP-VPLS environment within Mininet, as indicated by Figures 5.1 and 5.2. These figures
provide strong evidence that the test environment was successfully simulated. However, the
tests involving only two hosts do not provide a complete picture of scalability and perfor-
mance under more extensive network conditions. While the initial results are promising, a
broader assessment with more hosts and varied network scenarios is necessary to fully evalu-
ate the system’s performance capabilities in a simulated HIP-VPLS environment. This would
help evaluate how well the system can handle increased complexity and larger scale deploy-
ments.

2. How do variations in buffer length and TCP window size affect the perfor-
mance of PyHIP?

When analyzing the results depicted in Figure 5.3, we observe that smaller TCP window sizes
restrict throughput, while larger sizes enhance it. The data indicates that the optimal TCP
window size likely falls between 128,000 and 225,000 bytes, suggesting that operating within
this range might yield the best performance. While buffer length shows peak performance at
3,000 bytes and appears to perform better at smaller sizes, these findings are not backed by
statistical testing and are therefore insufficient to draw definitive conclusions. The notably
low throughput at a buffer length of 2,000 bytes could be attributed to an anomaly in test
execution or to factors inherent to the iPerf3 test itself.
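The window and buffer parameters discussed here correspond to iPerf3's `-w` and `-l` client flags. A minimal sketch of how such runs could be automated follows; the helper names and server address are illustrative and do not reflect the thesis's actual test harness:

```python
# Hypothetical automation of the iPerf3 runs behind Figure 5.3.
# build_iperf3_cmd, run_and_get_bps, and "10.0.0.2" are illustrative names.
import json
import subprocess

def build_iperf3_cmd(server: str, window_bytes: int, buffer_len: int) -> list[str]:
    """Build an iPerf3 client command with an explicit TCP window (-w)
    and read/write buffer length (-l), emitting JSON (-J) for parsing."""
    return [
        "iperf3", "-c", server,
        "-w", str(window_bytes),   # socket buffer / TCP window hint
        "-l", str(buffer_len),     # length of each read/write buffer
        "-J",                      # machine-readable JSON output
    ]

def run_and_get_bps(server: str, window_bytes: int, buffer_len: int) -> float:
    """Run one test and return the receiver-side throughput in bits/s."""
    out = subprocess.run(build_iperf3_cmd(server, window_bytes, buffer_len),
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)["end"]["sum_received"]["bits_per_second"]
```

Scripting the runs this way also makes it easy to repeat each configuration several times and average the results, which mitigates the single-run anomalies discussed above.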

As discussed in Section 3.7, despite their widespread adoption across academia and industry,
tools like Mininet and iPerf can sometimes produce unreliable outcomes. A critical issue is the
occasional unreliability of iPerf results, where reported throughputs sometimes exceed what
is theoretically possible for the simulated network links. This problem is particularly pro-
nounced with large TCP window sizes and becomes conspicuous in simulations with a single
emulated link. We therefore emphasize the importance of consistent testing conditions and
multiple test runs to ensure the reliability of results. Due to time constraints in this thesis,
comprehensive testing was not feasible; thus the results, while suggesting some trends,
should not be considered definitive.

Although BDP calculations were not performed for this specific setup, the implications are
still interesting, though they remain hypothetical and must be interpreted with caution. For
instance, if the buffer size is set below the BDP, the network's bandwidth cannot be fully
utilized, leading to reduced throughput. This is supported by our findings, where larger TCP
window sizes, which effectively increase buffer capacity, are associated with higher throughput
rates. Conversely, if the buffer size exceeds the BDP, it might not yield additional throughput
benefits and could introduce unnecessary latency or overhead.
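For reference, the BDP itself is straightforward to compute once a link's bandwidth and round-trip time are known. The figures below are illustrative examples, not measured values from this setup:

```python
# Sketch: computing the bandwidth-delay product (BDP) to sanity-check a
# chosen TCP window size. The link parameters here are assumed examples.
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """BDP = bandwidth (bits/s) * RTT (s), converted to bytes."""
    return bandwidth_bps * rtt_seconds / 8

# e.g. a 100 Mbit/s emulated link with a 20 ms round-trip time:
bdp = bdp_bytes(100e6, 0.020)  # 250,000 bytes
# A TCP window below this value cannot keep the pipe full, while a window
# far above it adds queuing delay without additional throughput.
```

Performing this calculation for the emulated links would make it possible to say whether a given window or buffer setting sits above or below the BDP, rather than inferring it from the throughput curves alone.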

In our study (see Section 3.7), the relationship between buffer sizes, TCP window sizes, and the
BDP provides insight into some of the anomalies observed in the iPerf3 test results. Misalign-
ment of the selected buffer or window size with the network’s BDP could result in misleading
outcomes, either underestimating or overestimating network capacity. This issue might ex-
plain the notably low throughput observed at a buffer length of 2,000 bytes, suggesting that
this configuration was suboptimal relative to the BDP. Understanding this relationship is cru-
cial for configuring network tests accurately and ensuring that simulations realistically reflect
network capabilities.

3. Could we enhance throughput performance by implementing measures such as
modifying the current cryptography package from PycryptoDome to the cryptography
library?

Based on the results, referencing Figure 5.6, we observe an improvement in throughput across
all modified cryptography algorithms. However, the reliability of these improvements is
uncertain, perhaps due to fluctuations in hardware, software, or configuration. As previously
mentioned, these tests were conducted using the iPerf3 test only once, and specifically with
the configuration that yielded the highest throughput during this thesis's testing phase.
Relying on a single high-performing setup can misrepresent the effectiveness of the changes,
as different iPerf3 test combinations might yield different results. It is therefore important
to note that this limits the credibility of the findings. Although the comparison between the
original PyHIP version and the modified version shows a minor increase of approximately
20 Mbit/s, as detailed in Table 5.3, this change is notable in percentage terms. This provides
an indicative trend that modifying the current cryptography package from PycryptoDome to
the cryptography library can enhance throughput performance.
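A micro-benchmark of the hash primitives alone can help separate cryptographic cost from network effects in such comparisons. The sketch below uses the standard library's hashlib as a stand-in backend (the thesis's actual comparison was between PyCryptodome and the cryptography library); the function name, payload size, and run count are illustrative:

```python
# Illustrative micro-benchmark for comparing hash-primitive throughput.
# hashlib stands in for the PyCryptodome / cryptography backends.
import hashlib
import timeit

def hash_throughput_mb_s(algorithm: str, payload: bytes, runs: int = 50) -> float:
    """Return approximate hashing throughput in MB/s for one algorithm."""
    elapsed = timeit.timeit(lambda: hashlib.new(algorithm, payload).digest(),
                            number=runs)
    return (len(payload) * runs) / elapsed / 1e6

payload = b"\x00" * 1_000_000  # 1 MB hashed per call
rates = {alg: hash_throughput_mb_s(alg, payload)
         for alg in ("sha256", "sha384")}
```

Running the same loop against both backends in isolation would indicate how much of the observed 20 Mbit/s difference is attributable to the hash implementations themselves.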

Furthermore, as discussed in Section 3.4, the use of Python-based cryptographic libraries
like PyCryptodome may introduce computational overheads. Given that parts of
PyCryptodome are implemented in Python, its performance might not be on par with
libraries whose performance-critical paths are crafted in optimized languages such as C.

It is also critical to highlight that the most substantial throughput improvement was not ob-
served by modifying all SHA classes simultaneously but rather through the specific com-
bination of SHA-384 and SHA-256. This indicates that the overall performance gain from
updating all cryptographic libraries did not align with our initial expectations and might not
be ideal for practical applications, as initially suggested in Section 1.4.

Moreover, it is important to note that modifications were not made to the symmetric
cryptography libraries, such as AES, which could have a significant impact on system throughput.
As illustrated in Figure 3.4 and elaborated in Section 3.3, AES remains the sole method for
symmetric encryption in the system, potentially making it a significant factor in any through-
put improvements. However, due to time constraints within this thesis, we were unable to
refine the AES implementation within our system.

6.2 Method

Set-up
The experiment was conducted on a MacBook Pro equipped with Apple Silicon (M1), using
UTM as the virtualization platform. It is important to note that there is no evidence within
this thesis to suggest that identical results would be replicated with the same hardware, system
settings, and codebase across different experiments. The inherent complexities introduced by
Mininet, which simulates network topologies and devices in a software-based environment,
add to these uncertainties. Factors such as system load variations, background processes, and
differences in network stack performance can all contribute to slight discrepancies in results
between tests.

Testing
It is important to acknowledge that no statistical tests have been performed to validate the sig-
nificance of the observed findings. As such, the reported performance improvements should
be considered preliminary and subject to further validation across various setups and condi-
tions. This investigation into alternative cryptographic packages, such as Cryptography.io,
offers valuable insights into how the choice of cryptographic implementation can influence
PyHIP’s performance.

The testing involved only two nodes, and the reliability of using iPerf3 within Mininet may
not be optimal. The selection of metrics (TCP window sizes ranging from 8,000 to 1,024,000
bytes and buffer lengths from 1,000 to 5,000 bytes) raises questions about their appropriateness
as optimal measurements. Additionally, selecting only the best combinations for subsequent
optimization tests may not fully demonstrate how these parameters impact network perfor-
mance under different conditions.
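The sweep described above can be sketched as a simple grid search. The endpoint values match the ranges stated in the text, while the intermediate window steps and the `measure()` callback are assumptions for illustration:

```python
# Illustrative grid search over the tested parameter space. Only the range
# endpoints come from the thesis; intermediate window values are assumed.
from itertools import product

WINDOW_SIZES = [8_000, 16_000, 32_000, 64_000, 128_000,
                256_000, 512_000, 1_024_000]          # bytes (assumed steps)
BUFFER_LENGTHS = [1_000, 2_000, 3_000, 4_000, 5_000]  # bytes

def best_combination(measure):
    """measure(window, buffer) -> throughput; return the best pair.

    Picking only the top-scoring combination, as done in the thesis,
    discards how the remaining 39 combinations behave."""
    return max(product(WINDOW_SIZES, BUFFER_LENGTHS),
               key=lambda pair: measure(*pair))
```

Reporting the full grid rather than only `best_combination`'s output would show how sensitive throughput is to each parameter under different conditions.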
Changing the Library from PyCryptoDome to cryptography to optimize
throughput
Transitioning from PyCryptoDome to the cryptography library in a cryptographic framework
poses several challenges that must be considered during the implementation phase. These
difficulties largely stem from the cryptography library's different API design compared to
PyCryptoDome. Additionally, due to time constraints, the transition of the AES
cryptographic routines was not completed, which was a critical aspect of our project. This
limitation is significant, as it constrains the scope of our optimization phase and provides
limited insight into how this change might affect the throughput performance of PyHIP.
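One common way to contain such API differences is to hide the third-party library behind a small adapter interface, so call sites never touch its API directly. A minimal sketch follows, using hashlib as the backing implementation; in PyHIP the two backends would wrap PyCryptoDome and cryptography respectively, and all names here are illustrative:

```python
# Adapter-pattern sketch for swapping cryptographic backends. HashBackend,
# HashlibBackend, and packet_digest are hypothetical names, not PyHIP's API.
import hashlib
from typing import Protocol

class HashBackend(Protocol):
    def sha256(self, data: bytes) -> bytes: ...

class HashlibBackend:
    """Stand-in backend; a cryptography-based backend would expose the
    same surface, so swapping libraries touches only backend construction."""
    def sha256(self, data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

def packet_digest(backend: HashBackend, payload: bytes) -> bytes:
    # Call sites depend only on the adapter interface, not on whichever
    # library implements it underneath.
    return backend.sha256(payload)
```

With such a seam in place, the incomplete AES migration could proceed one backend method at a time instead of requiring changes throughout the codebase.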

6.3 Future work


The future work for this thesis involves continuing to replace the cryptographic libraries to
better understand their true impact on system performance. To determine how variations
in buffer length and TCP window size affect PyHIP's performance, statistical tests could
help validate the reliability of these results, or potentially contradict the findings presented
in this thesis. Additionally, more comprehensive statistical tests are necessary to precisely
measure the actual impact of these changes. These tests should be carried out in a larger
environment, as the current evaluations have been limited to just two nodes. Expanding the
testing framework will provide a more accurate reflection of performance in real-world
applications and help identify any scalability issues or enhancements that could further
improve TCP throughput.
7 Conclusion

This thesis explores the integration, testing, and optimization of HIPv2 within the Mininet
network emulator and the PyHIP framework. It specifically focuses on enhancing network
throughput by switching cryptographic libraries. The primary objectives included the
incorporation of HIPv2 into Mininet to create a testing environment and, subsequently, the
optimization of PyHIP in simulated network environments. Throughout this work, we
examined cryptographic libraries, focusing specifically on the transition from PyCryptodome
to the Cryptography library. The results indicate a modest improvement in network
throughput following this transition. Although there was a noticeable increase in throughput,
the extent of this improvement was somewhat less than expected. This suggests that further
work is needed to more accurately determine the true impact of different cryptographic
libraries on network performance. The limitation of testing to a two-node setup within
Mininet also highlights the need for expanded testing environments to better simulate real-
world performance. Future work will involve more extensive testing, broader statistical
analyses to validate the observed results, and an expanded cryptographic scope that includes
modifications to the AES implementation. Overall, this thesis contributes to the ongoing
discourse on network security protocols by providing a framework for optimizing the
performance of PyHIP in simulated network environments.
