
Cloud Security Is Not (Just) Virtualization Security

A Short Paper
Mihai Christodorescu, Reiner Sailer, Douglas Lee Schales
IBM T.J. Watson Research
{mihai,sailer,schales}@us.ibm.com

Daniele Sgandurra, Diego Zamboni
IBM Zurich Research
{dsg,dza}@zurich.ibm.com

ABSTRACT

Cloud infrastructure commonly relies on virtualization. Customers provide their own VMs, and the cloud provider often runs them without knowledge of the guest OSes or their configurations. However, cloud customers also want effective and efficient security for their VMs. Cloud providers offering security-as-a-service based on VM introspection promise the best of both worlds: efficient centralization and effective protection. Since customers can move images from one cloud to another, an effective solution requires learning which guest OS runs in each VM and securing the guest OS without relying on guest OS functionality or on an initially secure guest VM state.

We present a solution that is highly scalable in that it (i) centralizes guest protection into a security VM, (ii) supports Linux and Windows operating systems and can be easily extended to support new operating systems, (iii) does not assume any a priori semantic knowledge of the guest, and (iv) does not require any a priori trust assumptions about any state of the guest VM. While other introspection-based monitoring solutions exist, to our knowledge none of them monitors guests at the semantic level required to effectively support both white- and black-listing of kernel functions, or allows monitoring to begin at any point during run-time, whether the VM was resumed from saved state or cold-booted, without assuming a secure start state.

Categories and Subject Descriptors

D.4 [Security and Protection]: Access controls

General Terms

Security

Keywords

integrity, outsourcing, virtualization, cloud computing

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CCSW'09, November 13, 2009, Chicago, Illinois, USA.
Copyright 2009 ACM 978-1-60558-784-4/09/11 ...$10.00.

1. INTRODUCTION

Cloud computing holds significant promise to improve the deployment and management of services by allowing the efficient sharing of hardware resources. In a typical cloud scenario, a user uploads the code and data of their workload to a cloud provider, which in turn runs this workload without knowledge of its code internals or its configuration. Users benefit from offloading the management of their workload to the provider, while the provider gains from efficiently sharing its cloud infrastructure among workloads from multiple users. This sharing of execution environment, together with the fact that the cloud user lacks control over the cloud infrastructure, raises significant security concerns about the integrity and confidentiality of a user's workload. One underlying mechanism enabling cloud computing is virtualization, be it at the hardware, middleware, or application level. While a large amount of research has focused on improving the security of virtualized environments, our ongoing work on building virtualization-aware security mechanisms for the cloud has taught us that existing security techniques do not necessarily apply to the cloud, because of the mismatch in security requirements and threat models.

In cloud computing, security applies to two layers in the software stack. First, users' workloads have to run isolated from each other, so that one (malicious) user cannot affect or spy on another user's workload. Second, each user is also concerned with the security of their own workload, especially if it is exposed to the Internet (as in the case of a web service or Internet application).

Many solutions exist for enforcing isolation between workloads, including the use of virtualization. Securing a particular workload is a much harder task and requires knowing what code is part of the workload. For example, in the case of an infrastructure cloud built on hardware virtualization, the workload is the guest operating system (OS) running in a virtual machine (VM). Because we have two parties involved (a cloud provider that controls the virtual-machine monitor (VMM) and a cloud user that controls the OS inside the VM), information about the guest OS is not readily available to the VMM. Securing the guest OS without relying on the guest OS functionality, and without having a priori knowledge of the OS running in the guest VM, falls outside the capabilities of existing solutions.

Current virtualization research assumes that the virtualization environment, e.g., the VMM, has knowledge of the software being virtualized, e.g., the guest OS. This knowledge can then be used to monitor the operation of the guest VM, to determine its integrity, and to correct any observed anomalies. More specifically, all techniques proposing to monitor and enforce the security of an operating system inside a guest VM rely on several assumptions. First, the locations of code and data inside the guest VM are often expected to be found based on symbol tables, with no verification of whether the memory layout of the running VM matches the symbol tables. Any malware that relocated security-sensitive data structures would fool detectors built on these techniques, and any valid update of the guest OS by the VM owner would result in numerous false alarms. Second, existing techniques need to monitor the
guest VM from the very moment when the guest OS boots. This is certainly unfeasible in infrastructure clouds, where the lifetime of a VM is decoupled from the lifetime of the guest OS running inside that VM (for example, a VM can start from a snapshot of the guest OS, in which case the VM starts with the OS fully loaded and running). These two assumptions make existing virtualization-based security techniques unsuitable for the cloud setting.

In this paper we describe the architecture we have developed to secure customers' virtualized workloads in a cloud setting. Our solution, a secure version of virtual-machine introspection, makes no assumptions about the running state of the guest VM and no assumptions about its integrity. The OS in the guest VM is in an unknown state when our security mechanism is started, and we monitor it to discover its operation and measure its level of integrity. The monitoring infrastructure initially assumes only the integrity of the hardware state, as we presume that an attacker inside a VM cannot re-program the CPU. Starting from known hardware elements such as the interrupt descriptor table, we automatically explore the code of the running VM, validating its integrity and that of the data structures on which it depends. This approach of combining the discovery of relevant code and data in the guest OS with integrity measurements of the same code and data allows us to overcome the challenges of monitoring an a priori unknown guest OS without requiring a secure boot.

In this paper we make the following contributions:

• We introduce a new architecture for secure introspection, in which we combine discovery and integrity measurement of code and data starting from hardware state. Integrity measurements are done using whitelists of code executing in the VM, which need to be generated offline once for every supported operating system. This architecture addresses both the semantic gap present in virtual-machine introspection and the information gap specific to cloud computing, where no information about the software running in the guest VM is available outside the guest VM. Section 3 provides details of our security-oriented introspection mechanism.

• We present a technique to learn the exact type and version of an operating system running inside a guest VM. This technique builds on our secure-introspection infrastructure. Section 4.1 describes the technique and Section 5.1 evaluates its precision in comparison to existing OS-discovery tools from the forensic world.

• As a second application of our secure-introspection infrastructure, we design a rootkit-detection and -recovery service, which runs outside the guest VM and uses introspection to identify anomalous changes to guest-kernel data structures. When a rootkit is detected, it is rendered harmless by restoring the damaged kernel data structures to their valid state. Sections 4.2 and 5.2 respectively describe the design of this anti-rootkit service and a preliminary evaluation that highlights a high detection rate and a lack of false positives.

2. RELATED WORK

Virtual Machine Introspection for Security. Virtual machine introspection (VMI) was first proposed in [5] together with Livewire, a prototype IDS that uses VMI to monitor VMs. XenAccess [2] is a monitoring library for guest OSes running on top of Xen that applies VMI and virtual-disk monitoring capabilities to access the memory state and disk activity of a target OS. Further VMI-based approaches are virtual machine replay [4] and detecting past intrusions [6]. These approaches mandate that the system is clean when it starts being monitored, which our solution does not require.

Memory Protection. CoPilot [10] is a coprocessor-based kernel integrity monitor that performs checks on system memory to detect illegal modifications to a running Linux kernel. Paladin [1] is a framework that exploits virtualization to detect and contain rootkit attacks by leveraging the notions of a Memory Protected Zone (MPZ) and a File Protected Zone (FPZ) that are guarded against illegal accesses. For example, the memory image of the kernel and its jump tables are in the MPZ, which is set as non-writable. XENKimono [13] detects security-policy violations in a kernel at run-time by checking the kernel from a distinct VM through VMI. It implements integrity checking, to detect illegal changes to kernel code and jump tables, as well as cross-view detection, and applies a whitelist-based mechanism, for example to check the list of kernel modules that can be loaded into the kernel. SecVisor [16] is a tiny hypervisor that ensures that only user-approved code can execute in kernel mode. SecVisor virtualizes physical memory, so that it can set hardware protections over kernel memory, and the CPU's MMU and the IOMMU, to ensure that it can intercept and check all modifications to their states. These systems require information about the guest OS (e.g., the location of data structures) to operate. Instead, our system relies only on hardware state and discovers any other information safely, before it is used.

Secure Code Execution. Manitou [7] is a system implemented within a VMM that ensures that a VM can only execute authorized code, by computing the hash of each memory page before executing the code. Manitou sets the executable bit for the page only if the hash belongs to a list of authorized hashes. NICKLE [15] is a lightweight VMM-based system that transparently prevents unauthorized kernel-code execution for unmodified commodity OSes by implementing memory shadowing: the VMM maintains a shadow physical memory for a running VM and transparently routes guest kernel instruction fetches to the shadow memory at run-time, so that it can guarantee that only authenticated kernel code will be executed. One requirement of this system is that the guest OS is clean at boot and is monitored continuously from power-on and throughout its life-cycle. With our framework, VMs can be created, cloned, reverted to snapshots, and migrated arbitrarily throughout their lifetime.

Secure Control Flow. Lares [9] is a framework that can control an application running in an untrusted guest VM by inserting protected hooks into the execution flow of a process to be monitored. These hooks transfer control to a security VM that checks the monitored application using VMI and security policies. Since the guest OS needs to be modified on the fly to insert hooks, this mechanism may not be applicable to some customized OSes. KernelGuard [14] is a prevention solution that blocks dynamic-data kernel rootkit attacks by monitoring kernel memory accesses using a set of VMM policies. Petroni and Hicks [11] describe an approach to dynamically monitor kernel integrity based on a property called state-based control-flow integrity, which is composed of two steps: (i) validate the kernel text, including all static control-flow transfers, by keeping a hash of the code; (ii) validate dynamic control-flow transfers. The run-time monitor considers the dynamic state of the kernel, traverses it starting from a set of roots (the kernel's global variables), follows pointers to locate each function that might be invoked, and verifies whether these pointers target valid code regions, according to the kernel's control-flow graph. This system requires the kernel source code in order to apply static analysis and build the kernel's control-flow graph, whereas our solution also checks the integrity of OSes for which source code is not available.

3. OVERVIEW OF OUR ARCHITECTURE

Ensuring integrity in a running operating system is a daunting
challenge, and one that has been explored for a long time in the research community. In a system running on real hardware, all integrity checks need to be done from within the system being monitored, which inevitably raises the question of how to verify the integrity of the integrity-monitoring components themselves. Traditionally, this has been solved by requiring a trusted boot process to be in place, so that the integrity of the operating system and all its components can be verified starting from power-on.

With virtualization, it becomes possible to monitor the system "from the outside," through the use of virtual-machine introspection. This improves the situation by moving the monitoring components outside the monitored VM and outside the reach of an attacker. In addition to the existing challenges of determining the code and data integrity of a running OS, building introspection-based monitors poses several new challenges:

• The semantic gap [3] between the level of detail observed by the monitor and the level of detail needed to make a security decision can only be bridged through deep knowledge of the guest OS.

• The complex lifecycle of VMs, which can be cloned, suspended, transferred, restarted, and modified arbitrarily, makes any requirement for a trusted boot or for continuous monitoring unrealistic. The monitor must determine the integrity of the guest OS starting only from the current state, without requiring any history.

• Operating systems for which source code is not available (in particular Windows, by far the largest target for rootkits and other malware) make bridging the semantic gap harder.

Table 1 presents some of the common assumptions made by existing kernel-integrity-protection work, and sample situations in which those assumptions break. These assumptions make most existing kernel-integrity-monitoring systems unable to protect VMs in a real production cloud environment.

Table 1: Assumptions made by existing kernel integrity-checking mechanisms, and their points of failure in a cloud environment.

• The system is monitored continuously from power-on and throughout its lifecycle. In the cloud: VMs can be created, cloned, reverted to snapshots, and migrated arbitrarily throughout their lifetime.
• The guest system is clean when it starts being monitored. In the cloud: VMs can come into existence already infected or compromised.
• The guest system can be modified on the fly to insert hooks or other monitoring mechanisms. In the cloud: some customized systems may not be modifiable, or we may lack the knowledge necessary to make the modifications.
• The guest OS is known in advance. In the cloud: VMs may be configured with any one or more guest OSes (e.g., multi-boot VMs).
• The guest OS source code is available. In the cloud: most real-world attacks (e.g., rootkits) target Windows.
• Guest OS information (e.g., the location of data structures) is available. In the cloud: the location of internal data structures is unknown when no source code and no version information are available.
• Malware is known in advance and given as blacklists. In the cloud: new malware is created constantly; realistic protection cannot rely on blacklists.
• A trusted boot process exists. In the cloud: not all hardware platforms, hypervisors, and OSes support trusted boot.

Threat Model. In spite of these challenges, we wish to handle threats against cloud workloads that are as generic as possible. We allow the attacker to completely control the guest virtual machines, both the user-space applications and the operating-system kernel and associated drivers. Additionally, we assume that the cloud user (i.e., the victim) who owns these guest VMs does not provide the cloud provider with any information about the type, version, or patch status of the software running inside the guest VMs. We make only two assumptions. First, the hypervisor, which is under the control of the cloud provider, is correct, trusted, and cannot be breached. Second, there are virtual machines, again under the control of the cloud provider, which no attacker can breach. We will use these VMs (called security VMs) to host our discovery and integrity solution.

[Figure 1: Overview of the integrity discovery system using secure introspection. (a) Lab Environment: One-Time Reference Measurements, in which OS and application whitelists and a malware blacklist are created. (b) Cloud Environment: Run-time Checking of Guest VMs, in which the system (1) reads the virtual CPU registers, (2) determines the OS (policy-based), (3) discovers code (OS-type based) by following pointers from CPU registers, interrupts, and system calls to other code, and validates the guest kernel code against the whitelist and blacklist.]

Our Secure-Introspection Technique. Figure 1(b) shows the steps to discover and verify the integrity of a guest kernel:
1. Read the IDT location from the virtual CPU registers.

2. Analyze the contents of the IDT and, using the hash values of in-memory code blocks and whitelists of known operating systems, determine the guest OS running in the VM.

3. Using the information about the running OS, use the appropriate algorithms to discover other operating-system structures that are linked from the IDT (e.g., system-call tables, lists of processes and loaded kernel modules, etc.).

4. Continuously analyze all the discovered data structures using the whitelist appropriate for the guest OS, to determine when they are modified and whether the modifications are authorized. Follow the execution of the code to the maximum extent possible, to verify the integrity of as much of the kernel code as possible during live execution of the guest VM.

The whitelists used by our approach consist of cryptographic hashes of normalized executable code found in the kernel (including modules and device drivers) of the operating system, plus some metadata indicating the type and location of each entry. A whitelist needs to be produced for each supported OS type and version (or service pack), and for each whitelisted application; it can be generated offline (Figure 1(a)) from a clean installation of the OS, using an automated process. Blacklists are implemented by the same mechanism.

This algorithm addresses the problems mentioned in Table 1. It allows us to start monitoring a guest VM at any time in its life cycle and to monitor it correctly from that point on, because the discovery of the OS structures depends only on the hardware state, which can be read at any moment. We can start monitoring a system that is already infected and correctly identify the infection, thanks to the use of whitelists. We do not need to know the guest OS in advance, since it is determined on the fly by the analysis, nor do we need access to the OS source code, making the approach particularly suitable for protecting against real-world attacks on both Windows and Linux. There is no need to know malicious code in advance: thanks to the use of whitelists, any modification to the guest OS will be correctly detected (and prevented, depending on policy). No trusted boot is required in the VMs. By assuming that the hypervisor and the security VM (from which the monitoring is done) are trusted, we can establish a "dynamic root of integrity" that allows us to dynamically determine the integrity of all critical components in the VM. Because secure introspection is non-intrusive, allowing us to examine the state of the virtual hardware in a completely transparent manner, no modifications need to be made to the guest system to support monitoring.

4. SECURE INTROSPECTION

To build the functionality required for secure introspection, we apply an iterative, incremental process of validating the integrity of kernel code and data. We assume the hardware to be trusted to perform as specified and to be impervious to attacks. (The problem of attacks that overwrite the BIOS, the firmware of various devices, or the processor microcode is outside the scope of this work.) This assumption means that the hardware state which by specification defines control-flow transfers has values reflecting the true execution flow in the system. For example, if entry 0 of the interrupt descriptor table (IDT) contains an interrupt-gate descriptor with the value 0xffffabcd, then we know that the code at virtual address 0xffffabcd will be invoked on a division-by-zero exception. We can then bootstrap integrity by (a) validating all of the code pointed to by hardware state, then (b) identifying the kernel data used by the validated code, and (c) repeating the process for any code pointed to by the newly identified kernel data.

A pseudo-code sketch of our algorithm for secure introspection is given in Algorithm 1. Initially only the hardware "code" (i.e., the functionality of the hardware, including the microcode) is trusted. The algorithm relies on three subroutines. First, CFDATAUSED returns the sets of hardware state and memory locations that influence the control flow out of a given code fragment. As a special case, CFDATAUSED returns the hardware state used in hardware-mediated control-flow transfers (e.g., for Intel IA-32 processors this includes the IDTR register and the msr_sysenter_cs, msr_sysenter_eip, and msr_star model-specific registers). For other code, we derive its dependencies on memory locations a priori. In the case of indirect control transfers, for which we know the memory location or register used to direct the control flow but not its actual value, we use execute triggers (via the introspection infrastructure) to gain control over the VM right before the control-flow transfer is about to occur. The second routine, CODEISVALID, computes a checksum over the code paths starting at the given location and checks it against a whitelist of known code checksums. Finally, MONITORFORWRITES simply monitors (via the introspection infrastructure) the memory regions occupied by the given code and data. When a write occurs to a monitored region, the corresponding code and data are scheduled for re-validation by removing them from the Trusted sets.

Algorithm 1: Secure introspection

    TrustedCode ← {hardware};
    TrustedData ← ∅;
    while true do
        d ← ∅;
        foreach c ∈ TrustedCode do d ← d ∪ CFDATAUSED(c);
        d ← d \ TrustedData;
        foreach ptr ∈ d do
            if CODEISVALID(code at ptr) then
                add code at ptr to TrustedCode;
                add ptr to TrustedData;
            else
                raise alarm;
        MONITORFORWRITES(TrustedCode ∪ TrustedData);
    end

This algorithm allows us to discover the integrity of the kernel code running inside the guest VM, without any expectations about the layout of that code. Because we follow the code paths throughout memory, we validate only the code that is actually run, and do not need to worry about distinguishing between code and data on mixed-use memory pages.
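The worklist structure of Algorithm 1 can be illustrated with a short Python sketch. Everything here is a toy stand-in for the real introspection interface: the memory map, the control-flow pointer table, and the function names (`cf_data_used`, `code_is_valid`) are hypothetical analogues of CFDATAUSED and CODEISVALID, the whitelist is a set of SHA-256 hashes as described above, and the sketch runs to a fixpoint instead of looping forever with MONITORFORWRITES re-triggering validation on writes.

```python
import hashlib

# Toy guest memory (address -> code bytes) and, per trusted fragment, the
# pointers that influence control flow out of it. Illustrative only.
MEMORY = {
    0xffffabcd: b"idt_handler_0",      # reached from hardware state (IDT)
    0xffff1000: b"syscall_dispatch",   # reached from the handler
    0xffff2000: b"rootkit_hook",       # unknown code, not on the whitelist
}
CF_POINTERS = {
    "hardware": [0xffffabcd],          # IDTR etc. point at the first handler
    0xffffabcd: [0xffff1000],
    0xffff1000: [0xffff2000],          # a pointer redirected into rootkit code
}

# Whitelist: hashes of known-good code, generated offline from a clean install.
WHITELIST = {hashlib.sha256(b).hexdigest()
             for b in (b"idt_handler_0", b"syscall_dispatch")}

def cf_data_used(code):
    """CFDATAUSED analogue: pointers used for control flow out of `code`."""
    return set(CF_POINTERS.get(code, []))

def code_is_valid(ptr):
    """CODEISVALID analogue: checksum the code at ptr, look it up."""
    return hashlib.sha256(MEMORY[ptr]).hexdigest() in WHITELIST

def secure_introspection():
    """Run the Algorithm 1 worklist to a fixpoint; return (trusted, alarms)."""
    trusted_code, trusted_data, alarms = {"hardware"}, set(), []
    while True:
        d = set()
        for c in trusted_code:
            d |= cf_data_used(c)
        d -= trusted_data
        if not d:                      # fixpoint: nothing new reachable
            return trusted_code, alarms
        for ptr in d:
            if code_is_valid(ptr):
                trusted_code.add(ptr)  # code at ptr becomes trusted
                trusted_data.add(ptr)
            else:
                alarms.append(ptr)     # unknown kernel code: raise alarm
                trusted_data.add(ptr)  # (sketch only: avoid re-checking it)
```

Running `secure_introspection()` on this toy memory trusts the interrupt handler and the dispatcher but raises an alarm on the unlisted fragment at 0xffff2000, mirroring how validation starting from hardware state flushes out unauthorized kernel code.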
4.1 Application #1: Guest-OS Identification

Asset identification and inventory is an important part of network and system management. The most common approach is to use network-based scanning to fingerprint devices connected to the network, using tools like nmap [8]. However, network-based fingerprinting can be easily defeated by programs running on the device, and this capability is widely available in programs like honeyd [12].

Using introspection to analyze the state and behavior of virtual machines provides advantages not only from the security point of view, but also from the system- and network-management point of view. One such advantage is the ability to precisely identify the operating system running in each VM, independently of the behavior of both user- and system-level programs in the VM. Through experimentation, we established that the first code fragments validated by our secure-introspection algorithm are sufficient to uniquely identify the guest OS. In other words, the interrupt handlers (as pointed to by the IDT entries) vary significantly across OS types, versions, and even patch levels. It would be extremely difficult for an attacker to modify the interrupt handlers to fool the identification while at the same time keeping the guest OS in a functioning state.

4.2 Application #2: Rootkit Detection

The secure-introspection algorithm provides information about the integrity of the kernel code, validating each code fragment present in memory against a whitelist of known code. Additionally, the validation procedure takes into account the control flow between code fragments, making sure that authorized code invokes only authorized code using the appropriate control flows. Based on this functionality, we easily build a rootkit detector that works by identifying the presence of unauthorized code in kernel space. Every time the secure-introspection algorithm cannot validate a code fragment, it indicates that the kernel integrity might have been breached. Depending on the defined security policy, the secure-introspection monitor can raise alerts on all unknown code fragments, or it can use a database of known malicious code (i.e., a blacklist) to reduce the number of false positives.

A novel feature we gain for free from secure introspection is the detection of rootkits (and, more generally, any malware) that disable security software present in the guest. Most security software hooks into kernel data structures that allow it to monitor security-sensitive operations such as file creation, file modification, or network communication. Malware can make such security software ineffective by unhooking its code from the kernel structures. Our introspection-based monitor observes this operation as a change of a kernel pointer from one authorized code fragment to another authorized one. A simple security policy prevents such unhooking attacks by assigning priorities to authorized code, such that the code of a firewall handler takes precedence over the default code built into the OS.

5. EVALUATION

We set out to determine the effectiveness of our secure-introspection approach by evaluating two key metrics. First, we compared the accuracy of the OS-identification technique built on secure introspection with existing approaches. Second, we measured the detection and false-alarm rates of our rootkit detector that uses secure introspection.

To summarize our results, OS identification has perfect accuracy even where nmap-style techniques fail, and the anti-rootkit application has a high detection rate with no false positives. The overhead observed during the experiments is minimal, with only a few seconds of delay when the secure-introspection engine first connects to a guest VM and less than 2% overhead in macrobenchmarks. The experiments were performed on a 2.66GHz dual quad-core system with 18GB of memory, running 23 different guest OSes, including Microsoft Windows XP and 2003 at various service-pack levels, and multiple releases of RedHat, SuSE, and Ubuntu Linux, in both 32- and 64-bit versions. We used a commercial hypervisor with introspection capabilities.

5.1 Guest-OS Identification

To test this capability, we set up a test network using honeyd in a Linux VM, and used both nmap and our approach to identify the operating system running on it. Honeyd is set up to simulate two devices: a generic Windows machine with some services (POP, HTTP, FTP, SSH) and a Cisco router with Telnet enabled. Figure 2 shows the experimental setup.

[Figure 2: Experimental setup for guest-OS identification.]

We ran the experiment by running nmap from a different VM against both honeyd addresses, at the same time as our OS-identification code was running on the security VM. Nmap identified the honeyd "personalities" as Windows (with a confidence of 91%) and as different network devices, respectively. Our code correctly identified the VM as Linux. These results are shown in Figures 3 and 4, respectively.

    Starting Nmap 4.62 ( http://nmap.org ) at ...
    Interesting ports on 192.168.1.1:
    PORT    STATE    SERVICE
    21/tcp  open     ftp
    22/tcp  open     ssh
    25/tcp  filtered smtp
    80/tcp  open     http
    110/tcp open     pop3
    Device type: general purpose
    Running (JUST GUESSING) :
    Microsoft Windows NT|95|98 (91%)
    Aggressive OS guesses:
    Microsoft Windows NT 4.0 SP5 - SP6 (91%),
    Microsoft Windows 95 (90%), ...
    No exact OS matches for host.

    Interesting ports on 192.168.1.150:
    PORT   STATE SERVICE
    23/tcp open  telnet
    Aggressive OS guesses:
    Vegastream Vega 400 VoIP Gateway (91%),
    D-Link DPR-1260 print server,
    or DGL-4300 or DIR-655 router (90%), ...
    No exact OS matches for host

Figure 3: Nmap run on honeyd hosts (redacted for space).

    Initializing Introspection Manager...
    ...
    waiting for VM to be attached
    VM is attached to agent 192.168.1.34:8080
    CPU started.
    Operating System Detection active.
    Reboot Monitoring active.
    Guest OS identified as Linux.
    guest OS reported: Linux/RHEL Linux 5.2
    (32-bit)/32-bit/SMP/0.0.0.0

Figure 4: OS detection using secure introspection.
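The identification step behind the output in Figure 4, matching hashes of the first-validated code fragments (the interrupt handlers reached from the guest's IDT) against per-OS reference whitelists, can be sketched as follows. The handler byte strings and the fingerprint database are illustrative stand-ins, not the paper's actual data or API.

```python
import hashlib

# Hypothetical per-OS fingerprint database, built offline from clean installs:
# OS name -> set of SHA-256 hashes of that OS's interrupt-handler code.
OS_FINGERPRINTS = {
    "Linux/RHEL 5.2 (32-bit)": {
        hashlib.sha256(b"rhel52-divide-error-handler").hexdigest(),
        hashlib.sha256(b"rhel52-page-fault-handler").hexdigest(),
    },
    "Windows XP SP2 (32-bit)": {
        hashlib.sha256(b"xpsp2-divide-error-handler").hexdigest(),
        hashlib.sha256(b"xpsp2-page-fault-handler").hexdigest(),
    },
}

def identify_guest_os(handler_code_blocks):
    """Return the OS whose fingerprint best matches the observed handlers.

    handler_code_blocks: bytes of the code fragments reached from the
    guest's IDT entries, read through introspection. Returns None when
    no known OS matches any observed handler."""
    observed = {hashlib.sha256(b).hexdigest() for b in handler_code_blocks}
    best_os, best_score = None, 0
    for os_name, fingerprint in OS_FINGERPRINTS.items():
        score = len(observed & fingerprint)   # count matching handler hashes
        if score > best_score:
            best_os, best_score = os_name, score
    return best_os
```

Because the comparison is on hashes of code actually resident in guest memory, this kind of check cannot be fooled by the network-facing tricks that defeat nmap-style fingerprinting: a honeyd personality changes packet behavior, not the kernel's interrupt handlers.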
.exe process, code which it later executes as a remote thread. When activating W32/Haxdoor.AU on our monitored 32-bit Windows XP SP2 VM, the anti-rootkit shows changes to six system services (NtQueryDirectoryFile, NtOpenProcess, NtQuerySystemInformation, NtCreateProcessEx, NtCreateProcess, and NtOpenThread). To illustrate the information the anti-rootkit yields, Figure 5 shows the event it generates when the NtQuerySystemInformation service entry (SSDT Entry #173) is manipulated. In this example, the entry is redirected to point into rootkit code mapped from the file ycsvgd.sys.

[1] Event source: ARK Engine 1, Pid 32369
[2] Type of event: SSDT, Entry 173
[3] Driver: \??\C:\WINDOWS\system32\ycsvgd.sys
[4] Owner: W32/Haxdoor.AU
[5] ControlFlowHash: [SHA256 hash]
[6] Severity: High
[7] Action: Monitor

Figure 5: Information provided in a rootkit-detection event.
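The SSDT check behind events like the one in Figure 5 can be sketched as follows. This is only an illustration: the function name, the flat-list representation of the SSDT, and the addresses are assumptions for the sketch, not the paper's implementation, which reads the table through the hypervisor's introspection interface.

```python
# Sketch of an SSDT integrity check: flag any service entry whose
# handler address lies outside every trusted kernel code region.
# (Hypothetical interfaces; a real engine would obtain the table and
# the module map via introspection of guest memory.)

def find_hooked_entries(ssdt, trusted_ranges):
    """Return (index, target) pairs for entries pointing outside
    all trusted [lo, hi) kernel code ranges."""
    hooked = []
    for index, target in enumerate(ssdt):
        if not any(lo <= target < hi for lo, hi in trusted_ranges):
            hooked.append((index, target))
    return hooked

# Example: entry 173 (NtQuerySystemInformation) redirected into
# rootkit code mapped from ycsvgd.sys; addresses are illustrative.
ntoskrnl = (0x804D7000, 0x806E0000)
ssdt = [0x805B0000] * 300
ssdt[173] = 0xF8A12000  # points into the rootkit driver

for index, target in find_hooked_entries(ssdt, [ntoskrnl]):
    print(f"SSDT entry {index} hooked, handler at {target:#x}")
# SSDT entry 173 hooked, handler at 0xf8a12000
```

In this sketch a hooked entry is simply reported; the anti-rootkit engine described above additionally attributes the target address to the driver file that mapped it.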
Semantic monitoring of the kernel control flow also enables the anti-rootkit engine to ensure that the routines on which firewalls and antivirus software rely to inspect files and traffic remain active and unaltered. We can detect and raise an alert if a rootkit (cf. Unhooker) succeeds in unhooking these routines, rendering the running AV or firewall ineffective. Additionally, the anti-rootkit engine can restore the hooks used by the firewall or AV software, taking care to perform this step only if the corresponding firewall or AV code is still present in memory.
We focused our current implementation of the anti-rootkit engine on kernel-level malware, as monitoring the user space of the guest VM would impose excessive overhead: too many events would need to be monitored, leading to many expensive context switches to and from the Security VM. We plan to address this limitation by injecting security agents into the guest VM, which would then run locally to identify user-space malware.
5.3 Performance
To measure the performance of the introspection, we performed two categories of benchmarks. In the first, we measured the performance of the introspection layer itself, in particular the rate at which memory can be copied from the monitored guest to the SVM. In our setting, we could copy 2500 pages/second, well above the 2.8 memory pages inspected on average by the secure-introspection technique during the steady state of the guest OS. When the monitor first connects to a guest VM, the number of pages initially retrieved can reach 200.

For the second test, we measured the actual impact of the monitoring on guest performance. We used httperf to assess the performance of a web server running in the guest with monitoring enabled and disabled. The overhead from monitoring with periodic (one-second) checks was less than 2%.
6. CONCLUSIONS AND FUTURE WORK
While clouds move workloads closer together to save energy by better utilizing hardware, they also depend on reliable malware detection and immediate intrusion response to mitigate the impact of malicious guests on closely co-located peers. In this work, we have described how we securely bridge the semantic gap to recover operating-system semantics. The presented solution enables novel security services for fast-changing cloud environments where customers run a variety of guest operating systems, which need to be monitored closely and quarantined promptly in case of compromise.

We are currently extending our framework with a mechanism to transparently inject a context agent from a Security VM into guest VMs through the introspection interface. While the agent itself has to be protected through introspection by the Security VM, it can bridge the semantic gap by providing the Security VM with high-level information about the guest VM, such as the list of running processes, open files and network connections, logged-in users, loaded kernel modules, and so on. Agent injection holds the promise of bridging the semantic gap to any level of detail desired while eliminating most of the monitoring overhead.
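As a sketch of the kind of report such a context agent might send, consider the structured message below. The message format and field names are purely illustrative; the paper leaves the agent protocol as future work.

```python
# Illustrative report a context agent could send to the Security VM.
# The schema is an assumption, not part of the paper's design.
import json

def build_report(processes, connections, logged_users, kernel_modules):
    """Bundle high-level guest state into one JSON message."""
    return json.dumps({
        "processes": processes,           # e.g. [{"pid": 1, "name": "init"}]
        "connections": connections,       # open network connections
        "logged_users": logged_users,     # users with active sessions
        "kernel_modules": kernel_modules, # loaded kernel modules
    })

report = build_report([{"pid": 1, "name": "init"}], [], ["root"], ["ext4"])
```

Because the agent runs inside the guest, the Security VM would still need to protect the agent's code and verify such reports through introspection before trusting them.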