
UNIT III SECURE API DEVELOPMENT

API Security: Session Cookies, Token-Based Authentication; Securing Natter APIs: Addressing Threats with Security Controls, Rate Limiting for Availability, Encryption, Audit Logging; Securing Service-to-Service APIs: API Keys, OAuth2; Securing Microservice APIs: Service Mesh, Locking Down Network Connections, Securing Incoming Requests.

API
API
An API handles requests from clients on behalf of users. Clients may be web
browsers, mobile apps, devices in the Internet of Things, or other APIs. The API
services requests according to its internal logic and then at some point returns
a response to the client. The implementation of the API may require talking to
other “backend” APIs, provided by databases or processing systems.

API security
API security lies at the intersection of several security disciplines, as shown in figure 1.2. The most important of these are the following three areas:
1. Information security (InfoSec) is concerned with the protection of information over its full life cycle, from creation, through storage, transmission, and backup, to eventual destruction.
2. Network security deals with both the protection of data flowing over a network and the prevention of unauthorized access to the network itself.
3. Application security (AppSec) ensures that software systems are designed and built to withstand attacks and misuse.
1. API Security
1.1 Session Cookies

To access the session associated with a request, you can use the request.session() method: Session session = request.session(true);
Spark will check to see if a session cookie is present on the request, and if so, it will look up any state associated with that session in its internal database.
To create a new token, you can simply create a new session associated with the request and then store the token attributes as attributes of the session. Spark will take care of storing these attributes in its session database and setting the appropriate Set-Cookie header.
To read tokens, you can just check to see if a session is associated with the request, and if so, populate the Token object from the attributes on the session. Again, Spark takes care of checking if the request has a valid session Cookie header and looking up the attributes in its session database. If there is no valid session cookie associated with the request, then Spark will return a null session object, which you can then return as an Optional.empty() value to indicate that no token is associated with this request.
package com.manning.apisecurityinaction.token;

import java.util.Optional;
import spark.Request;

public class CookieTokenStore implements TokenStore {

    @Override
    public String create(Request request, Token token) {
        // WARNING: session fixation vulnerability; see section 1.1.1
        var session = request.session(true);
        session.attribute("username", token.username);
        session.attribute("expiry", token.expiry);
        session.attribute("attrs", token.attributes);
        return session.id();
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        var session = request.session(false);
        if (session == null) {
            return Optional.empty();
        }
        var token = new Token(session.attribute("expiry"),
                session.attribute("username"));
        token.attributes.putAll(session.attribute("attrs"));
        return Optional.of(token);
    }
}
You can now wire up the TokenController to a real TokenStore
implementation.
TokenStore tokenStore = new CookieTokenStore();
var tokenController = new TokenController(tokenStore);
Save the file and restart the API. You can now try out creating a new session.
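For example, assuming the API is running locally on port 4567 and a user named demo has already been registered (both details are illustrative), you can log in with HTTP Basic authentication and inspect the Set-Cookie header in the response:

curl -i -u demo:password -X POST https://localhost:4567/sessions

The response should contain a JSESSIONID session cookie along with the token ID in the JSON body.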
1.1.1 Avoiding session fixation attacks
The CookieTokenStore faces a security vulnerability where it fails to generate a
new session token after user authentication, making it susceptible to a session
fixation attack. In this scenario, an attacker could inject their own session
cookie into a user's browser, gaining unauthorized access once the user logs in.
This is particularly problematic if the user already has an existing session.
To address this, the CookieTokenStore is updated to check for an existing
session cookie using `request.session(false)`. If a session exists, it is invalidated
to ensure the creation of a new session using `request.session(true)`. The
updated `create` method protects against the session fixation vulnerability.
@Override
public String create(Request request, Token token) {
    var session = request.session(false);
    if (session != null) {
        session.invalidate();
    }
    session = request.session(true);
    session.attribute("username", token.username);
    session.attribute("expiry", token.expiry);
    session.attribute("attrs", token.attributes);
    return session.id();
}
This modification helps prevent session fixation attacks by ensuring the
invalidation of any existing session, prompting the generation of a new random
session identifier after user authentication.
Cookie Security Attributes:
 The Spark-generated Set-Cookie header for JSESSIONID includes
attributes like Secure and HttpOnly.
 Always set cookies with the most restrictive attributes, such as Secure
and HttpOnly, for security purposes.
 Avoid setting a Domain attribute unless strictly necessary, because it makes the cookie available to all sub-domains; a compromised sub-domain could then steal or overwrite your session cookies.
Sub-Domain Hijacking:
 Sub-domain hijacking occurs when an attacker claims an abandoned
web host with valid DNS records.
 It often happens with temporary sites on shared services, allowing
attackers to serve content from the compromised sub-domain.
Cookie Naming Conventions:
 Some browsers support naming conventions (__Secure- and __Host-) to
enforce specific security attributes for cookies.
 Use these prefixes to prevent accidental mistakes and ensure protection
against overwriting by cookies with weaker attributes.
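As an illustration (the cookie value is a placeholder), a hardened session cookie using the __Host- prefix might be set like this:

Set-Cookie: __Host-JSESSIONID=aGVsbG8ud29ybGQ; Secure; HttpOnly; SameSite=strict; Path=/

Browsers only accept the __Host- prefix when the cookie is marked Secure, has Path=/, and has no Domain attribute, which prevents a weaker cookie set elsewhere from overriding it.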
Validating Session Cookies:
 Implement token validation for cookie-based login to allow requests
with a valid session cookie.
 The validateToken method in TokenController extracts the username
from the session and sets it as the subject attribute in the request.
 Note: The code is vulnerable to Cross-Site Request Forgery attacks.
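A minimal sketch of such a validateToken filter, assuming the TokenController holds a reference to the TokenStore and that the Token class exposes the expiry, username, and attributes fields shown in section 1.2 (details may differ in a full implementation):

public void validateToken(Request request, Response response) {
    // WARNING: still vulnerable to CSRF, as noted above
    // Look up the token for the session cookie, if one is present
    tokenStore.read(request, null).ifPresent(token -> {
        if (java.time.Instant.now().isBefore(token.expiry)) {
            // Record the authenticated user so later access control checks can use it
            request.attribute("subject", token.username);
            token.attributes.forEach(request::attribute);
        }
    });
}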
Token Validation Filter:
 Wire up the token validation filter in Main.java after the existing
authentication filter.
 This filter populates the subject attribute if valid authentication
credentials are found, allowing subsequent access control checks to
pass.
 The API supports both session cookie and HTTP Basic authentication
methods.
Testing with Session Cookie:
 Create a test user and obtain a session cookie using curl.
 Use the obtained session cookie to make requests to API endpoints,
either manually or with curl's -b option.
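For example (the host, user, and endpoint are illustrative), you can save the session cookie to a cookie jar on login and replay it on later requests:

# Log in with HTTP Basic authentication and save the session cookie
curl -i -c /tmp/cookies -u demo:password -X POST https://localhost:4567/sessions

# Reuse the saved session cookie on a subsequent API request
curl -b /tmp/cookies https://localhost:4567/spaces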
1.2 Token-Based Authentication
To address concerns about HTTP Basic authentication drawbacks, the API will
adopt token-based authentication. In this approach, users log in with their
credentials, and the API issues a time-limited token for subsequent requests,
providing a more efficient authentication experience.

1. Token-Based Authentication Flow:


- Client sends credentials to a dedicated login endpoint.
- The login endpoint verifies credentials and issues a time-limited token.
- Client includes the token in subsequent API requests for authentication.
2. Token Store:
- The API validates the token using a shared token store, often a database
indexed by the token ID.
- More advanced token storage solutions are explored in Chapter 6.
3. Session Tokens:
- Short-lived tokens, termed session tokens, authenticate user interactions
with the site or API.
- For web browsers, tokens can be stored as HTTP cookies, sent in subsequent
requests until expiration or deletion.

4. Cookie-Based Storage:
- Cookies are a traditional choice for storing tokens in first-party clients on the
same origin as the API.
- Chapter 5 explores an alternative using HTML5 local storage, addressing
challenges with third-party clients and domains.
Token Store Abstraction
To facilitate various token storage options, an interface `TokenStore` is
introduced for seamless interchangeability. The associated `Token` class
encapsulates essential attributes:

package com.manning.apisecurityinaction.token;

import java.time.Instant;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import spark.Request;

public interface TokenStore {
    String create(Request request, Token token);
    Optional<Token> read(Request request, String tokenId);

    class Token {
        public final Instant expiry;
        public final String username;
        public final Map<String, String> attributes;

        public Token(Instant expiry, String username) {
            this.expiry = expiry;
            this.username = username;
            this.attributes = new ConcurrentHashMap<>();
        }
    }
}
Implementing Token-Based Login
The creation of a login endpoint using the abstract `TokenStore` involves
implementing the `TokenController`. This controller leverages existing HTTP
Basic authentication functionality from the `UserController` and constructs a
token based on the authenticated user:
**TokenController.java:**
package com.manning.apisecurityinaction.controller;

import java.time.temporal.ChronoUnit;
import org.json.JSONObject;
import com.manning.apisecurityinaction.token.TokenStore;
import spark.*;

import static java.time.Instant.now;

public class TokenController {
    private final TokenStore tokenStore;

    public TokenController(TokenStore tokenStore) {
        this.tokenStore = tokenStore;
    }

    public JSONObject login(Request request, Response response) {
        String subject = request.attribute("subject");
        var expiry = now().plus(10, ChronoUnit.MINUTES);
        var token = new TokenStore.Token(expiry, subject);
        var tokenId = tokenStore.create(request, token);
        response.status(201);
        return new JSONObject().put("token", tokenId);
    }
}
**Main.java (Integration):**
import com.manning.apisecurityinaction.controller.TokenController;

TokenStore tokenStore = null; // Replace with a real implementation
var tokenController = new TokenController(tokenStore);

before(userController::authenticate);

var auditController = new AuditController(database);
before(auditController::auditRequestStart);
afterAfter(auditController::auditRequestEnd);

before("/sessions", userController::requireAuthentication);
post("/sessions", tokenController::login);
This integration adds the `TokenController` as a new endpoint (`/sessions`) for
clients to obtain a session token after successful HTTP Basic authentication.
The actual implementation of the `TokenStore` is pending.

2. Securing Natter APIs

2.1 Addressing threats with Security Controls
You’ll protect the Natter API against common threats by applying some basic
security mechanisms (also known as security controls). Figure 3.1 shows the
new mechanisms that you’ll develop, and you can relate each of them to a
STRIDE threat (chapter 1) that they prevent:
 Rate-limiting is used to prevent users overwhelming your API with
requests, limiting denial of service threats.
 Encryption ensures that data is kept confidential when sent to or from
the API and when stored on disk, preventing information disclosure.
Modern encryption also prevents data being tampered with.
 Authentication makes sure that users are who they say they are,
preventing spoofing. This is essential for accountability, but also a
foundation for other security controls.
 Audit logging is the basis for accountability, to prevent repudiation
threats.
 Finally, you’ll apply access control to preserve confidentiality and
integrity, preventing information disclosure, tampering and elevation of
privilege attacks.
NOTE An important detail, shown in figure 3.1, is that only rate-limiting and
access control directly reject requests. A failure in authentication does not
immediately cause a request to fail, but a later access control decision may
reject a request if it is not authenticated. This is important because we want to
ensure that even failed requests are logged, which they would not be if the
authentication process immediately rejected unauthenticated requests.
Together these five basic security controls address the six basic STRIDE threats
of spoofing, tampering, repudiation, information disclosure, denial of service,
and elevation of privilege that were discussed in chapter 1. Each security
control is discussed and implemented in the rest of this chapter.
2.2 Rate Limiting for Availability
Threats against availability, such as denial of service (DoS) attacks, can be very
difficult to prevent entirely. Such attacks are often carried out using hijacked
computing resources, allowing an attacker to generate large amounts of traffic
with little cost to themselves. Defending against a DoS attack, on the other
hand, can require significant resources, costing time and money. But there are
several basic steps you can take to reduce the opportunity for DoS attacks.

Many DoS attacks are carried out using unauthenticated requests. One simple way to limit these kinds of attacks is to never let unauthenticated requests consume resources on your servers. Authentication is covered in section 3.3 and should be applied immediately after rate-limiting, before any other processing. However, authentication itself can be expensive, so this doesn't eliminate DoS threats on its own.
Many DDoS attacks rely on some form of amplification so that an
unauthenticated request to one API results in a much larger response that can
be directed at the real target. A popular example are DNS amplification
attacks, which take advantage of the unauthenticated Domain Name System
(DNS) that maps host and domain names into IP addresses. By spoofing the
return address for a DNS query, an attacker can trick the DNS server into
flooding the victim with responses to DNS requests that they never sent. If
enough DNS servers can be recruited into the attack, then a very large amount
of traffic can be generated from a much smaller amount of request traffic, as
shown in figure 3.2. By sending requests from a network of compromised machines (known as a botnet), the attacker can generate very large amounts of traffic to the victim at little cost to themselves. DNS amplification is an example of a network-level DoS attack. These attacks can be mitigated by filtering out harmful traffic entering your network using a firewall. Very large attacks can often only be handled by specialist DoS protection services provided by companies that have enough network capacity to handle the load.
Network-level DoS attacks can be easy to spot because the traffic is unrelated
to legitimate requests to your API. Application-layer DoS attacks attempt to
overwhelm an API by sending valid requests, but at much higher rates than a
normal client. A basic defense against application-layer DoS attacks is to apply
rate-limiting to all requests, ensuring that you never attempt to process more
requests than your server can handle. It is better to reject some requests in
this case, than to crash trying to process everything. Genuine clients can retry
their requests later when the system has returned to normal.

Rate-limiting should be the very first security decision made when a request reaches your API. Because the goal of rate-limiting is to ensure that your API has enough resources to process the requests it accepts, you need to ensure that requests exceeding your API's capacity are rejected quickly and very early in processing. Other security controls, such as authentication, can use significant resources, so rate-limiting must be applied before those processes, as shown in figure 3.3.
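One simple way to implement such a rate limit in a Spark API is sketched below, assuming Google Guava's RateLimiter class is available on the classpath; the limit of 2 requests per second is purely illustrative:

import com.google.common.util.concurrent.RateLimiter;
import static spark.Spark.*;

public class Main {
    public static void main(String... args) {
        // Allow roughly 2 requests per second across the whole API (illustrative)
        var rateLimiter = RateLimiter.create(2.0d);
        before((request, response) -> {
            if (!rateLimiter.tryAcquire()) {
                // Tell well-behaved clients when they may retry, then reject the request
                response.header("Retry-After", "2");
                halt(429);
            }
        });
        // ... the rest of the API initialization follows
    }
}

Because this filter runs before authentication and every other filter, a request that exceeds the limit is rejected with a 429 Too Many Requests status before it consumes any further resources.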
2.3 Encryption
 Without encryption, the messages you send to and from the API will be
readable by anybody else connected to the same hotspot. Your simple
password authentication scheme is also vulnerable to this snooping, as
an attacker with access to the network can simply read your Base64-
encoded passwords as they go by. They can then impersonate any user
whose password they have stolen. It’s often the case that threats are
linked together in this way. An attacker can take advantage of one
threat, in this case information disclosure from unencrypted
communications, and exploit that to pretend to be somebody else,
undermining your API’s authentication. Many successful real-world
attacks result from chaining together multiple vulnerabilities rather than
exploiting just one mistake.
 In this case, sending passwords in clear text is a pretty big vulnerability,
so let’s fix that by enabling HTTPS. HTTPS is normal HTTP, but the
connection occurs over Transport Layer Security (TLS), which provides
encryption and integrity protection. Once correctly configured, TLS is
largely transparent to the API because it occurs at a lower level in the
protocol stack and the API still sees normal requests and responses.

 In addition to protecting data in transit (on the way to and from our
application), you should also consider protecting any sensitive data at
rest, when it is stored in your application’s database. Many different
people may have access to the database, as a legitimate part of their job,
or due to gaining illegitimate access to it through some other
vulnerability.
Enabling HTTPS
 Enabling HTTPS support in Spark is straightforward. First, you need to
generate a certificate that the API will use to authenticate itself to its
clients. When a client connects to your API, it will use a URI that includes the hostname of the server the API is running on, for example api.example.com. The server must present a certificate, signed by a trusted certificate authority (CA). If an invalid certificate is presented, or it doesn't match the host that the client wanted to connect to, then the client will abort the connection.
 Without this step, the client might be tricked into connecting to the
wrong server and then send its password or other confidential data to
the imposter. Because you’re enabling HTTPS for development purposes
only, you could use a self-signed certificate.
 A tool called mkcert (https://mkcert.dev) simplifies the process
considerably. Follow the instructions on the mkcert homepage to install
it, and then run mkcert -install to generate the CA certificate and install
it. The CA cert will automatically be marked as trusted by web browsers
installed on your operating system
 The certificate and private key can be generated in PKCS#12 format (for example, by running mkcert -pkcs12 localhost); this produces a file called localhost.p12. By default, the password for this file is changeit. You can
now enable HTTPS support in Spark by adding a call to the secure() static
method, as shown in listing 3.4. The first two arguments to the method
give the name of the keystore file containing the server certificate and
private key. Leave the remaining arguments as null; these are only
needed if you want to support client certificate authentication
import static spark.Spark.secure;

public class Main {
    public static void main(String... args) throws Exception {
        secure("localhost.p12", "changeit", null, null);
    }
}
 Restart the server for changes to take effect.
 Use curl with the CA certificate for secure API access.
Strict transport security
When a user visits a website in a browser, the browser will first attempt to
connect to the non-secure HTTP version of a page as many websites still do not
support HTTPS. A secure site will redirect the browser to the HTTPS version of
the page. For an API, you should only expose the API over HTTPS because users
will not be directly connecting to the API endpoints using a web browser and
so you do not need to support this legacy behavior. API clients also often send
sensitive data such as passwords on the first request so it is better to
completely reject non-HTTPS requests. If for some reason you do need to
support web browsers directly connecting to your API endpoints, then best
practice is to immediately redirect them to the HTTPS version of the API and to
set the HTTP Strict-Transport-Security (HSTS) header to instruct the browser to
always use the HTTPS version in future. If you add the following line to the
afterAfter filter in your main method, it will add an HSTS header to all
responses:
response.header("Strict-Transport-Security", "max-age=31536000");
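In context, this is a one-line addition to the afterAfter filter (a sketch; the surrounding filter is assumed to already exist in your main method):

afterAfter((request, response) -> {
    // Instruct browsers to require HTTPS for this host for the next year
    response.header("Strict-Transport-Security", "max-age=31536000");
});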

2.4 Audit logging
Audit logging should occur after authentication, so that you know who is
performing an action, but before you make authorization decisions that may
deny access. The reason for this is that you want to record all attempted
operations, not just the successful ones. Unsuccessful attempts to perform
actions may be indications of an attempted attack. It’s difficult to overstate the
importance of good audit logging to the security of an API. Audit logs should be
written to durable storage, such as the file system or a database, so that the
audit logs will survive if the process crashes for any reason.
Thankfully, given the importance of audit logging, it’s easy to add some basic
logging capability to your API. In this case, you’ll log into a database table so
that you can easily view and search the logs from the API itself.

As for previous new functionality, you’ll add a new database table to store the
audit logs. Each entry will have an identifier (used to correlate the request and
response logs), along with some details of the request and the response. Add
the following table definition to schema.sql.
CREATE TABLE audit_log(
    audit_id INT NULL,
    method VARCHAR(10) NOT NULL,
    path VARCHAR(100) NOT NULL,
    user_id VARCHAR(30) NULL,
    status INT NULL,
    audit_time TIMESTAMP NOT NULL
);
CREATE SEQUENCE audit_id_seq;
As before, you also need to grant appropriate permissions to the
natter_api_user, so in the same file add the following line to the bottom of the
file and save:
GRANT SELECT, INSERT ON audit_log TO natter_api_user;
A new controller can now be added to handle the audit logging. You split the
logging into two filters, one that occurs before the request is processed (after
authentication), and one that occurs after the response has been produced.
This ensures that if the process crashes while processing a request you can still
see what requests were being processed at the time. If you only logged
responses, then you’d lose any trace of a request if the process crashes, which
would be a problem if an attacker found a request that caused the crash. To
allow somebody reviewing the logs to correlate requests with responses,
generate a unique audit log ID in the auditRequestStart method and add it as
an attribute to the request. In the auditRequestEnd method, you can then
retrieve the same audit log ID so that the two log events can be tied together.
The audit log controller
package com.manning.apisecurityinaction.controller;

import org.dalesbred.*;
import org.json.*;
import spark.*;

import java.sql.*;
import java.time.*;
import java.time.temporal.*;

public class AuditController {
    private final Database database;

    public AuditController(Database database) {
        this.database = database;
    }

    public void auditRequestStart(Request request, Response response) {
        database.withVoidTransaction(tx -> {
            var auditId = database.findUniqueLong(
                "SELECT NEXT VALUE FOR audit_id_seq");
            request.attribute("audit_id", auditId);
            database.updateUnique(
                "INSERT INTO audit_log(audit_id, method, path, " +
                    "user_id, audit_time) " +
                "VALUES(?, ?, ?, ?, current_timestamp)",
                auditId,
                request.requestMethod(),
                request.pathInfo(),
                request.attribute("subject"));
        });
    }

    public void auditRequestEnd(Request request, Response response) {
        database.updateUnique(
            "INSERT INTO audit_log(audit_id, method, path, status, " +
                "user_id, audit_time) " +
            "VALUES(?, ?, ?, ?, ?, current_timestamp)",
            request.attribute("audit_id"),
            request.requestMethod(),
            request.pathInfo(),
            response.status(),
            request.attribute("subject"));
    }
}
You can then wire this new controller into your main method, taking care to
insert the filter between your authentication filter and the access control filters
for individual operations. Because Spark filters must either run before or after
(and not around) an API call, you define separate filters to run before and after
each request.
Once installed and the server has been restarted, make some sample requests, and then view the audit log. You can use the jq utility (https://stedolan.github.io/jq/) to pretty-print the output.
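For example, assuming the audit log is exposed at a /logs endpoint (this unit does not define such an endpoint, so treat the path as an assumption) and the API is running locally:

curl https://localhost:4567/logs | jq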
This style of log is a basic access log that records the raw HTTP requests and responses to your API. Another way to create an audit log is to capture events
in the business logic layer of your application, such as User Created or Message
Posted events. These events describe the essential details of what happened
without reference to the specific protocol used to access the API. Yet another
approach is to capture audit events directly in the database using triggers to
detect when data is changed. The advantage of these alternative approaches is
that they ensure that events are logged no matter how the API is accessed, for
example, if the same API is available over HTTP or using a binary RPC protocol.
The disadvantage is that some details are lost, and some potential attacks may
be missed due to this missing detail.

3. Securing service-to-service APIs

3.1 API Keys
One of the most common forms of service authentication is an API key, which
is a simple bearer token that identifies the service client. An API key is very
similar to the tokens you’ve used for user authentication in previous chapters,
except that an API key identifies a service or business rather than a user and
usually has a long expiry time. Typically, a user logs in to a website (known as a
developer portal) and generates an API key that they can then add to their
production environment to authenticate API calls, as shown in figure 11.1.
Any of the token formats are suitable for generating API keys, with the
username replaced by an identifier for the service or business that API usage
should be associated with and the expiry time set to a few months or years in
the future. Permissions or scopes can be used to restrict which API operations each client can call, and the resources they can read or modify, just as you’ve done for users in previous chapters—the same techniques apply. An
increasingly common choice is to replace ad hoc API key formats with standard
JSON Web Tokens. In this case, the JWT is generated by the developer portal
with claims describing the client and expiry time, and then either signed or
encrypted with one of the symmetric authenticated encryption schemes. This
is known as JWT bearer authentication, because the JWT is acting as a pure
bearer token: any client in possession of the JWT can use it to access the APIs it
is valid for without presenting any other credentials. The JWT is usually passed
to the API in the Authorization header using the standard Bearer scheme.
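For example (the request path and the truncated token value are placeholders), a client holding a JWT API key would send it on every call like this:

GET /some/resource HTTP/1.1
Host: api.example.com
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJzZXJ2aWNlQSJ9.dummy-signature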
3.2 OAuth2
OAuth2 introduces the Client Credentials Grant, facilitating service-to-service
API communication without user involvement. This allows an OAuth2 client to
obtain an access token directly from the authorization server (AS) using its own
credentials, paving the way for seamless service-to-service API calls.
Access Token Acquisition:
To obtain an access token, the OAuth2 client makes an HTTPS request to the
AS's token endpoint, specifying the client_credentials grant type and the
required scopes. Client authentication is achieved using methods like
client_secret_basic, where the client presents its ID and secret using HTTP
Basic authentication.
curl -X POST -u test:password \
    -d "grant_type=client_credentials" -d "scope=read write" \
    https://authorization-server/token
Usage and Validation:
Once acquired, the access token is used like any other OAuth2 access token for API access. The API validates the token through token introspection or by validating it directly, depending on its format (for example, a signed JWT can be verified locally).
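For instance, an API can validate an opaque token by calling the AS's token introspection endpoint (RFC 7662); the endpoint path and the API's own client credentials shown here are assumptions for illustration:

curl -X POST -u api-client:api-secret \
    -d "token=<access_token>" \
    https://authorization-server/introspect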
Service Accounts:
In scenarios where OAuth2 clients lack a central database for user accounts,
Service Accounts offer a solution. Similar to regular user accounts, service
accounts provide roles and permissions for effective access control.
Administrators manage service accounts using familiar tools.
Service Account Grant:
For clients dealing with service accounts, a non-interactive grant type, such as
the Resource Owner Password Credentials (ROPC), allows the client to submit
service account credentials directly to the token endpoint.
curl -X POST -d
"grant_type=password&username=serviceA&password=<serviceA_password>"
https://authorization-server/token
Considerations:
 Credential Management: Clients handle both OAuth2 client and service
account credentials.
 Streamlining Credentials: Clients may use the same credentials for both
or opt for a public client if AS features permit.
The OAuth2 Client Credentials Grant, coupled with Service Accounts, enhances
API security, enabling efficient service-to-service interactions while maintaining
robust access control mechanisms.
4. Securing Microservice APIs
4.1 Service Mesh
Introduction: In dynamic environments like Kubernetes, managing Public Key
Infrastructure (PKI) manually is impractical. Tools like Cloudflare’s PKI toolkit or
HashiCorp Vault can automate PKI tasks, but integrating them into Kubernetes
requires effort. Alternatively, service meshes like Istio or Linkerd simplify TLS
management between services within the cluster.

How Service Mesh Works:


 Service meshes install lightweight proxies as sidecar containers in each
pod, intercepting both incoming and outgoing network requests.
 Proxies act as reverse proxies, transparently initiating and terminating
TLS, ensuring secure communications within the network.
 A central Certificate Authority (CA) service in the service mesh
distributes certificates to proxies, automatically generating and
renewing certificates based on Kubernetes service metadata.
Installing Linkerd:
1. Install the Linkerd command-line interface (CLI) using Homebrew or
download it from the official releases.
2. Run pre-installation checks: linkerd check --pre
3. Install control plane components: linkerd install | kubectl apply -f -
4. Check installation progress: linkerd check
Enabling Linkerd for Natter Namespace:
1. Add the Linkerd annotation to the namespace YAML file (natter-namespace.yaml):
apiVersion: v1
kind: Namespace
metadata:
  name: natter-api
  labels:
    name: natter-api
  annotations:
    linkerd.io/inject: enabled
2. Update the namespace definition:
kubectl apply -f kubernetes/natter-namespace.yaml
Force Restart Pods: Restart pods in the namespace to inject Linkerd sidecar
proxies:
kubectl rollout restart deployment natter-database-deployment -n natter-api
kubectl rollout restart deployment link-preview-deployment -n natter-api
kubectl rollout restart deployment natter-api-deployment -n natter-api
Verification with Linkerd Tap: Monitor network connections with Linkerd tap
utility: linkerd tap ns/natter-api
 HTTP APIs are automatically upgraded to HTTPS within the service mesh.
Limitations:
 Linkerd currently auto-upgrades only HTTP traffic to TLS.
 For non-HTTP protocols, manual TLS certificate setup is needed.
Conclusion: Linkerd simplifies service-to-service communication security
within Kubernetes, providing a robust solution with reduced complexity
compared to other service meshes.
4.2 Locking Down Network Connections
Lateral Movement Threats: When attackers compromise one application in a
Kubernetes cluster, they can exploit lateral movement, making unauthorized
connections to other services. To mitigate this, Kubernetes provides Network
Policies, allowing fine-grained control over pod-to-pod communications.
Network Policies Overview: Network Policies define rules for ingress
(incoming) and egress (outgoing) traffic between pods. By implementing these
policies, you can prevent unauthorized access between services, addressing
the risk of lateral movement within the cluster.
Example Network Policy (YAML):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-network-policy
  namespace: natter-api
spec:
  podSelector:
    matchLabels:
      app: natter-database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: natter-api
      ports:
        - protocol: TCP
          port: 9092
Explanation:
 podSelector: Identifies pods affected by the policy.
 policyTypes: Specifies Ingress and Egress policy types.
 ingress: Allows incoming connections from app: natter-api pods to TCP
port 9092.
 egress: No specific egress rules defined, blocking all outbound
connections.
Implementation Considerations:
 Minikube Limitation: Minikube doesn't enforce network policies by
default.
 Cloud Providers: Major providers (Google, Amazon, Microsoft) generally
support network policy enforcement.
 Self-Hosted Clusters: Consider plugins like Calico or Cilium for
enforcement.
 Advanced Solutions: Istio provides advanced network authorization
rules within the service mesh, allowing granular control based on service
identities.
Conclusion: Effectively implementing and comprehending network policies in
Kubernetes is crucial for fortifying the security of your cluster. By defining and
enforcing rules for pod-to-pod connections, you can thwart unauthorized
interactions and enhance overall cluster security, especially against lateral
movement threats.
4.3 Securing Incoming Requests
Securing external access to your microservice API in Kubernetes is essential. By
leveraging an Ingress controller, you can enforce security controls like TLS
termination, rate-limiting, and audit logging. This enhances the overall security
posture of your cluster.

Enabling Ingress Controller in Minikube:


1. Annotate the kube-system namespace for Linkerd injection:
kubectl annotate namespace kube-system linkerd.io/inject=enabled
2. Enable the Ingress addon: minikube addons enable ingress
Configuring Ingress for Natter API: Create an Ingress configuration file, natter-ingress.yaml, to define routing and TLS settings. This example includes an upstream-vhost annotation for the NGINX ingress controller and TLS termination.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  namespace: natter-api
  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost:
      "$service_name.$namespace.svc.cluster.local:$service_port"
spec:
  tls:
    - hosts:
        - api.natter.local
      secretName: natter-tls
  rules:
    - host: api.natter.local
      http:
        paths:
          - backend:
              serviceName: natter-api-service
              servicePort: 4567

TLS Configuration: Generate TLS certificates with mkcert and create a Kubernetes secret:

mkcert api.natter.local
kubectl create secret tls natter-tls -n natter-api \
    --key=api.natter.local-key.pem --cert=api.natter.local.pem
Applying Configuration: Apply the Ingress configuration to expose the
Natter API externally:
kubectl apply -f kubernetes/natter-ingress.yaml
Testing: You can now make secure HTTPS calls to the API using tools like
curl.
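For example, after pointing api.natter.local at the cluster's external IP (with Minikube, add an /etc/hosts entry for the address printed by minikube ip), a test request might look like the following; the user credentials and endpoint are illustrative, and the --cacert option is only needed if the mkcert development CA is not already trusted:

curl --cacert "$(mkcert -CAROOT)/rootCA.pem" \
    -u demo:password https://api.natter.local/spaces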
Result:
 External requests are received by the Ingress controller.
 TLS termination occurs at the Ingress controller.
 Linkerd service mesh ensures mTLS between the Ingress controller and
backend services, enhancing overall security.
This setup provides a secure gateway for external clients to interact with
your Kubernetes-based microservice API.
