This repository contains an introductory demo of Strimzi and Apache Kafka on Kubernetes. I use it in introductory talks and workshops about Strimzi.
The slides accompanying this demo can be found here.
This demo was last used with Strimzi 0.45.0 on Kubernetes 1.32.
It might not work with other Strimzi versions, other Kubernetes versions, or other Kubernetes distributions.
It should be executed from a namespace named `myproject`, and it expects that namespace to be set as the default namespace.
When used with other namespaces, you might need to change the YAML files and/or commands accordingly.
The commands used in the demo expect that you checked out this repository and are inside it in your terminal.
You can also check out the various tags in this repository for different variants of this demo (different Strimzi versions, Kubernetes distributions, etc.).
## Installation

- Deploy Strimzi 0.45.0 in your cluster.
  You can install it from the Operator Hub or using the YAML files:

  ```
  kubectl apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.45.0/strimzi-cluster-operator-0.45.0.yaml
  ```
## Deploying Kafka

Shows how to deploy an Apache Kafka cluster.
- Check out the `basic-kafka.yaml` file. It shows the simplest possible Kafka installation. We will not use it for this demo, but it demonstrates how much it takes to move from the most basic deployment to something much closer to production-ready Apache Kafka.

- Now check the `01-kafka.yaml` file, which shows a Kafka cluster with a much more advanced configuration. It enables things such as authentication and authorization, metrics, and an external listener, and it configures resources etc. You can deploy this Kafka cluster using the following command:

  ```
  kubectl apply -f 01-kafka.yaml
  ```
- Wait for the Kafka cluster to be deployed:

  ```
  kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s
  ```

  Once it is ready, you can check the running pods:

  ```
  kubectl get pods
  ```

  Notice the different components being deployed.
  You can also check the status of the `Kafka` custom resource, where the operator stores useful information such as the bootstrap addresses of the Kafka cluster:

  ```
  kubectl get kafka my-cluster -o yaml
  ```
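To make the contrast with `01-kafka.yaml` concrete, a minimal deployment along the lines of `basic-kafka.yaml` can be as small as the sketch below. This is an illustration, not necessarily the exact contents of the file - it is shown in the ZooKeeper-based form for brevity, while the repository may use KRaft and node pools instead:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      # A single plain-text internal listener - no TLS, authentication, or authorization
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral   # data is lost when the pod restarts
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

Everything `01-kafka.yaml` adds on top of this - TLS listeners, authentication, authorization, metrics, resource requests and limits, persistent storage - is what closes the gap to a production-ready setup.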
## Topics, Users, and Clients

Shows how to use the `KafkaTopic` and `KafkaUser` resources when deploying Kafka clients.
- Deploy a Kafka producer and consumer to send and receive some messages. You can do that using the `02-clients.yaml` file:

  ```
  kubectl apply -f 02-clients.yaml
  ```

  Notice the different YAML documents in the file:
  - The two Kafka users - one for the producer and one for the consumer
  - The Kafka topic they are using to send/receive the messages
  - The actual Deployments with the producer and consumer applications and how they mount the secrets to connect to the broker
- Once the clients are deployed, you should see two pods. You can check the logs to confirm they work:

  ```
  kubectl logs deployment/kafka-consumer -f
  ```

  You should see the Hello World messages being received by the consumer.
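The topic and user documents in `02-clients.yaml` follow the usual Strimzi pattern: the `strimzi.io/cluster` label ties each resource to the Kafka cluster it belongs to. A sketch of what they can look like - the names, sizing, and ACLs here are illustrative, not necessarily the values used in the file:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # the Kafka cluster this topic belongs to
spec:
  partitions: 3
  replicas: 3
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-producer
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls          # the operator generates a client certificate in a Secret
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
        operations:
          - Describe
          - Write
```

The operator creates a Kubernetes `Secret` named after the user (here `my-producer`) containing the client certificate and key, which is what the client Deployments mount to connect to the broker.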
## Kafka Connect
Demonstrates how to deploy Kafka Connect, add a connector plugin to it, and create a connector instance.
- Deploy Kafka Connect using the `03-connect.yaml` file:

  ```
  kubectl apply -f 03-connect.yaml
  ```

  Check the YAML and notice how:
  - It creates the user and ACLs for Connect
  - It adds the connector to the newly built container image
- Create a connector instance using the `04-connector.yaml` file:

  ```
  kubectl apply -f 04-connector.yaml
  ```
- Once the connector is created, check the Connect logs to see how it logs the messages:

  ```
  kubectl logs my-connect-connect-0 -f
  ```
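The two resources work together roughly as in the sketch below. The image, artifact URL, and connector class are placeholders, not the actual values from `03-connect.yaml` and `04-connector.yaml`:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    # Tells the operator to manage connectors via KafkaConnector resources
    strimzi.io/use-connector-resources: "true"
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  build:
    output:
      type: docker
      image: registry.example.com/demo/my-connect:latest   # placeholder registry/image
    plugins:
      - name: example-sink
        artifacts:
          - type: jar
            url: https://example.com/example-sink.jar      # placeholder artifact URL
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: example-sink
  labels:
    strimzi.io/cluster: my-connect   # must match the KafkaConnect name
spec:
  class: com.example.ExampleSinkConnector   # placeholder connector class
  tasksMax: 1
  config:
    topics: my-topic
```

The `build` section is what makes the operator build a new container image with the listed plugins and use it for the Connect pods, which is why the connector class becomes available without building an image by hand.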
## External Access

Shows how the operator helps with external access to the Kafka cluster using external listeners.
- Create another user using the `05-my-user.yaml` file:

  ```
  kubectl apply -f 05-my-user.yaml
  ```

- Once the user is created, you can:
  - Take the TLS certificate from the user secret
  - Take the external bootstrap address and the CA certificate from the status of the `Kafka` CR

  And use them to connect to the Kafka cluster from outside the Kubernetes cluster. You can use the Java application from the `06-external-client/` directory and run it against the cluster.
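One way to pull those pieces out with `kubectl` is sketched below. The secret name `my-user` and the listener name `external` are assumptions based on the file names above - adjust them to match the actual YAML:

```
# TLS client certificate and key from the KafkaUser secret
kubectl get secret my-user -o jsonpath='{.data.user\.crt}' | base64 -d > user.crt
kubectl get secret my-user -o jsonpath='{.data.user\.key}' | base64 -d > user.key

# External bootstrap address and CA certificate from the Kafka CR status
kubectl get kafka my-cluster -o jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}'
kubectl get kafka my-cluster -o jsonpath='{.status.listeners[?(@.name=="external")].certificates[0]}' > ca.crt
```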
## Changing the Broker Configuration

Shows the power of the operator when changing the broker configuration.
- Edit the Kafka cluster to change its configuration. You can edit the Kafka cluster with `kubectl`:

  ```
  kubectl edit kafka my-cluster
  ```

  And change the configuration in `.spec.kafka.config`. Set the option `compression.type` to `zstd`. Check how the operator changes the configuration dynamically.

- Try another change that will require a rolling update and set the `delete.topic.enable` option to `false`. This change requires a rolling update, which the operator will do automatically. Notice the order in which the brokers are rolled: the controller broker should always be rolled last, and the operator makes sure the partition replicas are in-sync.
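Both changes end up in the same place in the custom resource. After editing, the relevant fragment of the `Kafka` CR would look like this:

```yaml
spec:
  kafka:
    config:
      compression.type: zstd      # dynamic broker option - applied without a restart
      delete.topic.enable: false  # read-only broker option - triggers a rolling update
```

The difference in behavior comes from Kafka itself: some broker options can be updated dynamically at runtime, while others are read-only and only take effect after a broker restart, which the operator performs as a rolling update.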
## Rebalancing

Demonstrates how to rebalance a Kafka cluster using the built-in Cruise Control support.
- Use Cruise Control to rebalance the Kafka cluster. You can use the `07-rebalance.yaml` file for it. Trigger the rebalance using:

  ```
  kubectl apply -f 07-rebalance.yaml
  ```

- Watch the rebalance progress:

  ```
  kubectl get kafkarebalance -w
  ```
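A `KafkaRebalance` resource can be as simple as the sketch below. An empty `spec` uses the default Cruise Control optimization goals; the actual `07-rebalance.yaml` may set explicit goals:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster   # ties the rebalance to the Kafka cluster
spec: {}
```

Depending on the configuration, the generated optimization proposal may need to be approved manually before it is executed, for example with `kubectl annotate kafkarebalance my-rebalance strimzi.io/rebalance=approve`.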
## Scaling with Auto-Rebalancing

Demonstrates how Strimzi automatically rebalances the cluster when the Kafka cluster is scaled up or down.
- Scale the Kafka cluster from 3 to 4 broker nodes:

  ```
  kubectl scale kafkanodepool brokers --replicas=4
  ```

- Wait for the new broker to get ready and the auto-rebalance to start, and watch the rebalance progress:

  ```
  kubectl get kafkarebalance -w
  ```
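The auto-rebalancing on scale-up and scale-down is driven by configuration in the Cruise Control section of the `Kafka` CR. The fragment below is a rough sketch only - the template name is a placeholder, and the exact schema should be checked against the Strimzi documentation for your version:

```yaml
spec:
  cruiseControl:
    autoRebalance:
      # Rebalance automatically after brokers are added
      - mode: add-brokers
        template:
          name: my-rebalance-template   # placeholder: a KafkaRebalance used as a template
      # Rebalance automatically before brokers are removed
      - mode: remove-brokers
        template:
          name: my-rebalance-template
```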
## Cleanup

- Delete all Strimzi resources:

  ```
  kubectl delete $(kubectl get kt -o name) && kubectl delete $(kubectl get strimzi -o name)
  ```

- Delete the consumer and producer:

  ```
  kubectl delete -f 02-clients.yaml
  ```

- Uninstall the Strimzi operator. You can do that using the Operator Hub or using the YAML files - depending on how you installed it at the beginning:

  ```
  kubectl delete -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.45.0/strimzi-cluster-operator-0.45.0.yaml
  ```