End-to-end continuous delivery orchestrator
- Leverages the Argo CD API client, abstracting its complexities into a unified solution
- You can therefore gain visibility into running applications through the Argo CD UI:
kubectl port-forward svc/argo-cd-argocd-server -n argocd 8080:443
- To get the admin password:
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 --decode
- Has two decoupled controllers: `Terraform` and `App`.
The `Terraform` controller can be used for infrastructure provisioning alone.
If you don't use Terraform, or are not a fan of infrastructure GitOps, the `App` controller can be used in isolation.
The only requirement is to store infrastructure metadata in the `alustan` cluster secret as described below, and to set up an Argo CD cluster secret whose `environment` `<label>:<value>` matches the one specified in your Terraform and App manifests.
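A minimal sketch of such an Argo CD cluster secret, assuming the standard declarative Argo CD cluster secret format; the name, server, and config values are placeholders for your own cluster:
apiVersion: v1
kind: Secret
metadata:
  name: staging-cluster               # placeholder name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
    environment: staging              # <label>:<value> matching your Terraform and App manifests
type: Opaque
stringData:
  name: staging-cluster
  server: https://kubernetes.default.svc
  config: |
    {
      "tlsClientConfig": { "insecure": false }
    }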
- Terraform
apiVersion: alustan.io/v1alpha1
kind: Terraform
metadata:
  name: staging
spec:
  environment: staging
  variables:
    TF_VAR_workspace: "staging"
    TF_VAR_region: "us-east-1"
    TF_VAR_provision_cluster: "true"
    TF_VAR_provision_db: "false"
    TF_VAR_vpc_cidr: "10.1.0.0/16"
  scripts:
    deploy: deploy
    destroy: destroy -c # omit if you don't wish to destroy infrastructure when the resource is being finalized
  postDeploy:
    script: aws-resource
    args:
      workspace: TF_VAR_workspace
      region: TF_VAR_region
  containerRegistry:
    provider: docker
    imageName: alustan/infrastructure # image name to be pulled by the controller
    semanticVersion: "~1.0.0" # semantic constraint
- App
apiVersion: alustan.io/v1alpha1
kind: App
metadata:
  name: api-service
spec:
  environment: staging
  source:
    repoURL: https://github.com/alustan/cluster-manifests
    path: application-helm
    releaseName: backend-application
    targetRevision: main
    values:
      nameOverride: api-service
      service: backend
      cluster: "{{.CLUSTER_NAME}}"
      image:
        repository: alustan/web
        tag: 1.0.0
      config:
        DB_URL: "postgresql://{{.DB_USER}}:{{.DB_PASSWORD}}@postgres:5432/{{.DB_NAME}}"
  containerRegistry:
    provider: docker
    imageName: alustan/backend
    semanticVersion: ">=0.2.0"
---
apiVersion: alustan.io/v1alpha1
kind: App
metadata:
  name: preview-service
spec:
  environment: staging
  previewEnvironment:
    enabled: true
    gitOwner: alustan
    gitRepo: web-app-demo
    intervalSeconds: 600
  source:
    repoURL: https://github.com/alustan/cluster-manifests
    path: basic-demo
    releaseName: basic-demo-preview
    targetRevision: main
    values:
      nameOverride: preview-service
      image:
        repository: horizonclient/web-app-demo
        tag: "1.0.0"
      service: "preview"
      ingress:
        hosts:
          - host: preview.localhost
  dependencies:
    service:
      - name: api-service
- The level of abstraction largely depends on the structure of your Helm values file and your Terraform variables file.
Set up and test the functionality of this project in less than a minute on Codespaces.
App-controller
apiVersion: alustan.io/v1alpha1
kind: App
spec:
  environment: staging
- The `App` controller extracts external resource metadata from the alustan cluster secret annotations. It is therefore expected that, when provisioning your infrastructure, the metadata is stored in the annotations field of a secret carrying the label `"alustan.io/secret-type": cluster` in the `alustan` namespace.
The secret should also have a label key `environment` whose value is the same as that specified in the `environment` spec field.
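A minimal sketch of such a secret; the metadata keys shown (CLUSTER_NAME, DB_USER, DB_NAME) are illustrative and should be whatever your infrastructure provisioning actually exports:
apiVersion: v1
kind: Secret
metadata:
  name: staging-infra-metadata        # placeholder name
  namespace: alustan
  labels:
    alustan.io/secret-type: cluster
    environment: staging              # same value as the environment spec field
  annotations:
    CLUSTER_NAME: staging-cluster     # illustrative metadata, referenced as {{.CLUSTER_NAME}}
    DB_USER: app
    DB_NAME: appdb
type: Opaque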
apiVersion: alustan.io/v1alpha1
kind: App
spec:
  source:
    repoURL: https://github.com/alustan/cluster-manifests
    path: application-helm
    releaseName: backend-application
    targetRevision: main
    values:
      config:
        DB_URL: postgresql://{{.DB_USER}}:{{.DB_PASSWORD}}@postgres:5432/{{.DB_NAME}}
- To reference deployed infrastructure variables in your application, use `{{.NAME}}`; this will be populated automatically. The `NAME` field should be the same as the key stored in the alustan cluster secret.
apiVersion: alustan.io/v1alpha1
kind: App
spec:
  containerRegistry:
    provider: docker
    imageName: alustan/backend
    semanticVersion: ">=0.2.0"
- Scans your container registry every 5 minutes and uses the latest image that satisfies the specified semantic tag constraint.
Supports the Docker Hub and GHCR registries.
The default `appSyncInterval` can be changed in the controller Helm values file.
apiVersion: alustan.io/v1alpha1
kind: App
metadata:
  name: web-service
spec:
  source:
    repoURL: https://github.com/alustan/cluster-manifests
    path: basic-demo
    releaseName: basic-demo
    targetRevision: main
    values:
      image:
        repository: alustan/web-app-demo
        tag: "1.0.0"
- Ensure your Helm `image` `tag` is structured as specified above, to enable automatic `tag` updates during each sync period.
apiVersion: alustan.io/v1alpha1
kind: App
spec:
  dependencies:
    service:
      - name: api-service
- Ability to deploy services and specify a dependency pattern, which will be respected when provisioning and destroying.
- All dependent services should be deployed in the same namespace.
apiVersion: alustan.io/v1alpha1
kind: App
spec:
  previewEnvironment:
    enabled: true
    gitOwner: alustan
    gitRepo: web-app-demo
  source:
    values:
      ingress:
        hosts:
          - host: chart-example.local
- Peculiarities when the `preview environment` is enabled:
Your pull request label should be `preview`.
The CI image tag should be "{{branch-name}}-{{pr-number}}" (a CI sketch is shown after this list).
For a private git repo, provide `gitToken` in the Helm values file.
If you wish to expose an application running in an ephemeral environment via `Ingress`, the controller expects the Ingress field to be structured as specified above, so that it can dynamically update the host field with the appropriate host URL; the updated URL will look something like `{branch}-{pr-number}-chart-example.local`.
To retrieve the list of preview URLs:
kubectl get app <web-service> -n default -o json | jq '.status.previewURLs'
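A hedged sketch of how a CI workflow might produce an image tag in that shape; the workflow, image name, and registry-credential handling here are illustrative and not part of this project:
# .github/workflows/preview-image.yaml (illustrative)
name: preview-image
on:
  pull_request:
    types: [opened, synchronize, labeled]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the preview image
        env:
          TAG: ${{ github.head_ref }}-${{ github.event.pull_request.number }}   # "{{branch-name}}-{{pr-number}}"
        run: |
          # assumes registry credentials are already configured for docker push
          docker build -t horizonclient/web-app-demo:"$TAG" .
          docker push horizonclient/web-app-demo:"$TAG"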
Status field
The status field consists of the following:
state: Current state - `Progressing`, `Error`, `Failed`, `Blocked`, `Completed`
message: Detailed message regarding the current state
previewURLs: URLs of the applications running in the ephemeral environment
healthStatus: Holds a reference to the Argo CD application status condition
Terraform-controller
apiVersion: alustan.io/v1alpha1
kind: Terraform
metadata:
  name: staging
spec:
  environment: staging
- Specify the environment; this will create or update the Argo CD in-cluster secret label with the specified environment, which will be used by the `app-controller` to determine the cluster to deploy to.
The controller will first check whether an Argo CD cluster secret with the specified label already exists (it may have been created manually when provisioning infrastructure) before attempting to create one.
The Terraform workload environment should match the App workload environment.
variables:
  TF_VAR_workspace: staging
  TF_VAR_region: us-east-1
  TF_VAR_provision_cluster: "true"
  TF_VAR_provision_db: "false"
  TF_VAR_vpc_cidr: "10.1.0.0/16"
- The variables should be prefixed with `TF_VAR_`, since any env variable prefixed with `TF_VAR_` automatically overrides the corresponding Terraform-defined variable.
scripts:
  deploy: deploy
  destroy: destroy -c
- This should be the path to your `deploy` and `destroy` scripts; specifying just `deploy` or `destroy` assumes the script is at the root level of your repository.
The `destroy` script should be omitted if you don't wish to destroy your infrastructure when the custom resource is finalized (deleted from the git repository).
Sample deploy and destroy script in Go.
postDeploy:
  script: aws-resource
  args:
    workspace: TF_VAR_workspace
    region: TF_VAR_region
- `postDeploy` is an additional flexibility tool that lets end users write a custom script which will be run by the controller, with the output stored in the status field.
An example implementation was a custom Go script, aws-resource (it could be any scripting language), that reaches out to the AWS API, retrieves non-sensitive metadata and the health status of provisioned cloud resources with a specific tag, and subsequently stores the output in the custom resource `postDeployOutput` status field.
The script expects two arguments, `workspace` and `region`, whose values are retrieved dynamically from the variables spec specified earlier in the manifest, in this case `TF_VAR_workspace` and `TF_VAR_region`.
apiVersion: alustan.io/v1alpha1
kind: Terraform
spec:
  containerRegistry:
    provider: docker
    imageName: alustan/infra
    semanticVersion: "~1.0.0"
- Scans your container registry every 6 hours and uses the latest image that satisfies the specified semantic tag constraint.
Supports the Docker Hub and GHCR registries.
The default `infraSyncInterval` can be changed in the controller Helm values file.
The returned JSON output of your `postDeploy` script/logic should have `outputs` as its key; the body can be any arbitrary data structure:
{
  "outputs": {
    "externalresources": [
      {
        "Service": "RDS",
        "Resource": {
          "DBInstanceIdentifier": "mydbinstance",
          "DBInstanceClass": "db.t2.micro",
          "DBInstanceStatus": "available",
          "Tags": [
            {
              "Key": "Blueprint",
              "Value": "staging"
            }
          ]
        }
      }
    ]
  }
}
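A minimal Go sketch of such a postDeploy script, assuming the controller passes the resolved argument values as command-line flags (the exact invocation and the cloud lookup are placeholders; only the `outputs` key of the emitted JSON is prescribed above):
// aws-resource.go: illustrative postDeploy script, not the project's actual implementation.
package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"os"
)

func main() {
	workspace := flag.String("workspace", "", "value resolved from TF_VAR_workspace")
	region := flag.String("region", "", "value resolved from TF_VAR_region")
	flag.Parse()

	// A real script would query the cloud API here using *workspace and *region;
	// this placeholder just echoes them back in the required output shape.
	result := map[string]interface{}{
		"outputs": map[string]interface{}{
			"externalresources": []map[string]string{
				{"Service": "RDS", "Workspace": *workspace, "Region": *region},
			},
		},
	}

	if err := json.NewEncoder(os.Stdout).Encode(result); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}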
- Intentionally outsourced the packaging of the IaC OCI image so it remains runtime agnostic; a base image sample is provided for reference.
Status field
The status field consists of the following:
state: Current state - `Progressing`, `Error`, `Success`, `Failed`, `Completed`
message: Detailed message regarding the current state
postDeployOutput: Custom field storing the output of your `postDeploy` script, if specified
- Install the Helm chart into a Kubernetes cluster:
helm install my-alustan-helm oci://registry-1.docker.io/alustan/alustan-helm --version <version> --set containerRegistry.containerRegistrySecret=""
- Alternatively:
helm fetch oci://registry-1.docker.io/alustan/alustan-helm --version <version> --untar=true
Update the Helm values file with the relevant secrets, then:
helm install controller alustan-helm --timeout 20m0s --debug --atomic
- To obtain the containerRegistrySecret to be supplied to the Helm chart, run the script below and copy the encoded secret.
- If using Docker Hub as the OCI registry:
rm ~/.docker/config.json
docker login -u <YOUR_DOCKERHUB_USERNAME> -p <YOUR_DOCKERHUB_PAT>
cat ~/.docker/config.json > secret.json
base64 -w 0 secret.json
- If using GHCR as the OCI registry:
rm ~/.docker/config.json
docker login ghcr.io -u <YOUR_GITHUB_USERNAME> -p <YOUR_GITHUB_PAT>
cat ~/.docker/config.json > secret.json
base64 -w 0 secret.json
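The encoded output can then be supplied to the chart, for example by combining it with the install command shown earlier (a sketch reusing the same flag):
helm install my-alustan-helm oci://registry-1.docker.io/alustan/alustan-helm --version <version> \
  --set containerRegistry.containerRegistrySecret="$(base64 -w 0 secret.json)"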
For a private git repository, ensure you supply the `gitSSHSecret` in the controller Helm values file.
A sample git repository layout:
.
├── environment
│   ├── dev
│   │   ├── dev-backend-app.yaml
│   │   ├── dev-infra.yaml
│   │   └── dev-web-app.yaml
│   ├── prod
│   │   ├── prod-backend-app.yaml
│   │   ├── prod-infra.yaml
│   │   └── prod-web-app.yaml
│   └── staging
│       ├── staging-backend-app.yaml
│       ├── staging-infra.yaml
│       └── staging-web-app.yaml
└── infrastructure
    ├── dev-infra.yaml
    ├── prod-infra.yaml
    └── staging-infra.yaml
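As a hedged illustration, the control cluster could sync the `infrastructure` directory above with an Argo CD Application along these lines (repoURL, project, and destination are placeholders for your own setup):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: infrastructure
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/<your-manifests-repo>   # placeholder
    targetRevision: main
    path: infrastructure
  destination:
    server: https://kubernetes.default.svc
    namespace: alustan        # namespace for the Terraform custom resources (illustrative)
  syncPolicy:
    automated:
      prune: true
      selfHeal: true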
- The control cluster can be made to sync the `infrastructure` directory, which is basically the `Terraform` custom resources (see the Argo CD Application sketch above).
For the control cluster, add `environment: "control"` to the Argo CD cluster secret label when provisioning; this ensures that the control cluster only syncs infrastructure resources and not application resources, due to the design of the project.
However, if you wish to deploy an application to the control cluster, just specify `environment: control` in the app manifest and ensure the control cluster points to that manifest in git.
- Each bootstrapped cluster can sync its specific `environment` directory, ensuring that each cluster reconciles itself (in addition to the reconciliation done by the control cluster) and also reconciles its dependent applications.
- The controller skips Argo CD installation.
- Skips Argo CD `repo-creds` creation if one is found in-cluster with the same git URL.
- Skips Argo CD `cluster secret` creation if one is found in-cluster with the same label.
- The controller uses the username `admin` and the initial Argo CD password to generate and refresh the authentication token.
Therefore, if the initial admin password has been disabled, you can regenerate or recreate it with a new admin password; this will be used to generate and refresh the API token.
- Ensure not to terminate TLS at the Argo CD server level:
server:
  extraArgs:
    - --insecure
Check Out:
- https://github.com/alustan/infrastructure for an infrastructure backend reference implementation
- https://github.com/alustan/cluster-manifests/blob/main/application-helm for a reference implementation of the application Helm chart
Basic reference setup for local testing:
- https://github.com/alustan/basic-example for a dummy backend implementation
- https://github.com/alustan/cluster-manifests/blob/main/basic-demo for a dummy implementation of the application Helm chart