```sh
# docker-machine create --driver <driver> my_ci
eval $(docker-machine env my_ci)

docker run -d \
    -e GITHUB_TOKEN \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 8080:80 \
    docteurklein/compose-ci

curl -X POST "https://api.github.com/repos/${GITHUB_REPO}/hooks" \
    -H "Authorization: token ${GITHUB_TOKEN}" \
    -d@- <<JSON
{ "name": "web", "active": true, "events": [ "push" ], "config": {
    "url": "http://$(docker-machine ip my_ci):8080/?token=${GITHUB_TOKEN}",
    "content_type": "json",
    "insecure_ssl": "1"
}}
JSON
```

A simple, docker(-compose) enabled, alpine-based container, listening for github webhooks.
It will:
- listen to github webhooks
- notify github of the build status
- fetch the corresponding commit
- execute the `$HOOK` command inside a separate container (see the sketch below)
- send a mail containing the `$HOOK` command output
- cleanup running containers and networks afterwards
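The default `$HOOK` is `docker-compose run tests` (see the variables below), so the repository being built is expected to define a `tests` service. A purely illustrative sketch — the service name matches the default, but the build and command are assumptions borrowed from the phpspec example further down:

```sh
# Hypothetical docker-compose.yml in the repository under test,
# giving the default HOOK (`docker-compose run tests`) something to run.
cat > docker-compose.yml <<'YAML'
version: '2'
services:
  tests:
    build: .
    command: vendor/bin/phpspec run
YAML
```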
SaaS CIs are cool and all, but you clearly don't have the same flexibility as this \o/
This also prepares the environment with the latest versions of docker and compose,
making it a breeze to run your tests.
It's self-hosted and can be used wherever docker is running.
Note: SSL support is optional.

You can extend the base image and provide a `server.pem` certificate.
In order to use it, you can COPY it into the image, or mount it using:

```sh
docker run \
    -v $(pwd)/my-cert.pem:/certs/my-cert.pem \
    -e CERT_PATH=/certs/my-cert.pem \
    docteurklein/compose-ci
```

You can generate a self-signed certificate with:

```sh
openssl req -new -x509 -keyout server.pem -out server.pem -days 365 -nodes
```
Note: the certificate must match the public IP (or hostname) that will be used for the webhook.
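For example (a sketch, assuming the docker-machine setup from the tl;dr), you can bake the machine's IP into the certificate subject so it matches the webhook URL:

```sh
openssl req -new -x509 -keyout server.pem -out server.pem -days 365 -nodes \
    -subj "/CN=$(docker-machine ip my_ci)"
```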
If you'd rather COPY it into the image, extend the base image:

```Dockerfile
FROM docteurklein/compose-ci
COPY server.pem /certs/server.pem
```

Note: don't forget to add the `-e CERT_PATH=/certs/server.pem` env variable.
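In other words, when running the extended image, pass the variable along — a minimal sketch (the other flags are as in the full example further down):

```sh
docker run -d \
    -p 8080:80 \
    -e GITHUB_TOKEN \
    -e CERT_PATH=/certs/server.pem \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my_ci
```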
In order to listen to github webhooks, you'll need to define some environment variables.
Note: All these variables will also be passed to the `hook` command.
Some variables are mandatory:
- `$GITHUB_TOKEN`: a valid OAuth token
Some are optional:
- `$HOOK`: the command to execute (default: `docker-compose run tests`)
- `$BUILD_IMAGE`: the image to use for separate containers (default: `docteurklein/compose-ci`)
- `$BUILD_CMD`: the command to execute in this separate container (default: `python3 -m compose_ci`)
- `$SMTP_*`: if you want to receive emails
- `$CERT_PATH`: the absolute path to your https certificate
- `$GARBAGE_COLLECT`: set to 0 to keep the build container (default: 1)
- `$DOCKER_HOST`: the docker engine socket (default: `unix:///var/run/docker.sock`)
- whatever else is required by your command
For example, building and running the image with a custom `$HOOK`:

```sh
docker build -t my_ci .

docker run -it --rm \
    -p 8080:80 \
    -e HOOK="docker-compose run --rm php vendor/bin/phpspec r" \
    -e GITHUB_TOKEN \
    -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION \
    -e SMTP_FROM -e SMTP_TO -e SMTP_HOST -e SMTP_PORT -e SMTP_USER -e SMTP_PASS \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my_ci
```

Note: We mount `docker.sock` inside the container.
/!\ This means that any docker command executed by the `hook` will be made against the host docker engine.
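As an alternative to mounting the socket, `$DOCKER_HOST` (see the variables above) can point at a remote engine instead — a sketch, assuming an engine reachable at `tcp://some-engine:2375`:

```sh
docker run -it --rm \
    -p 8080:80 \
    -e GITHUB_TOKEN \
    -e DOCKER_HOST=tcp://some-engine:2375 \
    my_ci
```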
Tadaaa! You now have a service waiting for webhooks on port 8080.
Grab the IP address of your host and configure a github webhook.
Note: The url MUST contain `?token=<my-oauth-token>`.
This token MUST match `$GITHUB_TOKEN`.
Note: Depending on whether your certificate (`server.pem`) is correctly signed, you might have to disable SSL verification.
You can simulate a github webhook request using:
```sh
curl -X POST "$(docker-machine ip my_ci):8080/?token=$GITHUB_TOKEN" -d '{"after": "<commit>"}'
```
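For example, to replay the current HEAD of the repository you're working in (purely illustrative):

```sh
curl -X POST "$(docker-machine ip my_ci):8080/?token=$GITHUB_TOKEN" \
    -d "{\"after\": \"$(git rev-parse HEAD)\"}"
```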
Each build is executed inside a separate container, which means we can leverage the docker capabilities to retrieve logs and data.
Note: These separate containers are removed if `$GARBAGE_COLLECT=1`.
```sh
docker logs <uuid>
docker start -a <uuid>
docker cp <uuid>:/tarball .
```
Note: The `uuid` is visible at the end of every email and in the response of the webhook http request.
It is SSL-ready, but not enabled by default. You need to generate a certificate and enable it by setting the `CERT_PATH` variable.
The only verification that is made is the `$GITHUB_TOKEN` comparison.
That same token is used to authenticate against the github API.
This is not intended to replace a multi-tenant SaaS.
That's not really a problem, though, since you can launch one instance per project.
If you mount the docker socket, anyone can do anything to your host.
If you don't want that, take a look at `--userns-remap`.
But even then, the same docker engine is shared for every build.
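For reference, user namespace remapping is a daemon-level setting on the host — a sketch, assuming a systemd-based host with no existing `daemon.json` you would be overwriting:

```sh
# Enable user namespace remapping for the whole engine (affects every container on the host).
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```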
You can run the unit tests with:

```sh
python -m unittest discover -s tests/unit -p "*_test.py"
```