Hi guys,
Thanks for your work here!
I'm using sst to deploy Docker containers as a service on AWS.
My CI job is basically (summarized as a `.gitlab-ci.yml` fragment):

```yaml
variables:
  DOCKER_HOST_SST: "unix:///var/run/docker.sock"
script:
  - export DOCKER_HOST=$DOCKER_HOST_SST
  - AWS_PROFILE=bestrong-staging NO_BUN=true bun sst deploy --print-logs --stage staging
```
I have my own gitlab-runner with this configuration (summary):

```toml
[[runners]]
  builds_dir = "/runner/builds"
  cache_dir = "/runner/cache"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    # /var/run/docker.sock:/var/run/docker.sock is necessary for SST deployment
    volumes = ["/runner/services/docker", "/runner/cache:/cache", "/runner/cache:/runner/cache", "/runner/builds:/runner/builds", "/runner/buildkit:/var/lib/buildkit", "/var/run/docker.sock:/var/run/docker.sock"]
```
This is not the best configuration out there, I admit, but I'll keep it until I solve the problem below:
Everything goes well for a while (~10 deployments), until the server has no disk space left.
After examining the disk, the Docker volume directory is full of dead `buildx_buildkit*` containers and their associated volumes.
For now I work around it by running this right after the deployment:

```yaml
- docker container stop `docker container ps -a --format "{{.Names}}" | grep buildx_buildkit`
- docker container rm `docker container ps -a --format "{{.Names}}" | grep buildx_buildkit`
- docker volume rm `docker volume ls --format "{{.Name}}" | grep buildx_buildkit`
```
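An equivalent cleanup can also live in an `after_script` so it runs even when the deploy fails. This is only a sketch under my assumptions (the job shell can reach the host Docker socket, and the leftovers are all named `buildx_buildkit*`); it uses Docker's `--filter name=...` instead of `grep`, and `|| true` so an empty match doesn't fail the job:

```yaml
after_script:
  # Remove any leftover buildx_buildkit containers (force-stops them first).
  - docker container rm -f $(docker container ls -aq --filter "name=buildx_buildkit") || true
  # Remove the volumes those containers left behind.
  - docker volume rm $(docker volume ls -q --filter "name=buildx_buildkit") || true
```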
But I'm not happy with that, because it means I cannot run more than one SST deployment in parallel on the same server, which is a problem now that I have two different apps to deploy at the same time.
So, my questions:
- Is there a way to reuse the buildkit container?
- And/or a way to delete the buildkit container after it has been used, from the SST configuration?
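For reference, outside of SST a buildx builder can be reused explicitly by name instead of letting each build spawn a fresh one. A sketch of what I have in mind (`sst-builder` is a hypothetical name; whether sst can be pointed at an existing builder is exactly what I'm asking):

```yaml
before_script:
  # Create a named docker-container builder once, or switch to it if it already exists.
  - docker buildx create --name sst-builder --driver docker-container --use || docker buildx use sst-builder
```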
Thanks in advance.