Docker Hub works, but for private images you're dealing with rate limits, pricing tiers, and an external dependency. A self-hosted Docker Registry can replace all of that. This post walks through the setup and shares a few things I learned the hard way.
Published on Mon, March 02, 2026
For years, Docker Hub has been my default choice for storing and distributing Docker images. It works, it's convenient, and for public images, it's free. But for private images, it comes with limitations: rate limits, pricing tiers, and the nagging feeling of depending on yet another third-party service for something that should be straightforward.
When I recently moved our deployment pipeline away from Docker Hub, I was surprised by how simple the alternative turned out to be. A self-hosted Docker Registry, combined with Watchtower for automatic container updates, gives you a clean deployment pipeline with minimal moving parts. Here's how.
Docker's official Registry image is all you need. It's a single container that speaks the Docker Registry HTTP API v2 and stores images on disk.
services:
  registry:
    image: registry:2
    restart: always
    environment:
      - REGISTRY_AUTH=htpasswd
      - REGISTRY_AUTH_HTPASSWD_REALM=Registry
      - REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
      - REGISTRY_STORAGE_DELETE_ENABLED=true
    volumes:
      - ./data:/var/lib/registry
      - ./auth:/auth
Authentication is handled via htpasswd. Create credentials with:
htpasswd -Bc auth/htpasswd deploy
I'm running this behind Traefik, which handles TLS termination and routing. The Traefik labels expose the registry at registry.example.com with a Let's Encrypt certificate:
labels:
  - traefik.enable=true
  - traefik.http.routers.registry.rule=Host(`registry.example.com`)
  - traefik.http.routers.registry.entrypoints=websecure
  - traefik.http.routers.registry.tls.certresolver=letsencrypt
  - traefik.http.services.registry.loadbalancer.server.port=5000
That's it. You now have a private Docker registry with authentication and HTTPS.
In the Docker Compose file used for building, the image names need to point to the new registry:
services:
  web:
    build:
      context: ./src
    image: registry.example.com/my-app
docker compose build builds the image. docker compose push pushes it. Nothing else changes in the build process.
This is where it gets interesting. Watchtower is a container that monitors other running containers and automatically updates them when it detects a new image in the registry.
services:
  watchtower:
    image: containrrr/watchtower
    restart: always
    container_name: watchtower
    environment:
      - WATCHTOWER_HTTP_API_TOKEN=${WATCHTOWER_TOKEN}
      - WATCHTOWER_HTTP_API_UPDATE=true
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_POLL_INTERVAL=86400
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/user/.docker/config.json:/config.json
A few things to note:
- WATCHTOWER_HTTP_API_UPDATE=true enables an HTTP endpoint that triggers an update check on demand, rather than relying solely on polling.
- WATCHTOWER_CLEANUP=true removes old images after updating.
- WATCHTOWER_POLL_INTERVAL=86400 sets a 24-hour polling interval as a fallback. We don't rely on it; we trigger updates explicitly. But it can be useful for third-party images that you want to keep up to date (e.g., Authelia for authentication).

The containers on the server simply reference the registry image:
services:
  my-app:
    image: registry.example.com/my-app
    restart: always
When Watchtower runs, it compares the digest of each running container's image against the registry. If there's a newer version, it pulls it, stops the old container, and starts a new one with the same configuration.
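The digest comparison itself is simple to reason about. A registry image digest is the sha256 of the manifest bytes exactly as the registry serves them (the value it returns in the Docker-Content-Digest header). A minimal sketch:

```python
import hashlib

def manifest_digest(manifest_bytes: bytes) -> str:
    # sha256 over the raw manifest body, prefixed the way registries report it
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

local = manifest_digest(b'{"schemaVersion": 2}')
remote = manifest_digest(b'{"schemaVersion": 2}')
needs_update = local != remote  # same bytes, same digest: nothing to do
```

If the pushed image differs by even one layer, the manifest bytes change, the digest changes, and Watchtower sees a pending update.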
Since Watchtower exposes a simple HTTP endpoint, triggering a deployment is just a POST request. This works from any CI/CD platform — GitHub Actions, GitLab CI, Bitbucket Pipelines, or a plain shell script.
The steps are always the same:
# Build and push
docker compose build
docker login registry.example.com -u deploy -p "$REGISTRY_PASSWORD"
docker compose push

# Trigger deployment
curl -f -X POST "https://deploy.example.com/v1/update" \
  -H "Authorization: Bearer $WATCHTOWER_TOKEN" \
  --max-time 180
Build, push, trigger. That's the entire deployment. The -f flag makes curl fail on HTTP errors, which is important — without it, a 401 from a wrong token would silently pass as a successful step.
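The same trigger can be sketched in Python's standard library, if your pipeline runner speaks Python rather than shell. The URL and token here are the placeholders from the example above:

```python
import urllib.request

def build_update_request(url: str, token: str) -> urllib.request.Request:
    # POST with a bearer token, mirroring the curl call
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_update_request("https://deploy.example.com/v1/update", "s3cret")
# urllib.request.urlopen(req) raises HTTPError on any 4xx/5xx response,
# which gives the same fail-fast behaviour as curl -f.
```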
This one cost me time. Watchtower reads Docker credentials from a config file, and it's particular about where it finds it. It looks for /config.json inside the container — not ~/.docker/config.json, not /root/.docker/config.json. Mount it exactly as /config.json:
volumes:
  - /home/user/.docker/config.json:/config.json
If the credentials aren't found, Watchtower won't throw an error during startup. It will only fail silently when trying to check for updates, logging no credentials found at debug level. If things aren't working, set WATCHTOWER_DEBUG=true and check the logs. You'll see exactly which containers it checks and whether it can authenticate with the registry.
Watchtower's HTTP API is protected by a bearer token (WATCHTOWER_HTTP_API_TOKEN). The token in your CI/CD secret must match exactly. If it doesn't, Watchtower returns a 401 — but curl reports exit code 0 by default, making the pipeline step appear successful. Always use curl -f to catch this.
When changing Watchtower's configuration (volumes, environment variables), docker restart is not enough. The container keeps its old configuration. You need docker compose up -d --force-recreate to apply changes.
Unlike Docker Hub, the self-hosted registry keeps every image you push, forever. There's no UI to manage this. You need to handle cleanup yourself — either through the Registry's HTTP API (DELETE /v2/<name>/manifests/<digest>) followed by garbage collection, or through a scheduled script. Don't forget to set REGISTRY_STORAGE_DELETE_ENABLED=true in the registry configuration, otherwise delete requests will be rejected.
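The delete flow has one wrinkle: the API deletes by digest, not by tag, so you first resolve the tag via a HEAD request. A sketch of the two requests, assuming a repository named my-app at the example registry (both placeholders; real requests would also carry the htpasswd credentials):

```python
import urllib.request

MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"
REGISTRY = "https://registry.example.com/v2"  # placeholder registry URL

def digest_request(repo: str, tag: str) -> urllib.request.Request:
    # HEAD the manifest with the v2 Accept header; the registry returns
    # the digest in the Docker-Content-Digest response header.
    return urllib.request.Request(
        f"{REGISTRY}/{repo}/manifests/{tag}",
        method="HEAD",
        headers={"Accept": MANIFEST_V2},
    )

def delete_request(repo: str, digest: str) -> urllib.request.Request:
    # DELETE takes the digest obtained above, never the tag itself
    return urllib.request.Request(
        f"{REGISTRY}/{repo}/manifests/{digest}",
        method="DELETE",
    )
```

Without the Accept header, the registry may return a different manifest format with a different digest, and the subsequent DELETE will miss.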
The registry stores images as layers (blobs) and manifests. Deleting a tag through the API only removes the manifest reference — the actual layers remain on disk until you run garbage collection:
docker exec registry bin/registry garbage-collect \
  /etc/docker/registry/config.yml --delete-untagged
For automated cleanup, a script that lists tags, deletes old ones via the API, and then triggers garbage collection does the job. The registry's tag list endpoint (GET /v2/<name>/tags/list) returns all available tags, which can be sorted and trimmed to keep only the most recent ones.
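The trimming step is plain list logic. A sketch, assuming a tag scheme that sorts chronologically (zero-padded build numbers here; the function name and scheme are illustrative):

```python
def tags_to_delete(tags: list[str], keep: int = 5) -> list[str]:
    # "tags" is the array from GET /v2/<name>/tags/list; under a sortable
    # scheme, lexicographic order is chronological order, oldest first.
    ordered = sorted(tags)
    return ordered[:-keep] if len(ordered) > keep else []

tags = [f"build-{n:04d}" for n in range(1, 9)]  # build-0001 .. build-0008
old = tags_to_delete(tags, keep=5)  # the three oldest tags
```

Each returned tag is then resolved to a digest and deleted via the API, followed by one garbage-collection run for the whole batch.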
The entire setup — registry, Watchtower, Traefik for TLS — runs on a single server alongside the application containers. There are no external dependencies beyond the server itself. Deployments are triggered by a single HTTP request from any CI/CD pipeline, and the feedback loop is immediate: either the curl succeeds and Watchtower updates the containers, or it fails and the pipeline reports an error.
What I like most about this setup is what it removes: no Docker Hub account, no rate limits, no external dependencies. Just a registry that stores images and a process that keeps containers up to date.