Docker & Containerization
What Your Agent Inherits
Your agent’s code runs in a hardened container with a non-root user, PID 1 signal forwarding via tini, digest-pinned base images, and optimized layer caching. The Dockerfile is production-ready out of the box. Your agent writes Python code while the container infrastructure takes care of everything else: multi-stage builds keep the runtime image small, the entrypoint runs migrations on deploy, and the health check probe keeps the orchestrator informed.
Multi-Stage Build
The Dockerfile cleanly separates dependency installation from runtime. The result is a minimal final image with no build tools, no package manager caches, and no source control artifacts.
```dockerfile
# syntax=docker/dockerfile:1.7

# Keep base images pinned by digest so rebuilds stay reproducible.
# Use ARG-based references so ops/refresh-docker-base-digests.sh can update
# digests automatically.
ARG UV_BASE_IMAGE=ghcr.io/astral-sh/uv:python3.13-bookworm-slim@sha256:531f855bda2c73cd6ef67d56b733b357cea384185b3022bd09f05e002cd144ca
ARG PYTHON_RUNTIME_IMAGE=python:3.13-slim-bookworm@sha256:1245b6c39d0b8e49e911c7d07b60cd9ed26016b0e439b6903d5e08730e417553

FROM ${UV_BASE_IMAGE} AS builder

ENV UV_LINK_MODE=copy

WORKDIR /app

# Install tini once in the builder, pinned to the current Debian package version,
# then copy the binary into the final image without carrying apt tooling forward.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
    apt-get update && \
    apt-get install --yes --no-install-recommends tini=0.19.0-1+b3 && \
    rm -rf /var/lib/apt/lists/*

ARG UV_EXTRAS="--extra postgres --extra redis"

COPY pyproject.toml uv.lock README.md ./

# Resolve third-party dependencies first so source-only changes reuse this layer.
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen --no-dev --no-install-project ${UV_EXTRAS}

COPY src /app/src
COPY alembic /app/alembic
COPY main.py alembic.ini /app/
COPY ops/docker-entrypoint.sh ops/http_probe.py /app/ops/

# Install the local project into the virtualenv after the application files exist.
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen --no-dev --no-editable ${UV_EXTRAS}

# The runtime stage stays slim: only Python, the app virtualenv, runtime files,
# and the init binary required for signal forwarding/reaping are copied across.
FROM ${PYTHON_RUNTIME_IMAGE} AS runtime

ARG BUILD_DATE=unknown
ARG REPOSITORY_URL=
ARG VCS_REF=unknown
ARG VERSION=dev

ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PATH="/app/.venv/bin:${PATH}" \
    PYTHONPATH="/app/src"

LABEL org.opencontainers.image.title="fastapi-chassis" \
    org.opencontainers.image.description="Production-ready FastAPI template with Builder pattern configuration" \
    org.opencontainers.image.licenses="Apache-2.0" \
    org.opencontainers.image.source="${REPOSITORY_URL}" \
    org.opencontainers.image.url="${REPOSITORY_URL}" \
    org.opencontainers.image.documentation="${REPOSITORY_URL}" \
    org.opencontainers.image.revision="${VCS_REF}" \
    org.opencontainers.image.version="${VERSION}" \
    org.opencontainers.image.created="${BUILD_DATE}"

WORKDIR /app

# Use stable high IDs so mounted volumes keep predictable ownership across rebuilds
# and orchestrators can set matching runAsUser/runAsGroup values explicitly.
RUN groupadd --system --gid 10001 app && \
    useradd --system --uid 10001 --gid 10001 --create-home --home-dir /home/app app && \
    install -d -o app -g app /app/data

COPY --from=builder /usr/bin/tini /usr/bin/tini
COPY --from=builder --chown=app:app /app/.venv /app/.venv
COPY --from=builder --chown=app:app /app/alembic /app/alembic
COPY --from=builder --chown=app:app /app/src /app/src
COPY --from=builder --chown=app:app /app/main.py /app/alembic.ini /app/
COPY --from=builder --chown=app:app --chmod=755 /app/ops/docker-entrypoint.sh /app/ops/docker-entrypoint.sh
COPY --from=builder --chown=app:app /app/ops/http_probe.py /app/ops/http_probe.py

EXPOSE 8000

# Healthcheck stays separate from readiness so container platforms can detect
# dead processes quickly without needing external orchestration logic.
HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=3 \
    CMD python /app/ops/http_probe.py --path-env APP_HEALTH_CHECK_PATH --default-path /healthcheck

USER app

# tini becomes PID 1 and forwards signals to the entrypoint/app correctly.
ENTRYPOINT ["/usr/bin/tini", "--", "/app/ops/docker-entrypoint.sh"]
```

Each stage has a clear job:
- Builder stage pulls in the `uv` package manager image to resolve and install dependencies. The `uv sync --frozen --no-dev --no-install-project` layer caches third-party packages so that source-only changes skip the expensive dependency resolution step. Tini is also installed here from the Debian package repository.
- Runtime stage starts from a minimal Python slim image. Only the virtualenv, application source, migration files, the entrypoint script, and the tini binary get copied over from the builder. No `apt`, no `pip`, no `uv`, and no build headers remain in the final image.
Container Hardening
The Dockerfile bakes in four hardening measures that are easy to overlook when you write containers by hand.
Digest-Pinned Base Images
```dockerfile
ARG UV_BASE_IMAGE=ghcr.io/astral-sh/uv:python3.13-bookworm-slim@sha256:531f855...
ARG PYTHON_RUNTIME_IMAGE=python:3.13-slim-bookworm@sha256:1245b6c...
```

Tags like `python:3.13-slim` are mutable. The registry can push a completely different image under the same tag at any time. Digest pinning (`@sha256:...`) locks the exact image content, making builds fully reproducible. When you do want to pull newer base images, a helper script (`ops/refresh-docker-base-digests.sh`) takes care of updating the digests.
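As an illustration, a small CI-style guard (hypothetical; not shipped with the chassis) can check that every `ARG ...IMAGE=` reference in a Dockerfile carries a digest before a build is allowed to proceed:

```python
import re

# A reference is digest-pinned when it ends with "@sha256:" plus 64 hex chars.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def unpinned_images(dockerfile_text: str) -> list[str]:
    """Return image references from ARG ...IMAGE=... lines that lack a digest."""
    refs = []
    for line in dockerfile_text.splitlines():
        m = re.match(r"ARG\s+\w*IMAGE\w*=(\S+)", line.strip())
        if m:
            refs.append(m.group(1))
    return [ref for ref in refs if not DIGEST_RE.search(ref)]
```

Running this over the chassis Dockerfile should return an empty list; any tag-only reference would show up immediately.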
Tini as PID 1
```dockerfile
ENTRYPOINT ["/usr/bin/tini", "--", "/app/ops/docker-entrypoint.sh"]
```

Without an init process, your application becomes PID 1 inside the container, and PID 1 has special signal-handling semantics on Linux: the kernel does not apply default signal actions to it, so a SIGTERM from `docker stop` or a Kubernetes pod termination is silently ignored unless the application installs its own handler. Tini solves this by:
- Forwarding signals to the child process (uvicorn), enabling graceful shutdown.
- Reaping zombie processes that accumulate when child processes exit without a parent calling `wait()`.
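The reaping problem is easy to observe directly. This Linux-only Python sketch (illustrative, not part of the chassis) forks a child that exits immediately; until someone calls `wait()` on it, the kernel keeps it in the process table as a zombie, which is exactly what tini cleans up for PID 1:

```python
import os
import time

def proc_state(pid: int) -> str:
    """Single-letter process state from /proc/<pid>/stat (Linux-only)."""
    with open(f"/proc/{pid}/stat") as f:
        # Fields 1-2 are pid and (comm); the state letter follows ")".
        return f.read().rsplit(")", 1)[1].split()[0]

pid = os.fork()
if pid == 0:
    os._exit(0)          # child exits immediately; nobody has wait()ed yet

# Parent: poll until the dead child shows up as a zombie ("Z").
state_before_reap = ""
for _ in range(500):
    state_before_reap = proc_state(pid)
    if state_before_reap == "Z":
        break
    time.sleep(0.01)

os.waitpid(pid, 0)       # reaping removes the zombie entry
```

In a container without an init process, every zombie like this lingers until the container stops, because nothing above the application reaps on its behalf.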
Unprivileged User
```dockerfile
RUN groupadd --system --gid 10001 app && \
    useradd --system --uid 10001 --gid 10001 --create-home --home-dir /home/app app && \
    ...

USER app
```

The container runs as a non-root user with stable, high UID/GID values (10001). Using stable IDs keeps volume mount ownership predictable across image rebuilds and lets Kubernetes securityContext match with explicit runAsUser/runAsGroup values.
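On the Kubernetes side, the matching pod-level securityContext would pin the same IDs. This fragment is illustrative, not shipped with the chassis:

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 10001     # matches the UID baked into the image
  runAsGroup: 10001    # matches the GID baked into the image
  fsGroup: 10001       # volumes mounted into the pod get this group
```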
HEALTHCHECK Directive
```dockerfile
HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=3 \
    CMD python /app/ops/http_probe.py --path-env APP_HEALTH_CHECK_PATH --default-path /healthcheck
```

This health check runs a lightweight HTTP probe against the application’s health endpoint. Docker uses the result to mark unhealthy containers for restart. The `--start-period` flag gives the application 15 seconds to initialize before the first probe fires, and the custom `http_probe.py` script reads the health check path from the same environment variable the application uses.
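The actual `ops/http_probe.py` is not reproduced in this section. A minimal sketch of such a probe, assuming only the flag names visible in the HEALTHCHECK line and the default port 8000, might look like:

```python
import argparse
import os
import urllib.request

def probe_url(path_env: str, default_path: str, port: int = 8000) -> str:
    """Build the probe URL, preferring the path stored in the named env var."""
    path = os.environ.get(path_env) or default_path
    return f"http://127.0.0.1:{port}{path}"

def main(argv=None) -> int:
    parser = argparse.ArgumentParser(description="Container health probe")
    parser.add_argument("--path-env", required=True)
    parser.add_argument("--default-path", default="/healthcheck")
    args = parser.parse_args(argv)
    try:
        with urllib.request.urlopen(
            probe_url(args.path_env, args.default_path), timeout=4
        ) as resp:
            return 0 if resp.status < 400 else 1
    except OSError:
        return 1  # connection refused or timeout: report unhealthy

# The real script would end with: raise SystemExit(main())
```

Exit status 0 marks the container healthy; anything non-zero counts toward the `--retries` budget before Docker flags the container.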
Container Entrypoint
The entrypoint script takes care of pre-start tasks before handing off to the application server.
```sh
#!/usr/bin/env sh
set -eu

# Container entrypoint for the application image. It optionally runs Alembic
# migrations, then launches either the provided command or the default uvicorn
# server defined for this chassis.
cd /app

# Keep the SQLite data directory present for local and volume-backed runs.
mkdir -p /app/data

# In-memory rate limiting is per-process, so multi-worker deployments must use a
# shared backend to enforce limits consistently.
if [ "${APP_RATE_LIMIT_ENABLED:-false}" = "true" ] && \
   [ -z "${APP_RATE_LIMIT_STORAGE_URL:-}" ] && \
   [ "${UVICORN_WORKERS:-1}" != "1" ]; then
    echo "APP_RATE_LIMIT_STORAGE_URL is required when APP_RATE_LIMIT_ENABLED=true and UVICORN_WORKERS>1." >&2
    exit 1
fi

# Migrations stay opt-in so production restarts do not implicitly change schema.
if [ "${RUN_DB_MIGRATIONS:-false}" = "true" ]; then
    echo "Applying Alembic migrations..."
    alembic -c alembic.ini upgrade head
fi

# If no explicit command is provided, start the API with the container defaults.
# Workers default to 1 so orchestrated environments (Kubernetes, Swarm) can
# manage replication at the infrastructure level. Set UVICORN_WORKERS >1 for
# single-server or Docker Compose deployments that need multi-core utilisation.
if [ "$#" -eq 0 ]; then
    set -- \
        uvicorn main:app \
        --host "${APP_HOST:-0.0.0.0}" \
        --port "${APP_PORT:-8000}" \
        --workers "${UVICORN_WORKERS:-1}" \
        --proxy-headers \
        --forwarded-allow-ips "${UVICORN_FORWARDED_ALLOW_IPS:-127.0.0.1}" \
        --no-access-log
fi

# Replace the shell so PID 1 remains tini and signals reach uvicorn directly.
exec "$@"
```

The entrypoint provides four safeguards:
- Rate limit validation. If rate limiting is enabled with multiple workers but no shared Redis backend, the container exits immediately with a clear error rather than silently splitting counters across processes.
- Opt-in migrations. Setting `RUN_DB_MIGRATIONS=true` runs `alembic upgrade head` before the server starts. This stays off by default so that container restarts never accidentally apply unapproved schema changes.
- Single-worker default with proxy headers. Workers default to 1, which aligns with the Kubernetes pattern of scaling via pod replicas instead of in-process workers. The `--proxy-headers` and `--forwarded-allow-ips` flags make sure Uvicorn populates `request.client` and `request.url.scheme` correctly when running behind a reverse proxy or ingress controller. Bump `UVICORN_WORKERS` higher only for single-server or Docker Compose deployments that need multi-core utilisation.
- `exec` replacement. The final `exec "$@"` replaces the shell process with uvicorn so that tini stays as PID 1 and signals propagate correctly through the process tree.
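The `exec` semantics are worth seeing once: after `exec`, the shell process no longer exists, so nothing after that line can run. This Python sketch (illustrative; it uses `os.execvp` and the system `echo` binary) demonstrates the same replacement:

```python
import subprocess
import sys
import textwrap

# exec() replaces the current process image: nothing after it ever runs.
child_src = textwrap.dedent("""
    import os
    os.execvp("echo", ["echo", "replaced"])
    print("never reached")  # dead code: this process is already echo
""")
result = subprocess.run(
    [sys.executable, "-c", child_src],
    capture_output=True,
    text=True,
)
```

`result.stdout` contains only the output of `echo`; the `print` after the `exec` call never executes. The same logic is why signals sent to the entrypoint's PID land directly on uvicorn.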
Deployment Topology
The chassis production architecture connects four services.
Here is what the topology shows:
- Caddy (reverse proxy) terminates TLS, handles static assets, and forwards API traffic to the application over HTTP.
- FastAPI App (uvicorn) is the application container, running a single worker process by default. Kubernetes scales horizontally via pod replicas. Only set `UVICORN_WORKERS` higher for single-server deployments.
- PostgreSQL serves as the primary database, connected through `asyncpg` for async query execution.
- Redis provides shared storage for rate limiting counters and the application cache layer, connected through `aioredis`.
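The Caddy leg of this topology could be configured with a minimal Caddyfile like the one below. This is an illustrative sketch, not a file shipped with the chassis; the site address, the `/srv/static` root, and the `app` upstream name are assumptions:

```caddyfile
example.com {
	# Caddy provisions and terminates TLS automatically for the site address.
	handle_path /static/* {
		root * /srv/static
		file_server
	}
	# Everything else is proxied to the FastAPI container over plain HTTP.
	reverse_proxy app:8000
}
```

Because `--proxy-headers` is enabled in the entrypoint, Uvicorn will trust the forwarded headers Caddy sets, provided the proxy's address is covered by `UVICORN_FORWARDED_ALLOW_IPS`.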
Docker Compose
For local development, the chassis ships with a minimal Compose file that runs the application container using sensible defaults.
```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: fastapi-chassis:local
    env_file:
      - "${ENV_FILE:-.env}"
    environment:
      APP_HOST: "0.0.0.0"
      APP_PORT: "8000"
      APP_DATABASE_BACKEND: "${APP_DATABASE_BACKEND:-sqlite}"
      APP_DATABASE_SQLITE_PATH: "${APP_DATABASE_SQLITE_PATH:-./data/app.db}"
      APP_RATE_LIMIT_STORAGE_BACKEND: "${APP_RATE_LIMIT_STORAGE_BACKEND:-memory}"
      RUN_DB_MIGRATIONS: "${RUN_DB_MIGRATIONS:-true}"
      UVICORN_WORKERS: "${UVICORN_WORKERS:-1}"
    restart: unless-stopped
    ports:
      - "${APP_PORT:-8000}:8000"
    volumes:
      - ./data:/app/data
```

Out of the box the Compose file defaults to SQLite and a single worker, which is great for fast local iteration. When you are ready for the full production topology with Postgres and Redis, set the relevant environment variables or provide a `.env` file that overrides the backend selections.
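For example, a `.env` that switches to the production backends might look like the fragment below. The variable names appear in the Compose file and entrypoint above, but the backend values `postgres` and `redis` are assumptions inferred from the uv extras, and the Redis URL is a placeholder; check the chassis configuration reference for the accepted values:

```env
APP_DATABASE_BACKEND=postgres
APP_RATE_LIMIT_STORAGE_BACKEND=redis
# A shared backend is required once UVICORN_WORKERS > 1 with rate limiting on.
APP_RATE_LIMIT_STORAGE_URL=redis://redis:6379/0
```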
Best Practices
- Always pin base images by digest, not just by tag. Tags are mutable: the registry can push a different image under the same tag at any time. Digest pinning (`@sha256:...`) locks the exact image content for reproducible builds.
- Always use tini (or `--init`) as PID 1 in production containers. Without an init process, your application becomes PID 1 and won't receive default signal handlers, preventing graceful shutdown on `SIGTERM`.
- Never run containers as root in production. Use a non-root user with stable, high UID/GID values (e.g., 10001) so volume mount ownership stays predictable across rebuilds.
- Always separate dependency installation from source code copying in multi-stage builds. This leverages Docker layer caching: source-only changes skip the expensive dependency resolution step.
- Prefer `exec` in entrypoint scripts to replace the shell process with the application. This ensures signal forwarding works correctly through the process tree.
- Always validate multi-worker rate limiting at container start. In-memory rate limiting with multiple Uvicorn workers silently splits counters across processes; fail fast if Redis is not configured.
Further Reading
- Docker Multi-Stage Build Best Practices
- Dockerfile Best Practices — Docker Documentation
- Tini — A Tiny but Valid Init for Containers
- OCI Image Specification — Image Digests
- Hadolint — Dockerfile Linter
What the Agent Never Implements
The chassis handles every containerization concern. Your agent never needs to:
- Write or maintain the Dockerfile. The multi-stage build, layer caching strategy, and runtime image selection are already configured.
- Configure tini or signal handling. The entrypoint and PID 1 setup work transparently.
- Create the unprivileged user or set file permissions. The Dockerfile creates the `app` user with stable UIDs and sets ownership on all copied files.
- Implement container health checks. The HEALTHCHECK directive and `http_probe.py` script are already wired to the application's health endpoint.
- Handle migration execution at deploy time. The entrypoint runs Alembic upgrades whenever `RUN_DB_MIGRATIONS=true` is set.
- Pin base image digests. The digest references and refresh script take care of reproducible builds.
- Validate rate limit configuration for multi-worker deploys. The entrypoint catches misconfiguration early and exits with a clear error.
- Configure proxy header forwarding. The entrypoint enables `--proxy-headers` with a configurable trust list so that `request.client` and `request.url.scheme` stay correct behind reverse proxies.