# Database

## What Your Agent Inherits
Your agent writes SQLAlchemy models and queries against a pre-configured async engine. Session lifecycle, connection pooling, migration infrastructure, and multi-backend URL derivation are all wired up out of the box. The database layer defaults to SQLite for development and is Postgres-ready for production. Switching backends is a single environment variable change, with no application code to touch.
## Engine Creation
The engine factory smooths over the differences between SQLite and Postgres at connection time, giving the rest of your application a single AsyncEngine interface to work with.
```python
def create_database_engine(settings: Settings) -> AsyncEngine:
    """Create the application's async engine, optimized for the default SQLite setup."""
    _ensure_sqlite_parent_exists(settings.database_url)

    connect_args: dict[str, int] = {}
    if settings.database_url.startswith("sqlite+aiosqlite://"):
        connect_args["timeout"] = settings.database_connect_timeout_seconds

    engine_kwargs: dict[str, object] = {
        "echo": settings.database_echo,
        "pool_pre_ping": settings.database_pool_pre_ping,
    }
    if connect_args:
        engine_kwargs["connect_args"] = connect_args

    if not settings.database_url.startswith("sqlite+aiosqlite://"):
        engine_kwargs["pool_size"] = settings.database_pool_size
        engine_kwargs["max_overflow"] = settings.database_max_overflow

    engine = create_async_engine(settings.database_url, **engine_kwargs)

    if settings.database_url.startswith("sqlite+aiosqlite://"):
        _register_sqlite_pragmas(engine, settings)

    return engine
```

A few design decisions worth noting:
- **SQLite path creation.** `_ensure_sqlite_parent_exists()` creates the `data/` directory on first run, so the default configuration just works without any manual setup.
- **Conditional pool sizing.** SQLite doesn't support connection pools, so `pool_size` and `max_overflow` are only set for non-SQLite backends. SQLite receives a `timeout` connect argument instead.
- **Pool pre-ping.** Enabled by default to catch stale connections before they cause query failures in long-running deployments.
- **Echo toggle.** `APP_DATABASE_ECHO=true` turns on SQL logging for development debugging, no code changes needed.
- **SQLite production pragmas.** When the SQLite backend is active, `_register_sqlite_pragmas()` applies WAL journal mode, tuned synchronous settings, busy timeout, cache size, foreign keys, and optional mmap through an engine event listener. Each pragma is configurable via `APP_DATABASE_SQLITE_*` environment variables.
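The kind of pragmas the listener emits can be illustrated with the stdlib `sqlite3` module. This is a sketch, not the chassis's `_register_sqlite_pragmas()`; the specific values shown are common illustrative defaults, not the chassis's configured settings:

```python
import sqlite3
import tempfile

def apply_sqlite_pragmas(conn: sqlite3.Connection) -> None:
    """Illustrative production-oriented pragmas; values are examples only."""
    conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
    conn.execute("PRAGMA synchronous=NORMAL")  # fewer fsyncs, still safe under WAL
    conn.execute("PRAGMA busy_timeout=5000")   # wait up to 5s on a locked database
    conn.execute("PRAGMA cache_size=-64000")   # ~64 MB page cache (negative = KiB)
    conn.execute("PRAGMA foreign_keys=ON")     # SQLite leaves FK enforcement off by default

with tempfile.NamedTemporaryFile(suffix=".db") as f:
    conn = sqlite3.connect(f.name)
    apply_sqlite_pragmas(conn)
    mode = conn.execute("PRAGMA journal_mode").fetchone()[0]  # "wal"
    conn.close()
```

In the chassis, the equivalent statements run inside an engine event listener so every new connection in the pool gets the same treatment.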
## Multi-Backend URL Derivation
A single Settings object supports three database backends through a priority chain: an explicit URL, backend-specific fields, or the SQLite default.
```python
def _derive_database_url(settings: Settings) -> str:
    """Build a runtime database URL from the selected backend configuration."""
    if settings.database_backend == "sqlite":
        return f"sqlite+aiosqlite:///{settings.database_sqlite_path}"

    if settings.database_backend == "postgres":
        if not settings.database_postgres_password:
            raise ValueError(
                "APP_DATABASE_POSTGRES_PASSWORD must be set when APP_DATABASE_BACKEND=postgres"
            )
        return (
            "postgresql+asyncpg://"
            f"{settings.database_postgres_user}:{settings.database_postgres_password}"
            f"@{settings.database_postgres_host}:{settings.database_postgres_port}"
            f"/{settings.database_postgres_name}"
        )

    raise ValueError("APP_DATABASE_URL must be set when APP_DATABASE_BACKEND=custom")
```

Here is how the priority chain works:
1. `APP_DATABASE_URL` (explicit). If set, it overrides everything. Use this when your deployment platform provides a full connection string.
2. `APP_DATABASE_BACKEND=postgres`. Derives the URL from individual `APP_DATABASE_POSTGRES_*` fields and validates that the password is present.
3. `APP_DATABASE_BACKEND=sqlite` (default). Constructs a file-backed SQLite URL from `APP_DATABASE_SQLITE_PATH`.
4. `APP_DATABASE_BACKEND=custom`. Requires an explicit `APP_DATABASE_URL` and raises an error if none is provided.
In practice, this means a developer can clone the repo and run it right away (SQLite kicks in by default), while production deploys simply set `APP_DATABASE_BACKEND=postgres` along with the individual connection fields.
## Async Sessions
The session layer gives you a factory and a dependency injection friendly scope generator for request-scoped database access.
### Session Factory
```python
def create_session_factory(
    settings: Settings,
    engine: AsyncEngine,
) -> async_sessionmaker[AsyncSession]:
    """Create the async session factory bound to the configured engine."""
    _ = settings
    return async_sessionmaker(
        engine,
        expire_on_commit=False,
        autoflush=False,
        autocommit=False,
    )
```

- `expire_on_commit=False` prevents SQLAlchemy from issuing lazy-load queries after a commit. Those queries would fail in an async context.
- `autoflush=False` gives your application explicit control over when queries actually hit the database.
- `autocommit=False` requires explicit `session.commit()` calls, keeping transaction boundaries visible in your application code.
### Request-Scoped Session
```python
async def session_scope(
    session_factory: async_sessionmaker[AsyncSession],
) -> AsyncIterator[AsyncSession]:
    """Yield a request-scoped database session."""
    async with session_factory() as session:
        yield session
```

The `session_scope()` generator pairs with FastAPI's dependency injection:

```python
async def get_db_session(request: Request) -> AsyncIterator[AsyncSession]:
    """Yield a request-scoped async database session."""
    async for session in session_scope(get_session_factory(request)):
        yield session
```

Route handlers declare `session: AsyncSession = Depends(get_db_session)` and receive a session that is automatically closed when the request completes. Your agent writes queries, and the chassis takes care of the session lifecycle.
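The close-on-exit guarantee comes from the `async with` inside the generator, and it can be demonstrated without a database. In the sketch below, `FakeSession` is a purely illustrative stand-in for `AsyncSession` that records whether its context manager was exited:

```python
import asyncio
from collections.abc import AsyncIterator

class FakeSession:
    """Stand-in for AsyncSession that records whether it was closed."""
    def __init__(self) -> None:
        self.closed = False

    async def __aenter__(self) -> "FakeSession":
        return self

    async def __aexit__(self, *exc_info: object) -> None:
        self.closed = True  # mirrors AsyncSession closing on context exit

async def session_scope(factory) -> AsyncIterator[FakeSession]:
    # Same shape as the chassis generator: open, yield, close on exit.
    async with factory() as session:
        yield session

async def handle_request() -> FakeSession:
    handle = None
    async for session in session_scope(FakeSession):
        handle = session              # "route handler" runs here; session is open
        assert not session.closed
    return handle                     # generator exhausted -> context manager closed it

session = asyncio.run(handle_request())  # session.closed is now True
```

FastAPI drives the real generator the same way: it advances it once to inject the session, then exhausts it after the response, which triggers the close.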
## Declarative Base
```python
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    """Base class for all ORM models in the application."""
```

Every model your agent creates inherits from `Base`. This centralizes metadata so Alembic's autogeneration can detect schema changes across the entire model hierarchy.
## Alembic Integration
Alembic handles database migrations, but it doesn’t natively support async database drivers. The chassis bridges that gap with an automatic async-to-sync URL mapper.
```python
def _derive_alembic_database_url(database_url: str) -> str:
    """Derive a sync Alembic URL from the configured async runtime URL when possible."""
    sqlite_prefix = "sqlite+aiosqlite://"
    if database_url.startswith(sqlite_prefix):
        return database_url.replace(sqlite_prefix, "sqlite://", 1)

    postgres_prefix = "postgresql+asyncpg://"
    if database_url.startswith(postgres_prefix):
        return database_url.replace(postgres_prefix, "postgresql+psycopg://", 1)

    raise ValueError("APP_ALEMBIC_DATABASE_URL must be set explicitly for this database URL")
```

This mapping swaps async drivers for their synchronous counterparts:
| Runtime Driver | Alembic Driver |
|---|---|
| `sqlite+aiosqlite` | `sqlite` |
| `postgresql+asyncpg` | `postgresql+psycopg` |
If the runtime URL uses a custom driver that the mapper doesn’t recognize, you’ll need to set APP_ALEMBIC_DATABASE_URL explicitly. This keeps the common case fully automatic while still providing an escape hatch for less typical setups.
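Because the mapping is a pure string substitution, it is easy to sanity-check in isolation. The helper below restates the function above with the driver pairs in a table, purely for illustration:

```python
def derive_alembic_url(database_url: str) -> str:
    """Swap known async drivers for their sync counterparts; raise otherwise."""
    mapping = {
        "sqlite+aiosqlite://": "sqlite://",
        "postgresql+asyncpg://": "postgresql+psycopg://",
    }
    for async_prefix, sync_prefix in mapping.items():
        if database_url.startswith(async_prefix):
            return database_url.replace(async_prefix, sync_prefix, 1)
    raise ValueError("set APP_ALEMBIC_DATABASE_URL explicitly for this database URL")

assert derive_alembic_url("sqlite+aiosqlite:///data/app.db") == "sqlite:///data/app.db"
```

Note the `count=1` argument to `replace()`: only the driver prefix is rewritten, so a path or password that happened to contain the same substring would be left untouched.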
The entrypoint script runs migrations at container start when `RUN_DB_MIGRATIONS=true`, so schema changes apply on every deploy without a separate migration step.
## Best Practices
- Always set `expire_on_commit=False` for async SQLAlchemy sessions. Without it, SQLAlchemy issues lazy-load queries after commit that fail in an async context.
- Never hardcode database URLs in application code. Use environment variables with a priority chain (explicit URL → backend-specific fields → SQLite default) so the same codebase works from development through production.
- Always enable `pool_pre_ping` for long-running deployments. Pre-ping catches stale connections before they cause query failures, at the cost of one lightweight round-trip per checkout.
- Prefer Alembic autogeneration for schema migrations, but always review the generated migration before applying it. Autogeneration catches column additions and type changes but may miss index or constraint subtleties.
- Always derive the Alembic URL automatically from the async runtime URL. Maintaining separate sync and async connection strings is error-prone and leads to drift.
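The pre-ping behavior recommended above is conceptually simple: before handing a pooled connection to the caller, issue a trivial check and silently replace the connection if it fails. The toy pool below illustrates the idea; it is not SQLAlchemy's implementation, and `Conn.ping()` stands in for the real `SELECT 1` round-trip:

```python
class Stale(Exception):
    pass

class Conn:
    def __init__(self) -> None:
        self.alive = True

    def ping(self) -> None:
        if not self.alive:
            raise Stale  # stands in for "SELECT 1" failing on a dead socket

class PrePingPool:
    """Toy pool: validate on checkout, transparently replacing dead connections."""
    def __init__(self) -> None:
        self._idle: list[Conn] = []

    def checkout(self) -> Conn:
        while self._idle:
            conn = self._idle.pop()
            try:
                conn.ping()    # the lightweight round-trip per checkout
                return conn
            except Stale:
                continue       # discard the dead connection, try the next idle one
        return Conn()          # pool empty or all stale: open a fresh connection

    def checkin(self, conn: Conn) -> None:
        self._idle.append(conn)

pool = PrePingPool()
first = pool.checkout()
pool.checkin(first)
first.alive = False            # simulate the server dropping the idle connection
second = pool.checkout()       # pre-ping discards the dead one and opens a fresh Conn
```

The caller never sees the stale connection; the cost is one extra round-trip per checkout, which is the trade-off the best practice above accepts.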
## Further Reading
- SQLAlchemy Async Session Documentation
- Alembic Tutorial — Auto Generating Migrations
- FastAPI — SQL (Relational) Databases
- PostgreSQL Connection Pooling Best Practices
## What the Agent Never Implements
The chassis owns all the database plumbing. Your agent never needs to:
- Configure the async engine or connection pool. Backend selection and pool sizing are driven entirely by environment variables.
- Derive database URLs for different backends. The settings validator takes care of SQLite, Postgres, and custom URL derivation automatically.
- Manage session lifecycle or transaction boundaries. `get_db_session()` provides a request-scoped session through dependency injection.
- Map async URLs to sync URLs for Alembic. The settings validator derives the Alembic URL from the runtime URL.
- Create the SQLite data directory. `_ensure_sqlite_parent_exists()` handles that on engine creation.
- Run migrations at deploy time. The Docker entrypoint runs Alembic upgrades when configured.