PostgreSQL Sync
AgentsView stores sessions locally in SQLite by default. PostgreSQL sync lets you push sessions from one or more machines into a shared PostgreSQL database, then serve a read-only web UI from it — useful for team dashboards or multi-machine setups.
The sync direction is one-way: SQLite to PostgreSQL. Each machine
pushes its own sessions; pg serve reads from the shared database.
The resulting UI includes the session browser, analytics dashboard,
search, and, as of 0.23.0, the Usage dashboard as well.
Quick Start
1. Configure PostgreSQL
Add a [pg] section to ~/.agentsview/config.toml:
```toml
[pg]
url = "postgres://user:pass@host:5432/dbname?sslmode=require"
machine_name = "my-laptop"
```

The machine_name identifies which machine pushed each session.
It defaults to the system hostname if omitted. It must not be
"local" (reserved for the local SQLite sentinel).
2. Push Sessions
```
agentsview pg push
```

This syncs all local sessions, messages, and tool calls to PostgreSQL. The schema is created automatically on first push.
3. Serve the Dashboard
```
agentsview pg serve
```

Opens the read-only web UI at http://127.0.0.1:8080, backed
entirely by PostgreSQL. No local SQLite, file watching, or uploads
— just the viewer.
Commands
agentsview pg push
Sync sessions from the local SQLite database to PostgreSQL.
```
agentsview pg push [flags]
```

| Flag | Default | Description |
|---|---|---|
| --full | false | Force full local resync and re-push, bypassing change detection |
| --projects | | Comma-separated projects to push (inclusive) |
| --exclude-projects | | Comma-separated projects to exclude |
| --all-projects | false | Ignore configured project filters for this run |
Push is on-demand — run it whenever you want to sync. There is no automatic background push.
What happens on push:
- Runs a local sync to pick up any new or modified session files
- Compares local sessions against the PostgreSQL watermark to find what changed since the last push
- Upserts sessions, messages, and tool calls in batches of 50
- Advances the watermark timestamp on success
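Since push is on-demand, a simple way to confirm a run completed and advanced the watermark is to chain it with the status command:

```shell
# Push, then confirm the "Last push" watermark advanced
agentsview pg push && agentsview pg status
```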
Incremental pushes use a two-layer fingerprint to skip
unchanged sessions: first, session metadata fields (project,
agent, timestamps, message counts) are compared; then, per-
session message statistics (count, content length sum/max/min,
system message ordinals, tool call counts) are checked against
PostgreSQL. Use --full to bypass both layers and re-push
everything — for example, after a schema reset or when message
content was rewritten in place.
If any sessions fail to push, the watermark is not advanced so they are retried on the next run. The exit code is 1 when any errors occur, 0 otherwise.
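Because the exit code is non-zero on any error, periodic jobs can surface failed runs. A sketch, assuming logger(1) is available:

```shell
# Failed sessions are retried automatically on the next run,
# but record the failure so it is visible in the system log
agentsview pg push || logger -t agentsview "pg push failed; will retry on next run"
```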
Project Filtering
By default, pg push syncs all projects. Use project filters
to push a subset:
```shell
# Push only these projects
agentsview pg push --projects alpha,beta

# Push everything except this project
agentsview pg push --exclude-projects scratch

# Ignore config-file filters for this run
agentsview pg push --all-projects
```

--projects and --exclude-projects are mutually exclusive.
--all-projects cannot be combined with either.
Project filters can also be set in config.toml so you don’t
need to pass them on every run:
```toml
[pg]
url = "postgres://..."
projects = ["alpha", "beta"]
# or: exclude_projects = ["scratch"]
```

CLI flags override config values. Use agentsview projects to list available project names.
agentsview pg status
Show the current sync state.
```
agentsview pg status
```

Output:

```
Machine: my-laptop
Last push: 2026-03-24T10:30:00Z
PG sessions: 1842
PG messages: 47291
```

| Field | Description |
|---|---|
| Machine | Configured machine name or hostname |
| Last push | Timestamp of last successful push (“never” if no push yet) |
| PG sessions | Total session count in PostgreSQL (all machines) |
| PG messages | Total message count in PostgreSQL (all machines) |
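Status reports totals across all machines. For a per-machine breakdown you can query PostgreSQL directly; this is a sketch only, and the agentsview.sessions table and machine column names are assumptions rather than a documented schema:

```shell
# Hypothetical per-machine session counts; verify the actual
# table and column names in your schema before relying on this
psql "$AGENTSVIEW_PG_URL" -c \
  'SELECT machine, count(*) FROM agentsview.sessions GROUP BY machine;'
```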
agentsview pg serve
Start a read-only web UI backed by PostgreSQL.
```
agentsview pg serve [flags]
```

| Flag | Default | Description |
|---|---|---|
| --host | 127.0.0.1 | Bind address |
| --port | 8080 | Port |
| --base-path | | URL prefix for reverse-proxy subpath |
| --public-url | | Public-facing URL for proxy access |
| --public-origin | | Trusted browser origin (repeatable/comma-separated) |
| --public-port | 8443 | External port for managed proxy |
| --proxy | | Managed proxy mode (caddy) |
| --caddy-bin | caddy | Caddy binary path |
| --proxy-bind-host | 0.0.0.0 | Caddy bind address |
| --tls-cert | | TLS certificate path |
| --tls-key | | TLS key path |
| --allowed-subnet | | CIDR allowlist (repeatable/comma-separated) |
The server is read-only — uploads, file watching, and local sync are all disabled. Sessions from all machines appear in a single unified view. The same deployment also serves the analytics dashboard and the Usage page from PostgreSQL-backed queries.
On startup, pg serve automatically applies any pending
schema migrations to PostgreSQL, creating new tables and
indexes added in newer AgentsView versions. This removes
the need to run pg push before starting the server after
an upgrade. If the PostgreSQL role is read-only, the
migration is skipped and the server falls back to the
schema compatibility check.
When require_auth is enabled, a bearer token is generated if
needed and printed on startup. Pass it via
Authorization: Bearer <token> on API requests. The SSE watch
endpoint also accepts ?token=<token> as a query parameter since
the EventSource API cannot set custom headers.
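For example, with curl; the /api/... paths below are placeholders, not documented endpoints, so substitute the real ones:

```shell
# Regular API request with the bearer token printed on startup
# (/api/sessions is a placeholder path)
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8080/api/sessions

# SSE watch endpoint: EventSource cannot set headers, so the token
# goes in the query string (/api/watch is a placeholder path)
curl -N "http://127.0.0.1:8080/api/watch?token=$TOKEN"
```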
For LAN access, combine require_auth = true with a non-loopback
bind such as agentsview pg serve --host 0.0.0.0, or keep the
backend on loopback and expose it through a proxy.
pg serve does not expose the global live-refresh event stream
used by normal agentsview serve, because there is no local sync
engine attached to the server. The session browser, analytics, and
usage views still work normally; they are just not auto-refreshed by
the global SSE path.
For managed Caddy mode, keep the backend --host on loopback
and use --proxy-bind-host / --public-port to expose the
public listener. The pg serve and normal serve modes keep
separate managed-Caddy state, so both can coexist on one host.
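Because the two modes keep separate managed-Caddy state, one host can run both viewers side by side. A sketch, assuming the normal serve command accepts the same --port flag:

```shell
# Local SQLite-backed viewer on one port,
# PG-backed read-only viewer on another
agentsview serve --port 8080 &
agentsview pg serve --port 8081
```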
Examples:

```shell
# Local development — loopback, no auth required
agentsview pg serve

# Team viewer with managed Caddy and TLS
# Requires require_auth = true in config.toml
agentsview pg serve \
  --proxy caddy \
  --public-url https://viewer.example.com \
  --public-port 8443 \
  --tls-cert /path/to/cert.pem \
  --tls-key /path/to/key.pem

# Remote access on a trusted private network (no TLS)
# Requires require_auth = true; only use behind a VPN or on a
# private LAN because tokens cross the wire in cleartext.
agentsview pg serve --host 0.0.0.0 --port 8080
```

Machine Labels
When multiple machines push to the same PostgreSQL database, each session is tagged with its source machine name. In the web UI, session items show a machine label when the session did not originate from the local machine. Use the multi-host filter in the sidebar to show sessions from specific machines.

Configuration
All PostgreSQL settings live in the [pg] section of
~/.agentsview/config.toml:
```toml
[pg]
url = "postgres://user:pass@host:5432/dbname?sslmode=require"
machine_name = "my-laptop"
schema = "agentsview"
allow_insecure = false
```

| Field | Default | Description |
|---|---|---|
| url | (required) | PostgreSQL connection string |
| machine_name | OS hostname | Identifies the pushing machine; defaults to os.Hostname() if omitted |
| schema | agentsview | PostgreSQL schema name |
| allow_insecure | false | Allow non-TLS connections to non-loopback hosts |
| projects | | Array of project names to include in push |
| exclude_projects | | Array of project names to exclude from push |
The connection string supports standard PostgreSQL parameters.
Use sslmode=require or sslmode=verify-full for remote
databases. Only use sslmode=disable for trusted local
connections.
Environment variables in the URL are expanded using ${VAR}
syntax:
```toml
[pg]
url = "postgres://${PG_USER}:${PG_PASSWORD}@host:5432/dbname?sslmode=require"
```

Environment Variables
PostgreSQL settings can also be configured via environment
variables, which override config.toml values:
| Variable | Description |
|---|---|
| AGENTSVIEW_PG_URL | PostgreSQL connection URL |
| AGENTSVIEW_PG_MACHINE | Machine name for push sync |
| AGENTSVIEW_PG_SCHEMA | Schema name (default agentsview) |
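For example, to point a one-off push at a different database without editing config.toml:

```shell
# Environment variables override config.toml for this shell session
export AGENTSVIEW_PG_URL='postgres://user:pass@staging-host:5432/dbname?sslmode=require'
export AGENTSVIEW_PG_MACHINE='ci-runner'
agentsview pg push
```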
Multi-Machine Workflow
A typical team setup:
- Each developer configures [pg] in their local config.toml with a unique machine_name
- Each developer runs agentsview pg push periodically (or on a cron schedule) to sync their sessions
- One server runs agentsview pg serve pointed at the shared PostgreSQL database
- The team opens the shared dashboard to browse everyone’s sessions, filtered by machine if needed
```shell
# Developer A's cron (every 30 minutes)
*/30 * * * * agentsview pg push
```

```shell
# Team server
agentsview pg serve \
  --proxy caddy \
  --public-url https://viewer.team.internal \
  --tls-cert /etc/certs/viewer.pem \
  --tls-key /etc/certs/viewer-key.pem
```

Limitations
- One-way sync — sessions flow from SQLite to PostgreSQL only. Changes in PostgreSQL do not propagate back to local machines.
- Permanent deletes not propagated — sessions removed via agentsview prune are not deleted from PostgreSQL because the local rows no longer exist at push time. Use a direct SQL DELETE to clean up PostgreSQL if needed. Soft-deleted sessions (trash) sync correctly.
- Schema compatibility — pg serve automatically applies pending schema migrations on startup. If the PostgreSQL role lacks DDL permissions, run agentsview pg push from a machine with write access to update the schema.
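The direct SQL cleanup mentioned above might look like the following. The table and column names are assumptions about the schema, so inspect it first (for example with \dt agentsview.* in psql) and test the WHERE clause with a SELECT before deleting:

```shell
# Hypothetical cleanup of a session pruned locally but still in
# PostgreSQL; table and column names are assumptions, verify them
# against your actual schema before running
psql "$AGENTSVIEW_PG_URL" -c \
  "DELETE FROM agentsview.sessions WHERE id = 'session-id-to-remove';"
```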