# Backups & Restore
TravStats writes its own backups. You don’t need to manage pg_dump
or volume snapshots manually — the scheduler does it on whatever
cron you set, with retention and optional off-host sync.
That said, the most important thing about backups is knowing they work. This page covers both the schedule and the restore drill.
## What gets backed up

A backup is a single gzipped `pg_dump` of the entire `flights` database. That includes:
- Every flight, trip, booking, and tag for every user
- All achievements and which user has unlocked which
- Personal access tokens (as bcrypt hashes)
- Instance settings (encrypted at rest using the encryption key from `/app/data/secrets/`)
- The audit log
- Parser templates (built-in templates are code, not data; user templates are in this dump)
What’s not in the backup:
- The encryption key itself (`/app/data/secrets/encryption.key`) and the JWT secret (`/app/data/secrets/jwt.secret`) — these live on the host filesystem, not in Postgres
- Pulled Ollama model weights — these live in the Ollama container’s volume
For a true bare-metal restore you need both: a recent SQL dump and
a copy of /app/data/secrets/. Without the encryption key, the
encrypted instance settings (API keys, SMTP password) can’t be
decrypted on the new host.
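Since the secrets directory never appears in the SQL dump, it is worth archiving it alongside each dump. A minimal sketch; the `mktemp` directory here is a stand-in for `/app/data/secrets/` on a real host, and the archive name is an assumption:

```shell
# Stand-in for /app/data/secrets on a real host; on the server you would
# point SRC at the actual directory (e.g. a copy pulled via `docker cp`)
SRC="$(mktemp -d)"
touch "$SRC/encryption.key" "$SRC/jwt.secret"

# Archive with a dated name so it pairs with the matching SQL dump
OUT="travstats-secrets-$(date +%F).tar.gz"
tar czf "$OUT" -C "$SRC" .
tar tzf "$OUT"   # lists ./encryption.key and ./jwt.secret
```

Store the archive with the same care as the dump: whoever holds both can decrypt your instance settings.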
## Backup scheduling

Admin → Settings → Backups:
| Field | Default | Description |
|---|---|---|
| Enabled | true | Master switch — turn off for ephemeral demo instances |
| Schedule | 0 2 * * * (daily 02:00 UTC) | Standard cron expression. Container TZ is UTC; schedule fires in UTC |
| Retention (count) | 14 | Keep the last 14 backup files, prune older ones |
| Compression | gzip | Compresses the SQL dump ~10× |
| Destination — local | /app/data/backups/ | Always written here first |
| Destination — WebDAV | (off) | Optional off-host sync — covered below |
The scheduler uses node-cron and runs in the same process as the backend. If the container restarts mid-backup, the partial file is discarded and the next scheduled run replaces it.
## The local backup file

```shell
docker exec travstats-app ls -lh /app/data/backups
# total 24M
# -rw-r--r-- 1 node node 1.7M May 02 02:00 travstats-2026-05-02-020000.sql.gz
# -rw-r--r-- 1 node node 1.7M May 01 02:00 travstats-2026-05-01-020000.sql.gz
# ...
```

Filename pattern: `travstats-<YYYY-MM-DD>-<HHMMSS>.sql.gz`.
The backups live inside the `travstats-app-data` named Docker volume, which survives container restarts and recreation; the data is lost only if you run `docker volume rm`. If your host filesystem is itself backed up (rsnapshot, restic, ZFS snapshots), the backups travel with it. If not, configure WebDAV sync below.
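If you mirror the backup directory to another host, you can apply the same keep-the-last-N policy there. A sketch using a throwaway directory of dummy files; on a real mirror you would point `MIRROR` at your copy instead:

```shell
# Throwaway demo dir; on a real host, point MIRROR at your synced copy
MIRROR="$(mktemp -d)"
for d in $(seq -w 1 20); do
  touch "$MIRROR/travstats-2026-05-$d-020000.sql.gz"
done

# The timestamped filenames sort chronologically, so "oldest first" is
# just a lexical sort; delete everything except the newest 14
ls -1 "$MIRROR"/travstats-*.sql.gz | sort | head -n -14 | xargs -r rm --
ls -1 "$MIRROR" | wc -l   # → 14
```

This mirrors the built-in retention behaviour, but runs on your host, not inside TravStats.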
## Manual backup on demand

For a fresh dump right now, there are two paths:
### From the admin UI

Admin → Settings → Backups → Run backup now fires the same code path as the scheduler. Useful before a major upgrade.
### From the command line

```shell
docker exec travstats-db pg_dump -U flights flights | gzip > flights-pre-upgrade.sql.gz
```

This bypasses TravStats entirely and talks directly to Postgres. Use it when the app itself is unhealthy and you can’t reach the admin UI.
## Restore drill

You should run this once, on a non-production instance, before you need it for real.
```shell
# 1. Stop the app so it can't write while you're restoring
docker compose -f docker-compose.prod.yml stop app

# 2. Wipe the existing public schema (destructive — confirms you
#    have a backup ready)
docker exec -i travstats-db psql -U flights -d flights -c \
  "DROP SCHEMA public CASCADE; CREATE SCHEMA public; CREATE EXTENSION postgis;"

# 3. Replay the dump
gunzip -c travstats-2026-05-02-020000.sql.gz \
  | docker exec -i travstats-db psql -U flights flights

# 4. Restart the app — migrations re-run against the restored DB
docker compose -f docker-compose.prod.yml up -d app
```

The restored instance comes up with all your data exactly as it was at the moment of the dump. Auto-applied migrations may add new columns to existing tables (additive migrations are idempotent), so restoring an older dump into a newer TravStats version usually works.
The opposite — restoring a newer dump into an older TravStats — fails because the older code doesn’t know about new columns. Don’t downgrade by restoring; downgrade by switching the image tag and letting the data stay.
## WebDAV off-host sync

The most-asked feature. Configure under Admin → Settings → Backups → WebDAV:
| Field | Description |
|---|---|
| WebDAV URL | The full URL to the destination folder, e.g. https://nextcloud.example.com/remote.php/dav/files/youruser/Backups/TravStats/ |
| Username | Your WebDAV account |
| Password | An app-specific password is preferred (Nextcloud and HiDrive both support these) |
| Push every | Default daily — same cron as backup. Set to weekly if backups are large |
After every successful local backup, the file is uploaded to the WebDAV target with an HTTP PUT. Failures are logged and retried on the next scheduler tick; TravStats doesn’t fail loudly because the local backup still succeeded.
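To check credentials and URL independently of TravStats, you can PUT a small file by hand with curl. The host, user, and app password below are placeholders, not values TravStats provides:

```shell
# Placeholder values — substitute your own server, user, and app password
WEBDAV_URL="https://nextcloud.example.com/remote.php/dav/files/youruser/Backups/TravStats"
echo "connectivity test" > /tmp/travstats-webdav-test.txt

# -f turns HTTP 4xx/5xx responses into a non-zero curl exit status,
# so the if/else branches are meaningful
if curl -fsS -u "youruser:app-password" \
     -T /tmp/travstats-webdav-test.txt \
     "$WEBDAV_URL/travstats-webdav-test.txt"; then
  echo "WebDAV write OK"
else
  echo "WebDAV write failed"
fi
```

If this manual PUT succeeds but the TravStats sync does not, the problem is in the configured settings rather than the server.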
### Concrete: Nextcloud

- Nextcloud → Settings → Security → App passwords → create a new app password labelled “TravStats Backup”.
- URL pattern: `https://your-nextcloud.example.com/remote.php/dav/files/<your-username>/<destination-folder>/`
- Paste the app password into TravStats.
### Concrete: HiDrive (Strato)

- HiDrive admin → create an app-specific password under Account → API access.
- URL: `https://webdav.hidrive.strato.com/users/<your-account>/<folder>/`
- The HiDrive WebDAV server has a 1 GB per-file limit on free tiers — fine for TravStats backups under any realistic usage.
### Concrete: generic WebDAV

Anywhere that speaks RFC 4918 WebDAV PUT works. Apache mod_dav, Caddy with the webdav plugin, dedicated services like Box, self-hosted ownCloud — all tested and supported.
The TravStats sync logs each attempt; check `/app/data/logs/app.log` for `webdav` lines if a backup goes missing on the destination.
## Disk-usage management

The `/app/data/backups/` directory grows linearly until it hits the retention count. Some quick math:
- Empty database: ~50 KB per dump
- Small instance, 200 flights: ~150 KB per dump
- Power user, 10000 flights: ~3 MB per dump
- Default retention 14 daily backups → maximum ~50 MB on disk
To keep six months of weekly backups instead, switch the schedule to `0 2 * * 0` (Sundays at 02:00 UTC) and set retention to 26. Maximum disk: ~80 MB.
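The same arithmetic as a sanity check for any schedule you pick; the per-dump size is the power-user estimate from the list above:

```shell
DUMP_KB=3072    # ~3 MB per dump (10000-flight estimate from above)
RETENTION=26    # six months of weekly backups
echo "$(( DUMP_KB * RETENTION / 1024 )) MB"   # → 78 MB
```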
These numbers are dwarfed by the database itself; backup retention is rarely the disk-usage problem.
## Off-site backups via API

If WebDAV doesn’t fit (you’d rather use S3, B2, or a borg server), write a small script that pulls each backup over the API:
```shell
TOKEN="$(cat ~/.travstats-token)"
curl -fsS \
  -H "Authorization: Bearer $TOKEN" \
  https://travstats.example.com/api/v1/admin/backups/latest \
  --output /var/backups/travstats/$(date +%F).sql.gz
```

Combine with rclone, restic, or whatever you already use for off-site.
The endpoint requires an admin-scoped PAT — see
API & Automation.
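One way to package the pull-plus-push into a single cron-able script. The API host, token path, and the rclone remote `offsite:` are placeholders; `sh -n` only checks syntax, it does not run the transfer:

```shell
# Write the script; travstats.example.com, ~/.travstats-token, and the
# rclone remote "offsite:" are placeholders, substitute your own values
cat > ./travstats-offsite.sh <<'EOF'
#!/bin/sh
set -eu
DEST=/var/backups/travstats
TOKEN="$(cat "$HOME/.travstats-token")"
curl -fsS -H "Authorization: Bearer $TOKEN" \
  https://travstats.example.com/api/v1/admin/backups/latest \
  --output "$DEST/$(date +%F).sql.gz"
rclone copy "$DEST" offsite:travstats-backups
EOF
chmod +x ./travstats-offsite.sh
sh -n ./travstats-offsite.sh && echo "syntax OK"
```

Drop the script into root's crontab (or a systemd timer) at an hour after the TravStats backup has run.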
## Disaster recovery checklist

The “everything is on fire” sequence:
1. Find a recent dump: local in `/app/data/backups/`, off-host on your WebDAV / S3, or via the API on a working replica.
2. Find a copy of `/app/data/secrets/`. Without the encryption key, encrypted settings (API keys, SMTP) need to be re-entered manually after restore.
3. Stand up a fresh TravStats on the recovery host using the same major.minor version.
4. Restore the dump (the four-step drill above). Sign in as your old admin user.
5. Verify: total flight count, achievements, key UI flows. The audit log is the canonical record of “what was here on the day of the dump”.
6. Re-enter encrypted settings if `/app/data/secrets/` was lost. Look in the audit log for “settings updated” entries to know what you’d configured.
A rehearsed restore drill takes ~10 minutes for a single-user instance. Do it once, sleep better.