
Backups & Restore

TravStats writes its own backups. You don’t need to manage pg_dump or volume snapshots manually — the scheduler does it on whatever cron you set, with retention and optional off-host sync.

That said, the most important thing about backups is knowing they work. This page covers both the schedule and the restore drill.

A backup is a single gzipped pg_dump of the entire flights database. That includes:

  • Every flight, trip, booking, and tag for every user
  • All achievements and which user has unlocked which
  • Personal access tokens (as bcrypt hashes)
  • Instance settings (encrypted at rest using the encryption key from /app/data/secrets/)
  • The audit log
  • Parser templates (built-in templates are code, not data; user templates are in this dump)

What’s not in the backup:

  • The encryption key itself (/app/data/secrets/encryption.key) and the JWT secret (/app/data/secrets/jwt.secret) — these live on the host filesystem, not in Postgres
  • Pulled Ollama model weights — these live in the Ollama container’s volume

For a true bare-metal restore you need both: a recent SQL dump and a copy of /app/data/secrets/. Without the encryption key, the encrypted instance settings (API keys, SMTP password) can’t be decrypted on the new host.
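One way to capture both pieces in a single pass is to copy them straight out of the app container; a minimal sketch, assuming the container name travstats-app used in the examples below:

Terminal window
# Copy the secrets directory and the current backups out of the app container
docker cp travstats-app:/app/data/secrets ./travstats-secrets
docker cp travstats-app:/app/data/backups ./travstats-backups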

Admin → Settings → Backups:

Field                | Default                      | Description
Enabled              | true                         | Master switch — turn off for ephemeral demo instances
Schedule             | 0 2 * * * (daily 02:00 UTC)  | Standard cron expression. Container TZ is UTC; the schedule fires in UTC
Retention (count)    | 14                           | Keep the last 14 backup files, prune older ones
Compression          | gzip                         | Compresses the SQL dump ~10×
Destination — local  | /app/data/backups/           | Always written here first
Destination — WebDAV | (off)                        | Optional off-host sync — covered below

The scheduler uses node-cron and runs in the same process as the backend. If the container restarts mid-backup, the partial file is discarded and the next scheduled run replaces it.

Terminal window
docker exec travstats-app ls -lh /app/data/backups
# total 24M
# -rw-r--r-- 1 node node 1.7M May 02 02:00 travstats-2026-05-02-020000.sql.gz
# -rw-r--r-- 1 node node 1.7M May 01 02:00 travstats-2026-05-01-020000.sql.gz
# ...

Filename pattern: travstats-<YYYY-MM-DD>-<HHMMSS>.sql.gz.
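Because the timestamp is date-first, filenames sort chronologically under a plain lexicographic sort, which is handy in scripts. A small sketch, assuming the paths shown above:

Terminal window
# Print the most recent backup filename (lexicographic sort is chronological here)
docker exec travstats-app sh -c 'ls /app/data/backups/*.sql.gz | sort | tail -n 1'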

The backups live inside the travstats-app-data named Docker volume, which survives container restarts and upgrades and is only destroyed if you run docker volume rm. If your host filesystem is itself backed up (rsnapshot, restic, ZFS snapshots), the backups travel with it. If not, configure WebDAV sync below.
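To point a host-level backup tool at the right directory, ask Docker where the volume actually lives; a sketch, with the typical mountpoint shown only as an illustration:

Terminal window
# Show the host path backing the named volume, then include it in your host backup
docker volume inspect travstats-app-data --format '{{ .Mountpoint }}'
# usually something like /var/lib/docker/volumes/travstats-app-data/_data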

For a fresh dump right now, two paths:

Admin → Settings → Backups → Run backup now fires the same code path as the scheduler. Useful before a major upgrade.

Terminal window
docker exec travstats-db pg_dump -U flights flights | gzip > flights-pre-upgrade.sql.gz

Bypasses TravStats entirely — talks directly to Postgres. Use this when the app itself is unhealthy and you can’t reach the admin UI.
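Either way, it's worth sanity-checking a dump before you rely on it: gzip can verify the archive, and a plain-format pg_dump lists every table it copied. A quick check, assuming the filename from the example above:

Terminal window
# Verify the gzip archive is intact
gunzip -t flights-pre-upgrade.sql.gz
# List the tables the dump actually contains
gunzip -c flights-pre-upgrade.sql.gz | grep '^COPY ' | head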

You should run the restore drill once, on a non-production instance, before you need it for real.

Terminal window
# 1. Stop the app so it can't write while you're restoring
docker compose -f docker-compose.prod.yml stop app
# 2. Wipe the existing public schema (destructive — make sure you
#    have a backup ready)
docker exec -i travstats-db psql -U flights -d flights -c \
"DROP SCHEMA public CASCADE; CREATE SCHEMA public; CREATE EXTENSION postgis;"
# 3. Replay the dump
gunzip -c travstats-2026-05-02-020000.sql.gz \
| docker exec -i travstats-db psql -U flights flights
# 4. Restart the app — migrations re-run against the restored DB
docker compose -f docker-compose.prod.yml up -d app

The restored instance comes up with all your data exactly as it was at the moment of the dump. Auto-applied migrations may add new columns to existing tables (additive migrations are idempotent), so restoring an older dump into a newer TravStats version usually works.

The opposite — restoring a newer dump into an older TravStats — fails because the older code doesn’t know about new columns. Don’t downgrade by restoring; downgrade by switching the image tag and letting the data stay.
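Once the app is back up, a quick row count against the restored database is a cheap smoke test before you click through the UI. A sketch; the flights table name here is an assumption, and any table whose rough size you remember works just as well:

Terminal window
# Rough smoke test: does the restored database hold about the rows you expect?
# (the table name "flights" is assumed; substitute one you know)
docker exec travstats-db psql -U flights -d flights -c 'SELECT count(*) FROM flights;'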

Off-host sync over WebDAV is the most-asked feature. Configure it under Admin → Settings → Backups → WebDAV:

Field      | Description
WebDAV URL | The full URL to the destination folder, e.g. https://nextcloud.example.com/remote.php/dav/files/youruser/Backups/TravStats/
Username   | Your WebDAV account
Password   | An app-specific password is preferred (Nextcloud and HiDrive both support these)
Push every | Daily by default — the same cadence as the backup schedule. Set to weekly if backups are large

After every successful local backup, the file is uploaded to the WebDAV target with a single PUT. Failures are logged and retried on the next scheduler tick; TravStats doesn't fail loudly, because the local backup still succeeded.
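If files aren't arriving, test the credentials and URL outside TravStats first: curl can issue the same kind of PUT. A sketch with placeholder credentials and the Nextcloud-style URL from the table above:

Terminal window
# Manually PUT an existing backup to the WebDAV destination to verify URL and credentials
curl -fsS -u 'youruser:app-password' \
  -T travstats-2026-05-02-020000.sql.gz \
  'https://nextcloud.example.com/remote.php/dav/files/youruser/Backups/TravStats/travstats-2026-05-02-020000.sql.gz'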

For Nextcloud:

  1. Nextcloud → Settings → Security → App passwords → create a new app password labelled “TravStats Backup”.
  2. URL pattern: https://your-nextcloud.example.com/remote.php/dav/files/<your-username>/<destination-folder>/
  3. Paste the app password into TravStats.

For HiDrive:

  1. HiDrive admin → create an app-specific password under Account → API access.
  2. URL: https://webdav.hidrive.strato.com/users/<your-account>/<folder>/
  3. The HiDrive WebDAV server has a 1 GB / file limit on free tiers — fine for TravStats backups under any realistic usage.

Anywhere that speaks RFC 4918 WebDAV PUT works: Apache mod_dav, Caddy with the webdav plugin, dedicated services like Box, self-hosted ownCloud — all tested and supported.

The TravStats sync logs each attempt; check /app/data/logs/app.log for webdav lines if a backup goes missing on the destination.
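A quick way to pull just those lines out of the running container, assuming the log path above:

Terminal window
# Show the most recent WebDAV sync attempts (case-insensitive match)
docker exec travstats-app grep -i webdav /app/data/logs/app.log | tail -n 20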

The /app/data/backups/ directory grows linearly until it hits the retention count. Some quick math:

  • Empty database: ~50 KB per dump
  • Small instance, 200 flights: ~150 KB per dump
  • Power user, 10000 flights: ~3 MB per dump
  • Default retention 14 daily backups → maximum ~50 MB on disk

To keep six months of weekly backups instead, switch the schedule to 0 2 * * 0 (Sundays at 02:00 UTC) and set retention to 26; that caps disk usage at roughly 80 MB.

These numbers are dwarfed by the database itself; backup retention is rarely the disk-usage problem.
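To see what retention actually costs on your instance, check the directory size directly:

Terminal window
# Actual disk usage of the backup directory
docker exec travstats-app du -sh /app/data/backups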

If WebDAV doesn’t fit (you’d rather use S3, B2, or a borg server), write a small script that pulls each backup over the API:

Terminal window
TOKEN="$(cat ~/.travstats-token)"   # admin-scoped personal access token
# Download the most recent backup to wherever your off-site tooling picks it up
curl -fsS \
  -H "Authorization: Bearer $TOKEN" \
  https://travstats.example.com/api/v1/admin/backups/latest \
  --output "/var/backups/travstats/$(date +%F).sql.gz"

Combine with rclone, restic, or whatever you already use for off-site. The endpoint requires an admin-scoped PAT — see API & Automation.
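For example, after the download above, a single rclone copy pushes the directory off-site; b2:travstats-backups is a placeholder for whatever rclone remote you already have configured:

Terminal window
# Push the downloaded backups to an existing rclone remote (placeholder name)
rclone copy /var/backups/travstats/ b2:travstats-backups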

The “everything is on fire” sequence:

  1. Find a recent dump. Local in /app/data/backups/, off-host on your WebDAV / S3, or via API on a working replica.
  2. Find a copy of /app/data/secrets/. Without the encryption key, encrypted settings (API keys, SMTP) need to be re-entered manually after restore.
  3. Stand up a fresh TravStats on the recovery host using the same major.minor version.
  4. Restore the dump (the four-step drill above). Sign in as your old admin user.
  5. Verify: total flight count, achievements, key UI flows. The audit log is the canonical record of “what was here on the day of the dump”.
  6. Re-enter encrypted settings if /app/data/secrets/ was lost. Look in the audit log for “settings updated” entries to know what you’d configured.

A rehearsed restore drill takes ~10 minutes for a single-user instance. Do it once, sleep better.