
Troubleshooting

If something’s broken, find your symptom below. Most issues have a two-line fix. Anything that needs more than that points to a file you’ll already know how to read.

“Database connection timeout after 30 attempts”


The app waits 60 seconds for Postgres before giving up. Most likely causes:

  • Postgres container itself crashed — docker compose logs db will show why. Most common: wrong DB_PASSWORD (it changed since the volume was created — Postgres won’t accept the new password against existing data).
  • DATABASE_URL is wrong — pointing at a host that doesn’t exist on the Docker network. The bundled compose uses db as the hostname.
  • DNS / network isolation — Docker bridge network not created, or the app container is on a different network than db.

Try first: docker compose -f docker-compose.prod.yml ps — if db isn’t (healthy), fix that before fixing the app.
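What ps reports as (healthy) comes from the db service’s healthcheck. As a sketch only — the image, intervals, and check command here are assumptions (the user and database name flights match the psql examples later on this page); check docker-compose.prod.yml for the real definition:

```yaml
# Sketch of a Postgres healthcheck; your bundled compose file may differ.
services:
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U flights -d flights"]
      interval: 5s
      timeout: 3s
      retries: 10
```

If the check itself never passes, the problem is inside the db container, not in the app.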

If a migration failed, the entrypoint will retry it on the next boot, but you’ll see the underlying error in the logs. Common cases:

  • Schema drift — someone edited the database manually and Prisma sees columns it didn’t add. Resolve by rolling back the marker row: docker exec travstats-app npx prisma migrate resolve --rolled-back <migration_name> then restart.
  • Disk full — df -h on the host. Postgres needs free space for transaction logs even on a read-only failed migration.
  • Missing PostGIS extension — only happens on external-Postgres setups. Run CREATE EXTENSION postgis; against the database, then restart.

The persisted secret got truncated, partially overwritten, or its file lost permissions. Delete it and let the entrypoint regenerate it (this signs everyone out — they log back in with the same password):

Terminal window
docker exec travstats-app rm /app/data/secrets/jwt.secret
docker compose -f docker-compose.prod.yml restart app

Almost always one of two things behind a reverse proxy:

  1. X-Forwarded-Proto is missing. TravStats reads it to decide whether to set the Secure flag on the JWT cookie. If your proxy doesn’t forward it and the browser is on HTTPS, the cookie is dropped and you’re back at login.

    nginx fix:

    proxy_set_header X-Forwarded-Proto $scheme;
  2. COOKIE_SECURE=true on plain HTTP. Inverse problem — you’re on http://travstats.lan:3010 and the cookie is set Secure-only, so the browser drops it. Fix: leave COOKIE_SECURE unset (auto-detect) or set it explicitly to false for LAN-only HTTP.

DevTools → Application → Cookies should show travstats_session set on the response from /api/v1/auth/login. If it’s set there but missing on subsequent requests, you’re hitting one of the above.
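For case 1, a fuller nginx location block — a sketch that assumes TravStats listens on port 3010 (as in the LAN example above); adapt the upstream address to your setup:

```nginx
location / {
    # X-Forwarded-Proto is what TravStats reads to decide on the Secure cookie flag
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:3010;
}
```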

The frontend redirects to setup whenever /api/v1/setup/status returns requiresSetup: true. That endpoint counts admin users — if none exist, you get the wizard. Cause: someone deleted every admin.

Terminal window
docker exec -it travstats-db psql -U flights flights -c "SELECT id, username, is_admin FROM users WHERE is_admin = true;"

If empty, promote a user manually or run the createAdmin.js fallback from the first-run page.
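Promoting manually is one statement — using the same table and column names as the query above; substitute the real username:

```sql
UPDATE users SET is_admin = true WHERE username = 'your_user';
```

Run it through the same docker exec … psql invocation shown above, then log out and back in.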

SMTP isn’t configured (or the configured server is rejecting). Admin → Settings → SMTP has a Send test mail button — start there. If the test fails, check your provider’s auth requirements (most need an app-specific password, not your account password).

If you’re locked out and can’t get to Admin → Settings: see the bcrypt-reset trick.

Most likely: the email is from an airline TravStats doesn’t have a template for, and Ollama isn’t reachable / not configured.

Check:

Terminal window
docker exec travstats-app curl -fsS http://ollama:11434/api/tags

Should list installed models. If the call fails (connection refused or a 5xx), the Ollama container isn’t running or isn’t reachable from the app container. If it lists no models, pull one:

Terminal window
docker exec travstats-ollama ollama pull gemma3:12b

Built-in templates exist for: LH (Lufthansa, both old and new formats), LX (Swiss), OS (Austrian), SN (Brussels), FR (Ryanair), U2 (easyJet), EW (Eurowings), W6 (Wizz Air). For other carriers you can record a user template.

The most common case is multi-leg bookings where the parser picked up the wrong leg. The review screen lets you delete the unwanted suggestions before saving — nothing is persisted until you click Save all.

If the date is consistently off by hours (not days), it’s a timezone issue — make sure your Admin → Settings → Timezone is set correctly. The container itself runs UTC; the UI converts to your configured display timezone.

Vision parsers cascade in this order: Ollama vision model → OpenAI / Claude (if API key set) → Tesseract OCR → manual entry pre-fill.

If everything blanks out:

  • Check OLLAMA_URL points to a container with a vision-capable model installed (e.g. llama3.2-vision)
  • Try uploading a clearer photo — Tesseract is picky about glare and angle
  • Check rate limits: boardingPassParseLimiter allows N requests/min per user; busy users hit it during bulk re-imports

Almost always WebGL or content-security-policy related.

  1. DevTools → Console. Look for errors mentioning WebGL, WEBGL_lose_context, or CSP.
  2. No errors but blank canvas? Try a hard reload (Ctrl + Shift + R) — three.js can lose its WebGL context when the tab was backgrounded for a long time.
  3. CSP blocking inline scripts? If you set a custom Content-Security-Policy via your reverse proxy, make sure script-src includes 'self' and worker-src allows blob: workers (deck.gl uses them).
  4. Map tiles don’t load? Check the network tab — TravStats fetches from OpenStreetMap-compatible tile servers by default. If your firewall blocks them, point to a self-hosted tileserver under Admin → Settings → Map → Tile Server.
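For point 3, a CSP sketch in nginx form — only the directives the notes above call out; the tile host is a placeholder, and this should be merged into whatever policy you already serve, not swapped in wholesale:

```nginx
# Sketch: minimum for three.js / deck.gl per the notes above; tile host is a placeholder
add_header Content-Security-Policy "script-src 'self'; worker-src 'self' blob:; img-src 'self' data: https://tile.example.org" always;
```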

TravStats writes structured JSON via Pino to four files inside /app/data/logs/:

  • app.log — everything the application emits (default level: info)
  • error.log — errors and warnings only, a subset of app.log (default level: error)
  • http.log — one line per HTTP request: method, path, status, latency, user (default level: info)
  • parser*.log — parser pipeline traces: template detection, Ollama prompts, fallback decisions (default level: debug)

From the host:

Terminal window
docker exec travstats-app tail -f /app/data/logs/app.log | jq .

Or just docker compose logs -f app — Pino writes to stdout too, so docker logs and the file see the same stream.
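If jq isn’t on the host, plain grep works too, because each Pino line is one JSON object and levels are numeric (info is 30, error is 50 in Pino’s default numbering). With a stand-in log file for illustration:

```shell
# Each Pino line is a single JSON object; "level":50 means error.
log=$(mktemp)
printf '%s\n' \
  '{"level":30,"time":1700000000000,"msg":"listening on 3000"}' \
  '{"level":50,"time":1700000001000,"msg":"db connection refused"}' > "$log"
grep '"level":50' "$log"
# → {"level":50,"time":1700000001000,"msg":"db connection refused"}
```

On the real file, substitute /app/data/logs/app.log (via docker exec) for the stand-in path.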

In the UI: Admin → Settings → Logging → Level → debug, save. Persists across restarts. Drop it back to info when done — debug adds a lot of volume to app.log.

For one-off Prisma query inspection:

Terminal window
docker exec -e PRISMA_LOG_LEVEL=info travstats-app

If the disk is filling up, find out what is using the space:

Terminal window
docker system df
docker exec travstats-app du -sh /app/data/*

Common culprits:

  • /app/data/logs/ — unbounded growth if backups aren’t pruning. Admin → Settings → Logging → Retention.
  • /app/data/backups/ — every nightly pg_dump accumulates. Lower the Admin → Settings → Backups → Retention setting.
  • Old image tags — docker image prune -f after upgrades.

Before opening one, capture:

  1. TravStats version — bottom of any page in the UI, or cat /app/backend/VERSION inside the container.
  2. The container logs around the failure — docker compose logs --tail 200 app db. Redact any API keys.
  3. A repro — what you clicked or which API call you made. Curl + headers if you can.

File at github.com/Abrechen2/TravStats/issues. The bug-report template will ask for exactly the things above.

The in-app Report Bug button (Settings → footer) builds an anonymised diagnostic bundle and copies it to your clipboard — paste it into the GitHub issue and you’ve covered most of what maintainers will ask.

The bundle is a single JSON document — small enough to paste into an issue, structured so maintainers can grep through it. Same content as Admin → Settings → System info → Diagnostic export.

  • Recent log entries from app.log and error.log (latest ~500 lines per file)
  • System info — TravStats version, build date, Node version, Postgres version, database size, backup count, current health status
  • Settings sketch — instance name, registration mode, configured-or-not flags for each external service (no values), log level, retention policy, backup schedule
  • Log statistics — how many error / warn / info lines in the rolling window

The bundle goes through a defensive scrubber before export. The following are stripped entirely wherever they appear as object keys:

ip, ipAddress, userAgent, email, notificationEmail, password, passwordHash, token, auth_token, authorization, cookie, cookies, resetToken, changeToken, apiKey, api_key, openaiApiKey, claudeApiKey, globalOpenaiApiKey, globalClaudeApiKey, airlabsApiKey, aviationstackApiKey, clientSecret, accessToken, refreshToken.

Pattern-replaced when they appear in any string value:

  • JWT-shaped tokens (eyJ…) → <redacted:jwt>
  • Email addresses → <redacted:email>
  • IPv4 addresses → <redacted:ip>
  • UUIDs (user IDs, flight IDs, …) → <redacted:uuid>

Never included at all:

  • Flight data (no routes, no airlines, no dates)
  • User account data (no usernames, no emails, no passwords)
  • API key values for any external service
  • Encryption key, JWT secret, or any other secret from /app/data/secrets/
  • Anything from the database tables — only logs + system info
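The pattern-replacement stage can be approximated in shell — the real scrubber runs inside the Node app, and these regexes are illustrative, not the actual ones:

```shell
# Illustrative only: sed approximations of two of the scrubber's patterns.
echo 'user 10.0.0.5 presented eyJhbGciOiJIUzI1NiJ9.e30.c2ln' \
  | sed -E \
      -e 's/eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/<redacted:jwt>/g' \
      -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/<redacted:ip>/g'
# → user <redacted:ip> presented <redacted:jwt>
```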

When you’re filing a GitHub issue. The bundle answers most of the “can you reproduce / what version / what config / what’s the error” questions in one paste. Without it, the maintainer has to ask follow-up questions for two days; with it, you usually get a fix on first reply.

If you’re nervous about residual PII in the redacted bundle, open the JSON in a text editor before pasting and grep for anything that looks identifying — the scrubber is conservative, but the “check before paste” step is yours to make.
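That check-before-paste can itself be a grep. The patterns below catch emails and IPv4 addresses only — examples, not a complete PII scan — shown here against a stand-in bundle:

```shell
# Stand-in bundle; on a real run, point $bundle at the file you exported.
bundle=$(mktemp)
printf '%s' '{"log":"<redacted:email> contacted <redacted:ip> at 12:00"}' > "$bundle"
grep -E -o '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}|([0-9]{1,3}\.){3}[0-9]{1,3}' "$bundle" \
  || echo 'no obvious emails or IPs found'
# → no obvious emails or IPs found
```

Anything the grep does print is something the scrubber missed — remove it by hand before pasting.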