# Logs & retention
RunWisp captures each run’s stdout and stderr to disk. You can watch the live tail, scroll back through finished runs, and download the full log. The database stores run metadata — exit code, duration, timestamps — but not the log bodies; those live on disk, one file per run.
## Where logs live

```
{data_dir}/logs/{task-name}/{YYYYMMDD}_{HHMMSS}_{run-id-suffix}.log
```

- One file per run. Each invocation gets its own isolated log, named after its start time and a short suffix of the run's ULID so files are sortable and unique. Task names are sanitised — anything outside `[a-zA-Z0-9_-]` becomes `_`. The timestamp in the filename is in UTC so the on-disk cadence stays stable across host timezone changes.
- Stdout and stderr are interleaved in capture order — the same bytes the script wrote, in the order it wrote them.
- When a run is deleted by retention, the daemon removes the log file, any helper files alongside it, and any parent directory that becomes empty. There’s no orphaning: a row never points at a missing log, and a log file never lingers without a row.
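The naming scheme can be sketched as follows. This is an illustration, not the daemon's actual code — the four-character suffix length and the example `data_dir` are assumptions:

```python
# Illustrative sketch of the per-run log path described above.
import re
from datetime import datetime, timezone

def log_path(data_dir, task_name, started_at, run_id):
    # Anything outside [a-zA-Z0-9_-] in the task name becomes "_".
    safe = re.sub(r"[^a-zA-Z0-9_-]", "_", task_name)
    # The filename timestamp is UTC, so it is stable across host timezone changes.
    stamp = started_at.astimezone(timezone.utc).strftime("%Y%m%d_%H%M%S")
    return f"{data_dir}/logs/{safe}/{stamp}_{run_id[-4:]}.log"  # suffix length assumed

log_path("/var/lib/runwisp", "nightly report!",
         datetime(2024, 6, 1, 4, 0, tzinfo=timezone.utc), "01HZXW3Q9R7T")
# → /var/lib/runwisp/logs/nightly_report_/20240601_040000_9R7T.log
```

Note how both the space and the `!` in the task name are replaced, so the directory name is shell-safe.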
## Rotation: `log_max_size` and `log_on_full`

Each run's log is capped — a runaway script doesn't fill the disk with output from a single tick.

```toml
[tasks.bulky-job]
cron = "0 4 * * *"
run = "/usr/local/bin/bulky.sh"
log_max_size = "50MB"
log_on_full = "drop_old"
```

- `log_max_size` — per-run cap. Default `100MB`. Units: `b`, `kb`, `mb`, `gb`, `tb` (case-insensitive); bare numbers are bytes.
- `log_on_full` — what to do when output hits the cap.
| Value | Behaviour |
|---|---|
| `drop_old` | Default. Renames the current log to `.prev` and keeps writing fresh output. The end of the log (usually where the failure is) survives. |
| `drop_new` | Stops accepting new lines; the process keeps running. The right choice when the start is the interesting part (a startup banner, a long batch's preamble) and the rest is repetitive. |
| `kill_task` | Cancels the run's context, terminating the process. The run records as `log_overflow` (a dedicated end reason that's still treated as a failure for retries and notifications), so the cause is visible without inspecting the log. |
Whichever policy fires, the daemon writes a synthetic line at the truncation point so someone skimming the log can see exactly where the limit was hit. Nothing is dropped without a record.
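The `drop_old` behaviour can be sketched in a few lines. This is a minimal illustration under assumptions — the daemon's real writer, its buffering, and the exact wording of the synthetic line are not shown here:

```python
# Sketch of drop_old: when appending a line would exceed the cap,
# rotate the current log to .prev and keep writing fresh output.
import os

def write_line(path, line, cap):
    if os.path.exists(path) and os.path.getsize(path) + len(line) > cap:
        os.replace(path, path + ".prev")  # rotate out the older half
        with open(path, "a") as f:
            # Synthetic marker so a reader can see where the limit hit
            # (wording here is illustrative).
            f.write("--- log_max_size reached; older output in .prev ---\n")
    with open(path, "a") as f:
        f.write(line)
```

The key property: the tail of the output — usually where a failure is — always lands in the fresh file, while at most one rotated-out `.prev` half is kept alongside it.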
`log_on_full` also controls what happens when `[storage]` `min_free_space` trips during a run: `kill_task` cancels the run on disk pressure; `drop_new` and `drop_old` quietly stop accepting lines (and the daemon raises a `log.disk_pressure` notification so the operator discovers the dropped output). See storage configuration.
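A sketch of how the documented size units could be parsed. Whether RunWisp uses 1000- or 1024-based multiples is not stated above, so the 1024-based factors here are an assumption:

```python
# Parse "50MB"-style sizes: units b/kb/mb/gb/tb, case-insensitive,
# bare numbers are bytes. 1024-based multiples are an assumption.
import re

UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3, "tb": 1024**4}

def parse_size(value):
    if isinstance(value, int):   # bare numbers are bytes
        return value
    m = re.fullmatch(r"(\d+)\s*([a-zA-Z]*)", value.strip())
    number, unit = int(m.group(1)), (m.group(2).lower() or "b")
    return number * UNITS[unit]  # KeyError on an unknown unit

parse_size("50MB")  # 52428800 with 1024-based multiples
```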
## Retention: `keep_runs` and `keep_for`

Retention controls how long old run rows and their log files stick around. Both rules apply, and whichever cuts first wins.

```toml
[tasks.metrics]
cron = "* * * * *"
run = "/usr/local/bin/metrics.sh"
keep_runs = 500
keep_for = "7d"
```

- `keep_runs` — keep the N most recent runs for this task. A positive integer. The internal hard cap is 1 000 000; values above it (and zero or negatives) are rejected at config load.
- `keep_for` — delete runs whose `created_at` is older than the given duration. Accepts extended units, including days and weeks: `"7d"`, `"2w"`, `"36h"`, `"30m"`. Zero and negative durations are rejected.
- Per task: each task's retention is evaluated independently using that task's own settings (or the inherited `[defaults]`).
- Omit either field to inherit from `[defaults]`. Set both for a hard floor and a hard ceiling.
Cleanup runs periodically in the background. Runs that are still in progress are never deleted; you may briefly see slightly more than `keep_runs` rows between sweeps. When retention triggers, both the SQLite row and the log file (with its sidecars) are removed.
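The "both rules apply, whichever cuts first wins" logic can be sketched like this. The tuple shape and function name are illustrative, not the daemon's internals:

```python
# Sketch: combine keep_runs (count) and keep_for (age) — a run is
# deleted if EITHER rule condemns it. Still-running runs are assumed
# to be excluded by the caller; they are never deleted.
from datetime import datetime, timedelta, timezone

def runs_to_delete(runs, keep_runs, keep_for, now):
    """runs: list of (run_id, created_at), newest first."""
    doomed = set()
    doomed.update(r for r, _ in runs[keep_runs:])    # count rule
    cutoff = now - keep_for
    doomed.update(r for r, t in runs if t < cutoff)  # age rule
    return doomed
```

With `keep_runs = 2` and `keep_for = "7d"`, a task with three runs aged 1, 5, and 10 days loses only the 10-day run (age rule); dropping `keep_runs` to 1 additionally condemns the 5-day run (count rule).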
## Live streaming

The Web UI and TUI tail logs in real time as the run produces them — new bytes appear within milliseconds. The stream first replays whatever's already on disk for the run, then switches to live, so you don't miss the start of the output.
Scrolling back works on finished runs too. The viewer jumps directly to the byte you asked for without re-reading the file from the top, so even multi-megabyte logs feel instant.
The on-disk file is always the source of truth — the live stream is a fast preview of the same bytes.
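The offset-based read that makes scroll-back fast is, in essence, a seek. A minimal sketch, assuming nothing about the viewer beyond the behaviour described above:

```python
# Jump straight to a byte offset instead of re-reading from the top.
def read_from(path, offset, size=65536):
    with open(path, "rb") as f:
        f.seek(offset)   # O(1) jump regardless of log size
        return f.read(size)
```

This is why even multi-megabyte logs feel instant: the cost of reading a window is proportional to the window, not to everything before it.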
## Downloading

The full log is downloadable as a single file:

- Web UI — the Download button on the run detail panel.
- TUI — press `d` on the run detail view. On a graphical session this opens your browser straight to the download; on SSH it copies the URL to your clipboard or shows it in a modal you can paste from.
- Raw endpoint — `GET /api/tasks/{name}/runs/{id}/log/raw` returns the same bytes; useful for `curl` and shell pipelines.
If a run rotated mid-stream under `drop_old`, the download includes the rotated-out part and the current part as a single file — you get the full bytestream, not just the tail.
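Conceptually the stitching looks like this — the rotated-out `.prev` half first, then the current file. A sketch of the described behaviour, not the daemon's actual download code:

```python
# Rebuild the full bytestream of a run that rotated under drop_old:
# .prev (older half) first, then the current log (the tail).
import os

def full_log_bytes(path):
    data = b""
    if os.path.exists(path + ".prev"):   # rotated-out first half
        with open(path + ".prev", "rb") as f:
            data += f.read()
    if os.path.exists(path):             # current tail
        with open(path, "rb") as f:
            data += f.read()
    return data
```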
## Sidecars (`.idx`, `.tidx`, `.meta`, `.prev`)

RunWisp keeps small helper files next to each `.log` to make seeking and downloading fast. They're rebuilt as needed and removing them won't free notable disk space; don't delete them while a run is in progress. They're cleaned up automatically with the parent log when retention deletes a run.
## Crash safety

Logs are flushed and closed cleanly when a run ends or when the daemon shuts down normally.

If the daemon is killed mid-write (power loss, force kill):

- The log file is not truncated — partial writes survive on disk, and the viewer tolerates a partial last line.
- On the next startup, the run is marked `crashed` (see scheduling: on startup) and its log file is left exactly as it was — those last lines are usually the most useful debugging artifact.
## Where to next

- Notifications model — how a failed run ends up in your inbox or chat.
- How scheduling works — what creates the run rows that retention later trims.
- `[tasks.*]` reference — `log_max_size`, `log_on_full`, `keep_runs`, `keep_for`.