# How scheduling works
The scheduler is meant to be predictable and unsurprising. It reads standard cron expressions, fires the matching tasks at the right moments, and records a row for every firing. There is no dependency graph, no leader election — one daemon owns its tasks.
## The cron field

```toml
[tasks.heartbeat]
cron = "*/5 * * * *"
run = "/usr/local/bin/heartbeat"
```

`cron` accepts standard 5-field syntax: minute, hour, day-of-month,
month, day-of-week. The usual convenience aliases also work:
| Expression | Meaning |
|---|---|
| `* * * * *` | every minute |
| `*/5 * * * *` | every 5 minutes |
| `0 * * * *` | top of every hour |
| `0 2 * * *` | every day at 02:00 |
| `30 9 * * 1-5` | weekdays at 09:30 |
| `0 0 1 * *` | first of every month at midnight |
| `@hourly` | top of every hour |
| `@daily` / `@midnight` | every day at 00:00 |
| `@weekly` | Sundays at 00:00 |
| `@monthly` | 1st at 00:00 |
| `@yearly` / `@annually` | January 1st at 00:00 |
| `@every 1h30m` | every 90 minutes |
Six-field syntax (with seconds) is not supported. If you need
firings finer than a minute, use `@every 30s` or run a service.
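For a sub-minute schedule, the `@every` form takes a duration directly. A minimal sketch — the task name and command here are made up for illustration:

```toml
[tasks.queue-probe]           # hypothetical task name
cron = "@every 30s"           # duration form — fires every 30 seconds
run = "/usr/local/bin/probe"  # hypothetical command
```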
## Timezone

Cron expressions are evaluated in the timezone configured by
`[scheduler] timezone`. If you leave it unset, the daemon falls back
to the host’s system timezone.
The resolved zone (and whether it came from config or the system) is shown in the TUI startup banner and in the Web UI header, so there’s no guessing about what the scheduler is actually using.
```toml
# Daemon-wide default for every task without its own timezone.
[scheduler]
timezone = "Europe/Bratislava"

# Per-task override — any standard IANA timezone name works.
[tasks.nightly-backup]
cron = "30 2 * * *"
timezone = "Atlantic/Faroe"
```

| Setting | Default | What it controls |
|---|---|---|
| `[scheduler] timezone` | (system zone) | Fallback for every task without its own `timezone`. |
| `[tasks.<name>] timezone` | (inherits) | Timezone for this task’s cron evaluation. Overrides the scheduler-wide default. |
If the daemon doesn’t recognise a name (a typo, or missing IANA
timezone data on the host — e.g. a minimal container image without
tzdata), config load fails with an error naming the offending scope —
the daemon never silently falls back to UTC.
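As an illustration, a config like the following would be rejected at load time. The task name is made up, and the exact error text is an assumption — but the failure mode (refusing to boot rather than guessing) is the documented behaviour:

```toml
[tasks.bad-zone]               # hypothetical task
cron = "0 * * * *"
timezone = "Europe/Bratislva"  # typo — not a valid IANA name; config load fails here
```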
## DST behaviour

RunWisp gives you a single-fire schedule across DST transitions:

- Fall-back (clocks go back). The wall-clock minute that repeats —
  e.g. 02:30 — only fires once. The scheduler dedupes by
  (local-date, hour, minute); the suppressed firing is still recorded in history.
- Spring-forward (clocks jump ahead). A cron like `0 2 * * *` whose
  matching minute doesn’t exist that day fires at the next valid time — once.

UTC has no DST and is unaffected.
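If a task must never shift or dedupe around a transition, pinning it to UTC side-steps DST entirely — a sketch, with a made-up task name:

```toml
[tasks.ledger-close]  # hypothetical task
cron = "0 3 * * *"
timezone = "UTC"      # UTC has no DST: fires every day at 03:00 UTC exactly
```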
## What “fired” means

A cron tick triggers a run, not a side effect. Every firing produces:

- A row in SQLite with a fresh ULID, `triggered_by = "cron"`.
- A captured stdout/stderr stream on disk.
- A status of `pending → running → ended`, with one of `success`,
  `failed`, `stopped`, `timeout`, `crashed`, `skipped`, or `log_overflow`.
Whether the run actually starts immediately depends on the task’s
concurrency policy. With the default `on_overlap = "queue"`, a tick
that fires while a previous run is still going gets queued. With
`on_overlap = "skip"`, the firing is recorded as a `skipped` run and
the schedule moves on. Either way, the tick always appears in
history — that’s the rule: nothing fails or fires without leaving a
record.
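The two policies side by side — a sketch with hypothetical task names; the `on_overlap` values are the ones described above:

```toml
[tasks.slow-report]
cron = "*/10 * * * *"
on_overlap = "queue"  # default: a late tick waits behind the still-running one

[tasks.uptime-check]
cron = "* * * * *"
on_overlap = "skip"   # an overlapping tick is recorded as a skipped run instead
```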
## Missed ticks: catchup

When the daemon was down (host reboot, deploy, hard crash), some
scheduled firings didn’t happen. The `catch_up` field controls what to
do about that on next startup:
```toml
[tasks.metrics-rollup]
cron = "*/15 * * * *"
catch_up = "latest"  # default
```

| Policy | Behaviour on startup |
|---|---|
| `latest` | If any ticks were missed, fire one catch-up run. Default. Right for jobs where running twice is harmless. |
| `all` | Fire one run per missed tick, capped by `max_catch_up_runs`. Right when each tick processes a discrete slice. |
| `skip` | Pretend the missed ticks never happened. Right for monitors and probes that just want fresh data. |
The anchor for “missed” is the timestamp of the last recorded run for the task. On the very first boot, the anchor is set the first time RunWisp sees the task — so a fresh install doesn’t queue up a big batch of “catch-up” runs for ticks before the daemon existed.
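For a probe where stale catch-up runs are worthless, `skip` keeps startup quiet — a sketch with a made-up task name:

```toml
[tasks.tls-expiry-probe]  # hypothetical task
cron = "0 * * * *"
catch_up = "skip"         # ticks missed while the daemon was down are simply dropped
```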
## Capping the backfill: max_catch_up_runs

`catch_up = "all"` on a `* * * * *` task and a daemon that was down
for a day means a large backlog queued at startup. The cap exists so
a long outage doesn’t bury the scheduler.

```toml
[tasks.metrics-rollup]
cron = "*/15 * * * *"
catch_up = "all"
max_catch_up_runs = 200  # at most 200 missed ticks fire on startup
```

| Value | Meaning |
|---|---|
| omitted | Inherit the built-in default of 100. Tune up explicitly if each missed tick is real work. |
| `N > 0` | Cap the backfill at N runs; older missed ticks are dropped. |
Negative values and 0 are rejected; omit the key to inherit the
default. The cap only applies to `catch_up = "all"`: `latest` always
triggers exactly one run by construction, and `skip` triggers zero.
When the cap fires, the daemon logs a warning that names the task, the
total missed, the cap, and how many were dropped — so the silenced
ticks are never invisible (Prime Directive #1).
## On startup: unfinished runs are marked crashed

On a clean shutdown the daemon waits for its running tasks to finish or hit their timeout. On a hard crash (power loss, force kill), it can’t.
When the daemon next starts, any run that was still recorded as
`running` is marked `crashed` with exit code `-2`. Those runs are not
resumed — that would require knowing where the process got to, and
the daemon doesn’t. A fresh run may then be created by the normal
scheduling / catchup logic above.
The point: every row in your run history finishes with a final result. You never have a row stuck “running” because the daemon disappeared under it.
## Predictable timing

Same TOML and same clock, same firings. There is no random delay —
two tasks with `cron = "0 2 * * *"` fire at exactly the same instant.
If that’s a problem (they hit the same downstream service), stagger
the schedule explicitly:
```toml
[tasks.backup-a]
cron = "0 2 * * *"

[tasks.backup-b]
cron = "5 2 * * *"
```

## Reload semantics

The scheduler reads `runwisp.toml` once, at startup. There is no live
reload — no file watcher, no reload command. To pick up edits, restart
the daemon.
A parse error at startup fails the boot — the daemon exits before
opening its port. The safe pattern is to run
`runwisp validate --config <path>` against the new file before you restart.
When a task disappears from the file across a restart, its schedule entry is gone but its run history stays. When a new task appears, its schedule is added; catchup does not apply for tasks that didn’t exist in the previous configuration.
## Intentional scope

A few things the scheduler doesn’t try to do, on purpose:
- No clustering. One daemon owns its tasks. Two daemons reading the same TOML would both fire — that’s a setup mistake, not a feature.
- No “every Nth tick” semantics. Cron is the surface, and cron has no
  week counter — “every other Tuesday” can’t be expressed in a cron
  field (restricting both day-of-month and day-of-week ORs them
  together rather than intersecting them). Filter in your script
  instead, e.g. exit early on the off weeks.
## Where to next

- Concurrency policies — what `on_overlap` does when a tick fires into a still-running run.
- Retries & timeouts — what happens when a run fails or hangs.
- `[tasks.*]` reference — every cron-related field.