Monitor and replay

You have an execution that completed. Now you’ll inspect the output, trace messages end-to-end, and replay a stage without re-fetching source data.

List every object produced by this execution:

factflow storage list --execution $EXEC_ID

Read one:

factflow storage get "executions/b3a1.../sitemap_scraper/web_scraper/msg-042" --output json
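The object key itself encodes provenance. As an illustration only (the layout is inferred from the keys shown in this guide, `executions/EXEC/ROUTE/STAGE/MSG`, and is an assumption about the storage scheme), a small helper to split a key into its components:

```python
# Illustrative sketch: assumes keys follow the
# executions/<execution_id>/<route>/<stage>/<message_id> layout
# seen in the commands in this guide.
from typing import NamedTuple


class StorageKey(NamedTuple):
    execution_id: str
    route: str
    stage: str
    message_id: str


def parse_key(key: str) -> StorageKey:
    """Split a storage key into its provenance components."""
    prefix, execution_id, route, stage, message_id = key.split("/")
    if prefix != "executions":
        raise ValueError(f"unexpected key prefix: {prefix!r}")
    return StorageKey(execution_id, route, stage, message_id)
```

With a hypothetical execution id `abc123`, `parse_key("executions/abc123/sitemap_scraper/web_scraper/msg-042")` yields the route, stage, and message id as separate fields.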

Inspect just the sidecar metadata (who wrote it, when, with what config hash):

factflow storage metadata "executions/b3a1.../sitemap_scraper/web_scraper/msg-042"
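The config hash in the sidecar lets you tell whether two writes came from the same adapter configuration. How factflow computes it is not specified here; a common approach, sketched below as an assumption, is SHA-256 over a canonically serialised config:

```python
import hashlib
import json


def config_hash(config: dict) -> str:
    """Hash a config deterministically: canonical JSON, then SHA-256.

    Illustrative sketch only; factflow's actual hashing scheme may differ.
    Sorting keys and fixing separators makes the digest independent of
    dict insertion order and whitespace.
    """
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Two configs that differ only in key order hash identically; any change to a value produces a different digest.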

Stream writes as they happen (during a live execution):

factflow storage watch --execution $EXEC_ID

Lineage records one row per message per adapter invocation. List everything for the execution:

factflow lineage list --execution $EXEC_ID

Drill into a message’s full chain (all ancestors and descendants):

factflow lineage chain MSG_ID
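Conceptually, a chain query is a walk over parent/child edges in both directions. A minimal sketch of that traversal over in-memory lineage rows (the `(parent, child)` row shape is an assumption, not factflow's actual schema):

```python
from collections import defaultdict, deque


def chain(rows: list[tuple[str, str]], msg_id: str) -> set[str]:
    """All ancestors and descendants of msg_id, given (parent, child) rows."""
    children = defaultdict(set)
    parents = defaultdict(set)
    for parent, child in rows:
        children[parent].add(child)
        parents[child].add(parent)

    def walk(edges):
        # Breadth-first walk following one edge direction from msg_id.
        seen, queue = set(), deque([msg_id])
        while queue:
            node = queue.popleft()
            for nxt in edges[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        return seen

    return walk(parents) | walk(children)
```

For rows `[("a", "b"), ("b", "c"), ("x", "b")]`, the chain of `b` is `{"a", "x", "c"}`: both parents plus the one descendant.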

Just the children (messages this one produced):

factflow lineage children MSG_ID

List failures across the execution:

factflow lineage failures --execution $EXEC_ID
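Failure rows get more useful once grouped by stage, since a burst of failures in one stage usually points at a single cause. A sketch, assuming (hypothetically) that each lineage row carries `stage` and `status` fields:

```python
from collections import Counter


def failures_by_stage(rows: list[dict]) -> Counter:
    """Count failed lineage rows per stage (assumed row shape: stage/status)."""
    return Counter(row["stage"] for row in rows if row["status"] == "failed")
```

If `web_scraper` shows many failures while downstream stages show few, fix the scraper and replay from that stage rather than re-running the whole pipeline.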

Replay re-publishes a stored artefact back into the pipeline, letting you rerun a downstream stage without re-fetching source data. Useful when:

  • An adapter had a bug; you fixed it and want to re-run it against the same input
  • A later stage timed out and needs rerunning
  • You want to test an updated config against the same data

Same-pipeline replay (restart within the existing execution):

factflow execution replay $EXEC_ID --from-stage web_scraper

Cross-pipeline replay (create a new execution pulling data from the parent):

factflow execution replay $EXEC_ID \
--from-stage web_scraper \
--to-route markdown_processor

Replay reads executions/SRC/ROUTE/STAGE/* from storage. If those objects are missing, replay fails: storage is not a cache. How long stored objects are retained is a deployment concern.
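Because replay fails on missing objects, it is worth checking the prefix before launching one. A sketch of that precheck, with the storage listing simulated as an in-memory set of keys (a real check would list keys via the storage commands above):

```python
def can_replay(stored_keys: set[str], execution_id: str,
               route: str, stage: str) -> bool:
    """True if at least one object exists under executions/EXEC/ROUTE/STAGE/.

    Illustrative only: stored_keys stands in for a real storage listing.
    """
    prefix = f"executions/{execution_id}/{route}/{stage}/"
    return any(key.startswith(prefix) for key in stored_keys)
```

A replay from `web_scraper` would pass this check only if that stage's objects survived retention.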

Finally, a few system-level commands for monitoring the deployment as a whole:
factflow system health --debug # verify every subsystem is live
factflow system metrics # aggregate orchestrator state
factflow pipeline list # every running route, across all executions
factflow pipeline pause ROUTE_ID # pause one route globally (drains in-flight work)
factflow pipeline resume ROUTE_ID