## Overview
The replay workflow package implements the runtime side of “re-run a pipeline stage from persisted storage”. It’s consumed via API / CLI rather than composed into YAML pipelines.
## Two replay flavours

- Same-pipeline replay. Re-run a specific stage within the existing pipeline, reading stored artefacts from the parent execution.
- Cross-pipeline replay. Feed the parent execution's artefacts into a different pipeline's routes. A full new coordinator, detached from the parent.
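The distinction between the two flavours can be sketched as a small request model. This is illustrative only: the class and field names below are hypothetical, not the package's real API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical request model; field names are illustrative, not the real API.
@dataclass
class ReplayRequest:
    parent_execution_id: str
    from_stage: str
    to_route: Optional[str] = None  # set only for cross-pipeline replay

    @property
    def is_cross_pipeline(self) -> bool:
        # Same-pipeline replay stays inside the parent's pipeline;
        # supplying a target route switches to cross-pipeline mode.
        return self.to_route is not None

same = ReplayRequest("exec-123", from_stage="enrich")
cross = ReplayRequest("exec-123", from_stage="enrich", to_route="other-route")
print(same.is_cross_pipeline, cross.is_cross_pipeline)  # False True
```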
Both are invoked through the same entry points:

```
factflow execution replay EXEC_ID --from-stage STAGE
factflow execution replay EXEC_ID --from-stage STAGE --to-route ROUTE
```

or
```
POST /api/v1/executions/{id}/replay
```
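Building a request against that endpoint might look like the following sketch. The endpoint path comes from the docs above; the JSON field names simply mirror the CLI flags and are an assumption, since the real request schema is not shown here.

```python
import json
from urllib.request import Request

# Illustrative sketch: the JSON field names mirror the CLI flags, but the
# real request schema may differ.
def build_replay_request(base_url, execution_id, from_stage, to_route=None):
    body = {"from_stage": from_stage}
    if to_route is not None:
        body["to_route"] = to_route  # cross-pipeline replay
    return Request(
        f"{base_url}/api/v1/executions/{execution_id}/replay",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_replay_request("http://localhost:8000", "exec-123", from_stage="enrich")
print(req.get_method(), req.full_url)
# POST http://localhost:8000/api/v1/executions/exec-123/replay
```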
## Why replay needs its own package

- Route resolution from snapshot. Replay reads `config_snapshot` from the parent execution, not the current global config. This prevents collisions when multiple configs reuse route names.
- Detached coordinator. Cross-pipeline replay runs a separate `DetachedReplayService` that walks storage and republishes in sequence, independent of the parent.
- Managed publisher. Under `OrchestratorManager`, the replay publisher is an asyncio task the manager owns; cancelling the replay execution cancels the publisher cleanly.
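The managed-publisher point above can be illustrated with a minimal asyncio sketch. Class and method names here are made up for the example; the point is only the ownership pattern: the manager holds the publisher task, so cancelling the execution cancels the publisher too.

```python
import asyncio

# Minimal sketch of the "managed publisher" idea, with illustrative names:
# the owner holds the publisher task, so cancelling the replay execution
# cancels the publisher cleanly rather than leaking it.
class ManagedReplay:
    def __init__(self):
        self.published = 0
        self._publisher = None

    async def _publish_loop(self):
        # Stand-in for walking storage and republishing artefacts in sequence.
        while True:
            self.published += 1
            await asyncio.sleep(0.01)

    async def start(self):
        self._publisher = asyncio.create_task(self._publish_loop())

    async def cancel(self):
        self._publisher.cancel()
        try:
            await self._publisher
        except asyncio.CancelledError:
            pass  # clean shutdown

async def demo():
    replay = ManagedReplay()
    await replay.start()
    await asyncio.sleep(0.05)  # let the publisher run briefly
    await replay.cancel()
    return replay

replay = asyncio.run(demo())
print(replay.published > 0, replay._publisher.done())  # True True
```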
## Recovery

`factflow_replay.recovery` handles server-restart recovery: on startup, executions still in `running` status are inspected against lineage; if an execution is recoverable (storage has the artefacts for its last completed stage), the orchestrator restarts it from that stage.
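The recovery decision described above reduces to a simple filter. The helper and field names below are hypothetical stand-ins for the real lineage and storage interfaces:

```python
# Sketch of restart-time recovery; all names here are illustrative.
def recover(executions, storage):
    """Return (execution_id, restart_stage) pairs for recoverable runs."""
    plans = []
    for ex in executions:
        if ex["status"] != "running":
            continue  # only interrupted executions need recovery
        completed = ex["lineage"]  # stages that finished before the restart
        if completed and storage.has_artefacts(ex["id"], completed[-1]):
            plans.append((ex["id"], completed[-1]))
        # otherwise: not recoverable from storage, left for manual inspection
    return plans

class FakeStorage:
    def has_artefacts(self, exec_id, stage):
        return stage == "enrich"

execs = [
    {"id": "e1", "status": "running", "lineage": ["fetch", "enrich"]},
    {"id": "e2", "status": "completed", "lineage": ["fetch"]},
]
print(recover(execs, FakeStorage()))  # [('e1', 'enrich')]
```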
## Where to dive deeper

- Guides / Orchestrator / Replay — operator-facing mechanics and failure modes
- Architecture / Multi-execution topology — where replay fits in the broader execution model
- `factflow-replay` package reference — the API surface