How it works
The replay system allows developers and testers to reproduce live match scenarios without waiting for an actual cricket match. It records real match data as “tapes” and replays them through the exact same pipeline that handles live matches.
Recording a tape
An admin caches a live match via POST /admin/cache/matches/:matchId. This fetches the current match state from SportMonks and stores it as a tape snapshot in DynamoDB. You can record multiple snapshots over the course of a live match to build a full tape of how the match progressed.
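Conceptually, a tape is just an ordered series of state snapshots. A minimal sketch of that shape (the `TapeSnapshot` field names here are illustrative assumptions, not the actual DynamoDB schema):

```typescript
interface TapeSnapshot {
  matchId: string;
  recordedAt: string; // ISO-8601 timestamp of when the snapshot was taken
  state: unknown;     // raw match payload as fetched from SportMonks
}

// Return snapshots in recording order, oldest first.
function sortTape(snapshots: TapeSnapshot[]): TapeSnapshot[] {
  return [...snapshots].sort((a, b) => a.recordedAt.localeCompare(b.recordedAt));
}
```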
Creating a replay
From a recorded tape, an admin creates a replay match via POST /admin/matches. This creates a new match entity in the system that is linked to the tape data rather than a real SportMonks fixture.
How replay mirrors live
The key design principle: everything downstream of the data source is identical.
| Live | Replay |
|---|---|
| LiveScoreJob polls SportMonks | ReplayScoreJob advances tape |
| SportMonksClient fetches from API | MockSportMonksClient reads from tape |
| FixtureService updates DynamoDB | Same |
| ActivityPushJob pushes to users | Same |
| APNs Service sends to Apple | Same |
Because the replay system uses the same downstream pipeline, any bug you can reproduce in replay will behave identically to production. This makes it invaluable for debugging Live Activity rendering issues, push payload formatting, and state transition edge cases.
Admin controls
The replay is controllable via PATCH /admin/matches/:matchId with the following actions:
| Action | Effect |
|---|---|
| play | Start or resume replay (accepts an optional speed multiplier) |
| pause | Pause replay at the current position |
| seek | Jump to a specific position in the tape |
| setSpeed | Change playback speed (e.g., 2x, 5x, 0.5x) |
| reset | Reset replay to the beginning of the tape |
The speed multiplier scales the interval between tape frames: at 2x, frames are served twice as fast; at 0.5x, at half speed. The downstream pipeline processes each frame normally regardless of speed.
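The interval math can be sketched as follows (`effectiveIntervalMs` and the base-interval parameter name are assumptions, not the actual service API):

```typescript
// Sketch: the delay between tape frames shrinks as the speed multiplier grows.
function effectiveIntervalMs(baseMsPerBall: number, speed: number): number {
  if (speed <= 0) throw new RangeError("speed must be positive");
  return baseMsPerBall / speed;
}
```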
iOS testing
iOS devices can connect to replay matches by switching to a fake API URL that points to the test environment. The app itself does not know or care whether it’s receiving live or replay data — the payloads are identical.
Replay matches are visible in the test environment only. Make sure you are not accidentally pointing a production build at the test API, or users will see fake matches.
Key tables
| Table | Purpose |
|---|---|
| DynamoDB (tapes) | Stores recorded match state snapshots |
| fixtures | Replay matches are stored as fixtures with a replay type |
Key endpoints
| Method | Path | Purpose |
|---|---|---|
| POST | /admin/cache/matches/:matchId | Record a tape from a live match |
| POST | /admin/matches | Create a replay match from a tape |
| PATCH | /admin/matches/:matchId | Control replay (play/pause/seek/speed/reset) |
Key jobs
| Job | Purpose |
|---|---|
| ReplayScoreJob | Advances tape frames on schedule (replaces LiveScoreJob) |
Areas of improvement
1. No automatic cleanup or TTL for replays
ReplayService.ts stores the active replay entirely in module-level variables (activeReplay, activeReplayCache) with no TTL or automatic expiry. If an admin starts a replay and never stops it, it stays “active” indefinitely, holding a full CachedMatch object in memory. There is no scheduled check that detects a replay has reached its final ball and auto-stops it. A forgotten replay will silently block starting a new one (the POST /admin/replay route returns 409 if isReplayActive() is true).
Recommendation: Add a TTL or watchdog that auto-stops a replay when (a) the cursor reaches totalBalls - 1 and stays there for a configurable grace period, or (b) a maximum wall-clock duration is exceeded.
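The watchdog condition could look like this sketch. All names (`ReplayStatus`, `reachedEndAt`, the constants) are assumptions for illustration:

```typescript
interface ReplayStatus {
  cursor: number;
  totalBalls: number;
  startedAt: number;     // epoch ms when the replay started
  reachedEndAt?: number; // epoch ms when the cursor first hit the final ball
}

const GRACE_MS = 10 * 60 * 1000;        // linger 10 min after the final ball
const MAX_WALL_CLOCK_MS = 8 * 3600_000; // hard cap on total replay duration

// True when a periodic check should auto-stop the replay.
function shouldAutoStop(s: ReplayStatus, now: number): boolean {
  if (now - s.startedAt > MAX_WALL_CLOCK_MS) return true;
  const atEnd = s.cursor >= s.totalBalls - 1;
  return atEnd && s.reachedEndAt !== undefined && now - s.reachedEndAt > GRACE_MS;
}
```

A scheduled job would call `shouldAutoStop` on each tick and clear the module-level replay state when it returns true.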
2. Extras are hard-coded to zero
In matchStateBuilder.ts, both innings hard-code extras to { wides: 0, noBalls: 0, byes: 0, legByes: 0 } (lines 328 and 357). The cached match data does include per-innings scoreboards[] entries with wide, noballRuns, bye, and legBye fields, but buildMatchStateAtCursor never reads them. As a result, replayed match states show an incorrect extras breakdown compared to a real live match.
Recommendation: Accumulate extras from the ball-by-ball data up to the cursor (counting wides, no-balls, etc.) or look up the cache.scoreboards data for the current innings.
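A sketch of the recommended accumulation, with an assumed ball-by-ball record shape (the real field names may differ):

```typescript
interface Ball {
  isWide?: boolean;
  isNoBall?: boolean;
  byes?: number;
  legByes?: number;
}

interface Extras { wides: number; noBalls: number; byes: number; legByes: number; }

// Sum extras from the start of the innings up to and including the cursor ball.
function extrasAtCursor(balls: Ball[], cursor: number): Extras {
  const extras: Extras = { wides: 0, noBalls: 0, byes: 0, legByes: 0 };
  for (const b of balls.slice(0, cursor + 1)) {
    if (b.isWide) extras.wides += 1;
    if (b.isNoBall) extras.noBalls += 1;
    extras.byes += b.byes ?? 0;
    extras.legByes += b.legByes ?? 0;
  }
  return extras;
}
```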
3. Bowler stats are end-of-match, not cursor-relative
buildBowlerAtCursor in matchStateBuilder.ts returns the bowler’s final scorecard figures (overs, runs, wickets, economy) from cache.bowling, not the figures as of the current cursor position. While buildBatsmenAtCursor correctly reconstructs batsman stats from ball-by-ball data, the bowler equivalent does not. This means a bowler who finished with 4-30 in 4 overs will show those stats even when the replay is at their first ball.
Recommendation: Reconstruct bowler stats from ball-by-ball data the same way batsman stats are reconstructed, accumulating overs/runs/wickets/economy from inningsBalls up to the cursor.
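A sketch of cursor-relative bowler figures. The delivery field names (`bowlerId`, `runsConceded`, `isWicket`, `isLegalDelivery`) are assumptions about the ball-by-ball data:

```typescript
interface DeliveryRec {
  bowlerId: number;
  runsConceded: number;
  isWicket: boolean;
  isLegalDelivery: boolean; // wides/no-balls do not advance the over count
}

// Accumulate one bowler's figures up to and including the cursor ball.
function bowlerFiguresAtCursor(balls: DeliveryRec[], cursor: number, bowlerId: number) {
  let legalBalls = 0, runs = 0, wickets = 0;
  for (const b of balls.slice(0, cursor + 1)) {
    if (b.bowlerId !== bowlerId) continue;
    if (b.isLegalDelivery) legalBalls += 1;
    runs += b.runsConceded;
    if (b.isWicket) wickets += 1;
  }
  const overs = Math.floor(legalBalls / 6) + (legalBalls % 6) / 10; // e.g. 2.3
  const economy = legalBalls > 0 ? runs / (legalBalls / 6) : 0;
  return { overs, runs, wickets, economy: Number(economy.toFixed(2)) };
}
```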
4. Pause mid-over leaves subscriptions in limbo
When a replay is paused (controlReplay({ type: 'pause' })), the ReplayScoreJob poller continues to tick every 5 seconds. Each tick calls getReplayMatchState(), which reads getEffectiveCursor(). When paused, the cursor is frozen, so repeated ticks produce the exact same MatchState. Downstream, FixtureService and ActivityPushJob will process this identical state on every tick. Depending on whether the push pipeline deduplicates unchanged states, this could result in redundant APNs pushes every 5 seconds during a pause.
This issue applies to the tape-based ReplayScoreJob/MockSportMonksClient path. The in-process ReplayService path (used by the admin API) correctly freezes the cursor on pause, but nothing signals ReplayScoreJob to stop polling.
Recommendation: Either have the poller skip ticks when playback.state === 'paused', or ensure the downstream pipeline deduplicates identical match states.
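The deduplication option can be sketched with a state fingerprint. This is an illustrative approach, not the existing pipeline behaviour:

```typescript
import { createHash } from "node:crypto";

let lastFingerprint: string | undefined;

// True only when the serialized state differs from the previous push.
function shouldPush(state: unknown): boolean {
  const fp = createHash("sha256").update(JSON.stringify(state)).digest("hex");
  if (fp === lastFingerprint) return false;
  lastFingerprint = fp;
  return true;
}
```

Placed in front of the APNs call, this turns the redundant 5-second ticks during a pause into no-ops.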
5. Seeking past the end is silently clamped
controlReplay({ type: 'seek', cursor: 99999 }) silently clamps the cursor to totalBalls - 1. There is no feedback to the caller that the requested position was out of bounds. Similarly, there is no way to seek to a specific over/ball number rather than a raw ball index, which makes the API harder to use for manual testing.
Recommendation: Return a warning when the requested cursor was clamped. Consider adding a seekToOver action that accepts an over number (e.g., 12.4) and converts it to the correct ball index.
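Both suggestions can be sketched as pure helpers. `overToBallIndex` assumes over.ball notation where 12.4 means the 4th legal ball of the 13th over, and ignores the extra deliveries that would shift the raw tape index in practice:

```typescript
// Clamp a requested cursor and report whether clamping occurred.
function clampCursor(requested: number, totalBalls: number): { cursor: number; clamped: boolean } {
  const cursor = Math.max(0, Math.min(requested, totalBalls - 1));
  return { cursor, clamped: cursor !== requested };
}

// Convert over.ball notation to a zero-based ball index.
function overToBallIndex(over: number): number {
  const completedOvers = Math.floor(over);
  const ballInOver = Math.round((over - completedOvers) * 10); // 12.4 -> 4
  if (ballInOver < 1 || ballInOver > 6) throw new RangeError("ball component must be 1-6");
  return completedOvers * 6 + ballInOver - 1;
}
```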
6. Single-replay limit is a hard constraint with no queuing
ReplayService.ts allows only one active replay at a time (line 9: “Only one replay can be active at a time”). If two team members need to test different scenarios simultaneously, one must wait. The admin route returns a 409 error but does not indicate which replay is active or who started it.
Recommendation: At minimum, include the active replay’s matchId, createdAt, and age in the 409 response so the caller can decide whether to stop it. Longer term, consider supporting multiple concurrent replays keyed by replayMatchId.
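A sketch of the enriched 409 body (the response shape is an assumption):

```typescript
// Build a 409 response body that tells the caller what is blocking them.
function conflictBody(active: { matchId: string; createdAt: number }, now: number) {
  return {
    error: "REPLAY_ALREADY_ACTIVE",
    activeReplay: {
      matchId: active.matchId,
      createdAt: new Date(active.createdAt).toISOString(),
      ageSeconds: Math.floor((now - active.createdAt) / 1000),
    },
  };
}
```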
7. Reset during active push-to-start subscriptions
When an admin starts a replay with lifecycle: true, the POST /admin/replay route sends a push-to-start notification to all admin devices, creating Live Activity sessions on iOS. If the replay is then stopped and a new one started with a different replayMatchId, the old Live Activity sessions on devices will never receive an “end” event. iOS will eventually time them out (after ~8 hours), but until then, devices show a stale Live Activity.
Recommendation: When stopping a replay, send an explicit “end” Live Activity update to all subscribed devices before clearing the replay state.
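The explicit end could use ActivityKit's remote-push "end" event. The `aps` keys below (`event`, `timestamp`, `content-state`, `dismissal-date`) are Apple's documented ActivityKit payload keys; the content-state shape is an assumption about this app's payloads:

```typescript
// Build an ActivityKit "end" push payload that dismisses the Live Activity
// immediately rather than waiting for the ~8-hour system timeout.
function endActivityPayload(contentState: Record<string, unknown>, nowEpochSec: number) {
  return {
    aps: {
      timestamp: nowEpochSec,
      event: "end",
      "content-state": contentState,
      "dismissal-date": nowEpochSec, // dismiss from the Lock Screen right away
    },
  };
}
```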
8. No validation of msPerBall bounds
The setSpeed and startReplay paths accept any numeric msPerBall value. A value of 0 or a negative number would cause getEffectiveCursor to compute nonsensical cursor positions (division by zero or negative offsets). The Zod schema in the admin route (z.number().min(100).optional()) provides a floor of 100ms for startReplay, but the ControlReplaySchema for setSpeed has no bounds validation.
Recommendation: Add z.number().min(100).max(60000) validation to the setSpeed action in ControlReplaySchema.
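Equivalently, as a plain guard (in the real route this would live in ControlReplaySchema as the `z.number().min(100).max(60000)` shown above):

```typescript
// Reject msPerBall values that would break getEffectiveCursor's math.
function validateMsPerBall(ms: number): number {
  if (!Number.isFinite(ms) || ms < 100 || ms > 60_000) {
    throw new RangeError("msPerBall must be between 100 and 60000");
  }
  return ms;
}
```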
9. Match cache uses synchronous filesystem I/O
matchCacheService.ts uses fs.readFileSync, fs.writeFileSync, and fs.existsSync throughout. In development this is acceptable, but these synchronous calls block the Node.js event loop. The listCachedMatches function reads and parses every cache file synchronously on each call, which scales poorly as more matches are cached.
Recommendation: Migrate to fs.promises for async I/O, or add an in-memory index of cached match metadata so listCachedMatches does not need to read every file from disk.
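A sketch of the async variant, assuming cache files are JSON files in one directory named `<matchId>.json`:

```typescript
import { promises as fs } from "node:fs";

// Pure helper: derive match IDs from directory entries.
function cacheIdsFrom(fileNames: string[]): string[] {
  return fileNames
    .filter((f) => f.endsWith(".json"))
    .map((f) => f.slice(0, -".json".length));
}

// Non-blocking listing, unlike the readdirSync/readFileSync approach.
async function listCachedMatchIds(cacheDir: string): Promise<string[]> {
  return cacheIdsFrom(await fs.readdir(cacheDir));
}
```

Note that this lists IDs without parsing every file; per-match metadata could be kept in an in-memory index populated on write.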
10. Two parallel replay architectures with no unification
The codebase has two distinct replay paths:
- In-process replay via ReplayService — manages cursor, playback state, and match state reconstruction entirely in the backend process.
- Tape-based replay via TapeService + MockSportMonksClient + ReplayScoreJob — posts tape data to an external mock-sportmonks server and polls it back.
These share no state. The admin routes expose both (/admin/replay for in-process, /admin/tapes for tape-based). It is unclear when to use which, and there is no documentation or guard preventing both from running simultaneously on the same fixture.
Recommendation: Document the intended use case for each path. Add a guard that prevents starting a tape-based replay while an in-process replay is active (and vice versa), or unify them behind a single abstraction.
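The guard could be as simple as a shared mode lock that both paths acquire before starting. The names here are assumptions:

```typescript
type ReplayMode = "in-process" | "tape";

let activeMode: ReplayMode | null = null;

// Acquire the single replay slot, whichever path is starting.
function acquireReplay(mode: ReplayMode): void {
  if (activeMode !== null) {
    throw new Error(`A ${activeMode} replay is already active`);
  }
  activeMode = mode;
}

function releaseReplay(): void {
  activeMode = null;
}
```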