RecSys engineering hub¶
Use this hub to understand ranking behavior, signals, and how to validate changes end-to-end.
Who this is for¶
- Recommendation engineers (RecEng / ML engineers / data scientists)
- Anyone who needs to understand ranking behavior, signals, and how to validate changes.
What you will get¶
- A fast path to “I can reason about the ranking output”
- A map of what you can change without code, with pipelines, and with ranking code
- A practical evaluation workflow (offline gate + online validation)
10-minute path¶
- Mental model (how data turns into ranked items)
  Read: How it works: architecture and data flow
- What the ranking core does and what’s deterministic
  Read: Ranking & constraints reference
- How to decide “ship / hold / rollback” (with a written trail)
  Read: How-to: run evaluation and make ship decisions
- Validate the measurement loop (logging + joinability)
  Read: How-to: validate logging and joinability
- Evaluate a change: run an offline report and review expected metrics and tradeoffs. Start at the Evaluation hub.
  For CI gating, see Offline gate in CI.
The knobs you can turn¶
Without code changes (fast iteration)¶
- Weights / limits / flags per tenant (admin config)
  Reference: Admin API + local bootstrap (recsys-service)
- Merchandising rules (pin, block, boosts, constraints)
  How-to: How-to: integrate recsys-service into an application
- Data mode choice (DB-only vs artifact/manifest)
  Explanation: Data modes: DB-only vs artifact/manifest
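To make the per-tenant knobs concrete, here is a minimal sketch of how weights and flags from admin config could shape a blended score. The field names (`weights`, `limits`, `flags`) and signal names are assumptions for illustration; the real shape is defined by the Admin API reference above.

```python
# Hypothetical per-tenant config shape; see the Admin API reference
# for the actual field names and semantics.
tenant_config = {
    "weights": {"popularity": 0.6, "cooccurrence": 0.3, "embedding": 0.1},
    "limits": {"max_items_per_brand": 3},
    "flags": {"enable_merch_rules": True},
}

def blended_score(signal_scores: dict, weights: dict) -> float:
    """Weighted sum of per-signal scores; signals without a weight count as 0."""
    return sum(weights.get(name, 0.0) * score
               for name, score in signal_scores.items())

# Turning the "popularity" weight up or down changes ranking with no code change.
score = blended_score({"popularity": 0.8, "cooccurrence": 0.5},
                      tenant_config["weights"])
```

Because these values live in config rather than code, a change here only needs the evaluation workflow below, not a deploy.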
With pipeline changes (data changes, stable serving)¶
- New or improved signals (popularity / co-occurrence / embeddings, etc.)
  How-to: How-to: add a new signal end-to-end
- Artifact + manifest lifecycle (publish, rollback, freshness)
  Explanation: Artifacts and manifest lifecycle (pipelines → service)
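The "publish, rollback, freshness" lifecycle can be pictured as a manifest that points at the current artifact, remembers the previous one for rollback, and records when it was published. A minimal sketch, assuming a hypothetical manifest shape (the real fields are defined by the pipelines docs linked above):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical manifest; actual field names come from the pipeline contract.
manifest = {
    "artifact": "signals/cooccurrence-2024-06-01.parquet",
    "published_at": "2024-06-01T06:00:00+00:00",
    "previous": "signals/cooccurrence-2024-05-31.parquet",  # rollback target
}

def is_fresh(manifest: dict, max_age: timedelta, now: datetime) -> bool:
    """True if the artifact was published within max_age of 'now'."""
    published = datetime.fromisoformat(manifest["published_at"])
    return now - published <= max_age

def rollback_target(manifest: dict) -> str:
    """Rolling back is just repointing at the previously published artifact."""
    return manifest["previous"]
```

The point of the sketch: serving stays stable because the service only follows the manifest pointer; publish and rollback are data operations, not deploys.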
With ranking code changes (high leverage, requires evaluation)¶
- Candidate merge, scoring, tie-break rules
  Reference: Scoring model specification (recsys-algo)
  Reference: Ranking & constraints reference
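To see why tie-break rules matter for deterministic output, here is a minimal sketch (not the actual recsys-algo implementation) of a candidate merge followed by a stable sort. Equal scores are broken by item ID, so repeated runs over the same inputs produce the same order:

```python
# Sketch only: merge candidates from several sources, keep the best score
# per item, then rank with an explicit tie-break (score desc, item_id asc).
def merge_and_rank(sources):
    best = {}
    for candidates in sources:
        for item_id, score in candidates:
            if item_id not in best or score > best[item_id]:
                best[item_id] = score
    # Without the item_id tie-break, items with equal scores could
    # reorder between runs, breaking determinism checks downstream.
    return sorted(best.items(), key=lambda kv: (-kv[1], kv[0]))

ranking = merge_and_rank([
    [("b", 1.0), ("a", 1.0)],   # source 1
    [("c", 2.0), ("a", 0.5)],   # source 2
])
```

Changes at this layer are high leverage precisely because they touch every request, which is why they require the evaluation workflow below.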
Evaluation workflow (practical)¶
- Make a change (config/rules/signal/ranking)
- Run offline evaluation gates (deterministic pass/fail)
  See: CI gates: using recsys-eval in automation
- Interpret the results (metrics + tradeoffs)
  See: Interpreting results: how to go from report to decision
  Orientation: Interpreting metrics and reports
- Decide ship/hold/rollback
  See: Decision playbook: ship / hold / rollback

If you're new to evaluation, start with the suite workflow: How-to: run evaluation and make ship decisions.
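The "deterministic pass/fail" step above amounts to threshold checks over a metrics report. A minimal sketch, with illustrative gate names and thresholds (the real gates and their values live in recsys-eval):

```python
# Illustrative gates: metric name -> hard floor. Not the real thresholds.
GATES = {
    "ndcg@10": {"min": 0.30},
    "coverage": {"min": 0.80},
}

def run_gates(metrics: dict) -> list:
    """Return the names of failed gates; an empty list means the change passes.

    Missing metrics count as 0.0, so an incomplete report fails loudly
    rather than passing silently.
    """
    failures = []
    for name, rule in GATES.items():
        if metrics.get(name, 0.0) < rule["min"]:
            failures.append(name)
    return failures

result = run_gates({"ndcg@10": 0.35, "coverage": 0.75})
```

A gate like this gives a written, reproducible trail for the ship/hold/rollback decision: the same report always yields the same verdict.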
Concepts worth reading (when you have 30 minutes)¶
- Evaluation validity: what numbers mean, and what they don’t: Evaluation validity
- Guarantees and non-goals (blunt): Guarantees and non-goals
- Ethics and fairness notes: Ethics and fairness notes
Read next¶
- Customization map: Customization map
- Verify determinism: Verify determinism
- Verify joinability: Verify joinability (request IDs → outcomes)
- Tune ranking safely: How-to: tune ranking safely
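As an illustration of the joinability check linked above: the core question is what fraction of served request IDs can be matched to outcome events. A sketch under assumed names (the real log schema is in the how-to):

```python
# Illustrative only: field and log names are assumptions, not the real schema.
def join_rate(served_request_ids, outcome_request_ids) -> float:
    """Fraction of served requests that appear in the outcome log."""
    served = set(served_request_ids)
    if not served:
        return 0.0
    return len(served & set(outcome_request_ids)) / len(served)
```

A low join rate means online metrics are computed over a biased subset of traffic, which is why joinability is validated before trusting any online result.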