
RecSys engineering hub

Use this hub to understand ranking behavior, signals, and how to validate changes end-to-end.

Who this is for

  • Recommendation engineers (RecEng / ML engineers / data scientists)
  • Anyone who needs to understand ranking behavior, signals, and how to validate changes.

What you will get

  • A fast path to “I can reason about the ranking output”
  • A map of what you can change without code, with pipelines, and with ranking code
  • A practical evaluation workflow (offline gate + online validation)

10-minute path

  1. Mental model (how data turns into ranked items)
    Read: How it works: architecture and data flow

  2. What the ranking core does and what’s deterministic
    Read: Ranking & constraints reference

  3. How to decide “ship / hold / rollback” (with a written trail)
    Read: How-to: run evaluation and make ship decisions

  4. Validate the measurement loop (logging + joinability)
    Read: How-to: validate logging and joinability

  5. Evaluate a change (run an offline report; review expected metrics and tradeoffs)
    Start at: Evaluation hub

  6. Gate changes in CI
    See: Offline gate in CI
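The joinability check in step 4 boils down to one number: what fraction of logged impressions can be joined to an outcome record? A low join rate usually means broken logging, not low engagement. A minimal sketch follows; the record shapes and field names (`request_id`, `item_id`, `clicked`) are assumptions for illustration, not the real log schema:

```python
from dataclasses import dataclass

# Hypothetical minimal log records; real schemas will differ.
@dataclass(frozen=True)
class Impression:
    request_id: str
    item_id: str

@dataclass(frozen=True)
class Outcome:
    request_id: str
    item_id: str
    clicked: bool

def join_rate(impressions, outcomes):
    """Fraction of impressions joinable to an outcome on (request_id, item_id)."""
    if not impressions:
        return 0.0
    outcome_keys = {(o.request_id, o.item_id) for o in outcomes}
    joined = sum(1 for i in impressions
                 if (i.request_id, i.item_id) in outcome_keys)
    return joined / len(impressions)

impressions = [Impression("r1", "a"), Impression("r1", "b"), Impression("r2", "a")]
outcomes = [Outcome("r1", "a", True), Outcome("r2", "a", False)]
print(join_rate(impressions, outcomes))  # 2 of 3 impressions join
```

Tracking this rate over time (and alerting on drops) is what makes downstream metrics trustworthy.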

The knobs you can turn

  • Without code changes (config and rules; fast iteration)

  • With pipeline changes (data changes, stable serving)

  • With ranking code changes (high leverage, requires evaluation)
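As a rough sketch of the three layers, consider where each kind of change lands. Every name below (`diversity_weight`, `ctr_7d`, the `score` function) is invented for illustration and is not a real config key or signal in this system:

```python
# 1) No-code knob: serving config, typically hot-reloadable.
ranking_config = {
    "diversity_weight": 0.2,
    "freshness_boost": 0.1,
}

# 2) Pipeline knob: precomputed signals joined at serving time.
#    Changing how these are computed is a pipeline change.
item_signals = {
    "item_a": {"ctr_7d": 0.031},
    "item_b": {"ctr_7d": 0.018},
}

# 3) Code knob: the scoring function itself.
def score(item_id, cfg, signals):
    # Editing cfg values = no-code change; editing signal generation
    # = pipeline change; editing this function body = ranking code change.
    return signals[item_id]["ctr_7d"] * (1.0 + cfg["freshness_boost"])

ranked = sorted(item_signals,
                key=lambda i: score(i, ranking_config, item_signals),
                reverse=True)
print(ranked)  # → ['item_a', 'item_b']
```

The leverage (and the evaluation burden) grows as you move down the layers.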

Evaluation workflow (practical)

  1. Make a change (config/rules/signal/ranking)
  2. Run offline evaluation gates (deterministic pass/fail)
    See: CI gates: using recsys-eval in automation
  3. Interpret the results (metrics + tradeoffs)
    See: Interpreting results: how to go from report to decision
    Orientation: Interpreting metrics and reports
  4. Decide ship/hold/rollback
    See: Decision playbook: ship / hold / rollback

If you're new to evaluation

Start with the suite workflow: How-to: run evaluation and make ship decisions

Concepts worth reading (when you have 30 minutes)