Evaluation modes

Row evaluations run once per row; each returns a dict of scores that are appended to that row as per-row score columns.
@ze.evaluation(mode="row", outputs=["exact_match"])
def exact_match(row, answer_col, prediction_col):
    return {"exact_match": int(answer_col == prediction_col)}
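Column-level evaluators, such as the accuracy evaluator used in the examples below, typically aggregate a per-row score column into a single value. A minimal sketch follows; the mode="column" value and the assumption that the argument receives the full column of values are not confirmed API behavior:

@ze.evaluation(mode="column", outputs=["accuracy"])
def accuracy(exact_match_col):
    # Assumed column-mode signature: exact_match_col is the whole column of
    # per-row exact_match scores, aggregated here into a single accuracy value.
    return {"accuracy": sum(exact_match_col) / len(exact_match_col)}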

Column mapping

Use column_map to bind each evaluator's function arguments to run columns:
run = run.score(
    [exact_match, accuracy],
    column_map={
        "exact_match": {
            "answer_col": "answer",
            "prediction_col": "prediction",
        },
        "accuracy": {"exact_match_col": "exact_match"},
    },
)
Every required evaluator argument must be mapped. The SDK validates the mapping and raises an error for unknown or missing columns.

score() vs eval()

  • run.score(...) is an alias of run.eval(...); the two are interchangeable
  • Use whichever you prefer, but stay consistent within a codebase
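For example, the two calls below behave identically (a sketch reusing the exact_match mapping from above):

mapping = {"exact_match": {"answer_col": "answer", "prediction_col": "prediction"}}

# These two calls are equivalent; use one or the other.
run = run.score([exact_match], column_map=mapping)
run = run.eval([exact_match], column_map=mapping)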

Metric helpers

Run also provides helper APIs:
run.column_metrics([accuracy])
run.run_metrics([accuracy_mean], all_runs=repeated_runs)
These helpers enforce that each evaluator is used in its declared mode: column_metrics expects column-level evaluators, and run_metrics expects run-level evaluators.
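For reference, a run-level evaluator such as accuracy_mean might look like the sketch below; the mode="run" value and the runs argument receiving the repeated runs are assumptions about the API, not documented behavior:

@ze.evaluation(mode="run", outputs=["accuracy_mean"])
def accuracy_mean(runs):
    # Average the per-run "accuracy" metric across the repeated runs;
    # assumes each run already has run.metrics["accuracy"] populated.
    values = [r.metrics["accuracy"] for r in runs]
    return {"accuracy_mean": sum(values) / len(values)}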

Output locations

  • Per-row outputs are appended to each run.rows[i]
  • Aggregate metrics are placed in run.metrics
  • A run health summary is available in run.health
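Putting it together, reading results back might look like the sketch below; dict-style access to rows and metrics is an assumption about the SDK's data structures:

for row in run.rows:
    print(row["exact_match"])      # per-row score appended by exact_match

print(run.metrics["accuracy"])     # aggregate metric from accuracy
print(run.health)                  # run health summary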