Configuration

This page describes how to configure the metrics module used to score model predictions.

Options

| Name | Allowed | Default | Description |
| --- | --- | --- | --- |
| `_target_` | "ClassificationMetrics", "DetectionMetrics" | "ClassificationMetrics" | The type of metrics to use. See Underlying libraries for more details. |
| `task` | "binary", "multiclass", "multilabel" | "multiclass" | Classification task type. Used only by ClassificationMetrics. |
| `num_classes` | integer, null | null | Number of classes for multiclass tasks. For multilabel tasks, this is also accepted as an alias for `num_labels`. |
| `num_labels` | integer, null | null | Number of labels for multilabel tasks. |
| `average` | "micro", "macro", "weighted", "none" | "macro" | Aggregation mode. ClassificationMetrics supports "micro", "macro", "weighted", and "none" for multiclass and multilabel; DetectionMetrics supports "micro" and "macro". See Underlying libraries for more details. |
| `ignore_index` | integer, null | null | Optional target value to ignore when computing classification metrics. |
| `threshold` | float, null | 0.5 (for multilabel, when omitted) | Threshold applied by ClassificationMetrics to multilabel predictions. Ignored by other targets unless the underlying TorchMetrics implementation accepts it through forwarded kwargs. |
| `box_format` | "xyxy", "xywh" | "xyxy" | Bounding-box format expected by DetectionMetrics. |
| `iou_type` | "bbox", "segm", or a tuple of those values | "bbox" | IoU mode passed to detection mean average precision. |
| `iou_thresholds` | list[float], null | null | Optional custom IoU thresholds for detection evaluation. |
| `rec_thresholds` | list[float], null | null | Optional custom recall thresholds for detection evaluation. |
| `max_detection_thresholds` | list[int], null | null | Optional maximum-detection cutoffs for detection evaluation. |
| `class_metrics` | true, false | false | Whether DetectionMetrics should compute class-wise metrics in addition to global summaries. |
| `extended_summary` | true, false | false | Whether DetectionMetrics should request extended non-scalar summary outputs. |
| `backend` | "faster_coco_eval", "pycocotools" | "faster_coco_eval" | Backend used by detection mean average precision. Use "pycocotools" only for backward compatibility. |
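To make the constraints in the table concrete, the sketch below validates a few of the options before they reach the metrics module. The `MetricsConfig` class and its `validate` method are hypothetical illustrations, not part of the project's API; they only mirror the allowed values and defaults listed above.

```python
from dataclasses import dataclass
from typing import Optional

# Allowed values copied from the options table above.
ALLOWED_TARGETS = {"ClassificationMetrics", "DetectionMetrics"}
ALLOWED_TASKS = {"binary", "multiclass", "multilabel"}
ALLOWED_AVERAGES = {
    "ClassificationMetrics": {"micro", "macro", "weighted", "none"},
    "DetectionMetrics": {"micro", "macro"},
}


@dataclass
class MetricsConfig:
    """Hypothetical subset of the metrics options, with table defaults."""

    _target_: str = "ClassificationMetrics"
    task: str = "multiclass"
    num_classes: Optional[int] = None
    num_labels: Optional[int] = None
    average: str = "macro"

    def validate(self) -> None:
        if self._target_ not in ALLOWED_TARGETS:
            raise ValueError(f"unknown _target_: {self._target_!r}")
        if self._target_ == "ClassificationMetrics" and self.task not in ALLOWED_TASKS:
            raise ValueError(f"unknown task: {self.task!r}")
        # DetectionMetrics supports fewer averaging modes than ClassificationMetrics.
        if self.average not in ALLOWED_AVERAGES[self._target_]:
            raise ValueError(f"{self._target_} does not support average={self.average!r}")


cfg = MetricsConfig(_target_="DetectionMetrics", average="macro")
cfg.validate()  # passes: DetectionMetrics supports "macro"
```

The same check rejects, for example, `average="weighted"` with DetectionMetrics, which only accepts "micro" and "macro".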

YAML example

```yaml
metrics:
  _target_: "ClassificationMetrics"
  task: "multiclass"
  num_classes: 7
  num_labels: null
  average: "macro"
  ignore_index: null
```
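For DetectionMetrics, a configuration under the same schema might look like the following sketch; the values shown are illustrative defaults from the table above, not recommendations:

```yaml
metrics:
  _target_: "DetectionMetrics"
  average: "macro"
  box_format: "xyxy"
  iou_type: "bbox"
  iou_thresholds: null
  rec_thresholds: null
  max_detection_thresholds: null
  class_metrics: false
  extended_summary: false
  backend: "faster_coco_eval"
```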

CLI override example

```shell
uv run raitap metrics=detection metrics.class_metrics=true metrics.extended_summary=true
```