# Configuration
This page describes how to configure the transparency module, which computes and visualises attributions.
Inside the `transparency` key, you can configure one or more explainers. See the YAML example below for the config shape.
See Supported libraries for the backend behaviour behind `_target_`, `algorithm`, and visualiser compatibility.
## Options

| Name | Allowed | Default | Description |
|---|---|---|---|
| `_target_` |  |  | Hydra target for the explainer class. |
| `algorithm` |  |  | Name of the underlying explainability algorithm to use. The exact class is resolved by the selected explainer backend. |
| `constructor` |  |  | Keyword arguments passed when constructing the explainer or underlying library object. |
| `call` |  |  | Keyword arguments passed when computing attributions. Any nested dict with a |
|  |  |  | Batch size for computing attributions. If not specified, the explainer will compute attributions in a single pass. |
|  |  |  | Maximum batch size for computing attributions. If not specified, the explainer will compute attributions in a single pass. |
|  |  |  | Whether to show a progress bar when computing attributions. |
|  |  |  | Description of the progress bar. |
| `visualisers` |  |  | Visualiser definitions. Each entry must include at least |
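Hydra resolves a fully qualified `_target_` string to a class by its dotted import path; the short names used on this page (such as `CaptumExplainer`) are presumably resolved against the library's own namespace. A minimal sketch of dotted-path resolution, simplified from what `hydra.utils.instantiate` does (the real function also recurses into nested configs and passes keyword arguments):

```python
import importlib

def resolve_target(target: str):
    """Resolve a dotted _target_ string to the object it names,
    roughly as Hydra's instantiation machinery does."""
    module_path, _, attr = target.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# Resolving a stdlib class as a stand-in for an explainer class:
cls = resolve_target("collections.OrderedDict")
print(cls.__name__)  # → OrderedDict
```

Note that this sketch only handles fully qualified paths; a bare class name with no dots would need a registry lookup instead.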
## YAML example

```yaml
transparency:
  my_first_explainer:
    _target_: "CaptumExplainer"
    algorithm: "IntegratedGradients"
    call:
      target: 0
    visualisers:
      - _target_: "CaptumImageVisualiser"
        call:
          max_samples: 1
  my_second_explainer:
    _target_: "ShapExplainer"
    algorithm: "GradientExplainer"
    constructor:
      local_smoothing: 0.0
    call:
      background_data:
        source: "./data/background"
      n_samples: 32
    visualisers:
      - _target_: "ShapImageVisualiser"
```
CLI override example¶
uv run raitap transparency.captum_ig.algorithm=GradientShap
raitap transparency.captum_ig.algorithm=GradientShap
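Hydra's dotted override syntax addresses nested keys: `transparency.captum_ig.algorithm=GradientShap` sets the `algorithm` field of the `captum_ig` explainer. A toy illustration of how such an override maps onto a nested config (illustrative only; Hydra's actual override grammar also handles typed values, lists, and `+`/`~` prefixes):

```python
def apply_override(cfg: dict, override: str) -> None:
    """Apply a single key.path=value override to a nested dict in place."""
    path, value = override.split("=", 1)
    *parents, leaf = path.split(".")
    node = cfg
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value

# Hypothetical starting config; "captum_ig" is the explainer name from the CLI example.
cfg = {"transparency": {"captum_ig": {"algorithm": "IntegratedGradients"}}}
apply_override(cfg, "transparency.captum_ig.algorithm=GradientShap")
print(cfg["transparency"]["captum_ig"]["algorithm"])  # → GradientShap
```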