Supported libraries¶
constructor and call keys¶
Both explainers and visualisers support the `constructor` and `call` keys. These pass kwargs to the object's constructor and to its runtime method (`explain` or `visualise`), respectively, which lets you configure the underlying library object. Here is an example:
```yaml
transparency:
  my_first_explainer:
    _target_: "ShapExplainer"
    algorithm: "GradientExplainer"
    constructor:
      local_smoothing: 0.0
    call:
      target: 0
      background_data:
        source: imagenet_samples
    visualisers:
      - _target_: "ShapImageVisualiser"
        call:
          max_samples: 1
```
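The mechanics of the split can be sketched in Python. The helper below is purely illustrative (it is not RAITAP's internal API) and shows where each kwargs block ends up:

```python
def run_explainer(explainer_cls, constructor_kwargs, call_kwargs, batch):
    """Illustrative only: `constructor` kwargs configure the library object
    once; `call` kwargs are forwarded to each explain()/visualise() call."""
    explainer = explainer_cls(**constructor_kwargs)
    return explainer.explain(batch, **call_kwargs)
```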
Explainer libraries¶
Captum¶
Docs¶
Explainers¶
CaptumExplainer gives access to all Captum explainers.
```yaml
transparency:
  my_captum_explainer:
    _target_: CaptumExplainer
    algorithm: IntegratedGradients
    constructor: {}
    call:
      target: 0
```
ONNX compatibility¶
Only algorithms that do not depend on Torch autograd are compatible:
- FeatureAblation
- FeaturePermutation
- Occlusion
- ShapleyValueSampling
- ShapleyValues
- KernelShap
- Lime
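For example, an ONNX pipeline could select one of these autograd-free algorithms (the explainer name `my_onnx_captum` is illustrative):

```yaml
transparency:
  my_onnx_captum:
    _target_: CaptumExplainer
    algorithm: FeatureAblation   # autograd-free, so ONNX-compatible
    constructor: {}
    call:
      target: 0
```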
Visualiser compatibility¶
RAITAP currently supports the following Captum visualisers.
- CaptumImageVisualiser
- CaptumTextVisualiser
- CaptumTimeSeriesVisualiser
All three are compatible with all Captum algorithms in RAITAP.
SHAP¶
Docs¶
Explainers¶
ShapExplainer gives access to all SHAP explainers.
```yaml
transparency:
  my_shap_explainer:
    _target_: ShapExplainer
    algorithm: GradientExplainer
    constructor: {}
    call:
      target: 0
      background_data:
        source: imagenet_samples
```
GradientExplainer, DeepExplainer, and KernelExplainer usually require
background_data. If it is not provided, RAITAP falls back to the input batch.
DeepExplainer can fail on PyTorch models that use SiLU activations (for example EfficientNet variants) due to autograd/in-place limitations. In those cases, use GradientExplainer.
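Assuming a standard PyTorch model, a quick check like the following (not part of RAITAP) can tell you whether a model contains SiLU modules before you commit to DeepExplainer:

```python
import torch.nn as nn

def contains_silu(model: nn.Module) -> bool:
    """Return True if any submodule is a SiLU activation (common in
    EfficientNet variants), in which case GradientExplainer is the
    safer choice."""
    return any(isinstance(m, nn.SiLU) for m in model.modules())
```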
ONNX compatibility¶
Only KernelExplainer is compatible.
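An ONNX-only pipeline would therefore look something like this (the explainer name `my_onnx_shap` is illustrative):

```yaml
transparency:
  my_onnx_shap:
    _target_: ShapExplainer
    algorithm: KernelExplainer   # the only ONNX-compatible SHAP algorithm
    call:
      target: 0
      background_data:
        source: imagenet_samples
```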
Visualiser compatibility¶
The following SHAP visualisers are compatible with all SHAP algorithms:
- ShapBarVisualiser
- ShapBeeswarmVisualiser
- ShapForceVisualiser
- ShapWaterfallVisualiser
ShapImageVisualiser is only compatible with:
- GradientExplainer
- DeepExplainer
ShapImageVisualiser configuration¶
ShapImageVisualiser uses a custom Matplotlib-based implementation rather than SHAP's native image_plot. This provides:

- Consistent paired image/overlay layout across RAITAP visualisers
- Sample-aware titles with configurable naming
- Flexible colorbar and overlay control
- Original image panels alongside attribution heatmaps
The visualiser renders pixel-level SHAP attributions as heatmaps with positive contributions in warm colours and negative contributions in cool colours.
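That sign convention depends on normalising attributions symmetrically around zero, so that zero lands at the midpoint of a diverging colormap. A minimal sketch of the idea (RAITAP's actual rendering code may differ):

```python
import numpy as np

def symmetric_normalise(attr: np.ndarray) -> np.ndarray:
    """Scale signed attributions into [-1, 1] so that zero maps to the
    midpoint of a diverging colormap (warm = positive, cool = negative)."""
    bound = float(np.abs(attr).max())
    if bound == 0.0:
        return np.zeros_like(attr, dtype=float)
    return attr / bound
```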
Constructor parameters¶
Configure these via the constructor key when defining the visualiser:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `max_samples` | `int` | | Maximum number of images to display side by side |
| `title` | `str` | | Optional attribution panel title (falls back to algorithm name) |
| `include_original_image` | `bool` | | Whether to render the original image next to the attribution heatmap |
| `show_colorbar` | `bool` | | Whether to add a SHAP colorbar in the paired layout |
| `cmap` | `str` | | Matplotlib colormap for the SHAP heatmap overlay |
| `overlay_alpha` | `float` | | Alpha value for the SHAP heatmap overlay |
Call parameters¶
Override these via the call key or at runtime:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `max_samples` | `int` | | Runtime override for maximum samples to display |
| | | | Optional names per sample |
| `show_sample_names` | `bool` | | Whether to render sample names in subplot titles |
| `title` | `str` | | Runtime override for attribution title (even empty string preserved) |
| | | | Explainer algorithm name (used for default title rendering) |
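The "even empty string preserved" precedence rule for titles can be sketched as follows (`resolve_title` is a hypothetical helper, not a RAITAP API):

```python
def resolve_title(constructor_title, call_title, algorithm_name):
    """Call-time title wins whenever it was provided, including an empty
    string; otherwise fall back to the constructor title, then to the
    algorithm name."""
    if call_title is not None:
        return call_title
    if constructor_title is not None:
        return constructor_title
    return algorithm_name
```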
Configuration example¶
```yaml
transparency:
  my_shap_explainer:
    _target_: "ShapExplainer"
    algorithm: "GradientExplainer"
    constructor:
      local_smoothing: 0.0
    call:
      target: 0
      background_data:
        source: imagenet_samples
        n_samples: 50
    visualisers:
      # Minimal configuration
      - _target_: "ShapImageVisualiser"
        constructor:
          max_samples: 1
      # Full configuration with all options
      - _target_: "ShapImageVisualiser"
        constructor:
          max_samples: 2
          title: "Tumour attribution"
          include_original_image: true
          show_colorbar: true
          cmap: "coolwarm"
          overlay_alpha: 0.65
        call:
          show_sample_names: true
```
Note: ShapImageVisualiser requires pixel-level SHAP values from GradientExplainer or DeepExplainer. Using it with other SHAP explainers will produce meaningless plots.
Alibi¶
Docs¶
Installation and license¶
Warning
Alibi Explain is under Seldon’s Business Source License 1.1 (BSL 1.1) — not GPLv3. Non-production use is permitted on Seldon’s terms; production or commercial use may require a separate license. RAITAP (GPLv3) does not relicense Alibi. Read Seldon’s license before using.
alibi 0.9.x hard-pins three packages that conflict with RAITAP’s own requirements:
| alibi pins | RAITAP requires |
|---|---|
| | `numpy>=2.4` |
| | `Pillow>=12.0` |
| | `scikit-image>=0.26` (currently 0.26) |
Independently, Alibi’s spaCy dependency can resolve to thinc / blis versions that lack Python 3.13 wheels and fall back to sdist builds (often failing). The alibi extra is supported, but you must add [tool.uv] overrides in your own pyproject.toml — use the same entries as RAITAP’s pyproject.toml:
```toml
[tool.uv]
override-dependencies = [
    "numpy>=2.4",
    "Pillow>=12.0",
    "scikit-image>=0.26",
    "blis>=1.0.2",
    "thinc>=8.3.6,<9",
    "spacy>=3.8.0",
]
```
Then install with uv (recommended):
```shell
uv add "raitap[alibi]"
```
Note
pip (and other installers that are not uv) do not read override-dependencies. Prefer uv for raitap[alibi]; if you use pip, you must satisfy compatible versions of the packages above yourself — RAITAP does not document a supported pip-only recipe.
Note
These overrides bypass version constraints declared by Alibi and its transitive dependencies but do not guarantee every Alibi algorithm works with those newer versions — Seldon has not tested or supported this combination. RAITAP’s KernelShap path is exercised in tests (including under SHAP 0.5x, where RAITAP adapts stacked multi-class outputs before Alibi builds explanation metadata); other algorithms may behave differently. The alibi extra will be cleaned up once upstream metadata and wheels align with RAITAP’s baseline.
Explainers¶
AlibiExplainer wraps a subset of Alibi explainers:
- `KernelShap` — black-box, SHAP-style explanations. RAITAP passes a NumPy batch through your `torch.nn.Module` (converted to tensors on the model's device). Optional `call` keys include `background_data`, `task` (`"classification"`/`"regression"`), `nsamples`, and `target` (class index for classification).
- `IntegratedGradients` — Alibi's TensorFlow/Keras API only. Put a `keras_model` (`tf.keras.Model`) in the Hydra `constructor` block. For PyTorch integrated gradients, use Captum or `KernelShap` here.
Example (tabular-oriented preset lives under src/raitap/configs/transparency/alibi_kernel.yaml):
```yaml
transparency:
  my_alibi_explainer:
    _target_: AlibiExplainer
    algorithm: KernelShap
    call:
      nsamples: 32
      task: classification
    visualisers:
      - _target_: TabularBarChartVisualiser
```
ONNX compatibility¶
RAITAP does not expose an ONNXRuntime-specific Alibi path. KernelShap requires a PyTorch nn.Module whose forward is invoked on tensor batches (Alibi calls your model from NumPy inputs). If your deployment uses ONNX only, use another explainer or wrap inference in an nn.Module that matches this contract.
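One way to satisfy that contract is a thin wrapper around an ONNXRuntime `InferenceSession`. The sketch below is not a RAITAP API; `session` and `input_name` are assumptions about your ONNX export:

```python
import numpy as np
import torch
import torch.nn as nn

class OnnxModuleWrapper(nn.Module):
    """Make an ONNXRuntime session look like a torch.nn.Module whose
    forward accepts tensor batches, as the KernelShap path expects."""

    def __init__(self, session, input_name: str):
        super().__init__()
        self.session = session          # e.g. an onnxruntime.InferenceSession
        self.input_name = input_name    # name of the ONNX graph input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Run the ONNX graph on a NumPy copy, then return a tensor.
        outputs = self.session.run(None, {self.input_name: x.detach().cpu().numpy()})
        return torch.from_numpy(np.asarray(outputs[0]))
```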
Visualiser compatibility¶
AlibiExplainer produces the same kind of tensor attributions as Captum/SHAP (heat-map compatible), so pick RAITAP visualisers that match your input modality (for example TabularBarChartVisualiser for flat/tabular features). The sample config pairs KernelShap with TabularBarChartVisualiser; image or text pipelines should use the corresponding RAITAP image/text visualisers where shapes align.