raitap.transparency¶
- class raitap.transparency.explainers.base_explainer.AbstractExplainer¶
Bases: object
Root base class for all explainer adapters.
Owns the shared interface: the output_payload_kind class variable (default ATTRIBUTIONS) and the check_backend_compat no-op default.
Extend via AttributionOnlyExplainer when the framework should manage the full explain pipeline and you only need to implement compute_attributions, or via FullExplainer when you own the entire explain pipeline yourself.
- class raitap.transparency.explainers.base_explainer.AttributionOnlyExplainer¶
Bases: AbstractExplainer, ABC
Explainer where you implement one step and the framework handles the rest.
Subclasses implement compute_attributions() only; batching, normalisation, result construction, and artifact persistence are provided by this class via explain().
- abstractmethod compute_attributions(model, inputs, **kwargs)¶
Compute attributions for the given inputs.
- Parameters:
model – PyTorch model to explain.
inputs – Input tensor (shape depends on modality).
**kwargs – Framework-specific keyword arguments (e.g. target, baselines, background_data).
- Returns:
Attribution tensor matching the input shape.
- explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)¶
Compute attributions (via compute_attributions()), build an ExplanationResult, write artifacts, and return it.
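A minimal subclass sketch of this split. The class name and the gradient-times-input attribution rule below are illustrative, not part of RAITAP; only the documented compute_attributions(model, inputs, **kwargs) contract is assumed:

```python
import torch
from raitap.transparency import AttributionOnlyExplainer


class GradientTimesInputExplainer(AttributionOnlyExplainer):
    """Illustrative adapter: only the attribution step is implemented."""

    def compute_attributions(self, model, inputs, **kwargs):
        # Hypothetical rule: gradient of the summed max logit w.r.t. the
        # input, scaled elementwise by the input (a saliency-style score).
        inputs = inputs.clone().requires_grad_(True)
        model.zero_grad()
        model(inputs).max(dim=1).values.sum().backward()
        return inputs.grad * inputs.detach()
```

Batching, normalisation, result construction, and artifact persistence are then inherited from explain().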
- class raitap.transparency.explainers.full_explainer.FullExplainer¶
Bases: AbstractExplainer, ABC
Explainer where you own the full explain pipeline end-to-end.
Subclasses implement explain() entirely: data conversion, model invocation, result construction, and artifact persistence. Use this when the target library's API does not map to a simple compute_attributions(model, inputs) → Tensor step (e.g. Alibi Explain).
- abstractmethod explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)¶
Implement the full explanation pipeline.
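A skeletal sketch of what owning the pipeline looks like. The field names follow the ExplanationResult signature documented below; passing run_dir through unchanged, skipping artifact persistence, and the identity "attribution" are all simplifying assumptions:

```python
import torch
from raitap.transparency import ExplanationResult, FullExplainer


class IdentityExplainer(FullExplainer):
    """Illustrative adapter that owns the whole pipeline end-to-end."""

    def explain(self, model, inputs, *, backend=None, run_dir=None,
                output_root='.', experiment_name=None, explainer_target=None,
                explainer_name=None, visualisers=None, **kwargs):
        # A real adapter would convert `inputs`, invoke the wrapped library,
        # and persist artifacts; here the "attribution" is the input itself.
        tensor = torch.as_tensor(inputs)
        return ExplanationResult(
            attributions=tensor.clone(),
            inputs=tensor,
            run_dir=run_dir,
            experiment_name=experiment_name,
            explainer_target=explainer_target or type(self).__name__,
            algorithm="identity",
            explainer_name=explainer_name,
        )
```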
RAITAP Transparency Module
Provides model explanation / attribution capabilities using SHAP, Captum, and (optionally) Alibi Explain.
Transparency Public Surface¶
Explainer classes expose explainer.explain(model, inputs, **kwargs), which returns an ExplanationResult. Each explanation can then render one or more visualisations via explanation.visualise(**kwargs).
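A sketch of that flow. The toy model and the "Saliency" algorithm string are illustrative assumptions; the explain()/visualise() calls follow the signatures documented below:

```python
import torch
from raitap.transparency import CaptumExplainer

model = torch.nn.Sequential(torch.nn.Linear(4, 3))   # toy classifier
inputs = torch.randn(8, 4)

explainer = CaptumExplainer("Saliency")
explanation = explainer.explain(model, inputs, target=0)  # ExplanationResult
explanation.visualise()  # renders the configured visualisations, if any
```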
Explainer classes (used as _target_ values)¶
CaptumExplainer, ShapExplainer, AlibiExplainer (requires the optional alibi extra)
Visualiser classes (used as _target_ values in visualisers list)¶
CaptumImageVisualiser, CaptumTimeSeriesVisualiser, CaptumTextVisualiser
ShapBarVisualiser, ShapBeeswarmVisualiser, ShapWaterfallVisualiser, ShapForceVisualiser, ShapImageVisualiser
TabularBarChartVisualiser
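Because these classes are referenced by _target_, they can be built from a Hydra config. A sketch assuming standard hydra.utils.instantiate behaviour; the "KernelExplainer" value is illustrative:

```python
from hydra.utils import instantiate
from omegaconf import OmegaConf

cfg = OmegaConf.create({
    "explainer": {
        "_target_": "raitap.transparency.ShapExplainer",
        "algorithm": "KernelExplainer",
    }
})
explainer = instantiate(cfg.explainer)  # equivalent to ShapExplainer("KernelExplainer")
```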
- class raitap.transparency.AbstractExplainer¶
Bases: object
Root base class for all explainer adapters.
Owns the shared interface: the output_payload_kind class variable (default ATTRIBUTIONS) and the check_backend_compat no-op default.
Extend via AttributionOnlyExplainer when the framework should manage the full explain pipeline and you only need to implement compute_attributions, or via FullExplainer when you own the entire explain pipeline yourself.
- class raitap.transparency.AlibiExplainer(algorithm='KernelShap', **init_kwargs)¶
Bases: FullExplainer
Wraps selected Alibi Explain algorithms.
KernelShap works with PyTorch nn.Module predictions (black-box).
TreeShap works with fitted tree-based models (sklearn, XGBoost, LightGBM, CatBoost). Pass the fitted tree model via the constructor block (tree_model: ...) or directly: AlibiExplainer("TreeShap", tree_model=my_forest). The model argument to explain() is ignored for TreeShap.
IntegratedGradients follows Alibi's TensorFlow/Keras API: pass keras_model in the Hydra constructor block. The model argument to explain() is ignored.
- explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)¶
Run the full explanation pipeline for the configured Alibi algorithm.
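A TreeShap sketch following the constructor call shown above. The synthetic data and passing model=None for the ignored argument are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from raitap.transparency import AlibiExplainer

X = np.random.rand(200, 5)
y = (X[:, 0] > 0.5).astype(int)
forest = RandomForestClassifier(n_estimators=10).fit(X, y)

# The fitted tree model goes to the constructor; explain() ignores `model`.
explainer = AlibiExplainer("TreeShap", tree_model=forest)
explanation = explainer.explain(model=None, inputs=X[:10])
```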
- class raitap.transparency.AttributionOnlyExplainer¶
Bases: AbstractExplainer, ABC
Explainer where you implement one step and the framework handles the rest.
Subclasses implement compute_attributions() only; batching, normalisation, result construction, and artifact persistence are provided by this class via explain().
- abstractmethod compute_attributions(model, inputs, **kwargs)¶
Compute attributions for the given inputs.
- Parameters:
model – PyTorch model to explain.
inputs – Input tensor (shape depends on modality).
**kwargs – Framework-specific keyword arguments (e.g. target, baselines, background_data).
- Returns:
Attribution tensor matching the input shape.
- explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)¶
Compute attributions (via compute_attributions()), build an ExplanationResult, write artifacts, and return it.
- class raitap.transparency.CaptumExplainer(algorithm, **init_kwargs)¶
Bases: AttributionOnlyExplainer
Single wrapper for ALL Captum attribution methods.
Uses dynamic method loading, so there is no need for a class per method.
- compute_attributions(model, inputs, backend=None, target=None, baselines=None, **attr_kwargs)¶
Compute Captum attributions.
- Parameters:
model – PyTorch model
inputs – Input tensor
target – Target class index(es). Can be:
- int: same target for all samples
- list[int]: per-sample targets
- torch.Tensor: per-sample target tensor
baselines – Baseline for integrated methods (optional)
**attr_kwargs – Additional arguments for .attribute() method
- Returns:
Attribution tensor matching input shape
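A usage sketch of this method. "IntegratedGradients" is assumed to resolve to captum.attr.IntegratedGradients via the dynamic method loading described above, and the toy model is illustrative:

```python
import torch
from raitap.transparency import CaptumExplainer

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 5))
images = torch.randn(4, 3, 8, 8)

explainer = CaptumExplainer("IntegratedGradients")
attrs = explainer.compute_attributions(
    model,
    images,
    target=[1, 0, 3, 2],                 # per-sample targets
    baselines=torch.zeros_like(images),  # baseline for integrated methods
    n_steps=32,                          # forwarded to .attribute()
)
assert attrs.shape == images.shape       # matches input shape, as documented
```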
- class raitap.transparency.CaptumImageVisualiser(method='blended_heat_map', sign='all', show_colorbar=True, title=None, include_original_image=True)¶
Bases: BaseVisualiser
Visualise image attributions using captum.attr.visualization.visualize_image_attr.
Wraps the Captum native function so the output is a Matplotlib Figure that can be saved or returned by explain().
Compatible with ALL Captum attribution algorithms.
- visualise(attributions, inputs=None, *, context=None, max_samples=8, **kwargs)¶
- Parameters:
attributions – (B, C, H, W) or (B, H, W) tensor / array.
inputs – Original images (B, C, H, W) for overlay.
context – Standard RAITAP metadata (optional).
max_samples – Maximum number of samples to display (default: 8).
**kwargs – Forwarded to visualize_image_attr.
- Returns:
Matplotlib Figure with one column per sample.
- class raitap.transparency.CaptumTextVisualiser¶
Bases: BaseVisualiser
Visualise per-token text attributions as a horizontal bar chart.
This is a lightweight matplotlib-based implementation since Captum’s native text visualisation renders HTML (not a Matplotlib Figure).
Compatible with ALL Captum attribution algorithms on text/sequence inputs.
Note:
attributions should be a 1-D array of per-token scores for a single input. Pass token_labels via kwargs for readable output.
- visualise(attributions, inputs=None, token_labels=None, **kwargs)¶
- Parameters:
attributions – 1-D attribution scores (one per token).
inputs – Ignored.
token_labels – List of token strings (optional).
**kwargs – Ignored (for API consistency).
- Returns:
Matplotlib Figure with a horizontal bar chart of token importance.
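A usage sketch with illustrative scores and tokens, following the documented signature:

```python
import numpy as np
from raitap.transparency import CaptumTextVisualiser

scores = np.array([0.42, -0.10, 0.77, 0.05])  # one score per token
tokens = ["the", "film", "was", "great"]

fig = CaptumTextVisualiser().visualise(scores, token_labels=tokens)
fig.savefig("token_importance.png")
```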
- class raitap.transparency.CaptumTimeSeriesVisualiser(method='overlay_individual', sign='absolute_value')¶
Bases: BaseVisualiser
Visualise time-series attributions via captum.attr.visualization.visualize_timeseries_attr.
Compatible with ALL Captum attribution algorithms.
- visualise(attributions, inputs=None, *, context=None, **kwargs)¶
- Parameters:
attributions – (N, C) numpy array / tensor (channels-last). If a batch of shape (B, N, C) is given, the first sample is used.
inputs – Matching (N, C) time-series data (optional).
context – Standard RAITAP metadata (not used by this visualiser).
**kwargs – Forwarded to visualize_timeseries_attr.
- Returns:
Matplotlib Figure.
- class raitap.transparency.ConfiguredVisualiser(visualiser, call_kwargs=<factory>)¶
Bases: object
Visualiser instance plus per-call kwargs for BaseVisualiser.visualise.
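A sketch pairing a visualiser with its per-call kwargs. Passing the result through explain(visualisers=...) is an assumption consistent with the signatures above; the "heat_map" method string is a Captum visualize_image_attr mode:

```python
from raitap.transparency import (
    CaptumExplainer,
    CaptumImageVisualiser,
    ConfiguredVisualiser,
)

configured = ConfiguredVisualiser(
    CaptumImageVisualiser(method="heat_map"),
    call_kwargs={"max_samples": 4},
)
explainer = CaptumExplainer("Saliency")
# explainer.explain(model, images, visualisers=[configured]) would then
# apply these kwargs on every visualise() call.
```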
- class raitap.transparency.ExplainerAdapter(*args, **kwargs)¶
Bases: Protocol
Hydra explainer: explain matches AbstractExplainer.
Read output_payload_kind via raitap.transparency.contracts.explainer_output_kind(), not via direct attribute access; the attribute is optional and defaults to ATTRIBUTIONS when absent.
- exception raitap.transparency.ExplainerBackendIncompatibilityError(explainer, backend, algorithm, compatible_algorithms)¶
Bases: Exception
Raised when an explainer algorithm is not supported by the selected backend.
- class raitap.transparency.ExplanationPayloadKind(*values)¶
Bases: StrEnum
Primary payload category on ExplanationResult.
- class raitap.transparency.ExplanationResult(attributions: 'torch.Tensor', inputs: 'torch.Tensor', run_dir: 'Path', experiment_name: 'str | None', explainer_target: 'str', algorithm: 'str', explainer_name: 'str | None' = None, kwargs: 'dict[str, Any]' = <factory>, visualiser_targets: 'list[str]' = <factory>, visualisers: 'list[ConfiguredVisualiser]' = <factory>, payload_kind: 'ExplanationPayloadKind' = <ExplanationPayloadKind.ATTRIBUTIONS: 'attributions'>)¶
Bases: Trackable, Reportable
- log(tracker, artifact_path='transparency', use_subdirectory=True, **kwargs)¶
Log the object’s artifacts or metadata to the provided tracker.
- to_report_group()¶
Return a ReportGroup representing this object’s report content.
- class raitap.transparency.FullExplainer¶
Bases: AbstractExplainer, ABC
Explainer where you own the full explain pipeline end-to-end.
Subclasses implement explain() entirely: data conversion, model invocation, result construction, and artifact persistence. Use this when the target library's API does not map to a simple compute_attributions(model, inputs) → Tensor step (e.g. Alibi Explain).
- abstractmethod explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)¶
Implement the full explanation pipeline.
- exception raitap.transparency.PayloadVisualiserIncompatibilityError(*, explainer_target, visualiser, output_payload_kind, supported_payload_kinds)¶
Bases: Exception
Raised when a visualiser does not accept the explainer’s output payload kind.
- class raitap.transparency.ShapBarVisualiser(feature_names=None, max_display=20)¶
Bases: BaseVisualiser
Mean absolute SHAP value bar chart via shap.summary_plot(plot_type='bar').
Compatible with all SHAP explainer algorithms.
- visualise(attributions, inputs=None, *, context=None, **kwargs)¶
- Parameters:
attributions – (B, F) SHAP values tensor / array.
inputs – Original feature values (B, F) (used for colouring).
context – Standard RAITAP metadata (not used by this visualiser).
**kwargs – Forwarded to shap.summary_plot.
- class raitap.transparency.ShapBeeswarmVisualiser(feature_names=None, max_display=20)¶
Bases: BaseVisualiser
SHAP beeswarm summary plot via shap.summary_plot().
Compatible with all SHAP explainer algorithms.
- visualise(attributions, inputs=None, *, context=None, **kwargs)¶
Create visualization from attributions.
- Parameters:
attributions – Attribution values (numpy array or tensor)
inputs – Original inputs for overlay (optional)
context – Standard RAITAP pipeline metadata (optional)
**kwargs – visualiser-specific arguments
- Returns:
Matplotlib figure
- class raitap.transparency.ShapExplainer(algorithm, **init_kwargs)¶
Bases: AttributionOnlyExplainer
Single wrapper for ALL SHAP explainer types.
Uses dynamic explainer loading, so there is no need for a class per explainer.
- compute_attributions(model, inputs, backend=None, background_data=None, target=None, **shap_kwargs)¶
Compute SHAP values.
- Parameters:
model – PyTorch model
inputs – Input tensor
background_data – Background dataset (REQUIRED for most explainers):
- GradientExplainer: required
- DeepExplainer: required
- KernelExplainer: required
- TreeExplainer: optional
target – Target class(es) for attributions (optional). If not specified, attributions are returned for all classes.
**shap_kwargs – Additional arguments for .shap_values() method
- Returns:
SHAP values as torch.Tensor
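A usage sketch of this method. "GradientExplainer" is assumed to resolve to shap.GradientExplainer via the dynamic explainer loading, and the toy model is illustrative:

```python
import torch
from raitap.transparency import ShapExplainer

model = torch.nn.Sequential(torch.nn.Linear(6, 2))
inputs = torch.randn(16, 6)
background = torch.randn(64, 6)    # required for GradientExplainer

explainer = ShapExplainer("GradientExplainer")
shap_values = explainer.compute_attributions(
    model,
    inputs,
    background_data=background,
    target=1,                      # keep attributions for class 1 only
)
```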
- class raitap.transparency.ShapForceVisualiser(feature_names=None, expected_value=0.0, sample_index=0)¶
Bases: BaseVisualiser
Per-sample SHAP force plot via shap.plots.force (matplotlib backend).
Compatible with all SHAP explainer algorithms.
- visualise(attributions, inputs=None, *, context=None, **kwargs)¶
Create visualization from attributions.
- Parameters:
attributions – Attribution values (numpy array or tensor)
inputs – Original inputs for overlay (optional)
context – Standard RAITAP pipeline metadata (optional)
**kwargs – visualiser-specific arguments
- Returns:
Matplotlib figure
- class raitap.transparency.ShapImageVisualiser(max_samples=4, title=None, include_original_image=True, show_colorbar=True, cmap='coolwarm', overlay_alpha=0.65)¶
Bases: BaseVisualiser
Render image-level SHAP attributions with Matplotlib.
This visualiser does not call shap.image_plot directly. Instead, it renders a RAITAP-managed figure that can optionally show the original image, a SHAP heatmap overlay, sample-aware titles, and a colorbar.
Warning
Only compatible with GradientExplainer and DeepExplainer. These are the only SHAP explainers that compute pixel-level SHAP values suitable for image visualisation. Passing attributions from other explainers will produce meaningless plots.
Positive contributions are shown in warm colours and negative contributions in cool colours, using the configured Matplotlib colormap.
- visualise(attributions, inputs=None, *, context=None, max_samples=None, title=None, **kwargs)¶
- Parameters:
attributions – (B, C, H, W) SHAP values tensor / array.
inputs – Original images (B, C, H, W) for background.
context – Standard RAITAP metadata (optional).
max_samples – Maximum number of images to display.
title – Optional attribution panel title. Overrides the algorithm-based default title, even when set to an empty string.
**kwargs – Optional visual styling overrides.
- Returns:
Matplotlib Figure.
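An end-to-end sketch honouring the warning above. The toy convolutional model is illustrative, and "GradientExplainer" is assumed to resolve to shap.GradientExplainer:

```python
import torch
from raitap.transparency import ShapExplainer, ShapImageVisualiser

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 16 * 16, 2),
)
images = torch.randn(4, 3, 16, 16)
background = torch.randn(32, 3, 16, 16)

# Pixel-level SHAP values, as required by this visualiser.
attrs = ShapExplainer("GradientExplainer").compute_attributions(
    model, images, background_data=background, target=0,
)
fig = ShapImageVisualiser(max_samples=2).visualise(attrs, inputs=images)
```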
- class raitap.transparency.ShapWaterfallVisualiser(feature_names=None, expected_value=0.0, sample_index=0, max_display=10)¶
Bases: BaseVisualiser
Per-sample SHAP waterfall chart via shap.plots.waterfall.
Compatible with all SHAP explainer algorithms.
- visualise(attributions, inputs=None, *, context=None, **kwargs)¶
Create visualization from attributions.
- Parameters:
attributions – Attribution values (numpy array or tensor)
inputs – Original inputs for overlay (optional)
context – Standard RAITAP pipeline metadata (optional)
**kwargs – visualiser-specific arguments
- Returns:
Matplotlib figure
- class raitap.transparency.TabularBarChartVisualiser(feature_names=None)¶
Bases: BaseVisualiser
Visualise attributions for tabular data as bar charts.
Works with any attribution method (Captum, SHAP, etc.).
- visualise(attributions, inputs=None, **kwargs)¶
Create feature importance bar chart.
- Parameters:
attributions – (B, num_features) array
inputs – Not used for tabular visualization
- Returns:
Matplotlib figure
- class raitap.transparency.VisualisationResult(explanation, figure, visualiser_name, visualiser_target, output_path)¶
Bases: Trackable
The PNG is written to output_path; figure is closed after save to limit memory use.
- log(tracker, artifact_path='transparency', use_subdirectory=True, **kwargs)¶
Log the object’s artifacts or metadata to the provided tracker.
- exception raitap.transparency.VisualiserIncompatibilityError(framework, visualiser, algorithm, compatible_algorithms)¶
Bases: Exception
Raised when a visualiser is not compatible with the chosen explainer algorithm.
- raitap.transparency.contracts.explainer_output_kind(explainer)¶
Return the explainer’s output payload kind, defaulting to ATTRIBUTIONS when the attribute is absent (see ExplainerAdapter above).
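A usage sketch relying on the ATTRIBUTIONS fallback described under ExplainerAdapter; the "Saliency" algorithm string is illustrative:

```python
from raitap.transparency import CaptumExplainer, ExplanationPayloadKind
from raitap.transparency.contracts import explainer_output_kind

explainer = CaptumExplainer("Saliency")
kind = explainer_output_kind(explainer)  # falls back to ATTRIBUTIONS if unset
if kind is ExplanationPayloadKind.ATTRIBUTIONS:
    print("explainer emits attribution tensors")
```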