raitap.transparency

class raitap.transparency.explainers.base_explainer.AbstractExplainer

Bases: object

Root base class for all explainer adapters.

Owns the shared interface: output_payload_kind class variable (default ATTRIBUTIONS) and the check_backend_compat no-op default.

Extend via AttributionOnlyExplainer when the framework should manage the full explain pipeline and you only need to implement compute_attributions, or via FullExplainer when you own the entire explain pipeline yourself.

class raitap.transparency.explainers.base_explainer.AttributionOnlyExplainer

Bases: AbstractExplainer, ABC

Explainer where you implement one step and the framework handles the rest.

Subclasses implement compute_attributions() only; batching, normalisation, result construction, and artifact persistence are provided by this class via explain().

abstractmethod compute_attributions(model, inputs, **kwargs)

Compute attributions for the given inputs.

Parameters:
  • model – PyTorch model to explain.

  • inputs – Input tensor (shape depends on modality).

  • **kwargs – Framework-specific keyword arguments (e.g. target, baselines, background_data).

Returns:

Attribution tensor matching the input shape.

explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)

Compute attributions (via compute_attributions()), build an ExplanationResult, write artifacts, and return it.
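To illustrate the division of labour, here is a minimal sketch of an AttributionOnlyExplainer subclass; the class name and the plain input-gradient heuristic are hypothetical, and only compute_attributions is implemented while explain() supplies batching, result construction, and artifact persistence:

    import torch

    from raitap.transparency import AttributionOnlyExplainer


    class InputGradientExplainer(AttributionOnlyExplainer):
        """Hypothetical adapter: attributions are raw input gradients."""

        def compute_attributions(self, model, inputs, **kwargs):
            inputs = inputs.clone().detach().requires_grad_(True)
            outputs = model(inputs)
            # Backpropagate the top predicted logit of each sample to the inputs.
            outputs.max(dim=1).values.sum().backward()
            # The returned tensor matches the input shape, as the contract requires.
            return inputs.grad.detach()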

class raitap.transparency.explainers.full_explainer.FullExplainer

Bases: AbstractExplainer, ABC

Explainer where you own the full explain pipeline end-to-end.

Subclasses implement explain() entirely — data conversion, model invocation, result construction, and artifact persistence. Use this when the target library’s API does not map to a simple compute_attributions(model, inputs) Tensor step (e.g. Alibi Explain).

abstractmethod explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)

Implement the full explanation pipeline.
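For comparison, a rough sketch of a FullExplainer subclass; the class name, the zero-tensor placeholder for the wrapped library call, and the run-directory handling are assumptions, and artifact persistence (also the subclass's responsibility) is omitted. The ExplanationResult fields follow the constructor documented further below:

    from pathlib import Path

    import torch

    from raitap.transparency import ExplanationResult, FullExplainer


    class MyLibraryExplainer(FullExplainer):
        """Hypothetical adapter around a library with its own explanation API."""

        def explain(self, model, inputs, *, backend=None, run_dir=None, output_root=".",
                    experiment_name=None, explainer_target=None, explainer_name=None,
                    visualisers=None, **kwargs):
            # Data conversion and model invocation are owned by the subclass;
            # a zero tensor stands in for the wrapped library's explanation call.
            scores = torch.zeros_like(inputs)
            return ExplanationResult(
                attributions=scores,
                inputs=inputs,
                run_dir=Path(run_dir) if run_dir is not None else Path(output_root),
                experiment_name=experiment_name,
                explainer_target=explainer_target or "my_library_explainer",
                algorithm="MyAlgorithm",
                explainer_name=explainer_name,
            )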

RAITAP Transparency Module

Provides model explanation / attribution capabilities using SHAP and Captum, with optional Alibi Explain support.

Transparency Public Surface

Explainer classes expose explainer.explain(model, inputs, **kwargs), which returns an ExplanationResult. Each explanation can then render one or more visualisations via explanation.visualise(**kwargs).
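A minimal sketch of that surface, using a placeholder model and batch; whether visualise() produces figures without explicitly configured visualisers depends on the run configuration:

    import torch
    import torch.nn as nn

    from raitap.transparency import CaptumExplainer

    model = nn.Linear(10, 2)       # placeholder model
    inputs = torch.randn(4, 10)    # placeholder batch

    explainer = CaptumExplainer("IntegratedGradients")
    explanation = explainer.explain(model, inputs, target=0)  # returns an ExplanationResult
    explanation.visualise()        # renders the configured visualisations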

Explainer classes (used as _target_ values)

CaptumExplainer, ShapExplainer, AlibiExplainer (optional extra alibi)

Visualiser classes (used as _target_ values in visualisers list)

CaptumImageVisualiser, CaptumTimeSeriesVisualiser, CaptumTextVisualiser, ShapBarVisualiser, ShapBeeswarmVisualiser, ShapWaterfallVisualiser, ShapForceVisualiser, ShapImageVisualiser, TabularBarChartVisualiser
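As a hedged sketch, these classes can be referenced as Hydra _target_ values and instantiated with hydra.utils.instantiate; the config keys shown are illustrative rather than a documented schema:

    from hydra.utils import instantiate

    explainer_cfg = {
        "_target_": "raitap.transparency.CaptumExplainer",
        "algorithm": "IntegratedGradients",
    }
    visualiser_cfg = {
        "_target_": "raitap.transparency.TabularBarChartVisualiser",
        "feature_names": ["age", "income", "score"],
    }

    explainer = instantiate(explainer_cfg)     # -> CaptumExplainer instance
    visualiser = instantiate(visualiser_cfg)   # -> TabularBarChartVisualiser instance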

class raitap.transparency.AbstractExplainer

Bases: object

Root base class for all explainer adapters.

Owns the shared interface: output_payload_kind class variable (default ATTRIBUTIONS) and the check_backend_compat no-op default.

Extend via AttributionOnlyExplainer when the framework should manage the full explain pipeline and you only need to implement compute_attributions, or via FullExplainer when you own the entire explain pipeline yourself.

class raitap.transparency.AlibiExplainer(algorithm='KernelShap', **init_kwargs)

Bases: FullExplainer

Wraps selected Alibi Explain algorithms.

KernelShap works with PyTorch nn.Module predictions (black-box).

TreeShap works with fitted tree-based models (sklearn, XGBoost, LightGBM, CatBoost). Pass the fitted tree model via the constructor block (tree_model: ...) or directly: AlibiExplainer("TreeShap", tree_model=my_forest). The model argument to explain() is ignored for TreeShap.

IntegratedGradients follows Alibi’s TensorFlow/Keras API: pass keras_model in the Hydra constructor block. The model argument to explain() is ignored.

explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)

Implement the full explanation pipeline.
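A construction sketch for the two non-Keras algorithms, assuming the optional alibi extra is installed; the random-forest fitting is a placeholder:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    from raitap.transparency import AlibiExplainer

    # Placeholder fitted tree model for TreeShap.
    X = np.random.rand(32, 5)
    y = (X[:, 0] > 0.5).astype(int)
    forest = RandomForestClassifier(n_estimators=10).fit(X, y)

    # KernelShap treats the PyTorch model's predictions as a black box.
    kernel_explainer = AlibiExplainer("KernelShap")

    # TreeShap receives the fitted tree model at construction time; the model
    # argument later passed to explain() is ignored for this algorithm.
    tree_explainer = AlibiExplainer("TreeShap", tree_model=forest)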

class raitap.transparency.AttributionOnlyExplainer

Bases: AbstractExplainer, ABC

Explainer where you implement one step and the framework handles the rest.

Subclasses implement compute_attributions() only; batching, normalisation, result construction, and artifact persistence are provided by this class via explain().

abstractmethod compute_attributions(model, inputs, **kwargs)

Compute attributions for the given inputs.

Parameters:
  • model – PyTorch model to explain.

  • inputs – Input tensor (shape depends on modality).

  • **kwargs – Framework-specific keyword arguments (e.g. target, baselines, background_data).

Returns:

Attribution tensor matching the input shape.

explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)

Compute attributions (via compute_attributions()), build an ExplanationResult, write artifacts, and return it.

class raitap.transparency.CaptumExplainer(algorithm, **init_kwargs)

Bases: AttributionOnlyExplainer

Single wrapper for ALL Captum attribution methods.

Uses dynamic method loading, so there is no need for a separate class per attribution method.

compute_attributions(model, inputs, backend=None, target=None, baselines=None, **attr_kwargs)

Compute Captum attributions.

Parameters:
  • model – PyTorch model

  • inputs – Input tensor

  • target – Target class index(es). Can be:
      - int: Same target for all samples
      - list[int]: Per-sample targets
      - torch.Tensor: Per-sample target tensor

  • baselines – Baseline for integrated methods (optional)

  • **attr_kwargs – Additional arguments for .attribute() method

Returns:

Attribution tensor matching input shape
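A sketch of the three documented target forms, with a placeholder three-class model:

    import torch
    import torch.nn as nn

    from raitap.transparency import CaptumExplainer

    model = nn.Linear(10, 3)     # placeholder 3-class model
    inputs = torch.randn(4, 10)
    explainer = CaptumExplainer("Saliency")

    # int: the same class index for every sample.
    attrs = explainer.compute_attributions(model, inputs, target=1)

    # list[int]: one class index per sample.
    attrs = explainer.compute_attributions(model, inputs, target=[0, 1, 2, 0])

    # torch.Tensor: per-sample target tensor.
    attrs = explainer.compute_attributions(model, inputs, target=torch.tensor([0, 1, 2, 0]))

    assert attrs.shape == inputs.shape  # attributions match the input shape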

class raitap.transparency.CaptumImageVisualiser(method='blended_heat_map', sign='all', show_colorbar=True, title=None, include_original_image=True)

Bases: BaseVisualiser

Visualise image attributions using captum.attr.visualization.visualize_image_attr.

Wraps the Captum native function so the output is a Matplotlib Figure that can be saved or returned by explain().

Compatible with ALL Captum attribution algorithms.

visualise(attributions, inputs=None, *, context=None, max_samples=8, **kwargs)
Parameters:
  • attributions – (B, C, H, W) or (B, H, W) tensor / array.

  • inputs – Original images (B, C, H, W) for overlay.

  • context – Standard RAITAP metadata (optional).

  • max_samples – Maximum number of samples to display (default: 8).

  • **kwargs – Forwarded to visualize_image_attr.

Returns:

Matplotlib Figure with one column per sample.

class raitap.transparency.CaptumTextVisualiser

Bases: BaseVisualiser

Visualise per-token text attributions as a horizontal bar chart.

This is a lightweight matplotlib-based implementation since Captum’s native text visualisation renders HTML (not a Matplotlib Figure).

Compatible with ALL Captum attribution algorithms on text/sequence inputs.

Note: attributions should be a 1-D array of per-token scores for a single input. Pass token_labels via kwargs for readable output.

visualise(attributions, inputs=None, token_labels=None, **kwargs)
Parameters:
  • attributions – 1-D attribution scores (one per token).

  • inputs – Ignored.

  • token_labels – List of token strings (optional).

  • **kwargs – Ignored (for API consistency).

Returns:

Matplotlib Figure with a horizontal bar chart of token importance.
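A small sketch with made-up token scores and labels:

    import numpy as np

    from raitap.transparency import CaptumTextVisualiser

    # 1-D per-token scores for a single input, with matching token labels.
    token_scores = np.array([0.42, -0.13, 0.08, 0.91])
    tokens = ["the", "movie", "was", "great"]

    visualiser = CaptumTextVisualiser()
    fig = visualiser.visualise(token_scores, token_labels=tokens)
    fig.savefig("token_importance.png")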

class raitap.transparency.CaptumTimeSeriesVisualiser(method='overlay_individual', sign='absolute_value')

Bases: BaseVisualiser

Visualise time-series attributions via captum.attr.visualization.visualize_timeseries_attr.

Compatible with ALL Captum attribution algorithms.

visualise(attributions, inputs=None, *, context=None, **kwargs)
Parameters:
  • attributions – (N, C) numpy array / tensor (channels-last). If a batch of shape (B, N, C) is given, the first sample is used.

  • inputs – Matching (N, C) time-series data (optional).

  • context – Standard RAITAP metadata (not used by this visualiser).

  • **kwargs – Forwarded to visualize_timeseries_attr.

Returns:

Matplotlib Figure.

class raitap.transparency.ConfiguredVisualiser(visualiser, call_kwargs=<factory>)

Bases: object

Visualiser instance plus per-call kwargs for BaseVisualiser.visualise.

class raitap.transparency.ExplainerAdapter(*args, **kwargs)

Bases: Protocol

Protocol for Hydra-instantiated explainers; the explain signature matches AbstractExplainer.

Read output_payload_kind via raitap.transparency.contracts.explainer_output_kind() (not via direct attribute access — the attribute is optional and defaults to ATTRIBUTIONS when absent).
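A sketch of the recommended access pattern; the explainer instance is a placeholder:

    from raitap.transparency import CaptumExplainer, ExplanationPayloadKind
    from raitap.transparency.contracts import explainer_output_kind

    explainer = CaptumExplainer("Saliency")

    # Read the payload kind via the helper rather than the attribute; the helper
    # falls back to ATTRIBUTIONS when output_payload_kind is absent.
    kind = explainer_output_kind(explainer)
    if kind is ExplanationPayloadKind.ATTRIBUTIONS:
        ...  # choose attribution-compatible visualisers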

exception raitap.transparency.ExplainerBackendIncompatibilityError(explainer, backend, algorithm, compatible_algorithms)

Bases: Exception

Raised when an explainer algorithm is not supported by the selected backend.

class raitap.transparency.ExplanationPayloadKind(*values)

Bases: StrEnum

Primary payload category on ExplanationResult.

class raitap.transparency.ExplanationResult(attributions: 'torch.Tensor', inputs: 'torch.Tensor', run_dir: 'Path', experiment_name: 'str | None', explainer_target: 'str', algorithm: 'str', explainer_name: 'str | None' = None, kwargs: 'dict[str, Any]' = <factory>, visualiser_targets: 'list[str]' = <factory>, visualisers: 'list[ConfiguredVisualiser]' = <factory>, payload_kind: 'ExplanationPayloadKind' = <ExplanationPayloadKind.ATTRIBUTIONS: 'attributions'>)

Bases: Trackable, Reportable

log(tracker, artifact_path='transparency', use_subdirectory=True, **kwargs)

Log the object’s artifacts or metadata to the provided tracker.

to_report_group()

Return a ReportGroup representing this object’s report content.

class raitap.transparency.FullExplainer

Bases: AbstractExplainer, ABC

Explainer where you own the full explain pipeline end-to-end.

Subclasses implement explain() entirely — data conversion, model invocation, result construction, and artifact persistence. Use this when the target library’s API does not map to a simple compute_attributions(model, inputs) Tensor step (e.g. Alibi Explain).

abstractmethod explain(model, inputs, *, backend=None, run_dir=None, output_root='.', experiment_name=None, explainer_target=None, explainer_name=None, visualisers=None, **kwargs)

Implement the full explanation pipeline.

exception raitap.transparency.PayloadVisualiserIncompatibilityError(*, explainer_target, visualiser, output_payload_kind, supported_payload_kinds)

Bases: Exception

Raised when a visualiser does not accept the explainer’s output payload kind.

class raitap.transparency.ShapBarVisualiser(feature_names=None, max_display=20)

Bases: BaseVisualiser

Mean absolute SHAP value bar chart via shap.summary_plot(plot_type='bar').

Compatible with all SHAP explainer algorithms.

visualise(attributions, inputs=None, *, context=None, **kwargs)
Parameters:
  • attributions – (B, F) SHAP values tensor / array.

  • inputs – Original feature values (B, F) (used for colouring).

  • context – Standard RAITAP metadata (not used by this visualiser).

  • **kwargs – Forwarded to shap.summary_plot.

class raitap.transparency.ShapBeeswarmVisualiser(feature_names=None, max_display=20)

Bases: BaseVisualiser

SHAP beeswarm summary plot via shap.summary_plot().

Compatible with all SHAP explainer algorithms.

visualise(attributions, inputs=None, *, context=None, **kwargs)

Create visualization from attributions.

Parameters:
  • attributions – Attribution values (numpy array or tensor)

  • inputs – Original inputs for overlay (optional)

  • context – Standard RAITAP pipeline metadata (optional)

  • **kwargs – visualiser-specific arguments

Returns:

Matplotlib figure

class raitap.transparency.ShapExplainer(algorithm, **init_kwargs)

Bases: AttributionOnlyExplainer

Single wrapper for ALL SHAP explainer types.

Uses dynamic explainer loading, so there is no need for a separate class per explainer type.

compute_attributions(model, inputs, backend=None, background_data=None, target=None, **shap_kwargs)

Compute SHAP values.

Parameters:
  • model – PyTorch model

  • inputs – Input tensor

  • background_data – Background dataset (REQUIRED for most explainers):
      - GradientExplainer: Required
      - DeepExplainer: Required
      - KernelExplainer: Required
      - TreeExplainer: Optional

  • target – Target class(es) for attributions (optional). If not specified, attributions are returned for all classes.

  • **shap_kwargs – Additional arguments for .shap_values() method

Returns:

SHAP values as torch.Tensor
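A sketch showing background_data and target with a placeholder model; GradientExplainer is one of the algorithms that requires a background dataset:

    import torch
    import torch.nn as nn

    from raitap.transparency import ShapExplainer

    model = nn.Sequential(nn.Linear(8, 2), nn.Softmax(dim=1))  # placeholder model
    background = torch.randn(50, 8)  # background dataset (required here)
    samples = torch.randn(4, 8)

    explainer = ShapExplainer("GradientExplainer")
    shap_values = explainer.compute_attributions(
        model, samples, background_data=background, target=1
    )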

class raitap.transparency.ShapForceVisualiser(feature_names=None, expected_value=0.0, sample_index=0)

Bases: BaseVisualiser

Per-sample SHAP force plot via shap.plots.force (matplotlib backend).

Compatible with all SHAP explainer algorithms.

visualise(attributions, inputs=None, *, context=None, **kwargs)

Create visualization from attributions.

Parameters:
  • attributions – Attribution values (numpy array or tensor)

  • inputs – Original inputs for overlay (optional)

  • context – Standard RAITAP pipeline metadata (optional)

  • **kwargs – visualiser-specific arguments

Returns:

Matplotlib figure

class raitap.transparency.ShapImageVisualiser(max_samples=4, title=None, include_original_image=True, show_colorbar=True, cmap='coolwarm', overlay_alpha=0.65)

Bases: BaseVisualiser

Render image-level SHAP attributions with Matplotlib.

This visualiser does not call shap.image_plot directly. Instead, it renders a RAITAP-managed figure that can optionally show the original image, a SHAP heatmap overlay, sample-aware titles, and a colorbar.

Warning

Only compatible with GradientExplainer and DeepExplainer. These are the only SHAP explainers that compute pixel-level SHAP values suitable for image visualisation. Passing attributions from other explainers will produce meaningless plots.

Positive contributions are shown in warm colours and negative contributions in cool colours, using the configured Matplotlib colormap.

visualise(attributions, inputs=None, *, context=None, max_samples=None, title=None, **kwargs)
Parameters:
  • attributions – (B, C, H, W) SHAP values tensor / array.

  • inputs – Original images (B, C, H, W) for background.

  • context – Standard RAITAP metadata (optional).

  • max_samples – Maximum number of images to display.

  • title – Optional attribution panel title. Overrides the algorithm-based default title, even when set to an empty string.

  • **kwargs – Optional visual styling overrides.

Returns:

Matplotlib Figure.
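A pairing sketch that respects the warning above, with a placeholder CNN and random images:

    import torch
    import torch.nn as nn

    from raitap.transparency import ShapExplainer, ShapImageVisualiser

    model = nn.Sequential(nn.Conv2d(3, 4, 3), nn.Flatten(), nn.Linear(3600, 2))  # placeholder CNN
    background = torch.randn(16, 3, 32, 32)
    images = torch.randn(4, 3, 32, 32)

    # Pixel-level SHAP values: only GradientExplainer or DeepExplainer suit this visualiser.
    explainer = ShapExplainer("GradientExplainer")
    shap_values = explainer.compute_attributions(
        model, images, background_data=background, target=0
    )

    visualiser = ShapImageVisualiser(max_samples=4)
    fig = visualiser.visualise(shap_values, inputs=images)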

class raitap.transparency.ShapWaterfallVisualiser(feature_names=None, expected_value=0.0, sample_index=0, max_display=10)

Bases: BaseVisualiser

Per-sample SHAP waterfall chart via shap.plots.waterfall.

Compatible with all SHAP explainer algorithms.

visualise(attributions, inputs=None, *, context=None, **kwargs)

Create visualization from attributions.

Parameters:
  • attributions – Attribution values (numpy array or tensor)

  • inputs – Original inputs for overlay (optional)

  • context – Standard RAITAP pipeline metadata (optional)

  • **kwargs – visualiser-specific arguments

Returns:

Matplotlib figure

class raitap.transparency.TabularBarChartVisualiser(feature_names=None)

Bases: BaseVisualiser

Visualise attributions for tabular data as bar charts.

Works with any attribution method (Captum, SHAP, etc.).

visualise(attributions, inputs=None, **kwargs)

Create feature importance bar chart.

Parameters:
  • attributions – (B, num_features) array

  • inputs – Not used for tabular visualization

Returns:

Matplotlib figure

class raitap.transparency.VisualisationResult(explanation, figure, visualiser_name, visualiser_target, output_path)

Bases: Trackable

The PNG is written to output_path; the figure is closed after saving to limit memory use.

log(tracker, artifact_path='transparency', use_subdirectory=True, **kwargs)

Log the object’s artifacts or metadata to the provided tracker.

exception raitap.transparency.VisualiserIncompatibilityError(framework, visualiser, algorithm, compatible_algorithms)

Bases: Exception

Raised when a visualiser is not compatible with the chosen explainer algorithm.

raitap.transparency.contracts.explainer_output_kind(explainer)

Return the explainer's ExplanationPayloadKind, falling back to ATTRIBUTIONS when the explainer does not define output_payload_kind.