RAITAP

RAITAP is a Python library to assess the responsibility level of AI models. It is designed to be easily integrated into existing MLOps workflows.

What does it assess?

RAITAP currently assesses the following two responsible AI dimensions:

  • Transparency

  • Robustness

as defined in Towards the certification of AI-based systems and MLOps as enabler of trustworthy AI.

Where does it fit in my workflow?

RAITAP is configured via YAML Hydra configs or CLI flags, and then run via a CLI command.
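As an illustration of the Hydra-style configuration approach, the fragment below is only a sketch: every key name in it is a hypothetical example, not RAITAP's documented schema.

```yaml
# Hypothetical config sketch — key names are illustrative assumptions,
# not RAITAP's actual interface. Consult the library's config reference.
model:
  path: ./models/my_model.pt
dimensions:
  - transparency
  - robustness
output_dir: ./raitap_outputs
```

With Hydra, any of these keys could equally be overridden on the command line at run time.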

This means it can be used either as:

  • a standalone Python package, which stores the assessment outputs in the directory you specify. See Understanding outputs for more details.

  • a step in a larger MLOps pipeline, which forwards the assessment outputs to your tracking software (e.g. MLflow). See the tracking module for more details.

This gives you full flexibility to choose how you want to use RAITAP in your workflow.
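To make the standalone usage concrete, here is a minimal Python sketch of a consumer that gathers assessment outputs from the directory you specified. The file layout (one JSON file per assessed dimension) is an assumption for illustration, not RAITAP's documented output schema.

```python
import json
from pathlib import Path

def collect_assessment_outputs(output_dir):
    """Gather assessment results from an output directory.

    Assumes (hypothetically) one JSON file per assessed dimension,
    keyed by file stem; adapt to the actual output layout.
    """
    results = {}
    for path in sorted(Path(output_dir).glob("*.json")):
        with open(path) as f:
            results[path.stem] = json.load(f)
    return results

# In a pipeline step, the collected results could then be forwarded to
# your tracking software, e.g. via MLflow's mlflow.log_dict().
```

In the pipeline scenario, the same consolidation step would simply hand the dictionary to your tracker instead of keeping it on disk.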

How is it structured?

RAITAP wraps existing XAI frameworks behind a consistent API, so you can easily switch configurations, combine frameworks, and obtain consolidated outputs.
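The adapter pattern behind such a wrapper can be sketched as follows; all class and method names here are hypothetical illustrations of the "consistent API" idea, not RAITAP's actual internals.

```python
# Illustrative-only sketch: each XAI framework is wrapped in an adapter
# exposing one shared interface, so frameworks can be swapped or combined.
class ExplainerAdapter:
    """Hypothetical common interface each wrapped framework implements."""
    name = "base"

    def explain(self, model, inputs):
        raise NotImplementedError

class ShapAdapter(ExplainerAdapter):
    name = "shap"

    def explain(self, model, inputs):
        # A real adapter would delegate to the shap library here;
        # this stub only returns a placeholder result.
        return {"framework": self.name, "n_inputs": len(inputs)}

class LimeAdapter(ExplainerAdapter):
    name = "lime"

    def explain(self, model, inputs):
        return {"framework": self.name, "n_inputs": len(inputs)}

def run_assessment(adapters, model, inputs):
    """Run every configured framework and consolidate the outputs."""
    return {a.name: a.explain(model, inputs) for a in adapters}
```

Because every adapter honors the same interface, switching or combining frameworks is just a change to the list of adapters passed to the consolidation step.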

Table of contents