A thoughtful approach to AI ethics is becoming increasingly important for all organizations deriving value from AI. One of the fast-emerging niches in AI is the provision and support of AI fairness toolkits: software for evaluating AI models on fairness. In this blog post, we survey the landscape of AI fairness toolkits with an eye toward implementation. We hope that an overview of the top toolkits and resources – starting with Fairness and Robustness – will help more companies adopt AI responsibly, with ethical principles at the core.
While fairness regulations are still an emerging area of AI policy, organizations working with AI in regulated industries like finance, transportation, construction, pharmaceuticals, and healthcare have a vested interest in staying ahead of the curve and building trust in their AI models. For larger enterprises, this typically means hiring AI-fairness experts as part of a dedicated risk-management group, whereas startups often opt to train existing AI talent to work with an AI fairness toolkit.
The notion of fairness is difficult to define because it depends on the context of the problem domain. This has a few repercussions. First, there is no consensus on a well-defined mathematical objective over which to optimize. Creating such a metric requires deep understanding of the problem domain, the social context, and the mathematical underpinnings of fairness. Second, policy-makers shy away from making broad statements, which means there is no consensus on standards for AI in production. More information on the different definitions of fairness can be found in Tutorial #1: bias and fairness in AI.
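To make the tension concrete, consider two of the most common definitions. Demographic parity requires positive predictions to be independent of the protected attribute \(A\):

\[
P(\hat{Y}=1 \mid A=0) = P(\hat{Y}=1 \mid A=1),
\]

while equalized odds additionally conditions on the true label \(Y\):

\[
P(\hat{Y}=1 \mid A=a, Y=y) = P(\hat{Y}=1 \mid A=a', Y=y), \quad y \in \{0,1\}.
\]

When base rates differ between groups, a non-trivial classifier generally cannot satisfy both at once, which is why the choice of metric must come from the problem domain rather than from the toolkit.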
While AI fairness toolkits can help streamline the workflow of AI-fairness experts, we believe their most important use-cases are in enabling AI developers without specific fairness expertise to audit models and mitigate bias. We divide the landscape of existing AI fairness toolkits into three general types:
1. Exhaustive toolbox: These toolkits offer flexible choice among various fairness definitions as well as a set of mitigation algorithms but have a steep learning curve. The mitigation functionality can make changes at different stages of the AI model pipeline. Users of such toolkits need prior expertise in AI fairness in order to select the right fairness metric and mitigation strategy for their application.
Examples: Fairlearn, AI Fairness 360, TensorFlow Fairness Indicators
2. Automated detection: These toolkits automatically flag issues with the model with respect to various fairness properties. They are easy to use but can be limited to a specific range of use-cases (like auditing for recruiting applications), data types (for example, structured data only), and model types (like binary classification as opposed to multi-class classification). Furthermore, they typically offer no mitigation options. This is an important limitation: mitigation can be difficult to automate without domain-specific inputs from technical users. The intended audience is non-technical users and non-experts in AI fairness. However, these toolkits can help form a reporting pipeline for risk-management audiences.
Examples: Aequitas, Audit AI
3. Automated exploration: These toolkits offer rich analysis and visualization of an AI model’s statistical properties and counterfactual behaviors. The purpose of these toolkits is to enable technical and non-technical users to learn as much as possible about the model’s performance with regard to fairness without requiring fairness-specific expertise. A focus on visualizations and web-interface functionality typically makes these toolkits less useful for a mitigation workflow.
Example: Google’s What-If tool
We observe that the AI fairness toolkits landscape currently lacks automated mitigation toolkits. Automated mitigation is technologically and strategically challenging at present. Specifically, the complexity of the tradeoffs among different fairness metrics requires case-specific intervention. This may change in the future as evolving norms and regulations in different enterprise-AI domains mature and reach consensus, thus allowing for domain-specific automated mitigation.
A more subtle gap in the current landscape is the absence of a perfect fit for the needs of working AI developers who are novices in AI fairness. The toolkits geared towards mitigation require a degree of prior fairness-expertise to use effectively, while toolkits with didactic and exploratory functionality lack mitigation options. Keeping this target audience in mind, we find different advantages and drawbacks among the leading AI fairness toolkits:
IBM’s AI Fairness 360
Advantages:
- Most comprehensive open-source toolkit in the Responsible AI space
- Large selection of supported bias mitigation algorithms and supported fairness metrics
- Offered via IBM Cloud (formerly Bluemix)
Drawbacks:
- No exploratory functionality: users must already know what fairness definitions they wish to apply (a minimal workflow is sketched below)
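To illustrate that workflow, here is a minimal sketch using AI Fairness 360. It assumes a pandas DataFrame `df` with a binary label column `hired` and a protected attribute `sex` encoded as 0/1 (both column names are placeholders); note that the user must name both the fairness metric and the mitigation stage explicitly.

```python
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Wrap the raw DataFrame (assumed to exist) in AIF360's dataset abstraction.
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])
priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# The user chooses the fairness definition up front, e.g. disparate impact.
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("Disparate impact:", metric.disparate_impact())

# ...and the mitigation stage, here pre-processing via reweighing.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
dataset_transf = rw.fit_transform(dataset)
```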
Fairlearn by Microsoft
Advantages:
- An intuitive dashboard makes it easy to experiment with various fairness definitions and compare mitigated models
- A widget for easy integration into Jupyter notebooks
- Offered via Azure ML
Drawbacks:
- Does not handle multi-class outputs or non-binary protected variables
- Focuses on group fairness
- No guidance on what mitigation algorithm should be used to improve a user’s model
- No dataset bias detection
- Doesn’t work with image or text data
- Only works with models that expose fit() and predict() (the scikit-learn estimator interface), as in the sketch below
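A minimal sketch of the Fairlearn workflow, assuming a feature matrix `X`, labels `y`, and a sensitive-feature column `A` (all placeholders), with an ordinary scikit-learn estimator:

```python
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

model = LogisticRegression().fit(X, y)

# Disaggregate a metric by the sensitive feature to surface disparities.
mf = MetricFrame(metrics=selection_rate, y_true=y,
                 y_pred=model.predict(X), sensitive_features=A)
print(mf.by_group)

# Mitigate via the reductions approach; any fit()/predict() estimator works.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=A)
y_pred_fair = mitigator.predict(X)
```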
Aequitas
Advantages:
- Easy to use web interface with step-by-step guidance
- Easily serves beginners as well as non-technical stakeholders
Drawbacks:
- Reports focus on auditing, and are less helpful for designing mitigation (a typical audit call is sketched below)
- Oversimplifies some fairness definitions, making them less useful for sophisticated use cases
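For reference, a minimal sketch of an Aequitas audit via its Python API, assuming a DataFrame `df` in the layout Aequitas expects: a `score` column with model predictions, a `label_value` column with ground truth, and one or more attribute columns such as `race` (the reference-group choice below is illustrative):

```python
from aequitas.group import Group
from aequitas.bias import Bias

# Per-group confusion-matrix metrics (FPR, FNR, selection rates, ...).
xtab, _ = Group().get_crosstabs(df)

# Disparities relative to a user-chosen reference group per attribute.
bias_df = Bias().get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"race": "white"}, alpha=0.05)
print(bias_df[["attribute_name", "attribute_value", "fpr_disparity"]])
```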
Google’s What If Tool
Advantages:
- Allows users to try five types of fairness strategies
- Provides great visualizations
Drawbacks:
- Users need to figure out which type of fairness is correct for their use case
- Does not provide mitigation algorithms
- Non-TensorFlow models can be used but require extra plumbing, as in the sketch below
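A minimal sketch of that plumbing in a Jupyter notebook, assuming `examples` is a list of `tf.train.Example` protos, `model` is a fitted scikit-learn classifier, and `features_from_examples` is a hypothetical helper you would write to convert protos into the model’s feature matrix:

```python
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def custom_predict(examples_to_infer):
    # Convert Example protos into the feature matrix the model expects.
    X = features_from_examples(examples_to_infer)  # hypothetical helper
    return model.predict_proba(X)  # WIT expects per-class scores

config_builder = WitConfigBuilder(examples).set_custom_predict_fn(custom_predict)
WitWidget(config_builder, height=800)
```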
TensorFlow Fairness Indicators
Advantages:
- Enables easy computation of commonly-identified fairness metrics for binary and multiclass classifiers (sketched below)
Drawbacks:
- Works best with TensorFlow, although some features are supported with other model types
- Does not provide mitigation algorithms
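A minimal sketch of computing Fairness Indicators through TensorFlow Model Analysis, assuming a model exported to `model_path` and evaluation data in TFRecord format at `data_path` (both placeholders), sliced by a hypothetical `gender` feature:

```python
import tensorflow_model_analysis as tfma
from google.protobuf import text_format

eval_config = text_format.Parse("""
  model_specs { label_key: "label" }
  metrics_specs {
    metrics {
      class_name: "FairnessIndicators"
      config: '{"thresholds": [0.25, 0.5, 0.75]}'
    }
  }
  slicing_specs {}                         # overall metrics
  slicing_specs { feature_keys: "gender" } # per-group slices
""", tfma.EvalConfig())

eval_result = tfma.run_model_analysis(
    eval_shared_model=tfma.default_eval_shared_model(
        eval_saved_model_path=model_path, eval_config=eval_config),
    eval_config=eval_config,
    data_location=data_path)
```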
REVISE: REvealing VIsual biaSEs
Advantages:
- Automatically detects possible forms of sampling bias in a visual dataset
- Includes mitigation recommendations
Drawbacks:
- The toolkit statistically analyzes datasets but doesn’t provide definitive fairness judgments based on the analysis
Audit AI
Advantages:
- A collection of easy-to-use bias testing techniques
Drawbacks:
- Reports focus on auditing, and are less helpful for designing mitigation
- Users must set the thresholds of statistical and practical significance, which depend on the field and use-case (illustrated below)
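To make those thresholds concrete – this is a generic illustration rather than Audit AI’s own API – a two-proportion z-test covers statistical significance, and the 4/5ths rule is a common choice for practical significance:

```python
from math import sqrt
from scipy.stats import norm

def selection_bias_test(sel_a, n_a, sel_b, n_b, alpha=0.05, min_ratio=0.8):
    """Compare selection rates of two groups; both thresholds are user-chosen."""
    p_a, p_b = sel_a / n_a, sel_b / n_b       # group selection rates
    p = (sel_a + sel_b) / (n_a + n_b)         # pooled selection rate
    z = (p_a - p_b) / sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    p_value = 2 * (1 - norm.cdf(abs(z)))      # two-sided z-test
    ratio = min(p_a, p_b) / max(p_a, p_b)     # 4/5ths-rule ratio
    return {"p_value": p_value, "significant": p_value < alpha,
            "ratio": ratio, "passes_four_fifths": ratio >= min_ratio}

# Example: 50/200 selected in group A vs. 90/220 in group B.
print(selection_bias_test(50, 200, 90, 220))
```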
Conclusion
The field of applied AI fairness is rife with deep technical, ethical, and strategic challenges, so it is no surprise that even partially automating the work of AI fairness experts is challenging. Still, we believe that even in the short term there is room for innovation in combining the best features of exploratory and exhaustive toolkits.