SafeAR

Safe Algorithmic Recourse by Risk-Aware Policies

Internship @ JPMorgan Chase AI Research

| Objective

With the growing use of machine learning (ML) models in critical domains such as finance and healthcare, the need to offer recourse to those adversely affected by the decisions of ML models has become more important: individuals ought to be provided with recommendations on actions to take to improve their situation and thus receive a favorable decision.


Sequential algorithmic recourse recommends a series of changes. However, the uncertainty in executing feature changes and the risk of incurring higher-than-average costs have largely not been considered. It is undesirable if a recourse could, with some probability, leave an individual in a worse situation from which recovery requires an extremely high cost. It is therefore essential to incorporate risk when computing and evaluating recourse.
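Concretely, sequential recourse under uncertain feature changes can be viewed as a stochastic decision process: each action has a cost, a success probability, and a (possibly worse) failure state. The sketch below is a minimal illustration of that structure, not the paper's formulation; the state names and numbers are hypothetical (they mirror the Change-Job case in the example that follows).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    probability: float  # chance of this outcome when the action is attempted
    next_state: str     # resulting state; a failure may land somewhere worse

# Each (state, action) pair has an attempt cost and a distribution over
# outcomes. A risk-aware planner reasons about the full distribution of
# total cost this induces, not just its expectation.
ATTEMPT_COST = {("employed, low savings", "Change-Job"): 1.0}
TRANSITIONS = {
    ("employed, low savings", "Change-Job"): [
        Outcome(0.9, "employed, high savings"),  # desired feature change
        Outcome(0.1, "unemployed"),              # worse state, costly to recover from
    ],
}
```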


We refer to the recourse computed with such risk considerations as Safe Algorithmic Recourse (SafeAR). The objective is to empower people to choose a recourse based on their risk tolerance. 

| Motivating Example

Consider the following motivating example on loan approvals (top right). A company uses a trained black-box ML model to determine loan approvals. The model uses a set of features of the loan applicant (housing, job, savings, age, and education) and initially rejects the applicant. In this recourse scenario, action costs are measured in discrete time units, each action taken has a probability of success, and a failure could transition the applicant into a less favorable state.


Three recourse policies could be given to the applicant:

Policy A requires the applicant to Own-a-House (one feature change). This policy ignores the uncertainty in the applicant's ability to purchase a house within 1 month (the time cost): each month there is a 70% chance that the applicant remains in the same state, so the expected time cost is 1/0.3 ≈ 3.3 months, much more than 1.

Policy B optimizes only the expected cost. It requires the applicant to Change-Job, which helps increase their savings and reaches the desired outcome with 90% probability. However, there is a small chance (10%) that this action results in losing their current job and ending up unemployed, a worse situation from which the cost to recover would be high. Even accounting for this, the expected total cost is still lower than Policy A's.

Policy C provides a safer option, in which failures do not lead to a worse situation. Its actions are Improve-Education-in-Part-Time and then Increase-Savings. The risk of higher costs under this policy is lower than under Policy B, but its expected cost is higher. A short simulation below makes these cost and risk trade-offs concrete.
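The numbers above pin down most of each policy's cost distribution, but not all of it: the example does not state the recovery cost after job loss under Policy B, the time cost of the Change-Job attempt, or Policy C's per-step success rates. The Monte Carlo sketch below fills those gaps with clearly labeled assumptions; it is an illustration, not the paper's method.

```python
import random

def policy_a(max_months=60):
    """Policy A: attempt Own-a-House each month; 30% chance of success
    (70% chance of staying in the same state), per the example."""
    cost = 0
    for _ in range(max_months):
        cost += 1
        if random.random() < 0.30:
            break
    return cost

def policy_b(recovery_cost=12):
    """Policy B: Change-Job succeeds with 90% probability; with 10%
    probability the applicant ends up unemployed. The 1-unit attempt cost
    and the 12-unit recovery cost are ASSUMPTIONS; the example only says
    recovery is expensive."""
    cost = 1
    if random.random() >= 0.90:
        cost += recovery_cost
    return cost

def policy_c():
    """Policy C: Improve-Education-in-Part-Time, then Increase-Savings,
    succeeding within 3 time units with ~98% probability. Failures do not
    lead to a worse state, so we ASSUME a failed step is simply retried;
    the per-step numbers (2 + 1 units, 99% each) are chosen to match the
    stated 98% (0.99 * 0.99 ~= 0.98)."""
    cost = 0
    for step_cost, p_success in [(2, 0.99), (1, 0.99)]:
        while True:
            cost += step_cost
            if random.random() < p_success:
                break
    return cost

def summarize(name, simulate, n=100_000):
    costs = sorted(simulate() for _ in range(n))
    mean = sum(costs) / n
    p95 = costs[int(0.95 * n)]  # a simple tail-risk proxy
    print(f"Policy {name}: expected cost ~ {mean:.2f}, 95th percentile ~ {p95}")

random.seed(0)
for name, simulate in [("A", policy_a), ("B", policy_b), ("C", policy_c)]:
    summarize(name, simulate)
```

Under these assumptions, Policy B has the lowest expected cost (about 2.2 time units) but the heaviest tail (a 10% chance of roughly 13), Policy A's expected cost is about 3.3 with a long retry tail, and Policy C pays a slightly higher expected cost than B (about 3.0) for a far lighter tail, matching the narrative above.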

| Visualizing Policy Risk

The figure (left) illustrates the probabilities of possible outcome trajectories and their associated costs for this example. In this visualization paradigm, the probability of an outcome trajectory is encoded by line thickness, and cost runs along the x-axis.
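The figure itself is not reproduced here, but the encoding is easy to sketch: each trajectory becomes a line whose thickness is proportional to its probability, plotted against cumulative cost. The snippet below is one minimal, hypothetical rendering with matplotlib; the trajectory data is illustrative, not taken from the figure.

```python
import matplotlib.pyplot as plt

# Hypothetical outcome trajectories for one policy:
# (probability, cumulative cost after each step). Illustrative numbers only.
trajectories = [
    (0.98, [0, 2, 3]),     # the common case: done after 3 time units
    (0.02, [0, 2, 4, 5]),  # a rare retry along the way
]

fig, ax = plt.subplots(figsize=(6, 2))
for row, (prob, costs) in enumerate(trajectories):
    # Line thickness encodes the trajectory's probability; x-axis is cost.
    ax.plot(costs, [row] * len(costs), linewidth=1 + 25 * prob,
            solid_capstyle="round", alpha=0.8, label=f"p = {prob:.2f}")
ax.set_xlabel("cumulative cost (time units)")
ax.set_yticks([])
ax.legend(loc="center right")
plt.tight_layout()
plt.show()
```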


With the risk-averse Policy C, the applicant receives the desired outcome within 3 time-steps (cost) with 98% probability, and the risk of it taking more than 3 time-steps is much lower than under Policy A or B, even though the expected cost is higher than Policy B's.
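The 98% figure for Policy C is stated above; the corresponding numbers for Policies A and B follow from the probabilities in the example (with Policy B's single attempt assumed to take one time unit). A quick check:

```python
# Probability of reaching the desired outcome within 3 time units.
p_a = 1 - 0.70 ** 3  # Policy A: three monthly attempts at 30% each -> ~0.657
p_b = 0.90           # Policy B: one attempt, assumed to take 1 time unit
p_c = 0.98           # Policy C: stated directly in the example
for name, p in [("A", p_a), ("B", p_b), ("C", p_c)]:
    print(f"Policy {name}: P(done within 3 time units) = {p:.3f}")
```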


Computing policies that are diverse in terms of risk, and surfacing this risk information to empower the affected individual, is the motivation behind SafeAR.

H. Wu, S. Sharma, S. Patra, and S. Gopalakrishnan, "SafeAR: Safe Algorithmic Recourse by Risk-Aware Policies," The 38th Annual AAAI Conference on Artificial Intelligence (AAAI), 2024, Vancouver, Canada. [Oral Presentation 2%] [Preprint Version]