August 31, 2022

Group vs. Individual Fairness in AI


Continuing our discussion about fairness in machine learning, let’s address group and individual fairness.

In fair machine learning research, group and individual fairness are defined at distinct levels. While both are considered important, they can sometimes come into conflict.

💡 At an individual level, fairness can be defined as similar individuals being treated similarly.

💡 At a group level, a fair outcome requires parity between different protected groups, such as those defined by gender or race.

These measures can come into conflict when, in an attempt to satisfy group fairness, individuals who are similar with respect to the classification task receive different outcomes.
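To make the individual-level definition concrete, here is a minimal Python sketch: it flags pairs of candidates who look alike on task-relevant features yet receive different decisions. The candidate data, the distance function, and the similarity threshold are illustrative assumptions, not part of any particular system.

```python
from itertools import combinations

# Hypothetical candidates: task-relevant features plus the model's decision.
candidates = [
    {"id": "A", "experience": 5, "test_score": 88, "hired": 1},
    {"id": "B", "experience": 5, "test_score": 86, "hired": 0},
    {"id": "C", "experience": 1, "test_score": 60, "hired": 0},
]

def distance(x, y):
    # Simple, made-up similarity measure over task-relevant features.
    return abs(x["experience"] - y["experience"]) + abs(x["test_score"] - y["test_score"]) / 10

def individual_fairness_violations(people, max_distance=1.0):
    """Return pairs that are similar for the task but got different outcomes."""
    violations = []
    for x, y in combinations(people, 2):
        if distance(x, y) <= max_distance and x["hired"] != y["hired"]:
            violations.append((x["id"], y["id"]))
    return violations

print(individual_fairness_violations(candidates))  # [('A', 'B')]
```

Candidates A and B are nearly identical on the task-relevant features, so the different outcomes they receive would count as an individual fairness violation.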

Demographic parity

Let’s consider an employer who wishes to have similar job acceptance rates for male and female candidates. For example, if 50% of male candidates get the job, then 50% of female candidates should get the job as well. We call this demographic parity.

At first glance, statistical parity between the groups is maintained and the number of female and male hires is balanced. However, from an individual perspective, the outcome may not be fair if the machine learning model gives positive outcomes to candidates from the protected group just to ‘make up the numbers’, even though they are not qualified.

In short, group fairness measures can lead the AI model to favor less qualified individuals from the underrepresented group (be it the advantaged or the disadvantaged one) over better qualified ones. In this case, demographic parity is maintained, but the accuracy of the prediction is not.
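As a quick illustration, the sketch below computes the acceptance rate per group from a toy set of hiring decisions; demographic parity simply asks that these rates be roughly equal. The data and group labels are invented for the example.

```python
from collections import defaultdict

# Hypothetical hiring decisions: (group, hired) pairs.
decisions = [
    ("male", 1), ("male", 0), ("male", 1), ("male", 1),
    ("female", 1), ("female", 0), ("female", 1), ("female", 0),
]

def acceptance_rates(records):
    """Positive-outcome rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        positives[group] += hired
    return {g: positives[g] / totals[g] for g in totals}

rates = acceptance_rates(decisions)
print(rates)                                       # {'male': 0.75, 'female': 0.5}
print(max(rates.values()) - min(rates.values()))   # demographic parity gap: 0.25
```

Note that the gap says nothing about whether the hired candidates were qualified, which is exactly the shortcoming discussed above.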

Demographic parity vs. performance-based metrics

A potential way of managing the shortcomings of demographic parity is to employ performance-based metrics such as equality of opportunity and equality of odds when implementing a fair machine learning strategy.

Equality of Opportunity

Equality of Opportunity states that each protected group should receive positive outcomes at equal rates, assuming that the people in each group are qualified. In other words, it ensures that people who are equally qualified for an opportunity are equally likely to receive the same outcome.

By using Equality of Opportunity as a definition of fairness in machine learning, we would ensure that the male and female candidates in our example receive similar acceptance rates for the job, provided they are qualified for it.
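In code, Equality of Opportunity compares the true positive rate, the share of genuinely qualified candidates who are hired, across groups. The records below (group, qualified, hired) are invented purely for illustration.

```python
# Hypothetical records: (group, qualified, hired).
records = [
    ("male", 1, 1), ("male", 1, 1), ("male", 1, 0), ("male", 0, 0),
    ("female", 1, 1), ("female", 1, 0), ("female", 1, 0), ("female", 0, 0),
]

def true_positive_rate(records, group):
    """Share of qualified candidates in the group who were hired."""
    outcomes = [hired for g, qualified, hired in records if g == group and qualified == 1]
    return sum(outcomes) / len(outcomes) if outcomes else None

for group in ("male", "female"):
    print(group, true_positive_rate(records, group))
# male ≈ 0.67, female ≈ 0.33 -> Equality of Opportunity is violated in this toy data
```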

Equality of Odds

The concept of Equality of Odds, or Equalized Odds, is even more restrictive: it requires not only that positive outcomes be identified correctly at equal rates across groups (as in Equality of Opportunity), but also that the AI model produce the same proportion of false positives across groups.

In short, by using Equalized Odds as a measure of fairness in machine learning, we would ensure that the probability of a qualified candidate being hired and the probability of an unqualified candidate not being hired would be the same for both the male and female groups.
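Extending the previous sketch, Equalized Odds asks for both rates to match across groups: the true positive rate for qualified candidates and the false positive rate for unqualified ones. Again, the data is purely illustrative.

```python
# Hypothetical records: (group, qualified, hired).
records = [
    ("male", 1, 1), ("male", 1, 1), ("male", 0, 1), ("male", 0, 0),
    ("female", 1, 1), ("female", 1, 0), ("female", 0, 0), ("female", 0, 0),
]

def group_rates(records, group):
    """Return (true positive rate, false positive rate) for one group."""
    tp = sum(1 for g, q, h in records if g == group and q == 1 and h == 1)
    qualified = sum(1 for g, q, h in records if g == group and q == 1)
    fp = sum(1 for g, q, h in records if g == group and q == 0 and h == 1)
    unqualified = sum(1 for g, q, h in records if g == group and q == 0)
    return tp / qualified, fp / unqualified

for group in ("male", "female"):
    tpr, fpr = group_rates(records, group)
    print(f"{group}: TPR={tpr:.2f}, FPR={fpr:.2f}")
# Equalized Odds holds only if both TPR and FPR are (approximately) equal across groups.
```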

Intersectional fairness

Another shortcoming of group fairness measures is that they are typically applied to only a limited number of protected groups. They do not prevent unfairness against those who sit at the intersection of multiple types of discrimination, for example, ‘disabled African-American females’.

As humans, we belong to different subgroups, and our identities overlap and intersect across multiple dimensions such as race, gender, and sexual orientation. Intersectional fairness builds on the concept of algorithmic fairness to get a more complete picture of the biases and stereotypes that might be encoded in machine learning models.

Yet another telling example comes from facial recognition, where AI models tend to perform better on men than on women. At the same time, they are also better at recognizing lighter skin tones than darker ones. Therefore, we can talk about the intersection of gender and race discrimination.

As such, we must ensure that machine learning fairness measures take into consideration all subgroups with different combinations of protected attributes.
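One practical way to look at intersectional fairness is to compute the same group metric over every combination of protected attributes, rather than over each attribute in isolation. The sketch below groups a toy dataset by (gender, race) pairs; the attributes and values are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical records: (gender, race, hired).
records = [
    ("female", "black", 0), ("female", "black", 0), ("female", "white", 1),
    ("male", "black", 1), ("male", "white", 1), ("male", "white", 0),
]

def intersectional_rates(records):
    """Positive-outcome rate for every combination of the protected attributes."""
    totals, positives = defaultdict(int), defaultdict(int)
    for gender, race, hired in records:
        key = (gender, race)
        totals[key] += 1
        positives[key] += hired
    return {key: positives[key] / totals[key] for key in totals}

for subgroup, rate in intersectional_rates(records).items():
    print(subgroup, rate)
# A subgroup such as ('female', 'black') may fare worse than either
# 'female' or 'black' considered on its own.
```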

Fairness through unawareness

Fairness through unawareness is the idea that withholding protected attributes from a machine learning prediction process will make the AI model fair. However, this is untrue. Academic research has repeatedly shown that algorithms can identify patterns in unexpected ways, by means of non-protected attributes that serve as ‘proxies’.

For example, someone’s ZIP code might be a strong indicator of race, since there are neighborhoods that are segregated.

Other proxies might be correlated with gender, like the age at which someone started programming. While this information may be relevant to an AI model that scores resumes for a job, it also reflects social stereotypes.

As such, not explicitly using protected attributes in a prediction model does not ensure machine learning fairness, as AI models have the capacity to identify patterns indirectly.
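A simple way to see why dropping a protected attribute is not enough is to check how well the remaining features can recover that attribute. The sketch below measures how strongly a made-up ZIP code feature aligns with a made-up race label; a value near 1.0 means the protected attribute can be inferred indirectly.

```python
from collections import Counter, defaultdict

# Hypothetical records: (zip_code, race). Neither is real data.
records = [
    ("10001", "white"), ("10001", "white"), ("10001", "black"),
    ("60620", "black"), ("60620", "black"), ("60620", "black"),
]

def proxy_strength(records):
    """How often the majority race within each ZIP code matches the actual race.
    1.0 means the ZIP code reveals race perfectly; values near the overall base
    rate mean it carries little extra signal."""
    by_zip = defaultdict(list)
    for zip_code, race in records:
        by_zip[zip_code].append(race)
    correct = 0
    for zip_code, race in records:
        majority = Counter(by_zip[zip_code]).most_common(1)[0][0]
        correct += majority == race
    return correct / len(records)

print(proxy_strength(records))  # ≈ 0.83 -> ZIP code is a strong proxy for race here
```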

Key takeaways

The issue of bias and unfairness in AI models has attracted a lot of attention in recent years, both from scientific communities and from governments across the globe. Having clear guidelines and ethical principles for the fair use of machine learning models is important. So is understanding the ways in which AI models operate to reach certain outcomes.

Measuring fairness should be a priority for every business and organization that employs machine learning in decisions that directly impact human lives. Having a clear insight into a model’s fairness risk and data biases is crucial.

At Lumenova AI, we propose an effective way of measuring algorithmic fairness at a glance, by analyzing metrics such as data impartiality, demographic parity, equality of opportunity, equality of odds, and predictive parity.

Moreover, we offer a unique framework for measuring intersectional fairness, by allowing users to easily select which protected attributes and clusters they would like to analyze for their AI model.

To learn more about our tool and how it can make your model fair, feel free to contact our team.

Frequently Asked Questions

What is the difference between group fairness and individual fairness in AI?

Group fairness ensures that AI models treat different demographic groups equitably, while individual fairness focuses on treating similar individuals similarly. For example, balancing job offers across genders reflects group fairness, while ensuring that equally qualified candidates receive the same opportunity regardless of their background reflects individual fairness. Balancing the two is crucial for addressing bias and fairness in machine learning.

What is demographic parity, and what are its limitations?

Demographic parity requires an AI model to maintain similar positive-outcome rates across different demographic groups. While it can promote algorithmic fairness, it may also lead to unintended consequences, such as favoring less qualified individuals to maintain statistical balance. This is why many organizations adopt performance-based fairness metrics, like equality of opportunity and equalized odds, to ensure fairness without compromising accuracy.

What is intersectional fairness?

Intersectional fairness ensures that AI models account for individuals who belong to multiple underrepresented groups, addressing more complex forms of bias that traditional fairness metrics may overlook, such as discrimination against ‘disabled African-American women’. Organizations should therefore analyze overlapping demographic attributes rather than treating them in isolation in order to effectively address bias and fairness in AI.

What is fairness through unawareness, and why is it insufficient?

The concept of fairness through unawareness assumes that ignoring demographic attributes like race or gender in AI models will ensure fairness. In practice, models can still use proxy variables (e.g., ZIP code or education history) that correlate with these attributes, unintentionally reinforcing bias. Effective fairness strategies must therefore involve algorithmic bias and fairness assessments that go beyond simply excluding data.

How can businesses improve fairness in AI?

Businesses can enhance fairness in AI by implementing fairness-aware algorithms that detect and mitigate bias, by using fairness metrics like demographic parity, equality of opportunity, and equalized odds, and by conducting regular AI bias and fairness audits to analyze model outcomes. They can also leverage Lumenova AI’s fairness evaluation framework, which provides insight into data impartiality and intersectional fairness.

Make your AI ethical, transparent, and compliant - with Lumenova AI

Book your demo