
The Winner Takes it All: Geographic Imbalance and Provider (Un)fairness in Educational Recommender Systems

Most of the courses on MOOC platforms are offered by American teachers, which leads to the over-recommendation of these courses at the expense of courses produced in other countries. A re-ranking that accounts for the country of production of a course, besides its relevance for a user, is an effective way to introduce fairness in this context.

In a SIGIR 2021 paper with Elizabeth Gómez, Carlos Shui Zhang, Maria Salamó, and Mirko Marras, we characterize and mitigate the unfairness generated by recommender systems towards those who offer MOOCs outside the USA. Since the vast majority of courses are offered by American teachers, and these are also the courses with which users interact the most, the courses of teachers outside the USA are under-recommended by state-of-the-art models.

Characterizing provider unfairness

To assess provider unfairness, we consider the COCO dataset, which includes the interactions of learners with online courses. To generate the recommendations, we consider seven models, namely MostPop, Random Guess, UserKNN, ItemKNN, BPR, BiasedMF, and SVD++.

The main outcomes of our characterization are the following (please refer to the original paper for the detailed results):

  • There is a strong geographic imbalance in the representation of each group, in terms of offered items. The most represented group usually attracts more ratings, thus increasing the existing imbalance.
  • Geographic imbalance leads to disparate visibility and exposure to the advantage of the most represented group. Recommendation effectiveness is decoupled from equity of visibility and exposure, with BPR returning the best trade-off between the two properties in the course-based representation.
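As background for these two notions, a provider group's visibility is commonly measured as its share of recommendation slots, and its exposure as its share of rank-discounted attention. A minimal sketch of both metrics follows; the function name, toy data, and logarithmic rank discount are illustrative assumptions in line with standard fairness metrics, not taken from the paper:

```python
import math
from collections import defaultdict

def visibility_and_exposure(rec_lists, item_group):
    """Return each provider group's share of recommendation slots
    (visibility) and of rank-discounted attention (exposure)."""
    vis, exp = defaultdict(float), defaultdict(float)
    for recs in rec_lists:  # one ranked list per user
        for rank, item in enumerate(recs, start=1):
            g = item_group[item]
            vis[g] += 1.0                          # every slot counts equally
            exp[g] += 1.0 / math.log2(rank + 1)    # top ranks count more
    total_vis, total_exp = sum(vis.values()), sum(exp.values())
    return ({g: v / total_vis for g, v in vis.items()},
            {g: e / total_exp for g, e in exp.items()})

# Toy example (hypothetical data): courses tagged by provider country
groups = {"c1": "USA", "c2": "USA", "c3": "ES", "c4": "IT"}
recs = [["c1", "c2", "c3"], ["c2", "c1", "c4"]]
vis, exp = visibility_and_exposure(recs, groups)
```

In the toy run, the US group holds two thirds of the slots, but its exposure share is even higher, because its courses sit at the top ranks; this is why visibility and exposure must be audited separately.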

Unfairness mitigation

The idea behind our mitigation algorithm is to move up, in the recommendation list, the courses that cause the minimum loss in prediction for all the learners. Our paper contains the details of the approach and its pseudocode.
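Purely as an illustration of this idea (a simplified per-user sketch, not the paper's exact algorithm; all names, targets, and numbers are hypothetical), a greedy re-ranker in this spirit could promote a course from an under-served provider group only when that group is below its target share, choosing the candidate whose predicted relevance is highest, i.e. the swap with the smallest prediction loss:

```python
from collections import defaultdict

def fair_rerank(scores, item_group, target_share, k):
    """Greedy sketch: fill each slot with the highest-scoring remaining
    course, unless some provider group has fallen below its target share
    of slots; in that case, promote that group's best remaining course
    (the swap costing the smallest drop in predicted relevance)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    result, counts = [], defaultdict(int)
    for slot in range(k):
        # how far each group is below its target share at this depth
        deficits = {g: target_share[g] * (slot + 1) - counts[g]
                    for g in target_share}
        g_star = max(deficits, key=deficits.get)
        pick = None
        if deficits[g_star] > 0:
            pick = next((i for i in ranked
                         if i not in result and item_group[i] == g_star),
                        None)
        if pick is None:  # deficit group exhausted: fall back to relevance
            pick = next(i for i in ranked if i not in result)
        result.append(pick)
        counts[item_group[pick]] += 1
    return result

# Toy run (hypothetical data): without the constraint, the two
# US courses would occupy the top-2 slots.
scores = {"c1": 0.9, "c2": 0.8, "c3": 0.7, "c4": 0.6}
groups = {"c1": "USA", "c2": "USA", "c3": "ES", "c4": "IT"}
reranked = fair_rerank(scores, groups,
                       {"USA": 0.5, "ES": 0.25, "IT": 0.25}, k=4)
```

In the toy run the Spanish and Italian courses are moved up next to the top US course, so the highest-scoring course keeps its slot while each group reaches its target share within the list.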

After running our mitigation, the main messages are the following (you can refer to the original study for the detailed results):

  • When providing a re-ranking based on minimal predicted loss, effectiveness remains stable, but disparate visibility and disparate exposure are mitigated.
  • Interventions to adjust both visibility and exposure are needed to provide equity; if we mitigate only having a visibility goal, disparate exposure still occurs.
