
Regulating Group Exposure for Item Providers in Recommendation

Platform owners can seek to guarantee certain levels of exposure to providers (e.g., to bring equity or to push the sales of new providers). When certain groups of providers are given their target exposure, beyond-accuracy objectives gain significantly, with a negligible impact on recommendation utility.

In a SIGIR 2022 paper, with Mirko Marras, Guilherme Ramos, and Gianni Fenu, we consider a scenario where providers are grouped based on a common characteristic and certain provider groups are underrepresented in the catalog and, consequently, in the recommendations. We then envision that platform owners seek to guarantee a certain degree of exposure to all provider groups, including minority groups, while recommending.

To achieve this goal, we propose a post-processing approach that ensures a given degree of exposure to provider groups by re-ranking recommendations with a maximum marginal relevance criterion that minimizes the Hellinger distance between the target and the achieved exposure distributions.

Providers’ exposure framework

Group disparity formulation. We formalize disparate exposure as the distance between the degree of exposure received by provider groups in the recommendations and the degree of exposure targeted for each of them by platform owners, according to a given recommendation policy. The higher the dissimilarity, the higher the group's disparate exposure. To compute the distance between the exposure obtained by a provider group's items and the target exposure, we use the Hellinger distance, which is both symmetric and bounded in the range [0,1].
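To make the disparity measure concrete, here is a minimal sketch of the Hellinger distance between an achieved and a target group-exposure distribution. Representing the distributions as normalized arrays (one entry per provider group) and the example values are illustrative choices, not taken from the paper.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions p and q.

    Both are arrays of non-negative values summing to 1 (one entry per
    provider group); the result is symmetric and bounded in [0, 1].
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Illustrative example: achieved exposure of two provider groups vs. a target.
achieved = [0.9, 0.1]   # exposure share obtained in the recommendations
target = [0.7, 0.3]     # share pursued by the platform's policy
print(hellinger(achieved, target))  # ~0.18: the group disparity for this list
```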

Disparity control procedure. To achieve the degree of exposure pursued by platform owners for each provider group, we introduce a recommendation procedure that seeks to minimize our support metric (the Hellinger distance). Since it is generally hard to embed this balancing step inside a recommender system, we instead balance the system's output by re-ranking the recommended list it returns, using a maximum marginal relevance approach.
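Below is a simplified sketch of how such a greedy, maximum-marginal-relevance-style re-ranking could look, reusing the hellinger helper from the previous sketch. The lam trade-off weight, the data structures, and the function names are assumptions made for illustration; the paper's exact objective may differ.

```python
import numpy as np

def rerank(candidates, scores, item_group, target, k, lam=0.5, n_groups=2):
    """Greedily re-rank a candidate list, trading off relevance against
    the Hellinger distance to a target group-exposure distribution.

    candidates: item ids ordered by the base recommender
    scores: dict item id -> relevance score (normalized to [0, 1])
    item_group: dict item id -> provider group index
    target: target exposure distribution over groups (sums to 1)
    lam: relevance vs. exposure-balancing weight (illustrative)
    """
    selected, counts = [], np.zeros(n_groups)
    remaining = list(candidates)
    for _ in range(k):
        best_item, best_val = None, -np.inf
        for item in remaining:
            new_counts = counts.copy()
            new_counts[item_group[item]] += 1
            achieved = new_counts / new_counts.sum()
            # MMR-style objective: relevance minus resulting group disparity
            val = lam * scores[item] - (1 - lam) * hellinger(achieved, target)
            if val > best_val:
                best_item, best_val = item, val
        selected.append(best_item)
        counts[item_group[best_item]] += 1
        remaining.remove(best_item)
    return selected
```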

Minority recommendation policies

The share of exposure given to provider groups might be regulated by law, when the minorities involved are legally-protected classes of providers, but it could also depend on the platform's business model. Our study in this paper focuses on four recommendation policies that platform owners could pursue while recommending (a sketch of how the corresponding target distributions could be derived follows the list):

  • The Cat policy aims to ensure that a provider group has an exposure proportional to its representation in the catalog. This policy follows a distributive norm based on equity among providers’ groups;
  • The Int policy aims to ensure that each provider group has an exposure proportional to its representation in the interactions. This policy aims to ensure that no distortion in recommendations is added with respect to the degree of interaction with each group;
  • The Par policy aims to ensure that all provider groups have the same degree of exposure. This policy follows an egalitarian norm, granting equal visibility to each group regardless of its representation in the catalog or in the interactions;
  • The Per policy aims to ensure that each provider group has an exposure proportional to its representation in the profile of the current user. This policy subsumes the Int policy, but it calibrates the share of recommendations for a user according to the individual user's preferences, not to the global degree of interaction with a group.
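As a rough illustration of the policies above, the target exposure distribution for each one could be derived from simple counts, as in the sketch below. The function name, arguments, and data layout are hypothetical and only meant to convey the idea.

```python
import numpy as np

def target_distribution(policy, catalog_groups, interaction_groups,
                        user_profile_groups, n_groups=2):
    """Derive a target exposure distribution over provider groups.

    Each *_groups argument is an array of group indices: one per item in the
    catalog, per interaction, or per item in the current user's profile.
    """
    if policy == "Cat":    # proportional to catalog representation
        counts = np.bincount(catalog_groups, minlength=n_groups)
    elif policy == "Int":  # proportional to the share of interactions
        counts = np.bincount(interaction_groups, minlength=n_groups)
    elif policy == "Par":  # equal exposure for every group
        counts = np.ones(n_groups)
    elif policy == "Per":  # proportional to the current user's profile
        counts = np.bincount(user_profile_groups, minlength=n_groups)
    else:
        raise ValueError(f"unknown policy: {policy}")
    return counts / counts.sum()
```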

Experimental evaluation

Our case study in this paper assumes that providers are grouped based on their gender. Hence, we used two public datasets (namely, MovieLens-1M and COCO) that, to the best of our knowledge, are among the few including providers' gender. We applied our post-processing to the recommendations produced by the BPR algorithm and compared our approach with other state-of-the-art baselines.

Our results show some interesting trends, summarized in what follows:

  • Not all policies led to a loss in NDCG. In ML-1M, under the Int and the Per policies, no difference was measured with respect to the baseline BPR. Conversely, Cat and Par showed a small decrease in NDCG (≤ 0.02 points).
  • Considering the exposure given to the minority group, our approach reached a group exposure closer to the target of each policy, often with a negligible loss in NDCG.
  • Our approach better achieves beyond-accuracy objectives at the recommended-list level (category diversity and item novelty), whereas it suffers a small loss in beyond-accuracy objectives at the global level (catalog coverage).