In personalized education platforms, explainable recommendation is often pursued by transferring knowledge-graph path-reasoning methods from other domains. However, differences in educational data and evaluation practices can make these transfers misaligned, leaving it unclear which methods remain reliable and why. Knowledge-graph reasoning can enable transparent, structure-aware personalization in this setting by producing recommendation paths …
Category: Algorithmic bias
GNNFairViz: Visual analysis for fairness in graph neural networks
Graph neural networks are increasingly used to make predictions on relational data in settings such as social and financial networks. Yet, assessing whether these models treat demographic groups comparably is difficult because bias can arise not only from node attributes but also from the graph structure that drives message passing. By introducing a model-agnostic visual …
AMBAR: A dataset for Assessing Multiple Beyond-Accuracy Recommenders
Recommender systems are a key tool for personalization in today’s digital age. They help us discover new music, books, or movies by predicting what we might like based on past interactions. But as recommender systems evolve, researchers and practitioners recognize that traditional metrics like accuracy alone aren’t enough. Factors like fairness, diversity, and user satisfaction …
Rows or Columns? Minimizing Presentation Bias When Comparing Multiple Recommender Systems
Under presentation bias, the attention that users pay to the items in a recommendation list changes, affecting both the items' chances of being considered and the effectiveness of a model. When comparing different layouts through which recommendations are presented, presentation bias impacts users' clicking behavior (low-level feedback), but not so much the perceived performance of a …
Bias characterization, assessment, and mitigation in location-based recommender systems
Location-based recommender systems (LBRSs) provide suggestions for Points of Interest (POIs) in location-based social networks. However, different forms of bias can be characterized, associated with polarized interactions of users with the POIs. Post-processing and hybrid mitigation approaches can help alleviate the impact of these biases. In a study, published in the Data Mining and …
Robust reputation independence in ranking systems for multiple sensitive attributes
Ranking systems that account for the reputation of users can be biased against certain demographic groups, especially when multiple sensitive attributes (e.g., gender and age) are considered. Providing guarantees of reputation independence can lead to unbiased and effective rankings that are, moreover, robust to attacks. In a study, published by the Machine Learning …
Regulating Group Exposure for Item Providers in Recommendation
Platform owners may seek to guarantee certain levels of exposure to providers (e.g., to bring equity or to push the sales of new providers). By granting certain groups of providers their target exposure, beyond-accuracy objectives see significant gains with a negligible impact on recommendation utility. In a SIGIR 2022 paper, with Mirko Marras, Guilherme Ramos, and …
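The excerpt above does not detail the regulation mechanism; a minimal sketch of the general idea — post-processing a relevance-ordered list so that each provider group fills a target share of the top-k slots — could look like the following. The function name, the quota scheme, and the greedy filling rule are my own illustrative assumptions, not the paper's method:

```python
def regulate_exposure(ranked_items, provider_group, targets, k=10):
    """Toy post-processing step: build a top-k list that respects per-group
    exposure targets (fraction of the k slots assigned to each provider
    group), filling each slot with the most relevant still-eligible item.

    ranked_items:   item ids, assumed already sorted by relevance
    provider_group: item id -> provider-group label
    targets:        group label -> desired fraction of the k slots
    """
    quota = {g: round(t * k) for g, t in targets.items()}
    result = []
    for item in ranked_items:
        g = provider_group[item]
        if quota.get(g, 0) > 0 and len(result) < k:
            result.append(item)
            quota[g] -= 1
    return result

# Six items from two provider groups; each group gets half of the 4 slots.
ranked = ["a", "b", "c", "d", "e", "f"]
groups = {"a": "est", "b": "est", "c": "est",
          "d": "new", "e": "new", "f": "new"}
print(regulate_exposure(ranked, groups, {"est": 0.5, "new": 0.5}, k=4))
```

In this toy version, item "c" is skipped once the "est" quota is spent, making room for items from the "new" group despite their lower relevance.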
Evaluating the Prediction Bias Induced by Label Imbalance in Multi-label Classification
Prediction bias is a well-known problem in classification algorithms, which tend to be skewed towards the more represented classes. This phenomenon is even more pronounced in multi-label scenarios, where the number of underrepresented classes is usually larger. In light of this, we present a novel measure that aims to assess the bias induced by label imbalance …
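The measure itself is not given in this excerpt; a minimal sketch of the underlying intuition — comparing how often each label is predicted against how often it actually occurs — might look like this. The function name and the exact formula are assumptions of mine, not the paper's measure:

```python
import numpy as np

def label_imbalance_bias(y_true, y_pred):
    """Toy per-label bias score: the gap between a label's prediction rate
    and its true frequency, normalized by that frequency. Positive values
    indicate over-prediction, negative values under-prediction."""
    y_true = np.asarray(y_true, dtype=float)  # shape (n_samples, n_labels)
    y_pred = np.asarray(y_pred, dtype=float)
    true_freq = y_true.mean(axis=0)
    pred_freq = y_pred.mean(axis=0)
    eps = 1e-12  # avoid division by zero for labels absent from the ground truth
    return (pred_freq - true_freq) / (true_freq + eps)

# Two labels: the first is common, the second rare and never predicted.
y_true = [[1, 0], [1, 1], [1, 0], [0, 1]]
y_pred = [[1, 0], [1, 0], [1, 0], [1, 0]]
print(label_imbalance_bias(y_true, y_pred))  # rare label scores -1 (fully under-predicted)
```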
Reputation Equity in Ranking Systems
Reputation-based ranking systems can be biased with respect to the sensitive attributes of users, meaning that certain demographic groups receive systematically lower reputation scores. However, unbiasing the reputation scores with respect to one sensitive attribute does not remove the bias associated with the others. For this reason, reputation scores should be unbiased independently of any sensitive attribute …
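To make the idea concrete, here is a toy sketch of one way to unbias reputation scores across several sensitive attributes at once: shifting each user's score so that every intersectional group (e.g., each gender-age combination) ends up with the same mean. The mean-shift scheme and the function name are illustrative assumptions, not the actual method proposed in the paper:

```python
from collections import defaultdict

def debias_reputation(scores, groups):
    """Toy debiasing: shift each user's reputation so that every
    intersectional group has the same mean as the global mean.

    scores: user -> reputation score
    groups: user -> tuple of sensitive-attribute values, e.g. (gender, age)
    """
    global_mean = sum(scores.values()) / len(scores)
    by_group = defaultdict(list)
    for user, g in groups.items():
        by_group[g].append(scores[user])
    group_mean = {g: sum(v) / len(v) for g, v in by_group.items()}
    return {u: s - group_mean[groups[u]] + global_mean
            for u, s in scores.items()}

# One group starts with a much higher average reputation than the other.
scores = {"u1": 0.9, "u2": 0.7, "u3": 0.4, "u4": 0.2}
groups = {"u1": ("M", "young"), "u2": ("M", "young"),
          "u3": ("F", "old"), "u4": ("F", "old")}
print(debias_reputation(scores, groups))  # both group means become 0.55
```

Because the shift is applied per intersectional group, equalizing the means for one attribute cannot reintroduce a gap for another, which mirrors the motivation stated above.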
Disparate Impact in Item Recommendation: a Case of Geographic Imbalance
Data imbalances related to the country of production of an item lead to the under-recommendation of items produced in the smaller (less represented) countries. Re-ranking the recommendation lists by balancing item relevance with the promotion of items produced in smaller countries can introduce equity in terms of visibility and exposure, without affecting recommendation effectiveness. In …
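As a rough illustration of the re-ranking idea described above, the sketch below adds a small score boost to items produced in underrepresented countries before sorting. The function name, the additive formulation, and the boost value are my own assumptions for illustration, not the paper's algorithm:

```python
def rerank_with_country_boost(items, minority_countries, boost=0.1):
    """Toy re-ranker: items is a list of (item_id, relevance, country).
    Items produced in underrepresented countries receive a small additive
    boost to their relevance before sorting, trading a little accuracy
    for visibility and exposure."""
    def score(item):
        _, relevance, country = item
        return relevance + (boost if country in minority_countries else 0.0)
    return sorted(items, key=score, reverse=True)

items = [("a", 0.90, "US"), ("b", 0.85, "NZ"), ("c", 0.80, "US")]
ranked = rerank_with_country_boost(items, minority_countries={"NZ"})
print([item_id for item_id, _, _ in ranked])  # "b" overtakes "a" thanks to the boost
```

A larger boost promotes minority-country items more aggressively; tuning it is exactly the relevance-vs-promotion balance the entry refers to.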