Algorithmic fairness · Explainability · Recommender systems

GNNUERS: Unfairness Explanation in Recommender Systems through Counterfactually-Perturbed Graphs

Counterfactual reasoning can be effectively employed to perturb user-item interactions and thereby identify and explain unfairness in GNN-based recommender systems, paving the way for more equitable and transparent recommendations. In this study, conducted in collaboration with Francesco Fabbri, Gianni Fenu, Mirko Marras, and Giacomo Medda, and published in ACM Transactions on Intelligent Systems and Technology, …

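The teaser above describes the core idea of counterfactually perturbing user-item interactions to surface unfairness. As a minimal, self-contained sketch of that idea (not the GNNUERS algorithm itself), the toy code below greedily searches for the single edge deletion that most reduces a crude utility gap between two demographic groups; the data, the degree-based utility proxy, and the greedy search are all illustrative assumptions.

```python
# Toy sketch: explain a group utility gap via counterfactual edge deletions.
# Data, utility proxy, and greedy search are illustrative, not the GNNUERS method.

# Bipartite interaction graph: user -> set of consumed items
interactions = {
    "u1": {"i1", "i2"}, "u2": {"i1", "i3"},   # demographic group A
    "u3": {"i3"},       "u4": {"i2"},         # demographic group B
}
group = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}

def utility(user, edges):
    # Stand-in for the recommendation quality a trained model would deliver to `user`;
    # here it is simply the user's degree in the (possibly perturbed) graph.
    return len(edges.get(user, set()))

def group_gap(edges):
    # Absolute difference in mean utility between the two demographic groups.
    means = {}
    for g in ("A", "B"):
        users = [u for u in edges if group[u] == g]
        means[g] = sum(utility(u, edges) for u in users) / len(users)
    return abs(means["A"] - means["B"])

# Greedy counterfactual search: which single edge deletion shrinks the gap the most?
base_gap = group_gap(interactions)
best = None
for user, user_items in interactions.items():
    for item in user_items:
        perturbed = {u: set(its) for u, its in interactions.items()}
        perturbed[user].discard(item)
        delta = base_gap - group_gap(perturbed)
        if best is None or delta > best[2]:
            best = (user, item, delta)

print(f"gap={base_gap:.2f}; deleting edge {best[:2]} reduces it by {best[2]:.2f}")
```

In the actual setting the perturbed graph is fed to a trained GNN and the utility comes from its recommendations; the sketch only mirrors the "delete edges, re-measure the disparity" loop.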
Algorithmic fairness · Recommender systems

Robustness in Fairness against Edge-level Perturbations in GNN-based Recommendation

Edge-level perturbations impact the robustness and fairness of graph-based recommender systems, revealing significant vulnerabilities and the need for more resilient design approaches. In our paper, which will be presented at the ECIR 2024 conference, we delve into the robustness of graph-based recommendation systems against edge-level perturbations. This work is a collaborative effort with Francesco Fabbri, …

Algorithmic fairness · Recommender systems

A Cost-Sensitive Meta-Learning Strategy for Fair Provider Exposure in Recommendation

Cost-sensitive meta-learning can effectively balance exposure fairness in recommendation systems without compromising their utility. In our paper, which will be presented at the ECIR 2024 conference, we introduce a novel cost-sensitive meta-learning technique aimed at enhancing fairness in recommendation systems. Our work addresses a critical issue in many online platforms – ensuring equitable exposure for …

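As a rough illustration of the cost-sensitive ingredient only (the meta-learning strategy itself is not reproduced here), the snippet below re-weights a pointwise training loss so that examples involving items from under-exposed provider groups incur a higher cost; the inverse-exposure weighting and all of the toy data are assumptions made for the sketch.

```python
# Toy sketch of cost-sensitive weighting for provider exposure (not the paper's
# meta-learning strategy): under-exposed provider groups get costlier mistakes.
import numpy as np

labels = np.array([1, 0, 1, 1, 0, 1], dtype=float)   # relevance of (user, item) pairs
scores = np.array([0.9, 0.2, 0.4, 0.8, 0.6, 0.3])     # model predictions in (0, 1)
provider_group = np.array([0, 0, 1, 0, 1, 1])         # 0 = majority, 1 = minority providers

# Current exposure share of each provider group in the recommendation lists
exposure = np.array([0.8, 0.2])

# Cost-sensitive weights: the less exposed a group is, the more its examples cost.
costs = (1.0 / exposure)[provider_group]
costs /= costs.mean()                                  # normalize around 1

# Weighted binary cross-entropy
eps = 1e-12
bce = -(labels * np.log(scores + eps) + (1 - labels) * np.log(1 - scores + eps))
print(f"cost-sensitive loss: {float(np.mean(costs * bce)):.4f}")
```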
Algorithmic fairness · Recommender systems

MOReGIn: Multi-Objective Recommendation at the Global and Individual Levels

It is possible to provide effective recommendations while simultaneously optimizing beyond-accuracy perspectives for individual users (e.g., genre calibration) and, globally, for the entire system (e.g., provider fairness). In a study with Elizabeth Gómez, David Contreras, and Maria Salamó, published in the proceedings of ECIR 2024, we present a model designed to meet both global …

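To make the two levels mentioned above concrete, the snippet below computes a per-user genre-calibration score (the KL divergence between the genre distribution of a user's history and that of their recommendations) and a global provider-exposure share; these are standard, simplified formulations used to illustrate the individual vs. global distinction, not the model proposed in the paper.

```python
# Toy metrics for the individual (genre calibration) and global (provider fairness) levels.
import math
from collections import Counter

def distribution(values):
    counts = Counter(values)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

def kl_divergence(p, q, smoothing=0.01):
    # KL(p || q) with light smoothing so categories missing from q do not blow up.
    keys = set(p) | set(q)
    p = {k: (1 - smoothing) * p.get(k, 0.0) + smoothing / len(keys) for k in keys}
    q = {k: (1 - smoothing) * q.get(k, 0.0) + smoothing / len(keys) for k in keys}
    return sum(p[k] * math.log(p[k] / q[k]) for k in keys)

# Individual level: how well do the recommended genres match the user's history?
history_genres = ["drama", "drama", "comedy", "thriller"]
recommended_genres = ["drama", "comedy", "comedy", "comedy"]
miscalibration = kl_divergence(distribution(history_genres), distribution(recommended_genres))
print(f"genre miscalibration (lower is better): {miscalibration:.3f}")

# Global level: exposure share of each provider group across all recommendation lists.
recommended_providers = ["major", "major", "indie", "major", "indie", "major"]
print(f"provider exposure shares: {distribution(recommended_providers)}")
```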
Algorithmic fairness · Explainability · Recommender systems

Counterfactual Graph Augmentation for Consumer Unfairness Mitigation in Recommender Systems

It is possible to effectively address consumer unfairness in recommender systems by using counterfactual explanations to augment the user-item interaction graph. This approach not only leads to fairer outcomes across different demographic groups but also maintains or improves the overall utility of the recommendations. In a study with Francesco Fabbri, Gianni Fenu, Mirko Marras, and …

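A minimal sketch of the augmentation step described above, assuming a counterfactual explainer has already suggested which user-item edges would benefit the disadvantaged group; the toy data, the degree-based balance check, and the omission of the retraining step are all simplifications.

```python
# Toy sketch: augment the interaction graph with edges suggested by a counterfactual
# explainer for users of the disadvantaged group, then compare group interaction levels.

interactions = {
    "u1": {"i1", "i2", "i3"}, "u2": {"i1", "i4"},   # advantaged group A
    "u3": {"i2"},             "u4": {"i3"},         # disadvantaged group B
}
group = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}

# Candidate edges that, in the real pipeline, a counterfactual explainer would produce.
counterfactual_edges = [("u3", "i1"), ("u4", "i4")]

def mean_degree(graph, g):
    users = [u for u in graph if group[u] == g]
    return sum(len(graph[u]) for u in users) / len(users)

augmented = {u: set(items) for u, items in interactions.items()}
for user, item in counterfactual_edges:
    augmented[user].add(item)

for name, graph in (("original", interactions), ("augmented", augmented)):
    print(name, {g: mean_degree(graph, g) for g in ("A", "B")})
# The recommender would then be retrained on the augmented graph.
```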
Algorithmic fairness · Recommender systems

Practical perspectives of consumer fairness in recommendation

Consumer fairness requires that recommendations are equally effective for the different demographic groups of users. Approaches that mitigate unfairness can be analyzed from multiple technical perspectives, and the different state-of-the-art strategies offer different properties. In a study published in the Information Processing and Management journal (Elsevier) and conducted with Gianni …

Algorithmic fairness · User profiling

Do Graph Neural Networks Build Fair User Models? Assessing Disparate Impact and Mistreatment in Behavioural User Profiling

User profiling approaches that model the interactions between users and items (behavioral user profiling) via Graph Neural Networks (GNNs) are unfair toward certain demographic groups. In a CIKM 2022 study, conducted with Erasmo Purificato and Ernesto William De Luca, we perform a beyond-accuracy analysis of state-of-the-art approaches to assess the presence of disparate impact …

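For reference, disparate impact and disparate mistreatment are commonly measured with the positive-prediction rate ratio across groups and with gaps in false-positive/false-negative rates; the snippet below computes these standard metrics on made-up predictions, and the paper's exact evaluation protocol may differ.

```python
# Toy computation of disparate impact and disparate mistreatment on binary predictions.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def rates(g):
    mask = group == g
    t, p = y_true[mask], y_pred[mask]
    positive_rate = p.mean()                                    # P(prediction = 1 | group)
    fpr = p[t == 0].mean() if (t == 0).any() else float("nan")  # false-positive rate
    fnr = (1 - p[t == 1]).mean() if (t == 1).any() else float("nan")  # false-negative rate
    return positive_rate, fpr, fnr

(pr_a, fpr_a, fnr_a), (pr_b, fpr_b, fnr_b) = rates("A"), rates("B")
print(f"disparate impact ratio : {pr_b / pr_a:.2f}")            # 1.0 means parity
print(f"FPR gap (mistreatment) : {abs(fpr_a - fpr_b):.2f}")
print(f"FNR gap (mistreatment) : {abs(fnr_a - fnr_b):.2f}")
```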
Algorithmic fairness · Recommender systems

Equality of Learning Opportunity via Individual Fairness in Personalized Recommendations

Formalizing the learning opportunities that the recommendation of online courses should offer helps define what fairness means for a platform. A post-processing approach that balances personalization and equality of the recommended opportunities then yields effective and fair recommendations. In a study published in the International Journal of Artificial Intelligence …

Algorithmic fairness · Explainability · Recommender systems

Post Processing Recommender Systems with Knowledge Graphs for Recency, Popularity, and Diversity of Explanations

Being able to assess explanation quality in recommender systems, and shaping recommendation lists that account for it, allows us to produce more effective recommendations. These recommendations also increase explanation quality according to the proposed properties, and do so fairly across demographic groups. In a SIGIR 2022 paper, with Giacomo Balloccu, Gianni Fenu, and Mirko Marras, …

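To ground the three explanation properties named in the title, the snippet below scores a toy list of knowledge-graph explanations for recency of the supporting interaction, popularity of the shared entity, and diversity of explanation types; the concrete formulas are simplified assumptions, not necessarily the definitions adopted in the paper.

```python
# Toy scoring of explanation quality along recency, popularity, and diversity.
# Each explanation links a recommended item to a past interaction through a KG relation.
explanations = [
    {"relation": "directed_by", "interaction_year": 2023, "entity_popularity": 0.9},
    {"relation": "starring",    "interaction_year": 2015, "entity_popularity": 0.4},
    {"relation": "directed_by", "interaction_year": 2021, "entity_popularity": 0.7},
]
current_year = 2024

# Recency: average closeness of the supporting interactions to the present (1 = this year).
recency = sum(1 / (1 + current_year - e["interaction_year"]) for e in explanations) / len(explanations)

# Popularity: average popularity of the shared knowledge-graph entities used to explain.
popularity = sum(e["entity_popularity"] for e in explanations) / len(explanations)

# Diversity: fraction of distinct explanation (relation) types in the list.
diversity = len({e["relation"] for e in explanations}) / len(explanations)

print(f"recency={recency:.2f}  popularity={popularity:.2f}  diversity={diversity:.2f}")
```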