Algorithmic fairness · Explainability · Recommender systems

GNNUERS: Unfairness Explanation in Recommender Systems through Counterfactually-Perturbed Graphs

Counterfactual reasoning can be effectively employed to perturb user-item interactions in order to identify and explain unfairness in GNN-based recommender systems, thus paving the way for more equitable and transparent recommendations. In this study, conducted in collaboration with Francesco Fabbri, Gianni Fenu, Mirko Marras, and Giacomo Medda and published in the ACM Transactions on Intelligent Systems and Technology, …

Continue Reading
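To give a feel for the idea behind the entry above, without reproducing GNNUERS itself, here is a toy Python sketch: it greedily deletes the user-item edges whose removal most reduces a simple utility gap between two demographic groups, and the deleted edges then act as a counterfactual explanation of the original unfairness. The interaction graph, the utility proxy, and the fairness measure are all hypothetical placeholders.

```python
# Minimal sketch (not the GNNUERS implementation): greedily search for a small
# set of user-item edges whose removal shrinks the utility gap between two
# demographic groups; the removed edges act as the counterfactual explanation.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 20, 30
interactions = (rng.random((n_users, n_items)) < 0.2).astype(float)  # bipartite adjacency
group = rng.integers(0, 2, size=n_users)                             # 0/1 demographic label

def utility(adj):
    # Toy proxy for per-user recommendation utility: score items by their
    # popularity in the current graph and keep each user's top-5 scores.
    popularity = adj.sum(axis=0)
    scores = adj * popularity            # only items the user is connected to
    top5 = np.sort(scores, axis=1)[:, -5:]
    return top5.mean(axis=1)

def unfairness(adj):
    u = utility(adj)
    return abs(u[group == 0].mean() - u[group == 1].mean())

perturbed = interactions.copy()
removed = []
for _ in range(10):                      # budget of 10 edge deletions
    best, best_gap = None, unfairness(perturbed)
    for u, i in zip(*np.nonzero(perturbed)):
        trial = perturbed.copy()
        trial[u, i] = 0.0
        gap = unfairness(trial)
        if gap < best_gap:
            best, best_gap = (u, i), gap
    if best is None:
        break                            # no deletion reduces the gap further
    perturbed[best] = 0.0
    removed.append((int(best[0]), int(best[1])))

print("original gap:", round(unfairness(interactions), 4))
print("perturbed gap:", round(unfairness(perturbed), 4))
print("edges removed (explanation):", removed)
```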
Algorithmic fairness · Explainability · Recommender systems

Counterfactual Graph Augmentation for Consumer Unfairness Mitigation in Recommender Systems

It is possible to effectively address consumer unfairness in recommender systems by using counterfactual explanations to augment the user-item interaction graph. This approach not only leads to fairer outcomes across different demographic groups but also maintains or improves the overall utility of the recommendations. In a study with Francesco Fabbri, Gianni Fenu, Mirko Marras, and …

Continue Reading
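Again as a rough illustration rather than the method from the paper, the sketch below inverts the previous one: instead of deleting edges, it greedily adds hypothetical user-item edges for the disadvantaged group until a toy utility gap narrows, while keeping an eye on overall utility. All quantities and names are assumptions made for the example.

```python
# Minimal sketch (not the paper's pipeline): augment the interaction graph with
# counterfactual edges for the disadvantaged group so that the group utility gap
# shrinks, while monitoring overall utility.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items = 20, 30
adj = (rng.random((n_users, n_items)) < 0.2).astype(float)   # user-item interactions
group = rng.integers(0, 2, size=n_users)                     # demographic label per user

def utility(a):
    popularity = a.sum(axis=0)
    return (a * popularity).max(axis=1)        # toy per-user recommendation utility

def signed_gap(a):
    u = utility(a)
    return u[group == 0].mean() - u[group == 1].mean()

def gap_if_added(a, edge):
    trial = a.copy()
    trial[edge] = 1.0
    return abs(signed_gap(trial))

disadvantaged = 1 if signed_gap(adj) > 0 else 0    # group with the lower utility
augmented, added = adj.copy(), []
for _ in range(10):                                 # budget of 10 augmented edges
    candidates = [(u, i) for u in np.flatnonzero(group == disadvantaged)
                  for i in range(n_items) if augmented[u, i] == 0]
    if not candidates:
        break
    best = min(candidates, key=lambda e: gap_if_added(augmented, e))
    if gap_if_added(augmented, best) >= abs(signed_gap(augmented)):
        break                                       # no candidate narrows the gap
    augmented[best] = 1.0
    added.append((int(best[0]), int(best[1])))

print("gap before:", round(abs(signed_gap(adj)), 4),
      "after:", round(abs(signed_gap(augmented)), 4))
print("mean utility before:", round(utility(adj).mean(), 4),
      "after:", round(utility(augmented).mean(), 4))
print("edges added:", added)
```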
Explainability · Recommender systems

Reproducibility of Multi-Objective Reinforcement Learning Recommendation: Interplay between Effectiveness and Beyond-Accuracy Perspectives

Controlling individual objectives within Multi-Objective Recommender Systems (MORSs) is not straightforward. While reinforcing accuracy objectives appears feasible, it is more challenging to control diversity and novelty individually due to their positive correlation. This raises critical questions about the effectiveness of incorporating multiple correlated objectives in MORSs and the potential risks of not having control over them. In a …

Continue Reading
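To make the notion of competing objectives concrete, the sketch below shows one common way to combine accuracy, diversity, and novelty into a single scalarised slate reward with tunable weights; these weights are exactly the kind of knob whose controllability the study questions. The signals and their definitions are illustrative assumptions, not the experimental setup of the paper.

```python
# Minimal sketch of a scalarised multi-objective reward for a recommendation
# slate, combining accuracy, diversity, and novelty signals with tunable weights
# (hypothetical data and definitions).
import numpy as np

rng = np.random.default_rng(2)
n_items, dim = 100, 8
item_emb = rng.normal(size=(n_items, dim))           # item embeddings
popularity = rng.pareto(a=1.5, size=n_items) + 1.0   # long-tailed popularity counts
relevance = rng.random(n_items)                      # per-user relevance scores

def slate_reward(slate, w_acc=1.0, w_div=0.5, w_nov=0.5):
    acc = relevance[slate].mean()                    # accuracy: mean relevance
    emb = item_emb[slate]
    dists = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    div = dists[np.triu_indices(len(slate), k=1)].mean()          # intra-list distance
    nov = (-np.log(popularity[slate] / popularity.sum())).mean()  # self-information
    return w_acc * acc + w_div * div + w_nov * nov

slate = rng.choice(n_items, size=10, replace=False)
print("reward:", round(slate_reward(slate), 3))
print("more weight on diversity:", round(slate_reward(slate, w_div=2.0), 3))
```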
Explainability · Recommender systems

Towards Self-Explaining Sequence-Aware Recommendation

The sequence of user-item interactions can be effectively incorporated in the generation of personalized explanations in recommender systems. By modeling user behavior history sequentially, it is possible to enhance the quality and personalization of explanations provided alongside recommendations, without affecting recommendation quality. In a study with Alejandro Ariza-Casabona, Maria Salamó, and Gianni Fenu, published in …

Continue Reading
Explainability · Recommender systems

Knowledge is Power, Understanding is Impact: Utility and Beyond Goals, Explanation Quality, and Fairness in Path Reasoning Recommendation

Path reasoning is a notable recommendation approach that models high-order user-product relations based on a Knowledge Graph (KG). This approach can extract reasoning paths between recommended products and already experienced products and then turn such paths into textual explanations for the user. A benchmark of state-of-the-art approaches, in terms of accuracy and beyond-accuracy perspectives, …

Continue Reading
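As a toy illustration of turning a reasoning path into text, the snippet below verbalises a path that links an already-experienced product to a recommended one through a shared entity. The schema, relations, and templates are hypothetical and are not those of the benchmarked systems.

```python
# Minimal sketch (hypothetical schema): turn a KG reasoning path
# "user -> experienced item -> shared entity -> recommended item"
# into a textual explanation.
def verbalize(path):
    # path: [(user, "watched", item), (item, relation, entity), (entity, relation, rec_item)]
    (_, _, watched), (_, relation, entity), (_, _, recommended) = path
    relation_text = {"directed_by": "director", "starring": "actor",
                     "belongs_to_genre": "genre"}.get(relation, relation)
    return (f'"{recommended}" is recommended because it shares the {relation_text} '
            f'{entity} with "{watched}", which you already watched.')

path = [("u42", "watched", "Inception"),
        ("Inception", "directed_by", "Christopher Nolan"),
        ("Christopher Nolan", "directed_by", "Interstellar")]
print(verbalize(path))
```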
Explainability · Recommender systems

Reinforcement recommendation reasoning through knowledge graphs for explanation path quality

Knowledge Graph-based recommender systems naturally produce explainable recommendations by showing the reasoning paths in the knowledge graph (KG) that were followed to select the recommended items. One can define metrics that assess the quality of the explanation paths in terms of recency, popularity, and diversity. Combining in- and post-processing approaches to optimize for both recommendation …

Continue Reading
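The sketch below gives a rough sense of how such path-quality properties can be computed: recency from the timestamp of the linking interaction, popularity from the shared entity, and diversity from the variety of path types across the list. The data model and formulas are illustrative placeholders, not the paper's exact definitions.

```python
# Minimal sketch (hypothetical data model): score the explanation paths attached
# to a recommendation list for recency, popularity, and diversity of explanations.
import numpy as np

# Each explanation path is summarised by the linking interaction's timestamp,
# the shared entity's popularity, and the path type (e.g. which relation it uses).
paths = [
    {"timestamp": 980, "entity_popularity": 0.90, "path_type": "director"},
    {"timestamp": 310, "entity_popularity": 0.05, "path_type": "genre"},
    {"timestamp": 955, "entity_popularity": 0.40, "path_type": "actor"},
]
now = 1000

recency = np.mean([1.0 / (1.0 + (now - p["timestamp"])) for p in paths])
popularity = np.mean([p["entity_popularity"] for p in paths])
diversity = len({p["path_type"] for p in paths}) / len(paths)  # distinct path types

print(f"recency={recency:.3f} popularity={popularity:.3f} diversity={diversity:.3f}")
```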
Algorithmic fairness · Explainability · Recommender systems

Post Processing Recommender Systems with Knowledge Graphs for Recency, Popularity, and Diversity of Explanations

Assessing explanation quality in recommender systems and shaping recommendation lists that account for it allows us to produce more effective recommendations. These recommendations can also increase explanation quality according to the proposed properties, fairly across demographic groups. In a SIGIR 2022 paper with Giacomo Balloccu, Gianni Fenu, and Mirko Marras, …

Continue Reading
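As a schematic example of the post-processing idea (with made-up scores, not the SIGIR 2022 pipeline), the snippet below re-ranks candidate items by trading off predicted relevance against the quality of the best explanation path available for each item. The alpha weight controls how much explanation quality is allowed to reshuffle the list.

```python
# Minimal sketch (hypothetical scores): explanation-aware re-ranking that trades
# off predicted relevance against the quality of each item's best explanation path.
def rerank(candidates, alpha=0.7, k=3):
    # candidates: list of (item, relevance, explanation_quality) tuples
    scored = [(item, alpha * rel + (1 - alpha) * expl)
              for item, rel, expl in candidates]
    return [item for item, _ in sorted(scored, key=lambda x: x[1], reverse=True)][:k]

candidates = [("item_a", 0.92, 0.10),
              ("item_b", 0.85, 0.80),
              ("item_c", 0.80, 0.95),
              ("item_d", 0.60, 0.99)]
print(rerank(candidates))   # explanation-aware top-3
```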