Explainability Recommender systems

hopwise: A Python Library for Explainable Recommendation based on Path Reasoning over Knowledge Graphs

Explainable recommendation methods based on path reasoning over knowledge graphs require an end-to-end workflow that connects graph preparation, model training, explanation generation, and explanation-aware evaluation. hopwise is an open-source Python library that extends the RecBole ecosystem with interoperable datasets, path-reasoning models, and explanation-oriented evaluation tools, making systematic benchmarking and reuse practical. Context and motivation Path-based …

Continue Reading
Explainability Recommender systems

Blooming Beats: An Interactive Music Recommender System Grounded in TRACE Principles and Data Humanism

In music streaming, personalization is commonly delivered through opaque recommendation pipelines and thin interfaces, which often leads to explanations that are misaligned with listeners’ situated experiences and reduces transparency to a technical afterthought. Interactive narrative visualizations can enable more human-centered transparency and controllability in this setting by linking recommendations to temporally grounded listening patterns and …

Continue Reading
Explainability Group recommendation Recommender systems

PRISM: From Individual Preferences to Group Consensus through Conversational AI-Mediated and Visual Explanations

In group accommodation booking, delegating coordination to messaging apps and informal voting often leads to opaque preference trade-offs and social influence, which results in decisions that reflect dominance or conformity rather than genuine consensus. Conversational elicitation coupled with visual preference alignment can enable groups to surface, compare, and negotiate constraints transparently by separating private preference …

Continue Reading
Algorithmic bias Education Explainability Recommender systems

Can Path-Based Explainable Recommendation Methods based on Knowledge Graphs Generalize for Personalized Education?

In personalized education platforms, explainable recommendation is often pursued by transferring knowledge-graph path reasoning methods from other domains, yet differences in educational data and evaluation practices can make these transfers misaligned and leave it unclear which methods remain reliable and why. Knowledge-graph reasoning can enable transparent, structure-aware personalization in this setting by producing recommendation paths …

Continue Reading
Explainability Recommender systems

A Comparative Analysis of Text-Based Explainable Recommender Systems

We reproduce and benchmark prominent text-based explainable recommender systems to test the recurring claim that hybrid retrieval-augmented approaches deliver the best overall balance between explanation quality and grounding. Yet, prior evidence is hard to compare because studies diverge in datasets, preprocessing, target explanation definitions, baselines, and evaluation metrics. Under a unified benchmark on three real-world …

Continue Reading
Algorithmic fairness Explainability Recommender systems

GNNUERS: Unfairness Explanation in Recommender Systems through Counterfactually-Perturbed Graphs

Counterfactual reasoning can be effectively employed to perturb user-item interactions in order to identify and explain unfairness in GNN-based recommender systems, thus paving the way for more equitable and transparent recommendations. In this study, in collaboration with Francesco Fabbri, Gianni Fenu, Mirko Marras, and Giacomo Medda, and published in the ACM Transactions on Intelligent Systems and Technology, …

Continue Reading
Algorithmic fairness Explainability Recommender systems

Counterfactual Graph Augmentation for Consumer Unfairness Mitigation in Recommender Systems

It is possible to effectively address consumer unfairness in recommender systems by using counterfactual explanations to augment the user-item interaction graph. This approach not only leads to fairer outcomes across different demographic groups but also maintains or improves the overall utility of the recommendations. In a study with Francesco Fabbri, Gianni Fenu, Mirko Marras, and …

Continue Reading
Explainability Recommender systems

Reproducibility of Multi-Objective Reinforcement Learning Recommendation: Interplay between Effectiveness and Beyond-Accuracy Perspectives

Controlling various objectives within Multi-Objective Recommender Systems (MORSs) remains an open challenge. While reinforcing accuracy objectives appears feasible, it is more challenging to individually control diversity and novelty due to their positive correlation. This raises critical questions about the effectiveness of incorporating multiple correlated objectives in MORSs and the potential risks of not having control over them. In a …

Continue Reading
Explainability Recommender systems

Towards Self-Explaining Sequence-Aware Recommendation

The sequence of user-item interactions can be effectively incorporated in the generation of personalized explanations in recommender systems. By modeling user behavior history sequentially, it is possible to enhance the quality and personalization of explanations provided alongside recommendations, without affecting recommendation quality. In a study with Alejandro Ariza-Casabona, Maria Salamó, and Gianni Fenu, published in …

Continue Reading
Explainability Recommender systems

Knowledge is Power, Understanding is Impact: Utility and Beyond Goals, Explanation Quality, and Fairness in Path Reasoning Recommendation

Path reasoning is a notable recommendation approach that models high-order user-product relations based on a Knowledge Graph (KG). This approach can extract reasoning paths between recommended products and already experienced products, and then turn such paths into textual explanations for the user. A benchmarking of the state-of-the-art approaches, in terms of accuracy and beyond-accuracy perspectives, …

Continue Reading