Algorithmic fairness · Recommender systems

Enhancing recommender systems with provider fairness through preference distribution awareness

In multi-stakeholder recommender systems, provider-fairness interventions that regulate only overall exposure often overlook the fact that different user groups historically prefer different provider groups; the result is recommendation distributions that misallocate audience attention across providers and can introduce new forms of disparity. Preference distribution-aware re-ranking can deliver provider-fair visibility while preserving this cross-group preference structure, by aligning recommendation shares …

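The preference distribution-aware re-ranking described in the teaser above can be illustrated with a simple greedy heuristic: build the top-k list so that each provider group's share tracks a target distribution, while preferring higher-ranked items. This is a minimal sketch of the general idea, not the paper's algorithm; `rerank_to_target`, the deficit heuristic, and all names here are hypothetical.

```python
def rerank_to_target(ranked_items, groups, target, k):
    """Greedy re-ranking sketch: assemble a top-k list whose provider-group
    shares track a target distribution, preferring higher-ranked items.

    ranked_items: item ids, best first
    groups      : dict item -> provider group
    target      : dict group -> desired share of the top-k
    """
    selected, counts = [], {g: 0 for g in target}
    remaining = list(ranked_items)
    while len(selected) < k and remaining:
        # Pick the highest-ranked item whose provider group is most
        # under-served relative to its target share (a deficit heuristic).
        def deficit(item):
            g = groups[item]
            return target[g] * (len(selected) + 1) - counts[g]
        best = max(remaining, key=lambda it: (deficit(it), -remaining.index(it)))
        remaining.remove(best)
        selected.append(best)
        counts[groups[best]] += 1
    return selected
```

With two providers X and Y and a 50/50 target, the heuristic alternates groups even when the base ranking front-loads one of them, trading a small amount of ranking utility for distributional alignment.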
Algorithmic fairness · Recommender systems

How Fair is Your Diffusion Recommender Model?

In generative recommender systems, adopting diffusion-based learning primarily for accuracy often reproduces the biased interaction distributions present in historical logs, which results in systematic disparities for both users and items. Fairness-aware auditing can enable responsible diffusion recommendation by revealing when utility gains are obtained through consumer- or provider-side inequities, as instantiated in this study. We …

Algorithmic fairness · User profiling

GNN’s FAME: Fairness-Aware MEssages for Graph Neural Networks

In graph-based prediction settings, standard message passing in Graph Neural Networks often propagates correlations between neighborhoods and sensitive attributes, which results in biased node representations and unfair classification outcomes. In-processing mechanisms that modulate messages using protected-attribute relationships can enable fairness-aware representation learning by attenuating bias amplification during aggregation, as instantiated in this study through fairness-aware …

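The teaser above describes modulating messages according to protected-attribute relationships during aggregation. A minimal NumPy sketch of that general idea, assuming a toy setup where same-group edges are down-weighted by a factor `alpha` to attenuate bias amplification; this is not the actual FAME mechanism, and the function and weighting scheme are hypothetical.

```python
import numpy as np

def fairness_aware_aggregate(features, adj, sensitive, alpha=0.5):
    """Mean-aggregate neighbor messages, down-weighting edges that connect
    nodes sharing the same sensitive attribute (alpha < 1 attenuates them).

    features : (n, d) node feature matrix
    adj      : (n, n) binary adjacency matrix
    sensitive: (n,) sensitive-attribute labels
    """
    same = (sensitive[:, None] == sensitive[None, :]).astype(float)
    # Hypothetical modulation: same-group edges get weight alpha, others 1.0.
    weights = adj * (alpha * same + (1.0 - same))
    deg = weights.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    return (weights @ features) / deg
```

In a message-passing framework such as PyTorch Geometric, the analogous change would live in the message function, so the modulation is learned or applied per layer rather than once over the dense adjacency matrix.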
Algorithmic bias · Algorithmic fairness · User profiling

GNNFairViz: Visual analysis for fairness in graph neural networks

Graph neural networks are increasingly used to make predictions on relational data in settings such as social and financial networks. Yet, assessing whether these models treat demographic groups comparably is difficult because bias can arise not only from node attributes but also from the graph structure that drives message passing. By introducing a model-agnostic visual …

Algorithmic fairness · Recommender systems

Enhancing recommender systems with provider fairness through preference distribution awareness

Users in specific geographic areas often have distinct preferences regarding the provenance of the items they consume. However, current recommender systems fail to align these preferences with provider visibility, resulting in demographic inequities. By employing re-ranking, it is possible to achieve preference distribution-aware provider fairness, ensuring equitable recommendations with minimal trade-offs in effectiveness. Recommender systems …

Algorithmic bias · Algorithmic fairness · Recommender systems

AMBAR: A dataset for Assessing Multiple Beyond-Accuracy Recommenders

Recommender systems are a key tool for personalization in today’s digital age. They help us discover new music, books, or movies by predicting what we might like based on past interactions. But as recommender systems evolve, researchers and practitioners recognize that traditional metrics like accuracy alone aren’t enough. Factors like fairness, diversity, and user satisfaction …

Algorithmic fairness · Recommender systems

Fair Augmentation for Graph Collaborative Filtering

While fairness in Graph Collaborative Filtering remains under-explored, and evaluations are often inconsistent across methodologies, targeted graph augmentation can effectively mitigate demographic biases while maintaining high recommendation utility. Fairness in recommender systems is not just an ethical challenge but a measurable, achievable goal. In a paper, in collaboration with Francesco Fabbri, Gianni Fenu, Mirko Marras, and Giacomo …

Algorithmic fairness · User profiling

Toward a Responsible Fairness Analysis: From Binary to Multiclass and Multigroup Assessment in Graph Neural Network-Based User Modeling Tasks

Transitioning from binary to multiclass and multigroup fairness metrics uncovers hidden biases in GNN-based user modeling. Achieving true fairness requires fine-grained evaluation of real-world data distributions to ensure equity across all user groups and attributes. In an era dominated by artificial intelligence, ensuring fairness in automated decision-making has emerged as a critical priority. …

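The move from binary to multiclass and multigroup assessment described above can be made concrete with a small example: compute, for every predicted class, the rate at which each group receives it, and report the largest cross-group gap. This is an illustrative statistical-parity-style sketch under simplifying assumptions, not the paper's evaluation protocol; `demographic_parity_gap` is a hypothetical name.

```python
from collections import defaultdict

def demographic_parity_gap(preds, groups):
    """Multiclass, multigroup parity sketch: for each predicted class,
    compute the rate at which each group receives it, then return the
    largest cross-group gap over all classes (0 = parity, 1 = maximal gap).
    """
    members = defaultdict(list)  # group -> predictions for that group
    for p, g in zip(preds, groups):
        members[g].append(p)
    gaps = []
    for c in set(preds):
        # Per-group rate of being assigned class c.
        rates = [sum(1 for p in ps if p == c) / len(ps)
                 for ps in members.values()]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```

A binary metric collapses the per-class rates into one number and can hide a class-specific disparity; evaluating every class against every group, as sketched here, is what surfaces it.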
Algorithmic fairness · Ranking systems

Towards Ethical Item Ranking: A Paradigm Shift from User-Centric to Item-Centric Approaches

By eliminating user-centric biases and adopting a purely item-focused approach, it is possible to achieve ethical and effective ranking systems that ensure fairness, resilience, and compliance with regulations on responsible AI. Ranking systems are essential in online platforms, shaping user experiences and influencing product visibility and sales. However, traditional user-centric ranking systems, which assign reputation scores to …

Algorithmic fairness · Recommender systems

Bringing Equity to Coarse and Fine-Grained Provider Groups in Recommender Systems

Achieving true fairness in recommender systems requires moving beyond broad demographic categories to address disparities at a fine-grained level, ensuring equitable representation for all subgroups. This goal becomes feasible through advanced re-ranking methodologies like CONFIGRE. Recommender systems are ubiquitous in today’s digital landscape, providing tailored suggestions to users in domains like e-commerce, entertainment, …
