Recommender systems

Auditing recommender systems for user empowerment in Very Large Online Platforms under the Digital Services Act

The governance of recommender systems in very large online platforms is expected to change significantly under the Digital Services Act, which introduces new obligations on transparency and user control; however, compliance-oriented implementations can still leave users with limited ability to steer personalization and manage their exposure. In this work, we analyze how three major short-video platforms (Instagram, TikTok, and YouTube) instantiate the DSA requirements, drawing on their audits, systemic risk assessments, and compliance strategies, and we use these observations to derive a research agenda for “meaningful personalization.” We argue that user empowerment requires shifting from isolated feedback controls to structured levers for algorithmic choice (proportionality and granularity of customization) and content curation (diversity and authoritativeness), so that regulatory objectives translate into actionable, user-centered design choices.

Recommender systems increasingly act as the interface through which people encounter information, creators, and viewpoints. This makes recommendation design a governance issue, not only an optimization problem: the system’s ranking logic shapes what becomes salient, what remains peripheral, and which forms of engagement are incentivized.

The Digital Services Act moves regulation closer to this interface by requiring transparency about how recommendations are produced and by mandating that very large online platforms offer at least one recommendation option not based on profiling. The conceptual challenge is that “transparency” and “control” can be satisfied in ways that are formally compliant but still behaviorally ineffective: users may receive long explanations without actionable levers, or be offered controls that only weakly affect exposure patterns.

Against this backdrop, user empowerment becomes a design criterion. The question is not whether platforms disclose signals, but whether the available controls let different users meaningfully shape personalization while preserving broad societal objectives such as pluralism, reduced manipulation incentives, and better information quality.

In a study conducted in cooperation with Matteo Fabbri and published in the Proceedings of ACM RecSys 2025, we present an analysis of how major short-video platforms implement the DSA’s requirements for recommender-system transparency and user control, and we use this evidence to articulate a design-oriented view of “meaningful personalization.”

The work addresses a practical gap: regulatory obligations exist, but the ecosystem lacks shared interpretations and actionable standards for what substantive user control should look like in real interfaces and ranking logics.

Methodology

We combine two moves. First, we treat public compliance artifacts (audits, audit follow-ups, and systemic risk assessment reports) as empirical evidence about how platforms interpret “parameters” and “options” in practice, and we validate the user-facing side through an interface walkthrough. Second, we propose a forward-looking design framework that reframes compliance as a pathway to meaningful personalization, organized around two families of capabilities: enabling algorithmic choice and directing content curation.

The key output is a structured way to reason from legal requirements to user-facing controls and to evaluate whether current implementations plausibly empower users rather than merely documenting the status quo.

Empirical evidence analysis

Regulatory obligations

A first methodological step is to reduce ambiguity in the language of “main parameters,” “criteria,” and “options.” The problem we address is that, without an operational vocabulary, platforms and auditors can treat the same interface feature as either meaningful control or mere interaction feedback.

We therefore separate the signals a system uses (what it observes), the criteria by which those signals matter (how they are prioritized), the options offered to users (what they can choose to influence the system), and the concrete interface functionalities that instantiate those options. This abstraction matters because user empowerment depends on the pathway from explanation to intervention: users must be able to connect what is described with what they can actually change.
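To make this vocabulary concrete, the sketch below encodes the four layers as simple data structures and traces one familiar control through them. All class names, fields, and example values are hypothetical, chosen only for illustration, and are not drawn from the paper or from any platform’s documentation.

from dataclasses import dataclass

# Hypothetical audit vocabulary: four layers from observation to interface.
@dataclass
class Signal:
    """Something the system observes (e.g., watch time, shares)."""
    name: str

@dataclass
class Criterion:
    """How observed signals are weighted or prioritized in ranking."""
    name: str
    signals: list[Signal]
    stated_importance: str  # often left as "unspecified" in disclosures

@dataclass
class Option:
    """A choice offered to users to influence the system."""
    name: str
    affects: list[Criterion]

@dataclass
class Functionality:
    """The concrete interface element that instantiates an option."""
    label: str
    option: Option
    clicks_to_reach: int  # crude proxy for discoverability

# Example mapping: a "Not interested" button traced back through the layers.
watch_time = Signal("watch_time")
engagement = Criterion("predicted engagement", [watch_time], "unspecified")
demote_item = Option("demote similar items", affects=[engagement])
not_interested = Functionality("Not interested", demote_item, clicks_to_reach=2)

Read this way, auditing amounts to asking, for each disclosed signal and criterion, whether any option and functionality actually reaches it; gaps in that chain are where explanation fails to translate into intervention.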

Triangulating compliance through accountability documents and interface evidence

In a second step, we analyze how three major platforms describe recommendation parameters and controls in audits and risk reports, and we check whether these disclosures correspond to what users can access in the product. The problem we target is the risk of “paper compliance,” where disclosure breadth does not imply effective control.

The abstraction introduced here is to treat audits and risk assessments as design documentation for governance. We examine not only what is listed (signals and controls) but also what is left underspecified, such as how the “relative importance” of parameters is communicated and whether control options meaningfully change exposure rather than just removing items one by one. This matters because systemic risks are mediated through exposure dynamics; controls that only provide negative feedback on individual items may not let users shape those dynamics.

Reframing compliance as meaningful personalization via a design framework

In the third step, we use speculative design to move from “what platforms currently do” to “what user empowerment would require.” The problem we address is that current implementations tend to treat control as an auxiliary feature, while meaningful personalization requires control to be part of the recommendation architecture.

We organize the framework into two categories. Under algorithmic choice, we focus on proportionality (users can adjust how strongly personalization influences what they see) and granularity (users can intervene at different depths, from coarse modes to more fine-grained tuning of influential factors). Under content curation, we focus on diversity (users can influence how broad or perspective-rich their feed is) and authoritativeness (users can choose whether and how trust-related signals shape ranking, especially in information-sensitive contexts). This connects individual agency with societal objectives: user empowerment is not only about opting out, but about shaping exposure in ways that can mitigate manipulation, narrow information diets, and the amplification of low-quality content.
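To give a rough sense of how such levers could enter the ranking logic itself, rather than sit beside it as feedback buttons, consider the toy re-ranker below: a user-set personalization strength blends profiled and non-profiled scores (proportionality), and a diversity weight penalizes topic repetition (diversity). The function, its parameters, and the scoring scheme are assumptions made for this sketch, not any platform’s actual implementation.

def rerank(personalized, baseline, item_topics, alpha=0.7, diversity_weight=0.3, k=10):
    """Toy re-ranker exposing two user-facing levers (illustrative only).

    personalized, baseline -- scores for the same candidate items
    item_topics            -- topic label per candidate item
    alpha                  -- personalization strength; 0 falls back to the
                              non-profiled baseline, 1 is fully personalized
    diversity_weight       -- penalty applied to topics already shown
    """
    # Proportionality: blend profiled and non-profiled scores.
    blended = [alpha * p + (1 - alpha) * b for p, b in zip(personalized, baseline)]
    selected, seen_topics = [], set()
    remaining = set(range(len(blended)))
    for _ in range(min(k, len(blended))):
        # Diversity: greedily pick the best item, discounting repeated topics.
        best = max(
            remaining,
            key=lambda i: blended[i]
            - (diversity_weight if item_topics[i] in seen_topics else 0.0),
        )
        selected.append(best)
        seen_topics.add(item_topics[best])
        remaining.remove(best)
    return selected

In these terms, granularity would correspond to how many such knobs are exposed and at what resolution (a coarse mode switch versus per-factor tuning), while authoritativeness could add a further, user-configurable term weighting trust-related signals in information-sensitive contexts.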

Findings and insights

Across the analyzed platforms, we observe a recurring mismatch between extensive disclosures and limited actionable control. Platforms can describe many signals and still offer controls that primarily mirror standard engagement actions (e.g., hiding an item, marking it “not interested,” unfollowing an account); such controls place the burden of steering on repeated micro-interventions rather than offering higher-level levers over exposure.

A second insight is that auditing outcomes depend strongly on interpretive stance. When auditors adopt a stricter view of what counts as an “option” to influence main parameters (treating ordinary interactions, generic settings, or simple pool-limiting features as insufficient), the same family of controls appears materially weaker. This highlights that, in the absence of shared standards, compliance assessment can drift toward benchmark shopping and minimalist interpretations.

A third insight concerns systemic risk narratives. Risk assessment reports often frame recommender systems as tools for demotion, quality promotion, and moderation-by-ranking, yet the connection between these goals and user-facing control remains thin. Empowerment is commonly asserted as a principle, but rarely operationalized into controls that let users explicitly trade off personalization strength, diversity, and information quality signals.

Taken together, the results suggest that the central bottleneck is not a lack of transparency artifacts but a lack of design commitments that turn transparency into controllability. The most promising pathway is to treat control as a structured set of levers—over algorithmic intervention and over the properties of the resulting feed—rather than as a collection of isolated feedback buttons.

Conclusions

This work clarifies why “DSA compliance for recommender systems” should be evaluated as a socio-technical design problem: empowerment depends on how legal requirements are instantiated as user-facing levers and how those levers reshape exposure dynamics. By grounding a design framework in evidence from audits and risk reports, we create a bridge from regulatory language to actionable evaluation criteria for meaningful personalization.

Several research directions follow naturally. We can empirically test whether proportional and granular controls improve perceived agency without imposing excessive cognitive load, and for which user segments and contexts such controls are beneficial. We can also study how to operationalize diversity and authoritativeness in ways that users can understand and meaningfully configure, and that do not collapse into superficial labels. Finally, independent auditing can become more rigorous by leveraging emerging data access mechanisms and by developing shared evaluation standards that align interface-level control, ranking behavior, and systemic risk mitigation into a coherent accountability pipeline.