Explainability, Recommender Systems

Blooming Beats: An Interactive Music Recommender System Grounded in TRACE Principles and Data Humanism

In music streaming, personalization is commonly delivered through opaque recommendation pipelines and thin interfaces, which often leads to explanations that are misaligned with listeners’ situated experiences and reduces transparency to a technical afterthought. Interactive narrative visualizations can enable more human-centered transparency and controllability in this setting by linking recommendations to temporally grounded listening patterns and contextual cues, as instantiated in this demo.

Music recommendation sits at an intersection where algorithmic optimization and personal meaning frequently diverge. Listening histories are not only collections of tracks but also sequences of moments shaped by routines, transitions, and external events. When platforms present recommendations as decontextualized outputs, users have limited support for understanding why suggestions appear, how they relate to past listening, and when they may resonate.

From a research perspective, this creates a persistent challenge: explainability is often operationalized as post-hoc rationales or feature-based transparency, while the lived structure of listening (temporal episodes, interruptions, repeated plays, situational shifts) is treated as noise. If we want explanations that are actionable and interpretable in context, the interface and interaction model become part of the explanation itself—supporting sensemaking, reflection, and “debugging” of one’s own musical trajectory rather than merely exposing model internals.

In a demo developed in cooperation with Ibrahim Al-Hazwani, Daniel Lutziger, Carlos Kirchdorfer, Luca Huber, Oliver Robin Aschwanden, and Jürgen Bernard, and published in the Proceedings of ACM RecSys 2025, we present Blooming Beats, an interactive music recommender that reframes explainability as narrative exploration rather than a static justification layer.

What the demo enables

Blooming Beats is best understood as a research instrument for studying and operationalizing human-centered explanation. It enables users to interpret recommendations through listening stories: patterns over time, contextual annotations, and recognizable episodes that can be inspected, revisited, and used as anchors for discovery.

For researchers, the system provides a concrete instantiation of how explanation objectives (transparency, context-awareness, and empathy) can be embedded into interaction, allowing investigation of what users actually treat as “explanatory evidence” when exploring personal histories. For practitioners, it illustrates a design direction where explanation is not an auxiliary tooltip but a structured workflow for exploration and decision-making. For end users, it enables recommendation as a reflective activity: selecting meaningful moments, seeing how those moments drive suggestions, and relating recommendations to future situations where they might fit.

How it works

The left view depicts a timeline of “flower-like” song representations enriched with listening-behavior traces and contextual markers; the right view links user selections to recommended tracks through explicit visual connections, making the interaction-to-recommendation pathway inspectable.
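To make the two-view structure concrete, the following minimal sketch models the kinds of records such an interface might pass between the timeline view and the recommendation view. All names (`ListeningEvent`, `RecommendationLink`, the specific fields) are illustrative assumptions for this post, not the demo’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class ListeningEvent:
    """One play in the history, as it might feed a timeline glyph (assumed schema)."""
    track_id: str
    timestamp: datetime
    completed: bool                                          # finished vs. skipped/interrupted
    context_tags: List[str] = field(default_factory=list)    # e.g. "road trip"


@dataclass
class RecommendationLink:
    """An inspectable link from selected history items to one suggestion."""
    recommended_track_id: str
    anchor_events: List[ListeningEvent]   # the selection that drove the suggestion
    rationale: str                        # short, human-readable grounding


# A selection in the left view would yield RecommendationLink objects that the
# right view can render as explicit visual connections back to the history.
```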

Narrative-first representation of listening history

A core design choice is to treat listening data as narrative material rather than analytics output. Instead of summarizing behavior with conventional charts, Blooming Beats represents songs with compact visual encodings and connects them through traces that differentiate how listening unfolded (e.g., continuous sessions versus interruptions). This addresses a recurring problem in explainable recommenders: explanations are difficult to interpret when the underlying history is flattened into aggregates that hide the structure of episodes and transitions.
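As a rough illustration of how “how listening unfolded” could be recovered from raw logs, the sketch below groups plays into episodes by time gaps, separating continuous sessions from interrupted ones. It reuses the `ListeningEvent` type sketched above; the 30-minute threshold is an assumption for illustration, not a detail of the demo.

```python
from datetime import timedelta
from typing import List

# ListeningEvent as sketched in the previous code block.


def segment_into_episodes(
    events: List[ListeningEvent],
    max_gap: timedelta = timedelta(minutes=30),
) -> List[List[ListeningEvent]]:
    """Split a listening log into episodes of contiguous activity.

    A new episode starts whenever the gap to the previous play exceeds
    `max_gap`; within an episode, skipped or interrupted tracks still count
    as part of the same unfolding session.
    """
    episodes: List[List[ListeningEvent]] = []
    for event in sorted(events, key=lambda e: e.timestamp):
        if episodes and event.timestamp - episodes[-1][-1].timestamp <= max_gap:
            episodes[-1].append(event)
        else:
            episodes.append([event])
    return episodes
```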

The abstraction introduced here is a “readable timeline” that supports recognizing arcs and segments in listening, which matters because many user interpretations of music are anchored in sequences (“during a trip”, “late-night study sessions”) rather than isolated items.

Context as a first-class layer for exploration and explanation

Blooming Beats explicitly integrates context into the exploration space, including personal milestones and broader events as navigational and interpretive cues. This mechanism addresses the limitation that recommendation explanations often ignore why certain listening periods mattered, even if those periods are decisive for how users judge relevance.

The interface logic treats context as something that can frame both exploration and recommendation: users can locate periods through annotations and then interpret the resulting suggestions as grounded in a situated episode. This choice matters because it supports explanations that remain meaningful even when users cannot (or do not want to) reason in terms of model features.
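One plausible way to treat context as a navigational layer is to attach annotations to time ranges and let exploration start from them rather than from individual tracks. The `ContextAnnotation` structure and the lookup below are a sketch under that assumption; the demo’s own annotation model may differ.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# ListeningEvent and segment_into_episodes as sketched above.


@dataclass
class ContextAnnotation:
    """A personal milestone or broader event attached to a time range."""
    label: str            # e.g. "road trip", "exam season"
    start: datetime
    end: datetime


def episodes_in_context(
    episodes: List[List[ListeningEvent]],
    annotation: ContextAnnotation,
) -> List[List[ListeningEvent]]:
    """Return the episodes overlapping an annotated period, so a user can
    jump from a contextual cue straight to the relevant listening."""
    return [
        ep for ep in episodes
        if ep and ep[0].timestamp <= annotation.end and ep[-1].timestamp >= annotation.start
    ]
```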

From profile matching to story matching, with inspectable links

A third mechanism is the shift from matching users as static profiles to connecting “listening stories” through temporal patterns and contextual similarities. Conceptually, this reframes empathy in recommendation: the system is designed to help users discover resonant experiences (potentially with anonymous others) through shared narrative structure rather than demographic proximity.

Crucially, the explanation is not merely verbalized; it is externalized as visible links from user-selected parts of the history to the recommended tracks. This directly supports transparency: users can inspect what in the story is driving suggestions and can reassess relevance by changing the anchor episode rather than treating the recommendation as an opaque output.
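A hedged sketch of what story matching with inspectable links could look like: score candidate episodes against a user-selected anchor episode by shared context tags and a simple temporal profile, then return recommendations that carry references back to the anchor items so the links can be drawn. The scoring heuristic and weights here are purely illustrative and not the demo’s actual matching logic.

```python
from collections import Counter
from typing import List

# ListeningEvent and RecommendationLink as sketched above.


def story_similarity(anchor: List[ListeningEvent],
                     candidate: List[ListeningEvent]) -> float:
    """Toy similarity: overlap of context tags plus closeness of episode length."""
    anchor_tags = {t for e in anchor for t in e.context_tags}
    cand_tags = {t for e in candidate for t in e.context_tags}
    tag_overlap = len(anchor_tags & cand_tags) / max(len(anchor_tags | cand_tags), 1)
    length_ratio = min(len(anchor), len(candidate)) / max(len(anchor), len(candidate), 1)
    return 0.7 * tag_overlap + 0.3 * length_ratio


def recommend_from_story(anchor: List[ListeningEvent],
                         candidate_episodes: List[List[ListeningEvent]],
                         k: int = 5) -> List[RecommendationLink]:
    """Rank candidate episodes against the anchor, then surface each one's most
    played track as a recommendation linked back to the anchor selection."""
    ranked = sorted(candidate_episodes,
                    key=lambda ep: story_similarity(anchor, ep),
                    reverse=True)
    links: List[RecommendationLink] = []
    for episode in ranked:
        counts = Counter(e.track_id for e in episode)
        for track_id, _ in counts.most_common(1):
            links.append(RecommendationLink(
                recommended_track_id=track_id,
                anchor_events=anchor,
                rationale="episode shares context and shape with your selection",
            ))
        if len(links) >= k:
            break
    return links
```

Changing the anchor episode simply reruns the ranking, which mirrors the point above: relevance is reassessed by moving the narrative anchor, not by second-guessing an opaque score.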

Demonstration scenario and evidence of usefulness

The demo illustrates a scenario where one person explores a friend’s listening history around a specific period (a road trip) and uses that narrative to inform recommendations for an upcoming trip. The key point is not “finding similar songs” in the abstract, but understanding when and why certain tracks fit: energetic segments during driving, calmer transitions during stops, and shifts that reflect social moments. Recommendations become interpretable as extensions of an episode rather than as generic similarity results.

The extended abstract reports a preliminary qualitative evaluation with 8 participants exploring a decade of listening data (202,988 songs) that had been manually contextualized. Participants were generally able to use the visual links to understand what drove recommendations (73% are reported to have understood the recommendation logic), and they relied on contextual markers to navigate and interpret the timeline (85% reported that global event markers improved navigation). The empathy-oriented idea (connecting through anonymous listening moments) was also perceived as meaningful compared to demographic-style matching (67% expressed interest). At the same time, the interface’s richness introduced initial complexity for some users, highlighting a tension between expressive narrative representations and the need for progressive disclosure.

Conclusion and outlook

Blooming Beats matters less as a single interface and more as an enabling artifact: it operationalizes explainability as a narrative interaction model where context, temporal structure, and user-driven anchoring are treated as primary explanatory resources. This perspective is valuable for recommender-systems research that aims to move from “explanations about models” to “explanations that support human sensemaking and choice”.

Concrete extensions suggested by the demo’s framing include scaling story-based matching to larger populations while preserving interpretability, developing interaction patterns that reduce initial complexity through staged views, and strengthening the context layer through richer (and less manual) annotation workflows. More broadly, the demo points toward evaluation regimes that treat narrative grounding and user interpretation as first-class outcomes, not secondary UX considerations, when studying explainable recommendation.