🏆 VIS 2025 Honorable Mention

Story Ribbons: Reimagining Storyline Visualizations with Large Language Models

Catherine Yeh1, Tara Menon1, Robin Singh Arya1, Helen He1, Moira Weigel1, Fernanda Viégas1,2, Martin Wattenberg1,2

1Harvard University, 2Google Research

We present Story Ribbons, an interactive narrative analysis tool that visualizes LLM-extracted insights about literary works. Story Ribbons enables users to explore stories at multiple narrative levels, and offers AI-powered features to customize visualizations to individual analysis workflows.

Abstract

Analyzing literature involves tracking interactions between characters, locations, and themes. Visualization has the potential to facilitate the mapping and analysis of these complex relationships, but capturing structured information from unstructured story data remains a challenge. As large language models (LLMs) continue to advance, we see an opportunity to use their text processing and analysis capabilities to augment and reimagine existing storyline visualization techniques. Toward this goal, we introduce an LLM-driven data parsing pipeline that automatically extracts relevant narrative information from novels and scripts. We then apply this pipeline to create Story Ribbons, an interactive visualization system that helps novice and expert literary analysts explore detailed character and theme trajectories at multiple narrative levels. Through pipeline evaluations and user studies with Story Ribbons on 36 literary works, we demonstrate the potential of LLMs to streamline narrative visualization creation and reveal new insights about familiar stories. We also describe current limitations of AI-based systems, and interaction motifs designed to address these issues.

Story Analysis Pipeline

Our pipeline consists of four steps across two phases: decomposition and aggregation. Correction loops check and repair LLM output, with each loop running once per story. The pipeline is highly adaptable to different literary genres (e.g., novels, plays) and elements (e.g., characters, themes).
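As a rough illustration of the correction-loop idea, one pipeline step might look like the sketch below. The `llm` callable, prompt wording, and `validate` check are placeholders of our own, not the system's actual prompts or validators.

```python
# Minimal sketch of one parsing step with a single correction loop,
# assuming an `llm` callable (prompt -> text) and a `validate` predicate.
def run_step(llm, step_prompt: str, story_text: str, validate) -> str:
    """Run one extraction step; re-prompt once if the output fails validation."""
    output = llm(f"{step_prompt}\n\n{story_text}")
    if not validate(output):
        # Correction loop: ask the model to repair its own output
        # (runs at most once per story, as in the pipeline description).
        output = llm(f"Your previous output was invalid; please correct it:\n{output}")
    return output
```

The single-pass loop keeps cost bounded: each story incurs at most one extra model call per step.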

Story Ribbons: An Interactive Literary Analysis Tool

Story Ribbons visualizes narrative insights from our LLM-powered analysis pipeline, allowing users to explore customizable character and theme trajectories for 36 stories. In the Pride and Prejudice example, each "ribbon" represents a character (e.g., Elizabeth Bennet) and can be used to track interactions across novel chapters (x-axis) and locations (y-axis). Ribbon thickness encodes a character's prominence in each chapter, and chapter titles are colored by sentiment (red: positive, blue: negative).
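The per-chapter data behind one ribbon can be pictured as a simple record of the three encoded channels. The field names and values below are illustrative, not the system's actual schema.

```python
from dataclasses import dataclass

# Hypothetical per-chapter record for a single character's ribbon;
# field names are our own, not the system's real data format.
@dataclass
class RibbonPoint:
    chapter: int        # x-axis position
    location: str       # y-axis position
    prominence: float   # ribbon thickness (0-1) in this chapter

# Example fragment of Elizabeth Bennet's ribbon (values invented).
ribbon = [
    RibbonPoint(chapter=1, location="Longbourn", prominence=0.9),
    RibbonPoint(chapter=3, location="Meryton", prominence=0.6),
]
```

Drawing a ribbon then amounts to interpolating a band through these points, with width proportional to prominence.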

Our system includes three key LLM-powered interaction motifs:

Explanations on Demand

Empower users to interrogate and verify the model's reasoning by showing explanations for LLM-extracted insights when interacting with UI components (e.g., why Elizabeth's ribbon is pink).

Natural Language Dimensions

Allow users to add new visualization dimensions through natural language prompts to shape story exploration around their own interpretive goals (e.g., a custom y-axis ranking characters by hope).

Natural Language Queries

Help users navigate the visualization and gain deeper story insights by providing personalized guidance based on their natural language queries (e.g., "When does Grete betray Gregor?").
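To give a flavor of how a Natural Language Dimension might be requested from a model, here is a hypothetical prompt builder. The function, wording, and output format are assumptions for illustration; the system's real prompts are not shown here.

```python
# Hypothetical prompt template for adding a custom ranking dimension
# (e.g., a y-axis ordering characters by "hope"); not the actual system prompt.
def dimension_prompt(dimension: str, characters: list[str]) -> str:
    names = ", ".join(characters)
    return (
        f"For each chapter, score these characters on '{dimension}' "
        f"from 0 to 1: {names}. "
        "Return JSON mapping each character name to a list of per-chapter scores."
    )
```

The returned scores would then drive the vertical placement of each ribbon, replacing the default location axis.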

More Visualization Examples

We used Story Ribbons to visualize novels, plays, poems, non-fiction, and even LLM-generated stories. To learn more, check out our paper.

The Metamorphosis by Franz Kafka
Emma by Jane Austen
Anne of Green Gables by L.M. Montgomery
The Wizard of Oz by L. Frank Baum
Jane Eyre by Charlotte Brontë
Little Women by Louisa May Alcott
The School for Scandal by Richard Brinsley Sheridan
Time Looped Detective (LLM-Generated)

BibTeX

@article{yeh2025story,
  title={Story Ribbons: Reimagining Storyline Visualizations with Large Language Models},
  author={Yeh, Catherine and Menon, Tara and Arya, Robin Singh and He, Helen and Weigel, Moira and Vi{\'e}gas, Fernanda and Wattenberg, Martin},
  journal={arXiv preprint arXiv:2508.06772},
  year={2025}
}