UA Little Rock Researcher Wins Best Paper Award for Groundbreaking AI Study on YouTube Bias

Dr. Nitin Agarwal

Dr. Nitin Agarwal, Maulden-Entergy Chair and Donaghey Distinguished Professor of Information Science at the University of Arkansas at Little Rock and founding director of the Collaboratorium for Social Media and Online Behavioral Studies (COSMOS), has received the Best Paper Award at the 2025 International Conference on AI-Based Media Innovation (AIMEDIA 2025) in Venice, Italy.

Agarwal’s paper, titled “AI-Driven Multi-Layer Narrative Analysis for Uncovering Bias in YouTube Content,” was recognized for its innovative use of artificial intelligence to reveal subtle patterns of bias and sentiment within digital video content. Agarwal described the work as a pioneering contribution to the field of human-centered AI and media transparency.

The award-winning study tackled a growing concern in the digital age: how engagement-driven platforms like YouTube influence perception through algorithmic design and emotional framing. While previous studies relied on surface-level indicators such as video titles, descriptions, and engagement metrics, Agarwal’s team sought to understand the deeper emotional and narrative structure of content posted on the video platform.

To do this, the research leveraged artificial intelligence (AI) to perform a multi-layered analysis of YouTube videos. The AI examined titles, descriptions, transcripts, and AI-generated summaries, uncovering how emotional tone and bias shift across these layers.
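The layered comparison can be illustrated with a deliberately simplified sketch. This is not the study’s actual AI pipeline; it swaps the paper’s models for a toy word-lexicon scorer, and the lexicon, video fields, and example text are all invented, but it shows the core idea of scoring each content layer separately and comparing surface against depth:

```python
# Toy multi-layer sentiment comparison (illustrative only, not the
# study's actual method): score each layer of a video with a tiny
# hand-made lexicon and compare the title against deeper layers.

POSITIVE = {"joy", "helpful", "balanced", "constructive", "love", "great"}
NEGATIVE = {"outrage", "toxic", "scandal", "shocking", "hate", "disaster"}

def sentiment_score(text: str) -> float:
    """(positive - negative) word hits, normalized by word count."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

def layer_profile(video: dict) -> dict:
    """Score each content layer (title, description, transcript) separately."""
    return {layer: sentiment_score(text) for layer, text in video.items()}

# Hypothetical video: a provocative title over a more balanced transcript.
video = {
    "title": "SHOCKING scandal sparks outrage!",
    "description": "A look at both sides of the debate.",
    "transcript": "The discussion was balanced, constructive, and helpful overall.",
}
profile = layer_profile(video)
print(profile)  # title scores negative; transcript scores positive
```

In this toy example the title layer scores sharply negative while the transcript scores positive, mirroring the qualitative pattern the study reports: provocation concentrated at the surface, more constructive sentiment deeper in the content.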

The findings were revealing. Sentiment became more positive and joyful as analysis moved deeper into the content, while expressions of anger and toxicity declined sharply. The study concluded that video titles, which are often optimized for engagement, tended to be the most provocative or toxic, while the underlying narratives were far more balanced and constructive.

“This research showcases how AI can be used to interpret not just data, but meaning,” Agarwal said. “By examining content holistically, we can identify discrepancies between what is presented on the surface and what the message truly conveys. That insight is crucial for creating more transparent and trustworthy digital environments.”

The study, published in the IARIA Congress 2025 proceedings, represents a major step forward in the use of AI for media accountability and bias detection. By integrating sentiment, emotion, and toxicity analysis across multiple content layers, the research introduces a new framework for evaluating online media beyond superficial metrics.

The work’s implications extend well beyond YouTube. Its methodology can be applied across platforms to help developers, policymakers, and researchers better understand how algorithms shape user engagement and how those algorithms can be designed to promote fairness, accuracy, and context awareness.

“Ultimately, our goal is to develop tools that help platforms and audiences distinguish between attention-grabbing rhetoric and authentic communication,” Agarwal said. “This kind of analysis moves us closer to AI systems that truly support informed decision-making.”

This recognition adds to a growing list of international honors for Agarwal and COSMOS, whose interdisciplinary research combines computer science, behavioral analysis, and social impact. COSMOS has received support from numerous federal agencies and international partners for its work in algorithmic transparency, cognitive warfare, AI applications, and digital ethics.