Best Paper Award at DSAI’24: Advancing Accessibility with Large Language Models

December 6, 2024

PhD student Nadeen Fathallah presented the paper 'Empowering the Deaf and Hard of Hearing Community: Enhancing Video Captions Using Large Language Models' at the 11th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-Exclusion (DSAI’24), held at Khalifa University in Abu Dhabi, UAE. The paper, co-authored with Monika Bhole and Steffen Staab, was honored with the Best Paper Award at the conference.

This work addresses critical challenges faced by the deaf and hard-of-hearing community in a digital world increasingly reliant on video content. Traditional captioning systems often fail to capture nuances, leaving room for miscommunication and exclusion. The paper demonstrates how large language models (LLMs) can revolutionize captioning by providing enhanced context, inclusivity, and accuracy. By correcting inaccuracies and enriching captions with a deeper understanding of the video content, the approach fosters a more equitable digital space and enables greater accessibility.

Fathallah, N., Bhole, M., & Staab, S. (2024). Empowering the Deaf and Hard of Hearing Community: Enhancing Video Captions Using Large Language Models. In Proceedings of the 11th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (DSAI '24). http://arxiv.org/abs/2412.00342