What Are Closed Captions & Why Do They Matter?

In 1972, with the airing of Julia Child’s The French Chef, captions made their TV debut. Twenty-six years later, the Federal Communications Commission (FCC) required all video programming distributors to provide their Deaf and hard-of-hearing audience with access to closed captions.

Cut to 2023, when Marlee Matlin and fellow jurors walked out of a Sundance screening over caption issues. The incident reinforced why captions are necessary, and why their speed and accuracy are essential for the inclusion of the Deaf and hard-of-hearing community.

What are closed captions?

Closed captions are text displayed on a screen or other visual display that conveys a program's audio content, such as dialogue. Over the past four decades, captioning and subtitles have expanded beyond TV programming into exhibition spaces and platforms for education, healthcare, entertainment, and beyond—all for the sake of accessibility.

Image of Julia Child with text that reads: The French Chef with Julia Child was the first open-captioned program to air, meaning that the captions appear to everyone watching and cannot be turned off.


Image Source: National Captioning Institute

Hearing or Deaf, a large share of Americans watch videos with the sound muted. Whether out of necessity, accessibility, situational constraints, or personal preference, captions serve viewing audiences at large.

Unfortunately, captions are often substandard, which spoils the viewing experience for everyone using them. To provide a truly accessible and useful experience for the masses, captions must be accurate, well timed, and compliant with ADA guidelines.

What Makes Captions Accessible & Inclusive?

When generating speech-to-text transcription or live captions for Deaf and hard-of-hearing people, the fastest and most accurate solutions incorporate the following:

Inclusive Design

Most captioning solutions are built to simply translate voice to text. They may have good intentions and do a decent job captioning, but generic design fails to serve the specific needs of Deaf and hard-of-hearing people.

Some captioning tools work best with video conferencing platforms, others in browsers or on mobile devices. Regardless of the platform, most do not treat accessibility as a primary focus.

Considering the diversity of human needs in various settings, inclusive design aims to create products that work for the broadest swath of the population possible. Based on accessibility guidelines, inclusive design practices ensure that people with disabilities are not excluded from using particular technologies.

Image titled The Principles of Inclusive Design that lists: recognize exclusion; learn from diversity; solve for one, extend to many.


Image Source: Extentia

Robust AI Technology

In the world of captioning, speech recognition technologies leverage AI to understand and process human speech. As the innovation has advanced, speech recognition has become more embedded in our everyday lives with voice-driven applications.

While automated speech recognition (ASR) is essential for Deaf and hard-of-hearing people, ASR systems still cannot transcribe content as accurately as humans. ASR also struggles with complications such as overlapping speech when there are multiple speakers. Solutions like Ava reduce these communication challenges by identifying speakers, so conversational dialogue is easy to follow.

Image of an outlined face filled with tech symbols that says Artificial Intelligence Technology.

Image Source: EDUCBA

AI + Human Intelligence

AI’s ability to transcribe and understand conversations has certainly improved with time. That said, AI alone—without human intervention or input in the process—is not reliable.

Artificial intelligence technologies have not caught up to the human brain (yet), which is far more adept at separating conversations from background noise and capturing nuance. Conversational nuances may include accents (including the voices of Deaf and hard-of-hearing speakers), foreign names, slang, innuendo, scientific jargon, industry-specific language (e.g., legalese), acronyms, and much more.

While AI can generate captions, it is by no means a perfect solution. AI lacks a certain sophistication that only humans can provide in order to reach top-level caption accuracy. Humans understand context, nuance, and the complexities of different kinds of conversations that help ensure captions are as accurate and accessible as possible.

Video Source: YouTube

7 Reasons Why Caption Accuracy & Speed Matter

  1. Compliance with major accessibility laws

    To protect the rights of disabled people and ensure they have access to the same resources as the rest of the population, several anti-discrimination laws have been enacted in the U.S. When organizations fail to comply, lawsuits often follow in response to the exclusion of a major part of the population.

  2. Improves comprehension & retention

    Accurate captions serve Deaf and hearing people alike by making content easier to comprehend and retain. More than 100 empirical studies show that captions not only increase comprehension but also serve a wider audience than those who are Deaf or hard-of-hearing: they benefit non-native English speakers and are a useful aid for children and adults learning to read.

  3. Changes the experience

    When captions do not align with the audio, the viewing experience suffers. Poorly timed captions, missing dialogue, or oversimplified text versions of dialogue make content frustrating to follow. Captions provide much-needed clarity where it counts.

  4. Misinformation management

    Incorrect spelling and punctuation, or misrepresentation of spoken words, can change the meaning of the content and make captions hard to decode. The intention of captions is to capture the complete essence of the content, including sound effects, music, and speaker identification.

  5. Content disparity

    Inaccurate captions communicate two different stories to audiences: one for those who are Deaf or hard-of-hearing, and another for hearing people. Captions that fall short on accuracy often skew messages, leaving viewers with a warped perspective of content that gets ‘lost in translation’.

  6. Farewell to concentration fatigue

    Deaf and hard-of-hearing people tend to exert more cognitive energy to process content than their hearing counterparts. Lengthy videos can cause concentration fatigue for many viewers. Accurate captions lessen these effects and aid focus by reducing cognitive overload.

  7. Sufficient time to read

    Reading closed captions while listening to audio content can be a helpful learning aid that reinforces the information presented, but not if the timing is too short or out of sync. Out-of-sync closed captions can be distracting, frustrating, and often enough to make viewers tune out. Keeping captions in line with the audio content increases visual impact and viewer engagement, expands the audience to viewers who speak different languages, and paves the way for accessibility.

Video Source: YouTube

“Train gone, sorry” is a phrase all too common among Deaf and hard-of-hearing people. In Deaf culture, it means you’ve missed what was said and the conversation has already moved on. When captions don’t meet speed expectations, viewers can become disengaged and feel left out.

With minimal captioning lag, Deaf and hard-of-hearing people are able to better contextualize what is happening in real time so they can respond in a timely manner to be part of the conversation. Ava pushes the boundaries of captioning speed to enhance communication and serve Deaf inclusion and participation in conversations.

Image with blue background and white text that has a text from an Ava user and reads "I can finally sit around the campfire and feel like I'm a part of things instead of an intruder." Jeanna Meade

Image Source: Ava

FCC Closed Captioning Rules

FCC captioning rules are intended to ensure that viewers who are Deaf and hard-of-hearing have full and equal access to TV programming. By following these rules, video creators can help create an inclusive and accessible viewing experience.

  • Accurate: Captions must accurately and comprehensively convey the audio content of the programming, including dialogue, music, and sound effects.

  • Synchronized: Captions must be synchronized with audio content and must be displayed at a speed that can easily be read by viewers. A delay of no more than two seconds between the audio and the corresponding captions is recommended.

  • Complete: Captions must be included from the beginning until the end of a program, conveying any and all relevant information.

  • Properly placed: Captions must not block important visuals, overlap one another, run off the edge of the screen, or be obscured by dark or conflicting visuals.

Infographic that states the four FCC closed captioning rules: accurate, synchronized, complete, placement.

Image Source: Ava
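To make the synchronization rule above concrete: caption timing is typically encoded in caption files such as WebVTT, where each cue carries a start timestamp. As a rough illustration (not an official FCC tool, and using hypothetical cue data), a short Python sketch can compare each caption cue's start time against the moment the corresponding audio begins and flag cues that exceed the recommended two-second delay:

```python
# Rough sketch (not an official FCC tool): flag caption cues that lag the
# audio by more than the two-second synchronization guideline cited above.
# All cue and audio times below are hypothetical, illustrative values.

def to_seconds(timestamp: str) -> float:
    """Convert a WebVTT-style 'HH:MM:SS.mmm' timestamp to seconds."""
    hours, minutes, seconds = timestamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

def late_cues(cues, max_delay=2.0):
    """Return cues whose caption start lags the audio by more than max_delay.

    Each cue is a (caption_start, audio_start) pair of 'HH:MM:SS.mmm' strings.
    """
    return [
        (caption, audio)
        for caption, audio in cues
        if to_seconds(caption) - to_seconds(audio) > max_delay
    ]

# Hypothetical example: the second cue appears 3.5 seconds after its audio.
cues = [
    ("00:00:05.000", "00:00:04.500"),  # 0.5 s delay: within the guideline
    ("00:00:13.500", "00:00:10.000"),  # 3.5 s delay: too slow
]
print(late_cues(cues))  # flags only the 3.5 s cue
```

A real workflow would parse the timestamps directly out of a caption file and align them against a word-timed transcript, but the same delay check applies.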

Legal Requirements for Accessibility

In addition to following FCC closed captioning rules, other closed caption compliance laws have been created to ensure all content remains as accessible and equitable as possible.

Americans with Disabilities Act (ADA)

Established in 1990, the ADA requires places of public accommodation to be made accessible to those with disabilities. Although the law doesn’t explicitly state that online spaces count as public settings, that ambiguity doesn’t diminish the importance of captioning. With a rise in ADA lawsuits, it’s become clear that captions are necessary to make public-facing content accessible, and that content must be as clear and understandable to people with disabilities as it is to people without them.

Section 508 of the Rehabilitation Act

Section 508 is a federal law that requires electronic and information technology (EIT) used by the federal government to be accessible to people with disabilities including those who are blind, have low vision, are Deaf or hard-of-hearing, or have mobility impairments. Emails, websites, and other online content are examples of communication technologies that must be made accessible. As a result, all federal online video content is required to be captioned.

Under section 508, EIT must be designed and developed in a way that allows people with disabilities to use it effectively, and must be compatible with assistive technologies such as screen readers, braille displays, and speech recognition software.

Infographic that lists the key features of 508 compliance.


Image Source: Internet Devels

21st Century Communications and Video Accessibility Act (CVAA)

The goal of the CVAA is to increase the accessibility of “modern devices” and ensure that people with disabilities have equal access to the same content and information as everyone else. Signed into law in 2010, the act mandates that video programming shown on TV with captions must also be captioned when delivered online, meeting standards for the quality, timing, and placement of captions on streamed video content.

Top-Level Captioning Solution

"What are closed captions?" is a question that everyone should ask if they do not already know. The rapid increase of online content and video options combined with the growth of the Deaf and hard-of-hearing population makes captioning more important than ever.

For businesses requiring FCC, ADA, 508, or CVAA compliance, Ava provides a premium solution. The speech-to-text technology guarantees fast, accurate captions, which can help ward off legal trouble in regards to compliance. Efficient and effective technical assistance levels the playing field and delivers benefits to everyone—Deaf, hard-of-hearing, people with disabilities, and hearing people too.