How Does Closed Captioning Work?

If you’ve ever turned on the closed-captioning option on your TV and marveled at the difference between the on-screen action and the words you’re reading, you have an idea of how tricky it can be to accurately convey subtleties of language and emotion to the hearing impaired.

Pity the poor typist trying to capture all the fast-paced dialogue, blood-curdling screaming, and sword-slashing sound effects of the Red Wedding scene from Game of Thrones last season. Not to mention all those medieval-sounding place names.

Talk about epic. Who’s behind all those keystrokes, anyway?

The National Captioning Institute (NCI) has managed the bulk of captioning duties for TV since the 1970s. Based in Chantilly, VA (just outside of Washington D.C.), Dallas, TX, and Burbank, CA, the company’s logo—a small TV-shaped speech balloon—appears briefly at the top right corner of your screen when an NCI-captioned program begins.

“It’s like [viewers] never noticed it because they think that’s the logo for closed captioning,” says Juan Mario Agudelo, Director of Sales & Marketing at NCI. “We’re the most unknown known little company in the world.”

In 2002, NCI captioned 68,000 hours of programming, the majority of it coming from typists pounding away on stenography machines—like the ones you see in courtroom scenes on Law & Order. Last year, NCI exceeded 120,000 hours.

When captioning in “real-time” for live sports events, NCI stenographers can hit speeds of more than 225 words per minute. “They need to be highly skilled and have great ability to retain information,” says Agudelo. “They have to be able to listen very carefully, they need to be able to multitask, and they need to [type] three to four times faster than your average office worker can type.”

In addition to using stenos, NCI also employs voice-writing technology, which allows a captioner to speak dialogue, emotional descriptions, and sound effects directly into a “stenomask” connected to a computer. Voice-recognition software handles transcription.

Earlier this year, in keeping with the boom in programming, especially in streaming content, the FCC introduced a variety of new captioning regulations. As of March 15, the Twenty-First Century Communications and Video Accessibility Act requires that all TV shows must have captions within 45 days after the date they first air. In 2016, the limit drops to 15 days.

The Act also stipulates that captioning run for the entirety of the program, not obscure on-screen information, and more accurately reflect dialogue and sound effects. To make the job easier, video programmers must provide captioning vendors like NCI with advance access to scripts, song lyrics, and names. (Just in case “Daenerys Targaryen” somehow didn’t come naturally to captioners.)

Further, the Act requires that technology to turn on captions be present in new devices of any size, including cable set-top boxes, tablets, Blu-ray players, and smartphones.

In the past, some shows went uncaptioned unless online volunteers picked up the slack for the hearing-impaired by captioning the content themselves using crowd-sourced subtitling software. Now, no matter who types the words, you can expect more and better blood-curdling screams to appear at the bottom of your screen.
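The article doesn’t name a format, but as an illustration, volunteer and crowd-sourced subtitles are commonly shared as SubRip (.srt) files: a plain-text format of numbered cues, each with a start and end timestamp and the text to display. Sound effects are conventionally set off in brackets. A minimal sketch, with invented timestamps:

```
1
00:00:01,000 --> 00:00:03,500
[blood-curdling scream]

2
00:00:03,600 --> 00:00:06,200
The Lannisters send their regards.
```

Timestamps are in hours:minutes:seconds,milliseconds; a player shows each cue on screen between its start and end times, which is how those screams end up at the bottom of yours.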
