Forbes Interview with Ken Harrenstien, YouTube’s “Jedi” of Captioning

March 7, 2012
YouTube’s Closed Caption ‘Jedi’: On CC Upgrades, Searchability And Why It’s Personal

By Michael Humphrey, Forbes 2/29/12

Ken Harrenstien’s colleagues at Google call him the “Caption Jedi.” Totally deaf since the age of 5, the Senior Software Engineer has arguably done as much for online video captioning as anyone else in the world. As the lead developer responsible for the captioning technology in all of Google’s video services, including YouTube, his work serves billions of views daily with captions for millions of videos.

More than two years ago, Harrenstien made news by announcing automatic captioning, a major step forward in adding text to the millions of amateur videos on YouTube. Yesterday, YouTube published a blog post about the most recent improvements to that technology: adding Japanese and Korean to its automatic captioning capabilities, enabling search of caption content within videos, offering TV-style caption display options, and supporting more file formats for uploading captions.

This isn’t the first time Harrenstien, a key ARPANET/Internet developer in the early 1970s after graduating from MIT, has done important work for the deaf community. A former employee of Transmeta, Oracle, and SRI International, he was instrumental in several projects related to telecommunications for the deaf, notably DEAFNET.

He says that for much of his life, he had no idea what people were saying on TV. Now he’s addicted to “Futurama” and “Mythbusters.” In this email interview, Harrenstien explains the cc technology, why Google is putting time and money into perfecting it, and what the future holds for online video captioning.

In making these latest rounds of improvements, what were your overarching priorities?

Our singular focus is captions everywhere — every video and every platform.

Video is a big part of the Web, and it’s important to make that content accessible. For myself, I wanted captions on the web to have the same capabilities as the closed captions I enjoy on TV. On top of that, if we made it really easy for broadcasters and video owners to add captions, we could help them take advantage of captions as they bring their content to the Web. We were also excited about the goals of the Communications and Video Accessibility Act (CVAA) — I was invited to Washington when the Act was signed by President Obama — and we knew we could use technology to demonstrate that captioning for the Web is achievable.

The web is also global. I might need captions because I’m deaf, but for a worldwide audience, captions aren’t just an accessibility concern. By adding closed captions, you can open up your video to viewers who don’t speak your language.

This goal of universal accessibility is also directly aligned with Google’s mission to “organize the world’s information and make it universally accessible and useful,” and each year we’re taking steps towards this goal.

Could you explain how automatic captioning works?

YouTube uses Google’s speech recognition technology to recognize the spoken words in a video and generate closed captions from them. The feature is still experimental, so viewers have to turn it on by choosing “Transcribe audio” in the caption menu.
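For readers curious about the mechanics, here is a rough sketch of the last step of such a pipeline: taking a recognizer’s output (words with start and end times) and grouping it into timed cues in the SubRip (.srt) caption format, one of the file formats YouTube accepts for caption uploads. This is not YouTube’s actual code; the function names and the words and timings below are invented for illustration.

# Illustrative sketch only, not YouTube's captioning pipeline.
# Turns hypothetical speech-recognition output (word, start, end in seconds)
# into SubRip (.srt) caption cues.

def to_srt_timestamp(seconds):
    """Format seconds as an SRT timestamp, e.g. 00:00:03,250."""
    ms = int(round(seconds * 1000))
    hours, ms = divmod(ms, 3600000)
    minutes, ms = divmod(ms, 60000)
    secs, ms = divmod(ms, 1000)
    return "%02d:%02d:%02d,%03d" % (hours, minutes, secs, ms)

def words_to_srt(words, max_words_per_cue=7):
    """Group (word, start, end) tuples into numbered SRT cues."""
    cues = []
    for i in range(0, len(words), max_words_per_cue):
        chunk = words[i:i + max_words_per_cue]
        start, end = chunk[0][1], chunk[-1][2]
        text = " ".join(w for w, _, _ in chunk)
        cues.append("%d\n%s --> %s\n%s\n" % (
            len(cues) + 1, to_srt_timestamp(start), to_srt_timestamp(end), text))
    return "\n".join(cues)

if __name__ == "__main__":
    # Hypothetical recognizer output: (word, start_time, end_time) in seconds.
    recognized = [
        ("welcome", 0.0, 0.4), ("to", 0.4, 0.5), ("the", 0.5, 0.6),
        ("channel", 0.6, 1.1), ("today", 1.3, 1.7), ("we", 1.7, 1.8),
        ("test", 1.8, 2.2), ("automatic", 2.2, 2.9), ("captions", 2.9, 3.5),
    ]
    print(words_to_srt(recognized))

Running the sketch prints two numbered cues with start/end timestamps, the same structure a viewer-facing caption track carries once it is uploaded or generated automatically.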

NVRC Note:

Read the full article to learn more about transcript sync, which synchronizes your text with a video’s audio, the caption search feature, and more:

http://www.forbes.com/sites/michaelhumphrey/2012/02/29/youtubes-closed-caption-jedi-on-cc-upgrades-searchability-and-why-its-personal/