Closed Captioning for Church Online

Since the Rock Church San Diego launched Closed Captioning for the 10am livestream recently, I’ve had several inquiries about how we went about adding this essential feature.

I figured the best way to answer this is simply to write about it. So, in what is hopefully the first of many posts regarding church technology projects, let’s begin.

According to the U.S. Department of Health and Human Services, approximately 17 percent (36 million) of American adults report some degree of hearing loss. Source: Deaf Statistics for USA

That’s a number that can’t be ignored. If we’re preaching the Gospel through new mediums, including online live streaming, we must reach these people. Granted, those statistics represent the entire spectrum of hearing loss, from mild to severe. Nevertheless, we see Closed Captioning and subtitles everywhere these days: YouTube (though its automatic captioning leaves a lot to be desired), TED Talks, Hulu, Netflix, and news outlets, to name a few.

Briefly, there is a difference between Closed Captions and Subtitles. Closed Captions are text data embedded in the video signal itself; Subtitles, on the other hand, are a separate entity, usually layered on top of the video.

I simplify it like this:

  • CC for LIVE
  • Subtitles for On-Demand

There’s a reason for this uptick in providing captions: not only is it the right thing to do, but the FCC has legal mandates. I am NOT a lawyer, nor do I profess to be, but a quick Google for “FCC closed captioning web” will return a lot of helpful information. You may be subject to regulation, you may not. I cannot and will not answer that question (seek qualified legal counsel). Now that you’ve got my disclaimer, let’s move on…

The Rock Church uses HDSDI video distribution along with AES audio.

There have been some upgrades since that last blog post. We now use Digital Rapids StreamZ encoders (StreamZHD & StreamZ Live), and we are now partnered directly with Akamai, no longer going through the channel partner network. This has afforded us great opportunity and has made setup and upgrades much easier. We can now lean directly on Akamai’s technical support team, and they are fantastic! I have had direct communication with their product manager for the AMP (Akamai Media Player, the Flash player used by NASA, Fox News, and now the Rock Church San Diego).

Anyway, back to the story at hand…

My colleague and I visited NAB in Las Vegas a few months ago to investigate several technologies, including Closed Captioning hardware and workflow. We came away pretty confident that this would be a fairly easy and relatively cheap win for our congregation and viewers in general. Here’s the flow:

  1. The HD490 Smart Encoder by EEG Enterprises <– the actual hardware box to make the CC. (list price is around $8,000)
  2. Live-writer captioning provided by Aberdeen <– the people making the magic happen. (list price is around $150 per hour)
  3. Slight modification to our AMP (Flash Player) code (free)

That’s it. Seriously, that’s it. For under $10k for the hardware, you can be up and running with this. Let’s break down the signal flow a little more:

  1. Get the Video/Audio into the Caption Encoder – We take an output from our video router (HDSDI & separate AES) and feed it into the HD490 caption encoder.
  2. Send Audio to the Live Writer – EEG provides a lifetime-free, IP-based service called iCap that is easy to configure. The HD490 unit extracts the audio and ships it to the iCap service in the cloud.
  3. Schedule the login time for the Live Writer – We contract with the AberCap service from Aberdeen; they log in to the iCap portal, listen to our service, and write back to the encoder. We schedule ahead of time with the AberCap team to have a writer in place each and every week, and we try to secure the same writer (they use court reporters – stenographers with short-hand keyboards – so they’re able to keep up with even pretty quick speakers).
  4. Writer captions the service and sends CC back to us – Once the writer logs on, they listen to the speaker and write back immediately. The HD490 then outputs the AES / HDSDI signal BACK into our Concerto video router.
  5. Ensure the Livestream encoder(s) can see the CC data – Our web encoders (Digital Rapids StreamZHD & StreamZ Live) watch the output from the CC encoder and are configured to look for CC information in the SDI ancillary data stream – these are standards-based systems that should just work. Our CC encoder spits out the embedded caption text onto the CC1 channel (see this Wiki article for more info on digital Closed Caption standards).
  6. Send the captions to the web along with the video – The encoder web profile is set up to extract the CC information and embed it in the Flash stream (RTMP) that we send to Akamai’s CDN.
  7. Ensure all delivery methods support CC – Akamai extracts the CC data from the RTMP stream and converts it for use on both HDS and HLS platforms. This means Flash AND Apple HLS.
  8. Apple’s implementation just works – As far as iOS / HTML5 (in supported browsers) and some Android devices are concerned, that’s it: if CC is enabled in the device settings, the caption text will appear shortly after the video and audio (usually around 3-5 seconds).
  9. Make a slight configuration change to the Flash Player on the website – There was an initial configuration update to make to the AMP Flash Player to tell it to look for captions, and that was it.
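For the HLS side of the flow above, embedded CEA-608 captions (the CC1 channel mentioned earlier) are advertised to players through the master playlist. This is a minimal sketch per the HLS specification – the stream name, bitrate, and codecs here are hypothetical, not taken from our actual Akamai configuration:

```
#EXTM3U
#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="cc",NAME="English",LANGUAGE="en",INSTREAM-ID="CC1"
#EXT-X-STREAM-INF:BANDWIDTH=1200000,CODECS="avc1.4d401f,mp4a.40.2",CLOSED-CAPTIONS="cc"
stream_1200k/index.m3u8
```

The `INSTREAM-ID="CC1"` attribute is what tells an iOS device which embedded caption channel to render when CC is enabled in the device settings.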

The on-going costs run under $150 per service. This is the cost of contracting a live person to do the actual captioning, which, when you think about it, is pretty good. Someone is sitting there in the US with context regarding your organization and mission (the nice thing about AberCap is they work with faith-based organizations a lot and get our goal!). They are listening to the whole sermon and keeping up with the speaker(s), writing back every word in real-time… It’s impressive!

So, now we have delivered live Closed Captioning for the livestream audience. What about on-demand? I hear you ask. Here’s the nice thing about this workflow: AberCap records a transcript of every word spoken during the recording. That means you not only get live CC, but also a recorded transcript of the message (perhaps for your Pastor’s notes, website, etc.).

Now they can take that transcript and turn it into a format useful for on-demand.

AberCap takes that transcript and a proxy file (the recorded sermon, perhaps edited to remove the Altar Call, Worship, etc. for licensing reasons) and turns it into a subtitle file. We have them generate both an SRT file and a VTT file.

The SRT file format is what YouTube uses. Once our messages are uploaded, we disable the automatic subtitle track and upload the edited SRT file.
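For anyone unfamiliar with the format, SRT is plain text: a sequence number, a start/end timestamp (with a comma before the milliseconds), and the caption text, with cues separated by blank lines. A made-up fragment (the caption text is just illustrative):

```
1
00:00:01,000 --> 00:00:04,000
Good morning, and welcome to the Rock Church.

2
00:00:04,500 --> 00:00:08,200
Please open your Bibles to John chapter three.
```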

Right now we are not using the VTT file; however, this format is the preferred format for Apple HLS on-demand, and we want to have it on hand for when we start using that format for playback.
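Since VTT is so close to SRT, converting between them is mechanical: WebVTT adds a `WEBVTT` header and uses a period instead of a comma as the millisecond separator. Here’s a minimal Python sketch to illustrate the difference – this is not part of our workflow (AberCap delivers both files for us), just a demonstration:

```python
import re


def srt_to_vtt(srt_text: str) -> str:
    """Convert SubRip (SRT) caption text to a basic WebVTT document.

    WebVTT is nearly identical to SRT for simple cues: it adds a
    WEBVTT header line and swaps the comma in timestamps
    (00:00:01,000) for a period (00:00:01.000).
    """
    # Replace the millisecond separator only where it follows an
    # hh:mm:ss timestamp, so commas in caption text are untouched.
    vtt_body = re.sub(
        r"(\d{2}:\d{2}:\d{2}),(\d{3})",
        r"\1.\2",
        srt_text.strip(),
    )
    return "WEBVTT\n\n" + vtt_body + "\n"


srt = """1
00:00:01,000 --> 00:00:04,000
Good morning, and welcome.
"""
print(srt_to_vtt(srt))
```

Running this prints the same cue with a `WEBVTT` header and `00:00:01.000 --> 00:00:04.000` timestamps.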

We could also take this transcript/subtitle file and ask our volunteers to help translate it into various foreign languages. Then we would have true global reach with the message…

The Rock Church livestreams all 5 services on Sundays: 8am, 10am (CC), 12pm, 5pm, & 7pm Pacific Time. Check it out here.

Written by: Simon Roberts

Simon is a DevOps Engineer