Why Accessible Video and Audio Content Matters
Accurate captions, narration, and transcripts allow for broader access to and easier comprehension of information conveyed through video and audio media.
What are we talking about?
Captions are synchronized transcriptions of all speech and meaningful non-speech audio content. (In contrast, subtitles are synchronized transcriptions of dialogue only.) They primarily benefit users who are Deaf or hard-of-hearing, but improve the experience for many, including individuals with cognitive disabilities and those whose primary language isn’t English. Captions can be closed (able to be turned on or off by the viewer, usually via a “CC” button) or open (fixed into the video and always on), and come in two forms:
Live Captioning: Also known as Communication Access Real-time Translation (CART), live captioning is a professional service in which a human transcriptionist uses a steno machine to transcribe audible information, displayed either on a large screen at in-person events or streamed via a web conferencing platform. For more information, visit Guidelines for CART and ASL Interpreting.
Automated Captioning: Uses AI or other speech-recognition technology to produce machine-generated captions of spoken content, for either pre-recorded or live video.
Transcripts are text descriptions of video and audio content. While they primarily benefit individuals who are Deaf, hard of hearing, or otherwise have difficulty processing auditory information, transcripts enhance the experience for many individuals. Transcripts can also be converted into braille to be read by a person who is deafblind using a refreshable braille output device.
Narration: Individuals who are blind or have low vision miss out on essential visual content presented in videos when narration is absent or incomplete. Ideally, the natural audio of the video should include descriptive narration such that a person who is blind can understand the visual content. This includes any text presented on screen.
Audio Description (AD): Where native narration is insufficient, content creators can use tools such as YouDescribe to add AD themselves, or they can contract professional AD services.
How To
General Practice
Ensure accurate captions for videos. Kaltura MediaSpace is an excellent and easy-to-use tool for applying and editing automated captions on pre-recorded videos. Simply uploading a video file into Kaltura will initiate the generation of automated captions, and you can link to or embed the Kaltura video on a webpage or in your Blackboard Ultra course. If using video content from outside sources, make sure those videos have captions and inspect them for accuracy.
Offer Transcripts. Audio-only content must be accompanied by a transcript. Make text versions of video content available whenever possible. Kaltura allows you to upload transcripts as attachments to videos, and some YouTube videos have transcripts available as well. Post the transcript close to the audio/video file, or somewhere easy to locate.
Narrate all meaningful visual content. When creating video content (or recording a lesson to make available later), consider audiences who cannot see what you’re referencing, and narrate accordingly. For example, instead of saying, “Review the bullets on the slide,” read the bullets aloud. Instead of saying, “Click here, then go here,” say, “From the navigation bar, select New, then choose Event.”
Technical Tip
Test your Media Player. Use the Tab key to move through all video and audio controls, and use the Enter and arrow keys to activate them, to confirm the player is fully operable without a mouse.
What to Avoid
- Assuming automated captions are accurate: Always inspect for errors and edit where possible.
- Omitting key information from captions, transcripts, or narration: excluding essential audio or visual content makes that information unavailable to users with auditory or visual disabilities.
Examples
Captions
Always check auto-generated captions for errors and edit them where possible. Common error types are outlined in the following table.
Error Type | Accurate Caption | Auto-generated Caption |
---|---|---|
Misinterpretation of words | “I’m here to assist.” | “I’m here to a cyst.” |
Misheard technical terms | “MP3,” “AI,” “kilobyte” | “mP-three,” “aye eye,” “killer bite” |
Missing or incorrect punctuation | “Let’s eat, Grandma.” | “Let’s eat Grandma!” |
Missing important non-verbal sounds | [dog barks to alert owner], [door slams], [school bell rings], etc. | Missing captions for background sound |
Missing speaker identification | Speakers are identified for clarity | Some or no speakers are identified, or names are incorrect |
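Under the hood, captions are typically stored in a plain-text timed format such as WebVTT, the standard closed-caption format for web video players. The minimal sketch below (timestamps, speaker names, and dialogue are illustrative, not from any real video) shows how speaker identification and meaningful non-speech sounds from the table above are encoded, which is useful to know when editing caption files directly:

```vtt
WEBVTT

00:00:01.000 --> 00:00:03.500
<v Professor>Let's eat, Grandma.

00:00:04.000 --> 00:00:05.500
[school bell rings]

00:00:06.000 --> 00:00:08.500
<v Teaching Assistant>I'm here to assist.
```

The `<v Name>` voice tag identifies each speaker, and bracketed cues like `[school bell rings]` convey meaningful non-speech audio; both are exactly the elements automated captioning tends to omit.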
Narration
Instead of the speaker saying: | The speaker can say: |
---|---|
As you can see on this chart, sales increased significantly from the first quarter to the second quarter. | This chart shows that sales increased significantly, from 1 million in the first quarter to 1.3 million in the second quarter. |
Stir the mixture until it looks like this. | Stir the mixture until the oil, vinegar, and spices are well combined. |
Attach this to the green end. | Attach the small ring to the green end, which is the larger end. |
Additional Resources
Ally is an excellent course accessibility evaluation tool available to instructors within Blackboard Ultra. Always check the “How to…” guidance within your Blackboard Ultra course, and visit Using Ally in Blackboard Ultra for more information.
When scanning a webpage using the WAVE web accessibility evaluation tool, you may encounter errors or alerts, indicating accessibility barriers. Pope Tech offers the following guidance on addressing these issues:
Errors
Alerts