According to the CSU Chancellor’s Office, captioning video content is usually done to meet accessibility requirements, but captioning offers much more than accessibility: it can open up new ways for all viewers to interact with your content and improve learning outcomes.
In an article in Social Inclusion (vol. 3, no. 6, 2015), Sheryl Burgstahler, Founder and Director of the DO-IT Center and the University of Washington’s Access Technology Center, points out that: “Second language learners report that captions increase their attention, improve processing of vocabulary, and reinforce previous knowledge (Winke, Gass, & Sydorenko, 2010). Several studies suggest the positive effects of captioning on recall and retention (Danan, 2004). Some evidence suggests that simultaneous text presentation, along with audio, can aid native and advanced nonnative speakers of English with word learning under certain conditions, as assessed by both explicit and implicit memory tests (Bird & Williams, 2002).”
ART 104 Spring 2015 Mediasite Lecture Capture Captioning – A Case Example
Over the course of this class, 10 lecture sessions (568 minutes) were captioned via Automatic Sync Technologies, with an average turnaround time of three days. Students were surveyed during the class and asked, “Was viewing the captions [on recorded lectures] helpful?” A number of students responded with positive comments:
“Subtitles are often helpful for me because sometimes I might miss a piece of information and reading it reduces the probability of me missing those details. Other times I might not understand what the professor/person has said so reading it helps me understand better.”
“In general, I'm a visual learner and I tend to learn better when I read/see things rather than just listening to them, it keeps me more engaged.”
"Sometimes if I read along with the audio I understand more clearly.”
“Well it's helpful because I can be certain what the speaker is saying. Sometimes even if I raise the volume it is difficult for me to hear what they are saying, or if I cannot raise the volume because I forget my headphones I can at least read what is being said.”
“They're helpful to me because sometimes I can not understand what the speaker is saying so I use subtitles [captions] to help me”
How to Live-Caption Zoom Meetings with PowerPoint
Do-It-Yourself Captioning Tutorials
The following tutorials walk through the steps for automatically creating a transcript of a video, uploading the video and transcript to YouTube to generate captions, and using the newly created caption file to caption an MP4 file that can be played from a computer with or without an Internet connection.
Step 1. Use Otter.ai to Create a Transcript from a Video – 4:20 min
Result: A text transcript of the video dialogue is created automatically
Step 2. Upload the Video and Transcript to YouTube and Create a Time Coded Transcript – 5:04 min
Result: Captions are displayed on the video in YouTube. The caption file can be edited and later downloaded for use elsewhere as an SRT or WebVTT caption text file
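The SRT and WebVTT formats mentioned above differ only slightly: WebVTT files begin with a `WEBVTT` header line, and their timestamps use a period rather than a comma before the milliseconds. If you have an SRT file from YouTube but need WebVTT for another player, a short script can convert it. This is a minimal sketch; the sample caption text is hypothetical.

```python
import re

def srt_to_vtt(srt_text: str) -> str:
    """Convert SRT caption text to WebVTT.

    WebVTT differs from SRT in two main ways:
    - the file starts with a 'WEBVTT' header line
    - timestamp millisecond separators use '.' instead of ','
    """
    # Replace the comma before milliseconds in timestamps like 00:00:01,000
    vtt = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text)
    return "WEBVTT\n\n" + vtt

# Hypothetical two-cue SRT sample for illustration
sample_srt = """1
00:00:01,000 --> 00:00:04,200
Welcome to the lecture.

2
00:00:04,500 --> 00:00:07,000
Today we cover captioning."""

print(srt_to_vtt(sample_srt))
```

Note that this sketch only handles the timestamp and header differences; WebVTT also supports styling and positioning features that have no SRT equivalent.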
Step 3. Use an SRT File to Create a Captioned Standalone MP4 – 3:30 min (Optional)
Result: Captions are burned in and permanently displayed on the video. Wherever the video is played, captions are displayed and cannot be turned off
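Burning captions into an MP4 (“open captions,” as in Step 3) is typically done with a video tool such as ffmpeg, which re-encodes the video with the caption text drawn onto each frame. The tutorial above has its own workflow, but as an illustration, this sketch assembles the kind of ffmpeg command involved. The file names are hypothetical, and ffmpeg must be installed to actually run the resulting command.

```python
import shlex

def burn_in_command(video_in: str, srt_file: str, video_out: str) -> str:
    """Build an ffmpeg command that renders SRT captions permanently
    into the video frames (open captions), re-encoding the video."""
    args = [
        "ffmpeg",
        "-i", video_in,                  # source video
        "-vf", f"subtitles={srt_file}",  # draw captions onto each frame
        "-c:a", "copy",                  # keep the audio stream as-is
        video_out,
    ]
    return " ".join(shlex.quote(a) for a in args)

print(burn_in_command("lecture.mp4", "lecture.srt", "lecture_captioned.mp4"))
```

Because the captions become part of the picture, the output plays with captions visible in any player, online or offline, which matches the “cannot be turned off” behavior described above.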
A separate stand-alone tutorial covers how to edit automatic captions on a video posted to YouTube:
CSU Chancellor’s Office – Captioning Resources
The CSU Chancellor’s Office – Captioning Resources site gives excellent information on how to prioritize captioning, who is responsible for captioning, captioning methods, and other resources, including Do-It-Yourself (DIY) methods and more.
When captioning manually or checking media captions, users must follow established captioning guidelines. The Caption Key document (link below) provides nationally accepted captioning guidelines. It was created by the Captioned Media Program (CMP) of the National Association of the Deaf (NAD), with publication funded by the Office of Special Education Programs of the U.S. Department of Education.
How to Facilitate Captioning with Automatic Sync Technologies (AST)
Automatic Sync Technologies is a paid service that offers discounted captioning to CSU campuses.
How to Caption with AST
The following AST page explains how AST creates captions and describes the workflow in detail.
How to Obtain an AST Account
Fill out the following form on AST’s website to request an AST CaptionSync Account.
AST then contacts the SDSU Captioning Administrator, Riny Ledgerwood, to validate the account request. The SDSU Captioning Administrator then contacts the department to request an Oracle account number, which will be associated with the department’s AST account.
Once the account is established, the department can submit caption and transcript jobs to AST via the AST website using its new AST account. With specific settings, some systems such as Mediasite allow captioning jobs to be submitted automatically; AST-integrated systems can shorten caption turnaround time and save time and labor. Course capture systems such as Mediasite require captioning options to be set by the system administrators. Contact ITS staff to enable this function; you will need your AST account information when calling.
How AST Charges Work
To obtain the most favorable pricing, the CSU Chancellor’s Office pre-purchases captioning hours in bulk on behalf of CSU campuses. The Chancellor’s Office pays AST directly, and the individual campuses then set up CaptionSync accounts associated with the CSU agreement.
It is recommended that received transcripts, time-coded caption files, and captioned media be examined for accuracy. Content in which the speaker has a heavy accent or uses domain-specific terminology, or media that already contains subtitles, warrants particular scrutiny. If captions are inaccurate, AST will revise the job at no extra charge.
Instructional Video Captioning Support:
Jon Rizzo – [email protected]
Student Accommodation Captioning:
Elizabeth Crosthwaite – [email protected]
Mark Cervantes – [email protected]
Web Video Content Captioning:
Rahim Baker – [email protected]