Here is an example of pairing video with a pencast: in this case, Martin Luther King's 'I Have A Dream' speech with a written transcript.
The unique quality of the Livescribe is that it allows a user to navigate a linear event non-linearly, scanning a page and clicking on the parts of interest. In a sense, it produces manual meta-tags in the AAC audio recording of an experience.
The goal of the project is to create this kind of feature in a video file. A user can scrub the slider bar at the bottom of the video, but finding key points within the movie this way is guesswork at best.
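The navigation described above can be modeled as a set of user-designated cues mapped to timestamps in the video. A minimal sketch of that idea follows; all names here are hypothetical illustrations, not part of the project's actual tooling:

```python
import bisect

class VideoBookmarks:
    """Maps user-designated cue labels to video timestamps (in seconds)."""

    def __init__(self):
        self._times = []   # sorted timestamps
        self._labels = []  # labels, parallel to self._times

    def add(self, seconds, label):
        # Insert while keeping timestamps sorted.
        i = bisect.bisect(self._times, seconds)
        self._times.insert(i, seconds)
        self._labels.insert(i, label)

    def seek(self, label):
        """Return the timestamp to jump to for a given cue label."""
        return self._times[self._labels.index(label)]

    def nearest(self, seconds):
        """Return the label of the cue closest to a given time."""
        i = bisect.bisect(self._times, seconds)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self._times)]
        best = min(candidates, key=lambda j: abs(self._times[j] - seconds))
        return self._labels[best]

marks = VideoBookmarks()
marks.add(12.0, "dream refrain begins")
marks.add(95.5, "closing")
print(marks.seek("closing"))   # 95.5
print(marks.nearest(20.0))     # dream refrain begins
```

A player front end would simply call `seek` on click, the same way tapping a pen stroke jumps the pencast audio to the moment it was written.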
This feature could be an instrumental tool for students who rely on visual communication. It would allow a student to navigate and organize documentation of an event according to the highlights and cues they designate.
For this project, we are interested both in broad experimentation and in refining the options for generalization. We will experiment in classroom settings with fixed focal points (as in the relationship of a seated student to a stationary professor or interpreter) as well as variable focal points in a variety of combinations.
This is a series of tests we made pairing video with a Livescribe recording. From the Livescribe page, two examples were chosen and embedded with the linked audio, along with both corresponding videos. The first pairing had different start and stop times, while the second was synced.
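When the pen recording and the video start at different times, lining them up reduces to computing a constant offset between the two clocks. A minimal sketch, using assumed values rather than anything measured from the actual tests:

```python
def pen_to_video(pen_time, pen_start, video_start):
    """Convert a pen-recording timestamp to the matching video timestamp,
    given the absolute start time of each recording (in seconds)."""
    offset = pen_start - video_start  # how far into the video the pen began
    return pen_time + offset

# Hypothetical example: the video started 4 s before the pen recording did,
# so 10 s into the pen recording corresponds to 14 s into the video.
print(pen_to_video(10.0, pen_start=4.0, video_start=0.0))  # 14.0
```

With a synced pairing the offset is zero and the mapping is the identity, which is why the second test was the easier case to work with.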
Using a Canon 5D Mark II camera, we shot the first video on a tripod and experimented with motion in the second. Later we also tried external audio, testing both a shotgun mic and wireless lavalier mics.
----------------------------------------------------------------------------------------------------------------