Monday, April 21, 2008

Reestablishing Context of Activity

One thought: this might be part of LiveScribe's power - it easily preserves and presents a visual and audio record of activity.

Two Experiments to look at this question

1. Film 102C groups as they work on projects. Provide them with various kinds of representations, or none - perhaps one group with an audio record only, one with audio and digital images, one with video and audio. Which groups are faster to get back on task? Or which representations help a group get back on task more quickly? (This is a possible design only if there will be multiple weeks with similar activities taking place, e.g. writing.)

2. Film vs. photos + podcast of a 102C lecture.
Which students are better at recalling details of the lecture, or which kinds of representations help the same students better recall details of the lecture - e.g. what we went over last week?

Jim: Very natural to be describing it at a somewhat abstract level while watching my screen capture. Hal Pashler - does he know the literature on re-evoking context or aiding memory?

Edit snippets from the videos and have people say 'me' or 'not me'.

Make up a web task - can you tell 'me' from 'not me'?
Don't let them finish up
Next week, show a sped-up version of what they did
Reload context - it isn't in the recorded material itself but in the inferences
Making recommendations about printers
Quantify interruption cost
Group 1 - does entire task
Group 2 - gets interrupted
Group 3 -

Talk to me about what you were doing here - someone else's video vs. your own


Ed's Cascade of Representations

Rough table of contents - time codes, what's going on

"Event Table" - Spreadsheet with more events broken down - columns for Event/Timecode/Speaker/Speech/Gesture/Framegrab

How are decisions made about a publication?
How are final transcripts created?

Superimposing transcript over a map - works for a route

Tuesday, April 15, 2008

Cartoon Creator Application

A cartoon creator application would be a nice way to explore tasks of summarizing video content: navigating and annotating/transcribing videos, selecting specific "interesting" portions, composing a set of frames, and fine-tuning a sequence and framing for a comic strip. This project will allow us to explore a variety of interaction techniques and achieve a level of directness that is not possible in conventional interfaces.
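
As a rough sketch, each cartoon frame could carry enough structure to stay linked back to its span of source video; the names below are assumptions for illustration, not a settled design:

    from dataclasses import dataclass

    @dataclass
    class CartoonPanel:
        """One panel of the comic strip, linked back to the source video."""
        source_video: str   # path to the source clip
        start_tc: float     # start of the spanned video, in seconds
        end_tc: float       # end of the spanned video, in seconds
        frame_image: str    # path to the chosen framegrab
        crop: tuple         # (x, y, width, height) framing within the image
        caption: str = ""   # annotation / transcription for this panel

    # A strip is just an ordered sequence of panels that can be re-tuned later.
    strip = [
        CartoonPanel("session1.mov", 12.0, 18.5, "grabs/p01.png",
                     (40, 10, 320, 240), "P1 explains the route"),
    ]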

----- Styli

Any stylus will work. Ordinary burnishers from art stores have a nice range of rubbing areas depending on how they are articulated, from very fine and delicate rubs to wide and bold rubs. Many varieties of custom "brushes", with or without embedded LEDs, are very simple to construct.

Another idea would be to have the line thickness controlled with the non-dominant hand by moving along a slider (as in Photoshop). This brings up the interesting question of the feeling of directness - is it better to have the line thickness controlled directly (WYSIWYG) or via a slider (which involves a level of abstraction)?
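
A minimal sketch of the slider variant, assuming a normalized position from the non-dominant hand is mapped onto a stroke-width range (the exponential mapping is just one choice):

    MIN_WIDTH = 1.0    # thinnest stroke, in pixels
    MAX_WIDTH = 60.0   # boldest stroke, in pixels

    def brush_width(slider_pos: float) -> float:
        """Map a non-dominant-hand slider position in [0, 1] to a stroke width.

        An exponential mapping gives finer control at the thin end, which
        matters more for delicate strokes.
        """
        slider_pos = max(0.0, min(1.0, slider_pos))
        return MIN_WIDTH * (MAX_WIDTH / MIN_WIDTH) ** slider_pos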

A particular stylus that I have been thinking about uses three IR LEDs for positional tracking when not in contact with the surface. By positional, I mean all six degrees of freedom {x, y, z, pitch, yaw, roll}.
I'd be interested to understand more. How would a stylus that does not need to be in contact with the surface be of use for making cartoons? Any possible functions?

----- Framing tool

A frame - a rectangle or other arbitrary shape - is scaled and rotated as desired by dragging it at two points. But rather than placing the frame over an image, an image is dragged into it. While the image is scaled, rotated, and moved with the frame, the portions extending beyond the frame are ghosted. When not being manipulated, the portions beyond the frame become invisible.

It's the same old "using two fingers to move, scale, and rotate", except now two objects (frame and image) are interacting with each other. It should be very easy to implement.
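
A minimal sketch of the underlying math, assuming each touch update reports the old and new positions of the two contact points; the resulting translation, rotation, and uniform scale can then be applied to whichever object (frame or image) is being manipulated:

    import math

    def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
        """Translation, rotation, and uniform scale implied by two moving touches.

        Each point is an (x, y) tuple. Returns (dx, dy, angle, scale), where
        rotation and scale are taken about the old midpoint of the touches and
        (dx, dy) moves that midpoint to its new position.
        """
        # Vector between the two fingers, before and after the move.
        vx_old, vy_old = p2_old[0] - p1_old[0], p2_old[1] - p1_old[1]
        vx_new, vy_new = p2_new[0] - p1_new[0], p2_new[1] - p1_new[1]

        scale = math.hypot(vx_new, vy_new) / math.hypot(vx_old, vy_old)
        angle = math.atan2(vy_new, vx_new) - math.atan2(vy_old, vx_old)

        # Translation of the midpoint between the two fingers.
        mid_old = ((p1_old[0] + p2_old[0]) / 2, (p1_old[1] + p2_old[1]) / 2)
        mid_new = ((p1_new[0] + p2_new[0]) / 2, (p1_new[1] + p2_new[1]) / 2)
        dx, dy = mid_new[0] - mid_old[0], mid_new[1] - mid_old[1]

        return dx, dy, angle, scale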

A sequence of cartoon frames remains interactive, so an ethnographer can further tune them as a gestalt of the cartoon emerges.

A sequence of cartoon frames also becomes a means of navigating the source video. For example, pressing on two frames can play the portion of video spanning them. Alternating pressure between two cartoon frames can fast forward or rewind, back and forth, across the span of video between the two cartoon frames.

YES. This is one of the most exciting and important aspects of the application. It would be wonderful both to use it for analysis and to be able to "export" some kind of file with embedded video links. (Yes, this can be built to some degree in Adobe software like Acrobat Pro, but it is darn tedious. It would be so nice if it were automatic, not requiring the user to do it all over again.)
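
A minimal sketch of that navigation idea, reusing the hypothetical CartoonPanel structure above; the player object is a stand-in for whatever video backend is used, assumed here to expose a play(path, start, end) call:

    def span_between(panel_a, panel_b):
        """Video span covered by two pressed panels, regardless of press order."""
        start = min(panel_a.start_tc, panel_b.start_tc)
        end = max(panel_a.end_tc, panel_b.end_tc)
        return start, end

    def play_between(player, panel_a, panel_b):
        """Play the source video between two pressed panels.

        Assumes both panels come from the same source video.
        """
        start, end = span_between(panel_a, panel_b)
        player.play(panel_a.source_video, start, end)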

----- Bootstrap Applications

Could we use the Photoshop API for the line drawing maker? There are so many features in Photoshop that I would love to explore using multitouch. I am enthusiastic about the level of directness and the large canvas size that multitouch enables. Can Photoshop handle multitouch? How would it deal with such a thing? Can we modify it to do so? I can imagine that having the non-dominant hand control settings while the dominant hand is drawing would be quite useful. Also, think about how easy tasks like erasing will be with multitouch!

I'm also quite taken with iDive as a digital video storage application. It seems like it would be quite useful as a manager for the video files and for easy selection of representative frames.

I think the QuickTime API can "talk to" other applications so it's possible