ACTION (Audio-visual Cinematic Toolbox for Interaction, Organization, and Navigation): an open-source Python platform
FAIN: HD-51394-11
Dartmouth College (Hanover, NH 03755-1808)
Michael A. Casey (Project Director: March 2011 to April 2014)
Mark J. Williams (Co-Project Director: March 2011 to April 2014)
The development of a platform to support the computational analysis of film and other audio-visual materials. The platform would provide features such as automatic detection of shots and scenes, analysis of soundtracks, and overall content analysis.
Audio-visual media have become ubiquitous as computing has taken a central position in everyday life. Yet methodologies and tools for supporting humanities research with computational techniques, such as automatic shot-boundary detection, remain nascent. ACTION seeks to provide free and open-source computational tools, along with best-practice documentation, for new media-analytic methodologies based upon machine-vision and machine-hearing algorithms and software. We anticipate that automatic shot-boundary detection, scene-boundary detection, soundtrack analysis, structure segmentation, and other methods will lead to new insights into the development of film editing styles, scene composition, lighting, sound, and narrative construction. Building upon previous open-source frameworks, such as OMRAS2, AudioDB, Sphinx, Bregman, and OpenCV, ACTION will be a platform consisting of worked use-case examples in computational cinematics for future humanities researchers to extend.
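To make the kind of analysis described above concrete, the sketch below illustrates one classic approach to automatic shot-boundary detection: comparing intensity histograms of consecutive frames and flagging a cut when they differ sharply. This is an illustrative example in plain Python, not ACTION's actual API; the function names, the bin count, and the threshold are all assumptions chosen for clarity.

```python
# Illustrative sketch (not ACTION's actual API): histogram-difference
# shot-boundary detection. Each frame is modeled as a flat list of
# 8-bit grayscale pixel values; a real pipeline would decode video
# frames with a library such as OpenCV.

def histogram(frame, bins=16):
    """Normalized intensity histogram of one frame."""
    counts = [0] * bins
    for px in frame:
        counts[min(px * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in counts]

def detect_cuts(frames, threshold=0.5):
    """Indices of frames whose histogram differs sharply from the
    previous frame (L1 distance above threshold), i.e. likely cuts."""
    hists = [histogram(f) for f in frames]
    cuts = []
    for i in range(1, len(hists)):
        distance = sum(abs(a - b) for a, b in zip(hists[i - 1], hists[i]))
        if distance > threshold:
            cuts.append(i)  # frame i begins a new shot
    return cuts

# Synthetic clip: three dark frames, then three bright frames (a hard cut).
dark = [30] * 64
bright = [220] * 64
clip = [dark, dark, dark, bright, bright, bright]
print(detect_cuts(clip))  # → [3]
```

Gradual transitions such as dissolves and fades spread the histogram change across many frames, which is why the literature (and presumably a mature platform) supplements simple frame-to-frame differencing with windowed or cumulative measures.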