Auto Inversion, HD 720p, 1’18” (2014)
A narrative originating in Mountain View, CA, analysed by code from Oxford that originated at UC Berkeley. The material is a monochrome version of Google's commercial marking its entrance into the field of algorithmic editing: Auto-Awesome. While Google's video analytics run in the cloud, here the video is subjected to slower, non-commercial code that runs in Matlab. This is an Upper Body Configuration detector coded by Minh Hoai, trained to recognize recurrent arrangements of bodies in American sitcoms. Here, the UBC detector finds nothing it recognizes in the frame, in part because the image has been desaturated. Of note is the still frame Google chose to assign the video on YouTube: a woman with five donuts on her fingers (at 44 seconds in). This is how at least some at Google envision automation: render the fingers (digits) unusable so the software can drive.
Hoai, M. & Zisserman, A. "Talking Heads: Detecting Humans and Recognizing Their Interactions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014).