Tuesday, November 29, 2016

MIT Creates AI Able to See Two Seconds Into the Future


http://www.dailygalaxy.com/my_weblog/2016/11/mit-creates-ai-that-is-able-to-see-two-seconds-into-the-future-on-monday-the-massachusetts-institute-of-technology-announce.html


The Massachusetts Institute of Technology has announced a new artificial intelligence system. From a single photograph, it can predict what will happen next and then generate a one-and-a-half-second video clip depicting that possible future.


When we see two people meet, we can often predict what happens next: a handshake, a hug, or maybe even a kiss. Our ability to anticipate actions is thanks to intuitions born out of a lifetime of experiences.

Machines, on the other hand, have trouble making use of complex knowledge like that. Computer systems that predict actions would open up new possibilities ranging from robots that can better navigate human environments, to emergency response systems that predict falls, to Google Glass-style headsets that feed you suggestions for what to do in different situations.

This week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) made an important breakthrough in predictive vision, developing an algorithm that can anticipate interactions more accurately than previous systems.

http://web.mit.edu/vondrick/tinyvideo/paper.pdf

http://web.mit.edu/vondrick/tinyvideo/

Generating Videos with Scene Dynamics

Carl Vondrick (MIT), Hamed Pirsiavash (University of Maryland, Baltimore County), Antonio Torralba (MIT)

NIPS 2016

Abstract

We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.
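
The paper's central architectural idea is a two-stream generator: one stream uses spatio-temporal (3D) convolutions to produce a moving foreground and a per-frame mask, the other produces a single static background image, and the two are composited into a video. The sketch below illustrates that decomposition in PyTorch; the framework choice, layer sizes, and names are assumptions made for illustration, not the authors' exact implementation.

# Minimal sketch of a two-stream video generator in the spirit of the paper's
# foreground/background decomposition. PyTorch is used for illustration;
# layer sizes and names are assumptions, not the authors' exact model.
import torch
import torch.nn as nn

class TwoStreamVideoGenerator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        # Foreground stream: 3D transposed convolutions upsample the latent code
        # into a moving foreground video plus a per-pixel, per-frame mask.
        self.fg = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 512, kernel_size=(2, 4, 4)),
            nn.BatchNorm3d(512), nn.ReLU(True),
            nn.ConvTranspose3d(512, 256, 4, stride=2, padding=1),
            nn.BatchNorm3d(256), nn.ReLU(True),
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(True),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(True),
        )
        self.fg_rgb = nn.ConvTranspose3d(64, 3, 4, stride=2, padding=1)
        self.fg_mask = nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1)
        # Background stream: 2D transposed convolutions produce one static image.
        self.bg = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 512, 4),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        # z: (batch, z_dim) latent noise vector.
        b = z.size(0)
        h = self.fg(z.view(b, -1, 1, 1, 1))
        foreground = torch.tanh(self.fg_rgb(h))    # (b, 3, 32, 64, 64): 32-frame clip
        mask = torch.sigmoid(self.fg_mask(h))      # (b, 1, 32, 64, 64): where motion occurs
        background = self.bg(z.view(b, -1, 1, 1))  # (b, 3, 64, 64): static scene
        background = background.unsqueeze(2).expand_as(foreground)
        # Composite: the mask selects the moving foreground over the static background.
        return mask * foreground + (1 - mask) * background

In the paper, a discriminator built from spatio-temporal convolutions is trained adversarially against the generator, and for future prediction the generator is conditioned on an encoding of the input photograph rather than on noise alone.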





