Original upload date: Mon, 22 Mar 2010 00:00:00 GMT
Archive date: Tue, 30 Nov 2021 20:09:39 GMT
Google Tech Talk
March 19, 2010
ABSTRACT
Presented by Geoff Hinton, University of Toronto.
Deep networks can be learned efficiently from unlabeled data. The layers of representation are learned one at a time using a simple learning module that has only one layer of latent variables. The values of the latent variables of one module form the data for training the next module. Although deep networks have been quite successful for tasks such as object recognition, information retrieval, and modeling motion capture data, the simple learning modules do not have multiplicative interactions, which are very useful for some types of data.
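As a rough illustration of this layer-by-layer scheme, the sketch below trains a stack of tiny restricted Boltzmann machines with one-step contrastive divergence (CD-1): the hidden activities produced by one trained module become the training data for the next. All sizes, names, and the toy data are illustrative assumptions, not the talk's actual code.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """A simple learning module: one layer of binary latent variables."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0):
        # Positive phase: infer hidden units from the data.
        h0 = self.hidden_probs(v0)
        # Negative phase: one step of reconstruction from sampled hiddens.
        v1 = self.visible_probs((h0 > rng.random(h0.shape)).astype(float))
        h1 = self.hidden_probs(v1)
        # Approximate log-likelihood gradient (contrastive divergence, CD-1).
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# Greedy layer-wise training: the latent activities of one module
# become the data for training the next module.
data = (rng.random((500, 64)) > 0.5).astype(float)   # toy binary "images"
layer_sizes = [64, 32, 16]                            # illustrative sizes
modules = []
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    rbm = RBM(n_vis, n_hid)
    for epoch in range(5):
        rbm.cd1_update(data)
    modules.append(rbm)
    data = rbm.hidden_probs(data)   # data for the next layer up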
The talk will show how to introduce multiplicative interactions into the basic learning module in a way that preserves the simple rules for learning and perceptual inference. The new module has a structure that is very similar to the simple cell/complex cell hierarchy that is found in visual cortex. The multiplicative interactions are useful for modeling images, image transformations, and different styles of human walking.
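The sketch below gives one rough picture of what a multiplicative interaction can look like inside such a module, in the style of a factored three-way (gated) model: factor units multiply projections of an input image and an output image, and hidden units then pool those products, so the factors play a role loosely analogous to simple cells and the hidden units to complex cells, and perceptual inference remains a single feed-forward pass. Every size and name here is an illustrative assumption rather than the talk's exact model.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_x, n_y, n_factors, n_hidden = 64, 64, 32, 20        # illustrative sizes
W_x = 0.01 * rng.standard_normal((n_x, n_factors))     # input-to-factor weights
W_y = 0.01 * rng.standard_normal((n_y, n_factors))     # output-to-factor weights
W_h = 0.01 * rng.standard_normal((n_hidden, n_factors))  # hidden-to-factor weights
b_h = np.zeros(n_hidden)

def infer_hidden(x, y):
    # Factor activities are elementwise PRODUCTS of the two projections:
    # this product is the multiplicative interaction.
    f = (x @ W_x) * (y @ W_y)            # shape: (batch, n_factors)
    # Hidden units pool the factor products; inference stays one simple pass.
    return sigmoid(f @ W_h.T + b_h)

x = rng.random((10, n_x))   # e.g. an image at time t
y = rng.random((10, n_y))   # e.g. the image at time t+1
h = infer_hidden(x, y)      # hidden units encode the transformation x -> y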
Speaker bio: http://www.cs.toronto.edu/~hinton/bio.html