Been putting this video off for a while due to study commitments, but I finally managed to sit down and go through Geoffrey Hinton's talk on neural nets. Ironically, I'm deep in an assignment due this Friday, so go figure ...
I like this guy. When I was doing my postgrad in AI back in the mid-90s, neural nets and back-propagation were all the rage; in fact I used one as my major project, a model of the hippocampus. Even back then there were hints of back-propagation's limitations, but no-one anywhere was this blunt about it.
On first inspection of the new model (~6:30 in), there seem to be two problems with the simplified network:
1) The middle (hidden) layer of a neural network is what lets it learn non-linear relationships. For a feature detector working on images there may not be much need for non-linear relationships, but most simulations would need them (see the sketch after this list).
2) Without inhibitory links between the feature nodes there is nothing stopping the same feature being picked up by more than one node. If a very large part of the image really was a single feature, most of the nodes would be trained towards it and miss the finer outlying features that inhibitors between hidden units would force the network to find.
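To illustrate what I mean in point 1, here's a minimal NumPy sketch with hand-picked weights (nothing to do with Hinton's actual model): a single linear layer can't compute XOR, but add one hidden layer with a non-linear activation and it can.

    import numpy as np

    def step(x):
        return (x > 0).astype(float)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

    # Hidden layer: one unit fires for "at least one input on", the other for "both on".
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    h = step(X @ W1 + b1)          # columns: OR(x1, x2), AND(x1, x2)

    # Output layer: OR minus AND gives XOR.
    W2 = np.array([1.0, -1.0])
    b2 = -0.5
    y = step(h @ W2 + b2)
    print(y)                       # [0. 1. 1. 0.] -- no single linear layer can produce this

Strip out the hidden layer and you're left with a linear decision boundary, which is the kind of limitation a pure feature detector might get away with but a simulation usually can't.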
Ok, so after the simplistic model is dispensed with (~14:30), multiple layers are back to give the non-linear transformations. Scrap problem 1. In fact, by the time he re-adds back-propagation (~30:00) there are 8 layers to abstract through. Or is that 4, then a rewind back through the original 4 using back-propagation? Yep: W4^T on the layer above the 30-node code layer is W4 transposed, the same pretrained weights reused in reverse for the decoder half of the unrolled network.
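For my own notes, here's a rough sketch of how I read the unrolled network: the four pretrained weight matrices W1..W4 form the encoder, the decoder reuses the same matrices transposed in reverse order, and back-propagation then fine-tunes the whole 8-layer stack. The layer sizes (784-1000-500-250-30) are the MNIST ones as I remember them from the talk; the random initialisation is just a stand-in for the RBM-pretrained weights, and I'm glossing over details like biases and the linear code layer.

    import numpy as np

    rng = np.random.default_rng(0)
    sizes = [784, 1000, 500, 250, 30]
    W = [rng.normal(0, 0.01, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def unrolled_forward(x):
        # Encoder: x -> 1000 -> 500 -> 250 -> 30 (the code layer)
        h = x
        for Wi in W:
            h = sigmoid(h @ Wi)
        code = h
        # Decoder: the same weights, transposed, in reverse order (W4^T ... W1^T)
        for Wi in reversed(W):
            h = sigmoid(h @ Wi.T)
        return code, h             # h is the reconstruction of x

    x = rng.random((1, 784))
    code, recon = unrolled_forward(x)
    print(code.shape, recon.shape)  # (1, 30) (1, 784)

So the "8 layers" are really 4 sets of weights used twice, which is what makes fine-tuning with back-propagation feasible after the greedy pretraining.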
Wow, the jokes are getting funnier. ".. and it would be nice to know what other companies are in this Enron cluster" (~33:45).
Finally (~45:00) inhibitory weights return, but with the proviso that they only act on a lower layer once it has already been trained in the previous step. So during training he must be relying purely on the random initial weight distribution to get different nodes mapping different features?
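As a sanity check on that reading, here's a hedged sketch of the greedy, layer-by-layer scheme as I understand it: each layer is trained (one-step contrastive divergence on a binary RBM, biases omitted for brevity) only after the layer below is finished and frozen, and the only thing breaking the symmetry between hidden units during training, my problem 2 above, is the random initialisation.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    def train_rbm(data, n_hidden, epochs=10, lr=0.1):
        n_visible = data.shape[1]
        W = rng.normal(0, 0.01, (n_visible, n_hidden))   # random init does the symmetry breaking
        for _ in range(epochs):
            v0 = data
            # Positive phase: sample hidden units driven by the data
            p_h0 = sigmoid(v0 @ W)
            h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
            # Negative phase: one step of reconstruction (CD-1)
            p_v1 = sigmoid(h0 @ W.T)
            p_h1 = sigmoid(p_v1 @ W)
            W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        return W

    # Greedy stack: each new layer only ever sees the frozen layer below's features.
    data = (rng.random((100, 784)) < 0.1).astype(float)  # stand-in for MNIST digits
    weights = []
    for n_hidden in [1000, 500, 250, 30]:
        W = train_rbm(data, n_hidden)
        weights.append(W)
        data = sigmoid(data @ W)     # hidden activities become the next layer's "data"

Nothing in there actively pushes two hidden units apart, so as far as I can tell the differentiation of features really does come down to the random starting weights plus the data.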
Overall, a very interesting talk and a good refresher on what's happening with neural networks.