Saturday, April 30, 2011

Neural Networks making a come-back?

Five years ago I ran some queries on Google Scholar to see trends in the number of papers that mention a particular phrase. The number of hits for each year was divided by the number of hits for "machine learning". Back then it looked like NNs started gaining popularity with the invention of back-propagation in the 1980s, peaked in 1993, and went downhill from there.
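To make the normalization concrete, here's a small, purely illustrative sketch of the ratio being computed. The yearly counts below are made up for illustration only (the real numbers came from scraping Google Scholar, and the original script is lost):

```python
# Illustrative sketch: normalize per-year hit counts for a phrase by the
# per-year hit counts for "machine learning". The numbers are made up.
hits_neural_networks = {1985: 120, 1990: 850, 1993: 1400, 2000: 1100, 2005: 900}
hits_machine_learning = {1985: 300, 1990: 1500, 1993: 2100, 2000: 2600, 2005: 3200}

relative_popularity = {
    year: hits_neural_networks[year] / hits_machine_learning[year]
    for year in sorted(hits_neural_networks)
}

for year, ratio in relative_popularity.items():
    print(f"{year}: {ratio:.2f}")
```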



Since then, there have been several major NN developments, involving deep learning and probabilistically founded versions, so I decided to update the trend. I couldn't find a copy of the scholar scraper script anymore; luckily, Konstantin Tretjakov has maintained a working version and reran the query for me.



It looks like the downward trend in the 2000s was misleading because not all papers from that period have made it into the index yet, and the actual recent trend is exponential growth!

One example of this "third wave" of Neural Network research is unsupervised feature learning. Here's what you get if you train a sparse auto-encoder on some natural scene images:



What you get is pretty much a set of Gabor filters, but the cool thing is that you get them from your neural network rather than from an image-processing expert.
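For a rough idea of what "train a sparse auto-encoder" means in practice, here is a minimal NumPy sketch of the standard recipe: squared reconstruction error plus a KL-divergence sparsity penalty on the mean hidden activations. This is not the exact setup behind the image above; it trains on random synthetic patches so the script is self-contained, and the patch size, hidden-layer size, and hyperparameters are illustrative guesses. With whitened patches from natural images, the rows of W1 tend to come out looking like Gabor-like edge filters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# problem size: 8x8 "patches" (64 inputs), 25 hidden units, 1000 examples
n_vis, n_hid, m = 64, 25, 1000
X = rng.random((n_vis, m))            # stand-in for whitened image patches

# small random initialization of encoder/decoder weights
W1 = rng.normal(0, 0.1, (n_hid, n_vis))
b1 = np.zeros((n_hid, 1))
W2 = rng.normal(0, 0.1, (n_vis, n_hid))
b2 = np.zeros((n_vis, 1))

# sparsity target, sparsity weight, weight decay, learning rate (illustrative)
rho, beta, lam, lr = 0.05, 3.0, 1e-4, 0.5

for step in range(2000):
    # forward pass
    A2 = sigmoid(W1 @ X + b1)          # hidden activations
    A3 = sigmoid(W2 @ A2 + b2)         # reconstruction of the input

    rho_hat = A2.mean(axis=1, keepdims=True)   # mean activation of each hidden unit

    # backward pass: reconstruction error plus KL sparsity penalty
    d3 = (A3 - X) * A3 * (1 - A3)
    sparsity_grad = beta * (-(rho / rho_hat) + (1 - rho) / (1 - rho_hat))
    d2 = (W2.T @ d3 + sparsity_grad) * A2 * (1 - A2)

    # gradient step (weight decay on the weights only)
    W2 -= lr * (d3 @ A2.T / m + lam * W2)
    b2 -= lr * d3.mean(axis=1, keepdims=True)
    W1 -= lr * (d2 @ X.T / m + lam * W1)
    b1 -= lr * d2.mean(axis=1, keepdims=True)

    if step % 500 == 0:
        loss = 0.5 * np.mean(np.sum((A3 - X) ** 2, axis=0))
        print(f"step {step}: reconstruction loss {loss:.4f}")

# each row of W1 is one learned feature; reshaped to 8x8 it can be displayed as a filter
filters = W1.reshape(n_hid, 8, 8)
```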

7 comments:

Ted said...

"Don't call it a come-back"
http://en.wikipedia.org/wiki/Mama_Said_Knock_You_Out_%28song%29

Sid said...

Maybe you should mention a reference to Hinton's 2006 work and how it affects all deep learning architectures?

Yaroslav said...

I would if I knew; what's so great about Hinton's 2006 paper?

ivan said...

Hinton's "reduced Boltzmann machine" is a deep neural network -- many layers. For some reason, a lot of the old NN literature only considered one or two level deep NNs. More layers = better.

I would recommend his tech talk:
http://www.youtube.com/watch?v=AyzOUbkUf3M

Mihail Sirotenko said...

The problem is that backprop can't efficiently train multilayer networks into a truly deep architecture. A network can have 10 layers but still be effectively shallow. So recent advances like RBMs or sparse autoencoders are what's making this "come back" possible.

tweak said...

As far as I understood, Hinton's 2006 paper was what started the whole autoencoder and pretraining wave...

Igor said...

Maybe it's a question that ought to be asked to either:
- the stats or metaoptimize Q&A
- Andrew

According to Wikipedia, auto-encoders seem to help speed up the back-propagation step (http://en.wikipedia.org/wiki/Auto-encoder).

I am curious too.

Cheers,

Igor.