In a previous post I mentioned the 1983 Machine Learning workshop, which featured 33 papers and was the follow-up to the 1980 Machine Learning workshop. By contrast, NIPS 2005 had 28 workshops and is just one of several annual international machine learning conferences. You can see how the field grew by looking at the distribution of publication dates for articles containing the phrase "machine learning" indexed by Google Scholar (normalized by total Scholar content for each year)
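The normalization behind these plots is just a per-year ratio: hits for the phrase in a given year, divided by the total number of indexed articles that year. A minimal sketch, using made-up counts (the variable names and numbers are illustrative, not actual Scholar data):

```python
# Hypothetical yearly hit counts for the phrase "machine learning"
phrase_hits = {1982: 40, 1983: 95, 1984: 70}
# Hypothetical total number of Scholar-indexed articles per year
total_hits = {1982: 200000, 1983: 210000, 1984: 220000}

# Normalize: fraction of that year's articles containing the phrase
normalized = {year: phrase_hits[year] / total_hits[year]
              for year in phrase_hits}

for year in sorted(normalized):
    print(year, normalized[year])
```

With this kind of normalization, a year like 1983 can stand out as a blip even if the raw count is small, as long as the phrase's share of that year's literature spikes.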
You can see there's a blip at 1983 when the workshop was held.
Yann LeCun quipped at the NIPS closing banquet that people who joined the field in the last 5 years have probably never heard the term "Neural Network". A similar search (normalized by results for "machine learning") reveals a recent downward trend.
You can see a major upward trend starting around 1985 (that's when Yann LeCun and several others independently rediscovered the backpropagation algorithm), peaking in 1992, and declining from then on.
An even steeper downward trend appears when searching for "Expert System".
"Genetic algorithms" seem to have taken off in the 90s and leveled off somewhat in recent years.
On the other hand, the search for "support vector machine" shows no sign of slowing down
(1995 is when Vapnik and Cortes proposed the algorithm)
Also, "Naive Bayes" seems to be growing without bound
If I were to trust this, I would say that Naive Bayes research is the hottest machine learning area right now.
"HMM"s seem to have been losing share since 1981
(or perhaps people are becoming less likely to write things like "hmm, this result was unexpected"?)
What was the catastrophic event of 1981 that forced such a rapid extinction of HMM's (or hmm's) in the scientific literature?
Finally, a worrying trend appears in the search for "artificial stupidity" divided by corresponding hits for "artificial intelligence": the 2000 through 2004 graph shows a definite upward direction.
Friday, December 16, 2005
Tuesday, December 13, 2005
pigeon-level AI
At the NIPS "Towards Human-Level AI" workshop one of the messages was that perhaps we should first try to achieve rat-level AI, and go from there.
But maybe instead we should start by achieving pigeon-level AI. Someone has already measured pigeon performance at discriminating between painting styles
Monday, December 12, 2005
Archaeology
Here's what Aleks Jakulin managed to dig up in the Google newsgroup archives:
"1983 INTERNATIONAL MACHINE LEARNING WORKSHOP: AN INFORMAL REPORT"
link
Some snippets
Here's Tom Dietterich again: ``I was surprised that you summarized the workshop
in terms of an "incremental" theme. I don't think incremental-ness
is particularly important--especially for expert system work.
"So the language analysis problem has been solved?!?" by Fernando Pereira, Aug 20 1983
link
" What this replacement does to modularity, testability and
reproducibility of results is sadly clear in the large amount of
published "research" in natural language analysis which is untestable
and irreproducible."