KDD 2016 had plenty of interesting content today, but what impressed me most was the plenary panel titled "Is Deep Learning the New 42?".
First, a word on the backgrounds of the panelists. The moderator, Andrei Broder, needs no introduction: he is currently a Distinguished Scientist at Google and was formerly VP of Yahoo! Research. The panelists were Pedro Domingos of the University of Washington (famous for Markov Logic Networks), Nando de Freitas of Google DeepMind (formerly known for Gaussian Processes, now for deep learning), Isabelle Guyon of Université Paris-Saclay (famous for SVMs), Jitendra Malik of the University of California, Berkeley (famous for several algorithms in the computer vision community), and Jennifer Neville of Purdue University (famous for graph mining).
My overall impression is that Jennifer and Nando de Freitas were mostly on the sidelines; both are relatively junior in academia. Although Nando works on deep learning, he did not offer much deep insight, nor did he articulate any of deep learning's high-level ideas from a cross-field perspective. Across the whole panel, Pedro Domingos put in the most striking performance: his views were sharp and resonated with the audience. Here are the points I took away:
1. The core success of deep learning comes from representation learning. Pedro hopes to see different types of representation learning emerge, built on different underlying concepts.
2. Deep learning is still at a very early stage of development. We need to understand why it succeeds, not just which tasks it can do.
3. Model interpretability may be a pseudo-problem. Forced to choose between an accurate model and an explainable one, we should prefer the accurate model. It is precisely because human cognition is limited that we want to develop models that exceed human capabilities, including models humans cannot explain. If pursuing explainability reduces a model's accuracy, that runs contrary to the original purpose of machine learning. That humans cannot understand a model is actually a good thing: it means we need to improve human cognitive ability. This view caused quite a stir at the venue.
4. Scholars have an obligation to educate the whole community, so that everyone understands both the limitations of machine learning and what to realistically expect from it. (This also explains why Pedro wrote the popular science book The Master Algorithm.)
5. The point of academia is not incremental research, but precisely the kind of long-term research that may well fail.
6. Deep learning may not be able to shine in every area. Its current successes are mainly in image and speech.
Nando did make one good point: deep learning's current success owes a great deal to software systems, especially the maturity of the various DL frameworks.
In short, the panel drew a packed crowd for more than an hour. Deep learning clearly remains the hottest research topic of the moment.
P.S.: This article is reposted from Hong Liangwei's Weibo.