In this interview from Andrew Ng’s Coursera course with Geoffrey Hinton, whom Ng describes as one of the “Godfathers of Deep Learning”, I found two points that were quite interesting and thought-provoking.
On research direction
When asked for his advice to grad students doing research, Hinton said (at about the 30-minute mark):
“Most people say you should spend several years reading the literature and then you should start working on your own ideas. And that may be true for some researchers, but for creative researchers I think what you want to do is read a little bit of the literature. And notice something that you think everybody is doing wrong — I’m contrary in that sense. You look at it and it just doesn’t feel right. And then figure out how to do it right.
And then when people tell you, that’s no good, just keep at it. And I have a very good principle for helping people keep at it, which is: either your intuitions are good or they’re not. If your intuitions are good, you should follow them and you’ll eventually be successful. If your intuitions are not good, it doesn’t matter what you do.”
That last principle is both interesting and hilarious.
On the paradigm shift in ML research
When asked for his thoughts on the paradigm shift in ML research over the past decades, here’s what Hinton said (at about the 35-minute mark):
“And in the early days of AI, people were completely convinced that the representations you need for intelligence were symbolic expressions of some kind. Sort of cleaned up logic, where you could do non-monotonic things, and not quite logic, but something like logic, and that the essence of intelligence was reasoning.
What’s happened now is, there’s a completely different view, which is that what a thought is, is just a great big vector of neural activity, so contrast that with a thought being a symbolic expression. And I think the people who thought that thoughts were symbolic expressions just made a huge mistake.
What comes in is a string of words, and what comes out is a string of words.
And because of that, strings of words are the obvious way to represent things. So they thought what must be in between was a string of words, or something like a string of words.
And I think what’s in between is nothing like a string of words. I think the idea that thoughts must be in some kind of language is as silly as the idea that understanding the layout of a spatial scene must be in pixels, pixels come in. And if we could, if we had a dot matrix printer attached to us, then pixels would come out, but what’s in between isn’t pixels.
And so I think thoughts are just these great big vectors, and that big vectors have causal powers. They cause other big vectors, and that’s utterly unlike the standard AI view that thoughts are symbolic expressions.”
Are thoughts really these great big vectors? I don’t know enough to have an informed opinion on this right now, but the paradigm shift is definitely interesting.