Meta Chief AI Scientist Yann LeCun reflects on his decades-long journey in artificial intelligence, from being inspired by 2001: A Space Odyssey as a kid to leading breakthroughs in neural networks and machine learning. To be truly intelligent, LeCun argues, AI will require common sense, self-organization and physical-world understanding—far beyond the capabilities of today's language models. Read the full article: https://lnkd.in/ewyWJA6J #LLM #AGI #AIImpact #TechLeaders #Meta #AI #video
I mean, I've said this multiple times: you know, I think I'd be happy if, by the time I retire, we have systems that are as smart as a cat.

Today we're joined by Yann LeCun, one of the world's foremost pioneers in artificial intelligence. Yann was born in France and is the Silver Professor at New York University for data science, computer science, neuroscience and electrical engineering. You're currently the chief AI scientist for Meta, where you've been since 2013, I believe, and have worked on revolutionary technologies including Llama, the large language model, but also facial recognition and things like auto-tagging photos, which are used by billions of users across the world on media platforms. And in 2018 you shared the Turing Award, often known as the Nobel Prize of computing, for contributions that made deep neural networks a foundation of modern computing. Throughout a career spanning both academia and industry, you've been a pioneer and an advocate for artificial intelligence, even in the face of a lot of skepticism at times, and have really helped usher in today's world of machine intelligence. I want to start with a question. You've often said you were inspired to work in artificial intelligence by the movie 2001: A Space Odyssey. You were a boy when you saw it, and now you've worked many decades in the field. Has your idea of what AI could be or should be evolved since then?

Yeah, I saw 2001: A Space Odyssey when I was nine years old, when it came out in Paris, and it was a big shock. It had all the themes I was fascinated by as a kid: space travel, AI, the emergence of human intelligence, which is a big theme of the movie. So that movie had a big impact on me. But the field of AI was really not in the public sphere at the time.
When I started studying the topic in college, I realized the field was not nearly as advanced as I thought it was, and that people had largely abandoned the idea that intelligence is basically self-organizing, right? That human intelligence is the result of learning, and that in fact animal intelligence, or even life itself, is self-organization. This idea that a complex emergent phenomenon like intelligence can emerge from very simple elements that interact with each other and basically self-organize was conceptually fascinating to me. Also, I thought I wasn't smart enough to design an intelligent machine, so it should design itself, basically. That was probably a good way to, you know, sweep my laziness under the rug.

Constraints are always a good motivator, aren't they?

Yeah, that's right. Exactly. So this idea of self-organization was fascinating. And then I discovered the existence of people in the 50s and 60s who had worked on learning machines, a bit by accident, by stumbling on a book that was a transcription of a debate between Noam Chomsky and Jean Piaget about whether language is innate or learned. On Piaget's side was Seymour Papert from MIT, who was describing the perceptron, and this was the first time I learned about a system capable of learning. So I started digging into the literature and realized that the field had been vibrant in the 50s and 60s and then totally died in the late 60s, in part because of a book that Seymour Papert co-authored with Marvin Minsky that basically killed the field. And yet there he was, ten years later, in the late 70s, early 80s, singing the praises of the perceptron.

So I want to build on that. The last time we spoke,
I remember you saying that one of the reasons you went into AI and neural networks was to understand how you think yourself. Is that true? Because I think it fundamentally influences how you've approached the discipline: you're not after trivial solutions, you're after more complete solutions that might actually explain how we, and you, think. Is that a fair summary?

Yeah. I mean, not how I myself think, because that would be a bit arrogant, as if I were special, but how intelligence emerges, how it forms, and what it really is. I think that's probably one of the biggest scientific questions of all time, right? What is intelligence? How does the brain work? It's one of the big questions of science, along with what the universe is made of and what life is all about. Pretty big questions.

So what comes next, I guess, once you've posed those questions?

Well, you build models of reality, and then you can predict what comes next, right? So if you're interested in the universe, you go into physics; if it's life, you go into biology; and for the brain and intelligence, you go into neuroscience. I was really interested in neuroscience, but I'm an engineer as well as a scientist, so maybe I'll recycle Feynman's famous saying: I don't understand something unless I can build it.

Absolutely. We'll get to that. But we have to start with: how would you currently define human intelligence? And I'm actually going to ask specifically: is it only the cognitive part that people obsess about these days? Or do you believe there are social, moral, practical and creative parts to intelligence that aren't well described by what we think of as the cognitive part?
So I think there are many, many layers to intelligence, and as humans we tend to associate intelligence with human intelligence and not realize that there are other forms, and to think that intelligence, for example, is intimately linked with language. And that's false. Animals are very smart, some of them almost as smart as we are, and they don't necessarily have language. Take primates: orangutans are almost as smart as we are. We think we're much smarter than them, but we're only slightly above. And they're not even social animals, so they don't particularly need language. They have forms of communication, but not of the type that we have. So there are many layers, and to some extent they reflect what happened in evolution. First of all, intelligence is about being able to move. If you don't have a brain, you can't move, and if you don't need to move, you don't need a brain. There's a one-to-one correlation between having a brain and moving. Daniel Wolpert has this example: there's an exotic marine species that moves around, then anchors down and eats its own brain.

Right, and he says it's a good model of the American tenure system: find a good rock, anchor down, and eat your brain. You don't move anymore.

So that's the lower level. And then, obviously, if you want to act, you need perception. So how do you build perception? And then, on top of this, between perception and action, there might be planning or reasoning, which a lot of animals are capable of. And I think, as of today, we're really at that low level of direct connection of perception to action. We haven't really cracked planning and reasoning.
And then above that, theory of mind, social intelligence. And above that, language. So we've got this weird thing where we've gone straight to language and missed the intervening layers. Is that a way to think about it?

It's another example of Moravec's paradox: we go straight to the higher level because it turns out to be more compatible with the kind of technology we have access to. For the same reason that we have $30 chess-playing toys that can beat us at chess. What does that mean? Are they smarter than us? No, they're not. They're just really good at chess. We have tools to plan a route on a map, and they're better than us at that planning, but it's just a shortest-path algorithm on a graph. We can reduce it to an algorithm.

But your point about language is that we accidentally created machines that are good at language, that encode some sort of meaning, and we find them useful, but we've missed some steps.

Well, language, it turns out, is relatively simple, probably because it's discrete. And it has to have strong statistical properties, because it's basically a serialized version of our thoughts, if you want. So you need some protocol for that, which is called grammar, but it's a limited set of things. You've talked about how the dictionary is limited. It's limited, and it's autoregressive; you can just churn on it. Well, not entirely, because you need to think about something before you say it. Dealing with the real world is much harder, and we take that for granted.

Exactly right. So these machines will certainly need to understand human motivations, human nature, human values, what is good and bad, and things like that, right? It's not exactly clear how.
Otherwise there is no reasonable ability for us to trust, control, and rely on the outcome of our interactions with the machine, right? And when we interact with a seemingly intelligent entity, we make the assumption that it has the same kind of intelligence that we're used to, and that's obviously false. So there are a lot of people who think that LLMs have some sort of subjective experience, that when they tell you something, it's because they want to tell you something. They don't. And it's not just a parrot either; it's more than a parrot, because the kind of answer they produce is based on a lot of accumulated knowledge and is adapted to the situation you prompt them with. But certainly they don't have common sense. They don't have an understanding of the physical world.

That was going to be one of my questions. I saw what you wrote about common sense being a better description of what we're trying to do as a species in AI: we're trying to build something that has common sense. Are the obstacles in our minds?

I should say there are a number of techniques that people have developed and think are the solution, and the history of AI is filled with those paradigmatic revolutions: a new way of doing things comes out, and everybody says, that's it, that's the answer to AI; within ten years we'll have machines as intelligent as humans. That's the solution.

And then, ten years ago, it was reinforcement learning.

Exactly. Is that because one of the recurring themes we've been hearing, and you've brought up, is that achieving any kind of artificial intelligence is really linked to understanding human intelligence, right?
And are these mistakes being made because people don't understand human intelligence and say, oh, we did it, this is all we have to do?

You could think of intelligence as two or three things. One is a collection of skills. But more importantly, it's the ability to acquire new skills quickly, possibly without any learning. So you're facing a new situation that you've never faced before, like, I don't know, the first time you get that IKEA furniture and you have to assemble it: you have to do some planning, using knowledge you've accumulated before in your life, and you can solve the problem. You ask your ten-year-old, can you clear the dinner table and fill up the dishwasher? Even if the ten-year-old has never done it, and maybe has only observed it being done a couple of times, the ten-year-old has enough background knowledge about the world to do that task the first time, without training himself or herself to do it. Because, you know, humans are competitive. You're not going to... I mean, the idea that you're going to stop working because machines are going to do stuff for you...
I love how humble Yann is. Respect. 🙌
We all have that moment that inspires us from our beginnings, it's really wonderful hearing Yann's perspective of how he became excited about his field. This influence of self-discovery in understanding fundamental reasoning is a great approach.
Yann LeCun I should say, your bold move in open-sourcing Llama for the first time has changed the history of AI and the world, and for that the Global South must be ever grateful and thankful to you, the Meta team, and of course Mark Zuckerberg. I'm sure history books will remember you for generations as trendsetters. Well done, mate. I personally am grateful, as I couldn't have afforded to train my own LLM for decades to come.
Yann LeCun You are great ✨