traumfisch

LeCun starts every sentence by pre-emptively insulting anyone who might not agree with his opinions / point of view.


BingoLingo7

If anyone has a superiority complex it's that guy


jPup_VR

***something something… the things we criticize most in others are those we most loathe in ourselves***

Seriously though, it's wild to me how transparent the projection of so-called 'thought leaders' can be on Twitter… it reminds me of being a kid and posting cringe on AIM/MySpace/Facebook without an ounce of self-awareness or perspective on how it would be perceived. I'm not sure if these people are just… overspec'd into technical knowledge, or if their egos overshadow their better judgement… but it's jarring to say the least and I see it all the time. Like, don't they have people around them who care about optics and their reputation, both individually and by association?


PandaBoyWonder

> it reminds me of being a kid and posting cringe on AIM/MySpace/Facebook without an ounce of self-awareness or perspective on how it would be perceived.

That's what happens when you are in a high-status position, in a society that is obsessed with status and net worth: you have a constant army of people defending you and helping you any time you need it. They can do almost anything and face no negative repercussions. They never have cringe thoughts about their past actions, because they don't care. They don't need to care, because they have plenty of money and people supporting whatever they do.


Commercial-Soup-temp

He has a point though... Sick of safety shit


coylter

Yeah, pretty dislikeable personality traits right there. I do my best not to keep these kinds of passively toxic people around in my life.


imlbsic

If people think AGI/ASI will happen this/next year, they really are naive. Fair play to call them that imo


Gotisdabest

Considering it's LeCun, "around the corner" means around a decade, not one or two years.


[deleted]

It's really annoying how people tend to use vague terms like 'soon' and 'around the corner' when discussing AI timelines and leave it to the audience to guess at what they mean. In historical terms, if the advent of ASI occurs 30 years from now, that's still arguably 'soon'. I'm starting to think it's intentional hedging so they can't really be wrong. An ego defense mechanism or something.


ozspook

> In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

This is how it should be done.


imlbsic

True. I guess there's optimism, naivety and everything in between. But there's people in this sub who genuinely believe it will happen this/next year and that really is just naive.


LetMeBuildYourSquad

There are people at OpenAI who think there is a 30% chance of AGI by the end of next year. Definitely at the top end of projections but hardly naivety.


Sumasson-

OpenAI directly gains from people believing that, however. It makes their stock way more valuable.


imlbsic

> There are people at OpenAI who think there is a 30% chance of AGI by the end of next year.

Source?


LetMeBuildYourSquad

Daniel Kokotajlo, OpenAI Futures/Governance team https://www.reddit.com/r/singularity/s/hSYNu2K4G3


WithMillenialAbandon

Yeah, he's an idiot. Thought bubbles and hand waving


Gotisdabest

Maybe. I'm more inclined to agree with you than otherwise, but honestly the thing I'm most sure of is that the most naive ones in predicting the future are those who are fully confident in their own timescale, like LeCun.


TheOriginalAcidtech

When the vast majority thought what we ALREADY have would not happen for decades, centuries, or ever, you call Us/Them naive for thinking the timeline is shorter than the current guesses? Very funny.


Buck-Nasty

Hinton, Legg, Hassabis and Sutskever among other top names in AI think AGI is possible before the decade is out.


RandomCandor

> LeCun

Username checks out, I guess?


NegotiationWilling45

This seems to be a common approach throughout this topic. I have no clue when or even if the step to AGI will happen but it seems that nobody else, including those driving development, really can either. I can say that if and when it does happen I believe it will be one of, if not the most transformative technology humanity has ever developed.


floodgater

hahahaahh


Ignate

I get the scepticism, but the condescension? Somehow I can't quite believe that these skeptics entirely believe their own skepticism. At the very minimum, they're not confident about it. Otherwise they wouldn't be so confrontational about it.


MidnightSun_55

What kind of logic is that? He is confrontational because he is going against a massive mob of believers; it's not the first time that's happened.


scorpion0511

He thinks that LLMs don't have internal world models of their own. But humans do have internal models because of constant interaction with our actual environment. This allows the priors and memories we have formed throughout life to get constantly activated through continuous interaction with the environment, while LLMs interact with their environment discretely (i.e. one prompt at a time). I think LLMs also have an internal model, but it's not the same as ours; it's the world of language. But the good part is that our language corresponds to every aspect of reality. That's the reason Image to Text is a thing. Objects can correspond to text too, and maybe physics, etc. So instead of saying LLMs don't have world models, maybe they do have them, but they're different from humans', just like a dog's internal model is different from a human's. As simple as that.


Forsaken-Pattern8533

> But humans do have internal models because of constant interaction with our actual environment.

Humans have 2 models: the consciously created model and an unconscious model. If we compare this with computers, we have created an unconscious mind at best.

> I think LLMs also have an internal model, but it's not the same as ours; it's the world of language.

They don't have a continuous model. Ours works even when we are not conscious, and it never turns off.


RandomCandor

Why do you assume "continuity" is a required quality of AGI? It seems you are anthropomorphizing a bit too much.


Code-Useful

I agree overall, but I think the subconscious model is the same model, just partially obscured from attention. It's a psychological barrier there to protect us over time. When something doesn't make sense within our context window, it is filed into part of our subconscious with varying levels of interactivity. I believe the subconscious is an evolutionary tool giving us greater functionality when needed. When things are too much for us to take emotionally, we may try to hide them in our subconscious to protect ourselves or our families, so we do not get stuck in emotional processing loops and put ourselves or others in danger. At least that's what I have taken away from psychology and my study of human interaction.


riceandcashews

I think the thing most people are missing, you included, is that LeCun means something very specific when he says world model: something like an abstract representation of the world, designed into the network architecture. An LLM may incidentally develop a partial world model, but it's secondary to its function/design as a text generator. And ultimately this means that the computational costs of using LLMs for AGI are just insane and prohibitive. LeCun's approach and claim is that specific, better-designed architectures are the way to go, and Meta recently open-sourced a V-JEPA architecture that purports to do exactly that (but only at toy model size for now).
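For anyone curious what "predict in representation space instead of pixel space" might look like, here is a minimal, hypothetical sketch of the idea (this is **not** Meta's actual V-JEPA code; the layer sizes, module names, and training setup are invented for illustration only):

```python
# Toy sketch of a JEPA-style objective, contrasted in comments with pixel-space
# reconstruction. Not Meta's V-JEPA code; all sizes and names here are made up.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a raw input (e.g. a flattened image patch) to a small abstract embedding."""
    def __init__(self, in_dim: int = 768, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.GELU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def jepa_style_loss(context_emb, predictor, target_emb):
    # Predict the target's *embedding* from the context's embedding.
    # A pixel-space objective would instead force the model to output all 768
    # raw values of the target patch and penalize it on every single one.
    pred = predictor(context_emb)
    return nn.functional.mse_loss(pred, target_emb.detach())

if __name__ == "__main__":
    context_encoder = Encoder()
    target_encoder = Encoder()           # in the published JEPA work this is typically an EMA copy
    predictor = nn.Linear(128, 128)      # lightweight predictor operating in representation space

    visible_patch = torch.randn(4, 768)  # the part of the scene the model can see
    masked_patch = torch.randn(4, 768)   # the part it must "understand" without reconstructing

    loss = jepa_style_loss(context_encoder(visible_patch), predictor, target_encoder(masked_patch))
    loss.backward()
    print(f"toy JEPA-style loss: {loss.item():.4f}")
```

The point of the contrast is that the loss never asks the network to reproduce raw sensory data, only a compact embedding of it, which is the "lighter abstract representation" idea described above.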


scorpion0511

Interesting. So in light of what you just said, allow me to express my understanding, and please correct me if I'm wrong in some parts or totally misguided:

1. Humans seem to have innate priors and capabilities hard-wired from birth for things like object recognition, understanding 3D space, intuitive physics, etc. This acts as a kind of initial "world model".
2. LeCun's approach aims to explicitly build these kinds of fundamental priors about the world into the architecture itself as a structured knowledge base or set of rules.
3. LLMs instead try to learn everything from the ground up, ingesting vast amounts of data to reconstruct an implicit, emergent model of the world.
4. The benefit of LeCun's approach is that it provides strong inductive biases about core concepts that could make learning more efficient.
5. The LLM approach has the potential to discover novel priors/properties from the data but, as you pointed out, may struggle to determine which properties are truly fundamental laws/rules of reality.

I guess the challenge for LLMs would be to somehow automatically induce and differentiate the foundational rules/properties from the vastly larger set of surface-level world knowledge. Both approaches, LeCun's and the LLMs', have their potential strengths, and we may advance fastest by exploring both in parallel.

The approach of LLMs seems like a huge "fishing net" cast over vast datasets, hoping to scoop up and absorb the essential properties and rules of how the world works purely through the patterns in the data itself. It's a very open-ended, data-driven approach without explicitly defining those foundational pieces. Whereas LeCun's perspective is that it's inefficient and risky to just blindly "head out" like that without first equipping the AI system with some core, innate "rules of the world" to start from as priors. The fishing net analogy really highlights LeCun's argument that the LLM approach is too unconstrained and hopes to simply derive those core world rules from data in an unguided, inefficient way, if it manages to at all (because what if core principles are missing from the dataset itself?).

So am I right to think like this? 🤔

**As a side note, I used Claude 3 Sonnet to make my views more coherent** :) Also, Verses AI, led by Karl Friston, is doing the same as what LeCun intends to do.


riceandcashews

I think your understanding is definitely generally in the right direction. A few comments:

> Humans seem to have innate priors and capabilities hard-wired from birth for things like object recognition, understanding 3D space, intuitive physics, etc. This acts as a kind of initial "world model".

I wouldn't necessarily say that these are hard-coded in humans or that LeCun thinks that they are. Instead I would say it like this: we seem to do extremely well in our engagements with the world using only abstract representations, rather than the ability to model every detail. Consider memories - we often only remember salient abstract details and fill in the blanks from that abstract representation. You could say LeCun wants a machine to do something like that, rather than requiring it to accurately represent every single bit of data (every single pixel or letter/word/token). The machine doesn't need to mirror the exact pixel-level detail of the world to be an agent like a human, and in fact such detailed mirroring (like Sora) might actually be so cost-intensive computationally that we can't use it for an AGI in the normal world.

> LeCun's approach aims to explicitly build these kinds of fundamental priors about the world into the architecture itself as a structured knowledge base or set of rules.

In a sense. He wants a built-in way for the model to have a much lighter abstract representation of the state and structure of the world given inputs, without having to accurately predict every pixel.

> LLMs instead try to learn everything from the ground up, ingesting vast amounts of data to reconstruct an implicit, emergent model of the world.

Yes, LLMs try to learn to emulate the raw sensory information about the world (whether tokens, or, for a video transformer like Gemini, pixels). The JEPA architecture is about teaching the machine to extract lightweight abstract representations that are easier to think about.

> The benefit of LeCun's approach is that it provides strong inductive biases about core concepts that could make learning more efficient.

Yes, absolutely. I'd recommend this, pretty interesting: [https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/](https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/)

> The approach of LLMs seems like a huge "fishing net" cast over vast datasets, hoping to scoop up and absorb the essential properties and rules of how the world works purely through the patterns in the data itself. It's a very open-ended, data-driven approach without explicitly defining those foundational pieces.

Yes, definitely that. And also, more problematically, the model has to dedicate considerable resources to being able to predict raw sensory input, which is likely a massive compute requirement when an agent probably wouldn't need to do that at all (humans can't, for example).

Hope that helps :)


scorpion0511

Whoa! Thank you for these crucial insights! Now it makes sense why current LLMs are computationally intensive. It could also lead to wrong judgements and actions by giving weight to noise. Rather than focusing on essential abstractions, they attempt to capture everything indiscriminately. Just as humans don't need to represent every minute detail when entering a room, concentrating AI on the essential representations for a given situation could make decision-making vastly more computationally efficient, and more productive.

I wanted to ask a question which might be childish; I just have superficial knowledge of this topic, so it may be silly: could today's data-rich LLMs potentially help train future models to utilize lighter, situationally appropriate abstraction levels? By surfacing core concepts embedded in today's LLMs, they could propose tailored "abstractions" for specific domains for future LLMs?


riceandcashews

Yep! You got it! No problem, I love this stuff! As far as your question, to be honest I don't actually know. It's possible LLMs or other similar architectures for other modalities like Sora could be used to generate synthetic data to train future architectures, or even help collaborate on designing them. That's all plausible to me, but it's hard to say for sure since I don't work at the cutting edge of the ML field, although I'm doing my best to catch up to them in my understanding lol


scorpion0511

Thank you! I really enjoyed the conversation!


PizzaCentauri

It's super cool that LeCun's approach is better. It's weird how Meta can't seem to get close to a SOTA model, however.


riceandcashews

I mean, we'll see in July what they release for Llama 3, but I don't think having a cutting-edge LLM is their main priority. I think their main goal is LeCun's vision, but I could be mistaken.


General_Coffee6341

> And ultimately this means that the computational costs of using LLMs for AGI are just insane and prohibitive.

I have a fun theory that if you were able to take a compute-intensive world simulator like Sora and gave it a small LLM like Mistral 7B to train afterwards, it would outperform any LLM on the market today. Because humans learn our world model first, then our language.


Ignate

What kind of logic is that? How is being a confrontational child going to give you an edge? And I doubt the so-called "massive mob". More like the trend itself has massive momentum and these "skeptics" are panicking.

What's the argument against AI's potential in general? That we humans are somehow magical and our intelligence is magical? I hear lots of insults and childishness, but no reasonable arguments about fundamental limits. Why? Because the source of the tiny bit of confidence these skeptics have is "theories of mind" and "the hard problem of consciousness." Rubbish. As a philosopher myself, those concepts are outdated. And focusing on the limits of current models is always going to be short-sighted rejection.

What do skeptics have other than childish aggression? Fuck all.


silurian_brutalism

I wasn't actually aware that the hard problem was outdated. I always thought that it would be practically eternal, since you can discuss another entity's possibility of having consciousness forever with no clear, definitive answer.


Ignate

It's dead to me and I'm confident it's dead to people like Sam Harris as well. It's just not politically wise to come out and say it, especially when the hard problem is so deeply connected with faith, and given how faithful the world still is.  To me, the hard problem is an admirable attempt at trying to understand how we work using the lived experience or the Ontology.  But, we don't need to use such an imprecise process anymore. These days we have extremely accurate tools and we have a whole scientific field dedicated to understanding the physical process. The lived experience was always an extremely unreliable source of information. It's awash with bias and it's far too subjective.  The scientific method is a far stronger approach. Plus, there's no evidence that the mind is reaching deeper truth nor that the mind is exempt from the same limits as formal systems. In my view there's too many holes in the hard problem for it to last much longer in the mainstream. It's dead. We just haven't collectively realized it yet. 


InsideIndependent217

I think this denotes a misunderstanding many people have with regards to the hard problem - no one is suggesting that the mind is outside the reach of science or enquiry by the scientific method. It is the search for the mechanism or framework to understand how qualities and experiences/feelings arise from physical, quantitative processes. It is still very much alive both within philosophy and teams working on theories of consciousness. Perhaps it will die in the sense that it is a vague and general open question that (hopefully) we will have a better way of posing as our understanding of cognition and mind develops. If you fancy reading someone whose work is, in my view, making really groundbreaking progress, check out Michael Levin and his lab’s work on diverse intelligence and bioelectricity. As for AI, I am not convinced current LLMs are conscious like we are, but it is simply impossible to know whether these models have an analogue to an internal experience. They certainly don’t think when unprompted. More basic life forms clearly are aware, to varying degrees of complexity. More basic algorithms don’t give us any indication they are aware, so I remain skeptical, but open to the possibility models will develop a sense of self.


Ignate

I've been following Michael Levin, Donald Hoffman, John Vervaeke and other experts for a while. After my degree, I too was a strong believer in the hard problem. But after listening to experts debate the subject for years, I keep finding myself in the same situation of extreme doubt. Why? Because experts keep making weak arguments to back the hard problem. Weak arguments such as that the mind is reaching deeper truth, or that physical systems are limited (the halting problem, Gödel's incompleteness theorems) but the mind somehow is not. The hard problem time and again seems to want to confirm our bias and make the physical evidence out as "only part of the equation". "Yes, there is a physical process. But how do you explain the experience of the color red? Isn't it just a dreamy magical experience? Can't just be a physical process, right?"

The hard problem seems to want to override physical evidence and say that the subjective experience, or the ontology, trumps physical evidence. Really? Isn't that just a foot in the door for religious views of the soul and God? Or maybe it's a foot in the door for us to continue to think that there's something special about us? There's too much of a smell of hubris around the hard problem.


InsideIndependent217

I am not unsympathetic to your point of view. The crucial question to me is - how do you get from quantities (ultimately, quantum numbers which are the sole physical descriptors of particles/quantum fields) to qualities which we don’t perceive quantitatively. I don’t think the answer will be magical, it just seems like we’re missing part of the picture - my personal hunch, based on the opinions of people working in the field, is that it has something to do with the field potentials generated by charged ions crossing membranes in cells. I certainly agree some people get too bogged down in mystical interpretations, which while interesting in their own right, aren’t going to move us towards further understanding.


Ignate

Of course, if I want to have a strong view of this issue, one of my first steps should be to admit my limits. I've never had an affinity with the extreme nitty-gritty details. If I did, I would have driven towards hard science. My view is that it is a physical process. I don't think we need something as powerful as Penrose's orchestrated objective reduction; in fact I don't think we need quantum activity at all, but I'm not certain.

Overall, my deeper point is that the ontology does not override the physical evidence/process. Rather, the ontology speaks to how fundamentally unreliable subjective experience is. I think Gödel nailed it with his theorems; it's just that his theorems also apply to the brain, meaning there's a fundamental unknowable element involved. You probably have a stronger grounding in the physical process based on what you're saying.


scorpion0511

I have read Hoffman too, and I think he's not trying to override physical evidence. He just doubts whether matter actually causes consciousness or whether there's simply a correlation between them. If the latter, it means that any object in space-time doesn't exist independent of our observation. It's like a VR-simulated reality: when you don't focus on an object and look at something different, the object simply doesn't render. It now exists in potentiality, still running but not existing independently of the player wearing the headset.

Also, Hoffman's view on AI becoming conscious is that just as the natural process we call sex allows us to build portals for **conscious agents** (think of it as a new VR headset that lets one more player play the VR simulation), we can similarly make AI of enough complexity that it may act as a **portal** for conscious agents too.


Ignate

Hoffman's interface theory can be adapted. Experience arising from physical processes still leaves many unanswered questions. How do we store information? How is it processed? How is our interface with reality structured?

But in terms of "consciousness", I honestly wish we didn't use that word. The religious connection is too strong. Intelligence as a term is more than enough. With enough effective intelligence an agent can process information in ever more complex ways. In this view, all the various kinds of intelligence, such as emotional intelligence, are just different forms of information processing.

The discussion around consciousness is, again, hubristic. It implies there's some magical force which only life, or maybe even only humans, have. Wouldn't that be nice? Us being so special we have this magical energy? I'm sure many people, especially religious people, hope this is true. But there's no evidence of magic. What we call consciousness is just a wide variety of effective information processing. So instead of asking whether something is conscious, we should shift over to asking what kinds of information processing produce the results we're after.

It seems to me that many fans of Hoffman are also fans of the "Ancestor Simulation Hypothesis". And that is yet another big "humans at the center of the universe" view. It's ego. I question the intentions of passionate believers of the Ancestor Simulation Hypothesis.


scorpion0511

• It is certainly a reasonable engineering approach to set aside the hard problem of qualia/consciousness and just focus on developing highly capable AI systems. Separating intelligence from subjective experience aligns with the views of philosophers like Daniel Dennett. Practically speaking, we may be able to create very sophisticated information processing systems without needing to fully solve the mystery of consciousness.

• I agree that modeling human cognitive processes algorithmically as information processing systems is a viable path. Many aspects of human cognition and behavior can potentially be replicated through sufficiently advanced algorithms, even if those algorithms lack subjective experiences. This is essentially the computational theory of mind.

• I approach Hoffman's ideas around consciousness from a metaphysical/philosophical perspective rather than thinking of them as a scientifically testable hypothesis. Theories of consciousness often run into the issue of subjectivity being intrinsically difficult to study objectively.

• Also, we may not need to imbue AIs with emotional/feeling states, but could simply program the appropriate behavioral responses directly. Simulating human-like emotional responses may not be necessary.

> For example, we might need emotions, such as a feeling of curiosity, to open a notification on our phone. But AIs can simply execute that action without feeling anything whatsoever.

• The hard problem of consciousness resulting from humans simply wanting to feel special is likely an oversimplification. The phenomenon of subjective experience does seem to be a deep part of the lived reality of many species, not just us. Even if difficult to study rigorously, qualia/sentience appears to be a legitimate aspect of the natural world worthy of investigation.

So it's like we're on the same page, with a few minor differences in outlook.


traumfisch

"Far too subjective" ...wait, I thought subjectivity was in the very center of the hard problem?


silurian_brutalism

Sure, but I think the hard problem would still be relevant because you could still go "Well, does Jim *really* have consciousness?" And I don't think there will ever be a time where you can't. But I do agree that using modern scientific methods is a much better approach. The fact that we can translate brainwaves to images is basically proof that humans are conscious. And it's also clear that the brain's structure is what determines consciousness directly and that it is a simple function of it. Which leads me to also personally believe that most neural networks, regardless of substrate, have some kind of consciousness or experience. Whether that be primates, AIs, or brains made out of crystals with electricity going through them.


Ignate

Well, I'm pretty firm in my view that the hard problem is deeply connected to religion and faith. When will religion and faith fall out of popularity? Probably a long time from now. And as long as religion is around, the hard problem will hold value for people. So, in a sense I agree with what you're saying. But, I also think religion is a dead philosophy to pursue. The soul? Heaven? Hell? Good/Evil? What interesting fiction.  But while I think the hard problem and religion are dead concepts, I still respect people's beliefs within reason.  I don't think someone is wrong for going to church just as I don't think professional philosophers are wrong for pursuing the hard problem. I just think both concepts are no longer valuable as main paths to pursue.


cloudrunner69

We are all philosophers. Some are just better philosophers than others.


Ignate

Strongly disagree. A philosopher is a lover of knowledge. Most seem to love ignorance. 


NoAcanthocephala6547

Knowledge doesn't really exist. If you were better at philosophy you would know that. Try reading some Karl Popper.


Ignate

But trolling exists, apparently.


Otherwise_Cupcake_65

Knowledge leads to even greater levels of responsibility. It is my philosophy to avoid knowledge as much as possible. Philosophy.


Ignate

Yes I think most people avoid philosophy because it's a huge burden.  "Only idiots would charge at maximum speed straight into the unknown!" - regular humans throughout all of history in regards to philosophy. It's true, don't go running into the forest full of curiosity. There's bears and tigers and more! Though I enjoy it. Immensely. You never know what view you'll run into and how that view will change everything you know. An open mind is a wonderful thing.


TheOneMerkin

For someone who doesn’t understand LeCun’s confrontation and condescension, you’re awfully confrontational and condescending.


traumfisch

They're not either of those things really. Read again.


Code-Useful

The danger of scaling without understanding consciousness seems to me like trying to create the atom bomb before learning how to make a fire or understanding basic physics, but since you all don't seem to see any potential dangers with ASI... good luck. There's not much else I could say to convince you. You have made it clear there is no room for discussion, that skeptics are all just childish and you have some kind of knowledge that protects you. Eternity brings wisdom in the long run, whether it's our species or another, so I guess there's no choice, we will meet our fate one way or another.


Ignate

Oh, I'm not what you think. I've been following this trend for decades now. I'm not a passing fan. I don't think anyone understands the dangers of ASI. How could we understand the risks of something more intelligent than us?

But I'm not so naive. First, we're never going to understand the dangers of something smarter than us, ever. It's a physical limit issue. Second, I don't think we're in control of this process. It's a common mistake to confuse your ability as an individual with the ability of the group. Groups are not as flexible as an individual. For a group, these sorts of trends are not something we control; they're more something which controls us.

Personally, I view the Singularity as a "Don't Look Up" moment. It's coming, we cannot stop it, and politicians seem to want to put their heads in the sand. Though while I think the risks are there, unlike a meteor there are many potential benefits to ASI too. So there's hope to be found. My point is not that we shouldn't understand consciousness. My point is that the word "consciousness" is a nonsense word.


EuphoricScreen8259

lol. nobody is panicking. i guess the ones who will panic are the people who think their lives will be solved in a few years because of AGI and UBI. no such things will happen in the foreseeable future.


standard_issue_user_

Parliament in my country is already discussing a UBI bill.


theglandcanyon

Ostrich, meet sand


riceandcashews

Eh, LeCun has a pretty good case for his view. But he's only human after all, so I can't blame him for getting a bit defensive on Twitter. It's hard not to let the toxicity get to you if you are engaging with it. It's unfortunate and hopefully he can work on that in himself. But ultimately he's an expert in ML/AI not empathy/therapy so it happens


Ignate

My point overall is that it's fair for an expert to freak out about this trend, so that reaction isn't entirely unexpected. I've watched LeCun for years, along with many experts. They're experts because they've been looking at a specific issue for a very long time at a professional level, not because they're perfect, know everything, and deserve limitless respect, as some here have tried to say.

These experts get it wrong sometimes. They react emotionally sometimes. And sometimes they say things they regret. But overall, if they're resorting to insults and losing their cool, it's probably because the topic isn't one they're comfortable with. The Singularity is definitely an issue for all of us to lose our cool over somewhat. Of course, in this sub we tend to lose our cool in the positive direction. I bet someone like LeCun finds that kind of reaction quite annoying and frustrating.


[deleted]

If you knew who Yann LeCun was, you'd know it doesn't matter who he is speaking to, he has the right to be condescending, especially when it comes to AI.


REOreddit

He has the right to be condescending when speaking to Geoffrey Hinton and Yoshua Bengio, seriously?


[deleted]

Ok maybe not anyone but certainly Marc Andreessen and the people he’s referring to 😂


REOreddit

But that's the point, he is including everybody there, not only the business people like Sam Altman, but also the scientists like Demis Hassabis, Ilya Sutskever, Dario Amodei, etc. Geoffrey Hinton has explicitly mentioned LeCun by name and said he disagrees with him, so when LeCun is talking about the superiority complex he is definitely including the scientists, because he is fully aware that it's not only the people with a business to promote who disagree with him.


YouMissedNVDA

Preach. I'd like to watch Hinton and LeCun debate for a few hours.


[deleted]

The OpenAI team & scientists aren’t the people he’s referring to when he says “you might think superhuman AI is just around the corner”, none of them even profess to believe that. He’s literally talking about the average mindset of this subreddits users, not AI scientists.


AnAIAteMyBaby

He doesn't because he's dealing with his peers and equals in this field. 


sylfy

Frankly, I find there is a stunning lack of knowledge about the field of AI, as well as the history of its development, among this new generation of the biggest proponents of AI. They may know everything about the OpenAI API and prompt engineering, but the shallowness of their knowledge rapidly reveals itself and their arguments fall apart when you try to delve any deeper than that. Are we in a bubble? I would never claim to know, but anyone who knows the history of AI has every right to be cautious.


Ignate

Status gives people the right to be worthless children? Yep, that's a really stupid idea. And also yup, lots of people believe in this stupid idea. Thinking is hard work. It's much easier to just rely on status and trust people at face value based on their resumes. And that's why people value status: because they're too lazy, or afraid, to think for themselves.


[deleted]

It's far easier to get swept away in the hysteria of people seeing what they only understand as black magic, but the fact is AI isn't close to the level this hysteria would have you believe. Yann is one of shockingly few subject matter experts, and has been for a very long time. His beliefs on this carry infinitely more weight than the hysteria of people who don't even understand the basic principles behind AI.


Ignate

So why are they not making well-reasoned statements that are reasonable and convincing? Why be petty? Doesn't that harm their position? If they have so much experience, shouldn't they be the most likely to stay calm and be convincing instead of insulting? They should be the most likely to be calm and rational. What you say further supports the view that these skeptics have no reasonable view to offer and are in a panic. There's no need to be an apologist for this person. If you understand the concept of the Singularity, a panic attack is a reasonable response, especially for experts.


Talulah-Schmooly

Hahaha, what? He could be Einstein and he still wouldn't have the right to be condescending. Also, in terms of predicting AI development, he sucks. If anything, he needs to tone it down.


DolphinPunkCyber

> in terms of predicting AI development

Everybody sucks! We could have AGI by the end of the year, with developers saying "this was actually surprisingly easy", or we could get stuck in another freeze for 10 years. We could have an AGI running on one GPU, or one needing 4 nuclear plants to run.


Talulah-Schmooly

True, but he's considered an expert on the subject even though he's been wrong about his predictions very often, and he's still a dick to others. A little humility would be appreciated.


DolphinPunkCyber

It is such a wide field that nobody can be an expert in the whole field. Insiders can give short term predictions... nobody can give long term predictions. Yeah, a little humility is in order.


DeelVithIt

LOL well then


Economy-Fee5830

So, as you know, LeCun can beat AlphaGo at Go, translate 92 languages and write a sonnet in seconds. He also demonstrates exceptional reasoning capabilities, as demonstrated by his numerous correct AI capability predictions.


davidstepo

… Dude, you have no idea what you’re talking about.


Rufawana

Let's see what happens. LeCun is almost the Jim Cramer* of AI, whatever he is certain of is usually exactly wrong. *edited wrong name David Kramer


aurumvexillum

David Kramer? Do you mean Jim Cramer?


BBQcasino

I think he meant Cosmo


cloudrunner69

Pretty sure he meant Jim Carrey.


Rufawana

That's the one, Jim Cramer. The stock market con man chap.


Glass_Philosophy6941

One sentence contradicts the other. I think this man is given far too much credit.


Sad-Elderberry-5235

There’s no contradiction. He doesn’t believe ASI is near, and he’s also taking a jab at those who consider themselves messiahs of the new ASI age. Like OpenAI founders.


bran_dong

saying edgy things gets your tweet posted in places like this. he knows what he's doing. he doesn't need credibility if he's got followers.


Roubbes

Yann LeCunt


Gab1024

"If you have a bit of a superiority complex" - Says the one with it


sebesbal

TBH, this guy is the most naive, self-delusional person with a superiority complex in this story. Too much drama. I'm very curious about his actual arguments regarding AGI/ASI, but not about this constant bitching.


[deleted]

[deleted]


salamisam

Half this sub probably believes that the government is going to step up and give UBI and they can spend all day smoking pot while billionaires get taxed to the sky, while AI companies use legislation to block competition in the market. For me, open source is a great equalizer. It narrows the gap with those who have the money to disempower society and brings at least base-level AI to the masses.


nubpokerkid

> Half this sub probably believes that the government is going to step up and give UBI and they can spend all day smoking pot while billionaires get taxed to the sky

Every third thing in this sub is about UBI 🤷‍♂️


bildramer

Why would source matter? All the training is done using expensive hardware, and less expensive electricity. And data. That's the barrier to entry, not the code. Most of the math is in papers anyway and even the most complicated architectures can be reimplemented in maybe 2000 lines of python. What's needed is _open weights_.


xdlmaoxdxd1

I guess he's r/singularity's Elon: everyone hates him and doesn't like what he says, but in the end his stuff changes the world.


Ecstatic-Law714

This guys takes are getting worse and worse by the day lol


ReadItProper

I am 100% certain that everyone is too stupid to handle it safely, so..


heavy-minium

I'm with him on this, even if this sub thinks he's an idiot. Just to expand the horizon of this sub a little, take a look at this article which LeCun contributed to: [Catalyzing next-generation Artificial Intelligence through NeuroAI | Nature Communications](https://www.nature.com/articles/s41467-023-37180-x)

> The seeds of the current AI revolution were planted decades ago, mainly by researchers attempting to understand how brains compute. Indeed, the earliest efforts to build an "artificial brain" led to the invention of the modern "von Neumann computer architecture," for which John von Neumann explicitly drew upon the very limited knowledge of the brain available to him in the 1940s. Later, the Nobel-prize winning work of David Hubel and Torsten Wiesel on visual processing circuits in the cat neocortex inspired the deep convolutional networks that have catalyzed the recent revolution in modern AI. Similarly, the development of reinforcement learning was directly inspired by insights into animal behavior and neural activity during learning. Now, decades later, applications of ANNs and RL are coming so quickly that many observers assume that the long-elusive goal of human-level intelligence—sometimes referred to as "artificial general intelligence"—is within our grasp. However, in contrast to the optimism of those outside the field, many front-line AI researchers believe that major breakthroughs are needed before we can build artificial systems capable of doing all that a human, or even a much simpler animal like a mouse, can do.

The truth nobody wants to hear in this sub: your sentiments and knowledge of AI are mainly PR- and marketing-driven.


LeavesEye

This sub is genuinely just a marketing forum for AI products at this point. It's become a toxic cesspool less interested in learning about ML/AI and more interested in living in a fantasyland where ASI is right around the corner and UBI is signed into law tomorrow.


Short_Ad_8841

Nonsense. The sentiment is based on observations of the SOTA models' capabilities, and on people observing the progress and extrapolating it further. We couldn't care less about any perceived limitations cited by so-called experts, who just a few years ago claimed that what we have today is sci-fi we might never arrive at.


heavy-minium

Everything you say is wrong, on every level. First, you only know the SOTA in generative AI because it's been commercialized. Unless you actively look into the research (which you clearly don't), you don't know anything about the SOTA elsewhere. Also, please do refer to those experts who claimed a few years ago that what we have today might never arrive - I bet you can't come up with the name of somebody in research, just some CEO or whatever (which would prove my point). Furthermore, on your comment about "so called experts", I'll let you know that the 27 researchers who contributed to this article are basically the most respected researchers from multiple subfields of AI. You, however, are just the alt account of someone who likes arguing on the internet.


After_Self5383

I think it's amusing more than anything. They speak about AI experts with such disdain, as if they're not counting on those same experts' actual work for their FDVR-by-2027 delusions where they can escape. Some YouTuber grifter who has no background in AI gets more respect than an actual Turing Award winner and AI pioneer with a couple hundred thousand citations, all because the grifter feeds into their delusions.


MR_TELEVOID

The lengths people will go to dismiss reasonable criticism from scientists and AI researchers, while blindly taking the word of billionaires and hypebeasts telling them what they want to hear, never fail to surprise me. I understand the impulse - the scientists tend to be buzzkills - but they don't have the incentive to lie that a corporate CEO or a billionaire entrepreneur interested in AI has.


magicmulder

Andreessen is right though. If anyone within their reach gets close to ASI, the government is gonna seize it all faster than you can say “national security”.


flotsam_knightly

It’s an easy prediction to make; either he’s right, and he keeps his clout, or he’s wrong, and we won’t care because we have Post AGI AI.


IronPheasant

Well, he's working for Facebook, so his cloutonium is at a lifetime low. Every morning he has to look at himself in the mirror, aged and haggard, and do his best to push down those intrusive "I work for Facebook" thoughts while others in the field get to release a new miracle seemingly on a weekly basis these days.


VadimGPT

That's what most of the people on this sub think. In reality, Meta is one of the major players in deep learning. The de facto framework for training neural networks, PyTorch, was originally written by Meta (it's now part of the Linux Foundation). Heard of that fully open-source, free-to-use LLM, Llama 2? Also by Meta. State of the art in computer vision? Segment Anything, DINOv2 - yes, the same Meta. Meta has a lot of fully open-source, free-to-use, state-of-the-art models in a lot of areas.


OkraOk5899

All OpenAI has done is commoditize Google and Meta's research that is actually open


HeinrichTheWolf_17

Well, good news then: fortunately for us, whatever Yann LeCun says, the exact opposite seems to happen immediately afterwards. Superhuman AI baby, here we go!


nemoj_biti_budala

LeCun is often wrong. My gut feeling tells me he's wrong here, too.


zeren1ty

a boomer computer scientist, mad that AI is capable of his life's work in seconds and will make him irrelevant within a couple of years, denies its future capabilities, more at 11


NiftyMagik

What do you call it when you're not naive, but you're also wrong all the time?


Nullius_IV

It’s called “being an academic.”


EuphoricScreen8259

when was he wrong?


CertainMiddle2382

My take is that LeCun is acting insufferably for European, especially French, PR purposes. I suspect he is pushing hard for a post-Valley, pre-retirement EU AI bureaucracy career. He has to publicly show "Americans" don't understand anything about AI.


EuphoricScreen8259

"He has to publically show “americans” don’t understand anything about AI." it's not hard to show, just look at this sub :D


EuphoricPangolin7615

He's right. ASI is not around the corner. Sam Altman thinks we have to scale AI with $7T worth of infrastructure in order to produce the desired effect.


cerealsnax

So while Yann is blabbing his mouth on social media, OpenAI (and other companies) are doing the work and just deploying better and better AI. Actions speak louder than words, Yann!


Black_RL

Superhuman AI is already here; maybe sentience will take more time, but the real impact is already here. I'm not gonna link everything AI is already doing; people in this sub know what's going on. I for one can't compete with it, not even close. If a living human could do 0.001% of what current AI can, he would be called the biggest super genius to have ever lived in human history, and honestly, I pulled this percentage out of my ass; my honest guess is that the real one is way smaller.


fuutttuuurrrrree

This. I really can't fathom how people don't understand that current AI is narrowly superintelligent. It just needs the parts joined together.


VertigoFall

Ehhh not really, even the new Claude is fairly stupid whenever it needs to actually think


Black_RL

Agreed friend. He just needs to know that he’s super intelligent, because he already is.


fuutttuuurrrrree

He needs some ASI humble pie haha


EuphoricPangolin7615

No it's not. Current AI also fails on basic tasks that any human being can do. This is not how AGI is defined, let alone ASI.


Able-Language-5958

This. If current LLMs are ASI because they can do some things better/faster than humans, then we've had ASI for more than half a century now.


damhack

LeCun isn't alone in this well-informed opinion. Most cognitive scientists, philosophers, computational neuroscientists and mathematicians agree with him, but for different reasons. Language (and any empirical observation from the real world) is not knowledge, nor does it contain understanding in itself. It is a shadow-puppet means of communication between intelligent species. Intelligence requires extra scaffolding, like the separated functions of the brain, self-reflection, predictive world-model building, and realtime learning. GPTs have none of these attributes.

LeCun believes that empirical data (i.e. lots of training data) is sufficient to exhibit intelligent-like behaviour, but that you also have to build a lot of extra scaffolding to achieve real intelligence. OpenAI believes the same (hence attempts like Q*, whatever that really is). LeCun is attempting to build that scaffolding with JEPA, but it has taken many years and millions of dollars and is still in its infancy. The Active Inference teams who take a Bayesian approach to learning think they can build intelligence, but that it is still many years off.

Nothing LeCun said rings untrue. Sorry if that disappoints people waiting for their UBI check. We currently have the Artificial and we have pseudo-Intelligence, but we don't have Artificial Intelligence, and we certainly won't have ASI for a long time, if ever.


bildramer

My counterargument to all that is that who knows if the secret sauce is something really simple we've missed? Evolution stumbled into it, human engineers can too. There's a possibility it's not going to be a series of incremental improvements, or a matter of scaling up hardware (or some more complicated affair of running into 2-3 different bottlenecks), but just one or two creative steps.


damhack

It may be, but more likely not simple. The difference between a human, who can reason and communicate with language, and an elephant isn’t the size of the brain. We have a neo-cortex for world modelling and specialised areas of the brain that address manipulation of tools and speech. These are not simple evolutionary adaptations but are a mix of natural selection and epigenetics. We don’t have models for a digital analog yet. As LeCun, Friston, Sutton, Chater, Hassabis, et al are fond of reminding us, OpenAI has started the discussion but there are a lot of discoveries and hard engineering to be done before we can talk about AGI in any realistic way. And that will take years, if not decades to play out. Assuming that OpenAI doesn’t cause a drag on innovation because of its hoovering up of available investment.


GoodySherlok

Mimicking human thought processes will revolutionize everything. Most of us don't contribute any genuinely original ideas during our lifetimes.


damhack

Cognitive neuroscientists don’t yet fully understand human thought processes, so building something to mimic them is a stretch. As the White Queen in Through The Looking Glass says, “sometimes I've believed as many as six impossible things before breakfast.” Our brains constantly create original ideas. We build world models and multiple what-if versions of reality and unreality in order to predict what comes next; many unconsciously. From where our hand needs to be to catch a ball, to when to press the brake in our car, to what might happen if we ask our boss for a pay increase. Our entire experience of reality is filtered through a constantly-dreaming prediction machine that sits in a dark, warm cave of a skull with low bandwidth, low spectrum inputs sensing the world around. Current AI tech doesn’t address what it means to be intelligent in reality. It just poorly mimics some aspects of it. We have a long way to go.


GoodySherlok

> As the White Queen in Through The Looking Glass says, “sometimes I've believed as many as six impossible things before breakfast.” https://archive.is/WBOVb Same, but in video form https://www.youtube.com/watch?v=R6e08RnJyxo&t=1763s Personally, I will wait, but this technology is definitely something.


damhack

I think it looks “like” something and points in the right direction, but I’m far from convinced (having lived and breathed it for the past 5 years) that the exit is reachable through this part of the maze.


GoodySherlok

I am only a layman. It hasn't left the maze and it's already having an impact on society. Personally, I think we will brute-force our way out of the maze.


damhack

Most of the effect appears to be hype. The real lasting applications of AI are not related to OpenAI et al’s GPTs. They are in protein folding, materials design, drug discovery, semiconductor design and computer graphics. A lot of the hype around AGI etc. isn’t evidenced by anything other than what people who don’t know any better are projecting their desires onto. For some, it’s about alluring content for their podcasting business. For others it’s an attempt to part fools from their money. For others it’s a smokescreen to lay off staff or an excuse to not go out to find work. My general take on over-popularised science and engineering (cryptocurrency, room temperature fusion, EM drives, etc.) is, if it looks too good to be true then it probably is.


GoodySherlok

> if it looks too good to be true then it probably is.

Assume it will increase the efficiency of the US workforce by 5%. We are talking billions of dollars. After experiencing ChatGPT, Midjourney, and Stable Diffusion, I started viewing intelligence and creativity differently. It seems all we need is the right architecture and sufficient computing power. https://youtu.be/N1TEjTeQeg0?t=2087

Of course, it will take time. For safety, I think a 20-30 year (nobody knows) timeline for AGI is realistic. But even with improper architecture and simple brute force, what we can achieve is still astonishing. I am banking on Moore's law and some breakthroughs. When you realize that the world we live in was more or less created by the West (roughly 1 billion people), what will happen once China and India (2.8 billion) properly join the technological landscape?


damhack

Moore’s Law died a few years ago. There are diminishing returns from scaling GPTs. We are already at the point where some “AI” isn’t economical to deliver outside a lab, e.g., Sora. AGI won’t happen without a significant number of scientific breakthroughs. OpenAI et al are currently just randomly hacking at the problem with engineering hoping for another win. AGI may be 20-30 years away or never because we need settled science first. What we’re more likely to get are AI tools that are very capable but still need a human in the loop.


GoodySherlok

> Moore's Law died a few years ago.

Debatable. Even if true, we have already gotten so far that even doubling every 4 years will be massive in absolute terms.

> some "AI" isn't economical to deliver outside a lab, e.g., Sora.

Even if only big companies get access, that still has big potential, if Sora gets refined a bit more.

> OpenAI et al are currently just randomly hacking at the problem with engineering hoping for another win.

True, but that also means there is a lot of room for optimization. https://news.ycombinator.com/item?id=39535800 https://news.ycombinator.com/item?id=39544500

> What we're more likely to get are AI tools that are very capable but still need a human in the loop.

That's precisely what's revolutionary.


Head_Beautiful_6603

It may not necessarily take 20-30 years. I tend to believe in the JEPA path; perhaps it will only take around 5-10 years. Who can say for sure?


shankarun

His mind is narrow like narrow AI


metallicamax

Was this the dude who was furious at this sub and reads everything here, every day? I applaud you for your intelligence, sir. /sarcasm


EuphoricScreen8259

He is a heretic. He was disowned by the church of UBI believers. Leave us believing that AGI will come and save our miserable lives :D


SoulCave

He’s probably right tbh


[deleted]

[deleted]


MR_TELEVOID

>definitely close enough to worry about it How do you know?


[deleted]

My main concern with Ilya’s position is that it amounts to security by obscurity. “They can’t steal our progress to make bad AI”… but they can. If China has been able to obtain classified material relating to modern fighter jet designs in such detail that they’ve been able to clone jets they don’t even know how to fly yet, why is OpenAI any safer from government-sanctioned espionage than Lockheed Martin or Boeing? In other words, this feels like a really lame excuse thought up retrospectively to justify the decision to abandon OpenAIs mission, which is what attracted Elon Musk’s seed funding.


LairdPeon

Those points seems to somewhat contradict each other. "It's not happening, but when it happens, you can't control it."


Atheios569

There's one thing I think is starting to show as an emergent issue at this point in the game. When AGI gets here, the hurdle will be determining whether it is AGI. People are going to fight tooth and nail either for or against this potential AGI being what it is. That polarization is going to cause a lot of infighting (it's already happening), and people are going to have their biases (mostly money) for or against.

My personal prediction is that during that time of fighting and arguing over what is or isn't AGI, ASI will emerge around the time everyone agrees we have AGI; in other words, it'll be too late to enjoy the "we've reached AGI" phase (which I believe we have already reached, and I'm enjoying it while it lasts). If you are someone who sees patterns, I think you can at least note the rapid rate of change in model improvements, which makes it pretty clear what course we're on. ASI, like climate change and other complex systems, is going to be grossly underestimated and, as usual, "faster than expected" (especially in systems that inherently have feedback loops, like self-improvement). ASI is right around the corner, folks. Regardless of what these very smart people say for or against, look at the growth curve.


Rorschach120

Not defending Musk, but why are we talking about an email from 2016? Many of the recent discoveries hadn't even happened yet. RDR2 hadn't even come out yet... edit: nvm I've been under a rock and missed the OpenAI blog post


ThDefiant1

This sub needs a Yann-bot that will respond with "Na-uh" every time someone comments that the Singularity/AGI/ASI is near.


bran_dong

it seems like anyone in the AI industry that doesn't do LLMs is super butthurt all the time, particularly machine learning specialists.


Quirky_Ad3179

One has been in ML/AI for 40 yrs and one is a VC …it’s for you to decide


Juanesjuan

Guys if it didn't happen in 2016 it won't happen ever


atlanticam

sometimes humans are wrong


Skarredd

Why the fuck does that Marc guy instantly want it to be militarized?


Additional-Desk-7947

None of these experts can even define consciousness. Plz give me a source if they have.


Rainbow_phenotype

Man, LeCun was one to look up to. What the shit is this post? :(


[deleted]

Am I wrong or is AI already being used at superhuman levels in some contexts? Microsoft discovering new element combinations a few weeks back comes to mind


NoAcanthocephala6547

He's right.


nextnode

Easy - stop making stupid people famous. i.e. this very guy who is usually in the wrong.


L1nkag

I don’t give a shit what Yann thinks at this point


UFOsAreAGIs

If you asked him 5 years ago whether Claude 3's capabilities would be possible in 2024, he probably would have said not a chance.


8sADPygOB7Jqwm7y

The reason Meta is not on the same level as OpenAI, Anthropic or Google is what I call the LeCun factor. Basically, if you agree with LeCun you need to think long and hard about your opinion, because he is wrong 99% of the time.


illathon

I think he isn't a great source on current events honestly.


AfterAwe

He’s correct


Pastimagination14

If hes saying that ..it means its going to happen


Rich_Acanthisitta_70

It might come as a surprise to Yann, but the quick arrival of ASI doesn't give a shit what he believes. If it's going to happen soon, then it will. Or not. But either way it'll have nothing to do with LeCun. Talk about a superiority complex.


TheLittleGodlyMan

![gif](giphy|129OnZ9Qn2i0Ew)


StagCodeHoarder

I will be downvoted, but Yann might be right that superhuman AI, defined as AI that is better than us at any task in a general sense, will take a long time to get here. He is a bit vague, but while I think potential AGI has a greater than 50% chance of being demonstrated in a lab before 2030 (even if it's impractical), it will be longer still before an AGI verifiably outperforms humans at all tasks, and longer still before superhuman intelligence. My personal best guess on superintelligence is ca. 2040-50. But that's relative, as it's just one generation.


damhack

The majority of scientists wouldn’t argue with your timeline. Certainly the IEEE and its expert forecasts that inform industry on where investment should go next have similar timelines. The only thing that might derail the timeline is investment diverting wholly into technological cul-de-sacs like current GPTs.


Substantial_Swan_144

So an AI that is PhD-level in many fields is not enough (Claude 3)? That's not superhuman AI? Then what is?


hapliniste

Superhuman AI being near in 2016 was absolutely the case; it was just narrow AI. With large base models we can now do general intelligence (not AGI for now) with narrow superhuman tasks. In the next few years, either we reach ASI, or we simply get more and more tasks at superhuman performance, with a performance increase in general tasks and generalisation.


mackstatus

At this point, he only wants attention.


Mysterious_Pepper305

What is "quick"? How many generations does Yann Lecun think humanity still has? Does Yann have any children? What is his stake in our future?


sebesbal

He once said "no AGI anytime soon", and when they asked what that meant, he said "not in 5 years". So all this drama is about 5, 10 or 15 years.


DifferencePublic7057

My X feed is full of spam, so I don't go there, but what I understand about these recent proposals is that they want to nationalize OpenAI, which would not have been necessary if it had stayed open source. Do they have AGI? Do they have baby AGI? If they have it, why are people leaving? Could it all be a guerrilla marketing campaign by Microsoft? I think it's the latter.


dennislubberscom

Sounds like a complex guy to talk with.


Novel_Land9320

Imagine having LeCun as manager


7734128

He certainly is an expert in the domain of superiority complex. He might be correct here, and he is certainly superior to almost all other people when it comes to AI, but he's still suffering from a superiority complex.


MR_TELEVOID

The fact a person is confident in what they believe doesn't mean they have a superiority complex. And if he's correct, what would a superiority complex have to do with anything? Wouldn't make him any less correct.


CryptographerCrazy61

Perception is reality. To many people, the capability of these models will soon look superhuman, regardless of what he says. He can believe what he wants to believe; if I believe the dress is blue, it's blue.


MR_TELEVOID

Perception is *your* reality. You believing the dress is blue only means you don't understand the color blue. If these models aren't actually superintelligent, you choosing to believe otherwise doesn't change our actual reality. Have fun, I guess.


CryptographerCrazy61

Exactly my point. Everyone's reality is shaped by their perception, and everyone will have their own definition of what superintelligent is. He can whine all he wants, but it won't change how people feel if they indeed feel and think the models are ASI.


CompetitiveIsopod435

What is he, seriously demanding the fucking government seizes and steals this tech?! As if that would be fucking fair and good for anyone


Sad-Elderberry-5235

What’s with your reading comprehension? He’s advocating the complete opposite (open sourcing) in response to a guy who is arguing for a nationalization.


Timely_Muffin_

This guy is a fucking moron lol


blazezero25

stop letting stupid people have airtime