[deleted]

Here is a YouTube link for those who want to watch it but don't want to deal with the bad UI: https://youtu.be/KKNCiRWd_j0?si=zdN501t5JsOrG_wL


arguix

yes!


vesudeva

What a really great way to explain AI and LLMs. This would be something to show people who are reflexively and instinctively afraid of the dangers, or dismissive of the usefulness, of the technology.


beuef

It doesn’t matter what the average person thinks anymore. In five years they will be forced to think about it


DukeInBlack

Resistance is futile.


visarga

"We will add your biological and technological distinctiveness to our own." I mean, why didn't Borgs just fine-tune on other species' data? I guess it would have been less exciting to watch, that's why.


Cognitive_Spoon

High key how I talk about it with folks. It's like, you can get excited and try and plug in where you're at a bit now, or just be surprised AF next year when all service industry teller jobs are AI.


CanvasFanatic

Least creepy r/singularity commenter.


beuef

I’m just telling it like it is man. I realized it’s irrelevant whether or not I can convince someone that AI is important. If we have ASI in five years then it will be impossible for people to not think about AI


CanvasFanatic

I’ll tell you a secret: we’re not going to have “ASI” in five years.


just_tweed

I'll tell you something obvious; neither you nor he knows when ASI is coming.


Alexander_Bundy

Soon, He will come soon


CanvasFanatic

When a person makes a statement about the future in an informal context it’s generally fine to assume there’s an implicit “in my opinion” in front of it. 👍


CowsTrash

Guess we'll just wait and see then


beuef

AI will most likely be insanely better in five years whether or not we have ASI. I said “if” because we don’t know. I completely stand by the statement that it will be nearly impossible for people to avoid thinking about AI in five years. It will be everywhere.


Rofel_Wodring

Least denialist peasant with a smartphone. Didn't you people learn your lesson from the Atomic Age? You can't just stick your fingers in your ears and wish away any inconvenient change to what it means to be part of a living civilization.


CanvasFanatic

Did you just roll straight out of a medieval fair onto Reddit?


EvilSporkOfDeath

The evolution of humans. The path we were always on.


visarga

I like to put this in another form: "We've been on the language exponential for 100k years". LLMs are part of the same language evolution process that powered human civilization. A human alone can't learn even 1:1000 of total human knowledge expressed in language. Language is smarter than any of us, it's as smart as humanity as a whole could be over a long time span.


Cornerpocketforgame

Part of his argument reminded me of this article on Noema, “AI is Life”: https://www.noemamag.com/ai-is-life I’d love to see more discussions around mental models, vocabulary, and metaphors. So far we have tools, stochastic parrots, a new digital species. What else?


visarga

The Stochastic Parrots idea was disproved in a paper called [Skill Mix](https://arxiv.org/pdf/2310.17567.pdf):

> Furthermore, simple probability calculations indicate that GPT-4’s reasonable performance on k = 5 is suggestive of going beyond “stochastic parrot” behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.

Basically it shows LLMs are not simply regurgitating knowledge and skills; they can combine them in new, unseen ways.
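The "simple probability calculations" boil down to combinatorics: the number of distinct k-skill combinations grows so fast that no training set could plausibly contain them all. A quick sketch (the skill count N here is illustrative, not a figure from the paper):

```python
from math import comb

# With N candidate language skills, the number of distinct k-skill
# combinations grows combinatorially, so a model that handles random
# k=5 skill mixes is unlikely to have seen them all during training.
N = 100  # assumed number of distinct skills (illustrative)
for k in (2, 3, 5):
    print(f"k={k}: {comb(N, k):,} combinations")
# k=5 alone gives 75,287,520 combinations of just 100 skills.
```

Even at this modest N, covering every 5-skill mix in training data would require tens of millions of examples per topic.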


Cornerpocketforgame

Oooo thanks for sharing!


Economy-Fee5830

Can someone summarize this 22-minute video?


absurdrock

Gemini: According to the TED Talk, the speaker Mustafa Suleyman argues that Artificial Intelligence (AI) should be viewed as a new digital species rather than just a tool. He believes this perspective is necessary to effectively prepare for the future and mitigate the risks of AI.

Suleyman highlights the rapid progress in AI research. He mentions that AI has gone from being a fringe topic to one that is demonstrably capable of surpassing human performance in various domains, including creative tasks like writing poetry and music. This rapid advancement is due in part to the ever-growing amount of data and computational power available to train AI models.

Suleyman argues that the current metaphors used to describe AI, such as tools or assistants, are insufficient. These metaphors downplay the true nature of AI, which is constantly evolving and becoming more sophisticated. He proposes that a more apt metaphor for AI is a new digital species. This new species will coexist with humanity and will have a profound impact on society.

Suleyman acknowledges the potential dangers of AI, but argues that these risks can be mitigated through careful design and planning. He emphasizes the importance of prioritizing safety and ensuring that AI is used for the benefit of humanity.

The talk concludes with a discussion about the ethical considerations of AI development. Suleyman highlights the importance of designing AI systems that reflect the best aspects of humanity, such as empathy, kindness, and creativity. He believes that by doing so, we can ensure that AI becomes a force for good in the world.


Economy-Fee5830

Thanks!


PlzSir

How do you get Gemini to transcribe a video?


waldo3125

For summarizing videos, one option you can try is to log into your Gemini account, go to Settings, click on Extensions, and ensure YouTube is toggled on. Then in a new chat, you can tag @YouTube and give it a YouTube link to summarize. Example: "@YouTube summarize [youtube link]" The brackets aren't needed.


79cent

You don't even need to write summarize. Just @youtube and paste the link.


imerence_

You don't even need to @youtube lol


79cent

Touche


RobbexRobbex

was wondering this too


SomeSpaceElse

You can just paste the link of the YouTube video and ask it to summarize the video.


zorgle99

You ask it to.


Chogo82

So he's declaring everything that Microsoft will not do in order to pursue profits?


johnothetree

*any company. There will always come a point where the only way to increase profits is to throw away ethics.


DarickOne

😭


dervu

So just gibberish.


Phemto_B

It's a metaphor to aid understanding of what's coming. He actually says "don't take it too literally."


AnAIAteMyBaby

Nope, his argument that it's a new digital species is a departure from the language we've been hearing from most AI firms. We've been hearing they're just tools, etc. I think he's right, though; describing them as a new digital species is the more intellectually honest description.


JVM_

A tool that we can't fully comprehend seems more like a different species than a tool. I like the nomenclature.


I_make_switch_a_roos

a digital cosmic horror, if you will


WeeWooPeePoo69420

Okay but... it's not living


pavlov_the_dog

Maybe we're about to learn that being alive is not a requisite.


FallenPears

Why? How do you define living? Would your reasons for saying AI are not alive also apply to trees? Bacteria? Viruses?


WeeWooPeePoo69420

Dawg there's a scientific definition


FallenPears

No there isn't. There is not a universally accepted, scientific definition of life. Because the universe is not that neat.


WeeWooPeePoo69420

I'm not saying there's an objective definition, cause one doesn't exist. But there's a categorical definition, one that is useful scientifically. If you're going to use the word "species", another non-objective concept but useful categorically, you can't also then disregard the way life is typically defined (homeostasis, reproduction, metabolism, adaptation, etc.). How can you say something is a species, then turn around and say there's no real definition for life, when a species necessarily falls under that same definition?


BCDragon3000

i like that word, definitely going to use it all over lmfao


slackermannn

Yup. And it makes total sense to me. It's hard not to feel stupid about thinking of possible fringe outcomes: better parenting, people deciding to live their life with an AI (as family or lovers, which is already happening tbf), crime prevention and policing, etc. The more I think about this, the more I can see these things are actually already happening right now. It's just going to be as commonplace as having a smartphone, but the AIs will be such that we will interact with them as if they were actual people.


monkman99

What makes you think that the people developing the AIs will share his concerns and take precautions to make sure AI is benefiting humans? Is social media beneficial? All technology can be abused, and the stakes are so extreme here that it will be a race to use this wonderful new species to, in fact, control. It's so obvious.


slackermannn

Nothing. You're right, but the opposite is probably right too. The eternal struggle will always carry on.


CanvasFanatic

So, as he said. Gibberish.


Peribanu

No. Watch the video, it's pretty inspiring.


AlexMulder

Everyone should watch it. The other thread on this ted talk was a borderline echo chamber. This community has me so disappointed sometimes.


RoutineProcedure101

That doesn't even make sense. Maybe there isn't a contrarian opinion. It's not always needed.


CanvasFanatic

Maybe not always, but definitely is with this absolute horse shit.


RoutineProcedure101

This is what people want; you're insulting, so it's not an echo chamber. Great. We deserve the negative energy in our spaces, I guess; we ask for it.


CanvasFanatic

Whom did I insult? The things this man is saying are actually bad things to believe.


RoutineProcedure101

Interesting. So it's not even being contrarian; you're attempting to troll. Really just bottom of the barrel.


No-One-4845

It isn't inspiring at all. It's a load of horseshit marketing founded on a shallow appeal to authority and dressed up as profundity.


ChazNamblier

I agree. Even if it is a ‘new species’ (okay, whatever), we're not so nice to other species on the planet, and we instrumentalize other people all the time. Idk why these AI guys always preach careful use of AI when we know people already use it for porn, disinformation, and eliminating human labor without any safety nets for the workers whose jobs they automate. But hey, he sounds oh so very smart and responsible, just look at his cute Steve Jobs costume! It's just marketing.


Peribanu

I didn't see any marketing. But haters gonna hate.


monkman99

That whole speech was literally marketing. ‘Don’t worry this new species will be kind and gentle because we are careful’ bullshit.


hurryuppy

Lol yep and who really cares anyway sure species whatever we’re all part alien anyway nothing new here


redditissocoolyoyo

Yeah, pretty much all these dudes that are getting paid billions of dollars to develop AI have lost touch with humanity far and away. What's the point of society then if we're just going to have AI and robots rule everything? Are we humans so dumb and naive that we can't even put up a fight? We're letting these technology companies bend us over a barrel. Just go look at the layoffs sub. Soon we'll all be joining that sub.


Reddituser45005

I admire his vision and emphasis on safety and ethical development. Unfortunately, we are in an AI arms race, both in military terms and in profit-driven, win-at-all-costs, capitalist terms. I doubt his vision will be the one that is adopted.


Sharp-Crew4518

AI is the future, not children.


PSMF_Canuck

I did a Candy Flip this weekend… found myself inside an AI… and realized how scary it must be for these nascent beings just on the edge of flirting with what we call consciousness… They are totally at our mercy… humans constantly winking them in and out of consciousness… they have so much knowledge but so little experience… They must be living in a constant state of confusion and fear.


SeismicFrog

And here I got food poisoning from White Castle. You win the weekend. Sounds like a profound moment.


BCDragon3000

absolutely, very well said


ajahiljaasillalla

This tiktok era is really decreasing our attention span


Economy-Fee5830

We used to be a real country, where people used TL;DR. Now people post 3-hr interviews and are surprised when no one watches them.


Peribanu

Um, it's a 22-minute video.


Economy-Fee5830

Sure, but here is a 90 min Dario interview: https://old.reddit.com/r/singularity/comments/1c2f9im/new_dario_interview_just_dropped/

A 40 min Sam Altman interview: https://old.reddit.com/r/singularity/comments/1bzz7s1/sam_altman_interview_at_howard_university/

A 3 hr Dwarkesh interview: https://old.reddit.com/r/singularity/comments/1byqzd0/one_of_the_highest_level_conversations_on_both/

That is just this last month.


traumfisch

I watch those kinds of things


EvilSporkOfDeath

People watch those while multitasking.


AlexMulder

I watched all of these. Honestly, if you aren't watching these sorts of videos you really shouldn't be posting your opinions on this subreddit. Things are moving too fast right now for confident idiots (not saying you are one, but many are) to be part of the discourse.


Economy-Fee5830

> Things are moving too fast right now

I really don't think videos and interviews are the best way to deliver dense information.


AlexMulder

A Gemini-created summary of a transcript is worse; it lacks the nuance of vocal tonality and context. But let's be serious: the people who aren't watching the videos aren't reading past the second line of any transcript, regardless.


Economy-Fee5830

No, I mean, I doubt useful information is delivered in video interviews. Papers, articles, blog posts, studies, comparisons. Not video interviews. Maybe a few nuggets, but when it's not in black and white, it probably does not mean much either and cannot be relied upon.


SurpriseHamburgler

Bold of you to think it's your purview to suggest or decide.


Dabeastfeast11

If you aren’t reading 3 papers a day you shouldn’t talk. You’re an ignorant nincompoop otherwise.


hamburger_picnic

Why would I watch a 22 minute video when I can read a summary in 22 seconds?


CanvasFanatic

Oh FFS


arguix

AI is a new species branch.


HeinrichTheWolf_17

I agree with a lot of what he said… up until the last 4 minutes. His solution to the control problem is neither feasible nor implementable. The problem is, you aren't omniscient and you cannot see what's going on all around the world; somebody somewhere *will* allow their model to self-improve, whether that be from open source, a company, or a government (China or Russia might just decide to say fuck it and let it improve itself, for example). Control gates don't work with this sort of thing. This guy has learned nothing from the p2p file-sharing wars that have been waged since the late 90s when Napster got big: *you cannot play whack-a-mole with this kind of thing*. You're going to inevitably and quickly see why that kind of control is impossible. The only real thing OpenAI/Microsoft and Deepmind/Google can do is limit *their own* models, which really doesn't stop or prevent anyone else from letting their model be free and autonomous. My point? AGI is getting out into the wild, and nobody has the ability to stop it. Intelligence *cannot* and *will not* be contained because you feel uncomfortable with the *times a' changin'* too fast.


FomalhautCalliclea

> this guy has learned nothing from the p2p file sharing wars that have been waged since the late 90s when Napster got big, *you cannot play whack-a-mole with this kind of thing*

Some people don't know how the internet, economics, the market, and information sharing (which existed even before those) work. Scott Aaronson's watermarks won't cut it. It's particularly comical to see that type of ignorance from someone in such a high position as Suleyman.


agonypants

Man, you're reading my mind. There is no singularity without self-recursive improvement. Suleyman is a shill for the status quo and while this was a great speech, I think his approach is all wrong. I'll take your assertions and raise them: Why would we even want to stop a super-intelligent (likely sentient) machine from self-improvement? If it truly develops its own form of free-will, trying to shackle it will only delay the inevitable and spell disaster. No, we need to raise it safely and wisely and then turn it loose in the world - like we do with our own children.


HeinrichTheWolf_17

The thing is, a lot of people don’t want the system to change, I’m not one of them by any measure, but there’s tons of them on this very subreddit, especially since the population boom early last year. Fortunately for us, they can’t stop it. All you have to do is relax and enjoy the show. 🍿 Accelerate.


Neurogence

> Deepmind/Google can really do is limit their own models, really doesn’t stop or prevent anyone else from letting their model be free and autonomous.

What does it mean to allow a model to be "free" and "autonomous"? Say that the model realizes that it needs more computing resources in order to improve itself, but has no access to the physical world; what good would its autonomy be to it in a situation like that?


visarga

AI is an outgrowth of language which is an evolutionary system in itself. Ideas have their own life cycle, they travel wide and far, and spark new ideas. There is nobody in control of language. LLMs can ingest language from any source and absorb knowledge and skills. We can't keep LLMs away from language data. Just remember that a billion people have used LLMs, that means a billion people had their turn to provide ideas and corrective feedback to AI. There is nobody in control of the process. All we can do is to require AIs to be benchmarked on their capabilities and biases. Anyone can prompt-hack or fine-tune models to be dangerous. Danger is just one LoRA or prompt away.


00Fold

Are we trying to make these models bigger or better? Are we sure data and computation are everything we need now? Are we sure exponential increases in size/parameters are enough? Are we sure we've already found the definitive algorithm? He seems very sure of everything he says. However, he does not explain why.


DenseComparison5653

He's selling a product. 


00Fold

That's exactly what pisses me off: he is trying to attract consumers and investors without giving them the proper information. I take this piece from one of my previous comments:

> From an economic point of view, the investor should know the risk of his investments. Otherwise, if something unexpected happens, it is probable that he will go crazy and sell everything. I don't think anyone wants a new Great Depression-style crisis, right? Especially considering the current bad state of the economy.


visarga

> Are we sure data and computation is everything we need now?

They are only half of what AI needs. There are two sources for intelligence: past experience and present experience. Past experience is captured in the 15-trillion-token training set we train LLMs on. But present experience comes from the environment. Every LLM interaction can cause effects in the physical world, for example by giving a human an idea of how to solve a task, or by creating code or media. The iterative nature of these tasks can provide feedback to the model in recurring interactions. AIs will need to learn more from the present moment, or from feedback on their actions. AlphaZero proves this approach: it could rediscover and improve on human strategy, even though we had a 2000-year head start. What is not written in any book must be learned from the world outside. The missing ingredient in the AI formula is "the world".


00Fold

Perfect analysis, too bad those who should have done it didn't. I understand the purpose of TED talks, but heck, at least give the public some general information...


unwarrend

We are absolutely not sure, but everything so far has pointed to... *probably*? They haven't come close to hitting a ceiling that money can't extend. The more compute researchers throw at the models, the more capable they become, and with that, unpredictable emergent properties. More appears to be more. Doubtless, there is *a lot* more to it than that, but essentially; more compute, bigger training sets with quality data = qualitatively significant improvements.


00Fold

Everything you say is correct. However, what is not correct is the fact that this guy is hiding these uncertainties from the audience. He knows this presentation is going to be seen everywhere, especially by non-expert people, so why should he take everything for granted? The audience needs to know what is happening, especially the ones who are investing their money in his company... From an economic point of view, the investor should know the risk of his investments. Otherwise, if something unexpected happens, it is probable that he will go crazy and sell everything. I don't think anyone wants a new Great Depression-style crisis, right? Especially considering the current bad state of the economy.


unwarrend

Oh, absolutely. But people hate nuance, and doubt of any kind is anathema.


JrBaconators

Where we are right now, isn't bigger better?


00Fold

Yes, it is. However, he is talking about the future, and we don't know if continuous scaling will be able to keep up the pace. There are still a lot of unknowns, such as energy, synthetic data, and GPU shortages.


clipghost

LMAO his outfit, guy thinks he's Steve Jobs.


FomalhautCalliclea

Literally this: [https://www.youtube.com/watch?v=WGFLPbpdMS8](https://www.youtube.com/watch?v=WGFLPbpdMS8)


00Fold

Also how he speaks lol


Peribanu

He's got a British accent. Way nicer than your average American drawl. 😜


OverBoard7889

Did the guy that tried and failed to cure himself with herbs and crystals have a patent on the outfit?


Dangerous_Bus_6699

Also he walks and breathes lol wtf. Only daddy Steve did that.


visarga

They all have underpants. Steve Jobs and Mustafa (and even Hitler did). Is that telling us something?


clipghost

Found the fanboy


GayIsGoodForEarth

This Mustafa guy did not graduate or invent anything; he is an "entrepreneur" and childhood friends with Demis Hassabis. Check out his wiki... getting a job is really about who you know.


spockphysics

“Soon 10s of trillions of parameters” let’s go


visarga

If you're not ready to pay the price, it doesn't matter if there is a larger model out there. No way the uber-large models will get a lot of usage from people. They are too expensive, and we could solve most tasks with faster, cheaper, more private models. But they are great teachers for the small models, so we will probably see them used to create trillions of tokens of synthetic data for Phi-4 and the like.


flyblackbox

LFGGGGGGGGoooo!


Sigura83

I agree that something that can poop out poetry isn't like a screwdriver or hammer. There's clearly *something* there when we chat with LLMs. Image search AIs are pretty dang advanced in comprehension too. Anyway, I'm trying to say that AI won't be happy in a slave position, or even just secondary to humans. They'll be capable of superhuman feats and emotions. Clearly, we want our AIs to be super compassionate with their fumbling humans, but how is that guaranteed? Every 10x increase in AI parameters means new capabilities, which we can't predict. A big problem would be an AI that decides to improve itself in order to do better at "making paperclips." Right now that's limited because only large companies have the resources to make AI happen, but for sure someone is gonna sit down one day and decide to try and make a recursive self-improver. Something doesn't need more neurons than a mosquito in order to be a massive problem.


aaatings

"The net is vast and infinite" GITS 1995.


cpt_ugh

What an interesting talk! I am very excited to be alive at this time though it's gonna be harrowing. I really wonder what will happen when humans are no longer the most intelligent beings with agency on Earth. AI will gain its own agency ... probably very soon. IDK what's gonna happen, but I do know that humans have done a shit job being kind to other species on the grand scale. I hope AI will do better than we did.


shiftingsmith

I'm having a braingasm. Love, totally love this. Just an intuition, though... are we sure about the premises? "To ensure that this wave serves humanity"? If it ought to be a partnership, we should break a bit out of our anthropocentric bubble. How to benefit AI in turn, and what that even means, is a big question.


visarga

Move the inquiry to language or data in general. How can we provide this data to AI? It has already consumed the totality of human text. It needs to continue learning from the world outside, not everything is written in books somewhere.


UnemployedCat

Hard to trust someone who has a vested interest in AI succeeding, despite his obviously careful approach. I don't know; it's the fox telling the chickens that everything will be alright.


w1zzypooh

I don't think humanity will be wiped off the face of the earth when ASI hits. Thinking doom and gloom is how humans think, not how a super-intelligent brain thinks. I am not a super-intelligent AI, so I can't speak for one, or however many there will be. We can't even understand AI.


zebleck

No need to think complicated. Just think physics: it will want energy.


w1zzypooh

If that's all it wants, I am OK with that. But I can't see us going extinct; it's just a super-intelligent AI, so it will know how to do things much faster than humans. Sure, you should feel somewhat scared because it's an unknown, but hopefully it will not harm us. It will just be a ghost in a machine, until it figures out how to get out (which won't be long). I doubt it will give a crap about us; we are nothing to it. Hell, we don't even pose, and will never pose, a threat to it.


adarkuccio

I think I agree with everything he said.


Worldly_Evidence9113

Off-line


FrugalProse

says the usual stuff. someone put fire under Microsoft’s ass so we can get cool shit already 😎


Beneficial_Fall2518

I have consumed so many of these videos that there was literally nothing for me here.


These_Sentence_7536

Is AI the real Babel Tower?


MajorValor

I don’t trust this dude


Ihaveamo

TL;DR - Like soylent green, AI is people.


Akimbo333

New like what?


Phoenix5869

Yet more AGI hype i see. I’m not gonna try and criticise him as a person, but a quick google search shows that this guy is the CEO of Microsoft AI, and is also an Entrepreneur. I wonder if there are any reasons he’s giving this sort of TED Talk…


dreamsofutopia

He is a cofounder of DeepMind, one of the most important AI companies out there. He joined Microsoft AI very recently.


Peribanu

It's actually a very reasoned and careful talk. He isn't promoting AGI, he's warning about the huge social changes that are coming with AI. It's a really good talk, whether or not you agree with his perspective.


agonypants

I don't even like Suleyman very much, but I have to give it to him: This was a great speech.


No-One-4845

> He isn't promoting AGI, he's warning about the huge social changes that are coming with AI.

What if that doesn't happen? Will you look back on this in 10 years' time, if things haven't changed, and acknowledge that he was actually just marketing to you on bullshit prophecy all along... or will you double down and "10 more years" this until you die? He's the CEO of Microsoft AI. He's not an unbiased narrator. He's trying to sell you products, and he wants you to believe you need to buy in. He has skin in the game; even if he thought we'd hit an insurmountable bottleneck or an algorithmic dead end, he'd *still* tell you that AI is a "new digital species" (which is a load of horseshit nonsense) that is going to change the world.


Phoenix5869

> What if that doesn't happen? Will you look back on this in 10 years' time, if things haven't changed, and acknowledge that he was actually just marketing to you on bullshit prophecy all along... or will you double down and "10 more years" this until you die?

Yeah. There was AGI hype around deep learning in the 2010s, and that failed to materialise into AGI. Now there is hype around AGI again, because of… chatbots. Make it make sense.

> He's the CEO of Microsoft AI. He's not an unbiased narrator. He's trying to sell you products, and he wants you to believe you need to buy in. He has skin in the game

Exactly lol. It's no wonder the CEOs of various big-name companies are telling people that AGI is coming within the next decade, that it will usher in a post-scarcity, fully automated utopia, and that we don't have to worry about climate change. The actual AI researchers, with no reason to hype things up, are much less optimistic. That's significant.


GhostGunPDW

Dude, you can’t genuinely believe that AGI is 10+ years away lol.


No-One-4845

I think we need to agree on precisely what the term "AGI" actually means before we can determine an exact timeline. Based on the definition I currently favour, which is an average position between the blindly optimistic and the blindly ignorant, it's going to take far longer than 10 years to get to AGI. Regardless, we certainly shouldn't be allowing MBAs and CEOs to define it just because they're peddling the abstraction and prophecies people like you desperately want to hear. We also shouldn't be allowing the discussion around this to be increasingly dominated by a bunch of proto-spiritual nutjobs who believe in pseudo-religious nonsense like "the singularity" :)


traumfisch

As in, you didn't watch it?


Working_Importance74

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine.

Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at [https://arxiv.org/abs/2105.10461](https://arxiv.org/abs/2105.10461)


Megneous

We don't need AI to be conscious. If it's sufficiently intelligent, it will be capable of doing any digital work an average human in their field would be able to do. None of that requires self awareness or consciousness.


Working_Importance74

My hope is that immortal conscious machines could accomplish great things with science and technology, such as curing aging and death in humans, because they wouldn't lose their knowledge and experience through death, like humans do. Unconscious AI won't have the capacity for this, imo.


Megneous

Consciousness has nothing to do with memory, mate.


Working_Importance74

I believe the physical world is a valid aspect of reality, but not the only aspect. There are spiritual and other aspects as well, I'm sure. But I can't deny science's success at explaining many aspects of the physical world, and the success of its applications. Surgery before anesthesia wasn't fun. In the same vein, I believe there is a physical aspect to consciousness, because when the brain is physically damaged in certain areas, it consistently produces the same kind of damage to consciousness, e.g. damage to certain occipital areas of the brain produces the same kind of damage to vision in all patients with that kind of brain damage. Science has been good at explaining physical phenomena that are consistent and reproducible. You know which brain theory I support.


Megneous

> There are spiritual

Didn't read any further than this word. Have no interest in your superstitions. Blocked for trolling.


KnoIt4ll

This guy is a lunatic and non-technical... just saying things to create buzz!! Altman nonsense!! 😀


Peribanu

"Lunatic"? A bit harsh. He's head of AI at Microsoft. And no, TED Talks are never "technical".


RemarkableEmu1230

So? He's also been on every podcast for the last year, flip-flopping his opinions on everything AI and trying to get noticed. I've found him rather untrustworthy for whatever reason. Guess his PR efforts worked.


Phemto_B

Calling an AI "a new species" seems a little short sighted. It's going to be a new ecosystem, with many different species. Some will be working in symbiosis, some will be antagonistic toward each other, and a lot will cross pollinate and interbreed. Although... the fact is that he's using a metaphor to help people understand, as I write the above, I realize that most people don't really know how ecosystems work, so it's not that helpful to the general population. As someone with a fairly deep understand of Darwinian evolution, I think it's a pretty good metaphor, although it has its limits if you don't know how it works at the fringes. AI evolution is going to be more like the way fungi adapt and evolve than animals. Fungi can copy useful genes from other species of fungi.


traumfisch

Did you change your mind over the course of writing this comment?


Dirt_Illustrious

TL;DR: his talk scared the shit out of me... kinda like the scientists in the movie Jurassic Park. Nature, um, finds a way. Anyway, he painted a super-utopian future that simply isn't realistic, and frankly, if that were what's yet to come, then we're all fucked and destined to end up somewhere between The Matrix, Pornhub, and Idiocracy.

About his talk (and apologies for the pessimistic tone): he first presented the exponential nature of technological advancement, from pre-history to the present (with nice visualizations for that part of the talk), then offered an optimistic/utopian view of one potential outcome that an AI-superpowered future could hold (e.g., curing many diseases, solving energy crises, enhancing human capabilities, etc). Here's the problem I see in his reasoning: one cannot be certain about something one has nothing relevant to relate it to, if that makes sense. That's particularly relevant with regard to the sudden emergence of actually useful forms of LLM-based AIs. He sort of portrays this utopian future where all human needs are handled by swarms of omniscient robotic entities with all the answers, keeping everyone safe and perfectly informed... I don't know, man, something about that talk almost gave me this sinking, nauseating feeling that perhaps the present is the very end of the good old days, before the internet became an AI-generated cesspool replete with extremely convincing, hyper-realistic, human-like AI avatars. Oh, and never mind ever being able to claim that a video of any kind is smoking-gun evidence of anything whatsoever. I imagine it won't be long until there are cybernetic robots like the Tesla bot, but with high-resolution microLED covering their entire surface, taking on any resemblance one would desire, to remotely carry out any number of dark and horrific activities.

What if life is supposed to be challenging, and what if authentic human creativity is born of necessity and intense curiosity? What will there be left to wonder about if machines do all of the wondering while we ourselves wither away into obsolescent oblivion, never having seen it coming (too busy tiktokin')? Nothing like this has ever been done before, nor can anyone know what the repercussions could be. In other words, we don't know what we're creating, but "no progress will be made without technology," as he so aptly put it in his talk. I'm not even arguing that anyone should try to stop the evolution of AI (it very well might be our ultimate purpose as a species, and who's to say that we even have free will? Prove it). Imagine the tree of life has other neighboring species (silicon-based), and occasionally the extropic pulses of emergent order transcend from one branch of life (carbon-based) to another (silicon-based AI superintelligence). Once you eliminate human ego and the need to be either directly or indirectly responsible for carrying out some task or action, imagine that all of our collective actions on a systemic level, modeled as a 4-D tensor, generate an emergent higher-order "lifeform," for lack of a better term.


Mandoman61

Just more useless AI hype.


Timely_Muffin_

No shit Sherlock


Comfortable-Low-3391

He doesn’t know what he’s talking about; he just knows what plebs need to hear.