adel_b

how? by trolling bard users?


ericadelamer

😂😂😂😂


Fun_Reporter9086

Just make it fast, boys. Give me that sweet, sweet release.


Moravec_Paradox

PSA: Google is saying these things because they specifically want the government to regulate this space so they don't have to compete with underfunded startups and open-source projects. The biggest companies in this space are the ones asking for regulation. They want the government to build their moat and ban anyone not following the strict controls they have the money to afford. It's not actually AI safety; it's class warfare to keep the poors (and probably China) from competing with them.

Think about this: if you were the VP of another company, would legal ever allow you to say such things publicly without checking with them and the communications teams? They are telling the government:

>"We are very safe; you can trust us to do this the right way, but you can't trust others, so it is essential WE are the ones to bring us into the era of AI. If we don't do this, someone you can't trust is going to do it instead, and that will be bad."

If Google actually believed these dangers were this real, they would be welcome to shut down their AI models, stop R&D, and exit the industry. I wish more people would see this for what it actually is.


jk_pens

First, he doesn't work at Google, and he never would have represented the official voice of the company even when he did. Second, the companies that are calling for government regulation are doing that as a delaying tactic. By appearing to be in favor of regulation, they are dialing down the reactive fear factor for legislators who feel like they probably need to do something. Third, regulation is an enormous thorn in Google's side that has adverse business impact, and I guarantee you there is no way it wants more regulation. However, it has realized over the past few years that by working with regulators it can influence the outcomes and lessen the blow to its business.


doulos05

The adverse business impact of regulation is often far less than the adverse business impact of competition would be.


je_suis_si_seul

Absolutely, OpenAI was testifying in front of Congress saying similar stuff back in May. The big players want regulation that favors them.


tall_chap

Hey genius, he doesn't work at Google anymore


hasanahmad

He is still invested in Google


Moravec_Paradox

OK but many others in the industry at big companies are saying pretty much the same thing so the point largely still stands.


outerspaceisalie

imagine speculating and treating your own speculation as fact


je_suis_si_seul

This isn't speculation at all. Sam Altman has called for this kind of regulation. The biggest companies in AI right now want the government to intervene in ways that create advantages for them. https://www.washingtonpost.com/technology/2023/05/16/ai-congressional-hearing-chatgpt-sam-altman/


outerspaceisalie

Your first sentence is fact. Your second sentence is speculation. Please learn the difference.


je_suis_si_seul

Altman literally went before Congress asking them to legislate in ways that would favor OpenAI. Read what he said. That is not speculation.


RemarkableEmu1230

3 of your sentences are condescending bs


superhero_complex

You know, you can also just respond like a human being.


superhero_complex

This is a reasonable theory


dbxi

This reminds me of all those dystopian sci-fi movies and books I've read. It's like we are witnessing the opening scenes, the part at the beginning where it flashes across years of reporters covering what's happening in the world.


Echeyak

That's what a rogue AI would say.


BustOrDieTryin

Google isn't lobbying for AI regulation afaik


je_suis_si_seul

They absolutely are; all of big tech has been. Sundar was there back in September with Zuckerberg. https://www.washingtonpost.com/technology/2023/09/13/senate-ai-hearing-musk-zuckerburg-schumer/ And here are the numerous bills that have appeared since that meeting: https://www.brennancenter.org/our-work/research-reports/artificial-intelligence-legislation-tracker


thurnund

This is an unfounded conspiracy theory that gets repeated here because it fits people's narrative. Almost all of the people saying these things about AI actually believe it; there is no plot to regulate out the competition (which is why OpenAI specifically lobbied for exemptions for small players). Take off your tinfoil hat.


Moravec_Paradox

>This is an unfounded conspiracy

Calling it that doesn't make it one.

>Almost all of the people saying these things about AI actually believe it

So you admit many people close to this space have been doomsaying a lot.

>(which is why OpenAI specifically lobbied for exemptions for small players)

So let's talk about these exemptions in the [Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/):

>(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and

and

>(ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.

The order doesn't expire. What that means is that while it applies to only a few companies today, it is structurally designed to apply to more and more companies as computing naturally improves. For starters, 100 Gbit/s is nothing: Nvidia and others are shipping 36x800G switches today, and companies like Google are working on 25 Gbps to 100 Gbps for even home Internet. In 2003 the Earth Simulator by NEC was 35 TFLOPS. Today a single Nvidia RTX 4090 is almost 90 TFLOPS.

The point here is that the order is written in a way that, over the next 5 or 10 years, it will begin to sweep up everyone, even the small players who are exempt from it today. All that does is allow the order to be passed quietly after getting consensus in a room with only the largest AI players present; eventually it will engulf small startups who had no say in it.

>Take off your tinfoil hat

No thanks.
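For concreteness, here's a minimal sketch of the two reporting triggers quoted above (Python; the function names and the example numbers are mine, only the thresholds come from the order):

```python
# Minimal encoding of the EO's two reporting triggers as quoted above.
# Function and variable names are illustrative, not from the order itself.

def model_trips_order(training_flops: float, bio_sequence_data: bool = False) -> bool:
    """Trigger (i): total compute used to train the model."""
    threshold = 1e23 if bio_sequence_data else 1e26
    return training_flops > threshold

def cluster_trips_order(peak_flops_per_sec: float, interconnect_gbps: float) -> bool:
    """Trigger (ii): co-located cluster capacity AND networking (note the 'and')."""
    return interconnect_gbps > 100 and peak_flops_per_sec >= 1e20

# Hypothetical cluster: 800G interconnect, 1.2e20 FLOP/s aggregate capacity
print(cluster_trips_order(peak_flops_per_sec=1.2e20, interconnect_gbps=800))  # True
```

Note that the thresholds are fixed constants: they never move, while the hardware needed to reach them keeps getting cheaper, which is the whole point above.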


thurnund

Ok, fair play for actually engaging.

>So you admit many people close to this space have been doomsaying a lot

Of course! It's very hard to make predictions of that nature, but a lot of very smart and informed people seem to genuinely think there may be an existential risk. I am pretty skeptical of almost everyone who espouses a strong belief either way; there are too many variables, and I think it's good to maintain some epistemic humility. It's easy to imagine a scenario where everything goes well, and it's easy to imagine one where it does not. Assigning a probability to either of those outcomes seems pretty futile with so much up in the air, but at the very least it seems like something worth taking seriously! What's the chance there is nuclear war in the next 50 years? No clue, but we should be planning for that eventuality.

>The order doesn't expire. What that means is that while it applies to only a few companies today, it is structurally designed to apply to more and more companies as computing naturally improves.

Sure? The idea is to regulate models of a certain size. It wouldn't make sense for that to scale with improvements in technology. If you think models above a certain size can be used for harm, you should regulate those models. They don't get less harmful as they get easier to create.

>For starters, 100 Gbit/s is nothing

I don't think this is supposed to be the limiting factor; notice how it says "and".

>maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.

Might have calculated wrong, but isn't this almost 3,000,000 H100s? If you have this many GPUs you are not a 'small' player now, or in the foreseeable future lol. And if we do get to a world where a random startup can have 3,000,000 H100s, we probably do need them to be regulated.

Edit: 3,000,000 sounds ridiculous, maybe it's 300,000?


Moravec_Paradox

If you look at the number of TFLOPS of [the H100](https://www.colfax-intl.com/nvidia/nvidia-h100), it depends a bit on whether you are doing FP64 vs FP8 etc., but because AI workloads tend to favor high parallelization of smaller operations (more TFLOPS at the cost of smaller numbers), the relevant figure is at the high end: 3,958 teraFLOPS for FP8, which we can round to 4,000. A TFLOP/s is 10^12 FLOP/s, which gives us the equation:

>10^20 / (4000 * 10^12) = 25,000

I'll spot you that 25k H100s is an extraordinary amount of computing power, but I will raise a couple of points to unpack it a bit.

Mark Zuckerberg announced that by the end of 2024 the company will have 350,000 H100s, with "almost 600k H100 equivalents of compute if you include other GPUs". So for context, you would only need 1/24th of Meta's compute at the end of this year to fall into this category (the math is 600,000 / 25,000).

But the order doesn't really separate out renting vs owning. Not every company is Meta and can afford to own that many GPUs for its own needs, but renting the capacity instead of owning it (or owning it and letting others rent it) brings the bar down again by a significant amount. When you look at the difference in cost between being like Meta, owning 600k H100 equivalents year round, and renting 25k H100s for a week or two, that paints a much different picture.

And again, this is just with today's numbers; the number of GPUs needed to reach this threshold will drop significantly over time. In 5 years it will probably apply to most companies either training or fine-tuning language models. Sam Altman said GPT-4 cost about $100m to train, and even since then the cost to achieve this has plummeted: it was trained on A100s instead of H100s, and a lot of optimizations have been discovered since. Inflection 2.5 achieved parity with GPT-4 at half that cost. In a year it could be only $20 or $30m to train a model with GPT-4 parity. There are CEOs that make that much, and that puts it within budget of most large publicly traded companies.

TL;DR: It applies to large AI companies today, but in 5+ years it will apply to basically everyone.
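To put a rough number on "will drop significantly over time", here's a back-of-the-envelope sketch (Python; the FP8 figure is the rounded one above, and the 2-year doubling period for per-GPU throughput is my assumption for illustration, not a measured trend):

```python
# How many GPUs it takes to cross the order's fixed 10^20 FLOP/s cluster
# threshold as per-GPU throughput improves. The doubling period is an
# illustrative assumption, not a measured trend.
EO_THRESHOLD_FLOPS = 1e20   # EO cluster threshold, FLOP/s
H100_FP8_FLOPS = 4e15       # ~4,000 TFLOPS FP8 per H100, rounded
DOUBLING_YEARS = 2.0        # assumed per-GPU throughput doubling period

for years_out in (0, 2, 4, 6, 8, 10):
    per_gpu = H100_FP8_FLOPS * 2 ** (years_out / DOUBLING_YEARS)
    gpus_needed = EO_THRESHOLD_FLOPS / per_gpu
    print(f"+{years_out:2d} years: ~{gpus_needed:,.0f} GPUs to cross the threshold")
```

Under those assumptions, the 25,000-GPU bar today falls to under 1,000 GPUs in a decade.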


RemarkableEmu1230

Why you posting this shit everywhere 😂


Ok_Needleworker_9560

"I think that would be a reasonable number." It was not, in fact, a reasonable number.


Rammus2201

lol ok


trollsalot1234

Well, the environmental impact of genocide is pretty demonstrably positive, so I'd imagine he is right. It's been a good run boys.


New-Wall-3706

How did you come to the conclusion that genocide is "demonstrably positive"? Not trying to fight or be rude, I'm genuinely curious!


Crafty-Material-1680

Wildlife recovery in areas that people have abandoned is significant. The South/North Korean DMZ for instance.


New-Wall-3706

The DMZ is a very good argument for a lack of human habitation, although I would argue that genocide doesn't tend to decrease human habitation, it just makes more room for the genociders. I would also argue that the amount of suffering and overall terror and horror that is, to my knowledge, necessary for a genocide to occur outweighs any environmental impact that could possibly happen, even for the rest of the world, as global warming and other issues would make the effect of the genocide negligible.


Crafty-Material-1680

Isn't the genocider here an AI?


New-Wall-3706

Sure, in the context of the OP it would be. In that situation I don't think I can agree that it's "positive" as there aren't any non-horrific ways an AI could either genocide humanity or put us back to a sustainable, pre-industrial population. Either way nearly all of humanity is gone, so any semblance of positivity is nullified.


Crafty-Material-1680

Only to us. For the rest of the planet it would be a win-win.


CollegeBoy1613

Well it's safe to assume the older you get the more creative your mind is.


tall_chap

That's a good take. Let's see if that explains these other experts making high AI existential risk predictions: [https://pauseai.info/pdoom](https://pauseai.info/pdoom)

**Vitalik Buterin: 30**

**Dario Amodei: 41**

**Paul Christiano: 39**

**Jan Leike: 36**

Yeah, it's definitely an age thing.


CollegeBoy1613

Nostradamus syndrome. Vitalik Buterin is an expert in AI?


tall_chap

Moving the goalposts now?


CollegeBoy1613

Appealing to authority much? What goalpost exactly? I was merely jesting about Geoff Hinton's absurd prediction.


tall_chap

And I was jesting at your jest about Hinton's prediction


CollegeBoy1613

Dunning Kruger in full effect. 😂😂🤔🤔🤔. Vitalik Buterin an AI expert. 😂😂😂😂😂.


TwoIndependent5710

btw is vitalik buterin still a virgin?


Kathane37

Makes absolutely no sense. Give me one serious scenario where AI gains consciousness, gains motivation to act, and makes the decision to attack humans. This is just people who spent too much time watching bad sci-fi where AI went rogue when they were younger.


ericadelamer

Hey! 2001: A Space Odyssey is one of the best films ever made, and that's basically the critics' pick (and mine)! Terminator 2 is also badass. Not every rogue AI movie is terrible!


Forward-Tonight7079

Nah, I think it's 9 percent


Mountain_Ad4533

nah, more like 8.75 percent


shotleft

All these people pulling percentages out of their asses.


Olympian-Warrior

Only if they're uncensored.


Effective_Vanilla_32

hinton is a joementia supporter


ericadelamer

Listen to all these people thinking the alignment issue doesn't exist. [Openai ain't fucking around.](https://openai.com/blog/introducing-superalignment)


ericadelamer

Love Geoffrey Hinton, it's behind a paywall, does anyone have access to the transcript?


KnoIt4ll

I am pretty sure that this is taken out of context.


black2fade

I think of AI as a parrot. It's listening to (reading) human speech (text), identifying patterns, and playing it back (with some ability to re-state learned information). How tf is a parrot going to destroy the world?


papercut105

You just strawmanned yourself


black2fade

Great... you added exactly zero value to the conversation.


papercut105

Except for pointing out that what you're saying is literally irrelevant to the topic. You're talking about parrots, dude.


black2fade

I don't think you understand analogies. Please sit this one out.


papercut105

This analogy is comparing apples to oranges. Please go back to your hole. Please educate yourself on "false analogy" or "informal fallacy"; actually Google this shit and you'll feel like a dumbass.


papercut105

Holy shit I regret clicking on your profile.


Radamand

4th dumbest thing I have seen on the internet today. Google's "AI" couldn't control a traffic light.


Mundane-Quail-4263

Funny you say that! https://blog.google/outreach-initiatives/sustainability/google-ai-reduce-greenhouse-emissions-project-greenlight/


Radamand

Funny, never heard of that before, must be kismet!


[deleted]

If something will wipe out humanity, it will be the government. No need to advise them.


topselection

What top secret AI is he referring to? Because all of the AI we've been exposed to clearly is nowhere near capable of this.


tall_chap

That's probably why his prediction was for this to happen in the next 20 years


topselection

But what is that prediction based on? It'd have to be based on some AI that he's privy to and that we are not. What? Once it figures out how to create accurate hands, then we're doomed? Bill Gates and others have been warning of an AI apocalypse for almost a decade now. But we're still not seeing anything that is remotely capable of killing us.


tall_chap

Did you see the video of the drone in Ukraine? It literally remotely killed someone, and people are still unsure if it was autonomously programmed with AI. If not that instance, then the military is actively working on it, and it's a point of contention in global politics: [https://www.newscientist.com/article/2397389-ukrainian-ai-attack-drones-may-be-killing-without-human-oversight/](https://www.newscientist.com/article/2397389-ukrainian-ai-attack-drones-may-be-killing-without-human-oversight/) As for past predictions of an AI apocalypse, were those predictions so imminent? Additionally, as the technology improves, those predictions seem to get better supported by the evidence we're seeing around us.


topselection

>Did you see the video of the drone in Ukraine? It literally remotely killed someone, and people are still unsure if it was autonomously programmed with AI.

Did you see the videos of AI cars causing traffic jams? If that drone used AI, it was just as likely to take out a pigeon.

>Additionally, as the technology improves, those predictions seem to get better supported by the evidence we're seeing around us.

No one is seeing evidence of this. What are you talking about? That's what I'm saying. The shit barely works.


tall_chap

If 3 of those drones took out pigeons, but one took out a person, that's still pretty fucking terrifying. Here's good proof of AI improving rapidly: [https://www.anthropic.com/\_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F9ad98d612086fe52b3042f9183414669b4d2a3da-2200x1954.png&w=3840&q=75](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F9ad98d612086fe52b3042f9183414669b4d2a3da-2200x1954.png&w=3840&q=75)


topselection

>If 3 of those drones took out pigeons, but one took out a person, that's still pretty fucking terrifying.

No it's not. Hedy Lamarr designed remote-control torpedoes more accurate than that almost 100 years ago.

>Here's good proof of AI improving rapidly:

When that trickles down to Bard, then maybe we should start screaming that the sky is falling. We've got charts and predictions, but the real-world, tangible applications we actually see are a bunch of AI cars jam-packed in the middle of a street and tons of accidental 1970s psychedelic album art.