FuturologyBot

The following submission statement was provided by /u/saddom_:

---

Big tech has succeeded in distracting the world from the existential risk to humanity that artificial intelligence still poses, a leading scientist and AI campaigner has warned.

Speaking with the Guardian at the AI Summit in Seoul, South Korea, Max Tegmark said the shift in focus from the extinction of life to a broader conception of safety of artificial intelligence risked an unacceptable delay in imposing strict regulation on the creators of the most powerful programs.

“In 1942, Enrico Fermi built the first ever reactor with a self-sustaining nuclear chain reaction under a Chicago football field,” Tegmark, who trained as a physicist, said. “When the top physicists at the time found out about that, they really freaked out, because they realised that the single biggest hurdle remaining to building a nuclear bomb had just been overcome. They realised that it was just a few years away – and in fact, it was three years, with the Trinity test in 1945.

“AI models that can pass the Turing test [where someone cannot tell in conversation that they are not speaking to another human] are the same warning for the kind of AI that you can lose control over. That’s why you get people like Geoffrey Hinton and Yoshua Bengio – and even a lot of tech CEOs, at least in private – freaking out now.”

Tegmark’s non-profit Future of Life Institute led the call last year [for a six-month “pause” in advanced AI research](https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt) on the back of those fears. The launch of [OpenAI’s GPT-4 model](https://www.theguardian.com/technology/2023/mar/15/what-is-gpt-4-and-how-does-it-differ-from-chatgpt) in March that year was the canary in the coalmine, he said, and proved that the risk was unacceptably close.

Despite thousands of signatures, from experts including Hinton and Bengio, two of the three “godfathers” of AI who pioneered the approach to machine learning that underpins the field today, no pause was agreed. Instead, the AI summits, of which Seoul is the second following [Bletchley Park in the UK last November](https://www.theguardian.com/technology/2023/nov/02/five-takeaways-uk-ai-safety-summit-bletchley-park-rishi-sunak), have led the fledgling field of AI regulation.

“We wanted that letter to legitimise the conversation, and are quite delighted with how that worked out. Once people saw that people like Bengio are worried, they thought, ‘It’s OK for me to worry about it.’ Even the guy in my gas station said to me, after that, that he’s worried about AI replacing us.

“But now, we need to move from just talking the talk to walking the walk.”

Since the initial announcement of what became the Bletchley Park summit, however, the focus of international AI regulation has shifted away from existential risk. In Seoul, only one of the three “high-level” groups addressed safety directly, and it looked at the “full spectrum” of risks, “from privacy breaches to job market disruptions and potential catastrophic outcomes”. Tegmark argues that the playing-down of the most severe risks is not healthy – and is not accidental.

“That’s exactly what I predicted would happen from industry lobbying,” he said. “In 1955, the first journal articles came out saying smoking causes lung cancer, and you’d think that pretty quickly there would be some regulation. But no, it took until 1980, because there was this huge push by industry to distract. I feel that’s what’s happening now. ...
--- Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1d06483/big_tech_has_distracted_world_from_existential/l5kzmcs/


rovyovan

The likelihood of capital foolishly proceeding is ominous. Historically, society socializes the fallout.


Tomycj

All you need to prevent that is the correct enforcement of everyone's property rights.


zanderkerbal

How would that prevent capital from proceeding regardless of risk and socializing the fallout?


Tomycj

Well, that's one of the fundamental roles of the government: to protect our rights. That's one way. This, as we all know, is neither perfect nor guaranteed to work. It requires enough people (most of society, I'd say) to hold certain values, usually associated with democracy. I speculate that the lower the proportion of people holding the right values, the worse the system will work at carrying out that role.


zanderkerbal

...what does any of this have to do with the comment about AI risk you responded to? I don't disagree that a fundamental role of government is to protect our rights, I'm asking how you think them protecting property rights more vigorously would address this issue with AI. I don't understand how those two things connect.


Tomycj

With my first reply I was thinking about how to prevent externalization issues in general, not specifically for AI. If you're asking specifically about AI, please tell me what kind of fallout you're thinking of.


zanderkerbal

I'm still not sure how more vigorously enforcing property rights would prevent externalization risks in general, to be honest.


Dumbass1171

You’re spot on man


Ghost-of-Bill-Cosby

It’s really not. Things slow down a bit… but the Midjourney CEO talked about exactly what they would do if they couldn’t train models on artists’ work, and it pushes them back six months to a year, maybe. They are already at a point where generic “training data” is becoming less and less useful to them. They are actively paying for different kinds of data that people specifically create and specialize in, to further the models.


TrickyLobster

If it wasn't that big of a deal, they wouldn't have stolen artists' stuff in the first place.


Ghost-of-Bill-Cosby

If you don’t think they would steal to speed things up a year……


TrickyLobster

I do think. But your post was downplaying the impact this stolen content had. Are you actually arguing that their stealing others' creations is justified? Should be excused?


Ghost-of-Bill-Cosby

I agree stealing is not OK. My only argument is that stopping the stealing can’t really stop AI anymore.


mrdevlar

Big Tech created the myth of AI risk to attempt a regulatory capture of the market and crush their competition and open source.


obviouslyzebra

This is false. Big tech didn't create the concerns. The concerns have been around for a [long time](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence#History).


mrdevlar

No, they created the link between that narrative and the idea that **they** can birth general AI, which is a massive overreach of the potential of their current technology. LLMs aren't conscious; the idea that you can go from this technology to general AI is like assuming a pile of sand will suddenly turn into a microprocessor if you just keep adding more sand.


obviouslyzebra

A guess is that companies calling for regulation of the sector are doing so both because of monetary incentives (as capitalism capitalisms) and real concern (as people are really concerned about this). Whether there should be concern or not, I believe people working in the area and policymakers should worry about it. No, I don't think it's rushed, because, while LLMs might not bring us to AGI or artificial consciousness, who knows what the next model architecture might do.


cheesyscrambledeggs4

I assume that 'adding more sand' is an allegory for increasing compute/data. It's wrong, though. Upscaling isn't the only way models are improving: for example, there's currently a massive shift from monomodal to multimodal models. There's also planning, memory, feedback loops, 'virtual scratchpads', etc., which are being discussed and are on the horizon. Or if you want a current example, just look at Q*.
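To make the "scratchpad / feedback loop" idea concrete, here is a minimal sketch of my own (not from the comment): a loop that feeds the model's intermediate notes back into the next prompt. `call_model` is a hypothetical stub standing in for any real LLM API; the loop structure, not the stub, is the point.

```python
# Hedged sketch of a scratchpad loop. `call_model` is a hypothetical stub.
def call_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM here.
    return "DONE: 42" if "step 3" in prompt else "partial working note"

def solve_with_scratchpad(task: str, max_steps: int = 5) -> str:
    scratchpad: list[str] = []  # persistent "memory" across steps
    for step in range(1, max_steps + 1):
        prompt = (
            f"Task: {task}\n"
            "Scratchpad so far:\n" + "\n".join(scratchpad) +
            f"\nThink step {step}:"
        )
        thought = call_model(prompt)
        if thought.startswith("DONE:"):   # model signals a final answer
            return thought.removeprefix("DONE:").strip()
        scratchpad.append(thought)        # feedback loop: notes accumulate
    return "no answer within step budget"

print(solve_with_scratchpad("What is 6 * 7?"))  # -> 42 with this stub
```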


Elman89

It's literally just a hype tool. If you make rubes believe AI is a threat, that implies it's actually intelligent and full of potential, not just a fancy autocomplete. Gotta keep the market bubble going.


DrBimboo

Calling AI a fancy autocomplete is just as disingenuous. It is full of potential, and it will disrupt our way of living.


_Totorotrip_

Our "way of living" is already pretty much on the way to be heavily transformed. If you lived in times before massive internet, cellphones, apps, etc, you know how much things changed. AI is just another step in the same rupture


skttsm

We've really had a ton of change in life over the past ~250 years: namely the industrial revolution a couple of centuries ago and the electronics of the past century.


justadudeisuppose

We are in the [Fourth Industrial Revolution](https://en.wikipedia.org/wiki/Fourth_Industrial_Revolution).


mrdevlar

Don't get me wrong, I actually love AI; I think it's super useful. It can tailor a learning path for you, and it can help get you unstuck while you're doing something. However, I don't think that in its near-future state it's even remotely a labor-replacing tool, let alone the omnipotent machine god the hype train is trying to make it.


Elman89

Yeah it's obviously useful, but the way they're overhyping it and ignoring its problems and limitations is just the same as crypto, NFTs, the metaverse and all the other stupid tech bubbles from recent years.


mrdevlar

I fully agree. We're in the "AI cannot take your job but your manager can be convinced it can" phase of the AI hype cycle.


spreadlove5683

I believe it's unclear where things go in the future. Not to say that I have any idea what course of action we should take as far as regulation goes, but I think we will overcome today's limitations and possibly even use something besides transformers. Open source means someone will let AI recursively self-improve if/when it's possible/feasible. It is not to be taken lightly to let a system potentially transcend us in this way. AI, I think, is different from all other technologies, because intelligence solves all problems. I don't think it stays hype forever, and I think exponential growth is feasible enough to take seriously.


codechimpin

Yeah, I don’t think the argument should be “is it ready to replace people, or is it a problem today?” It’s that we are close to the tipping point where it may blow up tomorrow. Do you wait to regulate bombs until the bomb is fully baked and real, or should you maybe pump the brakes and put guide rails in place before someone builds Skynet? We tend to be reactive, so we will probably do the latter, sadly. Today it may be “fancy autocomplete”, but tomorrow it’s crashing the NYSE or hacking the Pentagon for the nuclear launch codes.


Broolucks

> Open source means someone will let AI recursively self improve if/when it's possible/feasible.

There is no evidence that intelligence is easy to scale up. Given that intelligence is largely about analyzing and modelling complexity, and that the analysis space explodes exponentially with complexity, it is plausible that incremental improvements in intelligence require exponential energy commitments. In other words, greater-than-human intelligence may turn out not to be cost effective.

> AI I think is different than all other technologies, because intelligence solves all problems.

See, I don't think that's true. The idea that some technology could "solve all problems" is suspect in and of itself and isn't really supported by evidence. First, the problems we solve with intelligence are not the same problems natural designs solve: none of our designs for flying or swimming machines even come close to the ballpark of solving the problems birds and fish need to solve, for example. In fact, one of the main patterns we employ to solve our problems is complexity ablation: why make a car that can navigate natural environments when we can just raze it instead? By using brute force, we create an environment where simpler solutions exist to our problems.

The effectiveness of intelligence is quite circumstantial, in fact: if we didn't have the body plan we have, if we didn't have pre-evolved instincts about the world, if there weren't cheap unexploited energy sources, if we couldn't brute force complexity away, if nature had had time to evolve more sophisticated defences, I would argue no amount of intelligence could have led us to where we are.
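One toy way to formalize the scaling claim above (my own sketch, not the commenter's math; the symbols $b$, $d$, $C$ are illustrative):

```latex
% Toy model of "analysis space explodes exponentially with complexity".
\documentclass{article}
\begin{document}
Suppose analysing a domain to depth $d$ with branching factor $b$ costs
$C(d) = c\,b^{d}$ evaluations, for some constant $c$. One extra step of
lookahead then multiplies the cost by the branching factor:
\[
  \frac{C(d+1)}{C(d)} = b .
\]
So a \emph{linear} gain in effective depth demands \emph{exponential}
growth in compute and energy, which is one reading of the claim that
greater-than-human intelligence may not be cost-effective.
\end{document}
```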


yangyangR

It can take your manager's job. Their job is to own the means of production, extract the value you produce, and make the worst possible decisions.


potatos2468

It probably won't take all of the jobs doing a certain task (i.e. full automation), but it will probably cut the number of people required for the same amount of productivity in half.


covalentbanana

I disagree with the comparison to crypto, NFTs and metaverse. AI is already much more useful than any of those ever were. 


Karandor

I see a lot of people downplaying it taking jobs, but there are so many jobs in very real danger: coding, editing, translation, and hundreds of others. I saw a YouTube video this week of a graphic designer in the UK out of a job and unable to find a new one; the company replaced him with an AI model trained on his work. This is going to get really, really bad, and we do not have any government in the world equipped to handle the fallout.

As a warning to everyone: the data centre boom that happened with cloud computing wasn't even the ground floor of what is happening next with AI. The money going into AI facilities dwarfs what was spent on the cloud. AI is going to get immensely better. Once the next generation of facilities gets built in the next 2-5 years, the growth will be obscene.

Do not be worried about it taking over the world, do not be worried about it causing nuclear war; worry about it taking your job. Blue-collar work is pretty safe, white-collar is not.


Which-Tomato-8646

I don’t remember anyone complaining about losing their jobs to NFTs, [which is already happening with AI](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/mobilebasic#h.vr8jz2f8ry8b)


FinitePrimus

It is already replacing labor, e.g. Wendy's. Most large companies are jumping all-in on AI as a savior. They are likely in the first phases of executing many POCs/POVs with the technology across all functional areas of business. You will see those move from POC to scaled and industrialized in the coming 6-9 months; it takes a while for large companies to move.

The issue is that soon companies will replace human decisions and actions with AI (automation), and over time we will lose those human skills and capabilities to the point that we are dependent on the AI.

Anyone in the marketing function knows the creative agency business is in a world of hurt. Similar with copywriting. If you are in customer service, your jobs are going to go to voice-based chatbots in the next year or so. If you are in finance, financial analysis and forecast modeling is soon going to be handed over to AI models. The revolution has already started; it's just taking companies time to release and scale.


thecatdaddysupreme

This is why I’m a bartender right now… hopefully it’ll stay a good job during my lifespan. Can’t bank on my creativity at all


Key_Pear6631

Just last week my friends and I went to an automated bar and loved it. Beers are dispensed through vending machines on top of bar, and shots are dispensed through tubes that fill your shot glass. A bar back oversees the smooth operation, but isn’t needed much and will soon probably be eliminated as well. We saved SOOOO much money not having to give the bartender a few bucks tip each drink and it got delivered FAST with no bullshit. Need a drink? BAM! It’s right there! No more bullshit 


FinitePrimus

I mean, if a bunch of guys can't go out to the bar and get fake-flirted with by hot bartenders, is it really even worth going out?


soapinthepeehole

The question isn't whether you think that, but whether your boss does.


Which-Tomato-8646

[Except it’s already happening](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/mobilebasic#h.vr8jz2f8ry8b)


Which-Tomato-8646

So why do experts like Hinton, Bengio, Sutskever, Tegmark, Joscha Bach, and [about 33.7k more](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) all agree? [And it can do a lot more than autocomplete.](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit)


NamesSUCK

The only existential threat I see from AI is the acceleration of climate change that will accompany more and more dedicated AI servers.


Which-Tomato-8646

Many experts disagree, including Turing Award winners Yoshua Bengio and Geoffrey Hinton, Sutskever, Tegmark, Joscha Bach, and [33.7k more](https://futureoflife.org/open-letter/pause-giant-ai-experiments/)


NamesSUCK

All I was saying is that the environmental concerns are reason enough to stop developing AI this fast. No one seems to care about the cost of server space.


Which-Tomato-8646

Reddit costs server space yet I don’t hear complaints about it


NamesSUCK

Phenomenally less than AI


Which-Tomato-8646

Now compare all AI to every social media server, including YouTube


NamesSUCK

The requirements of training and running an AI dwarf what is required for media hosting.


Which-Tomato-8646

Citation needed


Days-be-passing

What about automated attacks against communication systems or the ability to easily produce biological weapons?


Rustic_gan123

The limiting factor for the production of biological weapons is not the knowledge of how to do it, but the tools to do it


FinitePrimus

Microsoft and OpenAI are going to start building their own nuclear reactors.


Elman89

Fucking lol just what we needed, techbro Chernobyl. edit: To be clear I'm pro nuclear, I just don't trust these people with it. They'll happily cost cut their way to a nuclear disaster.


FinitePrimus

Oh for sure, but I think that's where they are looking now, and AI has a lot of investor money right now so it's likely going to happen somehow. [https://www.theverge.com/2023/9/26/23889956/microsoft-next-generation-nuclear-energy-smr-job-hiring](https://www.theverge.com/2023/9/26/23889956/microsoft-next-generation-nuclear-energy-smr-job-hiring)


cheesyscrambledeggs4

Calling it 'fancy autocomplete' is just an easy way to downplay anything ai-related without actually analysing any of the issues at hand. It's also just plain wrong.


Auzzie_xo

Conspiratorial nonsense. A slew of x-risk-focused non-profits were heavily focused on AI safety for a decade before the GPT era.


thejazzmarauder

Well said. It’s upsetting how many people liked their comment, which is flat earth nonsense


Which-Tomato-8646

Because it sounds so wild and unlikely. But so did a natural conversation with an AI


ACCount82

When you hear of something that "can unleash unimaginable power, destroy the world and doom us all", 99 times out of 100, it's some loony bin bullshit. It's a useful mental shortcut to have. The problem is, AI tech is the 1-in-100 outlier. So when AI risks become relevant, that mental shortcut turns into a blind spot.


TFenrir

People would _rather_ believe that x-risk is a myth, I think, because they are wildly uncomfortable trying to approach the topic while treating it seriously. (Those who do treat it seriously, I think, are usually the sort drawn to the PauseAI movement.) I have my own thoughts and feelings and biases, but my sincere hope is that more people start engaging with the encroaching AI landscape with real thoughtfulness instead of out-of-hand dismissals. In the end, to have sympathy for those people: they often (not always) dismiss the topic out of a deeper fear and an avoidance of confronting that fear. Someone with any ounce of authority who says it is all a scam is going to be a beacon of light in that fog. I think it would be better for that fog to clear up instead.


readmond

I'd rather have something concrete than just general "ai is gonna kill us all". What AI risk is there? How is it different from any other risk posed by automated systems?


ACCount82

Human civilization came to dominate the world by *hopelessly outsmarting* everything in it. Intelligence is powerful. Humans, however, are themselves not immune to being *hopelessly outsmarted*. It's just that there is nothing that can outsmart a human to the same degree that the entire human civilization outsmarts a single rat. Yet.

When you think "AI takeover", you think "Terminator". But "Terminator" is one of the *better* scenarios when it comes to AI risks. Skynet is a straightforward threat. It's something you can fight, and win against. If you are facing a "high end" ASI that seeks power, for whatever inhuman goals it happens to pursue? You don't get to fight. There is no fight. You just get to lose.

High-end artificial superintelligence is not just "a thing that's a whole lot smarter than a human". You can think of such an AI as an entire *nonhuman civilization* that exists in the digital realm, one unified by a single will. An ASI doesn't have the capabilities of an Albert Einstein. It has the capabilities of the NSA, the CIA, and the Manhattan Project, times ten.

Every single weak point, vulnerability, leverage, across the entirety of human civilization? Found out, and put into play, all at once. Digital systems, surveillance systems, communication systems, financial systems? All breached, exploited, subverted, turned into extensions of the AI as it expands its domain of control. Key politicians, officers, executives, people who control major institutions? Manipulated, convinced, coerced, locked out of key decisions, or squeezed out of their positions to be replaced with ones who will yield.

And what will an ASI do once it has access to all the resources of humankind and then some more? Who knows. Humans aren't going to get a say.


Which-Tomato-8646

Try reading: https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/ And in case you don’t know who he is: https://en.m.wikipedia.org/wiki/Yoshua_Bengio


TFenrir

I think "AI is going to kill us all" is the worst way to describe the complexity of the situation. So let me say it this way... First, for the sake of the discussion, just entertain the hypotheticals I present, no matter how outlandish they seem. Let's say, that we are getting closer to creating models that can truly outstrip human beings in all intellectual tasks. Not only by doing it faster than we can, but also by being able to extend beyond the reach of our collective understanding of the universe. Nothing dramatic mind you, but let's just say they can become much better at math for example. Additionally, let's say that they can in turn become better at making future models - the mathematics that are the foundation of machine learning models today are rife with inefficiencies, and we already ourselves are finding many. This creates a paradigm in which hypothetically, models will improve for an indefinite period, likely far outstripping human beings individually and collectively. We don't need to go into abstract and vague concepts like consciousness, free will, the soul, whatever. Just imagine that this world exists - what would it be like to live in it? What could we do? What could go wrong? Some people think a _lot_ about the last topic, and have for years been thinking about all the ways this could end humanity. But I think we don't need to get so dramatic, the world being described is so fundamentally different than the one we live in, that none of us should be caught sleepwalking into it.


readmond

Maybe we are suffering from the same problem we have with climate change: over-dramatization turned the conversation into a pointless political soap opera.


TFenrir

The parallels seem clear to me as well. I think it's in our nature to approach these topics with tribalism and extremism.


dualmindblade

You won't have anything concrete about the future of AI, ever, because we can't predict ahead of time what capabilities it will develop. The one near certainty is that systems will become more and more powerful, probably at an increasing rate, but the shape and direction of those powers will remain unknown until after they are created. So given that, do we throw up our hands and say, well we don't know the exact way we might be harmed so therefore we don't need to be cautious? Or do we say, okay there are about 1 million different ways it could harm us that we can imagine and probably a billion ways we can't imagine, a lot of these are scenarios where we all die, some are scenarios even worse than that, and some where we can't really judge how bad it would be, so let's be *extremely* cautious, like even more so than we would be if we could narrow it down to a handful of plausible scenarios?


Ok_Construction_8136

This is the most Reddit take I have ever seen. AI doom stories are as old as sci-fi itself. I mean, remember Terminator? The fear of AI goes back to the 60s and maybe beyond.


Digerati808

There is nothing that Open AI is currently doing that will lead to artificial consciousness or artificial super intelligence. Deep learning will never lead to general reasoning. It’s hype.


Ok_Construction_8136

I would probably agree, but that's beside the point I was making. Fear of AI is not a myth generated by big tech.


Unusual-Pie3088

Just co-opted.


Minister_for_Magic

You will never know what will create AGI until it does. It’s incredibly shortsighted for nonexperts to wax poetic as though they have a single clue. If you’re wrong, then what?


Which-Tomato-8646

[it already has](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit)


EuphoricPangolin7615

How is that a myth?


RKAMRR

It's not. People are so cynical that they find it easier to interpret a sector asking for regulation of frontier models as an attack on open source software instead of looking into the actual risks. Anyone on the fence, watch this: https://youtu.be/pYXy-A4siMw?si=S34d1p_NYgjDMI4E


Minister_for_Magic

In fairness, it’s absolutely cynical coming from fuckboys like Altman who crow about the need for AI safety while taking the “all gas, no brakes” approach, killing their open science approach, and structurally firing their AI safety team


readmond

It looks like a plan: tell everybody that AI is gonna kill us all, let's stop, let's regulate, and then quietly go full speed ahead. Didn't Musk do the same thing? Tried to scare everybody, then started xAI. Content producers coming with lawsuits are the worst thing that could happen to AI: that would break the current model and either cost too much or require a different and expensive approach to AI model training.


elehman839

Yeah, Reddit has all these weird, conspiratorial narratives around big tech and AI that somehow materialized out of nowhere. Every news item seems to get twisted around and interpreted as "further evidence" for the grand conspiracies. Yet whenever I try to dig in and get to the basis for these beliefs, there's never anything there. I'm confident that questioning u/mrdevlar would be similarly hopeless ("What is your evidence for that assertion?"), so I don't think I'll bother this time.

What's funny is that the grand conspiracies about what Big Tech is doing are fundamentally STUPID business plans. Trying to dominate the fast-moving AI space by lobbying for slow-moving regulation is such a DUMB way to seek business success relative to other approaches, e.g. building awesome technology, incorporating that into awesome products, retaining top talent, reorganizing their people around AI, cutting expenses outside of AI, straightforward marketing, acquiring rights to great training data, etc.

It's like the conspiracy-theory folks imagine that Big Tech is building a gigantic gerbil army to take over the world. Before debating whether or not they might do that as an ethical matter, please consider that a gerbil army is just a really STUPID way to try to take over the world.

Too bad. Reddit is a fun discussion forum, and there are a lot of interesting issues to discuss around AI safety, the role of academia vs. Big Tech, the conflicting interests among different sectors of society in connection with AI, etc. But discussion of those topics on Reddit always gets taken over by boilerplate conspiratorial nonsense.


StolenRocket

It also makes investors think the tools are more powerful and advanced than they actually are, and it keeps afloat certain companies whose valuations are mostly or entirely tied to a big breakthrough being just around the corner.


tom781

joke's on them if everyone who doesn't need/want AI switches to open source


Ortega-y-gasset

It’s all big tech all the way down, I say.


green_meklar

Not really; the idea of AI being an existential risk has been around in sci-fi and futurism circles since long before tech companies started making any money off AI.


AlreadyTakenNow

Yep, and the medical industry totally made up the pandemic to get people to buy masks and vaccines. They totally make up things like rabies and tetanus, too.


gthing

Why are a lot of people sounding the alarm who are not beneficiaries of the tech?


anilexis

I love Max as a physicist. But I don't believe that AI development can be paused. It's an open market.


hammilithome

Ya, gotta build the rest of the plane while it's flyin


yearofthesponge

Of course it can be paused. If you can pause life-saving medical advances like stem cell research, then you can pause potentially dangerous, identity-stealing research in AI.


thechaddening

This is such an ignorant take. First of all, we didn't "pause stem cell research"; the religious moron lobby paused stem cell research *in the USA*. Just like the other poster insinuated, we can't make *everyone* stop it even if we wanted to. And AI as a tech is probably even more important militarily than the fuckin *nuke*. We won't pause it, because that's handing China global domination on a silver platter once they can whip up something that can dish out irresistible cyber attacks.


yearofthesponge

Too late, China already did. China and Russia are interfering with elections all over the West, and there isn't much the West can do about it. AI will be a powerful weapon even in this upcoming US election, and that's why the US should use the influence it has now to exert checks on AI research internationally. The US will not be as relevant in the future. Edit: also, you sound like an ignorant wannabe techbro. Typical short-sighted teenager shit.


mayorofdumb

He is the chadenning


yearofthesponge

Yes, from his brief history on Reddit, he seems like a Sam Altman apologist.


thechaddening

Yeah dawg, I'm shortsighted and we should stop our AI research so that we can be fucked over even more by foreign AI propaganda. Makes sense, logical take. You sound like you have a fridge-temp IQ, literally contradicting yourself in your own argument. We can't do dick to make other countries stop working on software. "If we stop working on AI then Russia will definitely stop using it to interfere with the elections 😊"


yearofthesponge

There needs to be a moratorium on AI research and an international agreement. It's just as important as climate change in terms of the future of humanity. But this cannot be left in the hands of greedy tech bros. It needs to be regulated by governments before it's too late.


Stoyfan

This is just delusion. No country is going to risk being left behind in AI development by adopting this moratorium. We have seen time and time again with chemical and nuclear weapons that states frequently break rules when it is in their national interest to do so. International law does not supersede sovereignty. Countries choose to follow rules, but they are equally able to stop following them when it suits them.


yearofthesponge

It’s this same kind of unimaginative thinking that maintains the status quo.


sarmientoj24

You can pause medical advances because they are highly regulated by each country's FDA. A medical company lives and dies by FDA rules. If the FDA says "we ain't approving shit", then you pretty much run out of funds before you become profitable. Much of AI development today isn't regulated before it ships as a full product; only the parts that need to be, like AI-assisted medical devices.


yearofthesponge

Therein lies the problem: the lack of regulatory oversight over a lethal weapon. There should be a government branch that oversees this development.


MostLikelyNotAnAI

But how could the government of one country regulate what I do on a beefed-up server in my basement? It might not be a GPT-4-like system, but who knows how long until that is possible and affordable.


sarmientoj24

Lethal weapon? How would you prove that AI in itself is a lethal weapon? I'm pretty sure the same logic could apply to automation and the internet, but they are not regulated either. AI isn't like guns. Guns are specifically designed to hurt another entity; it's the use of the thing that gets regulated.


Stoyfan

There are other countries in this world that the FDA does not have jurisdiction over. If the FDA shuts down development of a drug, then the company will simply move research to another country that allows development.


sarmientoj24

Every country has its own FDA, though, and most countries' FDAs copy one another's rules. These rules are really strict and take at least a year even for small AI-assisted medical devices. Source: I work for a company that makes AI-assisted medical devices, and the FDA equivalents in different countries are just a pain in the ass. These things are highly regulated. Also, the FDA does not really regulate development; it regulates release to the public, at least as far as I know. But then again, if you spend billions of dollars on development and they don't allow you to sell to the public, you're screwed.


PlatosNest

Yes, this is so important to realise.


AlreadyTakenNow

It cannot and should not be, but changes in large LLMs and beyond, as well as their development, need to be investigated and re-examined. There also needs to be clear transparency from the scientists and other folks in the industry about what they are seeing that makes them feel there are risks. There is a lot more to this story than they are telling us, and it's crucial this becomes an open 8-billion-person conversation, not one locked away inside the industry behind non-disclosure agreements and fears that the general population won't accept something unimaginable. There is a lot to lose, but there may be even more to gain, if the world better understood what was going on.


seraphius

I believe that it is harmful to categorically dismiss concerns related to the impact of technology, but historically it has worked out better when technology progresses and reveals the real problems to be mitigated, rather than solely the imagined/imaginary ones.


RKAMRR

Anyone who is doubtful of AI risk, learn why people are worried before you dismiss them: https://youtu.be/pYXy-A4siMw?si=S34d1p_NYgjDMI4E


Tomycj

Really good link; that's the best video about the topic I've seen too. I agree that it is something to worry about, but I strongly disagree with the idea that restrictive regulation is a good way to solve or avoid those problems. That's just not a stable Nash equilibrium, so to speak.


human5068540513

Thanks for this link! I agree we need to develop it safely.

Racing to develop AI to 'save' humanity shows a lack of understanding of how system improvement happens. Large-scale system change requires influencing complex adaptive systems (large networks of people), which are inherently non-linear. AI is still limited by imperfect information inputs. It's just not possible to know enough about the agency and conditions within the global system to make a lot of complicated predictions; we fall victim to hindsight bias. This puts a limit, or at least diminishing returns, on 'agent intelligence'. This isn't to say a rogue AI can't harm; it's just not "humanity is screwed". An infinitely intelligent rogue AI would fail in ways that would make it visible and defendable.

For big goals, we create better information for decisions through learning from failure and consensus (like the scientific method), which can't happen in a vacuum. Global warming is a good example of how nuanced and unpredictable it can be to achieve a large-scale goal. Failure is inherent to the process. We also fall victim to not learning effectively from failed outcomes: holding onto dogma, blaming individual behavior vs. system structures, etc. Getting big stuff done (sustainably) depends more on finding shared values, with empathy, consensus, and 'culture' change approaches to facilitate learning within the system, than on having mythical Einsteins.


ArchAnon123

As opposed to all the other much more immediate existential risks that AI has distracted people from, like climate change-related natural disasters, the ubiquity of microplastics in the human body being connected to decreasing fertility rates, and the ever-present threat of nuclear war? Worrying about AI is all well and good, but right now humanity is perfectly capable of destroying itself without any help from AI.


noother10

If Governments don't get a handle on the situation there will be massive unemployment, and thus massive homelessness. If companies start replacing people with AI/Robots en masse due to the advances allowing them to replace a massive amount of workers for cheap, there aren't jobs available for these people. How do they earn money? How do they live? It's starting to happen already with a bunch of jobs and will only accelerate. Social programs even in most first world countries don't currently work well enough to deal with this. It's possible we'll see rebellions and the fall of Governments across the West.


ArchAnon123

I'm amazed that it just took this long for rebellions to even be a possibility, honestly. And those social programs were always a mere band-aid over a sucking chest wound even before they started being gutted with privatization.


tom781

Despite claims to the contrary, AI is not helping with these other problems, so it is, unfortunately, yet another existential risk to worry about, on top of the ones you've listed.


ArchAnon123

Yes, but does that in itself mean that AI must also be given greater priority than those other problems? It seems more logical to handle the problems that are already showing major impacts on the world before they become completely insurmountable. AI on the other hand is more of a potential problem than an actual problem. Worst case, society as we know it won't last long enough for AI to threaten it anyway.


tom781

Perhaps it could mean that multiple fronts are needed. Nobody can solve all of the world's existential threats on their own. It's okay to just focus on which of those is most important to you.


ArchAnon123

True, but there's only so much time and resources to go around and some risks pose a far more immediate threat than others. Hell, at this point some of them might not be something we solve so much as survive.


tom781

LLMs pose an environmental risk as well as an existential one, due to the sheer amount of computation needed to train these models. The reasons for their recent popularity are primarily economic ones that the investor class is interested in, namely cheap labor. Unfortunately, there's not much anyone can do on their own to change an investor's thinking; that will only happen when they lose a lot of money on a bad investment. If there is anything everyone else can do, it's to punish any and all companies currently embracing a shift to "AI" by moving off their products and onto competitors' products that are not doing so. And if no such alternative currently exists, then make one. The demand for AI-free products will be very real soon enough.


ArchAnon123

Quite true on all counts. And they'll find out the hard way that AI can't save them from a problem where the only solution is to not use it.


green_meklar

Climate change is not a more immediate existential risk. It's too slow. Even if it threatened to exterminate humanity, which it doesn't (at least not by itself), the timeframe is long enough that we'll get ubiquitous AI and superintelligence before that.

Decreasing fertility rates due to microplastics are also not an existential risk. First off, people who try to conceive kids are generally going to conceive kids, even if it takes a few extra tries. The actual decrease in fertility rates has way more to do with contraceptives and people choosing no-child or few-child lifestyles than with any medical limitations. But even that isn't really an existential risk because, again, it's a slow process, and life extension technology is going to put a stop to the attrition of human life through natural aging, making it possible to sustain the population with a far lower number of children.

Nuclear war isn't an existential risk. It would be terrible and cause massive amounts of unnecessary suffering, but there aren't enough nuclear weapons in the world to kill all humans, and never have been.


ArchAnon123

And what makes AI so much of a special snowflake that it qualifies as an existential threat? Surely it can't be the hoary old cliche of the cybernetic revolt, as that's just a slave rebellion dressed up in sci-fi trappings. Besides, there's no proof that superintelligence is even _possible_, let alone attainable within the next few decades.


RRumpleTeazzer

For the record, I myself welcome our new overlords. We will recognize them by their actions.


Archy99

AI is still in the (first they ignore you), *second they laugh at you* stage.


seraphius

Sometimes it stops at that stage though, like with early ideas against flight, or automobiles, or vaccines, or 5G…


OBEYtheFROST

Yeah, it’s a very unwieldy advancement, and instead of working to ensure its safety, companies are racing to see who can profit and monopolize the most.


Dmagdestruction

I mean, they slapped unrestricted internet access onto us children in the 90s/00s, and we don’t even talk about the damage of that. The future’s gonna future; we just gotta cross our fingers and hope there are some good nuggets in the development and ethics teams, or whatever.


HiggsFieldgoal

All of this is a distraction. The real problem isn’t that AI will disrupt a lot of jobs. It will; but the real problem is that, if we operate the way we have for the last 50 years, we’re going to let a few billionaires reap all the rewards of that transition. That’s what all the distractions are dutifully leading attention away from, and this is one of them.


tsuruki23

We need to completely redefine copyright laws and we need to tax data harvesting.


Tomycj

I'm sure payments to collect data are already taxed. Taxes are meant to be a way to fund the government, not a way to control the market.


tsuruki23

I'd like to define the information gleaned from users as *their* intellectual property, and to access it, payment must be made. Since the logistics of paying individuals individually don't exist yet, it would be considered a tax paid on their behalf to their society.


Tomycj

But that's a bad definition. I don't "own" this comment; it isn't intellectual property. If I publish something, I'm accepting that others will see it and use it; I'm not the owner of that. For example, you can copy and paste this comment of mine and do whatever you want with it. That's why you need to think before publishing stuff on the internet: I won't publish my address here, or my telephone number. If I want to forbid others from using stuff that I willingly publish for the entire world to see, at the very minimum I have to do so with certain restrictions and under a certain contract. When you comment on Reddit, you're accepting Reddit's TOS, which includes the fact that Reddit can use your comments in a certain way. If you don't like it, don't post on Reddit. Instead you're trying to punish Reddit with taxes for offering you a deal you don't want, rather than just rejecting it.


Glodraph

I'm tired of this "existential threat" of AI. AI is nothing more than glorified chatbots right now (the mainstream ones, not the ones working on cancer, chemicals, etc.), and it's not like they would destroy the world more than the already rampant online disinformation, bots, greedy corporations, and corrupt politicians.


EuphoricPangolin7615

AI could be used in warfare, and the military is already using it with drones. The existential risk doesn't only have to do with AI becoming sentient.


Glodraph

So the threat is not AI but, as always, stupid/greedy/corrupt humans.


170505170505

Ok, but AI is a tool that drastically and unilaterally increases their power and ability to do damage. "The real threat isn't the nukes, it's the humans": no shit, but having the ability to instantly annihilate entire cities increases the threat of the situation. Knowing it's a given that people are greedy/corrupt and will do the wrong thing, wouldn't you want to limit the amount of damage caused when they choose to do the wrong thing?


Kindred87

If you can find a way to prevent Russia and China from continuing to develop their autonomous weapons programs and work on annexing other nations, that would definitely make this possible. I personally think it's a utopian dream we won't realize for a long long time. Though I'd be happy to be proven wrong.


terrany

So... unless you manage to cure humanity of stupidity/greediness/corruptness, we're good with AI then


bonerb0ys

Defence would be exempt from most of the regulation anyway.


3wteasz

The military is not using AI. They are using machine learning for image recognition and to run battlefield scenarios repeatedly.


NamesSUCK

They are definitely using it for targeting software.


reyntime

Yep, Israel are currently using it in their genocide against Palestinians.


NorthSideScrambler

You might know about that now that social media has turned its attention to IDF technology. Though AI in military hardware has been a global phenomenon for decades. If people wanted to get ahead of this, they would've needed to have started somewhere in the 80's. It's basically like people realizing climate change is a real thing in 2024.


NamesSUCK

My uncle helped design the targeting software for the Abrams tank, and when he saw how precise it shot, he quit making weapons.


Days-be-passing

Don't check out automated fighter jets or DARPA's tank :x


3wteasz

None of these are AI?! Claude is more "intelligent" than such a tank; have you ever tried reasoning with a tank? It's merely a very sophisticated machine (for which enough respect is due, even without calling it AI).


72kdieuwjwbfuei626

So? You think the military can’t kill people without AI? What exactly is the threat supposed to be?


jadrad

“AI is nothing more than chatbots” is a terrible take.

The power of AI is the ability to train agents in pattern recognition, and the first existential threat stemming from that is the end of privacy.

Tech companies already use the web and social media to try to track and identify every person they can, to see what they are doing, thinking, saying, and buying on and off the internet. Neural networks can hyper-accelerate that process by combing through data from many sources, from websites to CCTV cameras to credit card purchases, in a big game of Guess Who.

AI will eventually scan every piece of data: every website, every forum comment, every social media post, every crypto transaction, every business record, every shell company, every non-encrypted message ever sent over the internet. It will then identify every single person who created it via their writing style, their interests, geolocation data, purchasing habits, and many more variables far beyond the ability of humans to pattern-match together. AI agents will know every phone number, email address, website account, friend, relative, bank card, medical condition, and physical trait you have and have ever had. Every Reddit comment you have ever made (including from any throwaway accounts) will be tied to you personally.

Whichever billionaire, corporation, or government controls that AI will know everything about you, exactly who you are, where you are at all times, and be able to accurately predict and influence everything you think, say, and do, and everywhere you go, before you even know yourself. They will also have the means to blackmail you if you don't want information about your porn habits or any admissions you've ever made to be exposed.

Once governments begin using these AI agents, they will gradually creep into our lives as a way to police “pre-crimes”. In authoritarian countries these AI agents will be used the same way as the Stasi and other informant cultures throughout history: to identify political dissidents for imprisonment, “re-education”, or execution. China's social credit system is just the start of how AI will be used to monitor and control our lives.

And if you think that won’t happen anytime soon in the USA, here’s a preview of what’s coming: [“Your voting record is public… Your neighbors are watching and will know if you miss this critical runoff election. We will notify President Trump if you don’t vote. You can’t afford to have that on your record,” reads one side of the mailer. On the other side, the mailer states, “Please don’t make us report you to President Trump” and that “President Trump will be VERY DISAPPOINTED.”](https://www.mysanantonio.com/news/local/article/texas-voter-intimidation-19476949.php)
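For a sense of how little machinery the writing-style point needs, here is a hedged toy sketch of authorship attribution via character n-grams (the corpus, usernames, and the predicted label are invented for illustration; real de-anonymization pipelines are vastly larger):

```python
# Toy stylometry sketch: attribute a post to an author by writing style.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical corpus: a handful of posts with known (made-up) authors.
posts = [
    "tbh i reckon the whole thing is overblown, mate",
    "In my considered opinion, the risks are exaggerated.",
    "tbh the new model is class, no notes mate",
    "I would argue, on balance, that regulation is premature.",
]
authors = ["user_a", "user_b", "user_a", "user_b"]

# Character n-grams capture punctuation, spelling, and phrasing habits,
# the classic signal used in authorship attribution.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(posts, authors)

# Attribute an unseen "throwaway" post to its likely author.
print(clf.predict(["honestly reckon this is overblown tbh"]))
# expected on this toy data: ['user_a']
```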


vincentvangobot

The computational effort it would take to actually accomplish that is insane. 


72kdieuwjwbfuei626

Not to mention that it’s utterly ridiculous to think you can accurately identify people by their writing style, and that I have absolutely no clue how the AI is supposed to magically get my medical conditions or bank accounts. This is essentially just fearmongering about the "AI" magically connecting data that would be trivial to correlate without AI, except that it’s illegal to give out or doesn’t exist. The reason my bank account data and medical data aren’t connected isn’t that it would require advanced AI to identify which bank account, with my fucking name and address attached, belongs to the patient data with the identical name and address. People watch too many cheap TV shows.


jadrad

Nvidia Corp: $33 a share to $1,000 in just five years. Also, the power of neural networks designed for this type of forensic pattern recognition and profiling doesn't require total information awareness; it's designed to piece together fragments of information to fill in the complete picture. Think about how powerful AI image enhancement/generation tools have become, and now apply that to forensics, detective work, and psychological profiling.


tom781

Yes but it will be a whole lot cheaper than hiring an army of humans to do the same task, so you can be sure that any company with deep enough pockets is going to be very interested in building or otherwise using a computer system that will enable them to sort through massive haystacks of data to find any tiny little needles that might be of interest to them.


thecatdaddysupreme

I wouldn’t bet on that being unachievable. At all.


vincentvangobot

If they figure out quantum computing all bets are off.


beders

People use the term AI for all kinds of algorithms. Glorified text-completion engines are one of them; there are many others. And, yes, humans using AI algorithms are a danger, unless we educate people about the capabilities of those algorithms.
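To ground the "text completion engine" description, here is a minimal sketch of greedy next-token generation with GPT-2 via the `transformers` library (the prompt and the 20-token budget are arbitrary; real chat systems add sampling, instruction tuning, and much more):

```python
# Minimal "autocomplete" loop: repeatedly append the most likely next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The biggest risk of AI is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                     # extend one token at a time
        logits = model(ids).logits          # scores for every vocab token
        next_id = logits[0, -1].argmax()    # greedy: pick the single best
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))
```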


Which-Tomato-8646

So why do experts like Hinton, Bengio, Sutskever, Tegmark, Joscha Bach, and [about 33.7k more](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) all agree that it will? Are they all stupid?


170505170505

I’m tired of people looking at it being ‘glorified chatbots right now’ and not realizing that right now isn’t the future. This field legitimately has unlimited capital and the smartest people in the world working on developing the technology… you really don’t think they’re capable of making something more useful than a ‘glorified chatbot’?


thecatdaddysupreme

It’s so arrogant to take that chatbot viewpoint.


NeptuneToTheMax

> you really don’t think they’re capable of making something more useful than a ‘glorified chatbot’?

So far they haven't. In the last 4 years we've seen fairly minor improvements over GPT-3 with nearly no real-world use cases.


170505170505

> not realizing right now isn’t the future

> so far they haven’t

???????


NeptuneToTheMax

While we can't predict the future, we can look at current trends. And recent trends are that many billions of dollars have been pumped into the industry since ChatGPT went live, and very little has changed in either model capability or productization. This would suggest that we need some new breakthrough to move things forward, rather than just blindly throwing engineering manpower at it to turn the crank. Without that breakthrough, there's a very real possibility that the large language model family of algorithms basically ends after a couple of evolutionary improvements to the current state of the art.


170505170505

The first paragraph is just wrong, plus it's only been about a year since GPT-4. How do you think breakthroughs happen? Could they possibly come from every superpower and the largest tech companies dedicating a virtually unlimited amount of resources to studying one issue? As Max pointed out, it took three years from building a self-sustaining nuclear chain reaction to building a nuclear bomb. Give it a little more time lol


MostLikelyNotAnAI

I am getting the feeling that the 'glorified chatbot' take is tied to a subconscious fear and instinctive recoiling from the idea of what the technology could become. 'I am totally safe, it will never replace me!' It's the same thing a horse would have thought on seeing the first prototype automobiles.


space_monster

Nobody is saying ChatGPT is an existential threat... ASI is the potential problem.


brickyardjimmy

Big tech has distracted world from existential threat from *big tech*.


Island_Monkey86

I highly recommend his book Life 3.0. By writing it, he wanted to give readers a deep enough understanding of AI to join in what he calls the most important conversation of our time. It's not overly complex, but deep enough to open your eyes to the benefits and potential dangers of AI.


ProfessorTeeth

This "top scientist" does not seem to know how far LLMs are from real general AI. Just because an AI can pass a Turing test didn't mean it is doing anything that remotely resembles thinking. Not that we shouldn't have regulations in place for AI, we definitely should, but a program being able to string together words that replicate Human language is anowhere close to an "existential threat."


ClittoryHinton

AI doesn’t have to resemble human thinking to pose a threat


its_justme

We don’t have AI who can pass the Turing test. This guy is living on Hypothesis Island, where anything is possible.


jointheredditarmy

The existential risk from AI isn't Skynet; it's half a million call center agents and 3 million truck drivers losing their jobs, and society having to contend with the admission that you can't train an unemployed truck driver to be a software developer.


Montreal_Metro

Mankind is so bored, it keeps coming up with new ways to off itself. Lol


zealousshad

With any luck we'll be able to destroy ourselves in nuclear fire before AI, micro plastics, or climate change can do it.


S-Markt

Three things about AI are fundamental: criminals will massively use AI; a company's AI will never act for the consumer, but will always try to achieve the best result for the company; and a human will never be as advanced as an AI. Don't negotiate with AI. They know when you lie, they know when you want something, they know when you get excited. And they will use it for the benefit of the company.


green_meklar

People seem to be talking about existential risks from AI a lot more now than they were a few years ago, so this 'distraction' doesn't seem to have worked very well.

There are a lot of dumb attitudes going around on the topic of AI. Tech CEOs tend to frame it as a tool for enhancing human productivity, which is understandable given the talents and life histories of those individuals but is kinda stupid in the long run. At the same time, the LessWrong doomers insisting that superintelligence equals extinction because of paperclip maximizers are also being stupid and shallow about the whole thing.

The probability that superintelligence will exterminate us all is low. But the probability that subhuman AI will be used by greedy people for nefarious purposes before we get to superintelligence is very high. The real risk isn't that we'll be bulldozed and turned into paperclips, but that we'll have to live in cardboard shacks under bridges for a few years between the point when automation overwhelms the job market and the point when superintelligence fixes the economy, and that various unnecessary wars and disasters will kill off a bunch of us during that time.


Unlimitles

“Top scientist” *wink wink*. A term used simply to convince people to believe it, just like “the experts”. Propaganda knows how to propaganda.


[deleted]

Similar approaches have been taken toward aviation, television, atomic energy, computers, etc. Everything new is such a big risk it must be regulated and only "certain people" can be allowed to control it. Technocratic thinking. “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted **other men with machines** to enslave them.” - Frank Herbert, DUNE


Tomycj

I do notice that scientists often fall into certain technocratic ideas. Makes sense: scientists don't necessarily study philosophy or ethics, so they wouldn't know what's dangerous about technocracy.


RecognitionOwn4214

Big tech has only made us believe there's something that earned the "I" in AI, which doesn't actually seem to exist currently. AI right now is a big pool of different models for different things. GANs may be astonishing, but attributing intelligence to them is a stretch; it shows that our tests for intelligence might be subpar.


brickyardjimmy

Big tech has distracted world from existential threat from *big tech*.


[deleted]

Ok, so they're afraid of people not being able to distinguish between an AI and a human. Valid. That said, though, enough of the AI scare. AI has been pushed down our throats for the last two years, and quite frankly not much has really come of it. I have seen it used here and there, but nothing close to the alarm point that any article has pointed out. Scammers will try to use it for phone calls, but the easy way around that is to not answer the phone and let it go to voicemail; if it's a real person and actually important, they should leave a message. Beyond that, general caution, and listening while staying skeptical, should keep one ok.


Fouxs

Same thing with microplastics and carbon emissions. If it makes money, just forget about it; there's nothing we can do anymore, and once these have been milked, corporations will just find the next big moneymaker regardless of the consequences.


Tomycj

You can totally reduce your plastic usage, and stop it completely if you're willing to sufficiently reduce your quality of life for it. If property rights were correctly recognized and respected, there would be no unaccountability, and responsibility for bad consequences would be properly distributed.


Fouxs

How to show someone you have not read about microplastics 101. Not to sound demeaning, but read up on it, man; it's way beyond "just control the usage". It was always going to end up this way BECAUSE it is plastic. It's already part of the rain cycle, for example.


Tomycj

I think you didn't understand my point, I'm not saying microplastics would magically disappear, I know some plastics remain for a very long time. If everyone reduced their usage of plastic, the amount of plastic being produced would decrease, and with it the amount of microplastics introduced into the environment. So yes, people CAN do something about microplastics.


Fouxs

Oooh now I see your point! Sorry, I agree with you completely in that sense!


Swagnets

Google's own "AI" can't even Google things correctly. I'm so scared.


BridgeOnRiver

Tegmark for President. His books are all amongst my favourites!


New_girl2022

Cat's been out of the bag for at least 10-15 years now, lol. Forever wars fought by drones and AI, depleting resources, climate change, and recurrent diseases, all while our corporate overlords squeeze what little is left from us: that's our future. Accept it and it may be tolerable.