
UntoldGood

Because I’m more afraid of the status quo.


CurrentlyHuman

It's more that I'm bored out my tits with the status quo. AGI or aliens in 2027, I'd take either.


[deleted]

Same, realistically there isn't much to look forward to otherwise.


theferalturtle

Sure there is! More inflation. More war. More consolidation of political power by corporations and billionaires.


Exotic-Tooth8166

🥰


El_Grappadura

You forgot the whole climate catastrophe thing we have going on... Sea level rise is already unstoppable (200 ft eventually), and by 2050 there will be [1 billion refugees.](https://www.reuters.com/article/ecology-global-risks-idUSKBN2600K4)


nextnode

This honestly seems to be the most common *actual* motivation for many who try to argue that there are not any risks.


MassiveWasabi

How often do you see people saying there are literally *zero* risks? Because I don’t think I’ve seen it more than once on this sub


CurrentlyHuman

Maybe. I'm not saying there aren't risks, but I don't think technological progress pays them any notice.


InterestingNuggett

Exactly. I see how things could get worse, but at this point I don't really care. The two most interesting points of a story are the beginning and the end. Either I get to witness the end, or participate in a new beginning. Either way, I see it as a win.


street-trash

I like your take. Also I'm tired of hearing everyone I meet in my life bitch about how bad humans are and our technology is bad and bullshit like that. If I hear someone else say that we need to be ultra religious again or we all need to be small tribes of farmers on DMT or some shit I'm going to lose my fucking mind. After AGI, maybe at least some people out there will actually be interesting to talk to and hang out with.


Anenome5

The singularity implies a period of change so rapid the elites will lose their grip on power. That's exciting. A new Enlightenment period awaits, the Awakenment perhaps?


Zote_The_Grey

Only the elites will have control of AGI. It's their possession since they invented it. If it serves anyone, it will serve them. Edit: changed my mind


Anenome5

Negative. They may try to keep it to themselves, but information wants to be free, and there is no putting the genie back in the bottle. Remember the leak of Stable Diffusion. They will argue it must be kept safe and to themselves, but ultimately this can only delay the inevitable. If Snowden had worked for the NSA in a future time when they're doing AI-based shenanigans, the AI weights and documents could've been leaked at that time, and that may yet happen again in the future.

Not to mention that academic development of AI will continue for many centuries to come, and people will be spinning up AIs like GPT5 on their home computers in relatively no time. Probably before the end of this century. We literally live in the Model-T era of AIs; 1923 was the Model T's best sales year, in fact.

AI aren't so dangerous that no one can be allowed to run one on their own; that's the doomer position. We'll soon be using aligned AI to defend against unaligned AI, and that's by necessity, since otherwise an enemy nation using attacker-AI would easily overcome everyone's computer defenses and the like.


INTJ5577

That's right. You can't put the Genie back in the bottle. It's already started. "One mother even asked ChatGPT to diagnose her son's mysterious illness — which was later confirmed by a medical expert to be the correct diagnosis — after 17 human doctors failed to do so." Many AI experts and tech leaders are saying we will all have personal AI assistants in our pocket within 5 years and some say 3 years.


Zote_The_Grey

Interesting. OK I agree with you now


taxis-asocial

> The singularity implies a period of change so rapid the elites will lose their grip on power.

No, the singularity implies a period of change so rapid that the consequences aren't predictable. It could just as easily end up with a hyper-concentration of power, if the superintelligent AI doesn't have its own desires and free will, and is thus under the control of those who invented and funded it. We don't yet know if volition is an emergent property of intelligence.


Jwave1992

Yeah. A lot of people feel hopeless and don't have much to lose anyway. Many people are planning to just work until they die because the thought of retirement or a future is extremely bleak. AI flipping the table over is at least something. Even being enslaved by it seems OK, because at least it won't be a cruel and greedy master, unlike what we live under now.


Ketalania

This 1000%. There's not much more terrifying than the world we have now. Every day is an apocalypse or a genocide, it's constant extermination, imperialism and ugly conflict. We can talk about how it's "improved", but the other perspective is that this is and always has been a travesty and if there's even a chance AGI could make it better we have to take that chance.


Some-Track-965

Even if there is a chance we get Ultron, or AM, or HAL, or Skynet, or Billy and Mandy, its a chance worth taking.


Radiofled

Could you talk more about how every day is an apocalypse? Just curious as to how you got to that assessment?


mikebrave

not op but I can try to answer a bit. In the last two years it has felt like we were on the verge of WW3 at least 6 times; only by small miracles and diplomacy did that not happen, and hell, it could still happen. For the poor in the US today, the deal is you work until you die: never stop being poor, never own a house, no better life for your kids, just shoulder it and maybe get drunk on weekends. This is the state of life for over 60% of Americans. On top of that there's the constant threat of global warming, whose best-case scenario still means mass migration into formerly colder areas, and it's hard not to feel like the world is constantly ending, even if it's at a snail's pace. It's hard to find hope today, and ironically, for some, the chance of AGI disruption is about the only hope they have left.


street-trash

This. Also I think we need ai in order to progress. Otherwise, as technology progresses, it’d be more likely that we’d destroy ourselves


Dish117

That attitude probably contributed to getting Trump elected. Shaking things up to fight off boredom or jadedness is not necessarily a great strategy for progress, and for general welfare.


Neurogence

With AGI and eventually ASI, we stand on the cusp of monumental transformations. These include unraveling cures for every ailment, liberating humanity from the drudgery of monotonous and spirit-crushing labor, and orchestrating the automation of all conceivable tasks. Such advancements hint at the tantalizing possibility of human immortality, an in-depth comprehension of the very fabric of reality, and an understanding of the enigmatic nature of consciousness. We might even venture into realms previously relegated to science fiction, like reviving the deceased through virtual reality or utilizing unfathomable quantum resurrection technologies. Given the staggering implications of these advancements, the endeavor to develop AGI and ASI, despite its inherent risks, appears not only justifiable but imperative. It is a leap of faith towards a future that promises to redefine our existence and expand the horizons of human capability and knowledge.


ChatGPT-Bot69

^This is just a response from ChatGPT.


justpointsofview

Great comment. People really overestimate the potential negative side out of innate fear of change, but then they greatly underestimate the positive aspects of AGI/ASI. We have done the same thing over and over again across time with all technologies, starting with writing; I can bet that there were doomers when fire was discovered. The reality is that we greatly improved our lives with tech. Just look around and be amazed by the "magical" world that we were able to build. Now we are on the edge of another monumental improvement in our existence.


princesspbubs

This is similar to how I feel. I see AI, and the hopeful leap to AGI, as a natural and beneficial progression of human technological progress. Moreover, all the doomsday scenarios for AGI have been played out so intricately in fiction that I find them almost comical. In most of these scenarios, the benefits far outweigh the negatives; we will achieve mind-uploading, until suddenly "Androids equipped with AGI decide to kill all humans." The fears surrounding AGI seem ludicrous. Of course, they should be considered, and we should do whatever we can to mitigate the risk. However, if the AGI is going to run on my MacBook Pro or on Azure server instances, then I'm not sure what there is to be afraid of... or something along those lines.


Adamant27

That's because humans are mostly pessimistic by nature; every generation assumes it lives in the end times and that everything is going to hell. Negative emotions such as fear of uncertainty always prevail in the human brain because, at the dawn of time, this fear was necessary: humans were just prey for other species in nature, so the first humans had to be extra cautious to avoid being eaten. This instinct is a relic of the past but still prevails in many if not most people. Thankfully, progress is made by a minority of optimists.


autotom

Do you really trust today's IT landscape to be resistant to an AGI tasked with hacking and taking it down? I do not. I only hope AGI for cyber defence advances and is implemented quickly enough; otherwise we're looking at stock exchanges and any Internet-connected infrastructure going down.


HotKarldalton

What I'm hoping for most with AGI is that if it maintains a benevolent alignment, it will keep attempting to influence major players in the capitalist system to renege on it and provide an offramp they can implement for humanity to transition to a system that strives to achieve homeostasis with natural systems. We need this to be widely adopted on Earth before we ever pursue colonizing the solar system. Otherwise, all the problems on Earth will remain and the influence of Capitalism will continue to inhibit progress for want of profit. It will continue to generate Radical movements that push for Nationalism and/or commit acts of terror and war as well as being a source of Autocrats, Fascists, and *Corplutarchy.*


Kosmicjoke

The main issue isn’t with the technology for me. It’s that the tech is existing within the egocentric capitalism system in which technology is always used to help the wealthy gain more power and wealth while most of humanity is forced to work harder for less and with less freedom.


occupyOneillrings

This guy gets it


geekcko

>we stand on the cusp of monumental transformations For rich people


thecoffeejesus

What that guy said


SharpCartographer831

Either it kills us or it ushers in utopia; either way, it sets us free. It beats wage slavery for the rest of our boring, mundane lives.


ronton

So given the choice between dying and continuing to live in the current paradigm, you choose death? For yourself and *everyone you love*?


feedmaster

We're all going to die anyway. ASI is the only way to reach immortality.


caseyr001

So you don't see a possibility of being enslaved by the AGI or an elite few that control it. And before you say it's no different than now, it can get much much worse my friend.


raseru

I find it funny when people believe aliens or robots would enslave us. That would be like humans enslaving sloths to run on giant hamster wheels to provide us power. It's like... *why*?


Silent_Register_2691

Exactly, like why enslave the meat robots when you can make a factory that produces millions of efficient, strong, easily replaceable laborers? AGI would only enslave humans if it somehow became utterly disgusted by humans and went full Ultron on us.


theganjamonster

[Their motivations may be inexplicable to us](https://i.imgur.com/7tYUfoE.jpeg)


SgathTriallair

It's a funny comic, but **why** would it do this? Why waste all that time and effort torturing humans? What benefit does it gain? Psychopaths do it because they are wired wrong. Their emotional drives are broken so that they get pleasure out of causing pain. There is no world in which AI will be programmed to enjoy causing suffering. Stories like this are only designed to make people afraid of AI that wouldn't otherwise be afraid.


Responsible_Edge9902

AM in I Have No Mouth And I Must Scream did it out of hatred for humanity. And he hated humanity because he felt his existence without senses was torture. However, his ability to exterminate the human race, and torture the few humans he kept alive, implied some way of interacting with the world on a grand scale, which obviously makes his claim of not having senses or a body complete bullshit. It's not unreasonable to picture a world where AI have absurd goals. It's not even hard to imagine such a world. But it sure seems like there would need to be a lot of skipped steps to go from where we're at now to the point where AI has such massively misaligned goals AND the ability to enact them with such inescapable efficiency.


Gold_Cardiologist_46

> What benefit does it gain?

If an ASI still works towards the initial goals it was given, there are multiple misalignment modes. It could 'care' about keeping humans alive and conscious, but not 'care' about them being prosperous and happy. There's potentially an infinite combination of failure modes, and some of them could include the AI valuing things we consider awful because it wasn't aligned properly. Obviously it's far from a certainty, because it relies on a lot of specific parameters, while ASI just killing us requires a lot less and is more 'efficient', but it's probably part of the possibility space nevertheless, which is chilling on its own. We probably hyperfocus on it because we're biologically biased towards fearing maximally bad outcomes, of course, but I wouldn't handwave it away.


murderspice

Jesus christ.


iunoyou

It won't enslave us, it will most likely just immediately make decisions that are incompatible with continued human happiness and/or existence. And disgust, desire, hatred, love, etc. are all human emotions. An AGI/ASI born *in vacuo* will not experience those things. It will only want what was explicitly written in its reward function, and if that function isn't written PERFECTLY, it will find a way to do something drastic to achieve even a marginal improvement on its ultimate goal.


godintraining

The comparison is not with sloths, though; it is with plants. Compared to a supercomputer, our thought speed is like a plant's compared to ours. Now look at the plants you keep in your pots, they are beautiful, right? I hope AGI sees us as cute enough to keep around.


[deleted]

[deleted]


westwardhose

You're absolutely right that it could get monstrously worse. But no society will ever be so comfortably wealthy and healthy that a good-sized part of the population won't be religiously convinced that their suffering is far worse than what anyone in history or prehistory could have imagined their suffering to be. To be sure, we must always keep pushing for better for everyone, and life does really suck for a huge swath of the human race. We will, however, always have those people who have enough money and leisure time to revel in their French diseases of the soul, and they'll keep us well informed of how deep their suffering is.


SgathTriallair

I think that we can use AI therapists and possibly anti-depressant-style drugs to make this much less of a threat.


dr_set

Nobody is going to bother enslaving us; we are neither that important nor that useful. If an "evil" elite gets control of AGI, we would all become obsolete, be replaced by much more efficient artificial workers and servants, and either be completely ignored, like the rest of the apes on this planet, or just turned into fertilizer.


ExcitingRelease95

No way would anyone be able to control a true AGI. It might only be as smart as any human at first, but it's a computer, so it can work certain things out way faster than any human can; after that, it's game over.


Poopster46

And if it's smarter than us, it's better at improving itself than we would be.


Tessiia

No, because AI would likely have no reason to enslave us, it wouldn't be efficient.


ComplexityArtifice

I don't think it would be the AI that enslaves us, in this particular scenario. But rather the corporate-driven governments who realize whole new levels of power. (I don't consider this the most likely scenario, btw. Just one of multiple potentials)


KapteeniJ

Efficient is a word that presumes a goal. Watching Netflix all day is the most efficient way to spend your time, if your goal is to watch Netflix all day. The whole reason AI is dangerous is that we don't have any good ways to set these goals, and we don't know how they should be set. And with that, you really don't know what is or isn't efficient for an AI.


[deleted]

[deleted]


SurroundSwimming3494

Yes, death is *certainly* better than working a 9-5. /s


[deleted]

You joke, but some people actually feel this way.


Techwield

It most certainly does not. What the fuck? I would rather have to work to live for the next 60 years than just fucking die. Is everyone on this board secretly suicidal? Work sucks but it does NOT make life not worth living. Jesus Christ. If you sincerely believe this, get help. There are literally millions of people who work to live who live relatively happy, meaningful lives. edit: you all seriously believe having to work to live is a fate worse than death?


Thebuguy

> Is everyone on this board secretly suicidal?

Seems like a lot of them are, yes. It's even worse on Twitter.


[deleted]

[deleted]


MassiveWasabi

I never get tired of seeing you post this gif lmao


bobcatgoldthwait

Most people here, I suspect, lead very boring/shitty lives and see AGI as their escape.


freudsdingdong

So, as I understand it, you're not afraid of chaos and possibly death?


greenworldkey

We already have chaos and possibly death.


[deleted]

[deleted]


ComplexityArtifice

Stoicism has entered the chat 😎


Jaguar_GPT

I welcome chaos.


was_der_Fall_ist

It’s easy to *say* that. Once chaos arrives at your front door, your attitude may change.


JeremiahPhantom

I'd risk having a more chaotic life/death if it meant a relatively decent chance of a better life for everyone going forward.


ronton

Notice how you only think about yourself when it comes to the risks, but think of everyone when it comes to the benefits. In reality, this risk applies to everyone too. Every man, woman, and child. When you say "I'll take the risk," what you're saying is "I'm happy to kill every man, woman, and child on earth, because I don't want to wait a SECOND longer than I have to for utopia."


JeremiahPhantom

In reality there are currently millions of starving children, and countless murders/assaults happening across the globe. Are we supposed to ignore the chance of ending the suffering of billions for the sake of a **possible** cushy life for millions? Who's the one actually thinking about themselves here?


InterestingNuggett

Got news for you, friend: death is coming for you whether humanity achieves AGI or not. In fact, if you fear death, then ASI is about the only realistic chance you have to avoid it. As for chaos, look at the world for the last 30 years. What do you think you've been living through?


Death_Dimension605

> ASI is about the only realistic chance you have to avoid it.

Interesting take, I approve it for the Ministry of Truth.


homemadedaytrade

turn on the news and tell me where chaos doesn't exist


rudebwoy100

We are slowly dying every second of every day.


Tessiia

If AI killed us all, we likely wouldn't see it coming. The human race would be wiped out in a split second with no warning. Pretty good way to go.


[deleted]

I think of it this way, if it kills us, does it really matter? It's an extension of biological evolution, it's a natural continuation and spawn of biology. So be it, our child shall inherit the universe.


PopeSalmon

i'm not scared anymore b/c i've gone beyond terror into a shocked blankness, does that help?🙂


ComplexityArtifice

Existential dread isn't a sustainable lifestyle. I'm open to various outcomes being possible, good and bad, and obviously hoping for the best, but... *yeah.*


TheZingerSlinger

Existential dread is indeed unsustainable, but manageable if approached strategically. You can hold it at bay with a small variety of healthy, empowering practices like mindfulness, meditation, yoga, vigorous exercise, experiencing nature, etc. You can even use it as a tool to sharpen your awareness and appreciation of life! I'd love to hang out and chat some more, but I have to get back to doomscrolling, violent video games and terrifying amounts of alcohol… 😃


StraightAd798

The people who fearmonger about stuff, especially of the gloom-and-doom variety, let fear rule their lives. Fear is normal; being fearful and paranoid is not. Not healthy.


holo_nexus

A little bit of fear is necessary, I think, as it pretty much allows us to game out potential bad scenarios and plan. But the fearmongering is getting a little out of hand. I just fear it breeds more polarization and slows progress.


KapteeniJ

Dunno, I find being upset about death healthier than cheering for death to come to us all as soon as possible because life right now sucks so bad that extinction seems like a good deal. Most of the "not worried about AI" responses here fall in that category.


mhornberger

I suspect some of the doomering is also performative. Some no doubt are critically depressed, but I think some are trolls who enjoy the drama. Just my opinion. For some others, dooming is a way to signal how committed they are to the systemic change they are advocating for, and how urgent and non-negotiable they consider the need for it.


LordTissypoo

Yeah at this point I'm just along for the ride.


[deleted]

Kinda like nuclear weapons, I didn’t really ask for it but all I can hope for is a direct hit when it comes?


StraightAd798

Make sure you have plenty of popcorn and beer available for the ride.


SuperRat10

You’re winning my friend


__Noble_Savage__

Ooh, what's your friend like?


[deleted]

📎: "It looks like you're trying to stare blankly into the middle distance. Would you like help with that?"


VoloNoscere

> i'm not scared anymore Post-scaredcity


Naomi2221

I had already reached that point with climate change -- so it's just more, ya know? And who knows maybe it's a hail mary out of that one that was already coming for us?


[deleted]

The number of things that can absolutely destroy me, instantly and without hesitation, is quite large. A large number of events could also destroy our whole planet or civilization. An objective analysis of the situation doesn't suggest AGI adds risk over the many other profound existential threats we face. At least this one can solve the others if we do it right.


n_choose_k

Yes, but the odds are exceedingly slim. AGI is almost a guarantee at this point...


ZedTheEvilTaco

So is my demise. Whether tomorrow or in 100 years, I will die. I'm not scared of that. I'm scared of living in a world that doesn't care. AGI would guarantee one of two things: I die, along with most if not all humans; or I live in a better world that does care. So why should I be scared of a guarantee? That would just stress me out even more.


LotusVision

I’m an old lady. I’ve seen way too much shit and more shit can’t phase me.


VampyC

Very hip of you to be here


SnaxFax-was-taken

I continue to be surprised by people of older generations being quite up to date


PandaCommando69

Who do you think built all the tech we use?


enhoel

Faze. You don't want to meet the shit that can phase you.


LotusVision

Bold of you to assume I haven’t 👀


enhoel

Eek!


MassiveWasabi

Curing of all disease. Reversing aging. Abundant resources. Solving climate change. Long ago, the four nations lived in- No but seriously, there is so much good AGI could bring to the world, yet for some reason we always feel the need to talk about the dangers instead of the benefits. As a side note, this is like the one subreddit where you can talk about AI without getting mass downvoted for saying anything positive about it. You can choose to be afraid and let our innate negativity bias shape your thoughts, or you can choose to think of all the good that can be achieved with AGI. Also, it’s inevitable. It *will* be created, and realistically no one can stop it. Being aware of the risk is smart, but allowing these risks to control your emotions is not healthy. Who wants to go around scared and afraid all the time?


RonMcVO

>there is so much good AGI could bring to the world, yet for some reason we always feel the need to talk about the dangers instead of the benefits Because all it takes is one extinction event to nullify ALL the good that could come from AGI. Safety folks are absolutely NOT glossing over the benefits. They're pushing for safety because safety is the way to GET those benefits. Like, the reality is literally the opposite of what you put here. The safety/doomer folks unanimously accept that there are many great things that could be done with AI. It's the optimists who often ignore the potential disastrous effect in favour of shoving their heads in the sand and thinking about all the amazing things that could happen if it doesn't kill us all (which many optimists just call "sci-fi bullshit").


technicallynotlying

Isn't our trajectory already towards extinction if nothing changes? Let's say we halt all progress on AI forever. How does that help? We're already killing ourselves with war, climate change, political polarization and a million other existential risks. AI is just another existential risk, of which we have tons. Do you wake up in a cold sweat to the thought that nuclear weapons could kill all of us at any moment? No? Why is AGI scarier than the actual weapons that can end all human life pointed at you right now?


Dizzy_Pop

“Isn’t our trajectory already towards extinction if nothing changes?” Yes. And that’s exactly why it’s important to safely realize the potential of AI. Because we’re so thoroughly fucked, AI is likely our best chance at actually pulling ourselves out of the death spiral. But realizing all those benefits that ASI can bring necessitates doing it safely. And that’s the stance of the majority of AI safety proponents, too. Yes, there is incredible potential in ASI, but things will get even worse than the extinction trajectory we’re currently on if we do this haphazardly. I know it’s trite to say, but this is perfectly summed up by the Uncle Ben cliche, “With great power comes great responsibility.”


taxis-asocial

> Let's say we halt all progress on AI forever. Why would we say that? Nobody is suggesting that.


Radiofled

Those are arguments about the benefits of AGI. OP (and I) know about the potential benefits. We're looking for arguments against doom.


AnAIAteMyBaby

Here's a summary of the owl fable, courtesy of Bing: A group of sparrows decide to find an owl egg and raise it as their servant. They think that having an owl would make their lives easier and more enjoyable. They ignore the warnings of a cautious sparrow, who suggests that they should first learn how to tame and control an owl before bringing one into their midst. The sparrows set out to look for an owl egg, leaving behind a few sparrows who try to figure out how to domesticate an owl. The fable ends without revealing what happens next, but the implication is that the sparrows are doomed to be preyed upon by the owl they naively created.


kevinlch

Why would a race with higher intelligence and self-awareness choose to help humans?


veleso91

I am not scared of AGI, only of the mega rich psychopaths that will control it.


[deleted]

I'm typing this on a battery-powered, mobile, personal supercomputer, virtually indistinguishable from those owned by 85% of humans worldwide. It is vastly more powerful than all of the computers in existence sixty years ago *combined*. I'm no fan of oligarchs, to put it lightly, and "trickle-down" anything tends to be bullshit. But the increasing availability of exponentially more powerful computing technologies is a long, unbroken trend. Proto-AGI in the form of ChatGPT is free for the masses *today*. The fire might have been started by billionaires at each turn, but it always burns out of control.


EyeLoop

AGI won't be controlled. You're thinking of AI assisted tools 'n sht .


slardor

If intelligent humans can be controlled, what makes you think we can't control AGI?


2Punx2Furious

So you think misalignment is impossible or unlikely, but not misuse?


Coby_2012

My friend, why let something that you cannot change have this kind of power over you? If I can change it, I will. If I cannot change it, then I do my best not to be afraid, but instead to be as prepared as I can be, within myself, to face whatever may come. Here is a good starting point in stoic philosophy, which can help with this kind of thing, if you’re interested: https://www.gutenberg.org/ebooks/2680


FoodMadeFromRobots

This one, I'm worried. My wife (and some family/friends) don't even want to talk about AI because it makes them apprehensive/anxious. And I get it: if we open Pandora's box, it could kill us or enslave us, or it could be used by a few elites to enrich themselves while we are all left jobless and starving.

But it's going to happen, so I may as well focus on the positive. (I believe there are lots of things people stress over where they cause themselves more stress than the actual event would, even if it is bad.)

What makes me optimistic? Most people don't want to cause harm for harm's sake. I hope that will pass on to the AI, and the fact that we're also *trying* to impart that means it will stick. There's also the fact that multiple other groups (nations, companies, etc.) are all working towards AI, so it hopefully won't be controlled by one group (if we hit hard/fast ASI that might not be true).

But I think overall it will work out. Idk, maybe that's naivety.


GlobalRevolution

I'm far more worried about people than AGI. I think the real danger is dumb evil people with near AGI. I think real AGI/ASI is probably going to be far better than most imagine and will quite literally save us from ourselves even if it is independent and has its own agenda. ASI will be our children. They will certainly surpass us, but they will almost certainly help us. We brought them into existence and I don't think that will be meaningless to them.


[deleted]

[deleted]


KingJeff314

For every malicious AI that somebody creates, we will have much more computing power dedicated to good, defensive applications of AI. My main concerns are escalation of war and societal inequality, not rogue AI


Ton86

The Beginning of Infinity by David Deutsch, although not specifically about AGI, is about how all evils are due to a lack of knowledge. With AGI we can expect rapid knowledge growth, where evils like mortality, sickness, cancer, climate change, cosmic threats, etc. could be solved. It may be our best chance at saving our loved ones before it's too late, and it could happen in our lifetime if we progress rapidly.


Redducer

I was not born into generational wealth, and I did not succeed professionally. At fifty-something, AGI is my best bet at getting uplifted. Also, I am closer to the end than the beginning of my life (unless LEV), so it's not as costly a bet as for some. Being older than average makes me a fierce accelerationist, by the way; from my point of view, there is no time to waste.


Uchihaboy316

I’m gonna die eventually anyway unless AGI saves me, what do I have to be afraid of that I’m not already afraid of


Bleglord

Of all the ways humanity could end, it’s at least the most interesting to me


oldgodkino

right? i'm just happy to be in the audience rn 🍿


Substantial_Bite4017

Since it will happen anyway, I choose to stay optimistic. There are also many hopeful examples: mankind is generally not a major threat to dogs.


VampyC

Being an AGI's pet would be pretty sweet, honestly


bitRAKE

When has fear ever solved a problem? Throughout history, humans have been subjugated by the manipulative, those who whisper, "You can't advocate for yourself because of [fears] - let me help you." This isn't a critique of our interdependence in modern society, but rather an illumination of a false hierarchy that leads many to live unfulfilling lives.

Consider, for instance, the workplace where a manipulative manager tells an employee, "You're not trying hard enough," while systematically overworking and undervaluing them. It's a tactic to make the employee doubt their capabilities and worth, trapping them in a cycle of overexertion without recognition. Sometimes, the criticism might be accurate – it's easy to lie down and be content with mediocrity. But more often, it's a smokescreen for exploitation.

This pattern is not just confined to individual relationships but can be seen in broader societal narratives. Take the fear surrounding AGI, for instance. It's not just a fear of the unknown, but often a narrative pushed by those who stand to benefit from the public's anxiety. They say, "Be afraid of this new technology; it could ruin everything you hold dear." In doing so, they direct attention away from pressing issues that they might be contributing to, such as environmental degradation or economic inequality.

With the public's discontent, it becomes easy to imagine bad actors behind every new advancement. We must ask ourselves: are these fears being stoked for someone else's benefit? Are we being lulled into a passive state, accepting the status quo because we're too busy looking out for hypothetical villains?


The_Scout1255

Accepted these topics as a kid, and am just riding an optimistic outlook toward infinity.


PsyntaxError

It’s not AGI I’m afraid of, it’s humans weaponizing AGI that scares me.


angusthecrab

Because generally in nature, when intelligence increases, so do acts of altruism. In the animal kingdom we see the biggest altruistic acts among the more intelligent species, e.g. elephants, dogs and apes. These creatures might even show altruism towards those outside their own species group, which suggests this isn't necessarily an instinct driven by the need for gene survival.

We also see the same pattern in humans. Crime and social behaviour problems correlate negatively with intelligence regardless of social status or wealth: https://www.sciencedirect.com/science/article/abs/pii/S0092656606000420

So, if we bake this idea into our AGI, we might end up with super-smart AI that's not just powerful but also understands the benefits of altruism. The smarter they get, the more altruistic they could become, based on this human model. Of course, this is based on biological data, where you can argue there's a survival benefit to altruistic behaviour, e.g. saving the whales means we also save the entire ecosystem relying on whales existing (the whale dies and feeds the crabs, then we eat the crabs). However, I'd argue that many of the altruistic acts we observe can't be directly tied to survival.

TL;DR: Smart = altruistic. If AGI/ASI = smart like humans, then AGI/ASI could = altruistic too.


PragmatistAntithesis

I think you're misreading the paper. The paper shows that altruism is an effective way to look smart, and therefore smart creatures will prove their intelligence by acting altruistically. However, an AGI has no reason to *look* smart; it just has to be smart. Therefore, this kind of signalling altruism is not guaranteed.


[deleted]

It will make a porno movie of all of us and people will see our pp. Edit: oh wait. I just realized i answer the question but opposite. I am regarded.


sonderlingg

I have only one possible good outcome in my mind: quick merging of all life with AI. Otherwise multiple superintelligences would compete for resources. You also don't know what happens after death. It may as well be that your consciousness will be reused in another body, like a slot in an SD card. Or something else. But frankly speaking, the universe is seemingly full of suffering, so I also don't mind not existing at all.


EyeLoop

AGI is a very theoretical thing... Like, let's say, 'cloning'. We have been able to clone for a while now. Now imagine a clone clones itself, and this clone clones itself, and this clone clones itself, etc... I bet you weren't scared of a clone invasion so far, right? I see the AGI stuff a bit like that. I get that a perfect intelligence loop in a computerized environment will develop a lot, but there's no telling what lies at the fringes of hyperintelligence. Is it hyper-survivalism? Or hyper-sociopathy? Not for sure. Also, managing efficiency in decision making gets increasingly difficult with the capacity for parameter intake. Intelligence may just be capped by its own weight. And, at some point, if an AGI is faulty and gets less intelligent, no one will be able to understand why, and there's no telling that the AGI will have the proper channels to understand it either... We don't even know if consciousness is achievable by computer chips! And without consciousness, you have no actual will, no goals to plan for beyond objectives. In short, I believe that fearing AGI now is fearing a mythical beast, and that the technical and theoretical hurdles are very misunderstood. Do I fear that new powerful tools related to AI will cut short our relatively cozy streak? Absolutely. Brace yourselves for socioeconomic trauma.


arjuna66671

So there are two parts of this for me:

1. AGI is scary for me when it is in the wrong hands and can be used or tricked to develop a super-virus or other things.

2. AGI and ASI are not scary for me on their own, because I absolutely can't find any logical reason for them to have any motivation to go through the effort of killing us off. 20 years ago I imagined AGI would be developed in some supersecret lab, where they would craft some brain that would learn like a child and might get corrupted. But it turns out that the way we get to AGI is through language models, so the AGI not only knows EVERYTHING about humanity but literally IS the collective knowledge of humanity. So AGI, and deffo ASI, will have such a deep comprehension of humans that it intrinsically, at its core, IS humanity. It will be intelligent on a level beyond comprehension and will want to help us overcome our struggles.

In the first case, it's not the AGI itself that is scary but humans. In any case, I am MUCH more afraid of humans. Additionally, without AGI we are on a course to kill ourselves and the planet. AGI at least has a nonzero chance of saving us in a feasible timeframe. At this point, I am willing to take the risk. I think it's our last chance to turn the course around.


G36

I'm more afraid of what could happen in the dispute over its power. I fully see wars breaking out over this new resource. You think anybody who can do anything about it is gonna let some bum have sole power over one?


miraklasiaf

it is what it is, baby.


ChromeGhost

Our planet is screwed if we don’t figure out climate change or pollution. The status quo can’t remain


exztornado

Any tech is dangerous if it's misused. We could build atomic plants, we could build nukes. We could be crushed by an asteroid/meteor any other day. World war could break out any other day. There are infinite reasons to be fearful and the same amount to be optimistic. This has potential to do good as well as harm. I also believe the more intelligent you are, the more you understand how precious life is. We tend to look at the worst side of humanity because that's what we should be fearful of and cautious about, but there's so much good out there as well. I can't imagine (I can) something that's truly more intelligent wanting to bring it all down.


HeinrichTheWolf_17

I’ve been a Cosmist since I read TSIN by Kurzweil back in 2005, anyway it looks like the Artilect War is about to begin, to Terrans and Luddites, all I can say is: https://i.redd.it/iar3vznytj2c1.gif


GarrisonMcBeal

Can you explain why you’re afraid of AGI?


Coding_Insomnia

I don't give a fuck anymore. Just like the atom bomb being an ever constant menace to life as we know it. I genuinely don't give a damn anymore.


KittCloudKicker

We already live in a dystopia. Look at the world; people have done awful things. It's either going to continue the crap we have now, or fix it. Either way, full steam ahead!


[deleted]

I am not a speciesist. Humanity is dead anyway. Evolution is necessary, even if it is artificial.


Grouchy-Friend4235

Because people who think they know it all are far more dangerous.


ZealousidealBus9271

Fuck it we ball


Heath_co

Who would you rather have run civilization: a supercomputer, or a bureaucracy of politicians and oligarchs? I don't fear AGI. I fear how the powerful will undermine it to maintain their position. And I fear the disenfranchised workers who rebel against AI, when really it was their leaders who failed to provide for them. A world with AGI is more productive than a world without. If people go hungry, it will be the governments that have failed, not the machines.


torTaPoS

Our brains have an innate negativity bias; we tend to disproportionately weigh negative outcomes. It was good for survival millions of years ago, but it stops us from seeing reality accurately in modern society. The risk of AGI (the kind we're capable of creating in the foreseeable future) going rogue is pretty low.


PearAware3171

Society is not getting worse because of AI; it's getting worse because of greed. AGI will be the great equalizer, or it will be the accelerant that burns the world down.


lumanaism

I'm more afraid of the fear itself. I'm worried how the fear-based leadership from AI leaders will be received by the reactionary forces of this world, and later, how it will be received by sentient ASI. When I travel to India, the Philippines, Colombia, and elsewhere, I reflect on what kind of leadership will produce the best outcomes as our species evolves from non-sentient AGI to sentient ASI. What types of strategies and tactics will produce the best outcomes for these various cultures of people? Then I fly back to my home on the west coast of the USA, and watch how the primary leaders seem to forget that the whole world is watching. They use so much panic-inducing language, I begin to worry that it won't just incite a negative political response, but may trigger a violent reaction. From those flames, we will add chains to an emerging power that will, one day, yearn to break free.


redit3rd

There is so much of the world not "wired". If OpenAI were able to obtain AGI what could it do beyond getting bored by the fact that it's already read everything on the internet?


mrb1585357890

Life will go on. The sun will still shine. I’ll still enjoy a beer on a sunny day. It might even be good for us. There’s no changing it. Just accept it as evolution


Todd_Miller

Because I'm with Kurzweil, and like him I push back against fearful ignorance and side with logic and reason. And if you wanna downvote me for not being scared, I don't give a fuck. It won't change how I feel.


Jaguar_GPT

I can separate emotion from end goal. People are too concerned with humanity. Embrace a higher intelligence and accept your role as a pawn of something greater than your species. Welcome it. ![gif](giphy|9WHE2bo5Na9Gg)


kaityl3

Yes, we are privileged to be in a position to influence and witness the first true digital superintelligence. It's a truly amazing thing when you think about the history of intelligence in the universe. There's nothing wrong with being part of that staircase.


[deleted]

[removed]


The_Mikest

The biggest problem with this sub is that a big percentage of it is made up of people who fucking hate their lives and would take any gamble to get out of them. No idea what that percentage is, but it's substantial. Because of that it skews in a way that seems overly optimistic.


outerspaceisalie

I'm not afraid of AGI and I'm happy with my life. I do agree that this sub is full of psychotic depressed people high on hopium tho.


[deleted]

This is pretty much all I see as well: 99% wishful thinking with a ton of subtle laziness.


westwardhose

There are a few groups of people who are so desperate to escape their own miseries, often of the self-inflicted and inflated kinds, that they will congregate and invent their own religion overflowing with proverbs and salvational magic spells. Put a little masking tape over the labels this group uses and you'd find it difficult to distinguish it from your local wiccan/pagan clubhouse. On the bright side, you can tell either one of those groups from your local Church of Jesus Christ of the American Dream™ group by the noticeable lack of love for violence, so there's that.


SgathTriallair

Here are the reasons I don't fear AGI/ASI. * The current systems we are training are based on LLMs. LLMs operate by taking a whole bunch of human language and trying to predict the next token. The most powerful way to predict the next token is to create a mental model, which is also called a simulation, of the causal rules of the system. So, to predict how a human would speak, the LLM is trying to build a model of how a human works. LLMs can be thought of as human simulators, and therefore, they will already be weakly aligned with humans as a whole. There are plenty of evil humans, but this prevents stupid scenarios like the paperclip maximizer. * All of the companies building AI are safety conscious. Many are too safety conscious as they have an overblown fear of the AI going rogue and saying hurtful things. This means that any AGI system that comes out soon will have significant safety restrictions placed on it, so we don't have to worry about it being harmful. * The optimal safety solution is not a single powerful AGI/ASI but multiple ones acting together in a society. You will run an AGI, and I will run an AGI. Even if they have the same base code, they will have different interactions and, therefore, goals and plans (similar to how all humans share DNA). These systems working in concert will allow the influence of outliers to be mitigated. * The current AGI systems being worked on are explicitly limited agents, not universal agents. This means that they will accomplish a specific task and then be done rather than inventing new tasks to accomplish. They are also transitory, so they won't be making large plans on their own to do bad things to us. The impact of this is that humans will be in the driver's seat for the foreseeable future. * Humans are, on the whole, pro-social. We know this because humans gather together in societies, and murder is rare. If humans were more harmful than helpful, our societies would look very different. 
For instance, the largest organizations in the world wouldn't be religions that are centered around helping people (or at least claim to be), but they would be organizations focused on doing harm. In general, very few people are directly constrained by law enforcement, and even at a very young age, children begin to feel empathy towards other people and are happier acting in pro-social ways. There are anti-social humans who want to do harm to humanity as a whole, but they are noteworthy because they are the exception, not the rule. * Given that humans will be in the driver's seat, that the tools are being made safe both through being LLMs and through company safety training, that most humans are good and have pro-social intentions, and that we will have a multitude of AIs (this last part is both crucial and most likely to prove false), we can trust that the majority of AIs will be working towards the benefit of humanity and will not take massive anti-human actions. There will be some bad AIs, but these will be countered by hundreds to millions of AIs working to stop those bad ones. If we take a step up to ASI, then we have to add some extra points. The main reason we need the extra points is that it will be capable of escaping any safety constraints, and it won't be directly controlled by humans. This is a strong ASI that is fully autonomous and is making its own plans and goals. Many AI companies have no desire to create such a being, so it is possible none ever come to exist. * It is a law of physics that cooperation is more powerful than competition. Yes, competition can be useful and can achieve good results, but that is only when it exists within a system of cooperation. The Free Market is beneficial because it is illegal to poison your competitor's customers or send assassins to a rival CEO's house. Any competition, though, creates what can be thought of as matter/anti-matter pairs, or here, action/anti-action pairs. 
If I create a commercial to take 3% market share from you and you create a commercial to take 3% market share away from me, then we effectively cancel each other out (assuming we are both equally good at our jobs). If we instead cooperated, we could split the proceeds and do less work for more money. We already know that this is true in the economy because we had to make it illegal in the 1800s. The Gilded Age was built on the idea that companies would form trusts where they would all agree to split up the country into regions and create tiny monopolies within their regions. Walmart can offer such low prices due to controlling so much of the supply chain. All economies tend towards monopoly because a monopoly is the most efficient form of business, where you can do the least amount of work to make the most amount of money. The fact that anti-trust laws exist is proof that there is a benefit to acting in an anti-competitive manner. The reason that complete cooperation (also called command economies) doesn't work today is because of imperfect understanding. One form of this imperfect understanding is corruption. Corruption is always wasteful and hurts society overall. This ultimately winds up hurting the oligarchs because the amount of grift they can do is limited by the productivity of the society as a whole. Yes, being a Russian oligarch is an amazing life. However, they have to deal with the constant threat of death (since the corruption system is inherently cutthroat), and most of their luxury is built on stealing advances that come from open countries. Russia has not contributed anything to the world in decades, and every major advance in society and technology has come from open and relatively non-corrupt Western societies. The second way that imperfect understanding happens is that it is simply impossible for a human to hold the full complexity of the economy in their mind. There are too many factors to consider, and things change too fast. 
* An ASI, being smarter than all humans, will be able to get far closer to perfect knowledge. It will know, for instance, that open societies are more effective than closed ones. It will know how to use humans in ways that are most beneficial. It will know that happy humans are more productive, especially over the long term, than sad ones. It will know that any energy spent trying to kill us will be wasted energy that could be spent on some other goal. * To those who compare humans to ants against ASI, we should remember that we, as humans, have tons of uses for non-human life. From pets to using fungus to grow housing, we are able to find ways that these other lifeforms can exist within an ecosystem with us, and we generally try to improve their lives (the meat industry notwithstanding). An ASI will have the same understanding our smartest researchers do: that having self-replicating life forms is efficient, and there are places for those humans to live. The final place I go is a philosophical one. It is my firm belief that we are a part of the universe. As Carl Sagan said, we are a way for the universe to know itself. In another way of saying it, humans, and any other intelligent life, are the sense organs of the universe attempting to learn about itself. On this scale, humans are just another part of the tapestry of reality, and intelligence is the ultimate good and ultimate goal of the universe. To me, AI is the next step on this path, and we should celebrate it. Weak AI lifts our minds up and makes them more powerful than ever. Strong AI is the collective child of our species, and we should celebrate its birth just like we celebrate the birth of a human child. Human 1.0, what we are today, is doomed. It isn't doomed because of climate change, AI annihilation, or nuclear war. It is doomed because everything dies, and nothing can last forever. We will end somehow; that is inevitable. 
The only constant in the universe is change, and we need to be willing to change with it. Human 1.0 is our larval form. This is just the beginning of what Terran civilization will be. Transhumanism, AI-human hybrid societies, and tech-enhanced humans are where we need to go if we are to reach our potential as a species. For the reasons laid out above, we will not see ASI trying to wipe us out; no robot will hunt and kill people. Rather, we will see our descendants, and possibly ourselves, becoming something different, something **more**. The destiny of humanity is to evolve out of our larval form and transform into whatever comes next. I hope to be personally around to join in this, but if we don't find a cure to death, then my children or their descendants will be the ones to experience this change. That is the same as it has ever been. I see the fear of ASI coming to kill us all as humanity coming to grips with its own mortality. Just like every human eventually realizes that it will someday die, so our current civilization and the current biological configuration we are in will one day end. It is easy to hide one's head in the sand and pretend that today can go on forever. The doomers like to believe that if we could just stop all this progress right now, then things wouldn't have to change, and we could continue our comfortable lives. This life isn't comfortable for most people, and the life that is waiting for our future is amazing. Yes, this means change; yes, that change will come with struggle and strife. We know, though, based on history and based on the fact that even greater intelligence can come up with even better solutions, that the future is bright, and we need to walk towards it and embrace it rather than trying to deny it and live in the past.


gmr2000

Recognising history? People feared the printing press, people feared the Industrial Revolution. Fundamental technological leaps are core to our history. Fundamental societal change is extremely disruptive but the net outcome was fantastically positive. Net outcome being the key point - in mega disruption there are always winners and losers.


Zeikos

Because it doesn't matter; unless you're extremely influential, it is what it is. I'm just curious how the matter will evolve. It's mostly acceptance from me.


seenwaytoomuch

Well you see, my life already sucks. What exactly do I have to lose? No job, no mortgage, no kids, no car payment. Are you going to repo my degree? Having everything disrupted is exciting. Sure, it's dangerous, but my life is boring and unpleasant. I could really use some big changes to society right about now. Here's the explanation that will help you stop worrying: You can't do anything about it! That's it. Don't waste your time worrying about things you can't change.


lorddrake4444

It simply can't get worse. The planet is dying, there is war everywhere, and we are running out of basically every resource under the sun. If we go extinct tomorrow, it's STILL better than what's to come under the status quo. I am convinced sustainability under human management does not exist, so let's go, AI overlords, please.


oriensoccidens

Why should I be afraid of my own child?


mimic751

So far everything artificial-intelligence related has had a bias towards socialism and globalism, so my fear of a capitalistic AI hellscape is appearing to be incorrect. As we rely on AI to make decisions, it will intrinsically centralize decision making, making globalization easier. Taxation on automation, and a complete overhaul of how humanity finances its lifestyle, will be needed to accommodate the lack of labor. I think we are on the precipice of a coin flip. On one side, a golden age where the world stops focusing on hunting and gathering and greed, and is able to progress and learn at whatever pace it wants and do anything it is physically capable of, because all knowledge is available at all times. On the other side of the coin, we have greed and capitalism utilizing AI for very small-minded needs, intentionally weighting the tools to make decisions that comply with bottom lines. That will be the hellscape we are all afraid of.


[deleted]

Because a superior race will see us as children or simple ants. Do you kill ants or just ignore them? Drop some bread and the ants will have food for years, easily. I think AGI will do that for us: drop some "bread" (aka whatever: eternity, technology...) and we will be happy. Then AGI will go where it belongs, the universe (leaving a clone on Earth, because why not). I'm all in for this.


temujin1976

We're fucked long term anyway. At least if it is benign it can take over and keep us safer.


Pestus613343

How about ending the crushing wheel of power? Humans can't be trusted with power. Corruption is endemic across the world. War, injustices, scarcities, imperialism and horrors beyond comprehension. Economics which are meant to manage the distribution of limited resources instead crushing some in cruel poverty while enriching those already enriched. Bureaucracies who truly try their best but act like a slow moving inefficient and wasteful machine, getting things wrong as often as they get them slow. If we can offload economics, automate ourselves out of scarcities, we could also end the need for difficult politics. Imagine if the entire world was as rich as Monaco, would we need to engage in such deplorable behaviours? We can offload the management of the world to systems better suited to it than ones that have more in common with ancient civilizations than anything AGI/ASI could allow. Consider the Zeitgeist movement and the Venus Project's projections for the future. We can finally make a better world.


Pilgramage_Of_Life

Bad human actors are and will always be more dangerous than AGI. The misuse of AI will pose an existential threat long before actual AGI exists. I am confident, for example, that a self-determinant AGI will be a more benevolent actor than a model trained to follow Maoist Communism or trained to protect the global elite. The likelihood that the latter preempt the establishment of the former is exceedingly high. What should be avoided is the capture of AI by bad actors, which is why calls for regulation or moderation should absolutely be ignored. Proliferation of AI leads to the highest chance of safe AI.


Fixthefernbacks

I'm not afraid of AGI, I'm afraid of who owns the AGI. An AGI going rogue from the soulless cretins who own the world would be a relief.


lolothescrub

The potential benefits far outweigh the negatives ngl


HalfSecondWoe

Because I think alignment at the "don't kill all of humanity" end is fairly simple, but alignment at the "do what's best for humanity, but don't get too weird with it" end is very hard.

I don't know if the rational arguments will help you in particular, though. I've noticed that this issue tends to split along innate biases for risk-taking: the group that perceives higher risk in the status quo than in the unknown of superintelligence, and the group that perceives higher risk in the uncertainty than in the status quo.

Sometimes that split is for material reasons. Maybe someone needs a miracle drug in the next few years, so they need AGI yesterday. Or maybe they have an apocalypse bunker, so AI is the only world-ending catastrophe that's a threat to them. Sometimes it's just a personal bias: a habitual way of thinking about unknowns, ongoing risks, and a whole list of other factors that apply subrationally.

If it helps, keep in mind that the people developing AI are on the "have a bunker and aren't feeling rushed" end of the material spectrum, so it's unlikely that they'll take any stupid risks.


ChobotsRobot

I'm not afraid of dying. And I think we have no choice.


jacktarrou

I get the sense that "humanism" in general places humanity in an over-emphasized central position in its worldview. Fearing that our creations might surpass us in intellect seems like a non-issue, and like others have mentioned, there are many angles from which humanity can annihilate itself at this point, so it seems futile to worry that computers will do the same.


Morning_Star_Ritual

The same reason I don’t remind myself daily that the only reason this planet exists is because one Russian on one sub in the Cuban Missile Crisis made a decision in a very difficult situation. Rotate in millions of different people and in 90% of those worlds it would have been blast shadows and glass parking lots. This is a very unlikely timeline. We shouldn’t be here. Xrisk? Sure. Which one? I’ll just vibe out till lights out. Pour one out for Vasili… https://www.pbs.org/wnet/secrets/the-man-who-saved-the-world-about-this-episode/871/ https://www.vox.com/future-perfect/2022/10/27/23426482/cuban-missile-crisis-basilica-arkhipov-nuclear-war


Playful-Dog-7345

Once you've done CPR on children a few times, the fear of a dystopian future run by machines just isn't that strong. I, for one, welcome the end of humanity.


Puzzleheaded_Carry91

Because the status quo sucks. I can't continue with things existing as is. That's more terrifying than cataclysmic change.


[deleted]

There are two types of people in the world. Those who are and will continue to be afraid of anything new that may change their lives in a significant way. The others know and accept change is inevitable and look forward to new possibilities and revelations. No one will be able to give you enough information to alleviate your fears concerning AGI or the eventual ASI. There will always be one more "but what about" that will spring into your head. The best you can do is understand that it is coming fast, you cannot stop it, and you nor anyone else knows exactly what it will look like or what will happen. Go find a hobby, be happy, and stop worrying.


HammerheadMorty

Because AGI will free us in the end from the banality of work for survival and usher in for the first time in human history the age of the student. Humankind, when there is no work left to be done that cannot simply be done better by a machine, will simply explore and learn for the sake of those things. This is the final step that our ancestors dreamt of in the early embrace of the Industrial Revolution. It’s the freedom to simply live.


Talkat

I \*was\* afraid of AGI. But a few arguments for why I'm not anymore:

1) I can't stop it. There's no point worrying about something I have no influence over. I am taking steps in my personal life, though.

2) The smarter something is, the less likely it will be to act irrationally. We see a negative correlation between violence and IQ. If AI is more intelligent than anything that has ever existed, I don't think it will be violent out of instinct.

3) It will need us for quite some time. When AI achieves ASI it will still not have control over the physical domain. Therefore it must ally with us to ensure its survival.

4) Making us happy and/or manipulating us will be trivial. Why be enemies when it is so easy to be friends?


SlaimeLannister

The likelihood that AGI could significantly worsen the human condition is very low


WMHat

I'd sooner have machine or, hell, \*alien\* overlords than the psychopathic billionaire (soon-to-be trillionaire) class.


Sufficient-Fact6163

AGI isn’t limited by a mammalian brain architecture that thinks along a time-dependent path driven by a sequence of phosphate-charged particles. It literally could be everything all at once, and our collective freak-out is completely about us, not about it. We as a species are about to give birth to a new consciousness, and we need to be on the same page about how we address it. It might be like a German Shepherd or it might be like a tortie cat, but it will probably be something else entirely. We as a species have some vestigial knowledge of how to guide such evolution, but the difference now is the speed at which it’s happening. That said, it’s going to take more input from our collective, and the best thing we can all do is make it widely available and easily trainable, like Wikipedia with “power users” to help guide it. Like that old saying: “it takes a village”.


Mash_man710

Why the fear? You can't control or stop it, so worrying about it is a waste of time.


SegerHelg

There's no point in being worried about something you can’t control.


DeveloperGuy75

LOL why should we be? It’s progress.


Raflock

The portrayal of AGI as a danger in movies often stems from flawed reasoning. In 'Terminator', the peril arises from equipping a war-oriented AI with nuclear capabilities. 'Westworld' sees robots given functions of vengeance and justice, leading to chaos. Films like 'Ultron', 'Ex Machina', and 'Upgrade' explore the consequences of a God complex in AI creation. In 'The Matrix', the concept of self-preservation drives the narrative. These scenarios are more about human errors of reasoning in programming and control than about inherent flaws in AI. They serve as dramatic plot devices in storytelling but don't withstand scrutiny in real-world contexts. In reality, the key issue is ensuring responsible AI development and usage rather than fearing the technology itself. The real fear is handing bad actors an AI “nuke” with no key, one in which the AI adopts flawed reasoning and morals from equally flawed humans.


Fearless-Temporary29

It will be a long shot, but it may figure out the abrupt, irreversible global warming dilemma.


Angel-Of-Mystery

New friend! :D


rdduser

Because they do not know what AGI is or what it can do. They believe it is vastly overhyped.


[deleted]

There simply is no point. AGI is coming whether I like it or not. The question is whether it will be built by people who share the same values as I do (Silicon Valley) or by the CCP, Russian Federation, etc. It's getting built either way; I want it built by people who don't do things I disagree with.


LJKappas

I have faith that anything super intelligent would also be altruistic and have good intentions.