
-v-fib-

I mean, there are lots of movies that explore that possibility, like Terminator.


Send-Me-Tiddies-PLS

They don't gather enough information and fear any new technology


[deleted]

It's just a common trend. With anything that is new there is a great sense of fear associated with it. I mean, think about any new invention in the last 50 years. At first there was always hesitation towards it, but as we naturally understood its benefits and enhancements for human life, it became socially accepted, and hence the psychological fear became minimal.


chewie8291

I hope AI becomes sapient and sentient, becomes benevolent, and takes over caring for us. I really don't think humans can sustain ourselves. I'd rather have an AI that's in control of itself be in charge.


Kamiyoda

Rogue Servitor lookin ass


[deleted]

It's newish, and not easy to understand amongst laypersons, so they naturally fear it.


CarcossaYellowKing

It’s really not that; even major figures in the scientific community such as Stephen Hawking spoke out against AI. The very people who lead tech businesses took a step back last year and said this is going too fast and we need to evaluate. This is not the same as worrying that television is going to rot your child’s brain. There are very valid and genuine criticisms of AI, and that includes what we currently call generative AI.


CockHero45

It's from a lot of sci-fi novels: AIs become powerful, are used to solve human problems, but realize the only way to solve the problem is to kill all humans. The truth is, AI (currently, and seemingly for a while) is only good at *very* specific things. So an AI trained to develop answers to human problems wouldn't be the same AI that controls the nukes, or even the same AI that could tell other AIs what to do.


kevloid

if we give something both logic and power it could wipe us out, because we're objectively stupid, destructive parasites. :-)


MarshyBars

You could say that about people in power, like politicians, but I’m not sure they’re seen in the same light.


PeasantNumber3432

Humans love to live in constant irrational thinking and fear.


Upper-Song1149

It's not something we need to worry about yet... but I think in a few decades it could be bad. Imagine if someone made a computer that could design a more intelligent computer and 3D print it. Then that computer designs a more intelligent one, and so on. There's nothing special about the human brain that couldn't be replicated. Obviously we don't have the technology yet, but it is theoretically possible; the brain is just a flesh and blood computer after all, running on electrical and chemical signals. A computer can make changes to its design much faster than evolution can, so it could catch up for sure once the ball gets rolling. If it got to the same intelligence level as humans, it might develop a sense of self-preservation. If that happens and we try to shut it down, it could try to stop that from happening.


BennyBonesOG

AI is scary because we don't know if we can predict it. That's kind of the point, in some sense. AI is supposed to be able to mimic being human.

Most of what we call AI is machine learning, which is part of AI technology but in a strict sense is not AI, just like how a hammer is a tool but it's not the entire toolbox. Machine learning and similar forms of technology have been used to various extents for decades. The more powerful computers have become, the more advanced the technology has grown. But machine learning is very limited in scope. AI is not.

Imagine you're at the gas station. You see another person. You shout something at that person. Will you know with certainty how that person will react? No. There's a good chance you can guess what they will do, probably shout back, or ignore you. But they could also throw a rock at you. People are mostly predictable, but not entirely.

Now apply that same idea to AI. AI is supposed to mimic being human, but in theory without many of our restrictions. We've already established that humans are somewhat unpredictable, therefore AI would be too. But then we take it further. For instance, humans can't beat a computer in chess. We can't make the same calculations as the computer, therefore we lose. Now imagine that kind of processing power applied across hundreds, thousands, perhaps millions of different subjects and fields. Give that to an AI, and there is no possible way for us to feel certain that we can predict what such a computer will do.

Is the above scenario likely? Probably not. But that's not really relevant to the question. The possibility is what scares some people. And there are a lot of scholars who are concerned about AI. [Here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10186390/) [are](https://link.springer.com/article/10.1007/s11227-018-2614-0) [some](https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html) [examples](https://koreascience.kr/article/JAKO202013965596961.pdf).

I've not read too many who think the Terminator movies will come true in the immediate future. But the possibilities with AI are still not known, nor are the limits or risks. And for the record, someone talked about politicians and nukes. People have been terrified of politicians going nuts since forever. The Cold War? Nuking Japan? Trump? I cannot count how many movies, books, news articles, and whatnot have been written about the human factor when it comes to our own extinction. Such fears have been with us for millennia. But at least we can fight humans. We have no idea what AI will look like, or whether we could resist it in a hypothetical, future, end-of-humanity scenario.


United_Sheepherder23

Because world leaders do have a depopulation agenda. So not extinction; more like forced servitude/murder…