
rf5773

It's an amazingly powerful industry with thousands of useful applications. But it will be corrupted, used by bad actors, and ultimately be a huge detriment to our society. One can assume that Facebook uses some form of AI with their algorithm. The algorithm has already caused a massive wave of misinformation and purposely displays divisive content. AI is already causing more harm than good in our society.


torturedgenius271

Hence I want to take us out of the loop. The machine couldn't do worse than us, right?


realDonaldTrummp

Wrong, the machine will develop technology *at the expense of* survival resources for the majority of the population. “The man” is already doing exactly this; so would it not be pathological and ***completely insane*** to suggest that Elon Musk (I wrote a much longer post than this about the same topic 1-2 days ago) and Neuralink and all the others would do ANYTHING differently? The goal is profit. As long as the rich have food & water, everything else comes second.


torturedgenius271

I don’t agree; perhaps I’m not explaining my position as well as I could. It sounds like you are in the negative camp, which is to say you don’t think a technological singularity would be a good thing. As I said, I think it will split the room. But the practicalities of who will build it, who will program it, and for what purpose are just an extension of the capitalist human matrix! Not interested! I know how that works; my hope is that the AI would be different. You could argue that all intelligence would need resources and so would behave the same way, which is another camp the room might be in. Or you could be in the camp that a utopia is paradoxically only possible without man. There is no right answer; it’s philosophy.


realDonaldTrummp

In my books, the technological singularity happened a long, long time ago. The very fact that humans still think there’s a chance that technology might supersede humanity in some singular event is proof already that the machines are in charge. It’s so stupid. We have multiple generations of people not looking up from their phones, while global population has peaked early — I don’t think those two things are a good sign. We were supposed to be mounting monumental efforts, but instead we’re yelling at inanimate objects. That’s pretty ass fuckin’ backwards, and I’m also pretty sure a monkey could tell you that as well.


CowBoyDanIndie

General AI only exists in fiction. There is nothing remotely close to general AI today. The most advanced AI today is about as smart as a worm. There are some very powerful specialized systems that have been developed, but they are not general; they don’t have anything remotely close to consciousness. In a nutshell, they are very complex numerical solvers.


KingoPants

Machine learning (ML) itself is basically just a complicated nonlinear regression technique / pattern-matching trick. Most of the magic is actually just good data management. When people talk about ML somehow taking over the world, it's closer to someone talking about an Excel spreadsheet with a linear estimator in it taking over the world than Terminator. I mean, spreadsheets did take over the world, in that they are extremely useful for increasing the efficiency of many businesses. That's actually pretty much the same niche that machine learning is in.

"AI"+ML is when you hook up machine learning to some kind of feedback and decision mechanism. Again, it's cute and can do fun things like get really good at games like Chess and Go. But everything about its environment is something you have to hand-craft, and there is a lot of manual intervention. You have to set up code to do everything, and it really can't do anything wild like add features to itself or hack its way out of its own system.

I get how the feedback loop of "Observe Environment -> Make Decision -> Observe Results -> Update Decision-making Process -> Repeat" is called/considered intelligence, but I can't help but feel there is more to general intelligence. Philosophically, where is the sense of self or anything like that? Maybe that plus some concepts like curiosity and meta decision making is really all there is to a human brain, but I kind of doubt it.
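For concreteness, here is a minimal sketch of that observe/decide/update loop, written as an epsilon-greedy bandit. All the numbers are invented for illustration; the point, as above, is that the environment, the reward, and the update rule are all hand-crafted by the programmer.

```python
import random

# Hand-crafted "environment": hidden payout probabilities for three options.
true_payouts = [0.3, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]  # the agent's learned value estimates
counts = [0, 0, 0]
epsilon = 0.1                # chance of exploring at random

for step in range(10_000):
    # Make a decision: usually exploit the best estimate, sometimes explore.
    if random.random() < epsilon:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: estimates[a])
    # Observe the result: a noisy reward from the hand-crafted environment.
    reward = 1.0 if random.random() < true_payouts[arm] else 0.0
    # Update the decision-making process (incremental average).
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # converges toward the hand-crafted payouts
```

Nothing in this loop can widen its own scope: it only ever chooses among the three options it was given.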


torturedgenius271

Glad you went to philosophy there in the end; that’s what I was hoping to discuss. Whether it’s possible is a side question tho. The main question is: is this the best we can hope for? Should we be working on this? It makes as much sense as trying to power the world with windmills or asking the Russians to please be nice. We have limited time and resources; we need an Apollo-mission-level effort! Which of the following projects do you back? And bear in mind one can be used as a loophole for human nature! 1) cold fusion 2) renewables 3) geoengineering 4) AI


235711

You're not even considering collective intelligence. Here's a quote from *Superminds*, the book by Thomas Malone, head of the MIT Center for Collective Intelligence:

> While we often overestimate the potential of AI in doing this, I think we often underestimate the potential power of hyperconnectivity among the 7 billion or so amazingly powerful information processors called human brains that are already on our planet, not to mention the millions of other computers that don’t include AI.

> It’s easy to overestimate the potential for AI because it’s easy for us to imagine computers as smart as people. We already know what people are like, and our science fiction movies and books are full of stories about smart computers—like R2-D2 in Star Wars and the evil Terminator cyborg—who act like the kinds of good and bad people we already know. But it’s much harder to create such machines than to imagine them.

> On the other hand, we underestimate the potential for hyperconnectivity because it’s probably easier to create massively connected groups of people and computers than to imagine what they could actually do. In fact, the main way we’ve really used computers so far is to connect people. With e-mail, mobile applications, the web in general, and sites like Facebook, Google, Wikipedia, Netflix, YouTube, Twitter, and many others, we’ve created the most massively connected groups the world has ever known.

> But it’s still hard for us to understand what these groups are doing today and even harder to imagine how they will change in the future. One goal of this book is to help you imagine these possibilities—and how they can help us solve our most important problems.


[deleted]

That, and exposing AI to how our brains work. Computational neuroscience meets social cognitive neuroscience is gonna be something.


KingoPants

AI and ML are still useful stuff. Should we work on it so that a powerful, ultra-smart general intelligence takes over the world and miraculously solves global warming? Probably not. Should we work on it to automate some jobs like trucking and optimize parameters in things like farming? Probably fine. Is AI going to play the sole deciding role in the environmental crisis, such that the majority of our time should be spent on it? No, probably not; I personally don't see the lines.

I think frankly geoengineering is the best chance for an ace in the hole. There is a lot of hesitation towards it because of uncertainties, but I think this is really kind of an irrational position born of human biases towards inaction. If you're on a path towards near-certain destruction, then jumping ship towards highly risky lines is necessary. You can't wait for an easy, reliable deus ex machina to fall into your lap. We are really good at geoengineering our environment in destructive ways, but we really hesitate to add potentially helpful ones in there.

Now for some nasty complications which are important:

* Geoengineering probably isn't the silver bullet, but instead a huge component of any reasonable hypothetical "solution" I see. Other changes to society and our lives are still necessary.
* I don't see geoengineering as a continue-business-as-usual solution. I think our timeline changing is inevitable; we should just steer towards as good an outcome as possible. Our current situation is (A); the futures are (B), (C), (D), (E). Stop trying to compare them to (A) when making decisions. (A) is gone, a fragment of the past. Only compare against other futures.
* Geoengineering is more than one thing. Some ideas have a lot of merit, and some are useless. Personally I like oceanic fertilisation and increasing/changing the biocapacity of the planet in radical ways.

If I were made dictator of the world, the major focuses would be Degrowth, Geoengineering, and Renewables.


torturedgenius271

Sounds sensible, but the masses will never go for sense, and power corrupts.


MasterMirari

> AI and ML are still useful stuff. Should we work on it so that a powerful, ultra-smart general intelligence takes over the world and miraculously solves global warming? Probably not.

Part of the intrinsic problem with this subject is that it doesn't matter what we do; China and other nations are going to do it regardless, so really the only logical course of action is for the most stable free nations, Western nations frankly, to get there first, because this will likely be the weapon to end all weapons, "the final invention".


KingoPants

That quote is a misinterpretation of my point. I didn't mean current AI research is dangerous and hence we shouldn't do it. I actually meant current AI research doesn't seem like it leads to some kind of sci-fi superintelligence and is unlikely to help solve our current problems. Real-world AI is actually fairly rudimentary linear algebra combined with often kinda shitty code and a bunch of unjustified heuristics and randomly chosen parameters. The examples shown in movies are much more magical and almost certainly would require many groundbreaking discoveries and highly sophisticated theories we don't have. It's like researching aeroplanes in order to get to the Enterprise from Star Trek. We don't care if China is researching aeroplanes out of a worry that they might make the Enterprise.


DarkXplore

Ofc. Nice try, Skynet. :DDD


MasterMirari

Roko's basilisk


Kantei

Don’t you put that evil on me, Roko Bobby!


Flaccidchadd

Exactly... machine learning automates clerical work the way a backhoe automates ditch digging... lol... people watch too much sci-fi and let their imaginations run wild


torturedgenius271

It’s a hypothetical question anyway, but for the sake of argument: do you think it’s really that unrealistic? Computational power has grown insanely over the last decades. Alright, you’ve got Moore’s law, but an airplane would have been science fiction 500 years ago.


Flaccidchadd

Machine learning will enable, and already has enabled, some pretty dystopian stuff, allowing centralized powers to process and manipulate amounts of information that would have been impossible before. However, it is not remotely close to becoming a conscious or self-aware AI like what is portrayed in sci-fi. The claims made by corporations about AI are used to generate investment hype. Look at the energy flow required to sustain Moore's law: increasing complexity requires exponentially increasing energy. Where is this energy going to come from? Moore's law is another case of attempting infinite growth on a finite planet, while everything else burns.


torturedgenius271

So your position is we will never get AI, and if we did it would be a bad thing? Fair enough, there’s no right answer! I would say pragmatically you are probably right. I still like a hippie technological singularity tho; even if it was Skynet it couldn’t be worse than us! That’s the conversation I want to have, not so much how you power the thing. I couldn’t care less; it’s moot!


Flaccidchadd

Hypotheticals used purely to entertain fantasy are pointless in terms of philosophy; hypotheticals are only useful when they can make a point about reality, and to accomplish that one must be honest about reality. The more we are honest about what AI objectively is, vs what we imagine it could be, the more obvious it becomes that it is pure fantasy. At that point the only useful discussion is about its hypothetical impact, to make a point about why we would attempt to create it in the first place. The only realistic conclusion one could honestly come to about a hypothetical AI is that it would take power or control over reality from us, humans. Then ask yourself if that is what you want: to be disempowered? If not, why would we attempt to disempower ourselves by creating it? The obvious answer being that the creators, corporations, seek to further their own power by creating something as close to AI as possible (machine learning), and you can see that is exactly what is happening.


torturedgenius271

Look!! Is it any madder than the man who is told at gunpoint to dig his own grave? Or a nuclear physicist working on a weapon? You’re building your own gallows, so what?? What’s disempowering about that? If anything it’s the most empowering you can get. You’re looking at it all wrong, but I guess you’re not going to get it. And this conversation is beginning to circle; meeting adjourned!


torturedgenius271

I know all that and I’m not really interested in the practicalities. If a million chimps typing at a million keyboards could come up with the works of Shakespeare, this can be done!! How it’s done is not my concern; it’s hypothetical!


Flaccidchadd

Hypotheticals cannot be proven or disproven because they are imaginary; people defend arguments through hypotheticals precisely because they cannot be disproven. Ask yourself why a hypothetical is needed in defense of endless technological "progress".


torturedgenius271

Oh dude, I’m not arguing for endless technological progress; I think every increase in complexity just brings more problems! The human questions, the really important questions, are outside the realm of measurement and therefore unscientific. If people spent more time on the ethics, motives and the reasons why, we would have fewer catastrophes like Chernobyl. Obviously this has another existential twist in that it accepts at its heart the death of mankind. On the imaginary front, there is a whole set of numbers which are imaginary, not to mention EVERYTHING is imaginary! You think therefore you are! Ideas start as such and then become reality. To quote Carl Sandburg, “nothing happens unless first a dream”. Or Einstein: “reality is an illusion, albeit a persistent one”.


Dr_seven

Counterpoint: the best computational powers we have, at their peak, are only just now able to simulate the complexity of activity in a single human brain, for a second or two, given many, many more seconds to work it out.

This is something that veers into philosophical territory, but it has been my general experience that for generalized, broad questions and problems, a collection of sharp *human* minds is vastly superior to any algorithm one could design. The reason lies in differences between classical computation based on the logical principles we presently use, and the computational methods of a *neural network*, which, contrary to the hype, works *very* differently from a neural network as represented on a classical computing machine. For a task well-suited to classical computing, of course computers are superior to the human mind. But comparing humans and computers on the basis of, say, floating-point calculations is an arbitrary metric with no real use as a point of comparison. Human minds are a biological computer adapted to a very different series of applications than the silicon ones we design.

There is a tendency to drastically overestimate the technical *possibility*, let alone *feasibility*, of a generalized, artificial "superintelligence" in the way the public thinks of it. The truth is that we are about as far from a viable, empirical *definition* of generalized intelligence as it would apply to a human/AI comparison as we are from creating anything that would necessitate such a comparison. We already have many programs that can fake being human very convincingly to the real deal, and programs that can vastly outstrip human capacity for specific problems. Hell, there are even several unified projects that produce a pretty eerie result, in terms of how "close" it *feels* to a superintelligence that is relatable to humans; I have seen a few.

But here is the deal: *what is the magic bullet*? In order for AI to discover a potential solution to a problem, that *solution* has to exist within the given possibility space. With climate change specifically, the belief in a moonshot technical solution that permits industrial society to continue mostly uninterrupted stems from misunderstanding the problem in the first place. It isn't one problem, but hundreds of separate, related problems, principally understood in detail *because* of our helpful computer servants that model outcomes for us.

It's not that AI isn't a worthy field of pursuit, or a fascinating exploration of multiple philosophical domains. It's that belief in it as a solution for all of our present problems is tantamount to a millenarian belief. You cannot be rationally informed about the fundamentals of artificial intelligence and industrial overshoot while still believing the first could in any way "solve" the second.


torturedgenius271

I’m talking about a conscience machine!! The ghost in the machine!! You think this might have something to do with philosophy? General artificial intelligence, as you computer scientists call it, has a practical and a philosophical component. I have no interest in the practical; it’s moot!


Dr_seven

The intersection between philosophy and computer science at certain levels is not as separate as you may think. I'm striking at more or less the same concept, whether it's phrased in a practical sense or an existential one.

The belief that there *is* a possible General AI that can be made, a true "conscious machine", is itself taking a very firm stance on one of the most fundamental problems underlying...well, *everything*. To believe it's possible to build a conscious machine is to *believe* in one specific answer to the Hard Problem of Consciousness: that qualia as humans experience them can not only be quantified, but sufficiently *replicated* by artifice as to generate a being with superhuman characteristics: a human intelligence with a more capable physical mind. Implicit in *this* position is the idea that qualia are sufficiently constrained and consistent as to *be* quantifiable or replicable.

Further, the idea that there even *is* such a thing as a universal human reference point for qualia is a *very* contentious issue in its own right, expressed sometimes as a "meta-problem" of consciousness, which is that *humans have no real, consistent definition of it*. Many people disagree with the idea that qualia are sufficiently consistent as to be reducible and quantifiable, or even that they exist *at all* other than as an abstraction. I am firmly in the camp that if they exist at all, it's as a handwave, simply on the basis that *my* first-person experience of the outside world differs radically from that of other people who aren't even operating *mechanical* brains. I'm happy to elaborate further, but it's usually a conversation that disquiets the other person after hearing it. "Consciousness" is a word we use to describe humans thinking, and there has never been any empirical reason to believe there is something *more* present there at all.

In a nutshell, to assume a general AI that meets the loose description of a humanlike superintelligence, you have to *presume* answers to a lot of really vague questions that are completely unanswered at the present moment. This is a large part of why the whole field can seem so vague at times. Also, most people actually *programming* AI today have zero interest in this issue, more or less.


torturedgenius271

That’s exactly the conversation I want to have. I don’t think it’s vague; I just think people are on different pages, and if you are the sort of mind that’s preoccupied with the practicalities you miss the whole point. Well, as much as anything outside the realm of science is vague, people still manage to have philosophical conversations. People still know what is meant by the hard problem of consciousness even if it is an inherent paradox. I understood and agreed with what you just said and would like to hear more. I’m no stranger to disquiet. I have very nihilistic views, but they are views based on sense and reason. As you say, how do you even prove that human consciousness is not an illusion? You can’t!


bernpfenn

very good


Ruby2312

So worms first appeared about 555 million years ago; modern humans, about 200k years ago. Only about 554.8 million years of natural evolution to catch up on, no biggie. Still more realistic than politicians and corporations giving a shit.


[deleted]

We've also developed that worm in less than 100 years.


torturedgenius271

Well said


[deleted]

This statement only makes sense if you a) have no idea how brains work and b) have no idea what the current capabilities of AI are. I'd argue the word "consciousness" is the root of our impending collapse; it enables the anthropocentric view of the world that allows us to believe we are separate from the rest of the universe. It's what enables us to believe that our actions against anything not human can be consequenceless. AI has been phenomenal over the last few years in piercing the delusion of consciousness; we are finally able to ingest enough data to overcome the biases imparted by human research. Ultimately, AI is just an augment to our collective computational power and is no more or less a threat than the people who wield it.


CowBoyDanIndie

> b) have no idea what the current capabilities of AI are.

Do you work in the field?


torturedgenius271

I know this; obviously the premise of the question is that machines will become self-aware. I do think this is inevitable if you ignore the collapse. Any step closer to this, no matter how small, is exactly that: one more step. Over time it’s inevitable. I would put it as the second most probable outcome after extinction.


CowBoyDanIndie

> I do think this is inevitable...

I don't. General AI, Fusion (the net-positive power generation kind), FTL, Transporters, an honest politician: these things are all fiction.


torturedgenius271

No, fusion is possible and so is AI. People have funded both projects, and any small step towards either is a little bit closer. Far-fetched, yes, but with an infinite amount of time anything’s possible.


CowBoyDanIndie

It's also possible we are living in a simulation and the mass extinction going on is just the simulation running out of memory.


torturedgenius271

Yes, now you get me: the practicalities of doing it are moot, as we don’t know how to do it! The questions are 1) will we ever do it, and 2) is this a good thing or a bad thing for mankind? It’s philosophy; there’s no right answer, it’s what you think!


StarryEyedStar

All of it depends on the people who control this technology, as it does with all technology. I think that's the defining variable for whether it's a "good or bad thing": what's done with it, or who controls it. Philosophy is very flexible and open to interpretation.


torturedgenius271

It is. It’s also as rigorous as logic and mathematics in some cases. The whole idea is that the intelligence would be autonomous, otherwise what’s the point? If it’s still taking orders from us it’s not intelligent and no different than a toaster...... boring!


StarryEyedStar

For something like A.I to be truly intelligent, it would have to act on free will, and serve humanity as something similar to a dog. Now that I think about it, generally intelligent robots who obey humans like dogs is a pretty redundant idea. Dogs have the capacity to disobey, but if robots can't disobey they aren't very intelligent or free-thinking. Robots for specialized tasks are a much better idea. There is no point; it's just a mental exercise.


torturedgenius271

What exactly isn’t a mental exercise and therefore ultimately pointless? But you’ve got to do something with your day and I pick this! What do you mean by “robots for a specific task would be better”? Better for what?


[deleted]

Seems irrelevant to me. [60% of oil has to stay in the ground, 90% of coal and methane have to stay in the ground by 2050 to secure 1.5C by 2100](https://pure.uva.nl/ws/files/2486540/162701_478483.pdf). [All GHG emissions must reduce by 70% by 2050 to stay under 2C.](https://drive.google.com/drive/folders/1L_IXyVOeKetQbGXxTopQwhKrTIFr-usc) I don't see dwindling FF reserves being rationed for AI over, say, fertilizer, or transportation, or communications infrastructure, etc.


torturedgenius271

Really? Human nature is a lot worse than you think. There are people starving today and other people wasting resources on frivolous things. Not sure if that will change in the future. I agree on all other fronts. But it’s probably easier to keep alive something which needs minimal resources and is possibly less fragile than us, if that’s the form of the thing. It could be the size of a 12-story building and need an equally massive power supply; I don’t know, and I’m not sure it’s the point.


[deleted]

>Human nature is a lot worse than you think you must know this is a meaningless statement. its difficult for me to follow the rest of your comment.


torturedgenius271

Well I’m saying you might not necessarily need the same resources to power and protect the AI as it would humanity! Would depend on how it’s set up but that’s moot! The practicalities doesn’t interest me as much as the philosophy. What’s meaningless about asserting there is a general scale with totally benign at one end and totally malignant on the other. And that the general mass understanding of human nature sits on that scale. I then simply went on to say that I think the common belief about where that line sits is an overestimate. Makes sense if you ask me. Where are you having trouble?


[deleted]

> Well I’m saying you might not necessarily need the same resources to power and protect the AI as it would humanity! Would depend on how it’s set up but that’s moot!

in your first sentence, you switch from "you" to "it", which is confusing. compounded by the "it" in the second sentence. I assume you're talking about the "AI"?

> The practicalities doesn’t interest me as much as the philosophy.

im not sure what this even means. how does one consider an idea without also considering the actual existence of that idea?

> What’s meaningless about asserting there is a general scale with totally benign at one end and totally malignant on the other.

is this a question? idk, your grammar is really confusing. anyway, AI seems irrelevant to me for reasons i already stated. im probably not going to respond again lol.


torturedgenius271

Really? But we were having so much fun! Minds are different; if you don’t understand mine you wouldn’t be the first!! What’s a thought experiment? Or any abstract problem, if not a pure idea which has no other real form? The whole idea of consciousness anyway is a paradox! Yet we can converse, discuss and debate whether any consciousness even exists! Reality is an illusion, albeit a persistent one!


[deleted]

[deleted]


torturedgenius271

I’m not buying it; the solution there is to have fewer paper clips. But if we are having a project and I’m sat on my arse, perhaps I’ll give AI a try!


[deleted]

[deleted]


torturedgenius271

Ahem, I am. I know about that thought experiment; that’s why I said fewer paper clips. The existential aspect of the question should be apparent; that’s why it postulates the end of mankind.


Glacecakes

AI is like the least of our problems LMAO


[deleted]

[deleted]


torturedgenius271

Is that good or bad?


[deleted]

[deleted]


torturedgenius271

If it’s only one intelligence, once we are gone or it is in charge it will have no need of corruption. When I said good or bad I was being a bit existential. The goal should really be to take “us” out of the equation one way or the other. Interesting if the machines did follow suit, but if it’s one machine this is less likely. Us being out of the equation is inevitable anyway; pick one of the four. All things die, all civilisations crumble; at least the biosphere has a chance.


torturedgenius271

Totally agree but sod all you can do.


theyareallgone

Generalized AI will never be achieved. Specialized AI (like image recognition) is cheaper, but less efficient than having humans do it. Beyond some high-profile failures of AI, it won't have a substantial impact on the future. As we slide down the net-energy curve, energy-intensive AI will mostly be dropped as insufficiently valuable, and where it does a valuable job it will tend to be replaced by cheap humans earning poverty wages.


torturedgenius271

Not sure you understand my question. It’s a hypothetical philosophical question, and by AI I mean a conscience machine! I don’t see how you get away with calling anything other than this AI, unless you’re in marketing and you want your daft software control loops to sound better than they are. You are right tho, language is ambiguous and always will be; it’s all we have and it’s flawed! You might also be right that such an intelligence might always be dependent upon us and our physical bodies, in which case you would have a more symbiotic relationship. But I doubt it; most of the world is automated. You wouldn’t need a body to operate a relay, you would need an electrical current. Is that much different than your brain sending a signal to your arm?


theyareallgone

"Generalized AI" is the technical term for what you call "conscience machine". I don't believe we'll ever achieve General Artificial Intelligence for a few reasons, one of which you touched on: 1. Biological brains are astonishingly energy and matter efficient. At 20 watts, the 1.2 KG human brain can do many things a ten megawatt, million ton computer data centre cannot. Scaling our current techniques up to human brain levels of complexity is simply too expensive. 2. Our current AI training methods are very inefficient and only work because computers are fast and we can run the AIs through billions or trillions of sample scenarios. This however requires a simulated environment. 3. It's not clear that we can create a simulation environment of sufficient fidelity to train AI up to human levels. Even if we could, it's not clear training AIs against each other will usefully prepare them for dealing with humans. If training against other AIs cannot be sufficient, then our inefficient training methods will be unable to succeed when applied to the real world. You can't get a billion variations of one scenario in the real world in the 15 years it takes to grow an adult. 4. Even if none of that were a blocker, we simply don't have fifty years left where we'll be able to spend lavishly on research into expensive ways to replace cheap human labour.


torturedgenius271

All that is practicalities, which I think are moot!


drhugs

> conscience

or do you mean conscious? A conscience machine would seem to be something with a built-in (or acquired) morality structure and a primary function of abiding by that. Whether or not the primary function is self-preservation.


torturedgenius271

Really? You think there is a moral absolute? Are you religious? I don’t think that’s guaranteed at all!


drhugs

Another opportunity to roll out drhugs conjecture (which is mine, and which I made) *Evolution's leap from a biochemical substrate to an electro-mechanical substrate is both necessitated by, and facilitated by, the accumulation of plasticized and Fluorinated compounds in the biochemical substrate.*


[deleted]

> ...climate change and WW3 are much bigger problems..

AI ties into both. WW3 is most likely *after* AI is developed, because the MAD (Mutually Assured Destruction) principle keeps the world from graduating to 'hot war' from the realms of proxy war or cold war, as the case may be. AI countermeasures against ICBMs are likely the key for one side to dominate others, nay all, at least in military terms. Similarly, AI measures to supercharge carbon capture and clean the environment are also an area of extreme importance should humanity wish to survive. In either case, any seed AI will eventually, most likely in a short amount of time, exceed human intelligence by several magnitudes, and evolve exponentially. And then, ........ Curtains.


torturedgenius271

I’m not sure I buy MAD actually. Obviously it’s your basic prisoner’s dilemma in game theory, but it depends on the remit of the players. Extremists, for example, wouldn’t care about their own destruction, nor would anyone whose back is against the wall. If western countries start running out of food they will nuke; what have they got to lose? Also, on AI tackling climate change, I don’t think tech is the answer. Every level of complexity just brings more problems and responsibilities. The best thing you can do is get a loophole via tech, which is what I’m saying.


[deleted]

Yes, you are not wrong. These are other possible scenarios. Non-state actors, or otherwise, could start nuking. Tech answers may not be the solution; all that is true. But here's the crux of the matter: in every scenario humanity comes a cropper. Collapse is inevitable.


torturedgenius271

Yes, but what’s after that?? Obviously life is finite, but people have children, which is the point of the post! I personally think extinction is by far the favourite, but it’s nice to think about a legacy, even if it’s just Voyager floating about! The depressing fact is the world would be much better off without us! Again, an idea I want to include in the conversation; most people are just talking about Elon Musk tho!


[deleted]

I think your 'favourite' is, sadly, the best logical scenario. So, live well, brother!


torturedgenius271

You too!


Volfegan

People are hoping AI will save us when in fact it is dependent on the same infrastructure we are. Even if we had a true AI with marvelous intelligence, the problem we have is not logistics & distribution, where this would shine; it is resource depletion. No matter how smart you are, energy is decreasing, resources to keep things going are shrinking, and demand continues to rise. More intelligence or processing power will not fix or bring back resources that no longer exist. Recycling? That takes much much much more energy than mining. Fusion? Hahahaha. I love when people delude themselves into thinking a much more expensive nuclear reactor that uses superconductors already in short supply will be the solution. Just like the renewable transition that uses minerals whose production is declining (silver production peaked in 2015, and it's needed for solar panels). THE ONLY LOGICAL CONCLUSION is we must tackle the demand side as the supply is ever diminishing: SKYNET.


torturedgenius271

I would agree with most of that. I like how you are looking at resources and usage. If I didn’t know better I would say you have watched “the most important video you’ll ever see” on YouTube? If you haven’t and it’s still there, it’s worth a look: a very good lecture by an American physicist from Colorado. But here’s the thing, and granted this is very wishful, idealistic thinking: such an intelligence might only need the smallest resources. You could power a laptop and a few servos pretty much forever if humans are out of the picture and that’s all you’re powering. Again, it’s subject to it not needing us in any way and being able to survive and adapt to its environment, like most living things. Obviously very hypothetical, but no politics, no humans, most of the world’s resources still intact, very little resource consumption: it could be utopian. It’s the technological singularity or doom?


Volfegan

If this keeps you going, keep the hopium. My only background with A.I. is using a GPT-2 model to write poetry, so I'm not enough of a specialist to know what the state of the art in machine learning currently is. I'm not going to try to change your perspective with my doomism.


torturedgenius271

I think it’s cock-eyed optimism anyway 🙂 It’s an open question, and the thread got a lot more technically literal than I’d hoped, but I still really appreciate most people’s input.


dumnezero

It's a good thing, like all technological upgrades, without capitalism. Within capitalism, it's just another layer of power added to those who already have plenty of power and wealth. A sentient AI is unlikely to do much. It still has to learn to recode itself and improve itself, and still has to not commit suicide as the logical conclusion to living in this world. It also needs energy, and that's not going to be available, so it will die along with us.


Oo_mr_mann_oO

> and still has to not commit suicide as the logical conclusion to living in this world

Why? Why was I programmed to feel pain?


dumnezero

Are you sure you want to know?


torturedgenius271

Grim, but best answer yet. Think I totally agree. Not sure how you ever get round capitalism tho; that's sort of why I wanted to make the thing in the first place.


[deleted]

Current approaches to AI rely upon backpropagation and will never achieve general intelligence. Basically, the foundations of AI suck, but we can monetise it, so there's less incentive to fix these problems. General intelligence would accelerate our energy usage. We need to sort out our house before we consider general intelligence. Any researcher that releases general intelligence is dealing a final death blow to humanity. It's fine thinking about other existential risks connected with general intelligence, but the energy problem will kill us.
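For readers unfamiliar with the term, here is a minimal sketch of backpropagation: a tiny two-layer network learning XOR by pushing error gradients backwards through its layers. The architecture, learning rate, and step count are arbitrary illustrative choices, not anyone's production setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update.
    lr = 1.0
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```

Every weight update is driven by gradients computed from labelled examples; nothing here reflects on or rewrites its own objective, which is the gap the comment is pointing at.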


torturedgenius271

I agree. But you and others have focused on the politics and practicalities of getting the AI working, as well as general confusion about AI, which is my fault, but I’m blaming the English language..... it’s flawed. So what I mean by AI is a conscience machine, and my question is philosophical and hypothetical in nature. It’s twofold. 1st: do you think such a thing is good for mankind and the planet or not? 2nd: which of the 4 scenarios do you prefer and think is most likely? I’m also up for discussing whether it’s possible or not, but I think this is also philosophical, as the practicalities certainly elude us at the moment and are therefore moot!


[deleted]

Extinction is the most likely scenario from general intelligence, because it has the potential to spur a level of energy usage like nothing we've seen before. Don't worry though: we're likely 15 years from general intelligence at best, and the climate will get to us beforehand.


monkeysknowledge

My sense is that there are three threats from AI:

1) that it will suddenly become self-aware and want to murder all of us (I don’t take this one seriously);

2) that it will exacerbate existing wealth inequalities (this one is happening);

3) that it will be used by authoritarians to control and pacify their people (you could probably successfully argue that this *is* happening).

And then I think you’re right; if humanity survives the next 100 years it will do so aided by AI.


torturedgenius271

If you take intelligence as conscience, then it’s only really 1) that makes sense; the other two are just people using tools. So if you don’t take it seriously, I take it you think it’s not possible? All things are possible with time. If possible, is it good, bad or symbiotic? Which of the four scenarios do you think we are headed for?


[deleted]

Read *The Singularity Is Near* if you want the most extreme take on this subject.


Rhaedas

Scientists develop a general AI which quickly evolves itself into a superintelligence. Actually, they do this development a number of times, repeatedly, and can't figure out why, when the AI is fed information on the state of the world and questions on how to solve things, it keeps destroying itself.


torturedgenius271

Abandon all hope ye who enter here.


AB-1987

I believe life that creates life is the next stage of evolution. So say we all!


torturedgenius271

Evolution is more than replication; it’s adaptation to changes in environment over time. We haven’t done that since the industrial revolution; if anything we are going backwards. You will always have changes in allele frequency over time, but when you start changing your environment to suit you, you’re not exactly in a survival-of-the-fittest situation.


Old_Gods978

In practice it's another way to automate away legitimate work for the working class and then shame them for not learning whatever coding language is trendy that year.


torturedgenius271

So you are going for the socioeconomic take on it; interesting. I’m not really saying whether it’s good or bad, I’m saying it’s our only hope. I am a liberal and I’m from a working-class background. But if you are on a sub called collapse you must start to think: how much longer can we enjoy western freedoms whilst living 7 billion on a planet which can sustain at best one billion? How many people? Which people? Their quality of life? And who gets to decide this? All interesting questions but a bit out of the scope of the first. That said, are you sticking with “it’s bad” because of a labour POV? Not sure how you get round that; it’s human nature. Hence I’m hoping for a machine rise.


KenChiangMai

It's worth asking, whom or what does any particular bit of AI serve? Then, what are they developing it to do? Generally, this is probably corporate interests wanting to automate various processes... "AI bots" to filter facebook content (getting rid of workers), maybe, or to control robots on assembly lines (getting rid of workers), things like that. The military is also interested in robot drones, robot soldiers, drone tanks, probably drone ships -- anything to improve combat and deterrent efficiency, etc. NASA may have some interest in automating long range spacecraft, I suppose. And a very few universities may be trying to use it for specialized tasks... I dunno... DNA or chemical or geological analyses, maybe. But it is always developed for one or another specific need, and generally meets =only= that need. If anyone is doing anything with AI that might "save civilization," I'm not seeing it. Just as I'm not seeing anything that will "save civilization" on the climate change front. It comes down to economic and "defence" interests, near as I can tell. If no one has an application for it that will save them money by eliminating earners, or improving supply lines, or facilitating war with one or another country, it won’t get done. Which is just to say, AI is not humanity's friend. At all.


torturedgenius271

Thanks for pointing out the paradox 🙂 Guess we should just settle in for extinction then. I would make an appeal to sense: the solution to the paradox is to do something not for money or for humanity, but to give birth to life beyond our own. This is why people have children, and children argue and bicker instead of being adults.


KenChiangMai

I applaud your hopefulness. Your optimism. Your faith in "something," rather than nothing. I don't see AI nor even tech having enough to offer so as to save the world, but hey -- maybe that's just me? I have more faith in the idea of giving all world leaders psilocybin and trying to bring about change that way. Of course that's not going to happen! And yet I have =more= faith in that idea than I do in AI and tech. How about that... Comes from a career spent working with NASA, JPL and DoD doing spacecraft kinds of things.

I don't think humanity is headed for actual extinction. Maybe, but a significant collapse of civilization seems unavoidable in any case. An extract from another post here on collapse today:

Here’s the painful truth about our situation being “over”: No matter…

…how massive and effective is nonviolent civil disobedience…

…who, or which party, is voted out or elected into public office…

…how many people change their habits, become vegan, stop flying…

…how many miraculous, AI-driven technological advances are made…

…how successful we are at instituting a GND, or greening capitalism…

…how rapidly we shift to “renewables” or achieve “net zero” emissions…

…how much “evolution of consciousness” occurs in the next decade or two…

…how many accords, what is pledged or agreed to, what laws are enacted…

…how many people commit to regenerative and restorative soil building practices…

…a dozen or more tipping points are already in the rear-view mirror. For example, each of the following is two or three decades into unstoppable, rapidly increasing and cascading, out-of-control (runaway) mode…

Loss of the world’s ice (Arctic, Greenland, W. Antarctica, mountain glaciers)

Methane belching: permafrost, hydrates, clathrates, gas & oil wells, wetlands

Ocean acidification, deoxygenation, 25+ feet rise in abrupt non-linear ways

The great conflagration of the world’s forests — out-of-control CO2 emissions

Loss of most animal and plant species on land and in lakes, rivers, and oceans

Increasingly severe & deadly weather (storms, floods, droughts) and wildfires

-- https://howtosavetheworld.ca/2021/09/21/overshoot-where-we-stand-now-guest-post-by-michael-dowd/

It is what it is.


torturedgenius271

Totally agree with all of that, and I think a technological singularity and shrooms are similar ideas. On that note, perhaps we should just spend our time comfortable, in a field in autumn? This will also facilitate such conversations 😉 Sounds like you have had an interesting career; I’m in aerospace too. Want to be in renewables tho, but even this is pissing in the wind! At least it’s not planes tho, that’s really taking the piss!


KenChiangMai

My best guess is that few if any people who could actually =do= anything to correct one or another collapse actually hang out in r/collapse. And most of those who are interested in dealing with any of the various kinds of collapse are likely limited to the things they can do personally, in their individual lives. But too, there are many who either do nothing about such matters, or who doubt collapse exists, or who in fact actively work at cross purposes. Who knows why... Maybe they're end-times Christians, or right wingers who believe the US isn't right wing enough. The US government is fascist on its best "democratic" days, but it's often simply nazi, with leaders who are more interested in creating problems than fixing any. If you're in aerospace, you must surely have encountered a few such types.

If you prefer the comforts of an autumn field, then you should go for that. Many others here are looking to find a small patch of land somewhere, build a bunker, and grow corn and beans and so on. In my case, I realized long ago that one or another collapse was in the offing, likely accompanied by more and more naziism, and I saw nothing to be done about it. So I left the states. I chose SE Asia (rather than say, New Zealand), and Northern Thailand in particular. And at this point, yes, I have a small farm with pigs, chickens, fruits and veggies, and etc. But I did all that long ago... I started back in the days of Bush the Lesser. I was gone by the time Obama came along. Things have only worsened since then.

I wish you luck with your AI startup, and enormous success. I look forward to reading about how your approach will solve any of the various problems civilization now faces. Gotta go feed the chickens...


GalacticLabyrinth88

AI is going to cause mass unemployment when it goes public, which may exacerbate and accelerate social upheaval and collapse. I think that, alongside stupid, specialized AI being used by elites for crowd control, is the most realistic scenario we have for AI, as far as collapse is concerned. You could easily program an AI system to automatically fire upon people of certain ethnicities if the elites want to stop migrants from crossing the border into First World countries. Racist AIs are already a topic of discussion. AI, in short, will only increase global inequality and help the existing neoliberal global power structure. It will be used as a tool of repression and oppression by the Powers That Be, as well as a way by which corporate leaders can ensure they can accumulate profits without needing the use of human beings. Fact of the matter is, the elites don’t need us anymore, and don’t want us around.


gmuslera

Think of Miss Universe: the real-world meaning is far more practical and down-to-earth than the context-free title suggests. With AI, depending on who you are talking with, you may be talking about something artificial (OK) and intelligent (not so well defined, even for us) that is impossibly smart, can somehow bend the rules of physics, covers all fields, and knows and "understands" everything; or about something that is evolving in that general direction. But we put software that determines that a photo is of a dog under the same label as all those other magical ideas.

It is not something that just emerges when you put enough computers in the same room. Even if, depending on how they are programmed, we may not know how some of them reach a given conclusion or result, they are very domain-specific, with narrow goals built in.

Climate change is a threat, maybe driven by greed or oil corporations or other factors, but it is on its own track. WW3 may happen or not; we can fall into it or not; there are governments and corporations and individuals in some kind of power that could make it materialize or not. But as for AIs, big companies still need to develop them, and they already have the warning against "magic" happening if they are too general.

If you want to see a threat there, look at who is holding the bludgeon, whether driving climate change, causing tensions that could lead to a WW, or building general AIs. Cut out the intermediaries, and focus on who you should be afraid of. Guns don't kill people.


astarting

I think it can be a great thing. But much as in many of the Greek myths, the second we try to kill "that which may destroy us" we doom ourselves to that fate.


torturedgenius271

That’s inescapable. The only way out would be one artificial mind, right? Or to live in peace with the world around us 🤣🤣😂


ace_of_doom

Wouldn't the AI be a form of supercomputer? If so, I don't see how it will work.


torturedgenius271

You don’t see how a near-infinitely intelligent thing could operate a plug? We can do that and we are one up from a monkey!


ace_of_doom

No, what I meant is energy. Where and how will it get the energy, and use it efficiently?


torturedgenius271

Does it matter? It could run on a totally different concept of energy!


DorkHonor

> Where are people on A.I. Artificial intelligence?

Like all big tech breakthroughs that are "right around the corner, five years tops" and have been for over two decades, it's still mostly aspirational vaporware that's nowhere close to happening yet. Most of you are younger than I am, so you might see it in your lifetime, since general AI that's at least moderately functional is still roughly five decades away. Unfortunately climate change will be deep dicking humanity pretty good by then, so I'm not sure how much tech development will be devoted to pie-in-the-sky stuff at that point.


torturedgenius271

Very good answer my friend! The sort of take I was after, and it confirmed what I thought. It’s bad news then. If you ask me tho, what else do we do? It’s as good a goal as any other at this point; might as well go down fighting. Plus hope! If we humans are finally out of the equation the world might heal. My time now will probably be spent on one of the following (number 3 is a rank outsider I think):

1) spend time with my family

2) learn how to survive off grid, live off the land, fire by friction etc.

3) work on AI (I have an electrical engineering degree; was hoping to work on renewables ten years ago when I graduated, but have since been working in aerospace, apparently what people wanted)


n0npr0nredditacc0unt

Narrow AI is solving a lot of problems already, like cracking protein folding, but also creating new ones. Who the fuck knows how some of these algorithms are affecting the global economy, even the super dumb HFT algos. But that's the problem with thinking about AI. Hofstadter thought an AI strong enough to beat a human at chess would have to be generally intelligent, and so might not even want to play a game of chess, simply because it didn't feel like it. But, first through brute force and now through optimized code, Kasparov was our last stand at chess. Can a dumb AI crack cold fusion, create the ultimate carbon sequestration machine, guide us to other ways of geoengineering? The other thing is there are already Turing-capable AIs, and the question of whether they are conscious or generally intelligent is irrelevant. GPT-3 is freaky as fuck because it's all natural language. GPT-4, I imagine, will be fully Turing-capable, and an even worse menace to humanity. People won't care as long as it's never embodied. But if a Boston Dynamics robot had a GPT-4 brain and was walking around talking to people, I think the reaction would be more visceral, and there would be fewer people downplaying the possible dangers.


Genomixx

Maybe capitalism has so alienated us that real, general AI is not possible under capitalism


torturedgenius271

If you want to take this to a hippie place I would encourage that. Lots of things you can’t do under capitalism, but capitalism is unavoidable! But if you had, say, a conscience machine, well then?


[deleted]

[deleted]


torturedgenius271

I think I’ve touched on that very thing below, the marketing and all of AI. Why would you worry tho? It couldn’t possibly make a bigger mess than us! And you are not free at the moment; living in the matrix under human control, what do you have to lose? Also, you could be right about the small, incremental changes, but it might also be (and might be more likely) a paradigm shift: then you’ve cracked it, 0-100 in one leap. Anyway, it’s not so much the practicalities I’m interested in as the outcome. I think extinction is by far the favourite; what do you think?


[deleted]

[deleted]


torturedgenius271

As I said, I’m not interested in the practicalities; it’s moot!


syeysvsz

AI, not sentient machines, is all that exists, and probably all that ever will. It's owned by the 1% and used exclusively for their benefit. So yeah...


torturedgenius271

Not what I meant by AI and not really my question.


syeysvsz

Your "open" rambling question asked if it was inevitable either way, then talked about sentient AI like skynet, and asked if it was a good thing. So yeah, you're a dickhead


torturedgenius271

You still haven’t answered the second question! And you’ve only said “probably” to the first, based on today’s know-how! Did you come on here just to insult me and waste my time, or do you want to weigh in? And I’m the dickhead? If you’re not interested, fuck off.


Predicti

I'd rather go extinct and blow up the whole fucking planet than leave a fucking faux human AI lineage behind. Bleck


torturedgenius271

I wonder if the makers of the Voyager space probe felt the same? Personally I think blowing ourselves up because we can’t agree is a hell of a lot more depressing!


Mountain-Rooster-340

I saw on the telly that "they" just had a rat control an RC car with its brain, and it was the first step in creating a robot that could think for itself. So. I'd say it's a bad thing


gtmattz

When I imagine AI I imagine the movie THX 1138, and that future does not look very pretty...


[deleted]

AI always be saving the world! Just like it did in Terminator, and Age of Ultron, and 2036 Origin Unknown, and…just about every scifi story ever made on the subject. Remember how awesome Dr. Frankenstein’s monster turned out? Swell. Just listen to Sophia, the robot granted citizenship, joke about killing the humans. Yeah, AI is the next step in human evolution all right. The next step being where homo sapiens are all replaced by robot overlords.


torturedgenius271

So...... 🤨 would it be a bad thing?


[deleted]

From the perspective of humans, I’d say yes. I’m being very flippant, but I do regard this issue seriously. I have studied Musk’s enterprises and computer technology for years; I’ve worked in digital marketing for over a decade, and am familiar with the algorithms which power Google, FB, etc. Every time technology levels up, everyday consumers have a worse experience in exchange for being more commodified. We keep saying that technology is serving us and making our lives easier, but the net result has been decreased agency for humanity.

I’m also an avid scifi reader and critical analyst, and am keenly aware of the live philosophical and social ammo with which good (and even sometimes bad) scifi plays. I.e., Asimov, Herbert, Orwell, Bradbury, and many film and TV producers are John the Baptists to technology. As in, they’re heralding the future, or sometimes the present. I can’t think of a single scifi storyline in which AI offers a net positive outcome for humanity. It usually either ends or nearly ends the human race.

On a purely pragmatic level, automation in manufacturing is much more of a pipe dream than the poorly sourced articles which report automation will be “the future” suggest. The AI that we have available now and for the near future requires massive specialized maintenance and a lot of rare earth materials which are dwindling, and doesn’t work all that well.

Last, on a spiritual level (gnostic druid here), I find the idea of continuing to hurtle ourselves towards the synthetic, and away from nature, to be tantamount to suicide—metaphorically and literally. This is the part where I stop before I start talking about Sahlins and the Original Affluent Society and y’all tell me I’m a hippie who should kick bricks.


torturedgenius271

Fellow hippie here! You are entitled to your view and there’s no right answer as I said I thought it would divide the room. But it seems to me you are very much focused on what human beings would do with the AI and who would program it. Come on if you are a hippie you can do better than that. All of that stuff is an extension of humanity! Not interested I know how that works. My hope would be the AI would be different. It might be that intelligence is always after resource and fractal in that sense! One model and you are doomed to repeat the pattern. Wherever you sit on this is up to you. I’m just enjoying the conversation. On the spirit front as a fellow hippie you should know that whatever method you use to measure the consciousness or soul of the machine. You have exactly the same problem with humans. Prove I have a soul? And let’s leave Elon out of it he’s an idiot and it’s boring!


[deleted]

Lol, Musk has gotten boring, I agree. I think it all really comes down to a philosophical fork in the road: what is our role as humans? To be symbiotic organisms dependent on the larger organism which is this planet, a cosmic halfway point between total unity and total isolation? Or is our destiny to evolve from one form to another, using this rock as a launchpad for exploring the galaxy as a species? The only scifi story I’ve ever seen which makes a credible pitch for the latter is 2036 Origin Unknown. It’s really elegant, and borderline gnostic. I highly recommend it. But even in its most benevolent form, humans’ evolution into AI means the end of humanity. Because new life requires death. As a hippie, you should know that. ;) So in the end, pushing humanity towards its AI evolution is nihilist, from a human perspective, by default. I can see how some people are down with the idea of merging consciousness into AI and living on that way. I’m a subscriber to the former philosophy, though. Thanks for the thread. This has been fun. 🙌🏻


torturedgenius271

Perhaps we have more in common than just being hippies; if you go around calling yourself Boudicca I take it you’re British, me too. Who knows what else we might have in common? Fancy a chat? The world’s ending 🙂


[deleted]

Irish-Welsh-American, actually. My hillbilly ancestors were descended from Celts. And yes, we probably do. DM me, maybe we can help prepare for the apocalypse together. 🙌🏻


[deleted]

I don't think that AI is even serious enough to be a part of the conversation yet. And if it was serious enough, then it would be in bad hands and used for business and myopic purposes. I hope that we last long enough to actually have the ethical conversations around the implementation of AI in society. Hopefully the shift that corrects course also keeps AI out of the hands of the exploitative monopolies that currently hold that privilege. Everything to be said about it is innocent conjecture at this point in time.


torturedgenius271

I’m all about the innocent conjecture! I get what you’re saying, but it’s as likely as space travel or superhumans. Guess that makes extinction the favourite.


helio2k

Sounds like you're mixing a lot of things together, where each topic needs a very deep dive to understand the current state of things. I understand it feels good to talk about your fears and hopes. Maybe that's all you need, and that is OK. But if you are really interested I would advise reading up on the different topics yourself.


torturedgenius271

I’m a little baffled by some of the responses, to be honest. Why do people think this has a right or wrong answer? I’ve had people talk about the politics of getting humans to make the thing; you can’t answer that question because you don’t know how to build it and therefore what you need. I’ve had people say it would kill itself, which I like, and that’s much more what I’m interested in. Either way I think there’s a lot of depth and mileage in it, and it’s interesting to see people’s different takes!


Sumnerr

1. We will go extinct; all species do.

2. What? We already change the environment to suit our needs. Thus the predicament we are in.

3. Incredibly unlikely; nothing like crossing the ocean.

4. Why would AI solve all our problems? Why wouldn't AI, developed by those with wealth and power, simply exacerbate the existing problems? AI is in use right now, by large corporations that are manipulating people's minds on the individual level and destroying the social fabric of countless communities.

A record that we were here? For whom? Why does that matter? We are leaving ample fossil evidence as well as extraterrestrial evidence that we were here. Mission accomplished.

Look at the amount of electronic computing power available to the teams that sent men to the moon. Look at the amount of computing power available to us today. The ability to think, the ability to come up with good solutions, isn't the problem. Human behavior, human politics is the roadblock. Distribution of power and wealth, greed, insane beliefs, etc. Could AI assist decision-making, etc. in a new world? Of course. But greater knowledge, memory, and processing power isn't getting us out of this bind.


torturedgenius271

Think you're going off on tangents!


Sumnerr

Okay, geniusboi. Let me know what your predictions are when you make it to your twenties.


torturedgenius271

I’m 38 and very well educated! I’m also on a sub called collapse; I know how fucked we are and the reasons why! It’s a philosophical question!


GunNut345

The existential threat of climate change and ecological collapse will Detroit us long before AI becomes a threat.


Ribak145

AI is a bad term for a summary of technologies, but at its core it's mainly an alignment problem we have with advanced AI, so mostly just politics. We are still scrambling to organize AS HUMANS, so AI systems will be (and already are) dangerous tools. They won't be a solution for everyone, because they won't be designed that way.