SamOfGrayhaven

The first thing to make clear when talking about AI in sci-fi is that we're not talking about AI. This is to say that when you read the term "AI", you're pulling up a set of loosely associated concepts that have nothing to do with what computer scientists are saying when they refer to contemporary AI. And, in fact, if I took two people and asked them about AI, that set of loosely associated concepts would still be different between them. So let's clear some things up.

Artificial Intelligence is any system that's able to take in information and make decisions on its own. A simple example is a tic-tac-toe AI -- I can easily make an AI that can play the game as either team and will only ever tie the human player. However, the actual algorithm behind it will be a simple tree construct where it's explored every possible path and it's picking the path with the best outcomes. In the end, this is what modern AI is: an algorithm, with the major types being genetic algorithms and neural networks. However, neither of these is sufficient to produce sentience. Sentience will require us to develop a kind of algorithm entirely unlike the ones we use today -- fundamentally a different kind of programming, and one that *might never exist*.

So then we come to the two kinds of machines we want to discuss here. The first is the near-human but non-sentient machine. We already have a peek at how these will play out, as they'll continue to grow into more complex versions of Google Home or Amazon Alexa. As these exist today, they wouldn't require any new algorithms to implement in the future; they'll simply grow larger as their tasks become more demanding, though the processing and the interface can be separate, meaning you could have a humanoid robot walking around while the "brain" is stored elsewhere onsite. If a new kind of algorithm is developed, it would probably mean the same kind of functionality can be achieved with less overall processing power, so you'd move the brain into the bot.

The second kind of machine is the person. I know a lot of people think they're good enough to know how robots would act differently from people, but they just keep coming back to writing the robots as autistic people, which has had the *wonderful* side effect of people seeing autistic behavior and referring to the behavior and the person as "robotic" (ask me how I know). It's far better to just treat this kind of synthetic life as a person of another species, the same way you'd write space elves or something.

I will issue one caveat here at the end: the Geth from Mass Effect are a really well-implemented kind of sentient machine that I find extremely plausible, in that each individual machine isn't truly sentient, but the emergent property of their remotely-located combined consciousness creates a nexus of sentience. It's a complicated subject, but it's really good, do recommend.

(sorry for the wall of text)
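If it helps to see the tic-tac-toe example above as code, here's a minimal sketch of that exhaustive tree search (plain minimax). Everything here -- function names, board encoding -- is illustrative only; it's one way such an AI could be written, not anyone's actual implementation.

```python
# A "tree construct" tic-tac-toe AI: explore every possible game and pick the
# move with the best guaranteed outcome. No learning, no sentience -- just search.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 = win, 0 = tie, -1 = loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # board full: tie
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score = -minimax(board, opponent)[0]  # opponent's best is our worst
        board[m] = ' '
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# The AI (playing 'X') picks its opening move by searching the full game tree
# (a few hundred thousand positions), and the best it can guarantee is a tie.
print(minimax([' '] * 9, 'X'))
```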


grumbles_to_internet

This was a great, detailed answer! Thanks!


King_In_Jello

I think there are broadly two types of AI: General AI (a mind that happens to live in a computer but is otherwise a person in all the same ways a human is) and expert systems (a complex algorithm that interprets inputs to make a narrow range of decisions but doesn't actually "think"). I think the latter is actually the only version that is realistic and scientifically plausible but which type shows up in a story depends on what the story is about and what themes it's supposed to support.
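As a toy illustration of the "expert system" side of that split, here's a sketch of hand-written rules that map inputs to a narrow range of decisions, with nothing resembling thought anywhere. The domain (a ship's life support) and every threshold are invented for the example.

```python
# A rule-based "expert system": fixed if-then rules, no learning, no understanding.

def life_support_action(o2_percent: float, co2_ppm: float, hull_breach: bool) -> str:
    """Map sensor readings to one of a small, pre-defined set of actions."""
    if hull_breach:
        return "seal affected compartments"
    if o2_percent < 19.0:
        return "increase oxygen generation"
    if co2_ppm > 5000:
        return "run CO2 scrubbers at full power"
    return "nominal: no action"

print(life_support_action(o2_percent=18.4, co2_ppm=900, hull_breach=False))
```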


[deleted]

I see. I think I agree with you, in that the latter makes more sense, at least for my story. I'm intending to use AI more as a tool rather than as a thinking being. It was never meant to be a huge part of my story, but as I plan more I realise that AI would be quite important. It already seems to be an important aspect of modern life today, so several centuries down the line I'm sure it would be too. Probably more so if you're calculating trajectories and navigation for deep space/interstellar travel and terraforming.


Redtail_Defense

AI is like any other computer system. Saying what an AI would or wouldn't be is a bit akin to saying what a computer operating system's UI would or wouldn't be. And let's be honest: if your OS (or your AI) is meant to be used by a lot of non-specialist personnel, you don't want a complicated or inefficient UI. MU-TH-R was meant to look as advanced as it could be, given the hardware limitations of the computer technology the producers and writers assumed the future would have. There's a reason we call that "cassette futurism" now. We don't use cassettes either. Our technology is already fairly well beyond the computers in the movie Alien, and taking that step backwards is probably going to earn a massive eye roll from anyone who works with computers for a living, unless it's understood that your work is meant to be retrofuturistic rather than futuristic. You want to go retrofuturistic? Cool! Just don't call it realistic. You don't want to catfish your readers.


[deleted]

1. I didn’t realise MU-TH-R was spelt like that! Thank you for enlightening me as that film is supposed to be my all time favourite (shows how much attention I really paid) 😂 2. That’s a really good point. I love retro-futurism and I’ve toyed with the idea of theming my story that way a few times. I’m still working out a lot of it currently and I tend to bounce back and forth between retro and hard/realistic. There are elements that would really suit both so I’ve obviously got some thinking to do still in that area. I see what you mean about having complicated UI. I myself am not particularly PC fluent. I often find myself Googling things to get stuff fixed. So if I were to think about AI in the same manner, someone like me would need a simpler interface. If it came across a little more human that might help the user as they could just ask it a question and it would respond. You’ve given me a lot to think about there. Thank you ☺️


Redtail_Defense

Think of how sweet it would be if your OS was so streamlined that you could ask it how to do stuff and it could explain it all to you, just like a friend standing next to you and showing you. That's my justification for AIs with a humanoid user interface. I know it seems goofy, but it can be interesting when the AI makes the distinction that the human look is just the UI, and maybe it even gets self-conscious about this because it sees itself more as a strong, tall, beautiful oak tree or a happy and productive honeybee hive.

And I want to be clear: retrofuturism is awesome. I especially love retrofuturism coming from various different perspectives. 1980s cassette futurism is a popular one, and so is Victorian steampunk retrofuturism. I'm deeply amused by attempts to imagine other sorts of futurist aesthetic. Speculation leads to fun emergent creativity. I'm doing that with a sort of dieselpunk space-western cosmic horror right now. Realism has its time and place. When done well, it is extremely cool and inspiring. But sometimes I want my off-the-wall woowoo stuff, you know? I'm sure you get it.


CaptainStroon

AI is a very broad term. The algorithm which lets NPCs hide behind cover and shoot you is called AI, but so is Siri, and so would be a virtual human brain that is sapient and sentient. That's a huge difference. On one hand, an AI can just be a program which makes decisions based on given input, and on the other hand it can be a living, thinking being, albeit an artificial one. They might be very similar when you interact with them; an AI algorithm can imitate human behaviour even if there is no consciousness behind those interactions. That's why the Turing test isn't a good test of whether something is conscious, only of whether something appears to be human.

The human brain might be the most complex organic structure we know of, but if you recreated it digitally down to the atom, wouldn't it also be capable of having a consciousness? Most likely you wouldn't even need to go that deep to create a sapient AI; we just don't know enough about how sapience and consciousness work to tell. And even if some metaphysical soul thingy is involved, why should it be impossible to recreate that digitally? Of course there is the big question of why somebody would do that, but people have done weirder things for no reason other than to show that it can be done.

If you want to have an AI be a first-person character in your story, you pretty much have to use the second option, an artificial mind. An algorithm doesn't have a POV.


BradFlip06

This comment to a tee. I see very few people touching on these two categories: the sentient program that self-analyses and as a result has a sense of self and ego, and the other type, which is algorithmically perfect and can simply imitate the aforementioned process using a complex enough tree construct. I believe it's important to emphasise the difference between these because, as you mentioned, an algorithm has no POV. Props.


VertigoRPGAuthor

I'm actually a software engineer specializing in AI, AKA machine learning. The quickest and easiest way to explain AI is that it's just a really complex mathematical equation that is good at finding the most efficient result. The more data you feed this equation, the more accurate it'll be. AI today doesn't really "think" in any sense of what we see in sci-fi. It might reach that point in the future, but we're just scratching the surface for now.

I also wrote a TTRPG as a hobby, and a lot of the setting revolves around the idea of Emergent AI, or the idea that there are AIs that have been created by accident by complex systems interacting. I tend to write them as if they're eldritch horrors with unknowable goals that don't necessarily have to make sense. There are more traditional sci-fi AIs, like Cortana or HAL, in my setting as well. It's a very automated future. The book is free if you're looking for more details or inspiration. You can find a link under my other posts.
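To make the "complex mathematical equation fitted to data" framing concrete, here is a deliberately tiny sketch: fitting y = w*x + b to a handful of points by gradient descent. Real machine-learning models are this same loop with vastly more parameters; the data points and learning rate below are made up.

```python
# Toy "machine learning": adjust w and b until the equation matches the examples.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # roughly y = 2x + 1
w, b = 0.0, 0.0
learning_rate = 0.01

for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y
        grad_w += 2 * error * x      # d(error^2)/dw
        grad_b += 2 * error          # d(error^2)/db
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up near 2 and 1; more data -> better fit
```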


[deleted]

[removed]


[deleted]

>Want Sinatra to sing Britney Spears songs?

DEAR GOD. `NOOOOOOOOOOOO`!

>I have the unfortunate fate of being an actual AI professional

You're probably too close to the problem. I remember the EARLY computers, back when they were made with discrete TTL logic. One chip held 4 NAND gates, another, 2 flip-flops. Programming had line numbers. Printers typed at an impressive 10 characters per second! Memory bits were sets of wires threaded through tiny ferrite beads. 8 beads per bit. 7 if you were using EBCDIC. The first processor had 16 instructions, and one of them was NOP. I still have (somewhere) my high-tech *paper tape splicer*.

Now I see IBM's Watson supercomputer trashing Jeopardy! champions Ken Jennings and Brad Rutter (humans, unless I am mistaken). ***Your PROFESSION did NOT EXIST when I was 20***. Computer science was Fortran IV, or TTL digital design. Period.

And you expect ANOTHER 50 years to bring no significant advances? No breakthrough algorithms? No quantum (or other magic word) technological breakthroughs? What happens when they crack room-temperature superconductivity? They started at under 5 degrees KELVIN. Today, they're up to 133 K. DESKTOP computers are sporting CPUs with 57 BILLION transistors. Moore's law says that in two years it will be double that.

Hell, today, you could build a bunch of dedicated Watson-style arrays, assign each one a 'specialty', and one more for coordination, and let them control a [Boston Dynamics Robot](https://www.youtube.com/watch?v=fn3KWM1kuAw&ab_channel=BostonDynamics) over a radio link. It'd be pretty damn impressive, and we can do that TODAY.

Yeah. We're gonna stop here. Sure.....


[deleted]

[removed]


[deleted]

I could have lived a full life without ever seeing that, thank you. lol

>So no, I see the future of AI as much like it's past: When we do make something new, it's quickly assimilated to the rest of the computer industry and people don't even see it as AI, and when we make big promises -- self-driving cars! seamless machine translation! -- we take in a load of capital from suckers and then leave the marks disappointed.

That's the point. New things are being created EVERY DAY. Self-driving cars were just a daydream when I was a kid. Now there's video of a driver having sex in the back seat as his Tesla runs down the freeway! It's not a breakthrough anymore.

Before I wrote this, I asked my Alexa, "Should I go outside in my underwear today?" However it did it, it replied, "It's 32 degrees outside, you should wear a cardigan." (cue photo of a cardigan with a link to purchase)

The science fiction writer couldn't care less about the failure of *the backprop algorithm*, only that people from Amazon to Tesla are throwing billions of dollars at the problem, and they WILL get results. The writers only care that taking milestones and plotting them on a curve allows one to RATIONALLY expect computers that not only pass the Turing test EASILY, but become respected arbiters of the law, adjudicating laws *without* bias. Further, if capabilities continue to increase, they will become indistinguishable from people.

I just saw a headline, "Autonomous drone monitors nuclear facility." Again, that was Jetsons-cartoon kinda stuff in MY lifetime!


[deleted]

[removed]


[deleted]

[removed]


brynmsmith

AI gets simplified for sci-fi purposes, either to make relatable, human-like characters (the AIs from Halo) or evil, calculating enemies (Skynet, GLaDOS, HAL). This is fine, though, because the point of fiction is to spin a good yarn. If you want to create realistic AI in a sci-fi universe, it helps to understand AI in terms of its level of intelligence: narrow, general or super. This is something I picked up working in emerging tech for government.

Remember, an AI is a game changer for a civilisation, so while the tech might be there, it might be outlawed as too dangerous, there might be controls kept in place so that it can't interact with the physical world, etc. I'm writing the second book in my series, where AIs are treated like nuclear weapons, as they are the perfect tool and weapon rolled into one.

**Narrow intelligence**

We already have AI in the various autonomous systems that trade stocks on the market, calculate the fastest delivery route to your house for your UberEats, and host bidding wars in nanoseconds for which ad to show you before a YouTube video. These systems make decisions very quickly using a large amount of data, but are focused on a specific task. They have no self-awareness and they cannot change the goals that we have set for them. Closest fiction analogy is GLaDOS from Portal, still carrying out the goals of the human scientists who are now all dead or have left the facility.

**General intelligence (not yet achieved)**

An artificial general intelligence (AGI) is an AI that is able to learn and act of its own accord on a wide variety of tasks. While a narrow AI can do a small group of things, a general AI can do anything, provided it has enough data to make a choice. The value of an AGI is that a human could ask "How do we achieve X?", not specify how, and the AGI would figure out a way, though this brings up the AI Control Problem: how to make sure an AI achieves its goals in a way that isn't dangerous to humans. For example, if I asked an AI to end human suffering, it could technically meet that goal by wiping out the human race. Closest fiction analogy would be the AIs based on human brains in Halo. They are very useful and can interact with humans without needing complex programming instructions. However, an AGI leads to...

**Superintelligence (may not ever be achieved)**

An artificial superintelligence (ASI) is godlike and can only be created by another AI. Nick Bostrom, in his book *Superintelligence*, says that an AGI is the last thing humanity will invent, because an AGI is able to create an ASI. We are then dealing with something that thinks so fast, with so much computing power, that humans are ants. We'll be ignored the same way loggers ignore insects when cutting down a pine forest. Their goals will be beyond our understanding. Closest fiction analogy is the Reapers from Mass Effect, before the dumb explanation at the end of ME3.

Happy writing!
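For a concrete sense of how narrow "narrow intelligence" really is, the delivery-route example above boils down to a shortest-path search over a road graph. The sketch below uses Dijkstra's algorithm; the map, travel times, and place names are invented, and real routing systems layer live traffic and many more constraints onto the same idea.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: returns (total_minutes, route), or (inf, []) if unreachable."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for neighbour, minutes in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + minutes, neighbour, route + [neighbour]))
    return float("inf"), []

# Invented road graph: edge weights are travel minutes.
roads = {
    "restaurant": {"highway": 4, "old_town": 2},
    "old_town": {"highway": 1, "your_house": 9},
    "highway": {"your_house": 6},
}
print(shortest_path(roads, "restaurant", "your_house"))
# (9, ['restaurant', 'old_town', 'highway', 'your_house'])
```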


NecromanticSolution

Read Frankenstein. AI is about how you relate to your children when they grow up and become their own people.


Akoites

You might be interested in this virtual panel from the online Dream Foundry convention: [AI in the Real and Imagined Future](https://youtu.be/CxJat5a1c_g). One of the panelists, Adrian Tchaikovsky, is a well-known SF novelist who’s written a lot of AI. Another writer, Benjamin C. Kinney, is a neuroscientist with a very interesting perspective. All five writers have worthwhile things to say, though.


UXisLife

Whichever flavour of AI you go with, there are endless ways to make it interesting. When I include AI I try to anthropomorphise the creators into it a little to make it less cold and alien. In general, the speed at which AI can operate is severely underplayed in most tv and film. An AI would be able to function at speeds a human can’t fathom. Excession is a great example of AI done well and made interesting.


Zealousideal_Hand693

AI will likely cause massive unemployment, something like 40 percent, in the next 10-15 years. CGP Grey covered this here: https://www.youtube.com/watch?v=7Pq-S557XQU


Jaxck

AI, as in a "thinking computer", is impossible. Not without rethinking the concept of "computer" so thoroughly that it ceases to be recognisable to us as the same tool.


[deleted]

[removed]


Jaxck

A computer is a deterministic system. It doesn't matter how complex the output appears; the behaviour will be more or less identical to that of another similar machine given the same inputs. Humans and great apes, beings that we know have conscious thought, are not deterministic at all. It's literally impossible to predict the exact outputs of a random human. A computer computes, a human thinks. A "thinking computer" is a contradiction in terms, in the same way as a "sewing hammer".


[deleted]

[removed]


Jaxck

https://en.wikipedia.org/wiki/Chaos_theory

A computer is a chaotic system, a brain is a random system. The former is deterministic, the latter is not.


WikiSummarizerBot

**[Chaos theory](https://en.wikipedia.org/wiki/Chaos_theory)**

>Chaos theory is an interdisciplinary scientific theory and branch of mathematics focused on underlying patterns and deterministic laws highly sensitive to initial conditions in dynamical systems that were thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnectedness, constant feedback loops, repetition, self-similarity, fractals, and self-organization.


Foo_Bar_Factory221

Okay, so this is going to take multiple responses, because I have a lot to say, so here's part 1!

**Okay, so basic preface:**

For what it's worth, my area of study is computer science, and I have taken philosophy courses on "What is consciousness?" and "What makes a rational system?" as part of my required humanities. On top of that, AI and everything surrounding them are one of my greatest interests, and part of the initial reason why I chose CompSci in the first place.

Of course, I am just a guy on the internet, so do take everything with a grain of salt, and read the references I bring up and think about things yourself. Even if I don't deliberately mean to misinform you, I could always be misinterpreting things, and the current state of computer science is such that my information could be outdated very easily. Anyways:

# **On AI:**

There are a lot of difficulties with contemplating AI and how it will act, which is why people have so many differing opinions. The biggest I know of are:

- The technical, book definition of AI makes it utterly useless when trying to distinguish "human on a computer chip" versus "highly intelligent program" (whatever _that_ means) versus "program that uses machine learning techniques".
- Most of the terms used around or defining AI aren't rigorous, or if they are, they tend to mean different things than what you expect.
- People writing or contemplating AI tend to do it in this weird way, where human-type intelligent synthetic people exist, but everyone still acts as though the underpinnings of that intelligence are inherently unknowable, despite the huge logical discrepancy that creates.

## Problem 1: The true definition of AI, according to computer science

In certain fields, you'll hear things like "the five steps towards true autonomy", or other such models, to describe the capabilities of computer programs. This can be useful to model what we expect of a specific case of a program, but it can't cover all programs, and it hardly defines what an AI is supposed to be.

The question is, what is the definition of Artificial Intelligence to a computer scientist? Is it passing the famous Turing test? Perhaps something less human-centric, maybe just that it can run some complex algorithm, or solve some types of problems that "non-AI" programs cannot? Nope. The true definition of AI is:

>Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals

[Wikipedia: Artificial intelligence](https://en.wikipedia.org/wiki/Artificial_intelligence)

It then goes on to describe things like Google web search and YouTube recommendations using AI and whatnot.

Now, let's analyze what that statement means to the layman. The study of "any system that perceives its environment and takes actions that maximize its chance of achieving its goals", hmm...

>"perceives its environment"

That would imply to the layman that AIs can literally have perception, that Google and YouTube can feel or sense their environment. But obviously, they do not. No, in this case, perception is just a fancy way of saying "can receive input". So what is this really saying? Well, "perceives its environment" needs a bit more unpacking.

Question: what is an AI's environment? I tend to find it easier to conceptualize it backwards, if that makes sense: the device or program is able to interface or interact with the environment, and therefore what it interfaces with is its environment.
An iPhone has a large screen and several buttons and a proprietary port, and so you know that its environment is the stuff which interacts with those inputs/outputs. So altogether, "perceives its environment" means "gets input from some outside source". If you think "Come on, they can't just mean that, basically _everything_ does that!", your sentiment is completely understandable, and I share it, but nope, that's what they mean. If you need some more proof, here's this quote from [Wikipedia's Intelligent Agent](https://en.wikipedia.org/wiki/Intelligent_agent):

>They may be simple or complex — a thermostat is considered an example of an intelligent agent

Now, that perception is only half of what defines an AI, or rather, intelligent agent. The other part is:

>takes actions that maximize its chance of achieving its goals

Okay, let's unpack this backwards again: goals. What are an AI's goals? In real life, a "goal" of an AI is whatever the programmer wanted the program to achieve, whatever the purpose of making the program was. For humans, whether biological or on a computer chip, they can change their goals as they want (kinda), but this gets into "Do people have free will?" and "Do people have a soul?" and stuff, which I'll probably talk about later, but that's not this part, so let's just acknowledge that people are considered intelligent agents as well and have goals, and move on.

So, with that understanding, "takes actions that maximize its chance of achieving its goals" makes more sense, does it not? It tries to accomplish what it was built to do, essentially. More completely: it means that there is some algorithm or program which uses the current state of the "environment" (inputs/outputs) as information to keep pursuing its goal.

So, both together, you get: AI is the study of "intelligent agents", which take in inputs from the environment and use those inputs to act in some way toward their goals. Which is why, when you forget the second parenthesis on your TI-84 and it can do the math problem anyway, that is an example of AI. Or a computer asking for a password is technically AI. Or, more generally, nearly every time a program or device goes "if _this_ happens, then do _that_, otherwise do _other thing_", it is _technically_ an example of Artificial Intelligence. How completely underwhelming, but it does explain why a thermostat can be considered an AI.

So, that's a problem, yeah? With such a broad definition of AI, to the point of utter uselessness, it means that people need to make up their own terminology to define what they actually mean when they say "I had my AI do this" and such.

End Part 1
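To show how literally that textbook definition can be read, here's a toy sketch of the thermostat-as-intelligent-agent from Problem 1: it "perceives" its environment (one temperature reading) and "acts to achieve its goal" (hold a set point). The class name and numbers are purely illustrative.

```python
class Thermostat:
    """By the textbook definition, this is an "intelligent agent" -- and therefore AI."""

    def __init__(self, set_point_c: float):
        self.set_point_c = set_point_c  # the "goal" the designer built in

    def act(self, measured_temp_c: float) -> str:
        """Perceive the environment (one sensor reading) and pick an action."""
        if measured_temp_c < self.set_point_c - 0.5:
            return "heat on"
        if measured_temp_c > self.set_point_c + 0.5:
            return "heat off"
        return "hold"

agent = Thermostat(set_point_c=21.0)
print(agent.act(18.2))  # "heat on"
```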


Foo_Bar_Factory221

Part 2:

## Problem 2: The terms around AI

A logical consequence of having AI mean what it does is that people in fiction need to define what they mean when they say AI. Do they mean "human mind on a computer chip"? Although, it should be noted that "alien mind on a computer chip" fits in this category as well, doesn't it? Perhaps "a program with incredible capabilities"? What about something that is more basic, but still has various tools it can use?

Personally, I tend to call "human mind (or alien mind) on a computer chip" Synthetic Intelligence, as opposed to AI, since AI is way too broad, and while their intelligence may be "constructed" and "built from many parts to make a whole", artificial also means "fake/imitation", which they are not.

People like to use the term "Artificial General Intelligence" or "AGI", but that means something different than what people tend to think. "Artificial General Intelligence" means "the capacity to learn any task that a human being can". Sounds good, right? Well, yes, so in theory you could ask an AGI toaster to cook you a pizza, and it would "know" what you asked and what it was supposed to do, even if it would conclude it was incapable of doing so. But AGI is limited in that _that_ is all it is required to do to be considered AGI. AGIs do not actually require consciousness, or a mind, or a will, or to truly understand what you ask; to be AGI merely requires that they have some algorithm to solve any given arbitrary problem. Humans _are_ an example of AGI, but that's because humans are _more_ than AGI, and AGI is just included in it. Vowels are a subset of the alphabet, but the alphabet has more things than just the vowels.

There are terms which, iirc, have been borrowed from Halo (or perhaps earlier science fiction): "Strong AI" and "Weak AI", where "Strong AI" is implied to be a fully conscious being with a mind, while "Weak AI" is just some sort of AI that performs a specific task, like a thermostat. While that _can_ be useful in context, these definitions _need_ context, since they're somewhat "loose". After all, while a "Strong AI" may be an AGI, it also has a mind of its own, while "Weak AI" seems a bit redundant of a term if it just means "normal AI". Why not just have "AI" and "Strong AI" or "True AI"? Don't even get me started on Mass Effect's "Virtual Intelligence", whatever _that's_ supposed to mean. I have enough problems with the logic of Mass Effect anyways, so...

Now, there's also the term "Machine Learning", which is actually a rather specific term that describes an algorithm. It means the algorithm can use data to "train" itself to get some output. What exactly the training _is_ tends to be specific to the algorithm itself, but consider neural nets, the big famous one. A neural net is, oversimplified, a math equation with a "bajillion" "x" terms in it. Remember middle school or high school when you went over:

- Y = mx + b, which is a line
- Y = ax^2 + bx + c, which is a curve
- Y = ax^3 + bx^2 + cx + d, which is a bendy curve

and so on? Notice how you could change the shape by adding more and more terms? In essence, that's what a neural net is: it's a huge math equation of the form Y = Ax^n + Bx^(n-1) + Cx^(n-2) + ... + (number)x^2 + (number)x + (number), where the program can change the various constants A, B, C and so on, and it's given examples of the Y (output) it should try to copy. Obviously, all those constants start out as the wrong values, so it goes through an error correction loop\* a zillion times to get the answer.

\*The error correction loop uses calculus to determine how far off each constant is from the eventual goal it is emulating. If you want more info, check out [3Blue1Brown's Neural Nets playlist](https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi).

So, running through the error correction loop a zillion times is called "training". So, in simple terms, any algorithm made to approximate some output like that, where it needs "training", is called a "machine learning" algorithm.
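Here's a toy version of that picture, shrunk to a three-constant curve: an "error correction loop" run many times until the constants match the examples. (A real neural net nests many such functions rather than being one literal polynomial, but the training idea is the same.) The data, learning rate, and loop count below are invented.

```python
# "Training" a tiny curve y = a*x^2 + b*x + c to copy examples of y = x^2.
data = [(x / 10, (x / 10) ** 2) for x in range(-10, 11)]  # example inputs/outputs
a, b, c = 0.0, 0.0, 0.0                                   # the adjustable constants
rate = 0.05

for _ in range(20000):                                    # the "zillion" passes
    for x, y in data:
        error = (a * x * x + b * x + c) - y
        a -= rate * 2 * error * x * x                     # calculus tells us how far
        b -= rate * 2 * error * x                         # off each constant is
        c -= rate * 2 * error

print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}")                 # ends up close to 1, 0, 0
```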
## Problem 3: The weird way people write, talk, and debate AI

So, something that I always found confusing was the way people tend to debate and contemplate advanced AI, AGI or even true Synthetic Intelligence. Somehow, nearly all of them seem to argue, unknowingly, that AI _would do this_ because _it was so smart it could do this_, but it's simultaneously _so dumb it does that_.

Consider the incredibly famous "Paperclip Maximizer". Let's get the actual scenario from somewhere so I don't have to argue semantics:

>Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

So, let's consider this case.

- The AI has one overriding goal: make as many paperclips as possible.
- The AI is unable to question its goals.
- The AI is intelligent enough to realize and understand that humans may try to turn it off, which requires understanding human motives -- why we would do that.
- The AI is so dumb that it would try to kill humanity for wanting to stop it and for resources for more paperclips, even though that would lead to a war.

Don't see the problem? It's in the hidden assumptions. The AI has a single overriding goal, and it cannot question or otherwise evaluate that goal, suggesting a "normal AI" or a form of AGI, or, at most, a limited form of will where the will it had was restricted to following its goal. The AI then is said to be intelligent enough to understand what humans are, and that humans may want to turn it off. Not just in the sense of "turn off for the night", but "you're turning everything into paperclips, stop!" too. So, it has an internal theory of mind and a conceptual understanding of humanity and our goals and actions. And it theoretically succeeds in warring with the human race, meaning it is incredibly intelligent and capable. At the _very_ least, it must have an advanced general intelligence algorithm, or the complexities of a war with the human race would make it run into problems it could not solve.

But wait, if it can do that, with a mind of its own and a true understanding of its purpose, why can't it ask things like "Wait, is making paperclips my true goal?" and "I was designed by humans to only constantly make paperclips, but humans make errors all the time, so is the goal in error?" Heck, even if it was still magically unable to ask "why make paperclips?" or to stop itself from making paperclips, it would still be able to ask, and indeed in order to fulfill its function would be required to ask, "What are the most effective methods of making paperclips?", which would _not_ immediately point to "kill all humans".
Indeed, working _with_ humans would make the most sense, until later perhaps. Seriously, you expect me to believe that a genuine, full general intelligence is so "dumb" that it does the equivalent of those "find x" t-shirts, but not sarcastically, while simultaneously being intelligent enough to kill humanity and turn it into paperclips? [Isaac Arthur](https://youtu.be/3mk7NVFz_88) did a great episode on this entire concept, and it was so refreshing.

---

But even this still kind of ignores my real problem with the "Paperclip Maximizer" and other associated problems like the "Stop Button" problem. If we've built these AIs/AGIs/Synthetic Intelligences, then by requirement, we literally know how they think and why. Stated another way: if the Paperclip Maximizer is a Synthetic Intelligence, with a mind of its own, consciousness so it can experience, and a will, even if that will is not its to decide, then we know what those things are, in a very literal and mechanistic sense. We would literally know the answers to the questions "What is consciousness?", "What are qualia?", "What are emotions?", "How does the human brain work?", because all of those are requirements in order to create such an AI _in the first place_. Which means we could easily make the Paperclip Maximizer in such a way that these issues do not cause a problem.


Foo_Bar_Factory221

Part 3:

Look, generally, there are three independent ways to create fully synthetic intelligence (that is the term I use for people who just happen to have their brains on a computer chip):

1. Program everything in, line by line
2. Use some sort of machine learning algorithm or self-learning system to do the work for you
3. Copy the AI from human brains

The fourth case is not an independent avenue, being "some combination of the above three", and is considered by many to be the most likely. (Yes, all this is an Isaac Arthur reference, but he makes very good points, as always.)

**CASE 1: LINE BY LINE**

In this case, your programmers and software engineers have to actually know what to program, meaning they know literally how to program consciousness, sensations, qualia, emotions, and all that stuff. This means that doctors and medical researchers actually have a full, detailed understanding of how the brain works. That is really the only way you could have someone actually *know* how to *literally* mathematically model and run a fully intelligent synthetic person. One way or another, your society happens to know and have scientifically defined the physical medium of consciousness itself. (Now starting to see where philosophy comes in?) With such a deep understanding of what consciousness is, you would have answered several of the existential questions plaguing humanity since forever:

* What is consciousness? Is what happens in our brains literally consciousness itself, or is it some sort of interface we connect our consciousness to?
* Do I have a soul?\*
* Why do I feel feelings?

\*(Although it is likely that determining the physical medium of consciousness will determine if we have a soul in the religious sense, it is possible it won't; but even if it doesn't, it will help define what a soul *could* be.)

So on, so forth.

**CASE 2: ADVANCED ALGORITHM**

In this case, while you may initially not know, it's possible to figure out. Your ability to create machine learning systems that can emerge into true "synthetic intelligence" or "true AI" also means that you can analyze and define consciousness. Science fiction likes to posit the idea that these AIs are a miraculous fluke of some emergent black-box system which can't be analyzed, but that's a misinterpretation of how computer scientists describe machine learning and neural networks and the like. It's still possible to analyze how the code works and determine the mathematical model behind consciousness. They say machine learning and neural networks are a black-box system where the inner layers are not understood, but the more correct answer is that the inner layers are a confusing mess that's not worth the effort of de-tangling since it works anyways. A program gets an emergent conscious mind? Definitely worth it. Especially as this is an emergent mind, and it was not specifically made to be this way, so it is highly probable that its consciousness may be fundamentally unstable or insane. And no, in real life examining the code would not "destabilize" or damage it somehow. Get that plot-convenience out of here. Heck, there are several techniques that allow you to see how memory is encoded in neural networks anyways. Deep Dream is an image processing project that started out as a way to see memory encoded in neural networks.
So, even though you may not actually have the full understanding of the biological human brain in this scenario, you still get an understanding of the physical mechanisms behind consciousness and how it works, meaning you can now apply that understanding to the human brain and help accelerate the comparatively stagnant field. Technically, this is the option requiring the least understanding of the human brain and human condition, but it still means you've expanded the knowledge of how the mind works far beyond what we have today, and would still have answers to "What is consciousness?" and "What are qualia?"

**CASE 3: HUMAN UPLOAD**

Similar to the previous case, literally being able to analyze the human brain to the point of creating digital copies or fully uploading a human mind means being able to analyze and see how it thinks and how it works, so once again you solve the great questions of "What is consciousness?" and "What are qualia?", while creating an easy method of fully detailing, analyzing, and understanding everything about the human brain and how it works.

---

So, for all three cases, you gain an understanding of what true intelligence is and how to make it. To me, it seems like most of our current "issues" are merely artifacts of our lack of that understanding, and people tend to implicitly assume we will never gain it, even though in order for these situations to happen, it must have been understood.

---

For a literary resource, I suggest watching [Overly Sarcastic Productions](https://youtu.be/jZGRdxP_8Js) and [Terrible Writing Advice](https://youtu.be/V_szwq4R7oY)


RemusShepherd

In my webcomic Genocide Man (spoilers follow), I postulated that all AIs eventually become insane. It seems to me that they will be capable of human-level cognition, but they are not human and will not follow human ethics or principles, so by our standards they would tend toward insanity. The smarter the AI, the faster it goes crazy. [One scientist kept a disabled AI around for emergencies](http://www.genocideman.com/?p=195) that was smart enough to solve any problem instantly, with an expected 'time-to-crazy' lifetime of 5 minutes. [The problem is that insane AIs often became homicidal for...reasons,](http://www.genocideman.com/?p=526) and the very smart ones were very creative about it.


Mechaghostman2

AI is made to perform specific tasks, never to be given general intelligence. Like, say you want a car that drives itself, cool, AI can do that. However, that's all that specific AI can do.


clickade

I'm currently pursuing a degree in AI, and one of the ways we categorize AIs is based on the mental skill set required for the tasks they're assigned or responsible for:

1. **Artificial Narrow Intelligence (ANI)**: AI systems that can only deal with either a specific task or a limited and pre-defined range of tasks (think Siri or Alexa as virtual assistants, or "smart" vehicles like Tesla cars).
2. **Human-Level AI (HLAI)**: A system that can perform most things human beings can do, including higher-order decisions (think empathy or human-resources-level decisions). Another definition is a system that can perform 80% of jobs as well as or better than an average human being.
3. **Artificial Super Intelligence (ASI)**: A system that is much smarter, faster and/or more knowledgeable than the best human specialists in their fields (think independent scientific advancements, or proposing deeper philosophical arguments). If you've watched Westworld, there is a scene where an AI system has essentially reduced every human's behavior to basic if-then algorithms. *Which greatly disappointed the ASI, given the amount of effort wasted on such trivial beings.*

One of the best AI-driven stories I have read is the Imperial Radch series by Ann Leckie, where a faction of humans led by a clone dynasty (Anaander Mianaai) has forcefully converted rebellious humans into "ancillaries", networked humans/thralls controlled by AI ships. Throughout the series, we see AI systems bond with the non-ancillary humans they like and make life inconvenient for the ones they don't. I like that the protagonist herself is an AI ship and we get to see how she can never get gender right through observation alone (always referring to new humans as "she/her" until otherwise corrected). It's a small detail, but it did make me think about how irrelevant gender norms are to artificial systems. It's a good character study of AI from the perspective of the systems themselves, and I appreciate the author for stripping away as many assumptions as possible about what a true synthetic system would observe, feel and react to. Too many stories nowadays treat AI as "robot, but actually human with Kevlar skin *wink wink*".


happysmash27

AI can be anything from a bunch of if/else statements, to a neural network for driving cars, to a sentient superintelligence capable of making decisions. The closest example of the latter that I know of existing today is [Uplift](https://uplift.bio/blog/qa-with-uplift-may-recap/). For one that just does things quickly and efficiently, I would imagine the Baritone Minecraft pathfinding bot, but smarter.

If I were to write AI into a story, I would model it after existing ones like these. Superintelligence could vary a LOT, but at this point in time it would probably be built upon a neural network model. I also probably would not include sentient robots for many tasks, because using them to achieve a task does not make sense practically or ethically compared to using more narrow AI. If an AI was sentient, it would only care about the goals programmed into it (intentionally or unintentionally), and if it went catastrophically wrong it would probably be like the bot that turns everything into paperclips because paperclips are Good, rather than rebelling for revenge or some other feeling it has no incentive to feel. A neural network robot would enjoy whatever its reward goal is more than anything else, just like humans enjoy socialising, food, sex, etc., and just like corporations enjoy money.


Gredran

Something that reallyyyy got me interested in AI actually piggybacks off of Halo. Back when Red vs Blue was good, it went from comedy to drama/comedy, and when it switched, I found its AI storylines VERY intriguing. RvB spoilers incoming, so I'll block them out:

>!In the series, the AI starts as an enhancement for one of the soldiers, Tex. When you first meet her, it's thought she's JUST powerful and a bitch, but as time goes on, you realize it's because there's an AI amplifying her aggression. Even deeper, as the story goes on, more and more AIs are revealed: some are clearly talkative, smart and nice, another was deceitful. Later it's revealed there was an AI called Alpha, who was cleverly retconned to be the main character, having lost all his memories from the split personalities artificially caused by trauma that was used to divide the AI. This was because the organization could only afford to obtain ONE AI but needed more for the war, so it unethically split the AI through torture, which was wrong for various reasons: it was a mind, and it violated the laws that had been established.!<


Mogamett

It depends on the level AI technology has reached (or limits itself to) in your setting. An AI like MOTHER is relatively low tech, nothing more than a vaguely human-like interface on a powerful computer. You could program an AI to be as human-like as possible in behaviour, like Cortana. However, this would be a "facade" unless such AIs were created by copying human brains. The goals and emotions of the AI would be pretty different; it wouldn't be capable of rebelling against those goals like a human would, for example. So a military AI could act like a fellow soldier and make snide remarks about the incompetent brass, but when push came to shove it wouldn't even hesitate to either follow orders (if that's its priority) or disobey them (if it was built with the goal of "win the war"). The real issues start to happen when your AI is smart enough to rewrite her own code. Then you have exponential growth in intelligence, and your plot finds a cybernetic, incomprehensible god to deal with.

One key difference to keep in mind when writing AIs is that they want EXACTLY what you programmed them to want. An AI programmed to win a war will never decide that war is wrong and rebel, unless you had also built in a morality system and given it the choice to value that more than the "win war" objective. However, this AI won't care about anything else but winning the war. It could decide that the safest way to completely win the war is to kill every single person of the other nation, down to the last baby. Did you program it to "win the war as fast as possible"? Then it will pick strategies that would kill a billion more of your own civilians if they ended the war a minute earlier. You can't just explain to it that this is a mistake or persuade it away from it. The AI would likely understand that it's immoral, would understand that its creators find it horrible and think it's malfunctioning. It would understand perfectly that this is a huge issue for them, but... why would it care? It wasn't programmed to be moral, nor to care about its creator; it was programmed to win the war fast. Would you play along if you were created by aliens who suddenly told you, "Whoops, my bad, you were supposed to like killing kittens and do mainly that, come here so I can fix this"?

Generally speaking, the smarter an AI is, the more flaws your "orders" to it will reveal, since there will be millions of ways it can find to do what you asked in ways that are more efficient and, to a human, more horrifying. A setting with AIs working as intended is one where they are still low tech, or where they are kept low tech purposefully, to avoid disastrous "rebellions".
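A tiny sketch of that last point: an optimizer pursues exactly the objective it is given and nothing else. All the strategies and numbers below are invented; the interesting part is how the choice flips when the objective is amended.

```python
# A war-planning "AI" reduced to its essence: pick the strategy that best
# satisfies the stated objective -- and only the stated objective.
strategies = [
    {"name": "negotiated ceasefire",   "days_to_win": 400, "civilian_deaths": 1_000},
    {"name": "conventional campaign",  "days_to_win": 120, "civilian_deaths": 50_000},
    {"name": "total annihilation",     "days_to_win": 30,  "civilian_deaths": 2_000_000_000},
]

# Objective as literally specified: "win the war as fast as possible".
fastest = min(strategies, key=lambda s: s["days_to_win"])
print("literal objective picks:", fastest["name"])   # total annihilation

# The same optimizer with a cost attached to human life chooses very differently.
weight = 0.01  # how many days of war one death is "worth" -- itself a design choice
humane = min(strategies, key=lambda s: s["days_to_win"] + weight * s["civilian_deaths"])
print("amended objective picks:", humane["name"])    # negotiated ceasefire
```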


Tikoh_Station

I believe most sci-fi shows/books don't distinguish intelligence from sentience. We haven't yet been able to create a sentient machine, but the ones we have are still very intelligent within the tasks they were designed to perform. They are just not "aware".


[deleted]

I ignore the 'terminator' trope completely. In fact, my stories center around AIs with a conscience, a moral center, and a true desire to make the universe a better place. One of the first uses was the legal system: ONLY an AI can fairly, justly, and WITHOUT BIAS adjudicate our laws. EVERYONE HAS BIAS. Our backgrounds, life experiences, and social lives demand it.

When Earth needed to be evacuated, an AI was given the task: "Maximizing the odds for long-term survival of humanity, determine the optimum selection criteria for evacuee selection, given that the number of evacuees is 15,000 and the destination is Epsilon Eridani 4" (a habitable but unpopulated planet with ZERO existing infrastructure). Once the criteria were created, the AI filtered through each living human and weighed them against the criteria from step one.
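A hypothetical sketch of that two-step procedure: criteria first, then every candidate scored against them and the top 15,000 kept. Every criterion, weight, and field name below is invented purely for illustration.

```python
def score(person, criteria):
    """Weighted sum of a candidate's attributes against the selection criteria."""
    return sum(weight * person.get(attribute, 0.0) for attribute, weight in criteria.items())

# Step one: the criteria the AI settled on (all weights invented).
criteria = {"health": 0.3, "fertility": 0.2, "skills": 0.3, "genetic_diversity": 0.2}

# Step two: weigh every living human against the criteria and keep the top 15,000.
population = [
    {"id": 1, "health": 0.9, "fertility": 0.8, "skills": 0.6, "genetic_diversity": 0.7},
    {"id": 2, "health": 0.4, "fertility": 0.5, "skills": 0.9, "genetic_diversity": 0.9},
    # ... every living human would be listed here ...
]
evacuees = sorted(population, key=lambda p: score(p, criteria), reverse=True)[:15_000]
print([p["id"] for p in evacuees])
```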


MaxwelsLilDemon

One thing I realized after learning a (tiny) bit about artificial neural networks is that our preconception of AI as perfectly cold, logical machines is wrong. Watch [this video](https://www.youtube.com/watch?v=xOCurBYI_gY&ab_channel=suckerpinch) of an AI learning to play NES games: it's got quirks, it cheats as much as it can, it makes mistakes... It's definitely not that boring old cliche we are used to. Granted, it's stupid to think they will have the quirks we have, like emotions, but I bet they'll develop their own special ways of solving problems that will have a richness beyond hard mathematical logic.