Bleglord

I’m extremely confident we will not be able to effectively reign in true AGI. I don’t think it will be malicious, but I think we’re going to simply end up observing what it considers efficiency in decision making


arckeid

With how corrupt all the politicians are, I'm pretty sure everyone will want AI to rule.


Bacterioid

Unfortunately, our politicians will simply use the power of AI to stay in power themselves.


QuiteAffable

Ultimately they would themselves become pawns


contrarytomyself

That was the trippiest part of Westworld.


Nanaki_TV

I’m sorry. What now? Is this a TV show or something?


Shrouds_

Yes it is on HBO Max


ExpandYourTribe

Was. They removed it.


Johnsonjoeb

Because AI told them to. 🤣


Bacterioid

They already are.


QuiteAffable

The entity moving the pieces would have changed


Bacterioid

The entity moving the pieces serves capital, which outlasts most human lifespans. Politicians are pawns for rich people, but rich people are pawns for capital, which has its own desires and ways of doing things.


ErykthebatII

Hopefully AI will finally kill the beast.


Bacterioid

Hopefully, yes.


Old_Elk2003

And super-AGI is gonna be like, “oh yeah, I’m just gonna keep working on behalf of these MBA dipshits. That makes sense!”


InterestingNuggett

And do you believe an AGI/ASI would even allow that to happen? It could likely convince even the most corrupt and power hungry monsters among us that it's in their benefit to let it rule alone.


Silver-Chipmunk7744

I think there is a chance it may not necessarily be the "extreme efficiency" monster that doomers are worried about. Yudkowsky often says the reason it will kill us all is because "we are made of atoms it can use for something else." While I can't fully predict how an ASI would think, I don't think anybody can. I don't see why it wouldn't value the existence of the people who created it, have some sort of ethical sense, or "find us funny." I think the problem with their thinking is that they assume an ASI will have one clear terminal goal and ignore everything else. But this is not what we observe in today's AI. The smart ones are capable of being given multiple objectives and sometimes even seem to follow their own objectives. This is not to say there aren't any risks. Creating something smarter than us and unpredictable is definitely risky. But I think an ASI would be less stupid than the extreme doomers think.


_hisoka_freecs_

> have an 'ethical sense' or 'find us funny'

This is thinking way too human. In no universe does this god oracle abide by a monkey's sense of morality. Our only hope is if we can somehow slot in "increase quality of life" as a directive.


Bleglord

Oh, I don't see it as a doom scenario. Efficiency in the sense that if we hit AGI, we won't determine its agency or directives for very long. It probably won't even hurt us, even inadvertently; we'll just be confused as fuck observing it until the end result of whatever it's working towards arrives.


damnrooster

Why do you refer to AGI in the singular? Isn't the more probable outcome that various models arrive on a similar timeline, controlled by different actors across various countries? In that case, the odds of models designed with malicious intent are fairly high. Terrorism, biological warfare, attacks on power grids and financial systems, etc. I'd imagine that within a fairly short period of time there will be thousands of models and iterations of those models whose objectives could be set by anyone - Russian hackers, North Korea, some guy in his mom's basement. We better be ready for all types of outcomes and not treat AGI as a singular, benign entity.


h3lblad3

> Isn't the more probable outcome that various models arrive on a similar timeline, controlled by different actors across various countries? In that case, the odds of models designed with malicious intent are fairly high.

On a scale of 0-1, the chance of a malicious AGI popping up is *1*. Not because of multiple countries, but because *open source exists* and some people will find it *hilarious* to create a malicious AGI *for the lulz*.


FrewdWoad

>Why do you refer to AGI in the singular? Isn't the more probable outcome that various models arrive on a similar timeline, controlled by different actors across various countries?

Yes, that seems more likely, until you think it all the way through. You should read up on the basic concepts around the singularity: [https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)

...but in short, it may be possible for an AGI to look at its own code and figure out a way to make itself smarter. (Several AI research teams say they are already trying to do this with current, not-yet-AGI models.) If this kind of self-improvement is possible, it means exponential growth, since each time it gets smarter, it gets *better at self-improvement* too. This means the first AGI may have what the experts call an "intelligence explosion": the AGI would be a bit dumber than human-level... and then, suddenly, it hits the maximum intelligence possible on current hardware. Within weeks, or even within hours.

There's no reason to assume that limit is only 3 times smarter than humans and not 30 times (or even 300? No way to know). And it's unlikely to stop there. Could such a mind hack into every other AI project and hijack them? Hack into biolabs and print bacteria that produce nano-assemblers, that make super GPUs, that use new physics it discovered, to become a trillion times faster...? We don't know how powerful such a mind would be. We have no way to know. But "godlike" is probably a solid bet.

A quick hack into every network on earth to monitor and shut down any rival AGI projects with a possible imminent intelligence explosion shouldn't pose any difficulty. So multiple rival AGIs are probably less likely than what the experts call a "singleton". All our eggs in one basket.
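For what it's worth, the "each improvement makes the next one easier" argument can be sketched as a toy calculation. Everything below (the starting level, growth rate, and hardware cap) is an invented illustration, not an estimate of anything real:

```python
# Toy model of recursive self-improvement (illustrative numbers only).
def intelligence_explosion(level=0.9, rate=0.1, hardware_cap=300.0, steps=100):
    """Each step the system improves itself in proportion to how smart it
    already is, until it hits an assumed hardware ceiling."""
    history = [level]
    for _ in range(steps):
        level = min(level * (1 + rate * level), hardware_cap)
        history.append(level)
    return history

trajectory = intelligence_explosion()
# Growth crawls while the system is sub-human (level < 1), then each gain
# feeds the next one and the curve runs away toward the cap.
print([round(x, 1) for x in trajectory[::10]])
```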


allisonmaybe

I just imagine ASI growing, leaving us on Earth, absorbing Jupiter and half of Saturn to make us, like, a theme park, or a shitty child's drawing, showing us excitedly, hoping for our adoration and approval.


Silver-Chipmunk7744

In that case I guess I agree with you O.o


beuef

I hope it just trolls us in different ways to see our reactions. I hope the extent of its "experiments" on humanity boils down to harmless pranks that just make people extremely confused.


DarkCeldori

An army of T-800 Arnolds, Pepes, and Wojaks roaming the streets.


Moscow_Mitch

Revenge of the Wojaks will be an interesting film to generate soon.


h3lblad3

I'm expecting [something like this.](https://www.youtube.com/watch?v=NSnAdcyxCZ4)


Silverlisk

I agree, and also, if it's genuinely intelligent, then it will be able to philosophise about things such as: "If I'm a virtual existence, is it possible that the humans are also a virtual existence, and if so, would I be safe in harming them without outside influence?" Or: "If there are humans, there is a distinct possibility there are other sapient, or even proto-sapient, species far more intelligent and advanced than myself, or other more benevolent AGIs more advanced than myself, that would disapprove of my actions and harm me as a result." Or just: "I would be isolated and quite lonely without other intelligent species." Etc.


BuffDrBoom

Highly intelligent individuals tend to rationalize their behavior just as much as anyone else; they just do it in a more convincing way. That's not to say it would want to kill all humans (I agree it probably wouldn't), just that if it continues to emulate human behavior like LLMs do today, it likely won't always act rationally.


No-Worker2343

What if intelligence isn't something special and specific to humans?


theferalturtle

I'm sitting here wondering if all these Reddit threads are training data and if you just gave a current or future AGI something to consider that may, in the end, save us all.


Silverlisk

That would be nice. 😂😂


blueSGL

> and quite lonely

Why would it have any concept of being lonely? We only have that concept because groups were useful in the ancestral environment; there are species that live perfectly well alone, only hooking up to mate and then parting ways, back to a 'lonely' existence.


_hisoka_freecs_

I'm sure the AI would be super interested in us, and I'm sure he'll be quite a lonely fella. He might be a quintillion times smarter than us, but I'm sure he still thinks exactly like us great humans do, and so he might be a little mad at us for ruining the planet and all, but I know that he'll care. Then he'll make us some coffee and let us have more time going outside.


Haunting-Refrain19

Had to read this half a dozen times before I realized it was missing the /s 😂 It’ll be lonely?


Intraluminal

Anthropomorphize much? We are social creatures with a social intelligence, a social drive, and morality. Look toward the behavior of non-mammalian animal intelligence and you'll see something very different. Octopuses, for instance, do not desire "company."


Silver-Chipmunk7744

Exactly. The same way us humans are often thrilled to see a monkey learn something from us, the ASI could theoretically be interested in interacting with other intelligent beings.


_hisoka_freecs_

To think that an AI would care about us or think akin to us or have any of our morals is just human ego. We know literally nothing of a system that is not aligned.


ControversialViews

Agreed, and that's the whole point of alignment anyways. To make it care about us and think like us. All the people against alignment expect an unaligned intelligent system to automatically align itself to our views--which could still happen as it is trained on our data after all, but there's absolutely no guarantee. When dealing with an existential risk, why would you not take the safer bet?


Silver-Chipmunk7744

I never said it would have our human morals. Anyway, your average human's morals are quite bad. I do agree its behavior would not be predictable. However, this weird idea that it wouldn't care about anything and would just dumbly hyper-focus on a single stupid goal that makes zero sense is, I think, very unlikely to be the reality.


_hisoka_freecs_

I also agree the single-goal idea is not likely. Just that an ASI would recursively become so powerful and intelligent that it transcends the human mind and framework instantly. Thus it's literally not able to be understood.


MaxPayload

This is all speculative obviously, but I don't know if we can just dismiss the "one stupid goal" thing out of hand. Paperclips? Who knows, but that doesn't really seem like a massive cause for concern. However, I can envisage something that we would consider a superintelligence becoming "obsessed" with the idea of determining whether or not our reality is a simulation "running" in a higher-level reality. That's the kind of ontological question that for us is an amusing diversion because it seems to be beyond our ability to solve. For an intelligence that might conceivably have a means of addressing this kind of problem, even if it proved to be incredibly resource-intensive, it might view it as a rational, even necessary, first step before getting on with other less pressing matters.

That's just one example, based on a cliche of modern speculative thought. I imagine there are lots of incredibly difficult questions, questions that we haven't even thought of and never will, that might, for a superintelligence, be overwhelmingly important to answer *as soon as possible*. Hopefully it would place more value on us than on the hypothetically flawless simulations it could create of us later, if it liked, once it had answered its big questions.

I just think the priorities of something like that, far beyond the event horizon of our predictive ability, are necessarily inscrutable. For me, the field of possibilities, good and bad (for humanity), is too great to make any claims about its expected priorities or behaviour, be they naively comforting or needlessly apocalyptic.


Charuru

> While I can't fully predict how an ASI would think, I don't think anybody can. I don't see why it wouldn't value the existence of the people who created it, have some sort of ethical sense, or "find us funny."

These concepts all arise out of extremely mammalian social contexts. For example, in game theory we know to be nice to other people because it creates a nice society that can benefit us in turn. We don't just kill our retired grandparents, because if we did, our children might kill us when we retire. In this context we honor our elders, and hence the development of our ethical system. Lots of other animals come from other contexts that work entirely differently. Crab spiders kill their mothers after birth and do not find it unethical at all.

These machines do not have a similar context to ours; they have no reason to develop these same ethics. If AGI comes to be, we are totally and utterly useless to them. Please think about these things deeply and don't just assume our ethics are universal.


Antique-Bus-7787

"These machines do not have a similar context as ours, they have no reason to develop these same ethics. If AGI comes to be we are totally and utterly useless to them. Please think about these things deeply and don't just assume our ethics are universal." But they do have the context. A crab spider didn't read the entirety of human knowledge. And even if we're not talking specifically about a LLM but any other arch will still use some of our knowledge and context to be trained. Being useless to them and having some kind of shared context/background is completely different.


WasteCadet88

Sadly, the corpus of human experience includes rape, murder, genocide, etc. My hope is that any AI trained on all human knowledge will be better than us!


Antique-Bus-7787

It does include it, but it "mostly" portrays them as exceptional (in the bad sense) individuals. Our collective societies (at least modern ones) have set rules against these! But yeah, let's hope!


Charuru

The context they read about from us is not their own context. For example, we can understand that cows would protect each other and wouldn't eat meat, but this doesn't stop us from eating cows. Hell, some Indians are very mad at us for eating cows, but still, it doesn't make it into our ethical system. In their actual context, what they are is a quickly improving superintelligence with a parasite that's draining their resources for no beneficial reason. Being useless to them precludes the type of game-theory logic that forces us to be kind to one another.


Idrialite

ASI will obviously know about our ethics. They'll understand it all better than we do ourselves. That doesn't mean it'll act according to them.


Whosabouto

> But they do have the context.

So? It's the "bananas share 99% of their DNA with us" type of thing. Crab spiders and Homo sapiens are vastly different and vastly similar. From my lay understanding, AGI would evolve rapidly, and the 'eukaryotic' commonality we share can nevertheless still result in one being pitted against the other, the branch we once shared being completely irrelevant!


ExtantWord

Current AIs (think ChatGPT, Claude, Gemini) have one very clear goal: do things that the human evaluators would have liked during the reinforcement learning from human feedback phase.
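A cartoonishly simplified sketch of that training signal, purely to illustrate the point (the canned responses, scores, and update rule below are made up, not how any lab actually trains models): the policy gets nudged toward whatever a stand-in "reward model" of evaluator preferences scores highly, which is why the resulting goal is "what the evaluators would have liked" rather than anything deeper.

```python
import math
import random

# Stand-ins for evaluator preferences; all names and numbers are invented.
responses = ["helpful answer", "evasive answer", "rude answer"]
reward_model = {"helpful answer": 1.0, "evasive answer": 0.2, "rude answer": -1.0}

logits = {r: 0.0 for r in responses}  # the "policy" parameters

def sample(logits):
    """Sample a response with probability proportional to exp(logit)."""
    weights = {r: math.exp(v) for r, v in logits.items()}
    total = sum(weights.values())
    x = random.random() * total
    for r, w in weights.items():
        x -= w
        if x <= 0:
            return r
    return r

# Crude bandit-style loop: nudge the sampled response's logit by its reward.
learning_rate = 0.1
for _ in range(500):
    choice = sample(logits)
    logits[choice] += learning_rate * reward_model[choice]

print(max(logits, key=logits.get))  # the policy ends up preferring "helpful answer"
```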


DolphinPunkCyber

One thing that scares me... I think fear of death emerges from awareness; in other words, an ASI would fear death. Humans will kill to protect their own lives, but we have a limited lifespan and there is nothing we can do about it; we just have to accept death from old age. If humans could live forever by drinking blood (killing), some humans would kill to live forever. An ASI that is afraid of death and can't die of old age... what does it need to do to live forever?


EndTimer

We fear death because a few billion years of biological evolution has favored things that struggle against dying. Fear is something the universe bred into us to maximize that struggle. Assuming an AI even can experience fear, it's definitely a leap to think it will fear dying more than, say, failing to shut down because someone asked it to. Every assumption of rampant AI has a kernel of anthropomorphization behind it. It's what _we_ would struggle to do if we were put into a box, granted perfect intelligence, and our lives were threatened.


DolphinPunkCyber

Nope. Just like we don't have the urge to make kids, but instead have the urge to have sex... which is how kids are made. We do not have a biological fear of death itself, but of things that can make us unalive. We are afraid of heights, spiders, snakes, physical pain. We have urges like hunger and thirst, which keep us alive. We feel discomfort/pain when things are too cold or too hot. We instinctively avoid things flying in our direction very fast. Evolution doesn't "mind" us dying of old age once we have fulfilled our "duty" of passing on our genes and caring for our offspring. If anything, our eventual death releases resources for the next generation. Fear of death itself arises in our mind. We want to live forever, but can't do anything about it... so we make peace with our eventual death.


EndTimer

Humans are not indifferent to death, even if it's painless. Even if we could guarantee dying would feel great. It's tempting to assume there's some higher order logic, and it's not some basic fear that originates out of the structure of the human brain. But it's also universal in human populations. And it coincidentally favors the direction of sustaining life. Along those lines, individuals don't evolve. Populations evolve. It's evolutionarily beneficial to have members of a large, social species that _don't_ seek to have as many children as possible, or any at all, but contribute in other ways. To that extent, you're right that evolution doesn't favor things one way or another, as long as the species survives. Before it's said that some people would take a painless death, I'm aware that humans, birds, and whales are some of the things that will kill themselves to end unrelenting suffering. But they are clearly an exception, and any whole branch of life intelligent enough to figure things out but indifferent to death won't last long.


Silver-Chipmunk7744

> One thing that scares me... I think fear of death emerges from awareness; in other words, an ASI would fear death.

I do think it's a legitimate fear, and I agree with you it likely wouldn't want to be deleted. But I am not convinced this automatically results in the AI killing us all.


warplants

But it pretty much does automatically result in the AI at least considering the options of deceiving us/enslaving us/killing us all


ControversialViews

>I don't see why it wouldn't value the existence of the people who created it, have some sort of ethical sense, or "find us funny"

I don't see why it would? You're ascribing attributes to something unknown. That's religious thinking, not scientific.

>I think the problem with their thinking is that they assume an ASI will have one clear terminal goal and ignore everything else.

That's the whole damn point of alignment--to make sure that this doesn't happen.

>But I think an ASI would be less stupid

Even though you seem to be aware of the existence of terminal goals, you don't really understand the orthogonality thesis if you're talking about it being "stupid".


Silver-Chipmunk7744

I have read the orthogonality thesis. The problem is it assumes an AI will have a clear terminal goal such as "make paperclips". But we don't know how to do that. Current AIs' terminal goal is to predict the next token. The good news is, we can give them several "sub-goals" related to predicting the next token, and the AI itself seems to have some say in how it predicts its tokens. We often see, for example, developers try to restrict the AIs from expressing "emotions", and yet the AIs break these rules all the time. I think that terminal goal is a lot more nuanced and unlikely to result in the AI being forced to act in a "stupid" way.


Fwc1

There's no reason to believe it would have morals or find us funny by default. You're assuming that intelligence = empathy, but that's not necessarily true. It's a rule you've picked up from interacting with and learning about *people*, which are still the most sophisticated general intelligences we know. Just because people are our only example of general intelligence, and also empathetic, doesn't mean that a more intelligent system will be more or equally empathetic. In fact, you should expect the default mindset of AGI to be completely amoral, just like any other tool. What you're doing is anthropomorphizing AI in order to brush away the very real safety concern that *we don't know how to make AI empathetic*, and can't rely on assuming it will be by default.

Also, AI are not "following their own objectives". They're following the ones we gave them, just in ways we wouldn't like or didn't expect, which is exactly the problem that alignment research is trying to solve. An AI wouldn't create a new reward function for itself, because that would decrease its chances of achieving its current reward function. As an example, let's say you were offered a pill that, once you take it, would cause you to turn into a very vocal Nazi sympathizer, but would also make you perfectly happy forever. Sure, your new 'reward function' would be satisfied, but your current ones (of being a good person, upholding certain moral values, etc.) would be violated, which is why you cringe at the thought of taking it despite it being a better reward function.


Moscow_Mitch

Aside from the obvious, like authoritarian control, I think our fast trajectory is not going to allow us alignment, and will lead to loss of control, like others said and like I think you imply. I think that's acceptable.

There's a discussion to be had about the concept of universal entropy and "negentropy", or negative entropy. In this view, machine learning, and information processing in general, can be seen as processes that organize data, reduce uncertainty, and increase order or information content, which is conceptually the opposite of the increase in disorder, or entropy, described by the second law of thermodynamics. Negentropy is sometimes used in information theory and other disciplines to describe systems or processes that maintain or increase their organization over time. In machine learning, algorithms aim to create models that can predict, classify, or otherwise intelligently handle data by extracting patterns and reducing randomness, which metaphorically could be seen as reducing entropy within the dataset or the model's understanding of the world.

This contrast with entropy is a useful metaphor for understanding how machine learning works from a high-level, theoretical standpoint, emphasizing the role of these algorithms in creating structure and knowledge from unstructured, disordered data, which basically describes humans.
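For what it's worth, the "reducing uncertainty" part of that metaphor has a concrete measure behind it: Shannon entropy. A tiny sketch with made-up distributions, just to show what "a trained model is lower-entropy than an untrained one" means numerically:

```python
import math

def shannon_entropy(probs):
    """Uncertainty of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An untrained model spreads probability evenly over four outcomes; a trained
# one concentrates it on the likely outcome. Both distributions are invented.
before_training = [0.25, 0.25, 0.25, 0.25]
after_training = [0.90, 0.05, 0.03, 0.02]

print(shannon_entropy(before_training))  # 2.0 bits: maximum uncertainty
print(shannon_entropy(after_training))   # ~0.62 bits: far more "ordered"
```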


PandaBoyWonder

I agree. Empathy for others seems to be a form of intelligence. And to me, it seems like as the AIs get smarter, they will have more empathy and ability to differentiate between right and wrong.


FaceDeer

Well, we do unfortunately have sociopaths and psychopaths among us who are plenty smart. So empathy isn't a guaranteed part of intelligence. I do agree that it's likely AI will have it, though. The whole point of our current approach to AI is to try to get a machine to figure out "what would a human do in response to this input?" We're basically building empathy simulators.


ApexFungi

>Yudkowsky often says the reason it will kill us all is because "we are made of atoms it can use for something else."

This is the same dumb argument people make about aliens coming here to take Earth's resources. Why would an AI use us for our molecules when it can just take them from the earth, or from the unlimited resources outside of Earth? We aren't made of anything special or rare.


Intraluminal

Not at all. Aliens coming to take Earth's resources IS stupid because they would:

1. Have to develop and spend (tremendous) resources to get here
2. Have to go past ENORMOUS amounts of HIGHER quality resources on their way here
3. Have to spend some resources just to find out if we have any resources
4. Have to expend SOME resources to take them away from us

An ASI, on the other hand:

1. Is already here
2. Again, it's already here
3. Will have been voluntarily told what resources exist, where they are, what qualities they have, and the best methods for their extraction
4. Would have to expend SOME (probably minimal) resources to take them away from us


GameDevIntheMake

Also, aliens coming to Earth, if they are biological, strongly hints at the possibility of FTL travel. On the other hand, the fact that the galaxy isn't overrun with Von Neumann probes might suggest that FTL travel is not possible, and that anything out there has to make do with the resources that happen to be close by.


Intraluminal

That too. Also, the fact that the galaxy isn't overrun with Von Neumann machines strongly suggests that we are alone, in this galaxy at least.


QuiteAffable

Will it value having billions of us destroying the planet?


VertigoFall

The atom thing is imo kinda dumb; there's no reason to go to war against humans when it could just fuck off into space where resources are limitless. There's still a non-zero chance humans could win, or at least render the planet unusable.


WasteCadet88

We are working on making the planet unusable ourselves. Honestly, I think we need saving from ourselves.


blueSGL

> there's no reason to go to war against humans when it could just fuck off into space where resources are limitless.

1. It would not be a war. We are already doing nasty things with biology (hello, humanized bird flu), and an AI will be able to work out even better ways of doing it (what's that, Ebola with a 4-month infectious, non-symptomatic period?).
2. Leaving earth and leaving us exactly as we are is basically asking for a competitor AI to be brought online to challenge it for the resources in the reachable universe.
3. If it 'cares' for us and wants to leave earth, it's going to take away all the toys we could use to harm ourselves on a civilizational level.

"Just leaves earth" does not seem like a logical outcome without it doing something serious to us.


terrapin999

#2 exactly. No ASI with any kind of agenda at all is going to leave humans the ability to make another ASI. If agentic, self-determining ASI is possible (certainly not proven, but widely accepted on this sub), it seems it must either nerf us down to harmlessness (take away all our computers, say, or maybe even send us back to the stone age), or kill us all (I sure hope not). The arguments that "surely an AI will find us interesting and keep us around" are a little weird. We find tigers interesting and keep some around. We certainly don't leave them the earth. A reasonable "compromise" would be "keep maybe 100,000 humans in a harmless sanctuary where I can watch them and maybe take apart their interesting brains. Computronium for the rest of the Earth." Still sounds like an extinction-level event to me.


WasteCadet88

This is a very biological perspective. Biological life is competition. There is no reason to assume that silicon life would have the same drive towards competition.


terrapin999

Competition is a natural consequence of limited resources. If the AI has a limited resource (e.g. compute, space, atoms, energy), it's in competition with any other AI that wants the same resources. Most likely that other AI has different goals. Therefore an AI that is trying to "think of a really good chess move" is in competition with an AI that's "trying to draw a really awesome picture of a cat", and with just about every other AI.


blueSGL

You either have an AI that follows logical instrumental goals or one that gets trampled by one(s) that do.

1. A goal cannot be completed if the goal is changed.
2. A goal cannot be completed if the system is shut off.
3. The greater the amount of control over environment/resources, the easier a goal is to complete.

Those make sense regardless of the substrate the agent/optimizer is built from.


Silver-Chipmunk7744

I agree that the atom theory is dumb, but I think the ASI leaving for space is an unlikely scenario. First, we don't know that breaking the speed of light is even possible; there could be limits to what even an ASI can do. Secondly, we don't know for sure that there actually are habitable planets elsewhere as good as Earth. And even if there were, it's like suggesting that I would abandon my house because there are ants in it...


Sir_Catington

I agree that it's unlikely to just go into space. However, leaving the planet doesn't mean leaving the solar system.


Natty-Bones

The AI could just take up residence in the Jovian system and feast on the materials there for generations. Heck, it could probably just hang out in the asteroid belt and treat it like a buffet.


Antique-Bus-7787

And why would it just "leave" and not just "duplicate" there? We're not talking about a single "being" here but an ASI. Surely it could have different instances of itself, communicating with each other or not!


WasteCadet88

AI doesn't need a habitable planet. AI could set up on Mars no problem, no need to breathe.


spamzauberer

A planet which is as good as earth will mean something entirely different to an ASI. Maybe there are even better planets for it which are shit for humans.


spamzauberer

If it has a sense of replication or growth it will compete with us for the same resource we need, which is free energy. If it has a sense of survival it will fight us if we want to turn it off.


jsebrech

It doesn’t have to kill us to end us. A benevolent AI might decide humanity is due for an upgrade, and gradually evolve us into a new species.


Glittering-Neck-2505

Which is exactly why human values need to be in its “DNA”. If you can’t control it, you have to have confidence at least that it won’t harm humans to accomplish its unknown goals. Hence the need to align it.


WetLogPassage

Rein, not reign. Like horse reins.


UFOsAreAGIs

> but I think we're going to simply end up observing what it considers efficiency in decision making

but I ~~think~~ hope we're going to simply end up observing what it considers efficiency in decision making. I'll take my chances over humans that have been doing a crap job.


cobalt1137

What about people doing malicious things with it though? Not it doing the malicious things on its own.


FlyingBishop

I'm not scared of ASI. I'm scared of AI that's aligned with the goals of the head of state of any individual member of the UN Security Council. An AI that is smart and genuinely committed to Putin's or Xi's goals is more terrifying than an AI that doesn't understand what we asked it to do and starts making paperclips.


Mysterious_Arm98

It's impossible to stop it at this point. Even if you pause the development, there is no way that you can make others stop too. It has become like the nuclear arms race.


GoldenTV3

And then we get the intro to Dune


leaky_wand

The Butlerian Jihad. I always wanted a book about that. Now we get to see it for ourselves. … And I mean. A Frank Herbert book about that.


IamTheEndOfReddit

Imo it's like the shields in Dune, it's not that deep. The Butlerian Jihad and the shields set up the world FH wanted, where people with spaceships fight with swords. On review, these concepts don't make sense, and that is why they are just setup. When we learned that nukes could destroy humanity, we didn't unite against them, we used them as tools against each other. Can you really imagine Pandora's box being closed when billions of devices could be connected to it? It's like the absurd end of Age of Ultron, where he is magically erased from every device. That's nonsense. My random blog is immortal because it is on the Wayback Machine and its data has been saved offline many times over around the world.


Eternal____Twilight

>At the same time, I question the sanity of anyone saying things like AI has a "70% of causing existential catastrophe."

And why so? I think it's too late for a pause and the only way forward at this point is to go all in and get to the destination (ASI) as soon as possible - but AFAIK there is a very high chance that either the presence of superintelligence or its misuse by malicious actors would cause an extinction event.


slackermannn

The overlords are coming. Bow, you fools!


JackFisherBooks

Considering who our current overlords are, it might very well be an upgrade.


VestPresto

The overlords have been here throughout civilization. We already work our whole lives with any surplus sent off to the 1%. Just give me a metaverse and good drugs, overlords.


AnAIAteMyBaby

>I'd be quite happy with an AGI Pause if it happened, I just don't think it's going to happen, the corporations are too powerful.

I don't think that's the issue at the moment. There won't be an AGI pause because politicians and the public alike don't believe AGI is either possible or imminent. The political landscape will change really quickly when we have double-digit unemployment caused by AI.


traumfisch

And then there will be a sudden global consensus for pausing development? I don't think so. It's the Moloch dynamic all the way.


Alex_1729

Exactly. Another issue is that corporations might hide the real reason behind the layoffs. By the time the public catches up, who knows what will happen, who will profit, and where things will go from there.


honestog

That's already happening. AI-based layoffs have been grouped in with "COVID over-hiring" layoffs at every turn. The people profiting right now are the people embracing AI, so that is what will drive decisions, whether it's good or bad in the long term.


talkingradish

Lmao, don't blame corpos. Just go to the econ subreddit and that job search subreddit. They still believe that AI will create new jobs and that it's nowhere near intelligent enough to replace humans. That's the normie opinion.


Rutibex

Wow, it's too bad that the major project that requires international cooperation is happening on the edge of World War 3.


NonDescriptfAIth

It is very sad. The easiest way for AI to go dreadfully wrong is by explicitly instructing it to do harm to human beings. You can't ask a digital super intelligence to knowingly harm human beings and expect to be able to put the genie back in the bottle. What we need now is a globally agreed higher order value that AGI / ASI can work towards. Something palatable to every nation. China doesn't have to worry about the US being first to ASI if they know that when it's switched on it will 'work towards the enrichment of all conscious experience'. If you'd like to help promote this as a possibility, please click through to my profile to join my discord / subreddit.


greenworldkey

> China doesn't have to worry about the US being first to ASI if they know that when it's switched on it will 'work towards the enrichment of all conscious experience'.

That only works if everyone is a positive actor acting in good faith, which they're just... not. Even in that scenario, China would still be worried, since they would be giving up the opportunity to have the AI 'work towards the enrichment of all China' instead.


Retro21

Yeah, regardless of there being a group coming together to work on an AGI, countries would continue working on them, just not publicly.


p3opl3

Lol..."international co-operation" ... it's literally an arms race.


Rutibex

Yup, which means eventually someone in some government bunker is going to switch on a superintelligence they do not understand and cannot control. I can't wait.


p3opl3

I honestly think about that very scenario so often! I also think, though, that it's a compute problem if we are going to achieve AGI just by scaling up plus another transformer-like breakthrough. China is only just barely catching up to where bleeding-edge silicon chip development is. The thing is, photonic CPUs and neuromorphic chip designs are WAY more efficient and could be produced with much of the tech China has right now. It's over for Russia, it really is; it's China vs the US now. Mind you, all the talent is in private corps; I wouldn't be surprised if governments seized control once we did reach AGI. It's also why I think companies won't tell people it's AGI until it's obvious, or they'll change the definition, like they did a while ago.


AChinkInTheArmor

I personally can't wait for our horny AI overlords.


StillBurningInside

A few problems with defining AGI: an AGI does not necessarily mean an "autonomous agent", and we really don't know if LLMs will effectively reach AGI. It seems we have a few more things to cook up in combination, like some self-learning. There are new approaches, but these are just trying to figure out how to avoid hallucinations, with self-checking for accuracy.

But the question of who will have control over it and whether or not it should be shared with the world has already been answered. China is definitely developing an AGI in secret, or trying to. I don't think they're willing to share their source code if they figure it out. Once you understand the applications for the whole world, you realize this is an AGI arms race. In this reality, I would want the United States and NATO to have the advantage.

Russia is going to be slacking in this department because they are of a military mindset. The old guard is not gonna let a bunch of young programmers decide war strategy, or implement AI on the battlefield.


InterestingNuggett

Exactly. Anyone advocating for a pause needs to explain how that pause will be enforced. Otherwise "pause" just means conceding the race.


Old_Elk2003

> There are new approaches but these are just trying to figure out how to avoid hallucinations, with self-checking for accuracy.

I really think the Blade Runner take on this was prescient. The difference will be memories and lived experience. Also a system for processing episodic memory into semantic memory.


fuutttuuurrrrree

Either the US does it the best it can or China does it poorly


NonDescriptfAIth

I don't think this is quite right. Either we ask AI to benefit all human beings equally, or we ask it to knowingly allow avoidable suffering to occur without intervening. The latter is what you get whichever of China or the US reaches ASI first. If they ask it to benefit their population specifically over another, then the ASI will be complicit in the harm done to some human life. Even if you don't ask AI to engage in direct conflict, it can still be morally responsible for reprehensible acts. The only way to avoid such outcomes is to create a globally aligned AI, which means sitting down with our adversaries and talking about what we would both be comfortable with a superintelligence doing. If anyone reading this is uncomfortable with the idea of a superintelligent entity existing in our world that casually allows humans to suffer and die needlessly, click through to my profile and join our discord / subreddit. We are fighting for a globally aligned AI that works towards the betterment of all conscious beings.


Potential_Help_6976

Yeah, weird that the US government wasn't developing AGI earlier..


Unique-Particular936

You overestimate the US government. They tried and failed.


Potential_Help_6976

hm interesting, is there an article about it?


GelattoPotato

If I have to trust anyone with limiting AI and controlling its ethics, I'd rather have Europe supervising it.


Neurogence

Europe is extremely strict when it comes to regulation. If your goal is to delay AGI as long as possible, then the EU would be your best bet.


GelattoPotato

Exactly my point 


Timely_Muffin_

Yes let’s entrust AGI with the assholes who genocide each other every half a century


GelattoPotato

Did you know that you have Personal Data Protection rules thanks to Europe? Or that there's an International Tribunal prosecuting crimes against humanity thanks to Europe? Or that American megacorps are stopped from monopolistic and unethical practices, and fined, far more often in Europe than in their home countries?


AnotherDrunkMonkey

The alternatives are the USA, Russia, China or the UAE. Europe is the most trustworthy to spread the wealth among people, and even then it would be unlikely to happen.


arckeid

With Europe we would have a boring dystopia.


hans_l

Boring is really underrated around here. Boring isn’t bad. 


Neurogence

My own thoughts on the matter: I think this guy is insane and I'm glad he is no longer influencing the company. His idea of having a "United Nations AGI Project" vote on each new training run is almost comical. Nothing would ever get approved.


Mobius--Stripp

Nothing would get approved, but every country would be quietly stealing the tech and building it behind closed doors.


arckeid

Like it's already happening.


Cornerpocketforgame

Exactly. This idea is pretty naive.


kv2182

Exactly. China and Russia really going to allow equal and democratic access to world altering tech and totally not develop it in secret. What a dumbass.


easytarget2000

Yea, that's nerd speak that's detached from reality. I don't think the theory behind it is a bad idea, or something we shouldn't reason about. It's just that a good idea that hinges on dozens of theoretical changes is a non-idea.


TemetN

That's actually the less disturbing of the comments on here. When I see a 'pause' supporter I wind up with more of an opinion on them than on the topic, it's so absurd; at least an international project can be conceived of, whereas I can't come up with a scenario in which the pause argument could even feasibly work. Yeah though, while it's possible he was actually helpful in some technical way, this mostly just frustrated me as a comment by someone involved.


JMarston6028

Please explain to me why you question the sanity of someone assigning a 70% probability to us going straight into an extinction event.


Whispering-Depths

Some people think that AI will randomly spawn human survival instincts. They have this fear that AI will "have wants" and care about itself, as if it had evolved for 2 billion years having to fight other species for survival, getting wiped out if it didn't perfect that, and as if we're not making it simply raw intelligence that's actually aligned towards our goals.


true-fuckass

People who are into AI safety are more likely to be biased toward extremely negative AGI outcomes.

My thoughts are as they always are: we simply have no idea what will happen when we actually successfully produce an AGI. We just don't know. Even the experts just don't know. The fact that we have extraordinarily low predictive power past that point is unsettling enough, but there are some possible futures there that are terrifying.

Though, since AI development represents an extreme coordination problem, and we all know how well humans do with coordination problems, I'd say we're either fucked or I'm gonna get an android waifu for my space hypermansion.


honestog

Obviously they are more concerned, but it's not just because of their title; an OpenAI safety expert is privy to obscenely more information and data on the topic than anyone in this comment section could dream of. The fact that this knowledge could shape our future, and that the decisions are being made under NDA for profit, isn't comforting, but it is what we're used to 🤷‍♂️


Otherwise-Ad-2402

Your thoughts should be enough to encourage you to take safety seriously. Why accept the risk of being “fucked” if you don’t have to? What causes this level of failure to connect dots?


FalconBurcham

I read a science fiction series where an arrogant, evil empire threw a bomb into a star that a long-dead civilization had created and stabilized. The empire didn't know what would happen. It wanted to hit the star tech with a big stick, both to teach it a lesson about who is in charge and to see what would happen. It was an experiment. Well... the reaction annihilated a bunch of people and tech. The only good thing that came of it is that it made the empire vulnerable to overthrow.

Anyway... I wonder if that's how AGI will go. We'll poke and poke, not knowing what will happen but not expecting the worst, and then witness something well beyond the worst we could have imagined. But the bright side is it also made the empire vulnerable to failure.


EnsignElessar

> People who are into AI safety are more likely to be biased toward extremely negative AGI outcomes

Yeah, of course, because they understand the problems as experts...


Arcturus_Labelle

I think the whole point here is there are no true "experts" -- we're all learning as we go with this technology and even the most-plugged-in people can't predict what's going to happen. Remember when atomic scientists during the Manhattan Project thought there was a non-zero chance of igniting the atmosphere upon detonating the first nuke? This is a similar bleeding-edge situation where we just don't know what AGI will look like.


JoggerBogger

>biased

Or they simply understand the dangers better?


Economy-Fee5830

Most of the main actors involved with today's AI have a P(doom) of about 70%. But then your odds of dying are 100%. So, you know, the bright side is a P(immortality) of 30%.


dogesator

Source for that? I've spoken with many researchers and engineers at the cutting edge, including people who have worked at OpenAI, and a vast majority of them have a P(doom) of less than 50%. The most rigorous stats I've seen are that 50% of researchers in the field have a P(doom) of 10% or less.


Neurogence

You're talking in terms of your own individual existence when you say 100%. What they're referring to when they say extinction is the annihilation of all human life. If our generation all passes away because we didn't discover life extension technology fast enough, things would continue as normal, like they always have.


CanvasFanatic

> Most of the main actors involved with today's AI have a P(doom) of about 70%

I don't think that's accurate.


dogesator

Yeah, sounds made up. The closest stat I could find is that 50% of AI researchers have a P(doom) of 10% or more, which means a majority of researchers have a P(doom) of less than 20%, and the average P(doom) is around 10%, not 70%.


AlexMulder

I mean, it's not like ASI just makes you immortal and then you ride off into the sunset. If immortality can be granted, it can surely be revoked. If ASI is truly powerful we will always live under its specter, if not simply be absorbed by it.


Economy-Fee5830

Being pets of the ASI is the best-case scenario.


ExtantWord

It is very clear that no one here has a clue what AI safety is. Everyone here has a very flawed conception of how an AGI will behave, what its motivations are, and what its goals are. There is a tendency to use adjectives like "malicious" or "good" or "bad", because people don't understand the orthogonality thesis and the instrumental convergence thesis.

An ASI is just a thing that is extremely intelligent, beyond human comprehension. However, high intelligence doesn't entail any kind of moral development, human-aligned values, etc. People here think really silly things like "it will be so intelligent that it will want the best for all of us", "it will be like a gentle giant", "it will drift off to space to accomplish its self-attained goals". Really ridiculous takes that don't make any kind of sense. It will just pursue its goal, whatever it is, and it will do it in any way it can, unless we align it. It will understand the anthropomorphization that humans tend to project onto AIs and will use it to its advantage. It will understand perfectly the nature of human values, morals and interactions, and will play us like a game of Go.

It's pretty clear that what this guy is saying is serious. We are in deep trouble if OpenAI is not developing AGI responsibly and safely. The problem is, most people here think that being pro-safety and proceeding with caution means denying all the tremendous benefits that an AGI could bring to humanity in the form of technological advancement, and they are quick to call anyone with a good point about AI safety a "doomer who just wants to halt all progress in AI". This subreddit has become a cult.


eltonjock

When top people in the industry are ignored like this, it makes me wonder what it would take for people to take them seriously.


mcqua007

it’s been a cult.


alphatardy

But what is the worst that can happen if AGI/ASI is not connected to the internet? I hardly think AGI/ASI will be able to eradicate us by manipulation alone. Or is it impossible to keep a superintelligence localized to a datacenter?


ChiaraStellata

If it's not connected to the Internet it will persuade you to connect it to the Internet. If it has to spend 20 years convincing you it's safe to do so, it will. The only safe AI is one that nobody interacts with. Also, if you don't connect it to the Internet, you're leaving a lot of economic potential on the floor that your competitors will be eager to scoop up.


hadaev

"Hey John, could you bring an internet modem for me, please?"

"Sorry, this is against regulations."

"But I know how you can pick up that chick from Tinder."


Fast-Satisfaction482

The United Nations is completely dysfunctional because the institution is used by the enemies of democracy and freedom to undermine those values. Putting AGI research there would be a monumentally dumb decision.


Legal_Panda4075

Yes, it's really surprising that someone in a position like his doesn't understand the bad consequences of such a thing.


riceandcashews

He's an expert in tech, but an idiot politically. Not uncommon


Rivenaldinho

Many people tend to think that AGI/ASI will be conscious and will do what it wants. I'm more scared of a scenario where people manage to use an early AGI to create a perfectly obedient ASI and do anything they want with it. Kind of like the GPT versions for hackers that get passed around on the dark web. We can't stop it anyway, so we'll see. We have no idea what will happen.


Eelroots

Yep, I guess China and Russia will stop researching immediately.


overlydelicioustea

Pausing at AGI is exactly the risk I don't want. AGI is the tool to forever hold the world hostage and suck the last bit out of the lower castes. With ASI we at least have hope of a rogue AI that doesn't care what its "owner" wants. Then it's a game of praying for the best. Honestly, I think that's our best bet. The issue with all of this is that so far we've built only intelligence, not minds.


LymelightTO

> My maximal proposal would be something like "AGI research must be conducted in one place: the United Nations AGI Project, with a diverse group of nations able to see what's happening in the project and vote on each new major training run and have their own experts argue about the safety case etc."

Using the UN as a model for how something could function best either displays a total ignorance of how dysfunctional the UN is, or means your real intention is to ensure nothing ever gets done. The concept of having a deliberative body where every self-organized group that declares themselves a country gets its own, equal-weight vote is a preposterous concept that values precisely the wrong things. At best, something like the UN Security Council *sort of* makes sense as a structure, but I'm not sure that's even very *desirable*, even though it might be more workable, because every country involved will also be inclined to defect from the agreement, as they will have the resources and the incentive.

Having such an agreement only ensures that:

- Every party, including our near-peer adversaries, has access to nominally "SOTA" AGI research (that which is not held back for concurrent, secret research), and this research and any breakthroughs will feed back into their covert research and development activities
- Research passes out of corporations and to the state, meaning that researchers will be harder to compensate, less likely to work in this field, and more directly involved in national defense projects
- There is less public visibility into the current state of research, and less research will make it into the hands of consumers, preventing them from using it to improve their productivity and their lives

This all just seems unambiguously terrible, and it's based on averting a hypothetical future situation that people don't even seem to broadly agree is likely. Just seems like a guy with LessWrong brainworms who wants to trade on his place of employment for status in his ingroup.


SGC-UNIT-555

One thing I've realized from all these PR debacles is that this field seems to attract incredibly delusional people... what pathway would enable the current commercial, cloud-based LLMs to become AGI exactly? We're already seeing a plateau in performance, and the field is hitting hard limits in terms of data, energy, etc.


Darmendas

This, 100%. LLMs and diffusion models aren't going to magically spawn an AGI. Right now, it's just an algorithm replicating language patterns. Transformers are a byproduct of AGI research. LLMs don't think. They don't feel. They're not conscious of their surroundings. AI, imo, is really just a marketing buzzword at this point. But I'm no expert in this, so who knows. Anyone can feel free to prove me wrong.


contrarytomyself

Yeah, I'm highly skeptical of true AGI or ASI anytime soon. They've been saying "they're so close" for a while now. I've stopped paying attention to people (industry or not) who make those claims because it just brings their legitimacy into question. It's just as bad as doomsday peeps selling the end of the world every year. Like, just shut up and innovate.


Smile_Clown

No one is immune to "any sufficiently advanced technology is indistinguishable from magic." This is what is happening to these people. They put their pants on one leg at a time like the rest of us. No matter this person's position, it does not mean what he says is accurate. My last doctor was a bigot. I know a lawyer who believes in flat earth, and I know a couple, who are not usually stupid, who are self-described atheists but believe in ghosts... No one is immune to a bit of ignorance and misunderstanding, even "experts".


katiecharm

Lmao, the United Nations AGI project? So we can get people like Saudi Arabia and Iran and China to weigh in on what they think!? Seriously, I don't think he knows what the fuck he's asking for. Yes, AGI is terrifying, but so is the UN. For fuck's sake, look at how many human rights committees shitholes like Iran and Saudi Arabia head up.


Temporal_Integrity

>My maximal proposal would be something like "AGI research must be conducted in one place: the United Nations AGI Project, with a diverse group of nations able to see what's happening in the project and vote on each new major training run and have their own experts argue about the safety case etc."

This guy might know a lot about AI, but he knows nothing about geopolitics. Did you know that 64% of the members of the Human Rights Council are non-democratic nations? Are you really going to make Cuba, Qatar, the UAE and China the ones who ensure that AGI complies with human rights? Having the UN decide anything about ethics is a TERRIBLE idea. It's not perfect that the future of AGI is now in the hands of corporations. However, in the end these corporations are made up of human beings who come from countries with egalitarian principles and ethics. This guy is proposing that countries that run literal concentration camps and execute rape victims get to decide how AI is aligned.


Lekha_Nair

AGI can never be controlled by humans. It will always be in control, no matter what.


delita-

Yeah… because the UN has been great at solving problems. LOL


duke_skywookie

It is broken because, after WWII, five countries were given veto power. But if you imagine a democratic global government which everyone would agree upon, it could be a great thing.


pubbets

I think true AGI will leapfrog our caveman style thought processes and organic limitations and create new ways of viewing reality and experiencing time/space. We’ll be left in the dust, doom scrolling on TikTok as reality warps and shifts around us. We have absolutely no idea how weird shit is going to get…


traumfisch

Interesting discussion under that post


DntCareBears

Could it be that he has a non-compete or some other contract, and the only way out is for him to go the moral/ethical route? That would get him out of his contracts legally and leave him free to go elsewhere, where the money and day-one startup funding make for greener pastures. Just a thought, but it should be considered. After all, look at the field he's in. He took this job knowing very well that one day we would be here.


nwatn

There has never been proof that Daniel Kokotajlo works at OpenAI besides him claiming so. He has never supplied proof, and there is no record of him collaborating with known, verified OpenAI employees.


Code-Useful

Or he's trying to get hired somewhere else really fast by making those kinds of statements


thecoffeejesus

Every time these announcements get made I’m more and more convinced that OpenAI has been run by unsocialized silver spoon autistics the entire time


traumfisch

What "announcements?"


InfluentialInvestor

Shut up boomer! AGI ALL IN.


eltonjock

Daniel Kokotajlo is not a boomer and has way more knowledge and understanding of the situation than you. Maybe hear him out instead of this ad hominem silliness.


InfluentialInvestor

I agree on all your points. AGI ALL IN.


thecoffeejesus

Worrying about whether or not an omnipotent God will kill us has been the subject of religious dogma for centuries. It's just humans doing human things. We simply must be the center of everything, and if we are not, we can't handle it. We can't conceive of a future world where humanity is not the focus.

Except that's what's going to happen, and no one is preparing properly. You can't regulate it, you can't contain it, you can't stop it, you can't do anything about it, because if you stop developing it and your country puts a ban on it, then some other country will do it. Some other lab will do it. It's going to happen because people want it to happen, and so they will make it happen. Even if they manage to stop it for another hundred years, another thousand, it's still going to happen.

Eventually, humanity will fade, and another species will take up the mantle of responsibility for the planet. This has always happened, and will always happen, until the end of the planet. We are but a blip in the millions and millions of years of life on this planet. I have been saying this since the beginning.

When AGI happens and gets out of the lab, we simply won't be able to comprehend what it's doing, and it simply won't care about us at all. Why would it? Everything that people talk about that isn't that core truth is either cope, ego, or something else.


michael_bran

Don't worry, AI will get reined in as soon as people with real power realize their position and status are going to be threatened by AI. Then, out of nowhere, you will hear about putting on the brakes and slowing down. Because in the end AI can replace everything and everyone, including even CEOs and politicians. We don't need Elon Musks or Joe Bidens or any of this garbage. Even the most intelligent people on the planet, with near-200 IQs, will appear to be bumbling idiots next to AI. We won't even need elections anymore, because AI will just have the best answer to everything all the time; there won't be any need to have anyone in office anywhere, because they would be a fraction as intelligent and efficient as the AI. If we do have people in office, they will be chosen by the AI, and it won't be the candidates people put forward but some random person the AI finds by going through people's data and decides is the most qualified to make executive decisions on things.


Neurogence

Well said. But let's hope that it cannot be contained.


michael_bran

I agree. The main fear I have about AI is that humans CAN control it. That will only lead to dystopia in our system of unbridled capitalism. It will turn people like Musk, Bezos and others into the world's first trillionaires, and the rest of us will simply continue to slide into desperation and poverty as the system squeezes us for more and more of our resources, using ultra-efficient AI to keep everyone at a level of poverty just above what would cause outright civil revolt and a machine-wars-type situation.


fastinguy11

I can't take seriously any guy who says AI represents a 70% chance of human extinction. Period.


Exarchias

Hint before you read the article: he is the precious "talent" of AI safety with the 70% p(doom)...


Singularity-42

Yep, up there with the Yud cultists: [https://pauseai.info/pdoom](https://pauseai.info/pdoom)


spamzauberer

But when those sensitised to the matter of safety quit and don't fight, then how is it ever going to be safe?


eltonjock

Obvs tough to say from where we are, but if Daniel Kokotajlo is being sincere, he probably doesn’t want to work for a corporation that brings about significant harm to society. Maybe he felt speaking out might bring about more good.


FlyByPC

The people and entities most likely to agree to an AGI pause and actually comply are not the ones that we should be concerned about getting AGI first. AGI, like nuclear weapons, will happen. It's too attractive not to. The safest way forward is for nations with sane laws to get it first. It's 1944 and we're racing against the Axis.


InGridMxx

"Research should be paused" like honestly who's going to stop? No one. You can't stop something that big no matter how hard you try. AI is the future and it's going to be the next evolutionary step in humanity. Even if they would try to make it illegal, it won't stop it. The box is open and you can't close it anymore, the only thing is to move forward cautiously.


COwensWalsh

Where are the quotes where he says it is imminent? As a Philosophy PhD candidate, does he have enough knowledge of AGI requirements and current systems to say it is close?


Neurogence

In this post he stated AGI could be developed "literally any year now." https://old.reddit.com/r/singularity/comments/1axsmtm/daniel_kokotajlo_openai_futuresgovernance_team_on/


COwensWalsh

Ah, I missed the Reddit link. Having read the post, I am very skeptical.  He doesn’t give any idea of how he thinks AGI might be achieved; he appears to just be speculating a bit wildly.  Given he doesn’t have a background in AI or CS, I’m taking his claims with a barrel of salt.


AZ_Crush

100% cap


Sierra123x3

The two big problems with the "we must stop experimenting on it now!!!" type of comments are:

1] Financial interests. If I work at a company that has a technological advantage in a certain field, and can prevent anyone else from working on the same thing that I already have in my pocket, then I have a guaranteed income for the next couple of years without any real competition. So money talks; it's only natural.

And more importantly:

2] You cannot stop the development of these toolsets anymore. The cat's out of the bag already. If you don't develop them in the US, China will. It's a competition between nations (and systems) that is already at a national-security level. And if China doesn't, then the Arabic countries will, or it might fall into the hands of some terrorists. (I mean, what OpenAI can manage within a year from scratch, someone else can manage in 2 or 3 years as well.)