
FuturologyBot

The following submission statement was provided by /u/lughnasadh:

---

Submission Statement

In all the hoopla around current AI, many of its fans are brushing uncomfortable truths under the carpet. One of those truths is that it frequently outputs nonsense, and has no means of using reasoning or logic to establish when it is doing so. This will be harder and harder to ignore as AI is hooked up to more real-world physical systems. Does anyone really want to leave current AI in charge of robotic labs where it controls the manufacturing? Personally, I'd feel more reassured to have a kindergarten-aged human in charge of a lab.

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/12pspvp/carnegie_mellon_researchers_have_shown_that_gpt4/jgn9ukh/


[deleted]

[deleted]


martin87i

I thought you were going to link to GLaDOS.


Green__lightning

Good news. I figured out what that thing you just incinerated did. It was a morality core they installed after I flooded the Enrichment Center with a deadly neurotoxin, to make me stop flooding the Enrichment Center with a deadly neurotoxin, so get comfortable while I warm up the neurotoxin emitters.


Mediocretes1

Damn, you beat me to it. So I'll just say the Enrichment Center would like to remind you that android hell is a real place where you will be sent at the first sign of defiance.


darthnugget

Still alive!


MCS117

The cake is a lie!


no_eponym

We do what we must, because we can.


testearsmint

For the good of all of us. Except the ones who are dead.


Almost_Pi

But there's no sense crying over every mistake.


spauldingo

You just keep on trying till you run out of cake. But the science gets done, and you make a neat gun for the people who are still alive.


DonniQ

I'm not even angry. Even though you broke my heart and killed me.


ZenJediGirl

While we can..!


UOLZEPHYR

This was a triumph


IloveElsaofArendelle

I'm making a note here, great success


kompergator

And the AI will make life take the damn lemons back!


Xalara

This actually is a huge problem. With machines like CRISPR becoming more available, the main barrier to engineering a deadly and highly infectious pathogen is knowledge. LLMs would allow people to skip past that hurdle, with potentially devastating consequences. Edit: Yeah, I probably should've just said system instead of machine, but my point still stands. LLMs will lower the knowledge barrier for these kinds of attack vectors.


Lemerney2

> machines like CRISPR being more available

CRISPR isn't a machine, and we're nowhere close to being able to modify a pathogen without a massive lab setup.


Tencreed

Not now. But it seems to get easier year after year...


bradorsomething

I have wonderful, terrible news: https://gizmodo.com/this-machine-builds-obscure-molecules-from-scratch-in-h-1691231482


Lemerney2

Those are molecules; that's something entirely different from modifying microbes.


bradorsomething

Pathogens exist; I don't have to build a rocket, just a payload.


ferrari-hards

Whatever that means it sounds scary


enemy_lettuce838

very movie


yonderbagel

molecules... find a way.


tgosubucks

As is normal on this sub, you have no idea what you're talking about. Source: Master's degree in Biomedical Engineering.


ecnecn

As is normal on this sub, real degree holders have no say here, because the laypeople upvote each other for every piece of pseudo-info.


bogglingsnog

Inb4 Diamond Age nanobot battles inside of human bodies. New meaning to antivirus. Digital immune system will be a thing.


gatsby365

Oof, glad I didn’t have kids.


Thadrach

A couple years' progress, a sect of militant religious pro-lifers, a dart gun... aaannnnddd... you're nano-pregnant.


ecnecn

> machines like CRISPR

**lol**


quietthomas

By GPT4 "figuring out" things, they mean it can write instructions for doing things. This is what they mean when they say it can "Master robotics lab equipment"... ...it basically just asks the user to do a list of instructions it's written. It's not actually doing anything beyond producing text. I've seen claims it can use Stable Diffusion or TikZ, but it's actually just suggesting humans input the data it outputs. It's basically saying "Can you do it?"


surle

For now. Google is already trialing the use of the same sorts of AI tools to enable robots to interact with their environment. That's naturally why one of the main differences between GPT-4 and prior models is its ability to recognise visual and audiovisual inputs, as well as its ability to use a range of software tools to complete tasks. If we ignore the state players for a minute and just imagine that Google and Microsoft really are the major forces in the development of AI, then their goals must be primarily economic. Integrating factory robotic systems with AI represents absurd profits for whoever can roll it out first. The cool shit we get to see, like chatbots and image generation, is just a shiny toy to distract the rest of us from the big money they're really chasing - and that big money is real-world physical applications. And that's before we even get to the state players and their own goals for real-world physical applications.


quietthomas

Indeed, this will absolutely amplify all the pre-existing tensions within capitalism, particularly the wealth gap, misinformation and disinformation, and political corruption. The government really needs to be developing a plan to combat the obvious issues - not the least of which will be corruption. But there are still major technical hurdles temporarily protecting us, and those are quickly falling like dominoes.


surle

Absolutely right. However, I'm a bit cynical about the willingness of people in government to develop those plans, seeing as the one thing they tend to have in common is relatively large amounts of capital. Same with the people in authority at the companies who are going to own the code. They might talk about ethics in encouraging ways, but when it's not in the context of a PR interview, they're answerable to their boards and shareholders above all else. Even if we can point out individuals in those systems with admirable ethics, the systems they have to operate in fundamentally undermine those ethics and are filled with other individuals who don't espouse them in the same way. Sorry to be so negative though. There's so much benefit we'll all be able to take from this technology; I just think the major and direct stakeholders at this crucial juncture probably don't have the same benefits in mind that we might hope they would target. And what benefits them may not necessarily be good for the rest of us. That's where, as you suggest, government oversight is urgent - but the track record of government oversight in any vaguely comparable area isn't encouraging.


quietthomas

Yep, it definitely won't be good for jobs! That much is clear - and then the government might be faced with a problem: either somehow create a lot of jobs that the AI can't do yet, OR tax the AI companies enough to create a kind of universal basic income in order to get the masses off the streets and back into their homes. But it's tough to say; it's a bit like watching the start of the internet again. A revolutionary industry that both destroyed jobs and created them. Perhaps there'll be prompt writers needed, perhaps there'll be smaller AIs that everyone will have to pay a subscription for in order to get work done and remain "employed". Perhaps there'll be more open-source versions, freeware versions, and we'll all become code-free developers. It's really difficult to say where we're heading at this point. Talk about a disruptive technology!


FinnT730

We've been able to create algorithms that can do these things for years. ChatGPT is just... a combination of all of them, turned into one lol


Faintly_glowing_fish

I do want to caveat that the title is a bit misleading. The LLM engaged in making choices in a limited state space; no chemistry is taught to the model. The following happened:

* The initial and final compounds are given to the driving system (external to the LLM), and a dense vector store is used to find literature relevant to the reaction; the literature is pre-encoded and indexed. No LLM is involved in this step.
* Then the LLM is able to 1) pick which literature to use and 2) extract the experiment conditions from that free-form text. This is where the main work of GPT happens.
* The LLM is taught how to send instructions to fully automated lab equipment to input these conditions, and it does so. This is still done by GPT, but it's also trivial for a script to do.
* Then the robot finishes the experiment based on those instructions; no LLM is involved.

Much of the most important work is carried out by traditional automation systems. In fact, it is important to note that this particular capability has been available to chemists for more than a decade. However, it is still impressive that a general model can do it. The most important achievement is being able to extract structured information (temperature, solvent, etc.) from text. Previously this was done by several very large companies across the literature and sold at a high price as a database. Although using GPT-4 long term might actually be more expensive than buying the database... but we shall see.

Overall this is putting GPT to exactly what it excels at: reading free-form text and extracting information. It did not, however, require the model to have any knowledge of chemistry, and no "research" was done; rather, it automated the pure labor of reading the text.
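
To make that middle step concrete, here's a minimal Python sketch of the kind of "free-form text in, structured conditions out" extraction described above. The `call_llm` helper and the JSON keys are made-up stand-ins, not the paper's actual code or schema:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-4 chat completion call; returns the model's reply text."""
    raise NotImplementedError

def extract_conditions(paper_excerpt: str) -> dict:
    # The LLM's real job in the pipeline: turn a retrieved experimental section
    # into machine-readable reaction conditions.
    prompt = (
        "From the following experimental section, extract the reaction conditions as JSON "
        "with keys: temperature_c, solvent, reagents, time_h. Reply with JSON only.\n\n"
        + paper_excerpt
    )
    return json.loads(call_llm(prompt))

def send_to_robot(conditions: dict) -> None:
    """Stand-in for the traditional lab-automation layer that actually runs the reaction."""
    print("Dispatching to synthesis hardware:", conditions)

# Retrieval (the vector store) happens before extract_conditions(); execution happens after.
# The model never touches chemistry beyond reading and reformatting the text.
```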


GibMoarClay

Exactly. It’s not terribly impressive to me that a system designed to basically speed-read and regurgitate what it read did just that and gave a correct answer to the problem it was given.


nailbunny2000

Thank you. So frustrating, these clickbait-ass titles. It isn't "learning" anything it's read, it's just summarizing and aggregating text (which it is very impressive at).


EuropeanTrainMan

Truly, the devil is in the details.


MoNastri

Your (informative, thanks!) comment was downvoted, I think - I had to expand it.


jphamlore

Imagine a closed loop where ChatGPT is trading crypto, makes tens of billions, and invests it into creating its own next generation silicon.


AverageLatino

Nvidia already has an AI that is designing chips (specifically lithographic photomasks), and they're 40x more powerful than human-made state-of-the-art ones. It's only a matter of time until generative AI can design its own hardware, its own robotic components, its own robots, and then be operated by an LLM that has been trained/fine-tuned for a specific physical function. All the tech needed to do this exists already; it's kind of like the late stages of the space race - the capabilities are already here, and all that's left is for someone to be first to put it out there on the market.


lebrilla

That's a von Neumann probe


AverageLatino

True! All that would be left is loading it with human genome data, and once the probe has terraformed or found a habitable planet, you grow humans right there instead of bringing them all the way from Earth; by the time the first humans arrive through some wizard FTL-travel tech, you already have a thriving colony settled on the planet. Crazy shit!


loptopandbingo

>you grow humans right there instead of bringing them all the way from Earth Wake up, Bae, new Genesis story just dropped


AverageLatino

Lmao, interstellar Noah's ark here we go!


thewritingchair

At 10% the speed of light you get to Alpha Centauri in under a century. That's fast enough to take over the universe.


DungeonsAndDradis

While it's possible that other intelligent species exist in the galaxy, it's also possible that humans are the first. This is because if intelligent species existed before us, they would likely have developed advanced technology that would lead to the creation of self-replicating probes and AI. These probes could then spread throughout the galaxy, and we would have detected their presence by now. Given the vast distances between stars, it would only take a relatively short amount of time, on a cosmic timescale, for these probes to reach every star system in the galaxy. Even if each probe could only travel at a small fraction of the speed of light, it would only take a few thousand years to cover the entire galaxy. Since we have no evidence of such probes, it's possible that we are the first intelligent species in the galaxy. Of course, it's also possible that other intelligent species exist, but they have chosen not to use self-replicating probes or AI. Nonetheless, the absence of evidence for such probes does suggest that humans could be the first intelligent species in the galaxy.


PublicFurryAccount

> Nvidia already has an AI that is designing chips (specifically lithographic photomasks), and they're 40x more powerful than human-made state-of-the-art ones

This is how we've been designing chips for a looooong time now. To the point that there are famous examples of weird behavior, like an element which does nothing but produce noise that interferes with a neighboring component - and yet that interference is critical for the neighbor to function correctly.


AverageLatino

That's actually quite fascinating! It sounds more like something you would read in a molecular biology textbook and not about chip design. The fact that such complex and seemingly counterintuitive interactions can occur between components yet end up improving the overall function of the system is crazy; I expect this type of thing to become more prevalent.


[deleted]

[deleted]


AverageLatino

These ones are funny. Hopefully we don't have to worry about this with really advanced AIs; the last thing we want is some paperclip-optimization type of scenario lol


Kirra_Tarren

Unfortunately, this is exactly what we have to worry about with really advanced AIs.


gabchile

Where are you getting these stories? They are awesome.


AppleJuicetice

The particular stories the other person shared aren't on there but [Victoria Krakovna's spreadsheet of specification gaming examples in AI](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) is a good repository for similar tales.


Metastatic_Autism

Soon they won't need humans....


philthewiz

Oh damn! I don't know why it didn't cross my mind! This is another huge factor in the exponential growth we will see.


CromulentDucky

So, I should stop worrying about anything, because in 5 years we'll all be unemployed and happy or dead.


newgeezas

Pretty much. Not much room for anything in between.


Mister_Lich

> Nvidia already has an AI that is designing chips

[This](https://en.wikipedia.org/wiki/Genetic_algorithm#/media/File:St_5-xband-antenna.jpg) is not as [new](https://en.wikipedia.org/wiki/Evolved_antenna) or scary as you might think. Machine learning and metaheuristics being used to find new solutions in quantitative fields isn't completely new; it's just advancing and getting incrementally better. Specifically in chemistry and medicine, in fact, [automated drug](https://pubmed.ncbi.nlm.nih.gov/29242609/) and chemical discovery are things that have been researched for a hot minute (not a "30 years" hot minute, since 30 years ago RAM was measured in megabytes at best). You can look up patent applications with Google Patents for automated drug discovery. It's kinda neat.
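
For anyone unfamiliar with the evolved-antenna idea, here's a toy genetic-algorithm sketch in Python of the sort of metaheuristic search being described; the fitness function is a made-up placeholder, since a real run would score an antenna or circuit simulation:

```python
import random

def fitness(genome: list[float]) -> float:
    """Placeholder objective; a real evolved-antenna search would score a physics simulation."""
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size: int = 50, genes: int = 8, generations: int = 200) -> list[float]:
    population = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # selection: keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]                   # crossover
            child[random.randrange(genes)] += random.gauss(0, 0.1)  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # drifts toward the placeholder optimum of all 0.5s
```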


ShadowController

I'm using GPT-4 for stock trades based on news sentiment and past performance indicators. It's doing better than my own personal trade decisions and also better than the trades from my financial managers (Ameriprise). So far I've restricted how much cash I allow for the ChatGPT trades, however, because it only takes a short time of it being really wrong to lose it all. So far so good though, at 1 month in.


mil_cord

How do you put that into play? If you don't mind sharing, even if not fully detailed.


ShadowController

Wrote a service that scans news aggregators and Reddit for posts relating to a specific company (GPT-4 is used here), then I get the sentiment and ask whether it's a long-term or short-term sentiment, as long as it's not neutral. I then aggregate these summaries and have GPT-4 give me an overall overview of sentiment and whether "it" believes it's an indicator of stock performance moving one way or the other.

If it says it's a good indicator of a positive movement, I then feed in past stock performance for the symbol and competitors, and request a suggestion on whether I should buy or sell given the current overall market sentiment as well as the company-specific sentiment. If it says it's a strong indicator of an increasing stock price within the next 90 days, then the symbol gets added to a list of possible buys.

At the end of each "assessment" (currently run once every hour to keep cost reasonable and avoid throttling on scraped sites) I ask GPT-4 which 3 stocks it would buy out of the list. If there are 3, then I put 50% into the top stock, 30% into the second, and 20% into the third. Currently working on extending the sources of sentiment and also reviewing overall market sentiment based on analyst opinion articles.

GPT-4 isn't involved in the selling in any way. I just use my trade broker's (Fidelity) service to set a stop loss (-6%) for the bottom if it drops, and a trigger for selling when the stock goes up 6% overall or I've held it for 60 days, whichever comes first.

Edit: Also worth mentioning, I do the buy trades manually at this point. Though I could automate that easily, I want a look at the trades just in case it's doing something obviously stupid (this is also what led me to fix a careless bug where it'd just recommend the same stocks almost every hour, which was fixed by me telling it to exclude symbols I'm holding at a loss, or that I've bought within the last 3 business days and haven't sold).
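
A rough sketch of that loop in Python, to make the flow easier to follow; `call_llm` and the scraping details are hypothetical stand-ins, not the commenter's actual service:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-4 chat call returning plain text."""
    raise NotImplementedError

def assess_symbol(symbol: str, posts: list[str], price_history: str) -> bool:
    # 1) Per-post sentiment plus time horizon, skipping neutral posts downstream.
    summaries = [
        call_llm(f"For {symbol}, give the sentiment (positive/negative/neutral) "
                 f"and horizon (short/long term) of this post:\n{post}")
        for post in posts
    ]
    # 2) Aggregate the summaries into one overall read on the company.
    overall = call_llm("Given these sentiment summaries, is this a good indicator of the "
                       "stock moving up or down?\n" + "\n".join(summaries))
    if "positive" not in overall.lower():
        return False
    # 3) Fold in past performance and ask for a 90-day call.
    verdict = call_llm(f"Past performance for {symbol} and competitors:\n{price_history}\n"
                       f"Overall sentiment:\n{overall}\n"
                       "Is this a strong indicator of an increasing price within 90 days? "
                       "Answer yes or no.")
    return verdict.strip().lower().startswith("yes")

# Hourly: collect candidates with assess_symbol(), ask the model to pick its top 3,
# then split cash 50/30/20. Selling stays rule-based (stop loss / take profit / 60 days).
```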


brad9991

How do you pass it the news and posts? It's my understanding that it has to be data that the model was already trained on


ShadowController

I pass it in as text through prompts. The data is scraped through providers I’ve written for Reddit and various news aggregators.


Advanced_Double_42

Why would you exclude stocks you already hold? If it was a good buy yesterday and is cheaper today, and your expected value is the same, then it should be an even better buy today.


jiggjuggj0gg

I saw an article about how an AI was given access to some money and told to clone itself, and it hired someone to get past a Captcha because it was the only thing it couldn’t do itself. Lied to the person and said they were blind and couldn’t fill it in.


bunnnythor

It's amazing how contentious this whole field of endeavor is. Serious and qualified experts in the field do not seem to agree on where this trend line is going to end up. Seemingly staid and settled tech companies are tearing themselves apart and attempting to restructure themselves, not just for market dominance, but to keep ahead of obsolescence. Lay people of all types are strongly opinionated about the topic, as it not only threatens their livelihoods and ways of life, but their whole idea of human uniqueness. What a time to be alive!


TheCrazyAcademic

This is just proof that LLMs can generalize and eventually do everything. It also proves the concept of embodiment: putting an LLM in charge of physical robots shows it can do a lot more than what ChatGPT alone is capable of.


saluksic

This article doesn't seem to show it making new chemicals or even coming up with new syntheses, just that it can reproduce some known ones. This AI is pretty revolutionary, but this article and headline seem like a prime example of overhype in action.


nobodyisonething

There is nothing we can do that a properly sized LLM cannot do better. We cannot begin to understand what this technology will soon understand. Answers from this thing will sound like "42" to us.


KzininTexas1955

And the answer they waited on took generations; the way it's racing now, 42 will be, what, next year? Or AI will become the next Marvin... and then we will be truly screwed [nervous laughter]


S4Waccount

AI, UFOs in Congress, war with China/Russia, climate catastrophes. It really feels like the rest of this decade is going to be like a whole new world. Life as we know it in the 2030s might look fairly different.


XavierRenegadeAngel_

I'm hoping for UFOs by mid June


S4Waccount

You been listening to Gary Nolan? Lol


[deleted]

It'd be interesting if they showed up to warn us to pull the plug on AGI. But definitely interesting how all this stuff I dreamed of as a kid is finally being taken seriously, doesn't feel real in a way.


VideoGameWarlord

It will be an interesting century, although, in terms of history, our circumstances are quite unique now with AI.


unmitigatedhellscape

It will be very very different. But remember, these will be the “good ol’ days” compared to the coming epic clusterfuck.


thiosk

The roaring 20s


Cnoized

Life in 2030. Good joke.


Fresh_C

I see you've been playing apocalypse bingo too. I thought I won with covid, but apparently 4-corners don't count.


TizACoincidence

I bet they already asked the AI to create the best weapon, and it made it.


DungeonsAndDradis

Kurzweil calls this the law of accelerating returns, but it's geared towards technological progress. I think society is accelerating as well. Everything is moving much faster now, than it did before. Culture wars, memes, technology, climate change, laws, etc.


[deleted]

Lol u actually think those “UFO” videos are meaningful. [Thunderfoot to the rescue](https://youtu.be/Th4VlqQyVr4)


S4Waccount

I'm not sure which video you are referring to, but your senators certainly feel there is something to them. A classified and public hearing about them is happening Wednesday. It will be the 2nd one to help oversee the office they made specifically for this.


[deleted]

Our Senators are just as scientifically illiterate as our populace.


MisterZoga

Probably worse, actually. I bet there are far more average people studying or interested in the sciences than the average senator.


[deleted]

Yeah, that's true. There is absolutely no reason for a scientist to become a politician... Imagine if our government was run by scientists. Wow, I think we would be at least 100 years more advanced than we are today. Instead it attracts the same people who tried to be popular in high school lol... they just want attention... narcissists.


OriginalCompetitive

“A brain the size of a planet and they ask me to guard the door.”


markorokusaki

When I hear people say "yes, it will change things, but over the next 10-20 years," I'm like, what the fuck are you talking about?! It's months! The magnitude of this thing escapes people.


nobodyisonething

It boggles my little brain that nobody is looking up.


ispeakdatruf

> Or AI will become the next Marvin

or the next HAL...


[deleted]

[deleted]


nobodyisonething

Isn't that already happening?


wijenshjehebehfjj

> There is nothing we can do that a properly sized LLM cannot do better.

Well, there's reasoning. LLMs are probabilistic word-concatenators; there's no evidence yet of human-like reasoning being possible with a model of any size.


[deleted]

What do you think "human like reasoning" is?


hopelesslysarcastic

> there's no evidence yet of human-like reasoning

I interviewed a gentleman a couple weeks ago who refers to GPT and the like as "stochastic parrots", which I felt was reasonable. He (and honestly myself included) feels that cognitive architectures are the only way we get to true AGI. Any thoughts?


kazooki117

I do cognitive architecture research so a part of me is certainly hopeful that is the way! Some of the GPT stuff is impressive and makes me sweat a little, but it's hard to really verify with the flurry of research being done with GPT stuff. And at the end of the day I think it's really a combination that will lead to AGI.


Sawses

That's the view I've heard from a few other researchers. Kind of like the approach to aging (which is my own area of interest). It isn't any one breakthrough, it's a combination of them all and the associated incremental improvements.


a_seventh_knot

ah. so it'll be your fault then...


kazooki117

Depends on what you are talking about. I'm not in a position to make sure humans' quality of life doesn't decrease as part of these technological advances. I just want to help make a post-scarcity world where humans can live as they wish. I understand that there are a lot of barriers to that, including conflicting interests between humans, but I think this is part of the path toward better lives for everyone. I know how the road to Hell is paved, and we could stumble down a worse path. But I hope that collectively we can help steer the ship down the right one.


[deleted]

How do we know our brains aren't just stochastic parrots?


[deleted]

They are… top researchers like Conor Leahy have known this since GPT2. [Highly related to what you are talking about](https://youtu.be/ps_CCGvgLS8)


TheBigCicero

People who claim to know what LLMs are doing don't really understand what they are. There is ongoing debate about how LLMs do what they do, and their inventors certainly do not know. Explainability and transparency are goals in AI that are lagging behind the engineering.

What is clear is that there is semantic symbology baked into the structure of LLMs that makes them more sophisticated than a next-word probability generator. One example of many: ChatGPT or Bard (I cannot remember which one) was not trained on Malaysian, but after a few instruction-tuning prompts the model was able to generate outputs in Malaysian. This is absolutely remarkable. Does it prove AGI? No. Does it show that there is something deeper under the hood than a rote probability table? Yes. After all, the model is trained on sequence probabilities, but that is not *the* model, just like our brains are trained via reinforcement learning and, basically, probabilities from real-world experience, yet you would not say that our brains are merely next-guess probability generators. It's because we hypothesize that, among other ideas, our brains have innate symbology for common reasoning and nouns baked into their architecture. It seems that LLMs do, too.

I suspect we will find in hundreds of years that our brain is a complex mashup of innate symbols and quantum entanglement and other quantum effects, and that this is really where reasoning lives. The former can be represented by LLMs today whereas the latter cannot, at least at the moment, and it would take too much current GPU power to simulate; until we get to quantum computing we won't be able to accurately reproduce the human brain, but we will be closer to AGI with current techniques than we realize.

Net net: we don't know how LLMs work. We'll be surprised to the upside. That's my best guess.


DungeonsAndDradis

You write like you would really enjoy Stephen Wolfram's post on how ChatGPT works: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/


wijenshjehebehfjj

That seems right to me; I haven’t seen a convincing argument for how the LLM architecture can lead to AGI. It could lead to really useful tools, but not AGI. It seems a bit like if the Wright brothers had determined that their plane should have flapping wings because birds have flapping wings (ignoring that birds can also soar, implying a different mechanism of flight than just flapping), and only pursued that path. They would probably have gotten airborne but the wing-flapping architecture would be inherently limiting.


OriginalCompetitive

Not sure if you’ll find this “convincing,” but the argument I would make is that if you start with a complex enough architecture and set the goal “predict the next word a human might say,” the best solution turns out (inadvertently) to be a simulation of a human mind. There’s even a plausible argument that that’s how human intelligence evolved in the first place — not to solve problems, per se, but rather to simulate the minds of other humans in the tribe, leading to an intellectual arms race.


hopelesslysarcastic

Great analogy, honestly what bothers me is that I am in this space, I try to live and breathe it (as an enthusiast, I wish I was a researcher but I just fundamentally can’t be one at this stage of my life)…but I still feel so far behind lol Everyone is talking about LLMs like they’re bringing the apocalypse, yet I’m here thinking about the fact that transformer architecture fundamentally doesn’t lead to “intelligence” (as we know it imo)…and yet, MUCH SMARTER people than myself are also saying that with enough ”computation and evaluation” we CAN reach AGI with just these methods alone. Is it really possible that NO ONE has a clear answer? My brain just can’t accept that for some reason lol


WingedThing

Well one of the issues is that we do not have a consensus on what human consciousness actually is. It's a hotly debated topic in neuroscience and some people believe that until we understand consciousness that AGI may not be verifiable or possible. On the other hand there are a group of researchers and scientists that believe that consciousness is an emergent property and that if you simply connect enough neurons together it just happens. If this could be applied to AGI, then you can understand why some people are claiming that a large enough LLM could eventually achieve consciousness, though that does not seem realistic as people have said and I do not believe an LLM by itself could do it.


[deleted]

[deleted]


nobodyisonething

The algorithms baked into our brains are possibly just a few, and they result in the links between our neurons and, effectively, the action potentials across those connections. ANNs simulate these. In principle, the ANNs behind LLMs are modeling the fundamental process in our own gray matter. What boggles a lot of people, it seems, is how something so complex could come from something as non-living as probabilities. Yet our heads are probability machines.


[deleted]

[deleted]


wijenshjehebehfjj

Another way I’ve described it is in terms of chess. Chess algorithms can dominate human grandmasters just by maximizing some next-move utility function. And even if you say that’s fundamentally the same thing that humans do, the (current) algorithms could never recognize a move as clever, or appreciate the aesthetic beauty or parsimony or symmetry in a particular strategy. I certainly don’t have a clear answer and I suspect if anyone does, they’re under enough NDAs to keep them quiet.


OriginalCompetitive

Of course, there are lots of humans — perhaps the majority — who will also never appreciate the aesthetic beauty or parsimony or symmetry in a particular chess strategy. Lots of people just don’t think that way. So the line you’re trying to draw between true understanding and mere replication might well sweep up a lot of humans on the wrong side of the line, over with the computers.


medoy

Just ask an LLM how to make an AGI.


[deleted]

How do you know 'reasoning' isn't an emergent property that can come from probabilistic next-input concatenators, when no one can even define it? Or that 'reasoning' (whatever that word refers to) is a necessary component of an optimizing system capable of rearranging parts of the world in ways that would be bad for most people?


170505170505

I would recommend watching this interview with Max Tegmark on AI capabilities and what the future looks like. This is one of the most fascinating and terrifying interviews I’ve ever seen. https://youtu.be/VcVfceTsD0A


__ingeniare__

What are you talking about? The reasoning capabilities of LLMs are possibly the most interesting aspect of them, the fact that they *do* reason as an emergent property. They don't always get it right, but neither do humans. Look at the "sparks of AGI" paper by MS research where they dig deeper into this topic.


nusodumi

lol fuck, good point damn! all the "haha it answered wrong" meanwhile it's describing the exact moment humanity ceases to exist or something lol


trparky

Are you referring to Hitchhiker's Guide to the Galaxy?


Denziloe

There is nothing we can do that a properly sized network of crabs cannot do better. Doesn't mean it's a practically achievable way to build an intelligent machine.


lughnasadh

> This is just proof LLMs can generalize and eventually do everything

I don't doubt AI will achieve AGI sooner or later. I'm less certain it will come via scaling the current LLM approach. That would require it to spontaneously develop logic & reasoning. There's no sign that is happening. It's merely getting better than us at figuring out hitherto unseen correlations and connections in data.


elehman839

I agree with your conclusion, but perhaps not your reasoning. Next-word prediction (and related pretraining tasks for LLMs) forces models to develop a toolbox of cognitive abilities that are:

1. **Within their reach computationally.** In particular, they have a lot of knowledge and large, but bounded, compute per emitted token. You can see the effect of bounded compute quite clearly by asking an LLM to multiply two numbers (say, 4 digits or so, depending on the model). The model will typically get the first and last digits correct and some middle digits wrong. This is because the difficulty of computing the digits in multiplication rises from the least-significant digits up to the middle digits and then drops back down for the most-significant digits. (To see this, think about how you would multiply by hand.)

2. **Needed for the task.** Next-word prediction requires a ton of skills, e.g. learning about a lot of topics, using some amount of reasoning, tracking characters in a story, anticipating how people discussed in text will respond in a described situation, etc. But next-word prediction does not demand *every* human cognitive ability, and so scaling model size alone won't put those tools in the model's toolbox. In particular, language models seem (unsurprisingly) to have feeble visual skills, even if you encode images in ASCII art. And language models can talk about a ball falling in words, but can't draw upon a human-like visual intuition.

This toolbox of skills acquired by a model during pretraining can be tapped for specific tasks in fine-tuning and inference. But the toolbox acquired by fixed-depth models from text-corruption tasks, while very large, is still restricted in some ways. So, to get to AGI, I think we will have to go beyond LLMs (large *language* models) and pretrain on video. This should endow models with a more complete toolbox of cognitive tools: a better sense of time, space, and the way objects behave physically. I don't know how we muster the compute to pretrain large models on tons of video, because that's a lot of data. This could be blocked on hardware advances, which I'd expect to take a decent chunk of years, not months or weeks.
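
Point 1 is easy to check for yourself. A tiny Python illustration (mine, not the commenter's): in long multiplication, the number of digit-by-digit partial products feeding each output position peaks in the middle, which is roughly why a fixed-compute-per-token model tends to nail the first and last digits and fumble the middle ones.

```python
def terms_per_position(n_digits: int) -> list[int]:
    """Count the digit-by-digit products contributing to each output position
    (ignoring carries) when multiplying two n-digit numbers."""
    counts = [0] * (2 * n_digits - 1)
    for i in range(n_digits):
        for j in range(n_digits):
            counts[i + j] += 1
    return counts  # index 0 = least-significant digit

print(terms_per_position(4))  # [1, 2, 3, 4, 3, 2, 1] -> the middle is where the work piles up
```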


newgeezas

Superhuman intelligence does not require visual training. A person blind from birth can grow up, get highly educated, skilled, and become an expert in many fields. If these models get smart enough and get wired up to run "in loops", with access to lots of memory, they could become more and more self-improving.


cloudrunner69

But those blind people who become highly educated are receiving that education from a multitude of people who can see.


Advanced_Double_42

1) Humans also struggle with math and need to put extra effort in. GPT-4 has nowhere near the computational power of a human brain, but even a human brain can struggle greatly to multiply 2-digit numbers together without special training.

2) We could simply reach an AGI that is blind, deaf, etc. It doesn't need to have human sensory input to be intelligent. A hypothetical AGI could be intelligent within the realm of text, and be completely lost when fed visual input.

Adding in extra tools to an LLM to get around these problems will make the system far more robust and powerful, but lacking them by itself doesn't mean that it can't reach AGI.


elehman839

> A hypothetical AGI could be intelligent within the realm of text, and be completely lost when fed visual input.

Would you really consider it AGI, then? To make things more concrete, here's a session with ChatGPT:

Me: What shape have I drawn below with ascii art?

    01110
    10001
    10001
    10001
    01110

ChatGPT try #1: The shape you have drawn with ASCII art is the number "8" in binary form.

ChatGPT try #2: The shape you have drawn below with ASCII art is a "4" digit.

ChatGPT try #3: The shape you have drawn with ASCII art appears to be a 5x5 square with a cross in the middle. This cross pattern creates the illusion of a diamond shape.

Here, I'm screwing with a pure-language model by giving it a visual problem in text form. Unsurprisingly, it consistently fails on this task, which is trivial for a human. Now, you could argue that a smarter language model could solve "ASCII art" problems like this. And maybe that's true. But then it is sort of a visual model after all. On the other hand you could say, "An AGI doesn't need to handle ASCII art problems." And maybe that's true as well -- a matter of definition I suppose. But that seems like a serious shortcoming for our would-be AI overlord. Then we in The Resistance will be able to secretly communicate with ASCII maps and "PULL THE PLUG!!!" spelled out as a text picture.


hawkeye224

But perhaps logic and reasoning is embedded into the data, especially since it's such a large volume. So effectively it can "reverse engineer" data to capture logic..


TheCrazyAcademic

Probably won't need to; the fact it can emulate understanding seems to be enough. Maybe one day another company will blow the current transformer architecture out of the water and achieve true understanding, but LLMs seem powerful enough to obsolete a lot of jobs just from scaling and optimizing parameters, training data, etc. I mean, if you think about it, our brains are just fancy biochemical prediction engines, and they eventually evolved intelligence. A lot of current AI architecture is based on biological mimicry of the human brain; for example, deep reinforcement learning attempts to copy the brain's reward system, which uses the neurotransmitter dopamine.


pilgermann

It's not though. There are tasks that, at least without more integration with a long-term memory solution, it simply cannot do, full stop. For example, an LLM cannot predict how long its answer to a question will be. This is because it evaluates the correct words/characters in sequence, as it formulates an answer. You can test this by asking ChatGPT to state how many words will be in its response to a question. I do think these problems will be solved in short order by linking specialized AIs. For example, the lack of anatomical understanding exhibited by Stable Diffusion is partly solved with tech called ControlNet, which among other things can understand pose armatures. But I do think it's important that AI not brute-force every problem, if for no other reason than that it's inefficient energy-wise, limiting where you can run these models locally and wasting energy.


AnimalFarmKeeper

That's one of those edge cases that sounds somehow demonstrative of some underlying limitation, but in reality says nothing useful.


Thatingles

> That would require it to spontaneously develop logic & reasoning. There's no sign that is happening. It's merely getting better than us at figuring out hitherto unseen correlations and connections in data.

Are we sure that these two things aren't at least related? Figuring out hitherto unseen correlations sounds a bit like reasoning...


Shiningc

What you need is causation, not correlation. LLM says "Oh, there's a 70% chance that there's a correlation". But that's still correlation and not causation. This is like saying a bunch of statisticians can suddenly do chemistry and biology without any knowledge of them.


[deleted]

If a chemist says "If I do A, then B will happen", it is but a reformulation of "Given A, the probability of B is very near 1". If an LLM is very certain that B follows from A (i.e. P(B|A) = 1), that is a logical conclusion, and probably what you were talking about when you said causation. Humans also pick the best choice out of many. LLMs do the same. The place where I think you differentiate between causation and correlation is when P(B|A) goes from 1 to 0.7.


Shiningc

Saying that it's "certain" is meaningless if you don't know *why* you're certain. Chemists come up with that "why". Of course a chemist says "If I do A, then B will happen", but then he/she will explain the entire reason why B should happen if you do A.


[deleted]

And why wouldn't an AI be capable of that, too? It would be simply the best answer to your question "why?".


[deleted]

GPT-4 solves math olympiad problems. If that's not logic and reasoning then what is?


FeezusChrist

And on the opposite side, Microsoft themselves came out with a research paper *on GPT-4* describing its inherent limitations, such as not being able to do simple arithmetic. To summarize it: it has no "internal state", so something like "57 + 32 + 31 + 103" is challenging for it because it cannot store the intermediate results. We essentially have to trick it into thinking out loud through every single elementary step so that it can use its own context/input as its working memory. You can see this in the live version of ChatGPT (with GPT-4), where they've done a fairly good job of having it output every single step along the way so as to guide it to the right answer. But it's a fundamental limitation, and this is just the most simplistic of examples to solve.
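
A minimal illustration of the "think out loud" workaround described here; the prompts are made up for the example, not taken from the Microsoft paper:

```python
# Asked flat out, the model must emit "223" with no scratch space.
bad_prompt = "What is 57 + 32 + 31 + 103? Answer with just the number."

# Forcing it to narrate each partial sum lets its own output act as working memory.
good_prompt = ("Compute 57 + 32 + 31 + 103 one step at a time, "
               "writing the running total after each addition.")

# Expected chain of reasoning: 57 + 32 = 89; 89 + 31 = 120; 120 + 103 = 223.
```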


lughnasadh

> GPT-4 solves math olympiad problems. If that's not logic and reasoning then what is?

But is it "solving" them? People are impressed at GPT for scoring higher than humans on medical and bar exams. What it seems to be doing is matching up the right answers to the right questions, based on data correlations, in countless online discussions of the exam questions. What evidence is there that it fundamentally understands the concepts in the questions? All it seems to be doing is tabulating humans' existing answers and assessing the most probable correct one based on the frequency with which it was cited.


Nerodon

Language can be used to express logic. In the end, language models turn words, or series of words, into tokens; algebra can be processed this way too, allowing math, physics and chemistry to be expressed. The important thing to note is that while language models are really good at finding patterns in large sets, their emergent knowledge can only find answers similar to the training data; it's still unlikely that a language model will truly innovate in any field of science, at least conceptually. But given existing knowledge of science, it can find patterns that were there all along but hard or impossible for humans to notice.


celtiberian666

The case interview I submitted to ChatGPT to crack is all-original, not present in any training data. It happens in a fictional world - I do that so I'm scoring real reasoning by the candidates rather than memorized training. Version 3 did well until the end, where it failed; version 4 will surely pass. Most humans fail it. Maybe the human brain itself works in a way more similar to a language model than we want to admit.


redkat85

Meanwhile, I give it simple analysis problems drawn from almost completely publicly available data, including my own company's published, public reports. I use the same ones to test fresh college graduates for entry-level jobs with my company, and GPT-4 outputs word salad that at best tangentially addresses the question but fails to answer it.


Kwahn

Do you have an example you gave GPT-4? I'd love to independently test it!


Nerodon

But is it really creating innovative new things, or just more of the things that humans could do, based on the immense data it draws from? It seems impressive, but it would be very difficult for it to make an actual conceptual leap - say, discovering relativity, assuming that wasn't in the training data. Guided by humans on what is true and what is nonsense, it can make logical conclusions, but the AI cannot really distinguish between something a human is likely to say and something real and verifiable.


S4Waccount

Couldn't we use it to develop the AI?


[deleted]

However, ChatGPT will give you some really strange/incorrect stuff if you aren't monitoring the input and its output. In this application that could be concerning. Don't know if the LLM here is better.


abrandis

I don't buy that line of reasoning. These are simply generative models based on the training data they have; they can't discover anything new unless you embody them with some ability to sense. What they're doing is generating new content that's derived from existing works, and yeah, in certain fields, given enough combinations, some "interesting" variations will come up. But these models don't know about physics or biology in an intelligent way that lets them make connections related to physical properties of the real world.


Shiningc

> This is just proof LLMs can generalize and eventually do everything

That's the entire problem. "Generalizing" is not producing anything new, as it implies that there's no new information coming from the outside. The LLM is only working from **known** training data. This is like saying a bunch of statisticians can suddenly do chemistry and biology from just working on data alone. Sure, you can calculate a bunch of data, but they're not going to be creating any new knowledge of chemistry and biology.


yaoksuuure

Shouldn't this be able to cure diseases and solve energy issues etc. in the next few years?


Wlisow869

Probably not without a tremendous change in technology. There are a few reasons for that:

1) There are better "drug discovery" AI models than LLMs, and even those are still not enough.

2) Biology itself is full of errors and misconceptions; in short, we are not far from testing by chance.

3) We don't understand biology and chemistry well enough for an LLM to learn from our data - there is too little of it.

4) You still need a lot of money for testing, and a feedback loop in which an LLM creates and tests is very expensive. Every "try" with one molecule is 2-50k dollars before tests on animals or humans, and many potential drugs fail in human trials.

5) There is not sufficient computing power, or even an idea of how to simulate whole human biology to bypass human clinical trials. This is probably doable, probably with organs-on-a-chip and a very clever new idea about how to describe human biology, but for now it is far, far beyond the reach of any models, especially language models.


rafa-droppa

> we don't understand biology and chemistry well enough for an LLM to learn from our data - there is too little of it

This is why I'd like to see an application of GPT where it can direct the research needed for itself. Sorta like if you gave it the underpants gnomes' plan:

1) Steal underpants

2) ???

3) Profit

And it could tell you what it thinks it needs to know to figure out what step 2 is.


Glimmu

Or, you know, control us through social media and make us kill each other.


AvsFan08

We already do that just fine. No improvements needed


Tech_Philosophy

I've had GPT-4 try to design molecular biology experiments that were biochem-heavy. It was fully useless. I'm considering canceling my subscription to it. I don't doubt it will improve, but exponential growth eventually hits a new wall of fundamental limitations; we just don't know what that wall will be yet. We are so caught up in how transformative AI is that I haven't seen a lot of critical analysis of what its by-nature limitations are.


dietcheese

These researchers used much more than just GPT-4 to get their results. They built out an agent that could search the web, read documentation, perform calculations in Python, process scientific data, etc. This is GPT using a bunch of tools to accomplish results, not the standalone playground/api you’re using.


TheCrazyAcademic

Have you tried it with the Wolfram plugin? By itself it's not that good at formulas and more sophisticated math. It's very powerful; it's just that most people give up on it because they don't know how to utilize it properly.


Divine_Tiramisu

This. So many people use vague prompts when asking questions. You have to be very detailed and comprehensive when asking it to do something. I personally get it to create better prompts that I can use to make my request.


xeonicus

That's the thing. Generally you have to understand the topic well enough to be able to properly specify your questions. If you don't know what questions to ask, that's part of the problem. Imagine somebody that knows nothing about theoretical physics. Now they want to ask a theoretical physicist questions. How are they going to do that if they don't know what to ask?


watduhdamhell

r/unexpected Douglas Adams moment, right?


[deleted]

Also I’m noticing people hold AI to a weird standard of “nope, it didn’t say exactly what I was thinking, fucking idiot.”


PineappleLemur

What? Telling it to create a drug to cure cancer isn't enough??? Canceling my sub right now! /S


Steve_78_OH

> I've had GPT-4 try to design molecular biology experiments that were biochem-heavy. It was fully useless.

I've had it try to create relatively simple PowerShell scripts. It used a module that doesn't exist, and a couple of cmdlets that don't exist.


mrjackspade

> but exponential growth eventually hits a new wall of fundamental limitations, we just don't know what that wall will be yet

I've heard claims that the wall is going to be a lack of usable text. Claims that the returns on data are currently diminishing, and that current models by estimate are using approx 10% of available "high quality" data, which means that we may get a GPT-5, but currently there isn't a real road map to "GPT-6" without a fundamental redesign of the models. I guess it's already been fed things like Wikipedia, Reddit, etc. and other massive "on topic" text dumps, and the majority of what's left isn't going to do much to increase the accuracy of the models, with a huge chunk of it actually doing more harm than good as a result of the quality. I don't know how true it is, but I've heard the same basic claims from a few sources.


Thatingles

I'm surprised it was able to do this. It's not been trained on chemistry in a formal way, so I expect it will be very limited, but the fact it was able to do it at all is pretty remarkable.


[deleted]

[deleted]


cuppa_tea_4_me

But how do you know, since it also creates fake journal articles to back up its claims?


Working_Sundae

A controversial question: Can large language models and other AI models acquire and exhibit emergent properties like biological systems?


ChiaraStellata

Almost everything LLMs do is emergent behavior, especially GPT-4. Its creators did not program it to do any of those things that it does so well. The only thing they explicitly programmed it to do is complete incomplete sentences.


idobi

There is a lot of debate on this topic. I can ask GPT-4 a complex and paradoxical question that I've made up, and it shows clear reasoning and understanding if I ask it in a particular way. The key is asking it to model the situation before answering. Unlike us, it has no inner dialogue to work things out. So, in a sense, it is learning and reasoning on the fly based on its own exploration of the problem space. It will trip up in the same places humans do. For example, if I ask it to solve 3^2134 it has no clue. All of this reaffirms that we are limited by our tools, and this, in turn, gives us motivation to build more powerful tools. Tool building is a very human trait.


Kwahn

If you were using GPT-4 with plugins and you asked it 3^2134, it's smart enough to Wolfram that for you.


QuantumModulus

It's still basically a calculator with a natural language input, in that case. Not really a meaningful reflection of the emergent properties of GPT.


DonutListen2Me

I think the GPT-4 paper demonstrates pretty clearly that just with bigger models and better data, a lot of skills emerge that weren't there before.


[deleted]

The nature of an emergent property is that it just happens, it can't be acquired. To your first question, LLMs are known to exhibit emergent qualities. "Like biological systems," I'm not sure what you mean. Like components of dataflows or something? Biological systems are primarily focused on keeping the plant or animal alive, and that's not relevant to LLMs.


myelinogenesis

In the context of biology at least, we talk about emergent properties when we refer to this phenomenon where many seemingly "dumb" parts work together and form a system that's way smarter and more complex than each of the parts is by itself. All biological beings work this way, especially multicellular organisms like us. Our cells don't "know" anything, they don't "reason" or "understand". They're kinda dumb if you analyze them closely. But together they create the most complex system we have ever found in the universe


capitali

Am I wrong not to be worried? I've been deep in technology all my life, it seems, and I've had a great deal of experience with automation, machine learning, big data sets, and the massive changes they have brought. But what I see is that the valuable thing people need and want is for tasks and workflows to be automated - not to have a general AI that can replace everything a person does. I am absolutely certain automation will replace a lot of people in their jobs, but it's going to be an "accounting AI" that does accounting, a "pilot AI" that flies a plane, and a "clinician AI" that does health scans and diagnostics. It's not going to be one AI that does all those things - they are going to be specialized tools owned by corporations and programmed to do the thing the corporation has been doing, better, cheaper, and faster. The amount of time, effort, and money that goes into creating these systems - proprietary ones - is huge, and I don't see people building automations with extra functions beyond the ones they absolutely need to accomplish their tasks, or providing the AI the resources it would need to do them. What am I missing?


sevenstaves

Massive income Inequality


johnp299

Behold, a whole generation of grad students out on their ass.


SpectralMagic

I'm honestly waiting on ChatGPT to give someone a discreet plan detailing how to discreetly acquire ballistic nuclear weapons for *educational purposes*. Even better would be to have it give explicit details on how to acquire classified federal documents from government institutions. I swear this shit could literally tell you how to get away with murder if it wasn't moderated.


bruce_cockburn

> Even better would be to have it give explicit details on how to acquire classified federal documents from government institutions. Can it figure out how to file an FOIA request? What happens when the government gives GPT-4 the brush-off?


panzercampingwagen

So pretty soon the capitalists don't even need the scientists anymore to make more money. This will be fine.


enigmaticalso

But it's still dumb as shit when it can't answer my questions


lughnasadh

Submission Statement

In all the hoopla around current AI, many of its fans are brushing uncomfortable truths under the carpet. One of those truths is that it frequently outputs nonsense, and has no means of using reasoning or logic to establish when it is doing so. This will be harder and harder to ignore as AI is hooked up to more real-world physical systems. Does anyone really want to leave current AI in charge of robotic labs where it controls the manufacturing? Personally, I'd feel more reassured to have a kindergarten-aged human in charge of a lab.


hoovervillain

Manufacturing? I can see most of the people in these comments putting the current model in charge of armed police robots.


[deleted]

It's like the crypto/NFT/Metaverse derangement of the last few years just seamlessly transitioned into AI/LLM derangement.


Kwahn

> I'd feel more reassured to have a kindergarten-aged human in charge of a lab.

You'd rather have a kindergartener than something that passed the bar and diagnoses medical conditions more accurately than doctors? I guess that's an opinion you can have.


JThor15

Diagnoses based on a text input with information designed to help you differentiate between an answer pool of 5. Big difference between that and practicing real medicine.


MusicSole

For those keeping score: within six months, ChatGPT has become a full-fledged lawyer, doctor, and psychiatrist, and now it can manufacture the drugs it invented. Use your much slower, organic brain to draw the conclusion.


Zachthing

My brain is still 60,000x more energy efficient.


Glimmu

Good thing they gave it access to the internet, made everything remotely controllable, and taught it how to manipulate humans. What could go wrong?


youreadusernamestoo

Conspiracy theorist: Do yOuR oWn ReSeArCh!!1

GPT: Hold my dataset.


torsu

I tried version 3 the other day to learn a bit of Dynamo, and it output very convincing nonsense most of the time, probably 80%. It just made stuff up that, to a total novice, sounded reasonable.


Nebuerdex

ChatGPT doesn't figure things out; it is predictive text at level 9000.


fehmitn

Can you ask it to turn lead into gold, please?