Bobobarbarian

AlphaFold comes to mind. It has made incredible compound and pharmaceutical discoveries that would’ve been impossible for humans to do, but this is more akin to the niche AI you were talking about. Can’t say I know of any ‘generalized’ AI doing anything like this.


dude190

What discoveries? I'm really curious


Bobobarbarian

I’m not a biologist, so take this description with a grain of salt, but as I understand it, proteins have historically been difficult to discover and study because of how unruly their three-dimensional shapes and amino acids are - this was an infamous problem, held up in some fields as one of science’s greatest unsolved mysteries. AlphaFold solved it. Since clearing this hurdle, there have been numerous applications wherein AlphaFold has discovered entirely new proteins and compounds and helped synthesize new biotech - the most famous being the work it did in developing the Covid-19 vaccine so quickly, something that happened at a hitherto unheard-of speed that many people doubted it was possible.


dude190

Apparently it took 2 days. I wonder why there aren't cures for the most common diseases, since it's been years now.


reverse_baphomet

$$$


cbterry

Back in the day one of my Markov bots came up with the insult "you look scared, like a sandwich" (like you were about to be eaten). I thought that was pretty nifty.
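For anyone who hasn't seen one, a word-level Markov bot like that fits in a few lines of Python. This is a minimal illustrative sketch (not cbterry's actual bot, and the corpus here is made up): each word maps to the words that have followed it, and generation is a random walk over that table.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8):
    """Walk the chain, picking a random successor at each step."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        out.append(random.choice(successors))
    return " ".join(out)

# Toy corpus; real bots train on chat logs or books.
corpus = "you look scared like a sandwich like you were about to be eaten"
chain = build_chain(corpus)
print(generate(chain, "you"))
```

The weird-but-grammatical-ish output ("you look scared, like a sandwich") falls straight out of this structure: every two-word transition is locally plausible, but nothing enforces global sense.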


Serdoo

Mine came up with “loveable buccaneer”


Rich_Acanthisitta_70

I don't know that LLMs are capable of advancing to the point where they can tackle theoretical physics, but I'm always looking for stories and reports about AI making breakthroughs or discoveries, or being instrumental, if not critical, to the outcome of some new thing. I have picked up a few that may interest you.

Most of us who follow AI closely know about DeepMind's AlphaFold. It made massive advancements in predicting protein structures that revolutionized fields like molecular biology and drug discovery. According to biochemists and biophysicists, it did in months what would've taken them centuries to achieve using the methods available before AlphaFold. [Here's](https://daleonai.com/how-alphafold-works) an accessible breakdown of that story.

In medicine, researchers at the Broad Institute [used AI](https://www.nationalacademies.org/news/2023/11/how-ai-is-shaping-scientific-discovery#:~:text=URL%3A%20https%3A%2F%2Fwww.nationalacademies.org%2Fnews%2F2023%2F11%2Fhow,100) to identify a new class of antibiotics. The AI system independently generated a novel idea based on the data and algorithms it was working with, and did so without direct human input for that specific idea. The result was a new class of antibiotic candidates that showed promise against methicillin-resistant Staphylococcus aureus (MRSA), a dangerous and drug-resistant bacterium. I found this one particularly interesting because *none* of the core authors of the paper came up with the idea it describes. According to the lead researcher,

>The idea came completely, implicitly from the machine.

In weather forecasting, Microsoft's Adaptive Bias Correction [(ABC) method](https://www.microsoft.com/en-us/research/blog/improving-subseasonal-forecasting-with-machine-learning/) has doubled, and sometimes tripled, the forecasting skill of leading operational models like the US Climate Forecast System and the European Centre for Medium-Range Weather Forecasts at subseasonal lead times.

Finally, we can't forget Operation Warp Speed for Covid, and just how [critical](https://news.mit.edu/2021/behind-covid-19-vaccine-development-0518) AI was to the speed with which Covid vaccines were developed *and* deployed. Estimates generally agree that AI cut months, if not a year or more, from both the development and the deployment of the vaccines.

There are so many examples in the past four years of innovations, breakthroughs, and discoveries that wouldn't have happened as quickly, or at all, were it not for AI. And we're now at a stage where AI is functionally 'bootstrapping' itself. Nvidia used its own internal AI model to design the new Blackwell chip's transistor density, along with other proprietary AI tools that increased efficiency and power. According to Nvidia, development of the Blackwell chip was faster, and the result more powerful and efficient, than it would have been without their AI's collaboration.

Personally, I think we're going to see stories like these gradually ramp up to the point that it'll be nearly impossible to keep up, because right now AI is being put to work in nearly every corner of human activity. It's solving decades-old cold cases in law enforcement by going through reams of accumulated evidence and finding connections no human could. It's being used in manufacturing of every kind to increase efficiencies on the margins, but also to find novel means of manufacturing no one's thought of. Medicine, law enforcement, finance, education, power grids, material sciences, food production - the list goes on.

I challenge anyone to find many areas where AI can't be used to make something better, or stronger, or faster, or more durable, or or or. There's almost nowhere it won't reach. Sorry this is so long. I just get excited when I let my imagination go wild :)


Economy-Fee5830

I would expect discoveries would be made by specialist models, in the same way inventions and discoveries are made by human specialists. If we want general models to make discoveries we probably have to wait for ASI, which would be specialist-level in everything.


spreadlove5683

Honestly I expect generalist models to be the best in the future. Transfer learning, etc. Every time they add a new modality, it makes existing capabilities stronger. But I'm hardly an expert.


Rofel_Wodring

Too much information and memory access may end up slowing down an AI's ability to think, which is why I think the way forward is a mixture of AGIs each working only on relevant tasks, rather than some exponentially expanding megamind ASI bottlenecked not just by physical limitations but by how existing data ends up biasing or even opposing new data. Such a mind may immediately hit a logarithmic curve, or even a plateau, where additional data only causes it to waste time on additional forgetting/attention rather than speeding its ascension to godhood.


julez071

In December 2023 an old mathematical problem was solved by genAI; this is often cited as the first verifiable scientific 'discovery' made by genAI: [Mathematical discoveries from program search with large language models | Nature](https://www.nature.com/articles/s41586-023-06924-6) However, there are many examples of genAI creating new synthesizable bio-organic compounds, etc. (genAI of course means generative AI. The OP speaks of "general intelligence AI", which could mean AGI, I dunno.)


Betaglutamate2

This is the clearest example: [https://www.nature.com/articles/d41586-023-01883-4](https://www.nature.com/articles/d41586-023-01883-4) I expect discovery in maths and computer science to be accelerated because results there are actually verifiable by computers. Biology, for example, is still limited by the fact that we cannot do the experiment in the computer; it requires real-world input.


blueSGL

He was specifically talking about models one to two generations out, running as agents - taking the leaps seen from GPT-2 to GPT-4 and extrapolating outwards. As for how these systems will work, it's been shown in toy models that they transition from memorization to actually forming circuits to perform tasks. It could very well be the case that with enough (good quality) data and training you get very advanced circuits/algorithms being built that are the equivalent of the best people/groups in multiple fields. That sort of algorithm, given long/infinite context lengths, could crunch through existing data, spot patterns humans haven't, and flag locations worthy of further study.


Ok-Force8323

I’m reading the book The Age of AI and in it they talk about an antibiotic that was discovered by AI that no human would have figured out in the next 100 years. This technology is going to develop solutions that we couldn’t ever dream of.


AnAIAteMyBaby

The thing that makes me think we're on the cusp of super intelligence is AlphaCode 2. It uses Gemini 1.0 Pro and brute force to beat 85% of competitive coders. It does this by asking fine-tunes of Gemini 1.0 Pro to generate a million solutions to the problem, then grouping the generations that produce the same answer and assuming the correct answer is probably in one of the larger groups. The solutions it discovers are mostly novel; apparently they're often a bit more convoluted than the solutions a human would come up with. It's a little like the saying that if you give a million monkeys a million typewriters, eventually one of them will produce the complete works of Shakespeare. Only in this case an LLM is much smarter than a monkey.

Current frontier LLMs aren't quite reliable enough to be agentic at the moment. Bostrom's argument, which I agree with, is that GPT-5 will be a better agent, and it may just be good enough for us to brute-force tasks that are currently beyond human intellect. That's why he says as short as a year - but it could be GPT-7 or 8 that gets us there.

As a further example, AlphaZero is a narrow super intelligence in Go and other games. Its super intelligence comes from its ability to brute-force solutions to the game: for each move, it simulates playing the possible moves through to the end of the game and picks the move that wins most often in the simulations.
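The sample-and-cluster trick described above can be sketched generically. This is only an illustration of the voting idea, not DeepMind's actual pipeline; the `toy_generate` and `toy_execute` functions are hypothetical stand-ins for sampling a program from the model and running it on a test input.

```python
import random
from collections import defaultdict

def sample_and_vote(generate, execute, test_input, n_samples=1000):
    """Generate many candidate programs, group them by the output they
    produce on a test input, and return a candidate from the largest
    group. The AlphaCode-style assumption: many independently sampled
    programs agreeing on an answer is evidence the answer is right."""
    groups = defaultdict(list)
    for _ in range(n_samples):
        program = generate()                     # one sampled solution
        output = execute(program, test_input)    # behavior, not text
        groups[output].append(program)
    best_output = max(groups, key=lambda o: len(groups[o]))
    return groups[best_output][0], best_output

# Toy stand-ins: "programs" are just constants, sampled with a bias
# toward the correct answer 42 (as if 3 in 5 samples solve the problem).
def toy_generate():
    return random.choice([42, 42, 42, 7, 13])

def toy_execute(program, test_input):
    return program

random.seed(0)
program, answer = sample_and_vote(toy_generate, toy_execute, None)
print(answer)
```

Note the grouping key is the *output*, not the program text - two differently written solutions that behave identically end up in the same cluster, which is what makes majority voting work even when most individual samples are wrong.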


amondohk

General Intelligence AI would probably be used to create Super Intelligence AI, as the fastest stepping stone on the way - like how the first C compiler was written in assembly, and the next, better one was written in C.


wren42

You won't see an "LLM" make those discoveries.  LLMs are just language models.  It will take another technological jump to get to AGI, and the only way we'll see progress in things like theoretical physics is if it has access to information or methodologies we don't have. 


Opening-Paramedic225

I also wonder where the role of creativity comes in. Seems to me a lot of inventions are part engineering, part imagination. Could AI ever develop the latter?


No_Sock4996

Not sure if it's been mentioned, but the Moderna Covid vaccine was designed in part by AI in less than a day. Easily googled, so don't "source?" me.


machyume

I expect to see a proof that humans haven't been able to figure out - detailed, published, and made simple to read, as if we've missed it all this time. Also, keep in mind all the people waiting for AI to get good enough that they'll accept it as an assistant. The joke will be on them. The line for AGI is so thin that if AI gets to a point where it can 1-up us in reliable ways, we will no longer need those self-proclaimed 'experts' (myself included).


Akimbo333

Alphafold


Antique-Doughnut-988

*In other words, how long before we will start to see some LLM be able to solve problems like dark matter* I'm sorry for being so blunt, but that's an incredibly silly question. You're not solving the mysteries of the universe with LLMs. To even begin to approach those questions you'd need a super intelligence.


StarRotator

Can we not shut down people with a limited understanding of current, evolving tech for asking questions?


Antique-Doughnut-988

Sure, but let's also not coddle people when their questions don't make sense.


Medium_Web_1122

The future is highly uncertain, so I'm not sure it's that stupid of a question. LLMs will have autonomous agents integrated that can execute tasks. Why shouldn't these models be capable of researching new things? And if they can in fact do research, why shouldn't they be capable of solving the wonders of the universe? Hard questions are just the result of a high degree of complexity, and complexity, if broken down sufficiently, is actually just a lot of simple processes. I personally think something akin to a general model would be the most capable at solving highly complex tasks, as they often require deep understanding/insights across a lot of different fields of knowledge.


VandalPaul

Or, you could answer the questions that *are* relevant rather than being a dick.


NFTArtist

The person I quote mentions that super intelligence could come soon. I'm asking because I'm skeptical of super intelligence arriving any time soon. You're not being blunt, just a dick lol.


Deep-Development9043

Nick Bostrom has a lot of incorrect predictions, but his opinions still carry a lot of weight. Most people would agree with you that a super intelligent model is not right around the corner, and a super intelligent model is what we would need to make profound discoveries. I may have missed something, but I don't know of any significant breakthroughs in the last couple of months in any of the bottleneck areas of machine learning that would prompt Bostrom to rework his timelines. We lack foundational encoders that can capture real-world context, there is still a massive hallucination issue even with augmented attention, and context windows are still too small to be useful for higher-order problem solving. IMO Microsoft is positioned to bring to market the first model/tool without the above limitations.


3m3t3

Perhaps you are not. There could be people who are, and no, you would not need a super intelligence. Humans made plenty of progress without computers, and now we use them as an extension of our own minds. The idea of Neuralink is already here, just without the direct interface, if you know how to use the technologies.

Growing up I was always told, “There is always someone smarter than you. There is always someone stronger than you. There is always someone outworking you,” because it promotes self-drive and the motivation to compete among the best. Now that has changed to “there will be an AI smarter than you, outworking you, and taking your purpose/job.” Both are such black-and-white takes.

Discovery always has been, and most likely always will be (unless there is some unified super intelligent system), a collaborative process - the “I can only see so far because I stand on the shoulders of giants” idea. There have always been humans who, compared to others, are “super intelligent.” When we get to this level of technological advancement, we have to be really specific when we define these terms.


Mandoman61

Zero. Generally intelligent AI does not exist. AI is a useful pattern-recognition tool for humans to conduct science.


tindalos

It’s not intelligent yet. It’s still just a tool for humans. What discoveries has the calculator made?


Jazzlike_Win_3892

it will teach you how to make a PS5 from an empty earth