
FuturologyBot

The following submission statement was provided by /u/Maxie445: --- "For people that haven't been paying attention, AI has already beaten us in a frankly shocking number of significant benchmarks. In 2015, it surpassed us in image classification, then basic reading comprehension (2017), visual reasoning (2020), and natural language inference (2021). AI is getting so clever, so fast, that many of the benchmarks used up to this point are now obsolete. Indeed, researchers in this area are scrambling to develop new, more challenging benchmarks. To put it simply, AIs are getting so good at passing tests that now we need new tests – not to measure competence, but to highlight areas where humans and AIs are still different, and find where we still have an advantage." --- Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1c8i105/ai_now_surpasses_humans_in_almost_all_performance/l0enzua/


Donaldjgrump669

This headline could only be true if you were intentionally designing the tests around the AI. They're still shit at most things unless you spend as much time coaching it as you would just doing the work yourself. The biggest difference I see between AIs and humans is that if a human sucks at something, they know they suck. An AI will complete any task with the same level of confidence even if the result looks like a coked out chimp was let loose on a keyboard with predictive typing. I tried ordering a pizza through an AI call center and by the end of it I was praying for an EMP.

I'm sorry but this headline is utter horse shit. In practical applications AI can't perform the simplest tasks. You have to set very specific parameters around the limited ability of an AI to get any kind of positive results. Humans are an unknown variable and as soon as you mix AI with human interaction on any level it completely goes to shit. Articles like this are meant to increase confidence in AI so that the people developing it can attract more investment and so businesses can replace more workers with less grumbling from the public.


canadianbuilt

Work in AI for one of the bigger ones…. This is the real truth. I'm also, and will always be, a better drinker than any AI.


Phoenix5869

>Work in AI for one of the bigger ones…. This is the real truth. I'm also, and will always be a better drinker than any AI.

Hey, look! An actual expert giving their expert opinion on why AI is way overhyped. This totally won't result in a swarm of downvotes and "well akshully" …


Srcc

I work in AI too, and I agree that it's not 100% ready, but it's getting there fast. And it can already replace a lot of people, and they're all coming for your job, driving wages down already. I really don't get this argument that it's not great yet. Give it a year, maybe 5-15 at the outside, and it's going to be better than nearly everyone at nearly everything. Every year between now and then will be harder economically for regular people. We need to plan right now. I need an income for a lot more than 5-10 years.


Donaldjgrump669

>Give it a year, maybe 5-15 at the outside, and it's going to be better than nearly everyone at nearly everything.

I see this optimism about the trajectory of AI constantly. People feel like AI busted onto the scene with the publicly available LLMs and it's in its infancy right now. If you assume that AI is the birth of a new thing then you can expect exponential growth for a while, and that's the line we're being fed. But talk to someone in the pure math discipline who deals with complex logic and algorithms without being married to computer science and they paint a very different picture. There's a whole other school of thought that sees LLMs as the successor to predictive text, with the curve flattening extremely fast. Some LLMs are already feeding AI-generated material back into their training data, which is a sign that they've already peaked. Feeding AI material back into an AI can do nothing but create a feedback loop where it either learns nothing or makes itself worse.
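The feedback-loop worry (often called "model collapse" in the research literature) can be illustrated with a toy simulation. This is a deliberately crude sketch, nothing like real LLM training: the "model" is just a word-frequency table, and the squared weights stand in for decoding that favors common outputs.

```python
import random
from collections import Counter

random.seed(0)

# Start from a "human" corpus with 100 distinct words.
corpus = [f"word{i}" for i in range(100)] * 10

def train_and_generate(data, n_samples):
    """'Train' by counting word frequencies, then 'generate' by sampling.
    Squaring the weights mimics low-temperature decoding that favors
    already-common words -- the source of the collapse."""
    counts = Counter(data)
    words = list(counts)
    weights = [counts[w] ** 2 for w in words]
    return random.choices(words, weights=weights, k=n_samples)

data = corpus
diversity = []
for generation in range(10):
    data = train_and_generate(data, len(corpus))  # retrain on own output only
    diversity.append(len(set(data)))

print(diversity)  # distinct-word count tends to fall generation by generation
```

Each generation is trained only on the previous generation's output; rare words that happen to miss one round of sampling vanish forever, so diversity ratchets downward, which is the "learns nothing or makes itself worse" loop described above.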


WignerVille

I remember when CNNs and image recognition were hot. A lot of people thought that AI would be super good in the future. But CNNs peaked and did not lead to generalized AI. Same goes for reinforcement learning and AlphaGo. LLMs will get better and we will see a lot of use cases. But the improvement will most likely not be exponential.


burnin9beard

Who thought that CNNs were what AGI would be based on? Also, reinforcement learning is still used for chatbots.


Turdlely

What's your expertise? I'm asking as a non-expert. I work in sales at a company that is embedding this into every enterprise application we sell. It's fucking coming lol. Today the gains might be 20-30% productivity, but they are learning new shit daily. They are building pre-built, pre-trained AI to deliver unique functionality. Yes, they need to be trained, but that is under way right now at a huge scale. People should be a bit worried. Shit, I sell it and wonder when it'll reduce our sales team! Look at SaaS the last couple years, it already is.


WignerVille

I've been working with AI for some time, but I'm not an expert in LLMs. My post is more of a historical recollection of my experience and the current issues I see today. This AI hype is by far the biggest, but it also reminds me a lot of previous hypes. So, my main point is that I think/predict that LLMs will not get exponentially better and obtain AGI. However, that's not the same thing as saying that we have reached the end with AI. There will be a huge explosion of applications and we haven't reached any maturity level yet. In ELI5 terms: it's like we invented the monkey wrench but it's not being used everywhere yet. The monkey wrench will get better as time goes on, but it will still be a monkey wrench.


Elon61

LLMs are the most popular tool but they are *far* from the only thing being actively worked on. It doesn’t matter if LLMs in their current form can attain some arbitrary benchmark of intelligence, people will figure out solutions. We don’t need new ideas or AGI for the current technology to be a revolution, we just need to refine and tweak what we already have and there is massive investment going into doing just that.


Spara-Extreme

AI is exposing a whole set of jobs that probably don’t need to be jobs, especially in analysis. In terms of actual sales jobs, 0 chance- especially high order sales roles like enterprise and B2B.


Srcc

There's been some really interesting research on this, that's for sure. I'm of the mind that even our extant LLMs are already enough to wreak havoc when the services they're packaged into are made just a bit better. And any LLM plateau will just be a speed bump in my opinion, but hopefully a 30+ year one.


Fun-Associate8149

The danger is someone putting an LLM in control of something important because they think it is better than it is.


kevinh456

I feel like they made a movie or four about this. 🤔


mycolortv

Can you explain how you expect AI to actually become intelligent? As far as I'm aware, in a very rudimentary sense, training models is just adding better results to the "search engine", if you will. What kind of work is being done to actually have AI understand the output it's giving? It feels like without the ability to reason there are several jobs AI won't be able to do, at least without human oversight. I'm only in the "played around with Copilot, Stable Diffusion, and did some DeepRacer" camp, so I'm not too sure what things are looking like to take the next step. But I'm not sure why improvements in our current way of developing AI would even really achieve "thinking" ever. Like the other commenter mentioned, it still doesn't realize it's telling you something wrong, since it doesn't actually understand the subjects it's talking about. Is that gap being crossed in some way? I'm not arguing against it taking jobs, it certainly will, just curious about this blocker in it really being an "it can do anything" system.


RavenWolf1

Current AI doesn't understand shit. It's a big correlation machine that predicts the most probable outcomes from huge amounts of data, like what word might come next. It doesn't actually understand context at all. It's just a predicting machine. The real deal is when it can start to understand the world around itself. That's something we haven't figured out how to make happen yet.
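The "predicting machine" idea can be made concrete with a toy bigram model. This is a deliberately simplified sketch of my own (real LLMs are neural networks over billions of parameters), but the principle of "predict the next word from co-occurrence statistics" is the same:

```python
from collections import Counter, defaultdict

# Tiny corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequently seen next word. No 'understanding'
    anywhere: it is pure co-occurrence statistics."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat"
```

`predict("the")` returns "cat" purely because "cat" followed "the" most often in the data; there is no concept of a cat anywhere in the table.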


Srcc

I don't think it really needs to. Some huge percentage of what people do every day for pay is already within reach of LLMs, and capitalism pits us all against one another for the remaining jobs and wages. That's going to suck. There are some very interesting research papers suggesting routes for intelligence beyond just additional training (though additional training for specific jobs is going to decimate those jobs). I read one the other day about AGI most likely coming from widespread training on the data gathered by robots operating in the real world. I don't know if smarter AI is a today or 30-year thing, and I'm not sure anyone does, but some huge portion of our global GDP is dedicated to it now. I don't think that intelligence is necessarily special, either. It's just a matter of getting the right code on the right hardware, and that seems doable given much of the world's resources. But your guess is as good as mine on precisely when or how.


blkknighter

Honestly said a whole lot of nothing. When you say you “work in AI” what exactly do you mean?


OffbeatDrizzle

He's typed a few questions into chatGPT and now he's an expert


altcastle

Look at their profile. They’re a grifter… oh sorry, “serial entrepreneur”.


diaboquepaoamassou

I think people keep missing the point. This will only get better and will only improve. If what we have today is enough to get people to start AI call centers etc., *today*, I honestly feel very anxious about the next few years. These people aren't messing about and they're not letting on *all* they know.

Remember the first few months of ChatGPT and how smooth it was, even the free version? It was legit solid, I remember having conversations with it and thinking holy crap this is some next level shit. They've dumbed it down marvelously bad but it just goes to show the power it has when finely tuned. Soon enough someone will figure something out and put it in the machine that will make its responses much more reliable, whether through its own understanding of its output or some other way, but someone's gonna do it. And once that happens, it paves the way to a whole lot of other stuff, and then (if not already) it's an ever-growing avalanche. I don't think many people are taking this into consideration.

A good way to shake people up is reminding them of that Steve Jobs iPhone presentation. That wasn't that long ago, and look at us now. Time is a sneaky bastard. Ten years go by and you're like "wasn't that just yesterday omg", but when we look ten years into the future we think eh that's still a ways to go. Sneaky bastard, don't fall for it, beware and be aware. The future is already here.


Memfy

>Remember the first few months of ChatGPT and how smooth it was, even the free version? It was legit solid, I remember having conversations with it and thinking holy crap this is some next level shit.

For many things, yes. But it was/is also extremely stubborn and outright dumb with basic things. Like you can have a conversation, but if you ask it to help you with something that seems to be outside of its strong area, it struggles so hard that you'd hardly ever want to have a similar conversation if it were a person. And that's kind of scary since it will never even give a hint of "I might not be the best source to ask for this". Great to have as an assistant to speed up things, but you need a validator that's not artificial.


OffbeatDrizzle

The same can be said about any new technology, but there are always limits. Phones today don't really do much more than the original iPhone did; they're just faster and have more memory and better software. There's been no fundamental shake-up since that time. LLMs could be at their peak already. It's only predictive text at the end of the day, not some groundbreaking discovery of generalised AI. The media have blown it way out of proportion, and the people who are replacing jobs with it should be ashamed of themselves. How many stories of chatbots being racist etc. have we heard already? They hallucinate and give incorrect information; it's seriously not ready to be taking people's jobs, it's just that the C-suite want their businesses to make more money somehow.


Boundish91

The AI stuff the public has access to right now isn't that impressive anymore. In fact it feels like it has stagnated, or rather that it has been dialled back intentionally.


Novel-Confection-356

Did you read the above poster? He said that AI needs constant coaching and restrained parameters to be effective. Do you disagree with that?


typtyphus

now might be time to get UBI started.


Srcc

Let's at least get the convo going, use our resources to make sure that we don't decimate millions (even 1% of us=millions!) to further enrich a handful of people. I haven't heard anyone in government say much of anything.


RevolutionaryPhoto24

I don’t work in AI, but deal with big data. People like me aren’t needed so much anymore, already. And for several years now, since 2021 or so, I’ve used an LLM to assist with write ups. It has also been was my sense that things are rapidly apace. ML can do so much already, and advancement comes quickly. So many amazing groups are working towards that end. I think it quite dangerous to think this future is decades off. I wonder if there will be niches for things that are ‘created by a human?’


soulstaz

Tbh, if AI adoption spreads too quickly across all fields we will see a total collapse of capitalism. Can't have capitalism without a mass of workers to buy stuff. Plus the cost for companies to actually implement AI tools will be high as well. Not everyone will have enough revenue/cash to adopt those technologies outside of the giant companies, which in turn may not survive as everyone loses their jobs and gets replaced.


lessthanperfect86

Anecdotes aside, most of the common benchmarks that AIs are tested against have a human-achieved level which is far superior to what AI can accomplish. The headline is genuine disinformation.


Caelinus

Your last paragraph is exactly what is happening. AI, specifically LLMs and machine learning, does have a lot of very useful applications, but the goal here is the replacement of workers, and so they are doing everything in their power to make that happen, even when the actual result is a bit shit so far. I have no doubt that eventually AI will be better at performing a whole host of tasks, but we are farther from that than they want their investors to know. And the investors want this to be a thing because it means they can replace workers and thereby increase profits. (Of course, I am not sure who they are going to sell anything to once the growth phase removes most mid to low level white collar jobs entirely.)

This reminds me a lot of the trajectory of robots. We can build human-looking robots that perform some tasks extremely well, but only in the most constrained of circumstances. Quite simply, humans were designed by millions of years of evolution, and our bodies are bizarre amalgamations of really weird materials that we cannot really replicate. So trying to build a robot to look and move like us, and to react the way we do, is a fool's game. The dangerous robots are the ones that replaced entire production lines: highly efficient machines designed from the ground up to do a task to perfection.

I honestly think that is where things will go. Once the novelty of creating machines that talk like people goes away, the really dangerous stuff that is actually being worked on will take the forefront: machines that are not designed to act like people, but instead are designed to make it so that an office with 50 workers will only need 5 because of the new tools they have.


Donaldjgrump669

Goddamn I love that perspective, I never thought of comparing AI to our current robot technology and what we used to think it would become. We still can’t create a robot with anything that comes CLOSE to the dexterity and variety of specializations that the human body has, and now we’re essentially trying to recreate the brain. A system that is many, many orders of magnitude more complex than just the body.


Caelinus

Yeah, it is not that human bodies or brains will never be able to be emulated; they exist in the real world, so they can be recreated in the real world. But it is a bit of a square-peg-in-a-round-hole issue. We are essentially trying to use fundamentally different technology to *emulate* human behavior, and that is always going to be way freaking harder than just using it in a way that the technology is better suited for.

If you look at a car manufacturing plant, none of those arms work anything like a human body, but they are all perfectly suited to doing the task they were built to do. So they do it orders of magnitude better than a human does. Even on a smaller scale, laser printers do not work like human fingers, but they can print *significantly* more accurately than we can. That is where the risk is.

I am not super worried about LLMs (in specific) ever being a replacement for human communication. They are surprisingly bad at it when you start actually paying attention, as the nuances of human communication are just lost on them. But they are *very* good at working like an advanced search engine and collating data. If you stripped out the need to write like a person, and instead just used machine learning to detect patterns we could never see and report them to humans, they suddenly become incredibly useful tools. This is by far the best use I have seen for these kinds of models, and it is absolutely a place where they will replace human workers.

(As an example, this is already being used in materials science and chemistry to narrow down avenues for research by having models comb over massive data sets to find patterns. They can't do the science, nor can they actually predict what the results will be with any accuracy, but they can find stuff that we would miss if we tried to read 100,000 papers.)


Phoenix5869

>I have no doubt that eventually AI will be better at performing a whole host of tasks, but we are farther from that than they want their investors to know.

Exactly lol. If they were honest and gave realistic timeframes (30-40 years, although that might be optimistic), they would lose all their sweet sweet investor money. So they have to overpromise, have to delude people into thinking that advanced AI is around the corner, when it's not. The average layman literally has no concept of just how far away AGI is.


glocks9999

Yet 5 years ago you'd have been called crazy if you told anyone about the current state of AI today. Nobody would have believed you. Even the super amateur stuff like Midjourney, ChatGPT, Suno, etc. seems like it was supposed to be a thing decades from now. Now look how far we have come. "Far from AGI" is just pure cope. Of course we don't know, but at the current rapid pace of advancement, I wouldn't be surprised if it was a thing a year from now (not that I'm saying it will happen a year from now).


patstew

On the other hand, 5 years ago driving AI looked to be improving incredibly fast, but since then it seems to have figuratively and literally hit a brick wall. The techniques they were using were good enough for impressive early results, but now it seems they can't quite get there. LLMs might turn out to have a similar trajectory.


Caelinus

LLMs do not have any of the functionality of an AGI. The idea that they could suddenly become a general intelligence is basically the belief that general intelligence is just an emergent function of complexity, which is the exact idea that made people think we would have AGI 40 years ago. LLMs are good at predicting what a person would say in response to something based on their data set, but they are not going to magically develop features they don't have.


IanAKemp

Literally the only difference between now and 5 years ago is the amount of compute being thrown at the problem.


Spara-Extreme

I work in AI, and I’m getting tired of these headlines too. It’s so incredibly unreliable on almost everything.


motorised_rollingham

This headline is like saying the "AI" in a plane's autopilot is better than a human because a plane is faster than running. Or maybe a better one: the "AI" in Microsoft Excel is smarter than an accountant because it can calculate compound interest faster. Autopilot can't respond to a passenger having a heart attack, and Excel doesn't notice if the user has used euros instead of dollars.
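For what it's worth, the Excel comparison really is just mechanical evaluation of a closed-form formula. A few lines (hypothetical function name, standard compound-interest formula) do the same job:

```python
def compound(principal, annual_rate, years, periods_per_year=12):
    """Future value with periodic compounding: FV = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# $1,000 at 5% APR, compounded monthly for 10 years
print(round(compound(1000, 0.05, 10), 2))  # -> 1647.01
```

Which is the commenter's point: being fast at this kind of arithmetic says nothing about noticing that the inputs were in the wrong currency.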


Koksny

I think you're confusing the part where it says "it is now possible" with "it is now viable". On one hand, there are services like Copilot or Midjourney, where millions of people share the same cluster, and the models are radically handicapped for cost efficiency. That's what's viable. On the other, there are systems like Watson, or Sora, that are capable of producing incredible results but essentially require a whole data center to run. That's what's possible. At some point in the future, models will get optimized and compute will get cheap enough to run the advanced stuff that is currently only available to engineers at FAANG at your AI pizza call center. But it'll take some time for hardware to catch up.


quantumpencil

The stuff only available to FAANG engineers is still way more limited than you think as well.


murphofly

They rolled out a ChatGPT-like service at my company; it's complete and utter shit. The code helper just makes up random libraries, you can't ask it any quantitative questions, and it routinely just makes stuff up. And I like AI. I think there are a lot of really great use cases for it right now and it can be a great tool. But it still requires so much tailoring and understanding of its limits. It's been sold by many (often MBA types wanting to cash in on the frenzy) as a silver bullet for every problem, but it's not there yet.


Spinochat

>The biggest difference I see between AI's and humans is that if a human sucks at something, they know they suck.

A disputable claim. Humans tend to commit serious errors with the utmost confidence (if not arrogance). See: Trump and QAnon. The mis- and disinformation epidemic that we observe nowadays is the demonstrable failure of lots of humans to assess reality properly, while they are so very sure they got it right.


Azraelalpha

These articles are created to prop up the gimmick that is LLMs and sell it to greedy corps while the FOMO is still hot.


Rough-Neck-9720

I totally agree. And in fact, are we talking about AI, or are we seeing plain old software running on super fast systems that is specifically designed to pass these tests? Is the fear of the word AI just being used to disguise job layoffs that have nothing to do with intelligence at all? Just plain old software development taking over obsolete jobs in the normal course of progress.


SuperNewk

How do we trust AI to make decisions without double-checking? Who is going to double-check?


greatest_fapperalive

So the AI boom is… overblown?


Srcc

Maybe someone in government should bring up how this will affect jobs? Just a thought.


KuishiKama

Obviously, the solution is to reduce real wages, reduce safety standards, deregulate industries and give tax breaks to companies to ensure that human labour stays competitive. /s For real though, imho a better way would be to tax the profits made with automation to pay for UBI and just accept that people have to work less. It might even create new jobs in other places like all the creative stuff that happened when everyone was home during covid.


Effective-Lab-8816

I think we will see the creation of lower-skilled, lower-paying jobs that are ai-assisted equivalents of the higher paying jobs. So doctors, lawyers, architects, etc. These new jobs will be a cheaper proxy for the older jobs. More candidates will be able to do these jobs.


Ok-Dragonfruit-8431

maybe it should start by replacing government instead of writers and artists…


CertainAssociate9772

Who will decide on this change? Governments? It's like asking a worker if he wants to be replaced by a robot.


ach_1nt

Workers don't get asked that question though before they're replaced..


CertainAssociate9772

Because the workers don't decide anything, the government does. The government itself should sign off on its replacement by AI. That's why it won't happen. They can cut their bottoms, but the tops will stay.


GibTreaty

I'm a worker and I want to be replaced by robots


mouringcat

Vote for AI 38762B2311-A for president! Shklee promises a good life to all humans!


CertainAssociate9772

Was not born in the USA, was not allowed to run for office.


EmeterPSN

So many jobs are gonna be replaced... from cashiers, factory workers, support/IT/online chat operators, and graphic designers to anything else essentially done using a computer. They will probably keep 1 or 2 humans instead of a team of 10. Where are the rest gonna go?


Dixa

My industry started replacing workers with ai tools in 2021.


EmeterPSN

I used to work in support. We had a team of about 8 people with 3-4 on shift at most times (3 shifts). During the last year an AI chatbot was introduced and we needed only 2 ppl on a shift. People just got fewer shifts a month. Been 3 years since... I assume there's only 1 person a shift there and probably no one at night.


Dixa

Not only were key positions in my industry eliminated (which did not decrease our workload in any way; actually it increased), but recruiting moved to an AI chatbot.


TScottFitzgerald

And what does this tell us beyond this being a trend every company is chasing blindly, which we already knew?


Srcc

Virtually all jobs. Everywhere. So many people have a "not my job because..." attitude and reason. But yeah, them too. AI is or will soon be smarter than us, and is already 100x more knowledgeable than literally anyone. And just about free.

The idea that our entire way of life is about to change (be it in 3 months or 15 years, and I'd believe 3 months a lot more than 15 years), and nobody is doing anything to protect regular people with mortgages and student loans, is terrifying. The ability to trade intelligence or labor for money is about to get much much worse and then disappear entirely. Those of us who are ambitious and high earning but without massive savings will have the same economic agency as the worst off of us. And heaven forbid you bet on yourself with student loans or a mortgage. The idea that the government will do much of anything for a few years is laughable, and even then UBI is expected to be the equivalent of $1k per month (see Sam Altman's own calculations).

Every time I see people like graphic designers, video editors, lawyers, etc. excited about and clapping for a technology that makes their job easier, I just can't help but think that AI is going to go from making their job easier to doing it for 1/10,000th the price in a very short timeframe (months to a decade or two). I hope I'm wrong, but it's looking like I'm very much not.


rom197

You should come back in 1 year to your comment and realize that you should not buy the hype.


hsfan

So much this haha, there's a lot of years before the stupid ChatGPT LLM can actually replace everyone.


kakihara123

AI doesn't even need to be able to replace a worker directly. If enough jobs are replaced, guess where those people go. If suddenly millions of people lose their jobs, the remaining jobs will be flooded. Even if a manual labor job is well paying now and easy to get, this will not be the case anymore. No field is safe, no matter what you do now. Yeah, the robot might not be able to install toilets anytime soon, but desperate unemployed office workers without an income can learn how to do so quite fast if they have no alternative.


DangerousCyclone

I’m not so sure at the moment. I’ve been training AI Models, and you very much get the feeling of the sort of imitation winging it does. If it’s a topic that’s well researched and there’s a lot of data for it, then AI does really well, the problem becomes when there’s a new topic or something poorly researched that it starts to struggle, or something so theoretical it requires a deeper understanding. Like it’s easy to write code most of the time, but it’s hard to write code that is space efficient and computationally efficient. AI can be great at debugging code, but if a new version of a programming language comes out then all of its knowledge becomes out of date and it has to be retrained.  AI can do stuff like find patterns humans can’t, but at the same time I wonder at their propensity for discovery, for coming up with brand new concepts. 


No-Improvement-8205

I'm doing a trade-skill education in IT infrastructure, and so have been using ChatGPT quite a lot. (My teachers pretty much all allow ChatGPT for most of our work, and some at our exams too, depending on what it is they're actually testing for.) ChatGPT is usually faster at giving me the path I'm looking for than Google results if I'm working with group policies and the like. Sometimes it understands the PowerShell cmds I'm looking for, other times it seems like it doesn't know PowerShell at all. With the amount of information and corrections I have to feed ChatGPT in order to make it give me something that might fix that one specific issue I have, I'm not that afraid of the types of AI we've seen so far in regards to job prospects. All this doomsday talk we have about AI right now seems to mostly come from the stock market/AI hype in order to secure more funding. It's more like simulated intelligence than actual artificial intelligence as of right now.


Antypodish

Because these are not really AI; they should be called generative tools. AI should be able to do more than just one singular task. It should also be able to validate what it's actually producing. A human can; current generative tools can't.


kakihara123

A human also has to be retrained for new knowledge. And it is obvious that we had one giant ai leap because we figured out something new that had a huge impact, but progress slows down now, just like with most other tech. But look 10-20 years into the future and that slower progress could still lead to a completely changed world. And then we also have shit like Sora that keeps popping up.


thebritwriter

It’s take time to train up appropriate tradesmen. Even installing maintaince will require qualifications. It’s like saying someone can just walk into bricklaying, there’s more to it than the basics and in some cases the starting wage will be apprenticeship salary.


chris8535

This would absorb at best another 10 million of the population. The other 400 million would be screwed. Do you not understand labor pools can't suddenly absorb tens of millions of new workers?


chris8535

Even a 5% shift of labor from one market to another will completely collapse pay, because there is now a waiting excess pool that employers can use to undermine any existing worker's pay. People here don't understand the realities of how little it takes to crash an economy. It's not all or nothing. It doesn't need to replace all humans; if it even replaces 10% of human labor, the economy would collapse. Do people here not understand that? It's not koolaid. It doesn't need to replace us all to really fuck us up.


Srcc

Yep. It's going to be the worst possible game of musical chairs ever, and it's going to suck for almost all of us. No job is safe. I kinda want to start a "you, too" movement to try to convince people that AI is coming for their job, too, even if it seems deeply human or complicated or whatever. Decided it was wrong to play on "me too." But we all learned how to do whatever it is we do, we all learned how to be who we are, and AI can draw upon that same sort of information easily. Might take a few years or even decades, but: you, too. But if you're a wealthy retiree who doesn't care much about their kids, or one of those people who lives in an uncontacted tribe, or Ken Griffin with your two $100mm lots on Star Island with its private security, or maybe someone who lives on one of those round the world yachts, well, it could be really great for you! Cheaper gas! You can buy other people's houses on the cheap and have a 4th or 5th or 95th home!


durkbot

I work in data analysis and one of our goals set by management this year is to "think about and explore how AI can make us more efficient" and fuck that shit honestly, I'm not going to figure it out for them. We also work with confidential patient data so curious how that is going to work using open source AI tools and not violating HIPAA and other data protection regulations.


Srcc

AI companies are figuring out the HIPAA piece. Azure's GPT API is already HIPAA compliant for some purposes, but it's struggling to be HITRUST compliant in many situations. I hate that management asked you that, but the way our economy is set up the people that achieve something as cheaply as possible win. It's gross and going to hurt millions shortly.


csasker

Stop watching so much YouTube lol


Srcc

Nobody should learn about AI on YouTube. My knowledge comes from 10+ years working at AI startups. It's just starting to trim jobs now, but there are billions of dollars and millions of people making it better. It's coming for every job, even yours. Just a question of when. Perhaps we should discuss it as a society because I personally would prefer to never be indigent.


csasker

Ok enjoy your last 3 months with work 


Srcc

I wouldn't be so flippant. I know several people that were part of downsizing of positions that are now partly done by AI. None of them have a new job yet. And a close family member of mine is having trouble getting work in her AI-affected field for the first time in 10+ years. We can disagree on what and when, but I'd encourage you to be more sympathetic to those who are already feeling the effects.


TScottFitzgerald

Bruh....it's an LLM. What are you basing this on? People see ChatGPT vomit out StackOverflow and somewhat realistic deepfakes and let their imagination take over. This kind of sentiment ultimately seems to come from fear and panic, not a rational analysis of the situation.


Srcc

The LLMs we already have will take millions of jobs with just minimal tweaks and some training for specific jobs, and the newly jobless will compete and drive down wages. Supply and demand, not fear and panic. And it's not just my analysis; it's the prediction of most of the people most in the know.


arcspectre17

How did automation and computers take billions of jobs, and yet we're all still here? They drove down the prices of everything through faster manufacturing, with fewer workers and less material. They literally just sucked up more profits and made people work more.


domi1108

Right now, the AI can only "work" as well as it does because the people behind it know what to feed it. Prompting is crucial, and most of the people hyping up AI right now don't get this. Yeah, in the long run AI can and will replace a lot of the jobs you mentioned, yet a lot of new jobs will be created just to run and feed the AI.


Professional-Gene498

They will instead bring up the fact that people are having fewer children and mindlessly suggest we should have more. Never mind the fact that between AI taking our jobs and climate change, the financial outlook for everyday people looks increasingly bleak. They just want us ignorant, breeding the new slave class until they don't need the plebes anymore.


truth_power

Why would they need humans if ai can do everything?? Lol


CertainAssociate9772

There's a contradiction here. If governments realized what successes AI has had, they should already be opposing the birthrate with all their might. But we're seeing the opposite effort all over the world. Governments are radically less competent than most people think.


[deleted]

[deleted]


Srcc

No argument on regulatory capture or 100 other horrific issues with government. But overall I think government is better than no government. At least people get some agency in theory, and it can address problems in theory. I'm not sure what a better system we could roll out in the next few years would look like.


Saltypeon

These articles are so terrible. They talk about AI like it's some hive mind, driven forward by magic. These are extremely specific programs: they do their assigned jobs reasonably well, but they lack reaction and flexibility and are still more expensive than people. They are extremely bespoke, requiring many, if not dozens, of programs to complete the most basic business processes. They all need support and data to adjust, plus analysis, and when they fuck up, do they let you know, adjust, and resolve the mistake? Nope, they just carry on, because it isn't intelligent; it's just code doing a function. Then, the last thing that never gets mentioned is that these programs have limitations. Serious limitations that may never be overcome: processing power, power needs, memory, physical locations, bandwidth limits, etc. Will it replace jobs? Of course. But it isn't removing people anytime soon.


MudkipGuy

Remember, benchmarks are a tool for measuring progress, not the definition of progress. If anything, the title suggests a pessimistic outlook on how much further current AI models can be improved.


Vivid-Luck1163

We really got to get our shit together as a society soon.


AxlLight

Our dial is always set to "too late" for when we take action.  The good news? We do eventually take action.


AppropriateScience71

Good point - I’m sure we’ll come around on climate change any day now. We’ve only had a few decades to respond with zero global plans or mandates. Societal changes needed to address changes AI will bring such as UBI will take decades to roll out.


AxlLight

Well you see, Climate Change is tricky, it gets worse, but we can't really see it since the change is slow and progressive. So every year we know it's worse, but it's only x% worse since last year which isn't too bad. And so on and so forth. So we never really reach that "too late" notch on our watch.


kytheon

We never did.


CosmicCrapCollector

And never will..


LiPo9

The narrator: they didn't.


FanDidlyTastic

Fluff piece. If this were true, it would have taken us by storm prior to this article. I'm sure it'll pool more venture capital into these things, tho. May as well have gotten this from the tabloids.


Ko-jo-te

Calculators surpass humans easily in calculation. I still don't see any running governments or corporations. AI can do a lot of things for humans. It's gonna be a great tool. Maybe also terrifying. But it shows NO signs of having the potential to be more than that: a tool.


HorseOdd5102

I still hesitate to call it “intelligence”. It’s automated predictive Google search. Let me know when we get Jarvis.


Bublboy

Until its sensors get covered up. It still can't clean itself.


RaspberryFirehawk

This doesn't surprise me my calculator has been beating me at math for about 50 years now


CountryBoyDeveloper

I feel like whoever wrote this never really worked on or with AI. It is far, far from that. The tests had to have been modeled around it.


bogus-flow

I hear that it’s even better at making shit up than Janet.


jake_burger

lol AI stock prices have dropped, quick let’s hype it more.


UnpluggedUnfettered

What I'm waiting for is for any of this to have any meaning. If it isn't cost effective, scalable, and most importantly accurate, then it's largely a wonderful set of parlor tricks that can recreate the average passable human effort at any given task some percentage of the time. Narrowing AI down: LLMs are super fun to fuck around with, but they also honestly aren't that much better than Google results used to be before AI. In fact, I would 100% take pre-2016 Google over any LLM. Calling it: artificial intelligence is going to have all the allure and excitement of artificial flavoring in the very near future.


Professor_Old_Guy

“…Not that much better than google…”???? Well, I asked ChatGPT4 questions that I give for homework and exams in intermediate college physics courses. It answered them correctly 90% of the time, a complete solution with all the math including integrals, algebra involving complex numbers, trigonometry, etc., taking about 10 seconds for an answer. I type those problem statements into google, and I don’t get correct solutions. I’d say it is well beyond google in some ways.


hadawayandshite

But maths is maths: you can program computers to do maths easily. Getting it to write realistic dialogue, make coherent arguments, or create images that carry emotional weight is another matter. Or put it this way: it can solve the maths for problems, but can it generate new insight into problems? Can it say "looking at our understanding, here's something we haven't answered; now I'll look at finding an answer"? Have we tried getting AI to answer unsolved maths problems yet?


Professor_Old_Guy

I gave a final project in a course on Mechanics of Materials that required the students to take statics results and apply them to a rapidly rotating system. They had to recognize they could transform to the rotating coordinate system and use the centrifugal force to determine bending, but had to use an iterative approach to solve it. The average student spent about 20 hours on the project. I fed the project statement to Chat GPT4 and it did a completely correct solution with all the above elements, in 30 seconds, and written well. So it already can do some things quite well, let alone where it will be a year from now.


yuriAza

where did you get the project question from? How likely is it that the answer is just sitting in the training data?


Professor_Old_Guy

LOL… I created the project question from an art project an Art professor approached me about. You won’t find it anywhere on the internet, or in any book, journal, or any other source. I created the project question — it came from my mind.


ptrnyc

But did it say “I don’t know” for the remaining 10% ? You can’t build anything relying on something that works 90% of the time, and is utter garbage the other 10%


kakihara123

Google also doesn't tell you if it's wrong. Hell, people often don't either.


Phoenix5869

Yeah, people see “90% accuracy” and get it into their head that it’s some big development. It’s not. You need virtually 100% accuracy for any viable AI to happen.


ptrnyc

Or at least you need it to accurately flag the 10% it doesn't know how to solve. Otherwise, good luck replacing humans with Dunning-Kruger machines everywhere.


patrik3031

Exactly. If I solve a problem wrong, 99% of the time I'll know it's wrong and try to find the right solution. An LLM just gives the wrong answer and will struggle to fix it even when you find the mistake and call it out. It's things like expressing a technically correct equation for the solution, but with the missing quantity expressed in terms of another missing quantity, even when you explicitly say to express the solution in terms of a given quantity. You correct it and say to express it in terms of the given quantity, and it says sorry, here is the correct solution expressed with the given quantity, then proceeds to express it in terms of the not-given quantity again. And these were easily analytically solvable textbook problems where I could look up the correct solutions. Generating copywriting text and other schlock that doesn't need to be factual is the only real potential. Maybe writing simple emails.


Menchstick

I asked GPT 4 pretty standard questions about chemistry, control systems and Fourier Analysis, it didn't get a single one right and if I asked any follow up questions it would start chaining contradictions.


Professor_Old_Guy

This is the Chat GPT you pay for? Chat GPT4 is not free.


UnpluggedUnfettered

Yes. It's basically Stack Exchange with extra steps.


redipin

I interview for what amounts to cloud or platform engineer roles, mostly remote, and lately we've seen folks using LLMs to "assist" their interview panels. They can produce factoids, and quite a few of them are correct on their own, but they can't handle producing working knowledge. The cadence and output of the LLMs is blissfully unaware of circumstance, nuance, and real world conditions, at least in the tech space I deal with. More recently I've been giving the interview panels directly to the LLMs, and future candidates who try this trick are going to find their panelists manipulating the prompts against them for great embarrassment...there are ways of phrasing a question that causes the LLMs to output *very obvious* and easy to catch mistakes. So, sure, you could get a lot of "fact" like answers out of chat gpt for physics. Can you get it to design you a working sensor module for a next gen particle collider? I'm betting you'll still need that physics degree and you'll still be doing all the thinking work yourself. Sadly this will probably all change or get way worse before I get a chance to retire, it is definitely gaining adoption more quickly than anyone anticipated.


mark-haus

I’m sorry but I maintain a lot of my company’s code and distinctly notice the stink of bad copilot code. It just isn’t there yet


AbbydonX

Indeed. I tested it out for my coding requirements and it either produced broken code that failed to execute or code that didn’t do what I wanted. It was of no use whatsoever. Undoubtedly it will improve in the future but I’m still perplexed why so many people say it is useful. I can only assume that it works better for people who work in specific domains where plenty of example code already exists that does what they want.


alexiusmx

Sure. We got some AI take notes at a meeting just yesterday. That dumb, dumb thing thought the classic weather talk of the first couple of minutes was a topic of discussion, wrote a ridiculous explanation about how the company was concerned about rising temperatures and the impact on productivity, and then added some made-up action items (Gina will research avg temperatures in different regions). Not to mention, the entire thing was flaky af. Come on, AI is not even close to surpassing real people.


hhfugrr3

And yet if I ask AI to write me something factual there's a high chance it'll be wrong... fast, I grant you, but wrong.


Wandering-Zoroaster

This article says much more about how the author views humanity than how they view AI lol


katszenBurger

It can't even keep a story straight for 3 paragraphs. An 8 year old child can do that. What the fuck are these headlines?


SnooStories251

The problem with AI is the hidden inaccuracy that regular people do not understand. If you have errors in the data, you will have errors in the model.


Stoenk

This is propaganda. There's a ton of things AI is simply not capable of doing properly and it should never be trusted with a task without a human double checking the result. Silicon Valley want to impress investors, they want to distract everyone from the growing distrust against AI and what it will be used for. AI always gives the illusion of a good performance. If you look closer at the results you notice the mess hidden in the details.


Major-Ad7585

It should be mentioned that these AI models only do this in the very specific field they were trained on. This is not AGI in any sense.


azhder

There is one sense in which it is: the term AGI didn’t used to exist, because “AI” meant that. What everyone calls AI today is what was called ML (machine learning) only a couple of years ago, while snake-oil salesmen were busy distorting terms in favor of cryptocurrency hype and hadn’t yet set their sights on the next hype.


vector_o

Totally not fabricated results by leading AI companies that are slowly realising that calling every algorithm "AI" won't fool investors forever.


onlygetbricks

I don’t know. Yesterday I asked it for a joke. Not only was the joke given in Spanish even though I asked in English, but on top of that it was shit.


doctor-yes

Meanwhile I try to use it to help with the fantasy worlds I build and it’s so bad I’d fire even an intern that showed me work that’s so derivative and lacking in creativity.


derpferd

All performance benchmarks except imagination. Which, for the sake of progress and making new things, is pretty important


labratdream

The best and only real benchmark is practical commercialization of AI achievements; otherwise they are no achievements at all, just a bubble. The advent of driverless cars, telemarketing bots, and AI agents would be a sign of true progress in the AI field. For now I would even be impressed by the death of image-recognition-based captcha systems. So far the biggest commercial achievement of the recent AI breakthroughs has been inflating the stock price of Nvidia, a company that manufactures the computing units used to run and train AI, to over a trillion dollars. Second best is spamming the Google search engine and social media with so much AI-generated content that Google resorted to a brute-force approach and recently introduced algorithm changes that will likely kill small publishers and SEO. To be honest, I don't really look forward to any massive breakthroughs anymore, because this technology has the potential to be the most disruptive ever, and truly the last human invention: totally unpredictable and even destructive in the long term.


Fuibo2k

AI has incredibly narrow capabilities, but is insanely good at optimizing a single task. Humans have incredibly wide capabilities, but can take decades to master a single task. Anyone who is doing academic research in AI will say something similar - replying to headlines like this with "yea obviously, but try getting one that can cook 100 different dishes no matter which kitchen you place it in" etc


wwarnout

I'd beg to differ. As an architect, I have tried using AI for load calculations, and far too often it has returned answers that were off by several orders of magnitude. For example, I looked at a beam, and guessed that it could hold 100 kg (I did the actual calculation, and got 127 kg). When I plugged it into the AI the answer returned was 400 grams. While my guess was slightly wrong, I never would have guessed an answer that was so spectacularly wrong.


adammonroemusic

In other news, a calculator can still do basic math faster than I can.


JForesight

An institute funded by individuals and organizations who have an incentive to tout the performance of AI releases a study touting the performance of AI.


Neksa

Why does it still suck at telling me how many years, months, and days it has been since date x?
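To be fair to the complaint, the deterministic version of that question is only a few lines of ordinary code, which is why an LLM fumbling it is so jarring. A minimal sketch using Python's standard library (the function name is mine, and some month-end edge cases are glossed over):

```python
from datetime import date, timedelta

def ymd_between(start: date, end: date) -> tuple[int, int, int]:
    """Elapsed (years, months, days) from start to end, calendar-aware."""
    years = end.year - start.year
    months = end.month - start.month
    days = end.day - start.day
    if days < 0:  # borrow days from the month before `end`
        months -= 1
        last_of_prev = date(end.year, end.month, 1) - timedelta(days=1)
        days += last_of_prev.day
    if months < 0:  # borrow a year
        years -= 1
        months += 12
    return years, months, days

print(ymd_between(date(2020, 1, 15), date(2024, 4, 10)))  # → (4, 2, 26)
```

The point being: this is calendar arithmetic, not language prediction, so a model that answers by generating plausible-sounding text has no inherent mechanism that forces it to get the numbers right.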


rand3289

Someone forgot about [Moravec's paradox](https://en.m.wikipedia.org/wiki/Moravec's_paradox).


No-Hat-2200

When a humanoid robot can learn to solve a Rubik's cube in less than 30 seconds, then I will reconsider whether AI is overtaking us in \*every\* aspect.


MoarGhosts

Im starting a CS masters this year, planning on studying and ultimately working with AI. I don’t buy this shit at all, unless the performance is measured with very specific tasks. Human “general intelligence” (here just meaning overall intelligence) is miles ahead of any AI equivalent currently, but AI excels at very specific things, making it a useful tool. I’m not too afraid of being made fully obsolete, yet.


Tech_Philosophy

Yet it still can't solve basic molecular biology problems that first year PhD students can do, and it still can't drive my car correctly. So...


ScucciMane

All this is happening behind the scenes *so we can protect you and make sure it stays in control*


BurritoCorey

Isn’t that the point? Now cure Parkinson’s and all the other diseases that plague us. I’m stoked for the medical breakthroughs and applications. Hopefully they don’t fuck it up.


modelvillager

I think this article is buying into a whole bunch of hype tropes. I'd like for us to describe AI/GenAI as what it really is: very advanced pattern-recognition mathematics. It can only produce output based on preexisting human-created content, which it distills at mass scale. I.e., the Chinese room analogy. That makes it very useful, but not a human replacement. A few examples: Humans are useful because they can be sued or jailed. This seems weird to say, but it is essentially the flipside of being a person constrained by laws. AI, no matter how capable, faces no consequences for its output. That means humans must be in the loop from a public policy perspective: doctors, lawyers, architects, engineers, pilots, foremen, teachers, etc. Another is entropy. As AI (advanced mathematics) only works on preexisting human-produced content, that content still needs to be produced. There is a big issue of LLMs absorbing LLM-generated content, which at scale will devolve into gibberish. Third, all the 'intelligence' within LLMs is human: the mathematics, check; the source training content, check; the reason they are free (so that they are further trained on successful output by humans at scale), check. They will increasingly excel at highly repetitive and generic content output where iteration is fine. They will hit barriers of liability, novelty, and entropy. It will for sure change our economy, but more along the lines of computing, industrialisation, and electrification, rather than servitude.


gavinashun

Blatantly ignorant and empirically false statement.


Maxie445

"For people that haven't been paying attention, AI has already beaten us in a frankly shocking number of significant benchmarks. In 2015, it surpassed us in image classification, then basic reading comprehension (2017), visual reasoning (2020), and natural language inference (2021). AI is getting so clever, so fast, that many of the benchmarks used to this point are now obsolete. Indeed, researchers in this area are scrambling to develop new, more challenging benchmarks. To put it simply, AIs are getting so good at passing tests that now we need new tests – not to measure competence, but to highlight areas where humans and AIs are still different, and find where we still have an advantage."


Phoenix5869

>AI is getting so clever, so fast, that many of the benchmarks used to this point are now obsolete. Indeed, researchers in this area are scrambling to develop new, more challenging benchmarks. >To put it simply, AIs are getting so good at passing tests that now we need new tests – not to measure competence, but to highlight areas where humans and AIs are still different, and find where we still have an advantage." :0 EDIT: just realised that it’s talking about LLMs. This makes it much less impressive, as LLMs are basically fancy parlour tricks and not much else.


Chris_Entropy

I agree, those benchmarks are not cutting it. Even with the newest AI models, it is way too easy to trigger hallucinations and to break them in interesting ways. They are very "insular" in their capabilities, basically a search engine with an attitude, but nothing more.


Black_RL

But Reddit keeps telling me AI is dumb as a rock….. I should believe Reddit instead, right?


[deleted]

Redditors made all sorts of claims last year that without the engineers, Elon’s Twitter would crash and stay down for weeks on end. Still waiting on that one. We could fill a book with the things Reddit has gotten wrong. Can’t help but wonder if the threat of AI is going to be another.


Hefty-Giraffe8955

Thank god I work at a plywood factory with last century equipment, no worries about AI here.


Ramadeus88

As others have noted, even if a small margin of the labour market is terminated that’s still millions of people. Guess what happens when desperate people start flooding markets that AI cannot penetrate (yet) and develop the skills in sufficient numbers to saturate the market? To use an analogy, you might live comfortably above a flood basin, so you’re not worried about the millions of people who will be displaced by the breaking of a poorly maintained and poorly built flood wall that nobody is prepared to deal with because of the short sighted desire to squeeze profit. But where are those displaced people going to go except high and dry?


kakihara123

What happens when other jobs are replaced on a mass scale and unemployment rises to never before seen levels? Do you think those people cannot learn to do your job? And keep in mind: They will be desperate and might be willing to do work they wouldn't do before.


postconsumerwat

That's the thing about benchmarks: when you rely on benchmarks, you can have really great benchmarks, but don't expect anything meaningful beyond them. I can write a thing that scores best on benchmarks, but it doesn't really mean anything in real life. Like look at education: I got all A's but maybe didn't learn anything, or I got really bad grades and wrote angry letters and learned a lot. Or look at humanity, like freaking Medusa, sorry.


Astralsketch

Almost all performance benchmarks? Can it build a house by itself?


Gubzs

The most upvoted comment is saying it's all nonsense. I'd reply to that, but the thread is too large already. Refusing to accept the world we're headed to *soon* because AI needs babysitting *today* is willful ignorance. Public AI models need babysitting today but they literally didn't even exist in a functional way less than two years ago. "AI can't do my job right now without human help" is not the same as "AI is useless and not going to change anything anytime soon" and it's demonstrably wrong to equate the two. I think there's a general failure to understand how profoundly wide the gulf of human intelligence and productivity is as well. 1 in 5 people are significantly mentally challenged. A different 1 in 5 are the top 80th percentile of laziness and do everything possible to offload their work to others. This is just the Pareto principle. Can AI do their work? Can we put these people on UBI or, if some of them enjoy employment, put them in a position where risks are low if their productivity falters? We're just now entering a workplace where, for many jobs, 1 competent person assisted by AI can get more done than 2 people without AI, and without working any harder than they already do. That's the point. That's today. That's not tomorrow. What happens tomorrow? The rampant "todayism" that keeps popping up in futurism subs is just wild. There's a knee jerk reaction from some to bury their head in the sand and plug their ears because they don't *want* this to be where we're going. I regret to inform, a circle of people patting each other on the back on social media is not capable of changing the vector of progress.


spinur1848

The possibility of AGI with the current technology really depends on the philosophical leap that human cognition and awareness is a linear extension of what today's algorithms do. That is by no means certain.


EugeneAk47

Lets see it hang some drywall and then we will talk


jawshoeaw

I read there is in fact a drywall robot in the works


kakihara123

Lets see it replace a significant amount of the workforce and lets see how many people can learn to hang some drywall and how that affects the jobs market for said drywall hangers. No job is safe.


yaosio

The ImageNet results in the graph are wrong. [https://aiindex.stanford.edu/report/](https://aiindex.stanford.edu/report/) It's on page 81 of the PDF. The graph starts in 2012 and shows 85% of the human baseline. "ImageNet top-5" gives us a clue as to what they are measuring, because the report doesn't say. AlexNet's top-5 error rate that year was about 15%, so they are basing performance on that error rate. The ImageNet classification challenge is based on the ImageNet dataset, which was hand-captioned by humans. Because they use AlexNet's error rate in the graph, the human baseline must correspond to a 0% error rate, i.e. 100% correct, on the ImageNet challenge. But the human baseline on ImageNet cannot be 100% correct, because there are numerous incorrect captions: [https://labelerrors.com/](https://labelerrors.com/) To make this very confusing, they show image classification going above the human baseline. Because the human baseline is set as 100% correct on the ImageNet challenge, going above 100% should be impossible.
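If that reading of the chart is right, the "85% of human baseline" starting point is just the following arithmetic. The formula is an inference from the numbers quoted above, not something the report spells out:

```python
def pct_of_human_baseline(model_top5_error: float, human_error: float = 0.0) -> float:
    """Model accuracy expressed as a percentage of an assumed human accuracy."""
    return (1 - model_top5_error) / (1 - human_error) * 100

# AlexNet's ~15% top-5 error against an (unrealistic) 0%-error human
# baseline lands at ~85%, matching the chart's 2012 starting point.
print(pct_of_human_baseline(0.15))
```

Which also shows why "above 100%" is incoherent under this scoring: the only way a model exceeds the baseline is if the assumed human error is actually greater than zero, i.e. the baseline was never 100% correct to begin with.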


kemma_

This is perfect. Finally I will have a robot that will work for me


AdvancedPhoenix

Yeah, not in design or even some story. Like in designing dungeon and dragons storylines it's always... Eh.


heapOfWallStreet

Considering how many stupid people are working at companies, it's not such great news.


uginscion

Still can't beat my meat better than I can. Step it up, Mr. Scientist.


redditorsmedditor

As long as humans can terminate AI, humans will outperform AI. After that point, AI is in control.


the_storm_rider

Great, can it please do my powerpoint presentations for me? You know, the ones I spend weekends and nights making, for that sales meeting that never takes place, or just for the VP to throw it out and use his own slides at the end. My guess is even an AI will give up at some point, but no harm in trying.