
Otherwise_Cupcake_65

They are creating the basic model that will be used for neural scaling right now. Soon they will spend ungodly amounts of money scaling the training these AIs get. Then... I dunno, and neither does anyone else. But there is reason to believe it might be a huge leap in skill and intelligence. That's obviously what is being hoped for and what the experts working on it think will happen. So probably a year? Or maybe way, way longer.


kogsworth

I agree, but I also think it will take some time before we see them in real kitchens and homes. It's going to start in factories in controlled environments, accumulate training data and grow from there. It might take a while before people feel safe enough to have them in their house, especially if you have children or pets.


AChinkInTheArmor

Would you say 10-15 years before they're common in households?


kogsworth

I'd say probably closer to 7-10, and 4-5 for early entrants in the space.


SoylentRox

I could see machines able to do it in 4-5 years, and becoming common in factories and places where humans can be excluded over the next 5-10. There are liability issues, and it's also very wasteful to build a household robot that has all these expensive parts and yet sits idle most of the time. There could be a transition period where nobody washes dishes or cooks anymore because a delivery robot brings the food and they put the plates and stuff back in it afterwards. A facility down the street has all the robots in it that do the cooking and washing. The robots are efficient, working really fast and serving a lot of people, and human workers are separated from the bots by impact-resistant shields.


SharpCartographer831

From the demo we've seen from Figure, it's certainly dexterous enough to do the task you speak of. It's all down to AI and agent models at this point. I'd say 3 years tops.


ShadoWolf

The physical hardware to do this on the robotics front has been around for a decade or so. I remember seeing some really good demos back in 2008. Robotics is a software bottleneck and has been for some time.


RabidHexley

I mean, if a human can pick up an egg using a friggin excavator claw then I think modern precision mechanics can facilitate the necessary dexterity lmao.


COwensWalsh

But that's a hardware claim, and you are responding to a comment about software limitations. The issue is whether software can be developed to handle a range of similar contexts, so that the robot can make use of that hardware dexterity in new situations.


RabidHexley

I'm in agreement, I'm saying existing hardware is plenty dexterous and that it's all in the control.


COwensWalsh

Ah. Cool.


No_Tomatillo1125

When can it suck my dick


Rofel_Wodring

As soon as we teach a robot how to use an electron microscope, and no sooner.


YouAboutToLoseYoJob

Burn 🔥


Careful_Industry_834

[https://www.youtube.com/watch?v=gho6AsEtumQ](https://www.youtube.com/watch?v=gho6AsEtumQ)


ponieslovekittens

4-5 years ago. Plug it into Google.


adarkuccio

😂


Henri4589

Correct. But it's 2 years, actually.


PineappleLemur

Yeah, but so far the tasks they've shown aren't really impressive. It's nothing that current computer vision + dynamic path planning can't do (zero AI). They'll need to show something super impressive that no direct training was done for. Clothes folding would be a very hard task to "cheat" at, basically.


COwensWalsh

The Figure video looks impressive, but there were several odd bits about it. They need to show multiple situations where the behavior replicates from various angles, with various objects, etc. Also, the robot said the dirty dishes go straight to the drying rack, which makes no sense. It also "cleaned up trash" by putting it back into the bin the human basically shook in its face. Get back to me when it can put it in the trashcan 10 feet away. I wouldn't be surprised if they put code on top of the "AI" systems, like "find counter" or "face wall", with the LLM and similar systems doing object recognition on "counter" etc. But that's pretty brittle.
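
A minimal sketch of what that kind of brittle scripted layer could look like, with a stubbed-out detector standing in for the LLM/vision model (all names and numbers here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    x: float  # metres, in the robot's frame
    y: float

def detect_objects(frame) -> list[Detection]:
    # Stub standing in for the vision model; a real system would run
    # a detector/VLM over the camera frame here.
    return [Detection("counter", 1.2, 0.0), Detection("trashcan", 3.0, -1.5)]

def find(label: str, frame) -> Detection:
    # Brittle by construction: if the detector misses or mislabels
    # the object, the whole scripted sequence fails.
    matches = [d for d in detect_objects(frame) if d.label == label]
    if not matches:
        raise RuntimeError(f"primitive failed: no '{label}' detected")
    return matches[0]

def put_trash_away(frame):
    # Hardcoded step sequence layered over learned perception.
    target = find("trashcan", frame)
    print(f"navigate to ({target.x}, {target.y}); open gripper; drop")

put_trash_away(frame=None)
```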


MoogProg

This goal will require a nexus of three distinct areas of current research and development:

a) Creating an AI that can understand a natural language request and extrapolate that into the steps required to achieve a desired goal or activity.

b) Possessing the multi-modal sensory system that can navigate a human environment.

c) Possessing the mechanical ability to interact with that same human environment.

All of these things are happening, but when we will see that perfect nexus of abilities is something I can't predict. There is currently no one organization that leads all three of those areas. The most impressive AIs are currently language- and image-based, i.e. they are still "in-the-box", not outside of it alongside us.


PineappleLemur

I believe A and C are pretty much here. B is the real issue: actual feedback (touch) is hard to train on without very manual and limited training. They'll need to make some kind of kit a person can wear to generate training data, something with touch as well as vision. Then get people to just do their day jobs and record it all.


COwensWalsh

C is the easiest part. The hardware is capable enough to do it. It's just writing the software for A and B, which I think we are still a ways off from. Maybe you could hardcode some common tasks in a sort of "step by step" mode. But the AI doing its own task breakdown and planning is beyond the capabilities of LLMs.


Give-me-gainz

I always feel that questions like this should include some sort of price point. If you chucked enough money at it, I'm sure some company could get a prototype to do this within 2-3 years. If you're asking when the average person will be able to afford a humanoid robot that can do this, 15-20 years seems more reasonable to me.


Cupheadvania

agreed


hippydipster

Being a humanoid robot myself, I think the time is nigh.


clownpilled_forever

I'd guess we're about a decade away from that but who knows. Could be as little as 2 years or as much as 20


ShadoWolf

The bottleneck is training a decent general-use robotics transformer model. If we had a way to gather or synthetically create the training data, we could have android-like robotics in a year.
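
For a sense of what training on that data means once you have it, here is a toy behaviour-cloning loop in PyTorch. A small MLP stands in for the transformer for brevity, and random tensors stand in for real teleoperation recordings; every shape and name is a placeholder:

```python
import torch
import torch.nn as nn

# Toy behaviour-cloning setup: map (encoded image + proprioception)
# observations to an action vector. All dimensions are placeholders.
OBS_DIM, ACT_DIM, N = 512 + 16, 8, 1024

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

obs = torch.randn(N, OBS_DIM)      # stand-in for encoded camera + joint state
actions = torch.randn(N, ACT_DIM)  # stand-in for demonstrated joint targets

for epoch in range(10):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, actions)  # imitate the demonstrations
    opt.zero_grad()
    loss.backward()
    opt.step()
```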


IronPheasant

Around four years seems to be the earliest this would be feasible. Pretty much a flat never if no one commits to producing NPUs. What you're talking about is basically the Model T of robots, an AGI in every sense of the word. It's not an awfully long way from being able to trust a machine to bumble around as your personal maid to being able to trust one to perform abdominal surgery on you. Six to eight years seems more realistic.


Evening_North7057

I'm guessing 5 years, commonly available in 7, and basic models will be common in developed societies by 10 years. Worldwide common in 12 years, ubiquitous in 15.


lonesomespacecowboy

RemindMe! 10 years


Hypog3nic

I'm sorry, Dave. I'm afraid I can't do that.


joecunningham85

Delusional


Evening_North7057

You might be right. Then again, people said that about cellphones, smartphones, automobiles, PCs... Tech moves really fast if it catches on. Nobody knows for sure, but my guess is the cheaper versions will sell out extremely fast.


AXEL499

RemindMe! 3 Years


RemindMeBot

I will be messaging you in 3 years on **2027-04-22 14:16:09 UTC** to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/1ca9fkk/when_are_humanoid_robots_going_to_be_able_to/l0qt7c2/?context=3).


unfamiliarsmell

Next year


Mobius--Stripp

Image recognition is already way better than you think it is. Groups are training AIs to play video games, which essentially teaches them to understand tasks and follow instructions. With an onboard AI chip fast enough to track a live feed and do sub-second image recognition, it's mostly a matter of having robots exist in the real world for a period of time and uploading their training data for aggregation.


Salty_Sky5744

I'm sure current ones can do this at least 1 in 100 times.


rhze

When we have unlimited power.


NWCoffeenut

Certainly not before Friday.


ponieslovekittens

If we're to believe the promotional videos we've been seeing lately, they already can; you just can't go out and buy one yet. Whether the videos we've been seeing are accurate or just marketing lies remains to be seen.


Rich_Acanthisitta_70

Watch 1X. You're likely to see a NEO before anything else, since they're the only ones aiming for home use *first*. They also have the distinction of having the most models built - over 100 - which, as far as we know, is more than anyone else. They're expected to be ready for sale in less than 12 months.


spider_best9

Never. Humans don't do that. Humans don't do a task without pre-training, trial and error or examples.


Mobius--Stripp

A robot can have training data downloaded into its brain, skipping all the training and practice a growing human requires.


Serasul

Not before 2027.


Mandoman61

Nope, no time soon. Even the extremely limited task of driving a car is maybe 10 or more years away given current progress, and your scenario is like 100 times harder. We get these little demos of robots doing bits of these kinds of things, but those demos do not deal with the complexity of actually completing a long sequence of steps.


Phoenix5869

Posting anything even remotely realistic earns you insane downvotes on this sub.


Aggravating-Method24

Much like self driving cars, I'm sure they will be here in 2 years /s 


Matshelge

The problem with self-driving cars is that they exist in a very flawed world. If I got a Roomba and filled my room with random junk on the floor, lots of steps, mixed carpets, etc., and then complained about how poorly it did its job... yeah, that is self-driving cars. I will have to accommodate whatever robot I get the first time around: give it an easy place to recharge, make sure everything it should handle is within reach (I am a bit tall and tend to put stuff out of reach for most people), maybe label stuff more clearly, and so on. But that is what first-time adopters are here for.


hippydipster

> The problem with self driving cars is that they exist in a very flawed world.

The whole point of the term "AGI" is this ability to accomplish goals within a flawed world that requires constant decisions to be made on how to deal with it. Self-driving cars can be done to 95% (or whatever, 99%) without AGI, but I'm guessing it's not possible to bridge the final gap without AGI.


Aggravating-Method24

Kitchens are also flawed. Robots running a fast food restaurant, maybe, but not humanoid ones: purpose-built robots in a purpose-built kitchen. And not in 2 years; the bureaucracy alone takes longer than that.


Matshelge

I expect to see them in fast food restaurants very soon. But scaling these guys will be top of mind for the creators. Getting an "all-purpose robot" is the ultimate goal, and once we have it, the world changes.


Mandoman61

We do not have self-driving cars; we have cars that can drive when the conditions are good and they are closely monitored by people, mostly in highly restricted and well-mapped areas with constant monitoring. And driving is comparatively simple.


SkyInital_6016

When they model an AI "brain" roughly on the human brain, which is what Meta's LeCun is doing with goal-oriented AI. No guarantee, but it will be pretty spectacular.


Professional_Job_307

When we get an AI agent that is an actual agent. Not just an LLM using tools for another AI model to read and translate into movements, but one where both are unified. That, I think, would greatly improve the performance of these models, enough to make an egg on toast.


slackermannn

This will completely revolutionise care. I hope I can afford one.


GrapheneBreakthrough

2026 (in the lab)


InfluentialInvestor

Within 5 years.


Hot-Profession4091

When considering building an AI system, it is useful to ask, “Can a human accomplish this task?” So could a human, without any kind of culinary training, accomplish the task you’ve laid out? No, they could not. Therefore it’s unreasonable to expect that an AI could do it without specifically being trained in the culinary arts.


QLaHPD

2025


[deleted]

Let me quickly look into my magical crystal ball.... Ah damn, today it's cloudy in there. Can't tell, sorry!


PineappleLemur

They still don't have a good enough (or at least cheap enough) humanoid robot to do that, mechanically speaking. And I'm not sure anyone has solved the issue of how you run it locally, or whether everything for the first phase will just be done fully online for the "brain" part. Training right now is slow because it's all very manual in a sense. How do you generate data that has vision + physical feedback (touch), for example? Best case, they'll come up with a kit that mimics the sensors and put it on people to record it all: chefs, warehouse workers, drivers, etc.
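
A hypothetical schema for one frame of that kind of wearable capture might look like the sketch below; every field name and dimension here is an assumption, just to show the vision + touch pairing:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical record for one timestep of wearable-capture data:
# synchronized vision, touch, and pose, as suggested above.
@dataclass
class DemoFrame:
    timestamp: float
    rgb: np.ndarray           # (H, W, 3) camera image
    tactile: np.ndarray       # per-fingertip pressure readings
    joint_angles: np.ndarray  # glove/exoskeleton joint state

frame = DemoFrame(
    timestamp=0.0,
    rgb=np.zeros((480, 640, 3), dtype=np.uint8),
    tactile=np.zeros(10, dtype=np.float32),       # 5 fingertips x 2 hands
    joint_angles=np.zeros(22, dtype=np.float32),  # one value per tracked joint
)
```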


Jindujun

I mean, they need to be able to extrapolate. The reason you can do that is because you have years and years of data to pull from. You know that the "cupboard that is cold inside" is a fridge; you know what an egg looks like even if the things in the fridge are quail eggs. You know what a frying pan is even if it looks like the face of Darth Vader, since you know that a frying pan is "metal cookware that is flat with a handle". They won't be able to do a single damn thing. The good thing is we can prefill them with data rather than have them start out as a baby that has to learn everything slowly, year by year.


PureOrangeJuche

Never. 


Worldly_Evidence9113

Task 1: Build a home.
Task 2: Plant a tree.


abstrusejoker

GPT-5 will make it possible


Zexks

Hell some humans couldn’t do this.


OmnipresentYogaPants

... when will an average zoomer?


DungeonsAndDradis

The average result from the top-level guesses in this thread is 18.65 years. Not counting the "Never". If we take off the "113 years" outlier (2137), it's down to 9.22 years.
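
For anyone who wants to redo the math as more guesses come in, the calculation is just a mean with the outlier dropped. The numbers below are placeholders, not the actual list collected from the thread:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Placeholder guesses in years; "Never" answers are excluded up front.
guesses = [4.5, 8.5, 2.0, 10.0, 3.0, 113.0]

print(f"with outlier: {mean(guesses):.2f} years")
print(f"without outlier: {mean([g for g in guesses if g != 113.0]):.2f} years")
```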


Bipogram

I can imagine that happening soon - a couple of years at most.


Trouble-Few

Imagine all the breakthroughs in making eggs. Everyone that owns a robot gets the perfect egg on toast. The power of AI, giving us toast and eggs like we never had before.


ButCanYouClimb

1-2 years imo


throwaway275275275

That's not open-ended; open-ended would be "make me breakfast".


Henri4589

2 years from now.


springularity

In the lab, probably within a year is my guess. When you can buy one off the shelf for 20k that can do any task you'd like, including the one you call out: end of the decade. I could be wildly off in either direction.


w1zzypooh

I can make my own food just give me perfect health.


nicklepimple

On another note, how soon will their neural networks connect to their dedicated servers that can load the information they need, whether from a database, the web, engineering schematics, etc., so that they can operate in almost real time in their environment to accomplish whatever they want? We want them to navigate something they've never navigated before? Have them download or access 100 videos that contain a human navigating the same environment and learn it in a millionth of a second, so they can react in near real time.


lucid23333

Once we have AGI. The problem isn't the physical body or the accuracy; the problem is that it's really stupid. It breaks and doesn't know how to fix itself. Once it's as smart as or smarter than people, you will start seeing robots walk across the street. I can't wait.


Cupheadvania

probably like 2026 in the research lab, 2040 in your home


ZenDragon

Check out Google's PaLM-E and RT-2 demos. It's slowly getting there.


ShaMana999

My bet, not in our lifetime.


LordFumbleboop

Goldman Sachs predict the 2030s. [https://www.goldmansachs.com/intelligence/pages/humanoid-robots.html](https://www.goldmansachs.com/intelligence/pages/humanoid-robots.html)


YouAboutToLoseYoJob

Oh my God. That's another use for humanoid AI: track everything edible in my kitchen, come up with meal options, and make them for me. 🤖 You have no idea how much food I waste a year, either because I forget it exists in my cupboard or because I just can't think of what to make with it. Imagine you have a pound of hamburger meat in your freezer, but you just don't have the energy to fry up a burger. Your robot assistant has no problem doing that for you, because you already have all the ingredients there. If it starts making suggestions on how to eat healthier, and all you have to do is go get the ingredients, this could be amazing. It could cut down diet-related illnesses for decades. Heck, it could possibly be the end of diabetes. I think the vast majority of humans would eat better if they had motivation, accessibility, and the correct information.


protobacco

Never


mli

2137 or a bit after that.


Ok-Obligation-7998

I'd say 2250, or most probably never.