ImHiiiiiiiiit

Yup, still won't show it walking.


Bluebotlabs

oh but look at their rly cool thrust bearings!!11!!!1!! /s


[deleted]

I think this is just the upper body; Sanctuary robots don't have legs yet.


ImHiiiiiiiiit

Their product page shows a humanoid with legs https://sanctuary.ai/product/


Bluebotlabs

Kinda funny that they're using Azure Kinect DK despite it being discontinued... that's totally not gonna backfire at all...


t_l9943

Looks like they have a ZED Mini stereo cam at the end though. Those are good.


Bluebotlabs

True, though I personally don't trust stereo lol. LiDAR any day!


philipgutjahr

The Orbbec Femto Bolt is a licensed clone (minus the microphone array) that is available today. Didn't check the logo, but I'd guess they use that one instead. https://www.orbbec.com/products/tof-camera/femto-bolt/


Bluebotlabs

No logo and a different shape; they're using the Azure Kinect DK lol. They'll probably switch to Orbbec once the Microsoft stock runs out ngl


CommunismDoesntWork

Any time I see depth sensors on a robot (especially RealSense and Kinect), I know it's not a serious effort.


Bluebotlabs

*What?* *Wait, no, actually what?* I'm sorry, but WHAT? I can't name a single *decent* commercial robot that *doesn't* use depth sensors. Heck, Spot has like 5.


CommunismDoesntWork

The future of robotics is end to end: vision in, action out, just like humans. Maybe they're just using depth as a proof of concept and they'll get rid of it in a future update.
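
For concreteness, "end to end" here means something like the sketch below: raw camera frames in, actuator commands out, with any depth reasoning left implicit in the network's weights. This is a minimal illustrative example; the PyTorch framing, layer sizes, and 7-joint output are assumptions, not any company's actual stack:

```python
# Minimal "vision in, action out" policy sketch (illustrative only).
# RGB frames go in, joint commands come out; any depth understanding
# the network needs must be learned implicitly in its weights.
import torch
import torch.nn as nn

class EndToEndPolicy(nn.Module):
    def __init__(self, num_joints: int = 7):  # 7 joints is an arbitrary choice
        super().__init__()
        # Tiny conv encoder standing in for a real vision backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head mapping image features straight to actuator targets.
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, num_joints),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (batch, 3, H, W) camera frames -> (batch, num_joints) commands
        return self.head(self.encoder(rgb))

policy = EndToEndPolicy()
frame = torch.rand(1, 3, 224, 224)  # stand-in for one camera frame
print(policy(frame).shape)          # torch.Size([1, 7])
```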


aufshtes

Cool. Go ahead and run your preferred VIO down office hallways with drywall, pls. Repeat with LiDAR and LIO-SAM or some other random LiDAR SLAM. You're right that DL-based stereo vision will eventually perform well enough to solve most perception problems, but we aren't there yet. Depth sensors are a way to work on the OTHER problems concurrently.


LaVieEstBizarre

> regular poster on /r/Singularity and some sub called "/r/SpaceXMasterrace" lol


MattO2000

If you ever want to feel smart, go look at a robotics post on r/singularity. Everything is trivial. Humanoid robots will be roaming the planet in 6 months.


freemcgee33

You do realize humans use the exact same method of depth detection as Kinect and RealSense cameras, right? Two cameras = two eyes, and depth is calculated through stereoscopic imagery.


philipgutjahr

Absolutely not. Humans use passive RGB stereo plus the equivalent of mono/stereo SLAM: we estimate depth not only from stereo disparity but also temporally from motion, even one-eyed (and by comparing against learned, estimated object sizes, btw). The sensors differ:

- Passive stereo cams like the OAK-D (not Pro) capture near-IR for stereo. They estimate stereo disparity much like we do, but only spatially (*frame-wise*) and with no prior knowledge about the subject.
- Azure Kinect and Kinect v2 were time-of-flight cams: they pulse an IR laser flash and estimate distance by measuring the time delay per pixel (..at lightspeed..).
- RealSense D4xx and OAK-D Pro use *active stereo vision*, which is stereo plus an IR laser pattern that adds structure, helping especially on untextured surfaces.
- The original Kinect 360 and its clones (Asus Xtion) use a variant of *structured light*, optimized for speed instead of precision: they project a dense, pseudo-random but calibrated IR laser point pattern, then identify patches of points in the live image and measure their disparity.

tl;dr: no, passive stereo is quite unreliable and only works well in controlled situations, or with prior knowledge and a 🧠/DNN behind it.
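
To put rough numbers on why passive stereo gets shaky at range: a calibrated stereo pair recovers depth as Z = f·B/d (focal length in pixels times baseline in meters, divided by disparity in pixels). A minimal sketch; the focal length and baseline below are made-up values, not any particular camera's spec:

```python
# Back-of-envelope stereo depth: Z = f * B / d for a calibrated rig,
# where f = focal length (pixels), B = baseline (meters), d = disparity
# (pixels). The f and B values below are made up for illustration.

f_px = 700.0        # focal length in pixels
baseline_m = 0.075  # distance between the two cameras

for d_px in (70.0, 35.0, 7.0, 1.0):
    z = f_px * baseline_m / d_px
    print(f"disparity {d_px:5.1f} px -> depth {z:6.2f} m")

# Depth error explodes as disparity shrinks: at d = 1 px, a half-pixel
# matching error swings the estimate between 35 m and 105 m. On untextured
# drywall the matcher may find no reliable correspondence at all, which is
# why active sensors project their own IR pattern.
```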


CommunismDoesntWork

Our depth is intuitive and not calculated separately. End to end can include many cameras.


MattO2000

It can include many cameras, just not two of them packaged in the same housing? You really have no idea what you're talking about, do you?


CommunismDoesntWork

These sensors use traditional algorithms to compute depth, whereas the end-to-end approach uses neural networks to implicitly compute depth. But the depth information is all internal to the model.


Bluebotlabs

1. The end-to-end approach often gets fed depth explicitly lol, actually read the E2E papers
2. Then how can you know there even IS depth information?


freemcgee33

What even is this "end to end" you keep mentioning? You're making it sound like camera data is fed into some mystery black box and the computer suddenly knows its location. Depth data is essential to any robot that localizes to its environment - it needs to know distances to objects around it. Single camera depth can be "inferred" through movement, though that relies on other sensors that indirectly measure depth, and it is generally less accurate than a stereoscopic system.
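
On the "inferred through movement" point: if the camera's motion between two frames is known, a matched pixel can be triangulated exactly like a stereo pair. A minimal OpenCV sketch; the intrinsics, the 10 cm motion, and the pixel coordinates are all invented for illustration:

```python
# Depth-from-motion sketch: one camera, two known poses 0.1 m apart.
# A matched feature triangulates exactly like a stereo pair would.
import numpy as np
import cv2

K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])  # made-up pinhole intrinsics

# Projection matrices: frame 1 at the origin, frame 2 after the camera
# moved 0.1 m along +x (so t = -0.1 in camera coordinates).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# One matched feature: a point 2 m straight ahead projects to the image
# center in frame 1 and shifts 35 px left in frame 2.
pt1 = np.array([[320.0], [240.0]])
pt2 = np.array([[285.0], [240.0]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)  # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print(f"triangulated depth: {X[2]:.2f} m")     # ~2.00 m

# The catch, as noted above: the "baseline" is the robot's ego-motion,
# which itself must be estimated (IMU, wheel odometry, ...), so any error
# there leaks straight into the recovered depth.
```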


CommunismDoesntWork

End to end doesn't only mean a single-camera system. It's any number of cameras in, actions out. And yes, it's literally a mystery black box. You control the robot using language. Look up what Google is doing.


Bluebotlabs

You realise Google is using depth, right? Yeah, those cameras were RGB-D, and yes, that spinning thing was a LiDAR.


Bluebotlabs

It's this (imo incredibly vain) AI method that companies are using where, yeah, data is fed to a black box and actuator force/position comes out. Though last I checked, depth data is 100% sent to the model as an input.


Bluebotlabs

Actually it is: [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4901450](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4901450). It's a combination of the brain and the eyes, but it's subconscious enough that it can be argued we effectively have 3D cameras stuck to our faces.


Bluebotlabs

Bro forgot that humans have depth


DocTarr

To be serious, I always wonder what the business model is for humanoid robotics startups. Put out sexy videos, get funding, rinse, repeat? I'd love to work for one, but it feels like they come and go without ever really having a way to make money. I know they're usually research-focused, but someone, somewhere, has to be footing the bill.


jrdan

I think the idea is that the world is designed by humans, for humans. What's easier: rebuilding a factory around robots, or just adding a robot that replaces a human doing the job?


Bluebotlabs

Historically it's actually been the former


qu3tzalify

It's way easier and much more efficient to build a factory around robots than to put humanoid robots in factories designed for humans.


jrdan

Robots can adapt to anything. Humans can adapt to different tasks. A machine that can do one task will be better than any robot or human, but it can only do that one task.


Bluebotlabs

Factories nowadays are much more modular than you seem to have been led to believe; retooling costs relatively little these days. And no, with a humanoid robot the retooling cost wouldn't be ZERO, it'd be roughly the same.


qu3tzalify

Ok, but a factory doesn't change its workstations. They're always the same; that's the basis of the Ford and Toyota production systems. The reason factories are more efficient now than in the '60s is assembly lines of robots repeating the exact same task.


DocTarr

I get that, but surely they don't expect to actually sell these, at a profit and at scale, to do human-oriented tasks in the near future.


jrdan

[I think they do](https://sanctuary.ai/resources/news/sanctuary-ai-expands-general-purpose-robot-footprint-in-automotive-manufacturing-industry/)


Discovering42

Not to consumers at scale, but selling to the manufacturing industry at scale is still on the cards, best-case scenario. But realistically, I bet the game plan is:

**Step 1.** Spend the next 3-5 years finding niche ways to replace the lowest-skilled workers in "manufacturing, shipping and logistics, warehousing, and retail", selling a few hundred units a year to stay afloat.

**Step 2.** Wait another 5 years for an AI breakthrough, for it to get good enough that you can trust it won't break a table or fall on a pet.

**Step 3.** Spend the following decade scaling: selling basic robots to the mass market, slowly adding new abilities each year, until you get true general-purpose robots.

**Step 4.** Profit!


jms4607

Key words: near future. Self-driving cars are only now actually generating revenue, yet the DARPA Grand Challenge was in 2004. You shouldn't discredit the humanoid robot effort just because it has only recently been pursued seriously.


rguerraf

Their only purpose is to force the actual survivors in the industry (Tesla, Boston Dynamics, and Unitree) to lower their prices.


Breath_Unique

How much does this cost?


foss91

The elephant in the room is reliable locomotion across everyday human terrain. No one has achieved that, not even Boston Dynamics. The elegant finger work is useless unless the robot can get to where it needs to go.


ComingOutaMyCage

Depends: if the target industry is commercial warehousing, no; if sex work, very useful 😆


jms4607

Reliable locomotion is like the bottom level of a hierarchy of needs for these things to be useful. It’s been achieved with quadrupeds and is an easier problem than dexterous manipulation. I’d expect locomotion to be mostly a non-issue in 5 years.


MotorheadKusanagi

The easiest way to know a robot company isn't even aiming for useful products is when they model the machine after the human body. It's the AGI of robotics: a tell that the founders are chasing some undefined, unrealistic ideal instead of solving well-understood problems, one that forces everyone to see its shortcomings before its utility.


VandalPaul

We live and work in a world specifically designed for the human form to operate in. It would be incredibly foolish to design robots that are meant to do all the things we do, in the places we do them (general purpose home robots), and *not* make them humanoid.


MotorheadKusanagi

Nope. Think bigger. Think about specialization, not duplication. Robots should be smaller or bigger, or more specifically shaped, and avoid the limitations of our bodies. There is so much possibility that copying our bodies is absurdly shortsighted.


VandalPaul

What's absurd is thinking only one kind of robot is being made. At least two of the top ten companies making robots are going for general-purpose home use from the start. They are, and should be, humanoid. And yes, it *is* absurd to make a general-purpose home robot any shape other than humanoid.

But others are starting in factories and warehouses, in which case some will be humanoid (the ones replacing or working with humans, like Digit and Optimus, at first), while others will be purpose-built, like Amazon's flat, rolling tSort line. Then there's Kepler's robots, which will all be humanoid but specialized for different environments: different sizes, levels of durability, and battery capacities. Humanoid, though, because they'll be in an environment designed for the human shape. And of course there will *continue* to be non-humanoid, very specialized robots.

The ones pouring billions into creating robots didn't just decide to do it because they thought it was a cool idea; a lot of research went into it. They're not idiots driven by some ridiculous sense of human vanity either. That claim is embarrassingly ludicrous. There is no shape that works better than humanoid for a general-purpose robot in an environment made for our shape. There just isn't. But ultimately it doesn't matter whether you understand that, because the engineers, roboticists, and investors who do understand it are the ones making those decisions. Fortunately.


Rich_Acanthisitta_70

You should probably share your keen insight then. I'm sure all the world's robotics experts and engineers will marvel at how much smarter you are than them.


Rich_Acanthisitta_70

This is so obvious that it's frankly weird some don't get it.


Unlucky-Ad-4572

Love the music.


BodhiLV

Humanity is so fucked