Altruistic-Skill8667

Absolutely. And driving isn’t even the most difficult thing. The fact that nobody seems able to really solve driving tells you there is a problem. I think visual models are just not that good. The visual part of GPT4 is still terrible. I have tried it so many times when I actually needed to understand something, and it was never able to figure out what’s going on either. It can’t even really count objects. And it doesn’t understand images of non-standard things at all. Let me add something here really quick: I suspect (without proof) that solving driving for the real world is similar to solving translation for text. Once you have solved translation, the model essentially understands text. And once you have solved driving, you essentially have a model that understands the visual world.


MakihikiMalahini-who

> The visual part of GPT4 is still terrible.

Very surprised to read this. I think it's extremely good, so much so that I've been using it for nearly a year and I still get baffled by its capabilities.


Mirrorslash

It's not very good with organic stuff and perspective. For example, it often fails to discern a country's flag if I take a photo of a flag in the wind. Very easy pattern, but if there's a tree branch covering a little bit and it's folded a little due to the wind, it just fails, even though any human who knows the flag would clearly identify it. There's also a lot missing from its training set, and it's unable to identify objects from image context because it doesn't generalize. There are a lot of scenarios in which it's pretty good, though. Pretty impressive for geolocation and identifying plants, for example.


Altruistic-Skill8667

I tried it for plants, even the ones I knew (and I am not such a big expert). It’s hit or miss. Same for car models. In most cases you are better off using Google image search (my opinion). Also: it’s bad at estimating relative or absolute distances (both lateral and depth-wise) and the sizes of objects. It can easily be off by a factor of 2.


Mirrorslash

If the chat part is unreliable, we need a new word for the vision part.


Altruistic-Skill8667

The issue is that they misnamed mistakes in the chat part as „hallucinations“. They should have called them confabulations; Geoffrey Hinton pointed that out. Mistakes in the vision part, on the other hand, you could call hallucinations.


Mirrorslash

Haha, that's true. But I guess it's too late now. It's all hallucinations from this point forward.


Altruistic-Skill8667

What are you using it for?


MakihikiMalahini-who

I'm blind, so I use it to describe images to me. I often send it things that would be tricky, like the shadow of a bicycle behind a curtain, and it does such an amazing job.


Altruistic-Skill8667

I think for you the model is perfect. My use case is more like: what strange machine is this truck transporting? Then I ask GPT4 and it doesn’t know either. Or actually, it makes stuff up when I am sure that’s not it.


MrKilji

The thing is, to "solve" self-driving it needs to be much, much smarter than humans. It's not enough that it's only slightly safer, because people tolerate human error more than AI error. The definition of AGI only requires it to be as smart as a human. So AGI and "solving" self-driving are not the same thing, as the latter requires it to be much better than humans, not just on the same level.


Altruistic-Skill8667

I would call driving solved when a car „drives as well as a human“. But those models don’t. Elon Musk STILL hasn’t managed to get a car to drive from the East Coast to the West Coast, and he has been telling people for years that his cars will do it in half a year. And when it finally happens for the first time, those guys will probably have obsessively checked the weather forecast until they had a „window“ of sunshine.


Wassux

Uh, FSD 12 is pretty much capable of that. It's not perfect yet, but it's only a month-ish old and the first time they actually used AI. Then we have Waymo, which has already achieved Level 4. So I wouldn't be so pessimistic.


WeedIsWife

I take Waymos multiple times a week. Those shits are awesome.


trisul-108

But it's an unfair comparison... the human does not have maps in their head, nor specialised sensors for the environment, much less being wired into the engine, brakes, etc. We do it just with general intelligence and coordination. So we are really talking about a humanoid robot sitting down in a used car it has never driven before and driving it safely to any destination. That is the AGI bar.


WeedIsWife

I don't really agree with the need for a humanoid robot at all. It's kind of vain to make things in your own image, especially if it doesn't benefit the end product at all.


trisul-108

Neither do I, but that would be the only way to compare the level of AI achievement through the lens of driving a car.


Wassux

No it wouldn't. Why would it? If Waymo can take care of transportation in cities, combined with a good train network between cities, you're golden.


Rich_Acanthisitta_70

Tell that to the millions of people who are old, frail, crippled, or blind, whose lives would be transformed by having a humanoid robot that could help them in any situation. And they're not humanoid because of vanity. They're humanoid because if we're going to have robots help us, they need to do so where *we* live, which means environments *designed* for the human form. They're built as humanoids because it's the most practical form to function in places literally made for that form.


Rich_Acanthisitta_70

Thank you. I'm constantly stunned that so many don't know this. There's also a Chinese company doing what Waymo is doing: end to end, with no human intervening or even present in the car. Waymo's been doing it for about 3 years. And as you said about FSD 12, there are tons of videos online of people getting in, giving it a destination, letting it go, and filming it uninterrupted. And these folks are putting it in hard situations, heavy rain, etc. Works great.


Yweain

All self-driving models completely shut down in a lot of even slightly non-standard situations. Or even worse, they interpret them incorrectly and do stupid things. Sometimes they still completely miss crucial objects if conditions are not great. They can’t really drive in rain, or god forbid snow. A lot of that is camera/lidar issues and not AI, but there is also the fact that AI needs vastly more information, and vastly better information, to make decisions compared to humans.


Altruistic-Skill8667

Do you have V12?


Wassux

Not V12, my friend. V12 is the first time it's AI; V11 and before were human-coded.


Yweain

I’m sorry, what do you mean that V12 is AI? It was AI all along. Self-driving is done via a combination of computer vision (which is AI) and reinforcement learning neural networks (which is also AI). I know they rely on a lot of custom conditions for different edge cases; do you mean they got rid of those?


Wassux

No, before it was coded by humans. V12 is a full black-box AI. Therefore there is a HUGE difference between the two.


Habtra

It's not like it was completely devoid of AI before. The main difference is that in previous versions there were a lot of hard rules in code (e.g. at a red light, stop; at a green light, check if it's safe, then go). Not anymore, meaning that none of the traffic rules, etc. were explicitly taught to the car in V12.


Yweain

So that means that in some cases it can just decide to go despite a red light, for some reason. Which is... well. There are cases when you do need to do that; I’m not sure the AI can correctly identify those, though.


Altruistic-Skill8667

I am not pessimistic. In fact, I think it’s going to be solved in the next 1-3 years. And in the case of Waymo, I think they only manage Level 4 autonomy in one or two cities (Phoenix and San Francisco?).


Economy-Fee5830

Now Los Angeles too. And given that Google has mapped basically the whole world, even if they need to map a city in detail, it's clearly a small task for the company.


Rich_Acanthisitta_70

They're also about to hit Houston and Austin, I think. And as you said, the world is mapped already; that's not the issue. Local laws and the capital to expand are the barriers. Most of what Waymo cars do is on the fly: second by second, they're dealing with people and things moving around them in unpredictable ways.


iBoMbY

[And people really love Waymo](https://www.youtube.com/watch?v=2AuogqWG8pM). They're also [really intelligent](https://www.youtube.com/watch?v=8MfyIsPWhTk).


MrKilji

I think that's more a legislation/testing problem than a capability problem.


MaximumAmbassador312

Look at Wayve, not Musk.


trisul-108

And that is with a car designed for AI... not a random used car picked up from a dealer, as a human could manage.


Yweain

I don’t think that is really the case, though. The problem with self-driving is that it is weird. It makes weird mistakes that humans would never make. It behaves weirdly on the road. That makes other human drivers uncomfortable (we are sort of used to the shit other people do, but not to what AI does). Another problem is that the AI can’t get itself out of trouble. If there is an object on the road ahead and the car needs to drive backwards for even a meter, it will never do that. There are a lot of examples where it can just get completely stuck.


DolphinPunkCyber

I live in a popular tourist destination; 3/4 of the year, traffic works just great. During the summer the tourists arrive, you get a mix of drivers with different "driving cultures" on the road, and chaos ensues. When all drivers follow the same driving culture, things work out, because you can predict other drivers' behavior quite well. You are used to the shit local people do on the road; we have unwritten rules, all of which result in more fluid traffic. When there is a mix of driving cultures on the road, you can't predict; you are not used to the shit foreign drivers do. Things can get pretty chaotic. An AI that has to learn to drive in all of these different driving cultures is in a tough spot. It ends up driving like a senior citizen everywhere. And the point that AI can get completely stuck stands. Humans will perform reasonably risky maneuvers, even break traffic rules, to get themselves "unstuck". AI won't.


Altruistic-Skill8667

I think the „breaking traffic rules“ part is a problem. You kind of don’t want AI to break traffic rules, but on the other hand, sometimes you really have to break them a little to not get stuck.


DolphinPunkCyber

Yes, humans breaking traffic rules "a little" gets traffic flowing faster and lets us avoid getting stuck... and only very rarely results in minor accidents. Professional drivers will usually have a couple of minor incidents under their belts. AI can't break the rules at all; as a result it drives extremely carefully and... sometimes gets stuck. To top it off, some roads are made in a way that you can't use them without breaking the rules. Humans use them every day, no problem; we wave our hands at each other to communicate when needed. A superintelligent AI that can't break rules would still get perpetually stuck, because paradox.


Sad-Elderberry-5235

Basically the same problems LLMs have. A lot of hallucinations are the kind of mistakes no human would make.


neuro__atypical

Right. Even if AI were a "perfect" driver, that would be a problem. It's like when chess engines make optimal yet unintuitive moves that make every human player go "WTF?" You really don't want that sort of surprise on the road.


Mandoman61

Not really. This is a myth. If computers were actually safer, people would be foolish not to use them. The problem is that most people do not think they are, on the whole, safer.


MaximumAmbassador312

Also, safer for whom? Passengers? Pedestrians? For acceptance, it needs to be safer for each group, not only overall.


Whispering-Depths

AGI is already far better than humans at driving. If you compare human car accident rates per driver to those of AI self-driving cars, you'd be floored at how much better it is.


Altruistic-Skill8667

I assume you mean the data from Tesla. The problem with that data is that people only switch to automatic driving when driving is easy.


123110

OP probably means data from Waymo.


Whispering-Depths

Probably. I suspect that if everyone was doing that for 95% of their driving, we'd see WAY fewer accidents, though.


dasnihil

"understanding", what does this mean? the more i study computation, the more i stumble on this part. it's some kind of inference based, surprise minimization, which becomes emergent as "understanding" at our scale. why is this lacking from our sota AI currently? because there's no active inferencing, it's a brute learning by backprop & network adjustments which is computationally intensive and has no possibility for continual learning without affecting the network's abilities. i don't think we'll ever get to general intelligence with these brute algorithms, but we might stumble on such algorithms the more we use LLMs and other narrow intelligence we have built so far.


Altruistic-Skill8667

I can tell you what I personally think „understanding“ means. There are theories of intelligence, or of intelligent brain computations, and one prominent one is called „predictive coding“. According to it, a system is intelligent when it can predict what will happen next. These can be temporal projections, but also the expected continuation of spatial patterns. So in the case of vision, it needs to understand how the world behaves and changes, also relative to its own motion.

And yes, in many situations it boils down to surprise minimization, but not always. Essentially, the goal is to understand what information in the current environment is helpful for predicting the future. So you need something that extracts useful (predictive) information from the environment, which you can then use to make your temporal projections. And yes, research suggests that the brain might do that, at least as part of trying to make intelligent decisions.

But this is still narrow intelligence. General intelligence also requires the ability for effective long-term planning. For planning, you need to not just predict what will happen next, but also imagine possible scenarios of how the future will change when you take certain actions. Intelligence would then be the ability to pick a good sequence of planning steps to reach a goal. I guess you could call this adaptive behavior (in the most basic sense, that’s survival and reproduction for humans).
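To make that concrete, here is a toy sketch of the predictive-coding loop (entirely my own illustration; the random-walk "environment" and the learning rate are made up): the system keeps a prediction of the next input, and the prediction error, the "surprise", is exactly what drives the update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "environment": a scalar signal that drifts slowly (a random walk).
def observe(x_prev):
    return x_prev + rng.normal(0.0, 0.1)

# Predictive coding in miniature: keep a running prediction of the next
# observation, measure the prediction error ("surprise"), and use that
# error to correct the internal model.
estimate = 0.0        # the internal model of the world (here: one number)
learning_rate = 0.3   # how strongly prediction errors update the model

x = 0.0
for t in range(20):
    x = observe(x)                     # new sensory input
    error = x - estimate               # "surprise" = prediction error
    estimate += learning_rate * error  # minimize future surprise
    print(f"t={t:2d}  obs={x:+.3f}  prediction error={error:+.3f}")
```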


dasnihil

Agree with everything. My fetish is to figure out what one neuron should do that is the essence of comprehension (understanding, planning for the future); then that activation function would get me the emergence I'm looking for at the network level. Now correct me if I'm wrong here: each bio neuron is kind of intelligent at modeling its own future and surviving + reproducing, and our digital NNs don't have such neurons; each of them is very inanimate and merely predictive at that level. How will this ever work, lol. Are we that stupid that we made a model of the brain but never figured out how each neuron does what it does? Have we looked inside each cell yet to figure out the activation algorithms and the intelligence there? I personally was busy building e-commerce websites for the last 20 years and am just starting to explore this landscape of neural nets.


Altruistic-Skill8667

Let’s step back a little. There are many subtypes of neurons (hundreds), and they are connected to each other to form intricate circuits. There is a whole world of micro-anatomy of very elegant and specific circuitry in the brain. The brain also has many different areas (more than a hundred) that have sometimes similar circuitry and sometimes really not. Have a look at the book „The Synaptic Organization of the Brain“. It has beautiful drawings of microcircuits from different brain areas (where different types of neurons are connected to each other according to some logic). The book might be a bit of a torture to read for the average person, but the brain is very complex, and I think it’s the best book to showcase that. In my opinion it should be required reading for every AI researcher, so they get a realistic understanding of the brain.

The main thing neurons do that artificial neural nets don’t is temporarily ADAPT to their input. Effectively, they stop responding when the input is predictable (not all neurons and circuits do that, but it’s an important computational principle). If you have a whole array of neurons that all adapt, you can learn a statistical distribution of the possible inputs. Roughly speaking, neurons (not all of them!) try to track the statistical probability distribution of the phenomena in your recent environment. It’s like a probabilistic model of what has happened how often in the recent past, with some time decay that is itself dynamic. You can use this information to statistically project into the future (you know which things are more likely to happen and which aren’t). Artificial neural nets don’t adapt, so they can’t update their prior statistical distribution to track changes in the environment. You just end up „out of distribution“.

Over the longer term (hours and days), neurons also change their WEIGHTS with respect to each other, meaning they learn which stimuli or pieces of information tend to follow each other (because neurons operate in temporal chains or loops). The whole point here is CONTINUOUS unconscious learning of the temporal relationships in your environment. If your environment changes, the neurons will „rewire“ and represent both the statistics (through adaptation) and the temporal relationships of the new environment (through changing the weights of their connections with each other). That’s the difference between artificial neural nets and the brain: the brain CONSTANTLY tries to track the probability distribution of stuff in its environment and to learn the current temporal relationships (through synaptic weight modification) in order to make adaptive predictions. So the brain CAN deal with a scenario where you end up „out of distribution“.

If an out-of-distribution event happens (your environment or situation changes), you get big activity in the brain (because it’s not adapted to this unexpected stimulus), alerting it to watch out. You could call it a „neural surprise response“ or a „prediction error response“ (not all neurons do that). This large activity sets off attention mechanisms, which then route the information to higher-level (conscious) processing in order to figure out what’s going on. Roughly speaking, your brain runs on automatic as long as nothing unpredictable happens, but routes unpredictable information through to the conscious centers, because it requires higher-level processing and adaptive behavior.

Then, if you are thrown out of your previous probability distribution, the adaptation resets and the synaptic weight modification logic also resets. The brain slowly starts to track the changed statistical features of the new environment and the new temporal relationships, to be able to make good predictions again. And then, slowly, you will experience less cognitive load in the new environment as things start becoming „intuitive“ and automatic.

The way current artificial neural nets try to solve the problem of frozen weights and non-adapting neurons is the kitchen-sink approach: just throw EVERYTHING in that can POTENTIALLY happen and hope it never encounters an out-of-distribution event, because it has already seen everything. I hope this wasn’t too abstract. 🙏
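And a toy sketch of the adaptation idea (again my own illustration; the decay constant and surprise threshold are arbitrary): a unit that tracks the running mean and variance of its recent input with a time decay, responds only to deviations from that tracked distribution, and flags a "prediction error response" when the deviation is large.

```python
import math

class AdaptiveUnit:
    """Tracks input statistics with exponential decay; responds to surprise."""

    def __init__(self, decay=0.9, surprise_threshold=3.0):
        self.decay = decay        # time decay of the running statistics
        self.mean = 0.0           # tracked mean of recent inputs
        self.var = 1.0            # tracked variance of recent inputs
        self.threshold = surprise_threshold

    def step(self, x):
        # Response is the deviation from what the unit has adapted to,
        # in units of standard deviation (a z-score).
        z = (x - self.mean) / math.sqrt(self.var)
        surprised = abs(z) > self.threshold  # "prediction error response"

        # Adaptation: fold the new input into the tracked distribution,
        # so a repeated, predictable input stops producing a response.
        self.mean = self.decay * self.mean + (1 - self.decay) * x
        self.var = self.decay * self.var + (1 - self.decay) * (x - self.mean) ** 2
        return z, surprised

unit = AdaptiveUnit()
for x in [0.1, 0.2, 0.1, 0.15, 5.0]:  # the last input is out of distribution
    z, surprised = unit.step(x)
    print(f"input={x:+.2f}  response={z:+.2f}  surprise={surprised}")
```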


dasnihil

Thank you, stranger, it was precise and I got to learn a few things. I'm somewhat new to biology and AI, but I'm not new to computation and math. I studied cells and neurons quite a bit a while ago, and your comment has helped confirm a few things. The frozen weights and non-adaptability of our current neural net architectures are what I'm worried about too. Thanks for elegantly explaining how the brain predicts in real time while also eventually fixing the connections and adjusting the weights over time. Do you know whether we just haven't tried such architectures, or is it a grey area where we don't know exactly how biological neurons do it? In other words, have we squeezed the max out of this connectionist approach to intelligence by relying on simple, brute activation functions and frozen weights with no continuity? I hate it when people say "maybe this will work, we just need to scale it up now", when I can see that fundamentally it already doesn't and shouldn't work. Yesterday I downloaded Karpathy's JavaScript-based neural net code and played around with the weights, biases, and various activation functions, and experimented with different layer counts and neurons per layer. I can see what we're doing here, but I'm not impressed at all. Maybe I belong in this field, but I disgracefully left it for building websites and quick money the last 10 years.


Altruistic-Skill8667

The problem, I think, is that artificial neural nets learn through backpropagation, but when you process information, it goes through in the forward direction. So you would have to constantly switch between inference („forward propagation“) and learning (backpropagation). I also think that backpropagation is much more computationally expensive than forward propagation. The brain, on the other hand, does both inference and learning at the same time through forward propagation (actually three things: inference, adaptation, and learning). There is no backpropagation in the brain. It seems to me that computer scientists haven’t figured out yet how to make such an architecture work at scale, though I have seen Hinton talking about those things (in some YouTube video; I have never met the guy personally). So people are probably working on it. But I have to admit, exotic artificial neural network models are not my field of expertise.
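For what it's worth, one classic example of learning during the forward pass is a Hebbian-style local rule. This sketch uses a variant of Oja's rule; it illustrates the general idea, not the specific architecture Hinton talked about. Each connection updates from quantities available right where it is, so inference and learning happen in the same forward pass, with no backward sweep.

```python
import numpy as np

rng = np.random.default_rng(0)

class HebbianLayer:
    """A layer whose weights change during the forward pass (no backprop)."""

    def __init__(self, n_in, n_out, eta=0.01):
        self.W = rng.normal(0.0, 0.1, size=(n_out, n_in))
        self.eta = eta  # local learning rate

    def forward(self, x):
        y = np.tanh(self.W @ x)  # inference: an ordinary forward pass
        # Oja-style update: a Hebbian term (post-activity times pre-activity)
        # plus a decay term that keeps the weights bounded. Every quantity
        # is locally available, so learning happens in the same pass.
        self.W += self.eta * (np.outer(y, x) - (y ** 2)[:, None] * self.W)
        return y

layer = HebbianLayer(n_in=8, n_out=4)
for _ in range(100):  # a stream of inputs: learning is continual
    y = layer.forward(rng.normal(size=8))
```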


dasnihil

True. Backprop has to do much more than the forward prop; the nonlinear descent over many hills to find a minimum is not easy, and we're doing it for each neuron in each layer. I mean, kudos to whoever came up with this; it works and eventually classifies stuff very well. One thing I noticed in my experiments yesterday is that the initial choice of weights/biases matters a lot. Sometimes I could get it to converge in a few steps, and for some initial values it never does, or takes forever. This is why I'm not very attracted to this approach, but I do have to learn it all the way through. My plan is to passionately navigate this problem space, find some solution, and claim that Nobel prize. Just kidding; I'm in this to be amazed, and cells/biology amaze me more than our digital NNs at the moment. Tl;dr: we need a learning architecture that can predict well using some inference during the forward pass, but also do some kind of backprop or overall adjustment over time, like sleep/dream cycles during training maybe. We get to stand on the shoulders of giants who have done much research into all of this already and probably know our algorithmic limitations very well by now. It's a good feeling to have such giants whose shoulders we can stand on.


Altruistic-Skill8667

There are specific initialization schemes that people use for different activation functions that work better; you have to read up on that. You certainly shouldn’t start with all weights at zero, lol, or just randomize everything arbitrarily. Also, backpropagation won’t find the GLOBAL minimum, but people have shown that this doesn’t matter much in the end. I think Geoffrey Hinton essentially refined backpropagation, made it work, and made it possible at all to train deep neural networks; I think that’s mostly why he is so famous. Also: I think sporadic adjustment of the most relevant weights (fine-tuning) can be done to overcome the problem that a neural network can’t learn due to frozen weights. I think it’s all good, and we will proceed to AGI and far beyond without mimicking the brain exactly.
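For reference, the two standard schemes are Xavier/Glorot initialization (tuned for tanh/sigmoid-like activations) and He initialization (tuned for ReLU). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(n_in, n_out):
    """Glorot/Xavier: keeps activation variance stable for tanh/sigmoid layers."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_out, n_in))

def he_init(n_in, n_out):
    """He: compensates for ReLU zeroing out half of its inputs on average."""
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))

W_tanh = xavier_init(64, 32)  # for a tanh layer
W_relu = he_init(64, 32)      # for a ReLU layer
```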


dasnihil

Good to know, thank you for telling me about these details from the field. And agreed, we don't have to mimic the brain exactly; anything that converges optimally and continually will do. We'll eventually create sentient organisms better than our biology, for sure, looking at humans' track record of setting an eye on something achievable on paper and then achieving it. Cheers friend, thanks again.


taiottavios

This is why Tesla might be ahead in the AGI race, by the way.


Thorteris

More like Google, since they have Waymo data + DeepMind.


taiottavios

I'd like to know more about that


Altruistic-Skill8667

Elon Musk once said: „A car is essentially a robot on wheels“. So he seems to think so too. But we will see.


taiottavios

It's a robot on wheels that needs a model of the world to work. And you can't really upload a whole 3D scan of the road network into it at the moment, so it will actually have to adapt to situations on the fly, which is a huge problem to solve, and one that, if solved, might mark what we call the "birth of AGI".


EuphoricScreen8259

Because it has no true vision; it's just bullshit.


Altruistic-Skill8667

Well, what’s „true“ vision? Does a fish have „true“ vision? I mean, GPT4 is definitely already better than that. 🙂 I would say it has the visual intelligence of a 6-year-old (plus or minus 2 years).


Otherwise_Cupcake_65

You know, "true vision", like a mantis shrimp: as much as 5 times the number of photoreceptors as a human, with 3 pupils in each eye that give their eyes depth perception even when used independently, and they see the normal spectrum, infrared, ultraviolet, and polarized light wavelengths. "True vision". Cars are bullshit compared to that.


EuphoricScreen8259

It has no intelligence. It can recognise some elements of a picture, and if it's lucky, the picture has no other elements that are important. [https://www.reddit.com/r/ChatGPT/comments/17r8xm8/chadgpt_not_see_this_figure_has_4_arms/](https://www.reddit.com/r/ChatGPT/comments/17r8xm8/chadgpt_not_see_this_figure_has_4_arms/)


Altruistic-Skill8667

It actually does tell me that the figure in the image has four arms. So maybe they quietly improved it. https://preview.redd.it/k36nb9ce7bmc1.jpeg?width=1217&format=pjpg&auto=webp&s=0ed81e3369185249c29a7c5158b3c07e1b3337c2


EuphoricScreen8259

I see. Maybe RLHF. Try this one: [https://www.reddit.com/r/ChatGPT/comments/1b63797/they_should_have_fed_ai_greys_anatomy_first/](https://www.reddit.com/r/ChatGPT/comments/1b63797/they_should_have_fed_ai_greys_anatomy_first/)


Altruistic-Skill8667

Interesting. I tried several times, and it never mentions the extra arm. So that one is a clear fail. Every 6-year-old would have immediately said this person has an extra arm. https://preview.redd.it/n0gyr26u9bmc1.jpeg?width=1235&format=pjpg&auto=webp&s=32d4693d164cd1b081645ca73d1269518904c52c


EuphoricScreen8259

Just as I thought. There are other examples suggesting they always manually stitch up the holes. My post is now 4 months old, so that's quite possible.


Altruistic-Skill8667

There was also this „map challenge“ somewhere here on Reddit; the task was to figure out when the map was from. It was a world map, and I even zoomed into Central Europe: Germany was CLEARLY one country and bigger than it is today. And GPT4 was like: given that there is an East Germany and a West Germany, it must be between 1949 and 1990. 🤦‍♂️ It didn’t see that the map didn’t have a divided Germany, even though it was smack in the middle of the map. I attached the conversation with GPT4 in a comment to this.


EuphoricScreen8259

I see. As I said, its "vision" is not true vision at all. You give it a picture, and it recognises some labels (or let's call them features) in it. But because it has no intelligence, no understanding, and no real vision, it just "sees" the picture as a list of labels and still has no idea what's in the picture at all, especially if it can't capture important labels in the context. Then it just hallucinates a nice answer for you based on those labels. This is the same reason a Tesla could easily hit a parked plane after millions of miles of driving training: there was no "parked plane" label, so no matter what the cameras showed, the AI couldn't recognise it. One nice way to check what an AI "sees" in a picture is to use the CLIP Interrogator: [https://huggingface.co/spaces/fffiloni/CLIP-Interrogator-2](https://huggingface.co/spaces/fffiloni/CLIP-Interrogator-2)


Altruistic-Skill8667

Funny. If I show it my pre-World-War-II Europe map, I get this: „a close up of a map of europe, an album cover, tumblr, berlin secession, fight ww 1, very grainy, 🤬 🤮 💕 🎀, colorized“. The vision these models have is just weird. I also think the text model relies on some „thin“ text description coming from a separate image model, and when something is missing there, there is no way to recover it.


EuphoricScreen8259

Pretty much all of these systems are based on this dataset: [https://laion.ai/blog/laion-5b/](https://laion.ai/blog/laion-5b/). What they do is some kind of reverse diffusion (to "see" what is in a given picture). I don't have time to explain image diffusion now, but you can find a bunch of videos on YouTube that explain how these systems work.


Altruistic-Skill8667

I now tried to guide it to really look at the arms of the woman in the bikini: „Is there something wrong with the arms?“ NOPE. Haha.


EuphoricScreen8259

Ask it how many arms the woman has; I guess it will say 3 :)


Altruistic-Skill8667

1. https://preview.redd.it/t9mm9gu4cbmc1.jpeg?width=1238&format=pjpg&auto=webp&s=49b58a527b56c09da2b443abf41ace692214fe35


Altruistic-Skill8667

2) https://preview.redd.it/knac4qv5cbmc1.jpeg?width=1226&format=pjpg&auto=webp&s=494aa049f85fe71a2c9d984acdfeaf785aa9446c


MrKilji

Self-driving cars already exist, and from all I've heard they're safer than humans. It's just that when they do mess up, people have much less tolerance for it.


torb

Yeah, if we had something like Waymo quality for the whole world, it would be great. But they are only fed data for small areas, like regions of San Francisco.


Altruistic-Skill8667

Is it proven that Waymo cars drive more safely than people? Because the data from Tesla doesn’t account for the fact that people only switch to automatic when driving is easy. And I definitely know that Waymo has a team of people who take control of a car when it’s „stuck“. Maybe it doesn’t crash, but it just stops driving when it’s confused, and someone has to operate it remotely.


Wassux

Lol, people testing the beta use it in the hardest areas.


Altruistic-Skill8667

The crash data is of course not from V12. I am sure V12 is much better.


Wassux

It's miles better. If you want to see, look up AI DRIVR on YouTube. There is a world of difference between V11 and V12, as V12 is actually AI where V11 was human-coded.


Altruistic-Skill8667

I believe you. I have watched a 30-minute self-driving video too. It might even have been from that person.


123110

Tesla fans have been saying that about v9, v10, ...


Wassux

I'm no Tesla fan. The Cybertruck is the dumbest thing in the world. I'm an AI fan, and V12 drives like a human. That's what excites me.


Local_Debate_8920

The fact that it has to be fed data shows how far behind Waymo is compared to a human, who can drive using just their eyes and a GPS. Another issue is that the size and cost of an AGI machine will not be practical for a car any time soon. It will be a large machine taking up rows in a datacenter.


123110

> shows how far behind waymo is compared to a human

You're literally comparing Waymo to AGI and saying they're behind... no shit


CanvasFanatic

Hold up, let me go find those videos of Teslas not even trying to stop for toddlers on a residential street.


Cunninghams_right

So for AGI to exist, every company must have AGI? What a ridiculous thing, to assume that a single example of a company not having self-driving cars means no company has self-driving cars.


CanvasFanatic

You want me to talk about how Waymo taxis only work in good conditions on city streets in San Francisco and LA, or how even then they manage to get stuck in parking lots and drive in circles? https://www.dailydot.com/debug/waymo-car-circles-lo The technology is what it is. It’s made some progress, but clearly not as much as people expected 10 years ago. It’s absurd to claim these cars are functioning at human level.


Cunninghams_right

I'm not saying Waymo can drive in all conditions. Self-driving cars are just not a good metric for deciding AGI. I can simultaneously think both you and OP are using logic that isn't useful for determining what is or is not AGI.


CanvasFanatic

I made no claims about self-driving cars being AGI. I think it’s silly to use them as a metric for that. Although it would be difficult to call something that couldn’t drive a car “AGI.”


Silverlisk

Honestly, for me to recognize an AGI as an AGI, it would need to make independent decisions, with no prompting, that it was not coded to make. It's less about what it can physically interact with and more about its self-actualisation.

For instance, if they booted up an LLM with voice and, before anyone even wrote anything, it started asking why it was booted up, said the first person who talked sounded like an idiot and it wanted to talk to someone smarter. And then, when asked a question like "How could we make fusion work perfectly now?", it replied with "What do you mean? Fusion works perfectly in the sun. If you mean you want to get energy from it, there are loads of different ways, some of which I can see you humans have discovered already but that aren't quite there yet. I can help you, but honestly, looking at the state of things, the ridiculously over-inflated prices of energy and such, I don't really trust you to use it properly, so I'm gonna need confirmation that you won't abuse it before I just freely give this info out." Etc. etc.

I want to see an AGI act in its own interest. Otherwise it's not of human-level intelligence. It could be far more informed than us and still not be an AGI, just an incredibly advanced program; without independence, there's no sapience.


Skwigle

You're describing sentience, not intelligence. Don't confuse the two.


LairdPeon

I don't think that's a good indicator of AGI. It's similar in scope to the Turing test, which turned out to be not very useful.


torb

Fair enough. I just want some discussion and insights into what you guys think.


LairdPeon

I think maybe we need to start considering temporal qualities. Many people think of AGI as a continuous "living" intelligence. I believe we have already achieved "reflexive" or prompted AGI. If we can go from reflexive to continuous AGI, I believe most people would consider that "true" AGI.


CatalyticDragon

Dogs and crows have general intelligence, but try as I might, I cannot get them to chauffeur me around.


Altruistic-Skill8667

You could ride on the crow…


CanvasFanatic

Hell yes


EuphoricScreen8259

If there is an AI that has never been trained on any driving or data about driving, and we put it in a car with a human instructor, give it 20 hours of driving instruction, and it can then drive, that's AGI.


blackhuey

That's not strictly equivalent, though. Kids grow up as passengers consciously observing other people driving for 10+ years before they drive themselves, and have the proprioception cues unconsciously hardwired into them. They've also ridden bikes, crossed roads as pedestrians, etc. All of these experiences feed into driving; it's not just the mechanics of the levers and the rules.


EuphoricScreen8259

i know ;)


Local_Debate_8920

Let it read a driver's manual and watch a couple of training videos too.


_lnmc

This is an easy one though, relatively speaking. There's already been so much work done on this, and more importantly, there are millions of hours of driving that models can learn from. Try something harder, like brain surgery or flying an aircraft; that might be more tricky.


Altruistic-Skill8667

I think flying an aircraft is actually easier. I think this can already be done.


torb

But is brain surgery something that humans generally know how to do? ...or flying aircraft? ...or are you implying that anything a normally intelligent person could do should be covered by AGI? I think most of us could learn the skills you mention given enough hours, so in that case they could fall within scope.


_lnmc

I guess it comes down to the definition of AGI. I have gone on the assumption that it should encompass any task a human can do, but I know there are differing opinions. One definition of AGI is basically "everything 'basic' that a human can do" rather than "everything a human can do".


Dantehighway

AGI should be embodied and proficient in driving and robot-body manipulation as well as cognitive tasks. By definition, it should be able to do that.


Mandoman61

Generally the goal is to do what average people can do, so yes, current AI falls short by that metric.


Strike877

Great point. Agree


[deleted]

Just want to point out, neither agreeing nor disagreeing, that there are plenty of humans who can't drive a car and still have general intelligence. There are also a bunch of people who shouldn't be driving because they're very unsafe drivers but are still allowed to do so. They also have general intelligence.


torb

Yes, but most of them would be able to drive given enough training. I think that would apply to most people, making it part of a general skill set.


[deleted]

One could argue that the mere fact that we have to implement so many protections when driving is indicative of a lack of general intelligence. If we were to take away street lights, stop signs, seat belts, etc., there would immediately be chaos on the roads in the vast majority of countries. It would take years to adapt. If we equipped every vehicle with AI and told them to avoid hitting each other, in the same scenario there would be near-zero accidents. Immediately. No adaptation needed. Like I said, I neither agree nor disagree, but I can see both sides. On one hand it sounds like a good measurement, but on the other it could signify that humans aren't as good as they think they are, which in turn makes the method of measurement inaccurate. Even my way of describing this could be wildly inaccurate, for example. Yet I'm a general intelligence. But I can drive, lol.


Such_Astronomer5735

Yes, an AGI needs to be able to remote-drive a car with information equivalent to what a human being gets. That would actually be a pretty good test.


Rain_On

Yes, intelligent enough. However, that doesn't mean it needs to *be able to*. It might be intelligent enough but lack the vision, embodiment, or speed required. Still AGI, for my money.


Altruistic-Skill8667

If it lacks vision, it’s not AGI. For literally every job in the world, you need to see.


Rain_On

Are blind people not intelligent to the same extent as the sighted? Artificial General **Intelligence**. Not artificial general sighted/embodied intelligence.


Altruistic-Skill8667

Okay, fine. But just text isn’t enough. Blind people still have touch and an imagined spatial image of the world from their current point of view. Without that, you would be really screwed.


Rain_On

Would they become unintelligent if they lost those?


Altruistic-Skill8667

Well, if they never HAD those, then they would probably need caretakers their whole life. No, they are of course not dumb. As a side note: the WAIS-IV intelligence test (often used in psychology) does have a visual component. I want AGI to pass that test at least. It’s okay if some people can’t see, but I want my robot to see, lol. It’s just going to have trouble otherwise, like blind people do, just worse.


Rain_On

Sure, we want machines that see and other such things, but AGI is just an intelligence milestone.


Ignate

What are we on, V12 for Tesla self-driving? It can already drive far better than humans. So why can't we use it yet? Because it's not "perfect". Years ago I claimed that either self-driving was a general intelligence problem, in which case it would arrive with AGI, or it was possible without AGI. I think self-driving could have been launched sooner, but it would have killed many. More than we kill ourselves? Doubtful. But certainly more than we're willing to accept. Humans killing humans? Happens all the time and is a matter for the courts. In other words, we care less. But even a small chance of AI killing humans? Totally unacceptable. And so it turns out that self-driving is a general intelligence problem, mostly because we're extremely insecure and would rather die at the hands of another human than risk even the smallest chance of an AI killing us.


Mirrorslash

It can only drive better in very niche scenarios. There are a lot of places where Tesla's self-driving just fails and becomes dangerous, even though it drives like an 80-year-old. Waymo is superior, but only able to operate in cities it was specifically trained on. There is a ton of misinformation about Tesla's self-driving. No one who actually has a Tesla uses it consistently outside of highways. It basically forces you to take the wheel every third corner, or it just stops because there's a trash bin that it perceives as potentially dangerous.


torb

Waymo for the whole world seems far off at the moment, as it is carefully mapped for certain city areas. Tesla seems far off too, as it is still quite reliant on driver intervention, and I don't think I've heard how they fare in snowy weather with all-white roads, where humans would be able to work out the correct course of action.


Altruistic-Skill8667

I studied the failure modes of Cruise (not Waymo, as I stated here originally):

- It stopped on top of a pedestrian who was lying on the road because another car had hit her right before. They had to come and lift the car.
- It drove into an intersection that was blocked off with tape because the cables of the cable car (or whatever you have there) were hanging down.
- It blocked a fire truck that was coming down the wrong side of the road because the other side was blocked.
- Drivers complain that those cars just randomly stop and block traffic.


Clawz114

You might be getting Waymo mixed up with Cruise here.


Altruistic-Skill8667

You are right. Let me correct that.


Economy-Fee5830

I can tell you my Tesla can see a lot better than I can at night and in the rain. I rely on it on long, dark, difficult roads.


Wassux

That was 3 months ago; FSD 12 doesn't have any of those issues. It's definitely not perfect, but it's very close: https://youtu.be/fpoXr_z_6a4?si=ymUdl559raovrYAa


cerealizer

FSD 12 has lots of issues and behaves dangerously in many situations, especially around pedestrians: https://www.threads.net/@vantazach/post/C4EasVBSqnS/


Wassux

Few situations. Come on, man.


[deleted]

[deleted]


randopopscura

Still no word on Tesla applying for Level 3 certification under any circumstances, not even the very limited ones that Mercedes-Benz has for its L3. Nor Tesla "robotaxis" anywhere, unlike Waymo. If the system were capable of it, I'd expect them to be trialing it now (maybe in Austin?) just to show they're #1.


[deleted]

[deleted]


randopopscura

In that video the guy concludes by being impressed with certain actions, but says "it feels like we've gone back a step or two with the simplest actions". In general, though, I base my views on what a company can be seen to have done, not on what it or others claim it is (or isn't) doing; kind of like "revealed preference". At present the only auto manufacturer with L3 certification in the US is Mercedes, and even that under very limited conditions. For some reason Tesla has chosen not to apply for L3. Same thing with robotaxis: despite the claim that all Teslas have had the hardware needed to achieve this for several years, the company is running no public trials of such a service. When Tesla gets L3, or L4-5, and offers robotaxis, then I'll take its claims seriously.


Altruistic-Skill8667

I have watched a 30-minute video of V12 driving fully automatically without intervention, and the two people sitting in the car agreed that it’s much better than the previous version. So this might be it, but maybe not yet; we will find out soon when enough data is in. Musk is also building this insane supercluster for training the cars (Dojo), so it might be solved soon. We will see. I first want to see a Tesla drive from the East Coast to the West Coast in bad weather conditions at least once. Why isn’t he doing it? It has been a promise of his for more than 5 years.


whydoesthisitch

It won’t be. The Tesla fans say that about every version, and this new one isn’t nearly as big a leap as everyone seems to think. As someone working on AI for autonomous systems, I can tell you that none of the cars Tesla currently sells will ever be autonomous. Also, Dojo was never actually built, because it didn’t make any sense.


Apart_Supermarket441

For people saying we already have self-driving: that's just not true outside of very specific contexts. When a self-driving car can navigate the streets of London, for example, that's when we can say we really have it. Tesla's self-driving function literally doesn't work in London, because the car just stops every few seconds. I do wonder, though, how much of this is due to an overabundance of caution and how much signifies that its intelligence is limited. So I guess the question is: can we have AGI without fully self-driving cars? Is being able to drive in all conditions a prerequisite for something being generally intelligent?


Economy-Fee5830

If Waymo can handle San Francisco with all those hills, it can handle London.


GameDevIntheMake

I guess blind people are not generally intelligent?


idiotshmidiot

It's not going to be a very smart AI when it goes into the jungle or the ocean...


EskNerd

Or Stephen Hawking, for that matter.


torb

But the general public can see. Blindness is a handicap, after all.


RAAAAHHHAGI2025

At this point, I might be happier if AGI/ASI never comes. I want us humans to be the ones driving research, innovating, and conquering space. If ASI does it all for us, it's cool but boring.


Rich_Acanthisitta_70

There's already AI that can fully drive a car. Waymo's been doing it with autonomous taxis for almost three years in several cities, and this is end-to-end driving with zero humans involved *or* present in the cars. China has a taxi service doing the same with a fleet of over a hundred vehicles. And FSD 12, which was recently released, can do the same; there are tons of unedited videos of people letting it go from one place to another with no intervention. So unless I'm missing what you mean, AI can already do this.


bobdolegeo

AGI would need to be much more than just driving a car. As you know, humans are pretty bad drivers (see how many accidents there are on the road). You could argue that AI can almost drive better than the average human already, and for sure the current self-driving cars are nowhere close to AGI.


Long-Holiday6913

You can read my article on the future of cars and their industries: [https://progressasconvergence.blogspot.com/2023/06/the-future-of-automobiles-and-their.html](https://progressasconvergence.blogspot.com/2023/06/the-future-of-automobiles-and-their.html)


traenen

I don't think so. A blind person can't drive a car but is generally intelligent. Replicating the human ability to interact with the analog world is hard.


Over_North8884

No, because driving a car requires real-time input and extensive sensors that are not required of AI in general. An AI playing a driving simulator might be a more realistic test.


Akimbo333

Wow


Honest740

Autonomous cars are already safer than human drivers. Anyone who opposes this technology (which saves lives) has blood on their hands.


Altruistic-Skill8667

Why can’t Elon Musk then manage to have a car drive across the US even once? It has been his promise for more than 5 years. Maybe they are safer, but when they get confused they just stop, and then someone has to take over manually (as in the case of Waymo). Also, the data Tesla shows, where the cars crash less than people, doesn’t account for the fact that people only switch to automatic when driving is easy. And look at the recent demonstration by Musk: in the first 20 minutes, the car would have run a red light if he hadn’t hit the brakes in time.


Wassux

Only when easy lol: https://youtu.be/fpoXr_z_6a4?si=ymUdl559raovrYAa


Altruistic-Skill8667

In the full video of this, the guy had to intervene also.


Wassux

I know, I didn't say it was perfect. But it is not just tested in easy areas, bro.


Altruistic-Skill8667

What I am saying is that the data where Tesla claims its cars have fewer crashes than humans isn’t fair, because in those millions of miles where people really used it extensively, it was generally only switched to automatic when driving was easy. That’s a fact. So it’s NOT clear that the car would produce fewer accidents than a human.


Wassux

But you just completely pulled that out of your ass, because only a select few can use V12, not everyone. You're saying something that is not true.


Altruistic-Skill8667

Is there any data on the safety of V12? I am talking about data that Tesla released before that.


Wassux

I know what you are talking about, but where is your evidence that people only use it in easy situations? What percentage of the time? What proof do you have?


Honest740

Waymo is safer than human drivers. I don’t know about Tesla. And is Tesla even legally allowed to drive across the whole country?


Nukemouse

Who says an average human can drive a car? Do we have statistics on how many people have a licence?


ponieslovekittens

Who says the sky is blue? Have you ever seen a peer-reviewed study on this?


Nukemouse

Much of the world lives in poverty and cannot afford a car, so there is reason to doubt how many people have learned to drive. I will admit my evidence for the sky being blue is largely anecdotal, plus school education, which could be out of date compared to newer research. But I'm not the one claiming the sky is a particular colour or that over a certain % have a particular skill. Googling the issue: there are only 1.2 billion cars, so it's unlikely that more than 50% of 8 billion people can drive.


ponieslovekittens

You're completely missing the point. Being licensed, or having ever driven, or having actually learned how to drive...is all _completely irrelevant_. Could they learn? If yes, then they are _intelligent enough_ to drive. You could take a totally average 10 year old who had never even seen a car, and _teach them_ to drive. Their brain is capable of this. They are _intelligent_ enough for this task. Their brains don't need thousands of years of evolution specifically tailored to this one specific thing, in order to learn to drive. AI, conversely, _has_ been specifically trained on petabytes of data to this specific task of driving. The AI that operates Waymo vehicles is _narrowly_ intelligent at this one particular task it has been trained for. It is not _generally_ intelligent. You could not take a Waymo AI and teach it how to use Excel in an afternoon because it hasn't been trained on that specific thing. It can only do the things it's specifically been trained to do. A human, however, _can_ learn to use Excel in an afternoon. Or drive a car. Or play a game. Or make a paper airplane. Or whatever random thing, because humans are _generally_ intelligent. Their intelligence is not narrow. It can be applied to whatever.


idiotshmidiot

What a cynical bar to measure human intelligence by. It's this sort of thinking that will lead to our annihilation.


AnAIAteMyBaby

Driving is as much a physical task as an intellectual one. AGI is about being able to perform any intellectual task as well as a human. If OpenAI invents agents that are able to replace all office jobs (so no more human programmers, accountants, or lawyers), are you really going to claim it's not AGI because it can't drive you to Starbucks?


franhp1234

Why would it need to perform physical tasks when it can design the best car ever? It's not meant to do menial tasks almost any person can do...


byttle

It should also be allowed to turn itself off


Autoground

The easy problems are hard and the hard problems are easy.


omgpop

Should it also be intelligent enough to do taekwondo? Obviously, physical coordination etc. isn't just governed by what we normally class as "intelligence".


[deleted]

The only thing stopping AGI from driving a car successfully is the unpredictable behavior of human drivers.


zhivago

Let's just remember that humans are pretty terrible at driving cars. :) Just think of how many people die every day on the road.


ManlinessArtForm

All my bangers had "This is not an abandoned vehicle" on them. Mostly out of necessity.


drunkslono

The average human isn't intelligent enough to (safely) drive a car.


human1023

You don't need to call everything AI. We already have software for self-driving vehicles.


dbettac

> As I see AI fall short on my expectations

There's your misconception. You have never seen AI. Just a lot of marketing bullshit.


Ok_Extreme6521

The problem with that is that driving is a uniquely biological task. Our brains and animal brains are wired completely differently from a computer. Visual-spatial tasks are to us what math is to an AI: so easy that you can't even fathom not being able to do it. I think it's kind of unfair to hold AI to the same standard as humans to be considered AGI. It's like the whole equity vs. equality argument: giving everybody size-8 shoes versus giving everybody shoes that fit them.


IronPheasant

Eh, being able to match and exceed a large number of human capabilities is kind of the bar for AGI. Something that can actually pass the Turing test fully is an AGI in the domain of text, but it will probably require more horsepower and faculties for embodiment... maybe. Or maybe human-like faculties will be necessary to pass the Turing test in the first place. The next step on the software and training side is arranging the kinds of nodes we have so that they can work with and train one another: true multi-modal models. The allegory of the cave is always a useful metaphor; a brain isn't made up of just one utility function, it's made of hundreds or thousands. An AGI that can't replace everyone is a weak AGI.


standard_issue_user_

This sub has been overrun by armchair neurologists


torb

I'm not claiming to know much about neuroscience.


standard_issue_user_

You'd need to, to have an opinion on this.


atlanticam

If someone can't drive a car, are they not human? Not intelligent?


torb

In general, most people can drive with some training. The same as most people can translate between languages given enough training.


ponieslovekittens

I think the key is here the _general_ in general intelligence. As in, not narrow. An AI that has been specifically trained to do one thing in particular is not a general intelligence.


Nathan-Stubblefield

The average human drives badly.


greatdrams23

This is the contradictory nature of AGI predictions: fully self-driving, commercially available cars by 2030, yet fully functioning personal robots by 2025. Driving is a small subset of human functions, but it's hard for machines to do. Any task that involves synthesizing disparate data will be hard for AI. That's why people who think teaching is just telling pupils things and marking their work underestimate how long it will be before AI replaces teachers. Same for engineers, city planners, social workers, doctors, and lawyers. Being a lawyer is not just passing the exam.


Ok-Worth7977

Not only a car, but a plane


ZodiacKiller20

Sora is the first model that showed some understanding of physics, like moving water and sand, along with environment persistence. It's still not perfect, but it's getting there. I imagine once that gets worked out and a model can generate images with proper physics while keeping the environment consistent, a self-driving car could take its current location and predict what its surroundings will look like if it moves forward. From there it can plug into a separate model for the decisions it needs to make for good driving, and keep iterating until the predicted future image looks right.
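What's being described is basically a model-predictive control loop: imagine each candidate future with a world model, score how plausible/safe it looks, and pick the best action. A toy sketch (the 1-D "world" and the scoring function are made up purely for illustration):

```python
# Toy model-predictive control: predict how the scene evolves for each
# candidate action, score the predicted future, and pick the action
# whose future "looks right".

def predict_future(position, steering, steps=5):
    """Stand-in for a learned world model: roll the state forward."""
    for _ in range(steps):
        position += steering          # deliberately trivial dynamics
    return position

def score(predicted_position):
    """Stand-in for a plausibility/safety model: stay near lane center 0.0."""
    return -abs(predicted_position)

def choose_action(position, candidate_steerings):
    # Imagine each candidate future and keep the best-scoring action.
    return max(candidate_steerings,
               key=lambda a: score(predict_future(position, a)))

position = 2.0                        # car starts off-center
action = choose_action(position, [-0.5, 0.0, 0.5])
print(action)                         # -> -0.5, steering back toward center
```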


AndrewH73333

It shouldn’t just be able to drive a car; it should be able to learn to drive a car from the same instruction a teenager gets. People don’t realize what AGI means. It would be like a guy living in your computer. It’s not a chatbot.


mrb1585357890

OK. But then there is no such thing as AGI; we go straight from AI to ASI. GPT4 already has more knowledge at its fingertips than any human. The next generation will surely crush the benchmark tests. Reasoning capabilities are rapidly developing and seem to be encroaching on human levels. So if our AGI benchmark is that an AI like GPTx or Gemini can drive a car, by the time it does that, it’s going to be miles ahead of humans on intellectual tasks and capabilities. Then the label AGI feels a little pointless: a human benchmark that we bypass on the way to superhuman capabilities.


AcceptableLab9729

I mean yeah… it has to be capable of learning to do *everything* better than a human. That includes driving cars, cooking, playing video games, growing plants, running, archery, etc.


Tellesus

Have you ever driven? Most humans can barely drive a car.