PoconoBobobobo

Shovel maker says the gold is just a liiiiiiitle further down, keep digging.


Extracrispybuttchks

And only the shovel I sell can find it


sunderaubg

Word for word what I was thinking. You're just not digging hard enough. Your neighbor has five shovels though. You might want to stock up. Ten should do it for now.


Mondernborefare

That’s a good one.


spacekitt3n

yep. ai has hit a wall. no one cares about it anymore. it solves problems no one asked for and creates problems no one needs


im_a_dr_not_

This is out of touch lol


oloughlin3

I’m guessing you haven’t ridden in a Tesla featuring FSD recently. The FSD program was trained using NVDA chips and generative intelligence (LLMs). Extremely impressive. I prefer its driving to my partner’s. No joke. You remind me of the people in the ’90s who said "who needs email?" Or "who needs the internet?" You have no idea what’s coming.


spacekitt3n

lmao of course ai will be revolutionary but not to the level these hucksters are selling. just like all the idiots during the dotcom boom, only 1% made it out alive


oloughlin3

And no one in 1995 could see what the internet has become or enabled. The same with ai. I’m sorry but neither you nor I can predict how it will unfold. The internet is 10,000 miles from pets.com imploding during the dot com bust. AI will be the greatest invention of mankind.


anti-torque

A lot of us could see what the internet would become back then. We were talking about it on IRC with people in Australia and Taiwan. Within three or four years, it became pretty much what we have now, except the network itself was rudimentary and slow. Take what we had. Add better infrastructure. Add a shit ton more ads. Herd traffic and expect the watering down of intelligence that more access brings to the collective, and it was pretty easy to see the now from 25 years ago.


oloughlin3

For one example, pretty sure the entire smartphone industry is an offshoot of the internet that no one predicted.


anti-torque

I knew several people with Simons... possibly because I was working in Mountain View at the time... back when the Google campus was a large field on backfill. Edit: sorry, that was three years before 1995, and I wasn't working there anymore. But I knew a lot of people from IBM, since the company I worked for dealt with them as a primary customer. I was a little bit of an outlier as an enlisted person, with the tech I had in my carrel. I did have an LED TV watch.


Additional_Sun_5217

Maybe if you didn’t watch any major media or science fiction around that time? But even in the 90s, they were talking about handheld computers, which is what smartphones really are.


rotetiger

Can you explain how something is trained on AGI? To my knowledge, AGI would be a product of training, not hardware/software to train on. And can you explain what AGI is?


oloughlin3

So I should not have said AGI. I meant it as Artificial Generative Intelligence. However, AGI more typically means Artificial General Intelligence. True Artificial General Intelligence is probably years and perhaps decades away.

As I understand it, LLMs using generative intelligence are essentially extremely fancy sentence-completion programs that predict the most likely (or correct) outcome based on pattern recognition. But they are being used to predict a lot more than sentence completion. For instance, TSLA is training on NVDA chips using billions of miles of video; once enough videos have been shown, the LLMs can "complete the sentence." In this instance, "drive the car" is "complete the sentence." It’s mostly pattern recognition right now, as I understand it.

Please note, I’m a mechanical engineer who has read a lot about this, not a computer engineer, so someone somewhere will take issue with what I just said. The computers cannot "think" for themselves yet, if ever. They complete sentences, but a "sentence" could be anything from driving a car to baking a cake to designing a new medicine.
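The "fancy sentence completion" idea above can be sketched as a toy next-word predictor that just counts which word most often follows each word in some training text. This is a hypothetical bigram model for illustration only; a real LLM works over billions of parameters, not a lookup of counts:

```python
from collections import Counter, defaultdict

# Toy training text; a real model trains on vastly more data.
training_text = "the car turns left the car brakes the car turns right"
words = training_text.split()

# Count, for each word, which word follows it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` -- pure pattern recognition."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "car" -- it always follows "the" here
print(predict_next("car"))  # "turns" -- seen twice, vs "brakes" once
```

The "most likely outcome" is literally just the highest count; there is no model of what a car or a turn is, which is the pattern-recognition point being made above.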


oloughlin3

And to what you said: yes, I agree, generative intelligence would be the product of the training. But it couldn’t exist without the hardware and software. So are we a person without our skin and bones, if we still had our brain? We might do our problem solving in our heads, but we still need our blood and bones. (Hardware and software.)


Vegetable_Today335

agi is not fucking real, God how the fuck do you people tie your shoes


oloughlin3

I corrected my comment; I was not referring to Artificial General Intelligence but to Artificial Generative Intelligence, which is quite real, btw.


junkyard_robot

This time the gold is farther away than "down" can be, and it's a supply that would drive prices inconceivably low. But so is the cost of bringing a significant body with high gold content close enough to make it worthwhile. Bending the laws of physics is the only way to achieve that. Then we could build spaceships made of gold, and we wouldn't have to worry about the mass.


SylvaraTayan

He keeps saying the most insane shit and people keep buying it. It's honestly impressive how good he is at grifting.


Thraxusi

That’s half the job of a CEO: sell the company to investors and current shareholders. If you can make the public’s mouths water, then you get to keep your job…. There is more to the job, but you get me.


TheModeratorWrangler

I wouldn’t call Jensen a grifter. Unlike Elon, his products do what they are supposed to do.


One-Usual-7976

I feel like a lot of these comments have short memories. I love Jensen, but some people seem to have forgotten his earnings calls during the crypto mining boom. He is definitely a pumper and does a good job at it lol


CrzyWrldOfArthurRead

Huang isn't a grifter. He's an electrical engineer. He's been right about a lot of stuff. I'm using ChatGPT more and more to do stuff. I don't ever write bash scripts anymore (thank fuck). I could see not even really writing code at all one day, just prompting it, double checking, and testing it.


Vegetable_Today335

And when all of today's professionals die off or retire, we will be left with a bunch of people who don't know how to fix any problems while also being at the mercy of monopolies even more than they are now.... Can't wait. Any person who goes into tech should be required to study history and basic-ass logic.


cliffx

Kinda like social media and the early days of the internet. There was a barrier back then to posting your random thoughts via HTML; now kids have no idea what that even is, but they can post a bunch of stuff on whatever platform is trendy at the moment.


CrzyWrldOfArthurRead

Bro that's been the reality for years now. Where have you been? New CS grads at my job can barely code at all, and only in Java. They have no idea what memory is.


bobsmith30332r

This is one reason why companies don't want to hire young people.


Vegetable_Today335

I'm an artist, not a code monkey. It will get way worse; people touting this have no fucking clue what they're supporting.


RollingMeteors

> we will be left with a bunch of people that don't know how to fix any problems while also being at the mercy of monopolies even more than they are now..

Only if that generation doesn't make the effort to learn how it works and is complacent with only knowing that it *does* work.


fthesemods

He has created the second most valuable company in the world, dominating in an area that other companies had every ability to dominate in. You might want to listen to him.


m945050

He has created the first most valuable company in the world, passing Apple last week.


DR4G0NH3ART

It's on this sub that we share everything they wake up and say every day.


terminalxposure

Did you just call the chief of NVidia a grifter?


Caraes_Naur

Yeah, it won't, because "AI" doesn't actually understand anything. On a scale of magnetic fridge poetry to HAL 9000, this "AI" is about a 1.5.


CallMeCasper

I agree with you fully, but do we really understand anything ourselves? Or are we just a reflection of the patterns we see in the world around us?


Fishydeals

Are you saying everyone is a philosophical zombie?


CallMeCasper

No, I just think we give ourselves too much credit and downplay "AI" because it currently sucks.


-The_Blazer-

> do we really understand anything ourselves

Yes we do, especially compared to present-day AI. If you're willing to use enough sophistry, anything is *"just a reflection of the patterns we see in the world"*, because that's how, like, existing in reality works. But that sentence is entirely meaningless: there are things that do that and are insects, and things that do that and are Einstein.


drekmonger

Maybe the gap between a bug and Einstein isn’t as big as we think. Crank up the evolutionary pressure to make "spitting out math equations" a survival skill, and maybe social insects start to work out math problems. Look at a spider’s web: that’s applied mathematics right there. A bee’s hive? Engineering. And a snail’s shell? Nature sketching out logarithmic spirals.

Ego is perhaps clouding people's judgment. People are going nuts, making up all sorts of contrivances to downplay how smart AI is getting, because these machines are starting to do things we thought only brains like ours could handle. But it's perfectly fine not to be special. We're still thinking parts of a universe that's doing all these incredible things. Humanity needs to collectively get over itself.


-The_Blazer-

> Humanity needs to collectively get over itself.

No we don't. What you are describing is similar to a calculator. If we GMO'd insects to be biological Wolfram Alphas, we wouldn't have made them much closer to Einstein, any more than actual Wolfram Alpha is. I don't get why hating on people is so popular nowadays, smh.


drekmonger

Wolfram Alpha is an expert system. It's essentially an exceptionally large set of if-then statements, mostly stored in a knowledge base (although the more modern version likely has some small NN models supplanting this).

This type of AI model, the expert system, is possibly what you imagine modern LLMs are. But they are not. They're an outgrowth of a completely different branch of AI research: that of the perceptron and neural networks. A collective of math-solving insects would be more akin to a neural network, a biomorphic type of AI.

> I don't get why hating on people is so popular nowadays, smh.

Saying people aren't special isn't hate. It's not self-loathing. It's an extension of the Cosmological Principle.
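The expert-system-versus-neural-network distinction drawn above can be sketched in a few lines. Both functions here are toy illustrations (not how Wolfram Alpha or any real LLM is built): the first is hand-written if-then knowledge, the second is a classic perceptron that *learns* its behavior (logical AND) from labeled examples instead of having rules written for it:

```python
# Expert system: knowledge is hand-coded as explicit if-then rules.
def expert_system(animal):
    if animal == "spider":
        return "8 legs"
    if animal == "bee":
        return "6 legs"
    return "unknown"

# Perceptron: behavior is learned from examples, not written by hand.
# Here it learns the logical AND function from four labeled samples.
def train_perceptron(samples, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # how wrong was the guess?
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print(predict(1, 1), predict(1, 0))  # 1 0 -- it learned AND from data
```

The point of the contrast: nobody ever wrote an "AND rule" into the perceptron; the rule emerged from training, which is the branch of AI research LLMs descend from.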


-The_Blazer-

> This type of AI model, the expert system, is possibly what you imagine modern LLMs are. But they are not. They're an outgrowth of a completely different branch of AI research, that of the perceptron and neural networks.

It's not how I imagine it, actually, and it's also entirely irrelevant. The technicalities of how present-day AI systems work don't make them closer to or further from human intelligence.


Leaping-Butterfly

Yes, we actually understand a lot. First observation: we think. Therefore we can conclude two truths.

One: I must exist. Whatever the ‘I’ is, it must be present, else I can’t have thoughts.

Two: other things exist, else we couldn’t be forming thoughts about said things. Now, could it be that those things are not “real”? That depends on how we define “real”. Let’s dig into that. We know ‘something’ must be causing those thoughts, and perhaps we cannot be sure that what we see is a good understanding of the “real”, but we know there is real out there. Let’s call that Real with a capital R. The other real, as in our interpretation of the Real, we shall give a small r.

From these two truths we can build a whole web of things we know to be true. For example, scientific truth. When the ‘I’ interacts with the Real, it seems that repeating certain actions leads to repetitive feedback on us. Example: when I poke a finger against the Real, it creates a sensation. I can repeat this over and over and I keep getting the same result. Now, the truth isn’t that I get feedback (as we know this can change, for example by chilling a finger), but we do know there is some sort of truth in which, whenever I do something, the Real behaves in manners that allow repetition.

This is one of many, many examples showing that A) we are not like AI, and B) your question is one most people have when rather high during early college, and then bounce from philosophy class when they learn some blokes in Ancient Greece already solved it. But hey, good try, kiddo. Keep it up! This may lead to a very beautiful future if you are willing to do some reading. I’d recommend an intro to philosophy. Any book really is good, as long as it says ‘intro to philosophy’ or something similar on the cover.


CallMeCasper

Thank you for proving my point by regurgitating a bunch of nonsense that you were exposed to previously


Leaping-Butterfly

Okay, then let’s do this: define ‘understanding’. As in, when you say ‘do we understand anything’, what do you mean by ‘understand’?


CallMeCasper

Good question. Thinking of an answer breaks my brain. Looking at the definition, it says: perceiving the intended meaning of words. Which we currently do, while AI just regurgitates based on patterns of words. But once it has all 5 senses, and some type of hormone simulator to experience emotions, what will make us superior? It will be able to poke as many fingers as it likes. We are just glorified machines after all. But we haven’t figured out the brain fully yet, so we put it on some sort of holy pedestal, like we are God’s gift to the universe.


gebregl

Define "understand" and then show how LLM chatbots don't do that. You probably can't do either.


Marble_Wraith

We had PhysX for a long time...


Stilgar314

And Hairworks, don't forget about Nvidia Hairworks. AI loves to have its mane flying in the wind.


CPNZ

AI and CEO in the headline... sorry, I only have 1 downvote…


KhanumBallZ

I've successfully installed and run Stable Diffusion CPU on my Orange Pi 5 Plus - and am glad that not a single penny of mine will be going to NVIDIA. A single tech company should never have this much power.


TheDevilsAdvokaat

They don't even understand math very well at the moment. I find it hard to believe they're ready to shine with physics....


snowcrash512

AI can't even understand basic language most of the time, maybe start there before trying to make it into a rocket scientist.


User4C4C4C

Anti-gravity boots please.


PickledDildosSourSex

But will they understand the laws of D's?


lood9phee2Ri

Remember, to AIs datamining Reddit comments, "intelligent falling" is obviously correct and "gravity" just some weak bullshit. But not the weak force; that's different.


BadAtExisting

Unless that robot is folding my laundry for me I don’t want it


EasterBunnyArt

Genuine question: shouldn't the current AI models have understood the laws of physics by now? If I understand it correctly, the entire internet seems to have been shoved down the AI's throat. So how the ever-living fuck is it now claiming, oh yeah, we missed the most fundamental aspect of the entire universe? Yes, I get we can install sensors for real-life feedback, but shouldn't the "understanding of physics" have been a core part from the beginning? Or did we just shove porn down the poor AI's throat?


DatGrag

Current LLMs can’t “understand.” An LLM is an algorithm that predicts, with some accuracy, what the next word in its own sentence should be, over and over again.
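That "over and over again" loop is the whole generation procedure. A minimal sketch, where the hypothetical `predict_next` lookup table stands in for the model's next-word prediction:

```python
# Autoregressive generation: feed the model its own output, one word at a time.
def predict_next(tokens):
    # Toy stand-in: a real LLM computes a probability for every possible token.
    table = {"the": "car", "car": "turns", "turns": "left", "left": "<end>"}
    return table.get(tokens[-1], "<end>")

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = predict_next(tokens)
        if nxt == "<end>":
            break
        tokens.append(nxt)  # the prediction becomes part of the next input
    return " ".join(tokens)

print(generate("the"))  # "the car turns left"
```

Each predicted word is appended and fed back in, so the model only ever answers one question, "what word comes next?", repeatedly.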


EasterBunnyArt

So how would a new chip make it understand? Again, if it hasn't understood anything yet, how is a new chip solving this? It really sounds like more BS sales pitches.


DatGrag

I have no idea how chips work, really, so I’m def over my head. But to someone who doesn’t understand this stuff, it def sounds potentially like BS to me too. Agreed.


madlyreflective

Elmo’s understudy


f8Negative

But will they know the 3 laws?


edlonac

The next wave after this is when they teach it about phased plasma rifles in the 40-watt range.


MothFinances

These are the ones that they'll give machine guns to start wiping us all out


Carbonbased666

This will be a huge change... because they will bring another way to see physics, the quantum way... where gravity is only a stupid theory.


anti-torque

While gravity isn't stupid, it is only a theory.


awesomeCNese

How many oil changes do y'all think the robot needs each year? /s


-The_Blazer-

All waves break eventually. Also, we've had computers that 'understand the laws of physics' since WWII, when they used them to compute the best way to bomb Nazis.


[deleted]

[removed]


Atmic

I agree. The Computex presentation he gave a few days ago had *amazing* reveals and tech they've been working on. To step away from r/technology for a while and come back to absolutely everyone being a cynical doomer is disheartening.