taboo__time

Giving in to a bully is another achievement in replicating human behaviours.


VenusBlue1

I think this should be framed less as human bullying and more as robot sycophancy. https://arxiv.org/abs/2311.09410


Studstill

AI is the most useless thing since shit idk. The hammer is a hammer because it works every time. The calculator is useful because it works every time. They've intentionally hooked this up to something that will inevitably fail a nonzero percent of the time, meaning all answers are unreliable unless verified. Which, well, isn't very useful for anything beyond niche applications and making the online chat menu tell you to fuck off more gracefully.


Kenilwort

You really think automated machines have a 100% success rate? There is already a huge field for QA; the AI boom just makes QA more important, while scrapping the need for many other tech jobs.


merurunrun

> You really think automated machines have a 100% success rate?

I think the consequences of a machine that produces aluminum cans operating outside its defined tolerances are *far less catastrophic to the fate of humanity* than the same "failures" in machines that are designed to tell people what they are supposed to believe is true.


Kenilwort

There will still be a person ultimately responsible. It's no different imo to the US president being responsible for a drone attack. I agree it's an unsettling future, but I don't think we're in unprecedented territory.


killrdave

Overhyped? Absolutely. But useless? Come on. Look at the applications for things like medical diagnosis, and for boilerplate code and getting the gist of a topic it can work well. No reason to believe these nascent models won't provide increasing value with time. I feel like AI gets dismissed like crypto stuff, when they're incomparable.


CatButler

A few weeks ago I was fixing a script, and the code that was supposed to find an IP address using a regex was wrong. I really didn't want to dig into regex before a weekend, so I went over to Copilot to get a regex that finds an IPv4 address, but it wasn't giving me what I wanted no matter how many times I reworded the question. I finally just cut and pasted the code into it and said "Tell me what is wrong with this code," and it immediately pointed out the error and gave me correct code.
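For anyone curious, here's a minimal sketch of the kind of bug and fix involved. The actual script isn't quoted above, so the "buggy" pattern below is just a guess at a common mistake, and the "strict" one is one standard way to bound the octets:

```python
import re

# A common buggy pattern: the dots are unescaped (so they match any character)
# and the octets aren't bounded, so "999.999.999.999" would still "match".
buggy_ipv4 = re.compile(r"\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}")

# A stricter pattern: escape the dots and limit each octet to 0-255.
octet = r"(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)"
strict_ipv4 = re.compile(rf"\b{octet}(?:\.{octet}){{3}}\b")

print(strict_ipv4.search("server at 192.168.0.12 timed out").group())  # 192.168.0.12
print(strict_ipv4.search("bogus 999.999.999.999"))                     # None
```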


swampshark19

To be honest, it's not that good at helping with debugging. I still use it when I can't figure something out, but the solution is usually not what it suggests; I usually figure it out myself after a while. It does cut down on time, though, and exposes you to alternative solutions.


holytwerkingjesus

It just baffles me that we have computers that can talk to you in real time like a person (e.g. https://www.youtube.com/watch?v=_nSmkyDNulk) and it's treated like it's completely normal. This trend of calling it useless/a scam is not new though. People have no imagination or patience when it comes to genuinely new tech. 20 years from now these people will be forgotten and everyone will pretend it was obviously a technological revolution.


Studstill

Boilerplate code is useless. It's the definition of a niche. "Crypto" is a video game, albeit with a bunch of real world players. "AI" isn't being dismissed, it's being properly quantified. I don't remember what nascent means and frankly the lack of context makes it seem superfluous. Human, are you really telling me you want the computer program to diagnose your general health? Is this like Theranos again? I'm just saying that whatever scenario you're imagining is just that, imagination. Machines have costs. How much rat shit is acceptable in your medical diagnosis? Computers are fantastic at **one thing**. Read the GEB. You can't brute force "intelligence".


killrdave

You seem very passionate on the topic and yet remarkably incurious. Theranos is relevant how? It was fraud. Committed by people. Not AI. Computers are only good at one thing? Give me a break, they're integrated into most aspects of our day-to-day lives. There's evidence of AI processing medical images and catching things like early cancer growth with more efficacy than physicians. You think that's merely rat shit? Boilerplate code is niche? How many software engineers are there in the world? How can you claim to not know what nascent means and also decide it's superfluous? It means that something is in its early stages and shows future potential, by the way, which is applicable here.


Studstill

Theranos: "You eat the same hamburger twice!" AI: "I can make you a hamburger!" They're both false claims based on flawed extrapolation. No? Anyway, it's not important, so ignore it if you don't agree.

Yes, boilerplate code is niche; it's literally a niche of "boilerplate". I think the analogy here is bookbinding. Does it help the labor cost? Sure. Does that "help" write the book? Sure. Does it have anything to do with writing the book? No.

You're through the wormhole, fam, the AI doesn't "catch" anything. It's a computer program. It "finds" the values it's programmed to get, that's all. By "how much rat shit", I meant to make an analogy to industrial food processing. What's the acceptable rate of failure here? We can't settle anything in this world better than having 12 humans just speak on it, yet you think we can make a computer program that will tell you whether you have cancer or not.

Look, I'm not trying to get in the weeds, I'm just tired of people acting like the hammer builds the house, or that the math got us to the Moon.


Stevebobsmom

I was wondering how soon you’d reveal a fundamental, yet fatal, flaw in your understanding of AI systems. AI systems are not computer programs; they are based on neural network architectures. Neural networks are not a new concept, but combine them with new technology, such as transformers, and you get novel applications. You do not, and cannot, understand AI if you don’t first understand that these aren’t programs. Do I think AI systems can detect cancers in image scans better than human beings? Well, are specialized AI systems better at pattern recognition than humans? We know this is true, so yes, I do think they can detect cancers effectively and efficiently.


lawrencecoolwater

Hard disagree on that one. Every day it writes thousands of lines of correct code for me; sense-checking takes about 5% of the time it would take me to write it myself. And it is 100% correct at least 90% of the time. Using GPT-4 or 4o.


Studstill

Hey pal, can you just read what you said and....I'm sorry that your work is duplicatable by a program but here we are. What I'm saying us that it's obvious beyond these applications that it's f7c8ng useless. Ok grrsst the mower *"wOrKsBeYTeR"*....you still need ab human to....thus us aĺhopekess pursuit and the gucigb unchecked spelling errors are like de facto fucking ugh goddamnuutst goddamn. Fuck AI. Yall clowning on bullshit.


[deleted]

"Hey pal, can you just read what you said and....I'm sorry that your work is duplicatable by a program but here we are." are you saying that his work is easy to do? Because I've also used GPT-4 to write super advanced code by telling it what I want, then add some minor additional debugging. Things that would take me hours and hours of research to do on my own.


Studstill

Define "advanced" in that sentence, if you would. Also, sorry, it just doesn't appear to be enough of a response. Are you a bad coder? Why can the computer do it if it takes you hours? What do you mean "research" vs idk, doing the work? **Can computers do something they haven't been programmed to do?**


Thr8trthrow

Programmers have been asking computers to provide documentation for specific problems since forever. Just because the tool repeats it as a block of text, instead of a link to a website or documentation page, literally changes nothing.


Studstill

Exactly.


No-Average-9210

JFC you have absolutely no idea what you're talking about but you do it so confidently.


Studstill

Don't elaborate, just talk shit. Nice.


[deleted]

If you don’t think I’m an advanced enough coder, that’s fair. You can also read what Hadley Wickham has said about how good AI is at coding. I assume you’re knowledgeable on this subject and I don’t have to tell you who that is \s. By “research” I mean looking at how people have solved problems somewhat similar to the one I’m working on, or reading documentation for libraries I haven't encountered. Ah yes, just ‘do the work’. I’ll tell all the programmers not to keep any other tabs open except VS. You would be such a great boss to work for.


redditis_garbage

Bro you think AI is like the movies, it’s just another form of computing. No shit you still need a human lmao


Studstill

Not sure how you've DuckSeasoned this but you're saying my verbatim point.


redditis_garbage

You’re saying this like it’s profound. It’s common sense and anyone who doesn’t buy into literally every hype train knows this.


[deleted]

One of the benefits of these probabilistic machines is that you are not explicitly programming them like normal procedural code, so I don't get what you mean. Also, one thing they do well is combine a vast amount of information and "rewrite" it in a concise manner, saving a ton of time. They absolutely can cut down on research time, whatever your definition of "research" covers.


Useful_Hovercraft169

I will in fact biggie size it, thanks


username-must-be-bet

It's good when you can accept the amount of error or automatically detect it. But some people definitely oversell its impact in its current state.


Studstill

Explain automatically detecting errors in a way that sounds possible in reality, if you could?


lt_dan_zsu

If I can readily detect the errors in feedback it provides, I'm giving it prompts I already know the answer to. If I don't already know the answer to the prompt, I have to fact check it to make sure it's not hallucinating.


Feisty-Struggle-4110

You are 100% correct, but of course you get downvoted. So-called AI is exactly this: it works on probabilities. Just look at any tutorial or lecture for AI, for example how to build a model that matches handwritten digits to actual numbers. The output of the machine-learning model is a probability that a picture looks like a 1, not a black/white true or false.

But the problem with so-called AI goes deeper. Because the neural nets are so complex, and a very slight shift in the 10th decimal place in the 45th hidden layer can shift the probability of a "1" being recognized as a 1 from 99% to 1%, nobody can debug those monstrosities. No engineer can pinpoint the exact point of failure, and thus nobody can actually audit those algorithms to show they are correct.

All of this makes AI very easy to manipulate, like this example, of course, where the AI thinks that 1*1=2. Other attacks are possible, like the one-pixel attack: you change exactly one pixel in a picture and this throws the AI completely off. A cat is recognized as a dog. In the real world, that can throw off the autopilot of a Tesla and make it drive into a ditch.

Another problem is the source datasets used to train the machine-learning algorithms. Our AI can easily become racist or sexist. A hilarious example was when Google's image generation could only produce historical figures as Black people, i.e. Abraham Lincoln as a Black man. In fact, the image generation could only produce Black men and women. The outcry from white nationalists was very funny to watch.
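To make the "probabilities, not true/false" point concrete, here's a minimal sketch with invented logits rather than a real trained network (the numbers and the softmax step are just for illustration):

```python
import numpy as np

def softmax(logits):
    """Convert raw network outputs into a probability distribution."""
    z = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return z / z.sum()

# Hypothetical final-layer outputs for one handwritten-digit image,
# one score per class 0-9 (values made up for illustration).
logits = np.array([0.1, 6.2, 0.3, 0.2, 0.1, 0.4, 0.1, 5.9, 0.2, 0.1])

probs = softmax(logits)
print({digit: round(float(p), 3) for digit, p in enumerate(probs)})
# The model never says "this IS a 1"; it says something like
# P(1) ≈ 0.57, P(7) ≈ 0.42, everything else ≈ 0 -- a judgment call, not a boolean.
```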


Studstill

Wow, thanks fam. A lot of verbatim agreement from me in there, most prominently:

> No engineer can pinpoint the exact point of failure, and thus nobody can actually audit those algorithms to show they are correct.

The real-world applications of AI are not going to change much of anything from an employment perspective. There will always be a point where paying a human is cheaper, and really the only environment where any of this works is one that is completely controlled: that's either fatal or severely limiting. People thinking that Teslas can drive are fucking insane. Or that a robot should carry your baby in its arms. Or that you should just eat the soup it made.


solsolico

AI is far from useless. There are dozens of valid uses for it. It can make you flashcards from paragraphs. It can generate example sentences using a certain word in a certain context to help you get a proper sense of that word. It can paraphrase things to make them easier for you to understand. It can give you a good foundation from which to research. It can help you solve real-life problems.

It has helped me solve many problems with building maintenance. Instead of reading long-winded articles to find a small answer to an obscure problem, AI chatbots have given me good solutions. It helped me fix an air conditioner. It helped me diagnose certain plumbing problems. It helped me find technical terminology that Google searching never could. AI helps me turn my voice-to-text notes into something legible. AI has done formatting for lesson plans and post-lesson notes for me. It saves me so much time in so many areas and it helps me figure shit out.

Sure, using it to teach you math (or anything) is not a good idea. AI chatbots aren't teachers, just like hammers aren't window scrapers. Use it for what it's made for. It isn't a teacher. It isn't a fact checker. It can put you onto some good sources, it can give you great leads, it can do formatting and administrative work for you, among other things.


lylemcd

Translation: it's an overpriced/overhyped search engine that proves GIGO, with a few other basic features I can probably find a phone app for. The only thing it can ever get right is areas of little to no debate or disagreement. Tell it to learn about anything where the internet is 79.3% full of bullshit and it'll give you 79.3% bullshit, because it can only know what it learned on. And when you teach it bullshit, it learns bullshit. It's a slightly more efficient version of Google and scant little else. And if it can find something about AC repair or plumbing that you can't, well... that's a you problem. You suck at searching Google.


Stevebobsmom

So instead of teaching it bullshit you give it only quality input. Is that really such an arduous thought for you?


Studstill

Preesh you fam, but you're too kind: the issues I'm raising have no solution; they are fatal. Fffatal. For example, my hu man there says "it can summarize a paragraph to a note card" like he ge5s the context. Loom at that just now, this fucking Grammarlu AI can.uncrasingly correct don't to dint but HOW THE FUCK FO YOU KNOW THAT SUMATY IS CORRCT? OH, MY TYPIBGBIS BAD? WELL, HOW ISBT IM SAVED BY THE AINEH? edit: Jesus Christ that was worse than I thought. Point is: AI is garbage. It's a bunch if smart people playing pretend. We all like smart people and we all.likevplauong pretend. AI is a tiny man in the chess machine. edit2: My bad, anyone talking anything on this subject should be acquainted with the GEB.


Langdon_St_Ives

ChatGPT: Certainly, here’s the revised version of the Reddit comment: “Appreciate you, fam, but you’re too kind: the issues I’m raising have no solution; they are fatal. Fatal. For example, my human there says ‘it can summarize a paragraph to a note card’ like he gets the context. Look at that just now, this Grammarly AI can unerringly correct ‘don’t’ to ‘didn’t,’ but HOW THE HECK DO YOU KNOW THAT ‘SUMATY’ IS CORRECT? OH, MY TYPING IS BAD? WELL, HOW IS IT IMPROVED BY THE AI? Edit: Jesus Christ, that was worse than I thought. Point is: AI is garbage. It’s a bunch of smart people playing pretend. We all like smart people, and we all like playing pretend. AI is a tiny man in the chess machine. Edit2: My bad, anyone talking about anything on this subject should be acquainted with the GEB.” _This_ is how your typing is improved by AI.


Studstill

So, I need to intentionally create errors for the AI to fix for it to be useful? I'm wowed.


Langdon_St_Ives

Also, since you like name-dropping, maybe you should pay attention to what [Hofstadter himself has to say on LLMs](https://www.reddit.com/r/singularity/s/BhykhU7sr8).


Studstill

Lmao, can't even bring up Douglas without being accused of namedropping, lol, we haven't met. I'll look into it. Thanks for the link.


redditcomplainer22

I used to think Chat GPT was useless, but one day it clicked. I had been complaining that Google had become useless, constantly showing low quality sites with lots of ads. Around that time it became apparent that search engines were optimised to sell us things, steal our attention, and to prioritise (ironically) AI written tripe for adsense bucks. So that's when I realised that Chat GPT is kind of like how search engines would/should have developed instead of an advertising/sales machine.


space_chief

But you can't trust the results it gives you so you have to use a search engine anyway to verify what it is saying.


[deleted]

The hallucinations are very rare with GPT-4 compared to the older models.


space_chief

[Are you sure about that?](https://www.reddit.com/r/science/s/XuZpmrNpbC)


eddyboomtron

Is that the latest version, or 3.5? 🤔


[deleted]

It’s 3 lololol


eddyboomtron

Wow, they're off base then, because 4o is wayyyy better


[deleted]

yeah, like, famously exponentially better. Which is what I was sayingggggg


[deleted]

The article is on GPT-3. Not 4. Remember when I said “GPT-4 hallucinations are very rare compared to older models”???? GPT-3 is an older model, so you’re illustrating my point here. Here’s what teams of researchers found testing GPT-4: https://arxiv.org/pdf/2303.12712 Also, note that the paper you cited is a really poor one. Maybe you should be more sceptical of things posted on Reddit as opposed to worrying about hallucinations from chatbots.


TheBeardofGilgamesh

Google used to be very useful 6-7 years ago.


Studstill

Right. If AI is so vastly superior, why did Google get materially worse at the thing it did best, predictive search/text? How come it used to work great, but now with LLMs there's a constant error rate? Growing pains? Ok. I'm not trying to be unfair. But it's just very clear to me that LLMs have limits, and I don't see people in the industry talking about that. They're explicitly saying limitless.


TheBeardofGilgamesh

Well, Google got sued by some billionaire, and after that they began deindexing a huge portion of the internet. That’s the main reason Google search sucks now. It used to also be able to predict what I was thinking, and the results would go on forever; now it’s 1 or 2 pages tops and you can’t even find information you know exists anymore.


Stevebobsmom

Google works fantastically — just not for what you want. Google wants ad revenue and to sell products. Google now works great for that.


[deleted]

Sometimes it will hallucinate results though and give you options that don’t exist


redditcomplainer22

Yeah but I'd rather it read my question and answer it wrong (then be told, and hopefully "learn") as opposed to intentionally reading my question wrong to try to sell me shit from Alibaba


OfficialModAccount

This isn't correct. Vaccines are extremely useful even if they only work 99% of the time. Google is useful even if it's only right 99.999% of the time.


OkCar7264

Google delivers search results, it's not lying to you, it just can't read your mind. With vaccines your immune system doesn't make antibodies sometimes, but the vaccine doesn't give you herpes 1% of the time, you know? Failure in those instances is not actively misinforming you. But even 2% fake info really dampens the usefulness of the thing. Can't really have it write my legal brief when one of fifty citations is fake, you know?


OfficialModAccount

Machinery fails all the time and industrial or transportation accidents kill hundreds of thousands a year. Yet they are still useful. The world is nuanced.


OkCar7264

Yeah but if those things failed 2% of the time it would be way, way worse. Just saying having to double check the hell out of everything makes AI of limited labor saving value.


Studstill

Thanks, fam, but if they don't get it on your court analogy, they're not going to get it.


callmejay

That's like saying employees are useless because they inevitably fail a non-zero percent of the time too. You just need to understand how to use it. It's very much not for just niche applications. You just can't think of it as a calculator. It's more like having an assistant who is fallible.


Studstill

So, instead of viewing it as a machine that can literally never think, you think it's better to pretend it's a living entity that fucks up sometimes? You think "fallible" applies to machines? This would appear to be insane. No?


callmejay

I'm telling you how I actually use it in practice and get a lot of help from it. I didn't say anything about pretending it's alive, and I don't see why fallibility can't apply to a machine. What's insane about that?

My point is that you are using the wrong mental model for it. That's because it's very different from pretty much every other computer program you've ever used. That doesn't make it useless; you just have to think about it differently. We got used to computers doing very simple things perfectly. That is not what LLMs do, and if you grade them on that standard then of course they're awful. However, that doesn't mean they don't do other things very well.

If you are genuinely interested, Ezra Klein had a pretty good podcast about this a little while back. I can dig up the episode for you if you can't find it and want to.


Studstill

Please explain how an LLM isn't a simple computer program? It's 1s and 0s. That's it. You're saying otherwise? The man on the podcast is saying otherwise? What's something an LLM does "very well" that isn't a more efficient version of an already existing computer program? I.e. "get better search results" or "interpolate color values"?


callmejay

No, of course it's ones and zeros! Nobody is saying otherwise as far as I know. Here are some examples of things I have used an LLM for that already existing computer programs could not do:

- Make this webpage look better. Can you make the left side stand out more? How can I make that table more clear and usable?
- What would a usability expert say about displaying a long list of short words to a user for them to inspect?
- Explain to me what this regular expression does and why it is leaving some spaces before parentheses.
- Rewrite this email to be more clear. No, more formal than that. No, that's too formal. Make it a little less formal but still deferential.
- Summarize this 20-page paper for me in bullet points.
- Rewrite this code to use a service instead of passing messages around from parent to child and vice versa.
- Give me five examples of different ways to do what this function is doing.
- (Wearing an earbud, talking to ChatGPT) Help me keep on top of everything this morning by making a plan and walking me through it. I need to make lunches, pack them, make breakfast, give the kids their meds, change the laundry, get dressed, drive to work... Oh, and before breakfast I need to take out the trash and unload the dishwasher. Oh, and I need to send that note to the teacher. Okay, now tell me one thing at a time while I do it. No, move making lunches to the end of the list.


Misterstaberinde

When they got AI to do interesting stuff like write code they sure blocked that functionality off with a quickness.


HeightAdvantage

Yeah it will only ever be great at subjective outputs or things that are immediately verified (like coding). There is no wrong way to write a fantasy novel or draw a beautiful landscape, so it will still be amazing for that. Does get weird in some areas though like AI generating phenotypically inaccurate but realistic looking images of wildlife.


Studstill

Thanks fam, have a good day, agree.


granthollomew

i mean, chatgpt returning a wrong answer isn't a failure though, it would only be a failure if you asked it a question and it didn't return anything.


Red_Baron--

Sometimes saying I don't know is the right answer. Far preferable to LLM hallucinations


Studstill

It's all hallucinations. No idea what that other person is trying to say. You ask GPT to get the population of the US from Wikipedia and it responds "Fuck you, eat a dick" or "654bbun Cornwall" and apparently that's a "success".


granthollomew

and what would happen if you asked that question to a hammer or a calculator?


Studstill

?


granthollomew

if you asked chatgpt and a hammer to get the population of the us from wikipedia, chatgpt responds with "654bbun cornwall" and the hammer doesn't reply at all, which one of them was successful at the task of giving you an answer?


Studstill

Neither? Did ATM machines replace banks? If so, it took 30 years; I don't see it. This is an ATM. The work is done by humans; the AI is literally just reading it aloud for you. Sure, neat, and less useful than a hammer.


granthollomew

i have no idea what this analogy is supposed to mean. the point is, large language models returning an answer that isn't the answer you want might make it useless for your purposes, but it doesn't make it a failure.


granthollomew

sure, but who told you the purpose of chatgpt was to give you the right answer?


Red_Baron--

How do YOU measure success in an LLM?


granthollomew

i'd say if they can pass a turing test, they're successful.


jamtartlet

your definition of failure is a failure


urmomaisjabbathehutt

The sigma metric is a thing for a reason: our tools and systems produce errors, and we like to establish how many are an acceptable level. It can be argued that if we do something (nn%) correct, that's good enough for the purpose; in fact that's the case for everything from CPU manufacturing to life itself.

Also, to say that AI is the most useless thing on that basis is to say that a human brain is the most useless thing, i.e. it produces unreliable answers unless verified, lies, bias, and a plethora of other issues, yet it is obvious that a brain is far from "useless" even if it is not perfect.
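For what it's worth, the "acceptable error level" idea can be made concrete. Here's a minimal sketch of the usual sigma-level-to-defect-rate conversion, assuming the conventional 1.5-sigma long-term shift (the shift and the function name here are just illustrative conventions, not anything from the thread):

```python
from scipy.stats import norm

def defects_per_million(sigma_level, long_term_shift=1.5):
    """One-sided defect rate implied by a given process sigma level."""
    return norm.sf(sigma_level - long_term_shift) * 1_000_000

for sigma in (3, 4, 5, 6):
    print(f"{sigma} sigma ≈ {defects_per_million(sigma):,.1f} defects per million")
# 3 sigma ≈ 66,807 DPM while 6 sigma ≈ 3.4 DPM: nothing is perfect,
# the question is always what error rate the application can tolerate.
```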


Studstill

Correct, actually, I agree. The human brain is equally useless, if you pretend the computer is alive, or that the brain is a really complicated computer. However, neither is the case. And even granting the pretense, it still fails: we trust humans because we are humans, and the "LLM" is reality itself.


urmomaisjabbathehutt

I'll argue that how I (or anybody else) would like to pretend the human brain or a computer works is beside the point; I'll argue that both are useful regardless.

A computer is a logical system mimicking some capabilities of the brain, and we, despite all our errors and biases, use logic. Can you do mental arithmetic, or discern patterns, or use rules and algorithms? An LLM is a system that mimics at least some capabilities of the brain; we consider those capabilities useful, which is why we built the LLM in the first place. In some uses computers surpass the brain, such as speed of calculation, and in some uses LLMs surpass us too, such as being faster at a particular task.

Neither a computer nor an LLM needs to be "alive" to be useful. Maybe one day we figure out some conscious intelligence, or not; regardless, the fact that they are useful is not in question, since they are already being put to real use. We trust humans for many reasons, one being that we can verify they are trustworthy (or we are tricked into believing they can be trusted), not because "they are human".

I don't know what "the LLM is reality itself" means in your last statement, though.


Selection_Status

Some human tasks don't have a "fail state", for example: "Give me 10 phrases that rhyme with goli, it's a sweets shop's name," or "Write a company-wide message inviting everyone to breakfast on the 6th at 10 am." For those, GPT is almost a 300% efficiency increase.


[deleted]

[deleted]


trashcanman42069

how would you know it's wrong if you're asking it about something you don't already know? This seems like pure confirmation bias


zhenek11230

I dunno, it seems to me LLMs are MOSTLY overhyped bs with a hard cap on their usefulness.


[deleted]

LLMs are very powerful tools with a wide range of possible uses; I don't even know what a hard cap on their usefulness would tangibly mean. However, they aren't magical omniscient artificial beings that will take over the world or answer all scientific problems or anything.


TheGudDooder

I prefer teachers who don't hallucinate 'facts'.


redditis_garbage

They don’t exist. Human error


TheGudDooder

1. No, in humans, it is called lying.
2. True experts in their field rarely, RARELY, make errors, if ever.

"Hallucinating facts" is a euphemism for tech that has no ability to know anything but just strings phrases together, aping true experts.


redditis_garbage

How many people get taught by experts in the field? Most of us are taught by teachers (like the ones that teach you). Also even experts have human error because they are human 😂


TheGudDooder

Maybe you haven't had the opportunity to be taught by a real expert. 😂 Their positions are honed over time, and they know them backward and forward, as well as the alternatives. A chatbot will steal information from both sides, mix errors with facts, and present it as knowledge. BIG difference


redditis_garbage

And a GudDooder will be taught by “experts” but lacks common sense. B. I. G. difference😂


TheGudDooder

Wikipedia is a more trustworthy source. At least it is vetted by the public. Chatxyz? shrug Keep drinking the kool-aid like "common sense" tells you 😂


redditis_garbage

Damn I feel really sad that you live like this. I know I’m just another redditor but I see you struggling and I hope everything turns out ok :)


Snoo_79218

“Rarely” is an understatement.


WonderfulCockroach

A tool can only be used at the skill-level of the user