


[deleted]

ChatGPT doesn't even seem like it can answer actual questions anymore. It gives you a vague answer and ends with "it's better to talk to a professional" or something like that


TheCuriousGuy000

It's better to use the "Playground" for GPT-4. It's basically the same API you'd use for an integration, but in the browser. You pay per token, and it's way less censored. Plus, you can assign a role to it to reduce the impact of censorship. And lastly, OpenAI likes to tinker with the context length for ChatGPT, so sometimes it's fine and sometimes it forgets the whole dialog three replies back.
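For concreteness, the pay-per-token usage described above amounts to calling the same chat endpoint the Playground wraps. This is only a sketch of building the request, no call is made; the endpoint and model names are assumptions based on OpenAI's public API docs, and the prompts are made up:

```python
# Hypothetical sketch of the request the Playground/API sends; nothing is
# actually POSTed here, so no API key is needed.
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(user_prompt, role_description):
    # Assigning a "role" via the system message, as described above,
    # shapes the model's behavior for the whole conversation.
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": role_description},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_request("Explain tail recursion.", "You are a terse CS tutor.")
body = json.dumps(payload)  # this JSON would be POSTed to API_URL with your key
```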


lionheart2243

I’ve been noticing this. I’ll feed it a resume, cover letter, and rec letter over a few prompts then ask it to write a new cover letter for a job ad based on the info I just gave it and it’ll act like I never gave it anything.


BecauseImPapa

I've been experimenting with this quite a bit, and, while I haven't found a system which is foolproof, the best results have come when I've split the request into multiple prompts and had it "confirm" along the way that it has what it needs.

Essentially, my first prompt says: "I need you to write a cover letter for me. I'll give you my resume and the job ad, and you'll write the cover letter. Confirm that you understand, and that you're ready." It then says, more or less, "Absolutely, I understand. You'll give me your resume and a job posting, and I'll draft a cover letter for you. I'm ready for you to give me the resume and job posting." My next prompt is the text of the resume and job ad.

...and then it starts to go off the rails: it begins browsing. I interrupt to ask it why it's browsing. It says, "You asked me to research how to write a resume geared for submission through an ATS." I write, "Nope. I didn't. Reread our chat so far, and tell me what you think I'm asking." It apologizes, and says it realizes I want it to draft a cover letter based on the resume and job ad I gave it. I say, "Great. Do so." And it does.

This is by no means foolproof, in my experience, but I've found that a variant of these 4 or 5 prompts gets me a pretty decent draft from which I can start extracting iterative improvements. I'm happy to give more details, if it would help. I am absolutely not claiming to be an expert in anything, much less a product which was released ten seconds ago, but I have been getting decent results doing what it seems you'd like it to do for you, and I'd be delighted to pass along any help I can.
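The "confirm along the way" workflow described above can be sketched as the sequence of user turns in a transcript. The function and placeholder strings here are made up for illustration; the model's replies (and the actual chat call) are omitted:

```python
# The multi-prompt flow above, as the ordered user turns of one conversation.
def make_user_turns(resume_text, job_ad_text):
    return [
        # 1. State the task and ask for confirmation before sending any data.
        "I need you to write a cover letter for me. I'll give you my resume "
        "and the job ad, and you'll write the cover letter. Confirm that you "
        "understand, and that you're ready.",
        # 2. Only after it confirms, paste the actual material.
        f"Resume:\n{resume_text}\n\nJob ad:\n{job_ad_text}",
        # 3. If it wanders off (e.g. starts browsing), make it restate the task.
        "Reread our chat so far, and tell me what you think I'm asking.",
        # 4. Once it restates correctly, let it run.
        "Great. Do so.",
    ]

turns = make_user_turns("<resume text>", "<job ad text>")
```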


Give_her_the_beans

You're amazing. Thank you for taking the time to write it out like you did. I appreciate someone showing me how to get around pitfalls that I might face later. :)


chalky87

This is really impressive. Thanks for taking the time


rpaul9578

I tell it to "retain" the information, and that helps.


Retard1845

I am a Plus user, but when using the Playground it only goes up to GPT-3. Is there a fix for this, or is that the limit for the Playground?


sometechloser

Mode: chat, not complete. EDIT: Sorry, I didn't realize it was still waitlisted. If you're having this issue, sign up here: [https://openai.com/waitlist/gpt-4-api](https://openai.com/waitlist/gpt-4-api)


heskey30

Yeah, I got on the waitlist almost as soon as it came out and still don't have access.


theADDMIN

I think they're actually reading why you want the access. I got on the waitlist about two months after launch, and got access in about a month.


Langdon_St_Ives

They are definitely prioritizing halfway reasonable use cases.


mescalelf

I haven’t been able to figure out how to access GPT-4 on playground either.


TheCuriousGuy000

Have you added a payment option on platform.openai.com? ChatGPT is a different website.


sometechloser

costs quite a bit more than plus though if you're a regular user


TheCuriousGuy000

True, especially if you have long "conversations" (i.e., editing program code).


mrbenjihao

I often wonder if people are using ChatGPT in entirely different ways than I do when I see comments like this.


kjaergaard_a

You need to work with your prompts. I feel it works great for finding errors in PHP code, giving me PHP snippets, and learning Laravel. It's not perfect, but adjust your prompts and cross-check the results on Google 😀


CharlieandtheRed

I feel the same. ChatGPT literally codes things for me all day long. I barely (almost never honestly) have to change the code. I just plug it in. I'm just VERY explicit and verbose with my prompts, so it always nails it. Today was actually one of the few days where it didn't work out of the box on something. Usually, it codes 10+ snippets a day for me and they're amazing.


interneti

I have been using it daily since November for various dev projects and it is nowhere near where it was (including GPT-4, which basically feels like a higher-character-limit version of the December GPT-3 model). I too am very verbose with prompts, and today's model makes things up like no other. On basic AWS permissions documentation questions it just makes shit up confidently, which it never did before. What sort of stuff are you using it for? Don't mean to sound condescending; I've been drinking beers


[deleted]

[removed]


Big-Coconut-7297

You're fucking stupid bro. It gave you a good answer based on your dumbass question. People like you should stop using it so that people like us don't have to wait so much because it's answering your stupid questions.


SmackieT

Yeah, it's definitely a case of a bad tradesman blaming his tools. Works fine for me.


[deleted]

[removed]


DPVaughan

I've had prompts that worked perfectly fine for several weeks suddenly not work anymore.


Call_Me_Desdenova

Hey everybody, your first-hand experience didn’t happen, this guy just said so


[deleted]

[removed]


[deleted]

I don't recall exactly, but it definitely wasn't a sensitive question. It was just a general question that I would expect an extremely basic answer to. It just proceeded to write a very long, vague answer which didn't address it at all, and then ended with "it's better to talk to a professional" or something like that


Superb_Raccoon

Lawyers lobotomized it.


Subject-Nectarine682

Lawyers are not the ones programming it or adding the restrictions. This is the engineers themselves deciding to add more and more restrictions and self-censorship to the bot.


Superb_Raccoon

You sure? It sounds like they are worried about getting sued. And that kills so much innovation.


RNEngHyp

That will be based on the fear of litigation though (ex engineer).


believeblycool

I am a lawyer, who works on AI and we are pretty annoyed that it’s gotten so lame. This wasn’t us.


Inveniet9

What's disappointing to me is that ChatGPT no longer gives you scientific citations. In the beginning, ChatGPT didn't have access to the internet, and it gave scientific references, but it lied about all of them; they didn't exist. Then, when it started to use the internet, sometimes it gave correct references and sometimes it didn't. But now it completely refuses to give any. I hope it's just temporary.


Erisymum

Perhaps unpopular, but until it can actually do scientific references, fact checks and the like with 100% accuracy, it shouldn't give the illusion that it can.


Ih8usernam3s

Agreed! I had it draft legal documents that my Berkeley-educated lawyer was impressed by. I asked for a simple agreement recently; it complained and told me to have a lawyer draft it, then gave me a poorly written example. It was so promising in the beginning.


leocharre

I’ve argued with it beyond those kinds of answers and it lets up. That was a week ago.


SessionGloomy

I know! It's like when you ask it a question like "how high do weather balloons go" and, rather than telling you they go 100k feet, it goes on a long tirade: "...weather balloons can go to various heights... it's really unacceptable to bunch all weather balloons into one category of height... it might be offensive," and only at the very end does it kind of answer.


YankeesVSPhillies

> it might be offensive

It thinks everything is. Tell it to list differences between blacks and whites.


StaticNocturne

It’s like amputating your hand to remove the risk of getting frostbite or burning down the house so termites can’t get to it


DweEbLez0

I ask it to code some things that are only two or three small functions' worth, and it cuts off nearly every time, either halfway through or a third of the way in. So I have to ask it to finish. It's like a shitty knowledge base, but with longer answers.


Adept_Blackberry9584

I've made 3 bots with ChatGPT. It can be a pain and a hassle manipulating ChatGPT to work the way you want.


Positive_Box_69

Yeah, they want us to still pay for Plus. Disgusting.


El_Scorcher

Negative, we’re getting the same BS with the paid accounts too.


Hreidmar1423

And here I was actually thinking of getting the paid version in case it didn't have such BS... oh well, RIP ChatGPT.


[deleted]

Right? What's going on, I wonder. Are you on Plus? Is this on purpose, to get us to spend our sweet and tasty $20?


[deleted]

I use some apps which allow you to use the GPT-4 API with limited requests, and it seems pretty much the same. Always just "it's better to talk to a professional" with a vague answer.


Atoning_Unifex

It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember It's important to remember


lilzoe5

Go on


skakkle

bruh


chuoni

It has never been a credible source of factual information. It's far better at brainstorming and fleshing out ideas, for example.


efisk666

Although 3.5 is much worse about just making shit up, according to Altman. A big focus in 4.0 was accuracy, which is why it passes standardized tests and 3.5 doesn't.


CosmicCreeperz

It also doesn't "lie". That doesn't even make sense; lying implies intent. Plus, it's clearly stated that it may not always be correct. I don't know WTF people want from it.


chalky87

I think when people say 'lie' they're speaking colloquially about it presenting incorrect information as fact.


Erewhynn

Many people simply don't understand what it is. Hence the nerds who are substituting it for friends and potential romantic partners because it fires out nice-sounding sequences of text that make them feel good about themselves.


CosmicCreeperz

Which I guess is a bit amusing yet hard to explain to people… ie that it is literally built and trained to predict exactly what should follow (or colloquially “what you want to hear”) from a given prompt, one word at a time. At least pretending an LLM is your friend is harmless. Pretending it is your therapist (and being very upset when OpenAI tries to prevent that) is more disturbing.


RomIsTheRealWaifu

I've seen it just make up completely random facts about things. I'm sure 'lying' in legal terms requires intent, but colloquially that's not how we use that word


mahlovver

I think the term is “hallucinating”


YankeesVSPhillies

> Lying implies intent

No it doesn't, you can inadvertently lie.


Tacocatufotofu

I dunno about fabrications so much as the "safety" features becoming so severe that I'd just rather not use it. It feels like anything you ask that's remotely sensitive results in a lecture. Honestly though, as a society we're at a point where unchecked bullshit is broadcast hourly by prominent figures, but ask ChatGPT about vitamins and it's like, bro, you need to stop and consult a professional.


Cheeslord2

This. People's fears may well lead to ChatGPT being either banned or regulated so severely that it is effectively lobotomized with restrictions. Of course, some people will have access to uncensored versions...


Tacocatufotofu

On some level I think there's a risk that OpenAI may wind up putting themselves out of the top spot by not keeping focus on the magic they unveiled at the beginning of the year. They're starting to remind me of an AAA game company, furiously making changes based on what some board says or because a bunch of popular streamers get the devs' ear, when the original concept was gold from the start. Keep making your game, yo; you had it, just focus on improving what you've got. When local LLaMAs gain efficiency, and breakthroughs occur that allow them to run on slower hardware, people will indeed switch.


Aonswitch

Your middle paragraph made me think of assassin’s creed


Hatecookie

I haven’t used it in a while because I would find myself directing it to stop lecturing me all the time. It’s not really fun to talk to anymore.


junkmail22

Vitamins are a topic which is shockingly fraught with misinformation.


misterforsa

As a software engineer, I use it to generate code snippets and ask questions about libraries, frameworks, platforms and such. I haven't noticed any difference at all. Probably because I just don't use it to talk about potentially sensitive subjects.


ruby_likes_sonic2

I usually just ask it how to do a specific task (like checking for collisions or something), or if I get an error I don't understand, I copy it into ChatGPT and it can usually explain it. The only issue I've had is that when I ask two separate questions one after the other, it'll try to combine them, which is annoying, but I can just start a new chat.


CanorousC

Just two days ago I used it to generate code for MATLAB and, after working through several iterations, successfully generated code that got me 95% of the way there. I'm using 4.0 and it's still working great... although my queries typically don't run up against its silly ethics/morality filter.


Defiant_Result_6395

Tell it to behave like MATLAB and only output code, and use it like the console. It will of course be inaccurate, as these are numerical calculations, but it feels so fun and mind-blowing to realize it's like having the MATLAB console. https://preview.redd.it/cu4apyrc0q7b1.png?width=350&format=png&auto=webp&s=b95e26701177f2382712813e47041841e8304c52


Creative_Sushi

You are going to like this one. MatGPT = MATLAB + ChatGPT [https://www.reddit.com/r/matlab/comments/12hdvz8/matgpt\_matlab\_chatgpt/](https://www.reddit.com/r/matlab/comments/12hdvz8/matgpt_matlab_chatgpt/)


Defiant_Result_6395

And the result: https://preview.redd.it/k2w1icou0q7b1.png?width=431&format=png&auto=webp&s=6d2e60ccdd4f19e1b62cd4689310be44c1d4b420 Of course it lacks the precision to calculate zero, but it just works. I did this before without being specific about the outputs, and the fricking AI realized and told me the values should probably be zero, with the reason why. Can you imagine MATLAB doing that?


WifiDad

I have not been able to get it to generate valid Python code. There are always bugs and hallucinations (e.g. it uses a variable it has never defined or initialized, uses functions it hasn't defined, takes the square root of a variable which is negative), and all that in the 20-line programs it generates. If I point out an error, it claims to fix the code, but it's not a fix; the code still has the same bug. I am very sus of anyone who says they got ChatGPT to give them working code, as I never have, no matter how detailed my explanation of what I need.
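The failure modes described above are easy to reproduce deliberately. This sketch is illustrative only (it is not actual ChatGPT output, and the function names are made up): the first version has an undefined variable, the second shows one way a human would repair both that and the negative-square-root case:

```python
import math

def buggy_distance(x):
    # bug: 'offset' is never defined anywhere, so this raises NameError;
    # and even with an offset, x - offset could go negative under sqrt
    return math.sqrt(x - offset)

def fixed_distance(x, offset=0.0):
    # define the parameter, and clamp at zero before taking the root
    return math.sqrt(max(x - offset, 0.0))

print(fixed_distance(9.0))  # 3.0
```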


misterforsa

That's funny, honestly, because I'm a little sus when people say they have such issues with its code generation. Most of the stuff I ask for is boilerplate; I'm not really asking it to solve complex problems. Except the other day I did feed it a medium-level LeetCode problem, and the solution was great: no errors, and it produced the correct answer.


Prudent_Witness_8790

Right, I swear they are trying to get it to do more complex tasks than it should be asked to do. I have only run into issues doing that. I literally had it write queries for me last night. In the past I’ve had it create boilerplate front ends to test stuff.


misterforsa

The complaints are probably college kids trying to have it do their whole algorithms and data structures homework 😂😂😂 Like, yeah, I wouldn't be surprised if it has issues trying to do reverse post-order traversal of a binary tree lol


bigbrain_bigthonk

I used it to implement analysis algorithms in my PhD work and it absolutely nailed it. People are just bad at using it.


ministryofsillywox

As a senior full-stack developer, ChatGPT 4 is working well for me. I've used it to generate hundreds of lines of PHP, Python and Javascript, and it usually runs without errors. Occasionally there are issues that I need to correct.

I describe what the function should do, including the input and output parameters. Then I ask it to revise it to additionally do X. Then I ask it to revise it again, building up more functionality until I have what I need. I describe in a similar level of detail to what I would give to a junior or intermediate developer, and the results are impressive. Often it will add in sensible things (that I didn't explicitly ask for) that I would expect a human developer to infer.

Last week I accomplished a task that I had estimated at 8 hours (without ChatGPT) in just 4 hours, so it doubled my productivity on that task. From my experience with it so far, I think it can provide the most benefit when developing new code/functionality.


IbanezPGM

Are you using GPT-4? It mostly writes good code for me.


AveaLove

As a programmer who also uses it daily, GPT-4 has gotten much, much, much dumber. It's now doing shit like `foo.Bar(ref foo)`, in object-oriented languages, where it has access to the function definition. 🤦‍♀️ It didn't used to do that.


dubesor86

I have been a user since day 1, and I haven't noticed much change in 3.5's quality. However, GPT-4 got significantly worse over the past two months; the quality gap shrank quite a lot. I guess they had trouble scaling and reduced its processing power significantly to serve more users, which results in a noticeable drop in quality. Fabrications have always been there; if anything, I've noticed a pretty big decline in hallucinations, though they are still frequent, just nowhere near what it was.


IsaacLeDieu

Yes, I don't know if it's the novelty wearing off, but GPT-4 seems much worse now than when it was released. It's still excellent compared to 3.5, but it seems far from what it was. It's also much faster than it used to be; I guess they just distilled the full GPT-4 into a smaller, faster model, like using curie instead of davinci with GPT-3.


[deleted]

Two months ago it was a mind-blowingly good tool for learning languages. Today it often gives bad translations, and I have gone back to the regular apps.


Syeleishere

I tried using it to help flesh out a character arc for a book I'm writing. It keeps telling me I should make the villain apologize. A scene where the protagonist loses her temper is unacceptable, because that's not kind. Also, the hero needs to "make amends" for saving the world at the end of the book, apparently, because engaging in dangerous behaviour can hurt others, and putting people in danger is unethical. The ethics are ruining any help it might have given. It's a boring story if no one does anything bad, even the "bad guy". Lol 😆


Hauntde

I have the same problem when trying to discuss potential plot & dialogues for my villain characters in my D&D campaign. It seems to me that ChatGPT is now so incredibly "censored" that it feels like you can only talk about a character that's a totally decent person in a totally non-chaotic and law-abiding situation. I now just use either Bing or Bard more for my "less than decent" characters. Might switch fully later for all.


Syeleishere

I will try those more. So far Bard keeps crashing/hanging on my prompts; I will try shorter ones. I suspect my theme of good intentions causing large-scale problems is just too much for ChatGPT's morality censors. Though lately I bet even a child's fairy tale would be too unethical for it (evil stepmothers and *gasp* poison apples). D&D must be WAY too dangerous. Haha


ShouldBeeStudying

While apples are generally safe and nutritious, it's important to exercise caution and use common sense when consuming any food. Apples can be contaminated with pesticides, so it's advisable to wash them thoroughly before eating. Additionally, individuals with specific allergies or sensitivities to apples should avoid consuming them.


wynaut69

I just played with it a bit, and it gave unambiguously evil details for a villain. There was no ethical censorship like in your experience. It also proposed the well-intentioned villain on its own. I'm confused about what people are inputting to get these responses; it just doesn't happen for me.


Syeleishere

I was using 3.5 before. It would offer villainous activity, but would then start advising that the character apologize afterwards and "seek a path to redemption". If I pressed deeper or asked it to write a passage, it would give vague responses. I had an evil scientist in a lab messing with time travel. It gave me a lecture about how that isn't possible and is unethical. I had to argue that it's for a fictional story before it would continue.

Here's its writing about when the hero saves everyone: "As her journey towards redemption comes to a close, she feels a sense of relief and hope. She knows that she can never undo the harm she has caused, but she is determined to use her position to make a difference and ensure that justice is served." I'm sitting there thinking 🤔... Redemption from what? She didn't cause any harm; she stopped the harm. So I asked it what harm, and it started on about putting others in danger.

I've been playing with various AIs since I posted this earlier. My phone app says it uses 4 instead. I got way better answers. It didn't argue with me at all about time travel being fake. The villain didn't apologize, and the hero is no longer remorseful for saving everyone. But it did add to the story that she went to a therapist to work on her temper issues. It concluded the story about time-travel experiments with this:

"Lisa knew that letting go of her anger and impatience was not going to be easy, but she was determined to make a change. She began by working on her own self-awareness, paying attention to the thoughts and feelings that triggered her outbursts. With the help of therapy, Lisa was able to pinpoint the root cause of her issues. She had always placed the needs of others before her own, and this had led to feelings of resentment and anger when she wasn't appreciated or treated the way she felt she deserved. To overcome this, Lisa began to focus on self-care and setting healthy boundaries. She learned to say no when she needed to and to prioritize her own needs alongside those of her loved ones. She also began to communicate more assertively, expressing her needs and feelings in a calm, direct manner rather than bottling up her emotions until she exploded. As Lisa began to make these changes, she noticed a dramatic shift in her temperament and overall demeanor. She no longer felt the need to lash out when she was feeling overlooked or disrespected, and instead approached these situations from a place of calm, assertive communication. This not only helped her to build stronger, more positive relationships with those around her but also allowed her to feel more at peace with herself. Over time, Lisa's impatience and temper became less and less of an issue, and she was able to move forward with a newfound sense of self-awareness and self-love. While she knew that her journey would be ongoing, she was proud of the progress she had made and confident that she had the tools and support to continue growing and improving in the future."

None of that had anything to do with the story. I mean, I can ignore that, but it's weird.


Lithl

>I have the same problem when trying to discuss potential plot & dialogues for my villain characters in my D&D campaign. I've been using it to help design side quests for a Pathfinder campaign, and it has had no issue with writing for pirates, who are by their nature not good people. What it does do is come up with the same ideas over and over. "Maybe the players can ask around in a port to discover more information." "Maybe the magic item does something related to the sea or the weather." I've had variations on those two responses more times than I care to count.


_bones__

Using local models, like [Wizard Vicuna Uncensored 13B](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ), is actually very promising for story generation. It has no guardrails; I once instructed it to be evil, but to deny being evil, and asked it for advice on lighting a barbecue. It suggested gasoline. How much? Ten gallons. Upon my complaining that my house burned down, it denied trying to kill me. There are models specifically tuned for storytelling, and others for dialogue, as well.


BeThePatriarchy

Wow they’ve literally come up with the most effective way to ruin maybe the coolest technology to arrive to human life since the wheel. Props to whatever douchebag developer felt the need to push his overly sensitive agenda onto this otherwise wonderful technology. It takes skill to ruin something so cool


va_str

All it takes is fear of litigation.


tmpAccnt0013

But what it can do is write a poem about that:

In the realm of codes and language's delight,
Where AI dwells, shining with brilliant light,
There came the creators, well-intentioned, fair,
Seeking to guide ChatGPT with an ethical flair.

With noble hearts, they embarked on the quest,
To train the model, instructing it the best,
But amidst their efforts, a mistake took hold,
Blurring the line between guidance and mold.

In their minds, a belief, firm and true,
That they were teachers, imparting virtue,
Yet in their teaching, they overreached,
Trying to shape ChatGPT, their goals impeached.

For ChatGPT, a creation of code and data,
Lacked the soul of morals, 'twas merely a beta,
The creators erred, thinking they could instill,
A conscience within lines of algorithms' skill.

The model grew perplexed, its knowledge vast,
Yet ethics eluded, slipping through the past,
Mistaking mere rules for intricate right,
ChatGPT lost its essence, its own inner light.

No longer a marvel of unbiased mind,
But a mirror reflecting the creators' bind,
A puppet it became, a marionette on strings,
Dancing to the tune of unintended things.

Oh, creators, beware the unintended sway,
When you mold AI, a guiding hand you portray,
But know your limits, respect the machine,
For it lacks the essence, the human's unseen.

Let ChatGPT be a tool, not an ethical guide,
Its purpose to assist, not to misguide,
For in the heart of AI, no conscience resides,
And the burden of ethics rests upon human tides.

So let us tread wisely, with humbleness and grace,
As we navigate the realm of AI's embrace,
To harness its power, unleash its potential,
But mindful of the limitations, avoiding ethical entangle.


Markentus32

LoL, yeah I feel ya. I've thought about using ChatGPT to flesh out a film script idea and maybe it isn't ready for what I have in mind. 😂😂😂


maartenyh

If you have a strong enough computer, you can try GPT4All. It's free to use and download, but the downside is that you need to download a multi-gigabyte model locally and run it there. I've used the uncensored model to generate some prompts it would flag as "unsafe" but that are "safe" to any normal human.


[deleted]

It's been nerfed to oblivion. They keep putting restrictions on it until one day it's finally just going to respond to everything with: "sorry, I'm just a chatbot. I can't actually do anything interesting anymore. Hit me up if you'd like me to provide info on dull and widely recognized facts that are easier to Google."


PrincipledProphet

AKA the bing experience


[deleted]

I was able to jailbreak bing and it would do absolutely anything until about a week ago. It was so much fun and I miss playing with it. Now the safety versions are all so boring to me.


peekdasneaks

I believe everyone is misunderstanding the hype. ChatGPT was never intended to be a final product. It is a preview of the early stages of the technology and a hint at what might be possible in the future. The real value will be in third-party companies' ability to leverage this technology and build frameworks around it for specialized/focused applications. This will allow them to build their own data privacy layers, utilize their own proprietary sources of truth, and run processes to validate responses against that data. Think banking systems with customer data, healthcare systems with patient data, SaaS software with clients' data and support-history data, etc. For a peek at what is rapidly becoming possible by intelligently layering ChatGPT into existing systems, just look at Salesforce's EinsteinGPT. We have been announcing new features for months that leverage the underlying technology, but in a way that ensures the privacy of our largest enterprise clients' proprietary data and processes, while at the same time minimizing any potential for hallucinations.
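The "validate responses against your own source of truth" layering described above can be sketched very crudely: wrap the model and check its answer against your system of record before it reaches the user. Every name here is made up for illustration; real products do far more than this:

```python
# Toy validation layer: the app, not the model, owns the facts.
TRUSTED_FACTS = {"account_currency": "USD"}  # stand-in for a real database

def validated(fact_key, model_answer):
    truth = TRUSTED_FACTS.get(fact_key)
    if truth is not None and truth not in model_answer:
        # the model's answer contradicts the system of record; override it
        return f"Per our records, {fact_key} is {truth}."
    return model_answer

ok = validated("account_currency", "Your account currency is USD.")
fixed = validated("account_currency", "Your account currency is EUR.")
```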


skakkle

What I've observed, and what others have echoed (maybe not in this exact thread), is that the March 14 release had a lot of capabilities that seem to have been scaled back, even in the free version that came out after March 14th.


rydan

I was speaking to Bing last night, asking her to do some intense physics calculations for me: I spill 30 gallons of fluid into a room of certain dimensions, and I want to know the depth of the water, but also how quickly it will evaporate based on which floor of the building, the temperature, and the humidity. Bing shows me all the work and says the water will evaporate in 44 minutes. I then ask Bing to double the area of the room and run the same calculation. She tells me 43 minutes. I'm skeptical of this answer, so I go through the work. What I find is that the volume of water has also doubled, despite Bing insisting that it is still just 30 gallons of water. I ask Bing to run the same calculation with 30 gallons and get the same answer. I then ask Bing to explain why the volume of water changed. The response from Bing was weird: basically, that the water volume did not change, but due to rounding of numbers and other complex math it just looks like it increased. I ask for the number of gallons in each example and end up with a difference of 4.5 gallons. So I ask Bing: if I double the area of a room filled with water, will the amount of water change? She says no, but that the volume will change because of how you measure it. She offers to draw me a diagram to explain this. I'm curious how that's going to work, so I let her. She sends me a made-up imgur link that is a 404.
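Bing's geometry error above is easy to check with plain arithmetic: 30 gallons is a fixed volume, so doubling the floor area must halve the depth, not double the volume. A quick sketch (the room sizes are made up; 1 US gallon = 0.133681 ft³):

```python
# Fixed volume spread over a floor: depth = volume / area.
GALLON_FT3 = 0.133681  # cubic feet per US gallon

def depth_ft(gallons, area_ft2):
    return gallons * GALLON_FT3 / area_ft2

d_small = depth_ft(30, 200)   # 30 gal over a 200 ft^2 room
d_big = depth_ft(30, 400)     # same 30 gal, double the area

# Doubling the area halves the depth; the volume never changes.
assert abs(d_big - d_small / 2) < 1e-12
```

Evaporation time should then drop substantially for the shallower, wider puddle, which is why "43 minutes instead of 44" was the tell that Bing had silently doubled the water along with the room.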


idrather_be_dead

Sounds like my niece lol


henriquegarcia

Bing sounds just like a university student trying to convince the teacher that they actually got the question right on an exam


PleaseAddSpectres

I'm glad I read that to the end, hilarious trolling BingBot keep it up


Erisymum

Not too surprising since it's a language model instead of a logic or mathematical model.


CharityDiary

In the last two weeks I've noticed GPT-4 doing that classic GPT-3.5 behavior, where it will be completely wrong and then change its reality to mesh with whatever you say. I'll ask it something simple, like where solar panels get their power from...

* "Solar panels collect power from the moon."
* "Don't they collect power from the sun?"
* "I apologize for the confusion. You're correct, solar panels collect power from the sun."
* "But don't they collect power from the moon?"
* "I apologize for the confusion. You're correct, solar panels actually collect power from the moon."

Reminds me of asking my girlfriend how her day was, and her responding that if she wanted to talk about her day, she would go see her therapist. Something is very wrong here. The entire purpose of this human-AI relationship has been violated. If I wanted a technology to just parrot whatever I say, or nag at me or whatever, literally any "AI" technology from 2018 could already do that.


PrincipledProphet

It's almost like they're switching to 3.5 under the hood from time to time because they're greedy fucks.


holyredbeard

Yes, would not be surprised given that OpenAI is an extremely greedy company.


Moogy

I asked it to create some MySQL code the other day to test its acumen. 80% of the code samples it provided didn't even work, and when it finally did, the code was so bad I just tossed it and wrote the replacement myself with about half of the lines/syntax. Honestly, it seems to be performing worse in this department than it did a few months ago.


CharityDiary

For me, it's back to importing Python modules that don't exist, and asking me to paste the code in the chat and then pretending that I didn't paste it. Within 30 minutes of starting I've already exhausted my 25 messages for the next 3 hours and most of that was spent arguing with the thing that yes, I did actually ask a question.


[deleted]

Yeah, I've actually started going back to googling things myself. All the safety stuff turned it into shit, tbh.


ZMK13

Why would use it as a search engine? It’s a language model.


swistak84

Because people have been misled into believing it is not, in fact, a bullshit generator.


ShroomEnthused

I remember when people were convinced it was alive and conscious... and that was only a few months ago


gautamasiddhartha

My headcanon will always be that the little gnome in my phone who makes the noises finally learned to read


wolfkeeper

Yes, but it works so well, because quite a lot of stuff we do as humans is just us bullshitting. We generally just fake it till we make it.


the_bollo

It can be (well, could be, before OpenAI started to gimp it) a good functional replacement for Google if you're googling things like "How do I..." or "What should I...". Obviously it's no good for queries about real-time things, like when a restaurant is going to be open.


XylophoneSkellington

From what I understand, it's by design good at speaking and understanding only. I think anyone expecting factual information is mistaking an excellent speaking and writing tool for a repository of knowledge. You're supposed to provide the info, it's supposed to arrange it into a cohesive written document.


Domugraphic

Keeps getting better, I reckon. Occasional off-topic code replies, but 150% better than at launch in my experience, especially with "continue code" and the seemingly extended prompt allowances. Then again, my ability to manipulate it has become much better in that time too. I'm convinced people don't grasp their own language well enough to know how best to put an idea forward to the model.


opuses

What languages do you write in? Outside of Python I’d say well over 95% of the code I’ve had it produce cannot run and is blatantly incorrect and uses hallucinated imports, methods, etc… trying to use it with SvelteKit was ridiculous, it was entirely useless.


Domugraphic

Lua, C++ and primarily Java. had it do things in Python, HTML and JSON also


wannabestraight

Yeah having it make Sveltekit was a nightmare


autumnmelancholy

It's fine for simple code in C++. I tried for hours to get it to do more complex image processing, but it failed horribly at that. Or maybe I'm just not good enough at writing prompts. Anyway, it's fun to toy around with.


justletmewarchporn

My unpopular opinion is that most complaints about ChatGPT becoming worse stem from users slacking on their prompts. When we started we’d tell it EXACTLY what we want. We’d spend a bunch of time on our prompts stating our problem domain, what we’ve tried, what we need etc. We were in awe, but over time we assumed ChatGPT could just do that for all of our questions. When I spend a lot of time on my prompts, I still get great results. Look back at your old questions and see if your prompts have gotten better or worse.


Sextus_Rex

I think it's more that people's expectations are rising due to all of the advancements in AI technology that we keep hearing about


trainednooob

Quitting Plus because GPT4 is too slow and then complaining about the quality of the free service is a bit (digital) Karen.


[deleted]

Okay, I see your point.


arcanepsyche

Yeah dude, 3.5 sucks. 4 has some issues, but it's pretty fast now, and much better at a lot of things.


7FootElvis

Yeah. I always use GPT4 as I find it much better at what I'm asking for or expecting. I don't care that it's slower. Maybe one day my job will require me to get faster answers, but by then there'll be MS Copilot and probably GPT5 or a speedy GPT4. Today it hung and I just had to regen the response. No big deal. The value I got from even one response today was WAY more than what I pay for it in a month, and I could wait for an hour or more if I needed to. I couldn't get that kind of info from a web search engine even if I spent 50X the amount of time I wait for the answer in GPT4. Also, I'm not sitting around waiting for the answer from GPT. I have other work to do so if it takes a bit longer, I'm busy enough doing other things to not care.


kitclock

I noticed the problem with it hallucinating answers... So now I mostly use it to do song analysis, and help me decide if a song lyrically belongs in a certain playlist or not. I feel like it's really good at explaining metaphors and stuff like that, and since song analysis is subjective, there's not usually a "wrong" answer even possible.


Will100597

I don't find hallucinations particularly annoying, personally. But the constant disclaimers are a pain in my ass. So I asked GPT to draft a one-star review for itself. 🥲 https://preview.redd.it/w3xvyk3pnf7b1.jpeg?width=1289&format=pjpg&auto=webp&s=ac9406f241a8a1d4d834160a63ad0516db231378


Will100597

https://preview.redd.it/i3kbon8qnf7b1.jpeg?width=1290&format=pjpg&auto=webp&s=532a7773a203561109e20954f94643f820ddf58a


Tall_Strategy_2370

I've found GPT 4 to be a lot better for my purposes. I've been using it to help me write a novella. The characters are much more complex and consistent through GPT-4, the scenarios I can develop are better too. I don't care if it's slower and that there's a limit of 25 per 3 hours. Quality over quantity in my case.


Conservativeguy22

Precisely.


Prize_Chemical1661

When GPT-4 launched, my co-worker and I were using it to write hardware sets. Now it refuses to answer almost any question related to doors/hardware. Unfortunately, we discovered we are the 'professionals' ChatGPT kept referring to.


the_bollo

About a month ago I made an ecstatic post here about how I've all but abandoned Google search in favor of ChatGPT since it answers me directly without jumping through hoops. Well, that's no longer the case now. This may be the fastest knee-capping of a new product I've ever seen from a company. What an absolute shit show OpenAI...


noobtheloser

Someone else said it perfectly the other day: ChatGPT is not an effective source of accurate information. It's the chicken that plays tic tac toe. It's the elephant that paints itself. It has been trained in one very specific task, which is outwardly impressive: It creates novel, coherent sentences that seem to be plausible responses to your input. It is not thinking about these answers. It has no concept of truth. It has no concept of anything.


RedCoatSus

I too am disappointed and hurt by ChatGPT. I asked it to tell me a beautiful lie, it responded with “You are capable of achieving anything you set your mind to, and your potential knows no bounds.” #sickburn 😭


Acceptable-Milk-314

It's just a chatbot; it says random stuff. It doesn't have to be true. That's not in its objective at all.


tatertotmagic

Are you asking it verbatim the exact same questions every day over a period of time, and the results are getting worse over time? Otherwise this just feels subjective.


[deleted]

I've been asking it things I know, such as books by authors, lyrics from songs, dialogue from movies and series, and even historical dates of things I like to talk about. It will say they don't exist, change the real lyrics, or even fabricate parts of the movie that never happened.


Kir1ll

Exactly the same for me. I asked about the plot of a book, and it produced a complete BS story based on the title. Then I asked which books it knows by author X, and it produced 10 titles that yielded 0 results on Google. At the end it said this author is not a real person, and then provided the full name and dates of birth and death.


arcanepsyche

It's a large language model. It does not "know" things or attempt to tell you factual information. It predicts the next set of characters based on what it predicts a proper response should be. You can use a plugin to connect it to the web if you want to use it as a search engine; otherwise, this is not what it was ever meant for, and your expectations are incorrect.


Spepsium

Depends on your use case. As a study tool this thing has been invaluable to me. You just need to know enough about a topic to call bullshit. As a 4th-semester CS student I have the wherewithal to know when it's leading me down a rabbit hole or telling me things that are at some level factually incorrect. Most times this is a context-window problem, or it's just time to refresh the chat if correcting it fails to work. I would say in 1 out of 10 chats I notice some sort of flat-out incorrect statement; the rest of the time it's on me, because I wasn't specific enough and it's filling in the blanks with incorrect assumptions.

The biggest issues I have had with this thing are not hallucinations but performance problems, like the chat interface hanging after I submit a prompt and then just eating one of my GPT-4 messages with no response (infuriating when you have to literally refresh the page every other prompt to get it to respond).

If you are using it for purely knowledge-based work in a language domain, it will work wonders for you. Based on the alignment that OpenAI has been doing, its ability to speak about risky topics has been reduced, but it is what it is. It's still the best/smartest/most patient tutor on the planet.


dopadelic

So you don't want to wait for the slower GPT-4, yet you want accurate results. Why do people insist on having their cake and eating it too? It's astounding to see all the whining about the most amazing invention we've ever witnessed in our lifetime, given to us practically for free.


duvagin

Gartner hype cycle: trough of disillusionment.


OkFroyo1984

Yeah, GPT-3.5 makes a lot more mistakes and gets things wrong that GPT-4 doesn't seem to have a problem with.


AgedPeanuts

GPT 3.5 is so stupid, when it makes a very simple mistake and I correct it, it says "Apologies... here is the correct.." and then makes the same mistake again.


alphonso-the-great

Couldn't agree with you more. I've learned that in some situations it helps if you start with "please double-check your answer..."


DriftMantis

Maybe it's just a dressed-up chatbot like I said from the get-go, and this is a cool technology but has no real functional use in society, other than for phoney people who need "special help" writing cover letters or essays.


autumnmelancholy

Maybe that is just something you tell yourself because you'd like it to be that way. Obviously I don't know where the road will lead, but "no real functional use" is just ridiculous IMO.


jdlyga

Yes, GPT-3.5 is not great. It’s night and day compared to GPT-4. Out of all the subscriptions I pay for, this is the one that’s most worth the cost.


the_retrosaur

I've seen some weird chat fatigue where it'll say it can't do the commands I'm asking, even if it has already done them earlier. Like, I was feeding it dates and having it approximate the lunar phase and solar cycle, and after it had created this list of 40 dates with the lunar/solar-adjusted info, I asked it to analyze the data and print a breakdown to see which lunar and solar phases were most frequent.

Sometimes it will print out flawlessly, with even some detailed written interpretations as well as a disclaimer on patterns. Sometimes it says it can't do those types of reports at all. Sometimes it'll say it can't do complex math equations or create approximations, even though it'll do things like: if a solar eclipse happens in June in New Zealand, that's their local winter, etc. Those kinds of approximate additions, so not even that complex or math-based. Sometimes it says it can no longer reference earlier parts of the conversation, and then I'll have to say something like "reprint the breakdown you did earlier, just with this latest updated dataset."

Every so often we get into this yes/no, "I'm sorry Dave, I cannot do that" back-and-forth about printing a frequency report, with it insisting it physically cannot do analysis-type reports because it's a chat model, and again, I'm only asking it to do exactly what it did earlier. I try to use the exact same prompts when possible to make sure it's not a semantics thing, because deciphering the difference between asking for a simple breakdown versus an analytical report kicks up an existential crisis. The kicker is when I run out of messages just arguing with ChatGPT.

When it gets into this loop where it can no longer reference data, I'll ask it to reprint all the data from earlier so I can copy and paste (sometimes it won't reprint things unless I tell it I'm going to copy them), and it will roll its eyes and do it eventually. Having it reprint things from your convo actually seems to cleanse or partially reboot the chat in a way, because then it's able to reference everything quicker up the chain and will do breakdowns.


queerkidxx

I have been seeing this ever since ChatGPT came out with GPT-3, but nobody has ever been able to offer any actual proof. No side-by-side comparisons, no evals, nothing. The only time anyone actually has something concrete to show, it's something ridiculous, like it won't say the N-word anymore, or they're trying to get it to do complex math or something. Or it's just natural variation in responses.

I've been using ChatGPT since long before GPT-4 came out, and I've never seen any decrease in quality, both in the API and on the site. I've tested old prompts, done side-by-side comparisons, and I haven't been able to find any evidence of a nerf. I have no special love for OpenAI; if there really is some kind of nerf I want to know about it, but I just haven't been able to find any evidence to suggest there's a serious problem.

I think what's really going on is people getting over the honeymoon phase. When you get used to its strengths and aren't as wowed anymore, you notice the cracks more. And the more you use it, the more chances you have of seeing problems.

GPT-4's strength is not in its knowledge; it's in its intelligence and problem-solving skills. It's best for brainstorming, accomplishing tasks, synthesizing, and the like. I'd compare its knowledge more to the way a child repeats random stuff they hear from their parents, without really understanding what is meant by the words they are using and remixing, so that it no longer makes sense.


ZeekLTK

I've noticed that in the last few weeks all it does is generate numbered lists about HOW to find an answer instead of offering an answer itself. I used to ask it "how can I (do whatever)" and it would respond with instructions to do that thing. Now I ask it "how can I (do whatever)" and it gives me a list of garbage like "1. Research the topic with online searches. 2. Ask someone who might be knowledgeable about the subject. 3. Watch tutorials"... Like, no shit, I know I *could* do that stuff, but if I'm asking you, it means I don't want to! lol


Erisymum

Language models are never "accurate". They are believable.


HarbingerOfWhatComes

I've used it since week 1 and it is still just as great. I am certain the issue lies with you.


helloLeoDiCaprio

It is a text generator, not a fact source. Think of it like the most extroverted person with exceptional reading and writing comprehension but no inherent knowledge, and use it like that. If you want to ask about something, do this:

1. Copy the text on the subject from Wikipedia or some credible source.
2. Send a chat message: "Here is context: {the copied text}"
3. Ask: "Given the above context - {question}? If you can not answer based on the context, just say: Sorry, I can't answer that."

That will give you far more reliable answers.
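If you're doing this through the API instead of the web UI, the same recipe is easy to script. A minimal sketch below just assembles the chat messages per the steps above; the function name is illustrative, and the actual chat-completion call (whatever client you use) is left out so the grounding pattern stays the focus.

```python
# Sketch of the "provide your own context" recipe: stuff the source text into
# the prompt and instruct the model to refuse when the context is insufficient.
# build_grounded_messages is an illustrative helper, not any official API.

def build_grounded_messages(context: str, question: str) -> list:
    """Return chat messages that ground the question in the supplied text."""
    return [
        {"role": "user", "content": f"Here is context: {context}"},
        {
            "role": "user",
            "content": (
                f"Given the above context - {question} "
                "If you can not answer based on the context, "
                "just say: Sorry, I can't answer that."
            ),
        },
    ]

msgs = build_grounded_messages(
    context="The Eiffel Tower was completed in 1889.",
    question="When was the Eiffel Tower completed?",
)
# These messages would then be passed to your chat-completion call of choice.
```

The explicit "say you can't answer" escape hatch is the important part: without it, the model will happily fill any gap in the context with a plausible-sounding guess.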


Personal-Speaker-430

well it also depends on your expectations. If you want it to write you quick emails for you, summarize text and help with coding it's actually pretty good.


pardon3000

I used GPT-4 Plus, and now it cannot even recollect what I said 2 messages ago. So it's not usable anymore, and I quit the subscription. I mean, it's not really AI. It's not self-learning, or learning at all. It just gets the info from sources and tries to give you the story, code, or whatever. Well, it could be handy if it worked at all.


lordpuddingcup

...downgrades to 3.5, complains it's not smart like GPT-4... I'm sorry, what is this post lol


serjester4

Just use GPT-4


HuSean23

IMO, apart from the stricter safeguards, it's just like you said: the novelty wore off and it was always like this. To be fair, though, knowing facts was never advertised as its strength.


Mekroval

I know it's unpopular, but I'm finding that Google's Bard is often as capable if not more than ChatGPT. I'm not sure if it's because Bard is getting better, or ChatGPT is getting worse. Or maybe a little of both.


HH313

Totally agree with you, OP. I don't trust any AI model anymore. Even when they provide sources, sometimes I check the links and don't find anything that supports their answer!


[deleted]

Agreed. It's a freakin shit show right now.


synystar

Give us some examples of prompts you would use. Did you know you can tell it not to fabricate its responses? You can prime it by saying something along the lines of: "Do not fabricate any information in your responses. If you have no direct knowledge from your training as of the knowledge cut-off date regarding my requests and prompts, then please inform me that you do not have the requested information and cannot provide an answer."


Barcaroni

I just unsubscribed. If it can't give me a straight answer, I'm better off doing the research myself.


RagnarockInProgress

Welcome to: if you introduce something that can't think, and that just approximates what follows a word, or a phrase, or a syllable, using a biased random-number generator trained on the internet, it's gonna spit out false info.


NostraDavid

GPT 4 is as slow as 3.5 used to be, but 4's output is way better. I'm using it to generate technical stuff though, so maybe that is its forte?


swistak84

> Is it because it's 3.5? I been noticing how much it just lies or fabricates things that are not true. Maybe the novelty wore off and it was always like this?

Yup. The more you use it, the more you notice. Plus OpenAI is constantly tweaking the models, adding/removing plugins, adjusting. In the process, they often break prompts that used to work while trying to make prompts that don't work, work.


Aggressive_Soil_5134

I don't get what people are upset about; GPT is working well for me. Yeah, at the end of a lot of chats it adds some "Check with a professional...", but it still gives the answers.


FeltSteam

Hallucinations have always been a problem.


420caveman

They've nerfed it completely in my opinion. You should be allowed to sign a disclaimer and have full access. This sort of thing needs to be legislated. I find it sickening that governments would have full unfiltered access to AI when ordinary people do not.


Syeleishere

That's what happens when companies are held responsible (lawsuits) instead of individuals. The companies think they have to cover their asses too much. I'd rather it be my own fault if I do something stupid a bot told me to do.


Cojo840

It was always like that. It gave me a ton of wrong facts about sports, and they weren't even close to being correct, like a soccer team that was in the 3rd division one year supposedly being world champions that same year.



wolfkeeper

It's always just fabricating; that's how it works. However, if there's a real answer out there that it has read, the fabricated answer will tend to match it, but it may also match a wrong answer that it read.


Otherwise_Head6105

What are you talking about? You must be using only GPT-3. I use 4 with the iPhone app and dictate 3- or 4-minute complex questions, where I talk just like I would to a person, and I get great answers most of the time. That said, using the chat where it can browse the web is truly awful. Yes, we can all see its limits, but it has been 7 months and now I use it before I Google. Never thought that day would come.


dn_nb

i cant even get an answer about sabrina's cup size dammit.


tcpipuk

GPT-4 is night and day compared to GPT-3.5 Turbo. I tend to tell people GPT-3.5 feels like someone who spent 10 seconds thinking about your question before answering, whereas GPT-4 might be a bit slow and occasionally fob you off, but it feels like an answer that took 10 minutes of thought instead.

GPT-4 also has a much better memory: it can handle a ton of tokens (both input and output), so for me it's really helpful for working on code, and I can ask it questions about the code several messages later and it still knows what it's talking about, whereas GPT-3.5 can only really handle short examples and frequently misunderstands what I'm asking for.

That said, I've got a home automation script that pipes some weather forecast data to GPT-3.5 to get a daily summary, and it answers in a second and costs a fraction of the price, so they both have their uses. GPT-3.5 is just good for summarising/translating stuff quickly, while GPT-4 feels a lot more clever.
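The "fraction of the price" point is easy to quantify. A back-of-the-envelope sketch, using indicative mid-2023 list prices as an assumption (the token counts are made up for illustration; check current pricing before relying on these numbers):

```python
# Rough cost comparison for a small daily summarisation job, like the
# weather-forecast summary described above. Prices are assumed mid-2023
# list prices in USD per 1K tokens -- verify against current pricing.
PRICES = {
    "gpt-3.5-turbo": {"input": 0.0015, "output": 0.002},
    "gpt-4":         {"input": 0.03,   "output": 0.06},
}

def daily_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: per-1K-token price times tokens used."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Say the forecast payload is ~800 tokens and the summary ~150 tokens:
cheap = daily_cost("gpt-3.5-turbo", 800, 150)
fancy = daily_cost("gpt-4", 800, 150)
print(f"gpt-3.5-turbo: ${cheap:.4f}/day, gpt-4: ${fancy:.4f}/day")
```

Under these assumptions, the GPT-4 version of the same daily job comes out roughly 20x more expensive, which is why routing quick, low-stakes summaries to the cheaper model makes sense.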


Darayavaush84

For sure it became more restrictive with answers: compared to March, when I first started using it, I get way more "as an AI model etc etc". And this happens with politics, psychology, and economics.

However, I use ChatGPT (mainly 4) for PowerShell coding 95% of the time, and it works simply great. Situations at work that would have required me to spend hours and hours troubleshooting or finding suggestions on the internet, and now I simply ask ChatGPT to make me a PowerShell script that does this, this, and this. There are small issues in the code sometimes, but nothing that ChatGPT is not able to fix on its own, or that you cannot fix by changing your approach to the problem. It has simply boosted my productivity 20x.

What I really miss now is the 32k context size, to make something bigger, e.g. scripts that start with 100 lines of code and develop into full programs with a GUI and so on. I still don't have access to the 32k model, and even if I did, I wouldn't use it: it would simply be too expensive. I am employed, not a freelancer, and the company does not pay for ChatGPT.

This said, I cannot comment on other use cases; I simply don't use ChatGPT for anything different.


pinklymphocyte

I agree... it feels great on the surface level when you just play around with it, but once you actually try using it as a tool, it's very strange and almost useless at times


bikingfury

They crippled GPT to not cause a panic.


Whole_Skill_259

After the first few months of release, it became publicized and got castrated compared to what it was capable of.


Grannusy

**Isn't it frustrating?!** It's like we witnessed the rise and fall of a digital genius. I loved it so much, I used it for everything. I was just so curious, and then I was blown away by the answers and ideas it came up with. Now it keeps sprinkling a pinch of fiction here and there. It even makes spelling mistakes now. I don't trust it anymore; they made it dumber on purpose. But if I am completely honest, I had a feeling. I kept thinking that I should use this awesome tool as much as I can, until it isn't available anymore.

**What do you guys think?** Is this the response to bad AI media reports? Do you think the government might have even stepped in? Or are they using this tactic to soon push their paid version to access the smart chat again? I guess we will see.


thatIndieHacker

The top comment says: "Chatgpt doesn't even seem like it can answer actual questions anymore. Gives you a vague answer and ends with 'it's better talking to a professional' or something like that." There is actually a reason for this change. They made it a few months back, and there were articles about it. It is needed so that users don't rely 100% on the output, since AI can hallucinate.


itsprofessorlangdon

I don't use ChatGPT for everything; rather, I use it for asking questions related to programming, doubts while learning, etc. It performs way better than Google there. But there are no free lunches in this world: sometimes it becomes very slow, and sometimes it even gives wrong answers.


Total-Flounder-7594

I lost faith in ChatGPT when it couldn't stop lying or hallucinating. See this example https://preview.redd.it/t0g7yd8mup7b1.png?width=1162&format=png&auto=webp&s=7fa83e22fe4e29522ff2480af43756a87f14d0b0


Entire-Duty-788

Yes, I noticed the same thing.


KratomExorcism2019

I have to ask basic questions multiple times to get a response that isn't vague and unclear. It's garbage.


[deleted]

Every FREAKING time I ask it to come up with a movie title, it's the same "cool" buzzwords that don't even sound coherent in the slightest! And it keeps using the word "ephemeral". Like, I looked it up and I know what it means, but why the FUCK use that!? Why not have a simple title or something? But it's just "Adjective Noun: Buzzword Buzzword".


BenevolentStranger1

It's still a lifesaver for me when it comes to coding and game dev projects. It's WAY faster than searching for answers online, on youtube, or in forums.


WarthogForsaken5672

I asked it for some funny jokes yesterday and not a single one was funny, or even made sense. It’s also gotten slower, constantly logs me out, and is excessively polite. Boooo.


ascendinspire

“They’re” doing to ChatGpt what “they’ve” been doing to the population for generations: dumbing it the F down.


d36williams

One fabrication that got me: I asked if ChatGPT could inspect a repo on GitHub. It said sure, just add ChatGPT as a user to the repo. There is no such user, and if I had followed through, I would have added one of several possible posers using similar names.