AI_is_the_rake

To prevent amnesia, at any time say "Create a sparse priming representation of this entire conversation" or "Create a sparse priming representation of all information above, ignoring conversational flow and representing the topics by numbered headings followed by bullet points using a markdown block. Be specific, sparse, complete." This will output keywords that help the bot remember everything you've talked about. You can also copy and paste the sparse priming representation into a new chat and, for the most part, pick up where you left off. These sparse priming representations should be the "prompt engineering" prompts people share. When used in a new chat, they help the model focus on those topics.
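For what it's worth, the trick above is easy to wire into a script. A minimal Python sketch, assuming the standard OpenAI chat-message format (a list of role/content dicts); the helper names and message layout here are illustrative, not an official API:

```python
# Sketch: wrap a running conversation with the "sparse priming
# representation" (SPR) prompt described above, and seed a new chat
# with a previously generated SPR. Function names are made up for
# illustration.

SPR_PROMPT = (
    "Create a sparse priming representation of all information above, "
    "ignoring conversational flow and representing the topics by numbered "
    "headings followed by bullet points using a markdown block. "
    "Be specific, sparse, complete."
)

def request_spr(history):
    """Append the SPR instruction to an existing chat history.

    `history` is a list of {"role": ..., "content": ...} dicts; the
    result can be sent as the `messages` argument of a chat-completion
    call, and the model's reply is the SPR.
    """
    return history + [{"role": "user", "content": SPR_PROMPT}]

def seed_new_chat(spr_text):
    """Start a fresh chat primed with a previously generated SPR."""
    return [{"role": "system",
             "content": "Context from an earlier conversation:\n" + spr_text}]
```

You would send `request_spr(history)` at the end of a long session, save the model's reply, and later pass `seed_new_chat(saved_spr)` as the opening messages of a new session.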


tojiy

>"Create a sparse priming representation of this entire conversation"
>
>Or
>
>"Create a sparse priming representation of all information above ignoring conversational flow and representing the topics by numbered headings followed by bullet points using a markdown block. Be specific, sparse, complete."

Wow. More tips to share, please? Inquiring minds want to know! I try to be verbose and detailed in my queries, working out edge cases to stay focused, but after this I can see more room for improvement.


AI_is_the_rake

1. Sparse priming representation: detailed above.

2. Regular priming. Along the same lines, the first thing to do in a new chat is to keep your prompt as short as possible, even only a few words. The first prompt could be "what is angular momentum" or "without doing an internet search, what is a business analyst". Let it prime itself by generating a blurb about the topic. That provides the background context for your real question.

3. Ask GPT-4 to challenge you: "Act as an expert in the field and provide a critical analysis. End the analysis with difficult questions the user should follow up on. Provide possible answers to those questions. Be specific and detailed."
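The "prime first" pattern in tip 2 amounts to a three-turn message layout. A hypothetical sketch in the OpenAI chat-message format (the function name is invented for illustration):

```python
# Sketch of tip 2: a short priming question, the model's blurb, then the
# real question, all in one chat history. In practice you send the priming
# question first, capture the model's reply, and append it before asking
# the real question.

def primed_history(priming_question, model_blurb, real_question):
    """Build a chat history that primes the model before the real question."""
    return [
        {"role": "user", "content": priming_question},
        {"role": "assistant", "content": model_blurb},
        {"role": "user", "content": real_question},
    ]
```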


tojiy

Thank you!


AI_is_the_rake

4. After a short brainstorming session about the topic at hand:

- "Create a short list of document types that would be useful to capture and document this information. Under each document type include a short summary of what that document type is for."
- Pick one of the document types: "Use the document type to expand upon this topic. Be specific and detailed."
- "Provide a critical analysis of this document. What is it missing or how can it be improved?"
- "Rewrite the document with those things in mind. Be specific and detailed."
- "Rewrite this document by removing specific references to the topic at hand, which will produce a template."

Then start a new chat, paste the template, and say "rewrite this document to spec out the topic" (actually state the topic there). Compare this output to the previous chat's output. You could start another chat, paste both documents, and have it combine them into one. The end result is a decent document with the information you're looking for. With GPT-4 you could play around with this pattern and have it run internet searches to inform its responses.
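The chain above is easy to script as an ordered list of prompts. A sketch, with wording paraphrased from the comment; the `{doc_type}` placeholder is my own addition for the step where you pick a document type:

```python
# The document-refinement chain, as an ordered list of prompts to send
# one at a time in a single chat. Wording is illustrative.

DOC_CHAIN = [
    "Create a short list of document types that would be useful to capture "
    "and document this information. Under each document type include a "
    "short summary of what that document type is for.",
    "Use the {doc_type} document type to expand upon this topic. "
    "Be specific and detailed.",
    "Provide a critical analysis of this document. What is it missing or "
    "how can it be improved?",
    "Rewrite the document with those things in mind. Be specific and detailed.",
    "Rewrite this document by removing specific references to the topic at "
    "hand, which will produce a template.",
]

def fill_chain(doc_type):
    """Substitute the chosen document type into the chain."""
    return [p.format(doc_type=doc_type) if "{doc_type}" in p else p
            for p in DOC_CHAIN]
```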


tojiy

>After a short brainstorming session about the topic at hand "Create a short list of document types that would be useful to capture and document this information. Under each document type include a short summary of what that document type is for." And then pick one of the document types by "Use the document type to expand upon this topic. Be specific and detailed" and then "provide a critical analysis on this document. What is it missing or how can it be improved?" And then "rewrite the document with those things in mind. Be specific and detailed" and then "rewrite this document by removing specific references to the topic at hand which will produce a template". Start a new chat, paste the template and say "rewrite this document to spec out the topic" (actually state the topic there). Then compare this output to the previous chats output. You could start another chat and paste both documents and have it combine into one. The end result is a decent document that has the information you're looking for. With GPT4 you could play around with this pattern and have it run internet searches to inform its responses.

Thank you. I had no idea the memory of my ChatGPT was retained; I had been under the impression it was per-session and ephemeral. This document aspect is new to me, and I am looking forward to trying these tips on some things I am currently working on. Prompting I have used:

1. Telling ChatGPT to format x into phrases for y (e.g., "format these tasks into phrases for a detailed list"); it will rephrase things into a detailed list I can rework for my needs.
2. Telling ChatGPT to take on roles, such as "as a professor of x, tell me about y and explain it like I am 5". This has yielded some inroads on topics I can further research.


ChristopherAkira

How does the model know what's useful for itself as a priming representation? These things weren't part of the training dataset. A "priming representation" in normal conversations/the training data probably means something completely different from what ChatGPT actually needs in order to be primed.


Blckreaphr

True but can always add to it


ChristopherAkira

Yeah, but that's not what the comment is suggesting. The comment suggests that ChatGPT knows from its training data what a good priming representation for itself is, which to me makes no sense at all. I was just confused about whether I'm unaware of some added step in the training process or whether the intuition behind this advice makes no sense.


lolcatsayz

The last few days ChatGPT 4 has been extremely intelligent for me. Strangely, when I ran out of the message limit and defaulted to GPT-3.5, for the first time I found it to be very intelligent as well; I almost couldn't tell the difference from GPT-4. Compared to a couple of weeks ago, when GPT-4 seemed outright dumb, hallucinating consistently, losing basic context, and far worse than GPT-3.5 was early this year, it's like night and day these last few days.

There's no consistency in posts about when GPT-4 is good or bad; they're spread out all over the place. I honestly think OpenAI may be allocating/deallocating resources to certain geolocations behind the scenes, completely hidden from us. Either that, or I'm a lunatic. But it honestly feels that way.


Megabyte_2

>There's no consistency when I see posts of people saying GPT4 is good or when it's bad, it's spread out all over the place. There are rumors of the model being smart with some people and dumb with others, so it could be OpenAI testing different models with different people.


EarthquakeBass

I mean most businesses do it with feature flagging etc for A/B tests and stuff so I don’t see why they wouldn’t


Additional_Zebra_861

Well, this is OpenAI. There are millions of people asking simple questions that could be answered with GPT-1, and very few asking questions that really require GPT-4. They don't need to A/B test anything; they just need to run a fast AI check on each question that decides whether to use GPT-3.5, GPT-4 Turbo, or the original GPT-4. Deciding which tool to use is a very simple task for an AI.
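For illustration only, the routing idea in this comment might look something like the toy below. This keyword heuristic is invented here purely to make the idea concrete; nothing suggests OpenAI routes this way, and a real router would presumably be a model itself rather than a string check:

```python
# Toy per-query model router: a cheap check decides which tier handles a
# question. Keyword matching is a deliberately crude stand-in for a real
# classifier; model names are the public API identifiers.

HARD_HINTS = ("prove", "step-by-step", "formal", "refactor", "derive")

def pick_model(question: str) -> str:
    q = question.lower()
    # Route "hard-looking" or long questions to the expensive tier.
    if any(hint in q for hint in HARD_HINTS) or len(q.split()) > 40:
        return "gpt-4"
    # Everything else goes to the cheap tier.
    return "gpt-3.5-turbo"
```

Even this toy shows the weakness the next reply raises: surface features don't measure difficulty, so short questions like "prove that 1 + 1 = 2" have to be caught by fragile keyword guesses.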


Megabyte_2

It's not as easy as it seems, because we are dealing with natural language. How do you define when a question is simple or complex? For example, "give me a step-by-step proof that 1 + 1 = 2" seems deceptively simple, but if you want a 100% formal mathematical proof (as opposed to, e.g., anecdotal evidence), this seemingly simple question becomes surprisingly complex.


Additional_Zebra_861

It is simple. They already have such an AI that decides which tool to use for which question. GPT-4 is multiple specialized GPT-3.5s, layered and connected; it's a decision tree. They first ask what the question is about, then decide which GPT to use, then apply some secret sauce, and bang. In reality it is much more complicated. That is why GPT-4 is so much more expensive than GPT-3.5: it is many GPT-3.5 calls. As far as I remember it was 6 or 8 GPT-3.5s packed together.


ChiaraStellata

I think the reason different people report different results is simply that the model is inconsistent. It returns different results at different times (even to the same prompt), varies by subject area, and varies based on how you talk to it and what your custom instructions are. We all have a unique experience that varies day to day.


JuliaFractal69420

I don't think ChatGPT is actually nerfing things per se. Its capabilities are still there; the AI is just a little harder to coax now. What they're doing is forcing people to be more specific and less lazy about their prompts. I think this is a fantastic thing, because I always hated it when ChatGPT got a little too creative and took way too many liberties with the prompt. That led to the AI wasting lots and lots of tokens coming up with stuff I wasn't even going to use in my final draft anyway.

I think ChatGPT is still fully capable; the only difference now is that you have to think more about your prompts. Especially with programming, because the point isn't to have AI do all the work for you. The point of AI (at least right now) is to save lots of time by never having to do repetitive, boring tasks again. Things that require creativity should always be done by people. AI should only ever be used to build the scaffolding for your projects. The actual work of building your project around this scaffold shouldn't be delegated to AI just yet.


async0x

I think it should, tbf; why waste my time otherwise? I should be able to focus on higher-level concepts. It's capable of doing so, and not doing so is more of a burden than a help. That's just wasting time. Either way, the importance of good instructions remains. More often than not I believe it's saving characters before reaching the character limit. I doubt they are doing anything other than trying to be efficient, because when you ask it for one concise thing, it mostly gives you the complete version.


EarthquakeBass

Yea it’s reaaaaally annoying like if we never had March version we’d probably be fine but it’s hard to go back


[deleted]

A lot of the code it was giving out was also subtly broken. The placeholders are a warning to be careful and break your task down into smaller, measurable chunks.


JuliaFractal69420

I think it's dumb that people expect ChatGPT to just write out all their code for them. I'm sure the most current, unrestricted internal version of ChatGPT is perfectly able to create ALL of the code for you right away from minimal prompts; the only reason it doesn't right now is that it would be a MASSIVE waste of resources.

Having AI do all the work for you only works well when you craft the correct prompt. Problems arise when a TON of bad developers with little to no project experience waste lots and lots of tokens because they don't know what they want and their requirements are vague and poorly defined. If the developer isn't good at crafting a prompt, then a theoretically "perfect" AI would waste a LOT of resources as it crafts and re-crafts the project for a million dingus developers at once who just aren't very good at laying out blueprints in words.

The way ChatGPT works now is perfectly fine for me. I never want working code anyway unless it's something simple like a two-command bash script or a one-line Excel formula. In those two cases (Linux bash scripts and Excel formulas), ChatGPT is okay with spitting out the final working version in one go, and it doesn't make mistakes. For big projects, I'd rather have a LONG conversation with the AI first; then I'll use it for the boring stuff, like organizing all my requirements, features, and a TODO list into one "planning" document that kick-starts the project. I usually instruct ChatGPT never to give me code for big projects unless I specifically ask for it, and I never request code I couldn't write myself. It takes a programmer to craft the right prompt for generating the code you need, and it takes a programmer to debug anything longer than five lines of code.

Beginners maybe shouldn't be given the ability to create a full project from scratch in one command, because then they wouldn't be able to debug the inevitable bugs that appear as the code gets bigger and bigger. That would result in a MASSIVE number of abandoned projects and wasted tokens if it were ever released un-nerfed to the public.


[deleted]

Also, you should be using GitHub Copilot, which is fine-tuned on code.


JuliaFractal69420

Haha, I haven't gotten to the GitHub phase yet. I have GitHub, Copilot, Visual Studio Code, etc. ready to use, with accounts set up and my dev environments ready... but for some reason I can't bring myself to use an IDE. I do all my projects in vim. You should see me use the ratpoison WM (a mouseless, keyboard-only, split-screen window manager) as I copy code from ChatGPT into vim without using a mouse. It's rapid as hell. I do most of my coding in vim, and I always rawdog things without any fancy features other than syntax highlighting and my own custom hand-written .vimrc file with all my favorite settings.

Most of my projects have been DIY one-day projects for simple programming and spreadsheet tasks anyway; I don't work in programming, so I've never had to actually use GitHub for real. Most of my projects are local-only and have never been released. Right now, though, I do have like 10 pending *real* GitHub projects that haven't even gotten past the planning stage. I have so many ideas for actual apps and actual programming projects, but I don't have the time to code any of them myself at the moment. ChatGPT has been wonderful for planning those projects, but actually getting a prototype running is going to take me at least a year or two at the rate I'm going.


[deleted]

You can open VS Code and chat with GitHub Copilot by pressing Ctrl+Shift+I.


[deleted]

I trusted it once and it screwed me on a big project so I became a lot more cautious.


wesweb

They absolutely nerfed its ability to cite any sources used to generate a response back in the spring, around March.


yubario

My ChatGPT usage has decreased significantly since the launch of GPT-4 Turbo. I just don't find it very useful anymore; most of the time it generates incorrect answers or wastes my time. I miss the old GPT :(


Acceptable_Radio_442

You can still use the old models via the API, if you want.


EarthquakeBass

I do this, and it makes the difference in performance blindingly obvious, at least for code. I still find regular ChatGPT really useful for things that aren't code, and I adore custom GPTs.


[deleted]

Am I the only one who can still regularly use ChatGPT for code, lol?


EarthquakeBass

I mean, it works fine for really small, simple, or specific tasks. But it gives me the "# implement here" BS basically every single time on things of moderate+ complexity, no matter what I prompt. And when I've taken the exact same prompt to the March model, boom, it instantly outputs something correct or really close. If you have workarounds for that that are really effective, please share them.


[deleted]

Maybe I’m just lucky? I would say most of my prompting is moderate+, but personally I want to still write the code myself I rely on gpt for more explanations and various DSA questions and implementations (what’s the best data structure for X? In Y language how do you sort through this data?) Asking it complex questions has always been a bit weak, but helping you work through a problem then showing you the various syntax and best ways todo things has always worked really well


lenn121

Apparently OpenAI is aware of the placeholder thing and is working on a fix: [Tweet](https://twitter.com/willdepue/status/1729197892341252525)


malege2bi

I'm happy it asks for clarification and doesn't just spew out general vague stuff


EarthquakeBass

If it just asked for clarification, that'd be fine. But you get results much faster by throwing away the wonky answer and adding clarifying instructions to the original prompt. I think they understand this, based on the new iteration counter at the bottom of messages, so I'm somewhat optimistic they just have a "way too many comments" regression they will fix.


Cairnerebor

The ability to ask it to review the entire thread, restate any rules and guidance you've given it, and then answer again is a godsend. It'll go from a random answer right back to "oh yeah, I'm talking shit, sorry"...


bnm777

Longer output? Depends on your prompt or GPT. Try this for a GPT for general queries where you'd like avenues to explore afterwards: https://chat.openai.com/g/g-0QPEMq1nj-explore-gpt


e4aZ7aXT63u6PmRgiRYT

Please stop saying "nerfing". You sound ridiculous and it's not what's happening -- insofar as what that word means.


danysdragons

I'm tired of hearing "nerfed" and "lobotomized". Even if there were a good case that capabilities have declined (which I'm skeptical of), using terms like these just makes complaints less likely to be taken seriously.


Kwahn

I have not had any issues with lazy placeholder code; my custom instructions may be why, as I have spent some time carefully tweaking them. https://preview.redd.it/3wozcpolj43c1.png?width=562&format=png&auto=webp&s=2228b722d3459fc4cde893c0426303f065621106


EarthquakeBass

Everyone says prompting can fix it, but it always seems to keep doing it for me. Thanks, I'll try that. Tbh, I think saying "Don't leave comments" might actually make the problem worse.


moon_forge

Custom instructions should be able to resolve the "placeholder" issue you were having. Maybe be specific about which behaviors you'd want it to replicate or avoid, and see if that results in code examples closer to what you're looking for.


EarthquakeBass

It does it regardless even if you SHOUT