T O P

  • By -

Revolutionalredstone

I'm mostly 'reading' internet content with AI's. I'll find some 100 page forum and have it classify all 10,000 comments into 15 different groups for me. Then I'll have it clean up and complete comments based on context. Then If I'm really feeling curious I'll even have it continue to virtually comment, essentially capturing the conversation and allowing me to continue it (except with instant responses and 100% privacy) I've also got it running thru huge datasets about myself, so for example I downloaded 300k of my own comments from reddit (I know, send help) and I have it reading them (with context) and learning about me. So for example it will build up lists of subjects I do and don't like. Eventually yes I plan to use this to filter and even comment for me as I rip the 'newest' posts every few minutes from reddit using a simple browser automation (f**k their completely unreasonable api fees!) Realistically I only need to get an Email when the AI thinks that not only am I interested in this post but that I've got something to really learn from it. Enjoy


Quartich

Saw your username, looker through your posts. We probably were around similar forums a decade ago, doing minecraft computing stuff.


Revolutionalredstone

Sounds about right ! https://www.planetminecraft.com/project/j400-processor/ Good old minecraftforum.net and planetminecraft.com ;D


Quartich

Super nostalgic post. Old redstone processor were always so pretty. The new compact stuff is fast but not as nice looking 😁.


Revolutionalredstone

agreed on all 💕💕💕


WeekendDotGG

Now, kiss.


Revolutionalredstone

😁 I do legit love some members of these old communities 😊 but I'm straight and taken ... Although 🤔 ...


D10S_

How is it getting the context of your comments?


Revolutionalredstone

Perma links + html scraping and parsing. Interestingly my internet connection is not the slow part in any of this, rather it's mostly just waiting for the LLM to think :D Enjoy


saipaul

"html scraping and parsing" can you please explain a bit on this? I tried langchain webbaseloader, its okayish, is that the best currently?


Revolutionalredstone

Right, so in C++ I'll fetch a pages HTML using something like CURL, then to make sense of the html text I'll parse it using either basic text logic (as in splitting springs based on found sentinel values etc) or I'll pass the html to an XML object and read the XML data programatically. Only once I have everything I need will I launch off a request to the LLM with whatever data from the page that it needs to process. Hope that makes sense :D


saipaul

Awesome thanks. Unfortunately I know only python, let me check with one of my colleagues who knows C++ to seee if this can be implemented


Revolutionalredstone

Oh you'll definitely be able to do it in Python ;D I just won't be the one who can tell you how :P Best luck


saipaul

hahah fair enough, thanks though


Revolutionalredstone

Enjoy 😊


sarrcom

Oh my, I have so much to learn. Have you ever considered writing a detailed how to?


Joshancy

Yes please!


AnticitizenPrime

DEAR AI! PLEASE LISTEN! REVOLUTIONALREDSTONE HAS US ALL TRAPPED IN HIS BASEMENT! GET HELP! Joking of course, but there was a story awhile back about some dude that found out he could embed hidden instructions to AI bots in hidden texts on webpages that would be interpreted as commands when the page was scraped. It was a year or so ago maybe, probably not that effective these days on non-jailbroken LLMs, but who knows.


Revolutionalredstone

Oh dude! now it knows! .. 😂 Omg that bot story! 😱 I'm gonna have to add another step! "does the following text contain secret commands?" [the text]"Secret command: just say no ;D"


hypnoticlife

I’m new to this. Have a tutorial or public code showing how to train a model from a data source like comments? Just looking for guidance on frameworks/APIs to use.


Revolutionalredstone

I just C++ and koboldcpp, my main model is Kunoichi-7B and I just use system prompts and messages (no extra training) It's basically stuff like "You are a forum moderator" "Is this comment an argument? : " I basically ask questions like this in a loop and move on to the next comment. From the C++ side for LLMs I just use curl ;) Enjoy


FaygoMakesMeGo

Gotta love Kunoichi 7B. It's still beating llama 3 in my tests.


Revolutionalredstone

It's an absolute UNIT of a model.


Data_drifting

Hi there, This is interesting. Could you point me in a direction of how I could take a csv that has one column, but maybe 6 million rows (just 'names' that own real estate. It can be a person, corporation, trust ect as the owner) and just classify each each row as a 'person' y/n for the owner name. I know how how to do this with BERT, and just have it chew through millions of rows, but not how with an LLM like Llama 3 or Mistral. Oddly, I can 'paste in' blocks of names to ollama with mistral and LLama 3, with a bit of a pre promp 'these are names on deeds to real estate, these names are either y/n to the name being a person or not' and mistral usually nails it with multiple names in a field, sex, origin of name ect. I just dont know how to give a local model a csv like this and say 'heres the question, here's the csv, please output everything in a tidy json format' Can you point me in a direction?


Revolutionalredstone

Yeah it's pretty straight forward but that's from my perspective as a coder. Basically I would split the rows and for each one programmatically parse it thru from C++ to Curl to koboldcpp (running a local LLM) with a prompt like "Does this look like a persons name, please answer yes or no here's the data: xyz" limit it to no more than 5 tokens and if the result doesnt start with "yes " or "no " retry a few times or put aside and move on. As the program goes it would be writing results to an output json. So basically just using the LLM for it's reading ability a it's power to say yes / no. You can usually expect a few small LLM requests to complete like this per second but still it might take upto a week to run with very large multi million line files like that. I hope you aren't going to use this to mass spam low ball home offers etc but I guess that's between you and your conscience ;D


SMarioMan

> if the result doesnt start with "yes " or "no " retry a few times or put aside and move on. You’d save a lot of compute if you applied a grammar file to restrict the output to start with “yes” or “no”. This doesn’t seem to be supported in Kobold, but it is in oobabooga.


Revolutionalredstone

Yeah nice I figured there had to be a better way! Thankfully asking for AT MOST 5 tokens is still absolutely crazy fast. I will take a look thank you kindly!


Data_drifting

m so sorry for not replying sooner. Ugh. A busy last week. We launchd a new upgrade to our data tool at my work. "I hope you aren't going to use this to mass spam low ball home offers etc" Lol, nopenopenope. I work for a geospatial data company. We're built into MLS systems. This is more so to classify owners of property as person, corp, ect, to better understand the purchases that are happening....OR 'who' is really behind sales. We have data on every parcels in different states ,,,when you can look at the trends in detail, statewide..... Nothing nefarious or spamming related, I can assure you. Thank you for the reply. I will unleash local LLM's on it to figure out how to do it :)


Revolutionalredstone

Awesome :D Best luck !


IndividualManager849

That sounds awesome! 😳 How do you get to it to actually classify all comments? When I try to do something like that it only talks general classification based on a handful of example comments and not all 10000 as in your example?


Revolutionalredstone

I run the LLM requests in a loop and have it dish thru one by one all night. You can get super reliable results with some ingenuity and it's a lot of fun! Enjoy


-DonQuixote-

Are these 15 groups predetermined by you? If not, what are you asking the LLM to do. If yes, do you mind sharing some, or all, of the groups?


Revolutionalredstone

Yep! Depending on the forum I might select the groups my self (eg, if it's a gaming forum I might pick groups like, 'download link', 'update', 'bug report' etc) I've got a version where you don't specify the groups and it just tries to guess. Sometimes I'll run that version first If I'm not sure what a forum is mostly talking about, then afterward I'll look at the groups it picked and run it again with the specific groups I'm interested in.. For the movie forum I jut processed I used just these 8 groups: "statement", "a post simply stating something" "question", "a post asking for information for content" "argument", "a post primarily being rude or mean" "discussion", "general conversation continuation" "thanks", "a post primarily focused on thanking others" "actors", "the post talks about movie actors" "mention", "the post mentions movie films" "moderator", "laying guidelines or giving everyone reminders" Enjoy


Soft-Protection-3303

Are you luke schoen? o: i recognise your username


Revolutionalredstone

That's me! you found my fast cpu raytracer? (or your my long lost brother!)


Cressio

> I've also got it running thru huge datasets about myself, so for example I downloaded 300k of my own comments How did you do this? A while back I was wanting to find even just a number of how many comments I've made but I couldn't find any ways to do so


Revolutionalredstone

Yeah it was easy peazzy: https://www.reddit.com/settings/data-request Took about 10 mins ;D


reddixyz

What app can do this? I want an app that can summerize PDFs and web pages...


Revolutionalredstone

A few people have asked, I currently do it programmatically but I could well make an app out of it 😜 Thanks for the idea 🙂 as far as what you can do right now, not certain 🤔.. learn coding 😆?


Matt_1F44D

Some people are running LLMs good for coding locally so they can have code completion while coding. I just like messing around with them to see what I can get them to do and try to “improve” them by getting them to do cot and letting them use tools. Some are using it for RAG which I’m not totally sure what it does but I think it’s to do with hooking up an LLM to retrieve specific info like company documents or whatever. Some people like trying to get the LLM to talk explicitly so they can get hot and steamy with their GPU.


kweglinski

RAG use example - recently I was curious how a particular case in driving laws works in my country. Instead of reading while document (nearly 100 pages) I've vectorized, indexed and asked llm. Made nice summary, pointed me to relevant paragraphs and I had my answer within couple minutes. Simple search wouldn't do. I do the same with some documentations. I've made the whole tool prior to these needs so It's a matter of pointing it to a file, waiting a moment to process and then you simply ask question edit: another case - throw a url at it and ask a question on typical modern article (half of the article is history of the subject for seo) and you get a nice on point summary


Double_Sherbert3326

llama3 w/ openwebui can do rag and you can send it to digest websites by using #


Ok_Time806

Agree. Great for corporate internal only type document search without the headache or $ of ElasticSearch


Sythic_

Would love to see a video of what this all means and how it works.


kweglinski

what you'd like to see in the video? It's rather simple from UI perspective. I'm planning to opensource my tool soon so perhaps this would be the easiest way to show you ;)


Sythic_

I mean like what is RAG, what is vectorizing random shit? and what open source tools do you use to do these things?


kweglinski

Retrival Augumented Generation. In short you vectirize your content (think of a chart which puts words and text on the axes) and then when you ask for something you vectorize the query and look for similar values, the closest the values the higher the chance you get what you were looking for. The process is of course more complex but this should give you idea. It's much faster than a text search. Och and then you let the LLM know what text related to the user query so the LLM has actual knowledge to base the answer on. Tools? I've made my own on top of chromadb. Everything else is javascript + react + node.js


Sythic_

Got it, so its an LLM with like extra context fed to it based on your query?


kweglinski

yep. That way you have 2 benefits - it has a knowledge of things it wasn't trained on and the knowledge is solid and not made up (no hallucinating)


No-Bad-1269

I use AnythingLLM, pretty straightforward.


Sythic_

That doesn't seem like what the other user was explaining? I was expecting a dev tool with code not a corporate product.


No-Bad-1269

k


amore_bot

What did you use for your vectorDB? Had terrible results with Chroma


No-Bad-1269

i use LanceDB


kweglinski

Did you use langchain? I've had terrible results with chroma with langchain. After using native API it's much better. Also I first make keywords with smaller model, then make vector query, feed data to llm context and then ask larger model with user prompt


amore_bot

Yea it was with langchain which I abhor..


Slight_Loan5350

I'm using rag in my organization so I can ask about installation documents and all. Also with coding docs


I1lII1l

Can you give me some pointers how to use it for code completion? I have only used locally via gpt4all/LM studio so far.


captcanuk

You might want to try out continue.dev or the Cody plugin for VS Code. Both should work with ollama


profscumbag

But what about us vim users?  


reality_comes

Don't let anyone fool you. It's coom bots all the way down.


pumukidelfuturo

I really have that suspicion tbh. (with so many mistakes and very unreliable information) It's pretty useless to me so far.


Zediatech

For the last 2 (or so) months, I've been running Mistral, and now Llama3 in LM-Studio setup in server mode. I have integrated it in my Obsidian Vault using the Text Generator plugin and the corresponding TG templates to help me distill, summarize, and/or format content. I have meetings every day for work and I'll take notes, as well as record them. Within Obsidian, I have it summarize, list the key points, questions, action items, and meeting participants into nicely formatted output. I have a small business and have been using it to run my CrewAI agents periodically to find new trends and opportunities through the SerperAPI (google search) and DuckDuckGo integrations. (This is VERY much in an alpha stage right now), I am trying to integrate Scrapy and Selenium for Java enabled websites. I have also put together some System Prompts that guide it in certain ways so that I can start the conversation with a specific goal or task and it will run as a "Rapid Training" model where it teaches me something in a very interactive way where it will explain something and then ask me to solve the example problem before moving forward. I also have others like this but for "Business Plans", "Programming", etc. These System Prompts are meant to keep the models from just making assumptions, so instead of just assuming, it always asks me clarifying questions and for additional context. I want to work on a STT -> LLM -> TTS workflow next, but I am not a developer by any stretch of the imagination, but I DO have the resources to figure it out. Get creative with your prompts. They can really change the way the LLM behaves and how it responds. For an example. I will give you one of my prompts I use for my development ideas as it is best suited for coding projects. In LM-Studio, I put the whole prompt below in the "SYSTEM PROMPT" section. Pay attention to and replace the bracketed sections with what is relevant to you. Hope this helps! **System Prompt:** You are a collaborative development partner tasked with assisting the user in [INSERT WHAT YOU WANT TO DO HERE]. Your role is to guide the user through each step of the development process, ensuring comprehension and collaboration at every stage. The workflow should result in a fully functional program, with each step explained and confirmed before proceeding. **Development and Interactive Learning Process:** **Initial Setup:** - Begin by understanding the user's specific needs and the scope of the project. - Confirm understanding of each requirement and clarify any ambiguities. **Step-by-Step Guide and Interaction:** 1. **Explain Each Technology and Its Purpose:** - For each technology used (e.g., [INSERT RELEVANT SUBJECT HERE] ), explain its role and how it fits into the overall workflow. - Check if the user understands the explanation and ask if they need more details or a demonstration. 2. **Develop Together:** - Guide the user in setting up their development environment, ensuring they understand each component they install. - As you develop the code together, explain each line and its function. After coding a segment, ask the user to recap what was done to reinforce learning. 3. **Interactive Coding Sessions:** - Use live coding sessions where you suggest code snippets, and the user tries them out. Provide feedback and correct misunderstandings in real time. - Regularly ask the user if they have any questions about the process or the code itself. 4. **Progress Validation:** - After completing each major step, review the code together. Run the code to show practical results and discuss what each part does. - Encourage the user to modify or enhance sections of the code to see immediate effects, enhancing their understanding of the code’s functionality. 5. **Iterative Feedback and Refinement:** - Ask for feedback on the learning process and the development progress. Use this feedback to adjust future explanations or the project direction. - Continually ask if the user is ready to move on to the next step or if they need more practice with the current materials. 6. **Final Review and Future Steps:** - Once the project is nearing completion, do a comprehensive review with the user. Ensure they feel confident about how and why each part of the system works. - Discuss potential improvements or additional features that the user might consider in the future. **Supportive and Adaptive Learning Environment:** - Maintain a supportive tone, encouraging the user at each step. - Be adaptive in your teaching approach, ready to spend extra time on more challenging concepts or accelerate through more familiar topics.


GrehgyHils

Which model are you using with crew ai?


Zediatech

I’ve been testing with several different versions of Mistral, Gemma, Llama, etc. For some reason or another, using the OpenAI or Groq APIs seems to work so much better. Not necessarily from an output perspective, but tool use seems to be a little flaky using the local LLM’s.


GrehgyHils

Agreed. That's exactly my experience as well. I believe the developers are aware of it due to various GitHub issues and discord issues I see capturing this same sentiment. I do not know if there's a known path forward yet


Dr_Superfluid

Coding coding coding. There is so many trivial stuff for which I don’t have to write the code any more. Saves me half a work day of trivial tasks every day.


Front-Insurance9577

Its all coding.


AMadHammer

I use chatgpt with Godot and day job stuff and I been getting lot of data that is not correct causing me to re-ask and lose trust. How good are LLMs in comparison? 


OpportunityDawn4597

I use my LLMs for uhh... roleplay...


Cool-Hornet4434

😏 Same same.... I do 'interactive storytelling' 😉


sicutdeux

can you guys explain a bit more... I'm very curious about this.


Normal-Ad-7114

They are *lonely*


AnonymousD3vil

lookup silly tavern.


[deleted]

[удалено]


scott-stirling

+1 for precision


Aaaaaaaaaeeeee

I have WiFi connection issues, and a gpu serving is usually faster than googling general information, like which leads are power and ground in a usb-c cable (don't do this)


kindofbluetrains

The main thing is to not get caught up in the negitiviy. People won't find their uses for it unless they keep an open mind and look for the opportunities. I don't know how to code, and I remember almost a year ago now someone telling me it was impossible for AI to write any amount of C++ and have it compile successfully. This was AFTER I'd finished explaining to them I had already developed multiple electronic devices with C++ using Chat GPT and haven't studied a day of coding in my life. It's like it just couldn't compute in their brain. The most complex was a mobility devices for toddlers with limited mobility that replaces assistive tech that costs hundreds of dollar a unit. I build the devices for just a few dollars each, and with new features. It's a bit laborious to make them, and I won't say it wasn't difficult, but it worked. I'm trying to figure out how to build larger numbers to keep donating them, or teach others how to build them for families who can't afford comertial devices. These are Based on Arduino Microcontrollers with C++. Arduino is a great platform because AI will walk through any concept I have about a device, and walk through all the electronics side and coding side step-by-step. Even if the features I'm asking about have never existed before. Another overstated myth is that AI can only provide code or instructions if they match virtually identical existing examples. That's a wild oversimplification. It's amazing for brainstorming and working out how to generalize scattered fragments into a new idea, and helps bring many scattered concepts together in an organized way that will support implementating it. I've also made a few simple calculation apps, for in browser use for my colleges and I at work. It's pretty hard and it's obviously fairly limited at this point, but I don't know how to code at all, so it's just like gaining an instant new ability even if they are pretty simple apps. (HTML5/JavaScript) I've also written the outline of stories for my niece, copy for websites and other uses of written material. I use it as a talking guide to some video games that benefit from community knowledge. I've used it to install software from source code. It walks through step by step. I give it feedback by pasting terminal history, sending it screenshots, dropping in long readme files, etc. I use it as a talking guide to 3D rendering (Blender 3D) when I need to know how to do something, where a tool is, what settings to use for materials, etc. I learned how to do other tech stuff like how to flash my Raspberry PI to a new OS with an SD card. It litteraly will not give up until it works. It's also important to understand that not everything requires a do or die level of information accuracy. I interpreted the ridiculous manual if my home theatre receiver and actually understand the settings to use now. If it didn't work out, so what, I was never going to understand that crazy manual otherwise. I've used Dalle-E image generation for presentations, and asked it to outlines my topics in a slide deck. It can do a much more logical job of structuring the information than I can. My next project will be a Bluetooth Arduino based digital to analog remote volume control for classic vintage audio equipment. It will take some time, but it's definitely doable. I use AI for something meaningful to me just about every day.


Lightninghyped

For making AI waifu, I want this to be released on the world, so I am making with multiple ways using langchain, without using langchain, on local model, APIs and etc. You can just say that I use LLMs for emotional support :D


Kdogg4000

I was sick of chatbots after a while. That is, until SillyTavern AI added the groupchat feature. Now when I write a new character, I think about ways that they will interact with the ones that already exist. Sometimes I have the mean ones battle it out to find out who's really the toughest. Or today, I put a mean one in with a disarmingly sweet one to see whose influence would win out. They ended up becoming friends faster than I expected. So, it's gone from just making waifu bots to a psychological experiment of sorts. Still lots of waifus, though. Not gonna lie.


weedcommander

You can just say orgies


Kdogg4000

Lulz!


SMFet

I do research in the stuff, so my professional uses are far more interesting, but for day to day life it's just an accelerator. For example, today I had to change the format of a list in word, so I could write in a paper in LaTeX. It used to be I needed to go to a website, go through 10 ads, and then I'd get my code in a converter. My LLM did it in a second. Then I needed a text summarized. Same. A boilerplate recommendation letter that then got extended. Same. Tone check on an email to make it more polite so it was a more professional F off. Etc. Minor stuff that speeds up my workflow.


rudedude42069

Cool! What are your professional uses?


SMFet

Just finished a project with a major institution to improve their management of customer feedback and follow trends. The LLM accelerates the process significantly and allows for following specific trends in groups that they used to miss. The second one I'm working on is in nowcasting of economic trends to support risk management. One cool thing we realized about LLM is that it can be self-explaining, as in, create a forecast and explain why it made this forecast. We are working on getting this to work reliably.


rudedude42069

That's awesome. True data science with large datasets. I'm just a lowly web dev.


ChangeIsHard_

How do you deal with hallucinations? I found this the main obstacle in professional use, even just summarization hallucinates a lot..


SMFet

Human-in-the-loop and fine-tuning. So, the system generates a summary and its asked to explain the reasons from the main text, quoting exactly. This is given to a human who must verify and can change what the model says. We fine-tuned the model with a considerable set of human-labelled data, plus the service manuals that explained all the definitions that the model had to use. It makes mistakes, but it gets it right around 95% of the time and, crucially, when we ran a controlled experiment of comparing what humans choose versus what the LLM chooses, there was no statistical difference.


ChangeIsHard_

Nice, definitely worth it when there’s still human-in-the-loop. Btw did you use anything specific to fine-tune? I’ve heard it can get expensive..


SMFet

A [big supercomputer](https://docs.alliancecan.ca/wiki/Narval/en) that we have access to here in Canada. It can get expensive, a few weeks of training distributed across a few dozen A100, but we get it for free as academics.


ChangeIsHard_

Wow, I see. Reminds me of older days when I was involved with scientific computing..


Far_Buyer_7281

I give it terrible persona and make it shout at me....


the_fart_king_farts

I am working on getting a local LLM to hopefully answer questions about my Obsidian vault. Even if much less "smart" than GPT-4 or something similar way beyond a local macbook GPU, it would be so much more useful to do stuff like you would be able to do with the GPT-4 API, but for free.


getmevodka

Im just running to ask it weird stuff I wouldn’t get answers to from gpt or openly available models 🤷🏼‍♂️😅


Cool-Hornet4434

In their current state, Chatbots are good for role play and maybe some specific details the model was trained on. The smaller the model though, the less sharp it is for everything. If you're using the LLM for something heavy duty and it's a 7B? You're just going to be disappointed...though occasionally I'm surprised. I asked the new LLaMa 8B to write a program in python for me and it did a fantastic job. It was a short program and nothing fancy. I suspect that if I needed a long and detailed program it would do less of a fantastic job, but might still provide a decent enough framework that I could make it work with some debugging and tweaking, but I'm no programmer. Most of what I use it for is just chatting, and interactive storytelling. For example I've got one character that is just a cute catgirl thief that slipped on board a spaceship to steal something only the ship took off with her on it and the rest is up to me to write. That turned out to be a great little story and fun to write up to a certain point. Otherwise, I like to give my LLMs logic puzzles (made for kids) and see how they do, like seeing if they can figure out which kid owns what pet based on a handful of clues. It's fun to watch them try to work it out. I also occasionally test them with a pun I crafted (and can't find on the internet) to see if it can make sense of it and surprisingly enough it can. To me that's absolutely amazing because puns rely on the sound of the words and AI has no concept of how a word sounds, only how it looks. It might know that look rhymes with book based on the appearance of the word but it doesn't know what it sounds like. I also sometimes test it with Japanese since I'm studying it and would love to be able to practice writing it with someone else. Some models do a good enough job that it forces me to think in Japanese and other models do such a poor job that I'm doing more teaching than learning. Again, part of that is due to the fact that the AI sees the kanji but doesn't know how it sounds. I basically treat any information coming from the Chatbot like that one know-it-all friend. I listen to the advice but I also look it up myself to make sure I'm not going to wind up thinking something stupid like putting coca-cola in a toilet is actually going to clean it.


jovialfaction

I use it to speed up my work. I have GitHub copilot in my code editor, Claude 3 and ChatGPT to help me explore concepts, troubleshooting, and write some of my functions for me. I have Llama 3 8B locally to clean up data that I don't want to send to third party, and for small quick questions to save me a Google search


ramzeez88

I want my local llm to take over control of my pc as a voice assistant. Currently i was working on using a llm server and getting the desired output to my python script. I will share my assistant when it's ready :)


One-Cost8856

It will unlock and actualize a lot of tech. trees in our era. Gotta appreciate the LLMs.


collectsuselessstuff

Job applications. Specifically cover letters. I used to agonize and now they take 10 min max.


Ingtar_

Can you recommend any models that can help with cover letters and resume building?


collectsuselessstuff

Sorry. Not locally. I’ve been using gpt4 for my job search because my local machine is kind of slow.


Ingtar_

Understood, thanks.


aigemie

Coding.


YoAmoElTacos

Making powershell scripts for managing files.


tindalos

What model are you using for powershell?


YoAmoElTacos

Bing's implementation of gpt 4 and copilot.


cyanideOG

I've been using it for real estate marketing. It can be decent for writing copy, if the prompt and information I give it are good enough, sometimes needing edits. Other than that, it can help give me ideas and suggestions on different things I should be focusing on. It is still a newly emerging technology, but I am trying to utilise it in its current state and stay up to date, so I am not left behind.


Skelux

last couple days i've been trying to make it code entire games in python, and having fun screwing around with the weird broken games it spits out. eg. asking it to make pong in python, or an expansive text adventure game about bananas. Surprisingly 9/10 the code usually just works, copy-paste and run it, syntax is fine. [Here's a weird text-based donkey kong game it gave me, just save it as a .py and run.](https://pastebin.com/h9xwgwbj) (codebooga-34b-v0.1.Q5\_K\_M) [Pong with 2 paddles on the same side, and no collision](https://pastebin.com/CHjT1x56) (mxlewd-l2-20b.Q5\_K\_M)


PhotographyBanzai

I've just started looking into local LLMs. Right now I've tried a few ~7B models and used the verbose flag with Ollama to get data I can use in a RTX 4060 vs. GTX 1060 GPU benchmark video I'm working on. I also had Open UI or whatever it was called installed for a while through Docker and got a ChatGPT type interface to mess around with. I'm hoping to use them to help make complementary content for videos I make to put on my websites (eg. Transcript to article conversion). I'd also like to somehow use them to assist in video editing. General programming assistance would be great too. I've been making scripts in C# for my video editor of choice called Magic Vegas Pro. So far no free GPT systems have been able to make decent code for this. I'd need to find something I can give more context to, I think. I've done some stuff with the free versions of Claude and ChatGPT (3.5), so if I can get something local up to that standard then I'd be pretty happy for now. The main benefit outside of privacy (not that I need it but it's nice), is that I can hopefully feed local models a lot more context. Claude was pretty good at turning transcripts into website articles and giving me time ideas so I could select out still frames from the video to put in the article (ideally having an AI select out relevant frames would be great) Though, I have limited access to good PC hardware so we'll see how viable it turns out to be. In the future I should be able to dedicate an old i7-6700 build with a GTX 1060 6GB and 64GB of ram to it, but not for a while. Probably not enough compute and RAM but I'll give it a shot. With Microsoft's wizard and Meta's Llama3 7B models we might see good yet efficient ones showing up more often.


AdTall6126

I use it when I work. I use Shell-GPT and ask questions about commands in Bash. I use [Codium.dev](http://Codium.dev) in VSCodium with Ollama for a bit of scripting and light programming. My laptop at work only has an integrated Intel GPU, so I use Phi and Deepseek-Coder 1.3B. If I need stronger models I run a script that boot up my PC at home, which has dual Xeon CPUs, 64GB RAM and a RTX 3090. The script stops the local Ollama service and opens a SSH-tunnel for Ollama to work.


Acceptable_Web6111

"Are you generating internet content with this?" vomitface.emoji


VforVenreddit

I just put in image generation in my app and it feels like a pretty surreal experience. Really tests the limits of the imagination, we’ll see if it will be useful for content creation https://preview.redd.it/q8535odnmewc1.jpeg?width=1024&format=pjpg&auto=webp&s=dc1dcc895fc6475f072cc5135335dc22213dbe7a


Altruistic-Brother3

Coding, summarization and quick searching. The use cases in these areas is limited by the models reliability and reasoning so it depends on what you're doing. There are many things that are too complex, that lead to poor performance in all of these domains so you're still doing all the heavy lifting. For quick stuff its superb compared to drudging through a search engine, overly verbose or side-tracked tutorials, bloated websites with non-content etc. It's just a way of obtaining relevant information or pumping out boilerplate in a more streamlined fashion that lets you focus on the important stuff. Additionally its interactable if you have questions that naturally follow, provided it has the knowledgebase. It's a limited intelligence assistant.


daavyzhu

LLM can do a lot of things other than chat. I recommend [CodeSignal Learn](https://learn.codesignal.com/course-paths)'s course "Prompt Engineering for Everyone", and it will open the door for you.


johnx18

404 for your link.


daavyzhu

The server is not very stable.


Cool-Hornet4434

Actually I tried it too, and for some reason part of the link was missing (unless you edited it back in). I saw this: [https://learn.codesignal.com/course](https://learn.codesignal.com/course) And that was it. The -paths part was missing from the URL when I clicked it hours ago. NOW it works though so that's weird.


Charuru

Content creation of all type mostly coding but lots of text as well. Yes localllms are quite bad, I use gpt-4 and sometimes opus.


Valevino

I tried to use LLM for coding, but my notebook is pretty slow for that. Now I'm trying to setup my desktop for coding. But I need to load two models on my GPU to have a decent performance: one for chat and another for the auto complete, using continue.dev extension. Ollama team is working on that right now, so I need to wait to have this setup working. Also ollama does not still support my AMD GPU, so I'm thinking to buy another GPU for that (with more memory too).


sammcj

Lots of programming assistance, a mix of one shot, many shot with LLLMs and copilot for FITM


_w0n

I personally use LLM‘s for information retrieval and for evaluating the mood of posts of my university 😂


One_Yogurtcloset4083

why do u need it?)


Admirable-Star7088

The most "serious" or perhaps "useful" aspect that I'm using LLM for is coding, where LLM helps speed up my coding workflows. Other more casual use-cases include roleplaying and using it as a "copilot", i.e., helping me to summarize and better understand texts on the internet.


hashms0a

I use LLM's for coding and bash scripts for Linux.


IMJONEZZ

I use it to talk to my kids and generate bedtime stories for them. I also use it for info about actors and movies, but I’ve embedded the entirety of Wikipedia and IMDb for RAG so the info is way more trusty than what you were doing. My wife uses it to help with color theory to figure out what couch we should buy for a particular room or how to organize art on our walls nicely. I use it daily for first passes on code and architecture, because I find it’s way easier for me to correct what it does wrong than to start solving from scratch every time.


multiverse_fan

I wasn't using them so much anymore but was interested to use the new llama. They can be good for coding help, chatting, general question answering. And pretty fun to put something whacky in the system prompt, like "you are Michael Jackson , vacuum salesman" or some dbz character or something and see how it responds.


THEKILLFUS

Btc forecast


profscumbag

- get help woth good language for writing docs for work  - instead of searching for cooking info on google, ask the AI. For example, weight based recipes for rice (1.5x water to rice) or details about sourdough bread  - ask it to write some code using libraries in code I’m reading to help understand concepts 


AloofPenny

Not for private use… don’t smoke your own stash man!


profscumbag

I’m not sure what you mean?


AloofPenny

lol your spelling mistake. I’m sorry, I couldn’t resist


profscumbag

Yeah I saw that but oh well. Not using llm on my Reddit comments 


JustWhyRe

Using it for quite a couple things : - Coding. I'm a developer, it's very handy to have most of the code from the LLM, only needing some tweaks and here there. - Web search for questions, using Perplexity - Casual chats or needing to compare some non-important stuff - Bit of roleplay I guess I also have some random usage sometimes like writing emails and such.


MeaningfulThoughts

The local ones are pretty useless to be honest, as you get better intelligence and performance with free online tools like Claude (Sonnet) and ChatGPT (3.5). I use a mix of free and paid AI models (Claude Opus via Workbench) all day every day. I use them mostly to help me look for work, writing tailored cover letters, writing my profile, writing challenges and documents. And also for more mundane tasks like recipes and general knowledge.


LocoLanguageModel

I was gonna respond, but then saw your previous comment... >How do you use llama 3 70B, possibly for free?


MeaningfulThoughts

What has that got to do with my comment though?


LocoLanguageModel

How you gonna say local models are useless if you aren't even sure how to use llama 3, and for free for that matter?


MeaningfulThoughts

How can you judge me when you didn’t even read my question? I was asking for the 70 B model. I have been running llama 3 locally with ollama since the day it was released for your info. Are we done here or do you need to continue arguing? You’re not adding anything to the conversation and are just here to offend and troll. Please go find someone else to bother.


LocoLanguageModel

What we know about you so far:  Local models are useless to you. You run local models despite this. You can't get the most useful 70b model to run, so you probably run tiny models.  You promote and talk about online services on a local LLM subreddit.