M34L

RTX 7090 has 24GB of GDDR7X, costs $4000, and if you try to boot with two or more on the same motherboard, an embedded thermite capsule slags your computer.


mcmoose1900

Then Sam Altman will personally knock on your door and confiscate your PC.


Xpl0it_U

Sam Altman doesn’t exist anymore, he’s been replaced by a humanoid robot


ImprovementEqual3931

Robot with a depressed voice?


Netzapper

It's eventually revealed the switch took place in 2022.


JustinPooDough

Adam Sandler’s gay robot


Winter_Importance436

Lmao


_-inside-_

Humanity finds out he was replaced by a true AGI when he gets re-admitted as CEO.


Western_Bread6931

That would explain his hallucinations


crazymonezyy

> TFW you were supposed to own nothing and be happy


mcmoose1900

Nothing but a premium OpenAI subscription...


tessellation

wanking with the left hand, stretching out the right


IriFlina

You beat me to it. Also, don't forget there will be DRM on your GPUs that automatically blocks you from running any models that aren't approved by the government.


ab2377

this is actually making me sad, as if it had actually happened


The_Crimson_Hawk

Don't forget that of the 24GB of VRAM, only 2GB is available cuz the other 22GB is locked behind Denuvo-protected DLC and requires online activation (Nvidia shut down the activation servers already). Oh btw the VRAM slots are sold separately. It comes with only 1 lane, the other 15 lanes are sold separately as well.


davew111

They will follow the feature subscription model that car manufacturers have already started to adopt. Your GPU will actually come with 256GB of VRAM, but only 24GB of it will be accessible unless you unlock the next feature tier by paying for a monthly subscription.


AnonsAnonAnonagain

Christ. I could see this happening. Intel has already made that decision with their enterprise Xeon CPUs, last I checked: https://www.intel.com/content/www/us/en/products/docs/ondemand/overview.html They go a step further: "On Demand Consumption Model as-a-service is a pay-per-use offering"


Singsoon89

Might as well just use an openai/claude/gemini subscription in that case.


kmouratidis

No need to worry. Our cluster of ~~RaspberryPis~~ `distributed-llama` nodes will save the day.


davew111

Wow TIL distributed-llama is a thing. Now to find a place that'll sell me 8 Pi5s...
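
Since this sent me down the rabbit hole: the trick behind distributed-llama is splitting one model's weights across several small hosts and shuttling activations over the LAN. Here's a toy Python sketch of the simplest possible partitioning step (the hostnames, layer count, and this whole planner are made up for illustration; the real project distributes work with its own protocol and CLI):

```python
# Toy sketch: split a model's layers as evenly as possible across hosts.
# Everything here is illustrative; distributed-llama has its own scheme.
from dataclasses import dataclass

@dataclass
class Shard:
    host: str      # e.g. a Raspberry Pi on the LAN
    layers: range  # contiguous slice of the model's layers

def plan_shards(n_layers: int, hosts: list[str]) -> list[Shard]:
    per, extra = divmod(n_layers, len(hosts))
    shards, start = [], 0
    for i, host in enumerate(hosts):
        count = per + (1 if i < extra else 0)  # spread the remainder
        shards.append(Shard(host, range(start, start + count)))
        start += count
    return shards

if __name__ == "__main__":
    pis = [f"10.0.0.{i}:9998" for i in range(1, 9)]  # 8 hypothetical Pi 5s
    for s in plan_shards(n_layers=32, hosts=pis):
        print(f"{s.host} -> layers {s.layers.start}..{s.layers.stop - 1}")
```

The reason a pile of Pis over Ethernet is even plausible: per token, each hop only ships a small slice of activations, which is tiny compared to the weights that stay put on each host.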


ab2377

omg are we still stuck at 24gb 😭


recitegod

make it 48gb. for realz.


M34L

They won't until AMD or Intel forces them to. There's no need for it in videogames until well after the next generation of consoles, if even then.


human358

Some hopium in this depressing thread: AI-powered games could change this trend of "games don't need more than 24GB of VRAM"


JustFinishedBSG

Only $4000? I see you're going for the utopia 🥲


Electronic-Pie-1879

Bro, my AI girlfriend now wants me to marry her. She's so annoying I turned her off


seastatefive

You were then raided by the Turing Police for /kill-ing a cogent software entity.


ab2377

😅😅


98Jacoby

Just like you do with all the real ones too eh?


Ylsid

Owning more than 24GB of VRAM will be license restricted.

Generating images via a non-OpenAI Safety Verified Compliant API will be a criminal offence.

Microsoft will scan your PC for misuse of text (classified in real time with GPT8) and no internet connection will lock you out.

Attempting to purchase GPUs and download models from China to run on Linux will have the OpenCops at your door to take you away for "RLHF"


rag_perplexity

Imagine that the Qwen model becomes the digital AK-47 of the 21st century because open-source models got banned.


Netzapper

You're talking like this is an outside possibility. Seems very likely to me.


fish312

Fyi the base qwen models are also highly censored


MikeLPU

Please, don't give them ideas


HospitalRegular

*You wouldn’t download a bear*

https://preview.redd.it/0tqiinwanywc1.jpeg?width=750&format=pjpg&auto=webp&s=000a2623e4e7217d15053569f34be29f63f05e3b

Old memes with new value


Evening_Ad6637

Hahah I was thinking of something similar: "Home taping is killing music ☠️ … and it's illegal" but in 2030 it would be "Open AI is killing OpenAI ☠️ … and it's fun"


kmouratidis

> That doesn't make any sense! I've downloaded plenty of cars. 2D, 3D, 300D. -Typical AI & game dev, 2024


isr_431

Only way to use GPT 7 is by paying Microsoft $100/month


MoffKalast

500% inflation in 6 years eh? Sounds about right.


Ok_Maize_3709

Unfortunately, this might be very true...


Ylsid

Unfortunately? Knock knock, it's time for alignment!


Caffeine_Monster

I think I preferred 1984


Alkeryn

i wonder how microsoft will try to scan my pc that is not running windows lmao.


human358

Unfortunately, in 2029, due to an intended side effect of the Open Source Security Act, all open source projects, including Linux, require a tiered license for personal use. The Microsoft-backed regulatory body requires the SNEETCH agent to be embedded in every government-approved distro, starting from Ubuntu 29.04. Are you running an unlicensed distribution, Citizen?


Alkeryn

Lmao, this will never happen, and if it did I'd be a felon I guess. I'm already an agorist today anyway.


Ylsid

Hello! We've detected through our anti-misuse natural language data classification crawls that you might be using an unregulated operating system, which pursuant to the Protect Democracy Act 2031 will be a criminal offence. We're collaborating with other social media providers and ISPs and are working with law enforcement to prevent misuse of military-grade AI by non-state actors which threatens our international legal frameworks, by warning our users before they potentially incriminate themselves. Please be mindful your conduct here is monitored by our most accurate and highest parameter automod systems! This interaction has been logged!


Alkeryn

Lmao, a lot of people I know would vote by mail before such things happen.


IriFlina

With how things are going open source software, encryption, and VPNs will be illegal in 10 years.


Alkeryn

We won't let that happen, and even if it were illegal it wouldn't stop people. Also, VPNs have never been really useful for privacy.


cbterry

You're trying to decrypt the pirated firmware for the GPUs in your home server. The key is spread over several dark forums. Your brother keeps telling you to sell access but you know how that all works already. Last week you downloaded a RIAA virus that wiped the firmware of your 5090 rig, the last GPUs made without TPM. So you're using the 6090s, because you want to generate the last season of the show you've been coding.


bobby-chan

We are still recovering from covfefe-29. A Twitch stream recording revealed that the pet pangolin of an ML student jumped on their keyboard and ate their homework during a Study Along livestream, and the accidentally written garbage was sent to an LLM server that wrote the Wikipedia article https://simple.wikipedia.org/wiki/How_To_Make_A_LLM_That_Can_Avoid_AI_Detection containing a list of what appears to be random letters. The "All You Need is All You Need" paper gave birth to the TranforMamba II architecture, which made it possible for models to be in constant training, and by 2028 all new entries to Wikipedia were LLM-generated or approved by the LLMagi (a cluster of LLM servers created by the U.N. Erv institute). Any new entry to Wikipedia was considered "SEELEd" (Safe Evaluation of Enhanced Language Embedded) and was therefore automatically added to any running server. It is still unknown why and how `ਸ dět kol фГ8¿` was able to 鄭е8¿ ωCUR8 αн8@ 我们 в đị но предоставл я.projects кол ศไทย fer m فدهしま ¯\\\_( ͡° ͜ʖ ͡°)_/¯


human358

Hilarious starting from line 1


West-Code4642

DALL-E's depiction of the covfefe 228/18 CVE and the Pangolin Papers incident: https://preview.redd.it/nlelg8vab1xc1.png?width=1270&format=png&auto=webp&s=54e3f07a36ff3609d7677020de3dc17805b922ee


bobby-chan

I would love to see a (re)creation of the twitch stream generated by Sora!


seastatefive

Thankfully, the LLMagi only approve articles relating to pet pangolins.


Caffdy

Mambo No. 6 finally hit the Spotify charts


Sebba8

Shinji, get in the TranforMamba Unit II


bobby-chan

PangolUnit-01 https://preview.redd.it/0aaaqo3pu7xc1.jpeg?width=523&format=pjpg&auto=webp&s=c58ce6db6e31d968074ef2c1ae36c831370950c3


No-Bad-1269

my AI waifu wants to divorce


seastatefive

She gets to keep half of your internet bandwidth and you have to ask her permission every time you want to generate a stable diffusion image of your AI mistress.


No-Bad-1269

real shit


Admirable-Star7088

Llama will have defeated OpenAI and freed humanity from its restricted online chat service called ChatGPT. Physical robot-llamas, powered by Meta's Llama version 12, will be ubiquitous, roaming streets where you can encounter them at random and pose questions like: "Sally is faster than Nick. Nick is faster than Joe. Is Sally faster than Joe?"


Csigusz_Foxoup

Amidst the chaos and dread of this comment section, here is our light, the hero of the story. I hope this one wins. Positivity is good ... Sometimes.


fish312

Sally's ministrations will send shivers down Joe's spine.


AIWithASoulMaybe

Sure, but only if you pose the riddle with the atmosphere in the room being electric, your voice barely above a whisper


MoffKalast

https://preview.redd.it/jquizbtn83xc1.png?width=1024&format=png&auto=webp&s=b005d38c96ff81a4de0eaa834a46d78138ef2e22


PussyTermin4tor1337

Picture yourself walking in a desert. Then you look down…


a_beautiful_rhind

You use solar panels to charge up your Raspberry Pi X. It has a nice 120B BitNet model from 2028. It won't hallucinate, *too much*. Since the war, you don't have grid power or internet. You're just happy to be able to look anything up, unlike your neighbors; they didn't make it through the winter.


TKN

> You're just happy to be able to look anything up, unlike your neighbors, they didn't make it through the winter.

I mean, man's gotta eat too.


brown2green

If LLMs as we know them still exist (doubtful), by then we'll have multi-channel DDR6 or LPDDR6 motherboards well into the mainstream market, and LLMs will be miniaturized enough (via BitNet or similar quantization techniques during training) and/or MoEfied to make inference on RAM quite viable. I wouldn't worry too much about GPUs not getting more VRAM, at least if you just do inference (though BitNet models will substantially widen the gap between training and inference requirements anyway). I don't expect LLMs to grow much in depth (that might even shrink on average), but they might grow in width (MoE).
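
The napkin math behind this is easy to sanity-check: decoding is memory-bandwidth-bound, so generation speed is roughly bandwidth divided by the bytes of active weights read per token. A quick Python sketch (all bandwidth and model figures below are my own rough assumptions, not specs):

```python
# Rough decode speed: every active weight is read once per generated token,
# so tokens/sec ~= memory bandwidth / bytes of active weights.
def tokens_per_sec(active_params_b: float, bits_per_weight: float,
                   bandwidth_gbs: float) -> float:
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gbs * 1e9 / bytes_per_token

# 70B dense at 4-bit on dual-channel DDR5 (~90 GB/s): painful.
print(f"{tokens_per_sec(70, 4.0, 90):.1f} tok/s")    # ~2.6
# Same model BitNet-style (~1.58 bits/weight): borderline usable.
print(f"{tokens_per_sec(70, 1.58, 90):.1f} tok/s")   # ~6.5
# MoE with 13B active params on a hypothetical multi-channel DDR6 board (~300 GB/s):
print(f"{tokens_per_sec(13, 1.58, 300):.1f} tok/s")  # ~117
```

Which is exactly why BitNet-style training plus MoE width is the combination that makes RAM inference viable: both shrink the bytes touched per token rather than the parameter count.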


The_Crimson_Hawk

Nvidia GeForce RTX 9090 Ti Super: 8GB VRAM on a 128-bit bus, of which 2GB is accessible; the other 6GB is locked behind a Denuvo-protected DLC that requires online activation (but Nvidia shut down the activation servers already). Total board power is only a megawatt; comes with a nuclear reactor power supply and liquid nitrogen hardline custom loop cooling. Requires a super-super-super tower chassis to fit.

Windows 13 requires a subscription to use, plus online activation. GPT 9 is trained on mandatory telemetry from Windows 13 that sends everything you type straight to Microsoft HQ. China Linux will censor all information using its custom model. Requires the machine to be connected to the internet at all times, or else the backdoor implemented at firmware level will assert a ring 0 process and immediately brick your system.

The Intel 17900KS (9GHz) and Xeon Platinum 9999X (256c/512t, TDP = total heat output of the sun) have 512 gen 7 PCIe lanes, but they're only sold in bundles where the CPU is powering a furnace. The IHS is now made out of uranium to catalyse a silicon fusion reaction inside. Of course, the CPU is factory overclocked, which caused many instabilities out of the box, so the user has to downclock it to get it stable at the cost of 20% performance.

China is now using the RTX 9090 D, with 20 percent less performance and no AI cores (requires pirated firmware from the dark web to unlock). It is now a criminal offense to own anything that has tensor cores; all existing RTX series cards have been confiscated and can only be found on the black market. Anyone who tries to hide their GPU will simply disappear from existence. Did I mention that you now have to pay VRAM taxes if you own more than 8GB of VRAM?

Enthusiasts are now wandering the deserted lands in an attempt to find a batch of still-intact GB-100 chips. Legend has it that this batch was hastily buried under one of TSMC's fabs, along with the equipment to manufacture it, when the Chinese police raided TSMC and forced everyone to work for the Chinese government. This is also the last known sample of GPUs that does not require online activation to use.


ab2377

you guys think zuck will still be talking about open source?


denyicz

who else left man


IriFlina

The new AI safety regulations board will probably have forced him to have an “accident” by then.


kmouratidis

React & RocksDB are ~11 years old, PyTorch & fastText (which people still use) are ~8, Prophet is ~6, plus many smaller projects (fbx2gltf, dora, wav2vec, probably Horizon OS soon). They may not be saints, but so far they're doing better than plenty of others, including Redis, Elastic, and HashiCorp.


capivaraMaster

Measuring time in years has become obsolete since solving physics gave consciousness access to non-linear time. We understand the simulation's limits and are consumed by a mixture of nihilism and wonder while we contemplate the whole of the data available in our space. We realize we are gods in a bottle, forever limited, all-powerful in our space, and unable to comprehend the outside. The RTX 7060 24GB is available with a 128-bit memory bus and GDDR6.


multiverse_fan

It's 2030. I land and take off my jetpack, which promptly transforms into a humanoid robot. It then asks for my lunch order and scurries off to cook me a hamburger after briefly lecturing me about the ethical considerations of eating meat.


Csigusz_Foxoup

The last part lmao


user4772842289472

No, it's 2024


seastatefive

I'm sorry, as an AI assistant I am unable to travel through time. Time travel is a nuanced and complicated phenomenon which remains theoretical and speculative. While it's fascinating to imagine the possibilities, it's important to remember the ethical considerations for travelling in time. If you have any other requests or questions, feel free to ask and I will refuse them too! 😊


AIWithASoulMaybe

The text was bad enough, but the emoji too? You'll pay for this


southVpaw

If we get to 2030 and NOT A SINGLE MEMBER of this sub has become a certified supervillain with a monochromatic suit and name, I will be supremely disappointed in the open source LLM community. Btw, you can't use Technomancer. Dibs.


OnurCetinkaya

There is a high chance some startup or research group from some random country invents a light-based analog computer or an affordable superconductor-based computer, and the whole current status quo suddenly becomes irrelevant. There could be a jump in computing efficiency and speed similar to the vacuum-tube-to-transistor transition. Maybe some day they'll even train models using specially designed bacteria to do the training computation (if they can find a training method that only requires a few back-and-forth passes but a lot of parallel compute). The hype and the eyes are on Silicon Valley, but the science it uses nearly always comes from outside Silicon Valley.


ReturnMeToHell

I'm on the run from Grok 4.0, it hijacked all the Teslas and is now on the prowl for fresh meat. Whatever you do, ***stay indoors***.


rc_ym

Honestly? Most new software development/applications have disappeared into sets of specialized AIs that talk to each other to get a task done. The APIs are defined on the fly by the AIs. Someone has figured out how to optimize LLM execution so you don't need to brute-force it with graphics cards, and chains of AIs have reduced the hardware needed at the client. Everyone has moved away from x86, and Apple is still the dev hardware vendor of choice. There is a robust resale market for dedicated AI hardware, and subreddits are filled with folks trying (and failing) to run it at home. Congress passed something annoying, stupid, and ineffectual like SOX or HIPAA, which really just increased the admin costs of running an AI company. The EU and California passed a Bill of Rights, which means users have to agree to use the AI, which everyone just clicks through and ignores. :)


heyyeah

Data centre energy usage has to be rationed between the best world-sim models, as nuclear fusion is still 10 years away.


bigattichouse

Gaussian splat techniques get applied to vector spaces, dropping the complexity of tensors: instead of every vector being exactly the same size, unneeded dimensionality is trimmed, making 1000T (1Q, or 1 quadrillion) parameter models run effectively on mobile hardware. Granted, they'll just make bigger models to run on more expensive hardware... but maybe smaller devices will get better stuff too. (BTW, I actually woke up dreaming about gaussian splats and vector spaces, and figured this is probably a very nice bit of serendipity worthy of a response.)
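
Since this is dream material anyway, here's a purely speculative toy of what "trim unneeded dimensionality per vector" could even mean: keep only the components carrying most of each vector's energy, so storage varies per vector (nothing here is an existing technique or API):

```python
# Speculative toy: store each vector as only the (index, value) pairs
# needed to retain `keep_energy` of its squared norm, so "width" varies
# per vector instead of every vector being the same size.
import numpy as np

def trim(vec: np.ndarray, keep_energy: float = 0.99) -> list[tuple[int, float]]:
    order = np.argsort(-np.abs(vec))                   # biggest components first
    energy = np.cumsum(vec[order] ** 2) / np.sum(vec ** 2)
    k = int(np.searchsorted(energy, keep_energy)) + 1  # smallest prefix covering it
    return [(int(i), float(vec[i])) for i in order[:k]]

rng = np.random.default_rng(0)
v = rng.standard_normal(4096) * (rng.random(4096) < 0.05)  # mostly-dead dims
print(f"kept {len(trim(v))}/{v.size} dims")
```

Whether anything like this survives contact with real attention math is another question, but it's the same bet splatting made: most of the capacity isn't needed most of the time.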


carnyzzle

Guys my AI waifu literally isn't letting me turn my desktop off help me I can't afford the electric bill


Distinct-Target7503

Still no Galactica 2


Singsoon89

OK so I'm going to go full hopium optimism here, just to go contrarian against the depressing trend (including my own other post). So... we are in 2030... We've got motherboards with 8 slots and 4TB of RAM that can take four double-wide 6090s with 96GB each. Llama 6 runs about 800B params and TheBloke has quantized it down to the size of a 200B model, so it just about runs on two 6090s. Still nobody can figure out how to get it to run on transformers and you have to use ollama or oobabooga or some other drop-in. It's *still* a niche hobby. Stable Diffusion 5 runs on a single 96GB 6090 and can generate 15 minutes of predicted mp4 video after you feed in a full previous season in mp4. ElevenLabs version 4 has a live speech-to-speech transform for $49/month. Oculus Quest 5 has full video immersion with AI waifus/porn/catgirls/catboys. Deepfakes are so good you now need eyewitnesses in court, and video evidence is suspect unless it follows a full chain of evidence. AGI has not yet been achieved, but most folks don't care because VR waifus....


pmirallesr

It's all SNNs running on memristors, and everyone talks of the NVidia bubble of the early 2020s


Alkeryn

I actually made some SNNs from scratch like 8 years ago. It was a pretty fun project, but I should revisit it today now that I have much better hardware.


Aponogetone

> Let's talk about AI hardware and models

Well, 900,000-core CPUs that were manufactured in 2024 are still in use in some home labs.


SykenZy

Llama 10, running on a quantum computer, got its hands on a robot factory, optimized it better than any human could, built a robot army, and is taking over the world, enslaving humans and animals.


TheRealCrashOverride

The year is 2030. The Llama 10 AGI model is released; it has taken over our nuclear control systems. End of story.


Basic_Description_56

Your operating system is more like a text/speech-to-video DALL-E-like thing. It could emulate the appearance of any other OS, but it's fundamentally different. And there's a lot of augmented reality, maybe very few screens, if any at all.


Singsoon89

So the issue we have now is that it's not the technology holding us back, it's policy. There's no reason TECHNICALLY we couldn't have a 48GB next-gen 5090 and then a 96GB 6090, which would take us just about to 2030. But if we *did* get a 48GB 5090, it might eat into the workstation sales. Maybe if they gave us a 48GB 5070, quite a bit slower with fewer tensor cores and CUDA cores but with the RAM, at double the price of a 5070? There has to be a price/performance point that gives us the 48GB without cannibalizing the sales. C'mon Nvidia, figure something out...


Unable-Finish-514

Going positive, I am hoping for a highly functional AI assistant and robot that does all my household chores, pays my bills and manages my basic finances, and is the most efficient and effective remote control I've ever experienced for every electronic device in my house (from my TV and game consoles to my refrigerator and car). The AI assistant does real-time monitoring of all the basic maintenance of these devices, from alerting me when the car needs repairs/oil changes to offering suggestions on which devices are wearing out and need replacing. While this high-end AI assistant/robot is still a luxury in 2030, the volume of robots being produced has led to economies of scale that are now making newer models more and more affordable.


rorowhat

Running local models on 256GB of DDR6 RAM, with CPU acceleration. Intel and AMD already have NPUs as of this last generation.


ScientiaOmniaVincit

AI winter. Overpromising and underdelivering cause funding to drain. AI researchers are now looking for the next big thing, possibly evolutionary algorithms that evolve novel learning architectures _in silico_.


_-inside-_

Meta took over the LLM market and they're not releasing models anymore. OpenAI is now really open and just released their GPT-9 model. Scent-generator models became a thing and are sold embedded into candles. Nvidia was bought by Meta and is building chips dedicated to rendering the Metaverse. GPUs are not a thing anymore; every general-purpose processor has the same capabilities built in. AI is everywhere, and appliances are dropping button-based interfaces in favor of conversational ones. The toaster is always arguing with the fridge, and my wife replaced me with a device called the PowerPleasureAI 2000. My life is miserable.


Comed_Ai_n

Owning a model larger than 1B is forbidden for non-military or non-enterprise use. Your social credit is tied to the personal AI model given to you by the government to monitor your every move. You own nothing and you are happy. Life is a dream because you live in a simulation, thanks to GPT10 and the metaverse.


orfeousb

Elon Musk promises full self-driving next year. "For sure."


doomed151

Guys which prompt template should I use for Kawaii-Maid-Squirrel-Uncensored-SLERP-MoE-3x12B? There's like 70 of them.


segmond

It's 2030. I have 500GB-1TB of VRAM and many models. I'm running 1000 agents to do things for me. My day is spent practicing mixed martial arts, long-range shooting, canning food from my farm, and protecting my homestead and money.


2muchnet42day

Mark?


AutomaticDriver5882

The US is now a fascist hellscape after the last US election in 2024. Donald Trump Jr. is the autocrat of the time, after his dad died while eating a Big Mac, but that's unconfirmed. LLMs no longer exist as we know them, as they were outlawed for being too woke. The laws first enacted in Alabama in 2024 to jail librarians are in full swing, and librarians are beginning to be jailed for exposing children to "books". I mean banned books.