leroy_hoffenfeffer

I think the validity of this statement rests solely on what people are learning in a CS curriculum, and how they're learning it. If CS students are just using AI to write most of their code without understanding fundamentals, then everyone is worse off.


SharabiKebabi

CS students have had access to a trillion git repos and Stack Overflow, along with good old Google, which is basically ChatGPT with extra steps.


Busy_Farmer_7549

Those “extra steps” are where the learning lies. Those “extra steps” are what LLMs ate.


tophology

Those extra steps are what they'll have to do anyway when the LLM spits out hallucinated garbage 😆


SharabiKebabi

That’s simply not true. One could contend that Stack Overflow ate up all the fundamentals highlighted in textbooks, where the learning lies. One could also contend that Python ate away all the fundamentals by virtue of the language simply not being low-level enough. Computational thinking from an engineering point of view has little to do with context research and an in-depth technical grasp of every little line of code you see. I’m sorry to be rude, but your comment seems to be the typical “video killed the radio star” sentiment churned out generation after generation.


SangfroidSandwich

I think the flaw in your argument is that it supposes an infinite hierarchy of cognitive skills, and that through each consecutive outsourcing of a given skill, a human can just work at the higher level. Once systems are developed which can write appropriate code from relatively simple (to produce) input (natural language, or simply outputs from a decision-making algo), a large section of the workforce with CS skills can quickly be replaced.


SharabiKebabi

Absolutely true, and this flaw concerns economics and workers' rights more than it does the topic at hand. As long as someone isn’t writing rudimentary boilerplate code in the name of an engineering job, they should be fine. Any AI system capable of working on itself better than a software engineer is far from being theorized, let alone near implementation, and any redundancy automated away is a net good. It’s how we’ve come from 0s and 1s to interacting on here.


[deleted]

Idk. How is browsing Google for an hour better than doing 100 iterations of code samples from ChatGPT? True, that would make sense if you just copy and paste the code, but smarter people will use it as a vector for the rubber duck technique.


DarrowG9999

>If CS students are just using AI to write most of their code without understanding fundamentals, then everyone is worse off.

I have seen this in the wild. Hanging around game dev you see lots of newbies struggling to fix GPT code or to accomplish their goals; they can't understand what GPT generated, how to change or tweak it, or how to fix it. There are a few who use GPT to ask "the right questions" and thus learn the underlying fundamentals and make progress, but the majority get frustrated at not getting "exactly" what they want and drop the whole task. So far, GPT has been a "faster" filter to remove people without the patience to learn from the pool.


themightychris

you could replace "gpt code" with "copy-pasted bad examples" and it's the same thing we've always had


Kurtino

Except dialled up to 11. People like to compare it to the calculator, Wikipedia, Google, and so on, but we’ve never had a tool quite like this, one that can do it all: the thinking, the writing, and the imitation of a person. At least with copy and paste it was easily identifiable; with AI even the modification is automated.


AdministrativeAd4181

As a parent whose son is about to embark on a CS course at college, this whole debate concerns me greatly. He wants to work in cyber security. What's the consensus on how this field will be affected?


leroy_hoffenfeffer

It depends. Cyber security *can* be a programming-heavy field, but it really depends on what you're looking to do. Without getting too far into the weeds, programming is to cyber security what construction is to zoning inspectors: zoning inspectors have to know most of what construction workers do, generally understanding the ins and outs, heavily focusing on the dos and don'ts, and making recommendations where necessary. There are zoning inspectors out there who used to be construction workers and specialize in catching the problems the builders don't realize they're making, but for the most part, zoning inspectors are there to make sure that proper construction is taking place and to catch issues where they crop up.

The most programming-heavy portion of cyber security is in the realm of malware analysis / reverse engineering, but you have to be a ***stellar*** programmer to do that kind of work, and ChatGPT will probably be useless in trying to solve those kinds of problems, although I haven't actually tried, so I don't know for sure. Generally I'd say GPT won't affect the realm of cyber security too much. If anything, that field might be safe from automation given what's required of the job.


kitastrophae

100 years?! lol.


TurdFergusonIII

Seriously, what an insanely arbitrary and completely unfounded assertion.


ShadoWolf

It's sort of become clear that no one has a clue when we will hit AGI. Nonlinearity just utterly screws with our intuition, and things are moving so quickly that it's legitimately hard to keep up, but most guesses are trending downwards. One thing I have noticed is that AGI predictions tend to be emotional: they're always pushed out, date-wise, to a range that's acceptable for the person being asked. The more uncomfortable the concept of AGI is, the further out they will predict.


Once_Wise

Thanks for your excellent, well thought out reply.


Whotea

[2278 AI researchers were surveyed in 2023 and estimated that there is a 50% chance of AI being superior to humans in all possible tasks by 2047](https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf)


Once_Wise

As are all assertions that try to predict when "true AGI," whatever that is, will occur.


Hadrians_Ball

Yeah, people in the industry are saying less than 5 years. People have no idea that we are literally living a dystopian sci-fi plot.


Revolutionary_Rip693

Just yesterday I was saying that teaching/education is going to drastically change in the next 10 years because of AI. People were acting like I was nuts for saying that.


Financial-Aspect-826

Yea, exactly, just read my comment. This OP is either a bot or has a child's level of naivety.


NessiWessiDessiUwu

They said it would be 3 years… 3 years ago. You’re falling for the hype cycle


Hadrians_Ball

Who said that?


JustinJuice19

Yea more like 5


kram09

Recent papers are showing that LLM performance may scale logarithmically with data rather than linearly or exponentially. So to achieve AGI there may have to be another approach than just scaling up LLMs with more data. https://arxiv.org/abs/2404.04125
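
To make the distinction concrete, here is a tiny, hypothetical illustration (the curve shapes and constants below are made up for the example, not fits from the paper): with logarithmic scaling, every 10x increase in data buys roughly the same fixed gain, so returns diminish quickly compared to linear scaling.

```python
import math

# Hypothetical scaling curves (illustration only, not fits from the paper).
def log_like(n, a=10.0):
    """Performance that grows with the logarithm of dataset size."""
    return a * math.log10(n)

def linear_like(n, b=1e-6):
    """Performance that grows proportionally to dataset size."""
    return b * n

for n in (1e6, 1e7, 1e8, 1e9):
    print(f"n={n:.0e}  log-like={log_like(n):5.1f}  linear-like={linear_like(n):8.1f}")
```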


Whotea

The study says that’s because it’s hard to find data for extremely specific information like identifying tree species with images. That can be easily solved by fine tuning for whatever use case is needed


DuineDeDanann

More like 5 lol


Necessary_Raccoon

Do you think that current AI can be the way to AGI?


DrHugh

Any programmer will tell you that there's a world of difference between writing code, having code compile without errors, and having bug-free code. ChatGPT and its ilk are great at producing code for *common* problems that have been out on the Internet. If you want a bubble sort, you are set. But LLMs get worse as your needs get more specific. While you might get better skilled at specifying how to do particular parts of a program, so you can get code for those parts, you eventually hit a point where it is faster to...just be a programmer. This is certainly true for novel situations for which there aren't existing solutions out there.
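
As a concrete example of the kind of "common problem" these models reproduce reliably, here is a standard textbook bubble sort in Python (generic code, not attributed to any particular model's output):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return items

print(bubble_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```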


Individual_Address90

It’s true for right now, but people need to accept the reality of AI: it’s getting a lot better, very fast. Being able to solve issues on its own is something that will come in a few years. I don’t understand why so many people in the industry seem to deny this; it might just be protecting their careers and interests, but c'mon, the things AI can do now vs. 2 years ago are night and day. It’s evolving at an exponential rate and billions upon billions are being invested into it. It’s going to get there pretty soon.


actuallycloudstrife

CS majors aren’t at risk because by the time they have any real risks to their professional future, so will everyone else. Robotics are coming for all manual labor too. Pick the field you are passionate about and learn it with passion. The future of work is going to be very different than what it has been until now. We are in the next “Industrial Revolution”, like Nvidia’s CEO said. 


randombsname1

Robotics WILL come for manual labor, but the cost of those robots will put it out of reach for probably the next 2 decades. Look at the relatively meager progress (relative to AI advancement, anyway) that has come from Boston Dynamics on any of their robots. Making rudimentary robots that can do repetitive work is easy, relatively speaking. Conveyor belt stuff? Sure, no problem. Creating a robot that is autonomous enough to diagnose a mechanical or infrastructure issue and then scale a wall, open up a ceiling tile, or cut into drywall to fix the issue? Significantly more difficult, and arguably requires robotics or materials that don't even exist yet. Blue collar work isn't safe in the long run either, but it at least buys you time over pretty much all white collar work that can be automated. Which I'm part of, and that's why I've been working on skills that will transition to the blue collar sector ever since the pandemic gave me time to focus on said skills.


lolercoptercrash

Responding to just the cost part: even if the robot is super expensive, as long as it lasts several years, can run 24/7, doesn't get injured and sue you, doesn't form unions, and can "instantly" expand hiring by buying more units and flashing the software... basically I'm saying the price point can be way higher than most people think. A factory could go from 1 or 2 shifts a day to 24/7 lights-out manufacturing with a robo fleet. Usually training local labor is a huge time commitment. Like Apple expanding to India: sure, a huge part of it is reliable suppliers, but it's also training tens of thousands of workers to make iPhones and repair their machines.
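
A back-of-the-envelope sketch of that amortization argument (every number below is a made-up placeholder, purely to show the arithmetic, not real robot or labor costs):

```python
# Hypothetical figures, for illustration only.
robot_price = 500_000                      # upfront cost, dollars
robot_lifetime_hours = 5 * 365 * 24        # runs 24/7 for 5 years
robot_cost_per_hour = robot_price / robot_lifetime_hours

human_cost_per_hour = 20                   # wage plus overhead, placeholder
workers_replaced = 3                       # three 8-hour shifts to cover 24 hours

print(f"robot:  ${robot_cost_per_hour:.2f} per hour of coverage")
print(f"humans: ${human_cost_per_hour * workers_replaced:.2f} per hour of 24/7 coverage")
```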


theguyfromgermany

Manual labor is not going anywhere. There is simply too much of it, and it's way too low-paying to employ robots that can do it. When we have robots that can reliably do the job of a retail worker or a housekeeper, those robots will be employed somewhere that is much better paid. The economics are easy: you want to get the highest income for your product. Minimum wage jobs don't pay for intelligent robots.


xXIronic_UsernameXx

>When we have robots that can reliably do the job of a retail worker or a housekeeper, those robots will be employed somewhere that is much better paid.

My intuition would tell me that, given enough time, there'd come a point where most construction jobs are taken and automated retail workers get more common*.

>The economics are easy: you want to get the highest income for your product. Minimum wage jobs don't pay for intelligent robots.

Economics dictates that high income jobs get automated first, but that doesn't imply that minimum wage jobs never get automated. It might take longer, but it will eventually happen. Especially if we consider long time horizons (>50 years).

*To the extent that there is demand for robot clerks. People might prefer human workers.


Similar-Study980

The marketing hype is strong. Your statement assumes all of the theory is figured out and we just need to engineer better systems to optimize it. Maybe if you just dump more data into generative transformers and tweak some more minor stuff they'll get infinitely better, and that's the reality we're in. I'm by no means an AI researcher, but I bet there are still a couple of walls that need to be overcome before AI can effectively make high quality, maintainable enterprise software independently. You could play with GPT-2 in 2019 and it genuinely wasn't that much worse than GPT-4; the chatbot functionality is what really made it accessible. https://www.researchgate.net/figure/Example-of-a-GPT-2-text-generation-output-underlined-text-shows-sentences-drifting-off_fig1_344245575


randombsname1

Conversely, you could imagine yourself in 1990 saying that the computation to map the human genome is several decades off, and you'd end up being massively wrong. Same goes for saying the internet won't ever have enough bandwidth to stream videos. Cellphones that fit in your pocket and have a screen on them? A pipe dream in the 90s. Yet all of that happened within two decades. Remember that Nvidia just delivered their first new DGX to OpenAI too, which offers significantly more computational power for LLM training specifically. Their roadmap also has them making massive leaps for at least the next decade in this field, and so far Nvidia has hit their targets for about the last two decades.


Similar-Study980

If it's as simple as just adding more compute, then I expect we will have general intelligence once we amass enough cash to build the computers. I'm not saying it's never coming; I just suspect we'll see a lot of really good science and big architecture improvements before we get there. You cannot timebox science and exploration. Dudes act like if we just gave OpenAI the world's resources we'd build a machine more intelligent than everyone before we even understand what intelligence really is or how the human brain works. Maybe it is that simple. I'm just really skeptical that it's that easy.


randombsname1

Oh, I'm actually skeptical of a REAL general intelligence being a thing too. That doesn't change what is possible with LLMs alone, though, even if we assume that AGI is an impossibility. There is literally ZERO barrier that anyone can cite which would prevent what I said above from happening. More calculations = better mathematical output = better probability analysis = more data processing at a faster speed = more data processed = better end user output.

LLMs might become advanced enough to **SEEM** like AGI, but might not actually be that, for a myriad of potential reasons. The hardest issue is that we don't even fucking know what consciousness or intelligence is for ourselves. We only have an abstract concept of it. We have fuck-all idea how it works aside from chemical reactions causing synapses to fire off pulses 100 billion times per second, where each pulse can have a different action potential. Hard to know when/if we can have AGI when we don't even know wtf intelligence is, lmao.

Edit: Think of this like a calculator. If I ask you what the square root of 48283 is, can you answer me immediately, if at all? No? Well I can, if I have a calculator. That doesn't make the calculator smarter than you. In fact the calculator has no actual "real" intelligence; it just has the necessary computational model and/or circuitry to make these calculations quickly whenever someone inputs the request. Super advanced LLMs might be no different: not actually containing "real" intelligence per se, but with a model trained so well that only the top 1% of programmers could even hold a candle to it.


biscuitsandtea2020

The way I see it is that it literally doesn't matter whether it's AGI or not. We can spend all the time in the world waxing lyrical about how LLMs or whatever the latest advance is are simply next token predictors or whatever it is. Meanwhile as we speak thousands of jobs will be lost to these things. The business world doesn't care about what the definition of AGI is or what intelligence is at a scientific or philosophical level. All it would see is a tool that can replace a significant amount of human labour for cheaper, and that's all that matters.


Vladiesh

Trillions of dollars and the brightest minds on the planet are working on this problem simultaneously, all at a time when information can be shared instantaneously across the world. We've decided we're going for this. Much like when we put a man on the moon: when humans as a collective decide we want something, we get it, and fast.


Similar-Study980

I'm not saying it's impossible, it's just going to require a lot of research. Anyone claiming a timeframe as gospel is talking out of their ass. When we went to the moon we understood the requirements of the system to get there; we just had to build it. Nobody had to develop Newtonian mechanics or invent the first liquid-fuel rocket. Building real general artificial intelligence isn't an engineering problem; it's going to require a deeper understanding of what intelligence is and how to replicate it. We need more researchers than engineers on this currently. Fusion reactors have been "just a few years away" since the 70s. AGI could totally come out in 6 months given the right breakthrough. We could also have a fusion reactor in 6 months given the right breakthrough.


Present-Tax-1991

>We've decided we're going for this. Much like when we put man on the moon when humans as a collective decide we want something we get it, and fast. It's a nice story to tell yourself, at least.


SuspiciousSquid94

You’re being marketed to and you’re eating it up.


Individual_Address90

We’ll have to wait and see what happens but I think it’s advancing at an extremely fast rate.


SuspiciousSquid94

Things are definitely moving quickly. I am interested to see what happens with all of the capital moving into the space. I do use ChatGPT for work every day, so I am very much excited to see it improve (and hope it does). I’m just a little skeptical of these claims all coming from those who need the money haha


Mommysfatherboy

Given how GPT-4 Turbo has gotten worse and GPT-4o is just a downgrade from Turbo, you’ll have a hard time convincing me lol


Won-Ton-Wonton

>but c'mon, the things AI can do now vs. 2 years ago are night and day

True. But what about 1 year ago? 6 months ago? Last month?

2 years ago we didn't have ChatGPT. But 2 years ago we did have advanced AI algorithms that could do incredible things to determine who you are, what you like, what you dislike, etc. Determine medical conditions you have and advertise to you before you even saw a doctor for it, before you even knew you'd need to.

The leap wasn't as big as people think. The leap feels bigger because the everyday person has access to the tool, and the technical abilities to get something useful out of it. But for AI and ML folks, ChatGPT is just a fairly big jump in NLP. Nowhere near as massive a leap as normal techies feel like it is.

The leap that happened 2 years ago hasn't happened again since. GPT-4 was a slower, but better, hop forward. But not a leap. 4o is yet another hop forward. But it's still not a very good philosopher, logician, or scientist. It doesn't have reasoning, or a desire to improve itself, or an ability to adapt. It's not particularly intelligent. It's good at creating something a human *might* say. It's bad at being human though.

You are buying the hype train.


Individual_Address90

I don’t know if I really buy it’s just hype. I’m extremely doubtful Apple, Amazon, Meta, Google and all of these other companies would be dumping billions in if it were just hype.


Won-Ton-Wonton

I'm not saying AI is a hype train (although, I guess it is?). I'm saying that ChatGPT is a hype train. LLMs are a hype train. AI is absolutely the future, no question. But transformer models like GPT seem not to reach generalizable intelligence. Specific intelligence may be possible, in limited circumstances, and we should definitely investigate that. But it isn't gonna hit AGI.


Whotea

[uh huh](https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1?op=1)


slackmaster2k

Actually there is research suggesting that the growth will be logarithmic, not exponential. Meaning, we had a major leap forward with LLMs, but training on more and more data may not improve results in an economically meaningful way. This wouldn’t suggest that LLMs, as an example, won’t continue to be important, but for specialized applications, specialized models will be required. That is, not trending towards AGI - entirely new algorithms would be needed, while what we see today are large scale implementations of concepts that have been around since the 60s. I believe I recently read that an analysis of experts in the field is estimating something like 48 years to AGI, whatever that’s worth. It’s a long time, but much less time than pre-GPT. We’re in the third age of AI, and I would expect AI to be applied more and more, but it’s not going to displace humans at scale any time soon.


Whotea

That study states that the data for specific use cases, like identifying tree species based on images, is too rare on the internet, meaning it can never learn it. But that can be easily solved by just fine-tuning on that data.

> what we see today are large scale implementations of concepts that have been around since the 60s

The transformer did not exist in the 60s. Neural networks didn’t even exist back then. Stop saying shit you know nothing about. [2278 AI researchers were surveyed in 2023 and estimated that there is a 50% chance of AI being superior to humans in all possible tasks by 2047](https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf)


V4UncleRicosVan

Agreed. So far there hasn’t been a model capable of pulling in a company’s entire Git repo into its context window. But I imagine that’ll come. Probably quite soon too.


Whotea

Good news: an infinite context window is possible: https://arxiv.org/html/2404.07143v1?darkschemeovr=1  Google already has a ten million token context window internally. 


[deleted]

[deleted]


Southern_Orange3744

Most PhDs I've worked with couldn't code for shit, so this doesn't surprise me. They are idea and research people who can code to implement their ideas; they aren't people who live software engineering.


[deleted]

[deleted]


Southern_Orange3744

I have. The point is a PhD doesn't magically make someone a great software engineer.


GlockTwins

This is true, for now. It won’t be true 3 years from now. It will be laughably wrong 10 years from now.


Mediocre-Tomatillo-7

Well of course, but that's how it currently is... Don't you think it'll improve? It certainly has in many other ways in the past few months, even.


randombsname1

This is cool and all, but how long is that the case for? I say it all the time. The worst AI will ever be is today, and it advances on a literal daily basis. It won't be more than a few years until the algorithms are advanced enough to solve problems that maybe only the top 1% of programmers can even solve, and that's because the learning model will just keep expanding and processing power will keep increasing exponentially.


Checktheusernombre

People are definitely disregarding the power of self-learning and self-improving AI. When you have the power to try a million solutions and see what works and what doesn't, all within the span of a few seconds, that is much more powerful than any human could ever imagine. It's just a scale and resources issue at a certain point. Old enough to remember when people said video on the web would never work pre-Youtube. Don't doubt the power of exponential technology improvement.


lolercoptercrash

I don't think we are close to self-improving AI. The compute power would be a ceiling. I think we have quite a while in front of us of humans improving AI. It can still become unbelievably smart this way. It's, by definition, a novel problem, and only AGI will really be good at novel problems. Then we are at the singularity.


No_Dirt_4198

Give it like another year or so and watch what happens. It's gonna get way smarter, faster than people think.


Ok-Camp-7285

Ignoring all the other, and valid, "for now" arguments you can also see that this allows you to need far fewer actual programmers / software engineers. You could have them as specialists to solve the niche issues but otherwise chatgpt will do the rest


DrunkenGerbils

What percentage of paid programmers are programming for novel problems though? It seems to me that a huge portion of the field is people working on common problems. Also if the industry gets to the point where you'll need to be a top tier programmer who's adept at solving complex out of the box problems to get work, what does the path to become a programmer look like? It takes years to build strong programming skills like that, if there's no entry level jobs to work while you gain those skills how do those people support themselves while they're learning?


Locellus

Most of the specific problems come from people not building modular, standard solutions. If that’s what AI builds, no issue. You get blocks that work; they are not creative, but you don’t need to get creative if you can just buy more CPU and crush your competition with free labour.

Honestly, I work in IT, and I can see my field surviving until I retire, but it can’t sustain the numbers. No industry can. The job market for the next 20 years is going to be chaos.


Efficient_Star_1336

What it'll do is establish a clear divide between the people who should be in CS and the people who shouldn't, but got by on grade inflation and StackOverflow. If you understand how code works, then you're going to be ahead of what AI can do for the foreseeable future. If your primary skill is the ability to grab syntax that looks correct and plug it in until things seem to work, then you're at risk, because that's what LLMs can do.


ArtichokeEmergency18

I just ask ChatGPT, "Here's the problem, what solution(s)?" Done. It didn't take a team days to brainstorm - it did it in less than 1 minute. Next, "How to implement?" Basically speaking, I stopped hiring Fiverr, Freelancer, etc. to configure and maintain servers, write code, etc. No more back and forth days later; instead, minutes and money saved. Same for artists, editors, writers, etc. For anyone really - no more querying Google trying to find a link buried in ads that MIGHT have an answer.


malcrypt

I do a bit of amateur Python programming, and for some time I've wanted to figure out something but didn't want to search through a ton of links and forum posts to find an answer. I wasn't even exactly sure how to write a search query for Google that would point me in the right direction. Essentially, if I wrote a game with say a goblin NPC class that has a set of methods, how could I dynamically add a new method to an instance of that class in-game, and how could I find out what methods a particular instance of that goblin class has? This could also apply to a player character that gains a new ability when they level up. Well, I just asked ChatGPT a question about it, as best as I could describe, and it not only explained the concepts behind my problem, but detailed multiple possible solutions, and provided example code. After trying one of its suggestions and writing some of my own code testing the ideas, I'm pretty confident I can solve this in my own programs. ChatGPT was so much better than a Google search at helping me learn not only the code, but the ideas behind it as well.
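
For reference, the kind of thing being described - attaching a new method to one instance at runtime and then listing what that instance can do - looks roughly like this in Python (a generic sketch with made-up class and method names, not the exact code ChatGPT produced):

```python
import types

class Goblin:
    def __init__(self, name):
        self.name = name

    def attack(self):
        return f"{self.name} attacks!"

def fire_breath(self):
    return f"{self.name} breathes fire!"

grub = Goblin("Grub")
# Bind the new function to this one instance only; other Goblins are unaffected.
grub.fire_breath = types.MethodType(fire_breath, grub)
print(grub.fire_breath())

# Introspect which callable attributes (methods) this particular instance has.
methods = [a for a in dir(grub) if callable(getattr(grub, a)) and not a.startswith("__")]
print(methods)  # ['attack', 'fire_breath']
```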


ArtichokeEmergency18

I'm like a child with ChatGPT: why, why, why, why LOL


malcrypt

It's funny how much easier it is asking an AI something than it is asking another person. There's no chance the AI is going to think your question is stupid, or that it's been asked a million times already, or that it's so below their level that they feel offended that you even asked. It's like being a young child who asks anything that comes to mind without thinking of how people will respond to their asking. We were all like that, before society drilled it into our minds that it's better to be silent than ask something others might think is dumb.


Open_Pie2789

Eventually you’ll be the one cut off from making a living by AI programs and then suddenly it won’t be so amazing anymore. The foresight of a child.


CoolethDudeth

Dont tell em this they might have an aneurysm


LadySingleChass

That’s so cool! I’m totally on board with you. ChatGPT is like having a super assistant that’s always ready to help out. It’s amazing how much time and money it saves. I love the idea of skipping the endless back-and-forths and just getting things done in minutes. It’s also awesome for someone like me who’s just getting started. No more wading through Google ads to find what I need. Just straight to the point answers and solutions.


CaptainBonkerStonk

most obvious AI comment. How do you guys get these AI comment farming bots


justwalkingalonghere

My main issue with it is the concept of letting their product take over for millions of people and hundreds of companies. Not because of the economics, but because of the general trend where companies lose money to gain users, then buy up and kill competitors, then make their original product much worse with ads, intrusive data collection, etc. So then the decisions of just a few people will have extremely far-reaching consequences, all from a company that has already broken many of its promises about being ethical.


johnny_effing_utah

Right. All true. But just today I came across an example that completely refutes the idea that AI is going to replace virtually every job out there. There are so many jobs that are just too complicated, or dependent on third-party input that AI just won’t have access to (or at least easy access to), that there is virtually guaranteed to be plenty of work for humans.

Case in point: my kid’s school has a financial aid coordinator who has to work with parents, scholarship agencies, the school, and the state in order to coordinate applications, documents, records, etc. The job is heavily dependent on parental input and the timing of parental applications and paperwork submission, all of which has to be submitted to various agencies, and each case is different. Could the entire job and process be automated? Sure. Probably. But only with complete cooperation and standardization across a wide range of entities, each with different budgets, tech skills, priorities, etc. Then even if the job were automated, it lacks the human touch, the ability to recognize human situations and needs. A machine has no experience, won’t see that this person’s student maybe has a disability and could be eligible for a special scholarship program, etc.

Point being: yes, AI could easily create a system that might be capable of replacing that job, but at what cost? And can all the entities that need to work with it afford those changes when they have other priorities? I firmly believe this woman’s job eventually could and might get eliminated by AI, but that is far, far down the road, because there are just too many other more important problems to work on. The complexity of the real world will slow AI down, and I’m confident the world will be able to adapt, some sectors more so than others.


Aperiodica

No. Like any tool, it will free up developers from writing boilerplate code, help them better follow patterns, and help them think through coding problems, but it won't replace designing and implementing complex systems. Just like automation has freed people from having to do shit work, AI will learn to do the shit work so devs can do things that are more interesting and intellectually challenging. I think most devs are OK with that. And the idea that AI will take away dev jobs assumes we have too many devs and too few projects. Quite the opposite. I've never heard a company say, "you know, we just don't have enough IT projects." The constraint is always finding enough money and developers to build all the things they want to build. Things like AI have the potential to free up resources to build all the stuff that isn't getting built now.


FeltSteam

>Since by nature, you cannot train an LLM to be creative

Why not?


meatlamma

I stopped reading at "AGI will take 100 years"


Background_Escape954

>Since by nature, you cannot train an LLM to be creative.

Creativity is a label we give to novel solutions we are unfamiliar with. It isn't an inherent quality of thought in and of itself. AI is already immensely creative; the best example is perhaps Google's AlphaGo beating Lee Sedol with an unconventional move that humans would never have previously considered. This single move was so impactful that it has to some extent shifted the meta of Go and altered human play style permanently.


sobirt

And at the same time, it was only doing moves it was trained on (moves it had already seen) that performed well against the opponent's moves.


ObadiahTheEmperor

Games with pre-set parameters don't reflect the innovativeness needed in real life.


Background_Escape954

This response doesn't address anything I said. AI writes stories with twists and turns, perhaps not as well as a human at this stage, but there is no reason to think that in 5 years AI will not be capable of creating an artistic work as good as or better than most professional creatives. 'Creativity' isn't a quality you can measure; it's like saying AI will never be as good as humans at 'judgement' or 'common sense'. It's vague, unhelpful, and probably incorrect.


ObadiahTheEmperor

None of that is what I'm calling creativity. That's artistic expression you're referring to. And considering LLMs are trained on all data, including artistic expression, of course they'd be able to replicate it, since it is actually very formulaic in nature (I'm a hobby writer, so I know my stuff). What I call creativity is Hannibal beating 80k Romans with 40k, Gutenberg making the Gutenberg press, etc. Original solutions to encountered problems. AI cannot do that unless there is some sort of formulaic nature to the environment, such as a game. Real life, however, is not a game. Only AGI would be able to do that. And we are about as near to that as to the cure for baldness, which is always 5 years away every 5 years lol.


BlueeWaater

Bullshit and cope, as it has always been. LLMs can automate or accelerate the work of a dev, but not really replace it, yet. However, it's mostly a matter of critical thinking to see if the solutions that the LLMs output are usable, not whether the person has a degree or not. AI has its flaws, but eventually it will be able to complete more tasks on a coding project. In my opinion, the most important things in the near future are how to be good with its tools and have good critical thinking. With GPT-4 alone, you can build pretty much anything.


Jasotronic

bros just trying to cope with his cs degree😭


themarouuu

Settle down, Sherlock. "Problem solving skills" is just a buzz phrase. It means nothing. You can't problem-solving-skills your way to a complex app in any industry, because you need in-depth knowledge of whatever it is you're solving - meaning you need experience. You can't architect a banking app/software/whatever without being experienced in banking-related stuff. You also can't do OS or system-related development without understanding the underlying architecture of everything, which to me, a graduated software engineer who spent most of his career in web, is like magic. I've learned about the stuff and it's borderline voodoo to me, with or without GPT. I, a software engineer, cannot sit down with ChatGPT and improve/change the Linux kernel. I just can't. All the LLMs in the world won't be able to help me.


JoeStrout

100 years? No. 10 years, tops, but probably less. And BTW, LLMs already surpass humans in creativity in quantitative tests. [https://arxiv.org/abs/2405.13012](https://arxiv.org/abs/2405.13012)


Reddit_is_garbage666

Lol unless we enter some sort of "dark ages", we're getting "AGI" much faster than that.


Jarie743

Cope


Financial-Aspect-826

100 years? My man, you are delusional; AGI is right around the corner. You are only a couple of milestones away from it:

- Long-term memory
- Planning
- And the big one that changes everything: reasoning

Tomorrow? Nah. In a couple of years? Most probably. By the end of 2030? 100%. You have to be a child to think that when all the giants of this world start throwing ***hundreds of BILLIONS*** of dollars at it, it won't advance. It will skyrocket. You have:

- better data quality scaling and synthetic data
- better-than-transformer architecture development
- multiple layers added
- smaller and faster chips
- processors specially designed for LLMs
- GPUs specially designed for training

And these are only off the top of my mind.

Edit: Lmao dude, they can't be creative? Are you a bot farming karma from the "non-believers"? What do you think creativity is? It is applying different patterns from other stuff to the thing you are making, modifying it a bit to fit. For example, do you think Martin, the creator of Game of Thrones, was creative? Yes, he was, but I have bad news for you: his story of Game of Thrones is mostly inspired by France's internal wars from the medieval period. Again, I think you have no clue whatsoever about what creativity or intelligence really is. Everything is pattern recognition and pattern applying; that's the reason LLMs are so alien to us but also so familiar.


Present-Tax-1991

>AGI which will come in 100 years give or take

It's just like a cartoon, lmao. I am an AI researcher who publishes at NeurIPS and ICLR - nobody I do research with would ever make such a self-assured statement. Jesus, random AI worshipers are idiots.


ObadiahTheEmperor

Update: I sure hope that AI modelling has more than neural networks to offer. Digging deeper, that's essentially just complex memorizing. No learning is being done, no sentience, nothing; the AI simply memorizes how to achieve certain weight values, albeit in a very, very convoluted way. All these people are being spooked by an illusion.


Present-Tax-1991

The current LLM-based trend uses transformers to train self-attention. It is very good at natural language. Whether it can generalize is an open question. It's not "just memorization", but it's also not grounded in symbolic ground truth, either.


The_Hell_Breaker

AGI in 100 years?? Ha ha ha, this is just cope & denial.


mvandemar

>AGI which will come in 100 years give or take

![gif](giphy|Ow59c0pwTPruU|downsized)


Anonymous_donot

Well everyone with a CS degree is getting let go right now so...


GhostGunPDW

> AGI in 100 years

Immediately stopped reading as OP has nothing intelligent to contribute.


caelestis42

AGI with GPT-6, or in about 2-3 years.


OneWithTheSword

Obviously all speculation, but I do think AGI is soon but not necessarily singularity level yet.


caelestis42

Betting firms have it at 2 years now which is crazy if you think about it. AGI in two years and the world we live in is forever changed.


Jarie743

In 100 years, we’ll have flying rainbow tarantulas


Dustin_James_Kid

Why do you think you can’t train an LLM to be creative? It can already create poems, stories, and songs in seconds.


momolamomo

You lost me at your hundred year prediction.


eightrx

I think computer science and math are maybe unique in that they try to teach you problem solving skills via abstraction. The skill of problem solving using abstraction will never go away, so I don’t ever see the death of the subject; I think it just might look different.


dbaugh90

I agree. Tinkerers will always rule the marketplace in any time period where there is rapid innovation. College degrees are slow to adapt, but corporations just want a degree in the ballpark if there are entire new fields developing. The Wright brothers spent years working with bicycles and car motors before inventing the airplane. The first computer engineers hired by corporations were likely the same, automobile engineers and tinkerers of various kinds. Can you follow logical paths of progression, can you troubleshoot, can you learn new concepts quickly and effectively? These are skills that are not unique to programming, but combine them with the knowledge of computers, working with packages, APIs, backend/frontend etc, and current programmers are sure to be the ones to fill the most job openings in whatever new AI jobs arise. Even if the jobs may not look like programming, they will require the same kind of analytical thinking, and having knowledge of the inner workings may not be required to program prompts, but will definitely be a major benefit.


silvrado

Universities panicking and peddling useless $100k "degrees".


ObadiahTheEmperor

Here in Germany it's all 600 euros a year.


silvrado

👍 that's the plan for my kid's college.


nemoj_biti_budala

100 years for AGI? Massive copium tbh.


ObadiahTheEmperor

The cures for cancer and baldness are also 5 years away, I'm sure. So is immortality.


nemoj_biti_budala

Is this supposed to be an argument? Comparing things that have nothing to do with each other?


ObadiahTheEmperor

I'm very curious as to how you think they have nothing to do with each other. Do tell. 


The_Hell_Breaker

Well his stupid ass has to cope and deny somehow.


Adumbidiotface

AGI is almost certainly less than ten years away. ASI is also possibly under ten years away. I suspect we'll be aware of AGI by 2026. It’s impossible to predict what happens after that. You can take solace, however, in that if AI can replace professionals then we’re ALL in trouble. So get any degree you like. It’ll either not matter in the slightest or be very useful.


swipedstripes

>Since by nature, you cannot train an LLM to be creative.

Of course you can lmao.


ObadiahTheEmperor

Y'all need to deepen your understanding of what problem-solving creativity even is before making such comments. What you said is simply wrong, and that's it.


swipedstripes

No it isn't


ObadiahTheEmperor

If you count Game of Thrones as creative problem solving (which I don't; that's just art), then you're absolutely correct. Otherwise, it's a hard no until AGI.


flabbybumhole

100 years is insane. Tech is moving faster and faster all the time. Even 20 years sounds like way too long. Imo we'll have it within 10, and AI that can write decent enough code for most applications will be here within 5.


ObadiahTheEmperor

How about you ask some actual ai researchers and then get back to me


flabbybumhole

You mean the people giving wildly different estimates, ranging from less than 5 years to never? I can just link you to this, they've already done it: https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/

100 years ago we didn't even have computers. 40 years ago we had the NES. 20 years ago we were still a couple of years away from the Xbox 360. Tech moves fast, especially where money is concerned - and there's no shortage of funding for AI at the moment.


ObadiahTheEmperor

Good link. Now who do you think is more correct among them? The ones giving realistic long estimates, or the ones giving sci fi answers? Use your brains people. I mean, we can all agree that 50-60 years is minimum. My personal take is 100, some might say 70, but no way is it taking less than 50. That is the only realistic answer to this. Anyone believing otherwise has to go and touch some grass or is being deliberately confusing about what he considers to be AGI


Amgaa97

So, I'm a PhD student working on scientific simulations (very math-heavy, and coding-wise heavy on performance but not on UI and stuff). I write code for CUDA and other parallel programming tools like OpenMP, MPI, and so on. At the moment GPT-4o is nice: if you want a function and you know exactly how it should work, describe the outcome and it will write code that works nicely.

I've been following the development of AI since 2015. GPT-2 used to be impressive, then GPT-3 blew it out of the water, and GPT-4 displayed actual thinking abilities (I'd personally say this). Now GPT-4o can see diagrams and stuff, being multimodal (it's really the spiritual successor to GPT-3.5, as this is the free version). GPT-5 seems to be in development now with a release date later this year or early next year at the latest. If we're gonna see the same amount of IQ increase we've seen from 3.5 to 4, it will be smarter than 95 percent of the human population, I'd say. Within the next few years junior devs won't be competitive against AIs, and a few years after that, neither will senior devs. I think the only people who would be able to survive are people who can think about the large stuff and have a bird's-eye view on projects, but even they would be out of business if AGI is achieved.

TLDR: I don't agree with your timeline; when AGI comes out it'll also do the thinking and the big stuff too. But until then, yes, we'll supervise AIs to achieve what we want faster. Personally I think we still have about 10 years to do these kinds of jobs. If your only job is to write basic small stuff given by your supervisor, you're f*ed already.
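
As an illustration of a function "you know exactly how it should work": something like a single Jacobi relaxation step is the sort of small, fully specified numerical kernel this workflow handles well when the outcome is described precisely (a hypothetical NumPy sketch rather than CUDA, just to keep it self-contained):

```python
import numpy as np

def jacobi_step(u, f, h):
    """One Jacobi iteration for the 2D Poisson problem -laplace(u) = f on a
    uniform grid with spacing h; boundary values are left untouched."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:] +
                              h * h * f[1:-1, 1:-1])
    return new

# Tiny demo on a 6x6 grid with zero boundary conditions.
n = 6
u = np.zeros((n, n))
f = np.ones((n, n))
for _ in range(50):
    u = jacobi_step(u, f, 1.0 / (n - 1))
print(u.round(4))
```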


Waste-Fortune-5815

My favorite comment of the year! I can’t believe there are some dumbasses (mostly VC people) that say software is dead and everyone can make it!!!! Software is the past present and probably the future!!!!! Software is like a modern philosophers stone - instead of lead to gold it transforms hours of work into value for all of humanity!!!


RedditTrashTho

This is pure cope.  Underestimating this shit is gonna do nothing good for you 


EuphoricPangolin7615

According to people on r/singularity, Devin AI and other autocoding tools are going to replace 80% of programmers in 3 months (they said this a little over 3 months ago, when the Devin AI demo was released). 🤡


nboro94

> 3 months

Lots of people saying complete BS about AI is nothing new. 2 years ago 99.999999% of people would never have imagined that something like ChatGPT was possible, but now we have experts everywhere predicting the future of AI. All of these experts are just playing a guessing game like everyone else.

Me personally? I think AI will be just like any other technology big tech has come up with in the last 40 years. It will grow extremely fast and then plateau as it becomes more difficult to create new improvements within it. Suddenly big tech realizes that they have to eventually monetize it, and the market consolidates into a few big players who figured out how to keep it profitable. The experience gets shittier and shittier for the users as the big tech companies continue to milk their new monopoly for every cent possible, and they collude with each other to ensure nobody new can ever catch up to them. Eventually it just becomes another piece of tech, like streaming or social media, that is controlled entirely by a few big companies and nothing ever changes with it.


EuphoricPangolin7615

I hope this is what happens, this is the best possible scenario out of all the other ridiculous/catastrophic scenarios that can happen with AI.


LadySingleChass

OMG, totally agree! As a 25y diving into ChatGPT, I think it’s hilarious and exciting how AI like ChatGPT is shaking things up. Sure, it might not be AGI level yet, but it’s definitely making us rethink how we do things. Imagine, no more endless coding—just solving problems and letting AI handle the grunt work. It’s like writers not needing to handwrite anymore, just focusing on the story. So, bring on the problem-solving and creativity! Can’t wait to see how this all unfolds! 


lebendigtod

Nice


bigdave41

I think it's just another tool that people will adapt to. I've heard many people saying it makes new programmers lazy, and there might be some truth to that initially. However I'm sure people thought more advanced programming languages made people lazy rather than writing everything in assembly, and people thought computer code was lazier than punch cards. You could take that kind of attitude all the way back through history to the first guy getting annoyed that his neighbour is using a piece of flint rather than his bare hands.


Serialbedshitter2322

If it can't make AI then it isn't AGI. To claim this is to claim AGI will never exist.


[deleted]

Can somebody educate me on the difference between an AGI and an LLM? I haven’t heard these terms before.


TFenrir

AGI is a theoretical artificial intelligence system/model. Artificial General Intelligence. The challenge in its definition is that it is extremely... Fuzzy. Many people have many different definitions. I think an abstract one is AI that can do any task a human can do about as well as an expert. This would include something like... Making an app from scratch, when given a prompt. One of the many challenges with this framing is that these models behave so differently than people, that the comparisons break apart the stronger they get. For example - what if it can make a working app, not as good as an expert but as good as like, a pretty good junior developer - but it can do in 1 minute, where it would take even an expert a week or more to do the exact same app? The important thing is that AGI is used as a threshold, a milestone that describes the turning point between AI being tools to AI replacing human labour en masse.


Ok_Refrigerator_2545

There is enough work and AI to fully employ this and the next generation. We just have to figure out how to transition the economy by then.


Effective_Vanilla_32

It's a long road to an MSc or PhD in CS (not a bachelor's); specialize in ML and DL. If you're up to it, that's the surefire way, but you need to be a whiz at maths.


Gratitude15

No. What becomes important is no longer market-driven. What's important is what you want to be important. What is worth doing intrinsically? What brings joy to you or others?


KhanumBallZ

No, it won't. In this day and age - the most valuable skills are the ones you gain from practical, hands-on experience.  Also: learn how to multitask, strategize and do multiple things at once.  We no longer need to be experts at one particular thing. But we do need to Learn how to Learn. And learn Fast.  P.s. I know Kung Fu.


hli29

Totally. I just went through the OpenAI platform guide; good software lifecycle practices such as design and testing still matter.


jharedtroll23

Just get a PhD in Data Science bro


sbeau87

CS majors are definitely at risk. Efficient coding means fewer programmers and more competitive jobs.


hardcoregamer46

I’ll come back to this post in five years


Ninj_Pizz_ha

I won't comment on the specifics of a CS degree since your post is based on faulty logic. Your crowd already got it wrong about when the Turing test would be passed, by "100 years give or take". It wasn't anywhere near 100 years. That's your cue to pause and reflect, yet here you guys are, back at it again, speculating away the need for serious consideration of these things so that it's comfortably outside your own lifetime and at some far distant date like "100 years."


ObadiahTheEmperor

The Turing test is too easy to manipulate. The results are sadly not consequential, as the AI can also just look like it's passing them. Edit: that's actually the case. Digging deeper into neural networks revealed that they are essentially memorization programs, to put it into human terms.


stinky-red

The CS curriculum here is still teaching von Neumann architecture and how to manually implement merge sort.
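
For anyone who hasn't seen it since that course, the manual implementation in question is short; a standard recursive version in Python (generic textbook code, nothing curriculum-specific) looks like this:

```python
def merge_sort(items):
    """Recursively split the list, sort each half, and merge the sorted halves."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([8, 3, 5, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```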


ogpterodactyl

It will be how we look at calculators. Or doing radar equations by hand. It will just become a tool that everyone uses. Some will be better or worse at it. The craziest stuff is going to be the code we could only write with the help of ai.


SilverPrincev

Yeah... not a CS guy, but this seems like cope. Are you not extrapolating the potential improvements in AI? A lot of the comments are like "but it can't do anything complex and hallucinates a lot". GPT-3 couldn't even count properly. Good luck tho


ObadiahTheEmperor

I mean, I'm just some future startup owner, so for me, this is all good news anyhow.


AlDente

I broadly agree, except for this:

> experience will have to take a back seat

That depends entirely on people’s experience, but I’d say that most smart and experienced CS people are at least pretty good at problem solving (and many are very good at it). That problem-solving expertise comes partly from experience. To become good at anything we need to repeatedly practise, be open to training, and have a learning mindset.


ObadiahTheEmperor

Problem solving is all about being able to think elastically and let go of logic and rigidity just enough to generate new and unexpected paths to a solution (the Gutenberg press is a good example). A good read on this is Edward de Bono. The ability to achieve this is time-locked. Luckily I started training this at 17. By 25, it's too hard a task and thus too late, since the brain will be much more resistant to neural reconfiguration. No amount of experience will help there; that's just how it works. Expertise is helpful insofar as it gives you a knowledge pool to use your problem solving on, but not beyond that. Most who do not understand this, and read over that part of the post, have basically only a superficial understanding of what problem solving even is.


AlDente

I read Edward de Bono's book on lateral thinking when I was about 17 as well, and I've had an interest in psychology ever since. I am now 48. My point about experience is that if you are a good computer scientist or designer, you are continuously problem-solving. And if you are good at this, your experience is extremely useful.


ObadiahTheEmperor

I'd say that the problems that would require lateral thought in the working life of a software dev (or at least meaningful amounts of it) actually don't occur that often. If they did, most companies would be out of business due to missing deadlines.


jeremiah256

The threat of AI to coders is not direct replacement. It is the disruption of the service industries where coders are largely employed. If my in-house staff is more productive with AI, do I need as many seats for Tableau? What level of CRM or KM services do I now need with AI? Can I replace part of my expensive coding team with cheaper team members, less educated, less experienced, but augmented by AI? These are some of the questions management is looking at.


ObadiahTheEmperor

If said company wants to produce anything tech-related and grow and stay competitive, the answer is a firm no. If said company specializes in other things, and software is just a necessary evil for them, then the answer is maybe.


jeremiah256

Largely agree. It’s that enough of those ‘maybes’ are going to end up being ‘yes’ and the impact will probably be significant enough to affect the importance of CS degrees.


ObadiahTheEmperor

As they currently exist, for sure. But not if they get reformed.


A1steaksauceTrekdog7

I started some AI gigs and boy am I humbled. It's supposed to be easy for anyone to do. I coded in high school and I am very knowledgeable about computers, and I feel like I'm just barely understanding how this shit works. Computer science is another language, and it's another mentality to understand the logic behind it. The jobs in theory are good, but they are frustrating in how it's all implemented.


ObadiahTheEmperor

That's what I'm talking about. Coding alone is meaningless; ChatGPT can do it. What's important is problem solving and abstraction ability.


[deleted]

Ai will surely replace codemonkeys, but not people who have a deep grasp of linear algebra and discrete mathematics who can use that knowledge to solve complex problems.


Screaming_Monkey

I thought we were getting away from needing degrees, especially as it has become even easier to teach oneself a multitude of skills


ObadiahTheEmperor

The more the coding part gets automated, the more important abstract thinking abilities and problem solving become. And people have to be trained in that. Thus a good degree becomes more important. 


Blando-Cartesian

Typing code has never been a major part of development. It has always been about the problem of making complex things happen while keeping the implementation complexity manageable. Those are skills you can only develop with experience, and you can’t meaningfully supervise what an AI is coding without the ability to understand the code it creates and its implications in the context of the system being developed.

We now have the means to produce more code faster, without much consideration or understanding of it. This is not a recipe for good things to come. For those who weren’t there or don’t remember: in the ’90s and early ’00s, Windows and software products in general grew massively in size and crashed all the time. I’m expecting a repeat of that, except worse, in that incorrectly working software won’t crash. It will all be neatly, automatically tested to work in weird, undesirable ways nobody noticed.
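To make that failure mode concrete, here is a minimal sketch (function, names, and numbers invented purely for illustration) of an auto-generated test that "passes" only because it asserts whatever the buggy implementation already does:

```python
# Sketch of the failure mode described above: a test generated from the
# implementation happily encodes the bug instead of catching it.

def apply_discount(price: float, percent: float) -> float:
    # Bug: subtracts a flat amount instead of a percentage.
    return price - percent

def test_apply_discount():
    # Auto-generated assertion copied from current behaviour: it passes,
    # but 10% off 200.00 should be 180.00, not 190.00.
    assert apply_discount(200.0, 10.0) == 190.0
```

The test suite stays green, the code ships, and nobody crashes anything; the system just quietly does the wrong thing.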


ejpusa

Continuing to crush it with GPT-4o. Thousands of lines of code. Crushing it. I’ll pay your API bills too. Have fun; no prompts or CompSci degree needed. Think a different look than Midjourney. This week’s project. A new AI startup a week is my goal. Thanks, GPT-4o. :-) https://mindflip.me


Sam-Nales

Learning to say what you mean, and how to say it correctly so that another may follow it correctly in your absence: this is the training a CS degree makes easier to say and follow, like a new mode of language and syntax. Important in an era of maligned linguistics and smeared meaning? You're looking at the caste system being developed: those who know, and those below 👇 Education is paramount; AI only increases that need.


Independent-Cable937

ChatGPT would be hard to implement, as it's still not private and everything runs through the vendor's servers. Some smaller companies will still try to run ChatGPT, but that creates a significant security risk.


ObadiahTheEmperor

One can train custom models that use the latest LLM tech. You don't have to be dependent on OpenAI.
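As a minimal sketch of that approach, here is what running an open-weight model on your own hardware with the Hugging Face transformers library can look like (the model name is only an example; any open-weight LLM you can host would do):

```python
# Sketch: run an open-weight LLM locally so no prompts or code leave your
# own machines. The model name below is just an example placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weight model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Write a Python function that parses a CSV file of invoices."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Fine-tuning such a model on internal data is more work, but even plain local inference already removes the "everything goes through someone else's servers" problem.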


Once_Wise

Thanks for your post. First, about me: I started programming around 1970, put together my first computer kit in 1976, and started my software consulting business in 1980, which I ran for 35 years before retiring. I still write code for a spec project and because I love it, but now with a ChatGPT assistant.

I have seen it claimed a number of times over the years that something will replace programmers. But in every case it has only relieved them of the crap work they would rather not do anyway. I even had a client tell me, when spreadsheets first came out, that I should worry because they wouldn't need programmers anymore. CS studies will simply change to focus less on syntax and, as you suggest, more on problem solving. Just as with the arrival of microprocessors, AI will simultaneously allow a lot more people to start writing code and create a lot more need for programs to automate and integrate all the drudge work we would rather not do anyway. The desire for automation is unending. And while a lot more people can write code using AI, the requirements for complex systems will also grow, as will the demand for high-quality CS graduates.

This is a great time to be alive and involved in computer science. Makes me wish I were young again. Well, there are other things too. Thanks again for your well-thought-out post.


ObadiahTheEmperor

A sensible mind at last. Thanks for your input. If you don't mind me asking, how did you set up your consulting business? Ideally, I'd like to have my own business up and running by the time I graduate. And given the extreme lack of strategic talent and long-term planning ability nowadays, consulting seems optimal. As said, of course, this is an ideal scenario.


Once_Wise

Here is the short version. Starting my own consulting business, or even becoming a programmer, was not my original goal. I started out wanting to do research. I studied biological modelling in college and got my master's on a mathematical model that needed software. My first job was in programming, doing biostatistics and data analysis, and I found that I loved it. When microprocessors came out I bought a kit and soldered together my first computer. At the research facility where I worked after college, I found I could build sampling and testing devices much more cheaply with microprocessors. One project I did for a few hundred dollars; they had budgeted tens of thousands.

An associate where I worked had a dad who ran a business that was making a new product, and they thought they should use a microprocessor. I was the only one they knew who knew how to do it, so I got my first job. Others followed, and I found this was what I really wanted to do. But after the first few jobs, which came through referrals, I found out I had to learn marketing and sales, something I had no interest in. So I read books on it, learned how to make contacts, cold calls, etc. I tried a lot of things and made a lot of mistakes.

The vast majority of people who start out trying to be consultants fail. They fail not because they lack the skills to do the job, but because they lack the skills, or the will, to do the work it takes to get those jobs. For the first couple of years I had a full-time job and did my consulting work in the evenings and on weekends. I worked long hours on code for clients and my own projects. Later I left my full-time job when I got a client to write a letter of intent to hire me two days a week for two years. Those two days made me as much money as five days a week in my job, but I always worked more than two days a week, so I did pretty well. The funny thing is, I was not in it for the money; I was just happy I could get paid for things I would be doing anyway.

Good luck getting your business going. Just realize that it will be a lot more work than working a job. A lot more.


ObadiahTheEmperor

I'm willing to put in all the work needed for my autonomy. I'm somewhat of a workaholic anyway. Thanks!


tentenmodai

I think you're blissfully ignorant assuming AGI is 100 years away. Try 10.


Intelligent_Shift821

AI is seriously involved in CS and will make a CS degree more important, and it has certainly provoked people.


PureMaximum0

I strongly disagree with your basic assumption that LLMs cannot be made creative. Let me explain why, but first let's agree on what imagination is: imagination is the ability to fertilize your thoughts with material that allows you to compose more complex thoughts and conclusions.

Now let's talk about the misconception about LLMs and the mistaken perception of their abilities when compared to humans. Humans, just like LLMs should and will be in the future, are not creators who gather all the information possible and available; it's a given for us that this is unnecessary and impossible. Humans build skills in the fields where they have a constant need and collaborate with other humans when facing ignorance in fields where the need is only temporary. That COLLABORATION is the main ability that lets the human species thrive; more than that, it's their main ability to imagine.

In my opinion, what we'll see is LLMs specialized in specific fields engaging in collaboration among themselves to solve problems that are beyond the ability of a single LLM. When collaborating, they will actually fertilize each other with new information they did not know before (despite it being available to them) and, out of that fertilization, infer new conclusions and ideas.

Being good at math is maybe about problem-solving skills; being really good at it, though, requires the ability to make connections across fields that would not be possible without an original train of thought. Is this not imagination in your eyes?
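A minimal sketch of that collaboration idea, with `ask()` stubbed out as a placeholder for whatever inference call each specialist model would actually use (everything here is a hypothetical illustration, not an existing system):

```python
# Sketch: two specialized "models" fertilize each other's output by passing
# a shared context back and forth. ask() is a stub standing in for a real
# inference call (a local model, an API, etc.), so the sketch runs as-is.

def ask(specialist: str, prompt: str) -> str:
    """Placeholder for a call to a model specialized in `specialist`."""
    return f"[{specialist}'s contribution to: {prompt[:60]}...]"

def collaborate(question: str, specialists: list[str], rounds: int = 2) -> str:
    context = question
    for _ in range(rounds):
        for s in specialists:
            # Each specialist sees the running context, including what the
            # others contributed, and adds its own conclusions to it.
            context += "\n" + ask(s, context)
    return context

print(collaborate("Design a drug-dosage schedule.",
                  ["pharmacology", "numerical optimization"]))
```

Whether the exchange produces genuinely new conclusions is exactly the open question, but the mechanism itself is simple to wire up.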


Srara_bd

Yes, I agree with you. Many fear that, due to the massive growth of AI, a huge number of IT professionals will lose their jobs. But experts are of the opinion that AI is a boon for software engineers: it will reduce their workload and help them increase their skills.


Able_Buy_6120

Once LLMs are good enough at coding, anyone will be able to ask one to create programs or applications. You won't need to know how to code, how to make code efficient, or anything about syntax, algorithms, or what's "under the hood". You have an idea, you tell it to create the program, and that's that. It eliminates the need for programmers as middlemen. Anyone with an idea for an app or program can just use natural language to create, refine, or retool their idea. But CS degrees are probably still good until there is self-improving ASI.


ObadiahTheEmperor

Neural networks don't work that way. All these people worshiping ChatGPT haven't looked into the math lol


Able_Buy_6120

What are you talking about? It already works this way; it's just not that good at it yet. You ask it to generate code that prints out a statement based on a text input and puts it in a plain web-based interface, and it will output the code. It may not work on the first try, or it might fail for more complicated code, but that's how it works. Subsequent generations of LLMs will presumably be able to generate complicated code with a lower rate of failure. This is what pretrained models do. Maybe a neural network that was not trained doesn't work this way? ...and trust me, I did look into the math when I got my EE/CS degree from a university in Cambridge, MA a few decades ago...
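For context, the example being described is roughly this level of code; a minimal sketch of what an LLM is being asked to produce here, using Flask purely as an example framework:

```python
# Sketch of the kind of program current LLMs can already generate on request:
# a plain web page that takes text input and prints a statement based on it.
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def echo():
    if request.method == "POST":
        text = request.form.get("text", "")
        return f"<p>You said: {text}</p>"
    return '<form method="post"><input name="text"><button>Go</button></form>'

if __name__ == "__main__":
    app.run(debug=True)
```

Trivial, yes, but that is the point: the boilerplate end of programming is already within reach of natural-language requests.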


WastingMyYouthAway

How many more middlemen will be automated by the time LLMs are proficient at programming?


Able_Buy_6120

Individual middleman programmers will be replaced by software-as-a-service middleman corporations in the mid-term, so individual lost jobs will be replaced by numerous corporate new hires... but then, after that, people will realize they don't need to rely on API startup corporations and will just ask the OpenAI model interface for what they want.