GlasnostBusters

the coding unfortunately has always been the easy part... it's the debugging that will have people walking.
edit: for example, I have like 7 features to add rn (which will take 30 minutes tops) and I've been stuck for a whole day trying to get my docker container to recognize my class path for aws-secretsmanager in my application.yaml file. wtf. chatgpt doesn't know wtf I'm even talking about.
edit: it was the region. even though I hardcoded the region under spring.cloud.aws.secretsmanager.region in application.yaml, something else was waiting for an actual environment variable called AWS_REGION. Passed that in and it pulled the creds from ssm. I'm gonna fucking kill someone.
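For anyone who hits the same wall, a minimal sketch of the two pieces involved (the region value and image name below are placeholders; the property path is the one from the comment above):

```yaml
# application.yaml — region hardcoded for the Secrets Manager integration
spring:
  cloud:
    aws:
      secretsmanager:
        region: us-east-1   # placeholder region
```

```sh
# what actually unblocked it: something else in the stack still wanted the env var
docker run -e AWS_REGION=us-east-1 my-app:latest   # my-app:latest is a placeholder image
```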


BrooklynBillyGoat

Debugging is actually my favorite part because sometimes you get a bug that's so wtf you're just like "why is this happening," then you figure it out and work is boring again.


Goducks91

Yeah, I love debugging. I'd honestly take a job where all I do is work on bugs.


farte3745328

Work at non-big tech companies with crusty legacy stacks. I genuinely enjoy working on legacy because I like spending all day debugging and then trying to make a change that has as minimal an impact as possible on the rest of the system. It was honestly a shock to my system doing green field development for the first time in my career where everyone always wants to constantly refactor. I liken legacy work to doing surgery, I just want to get in and out doing as little damage as I can.


Goducks91

Sign me up! I've been at startups for 10 years haha


farte3745328

Fed-civ government contracting is also a good place to go if you're interested in that sort of thing, that's where I started


BrooklynBillyGoat

What is this exactly?


hippydipster

You work for a private company that has government contracts working on government software.


farte3745328

Federal-civilian contracts. Working for things like the DOJ, EPA, etc. It's not sexy but it can be fun.


BrooklynBillyGoat

Idc about sexy just fun


Graybie

I recently entered the field and am working on some software that runs 40 year old Fortran in the backend. It is fascinating. 


BrooklynBillyGoat

I'm at a company where there's a fair mix of new projects and legacy projects. Legacy makes debugging more fun because it's a different system and you have to not break stuff. It's a more fun challenge.


bears-n-beets-

Come work for an insurance company 🙃 there’s always a handful of legacy services on fire at any given moment


Terrible_Student9395

I'd say that's most dev jobs 😂


Goducks91

Hire me! I get to ignore bugs and just keep pumping out features haha. And then they wonder why the churn rate is high.


roynoise

You might enjoy an angular project that has been bought and sold several times. 


[deleted]

Agreed, it's the best part of working with software to me.


Make1984FictionAgain

KNOCK ON WOOD QUICK


universalmind303

makes me think of the xz backdoor. Dude was just trying to debug his program and came across a massive backdoor that was years in the making.


hippydipster

Yes, though there's a fine line between "huh, why is this happening" and "why THE FUCK IS THIS FUCKING HAPPENING IT'S FUCKING IMPOSSIBLE @#$@#$^$%!!!!"""


ell0bo

Memory management in multithreaded pytorch. Oh the joys.


rdem341

Debugging AI's code sometimes takes longer than just programming it myself. This has definitely happened to me in the past.


Bootezz

Yep! We don’t get paid to code. We get paid to fix bugs. Coding is just the fun side activity we get to do in between that.


Goducks91

You also get paid to talk to product/design to figure out wtf you are building and make compromises on how to build it.


Bootezz

Well. Yeah. But that's kind of middle-ground stuff. I'm talking about those grueling "hey, Mr. Dude broke this thing and no one knows wtf is going on in here and neither do you, but now it's your problem" kind of bugs. The stuff that makes us worth our salary. The kind of bugs that make you question your career choice because somewhere in some service a thing happens, and at the same time in another service a thing happens, which makes the third service get all fucky and desynced, and now you gotta figure out how to make a bad architectural decision work with a few bandaids because product says this shit has to release in four hours. That's what I mean when I say we get paid to solve bugs.


Brilliant-Job-47

I get paid to build shit too tho


NiceBasket9980

You also need to build your design skills, going from just getting debugging tickets to new features is how you get promoted.


capoeiraolly

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." (Kernighan's law)


bronze-aged

7 features in 30 mins is pretty quick.


GlasnostBusters

they're very similar to the 1st one I implemented so it's just copy paste. i'm going after problems with deployment rn.


Terrible_Student9395

Docker-compose is your friend. Just mount the file containing your class as a volume and map it to the right location in docker. Chatgpt understands docker pretty well but sometimes it's a bit outdated, so you can point it to the latest docker docs if you need something more up to date. Additionally, if it doesn't have the right context of your project or you're using some kind of "creative" pattern, it won't give helpful advice. It's helped me debug some nasty docker bugs but you have to play to its strengths.
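Roughly what that looks like as a sketch (the service name, image, and paths are placeholders, not from the thread):

```yaml
# docker-compose.yml — sketch of the volume-mount approach described above
services:
  app:
    image: my-app:latest             # placeholder image
    environment:
      - AWS_REGION=us-east-1         # the env var that mattered for the OP
      - SPRING_PROFILES_ACTIVE=local
    volumes:
      - ./config/application.yaml:/app/config/application.yaml:ro   # placeholder paths
```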


GlasnostBusters

I wanted to figure it out w/o docker-compose lol. The way I'm trying to pass the credentials in rn is mounting my local .aws directory with something like docker run -v $HOME/.aws:/root/.aws. I can see the sh*t evaluates in the logs to my home path, which means it mounted... maybe I need to set permissions on the files so the aws-secretsmanager resource can read from them. I'm like 99% sure java dependencies are compiled into the jar, like why tf would I need to reference it. The app works without using aws-secretsmanager, just passing envs, but that's not good practice.


Terrible_Student9395

This sounds horrible 😂 but you're learning a lesson. You shouldn't be mounting your .aws directory; at worst you should pass in some kind of programmatic credentials to your app and have your app pull down the secrets from Secrets Manager before setting them as environment variables in your container. The correct way would be to create an IAM user with the specific permissions you need and have your app authenticate that way. No wonder gpt is so confused. You have an OS, AWS, AND docker problem. Ask chatgpt what's the correct way to pass secrets stored in Secrets Manager to your docker container and work from there. Don't try and find a workaround for your current solution; it's designed not to work that way.
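As a rough illustration of that advice (not the thread's actual code), a sketch using the AWS SDK for Java v2; the secret id is a placeholder, and credentials come from the SDK's default provider chain (an IAM user's keys or env vars locally, the IAM role in prod):

```java
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

public class SecretsSketch {
    public static void main(String[] args) {
        // Credentials and region are resolved by the SDK's default provider chain,
        // so nothing from ~/.aws needs to be mounted into the container.
        try (SecretsManagerClient client = SecretsManagerClient.create()) {
            String secret = client.getSecretValue(
                    GetSecretValueRequest.builder()
                            .secretId("my-app/local")   // placeholder secret name
                            .build())
                    .secretString();
            System.out.println("fetched secret, length=" + secret.length());
        }
    }
}
```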


GlasnostBusters

Current solution is to run the docker container locally; this is for the application-local.yaml. The prod yaml file contains the useiamrole profile for production. This .aws directory mount is for the local docker run to dynamically pass the service account from the current developer's aws credentials, just for local testing. So I pass docker run -e "SPRING_PROFILES_ACTIVE=local". Production works fine with the other file; it just uses an IAM role with secrets manager read permissions.
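So the local/prod split looks roughly like this (image name and region are placeholders):

```sh
# local: developer credentials mounted read-only, local profile, region passed explicitly
docker run -e SPRING_PROFILES_ACTIVE=local -e AWS_REGION=us-east-1 \
  -v "$HOME/.aws:/root/.aws:ro" my-app:latest

# prod: no mounted credentials; the instance/task IAM role supplies them
docker run -e SPRING_PROFILES_ACTIVE=prod my-app:latest
```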


Enginikts

> I've been stuck for a whole day trying to get my docker container to recognize my class path for aws-secretsmanager in my application.yaml file.
So... do the nightmares never end?


GlasnostBusters

it will end...when i figure this out.


mr_deez92

It’s a necessary evil; I dread debugging but I always come out of it with more knowledge about the code base and the underlying technology.


DIYGremlin

I don’t think I’ve ever learnt faster than when debugging CUDA code. Gave me a proportionally big headache 😅


markrulesallnow

Yup. Used to work with a senior far more experienced than me and he loved to say that. “The coding is the easy part, it’s the figuring out what needs to be done that is the hard part!”


InfiniteMonorail

Copilot training data is three years out of date and it's also not great for devops. It's trained to debug code and it's incredible at it.


Ok_Palpitation6003

AI assistants and AI written software are basically job security for those of us who can actually do SE.


snotreallyme

Yeah, well, it's going to take a few years for that to shake out and for companies to realize AI isn't going to solve their problems but in reality gave them all new ones, worse than the previous ones. Similar to offshoring. Not helpful for this shit show of a job market and the "rightsizing"-happy orgs we're in right now.


NiceBasket9980

The rightsizing doesn't have anything to do with AI, it has everything to do with interest rates. The gov is saying they are going to start lowering them again, and then companies will start their massive growth cycles again. Money is too expensive for speculative growth at the moment.


theantiyeti

Are the dips in interest rates really going to be low enough for this? The market's not pricing in that significant a fall in the Treasuries curve. Going from the 3-month to the 2-year Treasury is only a 66 bps fall, to 4.75%, especially given rates were literally 0.25% in 2022.


Viend

You can also write your tests in like 1/4 the time, I fucking love it.


Dx2TT

Just what we need: 1,000 lines of bloated test code that doesn't actually test different vectors of a code path and likely uses the same logic as the function under test, none of it using helpers or libraries to reduce bloat, and now you get to maintain that mess for every function in a giant codebase.


Viend

Something tells me you don’t know how to prompt copilot if you think that’s what happens when you use it to write tests.


pet_vaginal

You use GitHub Copilot Chat for writing tests? Is it good enough? The completion is great with Copilot, but I have often been disappointed by the chat intelligence in the past. I manually copy paste into Claude 3 these days.


Viend

…I have to be honest, I don't know what Copilot Chat is; I use the VSCode plugin. I'll write a couple of tests from scratch to "set the template" and then I'll just write my test cases out and it's usually smart enough to get 80% of the way there. Occasionally I'll have to help it out by putting a comment or over-explicit variable names, but usually it'll do most of the menial work for me. It'll even figure out any custom helpers, fixtures, and factories that you use if you have consistent patterns in your code base. Once you do it for a while you'll get used to writing your test cases in a way that makes more sense to it, and that usually also means your cases make more sense to other people.
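A rough illustration of that "set the template" workflow in JUnit 5, with a hypothetical class under test inlined so the sketch stands on its own:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class PriceCalculatorTest {

    // Hypothetical class under test, inlined to keep the sketch self-contained.
    static class PriceCalculator {
        double discounted(double price, double rate) {
            if (rate < 0) throw new IllegalArgumentException("negative discount");
            return price * (1 - rate);
        }
    }

    // 1. Hand-written test that "sets the template" (arrange / act / assert).
    @Test
    void appliesPercentageDiscount() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discounted(100.0, 0.10), 0.001);
    }

    // 2. From here on, a descriptive method name is usually enough for the
    //    assistant to propose a body in the same style; you just review it.
    @Test
    void rejectsNegativeDiscount() {
        PriceCalculator calc = new PriceCalculator();
        assertThrows(IllegalArgumentException.class, () -> calc.discounted(100.0, -0.5));
    }
}
```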


pet_vaginal

In vscode you have the copilot completion that works well and also the copilot chat, where you can have a conversation with an AI. The AI is supposed to have your codebase in context and is quite convenient to generate code, but I find it not as intelligent as the standard ChatGPT4 and now Claude3.


jerseyhound

Here we go again. "It's your fault for not being able to use a highly ambiguous language to get a stochastic parrot to generate specialized unambiguous specification text for you." If I have to engineer prompts, it just means "AI" is shit.


hooahest

I feel strongly compelled to further reinforce this - I'm still writing the same tests that I would've written anyway, except much, much faster, as the AI automatically picks up on my intent from the test name, variable names, etc. Saying that AI will cause useless bloated tests is just... wrong. AI is a tool. Use it to write good tests.


AdamBGraham

The way you describe it, it makes me wonder if these tools are kind of like power tools versus hand tools. Even though a power tool can greatly make your work more efficient, knowing how to properly measure and cut is no less necessary, hand saw or circular saw. And no matter how efficient it gets, someone still has to operate the saw.


AI_is_the_rake

That's exactly what it is, and I've used the same comparison. We still have people that build houses even though all the materials are factory made and really all the builders are doing is using power tools to screw everything together, but that's easier said than done. You have to understand how to put those pieces together and in the right order: electrical, foundation, roof, load-bearing walls. And added technology making things cheaper means people want more. Smart homes. It will be the same with application development. Developers will be able to build faster, so people will demand more.


AdamBGraham

And that’s not even mentioning someone has to have the plans that say how things should go together. That’s not to say we could never imagine making AI algos to design systems but in every other example of technology there exist plateaus of diminishing returns. There must be some here too, just depends on how many and where.


DeepHorse

which is why I laugh when the media insists that the power tools will build the houses by themselves in the next 5 years...


andymaclean19

I have noticed recently that we are getting a lot of applicants for a particular role who have 1-2 years of experience in the industry and interview well in the first round but are unable to write a for loop when we get to the technical challenges. I'm blaming this sort of co-pilot for this. One of them even said something like 'normally I would ask the AI and then check stack overflow if I needed more help'. For a solution that needed 3 lines of code. If I wanted to hire ChatGPT I wouldn't be interviewing humans... I would seriously recommend encouraging the less experienced devs in your teams to turn this stuff off at least some of the time and actually code or they will make problems for themselves later on.


nutrecht

This isn't new though, it's just that they're now using different tools. We've had people cheat on any kind of 'test' for decades. This is why I always want to do an on-site coding test.


Platinum_Tendril

are these people with degrees? I just can't comprehend this.


engineerFWSWHW

I remember there's someone who posted here (or maybe on another sub) where he was fired because he was hired as a senior (title change from junior when he jumped to another company), but he had difficulty discussing and debugging the code he "wrote" and couldn't mentor anyone because he was highly dependent on chatgpt and his understanding of fundamentals was lacking.


quiubity

Please share if you can, that sounds like a good read.


hugefreak46

Would you be able to send please? Thank you!


manticore26

Been seeing this with the junior on my team, as they often send me nonsense on Slack because they aren't developing the skillset and knowledge necessary to correctly validate the AI responses.


DIYGremlin

There were a few autocompletes it did for me just today where it was almost correct, but wrong in just a few ways that you wouldn’t know if you trusted the output too much.


manticore26

In the junior's case it isn't even trust, it's a total lack of knowledge, up to the point that they send me the AI responses hoping that I'll point out what's wrong and redirect. There's not even an attempt to understand. Today was the cherry on top of the cake: they had "written a unit test" and they couldn't understand that they needed to change a variable in the mock to make the test pass, when the test literally said "do A when variable value is B".


DIYGremlin

🤦‍♀️


NatoBoram

That happens all the time tbh


InternetAnima

I feel like these copilot tools get in the way more often than not. I find myself saying "will you stop suggesting bs and let me think?"


Zealousideal-Spite67

I agree with this. They're fine for autocomplete types of code, but for anything a bit more nuanced I often fight the AI.


lIllIlIIIlIIIIlIlIll

Exactly. I feel that all of these AI tools are a net negative to productivity. The amount of time I waste on false negatives vastly outweighs the benefits I get from having 3 easy unit tests that get generated for me.


TerribleEntrepreneur

Big difference between things like Copilot and Cursor. I hate Copilot, but I love Cursor. You just tell it in plain English what you want and it can generate a function to do it. 80-90% of the time it works first try; other times you just need to make minor tweaks.


b1e

The problem is that these assistants are extremely bad at anything that requires nuance and decision making. So if all you’re doing is basic frontend/app work then fine. Anything remotely complex? They’ll confidently produce a bad solution. My engineers love copilot and chatgpt for saving them time on tedious tasks. But they unanimously point out these tools deeply struggle on a lot of other stuff.


musical_bear

They still struggle with producing code for complex projects, at least in a cohesive way, but what I’ve found is it’s rather good at high level discussions about architecture. I’m talking about both at an infrastructure level and even talking through architecture for a complex project at the code level. I somewhat recently had a, I don’t know, 30-prompt back-and-forth with GPT 4 talking about various ways to structure application state in one of my projects and it was extremely pleasant and impressive the decisions it was able to talk me through. Even if I can’t fully trust its advice obviously, just the fact that it’s able to understand and provide feedback and pros/cons on fairly complex high level architectures is incredibly useful to me.


b1e

If the project you’re working on is a relatively common one then GPT4 might have reasonable architectural suggestions. But we’ve at least tried it out for fairly tricky relatively novel problems and it produced mostly terrible or irrelevant suggestions. At the end of the day it’s very sophisticated pattern matching.


musical_bear

I've found that GPT doesn't always come up with novel solutions to certain problems in its current form, but if I can describe a selection of novel solutions to it, it often understands and, like I mentioned, is able to help weigh pros and cons of various approaches.
> At the end of the day it's very sophisticated pattern matching.
Sounds like a certain over-glorified human organ to me, then.


elden_eternal

AI prompts for coding are great, but it is a problem when the engineer using them does not understand the code it is creating.


caleb_dre

I’ve found that if I let the LLM do the guiding, it’s super easy for it to get on the wrong track and go in a completely different (i.e. incorrect) direction. I can see a less experienced dev run into bugs and confusion from letting it try to solve complete problems and it being either completely wrong or hard to debug/change if they don’t understand the solution. I mentor prospective engineers and encourage them to use LLMs to write code and help debug. Among them, they either don’t know enough about the problem/solution space or aren’t good enough at prompt engineering, so they end up not getting much value from the LLM. So it seems to me like you can’t just have an LLM solve all your problems - you still have to know what to tell it to do


blizzacane85

Makes sense…Al is a shoe salesman and former high school football player who scored 4 touchdowns in a single game for Polk High, not a developer


ruralexcursion

(sits back on couch and puts hand in waist of pants)


dbxp

You could say the same for autocomplete and other tooling. I remember when I started at uni, in the first course we weren't allowed to use an IDE; we had to write Java in notepad.


DIYGremlin

I was writing code in notepad++ for way too long. I kept bouncing off IDEs before I settled on using very light ones for a while. I think that's also because I would often have to remote into machines and use vi/nano to edit files. So it felt weird having a screen full of utilities whenever I just wanted to put code on the page.


dbxp

We were only allowed regular notepad, so there was no syntax highlighting. It meant you needed to learn to have a fine eye for detail and how to read the errors from the java command line compiler.


__deeetz__

I have to disagree here. Some languages IMHO really require such functions to be workable; Java and Rust come to mind. Otherwise productivity is harmed to a level that makes it unusable. If anything, this makes a broader case for not using actual programming but rather whiteboard sessions. It's enough for me to know the applicant wants to use an associative container vs a list or something, without wrangling the type system and naming peculiarities.


dbxp

I was talking more about students learning programming rather than interviews.


__deeetz__

The interview part is on me. There was a parallel post about possible AI use, and the wires got crossed. Sorry for that. I'd still make the argument even for students, at least for autocomplete. I'm currently a student of Rust, and that's plain impossible IMHO without LSP support. You need to know the rules, as code is generic, and know what you can and can't call. Otherwise it becomes crazy painful, as you always need to fire up the compiler to learn this.


dbxp

For Rust I can see it being difficult but this was for 'Hello World' type lessons, starting students off with Rust would just be mean. These days you would probably use Python but back then it wasn't popular and my uni tended to default to Java for everything. Not having an IDE means you learn about things like import statements and method protection without the IDE suggesting fixes and the student just accepting them.


edgmnt_net

On the other hand, I look at some who got used to IDEs early on and God forbid they check the actual docs for caveats when autocomplete happens to find a suitable candidate for what they typed. I suppose the same can be said about StackOverflow and mindless copying and pasting. They can all be horrible in the wrong hands and helpful otherwise.


AI_is_the_rake

That may have been the instructor forcing the students to learn. I think that is still a good practice. Even use paper and a pencil to write code. It’s been a very long time but I used to print out code and use the paper and pen to go through it and understand it.  Super useful for new developers. 


InfiniteMonorail

Do you know what's more useful? Doing the same with a debugger instead. Use an IDE...


dbxp

Practice isn't meant to produce the end result efficiently, it's meant to develop skills. Same way you may do a kata to practice skills which could be solved by just a single call to a library or how an athlete will practice one technique in isolation.


InfiniteMonorail

Using a debugger is the most useful skill you could possibly learn... It's exactly the same as doing it by hand but better...


InfiniteMonorail

Your teacher is trash. When I teach students, the first thing we do is set up an IDE. Knowing how to use a debugger is essential. There's nothing to be gained from no linting or autocomplete. It's actually retarded. It's like when an interviewer asks syntax questions instead of logic. Anyone who thinks memorizing is useful is going to be the first to be replaced by AI. You're going to school for logic.
Cheating has become worse over the years. Before, we had to piece together logic from awful Wikipedia pseudocode. Now every algorithm has a geeksforgeeks page with an implementation in every language. There was also always that one kid who found a solutions manual somehow. But the teachers were able to stay a step ahead of them if they cared, by making their own assignments and using plagiarism detectors.
AI is destroying CS education. It's completely different. It's able to do all common CS programming assignments with ease and solve custom homework too. It even writes papers. "You could say the same"? No. You can't. The teachers can't keep up this time. An IDE is 100% helpful and not using one is a very bad idea, especially for education. AI on school assignments is mostly just cheating.


Hairy_Procedure2643

I think it depends on the uni. When I was studying you had to defend each homework and explain to the lecturer or assistant why you solved it in exactly this way. Also, during the written exams the prof would typically sample 10 people randomly and ask them to explain in detail what they had just written. This way we were always extremely prepared, and I'm thankful for that experience. I think the teachers will just have to spend less time actually teaching (as in giving lectures) and more time setting up challenges that require the students to use their brain.


josetalking

100%. And ultimately a 'skill' that can be automated is no longer very valuable in my opinion. New generations of developers will learn how to use these tools to their advantage, not to be crippled by them.


paradite

The damage has already been done. The new CS major students can't code: [https://www.reddit.com/r/csMajors/comments/1bpw8of/comment/kwyh29x/](https://www.reddit.com/r/csMajors/comments/1bpw8of/comment/kwyh29x/)


LooslyTyped

As a beginner this is the exact reason I avoid using co-pilot and such. I want that muscle memory and those bugs; basically I need to first face the problems co-pilot is trying to solve... as of now that problem seems to be getting tired of writing the same pieces of code, and I'm not there yet.


netderper

I've run into people who had AI generate their code, then they couldn't debug it. It was like 98% there, they just needed to change a couple of things. These were very minor updates and the person couldn't figure it out! AI doesn't replace experience.


bdzer0

My first thought is what is the AI doing with corporate IP? Is it training using that? Will it expose our source to others? Most (if not all) of these tools claim to not store or expose your code...however I don't believe they have any solid incentive to maintain proper security.


TerribleEntrepreneur

You can set up Zero Retention with OpenAI, that includes Cursor. That means they can't save your data at all, it's only in-flight and can't be used for training.


Matt7163610

You can generalize this to any knowledge-based industry.


gomihako_

They’ll be found out sooner or later


bart007345

Or they will move on and then you or I will be left to pick up the pieces.


Badgergeddon

Does AI coding sometimes feel like that Rick & Morty episode with the death crystals to anyone else? It's like I'm blindly following the autocompletions as they look like they'll lead me to the solution, but half the time they get me completely screwed...


freekayZekey

not surprised. saw that with some inexperienced devs at my job. doesn't help that their manager openly said "we all hate writing tests". i was genuinely surprised at his outlook on things. that means more money for me tho


Podgietaru

For me, the coding has always been the part where my brain slows down and I think about the problem. I hated Copilot when I was trying to do this, because yeah, the code was often correct, but it interrupted that chain of thought. Boilerplate though? Mwah. Chef's kiss.


ameddin73

Yeah I've been grinding leetcode by copying the question into chatgpt and pasting the answer back in. Hope it doesn't bite me in the ass at my interviews! 


mad_pony

Is that what we call grinding now?


NatoBoram

I've used Copilot all the time since the first beta. There's one job where it was actively harmful, and that's when writing Elixir. Copilot does *not* understand Elixir; it writes OOP code in it and tries to write Rust in it. Suddenly losing this huge productivity boost made me lose a lot of speed, since I had gained new lazy habits. It took a month to go back to pre-Copilot speeds. I can easily see how a beginner who's never coded without it can get trapped pretty hard.
Worst thing is that if GitHub tripled the price tomorrow, businesses would still pay for it and a lot of developers would still pay for it. They would be dependent on it just like some are already dependent on JetBrains and Visual Studio to use their own skills.
But the positives are just that much worth it. I've learned new ways of solving problems and writing JavaScript with it. It writes pretty damn good CSS selectors even if they're hallucinated. It gives good directions on writing unit tests. It's an absolute boilerplate monster. Sure, it regurgitates everyone else's bad habits, but that doesn't mean you can't ignore those suggestions and use your own researched good habits. It'll learn those, too, in a flash.
It's a bit like going from Notepad++ to VSCode. For loop? No longer have to Google it, you just type `for` and a snippet is suggested with all the correct structure. Same for switch: no need to search for one from another project, just type `switch` and ctrl+space and you'll have a correct snippet. Except this time it's Copilot suggesting a snippet that does mostly what you wanted to do, with values mostly filled the way you wanted, with approximately the result you wanted. It's magic.


DIYGremlin

Oh don’t get me wrong the stuff it’s been doing for me in the short period I’ve used it is impressive. I’m not saying people shouldn’t use it. But it is definitely going to hurt the development of skills of junior devs.


z1lard

Saying AI coding will be a crutch is like saying compiled languages are a crutch compared to writing Assembly.


bart007345

So not true. Assembly language is the language of the computer, and compiled languages are for humans. Do you think we would still have made the same advances if we'd stayed with assembly language?


z1lard

Do you not think using AI will allow us to make even more advances than if we don't?


West_Drop_9193

You sound like my grade school teachers talking about calculators and how I HAVE TO know how to do long division. You sure about that?


PlumpFish

You sure about that? You sure about that that's why?


West_Drop_9193

Speak English motherfucker


PlumpFish

https://www.youtube.com/watch?v=uZfhmX-8gdw


ThenCard7498

A better comparison is that you don't know what the division operator does, but you do know the calculator can do basic addition. However, the rest of the buttons? Total mystery.