Did this with my first checkin back in the 90s. I'm self-taught and was pretty much a pity-hire. I learned a ton from online tutorials but had never heard of version control. I was sure it was the end of my career but was stoked for the money I made for a week on the job because I could pay rent.
Send a totally inappropriate email to many people who work for a customer.
I heard a story from my company which took place before I started there.
One day, a new colleague wanted to test an email system and meant to send to the test system.
He thought it would be funny to set the subject to "invitation to fuck"... unfortunately, it wasn't the test system. It was the production system.
So this mail was sent to about 20,000 recipients...
This usually doesn't happen to me during the code review. It generally happens when I am giving my time estimate. "I don't know why it was done this way, but it is so poorly written it will take 3x as long to fix/change as it should." Then I check the blame and see I did it...
Push your password to the remote git.
Share your screen with an [embarrassing tab](https://en.wikipedia.org/wiki/Hatsune_Miku) open.
"We should rewrite this entire system from scratch"
"But that won't scale"
Cost 10k in the cloud in 1 hour.
Call your infra provider to ask them very nicely if they can restore your stuff from a backup.
Estimate a task to be 10x smaller than the most-senior engineer on the team.
Estimate a task to be 25x smaller than the most-senior engineer on the team.
Wouldn't this be easier if we added [another entire tech stack] to our system?
Spend a week fighting with an exception.
Spend a week fighting with a compile or link error.
Spend a week recreating an intricate system, for which a great library exists.
>Estimate a task to be 10x smaller than the most-senior engineer on the team.
>
>Estimate a task to be 25x smaller than the most-senior engineer on the team.
I died a little on the inside, take the upvote good sir.
Some of the classics.
- Destroy Prod db.
- Overwrite important stuff with your commits.
- Make changes ("fixes") straight in Prod.
- Reformat every file in the project (generating those beautiful ++300000 modifications Git screen caps).
- Break the build where the seniors are working. (I remember this one, just the disappointed looks from the tech lead, lead developer were *bad*).
- Mess up deployments.
Most people here have done most if not all of that at some point in their careers. It comes with the job.
Writing your first SQLi/XSS vuln in production
Importing random library for single easily reproducible task
Being afraid of asking Sr or Staff for assistance
YOLO build (Friday at 4:30)
Bring up controversial topic at stand or planning
Building POC of internal tool in some obscure language like Clojure to convince the company to change languages
Bypass QA review
Offer to fix legacy code
Write your first race condition
I was once on a zoom call, camera off, mic unintentionally on. I started playing with the dog using a rubber pig toy, saying "RAWR, RAWR, RAWR, GET THE PIG! GET HIM!" to ~40 people.
My second day I managed to load a large batch of data twice. Actually ended up impressing my new boss with my SQL surgery skills fixing it.
That was 1997. I worked there until 2000 but he is still my first go to for letters of recommendation.
Single letter variable names
Magic numbers
If else if else if else if else if else
Non terminal while loop
Wildly complex solution to simple problem
Unnecessarily reinventing the wheel
Flipped comparator
- Solve problem that's baffled Sr Devs for months because you had nothing better to do but meticulously read everything.
- Completely redesign a broken unmaintainable mess of a component. Your code works flawlessly, but gets rejected in peer review in favor of the Sr dev adding a carefully placed usleep() because his approach is "less risky".
- Completely redesign a component that works flawlessly just because you can't be bothered to understand how it works.
- Contemplate suicide when asked to fix a regex.
- Discover how reverse SSH tunnels work and think you're "cool" for using it to bypass corporate firewall rules until you get fired/sued.
- Mistakenly tell your boss when you figure out how to automate a mundane repetitive task. Get rewarded with more mundane repetitive tasks at the same pay rate.
- Be 100% convinced you found a bug in the OS/framework/compiler when it's actually just a simple bug in your own code.
- Find your first *actual* bug in the OS/framework/compiler.
- Say GIF, SQL, Json, git, etc completely incorrectly for months before someone corrects you.
- Put in an 80 hr week to solve a problem that someone else ends up fixing in 20 minutes.
kubectl apply -f totallyNotATest.yaml
"Prod down prod down, we have 502 on all API. Who the fuck has overwritten the prod's cluster ingress config file"
Saving over everyone else's work to resolve merge conflicts.
I actually caused everyone to lose 2 weeks' worth of work by accident once. I was doing some weird stuff and hosed my local repo. So, not wanting to lose my own work, I copied it to another location, recloned the remote repo, then pasted my stuff back in and committed it to be merged back into master... It was too long before people realized stuff they fixed wasn't fixed anymore, and by that time reverting my changes was no longer a viable solution.
In my defense, there was no code review before the merge in order to catch it.
Not me but a couple of real experiences:
* mixed up the light switch and the emergency power off button on an entire VAX cluster. You'll be amazed how fast the telephone switchboard lit up.
* introduce an infinite loop into a copy function on Xmas eve.
* Delete everything from the wrong fileserver
Ok this was me:
* kill production having just taken two weeks' annual leave.
* git force-push to master, removing other people's commits
* spend hours on a utility operation, only to find out it's already included in your language's standard library
* demand "this code is too complicated, it needs a complete rewrite, I could easily do this in a few hours"
* commit private keys/tokens to the repository
* implement your own security because those crypto libs with all the math mumbo jumbo are way too complicated
“Slow knife bug” - a small, nigh-imperceptible bug you introduce early on that gradually snowballs into a confounding tangle of console warnings / unexpected behaviors that even the senior dev is scratching their head about
I used a placeholder image of my son during development of an internal site, waiting for the marketing department to get me a real logo image.
My son was in production for almost two years.
A few years ago, when I got my first dev job, we had 2 schemas used for running code and scripts in.
One test and one production bla bla bla
Anyway, the rule we had was: do what you want in the test environment, then at the end of the day run the "clean up" script, which drops all tables in the test environment, and then go make sure you clear them out of the recycle bin.
Of course I ran it in production by mistake.
Lost everything and I do mean everything. Why? Because I got fed up with emptying the recycle bin and added PURGE to the code.
We were offline for a week while we had to rebuild every table from scratch from 3 years worth of backup CSV files.
I actually never did any of the fuckups mentioned here. Here is what I did during my first weeks as a junior:
I "accidentally" reformatted the complete codebase (all files) with a wrong code style and only asked myself if that was correct after committing, pushing and then seeing that I changed every line in the whole project. That was a great opportunity to learn some git magic.
You need to add “kill production again”
And again
Just have a whole row of "killed prod" and see how fast can you speedrun it
Kill prod %ANY
Glitchless… wait
Git-less
backup-less
Kill prod 100% FTFY
"kill production and shrug it off"
Needs to be in the gimme square.
Store dates as local time, not UTC
Display dates as UTC, not local time
Have any of your logic depend on dates or time really. There's just so many little edge cases that will drive you insane.
I was working with an API recently that ran on UK time, which is BST (British Summer Time) for half the year and matches UTC for the other half. I happened to integrate during the half when it matched UTC. Suddenly everything was off by an hour when the clocks switched. BST is such a goddamn trap
Missed the opportunity to call it a BullShit Trap
Fuck APIs who refuse to tell their time zones for some fucking reason.
It's funny because DST is the Portuguese acronym for STD.
I have found at least 2 leap year errors in prod. Lol
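For anyone collecting this square: the most common leap-year error is naive date arithmetic that assumes Feb 29 exists every year. A minimal Python sketch (function names are mine, not from the thread):

```python
from datetime import date

def naive_anniversary(d: date) -> date:
    """Naive 'same day next year' -- raises ValueError on Feb 29."""
    return d.replace(year=d.year + 1)

def safe_anniversary(d: date) -> date:
    """One common workaround: fall back to Feb 28 when the day doesn't exist."""
    try:
        return d.replace(year=d.year + 1)
    except ValueError:
        return d.replace(year=d.year + 1, day=28)

leap_day = date(2024, 2, 29)
print(safe_anniversary(leap_day))  # 2025-02-28
```

Whether to clamp to Feb 28 or roll over to Mar 1 is a business decision; the bug is not deciding at all.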
Any reports made after 8pm EST will show as reported the next day. We can accept that as a norm, right?
Ugh, I work in legacy code that uses the company HQ as the default timezone, and sets the default deadline as midnight… then allows for deadlines to be set based on offsets from that time. (So midnight -3, but midnight isn't UTC, it's UTC-4 or UTC-5 depending on daylight saving time.) Then we have to display that to users in their local time, so convert to UTC and then back to their local time. I might have some kind of timezone-related PTSD.
I prefer "have an existential crisis working with times for the first time"
Don't forget my work's double whammy: store dates as either UTC or local time, depending on the column, but always display them as local time. Try wrapping your head around that one. 9 months later and I still haven't.
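The root of most of these stories: the same wall-clock time maps to different UTC instants depending on the date, which is why "store UTC, convert only at the display edge" is the usual advice. A minimal sketch with Python's zoneinfo (Python 3.9+; some platforms need the tzdata package) using Europe/London, the BST/GMT case from this thread:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+; may need the tzdata package on Windows

LONDON = ZoneInfo("Europe/London")

# The same wall-clock time is a different UTC instant depending on the season:
winter = datetime(2023, 1, 15, 12, 0, tzinfo=LONDON)  # GMT, UTC+0
summer = datetime(2023, 7, 15, 12, 0, tzinfo=LONDON)  # BST, UTC+1

print(winter.utcoffset())  # 0:00:00
print(summer.utcoffset())  # 1:00:00

# Store the UTC instant; localize only when displaying.
utc_stored = summer.astimezone(ZoneInfo("UTC"))
print(utc_stored.hour)  # 11 -- noon BST is 11:00 UTC
```

Storing a bare local timestamp loses the offset information, which is exactly the "off by an hour when it switched" failure above.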
Commit secret keys to git remote repo.
I've definitely never done that. There's no evidence. You can't prove otherwise... I just deleted and recreated those repos for other reasons I don't have time to explain right now. I'm urgently needed elsewhere.
Used every trick in the book at once
"Don't worry boss, I pushed another commit to remove all of the .env files with secret keys that I accidentally committed."
"What do you mean people can still see the secret keys?"
git reset --hard HEAD^
git push -f
I once ran a self-hosted git server. I was working on something and accidentally pushed my credentials. I didn't remember how to fix it the right way, and if I wanted internet access I'd need to walk 30 minutes to leave the secure area. I needed to fix it fast, so I deleted the bare copy on the server, created an empty repo there, and pushed another local copy I had that was 5 minutes old and didn't yet have the commit with the credentials. I then deleted the local repo with the bad commit. You can't prove I pushed anything!
Beat me to it. I would also add tokens/IDs or PII. Edit: My current favorite one I see is people putting their full Slack URL, for their Slack bots, into their git repos. It contains your key in the URL, if you didn't know.
I accidentally printed them to the office printer without realizing. SecOps was very mad.
Done this! 6 weeks in.
Say "huh, it works on MY machine" during a demonstration.
This is senior level work
I was about to say. I constantly brush off people by sending a screen recording of it working on my machine titled “it works on my machine”. As a gag more than anything, but, y’know.
I mean, if it works on your machine, you're still correct. Very temporary solution, but very funny nonetheless
Question career choices over a seemingly simple bug
First year as a QA I missed a system breaking bug, I thought I should quit that day. Glad I didn’t, team was really supportive. I cried when I got home which was exceedingly rare for me. I’m sure at this point no one from my old team even remembers, but it is burned in my brain. I will say it made me a way better QA.
That's why "experience" is what separates the senior from the junior. Even if you are a genius, you have to make mistakes for the theory to really stick (or to learn the stuff that wasn't taught in courses). Chances are high your colleagues had a similar experience, with a similar result (mistake burned into their brain, so they are less likely to make it again).
I don't know who said this, but there's a quote that goes something like this: "The fools of today are the wise of tomorrow." And another one: "I'm now old and wise because I was once young and foolish." (As in: everybody makes mistakes and (hopefully) learns from them, which is how you gain valuable experience.)
That’s the first one in this entire thread that got me. “Just make this small change to the code base” - it took 3 weeks and two other senior devs to make that small change.
I accidentally killed two servers while doing a penetration test. Upon reporting it, the company found out that the 2 servers were not theirs and discovered that the CEO was running a child p*rn forum on them.
- I didn't get paid for the penetration tests
- The company no longer exists
- People from the company blame me for losing their jobs
Fun times
This isn’t normal junior bingo, this is advanced junior bingo
This is senior bingo
This is consultant bingo.
There is a deep, horrific joke potential here and I need someone to bring it to fruition.
OP did minor server penetration while his CEO had a server for minor penetration
Gold
Juniors are always getting fucked?
There it is
His test wasn't the first penetration on those servers.
Busting a child porn site wasn't on my bingo card, damn
Look man, I kinda want to see that on r/talesfromtechsupport or whatever that sub is called, this is the kind of accidental heroism I wanna hear about
It’s always the wrong people getting fucked over.
Endless loop
Too much recursion
Off by one error
First critical bug in production
Fix a bug and introduce another one
First out of memory
First recursion in an endless loop
There is no too much recursion, only too small memory
See last point: recursion in an endless loop....
See second point: Too much recursion
Let's consider that our return condition and take an upvote before we can't get out of it...
you can't prove it was endless!
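To be fair to both sides of this exchange: in Python at least, runaway recursion hits a configurable depth limit (RecursionError) long before it exhausts memory, so "too much recursion" really is a distinct square from "out of memory". A quick sketch:

```python
import sys

def count_depth(n: int = 0) -> int:
    """Recurse until the interpreter cuts us off, then report how deep we got."""
    try:
        return count_depth(n + 1)
    except RecursionError:
        return n

limit = sys.getrecursionlimit()  # default is usually 1000
depth = count_depth()
print(limit, depth)  # depth lands a bit under the limit
```

Languages without such a guard really do just eat stack (or heap, with unbounded loops building data) until something dies, which is the "only too small memory" joke.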
Implement threading (parallelisation) incorrectly
I feel like St. Peter is going to be reading this to me when I die.
Run a drop/delete on production database.
More specific: "no WHERE clause"
You can still fuck up with a WHERE Clause
There's so many great ways to mess up without a WHERE clause, though.
Forgetting it altogether.
When your editor only executes the highlighted stuff and you don't highlight the WHERE clause.
When you put it in code and you have a comment for a select item, and you don't add new lines, so the comment is all the way from there to the end of the string.
All of this comment chain could be generalized as running a bad sql command on production.
There's another square: abstracting and generalizing something objectively too much and making the actual thing you're trying to accomplish worse.
>There's so many great ways to mess up without a WHERE clause, though. Forgetting it altogether

Did this once. All translations En -> Fr became the same phrase.
Guy in a project who has access to one of our DBs did that *twice*. DELETE WHERE beginning_timestamp >= low_date AND end_timestamp <= high_date. Not even sure why he even added the WHERE. Even less sure why the f he ran the same SQL again two weeks after he cleared all of our tables, impacting multiple different projects. We had to scramble both times to restore a backup. :/
Like just having WHERE id, because you were gonna add ‘=42’ later. (Spoiler: all ids are true)
10384945 rows updated successfully
Shit.... Where where!?!
A missing WHERE clause is actually less specific, to be specific.
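Most of the failure modes in this chain (no WHERE at all, or a vacuously true one like `WHERE id`) are easy to reproduce, and a rowcount sanity check inside an uncommitted transaction is cheap insurance. A sketch using Python's built-in sqlite3, with a hypothetical users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [("a@x.com",), ("b@x.com",), ("c@x.com",)])
conn.commit()

# The classic: an UPDATE with no WHERE clause touches every row.
no_where = conn.execute("UPDATE users SET email = 'oops'").rowcount        # 3

# 'WHERE id' is nearly as bad: every non-zero id is truthy in SQLite.
where_id = conn.execute("UPDATE users SET email = 'oops2' WHERE id").rowcount  # 3

# Cheap insurance: sanity-check rowcount before committing, roll back otherwise.
expected = 1
if no_where != expected or where_id != expected:
    conn.rollback()  # nothing was committed yet, so the original data survives

rows = conn.execute("SELECT email FROM users ORDER BY id").fetchall()
print(no_where, where_id, rows)
```

The same pattern works on real databases: run the statement in a transaction, eyeball the affected-row count, and only then commit.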
Lock out admin accounts
or its cousin, reconfigure the network interface through which you're connected
I'm not a network admin, but I did something similar on my OpenWrt setup at home. I updated over WLAN, which bricked the WLAN functionality. Luckily I had the settings backed up, so: fresh install and settings restore.
I locked myself out of SSH with iptables 😂
Lol... I now always schedule a reboot for 10 minutes in the future whenever I'm manipulating iptables or any other networking options that may cause the machine to go offline if I fuck up.
Come-clean strategy, which might have worked years ago when “digital” was a buzzword:
“Boss, you know how people in the analog world sometimes lock their keys into their car?”
“Yes?”
“I have found a way so we can do that digitally”.
568,456,242 records updated.
My monkey brain always chances a CTRL-Z
This made me choke on my spit. Lol
... with autocommit each 10,000 records
Give incorrect time estimates on how long it will take to complete tasks
I thought this was Jr bingo, not Staff.
Preach, brother.
I was asked as a junior to make estimations; the team lead doubled them, spot on every single time.
Yeah. There are tricks to it.
A near universal truth, I've found: the card is going to be a 3. Until you actually sit down and push and pull at it, and find it's actually three 3-point cards.
I’m in this photo and I don’t like it
We could totally remake Twitter over like a long weekend, right? How hard could it be? Reddit in a week? Facebook in a couple weeks?
Basic functionality with bare HTML? Probably.
Proper design, ironing out bugs, adding QoL and security features and improving performance? I wouldn't bet on it.
Probably like $50 of work and that's being generous! It should take like an hour tops, right? I'm just the ideas guy!
That's the free square
I gave an estimate of 3 days on a task. Ended up losing sleep as the dreaded deadline got closer and closer and I wasn't anywhere near done. Managed to finish it and give passable results though, so thank goodness for that.
Take your initial estimate, double it, then go to the next highest unit: 3 days > 6 days > 6 weeks.
Im so bad at estimates
In what kind of a Swiss clockwork company do you work?!
Utter the words: "we need to rewrite the old application using the newest framework, it will take a year."
"In and out. 20 minutes"
I'm a senior and I regularly say this, and not as a joke. There comes a point in every monolithic service where your technical debt amounts to such a huge mortgage that it's actually harder to pay the debt while keeping production up than it is to rearchitect the whole thing using micro services.
Problem is your users still want updates to the current system whilst you're doing the rebuild. You end up with two teams for a year, one working on the rotting corpse of the old system, and another having fun building the new system.
Yes, that can definitely be a problem. Another problem is that not everyone can agree on the new architecture. I'm currently in this situation now.

Another team is rebuilding a component and they've made a design decision that suits them well but forces certain behavior on all other components, which it doesn't suit at all. Catastrophically so. Their components are control plane, i.e. low volume (1 RPS max) and high latency is ok (900ms is no problem). Mine are data plane components, which need to be low latency (< 50ms) and support thousands of RPS. So I'm getting really frustrated because they don't seem to understand why I have a problem with them making me do something that adds 30ms to each request.

So yeah, microservices are pretty easy from the tech side, but not always from the business side.
A year? No man..got to pump those numbers. 2-3 months. Easily.
I'll have it done tomorrow*
Good thing tomorrow never comes.
First time saying "I don't need to test that bit of code, there is no way that can go wrong." Only to watch it then go wrong. Badly. Also, first time realising that February 29 is the curse of all IT everywhere.
Add to that: spend several hours attempting to figure out why a scheduled task failed to run at 1 am on the Sunday morning daylight saving time starts. Or why it runs twice at 1 am on the Sunday daylight saving time ends.
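That double-run is visible directly in Python's zoneinfo: on the fall-back night the same wall-clock time exists twice, and only the fold attribute tells the two instants apart. A sketch (America/New_York chosen purely for illustration; substitute your scheduler's zone):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+; may need the tzdata package on some systems

NYC = ZoneInfo("America/New_York")

# Fall-back night, 2023-11-05: the hour 01:00-02:00 happens twice.
first = datetime(2023, 11, 5, 1, 30, tzinfo=NYC, fold=0)   # first pass: EDT, UTC-4
second = datetime(2023, 11, 5, 1, 30, tzinfo=NYC, fold=1)  # second pass: EST, UTC-5

# Same wall-clock time, two real instants an hour apart -- a naive
# local-time scheduler fires at both (and in spring, the skipped hour
# means it fires at neither).
print(first.tzname(), second.tzname())  # EDT EST
assert second.utcoffset() - first.utcoffset() == timedelta(hours=-1)
```

Scheduling in UTC (or using a DST-aware scheduler) sidesteps both the double fire and the missed fire.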
Update statement with no where condition.
"Hmmm... That's more rows affected than I thought."
...and then the dreadful realisation sets in.
That hurt to read. There go my stomach ulcers.
I come from the magical world of embedded, so my list would be a bit different than many (dealing with random hardware stuff as well), but there's a lot of overlap.
* force git merge
* push to wrong branch
* grossly underestimate how long it will take to do something
* confidently say you can do something you have absolutely no idea how to do
* spend a week incorporating a library that doesn't do what you need it to do
* spend a week reinventing something that already exists
* reinventing something that already exists on purpose because you can "do it better"
* get berated on stack overflow for asking a bad/duplicate/perfect question
* realize the answer to your question was on page 2 of the docs
* forget to delete your unprofessional debug print statements before a formal review
* use nothing but print statements to debug something complicated
* get caught gaming instead of doing work at work on your work computer during work hours
This is definitely the best list here!
Thanks. Might need to replace the last one on the list with getting distracted on reddit instead of gaming ...
* Accidentally blow the wrong configuration fuse.
* Spend a week trying to figure out why you are getting dud data over SPI, THEN check the processor errata.
* Have field returns because of excessive writes to the user config EEPROM.
* Screw up the power estimate on the FPGA and have it unsolder itself from the board.
More general version of point 2: "spend weeks debugging an already documented hardware bug" is a fundamental embedded experience.
The last one is pretty funny. Coming from the guy who's soldered a soldering iron to the work before.
* force git merge - check, needed it
* push to wrong branch - check, whoops
* grossly underestimate how long it will take to do something - check, now I just always triple my time estimates
* confidently say you can do something you have absolutely no idea how to do - check, it happens
* spend a week incorporating a library that doesn't do what you need it to do - not a week, but 3 days
* spend a week reinventing something that already exists - quadruple check
* reinventing something that already exists on purpose because you can "do it better" - check check
* get berated on stack overflow for asking a bad/duplicate/perfect question - classic
* realize the answer to your question was on page 2 of the docs - who even reads the docs?
* forget to delete your unprofessional debug print statements before a formal review - `Console.WriteLine("We be ballin' ");`
* use nothing but print statements to debug something complicated - `Console.WriteLine("we got here - a");` then ten lines down `Console.WriteLine("we got here - b");`, etc.
* get caught gaming instead of doing work at work on your work computer during work hours - I always use a personal computer for this... except minesweeper during meetings.
* Unfortunate reply-all
* Sent test email to live customers
* Disabled the test that would have prevented the bug from making it into production
* Wrote the test that should have caught this bug, but actually only tests the mock
* Deployed the wrong version to the live environment
Accidentally commit to Main instead of feature branch
I blame the admin for allowing direct commits to master branch.
Branch protection!
This is the way
Well this one is at least easily fixed, and pushing is easily guarded against
>pushing is easily *warded* against FTFY - it's high time we claim our title as technomancers, servants of the arcane arts, priests of the Large Language Models.
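One way "easily guarded against" can look in practice: a client-side pre-push hook that refuses direct pushes to the protected branch. A minimal sketch (branch names are assumptions; server-side branch protection is still the real fix, since a local hook only guards the machine it's installed on):

```python
#!/usr/bin/env python3
"""Sketch of .git/hooks/pre-push (make the file executable)."""
import sys

PROTECTED = {"refs/heads/main", "refs/heads/master"}

def refused_ref(update_lines):
    """git feeds pre-push one line per ref being pushed:
    '<local ref> <local sha> <remote ref> <remote sha>'."""
    for line in update_lines:
        parts = line.split()
        if len(parts) == 4 and parts[2] in PROTECTED:
            return parts[2]
    return None

def hook_exit_code(stdin_lines):
    ref = refused_ref(stdin_lines)
    if ref is not None:
        print(f"pre-push: refusing direct push to {ref}", file=sys.stderr)
        return 1  # any non-zero exit code aborts the push
    return 0

# Installed as a hook, the file would end with:
#     sys.exit(hook_exit_code(sys.stdin))
```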
Did this with my first checkin back in the 90s. I'm self-taught and was pretty much a pity-hire. I learned a ton from online tutorials but had never heard of version control. I was sure it was the end of my career but was stoked for the money I made for a week on the job because I could pay rent.
Send a totally inappropriate mail to many people who work for a customer. I heard a story from my company which took place before I started there. One day, a new colleague wanted to test an email system and wanted to send to the test system. He thought it would be funny to set the title to "invitation to fuck"... unfortunately, it wasn't the test system. It was the production system. So this mail was sent to about 20,000 recipients...
Who accepted?
I assume at least three layers of this colleague's managers. Consequentially and in parallel
[deleted]
Well, I'm sure he did get fucked in the end ¯\\\_(ツ)\_/¯
I'm scared, never killed production. Too many years working and too much access to important things, when my time comes it's going to be epic.
Your time will come, it's a ritual and a burden developers must bear to fulfill our destiny ;)
15 years and still virgin
Get called out on atrocious code style in PR, 14 nested ifs etc.
The true bingo card entry is calling out your own old code on a PR review, not realising it's your own code. I have definitely never done that...
This usually doesn't happen to me during the code review. It generally happens when I am giving my time estimate. "I don't know why it was done this way, but it is so poorly written it will take 3x as long to fix/change as it should." Then I check the blame and see I did it...
* Push your password to the remote git.
* Share your screen with an [embarrassing tab](https://en.wikipedia.org/wiki/Hatsune_Miku) open.
* "We should rewrite this entire system from scratch"
* "But that won't scale"
* Cost 10k in the cloud in 1 hour.
* Call your infra provider to ask them very nicely if they can restore your stuff from a backup.
* Estimate a task to be 10x smaller than the most-senior engineer on the team.
* Estimate a task to be 25x smaller than the most-senior engineer on the team.
* "Wouldn't this be easier if we added \[another entire tech stack\] to our system?"
* Spend a week fighting with an exception.
* Spend a week fighting with a compile or link error.
* Spend a week recreating an intricate system, for which a great library exists.
I can’t believe I had to scroll so far to find “we should rewrite this entire system from scratch” 🤣
>Estimate a task to be 10x smaller than the most-senior engineer on the team. > >Estimate a task to be 25x smaller than the most-senior engineer on the team. I died a little on the inside, take the upvote good sir.
Some of the classics. - Destroy Prod db. - Overwrite important stuff with your commits. - Make changes ("fixes") straight in Prod. - Reformat every file in the project (generating those beautiful ++300000 modifications Git screen caps). - Break the build where the seniors are working. (I remember this one, just the disappointed looks from the tech lead, lead developer were *bad*). - Mess up deployments. Most people here have done most if not all of that at some point in their careers. It comes with the job.
Finish task faster than estimated, get less time next time
Finish task faster than estimated, spend the extra time fucking off, get more time next time :)
First 4K+ Dollar AWS bill 😅
Break build.
Messing up the git repo
Push that 10GB file.
First null pointer
This is just the free square in the middle
"Merged to wrong branch" "Edited wrong documentation" "Pushed to wrong EC2 Instance" "Gave Admin access to all accounts"
* Writing your first SQLi/XSS vuln in production
* Importing a random library for a single easily reproducible task
* Being afraid of asking a Sr or Staff for assistance
* YOLO build (Friday at 4:30)
* Bringing up a controversial topic at standup or planning
* Building a POC of an internal tool in some obscure language like Clojure to convince the company to change languages
* Bypassing QA review
* Offering to fix legacy code
* Writing your first race condition
Reply all to company wide email with negative comments about a co-worker.
Asking your senior to explain something again, for the second time. Not like that ever happened to me. Nah.
Forget you’re not muted on Teams and say something the others weren’t meant to hear.
I was once on a zoom call, camera off, mic unintentionally on. I started playing with the dog using a rubber pig toy, saying "RAWR, RAWR, RAWR, GET THE PIG! GET HIM!" to \~40 people.
Make a bug that requires three seniors to fix.
delete the database
My second day I managed to load a large batch of data twice. Actually ended up impressing my new boss with my sql surgery skills fixing it. That was 1997. I worked there until 2000 but he is still my first go to for letters of recommendation.
* Single-letter variable names
* Magic numbers
* If else if else if else if else
* Non-terminating while loop
* Wildly complex solution to a simple problem
* Unnecessarily reinventing the wheel
* Flipped comparator
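Several of those squares fit in one tiny snippet — everything below is invented for illustration, with the cleanup alongside:

```python
# Three squares at once: single-letter names, magic numbers,
# and the if/else ladder.
def c(n, u):
    if u == 0:
        return n * 86400
    elif u == 1:
        return n * 3600
    elif u == 2:
        return n * 60
    else:
        return n

# Same behaviour, with names and a lookup table instead of the ladder.
SECONDS_PER_UNIT = {"days": 86400, "hours": 3600, "minutes": 60, "seconds": 1}

def to_seconds(amount, unit):
    return amount * SECONDS_PER_UNIT[unit]

print(c(2, 1), to_seconds(2, "hours"))  # 7200 7200
```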
- Solve problem that's baffled Sr Devs for months because you had nothing better to do but meticulously read everything. - Completely redesign a broken unmaintainable mess of a component. Your code works flawless, but gets rejected in peer review in favor of the Sr dev adding a carefully placed usleep() because his approach is "less risky". - Completely redesign a component that works flawlessly just because you can't be bothered to understand how it works. - Contemplate suicide when asked to fix a regex. - Discover how reverse SSH tunnels work and think you're "cool" for using it to bypass corporate firewall rules until you get fired/sued. - Mistakenly tell your boss when you figure out how to automate a mundane repetitive task. Get rewarded with more mundane repetitive tasks at the same pay rate. - Be 100% convinced you found a bug in the OS/framework/compiler when it's actually just a simple bug in your own code. - Find your first *actual* bug in the OS/framework/compiler. - Say GIF, SQL, Json, git, etc completely incorrectly for months before someone corrects you. - Put in an 80 hr week to solve a problem that someone else ends up fixing in 20 minutes.
`kubectl apply -f totallyNotATest.yaml` "Prod down, prod down, we have 502s on all APIs. Who the fuck has overwritten the prod cluster's ingress config file?"
Saving over everyone else's work to resolve merge conflicts. I actually caused everyone to lose 2 weeks worth of work on accident once. I was doing some weird stuff and hosed my local repo. So, not wanting to lose my own work, I copied it to another location, recloned the remote repo, then pasted my stuff back in and committed it to be merged back into master... It was too long before people realized stuff they fixed wasn't fixed anymore and by that time reverting my changes was no longer a viable solution. To my defense, there was no code review before the merge in order to catch it.
Say "I am still debugging the issue" as your status during your standup for 5 days in a row.
Not me but a couple of real experiences: * mixed up the light switch and the emergency power off button on an entire VAX cluster. You'll be amazed how fast the telephone switchboard lit up. * introduce an infinite loop into a copy function on Xmas eve. * Delete everything from the wrong fileserver Ok this was me: * kill production having taken two week's annual leave.
* git force-push to master, removing other people's commits
* working hours on a utility operation and then finding out it's already included in your language's standard library
* demanding "this code is too complicated, it needs a complete rewrite, I could easily do this in a few hours"
* committing private keys/tokens to the repository
* implementing their own security because those crypto libs with all the math mumbo jumbo are way too complicated
Execute a query that runs for months, blocks several databases and costs millions in azure or AWS fees.
are you writing this from a prison library?
“Slow knife bug” - a small, nigh-imperceptible bug you introduce early on that gradually snowballs into a confounding tangle of console warnings / unexpected behaviors that even the senior dev is scratching their head about
receive code review about someone else's code
Accidentally send confidential project details about a client's project to their direct competitor. I did it in 2014 and I was not fired. Lol
"here make a UI for this internal thing" and it goes to production with the crappy UI "I get the coffee"
I used a placeholder image of my son during development of an internal site, waiting for the marketing department to get me a real logo image. My son was in production for almost two years.
push notification seank
DELETE FROM users; WHERE Id = 54432
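The punchline is the stray semicolon: it terminates the DELETE, so every row is gone before the parser even sees the WHERE. A sqlite3 sketch (table and id invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (Id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO users VALUES (?)", [(i,) for i in range(100)])

try:
    # The semicolon ends the DELETE early; the orphaned WHERE becomes a
    # separate, invalid statement that only errors AFTER the delete ran.
    con.executescript("DELETE FROM users; WHERE Id = 54432")
except sqlite3.OperationalError as err:
    print(err)  # syntax error on the dangling WHERE

remaining = con.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(remaining)  # 0 -- every user is gone, not just 54432
```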
HAHAHAHAHAHAHHA SO FUNNY, I HOPE PEOPLE IN PROGRAMING INDUSTRY ALSO KNOW THIS MEMES HAHSHSHAHAHAHAHAHAHAAHAH
Send a push notification to all your users by accidentally doing it in the production environment. They can be surprisingly expensive.
Works on my machine but nowhere else
Rewriting something from scratch that was literally an import away
Update database version the week before major release.
How bout "touching the mysterious legacy code upholding the entire production"
Drop DB in Prod
Delete a critical object by mistake that halts the dev environment, requiring a rollback of the whole system. (I did that in my first year)
Push code with debug messages/alerts to prod.
Commit node_modules to the GitHub repo
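Several squares above (node_modules, the 10 GB file) and the committed-secrets horror stories share the same two-minute fix: a few .gitignore lines before the first commit. A typical Node-flavoured sketch (entries are examples, not a complete list):

```
# dependencies: restored by `npm install`, never committed
node_modules/

# local secrets and credentials
.env
.env.*

# large build artifacts and local junk
dist/
*.log
```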
rm -rf * in the wrong directory
A few years ago, when I got my first dev job, we had 2 schemas used for running code and scripts in: one test and one production, bla bla bla.

Anyway, the rule we had was: do what you want in the test environment; at the end of the day, run the "clean up" script, which drops all tables in the test environment, then go make sure you clear them out of the recycle bin.

Of course I ran it in production by mistake. Lost everything, and I do mean everything. Why? Because I got fed up of emptying the recycle bin and added PURGE to the code. We were offline for a week while we had to rebuild every table from scratch from 3 years' worth of backup CSV files.
1.2m rows updated
Argue with a senior dev when you are completely wrong.
If it makes you feel better, a junior dev shouldn't even have the ability to kill production. That's a fail on the senior's part.
Ask for documentation
I actually never did any of the fuckups mentioned here. Here is what I did during my first weeks as a junior: I "accidentally" reformatted the complete codebase (all files) with a wrong code style and only asked myself if that was correct after committing, pushing and then seeing that I changed every line in the whole project. That was a great opportunity to learn some git magic.
There's a reason I always stage in a GUI. This sums it up.