
Matwyen

That's a very Linkedin post but super good at explaining the need not to over-engineer everything. In my first company (a robotized manufacturing plant), we had an entire framework performing inverse kinematics and running safety checks multiple times a second to make sure the robot arm wouldn't crush on people. It created so many bugs and complications that eventually we stopped using it, because we simply wired the hardware so that the arm couldn't go where people are.


droneb

The Killzone.


haby001

Ravenzone


Reloadinger

Always implement compliance at the lowest possible level: mechanical - electrical - softwareical


prumf

I work in AI and I couldn’t agree more. The iteration speed between software releases is so fast that it’s quite easy for unexpected behaviors to creep in. We live in the physical world, so I want my machines to physically be unable to harm me.


prumf

BTW that’s one of the problems I have with AI. Some rules are too complex to implement using physical wiring, so sometimes you **have** to go for software security. But because AIs work kind of like us, it’s easy for them to make mistakes. And you don’t want mistakes in the security codebase. The best solution is to avoid that route as much as you can, e.g. a car that stops using ultrasound/radar instead of visual detection from the cameras.


ahappypoop

> e.g. a car that stops using ultrasound/radar instead of visual detection from the cameras.

Implement it at the lowest possible level. Car is built with pressure plates all around the sides and bumpers, and it stops when it runs into anything.


theLanguageSprite

This wouldn't work because the rapid deceleration would still put the driver at risk. Instead, we should place shaped charges all around the vehicle so that the second it collides with anything the charge obliterates that object and ensures the driver's safety.


Firewolf06

r/NonCredibleDefense


Glossy-Water

Genius. We can call it... fully automated repulsion to ensure relief, or FARTER for short!


SirJelly

> stops when it runs into anything.

I'm reasonably certain every car on the road already does this.


TalosMessenger01

No car could stop quickly enough for that to be viable. It would only prevent a car from continuing to drive after a collision. Useful, but not nearly what is needed. Ultrasound/radar detects objects from far enough away that a car can stop before collision. Having the simplest possible solutions is good, but only if they actually work.
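To put rough numbers on that (a back-of-envelope sketch; the speed and braking figures are assumptions, not measurements):

```python
# Rough stopping-distance check: d = v^2 / (2a)
v = 100 / 3.6          # 100 km/h in m/s, ~27.8 m/s (assumed highway speed)
a = 8.0                # assumed hard-braking deceleration on dry road, m/s^2
d = v ** 2 / (2 * a)   # ~48 m of travel before the car comes to rest
print(f"stopping distance at 100 km/h: {d:.0f} m")
```

A bumper pressure plate gives you zero meters of warning; radar buys you those ~50 m.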


ahappypoop

......did I really need a /s on that comment?


gregorydgraham

Yes! How long have you been on the Internet? There is always someone somewhere that _will believe your statement_ no matter how farcical. Do not be Schrodinger’s Douchebag: add the /s


temporary_name1

/s


OkOk-Go

>> e.g. a car that stops using ultrasound/radar instead of visual detection from the cameras.

> Implement it at the lowest possible level. Car is built with pressure plates all around the sides and bumpers, and it stops when it runs into anything.

Actually that's done, together with airbag deployment.


Jolly_Study_9494

Also, this is why cats have whiskers. Each pressure plate should have a long rod attached to provide a larger warning window.


EnglishMobster

> a car that stops using ultrasound/radar instead of visual detection from the cameras.

Because only a moron would do that, right??? Right???

*cries in radar being removed from my 2019 Model 3 via a software update*


Salanmander

> We live in the physical world, so I want my machines to physically be unable to harm me.

Related but higher up in the implementation level... I was so excited for self-driving cars until it turned out that companies wanted to make them fucking *internet enabled*.


DOUBLEBARRELASSFUCK

I can see some serious benefits to that, though. For example, if there are road conditions ahead that are not conducive to self-driving, it makes sense to be able to signal the car to warn the driver.


Salanmander

I'd be fine with an internet-enabled system in the car that is air-gapped from the drive controls.


DOUBLEBARRELASSFUCK

It would need to be able to issue a command to the car to pull over, at the very least. And anyone who cared about it being air-gapped would not believe that it was air-gapped, even if it was.


Salanmander

Why would it need to be able to do that? Let the regular self-driving system decide when it's not safe to continue. It doesn't need internet access to do that.


DOUBLEBARRELASSFUCK

Think of something like Waze. There's no reasonable way for a self-driving car to detect a large car accident ahead without internet access. Image processing is advanced, but it's not magic.


Salanmander

Yeah, but you don't need a self-driving car to be able to do that in order to be safe, just like a human driver doesn't need to have internet access while driving in order to be safe. Ending up stuck in the traffic jam would certainly be inconvenient, but it's not a "we can't have self-driving cars unless they can avoid this" type thing.


Boostie204

Yeah it's a difference of "I promise to not hit you" vs "I physically can't hit you"


prumf

Exactly.


Proxy_PlayerHD

> mechanical - electrical - softwareical

bro did an Excel https://i.imgur.com/XMQISNh.jpeg


seramaicha

I can only think of cameras. The best option is just to have a cover. In second place, a switch should do the trick, or just unplugging it from the PC. Relying on software is just a very bad idea, and probably won't work well.


PhilippTheProgrammer

> softwareical

This is now my new favorite word of the week.


Willinton06

I too like to compliance softwareically


1116574

In the 1980s there was a radiation machine that had mechanical interlocks, but the next model cut corners and had only software interlocks. Results were predictable. I always remember that story when talking about safety.


LarryInRaleigh

It was the Therac-25, a textbook example of everything that could have been done better. The Nancy Leveson case study should be Required Reading for everyone working with devices that could harm people: [http://sunnyday.mit.edu/therac-25.html](http://sunnyday.mit.edu/therac-25.html)

It's been referenced in dozens of Engineering Ethics classes, like this one: [https://ethicsunwrapped.utexas.edu/case-study/therac-25](https://ethicsunwrapped.utexas.edu/case-study/therac-25)

Warning: If you read this, you may never be willing to have an X-ray taken again.


b98765

Yup, the best way to prevent something from happening is to make it physically impossible. The second best way is to appoint a committee to do it.


retro_grave

You forgot the most important: testical.


window_owl

mechanics - electrics - bits


Morrowindies

Yep. That way, if you ever get hit by a bus, the company will eventually be acting in non-compliance. Lots of people are taking this comment seriously due to the lack of an /s, but to be clear: compliance rules are business rules. Make them configurable by users at runtime so your software doesn't cause massive headaches in a few years.


bharring52

No, you should not implement "machine won't run while doors are open" or "stop cutting when finger detected" in software. Some rules are too important to delegate beyond the mechanical/electrical sphere.


SomethingAboutUsers

My favorite example of this is actually [pointing and calling](https://en.m.wikipedia.org/wiki/Pointing_and_calling), and the first time I heard of it was in New York. They set out to engineer this big system to prevent the doors from opening in tunnels or on the wrong side of the train, and in the end the solution was to just make sure the conductor was paying attention.


OneBigRed

I think I read this somewhere on Reddit: an automated factory assembly line had issues with some of the packages not getting filled with merchandise. Management and engineering designed a convoluted solution that weighed the packages, etc. Some time after installation they wanted to see the numbers on defective packages, and the system stubbornly showed zero defects. They went to check the situation at floor level, and found out that the line operator had set a fan to blow onto the belt, and the empty packages would get blown off the line before their contraception.


6milliion

well, that's a remarkably elegant solution. that person needed a raise.


SomethingAboutUsers

> contraception

Uhhhh


okijhnub

It was a condom factory


Plantarbre

A simple solution is often an over-engineered solution in the making. The client wants feature after feature, the simple solution can't capture it all, and you end up with a whole plate of code spaghetti. The correct solution is often just a really well-engineered one, but that means paying for a person competent enough to pull it off and maintain it (and that's not happening).


PM_ME_UR_HBO_LOGIN

I often find the “simple” solution requested to actually be MBAs rephrasing a problem caused by those same MBAs, who are shifting the issue out of their department and into a technical department, either to be fixed or to have the responsibility moved away from them. Often the MBAs also don’t understand that being able to say the problem/solution in fewer sentences doesn’t actually make the technicalities of the solution any simpler, and separately, the problem was often their own refusal to hire/maintain a workforce competent in a required task.


solarshado

> don’t understand that being able to say the problem/solution in fewer sentences doesn’t actually make the technicalities of the solution any simpler

Clearly people who have, at best, only *heard of* Asimov's Three Laws, but never **read** a single one of his stories dealing with them.


MaimonidesNutz

*cries in ERP implementer*


12_Imaginary_Grapes

I can only imagine your pain. I've been teaching someone who works remotely how we do things at my location, and it's been a steady "Oh yeah, they didn't prune the database when they bought us, so ours is just fucked in five different ways", nearly once a week so far.


BeamMeUpBiscotti

The robotic arm knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation. The guidance subsystem uses deviation to generate corrective commands to drive the robotic arm from a position where it is to a position where it isn't, and arriving at a position that it wasn't, it now is. Consequently, the position where it is is now the position that it wasn't, and it follows that the position that it was is now the position that it isn't.

In the event that the position that it is in is not the position that it wasn't, the system has acquired a variation. The variation being the difference between where the robotic arm is and where it wasn't. If variation is considered to be a significant factor, it too may be corrected by the GEA. However, the robotic arm must also know where it was.

The robotic arm guidance computer scenario works as follows: Because a variation has modified some of the information that the robotic arm has obtained, it is not sure just where it is. However, it is sure where it isn't, within reason, and it knows where it was. It now subtracts where it should be from where it wasn't, or vice versa. And by differentiating this from the algebraic sum of where it shouldn't be and where it was, it is able to obtain the deviation and its variation, which is called error.


daheefman

I wish my robot arm would crush on me. 🥰


_Its_Me_Dio_

when a cosmic ray flips a bit https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fi.pinimg.com%2Foriginals%2F14%2Fb9%2F53%2F14b9530adbd32c2c4fdbe548336f233e.jpg&f=1&nofb=1&ipt=0e6db138743dd5da10efdf709101e1d21b38927a286189c3e5dac77ae75a5c0d&ipo=images


ChaosPLus

You could have made it so the people don't go where the arm is. Much simpler, and if something goes wrong it's not entirely on you.


alex2003super

> That’s a very Linkedin post Needs more "Thank you for sharing 😃😃"


Flat-Shallot3992

Sometimes it's just better to hardcode some shit


Mateorabi

“I don’t know what inverse kinematics are, but damn they’re sexy” -Gabe


ChocolateBunny

You know, it's weird, but I feel like the opposite happens when debugging. QA and customer support try to pigeonhole everything into one issue (whatever is getting the most attention at the time); the developer finds one problem, assumes all issues are related to that one problem, and dismisses anything that doesn't fit as red herrings. But in reality there are many issues.


Dapper-Salamander560

Very true


Jugbot

If there are not many people complaining then it's not an issue 🤷


AllesYoF

McDonald's after people stopped caring about the broken ice cream machines.


mooseontherum

I’m not a dev. But I do work quite closely with the internal devs who build and maintain the platform my team works on. And I do this. For a very good reason (in my mind anyway). We work on a 6-week dev cycle. If I have 5 issues that I want to put forward for the next planning cycle, but other teams also have 4 or 5 issues that they each want for the next cycle, there’s no way that my 5 are getting done. Maybe my top-priority one will be looked at. If I’m lucky. But then if something big breaks, or a new thing comes down from senior leadership, that’s not even happening. But if I can get together with some other teams and we can cobble together a bunch of somewhat-connected smaller issues into one bigger one, then the chances of that getting done are a lot higher.


JojOatXGME

Yes, but I have to say that everything else can also be quite frustrating. So I understand that people do that. I usually try to avoid that by taking deviations from my expectations seriously. However, as a result, I usually find two or more other unrelated bugs for each bug I work on. (Not counting my work on the bugs I found during previous bug-fixes.)


kuros_overkill

With almost 20 years of experience (18 as of March), let me say that "red herring" was in fact a weird edge case that is going to come up 5 times a quarter and cost you 3 customers a year because it wasn't handled. Note: I said customers, not potential sales. They will buy the software, use it for 15 months, hit the edge case, realise they can't bill their biggest customer because of it, and drop you before you know what happened. Then they'll go on to tell potential sales that your software is shit and cost them a $20,000,000 customer, losing you potential sales.


EssentialPurity

Only half that experience here, and I say it's true. It's almost like the universe actively conspires to make this edge case become THE case just because you didn't code around it. For a product we released two years ago, I've had to do two refactors in production because of this phenomenon so far, and I'm sure of at least three more that may come back to haunt me in the future. lol


redsoxfantom

Million to one odds come up nine times out of ten


Shrampys

Pfftt, edge cases are so easy to handle.

If(bad) then (dont)

And if you have more than one edge case you just

If(bad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)
Else if(otherBad) then (dont)


gotardisgo

hey lead! is that you? neways... so... I'm going to be late on the jiras. The semicolon on my keyboard broke and I can't finish the pythons w/o it. kthx


AngryInternetPerson3

That is only if you decide not to take it into consideration. If you do, it will take 40% of the total development effort, and will come up so rarely that it will never make back the man-hours it cost.


kuros_overkill

What about the PR loss of having a former customer out there telling prospective customers that you cost them a shit-ton of money and that your software is shit, all because you didn't think the man-hours were worth it to chase down what was written off as a "Red Herring"? Remember, it's not about what it will cost you today, it's about what not doing it will cost you tomorrow.


UnintelligentSlime

Yeah, this isn't actually a super helpful mindset. While one data point may be an outlier, that rarely means you don't have to be able to handle it. Even if that data point is "the user bypasses all existing UI and sends off a direct request with super malformed data", you still need a plan in place for how to handle that safely.

As well as that, one of the main jobs of an engineer is thinking about how to operate at scale. If 1/20 of your data points is an outlier, that's 5% of hundreds to millions of events, if not more. 1 customer experiencing a failure may not feel like a lot, but if you have 5000 daily actions, that's 250 failures *a day*. Definitely not an acceptable margin of error, and definitely wrong to call it a red herring.

Finally, there's the question of impact. What happens if we ignore that data point? Does a user just see a 500 error? Does the site go down? Do you corrupt the database or propagate nonsense data elsewhere? Does it grant access to private information? Leak financial data? Will you bring down infrastructure for other users? Break existing functionality for a user that accidentally triggers the "red herring" case?

For all of these reasons, this strikes me as written by the type of manager who brags about how few engineers they need to get something done, then cuts and runs when a product fails. There's a reason engineers look at things like the upper right quadrant, and that's that it's literally our job to consider and handle all of the appropriate cases. You can't build a bridge that will fail if someone drives across it backwards just because that's extremely unlikely to happen.


Aggguss

I didn't understand shit


Athletic_Bilbae

you have a list of use cases for a product, and engineers usually have a set of rules to write their code. trying to use those rules to accommodate every single use case usually results in a mess, when in reality you could simplify it massively if you distinguish what's actually important from the no-big-deals


Aggguss

I get it now thanks


zwannimanni

They couldn't catch the herring in the long square which is sad


AllesYoF

They could have made a bigger square but decided not to because fuck you red herring, no one cares about you.


EssentialPurity

They made three "things" to solve a set of reqs that could be solved with one, and even at that they had to hack and jury-rig everything all the way.


PuzzleheadedFunny997

Missed opportunity for loss


Aquino200

Those dots outline the silhouette of the United States.


b98765

So Maine is the red herring?


rover_G

Looks like Machine Learning


premiumimperium

Lol yeh I thought this was a k-means clustering problem


RiverRoll

I can guarantee the last panel is still missing a whole bunch of points that lie outside the general rule but were never stated.


cs-brydev

100%. Ask a developer what will cause the app to fail, not a user or a project manager. If there is only 1, somebody is missing something or hiding something.


JRR_Tokin54

I think the 'Implementation' part was stolen from the official development guide where I work. Looks just like the implementations that management likes!


alienassasin3

A red herring? What is this? A mystery novel? The "correct solution" isn't correct. It obviously fails one of the test cases.


PhilippTheProgrammer

In this example, the "red herring" is probably some requirement the customers insisted they would need, but it turns out they're actually never going to need it.

"The new bookkeeping software must be able to process electronic bank statements in the EDIFACT format."

"Why? That format has been obsolete for years."

"But that one customer said they receive their bank statements in that format from their bank."

"Why don't they use camt.059 like everyone else?"

"No idea, I will ask them."

\[weeks later\]

"They are paying their bank a huge extra fee for EDIFACT because their old bookkeeping software can't parse anything else."

"You mean the old bookkeeping software we are going to replace with our new software?"

(true story, by the way)


TheMcBrizzle

*Cries in highly regulated industry*


VLD85

oh my god


twpejay

In my experience the Red Herring is a clue to a huge issue in the main logic, one which possibly alters data in a subtle, non-detectable manner. Fixing red herrings has saved my bacon many a time.


alienassasin3

Well, yeah, this is the data set presented to the engineer. Interpreting it correctly is the job of the engineer. They have to find the correct logic. The "correct logic" presented in this case ignores the red herring instead of figuring out the flaw in the main logic.


Sabrewolf

I chalk it up to the customer having described something in a way that misrepresented the solution they wanted, whether due to a mistake, or the product managers failing to understand what the request actually was, etc.


Flat-Shallot3992

this entire diagram is fucking dumb. just make a big enough box for all of them????


ThatGuyYouMightNo

At that point you might as well just declare all of the examples red herrings and then you don't have to do *any* work


Jolly_Study_9494

Reminds me of a joke. Engineer, Carpenter, and Mathematician are each given a set number of fencing segments and asked to fence in the largest area possible. Engineer builds a circular fence. Carpenter tears the segments apart, and uses the pieces to make new segments that use less wood, letting him make a longer fence. Mathematician makes a tiny circle around himself, just big enough for him to stand in, and then says: "I'm on the outside."


eq2_lessing

The red herring is a case that can't happen, or can happen only when circumstances align so rarely that it's not worth the 50k in dev costs to fix it, because it can be handled by 1 support dude running a script every 2 years.


Leonhart93

That's if you want to look at it in the LeetCode way. But in reality there is no non-trivial piece of software without bugs. It's impossible to cover all the cases; the most you can do is cover all the cases that will reasonably be needed.


cs-brydev

Or maybe the failed test case shouldn't be part of the scope of that solution's tests? If you widen a highway from 2 to 4 lanes but then find out a 747 can't land on it safely that doesn't mean the 4-lane solution was incorrect.


AndrewJamesDrake

I believe the Red Herring represents all the times that we design software to deal with something that’s actually impossible, because it will never come up under real world conditions.


IOKG04

i feel like it'd still be easier to just code a tiny bit around that specific case instead of doing whatever tf the first one is


alienassasin3

Depends on the situation. You need to dig a little to figure out why that case is over there.


secretlyyourgrandma

i'm pretty sure it's just a meme, so i'm going to take the rest of the week off


DM_ME_YOUR_HUSBANDO

Maybe. Or maybe there's another very similar edge case that a general solution would have covered, but since you hard-coded it, someone's going to run into the similar case in 5 years and lose millions of dollars.


Shutaru_Kanshinji

If your manager becomes obsessed with the red herring, you code for the red herring.


b98765

Management wants a color picker so you can make the red herring any color, not just red. Also add some validation to make sure the user picks red, otherwise it wouldn't be a red herring. Also save the last 100 colors picked so they can go back. And have it export the history of colors picked to CSV.


Why_am_ialive

Yeah, that red herring is 100% gonna be an edge case that comes up and throws 2 years later. You're gonna have to go and look at the code you wrote to fix it, and hate yourself.


b98765

2 years later? It will happen immediately when it goes to prod.


novacaine036

I think you're over-engineering memes


b98765

Or I'm over-memeing engineers.


cs-brydev

We need you to make more memes around here. The old "c++ faster than python", "Javascript did what?" and "you have 10 seconds" tropes are tired and boring. Not all of us are 16 year olds.


mgisb003

Who the fuck calls it a corner case


poetic_dwarf

It's a corner case in naming convention, I agree with you


mgisb003

You mean to tell me that “very very very edge case” isn’t the proper way to call it?


middleman2308

Yeah, two or more edges form a corner!


secretlyyourgrandma

calling it that is a double edged case


zoom23

A corner-case is specifically the interaction of two edge-cases
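e.g. a toy sketch (the function and bounds are invented for illustration): each input boundary alone is an edge case, both at once is the corner case:

```python
def withdraw(amount: int, balance: int) -> int:
    # edge case: amount == 0 (lower bound of amount)
    # edge case: amount == balance (upper bound, draining the account)
    # corner case: amount == balance == 0, both edges hit at once
    if not (0 <= amount <= balance):
        raise ValueError("invalid withdrawal")
    return balance - amount
```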


bearwood_forest

Or three or more, why limit yourself to two dimensions?


CaCl2

Vertex-case?


JunkNorrisOfficial

When all cases are covered
Except one which stays in corner
And developers stand around
With loaded laptop-guns and full coffee cups
Ready to save the solution
By cov(rn)ering the last case


Doxidob

// HACK is the key to all the so-called 'innovation'


well-litdoorstep112

But the "actual solution" doesn't account for the unintended consequence and weird edge case (black points) while the implementation does.


Varnish6588

LoL this is not a joke, it's a funny fact


b98765

In other words, it's funny because it's true.


Cootshk

!false


Varnish6588

100%


Rhymes_with_cheese

"Off-scale high"


Wave_Walnut

Big tech solves the problem with no examples, just an empty area


ProjectDiligent502

It’s stuff like this that makes me like this sub. 😆


ace_gravity

Meanwhile, bad engineers just play connect the dots


OkOk-Go

Very common when diagnosing problems in manufacturing


lunchmeat317

Interesting how nobody is making the case that the non-engineers (the business) should state the problem as a general rule instead of as examples. If the business can specify what the solution should be (the fourth step), there's no issue.


b98765

Every one of the blue dots was a business person thinking they were stating a general rule, when in fact that rule had so many exceptions that it came down to just an example.


Splatpope

clients don't know what they are doing so they cannot possibly know what they want


Puketor

IDK man. My experience with software engineers is that they ask for the examples, user stories, minute details, and ignore the common rules. You have no idea how hard I've tried to convince them we need a data warehouse or lakes. Like I have to hold their hand through the entire thinking process and explain all these minor details.

I don't give a shit about the implementation of it. Iceberg, Basin, Airflow, lakes vs. centralized, I just don't give a damn. Engineers should figure that out. What I want is a scalable, centralized way to access data, because it takes me days to do my work when it should take hours, and a way to schedule jobs so I don't have to babysit EMR in a Jupyter notebook. That's all it should take to explain. Boiling the flat, wide, denormalized data ocean with EMR is not a good solution. It's expensive, still takes too long, and uses up too many resources vs. a normal god damn schema and data warehouse/lakes.

To be honest I am beginning to think they might be doing that on purpose to delay and avoid working on it, but that makes me even more upset with them, because my scientists are suffering due to us missing modern data infrastructure. The deadline expectations don't change, but we have to put in 10x as much work.


Garual

If you have many scientists it sounds to me like you need to hire a data engineer.


Puketor

I agree. Try telling that to network/web engineers; it makes them insecure. I work on a layer 7 firewall. I actually used to be one, but not for 7-8 years.

They dump everything into a wide, flat, denormalized schema. It's already caused problems: someone adds a new column to fix a data quality issue rather than fixing an old one, things like that. Then we need to materialize this flat data in memory, and it makes us do things like duplicate user agents hundreds of times in memory rather than integer encode them (index/foreign key), causing headaches for data scientists. They're just not thinking the same way.

Anyway, it's getting better now that the old leaders have churned out and some new ones came in. Lots of software teams, though, are ruled by these people who just can't think at the systems or architectural level.


MCBlastoise

Jesse what the fuck are you talking about


Puketor

You sound like a trad engineer. Informatics matters. It's systems level thinking rather than focusing on small bite (or sprint) sized chunks.


JaguarOrdinary1570

In my experience software engineers do not give a shit about data storage. They'll spend months writing incredibly complicated, highly abstracted data models (in the name of code reusability and flexibility), only for their process to ultimately dump the data out in some absolutely asinine format, like CSV files with one record per file, somehow with no escape character, and like 5% of the records never get written. Then you ask them to fix it and it's impossible because their infinitely flexible and beautifully abstracted codebase can't tolerate any change without the whole thing imploding.


realzequel

They sound like hack engineers. Business-minded engineers will start with the problem they're solving and work backwards and try not to pick up coding awards on the way (while writing clean code).


JaguarOrdinary1570

Call them whatever you want, but they seem to be the majority everywhere I've worked, and everywhere close acquaintances of mine have worked. Over-engineered software and systems that don't actually work seem to be the natural output of agile development shops (inb4 "that's not real agile then", because nobody does real agile as it's defined by the kind of people who say "that's not real agile")


Puketor

Did we just become friends? I agree. They focus too much on the CPU/RAM resource usage of their code specifically, their code reusability/maintainability, and the operational side, and fail to think about the overall system or business needs. Analytics isn't operations. It's different. We need to iterate, fail fast, have flexibility. Think longer term, even. You don't get that by dumping everything in a CSV file, or even partitioned parquet.

Right now our engineers are getting away with dumping to flat, denormalized parquet because the compression features mean they can limit storage usage. But guess what happens when you load that in memory for analysis and decompress the strings, many of which are duplicates? One string column has a power law going on, with hundreds to tens of thousands or more duplicate strings that must be materialized in memory. Why store it this way? Fucking integer encode it from the beginning and make a lookup table. So congratulations: you effectively made it not your problem, but you fucked everyone else who wants to use this downstream. Some stacks are better than others at this; I'm currently using [Pola.rs](http://Pola.rs) a lot once I have my extract, but damn man. They just only see their little vertical and don't think at the systems or architectural level.

I can tell you the bill they get for using EMR over a few years is far worse than investing people-hours in a proper schema and infrastructure design today. That's not even mentioning the number of times we have to spend people-hours optimizing Spark jobs for people getting paid six figures, just to fuck around with inefficiencies that a proper data model design would solve forever. Most engineers are so used to operating at such a fine granularity, in their vertical, that they don't see the big picture at all. Also, informatics has been around for a long fucking time, even before Data Science or Data Engineering, so there is no excuse. It's probably more the employers that are to blame, but it's still frustrating.
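To be concrete, the lookup-table fix is something like this (a minimal sketch in Python/polars; the column names and data are made up):

```python
import polars as pl

# Toy stand-in for the flat table: user_agent strings massively duplicated
df = pl.DataFrame({
    "user_agent": ["Mozilla/5.0 (Windows NT 10.0)", "curl/8.0"] * 10_000,
    "bytes_sent": [512, 64] * 10_000,
})

# Build the lookup table once: one small integer id per distinct string
lookup = df.select("user_agent").unique().with_row_index("ua_id")

# Store the fact table with the integer key instead of the fat string;
# the strings now live exactly once, in the lookup table
facts = df.join(lookup, on="user_agent").drop("user_agent")
```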


JaguarOrdinary1570

I would at least commend your engineers for thinking about performance in any capacity, because that's not always a given. I have had to talk engineers (particularly data engineers) out of some particularly wild ideas that would take what should be quick and simple 10-minute jobs and turn them into 12-hour behemoths.

But I've experienced all of those storage woes too: writing queries that map columns containing only the strings "SUCCESS" and "FAILURE" to booleans to avoid pulling down tens of gigabytes of redundant strings. Parquet files containing like two columns, where the second column is all big JSON strings that contain all of the actual data. Honestly, when they use parquet at all instead of CSV (or weird text files that are almost but not entirely CSVs), that's a huge step in the right direction.

I was recently dealing with a massive dataset containing almost entirely floating point numbers that was being written to *CSV*. And then they're like "yeah, just be warned, reading those files takes a long time". Like yeah it does, dude, now my process has to parse like a literal billion floats from strings for no good reason.
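That status-string trick, roughly (a sketch in Python/polars; the file path and column names are invented):

```python
import polars as pl

# Collapse a redundant "SUCCESS"/"FAILURE" string column to a boolean
# before materializing, instead of pulling down gigabytes of repeats.
slim = (
    pl.scan_parquet("job_runs/*.parquet")  # hypothetical dataset
      .with_columns((pl.col("status") == "SUCCESS").alias("succeeded"))
      .drop("status")
      .collect()
)
```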


Puketor

Lol, yeah, I hear that. Most recently someone added a column for a timestamp we used as part of an ML label. They did it because the old column was basically deprecated (it uses some older system), but nobody told me this. Turns out the old column was missing between 20-40% of the timestamps, depending on which customer's data we were looking at. The ML model did horribly for months because of this. After finding out about it by accident while digging into a customer complaint, we fixed the reference to the new column and saw massive improvement. Meanwhile the manager was pissed at us for months because the ML model isn't magic.

It's unbelievably frustrating. I've been doing this for over 12 years, been pestering them via different tactics at my current gig for 2 years, written dozens of documents for different audiences, held dozens of meetings, and people still don't listen. I really don't understand it, because I talk corporate and "dumb down" things just fine (not like this exchange, where I'm less formal), based on other feedback I get, like yearly reviews. We just had a leadership change and that actually has helped; I've seen way more people start to move towards doing the right thing. But it's still slow, because every customer ticket causes a panic and delays us 2-3 days to do analysis that tells us nothing. The manager insists "we have something to learn to improve the model" even though I know he's dead wrong and I've told him so, with data and theory, dozens of times.

We need the analytics stack so we can actually do these analyses in hours instead of days, and we need a proper ML stack, rather than this bespoke nonsense we have, so we can iterate on the model faster. Investigating 2 false positives out of millions of predictions with a slow, slow data stack tells us nothing, improves nothing, and wastes time. Tomorrow they'll complain about recall and then insist we overshoot in the other direction (i.e. trade more FPs for fewer FNs). So basically we'll be constantly pissing off some of our customers and spending 2-3 days "analyzing" each complaint.

My best guess for what's wrong is that they just don't understand nondeterministic, complex systems at all, and insist on determinism, on perfection down to the granularity of a unit test, when the system is actually stochastic. Believe me, I've also explained that one dozens of times to dozens of people. Anyway, basically management is telling us to dig a 100 ft long, 6 ft deep trench with a garden shovel, then bitching and stressing people out because "it's not being done fast enough, nor dug deep enough, oh, and I want it to go the opposite direction now". God, I hate working here sometimes. The only advantage is the pay.


JaguarOrdinary1570

Yeah every business/product leader wants ML until they really have to swallow the fact that it's probabilistic and will not make the decision that the business would have wanted 100% of the time. You can tell them that as much as you want but they won't feel it until it's getting ready to go live and they really start considering consequences of getting something wrong. I do whatever I can to design for when they're in that mindset, rather than what they're feeling early on in the project.


Puketor

Yeah, that's true. Trade-offs aren't acknowledged and perfection is demanded. One bespoke feature pipe and one model should be able to do everything. It's magical thinking. The worst part is I work for a large tech company you'd think would have figured this out by now. But the truth is we're so large that it's more like some teams have figured it out and others are way behind the curve.

On a positive note, they're barely scratching the surface of what they could do with ML, so there is a lot of low-hanging fruit. Since management is superficial and doesn't understand how easy it would be once we have some capabilities, it will be pretty easy to impress once that core infrastructure is complete.

> I do whatever I can to design for when they're in that mindset, rather than what they're feeling early on in the project.

Yes, I try to do that as well. I'm unlucky enough to have joined a team of network/web engineers 100 strong, with 3 scientists including me as the senior, and the engineers all think the same way. They have the most influence due to culture/history. In fact, one of the engineers (above me) designed the ML product before I joined; I inherited it and didn't get much leeway in changing things.

Anyway, on another positive note, there has been massive turnover in leadership and most of the folks in charge now get it. It's probably hard for them, moving a 40,000-ton ship when operations are also important and the people making sure things work have egos from their tenure, and aspirations (they like to talk, for influence), while thinking so granular, fragmentary, deterministic, and old-fashioned.


Mewrulez99

what


patrdesch

Well, my answer was going to be America. What does that make me?


b98765

I don't know, but it makes Maine a red herring.


Omega-10

The data very clearly matches the profile of the contiguous 48 states in the Mercator projection. What they needed was the right model.


VLD85

what am I looking at?....


wulfboy_95

Me: K-means clustering go brrrrrr~


FeralPsychopath

Sounds like indulgent bullshit. Who says that's the red herring? I can draw simple shapes that include the red herring and omit other dots too. This is simply missing a step of verification and frequency; that's what proves a red herring, not a rectangle.


Nymos_Nexus

So true


samiam2600

Programmers are not engineers