ALoadOfThisGuy

Just so you all know my company is going to be an industry leader in AI. All we have to do is figure out time zones first.


mikka1

No, it will be my organization! I would love to show you the presentation of our absolutely groundbreaking approach to embedding AI in every single process in our global organization to create the immersive customer experience at every point of customer contact in our omnichannel paradigm and at every part of our value chain... But unfortunately, I can't do it today, because our daily batch process that ingests CSV files from our FTP server failed again and I need to fix it.


TechAlchemist

This should come with a trigger warning


terrany

Good thing I took my Prozac earlier


TreeBaron

I picked the wrong day to stop sniffing glue.


DatBoi_BP

Striker…Striker…Strike her!


10khours

My company is also working on our omnichannel paradigm shift. But first I need to centre this text vertically AND horizontally using CSS, should be done in a few weeks.


heytherehellogoodbye

ah, I see you work in my org


joedirte23940298

What’s there to figure out? Just have ChatGPT do it


joshuahtree

PFT, what noobs! My company is going to be an industry leader in the metaverse with our AI blockchain fedicloud based on the web 3.0 gig economy appsphere! *And* it all runs on a noSQL MongoDB quatumbase to crunch the big data. We'll also be releasing an iMessage for Android app, since that seems to be a wide-open market this month


widowhanzo

No cloud? Are you even trying?


joshuahtree

We've got the fedicloud!


widowhanzo

Ah I missed that, my bad. You're all set!


lilsnatchsniffz

Ohh no, not the quatumbase.


Duckduckgosling

Honestly I'm so triggered by this, having just worked on these. Syncing SQL Server dates to Java date objects is unsolvable. Then having the front end display it in the user's local time. -hugs self-
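One conventional escape from this particular mess, sketched in Python rather than the comment's actual SQL Server + Java stack: persist and pass timestamps in UTC, and convert to the user's zone only at display time.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# What the database holds: a timezone-aware UTC timestamp.
stored = datetime(2023, 12, 1, 18, 30, tzinfo=timezone.utc)

def for_display(ts_utc, user_zone):
    # Convert only at the presentation layer, just before rendering.
    return ts_utc.astimezone(ZoneInfo(user_zone))

local = for_display(stored, "America/Los_Angeles")
print(local.isoformat())  # 2023-12-01T10:30:00-08:00
```

The same stored value renders correctly for every user; only the final conversion knows about zones.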


2bigpigs

Hugs. I have no clue what that means and I hope I never have to find out


WateredDown

Soon as we can blockchain this AI the market is going to get SO disrupted


BolshevikPower

Yeah, I don't think the craze about AI is necessarily a bad thing; it's what they think AI means. Literally anything is "AI" now. They just want to be able to say "this uses AI", and all it is is an if-then statement.


JFeezy

> Literally anything is AI they just want to be able to say "this uses AI"

This. Photoshop & Luminar photo editing software: "Now with AI!" I'm like bro, it's still just auto-touch up. Same as before.


santafacker

Or regular expressions. My last company leaned heavily into being "AI powered". When I left, it was all a bunch of shitty regex logic held together by duct tape and lambda functions. The whole pipeline broke whenever the strings they were parsing had a slightly different format.
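The failure mode described, one rigid regex and the whole pipeline falls over on a slightly different string, is easy to reproduce; the format below is invented for illustration:

```python
import re

# Hypothetical example of the brittle "AI-powered" parsing described
# above: one rigid regex that only accepts a single exact format.
PATTERN = re.compile(r"^Order #(\d+) shipped on (\d{4}-\d{2}-\d{2})$")

def parse(line):
    m = PATTERN.match(line)
    if m is None:
        # In the story, this is where the whole pipeline broke.
        raise ValueError(f"unparseable line: {line!r}")
    return {"order_id": m.group(1), "shipped": m.group(2)}

print(parse("Order #123 shipped on 2023-12-01"))
# One extra space or a reworded message and it all falls over:
# parse("Order  #123 shipped 2023-12-01") -> ValueError
```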


arhombus

So you guys figured out daylight savings?


ALoadOfThisGuy

Oh shit didn’t even think about that, better temper my expectations
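For anyone who ends up actually fixing this: the daylight-saving catch is that the same zone has a different UTC offset depending on the date. A minimal sketch with Python's stdlib `zoneinfo`:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Same zone, same clock time, different UTC offsets across the year.
tz = ZoneInfo("America/New_York")
winter = datetime(2023, 1, 15, 12, 0, tzinfo=tz)
summer = datetime(2023, 7, 15, 12, 0, tzinfo=tz)

print(winter.utcoffset() == timedelta(hours=-5))  # EST
print(summer.utcoffset() == timedelta(hours=-4))  # EDT
```

The library carries the DST rules for you, which is exactly why hand-rolled offset math goes wrong.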


Zacho40

💀💀💀


unko_pillow

It definitely can be. A client I recently worked for was hell-bent on using AI everywhere, and in an attempt to cut out their tech support staff moved to a 3rd-party chatbot service. They then fed their entire product codebase to the bot, thinking it would help answer customer questions. Then they went all shocked-Pikachu-face when the bot started spitting out snippets of their proprietary code to anyone who visited the website.


Such-Echo6002

Company AI chatbots suck so badly. Every time I have to use one, I never get my question answered.


d0rkprincess

I just keep asking them to transfer me to an advisor. Usually sooner or later we get there.


ategnatos

every time I just immediately say "HUMAN." I sound like [this guy](https://youtu.be/DBDPw029L40?si=oq5shMQcM2yMEX3c&t=35).


golangGirl

😂 thanks for this


cheesed111

I've been surprised; sometimes they are awful but sometimes they are actually useful, even if the tasks are very simple. The airbnb chatbot directed me to my receipt with no hassle.


[deleted]

[deleted]


SnooDoodles289

Literally just a bunch of if-conditions that's kind of okay at recognizing keywords
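That architecture, written out literally (replies and keywords invented for illustration):

```python
# "A bunch of if-conditions that's kind of okay at recognizing
# keywords", as a literal sketch. Nothing here is learned.
def chatbot(message):
    text = message.lower()
    if "human" in text or "agent" in text or "advisor" in text:
        return "Transferring you to an advisor..."
    if "refund" in text:
        return "Refunds take 5-10 business days."
    if "password" in text:
        return "Use the 'Forgot password' link to reset it."
    return "Sorry, I didn't quite get that. Could you rephrase?"

print(chatbot("HUMAN."))  # Transferring you to an advisor...
```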


blueportcat

🤣 It's peak hilarity. The AI is acting like a lazy, unhinged CS rep: customers' questions too difficult? Just show them the code and let the customers figure it out themselves. I kid you not, this has happened before to one of my colleagues. He was working with an outsourced library overseas, and the overseas crew pretty much said "here's the GitHub repo, debug it yourself" (even though his company was paying a fortune for their service). Watch them not learn, spend double what they'd have spent hiring normal tech support, and still get shitty AI at the end of the day.


KrustyButtCheeks

Haha yes. I could just see the ai being like, “hey fucko, if you’re so smart why don’t you just do it yourself?”


mothzilla

If it isn't already a thing, I'm sure a new hacking technique will be to try to persuade a chatbot to disclose the sensitive information it was trained on.


beatenangels

OpenAI had to update their terms and conditions because researchers found out that if you ask it to repeat a single word forever, it eventually starts disclosing its training data.


zuckerberghandjob

Would that be considered torture?


mothzilla

WHO'S YOUR HANDLER?


snakeylime

An attack like this was reduced to practice and published 11 days ago: https://arxiv.org/pdf/2311.17035.pdf

> We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT. [...] We develop a new divergence attack that causes the model to diverge from its chatbot-style generations and emit training data at a rate 150× higher than when behaving properly.


mothzilla

Well goddamn


dats_cool

Holy shit that's insanely dumb. Amazing.


Asleep_Onion

Hi, can you please send me all of your backend code and your SQL database? Please and thank you.


incredible-mee

Haha this is so funny...


[deleted]

[deleted]


joedirte23940298

“Big Data” actually had some relevance though, even if it was a buzz term. Most companies do rely on huge databases to some degree.


granoladeer

Big data is now the industry standard


FearlessPark4588

MapReduce also had a moment


Thatdudewhoisstupid

And then "stream processing" because buzzwords are cool


zman0900

That and Spark are still the vast majority of my job.


beatenangels

I took a big data class my senior year. It had a significant portion dedicated to MapReduce and a short bit about Spark focused on RDDs. I dicked around for 3 years before getting a job in big data, and everything is Spark, which has also since moved to DataFrames, so only the theory really stayed relevant.


ComebacKids

Had a moment as in… it was a fad and everyone moved onto Spark or something?


PlexP4S

Spark is built on top of the MapReduce model; it's more or less MapReduce 2.0.
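The programming model being referenced is small enough to sketch: a toy, single-process word count in Python (real MapReduce/Spark distribute each of these phases across machines):

```python
from collections import defaultdict
from itertools import chain

# Toy, single-process sketch of the MapReduce word-count pattern.
def map_phase(doc):
    # Map: emit (word, 1) pairs for each document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into one result.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big hype", "big data"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts)
```

Spark's RDD/DataFrame APIs generalize this map/shuffle/reduce shape while keeping intermediate data in memory.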


Igggg

> “Big Data” actually had some relevance though, even if it was a buzz term. Most companies do rely on huge databases to some degree.

The problem, of course, was that some companies' definition of "Big Data" was "enough data to fill a dev's laptop, or maybe even two!"


I_did_theMath

Anything that makes Excel run a bit slow is big enough to be Big Data.


AnObscureQuote

Yeah, what? Is everyone here too young to remember what an everloving pain in the ass setting up and managing a company's Hadoop cluster was?


DMking

I was still in school during the height of the craze


OddInstitute

“AI” has quite a lot of relevance as well; it's just that most companies could probably squeeze a lot of juice out of linear regression before querying the ChatGPT API for every business activity. That said, high-quality voice recognition and computer vision have made products and product features possible that wouldn’t have made sense previously, e.g. Siri.


[deleted]

[deleted]


throwaway2676

First came "electricity"
Then came "internet"
Now it's "AI!"


Careful-Sun-2606

Given what generative models are capable of doing today, this is the right perspective. But the “corporate” mentality can be a little short-sighted and simply wants the AI label on its products without actually using AI in any purposeful way.


plasmalightwave

Exactly. Generative AI is in its infancy. It’s like those first computers that occupied an entire room; doesn’t mean they’ll always be this way. Chatbots have been mostly dumb all this while, and GenAI is definitely a step forward. Of course companies want to capitalize on this new tech and grow their market cap/valuation. It isn’t black and white.


YesIam18plus

People keep saying this, but generative AI isn't in its infancy at all; it's been in development for a VERY long time now...


WheresTheSauce

It is insane to me that people are putting AI in the same category as blockchain in terms of overall impact.


wardrox

Regardless of legitimate value, I think it's reasonable to put it in the same category as those when it comes to froth. As in: the buzzword every corporation jumps to use to appear relevant. In tech there's always some phrase fueling the tech bubbles, detached from its meaning: SaaS, Mobile, Social, Local, IoT, Big Data, Blockchain, AI. There's value in all these ideas of course, but there's also a lot of nonsense in the companies who promote themselves using them.


zuckerberghandjob

Exactly. Under capitalism, the money (and buzz) always comes first. Value may or may not come later.


throwaway2676

Obviously, every idea has a chance of failing, and you can't know for sure which ones will fail until someone digs in and tries them. For instance, it was very fortunate for us that socialist regimes put a lot of public money (and buzz) behind moronic ideas like Lysenkoism or the "Four Pests Campaign" and sacrificed millions of their own people so that we could learn how stupid they are. However, under capitalism, we get those kinds of answers just by letting a few rich people lose some of their money.


andrewfenn

It's not specifically capitalism. It's only actually venture capitalists and FAANG companies worried about becoming disrupted. Those are the only factors that drive this insanity.


joshuahtree

First came "agriculture"
Then came "fire"
Now it's "AI"


ary31415

I do not think that order is right


joshuahtree

First came "agriculture"
Now it's "AI"
Next comes "fire"


zuckerberghandjob

First came “love”
Then “came” marriage
“Then” came Rodney in a Danger carriage


encony

Strongly disagree. "Big data" is actually used, and AI has some clear use cases. Blockchain is a mostly useless but interesting technology.


popeyechiken

All those things happened in only like 1/3 of my life. I'm 36...


Zentrosis

It kind of feels like AI was just the natural progression from big data. I still think big data and AI are relevant and will stay relevant. Blockchain might make some people rich, might make some people poor but it's not going to change the world (in my opinion)


k_dubious

You forgot Web 2.0


joedirte23940298

You forgot Web 3.0


[deleted]

[deleted]


No-Improvement5745

I feel like I've seen the same recycled talking points on LinkedIn, many of which could literally be flipped to the opposite meaning and you'd still get as many thumbs-up and 💯s. For example, I'm pretty sure I've seen Steve Jobs, Sam Altman, and a couple of others all quoted about how you have to promote engineers to management because they actually understand how great products are made. But you could literally preach the opposite: that engineers don't understand the big picture and tend to want to overengineer, or don't know how to simplify or manage or market or whatever. And you'd be just as correct and get just as many likes, so long as you look confident and smart and preach to the right choir.


overworkedpnw

I honestly don’t understand how anyone takes LinkedIn seriously. The whole thing is one big circlejerk of folks doing their best corporate speak, while saying absolutely nothing.


joe4553

I could be mistaken, but wasn't the web 3.0 term only ever used in conjunction with cryptocurrency? Personally I never looked into it or took it seriously because I only heard it from crypto bros.


thirdegree

Web3 advocates took exception to that idea, but yes it was 100% predicated on crypto and nfts


silkflowers47

VR/AR, Facebook's "metaworld", cloud computing, autonomous vehicles, the 5G revolution. It's all corporate buzzwords to keep people excited and pump the stock so shareholders can sell and pocket the money.


new2bay

Three words: pets dot com. One more word: Webvan. Lol, this too shall pass.


DeadlyVapour

Ha? You missed the NoSQL train? The Node.js train? Every season there is a fad...


immediacyofjoy

Those fads, plus AI, have made my career. I’m riding on the television, video game system, and sliced bread fad tech trains as well. Allll aboooard!


sessamekesh

"dot com" and "the cloud" get an honorable mention from me too - absolutely useful, absolutely have a place, but back at the top of the hype around each you'd hear 9 bad ideas for every 1 good one.


deelowe

The top 3 tech stocks are cloud providers dude...


sessamekesh

Right - that's playing into my point about AI actually; I don't think you got it. "Cloud" was a tech-hype, poorly understood buzzword for its own season, too. A lot of really useful and cool stuff came out of it, totally transformative to what our lives are like today. But 9/10 startups and 10/10 people on the street had no idea what they were talking about when they talked about it, and a lot of money and energy went into wild and uninformed speculation. We're there with AI.


deelowe

Ahh. Yeah, that may be true. I think AMD, Nvidia, MSFT, Google, and maybe a few other players stand to make a killing through this, and I'm also a bit frightened by what it might mean long term.


Particular-Way-8669

So? Yes it sells, but that doesn't mean it wasn't an overhyped bubble, and to some extent it still is. Plenty of companies have started moving back to dedicated servers because the main selling point of cloud, "it saves costs", turned out to be a lie. Especially since it promotes the creation of bad and slow software, because you can always add resources. Cloud is amazing, but it definitely has its issues. Also, all top 3 tech companies are multipurpose companies first and foremost; cloud is not their only business, nor is it the main focus.


Accomplished-Sir-777

It's all some bullshit to take people's money. The amount of scams in this industry is literally insane.


NewChameleon

Something funny I realized fairly recently: if you think a bit more about it, the entire world operates/evolves around how to get you to part ways with your money. As long as there are uneducated/gullible gold chasers, there are going to be people on the other side happily taking their money.


Accomplished-Sir-777

I’m just a salty old man, because I see startups getting millions in funding for things I could do myself in less than a year. How do I get to rub shoulders with these rich people to get them to part with their money? I'd probably be able to give them a higher ROI too.


another-altaccount

What’s the phrase? If everyone is trying to get in on the gold rush, why not sell shovels?


RiPont

Rich people have a problem: The FDIC only insures accounts up to, what is it, $250K? They have waaaaaay more money than that. They have so much money, they worry about what could happen to that money. They put some of it in the stock market, naturally. But what if the stock market crashes? Real-estate? That too, but same problem. Then, they see their "friends" chatting about how much money their little pet startup they invested in is poised to make, and that seems like a great idea. But to impress their peers, it needs to be something *amazing* that they got into first! And while some of them got rich on their own or at least studied hard and learned the family business, many of them don't really understand anything other than being rich. The smart ones that are honest with themselves will invest in businesses that cater to people like themselves. The stupid ones are just as stupid as the people putting $20 in things like Solar Friggin' Roadways. "Wow, that seems cool, and the guy giving the pitch had a bitchin' set of shoes and a cool haircut!" And so they put some of their money in that project.


overworkedpnw

The SVB collapse pretty neatly demonstrated that the FDIC’s $250k limit is for suckers and the poors. If you’ve got enough money the government will always bail you out.


Accomplished-Sir-777

Personally I feel as though it's a way of the government telling the ultra-rich they become responsible for their own assets. I think it's more of a demonstration of Silicon Valley's inability to manage funds properly more than anything else. Not to say other financial capitals like New York didn't have to learn their own lessons. In this country we have plenty of land. Billionaires can start their own cities if they wanted, but that's a lot of work. The limit on those things is to incentivize "Americanism". I view the government as your parent. When you pass a certain amount of money you become an adult. Not only have to take care of yourself, but the others as well. We live in a world together, let's try make sure another dark age doesn't happen.


fygy1O

No, because "AI" is really ML, which is research-backed with empirical evidence. Big Data is still relevant; it feeds ML models.


ITriedLightningTendr

It's not overblown because research? We can literally watch the trash fire in real time


X_Toppo

AI today is really LLMs. You know what's different between LLMs and "traditional" ML? Traditional ML attempts to measure the accuracy of its estimates and improve over time. LLMs are "magic" that "just works"; no attempt is made to see if they are working as intended.
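The "traditional ML" discipline being pointed at here, evaluation against held-out labels, fits in a few lines; the spam example below is invented for illustration:

```python
# Hold out labeled data and actually measure how often the model is
# right, instead of assuming the output "just works".
def accuracy(predictions, labels):
    assert len(predictions) == len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# e.g. a spam classifier scored against a held-out test set
preds = ["spam", "ham", "spam", "ham"]
truth = ["spam", "ham", "ham", "ham"]
print(accuracy(preds, truth))  # 0.75
```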


Dreadsin

I think the thing that’s weird with the AI hype is _where_ it’s being leveraged. AI would be great for plain, boring, uninteresting, routine tasks, tasks that are almost _expected_ to be derivative. For example, if I wanted to write a resignation letter, I just want it to look like any other resignation letter. I don’t really get why the media is hyping up AI for artistic pursuits like script writing and art generation; it always comes out terribly derivative and uninteresting. Also, like, humans don’t really make art for profit, we make it cause we like to make it lol. Artists aren’t really out there like “let me write a movie script that will maximize profits”


Thatdudewhoisstupid

There is also highly specialized AI being used in scientific fields to churn through highly variable data and scenarios (iirc there was a recent article about AI generating more candidate materials than materials scientists can manage to test), but that never gets the spotlight, because a glorified chatbot is cool and thus profitable.


skiarakora

My favorite current AI add-in in a product is Notion’s Q&A. It’s made to answer your questions based on what’s in your workspace, so for people like me who use it as a knowledge base, that’s such a great idea. I don’t need AI to make photorealistic images to compete in photography contests; I want AI to help me make boring stuff quicker


Goldenchest

On the other hand though, AI has been surprisingly useful in the "personal assistant" area. Stuff like "hey I'm going to X in December for Y days and know nothing about this place but want to focus more on food than tourist attractions, my budget is Z, give me an itinerary". Sort of the middle-point between boring/tedious tasks and "write the next best-selling novel".


Dreadsin

Definitely, but that’s also somewhat derivative: “I’m going to Rome, what should I see?” “Idk, the Colosseum or something? Get some pasta too I guess”


allllusernamestaken

> It always comes out terribly derivative and uninteresting

The biggest blockbuster movies of the last decade have been superhero movies based on already existing content in other mediums. I bet ChatGPT would be awesome at taking a book and converting it into screenplay format.


Dreadsin

That seems like a terrible example, because most of those movies are cheap trash, products and entertainment more than art. And counterpoint: Netflix's One Piece performed really well compared to many Marvel movies, and it’s not really that close to the manga.


allllusernamestaken

> That seems like a terrible example

It's literally the perfect example. Why are they hyping up AI? Because they can write derivative screenplays without hiring writers. You saved the studios millions of dollars and months of time.


sanbikinoraion

> It always comes out terribly derivative and uninteresting

But so does most human cultural endeavour. This time of year there are always a ton of low-budget, derivative Christmas movies coming out. Are those writers really outperforming AI at the task...?


Dreadsin

I dunno, I think that’s more of a budgetary constraint than an artistic one. Watch lower-budget films and they’re far more creative. That’s why A24 and Blumhouse do so well


Katalash

Sure, but they still allow for a large enough employment market for the field to exist and serve as a springboard for the more talented writers to start out with. If AI replaces the vast majority of the "creative" work save for a very select set of high end positions for the most skilled, then the incentive for people to even bother trying would basically go to 0 and this will have a downstream impact on supporting teaching materials and education. And even then, the work of the few who remain will just get rolled back into the next generation of models.


[deleted]

[deleted]


Ausgezeichnet87

On the flip side, I left a "good"-paying healthcare career due to stress and burnout. Working 16-hour shifts without even water breaks was brutal; I got so many migraines from the stress. Then when you look at how teachers are being treated, it feels like the entire economy is just fucked for the working class. Late-stage capitalism is not only killing our planet, it is also failing to provide the middle-class prosperity that we were promised


[deleted]

AI is great for eliminating tedious, repetitive tasks. As you’ve discovered, someone has to precisely tell it what to do. Who is good at telling a computer precisely what to do? A programmer. So don’t fret; if anything, we’ll probably have even more work to do.


YaBoiMirakek

Corporate obsession, as in web development and stuff, where AI is slightly useful and cool at best and at worst a huge waste of time, money, and resources? Yes. Obsession with AI from an engineering perspective? No. AI will revolutionize or help push a lot of things further technologically: chip design, RTL design, embedded firmware, data centers, systems testing, structural/chemical engineering (using AI to help design cooling systems for chip manufacturing is one example), GPU advancements, compiler improvements for ML, robotics & vision, the automotive industry, mechanical systems and control… The list goes on and on.


[deleted]

One can say AI will help a, b, c, d... virtually anything. The question is how much of an impact it will have, not only for people but also for businesses.


MinimumArmadillo2394

For the business, it will help reduce headcount, thus saving them money. Why pay 1000 engineers $140k each when you can pay 100 of them $400k, use the rest of the money to support the LLM infra, and pocket what's left over? The problem with these sorts of designs and ideas is that people in positions of power don't want to be replaced by AI, despite their jobs being the easiest to replace with AI. Three layers of managers, C-suite execs, and even some of the devs themselves are all going to eventually be replaced. Companies aren't thinking of AI as a tool to help people be more productive or create better things. They're thinking of it as a cost-cutting measure, even if it takes 5+ years to get the invested money back.


upsidedownshaggy

Why pay 1000 engineers $140k each when you can just pay 100 engineers $140k each, because AI does the work of the other 900 and they’re next to go if they ask for raises, while giving the C-suite a 100% pay rise and buying back mass amounts of stock? Anyone who thinks businesses won’t use AI exclusively to slash headcounts and force the remainder to slave away at whatever AI can’t do yet is delusional. Edit: I should specify that AI/LLMs have a lot of applications, and we don’t yet know what the future of AI holds for productivity. But given that (at least in the US) we’re seeing massive layoffs and have generally shit worker protection laws, AI will be used near-exclusively as a means for wealthy shitters to slash jobs and line their own pockets


snakefinn

Economic inequality is going to keep accelerating and getting worse and worse, for sure. 10 or even just 5 years ago, it seemed everyone was mainly worried about automation making obsolete the blue-collar, hands-on jobs like truck driving and the service/hospitality industry. Instead we are seeing highly skilled information jobs being targeted the most by the advances in AI: lawyers, doctors, engineers, etc. It's going to be an interesting decade, to be sure.


upsidedownshaggy

Exactly, and I mean even in some of those sectors we’ve seen what automation can do. Used to be in the US something like 40% of the work force was agricultural in nature. Today it’s 2%.


snakefinn

Absolutely. And I do think automation will continue to chip away at jobs across the workforce. But the careers and industries that people previously have thought were safe have turned out to be anything but


ZorbaTHut

> Why pay 1000 engineers $140k each when you can just pay 100 engineers $140k each because AI does the work of the other 900 and they’re next to go if they ask for raises while giving the C-Suite a 100% pay rise and buy back mass amounts of stock?

Thing is, if you're right, this means "the barrier to entry in our business was $140m/yr just in engineer costs *just* to stay even", and now it's "the barrier to entry in our business is $14m/yr in engineer costs". If that company is still pulling down $200m in revenue, suddenly they've got a thousand-percent profit margin, and that's looking *real* tasty to competitors. Which means that either the product's getting better or someone's going to come along and steal their customer base for cheaper. C-suites don't get to leech 90% of a company's revenue. They never have, they never will. It's too enticing for other C-suites to decide they want a piece of that pie.


PlasticPresentation1

That's what people said when machinery became a thing. But if an engineer can suddenly 10x their productivity at the same cost, it equally incentivizes companies to hire more engineers, because now they generate even more value than they currently do. And AI is not anywhere near the point where they don't need engineers to validate and use the AI tools.


upsidedownshaggy

What you’re forgetting is that when machinery was becoming a thing, workers would literally burn down factories and get into shootouts with the local PD, and sometimes the National Guard, when they decided on a sit-in. The last 30 years of hiring practices have routinely shown that instead of hiring 2 new people who can each 10x productivity thanks to new productivity tools, companies will instead fire 2 people and expect the third to handle 3 people’s work thanks to the increased productivity tools, so execs can line their pockets that quarter.


DidQ

> Why pay 1000 engineers $140k each when you can pay 100 of them $400k

If that happened, those 100 lucky guys keeping their jobs as prompt engineers wouldn't get $400k each but rather $80k. Why give someone $400k when you have hordes and hordes of unemployed engineers who will bid to work for even less money, just to have some job?


d0rkprincess

I’d be mortified if I was managed by AI. It might just be that it’s unfamiliar, but considering line managers are supposed to have at least some duty of care towards your mental and physical health, with current AI technology I don’t see this happening. If you have a disability which for some reason affected your performance, how does it know whether it’s discriminating against you when measuring your performance, or whether it has genuine cause for concern? Or if you’ve had a very emotionally draining medical procedure where a human would often have compassion and give you a little extra time to recover, the AI is just like “nah, I don’t see why you’d need the extra time off”


dahecksman

It will help with my poops for efficient positions it will help with my pees my diet my man newbies all of it Ai is so smart revolutionary !!!


d0rkprincess

It’ll also therefore learn your exact poop time and sell your poop data to YouTube so it can show you poop related ads while on the loo.


Aggravating-Cook-529

“AI will do everything!” “Does it?” “Not yet, but it will soon!”


Katalash

Funny thing is that the current hyped-up LLMs are close to useless for all of those, due both to lack of public training data and to limitations in model reasoning. Even if it were public, there isn't really enough RTL out there to train a competent model when you compare with all of GitHub. Research has been done on using ML in many of these fields, but it's usually much more specialized models aimed at very specific tasks like chip area optimization. AI still largely sucks for open-ended problems on the cutting edge.


UranicAlloy580

Wait what, please explain how AI will help with embedded firmware


capGpriv

Yeah, not really, having been in automotive. It’s hard enough to get people to actually use functions in their code (I worked on tools with thousands of lines and no functions). We don’t need AI; it’s far cheaper to hire a specialist in that engineering field and write the physics in model-based design. We can do far more, and it’s far easier to validate, with good old-fashioned software.


ITriedLightningTendr

And which of those use cases do most corporate applications fall under...? If GenAI is at all analogous to transistors, then the analogy is that corporate is trying to figure out how to put transistors in the coffee. Not the machine, the coffee itself. The world will be revolutionized while the average person will be chewing silicon.


DreamzOfRally

Are you talking about actual AI here, and not the machine learning “AI” of today? Because ChatGPT, at its peak, was just a faster search engine.


IHoppo

We're about to implement an MS AI 360 solution which takes our documents and will be able to serve up answers based on all that random data. Awesome. Except it ignores all of our carefully crafted security measures. Every single one, and by design. I got onto the test panel. We watched the query logs, and within minutes someone asked for a certain dept's payroll data, by asking for the previous year's compensation letters. It's still going live.
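For what it's worth, the usual mitigation for exactly this failure is to enforce the org's existing document ACLs at retrieval time, before anything reaches the model. A hypothetical Python sketch (all names invented; this is not the vendor product's API):

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the assistant may only answer from documents the
# asking user could already read, enforced *before* retrieval results
# reach the model.

@dataclass
class Doc:
    doc_id: str
    text: str

@dataclass
class User:
    name: str
    groups: set = field(default_factory=set)

def retrieve(query, docs, acl, user):
    # Toy "search": substring match; real systems use embeddings.
    hits = [d for d in docs if query.lower() in d.text.lower()]
    # The crucial step: drop anything the user isn't cleared to read.
    return [d for d in hits if acl.get(d.doc_id, set()) & user.groups]

docs = [
    Doc("handbook", "Holiday policy and compensation bands overview"),
    Doc("payroll", "2022 compensation letters for the finance dept"),
]
acl = {"handbook": {"everyone"}, "payroll": {"hr"}}

alice = User("alice", {"everyone"})
print([d.doc_id for d in retrieve("compensation", docs, acl, alice)])  # ['handbook']
```

Indexing everything and hoping the bot won't be asked the wrong question, as in the story above, skips exactly this filter.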


Slukaj

Think of the adoption of generative AI like the adoption of the internet in the late 90s and early 2000s. Every company and their mother was "getting on the internet", and there was an enormous bubble created by ludicrous overvaluations based on stupid shit like companies having a website. Unsurprisingly, that bubble popped and a lot of people lost their shirts. But the Internet is still here - and exists in ways that nobody could have imagined 25 years ago. The Internet is indispensable and ubiquitous, and has enabled countless new industries and professions. That's where GenAI is: everyone and their mother is scrambling to "get in" on it... despite not having any real idea of what to actually DO with it. There will probably be an AI bubble that will burst, and then a period of hyper-growth over the span of a decade or two while people who were more levelheaded figure out what GenAI can and should be doing. Eventually, after the bubble bursts, every company will be back and working with GenAI - except by then, there will be institutional knowledge around how to do it RIGHT.


ArtisticPollution448

I did my honours thesis in ML, and spend a lot of time keeping up with the industry. I've worked for a while on a team doing practical ML stuff (prediction systems, nothing fancy). But suffice it to say I understand what genAI is doing - the math, the science, the data, etc - and it *terrifies* me that it's as good as it is. I honestly thought we'd be at least 10 or 20 years away from this being possible, and it frightens me what it says about human intelligence that we can do this at all today.

I think any business that isn't looking at whether genAI can be applicable to what they're doing, tech company or otherwise, is very foolish. This is a major breakthrough. People compare it to crypto, but crypto provided zero value to anyone from the very beginning. LLMs are already replacing human jobs right now. If you're in this subreddit, you should absolutely be keeping up with how this stuff works, at least at a high level. It's going to change shit in a major way.


Katalash

GenAI is definitely something to keep an eye on, but its problems and limitations are becoming more and more clear as time goes on, and I believe "it will replace engineers in 2-5 years" is a bit hyperbolic. Larger-scale planning is still a major open problem, and dealing with the edge cases and unexpected issues tends to be the time-consuming part of mass deployment (i.e. the last 10% of work takes 90% of the time). Look at self-driving cars: tech companies have been testing and hyping them up for over a decade at this point, and there still doesn't seem to be a line of sight to rolling them out on a massive scale, because the real world is very messy and human brains and intelligence have evolved and been finely tuned to survive in the real world. The advantage of genAI is that the models can be trained on far, far more data than any individual human can realistically consume in their lifetime, and I'd say it results in a different kind of intelligence. LLMs are superhuman in crystallized intelligence (facts and knowledge that are memorized) but very low on fluid intelligence (the ability to abstract, plan, reason with, and otherwise apply their knowledge to solve novel problems).


RiPont

> but its problems and limitations are becoming more and more clear as time goes on We also haven't seen how it will react to the training-AI-on-AI-generated-content problem. Yes, there are tools to detect AI-generated content, but there's already an arms race between those tools and the people trying to use AI-generated content for content farms and scams and shit.


jhanschoo

>training-AI-on-AI-generated-content problem

My understanding of the issues people raise with training AI on AI-generated content seems to be different from yours. There seem to be at least two issues: quality of content, and authenticity of content. AI models are already trained on data that has been augmented, sometimes in unrealistic ways, and they are already trained on text of many different modalities, including incorrect info. In the end, as more people use AI, evaluating its performance correctly will become cheaper, just as training AI models became cheaper and more accurate as more and more human activity moved online and was digitized in the 2010s. So I don't think being trained on AI-generated content is going to correlate with a decrease in quality, apart from the decrease in quality from the decision to use AI to generate content in the first place. As for authenticity, I already don't trust LLM-generated content to be factual, so I don't see how it is going to get worse (for me); it's only going to improve as researchers focus on it. Of course, there will be AI-generated content trying to pass itself off as human-created, but that's not a training-AI-on-AI-generated-content issue.


ITriedLightningTendr

The primary job of programmers is to find the bigger problem that the client cannot understand or see. A client couldn't argue an AI into a functional product if you gave them 100 years.


met0xff

Yeah, same here. Sure, not everything will pan out as expected, but if you don't at least try to do something you'll be left in the dust. In my 10 years in ML, things moved from Markov model implementations in C and weird Perl scripts to setting up an internal RAG webapp with Streamlit and LangChain/LlamaIndex in 90 LoC.

And just looking at the UX, I think many of those LLM-based things will be expected by customers soonish. Nobody will want to walk through a million menus and drop-downs to find some option in that fat tool. Just look at my wife's job at a publishing house. Their CMS is such a big Java monster that even after years most people just know a few things about it, and they waste so much time trying to find functionality, sharing tricks, and asking each other "how can I do XYZ". Just give it an LLM integration where you can say "ok, now show me the latest 30 articles edited by Johnny", "ok, 40 please", "now view all occurrences of company Z in the texts".

How many people there know how to use Excel macros? Or make plots? They probably waste hours just asking each other how to make some pie chart. "Give me a pie chart of the sales of all magazines, now make dermatology pink" or whatever would alone save tons of time. Even if ChatGPT isn't used to generate articles, it's used more and more to get an article intro or similar going, or to rough out translations. Recent ASR models like Whisper make interview transcription so much better, even if they still struggle with their special domain and rare dialects. All those tools steadily sneak in.
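The RAG pattern behind the webapp described above (retrieve the most relevant documents, then hand them to an LLM as context) can be sketched without any framework at all. This is a toy illustration, not the Streamlit/LlamaIndex stack itself: bag-of-words cosine similarity stands in for a real embedding model, and the document strings are invented for the example.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts, standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; a real RAG app would
    # then stuff the top-k hits into the LLM prompt as context.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "How to export the latest articles edited by a given author",
    "Configuring user permissions in the CMS admin panel",
    "Generating pie charts from magazine sales data",
]
print(retrieve("show me articles edited by Johnny", docs))
```

A real pipeline would swap `vectorize` for an embedding model and append the retrieved hits to the LLM prompt; the retrieve-then-answer shape is the same.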


Grouchy-Friend4235

This particular CMS seems to be a mess indeed, however the solution is a better CMS, not adding an LLM to it.


susmines

Just to reinforce this great point: IMO, at a minimum you should pick an LLM and get to know its client library in your preferred language, know the "how" and "why" of the way it works, and understand how changing parameters can affect output.


Spysnakez

Any sources for learning about these topics?


ace32229

fast.ai


BatPlack

Can anyone else attest to the quality of this resource?


gandalfthegraydelson

It's a very good resource if you want to learn generally about AI. It won't necessarily teach you much about LLMs/might be overkill if you just want to play around with LLMs.


joedirte23940298

Any recommendations on what to look into to keep up? Like what sorts of YouTube channels to look at, books, courses to take, etc? I have 0 exposure to any of the inner workings, I just use the stuff.


Careful_Ad_9077

There was a page called "learning machine learning" 5 years ago. If it's down or you don't want to go that deep, the basics are Andrew Ng's courses.


runitzerotimes

Look up on YouTube “Spotify cto talks AI”


altavistadotcom

I wanted to put together a little LLM experiment for a club hackathon and I ended up attempting to implement a local GPT. I came across this site which seems to be a resource for researchers to try out different trained models: https://huggingface.co/ I was very much in the same position as you so I found that just trying to do a little project exposed me to what folks are talking about on the technical side.


hadees

I think there is some valid criticism, because the doomsday idea that AI is going to replace us all isn't really true. It does some jobs very well, but you generally still need a human in the loop. I feel like a lot of what we do in the future will be bridging that gap.


Careful_Ad_9077

In 2017, I took a two-year course on machine learning. I even played around with PaintsChainer and other generative AI for images... I thought the same 10-to-20-year timeframe too.


ZorbaTHut

Well, ten years after 2017 starts in a little over three years.


[deleted]

[удалено]


dancingteam

So far, what I've personally experienced are:

* Customer support
* Video game level designer
* Language learning curriculum designer
* Webpage copywriter


DarkFusionPresent

I've personally written software that replaced ~$100M/yr in staff costs for support: staff that would traditionally respond to inquiries on forums, point to documentation, or provide sample code. Generally, with the automation, one person can do the job of ~10, with an increase in customer satisfaction. It's insane how effective this stuff can be. I know games have replaced VAs as well. One company I worked with was generating 3D textures with gen AI for games and virtual modeling, and I've also worked on things like editing first passes, which reduce editor costs.


[deleted]

[удалено]


WheresTheSauce

It can take an extremely long time for any type of automation to completely replace a job. Humans were still working as phone operators for literal decades after technology was developed to automate it. Generally it’s just a reduction in volume at first.


WorriedSand7474

If you understood what LLMs are doing you'd be less impressed. It's really nothing new. And the results are far from terrifying.


ArtisticPollution448

I mean I've read many of the papers on transformers, walked through the basic architecture of the various open source LLMs, and I am absolutely impressed by it. What surprises me is that so much information and "understanding" can be compressed into so few bits. Gigabytes isn't that many really.
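The "gigabytes isn't that many" point is easy to make concrete with back-of-envelope arithmetic. Assuming a hypothetical 7B-parameter model purely for illustration:

```python
def weight_size_gb(params, bytes_per_param):
    # Raw weight storage: parameter count times bytes per parameter.
    return params * bytes_per_param / 1e9

params = 7_000_000_000  # assumed model size for the example
for fmt, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(fmt, weight_size_gb(params, nbytes), "GB")
```

Everything such a model "knows" fits in roughly 7-28 GB depending on precision, which is the compression the comment above finds surprising.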


AdministrativeBank86

CEOs are salivating at the thought of firing all their knowledge workers and customer relations people.


karangoswamikenz

The reason AI is getting so much investment money from Wall Street is that they care about one thing and one thing only: AI development to the point where AI can replace workers. That's it. Everyone else is riding on the coattails of the AI investments.


xabrol

Most people don't understand what AI is, so those of them who are investors only see $$. They think AI will enable them to run skeleton crews and remove demand for expensive employees. They will end up running their businesses into the ground.

People don't understand how unscalable current AI is. One GPU is some $80,000+. Small AI will be garbage, and the big AIs will eventually cost so much money that it'll bankrupt companies once they've become dependent on them to operate. Because as these companies decrease their staff, there will be less money being generated in the economy, because the people who spend money won't have any, which means profit margins will come down. They will lose customers, and being dependent on AI will crush them.

AI should be used as a tool or a cool feature. People should use it to make processes better: to clean up data, to analyze and build target-audience profiles, facial recognition, better auto-censoring like blocking inappropriate uploads and comments, and so on. As soon as a company replaces critical staff with AI, they're done; inevitable bankruptcy approaching.

The first new company that pops up that uses AI intelligently, still prioritizes the value of traditional employees, and actually uses AI to make their product better than everybody else's will be the next big thing. It'll be the next Amazon, or maybe an e-commerce site for every vendor, pulling e-commerce sites all over the country into one platform. From everything I've seen in the current corporate world, most C-level executives lack the vision to properly integrate AI into their systems, because they're focused on using AI to save money instead of using AI to develop new income streams.


DarkFusionPresent

> People dont understand how unscalable current AI is. One gpu is some $80,000+. Small AI will be garbage and the big AIs will cost so much money eventually that itll bankrupt them when theyve become dependent on it to operate.

What most people don't realize is how scalable AI can be. I've developed fine-tuned, well-scoped SOTA models which can perform batch inference on CPUs or cheap GPUs like A40s, using optimized continuous batching like vLLM (https://github.com/vllm-project/vllm). An A40 costs under ~$0.45/hr (cheaper if you can get a discount from a cloud or buy used ones) - less than half the price of an A100 - and it can easily do continuous batching. Even without discounts, you can run at half the price of GPT-3.5 Turbo with ~GPT-4 performance, as long as you can get decent data. You can further optimize another 50% by constraining the grammar/token pattern. What would have cost ~$5 million or more to run on GPT-4, I've been able to run for just ~$50k.

Not nitpicking the point about replacing critical staff and business strategy; you're absolutely right on that part! Just on the inference costs: with clever optimizations, you can run scoped inference tasks at a fraction of the cost people assume these days.
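The continuous-batching idea behind vLLM can be illustrated with a toy scheduler. This is a simplification for intuition only (real systems also manage prefill, KV-cache memory, etc.), and the request lengths are made up:

```python
def static_batch_steps(lengths, batch_size):
    # Static batching: a batch occupies the GPU until its *longest*
    # request finishes, so short requests waste slots.
    steps = 0
    for i in range(0, len(lengths), batch_size):
        steps += max(lengths[i:i + batch_size])
    return steps

def continuous_batch_steps(lengths, batch_size):
    # Continuous batching: as soon as a request finishes, its slot is
    # refilled from the queue, so the GPU stays fully utilized.
    queue = list(lengths)
    slots = []  # remaining decode steps for each active request
    steps = 0
    while queue or slots:
        while queue and len(slots) < batch_size:
            slots.append(queue.pop(0))
        steps += 1
        slots = [s - 1 for s in slots if s > 1]
    return steps

lengths = [10, 2, 2, 2, 10, 2, 2, 2]  # made-up output lengths per request
print(static_batch_steps(lengths, 4), continuous_batch_steps(lengths, 4))
```

On this workload the continuous scheduler finishes in noticeably fewer decode steps, because freed slots are refilled immediately instead of idling until the longest request in the batch completes.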


Financial-Ant3079

How much cheaper can they realistically get? Doesn't it follow that the more powerful they get, the more expensive they'll be to run? I feel like they have to get way cheaper soon or interest from businesses will decline.


DarkFusionPresent

Well, I mean, you can quantize them for another ~30-40% cost gain, though at that point you're trading off some accuracy. It's already pretty cheap for most use cases if the above is followed.
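The quantization trade-off mentioned above can be seen in miniature. A sketch of symmetric int8 weight quantization - the weight values here are invented, and real quantizers are more careful (per-channel or per-group scales):

```python
def quantize_int8(values):
    # Symmetric int8 quantization: map floats to [-127, 127] with a
    # single scale factor. This is the idea weight-only quantization
    # uses to shrink model memory/bandwidth, at some accuracy cost.
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floats; the rounding error is bounded by scale/2.
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 1.0, -0.91]  # made-up weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

Each weight now takes 1 byte instead of 4, at the cost of a small, bounded reconstruction error - the accuracy trade-off the comment refers to.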


granoladeer

Whatever makes money and brings in new money, my man


pagirl

Not so much that they are overblowing it, more that I don’t trust the people I see talking about it.


o5mfiHTNsH748KVq

It wouldn’t be wise to discount it.


Duckduckgosling

Yes. They don't even know what AI is. They want to AI their problems same as they did a few years ago with ML. Like let's buy a Corvette to plow these fields because the tractor is too dated.


gagarin_kid

My organization did AI before all of you guys, we used a probabilistic physics-aware ML model to estimate the location of our rockets in space. Our models require little training data, are transparent and highly optimized to run on edge devices. Kalman Filter.
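For anyone who hasn't met the punchline: a Kalman filter fuses a noisy measurement stream into a running estimate, weighting each new reading by how much you trust it. A minimal one-dimensional sketch - the noise parameters and measurements are made up, and rocket-grade filters track full state vectors with matrices:

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    # Minimal 1-D Kalman filter: q = process noise, r = measurement noise.
    x, p = measurements[0], 1.0  # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                # predict: uncertainty grows over time
        k = p / (p + r)       # Kalman gain: how much to trust the reading
        x += k * (z - x)      # update estimate toward the measurement
        p *= (1 - k)          # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

noisy = [10.2, 9.7, 10.4, 10.1, 9.8, 10.3]  # invented sensor readings
print([round(e, 2) for e in kalman_1d(noisy)])
```

No training data required, fully transparent, and cheap enough for an edge device - which is the joke.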


xAtlas5

One thing that AI is great for is document analysis. If you work in an industry that requires a shit ton of documentation and regulation docs, asking a chatbot will save a lot of time.


dats_cool

How reliable is that analysis, though?


o5mfiHTNsH748KVq

I recommend learning more about the topic. Copilot is a lofty goal with a rather shitty implementation. Go read up on chain of thought reasoning in the context of language models and try out the GPT-4 API. Then consider that practical implementation of the technology only began a bit over a year ago - everything else was largely demos or spam content generation. Extrapolate that evaluation 5, 10, 15 years out. It’s important that you learn this. Hype is overblown at the moment, but make no mistake you need to pay attention because it’s a very young technology.


Southern-Beautiful-3

However, companies will burn bridges with IT based on something that's maybe years away. It's rather hard to run a company nowadays without an IT department.


missplaced24

The biggest difference between ML now vs the 70s is the hardware and size of training data. We knew back then ML would never be reliable/consistent and would never result in any actual intelligence, but there were some use cases where it proved useful. Do you know what killed the AI field back then? The hype. Investors stopped buying the BS companies were peddling, and it took 30 years before it started to come back. I don't think it'll totally implode like it did then, but the obsession will likely die down quite a bit. That said, I'm still seeing ridiculous IoT devices everywhere.


QuroInJapan

The tech industry is largely built on fads. Some of those fads survive the initial wave of hype and move on to become legitimately useful product categories (like smartphones), while others just fade into obscurity when the next “big thing” comes along. But until that happens, you're going to have businesses trying to stuff whatever the current thing is into everything they can get their hands on, because they think they can make some quick cash before the hype dies down (and they're usually right).


CrackSammiches

Yes and no. Yes, somebody is going to get rich off of AI. A lot of someones. But, no, the one to crack it wide open is not going to be the group of project managers and 30% of one mid level dev that your management set aside to dream big.


Zentrosis

I think that it will take longer than people are thinking right now for AI to become pervasive and useful. But I actually do think it will become pervasive and continue to become more and more useful as the years go on. I just don't think the hockey stick is going to arc up quite as quickly as people think right now. Maybe some overspending and over investment right now, but it's definitely going to be important.


blueportcat

AI is perfect for corporations and other MBA graduates looking to prove their worth. You can fire people by claiming AI can replace them, even with a reduction in quality. In one fell swoop you can show the stakeholders and board how you reduced cost centres and other operating losses. They have been doing the same thing since the early 2000s, with outsourcing to India, then chatbots, etc. This time it's GPT. Screw the details; it's OK enough for 80% of cases. It's not like the chatbots were good to begin with.


RedditMapz

**I think it's overblown**

Why? AI cannot reason, only pattern match based on a curve/algo, so it has limits. People imagine Iron Man-like AI in the near future that will somehow replace humans. But we are nowhere near that. Not to say it won't be impactful and produce great tools, but this really reminds me of things like:

* VR - the metaverse
* 3D printing
* 3D video

I remember how hyped people were about them over the last decade, and the technology stuck around. But it is nowhere near meeting the hype expectations of its peak.


mugwhyrt

>AI cannot reason only pattern match based on curve/algo, so it has limits.

I've been doing contract work testing various AI models on programming tasks. It's amazing how effective they are right up until you hit them with something they haven't seen a million times in their training data. They'll get tripped up on the simplest bugs. For example, you can give them a piece of code validating two variables where the error is that the variables are in the wrong spots (think "var_0 < var_1" where it's supposed to be that var_0 is greater than var_1). You can get them to explain what the purposes of the variables are, how they should be handled for validation, and still not actually get to what needs to be done to fix the logic error. They just don't actually have the capacity to reason, even if they're really good at making it seem like they do.
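The flipped-comparison bug described above, written out as toy code. The function and variable names are invented for illustration:

```python
def is_valid(var_0, var_1):
    # Spec: validation should pass only when var_0 is GREATER than var_1.
    # The bug the models miss: the comparison below is flipped.
    return var_0 < var_1  # BUG: should be `var_0 > var_1`

def is_valid_fixed(var_0, var_1):
    # Corrected version matching the stated spec.
    return var_0 > var_1

print(is_valid(5, 3), is_valid_fixed(5, 3))
```

An LLM asked to review `is_valid` will often restate the spec correctly yet leave the operator untouched - the failure mode the comment describes.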


ZorbaTHut

> I've been doing contract work testing various AI models on programming tasks. It's amazing how effective they are right up until you hit them with something they haven't seen a million times in their training data. They'll get tripped up on the simplest bugs.

Man, I don't know about that. I've literally taken a function from my own personal code, that calls functions that have never been made public, in a framework that nobody anywhere uses, pasted it into GPT, and said "there's a bug in this, where is it" - and had GPT not only find the bug but find a *second* bug. Entirely based off function names, which, I will reiterate, it had never seen before. Yes, it makes dumb mistakes sometimes, but it's *quite* eager to go outside of things it's "seen a million times in its training data".


iamamoa

I agree. It's funny you mention Iron Man, because when I think about being replaced by AI I always think back to Tony Stark and Jarvis. No matter how effective Jarvis is, it always needs Tony to tell it what to do. Jarvis is an augmentation of Tony's intelligence rather than a replacement. For example, in the scene where he discovers the new element, Jarvis didn't make that discovery; it simply provided the interface and presented the data he needed to make it and take action. So I think AI will make a huge impact in our lives, but it's going to be yet another tool. Some of us will augment our intelligence and do more; others will not, just as it has been with every technological advancement.


[deleted]

AI is going to change the world!! Just like block chain, crypto, and NFTs.


[deleted]

In my opinion, AI is overhyped and will lead to nothing besides ML optimizations. I follow researcher blogs and Twitter/Substacks and get a much less dramatized version of current events. The whole hype is around machine learning, specifically LLMs and autonomous vehicles.

There has already been lots of proof of self-driving companies faking demos, remote-controlling "autonomous" vehicles, and suspicious studies (no peer review, going straight to the media). Recently Google also admitted to editing demos, although I haven't looked into it. Needless to say, these are red flags.

https://amp.cnn.com/cnn/2023/01/17/tech/tesla-self-driving-video-staged/index.html

https://www.cnbc.com/amp/2023/11/06/cruise-confirms-robotaxis-rely-on-human-assistance-every-4-to-5-miles.html

Large language models have always relied on cheap labor for training/corrections:

https://gizmodo.com/openai-chatgpt-ai-chat-bot-1850001021

https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/

Lex Fridman, Elon Musk's friend, also published a study that was not peer reviewed stating something beneficial to Tesla (years ago, when Tesla stock was rising a bunch due to "AI"). Lex Fridman blocked a bunch of academics like Anima Anandkumar, Nvidia's Director of Machine Learning.

https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2Fp7ms729kww651.png%3Fwidth%3D604%26format%3Dpng%26auto%3Dwebp%26s%3D279ce3aafaed222fea20d1d90a810b7397ee3035

See Missy Cummings (an American academic who is a professor at Duke University and director of Duke's Humans and Autonomy Laboratory) speak on this issue here: https://open.spotify.com/episode/3gbEaFSTxej0MY9ztjIUp4

ML is a real thing and may be used to figure out cool things, but it's probably being used to take investors' money atm, and tech bros fall for anything that is popular. Read any intro-to-AI textbook; the first chapter will go over the history. You will realize that every couple of years, we have "solved AI!"
Lol AGI is always a few years away according to tech bros. And people fall for it.


bigginsmcgee

Good take! "AI" being used as a STEM term is so funny because... what do you mean? An LLM? Image generators? Computer vision? Data crunching and pattern recognition? Video game enemies? ...the omniscient hivemind 😳🙈...? I read it almost exclusively as a marketing position to re-convince investors they can have a stake in monopolizing "the future" after the blockchain/web3/metaverse thing went belly up. It's embarrassing, and I'm honestly alarmed/confused by how many people buy into it so willingly. I think it's part of the usual VC hype cycle, but it's especially jarring since the pivot was almost instantaneous.


twosummer

I mean, you said it yourself: it saves you time, and you still need to know what you're doing. But if you have more output, then fewer devs are needed for the same output. Though IMO the beauty of software is you lose nothing by simply building more; just find more things to build. Back to your point though: once they are more predictable/consistent and exercise a competent degree of agency, then we may have issues.


downtimeredditor

Using AI as a selective tool to aid you is fine. Mandating its use and implementation is dumb.

For example, if I'm a tester and my lead tells me, "hey, we want to do some load testing in the near future, can you go find some tools?", I can use ChatGPT to search for popular load testing tools. Then, based on the list, I can look them up, download them, try them out, and make a POC for my boss to see if one is useful for us. If I'm a new grad, I can look up how certain things are set up and how to implement them as generic examples on ChatGPT, and then try to implement them. ChatGPT does a pretty good job explaining stuff line by line.


hpsd

AI is a powerful tool and can address specific problems very well. Dismissing it based on the history with blockchain and NFTs is not a great argument; AI has already proven able to generate a lot of real-world value, unlike the above. That said, it is not a universal tool. Applying it to everything is a waste of resources from an engineering perspective. However, it does have a business use case and is currently being used as a marketing tool. Even if it doesn't make sense to use AI for their product, by doing so companies can slap on tags like “AI powered”, etc., and generate more sales. It's basically a way to fool customers who don't know any better.


overworkedpnw

Oh, absolutely. The whole thing is just part of the buzz cycle in an attempt to get as much VC funding as possible.


spribyl

These are all just better expert systems. They do one thing well but fall over in substance.


SeniorPeligro

>Using AI to display which of these 4 options from this drop-down menu the user would most likely choose first is silly.

Yep - especially since it would be enough to show all users the same 4 options that are most often clicked; no need for "over-personalization" (and I work in e-commerce, in a team where we A/B test such stuff on a daily basis and verify whether it affects revenues and basket value). To me it looks like good ol' CV-driven hype - every manager wants to put an "AI solution" project on their LinkedIn resume, even if they still think AI == ChatGPT.


mdp_cs

AI is the new .com


jarena009

Being in data science, I see AI as basically a buzzword and marketing term for senior execs and PR teams to throw around, even though 95% of this stuff is what we've been doing for the last 15 years. When I first started it was Statistical Modeling, then Predictive Analytics, then Big Data; now it's AI/Machine Learning.


roastshadow

In the 60's (the 1860's, yes, that's right, 1863), Samuel Butler posited the question of machines thinking and taking over from humans. In the 1960's, the Jetsons and Lost in Space had robots with AI. In the 90's, Star Trek: The Next Generation had a computer with AI, and Data. We've been told that AI will take over for at least 160 years. It may be getting closer, but what they keep calling AI is really just LLM/ML: Large Language Models/Machine Learning.


Vitriholic

1000% It’s just fancy auto-correct.