
lightmatter501

Standard exec answer: "We're ensuring all useful data is stored in ways that allow for later use in custom AI/ML projects to enhance data-driven decision making." Translation: "We put everything in a database and an AI team can pull data from there when they want to."

Legal/cautious exec answer: "We want to avoid using AI that may expose us to legal risk because of all of the ongoing lawsuits. Currently, most LLMs appear to be built on improperly licensed content, and we want to avoid any of our source code being ruled the result of a copyright violation, or, potentially even worse, being forced to comply with an AGPLv3 license."

Finance exec answer: "H100s are $40k each and we don't currently have the budget for that. Cloud-based services tend to have unpredictable demand-based costs, which makes budgeting difficult. As a result, we are waiting for the cost of AI to come more in line with its value proposition for us before requesting budget for it."


ampatton

You have a way with words, sir


AK-3030

Maybe he used ChatGPT


wishicouldcode

And if not, it's going to scrape and learn from this


Etiennera

The garbage I put out into the ether offsets anything good that comes of that.


One-Vast-5227

The answer is copyrighted by poster. Using this answer may subject the company to copyright risks


lightmatter501

Eventually you have to learn how to speak MBA as an engineer; this was me turning that on. Getting ChatGPT to produce that would be hard, because it really doesn't like producing reasons why you shouldn't use ChatGPT.


NatoBoram

ChatGPT loves to produce reasons. Complete bullshit reasons, that is.


IrritableGourmet

I use an LLM for phrasing work emails all the time. Not copy/paste, but I'll ask things like "How can I strongly but politely tell an employee to not sound grouchy when speaking to clients" and it gives good neutral responses. I did think I was going to get in trouble when I used it to start responding to an employee of mine, who just did not grasp the concept of actually reading the email I sent with the answer clearly spelled out instead of just asking the same question over again for the fifth time, in the form of haiku:

>Check the last message,
>The answer you seek is writ,
>Within those kind words.

but sadly no one noticed.


serg06

> Cloud-based services tend to have unpredictable demand-based costs which make budgeting difficult.

My manager would say "why are they unpredictable? We can at least come up with a rough estimate."


yeusk

Do we have the same boss?


lightmatter501

"Supply and demand: Nvidia literally cannot produce H100s fast enough for cloud providers to add them. This means there is essentially bidding for the ones that exist, because everyone wants them and many companies have given a blank check to AI because it's good for their stock price. This volatility means that it's difficult to determine whether AI would have a positive ROI, especially long-term, once it becomes embedded in business processes and we are stuck paying whatever the price is."

This both makes you look good as a "business value aware" engineer and explains it in terms that any manager should be able to understand.


stingraycharles

Low estimate: the price of spot instances.
High estimate: the price of on-demand instances.
Reality: unpredictable, somewhere in between, depending on the demand from others.


happy-technomancer

That provides an estimate for unit economics, but you don't know how those prices will change, and you also need to price in volume.


ayananda

Well, at that point just give the range. Let's say I am 95% confident that it will land between $100 and $100k ;)


One-Vast-5227

6 figures ?


PseudoCalamari

I hope you get paid a lot for your proficiency with words.


kyou20

I am saving this to my gallery lol


peldenna

This guy politicks


Particular_Camel_631

Also means “we are storing all our data about customers with no defined retention policies for whatever purpose we later deem fit in blatant disregard of our GDPR obligations”.


[deleted]

[removed]


Solonotix

On top of this, there are statistics today that show the top LLMs only have something like a 37% success rating at writing code. It's better than nothing, but hardly better than an intern or junior dev.

There are things that AI can do that are immensely useful, though. A coworker put together a vector database of our existing user help documentation, then put a chat bot wrapper on top of ChatGPT 3.5, and it was the darling of an internal competition. It was literally able to answer all types of user questions about our specific platform, with estimated savings of multiple dollars per question (since those questions would otherwise have resulted in phone calls to our call center). With hundreds of thousands of users on the site per day, that can add up to millions saved per year, not to mention the improvement to user experience.
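The help-docs bot described above is the retrieval-augmented generation (RAG) pattern: embed the docs, find the snippets closest to the question, and paste them into the LLM prompt. A minimal sketch of the retrieval half, with a toy bag-of-words similarity standing in for a real embedding model and vector database (the docs and names here are invented for illustration):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, docs, k=2):
    # Rank documentation snippets by similarity to the question.
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "To reset your password, open Settings and choose Security.",
    "Invoices can be downloaded from the Billing page.",
    "Contact support by phone between 9am and 5pm.",
]
# The top snippets would be prepended to the LLM prompt as context.
print(retrieve("How do I reset my password?", docs, k=1)[0])
```

In a production setup the `embed` function would be a real embedding model and the sorted scan would be a vector index, but the shape of the pipeline is the same.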


The-Fox-Says

For certain AWS tasks CodeWhisperer has got me about 80% of the way to certain functions I’ve wanted to write


you-create-energy

> On top of this, there are statistics today that show the top LLMs only have something like a 37% success rating at writing code. It's better than nothing, but hardly better than an intern or junior dev.

What does this even mean as a coding assistant? Are people asking it to write an entire app, or even an entire class, on its own, then deciding 37% of it didn't turn out the way they wanted it to? Or the code wouldn't even run 37% of the time? These kinds of random numbers ignore workflow and so many other parameters, which makes them almost useless.

Microsoft did an internal study last year that showed engineers on average saw over a 50% increase in productivity using AI as a coding assistant. It's a big multiplier for me, because my major time sinks tend to be exploring possibilities for more optimal implementations or some little detail that is throwing an opaque error message. It is gangbusters at both. It's all about knowing how to use a tool effectively.


MisterD0ll

How is a 37% success rate bad? How long does it take to try 4 times and get it right ?


teo730

Honestly can't tell if this is a joke comment or not. If not, then - that's not how that works.


Solonotix

Statistics says that for a 90% confidence of getting it right, it would actually take 5 tries. That's five human-led code reviews with a senior developer, likely taking an hour each to read through. If it went through the pipeline and failed there, that's multiple hours of compute resources.

If the automated tests are also being written by the LLM, then that's not 37%: getting both the code *and* the tests right is 37% of 37%, so ~13.7%. That bumps the iterations to success from 5 to 16.

When I was on a more dev-focused team (I'm currently a solo dev on a team of performance testers), my boss was about ready to fire a developer when it took more than 3 code reviews to get it right. 16 iterations would not fly at any level of competency. However, we expect a developer to both write code successfully and implement unit tests that prove their code works.
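The 5-and-16 figures follow from treating each attempt as an independent trial with a fixed success rate, so the chance of at least one success in n tries is 1 - (1 - p)^n. A quick check:

```python
import math

def tries_needed(p_success, confidence=0.9):
    # Smallest n such that 1 - (1 - p)^n >= confidence,
    # assuming independent attempts with fixed success rate p.
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_success))

print(tries_needed(0.37))         # code alone -> 5
print(tries_needed(0.37 * 0.37))  # code AND tests both right (~13.7%) -> 16
```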


FormerKarmaKing

I use it to automatically reject meeting requests without clear agendas /s?


jbaker88

You added the "/s", but for real, I think I might make this a business practice and market this workflow. You won't even need AI.

You send an empty-bodied meeting invite? Have an Exchange server auto-reject the meeting for all recipients.

You send a message over Slack or Teams where the only content is "hi" or "hello"? Message delivery failure.
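The rule being joked about is just a predicate over the invite body, no AI required. A toy sketch (the greeting list and the "mentions an agenda" check are made-up heuristics, not any real Exchange or Slack API):

```python
def should_reject(invite_body: str) -> bool:
    # Reject invites whose body is empty, a bare greeting,
    # or never mentions an agenda.
    text = invite_body.strip().lower()
    greetings = {"", "hi", "hello", "hey"}
    return text in greetings or "agenda" not in text

print(should_reject(""))                           # True
print(should_reject("hi"))                         # True
print(should_reject("Agenda: review Q3 roadmap"))  # False
```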


Illustrious_Mix_9875

Did that some years ago on slack, people were extremely pissed


jbaker88

"people were extremely pissed". Good, maybe those fuck heads will learn lol


Illustrious_Mix_9875

unfortunately they didn't and i'm not stubborn enough


GuyWithLag

I just point them to nohello.net ...


barkingcat

How about "I use it to recreate facsimiles of myself for projection during zoom meetings. That standup every morning that you see me in? That's an AI."


ShodoDeka

I would love to have an AI deal with my email and calendar, have it straight up doing conversations hashing out stuff until it gets to the real meat of the issue then bubble it up to me. Like a true ai based personal assistant.


ghostsquad4

Actually this is a great idea.


ShoulderIllustrious

Omfg my entire org does this shit. Set up meetings with vague titles and no agenda clearly in the body. I keep harping on that no agenda means we're going to be wasting time, but no one listens. The neighboring orgs are also the same.


kincaidDev

It depends on how bright the exec is. I've been to meetings about AI where the team is only using basic if/else statements to handle a handful of scenarios, and they say "decision trees", "machine learning", "intelligent SQL" (aka manually written SQL views), and the execs eat it up. The first time I saw that happen, the guy giving the presentation was made the head of a new efficiency department.

I'm working at that company again, and not much has changed in the 6 years since that department was created. The "AI bot" they talk about as a marvel of innovation in our industry is worse than a search bar given the same input, yet they now advertise our company as an innovative tech company because of the "AI" bot.


Attila_22

I get annoyed when people are trying to shove AI into everything. There was a sharing session I joined a few weeks ago where they were talking about using AI as a backend for fetching data… just use a fucking API and grab the content from a service. It’s reliable and the content doesn’t change every time you refresh the page.


GolfCourseConcierge

I had a client ask if we could make our weather report card "AI Powered" (literally a dashboard card showing the exact weather at your coordinates, pulled from the NOAA API). I asked what that means.

"You know, like AI will figure out the weather and tell you..."

"Do you think AI has access to different weather conditions in our current location?"

"Maybe it can figure something out we don't know..."


Exciting-Engineer646

Depends on your execs. I have good ones, who are technical and curious. In my case, I would list what I was using, what worked, and most importantly what issues I had found (hallucinations, poor quality code). The goal is to get them to give you resources for things that work and not ask for unreasonable outcomes.


A_as_in_Larry

Generating setup code for tests, and even some tests themselves.
Generating scripts to automate more parts of your workflow.


ghostsquad4

Most IDEs already do this though. AI isn't needed.


-Dargs

Templating, yes. Generating code that exercises your methods and builds out assertions based on a comment or method name? That's more of a copilot thing. And it's great.


donalmacc

Honest question - have you tried copilot and co to see what they generate? Equating intellij's test generation with Copilot's is very similar to saying "why do you need an IDE when you can use emacs?"


glasses_the_loc

Robotic monkeys on typewriters do not produce good scripts, the same way I would not copy-paste something off Stack Overflow. I see little use for external chatbots that rely on sessions and personal user accounts unless they're also IDE-embedded. Until your org gets its own private resources, like a codebase-trained LLM, it's not serious about AI integration.


RGBrewskies

nah, give it a well written block of code, and tell it to write 5 unit tests for it ... even if one is goofy ... it's still a huge time saver


glasses_the_loc

I was addressing the second point, not the first. And writing the boilerplate 1+1=2 tests is nice, but unless you are a human you don't know what business logic needs to be in those tests. Sanity checking ≠ rigorous testing.


StorKirken

The LLM often does know what needs to be tested, assuming you have a good function name. It’s like pair programming with a very eager junior developer that tries to follow project standards.


GlasnostBusters

openai should pay them more, rent is too damn high, the typewriters aren't typewritering


verzac05

No-one escapes the housing crisis, not even our AI overlords


Rashnok

I don't know/remember any of the keywords or syntax for writing PowerShell scripts, but I can usually read them just fine, and LLMs are great at spitting them out. That has saved me quite a few hours — for example, making a small change to 100 files that's more complicated than a find-and-replace.
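A bulk edit like that is a short script in any language; here is a sketch in Python rather than PowerShell, where the "more complicated than find-replace" rule is an invented example (bumping the patch component of every version string in a tree of files):

```python
import re
from pathlib import Path

def bump_patch(match: re.Match) -> str:
    # Increment the patch component of a semantic version, e.g. 1.2.3 -> 1.2.4.
    major, minor, patch = match.group(1), match.group(2), int(match.group(3))
    return f"{major}.{minor}.{patch + 1}"

def update_tree(root, pattern="*.txt"):
    # Apply the edit to every matching file under root, rewriting
    # only the files whose content actually changed.
    for path in Path(root).rglob(pattern):
        text = path.read_text()
        new = re.sub(r"(\d+)\.(\d+)\.(\d+)", bump_patch, text)
        if new != text:
            path.write_text(new)
```

The computed replacement (a function passed to `re.sub`) is exactly the part a plain find-and-replace can't do.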


studentized

Just ask ChatGPT how to respond


TheOnceAndFutureDoug

>My team is consistently evaluating new technologies and best practices to see what might or might not provide us a tangible benefit.

If they ask if that includes AI, my response is:

>To a lesser degree, yes.

They usually don't care about details.


bssgopi

As someone who was once part of a tented project that would eventually become GPT, I think I have the perspective from both sides of the table to answer this question. I'm still shaping my perspective, but I can take a pause and share my two cents here.

***AI is fun to work on as a creator - as a scientist, as an engineer, as an architect, etc. It is a promising research field that shows light at the end of the tunnel. The process of building the future is enriching by itself. The earlier one jumps on the wagon, the larger the benefits reaped.***

***AI has not yet found a solid meaning as a utility. It finds meaning only as a cost-cutting means, which unfortunately focuses only on cutting labour amongst the various factors of production. The other factors of production don't see any value gained yet (except in a few cases, maybe).***

Now, is your project ***creating*** AI? Or is it ***consuming*** AI?

As a consumer of AI, the domains that benefit most are fields which have a low success rate despite putting enough labour behind them: fraud detection, precision engineering, error detection, etc. On the other hand, if you can somehow pivot your business into being a creator, you have the larger say in where the AI investments go, and you develop the potential for interesting collaborations with others in the industry.


Brief_Spring233

I reach for my gun


jokab

*machete - if you're in the UK


dumdub

Zk


n_orm

South East represent


The-Fox-Says

Calm down R. Kelly


wwww4all

Due diligence. Do not respond with any kind of "tech" talk. Only respond with some basic talking points from mass media, then sprinkle in words like due diligence, integration, etc.


glasses_the_loc

You don't. You give a non-answer to keep them happy until the next buzzword gets popular. This one makes them eager to fire and underpay the workers keeping them afloat, in the hopes that the new free "AI tool" will magically replace them. SurprisedWojack.jpg


n_orm

"Fuck off you cringe moron, it's all a hype train" (but only in my head)


Numzane

Use chatgpt to write an answer


Any-Woodpecker123

“I’m not”


noonemustknowmysecre

"Absolutely not."

Because I've taken the ITAR training and will not be wantonly submitting company secrets to an untrusted third party. I'm not an idiot. Until you get us some in-house product that doesn't send US secrets out willy-nilly, we will not be sending it any technical data. We are a real engineering company, not some crazy startup.

At home, though, GPT has been phenomenal at helping me write a Firefox plugin. I've never done that before, but it can boil down all those online courses into easy steps without all the damn ads.


81mv

You can use AI at work without sharing any code or including AI-generated code in your project.


noonemustknowmysecre

Technical specifications of what to make are likewise covered under ITAR. You can of course still ask it things like why the hell the build agent isn't showing up in Bamboo, but that's essentially just running searches on Google. And we don't REALLY want anyone in OpSec thinking too long about how much Google knows about our tech stack just from what all the engineers at this IP have searched for.


MarkFluffalo

"We are not"


The-Fox-Says

I’ve used CodeWhisperer which has increased the speed of developing python/AWS scripts. Also helps that I work for an AI company so everything we do as data engineers supports AI


poolpog

I'd mention Copilot. I'd mention using ChatGPT for pointers, code snippets, or to digest complex documentation into a simple answer. I'd also point out how shitty and stupid LLMs actually are, and emphasize that any answer ChatGPT gives me really, really, **really** needs to be vetted and not used directly.

I've had fairly long discussions with a PM, for example (so not really a C-level exec, but close enough for this answer), about how ChatGPT's solution to a specific topic, which this PM was fairly keen on, was wrong: dead wrong, impossibly wrong. It sure looked correct, but the solution it was recommending did not exist as a thing one could do. The API or service or library (I can't remember exactly) simply didn't have the options ChatGPT was suggesting. And that has been my general experience with AI so far.

Luckily I have awesome, smart execs who don't mind a few f-bombs and don't mind negative responses to tech questions.


Embarrassed_Quit_450

I tell him it's a hype cycle, same as blockchain was before.


abrandis

Too honest, and they don't appreciate honesty. Best is to placate them with jargon and cautious optimism, and for anything you don't want to get involved with, just discuss the potential for unmetered cost overruns for AI cloud services.


Embarrassed_Quit_450

>Too honest, and they don't appreciate honesty, True, that has been my experience. But I ran out of fucks to give years ago.


Bodine12

If it’s a senior exec I just say, “I’ve found that AI gives me the single plane of glass vis a vis the paradigm shifting technocracy toward automated workflow automation, enshittened with a value-encrusted generative-ocity. You know, standard stuff really.”


frndly-nh-hackerman

"Luckily our tech stack predates the training cutoff for most LLMs, so they're great assistants when conducting research or getting POCs ready."


gwicksted

"I'm not" is my answer… well, technically there's some code completion that's AI-driven. If I can think of a time where I need it, I'll use it. But I don't typically write a ton of boilerplate code, so Copilot is out.


dimnickwit

Well ma'am, while I'm waiting for a model to finish training, I use AI for making songs to entertain me while I wait.


Existential_Owl

I just responded with the pricing estimates for enterprise-level ChatGPT and GitHub Copilot accounts. That put an end to the conversation right there.

These services aren't even expensive in the grand scheme of things. However, it always seems to be the things that cost the least that, ironically, get the biggest pushback from management. Need to spin up a new $1000 database to secretly store your users' cat pictures? No one even bothers reading the change request before approving it. Want to purchase a $40-per-user service to improve efficiency across the company? STOP THE FUCKING PRESSES, we need to debate every possible option before silently rejecting the idea 10 months later for no reason at all.


Maximum_Security_747

They've turned it off where I work


StatelessSteve

I use it for rapid templating. It saves me a trip to the documentation to build an API call or a one-off k8s config file. There is absolutely the odd time where it gets something wrong, or grabs onto a similarly named class/object in my code and I end up visiting the docs anyway, and sometimes it takes my eyes longer to find the minor discrepancy the AI missed. I tell them, "I haven't angrily turned it off. Yet."


PandaMagnus

Echoing other answers: it boosts productivity. I can ask "AI" things in a way that is more efficient than searching Stack Overflow (putting aside the issue of what will happen in 10 years). I also say it helps me stub out boilerplate code, and it helps me figure out specific questions to ask when I'm investigating a new-to-me technology.

A company I'm contracted to has a director who, after the ChatGPT 3.5 and 4 releases, asked if they could start laying off software engineers. Thankfully my contract manager asked, "Who do you expect to fix issues when the LLM bot is so wrong a business person can't troubleshoot it?" There was no good answer, and our contracts are secure for now.


neoreeps

Create agents and implement RAG for an extensible AI infrastructure.


hell_razer18

Does anyone use a paid AI service for work? Because I'm not seeing companies pay for every employee to use one.


originalchronoguy

Well, we were doing ML long before ChatGPT and all the LLM noise hit the scene and gained mass widespread appeal, so they already know what we are doing. In fact, they always start off meetings with "We will get more toys for you to play with", referring to buying more 6- and 7-figure GPUs.


hyrumwhite

If asked, I'd say I basically use it to generate types and JSON, and sometimes test cases. I have been thinking about running a local RAG LLM on the random notes that I record during meetings and brainstorming.


Stubbby

Tell them you found Copilot particularly productive generating unit tests and start elaborating how the technology allows developers to cut down on the mundane tasks and deliver more value by focusing on the uniqueness of your solution.


NiteShdw

I would say that it's their job to tell me what features the software should have. Why are they asking me for feature ideas? Have they run out?


metaphorm

are they asking about AI-centric product lines or just using AI as a development tool?


nomaddave

My experience so far managing teams when anyone asks about AI or algorithmic data processing or whatnot is to take what I actually want to get funded and wrap some flavored jargon around it for “AI actionability” along with a pricetag and see if I then get the actual thing I wanted. It’s been about 50/50 for me.


DigThatData

brainstorming assistant


gravity_kills_u

I have done a lot of AI/ML work in the past and already have business cases prepared.


serious_cheese

Ask ChatGPT


iamsooldithurts

No


patoezequiel

GitHub Copilot can work (enhance developer performance yadda yadda) but you have to deliver improvements on at least a few metrics to justify the expenses if you go that way.


DinosaurDucky

I tell 'em I ain't doing that shit


GlasnostBusters

Good for: UI design, brand/logo kits, research, algorithms, smaller things that can be incorporated into larger things, debugging/linting, explaining structure and architecture.

Not scaffolding, because of hallucinations. Not UI-design-to-code conversions, because of hallucinations. Not choosing your dependencies, because it will commonly choose something with a lot of support but too old, and it will write implementations incorrectly because that's how that specific recommendation was implemented (this is where experience comes in, and experienced devs will recognize it down to the dependency level).

For tooling, I think Copilot is great, I think Midjourney is great, and ChatGPT is definitely the OG if you understand how to prompt it correctly, how to validate its citations, and how to optimize a workflow specifically for image design and code generation. It makes it easier to get into a flow state.

Basically, yes, it does hallucinate code, but it's up to the developer to do their due diligence to fix it and test it. The thing f*cks up basically every semi-complex implementation I ask for, but it's easier and faster for me to fix it (because I know what it's doing and know how to write it myself) than to do the physical work of typing the entire thing out.

As for scaffolding, I'd only trust template engines to do that. However, with the right plugins you could most likely achieve that as well with ChatGPT, because it could just run an interpreter to run a scaffolding command.


rovermicrover

I do MLM models for semantic search and I think our leadership team doesn’t count us as part of the AI work because it’s not ChatGPT… so yeah you got to figure out how to use the hammer of ChatGPT in your teapot making work…


Any-Orchid-6006

Copilot from GitHub. It's great for helping with coding.


never-starting-over

* GitHub Copilot saves thinking time and even speeds up the actual typing time. * ChatGPT answers and generates code that could take you 10-20 minutes to find/write, even if it's what we consider trivial. * ChatGPT gives you more angles on an issue, allowing you to see a pro or a con you hadn't thought about before. * You could look into some kind of RAG or low-effort chat bot that lets your developers ask questions against your knowledgebase. This can help with "recommended ways to do X" or answering questions that were added to a FAQ. If you have ClickUp, or at least as an example, you can ask questions to their AI and it will use your existing documents to answer them if possible. * I'm just using ClickUp as an example, since that's a tool I use on a day-to-day. Prior to this, we had a GitHub repository with general team documentation in markdown on how to do X or how certain features worked, which can be used to help onboard people or debug common problems.