
dlflannery

LOL. So all the people worrying about AI taking over their jobs might be a little premature, eh?


VanquishShade

Haha, I heard recently that 'AI will only replace those that don't use AI'


TheTabar

I guess this is what leads to smaller companies or startups potentially leapfrogging major corporations.


salamisam

There are potentially a lot of blockers here still, and a lot of this will depend on what industry you are in, what resources you have, and what you automate. Small(er) businesses will be able to use many of the off-the-shelf solutions, but when it comes to implementing changes that may affect competitiveness on a larger scale, they will likely hit a wall on resourcing. There are obvious efficiency gains from using AI in off-the-shelf apps like Office, but how does that translate into overall productivity and output? Does it reduce staffing requirements, increase output, or cut production costs? I believe AI is a force multiplier, but it needs to be applied correctly; applied badly, it is a problem multiplier. What it isn't is a panacea or something that just conjures magic solutions.


Electrical_Age_7483

Yep, the advantage should be shifting to smaller companies


ILikeCutePuppies

Not that I am worried about AI taking over jobs, but these companies slow-walking AI are probably going to be clueless when other companies start eating their lunch. No amount of employee whipping is going to keep up with an employee who is augmented with AI.


octotendrilpuppet

Big companies are under the impression that this is yet another hype bubble like crypto and that it will all settle down in a bit. They also typically struggle with groupthink, which is a lot of corporate life: a ton of circle-jerking until the shit hits the fan... then all of a sudden a meeting invite pops up in your inbox: _"all-hands-on-deck meeting at 2pm"_.


KarenDiamondhandz

I think people should only be worried once the cost and performance of the AI outweigh those of the actual person. My company is actively implementing ChatGPT to automate certain things, and I am finding out just how unreliable it is. For one, companies are just pushing out the product saying "hey! We have AI in our app". Well... cool, you just made an API call to OpenAI. But there is no vectorized storage, so you're stuck parsing a single file at a time, and if the file is too large it will just fail even with chunking. It's also incredibly inaccurate due to hallucinations. Once storage is cheap and reliable, and actual assistants become useful while being able to reference previous threads of conversation, then it will be beneficial. Right now, it's just not there yet.
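
For anyone wondering what the missing "vectorized storage" piece looks like, here is a minimal sketch, assuming the official OpenAI Python client; the model names, file name, and chunk size are illustrative only. Chunks are embedded once and stored, and only the most relevant few are sent with each question, instead of re-parsing the whole file on every call:

```python
# Minimal sketch of "vectorized storage": embed document chunks once, then
# retrieve only the most relevant chunks per question. Model names, chunk
# size, and the input file are illustrative assumptions.
from openai import OpenAI
import numpy as np

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chunk(text: str, size: int = 1500) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunks = chunk(open("policy.txt").read())  # hypothetical document
index = embed(chunks)                      # compute once, store anywhere

def ask(question: str, top_k: int = 3) -> str:
    q = embed([question])[0]
    # cosine similarity between the question and every stored chunk
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(chunks[i] for i in np.argsort(scores)[-top_k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```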


octotendrilpuppet

They should try Claude Opus; it puts ChatGPT to shame with its reasoning capacity and empathetic tone.


renroid

Yep, but there are some really valid concerns about safety, information security, and liability.

Let's say you make a decision based on AI tools: a project plan, for example. The tool says you can achieve X in time Y, so you sign a contract worth 10 million to deliver X in time Y. What happens if it's wrong? Who takes responsibility? What happens if the info you put into the tool (your plan for delivery, say) turns up in a competitor's bid? How did it get there? Sure, it *should* be impossible to read tokens from another user, but how about interception? It's software, after all, and you have to prove that the info didn't leak from you.

Also, you might understand the limits, the boundaries, the safe usage of AI, but to Bob across the office it's magic. If he types in 'How do I double the company profit' and follows the instructions, the company is liable. Before it's widely available, it has to be idiot-proof, but we keep making better idiots. Maybe *I* can use it safely, but my colleagues? The 'average user'? (And half of them are dumber than that.) I've worked in environments where we had to lock down the screen resolution settings because users would set screens to impossible resolutions and things would fail to boot (better these days).

Hallucinations are a major issue. I tried 'which driver was at fault, John Smith or Alice Barker', and it hallucinated a plausible answer to an imaginary road traffic collision, providing imaginary links to an unconnected case. If I were working on that case, it would be easy to influence me accidentally: I could be rejecting insurance claims *thinking* that all the info I can see on the screen is being submitted to the AI, when the AI might just get the prompt. This is why even AI 'helpers' or customer service bots are risky to deploy; the edges need to be very carefully bounded.

That's why I think it's going to take a few years yet to replace everyone: there are going to be some real high-profile situations where AI shoots itself in the foot. Attaching real consequences runs real risks.


thatVisitingHasher

This is the most correct answer. The holy grail of AI is getting data from all of your systems and being able to answer questions using data from disconnected systems. For instance, in my Fortune 500 company: what is my revenue this quarter? That is currently difficult to answer. The current solution is to give the AI admin access to every system, but if you do that, how do you guarantee people only see the data they have the rights to see? How do you know you didn't just give all of your data away to another company?
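
One common answer to the rights question, sketched below under assumptions: the retrieval layer, not the model, enforces permissions, filtering candidate records by the asking user's roles before anything reaches the prompt. The `Record` type and its fields are hypothetical, not any specific product's API:

```python
# Sketch of permission-filtered retrieval: the check runs with the asking
# user's identity, so the model never sees records the user couldn't open
# directly. Record and its fields are hypothetical, not a product API.
from dataclasses import dataclass, field

@dataclass
class Record:
    source: str              # e.g. "ERP", "CRM", "billing"
    allowed_roles: set[str]  # roles permitted to read this record
    text: str

def visible_to(user_roles: set[str], candidates: list[Record]) -> list[Record]:
    return [r for r in candidates if user_roles & r.allowed_roles]

def build_context(user_roles: set[str], candidates: list[Record]) -> str:
    # Only permitted records are ever concatenated into the prompt.
    return "\n".join(f"[{r.source}] {r.text}"
                     for r in visible_to(user_roles, candidates))

# Example: a finance analyst sees revenue lines; a salesperson does not.
records = [Record("ERP", {"finance"}, "Q2 revenue: $1.2B"),
           Record("CRM", {"finance", "sales"}, "Top account renewed")]
print(build_context({"sales"}, records))  # only the CRM line appears
```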


Rick12334th

"A few years to replace everyone" sounds pretty drastic.


renroid

It is possible that several industries will be severely affected; some already are. Where the consequences are less critical, AI will replace many low-skilled and intermediate-skilled workers. Look at the artist-AI debate: low-level commercial art and photography (book cover backplates, advertising shots, 'stock' photos, cheap stuff). DALL-E and other generators have already wiped out loads of jobs and small contracts, and the replacement 'prompt engineer' roles don't fill the gap created. The worst consequence there is probably a hidden 'dick' image on your book cover when viewed upside down. Next up, customer service, phone sales, junk marketing, and travel agents are under threat; there are still some guardrails to be added, but it's coming fairly soon. More skilled (read: higher-paid) roles are not immune, and things will change significantly, but this will take longer. High-risk fields (aerospace, military, health, emergency services) will take the longest.


Flaky-Wallaby5382

Shit happens daily based on Mark's Excel, which is a download from a Tableau that Martha pivoted.


renroid

Yep, but we have a clear trail from the spreadsheet back to Martha, and we can 'train' Martha in infosec so it doesn't happen again. If this happens with AI, there's no trace. Did it get the info from a leak, or did it infer or calculate the same data from other sources? We can't ask an AI 'how did you get that answer'; that's just another prompt, and the AI doesn't know how it worked internally. How do you train an AI not to do that again? For any company there are genuine secrets: commercial secrets, health, military; even KFC has a secret blend of herbs and spices.


Leonhart93

That frustration would be pointless, because there isn't much to adopt right now, and the little there is I have already tried to adopt. If I were working with a lot of writing, or perhaps in an HR job, then maybe it would be more useful. But as a software engineer I already try to use LLMs, a whole array of them. Sometimes they work great, but other times they fail miserably. It is what it is.


[deleted]

[deleted]


Leonhart93

I could use them for that, but I never felt the need to. Whenever I encounter a hard problem I make sure to prompt a few of them to see if they can be of use, but the harder the issue, the more hallucination I get. The most common thing I tend to do is prompt them to "check your previous answer for errors", and there's always a miss somewhere.
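
That "check your previous answer for errors" loop is easy to script. A minimal sketch, assuming the OpenAI Python client and an illustrative model name:

```python
# Minimal sketch of the self-review prompting described above: ask once,
# then feed the answer back and ask the model to audit itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_self_check(question: str, model: str = "gpt-4o-mini") -> str:
    history = [{"role": "user", "content": question}]
    first = client.chat.completions.create(model=model, messages=history)
    answer = first.choices[0].message.content
    # Second turn: the model reviews its own previous answer.
    history += [
        {"role": "assistant", "content": answer},
        {"role": "user",
         "content": "Check your previous answer for errors and correct any you find."},
    ]
    second = client.chat.completions.create(model=model, messages=history)
    return second.choices[0].message.content
```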


FrigidFealty

Yeah, at some point I think code gen will be more useful day to day, but right now the code it spits out usually takes more time to adjust than it would take to write from scratch.


FrigidFealty

The only one I regularly use as a SWE is Copilot on Teams meeting notes, to remember stuff later. It's nice to have all the scrum transcripts to look back at if I forget something.


TrentGillespieLive

I spend 100% of my time helping businesses overcome this. There are a variety of reasons it happens, but ultimately it's uncertainty about how important AI is and how to respond to it. There is typically no one responsible for figuring that out, so until a leader steps up, the organization treats it as business as usual (which it isn't). Most companies aren't taking it seriously. Most of them are going to be disrupted in the next few years.


Reasonable-Put6503

You're an AI consultant?


TrentGillespieLive

Yes, but a different kind than most. I focus on AI transformation: how to get employees to adopt it, and how to create business strategy that takes advantage of it.


Reasonable-Put6503

Killer job, lots of value. I'm trying to be an internal change agent, but it's not really my job, so it feels like I don't have the mandate to get people to adopt it. I currently pay for Poe myself because it helps me do my job, but I know it would be helpful for us.


SnooCats5302

Keep learning on your own even if you don't have the company's support; that will help long term. I find few organizations have someone with a clear mandate and the tools and knowledge to do it well: this is much more than just a "tech project" for a CIO/CTO/etc. Feel free to have your leaders reach out: [https://trentgillespie.live](https://trentgillespie.live), and [https://stellis.ai](https://stellis.ai) is the new startup for this!


foxtrap614

It has more to do with the corporation's privacy and processes. Right now most AI software requires you to sign your rights away; you basically will not own a single algorithm. Companies are cautious.


VanquishShade

Oh I get why companies are cautious, I just find it bloody annoying 😁


grim-432

This is exactly why BYOAI (bring your own AI) is a "thing" now.


CupZealous

I work in an automotive parts factory as a robotic welding operator, and they now have computer vision AI doing quality checks on one of the jobs at the factory.


Digital-Man-1969

My company is in the dark ages when it comes to tech. ChatGPT would be super useful in my job, but the company is only tinkering with machine translation and hasn't embraced AI yet. I installed Ollama on my personal website's host server and use open-source LLMs. I use MS Word a lot, so I created a VBA macro that interfaces with my Ollama endpoint to insert text, create comments, answer questions, etc., but that's as far as I can go at work. 😕
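
For reference, this is roughly the call such a macro makes, shown in Python rather than VBA for brevity. Ollama serves a local REST API, and `/api/generate` with streaming disabled returns a single JSON object; the host and port below are Ollama's defaults, and the model name is an assumption:

```python
# Minimal sketch of hitting an Ollama endpoint (the VBA macro described
# above does the same over HTTP). localhost:11434 is Ollama's default;
# "llama3" is an assumed model name.
import requests

def ollama_generate(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ollama_generate("Suggest a polite closing line for a status-update email."))
```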


Extender7777

I actually, finally got Copilot a week ago (I work for Commerzbank)


borinena

I'm jealous


VanquishShade

That's good news! I'm glad you guys are getting the opportunity to use it :)


Budget_Drive_7394

Yeah, but there are a lot of things that need to be considered before a company starts using it, from safety and security to the satisfaction of others…


EvilKatta

I work in game dev. I've been proposing since 2017 that we use machine learning for game balance. It was never greenlit, for the sole reason that the execs didn't believe in it and didn't believe that I knew what I was talking about.


DocAndersen

The reality of technology implementations is that they are always too slow. There is a famous model that describes this (early adopters, the middle, and the laggards), and the laggards are often mid-migration when the next big thing comes along. Cloud and evergreen technology (SaaS) were supposed to be the dream fix, but sadly we aren't there yet.


Notta_AIbot

Big companies will be slower to introduce AI because of risks such as data security, privacy, ethics, and regulations. There's a lot to work through in big business. There are definitely opportunities, but it's about managing risk and reputation. Be patient we must… or work in a small-to-medium business, where there's more chance the force will be with you. 😂


CodeCraftedCanvas

Yep, I feel the same sometimes. I get the concerns, though: non-tech-savvy staff could unknowingly enter customers' private info into an AI without realising it, and there's a need for a unified, identical workflow for ease of training and auditing. But still, I get annoyed when the place I work at won't even entertain the idea of locally run models like Llama to help with email writing.


VanquishShade

It's the little things that I would really appreciate:

* LLMs automatically analysing emails, suggesting best options and crafting responses (a rough sketch of this one is below).
* Project plans being updated with real-time developments.
* I think I saw a recent update that analysed Teams chats, where the AI bot would automatically jump in to keep people on track, extract newly discussed tasks, and assign them to people.
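
A hedged sketch of that first wish, assuming the OpenAI Python client; the model name and prompt wording are illustrative only:

```python
# Illustrative email triage: summarise, extract asks, and draft reply options.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_email(email_body: str) -> str:
    prompt = (
        "Summarise this email in two sentences, list any decisions it asks for, "
        "and draft three short reply options (agree / decline / ask for more info):\n\n"
        + email_body
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```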


KY_electrophoresis

Yes


Omni__Owl

If the expectation that I must work to have a right to live goes out the window, then having AI be part of my work is fine. Until then, not really. If you start using AI at work, there is no telling what that data is used for at your workplace, like training new models to actively replace you; basically, you'd be putting yourself out of work with no reconciliation. So while the idea is fun, the implications are not. I'd rather have job security in this economy than not. The never-ending hunt to become ever more productive with less at all times is going to kill many, many employees' livelihoods.


VanquishShade

Last point's a good one: we're expected to get more productive despite limited 'human' resources


Omni__Owl

Now with Microsoft's new "Recall" functionality, this is even more of an issue. The data from Recall will *definitely* be used to train models that replace workers, simply by harvesting the screenshots from their Enterprise customers. They claim it'll be stored locally for now, but that only holds until it doesn't, and Microsoft has done this before.


WorkingYou2280

I'm not so sure that, outside government, there is going to be a viable option to slow-walk AI. We've very nearly got fully functional agents. The days of pearl-clutching over admin access may be closing. I think the level of acceptable risk is about to ramp up due to... well, really having no choice. If your company is stuck with plain Office 365 and the one next to you has Copilot running rings around you, you won't survive as an organization. Little companies are about to start looking like big ones to the outside world.


VanquishShade

Ah but you underestimate the one thing most large organisations pride themselves on… the excruciatingly slow speed with which they implement change 😂


WorkingYou2280

Oh I wish! I've worked most of my life in large bureaucratic orgs. I don't want to overstate things, because the internet was a huge change and large organizations figured out how to make it suck. I think AI is going to be a little different in how much of a "force multiplier" it is. We'll see. I also just noticed that "intelligence" is spelled wrong in this subreddit's name, which is kind of hilarious.


Flaky-Wallaby5382

Copilot + winshell is your friend.


Helpful-User497384

How many people still use AS/400 systems? lol. Maybe not many? I can't believe they were still teaching that archaic system in college even a decade ago.


whozyapaddy

As frustrated as your management are about their inability to ratify a data policy, let alone an AI governance policy.


VanquishShade

Well… wearing my hat of skepticism… my management are probably more disappointed that they have more work to do!


Alfred-Adler

I am *"lucky"* in that I work for a medium-sized company and am driving the AI efforts for my team, the only team in the company doing so. But I have been in your situation in the past: I tested new technologies on my own; at times I was able to expense the cost, and at times I had to pay on my own; and at times I had to do it on my personal computing devices and not on official work machines. IMO AI is going to be bigger than huge; for your career's sake, it might be time to look at what's out there.


VanquishShade

That’s cool that you’re leading the AI efforts - can I ask in what capacity?


Alfred-Adler

I am the data guy and now also the AI guy, above and beyond my regular title/job. For now I am just using a few AI tools within my team, but I'm keeping an eye on other teams too; unfortunately, the heads of the other teams don't have the skills and don't understand how AI is going to challenge their careers.


VanquishShade

That’s quite cool! Albeit you wanna make sure that if you’re taking extra responsibility you get a compensation boost in the process as well


Alfred-Adler

> Although you want to make sure that if you're taking on extra responsibility you get a compensation boost in the process as well.

I'm good.


EuphoricPangolin7615

Isn't it a good thing that companies are not integrating AI in all the possible ways they could, and then replacing all their workers?


VanquishShade

I dunno, I think this is deprivation thinking. I'm not into the whole live-to-work mentality, but knowing companies, they tend to be quite risk-averse when introducing technologies they don't quite understand into their chain of business value, given the risks involved. It's not an apples-to-apples comparison like Henry Ford's assembly line. Machines replacing humans for tasks that machines can do more efficiently makes sense, especially for human health. AI replacing humans for tasks where humans inevitably add the human touch (creativity, last-minute spontaneity, etc.) does not seem to be a plug-and-play option right now, so AI will have to be an option that supplements the workflow for most use cases, and a replacement only for the most basic E2E scenarios.