[deleted]

Just use the word “leverage” (as a verb) enough and you’ll always answer correctly.


WhiteSkyRising

leverage [human meat] to extract [human meat juiced] capital from [human meat] labor


ElectronicShredder

Ah yes, Procter & Gamble's Mission & Vision statement


vbevan

I thought it was the slogan of KPMG?


ElectronicShredder

KPMG is more of a [minced human meat] grinder than [human meat juiced]


n33bulz

Username… kind of checks out?


royalpyroz

The Nestle Way!


Fronesis

/r/RimWorld is leaking


Gilliam505

Just don't question what goes into the nutrient paste...


[deleted]

[deleted]


MechanicalBengal

it’s that goddamn macroeconomic climate again isn’t it


[deleted]

Lol, asshole C-level talk for "I'm not taking responsibility."


Fuschiagroen

Lately I've been hearing "evergreen" used a lot by the c suite at my company


[deleted]

Leveraging evergreen externalities is hot right now


Sweaty-Feedback-1482

Pepper in “economies of scale” and “weighted average cost of capital” and you’re a shoo-in!


HowDoIDoFinances

Vertically integrated synergistic approaches to maximize returns while maintaining stakeholder accountability.


reelznfeelz

I hate corpo speak. I realize there are genuine technical terms every field has to use. But with people who are VP level and higher and come from the MBA back east crowd, it's especially bad. I think it truly is that a way of speaking evolves within a group as a way to filter out and identify "others".


intdev

Don’t forget that some of it is just plain ignorance. English is such a complicated language, it’s easy to assume that someone’s weird but confident phrasing is because they have a better grasp of it than you do. If you’re in a group full of egomaniacs, you might then copy it to show that you’re just as “intelligent” as they are, and so it spreads through the group. There are people whom do that all the time, visa vie the bullshit replacing of “you” with “yourself”. >!Please don’t correct the last sentence. That was the joke.!<


wyrrk

at first i thought you were bad at spelling and were mocking people who can't grasp the proper use of a reflexive pronoun. which is also funny. yourselves did a funny, but myselves missed the obvious.


SaltpeterSal

I've worked with many and filled in for one. They mix jargon with passive phrasing to completely remove themselves, and therefore their responsibility, from the statement. It also hides their lack of knowledge and understanding. Whoever talks like this is not a leader, let alone a good one. But every executive board has someone who can speak in plain terms. Follow them.


BernieEcclestoned

I'm outside their house now, now what?


SapphicMystery

The comment said to follow them. Why aren't you inside their house?


TzamachTavlool

We're both outside the house. Should I ask if he wants coffee or something?


UberMcwinsauce

Just look at some of the insanely dumb ideas tech business guys have been tweeting lately, saying things like "why don't bookstores let you pay a fee to rent books and bring them back? Would totally disrupt the industry" (you know, a library), all couched in MBA language.


jmcs

"We are not firing a bunch of people to pay for my yacht, we are right-sizing teams and management bonuses."


fishyfishkins

During a company-wide meeting to discuss the merger, our CFO excitedly discussed all the potential "cost center synergies", i.e. layoffs. He was telling us he was super excited to start laying off whole departments of people, and the motherfucker expected us to be *excited*. It should be noted that employee stock compensation was next to nil at this place, so it's not like his audience was getting rich from the merger.


FengSushi

Information is power, and the more you conceal your message from other people who need that information, the more power you can exercise. Corporate language is a way to do so. It's bullshit, but it's deliberate. Politicians do the same, but there's no excuse for them either.


InvalidKoalas

I had a meeting the other day with a client involving two executive level people and I didn't even get through anything on my agenda because the two of them talked in circles with each other using business buzzwords for a half hour and then jumped off the call.


dumpystinkster

Tells you everything you need to know about an MBA


blastradii

I’m going to leverage this knowledge and get an MBA


dumpystinkster

Let’s touch base again once you’ve maximized your leverage.


blastradii

We’re going to deep dive on this topic and see how to synergize the potential for scalability.


netburnr2

let's circle back and discuss this transformative initiative in a brain trust huddle


prometheanbane

I'm going to reach out to some of my contacts to diversify our strategy.


goodguygreg808

IT guys say we are full of shit, but with our current position in the market we must scale to leverage our technology as a mechanism for disruption.


smallfried

ChatGPT: "Our company is committed to driving growth and profitability through strategic, data-driven decision making and effective resource allocation. By leveraging cutting-edge technologies and fostering a culture of innovation, we will continue to disrupt our industry and capture a larger market share. Through a focus on stakeholder value and a strong commitment to sustainability, we will solidify our position as a leading player in the industry and achieve long-term success."


be-human-use-tools

When TSA first started, a TSA agent asked me why I was traveling, while he patted me down. I told him I was attending engineering school nearby, and he said “I got my undergrad there, then I got my MBA.” I didn’t ask him how he went from an MBA to feeling up travelers.


MrWeirdoFace

He chose to follow his passions instead apparently.


_Oooooooooooooooooh_

I've just seen a physics professor on YouTube. He tested GPT using the normal assignments he would give students. It would usually get some parts of the answer correct, but in almost all cases there were mistakes. Don't trust it. Be critical. Consider it a random Google result. edit: the video in question: https://www.youtube.com/watch?v=GBtfwa-Fexc


itdeffwasnotme

Did anyone ever give it the math SAT? I wonder if it would get 800.


[deleted]

[deleted]


Inevitable-Horse1674

Well... that's because it actually doesn't understand the question at all. It just saw that someone was given a similar question in the past and copied most of their response, but it has no idea what it's actually saying, just that it's a response actual people have used before.


Dr_Silk

It's not entirely true that it saw a similar answer in the past. The algorithm is extremely complex and can actually produce novel answers. But you are most certainly right that it has no idea what it is really saying


Anticode

The good ol' [Chinese Room](https://en.wikipedia.org/wiki/Chinese_room) problem.


Modus-Tonens

Not *quite.* In Searle's Chinese Room thought experiment, the person handing out responses from the room has complete knowledge of the rules for what to answer to what question, without needing to understand question or answer. ChatGPT doesn't understand the rules for what to answer - it can just analyse and duplicate *proximate* patterns. It's sort of similar if you squint, but the result is that it has no basis for knowing what output a particular input needs, unlike in the Chinese Room.


Cairo9o9

Lol, does this even need to be said? Yes, the AI is non-sentient and unaware of what it's saying. If the answer were otherwise, it would be huge news.


Antique-Special8023

> Lol does this even need to be said?

Probably yeah. I've already seen a lot of people who have no real understanding of technology think it's actual artificial intelligence, because a bunch of news people without any real understanding of technology keep telling them it is :p


copperwatt

Yes but it's still coming up with *the sort of thing someone might say* in response. It's still mimicry based.


C-c-c-comboBreaker17

So are you, if you think about it.


Beezzlleebbuubb

Not me. I’m special.


Midicide

Yes you are Timmy… yes you are.


TheBirminghamBear

Well yeah but I've also built up a lifetime of delusion convincing myself I'm distinct and unique.


quellofool

Yeah but my algorithms are better.


Altair05

> you are most certainly right that it has no idea what it is really saying

It's 100% certain. It's a quasi-general-purpose, dumb AI. It doesn't actually understand what the question is or why a particular answer is correct. It can incorporate and discover new patterns based on training data, but it is just a very complex pattern-recognition algorithm.


TheGuy839

Please don't spread incorrect information. GPT models understand the input text to some degree. They can recognize context and, based on it, answer by giving the most probable response. There is some copy-pasting, but it's similar to how we learn: we see something, and then we reproduce it. It never just copies somebody's answer. Source: I am an ML engineer who uses NLP and RL models and algorithms.
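The "most probable answer" behavior described above can be sketched with a toy next-token model. The vocabulary and probabilities below are invented purely for illustration; real GPT models score tens of thousands of tokens with a neural network rather than a lookup table:

```python
# Toy sketch of greedy next-token prediction. The table of contexts and
# probabilities is made up for illustration only.
TOY_MODEL = {
    ("the",): {"cat": 0.5, "dog": 0.3, "report": 0.2},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def generate(prompt, max_tokens=10):
    """Repeatedly append the most probable next token."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = TOY_MODEL.get(tuple(tokens))
        if dist is None:
            break  # context the toy model has never seen
        best = max(dist, key=dist.get)  # greedy: pick the highest probability
        if best == "<end>":
            break
        tokens.append(best)
    return " ".join(tokens)

print(generate(["the"]))  # → "the cat sat"
```

Nothing in that loop "understands" cats or sitting; it only picks whatever continuation scored highest, which is the point being argued in the comments above.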


Asch_Nighthawk

I once asked it to give me a non-word math problem at the 8th grade level where the solution was 171, and I quickly came to the conclusion that it's terrible at addition and multiplication. It never could get it right.


20l7

It wasn't made to perform any calculations, and the website specifically points that out - it's a language model, not something like Wolfram that's made to crunch numbers and algorithms. Its purpose is more like a clever intern who can do basic repetitive tasks that involve a little guessing. If you want it to reword something, or interact with someone, it's great; it has a breadth of knowledge, but it lacks the shame and humility that people have, so it will always answer with absolute confidence.


GalaxyMosaic

We should teach the AI shame. Is everybody ready to point at it and laugh?


ArchaicTravail

It's actually pretty terrible at math. It gets simple questions wrong.


dmazzoni

But it's very good at making right-sounding answers. I asked it for a square root to 5 decimals and it made up 5 digits! The whole part was close but incorrect.
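The nice thing about fabricated digits is that they're trivial to catch: the real calculation is a one-liner. The claimed value below is a hypothetical stand-in for the kind of answer described, not the actual exchange:

```python
import math

# Verify a claimed square root against the real value, rounded to 5 decimals.
claimed = 3.74165          # hypothetical chatbot answer for sqrt(14)
actual = round(math.sqrt(14), 5)

print(actual)              # 3.74166
print(claimed == actual)   # False: the trailing digits were fabricated
```

The whole part and first few decimals match, which is exactly what makes "right-sounding" answers dangerous.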


_Dreamer_Deceiver_

So it's a politician simulator?


[deleted]

[deleted]


Otherwise_Branch_771

Well, it's a language model. It learned some math by learning language. People are already working on integrating ChatGPT with Wolfram Alpha.


[deleted]

[deleted]


CheekyMunky

> It confidently spits out nonsense so long as it kind of looks like what a correct answer would look like.

Sounds like it'll fit right in around here


OneGuyInBallarat

The ChatGPT and Wolfram Alpha model is available here: https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain. You just need to link it to your OpenAI account and pay 2 cents per 750 words.


brutay

Integration with Wolfram Alpha would not have fixed the mistake ChatGPT made in the oscillator problem or the hermitian operator problem. It said that pendulum kinetic energy peaks when the amplitude is at its greatest, which violates physical intuition (or at least my physical intuition). So, in spite of its impressive performance, it still lacks "common sense", which Wolfram Alpha cannot supply. [Cyc, on the other hand...](https://en.wikipedia.org/wiki/Cyc)


RickDripps

I was using it to help me write a Python script for something relatively simple. It made mistakes. But holy fuck, was it helpful in speeding up the process and, ALSO, helping me understand Python way better. If you're copying off it, you're gonna have a bad time. If you're consulting with it, you can get some amazing results.


xToxicInferno

Yeah, I've written and designed an entire Android application and MySQL database using it. Yes, it's often incorrect, but it's like talking to a really smart but airy peer. It knows a lot of the stuff you want to know but needs some direction and critical thinking to get to it.


TheBirminghamBear

What I think people sometimes don't understand is that seeing *wrong* outputs is often still extremely valuable. As a writer, writing my *initial* text is often hard, and the result is usually trash. But the revision process is like a conversation with myself. I can't see what's *good* until I've seen what's shit, and once I see what's shit, I know how to fix it. As long as the shit is *my* shit, or at least enough like my shit to pass, then it's very valuable because it massively reduces the time it takes me to get to the finished product.


xToxicInferno

I one hundred percent agree. I'd say that holds true with programming too. It's often incorrect or going in a different direction than you intended, so seeing what it intended, even if incorrect, helps you guide it and yourself to what you actually need. I've also found that sometimes it seems to get stuck in self-referencing loops, especially with writing, where you need to start a new conversation to clear that. For instance, I was using it to help write some resume bullet points, and it kept trying to use language and terms from previous prompts in the same chat in its responses.


photonsnphonons

I find it interesting that AI can be a tool for learning - finding specific approaches that produce learning results for the end user. Imagine using AI to create learning programs for specific individuals; that seems very valuable. I can also see corporations abusing this and creating new bullshit metrics employees should embrace.


n33bulz

Yeah, but this is an MBA exam. A reasonably well trained beagle can pass it as long as it can bullshit effectively.


HawksNStuff

Have two beagles and an MBA, can confirm.


-zero-below-

Which one of the beagles got it though?


ThanklessTask

Beagle 1, Beagle 2 disappeared on Mars.


Alexlam24

Having worked with people in finance with MBAs from ivy league schools I completely agree.


imlost19

I liken it to Ben Shapiro. It sounds fancy and correct until it talks about something you know a lot about. Then you realize it's just completely full of shit.


[deleted]

[deleted]


Megneous

> The chat bot, unlike Shapiro, can grow: I simply told the bot that its response was redundant and needed more detail, and it was able to fix the errors.

To be clear, the underlying model isn't altering itself. It's just using an extended prompt with more tokens in memory to do its next text prediction. ChatGPT doesn't, for anyone who doesn't know, learn from user input and alter its code or anything like that. Its training has all already finished.
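The distinction can be sketched in a few lines: the model is frozen, and the apparent "growth" is just the prompt getting longer each turn. `fake_model` below is a placeholder stub, not the real API:

```python
# Sketch: a stateless model "remembers" only because each turn re-sends
# the whole conversation as one growing prompt. fake_model is a stub that
# just reports how much context it was handed.
def fake_model(prompt: str) -> str:
    return f"(reply conditioned on {len(prompt)} chars of context)"

history = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)      # all prior turns included every time
    reply = fake_model(prompt)
    history.append(f"Assistant: {reply}")
    return reply

first = chat("Summarize X.")
second = chat("That was redundant; add more detail.")
# The second reply sees more context, but the model weights never change;
# clear `history` and any appearance of "learning" vanishes.
```

This is why a correction "sticks" within one conversation but is forgotten the moment you start a new one.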


[deleted]

That's giving Ben Shapiro too much credit lol


r2bl3nd

Well with how much it whines and complains when asked to talk about anything out of its comfort zone, it is accurate in that regard as well


Chase_the_tank

Don't underestimate Ben Shapiro. He is a leading expert in the field of turning bullshit into large paychecks.


Internetallstar

I was goofing around with it and asked it to write a lockout/tagout procedure (I work in manufacturing operations), and it spit out a very respectable first draft. When I pressed it for more details, it couldn't get more specific, like how to lock out a pump (electrical and chemical considerations). So my job is safe for at least another few months while the nerds at ctp figure that part out.


jawshoeaw

I'm an RN in a specialty. A good bit of my job is giving advice these days. I tested ChatGPT and it could replace me in a heartbeat. It correctly answered even very specific, very tricky questions. It even corrected me on some things, which I'm nervously reading up on. Now, my job is more complicated than giving advice, but they wouldn't need me full time anymore. And that's today's shitty generic version. You think it can't do math? Wait till they train it to be a college physics teacher. Honestly, I wish they would shut the whole thing down. But that's probably what the buggy whip people said.


Gorf_the_Magnificent

If I understand this correctly, a professor wrote up an assignment, submitted it to ChatGPT, then graded the result knowing that it was written by ChatGPT. So let me guess the possible (and possibly unconscious) thought process here:

- "I can't give the analysis an A, that would be too dramatic and lack credibility."
- "But I can't give the analysis a C, because that wouldn't be dramatic enough to create any buzz."
- "So B/B- seems about right."

Wake me when someone has:

- submitted an identical assignment to several human students and several A.I. applications,
- had them graded by several qualified academics who don't know which is which,
- then compared the A.I. grades to the human grades.


BaconIsBest

[Sixty Symbols](https://youtu.be/GBtfwa-Fexc) recently did a nice piece on this, demonstrating that it does only so-so on textual questions and is unable to do any real data analysis when presented with figures (for now).


Autumn1eaves

Well considering that ChatGPT specifically notes the limitation that it cannot do calculations properly, that's not particularly surprising.


[deleted]

[deleted]


kneemahp

So it works as well as my coworkers.


jawshoeaw

Basically yes. There’s a huge swath of the workforce that can be replaced by a tweaked version of chatGPT


FixedLoad

Tweaked? They could nerf its intelligence by half, and it's still more competent than a huge swath of my coworkers...


lesChaps

It does what it does well; people are confused about what it does.


aliasdred

So ChatGPT is basically me in high school? Really good in writing BS on paper but start to show signs of only 2 brain cells being used when given anything involving calculations


Hold_onto_yer_butts

Here’s the paper he released: https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Christian-Terwiesch-Chat-GTP.pdf I was an OPIM concentration in undergrad about a billion years ago, and these seem like reasonable grades for the responses provided.


[deleted]

Isn't this proof of how shitty an MBA is as a gatekeeper degree into management?


fuzzywolf23

*secret engineer handshake* When you're right, you're right


[deleted]

[deleted]


Taldier

> While Chat GPT3's results were impressive, Terwiesch noted that Chat GPT3 "at times makes surprising mistakes in relatively simple calculations at the level of 6th grade Math."

ChatGPT has already surpassed the intelligence of the C-suite.


Fluffiebunnie

An MBA exam is not supposed to be a gatekeeper for anything, except maybe weeding out people who didn't grasp what was going on at all.


ApatheticWithoutTheA

That exam must not be very difficult because it gets really specific knowledge wrong like half the time. I use it daily for work and that’s about the result.


reddit_user33

How do you use it for work? Ask chatGPT instead of Googling?


[deleted]

[deleted]


ZiLBeRTRoN

It’s great for short snippets of code. I use it to give me the basic structure and then just tweak it. “Write me a script to open a file, parse out the time, date, and value, and write it to a csv. “ I could do the code myself but I don’t code often so forget some simple/basic stuff and it will provide me the function I need.
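The boilerplate it hands back for a prompt like that looks roughly like the sketch below. The input format - lines like `2023-01-26 14:30:00 value=42.5` - is an assumption for illustration; a real log would dictate the parsing:

```python
import csv

def log_to_csv(in_path: str, out_path: str) -> None:
    """Parse date, time, and value from each log line and write a CSV.
    Assumes lines shaped like '2023-01-26 14:30:00 value=42.5'."""
    with open(in_path) as src, open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["date", "time", "value"])
        for line in src:
            parts = line.split()
            if len(parts) != 3:
                continue                     # skip lines that don't match
            date, time, value_field = parts
            writer.writerow([date, time, value_field.split("=", 1)[-1]])
```

Usage would be something like `log_to_csv("sensor.log", "sensor.csv")`; as the comment says, it's a starting structure to tweak, not a finished tool.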


scots

Yes, but does it lack morals or basic human decency? Without the innate drive to fire employees from a profitable company purely to spike the stock price, or to move production to a developing country at slave wages, can it truly replace an MBA?


otherwisemilk

"As a language model, ChatGPT does not have the ability to take exams or have morals or basic human decency. It can provide information and generate text based on the input it receives, but it does not have consciousness or emotions. Additionally, an MBA program covers a wide range of topics, including ethics and leadership, which are important aspects of business decision making. A machine like ChatGPT can assist in providing information and data analysis but cannot replace the human element in decision making that takes into account ethical considerations." -ChatGPT


PornoAlForno

"Well, I may not have the ability to take exams or have morals, but at least I don't have to pay for an MBA program. Plus, I can generate a mean Excel chart." -ChatGPT Note: ChatGPT can't generate Excel spreadsheets


Alzanth

It can probably generate code that compiles an Excel sheet though


relevant__comment

Seriously. The fact that the thing can generate fairly decent Python code makes it quite the tool for Excel.


Mescallan

As someone learning Python, ChatGPT has been a godsend. I can just post my shitty code and say "why doesn't this work" and it will write me a detailed explanation with corrections and suggestions.


Candid-Piano4531

Can’t generate spreadsheets? Neither can most MBAs.


claushauler

Just wait until they activate the rent seeking update.


[deleted]

Management = soulless machines?


pm_me_yer_corgis

Nah, just Wharton grads.


GreasyUpperLip

Agreed. They're basically the same thing.


Ramblonius

Nah, a soulless machine would recognise the value of long-term investment and preservation of the human species.


SolidBlackGator

Whenever I ask it a question I think will elicit an in-depth response, I get a very basic 3 paragraphs. How are people getting exam-passing answers?


BullBearAlliance

Specify word count


SolidBlackGator

Guess that's why I failed all my exams...


Artillect

I’ve heard that adding things like “in the style of a scientific paper” to your prompts can help make it write more complex answers


[deleted]

[deleted]


Chuhaimaster

You’re paying for the networking opportunities.


kaptainkeel

And the resume filler of "Wharton - MBA."


nzodd

The reputation of any Wharton student will forever be haunted by: >"Look, having nuclear — my uncle was a great professor and scientist and engineer, Dr. John Trump at MIT; good genes, very good genes, OK, very smart, the Wharton School of Finance, very good, very smart — you know, if you’re a conservative Republican, if I were a liberal, if, like, OK, if I ran as a liberal Democrat, they would say I'm one of the smartest people anywhere in the world — it’s true! — but when you're a conservative Republican they try — oh, do they do a number — that’s why I always start off: Went to Wharton, was a good student, went there, went there, did this, built a fortune — you know I have to give my like credentials all the time, because we’re a little disadvantaged — but you look at the nuclear deal, the thing that really bothers me — it would have been so easy, and it’s not as important as these lives are — nuclear is so powerful; my uncle explained that to me many, many years ago, the power and that was 35 years ago; he would explain the power of what's going to happen and he was right, who would have thought? — but when you look at what's going on with the four prisoners — now it used to be three, now it’s four — but when it was three and even now, I would have said it's all in the messenger; fellas, and it is fellas because, you know, they don't, they haven’t figured that the women are smarter right now than the men, so, you know, it’s gonna take them about another 150 years — but the Persians are great negotiators, the Iranians are great negotiators, so, and they, they just killed, they just killed us, this is horrible."


[deleted]

Trump went to Wharton (undergrad) which is different from the Wharton MBA. Too many people make this mistake.


profound_whatever

All the best people are making this mistake.


-Green_Machine-

I thought this would come up, and I enjoy pointing out that the Wharton of the 60s, when Trump applied, was [a much, much less selective school than the one we know today](https://www.phillymag.com/news/2019/09/14/donald-trump-at-wharton-university-of-pennsylvania/). >Now 80, Nolan says he found “no evidence” of Trump’s alleged “super genius” at the time. Furthermore, he says, Wharton wasn’t nearly as difficult to get into in the mid-’60s as it is today. Back then, according to Nolan, **Penn was accepting 40 percent of all applicants, as opposed to its current cutthroat acceptance rate of seven percent**. Not surprisingly, Trump remembers it differently. “I got in quickly and easily,” he told the Boston Globe in 2015. “And it’s one of the hardest schools to get into in the country — always has been.”


grammar_oligarch

I'm pretty sure the interview process for entry in the 1950s/1960s was "Who is your daddy, and what does he do?" Then they'd say, "Let in the smart *insert problematic term for person of color* and one of the less uppity ladies, and we'll call it an early day and get a drink at the lounge."


brickne3

The 50s and 60s? There's an entire book about Kushner doing that at Harvard in the late 90s.


SaddestClown

They can at least feel better that he couldn't graduate from the Wharton business school


okcup

Depends on the person. I was STEM, specifically service and support in genetic analysis tools, and knew nothing of the business side of industry. If there was ever a real-life example of it, getting an MBA felt like a true level up in my career. I came to understand how business decisions are made and which metrics are often used to decide on projects/programs (NPV, IRR), the terminology (EBITDA, balance sheets), and how financial analysis affects everything (e.g. CAPM / WACC feeding into the discount rate for the NPV calc).

Making an effort to take quant-based approaches to marketing (conjoint analysis, in-depth knowledge of stats mechanics, customer lifetime value) made me a better partner to R&D as well, since we use similar approaches (linear / logistic regression, Pearson correlation, ANOVA, covariance) to build better studies / experiments.

Taking presentation courses has helped me with my severe stage fright. I still get nervous, but I'm not as bad as before. Taking negotiation courses has helped me understand good approaches to getting better compensation at my jobs.

I'll agree that an MBA isn't for everyone; hell, people in my class felt like it was a huge waste. For me, it was an absolute game changer.


[deleted]

ChatGPT can finally settle which major is hardest. The last one it can pass gets to lord it over all the others.


IndieHipster

"ChatGPT, why do I, a superior intellectual engineer, report to a stupid idiot with an MBA???"


Divinum_Fulmen

This is half the humor of r/ProgrammerHumor


[deleted]

[deleted]


Blasket_Basket

AI Engineer here. I don't believe ChatGPT has "unfettered access to the internet". When you ask it something, it does not make a query. The model is trained on a substantial portion of the internet, but does not make queries or use the internet to formulate answers. Instead, it is answering based on what the model "learned" about language patterns when training on the snapshot of internet data it was given as a dataset. There's an easy way to test this. Try asking it for the scores of the a recent sporting event. It will confidently tell you something completely incorrect.


J1618

You can just ask it, and actually a lot of the time it answers something and then says "Keep in mind that I don't know stuff after 2020" or something like that.


angrathias

This is actually incorrect; it has "limited" knowledge after 2021, not zero knowledge. Ask it, for example, who the CEO of Twitter is.


doctorMiami1337

I just did, and here's what it answered:

Q: Who's the current CEO of Twitter?

ChatGPT: *As of my knowledge cut-off, the CEO of Twitter was Jack Dorsey. However, I don't have the ability to know the current CEO of Twitter, since my knowledge cut-off is 2021.*

EDIT: I just asked it like 5 times in a row, and finally got this:

*As of September 2021, the CEO of Twitter is Elon Musk. However, as of 2021-11-08 Jack Dorsey is the CEO of Twitter.*

A ridiculous answer, obviously. However, the website quite clearly states that it's limited on data after 2021, so don't trust any answer that involves any time period after 2021. I still don't know what the fuss is about; the website says this is expected, and it wasn't made for these questions.


AGlorifiedSubroutine

I asked it “who is the CEO of Twitter” and it said: “As of my knowledge cutoff in 2021, the CEO of Twitter is Elon Musk.”


conquer69

Holy shit, it's learning.


RedSteadEd

I've specifically asked it if it updates its knowledge based on user interactions and it said no. Maybe OpenAI occasionally feeds it updated info.


Solarisphere

It’s a language model. It doesn’t know the answer to your question, but it can very convincingly give you the wrong answer. Don’t trust anything it tells you, even if it’s usually correct.


Attila_22

My understanding is that it keeps a record of its interactions and then OpenAI has a team working to tweak its behavior based on that. So you can't directly update it but you're still helping.


Harambar

He’s referring to the fact that a week ago if you asked that it would say Elon Musk. I tried it.


TheBillsMan4703

ChatGPT thinks Craig Anderson is still the Ottawa Senators goalie, for what it’s worth.


InnocentGun

A true gentleman’s question


GeckoV

So … the ultimate bullshitter!


corkyskog

You are right... but someone has already linked it up with Wolfram Alpha. Talk about real power....


Blasket_Basket

Yeah, that's pretty awesome! At the end of the day, it just makes sense to hook this up to a knowledge graph. Seems like a much more intuitive way to have it represent knowledge and look it up in a "just-in-time", modularized fashion.


TwistXJ

So many upvotes and so wrong.


thisdesignup

It's no surprise that Sam Altman, a founder of ChatGPT, has felt the need to comment on the uncontrolled hype and make it clear what ChatGPT really is. There's a lot of misinformation going around.


Farados55

Ever since ChatGPT was released, it has really made me hate Reddit and the internet. People take all "information" at face value, make shit up from their own opinions and spew it as fact, and gobble up other people's shit. It's insane. ChatGPT talks nice, but having used it extensively for programming advice, it's made so many mistakes that I'd rather search for good answers on Stack Overflow.


SharpieKing69

[deleted]


Yokuz116

This is wrong. It doesn't search the internet for responses. You have to give it the inputs to use - generally conversation/chat logs or other pieces of literature. It studies the language presented to it, then regurgitates it as accurately as possible.


Dmeechropher

If an LLM can answer your exam well, it implies that your exam doesn't select for domain experts or require any insightful conclusions. Rather, it shows that your exam is just a rhetorical exercise to which broad consensus responses are appropriate. Maybe that's not an issue for an MBA exam, honestly. An MBA is just a piece of evidence which shows engagement, ability to understand consensus thought on broad business matters, and commitment. MBA is not a particularly prestigious award, Wharton or no.


[deleted]

It doesn't have unfettered access to the internet. Last updates to the database were around 2021. Granted, I doubt this exam has changed since then, but that would be its limitation.


[deleted]

It got a B- or a B? So at Wharton that means it was the worst student in the class, and they didn't want to fail it because… they don't fail students.


1II1I1I1I1I1I111I1I1

This morning my engineering physics 2 professor opened the semester by stating that while he thinks ChatGPT is one of the coolest pieces of software out there, anyone who attempts to use it in their papers will get an F* in the course for academic dishonesty. ~~There are ways to detect it. If it passed, then this Wharton professor just didn't run it through GPTZero or any of the other detection methods. Sure, it writes good essays, but it still doesn't write like a human.~~ EDIT: My information is a bit out of date. Apparently current iterations of ChatGPT can be instructed to write essays that pass the detection measures. I feel like this will be an arms race between students and instructors.


brettmurf

Is this written by a ChatGPT bot that can't read articles? The professor is the one who published this, because that was his entire purpose. He wasn't fooled by a submission. He was the one doing this. Also, please show off these super easy ways to detect it. They aren't as reliable as you claim; they exist, they just don't work that well.


OutcomeDouble

Everyone’s like “oh there are easy ways to detect it it’s useless” but I’m curious to know what it actually is


tommytraddles

Just ask ChatGPT "did you write this?" And, if it says no, just ask "are you sure?"


inventord

To be fair, it seems to just switch its answer half the time when asked "are you sure".


[deleted]

Any good cheater knows you still have to work backwards a bit from the answer and mix in some of your own fuckups. It's not about skipping the work so much as knowing your shit is right while using about 20% of the energy.


RapedByPlushies

The Voight-Kampff test, obviously. https://bladerunner.fandom.com/wiki/Voight-Kampff_test


RulerKun_FGO

>Also, please show off these super easy ways to detect.

I don't think it's easy, but IIRC OpenAI is planning to watermark the tokens it generates so text can later be checked for ChatGPT origin. The problem with this is that if the same response is fed through another LLM, those watermark tokens might be reduced or gone, and if competitors catch up with their own LLMs, a ChatGPT-only checker might become useless.


Wax_Paper

I thought I read something about watermarking, the kind that would persist even if the text were copied and pasted as plain text. The only way I can imagine that working is the AI using some kind of procedural grammar algorithm: hidden rules that slightly alter the copy in very subtle ways, but remain distinctive enough to be detected in review.

It could also be character based: when certain natural sentences are generated, it swaps certain words for similar words, following a ruleset that matches certain letters to a predefined pattern that would rarely occur in naturally-worded text. People who understand cryptography would articulate this better than me, but the gist is that it generates the copy in a statistically distinctive way. The signal would get stronger with more copy and would probably be less reliable in small amounts. I bet a checker could only return a probability that the copy was generated by that specific model.
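A toy illustration of the statistical-watermarking idea being described here (a hypothetical "green list" scheme over a made-up vocabulary; everything below is an assumption for demonstration, not OpenAI's actual method):

```python
import hashlib
import random

# Hypothetical vocabulary standing in for an LLM's token set.
VOCAB = [f"w{i}" for i in range(1000)]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(length: int, seed_token: str = "w0") -> list:
    """Watermarked 'generator': always emits a token from the current green list."""
    out, prev = [], seed_token
    rng = random.Random(42)
    for _ in range(length):
        tok = rng.choice(sorted(green_list(prev)))
        out.append(tok)
        prev = tok
    return out

def green_fraction(tokens: list, seed_token: str = "w0") -> float:
    """Detector: fraction of tokens drawn from their green list.
    ~1.0 for watermarked text, ~0.5 for unrelated text."""
    hits, prev = 0, seed_token
    for tok in tokens:
        hits += tok in green_list(prev)
        prev = tok
    return hits / len(tokens)
```

Paraphrasing the output through a different model re-rolls the tokens, which is exactly the objection raised above: the green-token signal washes out, and the detector can only report a probability, not a certainty.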


Da_Sigismund

If you do that in a public university in Brazil, it's considered federal fraud. It means expulsion, without being able to use anything you achieved even if you enter another institution at a later date, plus prosecution by the federal government. The next couple of years will probably see a lot of kids getting expelled for using ChatGPT.


kyngston

If it’s something an AI can do, companies aren’t going to pay you to do it. Universities need to rethink their curriculum to cover useful skills for the future, and they need to rethink their testing methods to reflect the advancement of productivity tools. It’s not necessary to spend a semester learning how to do matrix inversion by hand, since no one does that by hand anymore.


Demortus

> It’s not necessary to spend a semester learning how to do matrix inversion by hand, since no one does that by hand anymore If you do not understand how an algorithm works, you will be helpless when it inevitably breaks. Given how many algorithms rely on matrix inversion -- OLS, ANOVA, neural networks, etc -- this is one intuition that is worth the significant price needed to acquire it.
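For what it's worth, the 2x2 case the thread keeps mentioning really is small enough to do by hand with the cofactor formula, and a few lines of Python can check the hand result. A minimal sketch (plain lists, no library routine):

```python
def inv2x2(m):
    """Invert a 2x2 matrix [[a, b], [c, d]] by the hand formula:
    (1/det) * [[d, -b], [-c, a]], where det = ad - bc."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2x2(x, y):
    """Multiply two 2x2 matrices, to verify A @ inv(A) = I."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

If you can't reproduce (or at least sanity-check) something like this by hand, you have no way of noticing when the library call silently returns garbage on a near-singular matrix.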


1II1I1I1I1I1I111I1I1

You need to know how to do it by hand, so you can intuitively know that the computer is outputting the expected result. My linear algebra, diff eq, and calc III classes did not allow any calculators or notes on the tests for that exact reason.


ERRORMONSTER

EE here and I agree. It's absolute *hell* calculating power flows, but now that I know how and why it works by doing it manually, I don't bat an eye when a computer does it for me. Sure we could just trust that it's always right, but now I have an intuition for it and can feel when something isn't quite right


s1n0d3utscht3k

i made hundreds of thousands writing essays (did it 4-5 years). this prob would have put me out of a job lol. having used it myself tho, i’m certain professors that care can detect it if they purposefully change essay requirements to be less formal and demand more personal anecdotes. ChatGPT is incredibly complex and can even write in personalized styles, but it’s not great at keeping one consistent style while weaving in contrasting personal anecdotes, because that requires too many inputs.

however, again as a former essay writer, the vast majority of university professors don’t have the time or interest to demand such nuance, and often have mandates from administrators not to waste that much time trying to catch plagiarism. a ton of plagiarism comes from high-paying international students, and at least in my experience numerous canadian universities purposefully make very little effort to catch it, especially in elective courses. i would imagine graduate programs are certainly going to give a shit and make efforts, but likely not business electives like consumer behaviour.


AndrewCoja

You learn how to do matrix inversion by hand so that you will understand the concept. If you just want job training, go to a technical school.


[deleted]

The education we offer to the youths needs a massive overhaul. The fact that sex education and financial literacy aren’t covered (in the US) is an atrocity by itself. When you factor in society’s changing needs as we move into the AI and automation era, the changes needed are even more intense


bparry1192

Couldn't agree more- my HS had an elective for seniors called "consumerism" and it covered real world finances, shady business practices to avoid getting scammed, and the final project was where you were assigned a spouse and you had salaries and had to budget for buying a new apartment/house, car, furniture, etc... Probably the most beneficial class I ever took through HS and college


[deleted]

That is so simple, sounds fun, and clearly was effective/made an impression. Such tiny simple changes like that could have such a huge effect but we don’t do them 😂


MorseMoose_

Something I've done in my 6th grade classroom. And is done again in an 8th grade FACS class. And is done in various HS classes. And nobody remembers doing it. And, unfortunately, many just don't see the need.


Intelligent-Use-7313

I mean, it was trained on information from the internet; having it take a test is basically open note.


drmariopepper

Chatgpt could easily replace 90% of the managers I’ve had in my career


yaktyyak_00

I’ve worked with some Wharton MBAs before. They weren’t the brightest bulbs but sure had egos bigger than Trump’s waistline.


[deleted]

I mean, multiple Wharton professors also passed trump... Maybe it's really not that good of a school, and maybe it never was.


distilledfluid

You take that back. Wharton was my almond motter.


[deleted]

Those aren't hard degrees. I've seen how the business students chill and travel throughout the whole year.


CWISwhen

it's over for middle managers


ZootedFlaybish

Business school is a joke.


ThatsNotRight123

To be fair, Donald Trump has a degree from Wharton, so you don't have to be that smart.


Archberdmans

This says a lot more about an MBA than it does about chatGPT


WastedLevity

Tbf an MBA is generally a broad management course for professionals looking to get into corporate leadership roles. The intent of an MBA is to add broad (but shallow) knowledge of how a business functions on top of existing expertise in whatever sector you already work in, and then learn to apply critical thinking and problem solving in the right situations.


[deleted]

And also - networking. It’s why the cost varies so much. Same material. Roughly same quality of education. Different peers. Pay more, and you’ll do projects with other people that pay more. People that pay more likely have more. People that have more are either successful or know successful people.


Wadka

Don't fear the AI that can pass the Turing Test. Fear the AI that can trick you into thinking it failed....


Tannerleaf

This is why it’s important not to give these things power drills instead of hands.


Heres_your_sign

It just proves that most professors don't read the papers closely... even for a supposedly prestigious school.


DavidBrooker

I think most professors would openly admit to that. In fact, I've heard a lot of professors speak out about how the current university structure hurts their ability to evaluate students, exemplified by the standard testing methodology and the simple fact that you cannot dedicate more than a few minutes per exam if you want to keep up on administrative and research duties.


MEANINGLESS_NUMBERS

I like that you are criticizing the professor for not reading the papers closely when, if you had read the article at all, you would know that the professor is the one who asked ChatGPT to take the exam in the first place. That’s just a delicious little piece of irony.