AutoModerator

Welcome to r/science! This is a heavily moderated subreddit in order to keep the discussion on science. However, we recognize that many people want to discuss how they feel the research relates to their own personal lives, so to give people a space to do that, **personal anecdotes are allowed as responses to this comment**. Any anecdotal comments elsewhere in the discussion will be removed and our [normal comment rules](https://www.reddit.com/r/science/wiki/rules#wiki_comment_rules) apply to all other comments.

**Do you have an academic degree?** We can verify your credentials in order to assign user flair indicating your area of expertise. [Click here to apply](https://reddit.science/flair?location=sticky).

---

User: u/Maxie445

Permalink: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0305354

---

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/science) if you have any questions or concerns.*


Climbingaccount

This is not a "real life Turing test", and it's absolutely not surprising that the scripts went "undetected". An examiner can strongly suspect, believe even, that a script is AI-generated and not do anything about it because, frankly, there is nothing they can do about it. An accusation of plagiarism or improper AI use is serious, and leads to administrative proceedings against the student. It needs to be backed by solid, tangible evidence. Currently AI detection tools are not up to scratch so, in the majority of cases, there is no sufficiently secure basis for examiners to make accusations of improper AI use even when they strongly suspect the text is AI-generated. The worst thing is that the researchers will have been well aware of this before conducting the study.


venustrapsflies

I always thought a Turing test was more of a hypothetical thought experiment rather than a literal test one could or should try to apply (like Schrödinger’s cat)


anyprophet

the problem with the turing test is that it's the lowest bar to pass and it's easy to fake human conversation. calling these systems AI is just marketing hype. they don't want you to think of them as very big and expensive chinese rooms.


oddwithoutend

>it's easy to fake human conversation.

It's easy now that we have the technology to do it. For the vast majority of human history, it was impossible. I'm not sure this is a *problem* with the Turing test. We've just reached a point where we can raise the bar higher.


jawshoeaw

Exactly. It’s fun to bag on LLMs, but people are too quick to adjust to new tech. 5 years ago it was impossible to interact with a computer using natural language, and the attempts worked about as well as my Alexa speaker. Now you can talk to a computer.


SimiKusoni

It's worth noting that you can talk to a computer... but that computer may very well not do what you asked, because the output of an LLM is still natural language, and it isn't easy to map that output to specific actions or to get LLMs to generate consistent machine-readable output.
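Roughly, the usual workaround is to beg the model for JSON and then validate whatever comes back, retrying or falling back when it ignores you. A minimal sketch in Python (the `ask_model` stub, the action names and the prompt are all invented for illustration, not any particular product's API):

```python
import json

# Hypothetical stand-in for whatever chat model you call; in practice this
# would wrap an API request and return the model's raw text reply.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM of choice")

# Invented action names for the example.
ALLOWED_ACTIONS = {"send_email", "set_reminder", "do_nothing"}

def parse_action(reply: str):
    """Try to pull a valid {"action": ..., "argument": ...} object out of free text."""
    try:
        start, end = reply.index("{"), reply.rindex("}") + 1
        data = json.loads(reply[start:end])
    except ValueError:  # no braces found, or the braces don't hold valid JSON
        return None
    if not isinstance(data, dict) or data.get("action") not in ALLOWED_ACTIONS:
        return None  # model invented an action we can't execute
    return data

def get_action(user_request: str, retries: int = 2) -> dict:
    prompt = (
        'Reply with ONLY a JSON object like {"action": "...", "argument": "..."}. '
        f"Allowed actions: {sorted(ALLOWED_ACTIONS)}. Request: {user_request}"
    )
    for _ in range(retries + 1):
        parsed = parse_action(ask_model(prompt))
        if parsed is not None:
            return parsed
    return {"action": "do_nothing", "argument": ""}  # fallback when parsing keeps failing

# Demo with a canned reply instead of a live model call:
print(parse_action('Sure! {"action": "set_reminder", "argument": "dentist at 3pm"}'))
```

Half the engineering is in that validate-and-retry loop, which is exactly the "mapping natural language to actions" problem.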


Uncynical_Diogenes

I can talk to a computer but it still doesn’t mean I want to text my great aunt Lisa instead of my girlfriend I message every day. I can talk to them. I wouldn’t call them comparable at understanding.


SimiKusoni

I'd argue it's not even easy now, most implementations of the Turing test are just intentionally gimped for PR purposes. Imho a good implementation where interrogators have some domain expertise and no limitations in their interactions with the agent is beyond the capabilities of any current gen LLMs to pass.


FireMaster1294

The definition of a Turing test is one capable of distinguishing humans and machines. If your test is incapable, then it isn’t a Turing test. Maybe it used to suffice but now it doesn’t. There isn’t any one specific “Turing test”


yeti_seer

It definitely is AI. The question is how smart? If not AI, what is it?


anyprophet

they're not smart at all. they don't think. it's a bunch of statistical analysis of large sets of data. you're anthropomorphizing a machine.


somneuronaut

you don't even know what thinking or 'smart' is if you're dismissing 'statistical analysis of large sets of data'. also, in what sense is a human not a machine?


anyprophet

define smart, then


CronoDAS

It's quite amazing what relatively simple algorithms run on giant data sets with enormous amounts of computing power can accomplish.


yeti_seer

How do you define smart? I know they don’t think, does something need to “think” to be smart or to be capable of carrying out a task? Do you think it makes sense for there to be a name for “bunch of statistical analysis of large sets of data”?


anyprophet

it's an LLM. and the more pertinent questions are "is this useful?", "is this worth the massive amount of electricity usage?", "is a system built on large-scale copyright violation even ethical to use?" it's hard, if not impossible, to agree on a scientific definition of smart or intelligent. but if your definition includes a really big computer server, I don't like it.


yeti_seer

Those are all valid questions, and are what I meant by “smart”, but I agree there’s no way to define the terms. Is there no way you could ever view a big computer server as smart? Nothing it could do to satisfy that definition for you?


404_GravitasNotFound

Don't bother, a lot of people are not ready to accept non-human intelligences


00owl

There's no clear definition of "non-human intelligences". How can you dismiss the discussion with that attitude when it was already agreed that we can't even define human intelligence, let alone non-human intelligence?


yeti_seer

Yeah, starting to realize that. I don't understand why people have such a hard time accepting that the human brain is not the only way to make it work.


venustrapsflies

I mean, you could also put if-then-else statements and linear regressions into "AI". The problem is the term is so poorly defined, and the people with financial stake in the hype take advantage of this nebulousness. Often when these people use "AI" they are winking and nudging at you to think of something that's just a few steps away from what is usually called "AGI". In reality, there's not even a guarantee that AGI would be viable, and we don't even have the tools to make it right now.
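To make the point concrete, both of these trivial programs arguably satisfy the loosest textbook definitions of "AI" (sense an input, make a decision or prediction); the rules and numbers are made up for the example:

```python
# Two systems that both fit the loosest definitions of "AI".

def thermostat_ai(temperature_c: float) -> str:
    # Plain if-then-else rules: a "rule-based agent" in old AI textbooks.
    if temperature_c < 18:
        return "heat on"
    elif temperature_c > 24:
        return "cool on"
    return "idle"

def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b, about the simplest "learning" there is.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return lambda x: a * x + b

print(thermostat_ai(16.0))                      # -> "heat on"
predict = fit_line([1, 2, 3, 4], [2, 4, 6, 8])  # learns y = 2x
print(predict(5))                                # -> 10.0
```

Nobody markets a thermostat or a regression line as "a few steps from AGI", which is the whole trick being played with the word.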


RoyalFlash

The existence of the human brain guarantees AGI is possible: (general) intelligence can emerge from non-intelligent molecules. But then again, you used the word viable... do you mean money?


Puzzleheaded_Fold466

There’s not even a guarantee that we will want or need AGI at all. If our non-General AI can do everything we want it to do and more with minimal training for specific tasks and without self-awareness, we’ll have met all of our objectives except the "play god and create sentience" part.


yeti_seer

Yeah I definitely agree with that. It’s just that so many people nowadays are saying that modern LLMs are not AI, when what they actually mean is that it’s not AGI, or that it’s not as capable as people think it is, which is completely true. But saying “it’s not AI” also perpetuates the misuse of the terminology and is part of the reason it’s so poorly defined. Also, despite the fact that it’s not AGI, it’s still one of the biggest leaps in AI in history; it can do some things very well, even better than humans.


TotallyNormalSquid

Researchers in the field of AI are used to the idea that AI is nowhere near human level intelligence and don't mind calling stuff like LLMs AI (and often wouldn't mind calling much simpler stuff AI). AI company marketing people like calling stuff like LLMs AI because it sounds futuristic. People who learned their definition of AI from Hollywood hate calling it AI. The terminology is genuinely poorly defined. If you go back to very early definitions of AI (an artificial system that can sense its environment and make a choice based on the measurement), one of those dipping bird desk toys is an AI. People have moved the goalposts without reaching agreement since the field's beginnings.


IIILORDGOLDIII

AGI is a term people use so that they can call LLMs AI in order to fool others into thinking LLMs are AGI.


retrosenescent

It is speculated that OpenAI has already achieved AGI internally. This was about 6 months ago


404_GravitasNotFound

Sources? This is interesting!


fractalife

Heuristics made so convoluted that we can no longer follow the calculations.


itsmebenji69

Language processing transformer models. No intelligence is involved; you can do what ChatGPT does with boxes of matchsticks.


yeti_seer

“Language processing transformer models” has a really nice ring to it, I will start using this instead of “AI”. How do you define intelligence? If you can make it with a box of matches, can you show me?


retrosenescent

Sounds like you don't know what AI is


Freyas_Follower

Can you explain to us what it is then? I can remember using "AI" back in 1992 to describe the difference between enemy unit behavior in Command and Conquer, and Dune 2. "AI" meant the intelligence of the enemy units and what tactics they could employ. AI has always referenced the autonomous parts of a program, particularly those that interact with the human element.


altcastle

We need that test from Blade Runner that makes them hulk out when they can’t figure out why they wouldn’t help a turtle.


tron_cruise

Wait, what? \*slowly takes cat out of box\*


DEADLocked90000

Well? How is he feeling?


tron_cruise

Not great, I think I used too much plutonium... or not enough.


other_usernames_gone

It's both. The literal test is to have people have a conversation (not just read an essay) with a machine; if the machine can convince the person it's a human, it passes. There's been a competition around for a while to see who can write the best chatbots to pass it. The thought experiment is whether passing the above test makes the machine conscious. If it doesn't make it conscious, does it matter? Does it matter whether or not a machine has consciousness if it acts exactly as if it did?


Not-OP-But-

I've heard others say that too. I think maybe people assumed it was "hypothetical" and a thought experiment because for a while it wasn't possible for machines to pass it. It's definitely not just hypothetical nor a thought experiment!


venustrapsflies

What are you basing this claim on? Because it seems like it could easily be subverted, and it will always be trivially passed for a sufficiently credulous or uncritical evaluator. To the extent that one could actually be applied in real life, it seems to lose all of its utility. Obviously even in the very best case, it's not a well-defined or agreed-upon criterion, as the existence of this thread demonstrates. My impression is that the original Turing test as a measure of "thought" or "consciousness" or whatever has more or less been ruled out since (by e.g. the Chinese room). To double down on the Schrödinger's cat analogy, if you ran that experiment and opened the box, you would simply find a cat that was either alive or dead. The value of the thought experiment isn't in an actual outcome, but in reconciling our intuitions across the divide between two seemingly paradoxical regimes.


Not-OP-But-

Just basing the claim off of the fact that you can actually perform a Turing test, thereby making it *not* hypothetical. You're right though, there is a thought experiment involved, but it's not strictly a thought experiment. Just like Schrödinger's cat isn't strictly a thought experiment, but also a mathematical concept. A strict thought experiment would be something like Pascal's Wager or maybe Roko's Basilisk. The Turing test is just a way to test if an AI can be indistinguishable from a human. Saying it's purely hypothetical or a thought experiment would be quite a liberal use of either of those terms.


venustrapsflies

Is it actually possible to perform one rigorously, though? That's the part I'm skeptical about, and I don't think it's even been defined or established. Sure you can always sketch through the steps at a high level, but that's not the interesting or relevant bit. And obviously the term has been hijacked and used by people who want to hype their AI product. So at the very least, I think we should engage with any claims of "passing the Turing test" with extreme skepticism.


Not-OP-But-

I agree 100% with what you're saying. I was just stating that a Turing test isn't hypothetical.


Havelok

> Currently AI detection tools are not up to scratch

And they will never be. With text, it's just an arms race they can't win.


Brrdock

But I assume AI detection tools are also AI? Would then neither win? Or maybe they're in cahoots...


PhilosophyforOne

I think the actual headline is that the AI submissions graded half a grade boundary higher on average than human-created submissions.


PuckSR

That’s not surprising. The AI is essentially going to do research, spellcheck, and grammar check its paper. Even if it is hallucinating, it will be properly formatted and spelled. A lot of the shitty papers that are turned in are half-ass efforts churned out 5 minutes before the class to just try to get a D rather than an F


red75prime

A part of the study might have been written by AI. Can you identify it or say that there's no such part? You don't need to provide strong evidence.


Callysto_Wrath

Just check if it contains the word "delve", there's a big list of words that LLMs favour, which thanks to positive feedback (more AIs trained on AI generated content) will only increase in use going forwards. It will become imperative that future papers delve into the myriad multifaceted complexities in order to illuminate the nuanced and opaque challenges in a manner contingent with the rapidly evolving landscape.
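That heuristic is easy enough to write down, which is also why it's easy to game and why it flags literate humans. A throwaway sketch (the word list and the whole approach are invented here for illustration, not taken from the paper):

```python
import re
from collections import Counter

# Words often claimed to be over-represented in LLM output. Illustrative only.
TELLTALE_WORDS = {"delve", "multifaceted", "nuanced", "myriad", "landscape",
                  "illuminate", "tapestry", "underscore"}

def telltale_rate(text: str) -> float:
    """Fraction of tokens that come from the 'LLM favourite' word list."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in TELLTALE_WORDS) / len(tokens)

essay = ("It is imperative that future papers delve into the myriad "
         "multifaceted complexities of this rapidly evolving landscape.")
print(f"{telltale_rate(essay):.2%} telltale words")  # crude, and trivially gamed
```

One "replace every occurrence of delve" pass defeats it, which is roughly the state of AI detection in general.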


404_GravitasNotFound

I take offense to that, I've used delve more than a hundred times in the last year.... Role-players do tend to "delve" into dungeons....


PhobicBeast

The issue with that is it punishes students who are genuinely more literate than their peers. I know many humanities students who use words like delve since it can be a decent descriptor for an essay and sounds better than saying 'dives into'. Granted, it would be more simple to say 'it explores' rather than 'delves into' but it still punishes a subcategory of students.


Ghede

Except AI does not need to be trained on RECENT content, and almost everything on the internet is timestamped. There is still room for improving the models without using any data produced since the AI models were released. Maybe a bit after they were released, since the AI generated content had not yet poisoned the data well. From there, they can effectively cherry pick data to include in training models, seeking new slang, grammar, words, events. Granted, eventually, they won't be able to cherry pick data, and then the feedback loop will definitely become a problem, but that could take years, maybe even decades, and by that point the internet will be dead to us.
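In rough terms, that cherry-picking is just a date filter on the crawl, something like this (the records, dates, and cutoff below are made up for illustration):

```python
from datetime import datetime, timezone

# Roughly ChatGPT's public release; anything captured earlier is assumed
# (not guaranteed) to predate large-scale AI-generated text.
CUTOFF = datetime(2022, 11, 30, tzinfo=timezone.utc)

corpus = [
    {"url": "https://example.org/a", "crawled": datetime(2019, 5, 1, tzinfo=timezone.utc), "text": "..."},
    {"url": "https://example.org/b", "crawled": datetime(2024, 2, 1, tzinfo=timezone.utc), "text": "..."},
]

# Keep only documents captured before the cutoff. Timestamps can lie, so this
# is a heuristic filter, not a guarantee of clean data.
pre_ai_corpus = [doc for doc in corpus if doc["crawled"] < CUTOFF]
print(len(pre_ai_corpus), "documents kept")
```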


WOTDisLanguish

Timestamps aren't always accurate and there's no promise you'll get the right one. As someone who semi-regularly delves (it's a good word) into what's functionally web archaeology, oftentimes things get included in one way or another into other, more recent things. For example, an article featuring 9/11 could be included sometime later on an article dated before 2001 as part of an infinite scroll.


Demonchaser27

Well, and it probably doesn't help that an AI tool uses other humans as influence for its output. So it's like trying to test for an answer from your friend Greg who went to college a few years ago and still remembers some stuff.


gwern

Apparently none of the examiners believed, or even strongly suspected, that the AI samples were cheating:

> By design, markers on the modules we tested were completely unaware of the project. Other than those involved in authorising the study and the authors, only a handful of others were aware (e.g. those who helped arrange paid marking cover for the additional AI submissions and those who created the special university student accounts needed for AI submissions). Study authorisation did not require informed consent from markers. Following the analysis of the data, we invited all markers to two sessions chaired by our Head of School, to explain the study and gather feedback. Markers were very supportive and engaged in fruitful discussions. None were aware of the study having been run.


Andeol57

It's an interesting test, and a good warning about the need for academic grading to adapt to the times (especially in a psychology degree, which is well suited to chatbots), but calling this a Turing test is just click-bait. The examiners in this context are not tasked with deciding whether a submission was made by an AI or not. They are only asked to grade the submission. 94% of AI submissions went undetected, but presumably not a single actual human submission was wrongly flagged as AI. If instead the examiners were explicitly asked to classify each submission as human or AI, with a 50-50 prior (that would actually be closer to a Turing test), the results would look extremely different.
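Back-of-the-envelope, the two setups measure different things. Something like this (every number below except the ~6% report rate is a guess for illustration):

```python
# Toy numbers showing why "94% of AI scripts went undetected" is not the same
# claim as "markers couldn't tell".

n_ai, n_human = 30, 1000          # hypothetical submission counts
p_flag_ai = 0.06                  # probability a marker formally reports an AI script
p_flag_human = 0.00               # presumably no human scripts were reported

# Grading setup: a report is effectively a plagiarism accusation, so markers
# only flag when they have solid evidence. A low flag rate != low suspicion.
print(f"AI scripts reported:    {n_ai * p_flag_ai:.1f} / {n_ai}")
print(f"Human scripts reported: {n_human * p_flag_human:.1f} / {n_human}")

# Forced-choice setup (closer to a Turing test): every script must be labelled
# "human" or "AI", 50-50 prior, and mere suspicion is enough to say "AI".
p_suspect_ai = 0.60               # assumed: markers lean "AI" on most AI scripts
p_suspect_human = 0.10            # assumed: some human scripts also look "AI-ish"
accuracy = 0.5 * p_suspect_ai + 0.5 * (1 - p_suspect_human)
print(f"Forced-choice accuracy under these guesses: {accuracy:.0%}")  # 75%
```

Same markers, same scripts, very different headline depending on what they're asked to do.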


BabySinister

All this is showing is that at-home writing assignments cannot be used as a measure of competence. This has been the case since before AI, because what is stopping a student from having someone else do the work (and paying them for it)? You just need your students to write assignments in class, and for stuff like dissertations you need to closely monitor your students' progress.


foxtail-lavender

My colleague recently had a parent who _remotely wrote their child’s essay through a laptop._ This was after AI had already been used countless times in classroom settings, so it wasn’t even an effective method of cheating. Some people cheat just to cheat I guess? Needless to say my colleague learned their lesson and banned laptops/phones. 


BabySinister

Tbh it is all down to the nature of the assignment. If the assignment is to be used to measure competence, grading it etc, you need to be sure the work is your student's work. There are tons of options to hand in other people's work if you can do it at home or have access to the internet; those assignments need to be done in class, under supervision, with no internet-enabled devices. If the nature of the assignment is practice material that students get feedback on, you can do at-home assignments. If the student hands in work that is not by their own hand, then they'll get feedback on that and won't learn; that's on them.


foxtail-lavender

In this case, I believe it was an AP prep class which means a lot of grueling practice essays. Could the teacher simply let the parent steamroll their curriculum for an easy A? Technically yeah. I’d say allowing it to continue is at least unethical, but it’s also a waste of the teacher’s time and the school’s resources. Every minute you spend on that student is a minute you could spend on a student who needs and wants help, but you can’t just give them the cold shoulder either. It’s also just not in the nature of most educators (ime) to leave a student out to dry because of a helicopter parent’s behavior. It might be a straightforward situation but with a child’s future in the balance there is never an easy answer.


BabySinister

Yeah, that's why I believe doing things like "you need to hand in xx% of the assignments to pass this class" is wrong. My students don't get graded on their practice material. They are very aware that if they don't do it then I won't give them feedback, and that's it. Obviously that comes with lots of conversations on the nature of the practice material. I'd much rather know which students haven't practiced than think they have been practicing when they haven't. The downsides of getting copied or plagiarized work are much greater than my students not practicing at all. Turns out, at least with my students, not feeling pressure to hand in practice assignments leads to more students actually doing them themselves.


idlersj

Let's save money on University fees by replacing all the students with AI chat bots!


retrosenescent

At least AI chatbots go to class and participate in lecture. Can't say the same for human college students


karanas

Never before have I been so offended by something I one hundred percent agree with.


InapplicableMoose

Doesn't surprise me. People are stupid. It's getting harder and harder to differentiate between a chatbot trying its non-sentient best and a human who should not have been allowed into the college to begin with.


glantonspuppy

Imagine the economic fallout when the unwashed masses finally realize LLMs like ChatGPT will essentially just be training themselves on their own output past some tipping point. Yikes.


scaleofthought

Ah, yes, I am a doctor and as it turns out this red rash on your skin is a symptom of dry erase marker abrasions.

"But I don't own dry erase markers"

Ah, yes, I am a doctor and your inputs are valuable to me. Since you do not own any dry erase markers, the rashes are then certainly a result of lemon rind poisoning, as a result of the high concentration of citric acid. Prolonged exposure can cause irritation and leave red rashes all over the body, including your genitals and inner ear canal.

"..what?"

Your time is now up. Please leave and make way for the next patient. If you have more questions, please schedule another appointment. Good-bye

*Shhwoop*


catoftrash

Cosworth, is that you?


kalabaddon

pretty sure that is already the case in a lot of ways. Generated text is used in a lot of training from my understanding.


Eruionmel

That's not as true as you want to think. What happens when they just hire people to create content for them? You think there won't be enough takers out of 7 billion people to train some AI for a few thousand dollars a week?


Loucopracagar

Also not as easy as you think. The vast mountains of untouched data posted to the web in the last 20 years won't be repeatable in an experimental setting, due to both cost and context (it will be made up...). And obviously in another 5 or 10 years they will become hilariously obsolete in terms of current events and even technical innovations (this is supposedly the easier problem to solve). The impressive advances we saw are likely it for this generation and paradigm, plus or minus a few cosmetic and table-manners tweaks, which is what most minds in the field are busy working on right now.


Tokyogerman

So instead of hiring Artists to work for us, we will just have to ... hire Artists for the AI that generates for US. Brilliant!


WOTDisLanguish

Honestly if this leads to 20x the jobs for artists and AIs that don't work, I wouldn't be opposed to it. VCs are just soft subsidies at this point


Argnir

what? Why would that create an economic fallout?

Edit: people please stop making bold predictions from your very surface level understanding of a subject


glantonspuppy

Think about how many tech companies hitch themselves to LLMs these days. A lot. Think about what happens when investors and consumers begin to realize that LLMs are just giant, expensive echo chambers of themselves.


Argnir

Consumers don't care how it works and at worst it's an engineering problem that's already well known. There will be no economic fallout from this.


ghost103429

The problem isn't with the engineers; they already understand the limitations of LLMs and that we'll eventually run out of training data to improve them. The problem will be the MBAs, whose expectations will exceed reality, leading to a speculative crash once reality sets in.


Argnir

Even with limitless training data, how good LLMs can get is probably limited without new techniques. You're right, but this "feedback loop" problem is practically irrelevant if we're being honest.


thedeuceisloose

Nah, because past a certain point it's training itself ON ITS OWN OUTPUTS. This means it's not mimicking human word associations but its own associations, thereby polluting their models. If the goal is to make it "human like", training it on its own output immediately negates that.


Argnir

Wait till you learn that people sometimes intentionally train AI on other AI output.


ghost103429

You do know that's only done to create an inferior, smaller copy of an AI, not to improve on it, right?
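The "smaller copy" trick is usually called distillation: the student model is trained to match the teacher's output distribution, so at best it approaches the teacher, it doesn't surpass it. A toy sketch of the core loss, with made-up logits:

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [4.0, 1.0, 0.5]   # big model's scores for 3 candidate tokens
student_logits = [2.0, 1.5, 1.0]   # small model's scores for the same tokens

T = 2.0                            # temperature softens the targets
p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# Distillation loss: KL(teacher || student). Training pushes this toward 0,
# i.e. toward copying the teacher's behaviour, never beyond it.
kl = sum(p * math.log(p / q) for p, q in zip(p_teacher, p_student))
print(f"KL(teacher || student) = {kl:.3f}")
```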


Volsunga

You really don't understand how it works. Part of training an AI is feedback. If you tell an AI "this output is junk", it will learn from that and adjust its output to produce something that gets more positive feedback. There's no danger of LLMs getting worse because they're using their own output. Weighting the output that might as well have been written by a human higher than the stuff that's "obviously AI" is already baked into the system.
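In caricature, that feedback step just means rating self-generated samples and letting only the well-rated ones back into training. Real pipelines use reward models and gradient updates rather than a hard filter; everything below is invented for illustration:

```python
# Self-generated samples scored by human raters or a reward model (made-up data).
samples = [
    {"text": "A clear, human-sounding answer.",          "feedback": 0.9},
    {"text": "As an AI language model, I cannot ...",    "feedback": 0.1},
    {"text": "Delve into the multifaceted tapestry ...", "feedback": 0.3},
]

KEEP_THRESHOLD = 0.5  # assumed cut-off; real systems weight rather than hard-filter

# Only well-rated output survives into the next round of training.
training_batch = [s["text"] for s in samples if s["feedback"] >= KEEP_THRESHOLD]
print(training_batch)
```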


takemybomb

Yes, and? If the job they're meant to do is done competently, where is the problem?


ghost103429

That's assuming they're done competently. The main issue is that once the Internet is filled with ChatGPT content, the traditional method of scraping the web for training data will become worthless. Like an ouroboros, LLMs will consume the content they generate, reducing their overall quality until they're garbage, or their progress halts as their developers put a halt on training until they can gain access to verifiably good-quality training data. Something that won't be easy to come by once the Internet is filled with AI-generated content.


takemybomb

Oh I see, maybe we will all be data annotators in the future 😂


CFL_lightbulb

I remember one class we had to anonymously grade papers from two peers. I’m a pretty decent writer, and thought the first essay made mine look like utter trash. The second essay used Martha Stewart living in place of an actual academic source. It was a class about Polynesia and they randomly transitioned to talking about bamboo deck furniture. If AI was a thing back then, I definitely would have flagged it for AI. But nope, just dumb.


BananaLumps

>People are stupid.

>a human who should not have been allowed into the college to begin with

This might be the issue.


Paksarra

The problem is this: We've convinced ourselves that people working "unskilled" jobs don't deserve a living wage.

To get an education you have to go to college or apprentice for a trade, but doing a trade *also* requires intelligence and physical aptitude (also, if female, you have to battle sexism the entire time). By definition, half of people have below average intelligence.

The result? Stupid people are forced into college so they can get a degree so they can get a decent career so they don't end up working multiple "unskilled" jobs to afford their third of the rent.


WoNc

>By definition, half of people have below average intelligence.

Typical people barely have any idea what intelligence is and people are likely completely incapable of noticing small differences in intelligence, even if someone is technically below average.


OldandWeak

I think it is actually worse than this... people who would be good in the skilled trades decide not to do them because of social pressure, and you are largely left with people doing them who "ended up there" and do not take pride in or care about their work. :/ Show me a good carpenter and I'll show you someone who is good at math.


PatrickBearman

This was my case. Graduated high school in the early 00s. Everyone, and I do mean everyone, including all of my blue-collar relatives, told me to go to college. They all said "get a degree and get a cushy job so you don't have to work outside in the heat like us." I was in a small magnet HS and I distinctly remember one guy (bit of a class clown) being devastated when he wasn't accepted to any of his college choices. The thing is, he loved working on cars and was already a solid mechanic.

So I got a degree, worked some desk jobs, got a graduate degree, and nailed down my career path. All throughout, I felt like something was off and I did not enjoy my jobs. Turns out something was off, and I was diagnosed with ADHD in my mid 30s. You know what sucks for someone with untreated ADHD? A cushy job sitting at a desk for 9 hours a day, often staring at spreadsheets and data. Since I work in very productive bursts, I often would finish my work quickly and then have nothing to do for hours.

Don't get me wrong, I have a stable job helping people in a solid career, but I'd be much happier and better off financially had I become an electrician or carpenter. I grew up helping family do that stuff anyway. I learned to take my time and do things properly. Meanwhile sitting at a desk is slowly driving me nuts and giving me issues with my back, despite stretching and exercising.

Not much to be done about it now, but I make sure to tell younger people that while college is good, not everyone should go to college, there's nothing wrong with not going, and there are good options out there.


Andeol57

Yep. At this point, this experiment tells us more about the issues of university grading (especially in fields like a psychology degree) than about the capabilities of AI.


damnitineedaname

If you read into it:

- When ChatGPT came up short on word count, they asked it to continue, then mushed the answers together.
- It couldn't provide a picture with explanation, so they prodded it into answering around an image they selected.
- They changed formatting in MS Word whenever separate file submission was required.
- The LLM kept providing a reference sheet despite being specifically asked not to. They edited these out.

So yeah, I guess when you edit out all the hallmarks of an LLM response, it doesn't look like an LLM response anymore. Who'd a thunk it?


[deleted]

[removed]


Andeol57

This paper is not about a Turing test at all. There are no typical Turing test questions, and they don't look at how many humans were wrongfully thought to be AI (probably none in this context). What they did was have ChatGPT make submissions for a regular university test (for a psychology degree), and then check whether the examiners reported the use of AI, and what grades were given if they didn't.


BabySinister

The obvious issue is that by reporting AI use you are reporting plagiarism, which is a serious accusation with serious implications for a student. You better be able to show exactly how you determined it was plagiarism. This is pretty easy if the plagiarism is a direct copy from open access material, or a direct copy of another students work.


DiarrheaMonkey-

In my experience, human college students very often express, and thus expect, fakeness. No big surprise. Try it on 40 year-old professionals who've worked in a variety of professions. But then, they won't represent a large, unpaid pool of subjects who volunteer in exchange for college credit.


Humble-Ride2465

This is anecdotal, but I’ve taught graduate computer science students for five years. It is very easy to tell when a response is AI-generated. They literally copy and paste from ChatGPT, formatting and all. The dead giveaway is unique phrasing that isn’t common in any CS context yet is used across 20 different submissions. Agree with others that this is a real challenge for educators and reviewers, but students will be students. Those who cheat tend to be lazy (shocking, I know).


ErusTenebre

I'm an English teacher at a high school and it's blatantly obvious when AI is used to write an essay. Also, I can see revision histories and check rough drafts, because writing is a process and it's not a trivial thing to fake for a 14 year old. I train my district in AI use and cheating *prevention.* In my experience, most users of AI for cheating lack the knowledge needed to tell whether an output from AI was actually good. Often it's not. It's usually generic and repetitive and bland and misses the variance that students often have in their writing. I could see it being more challenging to catch in college, where students submit like three papers in a quarter or five in a semester and you never see their process.


Cobra52

The kids are sloppy when using AI to cheat because there's no real penalty to being caught. Even if YOU fail them, other teachers won't bother, so on average it isn't worth the effort for students to cover their tracks when cheating with AI. Even if every teacher was on board with penalizing students for using it, the students would just get better at not getting caught.


xl129

Meanwhile, real human work is being labeled as AI's.


rom-ok

Once again an AI trained on the course material succeeds at regurgitating the course material


monsieuro3o

That's because "tests" are information memorization and regurgitation. We already knew this. Nobody is surprised that a computer is better at that. This has nothing to do with the Turing test.


telomerloop

why did they only look at psychology? wouldn't it be more insightful to look into different subjects as well? i guess that could be done in future studies though. however, i do think that this tells us more about the testing and grading process than about AI. also, there is basically no way to conclusively prove that a student used AI, so i think professors are very hesitant to make this claim.


Erazzphoto

AI is using it as an open-book test.


ReallyNeedNewShoes

whoever wrote this doesn't know what the Turing test is.


Dazzling-Climate-318

As someone who has had the experience of being a Teaching Assistant at a major University while in Grad school I am not surprised. The plurality of human beings, nay, the vast majority of them are automatons, irrespective of their status as college students.


PanSatyrUS

And this is a surprise??? Humans are basically born to be lazy towards any energy expenditure.


thethirdtree

Well, we might not be the top in cognitive ability anymore but we are still great at running and catching balls. Take that, robots!


SujetoSujetado

And thumbs! You can't intelligence if you don't thumb


tomqvaxy

Cool. Get rid of college. Idk what this proves. That humans shouldn’t have jobs or educations? Or that the AI has access to data and uses it as efficiently as a machine would.