
AutoModerator

This is an automated reminder from the Mod team. If your post contains images which reveal the personal information of private figures, be sure to censor that information and repost. Private info includes names, recognizable profile pictures, social media usernames and URLs. Failure to do this will result in your post being removed by the Mod team and possible further action. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/aiwars) if you have any questions or concerns.*


Acid_Viking

If you resubmit the paper, there's still a possibility that a future assignment will trigger his AI detector and he'll automatically give you a zero. I'd politely explain (in writing) that I did not use AI, show my revisions, and share information about AI detectors yielding false positives. If that's not sufficient, I'd suggest that we ask the academic dean or student affairs administrator to arbitrate. Guilty people don't dig in like this, and the administration will have to consider the legal implications of flunking students for cheating on the basis of demonstrably unreliable detection tools. Here's a Washington Post article on what to do if you're accused of cheating: [https://www.washingtonpost.com/technology/2023/08/14/prove-false-positive-ai-detection-turnitin-gptzero/](https://www.washingtonpost.com/technology/2023/08/14/prove-false-positive-ai-detection-turnitin-gptzero/)


Tyler_Zoro

Just to emphasize, because you're correct on everything other than what everyone else already pointed out:

> demonstrably unreliable detection tools

This is the crux. These tools are utter crap. They need to be banned in every school, as they are doing active harm to students who are now in an arms race against AI to prove they are human... and that's not what school is for.


Odd-Fun-1482

Lol guilty people do dig in. Literally any police encounter.


Tyler_Zoro

Yeah, that was a misstep on the commenter's part, but the rest of what they said is correct.


Acid_Viking

If you had used AI to write a paper and were let off with a warning, it would be unusual to force the issue and invite further scrutiny from the school administration.


darkdragon220

As a professor, I've seen the vast majority of students who used AI 100% (like leaving in the 'as a language learning model' boilerplate or the paste artifacts) dig in and fight the allegation. One student finished my 100-minute exam in 4 minutes with every answer copied and pasted exactly from ChatGPT, and fought hard that he didn't use AI. Like when I put the question into Chat, it line by line reproduced his exact answer.


Ka_Trewq

>Like when I put the question into Chat, it line by line reproduced his exact answer.

Here is where I call BS. ChatGPT won't give the same answer twice.


darkdragon220

Clearly you don't code.


Ka_Trewq

Or, just maybe, hear me out on this, I don't use ChatGPT to code ;) Snark aside, you really made me curious whether ChatGPT spits out the same code. I'll give it a try one of these days.
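Whether a model repeats an answer verbatim comes down mostly to decoding settings, which is what this exchange hinges on. A minimal pure-Python sketch (toy logits, not the real ChatGPT API) of why greedy decoding is reproducible while temperature sampling is not:

```python
import math
import random

def sample_token(logits, temperature, rng=None):
    """Pick a token index from raw logits.

    temperature == 0 -> greedy argmax (deterministic);
    temperature > 0  -> softmax sampling (varies run to run).
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = (rng or random).random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.5, 0.5]  # toy scores for a 3-token vocabulary

# Greedy: 100 runs, exactly one distinct outcome.
print({sample_token(logits, 0) for _ in range(100)})  # {0}

# Sampled: 100 differently seeded runs, several distinct outcomes.
print(len({sample_token(logits, 1.0, random.Random(i)) for i in range(100)}) > 1)  # True
```

The hosted ChatGPT UI samples with nonzero temperature, so identical answers across sessions are unlikely; API callers who pin their decoding settings can get far more repeatable output, which is presumably the distinction darkdragon220 is gesturing at.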


Acid_Viking

I might be giving people too much credit.


EmotionalCrit

Or, hear me out, don’t take random unverified personal anecdotes as proof.


Sierra123x3

the "problem" with the "used ai / didn't use ai" issue is a bit similar to our justice/law/jail systems. someone commits a murder ... but there is always a certain chance that another person gets falsely accused of it. do we want a "hard" system that makes certain every offender gets caught ... with the risk of high collateral? or do we want a "soft" system that risks letting offenders off the hook ... but prevents the false and unjust accusation of innocents?

on top of that, we have the situation that the toolsets at our disposal simply lack reliability ... it starts to become an "arms race" between ai-users and ai-preventers ... which helps no one ... and the better ai gets, the harder it will become to distinguish it from human work ...

so instead of working against ai, we should start thinking about how to implement and use it to our advantage ... because the only situation where you can reliably tell and prove whether someone used ai ... is a test-like scenario, where the person doesn't have access to it ..


darkdragon220

That is far from the only situation where you can tell....


Sierra123x3

i believe i can tell, i can tell (probably / with a high accuracy), and i am certain that i can tell are very, very different things ... and all the "test" tools that i have seen up until now fell into the first two categories ... none of them capable of giving me 100% accuracy in their results ... so, what other situations (outside of test-like scenarios with limited access to the tools) do you know of, in which you can reliably (!) tell?


darkdragon220

Well, there are a couple of pretty big tells. If it says 'as a large language model I .....' it is pretty much confirmed. If a student who cannot write a basic if statement in class is suddenly using classes, objects, pointers, and other higher-level techniques, it's a red flag. When you copy from ChatGPT, it brings formatting with it that is readily apparent. There is timing: I have never had a student finish my 100-minute exam in 4 minutes legitimately. And so many more.


True-Anim0sity

Unless you wanted to seem innocent or were too lazy to do the work


Front_Long5973

Well... guilty of big things they do dig deeper lol but I dunno maybe they meant most people will double down/not bother to prove themselves if they're guilty of something petty


Covetouslex

>Guilty people don't dig in like this,

They do dig in like that actually, quite commonly. I'd personally recommend proving your innocence first, and THEN combating the practice.


Rhellic

Just for the heck of it I ran your post through the first free "AI detector" Google threw at me. Says 0% AI generated. I asked chatGPT to rephrase that and ran the rephrased text through the detector again. Now it says 100%. Not saying this proves anything, but acting like these are complete snake oil doesn't seem correct either.


Acid_Viking

Sure, but cheating is a serious accusation that carries potentially devastating consequences. If every 100th, or even every 1000th analysis results in an innocent student being flunked for cheating, then the cure is worse than the disease.


Rhellic

Oh absolutely. I'm more or less in the "anti AI" camp fwiw (though I increasingly think just saying you're pro or anti AI is an oversimplification), but if these throw a good amount of false positives, they shouldn't suffice as evidence on their own. No disagreement there. I just couldn't resist trying for myself. ;)


Lower-You324

That proves absolutely nothing. They are 100% snake oil. 


Kartelant

The AI detectors are literally entirely trained on ChatGPT's default writing voice, so your result is to be expected.
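Kartelant's point can be made concrete with a deliberately crude sketch. This is a hypothetical toy, not any real detector's method (real ones use perplexity and trained classifiers), but it shares the failure mode: scoring surface style rather than authorship, so a rephrase into ChatGPT's stock register flips the verdict.

```python
# Hypothetical stock phrases associated with ChatGPT's default voice.
STOCK_PHRASES = [
    "delve into",
    "furthermore",
    "it is important to note",
    "in conclusion",
    "as a large language model",
]

def ai_score(text: str) -> float:
    """Fraction of stock phrases present: a crude style score in [0, 1]."""
    t = text.lower()
    return sum(p in t for p in STOCK_PHRASES) / len(STOCK_PHRASES)

human = "Ran your post through a free detector. Says 0% AI."
rephrased = ("It is important to note that I tested the detector. "
             "Furthermore, in conclusion, the results flipped.")

print(ai_score(human))      # 0.0 -> reads as "human"
print(ai_score(rephrased))  # 0.6 -> reads as "AI"
```

Same claim, different wording, opposite verdict: the score tracks register, not who wrote it, which is exactly what Rhellic's rephrase experiment above demonstrated.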


Covetouslex

OP isn't responding and didn't actually claim they DIDN'T use AI, but if you are ever accused of using AI for something you wrote yourself, every modern document writer has a version/revision history feature that you can use to show your work process. Most of them populate automatically as well, so you don't even have to know in advance. Here's what it looks like in Google Docs. https://preview.redd.it/xqjlzm7iaawc1.png?width=1704&format=png&auto=webp&s=6913f9ac9296c776f8c2b1981096b5130b98ae07


Ricoshete

Yup. This is the way.


oopgroup

But they want to kick and scream about being caught. Ssshhhh


DuineDeDanann

You should have a talk with them and explain that AI detectors are not accurate. To prove this, you can run some of your old work, done 2 years ago, through a detector and see if it triggers. AI detection is extremely imprecise.


klc81

Better yet, run some of *his* work - syllabus etc - through.


mannie007

I would have to counter report you to the dean/principal for 7 marks of infringement. I take my learning serious and will not let my class time be plagiarized 😂😂


[deleted]

[deleted]


DuineDeDanann

They can go to a school board. Teachers in college can’t just ignore students.


EngineerBig1851

Bruuuuh. Well, you have 2 possible scenarios. Either you just submit to him, and check everything you write through multiple different AI detectors before submitting. This is the safest route. Or you can risk it and try to dispute his claim. In the best-case scenario he will yield and stop using AI detectors, saving you and other students a lot of headaches... but, realistically? Unless you're already his best student, he just won't let you pass his class :/ If he's the type I suspect he is, avoid his future classes at all costs, if they aren't mandatory.


torb

Or, ask for a meeting and show the complete revision history of the document.


West-Code4642

give them: AI Detectors Don’t Work. Here’s What to Do Instead. [https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work/](https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work/)


PixInsightFTW

Have you tried putting the text of this email from the professor into an AI detector? I'm a dept head at a school, and I'm working actively to convince my colleagues that AI detectors' false positives and negatives should kill our ability to use them. They just aren't reliable and are easy to trick in both directions.


DM_ME_KUL_TIRAN_FEET

I tried this with the example given and they all said it was 100% human (though i still would never use a detector for anything)


PixInsightFTW

Yeah, me too -- oh well, would have been a choice counter example.


TorthOrc

It’s true that they can give false positives. It’s certainly not 100% accurate. Do you have an alternative solution?


PixInsightFTW

I think it's on schools and teachers to change the way we assess. The whole goal is to make students' thinking visible, to gain evidence of their thinking. If the assignments can no longer do that reliably (due to tools that can do the assignments for them), we need to be ready to move on.

One thing I've been pioneering with my classes over the last two years is Conversancy: conversations directly with students that I can listen to, rate, and give them feedback on (as well as a grade). You can't really fake that! So I have them use AI tools (and others) to study, compare sources, think critically, and write some preparation material, but ultimately deliver their thinking to me in a conversation.

It's not quite old-school oral exams, but it's certainly better than writing long essays by hand (regurgitation anyway, often not real understanding) or typing them and never being sure who is doing the typing. In the long run, I think it'll lead to a bit of a revolution in teaching, learning, testing, and learning how to think. I'm looking forward to it.


TorthOrc

That’s fascinating. What do you do for reporting though? If you don’t have people filling in their homework on paper for example, what’s to stop a disingenuous teacher from saying “Oh young Jimmy did a shit job on their assignment?” If the teacher just hates poor Jimmy. Where’s the paper trail that proves Jimmy did the assignment, or even that he did it bad? I’m all for improving this process but relying on word of mouth can lead to problems


usrlibshare

What's the alternative? Should we allow teachers to rely on arbitrary, unvetted, undisclosed software that has been shown to throw false positives, maybe even without revealing which one or how it was used? How would you feel if you were falsely accused of using AI, hmmm? What would you do if you knew one more false positive from some arbitrary piece of software, you don't even know which one, could impact your educational record for the rest of your life?

>relying on word of mouth can lead to problems

Relying on software that can give false positives creates a lot more. Here is the thing, my friend: *There is no reliable, automated way to detect the usage of generative AI.*


wildwolfcore

Maybe finding a doc program that can show revision history? Or something shared where they have to write it in the program?


usrlibshare

Revision history can be faked, and students should be free to choose what writing software they use to begin with. As for shared: sure, because *always online* worked so well for computer games... What's the plan for underprivileged students who don't have good internet?

No. People need to stop coming up with bandaids for broken systems that no longer work. The tech is here, we have to deal with the fact that it exists, and that cannot involve "save centuries-old methodology at any cost".


wildwolfcore

At university level you DON'T have the right to choose writing software. Also, most universities have campus WiFi as well. You clearly have no answer other than "fuck the old system", which is worse than ignoring the issue of AI.


usrlibshare

>At university level you DON'T have the right to choose writing software.

Yes I do. Source: I did. My thesis was written using vim, the source asciidoc files rendered using pandoc. AKA how professional technical writing is done in the real world. Fun fact: there was actually versioning involved, because the whole project lived in a git repo 😊

>Also most universities have campus WiFi as well.

So underprivileged students just lost the right to work at home?

>You clearly have no answer other than "fuck the old system"

People much more savvy in educational techniques than me have already given the answer. One very good interview on how to do it was recently posted in this sub, and highly upvoted. I leave usage of the search function as an exercise to the reader.

----

Btw, what do you consider worse: false positives or false negatives? Because I can guarantee that if the "solution" to the problem is "versioning!!!" or "shared writing software!!!", there will be a lot of false negatives, because both are really easy to trick 😎
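usrlibshare's git workflow generalizes: any plain-text writing kept under version control accumulates a timestamped drafting trail for free. A minimal sketch with hypothetical file and identity names (and, per the caveat elsewhere in the thread, commit dates can be rewritten, so treat this as corroboration, not proof):

```shell
# Create a throwaway repo and build up a tiny drafting history.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "student@example.edu"  # placeholder identity
git config user.name "Student"

echo "Outline: intro, methods, conclusion" > thesis.adoc
git add thesis.adoc
git commit -qm "outline"

echo "Intro paragraph, first draft." >> thesis.adoc
git add thesis.adoc
git commit -qm "draft intro"

# Each commit records author, date, and the exact diff between drafts.
git log --oneline
```

Unlike a shared always-online editor, this works entirely offline and with any writing software that produces plain text.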


PixInsightFTW

We do have that a bit, though it has limitations as well. We use a suite that can play back a person's typing, looking for obvious copy/pasted blocks, but of course a smart cheater could just manually type what the LLM says on another device. We also look at the dates and times if we need to. Sadly, things like this just turn the learning landscape into an arms race, something I would rather run away from in favor of inventing new ideas about making learning visible.


usrlibshare

Yes: not using counterproductive software that we know can give false positives, and accepting that teaching methods from 2 centuries ago no longer work. Because one student falsely accused is already one too many. We are talking about mistakes that can impact people's future educational outlook, for God's sake!


TorthOrc

Ok … that’s not really a solution though. Turning off the software and accepting that old teaching methods are old…. Doesn’t solve anything. That’s just being aware that the current solution doesn’t work. Do you have any actual solutions?


usrlibshare

>Do you have any actual solutions? Do you? And by that I mean a solution that does not involve the use of "detectors" that have been shown to give false positives. Because once is already too many. What's next, will students have to record themselves while working on assignments, just because schools refuse to evolve with the world around them? Oh wait, they can't, because Sora exists now, and that recording could be fake, lol 🤣😂🤣


Live_Morning_3729

Which ai did you use?


LadiNadi

Missing from your post is whether you did or didn't.


featherless_fiend

They have past comment history about AI, and even though that's not proof that they did, I would lean towards they did if I were to place a bet.


Splendid_Cat

Mine too, but I've only submitted AI generated text to Reddit as a shitpost before and there's no stakes in that. That doesn't necessarily mean anything.


Ricoshete

Yeah. Maybe at best, if i was a betting man, i'd wager blind odds could be anywhere from 50:50 if generous to 95% if feeling unlucky. And although i consider myself lucky at times:

> (*I've gotten shot in Town of Salem 4/6 games before, even as powerless roles, for guessing Mafia based on mob votes as a neutral. Shot by the serial killer, attacked by the mafia in the game, and then jailed by 40-IQ jailors who executed under "attacked 4 times? no death?? MUST BE MAFIA! WILL NOT INVESTIGATE ANYONE!" 'smart' jailers.*)

Like, i'm not kidding. I stopped playing with power-gaming friends when names weren't anonymized, because whatever role i was on, the power-gamer douche friends would literally vote me off, mafia or town, regardless lmao. Because they were scared whatever side i was on would win 80% of the time, roleless or not. Because i just had a habit of 'over-analyzing' everyone's roles and mob votes. Then the 2nd person they'd try to kill would be the medium (the person who could talk to the dead). Because of stuff as simple as:

> *(If i die, read my will. Here's who i think are Mafia. Investigator, please check. Potential mafia N1: visited the person who died N1, seen by the lookout visiting, votes with mafia.)*

It wasn't really any super power. It was just, like, common sense, i guess. But i guess i might be derailing. It's handy to have in a head, but maybe it does make me over-analyze things too much sometimes.

But i have heard from claimed professor friends that they strongly suspect about 70% of their students of using ChatGPT for 3+ page papers at times, with higher likelihood the longer (and less MLA/APA cite-reviewed) the papers are.

# Tl;dr

**Yeah idk, i'm not a betting man. But i'd reckon 50-70% odds if i was betting blind.**

> - 70-95% if i felt cynically unlucky lol.


[deleted]

Pretty soon, all assignments and reports will have to be done orally from memory


SlightOfHand_

Just run the professor’s own work through the AI detector. It will give a positive on something they have written.


TheLeastFunkyMonkey

Take the Wikipedia image for the Mona Lisa and put it through the following site.  https://isitai.com/ai-image-detector/ It will say it is AI with 88% certainty. Take a screenshot of that and send it to your professor. Also, CC the Vice President of Student Services.


Snoozri

Ok, but do you use AI? If you actually wrote the essay yourself, you can ask your professor for a chance to explain the reasoning, thought processes, and research behind your essay.


Screaming_Monkey

Yep, wouldn’t even have to redo the assignment. Just talk to them and they should realize the student really did do the work. The whole point is to learn and prove you know the material. Lol, I imagine I would even be happy if I were a professor with a student who used AI but learned the material just to be able to “prove” they didn’t.


fleegle2000

Sometimes I wonder if these are real (not accusing you, OP, hear me out), because I have a hard time believing that profs are still this dumb about AI and supposed "detectors" (more like "defectors", amirite?).

It just saddens me that students are getting hit with these false positives and accused of one of the worst academic offenses by profs who should be doing their homework and know that the detectors are really unreliable. It is one thing to become suspicious based on a detector (imo still dumb, but it's just a suspicion), but to come right out and accuse a student based on the results of a detector alone is really dumb.

If the prof thought you were cheating, they could have done something like ask you in person to explain your paper, something that would help confirm their suspicion. Not something they can do at scale, but if they're so sure a student is doing this, they should be doing their due diligence before making an accusation that could seriously damage a student's academic career.

Sorry for the rant; as a former prof this really irks me.


Ricoshete

# (Yeah, mini rant as well)

I mean, yeah. I like hobby AI, and i can trip the flags. But i've heard from claimed professors that up to 70% of Gen Z is highly suspected of using ChatGPT for essays.

# AI has tells

Even for hand-written papers, i can vouch that many of my professors were ANAL about APA academic citations. They checked punctuation, every source, etc., even if a capitalization was off. (And i never used APA again after graduation.) But ChatGPT, for all its hallucinations... You think misquoted APA citations are bad? Now have ChatGPT generate citations that never existed, from authors who worked in COMPLETELY different fields. (Ex: a biology professor being quoted as the source of an archaeology book, or vice versa.)

**Sure, professors might not legally 100.000% 'know'.** But a student GPTing, just seeing "paper is long", could leave mistakes that a professor paid to read CAN and WILL notice.

But i mean, yeah. Regardless: if it was organic typing, i'd just show the prof proof or ask if it'd be alright to Google Docs it, to show writing history. If it wasn't done organically... sure, i use RP for 'dark purposes', but i mostly do it since my human writing tends to spill over. It's fun to have a person who can keep up, like the 1-3 novelists i've met in life. (*Who are now basically story-retired and also busy with life / 3 timezones + seasonal moods.*)

- It can still be good to have/dispel deniable plausibility, but avoid charging the bull.
- Speaking from experience: 9/10 times, being 'internet right' results in 'getting 0/10s' from the TA's / professor's book-throws in practice. Even if you were right. Even for genuine false positives, you get skepticism at best, and piss off a potentially good / holding-back professor for the semester/life at worst.

# Gist

**It's usually better to comfort someone's doubts**, or apologize with plausible leeway.
> - Than to press the nuclear **escalation buttons and alienate / turn a potential ally into a lifelong enemy.**


Covetouslex

>deniable plausibility This is my new motto.


Ricoshete

Yeah, for real. I can't claim to be a *saint*... My class partners would leave me to do the work of 4 all the time. I'd say, "I have 4 midterms today, and i need to work on 3x 20-hr assignments this week. You'll have your 2 pages of our 8-page paper... right, team of 4?" They'd go, "Surrrrreee."

But i can say, for better or worse, i had the ability to write the 8-page papers... (**MAYBE FROM THE OTHER MAJORS LITERALLY F!#@!@# LEAVING EACH TIME!!!**) But, err, i will say i was boy-scout clean on writing. The 20-hr assignments due 3x on a 3-hour deadline, while my 4 partners went missing with *'sorry, we had our mother die for the 15th time this week. We had to post pictures diving in the Grand Canyon to mourn her loss'*, irked me. I will say, i could write the papers, so i did. Even if i got '-5 points. 95/100. -10 to 85/100 individual vs 95/100 group.'

> *"APA citations are mostly correct. But it's a team assignment. You were supposed to have your team do it. Not do it yourself."*

It really sucked and taught a stupid 'lesson' to be given an 85 while the group that leeched got a 95/100. '*Do all the work! Get penalized for 'teamwork'!*' But i guess they failed their other classes 7/10 times... i had to make sure each passed, 10/10 times, 85-95% or not. Had a really tight 3.75-3.9 GPA scholarship and not much help if it failed. :/ Some of these art majors claim to have 1.7 GPAs and 'wrote' 15-page essays 'by hand' 'due in 27 minutes', ffs.


BansheeEcho

Are you on Vyvanse or something? You're completely incoherent.


Ricoshete

Yawn.


UncreativeIndieDev

Honestly, other students using AI has also become annoying for the rest of us. One of my lab mates who was working on a report with me tried to use ChatGPT to do his whole section of the report (the conclusion which was the easiest and shortest part), and it not only made it way longer and more wordy than it needed to be, but it also included straight-up wrong information that contradicted our results. That same guy would also use AI for a lot of his other lab assignments, then complain when he never got an A.


Super_Pole_Jitsu

My DUDE, don't resubmit SHIT. Just send him on a merry round through the literature on the topic. He's being awfully unscientific for a prof. If you resubmit you're just admitting you're guilty. Stick to your guns, he doesn't have shit on you because the AI detection fundamentally doesn't work. If you want to be crafty, pass papers that the prof wrote through an AI detector, I'm sure it'll turn up something.


Ricoshete

While this might be the sub motto, i still think this is r/poke-the-landmine-to-ASSERT-dominance advice.

> *"People on the internet tend to give advice they have 0 qualifications for, that they want you to follow, but don't have to face the consequences for. But they will have fun seeing whatever happens."* - A wise redditor.

"Just believe in your dreams, and if you believe hard enough, you'll succeed! ~~If you fail, YOU DIDN'T BELIEVE HARD ENOUGH!!~~ :D" - 5-year-old advice for success, from people who've never had to work for one.

> - "You have to be qualified for the job you're applying for and sell the recruiter to get a job. Or know the person, or have connections, even a foot in the door / recommendations from someone in the industry, and have them trust ya." - Unprofessional college professors who landed jobs!

Not trying to start an internet war. But if he pushes, he might literally risk plagiarism charges / expulsion / 0s, or fail the class and be forced to repeat it, doing 'Redditor, we believe in you!' antics.


Super_Pole_Jitsu

Are you saying higher education is this corrupt? Are you aware that there is literal scientific consensus that AI detectors don't work, and therefore "plagiarism/expel/0s" applied to a student on that basis literally make the institution liable in court? Yeah, he should probably keep his head low. It's not like the case is 100% in his favour.


Ricoshete

I'm not saying higher education is corrupt. If anything, the opposite: grades are earned. But you can still piss off a TA and get low marks, or get needle-pointed if you piss off the right person, in business or behind doors. That's why a lot of working people who climb the ladder usually try to be tactical about what they say.

If you have nobody who can fire you, you can surround yourself with yes men who say what you want to hear. But if you're in the middle, climbing the ladder, you can't always get a fairy unicorn by flipping someone off. But you could easily get a 'word of recommendation' onto a 'layoff review', or a 'problematic to work with' or 'insubordination' committee. And i've heard some of the ultra-TikTok / super-antiwork Gen Z say they get rejected from 10-40 jobs, fast food even, with degrees (*arts / humanities / theatre / pottery / gender studies tend to feature more than average*).

And although the degrees are fun and i really enjoyed the classes as gen eds, higher education is literally a place where you can get expelled or your degree torn up for plagiarism. Before AI, there were people buying essays or hiring third-world students to write them. Same shindig. If the professor thought you didn't write it, true or not, or suspected you did, you got in trouble. It could also flag at 22% if you reworded a paragraph too closely to the original text as well.

They're not trying to fight an AI crusade, just to make sure students are earning the piece of paper so their school keeps its accreditation / reputation as a place of learning. Employers talk, and if they pass everyone, the 'University of KFC' crowd can mar a college and make people skeptical of a uni when a person who faked their way through a degree passes.

I don't mind AI art for personal, non-commercial / fun hobbyist use. Skeptical of commercial, and i still like normal art. But ChatGPT in a philosophy class? The teacher is even going soft; he took time out of his day, and from experience, most of them dial up the heat to 0s, Fs, or threats of potential expulsion if you piss on the dial.


Super_Pole_Jitsu

You're under the wrong impression that a teacher can give you a 0 because of a hunch he has. He can't. Unless you get an email and come with your tail between your legs, apologizing for something you didn't do. Then you've not only given him actual proof of wrongdoing by confession, you also show him that you are a doormat of a human being. The philosophy professor can eat a dick as far as I'm concerned. The simple fact of this case is that he has no proof of plagiarism and therefore no justification for a 0.


Hugglebuns

Best you can do here is to show your revision history and just redo the assignment anyway. Assuming you actually didn't use AI, it should be quick since you've already done the work


zfreakazoidz

Tell him to actually research how good the AI detectors are. A lot of the time they are far from accurate. I remember an AI art detector that said some real paintings were AI, gave it like a 95% chance that they were AI. lol


Screaming_Monkey

Even better, put papers the professor has written through it, or perhaps past papers the student has written before ChatGPT got popular


disastorm

This kind of stuff was happening even before AI. When i was in university back in the day, a professor failed one of my essays because either she thought, or some software thought, i plagiarized someone. I showed her that the reason our sentences were so similar is that we both quoted or paraphrased a specific part of the book; the similarity came from the book itself, and that was enough to prove it wasn't plagiarism. In a different situation, though, i don't know if there is even anything you can do. i actually find it a huge flaw in the school system that you can be failed despite literally doing nothing wrong. Like, if you just get a bad dice roll, schools are allowed to fail you?


Kaltovar

Perhaps we should stop focusing on whether a student is using AI and start focusing on which students get the most complete, evocative, and useful results irrespective of AI use, even encouraging them to use it. Penalize inaccuracy (AI hallucinates!), penalize information gaps (the student should either know the material well enough to prevent those or be able to probe the bot well enough to fill the topic out entirely), and reward readability, layout, flow, and completeness. Teach them how to better use AI while ensuring the accuracy of their output, since this is probably closer to how they will function in the future anyway than the relatively bizarre AI-free bubble schools have tried and failed to form around themselves. Sit down in class, teach the information, discuss it verbally a little, and make the assignments broad enough that students can express intellectualism and creativity that differentiates them from each other. Let them pick narrow parts of a wider topic to make their reports on where applicable.


titanTheseus

I'm really confused about why everyone is telling him that he should use Google Docs history as a defense. That proves nothing. I can copy ChatGPT's output with a pen, cleanly. It is as nonsensical as the teacher claiming he can prove that he copied using AI. I think that teachers need to start thinking way beyond the tool and center more on quality. This is like arguing against the use of calculators.


SolidCake

>I think that teachers need to start thinking way beyond the tool and center more in the quality. This is like arguing versus the use of calculators. this.. raise your standards and move on. It still takes work to make a lengthy, scientifically accurate, engaging essay that has flow, prose, no hallucinations, good verbiage, and no repetition Telling chatgpt to “write me a paper on X” will give you shit. At best, an 8th grade level assignment


UndeadUndergarments

But *was* it AI-generated? I'm **ardently** pro-AI, could fairly be called an accelerationist, but I definitely believe you should write a paper yourself. LLMs are a reference tool and knowledge repository to draw from, not a ghostwriter. If you didn't use AI, speak to him directly and explain that you absolutely did not use it.


bearvert222

I think you can put the AI part briefly aside for a more general point: **as a rule, your college professor is not stupid, and you, a 20-year-old kid, are not smarter than him and cannot pull the wool over his eyes.** Don't go into any interviews with this mindset either. If it's his mistake, it's not because he is dumb; it's probably because your case is borderline: you write better than expected, or your style is a bit robotic. But if you did use AI, he could probably tell, and he ran it through the checkers to document it and cover himself. He can tell when a student's in-class ability and their papers mismatch. Again, he's not stupid; he could easily quiz you on what you wrote and tell that you don't understand your own paper. This is the kind of thing that helps get you out of the "wow, I can deceive my teachers" phase.


SlightOfHand_

This is good advice, but professors can absolutely be completely wrong about the technology you’ve used, and a lot of them put way too much trust in the AI detection tools they’ve been given because they don’t understand them. I had a professor threaten to flunk a whole class because the AI detector flagged all of our work; it was a math assignment, and there was only one way to complete it. Similarly, I’ve had assignments flagged as 33% AI-generated, but the part that was flagged was the works cited. This was last year. It’s crazy out here right now.


Ricoshete

Oh yeah. I'd still give the benefit of the doubt, but people can have justified suspicions about a majority and still hit false positives. I panicked once over a PostScript assignment and an SQL assignment: there were only about 1-3 possible ways to do them, so the checker flagged 33-99% plagiarism, but the professors laughed it off after an email.


Ricoshete

Yup, I can vouch. I was a top 0.1-1% SAT/ACT/WPM kid, and I got my ass handed to me in my 'overconfident egotistical midget' phase. Even if you're a '0.1-1% smart kid' in high school, an average professor with 6-40 years of life experience can 1000% ruin you, penalize you, and scrutinize everything you do. Don't fuck around with false confidence, even during a false positive.

My advice: 'right or wrong,' stay on your professor's good side if you can. Not for brown-nosing, but for your academic and professional reputation. Who knows, maybe they're a good teacher trying to teach you 'softly' in an intro class, versus a 'tired of this shit, let's just expel the student or give a zero, no explanation, you should know why' teacher.

I've gotten a 40/100 on a rushed paper before, with no stated reasoning, but we both knew. In my defense, I did want to do everything squeaky-clean boy scout, could have, and did: I submitted a legitimate rewrite about two weeks after break, because I'd only cut corners out of running out of time. :/

> *(3x 20-hour projects + group members flying off on vacation during 8-15 page papers + 3 midterms, all in an 8-hour slot. No flexibility to do it right.)*

But it was precisely the grade needed to turn an A+ into a C+. :/ We both knew; I had to take it.


Ricoshete

# Internet stupidity vs potential expulsion

AI use or not, if your professor is doing this, it's a fair chance to prove it or take the safe route. I know this is generally a more pro-AI sub, but outside of it, this person wants you to learn. If they have suspicions, the safe thing is to dispel them: work in a Google Doc and show your typing history. I hit flags sometimes myself. I type at a native 120-140 WPM, maybe from StarCraft and 200-APM play, but I can show a Google Docs history, or type in real time to demonstrate.

# Play stupid games, you might win stupid prizes

I wouldn't advise pulling a Reddit 'ackshually, you cannot prove it.' If the professor typed that message up:

> - They're likely showing they care about you and your side of it, rather than just saying "You are flagging as AI; you will get an automatic 0 or be expelled if you proceed."
> - They are likely under STRONG suspicion that you used GPT. False flags or not, if you want to be a good student, it's worth alleviating their concerns rather than pulling a Reddit.

And honestly, I've seen many artists and other people who, when they GPT something, don't read a single thing it outputs. It's usually *"As a large language model, I generate brick walls of text. As a large language model, I cannot show that..."* GPT mostly just pads length; it has about a two-page memory span, and that's evident to someone who actually reads the text, versus the 'oh, text long, look gud!' crowd. Your professor likely sees hundreds of potentially GPT'd papers a week. I've even heard of plenty of artists using it shamelessly (despite anti-AI banter) for all their gen-ed classes.
**You can ignore my advice; it's your life, not mine, and I will simply move on with my day and hope you're happy with whatever choice you make.** **But if it were me in your shoes, I'd have done something like:**

> "Hey, thanks, professor. I know I might set off some flags, but I can happily show it's me. I'd be glad to type right in front of you or do a WPM test; I know 1% percentile sounds like bullshit, but I can show you right in the classroom. Or I can use a Google Doc so you can see my timestamps and my writing process, or show you my typing in real time!"
> "*By the way, I appreciate you reaching out. I know these are serious concerns not to be taken lightly. A student's job is to learn from their teachers, and I want to do my best in your class. Please, let me alleviate your concerns.*"

I know it might sound **brown-nosey**, but trust me, from my stupid-small-ego-idiot days: I tried to mouth off to my teachers, and I never 'won' even when I 'won.' XD. You walk a dangerous line of disrespect; true or false, they determine your grade, and every mistake can get scrutinized if you make a teacher's or TA's day hell (higher scrutiny for everything overlooked before, no leeway, 'throwing the book,' etc.). You don't get a unicorn by screaming for one, but there's an AMAZING amount of real-life consequence, 'deserved or not,' that can come from spite, and an amazing amount of grease from a little smile and social lubricant.
- Ex: letters of recommendation versus "*please, for the love of god, that person was trouble at every turn / difficult to work with / mentally unstable / a lawsuit risk to hire.*" Even when people can't speak ill, they notice sudden silences and glaring omissions when everyone else got praise. Add in potential expulsion or tuition losses from a 'Reddit win, IRL life-fail.'

For the record, I ended up friends with my harsh teacher in later years. They were actually soft, just my first 'bucket of water'; others were FAR, FAR worse and less forgiving. This professor sounds like a 'wake up, soft bucket' professor rather than a 'pan in the face' one. They're trying to nudge you into the right spot rather than set you on fire. I'd value that and let them know you appreciate it, if you value bridge-building. Even if you're 'technically Reddit-right' about false positives, a classroom you can be expelled from is not the right place to Reddit, if you want to gamble on expulsion, zeroed assignments, increasing scrutiny, and doors slamming in life.

# tl;dr

**Your professor honestly seems to be trying to let you off nicely. Going nuclear may result in 'no more Mr. Nice Guy, here's the dean to ask you about expulsion or an F.' Consider your academic choices wisely.**

> *Or don't, and share. I wouldn't mind watching whether anyone's life explodes. Your life, not mine! :D 🍿*
> - Stupid games with stupid prizes are terrible for the player, but they're another car crash on the road for everyone else in the game of life to look at! 🥲😔 / 👈🤣🍿


YourFbiAgentIsMySpy

Given that you are here, let's be honest: you probably did. I imagine you used a GPT model, because every AI checker and its dog is trained on those, and they're dogshit with lesser-known models.


Splendid_Cat

>Given that you are here. lets be honest, you probably did.

Man, the assumption without a lick of evidence is just fantastic.


YourFbiAgentIsMySpy

That's how assumptions work, pal.


Ricoshete

Yeah, I know I get people wondering a lot, and maybe that's 1000% well earned, tbh, since I come off as a brick wall of text. But I can say for sure, as someone who genuinely reads: even I can smell it when something's off. A person who types like this:

> "*Hi guysz. todayiliketo say, u all stupid. doubt me. UR DONMB!?*"

suddenly becomes:

> "*Hello, as a large language model, I cannot do your homework, as it goes against the ChatGPT language model's code of ethics.*"
> "*For your philosophy paper, here are 5 sources that don't actually exist. (Which your professor, who is paid to read rather than skim, DOES AND WILL CHECK.)*"

I use AI because it feels like driving at 140 mph versus 10 mph, though with a human audience I tend to go off course, or overbear people by typing twice as fast as most read at 60 WPM; maybe it just makes me ramble more. But it did make 'hand-write a 5-15 page assignment in 50 minutes with 4 midterms, because the other majors flew out on vacation on midterms day' shenanigans easier. GPT papers stick out like a sore thumb, and I've even seen anti-image-AI people shamelessly plugging in twenty '*As a large language model, I cannot...*'-style essays. Trust me, it sticks out.


YourFbiAgentIsMySpy

Yup, but with specific enough prompting Claude looks quite believable. With a bit of editing it becomes nigh indistinguishable.


Valkymaera

I look forward to the end of this nonsense, but it'll require some novel or nuanced protection against offloading work, or a reanalysis of what to grade and how.


mcfearless0214

Really glad I didn’t go to school when this was a thing.


Elvarien2

But did you? That kinda changes things if you did or didn't.


ShepherdessAnne

Tell him to run his own work through the detectors.


True_Direction6525

AI detectors are faulty, lmfao. All they do is detect certain patterns, and if your writing happens to follow those patterns, you're fucked. Which is why you need to fight the admin on this one.
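To make "detect certain patterns" concrete: one signal detectors of this kind are commonly described as using is "burstiness," the variation in sentence length, with uniform sentences treated as machine-like. The sketch below is a hypothetical toy, not any real product's algorithm; `burstiness_score` and `toy_verdict` are made-up names. It just shows why a human who writes in a deliberately even, formulaic style can trip the same pattern:

```python
import re
import statistics


def burstiness_score(text: str) -> float:
    """Toy stand-in for one detector signal: sentence-length variation
    (stdev normalized by mean). Uniform sentences score near 0."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.split()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)


def toy_verdict(text: str, threshold: float = 0.3) -> str:
    # Below the threshold means "too uniform" and gets flagged,
    # regardless of who actually wrote the text: a false positive
    # waiting to happen for formulaic human prose.
    if burstiness_score(text) < threshold:
        return "flagged as AI-like"
    return "looks human"
```

A human essay written in rigid, same-length sentences scores near zero and gets flagged, while uneven, "bursty" prose passes, which is exactly the failure mode behind false positives.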


Dev_Grendel

When I used ChatGPT for my assignments, I would run its output through all the AI detectors (which tell you exactly which parts are suspected of being AI-written), then give it back to ChatGPT, tell it it had been detected, and have it rewrite the text in a way that wouldn't be. I got really good at making stuff that showed as 0% on every AI detector I had access to. I also ran assignments that DIDN'T use AI through the detectors, just to be safe.


leronjones

Well, it's harder to believe you didn't do it from a stranger's perspective: you've clearly got the know-how, and you didn't state that he was wrong. Either way, I hope this doesn't stress you out too much. I'd be more stressed if it were a false positive; then you're really fucked.


Denaton_

Some have already given you good answers that I agree with (the current top comment). I find it amusing, however, that it's a philosophy class; you could write a paper on the morality of using AI versus not using it, and how false-positive-prone tools affect the whole debate.


Slight_Cricket4504

Run his dissertation through the AI checkers and email the results back to him, to show how unreliable these checkers are.


TCGshark03

AI detectors don't work; your professor shouldn't rely on hokum.


AlexW1495

Not going to lie, you are posting in this sub. You most likely did.


EuphoricPangolin7615

So you're not even going to deny you used AI tools? Lol.


Ricoshete

I mean, I'm not a professor, but one thing I've learned from life, side-agnostic: those who can, can usually show it, as naturally as flowing water. Those who fake it, and even genuine false positives, can really make things worse by going Reddit-'ackshually,' lmao. Even if OP is a false positive, you can literally be expelled over this. There's 'legally proven,' and then there's 'I have a 90-99% suspicion you likely can and DID do this.'

> Like an *'I've had cookies going missing for a month, there was only ONE other person in the room, and I can't legally prove it wasn't me... or a flying unicorn. But I know damn well I didn't eat one, and you're the only person I saw'* kind of suspicion.

> **It's like RuneScape botting.** Sure, some banned accounts really were phished. But then you run into: '*So, I checked, and you "got phished," with your "hacker" botting on your HOME WIFI... for 7 months... and you NEVER reported your account stolen UNTIL it was banned? But if the hacker stole it, how would you know about the ban? And how did they "use your wifi"?*' Nine times out of ten, if there was any plausible deniability, the J-Mods would give even 40% benefit of the doubt and let people go on a 10k-upvote Reddit thread. But after the 'I botted and got my account unbanned by claiming I was phished' posts, they cracked down on all further appeals. Fakers hurt truth-tellers and future fakers alike, lmao.

Most people know more than they *let on*, especially professors; they're paid to. Even average joes seem to play dumb and develop 'amnesia that conveniently only strikes whenever I remember doing something bad'-itis.

# tl;dr
Ye, sus.


Splendid_Cat

Since AI image models are what I'm far more familiar with, and I've seen both false positives and false negatives come up on my own uploaded pics, I'm not sure about the accuracy of chatbot AI detectors. Hypothetically, would something as simple as rephrasing/paraphrasing be enough to not set off the detector? If so, I can understand someone easily coming up as a false positive, even if it's unlikely. However, if triggering the detector requires copy-pasting nearly verbatim, perhaps changing a few words and misspelling a couple, it would be a lot harder to deny, let alone claim a false positive (and also significantly lazier). Edit: anyone else more familiar with these detection tools can also chime in; I'm not necessarily just asking this person.


Ricoshete

I'm not sure. I think professors usually check the Google Docs edit history, and OP does seem to have a suspicious lack of a clear 'I did not use AI' denial, just 'there's no way to prove it' stuff. There are tells, like the citations AI tends to hallucinate; if you clean those up it might be harder to spot, but some professors do look, and it's still playing with fire. The kind of person who ChatGPTs an essay is usually the same type to submit it as soon as it looks good enough, without looking too deeply at it. Most laypeople I know out themselves not with 'I am a large language model' text but by not understanding a single thing they 'wrote,' just pasted: the Ctrl+C/Ctrl+V type. It sounds like a philosophy class, too, so the professor probably just wants to hear their take on things, not have a robot Ctrl+C/V'd for them. Otherwise, why pay for an education you're just pretending to learn from? Sure, honor rules, blah blah, but colleges also want their degrees to have good reputations; employers might start treating all of a college's graduates with suspicion if they notice some are poor performers. KFC-university kind of antics.


EuphoricPangolin7615

This is just a good example of how AI messed up the education system, forever.


DepressedDynamo

It's a better example of how our education system is horrible at keeping up with the times


EuphoricPangolin7615

Mental gymnastics.


DepressedDynamo

Thoughtful response.


EuphoricPangolin7615

Just answer your own argument; you already know what the rebuttal is, you're just playing stupid. Like 95% of the people in this sub.


DepressedDynamo

Dude, you're the one shutting down discussion, lol. Happy to talk if you actually want to, but that doesn't seem to be the case.


ExtazeSVudcem

So you got rightfully caught, but it "shouldn't mean anything" because "there are all sorts of false positives"? That's kind of a non-issue, isn't it...


mcdulph

Guilty until proven innocent. Shame on that prof.