Originally published at: Students fight false accusations from AI-detection snake oil
…
I feel sorry for any student having to navigate the field of “AI”-driven turds that is the current American educational environment. One way or another, I don’t see anyone coming out of school for the better because this technology exists.
I promise you there’s at least one lawyer somewhere in the country keeping an eye on this. These schools need to be careful or they’re going to find themselves on the wrong end of a class action lawsuit. College is expensive. Failing a class costs actual money. Getting expelled and trying to reapply to a different college costs even more money, plus the possibility of lost future earnings from getting a degree from a less prestigious school…holy crap, this will add up to really big bucks really fast. Seriously…anyone who works in the general counsel’s office at any college or university should be immediately sending out a memo telling all faculty to stop using these tools.
And, presumably, students who don’t rely on AI chatbots for content but do use writing aids like Grammarly.
The result of using Grammarly has occasionally been accused of being AI-generated by detection engines such as Turnitin.[45] Schools are struggling to develop rules about its use that are consistent and fair, with some teachers recommending Grammarly to all of their students and others rejecting it.[46][47]
At my previous university, we had explicit instructions not to give zeros because of cheating suspicions. Those were supposed to go to the academic integrity committee, which I sometimes served on and which had a pseudo-judicial review panel for these things. The prof sends in evidence, the student gets to rebut, that kind of thing. I think some ignored the requirement because it was a pain and/or because they might not have wanted the student to face the harsher penalties that were possible in the formal process.
This was post-GPT, but before GPTs had output good enough to be a plausible submission. Plenty of cases were in the category of “I pasted the first few lines into Google and got your whole paper/program, but with the names changed, as the first hit.” Still, students denied cheating until that point.
Better start keeping your rough drafts, kiddos, and don’t just update/save over the iterations…
AI has tremendous potential to help people learn, since it has unlimited patience and multimodal interfaces (for example, conversing with someone, repeatedly, about a subject until they have learnt it), but it is never going to be 100% accurate, so it should never be used as a judge.
If I were a student these days I’d give serious thought to just screen-recording the whole writing process.
I hope that’s true, but so far I haven’t seen any of the upside potential for AI in education play out, while there are countless examples of the downside potential we’ve been seeing lately.
I posted this in the earlier topic.
I’m an academic librarian. A few terms ago, I worked with a student for about four or five weeks, on and off, on her term paper. She’s bilingual, possibly ESL, though I suspect not. Anyway, I was mostly reading the rough drafts for structure, flow, and comprehension, and helping her with her citations. The instructor flagged the paper as AI generated. I stepped in to say that she’d been working with me for a month or so, but it was exactly what happened in the article: “she’d been flagged before for AI generated work.”
I suppose it’s possible that a student would spend a few minutes generating an AI text, then spend hours every day playing on the internet while looking like they’re writing, and involving the school librarian in the pretense of writing a paper. But yikes, that’s a huge, paranoid assumption. Even when she was done, it still read like a student paper.
I am just on the verge of tossing the big decks of index cards I made for papers I wrote 25 years ago. The charm they have as artifacts of a bygone era is outweighed by the space they take up. (Maybe someone here wants 'em? They’re yours for the price of shipping.)
Back then we were still slightly uncertain about how to even cite Internet sources. I have no idea how teachers even began to weather what was shortly to emerge.
Is there money to be made by establishing a business to install and certify Faraday cages (covering the most common cell-phone frequencies) in educational settings? [semi-winky emoji] (“hell no! they can just download a local app to do a lot of what they need!” …enough local memory for an LLM? wow)
I don’t think that the physical classroom is the location where most of the AI-related fraud is happening. Mostly these are assignments that students do at home and turn in. Plus a huge percentage of classes are being taught remotely now, including about half of community college classes in some areas of the U.S.
Schools should revert to handwritten submissions now; that might help disincentivize letting the laptop do all the work.
Counterpoint: Is there much point in having students do their writing by hand, while preparing them for futures in which they’ll likely never write by hand?
Pretty much the entire education sector in Ireland has decided that “AI detection” services are prima facie illegal, as students’ work is personal data (decided case law in Europe) and once it gets ingested into “AI” your rights over personal data cease.
Also because everyone hates them and thinks they are scammy shit too.
It has the extra benefit of making things frustrating for whoever has to grade them?
Sure, 'cuz it at least helps develop fine motor control, which helps with speed mashing on the gamepads