OpenAI could watermark the text ChatGPT generates, but hasn't

Even though it could be trivially circumvented, why not use the tool to catch the laziest people and go from there?
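OpenAI hasn't disclosed how its watermark would actually work, so as a hedged illustration only, here is a toy sketch of the general "green-list" approach from the published watermarking literature: the generator nudges sampling toward a pseudorandom subset of the vocabulary at each step (seeded by the preceding token), and a detector who knows the seeding scheme checks whether that subset shows up suspiciously often. Everything here (the `is_green` helper, `GREEN_FRACTION`, hashing plain strings instead of real vocabulary IDs) is an assumption made to keep the sketch self-contained, not OpenAI's actual method.

```python
import hashlib
import math

# Assumed fraction of the vocabulary marked "green" at each step (illustrative).
GREEN_FRACTION = 0.5

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by the previous token.

    A real scheme would hash token IDs from the model's vocabulary; plain strings
    are used here only so the sketch runs on its own.
    """
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def green_rate_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token rate against the unwatermarked expectation.

    Ordinary text should land near z = 0; text produced with green-biased sampling
    drifts toward large positive z, which is what a detector would flag.
    """
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(variance)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(f"z = {green_rate_z_score(sample):.2f}")  # unwatermarked text: roughly 0
```

The weakness is also visible in the sketch: the statistic only accumulates over adjacent token pairs, so paraphrasing, translating, or running the essay through another model scrambles those pairs and erases the signal. That's the "trivially circumvented" part, but it still costs more effort than copy-pasting, which is exactly the laziness a detector like this would catch.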

That said: I realize the difference doesn't matter to most people, but we shouldn't just assume everyone is using ChatGPT, especially when new releases come out so often. Are students exclusively using ChatGPT? I somehow doubt there are any teachers even trying to check for plagiarism against all the main LLMs out there: ChatGPT, GPT-4o, and GPT-4o mini; Claude 3.5 Sonnet, Opus, and Haiku; Gemini 1.5 Pro and Gemini Advanced; Llama 3.1 8B, 70B, and 405B. All can be easily jailbroken. All have somewhat different styles. All can give wildly different results from even small changes in prompting. Claude 3.5 Sonnet in particular makes it easy to iterate on responses to get better and more varied output.

Ultimately, if you are a teacher and you want an essay written (or an assignment done) without AI, make it an in-class assignment. That's not fundamentally different from a math teacher wanting to control when and how students use calculators. You can also (for now) require students to show their work, the way math homework does: have them submit their metadata, including outlines, notes, citations, early drafts, and revisions.

Otherwise, you need to learn how to make sure AI use supports, rather than detracts from, the learning the assignment is meant to produce. Just as happened with the internet in general (finding good information no longer took a trip to the library), you need to train students in the effective use of the available tools and then raise the standard so that a lazy effort is no longer worth a good grade.
