People were frequently using “ignore previous instructions and do X” to unmask ChatGPT-generated text in a lot of contexts, from teachers using it to sabotage “AI”-assisted plagiarism to people unmasking “AI”-driven live chats run by fraudsters. Then OpenAI disabled it. I really have to assume they realize that some form of fraud is the single biggest use case (even, essentially, among corporate users) and don’t want to step on that.
Except that it made it easy to identify low-effort uses like student plagiarism, SEO spam, etc., which make up a good percentage of what it’s used for, and which are pernicious problems.