OpenAI could watermark the text ChatGPT generates, but hasn't

Another important risk we are weighing is that our research suggests the text watermarking method has the potential to disproportionately impact some groups. For example, it could stigmatize use of AI as a useful writing tool for non-native English speakers.

It’s clever to couch the desire for your bot slurry to be as pervasive and hard to detect as possible as a matter of justice for the marginalized; but (aside from the significant odds that it’s a purely cynical invocation of a concern they know other people, not themselves, will raise) it seems like a strange argument to make.

I suspect there are instances where demanding written English proficiency is just shibboleth testing; but it’s not as though Sam Altman’s Social Justice AI is primarily in the business of breaking down those barriers, and that use case is dwarfed by the number of cases where the point is circumventing an actual skill requirement or educational exercise, or churning out bot slurry for SEO and phishing campaigns.