It’s honestly a bit more surprising (at least to me) than mere laziness:
A situation where you are maintaining some sort of database, with the intention of useful reporting or analysis, of things with potentially overlapping, ambiguous, non-unique, similar, and sometimes misspelled names (whether deliberately or accidentally), where the real world spits on your naive ontology and classification, is exactly the situation where you cook up a more robust identification scheme in order to preserve your own sanity and make the job easier.
Nielsen isn’t exactly an impoverished mom 'n pop shop; and they at least aspire to be taken seriously as a source of audience engagement statistics on a national scale, even as some of the data and search outfits from the internet have started sniffing for blood, and things like the streaming services (where the server logs provide, basically for free, details Nielsen could only dream of having) have become more prominent.
I would have expected them to be using something a bit more… scalable… than “broadcasters self-report their shows, identified by natural-language display name”, which has the virtue of simplicity at a tiny scale but is vulnerable to error (even in the absence of malice) at volume. Some sort of system with UUIDs, or other ugly-but-unambiguous identifiers, seems like it would have become a necessity years ago. Is their “database” made of harried secretaries manually dumping stuff into a big hideous Excel sheet on the shared drive?
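To make the failure mode concrete, here’s a minimal sketch (show names, networks, and numbers all invented for illustration) of why keying records on a self-reported display name quietly loses data, while an opaque identifier keeps distinct things distinct:

```python
import uuid

# Two different broadcasts that happen to share a display name.
# A registry keyed by the self-reported name silently merges them:
by_name = {}
by_name["The News at 9"] = {"network": "A", "viewers": 1_200_000}
by_name["The News at 9"] = {"network": "B", "viewers": 800_000}
print(len(by_name))  # 1 -- network A's numbers were silently overwritten

# A registry keyed by an opaque UUID keeps both broadcasts, and the
# display name becomes just another (freely misspellable) attribute:
by_id = {}
for network, viewers in [("A", 1_200_000), ("B", 800_000)]:
    show_id = uuid.uuid4()  # ugly but unambiguous
    by_id[show_id] = {
        "name": "The News at 9",
        "network": network,
        "viewers": viewers,
    }
print(len(by_id))  # 2 -- both records survive
```

The point isn’t that a dict is a database; it’s that whoever assigns the identifier controls whether collisions (or deliberate near-misspellings) are even representable as distinct records.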