It’s especially important for newspapers, libraries, etc. to hang on to individual published copies until copyright runs out and they can be scanned and freely copied and distributed. This has of course been getting harder and harder as copyright terms are extended. How many newspaper companies that were publishing in 1923 are still in business? How many libraries, universities, or other institutions keep copies that old around until their copyright terms end and they can be freely distributed?
Not all newspapers offer even paid access. The Chattanooga Times-Free Press has digital archives, but only back to 1996 or so, and they do not allow the humble reader access to the morgue or its indexes of articles before that date. The library has microfilm but no indexes, so tough luck if you’re looking for a specific article and cannot recall its author.
My local weekly keeps their archive at the Probate Court where anyone can access print copies. No index, though.
I’ve noticed that a number of news sites are bundling their articles up as JSON inside JavaScript. When the page loads, the script inserts the data from the JSON into the DOM to make it readable. The actual script isn’t part of the page itself; sometimes it’s hosted on another site entirely. (If only we had some standard like HTML5…)
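For the curious, the pattern looks roughly like this. A minimal sketch in TypeScript; the element IDs and the JSON shape are made up for illustration, not taken from any particular site:

```typescript
// Hypothetical page source:
//   <script id="article-data" type="application/json">{"headline": "...", "body": ["..."]}</script>
//   <div id="article"></div>
interface Article {
  headline: string;
  body: string[]; // one entry per paragraph
}

window.addEventListener("DOMContentLoaded", () => {
  // Read the JSON blob out of its script tag and parse it.
  const raw = document.getElementById("article-data")?.textContent ?? "{}";
  const article: Article = JSON.parse(raw);

  // Build the visible article from the parsed data.
  const container = document.getElementById("article");
  if (!container) return;

  const h1 = document.createElement("h1");
  h1.textContent = article.headline;
  container.appendChild(h1);

  for (const para of article.body) {
    const p = document.createElement("p");
    p.textContent = para;
    container.appendChild(p);
  }
});
```

Until that script runs, the "page" a crawler or archiver sees is an empty div and a blob of data.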
At the moment it’s not a big deal to reach inside the script black box and pull out the article text, but I wonder where this is heading. There’s a whiff of impending DRM in the air, which would make archiving news articles useless.
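As it stands, pulling the text back out takes only a few lines. A sketch, assuming the same made-up "article-data" tag as above; a real site would need its own selector and JSON shape:

```typescript
// Fetch the page and extract the article from the embedded JSON blob,
// without ever executing the site's JavaScript.
async function extractArticle(url: string): Promise<string> {
  const html = await (await fetch(url)).text();

  // Find the script tag carrying the JSON payload.
  const match = html.match(
    /<script[^>]*id="article-data"[^>]*>([\s\S]*?)<\/script>/
  );
  if (!match) throw new Error("no embedded article JSON found");

  const data = JSON.parse(match[1]);
  return [data.headline, ...data.body].join("\n\n");
}

// This works today because the payload is plain JSON sitting in plain HTML.
// An encrypted or DRM-wrapped payload would break exactly this step.
extractArticle("https://example.com/some-article")
  .then((text) => console.log(text))
  .catch((err) => console.error(err));
```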