Originally published at: https://boingboing.net/2018/01/30/rent-seekers-rant.html
Hah he just goes full on “cats and dogs, living together, mass hysteria!” at the end there. What a buffoon.
Wow, he’s butthurt about the rent-seeking gravy train reaching the end of the line. His argument that research and improvement will come to an end is laughable; the continued advancement of the Linux kernel is an excellent counterexample.
Maybe if MPEG-LA hadn’t become too greedy with HEVC, they wouldn’t be in this position.
Companies will slash their investments in video compression technologies. Thousands of jobs will go and millions of USD of funding to universities will be cut. A successful “access technology at no cost” model will spread to other fields.
If only some of the companies in the Alliance - which includes Amazon, Apple, Google, Facebook, Hulu, Netflix and many others - had some kind of incentive to produce better codecs in the future.
Like, say, bandwidth costs.
Damn the internet and its completely cost-free environment! DAMN IT TO HELL AND BACK!
Is there a nice desert island or remote gulch where we can ship Chiariglione and Ajit Pai to build their rent seeking Randian infrastructure?
Christ, what an asshole.
The situation he describes appears to be a pretty standard instance of an “unstable cartel,” with the interesting twist of added patent trolls (who are essentially cartel members even more predisposed to short-sighted profit maximization).
It’s not like the MPEG-LA was done in by moral suasion in favor of software freedom or the like; nor was it just a straight “your price is too high” problem (the price was high enough to motivate janky hacks, like SoCs with codecs disabled in firmware, or some consoles and OSes making certain codecs an add-on because the price of making them a baseline feature was too high; but it wasn’t all that high in absolute terms).
What really ruined the deal, though, is that the MPEG-LA couldn’t actually sell a complete license to use their products: they could sell most of one, since the bulk of the patents were in the pool, but not all of them. You could be fully paid up with them and still be just as vulnerable as when you started against whatever patents weren’t in the pool.
If they had been able to actually guarantee an otherwise unencumbered license, this would just be a dispute over price, one that could be resolved with a few discounts if necessary. Their inability to get everyone to add their patents to the pool ruins that: whether it’s members withholding a few choice bits or bomb-throwing NPEs who don’t care, they couldn’t get everyone to agree, so they can’t actually deliver the product they hope to sell.
Edit: see the diagram he supplied of the three patent pools, mostly disjoint, plus outlying entities that reserve the right to do whatever. Can’t imagine why MPEG-LA licenses aren’t selling better…
He writes as though “the AOM” is some sort of amorphous entity that has an appetite for IP but doesn’t actually consist of anybody:
“There will simply be no incentive for companies to develop new video compression technologies, at very significant cost because of the sophistication of the field, knowing that their assets will be thankfully – and nothing more – accepted and used by AOM in their video codecs.”
This only makes sense if you suppose an environment where video codecs are somehow worth real money even though the users of video codecs are just some vague, purely parasitic body, rather than what they actually are: a rather long list of interested parties that do rather a lot of development.
If you’ve ever used a Raspberry Pi+Kodi as a MythTV front end, you’ve undoubtedly run into that one. You have to buy a license key for MPEG-2 if you’re going to be playing over-the-air ATSC recordings (H.264 support is paid for in the Pi’s price).
I wonder if that will continue being the case since, in the US at least, the last MPEG-2 patents are due to expire in very short order? One of them (6181712) expires today, in fact, and the last (7334248) on Feb. 14. What a coincidence that Mr. Chiariglione is making all this noise!
Not to sound offensive, but that guy’s cheese slid off his cracker.
I’ll give him that, but not for the reasons he states.
In the past 30 years we have gone from consumer computers running at about 8 MHz (scalar units only, and frequently not pipelined) with enough RAM to hold “a few” frames of low-resolution video, to consumer computers running at thousands of MHz (superscalar, frequently with vector units, and always deeply pipelined) with enough RAM for many, many frames of high-resolution video. However, today’s consumer computers are barely getting faster year over year. The old “18 months is a doubling” is gone for all markets other than cell phones, and even there it is still just more or less catching up to laptops/desktops. I expect it to hit that same wall (maybe not at the exact same point, but in the same neighborhood).
So today’s video compression algorithms would just plain not be possible to run in reasonable time on 1988 era consumer computers. The video compression algorithms of 2048 are likely to run “a little slow” on 2018’s computers, but not unreasonably slowly (as in what 2048 does in “real time” might be an overnight job, but not years).
That still leaves some room for improvement, but most of the low-hanging fruit has already been snatched up (and even setting aside hardware resources, we have been working on this stuff for decades, so most or all of the non-hardware-related low-hanging fruit has been grabbed already!).
I disagree that an ISO-style standard would advance faster than an open standard consortium, though. I just think both will go slower than either/both methods would have (or did) starting from 1988.
I would be curious to know how much room team lossy compression thinks they have; what tricks they are looking into; and what we will treat as acceptable fidelity in various contexts.
Lossless compression is hardly trivial as a development; but being able to ask “Well, can you reverse it or not?” provides a vastly better-behaved standard than “does it sound lousy or not?” or any of the myriad other tests one might impose based on human perceptual quirks and a commitment, or lack thereof, to fidelity. (Though not entirely well behaved: there is probably a delightful and unhelpful universe of functions that are fully bijective but differ from hash functions only in returning values of arbitrary size, not in the difficulty of reconstructing inputs from outputs; those are “lossless compression” in only the most charitable of senses.)
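The “can you reverse it?” test is mechanical enough to write down. A toy sketch using Python’s standard zlib (the input bytes and compression level are arbitrary choices for illustration):

```python
import zlib

# The defining property of a lossless codec: every input must
# round-trip byte-for-byte through compress -> decompress.
data = b"the same frame repeated " * 1000

compressed = zlib.compress(data, level=9)
restored = zlib.decompress(compressed)

assert restored == data                 # reversibility: pass/fail, no judgment calls
print(len(data), len(compressed))       # highly redundant input compresses well
```

There is no equivalent one-line assertion for a lossy codec; the best you get is a perceptual metric and an argument about thresholds, which is exactly the poorly-behaved standard described above.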
Lossy compression admits of any trick that lossless does (in the trivial case you can just apply lossless compression at some stage if it seems like a good idea), plus whatever perceptual tweaks you can get away with. I imagine that the classics, like beating up on the chroma channel, are pretty much mined out; but I have no real sense of how much room is believed to exist for more exotic ones, or how far people expect to be able to depart from fidelity for various applications and get away with it.
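“Beating up on the chroma channel” is worth making concrete: 4:2:0 subsampling keeps the brightness (luma) plane at full resolution but stores each chroma plane at quarter resolution, because human vision resolves color much more coarsely than brightness. A sketch with made-up frame dimensions (real codecs do this per-plane with filtering; block averaging here is a simplification):

```python
import numpy as np

h, w = 4, 4
rng = np.random.default_rng(0)
luma = rng.integers(0, 256, (h, w))      # full-resolution Y plane, kept intact
chroma = rng.integers(0, 256, (h, w))    # one chroma plane (Cb or Cr)

# Average each 2x2 block: the chroma plane shrinks to (h//2, w//2),
# discarding color detail the eye mostly can't see anyway.
sub = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(luma.shape, sub.shape)   # chroma now carries 4x fewer samples
```

The information thrown away here is gone for good; that irreversibility is the whole bargain of lossy compression.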
(By way of a fun, if probably extreme, example: a heraldic blazon is, for most coats of arms, going to be a lot more compact (especially if you compress the text) than any but the most gruesomely compressed picture; and is supposed to be sufficient for a suitably skilled but not otherwise informed reader to reconstruct the original; not pixel-for-pixel but in all relevant particulars. Do we suspect that the deep-dreaming neural networks of the world might have similar arrangements for hallucinating adequately-similar cat videos from some relatively tiny set of parameters extracted from the original? And will we be willing to watch that just to save a hundred kilobits per second?)
Classic Republican argument. Industry must be allowed unlimited profits, and also government subsidies, or the Job Creators will all go Galt.
That’s a good point. What would have taken a suitcase full of hard drives (or VHS tapes) 30 years ago now fits on a chip the size of your thumbnail. Another 30 years of advancement at the same rate would be inconvenient. You’d need a microscope to find the chip with your movie collection on it and tiny tweezers to plug it in.
30 years ago, you could watch a single low-res image gradually download one scanline at a time. Now a full-length movie with audio and subtitles in multiple languages downloads in a few minutes, far faster than you could watch it. What use would another 30 years development at the same rate be, especially when the trend is switching from downloading to streaming and people don’t need it to transmit much faster than play speed?
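The arithmetic behind “downloads far faster than you can watch it” is simple enough to sketch; all figures below are illustrative assumptions, not measurements:

```python
# Rough, assumed numbers: a compressed HD movie on a common fixed-line link.
movie_gb = 4.0        # compressed full-length HD film (assumption)
link_mbps = 100.0     # typical modern broadband speed (assumption)
runtime_min = 120.0   # a two-hour film

# GB -> megabits (decimal), then seconds -> minutes.
download_min = movie_gb * 8000 / link_mbps / 60

print(round(download_min, 1))   # ~5.3 minutes to fetch 120 minutes of video
```

At roughly 20x faster than play speed, shaving bits per frame stops being the thing consumers notice.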
For consumers, sure. For pros, who make/manage/store/deliver the content? Lots of advantage.
I hear Rockall is just lovely at this time of the year. Or indeed at any time of the year.
That’s reserved for British fascists and nationalists who want to defend our borders.
How does Australia feel about donating these islands to the libertarian-capitalist cause?
The islands are among the most remote places on Earth: They are located approximately 4,099 km (2,547 mi) southwest of Perth, 3,845 km (2,389 mi) southwest of Cape Leeuwin, Australia, 4,200 km (2,600 mi) southeast of South Africa, 3,830 km (2,380 mi) southeast of Madagascar, 1,630 km (1,010 mi) north of Antarctica, and 450 km (280 mi) southeast of the Kerguelen Islands
The Kerguelen Islands are also uninhabited.
Rack off, the HIMIs have an ecosystem.
The LibCaps are welcome to launch a seastead out in international waters.
Sure, but now you’re talking about a very specialised set of users who have essentially limitless IT budgets, rather than “everyone on earth” using their smartphone.
Individual video consumers (i.e. us!):
I think the trend away from downloading towards streaming is partly economic, but also partly because people consume videos on devices with reliably low storage (phones!). Personal devices that could hold more video may change that significantly. Or maybe the economic factors will dominate, but at least you will be able to hold more temporary downloads of streaming content if the providers allow it (Netflix does for most things, so I can watch that on a bus commute, while HBO Now/Go doesn’t, and I get less value from it). Better compression would help here. So would cheaper SSD.
Guess who also makes videos of their kids/dogs/dates? We do! Guess who runs out of space on our phones/laptops/whatnot? We do! Better compression would be helpful. (So would cheaper storage.)
Who is increasingly dealing with limited “unlimited” internet services? (or the just as irritating, but at least more honest “get N gigs for one low monthly price!”). Yep, us. Here storage won’t help, but improved compression will.
On the producer side (largely not us):
You want to be a streaming provider? Sending out thousands or millions of streams? Storage might be an issue if you have a huge catalogue…but internet bandwidth is a serious problem here. Better compression helps.
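A back-of-envelope sketch makes the provider-side incentive obvious. Every number here is assumed for illustration, not taken from any real service:

```python
# How much egress a hypothetical streaming service saves if a new
# codec halves the bitrate needed for the same perceived quality.
streams = 1_000_000            # concurrent viewers (assumption)
old_bitrate_mbps = 5.0         # rough 1080p figure for an older codec (assumption)
new_bitrate_mbps = 2.5         # a codec with ~50% better efficiency (assumption)

old_egress_gbps = streams * old_bitrate_mbps / 1000   # total Gbit/s out the door
new_egress_gbps = streams * new_bitrate_mbps / 1000

print(old_egress_gbps, new_egress_gbps)   # 5000.0 vs 2500.0 Gbit/s
```

Halving terabits per second of sustained egress is exactly the kind of incentive the AOM members don’t need a patent pool to feel.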
Are you an internet service provider? Guess what is using your bandwidth? Yep, mostly video. If better compression is invented you can’t force anyone to use it (except by charging per byte transmitted), but if they do things get easier for you (er, unless you started charging too much per byte, in which case better compression causes you to make less money…too bad, try not to do that!)
I’m sure I missed a lot of other use cases, but better compression would indeed make a lot of things better, or at least cheaper. At this point I don’t know if it will provide us any transformative experiences (like when we went from pictures rare to pictures expected, or from pictures that looked like crap to ones that look like photos, or from videos never to rare to common to OMG why are the ads on this website moving!). Maybe something like VR shared spaces (as in real locations, not CG…or incredibly detailed CG?). Then again I think today’s compression can do that, so maybe not so much that. Even without transformative cases (and they are seldom easy to identify in advance anyway) it is a brass ring that I’m glad someone is grabbing for.