A new (slow) open source JPEG algorithm makes images 35% smaller and looks better than older compression systems


#1

Originally published at: http://boingboing.net/2017/03/22/improved-psychovisual-models.html


#2

Good thing no one is going to slap a patent on this, right?


#3

They probably aren’t, because the code has already been released under an open source license, which (a) would make it difficult or impossible to later get and enforce a patent, and (b) would create terrible optics if Google tried.

If they come up with a way to improve the performance by multiple orders of magnitude (making it useful in the real world, and for video), it’s the improvement that would be commercially valuable, so that’s the thing they’d want to patent, anyway.


#4

John Gruber says that it took eight minutes to compress a single image on his top-of-the-line iMac. He concludes that this new algorithm is going to be deployed mostly on server backends, rather than in image-editing programs.

I might be interested, though, in an app that offers to recompress all my images using the algorithm in the background: check back in a week or two and see how much disk space has been freed up by optimizing my photo library.
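A rough sketch of that background job, assuming the guetzli binary is on your PATH (the function name is my own invention; quality 95 is guetzli's default, and it refuses values below 84):

```shell
#!/bin/sh
# Hypothetical background job: re-encode every JPEG in a directory with
# guetzli, keeping the re-encoded file only if it is actually smaller.
recompress_dir() {
  for src in "$1"/*.jpg; do
    tmp="${src%.jpg}.guetzli.jpg"
    guetzli --quality 95 "$src" "$tmp" || continue
    if [ "$(wc -c < "$tmp")" -lt "$(wc -c < "$src")" ]; then
      mv "$tmp" "$src"    # smaller: replace the original
    else
      rm -f "$tmp"        # not smaller: keep the original
    fi
  done
}
# recompress_dir ~/Pictures   # expect minutes per image, per Gruber's numbers
```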


#5

Is it just me, or do all of these cat’s eyes look identical?


#6

It’s just you. Check out all the noise above the eye in the middle pic. There’s a little in the right one, but it’s not bad.


#7

So it’s extremely slow now. What are the chances it won’t be improved inside of a year?

Now imagine if Microsoft owned it.


#8

It would be as intuitive and productive as Word. :+1:


#9

I just ran this on a whole slew of images. “Slow” is true - and the memory use is kind of drastic as well; I think a couple of 12Mpix images peaked over 6GB. Not the sort of thing you run automatically on uploaded images on your dinky 1GB-RAM VPS. Still, it does what it claims, and I hope it’s possible to prune the memory usage a bit. (It might not be - I have no idea what sort of structures this builds internally.)


#10

I tried this on my MacBook Air over the weekend (would have used my Windows desktop, but guetzli requires dark magic to run on Windows right now). It’s interesting, but I doubt you’ll see it even on server backends any time soon.

The memory and CPU time involved are intense.

What’s kind of funny to me is that BBS ran the screenshot I uploaded through some other library and increased the file size by about 3 KB. :smiley:


#11

Revolutionary compression algorithm even comes with a customized logo


#12

For the CPU issue, one would think this would be a good place to let a GPU do the heavy lifting instead. I’ve certainly found that to be the case for video transcoding. An AMD FX-6100 (yeah, I know, Bulldozer sucks) can’t transcode 1080i MPEG-2 to H.264 in real time, but NVENC-enabled ffmpeg on the same system, using an Nvidia GTX 960 at the “slow” setting can do it over 7x faster than real time.
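For the curious, that comparison corresponds to roughly these ffmpeg invocations (the wrapper function is mine; the second call assumes an ffmpeg build with NVENC support):

```shell
#!/bin/sh
# Same source, two encoders: libx264 on the CPU vs. h264_nvenc on the GPU.
transcode() {
  # $1 = input file, $2 = video encoder name
  ffmpeg -y -i "$1" -c:v "$2" -preset slow -c:a copy "${1%.*}.$2.mp4"
}
# transcode capture.ts libx264      # software encode, CPU-bound
# transcode capture.ts h264_nvenc   # hardware encode via NVENC
```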


#13

Whether or not a GPU would help depends entirely on how this new algorithm works.
If it’s easy to parallelise then a GPU may help, but otherwise it’s not going to make any difference.


#14

psychovisualization

Album name!


#15
#15

You mean it would piss me off with ‘features’ like the Ribbon, and alignment that’s difficult in Word but works flawlessly and smoothly in LibreOffice?


#16

I read an article a couple of days ago (which naturally I can’t now find) saying that Guetzli is basically much the same as MozJPEG, but much slower. I’ve had pretty good results using this online encoder in the past: https://mozjpeg.codelove.de/

See also: http://www.jpegmini.com/


#17

Is the encoder available for public use? I didn’t notice a link to it in the articles or paper.


#18

It’s available on GitHub.


#19

I’m going to use this as an excuse to link to BPG.

Sometimes I daydream about the bandwidth savings BPG would provide for animated gifs on this BBS alone.


#20

I’ve read some comparisons between the two. Like with most lossy compression at comparable quality, the benefits vary depending on what you’re compressing. In terms of CPU and RAM usage during the compression phase, mozjpeg is currently the clear winner. Whether it’ll stay that way is unclear at the moment: the current implementation of guetzli seems pegged to a single core, and if it can be threaded it might get a lot faster.

Which one will result in a smaller, nicer-looking image is going to depend on the original image. If I were building a service right now, I’d go with Mozilla’s library. In six months or a year, who knows?

Jpegmini seems to be involved in a lot of astroturfing right now. I’m not saying they’re not good, but the astroturfing makes me distrust them.