Chroma sampling, or combining with command-line function?

I recently took over the running of a website with a large number of existing images. To begin with, a bunch of images were straight from the camera and needed downsizing. Retrobatch handled that with aplomb. Thanks!

Tests with various site performance tools (PageSpeed, YSlow) also suggested that the image compression could be improved. Google's PageSpeed help suggests running images through ImageMagick's convert tool (described in Google's developer docs). The size savings from a combination of chroma sampling (4:2:0), progressive JPEG interlacing, quality reduction to 85% and enforcing sRGB were considerable, without greatly affecting image quality. I haven't identified the main contributing factor, but I would have been surprised if the images had previously been saved at 100% JPEG quality, so I suspect chroma sampling played a major part.
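For reference, the pass described above looks something like the following, based on the convert invocation in Google's developer docs. The file names are placeholders, and the first convert call just fabricates a sample image so the snippet runs standalone:

```shell
# Skip gracefully on systems without ImageMagick installed.
command -v convert >/dev/null 2>&1 || { echo "ImageMagick not found" >&2; exit 0; }

# Fabricate a small sample JPEG so the snippet runs standalone;
# in practice input.jpg would be one of the site's existing images.
convert -size 640x480 gradient:blue-white input.jpg

# The optimization pass, per Google's developer docs:
#   -sampling-factor 4:2:0   chroma subsampling
#   -strip                   remove metadata and embedded profiles
#   -quality 85              JPEG quality
#   -interlace JPEG          progressive encoding
#   -colorspace sRGB         enforce sRGB
convert input.jpg -sampling-factor 4:2:0 -strip -quality 85 \
  -interlace JPEG -colorspace sRGB output.jpg
```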

Is that a setting Retrobatch can alter? If not, might it be a candidate for incorporation into Retrobatch? Likewise, is this something that could be incorporated into a Retrobatch workflow as a (bash) script, if ImageMagick's convert is installed on one's system?

I’m trying to work out a way of making such things super simple for the client, e.g. a drag-and-drop variant that outputs the various image sizes already pre-optimised for the internet.
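On the bash-script idea: assuming ImageMagick's convert is on the PATH, the core of such a drag-and-drop wrapper could be a script that loops over the files handed to it. The widths (480/960/1600) and the "-NNNw" output suffix here are my own assumptions for illustration, not anything Retrobatch-specific:

```shell
#!/bin/bash
set -eu

# Skip gracefully on systems without ImageMagick installed.
command -v convert >/dev/null 2>&1 || { echo "ImageMagick not found" >&2; exit 0; }

# optimize_all: for each image passed in, write web-optimized JPEG
# variants at a few widths, using the same flags as Google's recipe.
optimize_all() {
  local src base w
  for src in "$@"; do
    base="${src%.*}"
    for w in 480 960 1600; do
      # "${w}>" only shrinks images wider than $w; smaller ones pass through.
      convert "$src" \
        -resize "${w}>" \
        -sampling-factor 4:2:0 \
        -strip \
        -quality 85 \
        -interlace JPEG \
        -colorspace sRGB \
        "${base}-${w}w.jpg"
    done
  done
}

# e.g. files dropped onto a wrapper app arrive as arguments
optimize_all "$@"
```

A droplet built around something like this (via Automator or a small wrapper app) would let the client drag a folder of originals in and get the web-ready sizes out.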

The new command line options in the 1.1 beta might help you out:
http://flyingmeat.com/download/latest/#retrobatch

As well, we’ve got an Automator action now, which can help with drag and drop.

You can also make a workflow that converts images to sRGB and then strips out the profile to save on file size. From there you can set the output format to JPEG, lower the image quality to increase compression, and add progressive interlacing.

Those should really do it for you!