I recently took over the running of a website with a large number of existing images. To start with, a bunch of the images were straight from the camera and needed downsizing; Retrobatch handled that with aplomb. Thanks!
Tests using various site performance tools (PageSpeed, YSlow) also suggested that the image compression could be improved. Google's PageSpeed help suggests running images through ImageMagick's convert tool (described in Google's developer docs). The size savings from a combination of chroma subsampling (4:2:0), progressive JPEG interlacing, quality reduction to 85%, and enforcing sRGB were considerable, without greatly impacting image quality. I haven't identified the main contributing factor, but I would have been surprised if the images had previously been saved at 100% JPEG quality, so I suspect chroma subsampling played a major part.
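For reference, this is roughly the invocation as I understand it from Google's docs, wrapped as a small function (file names are placeholders; it requires ImageMagick's convert on the PATH):

```shell
#!/bin/sh
# Optimise a JPEG for the web: 4:2:0 chroma subsampling, stripped
# metadata, quality 85, progressive (interlaced) encoding, sRGB.
# Requires ImageMagick; file names below are placeholders.
optimise_jpeg() {
  convert "$1" \
    -sampling-factor 4:2:0 \
    -strip \
    -quality 85 \
    -interlace JPEG \
    -colorspace sRGB \
    "$2"
}

# e.g. optimise_jpeg photo.jpg photo-web.jpg
```

The -strip flag also drops EXIF and colour-profile metadata, which on camera originals can be a surprising chunk of the file size on its own.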
Is that a setting that Retrobatch can alter? If not, might it be a candidate for incorporation into Retrobatch? Likewise, could this be incorporated into a Retrobatch workflow as a (bash) script, assuming ImageMagick's convert is installed on one's system?
I'm trying to work out a way of making such things super simple for the client, e.g. a drag-and-drop variant that outputs the various image sizes, already pre-optimised for the web.