Any performance tips for large batches?

Hi all,

I just started using Retrobatch seriously, to generate a bunch of images for app submissions (not iOS). The multi-scale node doesn’t seem to work the way I need (specific pixel resizing), so I’m having to create a ton of scale → write images nodes. The workflow I have right now has over 100 nodes, and it’s getting a little sluggish moving nodes around to try and keep the thing organized.

I have snap to grid and auto connecting both disabled, which I thought might help, but it doesn’t seem to make editing any smoother. I’m running 10.15 on a 2020 Intel MBP, so I wouldn’t think this would be a huge performance drag, but that’s been my experience so far.

Is this just how it is when you have a bunch of nodes, or am I missing anything?

Thanks,
Joe

Over 100 nodes is pushing Retrobatch a bit more than I had envisioned, but I’ll take a look at this case and see what the slowdown might be.

I might also be able to come up with a better solution for you on the scaling front, with a quick JavaScript node, if you can explain the requirements a bit.

Thank you - I submitted my massive workflow via email this morning (along with some separate support inquiries). Once we get that sorted, I can revisit this topic and post the findings.

Please post here with the findings, I’m intrigued!

Current status: I’ve written a little JS plugin which should do what @chellman is after, with far fewer nodes necessary. It’s called “Fixed Width Multi-scale” and you can grab it from here: Retrobatch JavaScript Nodes. Here’s the description:

Resize an image to multiple fixed widths, values comma separated, with an optional name. E.g.: “100 foo_small, 200 foo_med, 400”, where the 400-width version keeps the original name.

Right now it uses Core Image for the scaling, but I might eventually tweak it to use CG if the target size is smaller than the original.
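For the curious, here’s a rough sketch of how a spec string like that could be parsed into width/name pairs. This is just an illustration of the format, not the plugin’s actual code, and it leaves out the Retrobatch node API calls that do the scaling and writing:

```javascript
// Parse a spec like "100 foo_small, 200 foo_med, 400" into
// [{width: 100, name: "foo_small"}, {width: 200, name: "foo_med"}, {width: 400, name: null}].
// Entries without a name would keep the original file name.
function parseWidthSpec(spec) {
    return spec.split(",").map(function(entry) {
        var parts = entry.trim().split(/\s+/);
        var width = parseInt(parts[0], 10);
        if (isNaN(width)) {
            return null; // skip malformed entries
        }
        return { width: width, name: parts.length > 1 ? parts[1] : null };
    }).filter(function(e) {
        return e !== null;
    });
}

// Example:
// parseWidthSpec("100 foo_small, 200 foo_med, 400");
// => [ { width: 100, name: "foo_small" },
//      { width: 200, name: "foo_med" },
//      { width: 400, name: null } ]
```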
