I’m creating a publishing workflow which will begin with large images exported from another application for this express purpose. As such, those large images aren’t needed once Retrobatch does its magic and generates other files.
Is there a best practice method to delete the original files? How, even, would it be done? I can only imagine immediately branching off the initial image fetch (Read Folder in my case) that would run a script to delete each file.
I tried this out tonight. My main pipeline goes Read Images > Set Metadata > Text Watermark, then splits into three parallel branches of Scale > Write Images. I added a new Shell Script node directly off Read Images.
The script is set to run at Every process and contains this:
#!/bin/bash
rm "$1"
On the first attempt, Retrobatch crashed, but that turned out to be because the script didn't have execute permissions. Once I fixed that, it behaved exactly as I wanted. I watched in Finder as it worked: as each trio of output images appeared, the corresponding original disappeared.
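For anyone who hits the same crash, the permissions fix is just marking the script executable. The path and filename below are hypothetical examples, not from my actual setup:

```shell
# Create the delete script (illustrative path) and mark it executable
# so Retrobatch can invoke it from a Shell Script node.
cat > /tmp/delete-original.sh <<'EOF'
#!/bin/bash
rm "$1"
EOF
chmod +x /tmp/delete-original.sh
```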
Yikes! I was doing some tweaking to my workflow, and when I ran it again, the delete ran before all the other processing. The net effect: it simply deleted all the originals and created no output.
So it seems the execution order of two child nodes branching from the same parent is not deterministic.
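One defensive variant I'd consider, sketched below as my own suggestion rather than anything Retrobatch provides: have the delete script refuse to remove the original until the expected outputs actually exist. The output folder and the `-small`/`-medium`/`-large` naming scheme here are assumptions for illustration:

```shell
# delete_if_processed: remove the original image only when all three
# expected scaled outputs already exist. OUTPUT_DIR and the size-suffix
# naming convention are assumptions, not Retrobatch conventions.
OUTPUT_DIR="$HOME/Pictures/Processed"

delete_if_processed() {
  local original="$1"
  local base size
  base="$(basename "${original%.*}")"   # filename without extension
  for size in small medium large; do
    if [ ! -f "$OUTPUT_DIR/${base}-${size}.jpg" ]; then
      return 1   # an expected output is missing; keep the original
    fi
  done
  rm -- "$original"
}
```

This way, even if the delete branch fires before (or without) the write branches, the worst case is that some originals stick around for a later cleanup pass instead of being lost.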