Generating text for the Watermark node in JavaScript

I have a basic flow that watermarks files. I want each watermark to have a randomised string as part of the text template.

I understand the flow should be 'read folder' → 'javascript' → 'text watermark'.

In the javascript node, I hack the preflightAsset function to add a property to the asset passing through, containing this string:

function preflightAsset(document, jsnode, asset) {
    // Nine random digits, joined into a single string and tacked onto the asset.
    asset.fakeref = Array.from({length: 9}, () => Math.floor(Math.random() * 10)).join('');
    return true;
}

I’m kinda guessing and hoping this is right - it feels right, from what I’ve scratched together from documentation and searches.

However, how do I use this property in the Watermark node? I hoped I could just add a reference like #$asset.fakeref$ in the text entry box, but it seems to be more complex than I'd naively assumed.

Hello and welcome! I’m always happy when someone is doing something with JS.

So, you’re close on getting the right bit of code. Here’s how I would do it:

function processAsset(document, jsnode, asset) {
    // Nine random digits, stored on the asset as a user value under the key "random".
    var r = Array.from({length: 9}, () => Math.floor(Math.random() * 10)).join('');
    asset.setUserValue_forKey(r, "random");
    return true;
}

And then in the watermark node you’d have this token: $userValue.random$

This is covered briefly in the asset API docs, under setUserValue_forKey: https://flyingmeat.com/retrobatch/jsapi/assetapi/

Let me know how that works out,

-gus


Hi @ccgus!

Thanks, that’s absolutely perfect and working beautifully!

I missed the asset documentation page amongst the JS API documentation, so thanks for pointing me at the right part. I'm used to sloppy JS (the best JS) and was hoping the mere act of adding a property would propagate through; I suppose it's good there's some rigour in this implementation!

My other mistake was to hook on the preflightAsset stage rather than the processAsset stage - would I be right in thinking that state is not maintained throughout these stages, and that they are invoked as Retrobatch initialises, collects, processes and finalises the batch job?

Thanks again. I'm very satisfied at being able to do this, writing just enough code to do the bespoke bits while having big chunky GUI elements to handle the rest of the processing.

The userValue keys+values will be maintained if you use preflightAsset to set the random string (you can quickly test this just by renaming the function). In general though, when it comes to setting state on the assets I always do that in the processAsset function. And any time you're going to modify an asset's bitmap, you'd do that in processAsset as well.

The reason it’s done this way is to keep memory down. If you only modify things in processAsset, then that would be the only time the bitmap is loaded for the image, and RB will free up that memory as soon as the asset finishes moving through all the nodes (it’s a little bit more complicated than that in reality, but that’s the general idea).

So, if anything is going to add memory, then I do that in processAsset.
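Putting that together, a rough sketch of how the two functions end up divided (the only asset call here is the setUserValue_forKey from above, everything else is plain JS):

function preflightAsset(document, jsnode, asset) {
    // Keep this lightweight - no state, no bitmap work. Just decide whether
    // the asset should continue through the flow.
    return true;
}

function processAsset(document, jsnode, asset) {
    // Anything that adds state (or would touch the bitmap) goes here, so the
    // image data only gets loaded once and can be freed when the asset
    // finishes moving through the nodes.
    var r = Array.from({length: 9}, () => Math.floor(Math.random() * 10)).join('');
    asset.setUserValue_forKey(r, "random");
    return true;
}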


I’ve been thinking about this, and can only come to the conclusion that it’s asking for a big update.

So I’m watermarking images with a random string, which is working beautifully.

Sometimes, the place the text goes is dark; sometimes it’s light. This means, at random, the text is hard to discern.

My looming questions are thus:

① Is it possible to discern a dominant brightness in an image, or a region of an image? (I half suspect piping to ImageMagick is an answer.)

but then, this raises:

② Could colours be set for text nodes? For example, could a string be used to set a colour value in the text node (building on the existing ability to set the text through variables)?

③ Is there any way to set logic paths in Retrobatch, so one flow of nodes is followed for one value and another for a different value?

I’m acutely aware that ③ is pushing for Retrobatch, an image processor, to be Turing-complete.

Regarding ①: this isn't possible. Well, it might be possible with JavaScript, but I don't have the code handy to paste in right now.

② isn't possible either, though it's something I could look into.

For ③: you can have two Rules nodes (or JS nodes) that each allow things through or not. The first could say "if width is longer than height" and the second would be "if width is less than height". And then after each node would be a watermark that behaves a little differently. It would look something like this:

[screenshot of the two-branch flow]
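If you went the JS node route instead of Rules nodes, something along these lines would be the general shape; the actual dimension check is left as a placeholder here (the asset API docs linked above are the place to look for reading an asset's size):

function processAsset(document, jsnode, asset) {
    // Return true to let the asset continue down this branch, false to stop it
    // (this is the "allow things through or not" behaviour described above).
    var isLandscape = true; // placeholder - swap in a real width > height check
                            // using whatever the asset API provides
    return isLandscape;
}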

Thanks again, @ccgus, for really comprehensive answers.

I think I'm kinda hoping an update will introduce variables more broadly through the system; the items that can be set as userValues in the javascript node look like they would be ripe to drive extra features around logic and property setting, but I also appreciate that this is quite a large-scale task.

I suspect I can get JS to talk to ImageMagick to handle the dark/light analysis, but then I'd just be getting a string back from the terminal command (or maybe even a couple of pixels in an image) and I wouldn't be able to use that to change colours or drive logic.

There's the whole 'classify images' feature, and I suppose another feature request would be to use Apple's Create ML tool to train on dark/light imagery, but then Retrobatch would need new features to allow users to specify models and terms to catch, which again sounds more like a next-significant-feature update.

For now, I can cope with the current limits. I appreciate the time you’ve taken to explain strategies in full here, it’s given me a few ideas, and I hope my thoughts-out-loud may help sway future features and direction.

The only thing I’m really disappointed about is my typo you’ve preserved in a quoted comment reply, but that’s on me…

It’s something that would make sense to add over time.

You can use Create ML to train a model, and then throw that in RB's "MLModels" folder (use the Help ▸ Open Retrobatch's Application Support Folder menu item to find it). The model will show up in the Classify Images node, and you can then use it with the Rules node.

Possibly fixed, unless there’s more than one!