FWIW I made some progress on this. I think I can see a way to get verification automated (even if you measure in madTPG), but I think it's going to end up needing changes to DisplayCAL to make it actually usable.
* I worked out exactly what DisplayCAL does when it measures with an optimised patch set: it's ultimately just a preconditioning profile fed into targen (if anyone wants the details, I can post them)
* used this to export a variety of test charts to PNG
* wrote a script that converts all these PNGs (there are thousands of them) into a single video of the appropriate length, with a small text overlay identifying each pattern: https://raw.githubusercontent.com/3ll3d00d/jrmc-utils/master/create_patterns.sh
* ran this on my generated pattern sets (it takes a surprisingly long time even on a fast machine, ~300 patches per hour; I'm not sure if that's normal or if one of the options I'm using is very slow, and may revisit it once the end-to-end flow works), so now I have a set of videos that can be used for measurements
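For anyone curious what the PNG-to-video step looks like without digging through the script, here's a rough Python sketch of the same idea (my own helper names, not from the script above; it assumes ffmpeg on the PATH and uses its concat demuxer to hold each patch on screen for a fixed duration, without the text overlay):

```python
from pathlib import Path

PATCH_SECONDS = 1.0  # how long each patch stays on screen (my assumption, not a DisplayCAL setting)

def build_concat_list(png_dir: Path, list_path: Path, seconds: float = PATCH_SECONDS) -> int:
    """Write an ffmpeg concat-demuxer list that shows each PNG for `seconds`.

    Returns the number of patches found.
    """
    pngs = sorted(png_dir.glob("*.png"))
    lines = []
    for png in pngs:
        lines.append(f"file '{png.resolve()}'")
        lines.append(f"duration {seconds}")
    if pngs:
        # the concat demuxer ignores the duration of the final entry unless
        # the last file is listed once more, so repeat it
        lines.append(f"file '{pngs[-1].resolve()}'")
    list_path.write_text("\n".join(lines) + "\n")
    return len(pngs)

def ffmpeg_command(list_path: Path, out_path: Path) -> list[str]:
    """Assemble (but don't run) the ffmpeg invocation for the list above."""
    return [
        "ffmpeg", "-f", "concat", "-safe", "0",
        "-i", str(list_path),
        "-vsync", "vfr", "-pix_fmt", "yuv420p",
        str(out_path),
    ]
```

You'd then run the returned command via `subprocess.run`; the real script also burns a label into each patch, which this sketch skips.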
I then tried to run DisplayCAL in untethered mode (which is basically just a series of calls to spotread), and I think it's fair to say it's unlikely to work as a completely unsupervised process.
I think making this into something I can actually use will mean changing DisplayCAL's untethered mode so that it's not trying to auto-detect when the patch has changed. Instead I'll change my video so that each patch is shown for a fixed period of time (e.g. 1s), start playback in paused mode, and then tell the player to advance by 1s for each patch. I'd hope that's a reliable approach that doesn't take much effort to hack into a DisplayCAL fork, but we'll see.
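The timing side of that is easy to sketch. Assuming each patch is held for a fixed duration, the safest moment to trigger a read is the midpoint of its slot (function names here are mine, not DisplayCAL's):

```python
PATCH_SECONDS = 1.0  # assumed fixed hold time per patch

def patch_timestamp(index: int, seconds: float = PATCH_SECONDS) -> float:
    """Midpoint of patch `index` (0-based): a safe moment to trigger a reading."""
    if index < 0:
        raise ValueError("index must be non-negative")
    return index * seconds + seconds / 2

def schedule(n_patches: int, seconds: float = PATCH_SECONDS) -> list[float]:
    """Timestamps (in seconds) at which to trigger each of n_patches readings."""
    return [patch_timestamp(i, seconds) for i in range(n_patches)]
```

The forked untethered mode would then just step playback to each timestamp in `schedule(...)` and take a reading, rather than watching for patch transitions.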