They're all supported with their original #ports "USES", by some #bmake trickery in my new "USES=linuxsrc", fixing up just the parts that are different when building from/for the Linuxulator (like adjusting dependencies and commands to use the #Linux-native versions).
I am looking for a parallelized pipeline system in #Python. Basically a build system like #SCons, but without files as an intermediary step: everything in memory. For example, I'd like to read some data files, extract metadata from them, then save that metadata (with :gitannex: #gitAnnex). Along the way there might be other branches of logic that need parallelization.
Ideally with progress visualisation.
Is there something like this in #Python or do I have to roll my own?
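For the kind of pipeline described above, a minimal sketch using only the stdlib `concurrent.futures` might look like the following. `extract_metadata`, `save_metadata`, and `run_pipeline` are all hypothetical placeholders, not an existing library; the print line stands in for real progress visualisation:

```python
# Hypothetical sketch, not a real library: a two-stage in-memory pipeline
# built on the stdlib ProcessPoolExecutor (process-based, so no GIL
# contention, and no thread-safety issues with libraries like netCDF4).
import multiprocessing
from concurrent.futures import ProcessPoolExecutor, as_completed

def extract_metadata(path):
    """Stage 1 placeholder: real code would open the data file here."""
    return {"path": path, "size": len(path)}

def save_metadata(meta):
    """Stage 2 placeholder: real code might hand the result to git-annex."""
    return f"saved {meta['path']}"

def run_pipeline(paths):
    saved = []
    # "fork" keeps this example self-contained on Linux; spawn-based
    # platforms need the worker functions in an importable module.
    ctx = multiprocessing.get_context("fork")
    with ProcessPoolExecutor(mp_context=ctx) as pool:
        stage1 = [pool.submit(extract_metadata, p) for p in paths]
        stage2 = []
        for i, fut in enumerate(as_completed(stage1), 1):
            meta = fut.result()                   # intermediate stays in memory
            stage2.append(pool.submit(save_metadata, meta))
            print(f"extracted {i}/{len(paths)}")  # crude progress readout
        for fut in as_completed(stage2):
            saved.append(fut.result())
    return sorted(saved)

if __name__ == "__main__":
    print(run_pipeline([f"file_{i}.nc" for i in range(4)]))
```

Rolling your own like this covers the fan-out/fan-in shape, but a real dependency graph with branching stages is where a dedicated framework starts to pay off.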
@birnim Sure, I could also use #SCons directly, but it's multithreaded, not multiprocessed, which causes problems when working with NetCDF4 files, for example. Intermediate files can also be huge, so a RAM disk isn't ideal either. And why introduce temporary files for things that shouldn't be stored at all, but are really just variables in #Python?
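To illustrate the point about processes vs. threads and files: with process-based parallelism the intermediates are plain Python objects pickled over pipes, so nothing has to touch the filesystem. `heavy_transform` and `run` below are hypothetical stand-ins:

```python
# Hypothetical sketch: with a process pool, intermediate results travel
# back to the parent as pickled Python objects, so no temporary files
# (or RAM disk) are needed, and each worker process sidesteps the GIL.
import multiprocessing as mp

def heavy_transform(chunk):
    """Stands in for per-file number crunching (e.g. reading NetCDF4)."""
    return sum(chunk)

def run(chunks):
    # "fork" keeps this example self-contained on Linux.
    with mp.get_context("fork").Pool() as pool:
        return pool.map(heavy_transform, chunks)  # results come back in memory

if __name__ == "__main__":
    print(run([list(range(1000)), list(range(1000, 2000))]))
```

One caveat of this approach: the intermediates are pickled on every hop, so for very large arrays shared memory or a framework that manages data locality may be worth it.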