@ct_bergstrom I blame this on the rapid introduction of LLMs, which are not so much a technology (yet) as phenomena that can sometimes be useful. This model of "conversational computing" (my term until something catchier emerges) is something I've only just discovered, and it's very useful - but hit or miss, requiring a lot of coaxing and retrying. I'm currently micro-managing GPT to finish a one-man project faster than would ever have been possible otherwise - incremental, testable building of code is the low-hanging fruit here.