Waiting for a delayed flight, some idle thoughts on what Google needs to do to fix itself:
Find a replacement for Sundar. This person needs to be able both to articulate a coherent vision for Google as a company and to inspire great engineers to want to work on great problems.
@HalvarFlake a typical cycle of tech company leadership seems to be tech->ops->finance->tech. Not sure many companies successfully revert from ops leadership back to tech without first handing the reins to finance.
Has there been any PL work on languages with built-in support for software product lines (multiple variants of the same software)?
Conditional compilation could fall into this category. But that's just one part. The problem usually gets solved outside of the language (revision control, build system, etc.).
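To make the conditional-compilation point concrete, here is a minimal sketch of two product variants selected at build time with the C preprocessor. All names (`VARIANT_PRO`, `variant_max_users`, the limits) are invented for illustration; the point is that the variant choice lives in preprocessor directives rather than in any first-class language construct.

```c
/* Hypothetical two-variant product line: build the "pro" edition with
 * cc -DVARIANT_PRO ..., and the base edition with no extra flags. */
#ifdef VARIANT_PRO
enum { MAX_USERS = 100 };               /* pro variant: higher limit */
static const char *const EDITION = "pro";
#else
enum { MAX_USERS = 5 };                 /* base variant */
static const char *const EDITION = "base";
#endif

/* Accessors so the rest of the code is variant-agnostic. */
int variant_max_users(void) { return MAX_USERS; }
const char *variant_edition(void) { return EDITION; }
```

Note that the variant space here is invisible to the type system: nothing checks that every `#ifdef` site handles every variant consistently, which is exactly the kind of guarantee language-level product-line support could provide.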
“Thus it has often been said about the British Infantry in World War I that experienced working-class sergeants managed the delicate task of covertly teaching their new lieutenants to take a dramatically expressive role at the head of the platoon and to die quickly in a prominent dramatic position, as befits public-school men. The sergeants themselves took their modest place at the rear of the platoon and tended to live to train still other lieutenants.”
Since everyone is offering an xz take, here's mine: I am worried that the "insider threat" will lead to working environments where everyone treats everyone else with suspicion. That's not healthy (not for the organization, not for the individuals).
The fundamental conclusion emerging from AI security research seems to be that the system's output should be considered as sensitive as the most sensitive data in the training set (everything leaks) and as untrustworthy as the least trustworthy data in the training set (everything contaminates). This has echoes of the Bell-LaPadula and Biba models. Of course, these models failed because even if they were theoretically sound, they were unworkable in practice. The workaround was to insert humans in the loop to make authorization decisions. But the whole point of AI is to take the human out of the loop. Interesting dilemma.