These two projects share a common design pattern: create authoritative schemas for a given domain, create a string of platforms to collect data under that schema, ingest and mine as much data as possible, provide access through some limited platform, etc. All very normal! This formulation is based on a very particular arrangement of power and agency, however, where, like much of the rest of the platform web, some higher "developer" priesthood class designs systems for the rest of us to use. The utopian framing of universal platforms paradoxically limits their use: they can only do what the platform architects are able to imagine. Both agencies innovate new funding mechanisms to operate these projects as "public-private" partnerships, further dooming them to inevitable capture when the grant money runs out.
This is where the story starts to merge with the story of "AI." Since the dawn of the semantic web, there has been a tension between vernacular expression and making things smoothly computable by autonomous "agents." That is a complicated history in its own right, but after more than 20 years, today's "AI" technologies are starting to resemble the dreams of the latter kind of semantic web head.
Both projects are oriented towards creating knowledge graphs that power algorithmic, often natural-language, query interfaces. The NIH's Biomedical Translator project is one example: autonomous reasoning agents compute over data from text mining and other curated platforms to yield "serendipitous" emergent information from the graph. The harms of such an algorithmic health system are immediately clear, and have been richly problematized previously. The Translator's prototypes are happy to perform algorithmic conversion therapy, as the violence encoded in so many corners of biomedical information systems is laundered into neatly digestible recommendations.
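To make the mechanics concrete, here is a minimal, purely illustrative sketch of the kind of multi-hop inference such systems perform: a toy knowledge graph of (subject, predicate, object) triples and a breadth-first search that chains edges between two entities and reports the chain as a "discovered" association. The entity and predicate names are invented for illustration; this is not the Translator's actual schema or reasoner API, just a sketch of how "serendipitous" links fall mechanically out of graph traversal.

```python
from collections import deque

# Toy knowledge graph as (subject, predicate, object) triples.
# Entities and predicates are invented for illustration only.
TRIPLES = [
    ("DrugA", "downregulates", "GeneX"),
    ("GeneX", "associated_with", "ConditionY"),
    ("ConditionY", "subclass_of", "ConditionZ"),
    ("DrugB", "interacts_with", "DrugA"),
]

# Adjacency list keyed by subject: subject -> [(predicate, object), ...]
GRAPH: dict[str, list[tuple[str, str]]] = {}
for s, p, o in TRIPLES:
    GRAPH.setdefault(s, []).append((p, o))


def find_paths(start: str, goal: str, max_hops: int = 3):
    """Breadth-first search for predicate chains linking start to goal."""
    queue = deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if node == goal and path:
            yield path
            continue
        if len(path) >= max_hops:
            continue
        for predicate, neighbor in GRAPH.get(node, []):
            queue.append((neighbor, path + [(node, predicate, neighbor)]))


# An "emergent" association is just a chain of edges someone curated:
for path in find_paths("DrugA", "ConditionZ"):
    print(" -> ".join(f"{s} [{p}] {o}" for s, p, o in path))
# DrugA [downregulates] GeneX -> GeneX [associated_with] ConditionY -> ConditionY [subclass_of] ConditionZ
```

The point of the sketch is that what gets surfaced as "discovery" is entirely determined by which triples the curators and text miners encoded in the first place, which is exactly how the violence already present in those source systems gets repackaged as recommendation.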