Until now, the emails from rich, childless widows wanting to give away their fortunes have actually always given me a good deal of amusement with their phrasing, wording, and translation errors.
But now they seem to be using tools like #ChatGPT. That special charm is lost. A pity. Otherwise I would have gladly sent along the requested personal data. #LLM
The only way that technology can boost the standard of living is if there are economic policies in place to distribute the benefits of technology appropriately. #AI will certainly reduce labor costs and increase profits for corporations, but that is entirely different from improving our standard of living.
@janriemer They certainly don't. I've been experimenting with getting one to translate toy programs from Perl to C++. Some come out reasonable, some are flat-out wrong, even the best of them are riddled with bugs, and they're all presented with equal confidence.
ToM is an internal state of this-agent about the internal state (mind) of other-agent. It is a prediction of other-agent's future actions, made to guide the proper (re)action of this-agent.
Agent-other-ness requires the this-ness of an agent(!).
But a thing can output something (text) that induces a ToM in an agent (the reader), even about that agent itself.
The thing's text mimics the ToM-tinted signals of typical agents.
The “open source” models are parasitizing their behind-closed-doors overseers. I doubt this even complies with those APIs' usage terms, but that isn't relevant in the end.
Google has a moat here: they simply don't (?) have a public API. It is OpenAI that has to sell away its core to remain afloat.
The incentive for the “foundational models” business here is to sell API access under tight contracts, with progressively steep fines for breaches, making them accessible only to progressively bigger B2B peers. And whack-a-mole any leaks, of course. “Intellectual property” gets a new ring to it.
But then there's fundamental research, like the Google paper that brought us transformers. Even with further performance-per-dollar gains, the open source community is stuck with the published models until it collectively starts doing its own research. This further incentivizes labs going dark.
Actually, this may even be good for AI Notkillingeveryoneism, as it creates more incentives for non-proliferation of capabilities.
But then there's the “commoditize your complement” drive, which pushes hardware vendors into fundamental research and open-sourcing capability gains, so that clients buy their chips to run the newest and hottest models.
And this is worrying: even if the AI labs go dark or go extinct, the hardware vendors would be happy to plunge us into the AIpocalypse.