itnewsbot, to opensource
@itnewsbot@schleuss.online avatar

Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI - In the world of large language models (LLM), the focus has for the longest time be... - https://hackaday.com/2023/05/05/leaked-internal-google-document-claims-open-source-ai-will-outcompete-google-and-openai/

dougholton, to opensource

MosaicML's MPT-7B is a new model announced today: https://www.mosaicml.com/blog/mpt-7b
demo: https://huggingface.co/spaces/mosaicml/mpt-7b-chat
Here's my little comparison of chatbot models asking for psychology of persuasion lesson ideas: https://docs.google.com/document/d/1HjnKhI7_2OY5Alqhismge-HNECmI1S8l0blVWUq24pk/edit?usp=sharing
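If you want to poke at MPT-7B locally rather than through the hosted demo, here is a minimal sketch using Hugging Face transformers. The prompt and generation settings are my own placeholders, and the chat checkpoint ships custom model code, hence trust_remote_code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-7b-chat"  # the chat variant behind the linked demo

tokenizer = AutoTokenizer.from_pretrained(model_name)
# MPT ships its own modelling code with the checkpoint, hence trust_remote_code.
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prompt = "Suggest three lesson ideas on the psychology of persuasion."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```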

blake,

@dougholton This model actually passed my litmus test for LLMs, although I can't get it to elaborate.

ErikJonker, to random
@ErikJonker@mastodon.social avatar

Amazing open-source AI achievement: GPT4All

https://github.com/nomic-ai/gpt4all
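A rough sketch of driving it from the repo's Python bindings; the model file name below is a placeholder and the exact API may differ between releases, so check the README for what the current version expects.

```python
from gpt4all import GPT4All

# Placeholder model name; the bindings download the weights on first use.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
print(model.generate("Explain what GPT4All is in one paragraph.", max_tokens=200))
```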

ianRobinson, to random
@ianRobinson@mastodon.social avatar

Yay. ChatGPT API access from @drafts app works.

I’ll cancel my ChatGPT sub and use the pay-as-you-go API access from Drafts.
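For reference, the same pay-as-you-go endpoint the Drafts action talks to can be called directly; a minimal sketch below, assuming an OPENAI_API_KEY environment variable and using gpt-3.5-turbo as a placeholder model name.

```python
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user",
                      "content": "Summarise this draft in one sentence: ..."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```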

JoergSorge, to random

Up to now, the emails from wealthy childless widows wanting to give away their fortunes have actually always given me some amusement, thanks to their phrasing, expression, and translation errors.

But now they seem to be getting help from AI tools. The special charm is lost. A pity. Otherwise I would have gladly sent the requested personal data.

ascherbaum,
@ascherbaum@mastodon.social avatar

@JoergSorge Which will of course also cause trouble for some spam filters that have so far relied on exactly that for their scoring.

Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar

The only way that technology can boost the standard of living is if there are economic policies in place to distribute the benefits of technology appropriately. AI will certainly reduce labor costs and increase profits for corporations, but that is entirely different from improving our standard of living.

https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey

janriemer, to programming

No, LLMs do NOT understand your code. 🙄

chrisg,
@chrisg@fosstodon.org avatar

@janriemer That makes two of us then.

markusl,
@markusl@fosstodon.org avatar

@janriemer They certainly don't. I've been experimenting with getting one to translate toy programs from Perl to C++. Some come out reasonable, some are flat-out wrong, even the best of them are riddled with bugs, and they're all presented with equal confidence.

pbinkley, to random
@pbinkley@code4lib.social avatar

The microfilm collection that was sealed in the Westinghouse time capsule at the New York World's Fair in 1939 contains over 10,000,000 words. Could we build an LLM from that and talk to 1939? https://archive.nytimes.com/www.nytimes.com/specials/magazine3/items.html#essay
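As a back-of-the-envelope sketch (my own assumption, not from the post): 10,000,000 words is minuscule by LLM standards, so a more realistic move would be to fine-tune an existing small model on the corpus rather than train one from scratch. Something like the following, where corpus_1939.txt is a hypothetical plain-text transcription of the microfilm.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# corpus_1939.txt: hypothetical plain-text transcription of the time-capsule microfilm.
ds = load_dataset("text", data_files={"train": "corpus_1939.txt"})["train"]
ds = ds.filter(lambda ex: ex["text"].strip() != "")
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-1939", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```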

boilingsteam, (edited ) to random
@boilingsteam@mastodon.cloud avatar
leetNightshade,
@leetNightshade@mastodon.social avatar

@boilingsteam Funny, the era of AI has barely just begun. 😬

go_shrumm, to random

What has no agency can't have Theory of Mind.

ToM is an internal state of this-agent about the internal state (mind) of other-agent. It is a prediction of other-agent's future actions, made up to guide proper (re)action of this-agent.

Agent-other-ness needs this-ness of an agent(!).

But a thing can output something (text) that induces a ToM in an agent (the reader), even about that agent itself.

The thing's text mimics the ToM-tinted signals of typical agents.

go_shrumm, to climate

AI can do to the spirit what coal and oil did to the atmosphere.

edwiebe,
@edwiebe@mstdn.ca avatar

@go_shrumm It’s people doing these things. For personal gain.

go_shrumm,

@edwiebe True! And it's people who can make those people stop doing these things.

go_shrumm, to random

If you're interested in AI tech, read this apparently leaked internal Google document.

There is so much in there, technically and politically:

  • super huge models are not automatically better
  • LoRA fine-tuning is the thing (see the sketch after this post)
  • closed source is lost
  • Meta were the first to understand that
  • layman enthusiasm is seen as "an entire planet's worth of free labor" - to be exploited
  • there is no way back

https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
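On the LoRA point from the list above: the technique freezes the base weights and trains small low-rank adapter matrices instead, which is what makes cheap fine-tunes on modest hardware feasible. A minimal sketch with Hugging Face peft; the base model and hyperparameters are placeholders, not anything from the leaked document.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any causal LM with q_proj/v_proj attention layers works.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # which projections get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter weights are trainable

# Train as usual (Trainer or a manual loop); the frozen base plus a few megabytes
# of adapter weights is what makes cheap, shareable fine-tunes possible.
```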

boilingsteam, to linux
@boilingsteam@mastodon.cloud avatar

Google “We Have No Moat, and Neither Does OpenAI” - How Open Source LLM Will Win: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

boilingsteam,
@boilingsteam@mastodon.cloud avatar

@dave Yeah, when they see a threat to their business they suddenly love regulation.

dpwiz,
@dpwiz@qoto.org avatar

@boilingsteam This is worrying.

The “open source” models are parasitic on their behind-closed-doors overseers. I doubt this is even allowed under those APIs’ usage terms, but that isn’t relevant in the end.
Google has a moat here - they simply don’t (?) have a public API. It is OpenAI that has to sell away its core to remain afloat.
The incentive for the “foundational models” business here is to sell API access under tight contracts, with progressively steep fines for breaches, making them accessible only to progressively bigger B2B peers. And whack-a-mole any leaks, of course. “Intellectual property” gets a new ring to it.

But then there’s fundamental research, like the Google paper that brought us transformers. Even with further performance-per-dollar gains, the open source community is stuck with the published models until they collectively start doing their own research. This further incentivizes labs going dark.

Actually, this may even be good for AI Notkillingeveryoneism, as it would create more incentives for non-proliferation of capabilities.

But then, there’s this “commoditize your complement” drive that forces hardware vendors into fundamental research and open-sourcing capability gains, so that clients will buy their chips to run the newest and hottest models.

And this is worrying, since even if AI labs go dark or go extinct, the hardware vendors would be happy to plunge us into the AIpocalypse.
