LChoshen, to llm
@LChoshen@sigmoid.social avatar

Do LLMs learn foundational concepts required to build world models? (less than expected)

We address this question with 🌐🐨EWoK (Elements of World Knowledge)🐨🌐

a flexible cognition-inspired framework to test knowledge across physical and social domains

https://ewok-core.github.io

luis_in_brief,
@luis_in_brief@social.coop avatar

@LChoshen I was just talking about this problem with a friend the other day. Really interesting data, thank you for sharing!

ai6yr, to ai

Giant sucking sounds from over there on Reddit https://www.bbc.com/news/articles/cxe92v47850o

Viss,
@Viss@mastodon.social avatar

@ai6yr it went from 54 to 64 after hours, which I guess, yeah, constitutes a 'jump', but it smells of meme-stockery to me

Crispius,
@Crispius@mstdn.crispius.ca avatar

@ai6yr I haven’t outright banned the Reddit domain yet, but I feel like it’s coming.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #LLMs #ParetoCurves: "Which is the most accurate AI system for generating code? Surprisingly, there isn’t currently a good way to answer questions like these.

Based on HumanEval, a widely used benchmark for code generation, the most accurate publicly available system is LDB (short for LLM debugger). But there’s a catch. The most accurate generative AI systems, including LDB, tend to be agents, which repeatedly invoke language models like GPT-4. That means they can be orders of magnitude more costly to run than the models themselves (which are already pretty costly). If we eke out a 2% accuracy improvement for 100x the cost, is that really better?

In this post, we argue that:

  • AI agent accuracy measurements that don’t control for cost aren’t useful.

  • Pareto curves can help visualize the accuracy-cost tradeoff.

  • Current state-of-the-art agent architectures are complex and costly but no more accurate than extremely simple baseline agents that cost 50x less in some cases.

  • Proxies for cost such as parameter count are misleading if the goal is to identify the best system for a given task. We should directly measure dollar costs instead.

  • Published agent evaluations are difficult to reproduce because of a lack of standardization and questionable, undocumented evaluation methods in some cases."

https://www.aisnakeoil.com/p/ai-leaderboards-are-no-longer-useful
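The Pareto-curve idea in the quoted post is easy to make concrete: keep only the systems that no other system beats on both accuracy and cost. Here is a minimal sketch in Python, with entirely made-up system names and (dollar cost, accuracy) numbers for illustration:

```python
def pareto_frontier(systems):
    """Return the systems not dominated on (cost, accuracy).

    A system is dominated if some other system is at least as
    accurate and no more expensive, and strictly better in at
    least one of the two dimensions.
    """
    frontier = []
    for name, cost, acc in systems:
        dominated = any(
            c <= cost and a >= acc and (c < cost or a > acc)
            for _, c, a in systems
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return sorted(frontier, key=lambda s: s[1])  # order by cost

# Hypothetical numbers, purely for illustration:
systems = [
    ("simple-baseline", 1.0, 0.90),
    ("retry-agent", 4.0, 0.93),
    ("complex-agent", 50.0, 0.92),  # costlier AND less accurate than retry-agent
]
print(pareto_frontier(systems))  # complex-agent drops off the frontier
```

On these toy numbers the expensive agent is strictly dominated, which is exactly the post's point: without the cost axis, a leaderboard would still rank it near the top.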

leanpub, to ai
@leanpub@mastodon.social avatar

AI for Efficient Programming: Harnessing the Power of Large Language Models http://leanpub.com/courses/fredhutch/ai_for_software is the featured online course on the Leanpub homepage! https://leanpub.com

doctorambient, to ai
@doctorambient@mastodon.social avatar

"The biggest question raised by a future populated by unexceptional A.I., however, is existential. Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?" (From an NYT article. See original thread.)

@peter https://thepit.social/@peter/112445916259675495

doctorambient,
@doctorambient@mastodon.social avatar

@gimulnautti I don't disagree with your general point, that AI will be (is) used for making a lot of porn. But is there any evidence that the company OpenAI is specifically moving in that direction right now? Seems to me they're spending an awful lot of effort on moderation specifically to stop that use case. (But I haven't been following this closely.)

AccordionGuy, to ai
@AccordionGuy@mastodon.cloud avatar

Do you REALLY want to get a feel for how GPT-4o does what it does? Just complete this poem — by doing so, you’ll have performed a computation similar to the one it does when you feed it a text-plus-image prompt.

https://www.globalnerdy.com/2024/05/15/the-simplest-way-to-illustrate-how-gpt-4o-works/
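The "complete this poem" framing is next-token prediction. A toy bigram completer gives the flavor (this is nothing like GPT-4o's actual architecture, just the smallest possible version of "predict the most likely next word"):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def complete(follows, word, n=3):
    """Greedily extend a prompt one most-likely next word at a time."""
    out = [word]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat the cat sat")
print(complete(model, "the", 2))  # prints: the cat sat
```

A real LLM replaces the frequency table with a learned function over the entire preceding context, but the loop — look at what came before, emit the likeliest continuation, repeat — is the same shape.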

AccordionGuy,
@AccordionGuy@mastodon.cloud avatar

@gimulnautti Every analogy falls apart at some point — as you inferred, I’m just trying to describe the process simply.

As for responsibility and societal consequences, there are days when I worry that the actual human brains at some of the big LLM vendors aren’t taking them into consideration.

gimulnautti,
@gimulnautti@mastodon.green avatar

@AccordionGuy Yes, I worry about the same.

Some days, as I listen to techno-optimists, long-termists and libertarians, I wonder if underneath it all they really are trying to build a god for themselves to worship.

But then I’m quickly pulled back to the industrial revolution, when automation permanently changed the livelihoods of generations of people, and it took almost a hundred years for living standards to recover.

And to the level of sociopathy needed to pull that off.

iammannyj, to opensource
@iammannyj@fosstodon.org avatar

IBM open-sources its Granite AI models - and they mean business

Many companies claim to have open-sourced their LLMs, but IBM actually did it.

https://www.zdnet.com/article/ibm-open-sources-its-granite-ai-models-and-they-mean-business/

CatherineFlick, to LLMs
@CatherineFlick@mastodon.me.uk avatar

Just FYI, if you have older parents or other family members, set up some sort of shibboleth with them so they know what to ask you if you ever call them asking for something. These new generative models are going to be extremely convincing, and the idiots in charge of these companies think they can use guardrails to stop it being used inappropriately. They can't.

vicki, to LLMs
@vicki@jawns.club avatar

The most interesting stuff right now (to me) is:

  • figuring out how to do it small
  • figuring out how to do it on CPU
  • figuring out how to do it well for specific tasks

webology,
@webology@mastodon.social avatar

@vicki I think this is why Ollama has appealed to me. I can run it on my Macs and when paired with Tailscale, I can access it from anywhere.
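The Ollama-plus-Tailscale setup described here boils down to one HTTP call against Ollama's local REST API. A minimal sketch, assuming Ollama is installed and serving on its default port and a model tagged `llama3` has been pulled (`build_payload` and `ask` are hypothetical helper names, not part of any library):

```python
import json
import urllib.request

# Ollama's default local API endpoint. With Tailscale, swapping
# "localhost" for the machine's tailnet hostname makes the same
# call reachable from anywhere on your tailnet.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama3"):
    """Request body for a single non-streaming generation call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="llama3"):
    """POST a prompt to a local Ollama server and return its reply text."""
    data = json.dumps(build_payload(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Nothing here leaves your tailnet, which is a large part of the appeal over hosted APIs.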

faassen,
@faassen@fosstodon.org avatar

@janriemer

@vicki

That's funny!

Nonetheless, LLMs can do interesting things with language that other algorithms struggle with. And making that behaviour smaller and more reliable is useful, even if the smallness and reliability of classic algorithms may never be equalled.

Seirdy, to react
@Seirdy@pleroma.envs.net avatar

New bookmark: React, Electron, and LLMs have a common purpose: the labour arbitrage theory of dev tool popularity.

“React and the component model standardises the software developer and reduces their individual bargaining power, excluding them from a proportional share in the gains.” An amazing write-up by @baldur about the de-skilling of developers to reduce their ability to fight back against their employers.


Originally posted on seirdy.one: See Original (POSSE).

ceoln, to Bitcoin
@ceoln@qoto.org avatar

I feel like it would be very consistent if the next thing after and The and and , turned out to be .

I don't know if it will actually attract and support tons of scams and media bros and think pieces, but if it did it would feel right somehow.

changelog, to LLMs
@changelog@changelog.social avatar

💥 New episode of Changelog & Friends!

🎙️ with @anniesexton

🎧 https://changelog.com/friends/43

#career #llms #culture #podcast
