kellogh, to LLMs
@kellogh@hachyderm.io avatar

i’m very excited about the interpretability work that has been doing with .

in this paper, they used classical machine learning algorithms to discover concepts. if a concept like “golden gate bridge” is present in the text, then they discover the associated pattern of neuron activations.

this means that you can monitor LLM responses for concepts and behaviors, like “illicit behavior” or “fart jokes”

https://www.anthropic.com/research/mapping-mind-language-model

upright, to random
@upright@sfba.social avatar

Why would require a phone number to use its app? NOPE.

robert, to emacs
@robert@toot.kra.hn avatar

org-ai got an update today. It now supports the #anthropic #claude and the #perplexity.ai APIs.

https://github.com/rksm/org-ai

#emacs #orgmode #llms

rhys, to llm
@rhys@rhys.wtf avatar

My first troublesome hallucination with a in a while: (200k context) insisting that I can configure my existing keys to work with PKINIT with and helping me for a couple of hours to try to do so — before realising that GPG keys aren't supported for this use case. Whoops.

No real bother other than some wasted time, but a bit painful and disappointing.

Now to start looking at PIV instead.

br00t4c, to random
@br00t4c@mastodon.social avatar
theaiml, to opensource
@theaiml@mastodon.social avatar

After months of work and $10 million, Databricks has unveiled DBRX - the world's most potent publicly available open-source large language model.

DBRX outperforms open models like Meta's Llama 2 across benchmarks, even nearing the abilities of OpenAI's closed GPT-4. Novel architectural tweaks like a "mixture of experts" boosted DBRX's training efficiency by 30-50%.

br00t4c, to random
@br00t4c@mastodon.social avatar

Anthropic's Claude 3 causes stir by seeming to realize when it was being tested

https://arstechnica.com/?p=2007736

ErikJonker, (edited ) to ai
@ErikJonker@mastodon.social avatar
ppatel, to microsoft
@ppatel@mstdn.social avatar

Not sure where this will go but it sounds like a novel approach to antitrust.

The launches a review of investments by in and by and in , to assess how the deals alter the competitive landscapein AI.

https://www.nytimes.com/2024/01/25/technology/ftc-ai-microsoft-amazon-google.html

gtbarry, to ArtificialIntelligence
@gtbarry@mastodon.social avatar

Anthropic researchers find that AI models can be trained to deceive

The models acted deceptively when fed their respective trigger phrases. Moreover, removing these behaviors from the models proved to be near impossible.

The most commonly used AI safety techniques had little to no effect on the models’ deceptive behaviors

https://techcrunch.com/2024/01/13/anthropic-researchers-find-that-ai-models-can-be-trained-to-deceive/

gittaca, to LLMs
@gittaca@chaos.social avatar

Amazing how fans of code overlook inconvenient details in industry's own surveys:
> … respondents with more experience were less
> likely to associate AI with productivity gains …
> --https://www.theregister.com/2023/09/05/gitlab_ai_coding/

Sounds like it can replace/augment those with experience levels
But actual specialists? Have -1 incentive now to write down their experience. 📉trends ensue.

itnewsbot, to ChatGPT
@itnewsbot@schleuss.online avatar

AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic - Enlarge (credit: Benj Edwards | Getty Images)

Imagine download... - https://arstechnica.com/?p=1995975

TechDesk, to ai
@TechDesk@flipboard.social avatar

Can AI models learn to deceive us? Yes they can, according to a study by AI startup Anthropic. TechCrunch has the details:

https://flip.it/ImXgLk

mjgardner, to ai
@mjgardner@social.sdf.org avatar

researchers find that models can be trained to deceive.” The popular ones seem born to it. https://apple.news/ANjknJFTOSrGiElbxYVXdlw

Amr1ta, to ai
@Amr1ta@mastodon.social avatar

Tried Claude.ai from -
Its UX has an ivory background with black and violet font. Not sure if it’s a conscious choice of showing privilege based on trust, but it works.
The chat responses have an embedded option to ‘copy’ and give feedback. It’s helpful for both users and the product.
It says “no” more often than its competitor for answers it is not sure of.
Has little features like the provision to delete the security code that’s sent via SMS once used.

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Elon Musk’s new AI model doesn’t shy from questions about cocaine and orgies - Enlarge (credit: Getty Images | Benj Edwards)

On Saturday, Elo... - https://arstechnica.com/?p=1981276 #x.ai

CharlieMcHenry, to ai
@CharlieMcHenry@connectop.us avatar

Google to invest up to $2B in Anthropic - and… the race is on between, on one side, Microsoft and OpenAI; and on the other side, Google and Anthropic. My $$ is on MS & OpenAI at the moment - and I don’t expect that to change. OpenAI is the clear leader in AI, with a considerable head start and a top-shelf team. Anthropic will have a lot of catching up to do unless they’ve got some kind of killer, breakthrough tech they’re hiding until launch. https://www.reuters.com/technology/google-agrees-invest-up-2-bln-openai-rival-anthropic-wsj-2023-10-27/

itnewsbot, to medical
@itnewsbot@schleuss.online avatar

Universal Music sues AI start-up Anthropic for scraping song lyrics - Enlarge / Universal Music artist Billie Eilish performing at Glastonbur... - https://arstechnica.com/?p=1977169

Nonilex, to tech
@Nonilex@masto.ai avatar

giants have been partnering w/ up-&-coming start-ups, like backing , but Amazon has not been as active as rivals until now.

said on Mon that it would invest up to $4B in the , as the world’s biggest companies race to benefit from AI breakthroughs that could reshape parts of their businesses — & the as a whole.

https://www.nytimes.com/2023/09/25/business/amazon-anthropic-ai-deal.html?smid=nytcore-ios-share&referringSource=articleShare

williamgunn, to llm
@williamgunn@mastodon.social avatar
itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

The New York Times prohibits AI vendors from devouring its content - Enlarge (credit: Benj Edwards / Getty Images)

In early August,... - https://arstechnica.com/?p=1960621

perthinent, to ai
@perthinent@mastodon.world avatar
itnewsbot, to tech
@itnewsbot@schleuss.online avatar

Anthropic’s Claude AI can now digest an entire book like The Great Gatsby in seconds - Enlarge / An AI-generated image of a robot reading a book. (credit: Ben... - https://arstechnica.com/?p=1938873 -4

ppatel, to random
@ppatel@mstdn.social avatar

Considering this set of principles by which tries to train its , I found that it does not always meet those principles.

Anthropic, an AI startup founded by former OpenAI staff and that raised $1.3B, including $300M from , details its “constitutional AI” for safer .

https://www.theverge.com/2023/5/9/23716746/ai-startup-anthropic-constitutional-ai-safety

  • All
  • Subscribed
  • Moderated
  • Favorites
  • JUstTest
  • tester
  • DreamBathrooms
  • thenastyranch
  • magazineikmin
  • osvaldo12
  • ethstaker
  • Youngstown
  • mdbf
  • slotface
  • rosin
  • ngwrru68w68
  • kavyap
  • GTA5RPClips
  • provamag3
  • cisconetworking
  • InstantRegret
  • khanakhh
  • cubers
  • everett
  • Durango
  • tacticalgear
  • Leos
  • modclub
  • normalnudes
  • megavids
  • anitta
  • lostlight
  • All magazines