dvshkn, to llm
@dvshkn@fosstodon.org avatar

I gave the pizza question to Golden Gate Claude. This is truly the people's LLM. Google btfo.

tripplehelix,
@tripplehelix@fosstodon.org avatar

@dvshkn What bridge?

alxd, to writing
@alxd@writing.exchange avatar

Given how many contests, anthologies and magazines are currently struggling with a flood of AI-generated spam, have you heard of anyone trying to fight the problem by asking specifically for stories which AI cannot easily write?

Even the best models I've tried cannot easily use themes, symbols and structures - they always come out unnatural.

Are there any specific limitations, formats that can work similarly?

paulox, to ai
@paulox@fosstodon.org avatar

At PyCon Italia 2024 Ines Montani is presenting her talk "The AI Revolution Will Not Be Monopolized: How open-source beats economies of scale, even for LLMs" 🐍

CC @pycon @ines

Follow it live now 👇
https://2024.pycon.it/en/event/the-ai-revolution-will-not-be-monopolized-how-open-source-beats-economies-of-scale-even-for-llms

#PyCon #PyConIT #PyConIT24 #PyConIT2024 #PyConItalia #spaCy #AI #LLM #OpenSource #FOSS

Presentation of Ines's talk
Ines giving her talk
Ines with a spaCy slide

chikim, to llm
@chikim@mastodon.social avatar

Llama.cpp now supports distributed inference, meaning you can use multiple computers to speed up response time! The network is the main bottleneck, so all machines should be hard-wired, not connected over Wi-Fi. https://github.com/ggerganov/llama.cpp/tree/master/examples/rpc
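A rough sketch of how the linked RPC example is wired up. This assumes the `rpc-server` and `llama-cli` binaries built from that tree; the port, host addresses, and model path below are placeholders, and the exact flags may differ across llama.cpp versions:

```shell
# On each worker machine, start an RPC backend server (port is arbitrary):
./rpc-server -p 50052

# On the main machine, list the workers with --rpc; model layers are then
# offloaded across the listed hosts over the network:
./llama-cli -m ./model.gguf -p "Hello" \
    --rpc 192.168.0.11:50052,192.168.0.12:50052
```

Since every token's activations cross the network, a wired gigabit link between the machines matters far more than raw compute on the workers.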

bardo, to llm Portuguese
@bardo@ursal.zone avatar

Are you all using LLMs? For someone who only needs GPT-3-level output, what's the best option in terms of ethics and reliability? I was using Opera's Aria, but I discovered there's a whole world of options out there. Is Mistral, for example, trustworthy?

pseudonym, to ai
@pseudonym@mastodon.online avatar

More shenanigans.

https://nitter.poast.org/petergyang/status/1793480607198323196#m

Link to X/Twitter replaced with nitter link.

TL;DR: AI suggested adding glue to pizza to make the cheese stick. Sourced from an 11-year-old Reddit post.

These are all good fun to mock until someone actually gets hurt taking these responses literally.

I'm torn. I've thrown my share of shade at LLMs and the rush to shove "AI" into everything, and even for what they do well, one can argue whether it's worth the cost.

But I've seen some useful stuff too.

SirTapTap, to ai
@SirTapTap@mastodon.social avatar

well if you needed another reason to not use ChatGPT, it is now sucking in 100% of Rupert Murdoch's propaganda

look forward to ChatGPT telling you the election was stolen and only ivermectin mixed with aquarium cleaner can fix it

SirTapTap,
@SirTapTap@mastodon.social avatar

easy to say since it wasn't exactly a big loss, but I deleted my account

cheukting_ho, to llm
@cheukting_ho@fosstodon.org avatar

opening keynote by @t_redactyl - LLMs and illusions

pauleveritt,
@pauleveritt@fosstodon.org avatar

@cheukting_ho @t_redactyl Sunday night: saying bye to Jodie, thinking "whew, glad I'm not getting on a plane tomorrow direct to the next conference."

Today: sad I wasn't there to see Jodie.

veronica, to ai
@veronica@mastodon.online avatar

Slack is being passive aggressive about us requesting our data not be used for global "AI" training at work:

"You will still enjoy all of the benefits of our globally trained AI/ML models without contributing to the underlying models."

No idea what these supposed "benefits" are in the first place, but OK then!

br00t4c, to llm
@br00t4c@mastodon.social avatar

Here's what's really going on inside an LLM's neural network

https://arstechnica.com/?p=2026236

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

Fine-Tuning LLM Models – Generative AI Course 👇🏼

freeCodeCamp today released a new course on fine-tuning LLMs. The course, by Krish Naik, covers tuning methods such as LoRA, QLoRA, and quantization, using models such as Llama 2, Gradient, and Google's Gemma.

📽️: https://www.youtube.com/watch?v=iOdFUJiB0Zc
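For context on what LoRA actually does: it freezes the pretrained weight matrix W and learns only a low-rank update, W' = W + (alpha / r) · B · A, so just 2·d·r numbers are trained instead of d². A minimal dependency-free illustration of that arithmetic (names and toy values here are mine, not from the course; real fine-tuning would use a library such as Hugging Face PEFT):

```python
# Toy illustration of the LoRA update W' = W + (alpha / r) * (B @ A),
# using plain Python lists so it runs with no dependencies.

def matmul(X, Y):
    """Naive matrix multiply for lists of lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_update(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A).  W: d x d, B: d x r, A: r x d."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# d = 2, rank r = 1: the trainable update is B (2x1) and A (1x2),
# i.e. 4 numbers instead of the 4... for d = 2 that's no saving,
# but at d = 4096 it's 2*4096*r numbers versus 4096^2.
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[1.0],
     [2.0]]          # d x r
A = [[0.5, 0.5]]     # r x d
W_new = lora_update(W, A, B, alpha=2.0, r=1)
print(W_new)  # [[2.0, 1.0], [2.0, 3.0]]
```

QLoRA applies the same idea on top of a quantized base model, which is why the course treats LoRA, QLoRA, and quantization together.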

Taffer, to ai
@Taffer@mastodon.gamedev.place avatar

In my mind, the people most likely to use "AI" for things are the ones who sort of know what they want, but don't know how to get it.

So you ask for code to do something, and the LLM spits out something glommed together from Stack Overflow posts or Reddit. How do you know it does what you wanted? How do you debug it if it doesn't work?

Taffer,
@Taffer@mastodon.gamedev.place avatar

If these actually worked, I'd love to select a hunk of code, and have something spit out basic unit tests, or a reasonable documentation outline. Or even check for logic or security errors. How about figuring out how to upgrade my code to eliminate out-of-date libraries?

Taffer,
@Taffer@mastodon.gamedev.place avatar

My fantasy LLMs that actually do something useful are also not trained on data stolen from the Internet. And they don't use enough electricity to power a country, or evaporate a big city's water supply.

ianRobinson, to llm
@ianRobinson@mastodon.social avatar

Research paper from Anthropic.

“Today we report a significant advance in understanding the inner workings of AI models. We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model. This interpretability discovery could, in future, help us make AI models safer.”

https://www.anthropic.com/research/mapping-mind-language-model

ianRobinson, to llm
@ianRobinson@mastodon.social avatar

My use case for LLMs is to see whether they turn up any subtopic of interest that I haven't included in an article I'm writing on a topic.

If it does, then I can research that subtopic to see if I should include it in the article. Which I then write myself. The LLM is a search assistant.

I can also see value in them as research assistants and guides for learning about new topics. With the proviso that nothing an LLM produces should be taken at face value.

Claude is my fav.

ianRobinson, to llm
@ianRobinson@mastodon.social avatar

The new book by Salman Khan, of Khan Academy fame, will be of interest to anyone interested in how chatbots will influence education. There is definitely a place for them as personalised learning tutors. Especially for learners who would have zero chance of getting a human tutor for one-to-one learning.
https://www.penguin.co.uk/books/460644/brave-new-words-by-khan-salman/9780241680964
