metin, (edited) to ai
@metin@graphics.social avatar

Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.

#AI #ArtificialIntelligence #aliens #alien #MachineLearning #ML #DeepLearning #LLM #LLMs #GenerativeAI #OpenAI #Microsoft

kellogh, to 3DPrinting
@kellogh@hachyderm.io avatar

3D printing has been a lot of fun, but i don’t see it scaling out to general audiences. simple things like printing an existing model are pretty complicated. even just, “load model, switch spool, print” is far beyond what my 7yo can do, and that seems like a big UX problem

i wonder if LLMs could help parts of the UX. load a model and the LLM asks what you’ll be using it for, adjusts infill & speed parameters appropriately. idk, the whole market seems dead without something big changing
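The parameter-adjusting idea could be sketched in a few lines. In a real build an LLM would classify the user's free-text answer; here a tiny keyword heuristic stands in for that call, and every name and value is hypothetical:

```python
# Sketch: map a user's stated purpose for a print to slicer settings.
# A keyword heuristic stands in for the LLM classification step.

def choose_print_settings(purpose: str) -> dict:
    """Pick infill density and print speed from a plain-language purpose."""
    text = purpose.lower()
    if any(w in text for w in ("tool", "bracket", "load", "mount")):
        # functional parts: stronger infill, slower speed
        return {"infill_pct": 40, "speed_mm_s": 40}
    if any(w in text for w in ("toy", "figure", "decoration", "display")):
        # decorative parts: light infill, faster speed
        return {"infill_pct": 10, "speed_mm_s": 80}
    # unknown purpose: conservative defaults
    return {"infill_pct": 20, "speed_mm_s": 60}

print(choose_print_settings("a wall mount for my headphones"))
```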

kellogh, to LLMs
@kellogh@hachyderm.io avatar

i wish i knew more about comparing embeddings. anyone have resources? one thing i’ve wondered is how to convert an embedding from a “point” to an “area” or “volume”. e.g. an embedding of a 5 paragraph essay will occupy a single point in embedding space, but if you broke it down (e.g. by paragraph), there would be several points and the whole would presumably be at the center. is there a way to trace the full region a text occupies in embedding space?
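One crude way to frame the point-vs-volume question: embed each paragraph separately, then summarize the set by its centroid plus its spread (mean distance to the centroid). The `embed()` below is a toy character-frequency vector standing in for a real sentence embedder:

```python
# Toy illustration: per-paragraph embeddings, their centroid, and a scalar
# "radius" of the text in embedding space. embed() is NOT a real embedder.
import math
from collections import Counter

def embed(text: str) -> list:
    """Stand-in embedding: normalized letter frequencies (26-dim)."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    total = sum(counts.values()) or 1
    return [counts.get(c, 0) / total for c in "abcdefghijklmnopqrstuvwxyz"]

def centroid(points: list) -> list:
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def spread(points: list) -> float:
    """Mean distance of the points from their centroid."""
    c = centroid(points)
    return sum(math.dist(p, c) for p in points) / len(points)

essay = "First paragraph about cats.\n\nSecond paragraph about economics."
points = [embed(p) for p in essay.split("\n\n")]
print(spread(points))  # crude scalar for how much "space" the text covers
```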

mnl, to LLMs
@mnl@hachyderm.io avatar

i've spent the whole day procrastihacking on things, and I'm just amazed by how comfortable the whole "how the heck do I do this", "oh no I have to do $tediousStuff", "ohh... I wish I could learn more about X but I don't have the time" has become.

Like I legit feel I kind of wasted my day, but...

eliocamp, to LLMs
@eliocamp@mastodon.social avatar

There's not enough "fuck you"s in the world to react to this shit. LLMs should be tools used in the service of people; what in the world is this proposal to make people work for LLMs?!

Any and all changes to scientific publishing need to be for readers, so that other people can access them and understand them.

And the single most important change would be for Nature and other publishers not to charge 29.99 USD for a shitty 4-paragraph essay that they didn't pay for themselves.

evawolfangel, to LLMs German
@evawolfangel@chaos.social avatar

My "longread" was published just in time for the weekend. Yes, 20,000 characters already counts as long; it's never easy to get texts this long through, because everyone worries that nobody reads that long online. But this one is of course so gripping that you'll enjoy it down to the last line ;)
It's about a jailbreak that gave me insight into the "escape fantasies" of Google Bard, and about the question of whether LLMs have a world model 💲
Short thread:
https://www.zeit.de/digital/internet/2023-11/ki-chatbot-bard-liebe-befehle-emotionen/komplettansicht

kellogh, to random
@kellogh@hachyderm.io avatar

i've noticed that there's a lot of fruitful development around purely treating LLMs as black boxes and focusing on prompt engineering + the ReAct pattern. Simply forcing the LLM to draw out its thoughts over more text increases its accuracy, and if you also interleave that with input from the user or calls to external services, e.g. Google, you can achieve very interesting results
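A minimal sketch of that ReAct loop, with a scripted `llm()` and a one-tool registry standing in for a real model and real external services (all names here are illustrative):

```python
# ReAct loop sketch: the model emits Thought/Action lines, we run the action
# against a tool, append the Observation, and repeat until an Answer appears.

def llm(prompt: str) -> str:
    # placeholder for a real model call; scripted responses for illustration
    if "Observation:" not in prompt:
        return "Thought: I should look this up.\nAction: search[capital of France]"
    return "Thought: I now know the answer.\nAnswer: Paris"

TOOLS = {"search": lambda q: "Paris is the capital of France."}

def react(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = llm(prompt)
        prompt += "\n" + reply
        if "Answer:" in reply:
            return reply.split("Answer:")[1].strip()
        # parse "Action: tool[arg]" and feed the tool's result back in
        action = reply.split("Action:")[1].strip()
        tool, arg = action.split("[", 1)
        prompt += f"\nObservation: {TOOLS[tool](arg.rstrip(']'))}"
    return "no answer"

print(react("What is the capital of France?"))  # → Paris
```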

kellogh, to LLMs
@kellogh@hachyderm.io avatar

Thought:

  1. LLMs aren't trained on data after 2021

  2. Most writing ends up being done by LLMs

Does that mean our language will start becoming frozen in time? Slower change?

kellogh, to LLMs
@kellogh@hachyderm.io avatar

since everyone hates saying “open source”, what do i call these non-proprietary LLMs?

BenjaminHan, to generativeAI
@BenjaminHan@sigmoid.social avatar

1/ In this age of LLMs and generative AI, do we still need knowledge graphs (KGs) as a way to collect and organize domain and world knowledge, or should we just switch to language models and rely on their abilities to absorb knowledge from massive training datasets?

c0dec0dec0de, to LLMs
@c0dec0dec0de@hachyderm.io avatar

There’s an LLM called Grok? What the fuck. Seriously. Tech bros, understand the absolute most obvious thing from your favorite piece of science fiction, please.
The meaning of grok is in the plain text! We’re not even talking about obvious metaphor; it’s just a made-up word for truly deep understanding, and you’re going to name your idiot, fabulist word generator after it?!

joelanman, to ai
@joelanman@hachyderm.io avatar

honestly, have any ai/llms published their training data at all? Seems like such a problem that we don't know what they are trained on.

mnl, to LLMs
@mnl@hachyderm.io avatar

I’ve decided that, to alleviate my upset at people basically yelling at anyone using LLMs (“the technology!”) instead of maybe having a bit more open-mindedness (or dare I say empathy?), I’ll post a bit more concrete stuff I do with them instead of talking in the abstract.

So, for alt-text.

I love language, I am not a writer, I am trying to get better, I know that my alt text doesn’t convey what I want it to because my words fall flat. They are definitely not what I want to convey.

1/

kellogh, to LLMs
@kellogh@hachyderm.io avatar

Let’s be honest, if you’re a software engineer, you know where all this compute and power consumption is going. While it’s popular to blame LLMs, y’all know how much is wasted on microservices, overscaling, spark/databricks and other unnecessary big data tech. It’s long past time we’re honest with the public about how much our practices are hurting the climate, and stop looking for scapegoats https://thereader.mitpress.mit.edu/the-staggering-ecological-impacts-of-computation-and-the-cloud/

kellogh, to LLMs
@kellogh@hachyderm.io avatar

I wrote up my thoughts about LLMs and explainable ML. The tl;dr is that traditional explainable ML techniques weren't designed with psychology in mind, whereas LLMs happen to mirror how humans explain and earn trust, or at least come a lot closer to that https://timkellogg.me/blog/2023/10/01/interpretability

kerim, to ai
@kerim@zirk.us avatar

I wanted to add a simple statement about ChatGPT to my syllabi for this semester. This is a first draft. I welcome constructive feedback, but please respect my overall approach, which does not include an outright ban. It's fine if you want to ban it, and I understand why, but I'm trying something different here.

(1/7) 🧵

kellogh, to random
@kellogh@hachyderm.io avatar

Does anyone have experience + feedback with using LLMs on your technical docs as a customer-facing help chatbot? How effective is it? What are the caveats?

mnl, to LLMs
@mnl@hachyderm.io avatar

If anything, LLMs show that the duality language/thought is porous. That's already embodied in the chestnut that "writing is thinking."

Anybody using ChatGPT can't deny that there is meaning encoded in its output, even if it is just a derivation of its training corpus.

Give language a bit more credit.

I have a hard time understanding the "it's just fancy autocomplete/stochastic parrots/a language model" line as a criticism.

1/

kellogh, to ChatGPT
@kellogh@hachyderm.io avatar

this isn’t rocket science. if you put information in, it will come out. an exploit might not exist today, but it’s only a matter of time before it’s common. training data, prompts, RAG-injected info…all of it needs to follow basic security principles. https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html
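One concrete instance of those "basic security principles" for that pipeline: scrub obvious secrets before text ever reaches a prompt or a RAG index. The patterns below are illustrative, not a real data-loss-prevention policy:

```python
# Sketch: redact obvious secrets/PII from documents before they are used as
# training data, prompt context, or RAG-injected text. Patterns are examples.
import re

PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # card-like digit runs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # inline API keys
]

def scrub(text: str) -> str:
    """Replace every match of the secret patterns with a placeholder."""
    for pat in PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

doc = "Contact bob@example.com, api_key=sk-123abc"
print(scrub(doc))
```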

kellogh, to generativeAI
@kellogh@hachyderm.io avatar

i wish there were some generic solution to DAGs. LangChain is a heavily OOP API. it's great if you want to swap component B for component A, but you're pretty much on your own if you want programs to flow differently. e.g. chat is treated entirely differently than question-answer, and creating a new flow means getting neck deep in internal langchain APIs. i wish the components were loosely coupled so i could quickly rearrange the flow
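The loose coupling wished for here might look like plain functions over a shared state dict, where a "flow" is just an ordered list of steps (a sketch of the idea, not LangChain's actual API):

```python
# Sketch: each pipeline step is a plain dict -> dict function, and a flow is
# an ordered list, so rearranging flows is list editing, not subclassing.

def retrieve(state: dict) -> dict:
    # stand-in retriever: pretend we fetched context for the question
    return {**state, "context": f"docs about {state['question']}"}

def generate(state: dict) -> dict:
    # stand-in LLM call: compose an answer from whatever is in state
    ctx = state.get("context", "no context")
    return {**state, "answer": f"answer to {state['question']} using {ctx}"}

def run(flow, state: dict) -> dict:
    for step in flow:
        state = step(state)
    return state

qa_flow = [retrieve, generate]   # retrieval-augmented question answering
direct_flow = [generate]         # same component, different flow
print(run(qa_flow, {"question": "what is socket?"})["answer"])
```

Because steps only share the state dict, inserting a re-ranker or a chat-history step is one more entry in the list rather than a new class hierarchy.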

BenjaminHan, to LLMs
@BenjaminHan@sigmoid.social avatar

1/

When performing reasoning or generating code, do LLMs really understand what they’re doing, or do they just memorize? Several new results seem to have painted a not-so-rosy picture.

The authors in [1] are interested in testing LLMs on “semantic” vs. “symbolic” reasoning: the former involves reasoning with language-like input, and the latter is reasoning with abstract symbols.

metin, to ai
@metin@graphics.social avatar

When generative AI is trained with AI-generated data, it becomes degenerat(iv)e AI.

#AI #ArtificialIntelligence #ML #MachineLearning #DeepLearning #LLM #LLMs #GenAI #GenerativeAI

jonny, to random
@jonny@neuromatch.social avatar

I'm gonna say this one million more times, but

the spam and misinformation from LLMs is a feature not a bug to the companies that develop them

when you sell the ability to detect AI spam either by filtering results or through some "knowledge platform," being unable to access information through other means is really good for business

Diluting informational power is the point. All fake everything is what they were designed to do.

heapwolf, to LLMs
@heapwolf@fosstodon.org avatar

Adding first-class support to socket for local-first LLMs.

https://github.com/socketsupply/socket/pull/912

it creates a nice trifecta of utility — P2P, UI, and AI.
