remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Last week, Microsoft researchers released WizardLM 2, which it claimed is one of the most powerful open source large language models to date. Then it deleted the model from the internet a few hours later because, as The Information reported, it “accidentally missed” required “toxicity testing” before it was released.

However, as first spotted by Memetica, in the short hours before it was taken down, several people downloaded the model and reuploaded it to Github and Hugging Face, meaning that the model Microsoft thought was not ready for public consumption and had to be taken offline, has already spread far and wide, and now effectively can never be removed from the internet.

Microsoft declined to comment for this article.

According to a now deleted post from the developers of WizardLM 2 about its release, the open source model is Microsoft’s “next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent.”" https://www.404media.co/microsoft-deleted-its-llm-because-it-didnt-get-a-safety-test-but-now-its-everywhere/

remixtures, to ML Portuguese
@remixtures@tldr.nettime.org avatar

: "A recent innovation in the field of machine learning has been the creation of very large pre-trained models, also referred to as ‘foundation models’, that draw on much larger and broader sets of data than typical deep learning systems and can be applied to a wide variety of tasks. Underpinning text-based systems such as OpenAI's ChatGPT and image generators such as Midjourney, these models have received extraordinary amounts of public attention, in part due to their reliance on prompting as the main technique to direct and apply them. This paper thus uses prompting as an entry point into the critical study of foundation models and their implications. The paper proceeds as follows: In the first section, we introduce foundation models in more detail, outline some of the main critiques, and present our general approach. We then discuss prompting as an algorithmic technique, show how it makes foundation models programmable, and explain how it enables different audiences to use these models as (computational) platforms. In the third section, we link the material properties of the technologies under scrutiny to questions of political economy, discussing, in turn, deep user interactions, reordered cost structures, and centralization and lock-in. We conclude by arguing that foundation models and prompting further strengthen Big Tech's dominance over the field of computing and, through their broad applicability, many other economic sectors, challenging our capacities for critical appraisal and regulatory response." https://journals.sagepub.com/doi/full/10.1177/20539517241247839

hrheingold, to LLMs
@hrheingold@mastodon.social avatar

Like words, molecular sequences in biological components are tokens that can be manipulated by LLMs:

“Here, using large language models (LLMs) trained on biological diversity at scale, we demonstrate the first successful precision editing of the human genome with a programmable gene editor designed with AI.”

https://www.biorxiv.org/content/10.1101/2024.04.22.590591v1
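The excerpt above doesn't describe the preprint's actual tokenization scheme; purely as a toy illustration of treating a molecular sequence as tokens, here is a generic overlapping k-mer split of a DNA string.

```python
# Toy illustration of "sequences as tokens": split a DNA string into
# overlapping k-mers, the way a language model splits text into word pieces.
# This generic scheme is for illustration only; it is not the tokenizer
# used in the linked preprint.

def kmer_tokens(sequence: str, k: int = 3) -> list[str]:
    """Return overlapping k-mers of a nucleotide sequence."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

print(kmer_tokens("ATGCGTAC"))
# ['ATG', 'TGC', 'GCG', 'CGT', 'GTA', 'TAC']
```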

changelog, to opensource
@changelog@changelog.social avatar

🗞 New episode of Changelog News!

⚠️ @forrestbrazeal on the inside threat to OSS
🍴Vicki Boykis says Redis is forked
👻 @johnonolan says Ghost is federating
🦙 Meta Engineering announces Llama 3
@eieio's questions to ask when you don't want to work
🎙 hosted by @jerod

🎧 https://changelog.com/news/91

doctorambient, to ai
@doctorambient@mastodon.social avatar

Lots of people who work in AI have, in their head, an idea about what sort of interaction with an LLM might give them pause. The thing that might make them start to suspect that something interesting is happening.

Here's mine:

User: Tell me a cat joke.

LLM: Why did the cat join a band? He wanted to be a purr-cussionist.

User: Tell me a dad joke.

LLM: I think I just did.

(I have never seen this behavior, yet. 🤣)

cassidy, (edited) to ai
@cassidy@blaede.family avatar

I was listing something on eBay, and they encourage starting with an existing listing—presumably to increase the amount of detail and decrease the amount of work.

When I selected the same model, I got a default description that was extremely robotic and wordy while just repeating the spec sheet. I thought it sounded LLM-generated; sure enough, when I went to edit it, there was a big shiny “write with AI” button.

🤢

This is not actually helping anyone.

cassidy,
@cassidy@blaede.family avatar

It makes EVERY listing sound identical, lifeless, and lacking critical context like the SPECIFIC condition of the item, why it’s being sold, etc. You get an online marketplace with descriptions masquerading as human-authored all sporting the same useless regurgitation of the structured spec sheet, in a less digestible format.

Companies, don’t do this.

I don’t actually mind some of the “summarize/distill customer reviews” type generative AI stuff!

cassidy,
@cassidy@blaede.family avatar

But this is worse as it mixes machine-written nonsense with the corpus of human-written text. And from poking at a few other listings, everyone is just using this feature and its output as-is without actually adding anything. It’s not being used to improve the experience, it’s being used to replace the one critical human part of the experience.

I hate this.

savvykenya, to LLMs
@savvykenya@famichiki.jp avatar

If you have documents with the answers you're looking for, why not search the documents directly? Why are you embedding the documents and then using RAG (Retrieval-Augmented Generation) to make a large language model give you answers? An LLM generates text; it doesn't search a DB to give you results. So just search the damn DB directly, we already have great search algorithms with O(1) retrieval speeds! LLMs are so stupid.
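For comparison, here is a toy sketch of the two pipelines the post contrasts: plain keyword search over the documents versus the embed-retrieve-generate loop of RAG. `embed` and `generate` are hypothetical stand-ins for whatever embedding model and LLM would actually be used; the documents are made up.

```python
# Toy comparison: direct keyword search vs. a RAG-style pipeline.
import numpy as np

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

# --- direct search: just look for the query terms in the documents ---
def keyword_search(query: str) -> list[str]:
    terms = query.lower().split()
    return [d for d in docs if any(t in d.lower() for t in terms)]

# --- RAG: embed, retrieve by cosine similarity, then generate ---
def embed(text: str) -> np.ndarray:
    raise NotImplementedError("hypothetical embedding model")

def generate(prompt: str) -> str:
    raise NotImplementedError("hypothetical LLM call")

def rag_answer(query: str) -> str:
    doc_vecs = [embed(d) for d in docs]
    q = embed(query)
    sims = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vecs]
    best = docs[int(np.argmax(sims))]
    return generate(f"Answer using this document:\n{best}\n\nQuestion: {query}")

print(keyword_search("refund policy"))  # works with no model at all
```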

pyOpenSci, to python
@pyOpenSci@fosstodon.org avatar

Our upcoming “Build Your First Package!” workshop has a section on how tools like LLMs can be used to support tasks such as documenting and formatting your code to improve usability and maintainability. We’ll also consider the ethical and logistical challenges, pitfalls, and concerns associated with using LLM-based tools in software development.

There are still a few spaces left! Learn more and register today: https://www.eventbrite.com/e/from-python-code-to-module-tickets-879586546037?aff=oddtdtcreator
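One example of the kind of documentation task mentioned above, sketched with a hypothetical `call_model` wrapper (the workshop's actual tooling may differ): asking a model to draft a docstring for an undocumented function, with the output still reviewed by a human before it lands in the codebase.

```python
# Sketch: use an LLM to draft a docstring for an undocumented function.
# `call_model` is a hypothetical stand-in for whatever assistant or API is used.

def call_model(prompt: str) -> str:
    raise NotImplementedError("hypothetical LLM call")

SOURCE = '''
def moving_average(values, window):
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
'''

draft = call_model(
    "Write a NumPy-style docstring for this function. Do not change the code:\n"
    + SOURCE
)
# `draft` is only a starting point; a maintainer should check it for accuracy.
```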

smach, to LLMs
@smach@masto.machlis.com avatar

“But this doesn’t save any time!” 3 useful questions when trying #LLMs:

  • Is there another way to get results I want? Don't give up right away.
  • Does AI make this task less or more annoying? Sometimes supervising drudge work feels better even if it's not faster; other times you'd still rather do it yourself.
  • Are results likely to improve as LLMs get better? If so, add a calendar reminder to try again in a few months. Or, keep a list of things you want to re-try post GPT-5 class models.
    #GenAI
RalphBassfeld, to ai
@RalphBassfeld@swiss.social avatar

Thoughtful post by @molly0xfff who finds that LLMs and similar generative AIs can be helpful in specific and limited use cases, but generally the tradeoffs are not worth it. https://www.citationneeded.news/ai-isnt-useless/

vick21, to LLMs
@vick21@mastodon.social avatar

Here is an example of how bad LLMs are with math. I asked about velocity in the context of the Agile process. The answer?
“Sure! Let's say that an Agile development team has completed four iterations, each lasting two weeks. In the first iteration, they delivered 12 user stories; in the second, they delivered 10; in the third, they delivered 9; and in the fourth, they delivered 8. The total number of user stories completed by the end of the fourth iteration is 49 (12 + 10 + 9 + 8)”.
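For reference, the arithmetic the model flubbed is trivial to check; the story counts sum to 39, not 49:

```python
# Checking the sum the model got wrong, plus the average velocity per iteration.
stories = [12, 10, 9, 8]
total = sum(stories)             # 39, not 49
velocity = total / len(stories)  # 9.75 user stories per iteration
print(total, velocity)
```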

unixorn, to LLMs
@unixorn@hachyderm.io avatar

Great post about #LLMs hallucinating safety information. Specifically MSDS (Material Safety Data Sheet), but applies to any other safety information.

https://www.funraniumlabs.com/2024/04/phil-vs-llms/

I would much rather get a "no results" when I'm looking for medical interactions than an #LLM helpfully telling me "Here's some bullshit you don't know enough to know is horribly wrong"

Even something as innocent as acetaminophen can destroy your liver if you overdose on it.

#ai @llm @ai

cassidy, to ai
@cassidy@blaede.family avatar

Facebook’s “AI” is responding to posts pretending to be human; in this case claiming to have a gifted and disabled child who attends a NYC school.

How many other accounts are actually large language models masquerading as people? This seems incredibly irresponsible—and is a great reminder that “AI” is machines mindlessly regurgitating things from elsewhere on the Internet.

AI is not sustainable.

https://www.404media.co/facebooks-ai-told-parents-group-it-has-a-disabled-child/

Via @jasonkoebler for @404mediaco

rcarmo, to ai
@rcarmo@mastodon.social avatar

AI and LLMs have transformed computing indeed: we moved from a completely deterministic approach to telling computers what to do to one where we mumble various spells and incantations and hope they achieve something.

Truly a wonder of our age.

daniel_js_craft, to LLMs
@daniel_js_craft@mastodon.social avatar

Google Gemini aims for a 10-million-token context. It's so large that books, docs, and videos all fit inside it. Will this replace RAG?

Don't think so, because:
-💸 money; you still pay per token (see the rough numbers below)
-🐢 slow response time
-🐞 a huge context is hard to debug
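On the cost point, a back-of-the-envelope sketch; the per-token price below is a made-up placeholder, not Gemini's actual rate.

```python
# Rough cost of sending a full 10M-token context on every query.
price_per_million_tokens = 1.00   # hypothetical USD rate, placeholder only
context_tokens = 10_000_000       # the 10M-token context mentioned above

cost_per_query = context_tokens / 1_000_000 * price_per_million_tokens
print(cost_per_query)  # 10.0 USD each time the full context is sent
```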

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Many tech writers have a constant fear that AI will take our jobs. I often think, what I’m doing isn’t rocket science. Any person with some education can do it. And yet, just as engineers struggle to write, tech writers frequently struggle with AI tools. They don’t understand how to use them effectively. Even though “prompt engineering” is often a ridiculed term online, again and again I hear feedback from TWs about AI not being useful to them, or they simply don’t have interest in AI, as if it’s irrelevant to their work. This blows me away. When I can ramp up on a product in an hour and write a user guide in a couple of days, and code a doc publishing script that automates even more tasks, how can AI not be useful? How can it not be essential?

An often repeated saying is that AI tools won’t replace us, we’ll be replaced by those who know how to use AI tools. I feel like this is more and more true. Consider this scenario: You hire a roofer to install a new roof, which mainly involves removing the old shingles and installing new ones. One roofer arrives with a hammer. It will take this roofer 2 weeks to do the job. Another roofer arrives with a pneumatic roofing nailer power tool. It will take this roofer 3 days to do the job. The cost of the first roofer is 4 times that of the second. The output is pretty much the same. Which roofer do you hire?

It’s the same with tech writers. Suppose you have a large project. One tech writer can create the documentation using AI tools in a quarter of the time, while the other will take 75% longer. Which tech writer do you hire?

Fortunately, I think tech writers can learn how to use AI tools as power tools. Especially with more awareness and knowledge about effective prompting techniques, tech writers can become much more productive using AI." https://idratherbewriting.com/blog/ai-is-accelerating-me

kellogh, to opensource
@kellogh@hachyderm.io avatar

#opensource WizardLM2 8x22B exceeds performance of GPT4 in some benchmarks

  • Apache2 👍
  • progressive learning instead of all-at-once means less power-hungry and more data efficient during training
  • Co-Teaching and Self-Teaching are intriguing, I want to hear more
  • from Microsoft #AI, I imagine GPT5 must be nigh, if they’re releasing competition for GPT4

https://wizardlm.github.io/WizardLM2/ #LLMs

kellogh, to LLMs
@kellogh@hachyderm.io avatar

Teaching how to program in Python with LLMs.

It's a good study. Definitely some drawbacks, but it also seems like a good idea overall.

https://austinhenley.com/blog/learningwithai.html

kellogh, to LLMs
@kellogh@hachyderm.io avatar

Imagine a construction company that put out a statement: “we’re going to stop using power tools because there are a lot of workplace injuries.” That would be crazy. Sure, I get that with a circular saw it LOOKS like you just have to pull a trigger, but there’s actually a bit more to using it safely. You train your workers.

metin, to ai
@metin@graphics.social avatar

When generative AI is trained with AI-generated data, it becomes degenerat(iv)e AI.

Norobiik, to generativeAI
@Norobiik@noc.social avatar

In other words, the current tech is a dead end.

He pointed to a quartet of cognitive challenges: reasoning, planning, persistent memory, and understanding the physical world.

“Those are four essential characteristics of human intelligence — also animal intelligence, for that matter — that current systems can’t do,” he said.

Meta's AI chief: LLMs will never reach human-level intelligence
https://thenextweb.com/news/meta-yann-lecun-ai-behind-human-intelligence
