cassidy, to ai
@cassidy@blaede.family avatar

I get that it’s hot right now, but man, the user experience of LLMs being this bot you type text to seems like a huge step backwards compared to just integrating these AI features natively into products.

williamgunn, to ai
@williamgunn@mastodon.social avatar

A gallery of ways LLMs can be used for evil
https://llmsaregoinggreat.com/evil

Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar

ChatGPT is powered by a hidden army of contractors making $15 per hour. For a technology that is supposedly threatening many jobs, AI tools require a large workforce to ensure accuracy and trust through a human feedback loop, something has failed to do with .

https://www.techspot.com/news/98600-chatgpt-powered-hidden-army-contractors-making-15-hour.html

Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar

“There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own” - Naomi Klein

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

lilithsaintcrow, to random
@lilithsaintcrow@raggedfeathers.com avatar

So, my fellow self-publishing folks, one of the bigger sites being used to train chatbots is Scribd. I’m thisclose to taking all my books down from there, because I did not consent to this. https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/

mjgardner,
@mjgardner@social.sdf.org avatar

@lilithsaintcrow I sympathize, but that horse has probably left the barn, just as published email addresses are irrevocable spam targets.

Software developers like me are similarly dealing with our license-washed code appearing in GitHub’s Copilot and similar services.

Just wait until they or others start monetizing their training sets to even less scrupulous “AI” ventures.

Jigsaw_You, to opensource Dutch
@Jigsaw_You@mastodon.nl avatar

“The concern is that machine-generated content has to be balanced with a lot of human review and would overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist.”

https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart

jchyip, to random
@jchyip@mastodon.online avatar

Wondering how well prompt patterns hold across LLMs, or whether they’re more like LLM-specific idioms?

bigdata, to random

🆕 Newsletter 🚀 Building software systems with LLMs and other Generative Models will primarily involve writing text instructions → I explore the fascinating world of prompt engineering, LLMs & #NLProc pipelines.
#MachineLearning #GenerativeAI #LLMs
🔗 https://gradientflow.substack.com/p/the-future-of-prompt-engineering
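
A minimal sketch of what “writing text instructions” as the main programming interface can look like: a pipeline step is little more than a prompt template plus a model call. This is purely illustrative; the complete() function is a hypothetical stand-in for whatever LLM client you actually use.

# A prompt "program": the instructions are text, the parameters are template slots.
SUMMARIZE_TEMPLATE = (
    "You are a careful technical editor.\n"
    "Summarize the following text in at most {max_words} words.\n\n"
    "Text:\n{text}\n\nSummary:"
)

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; wire this to your provider."""
    return "[model output would appear here]"

def summarize(text: str, max_words: int = 50) -> str:
    """One pipeline step: fill the template, call the model, return its text."""
    prompt = SUMMARIZE_TEMPLATE.format(text=text, max_words=max_words)
    return complete(prompt).strip()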

jchyip, to random
@jchyip@mastodon.online avatar

It's amusing (interesting?) that suffer from .

ppatel, to accessibility
@ppatel@mstdn.social avatar

Q&A with Vint Cerf, chief internet evangelist at Google and recipient of IEEE’s Medal of Honor, on how Google has changed since 2005, the hazards of AI, accessibility for disabled people, and more.

https://techcrunch.com/2023/05/05/vint-cerf-on-the-exhilarating-mix-of-thrill-and-hazard-at-the-frontiers-of-tech/

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #LLMs #Chatbots #ChatGPT: "Do you think the public has been too credulous about ChatGPT?

It’s not just the public. Some of your friends at your newspaper have been a bit credulous. In my book, “Rebooting A.I.,” we talked about the Eliza effect — we called it the “gullibility gap.” In the mid-1960s, Joseph Weizenbaum wrote this primitive piece of software called Eliza, and some people started spilling their guts to it. It was set up as a psychotherapist, and it was doing keyword matching. It didn’t know what it was talking about, but it wrote text, and people didn’t understand that a machine could write text and not know what it was talking about.

The same thing is happening right now. It is very easy for human beings to attribute awareness to things that don’t have it. The cleverest thing that OpenAI did was to have GPT type its answers out one character at a time — made it look like a person was doing it. That adds to the illusion. It is sucking people in and making them believe that there’s a there there that isn’t there. That’s dangerous.

We saw the Jonathan Turley incident, when it made up sexual harassment charges. You have to remember, these systems don’t understand what they’re reading. They’re collecting statistics about the relations between words. If everybody looked at these systems and said, “It’s kind of a neat party trick, but haha, it’s not real,” it wouldn’t be so disconcerting. But people believe it because it’s a search engine. It’s from Microsoft. We trust Microsoft. Combine that human overattribution with the reality that these systems don’t know what they’re talking about and are error-prone, and you have a problem."

https://www.nytimes.com/interactive/2023/05/02/magazine/ai-gary-marcus.html

BenjaminHan, to gpt
@BenjaminHan@sigmoid.social avatar

1/

Solving causal tasks is a hallmark of intelligence. One recent study [1] categorizes these tasks into covariance-based and logic-based reasoning (screenshot) and examines how LLMs perform on causal discovery, actual causality, and causal judgments.

BenjaminHan,
@BenjaminHan@sigmoid.social avatar

6/

The moral of the story: when investigating capabilities of black-box LLMs, always perform memorization tests first on the benchmark datasets!
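
One simple memorization probe, as a sketch (not necessarily the method used in [1]): feed the model the first part of each benchmark item and measure how closely its continuation reproduces the held-out remainder. The complete() function below is a hypothetical stand-in for your model client.

from difflib import SequenceMatcher

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; wire this to your provider."""
    return ""

def memorization_score(item_text: str, prefix_fraction: float = 0.5) -> float:
    """Prompt with the first part of a benchmark item and measure how closely
    the model's continuation matches the held-out remainder (1.0 = verbatim)."""
    cut = int(len(item_text) * prefix_fraction)
    prefix, held_out = item_text[:cut], item_text[cut:]
    continuation = complete(prefix)[: len(held_out)]
    return SequenceMatcher(None, continuation, held_out).ratio()

def flag_possibly_memorized(items: list[str], threshold: float = 0.8) -> list[str]:
    """Benchmark items the model can largely regenerate are suspect: high task
    scores on them may reflect memorization rather than capability."""
    return [t for t in items if memorization_score(t) >= threshold]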

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

"Outside of the Netflix headquarters in New York City on Wednesday, hundreds of members of the Writers Guild of America (WGA) marched for a better contract on the second day of the writers’ strike. They were there to communicate a clear message: Writers refuse to be replaced by AI.

Signs showcased slogans such as “Writers Generate All of it,” “Don’t Let ChatGPT Write ‘Yellowstone’,” “I Told ChatGPT To Make A Sign and It Sucked,” and “Don’t Uber Writing.” These signs referred to the unprecedented “AI” category in the guild’s proposal in which they asked to regulate the use of AI on union projects but were met with refusal from studios. Writers are seeking pay for episodes on streaming platforms, and to not have their work devalued and turned into gig labor due to the use of text-generating AI programs to write dialog."

https://www.vice.com/en/article/5d9gkq/striking-writers-are-on-the-front-line-of-a-battle-between-ai-and-workers

shiwali, to random

In the chaos around LLMs, I went back and re-read the beautiful article by Lawrence Barsalou on the function of language in human cognition.

Barsalou argues that language evolved in humans to support coordinated action. The archival function of language is secondary. He highlights that the field has largely studied the secondary function and made minimal advances on the primary.

NLP and LLMs have a similar bias.

Paper: https://barsaloulab.org/Online_Articles/1999-Barsalou-DP-situated_comprehension.pdf

shiwali,

If we focus on the primary function of language, communication, we have to study how language production is tied to reasoning and inference in an AI system.

Human agents use language to provide information to their partner about the state of their beliefs, knowledge, and reasoning.

If reasoning and language generation are independent (like in LLMs), tokens cannot be considered 'information'.

pinecone,
@pinecone@mastodon.social avatar

@shiwali 2/n You say that "If reasoning and language generation are independent (like in LLMs), tokens cannot be considered 'information'." But they are not independent. Language is used to do many things, including sharing information, but also to even be able to think in abstract, systematic terms. So LLMs do have knowledge; they are able to reason over the semantics from language. The article says... [see next]

Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar

The only way that technology can boost the standard of living is if there are economic policies in place to distribute the benefits of technology appropriately. AI will certainly reduce labor costs and increase profits for corporations, but that is entirely different from improving our standard of living.

https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey

janriemer, to programming

No, LLMs do NOT understand your code.🙄

hobs, to opensource
@hobs@mstdn.social avatar

Smart people at Google and OpenAI are leaving to go work for open source companies because bigtech LLMs don’t have a chance against open source: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

kellogh, to random
@kellogh@hachyderm.io avatar

i've noticed that there's a lot of fruitful development around purely treating as black boxes and focusing on prompt engineering + the ReAct pattern. Simply forcing the LLM to draw out it's thoughts over more text increases it's accuracy, and if you also interleave that with input from the user or calls to external services, e.g. Google, you can achieve very interesting results

mjgardner, to random
@mjgardner@social.sdf.org avatar

Amazing. OpenAI has thrown together a way for people to get their private information removed from ChatGPT—mostly only, natch—and it’s a shitty form. https://apple.news/AcJAo2M3rSoaRpt2SPic5Nw

If you needed evidence that they’ve been overrun with salesweasels and MBAs, this is it.

