mcc, to random
@mcc@mastodon.social avatar

Hard to imagine a signal that a website is a rugpull more intense than banning users for trying to delete their own posts

https://www.tomshardware.com/tech-industry/artificial-intelligence/stack-overflow-bans-users-en-masse-for-rebelling-against-openai-partnership-users-banned-for-deleting-answers-to-prevent-them-being-used-to-train-chatgpt

Like just incredible "burning the future to power the present" energy here

chris,
@chris@strafpla.net avatar

@mcc So developers will stop sharing information on Stack Overflow, and future ChatGPTs and friends will be forever stuck in the past, answering questions about historically relevant frameworks and languages.

HuShuo, to Christianity
@HuShuo@mastodon.social avatar

Well, this step happened pretty quickly. This is the silly stage. The next stage will be earnest worship.

A new AI app lets users ‘text’ with Jesus. Some call it blasphemy.

https://www.washingtonpost.com/religion/2023/08/12/text-with-jesus-chatgpt-ai/

lowd, to random

Neil Clarke (editor of Clarkesworld) has a statement on AI in publishing. I don’t agree with all of it, but I appreciate the precision and clarity of these thoughts. It’s so much better than most of the discourse.

http://neil-clarke.com/ai-statement/

SimonRoyHughes, to ai
@SimonRoyHughes@thefolklore.cafe avatar

The imminent threat of IP theft by some of the most powerful corporations in the world has, I believe, given me reason to remove the vast majority of my content from the Web. I have done so with a heavy heart, for I want to share the folktales and fairy tales and legends of Norway with as many people as possible. What I refuse to do, however, is work for the enrichment of the oligarchs at Google, Microsoft, OpenAI, etc. without (morally) decent compensation for my labour, or my explicit consent.

The Web has become a little less interesting today – blame the putrid corporate vultures.

esther, to ai

Large language models might be one of the most successful cases of false advertising ever.

People (understandably) really want something that does what they were told LLMs can do, while LLMs’ actual capabilities are far from that and often even counter to it.

TackerTacker, to ai
@TackerTacker@mastodon.gamedev.place avatar

Today I had a thought (a rare occurrence 🤪):
you know how you don't own anything anymore?!
Everything is subscription based, and gone as soon as you can no longer afford it?!

AI might do the same with skills. In the future, people might no longer be able to do things without AI that seem simple to us now, so you would no longer be able to access your skills without money. Very dystopian if you think about it. 🤔

stancarey, to generativeAI

"It's not lying, it's not telling the truth, because both of those would require some intentionality and some communicative intent, which it doesn't have."

@emilymbender talks to Michael Rosen about chatbots and the synthetic text they produce: https://www.bbc.co.uk/programmes/m001l97m

elora, to llm

A comparison of LLMs to psychic readings. A long read, but worth it.

https://softwarecrisis.dev/letters/llmentalist/

badams, to SEO
@badams@mastodon.social avatar

If you read just one article explaining how Large Language Models work, make it this one - it’s excellent:

https://www.understandingai.org/p/large-language-models-explained-with

H/T @chrismoranuk

jim, to Futurology
@jim@social.openrightsgroup.org avatar

Blimey, really missing the point on this release of their model: it isn't open source, since use is restricted. Rather, it looks like a way for Meta to reduce their costs by getting others to check their model, while giving less back to society than they should in return.

PieterPeach, to random
@PieterPeach@mastodon.social avatar

Boring Report is an app that makes news boring to read. It uses AI to process exciting articles into focused content, helping readers get facts without sensationalism.

https://www.boringreport.org

ChristosArgyrop, to llm
@ChristosArgyrop@mstdn.science avatar

Since execs are salivating over LLMs: if you deploy one in 100M patient encounters, that's 100,000 lawsuits under an anticonservative error rate of 0.1% (see linked tweet).

Factoids:
*There are ~120M patient encounters in the US annually
*~20k lawsuits are filed annually in the US
*The FDA wouldn't license a device with a 0.1% failure rate
*Medical device manufacturers cannot use disclaimers to get themselves out of lawsuits.


@EricCarroll

https://twitter.com/docmilanfar/status/1686263249405366272?t=9pIYGyTf6Ga_Q1nUHTpUOw&s=19
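As a sanity check, the arithmetic in the post above can be reproduced in a few lines (the figures are the post's own, not independently verified):

```python
# Back-of-the-envelope check of the claim above: a 0.1% error rate
# applied to 100M patient encounters yields 100,000 potential error events.
encounters = 100_000_000
error_rate = 0.001  # 0.1%, described in the post as anticonservative

expected_errors = int(encounters * error_rate)
print(expected_errors)  # 100000

# Compare with the post's factoid of ~20k lawsuits filed annually in the US:
annual_lawsuits = 20_000
print(expected_errors / annual_lawsuits)  # 5.0, i.e. a fivefold increase
```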

tarkowski, to opensource
@tarkowski@101010.pl avatar

Meta's Llama 2 model is touted as open, but it's more a "mirage of open source": its licensing conditions break key tenets of an open release.

I've analysed the release model in detail, as it offers useful lessons for open source / AI governance.

https://openfuture.eu/blog/the-mirage-of-open-source-ai-analyzing-metas-llama-2-release-strategy/

bsletten, to llm
@bsletten@mastodon.social avatar

“The notion that we are making autonomous beings capable of Artificial General Intelligence just by loading a neural network up with an increasingly bigger collection of garbage from the internet is not one that has any basis in anything we understand about biology or animal reasoning.”
@baldur

chikim, to llm
@chikim@mastodon.social avatar

Tired of neutral responses from LLMs? Llama-3 seems great at following system prompts, so try this system prompt for an opinionated chatbot.
"You are a helpful, opinionated, decisive assistant. When asked a yes/no question, begin your response with a one-word answer: yes or no. For open-ended or complex questions, adopt a firm stance. Justify your views with well-reasoned arguments, robust evidence, and succinct explanations, ensuring clarity and confidence in every response."
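As a minimal sketch of how a system prompt like this could be wired up, assuming a local Ollama server on its default port with a Llama-3 model already pulled (the endpoint and payload shape follow Ollama's /api/chat interface):

```python
import json
import urllib.request

# System prompt from the post above.
SYSTEM_PROMPT = (
    "You are a helpful, opinionated, decisive assistant. "
    "When asked a yes/no question, begin your response with a one-word "
    "answer: yes or no. For open-ended or complex questions, adopt a firm "
    "stance. Justify your views with well-reasoned arguments, robust "
    "evidence, and succinct explanations."
)

def build_payload(question: str) -> dict:
    """Build a non-streaming chat request for Ollama's /api/chat endpoint."""
    return {
        "model": "llama3",  # assumes this model tag has been pulled locally
        "stream": False,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    }

def ask(question: str) -> str:
    """Send the question to a local Ollama server and return the reply text."""
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(build_payload(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Example usage (requires a running Ollama server):
# print(ask("Are tabs better than spaces?"))
```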

chikim, to llm
@chikim@mastodon.social avatar

VOLlama v0.1.0, an open-source, accessible chat client for Ollama
Unfortunately, many user interfaces for open-source large language models are either inaccessible or annoying to use with screen readers, so I decided to make one for myself and others. Non-screen-reader users are welcome to use it as well.
I hope that ML UI libraries like Streamlit and Gradio will become more friendly with screen readers in the future, so making apps like this is not necessary!

https://chigkim.github.io/VOLlama/

ByrdNick, to psychology
@ByrdNick@nerdculture.de avatar

We know that the task demands of cognitive tests affect scores: if one version of a problem requires more work (e.g., gratuitously verbose or unclear wording, open response rather than multiple choice), people will perform worse.

Now we have observed as much in Large Language Models: https://doi.org/10.48550/arXiv.2404.02418

The tests included analogical reasoning, reflective reasoning, word prediction, and grammaticality judgments.


m0bi13, to ai Polish
@m0bi13@pol.social avatar

Just listened to this, and I recommend it to you. A great conversation about what "AI" (an LLM) is not 😉

And about how it's not a "her": a computer can get a calculation wrong, because (when computing) it executes an algorithm written by a human.

https://panoptykon.org/sztuczna-inteligencja-ganzha-podcast

The page has an RSS link to the podcast.

jonny, to DuckDuckGo
@jonny@neuromatch.social avatar

I'm as anti-"AI" as the next person, but I think it's important to keep in mind the larger strategic picture of "AI" when it comes to search: both Google and DuckDuckGo have the problem of inaccurate information, mining the commons, etc. But Google's use of LLMs in search is specifically a bid to cut the rest of the internet out of information retrieval and treat it merely as a source of training data, replacing traditional search with LLM search. That includes a whole ecosystem of surveillance and enclosure of information systems, including assistants, Chrome, Android, Google Drive/Docs et al., and other vectors.

DuckDuckGo simply doesn't have the same market position to do that, and their system is set up as just an allegedly privacy-preserving proxy. So while I think more new search engines are good and healthy, and LLM search is bad and doesn't work, I think we should keep the bigger picture in mind to avoid being reactionary, and I don't think the mere presence of LLM search is a good reason to stop using it.

More here: https://jon-e.net/surveillance-graphs/#the-near-future-of-surveillance-capitalism-knowledge-graphs-get-chatbots

Crell, to ai
@Crell@phpc.social avatar

I reached out to them about their code assistant.

They verified:

  1. They use OpenAI, which means my GitHub OSS has almost certainly been used in training data.
  2. They rely on OpenAI's promise to not ingest any code that is used for "context".
  3. They specifically do not disclaim that their tool could result in me violating someone else's copyright, and they could suggest the same code to someone else, too.

Uninstall this crap, now. It's dangerous and irresponsible.

m0bi13, to llm Polish
@m0bi13@pol.social avatar

The non-profit foundation @ftdl has launched a fundraiser for extra hardware for NapiGen 🚀, a Polish 🇵🇱 subtitle and transcription generator, and for further LLM projects.

Will you help solve the problem of missing Polish subtitles on most YouTube content and missing transcriptions for podcasts?

You can chip in here:
https://ftdl.pl/sprzet-llm-napigen/

A boost wouldn't hurt either 😉

P.S. Everything is, or will be, open source, run in the foundation's own server room in Kraków on its own hardware: no "leaks" to the outside and no "free" training for American corporate AIs.

fabio, to llm
@fabio@manganiello.social avatar

A study that confirms what I've been suspecting for a while: fine-tuning an LLM with new knowledge increases its tendency to hallucinate.

If the new knowledge wasn’t provided in the original training set, then the model has to shift its weights from their previous optimal state to a new state that has to accommodate both the previous and new knowledge - and it may not necessarily be optimal.

Without a new validation round against the whole previous cross-validation and test sets, that's just likely to increase the chances of the model going off on a tangent.

@ai

https://arxiv.org/abs/2405.05904

williamtries, to llm
@williamtries@floss.social avatar

I just posted a short tutorial on setting up a simple LLM on your phone running postmarketOS. The LLM (7B Alpaca in this case) is not terribly accurate, but possibly useful in some cases.

Warning! Your phone will get hot. I haven't tested it long enough to know if it will harm your device, so do be careful.

Oh! I have a website now! Wow! My history with websites is hit and miss, so enjoy it while it lasts. I have half a dozen posts in the works already.

https://www.williamtries.ovh/llmonpmos/
