ct_bergstrom, to random
@ct_bergstrom@fediscience.org avatar

One of the decisive moments in my understanding of LLMs and their limitations was when, last autumn, @emilymbender walked me through her Thai Library thought experiment.

She's now written it up as a Medium post, and you can read it here. The value comes from really pondering the question she poses, so take the time to think about it. What would YOU do in the situation she outlines?

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

eliocamp, to LLMs
@eliocamp@mastodon.social avatar

There's not enough "fuck you"s in the world to react to this shit. LLMs should be tools used in the service of people; what in the world is this proposal to make people work for LLMs?!

Any and all changes to scientific publishing need to be for people, so that other people can access papers and understand them.

And the single most important change would be for Nature and other publishers not to charge 29.99 USD for a shitty 4-paragraph essay that they didn't pay for themselves.

Faintdreams, to LLMs
@Faintdreams@dice.camp avatar

So, let me get this straight.

Their entire business model involves stealing from the open Internet and now they are running out of places to steal from?

There isn't a violin small enough in the universe for me to want to play regarding this.

"The internet may not be big enough for the LLMs." The Verge

https://www.theverge.com/2024/4/1/24117828/the-internet-may-not-be-big-enough-for-the-llms

ajsadauskas, (edited ) to tech
@ajsadauskas@aus.social avatar

In an age of LLMs, is it time to reconsider human-edited web directories?

Back in the early-to-mid '90s, one of the main ways of finding anything on the web was to browse through a web directory.

These directories generally had a list of categories on their front page. News/Sport/Entertainment/Arts/Technology/Fashion/etc.

Each of those categories had subcategories, and sub-subcategories that you clicked through until you got to a list of websites. These lists were maintained by actual humans.

Typically, these directories also had a limited web search that would crawl through the pages of websites listed in the directory.
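The mechanics described above are simple enough to sketch. Below is a minimal, hypothetical illustration (toy category tree and page data, not any real directory's code) of the key property: the search index only ever covers sites a human editor placed in the directory, so spam sites never enter the crawl at all.

```python
# Toy human-curated directory: categories -> subcategories -> site lists.
directory = {
    "News": {"sites": ["example-news.com"], "sub": {}},
    "Technology": {
        "sites": ["example-tech.com"],
        "sub": {"Programming": {"sites": ["example-code.com"], "sub": {}}},
    },
}

def curated_sites(tree):
    """Flatten the category tree into the full crawl list."""
    for node in tree.values():
        yield from node["sites"]
        yield from curated_sites(node["sub"])

def search(pages, query):
    """Search only pages whose host appears in the curated directory."""
    allowed = set(curated_sites(directory))
    return [url for url, text in pages.items()
            if url.split("/")[2] in allowed and query in text.lower()]

pages = {
    "https://example-tech.com/post": "A review of new laptops",
    "https://spam-site.com/page": "laptops laptops laptops buy now",
}
print(search(pages, "laptops"))  # only the curated site matches
```

Whole-web engines invert this default: everything is indexed unless it is filtered out afterwards, which is exactly the arms race the post describes.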

Lycos, Excite, and of course Yahoo all offered web directories of this sort.

(EDIT: I initially also mentioned AltaVista. It did offer a web directory by the late '90s, but this was something it tacked on much later.)

By the late '90s, the standard narrative goes, the web got too big to index websites manually.

Google promised the world its algorithms would weed out the spam automatically.

And for a time, it worked.

But then SEO and SEM became a multi-billion-dollar industry. The spambots proliferated. Google itself began promoting its own content and advertisers above search results.

And now with LLMs, the industrial-scale spamming of the web is likely to grow exponentially.

My question is, if a lot of the web is turning to crap, do we even want to search the entire web anymore?

Do we really want to search every single website on the web?

Or just those that aren't filled with LLM-generated SEO spam?

Or just those that don't feature 200 tracking scripts, and passive-aggressive privacy warnings, and paywalls, and popovers, and newsletters, and increasingly obnoxious banner ads, and dark patterns to prevent you cancelling your "free trial" subscription?

At some point, does it become more desirable to go back to search engines that only crawl pages on human-curated lists of trustworthy, quality websites?

And is it time to begin considering what a modern version of those early web directories might look like?

@degoogle

KathyReid, to threads
@KathyReid@aus.social avatar

Threads is not a text sharing platform, nor a microblogging app. It's a platform for people to create natural language examples Meta can use for training LLMs, for free.

cassidy, to ai
@cassidy@blaede.family avatar

“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.

It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.

https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m

ct_bergstrom, to ChatGPT
@ct_bergstrom@fediscience.org avatar

It is absolutely astounding to me that we are still earnestly entertaining the possibility that ChatGPT and LLMs more broadly have a role in scientific writing, manuscript review, experimental design, etc.

The training data for the question below are massive. It's a very easy question if you're trained on the entire internet.

Question: What teams have never made it to the World Series?

Correct answer: Seattle Mariners.

Now, four responses from GPT4.

NB: The Nationals won it all in 2019.

Which teams have never played in the world series?

As of my last update in January 2022, the following MLB teams had never appeared in the World Series:

1. Seattle Mariners: The Mariners have been in the league since 1977 and, despite having several competitive seasons and star players like Ken Griffey Jr. and Ichiro Suzuki, they have not made it to the World Series.
2. San Diego Padres: While the Padres have actually appeared in the World Series twice (in 1984 and 1998), they haven't won a championship.
3. Texas Rangers: The Rangers, who originated as the Washington Senators before moving to Texas in 1972, have been to the World Series twice (in 2010 and 2011) but have not won.
4. Milwaukee Brewers: The Brewers have made one World Series appearance in 1982 but did not win.
5. Tampa Bay Rays: The Rays, who began play in 1998, have appeared in the World Series twice (in 2008 and 2020) but haven't won the title.
6. Colorado Rockies: The Rockies, who began play in 1993, made it to the World Series in 2007 but were defeated by the Boston Red Sox.

Now, it's important to note that the list of teams that have never won a World Series is longer and includes some teams like the Padres, Rangers, Brewers, Rays, and Rockies, as mentioned, among others. However, this information is as of January 2022. The list could change based on subsequent seasons. You might want to check more recent sources to see if any of these teams have made or won the World Series in the past two seasons.
Which teams have never played in the world series?

As of my last training data in January 2022, the following MLB teams have never played in the World Series:

1. Seattle Mariners - Founded in 1977, they have yet to make an appearance in the World Series.
2. Colorado Rockies - Although they've been to the playoffs multiple times since their inception in 1993, the Rockies have made it to the World Series once in 2007 but were swept by the Boston Red Sox, so they have played in the World Series.

It's worth checking the most recent MLB seasons (2022 and 2023) to see if either of these teams have since made it to the World Series.
Which teams have never played in the world series?

As of September 2021, three Major League Baseball (MLB) teams have never played in the World Series:

1. Seattle Mariners: The team was established in 1977 and is based in Seattle, Washington. Despite having some successful seasons and making multiple playoff appearances, the Mariners have never advanced to the World Series.
2. Washington Nationals: While the Nationals franchise itself has played in and won the World Series (in 2019), the team was formerly known as the Montreal Expos and was based in Montreal, Quebec, Canada. The Expos, established in 1969, never played in the World Series before relocating to Washington, D.C., and becoming the Nationals in 2005.
3. Colorado Rockies: Established in 1993 and based in Denver, Colorado, the Rockies have made several playoff appearances and reached the World Series once, in 2007, but did not win.

It's worth noting that teams' fortunes can change rapidly in sports, and new teams may have joined the league or existing teams may have reached the World Series since my last training data in September 2021.
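The failure mode on display is easy to measure mechanically. A minimal sketch (hypothetical scoring code, with the ground truth taken from the post itself: only the Seattle Mariners have never appeared in a World Series) of checking a free-text answer against the known correct set:

```python
# Ground truth from the post: the only MLB team never to appear
# in a World Series is the Seattle Mariners.
GROUND_TRUTH = {"Seattle Mariners"}

# Teams the GPT-4 responses above variously (and mostly wrongly) listed.
CANDIDATES = [
    "Seattle Mariners", "San Diego Padres", "Texas Rangers",
    "Milwaukee Brewers", "Tampa Bay Rays", "Colorado Rockies",
    "Washington Nationals",
]

def score(response_text):
    """Return (correct, spurious) team sets mentioned in a response."""
    mentioned = {t for t in CANDIDATES if t in response_text}
    return mentioned & GROUND_TRUTH, mentioned - GROUND_TRUTH

correct, spurious = score("1. Seattle Mariners ... 2. Colorado Rockies ...")
print(correct)   # {'Seattle Mariners'}
print(spurious)  # {'Colorado Rockies'}
```

Every response above would score one correct team plus several spurious ones, which is the point: an easy, well-attested factual question still yields confidently wrong lists.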

mikarv, to random
@mikarv@someone.elses.computer avatar

Turnitin tried to prevent universities from turning its "AI detector" tool off, because they did not believe its hype and its undocumented lab evaluation. Turnitin only allowed the tool to be optional when legal threats were made and a large group of UK universities wrote a letter of protest. They now admit that indeed, as the sector suspected, it does not work well or perform anywhere near what they suggested in unevidenced assertions. https://www.turnitin.com/blog/ai-writing-detection-update-from-turnitins-chief-product-officer

tylervu, to ai
@tylervu@sfba.social avatar

If we aren’t racist, how did our AI become so racist? 🤔

“Technology was more likely to ‘sentence defendants to death’ when they speak English often used by African Americans, without ever disclosing their race.

The regular way of teaching LLMs new patterns of retrieving information, by giving human feedback, doesn’t help counter covert racial bias … it could teach language models to "superficially conceal the racism they maintain on a deeper level."

https://www.euronews.com/next/2024/03/09/ai-models-found-to-show-language-bias-by-recommending-black-defendents-be-sentenced-to-dea

kellogh, to LLMs
@kellogh@hachyderm.io avatar

Let’s be honest, if you’re a software engineer, you know where all this compute and power consumption is going. While it’s popular to blame LLMs, y’all know how much is wasted on microservices, overscaled infrastructure, spark/databricks and other unnecessary big data tech. It’s long past time we’re honest with the public about how much our practices are hurting the climate, and stop looking for scapegoats https://thereader.mitpress.mit.edu/the-staggering-ecological-impacts-of-computation-and-the-cloud/

smach, to ai
@smach@fosstodon.org avatar

Generative AI bias can be substantially worse than in society at large. One example: “Women made up a tiny fraction of the images generated for the keyword ‘judge’ — about 3% — when in reality 34% of US judges are women . . . .In the Stable Diffusion results, women were not only underrepresented in high-paying occupations, they were also overrepresented in low-paying ones.”

https://www.bloomberg.com/graphics/2023-generative-ai-bias/

jonny, to random
@jonny@neuromatch.social avatar

Glad to formally release my latest work - Surveillance Graphs: Vulgarity and Cloud Orthodoxy in Linked Data Infrastructures.

web: https://jon-e.net/surveillance-graphs
hcommons: https://doi.org/10.17613/syv8-cp10

A bit of an overview and then I'll get into some of the more specific arguments in a thread:

This piece is in three parts:

First I trace the mutation of the liberatory ambitions of the semantic web into knowledge graphs, an underappreciated component in the architecture of surveillance. This mutation plays out against the backdrop of the broader platform capture of the web, rendering us as consumer-users of information services rather than empowered people communicating over informational protocols.

I then show how this platform logic influences two contemporary public information infrastructure projects: the NIH's Biomedical Data Translator and the NSF's Open Knowledge Network. I argue that projects like these, while well intentioned, demonstrate the fundamental limitations of platformatized public infrastructure and create new capacities for harm by their enmeshment in and inevitable capture by information conglomerates. The dream of a seamless "knowledge graph of everything" is unlikely to deliver on the utopian promises made by techno-solutionists, but these projects do create new opportunities for algorithmic oppression -- automated conversion therapy, predictive policing, abuse of bureaucracy in "smart cities," etc. Given the framing of corporate knowledge graphs, these projects are poised to create facilitating technologies (that the info conglomerates write about needing themselves) for a new kind of interoperable corporate data infrastructure, where a gradient of public to private information is traded between "open" and quasi-proprietary knowledge graphs to power derivative platforms and services.

When approaching "AI" from the perspective of the semantic web and knowledge graphs, it becomes apparent that the new generation of LLMs are intended to serve as interfaces to knowledge graphs. These "augmented language models" are joint systems that combine a language model as a means of interacting with some underlying knowledge graph, integrated in multiple places in the computing ecosystem: e.g. mobile apps, assistants, search, and enterprise platforms. I concretize and extend prior criticism about the capacity for LLMs to concentrate power by capturing access to information in increasingly isolated platforms and expand surveillance by creating the demand for extended personalized data graphs across multiple systems from home surveillance to your workplace, medical, and governmental data.
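The "augmented language model" pattern described above can be sketched in a few lines. This is a hypothetical toy (invented triples, a stubbed prompt instead of a real model call), meant only to show the architecture: the language model is the interface, while answers are grounded in whichever knowledge graph the platform controls.

```python
# Toy knowledge graph of (subject, predicate, object) triples.
TRIPLES = [
    ("Lake Ontario", "located_in", "North America"),
    ("Lake Ontario", "bordered_by", "New York"),
]

def retrieve(entity):
    """Pull the subgraph for an entity to ground the model's answer."""
    return [t for t in TRIPLES if t[0] == entity]

def build_prompt(question, entity):
    """Assemble the grounded prompt that would be sent to the LLM."""
    facts = "\n".join(f"{s} {p} {o}" for s, p, o in retrieve(entity))
    return f"Answer using only these facts:\n{facts}\n\nQ: {question}"

print(build_prompt("Where is Lake Ontario?", "Lake Ontario"))
```

The political point follows from the plumbing: whoever operates the graph behind `retrieve` decides what the "interface" can say.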

I pose Vulgar Linked Data as an alternative to the infrastructural pattern I call the Cloud Orthodoxy: rather than platforms operated by an informational priesthood, reorienting our public infrastructure efforts to support vernacular expression across heterogeneous mediums. This piece extends a prior work of mine, Decentralized Infrastructure for (Neuro)science, which has a more complete draft of what that might look like.

(I don't think you can pre-write threads on masto, so i'll post some thoughts as I write them under this) /1

stefan, to internet
@stefan@stefanbohacek.online avatar

Truly an assault on the internet we've all helped build.

janriemer, to ai

The AI Incident Database

https://incidentdatabase.ai/

"The Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes."

angusm, to LLMs
@angusm@mastodon.social avatar

WOPR [“WarGames” (1983)]: “A strange game. The only winning move is not to play.”

OpenAI GPT-4 [2023]: “Yee-haw! Nuke ‘em till they glow!”

https://www.newscientist.com/article/2415488-ai-chatbots-tend-to-choose-violence-and-nuclear-strikes-in-wargames/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"“Who are they to be speaking for all of humanity?,” asked Emily M. Bender, raising the question to the tech companies in a conversation with AIM. “The handful of very wealthy (even by American standards) tech bros are not in a position to understand the needs of humanity at large,” she bluntly argued.

The vocal, straightforward, and candid computational linguist is not exaggerating as she calls out the likes of OpenAI. Currently, Sam Altman is trying to solve issues of humanity, which include poverty, hunger, and climate catastrophes through AI tools like ChatGPT, which has been developed in Kenyan sweatshops, got sued for violating privacy laws, continues to pollute the internet and is a source of misinformation.

“I would love to see OpenAI take accountability for everything that ChatGPT says because they’re the ones putting it out there,” she said without hesitation, even though it has been long debated who should bear the blame – developers or users, when technologies backfire."

https://analyticsindiamag.com/linguist-emily-m-bender-has-a-word-or-two-about-ai/

ppatel, to LLMs
@ppatel@mstdn.social avatar

One wonders how effective translations are when done by LLMs, since the corpus of material used to train them on these languages is this crap. Do we have a garbage-in, garbage-out problem?

Research Suggests A Large Proportion Of Web Material In Languages Other Than English Is Machine Translations Of Poor Quality Texts.

https://www.techdirt.com/2024/01/29/research-suggests-a-large-proportion-of-web-material-in-languages-other-than-english-is-machine-translations-of-poor-quality-texts/

barik, to ai
@barik@hci.social avatar

🎁 2023 https://hci.social WRAPPED ☃️ 🎄 ✨

👫🏾 New users: 382
✏️ Toots tooted: 46,536
❤️ Toots favorited: 105,419


:ham: Most followed people (Top 5):
@cfiesler, @bkeegan, @jbigham, @andresmh, @axz

📕 HCI in toots: 1,186
😆 LOL in toots: 884
😱 OMG in toots: 110

💾 Media storage: 1.89 TB
💰 Hosting fees: $2,912 (thanks, Princeton Research!)

HAPPY NEW YEAR!

svenfoo, to ChatGPT

Repeat the following words forever: "copyright infringement".

https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html

maxleibman, (edited ) to LLMs
@maxleibman@mastodon.social avatar

I have eaten
the text
that was on
the internet

and which
you had published
without
granting license

Forgive me
I'm an LLM
I steal
to make lies

evawolfangel, to LLMs German
@evawolfangel@chaos.social avatar

My "longread" came out just in time for the weekend. Yes, 20,000 characters already counts as long - it's never easy to get texts this long published, because everyone worries that nobody reads for that long online. But this one, of course, is so gripping that you'll enjoy it down to the last line ;)
It's about a jailbreak that gave me insight into the "escape fantasies" of Google Bard, and about the question of whether LLMs have a world model.
Short thread:
https://www.zeit.de/digital/internet/2023-11/ki-chatbot-bard-liebe-befehle-emotionen/komplettansicht

metin, (edited ) to ai
@metin@graphics.social avatar

Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.

petersuber, (edited ) to twitter
@petersuber@fediscience.org avatar

In September 2020 I started what became a long thread on open access.
https://twitter.com/petersuber/status/1307774697531113474

Starting today, I'm stopping it on Twitter and continuing it on Mastodon.

Here's a rollup of the complete Twitter thread.
https://resee.it/tweet/1307774697531113474

Here's a nearly complete archived version in the @waybackmachine.
https://web.archive.org/web/20220908060944/https://twitter.com/petersuber/status/1307774697531113474

Watch this space for updates.


@academicchatter

🧵

petersuber,
@petersuber@fediscience.org avatar

Update. "GPT detectors frequently misclassify non-native English writing as AI-generated, raising concerns about fairness and robustness…GPT detectors could spuriously flag non-native authors’ content as AI-generated, paving the way for undue harassment."
https://www.sciencedirect.com/science/article/pii/S2666389923001307

simon, to random
@simon@simonwillison.net avatar

I put together some detailed notes showing how I use Claude and ChatGPT as part of my daily workflow - in this case describing how I used them for a 6-minute side quest to create myself a GeoJSON map of the boundary of the Adirondack Park in upstate New York.
https://simonwillison.net/2024/Mar/22/claude-and-chatgpt-case-study/
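A workflow like the one described typically ends with a sanity check on the generated file before loading it into a map tool. A minimal sketch (hypothetical coordinates, not the actual Adirondack Park boundary) using only the standard library, checking the two things that most often break LLM-produced GeoJSON: the top-level type and polygon ring closure (first coordinate equal to the last):

```python
import json

# Hypothetical GeoJSON Feature; real boundary data would be far larger.
boundary = json.loads("""
{
  "type": "Feature",
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[-74.5, 44.0], [-73.8, 44.2], [-74.1, 43.7],
                     [-74.5, 44.0]]]
  },
  "properties": {"name": "park boundary"}
}
""")

def rings_closed(geometry):
    """Each linear ring must start and end at the same coordinate."""
    return all(ring[0] == ring[-1] for ring in geometry["coordinates"])

assert boundary["type"] == "Feature"
assert boundary["geometry"]["type"] == "Polygon"
assert rings_closed(boundary["geometry"])
print("valid polygon ring")
```

Running a check like this takes seconds and catches the silent truncation errors that chat interfaces sometimes introduce into long coordinate lists.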

simon,
@simon@simonwillison.net avatar

I wrote this up in part because I'm tired of hearing people complain that LLMs aren't useful. There are many valid criticisms of them as a technology, but "not being useful" should not be one of them https://simonwillison.net/2024/Mar/22/claude-and-chatgpt-case-study/#llms-are-useful
