KathyReid, to stackoverflow
@KathyReid@aus.social

Like many other technologists, I gave my time and expertise to Stack Overflow for free because the content was licensed CC-BY-SA - meaning that it was a public good. It brought me joy to help people figure out why their code wasn't working, or to assist with a bug.

Now that a deal has been struck with OpenAI to scrape all the questions and answers in Stack Overflow and train models like ChatGPT on them - without attribution to authors (as required under the CC-BY-SA license under which Stack Overflow content is licensed), with the results to be sold back to us (the SA clause requires derivative works to be shared under the same license) - I have issued a Data Deletion request to Stack Overflow to disassociate my identity from my Stack Overflow username, and am closing my account, just like I did with Reddit, Inc.

https://policies.stackoverflow.co/data-request/

The data I helped create is going to be bundled into an LLM and sold back to me.

In a single move, Stack Overflow has alienated its community - which is also its main source of competitive advantage - in exchange for token lucre.

Stack Exchange, Stack Overflow's former instantiation, used to fulfill a psychological contract - help others out when you can, in the expectation that others may in turn assist you in the future. Now it's not an exchange any more.

Programmers now join artists and copywriters, whose works have been snaffled up to create AI solutions.

The silver lining I see is that once OpenAI creates LLMs that generate code - as Microsoft has done with Copilot on GitHub - where will they go to get help with the bugs that the generative AI models introduce, particularly given the recent GitClear report on the "downward pressure on code quality" caused by these tools?

While this is just one more example of the same pattern, it's also a salient lesson for folks - if your community is your source of advantage, don't upset it.

KathyReid,
@KathyReid@aus.social

@j3j5 @DoesntExist @blogdiva @astrojuanlu

Strong agree. A lot of Elinor Ostrom's work on governance of the commons - her rebuttal to Garrett Hardin's "tragedy of the commons" - relied on mechanisms of co-operation between institutions.

One of the key challenges I see here is that corporations like OpenAI now have a lot more power than even groups of institutions - lawmakers, governments, civil society. We've seen that recently with the way Meta has influenced government policy around paying to share content from commercial news agencies.

There's also a paradox here: increased production of work in the Commons is good for OpenAI, because it provides them with more data. However, the way in which the Commons is used - to create for-profit products like #GPT - serves as a constraint on people donating creative material to the Commons.

abucci, to ChatGPT
@abucci@buc.ci

Regarding that last boost, I'm starting to conceive of LLMs and image generators as a phenomenon of (American) society eating its seed corn. If you're not familiar with the phrase, "seed corn" is the corn you set aside to plant next year, as opposed to the corn you eat this year. If you eat your seed corn this year, you have no seeds to plant next year, and thus create a crisis for all future years, a crisis that could have been avoided with better management.

LLMs and image generators mass ingest human-created texts and images. Since the human creators of the ingested texts and images are not compensated and not even credited, this ingestion puts negative pressure on the sharing of such things. Creative acts functioning as seed for future creative acts becomes depressed. Creative people will have little choice but to lock down, charge for, or hide their works. Otherwise, they'll be ingested by innumerable computer programs and replicated ad infinitum without so much as a credit attached. Seed corn that had been freely given forward will become difficult to get. Eaten.

Eating your seed corn is meant to be a last ditch act you take out of desperation after exhausting all other options. It's not meant to be standard operating procedure. What a bleak society that does this, consuming itself in essence.

abucci,
@abucci@buc.ci

To put it differently, these tools and techniques are drawing out the value of works created by creative people without replenishing the originators of that value. That's a horribly dehumanizing way of looking at it, as if people were value spigots, but that's the problem, isn't it? This is a dehumanizing arrangement. We don't need to be doing this.

LukaszOlejnik, to ai
@LukaszOlejnik@mastodon.social

Just a few months after the launch of ChatGPT, freelance copywriters and graphic designers saw a significant drop in the number of contracts they received and, for the contracts they did win, a drop in earnings. Being more skilled was no shield against the loss of work or earnings.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4527336

johnpettigrew, to ai
@johnpettigrew@wandering.shop

For those of you who use LLMs to help you code, here's a warning: these tools have been shown to hallucinate packages in a way that allows an attacker to poison your application. https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
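Since a hallucinated package name looks exactly like a real one in generated code, it is worth vetting LLM-suggested dependencies mechanically before installing anything. Below is a minimal, illustrative Python sketch (not from the linked article; package names are hypothetical) that flags any suggested package not already pinned in a project's requirements.txt for manual review:

```python
import re

def vet_llm_dependencies(suggested, requirements_text):
    """Split LLM-suggested package names into ones already pinned in
    requirements.txt and unknown ones that must be checked by hand
    before installing - an unknown name may be hallucinated, and a
    squatter may have registered it as malware."""
    pinned = set()
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line:
            # keep only the name before any version specifier or extras
            name = re.split(r"[=<>!~\[; ]", line, maxsplit=1)[0].lower()
            pinned.add(name)
    known = [p for p in suggested if p.lower() in pinned]
    unknown = [p for p in suggested if p.lower() not in pinned]
    return known, unknown

# Hypothetical usage: "helpfullib" stands in for an LLM-invented name.
reqs = "requests==2.31.0\nnumpy>=1.24  # pinned by the team\n"
known, unknown = vet_llm_dependencies(["requests", "helpfullib"], reqs)
print(known)    # already vetted by the project
print(unknown)  # stop and verify these on PyPI before `pip install`
```

This only catches names outside the project's existing allowlist; anything flagged still needs a human to confirm the package genuinely exists and is the intended one.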

cassidy, to ai
@cassidy@blaede.family

“AI” as currently hyped is giant billion-dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.

It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.

https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m

cassidy,
@cassidy@blaede.family

I guess we wait this one out until the “AI” bubble bursts due to the incredible subsidization the entire industry is undergoing. It is not profitable. It is not sustainable.

It will not last—but the damage to our planet and fallout from the immense amount of wasted resources will.

https://arstechnica.com/information-technology/2023/10/so-far-ai-hasnt-been-profitable-for-big-tech/

#AI #LLM #LLMs #GenAI #ChatGPT #GPT #OpenAI #Copilot #GitHubCopilot #Gemini #Sora

jonippolito, to Cybersecurity
@jonippolito@digipres.club

A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 are fake, so he builds one himself - one that 15,000 code bases already depend on - to prevent some hacker from writing a malware version.

Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬

https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware

ppatel, to random
@ppatel@mstdn.social

Nope! Never saw this coming.

India’s religious AI chatbots are speaking in the voice of god — and condoning violence

Claiming wisdom based on the Bhagavad Gita, the bots frequently go way off script.

https://restofworld.org/2023/chatgpt-religious-chatbots-india-gitagpt-krishna/

x0, to ai
@x0@dragonscave.space

How much do you want to bet that these moves to close down APIs because of generative AI are being pushed by OpenAI and their ilk, or at least not actively opposed by them? Most of GPT's training data was clearly sourced from Reddit. Now Reddit wants to make its API outrageously expensive. With the funding OpenAI has, it could pay those rates, but anyone wanting to develop an open-source alternative cannot - so they will never achieve parity with the commercial models, because a fair bit of the vast treasure trove of data GPT has already used is inaccessible to them.

k8em0, to gpt

Ah yes, another high profile bug bounty forcing non-disclosure — even for fixed bugs.
🤦🏻‍♀️
It’s the bugs they won’t fix that will put users at risk.
All orgs need a vulnerability disclosure program that doesn’t ban Disclosure.
But what do I know.
I just coauthored the standard.

“But it’s a bug bounty & they are paying so it’s fair to ask for non disclosure”
That’s fine if everything submitted is paid work, like a penetration test.
Oh, only paying selectively & only the first of any duplicates?
That’s labor abuse & the worst gig economy deal out there.

“But pen tests don’t get you all the eyeballs”

Neither do bug bounties - you get a random number of eyeballs willing to sign NDAs.

If orgs actually care about security, they cast as wide a net as possible to get the best researchers - especially those who won’t sign NDAs.

“This is better than no bug bounty”

No, it isn’t.

It breeds a false sense of security for users & the org itself, while actively excluding the highest skilled researchers who will never sign an NDA for speculative pay or who want to see the bugs FIXED as their motivation.

glynmoody, to OpenAI
@glynmoody@mastodon.social

OpenAI's GPT-4 finally meets its match: Scots Gaelic smashes safety guardrails - https://www.theregister.com/2024/01/31/gpt4_gaelic_safety/ "The safety guardrails preventing OpenAI's GPT-4 from spewing harmful text can be easily bypassed by translating prompts into uncommon languages – such as Zulu, Scots Gaelic, or Hmong."

Itbeard, to gpt

GPT-based models for evil? Easy. A new generative AI-based tool, WormGPT, has stirred up the cybersecurity world: https://thehackernews.com/2023/07/wormgpt-new-ai-tool-allows.html

chrisoffner3d, to OpenAI

Wow. Sam Altman was just fired from OpenAI. 😳

"Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."

https://openai.com/blog/openai-announces-leadership-transition

abucci, to ai
@abucci@buc.ci

Among the many reasons we should resist the widespread application of generative AI, an important, if less concrete, one is to preserve the freedom to change. This class of method crystallizes the past and present and re-generates it over and over again. The net result, if it's used en masse, is foreclosing the future.

If you're stats-poisoned: human flourishing requires the joint distribution of the future to be different from that of the past and present. We, collectively, form a non-stationary system, and forcing the human system to be stationary is a kind of violence.

KOKEdit, to gpt
@KOKEdit@mastodon.social

Detectors of AI-generated writing are discriminating against non-native English speakers by wrongly tagging their writing as created by AI. This could seriously affect their academic & professional achievements. https://tinyurl.com/5h7zykf6 & https://tinyurl.com/3fpxstjy

da_667, to gpt

next time I torrent a bunch of shit because yet another streaming service has risen to claim what they believe to be their slice of the pie, I'm just going to say I'm using their shit to build an AI dataset


mhucka, to machinelearning
@mhucka@fediscience.org

People who work in AI and libraries/archives/museums, we need your help! 👋🏻

A few of us maintain an "awesome-ai4lam" 🕶️ list at https://github.com/AI4LAM/awesome-ai4lam and we need your help finding more things to add. Please tell us what we missed!

You can just reply to this toot, or open an issue/ticket in the GitHub repo, or email me, or whatever is easiest for you.

Please boost this to reach more people! 📣

BenjaminHCCarr, to OpenAI
@BenjaminHCCarr@hachyderm.io

OpenAI’s GPT Is a Recruiter’s Dream Tool. Tests Show There’s Racial Bias.
Asked to rank resumes 1,000 times, GPT favored names from some demographic groups more than others - to a degree that would fail benchmarks used to assess discrimination against protected groups. This simple workflow isolated names as a source of bias in GPT that could affect hiring decisions. Interviews and the experiment show that using generative AI for recruiting and hiring poses a serious risk: automated discrimination at scale.
https://www.bloomberg.com/graphics/2024-openai-gpt-hiring-racial-discrimination/?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcwOTg1NjE0OCwiZXhwIjoxNzEwNDYwOTQ4LCJhcnRpY2xlSWQiOiJTQTA1Q1FUMEFGQjQwMCIsImJjb25uZWN0SWQiOiI2NDU1MEM3NkRFMkU0QkM1OEI0OTI5QjBDQkIzRDlCRCJ9.MdkSGC3HMwwUYtltWq6WxWg3vULNeCTJcjacB-DNi8k

TechDesk, to OpenAI
@TechDesk@flipboard.social

OpenAI is set to launch its GPT store next week, which will enable users to build their own customized versions of ChatGPT and sell them.

https://flip.it/VIUdP8

jonippolito, to ai
@jonippolito@digipres.club

Harvard's metaLab has launched https://aipedagogy.org, a resource chock full of tasty assignments by trailblazers of generative AI in the classroom. (My own "AI Sandwich" is also on the menu.)

I've already stolen Juliana Castro's "Illustrate a Hoax" for my own class!

ajlburke, to gpt
@ajlburke@mas.to

I saw a 17th-century-informed chatbot mentioned in @clive's latest newsletter.

This inspired me to make a custom GPT that answers your questions in the tiresomely long-winded style of the "Ithaca" chapter in James Joyce's "Ulysses"

https://www.andrewburke.me/blogposts/the_joycean_ithaca_catechism_gpt_357

If you feed it the actual questions from the chapter, it's even MORE verbose than the original!

A good start to my plans to do something extra special for Bloomsday 2024.
