eugenialoli, to windows
@eugenialoli@mastodon.social avatar

The mass exodus from #Windows to #Linux (and #Mac) due to #Windows11 and #AI continues. More and more articles, YouTube videos, and forum posts about it. People are switching. If this continues, Linux should have 10% desktop market share by the end of the decade (and yes, that's a lot).

#opensource #foss #artificialintelligence #copilot #openai

chirpbirb, to Figma

Figma sent me an email yesterday telling me that they're partnering with OpenAI as a subprocessor for their "new features".

in totally unrelated news, have you heard of Penpot? it's a great design tool that's completely open source, and they're even on mastodon: @penpot

check it out! https://penpot.app/

keithwilson, to OpenAI

😆 In general ‘AI’ is a very poor name for a bunch of technologies that enable computers to do better pattern matching.

Even worse, OpenAI is now using ‘AGI’ to mean better-than-human level performance (at what task exactly is unclear), which isn’t what the phrase is generally understood to mean at all.

The more you look into this whole area, the more you realise there’s a lot of smoke and mirrors: marketing hype dressed up as technological revolution.

https://mastodon.social/@jamesbritt/111466685493641006

w7voa, to OpenAI
@w7voa@journa.host avatar

Comedian Sarah Silverman and two authors file copyright infringement lawsuits against Meta Platforms and OpenAI for allegedly using their content without permission to train language models. https://www.reuters.com/legal/sarah-silverman-sues-meta-openai-copyright-infringement-2023-07-09/

pluralistic, to OpenAI
@pluralistic@mamot.fr avatar

Last week's spectacular #OpenAI soap-opera hijacked the attention of millions of normal, productive people and nonconsensually crammed them full of the fine details of the debate between the AI doomers (the AI-safety camp) and the accelerationists (AKA e/acc), a genuinely absurd debate that was allegedly at the center of the drama.

1/

parismarx, to tech
@parismarx@mastodon.online avatar

this is exactly the kind of dirt i was hoping this lawsuit would bring to light 🙂

https://www.ft.com/content/9b6da77a-7e37-4e83-94cf-331fe8493894

hankg, to OpenAI

More enshittification. Dropbox feeds all your files into the OpenAI data vacuum unless you opt out. Having the pipe is one thing. Having it be opt-out and not even telling users about it makes me furious.
Dropbox spooks users by sending data to OpenAI for AI search features

paul, to OpenAI

#OpenAI IP block ranges, if you want to stop them from reaching your instance and scraping your content. I saw Mastodon devs added something to block #GPTBot via robots.txt a few days ago. Here are the IP ranges:

#MastoAdmin #FediBlock

20.15.240.64/28
20.15.240.80/28
20.15.240.96/28
20.15.240.176/28
20.15.241.0/28
20.15.242.128/28
20.15.242.144/28
20.15.242.192/28
40.83.2.64/28

https://openai.com/gptbot-ranges.txt

https://www.theverge.com/2023/8/7/23823046/openai-data-scrape-block-ai

https://github.com/mastodon/mastodon/pull/26396
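
For anyone who would rather enforce this in application code than at the firewall, here is a minimal sketch using only the Python standard library. The is_gptbot_ip helper is illustrative (not part of Mastodon or OpenAI tooling), and in practice you would refresh the ranges from the gptbot-ranges.txt URL above rather than hard-coding them.

# Illustrative check: does a given address fall inside the GPTBot
# CIDR ranges listed above? Standard library only.
import ipaddress

GPTBOT_RANGES = [
    "20.15.240.64/28", "20.15.240.80/28", "20.15.240.96/28",
    "20.15.240.176/28", "20.15.241.0/28", "20.15.242.128/28",
    "20.15.242.144/28", "20.15.242.192/28", "40.83.2.64/28",
]
NETWORKS = [ipaddress.ip_network(cidr) for cidr in GPTBOT_RANGES]

def is_gptbot_ip(addr: str) -> bool:
    """Return True if the address belongs to one of the published ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in NETWORKS)

if __name__ == "__main__":
    print(is_gptbot_ip("20.15.240.70"))  # True: inside 20.15.240.64/28
    print(is_gptbot_ip("203.0.113.7"))   # False: unrelated documentation range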

Norobiik, to OpenAI
@Norobiik@noc.social avatar

"If is found to have violated any in this process, allows for the infringing articles to be destroyed at the end of the case.

In other words, if a federal judge finds that OpenAI illegally copied The Times' articles to train its model, the court could order the company to destroy 's dataset. "

New York Times considers legal action against OpenAI as copyright tensions swirl : NPR
https://www.npr.org/2023/08/16/1194202562/new-york-times-considers-legal-action-against-openai-as-copyright-tensions-swirl

parismarx, to ai
@parismarx@mastodon.online avatar

“The AI faith has many popes—an almost exclusively white male cohort of thirtysomething executives and programmers who genuinely believe they are working on the most important thing in the world.”

They’re not. Obviously.

https://www.thenation.com/article/society/how-sam-altman-ran-afoul-of-the-keepers-of-the-ai-faith/

#ai #tech #openai #samaltman #agi

feliz, to OpenAI German
@feliz@norden.social avatar

Don't fall into the trap of the criti-hype marketing of OpenAI: There's a rumour that their creation might become conscious soon - but (as @pluralistic puts it) that's "mystical nonsense about spontaneous consciousness arising from applied statistics".

Rather, focus on real problems: ghost labor, erosion of the rights of artists, costs of automation, the climate impact of data centers and the human impact of biased, opaque, incompetent and unfit algorithmic systems.

https://pluralistic.net/2023/11/27/10-types-of-people/

parismarx, to OpenAI
@parismarx@mastodon.online avatar

“I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer.”

https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release

TechDesk, (edited) to ai
@TechDesk@flipboard.social avatar

A female computational neuroscience and machine learning expert took to X at the weekend to describe a “dark side” of the startup culture in Silicon Valley.

Sonia Joseph alleged that a culture of sexual coercion has taken hold of San Francisco’s community housing tech scene, with “heavy LSD use” and “sex parties held by mainly male tech and entrepreneurial elites that involve mock-violent role playing with female participants.”

In particular, “early OpenAI employees” were referenced by Joseph, as well as their friends and “adjacent entrepreneurs.” Salon has more.

https://flip.it/t5RReK

gisiger, to OpenAI
@gisiger@nerdculture.de avatar

Excellent breakdown of the lawsuit against OpenAI by @mmasnick here. Basically, it's another attempted cash grab by a big player: the lawsuit relies on a false belief that copyright can limit the right to read and process data.

“Similarly, I’ll note that even if the NY Times gets some money out of this, don’t expect the actual reporters to see any of it.”

https://www.techdirt.com/2023/12/28/the-ny-times-lawsuit-against-openai-would-open-up-the-ny-times-to-all-sorts-of-lawsuits-should-it-win/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "“Who are they to be speaking for all of humanity?,” asked Emily M. Bender, raising the question to the tech companies in a conversation with AIM. “The handful of very wealthy (even by American standards) tech bros are not in a position to understand the needs of humanity at large,” she bluntly argued.

The vocal, straightforward, and candid computational linguist is not exaggerating as she calls out the likes of OpenAI. Currently, Sam Altman is trying to solve issues of humanity, which include poverty, hunger, and climate catastrophes through AI tools like ChatGPT, which has been developed in Kenyan sweatshops, got sued for violating privacy laws, continues to pollute the internet and is a source of misinformation.

“I would love to see OpenAI take accountability for everything that ChatGPT says because they’re the ones putting it out there,” she said without hesitation, even though it has been long debated who should bear the blame – developers or users, when technologies backfire."

https://analyticsindiamag.com/linguist-emily-m-bender-has-a-word-or-two-about-ai/

rustybrick, to ChatGPT
@rustybrick@c.im avatar

GPTBot, OpenAI's web crawler for ChatGPT, can now be blocked with your robots.txt file - if you want: https://www.seroundtable.com/openais-chatgpt-web-crawler-gptbot-35835.html
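
As a quick sanity check that a robots.txt rule actually covers GPTBot, here is a minimal sketch using Python's standard urllib.robotparser; the https://example.com URL is a placeholder for your own site.

# Illustrative check: does a site's robots.txt allow or block GPTBot?
# The documented rule to block it is simply:
#   User-agent: GPTBot
#   Disallow: /
# "https://example.com" below is a placeholder; point it at your own site.
from urllib import robotparser

def gptbot_allowed(site: str) -> bool:
    """Return True if robots.txt lets GPTBot fetch the site root."""
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()
    return rp.can_fetch("GPTBot", f"{site.rstrip('/')}/")

if __name__ == "__main__":
    print(gptbot_allowed("https://example.com"))  # True means GPTBot is not blocked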

parismarx, to tech
@parismarx@mastodon.online avatar

Friday afternoon shocker: Sam Altman is out as CEO of OpenAI for deceiving the company board.

Is there a scandal about to drop that they’re trying to get ahead of?

https://www.theverge.com/2023/11/17/23965982/openai-ceo-sam-altman-fired

LukaszOlejnik, to privacy
@LukaszOlejnik@mastodon.social avatar

Issues of data protection and human dignity in generative AI processing and creations are important ones. Here is my complaint about OpenAI's data processing. It concerns input and output, access to information, and technology design.
Context/writeup: https://blog.lukaszolejnik.com/ai-llms-gdpr-complaint-and-human-dignity/

The full complaint is here: https://lukaszolejnik.com/stuff/OpenAI_GDPR_Complaint_LO.pdf?ref=mastodon
The supplement is here: https://lukaszolejnik.com/stuff/OpenAI_GDPR_Complaint_supplement.pdf?ref=mastodon

pointlessone, to mastodon
@pointlessone@status.pointless.one avatar

The latest drama is that Automattic is about to sign a deal with OpenAI to train AI on WordPress.com and Tumblr content.

Everyone’s got very angry about it. Everyone also conveniently forgot to even mention that OpenAI probably already had crawled most if not all of WP and Tumblr.

Automattic also allows users to opt out and that fueled the Opt Out/Consent discussion that started a bit earlier. I’ll get to it later.

Just the day before (or it feels like it) Google signed a deal with Reddit to get all the data to train their AI.

Everyone’s got very angry about it. Everyone also conveniently forgot to even mention that Google of all corps probably already had crawled most if not all of Reddit. The $60M Google paid is a convenience fee to get a nice db bump instead of having to scrape and clean up all that text.

Reddit doesn't let users Opt Out.

Last week (or it feels like it) one guy wanted to bridge public toots from Mastodon to bluesky.

Everyone’s got very angry about it. Everyone also conveniently forgot to even mention that people could read those toots just using a different client or a browser. All the bridge did was bring toots to a different audience and allow them to engage with those toots.

The bridge also allows people to opt out and that rekindled the Opt Out/Consent discussion that started a bit earlier. I’ll get to it later.

Some time last year a guy built a Fediverse search engine because discovery between instances is terrible.

Everyone’s got very angry about it. Everyone also conveniently forgot to even mention that most toots are indexed by big search engines anyway but because they rank low they just rarely surface in the results.

The search engine also allowed people to opt out and that kinda started the Opt Out/Consent discussion. I’ll get to it in a bit.

Some time later a completely unrelated thing happened. Discord decided that they won’t let people hotlink images uploaded to Discord.

Everyone’s got very angry about it. But also this time people didn’t forget to mention that you shouldn’t use Discord for anything you don’t want to lose. Things like lore, documentation, and basically anything that can be useful 5 minutes after it was said had better be somewhere else. The reason is that Discord servers are private in the sense that you have to use a specific piece of software with an account to access them. Anything posted there is not accessible outside, including through a search engine.

While all this was going on, quite a few people, in seemingly unrelated fashion, were expressing dissatisfaction with interactions they were having on Mastodon. Specifically, they were angry about certain types of replies they were getting. The replies were not threatening or insulting, but they were unwelcome in a way that I’m having trouble articulating. The most common case I saw: someone would post something open-ended or state a problem they have, and they would get a bunch of suggestions on how to possibly solve it, or people sharing their experience, either affirming the problem or otherwise.

Some people got very angry about this. They also conveniently forgot to even notice that this is a non-standard arrangement and they want to Opt Out of the more common case provided by the platform.

So finally we’re at the Opt Out. There are a lot of different takes, but the main thrust is that things should be Opt In instead of the other way around. And I agree. Where I don’t agree: you already Opt In when you post stuff publicly on the internet. Once you do, you set your thing free into the world. You relinquish control over it. You do not expect to opt in to every single read of your blog. If you want to control who accesses what you write, you don’t post it on the internet in public, you send it in private. Consequently, you cannot retroactively revoke access. You all know that the internet never forgets. You can’t unpublish things on the internet. It was already copied, screenshotted, and archived. And you don’t know what happens to it unless you’re told.

Public stuff on the internet is public.

#Mastodon #WordPress #Tumblr #bsky #bluesky #fediverse #OpenAI #OptOut #OptIn #AI

blakereid, to generativeAI
@blakereid@mastodon.lawprofs.org avatar

Here’s another copyright suit against OpenAI and Microsoft, this time from the New York Times. Things to watch:

  1. A crisp reproduction claim on training data assembly
  2. Extensive memorization claims with receipts
  3. A misappropriation (hot news?) claim
  4. A dilution claim against hallucinations

https://www.theverge.com/2023/12/27/24016212/new-york-times-openai-microsoft-lawsuit-copyright-infringement

jonippolito, to Cybersecurity
@jonippolito@digipres.club avatar

A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 are fake, so he builds one that 15,000 code bases already depend on, to prevent some hacker from writing a malware version.

Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬

https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware
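
One cheap mitigation for this failure mode, sketched below under assumptions not taken from the article: before installing a dependency an LLM suggested, ask the package index whether the name even exists (PyPI's public JSON API returns 404 for unknown packages). Existence alone proves nothing about legitimacy, which is exactly the squatting risk the researcher demonstrated, so provenance still needs a human check.

# Illustrative sketch: flag suggested dependency names that don't exist
# on PyPI. Standard library only; uses PyPI's public JSON API
# (https://pypi.org/pypi/<name>/json, which returns 404 for unknown
# packages). The package names below are made up for the example.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has any record of this package name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors should not be mistaken for "missing"

if __name__ == "__main__":
    suggested = ["requests", "definitely-not-a-real-package-xyz"]  # illustrative
    for pkg in suggested:
        verdict = "found" if exists_on_pypi(pkg) else "NOT on PyPI - do not install blindly"
        print(f"{pkg}: {verdict}")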

parismarx, to tech
@parismarx@mastodon.online avatar

Remember when tech CEOs whipped us into a frenzy about generative AI changing everything? Those days are long gone.

Google and OpenAI’s latest demos show the bubble is deflating, but they’re still going to seize as much power as they can before the crash.

https://disconnect.blog/ai-hype-is-over-ai-exhaustion-is-setting-in/

#tech #ai #google #openai #chatgpt #artificialintelligence

drahardja, to llm
@drahardja@sfba.social avatar

Plagiarism for me, but not for thee

“OpenAI suspends ByteDance’s account after it used GPT to train its own AI model”

https://www.theverge.com/2023/12/15/24003542/openai-suspends-bytedances-account-after-it-used-gpt-to-train-its-own-ai-model

anderseknert, to OpenAI
@anderseknert@hachyderm.io avatar

I’m still seeing people argue that OpenAI is a responsible actor on the basis that they asked to be regulated. No, they didn’t. They wanted the market to be regulated, which would make it much harder for new actors to enter, and more or less impossible for open source. It was not an example of them being driven by ethics, but rather the opposite.
