Artificial Intelligence

FluffyDeveloper, (edited )

To anyone thinking about joining BlueSky, especially artists: everything you post is sent to a third party for AI labeling.

BlueSky uses AI to label content for moderation, and to do that they use a company called https://thehive.ai. If you look through their privacy policy, you will see that they can use content sent to them to train models for all their services, which include generative AI for both text and images.

Update: https://meow.social/@FluffyDeveloper/110652053858910840

FluffyDeveloper,

@SteffoSpieler It's another social media network created by the same people who backed Twitter and Nostr. Instead of going with ActivityPub, however, they decided to make their own VC-backed protocol called AT.

The protocol itself is not too bad, but it's controlled by private companies, and BlueSky has decided to basically avoid moderation and offload everything to AI labelling.

In short: it's decentralised Twitter with venture capitalists trying to control their own fediverse :/

FluffyDeveloper,

Adding sources for my earlier post.

The code calling to hive.ai’s API: https://github.com/bluesky-social/atproto/blob/main/packages/bsky/src/labeler/hive.ts

hive.ai privacy policy (“How We Use Information We Obtain” section at the bottom): https://thehive.ai/privacy

BlueSky’s TOS (section 7.4): https://blueskyweb.xyz/support/tos

susankayequinn, (edited )
@susankayequinn@wandering.shop avatar

Back in Feb, when Grammarly said they were going to think about incorporating AI, I immediately canceled my account and uninstalled. And now they've actually done it.

I'm sharing because with everything going on, writers might not realize this is happening.

From 2022 but relevant: https://www.protocol.com/enterprise/grammarly-writing-assistants-ai-data

lyssachiavari,
@lyssachiavari@wandering.shop avatar

@susankayequinn Haven't had a new book out in a couple years because health, but I always would run Grammarly as my final passthrough for edits, so really glad you posted this because I was going to do it next month 😬

susankayequinn,
@susankayequinn@wandering.shop avatar

@lyssachiavari Same. It was good at catching typos and some comma/hyphen issues. I'm still using WordRake, which will catch some of that stuff, because I haven't heard of them incorporating AI yet.

MedievalMideast,

Zoom just changed their terms and conditions to include using anyone's video and audio for AI training, with no option for opting out. You too can help train AIs!

Living with a disabled spouse, I used Zoom a lot to get through the ongoing global pandemic.

What alternatives are out there for remote teaching/meetings?

MedievalMideast,

I'm not a lawyer, but I don't think Zoom's blog post response (here: https://blog.zoom.us/zooms-term-service-ai/) addresses the issue.

It looks to me like they are saying "we don't use your audio and video to train AI, but we can if you consent" (and consent might be presumed by agreeing to the Terms of Service).

Furthermore, the opt-out features they highlight are only opting out from using (not training) their machine learning models.

Further furthermore, if I am not mistaken, this blog post has no legal authority. If they want to clarify the terms of service, they need to rewrite the terms of service to be clearer, including exclusions.

MedievalMideast,

Update: Zoom has now updated their terms of service: https://explore.zoom.us/en/terms/

Specifically, they added a sentence after paragraph 10.4:
"Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent."

It remains for lawyers to parse out whether "consent" is implied by agreeing to the terms of service, or requires a separate action, and whether "your" is user-level or company-level. I am not a lawyer.

kristenhg,
@kristenhg@mastodon.social avatar

One of my former (and very long-term) freelance gigs, How Stuff Works, has replaced writers with ChatGPT-generated content and also laid off its excellent editorial staff.

It seems that going forward, when articles I wrote are updated by ChatGPT, my byline will still appear at the top of the article with a note at the bottom of the article saying that AI was used. So it will look as if I wrote the article using AI.

To be clear: I did not write articles using ChatGPT.

johncormier,
@johncormier@mstdn.ca avatar

@kristenhg Is there any surer sign of the death spiral of a website than its duplicitous misrepresentation of authorship? The future does not look bright for How Stuff Works, sadly.

surabax,

@kristenhg There has to be some kind of coordinated political pushback against this kind of shit at this point. Similar to FSF and EFF, but focused specifically on campaigning against unethical use of the technology that harvests and regurgitates human-sourced data, against mass provenance laundering.

ChrisMayLA6,
@ChrisMayLA6@zirk.us avatar

Tom Gauld on the AI (investment) bubble

#AI #investment

tshirtman,
@tshirtman@mas.to avatar

@ChrisMayLA6
i'm definitely willing to fund arm improvements 😂

raphaelmorgan,
@raphaelmorgan@disabled.social avatar

@wrench @ChrisMayLA6 @NikTheDusky not AI. No money.

jwcph, Danish
@jwcph@norrebro.space avatar

Yup, about sums it up...

sfwrtr,
@sfwrtr@eldritch.cafe avatar

@jwcph
Ironically enough, medical diagnosis was one of the first big applications of AI. Anyone remember Watson, the IBM system that played on Jeopardy? Pattern matching on carefully trained data produced answers that were sound and could be curated by trained doctors. Note the "someone who could understand and judge the result" part of my statement. It didn't make the bank IBM hoped for, due to trust and privacy issues. Maybe doctors weren't happy with the idea of a tool that could replace them?

Generating and creating content with an indiscriminately trained AI isn't the same thing as a system like Watson. Because these systems mimic human creativity, people get suckered into thinking the results are the same as a human's. They aren't. It's pattern matching: the AI judges the probability that the next word is the right word, based on its training data. It's not understanding. Not even close. Alone, it can generate probable answers, but it takes a trained human to determine what's crap. Especially true if the answer is obscure—is it something good nobody thought of, or simply a technobabble hallucination?
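That "probability that the next word is the right word" mechanism can be shown at toy scale. The bigram counter below is nothing like a real LLM (no neural network, and the three-sentence corpus is invented for the example), but the core move, ranking continuations by how often they follow a word in the training data, is the same idea:

```python
# Toy next-word predictor: count which word follows which in training
# text, then rank candidates by probability. Pattern matching only --
# there is no "understanding" anywhere in this loop.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    counts = follows[word]
    total = sum(counts.values())
    # (probability, candidate) pairs, most probable first.
    return sorted(((n / total, w) for w, n in counts.items()), reverse=True)

print(most_probable_next("the"))  # "cat" wins: it followed "the" most often
```

Scale the corpus up to most of the written web and replace the counter with a transformer, and you have the shape of the systems being discussed, minus a few hundred billion parameters.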

lffontenelle,
@lffontenelle@mastodon.social avatar

@sfwrtr @jwcph Medical doctor here. It would be great if the computer could guess better than medical doctors what treatment could avert death and improve quality of life. Alas, the computer could only guess what selected doctors would have done, which is not exactly a gold standard.

drahardja, (edited )
@drahardja@sfba.social avatar

Here’s a clear example of how aggressive the image processing has become on newer iPhones. This is a comparison between the iPhone 15 Pro Max and the iPhone 11 Pro Max, taking photos of distant text at equal magnification. Note how the 15 Pro Max’s image pipeline has made up all the details.

EDIT: I’m going to drop mentions of “AI” and “hallucinating” here because I think it’s conjuring up the wrong mental models in readers’ heads. What’s likely happening is over-eager noise reduction and sharpening (which may or may not have pattern matching) creating details where none exist. Every phone does some amount of NR and sharpening, but later iPhones are super aggressive in this regard, so much so that the results often depart from what we accept as “reality”.
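The over-eager sharpening effect is easy to reproduce in miniature. The 1-D unsharp-mask sketch below is not Apple's actual pipeline, just the textbook sharpening step with the strength cranked up, but it shows how an aggressive pass manufactures contrast the sensor never recorded:

```python
# Toy 1-D unsharp masking: sharpened = signal + amount * (signal - blur).
# NOT any phone's real pipeline -- just the textbook step, exaggerated
# to show how a high "amount" invents detail that isn't in the data.
import numpy as np

def unsharp(signal, amount):
    blur = np.convolve(signal, [1/3, 1/3, 1/3], mode="same")
    return signal + amount * (signal - blur)

# A faint step edge, like distant text landing on a small sensor.
edge = np.array([0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.6, 0.6])

mild = unsharp(edge, amount=1.0)
aggressive = unsharp(edge, amount=8.0)

# The aggressive pass undershoots below 0.5 and overshoots above 0.6,
# drawing an "edge" far crisper than anything the sensor saw.
print(mild.round(2))
print(aggressive.round(2))
```

Real pipelines add pattern-matched denoising on top of this, which is where invented glyph-like detail can come from.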

#apple #iphone #imageProcessing

Original video: https://www.facebook.com/watch?v=885196069752553

artemis,
@artemis@dice.camp avatar

@drahardja
For a long time I would say to people (who were not into photography) "why are you bothering with a camera? Your phone takes better pictures than that!"

I guess I may need to stop saying that. 😬

I'm assuming these AI filters can be turned off though?

jodmentum,
@jodmentum@mastodon.social avatar

@drahardja aggressive seems like a really aggressive word. Do you mean superior?

paco,

I am crying I am laughing so hard at the generated colours from @janellecshane 's blog: https://www.aiweirdness.com/new-ai-paint-colors/

paco,

This is a Northern line train to Turdley, calling at Burfream, Sindis Poop, Clapham Junction, and Turdley.
@dpp @janellecshane @robey

cindyweinstein,

isotopp,
@isotopp@chaos.social avatar

@cindyweinstein

The original font on the album is Prestige 12 Pitch, with some manual fuzzing.

The track list on the back is in FF Trixie.

Using FF Trixie to produce a PNG, you get this (in black and in white).

doboprobodyne,
@doboprobodyne@mathstodon.xyz avatar

@cindyweinstein
Anyone else feel like this fellow ought to be from Yorkshire?

https://www.youtube.com/watch?v=26ZDB9h7BLY

fullfathomfive,
@fullfathomfive@aus.social avatar

A lot of people have responded to my Duolingo post with things like “Never work for free,” and “I would never donate my time to a corporation.” Which I completely agree with.

But here's the thing about Duolingo and all of the other companies like it. You already work for them. You just don’t know it.

On Duo, I thought I was learning a language. Participating in the community by helping other learners and building resources seemed like part of the process.

Luis von Ahn, the CEO of Duolingo, was one of the creators of CAPTCHA, which was originally supposed to stop bot spam by getting a human to do a task a machine couldn’t do. In 2009 Google bought reCAPTCHA and used it to get humans to proofread the books they were digitising (without permission from the authors of those books, btw). So in order to access much of the web, people had to work for Google. Most of them didn’t know they were working for Google - they thought they were visiting websites.

This is how they get you. They make it seem like they’re giving you something valuable (access to a website, tools to learn a language), while they’re actually taking something from you (your skills, your time, your knowledge, your labour). They make you think they’re helping you, but really you're helping them (and they’re serving you ads while you do it).

Maybe if people had known what CAPTCHA was really for they would’ve done it anyway. Maybe I still would’ve done all that work for Duo if I’d known it would one day disappear from the web and become training data for an LLM ...

... Or maybe I would’ve proofread books for Project Gutenberg, or donated my time to citizen science projects, or worked on an accessibility app, or a million other things which genuinely improve people’s lives and the quality of the web. I didn’t get an informed choice. I got lured into helping a tech company become profitable, while they made the internet a shittier place to be.

How many things are you doing on the web every day which are actually hidden work for tech companies? Probably dozens, or hundreds. We all are. That’s why this is so insidious. It’s everywhere. The tech industry is built on free labour. (And not just free – we often end up paying for the end results of our own work, delivered back to us in garbled, enshittified form).

And it’s a problem that’s only getting worse with AI. Is that thoughtful answer you gave someone on reddit or Mastodon something that will stay on the web for years, helping people in future with the same problem? Or is it just grist for the LLMs?

Do you really get a choice about it?

fullfathomfive,
@fullfathomfive@aus.social avatar

@mapachin Yes agreed! The fediverse embodies the original principles of the web: open, free, distributed. I think it's a tool we could use to do amazing things, rebuilding the internet commons outside of the corporate system.

RainerMuehlhoff,

@fullfathomfive Check out this paper on "Human-Aided AI", where I write about Luis von Ahn and, at the end, distinguish five different types of capture of free human labor on the web!
http://journals.sagepub.com/doi/10.1177/1461444819885334

Nickiquote, (edited )
@Nickiquote@mstdn.social avatar

The Hitchhiker’s Guide was famously written by correspondents like Ford Prefect. It wasn’t an AI that made shit up based on some random sub-ether drivel. As a result, it was useful.

God, Douglas Adams would have fucking hated Grok.

Curiously enough, an edition of The Guide that fell through a time warp from a thousand years in the future defined the executive board of X as “a bunch of mindless jerks who were the first against the wall when the revolution came”.

denny,

@Nickiquote Ugh. And 'grok' is almost exactly the wrong word for what the current batch of word-salad-juggling 'AI' autocomplete things do.

Nickiquote,
@Nickiquote@mstdn.social avatar

@CrypticMirror I was just thinking of Mostly Harmless, with the Vogons buying the Guide from Megadodo for that purpose, much like Musk bought Twitter.

Adams at his most depressed turns out to be the most prophetic.

molly0xfff,
@molly0xfff@hachyderm.io avatar

who could have predicted this

DrollTide,
@DrollTide@mastodon.social avatar

@molly0xfff Rule 34 is undefeated.

aral,
@aral@mastodon.ar.al avatar

We call it AI because no one would take us seriously if we called it matrix multiplication seeded with a bunch of initial values we pulled out of our asses and run on as much shitty data as we can get our grubby little paws on.
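Taken literally, the description holds up. A minimal sketch (toy sizes, names invented for the example, weights genuinely pulled out of nowhere):

```python
# A "neural network layer", before any training: a matrix multiply whose
# weights are random numbers, applied to whatever data you feed it.
import numpy as np

rng = np.random.default_rng(42)

# Initial values pulled out of thin air, as advertised.
W = rng.standard_normal((4, 3))
b = rng.standard_normal(3)

def layer(x):
    # One dense layer: matrix multiplication plus bias, then a ReLU.
    return np.maximum(0.0, x @ W + b)

x = rng.standard_normal(4)  # "as much data as we can get"
print(layer(x))             # untrained output: noise in, noise out
```

Training would nudge W and b toward something useful; until then, the layer really is just seeded matrix multiplication.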

koantig,
@koantig@mamot.fr avatar

@aral
"To paraphrase provocatively, 'machine learning is statistics minus any checking of models and assumptions'."
-- Brian D. Ripley (about the difference between machine learning and statistics)
useR! 2004, Vienna (May 2004)

elala,
@elala@nrw.social avatar

@aral 😂 But it helped me. I understand it much better now.😁

freeformz,
@freeformz@hachyderm.io avatar

mattly,
@mattly@hachyderm.io avatar

@freeformz Since when has this stopped them?

Have I told you about my three classifications of bugs?

  1. “It doesn’t do what it’s supposed to” - how most people define bugs
  2. “It does what it’s supposed to, but that’s wrong” - specification problem, happens a lot
  3. “We’re not sure what it’s supposed to do, but it’s doing something we don’t like” - design problem, 100% of the ones I’ve seen were caused by shitty PMs

enobacon,
@enobacon@urbanists.social avatar

@freeformz funny, but programmers aren't safe from layoffs because companies are going to spend their programming salaries on AI and learn this the hard way

grammargirl,
@grammargirl@zirk.us avatar

A team at the University of Chicago just released a tool called Nightshade that makes invisible changes to digital images. The selling point is that these changes “poison” AI models that try to use the images as training data.

https://venturebeat.com/ai/nightshade-the-free-tool-that-poisons-ai-models-is-now-available-for-artists-to-use/
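The article describes the goal rather than the algorithm; Nightshade's real perturbations are optimized against model feature extractors, which is far more involved. As a toy stand-in for the general idea of an "invisible" change, the sketch below nudges every pixel by at most 2/255, imperceptible to a viewer but different numbers to any model ingesting the raw values:

```python
# Toy illustration of a bounded "invisible" image perturbation.
# This is NOT Nightshade's method -- just the generic idea that a
# change too small for the eye still changes what a model sees.
import numpy as np

rng = np.random.default_rng(0)

image = rng.integers(0, 256, size=(8, 8, 3)).astype(np.int16)

# Each channel value shifts by -2..+2, then stays in valid range.
delta = rng.integers(-2, 3, size=image.shape)
perturbed = np.clip(image + delta, 0, 255)

# Visually identical: the largest per-pixel change is tiny.
print(int(np.abs(perturbed - image).max()))
```

Nightshade's contribution is choosing those small changes adversarially so that models trained on the images learn wrong associations, not merely adding noise.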

VirginiaMurr,
@VirginiaMurr@mastodon.social avatar

@grammargirl

Thought you might find this interesting (and maybe helpful) :-)
@NaturaArtisMagistra

reallyflygreg,
@reallyflygreg@mstdn.ca avatar

@grammargirl I had to laugh at this "some web users have complained about it, suggesting it is tantamount to a cyberattack on AI models and companies."

My heart bleeds for the poor helpless AI companies unable to commit intellectual property theft. 🎻​
