spocko, to Youtube
@spocko@mastodon.online avatar

For the first time today I added an "altered content" label to my YouTube video because I used two photos that were generated by AI.

It is a new policy by . I've read the guidelines & examples.
Since I'm very interested, I want to know: what does it mean for political speech on YouTube this election?

Lisa Monaco, the Deputy Attorney General, recently said they will charge people for threats of violence via AI.
Thoughts @patrickokeefe ?
https://support.google.com/youtube/answer/14328491?hl=en#zippy=%2Cexamples-of-content-creators-dont-have-to-disclose%2Cexamples-of-content-creators-need-to-disclose

AI generated Trump booking photo.

paninid, to TikTok
@paninid@mastodon.world avatar

This ran afoul of community guidelines.

It is still up on / , though? 🤨🤷🏻‍♂️

Interesting what one platform considers okay for its rules and another doesn’t, relative to Mastodon.

The Chinese Communist Party will let you throw shade at failed Austrian artists, but Zuck protects them.

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "So you joined a social network without ranking algorithms—is everything good now? Jonathan Stray, a senior scientist at the UC Berkeley Center for Human-Compatible AI, has doubts. “There is now a bunch of research showing that chronological is not necessarily better,” he says, adding that simpler feeds can promote recency bias and enable spam.

Stray doesn’t think social harm is an inevitable outcome of complex algorithmic curation. But he agrees with Rogers that the tech industry’s practice of trying to maximize engagement doesn’t necessarily select for socially desirable results.
Stray suspects the solution to the problem of social media algorithms may in fact be … more algorithms. “The fundamental problem is you've got way too much information for anybody to consume, so you have to reduce it somehow,” he says."

https://www.wired.com/story/latest-online-culture-war-is-humans-vs-algorithms/
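The trade-off Stray describes can be made concrete with a minimal sketch: a purely chronological feed surfaces whatever is newest (recency bias, easy to spam), while an engagement-ranked feed weights interactions and decays them over time. All names and the scoring formula below are illustrative assumptions, not any platform's actual ranking.

```python
from dataclasses import dataclass
import time

@dataclass
class Post:
    author: str
    created_at: float  # Unix timestamp
    likes: int = 0
    replies: int = 0

def chronological(posts):
    # Pure reverse-chronological: newest first, no curation at all.
    # Anything posted a second ago outranks everything, however good.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def engagement_ranked(posts, half_life=3600.0):
    # Hypothetical engagement score with exponential time decay:
    # replies weighted over likes, score halved every `half_life` seconds.
    now = time.time()
    def score(p):
        engagement = p.likes + 2 * p.replies
        decay = 0.5 ** ((now - p.created_at) / half_life)
        return engagement * decay
    return sorted(posts, key=score, reverse=True)
```

A two-hour-old post with real engagement still beats a brand-new empty one under the second ranking, but loses under the first — which is the recency-bias point Stray is making.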

paninid, to random
@paninid@mastodon.world avatar

You don’t get social media without anti-social people.

That is why is a thing.

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "Today, we live with the irony that the intense pitch and total saturation of political conversation in every part of our lives—simply pick up your phone and rejoin the fray—create the illusion that important ideas are right on the verge of being actualized or rejected. But the form of that political discourse—millions of little arguments—is actually what makes it impossible to process and follow what should be an evolving and responsive conversation. We mistake volume for weight; how could there be so many posts about something with no acknowledgment from the people in charge? Don’t they see how many of us are expressing our anger? These questions elicit despair, because the poster believes that no amount of dissent will actually be heard. And when that happens, in any forum, the posters blame the mods.

The mods do have supporters: “normie” liberals and conservatives who still put a degree of faith in the expert and media classes and who want, more than anything, to restore some bright line of truth so that society can continue to function. A central question of our current moment is whether that faith is enough to unite a critical mass of voters, or whether the medium we have chosen for everything, from photos of our children to our most private conversations, will simply not allow for any consensus, especially one that appeals to a population as broadly complacent as the American consumer class. Normies, who are mostly unified in their defense of the status quo, still wield a reasonable amount of political power, and they will continue to exist in some form. But, as even more of our lives take place within the distortions of online life, how much longer will there be a widely agreed-upon status quo to defend?" https://www.newyorker.com/news/fault-lines/arguing-ourselves-to-death

casilli, to Facebook French
@casilli@mamot.fr avatar

On Feb. 21–22, hosted at the European Parliament Kauna Ibrahim Malgwi, a former Facebook moderator and board member of the first African moderators' union. Her deeply humane testimony and her commitment to organizing workers profoundly moved the audience. https://www.humanite.fr/social-et-economie/facebook/kauna-moderatrice-pour-facebook-au-kenya-jai-vu-beaucoup-de-suicides-en-video interview by @pierricm

kbindependent, to Texas
@kbindependent@newsie.social avatar

Supreme Court casts doubt on Florida law regulating social media :

Justices seemed wary of a broad ruling, with Justice Amy Coney Barrett warning of "land mines" she and her colleagues need to avoid in resolving the two cases.
#X

https://kbindependent.org/2024/02/27/supreme-court-casts-doubt-on-florida-law-regulating-social-media/

casilli, to Europe
@casilli@mamot.fr avatar

The DiPLab crew was at the European Parliament a few days ago to organize a panel titled "Meet the human workers behind AI". We listened to the testimonies of microworkers, cloud workers, and internet moderators. Their voices, their struggles, and the solidarity of other platform workers. https://diplab.eu/diplab-on-the-european-parliaments-transnational-forum-of-alternatives-to-uberisation/

internetsociety, to internet
@internetsociety@techpolicy.social avatar

Today at 10am ET, the US Supreme Court hears oral arguments in two cases that may decide if you can moderate content on your website!

You can listen to the live audio stream at:
https://www.supremecourt.gov/oral_arguments/live.aspx

Read our post to understand why this case is so important for the open Internet:

https://www.internetsociety.org/blog/2023/12/can-you-kick-the-trolls-out-of-your-online-forum-u-s-supreme-court-to-decide/

A link is in that post to the amicus brief we submitted for the cases.

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "Since October 2023, when Israeli forces began bombarding Gaza in response to the October 7 Hamas attack, Palestinian and pro-Palestinian voices have been censored and suppressed on Facebook and Instagram. Access Now’s new report illustrates how content removal, arbitrary account suspensions, and discriminatory enforcement of content moderation policies against Palestinian voices have been the norm through examples and documentation of:

  • Clear patterns of censorship, including arbitrary content removals, accounts suspensions, “shadow-banning,” and further arbitrary restrictions on pro-Palestinian people and content;
  • Flawed content moderation policies, with Meta stifling freedom of expression through its broad interpretation of the company’s Designated Organizations and Individuals (DOI) policy;
  • Biased rule enforcement, including Meta’s over-moderation of Arabic content compared to Hebrew content, and the company’s failure to adequately address hate speech, dehumanization, and genocidal rhetoric against Palestinians; and
  • Arbitrary and erroneous rule enforcement, with an unacceptable error rate in Meta’s automated decision-making, particularly in non-English languages."

https://www.accessnow.org/press-release/meta-systematic-censorship-palestinian-voices/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "FACEBOOK AND INSTAGRAM’S parent company, Meta, is contemplating stricter rules around discussing Israeli nationalism on its platforms, a major policy change that could stifle criticism and free expression about the war in Gaza and beyond, five civil society sources who were briefed on the potential change told The Intercept.

“Meta is currently revisiting its hate speech policy, specifically in relation to the term ‘Zionist,’” reads a January 30 email sent to civil society groups by Meta policy personnel and reviewed by The Intercept. While the email says Meta has not made a final determination, it is soliciting feedback on a potential policy change from civil society and digital rights groups, according to the sources. The email notes that “Meta is reviewing this policy in light of content that users and stakeholders have recently reported” but does not detail the content in question or name any stakeholders.

“As an anti-Zionist Jewish organization for Palestinian freedom, we are horrified to learn that Meta is considering expanding when they treat ‘Zionism’ — a political ideology — as the same as ‘Jew/Jewish’ — an ethno-religious identity,” said Dani Noble, an organizer with Jewish Voice for Peace, one of the groups Meta has contacted to discuss the possible change. Noble added that such a policy shift “will result in shielding the Israeli government from accountability for its policies and actions that violate Palestinian human rights.”" https://theintercept.com/2024/02/08/facebook-instagram-censor-zionist-israel/?utm_campaign=theintercept&utm_source=twitter&utm_medium=social

consideration, to LLMs

@CenDemTech has started a research project about content moderation in the Global South / Majority World. We will examine how content moderation systems work in social media and other online services in "low resource" languages in Africa, South Asia, and South America. We will partner with digital rights organizations in these regions; if you are working on this topic, it would be great to connect.

https://cdt.org/insights/investigating-content-moderation-systems-in-the-global-south/

researchbuzz, to twitter
@researchbuzz@researchbuzz.masto.host avatar

"Social media company X is in the process of hiring 100 content moderators for a new office in Austin, Texas that will focus on fighting child abuse content, a goal it hopes to complete by the end of the year, an X executive said on Saturday. The Elon Musk-owned company announced the new 'Trust and Safety center of excellence' ahead of a U.S. Senate hearing on Jan. 31 about online child sexual exploitation."

https://money.usnews.com/investing/news/articles/2024-01-27/musks-x-aims-to-hire-100-content-moderators-in-austin-by-end-of-year

senficon, to random
@senficon@ohai.social avatar

Interested in ? The EU is inviting researchers to develop tools to make use of data under the : https://hackathon.peren.fr/ (can't link directly to the English version for some reason, switch to English in top right corner in desktop view)

paninid, to fediverse
@paninid@mastodon.world avatar

The challenge with labeling Mastodon instances as a “community” is there is a non-trivial percentage of users who don’t have just one community, interest, or affiliation.

People contain multitudes 🤷🏻‍♂️

communitysignal, to random
@communitysignal@mastodon.social avatar

Community members who initially find moderation efforts to be stifling will often come around to see them as a differentiating factor that sets your community apart. Here's one such story:

sebmeineck, to meta German
@sebmeineck@mastodon.social avatar

Here the magazine Rest Of World profiles the founder of the Kenyan Content Moderators' Union, Nathan Nkunzimana. The piece covers the traumatizing work of content moderation, exploitation in the Global South, and labor organizing.

"The man leading Kenyan content moderators’ battle against Meta"

https://restofworld.org/2023/kenya-content-moderators-battle-meta/

spocko, to twitter
@spocko@mastodon.online avatar

This is the kind of question that drives engagement! 1.1 Million votes! doesn't think about the harm his decisions have on real people. Do the parents have any say?
at is practically nonexistent, what happens when violates a TOS or a law?
How long would his defamation or disinformation stay up, causing NEW harm?

Can the families sue ? X?

researchbuzz, to ireland
@researchbuzz@researchbuzz.masto.host avatar

"Helen McEntee says X, formerly Twitter, did not cooperate with Gardaí in taking down 'vile messages' last Thursday. ... 'They [other companies] were taking down their vile messages. X were not. They did not engage. They did not fulfil their own customer standards,' she said."

https://www.independent.ie/irish-news/politics/dublin-riots-elon-musks-x-did-not-take-down-vile-messages-despite-garda-requests-justice-minister-helen-mcentee-says/a2015832225.html

rwg, to bluesky
@rwg@aoir.social avatar

As I mentioned before, I'm digging into , trying to figure it all out.

Today, I'm studying its approach to .

If anyone has insights about this -- especially if you know how many actual content moderators they have -- feel free to reach out.

stefano, (edited) to mastodon
@stefano@bsd.cafe avatar

Last night, after a tip-off, I decided to start checking out the instances federated with BSD Cafe. I came across some truly appalling instances, featuring horrible images and content that could end up on our timeline. As a result, I've begun integrating some blocklists into BSD Cafe, taking a gradual approach to avoid going overboard with the blocks.

This has led to the immediate removal of over 10 followers from my profile—potentially good folks, but from highly questionable instances. I can't stand by as BSD Cafe gets tainted with such materials.

Friends of the Fediverse, choose your instances wisely. It will ensure a far better experience for everyone.

itnewsbot, to Israel
@itnewsbot@schleuss.online avatar

Creators confused by Elon Musk’s plan to “incentivize truth” on X

https://arstechnica.com/?p=1979687 #hamaswar #x
