For the first time today, I added an "altered content" label to my YouTube video because I used two photos that were generated by AI.
It is a new policy by #YouTube. I've read the guidelines & examples.
Since I'm very interested in #ContentModeration, I want to know what this means for political speech on YouTube during this election.
#SocialMedia #SocialNetworks #ContentModeration #Algorithms #RecommendationEngines #Messaging: "So you joined a social network without ranking algorithms—is everything good now? Jonathan Stray, a senior scientist at the UC Berkeley Center for Human-Compatible AI, has doubts. “There is now a bunch of research showing that chronological is not necessarily better,” he says, adding that simpler feeds can promote recency bias and enable spam.
Stray doesn’t think social harm is an inevitable outcome of complex algorithmic curation. But he agrees with Rogers that the tech industry’s practice of trying to maximize engagement doesn’t necessarily select for socially desirable results.
Stray suspects the solution to the problem of social media algorithms may in fact be … more algorithms. “The fundamental problem is you've got way too much information for anybody to consume, so you have to reduce it somehow,” he says."
#SocialMedia #ContentModeration #Polarization #Ideology: "Today, we live with the irony that the intense pitch and total saturation of political conversation in every part of our lives—simply pick up your phone and rejoin the fray—create the illusion that important ideas are right on the verge of being actualized or rejected. But the form of that political discourse—millions of little arguments—is actually what makes it impossible to process and follow what should be an evolving and responsive conversation. We mistake volume for weight; how could there be so many posts about something with no acknowledgment from the people in charge? Don’t they see how many of us are expressing our anger? These questions elicit despair, because the poster believes that no amount of dissent will actually be heard. And when that happens, in any forum, the posters blame the mods.
The mods do have supporters: “normie” liberals and conservatives who still put a degree of faith in the expert and media classes and who want, more than anything, to restore some bright line of truth so that society can continue to function. A central question of our current moment is whether that faith is enough to unite a critical mass of voters, or whether the medium we have chosen for everything, from photos of our children to our most private conversations, will simply not allow for any consensus, especially one that appeals to a population as broadly complacent as the American consumer class. Normies, who are mostly unified in their defense of the status quo, still wield a reasonable amount of political power, and they will continue to exist in some form. But, as even more of our lives take place within the distortions of online life, how much longer will there be a widely agreed-upon status quo to defend?" https://www.newyorker.com/news/fault-lines/arguing-ourselves-to-death
#SocialMedia #ContentModeration #Facebook #Meta #Israel #Gaza #Palestine #Genocide #Censorship: "Since October 2023, when Israeli forces began bombarding Gaza in response to the October 7 Hamas attack, Palestinian and pro-Palestinian voices have been censored and suppressed on Facebook and Instagram. Access Now’s new report illustrates how content removal, arbitrary account suspensions, and discriminatory enforcement of content moderation policies against Palestinian voices have been the norm through examples and documentation of:
Clear patterns of censorship, including arbitrary content removals, account suspensions, “shadow-banning,” and further arbitrary restrictions on pro-Palestinian people and content;
Flawed content moderation policies, with Meta stifling freedom of expression through its broad interpretation of the company’s Designated Organizations and Individuals (DOI) policy;
Biased rule enforcement, including Meta’s over-moderation of Arabic content compared to Hebrew content, and the company’s failure to adequately address hate speech, dehumanization, and genocidal rhetoric against Palestinians; and
Arbitrary and erroneous rule enforcement, with an unacceptable error rate in Meta’s automated decision-making, particularly in non-English languages."
#SocialMedia #Meta #Facebook #Instagram #Censorship #Zionism #ContentModeration: "Facebook and Instagram’s parent company, Meta, is contemplating stricter rules around discussing Israeli nationalism on its platforms, a major policy change that could stifle criticism and free expression about the war in Gaza and beyond, five civil society sources who were briefed on the potential change told The Intercept.
“Meta is currently revisiting its hate speech policy, specifically in relation to the term ‘Zionist,’” reads a January 30 email sent to civil society groups by Meta policy personnel and reviewed by The Intercept. While the email says Meta has not made a final determination, it is soliciting feedback on a potential policy change from civil society and digital rights groups, according to the sources. The email notes that “Meta is reviewing this policy in light of content that users and stakeholders have recently reported” but does not detail the content in question or name any stakeholders.
“As an anti-Zionist Jewish organization for Palestinian freedom, we are horrified to learn that Meta is considering expanding when they treat ‘Zionism’ — a political ideology — as the same as ‘Jew/Jewish’ — an ethno-religious identity,” said Dani Noble, an organizer with Jewish Voice for Peace, one of the groups Meta has contacted to discuss the possible change. Noble added that such a policy shift “will result in shielding the Israeli government from accountability for its policies and actions that violate Palestinian human rights.”" https://theintercept.com/2024/02/08/facebook-instagram-censor-zionist-israel/?utm_campaign=theintercept&utm_source=twitter&utm_medium=social
@CenDemTech has started a research project about content moderation in the Global South / Majority World. We will examine how content moderation systems work in social media and other online services in "low resource" languages in Africa, South Asia, and South America. We will partner with digital rights organizations in these regions; if you are working on this topic, it would be great to connect.
"Social media company X is in the process of hiring 100 content moderators for a new office in Austin, Texas that will focus on fighting child abuse content, a goal it hopes to complete by the end of the year, an X executive said on Saturday. The Elon Musk-owned company announced the new 'Trust and Safety center of excellence' ahead of a U.S. Senate hearing on Jan. 31 about online child sexual exploitation."
Interested in #contentmoderation? The EU is inviting researchers to develop tools to make use of #platform #transparency data under the #DSA: https://hackathon.peren.fr/ (can't link directly to the English version for some reason; switch to English in the top right corner in desktop view)
The challenge with labeling Mastodon instances as a “community” is there is a non-trivial percentage of users who don’t have just one community, interest, or affiliation.
Community members who initially find moderation efforts to be stifling will often come around to see them as a differentiating factor that sets your community apart. Here's one such story:
Here, the magazine Rest of World profiles the founder of the Kenyan Content Moderators’ Union, Nathan Nkunzimana. It covers the traumatizing work of content moderation, exploitation in the Global South, and labor organizing.
"The man leading Kenyan content moderators’ battle against Meta"
This is the kind of question that drives engagement! 1.1 million votes! #ElonMusk doesn't think about the harm his decisions have on real people. Do the #SandyHook parents have any say? #ContentModeration at #Twitter is practically nonexistent; what happens when #AlexJones violates the TOS or a law?
How long would his defamation or disinformation stay up, causing NEW harm?
"Helen McEntee says X, formerly Twitter, did not cooperate with Gardaí in taking down 'vile messages' last Thursday. ... 'They [other companies] were taking down their vile messages. X were not. They did not engage. They did not fulfil their own customer standards,' she said."
Last night, after a tip-off, I decided to start checking out the instances federated with BSD Cafe. I came across some truly appalling instances, featuring horrible images and content that could end up on our timeline. As a result, I've begun integrating some blocklists into BSD Cafe, taking a gradual approach to avoid going overboard with the blocks.
This has led to the immediate removal of over 10 followers from my profile—potentially good folks, but from highly questionable instances. I can't stand by as BSD Cafe gets tainted with such materials.
Friends of the Fediverse, choose your instances wisely. It will ensure a far better experience for everyone.
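For anyone curious what "taking a gradual approach" to blocklist integration can look like in practice, here is a minimal sketch. It assumes blocklists shaped like Mastodon's domain-block CSV exports (rows of `domain,severity`); the function names, the `SEVERITY_RANK` ordering, and the batching scheme are my own illustrations, not any instance's actual import tooling.

```python
import csv
import io

# Stricter severities get a lower rank so they sort (and win merges) first.
SEVERITY_RANK = {"suspend": 0, "silence": 1, "noop": 2}

def merge_blocklists(*csv_texts):
    """Merge several blocklist CSVs, keeping the strictest severity per domain."""
    merged = {}
    for text in csv_texts:
        for row in csv.reader(io.StringIO(text)):
            if not row or row[0].startswith("#"):
                continue  # skip blank lines and comments
            domain, severity = row[0].strip().lower(), row[1].strip().lower()
            current = merged.get(domain)
            if current is None or SEVERITY_RANK[severity] < SEVERITY_RANK[current]:
                merged[domain] = severity
    return merged

def staged_batches(merged, batch_size=2):
    """Yield blocks in small batches, worst offenders first, for gradual rollout."""
    ordered = sorted(merged.items(), key=lambda kv: (SEVERITY_RANK[kv[1]], kv[0]))
    for i in range(0, len(ordered), batch_size):
        yield ordered[i:i + batch_size]

# Two hypothetical community blocklists that disagree about one domain.
list_a = "badsite.example,suspend\nspam.example,silence\n"
list_b = "spam.example,suspend\nmeh.example,silence\n"

merged = merge_blocklists(list_a, list_b)
batches = list(staged_batches(merged))
```

Reviewing one small batch at a time before applying the next is what keeps the rollout from "going overboard": each batch can be checked for good-faith accounts caught in the net before the blocks land.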