"It’s a social network that doesn’t try to be neutral and isn’t attempting to preserve free speech. It’s trying to make the internet a nicer place to be."
"Mozilla’s content policies also make clear that the platform will err on the side of protecting people who need to be protected..."
The European Commission has launched its transparency database of platforms’ #contentmoderation decisions (statements of reasons in #digitalservicesact lingo). It already contains thousands of records from platforms such as #TikTok that will hopefully give some quantitative insights into their content moderation approaches, the degree of automation, etc. Unfortunately, the qualitative information on individual decisions is very limited. https://transparency.dsa.ec.europa.eu/statement
Looks like we don't need laws after all for platforms to opportunistically suppress political user content: In 🇫🇷 #France, a #Snapchat #lobbyist admitted in a parliamentary hearing that the company was "proud" to have collaborated "hand in hand with the interior ministry" to make sure only user content critical of the mass protests was shown on #SnapMaps.
#SocialMedia #Mastodon #ContentModeration #CSAM: During a two-day test, researchers at the Stanford Internet Observatory found over 600 pieces of known or suspected child abuse material across some of Mastodon’s most popular networks, according to a report shared exclusively with The Technology 202.
Researchers reported finding their first piece of content containing child exploitation within about five minutes. They would go on to uncover roughly 2,000 uses of hashtags associated with such material. David Thiel, one of the report’s authors, called it an unprecedented sum.
“We got more PhotoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close,” said Thiel, referring to a technique used to identify pieces of content with unique digital signatures. Mastodon did not return a request for comment.
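PhotoDNA itself is a proprietary perceptual-hashing scheme licensed by Microsoft, so the matching step can only be illustrated generically. Below is a minimal sketch of hash-set matching, with an ordinary cryptographic hash standing in for the real perceptual fingerprint (real systems tolerate near-duplicates, which exact hashing cannot):

```python
import hashlib

# Hypothetical database of known fingerprints (hex digests), for illustration.
# Real deployments use perceptual hashes such as PhotoDNA, not SHA-256, so
# that re-encoded or slightly altered copies of an image still match.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Stand-in fingerprint: exact SHA-256 of the raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known(data: bytes) -> bool:
    """Flag content whose fingerprint appears in the known-hash set."""
    return fingerprint(data) in KNOWN_HASHES

print(is_known(b"test"))   # True: SHA-256("test") is in the demo set
print(is_known(b"other"))  # False
```

The set-membership lookup is O(1), which is why a scan can produce many "hits" quickly; the hard part in practice is the robust perceptual fingerprint, not the matching.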
Last night, after a tip-off, I decided to start checking out the instances federated with BSD Cafe. I came across some truly appalling instances, featuring horrible images and content that could end up on our timeline. As a result, I've begun integrating some blocklists into BSD Cafe, taking a gradual approach to avoid going overboard with the blocks.
This has led to the immediate removal of over 10 followers from my profile—potentially good folks, but from highly questionable instances. I can't stand by as BSD Cafe gets tainted with such materials.
Friends of the Fediverse, choose your instances wisely. It will ensure a far better experience for everyone.
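Mastodon-compatible servers can import domain blocklists as CSV. The "gradual approach" described above can be approximated by only adopting domains that more than one curated list agrees on. A sketch, with invented list contents rather than any real blocklist:

```python
import csv
import io
from collections import Counter

# Hypothetical contents of two curated blocklists (CSV: domain,severity).
LIST_A = "domain,severity\nbad.example,suspend\nspammy.example,silence\n"
LIST_B = "domain,severity\nbad.example,suspend\nother.example,suspend\n"

def domains(csv_text: str) -> set[str]:
    """Extract the set of domains named in one blocklist CSV."""
    return {row["domain"] for row in csv.DictReader(io.StringIO(csv_text))}

# Gradual approach: only block domains at least two lists agree on,
# rather than importing every list wholesale.
counts = Counter()
for source in (LIST_A, LIST_B):
    counts.update(domains(source))

consensus = sorted(d for d, n in counts.items() if n >= 2)
print(consensus)  # ['bad.example']
```

Requiring consensus trades coverage for fewer false positives, which matches the worry in the post about cutting off "potentially good folks" on questionable instances.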
The challenge with labeling Mastodon instances as a “community” is that a non-trivial percentage of users don’t have just one community, interest, or affiliation.
Comparing longtime, volunteer community moderators to politicians is an unforced, massive blunder for the CEO of a platform for volunteer-led communities.
#AI #GenerativeAI #GeneratedImages #ContentModeration: "“These ‘general purpose’ models cannot be made safe because there is no single consistent notion of safety across all application contexts,” said Biderman. “What is safe for primary school education applications doesn't always line up with what is safe in other contexts.”
Even so, the results demonstrate that these tools—which, like all AI systems, are deeply embedded with human bias—seem to lack even the most obvious defenses against misuse, let alone protections for peoples’ creative work. And they also speak volumes about the apparent reckless abandon with which companies have plunged into the AI craze.
“Before releasing any AI software, please hand it to a focus group of terminally online internet trolls for 24 hours,” wrote Micah, a user on Twitter competitor Bluesky. “If you aren’t OK with what they generate during this time period, do not release it.”"
This is the kind of question that drives engagement! 1.1 million votes! #ElonMusk doesn't think about the harm his decisions do to real people. Do the #SandyHook parents have any say? #ContentModeration at #Twitter is practically nonexistent; what happens when #AlexJones violates a TOS or a law?
How long would his defamation or disinformation stay up, causing NEW harm?
Interested in #contentmoderation? The EU is inviting researchers to develop tools to make use of #platform #transparency data under the #DSA: https://hackathon.peren.fr/ (can't link directly to the English version for some reason, switch to English in top right corner in desktop view)
"Social media company X is in the process of hiring 100 content moderators for a new office in Austin, Texas that will focus on fighting child abuse content, a goal it hopes to complete by the end of the year, an X executive said on Saturday. The Elon Musk-owned company announced the new 'Trust and Safety center of excellence' ahead of a U.S. Senate hearing on Jan. 31 about online child sexual exploitation."
#SocialMedia #Meta #Facebook #Instagram #Censorship #Zionism #ContentModeration: "FACEBOOK AND INSTAGRAM’S parent company, Meta, is contemplating stricter rules around discussing Israeli nationalism on its platforms, a major policy change that could stifle criticism and free expression about the war in Gaza and beyond, five civil society sources who were briefed on the potential change told The Intercept.
“Meta is currently revisiting its hate speech policy, specifically in relation to the term ‘Zionist,’” reads a January 30 email sent to civil society groups by Meta policy personnel and reviewed by The Intercept. While the email says Meta has not made a final determination, it is soliciting feedback on a potential policy change from civil society and digital rights groups, according to the sources. The email notes that “Meta is reviewing this policy in light of content that users and stakeholders have recently reported” but does not detail the content in question or name any stakeholders.
“As an anti-Zionist Jewish organization for Palestinian freedom, we are horrified to learn that Meta is considering expanding when they treat ‘Zionism’ — a political ideology — as the same as ‘Jew/Jewish’ — an ethno-religious identity,” said Dani Noble, an organizer with Jewish Voice for Peace, one of the groups Meta has contacted to discuss the possible change. Noble added that such a policy shift “will result in shielding the Israeli government from accountability for its policies and actions that violate Palestinian human rights.”" https://theintercept.com/2024/02/08/facebook-instagram-censor-zionist-israel/?utm_campaign=theintercept&utm_source=twitter&utm_medium=social
I keep getting motorcycle POV police getaway videos in my #YouTube #shorts feed and I keep reporting them for containing dangerous acts. Wtf is wrong with people. #report #contentmoderation
#SocialMedia #ContentModeration #Instagram #Meta #Ads: "Instagram limited the reach of a 404 Media investigation into ads for drugs, guns, counterfeit money, hacked credit cards, and other illegal content on the platform within hours of us posting it. Instagram said it did this because the content, which was about content Instagram itself had failed to moderate on its own platform, didn’t follow its “Recommendation Guidelines.” Later that evening, while that post was being throttled, I got an ad for “MDMA,” and Meta’s ad library is still full of illegal content that can be found within seconds.
This means Meta continues to take money from people blatantly advertising drugs on the platform while limiting the reach of reporting about that content moderation failure. Instagram's Recommendation Guidelines limit the reach of posts that "promotes the use of certain regulated products such as tobacco or vaping products, adult products and services, or pharmaceutical drugs.""
#SocialMedia #Facebook #Meta #Censorship #BigTech #ContentModeration: "Meta blocked a newspaper’s critical report about it on Facebook and its other social sites for hours, sparking a backlash that intensified after the company appeared to subsequently block links to the website of an independent journalist who republished the report.
The controversy began Thursday morning when users noticed that all links to the non-profit newspaper the Kansas Reflector had been flagged as a cybersecurity threat and their posts were removed. About seven hours later, the paper said, most of its links had been restored, save for one — a column that had criticized Facebook and accused it of suppressing posts related to climate change.
Meta apologized to the Reflector and its readers on Thursday for what the company’s communications chief, Andy Stone, called “an error that had nothing to do with the Reflector’s recent criticism of Meta.”"
#FactChecking #News #Media #SERP #Google #Search #Journalism #ContentModeration: "Our investigation found that fact-checks enjoy greater visibility in Google Web Search compared to the articles they seek to correct, both in terms of frequency of appearance and their placement within the SERP rankings. Specifically, our study shows fact-checks rank higher than problematic content across five topical keyword groups (Covid-19, climate change, the war in Ukraine, U.S. liberals, and U.S. elections), except in contested stories related to the war in Ukraine, where articles about U.S. bio-labs share equal prominence with their corresponding fact-checks. The findings imply Google moderation effects, as fact-checking content is more prominent given (nearly) equal levels of optimisation. It also implies that fact-checks are generally more prominent for audiences searching for problematic content, though both often appear in the same SERP. Navigational queries (e.g., searching for the name of a source and that content) reduce moderation effects." https://dl.acm.org/doi/abs/10.1145/3614419.3644017
#SocialMedia #Algorithms #Democracy #ContentModeration: "For all their efforts to moderate content and reduce online toxicity, social media companies still fundamentally care about one thing: retaining users in the long run, a goal they’ve perceived as best achieved by keeping them engaged with content as long as possible. But the goal of keeping individuals engaged doesn’t necessarily serve society at large and can even be harmful to values we hold dear, such as living in a healthy democracy.
To address that problem, a team of Stanford researchers advised by Michael Bernstein, associate professor of computer science in the School of Engineering, and Jeffrey Hancock, professor of communication in the School of Humanities and Sciences, wondered if designers of social media platforms might, in a more principled way, build societal values into their feed-ranking algorithms. Could these algorithms, for example, promote social values such as political participation, mental health, or social connection? The team tested the idea empirically in a new paper that will be published in Proceedings of the ACM on Human-Computer Interaction in April 2024. Bernstein, Hancock, and a group of Stanford HAI faculty also explored that idea in a recent think piece.
For their experiment, the researchers aimed to decrease partisan animosity by building democratic values into a feed-ranking algorithm. “If we can make a dent in this very important value, maybe we can learn how to use social media rankings to affect other values we care about,” says Michelle Lam, a fourth-year graduate student in computer science at Stanford University and co-lead author of the study." https://hai.stanford.edu/news/building-social-media-algorithm-actually-promotes-societal-values
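The study's actual ranking model isn't reproduced in the excerpt above. As a rough illustration of the general idea, here is a sketch of re-ranking a feed by an engagement score minus a penalty for estimated partisan animosity; the field names, scores, and weight are all invented for illustration:

```python
# Sketch of value-aware feed re-ranking (illustrative only; the Stanford
# study's real model, features, and weights are not reproduced here).
posts = [
    {"id": 1, "engagement": 0.9, "animosity": 0.8},  # viral but hostile
    {"id": 2, "engagement": 0.6, "animosity": 0.1},  # engaging and civil
    {"id": 3, "engagement": 0.4, "animosity": 0.0},
]

ANIMOSITY_WEIGHT = 0.5  # invented trade-off parameter

def score(post: dict) -> float:
    """Engagement ranking with a penalty for partisan animosity."""
    return post["engagement"] - ANIMOSITY_WEIGHT * post["animosity"]

feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])  # [2, 1, 3]
```

With the penalty, the civil post (0.55) outranks the viral-but-hostile one (0.50); a pure engagement sort would order them 1, 2, 3. Choosing the weight is exactly the kind of value judgment the researchers argue should be made in a principled way.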
#Apple #AppStore #AI #GenerativeAI #ContentModeration: "Apple has removed a number of AI image generation apps from the App Store after 404 Media found these apps advertised the ability to create nonconsensual nude images, a sign that app store operators are starting to take more action against these types of apps.
Overall, Apple removed three apps from the App Store, but only after we provided the company with links to the specific apps and their related ads, indicating the company was not able to find the apps that violated its policy itself.
Apple’s action comes after we reported on Monday that Instagram advertises nonconsensual AI nude apps. By browsing Meta’s Ad Library, which archives ads on its platforms, including when they ran, where they appeared, and who paid for them, we were able to find ads for five different apps, each with dozens of ads. Two of the ads were for web-based services, and three were for apps on the Apple App Store. Meta deleted the ads when we flagged them. Apple did not initially respond to a request for comment on that story, but reached out to me after it was published asking for more information. On Tuesday, Apple told us it removed the three apps on its App Store." https://www.404media.co/apple-removes-nonconsensual-ai-nude-apps-following-404-media-investigation/
#SocialMedia #SocialNetworks #ContentModeration #Algorithms #RecommendationEngines #Messaging: "So you joined a social network without ranking algorithms—is everything good now? Jonathan Stray, a senior scientist at the UC Berkeley Center for Human-Compatible AI, has doubts. “There is now a bunch of research showing that chronological is not necessarily better,” he says, adding that simpler feeds can promote recency bias and enable spam.
Stray doesn’t think social harm is an inevitable outcome of complex algorithmic curation. But he agrees with Rogers that the tech industry’s practice of trying to maximize engagement doesn’t necessarily select for socially desirable results.
Stray suspects the solution to the problem of social media algorithms may in fact be … more algorithms. “The fundamental problem is you've got way too much information for anybody to consume, so you have to reduce it somehow,” he says."
#SocialMedia #TikTok #China #ContentModeration #Censorship #Surveillance: "What rarely gets mentioned in these discussions, however, is the fact that the Chinese government has built the most comprehensive digital surveillance system in the world, which it primarily uses not to protect children, but to squash any form of dissent that may threaten the power of the Chinese Communist Party. “Everybody exists in a censored environment, and so what gets censored for kids is just one step on top of what gets censored for adults,” Jeremy Daum, a senior research scholar at Yale Law School’s Paul Tsai China Center and the founder of the site China Law Translate, told me.
It should set off warning bells for Americans that many states have explored legislation limiting internet access for minors in ways that mirror what China has done."
For the first time today I added an "altered content" label to my YouTube video because I used two photos that were generated by AI.
It is a new policy by #YouTube. I've read the guidelines & examples.
Since I'm very interested in #ContentModeration, I want to know what this means for political speech on YouTube during this election.