mastodonmigration, to random
@mastodonmigration@mastodon.online

May 4, 2023

So, this is a big deal.

The Verge: Mozilla’s setting up shop on Mastodon and trying to reinvent content moderation >>> https://www.theverge.com/23710406/mozilla-social-mastodon-fediverse-moderation

"It’s a social network that doesn’t try to be neutral and isn’t attempting to preserve free speech. It’s trying to make the internet a nicer place to be."

"Mozilla’s content policies also make clear that the platform will err on the side of protecting people who need to be protected..."

senficon, to instagramreality
@senficon@ohai.social

The European Commission has launched its transparency database of platforms' decisions ("statements of reasons", in DSA lingo). It already contains thousands of records from major platforms that will hopefully give some quantitative insights into their content moderation approaches, the degree of automation, etc. Unfortunately, the qualitative information on individual decisions is very limited. https://transparency.dsa.ec.europa.eu/statement

ilumium, to France
@ilumium@eupolicy.social

Looks like we don't need laws after all for platforms to opportunistically suppress political user content: In 🇫🇷, Snapchat admitted in a parliamentary hearing that the company was "proud" to have collaborated "hand in hand with the interior ministry" to make sure only user content critical of the mass protests was shown on Snap Map.

Source: https://www.lesnumeriques.com/societe-numerique/snapchat-admet-avoir-bidonne-la-snap-map-pendant-les-emeutes-n211617.html

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org

: "During a two-day test, researchers at the Stanford Internet Observatory found over 600 pieces of known or suspected child abuse material across some of Mastodon’s most popular networks, according to a report shared exclusively with The Technology 202.

Researchers reported finding their first piece of content containing child exploitation within about five minutes. They would go on to uncover roughly 2,000 uses of hashtags associated with such material. David Thiel, one of the report’s authors, called it an unprecedented sum.

“We got more PhotoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close,” said Thiel, referring to a technique used to identify pieces of content with unique digital signatures. Mastodon did not return a request for comment."

https://archive.fo/BEdp5#selection-641.0-693.48
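The PhotoDNA technique Thiel mentions boils down to hashing an image and checking the hash against a database of known abuse material. A minimal sketch of that general idea follows; note that real PhotoDNA is a proprietary *perceptual* hash that survives resizing and re-encoding, whereas the SHA-256 stand-in below only matches byte-identical files, and the blocklist entry is a made-up example:

```python
import hashlib

# Hypothetical blocklist of hashes of known-bad files.
# (The single entry below is just sha256(b"test"), for illustration.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_content(data: bytes) -> bool:
    """Return True if this file's hash appears in the blocklist."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(is_known_content(b"test"))        # True: hash is in the blocklist
print(is_known_content(b"other file"))  # False: unknown content
```

Hash matching like this only catches *known* material, which is why Thiel's team paired it with hashtag analysis to find new content.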

stefano, (edited) to mastodon
@stefano@bsd.cafe

Last night, after a tip-off, I decided to start checking out the instances federated with BSD Cafe. I came across some truly appalling instances, featuring horrible images and content that could end up on our timeline. As a result, I've begun integrating some blocklists into BSD Cafe, taking a gradual approach to avoid going overboard with the blocks.

This has led to the immediate removal of over 10 followers from my profile—potentially good folks, but from highly questionable instances. I can't stand by as BSD Cafe gets tainted with such materials.

Friends of the Fediverse, choose your instances wisely. It will ensure a far better experience for everyone.
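The blocklists stefano describes are commonly shared between Mastodon admins as CSV files listing a domain and a severity ("suspend" drops all federation with the instance, "silence" hides it from public timelines). A small sketch of loading one, with a hypothetical file body and helper name:

```python
import csv
import io

# Hypothetical blocklist in the CSV format commonly exchanged
# between fediverse admins: one remote domain per row.
BLOCKLIST_CSV = """#domain,#severity
badplace.example,suspend
spammy.example,silence
"""

def load_blocklist(text: str) -> dict[str, str]:
    """Parse a domain-blocklist CSV into {domain: severity}."""
    reader = csv.DictReader(io.StringIO(text))
    return {row["#domain"]: row["#severity"] for row in reader}

blocks = load_blocklist(BLOCKLIST_CSV)
print(blocks["badplace.example"])  # suspend
```

Importing gradually, as stefano does, means reviewing each domain before applying "suspend", since suspension also severs existing follow relationships.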

rwg, to bluesky
@rwg@aoir.social

As I mentioned before, I'm digging into Bluesky, trying to figure it all out.

Today, I'm studying its approach to content moderation.

If anyone has insights about this -- especially if you know how many actual content moderators they have -- feel free to reach out.

paninid, to fediverse
@paninid@mastodon.world

The challenge with labeling Mastodon instances as a “community” is that there is a non-trivial percentage of users who don’t have just one community, interest, or affiliation.

People contain multitudes 🤷🏻‍♂️

paninid, to random
@paninid@mastodon.world

You don’t get social media without anti-social people.

That is why content moderation is a thing.

patrickokeefe, to reddit
@patrickokeefe@mastodon.social

Comparing longtime, volunteer community moderators to politicians is an unforced, massive blunder for the CEO of a platform for volunteer-led communities.

https://www.nbcnews.com/tech/tech-news/reddit-protest-blackout-ceo-steve-huffman-moderators-rcna89544

eff, to Bulgaria
@eff@mastodon.social

The EU Media Freedom Act could lead to marginalized groups who are often targeted with hate speech facing arbitrary content moderation and discrimination—and could have worldwide effects, EFF’s Christoph Schmon & Paige Collings write for @TechCrunch https://techcrunch.com/2023/10/03/the-eu-media-freedom-act-is-a-dangerous-law/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

: "“These 'general purpose’ models cannot be made safe because there is no single consistent notion of safety across all application contexts,” said Biderman. “What is safe for primary school education applications doesn't always line up with what is safe in other contexts.”

Even so, the results demonstrate that these tools—which, like all AI systems, are deeply embedded with human bias—seem to lack even the most obvious defenses against misuse, let alone protections for peoples’ creative work. And they also speak volumes about the apparent reckless abandon with which companies have plunged into the AI craze.

“Before releasing any AI software, please hand it to a focus group of terminally online internet trolls for 24 hours,” wrote Micah, a user on Twitter competitor Bluesky. “If you aren’t OK with what they generate during this time period, do not release it.”"

https://www.vice.com/en/article/88xdez/generative-ai-is-a-disaster-and-companies-dont-seem-to-really-care

spocko, to twitter
@spocko@mastodon.online

This is the kind of question that drives engagement! 1.1 million votes! Musk doesn't think about the harm his decisions have on real people. Do the parents have any say?
Moderation at X is practically nonexistent, so what happens when Musk violates a TOS or a law?
How long would his defamation or disinformation stay up, causing NEW harm?

Can the families sue Musk? X?

senficon, to random
@senficon@ohai.social

Interested in DSA data? The EU is inviting researchers to develop tools to make use of data under the DSA: https://hackathon.peren.fr/ (can't link directly to the English version for some reason, switch to English in the top right corner in desktop view)

researchbuzz, to twitter
@researchbuzz@researchbuzz.masto.host

"Social media company X is in the process of hiring 100 content moderators for a new office in Austin, Texas that will focus on fighting child abuse content, a goal it hopes to complete by the end of the year, an X executive said on Saturday. The Elon Musk-owned company announced the new 'Trust and Safety center of excellence' ahead of a U.S. Senate hearing on Jan. 31 about online child sexual exploitation."

https://money.usnews.com/investing/news/articles/2024-01-27/musks-x-aims-to-hire-100-content-moderators-in-austin-by-end-of-year

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org

: "FACEBOOK AND INSTAGRAM’S parent company, Meta, is contemplating stricter rules around discussing Israeli nationalism on its platforms, a major policy change that could stifle criticism and free expression about the war in Gaza and beyond, five civil society sources who were briefed on the potential change told The Intercept.

“Meta is currently revisiting its hate speech policy, specifically in relation to the term ‘Zionist,’” reads a January 30 email sent to civil society groups by Meta policy personnel and reviewed by The Intercept. While the email says Meta has not made a final determination, it is soliciting feedback on a potential policy change from civil society and digital rights groups, according to the sources. The email notes that “Meta is reviewing this policy in light of content that users and stakeholders have recently reported” but does not detail the content in question or name any stakeholders.

“As an anti-Zionist Jewish organization for Palestinian freedom, we are horrified to learn that Meta is considering expanding when they treat ‘Zionism’ — a political ideology — as the same as ‘Jew/Jewish’ — an ethno-religious identity,” said Dani Noble, an organizer with Jewish Voice for Peace, one of the groups Meta has contacted to discuss the possible change. Noble added that such a policy shift “will result in shielding the Israeli government from accountability for its policies and actions that violate Palestinian human rights.”" https://theintercept.com/2024/02/08/facebook-instagram-censor-zionist-israel/?utm_campaign=theintercept&utm_source=twitter&utm_medium=social

phurd, to random

I keep getting motorcycle POV police getaway videos in my feed and I keep reporting them for containing dangerous acts. Wtf is wrong with people.

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org

: "Instagram limited the reach of a 404 Media investigation into ads for drugs, guns, counterfeit money, hacked credit cards, and other illegal content on the platform within hours of us posting it. Instagram said it did this because the content, which was about Instagram’s content it failed to moderate on its own platform, didn’t follow its “Recommendation Guidelines.” Later that evening, while that post was being throttled, I got an ad for “MDMA,” and Meta’s ad library is still full of illegal content that can be found within seconds.

This means Meta continues to take money from people blatantly advertising drugs on the platform while limiting the reach of reporting about that content moderation failure. Instagram's Recommendation Guidelines limit the reach of posts that "promotes the use of certain regulated products such as tobacco or vaping products, adult products and services, or pharmaceutical drugs.""

https://www.404media.co/instagram-throttles-404-media-investigation-into-drug-ads-on-instagram-continues-to-let-people-advertise-drugs/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org

: "Meta blocked a newspaper’s critical report about it on Facebook and its other social sites for hours, sparking a backlash that intensified after the company appeared to subsequently block links to the website of an independent journalist who republished the report.

The controversy began Thursday morning when users noticed that all links to the non-profit newspaper the Kansas Reflector had been flagged as a cybersecurity threat and their posts were removed. About seven hours later, the paper said, most of its links had been restored, save for one — a column that had criticized Facebook and accused it of suppressing posts related to climate change.

Meta apologized to the Reflector and its readers on Thursday for what the company’s communications chief, Andy Stone, called “an error that had nothing to do with the Reflector’s recent criticism of Meta.”

But on Friday, users who attempted to share the column on Facebook, Instagram, or Threads were shown a warning that it violated community guidelines. That seemed suspicious to Marisa Kabas, an independent journalist in New York, who asked the Reflector for permission to publish the text of the column on her own website, the Handbasket." https://edition.cnn.com/2024/04/05/tech/meta-nonprofit-newspaper-independent-journalist-alleged-censorship/index.html

remixtures, to news Portuguese
@remixtures@tldr.nettime.org

: "Our investigation found that fact-checks enjoy greater visibility in Google Web Search compared to the articles they seek to correct, both in terms of frequency of appearance and their placement within the SERP rankings. Specifically, our study shows fact-checks rank higher than problematic content across five topical keywords groups, Covid-19, climate change, the war in Ukraine, U.S. liberals and U.S. elections, except in contested stories related to the war in Ukraine, where articles about U.S. bio-labs share equal prominence with their corresponding fact-checks. The findings imply Google moderation effects, as fact-checking content is more prominent given (nearly) equal levels of optimisation. It also implies that fact-checks are generally more prominent for audiences searching for problematic content, though both often appear in the same SERP. Navigational queries (e.g., searching for the name of a source and that content) reduce moderation effects." https://dl.acm.org/doi/abs/10.1145/3614419.3644017

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org

: "For all their efforts to moderate content and reduce online toxicity, social media companies still fundamentally care about one thing: retaining users in the long run, a goal they’ve perceived as best achieved by keeping them engaged with content as long as possible. But the goal of keeping individuals engaged doesn’t necessarily serve society at large and can even be harmful to values we hold dear, such as living in a healthy democracy.

To address that problem, a team of Stanford researchers advised by Michael Bernstein, associate professor of computer science in the School of Engineering, and Jeffrey Hancock, professor of communication in the School of Humanities and Sciences, wondered if designers of social media platforms might, in a more principled way, build societal values into their feed-ranking algorithms. Could these algorithms, for example, promote social values such as political participation, mental health, or social connection? The team tested the idea empirically in a new paper that will be published in Proceedings of the ACM on Human-Computer Interaction in April 2024. Bernstein, Hancock, and a group of Stanford HAI faculty also explored that idea in a recent think piece.

For their experiment, the researchers aimed to decrease partisan animosity by building democratic values into a feed-ranking algorithm. “If we can make a dent in this very important value, maybe we can learn how to use social media rankings to affect other values we care about,” says Michelle Lam, a fourth-year graduate student in computer science at Stanford University and co-lead author of the study." https://hai.stanford.edu/news/building-social-media-algorithm-actually-promotes-societal-values
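The general approach the Stanford team describes, building a value into the feed-ranking algorithm itself, can be sketched as a re-ranker that trades predicted engagement against a classifier's partisan-animosity score. Everything below (the `Post` fields, scores, and penalty weight) is made up for illustration, not the study's actual model:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float  # predicted engagement, 0..1 (hypothetical)
    animosity: float   # partisan-animosity classifier score, 0..1 (hypothetical)

def value_rank(posts: list[Post], penalty: float = 0.5) -> list[Post]:
    """Toy re-ranker: downweight posts flagged as high in partisan
    animosity instead of sorting on engagement alone."""
    return sorted(posts,
                  key=lambda p: p.engagement - penalty * p.animosity,
                  reverse=True)

feed = value_rank([Post("outrage bait", 0.9, 0.9), Post("local news", 0.6, 0.0)])
print([p.text for p in feed])  # ['local news', 'outrage bait']
```

Under pure engagement ranking the outrage post would lead the feed; the penalty term is the "societal value" lever the researchers experimented with.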

remixtures, to apple Portuguese
@remixtures@tldr.nettime.org

: "Apple has removed a number of AI image generation apps from the App Store after 404 Media found these apps advertised the ability to create nonconsensual nude images, a sign that app store operators are starting to take more action against these types of apps.

Overall, Apple removed three apps from the App Store, but only after we provided the company with links to the specific apps and their related ads, indicating the company was not able to find the apps that violated its policy itself.

Apple’s action comes after we reported on Monday that Instagram advertises nonconsensual AI nude apps. By browsing Meta’s Ad Library, which archives ads on its platform, when they ran, on what platforms, and who paid for them, we were able to find ads for five different apps, each with dozens of ads. Two of the ads were for web-based services, and three were for apps on the Apple App Store. Meta deleted the ads when we flagged them. Apple did not initially respond to a request for comment on that story, but reached out to me after it was published asking for more information. On Tuesday, Apple told us it removed the three apps on its App Store." https://www.404media.co/apple-removes-nonconsensual-ai-nude-apps-following-404-media-investigation/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org

: "So you joined a social network without ranking algorithms—is everything good now? Jonathan Stray, a senior scientist at the UC Berkeley Center for Human-Compatible AI, has doubts. “There is now a bunch of research showing that chronological is not necessarily better,” he says, adding that simpler feeds can promote recency bias and enable spam.

Stray doesn’t think social harm is an inevitable outcome of complex algorithmic curation. But he agrees with Rogers that the tech industry’s practice of trying to maximize engagement doesn’t necessarily select for socially desirable results.

Stray suspects the solution to the problem of social media algorithms may in fact be … more algorithms. “The fundamental problem is you've got way too much information for anybody to consume, so you have to reduce it somehow,” he says."

https://www.wired.com/story/latest-online-culture-war-is-humans-vs-algorithms/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org

: "What rarely gets mentioned in these discussions, however, is the fact that the Chinese government has built the most comprehensive digital surveillance system in the world, which it primarily uses not to protect children, but to squash any form of dissent that may threaten the power of the Chinese Communist Party. “Everybody exists in a censored environment, and so what gets censored for kids is just one step on top of what gets censored for adults,” Jeremy Daum, a senior research scholar at Yale Law School’s Paul Tsai China Center and the founder of the site China Law Translate, told me.

It should set off warning bells for Americans that many states have explored legislation limiting internet access for minors in ways that mirror what China has done."

https://www.theatlantic.com/technology/archive/2024/05/tiktok-chinese-version/678325/

paninid, to TikTok
@paninid@mastodon.world

This ran afoul of community guidelines.

It is still up on / , though? 🤨🤷🏻‍♂️

Interesting what one platform considers okay for its rules and another doesn’t, relative to Mastodon.

The Chinese Communist Party will let you throw shade at failed Austrian artists, but Zuck protects them.

spocko, to Youtube
@spocko@mastodon.online

For the first time today I added an "altered content" label to my YouTube video because I used two photos that were generated by AI.

It is a new policy by YouTube. I've read the guidelines & examples.
Since I'm very interested in this, I want to know what it means for political speech on YouTube this election.

Lisa Monaco, the Deputy Attorney General, recently said they will charge people for threats of violence via AI.
Thoughts @patrickokeefe ?
https://support.google.com/youtube/answer/14328491?hl=en#zippy=%2Cexamples-of-content-creators-dont-have-to-disclose%2Cexamples-of-content-creators-need-to-disclose

AI generated Trump booking photo.
