Looks like we don't need laws after all for platforms to opportunistically suppress political user content: In 🇫🇷 #France, a #Snapchat #lobbyist admitted in a parliamentary hearing that the company was "proud" to have collaborated "hand in hand with the interior ministry" to make sure only user content critical of the mass protests was shown on #SnapMaps.
#SocialMedia #Twitter #Musk #CommunityNotes #ContentModeration #FactChecking: "Community Notes’ value to Musk wasn’t just in its potential utility for inviting advertisers concerned about (in)accurate information on Twitter, but in its ability to cut costs: an entirely volunteer-run army of amateur fact-checkers to replace the paid content moderators to whom he’d shown the door. The Chief Twit’s controversial moves here didn’t sit well with many longtime Birdwatchers, who complained to the Post that “we don’t have a moderator anymore.” A college student involved with the Birdwatch Discord from jump mentioned that “it feels like the entirety of the Twitter misinformation system is now reliant on Birdwatch. Birdwatch is important but shouldn’t be the adjudicator of facts.”"
#SocialMedia #TikTok #ByteDance #Kenya #MentalHealth #ContentModeration: "TikTok’s parent company ByteDance is facing a potential lawsuit in Kenya over allegations it failed to protect the mental health of workers tasked with preventing disturbing content from appearing on the short-form video app.
James Oyange Odhiambo, a Kenyan former content moderator for TikTok, who was employed by the outsourcing firm Majorel in Nairobi, alleges he developed post-traumatic stress disorder (PTSD) as a result of his work. He says he was later unfairly dismissed in retaliation for advocating for better working conditions.
The allegations were made in a letter dated June 29 that was sent by the law firm Nzili and Sumbi Advocates to ByteDance and Majorel—the outsourcing company that employs the content moderators—threatening a lawsuit. The letter has given the companies two weeks to comply with a series of demands before lawyers for Odhiambo file a lawsuit.
In the letter, Odhiambo’s lawyers allege that he was required as part of his job to watch, at times, between 250 and 350 TikTok videos per hour. The “vast majority” of these videos, the letter alleges, were “horrific in nature.”"
Taylor Taranto livestreamed his threats to Obama. I wonder which platform he did it on? #Rumble? #Facebook? #TruthSocial? #Twitter? They ALL have TOS, but not all enforce them.
Taranto broke the law (18 U.S. Code § 879, threats against former Presidents), so he can be criminally punished.
But what are the penalties for those platforms that knew of the threats--that violated their own TOS--but STILL didn't act?
#EU #ContentModeration #SocialMedia #Twitter #DSA: "Twitter will respect the EU's content moderation rules, known as the Digital Services Act, the company's owner Elon Musk said on French TV.
"If a law is enacted, Twitter commits to comply with it," he told star presenter Anne-Sophie Lapix in a pre-recorded, sit-down interview broadcast in dubbed French on France 2.
Twitter is considered a very large platform under the EU’s content-moderation law, meaning the company will have to put in place mitigation measures against fake news from August 25 onwards. Earlier this month, Digital Minister Jean-Noël Barrot said he was "concerned" the platform wouldn't be able to comply with the rules.
Later this week, Internal Market Commissioner Thierry Breton and his team will visit Twitter’s U.S. headquarters and stress-test the platform."
Comparing longtime volunteer community moderators to politicians is a massive unforced blunder for the CEO of a platform built on volunteer-led communities.
As applications like ChatGPT and rival products rise in popularity, experts and stakeholders are split on whether and how Section 230 of the Communications Decency Act — a liability shield for internet companies over third-party content — should apply to the new tools.
Ashley Johnson, a senior policy analyst at the Information Technology and Innovation Foundation, said the “most likely scenario” is that if generative AI is challenged in court, it “probably won’t be considered covered by Section 230.”
“It would be very difficult to argue it is content that the platform, or service, or whoever is being sued in this case had no hand in creating if it was their AI platform that generated the content,” Johnson said.
“Moderation is education. You’re telling people how to be the best member they can be in the community. If you’re going to do that, then you have to codify what that is.” –@patrickokeefe
"Another challenge for multilingual models comes from disparities in the amount of data they train on in each language. When analyzing content in languages they have less training data for, the models end up leaning on rules they have inferred about languages they have more data for. This hampers their ability to understand the nuance and contexts unique to lower-resource languages and imports the values and assumptions encoded into English. One of Meta’s multilingual models, for instance, was trained using nearly a thousand times more English text than Burmese, Amharic, or Punjabi text. If its understanding of those languages is refracted through the lens of English, that will certainly affect its ability to detect harmful content related to current events playing out in those languages, like the Rohingya refugee crisis, the Tigray war, and the Indian farmers’ protest."
@KathyReid @CenDemTech @Wired 🥥 But in "defense" of AI #ContentModeration, it probably will end up being MUCH less expensive than paying human moderators -- and even if it isn't cheaper, it will never try to join a union. 🥥
💡🦾 Today, CDT Research is releasing a new report from Gabriel Nicholas + Aliya Bhatia, “Lost in Translation" – a comprehensive look at how large language models (LLMs) try (and often fail) to understand text in languages other than English.
The automated systems that increasingly mediate our interactions online — such as chatbots, #ContentModeration systems, and search engines — are primarily designed for and work far more effectively in English than in the world's other 7,000 languages. #LLMs
Some bad actors are using Animal Crossing to act out their extremist fantasies and lure others into them. An NYU report states that this type of user content is prevalent even on gaming sites with extensive content moderation policies.
“If you can’t be empathetic for the things you are not … then you’re not really doing good thoughtful community moderation, trust and safety work. … Ultimately, if you want to be truly great at this work, you have to protect the people who aren’t you.” -@patrickokeefe
"It’s a social network that doesn’t try to be neutral and isn’t attempting to preserve free speech. It’s trying to make the internet a nicer place to be."
"Mozilla’s content policies also make clear that the platform will err on the side of protecting people who need to be protected..."
If you look at the "original page" (don't do it if you are sensitive to this stuff) on Teri's server this garbage is still there. Can anyone who is conversant with admin/moderation provide any guidance here?
I keep getting motorcycle POV police getaway videos in my #YouTube #shorts feed and I keep reporting them for containing dangerous acts. Wtf is wrong with people. #report #contentmoderation
The classic trilemma goes: "Fast, cheap or good, pick any two." The Moderator's Trilemma goes, "Large, diverse userbase; centralized platforms; don't anger users - pick any two." The Moderator's Trilemma is introduced in "Moderating the Fediverse: Content Moderation on Distributed Social Media," a superb paper from @arozenshtein U of Minnesota Law, forthcoming in the Journal of Free Speech Law, available as a preprint on SSRN:
Rozenshtein proposes a solution (of sorts) to the Moderator's Trilemma: federation. De-siloing social media, breaking it out of centralized walled gardens and recomposing it as a bunch of small servers run by a diversity of operators with a diversity of #ContentModeration approaches. The #Fediverse, in other words.
In 1998, two Stanford kids published a paper in Computer Networks: "The Anatomy of a Large-Scale Hypertextual Web Search Engine," in which they wrote, "Advertising funded search engines will be inherently biased towards the advertisers and away from the needs of consumers."
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
And they refuse to tell anyone what the rules are, because if they told you what the rules were, you'd be able to bypass them. #ContentModeration is the only #infosec domain where "#SecurityThroughObscurity" doesn't get laughed out of the room:
Even when Twitter’s T&S infrastructure was at its most functional – which I’d say was 2021-mid 2022 – I sometimes saw appeals on content decisions, suspensions, etc. take 2-3 months unless I escalated to personal contacts at the company.
I keep seeing folks expect #moderation and community management decisions on volunteer-run fedi instances to happen in hours – not even days – and jumping to defederation when they don’t get immediate responses. It’s going to burn out so many admins, and makes me sad and worried about the sustainability and scalability of our communities.
I'm overdue for an #introduction, especially with all you new followers…so here goes.
I'm a software engineer with a degree in Anthropology. I highly recommend the combo.
Most recently I was tech lead for Scaled Human Review at Meta. I worked in the Integrity Foundation (what other companies call "Trust and Safety") on Better Engineering initiatives and #Metaverse integration, with the teams that build human review software for the 30-40K external reviewers. I'd sworn I’d never work at Facebook, but I decided to see if I could make a difference. I couldn’t. And it wasn't a good fit for either of us. But I learned a lot about how the sausages are made and why they have such a hard time with #contentmoderation.
I've been on #socialmedia for four decades (seriously, I saw someone catfished in chat in 1978—this stuff isn't new), and virtually everyone I know I met online somewhere—many I've still never met in person. Needless to say, that's made me pretty passionate about making online communities safe for everyone, and especially marginalized groups.
I'm now a freelance #consultant, working on my own projects (I'll write more on that later), and with my wife's #consulting company (see below). I'm planning to do a lot more writing about #society and #technology (as well as some #SFF), and to travel more.
I tend to write long posts (like this one). They may get shorter once my blog is back up. I don't stick to one topic, but I'll try to tag them so you can filter. I post about tech stuff (recent, as well as old geeky #Unix stuff), #social issues, #LGBTQ issues (especially the T), pretty #photos, and random personal anecdotes. When I boost, it's because I think it's something that might be interesting to someone, or some group, that follows me. Those tend to include all the above topics, plus SF&F-related things, and cool science stuff.
I'm #pan, #poly, #nonbinary (or #genderqueer, if you prefer). I prefer "they" for pronouns, but "he" is fine. I spent most of my life thinking I really was a straight cis man who just happened to be a bit quirky and a passionate and tearful ally, so I'm not too picky about how you refer to me. I'm also more than happy to answer any questions about all that, public or private.
I grew up mostly in #Maine and then lived in Massachusetts for a long time, but I now live on sovereign #Swinomish land in #WashingtonState (US), on the edge of the San Juan islands. Despite my first name (that's a story) and current location, I'm not Native American, although I focus a lot on Native American rights. My parents were both active in that area, and that was my introduction to civil rights in general.
I've been a #software engineer at various levels (from programmer to CTO to company founder) for 40+ years. I learned BASIC in high school, taught myself Pascal, FORTRAN and PL/1 in college, learned C as an intern at Bell Labs (Murray Hill, one floor up from the Unix crew), and went on from there. In college, I majored in #Anthropology with a concentration in #Psychology, and that's influenced the way I look at software ever since. Software is designed for people. Software systems build communities (whether intended or not). Anyone who does that damn well better understand how people and #communities work.
I've worked for Bell Labs (psych stats), Sperry Research (window systems, UX design), Apollo/HP (programmable shell, windowing systems, Unix porting, UX design), Bright Ideas (cookbook, educational games), OSF (windowing standards), Alfalfa (multimedia email - SMTP and X.400 :)), Wildfire (phone-based voice assistant), Utopia/USWeb (web and security consulting), Saroca (small boats), Messagefire (anti- #spam software), MessageGate (corporate compliance software), Somewhere (software consulting), ZeeVee (web video aggregation, metadata scraping), TiVo (video content correlation, #metadata pipelines), and Meta. Plus a few others.
I've been with my wife, Dr. Mollie Pepper, for over a decade. She's a #sociologist with a focus on #refugee migration, #gender, and violence, the kind of work that gives you PTSD. She did her dissertation on women's roles in the (now extremely defunct) peace process in #Myanmar (aka #Burma). A year ago she was at a military base frantically processing thousands of Afghan refugees and managing translators. She has a consulting company that specializes in evaluating and designing refugee service and placement programs. You can find her at https://carlsonpepper.com/. Everything I know about #feminism, #intersectionality, #queer theory, #CRT, and #racism I either learned from her, or she gave me the theoretical underpinnings to understand them properly.
I have two grown daughters from my first marriage, to Nassim Fotouhi, a kick-ass software engineer/engineering manager who came to the States just before the Iranian revolution.
Shadi Fotouhi is an artist (see my profile background photo, go look up the drug codes and compare them to the mermaids' behavior) turned software engineer; building dynamic room installations will do that to you. She worked in QA at a gaming company, and then at Jibo, a robotics startup. Now she's a senior software engineer at Wayfair--Kubernetes, release configuration, and all that fun stuff.
Shireen Hinckley is a documentarian, digital image technician, video editor, and co-founder of Somewhere Films (https://www.somewherefilms.com/shireen-hinckley), a womxn's filmmaking collective. She works for #Beyoncé at Parkwood Entertainment, where she's an editor and post-production supervisor for all of their video releases. She worked on "Black is King" and just about every video since then, whether it's for Instagram, Times Square, Tiffany's, the Oscars, or Chloe x Halle. No, I can't tell you when the Renaissance visual album will be out—but it will be amazing.
I'm incredibly honored to have those wonderful women in my life. I wouldn't be who I am without them.
A couple of other things that may come up, especially in my photos. My mother is an artist who lives in Maine in a round house she designed and the family built when I was in high school. And I'm part owner of a #lighthouse on Cape Cod.
Elon Musk is threatening to end his $44 billion agreement to buy Twitter, accusing the company of refusing to give him information about its spam bot accounts.