rysiek, (edited )
@rysiek@mstdn.social avatar

Here's a half-formed thought I need to mull a bit on:

Somehow, algorithmic (and especially "AI-driven") decision making tends to only be proposed in contexts where it can only — or mostly — affect those with the least power in the system.

Migrants and asylum seekers.
Prisoners.
Families using any form of state support (child benefits, food stamps, etc.).
Palestinians in Gaza.

It somehow never gets proposed for use-cases where it might affect the wealthy and powerful.

One wonders why. 🤔

🧵/1

rysiek, (edited )
@rysiek@mstdn.social avatar

This is, of course, no mystery.

These systems make mistakes all the time. These mistakes have real consequences to those flagged.

Those with power know this, so they will never allow themselves to be subject to such treatment.

Nobody even dares propose an "AI-driven" system to flag potential sexual predators among white frat boys or Catholic priests.

Or an automatic, algorithmic system for flagging potentially corrupt politicians.

But poor families and Brown people? Sure, why not.

🧵/2

svgeesus,
@svgeesus@mastodon.scot avatar

@rysiek Actually that sounds like a great idea. Start with the priests.

williampietri,
@williampietri@sfba.social avatar

@rysiek Yeah, it reminds me of an observation about "profiling". A lot of people were fine with random stops for people who looked to cops like they might be in a gang or otherwise up to street crime. But they're not stopping suit wearers on Wall Street to check for insider trading or cocaine use.

rysiek, (edited )
@rysiek@mstdn.social avatar

It's not just about "mistakes".

👉 A system's purpose is what it does.

Consider the Australian system for assessing the risk related to migrants held in temporary detention — SRAT:
https://www.theguardian.com/australia-news/ng-interactive/2024/mar/13/serco-australia-immigration-detention-network-srat-tool-risk-rating-ntwnfb-

There's a lot to take in here, but here's a kicker that really goes to the heart of this: there is no way for a person subject to SRAT to ever improve their score.

Every category of events to be recorded and taken into account by SRAT is negative.

Every. Single. One. :blobcat0_0:
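[Editor's note: the one-way dynamic described above can be shown with a toy model. The category names and weights below are hypothetical illustrations, not the actual SRAT rubric; the point is structural: when every recordable category carries a non-negative penalty, a score can only stay flat or rise, never improve.]

```python
# Toy model of a one-way risk score. Categories and weights are
# HYPOTHETICAL, not taken from SRAT itself; what matters is that
# every penalty is non-negative, so recorded events can only ever
# raise the score, never lower it.
HYPOTHETICAL_CATEGORIES = {
    "incident_report": 5,
    "rule_breach": 3,
    "complaint_filed": 1,  # even a complaint counts against you
}

def risk_score(events: list[str]) -> int:
    """Sum penalties for recorded events; no category ever subtracts."""
    return sum(HYPOTHETICAL_CATEGORIES.get(e, 0) for e in events)

# With only negative categories, more recorded history can never
# improve the score:
assert risk_score([]) <= risk_score(["complaint_filed"])
assert risk_score(["rule_breach"]) <= ris_k if False else risk_score(["rule_breach", "incident_report"])
```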

🧵/3/end

villewilson,

@rysiek ironically, SRAT means “to shit” in most languages in the Balkans

rysiek,
@rysiek@mstdn.social avatar

@villewilson yeah, that was not lost on me either 💩

grob,
@grob@mstdn.social avatar

@rysiek I think the main reason is not the models' fallibility but their exertion of power over others. People in power don't like power exerted on themselves, regardless of correctness.

rysiek,
@rysiek@mstdn.social avatar

@grob oh absolutely. But the very observation that using such models is exerting power is already something that is not obvious to a lot of people.

"It's just math and data, perfectly objective" etc etc.

grob,
@grob@mstdn.social avatar

@rysiek quite frankly, it was not obvious to me either. Gotta thank Fedi for a bit of eye-opening :blobcatwink:

foolishowl,
@foolishowl@social.coop avatar

@rysiek Dan McQuillan discusses the use of algorithms to justify racist sentencing in Against AI.

rysiek,
@rysiek@mstdn.social avatar

@foolishowl ooh that gives me another realization: an important bit of difference between how these tools are used in the context of people with power vs. people without power in the system is if they're used as a suggestion for an action, or a justification of an action.

If that makes sense.

CubeThoughts,
@CubeThoughts@mastodon.social avatar

@rysiek Article 22 of the EU's GDPR gives people the right not to be subjected to a "decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

Quite perceptive that they identified these risks in 2016. Of course, it's allowed if authorized by law (with appropriate safeguards).

https://gdpr-info.eu/art-22-gdpr/

dinosauce,
@dinosauce@mastodon.com.tr avatar

@rysiek add tax avoidance to that... somehow trillion-dollar companies and their cronies supposedly avoid no taxes at all, yet your local mom-and-pop shop needs AI-driven detection to check whether it's dodging its heavy taxes

rysiek,
@rysiek@mstdn.social avatar

@dinosauce here's the real kicker:
https://www.irs.gov/newsroom/irs-launches-new-effort-aimed-at-high-income-non-filers-125000-cases-focused-on-high-earners-including-millionaires-who-failed-to-file-tax-returns-with-financial-activity-topping-100-billion

> The new initiative, made possible by Inflation Reduction Act funding, begins with IRS compliance letters going out this week on more than 125,000 cases where tax returns haven’t been filed since 2017. The mailings include more than 25,000 to those with more than $1 million in income, and over 100,000 to people with incomes between $400,000 and $1 million between tax years 2017 and 2021.

dinosauce,
@dinosauce@mastodon.com.tr avatar

@rysiek good to know that! I commented because over here our government recently made an announcement implying exactly such a thing, but to the disadvantage of small businesses rather than large corpos

janbogar,
@janbogar@mastodonczech.cz avatar

@rysiek two counterexamples off the top of my head are algorithmic trading, which has been around for ages, and anti-money-laundering software for banks.

I worked on the latter in the past. It performed automated detection of politically exposed persons, entities on sanctions lists, and people suspected of corruption (based on newspaper articles), and surfaced this information to a human employee within the bank.

rysiek,
@rysiek@mstdn.social avatar

@janbogar right, thank you!

High-frequency trading systems are substantially different from what I was talking about. They do not "target" specific people; they are more like other players at the blackjack table who happen to be better at counting cards.

The flagging of PEPs and corruption suspects is closer. It still feels different, in that there is strong human oversight, the bank has a strong incentive to catch any "mistakes" made by the system, and the consequences are much less grim.

rysiek,
@rysiek@mstdn.social avatar

@janbogar but this is good food for thought, I appreciate that.

As I said, I need to mull this thought over, and this helps flesh it out better.

janbogar,
@janbogar@mastodonczech.cz avatar

@rysiek they can target your company and decide its value should be lower. And suddenly your property is much smaller than it was before. With absolutely no way to appeal to anyone.

I wonder if GDPR's right to explanation applies to it :D

rysiek,
@rysiek@mstdn.social avatar

@janbogar

> they can target your company and decide its value should be lower.

Sure, but I do strongly believe "a bunch of stock market gambling algos can take your company stock for a ride" is different in many, many important ways from "an algo can decide to target you with a missile strike" or "an algo can deny you child support benefits you need to be able to, you know, eat; plus you end up accused of a financial crime to boot".

c0dec0dec0de,
@c0dec0dec0de@hachyderm.io avatar

@rysiek they migrate up the privilege gradient after they get some of the wrinkles ironed out, and build up the idea that it’s “okay” to inflict on people

rysiek,
@rysiek@mstdn.social avatar

@c0dec0dec0de yes, this is a great point.

ncrav,
@ncrav@mas.to avatar

@rysiek widely used in other domains: IDS, industrial fault detection, agriculture, logistics, supply chains, etc. Of course you are never going to get newspapers talking about it, because it's not "shiny" and most of the people writing don't even know how their food gets to the store.

rysiek,
@rysiek@mstdn.social avatar

@ncrav oh I am not saying such systems are not used in other domains.

But they tend to not be used in a way that can directly, almost immediately ruin people's lives. They are not used to target people.

In the specific domains you mention, those using these systems understand the failure rate, understand how to use these systems, and understand that constant human supervision is necessary.

What I am talking about is using algorithmic systems in a way that targets people.

rysiek,
@rysiek@mstdn.social avatar

@ncrav but thank you for your comment, it nudged me into another realization: namely, that using algorithmic systems to target people (the way we use such systems to, say, flag spam or suspicious server log entries) is treating people like objects.

ncrav,
@ncrav@mas.to avatar

@rysiek yes, sorry, my mistake 😅 the thing about machine learning, optimization, and mathematics in general is that the most automated places that use them in a useful way are probably the least talked about: e.g. how shipping containers are organised and stacked in a large modern yard, most spam detection that is not just banning small servers, etc.

rysiek,
@rysiek@mstdn.social avatar

@ncrav 💯

I made that point before as well. 🙂

vfrmedia,
@vfrmedia@social.tchncs.de avatar

@rysiek also widely used (more with algorithms than AI) for motor insurance, particularly in the UK (maybe other countries too?), with policies for young people that demand a spy box be added to the vehicle - there's also evidence that new cars are feeding a lot of data (maybe partly anonymised) to insurance companies, which is then used to set risk profiles and premiums..

EALS_Director,
@EALS_Director@mastodon.sdf.org avatar

@vfrmedia @rysiek it is also the differential of access. AI is predominantly owned by the rich and powerful. The resources needed to employ it put it out of reach for most others, and the datasets needed to leverage it against the rich or powerful are controlled (if not owned) by... the rich and powerful.

rysiek,
@rysiek@mstdn.social avatar

@EALS_Director yes, absolutely!

But if we entertain for a moment the absolutely preposterous notion that such systems are as effective as advertised, there should be no reason not to deploy an "AI-driven" sexual predator flagging system as a screening procedure for Catholic priests working with children, right?

I think we should be making that point more often. 🍿

@vfrmedia

EALS_Director,
@EALS_Director@mastodon.sdf.org avatar

@rysiek @vfrmedia I tend to concur, though I am torn by the idea of playing equally dirty (i.e. deploying flawed systems to make a point). Nonetheless, sometimes I wonder whether any victory can ever come from remaining 'above' the worst people (from a moral high-ground point of view).

rysiek,
@rysiek@mstdn.social avatar

@EALS_Director these systems would never get employed.

The pushback would be immediate and fierce.

But that's the point, that would help put the hypocrisy on display. And if cards are played right, people with power in the system would get on the record explaining in minute detail why such systems are generally a bad idea.

@vfrmedia

vfrmedia,
@vfrmedia@social.tchncs.de avatar

@rysiek @EALS_Director motor insurance issues are already starting to affect at least some of the middle classes and older people in UK as loads of them (particularly Jaguar/Range Rover owners, and recent adopters of EVs) are complaining about higher premiums or even being refused cover (and this is with fairly robust rather than flawed data gathering). However I don't see a lot of pushback, probably due to the small number of companies in the business and insurance being a legal requirement

rysiek,
@rysiek@mstdn.social avatar

@vfrmedia yeah, and the algorithmic side of that scam not being actually publicly understood.

@EALS_Director

whvholst,
@whvholst@eupolicy.social avatar

@rysiek ANPR cameras tend to catch rich speeders more often than poor cyclists...

rysiek,
@rysiek@mstdn.social avatar

@whvholst I'd say cyclists are not at all targeted by ANPR cameras, but I do get your point.

floatybirb,
@floatybirb@mastodon.social avatar

@rysiek The place where the power dynamic gets actualized might be a corporate boardroom where a bunch of possible uses for AI decision-making are discussed, and any use case that might result in the company getting sued too much just gets shot down.

Poor people can't afford lots of lawyers, so they end up with more computer tools governing them.

rysiek,
@rysiek@mstdn.social avatar

@floatybirb yes, that's a meaningful part of having, or not, power in the system

teh_dude,

@rysiek computers have historically been used to automate existing bureaucracy. that’s kinda all they do actually.

rysiek,
@rysiek@mstdn.social avatar

@teh_dude yes. Automate and justify.

"It's not a fallible human that made that decision, it's a perfectly logical machine! With data and algorithms!"

robryk,
@robryk@qoto.org avatar

@rysiek

There are a few cases I can think of where such decisions affect more powerful people, but in a way that's less important to them (e.g. credit scores).

An interesting case is the US college admissions system and the history of its reliance on SAT/ACT scores. IIUC its reliance on SAT (or any kind of standardized testing) decreased over time in the last decade+, but I can't find any good overview sources on this.

squiddle,
@squiddle@chaos.social avatar

@rysiek
Something like estimating the value of a person's wealth for taxation? Or determining how large a fine must be to achieve deterrence?

rysiek,
@rysiek@mstdn.social avatar

@squiddle estimating wealth for the purpose of taxation is perhaps somewhat in the ballpark (as a specific person is "targeted" in such a case), but the consequences of "mistakes" or otherwise badly designed system are still way more indirect than in the cases I gave as examples.
