inthehands,
@inthehands@hachyderm.io avatar

There’s a lot to chew on in this short article (ht @ajsadauskas):
https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination

“An AI resume screener…trained on CVs of employees already at the firm” gave candidates extra marks if they listed male-associated sports, and downgraded female-associated sports.

Bias like this is enraging, but completely unsurprising to anybody who knows half a thing about how machine learning works. Which apparently doesn’t include a lot of execs and HR folks.

1/

inthehands,
@inthehands@hachyderm.io avatar

Years ago, before the current AI craze, I helped a student prepare a talk on an AI project. Her team asked whether it’s possible to distinguish rooms with positive vs. negative affect — “That place is so nice / so depressing” — using the room’s color palette alone.

They gathered various photos of rooms on campus, and manually tagged them as having positive or negative affect. They wrote software to extract color palettes. And they trained an ML system on that dataset.
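A minimal sketch of what such a pipeline might look like, with synthetic arrays standing in for the campus photos; the k-means palette extraction, the classifier choice, and all numbers are assumptions for illustration, not the team’s actual code:

```python
# Toy version of the pipeline described above: extract a dominant-color
# palette from each photo and train a classifier on the affect tags.
# Synthetic arrays stand in for the campus photos; the k-means palette step,
# the logistic-regression model, and all numbers are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def extract_palette(image, n_colors=5):
    """Return the image's n dominant colors (flattened RGB centers) via k-means."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    # Sort centers by brightness so the feature vector is order-independent.
    centers = km.cluster_centers_[np.argsort(km.cluster_centers_.sum(axis=1))]
    return centers.flatten()

def synthetic_photo(publicity):
    """Fake 'photo': publicity-style images get brighter, punchier pixel values."""
    base = rng.uniform(80, 160, size=(32, 32, 3))
    boost = 60 if publicity else 0
    return np.clip(base + rng.normal(boost, 20, size=base.shape), 0, 255)

# The confounder: "happy" rooms mostly came from publicity photos,
# "sad" rooms from the students' own snapshots.
labels = np.array([1] * 40 + [0] * 40)          # 1 = positive affect, 0 = negative
photos = [synthetic_photo(publicity=(y == 1)) for y in labels]

X = np.array([extract_palette(p) for p in photos])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))  # high, but it learned the photo source
```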

2/

inthehands,
@inthehands@hachyderm.io avatar

Guess what? Their software succeeded!…at identifying photos taken by Macalester’s admissions department.

It turns out that all the publicity photos, massaged and prepped for recruiting material, had more vivid colors than the photos the students had taken themselves. And they’d mostly used publicity photos for the “happy” rooms and their own photos for the “sad” rooms.

They’d encoded a bias in their dataset, and machine learning dutifully picked up the pattern.

Oops.

3/

inthehands,
@inthehands@hachyderm.io avatar

The student had a dilemma: she had to present her research, but the results sucked! The project failed! She was embarrassed! Should she try to fix it at the last minute?? Rush a totally different project?!?

I nipped that in the bud. “You have a •great• presentation here.” Failure is fascinating. Bad results are fascinating. And people •need• to understand how these AI / ML systems break.

4/

finestructure,
@finestructure@mastodon.social avatar

@inthehands Reminds me of a lesson I learned about 30 years ago in a physics course. In pairs, we had to run experiments for a full day and then prepare an analysis.

Our results were garbage. We tried everything to explain them; all attempts failed. In the end we went in to present our “results” and expected to be roasted.

On the contrary, our tutor was delighted. Turned out an essential part of the experiment was broken and he praised us for doing all the “false negative” analysis 😮

buherator,
@buherator@infosec.place avatar

@finestructure @inthehands I heard a legend about a lab exercise at our uni where students were tasked to figure out the contents of a box by electrical measurements on some external connectors. Sometimes the box contained a potato wired up.

finestructure,
@finestructure@mastodon.social avatar

@buherator @inthehands If I had any faith that it wouldn’t immediately leak and alert students, I’d actually break an experiment on purpose as an instructor to teach this particular lesson 🙂

inthehands,
@inthehands@hachyderm.io avatar

@finestructure @buherator “You should do something to throw them a wrench!” is one of the most common suggestions I get from industry folks about the software project course I teach. And my response is always the same:

Have you •ever• been on a project that didn’t have spontaneous problems, surprising obstacles, sudden wrinkles? Just make sure they’re doing real work, and all the problems naturally happen on their own.

finestructure,
@finestructure@mastodon.social avatar

@inthehands Hah, true. These experiments were a bit different though. Sure, you sometimes encountered real problems but the setups were well maintained and by and large you'd get decent results.

What really lends itself to this kind of “broken experiment” is that you gather the data and can’t tell if it’s any good until you analyse it later. So you wouldn’t be messing with students’ data collection, “only” with their analysis.

inthehands,
@inthehands@hachyderm.io avatar

She dutifully gave the talk on the project as is, complete with the rug pull at the end: “Here’s our results! They’re so broken! Look, it learned the bias in our dataset! Surprise!” It got an audible reaction from the audience. People •loved• her talk.

I wish there had been some HR folks at her talk.

Train an AI on your discriminatory hiring practices, and guess what it learns? That should be a rhetorical question, but I’ll spell it out: it learns how to infer the gender of applicants.

5/

dalias,
@dalias@hachyderm.io avatar

@inthehands Epic.

inthehands,
@inthehands@hachyderm.io avatar

An interesting angle I’m sure someone is studying properly: when we feed these tabula rasa ML systems a bunch of data about the world as it is, and they come back puking out patterns of discrimination, can that serve as •evidence of bias• not just in AI, but in •society itself•?

If training an ML system on a company’s past hiring decisions makes it think that baseball > softball for an office job, isn’t that compelling evidence of hiring discrimination?
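To make that concrete, here’s a minimal sketch of the kind of check someone could run: fit a simple model on synthetic past-hiring data where the decisions were partly driven by gender, give it only what a resume screener would see, and read the bias straight off the learned weights. Every feature name and number below is an assumption for illustration, not anything from the article.

```python
# Toy "audit": synthesize past hiring decisions partly driven by gender,
# train a screener on resume-visible features only, and inspect the weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

experience = rng.normal(5, 2, n)                          # legitimate signal
gender = rng.integers(0, 2, n)                            # 1 = male in this toy setup
plays_baseball = (gender == 1) & (rng.random(n) < 0.6)    # gender-correlated proxies
plays_softball = (gender == 0) & (rng.random(n) < 0.6)

# Historical decisions: experience matters, but so does gender (the bias).
hired = (0.5 * experience + 1.5 * gender + rng.normal(0, 1, n)) > 3.5

# The screener never sees gender, only what a resume would show.
X = np.column_stack([experience, plays_baseball, plays_softball]).astype(float)
model = LogisticRegression(max_iter=1000).fit(X, hired)

for name, coef in zip(["experience", "baseball", "softball"], model.coef_[0]):
    print(f"{name:10s} {coef:+.2f}")
# Baseball gets a positive weight and softball a negative one: the model has
# reconstructed the gender bias from its proxies, which is itself a measurable
# trace of bias in the historical decisions it was trained on.
```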

6/

isagalaev,
@isagalaev@mastodon.social avatar

@inthehands first of all, thank you!

Now, reading through this thread prompted a related but different thought: the current generation of Tesla’s self-driving AI eschews codified decision-making in favor of learning how to drive purely from human drivers. Which should obviously be a bad idea if your stated goal is better-than-human behavior. But everyone is just closing their eyes and saying "well, I guess they know what they’re doing". They don’t.

inthehands,
@inthehands@hachyderm.io avatar

@isagalaev True. Their stubborn focus on vision over other types of input is also baffling. Tesla’s whole approach to self-driving makes no sense to me; looks like a bottomless money pit from where I sit.

(Note that Boston Dynamics doesn’t use ML of this type at all, IIRC.)

isagalaev,
@isagalaev@mastodon.social avatar

@inthehands I think their vision layer is okay. It can reliably identify and classify objects and their placement. It’s what to do with this information that has always been the problem: you’ve got this car over there moving that way and that car standing over here; what input do you apply to the pedals and the steering wheel? This part turned out to be harder than vision. And now they’re trying to solve it with AI as well. Which just swaps one set of edge cases for another and can’t be debugged.

inthehands,
@inthehands@hachyderm.io avatar

@isagalaev At least some of the embarrassing Tesla self-driving fails I’ve seen in videos online are situations where cross-checking multiple forms of input (radar, map, etc) would probably have helped a lot.

isagalaev,
@isagalaev@mastodon.social avatar

@inthehands I own one, and I can tell that when they switched from radar to vision to detect obstacles in front of the car, it became much smoother and more reliable. Radar is too low-res and just produces a noisy signal you can’t rely on.

Another thing that’s missing is memory. Musk likes to talk about human eyes as sensors, but we also rely on memory a lot. After going through a turn a few times, a human is much better at predicting behavior. But Tesla approaches every interaction as a tabula rasa.

inthehands,
@inthehands@hachyderm.io avatar

There’s an ugly question hovering over that previous post: What if the men •are• intrinsically better? What if discrimination is correct?? What if the AI, with its Perfect Machine Logic, is bypassing all the DEI woke whatever to find The Actual Truth??!?

Um, yeah…no.

A delightful tidbit from the article: a researcher studying a hiring AI “received a high rating in the interview, despite speaking nonsense German when she was supposed to be speaking English.”

These systems are garbage.

7/

inthehands,
@inthehands@hachyderm.io avatar

I mean, maaaaaaybe AI can help with applicant screening, but I’d need to see some •damn• good evidence that the net effect is positive. Identifying and countering training set bias, evaluating results, teasing out confounders and false successes — these are •hard• problems, problems that researchers work long months and years to overcome.

Do I believe for a hot minute that companies selling these hiring AIs are properly doing that work? No. No, I do not.

8/

inthehands,
@inthehands@hachyderm.io avatar

AI’s Shiny New Thing promise of “your expensive employees are suddenly replaceable” is just too much of a candy / crack cocaine / FOMO promise for business leaders desperate to cut costs. Good sense cannot survive the onslaught.

Lots of businesses right now are digging themselves into holes that they’re going to spend years climbing out of.

9/

inthehands,
@inthehands@hachyderm.io avatar

Doing sloppy, biased resume screening is the •easy• part of HR. Generating lots of sort-of-almost-working code is the •easy• part of programming. Producing text that •sounds• generally like the correct words but is a subtle mixture of obvious, empty, and flat-out wrong — that’s the •easy• part of writing.

And a bunch of folks in businesses are going to spend the coming years learning all that the hard way.

10/

gentrifiedrose,

@inthehands Hiring managers are so unskilled that in studies, choosing random resumes resulted in a more competent and happy workforce. Hiring managers generally hire people they like or who remind them of themselves, which always results in a bullied workforce because managers aren’t exactly the best workers or nicest people. They lack the self-awareness to know when they should hire the coal covered in dust vs the shiny diamond.

StevenSavage,
@StevenSavage@sfba.social avatar

@inthehands In a discussion I saw, someone noted that a "removing AI from workflow" consulting company would soon be viable.

inthehands,
@inthehands@hachyderm.io avatar

At this point, I don’t think it’s even worth trying to talk down business leaders who’ve drunk the Kool-Aid. They need to make their own mistakes.

BUT

I do think there’s a competitive advantage here for companies willing to seize it. Which great candidates are getting overlooked by biased hiring? If you can identify them, hire them, and retain them — if! — then I suspect the payoff quickly outstrips the cost savings of having an AI automate your garbage hiring practices.

/end

inthehands,
@inthehands@hachyderm.io avatar

Yeah. The spam arms race is playing out in many spheres, and it feels kind of desperate right now tbh. A defining feature of our present moment.

From @JMMaok:
https://mastodon.online/@JMMaok/111953486878595616

Voline,
@Voline@kolektiva.social avatar

@inthehands @JMMaok Look y’all, how about we just cut the Gordian Knot here and destroy capitalism? No adverts in a world with no money.

gentrifiedrose,

@inthehands AI is never going to hire a candidate named Devonte who was the local black student union president in favor of Theodore William Authier III who was polo president in college. 😂

failedLyndonLaRouchite,

@inthehands

we often hear about bad decisions made by local, state, or federal governments,
and a large part of this is because gov't info is public (at least in the US)

but we rarely hear the details of bad decisions by corporations

spend $100 million on a new website that is so bad it gets buried?

No one ever knows, because that is private info

and no one seems aware of this

[ edit ] of course, the right wing spends a lot of time & $ harassing the media about this

weekend_editor,
@weekend_editor@mathstodon.xyz avatar

@inthehands @ajsadauskas

If your dataset contains biases, then ANYTHING you train on it will inherit those biases (absent specific corrective action).

Pretty much every textbook tells you that your models will sometimes train on non-obvious "details". This applies to AI. It applies to machine learning. It applies to statistics, even simple old regression and classification.

If AI/statistics practitioners know this, why do we have to keep re-learning this lesson the hard way?

Perhaps managements need a couple of knocks to the side of the head to beat this fact into them?
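To underline the “even simple old regression” point, here’s a tiny sketch using nothing but ordinary least squares on made-up numbers; every variable name and coefficient below is an assumption for illustration.

```python
# Plain least squares, no ML library: a biased historical outcome plus a
# correlated proxy feature is enough for the fit to inherit the bias.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)            # protected attribute, not in the model's inputs
proxy = group + rng.normal(0, 0.3, n)    # an innocuous-looking correlated feature
outcome = skill + 0.8 * group + rng.normal(0, 0.5, n)   # historical outcome carries the bias

X = np.column_stack([np.ones(n), skill, proxy])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("intercept, skill, proxy:", np.round(coef, 2))   # proxy soaks up the group effect

# One crude "specific corrective action": drop the proxy so the fit can't
# reconstruct the group effect through it.
coef_fixed, *_ = np.linalg.lstsq(X[:, :2], outcome, rcond=None)
print("intercept, skill (proxy dropped):", np.round(coef_fixed, 2))
```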

Shephallmassive,
@Shephallmassive@mastodon.online avatar

@inthehands @ajsadauskas No maybe about it. It’s no good saying you’re a company that encourages diversity if you pay companies to run selection algorithms to weed out the people there have always been prejudices against. Paying so you don’t waste time interviewing people who are “not like us.” Powerful people deciding who is valid. Commissioning algorithms to be prejudiced for you, so you don’t risk being caught exercising your nasty illegal prejudices, should still be a crime.

isomeme,
@isomeme@mastodon.sdf.org avatar

@inthehands @ajsadauskas

It's not even an ML-specific problem. The oldest axiom of computer programming is "Garbage in, garbage out".

wndlb,
@wndlb@mas.to avatar

@inthehands @ajsadauskas Has Chris Rufo offered his thoughts?
