aardrian, (edited)
@aardrian@toot.cafe

[1/3]

In “Can generative AI help write accessible code?” @tink finds that, sure, it can help:
https://tetralogical.com/blog/2024/02/12/can-generative-ai-help-write-accessible-code/

But only if you accept that fake-AI cannot write accessible code on its own.

Which means you still need to know how to do it yourself. Or pay someone to. Like copying from Stack Overflow.

cferdinandi,
@cferdinandi@mastodon.social

@aardrian @tink The owner of the last agency I worked with was a "classic Tech Bro" (was trapped at burning man this year) who LOVES AI.

When we first connected, I told him I thought AI mostly just exacerbated the "StackOverflow problem" of people dropping code they don't understand into projects that can ruin them in big but subtle ways.

Fast forward a few months, and he's excited about how "AI can solve A11Y issues for us!"

Nah, dude, your devs just need to learn that a div is not a button.
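The "div is not a button" point can be made concrete with a toy check (a hypothetical sketch, not any real linter): a native `<button>` is focusable and keyboard-operable by default, while a clickable `<div>` has to have `role`, `tabindex`, and key handling bolted on.

```python
# Hypothetical mini-linter illustrating "a div is not a button":
# it flags clickable <div>s that lack the attributes a native
# <button> provides for free.
from html.parser import HTMLParser

class DivButtonLinter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and "onclick" in attrs:
            # A div acting as a button needs at least role and tabindex
            # (plus Enter/Space key handling, not checked here).
            missing = [a for a in ("role", "tabindex") if a not in attrs]
            if missing:
                self.warnings.append(
                    f"<div onclick> missing {', '.join(missing)}; use <button> instead"
                )

linter = DivButtonLinter()
linter.feed('<div onclick="save()">Save</div> <button onclick="save()">Save</button>')
print(linter.warnings)
```

The native `<button>` passes untouched; the bare `<div>` gets flagged.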

aardrian,
@aardrian@toot.cafe

@cferdinandi I am sure when they get sued by a disabled user they will blame the LLM and assert no responsibility.

Because that is really the value proposition for that type of user — offload responsibility while making the money.

@tink

cferdinandi,
@cferdinandi@mastodon.social

@aardrian I hate doing it, but the "you're gonna get sued" stick is literally what I had to pull out to get them to give a shit. They were building sites for utility providers that were just completely broken for screen reader and keyboard users.

"What's the minimum we have to do to not get sued?"

🤬

I don't work there anymore for a reason.

aardrian,
@aardrian@toot.cafe

@cferdinandi Yeah, I refuse those kinds of clients. Not worth the hassle (nor legal risk when they turn around to blame you).

aardrian,
@aardrian@toot.cafe

[2/3]

@Aaron looks at fake-AI more holistically, identifying places where author failures make it the only viable approach for users (can it add link underlines, tho?):
https://alistapart.com/article/opportunities-for-ai-in-accessibility/

He also points out more general good uses.

His comparison to “Algorithms of Oppression” is, IMO, too loose. These tools are algorithms and they continue to perpetuate oppression.

aardrian,
@aardrian@toot.cafe

[3/3]

Aaron’s is a less acerbic and more hopeful version of my binary “No, ‘AI’ Will Not Fix Accessibility”:
https://adrianroselli.com/2023/06/no-ai-will-not-fix-accessibility.html

Léonie’s post confirms that UserWay has trained its LLM on its own broken code and has not improved since I called it out 8 months ago:
https://adrianroselli.com/2021/09/userway-will-get-you-sued.html#MoreFakeAI

The point is, these are fallible tools that can be helpful with the right existing knowledge or in absence of anything else.

Aaron,
@Aaron@front-end.social

@aardrian Having spent the last year and a half funding projects looking to improve outcomes for people with disabilities using AI and related technologies, I am hopeful, yes.

Also agree that using AI to help address software inaccessibility is a hard nut to crack, but worth pursuing.

For whatever it’s worth, I’m still skeptical when it comes to AI generally, especially language models. That said, I think we’re starting to understand how to use them effectively to improve accessibility.

Aaron,
@Aaron@front-end.social

@aardrian I wish I could share some of the amazing applications we’ve received… some really cool stuff folks are thinking about & working on.

We do get to talk about the ones we’ve funded though. Keep tabs on this: https://www.microsoft.com/en-us/accessibility/innovation

We’ll have some announcements next month as well.

aardrian,
@aardrian@toot.cafe

@Aaron I agree there is lots of potential and I am hopeful fake-AI can get past the scammers and environmental damage to genuinely help more people.

I just wish it wasn’t hyped beyond merit at the expense of existing technologies that already do so much good work (or caused them to rebrand as “AI”).

Sadly, I am at the age where I sound like an old man afraid of new technology when I am really just wary of the tech culture that raised me.

aardrian,
@aardrian@toot.cafe

@Aaron Geez, even that sounded grumpy.

heydon,
@heydon@front-end.social

@aardrian Wait wait wait... they trained their AI in accessibility by showing it code they're too incompetent to know is actually inaccessible, causing the AI to regurgitate inaccessible output?

aardrian,
@aardrian@toot.cafe

@heydon Yes.

Which is utterly on-brand for them.

heydon,
@heydon@front-end.social

@aardrian Amazing stuff 10/10

alastc,

@heydon @aardrian I think a decision-tree (type of thing) would perform better than an LLM. Tick-boxes and get the correct output for your scenario, rather than the fuzzy-matching process provided by an LLM.
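The decision-tree idea can be sketched in a few lines (a hypothetical example with made-up scenarios and outputs, not any real tool): tick-box answers map deterministically to known-good markup, rather than an LLM's fuzzy generation.

```python
# Hypothetical decision-tree sketch: deterministic answers in,
# the correct interactive element out.
def pick_element(navigates: bool, toggles_content: bool) -> str:
    """Map tick-box answers about a control to appropriate markup."""
    if navigates:
        # Goes somewhere else: use a link.
        return '<a href="...">...</a>'
    if toggles_content:
        # Shows/hides content in place: a disclosure button.
        return '<button aria-expanded="false">...</button>'
    # Plain in-page action: a plain button.
    return '<button>...</button>'

print(pick_element(navigates=False, toggles_content=True))
```

The same inputs always produce the same vetted output, which is the point of the decision-tree approach.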

heydon,
@heydon@front-end.social

@alastc @aardrian Both would be shite, though, innit

aardrian,
@aardrian@toot.cafe

@heydon @alastc
If UserWay wrote it, yeah.

vick21,
@vick21@mastodon.social

@aardrian IMHO, nobody can fix accessibility, but we can/should try… I would apply the same thinking toward AI. Now if you mix money into this affair, that’s a different matter altogether. So, I would concentrate on the money issue, first and foremost.

aardrian,
@aardrian@toot.cafe

@vick21 I agree all around.

We already have a proxy for how money can make accessibility worse while amping its claims — overlays.

Driven by money first.
