
emilymbender

@emilymbender@dair-community.social

Professor, Linguistics, University of Washington

Faculty Director, Professional MS Program in Computational Linguistics (CLMS)

If we don't know each other, I probably won't reply to your DM. For more, see my contacting me page: http://faculty.washington.edu/ebender/contact/


emilymbender, to random

Arghh - more problematic reporting, this time about robo-therapists.

https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing

A thread:
/1

emilymbender,

For the first ~1500 words, exactly 0 people with expertise in psychotherapy are quoted.

/2

emilymbender,

They talk up the idea that this is effective because people are more willing to open up to a "bot" than a real person. BUT WHAT IS HAPPENING TO THAT DATA?

(This finally comes up 1000 words further down the article.)

/3

emilymbender, (edited) to random

Ready for some more Mystery AI Hype Theater 3000? Join me and @alex for our next live stream in which we take on the AI hype infecting the social sciences.

Monday March 4, noon Pacific
https://www.twitch.tv/dair_institute

See you there!

emilymbender, to random

It seems like there are just endless bad ideas about how to use "AI". Here are some new ones courtesy of the UK government.

... and a short thread because there is so much awfulness in this one article.
/1

https://www.ft.com/content/f2ae55bf-b9fa-49b5-ac0e-8b7411729539

emilymbender,

Either it's a version of ChatGPT OR it's a search system where people can find the actual sources of the information. Both of those things can't be true at the same time. /2

emilymbender,

Also: the output of "generative AI", synthetic text, is NOT information. So, UK friends, if your government is actually using it to respond to freedom of information requests, they are presumably violating their own laws about freedom of information requests. /3

(This answer to my freedom of information request is raising a lot of questions not already answered in this answer to my freedom of information request...) /4

emilymbender,

Here they're basically admitting they don't trust it to speak for them (and they shouldn't) but also that they think that some government communication is just so much BS. /5


The level of magical thinking here is astonishing: /7

emilymbender,

(Uh, isn't the idiom "silver bullet" used under negation as in "There is no silver bullet that will solve this problem"?)
/8

emilymbender,

I can imagine doing statistical analysis of prescription databases to help identify patterns of error and then using that information to reduce error rates. Given the rest of this article though, I have no reason to believe they aren't planning to just 'ask ChatGPT'.

/9
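The kind of analysis imagined in the post above could be sketched like this. Everything here is a hypothetical illustration (the field names, the leave-one-out z-score approach, the threshold); nothing in the article describes an actual method:

```python
import statistics
from collections import defaultdict

# Hypothetical sketch: flag prescriptions whose dosage is a statistical
# outlier for that drug, so a human reviewer can check them for entry errors.
prescriptions = [
    {"drug": "drugA", "dose_mg": 10},
    {"drug": "drugA", "dose_mg": 11},
    {"drug": "drugA", "dose_mg": 10},
    {"drug": "drugA", "dose_mg": 9},
    {"drug": "drugA", "dose_mg": 100},  # plausible data-entry error
]

def flag_outliers(records, threshold=3.0):
    """Flag records whose dose lies > threshold std devs from the mean
    of the *other* doses for the same drug (leave-one-out, so an extreme
    value can't hide by inflating its own group's spread)."""
    by_drug = defaultdict(list)
    for r in records:
        by_drug[r["drug"]].append(r)
    flagged = []
    for recs in by_drug.values():
        for r in recs:
            others = [x["dose_mg"] for x in recs if x is not r]
            if len(others) < 2:
                continue  # not enough data to estimate a distribution
            mean = statistics.mean(others)
            sd = statistics.pstdev(others)
            if sd and abs(r["dose_mg"] - mean) > threshold * sd:
                flagged.append(r)
    return flagged
```

Note that this only surfaces candidates for human review; deciding whether a flagged prescription is actually an error still requires a clinician, which is exactly the opposite of 'ask ChatGPT'.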

emilymbender,

The final horror I want to point out in this article is that the reporting is entirely uncritical. No skepticism, no demands that the ministers in question show how they evaluated the tech and why they find it to be fit for purpose, etc. /fin

emilymbender, to random

Mystery AI Hype Theater 3000 Ep 27: in which @alex and I get to talk to Annalee Newitz and Charlie Jane Anders of @ouropinions about Asimov's Laws in fiction and US government documents

https://www.buzzsprout.com/2126417/14597462-episode-27-asimov-s-laws-vs-ai-death-making-w-annalee-newitz-charlie-jane-anders-february-19-2024

W/thanks to Christie Taylor for production!

Audiogram with MAIHT3k logo, title "Episode 27: Asimov's Laws vs. 'AI' Death-Making" and subtitle "The US military is soliciting frameworks for 'ethical' applications of autonomous weapons systems. But can you really logic gate the value of human life?"

Captions:

ANDERS: I'm trying to concretely understand what they mean by ethics when their starting position is, we're going to be murdering people. And I think what they mean is minimizing collateral damage. So how do you prevent it from just choosing targets based on, well, this looks like a terrorist, because that's what I saw in a Google image search. I feel like the idea that ethics is something that can be quantified is the work of someone who has never thought about morality, ethics, or the value of human life for, like, a second.

NEWITZ: It really reminds me of what happens when a company wants to reassure the public that its tech products are secure. And within the computer security community, of course, oftentimes these rubrics are laughed at. Those kinds of systems, though, those kinds of rubrics really help companies that want to basically rubber stamp their products, right? They want to be able to say, like, yes, we've subjected this to a security review and we've quantified our risk. And our risk is zero on a scale of one to five or whatever. [...]

emilymbender, to random

Nobody:

Google: "The program is part of the Google News Initiative, which launched in 2018 to provide publishers with technology and training."

Google strong-arming cash-strapped small publishers into polluting the info ecosystem and calling it "training".

https://www.adweek.com/media/google-paying-publishers-unreleased-gen-ai/

emilymbender, (edited) to random

I'm happy to announce that Chirag Shah & my paper "Envisioning Information Access Systems: What Makes for Good Tools and a Healthy Web?" is now available through ACM TWeb. A short thread of introduction:

https://dl.acm.org/doi/10.1145/3649468

>>

emilymbender,

This paper is definitely a reaction to OpenAI, Google, Microsoft and others pushing chatbots as information access tools (and as such is a follow up to our 2022 paper "Situating Search"):

https://dl.acm.org/doi/10.1145/3498366.3505816

>>

emilymbender,

We argue that it is important to reassert the central research focus of the field of information retrieval, because information access is not merely an application to be solved by the so-called ‘AI’ techniques du jour. Rather, it is a key human activity, with impacts on both individuals and society.

>>

emilymbender,

Sec 2 gives an overview of information behavior, Sec 3 reviews the literature on what people want from information access systems, and Sec 4 provides a critical look at how LLMs might fit into information access.

>>

emilymbender,

Sec 5 does some envisioning of what kinds of research questions might be beneficial to pursue to advance the goals of information access research given the presence of LLMs.

Finally, Sec 6 looks at the web as an information ecosystem and the threats posed by the unregulated use of synthetic media machines.

alex, to random

Thanks to Jack O'Brien and Miles Gray for having @emilymbender and me on The Daily Zeitgeist!

We talked about everything from AI to chicken brooders to the show For All Mankind.

https://www.iheart.com/podcast/105-the-daily-zeitgeist-28516718/episode/resist-the-urge-to-be-impressed-154514127/

emilymbender,

@alex Doing our best to make ridicule as praxis part of the Zeitgeist!

rlcw, to random

@emilymbender Question, to which I am hoping you may have an insightful answer: Can an LLM like ChatGPT comprehend when it does not have enough information to give a factually correct answer or is it technologically unable to distinguish between hallucination and fact?

emilymbender,

@rlcw Everything that comes out of an LLM is made up --- it has no information other than information about the distribution of word forms in text.
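The claim in the reply above, that a language model has nothing but facts about the distribution of word forms in text, can be illustrated with a toy bigram model. This sketch is purely illustrative (no production LLM works this simply, though the underlying point about modeling word-form distributions carries over):

```python
import random
from collections import defaultdict

# A toy corpus; the model's entire "knowledge" will be counts of which
# word form follows which. It stores no facts about cats, dogs, or rugs.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n, rng):
    """Emit n tokens by repeatedly sampling a next word form from the
    observed distribution. The output is fluent-looking word sequences,
    not statements grounded in any knowledge of the world."""
    out = [start]
    for _ in range(n):
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("the", 8, random.Random(0)))
```

Whether such a model emits "the cat sat on the mat" or "the cat sat on the rug" depends only on which continuations were frequent in training text, which is why "distinguishing hallucination from fact" is not an operation the model has access to.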

emilymbender, to random

One of the highlights for me at the LSA annual meeting in NYC was getting to meet Talia Sherman, host of the podcast Tomayto Tomahto --- so it was super fun to follow up with an interview for that same podcast. You can find the episode now on your favorite podcatcher. Here's the link for Podcast Addict:

https://podcastaddict.com/tomayto-tomahto/episode/171927046

Thanks, Talia, for having me on your podcast and for asking such great questions!

emilymbender, to random

Has anyone heard of the news site Biharprabha? They ran an article today with a fabricated quote attributed to me. I've emailed the editor in the hopes of getting it taken down, but we'll see.

emilymbender,

@MarkRDavid I do this mostly because I talk to journalists a lot and don't always get a notification from the journalist themselves when the pieces go up.
