emilymbender

@emilymbender@dair-community.social
Professor, Linguistics, University of Washington

Faculty Director, Professional MS Program in Computational Linguistics (CLMS)

If we don't know each other, I probably won't reply to your DM. For more, see my contacting me page: http://faculty.washington.edu/ebender/contact/


emilymbender, to random

Mystery AI Hype Theater 3000 Episode 9 is up!

"Call the AI Quack Doctor"

with @alex and our new theme music!

https://peertube.dair-institute.org/w/cDct9r1KJX7KgpnV8o71R2

emilymbender, to random

"The trusted internet-search giant is providing low-quality information in a race to keep up with the competition," --- this phrasing makes it starkly clear that it's a race to nowhere good.

https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees

From @daveyalba

>>

emilymbender,

“The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said.”

>>

emilymbender,

“Google’s leaders decided that as long as it called new products ‘experiments,’ the public might forgive their shortcomings, the employees said.”

➡️We don’t tolerate “experiments” that pollute the natural ecosystem and we shouldn’t tolerate those that pollute the information ecosystem either.

>>

emilymbender,

“Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety.”

➡️Are they though? It seems to me that those in charge (i.e. VCs and C-suite execs) are really only interested in competition (for $$).

“But ChatGPT’s remarkable debut meant that by early this year, there was no turning back.”

➡️False. We turned back from lead in gasoline. We turned back from ozone-destroying CFCs. We can turn back from text synthesis machines.

>>

emilymbender,

“On the same day, [Google] announced that it would be weaving generative AI into its health-care offerings.”

➡️ 🚨🚨🚨

“Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.”

➡️Employees are correct.

>>

emilymbender,

“One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review.”

➡️Not a good look, Google.

“But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.”

➡️And it shows…

>>

emilymbender,

“When Google’s management does grapple with ethics concerns publicly, they tend to speak about hypothetical future scenarios about an all-powerful technology that cannot be controlled by human beings”

➡️So tempting to focus on fictional future harms rather than current real ones.

Thank you, @daveyalba for this reporting.

emilymbender, to random

"The copyright symbol — which denotes a work registered as intellectual property — appears more than 200 million times in the C4 data set."

Appreciate this reporting from @nitashatiku & co

https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/

emilymbender, to random

"It is essential in this moment that we hold companies accountable for the technology they put into the world and not allow them to displace that accountability to the so-called AI systems themselves." me to Pranav Dixit at BuzzFeed News

https://www.buzzfeednews.com/article/pranavdixit/google-60-minutes-ai-claims-challenged

emilymbender, to random

Ever found the discourse around "intelligence" in "A(G)I" squicky or heard folks pointing out the connection w/eugenics & wondered what that was about?

History of it all can be found in this excellent talk by @timnitGebru (w/ co-author @xriskology )

https://www.youtube.com/watch?v=P7XT4TWLzJw

>>

emilymbender,

@timnitGebru @xriskology

Also great for understanding what the bundle of ideologies is, how they connect, and why any serious work towards improving things for people on this planet should be very clearly distanced from any of that.

>>

emilymbender,

@timnitGebru @xriskology

And just to be very clear: If your work has been exposed as pointing to eugenicist or otherwise racist underpinnings, it's not enough to just "disavow".

If you want to break that connection, you've got to do the work: read those who have been documenting the harms, understand how those harms relate to the work you were pointing to and interrogate how the concepts you've been drawing on could mean your work is perpetuating harm.

>>

emilymbender, to random

To all those folks asking why the "AI safety" and "AI ethics" crowds can't find common ground --- it's simple: The "AI safety" angle, which takes "AI" as something that is to be "raised" to be "aligned" with actual people is anathema to ethical development of the technology.

>>

emilymbender,

Hype isn't the only problem, for sure, but it is definitely a problem and one that exacerbates others. If LLMs are maybe showing the "first sparks of AGI" (they are NOT) then it's easier to sell them as reasonable information access systems (they are NOT).

>>

emilymbender,

If (even) the people arguing for a moratorium on AI development do so bc they ostensibly fear the "AIs" becoming too powerful, they are lending credibility to every politician who wants to gut social services by having them allocated by "AIs" that are surely "smart" and "fair".

>>

emilymbender,

If the call for "AI safety" is couched in terms of protecting humanity from rogue AIs, it very conveniently displaces accountability away from the corporations scaling harm in the name of profits.

>>

emilymbender,

It's frankly infuriating to read a signatory to the "AI pause" letter complaining that the statement we released from the listed authors of the Stochastic Parrots paper somehow squandered the "opportunity" created by the "AI pause" letter in the first place.

>>

emilymbender,

Yes, we need regulation. But as we said:

"It is indeed time to act: but the focus of our concern should not be imaginary "powerful digital minds." Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."

https://www.dair-institute.org/blog/letter-statement-March2023

>>

UlrikeHahn, to random

@emilymbender @RuthStarkman I did: so I clearly missed something you intended, e.g., is your point intended to apply only to ACL, and not, say, Springer volumes?

emilymbender,

@RuthStarkman @UlrikeHahn my comment was for Ulrike. If you're citing something that was published in a closed venue and posted to arXiv, then I think you cite the published/reviewed version and also give the arXiv link.

emilymbender, to random

Several things that can all be true at once:

  1. Open access publishing is important
  2. Peer review is not perfect
  3. Community-based vetting of research is key
  4. A system for bypassing such vetting muddies the scientific information ecosystem

>>

emilymbender, to random

MSFT lays off its responsible AI team

The thing that strikes me most about this story from @zoeschiffer and @caseynewton is the way in which the MSFT execs describe the urgency to move "AI models into the hands of customers"

https://www.platformer.news/p/microsoft-just-laid-off-one-of-its

>>

emilymbender,

@zoeschiffer @caseynewton

Self-regulation was never going to be sufficient, but I believe that internal teams working in concert with external regulation could have been a really beneficial combination.

https://twitter.com/emilymbender/status/1336517038525665280?s=20

>>

emilymbender,

@zoeschiffer @caseynewton

So, what can we do as researchers and otherwise, in this moment?

I have to believe there is value in calling out hype when we see it, in declining to participate in the hype, and in advocating for regulation.

And those who are making a buck off of this will try to tell us: You can't regulate this --- that will stifle progress!

To which I ask: Progress towards what, exactly?

>>

emilymbender,

@zoeschiffer @caseynewton

And they will tell us: You can't possibly regulate effectively anyway, because the tech is moving too fast.

But (channeling Ryan Calo here): The point of regulation isn't to micromanage specific technologies but rather to establish and protect rights. And those are enduring.

>>

emilymbender,

@zoeschiffer @caseynewton

I call on everyone who is close to this tech: we have a job to do here. The techcos where the $, data and power have accumulated are abandoning even the pretense of "responsible" development, in a race to the bottom.

At the very least, we should be working to educate those around us not to fall for the hype---to never accept "AI" medical advice, legal advice, psychotherapy, etc.
