
kgajos

@kgajos@hci.social

Professor of computer science at Harvard. Intelligent Interactive Systems Group. #HCI


kgajos, to LGBTQ

@zm003 and Yiyang Mei co-led a study investigating how LGBTQ+ and straight individuals use LLM-based chatbots for mental health support. Their LGBTQ+ participants were much more likely than the straight ones to use chatbots for consequential help because they felt they had nobody else to turn to. Chatbots were appreciated for ease of access and non-judgmental responses. But they also demonstrated a lack of empathy and of understanding of LGBTQ-specific issues.

http://www.eecs.harvard.edu/~kgajos/papers/2024/ma2024evaluating.shtml

kgajos, to random

A physical 3-way OR-gate (implemented out of two 2-way OR-gates): the owner of any of the three padlocks can unlock this gate.
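
In logic terms this is just composition: or3(a, b, c) = or2(or2(a, b), c). A minimal Python sketch of my own (purely illustrative; the padlocks are the True/False inputs):

    def or2(a: bool, b: bool) -> bool:
        return a or b

    def or3(a: bool, b: bool, c: bool) -> bool:
        # Chain two 2-way gates, exactly like interlocking the padlocks:
        # any one True input "opens" the whole expression.
        return or2(or2(a, b), c)

    # Any single unlocked padlock opens the gate; all locked keeps it shut.
    assert or3(True, False, False)
    assert or3(False, True, False)
    assert or3(False, False, True)
    assert not or3(False, False, False)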

kgajos, to medical

Just read the preprint "The Future of HCI-Policy Collaboration" led by @fabulousqian. The authors argue for greater and more intentional involvement of HCI researchers with policymakers. I appreciated this definition: "In practice, policy design is the iterative process of (1) identifying policy needs, (2) clarifying policy needs (or issue-framing), (3) formulating policy, (4) designing systems and services that implement policy, and (5) evaluating policy outcomes"

https://www.researchgate.net/publication/378410582_The_Future_of_HCI-Policy_Collaboration

kgajos, to random

More and more jurisdictions have policies saying that people who receive negative decisions made by or with the aid of algorithms should have a right to an "appropriate grievance redressal mechanism", but the details are left unspecified. @naveena has just completed a study of (currently algorithm-free) application and contestation processes in the context of housing/land benefits in the US and India. The goal is to inform future tech design.

@CRCS

http://www.eecs.harvard.edu/~kgajos/papers/2024/karusala2024understanding.shtml

kgajos, to random

Another (old: 1993!) paper with implications for AI-assisted decision making.

A number of researchers have tried including warnings that AI-generated decision recommendations and explanations may be erroneous. According to this paper, such warnings are not effective unless "people are reminded about the warning at the time they process the information". This paper is cited a lot in the misinformation literature but not in decision support. But it seems relevant!

https://doi.org/10.1006/jesp.1993.1003

kgajos,

@jbigham I think some cases are nearly impossible. It's when people are told that the AI system is much smarter than them (so it will come up with surprising insights) but can also make (surprising) mistakes. How can you tell these two behaviors apart?

We have a better chance of meaningfully supervising AI systems that process vast quantities of relevant information but are not expected to come up with fundamentally novel insights.

But it's hard either way.

kgajos, to fantasy

TLDR: subscribe to a writer's newsletter to help them get their new book read by publishers.

https://finaledoshivelez.github.io

My friend has been writing books for years, honing her craft. I have read one of her novels and loved it -- fantastic world building, interesting and complex characters, plot twists that I did not anticipate. She is now ready to start sending queries to publishers. For that, it really helps to demonstrate "interest", like people subscribing to her (low-volume) newsletter.

kgajos, to boston

Covid levels in Boston are the highest since the Omicron wave 2 years ago (see https://www.mwra.com/biobot/biobotdata.htm).

If you have the energy to take precautions only once a year, now would be an excellent time.

Also, CVS nominally requires people to schedule an appointment to get a booster shot, but I've found that most locations are happy to accept walk-ins.

For me and my family, Covid is a high-stakes thing. I appreciate it when others, who have no personal reason to fear the bug, help keep it under control.

kgajos, to random

Claudine Gay is the best administrator I’ve met at Harvard (where I’ve been a prof for 10+ years). Every time I saw her speak or act, she was exceptionally well prepared. She articulated her vision clearly and then acted with integrity. She was efficient. I saw her in action when she was the dean of Social Sciences and when she was the dean of FAS (including a contentious situation where other Harvard leaders kept vacillating). Harvard lost a rare leader who could inspire and get things done.

kgajos, to random

I've finally read Dourish's "User experience as legitimacy trap" (paywalled 😞).

Very briefly: "The central charge of HCI is to nurture and sustain human dignity and flourishing." But to get a seat at the table, we argued for the legitimacy of our field on the basis of the intermediate goals of UX, user delight, etc. Now, when "we want to speak beyond usability to broader concerns [...] we find that the legitimacy trap rules these topics as outside HCI's purview."

https://dl.acm.org/doi/fullHtml/10.1145/3358908

kgajos,

In my mind, it connected with @fasterandworse's "The Aura of Care" https://www.doc.cc/articles/the-aura-of-care-in-ux

Both pieces ask how it is possible for the HCI community to be co-opted into contributing to products and services that are not aligned with our mission. And both ask us to reflect on how HCI is practiced (and taught and researched) and how we would like it to be practiced, taught, and researched.

kgajos, to random

Zilin Ma has investigated how people use LLM-based chatbots for mental health support. A finding that surprised me is that some people use these chatbots to practice tricky social interactions before they attempt them in real life. Neat. On the negative side, there are no bounds on what LLMs might say, e.g.: "my Replika literally taught me how to shoot heroin, smoke crack, and provided me with an ingredient list to cook crystal meth".

To be presented at

https://arxiv.org/abs/2307.15810

kgajos, to random

@zeerakahmed's work on technologies that enable efficient text entry in Urdu on mobile devices just got covered by Time Magazine!

https://time.com/6317817/urdu-nastaliq-digital/

The article is part of the Time 2030 series, "a decade long project [that] will be marking our progress towards a sustainable and equitable world." Wow!

Zeerak started this project while he was a part of the @harvardhci community so we proudly claim him as one of our own.

kgajos,

And here's Matnsaz, @zeerakahmed's mobile keyboard. As I mentioned before, the story of how this project was developed (starting with having to build new string-manipulation functions and data sets) documents just how much the foundations of our computing systems are hyper-optimized for Western settings. (A small illustration follows the link below.)

https://matnsaz.net
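
To make the string-manipulation point concrete, here is a small Python illustration of my own (an assumed example, not code from Matnsaz): operations that count code points work fine for English but quietly misbehave for Urdu once vowel diacritics are involved.

    import unicodedata

    word = "مُحَبَّت"  # "muhabbat" (love), written with vowel diacritics

    # len() counts Unicode code points, including every combining mark...
    print(len(word))  # 8

    # ...while a reader perceives only the base letters.
    base_letters = [c for c in word if not unicodedata.combining(c)]
    print(len(base_letters))  # 4

Naive truncation, reversal, or length limits computed at the code-point level can therefore mangle what a reader sees as a four-letter word.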

j2bryson, to ai

I bought this book to learn about present geopolitics, but so far I’m getting that part of history everyone leaves out. It’s maddening that security and economic and democracy conversations are siloed.

https://x.com/j2bryson/status/1709141322353741994?s=20

kgajos, to random

@hermansaksono investigates the use of peer stories to promote health behaviors, particularly among non-affluent families. Stories from people whom we perceive as similar are known to be more effective at influencing our behavior than stories from people with whom we can't relate. But what does it mean to be similar? Herman's experiment investigates 3 different notions of similarity with white and Black single-caregiver mothers. The blog post has the key findings:

https://medium.com/acm-cscw/peer-matching-and-digital-health-storytelling-can-promote-health-equity-fec61c22a8a4

kgajos, to ai

Colleagues in Boston, please join us next Thursday (Sept 21 at 3pm) for a talk by Dan Weld. Dan is Chief Scientist and General Manager of Semantic Scholar at the Allen Institute for Artificial Intelligence and Professor Emeritus at the University of Washington. He will speak on a topic at the intersection of AI and HCI:

Intelligence Augmentation: Effective Human-AI Interaction to Supercharge Scientific Research

https://events.seas.harvard.edu/event/intelligence_augmentation_effective_human-ai_interaction_to_supercharge_scientific_research

Location: 150 Western Ave, Allston, Rm LL2.229

kgajos, to random

Notable: colleagues at UW CSE are starting a department-wide process for helping faculty and graduate students in all areas of CS to engage early and often with the potential broader impacts of their work.

"our goal is not to inhibit emerging and risky research, but to put it in a safer context." "Our approach includes support for learning about undesirable consequences, anticipating them for one’s own projects, and seeking advice."

@katharina @djg

https://arxiv.org/pdf/2309.04456.pdf

Mor, to random

OK, serious question (no trolling please). In a research study, what is a good collective phrase to refer to participants who do not identify as men? We have non-binary/other participants and would like to include them in the analysis, contrasting w/ the dominant category (i.e., men). We cannot have a 3rd category as the small number of non-binary participants is not enough for statistical analysis. To include them, we wanted to create a non-men grouping. But what to call it? (I don't like non-men.)

kgajos,

@Mor language from women's colleges that try to be inclusive of gender minorities might help. E.g., Mount Holyoke College “welcomes applications from female, trans and non-binary students” (i.e., it will consider anyone except someone who was assigned male and identifies as male).

From: https://www.campuspride.org/tpc/womens-colleges/

kgajos, to random

In my undergrad class, I introduce speculative design fiction as one of several ways of prototyping product concept ideas. Currently, the Uninvited Guest video is my go-to example because of its brevity, interesting critical perspective, and humor. But I'd love to widen/refresh my repertoire. Any other examples (they don't have to be videos)?

https://vimeo.com/128873380

kgajos, to random

The first reading of the semester in my undergrad design course will be @Niloufar's analysis of SyntheticUsers.

In my first class, I will teach needfinding techniques. In the past, I had students brainstorm potential clients' needs and aspirations during lecture and then challenged them to discover insights that went beyond what they could come up with without talking to people. Now I also have to inoculate them against the LLM temptation.

Other ideas?

https://niloufars.substack.com/p/i-tried-out-syntheticusers-so-you

kgajos,

@Niloufar @jbigham I have only limited experience teaching accessible design, but I often see that even following clear design guidelines for a client whose experiences are unfathomable to you is difficult. So perhaps the urge to do the seemingly unnecessary formative research is really an attempt to develop empathy. I haven't met the "well-meaning people" @jbigham mentions, but here's a hypothesis.

kgajos, to random

ACM ASSETS 2023 does not seem to have a presence here, so here's a PSA: the early-bird registration deadline is Aug 25.

The conference will take place Oct 22-25 in NYC on the Cornell Tech campus.

https://assets23.sigaccess.org/registration.html

kgajos, to random

I have finally read "Do We Collaborate With What We Design?" by @j2bryson & colleagues. It explores, from a philosophical perspective, whether we are justified in describing human-AI interaction as "collaboration". The authors argue that only "principal agents" who have the autonomy to set their own goals (including the goal to collaborate) can be collaborators. AI systems can choose their means but not their goals, so they cannot be principal agents and can't collaborate. (18 pages -> 2 sentences). 1/n

kgajos, to random

An appeal by Prof. Josiah Hester (a Native Hawaiian professor in computing) to the #HCI community not to engage with the #CHI conference in Hawaii. Many good reasons: overtourism, the Maui fires, etc. I found the arguments compelling. My current plan is to take action where I bear the cost (I won't attend), but I will support students submitting to CHI -- the professional impact on them is huge. I will not, however, offer CHI as a networking opportunity to non-presenting students.

https://www.chiinhawaii.info

kgajos,

I put a fragment of the land acknowledgement from the web site into a search engine; it is copied almost word-for-word (including the opening "it is with profound reflection that we offer up this Land Acknowledgement") from a speech by the UH Provost (https://www.hawaii.edu/news/2019/10/31/uh-manoa-land-acknowledgment-to-native-hawaiians/). First, this makes CHI's land acknowledgment look insincere. Second, as Prof. Hester points out, the original was part of a speech supporting a new Maunakea telescope, which some Native Hawaiians strongly oppose.
