cmaclell
@cmaclell@sigmoid.social

Assistant Professor at Georgia Tech's School of Interactive Computing.

I do research at the intersection of #CognitiveSystems, #AI, #HCI, #CognitiveScience, and #LearningScience. My work focuses on understanding how people teach and learn and building computational systems that can teach and learn like people do.

Outside work, I am into #dogs, #backyardchickens, #beekeeping, and #sourdough. I love #scifi and #fantasy.


cmaclell, to random
@cmaclell@sigmoid.social

Lane Lawley just did an awesome job presenting our new interactive task learning system, VAL, at CHI 2024. Our system can learn new tasks on the fly from interactive language instructions and can then execute these tasks at the user’s command. Although we use an LLM to support natural language interaction, VAL’s real power comes from its hierarchical task network representation/planner, which lets it generalize and reason. You can learn more about our system here: https://dl.acm.org/doi/10.1145/3613904.3641915

This is a gif showing someone interacting with the VAL agent in the Overcooked AI game environment. They ask it to cook an onion in a chat interface, and the system responds by asking the user for clarification on how to cook an onion. The user then clarifies that this consists of getting an onion, putting it in the pot, and turning it on. As the user provides clarification, the agent begins executing the task in the environment. When the user then asks VAL to cook a tomato, it is able to generalize its knowledge to this new situation and execute the task without any additional instruction from the user.
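
The generalization step in that demo is easy to sketch: once a decomposition like "cook an onion" is stored with its concrete argument lifted to a variable, the same method applies to a tomato. Here is a minimal, hypothetical Python sketch of that idea; it is not VAL's actual code or API, and all names are invented:

```python
class HTNAgent:
    """Toy agent that stores task decompositions learned from instruction
    and reuses them by lifting concrete arguments (onion) to a variable (?x)."""

    def __init__(self):
        self.methods = {}  # task name -> list of step templates containing ?x

    def learn(self, task_name, arg, steps):
        # Lift the concrete argument so the method generalizes to new objects.
        self.methods[task_name] = [s.replace(arg, "?x") for s in steps]

    def execute(self, task_name, arg):
        if task_name not in self.methods:
            raise KeyError(f"Don't know how to {task_name!r}; please clarify.")
        return [s.replace("?x", arg) for s in self.methods[task_name]]

agent = HTNAgent()
# The user clarifies what "cook an onion" means:
agent.learn("cook", "onion", ["get onion", "put onion in pot", "turn on pot"])
# The lifted method now handles a tomato with no further instruction:
print(agent.execute("cook", "tomato"))
# -> ['get tomato', 'put tomato in pot', 'turn on pot']
```

The lifting step (replacing the concrete argument with ?x) is what lets a single instruction transfer to new objects; VAL's actual HTN representation and planner are of course far richer than this.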

NicoleCRust, to random
@NicoleCRust@neuromatch.social

Summer reading recs?!

I’m looking to compile a big pile of IRL books for summer beach reading.

Big fan of scifi, specfic, murder mysteries. Strong female characters = bonus points. Can’t deal with darkness. Nothing high brow or tedious. Nonfiction lover but that list is already bursting at the seams.

I also love long series - 3 Body Problem, Wheel of Time, Foundation … those all have good memories attached.

If that brings anything to mind, please send the rec!!

cmaclell,
@cmaclell@sigmoid.social

@NicoleCRust @dcm I love The Expanse. If you’re looking for something a bit more lighthearted but still sci-fi, you might check out To Say Nothing of the Dog by Connie Willis (https://en.m.wikipedia.org/wiki/To_Say_Nothing_of_the_Dog). A witty novel about time-traveling historians.

NicoleCRust, to random
@NicoleCRust@neuromatch.social

Do psychologists "measure"?

Weird question, right?

"Measurements of attributes such as emotions, well-being, or intelligence are widely used for various purposes in society, but it remains a matter of discussion whether psychological measurement is analogous to measurement in the natural sciences, and to what extent it qualifies as measurement at all.'
https://doi.org/10.1080/09515089.2023.2300693
Edit: author is here! @mieronen

My initial take: what?! This seems silly. But I'm starting to warm up to it. It's about causality. Consider: "insomnia causes fatigue"; no one disputes it. But there's not a physical thing in the world called insomnia that causes a physical thing in the world, fatigue, billiard-ball-style. Rather, the physical causal chain happens by way of a lack of sleep causing the brain state that leads to the mind state of fatigue (in other words, that word "cause" is doing some heavy lifting in that phrase). The question is: can you meaningfully talk about causality when you have abstracted away from physical interactions?

On one hand, of course - you can develop causal models formulated entirely at the psychological level (rewards, punishments, surprises, mood) that make falsifiable predictions and you can both perturb and measure these things to test those models.
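
For concreteness, here is a toy structural causal model of the insomnia example in Python; every variable and equation is invented purely for illustration. It makes a falsifiable prediction at the psychological level (insomnia raises fatigue) even though the actual causal chain runs through a physical mediator, and intervening on that mediator screens the effect off:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=10_000, do_sleep=None):
    insomnia = rng.binomial(1, 0.3, n)                # psychological-level cause
    sleep = 8 - 4 * insomnia + rng.normal(0, 0.5, n)  # physical mediator (hours)
    if do_sleep is not None:                          # intervention: do(sleep = x)
        sleep = np.full(n, float(do_sleep))
    brain_state = -sleep + rng.normal(0, 0.5, n)      # stand-in for the neural chain
    fatigue = (brain_state > -6).astype(float)        # mind-level outcome
    return insomnia, fatigue

ins, fat = simulate()
print("P(fatigue | insomnia=1) =", fat[ins == 1].mean())  # near 1
print("P(fatigue | insomnia=0) =", fat[ins == 0].mean())  # near 0

# Intervening on the mediator screens off insomnia: the psychological-level
# "cause" works entirely by way of the physical chain.
ins, fat = simulate(do_sleep=8)
print("P(fatigue | insomnia=1, do(sleep=8)) =", fat[ins == 1].mean())  # near 0
```

The psychological-level claim stays perturbable and testable even though "insomnia" only causes "fatigue" by way of the physical chain underneath it.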

On the other hand, we probably do need to take some care that we aren't confusing ourselves as we throw around that word "cause" interchangeably for things that physically interact and abstractions of those things.

Thoughts? I'm particularly curious about cases in which this type of abstraction has led researchers astray.

cmaclell,
@cmaclell@sigmoid.social

@NicoleCRust @katejjeffery @mieronen If we use a modern physical computer as a metaphor for the brain and software as a metaphor for the mind, it’s certainly possible to have glitches, bugs, errors, etc. due solely to physical hardware problems, solely to software problems, or to both. I would be surprised if something similar was not true for mental disorders in people.

cmaclell,
@cmaclell@sigmoid.social

@NicoleCRust @katejjeffery @mieronen @yoginho Yes, analogies can be tricky and the word “mind” here is overloaded, so let me rephrase. A physical computer is analogous to the brain. Their architectures differ, but presumably they are both information processing systems. Software corresponds to the information, knowledge, and algorithms that are being computed/processed by the brain. A mental disorder might be purely a software issue (i.e., corrupted info or a corrupted algorithm) without any hardware issue.

cmaclell,
@cmaclell@sigmoid.social

@NicoleCRust @katejjeffery @mieronen @yoginho Now, presumably all software manifests in some way at the hardware level (e.g., info is stored and retrieved in computer memory/neurons after all), but the hardware can be doing everything it is supposed to do and there can still be problems that manifest at the “software” level.
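
As a toy illustration of that last point (purely invented, not taken from any source in this thread): the "hardware" below is a tiny interpreter that executes every instruction flawlessly, yet the "software" still misbehaves because its algorithm is wrong.

```python
def run(program, memory):
    """Flawless 'hardware': faithfully executes each instruction as specified."""
    for op, *args in program:
        if op == "add":
            a, b, dst = args
            memory[dst] = memory[a] + memory[b]
        elif op == "copy":
            src, dst = args
            memory[dst] = memory[src]
    return memory

# 'Software' meant to compute x + y into 'out', but a bug copies x instead.
buggy_program = [
    ("copy", "x", "out"),  # bug: should be ("add", "x", "y", "out")
]
print(run(buggy_program, {"x": 2, "y": 3, "out": 0}))
# -> {'x': 2, 'y': 3, 'out': 2}: wrong answer (should be 5), even though the
# hardware did everything it was supposed to do.
```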

cmaclell,
@cmaclell@sigmoid.social

@NicoleCRust @katejjeffery @mieronen @yoginho Check out John Anderson’s book “How Can the Human Mind Occur in the Physical Universe?” (https://academic.oup.com/book/4367). Also this paper, “Practices and Pitfalls in Inferring Neural Representations” (https://pure.mpg.de/rest/items/item_2563085_7/component/file_2574085/content). They show it is hard to definitively infer “software”-level representations by analyzing “hardware” activations. This suggests it would be difficult to infer a software-related mental problem by analyzing the brain directly.
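
One concrete way this can mislead researchers (a hypothetical sketch in the spirit of that pitfalls paper, not an example taken from it): a decoder can read a "software-level" variable out of activations even when the system never computes that variable, because it is confounded with a low-level feature the system does encode.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
concept = rng.integers(0, 2, n)                 # the "software" variable we hope to find
luminance = concept + 0.3 * rng.normal(size=n)  # a low-level feature confounded with it

# A "brain" that encodes only luminance -- it never computes `concept` at all.
units = np.tanh(np.outer(luminance, rng.normal(size=50))
                + 0.1 * rng.normal(size=(n, 50)))

acc = cross_val_score(LogisticRegression(max_iter=1000), units, concept, cv=5).mean()
print(f"decoding accuracy for 'concept': {acc:.2f}")
# Well above chance (0.5), even though 'concept' appears nowhere in the
# network's computation: decodable is not the same as represented or used.
```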

cmaclell,
@cmaclell@sigmoid.social

@NicoleCRust @katejjeffery @mieronen @yoginho Now, if there was a problem in the hardware, then you could probably see that by analyzing the brain.

jbigham, to random
@jbigham@hci.social

review that recommended RRX for us dinged us for novelty compared to an arxiv paper that was published about 20 days before the CHI deadline.

assuming that's even a good criticism -- do we just cede to whoever posts on arxiv first now?

what do other communities do?

cmaclell,
@cmaclell@sigmoid.social

@jbigham @5tuartreeves I got exactly the same criticism! (Labeling our work as not HCI enough and recommending we submit to an NLP venue instead.) Maybe we got the same reviewer, who is just copy-pasting this same criticism on all their potentially NLP-related papers 😅. I also like their comment that designing a usable interactive machine learning interface for end users is NOT an HCI problem 🙄.
