inthehands
@inthehands@hachyderm.io

Composer, pianist, programmer, professor, rabble rouser, redhead

Computer Science at https://www.macalester.edu/mscs/
(Student projects: https://devgarden.macalester.edu)
Artistic Director of https://newruckus.org
Freelance dev, often with https://bustout.com
Musical troublemaker https://innig.net/music/

Minneapolis, MN
Born & raised in Ft. Collins, CO
he/him or they/them

The heart is the toughest part of the body.
Tenderness is in the hands.
— Carolyn Forché

searchable


Adam_Cadmon1, to random
@Adam_Cadmon1@mastodon.online

But because these 10 dudes have more money than 30 or 40 million people combined, we listen to them.

inthehands,

@researchbuzz @Adam_Cadmon1
A pattern that’s showing up with increasing frequency is that AI is a lot more exciting to execs and investors than it is to customers.

seachanger, to random
@seachanger@alaskan.social

if I were biden, I’d whip every single willing rep and senator into a full action plan to bring integrity to the supreme court, starting with the demand that judges whose households have documented partisan or ethics issues involving politicians be recused from all decisions involving those leaders

https://www.theguardian.com/law/article/2024/may/18/samuel-alito-flag-supreme-court-ethics-election?CMP=Share_iOSApp_Other

inthehands,

@seachanger
A big problem here is a mismatch in what options the different parties are willing to put on the table. The right wing got as far as it did despite national unpopularity because of utter shamelessness, a willingness to abuse every lever of power and ignore fundamental principles of democracy.

It’s not entirely a bad thing that we still have one major party that’s reluctant to abuse power and subvert process and circumvent democracy in similar ways. But that kind of boldness is what the moment requires.

inthehands, to random

So…the “Slack will now train AI using your data” thing is not as much of a five-alarm fire as I’d first assumed:

“We do not develop LLMs or other generative models using customer data.” ← GOOD.

“Data will not leak across workspaces.” ← Or so they say. They •are• training across workspaces, but it sounds like recommender systems rather than generative models, so…we’ll see. Seems fraught. Still, that public commitment does mean something — legal exposure, at least.

https://slack.com/intl/en-gb/trust/data-management/privacy-principles

1/2

inthehands,

The thing is, I expect these commitments to shift underfoot, from Slack and from every other company going through a “WOOT! WE HAVE CUSTOMER DATA!” phase.

We desperately need a regulatory regime for this. All that “Oh no! Congress must act to prevent Skynet!!” was just a bunch of BS, a laser pointer dot to keep the kittens in legislatures from looking at the actual issues.

2/2

StillIRise1963, to random
@StillIRise1963@mastodon.world

We’re in A LOT of trouble around here.

inthehands,

@StillIRise1963 @Okanogen
Hoping for it, but absolutely not counting on it.

inthehands, to random

This is a perfect case study in how LLMs (don’t) work.

Please consider carefully what human processes a system like this could actually replace. https://toot.cat/@devopscats/112445057997076822

inthehands,

It’s perhaps not obvious that in the example above, the LLM •does• actually do something useful! It conveys information about what’s typical: “When people talk about a goat and a boat and a river, there’s usually a cabbage too. Here are words that typically appear in the ‘answer’ position in such a context.”

What the LLM doesn’t do is actually solve the problem — or even understand the question. Its answer is garbage. Garbage with clues, as in a detective story. But garbage.
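
To make that concrete, here’s a deliberately crude sketch (invented co-occurrence counts, nothing like a real model’s internals) of surfacing the words that typically appear in the “answer” position:

```java
import java.util.List;
import java.util.Map;

// Toy sketch, nothing like a real LLM's internals: given a prompt about
// a goat, a boat, and a river, emit whatever words most often sit in the
// "answer" position of similar training texts. No puzzle constraints are
// modeled; nothing is solved.
public class GarbageWithClues {

    // Invented co-occurrence counts, standing in for what a model absorbs
    // from thousands of river-crossing puzzles in its training data.
    static final Map<String, Integer> ANSWER_POSITION_COUNTS = Map.of(
            "cabbage", 412,
            "wolf", 398,
            "cross", 377,
            "first", 250,
            "return", 201);

    static List<String> mostTypicalAnswerWords(int n) {
        return ANSWER_POSITION_COUNTS.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        // Reflects what's typical of such texts, not what's true of this
        // puzzle: garbage, but garbage with clues.
        System.out.println(mostTypicalAnswerWords(3)); // [cabbage, wolf, cross]
    }
}
```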

inthehands,

I’ve noticed developers often express excitement about LLM assistants when working with unfamiliar tools, and express horror about them when working with tools they know well. That pattern repeats in other domains as well.

It makes sense: “garbage with clues” can be helpful when you’re learning something unfamiliar. It’s truly helpful to hear “When people import [e.g.] Hibernate and say SessionFactory, code like this typically appears next.” That’s useful! Also probably wrong!
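
For instance, here’s a sketch of the classic HibernateUtil boilerplate that tends to follow those imports (my own invented example, not any assistant’s actual output):

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Roughly the boilerplate that "typically appears next" after Hibernate
// imports: the classic HibernateUtil singleton. Plausible, and built on
// guesses an assistant has no way to verify.
public class HibernateUtil {

    // Guess #1: a hibernate.cfg.xml actually exists on your classpath.
    // Guess #2: this API style matches the Hibernate version you're using.
    private static final SessionFactory SESSION_FACTORY =
            new Configuration().configure().buildSessionFactory();

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }
}
```

A useful map of what’s typical; whether it fits your project and your Hibernate version is still yours to verify.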

inthehands,

Two thoughts:

  1. Folks could design and market these ML tools around the idea of •identifying patterns• (the thing machine learning is actually good at) instead of •providing answers•. Pure fantasy at this point; too much collective investor mania around the wet dream of the magic answer box. Just noting that a better choice is on the table.

inthehands,

  2. CS / software education / developer training and mentorship needs to redouble its emphasis on •critical reading• of existing code, not just producing code. By critical reading, I mean: “What does this code do? Does it •really• do that? What is its context? What hidden assumptions am I making? How can it break? Does it do what we •want• it to do? What •do• we want it to do? What is our goal? Why? Is that really our goal? What is the context of our goal? How can our larger goal break?” etc.

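As a tiny worked example of that kind of reading (an invented snippet, not from any real codebase):

```java
// Invented snippet for the reading exercise above. What does this code do?
// Does it •really• do that?
public class CriticalReading {

    // Intended: the average of two exam scores.
    static int average(int a, int b) {
        return (a + b) / 2;
    }

    public static void main(String[] args) {
        // Hidden assumption: integer division is fine.
        System.out.println(average(89, 90)); // prints 89, not 89.5

        // Hidden assumption: (a + b) never overflows an int.
        System.out.println(average(2_000_000_000, 2_000_000_000)); // negative!

        // And the larger questions remain: is an average even the aggregate
        // our goal calls for? What is our goal? Why?
    }
}
```
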
inthehands, to random

This from @caseynewton brings to the foreground something that’s been eating at me:

What exactly do Google and OpenAI and Microsoft and the rest of the AI bubble think is going to happen here when LLMs disincentivize the creation of the data that feeds them?!

https://www.platformer.news/google-io-ai-search-sundar-pichai/

inthehands,

Working that out a bit:

The premise of the ad-supported web is that you profit by driving traffic to your site. The premise of LLMs is in large part to •supplant• traffic to web sites. LLMs need people to keep creating web sites. (More in the article.)

In private, behind closed doors, is the expectation that…well, what? That people will just keep posting useful bot-visible information for free?

inthehands,

Is there an expectation that AI model trainers will end up paying for content, and that ad revenue will be supplanted by “LLM training data fee” revenue? (I can’t imagine GoogleAISoft’s investors think that’s the lucrative, high-ROI future they’re buying.)

Do they privately know this LLM stuff is a bubble, and expect it to burst before data-source die-out kills it?

At a guess, it’s all just FOMO greed-panic and nobody’s thinking that far ahead. But truly, I wonder! What •do• they think will happen?!

inthehands, to random

Who called it “code review” instead of “objection-oriented programming”

inthehands,

(The serious answer to this frivolous question is of course “somebody on a good team where code review was supposed to be constructive dialogue and not just endless nitpicking”)

inthehands, to random

So…there is a concerted campaign, with Musk as its mouthpiece, to discredit Signal and get people to switch to Telegram. It’s disinformation, but there’s also useful information in it. The useful information is that a hideous, powerful, right-wing crank — or whoever’s yanking his chain — really, really wants people to use Telegram.

We’ve long known Telegram’s security is weak. But now, in light of this new information, we should move forward assuming that Telegram is actively compromised.

inthehands,

Lest it get lost in that longer post:

Assume Telegram is compromised. Not just vulnerable. Compromised.

inthehands, to random

Systems thinking from @tofugolem:

The coverup is the thing that turns one incident into an epidemic.

Boy Scouts and child abuse, corporations and financial crimes, police and racist violence: the pattern is all around. If you’re an org leader, fight that urge to cover up the small scandal — unless you’d rather have a flood of them. https://mastodon.social/@tofugolem/112435995009456601

inthehands, to random

Something I tell my software students a lot when they’re looking for jobs is to remember that a shockingly large number of job descriptions are written by people in HR who have next to zero understanding of the industry, the specific team, or the business need.

All they’ve got to work with is fragments they’ve heard without comprehension, coming to them through a terrible game of corporate telephone.

1/

inthehands,

I think there’s a sense out there that at some level, companies must be fundamentally competent or they’d have gone out of business, that whatever they’re doing must make some kind of sense, and that therefore it’s up to job applicants to please •them•, to meet •their• standards.

It takes a decade or two in industry to understand how barely functional most human orgs are, how much of the world runs on humans scrambling to mop up the slop our own processes create.

11/

inthehands,

Re that last sentence:
https://how.complexsystems.fail

“Complex systems contain changing mixtures of failures latent within them.”
“Complex systems run as broken systems.”
“Catastrophe is always just around the corner.”
“Human operators have dual roles: as producers & as defenders against failure.”

•That• is the reality every one of us is walking into as either a job applicant or a hiring org. As a human in the world.

12/

inthehands,

That sense that it’s the job applicant’s job to please the infallible company is especially keen with students. Trained their whole lives to seek out gold stars, students are always looking for how to get the next A — and feel lost walking into a world where that’s not how it works anymore, where the only people handing out gold stars are people looking to manipulate you.

13/

inthehands,

I’ve seen companies abuse that desire to please. Google in particular recruited on campus throughout the 2010s with the same attitude as the cocky high schooler whose dating strategy is to act like nobody is good enough for them. (That parallel was close enough to make me deeply uncomfortable.) Google was seen among students as the gold star that proved that you’re one of the smart ones, that you’ll make it.

Another reason those layoffs did so much psychological damage.

14/

inthehands,

I don’t think it all has to be this bad.

In a better world, the folks writing job descriptions are good psychologists and good social scientists: neither hiding the hiring process from teams nor dumping it in the laps of engineers who have no idea how to run a good one, but instead collaboratively understanding the needs of teams and then using their knowledge of humans and human systems to find the best people.

Is that too much to ask? I’m fairly cynical, but dammit, I don’t think it is.

/end

inthehands,

@airwhale
Yes. Women. First-gen students. Minorities. Students from blue-collar families. The list goes on: anyone who’s coming from a position of vulnerability, who’s been told their whole life that one step out of line means they get punished.

inthehands,

@acdha Yup. Yup. (Right there with you downthread!)
