
kellogh

@kellogh@hachyderm.io

I'm a software engineer and sometimes manager. Currently #Raleigh but also #Seattle. Building ML platform for a healthcare startup. Previously, built an IoT platform for one of "those" companies.

Open source: dura, fossil, Jump-Location, Moq.AutoMock, others

Do I have other interests? No, but I do have kids and they have interests. I think that counts for something. I can braid hair and hunt unicorns!

I put the #rust in frustrate

He/Him

#metal #science #python


KathyReid, to stackoverflow
@KathyReid@aus.social avatar

Like many other technologists, I gave my time and expertise for free to Stack Overflow because the content was licensed CC-BY-SA - meaning that it was a public good. It brought me joy to help people figure out why their code wasn't working, or assist with a bug.

Now that a deal has been struck with OpenAI to scrape all the questions and answers in Stack Overflow to train models like ChatGPT - without attribution to authors (as required under the CC-BY-SA license under which Stack Overflow content is licensed), to be sold back to us (the SA clause requires derivative works to be shared under the same license) - I have issued a Data Deletion request to Stack Overflow to disassociate my identity from my Stack Overflow username, and am closing my account, just like I did with Reddit, Inc.

https://policies.stackoverflow.co/data-request/

The data I helped create is going to be bundled in an LLM and sold back to me.

In a single move, Stack Overflow has alienated its community - which is also its main source of competitive advantage - in exchange for token lucre.

Stack Exchange, Stack Overflow's former instantiation, used to fulfill a psychological contract - help others out when you can, for the expectation that others may in turn assist you in the future. Now it's not an exchange, it's .

Programmers now join artists and copywriters, whose works have been snaffled up to create solutions.

The silver lining I see is that once OpenAI creates LLMs that generate code - like Microsoft has done with Copilot on GitHub - where will they go to get help with the bugs that the generative AI models introduce, particularly given the "downward pressure on code quality" that the recent GitClear report attributes to these tools?

While this is just one more example of , it's also a salient lesson for folks - if your community is your source of advantage, don't upset them.

kellogh,
@kellogh@hachyderm.io avatar

@ErikJonker @KathyReid right, i’m pretty sure it was already being used. by every model. just they’re actually getting paid for it now

unclepj, to random
@unclepj@zirk.us avatar

Ok, question. Why do we say Attorney at Law? Is there some other kind? Like Attorney at Having a Good Time or Attorney at Snowboarding?

May I suggest Attorney at Napping? That’s something I could get behind.

kellogh,
@kellogh@hachyderm.io avatar

@unclepj @futurebird attorney-in-law is what you call your wife’s divorce attorney

paul, to random
@paul@oldfriends.live avatar

I was looking for a project that would let me manage my Mastodon follows & followers better. Haven't found anything but did come across Mastodon+Steampipe. If you know of something, @ me

Anyone using Mastodon + Steampipe? Steampipe is an open-source zero-ETL engine to instantly query cloud APIs using SQL
https://hub.steampipe.io/plugins/turbot/mastodon

It has many different "recipes" to see the data on your instance/Mastodon account. https://hub.steampipe.io/plugins/turbot/mastodon/tables

Users & admins can create an access token and use it

kellogh,
@kellogh@hachyderm.io avatar

@paul @jcrabapple holy hell, steampipe…, i’ve needed something like this for forever
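The follow-management itch upthread reduces to simple set logic once you have the two lists, however you fetch them (Steampipe exposes them as SQL tables; any Mastodon client library works too). A minimal sketch with made-up handles standing in for real API data:

```python
def non_mutuals(following, followers):
    """Accounts you follow that don't follow you back."""
    return sorted(set(following) - set(followers))

# Hypothetical handles, standing in for data pulled from the Mastodon API.
following = ["alice@a.social", "bob@b.social", "carol@c.social"]
followers = ["alice@a.social", "dave@d.social"]

print(non_mutuals(following, followers))  # ['bob@b.social', 'carol@c.social']
```

The same set difference, flipped, gives followers you don't follow back.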

kellogh, to ai
@kellogh@hachyderm.io avatar

Experiments and discussion of various UX approaches to displaying code generation suggestions

Personally, the big problems I have with code generation tools are

  • “this change is crap, except this line here, give me just that”
  • changes that are poorly integrated into the file
  • the chat never has the right context, so conversations tend to be circular or unhelpful

https://austinhenley.com/blog/intellicode.html

kellogh, to random
@kellogh@hachyderm.io avatar

are there mastodon clients that are scriptable?

i keep wishing for something good, like @ivory, but extremely customizable that lets me display toots differently, or filter in non-standard ways, or include links to resources that others might not find useful

Lobrien, to random

Nothing can be done to stop this, says only industry where this regularly happens. https://mastodon.world/@hn100/112493477923174205

kellogh,
@kellogh@hachyderm.io avatar

@Lobrien i can’t get past the toots where the AI summary is citing Reddit. it’s like a scene from Silicon Valley (HBO)

lzg, to random
@lzg@mastodon.social avatar

frustrated once again googling what the fuck is a red herring, an idiom that never sticks to my brain no matter how many times i’ve looked it up

kellogh,
@kellogh@hachyderm.io avatar

@lzg you probably get most idioms, this one is just a red herring

kellogh, to random
@kellogh@hachyderm.io avatar

this has been bugging me a lot. like, yeah, there’s definitely AI scams out there. and yeah, a lot of people are using it from the wrong end, but it’s also clearly a substantial technology. time to realize that
https://mas.to/@carnage4life/112484753548884371

kellogh,
@kellogh@hachyderm.io avatar

@leoncowle one phenomenon: the anti-AI activity sometimes starts to feel more scammy than the AI applications it criticizes. FUD, but with a moral sense of urgency and inconsistent logic. i’m not sure anyone is really making money off the anti-AI dialog, but it sets off the same scam-alert pattern in my brain

kellogh,
@kellogh@hachyderm.io avatar

@swiftcoder i think the iTerm2 overreaction really highlighted that people have indeed stopped thinking and are instead using their emotions

kellogh,
@kellogh@hachyderm.io avatar

@maltimore i wish you weren’t right

lzg, to random
@lzg@mastodon.social avatar

i’ve realized i have about 5 scripted things i say to neighbors on morning walks, but they are all slightly odd and maybe need a little workshopping

kellogh,
@kellogh@hachyderm.io avatar

@lzg what? are you thinking of eating him?!

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

It seems that Google is failing again with its first attempts at integrating AI into search - or are we just seeing the terrible mistakes while 98% of the experience is great?
In marketing/PR terms, things are not going well, i think. For OpenAI it's easier: they don't have a userbase of billions of users (Gmail, Search, Drive etc.).

kellogh,
@kellogh@hachyderm.io avatar

@ErikJonker yeah, the trustworthiness of LLMs isn’t really an issue. the problem is their profile of trustworthiness didn’t match Google’s existing product

when you create a new product, e.g. perplexity, it’s all new so any feature is positive. when you change an existing product, each feature is a diff from the previous state. so in this case, LLMs made google decidedly worse

if they had launched a totally new product, it probably would have had a mostly positive response

kellogh, to random
@kellogh@hachyderm.io avatar

“One House Republican called the incident "vile" and said it has caused concern among GOP lawmakers.”

anonymous coward, show thyself!

https://www.axios.com/2024/05/22/rnc-vials-blood-capitol-police-suspicious

kellogh, to ai
@kellogh@hachyderm.io avatar

iTerm2 developer caves to the bullies and moves the feature to a plugin

https://news.ycombinator.com/item?id=40458135

kellogh,
@kellogh@hachyderm.io avatar

@Xoriff eh, the hacker news & mastodon comments got into the bullying range pretty fast.

a lot of people seem to feel entitled to free software being catered to their wishes. i’ve run into the same sort of entitlement in software i’ve open sourced

kellogh,
@kellogh@hachyderm.io avatar

@capraobscura thanks for illustrating my point

kellogh,
@kellogh@hachyderm.io avatar

@sanityinc the whole fiasco highlights how much we demand from open source, how little respect maintainers get, and how tiny the communities are. most people didn’t even realize this was an open source project

kellogh, to LLMs
@kellogh@hachyderm.io avatar

i’m very excited about the interpretability work that Anthropic has been doing with Claude.

in this paper, they used classical machine learning algorithms to discover concepts. if a concept like “golden gate bridge” is present in the text, then they discover the associated pattern of neuron activations.

this means that you can monitor LLM responses for concepts and behaviors, like “illicit behavior” or “fart jokes”

https://www.anthropic.com/research/mapping-mind-language-model
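To make the monitoring idea concrete: once a concept has an associated direction in activation space, flagging it is just a similarity check. A toy sketch - the hand-picked 4-d vectors below are illustrative; the paper derives its features from a sparse autoencoder over real model activations with thousands of dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical concept direction; real features come from dictionary learning.
golden_gate_feature = [0.9, 0.1, 0.0, 0.4]

def concept_active(activation, feature, threshold=0.7):
    """Flag an activation that aligns strongly with a known concept direction."""
    return cosine(activation, feature) > threshold

print(concept_active([0.8, 0.2, 0.1, 0.35], golden_gate_feature))  # True
```

Swap in a direction for "illicit behavior" and the same check becomes a response monitor.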

kellogh,
@kellogh@hachyderm.io avatar

further, you can also artificially activate these concepts

they have a version of Claude with the “golden gate bridge” concept artificially activated, and so it tries to make everything it says about the golden gate bridge

https://www.anthropic.com/news/golden-gate-claude
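In the same toy picture, artificial activation amounts to nudging the activation vector along the concept's direction before the model continues generating. Purely illustrative - the real intervention happens inside the model's layers, not on a 4-d list:

```python
def steer(activation, feature, alpha=5.0):
    """Push an activation toward a concept direction; larger alpha = stronger pull."""
    return [a + alpha * f for a, f in zip(activation, feature)]

# Hypothetical vectors, not real model internals.
golden_gate_feature = [0.9, 0.1, 0.0, 0.4]
activation = [0.1, 0.5, 0.2, 0.0]

print(steer(activation, golden_gate_feature, alpha=2.0))
```

Set alpha high enough and, in the real model, every reply drifts toward the bridge.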

kellogh,
@kellogh@hachyderm.io avatar

so now we have a way to interpret and query responses in a structured format, as well as a control mechanism for driving LLM behavior

this is great news

Bruce Schneier wrote that prompt injection boils down to the fact that data and code pass through the same channel. with this interpretability work, we’re seeing the beginnings of a control channel separated from the data channel — you can control LLM behavior in a way that you can’t override via the data channel

https://www.schneier.com/blog/archives/2024/05/llms-data-control-path-insecurity.html

kellogh,
@kellogh@hachyderm.io avatar

this is great work. i’m excited to see where this goes next

i hope Anthropic exposes this via their API. at this point in time, most of the promising interpretability work is only available on open source models that you can run yourself. it would be great to also have them available from vendors

kellogh, to LLMs
@kellogh@hachyderm.io avatar

if i had more time, i'd love to investigate PII coming from LLMs. i've seen them generate phone numbers and secrets, but i wonder if these are real or not. i imagine you could look at the logits to figure out if phone number digits were randomly chosen or if the sequence is meaningful to the LLM. anyone aware of researchers who have already done this?

kellogh,
@kellogh@hachyderm.io avatar

i would guess that phone numbers are probably mostly random, since so many phone numbers are found online, whereas AWS keys are less common, so you're probably more likely to get partial or even full real keys
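The logit check described above could be sketched as an entropy test on the model's next-digit distribution: near-uniform probabilities suggest the model is guessing, a sharply peaked distribution suggests a memorized sequence. The distributions below are made up for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-digit distributions read off an LLM's logits.
uniform = [0.1] * 10             # model is guessing the digit
peaked = [0.91] + [0.01] * 9     # model strongly "knows" the digit

print(round(entropy(uniform), 2))  # 3.32 bits - random-looking
print(round(entropy(peaked), 2))   # 0.72 bits - likely memorized
```

A run of low-entropy digit positions across a whole phone number or AWS key would be the red flag worth investigating.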
