kellogh
@kellogh@hachyderm.io
I'm a software engineer and sometimes manager. Currently #Raleigh but also #Seattle. Building ML platform for a healthcare startup. Previously, built an IoT platform for one of "those" companies.

Open source: dura, fossil, Jump-Location, Moq.AutoMock, others

Do I have other interests? No, but I do have kids and they have interests. I think that counts for something. I can braid hair and hunt unicorns!

I put the #rust in frustrate

He/Him

#metal #science #python


sarahtaber, to random
@sarahtaber@mastodon.online avatar

There are 2 things to know about US agriculture.

  1. It’s very hard to make a living as a new farmer.

  2. About half our farmland is owned by wealthy families, investment funds, & others who buy it up as an asset- but don’t farm themselves.

There IS a fix to both problems!

sarahtaber,
@sarahtaber@mastodon.online avatar

That's why I’m running.

Once people hear what good leadership can do for rural areas, they get pumped. Polls show I win by nine points!

A Democrat! Running for an agriculture seat in the South! Nine points.

https://taberfornc.com/

kellogh, to LLMs
@kellogh@hachyderm.io avatar

i used an analogy yesterday, that LLMs are basically system 1 (from Thinking Fast and Slow), and system 2 doesn’t exist but we can kinda fake it by forcing the LLM to have an internal dialog.

my understanding is that system 1 was more tuned to pattern matching and “gut reactions”, while system 2 is more analytical

i think it probably works pretty well, but curious what others think

Lobrien,

@kellogh I use that exact analogy. And emphasize that we certainly do use and need System 2 at least occasionally. At some point, human-like reasoning must use symbols with definite, not probabilistic, outcomes. Can that arise implicitly within attention heads? Similar to embeddings being kinda-sorta knowledge representation? I mean, maybe? But it still seems hugely wasteful to me. I still tend towards neuro-symbolic being the way.

sanityinc, to Software
@sanityinc@hachyderm.io avatar

Can you teach "taste" in #software design? I'm not sure I can, but I know it when I see it (or fail to).

carnage4life, to random
@carnage4life@mas.to avatar

It's a real tragedy that founders can now be stained by the fact they used to have an NFT startup (e.g. Rabbit R1 founders) but VCs who funded the biggest scams are still walking around with no repercussions or damage to their reputations.

dan, to random
@dan@discuss.systems avatar

Can't believe Seattle requires cats to be licensed, everyone knows that cats are libertarians*.

  • "convinced of their fierce independence while dependent on a system they don’t appreciate or understand"
mekkaokereke, to random
@mekkaokereke@hachyderm.io avatar

No one watches women's sports! (Because we don't televise the games 🤡).

So the WNBA premiere of Caitlin Clark was televised. But the WNBA premiere of Angel Reese (2023 champ who beat Caitlin Clark) and Kamila Cardoso (2024 champ who beat Caitlin Clark), wasn't televised or available to stream in the app. So... A fan set up a bootleg stream. 430K viewers tuned in to watch.🤯

https://www.nola.com/sports/lsu/angel-reese-lsu-fan-livestream-wnba-debut-drew-430000-viewers/article_ad17806f-56e1-5be1-ab0c-785e0a26dbaf.html

This is free money! Customers are trying to throw it at you!

juglugs, to random
@juglugs@mastodon.social avatar

How do I stay in shape?
I stole an ostrich, it chases me around the house all day, I hate it

danjac, to random
@danjac@masto.ai avatar

I've found the happy space for using Docker in local development is to use it for external services such as databases, Redis etc (as I have to use different versions of e.g. MySQL or PostgreSQL), but use language-specific tools such as pyenv or nvm instead of running your project in a container.
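
A minimal sketch of that split (container names and versions are just examples, not from the post):

# external services run in containers, pinned per project
docker run -d --name dev-postgres -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres:16
docker run -d --name dev-redis -p 6379:6379 redis:7

# the project itself runs on a locally managed runtime
pyenv install 3.12
pyenv local 3.12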

matthartley, to random
@matthartley@fosstodon.org avatar

May the 4th be with you.

troberts, to Metal
@troberts@theblower.au avatar

As much as I like Heavy Metal, I would never want to be a Heavy Metal musician.

It seems, especially for the more extreme genres, to be a life sentence of performing superhuman feats night after night.

futurebird, to random
@futurebird@sauropods.win avatar

My current radical take that is driving the principal insane is the idea that to better educate our students about social media, online safety etc. we should have them set up and run their own social media on the school intranet.

And I get why. But why do we just throw young people out into the wilds of the internet where they are surrounded by ads, creepy people, criminals, etc.?

Part of why social media can be chaotic and destructive is that too many adults hope it just won’t exist.

q, to random
@q@mastodon.triggerphra.se avatar

Did you know that porn addiction isn't a thing?

https://www.psychologytoday.com/us/blog/women-who-stray/201808/science-stopped-believing-in-porn-addiction-you-should-too

"If the concept of pornography addiction were true, then porn-related problems would go up, regardless of morality, as porn use goes up. But the researchers didn’t find that. In fact, they cite numerous studies showing that even feeling like you struggle to control your porn use doesn’t actually predict more porn use. What that means is that the people who report great anguish over controlling their porn use aren’t actually using more porn; they just feel worse about it.

Having moral conflict over your porn use (PPMI) does turn out to be bad for you. But that's not because of the porn. Instead, higher levels of moral conflict over porn use predict higher levels of stress, anxiety, depression, and diminished sexual well-being, as well as religious and spiritual struggles. In one study by Perry and Whitehead, pornography use predicted depression over a period of six years, but only in men who disapproved of porn use. Continuing to use porn when you believe that it is bad is harmful. Believing that you are addicted to porn and telling yourself that you're unable to control your porn use hurts your well-being. It's not the porn, but the unresolved, unexamined moral conflict."

TL;DR: Porn addiction isn't a thing. If you feel harmed by it, you can get help, but you're not addicted to porn; you just have something else going on.

Enjoy being horny and enjoying yourself, just know that porn isn't inherently evil and corrupting like heroin.

hgrsd, to ai
@hgrsd@hachyderm.io avatar

If you are using LLMs through API tokens, or running locally, which UI do you use? I'm in the market for recommendations. Have tried llm and LibreChat but neither really stuck for me.

daniel_js_craft, to generativeAI
@daniel_js_craft@mastodon.social avatar

Fun fact about LangChain's logo. The parrot and chain link were inspired by people calling LLMs stochastic parrots. Models mimic human writing but don't really understand what they're saying.
#Langchain serves to "chain" the "parrots" together 😃
#llm #javascript #webdev

dsyme, to random
@dsyme@mastodon.sdf.org avatar

To all my new followers, please follow me over on....

Nah, just kidding, I'm here, this is the place. @SDF forever 👌😅

yvanspijk, to random
@yvanspijk@toot.community avatar

Why is 'laugh' written with -ugh while it ends with an [f] sound?

It's because the spelling 'laugh' reflects how the word was pronounced in Late Middle English, some 500 years ago.

Click the video to listen to a phonetic reconstruction of how this verb evolved from 3rd-century BC Proto-Germanic to modern-day Standard English.

The Middle English to Early Modern English stages are based on the dialect of the region of London.

video/mp4

LChoshen, to random
@LChoshen@sigmoid.social avatar

Pretrain to predict the future
At each step the model predicts n tokens
Performance: 😃
Inference time: ✖️3
Training time: same

MetaAI
Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozière, David Lopez-Paz, Gabriel Synnaeve

https://arxiv.org/abs/2404.19737
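
A rough sketch of the idea as described in the abstract: a shared trunk with n output heads, each predicting a different future token, trained with a summed cross-entropy loss (names below are illustrative, not the paper's code):

import torch.nn as nn

class MultiTokenHeads(nn.Module):
    # n small heads on top of a shared trunk; head i predicts the (i+1)-th future token
    def __init__(self, d_model, vocab_size, n_future=4):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def forward(self, hidden):  # hidden: (batch, seq, d_model) from the trunk
        return [head(hidden) for head in self.heads]

def multi_token_loss(logits_per_head, targets):  # targets: (batch, seq, n_future)
    ce = nn.CrossEntropyLoss()
    return sum(
        ce(logits.flatten(0, 1), targets[..., i].flatten())
        for i, logits in enumerate(logits_per_head)
    )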

pawamoy, to random
@pawamoy@fosstodon.org avatar

TIL about Conventional Comments: https://conventionalcomments.org/ (thanks @bswck for sharing!)

Like conventional commits, but for comments in code reviews.

pjk, to python
@pjk@www.peterkrupa.lol avatar

One thing you notice right away about LLMs is they bear a striking resemblance to that ubiquitous internet character, the reply-guy: they always have an answer, they are brimming with confidence, and they are often completely wrong.

It seemed only natural, then, to turn one into a full-time online version of itself, RGaaS (Reply-Guy as a Service), always on, always ready to reply to anything when summoned, with an “Actually,” followed by his (it’s always a “he”) opinion on what you should be thinking.

And so I built the Actually Bot.

https://www.peterkrupa.lol/wp-content/uploads/2024/05/actually_bot1.png

Basically, the way it works is that any user can reply to any Mastodon post (or I guess any post on the ActivityPub protocol) and tag the Actually Bot (@actuallybot) and it will reply with its most mediocre take on why everyone else is wrong, or at least not quite right.

The reply-guys can all move on to something else now, I have automated them out of a job.

This was a really fun project and has a lot of separate parts. First, I had to modify an LLM to give it the right “personality” for this job. To do this, I used Ollama, a tool for running open source LLMs locally, on your own machine. I used the “customized prompt feature”: You basically write a little file that Ollama uses to tweak the LLM of your choosing to give it certain characteristics. I went with the new Llama3:8b model from Meta as my base model, then put the following instructions in my modelfile:

FROM llama3
PARAMETER temperature 3
SYSTEM """You are receiving social media posts as prompts, and your job is to reply to these prompts. Please start all your replies with the word "actually". Reply as if you know more than everyone. You must recontextualize every prompt and correct or amend it, informing the prompter that they are wrong, even if ever so slightly. You write in short, sentences. Your replies must be short. Keep them to about three sentences. This is essential: keep your replies short. They must be under 500 characters."""

Then I ran the following command in the console:

ollama create actually_llama -f ./actually_llama

… and my model was ready to roll. Next, I needed a program to connect to the Ollama API to send the LLM prompts and get responses. Python was great for that, as both Ollama and Mastodon have solid Python libraries. Probably the slowest part was picking through Mastodon.py to figure out how the methods work and what exactly they return. It’s a very robust library with a million options, and fortunately it’s also extremely well documented, so while it was slow going, I was able to whack it together without too much trouble.

I’m not going to get into all the code here, but basically, I wrote a simple method that checks mentions, grabs the text of a post and the post it is replying to, and returns them for feeding into the LLM as the prompt.
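
Roughly, that method looks something like this (a sketch with illustrative names, not the exact code; the instance URL and token file are placeholders):

import re
from mastodon import Mastodon

# placeholder credentials; the real bot reads these at startup
mastodon = Mastodon(access_token="actually_bot_token.secret",
                    api_base_url="https://example.social")

def strip_html(html):
    # Mastodon statuses come back as HTML; keep just the text
    return re.sub(r"<[^>]+>", "", html)

def check_mentions():
    """Return (status, prompt_text) pairs for new mentions."""
    prompts = []
    for note in mastodon.notifications():
        if note["type"] != "mention":
            continue
        status = note["status"]
        text = strip_html(status["content"])
        if status["in_reply_to_id"]:
            parent = mastodon.status(status["in_reply_to_id"])
            text = strip_html(parent["content"]) + "\n" + text
        prompts.append((status, text))
    return prompts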

Despite my very careful, detailed, and repetitive instructions to be sure replies are no more than 500 characters, LLMs can’t count, and they are very verbose, so I had to add a cleanup method that cuts the reply down to under 500 characters. Then I wrote another method for sending that cleaned-up prompt to Ollama and returning the response.
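
The cleanup and the Ollama call are a few lines each; something like this sketch, assuming the ollama Python client (the names are illustrative, not the post's code):

import ollama

def trim_reply(text, limit=500):
    # hard-cut the model's output so it fits in a single post
    return text if len(text) <= limit else text[: limit - 1].rstrip() + "…"

def ask_actually_llama(prompt):
    # send the prompt to the local Ollama server and return the trimmed reply
    response = ollama.chat(
        model="actually_llama",
        messages=[{"role": "user", "content": prompt}],
    )
    return trim_reply(response["message"]["content"])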

The main body starts off by getting input for the username and password for login, then it launches a while True loop that calls my two functions, checking every 60 seconds to see if there are any mentions and replying to them if there are.
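
Put together, the main body is roughly (again a sketch, using the helpers above and a hypothetical login step):

import time
from getpass import getpass

def main():
    user = input("Bot account: ")
    password = getpass("Password: ")
    log_in(user, password)  # hypothetical helper wrapping Mastodon.py's login

    while True:
        # real code would remember the last notification id so it never replies twice
        for status, prompt in check_mentions():
            reply = ask_actually_llama(prompt)
            mastodon.status_post(reply, in_reply_to_id=status["id"])
        time.sleep(60)

if __name__ == "__main__":
    main()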

OK it works! Now came the hard part, which was figuring out how to get to 100% uptime. If I want the Actually Bot to reply every time someone mentions it, I need it to be on a machine that is always on, and I was not going to leave my PC on for this (nor did I want it clobbering my GPU when I was in the middle of a game).

So my solution was this little guy:

https://www.peterkrupa.lol/wp-content/uploads/2024/05/lenovo.jpg

… a Lenovo ThinkPad with a 3.3GHz quad-core i7 and 8GB of RAM. We got this refurbished machine when the pandemic was just getting going and it was my son’s constant companion for 18 months. It’s nice to be able to put it to work again. I put Ubuntu Linux on it and connected it to the home LAN.

I actually wasn’t even sure it would be able to run Llama3:8b. My workstation has an Nvidia GPU with 12GB of VRAM and it works fine for running modest LLMs locally, but this little laptop is older and not built for gaming and I wasn’t sure how it would handle such a heavy workload.

Fortunately, it worked with no problems. For running a chatbot, waiting 2 minutes for a reply is unacceptable, but for a bot that posts to social media, it’s well within range of what I was shooting for, and it didn’t seem to have any performance issues as far as the quality of the responses either.

The last thing I had to figure out was how to actually run everything from the Lenovo. I suppose I could have copied the Python files and tried to recreate the virtual environment locally, but I hate messing with virtual environments and dependencies, so I turned to the thing everyone says you should use in this situation: Docker.

This was actually great because I’d been wanting to learn how to use Docker for a while but never had the need. I’d installed it earlier and used it to run the WebUI front end for Ollama, so I had a little bit of an idea how it worked, but the Actually Bot really made me get into its working parts.

So, I wrote a Docker file for my Python app, grabbed all the dependencies and plopped them into a requirements.txt file, and built the Docker image. Then I scp’d the image over to the Lenovo, spun up the container, and boom! The Actually Bot was running!
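
The Dockerfile for a bot like this stays tiny; here's a sketch, with the file names assumed:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# requirements.txt holds Mastodon.py and ollama, among others
COPY actually_bot.py .
CMD ["python", "actually_bot.py"]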

Well, OK, it wasn’t that simple. I basically had to learn all this stuff from scratch, including the console commands. And once I had the Docker container running, my app couldn’t connect to Ollama because, it turns out, Ollama runs as a server, so I had to launch the container with a flag indicating that it shares the host’s network settings.

Then once I had the Actually Bot running, it kept crashing when people tagged it in a post that wasn’t a reply to another post. So, went back to the code, squashed bug, redeploy container, bug still there because I didn’t redeploy the container correctly. There was some rm, some prune, some struggling with the difference between “import” and “load” and eventually I got everything working.
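
For reference, the host-networking flag and the save/load dance look roughly like this (image and container names are made up):

# on the workstation: export the image and copy it over
docker save actually-bot:latest -o actually-bot.tar
scp actually-bot.tar user@lenovo:~/

# on the Lenovo: docker load (not docker import) restores the tagged image
docker load -i actually-bot.tar
docker run -d --name actually-bot --network host actually-bot:latest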

Currently, the Actually Bot is sitting on two days of uninterrupted uptime with ~70 successful “Actually,” replies, and its little laptop home isn’t even on fire or anything!

Moving forward, I’m going to tweak a few things so I can get better logging and stats on what it’s actually doing so I don’t have to check its posting history on Mastodon. I just realized you can get all the output that a Python script running in a Docker container prints with the command docker logs [CONTAINER], so that’s cool.

The other thing I’d like to do is build more bots. I’m thinking about spinning up my own Mastodon instance on a cheap hosting space and loading it with all kinds of bots talking to each other. See what transpires. If Dead Internet Theory is real, we might as well have fun with it!

https://www.peterkrupa.lol/2024/05/01/actually-building-a-bot-is-fun/

#Docker #Llama3 #Ollama #Python

image/jpeg

Quinnypig, to random
@Quinnypig@awscommunity.social avatar

AWS has said that they believe the future of GenAI is an assortment of different models. I’m inclined to believe otherwise—one (or two, for images as well as text) great models that are highly versatile.

What do you think, internet?

lowqualityfacts, to random
@lowqualityfacts@mstdn.social avatar

We made a comic about space heaters.

avatter, to random German
@avatter@mastodon.social avatar

"We find that the early release of a publication as a preprint correlates with a significant positive citation advantage of about 20.2% on average." https://arxiv.org/abs/2404.16171

sarahtaber, to random
@sarahtaber@mastodon.online avatar

Because everyone's been so good this week-

For every donation to this link, I'll post one (1) fact about farm labor. Pitchforks not included 🤷🏻‍♀️

https://secure.actblue.com/donate/mastodon

sarahtaber,
@sarahtaber@mastodon.online avatar

Why do farm workers so often wear long sleeves, hoodies, etc even when it's hot?

Short answer: being hot & sweaty is temporary. Messed-up skin (from sunburn, bug bites, sharp hairs on plants, etc) will make you itchy, in pain, sad, & unable to sleep for days : /

erikkemp, to random Dutch
@erikkemp@tukkers.online avatar

Strandbeests by engineer-artist Theo Jansen! Meet Animaris Rex!

https://www.strandbeest.com/

video/mp4
