Remember "#DeathPanels"? #SarahPalin promised us that #UniversalHealthcare was a prelude to a Stalinist nightmare in which unaccountable bureaucrats decided who lived or died based on a cost-benefit analysis of what it would cost to keep you alive versus how much your life was worth.
--
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
I've written an ActivityPub server which only posts. That's all it does. It won't record favourites or reposts. There's no support for following other accounts or receiving replies. It cannot delete or update posts, nor can it verify signatures. It doesn't have a database or any storage beyond flat files.
But it will happily send messages and allow itself to be followed.
This shows that it is totally possible to broadcast fully-featured ActivityPub messages to the Fediverse with minimal coding skills and modest resources.
I wanted to create a service a bit like Foursquare. For this, I needed an ActivityPub server which allows posting geotagged locations to the Fediverse.
I didn't want to install a fully-featured server with lots of complex parts. So I (foolishly) decided to write my own. I had a lot of trouble with HTTP Signatures. Because they are cursed and I cannot read documentation. But mostly the cursed thing.
Creating a minimum viable Mastodon instance can be done with half a dozen static files. That gets you an account that people can see. They can't follow it or receive any posts though.
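For example, the WebFinger response is just a static JSON file. A minimal sketch - the username, domain, and URLs here are invented for illustration - might look like this:

```json
{
  "subject": "acct:example@example.com",
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://example.com/example"
    }
  ]
}
```

Serve that at /.well-known/webfinger and other servers can discover where the account's metadata lives.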
I wanted to use PHP to build an interactive server. PHP is supported everywhere and is simple to deploy. Luckily, Robb Knight has written an excellent tutorial, so I ripped off his code and rewrote it for Symfony.
The structure is relatively straightforward.
/.well-known/webfinger is a static file which gives information about where to find details of the account.
/[username] is a static file which has the user's metadata, public key, and links to avatar images.
/following and /followers are also static files which say how many users are being followed / are following.
/posts/[GUID] is a directory of JSON files saved to disk - each one contains a published ActivityPub note.
/photos/ is a directory with any uploaded media in it.
/outbox is a list of all the posts which have been published.
/inbox is an external API endpoint. An ActivityPub server sends it a follow request, the endpoint then POSTs a cryptographically signed Accept message to the follower's inbox. The follower's inbox address is saved to disk.
/logs is a listing of all the messages received by the inbox.
/new is a password protected page which lets you write a message. This is then sent to...
/send is an internal API endpoint. It constructs an ActivityPub note, with attached location metadata, and POSTs it to each follower's inbox with a cryptographic signature.
That's it.
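The cursed HTTP Signatures part mostly boils down to building a "signing string" from a fixed list of headers, hashing the request body into a Digest header, and signing the lot with the account's RSA key. Here's a minimal sketch in Python (the key ID and paths are made up, and the final RSA-SHA256 step is stubbed out because it needs real key material):

```python
import base64
import hashlib
from datetime import datetime, timezone

def build_signed_headers(body: bytes, host: str, inbox_path: str, key_id: str) -> dict:
    """Build headers for a signed ActivityPub POST (draft-cavage style)."""
    # The Digest header is the base64-encoded SHA-256 hash of the request body.
    digest = "SHA-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

    # The "signing string" covers the pseudo-header (request-target)
    # plus the headers named in the Signature header, in order.
    signing_string = "\n".join([
        f"(request-target): post {inbox_path}",
        f"host: {host}",
        f"date: {date}",
        f"digest: {digest}",
    ])

    # In real code, signing_string is signed with the account's RSA private
    # key using RSA-SHA256. Stubbed out here - this is NOT a real signature.
    signature_b64 = base64.b64encode(b"rsa-sha256 signature goes here").decode()

    signature_header = (
        f'keyId="{key_id}",'
        f'headers="(request-target) host date digest",'
        f'signature="{signature_b64}"'
    )
    return {"Host": host, "Date": date, "Digest": digest, "Signature": signature_header}
```

Get any part of the signing string wrong - header order, capitalisation, a stray newline - and the receiving server silently rejects the message, which is why this step causes so much misery.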
The front-end grabs my phone's geolocation and shows the 25 nearest places within 100 metres. One click and the page posts to the /send endpoint, which then publishes a message saying I'm checked in. It is also possible to attach a short message and a single photo with alt text to the post.
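The "nearest places within 100 metres" lookup is a straightforward great-circle distance calculation. A sketch of the idea (the place names and coordinates are invented for illustration):

```python
from math import asin, cos, radians, sin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in metres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius of 6,371 km

def nearest_places(places, lat, lon, max_m=100, limit=25):
    """Return up to `limit` places within `max_m` metres of (lat, lon), nearest first."""
    with_distance = [(distance_m(lat, lon, p["lat"], p["lon"]), p) for p in places]
    in_range = [(d, p) for d, p in with_distance if d <= max_m]
    return [p for d, p in sorted(in_range, key=lambda pair: pair[0])[:limit]]
```
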
There's no database. Posts are saved as JSON documents. Images are uploaded to a directory. It is single-user, so there is no account management.
I've raised an issue on Mastodon to see if they can support showing locations in posts. Hopefully, one day, they'll allow adding locations and then I can shut this down.
The code needs tidying up - it is very much a scratch-my-own-itch development. Probably riddled with bugs and security holes.
Making some biscotti, but I’ve taken a drastic flavoring turn: they’re heavily dosed with fresh ground cardamom seeds and a few drops of rose essence. Otherwise they’re true to form, flour, sugar, almonds, pistachios. #ItalianFood #persianFood #whatIsHappening #Homemade #weirdCooking #what
Last week I attended an unofficial discussion group about the future of AI in Government. As well as the crypto-bores who have suddenly pivoted their "expertise" into AI, there were lots of thoughtful suggestions about what AI could do well at a state level.
Some of it is trivial - spell check is AI. Some of it is a dystopian hellscape of racist algorithms being confidently incorrect. The reality is likely to be somewhat prosaic.
Although I'm no longer a civil servant, I still enjoy going to these events and saying "But what about open source, eh?" - then I stroke my beard in a wise-looking fashion and help facilitate the conversation.
For many years, my role in Cabinet Office and DHSC was to shout the words "OPEN SOURCE" at anyone who would listen. Then patiently demolish their arguments when they refused to release something on GitHub. But I find myself somewhat troubled when it comes to AI models.
Let's take a theoretical example. Suppose the Government trains an AI to assess appeals to, say, benefits sanctions. An AI is fed the text of all the written appeals and told which ones are successful and which ones aren't. It can now read a new appeal and decide whether it is successful or not. Now let's open source it.
For the hard of thinking - this is not something that exists. It is not official policy. It was not proposed as a solution. I am using it as a made-up example.
What does it mean to open source an AI? Generally speaking, it means releasing some or all of the following.
The training data.
The weights assigned to the training data.
The final model.
I think it is fairly obvious that releasing the training data of this hypothetical example is a bad idea. Appellants have not consented to having their correspondence published. It may contain deeply personal and private information. Releasing this data is not ethical.
Releasing how the data is trained is probably fine. It would allow observers to see what biases the model has encoded in it. Other departments could use the code to train their own AI. So I (cautiously) support the opening of that code.
But training weights without the associated data is kind of useless. Without the data, you're unable to understand what's going on behind the scenes.
Lastly, the complete model. Again, I find this problematic. There are two main risks. The first is that someone can repeatedly test the model to find weaknesses. I don't believe in "security through obscurity" - but allowing someone to play "Groundhog Day" with a model is risky. It could allow someone to hone their answers to guarantee that their appeal would be successful. Or, more worryingly, it could find a lexical exploit which can hypnotise the AI into producing unwanted results.
Even if that weren't a concern, it appears some AI models can be coerced into regurgitating their training data - as discovered by the New York Times:
The complaint cited examples of OpenAI’s GPT-4 spitting out large portions of news articles from the Times ... It also cited outputs from Bing Chat that it said included verbatim excerpts from Times articles.
- NY Times copyright suit wants OpenAI to delete all GPT instances
Even if a Government department didn't release its training data, those data are still embedded in the model - which may be coaxed into reconstructing them. So any sensitive or personal training data might leak anyway.
Once again, to be crystal clear, the system I am describing doesn't exist. No one has commissioned it. This is a thought experiment by people who do not work in Government.
So where does that leave us?
I am 100% a staunch advocate for open source. Public Money means Public Code. Make things open: it makes things better.
But...
It seems clear to me that releasing training data is probably not possible - unless the AI is trained on data which is entirely safe / legal to make public.
Without the training data, the way it is trained is of limited use. It should probably be opened, but would be hard to assess.
The final model can only be safely released if the training data is safe to release.
I want to live in a world where the data and algorithms which rule the world are transparent to us. There will be plenty of AI systems which can and should be completely open - nose-to-tail. But there will be algorithms trained on sensitive data - and I can't see any safe, legal, or moral way of opening them.
Again, I want to stress that this particular example is a figment of my imagination. But at some point this will have to be reckoned with.
The venerable NaNoWriMo is a self-directed challenge. To wit - can you write a 50,000 word novel in the month of November? It doesn't have to be a good novel. You just need to complete it. 50k words over 30 days is 1,667 words per day. If you can type at about 20 Words Per Minute, then you can bash out a novel in 90 minutes per day.
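The arithmetic above can be checked in a couple of lines:

```python
import math

words_total = 50_000
days = 30
words_per_day = math.ceil(words_total / days)  # 1,667 words per day
typing_wpm = 20
minutes_per_day = words_per_day / typing_wpm   # roughly 83 minutes - call it 90
print(words_per_day, round(minutes_per_day))
```
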
I know that I can bash out a novel in half a month given sufficient motivation. I have dozens of stories that I want to tell. I finally understand why authors complain about their characters not doing what they need them to do. Trying to engineer a nifty plot point is tougher than I thought. It's fascinating to write characters you don't like - and it can be hard to give them a suitable comeuppance. Stories I thought would be short went on far too long. Being clever rarely works. The thundercrack of realising exactly how something is going to work is brilliant.
But, most importantly, I can commit to a creative challenge, execute it, and complete it.
I've loved the feedback people have given - good and bad. I don't think I want to try and publish it as a "real" book. But we'll see.
Throughout September and October, I spent some time planning out the bones of my book. I wrote titles for chapters and gave each a very vague synopsis. If I had a thought about a plot-point, I scribbled it down. This is similar to my algorithm to write an assignment. A paragraph of 100 words means that you only have to write 17 paragraphs per day. If your chapter has a beginning, middle, and end then you only need to write 6 paragraphs for each.
I also went to a NaNoWriMo "Write In" during October. It was kind of nice to sit with others and chat about our story ideas. It's also harder to doss about on the Internet when you're surrounded by people typing.
I mostly wrote in plain-text. When I did use something like Google Docs, I got distracted by its spell-check and (often erroneous) grammar suggestions. I found it incredibly important to get into the flow. Running on huge paragraphs without stopping to think if I'd spelled "obstreperous" correctly. All of that can be saved for editing. The most important thing is to get the story out.
I realise how privileged I am to have a couple of hours each day to write. And I don't mean to suggest that you should feel bad if you don't. But the nice thing about writing is that there are no short-cuts. I cannot teach you "one weird trick that authors hate". You literally have to sit at the keyboard and fling your fingers at it until the words are on the page.
I suppose the only "trick" is not caring too much about the end result while you're writing. Once the words are out, it's OK to go back and fix all your mistakes.
I think so! It's fun writing short stories. They're an interesting way to examine what I think about the world. Perhaps next year I will try to turn one of them into a full length novel.
I should probably read more about writing and attend some of the workshops run by published authors. It might also be useful to get beta-readers to commit to giving me feedback on each chapter.
Would I like to be the next Andy Weir and transform my blog into a best-seller and then a movie? Yes, obviously. But I'd rather be realistic about what I can achieve and how I can maximise the fun I have.
Anyway, you can read Tales of the Algorithm online - and I'd love to know what you think of it.
"The truth is, #capitalists hate #capitalism. Inevitably, the kind of person who presides over a giant corporation and wields power over millions of lives - workers, suppliers and customers - believes themselves to be uniquely and supremely qualified to be a wise dictator.
For this kind of person, #competition is "wasteful" and distracts them from the important business of making everyone's life better by handing down unilateral - but wise and clever - edicts. Think of #PeterThiel's maxim, "competition is for losers."
That's why giant companies love to merge with each other, and buy out nascent competitors. By rolling up the power to decide how you and I and everyone else live our lives, these executives ensure that they can help us little people live the best lives possible....."
What a delightful defense from abusive telemarketing efforts, @pluralistic :
"Jolly Roger sells different personas: "Whitebeard" is a confused senior who keeps asking the caller's name, drops nonsequiturs into the conversation, and can't remember how many credit-cards he has. "Salty Sally" is a single mom with a houseful of screaming, demanding children who keep distracting her every time the con artist is on the verge of getting her to give up compromising data."
My Linux laptop used to suspend perfectly. I'd close the lid and it would go to sleep. Open it up, it would spring to life - presenting me with a password screen. But, some time in the last few months, it has stopped doing that.
If I close the lid, it keeps running. This is unhelpful.
If I manually run the suspend command - systemctl suspend - the laptop blanks the screen then immediately turns it back on at the lock screen. It doesn't suspend.
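For anyone else debugging this, one thing worth checking (purely a suggestion on my part, not a confirmed fix) is systemd-logind's lid-switch handling, which lives in /etc/systemd/logind.conf. These are the stock defaults - uncomment and adjust, then restart systemd-logind:

```ini
# /etc/systemd/logind.conf - lid behaviour is controlled here.
#HandleLidSwitch=suspend
#HandleLidSwitchExternalPower=suspend
#HandleLidSwitchDocked=ignore
```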
I know that suspend physically works - because running any of these other commands does properly suspend the machine. But powering it back up goes straight to the desktop - no lock screen!
I went down a bit of a rabbit hole, following lots of suggestions from various people on the Internet. None of these helped me - but they may be useful pointers to you.
I tried disabling everything in . I couldn't get PXSX to be disabled. But even with everything else off, the suspend didn't work.
Forget the old woman who lived in a shoe: A Michigander in her mid-thirties made a home in a Family Fare supermarket sign for nearly a year, local outlet MLive reported. The unnamed woman lived rent-free and went unnoticed by shoppers and workers until a construction crew working on repairs to the building’s roof stumbled upon her humble abode two weeks ago, Midland Police Officer Brennon Warren told the outlet. “She essentially made it home,” Warren said, describing how officers discovered a cozy apartment complete with desk, houseplant, computer printer, coffeemaker, and a cubbyhole of food. It’s unclear why and how the woman chose to build her nest there; police said she’s employed and turned down housing assistance when offered. When she was discovered, some of the store workers said they recognized her from around the property and that she occasionally seemed to vanish into thin air. They now have a new name for the legendary eave-dweller: “The Roof Ninja.” Authorities and Family Fare are working to help her find a new home.
This picture is of five different chat apps
YouTube loads drastically slower if it thinks you are using Mozilla Firefox [Video] (9to5google.com)
Google appears to be intentionally slowing down YouTube on Mozilla Firefox, as a new video clearly demonstrates.
Official SoloBoardGaming Discussion - Mage Knight Board Game (2011)