
blog, to CSS

CSS only colour-scheme selector - no JS required
https://shkspr.mobi/blog/2023/10/css-only-colour-scheme-selector-no-js-required/

Yesterday I wrote about a lazy way to implement a manual dark mode chooser. Today I'll show you a slightly more sensible way to do it. It just uses CSS, no need for JavaScript.

Here's a scrap of HTML which presents a dropdown for a user to choose their colour scheme:

<select id="colour-mode">
    <option value="">Theme Selector</option>
    <option value="dark">Dark Mode</option>
    <option value="light">Light and Bright</option>
    <option value="eink">eInk</option>
</select>


Modern CSS gives us a way to see which of those options have been chosen by the user:

#colour-mode option:checked[value="dark"]

That can be combined with the new :has() selector. This allows the CSS to say "if the body has a checked element with this specific value, then apply these CSS rules" - like so:

body:has( > #colour-mode option:checked[value="dark"] ) {
    background: ...
}

OK! So, depending on which option the user selects, the CSS can be made to do all sorts of weird and wonderful things. But, that will require...

CSS variables

Here's some CSS which will set various colours for light mode and dark mode. Then it sets the default colours to the light mode:

:root {
    /* Light mode variables */
    --light-background: beige;
    --light-text: #000;
    --light-bg: #0F0;

    /* Dark mode variables */
    --dark-background: #000;
    --dark-text: #FFF;
    --dark-bg: #FF0;

    /* Default variables */
    --background: var(--light-background);
    --text: var(--light-text);
    --bg: var(--light-bg);
}

So the rest of the CSS can have things like:

p {
    color: var(--text);
}

The <p> will be set to use the colour from --text which, at first, is the same as --light-text.

That can be changed with both :has() and :checked like so:

body:has( > #colour-mode option:checked[value="dark"] ) {
    --text: var(--dark-text);
}

That says "if the body element has a child with the ID colour-mode, and that element has a checked option with the value dark, then set the --text variable to the value of --dark-text".

String enough of those together and that will make a pretty capable theme switcher!
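For reference, here's roughly what the full set looks like strung together. The dark and light rules just reuse the variables defined earlier; the eInk colours are made up for illustration:

body:has( > #colour-mode option:checked[value="dark"] ) {
    --background: var(--dark-background);
    --text: var(--dark-text);
    --bg: var(--dark-bg);
}
body:has( > #colour-mode option:checked[value="light"] ) {
    --background: var(--light-background);
    --text: var(--light-text);
    --bg: var(--light-bg);
}
/*  Illustrative eInk values - greyscale, high contrast  */
body:has( > #colour-mode option:checked[value="eink"] ) {
    --background: #FFF;
    --text: #000;
    --bg: #FFF;
}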

User Preferences

Some browsers will know whether their user has a preference for dark or light mode. Perhaps the user has set their phone to dark mode, or flipped a switch somewhere for light mode. This preference can be detected in CSS using the prefers-color-scheme media query.

A site can set the default value of the colour variables like so:

@media (prefers-color-scheme: light) {
    :root {
        --text: var(--light-text);
    }
}
@media (prefers-color-scheme: dark) {
    :root {
        --text: var(--dark-text);
    }
}


Caveats

A few issues:

  1. This doesn't yet work in Firefox, even if you enable layout.css.has-selector.enabled in about:config. Support is coming soon™.
  2. This doesn't remember your user's choice. So they'll need to toggle it on every page load.
  3. Choosing a sensible colour scheme means you should test it for accessibility.

Saving the selection

Sadly, saving the selection does require JS. The code below uses localStorage rather than a cookie. If a user doesn't have JS enabled, this will gracefully degrade: the theme will follow the user's preferences, and switching will still work but won't be remembered.

//  Get the theme switcher. The ID must match the <select> element above.
var themeSelect = document.getElementById('colour-mode');
if (themeSelect) {
    //  If a theme has previously been selected, restore it
    if (localStorage.theme) {
        themeSelect.value = localStorage.theme;
    }
    //  Listen for any changes and save them
    themeSelect.addEventListener('change', function(event){
        localStorage.setItem('theme', themeSelect.value);
    });
}

Further Reading

https://shkspr.mobi/blog/2023/10/css-only-colour-scheme-selector-no-js-required/

blog, to Finance

Free Open Banking API using Nordigen / GoCardless
https://shkspr.mobi/blog/2023/10/free-open-banking-api-using-nordigen-gocardless/

A few weeks ago I was moaning about there being no OpenBanking API for personal use. Thankfully, I was wrong!

As pointed out by Dave, a company called Nordigen was set up to provide a free Open Banking service. It was quickly bought by GoCardless, who said:

We believe access to open banking data should be free. We can now offer it at scale to anyone - developers, partners and Fintechs - looking to solve customer problems.

And, I'm delighted to report, it works! As a solo developer you can get access to your own data for free via the GoCardless APIs.

Screenshot from GoCardless. 1. Test with your own data. See how the product flow would look like for your users and what data is available. 2. Set up the API. Follow our documentation to set up the API and start collecting bank account data. 3. Customise the user interface. Pay as you go. Make the user agreement flow for your customers to match your brand. 4. Ready to go live? Need help and advice to set up faster?

You'll get back a JSON file from each of your banks and credit cards with information like this in it:

{   "bookingDate": "2023-07-11",   "bookingDateTime": "2023-07-11T20:52:05Z",   "transactionAmount": {       "amount": "-2.35",       "currency": "GBP"   },   "creditorName": "GREGGS PLC",   "remittanceInformationUnstructured": "Greggs PLC, London se1",   "merchantCategoryCode": "5812",   "internalTransactionId": "123456789"}

For foreign exchange, transactions look like this:

{   "bookingDate": "2023-10-01",   "bookingDateTime": "2023-10-01T21:41:40Z",   "transactionAmount": {      "amount": "-0.82",      "currency": "GBP"    },    "currencyExchange": {      "instructedAmount": {         "amount": "1.00",         "currency": "USD"      },      "sourceCurrency": "USD",      "exchangeRate": "1.2195",      "targetCurrency": "GBP"   },   "creditorName": "KICKSTARTER.COM",   "remittanceInformationUnstructured": "Kickstarter.com, Httpswww.kick, 1.0 U.S. DOLLAR USA",   "merchantCategoryCode": "5815",   "internalTransactionId": "987654321"}

Depending on your card and the transaction type, you might also get a few more bits of metadata.

Get started at https://gocardless.com/bank-account-data/. From there, it's a case of following the quickstart guide.

A few niggles

There's a bit of bouncing around. You've got to get an API key, get the institution ID, sign in, get redirected, get an ID from the callback, then get the bank account details. And then you can get the transactions!
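To make that flow concrete, here's a rough Python sketch using the requests library. The endpoint paths are from my memory of the Bank Account Data (v2) API and may have changed, and the secrets and redirect URL are obviously placeholders - treat the whole thing as a starting point and defer to the quickstart guide.

import requests

#  Assumed base URL and endpoint paths for the GoCardless (formerly Nordigen)
#  Bank Account Data v2 API - check the official docs before relying on them.
BASE = "https://bankaccountdata.gocardless.com/api/v2"

#  1. Swap your secrets for an access token
token = requests.post(f"{BASE}/token/new/", json={
    "secret_id": "YOUR_SECRET_ID",
    "secret_key": "YOUR_SECRET_KEY",
}).json()["access"]
headers = {"Authorization": f"Bearer {token}"}

#  2. Find your bank's institution ID
institutions = requests.get(f"{BASE}/institutions/?country=gb", headers=headers).json()
institution_id = next(i["id"] for i in institutions if i["name"] == "Barclays Personal")

#  3. Create a requisition - it contains a link where you sign in to your bank
#     and then get redirected back to your (placeholder) callback URL
requisition = requests.post(f"{BASE}/requisitions/", headers=headers, json={
    "redirect": "https://example.com/callback",
    "institution_id": institution_id,
}).json()
print("Authorise at:", requisition["link"])
input("Press Enter once you have signed in and been redirected... ")

#  4. After authorising, the requisition lists your account IDs...
accounts = requests.get(f"{BASE}/requisitions/{requisition['id']}/", headers=headers).json()["accounts"]

#  5. ...and each account ID gets you the transactions
for account_id in accounts:
    tx = requests.get(f"{BASE}/accounts/{account_id}/transactions/", headers=headers).json()
    print(tx["transactions"]["booked"][:1])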

Oh, and the access token only lasts a short while, so you'll need to either re-auth or use a refresh token.

Bank authorisation only lasts 90 days, so you'll have to refresh your details every 3 months. That's standard across all open banking, but a bit of a pain.

GoCardless have pretty comprehensive bank coverage but they are missing a few which you might find useful.

Because there are so many financial institutions in there, you might find it difficult to work out which one you need to log in to. For example, if you have a Barclays Credit Card, which of these is the right one for you?

{    "id": "BARCLAYCARD_COMMERCIAL_BUKBGB22",    "name": "Barclaycard Commercial Payments",    "bic": "BUKBGB22",    "transaction_total_days": "730",    "countries": [      "GB"    ],    "logo": "https://cdn.nordigen.com/ais/BARCLAYCARD_COMMERCIAL_BUKBGB22.png"  },  {    "id": "BARCLAYCARD_BUKBGB22",    "name": "Barclaycard UK",    "bic": "BUKBGB22",    "transaction_total_days": "730",    "countries": [      "GB"    ],    "logo": "https://cdn.nordigen.com/ais/BARCLAYCARD_COMMERCIAL_BUKBGB22.png"  },  {    "id": "BARCLAYS_BUSINESS_BUKBGB22",    "name": "Barclays Business",    "bic": "BUKBGB22",    "transaction_total_days": "730",    "countries": [      "GB"    ],    "logo": "https://cdn.nordigen.com/ais/BARCLAYS_WEALTH_BUKBGB22.png"  },  {    "id": "BARCLAYS_CORPORATE_BUKBGB22",    "name": "Barclays Corporate",    "bic": "BUKBGB22",    "transaction_total_days": "730",    "countries": [      "GB"    ],    "logo": "https://cdn.nordigen.com/ais/BARCLAYS_WEALTH_BUKBGB22.png"  },  {    "id": "BARCLAYS_BUKBGB22",    "name": "Barclays Personal",    "bic": "BUKBGB22",    "transaction_total_days": "730",    "countries": [      "GB"    ],    "logo": "https://cdn.nordigen.com/ais/BARCLAYS_WEALTH_BUKBGB22.png"  },  {    "id": "BARCLAYS_WEALTH_BUKBGB22",    "name": "Barclays Wealth",    "bic": "BUKBGB22",    "transaction_total_days": "730",    "countries": [      "GB"    ],    "logo": "https://cdn.nordigen.com/ais/BARCLAYS_WEALTH_BUKBGB22.png"  },

But, overall, it's an excellent service. Now I just need to find / write something to ingest the data and do something with it!

https://shkspr.mobi/blog/2023/10/free-open-banking-api-using-nordigen-gocardless/

blog, to hacking

Book Review: The Cuckoo's Egg - Clifford Stoll
https://shkspr.mobi/blog/2023/10/book-review-the-cuckoos-egg-clifford-stoll/

[Image: Book cover - illustration of a person sat in front of a computer.]

This book is outstanding. It's the mid-1980s, you're administering a nascent fleet of UNIX boxen, and you are tasked with accounting for a 75¢ billing discrepancy.

Naturally that eventually leads into an international conspiracy involving the FBI, NSA, and an excellent recipe for chocolate chip cookies. It is a fast-paced, high-tension page-turner. There's also a sweet moral core to the story - as well as the somewhat saddening death of naïvety.

It's hard to overstate just how fun this book is. Yes, with the benefit of hindsight running unpatched machines and letting any old hippy connect to them was always going to be a security nightmare. But some of the problems faced by those early pioneers are still present today.

Default passwords, unmonitored systems, uninterested law enforcement, dictionary attacks, buggy permissions, the moral quandary of responsible disclosure - it's all in here.

Of course, there are a few bits which look pretty dated now. Especially some of the attitudes to online privacy:

“You’re not the government, so you don’t need a search warrant. The worst it would be is invasion of privacy. And people dialing up a computer probably have no right to insist that the system’s owner not look over their shoulder. So I don’t see why you can’t.”

It's also nice seeing how internecine warfare between hackers has barely evolved:

From long tradition, astronomers have programmed in Fortran, so I wasn’t surprised when Dave gave me the hairy eyeball for using such an antiquated language. He challenged me to use the C language
...
VI was predecessor to hundreds of word processing systems. By now, Unix folks see it as a bit stodgy—it hasn’t the versatility of Gnu-Emacs, nor the friendliness of more modern editors. Despite that, VI shows up on every Unix system.

There's some deep wisdom in there for any programmer to reflect on:

If people built houses the way we write programs, the first woodpecker would wipe out civilization.

I urge anyone with an interest in computer security to read it. There's a huge amount of entertaining history in there - and plenty of lessons that we still need to learn.

https://shkspr.mobi/blog/2023/10/book-review-the-cuckoos-egg-clifford-stoll/

blog, to HowTo

Rewriting WordPress's JetPack Related Posts Shortcode
https://shkspr.mobi/blog/2023/10/rewriting-wordpresss-jetpack-related-posts-shortcode/

I like the JetPack related post functionality. But I wanted to customise it far beyond what the default code allows for.

So here's how I went from this:

The old layout has three items, with small images and indistinct text.

To this:

The new layout has 4 items, each boxed off, with a larger image and more distinct text.

Documentation

The complete documentation for related posts is pretty easy to follow.

This is an adaptation of one of the examples in that documentation.

Remove the automatic placement

You can turn off the original "related posts" by adding this to your theme's functions.php:

function jetpackme_remove_rp() {
    if ( class_exists( 'Jetpack_RelatedPosts' ) ) {
        $jprp = Jetpack_RelatedPosts::init();
        $callback = array( $jprp, 'filter_add_target_to_dom' );
        remove_filter( 'the_content', $callback, 40 );
    }
}
add_filter( 'wp', 'jetpackme_remove_rp', 20 );

Add the new Related Posts

In your theme's index.php (or wherever else you like) you can add this code to insert the new related posts functionality:

if ( is_single() ) {
    echo "<section>";
    echo do_shortcode( '[jprel]' );  //  Must match the name registered with add_shortcode() below
    echo "</section>";
}

Create the new functionality

And this goes in your theme's functions.php file. I've commented it as best I can. Let me know if you need more info.

function jetpackme_custom_related() {
    //  Check that JetPack Related Posts exists
    if (
        class_exists( 'Jetpack_RelatedPosts' )
        && method_exists( 'Jetpack_RelatedPosts', 'init_raw' )
    ) {
        $output = "";
        //  Get the related posts
        $related = Jetpack_RelatedPosts::init_raw()
            ->set_query_name( 'edent-related-shortcode' )
            ->get_for_post_id(
                get_the_ID(),        //  ID of the post
                array( 'size' => 4 ) //  How many related items to fetch
            );
        if ( $related ) {
            //  Set the container for the related posts
            $output  = "<h2 id='related-posts'>The Algorithm™ suggests:</h2>";
            $output .= "<ul class='related-posts'>";
            foreach ( $related as $result ) {
                $related_post_id = $result['id'];
                //  Get the related post
                $related_post = get_post( $related_post_id );
                //  Get the attributes
                $related_post_title = $related_post->post_title;
                $related_post_date  = substr( $related_post->post_date, 0, 4 ); //  Just the year from YYYY-MM-DD
                $related_post_link  = get_permalink( $related_post_id );
                //  Get the thumbnail
                if ( has_post_thumbnail( $related_post_id ) ) {
                    $related_post_thumb = get_the_post_thumbnail( $related_post_id, 'full',
                        array(
                            "class"   => "related-post-img",
                            "loading" => "lazy" //  Lazy loading and other attributes
                        )
                    );
                } else {
                    $related_post_thumb = null;
                }
                //  Create the HTML for the related post
                $output .= '<li class="related-post">';
                $output .=    "<a href='{$related_post_link}'>";
                $output .=       "{$related_post_thumb}<p>{$related_post_title}</p></a>";
                $output .=    "<time>{$related_post_date}</time>";
                $output .= "</li>";
            }
            //  Finish the related posts container
            $output .= "</ul>";
        }
        //  Display the related posts
        echo $output;
    }
}
add_shortcode( 'jprel', 'jetpackme_custom_related' );   //  Shortcode name can be whatever you want

Bit of CSS to zhuzh it up

Feel free to add your own styles. This is what works for me.

.related-posts {
    list-style: none;
    padding: 0;
    display: inline-flex;
    width: 100%;
    flex-wrap: wrap;
    justify-content: center;
}
.related-posts > * {
    /* https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_flexible_box_layout/Controlling_ratios_of_flex_items_along_the_main_axis#combining_flex-grow_and_flex-basis */
    flex: 1 1 0;
}
.related-post {
    min-width: 10em;
    max-width: 20em;
    text-align: center;
    margin: .25em;
    border: .1em var(--color-text);
    border-style: solid;
    border-radius: var(--border-radius);
    position: relative;
    display: flex;
    flex-direction: column;
    min-height: 100%;
    overflow: clip;
}
.related-post h3 {
    font-size: 1em;
    padding-top: .5em;
}
.related-post img {
    object-fit: cover;
    height: 9em;
    width: 100%;
    border-radius: 0 1em 0 0;
    background: var(--color-text);
    display: inline-block;
}
.related-post p {
    margin: 0 .25em;
}
.related-post time {
    font-size: .75em;
    display: block;
}

ToDo

  • Use transients to store the data to prevent repeated slow API calls?
  • Perhaps some teaser text?
  • Adjust the layout so the date always floats to the bottom?

https://shkspr.mobi/blog/2023/10/rewriting-wordpresss-jetpack-related-posts-shortcode/

blog, to ai

AI isn't a drill, and your users don't want holes
https://shkspr.mobi/blog/2023/10/ai-isnt-a-drill-and-your-users-dont-want-holes/

There's a popular saying: "No One Wants a Drill. What They Want Is the Hole". It's a pithy (and broadly correct) statement. But I don't think it goes far enough. Let's apply the Five Whys method to the issue:

  • No one wants a drill. What they want is the hole.
  • No one wants a hole. What they want is a picture hook.
  • No one wants a picture hook. What they want is art hanging on the walls.
  • No one wants art hanging on the walls. What they want is a pleasant living environment.
  • No one wants a pleasant living environment. What they want is to attract a mate.

And so on. Feel free to substitute with your own anecdotes and biases.

Sure, there are some people who will buy ridiculously overpowered tools because they like gadgets. But those gadgets mostly serve as a gateway to our real needs.

Website designers often fail to appreciate that most small businesses don't want a website. They want customers. The restaurants near me have some truly dreadful websites. Broken URLs, crappy pictures, and obnoxious designs abound. I once naïvely thought that I could sell them my web design services. But that was a dead end. Those restaurants are full most nights from walk-in traffic. They don't need a snazzy book-in-advance system. While some people might prefer an online reservation form, the majority are quite happy to call on the phone to book a table. The chances of anyone ordering takeaway from an individual website are basically nil - so they're happy to go with Just Eat / Deliveroo.

And now we're seeing the same mistakes made with AI.

I saw a post recently (that I'll lightly anonymise) which said:

I think there's an opportunity to sell AI services to local businesses. Like electricians, retail, restaurants.
What services or products I could offer?
Perhaps branded ChatGTP? AI content generation? AI trained on their data? AI customer support?

This is 100% the wrong approach. AI is a drill - and people don't want drills; they want holes.

When I think of all the small businesses I interact with, what could AI bring them? The answer isn't to fling half-baked technologies at people in the hope that it transforms their businesses. It requires talking to people: finding out what their problems are, what needs fixing, and where they struggle.

Does my plumber need an AI trained on 5 years worth of data to know which customers he should prioritise? No! He's got more work than he can handle and is pretty adept at picking the lucrative jobs.

Does the local charity shop need 24/7 chatbot support which hallucinates the contents of the shelves to potential customers? No! They need some more volunteers to sort through donations.

I think AI can be useful. But it is a tool, not a product.

https://shkspr.mobi/blog/2023/10/ai-isnt-a-drill-and-your-users-dont-want-holes/

blog, to money

Why is there no OpenBanking API for personal use?
https://shkspr.mobi/blog/2023/10/why-is-there-no-openbanking-api-for-personal-use/

The recent news that MoneyDashboard is suddenly shutting down has exposed a gap in the way OpenBanking works. It is simply impossible for a user to get read-only access to their own data without using an aggregator. And there are very few aggregators around.

Why is it impossible for me to get programmatic access to my own data?

There are two interlinked reasons which I'd like to discuss.

Background

OpenBanking is a brilliant idea encoded in an excellent standard wrapped in some very complex processes and with some rather unfair limitations.

OpenBanking presents a standardised API to allow read and write access to a financial account. So I could give a smartphone app read-only access to my credit card and let it automatically tell me when I've spent more than £50 on sausage rolls this week. Or I could add all my bank accounts to one service which would let me see my net worth. Or any of a hundred ideas.

I could also connect my accounts in such a way that when Bank Account A drops below £100, an OpenBanking API request is sent to Bank Account B to transfer some money to A.

Nifty!

Access

But here's the first problem. The only way you can get access to a bank's API is if you have a licence. And you only get a licence if you're a financial institution who can prove that they have robust security controls. Which means that individuals have to go through an aggregator. Or, in OpenBanking terms, an "Account Information Service Provider".

Some OpenBanking providers will let individuals play in a "sandbox" to test out the API. There are no real accounts and no real money, it's just a way to test how the API works.

I can see why that makes sense for write access. You don't want a user's unpatched Raspberry Pi suddenly sending all their money to Russia.

And I can see why that makes sense for organisations which deal with data from multiple people. One leak and everyone is exposed.

But I'm not convinced that it makes sense to deny an individual read-only API access to their own account. Sure, I might accidentally leak my own data - but the same risk exists if I download a PDF statement from my bank.

Coverage

The second problem is that not every OpenBanking consumer will talk to every OpenBanking provider.

For example, I have an account with Coventry Building Society. They have an OpenBanking API which no one uses! They're not the largest financial institution in the UK, but have a fair few customers. And yet all the OpenBanking apps refuse to work with it.

So even if I did find an aggregator with an API, it may not work with all my financial institutions.

What's next?

As much as I love using someone else's website or app, sometimes there's nothing better than writing something bespoke.

I was using MoneyDashboard as an aggregator. I gave them read-only access to my accounts and then piggybacked off their API. But that's now closed.

Similarly, I was using Moneyed - which offered a personal OpenBanking API - but that shut down as well.

And now I can't find anything.

If you know of an Account Information Service Provider which provides read-only API access to connected accounts, please let me know!

https://shkspr.mobi/blog/2023/10/why-is-there-no-openbanking-api-for-personal-use/

blog, to anime_titties

It has never been cheaper to commit a crime
https://shkspr.mobi/blog/2023/10/it-has-never-been-cheaper-to-commit-a-crime/

The UK has what is known as a "Standard Scale" of fines for criminal acts. For example, breaking the law may incur "a fine not exceeding level 4 on the standard scale".

Part of the reasoning behind this, so I understand, is to make it simpler for the Government to update the value of those fines. Rather than having to change every law in the land - and have tedious votes on them - it's possible to change one law and have its provisions cascade down to all others. Efficient!

The modern Standard Scale was brought in by the Criminal Justice Act 1991. The fines are:

Level on the scale    Offence committed 11 April 1983 - 1 October 1992    Offence committed after 1 October 1992
1                     £25                                                 £200
2                     £50                                                 £500
3                     £200                                                £1,000
4                     £500                                                £2,500
5                     £1,000                                              £5,000

(Source: Sentencing Act 2020)

As you can see, the fines increased quite dramatically from their old value.

And, since 1992 the fines have increased by... nothing!

The Bank of England's Inflation calculator estimates that a £5,000 fine in 1992 should be approximately £10,000 in 2023.

As I understand it, the Standard Scale can be increased via a Statutory Instrument. With the stroke of a pen (and a lot of behind the scenes work) the Justice Secretary could increase these fines so they kept up with inflation.

And that nearly happened! In 2014, a draft Statutory Instrument was published. It proposed increasing the Levels 1-4 fines by 400% - so Level 4 would go from £2,500 to £10,000¹.

Quite why it never came into force, I was not able to find out.

All I know is that during this time of rapid inflation, it appears that the Government are doing nothing to make sure crime doesn't pay.

It is estimated that in 2021, 77% of all offenders received a fine. That's approximately 737,000 offenders who are paying less, in real terms, than they would have thirty years ago.

The very least the Government could do is ensure that the criminals who do get caught, charged, and convicted have to pay a fine which reflects the severity of their crime.

  1. The Level 5 fine had already become a potentially unlimited fine due to Legal Aid, Sentencing and Punishment of Offenders Act 2012 - Section 85.

https://shkspr.mobi/blog/2023/10/it-has-never-been-cheaper-to-commit-a-crime/

blog, to wordpress

The minimal-div minimal-span philosophy of this blog
https://shkspr.mobi/blog/2023/10/the-minimal-div-minimal-span-philosophy-of-this-blog/

If you've ever learned Mandarin Chinese, you'll know about "measure words". They're the sort of thing that trip up all new learners of the language. While 个 (gè) can be used as a generic measure word, using it everywhere makes you sound like an idiot (according to my old teacher). So you learn to use 个 for people, 包 for packets, and 根 for things which are long and thin.

English has a similar construct. You might say "one bunch of flowers" or "two glasses of wine" or "three bowls of soup".

You could say "one thing of flowers" or "two things of wines" or "three things of soups" but the measure words give much needed context and semantics.

If you get it wrong and say to a publican "four mugs of beer please" they'd probably know what you meant, but it could be a bit confusing.

And isn't that very much like HTML?

The language of the web gives us semantic elements for our markup. When you use <button> to draw a button on screen, the browser knows exactly what to expect, how to display the content, and what it should do. A search engine can extract meaning from the page. Users of assistive technology can be told that they're on a button. Everything is lovely!

You don't have to do that, of course. You could use <div class="button" onclick="something()"> - with enough CSS and JS you'll have something which looks and acts more-or-less like a button. But you'll lose all the semantics which make life easier for browsers, search engines, assistive technologies, and anything else that a user uses to interact with your site.

HTML has dozens of semantic elements. There's <address> for contact details, <time> for dates and times, <nav> for navigation elements.

There are two main "generic" elements. <div> for blocks of stuff, and <span> for a short run of text. I find that most modern websites over-use these elements. I want to reiterate, there's nothing illegal or immoral about doing so; the web police aren't going to take you to gaol. I personally think that writing semantic HTML is easier to maintain, easier to understand, easier for accessibility, and easier for automatically extracting meaning.

So, for a while now, I've been slowly working on my blog's theme in order to remove as many <div>s and <span>s as possible. I started out with a couple of hundred of each. I'm now down to about 35 of each - depending on which page you're on.

Here are some of the problems I found.

Semantic wrapping is complicated

I use Schema.org microdata in my HTML. For example, when a user has left a comment, I might indicate their name by using <span itemprop="name">Juliet Capulet</span>

Or, in the heading of a post I might use

This post has <span itemprop="https://schema.org/commentCount">3</span> comments and is <span itemprop="https://schema.org/wordCount">400</span> words long

Because there's no HTML element like <commentcount> or <wordcount>, I have to wrap those numbers in something if I want to indicate the semantic content behind them. I could "cheat" and use something like <var> or <output> but they're as semantically irrelevant as <span>.

Similarly, it's hard to do semantic comments for WordPress.

Some things are just things

I have a large calendar at the bottom of every page showing my archives. Each calendar is its own "thing" and so is wrapped in a <div> which controls its style and layout. That group of calendars is also its own thing.

They're inside a widget - which itself is inside of an <aside>.

I was using a <table> layout for them, but it wasn't flexible and it ballooned the size of the DOM. Perhaps I should treat them as lists? At least then they'd be easier to skip?

WordPress defaults

All built in widgets in WordPress take the following form:

<h2 class="widget-title">...</h2><div>   ...</div>

I don't think they need that extra <div>, although I can see why it might be necessary for styling.

Similarly, https://developer.wordpress.org/reference/functions/wp_nav_menu/ is encased in a <div> by default - although that can be removed.

What's Next?

I'm going to continue hacking away out of a sense of masochism. Perhaps the only person who will notice (other than me) is someone accidentally viewing the source of this page.

But I think it's worth it. There's that story about how the original Mac circuit board was laid out so that, despite never being seen by normal people, it was part of the overall æsthetic. And I think that's what I'm going for - something that I can be satisfied with.

https://shkspr.mobi/blog/2023/10/the-minimal-div-minimal-span-philosophy-of-this-blog/

blog, to history

Who said "Brits think 100 miles is a long distance - Americans think 100 years is a long time"?
https://shkspr.mobi/blog/2023/09/who-said-brits-think-100-miles-is-a-long-distance-americans-think-100-years-is-a-long-time/

It's one of those pithy little quotes which reveals so much about our two cultures. The average Briton considers anything more than a 45 minute trip a bit of a schlep, whereas Americans will seemingly drive half a day just to get some ribs from that one place they like. Conversely, I went to school opposite a church which pre-dated Columbus's invasion of North America - and I doubt that was the oldest church in the town!

But who said it first? Oh, there are a variety of sites online which will swear that it's a modern author. But let's see if we can find a quote from the last century.

Back in 1999, Neil Gaiman was interviewed in Locus Magazine and said:

England has history; Americans have geography. Which goes back to that joke, ‘America is a country where 100 years is a long time, and England is a country where 100 miles is a long way.’ Both of those things are true on many levels. There really isn’t a great English road trip tradition, because in three or four days, you’ve done it all. Whereas in America, the idea of the road trip is this magnificent long slog.

So it was already an established trope by the tail-end of the millennium - as can be seen in Billboard Magazine's October 1998 edition.

Diana Gabaldon published "Drums of Autumn" in 1996. She's often cited as the origin of the quote.

[Image: She smiled, but with a wry edge to it. 'My father always said that was the difference between an American and an Englishman. An Englishman thinks a hundred miles is a long way; an American thinks a hundred years is a long time.' Roger laughed, taken by surprise. 'Too right You'll be an American, then, I suppose?' ]

Travelling back a bit further, there's a Usenet post from 1995 where 'Mike "from the US, but my wife is from Scotland" Bartman' says:

It appears that the difference between the US and the UK is that in the UK 100 miles is a long way, and in the US 100 years is a long time... ;^)

There are quite a few Usenet posts with that phrase, but I couldn't find any before the mid-90s.

In 1992, Benjamin Jones wrote a column in "EUROPE, The Magazine of the European Community" (ISSN 0191-4545):

[Image: There's a saying that the difference between the two nations is that the British think 100 miles is a long way, while the Americans think 100 years is a long time.]

(60MB PDF Source)

But the earliest I can reliably trace it back to is a book from 1991 called "The Changing context of social-health care : its implications for providers and consumers". In it, Emily Friedman wrote a paper called "Patients as Partners: The Changing Health Care Environment" which talks, in part, about the litigious nature of American consumers of health services. She writes:

[Image: Unfortunately or fortunately, this situation will prevail for some time to come, because the United States, as a nation, is going through a delayed adolescence, and we are questioning everything. We are a very new country, even if we are an old democracy, and we don't have it all down yet. As my friend Simon, an Englishman, says, "The British think a hundred miles is a long way; Americans think a hundred years is a long time." ]

You can read the original at the Internet Archive or on Google Books.

Who was this "Simon"? Is he a real or imagined interlocutor? Did he originate this mot juste? Given the passage of time, it's probably impossible to find out.

Sadly, Emily died in 2016. It sounds like she fought tirelessly for justice - may she rest in power.

As for unreliable sources? There's this page from Jan Kučera which catalogues jokes posted to HUMOR@UGA.CC.UGA.EDU. It appears to have gone live around 1996:

[Image: From: hcate.OSBU_North@XEROX.COM
Subject: Life 3.S A collection of clean humor gathered on: 21 Nov 88. "Give me a place to sit, and I'll watch."
-- friend of Archimedes "Great leaders are rare, so I'm following myself." Guy walks into a restaurant. Orders eggs. The waitress asks "How would you like those eggs cooked?" The guy says "Hey, that would be great."
"No job too big; no fee too big!" --Dr. Peter Venkman, "Ghost-busters" Difference between US & UK... UK - 100 miles is a long distance. US - 100 years is a long time.]

That claims to be from 1988 - but there doesn't appear to be an archive of the HUMOR@UGA.CC.UGA.EDU listserv that I can find.

However, it does turn up in the venerable TextFiles:

[Image: From: fraser@engine.dec.com (Product Acoustics GroupMLO6-2/T13223-8744). Subject: Difference between US & UK... Keywords: rec_humor_cull, smirk. Date: 22 Nov 88 16:30:06 GMT. Organization: Digital Equipment Corporation. UK - 100 miles is a long distance. US - 100 years is a long time. Edited by Brad Templeton. MAIL, yes MAIL your jokes to watmath!looking!funny . Attribute the joke's source if at all possible. I will reply, mailers willing. If you MUST reply to a rejection, include a description of your joke because there is 0 chance I will remember which one it was.]

It looks like "Fraser" at DEC sent that via UUCP to (for those of you not familiar with the now obsolete "Bang Path Notation) funny at looking via University of Waterloo's math department. Whereupon Brad Templeton probably re-circulated it to Usenet's "rec.humor.funny".

And that's as far back as I can trace it. Early Internet history is either mouldering on a set of tapes somewhere or completely lost. Google Books and Archive.org don't show the phrase appearing any earlier. But perhaps your research skills are better than mine?

Can you find an earlier reference? If so, please stick a comment in the usual box.

https://shkspr.mobi/blog/2023/09/who-said-brits-think-100-miles-is-a-long-distance-americans-think-100-years-is-a-long-time/

blog, to mastodon

How far did my post go on the Fediverse?
https://shkspr.mobi/blog/2023/09/how-far-did-my-post-go-on-the-fediverse/

I wrote a moderately popular post on Mastodon. Lots of people shared it. Is it possible to find out how many different ActivityPub servers it went to?

Yes!

As we all know, the Fediverse is one big chain mail. I don't mean that in a derogatory way.

When I write a post, it appears on my server (called an "instance" in Mastodon-speak).

Everyone on my instance can see my post.

My instance looks at all my followers - some of whom are on completely different instances - and sends my post to their instances.

As an example:

  • I am on mastodon.social
  • John is on eggman_social.com
  • Paul is on penny_lane.co.uk
  • Both John and Paul follow me. So my post gets syndicated to their servers.

With me so far?

What happens when someone shares (reposts) my status?

  • John is on eggman_social.com
  • Ringo is on liverpool.drums
  • Ringo follows John
  • John reposts my status.
  • eggman_social.com syndicates my post to liverpool.drums

And so my post goes around the Fediverse! But can I see where it has gone? Well... sort of! Let's look at how.

A note on privacy

People on Mastodon and the Fediverse tend to be privacy conscious. So there are limits - both in the API and the culture - as to what is acceptable.

Some people don't share their "social graph". That is, it is impossible to see who follows them or who they follow.

Users can choose to opt-in or -out of publicly sharing their social graph. They remain in control of their privacy.

In the example above, if Ringo were to reshare John's reshare of my status - John doesn't know about it. Only the original poster (me) gets notified. If John doesn't share his social graph, it might be possible to work out where Ringo saw the status - but that's rather unlikely.

Mastodon has an API rate limit which only allows 80 results per request and 1 request per second. That makes it long and tedious to crawl thousands of results.

Similarly, some instances do not share their social data or expose anything of significance. Some servers may no longer exist, or might have changed names. It's impossible to get a comprehensive view of the entire Fediverse network.

And that's OK! People should be able to set limits on what others can do with their data. The code you're about to see doesn't attempt to breach anyone's privacy. All it does is show me which servers picked up my post. This is information which is already shown to me - but this makes it slightly easier to see.

The Result

I looked at this post of mine which was reposted over 100 times.

It eventually found its way to… 2,547 instances!

Ranging from 0ab.uk to թութ.հայ via godforsaken.website and many more!

And that's one of the things which makes me hopeful this rebellion will succeed. There are a thousand points of light out there - each a shining beacon to doing things differently. And, the more the social media giants tighten their grip, the more these systems will slip through their fingers.

The Code

This is not very efficient code - nor well written. It was designed to scratch an itch. It uses Mastodon.py to interact with the API.

It gets the instance names of all my followers. Then the instance names of everyone who reposted one of my posts.

But it cannot get the instance names of everyone who follows the users who reposted me - because:
[Image: Followers from other servers are not displayed. Browse more on the original profile.]

The only way to get a list of followers from a user on a different instance is to apply for an API key for that instance. Which seems a bit impractical.

But I can get the instance name of the followers of accounts on my instance who reposted me. Clear?

I can also get a list of everyone who favourited my post. If they aren't on my instance, or one of my reposters' followers' instances, they're probably from a reposter who isn't on my instance.

My head hurts.

Got it? Here we go!

import config
from mastodon import Mastodon
from rich.pretty import pprint

#  Set up access
mastodon = Mastodon(
    api_base_url=config.instance,
    access_token=config.access_token,
    ratelimit_method='pace'
)

#  Status to check for
status_id = 111040801202691232
print("Looking up status: " + str(status_id))

#  Get my data
me = mastodon.me()
my_id = me["id"]
print("You have User ID: " + str(my_id))

#  Empty sets
instances_all        = set()
instances_followers  = set()
instances_reposters  = set()
instances_reposters_followers = set()
instances_favourites = set()

#  My Followers
followers = mastodon.account_followers( my_id )
print( "Getting all followers" )
followers_all = mastodon.fetch_remaining( followers )
print( "Total followers = " + str( len(followers_all) ) )

#  Get the server names of all my followers
for follower in followers_all:
    if ( "@" in follower["acct"] ) :
        f = follower["acct"].split("@")[1]
        instances_all.add( f )
        if ( f not in instances_followers ) :
            print( "Follower: " + f )
            instances_followers.add( f )
    else :
        instances_all.add( "mastodon.social" )
        instances_followers.add( "mastodon.social" )
total_followers = len(instances_followers)
print( "Total Unique Followers Instances = " + str(total_followers) )

#  Reposters
#  Find the accounts which reposted my status
reposters     = mastodon.status_reblogged_by( status_id )
reposters_all = mastodon.fetch_remaining( reposters )

#  Get all the instance names of my reposters
for reposter in reposters_all:
    if ( "@" in reposter["acct"] ) :
        r = reposter["acct"].split("@")[1]
        instances_all.add( r )
        if ( r not in instances_followers ) :
            print( "Reposter: " + r )
            instances_reposters.add( r )
total_reposters = len(instances_reposters)
print( "Total Unique Reposters Instances = " + str(total_reposters) )

#  Followers of reposters
#  This can take a *long* time!
for reposter in reposters_all:
    if ( "@" not in reposter["acct"] ) :
        reposter_id = reposter["id"]
        print( "Getting followers of reposter " + reposter["acct"] + " with ID " + str(reposter_id) )
        reposter_followers = mastodon.account_followers( reposter_id )
        reposter_followers_all = mastodon.fetch_remaining( reposter_followers )
        for reposter_follower in reposter_followers_all:
            if ( "@" in reposter_follower["acct"] ) :
                f = reposter_follower["acct"].split("@")[1]
                instances_all.add( f )
                if ( f not in instances_reposters_followers ) :
                    print( "   Adding " + f + " from " + reposter["acct"] )
                    instances_reposters_followers.add( f )
total_instances_reposters_followers = len(instances_reposters_followers)
print( "Total Unique Reposters' Followers Instances = " + str(total_instances_reposters_followers) )

#  Favourites
#  Find the accounts which favourited my status
favourites     = mastodon.status_favourited_by( status_id )
favourites_all = mastodon.fetch_remaining( favourites )

#  Get all the instance names of my favourites
for favourite in favourites_all:
    if ( "@" in favourite["acct"] ) :
        f = favourite["acct"].split("@")[1]
        instances_all.add( f )
        if ( f not in instances_favourites ) :
            print( "Favourite: " + f )
            instances_favourites.add( f )  #  Bug fix: this originally added `r` (a reposter) by mistake
total_favourites = len(instances_favourites)
print( "Total Unique Favourites Instances  = " + str(total_favourites) )

print( "Total Unique Reposters Instances = " + str(total_reposters) )
print( "Total Unique Followers Instances = " + str(total_followers) )
print( "Total Unique Reposters' Followers Instances = " + str( len(instances_reposters_followers) ) )
print( "Total Unique Instances = " + str( len(instances_all) ) )

https://shkspr.mobi/blog/2023/09/how-far-did-my-post-go-on-the-fediverse/

blog, to aitools

A love letter to electric power tools
https://shkspr.mobi/blog/2023/09/a-love-letter-to-electric-power-tools/

When I was seven or eight, I asked Santa to bring me a set of screwdrivers for Christmas. I wanted to take apart my toys to see how they worked1. I also thought they might be useful on our upcoming holiday; if the aeroplane needed repairing mid-flight I'd be able to help2!

Santa heard my plea and delivered a set of screwdrivers3. I used them for years. A few decades later and they're still in use4 - in fact, they're used a little too often.

For years I resisted the idea of an electric screwdriver. I don't know if it was pride, stubbornness, or a misplaced sense of machismo. I had two working hands, why shouldn't I exert my raw manly power and transform them into torque? Electric screwdrivers were for wimps!

And then, one day, I saw a USB-powered electric screwdriver and thought "fuck it, why not?"

It was a revelation!

All of a sudden the little jobs I'd been putting off for ages were easy to accomplish. When I was tired from a day of DIY, it was a breeze to screw things back together. My hands didn't hurt after grappling with a stuck screw. I became a full convert.

Last week I had to saw some fence panels to length. "No worries!" I thought, "I've got a hacksaw!"

Two hours of sweating in the hot sun, and with only half the panels cut, I gave in and got an electric jigsaw5. This weekend I did the rest in about 15 minutes with minimum sweating, swearing, and injury.

Why am I like this? Why do I struggle with the hard, manual way and only then reluctantly let tools help me?

I'm like this with computers as well. When I started programming in university, I was strictly a "type it in notepad" kinda guy. I couldn't afford an IDE6. What did I need "syntax highlighting" for? Auto-complete was just for lazy programmers.

And then, one day, after banging my head against my desk once too often a class-mate induced me to switch.

The same happened with PHP. I spent ages hand-crafting things. Learning the hard way what worked and what didn't. Coming up with my own bespoke solutions until it was just too much for me to manage. And then I switched to the Symfony framework.

In one sense, it is useful to do things manually. To learn what works and what doesn't. To understand where the limits of usefulness are. To be equipped to manage if you're stuck without tools.

And, it's helpful not to prematurely optimise. The British phrase "all the gear, no idea" perfectly describes someone who grabs all the (expensive) tools without the faintest idea how they work and what to do with them7.

I'm getting better, mind you. During my MSc, I asked for advice and started using Zotero before getting too far down the manual route. That saved me a huge amount of time and heartache.

So, my plea to you - and to future me - remember that it is OK to use tools. It isn't cheating. It isn't unseemly. Sometimes, it isn't about the journey you take, it is about the destination.

  1. Sadly, I never quite mastered the art of putting them back together again. So many R.A.T.S. never worked properly again after I'd finished with them.
  2. To this day I've never heard a plane's Captain announce over the tannoy "Is there any one on board who has a screwdriver?"
  3. It is also possible that my parents thought that screwdrivers were cheaper than whatever plastic junk was currently being advertised on TV.
  4. I honestly think they're the only birthday present from my pre-teen years I still have. All the He-Man toys8 slowly went to jumble-sales.
  5. I also got a battery, extra blades, new gloves, eye protection, some masks, clamps, and a new drill. Oh, and a battery + charger. Because I am weak-willed and need all the toys.
  6. Yes, that's how old I am. We had to pay for our C++ IDEs. And the compiler cost extra.
  7. Of course, I have the opposite problem. I spend months reading reviews and micro-optimising for the perfect cost/value ratio.
  8. THEY'RE NOT DOLLS! THEY'RE ACTION FIGURES!

https://shkspr.mobi/blog/2023/09/a-love-letter-to-electric-power-tools/

blog, to technology

Book Review: Hacking Capitalism - Modeling, Humans, Computers, and Money by Kris Nóva
https://shkspr.mobi/blog/2023/09/book-review-hacking-capitalism-modeling-humans-computers-and-money-by-kris-nova/

I was saddened to hear of Kris Nóva's untimely death a few weeks ago. I had her book "Hacking Capitalism" on my eReader for several months, but hadn't got around to reading it yet. Never put these things off.

The book is a complicated but fitting legacy. It absolutely showcases Nóva's ideas, ideals, and potential. Perhaps a little overwrought in places, and a little underpowered in others. It's clear that her heart was in the right place and she was making a huge impact in the world. The staccato paragraphs and bullet point entries only underline how much she had to say and how quickly she wanted to say it.

There are a lot of pithy statements buried in the pages:

For every exciting innovation, a legacy system must rot.

And some sections ought to be tattoo'd on the inside of our eyelids:

Remember: The tech industry expects competition because the tech industry is built on capitalism.
There’s legally nothing wrong with playing the same game that the corporations are playing.

The book attempts to map cybersecurity hacking onto work/life hacking. For example, can you use social engineering tricks to find job openings?

You can start to probe for weaknesses by looking for a response. In career exploitation, this can be a response to a tweet, a LinkedIn message, or an email.

It's a clever way of looking at the world. Indeed, Nóva has an interesting perspective on the tech scene as a trans-woman:

In my opinion, masculine socialization directly leads to the ruthlessly competitive personality types that are rewarded within tech. This opinion is based on my experience as both a man and woman working in the tech industry.

I'm not sure I buy all of the pronouncements she makes though. The book does overlook that some of us working in tech are doing so for Governments, charities, and non-profits. Sure, we're all inside capitalism, but our employers aren't necessarily solely driven by a profit motive.

It's certainly a hell of a lot better than most "thought leader" / "industry insider" nonsense. It's a good way to open your eyes to some of the ways the system is stacked against people - and how you can fight back.

Kris leaves behind a huge legacy. It would have been brilliant to see how much further she could have gone.

I'll leave the last word to her:

I want to buy as much land as possible and give it back to society and to the people. I want to wake up in the morning and — even if for a single day — I want to feel free.

https://shkspr.mobi/blog/2023/09/book-review-hacking-capitalism-modeling-humans-computers-and-money-by-kris-nova/

blog, to solar

One month with a solar battery - real statistics
https://shkspr.mobi/blog/2023/09/one-month-with-a-solar-battery-real-statistics/

August is meant to be full of gloriously hot days. An endless parade of sunshine and drinks in the park. This year it seemed mostly grey, miserable, and prone to pissing it down at a moment's notice.

We all know that solar panels' efficiency wilts in the heat, but do they get a tan standing in the English rain?

At the beginning of August we installed a 4.8kWh solar battery to supplement our 5kW of solar panels.

The battery provides a CSV of readings taken every 15 minutes. It measures solar power, household usage, and battery usage. August looked like this:

A graph of the month covered in lines showing solar power and electricity usage.

Not massively helpful. But, with a little bit of 🐍 Python and 🐼 Pandas, I worked out the following:

🏠 Our home used 290kWh

We're tracking pretty close to the UK average of about 10kWh per day. Our average of 9.4kWh each day is perhaps slightly higher than normal for a 2-person household. But we work from home regularly and have a lot of hungry smarthome gadgets.

🔌 Only 15% of our electricity came from the grid

The sun doesn't shine at night. Duh! But the battery usually provides most of our power after sunset. The battery can only discharge at a maximum of 2.4kW, I think. So if we use the electric shower, oven, or other high power appliances, then we draw from the grid.

So, what did that 44kWh cost us?

💷 Normally, we'd pay £90 for August's electricity. We only paid £14!

The price you pay for electricity depends on where you are in the UK and what tariff you're on. With a mix of solar and battery, we cut our August bill by 85%.

But what's the mix between solar direct and solar delayed?

🌞 Solar gave us 46% of our electricity

About 135kWh of our monthly electricity needs were met directly from solar. That means photons hit the panels, they bounced down into the inverter, and then went straight into the wires in our walls, where they were gobbled up by laptops, TVs, and toasters.

That just leaves the battery…

🔋 Battery storage gave us 39% of our electricity

We used about 113kWh from stored solar. An average of 3.6kWh per day. Perhaps that means our 4.8kWh battery is over-specced? I'm not so sure. Some days we use a lot more than that.

Of course, not all of the solar power can get used or stored. Once the battery is full, that electricity has to go somewhere…

🔙 We sold 140kWh of solar back to the grid

Our solar power feeds into our local grid for our neighbours to use. We sell the electricity at market rates - which change every 30 minutes. This made us £13.

📈 Total cost for August's electricity? £1.

Yup! For the whole month of August, our electricity bill was £1.

(Plus the standing charge, of course!)
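If you want to produce a similar breakdown from your own battery's CSV, here's a minimal Pandas sketch. The column names (timestamp, solar_w and so on) are hypothetical - rename them to whatever your battery's export actually uses, and check whether your readings are average power or energy per interval:

import pandas as pd

#  15-minute readings from the battery; column names here are hypothetical
df = pd.read_csv("august.csv", parse_dates=["timestamp"], index_col="timestamp")

#  Assuming each column is average power in Watts over a 15 minute interval,
#  convert each reading to kWh (W x 0.25h / 1000) and sum the month
kwh = df[["solar_w", "load_w", "grid_import_w", "grid_export_w"]] * 0.25 / 1000
totals = kwh.sum()

print(f"Household usage: {totals['load_w']:.0f} kWh")
print(f"Grid import:     {totals['grid_import_w']:.0f} kWh ({totals['grid_import_w'] / totals['load_w']:.0%} of usage)")
print(f"Solar generated: {totals['solar_w']:.0f} kWh")
print(f"Exported:        {totals['grid_export_w']:.0f} kWh")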

Disclaimer

OK, time for a little bit of a sanity check.

Firstly, these data are drawn directly from the battery. It has clamps over our import and export wires to monitor what the household is doing. These are broadly accurate - I estimate less than 2% difference from what our smart meter and inverter report.

Secondly, the battery groups up the stats every 15 minutes. So, again, that's likely to introduce some errors into the data.

Thirdly, prices for both import and export can vary massively. Our export price in particular varies depending on demand.

Fourthly, these data were gathered in South-East London on an East / West split solar site. Your panels will be in a different location and will perform differently.

Fifthly, the price of panels and battery storage is high. If you can afford the up-front capital costs of an installation, I think it makes sense to do so. The payback period is usually under 10 years - and it can be considerably shorter during a time of rising energy costs.

I want more stats!

Every day at sunset, my solar panels publish their generation stats to GitLab. You can download all the data from 2020 and see how much solar generation we've had.

If you need more, I published 5 years of minute-by-minute solar generation as Open Data from our previous house. This dataset has been cited in several academic papers.

I'm considering whether to release my daily usage statistics. At the moment, it feels a little invasive. You can tell when I put the kettle on in the morning, see when I load a tumble-dryer, and calculate just how long I use the oven for. Perhaps you can even analyse the overnight fluctuations and work out what model of fridge I have. I don't think you can tell what video content I'm watching because it's hidden in the noise of my other appliances. But you could probably tell if I was home or not.

Here's a typical daily graph. What do you think you can figure out from this?

Graph with multiple lines. There's a spike about 6AM which is probably a kettle being boiled. Another near lunchtime which might be a microwave. The evening has a couple of hours of high use - which is probably a washing machine.

So I think I'll release it in a year's time. That's a decent balance between openness and privacy.

I hope you've found this blog post useful. If you have, you can support me by:

  • Switching to Octopus Energy - if you join, we both get £50. They do dynamic pricing for import and export. And, even better, they have an API so you can query your energy usage.
  • Supporting OpenBenches - it's a crowdsourced site of memorial benches run by me and my wife.
  • You can also buy me a book to read.

Thanks!

https://shkspr.mobi/blog/2023/09/one-month-with-a-solar-battery-real-statistics/

blog, to ComputerScience
@blog@shkspr.mobi avatar

Selectively Compressed Images - A Hybrid Format
https://shkspr.mobi/blog/2023/06/selectively-compressed-images-a-hybrid-format/

I have a screenshot of my phone's screen. It shows an app's user interface and a photo in the middle. Something like this:

Screenshot of a camera app on a phone. The middle is a photo, the sides show the user interface.

If I set the compression to be lossy - the photo looks good but the UI looks bad.
If I set the compression to be lossless - the UI looks good but the filesize is huge.

Is there a way to selectively compress different parts of an image? I know WebP and AVIF are pretty magical but, as I understand it, the whole image is compressed with the same algorithm and the same settings.

There are two ways to do this. The impossible way and the cheating way.

Selective Compression

In theory it should be possible to tell an image format to compress some chunks of an image with a different compression algorithm.

And yet... none of the documentation I've found shows that's possible.

GIMP's native XCF and Photoshop's PSD files work; they store separate layers, each of which can have a different filetype. I understand that TIFF and .djvu also have that capability.

But those sorts of files don't display in web browsers.

So...

Let's Cheat!

It's possible to use an SVG to embed multiple images of different formats. SVG is used as, effectively, a layout engine.

The syntax is relatively straightforward:

<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 1080 512">
   <image width="1080" height="512" x="0" y="0"
      xlink:href="data:image/jpeg;base64,..........."
   />
   <image width="1080" height="512" x="0" y="0"
      xlink:href="data:image/png;base64,..........."
   />
</svg>

That draws the JPG then draws the PNG on top of it. If the PNG has a transparent section, the JPG will show through. The JPG can be set to as low a quality as you like and the PNG remains lossless.

Here's what it looks like - click for full size:

https://shkspr.mobi/blog/wp-content/uploads/2023/06/Mixed-Compression.svg

Embedded images are Base64 encoded, which does lose some of the compression advantage. But, overall, it's smaller than a full PNG and better quality than a full JPG.
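
If you fancy rolling your own, something like this Python sketch will glue two images together in exactly this way. The filenames and the 1080×512 canvas are just assumptions to match the example markup above - it's a sketch, not the tool used for the demo.

# Build a hybrid SVG from a lossy JPEG and a lossless PNG overlay.
# Assumes photo.jpg (full frame) and ui.png (transparent wherever the
# photo should show through), both 1080x512.
import base64

def data_uri(path, mime):
    with open(path, "rb") as f:
        return "data:" + mime + ";base64," + base64.b64encode(f.read()).decode("ascii")

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" '
    'xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 1080 512">\n'
    '  <image width="1080" height="512" x="0" y="0" xlink:href="' + data_uri("photo.jpg", "image/jpeg") + '"/>\n'
    '  <image width="1080" height="512" x="0" y="0" xlink:href="' + data_uri("ui.png", "image/png") + '"/>\n'
    '</svg>\n'
)

with open("hybrid.svg", "w") as f:
    f.write(svg)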

Look, if it's stupid but it works it's not stupid.

But surely there must be a way of doing this natively?

https://shkspr.mobi/blog/2023/06/selectively-compressed-images-a-hybrid-format/

blog, to metaverse
@blog@shkspr.mobi avatar

Gadget Review: Meta Quest 2 replacement headstrap
https://shkspr.mobi/blog/2022/12/gadget-review-meta-quest-2-replacement-headstrap/

The headstrap which ships with the Meta Quest 2 is shit. It is a cheap piece of fabric, held together with velcro. It's fiddly to adjust and uncomfortable to use for longer than a few minutes.

Zuckerberg likes causing you pain.

So I purchased the cheapest upgrade strap I could find - £15 on special offer.

A VR headset strap with lots of questions.

  • It has a forehead cushion, which reduces pressure on the face.
  • The tightness is adjustable with a dial - much easier to use and more precise than the tension straps on the original.
  • The angle of the headset can be more easily adjusted.
  • Back cushions - again, to reduce pressure.
  • Top cushion - can you guess what that's for?

There are loads of straps out there - some with extra batteries, some with built-in cooling and defogging, some with extra headphones, and - of course - some with panda eyes.

On the one hand, it is nice that there's a vibrant ecosystem of accessories. On the other hand, it would be nice if Facebook Meta did some testing with human users to make their devices comfortable.

https://shkspr.mobi/blog/2022/12/gadget-review-meta-quest-2-replacement-headstrap/

blog, to Facebook
@blog@shkspr.mobi avatar

Is Open Graph Protocol dead?
https://shkspr.mobi/blog/2022/11/is-open-graph-protocol-dead/

Facebook Meta - like many other tech titans - has institutional Shiny Object Syndrome. It goes something like this:

  1. Launch a product to great fanfare
  2. Spend a few years hyping it as ✨the future✨
  3. Stop answering emails and pull requests
  4. If you're lucky, announce that the product is abandoned but, more likely, just forget about it.

Open Graph Protocol (OGP) is one of those products. The value-proposition is simple.

  • It's hard for computers to pick out the main headline, image, and other data from a complex web page.
  • Therefore, let's encourage websites to include metadata which tells our services what they should look at!

OGP works pretty well! When you share a link on Facebook, or Twitter, or Telegram - those services load the website in the background, look for OGP metadata, and display a friendly snippet.
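
To make that concrete, here's a toy sketch of what those services do behind the scenes - fetch the page and pull out the og:* tags. The URL is only an example, and real crawlers are rather more defensive than this.

# A toy link-preview crawler: fetch a page, collect its og:* meta tags,
# and print the bits you'd need for a snippet.
from html.parser import HTMLParser
from urllib.request import urlopen

class OGParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            prop = attrs.get("property") or ""
            if prop.startswith("og:"):
                self.og[prop] = attrs.get("content", "")

html = urlopen("https://shkspr.mobi/blog/").read().decode("utf-8", "replace")
parser = OGParser()
parser.feed(html)
print(parser.og.get("og:title"), parser.og.get("og:image"))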

Facebook Meta were the driving force behind OGP - and have now left it to fester.

Is OGP finished?

And, that might be fine. Facebook Meta are a small company with limited resources. They can't afford to fund standards work indefinitely. And, anyway, OGP is complete, right? It has all the tags that anyone could ever possibly want. Why does it need any improving?

Well, that's not the case. We know, for example, that Twitter have created their own proprietary OGP-like meta tags. Similarly, Pinterest have their own as well. And even Google are going their own way with Rich Snippets.

This is annoying for developers. Now we have to write multiple different bits of metadata if we want our links to be supported on all platforms.

Standards work is never "finished". Developers want to add new features. Users want to interact with new forms of content.

Tomorrow someone is going to invent a way to share smells over the Internet. How does that get represented in an Open Graph Protocol compliant manner?

<meta property="twitter:olfactory" content="C₃H₆S"> or
<meta property="facebook:nose" content="InChIKey/MWOOGOJBHIARFG-UHFFFAOYSA-N"> or
<meta property="og:smell" content="pumpkin spice"> or...

We know from bitter experience that having several mutually incompatible ways to implement something is a nightmare for developers and provides a poor user-experience.

So we create standards bodies. They're not perfect, but a group of interested folks can do the hard work to try and satisfy oppositional stakeholders.

This is my plea to Facebook Meta. If you're no longer interested in improving OGP, OK. You do you. But hand it over to people who want to keep this going. Maybe it's the W3C, or IndieWeb, or Schema.org or someone. Hell, I'm not busy, I'll take it on.

Remember, if you love something, let it go.

https://shkspr.mobi/blog/2022/11/is-open-graph-protocol-dead/

blog, to random
@blog@shkspr.mobi avatar

Liberating out-of-copyright photos from SmartFrame's DRM
https://shkspr.mobi/blog/2022/05/liberating-out-of-copyright-photos-from-smartframes-drm/

During the middle of the 20th Century, the UK's Royal Air Force took thousands of photographs of the country from above. Think of it like a primitive Google Earth.

Those photographs are "Crown Copyright". For photographs created before 1st June 1957, the copyright expires after 50 years.

Recently, the organisation "Historic England" started sharing high-resolution copies of these photos on a nifty interactive map.

Aerial Photo Explorer You can explore over 400,000 digitised photos taken from our aerial photo collections of over 6 million photographs preserved in the Historic England Archive. Use our nationally important collections of aerial photographs to explore your area. Find the place where you live or why not look for your favourite football ground, railway station or the places you visit?

But there were two problems.

Firstly, they claimed that the photographs were still under copyright. This (no doubt inadvertent) mistake was pointed out to them and was eventually corrected.

It seems that, without announcement, @HistoricEngland have dropped the claim that copyright-expired pre-June 1957 RAF images are still in copyright, from their Aerial Photo Explorer pic.twitter.com/uceK7qzQxd

— Andy Mabbett (@pigsonthewing) April 2, 2022

The second, and to my mind more troubling, problem is that the photos were "protected" using SmartFrame's Digital Restrictions Management.

SmartFrame has some useful features - it allows for high-resolution photos to be loaded "zoomed out" in lower resolution. The user can then zoom in on a portion which then gets loaded as higher resolution. That's it really. SmartFrame's main selling point is that it "brings robust image control". AKA, it uses DRM to prevent users from downloading images.

It also promises "Complete image protection". This is nonsense. If you transmit an image to a user, the user can copy that image.

Here's how easy it is to download the images which SmartFrame claims to protect.

Screenshots

The obvious flaw in SmartFrame is that users can take screenshots of the high-resolution image. Zoom in, screenshot, pan left, another screenshot, repeat.

Of course, stitching together all those images is a bit of a pain. But perfectly possible to automate if you wanted to.

Canvas Chunks

The way SmartFrame works is by loading small "chunks" of the image and then drawing them on a <canvas> element.

In your browser's network inspector, you'll see each 256x256 sub-image loading.
Screenshot of a network inspection panel. Dozens of JPEG images are being downloaded.
The images are not encrypted, so they can be saved directly. Again, it is a manual and tedious process to scrape them all and then stitch them together.
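
If you did want to automate the stitching half, something like this Pillow sketch would do it. It assumes you've already saved each 256×256 tile with its grid position in the filename - a hypothetical naming scheme, so adjust it to however you saved the chunks from the network inspector.

# Stitch pre-downloaded 256x256 chunks back into one big image.
# Assumes files named tile_<col>_<row>.jpg
import glob
import re
from PIL import Image

TILE = 256
tiles = {}
for path in glob.glob("tile_*_*.jpg"):
    match = re.search(r"tile_(\d+)_(\d+)\.jpg$", path)
    col, row = int(match.group(1)), int(match.group(2))
    tiles[(col, row)] = Image.open(path)

cols = 1 + max(c for c, _ in tiles)
rows = 1 + max(r for _, r in tiles)
full = Image.new("RGB", (cols * TILE, rows * TILE))
for (col, row), tile in tiles.items():
    full.paste(tile, (col * TILE, row * TILE))
full.save("stitched.png")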

Inspecting the network requests shows that they all use the same Accept Header of authorization/wndeym9ajvin,*/* - that appears to be common across multiple SmartFrame instances. Bad form on their part to reuse that key!

Canvas Access

It's fairly easy to download anything drawn onto a <canvas> element by running:

var c = document.getElementsByClassName("stage");
c[0].toDataURL();

However, SmartFrame have overloaded the .toDataURL() function - so it produces a warning when you try that. It's simple enough to disable their JS once the image has loaded.

Of course, the <canvas> is smaller than the full resolution image - so you may need to manually increase its size first.

It's also possible to simply right-click on the <canvas> in the inspector and copy the Base64 representation of the image:

Screenshot of the context menu showing a download option.

Putting it all together

I am indebted to Stuart Langridge for connecting all the dots. He has written and fully documented some code which is, essentially:

  1. Grab the canvas
  2. Resize it
  3. Wait several seconds for the image chunks to fully load onto the canvas
  4. Turn the canvas into a Data URL
  5. Download the data

It looks something like this:

var container = document.querySelector("div.articlePage.container");
container.style.width = "6000px";
container.style.maxWidth = "6000px";
setTimeout(() => {
  var stage = document.querySelector("canvas.stage");
  var url = document.createElement("canvas").toDataURL.call(stage);
  var a = document.createElement("a");
  a.href = url;
  a.download = "liberated.png";
  a.click();
}, 3000);

And that's it. A user can paste a dozen lines of Javascript into their browser's console and get a full-resolution PNG.

Warnings

This technique should only be used to download images which are free of copyright restrictions.

Companies should be careful before buying a DRM solution and ensure that it is fit for purpose. SmartFrame really isn't suitable as sold. Despite its grandiose claims of "Super-strong encryption" and "Multi-layered theft-prevention" - it took less than a weekend to bypass.

It is possible that SmartFrame will update their systems to defeat this particular flaw. But, thankfully, DRM can never work effectively. You can't give users a locked box and a key - then expect them to only unlock the box under the "right" circumstances.
As Bruce Schneier once said:

trying to make digital files uncopyable is like trying to make water not wet.

https://shkspr.mobi/blog/2022/05/liberating-out-of-copyright-photos-from-smartframes-drm/

blog, to android
@blog@shkspr.mobi avatar

LG killed its 360 camera after only 4 years - here's how to get it back
https://shkspr.mobi/blog/2021/11/lg-killed-its-360-camera-after-only-4-years-heres-how-to-get-it-back/

Four years ago, I reviewed the LG-R105 360 Camera. It's a pretty nifty bit of hardware. Sadly, LG have decided that they don't want to support it any more. They already got your money, so fuck you for expecting any further updates.

Here's their message:

We express a sincere gratitude for your patronage to LG 360 CAM Manager Service.
Due to changes in our operation policies, LG 360 CAM Manager Service via mobile applications will be terminated as of June 20, 2020.

Well, that's a load of bollocks, isn't it! Here's how you can continue using the camera on modern versions of Android - and connect to it on Linux.

Get The App

LG have removed it from Google Play. They could have left it there, but they didn't. But the Internet never forgets. So you can download the final version from APK Pure.

Copy the APK to your phone and install it. You can read the instructions to see how the app works.

If you're lucky, everything will just work. If not, read on…

Reset the camera

Charge the camera via USB-C. Turn it on by holding the power button for 4 seconds. After all the lights have stopped flashing, simultaneously hold down power and shutter for about 12 seconds. You'll get an assortment of flashy lights and sounds. This is the camera resetting. You may need to turn it off and on again.

Go into the app, and search for your device. Click on the device it finds.

Now, go to your phone's WiFi settings. You should see a new network called something like LGR105_123456.OSC. Connect to it.

The password will be 00123456 - so 00 plus the last 6 numbers of the Access Point name. Secure!

You can now go back to the app and use it as per normal.

Root it!

Oh yes 😁 using LGLAF you can force the camera into ADB debugging mode. You will also need to install PyUSB.

As per these instructions:

  1. Turn off camera by holding the power button until it beeps forlornly. Keep holding it down until the double LEDs on the side stop flashing.
  2. Plug a USB cable into a computer, but do not connect the camera
  3. Press and hold the shutter button while plugging the USB-C into the camera
  4. Keep holding the shutter button down until led turns blue
  5. In a terminal, type python lglaf.py --cr
  6. Type setprop persist.sys.usb.config mtp,adb
  7. Type exit and unplug camera
  8. Hold power button down until the blue LED goes off
  9. Hold power button to turn on the camera
  10. Plug camera back into USB-C
  11. On computer, type adb devices and you should see the camera

Now, using something like https://shkspr.mobi/blog/2021/10/notes-on-using-an-android-phone-as-a-webcam-on-linux/ you can connect to the camera and use it just like an Android phone!

Screenshot of an Android device with lots of debug options.

It acts just like a normal Android device - you have access to all the settings, developer mode, etc.
The Android "about phone" screen.

HTTP Requests

Once you've got access to the camera, you can turn on its WiFi and connect to your home network. As per these helpful instructions, you can then use the built-in OSC API.

For example, sending a HTTP GET to http://192.168.123.456:6624/osc/info will get you back:

{  "manufacturer": "LGE",  "model": "LG-R105",  "serialNumber": "123456",  "firmwareVersion": "R10510l",  "supportUrl": "developer.lge.com/friends",  "endpoints": {    "httpPort": 6624,    "httpUpdatesPort": 6624  },  "gps": false,  "gyro": true,  "uptime": 18,  "api": [    "/osc/checkForUpdates",    "/osc/commands/execute",    "/osc/commands/status",    "/osc/info",    "/osc/state"  ],  "apiLevel": [    1,    2  ]}

To take a photo, run:
curl -X POST http://192.168.123.456:6624/osc/commands/execute -H 'Content-Type: application/json' -d '{"name": "camera.takePicture"}'
That will save the photo to the camera's SD card.

To get a photo from the camera, run:
curl -X POST http://192.168.123.456:6624/osc/commands/execute -H 'Content-Type: application/json' -d '{"name": "camera.getLivePreview"}' --output test.jpg
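
If you'd rather not juggle curl commands, the same calls are easy to wrap in a few lines of Python with the requests library. The IP address is the same placeholder as in the examples above - substitute whatever address your camera gets on your network.

# The same OSC calls as the curl examples, wrapped in Python.
import requests

CAMERA = "http://192.168.123.456:6624"  # placeholder address

def execute(name, parameters=None):
    body = {"name": name}
    if parameters:
        body["parameters"] = parameters
    return requests.post(CAMERA + "/osc/commands/execute", json=body)

print(requests.get(CAMERA + "/osc/info").json())  # basic camera info
execute("camera.takePicture")                     # saves a photo to the SD card
preview = execute("camera.getLivePreview")        # grab a preview frame
with open("test.jpg", "wb") as f:
    f.write(preview.content)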

I haven't figured out 360 streaming (if it is even possible) but you can get a preview of one of the cameras:

To start a session, run:
curl -X POST http://192.168.123.456:6624/osc/commands/execute -H 'Content-Type: application/json' -d '{"name": "camera.startSession"}'
To start a stream, run:
curl -X POST http://192.168.123.456:6624/osc/commands/execute -H 'Content-Type: application/json' -d '{"name": "camera._startPreview", "parameters": {"sessionId": "123"}}'

You will get back:

{  "results": {    "_previewUri": "udp://:1234"  },  "name": "camera._startPreview",  "state": "done"}

Run VLC and open the network stream udp://:1234 and you'll get a low-resolution preview of what the camera is seeing.

You can see more commands on the Open Spherical Camera API page.

Full list of commands

By decompiling the APK, I was able to extract these available commands. Anything which starts with _ is a manufacturer specific command, so won't work on other OSC cameras.

  • camera._getRecordingStatus
  • camera._getThumbnail
  • camera._getVideo
  • camera._listAll
  • camera._liveSnapshot
  • camera._manualMetaData
  • camera._pauseRecording
  • camera._resumeRecording
  • camera._startPreview
  • camera._startStillPreview
  • camera._stopPreview
  • camera._stopStillPreview
  • camera._updateTimer
  • camera.closeSession
  • camera.delete
  • camera.getFile
  • camera.getImage
  • camera.getMetadata
  • camera.getOptions
  • camera.listFiles
  • camera.setOptions
  • camera.startCapture
  • camera.startSession
  • camera.stopCapture
  • camera.takePicture
  • camera.updateSession

Enjoy!

https://shkspr.mobi/blog/2021/11/lg-killed-its-360-camera-after-only-4-years-heres-how-to-get-it-back/

blog, to cs
@blog@shkspr.mobi avatar

EBCDIC is incompatible with GDPR
https://shkspr.mobi/blog/2021/10/ebcdic-is-incompatible-with-gdpr/

Welcome to acronym city!

The Court of Appeal of Brussels has made an interesting ruling. A customer complained that their bank was spelling the customer's name incorrectly. The bank didn't have support for diacritical marks. Things like á, è, ô, ü, ç etc. Those accents are common in many languages. So it was a little surprising that the bank didn't support them.

The bank refused to spell their customer's name correctly, so the customer raised a GDPR complaint under Article 16.

The data subject shall have the right to obtain from the controller without undue delay the rectification of inaccurate personal data concerning him or her.

Cue much legal back and forth. The bank argued that they simply couldn't support diacritics due to their technology stack. Here's their argument (in Dutch - my translation follows)

Dutch text and a diagram.

Bank X also explained that the current customer data management application was launched in 1995 and is still running on a US-manufactured mainframe system.
This system only supported EBCDIC ("extended binary-coded decimal interchange code"). This is an 8-bit standard for storing letters and punctuation marks, developed in 1963-1964 by IBM for their mainframes and AS/400 computers. The code comes from the use of punch cards and only contains the following characters…

(Emphasis added.)

EBCDIC is an ancient (and much hated) "standard" which should have been fired into the sun a long time ago. It baffles me that it was still being used in 1995 - let alone today.

Look, I'm not a lawyer (sorry mum!) so I've no idea whether this sort of ruling has any impact outside of this specific case. But, a decade after the seminal Falsehoods Programmers Believe About Names essay - we shouldn't tolerate these sorts of flaws.

Unicode - encoded as UTF-8 - just works. Yes, I'm sure there are some edge-cases. But if you can't properly store human names in their native language, you're opening yourself up to a lawsuit.
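
If you want to see the failure mode for yourself, here's a quick Python sketch. It uses cp037 - one of the EBCDIC code pages Python ships a codec for - as a stand-in; the bank's actual character set was apparently even more restricted, but the effect is the same.

# cp037 covers only the Latin-1 repertoire, so plenty of perfectly
# ordinary European names can't be stored in it at all.
name = "Dvořák"

print(name.encode("utf-8"))   # fine - UTF-8 can encode any character

try:
    name.encode("cp037")
except UnicodeEncodeError as error:
    print(error)              # 'ř' has no code point in this EBCDIC code page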

Source

GDPRhub - 2019/AR/1006

Dance

Reactions

Very interesting! https://t.co/bRxEem8Rem

— Marie ʕʘᴥʘʔ Julien (@mariejulien) October 20, 2021

Can't wait to take to court all the sites and other companies that decided that my having an accent in my surname should be a source of bugs (with, of course, an error message that has nothing to do with it - just so you really can't work out why it isn't working) https://t.co/ReIodsI1dh

— Grumpy Nat 🇨🇭🇧🇷🇲🇫 (@Nat_Keely) October 20, 2021

https://twitter.com/joachimesque/status/1450746564100730882

France will leave the EU just so that its civil registry and other administrations can carry on ruining someone's life because they have a tilde in their name https://t.co/i8FisgEEjD

— Lays Y. M. Farra (@LYMFHSR) October 20, 2021

Does this mean that Z̷̡̧̢̰͓̪͖̭͙̰̣̱̬̹̙̜̪̣̏̿̏̋͑́̒͑́̒̿̇̈̍̇̌͝͝a̵̡̧͍̘̮̤̙̹͙̦̙͙͖͓̥̟̦͔͒̇̊̊̔̓́͒́̌̈́̑͋̏̏̏̚͘͝͠͝l̶͉̯̱͇̭̭̉̉̈́̿͐̽̒̎̽͌̚͜ģ̸̧̛͙̩̹̰̤̱̖̘̻̪̻̮̫̟̙̲͍̰̻͕̗̫̿̆̃́͗̽̊̽̌̔̂͂̈͊̐̈́̈̈́̈̓̆͌̑́̕͜ǫ̶̢̹̥̮̟͍̔̑̔̽ can finally open a bank account? https://t.co/06cTjHxdgx

— KristoferA 🌏 (@KristoferA) October 20, 2021

Next up, I’m suing La Poste for still using ISO-8859-1 when printing labels. Poor “Frédéric” I recently sent a game to… https://t.co/Z7WuFY0QmK

— Bastien Nocera (@hadessuk) October 20, 2021

A disturbance in the Force, as if millions of bank IT workers cried out in panicked terror and then fell silent. https://t.co/H0WokiIZnu

— Michael Büker 🇺🇦 (@emtiu) October 21, 2021

https://shkspr.mobi/blog/2021/10/ebcdic-is-incompatible-with-gdpr/

blog, to CSS
@blog@shkspr.mobi avatar

Pure CSS Corner Banner
https://shkspr.mobi/blog/2021/09/pure-css-corner-banner/

Scratching my own itch. Here's how to make a "beta" ribbon in CSS.

https://shkspr.mobi/blog/wp-content/uploads/2021/09/Beta-Banner.png

Place this HTML at the end of your document:

<hr id="beta" aria-label="Warning this page is a beta.">

(Doesn't have to be <hr> - use whatever makes sense in your design.)

Then, add this CSS:

#beta {
    float: left;
    top: 1.5em;
    left: -3em;
    position: absolute; /* or fixed if you want it to always be visible */
    transform: rotate(-45deg);
    background: red;
    color: white;
    font-weight: bold;
    padding-left: 3em;
    padding-right: 3em;
    padding-top: .5em;
    padding-bottom: .5em;
    border: 0;
    margin: 0;
    height: auto;
    width: auto;
    z-index: 999999999; /* or whatever is needed to show on top of other elements */
}

#beta::before {
    content: "⚠️ BETA ⚠️";
}

You can adjust and simplify the CSS as per your requirements and your site's existing CSS.

"But," I hear you cry, "that isn't pure CSS!" You're right, of course. Luckily, there's a ✨magical✨ way this can be added with absolutely zero HTML!!

As pointed out by Mathias Bynens, you don't need any extra HTML at all. Rather than use an <hr> element, we can just append the CSS ::after pseudo-element to the <body>.

body::after {
    float: left;
    top: 1.5em;
    position: absolute;
    transform: rotate(-45deg);
    background: red;
    color: white;
    font-weight: bold;
    left: -3em;
    padding-left: 3em;
    padding-right: 3em;
    padding-top: .5em;
    padding-bottom: .5em;
    border: 0px;
    margin: 0;
    z-index: 999999999;
    content: "⚠️ BETA ⚠️";
}

But, why though?

A few reasons:

  • I didn't want the banner to be accidentally selected as text.
  • Using an <hr> feels like somewhat better semantics than yet another bloody <div>!
  • Dunno. Just seemed like a good idea at the time - and I could only find ribbons with lots of complicated stuff I didn't need.

https://shkspr.mobi/blog/2021/09/pure-css-corner-banner/

blog, to github
@blog@shkspr.mobi avatar

Sometimes a bad patch is better than no patch
https://shkspr.mobi/blog/2020/12/sometimes-a-bad-patch-is-better-than-no-patch/

Cunningham's Law states "the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."

Edent's 7th Law (My blog; my rules!) states "the best way to get an open source project to fix an issue is to send a slightly wrong Pull Request."

Let me explain...

Two years ago, I noticed an annoying bug in the markdown parser of WordPress's JetPack plugin. I did what every good open sorcerer is supposed to do - I wrote out a comprehensive bug report, including screenshots, and posted it where I thought it belonged.

I didn't know it was the wrong place, but they kindly directed me to the right place.

A year later and no-one had taken a look at the bug. I get it. It's low priority and they have hundreds of bugs coming in. So I added a reproducible test case.

Another year went by, and I asked for help. I am not an expert in JetPack's baroque structure and didn't fancy spending a few days trying to understand it. With a bit of guidance, I found the correct file and submitted a patch.

I knew the PR wasn't brilliant. It had no tests, only handled a limited subset of the problem, didn't quite conform to the project style, and was done in a bit of a rush. But, crucially, it worked. I could demonstrate that this problem could be fixed - even if I hadn't fixed it in the right way.

To be clear - this wasn't a malicious PR. I just didn't have the time or energy to dedicate to writing something perfect. I knew from experience that it's easier to fix a bug in the right way if you've seen it fixed the wrong way.

And that's what happened. The talented folks in the project - seeing that it was possible - are now creating a patch which works properly.

I've done this a few times. Spotted a bug, written an almost-working fix, and let people more clever than me worry about the details.

Don't spam open source projects with crappy PRs. Take your time to do something which works. Show that the issue is solvable. Let others with more experience finesse your code into something beautiful.

https://shkspr.mobi/blog/2020/12/sometimes-a-bad-patch-is-better-than-no-patch/

blog, to random
@blog@shkspr.mobi avatar

The biggest competitor to your digital service? The Mars Bar.
https://shkspr.mobi/blog/2020/12/the-biggest-competitor-to-your-digital-service-the-mars-bar/

This post has been languishing in my drafts folder for about a decade. It has recently become relevant again.

When I was at Vodafone, selling ringtones was our top priority. They cost almost nothing to produce, supply, and market. Yet people would pay through the nose for them. I say people. I mean kids.

We saw a huge spike in purchases just as schools finished for the day. While we didn't exactly market directly to kids, we knew that they were buying.

Way back then, marketing high-price, low-friction goods to under 18s was seen as perfectly acceptable. Ringtones weren't an addictive drug. They weren't as pointless and dangerous as loot-boxes. While some people got scammed into subscriptions, most people seemed to genuinely like having a different 25-second-long clip of a popular song every day.

One of the marketing gurus we had on staff came to talk to us about promotion.

"Who is our biggest competitor?" she asked.

We listed off the other mobile operators, the third party ringtone providers, the nascent app stores.

"No." She said. "We are in direct competition with Mars, Nestle, and Coca-Cola! Every time a kid has 25p to spare, they have a choice. They can choose to buy a chocolate bar, or they can choose to buy a ringtone. Our job is to encourage them to buy digital goods, rather than sugary treats."

The way she made it sound - it was almost like we were doing these kids a favour by saving them from a life of tooth-decay and diabetes!

Nowadays, most adverts for subscription services have a line about how their product "costs less than a posh coffee!" That's their competitor - your disposable income.

It's the same for most services. You're not competing against the other players in your industry. You're fighting for attention with Minecraft and Mario. There are only 24 hours in the day - who does the customer choose to spend it with?

But, perhaps more than that, your biggest competitor is the desire by your users to preserve their last bar of battery life.

https://shkspr.mobi/blog/2020/12/the-biggest-competitor-to-your-digital-service-the-mars-bar/

blog, to random
@blog@shkspr.mobi avatar

Scammers registering date-based domain names
https://shkspr.mobi/blog/2020/01/scammers-registering-date-based-domain-names/

Yesterday, January 2nd, my wife received a billing alert from her phone provider.
An SMS saying there's a problem with your phone bill.

Luckily, she's not with EE - because it's a pretty convincing text. That domain name is specifically designed to include the day's date.

If you're stood up on a crowded train, with your phone screen cracked, would you notice that a . is where a / should be? A quick look at the URL shows a trusted domain at the start - followed by today's date.

It starts with https:// - that means it's secure, right? Is .info even recognisable as a Top Level Domain?

Scammers know these domains get blocked pretty quickly - so there's no point registering a generic name like billing-pdf.biz only to have it burned within a day. By the time I'd fired up a VM to inspect it, major browsers were already blocking the site as suspicious.

Is there any way to stop this? No, not really. Domain names are cheap - you can buy a new .info for a couple of quid. The https:// certificate was freely provided by Let's Encrypt. The site was probably hosted somewhere cheap whose support staff are asleep when abuse reports come in from the UK.

And that's the price we pay for anyone being able to buy their own domain and run their own secure site.

Money and technical expertise used to be strong barriers to prevent people from registering scam domains. But those days are long gone. There are no technical gatekeepers to keep us safe. We have to rely on our own wits.

https://shkspr.mobi/blog/2020/01/scammers-registering-date-based-domain-names/

blog, to SmartHome
@blog@shkspr.mobi avatar

Two years of home heating data
https://shkspr.mobi/blog/2019/02/two-years-of-home-heating-data/

I have a Tado smart thermostat - part of my smarthome project. As well as letting me set the temperature from my phone, it records environmental data, and provides a handy API for me to retrieve it.

This blog post will show you why I've gathered the data, let you download the full dataset, and explain what I learned from it.

Why do this?

There's a long-standing plan to use waste-heat from a nearby supermarket to provide communal heat to our neighbourhood.

A low temperature heat main would connect directly to the chillers at the superstore via a heat recovery unit and circulate ~25° hot water around the neighbourhood. The homes will symbiotically draw on the waste heat produced by the supermarket.
A heat pump would be installed in each household, as a replacement to their existing gas boiler.
The properties will use the existing radiators to run their heating longer but at a lower temperature in order to deliver the same level of thermal comfort.
Bioregional's Rose Hill renewable energy feasibility study

In order to assess whether this is feasible, we need to understand heating demand.

Graph it up!

This graph covers 14th February 2017 to 14th February 2019. The green graph line is the temperature inside my house. The blue graph line is the external temperature as measured by a local weather sensor in Oxford. The vertical lines represent when the heating was on - with yellow, orange, and red representing low, medium, and high demand.

https://shkspr.mobi/blog/wp-content/uploads/2019/02/2-years-full-fs8.png

  • Fairly obviously, heating demand is highest in winter - but it surprised me just how late it starts.
  • Internal temperature doesn't vary as much as external.
  • Even in the depth of winter, with the heating off, the house didn't drop below 10°C. That's pretty good insulation!

Zoom!

Let's zoom in on December 2018 (tap for bigger):
https://shkspr.mobi/blog/wp-content/uploads/2019/02/December-2018-fs8.png

We were away over the holidays, so the heating was set to stay off - unless the internal reading was under 10°C. Luckily, that didn't happen!

You can also see how rapidly the house cools when the external temperature drops.

Conversely, the rise in external temperature from 5°C to 10°C barely raised the internal temperature.

Enhance!

Let's zoom in on a couple of days in December 2018.

https://shkspr.mobi/blog/wp-content/uploads/2019/02/Dec-Days-fs8.png

Here you can see the heating demand. The theory is that Tado can use the weather forecast to see how much heat it needs to generate. I can't easily assess whether it works in practice, but there are a few instances where the heating cuts off before the house reaches the target temperature - presumably because it knows the environment will provide the rest of the thermal energy.

Get The Data!

There are two datasets available. Both are being released under CC BY-SA 4.0.

The first is a CSV file of heating demand (190KB).

The second is the complete Tado output (2.5MB Zip, 25MB unzipped).
It is a series of daily JSON files which contain:

  • Internal temperature
  • External temperature
  • Internal humidity
  • Whether anyone was at home
  • What the thermostat was set to
  • If the heating was on
  • Heating demand
  • When the hot water was on

Feel free to use this data for something interesting, or to make beautiful graphs. I make no claims to its accuracy or completeness.
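
If you do grab the Zip, here's a minimal sketch for poking at it. It deliberately makes no assumptions about the field names, and the folder layout is hypothetical - adjust it to wherever you unpack the archive.

# Load every daily JSON file and show which keys each one contains.
import glob
import json

for path in sorted(glob.glob("tado-data/*.json")):   # hypothetical folder name
    with open(path) as f:
        day = json.load(f)
    keys = sorted(day.keys()) if isinstance(day, dict) else type(day).__name__
    print(path, keys)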

https://shkspr.mobi/blog/2019/02/two-years-of-home-heating-data/

blog, to Weather
@blog@shkspr.mobi avatar

Tado API Guide - updated for 2019
https://shkspr.mobi/blog/2019/02/tado-api-guide-updated-for-2019/

Tado is a brilliant smart thermostat. But their API is very poorly documented. This is an updated guide for 2019. I am indebted to Stephen C Phillips' original documentation.

Getting started

You will need:

  • A Tado (duh!)
  • Your Username (usually your email address)
  • Your Password
  • A Client Secret

Getting the client secret

I'm using this client secret:
wZaRN7rpjn3FoNyF5IFuxg9uMzYJcvOoQ8QWiIqS3hfk6gLhVlG57j5YNoZL2Rtc
This secret may change in the future. In the examples, I'll shorten it to wZa to make it easier to read. You will need to use the full length secret when running this code.

To get the current secret, you can visit https://my.tado.com/webapp/env.js and get the secret from there.

var TD = {
    config: {
        version: 'v587',
        tgaRestApiEndpoint: 'https://my.tado.com/api/v1',
        tgaRestApiV2Endpoint: 'https://my.tado.com/api/v2',
        susiApiEndpoint: 'https://susi.tado.com/api',
        oauth: {
            clientApiEndpoint: 'https://my.tado.com/oauth/clients',
            apiEndpoint: 'https://auth.tado.com/oauth',
            clientId: 'tado-web-app',
            clientSecret: 'wZaRN7rpjn3FoNyF5IFuxg9uMzYJcvOoQ8QWiIqS3hfk6gLhVlG57j5YNoZL2Rtc'
        }
    }
};

If that ever changes, you will need to open your web browser's development tools, and then look in the network tab. Then, log in to https://my.tado.com/webapp/.

You should see the token:
Debug screen of a web browser.

Get Bearer Token

These examples use the curl command on Linux.

Here's how to turn your username and password into a "Bearer Token" - this is needed for every subsequent API call:

curl -s "https://auth.tado.com/oauth/token" -d client_id=tado-web-app -d grant_type=password -d scope=home.user -d username="you@example.com" -d password="Password123" -d client_secret=wZa

The response will be:

{    "access_token": "abc",    "token_type": "bearer",    "refresh_token": "def",    "expires_in": 599,    "scope": "home.user",    "jti": "xyz-123"}

The real access_token will be very long. I've shortened it to abc to make things easier to read in these examples.

The access token expires after 600 seconds. You can either request a new one with the username and password, or use the provided refresh_token like so:

curl -s "https://auth.tado.com/oauth/token" -d grant_type=refresh_token -d refresh_token=def -d client_id=tado-web-app -d scope=home.user -d client_secret=wZa

Get your details

The next step is to get your homeId - this will also be needed for subsequent API calls:

curl -s "https://my.tado.com/api/v1/me" -H "Authorization: Bearer abc"

You'll get back your data, like this:

{    "name": "Terence Eden",    "email": "you@example.com",    "username": "your_user_name",    "enabled": true,    "id": "987654321",    "homeId": 123456,    "locale": "en_GB",    "type": "WEB_USER"}

Your homeId is what's important here. I'm going to use the example 123456 - you should use your own.

Check it all works

This request will check that you've got the right details.

curl -s "https://my.tado.com/api/v2/homes/123456" -H "Authorization: Bearer abc"

You'll get back information about your installation. I've redacted mine for privacy.

{    "id": 123456,    "name": " ",    "dateTimeZone": "Europe/London",    "dateCreated": "2015-12-18T19:21:59.315Z",    "temperatureUnit": "CELSIUS",    "installationCompleted": true,    "partner": " ",    "simpleSmartScheduleEnabled": true,    "awayRadiusInMeters": 123.45,    "usePreSkillsApps": true,    "skills": [],    "christmasModeEnabled": true,    "contactDetails": {        "name": "Terence Eden",        "email": " ",        "phone": " "    },    "address": {        "addressLine1": " ",        "addressLine2": null,        "zipCode": " ",        "city": " ",        "state": null,        "country": "GBR"    },    "geolocation": {        "latitude": 12.3456789,        "longitude": -1.23456    },    "consentGrantSkippable": true}

Get your data

OK, here's where the fun begins. This gets the data about your installation - including firmware details, device names, etc.

curl -s "https://my.tado.com/api/v2/homes/123456/zones" -H "Authorization: Bearer abc"

Here's what you get back - I've redacted some of my details.

[{    "id": 1,    "name": "Heating",    "type": "HEATING",    "dateCreated": "2015-12-21T15:46:45.000Z",    "deviceTypes": ["RU01"],    "devices": [{        "deviceType": "RU01",        "serialNo": " ",        "shortSerialNo": " ",        "currentFwVersion": "54.8",        "connectionState": {            "value": true,            "timestamp": "2019-02-13T19:30:52.733Z"        },        "characteristics": {            "capabilities": ["INSIDE_TEMPERATURE_MEASUREMENT", "IDENTIFY", "OPEN_WINDOW_DETECTION"]        },        "batteryState": "NORMAL",        "duties": ["ZONE_UI", "ZONE_LEADER"]    }],    "reportAvailable": false,    "supportsDazzle": true,    "dazzleEnabled": true,    "dazzleMode": {        "supported": true,        "enabled": true    },    "openWindowDetection": {        "supported": true,        "enabled": true,        "timeoutInSeconds": 1800    }}, {    "id": 0,    "name": "Hot Water",    "type": "HOT_WATER",    "dateCreated": "2016-10-03T11:31:42.272Z",    "deviceTypes": ["BU01", "RU01"],    "devices": [{        "deviceType": "BU01",        "serialNo": " ",        "shortSerialNo": " ",        "currentFwVersion": "49.4",        "connectionState": {            "value": true,            "timestamp": "2019-02-13T19:36:17.361Z"        },        "characteristics": {            "capabilities": []        },        "isDriverConfigured": true,        "duties": ["ZONE_DRIVER"]    }, {        "deviceType": "RU01",        "serialNo": " ",        "shortSerialNo": " ",        "currentFwVersion": "54.8",        "connectionState": {            "value": true,            "timestamp": "2019-02-13T19:30:52.733Z"        },        "characteristics": {            "capabilities": ["INSIDE_TEMPERATURE_MEASUREMENT", "IDENTIFY", "OPEN_WINDOW_DETECTION"]        },        "batteryState": "NORMAL",        "duties": ["ZONE_UI", "ZONE_LEADER"]    }],    "reportAvailable": false,    "supportsDazzle": false,    "dazzleEnabled": false,    "dazzleMode": {        "supported": false    },    "openWindowDetection": {        "supported": false    }}]

State

This command will tell you if you're home or not. Or, in other words, whether the Tado thinks you're nearby:

curl -s https://my.tado.com/api/v2/homes/123456/state -H "Authorization: Bearer abc"

This is what you'll get back if you're at home

{"presence":"HOME"}

Zones

My Tado has two "Zones". 0 is for Hot Water, 1 is for Heating. Yours may be different.

Hot Water Information

curl -s https://my.tado.com/api/v2/homes/123456/zones/0/state -H "Authorization: Bearer abc"

Here's information about your hot water:

{    "tadoMode": "HOME",    "geolocationOverride": false,    "geolocationOverrideDisableTime": null,    "preparation": null,    "setting": {        "type": "HOT_WATER",        "power": "OFF",        "temperature": null    },    "overlayType": null,    "overlay": null,    "openWindow": null,    "nextScheduleChange": {        "start": "2019-02-13T19:00:00Z",        "setting": {            "type": "HOT_WATER",            "power": "ON",            "temperature": null        }    },    "link": {        "state": "ONLINE"    },    "activityDataPoints": {},    "sensorDataPoints": {}}

Heating

It's much the same for Heating information:

curl -s https://my.tado.com/api/v2/homes/123456/zones/1/state -H "Authorization: Bearer abc"

This also gets you humidity data etc:

{    "tadoMode": "HOME",    "geolocationOverride": false,    "geolocationOverrideDisableTime": null,    "preparation": null,    "setting": {        "type": "HEATING",        "power": "ON",        "temperature": {            "celsius": 15.00,            "fahrenheit": 59.00        }    },    "overlayType": null,    "overlay": null,    "openWindow": null,    "nextScheduleChange": {        "start": "2019-02-13T17:30:00Z",        "setting": {            "type": "HEATING",            "power": "ON",            "temperature": {                "celsius": 18.00,                "fahrenheit": 64.40            }        }    },    "link": {        "state": "ONLINE"    },    "activityDataPoints": {        "heatingPower": {            "type": "PERCENTAGE",            "percentage": 0.00,            "timestamp": "2019-02-13T10:19:37.135Z"        }    },    "sensorDataPoints": {        "insideTemperature": {            "celsius": 16.59,            "fahrenheit": 61.86,            "timestamp": "2019-02-13T10:30:52.733Z",            "type": "TEMPERATURE",            "precision": {                "celsius": 0.1,                "fahrenheit": 0.1            }        },        "humidity": {            "type": "PERCENTAGE",            "percentage": 57.20,            "timestamp": "2019-02-13T10:30:52.733Z"        }    }}

Weather

Tado also provides you with data about the external weather:

curl -s https://my.tado.com/api/v2/homes/123456/weather -H 'Authorization: Bearer abc'

You get back a basic weather report for your location:

{    "solarIntensity": {        "type": "PERCENTAGE",        "percentage": 68.10,        "timestamp": "2019-02-10T10:35:00.989Z"    },    "outsideTemperature": {        "celsius": 8.00,        "fahrenheit": 46.40,        "timestamp": "2019-02-10T10:35:00.989Z",        "type": "TEMPERATURE",        "precision": {            "celsius": 0.01,            "fahrenheit": 0.01        }    },    "weatherState": {        "type": "WEATHER_STATE",        "value": "CLOUDY_PARTLY",        "timestamp": "2019-02-10T10:35:00.989Z"    }}

Controlling your home

It's possible to turn the heating and hot water on / off.

Turn Heating On

This is a PUT request

curl -s 'https://my.tado.com/api/v2/homes/123456/zones/1/overlay' -X PUT -H 'Authorization: Bearer abc' -H 'Content-Type: application/json;charset=utf-8' --data '{"setting":{"type":"HEATING","power":"ON","temperature":{"celsius":21,"fahrenheit":69.8}},"termination":{"type":"MANUAL"}}'

Just to make it easier to read, this is the JSON data that you have to PUT:

{    "setting": {        "type": "HEATING",        "power": "ON",        "temperature": {            "celsius": 21,            "fahrenheit": 69.8        }    },    "termination": {        "type": "MANUAL"    }}

If it has worked, you'll get back this response:

{    "type": "MANUAL",    "setting": {        "type": "HEATING",        "power": "ON",        "temperature": {            "celsius": 21.00,            "fahrenheit": 69.80        }    },    "termination": {        "type": "MANUAL",        "projectedExpiry": null    }}

End Manual Heating Mode

This is a simple DELETE command:

curl -s 'https://my.tado.com/api/v2/homes/123456/zones/1/overlay' -X DELETE -H 'Authorization: Bearer abc'

Turn on Hot Water

Much the same as before

curl -s 'https://my.tado.com/api/v2/homes/123456/zones/0/overlay' -X PUT -H 'Content-Type: application/json;charset=utf-8' -H 'Authorization: Bearer abc' --data '{"setting":{"type":"HOT_WATER","power":"ON"},"termination":{"type":"MANUAL"}}'

Turn off Hot Water

Again, a DELETE

curl -s 'https://my.tado.com/api/v2/homes/123456/zones/0/overlay' -X DELETE -H 'Authorization: Bearer abc' 

Historic Information

You can get a complete view of historic data with:

curl -s 'https://my.tado.com/api/v2/homes/123456/zones/1/dayReport?date=2018-02-14' -H 'Authorization: Bearer abc' 

The date at the end is in ISO8601 format. You'll receive info on internal and external temperature, humidity levels, whether the heating and hot water were on, and a few other bits and bobs.

What's Next?

There are a bunch of other things you can do with the API, like setting a schedule etc. Sadly, I don't have time to document them all. But this should be enough to get you detailed information, and basic control.

I'd love it if someone could make OpenAPI documentation for this.
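
In the meantime, here's a rough Python sketch which chains the calls above together - enough to log in, find your homeId, and read the current temperature. The credentials and the (shortened) client secret are placeholders; substitute your own, and swap zone 1 for whichever zone is your heating.

# Log in, find the homeId, and read the current temperature in zone 1.
import requests

USERNAME = "you@example.com"
PASSWORD = "Password123"
CLIENT_SECRET = "wZa..."  # the full secret from https://my.tado.com/webapp/env.js

token = requests.post("https://auth.tado.com/oauth/token", data={
    "client_id": "tado-web-app",
    "grant_type": "password",
    "scope": "home.user",
    "username": USERNAME,
    "password": PASSWORD,
    "client_secret": CLIENT_SECRET,
}).json()["access_token"]

headers = {"Authorization": "Bearer " + token}

home_id = requests.get("https://my.tado.com/api/v1/me", headers=headers).json()["homeId"]

state = requests.get(
    "https://my.tado.com/api/v2/homes/{}/zones/1/state".format(home_id),
    headers=headers,
).json()
print(state["sensorDataPoints"]["insideTemperature"]["celsius"])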

https://shkspr.mobi/blog/2019/02/tado-api-guide-updated-for-2019/
