It feels like "Sign in with Facebook" is a whole lot less common than it used to be. I'm not sure I remember the last time I saw a new service offering it, whereas "Sign in with Google" and "Sign in with Apple" still show up a whole lot.
@czottmann uhm... if you don't want to use Google anymore because of AI, then stay far away from Kagi, because they have been using it for longer (plus other reasons) https://d-shoot.net/kagi.html
The big reveal for one of the games I've been working on for a little while is happening in about 40 minutes. Excited to see it and what the reaction is going to be. (Yeah, I haven't seen the trailer internally myself yet.) You can find that happening over here: https://www.youtube.com/watch?v=vovkzbtYBC8
Signups for https://flakytest.dev have almost completely stopped, and only 3 of the 180 signups configured the pytest plugin and successfully submitted test results for processing.
None of those 3 users ever returned after that so I continue to be the only active user 🥲
@anze3db TIL about this project :) It looks cool, but I can tell you why a solution like this probably wouldn't help in our specific case, if that's useful for targeting the product audience (hint: the answer lies in "why is the test flaky?")
@anze3db so, we have a few flaky integration tests, but the issue is mainly one: the test suite is a bit weak (it doesn't respect the "wait") and tries to access page elements before the page is fully loaded. The solution we settled on is moving to a different suite and rewriting them all (also because we want to get rid of old, useless ones). Flagging a single test wouldn't be useful for us; the failures happen randomly.
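For readers wondering what "respecting the wait" means here: instead of grabbing page elements immediately, the test should poll until the page reports itself ready. The thread doesn't name the actual suite or its wait API, so this is just an illustrative sketch of the pattern in plain Python:

```python
import time


def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")


# Simulated "page load": the condition only becomes true after ~0.3s,
# the way a real page finishes rendering some time after navigation.
ready_at = time.monotonic() + 0.3
element = wait_for(lambda: time.monotonic() >= ready_at)
print(element)  # True
```

Browser suites expose the same idea natively (e.g. Selenium's `WebDriverWait` with expected conditions); the point is that a flaky "element not found" usually means the test asserted before this kind of wait, not that the feature broke.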
@anze3db the workarounds are: splitting the tests into multiple sub-jobs, so we only rerun the ones that failed rather than all of them, and having failing tests rerun automatically.
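The "rerun automatically on fail" workaround is typically a plugin option (e.g. the `--reruns` flag from pytest-rerunfailures, or pytest's built-in `--lf` to rerun only the last failures). Conceptually it boils down to a retry wrapper; a toy sketch, not the actual mechanism any plugin uses:

```python
import functools


def retry(times=3):
    """Rerun a flaky test function up to `times` attempts before failing."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except AssertionError as exc:
                    last_exc = exc
            raise last_exc
        return wrapper
    return decorator


calls = {"n": 0}


@retry(times=3)
def flaky_test():
    # Fails on the first attempt, passes on the second, like a page
    # that is slow to load only some of the time.
    calls["n"] += 1
    assert calls["n"] >= 2


flaky_test()
print(calls["n"])  # 2
```

The trade-off is the one the thread circles around: retries hide the flakiness instead of fixing the missing waits, which is why the rewrite is the real plan.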
@anze3db it would mean muting over 90% of them. They fail randomly, depending on which page takes a tiny bit longer to load (with the test suite not respecting the wait command). There is a plan to rewrite them all with a different suite, eliminate the ones that aren't useful, and make the code base more maintainable. I wouldn't risk muting them, because they have often caught very important breaking changes we were about to introduce in our API. They have literally saved us.
@webology totally random question: do you keep your own copy (on GitHub, wherever...) of the content you publish to micro.blog? I've been thinking about starting one, in parallel with my main blog, for shorter stuff, but I'm wondering how to maintain my own copy (yes, I'm aware we can export data, but still...). Thanks!
@webology thanks! But if this is what they call a "backup", I think I'll have to come up with my own solution 🤔 (I want my source Markdown files, not generated HTML).
I'm planning to write these posts with iA Writer (which supports posting to Micro.blog).
Maybe a simple repository will do the trick: instead of triggering the deploy + publish like I do on my normal blog, I can publish quickly and then push to GH.
But I'm not sure if and how much I will stick with it.
Many years ago - when I was very young and you were even younger - it was standard for an ISP to provide all their users with a small amount of webspace. Both Pipex and Demon offered webspace back in 1996. If my hazy memory is correct, they offered a few megabytes - more than […]