I'm adding relevant blogs randomly as I come across them. I shuffle the order every few hours.
If I add so many blogs that the build time ratchets up too far for my liking, I may move the blog shuffle to before the RSS fetching, and limit how many blogs are shown... I don't know.
I want it to be a discovery tool, not a "keep up with your RSS feeds" tool; that's what RSS readers are for :)
@sarajw I was thinking: statically generate an array of blog posts and put it in a script tag, with some JS that randomly shuffles the array and then generates the DOM list. You’d still have to regenerate the site every day or so to fetch new posts, but at least not every hour. But perhaps I’m misunderstanding what you’re doing 😅
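A minimal sketch of that idea, assuming the static site generator emits the array at build time (the posts here are made-up placeholders, and the element id `bloglist` is hypothetical):

```javascript
// Stand-in for the statically generated array baked into the script tag at build time.
const posts = [
  { title: "Post A", url: "https://example.com/a" },
  { title: "Post B", url: "https://example.com/b" },
  { title: "Post C", url: "https://example.com/c" },
];

// Fisher–Yates shuffle: unbiased, and runs on every page load
// instead of requiring an hourly rebuild.
function shuffle(arr) {
  const a = arr.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Build the list markup; in the browser you'd assign this to e.g.
// document.getElementById("bloglist").innerHTML.
const listHtml = shuffle(posts)
  .map((p) => `<li><a href="${p.url}">${p.title}</a></li>`)
  .join("\n");
```

The daily/periodic rebuild would still refresh the `posts` array itself, but the ordering stops being a build-time concern.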
today I'm thinking a bit about the tradeoffs of using git rebase. I think the goal of rebase is to have a nice linear commit history, which is something I like.
but what are the costs of using rebase? what problems has it caused for you in practice? I'm really only interested in specific bad experiences you've had here -- not opinions or general statements like “rewriting history is bad”
@b0rk I’m a fan of rebasing to make a private branch’s commits look nice. But as a PR reviewer, it can be a pain when commits keep changing after you’ve already looked at them. So I try to stop rebasing once review has started (except for rebasing onto main).