Incidentally, this reminds me how awfully inadequate are the GitLab moderation and anti-spam measures; to this day, you still can't remove spam comments from snippets; you can't limit emoji reactions to at least remove the troll ones; and you can't block known bad actors, only report their accounts.
After 10+ years of using a vertical mouse, I switched back to a trackball. Sadly, Logitech does not make the optical TrackMan any more, so I had to settle for an M575.
In retrospect, we should have figured out that Jia Tan was a plant from the fact that they showed up to do releases. In 20+ years of contributions to FLOSS projects I haven't found anybody willing to do the same.
When Netflix’s “Three Body Problem” slows down and remembers that science fiction is about characters instead of plot receptacles and drama machines, it’s actually not that bad.
It’s still about as subtle as a two-by-four to the back of your head, of course; even the needle drops are so in-your-face that you’ll have to file a restraining order.
After you’ve spent a few years reading code in free software projects, you start to recognise the individual style of a few developers from their use of indentation, white space, comments, and yes: even bugs.
After nearly a year of light maintenance, I’ve finally managed to spend some time cleaning up json-glib: mopped up the build system, added the copyright and licensing metadata, and did some spring cleaning of the internals…
Why on earth would you design a local IPC/RPC mechanism and use JSON, of all the stupid serialisation formats, as the payload?
JSON is terrible at scale; it's wildly inefficient for constant-time access, and the only reason it works at all on the Web is that you can count on an optimised JavaScript engine to paper over the format's inefficiencies.
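Not LSP code, just a toy Python illustration of the constant-time-access point: nothing in a JSON document tells you where a value starts or ends without parsing everything that precedes it, so even reading a single key means tokenising the whole payload.

```python
import json

# Build a payload where the interesting field comes first,
# followed by a large blob we don't care about.
payload = json.dumps({"first": 1, "rest": list(range(10000))})

doc = json.loads(payload)  # the whole payload gets tokenised...
print(doc["first"])        # ...just to read one small field

# Truncating the tail breaks access to the head, too: the parser
# cannot hand back "first" without consuming everything after it.
try:
    json.loads(payload[:-1])
except json.JSONDecodeError:
    print("truncated document: nothing is accessible")
```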
Seriously, folks: go look at how bad the Language Server Protocol is with large data sets.
@dvogel I have been only tangentially involved in discussions about LSP; I know that its implementation inside GNOME Builder has been a point of contention because of performance issues with the format; the allocation-heavy approach necessary while parsing adds a ton of overhead, and requires parsing the whole thing instead of jumping to specific sections.
@craftyguy for payload, I'd probably use something that has offsets and lengths upfront, so you can easily get to the data without allocations—or even get the whole payload in a single allocation. Something that supports binary data without encoding it in base64 and validating it as UTF-8, as well.
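A minimal sketch of what that could look like; the framing and function names here are my own invention for illustration, not any existing protocol. A fixed header counts the fields, a table of (offset, length) pairs follows, and then the raw bytes, so a reader can slice straight into any field without scanning, and binary data needs no base64 encoding or UTF-8 validation.

```python
import struct

def pack(fields: list[bytes]) -> bytes:
    """Serialise fields as: count, (offset, length) table, raw data."""
    header = struct.pack("<I", len(fields))
    table = b""
    body = b""
    offset = 4 + 8 * len(fields)  # data starts after header + table
    for f in fields:
        table += struct.pack("<II", offset, len(f))
        offset += len(f)
        body += f
    return header + table + body

def get_field(buf: bytes, index: int) -> bytes:
    """One fixed-size read of the table entry, one slice: O(1) access."""
    off, length = struct.unpack_from("<II", buf, 4 + 8 * index)
    return buf[off:off + length]

msg = pack([b"hello", b"\x00\xffraw bytes", b"world"])
print(get_field(msg, 1))  # direct access to the second field
```

The whole message is a single buffer, so a receiver can read it with one allocation and hand out zero-copy slices of it, which is exactly what an allocation-heavy JSON parse can't do.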
@aarbrk I don’t think so; I’ve seen people saying you can do that at commercial composting facilities, if they also collect used frying oil. It won’t work in your own composter.
Finally landed a bunch of changes in JSON-GLib that I've been working on, off and on, for the past three months, mainly dealing with proper JSON conformance.
Had to undo a lot of the generic/extensible code in the tokeniser I lifted out of GLib in order to get it into a decent state; I've also added a whole conformance test suite to ensure that we don't deviate (too much) from RFC 8259.
Can't wait to see bugs getting filed because the parser got stricter.
@alatiera to be fair, I'll likely end up adding an optional "strict" mode to the tokeniser and parser, as a way to bail out on things like empty data or comments; for anything else, like unescaped control characters or floating-point numbers with no leading/trailing zero, I very much doubt anybody ever noticed.
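None of this is json-glib code, and the strict mode above isn't designed yet; these are just examples of the kinds of input RFC 8259 forbids that a lenient parser lets through and a stricter one starts rejecting. Python's stdlib json module is already strict about all of them, which makes it handy for illustration:

```python
import json

invalid = [
    "",              # empty data: not a valid JSON text
    "// hi\n[]",     # comments are not part of RFC 8259
    "[.5]",          # fraction with no leading zero
    "[01]",          # integer with a leading zero
    "[\"a\x01b\"]",  # unescaped control character in a string
]
for text in invalid:
    try:
        json.loads(text)
        print(f"{text!r}: accepted (lenient)")
    except json.JSONDecodeError as e:
        print(f"{text!r}: rejected ({e.msg})")
```

A parser that previously accepted any of these and then tightens up is exactly the kind of change that quietly breaks downstream callers, hence the bug reports.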