To ensure this order independence, we must never incur a merge conflict! Instead we'd apply an arbitrary yet deterministic priority based on peer IDs. With this, peers can merge as frequently or infrequently as they like! Maybe we'd integrate this collaborative editing into the SRTP/XMPP videocalls I've established?
Our established text editor already logs a sequence of editing operations, which we can reformat into a text-CRDT by adding some provenance metadata.
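As a rough sketch of what that metadata might look like (every name here is my own invention, not any established CRDT's wire format):

```c
#include <stdint.h>

/* One logged edit, with provenance metadata attached.
 * Field names are illustrative only. */
struct edit {
    uint32_t peer;                    /* which peer authored this edit */
    uint32_t seq;                     /* that peer's own edit counter */
    uint32_t parent_peer, parent_seq; /* the edit we insert after */
    uint8_t  is_delete;               /* deletions become tombstones */
    char     ch;                      /* the inserted character */
};

/* Deterministic tie-break for concurrent inserts after the same parent:
 * any total order works, as long as every peer applies the same one.
 * Here: higher sequence number first, then lower peer ID. */
static int edit_priority(const struct edit *a, const struct edit *b) {
    if (a->seq != b->seq)   return a->seq > b->seq ? -1 : 1;
    if (a->peer != b->peer) return a->peer < b->peer ? -1 : 1;
    return 0;
}
```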
Now that I've established the basics, we need to address performance issues!
First: while writing could be as trivial as appending the edits, that would incur heavy overhead when reading the document back in the correct order. However, if we keep the edits sorted in document order then writing remains very efficient (akin to editing an ordinary plaintext file) whilst reading becomes trivial!
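Concretely (reusing the hypothetical struct edit from the earlier sketch), a write shifts a small tail of the log, & a read is a single pass:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Keep the edit log in document order: writing shifts the tail up by
 * one slot (much like inserting into a plaintext buffer), & reading
 * is a single linear scan. Caller ensures the array has capacity. */
void insert_in_order(struct edit *doc, size_t *len, size_t pos, struct edit e)
{
    memmove(&doc[pos + 1], &doc[pos], (*len - pos) * sizeof e);
    doc[pos] = e;
    (*len)++;
}

/* Render the visible text, skipping tombstoned (deleted) characters. */
void render(const struct edit *doc, size_t len, FILE *out)
{
    for (size_t i = 0; i < len; i++)
        if (!doc[i].is_delete)
            fputc(doc[i].ch, out);
}
```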
Second: we'd be storing a significant amount of data per byte of text!
To compress a CRDT's provenance metadata away to practically nothing (following the lead of Ink & Switch's Automerge; ideally we'd achieve compatibility with it!), arranging the data into columnar form reveals plenty of opportunities! That is, storing all the values of one field across a bunch of edits together, as opposed to storing an array of edit records.
Depending on what works best for each column we could use run-length, diff, codebook, or Deflate compression, with variable-length ints.
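To illustrate the kind of win that layout enables (this is not Automerge's actual encoding): run-length encode one column, writing counts & values as LEB128-style variable-length ints.

```c
#include <stddef.h>
#include <stdint.h>

/* LEB128-style varint: 7 bits per byte, high bit set on all but the
 * final byte, so small values cost a single byte. */
static size_t write_varint(uint8_t *out, uint64_t v)
{
    size_t n = 0;
    do {
        uint8_t byte = v & 0x7f;
        v >>= 7;
        if (v) byte |= 0x80;
        out[n++] = byte;
    } while (v);
    return n;
}

/* Run-length encode one column (say, every edit's peer ID): long runs
 * of the same value collapse to a (count, value) pair. */
static size_t rle_column(const uint32_t *col, size_t len, uint8_t *out)
{
    size_t written = 0;
    for (size_t i = 0; i < len; ) {
        size_t run = 1;
        while (i + run < len && col[i + run] == col[i]) run++;
        written += write_varint(out + written, run);
        written += write_varint(out + written, col[i]);
        i += run;
    }
    return written;
}
```

In a single-author document the peer-ID column collapses to one (count, value) pair!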
Stretching a bit beyond plaintext editing, I've stated that I'd have devs highlight string literals (serialized in plaintext by surrounding 0 bytes) to denote them. However I'd represent them differently in a CRDT to ensure they get merged correctly. Designing a good CRDT is as much a UX exercise as it is a datamodelling one.
Instead I'd dangle begin/end "annotations" off the insert-character edits for toggling string literals on & off.
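One hypothetical shape for that (again, names of my own invention):

```c
/* Hang begin/end markers off the insert edits themselves, so concurrent
 * edits inside the literal merge like any other run of text. */
enum annotation { ANNOT_NONE, ANNOT_STRING_BEGIN, ANNOT_STRING_END };

struct annotated_edit {
    struct edit e;          /* the underlying insert (see earlier sketch) */
    enum annotation marker; /* toggles string-literal highlighting on/off */
};
```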
ELF files can be queried from the commandline, so I'm starting by studying these tools!
After init'ing i18n & parsing commandline flags, addr2line parses addresses from stdin or the remaining commandline args. For each (if valid) it might parse trailing text, iterates over the DWARF modules list (if available) to find the specified one, & iterates over the module's ELF SCN headers to build a relocations table with which to locate the given address & offset it. With possible postprocessing based on additional parsing.
Otherwise, with some validation, it binary searches the address in a DWARF lookuptable to retrieve the containing module, then checks additional lookuptables (including some tree traversals) to find & output which function & sourcefile the address is in; similarly for the symbol name & type amongst other things, displayed as the reader desires.
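That lookuptable step is essentially a binary search over sorted, non-overlapping address ranges; a generic sketch of the idea (not addr2line's actual code):

```c
#include <stddef.h>
#include <stdint.h>

/* A sorted table mapping address ranges to the module covering them. */
struct range {
    uint64_t start, end;   /* covers [start, end) */
    const char *module;
};

/* Binary search for the range containing addr; NULL if none does. */
static const struct range *find_module(const struct range *tab, size_t len,
                                       uint64_t addr)
{
    size_t lo = 0, hi = len;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (addr < tab[mid].start)     hi = mid;
        else if (addr >= tab[mid].end) lo = mid + 1;
        else                           return &tab[mid];
    }
    return NULL;
}
```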
addr2line leans heavily on the DWARF format to supply these details! Which is implemented as separate libraries in the same repo.
@baldur It's worth emphasizing: These skills don't go out-of-date! Browserdevs go out of our way to ensure the pages you've written yesterdecade still work!
New optional features have been added, & best practices have solidified since the 1990s... But still!
Having to periodically rewrite outdated code is something webdevs bring upon themselves! Having to keep on top of the latest frameworks is something webdevs bring upon themselves!
Code isn't the only thing which would be included in the OS for our hardware Internet Communicator! It'll need images, fonts, audio, voices, etc to communicate with you! Can we self-host much of this development too?
Up to a few thousand pixels, we could have a UI for setting the colour at any given pixel of an image (stored in the Arithmetic Core) displayed onscreen. Probably worth sacrificing colour-depth for resolution. Include flood-fill!
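Flood-fill itself is simple enough to sketch; here's a 4-connected fill over an 8-bit indexed image (the sizes & representation are assumptions on my part):

```c
#include <stdint.h>
#include <stdlib.h>

/* Iterative 4-connected flood fill over an indexed-colour image.
 * Pixels are recoloured as they're pushed, so each enters the explicit
 * stack at most once & a w*h stack always suffices. */
void flood_fill(uint8_t *img, int w, int h, int x, int y, uint8_t colour)
{
    if (x < 0 || y < 0 || x >= w || y >= h) return;
    uint8_t target = img[y * w + x];
    if (target == colour) return;

    int *stack = malloc((size_t)w * h * 2 * sizeof *stack);
    if (!stack) return;
    int top = 0;

    img[y * w + x] = colour;
    stack[top++] = x; stack[top++] = y;

    while (top > 0) {
        int py = stack[--top], px = stack[--top];
        const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
        for (int i = 0; i < 4; i++) {
            int nx = px + dx[i], ny = py + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (img[ny * w + nx] != target) continue;
            img[ny * w + nx] = colour;
            stack[top++] = nx; stack[top++] = ny;
        }
    }
    free(stack);
}
```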
If we need a larger canvas, we could lean on the Compositor Coprocessor to blend in brush-strokes simulated from the full detail of (if we have it) the signal-processed & smoothed touchscreen input. Akin to Krita. We'd probably add a sidebar form to tweak this brush.
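At its core that blending is per-pixel compositing of a soft brush stamp onto the canvas; a rough greyscale sketch (the real thing would run on the Compositor Coprocessor, handle colour, & take pressure into account):

```c
#include <math.h>
#include <stdint.h>

/* Stamp a soft circular brush at (cx, cy): coverage falls off linearly
 * from the centre, & each covered pixel is blended "over" the canvas. */
void stamp_brush(uint8_t *canvas, int w, int h,
                 float cx, float cy, float radius, uint8_t value)
{
    int x0 = (int)floorf(cx - radius), x1 = (int)ceilf(cx + radius);
    int y0 = (int)floorf(cy - radius), y1 = (int)ceilf(cy + radius);
    for (int y = y0; y <= y1; y++) {
        if (y < 0 || y >= h) continue;
        for (int x = x0; x <= x1; x++) {
            if (x < 0 || x >= w) continue;
            float d = hypotf(x - cx, y - cy);
            if (d >= radius) continue;
            float a = 1.0f - d / radius;  /* soft falloff towards the edge */
            uint8_t *px = &canvas[y * w + x];
            *px = (uint8_t)(a * value + (1.0f - a) * *px);
        }
    }
}
```

A stroke is then just this stamp repeated along the smoothed input path.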
Or we might want to save input from the camera to edit! Likewise for the microphone, producing audio data.
In all these cases we'd want to compress this sheer quantity of data. And postprocess it!
Since I've established a couple of graph viewers, we could use those to compose all the various "transcoding" (defined generously!) tools to tweak data however you want it. This could be saved as code for the Output Unit, whilst (with relatively little extra effort) providing a convenient tool to refine recorded images, audio, video, etc! Add a gesture to view results at any stage in the pipeline.
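Under the hood, composing those tools amounts to chaining buffer-to-buffer transforms for the graph viewer to arrange; a hypothetical skeleton (none of these names exist yet):

```c
#include <stddef.h>
#include <stdint.h>

/* A "transcoding" stage: consume one buffer, produce another. */
struct buffer { uint8_t *data; size_t len; };
typedef struct buffer (*stage_fn)(struct buffer in);

/* Run a linear chain of stages; a graph viewer would let you tap the
 * intermediate buffer after any stage to preview results. */
struct buffer run_pipeline(struct buffer in, const stage_fn *stages, size_t n)
{
    for (size_t i = 0; i < n; i++)
        in = stages[i](in);
    return in;
}
```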
Maybe we'd add a few tools to use with it, as desired!
There'd be some Microsoft fonts we'd be practically expected to ship (non-commercially licensed). As for self-hosting the development of our own...
I'd need to render a menu of glyphs (labeling it before we have a font would be interesting... though we'd probably use Times or something), & overlay some draggable sprites upon the vector outline when editing an individual glyph.
Another editor could be used for the BIOS's font, based on the bitmap editor. Nudging towards compressibility!
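Such a font is typically just a byte per row per glyph, which is exactly what makes it compress (& edit) so nicely; e.g. a hypothetical 8x8 'A':

```c
#include <stdint.h>

/* One 8x8 glyph, one byte per row, most-significant bit = leftmost pixel.
 * Blank & repeated rows are what run-length compression thrives on. */
static const uint8_t glyph_A[8] = {
    0x18,  /* ...##... */
    0x24,  /* ..#..#.. */
    0x42,  /* .#....#. */
    0x7e,  /* .######. */
    0x42,  /* .#....#. */
    0x42,  /* .#....#. */
    0x42,  /* .#....#. */
    0x00,  /* ........ */
};
```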
As for auditory fonts... I've stated that I'd include a joystick in that form-factor. We could use it to select a parameter (in the Arithmetic Core) to nudge up & down as the voice reads out some text.
Or we could ship eSpeak's, since we'd be shipping their pronunciation rules! These could be edited as text.
For further creative tools... Maybe another metathread? This should be enough to self-host development of the OS.
Unless your programs are truly trivial (and even then you'd want to call OS "libraries" for I/O) they are constructed from multiple parts each compiled separately, so we need tooling & formats to "link" these parts together into a singular whole. The other day I explored how Linux does this as it is running; today I'll be exploring the official implementation of the ELF file format.