I've now finished my metathread on hypothetical developer tools for self-hosted maintenance of my hypothetical hardware-Internet Communicator! And with it I believe I've described practically all the software & hardware comprising an inclusive decolonial browser & operating system! Following existing web (no JS), internet, Unicode, Xiph, USB, AutoMerge, etc standards.
It's not simple, but I think the hypothetical hardware made it much simpler!
So my question: once I've published these threads, what do I tackle next? Should I extend this hardware/OS to new usecases, or design new hardware?
I've heard interest in expanding my XMPP discussion beyond the basics.
I haven't said much about authentication & multi-user devices.
Mapping could be a useful feature!
So far I've leaned towards serving audiences in my design; I'd love to explore creative tools more!
Some data available in our hypothetical hardware-Internet Communicator is very sensitive with privacy concerns far outweighing any desire to reprogram its use. So how'd I keep a close eye on who uses this raw data?
This would necessarily involve keeping a close eye on any password managers, though not necessarily on any instant messengers.
I'm talking about data like the framebuffer, open windows, password vault, status LEDs/buzzer output, notifications (UX reasons), fingerprint scanner, etc.
Except... I'm a bit queasy about limiting the control others have over their devices, so how does this differ from Microsoft's, Apple's, or Nintendo's "Secure Boot"?
1st off I'd minimize how much I rely on these defences, keeping the constraints they put upon you to a level most everyone should be comfortable with. Instead I'm mainly relying on hardware/firmware-level sandboxing!
2nd if you configure authenticated boot I'd let you use those creds to loosen (or tighten) these checks.
The authenticators used to bypass our Secure Boot would have to be ones we (the vendor) approve of, if this check is to be meaningful at all. But it'd let you overwrite all the other software on your device!
With asymmetric cryptography these credentials don't even need to live on the device itself, which could be useful for work machines. Though I'd want to use our control over the authenticators to ensure employees are informed that this is a work machine!
After initializing internationalization & parsing commandline flags, elflint iterates over each arg, carefully opening each given file & branching upon its subtype, whilst aggregating errors.
For proper ELF files it retrieves the E header, outputs the filename, initializes LibEBL, validates the ELF header, validates its P headers, validates its S headers, validates that any exception handlers present are non-NULL, & cleans up.
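To make that header validation concrete, here's a toy sketch (in Python rather than elflint's C, covering only a few of the many real checks) of the kind of complaints elflint raises about an ELF header:

```python
# Toy sketch of elflint-style ELF header validation, stdlib only.
# Real elflint checks far more fields via libelf/libebl.

ELF_MAGIC = b"\x7fELF"

def check_elf_header(data: bytes) -> list[str]:
    """Return a list of complaints about an ELF file's e_ident bytes."""
    errors = []
    if data[:4] != ELF_MAGIC:
        errors.append("not an ELF file: bad magic")
        return errors
    ei_class, ei_data, ei_version = data[4], data[5], data[6]
    if ei_class not in (1, 2):    # ELFCLASS32 / ELFCLASS64
        errors.append(f"invalid class {ei_class}")
    if ei_data not in (1, 2):     # little- / big-endian encoding
        errors.append(f"invalid data encoding {ei_data}")
    if ei_version != 1:           # EV_CURRENT
        errors.append(f"unknown version {ei_version}")
    return errors
```

Like elflint, it aggregates errors rather than bailing on the first one (except when the magic itself is wrong, in which case nothing else is trustworthy).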
For each section findtextrel retrieves the header & branches over whether it's dynamic. If it is, it iterates over each entry therein to flag whether we've found a TEXTREL one. If it's a symtable it saves the index of the last one.
After which it errors out if it hasn't found a TEXTREL. Otherwise it allocates a segments array (initially 10 slots) & iterates over the P headers, appending each actually-existing one of type PT_LOAD & not flagged PF_W to the array.
If we've gathered any segments it initializes the DWARF iterator, possibly opens a debug-info file, & iterates over the sections a final time. For each, findtextrel double-checks we actually have section data, & iterates over its entries (handled according to REL/RELA subtype) to exhaustively check against expected values.
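That initial TEXTREL scan can be sketched minimally, assuming the .dynamic section's entries have already been parsed into (tag, value) pairs (the tag constants below are the real gABI values):

```python
# Minimal sketch of findtextrel's scan of .dynamic entries for a
# text-relocation marker. Input: iterable of (d_tag, d_val) pairs.
DT_NULL, DT_TEXTREL, DT_FLAGS = 0, 22, 30
DF_TEXTREL = 0x4

def has_textrel(dyn_entries) -> bool:
    for tag, val in dyn_entries:
        if tag == DT_NULL:            # end-of-table marker
            break
        if tag == DT_TEXTREL:         # old-style explicit tag
            return True
        if tag == DT_FLAGS and val & DF_TEXTREL:  # new-style flag bit
            return True
    return False
```

Both the old DT_TEXTREL tag & the newer DF_TEXTREL bit in DT_FLAGS mark the same condition, so a faithful scan checks for either.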
Yesterday I discussed how I'd build a codeforge for & upon our string-centric hardware (as well as a clientside feedreader, crash reporter, & fallback crash reporter) comprising a repo viewer & issue tracker. What else would we want to include in it?
We'd want to let you download the AutoMerge documents to fork them & attach them to issues to request they be merged. We may add a minor integration between the 2 components to merge with a click of a button, but that might not be worth it.
This catalog would list all repos, issue tracker(s), mailing lists, chatrooms, websites, etc for each subproject. Though I wouldn't want to mandate that subprojects use our services. The repos may be processed to extract the permissions each component requires.
This catalog would be useful for performing analysis over the entire project!
We can index the input formats each program supports, so our devices can download it & offer to install software for new file formats it encounters.
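Such an index could be as simple as a mapping from MIME types to installable packages (the package names & types here are purely hypothetical):

```python
# Hypothetical format index: MIME type -> packages claiming to open it.
# A device consulting this could offer to install software for
# unfamiliar file formats it encounters.
format_index = {
    "audio/ogg": ["hypothetical-player"],
    "image/png": ["hypothetical-viewer", "hypothetical-editor"],
}

def handlers_for(mime_type: str, index=format_index) -> list[str]:
    """Which packages (if any) claim to handle this format?"""
    return index.get(mime_type, [])
```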
After parsing commandline flags (I see internationalization but not its initialization) & validating that args remain (in some cases, a single one), elfcompress processes each remaining arg aggregating error codes, before tidying up!
For each, it opens the given ELF file validating its kind & fstat(), retrieves the E, S, & P headers, considers calculating a last-offset based on those, & iterates over the ELF sections.
For each section (round 2) it retrieves various properties upon successfully retrieving the section whilst applying any requested compression, creates a new section in the output ELF file, & copies the various data over to it with minor tweaks (and more compression!).
It finishes by possibly finalizing various global names possibly adding compression, retrieves the updated E header, retrieves the S header string index, possibly iterates over those names, updates layout info, & tidies up!
For each section name elfcompress retrieves the index & S header, possibly computes some layout info, writes the S header, & checks if we're at the desired entry. If we are, it retrieves various layout info & possibly adds compression!
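The framing elfcompress applies when compressing a section is worth sketching: the zlib-compressed payload gets prefixed with an ELF compression header recording the uncompressed size. A rough Python sketch, assuming the 64-bit little-endian Elf64_Chdr layout:

```python
import struct
import zlib

ELFCOMPRESS_ZLIB = 1  # ch_type value from the ELF gABI

def compress_section(data: bytes, addralign: int = 8) -> bytes:
    """Prefix zlib-compressed section data with an Elf64_Chdr:
    ch_type, ch_reserved, ch_size (uncompressed), ch_addralign."""
    chdr = struct.pack("<IIQQ", ELFCOMPRESS_ZLIB, 0, len(data), addralign)
    return chdr + zlib.compress(data)

def decompress_section(blob: bytes) -> bytes:
    """Undo compress_section, checking the recorded size."""
    ch_type, _, ch_size, _ = struct.unpack_from("<IIQQ", blob)
    assert ch_type == ELFCOMPRESS_ZLIB, "unknown compression type"
    out = zlib.decompress(blob[24:])  # Elf64_Chdr is 24 bytes
    assert len(out) == ch_size, "size mismatch"
    return out
```

The real tool of course works through libelf's elf_compress(), handling 32-bit layouts, GNU-style .zdebug naming, & section flag updates too.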
There's a handful of helper wrapper functions dealing with additional details of how ELF files are laid out.
3.5/3.5! Fin for today! I don't want to get started on another command at this point...
kitten Ivy Pepper: "What do you need a gun for if you're just adding numbers?"
Mordecai Heller: "I subtract numbers too."
Ivy, sweetly: "What does that mean?"
Mordecai: "Oh, no! It's precocious!"
We could get far on our hypothetical string-centric hardware collaborating peer-to-peer using the CRDT version control system I described the other day! But giving the project a central server (running on our custom hardware, let's say) can streamline development whilst adding quality control, so how'd we implement it?
We'll implement our "codeforge" as a suite of independent services, using an (as yet, proprietary) <form> control for authentication.
Perhaps the other most-important feature of a codeforge is an issue tracker. For this I'd start from a webframework which uses the Parsing Unit for URL routing & data lookups, the Output Unit for templating, & the Arithmetic for less-trivial queries. Allowing webforms to update the stored labelled-tree of data.
Once we've datamodelled "issues" we can render them to HTML & webfeeds, & provide forms to create new issues & comments thereupon.
We may include code to render issues to/from emails, though the previously-mentioned authentication widget could allow anyone to use the web UI.
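A toy sketch of what that URL routing into a labelled tree of issues could look like (the tree shape, paths, & names are all hypothetical; the real design leans on the Parsing Unit instead):

```python
# Hypothetical labelled tree of issue-tracker data.
issues = {
    "1": {"title": "Crash on boot", "comments": ["Me too!"]},
    "2": {"title": "Typo in docs", "comments": []},
}

def route(path: str, tree=issues):
    """Map e.g. "/issues/1" to the stored issue dict, else None."""
    parts = [p for p in path.strip("/").split("/") if p]
    if len(parts) == 2 and parts[0] == "issues":
        return tree.get(parts[1])
    return None
```

A matched issue would then be handed to the Output Unit's templating to render HTML or a webfeed entry.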
Furthermore I'd want to have the OS automatically pull up this form in response to any crashes. But what if the form engine itself crashes? Then I'd task the BIOS team to build a fallback interface, offering a simpler (UDP?) endpoint for it to call. Akin to the fuzzer, I'd simplify/anonymize the crash report.
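The client side of that simpler fallback endpoint could be as small as a fire-and-forget UDP datagram; a minimal sketch (host, port, & payload shape are all placeholders, not a specified protocol):

```python
import json
import socket

def send_crash_report(summary: str,
                      host: str = "127.0.0.1",
                      port: int = 9999) -> None:
    """Fire-and-forget a minimal, pre-anonymized crash summary over UDP.
    No retries, no response: the reporter may itself be crashing."""
    payload = json.dumps({"crash": summary}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
```

UDP suits this niche: no connection state to manage from a barely-working BIOS-level environment, & losing the occasional report is acceptable.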
@alcinnz What a coincidence! I was just thinking the same thing. If I could choose a life without technology around forever, I would, but I can't stand a short break from it. FOMO is a real struggle. 🙃