The current focus on memory corruption vulnerabilities is rooted in a fundamental error in how we perform risk analysis. The types of software for which CVEs issue, which is a shrinking subset of all software, happen to be the types of software for which memory corruption vulnerabilities figure prominently in the threat model.
So it may appear that 70% or whatever of 2023 vulnerabilities were about memory safety, but much less than 70% of annualized loss expectancy is attributable to it.
@falcon While it's true that browsers and operating systems are a small subset of all the many kinds of software, they are used heavily every day by everyone. And the things in them that people are looking to fix are (for example) file format parsers and device drivers (Android), which historically have been successfully attacked fairly regularly.
For these classes of software, the focus on memory unsafety is data-driven and the risk analysis is not faulty. Linux in particular has so many crashers, silent data corrupters, and exploitable bugs found by Syzkaller that it's essentially in bug bankruptcy.
The many other types of software are much less likely to be written in memory-unsafe languages, and hence they have other problems. If they were written in memory-unsafe languages, we might find ourselves back at square one with them. For example, I once worked on a large web app written in C++, and it was full of all the bugs, and they were being exploited. It has since been migrated, gradually, to Java, and its developers now focus on application-domain bugs.
That's a success, not an indicator of flawed analysis.
Professionalism, at its best, is an act of love and belief toward those we work with, rather than a set of behavioral standards that we have to live up to. We review final documents for typos because taking the time to produce high-quality, clean work product shows our clients that they matter to us. We send agendas and show up on time because we care about those we're meeting with, and not wasting their time is a way to express that care. And when these norms do not communicate care, when they will not succeed in making our people feel cared for, we can let them go.
It's worth pointing out that the root of user responsibility in the 23andMe breach extends far beyond password management. It traces back to the very act of users uploading sensitive DNA data to the platform.
This initial step, frequently undertaken without fully grasping the implications, exposed not only the individuals but also their relatives and families to potential cyber threats.
@Daojoan I like your writing overall, but this one isn't doing it for me. What's the constructive lesson people should learn from it?
The same sad outcome could in principle happen to a person who uses some non-optional health care provider (as opposed to 23andMe), like a hospital web site. Is the victim less to blame in that case?
Or, does the fact that a correctly configured MyChart/Epic uses SMS 2FA absolve the victim, even if a similar outcome occurs?
Do you suffer anxiety because the world is all messed up? I suggest soothing yourself by writing some nice, clear, well-specified, well-tested computer programs. Mmmm. Yes. Words mean specific things, and the program performs its task. Feels good. Deep in your bones.
Now, as an unrelated aside, one tiny reason among many that the world is messed up is that people keep writing computer programs. Try not to think about this.
@creachadair That's where we differ: It was all harmless fun when we wrote on chalkboards. Things went off the rails with the invention of machines that could run the programs.
OK, my kids (both adults) haven't seen Gremlins, and I haven't since I was like 12, but I am still confident that it is the second most important Xmas movie. Could suck! Could transcend! Will report how this goes.
@tqbf @lcamtuf Browsers and phones are literally drowning in bugs that actual bad guys really do use to really hurt people. It's not just servers that are easy to attack; it's all 0-click or attacker-unilateral software, and that includes tons of soft attack surface on the clients.
As you say, prevalence is important, but so is impact. The private sector offensive actors are doing a lot of damage to civil society. Killing journalists, for example. Enabling militarized cops to spy.
When memory unsafety goes away, we'll spend our time on the next biggest chunk of bugs: the eval class of bugs.
Credential phishing is over if you want it; memory unsafety is over if you want it; then comes code injection.
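To make the "eval class" concrete, here's a minimal Python sketch of the pattern. The helper name is hypothetical, not from any of these posts: the injection bug is letting untrusted input reach an evaluator (`eval()`), and the fix is restricting evaluation to literals.

```python
import ast

def parse_config_value(text: str):
    """Parse a user-supplied literal (number, string, list, dict) safely.

    ast.literal_eval only evaluates Python literals; input containing
    function calls or attribute access raises ValueError instead of
    executing. Using eval() here instead would be a classic code
    injection bug: the attacker's expression would simply run.
    """
    return ast.literal_eval(text)

print(parse_config_value("[1, 2, 3]"))  # prints [1, 2, 3]

try:
    # With eval(), this attacker-controlled string would execute.
    parse_config_value("__import__('os').getcwd()")
except ValueError:
    print("rejected non-literal input")
```

Same shape of bug, different substrate: SQL injection, shell injection, and template injection are all the eval class wearing different costumes.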
Bad guys must be kept in check by all means. The means are complementary, not mutually exclusive. Mexican drug cartels are bad, and they are even worse when they have access to NSO Group malware. Political, social, technical, economic, etc. means all work to reduce harm, and are more effective when used in concert.
Forget about heartstrings. If the ecosystem as a whole can be compromised for $2M, as indeed the ecosystems we depend on can be, then the ecosystem is not in fact healthy.
As for utilitarianism, (a) it's a broken moral framework; but more importantly (b) many people are harmed when John Podesta gets phished — or when his phone gets rekt via memory unsafety. Even rare attacks (and I don't think these are) can have high salience even in a utilitarian sense.
@lcamtuf @tqbf I don't think it's an 'easy' prescription, and I prescribed it for myself. I fought very, very hard to be 'allowed' to do that extra work. While also rethinking platform design and UX paradigms and working in those teams.
As Ian Beer noted, at a certain bug density — which we are at, thanks to memory unsafety, among other things — security design is irrelevant.
So, yeah, even from a utilitarian perspective, it's imperative. That's why every platform vendor is, in fact, moving in that direction.
Bought a frozen pizzoid, and its crust turns out to be sourdough instead of honest, God-fearing real dough. Our respective counsel shall be in correspondence anon, Whole Foods