Vulkan SC is a streamlined API based on the Khronos Vulkan API that enables state-of-the-art GPU-accelerated graphics and computation to be deployed in safety-critical systems. The Vulkan SC Working Group at Khronos released the latest Vulkan SC 1.0.13 maintenance update. Alongside the new specification there are significant Vulkan SC tooling updates and streamlined ecosystem synergy with mainstream Vulkan. Read on to discover more!
The compiler will almost certainly not be #OpenSource, which is just a tiny bit sad, but the advancement of Rust into this domain would be a milestone.
A missed opportunity, in my view, to discuss how #SiliconValley’s somewhat recent entry into the #SafetyCritical systems space is really the far more pressing issue here.
Hand-waving away systems safety yields enormous short-term cost savings (far more than is typically expected), and it is corrosive to a modern society in ways that courts could never rectify once that safety is sufficiently lost.
Let's talk about #Mercedes vehicles equipped with #DrivePilot a bit - a Level 3-capable vehicle that has been recently "approved" in a handful of US states.
This article almost entirely focuses on the legal dynamics of consumer liability should this vehicle create a direct (or, presumably, an indirect) incident.
But, as always, I want to talk about what I feel are the #SystemsSafety realities at work here and the many foot-guns that are associated with that.
The legal expert cited, Professor William Widen, and Professor Phil Koopman have offered their thoughts on attributing liability (between the vehicle and the human driver), as linked to in the article.
And it has actually proved to be extraordinarily dangerous when #Musk lies about the capabilities and availability of #Tesla's #FSDBeta product, in particular.
The press often allows Wall Street analysts, who are not competent in #SafetyCritical systems, to advance Musk's dangerous lies.
The #SystemsSafety community has been battling this for years.
So, if it was not the case before, now that #Twitter seems to be walled off for good...
How can the #NWS and, say, The White House (as two random examples) continue to remain solely on social media platforms that are inaccessible to millions of unregistered users?
Those exclusively on the #Fediverse, an open platform, are being actively denied public services, official policy announcements and timely emergency alerts.
#Musk is actively and aggressively tearing down decades of hard-fought industry norms as they pertain to #SafetyCritical systems work - and that is very much by design.
And indifferent regulators allow him to get away with the most blatant of violations.
New engineering graduates are being cycled through Musk-controlled firms like #Tesla - only to likely leave with a very poor internal safety culture (that other firms can later exploit).
Marc Andreessen: "what happens when basically very smart people who are taken very seriously in one field decide to branch out and decide to become experts on society and politics and decide to weigh in on the future shape of society, and basically it turns out they’re just horribly bad, they just have catastrophic judgment once they’re outside of their core discipline.”
And Marc is obviously just going to uncritically regurgitate #Musk talking points.
Marc has recently been embracing a more hardware-oriented mindset ever since COVID-19 erupted, and yet he is simply uninterested in understanding the details of the space.
Does a well-maintained validation process for any given vehicle demand a Lidar sensor (or any other hardware)?
That is it.
System designers who are properly engaged in designing a #SafetyCritical system are subservient to both the initial and continuous (that is, never ending) validation process.
Ross Gerber, a prominent #Tesla investor and advocate who was featured in my previous thread, is clearly angry that his #FSDBeta-active vehicle was observed displaying some highly compromising behaviors.
This conversation is telling because we have really reached the "next stage" of this vast Tesla wrongdoing - organized collusion outside of Tesla.
Remember: YouTube videos and personal testimonials can never show positive safety progress in the context of a #SafetyCritical system.
Never.
Videos and personal testimonials can only display safety-related issues.
And even with organized efforts by many prominent #FSDBeta "testers" in the #Tesla community to only publish what they feel are the most visually-performant drives... I have never seen a video that did not contain serious safety-related issues (many of which are "unseen").
I was at #Chicago Pride all weekend while visiting with my wife (videos and photos soon!), so I missed this #Tesla Drama concerning #FSDBeta that erupted.
Ok.
Let us, again, all put on our #SystemsSafety hats and take a look at the situation here as I understand it.
Below is the video that kicked the beehive between Tesla defenders and detractors on "what really happened?".
This clearly chaotic video was taken from a larger drive sequence in which FSD Beta was active.
With respect to #SafetyCritical systems, "positive assumptions of safety" are incompatible with the analysis of these systems - particularly by those outside of a systems safety lifecycle.
#Tesla is included here as well since, per my previous threads on the matter, Tesla is not maintaining a systems safety lifecycle with their #FSDBeta program.
The assumption must be made that the Tesla vehicle would have blown the stop sign.
Well this led me down a rabbit hole.
The headline is that older Aussie drivers don't trust ADAS, and how to get them to.
By checking just the first few cited research papers, where "the clear safety benefits" have supposedly been shown, you find they DO NOT show that. Surprise, surprise.
They use a lot of "could", "can" and "should". Because they don't know. Because there is no proof.
And legislation is being written on the back of these. FFS.
It is a combination of "too much" and "too vague" relative to the extraordinarily high demands of responsibly developed #SafetyCritical systems.
The myopic #AI position of #Tesla and others is really a tacit admission that they are "rolling the dice" and hoping that "the numbers" work out enough to satisfy deadbeat regulators and "the numbers" are low enough that the press does not catch on.
It is a perversion and betrayal of decades of systems work.
This is a good piece, but the one small nit that I have again is bringing in a "data argument".
I know that it is tempting for an audience that is generally not at all experienced in #SafetyCritical systems, but I would highly recommend resisting it.
In fact, there is no telling how much higher it could be.
It is our obligation to assume that the "crash rate" of #Tesla's #FSDBeta product is unquantifiably higher.
Taking a break from #Musk's Hate Train on the Hellsite to recall this series of Tweets from a few years ago.
While under-appreciated then and now, the Tweet thread by Musk posted below contains an extremely damning #SystemsSafety admission and it displays the considerable #PublicSafety blind spot associated with remotely updating #SafetyCritical systems without oversight.
Musk has no clue what he admitted to here, but systems safety experts do.
And if you hear anyone describing them as such, it almost certainly means that they are (knowingly or not) hand-waving away the incomparable #SystemsSafety differences between a consumer electronic device and a #SafetyCritical system.
The Hard Truth is that if one's engineering and management experience has been dominated by their time at, say, #Apple... this will not translate well to a role that involves #SafetyCritical systems of the caliber of cars.
The competencies involved are Night and Day.
That is not to say that talented engineers from consumer/business hardware and software realms can never become competent in safety-critical systems work... but it is a considerable jump.
First off, it should be recognized that a "regression" is a software term which, in the context of a #SafetyCritical system, is woefully incomplete by itself.
The question asked and the statement made should be...
"How was an existing validation process deficient such that it allowed a safety-related defect to enter the public?"
The second Tweet touches on a topic that Musk is undoubtedly unaware of called Configuration Management (CM) - an extremely complex and important concept in #SafetyCritical systems work.
You know how sometimes, if you have an older #iPhone and #Apple pushes an OTA update, it causes issues on some older devices that newer devices do not see?
That is likely because Apple has lost some visibility on the significant number of hardware configurations that they are supporting.
But a potentially deadly situation when CM visibility is lost with a #SafetyCritical system like a car!
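To make the Configuration Management idea above concrete, here is a minimal sketch (all names are hypothetical, not drawn from any real vendor's system): an update may only be deployed to a hardware configuration that this exact build was validated against - there is no "close enough."

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HardwareConfig:
    """One concrete hardware variant in the fleet (hypothetical fields)."""
    camera_rev: str
    compute_board: str

@dataclass(frozen=True)
class SoftwareBuild:
    """A software release plus the exact configurations it was validated on."""
    version: str
    validated_configs: frozenset

def may_deploy(build: SoftwareBuild, target: HardwareConfig) -> bool:
    """An OTA update is deployable only to configurations the validation
    process explicitly covered. Anything else is a validation gap."""
    return target in build.validated_configs

# Two hardware variants in the field; the new build was only validated on one.
cfg_old = HardwareConfig(camera_rev="1.0", compute_board="boardA")
cfg_new = HardwareConfig(camera_rev="2.0", compute_board="boardB")
build = SoftwareBuild(version="10.3", validated_configs=frozenset({cfg_new}))

print(may_deploy(build, cfg_new))  # True
print(may_deploy(build, cfg_old))  # False: CM says this must not ship
```

The point of the sketch is the gate itself: when visibility into the set of fielded configurations is lost, `validated_configs` silently stops matching reality, and the check above becomes meaningless - which is exactly the failure mode described here.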
In that Tweet, Musk is hand-waving #Tesla's responsibilities in maintaining CM (and a validation process to match) as an "impossibility".
It is not impossible.
It is inherently costly and complex and it will substantially reduce Tesla's flexibility in changing vehicle hardware on-the-fly - an oft-cited competitive advantage.
Musk does not want that baggage, which is unavoidable in responsible #SafetyCritical systems work, so Musk and #Tesla toss "the testing" upon their untrained customers and the public.
That is why that Tweet is so revealing.
The other thing, of course, is that "QA", by itself, is not a sufficient process for #SafetyCritical systems - and, yet again, it is a term stripped from consumer/business software and hardware domains.
The last Tweet in that thread was written by Musk a little over 13 hours after "the issues" were discovered when the second Tweet was published.
For #SafetyCritical systems of this complexity, no matter how many people are on the team, no matter how talented the people are on the team, there is zero chance that the "10.3.1" point update was actually validated.
There simply is not enough wall clock time.
Musk and #Tesla just tossed it out, as if they were shipping a video game update.