#Autopilot and #FSDBeta-equipped vehicles are Level 2-capable vehicles - with the exact same limitations as many other vehicles on the market today.
Namely, the key limitation is that the human driver must remain the fallback for the dynamic driving task and any vehicle failures at all times and under all conditions.
Effectively, that means that the human has the exact same control responsibilities in the two vehicles shown below.
A missed opportunity, in my view, to discuss how #SiliconValley’s somewhat recent entry into the #SafetyCritical systems space is really the far more pressing issue here.
Hand-waving systems safety yields enormous cost savings (far more than is typically expected) and is corrosive to a modern society in ways that courts could never rectify once sufficiently lost.
People today cannot really remember a time when everyday products would readily kill or maim. So, having lost those experiences over the decades, Silicon Valley increasingly saw a business opportunity.
But modern society is grounded on the public’s trust and, given enough critical mass, trust can be virtually lost overnight.
Quite literally.
That is Great Depression stuff right there, folks.
My word, there are suddenly a lot of Automatic Emergency Braking experts. I suppose they needed something to do after being the submarine experts and the AI experts.
All that I have seen amply demonstrates that they know nothing at all about AEB.
Let's talk about #Mercedes vehicles equipped with #DrivePilot a bit - a Level 3-capable vehicle that has been recently "approved" in a handful of US states.
This article almost entirely focuses on the legal dynamics of consumer liability should this vehicle create a direct (or, presumably, an indirect) incident.
But, as always, I want to talk about what I feel are the #SystemsSafety realities at work here and the many foot-guns that are associated with that.
And it has actually proved to be extraordinarily dangerous when #Musk lies about the capabilities and availability of #Tesla's #FSDBeta product, in particular.
The press often allows Wall Street analysts, that are not competent in #SafetyCritical systems, to advance Musk's dangerous lies.
The #SystemsSafety community has been battling this for years.
I was at #Chicago Pride all weekend while visiting with my wife (videos and photos soon!), so I missed this #Tesla Drama concerning #FSDBeta that erupted.
Ok.
Let us, again, all put on our #SystemsSafety hats and take a look at the situation here as I understand it.
Below is the video that kicked the beehive between Tesla defenders and detractors on "what really happened?".
This clearly chaotic video was taken from a larger drive sequence in which FSD Beta was active.
Well this led me down a rabbit hole.
The headline is that older Aussie drivers don't trust ADAS, and how to get them to.
By checking just the first few cited research papers in which "the clear safety benefits" have supposedly been shown, you find they DO NOT show that. Surprise, surprise.
They use a lot of "could", "can", and "should". Because they don't know. Because there is no proof.
And legislation is being written on the back of these. FFS.
Anytime someone or something uses the term "I think", "could be" or similar and then makes some sort of #SystemsSafety assessment... you can toss it right in the trash.
Systems safety is about constantly and exhaustively asking pointed questions and seeking quantifiable answers.
Academic and industry research can be used to develop a safety case, but it is not at all complete by itself.
I am not going to lecture Dr. Hotez on how to deal with this... but I can say this... the strategy on the #Musk-side and of his sycophants is to passionately pretend like they want to have a Good Faith debate... but they are just looking for "gotcha soundbites".
This is a good piece, but the one small nit that I have again is bringing in a "data argument".
I know that it is tempting for an audience that is generally not at all experienced in #SafetyCritical systems, but I would highly recommend resisting it.
In fact, there is no "could be" about it.
It is our obligation to assume that the "crash rate" of #Tesla's #FSDBeta product is unquantifiably higher.
Efficiently and reliably extracting "safety data" from the roadway is a non-starter.
It simply cannot be done.
Why?
Because the only "safety data" that actually matters for this conversation is "data" that is forensically extracted from all interactions with a #FSDBeta-active vehicle - both direct and indirect interactions.
Really thinking of just dropping #Reddit, for what it is worth.
Dropped my profile off my Mastodon bio.
Reddit management is giving me serious #Musk vibes lately and I have been getting follow spammed 4-5 times daily for the last two weeks.
Met a lot of great technical and #SystemsSafety experts on there though - mostly through pushing back against #Tesla's #Autopilot and #FSDBeta wrongdoings.
Taking a break from #Musk's Hate Train on the Hellsite to recall this series of Tweets from a few years ago.
While under-appreciated then and now, the Tweet thread by Musk posted below contains an extremely damning #SystemsSafety admission and it displays the considerable #PublicSafety blind spot associated with remotely updating #SafetyCritical systems without oversight.
Musk has no clue what he admitted to here, but systems safety experts do.
And if you hear anyone describing them as such, it almost certainly means that they are (knowingly or not) hand-waving away the incomparable #SystemsSafety differences between a consumer electronic device and a #SafetyCritical system.
@jbenjamint@ottocrat The other thing routinely left out of these conversations is lifecycle “emissions” (negative externalities) through the potential complication of #SystemsSafety via #EVs.
This “fact check” opinion article may be more on point than the opinion article it is responding to (and I believe that it is), but such pieces, as usual, overlook the larger, complex system at work here.
Systems atop systems. These pieces at least need to make a nod to it, in my view.
Just seen a clip of Jim Farley talking on the Fully Charged show (I would not normally watch it), and he explains how OEMs have no clue about software, how they built a business on not having a clue, and how that has made things "tricky" for them. Also, that they STILL have no clue about software. Quite enlightening.
@CrackedWindscreen The problem is... and I am not at all convinced Farley recognizes it (or wishes to admit it based on, potentially, feedback from his internal teams)... that today's consumer software demands cannot have a safety foundation under them.
That is the "tricky" part.
That is the part that many automakers likely cannot square internally.
I would have to toss all of my #SystemsSafety experience in the garbage to satisfy today's auto consumer.
@samabuelsamid, who I generally agree with, offers some thoughts.
@mimsical, who I also generally agree with, is mentioned in passing.
And, well, I have typically agreed less with Timothy Lee's thoughts over the years (particularly on #Tesla-related topics), but this article is reasonable enough.
But let's crack it open and take a look at a few things that I think are iffy.
Hmm. Going to throw up some flags here. This needs a #SystemsSafety perspective.
Firstly, there is no #PublicSafety value (ultimately, the only thing that matters) in "criticizing" #Waymo and #Cruise (or, at least, not without criticizing, say, the #NHTSA at the same time)...
They are operating under a complete and deliberate lack of regulatory scrutiny that should be in place, but is not in place.
#SystemsSafety is not about appearances or YouTube videos or press releases or expanded service areas or the lack of "bad news reports" or numbers on a page.
It is about constantly asking questions and receiving quantifiable answers of system reliability.
No, because "flying cars" have never remotely made financial sense - particularly with the #SystemsSafety concerns which are almost always hand-waved away by the startups.
Competent #SystemsSafety experts always knew that they would not, though.
Because, exactly like I mentioned in the first Toot, no one and no firm has ever offered a remotely convincing safety case to realize a viable Level 5-capable vehicle.
A Level 5-capable vehicle has an effectively unbounded Operational Design Domain (ODD).
Such an expansive ODD would undoubtedly require a validation strategy that would rely on an unquantifiable number of breakthroughs.
@PeceK All of this said, while the few names I highlighted do not seem to have the "technical chops" at understanding these technologies... almost every single signatory to these various letters stayed silent (or applauded) while:
#Musk and Andrej #Karpathy were engaged in these wrongdoings; and/or
While Musk sent his cult to attack Professor Missy Cummings, a top #SystemsSafety researcher, who first raised these issues years ago.
Additionally, I would argue that it is an ethical responsibility to note that these vehicles are not capable of “driving themselves”, at the very least.
The #SystemsSafety community has spent years publishing open content on these matters.
Interesting how some automotive outlets are reporting the Tesla leak as either a "huge data leak" OR "data shows big problems with Autopilot and FSD", but not both.
@CrackedWindscreen Honestly, I am not sure that I have ever seen any reporting (in a major publication) that really digs down deep into the core issue here - flimsy and non-existent safety processes at #Tesla.
Some competent #SystemsSafety experts tapped by some publications are given space to make nods to it, but really not enough space.
I mean… the lack of a process at Tesla is clear… that definitively means that Tesla is doing nothing to understand their own system.
#Handelsblatt, a prominent #German business publication, has obtained a considerable amount of internal Tesla files that may point to #PublicSafety and vehicle #safety concerns that were possibly hidden by Tesla.
Apparently, Handelsblatt is releasing reports on these files in stages.
I will be watching this space... probably to add comments after more is publicly released.
A "culture of secrecy" with the traits cited is a ubiquitous sign, in the #SafetyCritical systems space, that a firm is engaged in willful wrongdoings.
A canary in the coal mine, if you will.
#SystemsSafety experts and I have spent years on Twitter, Reddit and, now, the Fediverse attempting to detail #Tesla's observable wrongdoings.
Those are ultimately opinions though.
If Tesla were hiding actual defects and/or lying to safety regulators, various criminal charges are in play.