Even when Musk was largely talking about #Tesla back in the day, it was in large part to exaggerate the capabilities of #Autopilot, which has negatively impacted roadway and public safety.
And we can see the recent public and environmental safety issues stemming from Musk's decision to re-insert himself into #SpaceX.
Taking a break from #Musk's Hate Train on the Hellsite to recall this series of Tweets from a few years ago.
While under-appreciated then and now, the Tweet thread by Musk posted below contains an extremely damning #SystemsSafety admission and it displays the considerable #PublicSafety blind spot associated with remotely updating #SafetyCritical systems without oversight.
Musk has no clue what he admitted to here, but systems safety experts do.
My thread is about how this series of Tweets from Musk reveals quite a bit about #Tesla's internal engineering processes and how troubling those processes undoubtedly are.
If I see Secretary #Buttigieg do another interview where he pontificates with the press about the wisdom of #Tesla's #Autopilot product name... I might just lose it.
We are far beyond that now in outsized vehicle design dangers - and far beyond just Tesla's wrongdoings anymore (although Tesla's wrongdoings do remain somewhat unique and extreme).
Secretary Buttigieg is still putzing around on a field that is nearly a decade old at this point.
#Tesla hand-waves away this #HumanFactors fact on their official Autopilot product website as well, as shown below.
That “relaxation”, that “workload reduction” you might feel when using #Autopilot or #FSDBeta?
Guess what that really is?
Complacency.
Deadly complacency.
And it strikes when you least expect it, when the vehicle suddenly encounters a failure and it takes you an additional second or three to regain situational and operational awareness.
#Autopilot and #FSDBeta-equipped vehicles are Level 2-capable vehicles - with the exact same limitations as many other vehicles on the market today.
Namely, the key limitation is that the human driver must remain the fallback for the dynamic driving task and any vehicle failures, at all times and under all conditions.
Effectively, that means that the human has the exact same control responsibilities between the two vehicles shown below.
If your #NHTSA investigators are still wondering, after years of twiddling their thumbs, why #Autopilot-active, #Tesla vehicles keep slamming into the back of roadside emergency vehicles… I just provided you with the answer.
And, in that vein, I will say it again... where were these same individuals when #Tesla was using #AI as a masquerade to wholesale and sloppily experiment on the public via its #Autopilot and #FSDBeta programs?
I see that several signatories from #Google #DeepMind are on here...
Hmm. Interesting.
I saw many of their DeepMind colleagues clapping like seals during Tesla's "AI Days" - farcical events that presented zero safety cases.
Elon Musk's brain-computer interface start-up Neuralink has begun recruiting people for its first human trial; discarded bodies will be used to train Tesla Autopilot.
This is a good piece, but the one small nit that I have again is bringing in a "data argument".
I know that it is tempting for an audience that is generally not at all experienced in #SafetyCritical systems, but I would highly recommend resisting it.
In fact, there is no reliable data here; the crash rate could be far higher.
It is our obligation to assume that the "crash rate" of #Tesla's #FSDBeta product is unquantifiably higher.
Complex issues will surface in the courtroom like #HumanFactors, mode confusion and #AutomatedDriving system internals that, frankly, judges and juries are going to be strained to understand and appreciate.
But the #NHTSA, in its infinite wisdom, is just tickled pink to have private litigation deal with #Tesla’s vast #Autopilot wrongdoings because the agency can continue to do nothing, which they do well.
I am not going to lecture Dr. Hotez on how to deal with this... but I can say this... the strategy on the #Musk-side and of his sycophants is to passionately pretend like they want to have a Good Faith debate... but they are just looking for "gotcha soundbites".
“I think we will get the hallucination problem to a much, much better place,” Altman said. “I think it will take us a year and a half, two years. Something like that. But at that point we won’t still talk about these. There’s a balance between creativity and perfect accuracy, and the model will need to learn when you want one or the other.”
What is Altman talking about? An #LLM has no notion of truth or accuracy, so you can't just dial up some "truth coefficient."
@kentindell@chrisoffner3d In fact, I see quite a bit of similarities between what Chris has mentioned in the post cited below and the extremely dangerous assumptions that #Tesla has been using to underpin their #FSDBeta program - namely, the so-called "generalized self-driving" (no defined Operational Design Domain).
The Tesla #Autopilot Team has always embraced a very primitive safety strategy that they try to sell as "validation" - if I am being generous.
The piece that the New York Times published about the letter from #OpenAI and friends about the existential risks posed by AI has so much bull in it that it might have been written by a chatbot as well.
Seriously stuff like: "They say the technology has shown signs of advanced abilities and understanding [...]"
Sure, take the points made by an executive at face value, it's not like he's trying to hype the thing into the stratosphere and pump up his shares.
@gabrielesvelto The media, with some small, notable exceptions, had embraced this same pattern for years with respect to #Tesla's #Autopilot and #FSDBeta programs (broadly, advancing #Musk's hype and lies without analysis)... and it has resulted in completely avoidable death and injury.
There was never a reckoning on that and, as a result, we are now at the next stage.
NHTSA Gives Tesla Two Weeks To Show Its Work On Autopilot And FSD
In a letter dated July 3, NHTSA asks Tesla to describe all changes to the systems in the “design, material composition, manufacture, quality control, supply, function, or installation of the subject system, from the start of production to date.”
Tesla's autopilot seems to be only semi-reliable. Far from foolproof, it relies on drivers to be vigilant in a new way.
I sometimes think: as a pedestrian and user of mass transit, a failure could impact me, too …
"The final 11 seconds of a fatal Tesla Autopilot crash
A reconstruction of the wreck shows how human error and emerging technology can collide with deadly results"
Tesla's Full Self-Driving Beta software has recently come under fire and is struggling with numerous setbacks.
In addition to collisions and federal investigations, there now appears to be a new incident that could shake confidence in the safety of the system.
A former Tesla employee, who said he was harassed, threatened and eventually fired after expressing safety concerns, leaked personnel records and data about the company’s Autopilot driver-assistance software, including thousands of accident reports.
The German Handelsblatt really did an amazing job in investigating this story!