"Five police officers are suing Tesla after being injured by a Model X that plowed into them while they were conducting a routine traffic stop. The incident took place on February 27, 2021, when an allegedly impaired driver over-relied on the Model X's 'Autopilot' system, which reportedly gave off 150 warnings to take control of the vehicle in a 34-minute time span." https://www.autoblog.com/2023/08/10/video-tesla-driver-plowed-into-police-car-despite-150-warnings-from-autopilot/
“I think we will get the hallucination problem to a much, much better place,” Altman said. “I think it will take us a year and a half, two years. Something like that. But at that point we won’t still talk about these. There’s a balance between creativity and perfect accuracy, and the model will need to learn when you want one or the other.”
What is Altman talking about? An #LLM has no notion of truth or accuracy, so you can't just dial up some "truth coefficient."
@kentindell@chrisoffner3d In fact, I see quite a few similarities between what Chris mentioned in the post cited below and the extremely dangerous assumptions that #Tesla has been using to underpin their #FSDBeta program - namely, the so-called "generalized self-driving" (no defined Operational Design Domain).
The Tesla #Autopilot Team has always embraced a very primitive safety strategy that they try to sell as "validation" - if I am being generous.
Tesla's Full Self-Driving Beta software has come under fire lately and has been grappling with numerous setbacks.
In addition to collisions and federal investigations, there now appears to be a new incident that could shake confidence in the system's safety.
#Autopilot and #FSDBeta-equipped vehicles are Level 2-capable vehicles - with the exact same limitations as many other vehicles on the market today.
Namely, the key limitation is that the human driver must remain the fallback for any dynamic driving task or vehicle failures at all times and under all conditions.
Effectively, that means that the human has the exact same control responsibilities in both of the vehicles shown below.
#Tesla hand-waves this #HumanFactors fact as well on their official Autopilot product website, as shown below.
That “relaxation”, that “workload reduction” you might feel when using #Autopilot or #FSDBeta?
Guess what that really is?
Complacency.
Deadly complacency.
And it strikes when you least expect it, when the vehicle suddenly encounters a failure and it takes you an additional second or three to regain situational and operational awareness.
If your #NHTSA investigators are still wondering, after years of twiddling their thumbs, why #Autopilot-active, #Tesla vehicles keep slamming into the back of roadside emergency vehicles… I just provided you with the answer.
#Musk’s completely unsupported and unhinged promises of future therapies, treatments or prosthetic devices enabled by his #Neuralink firm and this #Tesla “#Optimus” humanoid robotics project are pretty damn disgusting.
A new, disgusting low.
Even relative to Tesla’s vast #Autopilot and #FSDBeta wrongdoings, which is saying something.
NHTSA Gives Tesla Two Weeks To Show Its Work On Autopilot And FSD
In a letter dated July 3, NHTSA asks Tesla to describe all changes to the systems in the “design, material composition, manufacture, quality control, supply, function, or installation of the subject system, from the start of production to date.”
I am not going to lecture Dr. Hotez on how to deal with this... but I can say this... the strategy on the #Musk-side and of his sycophants is to passionately pretend like they want to have a Good Faith debate... but they are just looking for "gotcha soundbites".
This is a good piece, but the one small nit that I have again is bringing in a "data argument".
I know that it is tempting for an audience that is generally not at all experienced in #SafetyCritical systems, but I would highly recommend resisting it.
In fact, there is no reliable data here at all; the rate could be far higher.
It is our obligation to assume that the "crash rate" of #Tesla's #FSDBeta product is unquantifiably higher.
Really thinking of just dropping #Reddit, for what it is worth.
Dropped my profile off my Mastodon bio.
Reddit management is giving me serious #Musk vibes lately and I have been getting follow spammed 4-5 times daily for the last two weeks.
Met a lot of great technical and #SystemsSafety experts on there though - mostly through pushing back against #Tesla's #Autopilot and #FSDBeta wrongdoings.
Taking a break from #Musk's Hate Train on the Hellsite to recall this series of Tweets from a few years ago.
While under-appreciated then and now, the Tweet thread by Musk posted below contains an extremely damning #SystemsSafety admission and it displays the considerable #PublicSafety blind spot associated with remotely updating #SafetyCritical systems without oversight.
Musk has no clue what he admitted to here, but systems safety experts do.
My thread is about how this series of Tweets from Musk reveals quite a bit about #Tesla's internal engineering processes and how troubling those processes undoubtedly are.
Even when Musk was largely talking about #Tesla back in the day, it was in large part to exaggerate the capabilities of #Autopilot, which has negatively impacted roadway and public safety.
And we can see the recent public and environmental safety issues of Musk's decision to re-insert himself into #SpaceX.
The piece that the New York Times made about the latter, the statement from #OpenAI and friends about the existential risks posed by AI, has so much bull in it that it might have been written by a chatbot as well.
Seriously stuff like: "They say the technology has shown signs of advanced abilities and understanding [...]"
Sure, take the points made by an executive at face value, it's not like he's trying to hype the thing into the stratosphere and pump up his shares.
@gabrielesvelto The media, with some small, notable exceptions, had embraced this same pattern for years with respect to #Tesla's #Autopilot and #FSDBeta programs (broadly, advancing #Musk's hype and lies without analysis)... and it has resulted in completely avoidable death and injury.
There was never a reckoning on that and, as a result, we are now at the next stage.
And, in that vein, I will say it again... where were these same individuals when #Tesla was using #AI as a masquerade to wholesale and sloppily experiment on the public via its #Autopilot and #FSDBeta programs?
I see that several signatories from #Google #DeepMind are on here...
Hmm. Interesting.
I saw many of their DeepMind colleagues clapping like seals during Tesla's "AI Days" - farcical events that presented zero safety cases.
If I see Secretary #Buttigieg do another interview where he pontificates with the press about the wisdom of #Tesla's #Autopilot product name... I might just lose it.
We are far beyond that now in terms of outsized vehicle design dangers - and far beyond just Tesla's wrongdoings anymore (although Tesla's wrongdoings do remain somewhat unique and extreme).
Secretary Buttigieg is still putzing around on a field that is nearly a decade old at this point.
I do not recall an open letter with thousands of prominent signatories, a hastily-assembled White House Task Force and a big Senate hearing for automated driving systems.
You know... #SafetyCritical systems that are masquerading as #AI that have killed people and have the capacity to readily cause immediate injury and death.
We have an unregulated Wild West out there on that.
Any automated driving system regulations that exist, exist as a patchwork at the US state level.
Those regulations are extremely weak and do not even attempt to regulate partial automated driving systems (i.e. #Tesla #Autopilot, #GM SuperCruise, #Ford BlueCruise and so on).
It is like Groundhog Day, once again, at the US Department of Transportation.
Incredible.
Here we are again, Secretary #Buttigieg all but admitting that #Tesla's marketing strategies are deceptive and dangerous... but not leveraging the broad powers of the #NHTSA to do anything about it.
This has only been going on for nearly 10 years now...
In terms of the product name #Autopilot (which is often conflated with #Tesla's Full Self-Driving product in nebulous ways by Tesla, Tesla's "testers" and the media)... here is why "Autopilot" in the context of commercial aircraft has a completely different systems safety foundation than is practical for roadway vehicles.