Let's explore why a system capable of partial driving automation (like FSD Beta), and automated driving systems more broadly, are decidedly nothing like ChatGPT.
#Tesla's "AI Days" often featured Tesla's internal misunderstanding of this - often making direct comparisons between consumer/business #AI systems without presenting anything close to an accompanying safety case that would incorporate the physical realities of these systems.
For many in the #MachineLearning community, this went unnoticed and, at times, was ignorantly applauded.
If one is looking for the primordial "AI Safety" issues we are dealing with today, I would start there.
The fact is, again, that #Tesla has never maintained a Safety Management System (which would robustly govern the conduct of any human participants) nor a systems safety lifecycle in their #FSDBeta program.
After all, that would be quite expensive.
And, as such, these so-called "test drivers" are not only hung out to dry by Tesla, but also left out there to construct these absurd, dangerous concoctions and justifications out of whole cloth - all with the public's safety fully exposed.
#Tesla cannot care, because Tesla has committed itself to fully "productizing" an unvalidated safety-critical system and providing that system to its customers at a high price.
A safety-critical system cannot be safely "tested" by untrained consumers.
We do not even allow otherwise highly-trained, but "normal", commercial aircraft pilots to operate pre-certification aircraft, even over unpopulated areas.
Had this Supercharger site been full, several of us might have been tempted to attach a tow strap to this guy's tow hooks and show him that a Tesla has enough torque and traction to drag a super-size Ram pickup, even with the brakes set, front hubs locked, and the transmission in gear or park.
Or perhaps this is the new Tesla model "A" - for A-hole?
Not a fan of the company’s founder but… today is our 1-year #Tesla-versary. My wife and I are work-from-home empty nesters now, so we downsized from 2 cars (2019 RAV4 & 2018 Leaf) to one last year. I have my #motorcycle available if I can’t use the car, which is rare.
Overall, our expectations were exceeded. It cost us around $269 in electricity to drive 6,956 miles in 12 months. Zero maintenance costs. Zero complaints.
Elon Musk, on CNBC, says he'll continue to be in charge of "product stuff" after new #Twitter CEO comes on board, explaining she was recruited because of her experience with advertising.
He'll have a launch event for his own #AI, adds Musk, explaining he doesn't want to talk much about the #Tesla project right now but predicting it'll result in a "ChatGPT moment" with self-driving.
@kentindell The real question, in my view, is if this individual even recognizes that this vehicle behavior was safety-deficient.
What we are dealing with here is "stage 2" of #Tesla's years-long, unchecked marketing campaign of exaggerating the capabilities of their automated driving systems - that is, human drivers are now just "making up their own rules" in order to advocate for Tesla (the company) and for #FSDBeta.
Meanwhile, the #NHTSA is still hopelessly stuck on "stage 1".
I do not recall an open letter with thousands of prominent signatories, a hastily-assembled White House Task Force and a big Senate hearing for automated driving systems.
You know... #SafetyCritical systems that are masquerading as #AI that have killed people and have the capacity to readily cause immediate injury and death.
We have an unregulated Wild West out there on that.
Any automated driving system regulations that exist, exist as a patchwork at the US state level.
Those regulations are extremely weak and do not even attempt to regulate partial automated driving systems (i.e. #Tesla #Autopilot, #GM Super Cruise, #Ford BlueCruise and so on).
@digitalcourage @phranck #Tesla driver here! The cameras do not record continuously, but only when #SentryMode is active. A recording is then indicated externally by a light signal. Inside the car, you can also see it on the screen shown by @Breznsoiza.
The sticker is completely unacceptable, since it covers cameras that are necessary for driving safety. What if someone doesn't notice it before setting off? Who is liable in the event of an accident? If someone put a sticker on my car, I would report it to the police.
@digitalcourage @phranck @Breznsoiza A more nuanced explanation would be helpful - see the facts I mentioned. It is simply false that a #Tesla records continuously in general. It doesn't even do so with Sentry Mode activated. The external visualization indicates when a recording is in progress.
And your camera stickers are a potential accident hazard, because Tesla needs the cameras for its driver-assistance systems. Please get rid of them as quickly as possible.
Here we have another #Tesla #FSDBeta clip showing some highly-questionable automated vehicle behavior (the automated vehicle proceeds through a marked crosswalk that a pedestrian has already entered).
Let's put our #SystemsSafety caps on and take a look here...
As such, the "great debate" over the legality of this automated vehicle behavior is premature.
Not immaterial (as traffic laws are important), but premature.
The immediate question should be… was this automated vehicle behavior quantifiably expected by those internal to a would-be systems safety lifecycle that #Tesla is supposed to be maintaining?
What was the underlying systems safety foundation of this automated vehicle behavior, if any?
These are but a few of the foundational questions that should be asked here.
If you have been following my other threads on this topic, you will not be surprised when I say that #Tesla does not maintain a systems safety lifecycle nor a Safety Management System in the #FSDBeta program.
As such, Tesla is deprived of these very vital bits of information.
These FSD Beta human drivers are just... goofing around out there... under an absurd guise that they are "training" a #ML model.
The "great debate" exists on that Twitter thread because #Tesla is not maintaining an appropriate process.
Where scientific analysis should exist under a robust process, the vacuum Tesla has created is instead filled with hand-wavy, emotionally-driven, subjective opinions and tribal knowledge.
And all of that is entirely worthless technically and in terms of systems safety.
It is the relentless pursuit of these vital questions (a pursuit missing from #Tesla's sloppy "testing", which, in part, utilizes untrained human drivers) that differentiates this safety-critical system from a consumer/business "AI" product.
Not everything critical can be captured via automated data collection.
What is needed is robust, exhaustive and physical validation with all participants managed under a common Safety Management System and systems safety lifecycle.
Lastly, as I always do at the end of these threads, at no time are #Tesla vehicles capable of "driving themselves"... despite how it may appear.
Appearances with these systems are extremely dangerous - and Tesla has done much to encourage dangerous appearances in order to sell cars and pricey add-ons.
The human driver has the exact same vehicle control responsibilities in a Tesla vehicle as if they were driving a 1995 Dodge Neon.