Such systems carry additional burdens that are foreign to consumer- and business-level #MachineLearning systems: in particular, the need to exhaustively quantify "the unseen" through objective analysis.
This is something that #Tesla, most notably, fails to recognize with respect to its #FSDBeta program, likely by design.
The term "safety" is tossed around quite a bit by Musk, by Tesla and by these untrained human drivers.
But a complete assessment of safety covers not only what is seen (the #FSDBeta-active vehicle did not appear to collide with anything), but also the potential future failure modes that remain unseen and unhandled.
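To make "quantifying the unseen" concrete: one standard objective tool for this kind of analysis is the Good-Turing estimator, which uses how many failure modes have been observed exactly once to bound the probability mass of modes never observed at all. The sketch below is illustrative only; the failure-mode labels and the log are hypothetical, not Tesla data.

```python
from collections import Counter

def unseen_mass_good_turing(observed_failures):
    """Good-Turing estimate of the probability mass held by failure
    modes never yet observed: n1 / N, where n1 is the number of
    distinct modes seen exactly once and N is the total count."""
    counts = Counter(observed_failures)
    n1 = sum(1 for c in counts.values() if c == 1)
    total = sum(counts.values())
    return n1 / total if total else 1.0  # no data: everything is unseen

# Hypothetical disengagement log, each entry a labeled failure mode.
log = ["phantom_brake", "phantom_brake", "lane_drift",
       "missed_cyclist", "phantom_brake", "lane_drift",
       "wrong_turn_signal"]
print(unseen_mass_good_turing(log))  # 2/7 ≈ 0.286
```

The point of such an estimate is that even a "clean" drive log can imply a non-trivial residual probability of encountering a failure mode the system has never exhibited before.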
That is the only pathway to progress, the only pathway toward a continuously safe system.
Anything else, like what is happening with the FSD Beta program, is just goofing around.