adamjcook,

One of the major misconceptions with systems like FSD Beta is that they are an "AI".

But they are not.

They are safety-critical systems.

Such systems carry additional burdens that are foreign to more consumer/business-level systems - in particular, the need to exhaustively quantify "the unseen" through objective analysis.

It is something that, most notably, Tesla fails to recognize with respect to their FSD Beta program, likely by design.

Let's explore two examples.

kentindell,

deleted_by_author

    adamjcook,

    Observe the clip below.

    This is an untrained human driver operating a vehicle with FSD Beta active.

    The partially automated vehicle, to the excitement of the human driver, appears to successfully negotiate a "Michigan Left" turn (a median U-turn crossover used in place of a direct left turn) at almost the same time that the traffic light turns from red to green.

    Why do I say "appears to"?

    Can you see it?

    https://youtube.com/clip/UgkxP9qC0JIE0F0eAf4eO5wh1OsYg-BsdmWW

    adamjcook,

    So.

    One of the major, ahem, "selling points" of Tesla's FSD Beta program is that untrained human drivers are providing massive, rich "training data" to enhance the reliability of the total Tesla fleet.

    These untrained human drivers are able to provide feedback, both manually via on-screen buttons and, more recently, by voice note when the human driver "disengages" the automated system.

    Additionally, according to Tesla, the system can send data back to Tesla automatically after a disengagement event.
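
    For illustration only - a minimal sketch of what a disengagement feedback record could look like. Every name below is my own hypothetical assumption; none of it reflects Tesla's actual telemetry format, which is not public.

        from dataclasses import dataclass, field
        from datetime import datetime, timezone
        from typing import Optional

        # Hypothetical schema - field names are illustrative assumptions,
        # not Tesla's actual (non-public) telemetry format.
        @dataclass
        class DisengagementEvent:
            timestamp: datetime                 # when the human driver took over
            trigger: str                        # e.g. "manual_takeover"
            driver_note: Optional[str] = None   # on-screen button or voice note feedback
            snapshot_ids: list = field(default_factory=list)  # sensor clips sent back

        event = DisengagementEvent(
            timestamp=datetime.now(timezone.utc),
            trigger="manual_takeover",
            driver_note="Hesitated at unprotected left",
        )

    Note the structural blind spot: if the human driver never disengages, no such record is ever created, and a questionable maneuver is invisible to this feedback channel.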

    adamjcook,

    The human driver, in the clip above, did not disengage the automated system.

    But the human driver also asked no questions of the larger systems-level context of this maneuver!

    Here is an important question - assuming that the traffic light had remained red, would this automated vehicle have stopped, or have been able to stop, in time?

    You might have an opinion on that.

    I have an opinion on that.

    This human driver in the clip might have an opinion on that.

    Who is right?
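
    For contrast, here is the kind of back-of-the-envelope, objective check a systems safety process would start from. A minimal sketch - the speed, friction, and reaction-time values are assumptions for illustration, not measurements from the clip.

        # Could the vehicle have stopped if the light had stayed red?
        # All input values are assumed for illustration.
        G = 9.81  # gravitational acceleration, m/s^2

        def stopping_distance_m(speed_mps, reaction_s, mu):
            # Distance covered during the reaction delay, plus braking
            # distance on a surface with friction coefficient mu.
            return speed_mps * reaction_s + speed_mps ** 2 / (2 * mu * G)

        speed = 13.4      # ~30 mph approach speed (assumed)
        reaction = 1.0    # system/driver response latency in seconds (assumed)
        for mu in (0.8, 0.4):  # dry vs. wet pavement
            print(f"mu={mu}: ~{stopping_distance_m(speed, reaction, mu):.0f} m to stop")

    The specific numbers do not matter. What matters is that a validation process answers the question with measured inputs and physics - not with dueling opinions.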

    adamjcook,

    That is the importance of, in this case, maintaining a robust systems safety lifecycle and, for the human test drivers, a robust Safety Management System.

    That is, the human test drivers are continuously read into the broader systems aspects of the vehicle under test.

    Maintaining such a system converts would-be subjective opinions like the above (which have no technical or safety value) into objective analyses that create quantifiable reliability progress.
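
    As a sketch of what that conversion looks like: a subjective impression ("the turn looked fine") becomes a reproducible pass/fail criterion over measurable quantities. The thresholds below are illustrative assumptions, not any company's actual requirements.

        # Objective acceptance criteria for the maneuver under test.
        # Thresholds are illustrative assumptions only.
        def evaluate_left_turn(min_gap_s, light_state_at_commit, occluded_lane_confirmed):
            return (
                min_gap_s >= 4.0                      # assumed minimum time gap to oncoming traffic
                and light_state_at_commit == "green"  # light verified green before committing
                and occluded_lane_confirmed           # furthest lane observed, not inferred
            )

        # A trained test driver logs measurements; the verdict is reproducible.
        print(evaluate_left_turn(min_gap_s=2.1,
                                 light_state_at_commit="red",
                                 occluded_lane_confirmed=False))  # -> False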

    adamjcook,

    But maintaining such a process is extremely costly and cannot be productized - both being priorities for Tesla and for Musk.

    Therefore, it is not done.

    Besides the clear lack of technical value, the danger here is obvious.

    With partially automated driving systems (like FSD Beta), where the human driver is the fallback, the experience shown in the clip above breeds complacency.

    That is, a maneuver that appeared successful in the past, possibly by coincidence, may not be so "lucky" the next time.

    adamjcook,

    Hmm.

    What if the furthest oncoming traffic lane (the lane closest to the curb) had high-speed traffic instead of slow-moving traffic in this scenario?

    Would the FSD Beta-active vehicle have been able to sufficiently capture the necessary physical situation before committing to the left-hand turn?

    Note how the vehicle in the furthest lane only appears on the visualization on the vehicle HMI after the vehicles in the closest lane move out of the way...

    Very crucial questions here.
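
    A rough time-gap calculation shows why. The speeds and distances here are assumptions for illustration, not measurements from the clip.

        # If the occluded lane only becomes visible late, how much margin
        # remains against a fast oncoming vehicle? Assumed values only.
        oncoming_speed = 22.0   # m/s (~50 mph), assumed high-speed traffic
        reveal_distance = 60.0  # m, assumed distance at which the hidden car first appears
        turn_duration = 4.0     # s, assumed time to clear the oncoming lanes

        time_to_conflict = reveal_distance / oncoming_speed
        margin = time_to_conflict - turn_duration
        print(f"Oncoming car arrives in {time_to_conflict:.1f} s; margin = {margin:+.1f} s")
        # -> arrives in ~2.7 s; margin -1.3 s. Committing to the turn on
        #    late, partial perception leaves no room for error.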

    adamjcook,

    What can be established here?

    The validation process that should exist for this automated vehicle is being totally deprived of crucial, objective analyses.

    No amount of automated training can replace that.

    Simulated environments?

    Even the best, closed-loop simulations have significant domain gaps.

    No, what is necessary is exhaustive, physical and controlled testing and validation to ask these questions continuously.

    Training and simulations are tools to aid validation - not validation itself.
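
    To make "domain gap" concrete, here is a toy illustration on entirely synthetic data: a model that fits its "simulated" training distribution well can degrade sharply on a shifted "real-world" distribution it never saw.

        import numpy as np

        rng = np.random.default_rng(0)

        # "Simulation": clean conditions, simple linear relationship.
        x_sim = rng.uniform(0, 1, 1000)
        y_sim = 2.0 * x_sim + rng.normal(0, 0.05, 1000)

        # "Real world": shifted inputs plus an effect the sim never modeled.
        x_real = rng.uniform(1, 2, 1000)
        y_real = 2.0 * x_real + 1.5 * np.sin(3 * x_real) + rng.normal(0, 0.05, 1000)

        # Fit on the simulation, evaluate on both domains.
        w, b = np.polyfit(x_sim, y_sim, 1)
        mse = lambda x, y: float(np.mean((w * x + b - y) ** 2))
        print(f"sim MSE:  {mse(x_sim, y_sim):.3f}")    # small
        print(f"real MSE: {mse(x_real, y_real):.3f}")  # far larger: the domain gap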

    adamjcook,

    The term "safety" is tossed around quite a bit by Musk, by Tesla and by these untrained human drivers.

    But a complete assessment of safety covers not only what is seen (the FSD Beta-active vehicle did not appear to collide with anything), but also the potential, future unhandled failure modes that remain unseen.

    That is the only pathway of progress, the only pathway towards a continuously safe system.

    Anything else, like what is happening with the FSD Beta program, is just goofing around.
