adamjcook, to random

One of the major misconceptions with these systems is that they are an "AI".

But they are not.

They are safety-critical systems.

Such systems carry additional burdens that are foreign to more consumer/business-level systems - in particular, the need to exhaustively quantify "the unseen" through objective analysis.
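One established way to make "the unseen" objective is failure-mode analysis. As a minimal sketch (not Tesla's actual process, and with purely hypothetical failure modes and scores), here is a classic FMEA-style ranking in Python, where every potential failure mode, including ones never yet observed in the field, receives a Risk Priority Number (RPN = severity × occurrence × detection):

```python
# Minimal FMEA-style sketch. All failure modes and scores below are
# hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (remote) .. 10 (frequent)
    detection: int   # 1 (certain to detect) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the standard FMEA product.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("phantom braking on highway", severity=8, occurrence=4, detection=3),
    FailureMode("missed pedestrian at night", severity=10, occurrence=2, detection=7),
    FailureMode("lane drift in construction zone", severity=7, occurrence=5, detection=4),
]

# Rank by RPN: high-severity, hard-to-detect modes surface near the
# top even if they have never occurred in any test drive -- that is
# the point of quantifying the unseen.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.rpn:4d}  {m.name}")
```

The key property is that the ranking is driven by analysis, not by what has happened to be observed so far.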

It is something that, most notably, Tesla fails to recognize with respect to their program, likely by design.

Let's explore two examples.

adamjcook,

The term "safety" is tossed around quite a bit by Musk, by Tesla and by these untrained human drivers.

But a complete assessment of safety covers not only what is seen (the FSD Beta-active vehicle did not appear to collide with anything), but also the potential, future unhandled failure modes that are unseen.

That is the only pathway of progress, the only pathway towards a continuously safe system.

Anything else, like what is happening with the FSD Beta program, is just goofing around.
