adamjcook, to tesla

🧵 Here is my take on this development: this case should never have made it to judges and juries.

It does make it to judges and juries because of the deliberate disinterest of the US’s theoretical auto safety regulator - the #NHTSA.

We are dealing with extraordinarily complex and highly-opaque system engineering issues here.

https://www.bloomberg.com/news/articles/2023-08-17/tesla-failed-to-fix-autopilot-after-fatal-crash-engineers-say

adamjcook,

Complex issues will surface in the courtroom, such as mode confusion and system internals, that, frankly, judges and juries are going to be strained to understand and appreciate.

But the #NHTSA, in its infinite wisdom, is just tickled pink to have private litigation deal with #Tesla’s vast wrongdoings because the agency can continue to do nothing, which it does well.

eff, to random

There are about a dozen questions we have about self-driving cars and the massive amount of footage they collect. Chief among them: exactly what do police get access to when they get a warrant for footage? https://www.eff.org/deeplinks/2023/08/impending-privacy-threat-self-driving-cars

meltedcheese,

@eff 1/ This certainly could become a problem, but not at the current level of technology. I know the tech well, having worked as Chief Scientist at a major automotive company until I retired last year. Bottom line: it is far easier to collect vast amounts of data from vehicle sensors than to use it, store it or transmit it. The amount retained today in consumer vehicles, if any, is very small. Think x 8 +/- (Continued)…
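A rough back-of-the-envelope calculation illustrates the point about raw sensor data rates. Every figure below (camera count, resolution, frame rate) is an illustrative assumption, not the spec of any particular vehicle:

```python
# Back-of-the-envelope estimate of raw camera data volume.
# All parameters here are illustrative assumptions, not real vehicle specs.

CAMERAS = 8                   # assumed number of cameras
WIDTH, HEIGHT = 1920, 1080    # assumed per-camera resolution (pixels)
BYTES_PER_PIXEL = 3           # uncompressed 24-bit RGB
FPS = 30                      # assumed frame rate

bytes_per_second = CAMERAS * WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
gb_per_second = bytes_per_second / 1e9
tb_per_hour = bytes_per_second * 3600 / 1e12

print(f"~{gb_per_second:.2f} GB/s raw, ~{tb_per_hour:.1f} TB per hour of driving")
```

Even with generous compression, multi-terabyte-per-hour raw rates are why fleets cannot simply retain or upload everything the sensors see.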

adamjcook, to cars

Let's talk about vehicles equipped with #DrivePilot a bit - a Level 3-capable vehicle that has recently been "approved" in a handful of US states.

This article almost entirely focuses on the legal dynamics of consumer liability should this vehicle create a direct (or, presumably, an indirect) incident.

But, as always, I want to talk about what I feel are the realities at work here and the many foot-guns associated with them.

https://www.autonews.com/mobility-report/mercedes-drive-pilot-automated-system-poses-legal-questions

🧵👇

adamjcook, (edited)

The legal expert cited, Professor William Widen, and Professor Phil Koopman have offered their thoughts on attributing liability (between the vehicle and the human driver), as linked in the article.

That work is here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4444854

It is a sensible proposal, and Professor Koopman is one of the foremost experts in safety-critical systems and autonomous vehicle safety.

Still, I submit that no proposal can quantifiably protect consumers with this type of system.

Why?

adamjcook, to random

One of the major misconceptions about automated driving systems is that they are an "AI".

But they are not.

They are safety-critical systems.

Such systems carry additional burdens that are foreign to more consumer/business-level systems - in particular, the need to exhaustively quantify "the unseen" through objective analysis.
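One concrete, if deliberately simplified, way to put a number on "the unseen" is the Good-Turing missing-mass estimate, which uses the count of event types observed exactly once to estimate the probability that the next observation is something never seen before. This is one illustrative technique, not a claim about any manufacturer's actual validation process, and the event log below is made up:

```python
from collections import Counter

def good_turing_missing_mass(observations):
    """Estimate the probability that the next event is a never-seen type.

    Good-Turing estimate: P(unseen) ~= N1 / N, where N1 is the number of
    event types observed exactly once and N is the total observation count.
    """
    counts = Counter(observations)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(observations)

# Hypothetical log of on-road event types (illustrative only).
events = ["cut_in", "cut_in", "debris", "jaywalker",
          "cut_in", "debris", "fog_bank"]

p_unseen = good_turing_missing_mass(events)
print(f"Estimated probability mass of unseen event types: {p_unseen:.2f}")
```

The point of such analysis is exactly the burden described above: a safety case must bound what the system has *not* yet encountered, not merely summarize what it has.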

It is something that, most notably, #Tesla fails to recognize with respect to their #FSD program, likely by design.

Let's explore two examples.
