davidallengreen,
@davidallengreen@mastodon.green

This is a briefing note on the legal rule that computers are presumed to be operating correctly.

Happy to hear from anyone with an informed view on this issue

https://discovery.ucl.ac.uk/id/eprint/10151259/1/briefing-presumption-that-computers-are-reliable.pdf

pauliewaulie,

@davidallengreen on the face of it, this has to be Exhibit A in any 'the law is an ass' thesis 🤣🤣🤣🤣

npg,

@davidallengreen For 30 years, I was a project manager responsible for implementing accounting and other related computer systems. We (myself, the team, the sponsors) always presumed that the system would make mistakes and that the key questions were: Is the error material? What is the process for investigation and (if needed) mitigation? It was never a safe assumption that the system never made mistakes, and everyone in the industry knows that.

rrwo,
@rrwo@floss.social

@davidallengreen

That there is a field devoted to software and hardware verification would suggest the tech industry doesn't assume this. So why should the legal profession assume this?

How do you define "correctly"?

The computer may be doing exactly what it was programmed to, but the specification itself could be flawed.

A related issue is whether the data in the computer is the data generated by the computer: How do you know that someone did not tamper with the data?
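(One standard engineering answer to the tampering question is to record a cryptographic digest when the data is generated and re-check it before relying on the data. A minimal sketch; the file name is invented:)

```python
# Record a SHA-256 digest at generation time, somewhere the system itself
# cannot rewrite; any later edit to the file changes the digest.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# At generation time:
recorded = sha256_of("transactions.log")

# Later, before treating the file as evidence:
assert sha256_of("transactions.log") == recorded, "data has been altered"
```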

Pionir,
@Pionir@toot.bike

@davidallengreen if computers and/or computer programs operated correctly, there would be no bug fixes, security updates or vulnerability management in IT. (There are loads.)

Even the hardware has software bugs, although it's usually called firmware in that context.
E.g.
https://www.securityweek.com/companies-respond-to-downfall-intel-cpu-vulnerability/
And
https://www.zdnet.com/article/log4j-zero-day-flaw-what-you-need-to-know-and-how-to-protect-yourself/

tommorris,
@tommorris@mastodon.social

@davidallengreen The solution is really tough. One can't draw a straightforward inference from past discovered bugs to current undiscovered bugs.

A lot of bugs in a bug tracker might mean sloppy design or poor architecture or bad technical choices... or it might mean a diligent team who has responsibly found and fixed problems.

Ultimately, the problem is we don't really know how to build software very well. It's somewhat remarkable computers work at all.

intuitionismRus,

@davidallengreen in my professional life I help people build high-veracity, high-performance distributed systems. There are a number of well-established concepts and processes that, to my eye, would translate into legal principles. They encapsulate various concepts of “truth”. More than willing to discuss - there is a body of serious-minded engineers out there who would love to see the legal standards made “safer” - the current legal formulation only undermines the law’s legitimacy.

epistatacadam,
@epistatacadam@toot.wales

@davidallengreen
Two of the authors have Gresham College lectures about the risks of computing.
The one close to my heart both professionally and geographically is entitled "Computer bugs in Hospitals: a new killer"

https://www.gresham.ac.uk/watch-now/computer-bugs-hospitals-new-killer

The situation has got worse since then, as the focus on following process rather than adjusting treatment to patient needs has become almost universal, with even small transgressions (to suit the patient) receiving heavy chastisement from the regulators.

tomdewar,
@tomdewar@mas.to

@davidallengreen Looks eminently sensible to me. I hadn’t realised there was ever an assumption of computer-is-perfect (which is barking mad). The computer vs software distinction is important: the former are very reliable, the latter very fallible. My view is that there needs to be proportionality. If you’re proposing to take someone’s liberty away based on a piece of software’s say-so, there ought to be a high bar to clear: proving its accuracy beyond a reasonable doubt. For a speeding fine, not so much.

nowster,
@nowster@fosstodon.org

@davidallengreen There is a saying among seasoned programmers: Garbage In, Gospel Out.

gulfie,

@davidallengreen having seen the preceding post, which gives some context, and as a software engineer of over 30 years’ standing, I would say that from the end-user perspective - that is, the ordinary person in the street - this is a reasonable, nay necessary, presumption.

gulfie,

@davidallengreen where to start…

jfrench,
@jfrench@cupoftea.social

@davidallengreen even if the software works flawlessly it needs good data. Anything that relies on the output of software should also have to prove the provenance of the data that produced that output. Good software and data design should also be able to show an audit log. If it can't do this it can't be trusted.

For example, a video that shows someone did something should be able to prove its source and that it has not been tampered with. Otherwise a deep fake will land someone in trouble very soon. (If it hasn't happened already.)
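(One way to make such an audit log tamper-evident is hash chaining, where each entry commits to the hash of the previous one; a retroactive edit breaks the chain. A minimal sketch; the field names and events are invented:)

```python
import hashlib
import json

def append_entry(log: list[dict], event: str) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any edit or deletion breaks it."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": prev_hash}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "video uploaded by camera 7")
append_entry(log, "clip exported for court")
assert verify(log)
log[0]["event"] = "video uploaded by camera 9"  # retroactive edit...
assert not verify(log)                          # ...is detected
```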

stevel,
@stevel@hachyderm.io

@davidallengreen as a professional software developer I'm reasonably confident that the code I write doesn't work first time - and if it does, I'm suspicious. It's why we are so obsessed with testing - but even then, the tests reflect the assumptions of the authors, explore only a fraction of the "configuration space" of the code, and are only run in environments where the network is stable, the clocks are all in sync, the hardware is considered reliable...
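(A small illustration of how a green test suite merely encodes its authors' assumptions; the function, the values, and the clock-skew gap are all invented for the example:)

```python
import unittest

def elapsed_ms(start_ms: int, end_ms: int) -> int:
    # Hidden assumption: both timestamps come from the same, monotonic clock.
    return end_ms - start_ms

class TestElapsed(unittest.TestCase):
    def test_happy_path(self):
        # Passes, and encodes the author's world-view: time only moves forward.
        self.assertEqual(elapsed_ms(1_000, 1_250), 250)
    # There is no test for end_ms < start_ms (an NTP step, skew between
    # machines), so the suite says nothing about that region of the
    # configuration space - exactly the gap described above.

if __name__ == "__main__":
    unittest.main()
```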

matty,

@davidallengreen Would it were so!

WeeFreeScot,

@davidallengreen the document does not distinguish between faults, failures, and errors. This is a critical mistake.
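(For readers outside the field: in the standard dependability taxonomy a fault is a defect in the system, an error is the incorrect internal state the fault produces, and a failure is the observably wrong behaviour. A contrived sketch, all names invented:)

```python
# Fault: a latent defect in the code - here, ">" should be ">=".
def qualifies_for_refund(amount_paid: float, threshold: float = 100.0) -> bool:
    return amount_paid > threshold  # the fault

# Error: for the boundary input the internal decision is wrong, even though
# nothing crashes and nothing is logged.
# Failure: the wrong answer becomes externally visible.
print(qualifies_for_refund(100.0))  # False - the customer is wrongly denied
```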

richard0x4a,

@davidallengreen As a software developer, I think this on page 4 is, um, an extremely optimistic view:

"All the documents and records mentioned above are routinely kept or are readily available for all professionally developed and managed systems, so disclosure is not onerous."

But I do think that if such records are not present, that should raise serious questions about the reliability of evidence based on computers.

jmlw1,

@davidallengreen
Thank you.

One might assume that the hardware is operating correctly - although this may rely on error-correction - and executing its programming subject to any external dependencies.

But assuming that the programming is correct is almost certainly fallacious in at least some cases, especially those where the program is operating outside its designed and tested parameters.
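(A toy illustration of "correct within its designed and tested parameters, silently wrong outside them"; the function and the 16-bit storage width are invented for the example:)

```python
# A tally stored in a 16-bit field behaves exactly as specified for the
# range its designers tested, then silently wraps around outside it.
def add_to_counter(counter: int, n: int) -> int:
    return (counter + n) & 0xFFFF  # 16-bit storage

print(add_to_counter(100, 50))      # 150 - correct in the tested range
print(add_to_counter(65_500, 100))  # 64 - "operating correctly", yet wrong
```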

bazzargh,
@bazzargh@hachyderm.io

@davidallengreen last para, (2) should maybe read "the number of relevant bugs...". It's rare to see a large system that isn't absolutely drowning in bugs of all shapes and sizes - not just in-house, but in updates to 3rd-party dependencies - even when critical paths, where the notion of "correct" is well defined, are thoroughly tested and working.
In handling incidents I'll often spot dozens of bugs, but have to focus on the one that caused the outage.

iain,

@davidallengreen
I’m a career software developer, and reading the “practical proposal” doesn’t give me any heebie-jeebies.

Still, I wonder how many systems will pass this bar … and if any really should.

There have been attempts to turn coding into a regulated field with real standards (à la proper engineering), all completely unsuccessful. The modern world is built on gee-whiz flashy software, and it’s been achieved by building complexity up to the stars and never looking closely at correctness.

materialsalchemy,

@davidallengreen From a programming viewpoint, that looks like a sensible and proportionate methodology. However, I'm concerned by the implicit assumption here: "2) shows that the number of bugs is sufficiently high to question the reliability of the system". It's known that any system with no bugs reported or fixed has - almost by definition - undiscovered bugs in it, and that the number of bugs fixed in the past doesn't correlate at all with the system's current reliability.

Sanakism,
@Sanakism@mastodon.online

@davidallengreen from a software engineer's perspective, point 9 in that document seems reasonable at first glance, but could still have issues.

It's frequently the case that software appears to be working correctly but exhibits incorrect behaviour around boundary conditions (minimums, maximums, 0, etc.) or with unorthodox user input (e.g. someone clicking a button twice if the designer expected only one click).
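(A concrete instance of the boundary-condition point; the function and numbers are invented:)

```python
def pages_needed(items: int, per_page: int = 10) -> int:
    # Looks right for most inputs the author happened to try...
    return items // per_page + 1

print(pages_needed(25))  # 3 - correct
print(pages_needed(20))  # 3 - wrong at the boundary: 20 items fit on 2 pages
print(pages_needed(0))   # 1 - wrong for the empty case, too
# A correct version would be (items + per_page - 1) // per_page.
```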

stevesplace,

@davidallengreen No computer runs "correctly" by anyone's definition. All run software which, as we are well aware, is often flawed. Even the operating systems that software runs on get numerous security upgrades, let alone patches for errors. Furthermore, CPUs themselves make mistakes and need patches; CPU pipelines are vulnerable to exploit, as is all software. This "correctly" bullshit is Kafkaesque. Better to presume that all computers run incorrectly and require proof that a particular one runs correctly - an impossibility. That paper is trash.

SaveTheOkapi,

@davidallengreen uh, that’s insane. Software can be operating without any runtime errors or apparent bugs and still be doing completely the wrong thing.

fnxweb,

@davidallengreen I'm an aging software engineer. I'd always presume the software was faulty if an issue was claimed. No amount of rigor can prove software is perfect nowadays, since it never is.
