CeeBee

@CeeBee@lemmy.world


CeeBee,

A process called rapid recalcification has existed for almost a decade. I don’t understand why it hasn’t made its way into dentistry yet.

CeeBee,

Yes, because like I said, nothing is ever perfect. There can always be a billion little things affecting each and every detection.

A better statement would be “only one false detection out of 10 million”

CeeBee,

The truth is the numbers I cited are the 1:N stats. The 1:1 numbers are far higher, because you can immediately control for distance, lighting, angle, and gaze direction.

The system I worked with had a 1:1 match rate that was statistically perfect (though nothing is ever 100%).

the technology simply shouldn’t be used in those scenarios as it’s entirely inappropriate, but that’s a separate discussion.

Agreed. Its use as a payment system is just ridiculous.

CeeBee,

LMAO. You have no idea what I built the system for, and I have no skin in the game anymore as I moved on to a completely different industry that doesn’t even use AI at all.

The implication of your argument is the same as with flat earthers: they demand photographic proof of a spherical Earth, but when shown photos from space they simply say it’s fake and NASA is in on the lie.

Sometimes you just can’t get past people’s preconceived biases regardless of the facts.

CeeBee,

Considering the impact of these faults, it is obviously not good enough.

I was throwing a rough number out there, but the true error rate is lower than what I said. Even with those numbers, this tech is statistically safer than driving a car.

The other half of the equation is policy management. Every single one of these systems should operate with a human in the loop, meaning that after a detection is made, it goes to a person who makes the “real” determination on its accuracy.

CeeBee,

100%

I don’t disagree with a word you said.

FR for a payment system is dumb.

CeeBee, (edited )

I have this problem with my face in the TSA pre and passport system

Interesting. Can you elaborate on this?

Edit: downvotes for asking an honest question. People are dumb

CeeBee,

What I’m saying is we had a deployment in a large facility. It was a partnership with the org that owned the facility to allow us to use their location as a real-world testing area. We’re talking about multiple buildings, multiple locations, and thousands of people (all aware of the system being used).

Two of the employees were twins. It wasn’t planned, but it did give us a chance to see if twins were a weak point.

That’s all I’m saying. It’s mostly anecdotal, as I can’t share details or numbers.

CeeBee, (edited )

pages.nist.gov/frvt/reports/…/frvt_1N_report.pdf

It’s a 443-page report directly from the body that does the testing.

Edit: mistyped the number of pages

Edit 2: as I mentioned in another comment, I’ve read through this document many times. We even paid a third party to verify our interpretations.

CeeBee,

With that logic we would need to test the system on every living person to see where it fails.

The system had been tested ad nauseam in a variety of scenarios (including with twins and every other combination you can think of, and many you can’t). In this particular situation, a real-world test in a large facility with many hundreds of cameras everywhere, there happened to be twins.

It’s a strong data point regardless of your opinion. If it were the only one, then you’d have a point. But like I said, it was an anecdotal example.

CeeBee,

That was a really garbage system then. Like disgracefully bad Fisher Price quality.

The reality is that there are more crap systems than really good ones out there. And there are as many algorithms and different ways of doing it as there are companies.

The system I developed was so good, even when we tried all kinds of shenanigans to trip it up, we just couldn’t do it.

CeeBee,

Because using FR for a payment system is dumb.

Everyone here seems to be hyperfixated on the payment system aspect.

I’m talking purely in the context of FR tech being vastly better than what people think, since everyone has this idea that FR tech doesn’t work.

CeeBee,

My references are the NIST tests.

pages.nist.gov/frvt/reports/…/frvt_1N_report.pdf

That might be the one you’re looking at.

Another thing to remember about the NIST tests is that they try to use a standardized threshold across all vendors. The point is to compare the results in a fair manner across systems.

The system I worked on was tested by NIST at an FMR of 1e-5. But we never used that threshold in practice; we always used a threshold that equated to an FMR of 1e-7, which is two orders of magnitude stricter.
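For anyone curious what “a threshold that equates to an FMR of 1e-7” means in practice, here’s a rough sketch of the calibration idea (hypothetical function name and toy numbers, not any vendor’s actual code): run a large set of impostor comparisons, then pick the lowest score threshold whose empirical false match rate stays at or below the target.

```python
import numpy as np

def threshold_for_target_fmr(impostor_scores, target_fmr):
    """Pick the lowest similarity threshold whose empirical FMR
    (fraction of impostor pairs scoring at/above it) is <= target_fmr."""
    scores = np.sort(np.asarray(impostor_scores))[::-1]  # descending
    allowed = int(target_fmr * len(scores))  # false matches we can tolerate
    if allowed >= len(scores):
        return scores[-1]
    # Set the threshold just above the (allowed+1)-th highest impostor score
    return scores[allowed] + 1e-9

# Toy demo: a million simulated impostor scores. Measuring an FMR of 1e-7
# for real would need tens of millions of impostor pairs or more.
rng = np.random.default_rng(0)
impostors = rng.normal(0.2, 0.1, size=1_000_000)
print(threshold_for_target_fmr(impostors, 1e-5))
```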

And my opinion: Entities using facial recognition are going to choose the lowest bidder for their system unless there’s a higher security need than, say, a grocery store. So, we have to look at the weakest performing algorithms.

This definitely is a massive problem and likely does contribute to poor public perception.

CeeBee,

Ya, most upvotes and downvotes are entirely emotionally driven. I knew I would get downvoted for posting all this. It happens on every forum, Reddit post, and Lemmy post. But downvotes don’t make the info I share wrong.

CeeBee,

You know what overfitting is, right?

As you reply to someone who spent a decade in the AI industry.

This has nothing to do with overfitting. Particularly because our matching algorithm isn’t trained on data.

The face detection portion is, but that’s simply finding the face in an image.

The system I worked with used a threshold value that equates to an FMR of 1e-07. And it wasn’t used in places like subways or city streets. The point I’m making is that in the few years of real world operation (before I left for another job) we didn’t see a single false detection. In fact, one of the facility owners asked us to lower the threshold temporarily to verify the system was actually working properly.

CeeBee, (edited )

Ok, sure

Edit: the truth is that saying x% accuracy isn’t entirely correct, because the numbers just don’t work that way. It’s just a way we convey the data to the average person. I can’t count the number of times I’ve been asked “ok, but what does that mean in terms of accuracy? What’s the accuracy percentage?”

And I understand what you’re saying now. Yes, I did have the number written down incorrectly as a percentage. I’ve been on mobile this whole time doing a hundred other things. I added two extra digits.

CeeBee,

Np.

As someone else pointed out in another comment, I’ve been saying the x% accuracy number incorrectly. It’s just a colloquial way of conveying the accuracy. The truth is that no one in the industry uses “percent accuracy”; instead we use FMR (false match rate) and FNMR (false non-match rate), along with some other metrics.
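For anyone unfamiliar with those two metrics, here’s a quick sketch of how they’re computed at a given threshold (toy scores, not any real evaluation data): FMR is the fraction of impostor comparisons scoring at or above the threshold, and FNMR is the fraction of genuine comparisons scoring below it.

```python
import numpy as np

def fmr_fnmr(genuine_scores, impostor_scores, threshold):
    """FMR: impostor pairs at/above the threshold (false matches).
    FNMR: genuine pairs below the threshold (false non-matches)."""
    fmr = np.mean(np.asarray(impostor_scores) >= threshold)
    fnmr = np.mean(np.asarray(genuine_scores) < threshold)
    return fmr, fnmr

# Made-up similarity scores, purely for illustration
print(fmr_fnmr([0.91, 0.88, 0.97, 0.83], [0.12, 0.35, 0.71, 0.08], threshold=0.7))
# -> (0.25, 0.0): one of four impostor pairs clears 0.7, no genuine pair falls below it
```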

CeeBee,

So you are saying yourself that your argument has nothing to do with what’s in the article?..

OP said “reliability standards with a lot of nines have to be met”. All I’m saying is that we’re already there.

CeeBee,

Based on your comments I feel that you’re projecting the confidence in that system onto the broader topic of facial recognition in general; you’re looking at a good example and people here are (perhaps cynically) pointing at the worst ones. Can you offer any perspective from your career experience that might bridge the gap? Why shouldn’t we treat all facial recognition implementations as unacceptable if only the best – and presumably most expensive – ones are?

It’s a good question, and I don’t have the answer to it. But a good example I like to point at is the ACLU’s announcement of their test on Amazon’s Rekognition system.

They tested the system using the default value of 80% confidence, and their test resulted in 20% false identification. They then boldly claimed that FR systems are all flawed and no one should ever use them.

Amazon even responded saying that the ACLU’s test with the default values was irresponsible, and Amazon’s right. This was before there was such a public backlash against FR, and the reasoning for a default of 80% confidence was the expectation that most people using it would do silly stuff like celebrity lookalikes. That said, it was stupid to set the default to 80%, but that’s just hindsight speaking.

My point here is that, while FR tech isn’t perfect, the public perception is highly skewed. If there were a daily news report detailing the number of correct matches across all systems, the few false matches would look vanishingly rare by comparison. The overwhelming majority of news reports on FR are about failure cases. No wonder most people think the tech is fundamentally broken.

A rhetorical question aside from that: is determining one’s identity an application where anything below the unachievable success rate of 100% is acceptable?

I think most systems in use today are fine in terms of accuracy. The consideration becomes “how is it being used?” That isn’t to say that improvements aren’t welcome, but in some cases it’s like trying to use the hook on the back of a hammer as a screwdriver. I’m sure it can be made to work, but fundamentally it’s the wrong tool for the job.

FR in a payment system is just all wrong. It’s literally forcing the use of a tech where it shouldn’t be used. FR can be used for validation where increased security is needed, like accessing a bank account, but never as the sole means of authentication. You should still require a bank card + PIN, and then the system can do FR as a kind of 2FA. The trick here would be to first use a good system, and second to lower the threshold to something bordering on “fairly lenient”. That way you eliminate false rejections while still maintaining an incredibly high level of security. In that case, the chances of your bank card AND PIN being stolen by someone who looks so much like you that they trick the FR are effectively nil (though never truly zero). And if someone is being targeted by a threat actor who can coordinate all of that, the attacker would have the resources to just get around the bank’s cyber security from the comfort of anywhere in the world.
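To make the “FR as a kind of 2FA” idea concrete, here’s a minimal sketch of how the layering could work (all names, thresholds, and the backend interface here are hypothetical, just to illustrate the order of checks, not any real bank’s implementation):

```python
# Hypothetical layered authentication: card + PIN first, then a deliberately
# lenient face check as a second factor. Not a real banking API.

FACE_MATCH_THRESHOLD = 0.55  # lenient on purpose, to avoid false rejections

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def authenticate(card, pin, live_face_embedding, backend):
    # Factor 1: something you have + something you know
    account = backend.verify_card_and_pin(card, pin)
    if account is None:
        return None  # wrong card/PIN; the face is never even checked

    # Factor 2: something you are, used only as confirmation
    reference = backend.face_embedding_for(account)
    if cosine_similarity(live_face_embedding, reference) >= FACE_MATCH_THRESHOLD:
        return account
    return None  # right card and PIN, but the face didn't match
```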

Security in every single circumstance is a trade-off with convenience. Always, and in every scenario.

FR works well with existing access control systems. Swipe your badge card, then it scans you to verify you’re the person identified by the badge.

FR also works well in surveillance, with the incredibly important addition of a human in the loop. For example, the system I worked on simply reported detections to a SOC (security operations center) with all the general info about the detection, including the live photo and the reference photo. Then an operator would have to look at the details and manually confirm or reject the detection. The system made no decisions; it simply presented the info to an authorized person.
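A bare-bones sketch of that human-in-the-loop pattern (hypothetical names, just to show where the operator sits in the flow): the system only queues evidence; the outcome comes from the person reviewing it.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Detection:
    track_id: int
    camera: str
    similarity: float
    live_photo: bytes       # frame captured by the camera
    reference_photo: bytes  # enrolled photo the match was made against

def handle_detection(detection: Detection, soc_queue: Queue) -> None:
    # The system makes no decision of its own; it only forwards the evidence.
    soc_queue.put(detection)

def operator_review(detection: Detection, operator_confirms: bool) -> str:
    # Only the authorized operator's judgement produces an outcome.
    return "confirmed" if operator_confirms else "rejected"
```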

This is the key portion that seems to be missing in all the news reports about false arrests and whatnot. I’ve looked into all the FR-related false arrests, and from what I could determine, none of those cases were handled properly. The detection results were simply taken as gospel truth and no critical thinking was applied. In some of those cases the detection photo and the reference (database) photo looked nothing alike. It’s just that the people operating those systems are either idiots or just don’t care. Both of those are policy issues, entirely unrelated to the accuracy of the tech.

CeeBee,

Fair enough

CeeBee,

We were cheaper on both hardware and software costs than just about anyone else, and we placed easily in the top 5 for performance and accuracy.

The main issue was that COVID came around, and since we’re not a US company and the vast majority of interest was in the US, we were dead in the water.

What I’ve learned through the years is that the best rarely costs the most. Most corporate/vendor software out there is chosen based on just about every consideration aside from quality.

CeeBee,

I think from a purely technical point of view, you’re not going to get FaceID kind of accuracy on theft prevention systems. Primarily because FaceID uses IR array scanning within arm’s reach from the user, whereas theft prevention is usually scanned from much further away. The distance makes it much harder to get the fidelity of data required for an accurate reading.

This is true. The distance definitely makes a difference, but there are systems out there that get incredibly high accuracy even with surveillance footage.

CeeBee,

people with totally different facial structures get identified as the same person all the time with the “AI” facial recognition

All the time, eh? Gonna need a citation on that. And I’m not talking about just one news article that pops up every six months. And nothing that links back to the ACLU’s misleading 2018 “report”.

I’m assuming Apple’s software is a purpose built algorithm that detects facial features and compares them, rather than the black box AI where you feed in data and it returns a result.

You assume a lot here. People have this conception that all FR systems are trained blackbox models. This is true for some systems, but not all.

The system I worked with, which ranked near the top of the NIST FRVT reports, did not use a trained AI algorithm for matching.
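I can’t describe how our matcher actually worked, but as an illustration that matching doesn’t have to be a trained black box, here’s a toy, textbook-style example of a purely geometric approach (normalized landmark distances compared with a simple distance metric); this is hypothetical, not our algorithm or anyone else’s product.

```python
import math

# Toy, non-learned matcher: compare a few normalized distances between facial
# landmarks. Illustration only; real systems use far richer representations.

LANDMARK_PAIRS = [("left_eye", "right_eye"),
                  ("left_eye", "nose_tip"),
                  ("right_eye", "nose_tip"),
                  ("nose_tip", "mouth_center")]

def feature_vector(landmarks):
    """landmarks: dict of name -> (x, y). Distances are divided by the
    inter-eye distance so the vector is scale-invariant."""
    def dist(a, b):
        (ax, ay), (bx, by) = landmarks[a], landmarks[b]
        return math.hypot(ax - bx, ay - by)
    eye_dist = dist("left_eye", "right_eye")
    return [dist(a, b) / eye_dist for a, b in LANDMARK_PAIRS]

def similarity(face_a, face_b):
    """Map the Euclidean distance between two feature vectors to a 0..1 score."""
    d = math.dist(feature_vector(face_a), feature_vector(face_b))
    return 1.0 / (1.0 + d)
```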

CeeBee,

I’ve been hearing about them being wrong fairly frequently, especially on darker skinned people, for a long time now.

I can guarantee you haven’t. I’ve worked in the FR industry for a decade and I’m up to speed on all the news. There’s a story about a false arrest from FR at most once every 5 or 6 months.

You don’t see any reports from the millions upon millions of correct detections that happen every single day. You just see the one-off failure cases that the cops completely mishandled.

I’m assuming that of apple because it’s been around for a few years longer than the current AI craze has been going on.

No it hasn’t. FR systems have been around a lot longer than Apple devices doing FR. The current AI craze is mostly centered around LLMs; object detection and FR systems have been evolving for more than two decades.

We’ve been doing facial recognition for decades now, with purpose built algorithms. It’s not much of a leap to assume that’s what they’re using.

Then why would you assume companies doing FR longer than the recent “AI craze” would be doing it with “black boxes”?

I’m not doing a bunch of research to prove the point.

At least you proved my point.
