Happy to be corrected by an expert, but it seems clear to me that every "AI detector" for text / image / video / audio is a complete scam, and they should never be used in any context.
@maxkennerly For images, there are clear indications that detecting fakes in at least some settings is possible. Many models leave clear artifacts that are imperceptible to the naked eye.
For natural language, there doesn't seem to be any sound basis to build a detector on (unless the text is explicitly watermarked).
@maxkennerly It always comes down to the false-positive and false-negative rates... which they don't publish. And even if they did, you'd have to apply Bayes with realistic priors to do anything useful with them. That means it's on whoever is using the tool to weigh the risks in context, including the costs of false positives and negatives. So just a nightmare, really...
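To make the Bayes point concrete, here's a minimal sketch of the base-rate problem. All the numbers below are invented for illustration (no real detector publishes these), but they show how even a detector with seemingly good error rates can produce flags that are no better than a coin flip when the prior is low:

```python
def posterior_ai_given_flag(prior_ai, sensitivity, false_positive_rate):
    """Bayes' theorem: P(AI | flagged), given a base rate and detector error rates."""
    # Total probability of a flag: true positives plus false positives.
    p_flag = sensitivity * prior_ai + false_positive_rate * (1 - prior_ai)
    return sensitivity * prior_ai / p_flag

# Hypothetical scenario: 5% of submissions are AI-written; the detector
# catches 95% of them but also flags 5% of human-written text.
print(posterior_ai_given_flag(0.05, 0.95, 0.05))  # 0.5 — a flagged text is 50/50
```

With those (made-up) numbers, a "95% accurate" detector flagging a document gives you exactly even odds that it's actually AI-generated, which is why the prior matters as much as the published accuracy.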
@maxkennerly If you felt like the problem with computers was that they were too accurate, then boy do Microsoft/Google/etc have an outrageously expensive toy for you!
@maxkennerly There may, briefly, have been a window when it was possible. But they’re clearly now more a source of damaging false positives than anything else.
(Not an expert, per se, but I do have a CS degree and an irregular newsletter on one aspect of ML, including regulation, and so have followed the space decently closely: https://openml.fyi)
@maxkennerly It's become clear that one thing AI sucks at is figuring out what is true and what is fake. (Which raises the question of what the hell it's good for.)
@maxkennerly Seems to me that it's comparable to spam detectors. They'll never be 100% because it's an arms race. But yeah, some will definitely be better than others.