Last term, I gave students a final assignment option: use #ChatGPT to write their final essay, then critique the results. It was great, and I'll be doing it again. Everyone in #academia should try something like this with their class if they can. Short 🧵 on what we found.
You can detect the really obvious stuff - e.g. the phrase "as a large language model, I..." - but beyond that, it's really hard to detect LLM output consistently.
The current anti-cheating tools have both a high false-positive rate and a high false-negative rate.
You're 100% right about AI detectors being inaccurate. Two additional things I've read:
They fall for their own tricks, i.e. if you tell GPT "make it sound more like a human wrote it," the output can pass more often than text a human actually wrote. That's because the detector scores "human-likeness" on the same language features the model is being asked to optimize, so the comparison cuts the same way for input and output.
The other thing is they're culturally biased: detectors are more likely to falsely flag a non-native English speaker's human-written work as AI-generated, surprising no one. 🙄