@albertcardona@bahome Too often, though, it does change meaning — sometimes subtly — by substituting not-quite-synonyms and turning weaker but appropriate claims into stronger but false ones. I saw this in the datasets for the AI-detector bias paper (arXiv:2304.02819) and in my own informal tests. It seems to me that rewriting text while preserving its meaning requires understanding that meaning, which LLMs can't do.