"For the moment, we recommend that if #LLMs are used to write scholarly reviews, reviewers should disclose their use and accept full responsibility for their reports’ accuracy, tone, reasoning and originality."
PS: "For the moment," these tools can help reviewers string words together, not judge quality. We have good reasons to keep seeking evaluative comments from human experts.
Update. I've been playing with #Elicit, one of the new #AI #search engines. Apart from answering your questions in full sentences, it cites peer-reviewed sources. When you click on one, Elicit helps you evaluate it. Quoting from a real example:
"Can I trust this paper?
• No mention found of study type
• No mention found of funding source
• No mention found of participant count
• No mention found of multiple comparisons
• No mention found of intent to treat
• No mention found of preregistration"