albertcardona,
@albertcardona@mathstodon.xyz

Not sure what those who advocate for the use of ChatGPT in scientific writing have in mind. It is the very act of writing that helps us think about the connections and implications of our results, identify gaps, and devise further experiments and controls.

Any science project that can be written up by a bot from tables of results and associated literature isn’t the kind of science that I’d want to do to begin with.

Can’t imagine completing a manuscript not knowing what comes next, because the writing was done automatically instead of me putting extensive thought into it.

And why would anyone bother to read it if the authors couldn’t be bothered to write it? Might as well put the tables and figures into an online archive, stamp a DOI on it, and move on.

bahome,
@bahome@mastodon.social

@albertcardona
I quote David McCullough often:
“Writing is thinking. To write well is to think clearly. That's why it's so hard.”

bahome,
@bahome@mastodon.social

@albertcardona
That said, I do understand where AI tools can help people be understood when writing in a language in which they lack native fluency.

albertcardona,
@albertcardona@mathstodon.xyz

@bahome Writing is rewriting. If ChatGPT helps with achieving native fluency—meaning doesn't change—that is a legit use.

WordsByWesInk,
@WordsByWesInk@mstdn.social

@albertcardona @bahome Too often though it does change meaning — sometimes subtly — by substituting not-quite-synonyms and changing weaker but appropriate claims into stronger but false ones. I saw that in the datasets for the AI-detector bias paper (arXiv:2304.02819) and in my own informal tests. It seems to me that fluency requires understanding meaning, which LLMs can't do.

albertcardona,
@albertcardona@mathstodon.xyz

@WordsByWesInk @bahome

Indeed. “If”. Present-day LLMs are far from that, and there’s no guarantee that the LLM approach will ever handle meaning.

eliocamp,
@eliocamp@mastodon.social

@WordsByWesInk @albertcardona @bahome yes, I sometimes do this: instructing ChatGPT not to alter meaning or style and only to fix grammar errors, wrong prepositions, typos, etc. And then I need to read the result carefully and edit that too. No one should ever publish the output of an LLM without review.

albertcardona, (edited)
@albertcardona@mathstodon.xyz

@eliocamp @WordsByWesInk @bahome

The problem is that of course people do and will continue to do so: publish ChatGPT output without careful review.

A problem that compounds when the person reviewing isn’t skilled enough to notice the problems.

ramblingsteve,
@ramblingsteve@fosstodon.org

@albertcardona yes, why should I be bothered to read something that nobody could be bothered to write?

jouni,

@albertcardona If you end up with similar results to what a human would have written, then what is the difference? I think it is the end result that counts, not how it is done.

albertcardona,
@albertcardona@mathstodon.xyz

@jouni Precisely that a human hasn’t written it, meaning a human didn’t spend time thinking about it, selecting statements to go into the paper, and reserving others to seed future experiments. The learning on the human side didn’t happen. Same issue with undergrads using it to write essays. It’s not the busywork that matters, but the learning and insight that the writing process brings about.

jouni,

@albertcardona The purpose of undergrad writing is to learn to write. Those who write scientific papers surely know how to write, and I don't see the purpose of scientific papers as writing practice. They communicate findings to the public; that is the purpose of papers, and I would say the sole reason for scientific papers to exist. If AI can succeed in writing an academic paper, we have a problem somewhere else.

albertcardona,
@albertcardona@mathstodon.xyz

@jouni I’m afraid here we disagree completely.

TruthSandwich,

@albertcardona

Counterpoint: Reviewing what the bot has written serves the same purpose with less effort.

drdrowland,
@drdrowland@fediscience.org

i'm thinking that using llms to create rough drafts that are then extensively edited offers the best of both worlds: fast production and more attention to detail.

if you're able to jump right to focusing on edits instead of waiting for first drafts, the work becomes far more efficient. do some people love wasting time putting words on paper only to change all those words later? sure, but it's a waste of time. it's far better to be able to swap out entire pages of text quickly.

albertcardona,
@albertcardona@mathstodon.xyz

@drdrowland Not sure I'd like to automatically generate a rough summary of a body of work, peppered with slight inaccuracies and some wild statements, and then have a go at fixing it. There's an anchoring effect there that I don't like for sure. I'd rather anchor myself on my past thoughts on the data at hand and the caveats of how it was generated.

nichg,
@nichg@sigmoid.social

@albertcardona The norms of academic writing include a lot of formalism and boilerplate that I think tends to make people worse writers when they internalize it - passive voice, burying the lede, etc. The better world would be one where people could get papers through peer review without making them worse to read. But in the world we have, I can see people wanting to write in a looser style and hit a button that turns it into the style expected by the scientific community.

albertcardona,
@albertcardona@mathstodon.xyz

@nichg I’d prefer an alternative solution: submit your free-style manuscript to a journal that will take it. There are many. Also, there’s no guarantee that the LLM-mediated transformation from free-style to corseted dusty academic style would work well. Subtle or small changes in meaning can dramatically alter the whole message.

nichg,
@nichg@sigmoid.social

@albertcardona Sure, but given people can't easily just change the realities of funders and tenure panels looking at impact factor and specific prestige journals, I'm not surprised that people are finding solutions to make it more bearable for them personally, and I can't really justify being categorically against them doing so.

I mean, in my case I personally just left academia entirely because of this stuff which is arguably less constructive than a bad technological bypass.

richard_merren,
@richard_merren@mastodon.social

@albertcardona But if you want to churn out a thousand realistic looking articles that raise doubt about climate change or claim that fracking runoff is safe or support gay conversion therapy, then ChatGPT is your man.

albertcardona,
@albertcardona@mathstodon.xyz

@richard_merren

The point has been made that ChatGPT is an excellent spam generator.

djsf,
@djsf@fosstodon.org

@albertcardona I like using ChatGPT as a learning tool by creating short summaries of well-known topics to help me with whatever I'm working on. It's usually pretty good at that.

I wouldn't trust it with much else.

jcolomb,

@albertcardona Also, we should distinguish between using an LLM and using ChatGPT, which is probably the most advanced (technically) and probably the most unethical LLM available.

minzastro,
@minzastro@astrodon.social

@albertcardona Also, there should be nothing wrong with just publishing the data with a brief description. Sometimes it looks like people are obliged to accompany it with 10+ pages that no one will ever read.

albertcardona,
@albertcardona@mathstodon.xyz

@minzastro That exists, in FigShare, micropublications, and others, which grant a DOI. E.g., our very own: http://dx.doi.org/10.6084/m9.figshare.856713

minzastro,
@minzastro@astrodon.social

@albertcardona I know it exists; it's just nowhere near as well respected and recognised.

albertcardona,
@albertcardona@mathstodon.xyz

@minzastro So far so good with our FigShare entry, which essentially assigns a DOI to a dataset. Perhaps the issue is that this form of publication isn't as well known as it should be.

ysegrim,
@ysegrim@furry.engineer

@albertcardona Think positive: You can then also ask ChatGPT to formulate a rejection in the peer review phase!

albertcardona,
@albertcardona@mathstodon.xyz

@ysegrim That's awful. Rejections are the hardest, for they have to be precise, kind, and constructive.

ysegrim,
@ysegrim@furry.engineer

@albertcardona I mean: rejections for articles obviously written by an LLM starting from a very thin prompt and a table or two.

Empathetic, constructive rejections for real papers are more important than ever!

LupinoArts,
@LupinoArts@mstdn.social

@albertcardona maybe journals should introduce another of those mandatory sections, like "Conflict of Interest": "Automation disclosure: This article was written with/without the use of Large 'Language' 'Models'".

steveroyle,
@steveroyle@biologists.social

@albertcardona I feel the same. I saw someone I respect advocating use of LLMs for paper writing, saying “it’s only like using a grammar check”. I get that if one is writing in a second language, it could be helpful; but this was kind of shocking to me.

mzedp,
@mzedp@mas.to

@albertcardona I don't think anyone is expecting to copy and paste the output of a language model without checking it first.

Having a computer program capable of parsing human text and providing summaries is extremely valuable.

Nowadays there's too much research being published. Scientists can't keep up.

These tools will help us sift through this information to find the valuable insights we need. It's already paying off, and it has other valuable applications too.

https://www.nature.com/articles/d41586-023-01487-y

futurebird,
@futurebird@sauropods.win

@albertcardona

Consider if English isn't your best language. Or you just struggle to write in a more sophisticated style.

You could turn an outline into a paragraph, then edit it.

Editing would always be essential since GPT has no logic, no idea what it's talking about. It just knows how to make sentences.

I can see some limited uses.

Also giving it a paragraph you wrote and asking it to make it more formal.

jannem,
@jannem@fosstodon.org

@futurebird @albertcardona
English ability is very limited where I live. I've suffered through two decades of reviewing, or "helping improve", papers that were clearly just run through Google Translate.

LLMs are at least going to be a major improvement on that. And with proper supervision, decent quality translation will be one of the genuine good use cases for the technology.

AskPippa,
@AskPippa@c.im

@futurebird @albertcardona Some researchers hire science writers to edit what they've written into better English.

kevinriggle,
@kevinriggle@ioc.exchange

@albertcardona It’s like the people who are so proud of their typing speed—quantity is not the same as quality, and the two are often anti-correlated.

paninid,
@paninid@mastodon.world

@albertcardona indeed
