FractalEcho, to ChatGPT
@FractalEcho@kolektiva.social

The racism in chatGPT we are not talking about....

This year, I learned that students use chatGPT because they believe it helps them sound more respectable. And I learned that it absolutely does not work. A thread.

A few weeks ago, I was working on a paper with one of my RAs. I have permission from them to share this story. They had done the research and the draft. I was to come in and make minor edits, clarify the method, add some background literature, and we were to refine the discussion together.

The draft was incomprehensible. Whole paragraphs were vague, repetitive, and bewildering. It was like listening to a politician. I could not edit it. I had to rewrite nearly every section. We were on a tight deadline, and I was struggling to articulate what was wrong and how the student could fix it, so I sent them on to further sections while I cleaned up ... this.

As I edited, I had to keep my mind from wandering. I had written with this student before, and this was not normal. I usually did some light edits for phrasing, though sometimes with major restructuring.

I was worried about my student. They had been going through some complicated domestic issues. They were disabled. They'd had a prior head injury. They had done excellently on their prelims, which of course I couldn't edit for them. What was going on!?

We were co-writing the day before the deadline. I could tell they were struggling with how much I had to rewrite. I tried to be encouraging and remind them that this was their research project and they had done all of the interviews and analysis. And they were doing great.

In fact, the qualitative write-up they had done the night before was better, and I was back to just adjusting minor grammar and structure. I complimented their new work and noted it was different from the other parts of the draft that I had struggled to edit.

Quietly, they asked, "is it okay to use chatGPT to fix sentences to make you sound more white?"

"... is... is that what you did with the earlier draft?"

They had, a few sentences at a time, completely ruined their own work, and they couldn't tell, because they believed that the chatGPT output had to be better writing. Because it sounded smarter. It seemed fluent. But it was nonsense!

I nearly cried with relief. I told them I had been so worried. I was going to check in with them when we were done, because I could not figure out what was wrong. I showed them the clear differences between their raw drafting and their "corrected" draft.

I told them that I believed in them. They do great work. When I asked them why they felt they had to do that, they told me that another faculty member had told the class that they should use it to make their papers better, and that he and his RAs were doing it.

The student also told me that in therapy, their therapist had been misunderstanding them, blaming them, and denying that these misunderstandings were because of a language barrier.

They felt that they were so bad at communicating, because of their language, and their culture, and their head injury, that they would never be a good scholar. They thought they had to use chatGPT to make them sound like an American, or they would never get a job.

They also told me that when they used chatGPT to help them write emails, they got more responses, which helped them with research recruitment.

I've heard this from other students too. That faculty only respond to their emails when they use chatGPT. The great irony of my viral autistic email thread was always that had I actually used AI to write it, I would have sounded decidedly less robotic.

ChatGPT is probably pretty good at spitting out the meaningless pleasantries that people associate with respectability. But it's terrible at making coherent, complex, academic arguments!

Last semester, I gave my graduate students an assignment. They were to read some reports on the labor exploitation and environmental impact of chatGPT and other language models. Then they were to write a reflection on why they had used chatGPT in the past, and how they might choose to use it in the future.

I told them I would not be policing their LLM use. But I wanted them to know things about it they were unlikely to know, and I warned them about the ways that using an LLM could cause them to submit inadequate work (incoherent methods and fake references, for example).

In their reflections, many international students reported that they used chatGPT to help them correct grammar, and to make their writing "more polished".

I was sad that so many students seemed to be relying on chatGPT to make them feel more confident in their writing, because I felt that the real problem was faculty attitudes toward multilingual scholars.

I have worked with a number of graduate international students who are told by other faculty that their writing is "bad", or are given bad grades for writing that is reflective of English as a second language, but still clearly demonstrates comprehension of the subject matter.

I believe that written communication is important. However, I also believe in focused feedback. As a professor of design, I am grading people's ability to demonstrate that they understand concepts and can apply them in design research and then communicate that process to me.

I do not require that communication to read like a first language student, when I am perfectly capable of understanding the intent. When I am confused about meaning, I suggest clarifying edits.

I can speak and write in one language with competence. How dare I punish international students for their bravery? Fixation on normative communication chronically suppresses their grades and their confidence. And, most importantly, it doesn't improve their language skills!

If I were teaching rhetoric and comp it might be different. But not THAT different. I'm a scholar of neurodivergent and Mad rhetorics. I can't in good conscience support divergent rhetorics while suppressing transnational rhetorics!

Anyway, if you want your students to stop using chatGPT then stop being racist and ableist when you grade.

aram, to ChatGPT
@aram@aoir.social

Before you install the new app on iOS, ask yourself: what's the worst thing the maker of the world's greatest bullshit machine could do with this kind of information, and why are they requesting it?

kristenhg, to ai
@kristenhg@mastodon.social

One of my former (and very long-term) freelance gigs, How Stuff Works, has replaced writers with ChatGPT-generated content and also laid off its excellent editorial staff.

It seems that going forward, when articles I wrote are updated by ChatGPT, my byline will still appear at the top of the article with a note at the bottom of the article saying that AI was used. So it will look as if I wrote the article using AI.

To be clear: I did not write articles using ChatGPT.

abucci, to ChatGPT
@abucci@buc.ci

Regarding that last boost, I'm starting to conceive of LLMs and image generators as a phenomenon of (American) society eating its seed corn. If you're not familiar with the phrase, "seed corn" is the corn you set aside to plant next year, as opposed to the corn you eat this year. If you eat your seed corn this year, you have no seeds to plant next year, and thus create a crisis for all future years, a crisis that could have been avoided with better management.

LLMs and image generators mass-ingest human-created texts and images. Since the human creators of the ingested texts and images are not compensated, or even credited, this ingestion puts negative pressure on the sharing of such things. The cycle of creative acts seeding future creative acts becomes depressed. Creative people will have little choice but to lock down, charge for, or hide their works. Otherwise, they'll be ingested by innumerable computer programs and replicated ad infinitum without so much as a credit attached. Seed corn that had been freely given forward will become difficult to get. Eaten.

Eating your seed corn is meant to be a last-ditch act you take out of desperation after exhausting all other options. It's not meant to be standard operating procedure. What a bleak society that does this, consuming itself in essence.

LukaszOlejnik, to ai
@LukaszOlejnik@mastodon.social

Just a few months after the launch of ChatGPT, freelance copywriters and graphic designers were hit by a significant drop in the number of contracts received and, for those who still received them, a drop in earnings. Being more skilled was no shield against loss of work or earnings.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4527336

pitrh, to ChatGPT
@pitrh@mastodon.social

I just saw a post that referred to ChatGPT as "Mansplaining as a service", and it is so wonderfully correct: instant generation of superficially plausible yet totally fabricated nonsense, delivered with unflagging confidence on any topic, without concern for, regard for, or even awareness of the expertise of its audience :D

generalising, to ChatGPT
@generalising@mastodon.flooey.org

I have a preprint out estimating how many scholarly papers are written using chatGPT etc. I estimate upwards of 60k articles (>1% of global output) published in 2023. https://arxiv.org/abs/2403.16887

How can we identify this? Simple: there are certain words that LLMs love, and they suddenly started showing up a lot last year. Twice as many papers call something "intricate", with big rises for "commendable" and "meticulous".
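
The underlying method is simple enough to sketch. A minimal, hypothetical Python illustration of the frequency-ratio idea (the toy corpora, the 2x threshold, and the function names below are mine, not the preprint's):

```python
from collections import Counter

# Toy corpora standing in for abstracts from two publication years.
# Real analyses use millions of papers; these strings are illustrative.
abstracts_2022 = [
    "we present a detailed analysis of the results",
    "an intricate proof is given in the appendix",
]
abstracts_2023 = [
    "we present an intricate and meticulous analysis of the results",
    "this intricate and commendable method shows intricate structure",
]

def word_rates(abstracts):
    """Frequency of each word per million words in a corpus."""
    counts = Counter(w for text in abstracts for w in text.lower().split())
    total = sum(counts.values())
    return {w: c / total * 1_000_000 for w, c in counts.items()}

before = word_rates(abstracts_2022)
after = word_rates(abstracts_2023)

# Flag words whose usage rate at least doubled year over year --
# the same signal as "intricate" doubling in 2023 papers.
markers = {w: after[w] / before[w]
           for w in after if w in before and after[w] >= 2 * before[w]}
print(sorted(markers.items(), key=lambda kv: -kv[1]))
```

Scaled up to whole bibliographic databases, excess usage of marker words like these gives a rough floor on LLM-assisted papers, since any paper edited toward less distinctive vocabulary goes uncounted.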

gyokusai, to ai
@gyokusai@mastodon.social

hold on, let me find my shocked face first

“ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.”

https://www.404media.co/google-researchers-attack-convinces-chatgpt-to-reveal-its-training-data/

“Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data”

attacus, to ChatGPT
@attacus@aus.social

It turns out you can deploy some of the older Internet monsters against the newer ones.

atoponce, to ChatGPT
@atoponce@fosstodon.org

Why large language models are not intelligent, exhibit:

parismarx, to ai
@parismarx@mastodon.online

ChatGPT does not have “a memory.” It does not “remember” anything. OpenAI is just storing data about you and your engagement with its product.

I’m begging the media to stop repeating tech companies’ misleading framings of their products.


WiseWoman, to ChatGPT
@WiseWoman@fediscience.org

An adjunct professor for computer security found some odd stuff her students turned in:

https://labs.ripe.net/author/kathleen_moriarty/the-llm-misinformation-problem-i-was-not-expecting/

It was not the students' use of a chatbot that was the problem; it was that they were using material found on the internet that was itself created by a hallucinating chatbot and published without verification!

This is a type of model collapse we will be dealing with in the near future, and not just at universities.
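
For anyone unfamiliar with the term, model collapse is easy to demonstrate in miniature. A minimal sketch, assuming the standard toy setup in which each generation of a model is fitted to synthetic samples from the previous generation rather than to real data (all numbers are illustrative):

```python
import random
import statistics

random.seed(0)

# Generation 0: fit a Gaussian "model" to real data.
real_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
mu = statistics.fmean(real_data)
sigma = statistics.stdev(real_data)

for generation in range(1, 6):
    # Each new generation trains only on the previous model's output.
    synthetic = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.fmean(synthetic)
    sigma = statistics.stdev(synthetic)
    print(f"generation {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")

# With finite samples, estimation error compounds across generations:
# the fitted distribution drifts away from the real one, and the tails
# (the rare, interesting cases) are the first thing to go.
```

Unverified chatbot output published to the web and then scraped back into training sets plays the role of the synthetic samples here.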

johnpettigrew, to ai
@johnpettigrew@wandering.shop

For those of you who use LLMs to help you code, here's a warning: these tools have been shown to hallucinate packages in a way that allows an attacker to poison your application. https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
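
A cheap partial defense is to vet every dependency an LLM suggests before installing it. Here is a minimal sketch, not the article's method: it queries PyPI's public JSON endpoint, and the `vet_package` helper and its release-count heuristic are my own illustration:

```python
import sys

import requests  # third-party: pip install requests

def vet_package(name: str) -> bool:
    """Return True if `name` exists on PyPI; warn on thin release history."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        print(f"{name}: not on PyPI -- possibly a hallucinated import")
        return False
    releases = resp.json().get("releases", {})
    if len(releases) < 3:
        # A freshly registered name squatting on a common hallucination
        # often has one hasty release and no history. Review by hand.
        print(f"{name}: exists but has only {len(releases)} release(s)")
    return True

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        vet_package(pkg)
```

A hallucinated name fails the existence check outright; a squatted one often betrays itself through an almost empty release history. Neither check replaces actually reading what you install.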

parismarx, to tech
@parismarx@mastodon.online

The AI boom requires massive data centers that consume enormous amounts of water and energy.

Tech CEOs have plans for hundreds more hyperscale facilities in the coming years, but activists around the world are fighting back to protect their communities and force us to ask who really benefits from the future Silicon Valley is building.

https://disconnect.blog/ai-is-fueling-a-data-center-boom/

exador23, to ChatGPT
@exador23@m.ai6yr.org

Nick Cave responds beautifully to a fan asking if it's OK to use ChatGPT to help them write song lyrics.

Full response: https://www.theredhandfiles.com/chatgpt-making-things-faster-and-easier/

mamund, to ChatGPT
@mamund@mastodon.social

ChatGPT's odds of getting code questions correct are worse than a coin flip

https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/

"'Our analysis shows that 52 percent of ChatGPT answers are incorrect and 77 percent are verbose,' the team's paper concluded. 'Nonetheless, ChatGPT answers are still preferred 39.34 percent of the time due to their comprehensiveness and well-articulated language style.' Among the set of preferred ChatGPT answers, 77 percent were wrong." --

parismarx, to tech
@parismarx@mastodon.online

In 2014, Ursula Le Guin warned that corporate power wanted to stop us from even imagining freedom as life got even harder.

Nearly a decade later, people are struggling, and the tech industry seems intent on using generative AI to replace art with cheap, churned-out commodities. It's our responsibility to stop them.

https://www.disconnect.blog/p/generative-ai-closes-off-a-better

jfballenger, to ChatGPT

Last term, I had a final assignment option that had students use ChatGPT to write their final essay, then critique the results. It was great, and I'll be doing it again. Everyone should do something like this with their class if they can. Short 🧵 on what we found.

albertcardona, to science
@albertcardona@mathstodon.xyz

Not sure what those who advocate for the use of ChatGPT in scientific writing have in mind. It is the very act of writing that helps us think about the connections and implications of our results, identify gaps, and devise further experiments and controls.

Any science project that can be written up by a bot from tables of results and associated literature isn’t the kind of science that I’d want to do to begin with.

Can’t imagine completing a manuscript not knowing what comes next, because the writing was done automatically instead of me putting extensive thought into it.

And why would anyone bother to read it if the authors couldn't be bothered to write it? Might as well put the tables and figures up in an online archive, stamp a DOI on it, and move on.

evawolfangel, to ChatGPT German
@evawolfangel@chaos.social

Haha, it has become even easier to convince chatbots to do what I want, even when they shouldn't. This is not even social engineering; it is way too easy. (Sorry, I am bored...)

cassidy, to ai
@cassidy@blaede.family

“AI” as currently hyped is giant billion-dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.

It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.

https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m
