hypolite,

The teaching profession has lately been a hard one in the US, and it's going to be made even harder by LLMs. I reject the article's comparison with calculators: calculators are exact, and you need to know what to ask before getting a useful answer from them. LLMs satisfy neither of these propositions, accepting arbitrary prompts and outputting merely plausible answers that may or may not be useful.

I believe the introduction of accessible LLMs will widen the divide between privileged students who will reap the benefits of homework and the others who will use free LLM tools to skip it, a cheap short-term win that will end up costing them in the long run.

♲ piaille.fr/@eglantine/11070143…

ramaniscence,

@hypolite @eglantine I think this article oversimplifies the problem LLMs present.

Context: I teach Programming I at my university, delivered simultaneously to students online and on campus. I'm also currently taking my Master's in education entirely online.

I'll start by saying I agree with much of what has been said. I think homework as it is generally used is harmful to students: it creates a culture where you can be sent home with work, and a lot of it is just put in place to fill the gaps left by poor instruction.

I think effective homework involves independent research at all levels: providing students with the necessary tools and then asking them to do some extra work to develop their problem-solving skills. For that reason, long-term assignments like science projects and book reports are the best way to handle independent problem-solving and create those habits casually without being too time-consuming, not just nightly vocabulary worksheets and the like. Ideally, this would also give parents a direct hand in their child's education by creating a more creative and collaborative space.

At best, an LLM works as a search engine; at worst, it decimates early research incentives. It goes far beyond what a calculator can do. A calculator requires manual entry, which ensures things are input correctly and typically demands at least some active brain power. LLMs do not. The newest iterations of ChatGPT can OCR a document and spit out information; the only thing the user has to do is copy/paste or snap and upload a picture. A student struggling with a concept could quickly treat this as a shortcut and bypass any problem-solving, with barely any active brain power and absolutely nothing committed to long-term memory, except maybe how to upload a file.

As it's already been pointed out, those results are... not great. For maths, maybe you get something that makes sense; for research, you may get back complete nonsense that looks like it makes sense (believe me, I have tried). You can ask ChatGPT to write you a summary of a thing you didn't research, but is it right? Ultimately, in either case, something of value is lost.

This is especially difficult with college-level hybrid courses like mine. I cannot realistically watch every student all the time, nor should I have to. In addition, given the low-level concepts I'm teaching, ChatGPT can nail the solution every time. Sure, my students could Google the answers, but at least that requires them to look for a solution, sort through the options and, hopefully, read the feedback others have given on those solutions. Let's face it, that's what real developers do anyway, right?

For that reason, I have always explained to them that ChatGPT works best as a search engine, but to remember that its responses are not human and that the same dialog can typically be had with an actual human online. I never send my students home with homework. It's up to them whether or not they continue to work on assignments, or school-adjacent work, outside of class, which generally comes down to time management. But I try to convey the value of independent research and of developing problem-solving skills, and ChatGPT is upending that on multiple fronts.

For now, it's just whack-a-mole: rewrite the assignments to be less AI-friendly, try multiple avenues of advice on developing problem-solving skills without LLMs, make sure objectives and expectations are clearly defined, and teach them to use LLMs safely as a tool. I'm not so much afraid of a future where LLMs are right 100% of the time and become super search engines as I am of the damage they are doing right now by eroding people's patience for solving their own problems. Certainly, Google did that too, but at least I still need to think a bit about what I'm typing into Google.

hypolite,

@ramaniscence Thank you for this write-up. I'm also not happy with the linked article, but it was a good basis for conversation.

I'm also not sure why anyone would be afraid of LLMs reaching perfect accuracy. I'm afraid of the opposite: that they will never reach 100% accuracy, because models are trained for plausibility first, not accuracy. I don't even believe they can ever reach 100% accuracy, which would be superhuman anyway. But their increasing use as an authoritative source and their window dressing as humans (by using first-person pronouns, for example) make them a prime vector for leveraging and laundering popular biases.

kellogh,

@hypolite sooo…idk how to break this to you, but calculators aren’t exact. Try calculating a geometric mean using the naive approach (multiply and nth root)
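kellogh's point can be sketched in Python (a hypothetical illustration, not from the thread): with 64-bit floats, the naive multiply-then-nth-root approach overflows on large inputs, while the mathematically equivalent log-space version stays accurate.

```python
import math

# Naive geometric mean: multiply everything, then take the nth root.
# The running product overflows float64 long before the final root.
def geomean_naive(xs):
    prod = 1.0
    for x in xs:
        prod *= x
    return prod ** (1.0 / len(xs))

# Stable approach: average the logs, then exponentiate.
def geomean_log(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

data = [1e200, 1e200, 1e200]  # large but individually representable
print(geomean_naive(data))  # inf: the intermediate product overflows
print(geomean_log(data))    # ≈1e200, the correct answer
```

Both functions compute the same quantity on paper; only the order of operations differs, and that is enough to turn an exact-looking calculation into garbage.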

hypolite,

@kellogh Would "accurate" be a better term for you?

kellogh,

@hypolite you’ve now entered the gray area where it’s no longer straightforward to clearly articulate what’s wrong with LLMs. Fwiw you can dig up plenty of areas where LLMs give very accurate results or can be made to give accurate results, and plenty of areas where calculators give total garbage

hypolite,

@kellogh It isn't a uniformly gray area for me, though. I've stated that if you know what to ask the calculator, you will get consistent and accurate answers. For LLMs, on the other hand, there is no way to ensure the accuracy of the output in advance.

You were able to mention a specific case where calculators fall short, but I dare you to mention specific cases where LLMs always give accurate results. This is the main difference for me: calculators can be trusted to be consistent even in their shortfalls; LLMs can't be trusted to be consistent even when they actually output accurate information.

kellogh,

@hypolite ChatGPT gives me the same result every time for this prompt, 7 times in a row. I could improve it further by lowering the temperature via the API or by adding examples

Complete the following sentence with no explanation. Stop when you've completed it.

The first sentence of the pledge of allegiance is:

kellogh,

@hypolite to follow up on this, remember that LLMs are just neural nets, just a bunch of multiply and add operations, so they really are just calculators. The only reason you get unstable runs is because of a “random seed”, and many models (not OpenAI though) let you set the random seed, so you get stable results, just like how naive geometric mean can give you stable garbage. The random seed isn’t core to how the LLM works, it just makes it seem more creative
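A toy illustration of that point (hypothetical Python, with made-up logits standing in for a model's output): the arithmetic producing the probabilities is fully deterministic, and variation only enters at the final sampling step, which a fixed seed, or a temperature of zero, removes entirely.

```python
import math
import random

# Toy next-token sampler. The "neural net" part (the logits) is pure
# deterministic arithmetic; only this sampling step uses randomness.
def sample_token(logits, temperature=1.0, seed=None):
    if temperature == 0:
        # Greedy decoding: no randomness at all, always the top logit.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)  # fixed seed => reproducible draw
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted draw over the softmax distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
# Same seed, same result every run:
print([sample_token(logits, seed=42) for _ in range(5)])
# Temperature 0 always picks the highest logit:
print(sample_token(logits, temperature=0))  # 0
```

This is of course a caricature of real decoding, but it shows why "unstable" output is a property of the sampling configuration, not of the underlying multiply-and-add machinery.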

hypolite,

@kellogh But this is only true if you control the model, which isn't the case for ChatGPT. You don't get an email each time they update the model behind the free version of their tool, so you can't guarantee consistency over time.

kellogh,

@hypolite that’s an issue with all software as a service, like Excel if you use O365. And it’s only true of some LLMs, specifically the proprietary ones. And even those let you control when you take changes (they have frozen versions that deprecate every 6 months or so), which is more than you can say for O365

hypolite,

@kellogh What are you talking about? The topic is ChatGPT used by students to skip homework. I don't know what you're trying to achieve here but you're wearing down my patience.

Let me be clear: I won't be convinced that LLMs provide positive net value overall, because of the way they can be and have been used to produce misinformation at scale. What I'm interested in is how bad it is going to be in specific contexts, like for screenwriters, translators, and now students.

If you think LLMs are a fine piece of technology, we will not see eye to eye no matter what inane comparison you draw with other technologies. LLMs are uniquely positioned to drag down the value of written knowledge to below zero at a global scale, which no other technology has been even remotely able to do before.

If this isn't a concern for you, it's fine, but please miss me with your defense of LLMs.

kellogh,

@hypolite ah, sorry, I misjudged the situation

hypolite,

@kellogh Thanks, I appreciate it.

DialupDownload,

@hypolite @eglantine A lot of homework is ineffective, bad teaching and absolutely deserves to die this death.

This is absolutely a reckoning for teachers and they need to skip straight from the denial stage to “deal with it” because the technology is absolutely going to explode in usage.

hypolite,

@DialupDownload How do you "deal with it" when your school/district policy includes mandatory homework and you have no say in it? At this point it's just cruelty.

starbreaker,

@hypolite @DialupDownload You work with the voters in your district to elect a new Board of Education and change the policy.

Also, what benefits do you see in homework? I generally agree with Alfie Kohn, that homework was never a good idea. I think it promotes bad habits that carry into adult life, like working unpaid overtime and not leaving work at work (where it damned well belongs). Even if homework does boost academic achievement, is it really worth it if kids are chained to their desks at home, fighting with their parents (who become the school's enforcers at home) and have no time to play, socialize, or explore?

I'll tell you one thing: the reason I still read for pleasure is that when told I had to read such and such for school, I would refuse on principle.

hypolite,

@starbreaker Here's the quote from the article I'm basing my opinion on:

One study of eleven years of college courses found that when students did their homework in 2008, it improved test grades for 86% of them

It's only one study and it's only college courses, but it confirms my own bias. I was fortunate to grow up in an environment where my parents were available to push for and help with homework, and it probably helped my grades, considering my lack of attention in class even when I thought I got it.

The problem is that the curriculum is too dense to be adequately covered in class. I believe homework can help with understanding or cementing knowledge quickly dished out in class. Homework essays sit at a particularly uncomfortable intersection: they require a lot of time, they aren't obvious or specific about the knowledge or skill they're meant to train, and they are easily done plausibly by LLMs. So I believe these will go away first, but then what will remain of the in-person essay tests?

DialupDownload,

@hypolite I have no answers for that one, and it points to the larger failure of said policies. Teachers are in an absolutely shit position if they're stuck between those mandates. I was not considering it through that lens, so thank you for mentioning it.

hypolite,

@DialupDownload You're welcome, and I appreciate the self-awareness.
