enigmatico,
@enigmatico@mk.absturztau.be

It came to my attention that the Linux Foundation allows the use of generative AI for people's submissions, and I am a little bit alarmed.

https://www.linuxfoundation.org/legal/generative-ai

Jain,
@Jain@blob.cat

@enigmatico why? I mean, the defined process for commits should prevent contributions that are actually bad... I don't mind if AI is used as a tool, and I wouldn't mind if AI someday actually produces good stuff...

enigmatico,
@enigmatico@mk.absturztau.be

@Jain How do you know what you're submitting if the AI made it for you in the first place? lmao

If you are dealing with a kernel and you're using an AI to generate the code for you... a critical component of the system (or the most critical one, it's the core itself), with a tool that is known to write bad code and introduce bugs... yeah, I can see a problem.

Hopefully they will catch those issues before merging the commits, but something tells me this is not always going to be the case, and it will lead to unsafe code that causes problems. I'm surprised Linus didn't ditch it entirely like Gentoo did. Maybe he's buying OpenAI stocks for his retirement.

Jain,
@Jain@blob.cat

@enigmatico
> How do you know what you're submitting if the AI made it for you in the first place? lmao

The same question could be asked of anyone who has ever copied code from Stack Overflow...
The fix for this problem was and always will be reviews. And within the kernel I expect it at least twice: from the committer and from the one who merges the commit.

Do you know how AI can be used in real-world development scenarios nowadays?
A good example is generating data models. They require a lot of boilerplate and will be changed during development anyway, so having some basic properties generated for a use case can save you time, and since you're only generating properties, there aren't many ways to introduce bugs.
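To make the data-model point concrete, here's a minimal sketch of the kind of boilerplate Jain is describing, as a Python dataclass. The class and field names are hypothetical, not from any real project; the point is that a generated draft like this is plain properties with no logic, so there's little room for bugs, and it still goes through normal editing and review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical boilerplate data model: just properties, no behavior.
# This is the sort of scaffolding a language model might draft and a
# developer would then adjust during development.
@dataclass
class UserAccount:
    username: str
    email: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    display_name: Optional[str] = None
    is_active: bool = True
```

Whether a human or a tool wrote the first draft, the committer still owns the result once it's submitted.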

Tbh, I expect language models to be as good as the average developer in 3-4 years... or at least as good as some coworkers of mine, judging by their code.
I don't trust them either, which is why every dev should check other devs' work and a halfway proper review process should be standard.

enigmatico,
@enigmatico@mk.absturztau.be

@Jain idk. My experience with generative AI code is code that looks good but doesn't run or doesn't work as expected. It's a no-no on my list. And please don't compare copy-pasting from Stack Overflow with AI-generated code. At least the code on Stack Overflow was written by a human being.

Jain,
@Jain@blob.cat

@enigmatico Since my main argument is that one shouldn't commit code one doesn't fully understand anyway, I will keep comparing the two within that process. It simply doesn't matter what the source of the code is if you skip or ignore the review process of kernel development...

enigmatico,
@enigmatico@mk.absturztau.be

@Jain And my point is that a human will write code following their own logic, not statistical analysis and common patterns. Yes, a human might write untested code that doesn't work or introduces bugs, and someone might copy-paste it. But at least there was a thought process.

Here, there is none. Just a machine saying "this looks like something a programmer would write based on your input". I can see why using it is a problem.

Jain,
@Jain@blob.cat

@enigmatico A human writing code will use common patterns and statistical analysis too. That's why benchmarks and code metrics are used as indicators, and why common patterns are encouraged: they help other devs understand what's going on.
> But at least there was a thought process.
Same in my case; that step isn't lost in any way and can't be removed. That's what I've argued from the beginning.

> Here, there is none. Just machine saying "this looks like something a programmer would write based on your input". I can see why using it is a problem.
Let's generalize what a software developer actually does in their daily tasks: a dev gets information on how something needs to be changed and then implements it.
Do you really think that AI will replace that process entirely?
No, that's wrong. Language models should be seen as a tool that can potentially help get stuff done, but that basic, generalized process will not be replaced at all.
In the end it is the responsibility of the developer to ensure that the result fits the initial information (and yes, also that the code looks nice and has no bugs; that doesn't change either).

Jain,
@Jain@blob.cat

@enigmatico I have a different argument which should show what is going to happen: the process of developing software has been getting simpler for years. Very few developers nowadays write their own data structures: at some point DB engines were developed so that devs don't have to care how the underlying data structures are stored.

Nowadays most devs rely on already existing source code, and the same thing is happening with the introduction of language models. Devs increasingly build their stuff on third-party source code. Of course, if they find a bug they need to fix it, but on the philosophical level it doesn't matter whether the source is another human or a machine; the process of development stays the same while the tools develop further.
