OpenStars,
@OpenStars@startrek.website avatar

I see no mention of Hitler nor abusive language, are you sure that’s a real AI? /s :-P

GolfNovemberUniform,

Machine learning also needs tons of carbon burnt at a local power plant in order to run

IronicDeadPan,

I don’t know why, but “mechanical turk” keeps cropping up when I think about this sort of stuff.

Medli,

To be fair, the human had how many more years of training than the AI to be fit to even attempt this problem.

wagesj45,
wagesj45 avatar

And hundreds of thousands of years of evolution pre-training the base model that their experience was layered on top of.

c0mbatbag3l,
@c0mbatbag3l@lemmy.world avatar

Exactly, people don’t seem to understand that our intelligence/problem-solving ability is based on two major factors.

  1. Our evolutionary lineage, pattern recognition and instinct, etc.
  2. Our nurtured upbringing, which creates the “training data” we need to accomplish specific tasks. Even if that upbringing isn’t holistic, it would still require a significant amount of training to do anything programming-wise, which is what the “three minutes and a coffee” side of the panel is completely ignoring.

Without these, a human is useless. We have training data as well; it’s just organic, learned over a lifetime on top of the billions of years of life evolving on this planet.

CanadaPlus,

Agreed. If you need to calculate rectangles, ML is not the right tool. Now do the comparison for an image-identifying program.

If anyone’s looking for the magic dividing line: ML is a very inefficient way to do anything, but it doesn’t require us to actually solve the problem, just to have a bunch of examples. For very hard but commonplace problems, that’s still revolutionary.
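
For the curious, here’s roughly what “just have a bunch of examples” looks like in code. This is a toy sketch of the comic’s premise; the library choice (scikit-learn) and every number in it are mine, not anything from the comic or the thread:

```python
# Toy sketch: "solve" rectangle area from examples instead of writing w * h.
# Library choice and all numbers are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50_000, 2))   # (width, height) examples
y = X[:, 0] * X[:, 1]                      # labels come from the real rule

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

print(model.predict([[3.0, 4.0]]))         # close to 12, almost never exactly 12
print(3.0 * 4.0)                           # the one-line "junior dev" solution
```

The network never sees the formula, only examples, and it lands near 12 without ever hitting it exactly - which is pretty much the comic in two print statements.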

flashgnash,

I think it’s still faster than the actual solution in some cases. I’ve seen someone train an ML model to animate a cloak in a way that looks realistic, based on an existing physics simulation of it, and it cut the processing time down to a fraction.

I suppose that’s more because it’s not doing a full physics simulation, just parroting the cloak-specific physics it observed, but still.
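
For anyone wondering what that trick looks like in the abstract, it’s a surrogate model: run the expensive simulation to generate examples, then train something cheap to imitate it. Everything below (the fake physics step, the 8-number “cloak state”, the library) is invented for illustration, since I don’t know the details of the actual project:

```python
# Surrogate-model sketch: learn to imitate an expensive simulation step from
# its own outputs. The "simulation" here is a cheap stand-in for a real solver.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_sim_step(state: np.ndarray) -> np.ndarray:
    # Placeholder for a real cloth/physics step (purely illustrative math).
    weights = np.linspace(0.1, 1.0, state.shape[-1])
    return np.tanh(state @ weights)[:, None] * state

rng = np.random.default_rng(1)
states = rng.normal(size=(20_000, 8))        # made-up 8-dimensional cloak state
next_states = expensive_sim_step(states)     # generate training examples

surrogate = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300, random_state=1)
surrogate.fit(states, next_states)

# At runtime you call the cheap surrogate instead of the solver and accept
# "looks right" rather than "is physically exact".
print(surrogate.predict(states[:1]))
print(next_states[:1])
```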

CanadaPlus, (edited )

I suppose that’s more because it’s not doing a full physics simulation, just parroting the cloak-specific physics it observed, but still.

This. I’m sure to a sufficiently intelligent observer it would still look wrong. You could probably achieve the same thing with a conventional algorithm, it’s just that we haven’t come up with a way to profitably exploit our limited perception quite as well as the ML does.

In the same vein, one of the big things I’m waiting on is somebody making an NN pixel shader. Even a modest network can achieve a photorealistic look very easily.

Toribor,
@Toribor@corndog.social avatar

Exactly. Explaining to a computer what a photo of a dog looks like is super hard. Every rule you can come up with has exceptions or edge cases. But if you show it millions of dog pictures and millions of not-dog pictures it can do a pretty decent job of figuring it out when given a new image it hasn’t seen before.
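
If it helps to see it, this is the skeleton of that “millions of examples” approach. Purely a sketch using PyTorch (my choice of library), with random tensors standing in for actual dog photos and labels:

```python
# Sketch of "show it millions of dog / not-dog pictures": a tiny binary image
# classifier. Random tensors stand in for a real labelled photo dataset.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),              # logits: [not-dog, dog]
)

images = torch.randn(64, 3, 64, 64)          # placeholder "photos"
labels = torch.randint(0, 2, (64,))          # placeholder "dog or not" labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):                           # real training runs far, far longer
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print(model(images[:1]).softmax(dim=1))      # "how dog-like is this image?"
```

Nobody ever writes down the rules for “dog”; the weights just end up separating the examples it was shown.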

Dultas,

Another problem is people using LLMs like they’re some form of general ML.

Mango,

The correct tool for calculating the area of a rectangle is an elementary school kid who really wants that A.

Buttons,
@Buttons@programming.dev avatar

I think the joke is that the Jr. Developer sits there looking at the screen, a picture of a cat appears, and the Jr. Developer types “cat” on the keyboard then presses enter. Boom, AI in action!

The truth behind the joke is that many companies selling “AI” have lots of humans doing tasks like this behind the scenes. “AI” is more likely to get VC money though, so it’s “AI”, I promise.

CanadaPlus,

This is also how a lot (maybe most?) of the training data - that is, the examples - are made.

On the plus side, that’s an entry-level white collar job in places like Nigeria where they’re very hard to come by otherwise.

PumpkinSkink,

I recently heard somewhere that the joke in India is that in western tech companies “AI” stands for “Absent Indians”.

Wirlocke,

That’s simultaneously funny and depressing.

CanadaPlus,

It’s also Blockchain and uses quantum computers somehow. /s

lugal,

This is all funny and stuff, but ChatGPT knows how long the German-Italian border is, and I’m sure most of you don’t

SquirtleHermit,

So I apparently have too much free time and wanted to check. I asked ChatGPT how long the border was exactly, and it could only come up with an approximate guess, and it had to search using Bing to confirm.

https://lemmy.world/pictrs/image/2c3f0936-cc65-4016-9a49-a10868032138.png

lugal,

That’s a number I never got. I got either 700-something km or 1000-something. It’s only sometimes that ChatGPT realizes that Austria and Switzerland are in between and there is no direct border.

Blackmist,

Google’s AI gives it as:

The length of the German-Italian border depends on how you define the border. Here are two ways to consider it:

Total land border: This includes the main border between the two countries, as well as the borders of enclaves and exclaves. This length is approximately 811 kilometers (504 miles).

Land border excluding exclaves and enclaves: This only considers the main border between the two countries, neglecting the complicated enclaves and exclaves within each country’s territory. This length is approximately 756 kilometers (470 miles).

It’s important to note that the presence of exclaves and enclaves creates some interesting situations where the border crosses back and forth within the same territory. Therefore, the definition of “border” can influence the total length reported.

Kuma,
@Kuma@lemmy.world avatar

Here I am wondering why no one made the joke that the answer was not found (404), but ChatGPT assumed it was the answer 😂

SquirtleHermit,

lol, holy shit… I can’t believe I didn’t notice that.

flathead,

Presumably, we can expect 404 to eventually replace 42 as the answer to everything.

xmunk,

Nobody knows how long any border is if it adheres to any natural boundaries. The only borders we know precisely are post-colonial perfectly straight ones.

lugal, (edited )

Well, for non-adjacent countries, the answer is still straightforward

xmunk,

Yes, I too can confidently state the precise length of the infamous Laotian-Canadian border.

lugal,

I’ve tried, but ChatGPT won’t give me an answer. So far, my personal record is Serbia - Iraq. If you find two countries that are further apart, yet ChatGPT will give you a length of the border, feel free to share a screenshot!

https://sopuli.xyz/pictrs/image/211a278d-5c6b-4a3a-91ac-ad446979b361.webp

xmunk,

Thank you for your service, Sir - that made my day.

orca,
@orca@orcas.enjoying.yachts avatar

Make sure you ask the AI not to hallucinate because it will sometimes straight up lie. It’s also incapable of counting.

lugal,

But where’s the fun in it if I can’t make it hallucinate?

orca,
@orca@orcas.enjoying.yachts avatar

I do feel bad when I have to tell it not to. Hallucinating is fun!

lugal,

But does it work to tell it not to hallucinate? And does it work the other way around too?

orca,
@orca@orcas.enjoying.yachts avatar

It’s honestly a gamble based on my experience. Instructions that I’ve given ChatGPT have worked for a while, only to be mysteriously abandoned for no valid reason. Telling AI not to hallucinate is apparently common practice from the research I’ve done.

lugal,

Makes me wonder: can I just ask it to hallucinate?

orca,
@orca@orcas.enjoying.yachts avatar

Yep. Tell it to lie and it will.

sxan,
@sxan@midwest.social avatar

The sad thing is that no amount of mocking the current state of ML today will prevent it from taking all of our jobs tomorrow. Yes, there will be a phase where programmers like myself, who refuse to use LLMs as a tool to produce work faster, will be pushed out by those who will work with LLMs. However, I console myself with the belief that this phase won’t last even a full generation, and even those collaborative devs will find themselves made redundant, and we’ll reach the same end without me having to eliminate the one enjoyable part of my job. I do not want to be reduced to being only a debugger for something else’s code.

Thing is, at the point AI becomes self-improving, the last bastion of human-led development will fall.

I guess mocking and laughing now is about all we can do.

docAvid,

Well, we could end capitalism, and demand that AI be applied to the betterment of humanity, rather than to increasing profits, enter a post-scarcity future, and then do whatever we want with our lives, rather than selling our time by the hour.

fidodo,

The only way I see that happening is if the entire economy collapses because nobody has jobs, which might actually happen pretty soon 🤷

sxan,
@sxan@midwest.social avatar

Amen. Let’s do that thing: you have my vote.

pearsaltchocolatebar,

The best part is that dumbass devs are actively working on self improving AI that will take their jobs.

KevonLooney,

at the point AI becomes self-improving

This is not a foregone conclusion. Machines have mostly always been stronger and faster than humans, because humans are generally pretty weak and slow. Our strength is adaptability.

As anyone with a computer knows, if one tiny thing goes wrong it messes up everything. They are not adaptable to change. Most jobs require people to be adaptable to tiny changes in their routine every day. That’s why you still can’t replace accountants with spreadsheets, even though they’ve existed in some form for 50 years.

It’s just a tool. If you don’t want to use it, that’s kinda weird. You aren’t just “debugging” things. You use it as a junior developer who can do basic things.

sxan,
@sxan@midwest.social avatar

This is not a foregone conclusion.

Sure, I agree. There’s many a slip twixt the cup and the lip. However, I’ve seen no evidence that it won’t happen, or that humans hold any inherent advantage over AI (as nascent as it may be, in the rude forms of LLMs and deep learning they’re currently in).

If you want something to reflect upon: your statement about how humans have an advantage in adaptability sounds exactly like the previous generation’s grasping at an inherent human superiority that would be our salvation: creativity. It wasn’t too long ago that people claimed that machines would never be able to compose a sonnet, or paint a “Starry Night,” and yet creativity has been one of the first walls to fall. And anyone claiming that ML only copies and doesn’t produce anything original has obviously never studied the history of fine art.

Since no one would now claim that machines will never surpass humans in art, the goalposts have shifted to adaptability? This is an even easier hurdle. Computer hardware is evolving at speeds enormously faster than human hardware. With the exception of the few brief years at the start of our lives, computer software is more easily modified, updated, and improved than our poor connective neural networks. It isn’t even a competition: computers are vastly better equipped to adapt faster than we are. As soon as adaptability becomes a priority of focus, they’ll easily exceed us.

I do agree, there are a lot of ways this future could not come to pass. Personally, I think it’s most likely we’ll make ourselves extinct - or, at least, the society able to continue creating computers. However, we may hit hardware limits. Quantum computing could stall out. Or we may find that the way we create AI cripples it the same way we are crippled, with built-in biases, inefficiencies in thinking, or simply resource demands too high for complexity much beyond what two humans can create with far less effort and very little motivation.

KevonLooney,

creativity has been one of the first walls to fall

Uh, no? Unless you think unhinged nonsense without thought is “creative”. Right now, these programs are like asking a particularly talented insane person to draw something for you.

Creativity is not just creation. It’s creation with purpose. You can “create art” by breaking a vase. That doesn’t mean it’s good art.

sxan,
@sxan@midwest.social avatar

And yet, I’ve been to an exhibit at the Philadelphia Museum of Art that consisted of an installation that included a toilet, among other similarly inspired works of great art.

On a less absurd note, I don’t have much admiration for Pollock, either, but people pay absurd amounts of money for his stuff, too.

An art history class I once took posed the question: if you find a clearing in a wood with a really interesting pile of rocks that look suspiciously man-made, but you don’t know if a person put it together or if it was just a random act of nature… is it art? Say you’re convinced a person created it and so you call it art, but then discover it was an accident of nature, does it stop being art?

I fail to see any great difference. AI created art is artificial, created with the intention of producing art; is it only not art because it wasn’t drawn by a human?

KevonLooney,

If you’re talking about

en.wikipedia.org/wiki/Fountain_(Duchamp)

that’s a seminal work of avant-garde art. You are still talking about it 100 years later. It’s obviously great art.

Art is a work of visual, auditory, or written media that makes you feel emotion. That’s it. Does this pile of rocks make you feel happy or sad or anything? Then it’s art.

AI makes pictures like a camera does. That doesn’t make them art unless you make something that evokes emotion.

sxan,
@sxan@midwest.social avatar

We’re saying the same thing. AI can create art. My point was that we used to claim that art was a domain unassailable by machines, and this is obviously not true. So now humans - or the particular human to whom I was replying - have a new goalpost: adaptability.

We’ll keep coming up with new goalposts where “humans have an edge” that will keep us relevant and ascendant over machines, and irreplaceable. I believe we’ll run out of goalposts faster than many people would like.

You know, there is one small other hope I have: that, despite how we’ve raised them, our children will be better than us, and will stop the cycle of wealth concentration. It’s unlikely, but it’s the only chance I see.

JoYo,

Artwork is never the art.

someacnt_,

I do have one notion of where we can have an edge. The human brain is quite optimized in energy usage, as a consequence of natural selection. Meanwhile, IIRC, computers are optimized for speed, so they often waste energy. Now let’s see where this goes - will they be able to operate without guzzling energy?

thesporkeffect,
carzian,

I can’t wait for chatgpt sort

UNWILLING_PARTICIPANT,

sort this d (gestures rudely at the concept of llms)

v_krishna,
@v_krishna@lemmy.ml avatar

Did you just post your open ai api key on the internet?

lawrence,

Let’s put it here in ascii format this free OpenAI API Key, token, just for the sake of history and search engines healthiness… 😂

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

But seriously, I hope they have already changed it.

UNWILLING_PARTICIPANT,

Haha it looks that way doesn’t it. Hopefully those are scoped and limited 😳

konalt,
@konalt@lemmy.world avatar

After a small test, it doesn’t work.

mosiacmango,

Nah, this is a meme post about using chatgpt to check even numbers instead of simple code.

Same joke as the OP, different format.
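
Spelled out, the joke is the gap between these two. The OpenAI client call is written from memory and left as comments, so treat it as a rough sketch rather than gospel:

```python
# The joke, made concrete: one of these costs a network round trip and money.
def is_even(n: int) -> bool:
    return n % 2 == 0

# versus, roughly (sketched from memory of the current OpenAI Python client):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",   # example model name
#     messages=[{"role": "user", "content": f"Is {n} even? Answer yes or no."}],
# )
# answer = resp.choices[0].message.content
```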

thesporkeffect,

Thank you for your concern, everyone. I did not create this image.

ook_the_librarian,
@ook_the_librarian@lemmy.world avatar

But which consumes more energy? Like really. I’m betting AI does, but some tasks might be close.

TonyTonyChopper,
@TonyTonyChopper@mander.xyz avatar

The future unifying metric for productivity should be joules per line of code. If you cost more than a machine, you get laid off.

notTheCat,

Time to configure a formatter to insert a newline whenever the syntax is still correct.

ColdFenix,
@ColdFenix@discuss.tchncs.de avatar

What ChatGPT actually comes up with in about 3 mins.

https://discuss.tchncs.de/pictrs/image/f26357ee-f21a-4497-8192-52bc3eddbd0a.png

Klear,

Nice, that saves the coffee.

UNWILLING_PARTICIPANT,

But at what cost 😔

xmunk,

Probably about five bucks a cup.

underisk,
@underisk@lemmy.ml avatar

The comic is about using a machine learning algorithm instead of a hand-coded algorithm, not about using ChatGPT to write a trivial program that no doubt exists a thousand times in the data it was trained on.

kfet,
@kfet@lemmy.ca avatar

GPT is ML

stephen01king,

But not all ML is GPT

Honytawk,

The strengths of Machine Learning are in the extremely complex programs.

Programs no junior dev would be able to accomplish.

So if the post can misrepresent the issue, then the commenter can do so too.

funkless_eck,

The biggest high-level challenge in any tech org is security, and there’s no way you can convince me that ML can successfully counter those challenges.

“oh but it will but it will!”

when

“in the future”

how long in the future

“When it can do it”

how will we know it can do it

“When it can do it”

cool.

Scew,
@Scew@lemmy.world avatar

You probably wreck in chess. :)

xmunk,

I strongly disagree. ML is perfect for small bullshit like “What’s the area of a rectangle” - it falls on its face when asked:

Can we build a website for our security paranoid client that wants the server to completely refuse to communicate with users that aren’t authenticated as being employees… Oh, and our CEO requested a password recovery option on the login prompt.

ulterno,
@ulterno@lemmy.kde.social avatar

I got interested and asked ChatGPT. It gave a middle-management answer.
Guess we know who’ll be the first to go.

Pelicanen,

I think the exact opposite: ML is good for automating away the trivial, repetitive tasks that take time away from development, but it has a harder time making a coherent, maintainable architecture of interconnected modules.

It is also good for data analysis, for example when the dynamics of a system are complex but you have a lot of data. In that context, the algorithm doesn’t have to infer a model that matches reality completely, just one that is close enough for the region of interest.

underisk,
@underisk@lemmy.ml avatar

Yes, that is what they’re good at. But not as good as a deterministic algorithm that can do the same thing. You use machine learning when the problem is too complex to solve deterministically and an approximate result is acceptable.

the_post_of_tom_joad,

But can machine learning teach nerds not to ruin jokes? ;)

ulterno,
@ulterno@lemmy.kde.social avatar

I think it’s the nerds’ job to teach the ML to ruin jokes.

underisk,
@underisk@lemmy.ml avatar

if it did, they wouldn’t be nerds anymore.

kurwa,

But like why would you use ML to do basic maths? Whoever did that is dumber than a junior dev 😝

pearsaltchocolatebar,

Lol, no. ML is not capable of writing extremely complex code.

It’s basically like having a bunch of junior devs cranking out code that they don’t really understand.

ML for coding is only really good at providing basic bitch code that is more time intensive than complex. And even that you have to check for hallucinations.

Scubus,

LLMs are not good for that, but Machine Learning as a whole does not have that limitation.

kurwa,

To reiterate what the parent comment of the one you replied to said, this isn’t about ChatGPT generating code; it’s about using ML to create a nondeterministic algorithm. That’s why in the comic it’s only very close to 12 and not exactly 12.

BluesF,

ML is not good for coding, it is good for approximately solving very complex problems.

NounsAndWords,

I’m hoping even a junior dev has had more than 60 hours of training.

audiomodder,

Yea, but does the AI ask me why “x” doesn’t work as a multiplication operator 14 times while complaining about how this would be easier in Rust?

MxM111,
MxM111 avatar

Well, if training is included, then why is it not included for the developer? From the first days of his life?

ilinamorato,

When did the training happen? The LLM is trained for the task starting when the task is assigned. The developer’s training has already completed, for this task at least.

Deceptichum,
Deceptichum avatar

No? The LLM was trained before you ever even interacted with it. They're not going to train a model on the fly each time you want to use it, that's fucking ridiculous.

ilinamorato,

That’s the joke that the comic is making. Whether or not it’s reflective of reality, they’re joking about a company training a new AI model to calculate the area of rectangles.

0ops,

And even if they do need to train a model, transfer learning is often a viable shortcut
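
For anyone unfamiliar, transfer learning in sketch form: take a model someone else already spent the training budget on, freeze it, and only train a small new piece for your task. The specific backbone (torchvision’s ResNet-18) and the tiny fake batch are just my example choices:

```python
# Transfer-learning sketch: reuse a pretrained backbone, train only a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="DEFAULT")     # downloads pretrained weights

for param in model.parameters():               # freeze everything...
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)  # ...then swap in a fresh head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

batch = torch.randn(8, 3, 224, 224)            # placeholder images
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()                                # gradients only reach the head
optimizer.step()
```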

RealJoL,

The difference is that the dev paid for their training themselves

MxM111,
MxM111 avatar

The company OpenAI also paid for the LLM’s training and then sells the LLM to users.

thetreesaysbark,

Sort of… If the dev didn’t pay for their training, they wouldn’t need as big a wage to pay off their training debt (the usual scenario, I’d wager).

So in a way the company is currently paying off the debt for the dev’s training, most of the time.

Bonehead,

This is so unrealistic. Developers don't drink decaf.

ulterno,
@ulterno@lemmy.kde.social avatar

Glucose dev here.

eager_eagle,
@eager_eagle@lemmy.world avatar

regardless of experience, that’s probably what makes him a junior

marcos,

And LLMs don’t get on the correct answer.

Klear,

I think this comic might predate the LLM craze.

stephen01king,

This post is not specifically about LLMs, though?

marcos,

That’s what people have been pointing out. The 60 hours of training should have been a dead giveaway.

I hope the neurons use a logistic activation function. If it’s a saturating linear one, the result will still be full of surprises.
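
For reference, the two activations being contrasted look like this (my own quick illustration, nothing from the comic):

```python
# Logistic vs. saturating linear activation, side by side.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))   # smooth S-curve, differentiable everywhere

def saturating_linear(x):
    return np.clip(x, 0.0, 1.0)       # linear in the middle, hard-clipped flat

xs = np.linspace(-3, 3, 7)
print(logistic(xs).round(3))
print(saturating_linear(xs))
```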

RobMyBot,

I was shocked and appalled by this blatant inaccuracy.

Bye,

I do, exclusively

Getting rid of caffeine (decaf still has a little) has been amazing for me.

jaybone,

I’m trying to switch to non-alcoholic vodka.

Rolive,

Also called voda.

Cracks_InTheWalls,
@Cracks_InTheWalls@sh.itjust.works avatar

Non-alcoholic gin and tonics are the shit. If you’re legit looking for non-alcoholic drinks and like G&Ts give it a try.

pufferfischerpulver,

Completely agree, it’s basically just botanicals anyway. Well, and booze.

Non-alcoholic beer has gotten a lot better in the last few years as well.

Asafum,

How so? I more than likely take in too much caffeine lol

DrRatso,

Personally, if I have too much and/or have it too late, I have a hard time falling asleep in the evening.

lobut,

I’m not the person you’re replying to but for me, I used to get random headaches and jitters and I feel more consistent now.

The problem is the withdrawal period can be hard for some. It was for me, but overall worth it in the end.

Cliff,

So you get consistent headaches and jitters now instead of getting them randomly?

UNWILLING_PARTICIPANT, (edited )

How much are you drinking? I didn’t think it had an impact on me, even in the afternoon or evening, and only realised the difference when I cut it out.

Asafum,

I have a “thermos” style bottle that’s probably 16oz that I drink throughout the day every day. Weekends I’ll drink more as I’m home and it’s readily available.

It’s cold brew so it’s already cold for anyone disgusted by the “throughout the day” bit lol

HumanPerson,

16 oz is not that much, although cold brew is a little stronger. I used to consume about a gram of caffeine a day, but withdrawal for me was just a light headache and slightly lower energy for a day (I went no-caffeine for a little while to reduce tolerance). I did notice my energy improve without it; however, I am sometimes not able to get enough sleep, and it is good for leveling out energy in those cases. I generally try to have low doses and occasional strategic bursts when necessary. Also, if you are worried about sleep, you can do the math using the half-life of caffeine (5 hours avg.) to figure out how much is still in your system when you go to bed. Sorry if incoherent, I have been busy this week and not getting much sleep.
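
The half-life math mentioned above, spelled out. The 5-hour figure is the average quoted here, and the 200 mg dose is just an example number:

```python
# Residual caffeine after a given number of hours, using exponential decay.
def caffeine_left(dose_mg: float, hours: float, half_life_h: float = 5.0) -> float:
    return dose_mg * 0.5 ** (hours / half_life_h)

print(caffeine_left(200, 8))   # ~66 mg still active 8 hours after a 200 mg coffee
```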

UNWILLING_PARTICIPANT,

Not the same, but I switched to tea, mostly for aesthetic reasons, and after a brief adjustment period I’m finding it a lot more fun and varied than coffee drinking. And it’s easier to find very low-caffeine, or tasty zero-caffeine, teas of as many varieties as you can imagine.

I’ll still have a social coffee every now and then, but anyway I’d recommend it, at least to check out. It’s like discovering scotch after a lifetime of beer drinking.

Appoxo,

Try explaining tea to others, though.
Every time I am on-site I get asked for two options: coffee or water.

UNWILLING_PARTICIPANT,

That’s why I bring a bag or two in my breast pocket when I go out!

Appoxo,

I assume you are either not interested in loose tea or not there yet.

Once you reach temperature-sensitive teas (like Japanese greens) that are additionally sensitive to hard water, it quickly becomes difficult to brew tea at work/not at home.

Personally, I started to bring a 400ml thermos (about my usual cup) and on some days my 1L thermos.
Both of my thermoses keep a 70°C tea warm (probably 50°C) even until the end of work, so temperature doesn’t become an issue, but oxidation does instead. Greens like to turn a faint brown color and change their taste. Sometimes for the better, sometimes not.

UNWILLING_PARTICIPANT,

not interested in loose tea or not there yet.

This strikes me as particularly funny, thank you, that is very accurate. I have dabbled in the leaf that is loose, mostly buying baggies from the bulk food store, so not particularly fresh (or high quality). But yeah I am trying to stick to the cheap stuff for now. I love how it’s so much less expensive than coffee!

Friends keep sending me these boutique tea samples now that I’m drinking tea haha, so I do know what I’m missing.

Appoxo,

Yeah, bagged tea is definitely cheaper compared to those more boutique teas.
But you can get it cheaper in local tea shops or on sites like Yunnan Sourcing. But: shipping and import.

UNWILLING_PARTICIPANT,

I’m not even a year into seriously being into tea, so I imagine I’ll just get more particular over time. I’m still working through a few boxes of various grocery store black and herbal teas, so maybe I’ll look around for something different when those start to run out.

I do really love a big pot of green tea while I’m working at my desk job.

Appoxo, (edited )

My biggest advice:
Stop going to grocery stores and instead try to find a local tea store with loose teas stored in non-see-through containers.
They will usually let you smell the teas and you can determine if they may fit your taste or not.
But watch out: Smell ≠ Taste
And the taste is very dependent on your water hardness (if it’s hard, you should get a water filter) and on how long and how hot you brew it.

Wish you best of luck :)

If you want a starter:
80-90°C water (my cups are ~400ml)
Chinese gunpowder tea (4g is my go-to)
1-2 mint leaves (my choice is nana/Moroccan mint)
Brew for 1-2min
If it’s a good quality tea, you can reuse the bag and maybe add another 1-2 leaves of mint

Agent641,

DECaf is a pseudo abbreviation for Dangerously and Extraordinarily Caffeinated.

It has a higher KDR than a Panera charged lemonade.

orca,
@orca@orcas.enjoying.yachts avatar

Ahh the future of dev. Having to compete with AI and LLMs, while also being forced to hastily build apps that use those things, until those things can build the app themselves.

EdibleFriend,
@EdibleFriend@lemmy.world avatar

And also, as a developer, you have to deal with the way Star Trek just isn’t as good as it used to be.

Because you’re all fucking nerds.

(Me too tho)

Deceptichum,
Deceptichum avatar

SNW has been thoroughly enjoyable so far.

Klear,

Let’s invent a thing inventor, said the thing inventor inventor after being invented by a thing inventor.

SuperIce,

You could make a religion out of this.

BleatingZombie,

A recursive religion! I’m in!

konalt,
@konalt@lemmy.world avatar

The sun is a deadly laser.

JDubbleu,

I mean if you have access but are not using Copilot at work you’re just slowing yourself down. It works extremely well for boilerplate/repetitive declarations.

I’ve been working with third party APIs recently and have written some wrappers around them. Generally by the 3rd method it’s correctly autosuggesting the entire method given only a name, and I can point out mistakes in English or quickly fix them myself. It also makes working in languages I’m not familiar with way easier.

AI for assistance in programming is one of the most productive uses for it.

antonplankton,
JDubbleu,

That was a pretty interesting read. However, I think it leans on correlation as causation a little too strongly. The overall vibe of the article was that developers who use Copilot are writing worse code across the board. I don’t necessarily think this is the case, for a few reasons.

The first is that Copilot is just a tool and just like any tool it can easily be misused. It definitely makes programming accessible to people who it would not have been accessible to before. We have to keep in mind that it is allowing a lot of people who are very new to programming to make massive programs that they otherwise would not have been able to make. It’s also going to be relied on more heavily by those who are newer because it’s a more useful tool to them, but it will also allow them to learn more quickly.

The second is that they use a graph with an unlabeled y-axis to show an increase in reverts, and never indicate whether it’s raw lines of code or a percentage of lines of code. This is a problem because Copilot allows people to write a fuck ton more code. Like, it legitimately makes me write at least 40% more. Any increase in revisions is simply a function of writing more code. I actually feel like it leads to me reverting a smaller percentage of lines of code, because it forces me to reread the code that the AI outputs multiple times to ensure its validity.

This ultimately comes down to the developer who’s using the AI. It shouldn’t be writing massive complex functions. It’s just an advanced, context-aware autocomplete that happens to save a ton of typing. Sure, you can let it run off and write massive parts of your code base, but that’s akin to hitting the next word suggestion on your phone keyboard a few dozen times and expecting something coherent.

I don’t see it much differently than when high level languages first became a thing. The introduction of Python allowed a lot of people who would never have written code in their life to immediately jump in and be productive. They both provide accessibility to more people than the tools before them, and I don’t think that’s a bad thing even if there are some negative side effects. Besides, in anything that really matters there should be thorough code reviews and strict standards. If janky AI generated code is getting into production that is a process issue, not a tooling issue.

orca,
@orca@orcas.enjoying.yachts avatar

Oh I use Copilot daily. It fills the gaps for the repetitive stuff like you said. I was writing Stories in a Storybook.js project once and was able to make it auto-suggest the remainder of my entire component states after writing 2-3. They worked out of the gate too with maybe a single variable change. Initially, I wasn’t even going to do all of them in that coding session just to save time and get it handed off, but it was giving me such complete suggestions that I was able to build every single one out with interaction tests and everything.

Outside of use cases like that and getting very general content, I think AI is a mess. I’ve worked with ChatGPT’s v3.5-4 API a ton and it’s unpredictable and hard to instruct sometimes. Prompts and approaches that worked 2 weeks ago will now suddenly give you some weird edge case that you just can’t get it to stop repeating - even when using approaches that worked flawlessly for others. It’s like trying to patch a boat while you’re in it.

The C-suite people and suits jumped on AI way too early and have haphazardly forced it into every corner. It’s become a solution searching for a problem. The other day, a friend of mine said he had a client who casually asked how they were going to use AI on the website being built for them, like it was just a commonplace thing. The buzzword has gotten ahead of itself and now we’re trying to reel it back down to earth.
