FlorianSimon

@FlorianSimon@sh.itjust.works

FlorianSimon,

This is an ironic ChatGPT answer, meant to (rightfully) creep you out.

FlorianSimon,

Are they not a ChatGPT troll account or a bot?

FlorianSimon,

This is not really a slam dunk argument.

First off, this is not the kind of code I write on my end, and I don’t think I’m the only one not writing scripts all day. There’s a need for scripts at times in my line of work but I spend more of my time thinking about data structures, domain modelling and code architecture, and I have to think about performance as well. Might explain my bad experience with LLMs in the past.

I have actually written similar scripts in comparable amounts of time (a day for a working proof of concept that could have gone to production as-is) without LLMs. My use case was to parse JSON crash reports from a provider (undisclosable due to NDAs) and serialize them to my company’s binary format. A significant portion of that time was spent deciding what I cared about and which JSON fields I should ignore. I could have used ChatGPT to find the command-line flags for my Docker container, but it didn’t exist back then, and Google helped me just fine.

Assuming you had to guide the LLM throughout the process, this is not something that sounds very appealing to me. I’d rather spend time improving on my programming skills than waste that time teaching the machine stuff, even for marginal improvements in terms of speed of delivery (assuming there would be some, which I just am not convinced is the case).

On another note…

There’s no need for snark, just detailing your experience with the tool serves your point better than antagonizing your audience. Your post is not enough to convince me this is useful (because the answers I’ve gotten from ChatGPT have been unhelpful 80% of the time), but it was enough to get me to look into AutoGen Studio which I didn’t know about!

FlorianSimon,

I would argue that there just isn’t much gain in terms of speed of delivery, because you have to proofread the output - not doing it is irresponsible and unprofessional.

I don’t tend to spend much time on a single function, but I can remember a time recently when I spent two hours writing a single function. I had to mentally run through all the cases to check that it worked, but I would have had to do that with LLM output anyway. And I feel like reviewing code is much harder to do right than writing it right.

In my case, LLMs might have saved some time, but training the complexity muscle has value in itself. It’s pretty formative and there are certain things I would do differently now after going through this. Most notably, in that case: fix my data format upfront to avoid edge cases altogether and save myself some hard thinking.

I do see the value proposition of IDEs generating things like constructors, and I sometimes use such features, but reviewing the output is mentally exhausting, and it’s necessary because even non-LLM generated code sometimes comes out broken. Assuming it worked 100% of the time: I’m still not convinced it would amount to much time saved at the end of the day.

FlorianSimon,

What bug have you spotted?

FlorianSimon,

I’m not trying to convince myself of anything. I was very happy to try LLM tools for myself. They just proved to be completely useless. And there’s a limit to what I’m going to do to try out things that just don’t seem to work at all. Paying a ton of money to a company to use disproportionate amounts of energy for uncertain results is not one of them.

Some people have misplaced confidence with generated code because it gets them places they wouldn’t be able to reach without the crutches. But if you do things right and review the output of those tools (assuming it worked more often), then the value proposition is much less appealing… Reviewing code is very hard and mentally exhausting.

And look, we don’t all do CRUD apps or scripts all day.

FlorianSimon,

Something seems to fly above your head: quality is not optional and it’s good engineering practice to seek reliable methods of doing our work. As a mature software person, you look for tools that give less room for failure and want to leave as little as possible for humans to fuck up, because you know they’re not reliable, despite being unavoidable. That’s the logic behind automated testing, Rust’s borrow checker, static typing…

If you’ve done code review, you know it’s not very efficient at catching bugs. It’s not efficient because you don’t pay as much attention to details when you’re not actually writing the code. With LLMs, you have to do code review to ensure you meet quality standards, because of the hallucinations, just like you’ve got to test your work before committing it.

I understand the actual software engineers who care about delivering working code and would rather write it themselves in order to be more confident in the quality of the output.

FlorianSimon,

You have every right not to, but the “useless” word comes out a lot when talking about LLMs and code, and we’re not all arguing in bad faith. The reliability problem is still a strong factor in why people don’t use this more, and, even if you buy into the hype, it’s probably a good idea to temper your expectations and try to walk a mile in the other person’s shoes. You might get to use LLMs and learn a thing or two.

FlorianSimon,

🤡

FlorianSimon,

Truth is, your complete misunderstanding of the person you replied to seems to suggest otherwise, and the arrogant delivery doesn’t help.

Seems like that one hit a nerve, huh?

FlorianSimon,

I’m glad that you’re finding this useful. When I say it’s useless, I speak in my name only.

I’m not afraid to try it out, and I actually did, and, while I was impressed by the quality of the English it spits out, I was disappointed with the actual substance of the answers, which makes this completely unusable for me in my day to day life. I keep trying it every now and then, but it’s not a service I would pay for in its current state.

Thing is, I’m not the only one. This is the opinion of the majority of people I work with, senior or junior. I’m willing to give it some time to mature, but I’m unconvinced at the moment.

FlorianSimon,

OK

FlorianSimon, (edited )

I have not tried Copilot, no. I’m not giving any tool money, personal info and access to my code when it can’t reliably answer a question like: “does removing from a std::vector invalidate iterators?” (not a prompt I tried on LLMs but close enough).

That shit’s just dangerous, for obvious reasons. Especially when you consider the catastrophic impact these kinds of errors can have.

There needs to be a fundamental shift to something that detects and fixes the garbage, which just isn’t there ATM.

FlorianSimon,

I thought you were not reading me anymore?

I never said I was an expert on Copilot, I’ve consistently said LLMs are not where they should be in terms of reliability, which is also true of Copilot.

Edit: oh and sorry for not being willing to waste my time trying out every new piece of tech on the block when all they’re doing is rehashing unsound ideas 🤷‍♂️

FlorianSimon,

I don’t like how bad its underlying tech is, but I guess subtlety, just like software quality, isn’t your strong suit?

FlorianSimon,

You can try funny quips again, but you’ve tried before, said you’d stop reading and here we are. Are you done for realsies now?

FlorianSimon,

Thing is, if you want to sell the tech, it has to work, and what most people have seen by now is not really convincing (hence the copious amount of downvotes you’ve received).

You guys sound like fucking cryptobros: crypto will totally replace fiat currency next year. Trust me bro.

FlorianSimon, (edited )

If everybody in society “votes” that kind of stuff “down”, the hype will eventually die down and, once the dust has settled, we’ll see what this is really useful for. Right now, it can’t even do fucking chatbots right (see the Air Canada debacle with their AI chatbot).

Not every invention is as significant as the Internet. There are things like crypto, which is the butt of every joke in the tech community, and the people peddling that shit are mocked by everyone.

I honestly don’t buy that we’re on the edge of a new revolution, or that LLMs are close to true AGI. Techbros have been pushing a lot of shit that is not in alignment with regular folks’ needs for the past 10 years, and have kept tech alive artificially, without interest from the general population, thanks to venture capital.

However, in the case of LLMs, the tech is interesting and is already delivering modest value. I’ll keep an eye on it because I see a modest future for it, but it just might not be as culturally significant as you think it may be.

With all that said, one thing I will definitely not do is spend any time setting things up locally, running an LLM on my machine, or paying any money. I don’t think this gives a competitive edge to any software engineer yet, and I’m not interested in becoming an early adopter of the tech given the mediocre results I’ve seen so far.

FlorianSimon,

I don’t care about the edge of that tech. I’m not interested in investing any time making it work. This is your problem. I need a product I can use as a consumer. Which doesn’t exist, and may never exist because the core of the tech alone is unsound.

You guys make grandiloquent claims that this will automate software engineering and be everywhere more generally. Show us proof. What we’ve seen so far is ChatGPT (lol), Air Canada’s failures to create working AI chatbots (lol), a creepy plushie and now this shitty device. Skepticism is rationalism in this case.

Maybe this will change one day? IDK. All I’ve been saying is that it’s not ready yet from what I’ve seen (prove me wrong with concrete examples in the software engineering domain) and given that it tends to invent stuff that just doesn’t exist, it’s unreliable. If it succeeds, LLMs will be part of a whole delivering value.

You guys sound like Jehovah’s Witnesses. Get a hold of yourselves if you want to be taken seriously. All I see here is hyperbole from tech bros without any proof.

FlorianSimon,

Show me proof or shut up. It’s that simple. This is not a subjective matter like wine tasting. There needs to be objective and tangible proof it works.

Hyperbole again.

FlorianSimon,

Touché

FlorianSimon,

I’ll check out OpenUI, thanks for the suggestion!
