spiderman,

guess we can still earn bread for another day.

FauxPseudo,

Does anyone have an alternative title for this that doesn’t sound clickbaity as f***? I’m kind of afraid to feed whatever media organization came up with that headline.

vallode,

It is a typical YouTube clickbait title, but the YouTuber in question doesn’t seem to be nefarious; it does what it says on the tin :P I’d say the description covers what the video is about better.

Sparrow_1029,

Am I one of the few who just doesn’t use AI at all? I don’t have to generate tons of code for work at the moment, and the brand new projects I’ve been given are small, meaning I wouldn’t necessarily use it to generate starter boilerplate. I have coworkers who love Copilot or spend much longer prompting ChatGPT than they would if they wrote the code themselves. A majority of my time is spent modelling the problem, gathering requirements, and researching others’ solutions online (likely this step could be better AI-assisted?), not actually implementing a solution in code.

Anyway, I’m not super anti-AI in software development, and I see where it could be useful. Maybe it just isn’t for me yet. The current hype around it, as well as the attitude of big-tech exceptionalism (“AI can solve all our problems”), feels a bit like a bubble, at least regarding the current generation of LLMs and ML.

HarkMahlberg,

A majority of my time is spent modelling the problem, gathering requirements, researching others’ solutions online (likely this step could be better AI-assisted?), not actually implementing a solution in code.

Perfectly said. The most difficult part of software engineering is all the things outside the code. Did I find all the stakeholders? Did they state all their requirements? Did I interpret those requirements properly? Do they like my UI mockup in Paint? Can I convince them to take a "version 1" without some bells and whistles so that they can get it on time, and I can give them the bells and whistles in "version 2" later? Is my current design consistent with other products/tools in the organization? Do any requirements actually violate regulations (spoiler: the client is an asshole and violating regulations is the whole point)? Can I convince them to drop that requirement? How can I improve my organization's SDLC so that the product is quicker and easier to deliver?

And the most pressing question of all: do the clients and stakeholders actually need me to write this product in the first place? Is there an existing solution that would be cheaper and quicker to just buy?

Even given all the things AI can do right now, I doubt it can handle even a fraction of all those tasks above. Let alone writing code that compiles...

vallode,

I think it’s a sensible position to be in. I tend to use AI in order to build awareness of its capabilities. I find that it is sometimes useful for brainstorming or “tip of my tongue” searches, but, as you say, the actual coding capabilities are exaggerated.

To me, Cognition AI is doing what most other hype-based startups do, which is generate good headlines so that VC money can keep pouring in. It is up to us to spot and question things that don’t quite make sense… and anything with the words “AI understands X” doesn’t make sense currently ^^

bionicjoey,

I’m in a similar boat. The challenges I run into at work often result from the weird infrastructure we use in our datacenters and our weird legacy software stacks, so there’s no reason to believe AI will have anything in its training set that will help me. People don’t realize that AI is only as good as your problem is common. If you spend your time working on weird legacy systems, it can’t possibly predict all the weird needles you need to thread when developing solutions.

admin,

One way it can be useful is as a more verbal variant of rubber duck debugging. You need to state the issue you’re facing, including the context and edge cases, and in doing so the problem also becomes clearer to you.

Unlike a rubber duck, it can then actually suggest some approach vectors, which you can then dismiss or investigate further.
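For example, here’s a rough sketch of the idea, assuming the OpenAI Python client and a chat-capable model; the model name and the problem description are just placeholders, and any other client or local model works the same way:

```python
# Rough sketch of LLM-as-rubber-duck, assuming the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY in the environment.
# The model name and the problem text are placeholders.
from openai import OpenAI

client = OpenAI()

# State the issue, the context, and the edge cases, as you would to the duck.
problem = (
    "Intermittent timeouts from our payments service, but only on retried "
    "requests through the gateway. Context: Python 3.11, httpx with a "
    "5-second timeout, gateway retries twice with no backoff. Edge case: "
    "the first attempt sometimes succeeds after the client has given up."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a rubber duck: restate the problem in your own "
                "words, then suggest a few approach vectors to investigate."
            ),
        },
        {"role": "user", "content": problem},
    ],
)

print(response.choices[0].message.content)
```

The point is the shape of the prompt, not the client: writing out the context and edge cases is the duck part, and whatever suggestions come back are the extra bit you can dismiss or dig into.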

lemmy___user,

This is how I use LLMs right now, and there have been a few times it’s been genuinely helpful. Mind you, most of the time it’s been helpful, it’s because it hallucinated some nonsense that pointed me in the right direction, but that’s still at least a little better than the duck.

admin,

That was my experience with GPT-3.5 as well, but the hit rate is a lot better with GPT-4 and other models like Mixtral and its derivatives.
