kurtseifried,

I just started using GitHub Copilot for VSCode, and.. wow. This basically does 50-80% of the scut work, meaning I can crank stuff out 2-5x faster and focus on the higher level stuff instead of spending time thinking "did I remember to handle error X when I grab a web page?" It also reduces the amount of typing, which is good for avoiding RSI. At $19/month/user for a team, the ROI on this is like... 20-30 minutes?

GitHub Copilot example 1

hobs,
@hobs@mstdn.social avatar

@kurtseifried
It will change the kind of code we write and the kinds of bugs and features we create. I don't know whether it creates more or fewer insidious bugs and antipatterns (bloat, less readable style). At the very least it will recommend more MS tools. I've not seen studies on code quality and productivity, or on the long-term outcomes for large complex systems. But it looks like LLMs change the way we think: https://dl.acm.org/doi/pdf/10.1145/3544548.3581196

kurtseifried,

@hobs I just realized what memories are being twigged. A lot of the “LLM will bring doom to programming” crowd sounds the same as the anti-PHP crowd (“random people will make web apps that suck! They won’t learn comp sci!”)

hobs,
@hobs@mstdn.social avatar

@kurtseifried
Ahh yes. But these aren't random people. These are random machines, trained to deceive us. Humans learn the truth. They learn what works and what doesn't. ChatGPT and Codex are not "grounded". They live in the world of social media, not the physical world. They never run their code. They get rewarded for producing hype (likes). They don't get punished when things break (or when a PHP website sux).

kurtseifried,

@hobs again, I’m reminded of the “self driving cars get into more collisions” mantra. Or is it just that they report every collision? I and everyone I know who drives have been in at least one unreported collision. E.g. once I was stopped at a stop sign in a parking lot and someone decided to back into my stopped car; they clearly didn’t look behind them or in their mirrors at all before hitting the gas. Good thing there wasn’t a person behind them.

I feel like very few people apply proper, rigorous quality control to output generated by humans either. Again, anecdotal data (but ask around and you’ll find I’m not the only one): during my divorce the opposing counsel managed to misspell one of my kids’ names on the paperwork, get my ex-wife’s address wrong, and make some other insanely basic errors, all at $450 an hour.

hobs,
@hobs@mstdn.social avatar

@kurtseifried
Indeed. I'm 100% in favor of AI helping us think better. But personally I wouldn't get into a self-driving car trained to give me a thrill ride. Fortunately most self-driving car companies are grounded in reality. Copilot is not. YouChat and perplexity.ai are grounded. Google and ChatGPT are not (I haven't researched BingChat). For self-driving tech I prefer Comma.ai for my car; grounding is important.

hobs,
@hobs@mstdn.social avatar

@kurtseifried
I build bots that educate new coders and democratize access to AI -- random humans building stuff. Codex and ChatGPT do not educate and empower. There are much more effective, more powerful alternatives in the open source world that are more like the PHP of this era. Codex and ChatGPT have made the wrong design choice. They will get crushed by better LLM architectures like Perplexity.ai, YouChat, AgentGPT, and AutoGPT.

kurtseifried,

@hobs long term I suspect open source LLMs will win heavily for one simple reason: no guard rails. It took a decade-plus for web apps to become easy to deploy; it won’t take open source LLM stuff a decade, probably 1-2 years to get to a “good enough” situation for most of us.

hobs,
@hobs@mstdn.social avatar

@kurtseifried
Huggingface has already made it approachable for most developers. And perplexity.ai's open source plugin ecosystem has produced some amazing plugins in only a couple of months. Stuff is happening fast.

kurtseifried,

@hobs yup, I’m guessing within a few months we’ll have good cookbooks, whatever the LLM equivalent of a Docker container is, and so on, and then it will only get better. The real challenge will be training data, especially with respect to licensing. I’m betting we’ll see places like Reddit move to a stock-photo-style licensing business; it’s another potentially lucrative revenue stream, and they’d be insane not to.

hobs,
@hobs@mstdn.social avatar

@kurtseifried
Fortunately the data challenge is not as hard as it appears. The state of the art is to use 1000x less data with higher-quality curated feedback. Check out BLOOMZ etc. They are 100x faster and smaller, require 1000x less data, and are indistinguishable in quality.

hobs,
@hobs@mstdn.social avatar

@kurtseifried
If you have time for another podcast you might like The Changelog. It surprises me every week with what people are building.

spmatich,
@spmatich@ioc.exchange avatar

@kurtseifried and thank you to all those devs, whose work has been committed on GitHub, from whom Microsoft sought permission before using their code to train the model upon which Copilot is based.
As for the devs whom Microsoft did not approach: I guess you have to ask Microsoft to tell you who they are, so you can personally ask them for permission, thank them, and acknowledge their contributions in your work...

kurtseifried,

@spmatich I’m an open source kind of person so .. yeah. We all stand on the shoulders of giants. Pretending otherwise is just wrong.
