kellogh,
@kellogh@hachyderm.io

has been a lot of fun, but i don’t see it scaling out to general audiences. simple things like printing an existing model are pretty complicated. even just, “load model, switch spool, print” is far beyond what my 7yo can do, and that seems like a big UX problem

i wonder if an LLM could help parts of the UX. load a model and the LLM asks what you’ll be using it for, then adjusts infill & speed parameters appropriately. idk, the whole market seems dead without something big changing
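A minimal sketch of the interaction described above. The use-case names and parameter values are illustrative assumptions, and the classification step — which would be the LLM call in a real UI — is stubbed out with a keyword heuristic so the example is self-contained:

```python
# Hypothetical sketch: map a user's stated use case to print parameters.
# The profiles below are placeholders, not slicer recommendations.
PROFILES = {
    "functional": {"infill_pct": 60, "speed_mm_s": 40},
    "decorative": {"infill_pct": 10, "speed_mm_s": 80},
    "prototype":  {"infill_pct": 15, "speed_mm_s": 100},
}

def classify_use_case(answer: str) -> str:
    """Stand-in for the LLM asking 'what will you be using it for?'
    A real system would send `answer` to a model; this is a heuristic."""
    text = answer.lower()
    if any(w in text for w in ("bracket", "mount", "load-bearing", "car")):
        return "functional"
    if any(w in text for w in ("test", "draft", "fit check")):
        return "prototype"
    return "decorative"

def suggest_params(answer: str) -> dict:
    """Return the parameter profile for the inferred use case."""
    return PROFILES[classify_use_case(answer)]

print(suggest_params("a bracket to mount a sensor"))
# {'infill_pct': 60, 'speed_mm_s': 40}
```

The point is that the conversational layer only *selects* parameters; everything downstream stays deterministic.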

teotwaki,
@teotwaki@mastodon.online

@kellogh couldn’t the same be said about nearly every hobby? I don’t expect a 7yo to be an expert in woodworking, know how to work with grain, understand the differences in glue, joints, and the myriad different tools and techniques one could use. Same for baking, scuba diving, etc.

teotwaki,
@teotwaki@mastodon.online

@kellogh LLMs are letter generators. Nothing more. They can’t understand the content of an image, let alone a 3D model. Sure, they might be able to infer that a massive overhang without supports could fail on your printer, but they couldn’t explain why.

kellogh,
@kellogh@hachyderm.io

@teotwaki imo 3D printing can be a lot more than just a hobby

LLMs are a language interface to computers. i’m not convinced by the whole “big brain” AGI thing, but even today they’re very good at letting people interface with complex systems. i’d imagine such a UI would offload the actual model manipulation to other non-AI tools; the LLM would just reformat the task at hand into parameters for other software
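The division of labor kellogh describes — LLM reformats intent into parameters, ordinary software does the work — could look something like this. The JSON contract, allowed keys, and slicer CLI flags are assumptions for illustration, not a real slicer's interface:

```python
import json

# Hypothetical contract: the LLM must reply with JSON using only these keys;
# everything downstream is ordinary, non-AI software.
ALLOWED_KEYS = {"infill_pct", "speed_mm_s", "supports"}

def parse_llm_reply(reply: str) -> dict:
    """Parse the LLM's reply into slicer parameters, rejecting anything
    that isn't valid JSON restricted to the expected keys."""
    params = json.loads(reply)  # raises ValueError on malformed output
    extra = set(params) - ALLOWED_KEYS
    if extra:
        raise ValueError(f"unexpected keys: {extra}")
    return params

def build_slicer_args(params: dict) -> list[str]:
    """Turn validated parameters into CLI flags for a hypothetical slicer."""
    args = [f"--infill={params['infill_pct']}",
            f"--speed={params['speed_mm_s']}"]
    if params.get("supports"):
        args.append("--supports")
    return args

reply = '{"infill_pct": 20, "speed_mm_s": 60, "supports": true}'
print(build_slicer_args(parse_llm_reply(reply)))
# ['--infill=20', '--speed=60', '--supports']
```

Because the LLM only emits data that a strict parser checks, the non-AI tooling never executes free-form model output.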

teotwaki,
@teotwaki@mastodon.online

@kellogh Yeah I could definitely see that latter argument for LLMs. But they won’t be able to replace knowledge and understanding. I can explain that this is a functional part for the car, and the LLM can respond with “ya definitely don’t want PLA, printerboi”, and it could maybe recommend some general truths. But if the user doesn’t understand anything, an LLM won’t be able to be a secondary brain.

kellogh,
@kellogh@hachyderm.io

@teotwaki i see it as following the rules of a disruptive technology. at first it’s a lot worse than the status quo, but it enables a whole new use case that the previous tech didn’t. it captures the market from the low end and steadily chips away at the mid and high end of the market until one day it’s hard to imagine ever doing it the old way

SilentMobius,
@SilentMobius@mastodon.social

@kellogh @teotwaki there's a good reason there are no child-safe CNC machines as well.

kellogh,
@kellogh@hachyderm.io

@SilentMobius @teotwaki there are some good articles about this, but from a safety and security standpoint, the LLM needs to be treated like a user. that means validating its instructions. this isn’t unknown territory: if you take a machine made for experts and redesign it for general audiences, that means putting guardrails in place. if the LLM isn’t reliable, you act accordingly and put appropriate guards in place
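Treating the LLM like an untrusted user mostly reduces to ordinary input validation. A sketch of that idea — the limit values here are made-up placeholders, not real printer specs:

```python
# Hypothetical guardrail: settings proposed by the LLM are treated exactly
# like untrusted user input and clamped to machine-safe limits before use.
SAFE_LIMITS = {
    # placeholder ranges, not real printer specs
    "nozzle_temp_c": (170, 260),
    "bed_temp_c": (0, 110),
    "speed_mm_s": (10, 150),
}

def apply_guardrails(llm_params: dict) -> dict:
    """Clamp each proposed value into its safe range; silently drop
    any key the machine doesn't recognize."""
    safe = {}
    for key, (lo, hi) in SAFE_LIMITS.items():
        if key in llm_params:
            safe[key] = min(max(llm_params[key], lo), hi)
    return safe

# The LLM hallucinates a dangerous temperature; the guardrail clamps it
# and discards the unknown key entirely.
print(apply_guardrails({"nozzle_temp_c": 450, "speed_mm_s": 60, "disable_thermal_runaway": True}))
# {'nozzle_temp_c': 260, 'speed_mm_s': 60}
```

This is the same posture firmware like thermal-runaway protection already takes toward human input: the machine enforces its own envelope regardless of who (or what) is asking.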

teotwaki,
@teotwaki@mastodon.online

@kellogh @SilentMobius What’s new territory is that instead of a dumb machine doing what it’s told, we now have something that appears to be reasoning and giving argued instructions or guidance, as flawed as it may be.

Two decades ago, we saw people driving into lakes because the GPS told them to. What guardrail do you put in place when an entire generation of marketing will be based on “the machine knows better than you” (see Tesla FSD)?

kellogh,
@kellogh@hachyderm.io

@teotwaki @SilentMobius right, to restate what you’re saying: we have sophisticated agents capable of doing great things as well as making big mistakes. the problem is LLMs can’t easily be held accountable, and until they can, you need to treat them specially. a lot of their value is reduced until then. regardless, their potential is so insanely high that you can still find lots of places where they provide value safely today

SilentMobius,
@SilentMobius@mastodon.social

@kellogh @teotwaki personally I'm saying that an LLM is fundamentally the wrong technology and will never be the right solution. Some form of ML aimed at specific combinations of settings? Sure, with a lot of research, that could be useful. But something that, at its base level, is a "plausible text generator" is never the right solution.

kellogh,
@kellogh@hachyderm.io

@SilentMobius @teotwaki well, it depends on how you use them. also, multi-modal LLMs are quickly breaking outside just language generation. if you’re okay with ML but not LLMs, then i’m unclear what the line is for you. my hunch is you just don’t understand them
