Endward23,

My question is: imagine we were to put all the data input of a certain task, e.g. making a meal, into text fragments and send these "sense data" packets (^1^) to the AI. Would the AI be able to cook if we taught it how to give output that controls a robot arm?
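
To make the idea concrete, here is a minimal sketch of the loop the question describes. Everything in it is assumed: `query_llm` is a stub standing in for a real model endpoint, and the JSON command format for the arm is made up for illustration.

```python
import json

# Hypothetical stand-in for an LLM call; a real system would send the
# prompt to an actual model endpoint and return its text completion.
def query_llm(prompt: str) -> str:
    # Fake a plausible completion so the sketch runs end to end.
    return '{"action": "grasp", "target": "pan_handle", "gripper_force": 0.4}'

def serialize_sense_data(readings: dict) -> str:
    """Pack sensor readings into a plain-text 'sense data' packet."""
    lines = [f"{name}: {value}" for name, value in readings.items()]
    return "SENSE PACKET\n" + "\n".join(lines)

def parse_arm_command(completion: str) -> dict:
    """Parse the model's text output into a structured robot-arm command."""
    return json.loads(completion)

# One step of the observe -> describe -> decide -> act loop.
readings = {
    "camera": "a pan on the stove, handle pointing left",
    "gripper": "open",
    "stove_temp_c": 180,
}
prompt = serialize_sense_data(readings) + "\nReply with one JSON arm command."
command = parse_arm_command(query_llm(prompt))
print(command)  # e.g. {'action': 'grasp', 'target': 'pan_handle', ...}
```

Whether a model can keep such a loop coherent over many steps is exactly what the question asks.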

If the answer to this question is yes, we already have a very useful general tool: the LLM-based AI would be able to control and observe some situations. If the answer is "no", I guess that would have interesting implications too.

^1^ : Remember, some AI systems are already able to tell what is on a given photo. Not 100% reliably, but maybe good enough for a meal. In some cases, one might call the task "provocative".
