mattlav1250
@mattlav1250@journa.host

@ErikJonker this may be true, but unfortunately the inference cost of models at this scale is completely unsustainable for that sort of use, or anything equivalent to it.

It takes 128 GPUs per instance to run GPT-4.

The cost per conversation on GPT-3 was $0.38.

GPT-4 is roughly 3 times as expensive to run: about $1.14 PER CONVERSATION.

At those rates, routine casual usage is just not viable...even assuming we could give every child 128 dedicated GPUs, which we definitely cannot.
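To make the scaling concrete, here's a minimal sketch of that back-of-envelope arithmetic. The $0.38 figure and the ~3x multiplier come from the post itself; the conversations-per-day count and student population are purely illustrative assumptions, not claims:

```python
# Back-of-envelope inference-cost math from the figures above.
# The $0.38 GPT-3 figure and the ~3x GPT-4 multiplier are the
# post's numbers; everything else is an illustrative assumption.

GPT3_COST_PER_CONVERSATION = 0.38  # dollars, per the post
GPT4_COST_MULTIPLIER = 3           # "roughly 3 times as expensive"

gpt4_cost = GPT3_COST_PER_CONVERSATION * GPT4_COST_MULTIPLIER
print(f"GPT-4 cost per conversation: ${gpt4_cost:.2f}")  # $1.14

# Hypothetical classroom-scale usage (both numbers are assumptions):
conversations_per_day = 10
students = 50_000_000  # roughly US K-12 enrollment scale, for illustration

daily_cost = gpt4_cost * conversations_per_day * students
print(f"Daily cost at that scale: ${daily_cost:,.0f}")  # ~$570,000,000
```

Even with generous error bars on every input, the total lands in the hundreds of millions of dollars per day, which is the core of the sustainability objection.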
