devinprater,

Well, I'm sure an NVDA addon with this will be here soon. Just amazing how far things have come. I mean, I hope there's a Windows version of the Llava part of this anyway.

llamafile is the new best way to run a LLM on your own computer:

https://simonwillison.net/2023/Nov/29/llamafile/

pax,

@devinprater but where can I download the bigger model from? The Hugging Face interface is quite bad.

devinprater,

@pax I'm not sure.

MutedTrampet,

@devinprater Well, LLaVA can be run locally as part of llama.cpp. The image encoding can be rather slow on CPUs, though.
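For context, a local LLaVA run with llama.cpp's `llava-cli` example looks roughly like this (the model and projector filenames here are illustrative; substitute the GGUF files you actually downloaded):

```shell
# Sketch of running LLaVA locally with llama.cpp's llava-cli example.
# -m is the language model, --mmproj is the image-projection model;
# both filenames below are placeholders for your own downloads.
./llava-cli \
  -m models/llava-v1.5-7b-Q4_K.gguf \
  --mmproj models/llava-v1.5-7b-mmproj-f16.gguf \
  --image photo.jpg \
  -p "Describe this image in detail."
```

The `--mmproj` projector is what handles the image encoding step, which is the part that tends to be slow on CPUs.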

devinprater,

@MutedTrampet Yeah, I ran Ollama with a few models, and it's like 3 tokens per second lol. But once they start supporting more than just NVIDIA GPUs, I think it'll get faster for people like me who have AMD GPUs.
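For anyone wanting to try the same thing, running LLaVA under Ollama looks roughly like this (the image path is illustrative; Ollama picks up an image file path included in the prompt for multimodal models):

```shell
# Sketch: pull the LLaVA model and ask it about a local image.
# ./screenshot.png is a placeholder for your own image file.
ollama pull llava
ollama run llava "Describe this image: ./screenshot.png"
```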

MutedTrampet,

@devinprater Yeah, it runs slowly for me too, but still fun.

bryansmart,

@MutedTrampet @devinprater Turns out the descriptions it provides are nowhere near as good as GPT-4's, though, so it's probably not worth it for screen reader integration yet.
