WizardLM-70B-V1.0 Released on HF

These are the full weights; the quants are already incoming from TheBloke. I'll update this post once they're fully uploaded.

From the author(s):

WizardLM-70B V1.0 achieves a substantial and comprehensive improvement in coding, mathematical reasoning, and open-domain conversation capabilities.

This model is license-friendly and follows the same license as Meta's Llama-2.

The next version is in training and will be released publicly together with our new paper soon.

For more details, please refer to:

Model weights: huggingface.co/WizardLM/WizardLM-70B-V1.0

Demo and Github: github.com/nlpxucan/WizardLM

Twitter: twitter.com/WizardLM_AI

GGML quants posted: huggingface.co/TheBloke/WizardLM-70B-V1.0-GGML

GPTQ quant repo posted, but still empty (GPTQ takes a lot longer to make): huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ
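If you want a rough idea of what running one of the GGML quants looks like, here's a minimal sketch using the llama-cpp-python bindings. The quant filename, prompt template, and parameter values below are assumptions on my part; check TheBloke's model card for the actual file names and prompt format, and note that loading GGML (as opposed to GGUF) needs a llama-cpp-python release from the GGML era.

```python
# Minimal sketch of running a GGML quant of WizardLM-70B-V1.0 locally.
# Assumes: pip install huggingface_hub, plus a llama-cpp-python version
# that still reads GGML files. The filename below is a guess -- check
# TheBloke's repo for the real quant file names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="TheBloke/WizardLM-70B-V1.0-GGML",
    filename="wizardlm-70b-v1.0.ggmlv3.q2_K.bin",  # assumed quant filename
)

llm = Llama(
    model_path=model_path,
    n_ctx=2048,        # context window
    n_gpu_layers=40,   # offload as many layers as fit on your GPU; 0 for CPU-only
)

# Vicuna-style prompt assumed here; see the model card for the exact template.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: Write a haiku about quantization. ASSISTANT:"
)
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```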

fhein,

Me a few months ago when upgrading my computer: pff, who needs 64GB of RAM? Seems like a total waste

Me after realising you can run LLMs at home: cries

AsAnAILanguageModel,

Tried the q2 ggml and it seems to be very good! First tests make it seem as good as airoboros, which is my current favorite.

noneabove1182,

Agreed, it seems quite capable. I haven't tested all the way down to q2 to verify, but I'm not surprised.
