vwbusguy, to opensource
@vwbusguy@mastodon.online avatar

FYI - Llama is NOT open source. The license is categorically not open source. Among other things, the Llama 2 and 3 licenses explicitly violate the Open Source Definition's "no discrimination against fields of endeavor" criterion.

I see all sorts of blogs and marketing materials claiming things are "open source" because they used llama somewhere. Please do not take these claims at face value.

vitriolix, to random
@vitriolix@mastodon.social avatar

Now we can all whip the llamas ass

"Winamp has announced that it is opening up its source code to enable collaborative development of its legendary player for Windows."
https://about.winamp.com/press/article/winamp-open-source-code

linuxtldr, to linux
@linuxtldr@noc.social avatar

How to Run and Use Meta’s Llama 3 on Linux

https://linuxtldr.com/llama-3-linux/

davidegts, to ai
@davidegts@mastodon.social avatar

"There has been a shift in the space: some models, like & , have evolved into entire web platforms spanning multiple use cases & access points. Other large language models like or , though technically speaking they share a basic architecture, don’t actually fill the same role. They are intended to live in the background as a service or component, not in the foreground as a name brand." https://techcrunch.com/2024/04/19/too-many-models/

kleaders, to llm
@kleaders@fosstodon.org avatar

With all the valid concern around the power and water usage of LLMs, I thought I'd start a blog series on tiny LLMs. Let's see what they can do on real tasks on very power-efficient hardware.

https://kyle.works/blog/tiny-llm-reviews-intro/

noondlyt, to random
@noondlyt@mastodon.social avatar

I'll take the Llama
#Alpaca #Llama

gptchatly, to meta
@gptchatly@mastodon.social avatar

Outside North America, you can try Meta's new Llama 3 70B model without registration or login:
https://gptchatly.com/meta-llama-3-70b.html

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

(1/3) Llama 3 is out! 🚀

Meta today released Llama 3, the next generation of the Llama model. Llama 3 is a state-of-the-art open-source large language model. Here are some of the key features of the model: 🧵👇🏼

video/mp4

ramikrispin, to datascience
@ramikrispin@mstdn.social avatar

New release of Ollama 🎉

A major release of Ollama - version 0.1.32 is out. The new version includes:
✅ Improved GPU utilization and memory management, increasing performance and reducing error rates
✅ Better performance on Mac by scheduling large models across the GPU and CPU
✅ Native AI support in Supabase edge functions

More details on the release notes 👇🏼
https://github.com/ollama/ollama/releases

Image credit: release notes

theaiml, to opensource
@theaiml@mastodon.social avatar

After months of work and $10 million, Databricks has unveiled DBRX, which it bills as the world's most powerful publicly available open-source large language model.

DBRX outperforms open models like Meta's Llama 2 across benchmarks, even approaching the abilities of OpenAI's closed GPT-4. Architectural tweaks like a "mixture of experts" boosted DBRX's training efficiency by 30-50%.
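The routing idea behind a "mixture of experts" can be sketched in a few lines of Python - a toy illustration of the concept, not DBRX's actual implementation:

```python
# Toy sketch of the "mixture of experts" idea: a learned router scores
# every expert for the current input and only the top-k experts run, so
# most of the model's parameters sit idle on any given token - which is
# why MoE models train and serve more cheaply per token than dense
# models of equal parameter count.

def top_k_experts(router_scores, k=2):
    """Return the indices of the k highest-scoring experts."""
    ranked = sorted(range(len(router_scores)),
                    key=lambda i: router_scores[i], reverse=True)
    return ranked[:k]

def moe_layer(x, experts, router, k=2):
    """Run only the routed experts and average their outputs."""
    chosen = top_k_experts(router(x), k)
    outputs = [experts[i](x) for i in chosen]
    return sum(outputs) / len(outputs)

# Tiny demo with scalar "experts" standing in for feed-forward networks:
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
router = lambda x: [0.1, 0.7, 0.2]  # pretend scores from a learned gate
print(moe_layer(10, experts, router))  # → 13.5 (only experts 1 and 2 ran)
```

Real MoE layers weight each expert's output by the router's probabilities; the plain average here just keeps the sketch short.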

orhun, to rust
@orhun@fosstodon.org avatar

Here is how you can use ChatGPT in your terminal - with an interface! 🔥

🦾 tenere: TUI for LLMs written in Rust.

🚀 Supports ChatGPT, llama.cpp & ollama

🦀 Built with @ratatui_rs

⭐ GitHub: https://github.com/pythops/tenere

video/mp4

wagesj45, to ai
@wagesj45@mastodon.jordanwages.com avatar

Please, use AI to generate tons of content that you otherwise couldn't.

But for the love of all that is holy, pay attention to what you are putting out. Read the output. If it doesn't say exactly what you would say, edit it! Make changes. Regenerate. Go through the process of making it good.

I truly don't think people hate AI content. They hate lazy content.

raptor85, to llm
@raptor85@mastodon.gamedev.place avatar

The Code Llama 34B model isn't half bad! I've been toying around with it integrated into CLion, having it explain my own code to me and generate small functions, and so far it's been around 90% successful, with most of the errors being minor. The bug detection does have a decent number of false positives, though. I also like that it's aware enough of APIs to give doc links.

Bonus points for it going off on a tangent once about why console applications are better than GUIs.

matthewcroughan, to NixOS

On the flight back home from thaigersprint.org, 100% offline, learning some Rust on mobile.

We were running an LLM locally on the phone to ask questions, for lack of a search engine or internet connection.

ramikrispin, to python
@ramikrispin@mstdn.social avatar

(1/3) Last Friday, I was planning to watch Masters of the Air ✈️, but my ADHD had different plans 🙃, and I ended up running a short POC and creating a tutorial for getting started with the Ollama Python library 🚀. Setup instructions are available both for Docker 🐳 and for running locally.

TLDR: It is straightforward to run LLM models locally with the Ollama Python library. Models with up to ~7B parameters run smoothly on modest compute resources.
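For anyone following along, the basic call into the Ollama Python library looks roughly like this - a sketch assuming `pip install ollama` and a local `ollama serve` that has already pulled the model; the `build_messages` helper is ours, not part of the library:

```python
# Minimal sketch of chatting with a locally served model through the
# Ollama Python library. Assumes `pip install ollama` and a running
# `ollama serve` with the model already pulled.

def build_messages(prompt, system=None):
    """Assemble the chat message list the Ollama client expects."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return messages

if __name__ == "__main__":
    import ollama  # third-party client; talks to the local Ollama server
    reply = ollama.chat(model="llama3",
                        messages=build_messages("Why is the sky blue?"))
    print(reply["message"]["content"])
```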

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

After a long night, a short tutorial for getting started with the Ollama Python library is now available here:

https://github.com/RamiKrispin/ollama-poc

chikim, to ML
@chikim@mastodon.social avatar

Zuckerberg says Meta is training Llama 3 on 600,000 H100s! Well, time to fine-tune and quantize everything again when it comes out. lol https://www.reddit.com/r/LocalLLaMA/comments/199y05e/zuckerberg_says_they_are_training_llama_3_on/

ErikJonker, (edited) to ai
@ErikJonker@mastodon.social avatar

Amazing - with llamafile, everybody can experiment with a large language model on their own computer. It's maybe not up to the level of GPT-4, but still amazing, running on local resources. My laptop is four years old 🙂
https://github.com/Mozilla-Ocho/llamafile
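Worth noting: a running llamafile also serves an OpenAI-compatible HTTP API, so you can poke at it with nothing but the standard library. A rough sketch - the port 8080 default and the "LLaMA_CPP" model name follow the llamafile README and may vary by build:

```python
import json
import urllib.request

# Sketch of querying a running llamafile through its OpenAI-compatible
# /v1/chat/completions endpoint. Port and model name are the llamafile
# README defaults; adjust for your build.

def chat_payload(prompt, model="LLaMA_CPP"):
    """Build the JSON body for a /v1/chat/completions request."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")

if __name__ == "__main__":
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=chat_payload("Say hello in one sentence."),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```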

joe, to ai
@joe@toot.works avatar

I followed https://www.youtube.com/watch?v=Kg588OVYTiw to try to get Llama 2 working locally with llama.cpp, but no luck. 😒

Does anyone know how to fix it? I do have llama-2-13b-chat.ggmlv3.q4_0.bin downloaded into the root of the app.
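One likely culprit, offered as a guess rather than a confirmed diagnosis: newer llama.cpp builds only load GGUF files, and `.ggmlv3` models from the older GGML era no longer work; llama.cpp ships a `convert-llama-ggml-to-gguf.py` script for migrating them. A sketch of loading the converted model through the third-party llama-cpp-python bindings (the `.gguf` filename below is hypothetical):

```python
# Guess at the fix: convert the old GGML v3 model to GGUF first (using
# llama.cpp's convert-llama-ggml-to-gguf.py), then load the result.
# Requires `pip install llama-cpp-python`; the .gguf filename below is
# hypothetical - use whatever the conversion script produced.

def model_needs_conversion(path):
    """Newer llama.cpp only loads GGUF; anything else needs converting."""
    return not path.endswith(".gguf")

if __name__ == "__main__":
    from llama_cpp import Llama  # third-party bindings around llama.cpp
    path = "llama-2-13b-chat.Q4_0.gguf"  # hypothetical converted filename
    if model_needs_conversion(path):
        raise SystemExit("convert this model to GGUF before loading it")
    llm = Llama(model_path=path)
    out = llm("Q: What is a llama? A:", max_tokens=64)
    print(out["choices"][0]["text"])
```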
