thomasfuchs, to random
@thomasfuchs@hachyderm.io avatar

Can’t wait for the “actually LLMs are a worthless scam” articles in 2025 and all the people that pretend they never fell for it

mazisworld,

@thomasfuchs noooo LLM'S ARE THE FUTURE.

We just can't afford to develop it. What's going to happen is that governments are going to try to coin the technology and control the masses with greater intensity.

For the private sector, development will stagnate because of corporate greed and a lack of diversification.

Privatized AI is where the POWER IS.

Yet the government is the world's largest Private Power.

ricardoharvin, to random
@ricardoharvin@mstdn.social avatar

Climate change is an immediate threat to the majority of the world's population.

We need a local, regional, state, national, and international "moon shot" type of concentration of effort using available resources while developing and advancing new science and technologies to counteract the damage humans have caused.

Without this type of all-out, universal cooperation, human civilization, and perhaps humanity itself, is in imminent danger of extinction.

ricardoharvin,
@ricardoharvin@mstdn.social avatar

Every little thing that helps is part of the Everything, Everywhere, All at Once mitigation strategy.

"The work was done in collaboration with 2000 Asociación...local ...negatively impacted by the receding "
“...the ...not only requires...the authors, but all of these other groups...it shows the constant need for collaboration and to develop communities..."
with local involvement is key.
https://eldritch.cafe/@SallyStrange/111285235715183031

happyborg, to llm
@happyborg@fosstodon.org avatar

Try a local #LLM using #Mozilla's #llamafile (commands for each step are sketched below):

  1. Get the 4.3GB file:
    curl -LO https://huggingface.co/jartine/llava-v1.5-7B-GGUF/resolve/main/llava-v1.5-7b-q4-server.llamafile

  2. Make executable

  3. Run it

  4. Visit http://127.0.0.1:8080/
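
For steps 2 to 4, roughly (a minimal sketch, assuming the filename from step 1; Linux or macOS):

    chmod +x llava-v1.5-7b-q4-server.llamafile
    ./llava-v1.5-7b-q4-server.llamafile
    # then open http://127.0.0.1:8080/ in a browser

Since the llamafile embeds llama.cpp's server, the same port should also answer JSON completion requests, e.g.:

    curl http://127.0.0.1:8080/completion -H 'Content-Type: application/json' -d '{"prompt": "Say hello in one sentence:", "n_predict": 64}'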

You're welcome!

This is a text and image capable LLM, so ~ 4

h/t @simon

More in Simon's blog: https://simonwillison.net/2023/Nov/29/llamafile/

happyborg,
@happyborg@fosstodon.org avatar

@simon
Has anyone found a use for #Llama #LLM (see above)?

Serious question. It's interesting to play with but I haven't found anything I'd use it for yet. The only thing it seems it may be good at is summarising a news page and articles filtered by subject - though I'm not sure that's really useful to me.

happyborg, to llm
@happyborg@fosstodon.org avatar

Other than (maybe) describing images, what has anyone found #Llama #LLM useful for?

I'm trying it occasionally on different things but mostly it isn't up to the task. For those simpler questions it can answer, it is way slower than doing a web search. 🤷‍♂️

happyborg,
@happyborg@fosstodon.org avatar

@jsbarretto that's not something I have much use for, but I'm also wondering if they (though I was really talking about #Llama) produce plausible summaries or actually good summaries.

kjr, (edited ) to llm
@kjr@babka.social avatar

Really amazing... and a good help, thanks @mozilla

Mozilla Lets Folks Turn AI LLMs Into Single-File Executables

#mozilla #llm #ai #llama

https://hackaday.com/2023/12/02/mozilla-lets-folks-turn-ai-llms-into-single-file-executables/

itnewsbot, to ArtificialIntelligence

Mozilla Lets Folks Turn AI LLMs Into Single-File Executables - LLMs (Large Language Models) for local use are usually distributed as a set of wei... - https://hackaday.com/2023/12/02/mozilla-lets-folks-turn-ai-llms-into-single-file-executables/ #artificialintelligence #binaries #localai #llama #llm #ai

chikim, to ML
@chikim@mastodon.social avatar

Zuckerberg says Meta is training #Llama 3 on 600,000 H100s! Well, time to finetune and quantize everything again when it comes out. lol https://www.reddit.com/r/LocalLLaMA/comments/199y05e/zuckerberg_says_they_are_training_llama_3_on/
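
If it follows the usual llama.cpp workflow, that re-quantizing pass looks roughly like this (a sketch with placeholder filenames; exact script names and flags vary by llama.cpp version):

    # convert a fine-tuned Hugging Face checkpoint to GGUF, then quantize to 4-bit
    python convert.py ./my-finetune --outfile my-finetune-f16.gguf
    ./quantize my-finetune-f16.gguf my-finetune-Q4_K_M.gguf Q4_K_M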

devinprater, to ai

Well, I'm sure an NVDA addon with this will be here soon. Just amazing how far things have come. I mean, I hope there's a Windows version of the Llava part of this anyway.

llamafile is the new best way to run a LLM on your own computer:

https://simonwillison.net/2023/Nov/29/llamafile/

itnewsbot, to machinelearning

Using Local AI on the Command Line To Rename Images (And More) - We all have a folder full of images whose filenames resemble line noise. How about... - https://hackaday.com/2023/12/29/using-local-ai-on-the-command-line-to-rename-images-and-more/

Polynomial_C, to ai Catalan
@Polynomial_C@mastodon.social avatar

#LLaMA 2 #AI
Picture from X
👍

david_senate, to random

😆 doesn't recommend using Code LLaMa as it "generates offensive and inappropriate content"

joannaholman, to melbourne
@joannaholman@aus.social avatar

Only day 1 of the show and the llamas are already having an existential crisis

s_mcleod, to llm

M.2 NVMe -> PCIe x16 + some dodgy cabling = Tesla P100 eGPU 😂

Combined with my RTX3090 I can load Q4/Q5 70B models 100% into VRAM with ExLlama or AutoGPTQ

#LLM #AI #ML #Llama #Nvidia #GPT


stungeye, to ai

Have you seen LLaVA?

The Large Language and Vision Assistant is a multimodal (image & text) #ai model.

It's an open-source approach to visual & language prompting, combining a #ml vision encoder & a large language model (#Vicuna #LLaMA #llm).

It's surprisingly good!

🧵1/n

boilingsteam, to linux
@boilingsteam@mastodon.cloud avatar

Llama 2 Everywhere (L2E): Standalone, Binary Portable, Bootable Llama 2: https://github.com/trholding/llama2.c
#linux #update #foss #release #llama #standalone #binary #llm

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "We have previously analysed US class actions against Open AI (here) and Google (here) for unauthorized use of copyright works in the training of generative AI tools, respectively ChatGPT, Google Bard and Gemini. To further develop this excursus on the US case law, in this post we consider two recent class actions against Meta launched by copyright holders (mainly book authors), for alleged infringement of IP in their books and written works through use in training materials for LLaMA (Large Language Model Meta AI). Such case law is interesting for the reconstruction of the technology deployed by Meta and the training methodology (at least from the plaintiff’s perspective) but also because the court has had the chance to preliminarily evaluate the robustness of the claims. Given the similarity of the legal arguments and the same technology being at stake (Meta’s LLaMA), upon the request of the parties, the Court treated the two class actions jointly (here)."

https://copyrightblog.kluweriplaw.com/2024/01/17/generative-ai-admissibility-and-infringement-in-the-two-us-class-actions-against-metas-llama/

happyborg, to llm
@happyborg@fosstodon.org avatar

Having forgotten everything I ever knew about #Perl I'm trying to extend a script I wrote years ago and am almost at the point of asking #Llama #LLM for help.

I mean, assigning a string with an embedded variable really should not be hard 🤷‍♂️
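
The thing itself is a one-liner; a minimal sketch with made-up host and path names, run through the shell here:

    perl -e 'my $host = "files.example.org"; my $cmd = "scp backup.tar.gz admin\@$host:/srv/dns/"; print "$cmd\n";'

(The @ is escaped so Perl interpolates $host instead of trying to expand an array.)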

All in an effort to make my faithful old #ftpdns use SSH and SCP instead of FTP, which stops working every time someone in a datacentre sucks their teeth.

If it works I'll have to rename it DIY #DNS.

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

If true, this is interesting news; it illustrates the potential of models like #CodeLlama with fine-tuning.
https://www.phind.com/blog/code-llama-beats-gpt4

mike, to llm

The wife and I had a good laugh today while I was experimenting with #LLMs. Fed a complex prompt that analyzes forum posts, it decided that her career in pharmacy is a crime. 🤣

mike,

That said, it's still really good; I just need to find the right approach to make it act consistently. I doubt my problem is the math, but it is mixing up numbers (0 is 10 is 0 is...). 🤔

I'd assume GPT-4 will work better, but the point is I'm trying to do stuff without OpenAI.

Right now I'm exploring whether my idea is viable (it looks promising), then I'll move on to recreating it on and fine-tuning.

remixtures, to meta Portuguese
@remixtures@tldr.nettime.org avatar

: "These are noteworthy developments but not all complaints can be resolved with promises. Several lawsuits against OpenAI and Meta remain ongoing, accusing the companies of using the Books3 dataset to train their models.

While OpenAI and Meta are very cautious about discussing the subject in public, Meta provided more context in a California federal court this week.

Responding to a lawsuit from writer/comedian Sarah Silverman, author Richard Kadrey, and other rights holders, the tech giant admits that “portions of Books3” were used to train the Llama AI model before its public release.

“Meta admits that it used portions of the Books3 dataset, among many other materials, to train Llama 1 and Llama 2,” Meta writes in its answer."

https://torrentfreak.com/meta-admits-use-of-pirated-book-dataset-to-train-ai-240111/

heiseonline, to OpenAI German

AI training: Copyrighted dataset of book texts now offline

For months, a text file compiled from almost 200,000 book texts was freely accessible and was used to train AI systems. Now it has been taken offline and analyzed.

https://www.heise.de/news/190-000-Buecher-Urheberrechtlich-geschuetzter-Datensatz-fuer-KI-Training-gesperrt-9268537.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege

pb, to llm French
@pb@mast.eu.org avatar

An update on my previous videos from this spring.

A short video test of .5, a new freely usable model that works with llama.cpp and claims to be comparable to GPT 3.5 Turbo, OpenAI's second-to-last model.

The video is also meant to show how fast it runs on a "normal" CPU, without using the GPU.

(Bugs included: recorded with no tricks.)

https://peertube.eu.org/w/cMHfSKmJ5W2hd62ziiH8o9

happyborg, (edited ) to llm
@happyborg@fosstodon.org avatar

#Llama is useless at producing good code so far. It makes stupid mistakes shrouded by, wait for it, plausible-looking statements.

I expect there are ways I could find in which it could help but I'm not sure it's worth the effort of finding them.

Its lack of domain knowledge, of any real understanding of the domain, is a problem.

joe, (edited ) to llm

I know that LLM and AI are kind of dirty words in the app dev community and I get the reasons why. Even if you just keep living your life, folks like Dropbox (https://boingboing.net/2023/12/14/dropbox-is-sharing-users-files-with-openai-heres-how-to-opt-out.html) and GitHub are using your data to train their LLMs. I think that there is an opportunity in ethically sourced AI, though. Even if you disagree with that statement, there is a "know your enemy" argument.

That's the reason why I have been trying to learn this shit, lately. ...

#LLM #AI

joe,

... I have tried running #Llama 2 (https://ai.meta.com/llama/) locally and I haven't been able to figure it out yet but it is the reason why I decided to replace my 2017 MBP with a 2021 MBP (with double the RAM and an M1 Pro instead of an i7).

I did have some success with #llamafile (https://github.com/Mozilla-Ocho/llamafile), though. Mozilla released something that lets you package a set of weights as a single multi-gigabyte executable that can run on macOS, Windows, Linux, FreeBSD, OpenBSD, or NetBSD. ...
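
As I understand it, the bare llamafile runtime from that repo can also load separate GGUF weights, much like llama.cpp itself; something along these lines (the weights filename is just a placeholder):

    chmod +x llamafile
    ./llamafile -m your-model.Q4_K_M.gguf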

joe, to ai

I followed https://www.youtube.com/watch?v=Kg588OVYTiw to try to get Llama 2 working locally with Llama.cpp but no luck. 😒

Does anyone know how to fix it? I do have llama-2-13b-chat.ggmlv3.q4_0.bin downloaded into the root of the app.
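
For context, a basic llama.cpp run with that file should look something like the sketch below, though I gather newer llama.cpp builds expect GGUF weights instead of .ggmlv3.bin files, so that may be part of the problem:

    ./main -m ./llama-2-13b-chat.ggmlv3.q4_0.bin -p "Hello, how are you?" -n 128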
