ErikJonker, to China
@ErikJonker@mastodon.social avatar

So we have a very good AI model in China, SenseNova 5.0, and nobody is talking about it, probably because most of us can't read Chinese https://zhidx.com/p/421866.html

ianRobinson, to llm
@ianRobinson@mastodon.social avatar

Listening to The Ezra Klein Show (How Should I Be Using A.I. Right Now?): https://nytimes.com/2024/04/02/opinion/ezra-klein-podcast-ethan-mollick.html

I wanted to understand what I’m missing and get some tips for how I could incorporate A.I. better into my life right now. And Ethan Mollick is the perfect guide: He’s a professor at the Wharton School…who’s spent countless hours experimenting with different chatbots, noting his insights in his newsletter and in a new book, “Co-Intelligence: Living and Working With A.I.”

joe, to ai

Yesterday, we looked at how to write a JavaScript app that uses Ollama. Recently, we started to look at Python on this site, and I figured that we had better follow it up with how to write a Python app that uses Ollama. Just like with JavaScript, Ollama offers a Python library, so we are going to be using that for our examples. Also, just like with the JavaScript demo, I am going to be using the generate endpoint instead of the chat endpoint. That keeps things simpler, but I am going to explore the chat endpoint at some point as well.

Install the Ollama Library

Before installing anything, you should create a virtual environment to isolate your project's libraries from the global Python libraries. Once it is activated, run pip3 install ollama from the terminal.
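In a Unix-like shell, that sequence looks something like this (the environment name venv is just a placeholder):

    # Create and activate a virtual environment, then install the library
    python3 -m venv venv
    source venv/bin/activate
    pip3 install ollama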

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-5.58.34%E2%80%AFPM.png?resize=1024%2C647&ssl=1

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-5.59.03%E2%80%AFPM.png?resize=1024%2C647&ssl=1

Basic CLI example

At this point, we can start writing code. When we used the web service earlier this week, we used the generate endpoint and provided model, prompt, and stream as parameters. We set the stream parameter to false so that it would return a single response object instead of a stream of objects. When using the Python library, the stream parameter isn't necessary because it returns a single response object by default. We still provide it with a model and a prompt, though.
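A minimal sketch of that code (assuming the llama3 model has already been pulled locally; the model name and prompt are placeholders, not the post's exact code):

    import ollama

    # No stream parameter needed: the Python library returns a single
    # response object by default.
    output = ollama.generate(model='llama3', prompt='Why is the sky blue?')

    print(output)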

If you run it from the terminal, the response will look familiar.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-8.05.20%E2%80%AFPM.png?resize=1024%2C647&ssl=1

If you replace print(output) with print(output['response']), you can more clearly see the important bits.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-8.09.04%E2%80%AFPM.png?resize=1024%2C647&ssl=1

Basic Web Application Example

The output is very similar to the node-fetch example from earlier this week. Last week, when we looked at how to dockerize a node app, we output an array as an unordered list. Let’s see if we can replicate that result using the output from Ollama.

If you run pip install flask to install Flask, you can host a simple HTTP page on port 8080, and with the magic of json.loads() and a for loop, you can build your unordered list.
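A rough sketch of that app, assuming the model is asked to return its answer as a JSON array (the prompt, model name, and route are illustrative, not the post's exact code):

    import json

    import ollama
    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def cities():
        # Ask the model to reply with only a JSON array so the response
        # can be fed straight into json.loads().
        output = ollama.generate(
            model='llama3',
            prompt='List the five largest cities in Wisconsin as a '
                   'JSON array of strings. Return only the JSON.',
        )
        items = json.loads(output['response'])

        # Build the unordered list with a for loop.
        html = '<ul>'
        for city in items:
            html += f'<li>{city}</li>'
        html += '</ul>'
        return html

    if __name__ == '__main__':
        app.run(port=8080)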

So, what does the output look like?

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-8.27.30%E2%80%AFPM.png?resize=1024%2C651&ssl=1

Every time you load the page, it makes a server-side API call to Ollama, gets a list of large cities in Wisconsin, and displays them on the website. The list is never the same (because of hallucinations) but that is another issue.

Have any questions, comments, etc.? Please feel free to drop a comment below.

https://jws.news/2024/how-to-write-a-python-app-that-uses-ollama/

chikim, to llm
@chikim@mastodon.social avatar

Wow, Private LLM runs Llama-3-8B locally on iOS. No idea how accessible it is with VoiceOver, though. https://privatellm.app/en

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

If you compare the freely available LLMs (GPT-3.5, Claude Sonnet, Llama 3, etc.) with the paid versions of the various models, the difference for most ordinary users and use cases is quite small. That must be a problem for the business case of companies like OpenAI: given the enormous investments, they need large numbers of ordinary people willing to pay every month for access to their models. Or are other revenue streams more important? 🤔

ceoln, to ai
@ceoln@qoto.org avatar

I've had occasion to ask an AI about a thing twice lately (a recent online phenomenon, and a book recommendation). Both times I asked both Gemini and ChatGPT, and both times one gave a reasonable if bland answer, and the other (a different one each time) gave a plausible but completely fictional ("hallucinated") answer.

When do we acknowledge that LLMs, and "AI" in general, aren't quite ready to revolutionize the world?

kleaders, to llm
@kleaders@fosstodon.org avatar

With all the valid concern around AI power and water usage, I thought I'd start a blog series on tiny LLMs. Let's see what they can do on real tasks on very power-efficient hardware.

https://kyle.works/blog/tiny-llm-reviews-intro/

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

It was a pleasure to present this morning at ODSC East about data automation with LLMs.

Code examples and a tutorial are available on this repo: https://github.com/RamiKrispin/lang2sql
The slides are available on this repo: https://github.com/RamiKrispin/talks/tree/main/202404%20ODSC%20East%202024%20-%20%20Data%20Automation%20with%20LLM%20

Thanks to the conference organizers for the invite and the folks attending the session! 🙏

cassidy, (edited ) to ai
@cassidy@blaede.family avatar

I really like the convention of using ✨ sparkle iconography as an “automagic” motif, e.g. to smart-adjust a photo or to automatically handle some setting. I hate that it has become the de facto iconography for generative AI. 🙁

#AI #LLM #LLMs #GenerativeAI #design #ML

dentangle, to foss
@dentangle@chaos.social avatar

All Your Base Are Belong to LLM

"The output from an LLM is a derivative work of the data used to train the LLM.

If we fail to recognise this, or are unable to uphold this in law, copyright (and copyleft on which it depends) is dead. Copyright will still be used against us by corporations, but its utility to FOSS to preserve freedom is gone."

https://blog.brettsheffield.com/all-your-base-are-belong-to-llm

doctorambient, to LLMs
@doctorambient@mastodon.social avatar

People: stop asking #LLMs to explain their behavior.

We already know that LLMs don't have the introspection necessary to explain their behavior, and their explanations are often fanciful or "just wrong."

For instance, Gemini claims it reads your emails for training; Google says it doesn't.

(BTW, if it turns out Gemini is right and Google is lying, that might be another example of an LLM convincing me it's actually "intelligent.")

#largelanguagemodels #ai #machinelearning #googlegemini #llm

doctorambient,
@doctorambient@mastodon.social avatar

A Llama3 example:

>>> Do you happen to know what your context window length is?

Llama: I'm an AI model, and I don't have a fixed "context window" in the classical sense. My training data consists of a massive corpus of text, which I use to generate responses.

The correct answer is 8k.

ByrdNick, to psychology
@ByrdNick@nerdculture.de avatar

We know that the task demands of cognitive tests affect scores: if one version of a problem requires more work (e.g., gratuitously verbose or unclear wording, or open response rather than multiple choice), people will perform worse.

Now we have observed as much in Large Language Models: https://doi.org/10.48550/arXiv.2404.02418

The tests included analogical reasoning, reflective reasoning, word prediction, and grammaticality judgments.


ramikrispin, to llm
@ramikrispin@mstdn.social avatar

In case you are wondering, the new Microsoft mini LLM, Phi-3, can handle code generation; in this case, SQL.

I compared the runtime (locally, on CPU) against codellama:7B using Ollama, and surprisingly the Phi-3 runtime was significantly slower.
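For reference, a comparison like that can be run straight from the terminal with the Ollama CLI; a sketch, assuming both models have been pulled and using an illustrative prompt:

    # Compare wall-clock time for the same SQL-generation prompt
    time ollama run phi3 "Write a SQL query that returns the top 10 customers by total order value."
    time ollama run codellama:7b "Write a SQL query that returns the top 10 customers by total order value."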

joe, to ai

So far this week, we have looked at how to use Ollama from the CLI, how to use Ollama from the web service, and how to use Ollama from a phone or iPad. Today we are going to be using the Ollama JavaScript Library to write an application.

Install the Ollama Library

The first step is to run npm i ollama from the terminal.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-21-at-8.30.04%E2%80%AFAM.png?resize=1024%2C728&ssl=1

That installs Ollama as a dependency in package.json.

Basic CLI example

At this point, we can start writing code. When we used the web service earlier this week, we used the generate endpoint and provided model, prompt, and stream as parameters. We set the stream parameter to false so that it would return a single response object instead of a stream of objects. When using the JavaScript library, the stream parameter isn't necessary because it returns a single response object by default. We still provide it with a model and a prompt, though.
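A minimal sketch of that code (assuming an ESM project so top-level await works, and a locally pulled llama3 model; the model name and prompt are placeholders, not the post's exact code):

    import ollama from 'ollama';

    // No stream parameter needed: the JavaScript library returns a
    // single response object by default.
    const output = await ollama.generate({
      model: 'llama3',
      prompt: 'Why is the sky blue?',
    });

    console.log(output);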

If you run it from the terminal, the response will look familiar.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-21-at-9.19.38%E2%80%AFAM.png?resize=1024%2C728&ssl=1

Basic Web Application Example

The output is very similar to the node-fetch example from earlier this week. Last week, when we looked at how to dockerize a node app, we output an array as an unordered list. Let’s see if we can replicate that result using the output from Ollama.

If you run npm install express to install Express, you can host a simple HTTP page on port 8080, and with the magic of JSON.parse() and a for loop, you can build your unordered list.
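A rough sketch of that app, assuming the model is asked to return its answer as a JSON array (the prompt, model name, and route are illustrative, not the post's exact code):

    import express from 'express';
    import ollama from 'ollama';

    const app = express();

    app.get('/', async (req, res) => {
      // Ask the model to reply with only a JSON array so the response
      // can be fed straight into JSON.parse().
      const output = await ollama.generate({
        model: 'llama3',
        prompt: 'List the five largest cities in Wisconsin as a JSON array of strings. Return only the JSON.',
      });
      const cities = JSON.parse(output.response);

      // Build the unordered list with a for loop.
      let html = '<ul>';
      for (const city of cities) {
        html += `<li>${city}</li>`;
      }
      html += '</ul>';
      res.send(html);
    });

    app.listen(8080);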

So, what does the output look like?

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-21-at-8.03.57%E2%80%AFPM.png?resize=1024%2C796&ssl=1

Every time you load the page, it makes a server-side API call to Ollama, gets a list of large cities in Wisconsin, and displays them on the website. The list is never the same (because of hallucinations) but that is another issue.

Have any questions, comments, etc.? Please feel free to drop a comment below.

https://jws.news/2024/how-to-write-a-javascript-app-that-uses-ollama/

cquest, to llm French
@cquest@amicale.net avatar

This morning... two scraping bots harvesting data for AI models abused the @osm_fr forum.

It's not the first time, and it's becoming a real pain, especially when one of them hammers the URLs of our old forum, which was replaced several years ago.

Despite more than 130,000 404 errors this morning alone, it kept going at a frantic pace...

Another bot, albert-bot... from albertai.com (nothing to do with France's own Albert), has been blocked as well.

LMWStuttgart, to ai German
@LMWStuttgart@xn--baw-joa.social avatar

Don't miss it tomorrow: the conference "No risk, no innovation? Künstliche Intelligenz in der Museumspraxis" (Artificial Intelligence in Museum Practice) looks at AI-based technologies in the museum field. At our museum, too, we are exploring the possible uses of AI technologies. The foundation for this is laid by our AI ethics guidelines: https://github.com/LMWStuttgart/KI-Ethik

@museum

SergKoren, to apple
@SergKoren@writing.exchange avatar

Hm. Apple just released some open-source on-device LLM models.

TritTriton, to apple French
@TritTriton@shelter.moe avatar

Apple's turn to make its own LLM:

“Apple releases OpenELM, the open-source language model nobody was expecting”
https://www.numerama.com/tech/1732480-apple-publie-openelm-le-modele-de-langage-open-source-que-personne-nattendait.html

troed, to llm
@troed@ioc.exchange avatar

Asked Llama-3 (the 8B model) to implement a CRC32 routine in C.

Apart from forgetting to declare the lookup table array, the code compiled without errors.

I also asked it to run the code on a test string, which it did and explained at each step what the intermediate CRC32 was.

Well. The result was wrong, both when it executed the code itself and when I compiled and ran it ;)

But this would definitely confuse someone who tried to use it for coding. I see nothing wrong with the code; it all looks perfect. If I get the time, I might look into why it's not correct.

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Very nice picture shared by Ronald van Loon on X. You can debate whether the categories are complete and correct, but it illustrates that the field of AI is much more than just transformers/LLMs.

joe, to ai

Earlier this year, I started looking at how to run a fully on-prem AI. In February, I bought a machine to run the inference engine on and set up Tailscale (which works similarly to Hamachi) to connect to it remotely. If you want to use it remotely, there are a lot of options for native clients.

MacOS

My favorite client for MacOS is MindMac. You can buy it for under $30, it works with multiple models, servers, and server types, and it is easy to use.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-2.34.12%E2%80%AFPM.png?resize=1024%2C690&ssl=1

If you want to look further into it, you can check it out at mindmac.app.

Android

My favorite client for Android is Amallo. It is $23 and, like MindMac, it works with multiple models, servers, and server types. My only complaint would be that uploading a base64-encoded image to the model doesn't seem to work well.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot_20240420-143906.png?resize=461%2C1024&ssl=1

If you want to look further into it, you can check it out at doppeltilde.com.

ipadOS

There is a version of Amallo for iPadOS but I have been liking Enchanted LLM more. If you like it, there is a version for macOS as well. It has the added benefit of being free.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/IMG_0088.jpg?resize=672%2C1024&ssl=1

If you want to look further into it, you can check it out at the project’s GitHub page.

Have any questions, comments, etc.? Please feel free to drop a comment below.

https://jws.news/2024/how-i-use-ai/

#AI #Amallo #Enchanted #LLM #MindMac #Ollama

kennwhite, to random

Incredible research at BlackHat Asia today by Tong Liu and team from the Institute of Information Engineering, Chinese Academy of Sciences (verified iie.ac.cn email)

A dozen-plus RCEs in popular LLM framework libraries like LangChain and LlamaIndex, which are used in lots of chat-assisted apps, including GitHub. These guys got a reverse shell in two prompts, and even managed to exploit SetUID for full root on the underlying VM!


mjgardner, (edited )
@mjgardner@social.sdf.org avatar

@kennwhite Looks like we’re at the “Matt’s Script Archive” level with frameworks.

The difference is that Matt Wright was a high school student in 1995 when he launched MSA and its infamously exploitable FormMail script.
