gael, (edited ) to llm French
@gael@mastodon.social avatar

Imagine a #LLM running locally on a smartphone...

It could be used to fuel an offline assistant that would be able to easily add an appointment to your calendar, open an app, etc. without #privacy issues.

This has become a reality with this proof of concept using the Phi-2 2.8B transformer model running on /e/OS.

It is slow, so not very usable until we have dedicated chips on SoCs, but it works (and it's #opensource!)

🙏 Stypox

#AI on #mobile 👇 👇 👇

owzim,

@gael I'd say before that happens, running an LLM on your local network is your best bet. Projects like https://ollama.com/ make that incredibly easy.

joe, to ai

Yesterday, we looked at how to write a JavaScript app that uses Ollama. Recently, we started to look at Python on this site, and I figured that we had better follow it up with how to write a Python app that uses Ollama. Just like with JavaScript, Ollama offers a Python library, so we are going to be using that for our examples. Also, just as we did with the JavaScript demo, I am going to be using the generate endpoint instead of the chat endpoint. That keeps things simpler, but I am going to explore the chat endpoint at some point as well.

Install the Ollama Library

The first step is to create a virtual environment to isolate your project’s libraries from the global Python libraries. Once that is activated, you can run pip3 install ollama from the terminal to install the Ollama library.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-5.58.34%E2%80%AFPM.png?resize=1024%2C647&ssl=1

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-5.59.03%E2%80%AFPM.png?resize=1024%2C647&ssl=1

Basic CLI example

At this point, we can start writing code. When we used the web service earlier this week, we used the generate endpoint and provided model, prompt, and stream as parameters. We set the stream parameter to false so that it would return a single response object instead of a stream of objects. When using the python library, the stream parameter isn’t necessary because it returns a single response object by default. We still provide it with a model and a prompt, though.
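Here is a minimal sketch of what that script might look like (the model name and prompt are placeholders, so swap in whatever you have pulled locally):

```python
# app.py -- a minimal sketch using the Ollama Python library
# assumes `pip3 install ollama` and a local Ollama server with llama3 pulled
import ollama

output = ollama.generate(
    model='llama3',               # any model you have pulled locally
    prompt='Why is the sky blue?'
)

print(output)  # the full response object
```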

If you run it from the terminal, the response will look familiar.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-8.05.20%E2%80%AFPM.png?resize=1024%2C647&ssl=1

If you replace print(output) with print(output['response']), you can more clearly see the important bits.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-8.09.04%E2%80%AFPM.png?resize=1024%2C647&ssl=1

Basic Web Application Example

The output is very similar to the node-fetch example from earlier this week. Last week, when we looked at how to dockerize a node app, we output an array as an unordered list. Let’s see if we can replicate that result using the output from Ollama.

If you run pip install flask to install Flask, you can host a simple HTTP page on port 8080, and with the magic of json.loads() and a for loop, you can build your unordered list.
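A rough sketch of that Flask app might look like this (the prompt wording and route are illustrative, not the exact code from the post):

```python
# server.py -- a sketch assuming Flask and the Ollama Python library are installed
import json

import ollama
from flask import Flask

app = Flask(__name__)

@app.route('/')
def cities():
    output = ollama.generate(
        model='llama3',
        prompt='Give me a JSON array (and nothing else) of the names of cities '
               'in Wisconsin with a population over 100,000 people.'
    )
    city_list = json.loads(output['response'])          # parse the model's JSON answer
    items = ''.join(f'<li>{city}</li>' for city in city_list)
    return f'<ul>{items}</ul>'

if __name__ == '__main__':
    app.run(port=8080)
```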

So, what does the output look like?

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-8.27.30%E2%80%AFPM.png?resize=1024%2C651&ssl=1

Every time you load the page, it makes a server-side API call to Ollama, gets a list of large cities in Wisconsin, and displays them on the website. The list is never the same (because of hallucinations) but that is another issue.

Have any questions, comments, etc? Please feel free to drop a comment, below.

https://jws.news/2024/how-to-write-a-python-app-that-uses-ollama/

ironicbadger, to selfhosted
@ironicbadger@techhub.social avatar

I hooked the Enchanted LLM app up to my Ollama instance tonight. Running on the EPYC box with an NVIDIA A4000.

I can’t notice a difference in speed between this and the real ChatGPT, tbh. And I own the whole chain locally. Man, this is cool!!

I even shared the Ollama API endpoint with a buddy over Tailscale and now they’re whipping the llamas 🦙 API as well. Super fun times.

https://apps.apple.com/us/app/enchanted-llm/id6474268307

davep, to python
@davep@fosstodon.org avatar

TIL Midlothian is in a different timezone from Hampshire.

joe, (edited ) to programming

Yesterday, we played with Llama 3 using the Ollama CLI client (or REPL). Today, I figured that we would play with it using the Ollama API. The Ollama API is documented on their GitHub repo. Ollama has a client that runs when you run ollama run llama3 and a service that can be accessed from something like MindMac, Amallo, or Enchanted. The service is what starts when you run ollama serve.

In our first Llama 3 post, we asked the model for “a comma-delimited list of cities in Wisconsin with a population over 100,000 people”. Using Postman and the completion API endpoint, you can ask the same thing.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-1.30.48%E2%80%AFPM.png?resize=1024%2C811&ssl=1

You will notice the stream parameter is set to false in the body. If the value is false, the response will be returned as a single response object, rather than a stream of objects. If you are using the API with a web application, you will want to ask the model for the answer as JSON and you will probably want to provide an example of how you want the answer formatted.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-1.45.15%E2%80%AFPM.png?resize=1024%2C811&ssl=1

You can use Node and Node-fetch to do the same thing.
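Something like this node-fetch sketch should work (the prompt is the one from the post; the host and model are assumptions for a default local install):

```js
// app.js -- a sketch using node-fetch against Ollama's generate endpoint
// assumes `npm i node-fetch`, "type": "module" in package.json, and Ollama on localhost
import fetch from 'node-fetch';

const response = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3',
    prompt: 'Give me a comma-delimited list of cities in Wisconsin with a population over 100,000 people.',
    stream: false, // return a single response object instead of a stream
  }),
});

const data = await response.json();
console.log(data.response); // just the generated text
```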

If you run it from the terminal, it will look like this:

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-2.01.19%E2%80%AFPM.png?resize=1024%2C932&ssl=1

Have any questions, comments, etc? Please feel free to drop a comment, below.

https://jws.news/2024/lets-play-more-with-llama-3/

joe, to ai

Earlier this year, I started looking at how to run a fully on-prem AI. In February, I bought a machine to run the inference engine on and set up Tailscale (which works similarly to Hamachi) to connect to it remotely. If you want to use it from your other devices, there are a lot of options for native clients.

macOS

My favorite client for macOS is MindMac. You can buy it for under $30; it works with multiple models, servers, and server types, and it is easy to use.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-2.34.12%E2%80%AFPM.png?resize=1024%2C690&ssl=1

If you want to look further into it, you can check it out at mindmac.app.

Android

My favorite client for Android is Amallo. It is $23 and like MindMac, it works with multiple models, servers, and server types. My only complaint would be that uploading a base64-encoded image to the model doesn’t seem to work well.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot_20240420-143906.png?resize=461%2C1024&ssl=1

If you want to look further into it, you can check it out at doppeltilde.com.

iPadOS

There is a version of Amallo for iPadOS but I have been liking Enchanted LLM more. If you like it, there is a version for macOS as well. It has the added benefit of being free.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/IMG_0088.jpg?resize=672%2C1024&ssl=1

If you want to look further into it, you can check it out at the project’s GitHub page.

Have any questions, comments, etc? Please feel free to drop a comment, below.

https://jws.news/2024/how-i-use-ai/

my_actual_brain, to llm

I’m surprised at the performance an LLM can run at on my 8700 CPU.

It’s a bit slow and I don’t think it’s worth getting a GPU just to make it run at a faster speed, but maybe in time I’ll reconsider it.

If I were going to get a GPU, what would you recommend?

joe, to ai

Last week, Meta announced Llama 3. Thanks to Ollama, you can run it pretty easily. There are 8B and 70B variants available, in both pre-trained and instruction-tuned versions. I am not seeing it on the Hugging Face Leaderboard yet, but what little I have played around with so far has been promising.

Here are two basic test questions:

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-12.15.45%E2%80%AFPM.png?resize=989%2C1024&ssl=1

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-12.27.47%E2%80%AFPM.png?resize=989%2C1024&ssl=1

Have any questions, comments, etc? Please feel free to drop a comment, below.

https://jws.news/2024/lets-play-with-llama-3/

kohelet, to llm
@kohelet@mstdn.social avatar

So,
I downloaded Ollama,
installed a local LLM,
installed the Continue VSCode extension, configured it with my local LLM.

Now I just need something to do with it!
Like, any project at all.
huh.

joe, to ai

A few weeks back, I thought about getting an AI model to return the “Flavor of the Day” for a Culver’s location. If you ask Llama 3:70b “The website https://www.culvers.com/restaurants/glendale-wi-bayside-dr lists “today’s flavor of the day”. What is today’s flavor of the day?”, it doesn’t give a helpful answer.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.29.28%E2%80%AFPM.png?resize=1024%2C690&ssl=1

If you ask ChatGPT 4 the same question, it gives an even less useful answer.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.33.42%E2%80%AFPM.png?resize=1024%2C782&ssl=1

If you check the website, today’s flavor of the day is Chocolate Caramel Twist.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.41.21%E2%80%AFPM.png?resize=1024%2C657&ssl=1

So, how can we get a proper answer? Ten years ago, when I wrote “The Milwaukee Soup App”, I used Kimono (which is long dead) to scrape the soup of the day. You could also write a fiddly script to scrape the value manually. It turns out that there is another option, though: Scrapegraph-ai. ScrapeGraphAI is a web scraping Python library that uses LLMs and direct graph logic to create scraping pipelines for websites, documents, and XML files. Just say which information you want to extract and the library will do it for you.

Let’s take a look at an example. The project has an official demo where you need to provide an OpenAI API key, select a model, provide a link to scrape, and write a prompt.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.35.29%E2%80%AFPM.png?resize=1024%2C660&ssl=1

As you can see, it reliably gives you the flavor of the day (in a nice JSON object). It can go even further, though: if you point it at the monthly calendar, you can ask it for the flavor of the day and the soup of the day for the remainder of the month, and it can do that as well.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-1.14.43%E2%80%AFPM.png?resize=1024%2C851&ssl=1

Running it locally with Llama 3 and Nomic

I am running Python 3.12 on my Mac, but when you run pip install scrapegraphai to install the dependencies, it throws an error. The project lists a prerequisite of Python 3.8+, so I downloaded 3.9 and installed the library into a new virtual environment.

Let’s see what the code looks like.
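Roughly, the script looks like the sketch below. The exact graph_config keys have shifted between Scrapegraph-ai releases, so treat this as an outline under those assumptions rather than the canonical version:

```python
# scrape.py -- a sketch of Scrapegraph-ai driving local Ollama models
# assumes `pip install scrapegraphai`, `ollama pull llama3`, and `ollama pull nomic-embed-text`
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "model": "ollama/llama3",
        "temperature": 0,
        "format": "json",                      # Ollama needs the output format spelled out
        "base_url": "http://localhost:11434",
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": "http://localhost:11434",
    },
}

smart_scraper_graph = SmartScraperGraph(
    prompt="What is today's flavor of the day?",
    source="https://www.culvers.com/restaurants/glendale-wi-bayside-dr",
    config=graph_config,
)

print(smart_scraper_graph.run())
```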

You will notice that just like in yesterday’s How to build a RAG system post, we are using both a main model and an embedding model.

So, what does the output look like?

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-2.28.10%E2%80%AFPM.png?resize=1024%2C800&ssl=1

At this point, if you want to harvest flavors of the day for each location, you can do so pretty simply. You just need to loop through each of Culver’s location websites.

Have a question, comment, etc? Please feel free to drop a comment, below.

https://jws.news/2024/how-to-use-ai-to-make-web-scraping-easier/

davidbisset, to apple
@davidbisset@phpc.social avatar

NotesOllama uses Ollama to talk to local LLMs in Apple Notes on macOS. Inspired by Obsidian Ollama.

https://smallest.app/notesollama/

Not sure if I trust this if I don't see the code, though.

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

Llama 3 is already available on Ollama 🚀👇🏼

https://ollama.com/library/llama3

5am, to LLMs
@5am@fosstodon.org avatar

I've been playing around with locally hosted LLMs using the Ollama tool. I've mostly been using models like mistral and dolphin-coder for assistance with textual ideas and issues. More recently I've been using the llava visual model via some simple scripts, looping through images and creating description files. I can then grep those files for keywords and note the associated filenames. Powerful stuff!

mjaschen, to php German
@mjaschen@digitalcourage.social avatar

Weekly review, issue 39 (2024-18).

This time with

  • 🗺️ the Bikerouter Hall of Fame of all supporters so far ❤️
  • 🚵‍♂️ new tires for the crosser-gravel thing and the first ride on them, including a trip to Germany's largest (!) desert (!!)
  • 💻 @atomicpoet's findings on ergonomics when working on a notebook vs. a desktop computer, and how they match my own experience
  • 🤖 ollama, which makes it possible to run LLMs conveniently on your own machine
  • 🐘 cachetool, a tool for managing the PHP opcache
  • 🍎 a look into my macOS Applications directory: this time covering all the apps that start with "F"
  • 🛠️ blog deployment once more, which now actually runs via a simple post-merge hook in Git
  • 🔊 and, as always, techno

https://www.marcusjaschen.de/blog/2024/2024-18/

ironicbadger, to NixOS
@ironicbadger@techhub.social avatar

A fully self-hosted, totally offline, and local AI setup using Ollama, with open-webui as a front-end.

joe, (edited ) to ai

Back in January, we started looking at AI and how to run a large language model (LLM) locally (instead of just using something like ChatGPT or Gemini). A tool like Ollama is great for building a system that uses AI without dependence on OpenAI. Today, we will look at creating a Retrieval-augmented generation (RAG) application, using Python, LangChain, Chroma DB, and Ollama. Retrieval-augmented generation is the process of optimizing the output of a large language model, so it references an authoritative knowledge base outside of its training data sources before generating a response. If you have a source of truth that isn’t in the training data, it is a good way to get the model to know about it. Let’s get started!

Your RAG will need a model (like llama3 or mistral), an embedding model (like mxbai-embed-large), and a vector database. The vector database contains relevant documentation to help the model answer specific questions better. For this demo, our vector database is going to be Chroma DB. You will need to “chunk” the text you are feeding into the database. Let’s start there.

Chunking

There are many ways of choosing the right chunk size and overlap, but for this demo, I am just going to use a chunk size of 7500 characters and an overlap of 100 characters. I am also going to use LangChain‘s CharacterTextSplitter to do the chunking. The overlap means that the last 100 characters of one chunk are duplicated at the start of the next database record, so context isn’t lost at the boundaries.
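As a sketch, the chunking step looks something like this (the docs variable is a stand-in for whatever documents you load):

```python
# a minimal sketch of the chunking step with LangChain's CharacterTextSplitter
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(chunk_size=7500, chunk_overlap=100)
chunks = text_splitter.split_documents(docs)  # `docs` comes from a document loader (see below)
```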

The Vector Database

A vector database is a type of database designed to store, manage, and manipulate vector embeddings. Vector embeddings are representations of data (such as text, images, or sounds) in a high-dimensional space, where each data item is represented as a dense vector of real numbers. When you query a vector database, your query is transformed into a vector of real numbers. The database then uses this vector to perform similarity searches.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-08-at-2.36.49%E2%80%AFPM.png?resize=665%2C560&ssl=1

You can think of it as being like a two-dimensional chart with points on it. One of those points is your query. The rest are your database records. What are the points that are closest to the query point?

Embedding Model

To do this, you can’t just use an Ollama model; you also need an embedding model. There are three available to pull from the Ollama library as of this writing. For this demo, we are going to be using nomic-embed-text.

Main Model

Our main model for this demo is going to be phi3. It is a 3.8B-parameter model that was trained by Microsoft.

LangChain

You will notice that today’s demo is heavily using LangChain. LangChain is an open-source framework designed for developing applications that use LLMs. It provides tools and structures that enhance the customization, accuracy, and relevance of the outputs produced by these models. Developers can leverage LangChain to create new prompt chains or modify existing ones. LangChain pretty much has APIs for everything that we need to do in this app.

The Actual App

Before we start, you are going to want to pip install tiktoken langchain langchain-community langchain-core. You are also going to want to ollama pull phi3 and ollama pull nomic-embed-text. This is going to be a CLI app. You can run it from the terminal like python3 app.py "<Question Here>".

You also need a sources.txt file containing the URLs of things that you want to have in your vector database.
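Here is a sketch of what app.py could look like, given those packages. Import paths move around between LangChain releases, so consider this an outline under those assumptions rather than the exact file:

```python
# app.py -- a sketch of the RAG pipeline (usage: python3 app.py "<Question Here>")
import sys

from langchain_community.document_loaders import WebBaseLoader   # also needs beautifulsoup4
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.chat_models import ChatOllama
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

question = sys.argv[1]

# read the list of URLs to ingest
with open("sources.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

# download the pages and chunk them
docs = WebBaseLoader(urls).load()
splitter = CharacterTextSplitter(chunk_size=7500, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# build the vector store with the embedding model
vectorstore = Chroma.from_documents(
    documents=chunks,
    embedding=OllamaEmbeddings(model="nomic-embed-text"),
)
retriever = vectorstore.as_retriever()

def format_docs(retrieved):
    return "\n\n".join(d.page_content for d in retrieved)

# wire the retriever and the main model into a RAG chain
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the following context:\n{context}\n\nQuestion: {question}"
)
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOllama(model="phi3")
    | StrOutputParser()
)

print(rag_chain.invoke(question))
```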

So, what is happening here? Our app.py file is reading sources.txt to get a list of URLs for news stories from Tuesday’s Apple event. It then uses WebBaseLoader to download the pages behind those URLs, uses CharacterTextSplitter to chunk the data, and creates the vectorstore using Chroma. It then creates and invokes rag_chain.

Here is what the output looks like:

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-08-at-4.09.36%E2%80%AFPM.png?resize=1024%2C845&ssl=1

The May 7th event is too recent to be in the model’s training data. This makes sure that the model knows about it. You could also feed the model company policy documents, the rules to a board game, or your diary, and it will magically know that information. Since you are running the model in Ollama, there is also no risk of that information getting out. It is pretty awesome.

Have any questions, comments, etc? Feel free to drop a comment, below.

https://jws.news/2024/how-to-build-a-rag-system-using-python-ollama-langchain-and-chroma-db/

#AI #ChromaDB #Chunking #LangChain #LLM #Ollama #Python #RAG

dliden, to emacs

Has anyone here worked much with generators in Emacs Lisp?

I am looking for a good solution for streaming outputs in my ollama-elisp-sdk project. I think there's a good angle using generators to make a workflow fairly similar to e.g. the OpenAI API. Not sure yet though.

joe, to ai

LLaVA (Large Language-and-Vision Assistant) was updated to version 1.6 in February. I figured it was time to look at how to use it to describe an image in Node.js. LLaVA 1.6 is an advanced vision-language model created for multi-modal tasks, seamlessly integrating visual and textual data. Last month, we looked at how to use the official Ollama JavaScript Library. We are going to use the same library, today.

Basic CLI Example

Let’s start with a CLI app. For this example, I am using my remote Ollama server but if you don’t have one of those, you will want to install Ollama locally and replace const ollama = new Ollama({ host: 'http://100.74.30.25:11434' }); with const ollama = new Ollama({ host: 'http://localhost:11434' });.
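A sketch of that CLI app, with the host line the post mentions (the model tag and prompt are placeholders):

```js
// app.js -- a sketch of the CLI described above (run with: node app.js <image filename>)
import { readFileSync } from 'node:fs';
import { Ollama } from 'ollama';

const ollama = new Ollama({ host: 'http://100.74.30.25:11434' }); // or http://localhost:11434

const imagePath = process.argv[2];
const image = readFileSync(imagePath).toString('base64'); // the library accepts base64-encoded images

const output = await ollama.generate({
  model: 'llava',              // assumes you have pulled a llava model
  prompt: 'Describe this image.',
  images: [image],
});

console.log(output.response);
```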

To run it, first run npm i ollama and make sure that you have "type": "module" in your package.json. You can run it from the terminal by running node app.js <image filename>. Let’s take a look at the result.

Its ability to describe an image is pretty awesome.

Basic Web Service

So, what if we wanted to run it as a web service? Running Ollama locally is cool and all, but it’s cooler if we can integrate it into an app. If you run npm install express to install Express, you can run this as a web service.

The web service takes posts to http://localhost:4040/describe-image with a binary body that contains the image that you are trying to get a description of. It then returns a JSON object containing the description.
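A sketch of that service, assuming Express and the Ollama library are installed (the exact response shape is an assumption; the post only says it returns a JSON object containing the description):

```js
// server.js -- a sketch of the image-description web service
import express from 'express';
import { Ollama } from 'ollama';

const ollama = new Ollama({ host: 'http://localhost:11434' });
const app = express();

// accept the raw binary body (the image) on this route
app.post('/describe-image', express.raw({ type: '*/*', limit: '25mb' }), async (req, res) => {
  const image = req.body.toString('base64');

  const output = await ollama.generate({
    model: 'llava',
    prompt: 'Describe this image.',
    images: [image],
  });

  res.json({ description: output.response });
});

app.listen(4040);
```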

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-18-at-1.41.20%E2%80%AFPM.png?resize=1024%2C729&ssl=1

Have any questions, comments, etc? Feel free to drop a comment, below.

https://jws.news/2024/how-can-you-use-llava-and-node-js-to-describe-an-image/

sesivany, (edited ) to ai
@sesivany@floss.social avatar

Ollama is the easiest way to run local AI I've tried so far. In 5 minutes you can have a chatbot running on a local model. Dozens of models and UIs to choose from.
Just the speed is not great, but what can I expect on an Intel-only laptop?

ironicbadger, to selfhosted
@ironicbadger@techhub.social avatar

Did someone say self-hosted LLMs?

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

It was a pleasure to present this morning at ODSC East about data automation with LLMs.

Code examples and a tutorial are available on this repo: https://github.com/RamiKrispin/lang2sql
The slides are available on this repo: https://github.com/RamiKrispin/talks/tree/main/202404%20ODSC%20East%202024%20-%20%20Data%20Automation%20with%20LLM%20

Thanks to the conference organizers for the invite and the folks attending the session! 🙏

ainmosni, to programming
@ainmosni@berlin.social avatar

So, my Copilot trial just expired, and while it did cut down on some typing, it also made me feel like the quality of my code was lower, and of course it felt dirty to use it considering that it's a license whitewashing machine.

I don't think I will be paying for it, I don't think the results are worth it.

0xZogG,
@0xZogG@hachyderm.io avatar

@ainmosni: there is another solution where they have a free tier -
In general, as a non-frontend dev, I like how it makes suggestions for HTML, and for Go even just minimal placeholder function fill-ins are nice.
But because of the license and not knowing where my code is sent, I'm looking for a self-hosted solution. Found a few options with Ollama, but unfortunately my current 10-year-old HW is not enough for that :P

joe, to ai

So far this week, we have looked at how to use Ollama from the CLI, how to use Ollama from the web service, and how to use Ollama from a phone or iPad. Today we are going to be using the Ollama JavaScript Library to write an application.

Install the Ollama Library

The first step is to run npm i ollama from the terminal.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-21-at-8.30.04%E2%80%AFAM.png?resize=1024%2C728&ssl=1

That installs Ollama as a dependency in package.json.

Basic CLI example

At this point, we can start writing code. When we used the web service earlier this week, we used the generate endpoint and provided model, prompt, and stream as parameters. We set the stream parameter to false so that it would return a single response object instead of a stream of objects. When using the javascript library, the stream parameter isn’t necessary because it returns a single response object by default. We still provide it with a model and a prompt, though.
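A minimal sketch of that script (the model and prompt are placeholders):

```js
// app.js -- a minimal sketch using the Ollama JavaScript library
// assumes `npm i ollama` and "type": "module" in package.json
import ollama from 'ollama';

const output = await ollama.generate({
  model: 'llama3',               // any model you have pulled locally
  prompt: 'Why is the sky blue?',
});

console.log(output); // the full response object
```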

If you run it from the terminal, the response will look familiar.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-21-at-9.19.38%E2%80%AFAM.png?resize=1024%2C728&ssl=1

Basic Web Application Example

The output is very similar to the node-fetch example from earlier this week. Last week, when we looked at how to dockerize a node app, we output an array as an unordered list. Let’s see if we can replicate that result using the output from Ollama.

If you run npm install express to install Express, you can host a simple HTTP page on port 8080, and with the magic of JSON.parse() and a for loop, you can build your unordered list.
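A rough sketch of that Express app (the prompt wording is illustrative, not the exact code from the post):

```js
// server.js -- a sketch: Express page built from Ollama's answer
import express from 'express';
import ollama from 'ollama';

const app = express();

app.get('/', async (req, res) => {
  const output = await ollama.generate({
    model: 'llama3',
    prompt: 'Give me a JSON array (and nothing else) of the names of cities in Wisconsin ' +
            'with a population over 100,000 people.',
  });

  const cities = JSON.parse(output.response); // parse the model's JSON answer
  const listItems = cities.map((city) => `<li>${city}</li>`).join('');
  res.send(`<ul>${listItems}</ul>`);
});

app.listen(8080);
```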

So, what does the output look like?

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-21-at-8.03.57%E2%80%AFPM.png?resize=1024%2C796&ssl=1

Every time you load the page, it makes a server-side API call to Ollama, gets a list of large cities in Wisconsin, and displays them on the website. The list is never the same (because of hallucinations) but that is another issue.

Have any questions, comments, etc? Please feel free to drop a comment, below.

https://jws.news/2024/how-to-write-a-javascript-app-that-uses-ollama/
