joe

@joe@jws.news

I am a humble Milwaukeean. I write code, travel, ride two-wheeled transportation, and love my dogs. This is my blog. You can also follow as @joe (mastodon), @steinbring (kbin / lemmy), or @steinbring (pixelfed).


joe, to ai

LLaVA (Large Language-and-Vision Assistant) was updated to version 1.6 in February. I figured it was time to look at how to use it to describe an image in Node.js. LLaVA 1.6 is an advanced vision-language model created for multi-modal tasks, seamlessly integrating visual and textual data. Last month, we looked at how to use the official Ollama JavaScript Library. We are going to use the same library today.

Basic CLI Example

Let’s start with a CLI app. For this example, I am using my remote Ollama server, but if you don’t have one of those, you will want to install Ollama locally and replace const ollama = new Ollama({ host: 'http://100.74.30.25:11434' }); with const ollama = new Ollama({ host: 'http://localhost:11434' });.
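
Here is a minimal sketch of what app.js can look like (the exact prompt wording and the model tag are assumptions on my part):

```javascript
// app.js - a minimal sketch; assumes the llava model has already been pulled
import { Ollama } from 'ollama';
import { readFileSync } from 'node:fs';

const ollama = new Ollama({ host: 'http://100.74.30.25:11434' });

// Read the image passed on the command line and base64-encode it
const imagePath = process.argv[2];
const image = readFileSync(imagePath).toString('base64');

// Ask LLaVA to describe the image
const response = await ollama.generate({
  model: 'llava',
  prompt: 'Describe this image.',
  images: [image],
});

console.log(response.response);
```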

To run it, first run npm i ollama and make sure that you have "type": "module" in your package.json. You can run it from the terminal by running node app.js <image filename>. Let’s take a look at the result.

Its ability to describe an image is pretty awesome.

Basic Web Service

So, what if we wanted to run it as a web service? Running Ollama locally is cool and all, but it’s cooler if we can integrate it into an app. If you npm install express to install Express, you can run this as a web service.
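
Here is a rough sketch of the server (again, the model and the prompt wording are assumptions):

```javascript
// server.js - a rough sketch of the web service
import express from 'express';
import { Ollama } from 'ollama';

const ollama = new Ollama({ host: 'http://localhost:11434' });
const app = express();

// Accept the raw binary body (the image) regardless of content type
app.use(express.raw({ type: '*/*', limit: '25mb' }));

app.post('/describe-image', async (req, res) => {
  // Base64-encode the uploaded image and hand it to LLaVA
  const image = req.body.toString('base64');
  const response = await ollama.generate({
    model: 'llava',
    prompt: 'Describe this image.',
    images: [image],
  });
  res.json({ description: response.response });
});

app.listen(4040);
```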

The web service accepts POST requests to http://localhost:4040/describe-image with a binary body that contains the image that you want described. It then returns a JSON object containing the description.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-18-at-1.41.20%E2%80%AFPM.png?resize=1024%2C729&ssl=1

Have any questions, comments, etc.? Feel free to drop a comment below.

https://jws.news/2024/how-can-you-use-llava-and-node-js-to-describe-an-image/

joe, to ai

A few weeks back, I thought about getting an AI model to return the “Flavor of the Day” for a Culver’s location. If you ask Llama 3:70b, “The website https://www.culvers.com/restaurants/glendale-wi-bayside-dr lists ‘today’s flavor of the day’. What is today’s flavor of the day?”, it doesn’t give a helpful answer.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.29.28%E2%80%AFPM.png?resize=1024%2C690&ssl=1

If you ask ChatGPT 4 the same question, it gives an even less useful answer.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.33.42%E2%80%AFPM.png?resize=1024%2C782&ssl=1

If you check the website, today’s flavor of the day is Chocolate Caramel Twist.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.41.21%E2%80%AFPM.png?resize=1024%2C657&ssl=1

So, how can we get a proper answer? Ten years ago, when I wrote “The Milwaukee Soup App”, I used Kimono (which is long dead) to scrape the soup of the day. You could also write a fiddly script to scrape the value manually. It turns out that there is another option, though: ScrapeGraphAI. ScrapeGraphAI is a web scraping Python library that uses LLMs and direct graph logic to create scraping pipelines for websites, documents, and XML files. You just say which information you want to extract, and the library does it for you.

Let’s take a look at an example. The project has an official demo where you need to provide an OpenAI API key, select a model, provide a link to scrape, and write a prompt.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.35.29%E2%80%AFPM.png?resize=1024%2C660&ssl=1

As you can see, it reliably gives you the flavor of the day (in a nice JSON object). It goes even further, though: if you point it at the monthly calendar, you can ask for the flavor of the day and the soup of the day for the remainder of the month, and it handles that as well.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-1.14.43%E2%80%AFPM.png?resize=1024%2C851&ssl=1

Running it locally with Llama 3 and Nomic

I am running Python 3.12 on my Mac, but when you run pip install scrapegraphai to install the dependencies, it throws an error. The project lists a prerequisite of Python 3.8+, so I downloaded 3.9 and installed the library into a new virtual environment.

Let’s see what the code looks like.
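
It is roughly something like this (a sketch that assumes a local Ollama instance serving llama3 as the main model and nomic-embed-text as the embedding model):

```python
from scrapegraphai.graphs import SmartScraperGraph

# Point both the main model and the embedding model at a local Ollama instance
graph_config = {
    "llm": {
        "model": "ollama/llama3",
        "temperature": 0,
        "format": "json",
        "base_url": "http://localhost:11434",
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": "http://localhost:11434",
    },
}

# The prompt says what to extract; the source is the page to scrape
smart_scraper_graph = SmartScraperGraph(
    prompt="What is today's flavor of the day?",
    source="https://www.culvers.com/restaurants/glendale-wi-bayside-dr",
    config=graph_config,
)

print(smart_scraper_graph.run())
```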

You will notice that, just like in yesterday’s “How to build a RAG system” post, we are using both a main model and an embedding model.

So, what does the output look like?

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-2.28.10%E2%80%AFPM.png?resize=1024%2C800&ssl=1

At this point, if you want to harvest flavors of the day for each location, you can do so pretty simply. You just need to loop through each of Culver’s location websites.

Have a question, comment, etc.? Please feel free to drop a comment below.

https://jws.news/2024/how-to-use-ai-to-make-web-scraping-easier/

joe, (edited) to ai

Back in January, we started looking at AI and how to run a large language model (LLM) locally (instead of just using something like ChatGPT or Gemini). A tool like Ollama is great for building a system that uses AI without dependence on OpenAI. Today, we will look at creating a Retrieval-augmented generation (RAG) application, using Python, LangChain, Chroma DB, and Ollama. Retrieval-augmented generation is the process of optimizing the output of a large language model so that it references an authoritative knowledge base outside of its training data sources before generating a response. If you have a source of truth that isn’t in the training data, it is a good way to get the model to know about it. Let’s get started!

Your RAG will need a model (like llama3 or mistral), an embedding model (like mxbai-embed-large), and a vector database. The vector database contains relevant documentation to help the model answer specific questions better. For this demo, our vector database is going to be Chroma DB. You will need to “chunk” the text you are feeding into the database. Let’s start there.

Chunking

There are many ways of choosing the right chunk size and overlap, but for this demo, I am just going to use a chunk size of 7,500 characters and an overlap of 100 characters. I am also going to use LangChain’s CharacterTextSplitter to do the chunking. The overlap means that the last 100 characters of each chunk are duplicated at the start of the next database record, so context isn’t lost at the chunk boundaries.
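
A sketch of the chunking step, assuming the source text has already been loaded into a list of documents:

```python
from langchain.text_splitter import CharacterTextSplitter

# 7,500-character chunks, with the last 100 characters of each chunk
# repeated at the start of the next one
text_splitter = CharacterTextSplitter(chunk_size=7500, chunk_overlap=100)
chunks = text_splitter.split_documents(documents)
```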

The Vector Database

A vector database is a type of database designed to store, manage, and manipulate vector embeddings. Vector embeddings are representations of data (such as text, images, or sounds) in a high-dimensional space, where each data item is represented as a dense vector of real numbers. When you query a vector database, your query is transformed into a vector of real numbers. The database then uses this vector to perform similarity searches.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-08-at-2.36.49%E2%80%AFPM.png?resize=665%2C560&ssl=1

You can think of it as being like a two-dimensional chart with points on it. One of those points is your query. The rest are your database records. What are the points that are closest to the query point?

Embedding Model

To do this, you can’t just use an Ollama model. You also need an embedding model. As of this writing, there are three available to pull from the Ollama library. For this demo, we are going to be using nomic-embed-text.

Main Model

Our main model for this demo is going to be phi3. It is a 3.8B-parameter model that was trained by Microsoft.

LangChain

You will notice that today’s demo is heavily using LangChain. LangChain is an open-source framework designed for developing applications that use LLMs. It provides tools and structures that enhance the customization, accuracy, and relevance of the outputs produced by these models. Developers can leverage LangChain to create new prompt chains or modify existing ones. LangChain pretty much has APIs for everything that we need to do in this app.

The Actual App

Before we start, you are going to want to pip install tiktoken chromadb beautifulsoup4 langchain langchain-community langchain-core (Chroma and WebBaseLoader need the chromadb and beautifulsoup4 packages, respectively). You are also going to want to ollama pull phi3 and ollama pull nomic-embed-text. This is going to be a CLI app. You can run it from the terminal like python3 app.py "<Question Here>".

You also need a sources.txt file containing the URLs of things that you want to have in your vector database.
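
Here is a sketch of the whole app (the prompt template and some of the variable names are my assumptions):

```python
# app.py - a sketch of the RAG pipeline
import sys

from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

question = sys.argv[1]

# Read the list of URLs from sources.txt and download the pages
with open("sources.txt") as f:
    urls = [line.strip() for line in f if line.strip()]
docs = WebBaseLoader(urls).load()

# Chunk the pages (7,500 characters with 100 characters of overlap)
splitter = CharacterTextSplitter(chunk_size=7500, chunk_overlap=100)
splits = splitter.split_documents(docs)

# Embed the chunks with nomic-embed-text and store them in Chroma
vectorstore = Chroma.from_documents(
    documents=splits,
    embedding=OllamaEmbeddings(model="nomic-embed-text"),
)
retriever = vectorstore.as_retriever()

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Build and invoke the RAG chain with phi3 as the main model
prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n"
    "{context}\n\nQuestion: {question}"
)
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | Ollama(model="phi3")
    | StrOutputParser()
)
print(rag_chain.invoke(question))
```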

So, what is happening here? Our app.py file reads sources.txt to get a list of URLs for news stories from Tuesday’s Apple event. It then uses WebBaseLoader to download the pages behind those URLs, uses CharacterTextSplitter to chunk the data, and creates the vectorstore using Chroma. Finally, it creates and invokes rag_chain.

Here is what the output looks like:

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-08-at-4.09.36%E2%80%AFPM.png?resize=1024%2C845&ssl=1

The May 7th event is too recent to be in the model’s training data, but this approach makes sure that the model knows about it anyway. You could also feed the model company policy documents, the rules to a board game, or your diary, and it will magically know that information. Since you are running the model locally in Ollama, there is also no risk of that information getting out. It is pretty awesome.

Have any questions, comments, etc.? Feel free to drop a comment below.

https://jws.news/2024/how-to-build-a-rag-system-using-python-ollama-langchain-and-chroma-db/

joe, to random

Yesterday, I wrote about how I moved a Mastodon bot from Pipedream to a Docker container. Docker is an efficient way of running isolated little scripts like that. Today, I wanted to review some basic debugging techniques to help ensure that your script runs as expected.

What docker images exist on the system?

When we looked at how to dockerize a Node app, I said that you create a Docker image and then run it as a container. So, how do you list the Docker images on a system? You run docker images.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-04-at-12.37.35%E2%80%AFPM.png?resize=1024%2C856&ssl=1

What docker containers exist on the system?

If you run docker ps, you can see which containers are running, and if you run docker ps -a, it will also include containers that aren’t running.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-04-at-12.46.33%E2%80%AFPM.png?resize=1024%2C856&ssl=1

How do you access a container’s shell?

Just as with a VM or a system running on bare metal, you can get a shell inside of a Docker container. The first step is knowing the container ID of the container that you want a shell for. You can find it in the output of the docker ps command.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-04-at-12.46.33%E2%80%AFPM-2.png?resize=1024%2C856&ssl=1

At this point, you run docker exec -it [container id] /bin/sh to get a shell inside the container.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-04-at-1.12.14%E2%80%AFPM.png?resize=1024%2C856&ssl=1

Once you know that the image is there, whether the container is running, and how to get a shell inside of it, you should be able to find what is wrong with your container.

Have a question, comment, etc.? Feel free to drop a comment below.

https://jws.news/2024/debugging-a-docker-container/

joe, to mastodon

Back in 2022, I created “Good Morning, Milwaukee!”. It is a bot that posts every day at 6 am with the weather, the times for sunrise and sunset, and a photo from around the city. When I first wrote it, I wrote it in Node and put it up on Pipedream. Lately, there have been some issues with the weather API that it was using, so I decided to replace it with the OpenWeather API. I figured that while I was at it, I would also rewrite it in Python, dockerize it, and run it on my new home lab server.

Let’s start with what the actual Python script looks like.
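
Here is a condensed sketch of the script (the post wording, the zip code, and the API handling are simplified or assumed, and the photo-posting part is omitted):

```python
# gmmke.py - a condensed sketch of the bot (no photo handling here)
import datetime

import requests
from mastodon import Mastodon

api_key = "YOUR_OPENWEATHER_API_KEY"
zip_code = "53202"  # assumed Milwaukee zip code
mastodon_access_token = "YOUR_MASTODON_ACCESS_TOKEN"

# Fetch the current weather, sunrise, and sunset from the OpenWeather API
weather = requests.get(
    "https://api.openweathermap.org/data/2.5/weather",
    params={"zip": f"{zip_code},us", "appid": api_key, "units": "imperial"},
    timeout=30,
).json()

sunrise = datetime.datetime.fromtimestamp(weather["sys"]["sunrise"]).strftime("%I:%M %p")
sunset = datetime.datetime.fromtimestamp(weather["sys"]["sunset"]).strftime("%I:%M %p")
temperature = round(weather["main"]["temp"])
conditions = weather["weather"][0]["description"]

status = (
    f"Good Morning, Milwaukee! It is currently {temperature}°F with {conditions}. "
    f"Sunrise is at {sunrise} and sunset is at {sunset}."
)

# Post the status using Mastodon.py
mastodon = Mastodon(access_token=mastodon_access_token, api_base_url="https://mastodon.example")
mastodon.status_post(status)
```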

If you want to reuse this code to create your own bot, there are variables at the top for api_key, zip_code, and mastodon_access_token. The actual posting is done using Mastodon.py.

So, what would the Dockerfile look like?
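
Something along these lines works (the base image and the timezone handling are my assumptions):

```dockerfile
FROM python:3.12-slim

# cron runs the daily 6:00 AM job; the schedule is in Milwaukee time
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*
RUN ln -snf /usr/share/zoneinfo/America/Chicago /etc/localtime \
    && echo "America/Chicago" > /etc/timezone

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY gmmke.py crontab ./

# Register the schedule with cron
RUN crontab /app/crontab

# Post once at container start, then hand off to cron in the foreground
CMD python /app/gmmke.py && cron -f
```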

You’ll notice that it also needs a requirements.txt and a crontab file. Let’s see what those look like.
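
The requirements.txt just lists the two libraries that the script needs:

```
Mastodon.py
requests
```

And the crontab is a single line (cron’s default PATH doesn’t include the Python image’s interpreter, so the full path matters):

```
0 6 * * * /usr/local/bin/python /app/gmmke.py
```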

Just make sure that you have a newline at the end of your crontab file. At this point, you can run docker build -t gmmke-app . to build the Docker image and then run docker run -d gmmke-app to run the container.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-03-at-3.24.33%E2%80%AFPM.png?resize=1024%2C856&ssl=1

With that, it is going to post once when you create the container and then daily at 6:00 AM (Milwaukee time).

Have any questions, comments, etc.? Feel free to drop them below.

https://jws.news/2024/i-rewrote-good-morning-milwaukee-in-python/

joe, to machinelearning

In yesterday’s post, we asked the basic question of what machine learning is. I hoped to illustrate the similarities and differences between artificial intelligence and machine learning. Lately, on this site, we have been spending a bit of time using Python, and I wanted to take a moment today to look at a great library for machine learning in Python.

Scikit-learn is the go-to library for machine learning with an amazing ecosystem of plugins. It is open-source and supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection, model evaluation, and many other utilities. After you python3 -m venv EnvironmentName and source EnvironmentName/bin/activate, you can install it by running pip install scikit-learn. At that point, you can reference it in your code as sklearn.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-26-at-2.37.12%E2%80%AFPM.png?resize=1024%2C374&ssl=1

The way that scikit-learn works is that you start with some data, you give it to a model, the model learns from it, and then you will be able to make predictions. The common notation is splitting up the data into a part called X (everything you are using to make a prediction) and another part called y (the prediction you are interested in making). The X could be information about a house (square feet, number of bathrooms, etc.) where y is the house price, or X could be a patient’s health statistics where y is whether or not they develop diabetes. The model then uses X to try to predict y.

sklearn.datasets

Let’s take a look at the sklearn.datasets module first. You can use fetch_california_housing (https://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_california_housing.html) to get test data about the California housing market directly out of the library.
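
A sketch of that looks like this:

```python
from sklearn.datasets import fetch_california_housing

# Load the dataset as pandas objects (20,640 records; 9 columns including the target)
data = fetch_california_housing(as_frame=True)

X = data.data    # the features used to make a prediction
y = data.target  # the median house value we want to predict

print(data.feature_names)
```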

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-27-at-6.37.15%E2%80%AFPM.png?resize=1024%2C650&ssl=1

In the above code, we load the 20,640 records and 9 columns into the data variable and then we set the things that we are using to make a prediction to X and the prediction that we are interested in making to y. So, what are the feature (column) names for the data? If you print(data.feature_names), it will print them.

sklearn.model_selection

Once you have data, you can start working on creating a model. The model itself is nothing more than a Python object, but the goal after you create it is to train it. You will want to split your data into a training set and a test set. Using train_test_split (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) from sklearn.model_selection, you can split it into 70% of the data for training the model and 30% of the data for testing the model (or whatever split you want).

Let’s see what that looks like.
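
Roughly like this:

```python
from sklearn.model_selection import train_test_split

# Hold out 30% of the data for testing and train on the remaining 70%
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)
```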

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-28-at-8.32.31%E2%80%AFPM.png?resize=1024%2C336&ssl=1

sklearn.impute

A dataset is rarely pristine. There are often missing data points or data points that are set to a value like 0. Imputing is the process of replacing missing or incomplete data with substituted values. SimpleImputer (https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) in sklearn.impute lets you replace missing values using a descriptive statistic (e.g., mean, median, or most frequent) along each column.

Let’s see what that looks like.
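
Something like this (a sketch that assumes the diabetes data is in DataFrames named X_train and X_test):

```python
from sklearn.impute import SimpleImputer

# Treat 0 as a missing value and replace it with the column mean.
# num_preg is skipped because 0 pregnancies is a legitimate value.
imputer = SimpleImputer(missing_values=0, strategy="mean")
columns = [column for column in X_train.columns if column != "num_preg"]
X_train[columns] = imputer.fit_transform(X_train[columns])
X_test[columns] = imputer.transform(X_test[columns])
```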

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-29-at-1.53.33%E2%80%AFPM.png?resize=1024%2C302&ssl=1

In the above example, we are taking any X values except num_preg (the number of pregnancies) that have the value 0 and setting them to the mean. That makes it so that missing values don’t skew things when you go to train the model.

Creating and training a model

Like I said above, the model itself is nothing more than a Python object. You can use sklearn to both create and train it, though. Let’s see what it looks like to create a model using sklearn.neighbors (for a regression based on k-nearest neighbors) and then use fit (https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor.fit) to train the model.
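
As a sketch:

```python
from sklearn.neighbors import KNeighborsRegressor

# The model is just a Python object; .fit() trains it on the training set
model = KNeighborsRegressor()
model.fit(X_train, y_train)
```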

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-29-at-3.46.17%E2%80%AFPM.png?resize=1024%2C246&ssl=1

The neat thing about .fit() is that if you want to swap out the KNeighborsRegressor model with a new one, .fit() still works just the same. Let’s look at what it would look like using a linear regression model.
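
Something like this:

```python
from sklearn.linear_model import LinearRegression

# A different model, but the exact same .fit() call
model = LinearRegression()
model.fit(X_train, y_train)
```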

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-29-at-3.48.42%E2%80%AFPM.png?resize=1024%2C250&ssl=1

That’s pretty easy.

How do you check the accuracy of the trained model?

Sklearn has a method for predicting using your chosen model and a library for performance metrics. Let’s take a look at what those look like.
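
As a sketch (accuracy_score assumes a classifier, like a KNeighborsClassifier trained on the diabetes data; for the regressors above, you would reach for a metric like r2_score instead):

```python
from sklearn.metrics import accuracy_score

# Predict y from the training features, then compare against the actual y values
y_pred = model.predict(X_train)
print(accuracy_score(y_train, y_pred))
```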

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-29-at-4.02.57%E2%80%AFPM.png?resize=1024%2C228&ssl=1

In the above code, we are predicting the value for y and then comparing it against the actual value of y. Using just the training data, it is predicting the values with a 75.23% level of accuracy.

So, what is next?

In a future post, I want to step through the whole process of picking a statement to test, adjusting the data, building and training a model, testing, adjusting the model, and making predictions. Let’s save that for another day, though.

https://jws.news/2024/what-is-scikit-learn/

joe, to machinelearning

Last week, we went over some basics of Artificial Intelligence (AI) using Ollama, Llama3, and some custom code. Artificial intelligence (AI) encompasses a broad range of technologies designed to enable machines to perform tasks that typically require human intelligence. These tasks include understanding spoken or written language, recognizing visual patterns, making decisions, and providing recommendations. Machine learning (ML) is a specialized subset of AI that focuses on developing systems that improve their performance over time without being explicitly programmed. Instead, ML algorithms analyze and learn from large datasets to identify patterns and make decisions based on these insights. This learning process allows ML models to make increasingly accurate predictions or decisions as they are exposed to more data.

A few months ago, I added Liner to the resource page of my website. It allows you to easily train an ML model so that you can do image, text, audio, or video classification, object detection, image segmentation, or pose classification. I created “Is this Joe or Not Joe?” using that tool. TensorFlow.js is running client-side with a model that is trained on a half dozen examples of photos that are Joe and a half dozen examples of photos that are not Joe. You can supply a photo and get a prediction if Joe is in the image or not. You can always retrain the existing model with more examples. That is an example of machine learning.

So, you can think of ML as a subset of AI and Deep Learning (DL) as a subset of ML.

Have any questions, comments, etc.? Please feel free to drop a comment below.

https://jws.news/2024/what-is-machine-learning/

joe, to ai

Yesterday, we looked at how to write a JavaScript app that uses Ollama. Recently, we started to look at Python on this site, and I figured that we had better follow it up with how to write a Python app that uses Ollama. Just like with JavaScript, Ollama offers a Python library, so we are going to be using that for our examples. Also, just like we did with the JavaScript demo, I am going to be using the generate endpoint instead of the chat endpoint. That keeps things simpler, but I am going to explore the chat endpoint at some point, too.

Install the Ollama Library

The first step is to create a virtual environment to isolate your project’s libraries from the global Python libraries. Then, you can run pip3 install ollama from the terminal to install the Ollama library.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-5.58.34%E2%80%AFPM.png?resize=1024%2C647&ssl=1

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-5.59.03%E2%80%AFPM.png?resize=1024%2C647&ssl=1

Basic CLI example

At this point, we can start writing code. When we used the web service earlier this week, we used the generate endpoint and provided model, prompt, and stream as parameters. We set the stream parameter to false so that it would return a single response object instead of a stream of objects. When using the Python library, the stream parameter isn’t necessary because it returns a single response object by default. We still provide it with a model and a prompt, though.
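
A minimal version looks something like this (the model and the prompt are just examples):

```python
import ollama

# No stream parameter needed; the library returns a single response object by default
output = ollama.generate(
    model="llama3",
    prompt="Why is the sky blue?",
)

print(output)
```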

If you run it from the terminal, the response will look familiar.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-8.05.20%E2%80%AFPM.png?resize=1024%2C647&ssl=1

If you replace print(output) with print(output['response']), you can more clearly see the important bits.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-8.09.04%E2%80%AFPM.png?resize=1024%2C647&ssl=1

Basic Web Application Example

The output is very similar to the node-fetch example from earlier this week. Last week, when we looked at how to dockerize a Node app, we output an array as an unordered list. Let’s see if we can replicate that result using the output from Ollama.

If you pip install flask to install Flask, you can host a simple HTTP page on port 8080, and with the magic of json.loads() and a for loop, you can build your unordered list.
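
Here is a sketch of that (the prompt wording is my assumption, and the model won’t always return clean JSON):

```python
import json

import ollama
from flask import Flask

app = Flask(__name__)

@app.route("/")
def list_cities():
    # Server-side call to Ollama asking for a JSON array of city names
    output = ollama.generate(
        model="llama3",
        prompt="List the largest cities in Wisconsin as a JSON array of strings. "
               "Respond with only the JSON array.",
    )

    # Parse the model's JSON and build the unordered list
    cities = json.loads(output["response"])
    items = ""
    for city in cities:
        items += f"<li>{city}</li>"
    return f"<ul>{items}</ul>"

if __name__ == "__main__":
    app.run(port=8080)
```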

So, what does the output look like?

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-22-at-8.27.30%E2%80%AFPM.png?resize=1024%2C651&ssl=1

Every time you load the page, it makes a server-side API call to Ollama, gets a list of large cities in Wisconsin, and displays them on the website. The list is never the same (because of hallucinations), but that is another issue.

Have any questions, comments, etc.? Please feel free to drop a comment below.

https://jws.news/2024/how-to-write-a-python-app-that-uses-ollama/
