@virtulis I’ve never really understood what node-gyp actually is or why I need to care about it, just that it occasionally pops up in build error messages and when I see it I get irrationally angry.
Kitten now has a lovely new multi-page Settings screen and… drumroll… a new 🐢 interactive shell (REPL) you can use to play with the running state of your Small Web site/app/place: debug your app, inspect or manipulate its database, etc.
I plan on recording demos of each of them tomorrow but you can play with them now.
And here’s a little tutorial to get you started with the shell:
PS. Since it’s not a common thing in ‘Big Web’ development, note that the Settings app is part of Kitten’s own internal web app, which is available to every Small Web app created using Kitten. So all those Small Web apps can take advantage of data portability with backup/restore, Domain integration for managing your hosting account with your domain host, the evergreen web, public-key encryption for end-to-end-encrypted peer-to-peer web apps (the Small Web), etc.
LLaVA (Large Language-and-Vision Assistant) was updated to version 1.6 in February, so I figured it was time to look at how to use it to describe an image in Node.js. LLaVA 1.6 is an advanced vision-language model built for multi-modal tasks, seamlessly integrating visual and textual data. Last month, we looked at how to use the official Ollama JavaScript Library; we are going to use the same library today.
Basic CLI Example
Let’s start with a CLI app. For this example, I am using my remote Ollama server, but if you don’t have one of those, you will want to install Ollama locally and replace `const ollama = new Ollama({ host: 'http://100.74.30.25:11434' });` with `const ollama = new Ollama({ host: 'http://localhost:11434' });`.
To run it, first run `npm i ollama` and make sure that you have `"type": "module"` in your package.json. Then run it from the terminal with `node app.js <image filename>`. Let’s take a look at the result.
Its ability to describe an image is pretty awesome.
Basic Web Service
So, what if we wanted to run it as a web service? Running Ollama locally is cool and all, but it’s cooler if we can integrate it into an app. Run `npm install express` to install Express and you can run this as a web service.
The web service accepts POST requests to http://localhost:4040/describe-image with a binary body containing the image you want described. It then returns a JSON object containing the description.
Wrote my first programming related blog post in a little while: How to redirect the user back to the previously requested URL after login with Adonis.js:
The Evergreen Web section in Kitten’s¹ settings now has its own page too (and uses Kitten’s new Streaming HTML² workflow).
If you have the previous version of your site up somewhere, you can use the 404-to-307 technique³ to forward missing pages to your old site so as not to break the Web.
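Kitten handles this for you, but the idea itself is framework-agnostic: any request that would otherwise 404 gets a 307 (temporary redirect) to the same path on the previous version of the site. As a rough sketch in plain Node (the old-site URL here is hypothetical, and a real app would route known pages first):

```javascript
// Sketch of the 404-to-307 idea in plain Node; OLD_SITE is a placeholder.
import http from 'node:http';

const OLD_SITE = 'https://old.example.com'; // hypothetical previous version of the site

// Map a missing path on the new site to the same path on the old one.
export function redirectTarget(path, oldSite = OLD_SITE) {
  return new URL(path, oldSite).href;
}

const server = http.createServer((req, res) => {
  // In a real app, your router would handle known pages before this;
  // anything left over is redirected instead of 404ing.
  res.writeHead(307, { Location: redirectTarget(req.url) });
  res.end();
});

// server.listen(3000); // uncomment to try it locally
```

Using 307 rather than 301 keeps the redirect temporary, so links aren’t permanently claimed by the old site if you later restore those pages.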