elevenhsoft, to random
@elevenhsoft@mastodon.social

We can now attach images to the applet :)

It's working just fine ^^

elevenhsoft, to System76
@elevenhsoft@mastodon.social

Some updates on the applet :)

Improved the layout/settings page, so now I think nothing looks out of place haha

Added loading, saving, and removing conversations from history.

Also, it's now possible to pull and remove models locally.

Ahh, and a beautiful button to stop the bot while it's typing, so if we don't like what it's saying, we can cut it off mid-reply :)


elevenhsoft, to System76
@elevenhsoft@mastodon.social

Some progress on the applet

Now we can save and load full conversations. Also, from now on you can keep your message context.

kevinctofel, to ai
@kevinctofel@hachyderm.io

Interesting local / #private #AI #search in-progress project worth watching: Perplexica. Aims to be similar to #Perplexity but has a ways to go yet. Works with #Ollama, which is what I’m using on #Linux to test local AI.

https://youtu.be/TkxmOC4HBSg?si=L9uCF9ePlT7Ccs6t

elevenhsoft, to System76
@elevenhsoft@mastodon.social

Hello friends! A new applet is coming soon...

This time I'm working on an Ollama applet for our lovely :)

joe, to ai

LLaVA (Large Language-and-Vision Assistant) was updated to version 1.6 in February. I figured it was time to look at how to use it to describe an image in Node.js. LLaVA 1.6 is an advanced vision-language model created for multi-modal tasks, seamlessly integrating visual and textual data. Last month, we looked at how to use the official Ollama JavaScript Library. We are going to use the same library today.

Basic CLI Example

Let’s start with a CLI app. For this example, I am using my remote Ollama server. If you don’t have one of those, you will want to install Ollama locally and replace const ollama = new Ollama({ host: 'http://100.74.30.25:11434' }); with const ollama = new Ollama({ host: 'http://localhost:11434' });.
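Here is a minimal sketch of what such a CLI app can look like, assuming the llava model has already been pulled; the prompt wording and model tag are just one reasonable choice, not the only one:

import { readFileSync } from 'node:fs';
import { Ollama } from 'ollama';

// Point this at your own Ollama server, or localhost if you run one locally.
const ollama = new Ollama({ host: 'http://localhost:11434' });

const imagePath = process.argv[2];
if (!imagePath) {
  console.error('Usage: node app.js <image filename>');
  process.exit(1);
}

// LLaVA is multi-modal: the image bytes ride along with the text prompt.
const response = await ollama.chat({
  model: 'llava',
  messages: [
    {
      role: 'user',
      content: 'Describe this image in detail.',
      images: [readFileSync(imagePath)],
    },
  ],
});

console.log(response.message.content);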

To run it, first run npm i ollama and make sure that you have "type": "module" in your package.json. You can run it from the terminal by running node app.js <image filename>. Let’s take a look at the result.

Its ability to describe an image is pretty awesome.

Basic Web Service

So, what if we wanted to run it as a web service? Running Ollama locally is cool and all, but it’s cooler if we can integrate it into an app. If you run npm install express to install Express, you can run this as a web service.

The web service accepts POST requests to http://localhost:4040/describe-image with a binary body containing the image that you want described. It then returns a JSON object containing the description.
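Below is a minimal sketch of such a service; the route and port match the description above, while the model tag, the 10 MB upload limit, and the exact JSON shape are assumptions:

import express from 'express';
import { Ollama } from 'ollama';

const ollama = new Ollama({ host: 'http://localhost:11434' });
const app = express();

// Accept any binary body up to 10 MB and expose it as a Buffer on req.body.
app.use(express.raw({ type: '*/*', limit: '10mb' }));

app.post('/describe-image', async (req, res) => {
  try {
    // Forward the raw image bytes to LLaVA and return its description.
    const response = await ollama.chat({
      model: 'llava',
      messages: [
        { role: 'user', content: 'Describe this image.', images: [req.body] },
      ],
    });
    res.json({ description: response.message.content });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(4040, () => console.log('Listening on http://localhost:4040'));

You can test it with curl --data-binary @photo.jpg http://localhost:4040/describe-image.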

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-18-at-1.41.20%E2%80%AFPM.png?resize=1024%2C729&ssl=1

Have any questions, comments, etc.? Feel free to drop a comment below.

https://jws.news/2024/how-can-you-use-llava-and-node-js-to-describe-an-image/
