joe

@joe@jws.news

I am a humble Milwaukeean. I write code, travel, ride two-wheeled transportation, and love my dogs. This is my blog. You can also follow as @joe (mastodon), @steinbring (kbin / lemmy), or @steinbring (pixelfed).


joe, (edited ) to random

I started this blog over 12 years ago. Over the years, I have changed where it was hosted, what the URL was, and generally how I used it. A few weeks ago, I got the idea to both move hosts and change the URL again. The last time I did that was over 5 years ago.

Almost three years ago, I started posting very regularly to the blog (for a year it was at least weekly). I don’t intend to return to that but I promise a solid attempt at monthly posts. You can follow along here of course but the blog is also available on the fediverse. You can follow @all or @joe to see my posts from Mastodon, Kbin, etc.

Have any questions, comments, etc? Please feel free to drop a comment.

https://jws.news/2023/new-blog-day/

joe, to javascript

Yesterday, we looked at how to define our own web components and how to use web components that were defined using Shoelace. Today, I figured that we should take a quick look at Lit. Lit is a simple library for building fast, lightweight web components.

Let’s see what yesterday’s “hello world” demo would look like with Lit.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

The syntax is a lot more pleasing in my humble opinion. Let’s see how to do something a little more complex, though.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

This example defines <blog-feed></blog-feed> with a json parameter. Lit has a <a href="https://lit.dev/docs/components/lifecycle/#connectedcallback">connectedCallback()</a> lifecycle hook that is invoked when a component is added to the document’s DOM. Using that, we are running this.fetchBlogFeed(), which in turn runs await fetch(). It then uses Lit’s rendering (https://lit.dev/docs/components/rendering/) to render the list items.

I am kind of digging Lit.

https://jws.news/2024/playing-more-with-web-components/

joe, (edited ) to AdobePhotoshop

This past month, I visited the local Hack & Tell to write a web app that uses Vue.js’s Composition API. I have written 41 posts involving Vue on this blog, but the Composition API is new here. If you are the “eat your dessert first” type, you can check out the result at https://joes-job-tracker.web.app. I want to spend this article reviewing the Composition API, though.

Let me start by explaining what the Composition API is. The Composition API is a collection of APIs that enable the creation of Vue components using imported functions, as opposed to the traditional method of declaring options. The term encompasses the Reactivity API, Lifecycle Hooks, and Dependency Injection. The Composition API is built into both Vue 3 and Vue 2.7. It is a bit of a departure from the traditional Vue 2 way of writing code, but older Vue 2 releases can use the officially maintained @vue/composition-api plugin.

So, what do the differences actually look like? Let’s take a look at the example of an app that tracks the location of the user’s mouse cursor. The first version uses the Vue 2 method where you declare options and the second version does the same thing but uses imported functions.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

So, what are the differences between the two? Let’s compare and contrast.

Vue 2 (Options API):

new Vue({
  el: '#app',
  data: {
    mouseX: 0,
    mouseY: 0,
  },
  methods: {
    trackMouse(event) {
      this.mouseX = event.clientX;
      this.mouseY = event.clientY;
    },
  },
  mounted() {
    // Attach an event listener to track mouse movement
    window.addEventListener('mousemove', this.trackMouse);
  },
  beforeDestroy() {
    // Clean up the event listener to prevent memory leaks
    window.removeEventListener('mousemove', this.trackMouse);
  },
});

Vue 3 (Composition API):

import { createApp, ref, onMounted, onBeforeUnmount } from 'vue';

createApp({
  setup() {
    const mouseX = ref(0);
    const mouseY = ref(0);

    const trackMouse = (event) => {
      mouseX.value = event.clientX;
      mouseY.value = event.clientY;
    };

    onMounted(() => {
      window.addEventListener('mousemove', trackMouse);
    });

    onBeforeUnmount(() => {
      window.removeEventListener('mousemove', trackMouse);
    });

    return {
      mouseX,
      mouseY,
    };
  },
}).mount('#app');

In the Vue 3 Composition API version, the setup function is used to define the component’s logic, and it uses reactive references (ref) to manage the state. Event handling is encapsulated within the onMounted and onBeforeUnmount lifecycle hooks. Vue 3 promotes a more modular and declarative approach. In the Vue 2 version, the code uses the Options API with a more traditional structure, relying on data, methods, and lifecycle hooks like mounted and beforeDestroy. Vue 3’s Composition API simplifies the code structure and encourages better organization and reusability of logic.

If you are anything like me, your first question is what reactive references are. Vue 2 doesn’t have a built-in equivalent to the ref and reactive features found in Vue 3. When you use a ref in a template, modifying its value triggers Vue’s automatic detection of the change, leading to a corresponding update of the DOM. This is made possible with a dependency-tracking based reactivity system. During the initial rendering of a component, Vue meticulously monitors and records every ref used in the process. Subsequently, when a ref undergoes a mutation, it initiates a re-render for the components that are observing it.
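To make the dependency-tracking idea concrete, here is a toy sketch in plain JavaScript. This is not Vue’s actual implementation; the names toyRef and watchEffect are made up for illustration. Reads of .value record the currently running effect, and writes notify every recorded effect:

```javascript
// Toy model of a reactive ref: reads are tracked, writes notify watchers.
function toyRef(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get value() {
      if (activeEffect) subscribers.add(activeEffect); // dependency tracking
      return value;
    },
    set value(next) {
      value = next;
      subscribers.forEach((fn) => fn()); // trigger "re-render"
    },
  };
}

let activeEffect = null;
function watchEffect(fn) {
  activeEffect = fn;
  fn(); // first run records which refs were read
  activeEffect = null;
}

// Usage: the "render" re-runs whenever the ref it read is mutated.
const count = toyRef(0);
let rendered = '';
watchEffect(() => { rendered = `count is ${count.value}`; });
count.value = 5;
console.log(rendered); // "count is 5"
```

Vue’s real system is far more involved (effect scheduling, nested effects, cleanup), but the shape is the same: the first render records the refs it touched, and a later mutation re-runs the recorded observers.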

Here are the Vue 2 and the Composition API versions of the same application:

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

Vue 3’s reactivity system is more efficient and performs better than Vue 2. This can result in better overall performance, especially in applications with complex reactivity requirements. It also reduces the use of magic keywords like this, which can be a source of confusion in Vue 2.

Here are the two versions, side by side:

Vue 2 (Options API):

new Vue({
  el: '#app',
  data: {
    myValue: 42,
  },
  methods: {
    updateMyValue() {
      this.myValue = 69; // Update the value
    },
  },
});

Vue 3 (Composition API):

import { createApp, ref } from 'vue';

const app = createApp({
  setup() {
    const myValue = ref(42);
    const updateMyValue = () => {
      myValue.value = 69; // Update the value
    };
    return { myValue, updateMyValue };
  }
});

app.mount('#app');

Another thing that the Composition API brings to the table is composables. Vue composables are special functions that leverage the Composition API to create reactive and reusable logic. They serve as external functions, abstracting reactive states and functionalities for use in multiple components. These composables, also known as composition functions, streamline the process of sharing logic across different parts of an application.

In functionality, composables are akin to Vue 2’s mixins found in the Options API and resemble the concept of Hooks in React.

I am going to wait for a future post to cover composables, though.

https://jws.news/2023/learning-the-composition-api/

joe, to random

I post a lot of sample code on this blog. My CodePen is full of little snippets of this and that. Quite often, these snippets need data to do something useful. A good example of that is my Lit example from this past week. Coming up with that data can be complicated, though. That is why I created a site for assorted test data. If you want to have a little rummage through it, I also made the git repository public for the site. While I was at it, I also put it behind a Cloudflare proxy to speed it up a little.

Have any questions, comments, etc? Please feel free to drop a comment below.

https://jws.news/2024/i-have-been-making-some-infrastructure-impovements/

joe, to fediverse

Last year, I created my own Mastodon instance, started using Pixelfed, and even made it possible to follow this blog from the fediverse. The big benefit of this ecosystem over something like Twitter, Facebook, Tumblr, etc is that every service that is part of the fediverse can be followed by everything else on the fediverse. You can follow a kbin account from a GoToSocial account or follow a WordPress blog from a Misskey account. You don’t need to have an account on a particular service in order to follow someone who uses that service. Over time, I started wondering what would be necessary to make my own applications part of the fediverse. How hard could it be?

Earlier this year, David Neal wrote about how to use the WebFinger protocol to create a Mastodon alias, and I noticed it from something that Ray Camden tooted. I set a webfinger file up on jws.dev and now, if you search for @joe or mention @joe in a toot, it will forward to @joe.
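For context, a webfinger file is just a small JSON document served from /.well-known/webfinger. A minimal one pointing an alias at a Mastodon actor looks roughly like this (the exact URLs are illustrative, not a copy of my file):

```json
{
  "subject": "acct:joe@jws.dev",
  "aliases": ["https://toot.works/users/joe"],
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://toot.works/users/joe"
    }
  ]
}
```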

https://i0.wp.com/jws.news/wp-content/uploads/2023/12/Screenshot-2023-11-19-at-7.03.18-AM.png?resize=408%2C412&ssl=1

That left me wondering what would be required to build something that behaves not as an alias but as its own instance. If you visit verify.funfedi.dev and look at the API endpoints for a known-good Mastodon account, you will see that there is a webfinger file (similar to the one that I used above), an actor URI (like https://toot.works/users/joe), an inbox (like https://toot.works/users/joe/inbox), and an outbox (like https://toot.works/users/joe/outbox). Replicating that shouldn’t be too hard.

This bit of node should give you https://social.joe.workers.dev/joe, https://social.joe.workers.dev/.well-known/webfinger?resource=acct:joe@social.joe.workers.dev, and https://social.joe.workers.dev/joe/outbox along with https://social.joe.workers.dev/joe/followers to display people following the account, https://social.joe.workers.dev/joe/inbox to pretend to be an inbox, and one sample toot in the outbox.
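The actor document behind that first URL is just JSON with a handful of required fields. A hypothetical helper (the function name is mine, not from the worker) that builds the minimal shape for the endpoints above might look like:

```javascript
// Build a minimal ActivityPub actor document for a single-user server.
// Only the fields needed for basic discovery are included here.
function buildActor(domain, username) {
  const base = `https://${domain}/${username}`;
  return {
    '@context': ['https://www.w3.org/ns/activitystreams'],
    id: base,
    type: 'Person',
    preferredUsername: username,
    inbox: `${base}/inbox`,
    outbox: `${base}/outbox`,
    followers: `${base}/followers`,
  };
}

const actor = buildActor('social.joe.workers.dev', 'joe');
console.log(actor.inbox); // https://social.joe.workers.dev/joe/inbox
```

A real actor also needs a publicKey block so that other instances can verify signed requests, which is where most of the actual work in the worker lives.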

https://mstdn.social/@joe@social.joe.workers.dev

With these few endpoints, you have a functional Mastodon-compatible one-user server. You won’t be able to follow this account from every instance, but I think that is because I am running it from Cloudflare Workers and a lot of firewalls block it.

Next, I wrote https://fediverse.joe.workers.dev (you can see the result at https://codepen.io/steinbring/full/ZEwRmVJ). The node script behind that API takes my PixelFed, Mastodon, and WordPress activity from the past 30 days and interleaves the results so that they are in chronological order from newest to oldest. With all of my fediverse content available from a central API, it would be simple enough to update the ActivityPub instance to use the result instead of the hard-coded “orderedItems” array. I think that doing it as a boost instead of a new toot would be the better path, though.
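The interleaving step itself is simple. A sketch of it (assuming each item carries a published timestamp; the real APIs use their own field names, and this ignores paging and auth entirely):

```javascript
// Merge several per-service feeds into one list, newest first.
function interleaveFeeds(...feeds) {
  return feeds
    .flat()
    .sort((a, b) => new Date(b.published) - new Date(a.published));
}

const merged = interleaveFeeds(
  [{ source: 'mastodon', published: '2023-12-01T10:00:00Z' }],
  [{ source: 'pixelfed', published: '2023-12-02T09:00:00Z' }],
  [{ source: 'wordpress', published: '2023-11-30T08:00:00Z' }]
);

console.log(merged.map((item) => item.source)); // [ 'pixelfed', 'mastodon', 'wordpress' ]
```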

You could also use Firebase Cloud Firestore to drive the “orderedItems” array from the instance. Adding a new firestore record would add a new toot to the fediverse.

https://jws.news/2023/writing-my-own-minimally-viable-mastodon-compatible-activitypub-instance/

joe, to fediverse

Before Twitter collapsed, a lot of my online identity was connected to my account on the service. A lot of my content was also trapped inside of silos that were controlled by corporations. I attempted in 2018 to reduce my dependency on things like Twitter and Instagram by using IFTTT to mirror content to things like Mastodon, Flickr, and Tumblr. After the collapse, I made Mastodon, Pixelfed, and this blog more central to my online identity. These services all use ActivityPub and services that federate with each other are the future of the social web.

That said, I still cross-post content from the fediverse to Flickr, Tumblr, and Linkedin. Both Flickr and Tumblr are reportedly considering adding federation. Until then, I figure that it helps those who are trapped in those ecosystems. I have also experimented with cross-posting from the blog to Bluesky but the API isn’t simple enough to make it feasible. It looks like Threads is adding federation in the next year or two. I hope that eventually, federation becomes a basic expectation.

If you find yourself seeking services that prioritize user control and data ownership but are unsure how to begin, feel free to share your thoughts or questions in the comments.

https://jws.news/2024/how-i-future-proofed-my-online-identity/

joe, to vuejs

This past autumn, I started playing around with the Composition API, and at the October 2023 Hack and Tell, I put that knowledge into writing a “Job Tracker“. The job tracker used Vuex and Firebase Authentication to log a user in using their Google credentials. With const store = useStore() on your view, you can do something like Welcome, {{user.data.displayName}} but using this technique you can also use …

const LoginWithGoogle = async () => {
  try {
    await store.dispatch('loginWithGoogle')
    router.push('/')
  }
  catch (err) {
    error.value = err.message
  }
}

… to kick off the authentication of the user. I want to use it to finally finish the State Parks app, but I also want to use Pinia instead of Vuex, I want the resulting app to be a PWA, and I want to allow the user to log in with more than just Google credentials. So, this past week, I wrote my “Offline Vue Boilerplate”. It is meant to be a starting point for the State Parks app and a few other apps that I have kicking around in my head. I figured that this week, we should go over what I wrote.

Overview

The whole point of this “boilerplate” application was for it to be a common starting point for other applications that use Firebase for authentication and a NoSQL database. It uses:

- Vue.js (with the Composition API)
- Pinia for state management
- Firebase Authentication for single sign-on
- Firebase Cloud Firestore for the database
- vite-plugin-pwa to make it installable as a PWA

I was using a lot of this stack for work projects, also. It is nice because Firebase is cheap and robust and you don’t need to write any server-side code. Hosting of the front-end code is “cheap-as-chips”, also. The Job Tracker is hosted using Firebase Hosting (which is free on the spark plan) and The Boilerplate App is hosted using Render, which is just as free.

Authentication

I am most proud of how I handled authentication with this app. Here is what the Pinia store looks like:

From your view, you can access {{ user }} to get to the values that came out of the single sign-on (SSO) provider (the user’s name, email address, picture, etc). For this app, I used Google and Microsoft but Firebase Authentication offers a lot of options beyond those two.

https://i0.wp.com/jws.news/wp-content/uploads/2024/03/Screenshot-2024-03-04-at-11.31.08%E2%80%AFAM.png?resize=1024%2C588&ssl=1

Adding Google is pretty easy (after all, Firebase is owned by Google) but adding Microsoft was more difficult. To get keys from Microsoft, you need to register your application with the Microsoft identity platform. Unfortunately, the account that you use for that must be an Azure account with at least Cloud Application Administrator privileges, and it cannot be a personal account; the account must be associated with an Entra tenant. This means that you need to spin up an Entra tenant to register the application and get the keys.

The third SSO provider that I was tempted to add was Apple but to do that, you need to enroll in the Apple Developer program, which is not cheap.

Firebase Cloud Firestore

I have become a big fan of Firebase Cloud Firestore over the years (at least for situations where a NoSQL database makes sense). The paradigm that I started playing around with last year involved putting the Firebase CRUD functions in the composable.

Here is an example <script> block from the Job Tracker:

The author of the view doesn’t even need to know that Firebase Cloud Firestore is part of the stack. You might wonder how security is handled.

Here is what the security rule looks like behind the job tracker:

The rule is structured so that any authenticated user can create a new record but users can only read, delete, or update if they created the record.
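The original rule isn’t embedded here, but a Firestore rule with that behavior looks roughly like the following. The jobs collection name and the userId field are assumptions for the sketch, not copied from the app:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /jobs/{jobId} {
      // Any signed-in user can create a record
      allow create: if request.auth != null;
      // Only the user who created a record can read, update, or delete it
      allow read, update, delete: if request.auth != null
        && request.auth.uid == resource.data.userId;
    }
  }
}
```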

How I made it into a Progressive Web App (PWA)

This is the easiest bit of the whole process. You just need to add vite-plugin-pwa to the dev dependencies and let it build your manifest. You do need to supply icons for it to use but that’s easy enough.
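A minimal vite.config.js for that might look something like this; the icon file names and manifest values are placeholders, not the boilerplate’s actual config:

```javascript
// vite.config.js — sketch of wiring up vite-plugin-pwa
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';
import { VitePWA } from 'vite-plugin-pwa';

export default defineConfig({
  plugins: [
    vue(),
    VitePWA({
      registerType: 'autoUpdate', // refresh the service worker automatically
      manifest: {
        name: 'Offline Vue Boilerplate',
        short_name: 'Boilerplate',
        icons: [
          { src: 'pwa-192x192.png', sizes: '192x192', type: 'image/png' },
          { src: 'pwa-512x512.png', sizes: '512x512', type: 'image/png' },
        ],
      },
    }),
  ],
});
```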

The Next Steps

I am going to be using this as a stepping-stone to build 2-3 apps but you can look forward to a few deep-dive posts on the stack, also.

Have any questions, comments, etc? Please feel free to drop a comment, below.

[ Cover photo by Barn Images on Unsplash ]

https://jws.news/2024/wrote-a-thing-with-vue-and-firebase/

joe, to random

Web Components are a set of technologies that allow you to create reusable custom elements with functionality encapsulated away from the rest of your code. This allows you to define something like <js-modal>Content</js-modal> and then attach behavior and functionality to it.

In this post, I want to explore how web components do what they do.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

In the above pen, there are two examples. The first one (<js-description>Content</js-description>) uses a custom element (defined in JavaScript using customElements.define()). It is definitely useful but if you look at the second example (<js-gravatar>Content</js-gravatar>), there is now a <template> element that allows you to define what is within the custom element.

I plan on building on some of these concepts in a later post. Have a question, comment, etc? Feel free to drop a comment, below.

https://jws.news/2020/web-components-101/

joe, (edited ) to javascript

Earlier this week, when I wrote about how to build an autocomplete using Vue.js, it was less about exploring how to do it and more about documenting recent work that used Vuetify. I wanted to use today’s post to go in the other direction. Recently, I discovered the value of using Lit when writing Web Components. I wanted to see if we could go from the HTML / CSS example to a proper web component.

First crack at it

Lit is powerful. You can do a lot with it. Let’s start with a rewrite of Tuesday’s final example to one that uses just Lit.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

The first thing that we do in this is to import LitElement, html, and css from a CDN. Our CountySelector class extends LitElement, and customElements.define('county-selector', CountySelector) at the bottom of the page is what turns our class into a tag. The static styles definition is how you style the tag. You will notice that there aren’t any styles outside of that. The markup is all defined in render() near the bottom. The async fetchCounties() method gets the list of counties from the data.jws.app site that I created last week.

This works, but web components are supposed to be reusable and this isn’t reusable enough.

How do we increase reusability?

As you might remember from last month’s post about web components, you can use properties with a web component. This means that the placeholder and the options for the autocomplete can be passed in as properties in the markup.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

You will notice that the big difference between this version and the first one is that we dropped the API call and replaced it with a countyList property that defines the options. We can do better, though.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

In this next version, we eliminate all explicit references to counties since a person might presumably want to use the component for something other than counties. You might want to use it to prompt a user for ice cream flavors or pizza toppings.

How do you use Vue with a web component?

Unfortunately, you aren’t going to be able to use something like v-model with a web component. There are other ways to bind to form inputs, though. Let’s take a look.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

In the above example, optionsList and selectedOption are defined as refs. The ref object is mutable (you can assign new values to .value) and it is reactive (any read operations to .value are tracked, and write operations will trigger associated effects). The options list can be passed into the web component using the :optionsList property. You might notice, though, that the example is using .join(', ') to convert the array to a comma-delimited list. That is because you cannot pass an array directly into a web component. That is likely going to be the subject of a future article. You might also notice that it triggers a custom event when you click on a suggestion and on blur. The dispatchEvent() method sends an event to the object, invoking the affected event listeners in the appropriate order. That should trigger updateSelectedOption when you select a county or finish typing one that isn’t in the list.
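The dispatch-and-listen handshake can be sketched without any DOM at all. The event name here is an assumption, and in the browser the component would typically use CustomEvent; a tiny Event subclass keeps the sketch runnable anywhere EventTarget exists:

```javascript
// Minimal Event subclass that carries a payload, like CustomEvent's `detail`.
class OptionEvent extends Event {
  constructor(type, detail) {
    super(type);
    this.detail = detail;
  }
}

// Stand-in for the autocomplete element; a real custom element is also an EventTarget.
const component = new EventTarget();

let selectedOption = '';
component.addEventListener('optionSelected', (event) => {
  selectedOption = event.detail; // what updateSelectedOption would receive
});

// Inside the component, clicking a suggestion would run something like:
component.dispatchEvent(new OptionEvent('optionSelected', 'Dane'));

console.log(selectedOption); // "Dane"
```

This is the whole contract: the component doesn’t know or care that Vue is on the other side; it just fires events, and the framework wires a listener to them.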

So, what do you think? Do you have any questions? Feel free to drop a comment, below.

https://jws.news/2024/how-to-implement-an-autocomplete-using-web-components/

joe, (edited ) to webdev

Have you ever stumbled upon those form fields that suggest options in a drop-down as you type, like when you’re entering a street address? It turns out that making those is not as difficult as you would think! Today, I’m gonna walk you through three cool ways to pull it off using Vue.js. Let’s dive in!

Vuetify

If you are a Vue developer, you have likely used Vuetify at some point. It is an open-source UI library that offers Vue Components for all sorts of things. One of those things just happens to be Autocompletes.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

Last week, I spoke about creating a repository of data for coding examples. The first one is a list of counties in the state of Wisconsin. In this example, the values from the API are stored in a counties array, the value that you entered into the input is stored in a selectedCounty variable, and the fetchCounties method fetches the values from the API. Thanks to the v-autocomplete component, it is super easy using Vuetify.

Shoelace

Shoelace (now known as Web Awesome) doesn’t have a built-in autocomplete element, but there is a stretch goal on their Kickstarter to add one. That means that we need to build the functionality ourselves.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

Our Shoelace version has a filteredCounties variable so that we can control what is shown in the suggestions and a selectCounty method to let the user click on one of the suggestions.

Plain HTML and CSS

We have already established that Shoelace doesn’t have an autocomplete, but neither Bulma nor Bootstrap has one either. So, I figured that we would try a pure HTML and CSS autocomplete.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

This is very similar to our Shoelace example but with some extra CSS on the input. You might be wondering about that autocomplete attribute on the input. It is a different type of autocomplete. The autocomplete attribute specifies if browsers should try to predict the value of an input field or not. You still need to roll your own for the suggestions.

https://jws.news/2024/how-to-impliment-an-autocomplete-using-vue/

joe, to javascript

In last week’s post, I said, “That is because you cannot pass an array directly into a web component”. I said that I might take a moment at some point to talk about how you could do that. Well, today is the day we are doing that.

Let’s start with a basic Lit example.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

You will notice that the ArrayList class has an items property that is an array type. Lit won’t let you do something like <array-list items = ['Item 1', 'Item 2', 'Item 3']></array-list> but it is fine with you passing it in using JavaScript. That means that myList.items = ['Item 1', 'Item 2', 'Item 3']; does the job fine.
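If you do need to go through markup, the usual workaround is serialization: attributes are always strings, so the array has to be JSON-encoded on the way in and parsed back out inside the component. A plain-JavaScript sketch of the two routes (the element object here is a stand-in, not a real custom element):

```javascript
const items = ['Item 1', 'Item 2', 'Item 3'];

// Attribute route: markup can only carry strings, so the array must be
// serialized to JSON and parsed back inside the component.
const attributeValue = JSON.stringify(items); // what would sit in the markup
const parsed = JSON.parse(attributeValue);    // what the component recovers

// Property route: assigning the property keeps the actual array object.
const element = {}; // stand-in for a real custom element instance
element.items = items;

console.log(parsed);                  // [ 'Item 1', 'Item 2', 'Item 3' ]
console.log(element.items === items); // true: same array, no copy
```

The property route is cheaper and preserves object identity, which is why frameworks prefer it when they can reach the element from script.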

So, how can we do that with a Vue.js array?

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

It is the same thing except you need to set the value of the list in an onMounted() lifecycle hook.

What about if we want to do it with React?

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

With React, you just set the value in a useEffect() React hook.

https://jws.news/2024/how-to-pass-an-array-as-a-property-into-a-web-component/

joe, to transit

The gym I attend offers group fitness classes but I have never had enough confidence to sign up for one. Back in January, I saw the announcement of a six-week “Beginner Cycling” class and I immediately signed up for it. The pitch was that it was a cycling class for people who had never taken a cycling class before. I liked the idea of everyone in the class being in the same place as me. This past Sunday was the final week of the class and later that day, I signed up for a 6 pm Cycling class for later this week.

I enjoy the regular expectation that I need to be at the gym at a given time. It helps. I’m hoping that I can keep this up in the long term.

https://jws.news/2024/why-i-took-a-beginner-cycling-class-at-the-gym/

joe, (edited ) to food

There is a Chili Cook-Off at work tomorrow and I agreed to bring something that I plan to call <Chili><Con Carne /></Chili>. So, what is the plan? I plan to start with a nice stout, thick-ground beef, 1 chopped yellow onion, a habanero or two, and a few chopped-up strips of bacon until the meat is browned and the onion is soft. Next, I’ll stir in chopped tomatoes, tomato paste, and garlic. After that, I’ll add parsley, basil, chili powder, paprika, cayenne pepper, oregano, salt, and pepper. At that point, I’ll add the kidney beans and then thicken it as needed with flour and cornmeal.

What do you think of the plan?

https://jws.news/2023/the-2023-bader-rutter-chili-cook-off/

joe, to javascript

We briefly played with web components once before on here but it has been a few years and I wanted to go a little deeper. Web components are a suite of different technologies that allow developers to create custom, reusable, encapsulated HTML tags for use in web pages and web apps. Essentially, they let you create your own HTML elements with their own functionality, independent of the rest of your codebase.

Let’s start by taking a look at a very basic example.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

In this example, the MyGreeting class extends HTMLElement to create a custom element. The constructor then creates a shadow DOM for encapsulation and adds a <span> element with a greeting message (which uses the name attribute for customization). The customElements.define method then registers the custom element with the browser, associating it with the tag <my-greeting>.

So, what can we do with this? You might have heard of Shoelace / Web Awesome. That is just a collection of cool web components. Let’s take a look at a quick example.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

As you can see above, you just include the script that activates Shoelace’s autoloader, which then registers components on the fly as you use them. Let’s look at a slightly more complicated example.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

If you flip open the JavaScript panel, you will see that it still needs event listeners for the open and close buttons, but it is not as complex as if you were writing this from scratch.

https://jws.news/2024/playing-with-web-components/

joe, to bluesky

Four months ago, I created a Bluesky account to play around with the API and managed to create a simple node script to post a status to it. I wasn’t able to figure out how to get it to work with IFTTT, though. This week, I spun up a Pipedream workflow to try to post an announcement when a new blog post goes up.

https://i0.wp.com/jws.news/wp-content/uploads/2024/03/Screenshot-2024-03-27-at-7.33.56%E2%80%AFPM.png?resize=1024%2C522&ssl=1

If you wanted to replicate what I have so far, you should be able to set up your trigger like this and then the second step just looks like …

The only issue is that Bluesky requires you to specify exactly where in the string the URIs are and I don’t think that I can be bothered to figure out how to go about that at the moment. Until I figure that out, folks will need to copy and paste URLs instead of clicking on them.

https://jws.news/2024/this-blog-has-a-bluesky-account-with-a-few-issues/

joe, to random

I am going to be at CypherCon today and tomorrow. You should come and say hi if you are there, too.

https://jws.news/2024/i-am-going-to-be-at-cyphercon-today-and-tomorrow/

joe, (edited ) to javascript

Earlier this week, we started looking at React and I figured that for today’s post, we should take a look at the useEffect (https://react.dev/reference/react/useEffect) and useMemo (https://react.dev/reference/react/useMemo) React Hooks. Hooks are functions that let you “hook into” React state and lifecycle features from function components. In yesterday’s post, we used useState (https://codepen.io/steinbring/pen/GRLoGob/959ce699f499a7756cf6528eb3923f75). That is another React Hook. The useState Hook allows us to track state in a function component (not unlike how we used Pinia or Vuex with Vue.js).

The useEffect React hook lets you perform side effects in functional components, such as fetching data, subscribing to a service, or manually changing the DOM. It can be configured to run after every render or only when certain values change, by specifying dependencies in its second argument array. The useMemo React hook memoizes expensive calculations in your component, preventing them from being recomputed on every render unless specified dependencies change. This optimization technique can significantly improve performance in resource-intensive applications by caching computed values.

Let’s take a look at a quick useEffect, first. For the first demo, we will use useEffect and useState to tell the user what the current time is.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

Let’s walk through what we have going on here. The App() function is returning JSX containing <p>The current time is {currentTime}</p> and currentTime is defined by setCurrentTime. The code block useEffect(() => {}); executes whenever the state changes and can be used to do something like fetching data or talking to an authentication service. It also fires when the page first renders. So, what does that empty dependency array (,[]) do in useEffect(() => {},[]);? It makes sure that useEffect only runs one time instead of running whenever the state changes.

We can get a little crazier from here by incorporating the setInterval() method.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

In this example, it still runs useEffect(() => {},[]); only once (instead of whenever the state changes) but it uses setInterval() inside of useEffect to refresh the state once every 1000 milliseconds.

Let’s take a look at another example.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

In this one, we have three form elements: a number picker for “digits of pi”, a color picker for changing the background, and a read-only textarea field that shows the value of π to the precision specified in the “digits of pi” input. With no dependency array on useEffect(() => {});, whenever either “Digits of Pi” or the color picker change, useEffect is triggered. If you open the console and make a change, you can see how it is triggered once when you change the background color and twice when you change the digits of pi. Why? It does that because when you change the number of digits, it also changes the value of pi and you get one execution per state change.

So, how can we cut down on the number of executions? That is where useMemo() comes in. Let’s take a look at how it works.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

In this revision, instead of piValue having a state, it is “memoized” and the value of the variable only changes if the value of digits changes. In this version, we are also adding a dependency array to useEffect() so that it only executes if the value of color changes. Alternatively, you could also just have two useEffect() hooks, one per dependency. Let’s take a look at that.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

If you throw open your console and change the two input values, you will see that it is no longer triggering useEffect() twice when changing the number of digits.
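Outside of React, the idea behind useMemo() can be sketched as a cache keyed on the dependency list; the computation only reruns when a dependency changes. This is a simplification for illustration, not React’s internals:

```javascript
// A minimal memoizer: recompute only when the dependency list changes.
function createMemo(compute) {
  let lastDeps = null;
  let lastValue;
  let calls = 0;
  return {
    get(deps) {
      const changed = lastDeps === null ||
        deps.some((d, i) => !Object.is(d, lastDeps[i]));
      if (changed) {
        lastValue = compute(...deps);
        lastDeps = deps;
        calls++;
      }
      return lastValue;
    },
    get computeCount() { return calls; },
  };
}

const piMemo = createMemo((digits) => Math.PI.toFixed(digits));
piMemo.get([3]); // computes
piMemo.get([3]); // cached, no recompute
piMemo.get([4]); // dependency changed, recomputes
console.log(piMemo.computeCount); // 2
```

Asking for the same digits twice hits the cache; only a changed dependency pays the cost of recomputing.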

Have any questions, comments, etc? Feel free to drop a comment, below.

https://jws.news/2024/exploring-useeffect-and-usememo-in-react/

joe,

The actual blog post is on a WordPress blog and it is publishing to the Fediverse from there. There is some sort of magic that the (first-party) WordPress plugin is doing, but I couldn’t tell you what it is.

joe, (edited ) to javascript

Over the years, we have looked at jQuery, AngularJS, Rivets, and Vue.js. I figured that it was time to add React to the list. Facebook developed React for building user interfaces. The big difference between Vue and React is that React uses one-way binding whereas Vue uses two-way binding. With React, data flows downward from parent components to child components; to communicate data back up, you typically pass callback props down or reach for a state management library like Redux. React is also a library whereas Vue.js is a framework, so React is closer to Rivets. You can run React with or without JSX and write it with or without a framework like Next.js.

Let’s look at how binding works in React and how it compares to Vue. In Vue, if we wanted to bind a form input to a text value, it would look like this:

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

… and to do the same thing in React, it would look like this:

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

You will notice that the React version uses the useState React Hook. Where the Vue version uses const inputText = ref('Blah Blah Blah'); for reactive state management, React uses const [inputText, setInputText] = React.useState('Blah Blah Blah');. Also, Vue has two-way binding via the v-model directive, but in React the text updates when the state changes while the state does not automatically update when the input is edited. To deal with this, you manually handle updates to the input’s value via an onChange event; the developer is responsible for triggering state changes through the state updater function. Another big difference is that Vue uses a template syntax that stays close to HTML, while React uses JSX to define the component’s markup directly within JavaScript.
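The useState contract (read the current value, write only through the setter) can be modeled in plain JavaScript. This toy sketch is not React’s implementation, but it shows why every update has to flow through the updater function:

```javascript
// Toy version of the useState contract: a getter plus an updater.
function createState(initial) {
  let value = initial;
  const get = () => value;
  const set = (next) => {
    // Like React's setter, accept either a plain value or an updater function
    value = typeof next === 'function' ? next(value) : next;
  };
  return [get, set];
}

const [inputText, setInputText] = createState('Blah Blah Blah');
console.log(inputText());            // "Blah Blah Blah"
setInputText('Hello');               // what an onChange handler would do
console.log(inputText());            // "Hello"
setInputText((prev) => prev + '!');  // functional update form
console.log(inputText());            // "Hello!"
```

Since nothing but the setter can touch the value, the data flow stays one-directional and predictable.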

Let’s take a look at one more example

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

This example is very similar to what we had before, but now there are child components: an <InputForm /> whose onChange handler calls setInputText(event.target.value), and a <TextDisplay text={inputText} /> that renders the result. This approach promotes code reusability. By encapsulating the input field and display logic into separate components, these pieces can easily be reused throughout the application or even in different projects, reducing duplication and fostering consistency. It is kind of the React way. Each component is responsible for its own functionality: InputForm manages the user input while TextDisplay is solely concerned with presenting text. This separation of concerns makes the codebase easier to navigate, understand, and debug as the application grows in complexity.

Have a question, comment, etc? Feel free to drop a comment.

https://jws.news/2024/it-is-time-to-play-with-react/

#JavaScript #JSX #React #VueJs

joe, to javascript

I have been spending a lot of the past month working on a Vue / Firebase side project, so this is going to be a fairly short post. I recently discovered that there are modifiers for v-model in Vue. I wanted to take a moment to cover what they do.

.lazy

By default, v-model syncs on every input event, so changes are reflected the moment you make them. With this modifier, it syncs after change events instead (for a text input, that typically means when the field loses focus).

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

.number

This modifier uses parseFloat() to cast the value to a number. It is a good one to use with inputs that already have type="number".

See the Pen by Joe Steinbring (@steinbring)
on CodePen.

.trim

This one is pretty straightforward. It simply trims any extra whitespace from the string.

See the Pen by Joe Steinbring (@steinbring)
on CodePen.
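Roughly speaking, the .number and .trim modifiers amount to running the raw input value through a cast before committing it to state. Plain-JavaScript equivalents (a sketch of the behavior, not Vue’s source):

```javascript
// Approximate plain-JS equivalents of v-model's .number and .trim casts.
function castNumber(raw) {
  const parsed = parseFloat(raw);
  // Vue keeps the original string if it can't be parsed as a number
  return Number.isNaN(parsed) ? raw : parsed;
}

function castTrim(raw) {
  return raw.trim();
}

console.log(castNumber('42.5px'));  // 42.5
console.log(castNumber('abc'));     // "abc" (unparseable, kept as-is)
console.log(castTrim('  hello  ')); // "hello"
```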

https://jws.news/2024/v-model-modifiers-in-vue/

joe, to random

This past month, I was talking about how I spent $528 to buy a machine with enough guts to run more demanding AI models in Ollama. That is good and all but if you are not on that machine (or at least on the same network), it has limited utility. So, how do you use it if you are at a library or a friend’s house? I just discovered Tailscale. You install the Tailscale app on the server and all of your client devices and it creates an encrypted VPN connection between them. Each device on your “tailnet” has 4 addresses you can use to reference it:

  • Machine name: my-machine
  • FQDN: my-machine.tailnet.ts.net
  • IPv4: 100.X.Y.Z
  • IPv6: fd7a:115c:a1e0::53

If you remember Hamachi from back in the day, Tailscale is kind of its spiritual successor.

https://i0.wp.com/jws.news/wp-content/uploads/2024/03/Screenshot-2024-03-04-at-2.37.06%E2%80%AFPM.png?resize=1024%2C592&ssl=1

There is no need to poke holes in your firewall or expose your Ollama install to the public internet. There is even a client for iOS, so you can run it on your iPad. I am looking forward to playing around with it some more.

https://jws.news/2024/joe-discovered-tailscale/

joe, to llm

Back in December, I paid $1,425 to replace my MacBook Pro to make my LLM research at all possible. That had an M1 Pro CPU and 32GB of RAM, which (as I said previously) is kind of a bare minimum spec to run a useful local AI. I quickly wished I had enough RAM to run a 70B model, but you can’t upgrade Apple products after the fact, and a 70B model needs 64GB of RAM. That led me to start looking for a second-hand Linux desktop that can handle a 70B model.

I ended up finding a four-year-old HP Z4 G4 workstation with a Xeon W-2125 processor, 128 GB of DDR4-2666 RAM, a 512 GB Samsung NVMe SSD, and an NVIDIA Quadro P4000 GPU with 8 GB of GDDR5 memory. I bought it before Ollama released their Windows preview, so I planned to throw the latest Ubuntu LTS on it.

Going into this experiment, I was expecting that Ollama would thrash the GPU and the RAM but would use the CPU sparingly. I was not correct.

This is what the activity monitor looked like when I asked various models to tell me about themselves:

Mixtral

An ubuntu activity monitor while running mixtral

Llama2:70b

An ubuntu activity monitor while running Llama2:70b

Llama2:7b

An ubuntu activity monitor while running llama2:7b

Codellama

An ubuntu activity monitor while running codellama

The Xeon W-2125 has 4 cores and 8 threads, so I think that CPU1 through CPU8 are threads. My theory going into this was that the models would be loaded into memory and the GPU would do all of the processing; the CPU would only be needed to serve the results back to the user. Instead, it looks like the full load is going to the CPU. For a moment, I thought that the 8 GB of video RAM was the limitation, which is why I tried running a 7B model for one of the tests. I am still not convinced that Ollama is even trying to use the GPU.

A screenshot of the "additional drivers" screen in ubuntu

I am using a proprietary Nvidia driver for the GPU but maybe I’m missing something?

I was recently playing around with Stability AI’s Stable Cascade. I might need to run those tests on this machine to see what the result is. It may be an Ollama-specific issue.

Have any questions, comments, or concerns? Please feel free to drop a comment, below. As a blanket warning, all of these posts are personal opinions and do not reflect the views or ethics of my employer. All of this research is being done off-hours and on my own dime.

https://jws.news/2024/hp-z4-g4-workstation/

#Llama2 #LLM #Mac #Ollama #Ubuntu

joe, to ai

LLaVA (Large Language-and-Vision Assistant) was updated to version 1.6 in February. I figured it was time to look at how to use it to describe an image in Node.js. LLaVA 1.6 is an advanced vision-language model created for multi-modal tasks, seamlessly integrating visual and textual data. Last month, we looked at how to use the official Ollama JavaScript Library. We are going to use the same library, today.

Basic CLI Example

Let’s start with a CLI app. For this example, I am using my remote Ollama server but if you don’t have one of those, you will want to install Ollama locally and replace const ollama = new Ollama({ host: 'http://100.74.30.25:11434' }); with const ollama = new Ollama({ host: 'http://localhost:11434' });.
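The heart of the CLI app is reading the image from disk, base64-encoding it, and handing it to the llava model through the library’s chat() method (the Ollama API accepts images as base64 strings). Here is a sketch of the request-building step; the prompt text is just an example:

```javascript
// Build the chat request the Ollama JavaScript library expects
// for a vision model: the image travels as a base64 string.
function buildDescribeRequest(model, imageBuffer) {
  return {
    model,
    messages: [{
      role: 'user',
      content: 'Describe this image.',
      images: [imageBuffer.toString('base64')],
    }],
  };
}

// Usage sketch (assuming `ollama` is a configured client and fs is node:fs/promises):
//   const imageBuffer = await fs.readFile(process.argv[2]);
//   const response = await ollama.chat(buildDescribeRequest('llava', imageBuffer));
//   console.log(response.message.content);
console.log(buildDescribeRequest('llava', Buffer.from('demo')).messages[0].images[0]); // "ZGVtbw=="
```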

To run it, first run npm i ollama and make sure that you have "type": "module" in your package.json. You can run it from the terminal by running node app.js <image filename>. Let’s take a look at the result.

Its ability to describe an image is pretty awesome.

Basic Web Service

So, what if we wanted to run it as a web service? Running Ollama locally is cool and all but it’s cooler if we can integrate it into an app. If you npm install express to install Express, you can run this as a web service.

The web service takes posts to http://localhost:4040/describe-image with a binary body that contains the image that you are trying to get a description of. It then returns a JSON object containing the description.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-18-at-1.41.20%E2%80%AFPM.png?resize=1024%2C729&ssl=1

Have any questions, comments, etc? Feel free to drop a comment, below.

https://jws.news/2024/how-can-you-use-llava-and-node-js-to-describe-an-image/

joe, (edited ) to ai

Back in January, we started looking at AI and how to run a large language model (LLM) locally (instead of just using something like ChatGPT or Gemini). A tool like Ollama is great for building a system that uses AI without dependence on OpenAI. Today, we will look at creating a Retrieval-augmented generation (RAG) application, using Python, LangChain, Chroma DB, and Ollama. Retrieval-augmented generation is the process of optimizing the output of a large language model, so it references an authoritative knowledge base outside of its training data sources before generating a response. If you have a source of truth that isn’t in the training data, it is a good way to get the model to know about it. Let’s get started!

Your RAG system will need a main model (like llama3 or mistral), an embedding model (like mxbai-embed-large), and a vector database. The vector database contains relevant documentation to help the model answer specific questions better. For this demo, our vector database is going to be Chroma DB. You will need to “chunk” the text that you are feeding into the database. Let’s start there.

Chunking

There are many ways of choosing the right chunk size and overlap, but for this demo I am just going to use a chunk size of 7,500 characters and an overlap of 100 characters. I am also going to use LangChain’s CharacterTextSplitter to do the chunking. The overlap means that the last 100 characters of each chunk are repeated at the start of the next one.
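The splitting logic itself is simple. Here it is sketched in plain JavaScript for illustration (LangChain’s CharacterTextSplitter additionally splits on separators, which this skips):

```javascript
// Fixed-size chunking with overlap: each chunk repeats the tail
// of the previous one so context isn't lost at the boundaries.
function chunkText(text, chunkSize, overlap) {
  const chunks = [];
  let start = 0;
  while (true) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
    start += chunkSize - overlap; // step forward, keeping `overlap` chars shared
  }
  return chunks;
}

console.log(chunkText('abcdefghij', 4, 2)); // ["abcd", "cdef", "efgh", "ghij"]
```

Notice how each chunk’s last two characters reappear at the start of the next chunk; that shared context is what the overlap buys you.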

The Vector Database

A vector database is a type of database designed to store, manage, and manipulate vector embeddings. Vector embeddings are representations of data (such as text, images, or sounds) in a high-dimensional space, where each data item is represented as a dense vector of real numbers. When you query a vector database, your query is transformed into a vector of real numbers. The database then uses this vector to perform similarity searches.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-08-at-2.36.49%E2%80%AFPM.png?resize=665%2C560&ssl=1

You can think of it as being like a two-dimensional chart with points on it. One of those points is your query. The rest are your database records. What are the points that are closest to the query point?
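“Closest” is usually measured with cosine similarity (or a distance metric like Euclidean distance). A sketch in JavaScript for illustration:

```javascript
// Cosine similarity: 1 means same direction (very similar),
// 0 means orthogonal (unrelated).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const query = [1, 0];
console.log(cosineSimilarity(query, [2, 0])); // 1 (same direction: closest match)
console.log(cosineSimilarity(query, [0, 3])); // 0 (orthogonal: unrelated)
```

Real embeddings have hundreds or thousands of dimensions rather than two, but the comparison works the same way.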

Embedding Model

To do this, you can’t just use an Ollama model. You also need an embedding model. There are three available to pull from the Ollama library as of this writing. For this demo, we are going to use nomic-embed-text.

Main Model

Our main model for this demo is going to be phi3. It is a 3.8B-parameter model that was trained by Microsoft.

LangChain

You will notice that today’s demo is heavily using LangChain. LangChain is an open-source framework designed for developing applications that use LLMs. It provides tools and structures that enhance the customization, accuracy, and relevance of the outputs produced by these models. Developers can leverage LangChain to create new prompt chains or modify existing ones. LangChain pretty much has APIs for everything that we need to do in this app.

The Actual App

Before we start, you are going to want to pip install tiktoken langchain langchain-community langchain-core. You are also going to want to ollama pull phi3 and ollama pull nomic-embed-text. This is going to be a CLI app. You can run it from the terminal like python3 app.py "<Question Here>".

You also need a sources.txt file containing the URLs of things that you want to have in your vector database.

So, what is happening here? Our app.py file reads sources.txt to get a list of URLs for news stories from Tuesday’s Apple event. It then uses WebBaseLoader to download the pages behind those URLs, CharacterTextSplitter to chunk the data, and Chroma to create the vectorstore. Finally, it creates and invokes rag_chain.

Here is what the output looks like:

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-08-at-4.09.36%E2%80%AFPM.png?resize=1024%2C845&ssl=1

The May 7th event is too recent to be in the model’s training data. This makes sure that the model knows about it. You could also feed the model company policy documents, the rules to a board game, or your diary and it will magically know that information. Since you are running the model in Ollama, there is no risk of that information getting out, too. It is pretty awesome.

Have any questions, comments, etc? Feel free to drop a comment, below.

https://jws.news/2024/how-to-build-a-rag-system-using-python-ollama-langchain-and-chroma-db/

joe, to ai

A few weeks back, I thought about getting an AI model to return the “Flavor of the Day” for a Culver’s location. If you ask Llama 3:70b “The website https://www.culvers.com/restaurants/glendale-wi-bayside-dr lists “today’s flavor of the day”. What is today’s flavor of the day?”, it doesn’t give a helpful answer.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.29.28%E2%80%AFPM.png?resize=1024%2C690&ssl=1

If you ask ChatGPT 4 the same question, it gives an even less useful answer.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.33.42%E2%80%AFPM.png?resize=1024%2C782&ssl=1

If you check the website, today’s flavor of the day is Chocolate Caramel Twist.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.41.21%E2%80%AFPM.png?resize=1024%2C657&ssl=1

So, how can we get a proper answer? Ten years ago, when I wrote “The Milwaukee Soup App”, I used Kimono (which is long dead) to scrape the soup of the day. You could also write a fiddly script to scrape the value manually. It turns out that there is another option, though: Scrapegraph-ai. ScrapeGraphAI is a web scraping Python library that uses LLMs and direct graph logic to create scraping pipelines for websites, documents, and XML files. Just say which information you want to extract, and the library will do it for you.

Let’s take a look at an example. The project has an official demo where you need to provide an OpenAI API key, select a model, provide a link to scrape, and write a prompt.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.35.29%E2%80%AFPM.png?resize=1024%2C660&ssl=1

As you can see, it reliably gives you the flavor of the day (in a nice JSON object). It goes even further, though: if you point it at the monthly calendar, you can ask for the flavor of the day and the soup of the day for the remainder of the month, and it handles that as well.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-1.14.43%E2%80%AFPM.png?resize=1024%2C851&ssl=1

Running it locally with Llama 3 and Nomic

I am running Python 3.12 on my Mac but when you run pip install scrapegraphai to install the dependencies, it throws an error. The project lists the prerequisite of Python 3.8+, so I downloaded 3.9 and installed the library into a new virtual environment.

Let’s see what the code looks like.

You will notice that just like in yesterday’s How to build a RAG system post, we are using both a main model and an embedding model.

So, what does the output look like?

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-2.28.10%E2%80%AFPM.png?resize=1024%2C800&ssl=1

At this point, if you want to harvest flavors of the day for each location, you can do so pretty simply. You just need to loop through each of Culver’s location websites.

Have a question, comment, etc? Please feel free to drop a comment, below.

https://jws.news/2024/how-to-use-ai-to-make-web-scraping-easier/
