joe

@joe@jws.news

I am a humble Milwaukeean. I write code, travel, ride two-wheeled transportation, and love my dogs. This is my blog. You can also follow as @joe (mastodon), @steinbring (kbin / lemmy), or @steinbring (pixelfed).


joe, (edited ) to javascript

Earlier this week, we started looking at React and I figured that for today’s post, we should take a look at the useEffect (https://react.dev/reference/react/useEffect) and useMemo (https://react.dev/reference/react/useMemo) React Hooks. Hooks are functions that let you “hook into” React state and lifecycle features from function components. In yesterday’s post, we used useState (https://codepen.io/steinbring/pen/GRLoGob/959ce699f499a7756cf6528eb3923f75), which is another React Hook. The useState Hook allows us to track state in a function component (not unlike how we used Pinia or Vuex with Vue.js).

The useEffect React hook lets you perform side effects in functional components, such as fetching data, subscribing to a service, or manually changing the DOM. It can be configured to run after every render or only when certain values change, by specifying dependencies in its second argument array. The useMemo React hook memoizes expensive calculations in your component, preventing them from being recomputed on every render unless specified dependencies change. This optimization technique can significantly improve performance in resource-intensive applications by caching computed values.

Let’s take a look at a quick useEffect, first. For the first demo, we will use useEffect and useState to tell the user what the current time is.

See the Pen by Joe Steinbring (@steinbring) on CodePen.

Let’s walk through what we have going on here. The App() function returns JSX containing <p>The current time is {currentTime}</p>, where currentTime and its updater setCurrentTime come from useState. The code block useEffect(() => {}); executes whenever the state changes and can be used to do something like fetching data or talking to an authentication service. It also fires when the page first renders. So, what does that empty dependency array ([]) do in useEffect(() => {}, []);? It makes sure that useEffect only runs one time instead of running whenever the state changes.
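Since the embedded pen doesn’t come through in this view, here is a rough sketch of the sort of component the demo uses (the exact code in the pen may differ):

import React, { useState, useEffect } from 'react';

function App() {
  // currentTime holds the displayed value; setCurrentTime updates it
  const [currentTime, setCurrentTime] = useState('');

  // The empty dependency array means this effect runs once, after the first render
  useEffect(() => {
    setCurrentTime(new Date().toLocaleTimeString());
  }, []);

  return <p>The current time is {currentTime}</p>;
}

export default App;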

We can get a little crazier from here by incorporating the setInterval() method.

See the Pen by Joe Steinbring (@steinbring) on CodePen.

In this example, it still runs useEffect(() => {},[]); only once (instead of whenever the state changes) but it uses setInterval() inside of useEffect to refresh the state once every 1000 milliseconds.
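Roughly, that pattern looks something like this (again, a sketch rather than the exact pen; the cleanup function is my addition):

import React, { useState, useEffect } from 'react';

function Clock() {
  const [currentTime, setCurrentTime] = useState(new Date().toLocaleTimeString());

  useEffect(() => {
    // Refresh the state once every 1000 milliseconds
    const timer = setInterval(() => {
      setCurrentTime(new Date().toLocaleTimeString());
    }, 1000);

    // Clear the interval when the component unmounts
    return () => clearInterval(timer);
  }, []); // the effect itself still only runs once

  return <p>The current time is {currentTime}</p>;
}

export default Clock;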

Let’s take a look at another example.

See the Pen by Joe Steinbring (@steinbring) on CodePen.

In this one, we have three form elements: a number picker for “digits of pi”, a color picker for changing the background, and a read-only textarea field that shows the value of π to the precision specified in the “digits of pi” input. With no dependency array on useEffect(() => {});, useEffect is triggered whenever either “Digits of Pi” or the color picker changes. If you open the console and make a change, you can see how it is triggered once when you change the background color and twice when you change the digits of pi. Why? When you change the number of digits, it also changes the value of pi, and you get one execution per state change.
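The pen itself isn’t shown here, but the shape of it is roughly this (a sketch based on the description above; updating piValue inside the effect is my guess at why it fires twice):

import React, { useState, useEffect } from 'react';

function App() {
  const [digits, setDigits] = useState(5);
  const [color, setColor] = useState('#ffffff');
  const [piValue, setPiValue] = useState('');

  // No dependency array, so this effect runs after every render
  useEffect(() => {
    console.log('useEffect fired');
    document.body.style.backgroundColor = color;
    // Updating piValue here causes a second render (and a second effect run)
    // whenever the digits value actually changes
    setPiValue(Math.PI.toFixed(digits));
  });

  return (
    <div>
      <label>
        Digits of Pi:{' '}
        <input type="number" min="1" max="15" value={digits}
               onChange={(e) => setDigits(Number(e.target.value))} />
      </label>
      <label>
        Background:{' '}
        <input type="color" value={color}
               onChange={(e) => setColor(e.target.value)} />
      </label>
      <textarea readOnly value={piValue} />
    </div>
  );
}

export default App;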

So, how can we cut down on the number of executions? That is where useMemo() comes in. Let’s take a look at how it works.

See the Pen by Joe Steinbring (@steinbring) on CodePen.
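Since the pen doesn’t render in this view, here is roughly what that revision looks like (a sketch; the actual pen may differ):

import React, { useState, useEffect, useMemo } from 'react';

function App() {
  const [digits, setDigits] = useState(5);
  const [color, setColor] = useState('#ffffff');

  // piValue is no longer state; it is memoized and only recalculated when digits changes
  const piValue = useMemo(() => Math.PI.toFixed(digits), [digits]);

  // The dependency array limits this effect to changes of color
  useEffect(() => {
    console.log('useEffect fired');
    document.body.style.backgroundColor = color;
  }, [color]);

  return (
    <div>
      <label>
        Digits of Pi:{' '}
        <input type="number" min="1" max="15" value={digits}
               onChange={(e) => setDigits(Number(e.target.value))} />
      </label>
      <label>
        Background:{' '}
        <input type="color" value={color}
               onChange={(e) => setColor(e.target.value)} />
      </label>
      <textarea readOnly value={piValue} />
    </div>
  );
}

export default App;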

In this revision, instead of piValue being state, it is “memoized” and the value of the variable only changes if the value of digits changes. In this version, we are also adding a dependency array to useEffect() so that it only executes if the value of color changes. Alternatively, you could also just use two useEffect hooks, each with its own dependency array. Let’s take a look at that.

See the Pen by Joe Steinbring (@steinbring) on CodePen.

If you throw open your console and change the two input values, you will see that it is no longer triggering useEffect() twice when changing the number of digits.

Have any questions, comments, etc? Feel free to drop a comment, below.

https://jws.news/2024/exploring-useeffect-and-usememo-in-react/

joe,

The actual blog post is on a WordPress blog and it is publishing to the Fediverse from there. There is some sort of magic that the (first-party) WordPress plugin is doing, but I couldn’t tell you what it is.

joe, (edited ) to javascript

Over the years, we have looked at jQuery, AngularJS, Rivets, and Vue.js. I figured that it was time to add React to the list. Facebook developed React for building user interfaces. The big difference between Vue and React is that React uses one-way binding whereas Vue uses two-way binding. With React, data flows downward from parent components to child components; to communicate data back up to the parent, you pass callback functions down as props or reach for a state management library like Redux. React is also a library whereas Vue.js is a framework, so React is closer to Rivets. You can run React with or without JSX and write it with or without a framework like Next.js.

Let’s look at how binding works in React and how it compares to Vue. In Vue, if we wanted to bind a form input to a text value, it would look like this

See the Pen by Joe Steinbring (@steinbring) on CodePen.
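Since the pen doesn’t render here, the Vue version is essentially something like this (a sketch in single-file-component form; the actual pen may be structured differently):

<script setup>
import { ref } from 'vue'

// Reactive state: the input and the display share the same ref
const inputText = ref('Blah Blah Blah')
</script>

<template>
  <!-- v-model gives two-way binding: editing the input updates inputText,
       and changing inputText updates the input -->
  <input v-model="inputText" />
  <p>{{ inputText }}</p>
</template>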

… and to do the same thing in React, it would look like this

See the Pen by Joe Steinbring (@steinbring) on CodePen.
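… and a sketch of the React version (again, not the exact pen):

import React, { useState } from 'react';

function App() {
  // React only gives you one-way binding, so the state updater is wired up manually
  const [inputText, setInputText] = useState('Blah Blah Blah');

  return (
    <div>
      {/* onChange pushes the input's value back into state */}
      <input value={inputText} onChange={(event) => setInputText(event.target.value)} />
      <p>{inputText}</p>
    </div>
  );
}

export default App;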

You will notice that the React version uses the useState React Hook. Where the Vue version uses const inputText = ref('Blah Blah Blah'); for reactive state management, React uses const [inputText, setInputText] = React.useState('Blah Blah Blah'); to manage state. Vue also has two-way binding with the v-model directive; in React, the text updates when the state changes, but the state doesn’t automatically update when the input is edited. To deal with this, you manually handle updates to the input’s value via an onChange event. The developer is responsible for triggering state changes using the state updater functions in React. Another big difference is that Vue uses a template syntax that is closer to HTML, while React uses JSX to define the component’s markup directly within JavaScript.

Let’s take a look at one more example

See the Pen by Joe Steinbring (@steinbring) on CodePen.

This example is very similar to what we had before, but now there are child components: an <InputForm> that calls setInputText(event.target.value) when its input changes, and a <TextDisplay text={inputText} /> that renders the text. This approach promotes code reusability. By encapsulating the input field and display logic into separate components, these pieces can easily be reused throughout the application or even in different projects, reducing duplication and fostering consistency. It is kind of the React way. Each component is responsible for its own functionality: InputForm manages the user input while TextDisplay is solely concerned with presenting text. This separation of concerns makes the codebase easier to navigate, understand, and debug as the application grows in complexity.
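A sketch of that structure, using the component names from the paragraph above (the props and exact markup are my guesses):

import React, { useState } from 'react';

// Child component: only responsible for collecting input
function InputForm({ onChange }) {
  return <input onChange={onChange} />;
}

// Child component: only responsible for presenting text
function TextDisplay({ text }) {
  return <p>{text}</p>;
}

function App() {
  const [inputText, setInputText] = useState('Blah Blah Blah');

  return (
    <div>
      {/* Data flows down as props; changes flow back up through the onChange callback */}
      <InputForm onChange={(event) => setInputText(event.target.value)} />
      <TextDisplay text={inputText} />
    </div>
  );
}

export default App;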

Have a question, comment, etc? Feel free to drop a comment.

https://jws.news/2024/it-is-time-to-play-with-react/

#JavaScript #JSX #React #VueJs

joe, to random

This past month, I was talking about how I spent $528 to buy a machine with enough guts to run more demanding AI models in Ollama. That is good and all but if you are not on that machine (or at least on the same network), it has limited utility. So, how do you use it if you are at a library or a friend’s house? I just discovered Tailscale. You install the Tailscale app on the server and all of your client devices and it creates an encrypted VPN connection between them. Each device on your “tailnet” has 4 addresses you can use to reference it:

  • Machine name: my-machine
  • FQDN: my-machine.tailnet.ts.net
  • IPv4: 100.X.Y.Z
  • IPv6: fd7a:115c:a1e0::53

If you remember Hamachi from back in the day, it is kind of the spiritual successor to that.

https://i0.wp.com/jws.news/wp-content/uploads/2024/03/Screenshot-2024-03-04-at-2.37.06%E2%80%AFPM.png?resize=1024%2C592&ssl=1

There is no need to poke holes in your firewall or expose your Ollama install to the public internet. There is even a client for iOS, so you can run it on your iPad. I am looking forward to playing around with it some more.

https://jws.news/2024/joe-discovered-tailscale/

joe, to vuejs

This past autumn, I started playing around with the Composition API, and at the October 2023 Hack and Tell, I put that knowledge into writing a “Job Tracker”. The job tracker used Vuex and Firebase Authentication to log a user in using their Google credentials. With const store = useStore() in your view, you can do something like Welcome, {{user.data.displayName}}, but using this technique you can also use …

const LoginWithGoogle = async () => {
  try {
    await store.dispatch('loginWithGoogle')
    router.push('/')
  }
  catch (err) {
    error.value = err.message
  }
}

… to kick off the authentication of the user. I want to use it to finally finish the State Parks app, but I also want to use Pinia instead of Vuex, I want the resulting app to be a PWA, and I want to allow the user to log in with more than just Google credentials. So, this past week, I wrote my “Offline Vue Boilerplate”. It is meant to be a starting point for the State Parks app and a few other apps that I have kicking around in my head. I figured that this week, we should go over what I wrote.

Overview

The whole point of this “boilerplate” application was for it to be a common starting point for other applications that use Firebase for authentication and a NoSQL database. It uses Vue 3, Pinia, Firebase Authentication, Firebase Cloud Firestore, and vite-plugin-pwa.

I was using a lot of this stack for work projects, also. It is nice because Firebase is cheap and robust and you don’t need to write any server-side code. Hosting of the front-end code is “cheap-as-chips”, also. The Job Tracker is hosted using Firebase Hosting (which is free on the spark plan) and The Boilerplate App is hosted using Render, which is just as free.

Authentication

I am most proud of how I handled authentication with this app. Here is what the Pinia store looks like:
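The embedded code doesn’t come through in this view, so here is a simplified sketch of the general shape such a store can take; the store, action, and field names below are placeholders rather than the actual file:

// stores/auth.js: a simplified sketch, not the actual store from the boilerplate
import { defineStore } from 'pinia'
import { getAuth, signInWithPopup, GoogleAuthProvider, OAuthProvider, signOut } from 'firebase/auth'

export const useAuthStore = defineStore('auth', {
  state: () => ({
    user: null, // populated with the profile returned by the SSO provider
  }),
  actions: {
    async loginWithGoogle() {
      const result = await signInWithPopup(getAuth(), new GoogleAuthProvider())
      this.user = result.user
    },
    async loginWithMicrosoft() {
      // Microsoft comes in through Firebase's generic OAuth provider
      const result = await signInWithPopup(getAuth(), new OAuthProvider('microsoft.com'))
      this.user = result.user
    },
    async logout() {
      await signOut(getAuth())
      this.user = null
    },
  },
})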

From your view, you can access {{ user }} to get to the values that came out of the single sign-on (SSO) provider (the user’s name, email address, picture, etc). For this app, I used Google and Microsoft but Firebase Authentication offers a lot of options beyond those two.

https://i0.wp.com/jws.news/wp-content/uploads/2024/03/Screenshot-2024-03-04-at-11.31.08%E2%80%AFAM.png?resize=1024%2C588&ssl=1

Adding Google is pretty easy (after all, Firebase is owned by Google) but adding Microsoft was more difficult. To get keys from Microsoft, you need to register your application with the Microsoft identity platform. Unfortunately, the account that you use for that must be an Azure account with at least Cloud Application Administrator privileges and it cannot be a personal account. The account must be associated with an Entra tenant. This means that you need to spin up an Entra tenant to register the application and get the keys.

The third SSO provider that I was tempted to add was Apple but to do that, you need to enroll in the Apple Developer program, which is not cheap.

Firebase Cloud Firestore

I have become a big fan of Firebase Cloud Firestore over the years (at least for situations where a NoSQL database makes sense). The paradigm that I started playing around with last year involved putting the Firebase CRUD functions in the composable.

Here is an example <script> block from the Job Tracker:
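That embed doesn’t render here either, so here is a sketch of the idea; useJobs() is a hypothetical composable name standing in for the real one:

<!-- A sketch of the pattern; useJobs() is a hypothetical composable, not the actual file -->
<script setup>
import { ref } from 'vue'
import { useJobs } from '@/composables/useJobs'

// The composable hides the Firestore CRUD calls; the view only sees
// reactive data and a few functions
const { jobs, addJob, deleteJob } = useJobs()

const newJobTitle = ref('')
</script>

<template>
  <form @submit.prevent="addJob({ title: newJobTitle })">
    <input v-model="newJobTitle" placeholder="Job title" />
    <button type="submit">Add</button>
  </form>
  <ul>
    <li v-for="job in jobs" :key="job.id">
      {{ job.title }}
      <button @click="deleteJob(job.id)">Delete</button>
    </li>
  </ul>
</template>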

The author of the view doesn’t even need to know that Firebase Cloud Firestore is part of the stack. You might wonder how security is handled.

Here is what the security rule looks like behind the job tracker:
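The embedded rule isn’t visible here, but a rule with that shape looks roughly like this (the collection and field names are assumptions):

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /jobs/{jobId} {
      // Any signed-in user can create a record
      allow create: if request.auth != null;
      // Only the user who created the record can read, update, or delete it
      allow read, update, delete: if request.auth != null
        && request.auth.uid == resource.data.userId;
    }
  }
}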

The rule is structured so that any authenticated user can create a new record but users can only read, delete, or update if they created the record.

How I made it into a Progressive Web App (PWA)

This is the easiest bit of the whole process. You just need to add vite-plugin-pwa to the dev dependencies and let it build your manifest. You do need to supply icons for it to use but that’s easy enough.
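For reference, a minimal sketch of what that looks like in vite.config.js, assuming a Vue project (the app name and icon paths are placeholders):

// vite.config.js: a minimal sketch, assuming a Vue project
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
import { VitePWA } from 'vite-plugin-pwa'

export default defineConfig({
  plugins: [
    vue(),
    VitePWA({
      registerType: 'autoUpdate',
      manifest: {
        name: 'Offline Vue Boilerplate',
        short_name: 'Boilerplate',
        theme_color: '#ffffff',
        icons: [
          // You supply the icons; the plugin builds the manifest around them
          { src: 'pwa-192x192.png', sizes: '192x192', type: 'image/png' },
          { src: 'pwa-512x512.png', sizes: '512x512', type: 'image/png' },
        ],
      },
    }),
  ],
})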

The Next Steps

I am going to be using this as a stepping-stone to build 2-3 apps but you can look forward to a few deep-dive posts on the stack, also.

Have any questions, comments, etc? Please feel free to drop a comment, below.

[ Cover photo by Barn Images on Unsplash ]

https://jws.news/2024/wrote-a-thing-with-vue-and-firebase/

joe, to llm

Back in December, I paid $1,425 to replace my MacBook Pro to make my LLM research at all possible. That had an M1 Pro CPU and 32GB of RAM, which (as I said previously) is kind of a bare minimum spec to run a useful local AI. I quickly wished I had enough RAM to run a 70B model, but you can’t upgrade Apple products after the fact and a 70B model needs 64GB of RAM. That led me to start looking for a second-hand Linux desktop that can handle a 70B model.

I ended up finding a four-year-old HP Z4 G4 workstation with a Xeon W-2125 processor, 128 GB of DDR4 2666 MHz RAM, a 512 GB Samsung NVMe SSD, and an NVIDIA Quadro P4000 GPU with 8 GB of GDDR5 memory. I bought it before Ollama released their Windows preview, so I planned to throw the latest Ubuntu LTS on it.

Going into this experiment, I was expecting that Ollama would thrash the GPU and the RAM but would use the CPU sparingly. I was not correct.

This is what the activity monitor looked like when I asked various models to tell me about themselves:

Mixtral

An ubuntu activity monitor while running mixtral

Llama2:70b

An ubuntu activity monitor while running Llama2:70b

Llama2:7b

An ubuntu activity monitor while running llama2:7b

Codellama

An ubuntu activity monitor while running codellama

The Xeon W-2125 has 8 threads and 4 cores, so I think that CPU1-CPU8 are threads. My theory going into this was that the models would go into memory and then the GPU would do all of the processing. The CPU would only be needed to serve the results back to the user. It looks like the full load is going to the CPU. For a moment, I thought that the 8 GB of video RAM was the limitation. That is why I tried running a 7b model for one of the tests. I am still not convinced that Ollama is even trying to use the GPU.

A screenshot of the "additional drivers" screen in ubuntu

I am using a proprietary Nvidia driver for the GPU but maybe I’m missing something?

I was recently playing around with Stability AI’s Stable Cascade. I might need to run those tests on this machine to see what the result is. It may be an Ollama-specific issue.

Have any questions, comments, or concerns? Please feel free to drop a comment, below. As a blanket warning, all of these posts are personal opinions and do not reflect the views or ethics of my employer. All of this research is being done off-hours and on my own dime.

https://jws.news/2024/hp-z4-g4-workstation/

#Llama2 #LLM #Mac #Ollama #Ubuntu

joe, to fediverse

Before Twitter collapsed, a lot of my online identity was connected to my account on the service. A lot of my content was also trapped inside of silos that were controlled by corporations. I attempted in 2018 to reduce my dependency on things like Twitter and Instagram by using IFTTT to mirror content to things like Mastodon, Flickr, and Tumblr. After the collapse, I made Mastodon, Pixelfed, and this blog more central to my online identity. These services all use ActivityPub and services that federate with each other are the future of the social web.

That said, I still cross-post content from the fediverse to Flickr, Tumblr, and Linkedin. Both Flickr and Tumblr are reportedly considering adding federation. Until then, I figure that it helps those who are trapped in those ecosystems. I have also experimented with cross-posting from the blog to Bluesky but the API isn’t simple enough to make it feasible. It looks like Threads is adding federation in the next year or two. I hope that eventually, federation becomes a basic expectation.

If you find yourself seeking services that prioritize user control and data ownership but are unsure how to begin, feel free to share your thoughts or questions in the comments.

https://jws.news/2024/how-i-future-proofed-my-online-identity/

joe, to ai

Back in December, I started exploring how all of this AI stuff works. Last week’s post was about the basics of how to run your AI. This week, I wanted to cover some frequently asked questions.

What is a Rule-Based Inference Engine?

A rule-based inference engine is designed to apply predefined rules to a given set of facts or inputs to derive conclusions or make decisions. It operates by using logical rules, which are typically expressed in an “if-then” format. You can think of it as basically a very complex version of the spell check in your text editor.

What is an AI Model?

AI models employ learning algorithms that draw conclusions or predictions from past data. An AI model’s data can come from various sources such as labeled data for supervised learning, unlabeled data for unsupervised learning, or data generated through interaction with an environment for reinforcement learning. The algorithm is the step-by-step procedure or set of rules that the model follows to analyze data and make predictions. Different algorithms have different strengths and weaknesses, and some are better suited for certain types of problems than others. A model has parameters that are the aspects of the model that are learned from the training data. A model’s complexity can be measured by the number of parameters contained in it but complexity can also depend on the architecture of the model (how the parameters interact with each other) and the types of parameters used.

What is an AI client?

An AI client is how the user interfaces with the rule-based inference engine. Since you can use the engine directly, the engine itself could also be the client. For the most part, you are going to want something web-based or a graphical desktop client, though. Good examples of graphical desktop clients would be MindMac or Ollamac. A good example of a web-based client would be Ollama Web UI. A good example of an application that is both a client and a rule-based inference engine is LM Studio. Most engines have APIs and language-specific libraries, so if you want to you can even write your own client.

What is the best client to use with a Rule-Based Inference Engine?

I like MindMac. I would recommend either that or Ollama Web UI. You can even host both Ollama and Ollama Web UI together using docker compose.

What is the best Rule-Based Inference Engine?

I have tried Ollama, Llama.cpp, and LM Studio. If you are using Windows, I would recommend LM Studio. If you are using Linux or a Mac, I would recommend Ollama.

How much RAM does your computer need to run a Rule-Based Inference Engine?

The RAM requirement is dependent upon what model you are using. If you browse the Ollama library, Hugging Face, or LM Studio’s listing of models, most listings will list a RAM requirement (example) based on the number of parameters in the model. Most 7b models can run on a minimum of 8GB of RAM while most 70b models will require 64GB of RAM. My MacBook Pro has 32GB of unified memory and struggles to run Wizard-Vicuna-Uncensored 30b. My new AI lab currently has 128GB of DDR4 RAM and I hope that it can run 70b models reliably.

Does your computer need a dedicated GPU to run a Rule-Based Inference Engine?

No, you don’t. You can use just the CPU but if you have an Nvidia GPU, it helps a lot.

I use Digital Ocean or Linode for hosting my website. Can I host my AI there, also?

Yeah, you can. The RAM requirement would make it a bit expensive, though. A virtual machine with 8GB of RAM is almost $50/mo.

Why wouldn’t you use ChatGPT, Copilot, or Bard?

When you use any of them, your interactions are used to reinforce the training of the model. That is an issue for more than the most basic prompts. In addition to that, they cost up to $30/month/user.

Why should you use an open-source LLM?

What opinion does your employer have of this research project?

You would need to direct that question to them. All of these posts should be considered personal opinions and do not reflect the views or ethics of my employer. All of this research is being done off-hours and on my own dime.

Why are you interested in this technology?

It is a new technology that I didn’t consider wasteful bullshit in the first hour of researching it.

Are you afraid that AI will take your job?

No.

What about image generation?

I used (and liked) Noiselith until it shut down. DiffusionBee works but I think that Diffusers might be the better solution. Diffusers lets you use multiple models and it is easier to use than Stable Diffusion Web UI.

You advocate for not using ChatGPT. Do you use it?

I do. ChatGPT 4 is reportedly a 1.76t model. It can do cool things. I have an API key and I use it via MindMac. Using it that way means that I pay based on how much I use it instead of paying for a Pro account, though.

Are you going to only write about AI on here, now?

Nope. I still have other interests. Expect more Vue.js posts and likely something to do with Unity or Unreal at some point.

Is this going to be the last AI FAQ post?

Nope. I still haven’t covered training or fine-tuning.

https://jws.news/2024/ai-frequently-asked-questions/

joe, (edited ) to ai

Around a year ago, I started hearing more and more about OpenAI‘s ChatGPT. I didn’t pay much attention to it until this past summer when I watched the intern use it where I would normally use Stack Overflow. After that, I started poking at it and created things like the Milwaukee Weather Limerick and a bot that translates my Mastodon toots to Klingon. Those are cool tricks but eventually, I realized that you could ask it for detailed datasets like “the details of every state park“, “a list of three-ingredient cocktails“, or “a CSV of counties in Wisconsin.” People are excited about getting it to write code for you or to do a realistic rendering of a bear riding a unicycle through the snow but I think that is just the tip of the iceberg in a world where it can do research for you.

The biggest limitation of something like ChatGPT, Copilot, or Bard is that your data leaves your control when you use the AI. I believe that the future of AI is AI that remains in your control. The only issue with running your own, local AI is that a large language model (LLM) needs a lot of resources to run. You can’t do it on your old laptop. It can be done, though. Last month, I bought a new MacBook Pro with an M1 Pro CPU and 32GB of unified RAM to test this stuff out.

If you are in a similar situation, Mozilla’s Llamafile project is a good first step. A llamafile can run on multiple CPU microarchitectures. It uses Cosmopolitan Libc to provide a single 4GB executable that can run on macOS, Windows, Linux, FreeBSD, OpenBSD, and NetBSD. It contains a web client, the model file, and the rule-based inference engine. You can just download the binary, execute it, and interact with it through your web browser. This has very limited utility, though.

So, how do you get from a proof of concept to something closer to ChatGPT or Bard? You are going to need a model, a rule-based inference engine or reasoning engine, and a client.

The Rule-Based Inference Engine

A rule-based inference engine is a piece of software that derives answers or conclusions based on a set of predefined rules and facts. You load models into it and it handles the interface between the model and the client. The two major players in the space are Llama.cpp and Ollama. Getting Ollama is as easy as downloading the software and running ollama run [model] from the terminal.

Screenshot of Ollama running in the terminal on MacOS

In the case of Ollama, you can even access the inference engine via a local API.

A screenshot of Postman interacting with Ollama via a local JSON API

You will notice that the result isn’t easy to parse. Last week, Ollama announced Python and JavaScript libraries to make it much easier.
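For reference, here is a quick sketch of calling that local API from JavaScript (setting stream to false returns a single JSON object instead of a stream of chunks):

// A sketch of hitting the local Ollama API from JavaScript (run inside a module or async function)
const response = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama2',
    prompt: 'Tell me about yourself.',
    stream: false, // ask for one JSON object instead of streamed chunks
  }),
});

const data = await response.json();
console.log(data.response); // the generated text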

The Models

A model consists of numerous parameters that adjust during the learning process to improve its predictions. Models employ learning algorithms that draw conclusions or predictions from past data. I’m going to be honest with you: this is the bit that I understand the least. The key attributes to be aware of with a model are what it was trained on, how many parameters it has, and its benchmark numbers.

If you browse Hugging Face or the Ollama model library, you will see that there are plenty of 7b, 13b, and 70b models. That number tells you how many parameters are in the model. Generally, a 70b model is going to be more competent than a 7b model. A 7b model has 7 billion parameters whereas a 70b model has 70 billion parameters. To give you a point of comparison, ChatGPT 4 reportedly has 1.76 trillion parameters.

The number of parameters isn’t the be-all and end-all, though. There are leaderboards and benchmarks (like HellaSwag, ARC, and TruthfulQA) for determining comparative model quality.

If you are running Ollama, downloading and running a new model is as easy as browsing the model library, finding the right one for your purposes, and running ollama run [model] from the terminal. You can manage the installed models from the Ollama Web UI also, though.

A screenshot from the Ollama Web UI, showing how to manage models

The Client

The client is what the user of the AI uses to interact with the rule-based inference engine. If you are using Ollama, the Ollama Web UI is a great option. It gives you a web interface that acts and behaves a lot like the ChatGPT web interface. There are also desktop clients like Ollamac and MacGPT but my favorite so far is MindMac. It not only gives you a nice way to switch from model to model but it also gives you the ability to switch between providers (Ollama, OpenAI, Azure, etc).

A screenshot of the MindMac settings panel, showing how to add new accounts

The big questions

I have a few big questions, right now. How well does Ollama scale from 1 user to 100 users? How do you finetune a model? How do you secure Ollama? Most interesting to me, how do you implement something like Stable Diffusion XL with this stack? I ordered a second-hand Xeon workstation off of eBay to try to answer some of these questions. In the workplace setting, I’m also curious what safeguards are needed to insulate the company from liability. These are all things that need addressing over time.

I created a new LLM / ML category here and I suspect that this won’t be my last post on the topic. As a blanket warning, all of these posts are personal opinions and do not reflect the views or ethics of my employer. All of this research is being done off-hours and on my own dime.

Have a question or comment? Please drop a comment, below.

https://jws.news/2024/ai-basics/

#AI #Llama2 #LLM #Ollama

joe, to random

I’m testing IFTTT integrations

https://jws.news/2023/this-is-another-test/

joe, to fediverse

Last year, I created my own Mastodon instance, started using Pixelfed, and even made it possible to follow this blog from the fediverse. The big benefit of this ecosystem over something like Twitter, Facebook, Tumblr, etc is that every service that is part of the fediverse can be followed by everything else on the fediverse. You can follow a kbin account from a GoToSocial account or follow a WordPress blog from a Misskey account. You don’t need to have an account on a particular service in order to follow someone who uses that service. Over time, I started wondering what would be necessary to make my own applications part of the fediverse. How hard could it be?

Earlier this year, David Neal wrote about how to use the WebFinger protocol to create a Mastodon alias, and I noticed it from something that Ray Camden tooted. I set a webfinger file up on jws.dev and now, if you search for @joe or mention @joe in a toot, it will forward to @joe.

https://i0.wp.com/jws.news/wp-content/uploads/2023/12/Screenshot-2023-11-19-at-7.03.18-AM.png?resize=408%2C412&ssl=1

That left me wondering what would be required to build something that behaves not as an alias but as its own instance. If you visit verify.funfedi.dev and look at the API endpoints for a known good Mastodon account, you will see that there is a webfinger file (similar to the one that I used above), an actor URI (like https://toot.works/users/joe), an inbox (like https://toot.works/users/joe/inbox), and an outbox (like https://toot.works/users/joe/outbox). Replicating that shouldn’t be too hard.

This bit of node should give you https://social.joe.workers.dev/joe, https://social.joe.workers.dev/.well-known/webfinger?resource=acct:joe@social.joe.workers.dev, and https://social.joe.workers.dev/joe/outbox along with https://social.joe.workers.dev/joe/followers to display people following the account, https://social.joe.workers.dev/joe/inbox to pretend to be an inbox, and one sample toot in the outbox.
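The worker code itself isn’t embedded here, but a condensed sketch of the idea looks something like this (not the actual worker; a real instance also needs things like HTTP-signature keys and proper inbox handling):

// A sketch of a Cloudflare Worker answering the WebFinger, actor, and outbox lookups
const DOMAIN = 'social.joe.workers.dev';

export default {
  async fetch(request) {
    const url = new URL(request.url);
    const json = (obj) => new Response(JSON.stringify(obj), {
      headers: { 'Content-Type': 'application/activity+json' },
    });

    // WebFinger: tells other servers where the actor document lives
    if (url.pathname === '/.well-known/webfinger') {
      return json({
        subject: `acct:joe@${DOMAIN}`,
        links: [{
          rel: 'self',
          type: 'application/activity+json',
          href: `https://${DOMAIN}/joe`,
        }],
      });
    }

    // The actor document: points at the inbox, outbox, and followers
    if (url.pathname === '/joe') {
      return json({
        '@context': ['https://www.w3.org/ns/activitystreams'],
        id: `https://${DOMAIN}/joe`,
        type: 'Person',
        preferredUsername: 'joe',
        inbox: `https://${DOMAIN}/joe/inbox`,
        outbox: `https://${DOMAIN}/joe/outbox`,
        followers: `https://${DOMAIN}/joe/followers`,
      });
    }

    // The outbox: an OrderedCollection of (hard-coded) activities
    if (url.pathname === '/joe/outbox') {
      return json({
        '@context': 'https://www.w3.org/ns/activitystreams',
        id: `https://${DOMAIN}/joe/outbox`,
        type: 'OrderedCollection',
        totalItems: 1,
        orderedItems: [/* one sample toot would go here */],
      });
    }

    return new Response('Not found', { status: 404 });
  },
};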

https://mstdn.social/@joe@social.joe.workers.dev

With these few endpoints, you have a functional mastodon-compatible one-user server. You won’t be able to follow this account from every instance but I think that it is because I am running it from Cloudflare Workers and a lot of firewalls block it.

Next, I wrote https://fediverse.joe.workers.dev (you can see the result of it at https://codepen.io/steinbring/full/ZEwRmVJ). The node script behind that API takes my PixelFed, Mastodon, and WordPress activity from the past 30 days and interleaves the results so that they are in chronological order from newest to oldest. With all of my fediverse content available from a central API, it would be simple enough to update the ActivityPub instance to use the result instead of the hard-coded “orderedItems” array. I think that doing it as a boost instead of a new toot would be the better path, though.

You could also use Firebase Cloud Firestore to drive the “orderedItems” array from the instance. Adding a new firestore record would add a new toot to the fediverse.

https://jws.news/2023/writing-my-own-minimally-viable-mastodon-compatible-activitypub-instance/

joe, (edited ) to ChatGPT

Last time, we went over what the Composition API is and why it is better than the Options API in Vue. This time, I wanted to explore the subject a little more. Let’s start with an array of 30 3-ingredient cocktails that I had ChatGPT generate. Each cocktail has a name, ingredients list, preparation instructions, and a list of characteristics. Let’s see what we can do with this.

See the Pen by Joe Steinbring (@steinbring) on CodePen.

In this simple app, we declare the list of cocktails as const cocktails = ref([]) and then we use the onMounted(() => {}) lifecycle hook to populate it with delicious cocktails. Lifecycle hooks allow developers to perform actions at various stages in the life of a Vue component.
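A sketch of that pattern (with example data standing in for the ChatGPT-generated list):

<!-- A sketch of the pattern, not the exact pen -->
<script setup>
import { ref, onMounted } from 'vue'

// Start with an empty list...
const cocktails = ref([])

// ...and populate it once the component has mounted
onMounted(() => {
  cocktails.value = [
    {
      name: 'Whiskey Sour', // example data; the real pen uses the ChatGPT-generated list
      ingredients: ['2 oz whiskey', '3/4 oz lemon juice', '3/4 oz simple syrup'],
      preparation: 'Shake with ice and strain into a glass.',
      characteristics: ['sour', 'classic'],
    },
  ]
})
</script>

<template>
  <ul>
    <li v-for="cocktail in cocktails" :key="cocktail.name">
      {{ cocktail.name }}: {{ cocktail.ingredients.join(', ') }}
    </li>
  </ul>
</template>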

So, it works fine with one set of 30 3-ingredient recipes but what if we want to use 2280 recipes, split evenly over simple, intermediate, and advanced difficulty levels? I created a “Bartender” app to explore that and I put the code for it up on GitHub. Like last time, I asked ChatGPT to generate unique recipes but this time, I asked it to generate 760 simple recipes, 760 intermediate recipes, and 760 advanced recipes. Those JSON files are then imported into a composable. You will also notice that ChatGPT‘s definition of “unique” is pretty questionable.

If you check out /components/CocktailList.vue, you can see how the /composables/useCocktails.js composable is used.

A “composable” is a function that leverages Vue’s Composition API to encapsulate and reuse stateful logic. They offer a flexible and efficient way to manage and reuse stateful logic in Vue applications, enhancing code organization and scalability. They are an integral part of the modern Vue.js ecosystem. If you are curious about what the Vue 2 analog is, it would be mixins.
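As a sketch of the shape such a composable can take (the file and import paths here are assumptions, not the actual repo contents):

// composables/useCocktails.js: a simplified sketch of the idea, not the file from the repo
import { ref, computed } from 'vue'
// The three ChatGPT-generated recipe files are imported once, at module load
import simple from '@/data/simpleRecipes.json'
import intermediate from '@/data/intermediateRecipes.json'
import advanced from '@/data/advancedRecipes.json'

export function useCocktails() {
  // Reusable state: which difficulty level the UI is currently showing
  const difficulty = ref('simple')

  const cocktails = computed(() => {
    if (difficulty.value === 'intermediate') return intermediate
    if (difficulty.value === 'advanced') return advanced
    return simple
  })

  return { difficulty, cocktails }
}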

Have a question or comment? Feel free to drop a comment.

https://jws.news/2023/playing-with-vue-3-and-the-composables/

joe, (edited ) to AdobePhotoshop

This past month, I visited the local Hack & Tell to write a web app that uses Vue.js‘s Composition API. I have written 41 posts involving Vue on this blog but the Composition API is new here. If you are the “eat your dessert first” type, you can check out the result at https://joes-job-tracker.web.app. I want to spend this article reviewing the composition API, though.

Let me start by explaining what the Composition API is. The Composition API is a collection of APIs that enable the creation of Vue components by utilizing imported functions, as opposed to the traditional method of declaring options. This term encompasses the Reactivity API, Lifecycle Hooks, and Dependency Injection. The Composition API is an integrated feature within both Vue 3 and Vue 2.7. It is a bit of a departure from the traditional Vue 2 way of writing code but Vue 2 applications can use the officially maintained @vue/composition-api plugin.

So, what do the differences actually look like? Let’s take a look at the example of an app that tracks the location of the user’s mouse cursor. The first version uses the Vue 2 method where you declare options and the second version does the same thing but uses imported functions.

See the Pen by Joe Steinbring (@steinbring) on CodePen.

See the Pen by Joe Steinbring (@steinbring) on CodePen.

So, what are the differences between the two? Let’s compare and contrast.

Vue 2:

new Vue({
  el: '#app',
  data: {
    mouseX: 0,
    mouseY: 0,
  },
  methods: {
    trackMouse(event) {
      this.mouseX = event.clientX;
      this.mouseY = event.clientY;
    },
  },
  mounted() {
    // Attach an event listener to track mouse movement
    window.addEventListener('mousemove', this.trackMouse);
  },
  beforeDestroy() {
    // Clean up the event listener to prevent memory leaks
    window.removeEventListener('mousemove', this.trackMouse);
  },
});

Composition API:

import { createApp, ref, onMounted, onBeforeUnmount } from 'vue'

createApp({
  setup() {
    const mouseX = ref(0);
    const mouseY = ref(0);

    const trackMouse = (event) => {
      mouseX.value = event.clientX;
      mouseY.value = event.clientY;
    };

    onMounted(() => {
      window.addEventListener('mousemove', trackMouse);
    });

    onBeforeUnmount(() => {
      window.removeEventListener('mousemove', trackMouse);
    });

    return {
      mouseX,
      mouseY,
    };
  },
}).mount('#app')

In the Vue 3 Composition API version, the setup function is used to define the component’s logic, and it uses reactive references (ref) to manage the state. Event handling is encapsulated within the onMounted and onBeforeUnmount lifecycle hooks. Vue 3 promotes a more modular and declarative approach. In the Vue 2 version, the code uses the Options API with a more traditional structure, relying on data, methods, and lifecycle hooks like mounted and beforeDestroy. Vue 3’s Composition API simplifies the code structure and encourages better organization and reusability of logic.

If you are anything like me, your first question is what reactive references are. Vue 2 doesn’t have a built-in equivalent to the ref and reactive features found in Vue 3. When you use a ref in a template, modifying its value triggers Vue’s automatic detection of the change, leading to a corresponding update of the DOM. This is made possible with a dependency-tracking based reactivity system. During the initial rendering of a component, Vue meticulously monitors and records every ref used in the process. Subsequently, when a ref undergoes a mutation, it initiates a re-render for the components that are observing it.

Here are the Vue 2 and the Composition API versions of the same application:

See the Pen by Joe Steinbring (@steinbring) on CodePen.

See the Pen by Joe Steinbring (@steinbring) on CodePen.

Vue 3’s reactivity system is more efficient and performs better than Vue 2. This can result in better overall performance, especially in applications with complex reactivity requirements. It also reduces the use of magic keywords like this, which can be a source of confusion in Vue 2.

Here are the two versions, side by side:

Vue 2:

new Vue({
  el: '#app',
  data: {
    myValue: 42,
  },
  methods: {
    updateMyValue() {
      this.myValue = 69; // Update the value
    },
  },
});

Composition API:

import { createApp, ref, onMounted } from 'vue';

const app = createApp({
  setup() {
    const myValue = ref(42);
    const updateMyValue = () => {
      myValue.value = 69; // Update the value
    };
    return { myValue, updateMyValue };
  }
});

app.mount('#app');

Another thing that the Composition API brings to the table is composables. Vue composables are special functions that leverage the Composition API to create reactive and reusable logic. They serve as external functions, abstracting reactive states and functionalities for use in multiple components. These composables, also known as composition functions, streamline the process of sharing logic across different parts of an application.

In functionality, composables are akin to Vue 2’s mixins found in the Options API and resemble the concept of Hooks in React.

I am going to wait for a future post to cover composables, though.

https://jws.news/2023/learning-the-composition-api/

joe, (edited ) to food

There is a Chili Cook-Off at work tomorrow and I agreed to bring something that I plan to call <Chili><Con Carne /></Chili>. So, what is the plan? I plan to start with a nice stout, thick-ground beef, 1 chopped yellow onion, a habanero or two, and a few chopped-up strips of bacon until the meat is browned and the onion is soft. Next, I’ll stir in chopped tomatoes, tomato paste, and garlic. After that, I’ll add parsley, basil, chili powder, paprika, cayenne pepper, oregano, salt, and pepper. At that point, I’ll add the kidney beans and then thicken it as needed with flour and cornmeal.

What do you think of the plan?

https://jws.news/2023/the-2023-bader-rutter-chili-cook-off/

joe, (edited ) to random

I started this blog over 12 years ago. Over the years, I have changed where it was hosted, what the URL was, and generally how I used it. A few weeks ago, I got the idea to both move hosts and change the URL again. The last time I did that was over 5 years ago.

Almost three years ago, I started posting very regularly to the blog (for a year it was at least weekly). I don’t intend to return to that but I promise a solid attempt at monthly posts. You can follow along here of course but the blog is also available on the fediverse. You can follow @all or @joe to see my posts from Mastodon, Kbin, etc.

Have any questions, comments, etc? Please feel free to drop a comment.

https://jws.news/2023/new-blog-day/

joe, to random

Web Components are a set of technologies that allow you to create reusable custom elements with functionality encapsulated away from the rest of your code. This allows you to define something like <js-modal>Content</js-modal> and then attach behavior and functionality to it.

In this post, I want to explore how web components do what they do.

See the Pen by Joe Steinbring (@steinbring) on CodePen.

In the above pen, there are two examples. The first one (<js-description>Content</js-description>) uses a custom element (defined in JavaScript using customElements.define()). It is definitely useful but if you look at the second example (<js-gravatar>Content</js-gravatar>), there is now a <template> element that allows you to define what is within the custom element.
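As a rough sketch of those two patterns (the element names come from the pen; the internals are my guesses):

// A sketch of the first pattern: a custom element defined entirely in JavaScript
class JsDescription extends HTMLElement {
  connectedCallback() {
    // Wrap whatever was placed inside <js-description> in some extra markup
    this.innerHTML = `<p class="description">${this.textContent}</p>`;
  }
}
customElements.define('js-description', JsDescription);

// The second pattern adds a <template> in the HTML...
// <template id="js-gravatar-template">
//   <img class="avatar" alt="Gravatar" />
// </template>
// ...and the element stamps a copy of it into a shadow root
class JsGravatar extends HTMLElement {
  connectedCallback() {
    const template = document.getElementById('js-gravatar-template');
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.appendChild(template.content.cloneNode(true));
  }
}
customElements.define('js-gravatar', JsGravatar);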

I plan on building on some of these concepts in a later post. Have a question, comment, etc? Feel free to drop a comment, below.

https://jws.news/2020/web-components-101/
