I put together some detailed notes showing how I use Claude and ChatGPT as part of my daily workflow - in this case describing how I used them for a 6-minute side quest to create myself a GeoJSON map of the boundary of the Adirondack Park in upstate New York: https://simonwillison.net/2024/Mar/22/claude-and-chatgpt-case-study/
Playing around with small LLMs on my very old laptop using llamafile. All of those models are totally unreliable and hallucinate a lot, which is not surprising given how relatively small they are, my limited resources, etc. This is fun to play around with, and you can easily change the settings, use another model, and so on. Great as a learning experience; for practical use I will stick with ChatGPT-4. https://github.com/Mozilla-Ocho/llamafile #llamafile #experiment #llm #AI
@cbontenbal ...I had already managed to run the smallest model directly (see the description on the GitHub repo mentioned), and now the largest model with the weights as a separate file, because Windows can't handle an executable larger than 4GB 😀. These open-source models are still quite a bit worse than GPT-4, but I find experimenting with them in my free time interesting. #llamafile #experiments
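As a rough illustration of that split-weights workaround, here is a minimal sketch (file names are hypothetical placeholders; the -m flag is llamafile's llama.cpp-style option for loading external weights) of launching the renamed llamafile launcher on Windows with the model kept in a separate GGUF file:

    # Sketch: launch a llamafile on Windows with the weights kept outside the
    # executable, to stay under the 4GB .exe limit mentioned above.
    # Both file names are placeholders; use whatever you actually downloaded.
    import subprocess

    subprocess.run(
        ["llamafile.exe", "-m", "mixtral-8x7b-instruct.Q4_K_M.gguf"],
        check=True,
    )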
Many options for running Mistral models in your terminal using LLM
I wrote about a whole bunch of different ways you can use my LLM tool to run prompts through Mistral 7B, Mixtral 8x7B and the new Mistral-medium from the terminal:
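(For anyone who prefers calling it from code rather than the terminal, here is a minimal sketch using LLM's Python API. The model ID "mistral-medium" assumes the llm-mistral plugin is installed with an API key configured; local Mistral 7B and Mixtral builds are exposed under different IDs by other plugins.)

    # Sketch: run a prompt through a Mistral model via LLM's Python API.
    # Assumes `pip install llm llm-mistral` and a Mistral API key configured;
    # swap the model ID for whichever plugin/model you actually have installed.
    import llm

    model = llm.get_model("mistral-medium")
    response = model.prompt("Explain what a llamafile is in one sentence.")
    print(response.text())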
I know that LLM and AI are kind of dirty words in the app dev community, and I get the reasons why. Even if you just keep living your life, folks like Dropbox (https://boingboing.net/2023/12/14/dropbox-is-sharing-users-files-with-openai-heres-how-to-opt-out.html) and GitHub are using your data to train their LLMs. I think there is an opportunity in ethically sourced AI, though. Even if you disagree with that statement, there is a "know your enemy" argument.
That's the reason why I have been trying to learn this shit lately. ...
... I have tried running #Llama 2 (https://ai.meta.com/llama/) locally and haven't been able to figure it out yet, but it is the reason why I decided to replace my 2017 MBP with a 2021 MBP (with double the RAM and an M1 Pro instead of an i7).
I did have some success with #llamafile (https://github.com/Mozilla-Ocho/llamafile), though. Mozilla released something that lets you package a set of weights as a single multi-gigabyte executable that can run on macOS, Windows, Linux, FreeBSD, OpenBSD, or NetBSD. ...
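(A quick sketch of what that looks like in practice: a running llamafile also serves a llama.cpp-based, OpenAI-compatible API on localhost, so the standard openai Python client can talk to it. The port, placeholder model name and dummy key below reflect llamafile's documented defaults; adjust them to your setup.)

    # Sketch: talk to a locally running llamafile through its OpenAI-compatible endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # the local llamafile server, not api.openai.com
        api_key="sk-no-key-required",         # the local server ignores the key
    )
    completion = client.chat.completions.create(
        model="LLaMA_CPP",  # placeholder model name accepted by the local server
        messages=[{"role": "user", "content": "Say hello from a local model."}],
    )
    print(completion.choices[0].message.content)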
Well, I'm sure an NVDA add-on using this will be here soon. It's just amazing how far things have come. I mean, I hope there's a Windows version of the LLaVA part of this, anyway.
llamafile is the new best way to run an LLM on your own computer: