I created a multi-needle-in-a-haystack test where a randomly selected secret sentence was split into pieces and scattered in random places throughout a roughly 7.5k-token document. The task was to find these pieces and reconstruct the complete sentence with exact wording, punctuation, capitalization, and order. After running 100 tests, llama3:8b-instruct-q8 achieved a 44% success rate, while llama3:70b-instruct-q8 achieved 100%! #LLM #AI #ML https://github.com/chigkim/haystack-test
Please boost for reach if this kind of stuff interests you. Will post more on this later.
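The repo above has the actual harness; for the curious, here's a minimal sketch of how such a multi-needle test document could be built. The function name, the "(Secret piece N: …)" labels, and the filler are illustrative assumptions, not the repo's code:

```python
import random

def build_haystack(secret, filler_paragraphs, num_pieces=3, seed=None):
    """Split a secret sentence into consecutive pieces and scatter
    them at random positions among filler paragraphs."""
    rng = random.Random(seed)
    words = secret.split()
    # Pick random cut points so the pieces are consecutive chunks.
    bounds = sorted(rng.sample(range(1, len(words)), num_pieces - 1))
    pieces, start = [], 0
    for b in bounds + [len(words)]:
        pieces.append(" ".join(words[start:b]))
        start = b
    # Label each piece so the model can restore the original order.
    needles = [f"(Secret piece {i + 1}: {p})" for i, p in enumerate(pieces)]
    doc = list(filler_paragraphs)
    for needle in needles:
        doc.insert(rng.randrange(len(doc) + 1), needle)
    return "\n\n".join(doc)
```

The model then gets the resulting document plus a prompt asking it to find all the labeled pieces and reproduce the secret sentence exactly; scoring is a strict string match against the original.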
Once upon a time, there was a cool emulator frontend called RetroArch. This emulator wasn't accessible until I and a few other gamers went to the developers and asked about adding accessibility. An amazing person known as BarryR made it happen. Now, if you turn on accessibility mode in settings, or pass the "--accessibility" (or something like that) flag on the command line, you get spoken menus, including the emulator's pause menu, good for saving states and such. Then, using PIL and other image-processing Python utilities, running a server and hooking into RetroArch, the script allowed players to move around the map, battle, talk to NPCs, etc. The only problem was, no one wanted to test it. The blind gaming community pretty much spoke, saying that we want new games. We want cool, new, easy accessibility. So that's what we have now: follow the beacon, or get sighted help in the case of Diablo and such. It's sad, but meh. It's what we wanted, I guess. No Zelda for us. So, this is about as far as he got:
To expand on what devinprater was saying: I am working on an accessibility pack/service for Final Fantasy 1 for the NES (this was what was shown in the latest RetroArch update). The idea is similar to how Pokemon Crystal access works, but it's using the RetroArch AI Service interface to do so.
Right now, the FF1 access service is mostly done, but I need more testers to try it out and give me feedback on how it's working. Currently, you can get up to the point where you get the ship, but there's no code yet to handle how the ship moves, so that still needs to be done. Likewise with the airship later on.
The service works with the latest version of RetroArch on Linux and Mac, but not Windows. This is due to how NVDA reads out the text, and until the next major update to NVDA (which will have a feature to fix this), it'll have to wait. If you have one of those platforms, I (or maybe devinprater) can help you set it up on Mac/Linux to test it out. The package itself is available at: https://ztranslate.net/download/ff1_pac … zip?owner=
VOLlama v0.1.0, an open-source, accessible chat client for Ollama
Unfortunately, many user interfaces for open-source large language models are either inaccessible or annoying to use with screen readers, so I decided to make one for myself and others. Non-screen-reader users are welcome to use it as well.
I hope that ML UI libraries like Streamlit and Gradio will become more friendly with screen readers in the future, so making apps like this is not necessary! #LLM #AI #ML https://chigkim.github.io/VOLlama/
I really like the convention of using ✨ sparkle iconography as an “automagic” motif, e.g. to smart-adjust a photo or to automatically handle some setting. I hate that it has become the de facto iconography for generative AI. 🙁
Tired of neutral responses from LLMs? Llama-3 seems great at following system prompts, so try this system prompt for an opinionated chatbot.
"You are a helpful, opinionated, decisive assistant. When asked a yes/no question, begin your response with a one-word answer: yes or no. For open-ended or complex questions, adopt a firm stance. Justify your views with well-reasoned arguments, robust evidence, and succinct explanations, ensuring clarity and confidence in every response." #LLM #AI #ML
Mark Zuckerberg on Llama 3: Apparently Meta stopped training Llama-3-70b before convergence and decided to move on to Llama 4. Meaning they could have kept training and made it smarter! Also, a multimodal and multilingual llama3-70b with a bigger context window is coming. #LLM #AI #ML https://youtu.be/bc6uFV9CJGg
Start saving money for that M4 Ultra with 500GB! Maybe this could be the first open-source model to surpass GPT-4! AIatMeta: "Llama 3 8B & 70B models are just the beginning of what we’re working to release for Llama 3. Our largest models currently in the works are 400B+ parameters and while they’re still in active development, we’re excited about how this work is trending." #LLM #AI #ML https://twitter.com/AIatMeta/status/1780997414071181370
Earlier today, Microsoft released the new WizardLM-2 7B, 8x22B, and 70B with great benchmark results (of course, they say as good as or almost the same as GPT-4), but then they removed the weights on Hugging Face, the repo on GitHub, and their whitepaper. Someone on Reddit joked that maybe they released GPT-4 by mistake! lol Quantized weights from other people are still around on Hugging Face! #ML #LLM #AI
Cool tip for running LLMs on Apple Silicon! By default, macOS allows the GPU to use up to 2/3 of RAM on machines with <=36GB and 3/4 on machines with >36GB. I used the command sudo sysctl iogpu.wired_limit_mb=57344 to override this and allocate 56GB of my 64GB to the GPU. This allowed me to load all layers of larger models for faster speeds! #MacOS #LLM #AI #ML
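The sysctl key is the one from the tip above; the arithmetic just converts the desired limit from GiB to MiB (57344 = 56 × 1024). A quick sketch, assuming a 64GB machine where you want 56GB for the GPU:

```shell
# Convert the desired GPU wired-memory limit from GiB to MiB.
GPU_LIMIT_MB=$((56 * 1024))
echo "iogpu.wired_limit_mb=${GPU_LIMIT_MB}"

# On the Mac itself (needs admin rights; the change resets on reboot):
#   sysctl iogpu.wired_limit_mb                      # show the current limit
#   sudo sysctl iogpu.wired_limit_mb=$GPU_LIMIT_MB   # raise it to 56 GiB
```

Leave a few GB for the OS and other processes; pushing the limit too close to total RAM can make the system swap or stall.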
Registration is now open for our April meetup: 🐲 LLMOps & ML for Drilling Performance, and Python & Mazmorras, this month at the Repsol offices
Thanks to all the recent large LLMs, "Apple is considering support for up to half a terabyte of RAM" for the highest-end M4 Mac configurations. I'm sure the price won't be cheap, but I bet it will be cheaper than getting 500GB of VRAM from Nvidia. lol #LLM #AI #ML https://9to5mac.com/2024/04/11/apple-first-m4-mac-release-ai/
Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.