Start saving money for that M4 Ultra with 500GB! Maybe this could be the first open-source model to surpass GPT-4! AIatMeta: "Llama 3 8B & 70B models are just the beginning of what we’re working to release for Llama 3. Our largest models currently in the works are 400B+ parameters and while they’re still in active development, we’re excited about how this work is trending." #LLM #AI #ML https://twitter.com/AIatMeta/status/1780997414071181370
Earlier today, Microsoft released the new WizardLM-2 7B, 8x22B, and 70B models with great benchmark results (of course, they say as good as, or almost the same as, GPT-4), but then they removed the weights from Hugging Face, the repo from GitHub, and their whitepaper. Someone on Reddit joked that maybe they released GPT-4 by mistake! lol Quantized weights from other people are still around on Hugging Face! #ML #LLM #AI
Cool tip for running LLMs on Apple Silicon! By default, macOS allows the GPU to use up to 2/3 of RAM on machines with <=36GB and 3/4 on machines with >36GB. I used the command sudo sysctl iogpu.wired_limit_mb=57344 to override that and allocate 56GB of my 64GB to the GPU. This allowed me to load all layers of larger models for faster inference! #MacOS #LLM #AI #ML
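For anyone trying this on their own machine, here's a sketch of the workflow: the sysctl takes its value in megabytes, so compute your desired GPU budget and pass it in. The `GPU_GB` variable name here is mine, not part of any tool; the `iogpu.wired_limit_mb` sysctl itself is macOS-only and resets on reboot.

```shell
# Sketch: raise the GPU wired-memory limit on Apple Silicon (macOS only).
# GPU_GB is an illustrative variable; pick a budget that leaves enough RAM
# for the OS and your other apps.
GPU_GB=56
GPU_MB=$((GPU_GB * 1024))   # sysctl expects megabytes: 56 * 1024 = 57344
echo "Setting iogpu.wired_limit_mb to $GPU_MB"

# Check the current limit (0 means "use the macOS default split"):
#   sysctl iogpu.wired_limit_mb
# Apply the override (resets on reboot, so re-run after each restart):
#   sudo sysctl iogpu.wired_limit_mb=$GPU_MB
```

The macOS-specific commands are left commented so the arithmetic part runs anywhere; on an actual Mac, uncomment and run them directly.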
Registration is now open for our April meetup: 🐲 LLMOps & ML for Drilling Performance, and Python & Mazmorras, this month at the Repsol offices
Thanks to all the recent large LLMs, "Apple is considering support for up to half a terabyte of RAM" for the highest-end M4 Mac configurations. I'm sure the price won't be cheap, but I bet it will be cheaper than getting 500GB of VRAM from NVIDIA. lol #LLM #AI #ML https://9to5mac.com/2024/04/11/apple-first-m4-mac-release-ai/
And this is also a temporary[?] move away from working in #ML, because right now the #AI hype is making everything reckless and irrational. And I hate being on the back end of over-promising and under-delivering.
And don't get me started on the number of people working in the field who trust ChatGPT blindly and are surprised when it doesn't solve things by magic. #YesItsQuiteFrustrating
Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.
Years ago, the “language of machine learning” was split between #R and #python, but it’s been steadily shifting toward Python. At this point, after all the #LLM developments, I think it’s clearly Python. I don’t see much R in the LLM world at all. And increasingly, I’m seeing #rust becoming the “systems language of #ML”. #rustlang #LLMs