I don't personally think LLMs will ruin everything, but neither do I think they will solve everything. Despite being in the tech world, I've been skeptical of many of the applications in which they've made an appearance in the past 18 months.
This is the cherry on top. You can no longer avoid it if you're a Google user. And even worse, its hallucinations will displace reliable but smaller sources of info.
Looks like today I finally found a good application for #LLMs: learning languages!
I've been attempting to learn #arabic through duolingo for a while now, without much success. I figured if there's one thing language models should be good at, it's languages. So far the thing has actually been pretty helpful.
I just posted a short tutorial on setting up a simple #LLM on your #oneplus6 running #postmarketOS. The LLM (7B Alpaca in this case) is not terribly accurate, but possibly useful in some cases.
Warning! Your phone will get hot. I haven't tested it long enough to know whether it will harm your device, so do be careful.
Oh! I have a website now! Wow! My history with websites is hit and miss, so enjoy it while it lasts. I have half a dozen posts in the works already.
also, now with #gpt4o, latency is going to be critical if you’re doing streaming audio/video, so #python may start looking less appealing. what’s the new #LLM language? #rust? #go? #cpp? #fortran?
Interesting: the ChatGPT desktop app for the Mac will be slowly rolling out to Plus subscribers starting today, but OpenAI "plans to release a Windows version of the desktop app later this year." Maybe the rumor that Apple is closing a deal with OpenAI for ChatGPT is true... #LLM #AI #ML #OpenAI #Apple https://www.macrumors.com/2024/05/13/chatgpt-mac-app/
GPT-4o combines audio, image, and text. It can analyze actual audio, and you can interrupt the voice. It can pick up emotion from audio. You can also ask for speech in different styles, including singing! It can see images in real time and chat by voice, for example solving an equation in real time as you write it on paper. This is amazing! #LLM #AI #GPT #ML #OpenAI
The OpenAI ChatGPT Spring Update event starts at 18:00 UK time. Twelve minutes from now. Live stream at https://openai.com/index/spring-update/. It's currently live with holding music and visuals.
(1/2) Google released a new foundation model for time series forecasting 🚀
TimesFM (Time Series Foundation Model) is a foundation model for time series forecasting applications. This pre-trained model was developed by the Google Research team. It joins the recent trend of leveraging foundation models for time series forecasting, which includes Salesforce's Moirai and Amazon's Chronos.
AI is crap, episode umpteen-thousand-and-twelve:
ChatGPT consumes up to 25 times more energy than a Google search.
On top of that, a lot of water is also used to cool the servers that run all this software. After a conversation of roughly 20 to 50 questions, half a litre of water has gone up in vapor.
AI is expected to consume twice as much energy as the whole of France by 2030.
Somehow the relevance of this research using LLMs and AI to train robots is not fully appreciated, I think. If we can train robots in simulated worlds and that learning can be applied in real-world applications, it seems that learning for robots has no problem getting enough training data, or am I missing something? This could also really accelerate applications for all kinds of tasks. https://eureka-research.github.io/dr-eureka/ #AI #Robots #DrEureka #LLM
there’s a big need for something stronger than #RAG, but more flexible and cheaper than a giant all-knowing #LLM.
A great part about RAG is that it's just a database: you maintain knowledge by issuing plain INSERT/UPDATE statements. No million-dollar training process.
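A minimal sketch of that idea, with a toy bag-of-words stand-in for the embedding model (`RagStore`, `upsert`, and the sample documents are all hypothetical, not any particular library's API):

```python
# Knowledge lives as (id, text, vector) rows, and maintenance is a plain
# insert/update, not a training run. embed() is a toy bag-of-words stand-in
# for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': an L2-normalized bag of words."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return Counter({w: c / norm for w, c in counts.items()})

def cosine(a: Counter, b: Counter) -> float:
    # Counter returns 0 for missing words, so this is a sparse dot product.
    return sum(v * b[w] for w, v in a.items())

class RagStore:
    def __init__(self) -> None:
        self.rows: dict[str, tuple[str, Counter]] = {}

    def upsert(self, doc_id: str, text: str) -> None:
        # INSERT or UPDATE a row: this is the whole knowledge-maintenance story.
        self.rows[doc_id] = (text, embed(text))

    def search(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.rows.values(), key=lambda row: -cosine(q, row[1]))
        return [text for text, _ in ranked[:k]]

store = RagStore()
store.upsert("faq-1", "the backup job runs every night at 02:00")
store.upsert("faq-2", "password resets are handled by the identity team")
# Knowledge changed? Just write the row again, no retraining:
store.upsert("faq-1", "the backup job now runs every night at 03:00")
print(store.search("when does the backup job run")[0])
# → the backup job now runs every night at 03:00
```

In a real deployment the store would be a vector database (or plain SQL with a vector extension) and `embed` an actual model, but the maintenance workflow is the same: updating what the system "knows" is a single write.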
The non-profit foundation @ftdl has launched a fundraiser for additional hardware for NapiGen 🚀, a generator of Polish 🇵🇱 subtitles and transcriptions, and for further LLM projects.
Will you help solve the problem of missing Polish subtitles on most YouTube content and missing transcriptions for podcasts?
P.S. Everything is or will be open-source, running in the foundation's own server room in Kraków, on its own hardware: no "leaks" to the outside and no "free" extra training for American corporate AIs.
Yay, I too got my 7-day suspension badge from Stack Overflow, for re-adding an #LLM #AI disclaimer to my four (4) answers after it was first reverted!
A study that confirms what I’ve been suspecting for a while: fine-tuning an #LLM with new knowledge increases its tendency to hallucinate.
If the new knowledge wasn’t provided in the original training set, then the model has to shift its weights from their previous optimal state to a new state that has to accommodate both the previous and new knowledge - and it may not necessarily be optimal.
Without a new validation round against the full previous cross-validation and test sets, that’s just likely to increase the chances of the model going off on a tangent.