Lots of people who work in #AI have, in their head, an idea about what sort of interaction with an #LLM might give them pause. The thing that might make them start to suspect that something interesting is happening.
Here's mine:
User: Tell me a cat joke.
LLM: Why did the cat join a band? He wanted to be a purr-cussionist.
Forecasting Time Series with Gradient Boosting
The skforecast Python library brings machine-learning regression models from the scikit-learn ecosystem to time series forecasting. Here is a tutorial by Joaquín Amat Rodrigo and Javier Escobar Ortiz on time series forecasting with skforecast using XGBoost, LightGBM, scikit-learn, and CatBoost models.
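The core idea libraries like skforecast automate is turning a series into a supervised problem with lag features, then predicting multiple steps ahead recursively. A minimal stdlib-only sketch of that strategy, using a trivial mean-of-lags stand-in where the tutorial would plug in XGBoost or LightGBM (all names here are illustrative, not skforecast's API):

```python
# Sketch of the recursive forecasting strategy that libraries like
# skforecast automate. MeanOfLagsModel is a hypothetical stand-in for
# a gradient-boosting regressor such as XGBoost or LightGBM.

def make_lag_features(series, n_lags):
    """Turn a 1-D series into (X, y) pairs of lag windows and targets."""
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])
        y.append(series[i])
    return X, y

class MeanOfLagsModel:
    """Trivial stand-in model: predicts the mean of the lag window."""
    def fit(self, X, y):
        return self  # a real regressor would learn from (X, y) here
    def predict(self, window):
        return sum(window) / len(window)

def recursive_forecast(model, history, n_lags, steps):
    """Predict `steps` ahead, feeding each prediction back in as a lag."""
    window = list(history[-n_lags:])
    preds = []
    for _ in range(steps):
        p = model.predict(window)
        preds.append(p)
        window = window[1:] + [p]  # slide the window forward
    return preds

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
X, y = make_lag_features(series, n_lags=3)
model = MeanOfLagsModel().fit(X, y)
print(recursive_forecast(model, series, n_lags=3, steps=2))
```

The recursive loop is the important part: multi-step forecasts reuse earlier predictions as inputs, which is exactly what skforecast's recursive forecasters wrap around any scikit-learn-compatible regressor.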
:blobcat_think: I think I've figured out what's been bothering me about this: the text here implies data organises itself.
AI is both the dataset and the organising analysis and management structure that implements decisions/responses based on that dataset.
Whereas 'Cloud' is an empty marketing term and 'other people's computers' accurately describes the real situation, the text here presents only a partial picture of what comprises an AI.
Assuming this is deliberate, to highlight the mass theft of data, the use of "other people's" from the original phrase still doesn't directly state that no permission was given for that use. Saying "Just stolen data" would make that point crystal clear.
Sorry. This is pure pedantry from me, but it really has been niggling at me since I saw this a week ago. Apparently I'll get no peace if I don't let it out!
[1/2] Surprising findings in brain research: As a team from #CharitéBerlin shows in #Science, thoughts in the human neocortex flow in one direction, as opposed to the loops seen in mice. That makes processing information extra efficient. These discoveries could further the development of artificial neural networks.
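The one-directional-vs-looped contrast loosely mirrors feedforward vs recurrent artificial networks. A toy sketch of the two processing styles (illustrative only, not a claim about the paper's methods):

```python
# Toy contrast: one-directional (feedforward) vs looped (recurrent)
# signal flow. Weights and the ReLU-like step are arbitrary choices
# for illustration.

def feedforward(x, layer_weights):
    """Signal flows one way: each layer's output feeds only the next."""
    for w in layer_weights:
        x = max(0.0, w * x)  # one pass per layer, no feedback
    return x

def recurrent(x, w, steps):
    """Signal loops back: the unit's own output re-enters as input."""
    h = 0.0
    for _ in range(steps):
        h = max(0.0, w * (x + h))  # state h feeds back each step
    return h

print(feedforward(1.0, [0.5, 2.0]))  # single sweep through the stack
print(recurrent(1.0, 0.5, 3))        # repeated self-feedback
```

In the feedforward case information makes exactly one sweep through the stack, which is the efficiency the post alludes to; the recurrent case must iterate before settling.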
Meta today released Llama 3, the next generation of the Llama model. Llama 3 is a state-of-the-art open-source large language model. Here are some of the key features of the model:
A major release of Ollama - version 0.1.32 is out. The new version includes:
- Improved GPU utilization and memory management, increasing performance and reducing error rates
- Better performance on Mac by scheduling large models between GPU and CPU
- Native AI support in Supabase Edge Functions
Interpreting the LHC collisions is extremely data-intensive, and #CMSPaper 1282 describes the modern software techniques that let our software (and #machinelearning) run on many different platforms/processors while still efficiently and transparently reconstructing our collisions https://arxiv.org/abs/2402.15366
Google released a new repo with a collection of guides and examples for the Gemini API. This includes a set of guides for prompt engineering and examples of the API features.
Secrets of Machine Learning: How It Works and What It Means for You by Tom Kohn, 2024
Cutting through the mass of technical literature on machine learning and AI and the plethora of fear-mongering books on the rise of killer robots, Secrets of Machine Learning offers a clear-sighted explanation for the informed reader of what this new technology is, what it does, how it works, and why it's so important.
This week, PyMC version v5.13.0 was released. PyMC is one of the main #Python libraries for Bayesian statistics. It provides a framework for probabilistic programming, enabling users to build #Bayesian models with a simple Python API and fit them using Markov Chain Monte Carlo (MCMC) methods.
The new release includes new features, bug fixes, and documentation improvements. More details in the release notes. #DataScience #machinelearning #statistics
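To make the MCMC idea concrete, here is a minimal, stdlib-only random-walk Metropolis sampler. This is a sketch of the principle only; PyMC's actual samplers (e.g. NUTS) are far more sophisticated, and none of these names come from PyMC's API:

```python
# Minimal random-walk Metropolis sampler: a sketch of the MCMC idea
# that PyMC automates with much better algorithms (e.g. NUTS).
import math
import random

def metropolis(log_prob, start, n_samples, step=1.0, seed=42):
    """Draw samples from an unnormalised log-density via Metropolis."""
    rng = random.Random(seed)
    x = start
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)),
        # computed in log space for numerical stability.
        if math.log(rng.random()) < log_prob(proposal) - log_prob(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2 / 2 (up to a constant).
draws = metropolis(lambda x: -0.5 * x * x, start=0.0, n_samples=20000)
mean = sum(draws) / len(draws)
print(round(mean, 2))  # sample mean should be near 0
```

The chain wanders through parameter space, accepting moves toward higher probability always and moves toward lower probability only sometimes, so in the long run the samples approximate the target posterior.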