For example, if we train a model to compute a simple, linear feature and a hard, highly non-linear one, the easy feature is naturally learned first, but both are learned perfectly by the end of training. However, the easy feature dominates the learned representations! 3/9
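A minimal sketch of that kind of setup (my own toy construction, not the paper's exact experiment): one linear feature and one XOR-style feature, both fully predictable from the input, trained jointly. The linear head typically reaches high accuracy long before the XOR head does.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(4096, 16)
y_easy = (X[:, 0] > 0).float()                    # simple linear feature
y_hard = ((X[:, 1] > 0) ^ (X[:, 2] > 0)).float()  # XOR: highly non-linear

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2001):
    logits = model(X)
    loss = bce(logits[:, 0], y_easy) + bce(logits[:, 1], y_hard)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 400 == 0:  # watch the easy head converge first
        with torch.no_grad():
            acc_easy = ((logits[:, 0] > 0).float() == y_easy).float().mean()
            acc_hard = ((logits[:, 1] > 0).float() == y_hard).float().mean()
        print(f"step {step}: easy acc {acc_easy:.2f}, hard acc {acc_hard:.2f}")
```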
This paper is really just us finally following up on a weird finding about RSA (figure here) from a paper Katherine Hermann & I had at NeurIPS back in the dark ages (2020): https://x.com/khermann_/status/1323353860283326464
Thanks to my coauthors @scychan_brains & Katherine! 9/9
FreeCodeCamp today released a new course on fine-tuning LLMs. The course, by Krish Naik, covers tuning methods such as QLoRA, LoRA, and quantization, using models such as Llama 2, Gradient, and Google's Gemma.
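For context, LoRA-style fine-tuning with the Hugging Face peft library looks roughly like this; a minimal sketch, assuming peft and transformers are installed, with illustrative hyperparameters that are not taken from the course:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base model (gated on the Hub; any causal LM works for the sketch).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter matrices train
```

QLoRA is the same idea with the frozen base weights quantized to 4-bit (e.g., loading the model with a `BitsAndBytesConfig(load_in_4bit=True)` quantization config), which cuts memory further.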
“The Protein Universe Atlas is a groundbreaking resource for exploring the diversity of proteins. Its user-friendly web interface empowers researchers, biocurators, and students in navigating the “dark matter” to explore proteins of unknown function.”
🥁 That’s what the committee said about this work, one of the #SIBRemarkableOutputs 2023 👏
So… Big Tech is allowed to blatantly steal the work, styles, and with them the job opportunities of thousands of artists and writers without being reprimanded, but it takes similarity to the voice of a famous actor to spark public outrage about AI. 🤔
MLX is Apple's framework for machine learning applications on Apple silicon. The MLX examples repository provides a set of examples for using the framework, including:
✅ Text models such as a transformer LM, Llama, Mistral, and Phi-2
✅ Image models such as Stable Diffusion
✅ Audio and speech recognition with OpenAI's Whisper
✅ Support for some Hugging Face models
@ramikrispin @BenjaminHan How do this and corenet (https://github.com/apple/corenet) fit together? The corenet repo has examples for inference with MLX for models trained with corenet; is that it, or does MLX not have, e.g., activation and loss fns, optimizers, etc.?
@Lobrien @BenjaminHan CoreNet is a deep learning library, whereas MLX is an array framework for high performance on Apple silicon. This means that if you are using a Mac with an M1-M3 chip, it should perform better when using MLX on the backend (I did not test it myself).
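On the question above: MLX is not just arrays; it ships its own layers, losses, and optimizers (mlx.nn, mlx.optimizers). A minimal training-loop sketch, assuming mlx is installed (the toy data and model here are my own):

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(8, 32)
        self.l2 = nn.Linear(32, 1)

    def __call__(self, x):
        return self.l2(nn.relu(self.l1(x)))  # activations live in mlx.nn

def loss_fn(model, X, y):
    return nn.losses.mse_loss(model(X), y)  # losses live in mlx.nn.losses

model = MLP()
optimizer = optim.Adam(learning_rate=1e-3)  # optimizers in mlx.optimizers
loss_and_grad = nn.value_and_grad(model, loss_fn)

X = mx.random.normal((256, 8))        # toy inputs
y = X.sum(axis=1, keepdims=True)      # toy linear target

for step in range(100):
    loss, grads = loss_and_grad(model, X, y)
    optimizer.update(model, grads)
    mx.eval(model.parameters(), optimizer.state)  # force lazy evaluation
```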
MIT launched the 2024 edition of its Introduction to Deep Learning course, taught by Prof. Alexander Amini and Prof. Ava Amini. The course started at the end of April and will run until June, with lectures published weekly. The syllabus changes from year to year, reflecting the rapid changes in this field.
(2/2) The course covers the following topics:
✅ Deep learning foundations
✅ Computer vision
✅ Deep generative modeling
✅ Reinforcement learning
✅ Robot learning
✅ Text to image
Stanford University released a new course last week on Deep Generative Models. The course, by Prof. Stefano Ermon, focuses on the models behind GenAI applications.
(1/2) Google released a new foundation model for time series forecasting 🚀
TimesFM (Time Series Foundation Model) is a pre-trained foundation model for time series forecasting applications, developed by the Google Research team. It joins the recent trend of leveraging foundation models for time series forecasting, which includes Salesforce's Moirai and Amazon's Chronos.
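A usage sketch based on the example in the TimesFM repo's README at the time of release; the exact class names, parameters, and checkpoint IDs may have changed since, so treat this as illustrative rather than authoritative:

```python
import numpy as np
import timesfm

# Hyperparameters mirror the 200M checkpoint's README example
# (assumptions; check github.com/google-research/timesfm for the current API).
tfm = timesfm.TimesFm(
    context_len=512,
    horizon_len=128,
    input_patch_len=32,
    output_patch_len=128,
    num_layers=20,
    model_dims=1280,
    backend="cpu",
)
tfm.load_from_checkpoint(repo_id="google/timesfm-1.0-200m")

# Zero-shot forecast: a list of 1-D context arrays, one per series,
# plus a per-series frequency category (0 = high, 1 = medium, 2 = low).
series = [np.sin(np.linspace(0, 20, 100)), np.random.randn(256)]
point_forecast, quantile_forecast = tfm.forecast(series, freq=[0, 1])
```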