A few days ago, the PyTorch Foundation released gpt-fast - an example of native #PyTorch code for transformer text generation 🚀. It demonstrates a simple and efficient approach to running #GPT models in under 1,000 lines of #Python code, with the following features:
✅ Low latency ⏩
✅ Native PyTorch code - no other dependencies 🎯
✅ Supports Nvidia and AMD GPUs 😎
✅ Tensor parallelism ⌛️
I highly recommend checking out The Incredible PyTorch repo 🚀. This repo collects just about every PyTorch-related resource you can imagine, from tutorials to practical applications such as computer vision, classification, regression, forecasting, etc.
Create a Large Language Model from Scratch with Python Tutorial 👇🏼
Another fun tutorial from freeCodeCamp, focusing on building an LLM from scratch with Python. It covers topics such as:
✅ Handling and processing text
✅ Core PyTorch functions for text
✅ Basic language models
✅ Advanced methods
✅ Working with GPUs
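To give a flavor of the "basic language models" step, here's a stdlib-only sketch of a character-level bigram model - count which character follows which, then predict the most frequent successor. (Tutorials like this one do it with PyTorch tensors; this toy version just illustrates the idea.)

```python
# A toy character-level bigram language model using only the stdlib.
from collections import Counter, defaultdict

def train_bigram(text):
    # Count how often each character follows each other character.
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def most_likely_next(counts, ch):
    # Predict the single most frequent successor of ch.
    return counts[ch].most_common(1)[0][0]

counts = train_bigram("hello world, hello there")
print(most_likely_next(counts, "h"))  # 'e'
```

A neural version replaces the count table with a learned embedding and a linear layer, trained with cross-entropy - which is exactly where the tutorial's PyTorch material picks up.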
PyTorch now supports AMD graphics cards as well (at least some select ones)!! This is huge news, because it means AI and ML development can now be pleasant on an AMD GPU as well! It even seems AMD is treating Linux as a first-class citizen on this one!
🔥 #AMDlabnotes presents another new article - this time to help data scientists and ML practitioners get their #PyTorch or #TensorFlow environment up and running on #AMD #GPUs 🔥
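One nice detail of PyTorch's ROCm builds: AMD GPUs are exposed through the familiar `torch.cuda` API, so the usual device-selection code works unchanged on both vendors. A quick sketch:

```python
# The same device-selection code works on NVIDIA (CUDA) and AMD (ROCm)
# builds of PyTorch, because ROCm is surfaced through the torch.cuda API.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")
if device == "cuda":
    # On a ROCm build this reports the AMD GPU's name.
    print(torch.cuda.get_device_name(0))

x = torch.ones(3, device=device)
print(x.sum().item())  # 3.0
```

So most existing CUDA-targeted training scripts need no changes at all to run on a ROCm install.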
The crybabies who freak out about The Communist Manifesto appearing on university curricula clearly never read it - chapter one is basically a long hymn to capitalism's flexibility and inventiveness, its ability to change form and adapt itself to everything the world throws at it and come out on top:
And the source-code is licensed under a homebrewed license cooked up by Meta's lawyers, a license that only glancingly resembles anything from the #OpenSourceDefinition:
Core to Big Tech companies' "open AI" offerings are tools, like Meta's #PyTorch and Google's #TensorFlow. These tools are indeed "open source," licensed under real OSS terms.
#Zuckerberg boasts about how #PyTorch ropes developers into Meta's stack, "when there are opportunities to make integrations with products, [so] it’s much easier to make sure that developers and other folks are compatible with the things that we need in the way that our systems work."
Tooling is a relatively obscure issue, primarily debated by developers. A much broader debate has raged over training data - how it is acquired, labeled, sorted and used.
I only just today, for the first time, experienced @huggingface and their models, pipelines etc.
Completely new to all this, but holy heck. Having spent some time installing PyTorch and then running one of their pipelines I got my very first result back and holy heck.
I feel... all jittery :-D
And this was all for free. It seems... like it shouldn't be possible.
(1/2) Audiocraft is a new Python library for audio processing and generation with deep learning by Meta. It is based on PyTorch, and it provides AI generative models for producing high-quality audio and includes the following applications:
➡️ MusicGen: a controllable text-to-music model.
➡️ AudioGen: a text-to-sound model.
➡️ EnCodec: a high-fidelity neural audio codec.
➡️ Multi Band Diffusion: an EnCodec-compatible decoder using diffusion.