I've seen sites that sort toots by the number of replies, boosts, and likes they get. That's fine, but I want more.
I want my timeline classified by #machineLearning: I want to create "buckets," train the #algorithm to identify toots that belong in each bucket, and then add or subtract those buckets from my timeline.
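The bucket idea is essentially text classification. Here's a minimal sketch in plain Python of how it could work (the bucket names, example toots, and word-overlap scoring are all made up for illustration; a real setup would use a proper ML library and better features):

```python
from collections import Counter

def tokenize(text):
    # crude tokenizer: lowercase, split on whitespace, hashtags kept as words
    return text.lower().split()

def train_buckets(labeled_toots):
    # labeled_toots: list of (bucket_name, toot_text) pairs
    # build one word-frequency profile per bucket
    buckets = {}
    for bucket, toot in labeled_toots:
        buckets.setdefault(bucket, Counter()).update(tokenize(toot))
    return buckets

def classify(buckets, toot):
    # score each bucket by word overlap with the toot, pick the best
    words = tokenize(toot)
    scores = {b: sum(profile[w] for w in words)
              for b, profile in buckets.items()}
    return max(scores, key=scores.get)

# hypothetical training data
training = [
    ("ml", "training a neural network on my laptop"),
    ("ml", "new machine learning model weights released"),
    ("cats", "my cat knocked the keyboard off the desk"),
    ("cats", "adopted a tiny kitten today"),
]
buckets = train_buckets(training)
print(classify(buckets, "fine-tuning a model on new training data"))  # → "ml"
```

Once toots have bucket labels, "adding or subtracting buckets" is just filtering the timeline on that label.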
It is amazing to see how LLMs are becoming more accessible and easier to train. llama2.c is an open-source project by Andrej Karpathy that lets you train a Llama 2 architecture model in PyTorch locally, export the weights to a binary file, and run inference with a small, dependency-free C program.
Whoever would have predicted that the AI tech to predict/detect problems would itself become a problem?
Some #AI image detecting tools are labeling real #photographs from the #Israel-#Hamas war as fake, creating what an expert calls a "second level of disinformation"
I guess you could say, "That's exactly what a programmer would say," but for me what's most transformative about #ai #machinelearning is not really its ability to reassemble information.
Naive question (maybe): is there a definition of 'computation' akin to the mathematical definition of information (entropy/MI)? I don't mean Turing machines; I mean something that could determine the extent to which a group of neurons/synapses is signalling versus computing. #computation #computerscience #informationtheory #machinelearning #neuroscience
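For reference, the "mathematical definition of information" the question alludes to is Shannon's: the entropy of a random variable and the mutual information between two variables,

```latex
H(X) = -\sum_{x} p(x)\,\log p(x)
\qquad
I(X;Y) = \sum_{x,y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)}
```

The question is whether any quantity plays the analogous role for "computation": something measurable from activity alone, rather than defined relative to a machine model.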
Playing with self-attention in latent diffusion models. This animation illustrates that the model learns to represent 3D scene properties like depth and object semantics. The red dot shows which pixel's self-attention map we're seeing.
It has not been trained on depth maps or segmentation maps, just normal natural images.
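For anyone curious what "a pixel's self-attention map" means mechanically: the chosen pixel's query vector is dotted against every pixel's key vector, and a softmax turns those scores into one weight per pixel. A toy sketch in plain Python with made-up features for a 4-pixel "image" (no real diffusion model involved):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_map(queries, keys, pixel):
    # score one query pixel against every key, scaled by sqrt(dim)
    d = len(queries[pixel])
    scores = [sum(q * k for q, k in zip(queries[pixel], key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)  # one weight per pixel, summing to 1

# toy 2-dim features for 4 pixels (made up for illustration)
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
weights = attention_map(feats, feats, pixel=0)
print(weights)  # pixels 0 and 1 (similar features) get the most weight
```

The animation is essentially this map, computed inside the model's latent space and rendered as an image, as the red dot moves the `pixel` index around.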
Holly Herndon is an #artist known for blending electronic music with #artificialintelligence in her work. Alongside her husband, Mathew Dryhurst, she explores the concept of "embedding" personal identity into #AI models, as seen in their project Holly+, a #machinelearning model trained on Herndon's voice. In this article, the couple reflects on the future of #art and authorship in the age of AI, emphasizing the need for ethical considerations and new economic models. https://www.newyorker.com/magazine/2023/11/20/holly-herndons-infinite-art
“The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”
I cannot recommend enough learning the foundations of #machinelearning today. I see a lot of blog posts on Medium with the latest coding recipes for #LLM work that assume the reader already has the basics, when many coders have only just landed in AI! So keep your cool, and take your time to learn.
Sometimes, #ChatGPT just needs to say, "I don't know," instead of launching into a defensive diatribe. I suppose that IS more like people than a robot tho. However, like... just say I don't know. Stop beating around the bush and making me read to the end only to find the question isn't going to be answered at all.
Out of the nearly 4000 people where I work, I'm the only one who's voiced any opposition to the introduction of AI or ML systems to our workflows. Everyone has become super caught up in the hype. We've got internal workgroups exploring tools like Copilot, ChatGPT, and a bunch of proprietary options that promise to magically solve all kinds of complex issues. Enough "AI no-code" platforms to make my head spin. There's monthly/weekly/daily "AI news" digest emails.
Management just met with a vendor that promises a tool to replace the entire role of DBA while also magically optimizing our data access to impossible speeds (sub-1 ms round trips for ALL queries, updates, and inserts on dozens of TB). Were they skeptical after such outlandish claims? No, they scheduled more demos and asked for a quote.
Some of this stuff is genuinely useful, like Copilot, but most doesn't even pass the sniff test. Legal issues? "our dataset is anonymized, so no one can prove that their content was used." Practical challenges? Who cares, "we can figure that out later". Don't even get me started on the ethical nightmare of mixing AI and PII, and all from a company that makes a big public deal about being socially conscious.
I feel like a wet blanket, constantly raining on everyone's parade with problems, risks, and ethical concerns. But if I don't, then (apparently) no one else will. Some coworkers have privately confirmed that they agree with me, but they aren't willing to publicly oppose the hype. It's so tiring and frustrating, and I think I might just stop and let them ride this train until it wrecks.
“A team of researchers from the University of Kansas has developed a tool to weed out #AI-generated academic writing from the stuff penned by people, with over 99 percent accuracy”
tl;dr: In recent years, the #OpenSource community has neglected the challenges of #Cloud and mobile technologies, which has led to dependencies on proprietary cloud services. This is a problem, especially with regard to digital sovereignty. 1/2
"[...] as companies like Coca-Cola start making huge investments to use generative AI to sell more products, it’s becoming all too clear that this new tech will be used in the same ways as the last generation of digital tools: that what begins with lofty promises about spreading freedom and democracy ends up micro targeting ads at us so that we buy more useless, carbon-spewing stuff."
Thanks to everyone who came out last weekend to my talk at the MIT Museum's Cambridge Science Festival! We had a packed house and a lot of discussion afterwards, so I'm really honored that folks found these to be helpful resources 🥰 Special thanks also to the volunteers who helped us squeeze in some more chairs at the last minute 🪑😅
Version 1.7.1 of the NeuralForecast #Python library was released last month by Nixtla. The NeuralForecast library, as the name implies, provides a neural network framework for time series forecasting. 🧵👇🏼
MLX is Apple's framework for machine learning applications on Apple silicon. The MLX examples repository provides a set of examples for using the MLX framework. This includes examples of:
✅ Text models such as Transformer, Llama, Mistral, and Phi-2
✅ Image models such as Stable Diffusion
✅ Audio and speech recognition with OpenAI's Whisper
✅ Support for some Hugging Face models