This is an exciting new paper that replaces attention in the Transformer architecture with a set of decomposable matrix operations that retain the modeling capacity of Transformer models, while allowing parallel training and efficient RNN-like inference without the use of attention (it doesn't use a softmax)....
This looks amazing, if true. The paper claims state-of-the-art results across literally every metric. Even in the ablation study their model outperforms all others.
I'm a bit suspicious that they don't extend their perplexity numbers to the 13B model, or provide its hyperparameters, but they reference it in the text and in their scaling table.
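For intuition, here's a minimal numpy sketch of the dual form the paper describes: the same retention output computed once in parallel (attention-like, for training) and once recurrently (RNN-like, for inference). This is a single head with no group-norm, gating, or rotation; the decay gamma and the sizes are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 4          # sequence length, head dimension (illustrative)
gamma = 0.9          # per-head decay (illustrative)
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

# Parallel form: (Q K^T ⊙ D) V, where D[n, m] = gamma^(n-m) for n >= m, else 0.
n, m = np.arange(T)[:, None], np.arange(T)[None, :]
D = np.where(n >= m, gamma ** (n - m), 0.0)
parallel = (Q @ K.T * D) @ V

# Recurrent form: S_t = gamma * S_{t-1} + k_t^T v_t ; o_t = q_t S_t.
S = np.zeros((d, d))
recurrent = np.empty((T, d))
for t in range(T):
    S = gamma * S + np.outer(K[t], V[t])
    recurrent[t] = Q[t] @ S

# Both forms produce the same outputs, token for token.
assert np.allclose(parallel, recurrent)
```

The point of the decomposition is that the recurrent form carries only a fixed-size d×d state, so inference cost per token doesn't grow with context length the way a softmax-attention KV cache does.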
A landmark referendum backed by the government would give Indigenous people constitutional recognition and greater say on legislation and policy affecting them.
Why do you say they have no representation? There are a lot of specific bodies operating within the government, advisory and otherwise, with the sole focus of indigenous affairs. And currently, indigenous Australians are overrepresented in parliament relative to their share of the population (more than 4% of parliamentarians are of indigenous descent).
The tasks demanded clear, persuasive, relatively generic writing, which are arguably ChatGPT’s central strengths. They did not require context-specific knowledge or precise factual accuracy.
And:
We required short tasks that could be explicitly described for and performed by a range of anonymous workers online
The graphs also show greater improvement for the lowest performers than for the high performers.
Definitely an encouraging result, but in line with anecdotes that, currently, LLMs are only useful for generic, low-complexity tasks, and are most helpful for low performers.
Johnson & Johnson has sued four doctors who published studies citing links between talc-based personal care products and cancer, escalating an attack on scientific studies that the company alleges are inaccurate.
While in general I'd agree, look at the damage a single false paper on vaccination caused. There were a lot of follow-up studies showing that the paper was wrong, and yet we still have an antivax movement going on.
Clearly, scientists need to be able to publish without fear of reprisal. But to have no recourse when damage is done by a person acting in bad faith is also a problem.
Though I'd argue we have the same issue with the media, where they need to be able to operate freely, but are able to cause a lot of harm.
Perhaps there could be some set of rules which absolve scientists of legal liability. Hopefully those rules would just codify what would ordinarily be followed anyway, and so be no burden to your average researcher.
Configuration-indifferent compliant building blocks that can be assembled like Lego in any configuration or arrangement to produce compliant mechanisms of any complexity that achieve the same degrees of freedom (DOFs) as their constituent building blocks.
The world’s first mechanical neural network that can learn its behavior. It consists of a lattice of compliant mechanisms that constitute an artificially intelligent (AI) architected material that gets better and better at acquiring desired behaviors and properties with increased exposure to unanticipated ambient loading...
Taking 89.3% men from your source at face value, and selecting 12 people at random, that gives a 25.7% chance (about 1 in 4) that a company of that size would be all male.
Add in network effects, risk tolerance for startups, and the hiring practices of larger companies, and that number likely gets even larger.
What's the p-value for a news story? Unless this is some trend from other companies run by Musk, there doesn't seem to be anything newsworthy here.
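Assuming 12 independent draws from an 89.3%-male talent pool, the all-male probability is just a product:

```python
# Probability that 12 independent hires from an 89.3%-male pool are all male
p_male = 0.893
n_staff = 12
p_all_male = p_male ** n_staff
print(f"{p_all_male:.1%}")  # ~25.7%, about 1 in 4
```

Independence is of course a simplification; the network and hiring effects mentioned above would only push the number higher.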
A new ferroelectric polymer that efficiently converts electrical energy into mechanical strain has been developed by Penn State researchers. This material, showing potential for use in medical devices and robotics, overcomes traditional piezoelectric limitations.
So, taking the average bicep volume as 1000 cm³, this muscle could exert 1 tonne of force and contract by 8% (1.6 cm for a 20 cm long bicep), but would require 400 kV and must be kept above 29 degrees Celsius.
Maybe someone with access to the paper can double check the math and get the conversion efficiency from electrical to mechanical.
I expect there's a good trade-off to be made to lower the force but increase the contraction and lower the voltage. Possibly some kind of ratcheting mechanism with tiny cells could be used to overcome the crazy high voltage requirement.
GPT-4 was fine-tuned on English and Chinese instruction examples only (source). There's clearly some western bias in the historic events, but it would have been interesting to also discuss if there was a bias towards Chinese events as well. And, if so, what other languages or prompts may elicit that bias.
As an example, could you get the model to have an English bias with "I'm from America..." and a Chinese bias with "I'm from China..." even when using English?
DALL-E was the first development which shocked me. AlphaGo was very impressive on a technical level, and much earlier than anticipated, but it didn't feel different.
GANs existed, but they never seemed to have the creativity or understanding of prompts that DALL-E demonstrated. Of all things, the image of an avocado-themed chair is still baked into my mind. I remember being gobsmacked by the imagery, and when I'd recovered from that, by just how "simple" the step from what we had before to DALL-E was.
The other thing which surprised me was the step from image diffusion models to 3D and video. We certainly haven't gotten anywhere near the same quality in those domains yet, but they felt so far removed from the image domain that I assumed we'd need some major revolution in how we approached the problem. What surprised me most was just how fast the transition from images to video happened.
Vision transformers (ViTs) have significantly changed the computer vision landscape and have periodically exhibited superior performance in vision tasks compared to convolutional neural networks (CNNs). Although the jury is still out on which model type is superior, each has unique inductive biases that shape their learning and...
I find the link valuable. Despite the proliferation of AI in pop culture, actual discussion of machine learning research is still niche. The community on Reddit is quite valuable and took a long time to form.
China's Baichuan Intelligent Technology Unveils Open-Source 13B Parameter Large Language Model (Article from 11.07.2023) (www.maginative.com)
Retentive Network: A Successor to Transformer for Large Language Models (arxiv.org)
This is an exciting new paper that replaces attention in the Transformer architecture with a set of decomposable matrix operations that retain the modeling capacity of Transformer models, while allowing parallel training and efficient RNN-like inference without the use of attention (it doesn't use a softmax)....
How a plan to recognize Australia's indigenous people became the country's latest culture war (www.nbcnews.com)
A landmark referendum backed by the government would give Indigenous people constitutional recognition and greater say on legislation and policy affecting them.
MIT study: ChatGPT increases productivity for human workers (mashable.com)
Hard data that ChatGPT helps with work.
Johnson & Johnson sues researchers who linked talc to cancer (www.reuters.com)
Johnson & Johnson has sued four doctors who published studies citing links between talc-based personal care products and cancer, escalating an attack on scientific studies that the company alleges are inaccurate.
How old is our universe? New study says Big Bang might have happened 27 billion years ago (www.usatoday.com)
A study published this week in an astronomical journal suggests our universe could be 26.7 billion years old, or about twice as old as we thought.
What Frame Rate is Needed to Simulate Reality? (www.youtube.com)
The frame rate needed to completely eliminate all the artifacts associated with discrete frame rates
Tunable Stiffness Compliant Mechanism with Bistable Switch (www.youtube.com)
Translational binary stiffness compliant mechanism that achieves two different states of stiffness by being triggered using a simple bistable switch
Lego-like Compliant-mechanism Building Blocks that Maintain their DOFs (www.youtube.com)
Configuration-indifferent compliant building blocks that can be assembled like Lego in any configuration or arrangement to produce compliant mechanisms of any complexity that achieve the same degrees of freedom (DOFs) as their constituent building blocks.
Compliant Mechanisms that learn - Mechanical Neural Network Architected Materials (www.youtube.com)
The world’s first mechanical neural network that can learn its behavior. It consists of a lattice of compliant mechanisms that constitute an artificially intelligent (AI) architected material that gets better and better at acquiring desired behaviors and properties with increased exposure to unanticipated ambient loading...
AnimateDiff: New Approach For Animating Diffusion Models Without Specific Tuning (animatediff.github.io)
Elon Musk’s new AI company is staffed entirely by men (qz.com)
There's not much information about xAI, but diversity is already an issue
OC But if that's not the solution, then what is?
OC It's the only logical explanation
OC Ah, General Kenobi
OC Macros have only ever made my life easier
OC Just finishing up a PR
Artificial Muscles Flex for the First Time: Ferroelectric Polymer Innovation in Robotics (scitechdaily.com)
A new ferroelectric polymer that efficiently converts electrical energy into mechanical strain has been developed by Penn State researchers. This material, showing potential for use in medical devices and robotics, overcomes traditional piezoelectric limitations.
A tiny RWKV model with 2.9M (!) params can solve 4.2379*564.778-1209.01 with CoT (twitter.com)
Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models (controlavideo.github.io)
Open source "Controlnet for video" compatible with any stable-diffusion-v1-5 based model
World History Through the Lens of AI (towardsdatascience.com)
What historical knowledge do language models encode?
Use it or lose it
Quantum mechanics can't hurt you if you don't understand it
What AI developments have surprised you the most? (waitbutwhy.com)
Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing (arxiv.org)
Vision transformers (ViTs) have significantly changed the computer vision landscape and have periodically exhibited superior performance in vision tasks compared to convolutional neural networks (CNNs). Although the jury is still out on which model type is superior, each has unique inductive biases that shape their learning and...