"Xi Jinping's political ideology
China’s latest artificial intelligence chatbot is trained on President Xi Jinping’s doctrine in a stark reminder of the ideological parameters that Chinese AI models should abide by"
It's fashionable to criticize #LLMs, but can you think of another human invention that allows us to spend the energy budget of Tanzania to lift shitposts out of context and present them as if they were authoritative knowledge?
"The vision is that there will be a... Jarvis assistant in your phone that locks you into their ecosystem so hard that you'll never leave," Jenson continued, referencing the superpowered AI assistant in Marvel comics that can make meals for Iron Man and help him fight bad guys. "That vision is pure catnip. The fear is that they can't afford to let someone else get there first."
"It’s simply too early to get into bed with the companies that trained their models on professional content without permission and have no compelling case for how they will help build the news business." #AI https://werd.io/view/6650ad27ca5e257d7d0600f2
“Training on Reddit posts considered harmful to model accuracy.”
Peer reviewers gave glowing comments like “No shit Sherlock!”
Asked for comment, the groundbreaking paper’s author, Captain Obvious, simply stated, “Have you ever even been on Reddit?” He also encouraged Google to train on 4chan content instead, as that will totally “fix” everything.
i’m very excited about the interpretability work that #anthropic has been doing with #LLMs.
in this paper, they used dictionary learning, a classical machine learning technique, to discover concepts. if a concept like “golden gate bridge” is present in the text, they can identify the associated pattern of neuron activations.
this means that you can monitor LLM responses for concepts and behaviors, like “illicit behavior” or “fart jokes”
this is great work. i’m excited to see where this goes next
i hope #anthropic exposes this via their API. at this point in time, most of the promising interpretability work is only available on open source models that you can run yourself. it would be great to also have them available from #AI vendors
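to make the monitoring idea above concrete, here’s a minimal toy sketch of flagging a concept via a sparse-autoencoder-style encoder. everything here is made up for illustration — the weights, the feature index, and the activation vector are all hypothetical; a real sparse autoencoder is trained on a model’s internal activations, and real vendors don’t currently expose these via API.

```python
import numpy as np

def sae_features(activation, w_enc, b_enc):
    """Encode a model activation into sparse feature activations (ReLU)."""
    return np.maximum(0.0, w_enc @ activation + b_enc)

def concept_present(activation, w_enc, b_enc, feature_idx, threshold=0.5):
    """Flag a response if the monitored feature fires above a threshold."""
    return bool(sae_features(activation, w_enc, b_enc)[feature_idx] > threshold)

# Hypothetical 3-feature encoder over 4-dimensional activations.
w_enc = np.array([[1.0, 0.0,  0.0, 0.0],
                  [0.0, 1.0, -1.0, 0.0],
                  [0.0, 0.0,  0.0, 1.0]])
b_enc = np.array([-0.2, 0.0, -0.1])

# Pretend this activation vector came from the LLM mid-response.
act = np.array([1.0, 0.1, 0.0, 0.05])

# Suppose feature 0 were the "golden gate bridge" (or "fart jokes") feature:
flagged = concept_present(act, w_enc, b_enc, feature_idx=0)
print(flagged)  # True — feature 0 fires at 0.8, above the 0.5 threshold
```

the point is just that once you have the learned features, monitoring reduces to a threshold check per feature per token — cheap enough that a vendor could plausibly offer it as an API field.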
"AI has not created a situation where we need new principles. The established principles remain the crucial ones. But we do need to think through the changing situations anew, in order to figure out how to respond well to a rapidly changing landscape," says Hallvard Fossheim from @UiB.
Although I am very enthusiastic about the functionality of Microsoft Recall, it's a potential security nightmare. We should not be fooled by promises about local encrypted data: that data can be compromised, and through federated learning it could still reach Microsoft. Part of the compute will probably also happen in the cloud. #Microsoft #Recall #cybersecurity #privacy #AI
@arstechnica it doesn’t need work; it needs a fundamental rethink of whether the technology makes sense outside of specific research or narrow use cases.
It should never have made it out of research labs or opt-in curiosities for technologists.
None of these details are interesting, and they’re almost not worth reporting on.
This is a stupid, stupid bubble, and saying they need to work on parts of it implies we’re close and just need refinement, which is concretely untrue.