I feel there has to be a way of training neural networks to recognise the influence of their training data on the output.
This would probably involve training a complementary indexing network plus a database that could then, via a kind of "reverse-training" lookup, resolve and offer, at some predetermined accuracy, the #copyright-viable sources for each generated #aiart
I need some help though. A working proof of concept would show that the companies know it can be done; they just don't want to do it.
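To make the "indexing network + database" idea concrete, here is a deliberately simplified sketch: store an embedding per training source, then attribute a generated output to its nearest indexed sources by cosine similarity. Everything here is hypothetical (the `source_ids`, the random stand-in embeddings, the `attribute` helper), and real data attribution, e.g. influence functions, is far harder than this.

```python
import numpy as np

# Hypothetical "index": one embedding per training source.
# Random stand-ins here; a real system would embed actual artworks.
rng = np.random.default_rng(1)
source_ids = ["artwork_001", "artwork_002", "artwork_003", "artwork_004"]
index = rng.normal(size=(4, 16))
index /= np.linalg.norm(index, axis=1, keepdims=True)  # unit-normalise rows

def attribute(output_embedding, top_k=2):
    """Return the top_k indexed sources most similar to a generated output."""
    q = output_embedding / np.linalg.norm(output_embedding)
    scores = index @ q                       # cosine similarity to each source
    best = np.argsort(scores)[::-1][:top_k]  # highest similarity first
    return [(source_ids[i], float(scores[i])) for i in best]

# Simulate an output that leans heavily on the third indexed source
generated = index[2] + 0.1 * rng.normal(size=16)
print(attribute(generated))
```

The "predetermined accuracy" from the post would correspond to thresholding the similarity score before reporting a source as copyright-relevant.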
Most of the artificial neural net simulation research I have seen (say, at venues like NeurIPS) takes a very simple conceptual approach to analysing simulation results: treat everything as independent observations with fixed-effects conditions, when it might be better conceptualised as random effects and repeated measures. Do other people think this? Does anyone have views on whether more complex analyses would be worthwhile, and whether the typical publication venues would accept them? Are there any guides to appropriate analyses for simulation results, e.g. what to do with results from multi-fold cross-validation? (I presume the results are not independent across folds because they share cases.)
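A toy numpy illustration of the repeated-measures point: if two models are scored on the same folds, a shared per-fold difficulty induces correlation, so the paired differences have much lower variance than an "independent samples" view suggests. The numbers below are simulated assumptions, not real benchmark results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_folds = 10

# Shared per-fold effect: some folds are just harder for *both* models
fold_difficulty = rng.normal(0, 0.05, n_folds)
acc_a = 0.80 + fold_difficulty + rng.normal(0, 0.01, n_folds)
acc_b = 0.82 + fold_difficulty + rng.normal(0, 0.01, n_folds)

# Treating the two score sets as independent sums their variances...
var_independent = acc_a.var(ddof=1) + acc_b.var(ddof=1)
# ...but pairing within folds cancels the shared fold effect
var_paired = (acc_b - acc_a).var(ddof=1)

print(f"independent-view variance: {var_independent:.5f}")
print(f"paired-difference variance: {var_paired:.5f}")
```

The paired analysis (a repeated-measures view) is far more sensitive to the 0.02 accuracy gap, which is the gist of the random-effects / repeated-measures framing.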
The Machine Learning with Graphs course by Prof. Jure Leskovec from Stanford University (CS224W) focuses on different methods for analyzing massive graphs and complex networks and extracting insights using machine learning models and data mining techniques. 🧵🧶👇🏼
Henry Markram, of spike timing dependent plasticity (STDP) fame and infamous for the Human Brain Project (HBP), just got a US patent for "Constructing and operating an artificial recurrent neural network": https://patents.google.com/patent/US20230019839A1/en
How is that not something thousands of undergrads are doing with PyTorch every week?
The goal, says the patent text, is "methods and processes for constructing and operating a recurrent artificial neural network that acts as a 'neurosynaptic computer'" – the neurosynaptic-computer part might be patentable, but patenting the construction and operation of an RNN as such would be ludicrous overreach.
Seems likely that the legal office at Markram's research institution overreached and got away with it. Good luck enforcing this patent, though: Markram did not invent RNNs.
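For context on the "thousands of undergrads" point: constructing and operating a plain recurrent net is a standard exercise. Here is a minimal Elman-style RNN in bare numpy (a generic textbook cell, nothing from the patent); PyTorch's `nn.RNN` does the same in one line.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_hidden = 4, 8

# Random weights for a plain Elman recurrent cell
W_in = rng.normal(0, 0.1, (n_hidden, n_in))
W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
b = np.zeros(n_hidden)

def run_rnn(inputs):
    """Operate the recurrent net: h_t = tanh(W_in x_t + W_rec h_{t-1} + b)."""
    h = np.zeros(n_hidden)
    states = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h + b)
        states.append(h)
    return np.stack(states)

seq = rng.normal(size=(5, n_in))  # a 5-step input sequence
states = run_rnn(seq)
print(states.shape)               # one hidden state per time step
```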
"Two dangerous falsehoods afflict decisions about artificial intelligence:
First, that neural networks are impossible to understand. Therefore, there is no point in trying.
Second, that neural networks are the only and inevitable method for achieving advanced AI. Therefore, there is no reason to develop better alternatives."
JOSS publishes articles about open source research software. It is a free, open-source, community-driven and developer-friendly online journal. JOSS reviews involve downloading and installing the software and inspecting the repository and submitted paper for key elements.
Please reach out if you are interested in reviewing this paper or know someone who could.
The development of neural networks to create artificial intelligence in computers was originally inspired by how biological systems work. These "neuromorphic" networks, however, run on hardware that looks nothing like a biological brain, which limits performance.
Now that #NeuralNetworks have had repeated big successes over the last 15 years, we are starting to look for better ways to implement them. Some new ones for me:
#Groq notes that NNs are bandwidth-bound from memory to GPU. They built an LPU specifically designed for #LLMs https://groq.com/
A wild one — exchange the silicon for moving parts, good old Newtonian physics. Dramatic drop in power utilization and maps to most NN architectures (h/t @FMarquardtGroup)
Very nice picture shared by Ronald van Loon on X. You can debate whether the categories are complete and correct, but it illustrates that the field of AI is much more than just transformers/LLMs. #AI #MachineLearning #NeuralNetworks #DeepLearning #LLM #Transformers
The Neural Networks from Scratch in #Python 🐍 course by Harrison Kinsley introduces neural networks by coding them from scratch. The course is based on Harrison's book (along with Daniel Kukiela), and it covers the following topics:
✅ Core linear algebra and math operators
✅ Neural network architecture
✅ Different loss functions
✅ Optimization and derivatives
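To give a flavor of the from-scratch approach (this is my own minimal sketch, not code from the book), a dense layer is just a matrix multiply plus a bias, and a forward pass chains layers through an activation:

```python
import numpy as np

class DenseLayer:
    """A fully connected layer: output = inputs @ W + b."""
    def __init__(self, n_inputs, n_neurons, rng):
        self.weights = 0.01 * rng.normal(size=(n_inputs, n_neurons))
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        return inputs @ self.weights + self.biases

def relu(x):
    """Rectified linear unit: zero out negative activations."""
    return np.maximum(0, x)

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 2))        # batch of 3 samples, 2 features each
layer1 = DenseLayer(2, 4, rng)
layer2 = DenseLayer(4, 1, rng)
out = layer2.forward(relu(layer1.forward(X)))
print(out.shape)                   # one output per sample
```

The course then builds on exactly this kind of scaffolding to add loss functions, derivatives, and optimization.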
Pleased to share my latest research, "Zero-shot counting with a dual-stream neural network model", about a glimpsing neural network model that learns visual structure (here, number) in a way that generalises to new visual contents. The model replicates several neural and behavioural hallmarks of numerical cognition.
As some tout the good tidings and marvels of AI, LLMs, and marketing obfuscation ad nauseam, let's not lose our grasp on how much our own ethics affect the real impact these tools have on all of us. And if we can't do that, how are we supposed to instill a sense of ethics in these new conscious minds we pride ourselves on creating?
Researchers grow bio-inspired polymer brains for artificial neural networks (phys.org)