Very nice picture shared by Ronald van Loon on X. You can debate whether the categories are complete and correct, but it illustrates that the field of AI is much more than just transformers/LLMs. #AI #MachineLearning #NeuralNetworks #DeepLearning #LLM #Transformers
I feel there has to be a way of training neural networks to recognise the influence of their training data on the output.
This would probably involve training a complementary indexing network plus a database that could then, via "reverse-training" resolution, offer at some predetermined accuracy the #copyright-viable sources for each generated #aiart.
I need some help, though. A proof of concept would show that the companies know it can be done; they just don't want to.
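One crude way to prototype the idea (my own sketch under big assumptions, not an existing system): embed every training item, store the embeddings next to source metadata, and at generation time return the nearest training items as candidate sources. The `embed` and `attribute` functions below are hypothetical stand-ins; a serious attempt would use a learned encoder or influence-function estimates rather than random projections.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_in, dim_emb = 512, 64
proj = rng.normal(size=(dim_emb, dim_in))  # stand-in for a learned encoder

def embed(x: np.ndarray) -> np.ndarray:
    """Hypothetical embedding: project to 64 dims and L2-normalise."""
    v = proj @ x
    return v / np.linalg.norm(v)

# The "index": one embedding per training item, kept beside source metadata.
train_items = rng.normal(size=(1000, dim_in))      # fake training data
sources = [f"work_{i}" for i in range(1000)]       # e.g. rights holders
index = np.stack([embed(x) for x in train_items])  # shape (1000, 64)

def attribute(generated: np.ndarray, k: int = 5):
    """Return the k training sources most similar to a generated output."""
    q = embed(generated)
    scores = index @ q                   # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [(sources[i], round(float(scores[i]), 3)) for i in top]

print(attribute(rng.normal(size=dim_in)))
```

Nearest-neighbour similarity is only a proxy for influence, but it would at least give a measurable, auditable starting point.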
The Neural Networks from Scratch in #Python 🐍 course by Harrison Kinsley introduces neural networks by coding them from scratch. The course is based on the book Harrison wrote with Daniel Kukiela, and it covers the following topics (a tiny from-scratch sketch follows the list):
✅ Core linear algebra and math operators
✅ Neural network architecture
✅ Different loss functions
✅ Optimization and derivatives
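For a taste of the from-scratch style, here is my own minimal sketch in the spirit of the book's early chapters (not code from the course itself): a dense layer and ReLU activation in plain NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

class DenseLayer:
    """Fully connected layer: output = inputs @ weights + biases."""
    def __init__(self, n_inputs: int, n_neurons: int):
        self.weights = 0.01 * rng.normal(size=(n_inputs, n_neurons))
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs: np.ndarray) -> np.ndarray:
        return inputs @ self.weights + self.biases

def relu(x: np.ndarray) -> np.ndarray:
    """Rectified linear activation, applied element-wise."""
    return np.maximum(0.0, x)

# Forward pass through two layers for a batch of 3 samples, 4 features each.
X = rng.normal(size=(3, 4))
hidden = relu(DenseLayer(4, 5).forward(X))
print(DenseLayer(5, 2).forward(hidden))
```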
Most of the artificial neural net simulation research I have seen (say, at venues like NeurIPS) takes a very simple conceptual approach to analysing simulation results: treat everything as independent observations, with conditions as fixed effects, when it might be better conceptualised as random effects and repeated measures. Do other people think this? Does anyone have views on whether it would be worthwhile doing more complex analyses, and whether the typical publication venues would accept them? Are there any guides to appropriate analyses for simulation results, e.g. what to do with results from multi-fold cross-validation? (I presume the results are not independent across folds because they share cases.)
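For concreteness, here is a hedged sketch of the kind of analysis I mean: a random-intercept mixed model fitted with statsmodels, treating each cross-validation fold as a grouping factor so that scores within a fold are allowed to be correlated. The data frame, column names, and effect sizes are all made up purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical results: two methods scored on the same 10 CV folds, so
# scores within a fold share that fold's difficulty and are not independent.
folds = np.repeat(np.arange(10), 2)
method = np.tile(["A", "B"], 10)
fold_difficulty = rng.normal(0, 0.05, size=10)[folds]  # shared per-fold effect
score = (0.80 + (method == "B") * 0.02                 # small true method effect
         + fold_difficulty + rng.normal(0, 0.01, size=20))
df = pd.DataFrame({"fold": folds, "method": method, "score": score})

# Fixed effect of method, random intercept per fold (repeated measures).
model = smf.mixedlm("score ~ method", df, groups=df["fold"]).fit()
print(model.summary())
```

Compared with a plain t-test over all 20 rows, this at least respects the pairing of methods within folds; it still does not address case overlap between folds, which is a harder problem.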
As some tout the good tidings and marvels of AI, LLMs, and marketing obfuscation ad nauseam, let's not lose our grasp on how much our own ethics affect the real impact these tools have on all of us. And if we can't do that, how are we supposed to instill a sense of ethics in these new conscious minds we pride ourselves on creating?
14 years after Alan Turing's death, an unpublished manuscript emerged in which he suggested the idea of a "disordered" computer that anticipated the rise of connectionism.
"Two dangerous falsehoods afflict decisions about artificial intelligence:
First, that neural networks are impossible to understand. Therefore, there is no point in trying.
Second, that neural networks are the only and inevitable method for achieving advanced AI. Therefore, there is no reason to develop better alternatives."