jess, to Cognition
@jess@neuromatch.social avatar

Pleased to share my latest research "Zero-shot counting with a dual-stream neural network model", about a glimpsing neural network model that learns visual structure (here, number) in a way that generalises to new visual contents. The model replicates several neural and behavioural hallmarks of numerical cognition.

#neuralnetworks #cognition #neuroscience #generalization #vision #enactivism #enactiveCognition #cognitivescience #CognitiveNeuroscience #computationalneuroscience

https://arxiv.org/abs/2405.09953
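Not the paper's actual architecture, but a minimal sketch of the general idea in PyTorch: a "what" stream over glimpse contents and a "where" stream over glimpse locations feed a recurrent counter (all layer sizes here are assumptions):

```python
import torch
import torch.nn as nn

class DualStreamGlimpseCounter(nn.Module):
    """Illustrative only: a recurrent net that counts items across glimpses.

    A "where" stream encodes glimpse coordinates and a "what" stream encodes
    glimpse contents; counting can lean on the spatial stream, which is the
    kind of separation that lets number generalise to new visual contents.
    """
    def __init__(self, glimpse_dim=64, loc_dim=2, hidden=128, max_count=9):
        super().__init__()
        self.what = nn.Sequential(nn.Linear(glimpse_dim, 32), nn.ReLU())
        self.where = nn.Sequential(nn.Linear(loc_dim, 32), nn.ReLU())
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, max_count + 1)

    def forward(self, glimpses, locations):
        # glimpses: (batch, steps, glimpse_dim); locations: (batch, steps, 2)
        z = torch.cat([self.what(glimpses), self.where(locations)], dim=-1)
        h, _ = self.rnn(z)
        return self.readout(h[:, -1])   # count logits after the last glimpse

model = DualStreamGlimpseCounter()
logits = model(torch.randn(8, 12, 64), torch.rand(8, 12, 2))
print(logits.shape)   # torch.Size([8, 10])
```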

gimulnautti, to mathematics
@gimulnautti@mastodon.green avatar

#mathematics people:

I feel there has to be a way of training neural networks to recognise the influence of their training data on the output.

This would probably involve training a complementary indexing network plus a database, which could then resolve, by something like "reverse training", the #copyright-viable sources for each generated #aiart at some predetermined accuracy.

I need some help though. A proof would show that the companies know it can be done, but just don't want to do it.

#neuralnetworks
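The retrieval half of that idea is sketchable; everything below is hypothetical (embed() stands in for the proposed indexing network, and the training set is random vectors with stand-in source records). True influence estimation, e.g. influence functions, is much harder than this nearest-neighbour lookup:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Placeholder for a learned embedding / indexing network.
    return x / np.linalg.norm(x)

# Offline: index every training item's embedding next to its source record.
train_embeddings = rng.standard_normal((1000, 64))
train_embeddings /= np.linalg.norm(train_embeddings, axis=1, keepdims=True)
sources = [f"work-{i}" for i in range(1000)]   # stand-in copyright records

def attribute(generated, k=5, threshold=0.8):
    """Return the k most similar training sources above a similarity cutoff."""
    q = embed(generated)
    sims = train_embeddings @ q
    top = np.argsort(-sims)[:k]
    return [(sources[i], float(sims[i])) for i in top if sims[i] >= threshold]

# An output close to training item 42 resolves back to its source record:
print(attribute(train_embeddings[42] + 0.05 * rng.standard_normal(64)))
```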

rzeta0, to machinelearning
@rzeta0@mastodon.social avatar

... cover of the second edition of the German translation is looking good!

#machinelearning #python #neuralnetworks

edrogers, to Transformers
@edrogers@fosstodon.org avatar

My talk on the evolution of the Transformer for @madpy this month went really well. We covered a lot, and folks really seemed to enjoy it. After the talk, I got some requests to share my slide deck, so it's now on the MadPy event page: https://madpy.com/meetups/2024/3/14/20240314-the-evolution-of-the-transformer/

RossGayler, to machinelearning
@RossGayler@aus.social avatar

Most of the artificial neural net simulation research I have seen (say, at venues like NeurIPS) seems to take a very simple conceptual approach to the analysis of simulation results: treat everything as independent observations with fixed-effects conditions, when it might be better conceptualised as random effects and repeated measures. Do other people think this? Does anyone have views on whether it would be worthwhile doing more complex analyses, and whether the typical publication venues would accept them? Are there any guides to appropriate analyses for simulation results, e.g. what to do with results from multi-fold cross-validation? (I presume the results are not independent across folds, because the folds share cases.)

@cogsci
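For what it's worth, the random-effects version is a one-liner in statsmodels; a minimal sketch on synthetic data, with a random intercept per training run (a fuller model would also handle the fold-level correlation the post mentions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for seed in range(10):                  # repeated measures: same seeds reused
    seed_offset = rng.normal(0, 0.02)   # run-level random effect
    for arch in ["A", "B"]:             # fixed-effect condition
        for fold in range(5):           # folds share cases: not independent
            rows.append({"seed": seed, "arch": arch, "fold": fold,
                         "accuracy": 0.80 + (0.03 if arch == "B" else 0.0)
                                     + seed_offset + rng.normal(0, 0.01)})
df = pd.DataFrame(rows)

# Mixed model: arch as fixed effect, seed as random intercept.
fit = smf.mixedlm("accuracy ~ arch", df, groups=df["seed"]).fit()
print(fit.summary())
```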

Gert, to ai
@Gert@qoto.org avatar

Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It

https://tinyurl.com/4d6pfpdu

dmm, to random
@dmm@mathstodon.xyz avatar

14 years after Alan Turing's death, an unpublished manuscript emerged where he suggested the idea of a "disordered" computer that anticipated the rise of connectionism.

https://www.cs.virginia.edu/~robins/Alan_Turing%27s_Forgotten_Ideas.pdf

strypey, to ai

"Two dangerous falsehoods afflict decisions about artificial intelligence:

  • First, that neural networks are impossible to understand. Therefore, there is no point in trying.

  • Second, that neural networks are the only and inevitable method for achieving advanced AI. Therefore, there is no reason to develop better alternatives."

https://betterwithout.ai/backpropaganda

fabrice13, to ArtificialIntelligence Italian
@fabrice13@neuromatch.social avatar

On backpropagation vs. predictive coding and energy-based models
Just skimmed through "Inferring neural activity before plasticity as a foundation for learning beyond backpropagation" by Yuhang Song et al. https://www.nature.com/articles/s41593-023-01514-1

Quite interesting but confusing, as I come from DL.
If I got it right, the authors focus on showing how and why biological neural networks would benefit from being Energy Based Models for Predictive Coding, instead of Feedforward Networks employing backpropagation.
It took me a while to reach the part where they explain how to optimize a ConvNet in PyTorch as an EB model, but they do: there is an algorithm and formulae. I'm curious how long and stable training is, though, and whether all that generalizes to typical computer vision architectures (ResNets, MobileNets, ViTs, ...).
Code is also at https://github.com/YuhangSong/Prospective-Configuration
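Not the paper's prospective-configuration algorithm verbatim (that's in the repo above), but a minimal toy sketch of the two-phase pattern it builds on: relax the activities to minimise an energy first, then update the weights at the relaxed state.

```python
import torch

torch.manual_seed(0)
W1 = (0.1 * torch.randn(16, 8)).requires_grad_()
W2 = (0.1 * torch.randn(4, 16)).requires_grad_()

def energy(x0, x1, x2):
    # Sum of squared prediction errors between consecutive layers.
    return ((x1 - torch.tanh(x0 @ W1.T)) ** 2).sum() + \
           ((x2 - torch.tanh(x1 @ W2.T)) ** 2).sum()

x0 = torch.randn(32, 8)        # input (clamped)
target = torch.randn(32, 4)    # output (clamped during learning)

# Phase 1 - inference: relax hidden activity x1 to minimise the energy.
x1 = torch.tanh(x0 @ W1.T).detach().requires_grad_()
for _ in range(50):
    g, = torch.autograd.grad(energy(x0, x1, target), x1)
    x1 = (x1 - 0.1 * g).detach().requires_grad_()

# Phase 2 - plasticity: one weight step, taken at the relaxed activities.
energy(x0, x1, target).backward()
with torch.no_grad():
    for W in (W1, W2):
        W -= 0.01 * W.grad
        W.grad = None
```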

I would like to sit at my laptop for a few hours and try to understand it better, but I think in the coming days I will move on to Modern Hopfield Networks. These too are EB models, and there's an energy function that is optimised by the Transformer's dot-product attention.
I think I got what attention does in Transformers, so I'm quite curious to see in what sense it's equivalent to consolidating/retrieving patterns in a Dense Associative Memory. In general, I think we're treating memory wrong with our deep neural networks. I see most of them as sensory processing, a shortcut to "reasoning" without short- or long-term memory surrogates, though I can see how some current features may serve similar purposes...
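The Hopfield/attention link mentioned above is compact enough to sketch; following the update rule from "Hopfield Networks is All You Need" (numbers illustrative), one retrieval step is exactly dot-product attention with the stored patterns as both keys and values:

```python
import numpy as np

def hopfield_retrieve(X, q, beta=8.0, steps=3):
    """Modern Hopfield update: q <- X.T @ softmax(beta * X @ q)."""
    for _ in range(steps):
        a = np.exp(beta * X @ q)
        q = X.T @ (a / a.sum())
    return q

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 32))                 # six stored patterns (rows)
X /= np.linalg.norm(X, axis=1, keepdims=True)
noisy = X[2] + 0.3 * rng.standard_normal(32)     # corrupted cue
print(np.argmax(X @ hopfield_retrieve(X, noisy)))  # -> 2: pattern recovered
```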

ramikrispin, (edited ) to machinelearning
@ramikrispin@mstdn.social avatar

(1/3) Machine Learning with Graphs course 🚀

The Machine Learning with Graphs course by Prof. Jure Leskovec from Stanford University (CS224W) focuses on different methods for analyzing massive graphs and complex networks and extracting insights using machine learning models and data mining techniques. 🧵🧶👇🏼
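As a taste of the course's subject matter, here is the basic GCN propagation rule (Kipf & Welling) on a toy graph; a minimal NumPy sketch, not course material:

```python
import numpy as np

# Toy 4-node graph: each node averages its degree-normalised neighbourhood,
# then applies a shared linear map - one graph-convolution layer.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                           # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)
P = D_inv_sqrt @ A_hat @ D_inv_sqrt             # symmetric normalisation

H = np.random.default_rng(0).standard_normal((4, 8))    # node features
W = np.random.default_rng(1).standard_normal((8, 16))   # learnable weights
H_next = np.maximum(P @ H @ W, 0)               # ReLU(P H W)
print(H_next.shape)                             # (4, 16)
```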

stefaneiseleart, to aiart German
@stefaneiseleart@mograph.social avatar

Artificial neural networks, animation loop
created with Blender3D Geometry Nodes -> DepthMap Render -> HotShotXL, AnimateDiff, IP-Adapters. I got the workflow for this process from a YouTube tutorial by www.purz.xyz

HxxxKxxx, to random German
@HxxxKxxx@det.social avatar
  1. A neural network is not a brain, it's maths
  2. You can look inside one, it just doesn't tell you much
  3. Images and words can be turned into numbers
  4. Neural networks can be "tricked" (see the sketch below)

Annika Rüll
Lass mal das Innere eines Neuronalen Netzes ansehen!
https://fahrplan.events.ccc.de/congress/2023/fahrplan/events/11784.html
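Point 4 is easy to make concrete. A minimal sketch of the classic trick (FGSM, Goodfellow et al.), with a dummy model standing in for a real classifier:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """Nudge each input value by eps in the direction that increases the
    loss - often enough to flip the prediction while the change stays
    imperceptible to a human."""
    x = x.clone().requires_grad_()
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach().clamp(0, 1)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())   # perturbation bounded by eps
```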

profoundlynerdy, to Neuroscience
@profoundlynerdy@bitbang.social avatar

Is there anything like an model for , biological or artificial?

mstimberg, to foss
@mstimberg@neuromatch.social avatar

Hi, I am looking for reviewers for the following submission to @joss:

“Φ-ML: A Science-oriented Math and Neural Network Library for Jax, PyTorch, TensorFlow & NumPy”

Repo: https://github.com/tum-pbs/PhiML
Paper: https://github.com/openjournals/joss-papers/blob/joss.05823/joss.05823/10.21105.joss.05823.pdf
Pre-review: https://github.com/openjournals/joss-reviews/issues/5823

JOSS publishes articles about open source research software. It is a free, open-source, community-driven and developer-friendly online journal. JOSS reviews involve downloading and installing the software, and inspecting the repository and submitted paper for key elements.

Please reach out if you are interested in reviewing this paper or know someone who could.

albertcardona, to Neuroscience
@albertcardona@mathstodon.xyz avatar

Henry Markram, of spike timing dependent plasticity (STDP) fame and infamous for the Human Brain Project (HBP), just got a US patent for "Constructing and operating an artificial recurrent neural network": https://patents.google.com/patent/US20230019839A1/en

How is that not something thousands of undergrads are doing with PyTorch every week?

The goal, says the patent text, is <<methods and processes for constructing and operating a recurrent artificial neural network that acts as a “neurosynaptic computer”>> – which might be patentable; patenting the construction and operation of an RNN in general, on the other hand, is an overreach, and ludicrous.

It seems likely that the legal office at Markram's research institution overreached and got away with it. Good luck enforcing this patent, though: Markram did not invent RNNs.
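For scale, here is "constructing and operating a recurrent artificial neural network", undergrad edition:

```python
import torch

rnn = torch.nn.RNN(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 3, 10)        # (seq_len, batch, features)
output, h_n = rnn(x)             # operate the constructed RNN
print(output.shape, h_n.shape)   # [5, 3, 20] and [2, 3, 20]
```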

#neuroscience #RNN #NeuralNetworks #ANN #RidiculousPatents

tyrell_turing, to Neuroscience
@tyrell_turing@fediscience.org avatar

1/ What is the organization of mouse visual cortex across regions?

In our latest work led by Rudi Tong and Stuart Trenholm, now out on bioRxiv (https://biorxiv.org/content/10.1101/2023.11.03.565500v1) we mapped the "feature landscape" of mouse visual cortex.

Here is a thread about what we found.

kellogh, to LLMs
@kellogh@hachyderm.io avatar

Now that neural networks have had repeated big successes over the last 15 years, we are starting to look for better ways to implement them. Some new ones for me:

Groq notes that NNs are bandwidth-bound from memory to GPU. They built an LPU specifically designed for LLMs
https://groq.com/

A wild one — exchange the silicon for moving parts, good old Newtonian physics. Dramatic drop in power utilization and maps to most NN architectures (h/t @FMarquardtGroup)

https://idw-online.de/de/news820323
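The bandwidth claim is easy to sanity-check with back-of-envelope numbers (all figures illustrative): generating one token at batch size 1 has to stream every weight from memory, so memory bandwidth, not FLOPs, sets the ceiling.

```python
params = 70e9              # 70B-parameter model (assumed)
bytes_per_param = 2        # fp16
hbm_bandwidth = 2.0e12     # ~2 TB/s high-end GPU memory bandwidth (assumed)

bytes_per_token = params * bytes_per_param
print(hbm_bandwidth / bytes_per_token)   # ~14 tokens/s from memory alone
```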

tdverstynen, to Neuroscience
@tdverstynen@neuromatch.social avatar

Are you interested in cortico-basal ganglia networks and would like to model them, but only have a basic proficiency in Python or computational modeling in general?

Well then, I’m happy to announce the release of CBGTPy, a software package for running biologically-realistic simulations of the cortico-basal ganglia-thalamic (CBGT) networks in a dynamic range of tasks. The latest tool out of our Exploratory Intelligence group at CMU, University of Pittsburgh, and University of the Balearic Islands (Spain).

https://www.biorxiv.org/content/10.1101/2023.09.05.556301v1

1/7

lapo, to ai en-us

I notice that I go to Twitter nowadays almost exclusively for news about AI, and it seems most of that content is still there.

(On the other hand, infosec people, for example, seem to be most active here on the Fediverse.)

Is there some user cluster on these topics here that I haven't noticed or subscribed to, or do you think I might be right?

noodlemaz, to Korean
@noodlemaz@med-mastodon.com avatar

Loved this post, via @emilymbender, about "AI", language, and more - flagging for @lingthusiasm as I think you'll enjoy it too. Big recommend to everyone though.

https://karawynn.substack.com/p/language-is-a-poor-heuristic-for
