strypey, to ai
@strypey@mastodon.nzoss.nz avatar

"Two dangerous falsehoods afflict decisions about artificial intelligence:

  • First, that neural networks are impossible to understand. Therefore, there is no point in trying.

  • Second, that neural networks are the only and inevitable method for achieving advanced AI. Therefore, there is no reason to develop better alternatives."

https://betterwithout.ai/backpropaganda

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

: "An underground website called OnlyFake is claiming to use “neural networks” to generate realistic looking photos of fake IDs for just $15, radically disrupting the marketplace for fake identities and cybersecurity more generally. This technology, which 404 Media has verified produces fake IDs nearly instantly, could streamline everything from bank fraud to laundering stolen funds.

In our own tests, OnlyFake created a highly convincing California driver's license, complete with whatever arbitrary name, biographical information, address, expiration date, and signature we wanted. The photo even gives the appearance that the ID card is laying on a fluffy carpet, as if someone has placed it on the floor and snapped a picture, which many sites require for verification purposes. 404 Media then used another fake ID generated by this site to successfully step through the identity verification process on OKX. OKX is a cryptocurrency exchange that has recently appeared in multiple court records because of its use by criminals.

Rather than painstakingly crafting a fake ID by hand—a highly skilled criminal profession that can take years to master—or waiting for a purchased one to arrive in the mail with the risk of interception, OnlyFake lets essentially anyone generate fake IDs in minutes that may seem real enough to bypass various online verification systems. Or at least fool some people."

https://www.404media.co/inside-the-underground-site-where-ai-neural-networks-churns-out-fake-ids-onlyfake/

strypey, to ai
@strypey@mastodon.nzoss.nz avatar

"So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities.

In short: they are bad."

From Gradient Dissent

fabrice13, to ArtificialIntelligence Italian
@fabrice13@neuromatch.social avatar

On predictive coding vs backpropagation and energy-based models
Just skimmed through "Inferring neural activity before plasticity as a foundation for learning beyond backpropagation" by Yuhang Song et al. https://www.nature.com/articles/s41593-023-01514-1

Quite interesting but confusing, as I come from DL.
If I got it right, the authors focus on showing how and why biological neural networks would benefit from being Energy-Based Models doing Predictive Coding, instead of feedforward networks trained with backpropagation.
It took me a while to reach the part where they explain how to optimize a ConvNet in PyTorch as an EB model, but they do: there is an algorithm and formulae. I'm still curious how long and how stable training is, and whether it all generalizes to typical computer vision architectures (ResNets, MobileNets, ViTs, ...).
Code is also at https://github.com/YuhangSong/Prospective-Configuration
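
Not from the paper, but for intuition, a minimal sketch of the generic energy-based predictive coding recipe in PyTorch: first relax the latent activities to lower an energy (inference), then update the weights (learning). The two-layer setup, shapes and learning rates here are all made up for illustration; the paper's actual prospective-configuration algorithm is in the repo above.

```python
import torch

torch.manual_seed(0)
x = torch.randn(1, 10)             # input
y = torch.randn(1, 2)              # target
W1 = torch.randn(10, 5, requires_grad=True)
W2 = torch.randn(5, 2, requires_grad=True)

# Latent hidden activity, initialized with a feedforward pass
z = (x @ W1).detach().requires_grad_(True)

def energy(z):
    # Sum of squared prediction errors at each layer
    e_hidden = z - x @ W1
    e_out = y - z @ W2
    return (e_hidden ** 2).sum() + (e_out ** 2).sum()

# 1) Inference: settle the activities while the weights stay fixed
opt_z = torch.optim.SGD([z], lr=0.1)
for _ in range(20):
    opt_z.zero_grad()
    energy(z).backward()
    opt_z.step()

# 2) Learning: update the weights toward the settled activities
opt_w = torch.optim.SGD([W1, W2], lr=0.01)
opt_w.zero_grad()
energy(z).backward()
opt_w.step()
```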

I would like to sit at my laptop for a few hours and try to see and understand it better, but I think in the next few days I will move on to Modern Hopfield Networks. These too are energy-based, and there is an energy function that is optimised by the Transformer's dot-product attention.
I think I got what attention does in Transformers, so I'm quite curious to learn in what sense it's equivalent to consolidating/retrieving patterns in a Dense Associative Memory. In general, I think we're treating memory wrong with our deep neural networks. I see most of them as sensory processing, a shortcut to "reasoning" without short- or long-term memory surrogates, though I could see how some current features may serve similar purposes...
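
The equivalence in question (Ramsauer et al., "Hopfield Networks is All You Need") is easy to sketch: the modern Hopfield retrieval update, xi <- X softmax(beta X^T xi), has exactly the shape of dot-product attention, with the stored patterns acting as both keys and values. A toy numpy version, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 16))              # 16 stored patterns, dim 64
xi = X[:, 3] + 0.3 * rng.standard_normal(64)   # noisy query near pattern 3
beta = 8.0                                     # inverse temperature

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Modern Hopfield retrieval: same form as softmax(Q K^T) V attention
for _ in range(3):                             # a few steps usually suffice
    xi = X @ softmax(beta * (X.T @ xi))

print(np.argmax(X.T @ xi))                     # recovers pattern 3
```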

stefaneiseleart, to aiart German
@stefaneiseleart@mograph.social avatar
ramikrispin, (edited ) to machinelearning
@ramikrispin@mstdn.social avatar

(1/3) Machine Learning with Graphs course 🚀

The Machine Learning with Graphs course by Prof. 𝐉𝐮𝐫𝐞 𝐋𝐞𝐬𝐤𝐨𝐯𝐞𝐜 from Stanford University (CS224W) focuses on different methods for analyzing massive graphs and complex networks and extracting insights using machine learning models and data mining techniques. 🧵🧶👇🏼

ramikrispin,
@ramikrispin@mstdn.social avatar

(2/3) The course includes 47 lectures, and it covers topics such as:
✅ ML applications for graphs
✅ Graph neural networks (GNNs; see the sketch after this list)
✅ Knowledge graph completion
✅ Recommendation with GNN
✅ Geometric deep learning
✅ Link prediction and causality
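
As a taste of the GNN topic above, a minimal mean-aggregation message-passing layer in plain PyTorch; this is an illustrative sketch of the general idea, not course code:

```python
import torch

# One message-passing step: h_i' = ReLU(W_self h_i + W_nbr mean_{j in N(i)} h_j)
class GNNLayer(torch.nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_self = torch.nn.Linear(d_in, d_out)
        self.w_nbr = torch.nn.Linear(d_in, d_out)

    def forward(self, h, adj):
        # h: (num_nodes, d_in), adj: (num_nodes, num_nodes) 0/1 adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        nbr_mean = (adj @ h) / deg            # average neighbour features
        return torch.relu(self.w_self(h) + self.w_nbr(nbr_mean))

# Toy graph: 4 nodes on a path 0-1-2-3
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0],
                    [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
h = torch.randn(4, 8)
print(GNNLayer(8, 16)(h, adj).shape)          # torch.Size([4, 16])
```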

ramikrispin,
@ramikrispin@mstdn.social avatar

(3/3) Prerequisites
basic knowledge of computer science principles, probability theory, and linear algebra

𝐑𝐞𝐬𝐨𝐮𝐫𝐜𝐞𝐬 📚
Video 📽️: https://www.youtube.com/playlist?list=PLoROMvodv4rOP-ImU-O1rYRg2RFxomvFp
Course website 🔗: https://web.stanford.edu/class/cs224w/

stefaneiseleart, to aiart German
@stefaneiseleart@mograph.social avatar

Artificial neural networks, animation loop.
Created with Blender3D Geometry Nodes -> DepthMap Render -> HotShotXL, AnimateDiff, IP-Adapters. I got the workflow for this process from a YouTube tutorial by www.purz.xyz

stefaneiseleart, (edited ) to aiart German
@stefaneiseleart@mograph.social avatar

Exhibition of dystopia, shots of an AI video created with Stable Diffusion (XL Model), Krea.AI and Gen-2 AI technologies. Cinematic experiments that reflect the intersection of art and technology.

HxxxKxxx, to random German
@HxxxKxxx@det.social avatar
  1. A neural network is not a brain, it is maths
  2. You can look inside one, it just doesn't help much
  3. Images and words can be turned into numbers
  4. Neural networks can be "tricked" (see the sketch below the talk link)

Annika Rüll
Let's take a look inside a neural network!
https://fahrplan.events.ccc.de/congress/2023/fahrplan/events/11784.html
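
Point 4 is about adversarial examples. A minimal sketch of the classic fast gradient sign method (FGSM), with a placeholder model and input rather than anything from the talk:

```python
import torch

# FGSM: a tiny perturbation in the direction of the loss gradient
# can flip a classifier's prediction. Model and "image" are stand-ins.
model = torch.nn.Sequential(torch.nn.Linear(784, 10))
x = torch.rand(1, 784, requires_grad=True)
label = torch.tensor([3])

loss = torch.nn.functional.cross_entropy(model(x), label)
loss.backward()

epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()   # the "trick": a small signed step
print(model(x).argmax().item(), model(x_adv).argmax().item())
```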

profoundlynerdy, to Neuroscience
@profoundlynerdy@bitbang.social avatar

Is there anything like an model for , biological or artificial?

mstimberg, to foss
@mstimberg@neuromatch.social avatar

Hi, I am looking for reviewers for the following submission to @joss:

“Φ-ML: A Science-oriented Math and Neural Network Library for Jax, PyTorch, TensorFlow & NumPy”

Repo: https://github.com/tum-pbs/PhiML
Paper: https://github.com/openjournals/joss-papers/blob/joss.05823/joss.05823/10.21105.joss.05823.pdf
Pre-review: https://github.com/openjournals/joss-reviews/issues/5823

JOSS publishes articles about open source research software. It is a free, open-source, community-driven and developer-friendly online journal. JOSS reviews involve downloading and installing the software, and inspecting the repository and submitted paper for key elements.

Please reach out if you are interested in reviewing this paper or know someone who could.

albertcardona, to Neuroscience
@albertcardona@mathstodon.xyz avatar

Henry Markram, of spike timing dependent plasticity (STDP) fame and infamous for the Human Brain Project (HBP), just got a US patent for "Constructing and operating an artificial recurrent neural network": https://patents.google.com/patent/US20230019839A1/en

How is that not something thousands of undergrads are doing with PyTorch every week?

The goal, says the patent text, is to provide <<methods and processes for constructing and operating a recurrent artificial neural network that acts as a “neurosynaptic computer”>> – which might be patentable; but patenting the construction and operation of an RNN in general is an overreach, and ludicrous.

It seems likely that the legal office at Markram's research institution overreached and got away with it. Good luck enforcing this patent, though: Markram did not invent RNNs.
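
For a sense of how routine the underlying object is: constructing and operating a generic RNN is a few lines of stock PyTorch (which of course says nothing about the patent's specific "neurosynaptic computer" machinery):

```python
import torch

# Constructing and operating a recurrent neural network, undergrad edition.
rnn = torch.nn.RNN(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 20, 8)        # batch of 4 sequences, 20 steps, 8 features
output, h_n = rnn(x)             # "operate" the network
print(output.shape, h_n.shape)   # torch.Size([4, 20, 16]) torch.Size([1, 4, 16])
```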

tyrell_turing, to Neuroscience
@tyrell_turing@fediscience.org avatar

1/ What is the organization of mouse visual cortex across regions?

In our latest work led by Rudi Tong and Stuart Trenholm, now out on bioRxiv (https://biorxiv.org/content/10.1101/2023.11.03.565500v1) we mapped the "feature landscape" of mouse visual cortex.

Here is a thread about what we found.

appassionato, to books
@appassionato@mastodon.social avatar

Artificial Intelligence in the Age of Neural Networks and Brain Computing, Second Edition demonstrates that the present disruptive implications and applications of AI are a development of the unique attributes of neural networks: mainly machine learning, distributed architectures, massive parallel processing, black-box inference, intrinsic nonlinearity, and smart autonomous search engines.

@bookstodon



remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The way we understand what artificial intelligence is and how we design it has serious implications for society. Marta Peirano reviews the origins and evolution of AI, and addresses its problems and dangers in this article, taken from the catalogue of the exhibition AI: Artificial Intelligence."
https://lab.cccb.org/en/the-double-life-of-artificial-intelligence/

gpuopen, to random
@gpuopen@mastodon.gamedev.place avatar

An MLP (Multi-Layer Perceptron) can be used to represent a signal, but it is known that preserving high-frequency signal content is challenging.

We propose a novel encoding method for MLPs, which we call "local frequency encoding", presented on Fri 9:40am at

Paper here: https://gpuopen.com/learn/publications/?utm_source=mastodon&utm_medium=social&utm_campaign=arrg
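
Their "local frequency encoding" isn't described in this post, but the standard remedy it builds on can be sketched: Fourier feature encodings (as in Tancik et al. 2020) map input coordinates through sinusoids at several frequencies so the MLP can fit high-frequency content. An illustrative PyTorch version with made-up sizes, not the authors' method:

```python
import torch

# Fourier-feature input encoding: gamma(p) feeds sin/cos of several
# frequencies to the MLP so high-frequency signal becomes learnable.
def fourier_encode(p, num_bands=6):
    # p: (..., d) coordinates in [0, 1]
    freqs = 2.0 ** torch.arange(num_bands) * torch.pi   # (num_bands,)
    angles = p.unsqueeze(-1) * freqs                    # (..., d, num_bands)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                    # (..., d*2*num_bands)

coords = torch.rand(1024, 2)                 # e.g. pixel coordinates
mlp = torch.nn.Sequential(
    torch.nn.Linear(2 * 2 * 6, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 3),                  # e.g. RGB output
)
print(mlp(fourier_encode(coords)).shape)     # torch.Size([1024, 3])
```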

kellogh, to LLMs
@kellogh@hachyderm.io avatar

Now that neural networks have had repeated big successes over the last 15 years, we are starting to look for better ways to implement them. Some new ones for me:

Groq notes that NNs are bandwidth-bound from memory to GPU. They built an LPU specifically designed for LLMs:
https://groq.com/

A wild one — exchange the silicon for moving parts, good old Newtonian physics. Dramatic drop in power utilization and maps to most NN architectures (h/t @FMarquardtGroup)

https://idw-online.de/de/news820323
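
The bandwidth-bound claim is easy to sanity-check with back-of-envelope arithmetic: at batch size 1, every weight must cross the memory bus once per generated token, so bandwidth alone puts a floor on latency. Illustrative numbers, not vendor specs:

```python
# Why LLM inference is memory-bandwidth-bound, back-of-envelope version.
params = 70e9                 # 70B-parameter model
bytes_per_param = 2           # fp16 weights
bandwidth = 2e12              # 2 TB/s of memory bandwidth, hypothetical

min_seconds_per_token = params * bytes_per_param / bandwidth
print(f"{min_seconds_per_token * 1000:.0f} ms/token floor "
      f"-> at most {1 / min_seconds_per_token:.0f} tokens/s")
```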

tdverstynen, to Neuroscience
@tdverstynen@neuromatch.social avatar

Are you interested in cortico-basal ganglia networks and would like to model them, but only have a basic proficiency in Python or computational modeling in general?

Well then, I’m happy to announce the release of CBGTPy, a software package for running biologically realistic simulations of cortico-basal ganglia-thalamic (CBGT) networks across a dynamic range of tasks. It's the latest tool out of our Exploratory Intelligence group at CMU, the University of Pittsburgh, and the University of the Balearic Islands (Spain).

https://www.biorxiv.org/content/10.1101/2023.09.05.556301v1

1/7

itnewsbot, to ArtificialIntelligence
@itnewsbot@schleuss.online avatar

Rethinking open source for AI - We keep using the term “open source” in the context of large language models (LLMs) li... - https://www.infoworld.com/article/3706091/rethinking-open-source-for-ai.html#tk.rss_all
