Machine Learning


30 billion words of audio transcriptions from 30 million YouTube videos, in multiple languages. More modalities coming soon. From Pleias.

freyablekman, avatar

Interpreting the LHC collisions is extremely data-intensive, and #CMSPaper 1282 describes how modern software techniques let our software (and #machinelearning) run on many different platforms/processors while still efficiently and transparently reconstructing our collisions

rzeta0, avatar

... cover of the second edition of the German translation is looking good!

#machinelearning #python #neuralnetworks

jakmarcin, Polish avatar

Hi, I am looking for a post-doc to work in magnetic fusion on . If you're interested, get in touch with me. More details here:

Posit, avatar

We’re so excited to announce support for survival analysis of time-to-event data across tidymodels!

• The tidymodels framework is a collection of R packages for modeling and machine learning using tidyverse principles.
• Survival analysis is now a first-class citizen in tidymodels, giving censored regression modeling the same flexibility and ease as classification or regression.

Learn more on the tidyverse blog:
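For readers outside R: the censored time-to-event data that tidymodels now handles can be illustrated with the classic Kaplan-Meier product-limit estimator. A minimal plain-Python sketch (this is not tidymodels code, and the toy data in the usage note are invented):

```python
# Kaplan-Meier survival estimator for right-censored data.
# times:  observed follow-up times
# events: True if the event occurred, False if the observation was censored
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n_at_t = 0
        # gather every subject observed at this time (events and censorings)
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk  # product-limit step
            curve.append((t, surv))
        at_risk -= n_at_t  # censored subjects leave the risk set after t
    return curve
```

For example, `kaplan_meier([1, 2, 2, 3, 4], [True, True, False, True, False])` drops the survival curve to 0.8, then 0.6, then 0.3 at the three event times, with the two censored subjects only shrinking the risk set.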

stefan, avatar
jorgecandeias, avatar

@stefan Wait. I only get to see one at a time?! I mean, I build one, and then it shows the machine working without any neighbours... How come there are neighbours in this video?

stefan, avatar

@jorgecandeias Ah, sorry, not too sure, I just recorded the one that's already on the page.

manlius, Italian avatar

If you had the feeling that the online discussion about COVID-19 vaccines was biased depending on the actors, you are right.

Using and , we have shown that being a human or a bot, being verified or unverified (according to previous Twitter rules), and political leaning were relevant factors in the choice of words in posts and, accordingly, in the emotions those words trigger.

A genuine computational social science study, led by Anna Bertani for her MSc thesis, now published, also in collaboration with Riccardo Gallotti and Pierluigi Sacco.


j_bertolotti, avatar

@manlius This deserves a thread with an explanation for non-specialists (i.e. me 😉 )

manlius, avatar

@j_bertolotti I promise I will do one once I get more free time (tough period).

Glad you are interested.

albertcardona, (edited ) avatar

“Is this a … person?” asks the incidental meta-meme.

One wonders, what manner of amusing and colorful hats or attire did the people in the training set wear.

Or whether the “eyes” on its wings not only fool predators but also machine learning classifiers.

Biology 1 - 0 Machine Learning.

albertcardona, avatar

The “person”, sipping nectar a few moments later.

albertcardona, avatar

The Ask Envision feature on the @letsenvision app is cool. I loaded a 50-page PDF and it just does RAG on it and answers my questions comprehensively. People should use it more. I hope in the future I can load entire folders of documents on desktop to do RAG.

letsenvision, avatar

@menelion @kaveinthran
Ask Envision is soon coming on desktop :)

menelion, avatar

@letsenvision @kaveinthran Great news, thank you!!

danstowell, avatar

PhD opportunity in France: "Machine learning on a solar-powered environmental sensor" working with @lostanlen

weiming, avatar

Calling all data enthusiasts: ever heard of Orange? Recently stumbled upon this tool for data mining and machine learning. It's Python-based and completely open-source. Sounds pretty good to me. Any users here?

RossGayler, avatar

Most of the Artificial Neural Net simulation research I have seen (say, at venues like NeurIPS) seems to take a very simple conceptual approach to the analysis of simulation results: just treat everything as independent observations with fixed-effects conditions, when it might be better conceptualised as random effects and repeated measures. Do other people think this? Does anyone have views on whether it would be worthwhile doing more complex analyses, and whether the typical publication venues would accept them? Are there any guides to appropriate analyses for simulation results, e.g. what to do with the results coming from multi-fold cross-validation? (I presume the results are not independent across folds because they share cases.)

@cogsci #CogSci #CognitiveScience #MathPsych #MathematicalPsychology #NeuralNetworks #MachineLearning
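The non-independence across folds is easy to make concrete: in k-fold cross-validation, any two training sets share roughly (k-2)/(k-1) of their cases. A small sketch, assuming plain contiguous folds with no shuffling (the 100/5 sizes are arbitrary):

```python
# Why per-fold scores aren't independent: any two k-fold training sets
# share most of their cases.
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k contiguous folds (no shuffling)."""
    fold_size = n // k
    idx = list(range(n))
    for f in range(k):
        test = idx[f * fold_size:(f + 1) * fold_size]
        train = idx[:f * fold_size] + idx[(f + 1) * fold_size:]
        yield train, test

folds = list(k_fold_indices(100, 5))
train_a, _ = folds[0]
train_b, _ = folds[1]
shared = len(set(train_a) & set(train_b))
# 60 of each fold's 80 training cases (75%) are common to both folds,
# so the two accuracy estimates are driven largely by the same data.
```

That 75% overlap matches the general (k-2)/(k-1) figure for k=5, which is why treating the k scores as independent samples understates the variance.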

jonny, avatar

Aha, well yes, it entirely depends on the question at hand and the experimental design. So e.g. one major distinction is whether you are trying to say something about a model, a family of models, or the data. Parametric statistics is for inference over samples of a definable population, so e.g. a point estimate of accuracy on held-out data is fine if all you're trying to do is make a claim about a single model, since there is no "population" you are sampling from. If you're trying to make a claim about a class of models, then you are sampling from the (usually) real-valued, n-dimensional model space, so there the usual requirements for random sampling within parameter space would apply.

Making a claim about the data is much different, because now you have a joint analysis problem of "the effects of my model" and "the effects of the data". (Neuroscientists love to treat the SVMs in their "decoding" analyses as neutral and skip that part, making claims about the data by comparing e.g. classification accuracies as if they were only dependent on the data. Even randomly sampling the subspace there doesn't get rid of that problem, because different model architectures, training regimes, etc. have different capacities for classifying different kinds of source data topologies, but I digress.)

For methods questions like this I try to steer clear of domain-specific papers and go to the stats literature or even stats textbooks, because domain-specific papers are translations of translations, and often have, uh, motivated reasoning. For example, the technique "representational similarity analysis" in neuro is wholly unfounded on any kind of mathematical or statistical proof or theory, and yet it flourishes because it sounds sorta OK and allows you to basically "choose your own adventure" to produce whatever result you want.

For k-fold, it's a traditional repeated-measures problem (depending on how you set it up). The benchmarking paradigm re: standard datasets and comparing accuracy is basically fine if the claim you are making is exactly "my model in particular is more accurate on this particular set of benchmarks." You're right that even for that, to get some kind of aggregated accuracy you would want an MLM with dataset as a random effect, but since the difference in datasets is often ill-defined and, as you say, based in convenience, I'm not sure how enlightening it would be.

Would need more information on the specific question you had in mind to recommend lit, and I am not a statistician; I just get annoyed with lazy dogshit and think stats and topology (which is relevant because many neuro problems devolve into estimating metric spaces) are interesting rather than a nuisance.
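The repeated-measures framing above can be sketched with toy numbers: score two models on the same folds and analyse the per-fold differences, rather than comparing two independent means (the accuracies below are invented):

```python
# Treating folds as repeated measures: pair the two models on identical folds
# and look at the distribution of per-fold differences.
from statistics import mean, stdev

acc_a = [0.80, 0.82, 0.78, 0.81, 0.79]  # model A, folds 1-5 (toy numbers)
acc_b = [0.78, 0.80, 0.77, 0.80, 0.78]  # model B, same folds

diffs = [a - b for a, b in zip(acc_a, acc_b)]
d_mean, d_sd = mean(diffs), stdev(diffs)
# The fold-to-fold variation largely cancels in the paired differences
# (d_sd is well below stdev(acc_a)), which is exactly the extra power a
# repeated-measures analysis exploits.
```

Here the mean difference is 0.014 with a paired spread of about 0.005, versus a raw fold-to-fold spread of about 0.016 for either model on its own.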

neuralreckoning, avatar

@jonny @RossGayler @cogsci I'm very ignorant of statistics, but yeah I agree ML publications are usually pretty poor on this.

XRobotsUK, avatar
rich, avatar

About to try and train a neural network on a smol, binarized version of MNIST... wish me luck 😅 Place your bets on train/test accuracy! I reckon at least 10% ;)
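Why "at least 10%" is a safe floor: uniform random guessing over C balanced classes scores 1/C in expectation, and for 10-class MNIST that is 10%. A quick seeded simulation on made-up labels:

```python
# Chance-level accuracy for a 10-class balanced problem: random guessing
# lands at 1/C = 10% in expectation (labels and guesses are synthetic).
import random

random.seed(42)
C, n = 10, 100_000
labels = [random.randrange(C) for _ in range(n)]
guesses = [random.randrange(C) for _ in range(n)]
acc = sum(g == y for g, y in zip(guesses, labels)) / n
# acc hovers around 0.10
```

So any trained model that can't beat 10% on balanced 10-class data is doing no better than the coin flips above.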

dontcallmeDOM, avatar

I've been working over the past few weeks on documenting the systemic impact that AI systems and models are having on the Web:

This led to the publication of the following report by the @w3c team yesterday:
"AI & the Web: Understanding and managing the impact of Machine Learning models on the Web"

Looking forward to feedback on the associated GitHub repo!

Sousse, French avatar

The #EU wants to "regulate" #Artificial #Intelligence #AI
It's a world first. The #European #Parliament has just adopted a #law that will #regulate the use of this #technology without stifling the companies that develop artificial intelligence.

allixender, avatar

Super proud of @evelynuuemaa, head of our lab, giving her professorial inauguration lecture in the university assembly hall on and for modelling and understanding the state of the 🙏🚀🌍🌐


ramikrispin, avatar

(1/2) MLflow for Machine Learning Development 🚀

The MLflow for Machine Learning Development course by Manuel Gil provides a great introduction to the MLflow Python library 🐍. The course focuses on MLflow's core functionality and workflow and covers the following topics:
✅ Setting up MLflow
✅ Creating and working with experiments
✅ Logging metadata (parameters, scores, etc.)
✅ Model registry
✅ Model tuning
✅ MLflow project demo

ramikrispin, avatar
yogthos, avatar

One use of LLMs that I haven't seen mentioned before is to use them as a sounding board for your own ideas.

By discussing your concept with an LLM, you can gain fresh perspectives through its generated responses.

In this context, the LLM's actual comprehension is irrelevant. The purpose lies in its ability to spark new thought processes by prompting you with unexpected framings or questions.

Definitely recommend trying this trick next time you're writing something.

yogthos, avatar

@M0CUV that's only a problem if you use online services; it's pretty easy to run models locally nowadays if your computer is relatively recent. I use gpt4all and it works great.

M0CUV, avatar

@yogthos ah, hadn’t looked into running locally - thanks!

itnewsbot, avatar

Hugging Face, the GitHub of AI, hosted code that backdoored user devices -

Code uploaded to AI developer p... -

jobRxiv, avatar

PhD Scholarship in Developing Solid Electrolytes using Machine Learning

Deakin University

We are looking for a prospective PhD candidate who will develop using at Deakin University. Please apply asap!!

See the full job description on jobRxiv:

darkcisum, avatar

I finally understand how Machine Learning works!

#machinelearning #ml #xkcd

futurebird, avatar
itnewsbot, avatar

Reddit sells training data to unnamed AI company ahead of IPO -

On Friday, Bloomberg reported that Re... - #largelanguagemodels #machinelearning #stablediffusion #imagesynthesis #textsynthesis #axelspringer #stevehuffman #bloomberg #chatgpt #chatgtp #biz #openai #reddit #ai

researchbuzz, avatar

'The return of the ancestral human remains of Australian and other Indigenous peoples held in anthropological collections could be sped up using machine-based deep learning according to a new study led by QUT computer scientists.'

daeinc, avatar

Voice coding session with AI.

Can Google Gemini create a flocking algorithm, as advertised? It did (well, after many tries). I've edited the parameters for the video.
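The three classic boids rules a flocking algorithm needs (cohesion, alignment, separation) fit in a few dozen lines of plain Python. A hypothetical sketch of one update step, not the Gemini-generated code; all weights and radii here are made-up defaults:

```python
# Minimal 2-D boids step: each agent steers toward its neighbours' centre
# (cohesion), matches their average velocity (alignment), and pushes away
# from agents that get too close (separation).
import math
from dataclasses import dataclass

@dataclass
class Boid:
    x: float
    y: float
    vx: float
    vy: float

def step(boids, cohesion=0.01, alignment=0.05, separation=0.05,
         radius=50.0, max_speed=4.0):
    updated = []
    for b in boids:
        neighbors = [o for o in boids if o is not b
                     and math.hypot(o.x - b.x, o.y - b.y) < radius]
        ax = ay = 0.0
        if neighbors:
            n = len(neighbors)
            # cohesion: steer toward the neighbours' centre of mass
            cx = sum(o.x for o in neighbors) / n
            cy = sum(o.y for o in neighbors) / n
            ax += (cx - b.x) * cohesion
            ay += (cy - b.y) * cohesion
            # alignment: match the neighbours' average velocity
            ax += (sum(o.vx for o in neighbors) / n - b.vx) * alignment
            ay += (sum(o.vy for o in neighbors) / n - b.vy) * alignment
            # separation: push away from boids that are too close
            for o in neighbors:
                d = math.hypot(o.x - b.x, o.y - b.y)
                if 0 < d < radius / 3:
                    ax += (b.x - o.x) / d * separation
                    ay += (b.y - o.y) / d * separation
        vx, vy = b.vx + ax, b.vy + ay
        speed = math.hypot(vx, vy)
        if speed > max_speed:  # clamp speed so the flock stays stable
            vx, vy = vx / speed * max_speed, vy / speed * max_speed
        updated.append(Boid(b.x + vx, b.y + vy, vx, vy))
    return updated
```

Calling `step` repeatedly on a list of `Boid`s animates the flock; tuning the three weights against each other is where most of the "many tries" usually go.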

