🧋 Australian scientists have developed a healthier form of bubble tea. The new product blends oat fibre with tapioca, which reduces the drink's sugar content.
Thinking about artificial general intelligence (AGI) calls to mind another poorly understood and speculative phenomenon with the potential for transformative impacts on humankind. We believe that the SETI Institute’s efforts to detect advanced extraterrestrial intelligence demonstrate several valuable concepts that can be adapted for AGI research.
"A team of researchers have built a vision implant with tiny electrodes the size of a neuron, seeking to help blind people see again."
The Next Web reports: "Initial tests in mice showed that the implant can effectively stimulate visual perception using only a small amount of electricity."
#PPOD: The JunoCam instrument on NASA’s Juno captured this view of Jupiter’s moon Io — with the first-ever image of its south polar region — during the spacecraft’s 60th flyby of Jupiter on April 9, 2024, revealing mountains and lava lakes. Credit: NASA/JPL-Caltech/SwRI/MSSS; Image processing: Gerald Eichstädt/Thomas Thomopoulos
Chemicals in vapes could be highly toxic when heated, research finds | AI analysis of 180 vape flavors finds that products contain 127 ‘acutely toxic’ chemicals, 153 ‘health hazards’ and 225 ‘irritants’ https://www.byteseu.com/90750/#Science
#3Dprinting has revolutionized manufacturing in a variety of ways, and now researchers are learning how it can improve the performance of energetic materials – like explosives and rocket propellants
French chemist Antoine Lavoisier died #OTD in 1794.
He is best known for developing the law of conservation of mass, which states that mass is neither created nor destroyed in chemical reactions. This principle helped to debunk the phlogiston theory, a then-prevailing idea that substances released a material called "phlogiston" when they burned. He also made significant contributions to understanding respiration as a form of combustion.
I want to understand this, but I can’t grok more than the headline and the potential applications. Is anyone able to explain it to someone whose physics knowledge is Year 12 + Veritasium? Like, how do massless photons even have momentum?! #science #lazyweb https://mastodon.social/@ScienceScholar/112401789379876027
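An aside on the photon-momentum question above, pitched at roughly that physics level: in special relativity, energy and momentum are linked by

```latex
E^2 = (pc)^2 + (mc^2)^2
```

so for a massless particle ($m = 0$) this reduces to $E = pc$, i.e. $p = E/c$. Combined with the photon energy $E = h\nu$, that gives $p = h\nu/c = h/\lambda$. In short, photons carry momentum through their energy, not their mass.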
Researchers found that chimpanzees became steadily more skilled at using tools as they grew, mastering the basics by age six | Beyond this age, chimps continued to hone their skills and display more advanced maneuvers to suit different tasks. https://www.byteseu.com/89406/#Science
Who could have guessed that forcing cows to eat bird poo would result in spreading pathogens?
Experts fear that H5N1, which was first detected in cows only a few weeks ago, may have been transmitted through a type of cattle feed called “poultry litter” – a mix of poultry excreta, spilled feed, feathers, and other waste scraped from the floors of industrial chicken and turkey production plants.
Thought about hypothesis testing as an approach to doing science. Not sure if new, would be interested if it's already been discussed. Basically, hypothesis testing is inefficient because you can only get 1 bit of information per experiment at most.
In practice, much less on average. If the hypothesis is not rejected you get close to 0 bits, and if it is rejected it's not even 1 bit because there's a chance the experiment is wrong.
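The at-most-1-bit claim in the two posts above can be made concrete with a quick information-theoretic calculation. A minimal sketch; the 50% prior and 10% error rate are illustrative assumptions, not numbers from the thread:

```python
from math import log2

def binary_entropy(p: float) -> float:
    """Entropy in bits of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

# Assumed setup: the hypothesis is true with prior probability 0.5,
# and the experiment reports the wrong verdict with probability 0.1.
prior = 0.5
error = 0.1

# Probability the experiment says "reject".
p_reject = prior * (1 - error) + (1 - prior) * error

# Mutual information between the truth and the experiment's verdict:
# I = H(verdict) - H(verdict | truth) = H(p_reject) - H(error).
info_bits = binary_entropy(p_reject) - binary_entropy(error)
print(f"{info_bits:.3f} bits per experiment")  # well under 1 bit
```

With a 10% error rate the yield drops to about 0.53 bits, matching the point that a rejection "is not even 1 bit because there's a chance the experiment is wrong".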
One way to think about this is error signals. In machine learning we do much better when we have a gradient rather than just a correct/incorrect signal. How do you design science to maximise the information content of the error signal?
In modelling I think you can partly do that by conducting detailed parameter sweeps and model comparisons. More generally, I think you want to maximise the gain in "understanding" the model behaviour, in some sense.
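One way to read the parameter-sweep point above, as a hedged sketch (the toy exponential model, the true rate of 1.5, and the grid are my own illustration): instead of a single accept/reject verdict, a sweep returns a whole error curve, which says both where the model fits and how sharply the fit degrades away from the optimum.

```python
import numpy as np

# Toy "data" generated from an assumed true decay rate of 1.5.
t = np.linspace(0, 2, 50)
data = np.exp(-1.5 * t)

# A binary hypothesis test ("is the rate 1.0?") would yield one bit.
# A parameter sweep instead scores every candidate rate on the grid.
rates = np.linspace(0.5, 2.5, 41)
errors = [np.mean((np.exp(-r * t) - data) ** 2) for r in rates]

best = rates[int(np.argmin(errors))]
print(f"best-fitting rate: {best:.2f}")
# The full `errors` curve is the graded signal: it shows not just
# that rate=1.0 is wrong, but by how much, and in which direction.
```

The curve over `errors` plays the role of the gradient in the machine-learning analogy: a graded error signal rather than a single bit.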
This is very different to using a model to fit existing data (0 bits per study) or make a prediction (at most 1 bit per model+experiment). I think it might be more compatible with thinking of modelling as conceptual play.
I feel like both experimentalists and modellers do this when given the freedom to do so, but when they impose a particular philosophy of hypothesis testing on each other (grant and publication review), this gets lost.
Incidentally this is also exactly the problem with our traditional publication system that only gives you 1 bit of information about a paper (that it was accepted), rather than giving a richer, open system of peer feedback.