If you take the population and divide by the annual rate of housing starts, you get a quantity with dimensions of time, in units of years. Roughly speaking, this is the "longevity of a dwelling" you need in order for the housing available per person not to decline. So if the real longevity of houses is more or less constant, then when this graph is high, housing availability is declining, and when it's low, it's growing... There's a reason millennials feel cheated
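The arithmetic is one line; here is a quick sketch in Python using round illustrative figures (roughly 330M people and 1.4M starts per year for the recent US; these are ballpark assumptions, not exact data):

```python
# Implied "required dwelling longevity" = population / annual housing starts.
# Units check: people / (dwellings/year) has dimensions of time (years),
# under the simplifying assumption of constant persons per dwelling.
def required_longevity_years(population, starts_per_year):
    """Years each dwelling must last for per-capita housing not to decline."""
    return population / starts_per_year

# Ballpark US figures (illustrative, not a data source):
us_estimate = required_longevity_years(330_000_000, 1_400_000)
print(round(us_estimate))  # roughly 236 years
```

At that rate, every dwelling would have to last over two centuries just to hold per-capita availability flat, which is the sense in which a high value of this graph signals decline.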
Dagnab it, I am constantly wishing I had more text in my messages and forgetting to tag stuff in my first post. This message is just to tag @economics@a.gup.pe and some hashtags: #economics #housing #data #statistics
This discussion is about housing longevity and the adequate production rate of housing starts to keep housing from becoming scarce. There's a graph in the first post that shows very interesting dynamics.
Happy birthday to founder of modern nursing, social reformer, statistician, data visualization innovator & writer Florence Nightingale (1820 – 1910)!
Nightingale earned the nickname "The Lady with the Lamp" during the Crimean War, from a phrase used by The Times, describing her as a “ministering angel” making her solitary rounds of the hospital at night with “a little lamp in her hand”. 🧵1/n
#linocut #printmaking #sciart #womenInSTEM #datavis #nursing #statistics #mathart #MastoArt
English social reformer, statistician and the founder of modern nursing Florence Nightingale was born #OTD in 1820.
Nightingale became famous for her work as a nurse during the Crimean War (1853–1856). Beyond her work in the Crimean War, Nightingale was a prolific writer and statistician. She used statistical methods to analyze and present data on healthcare and public health, making significant contributions to the field of medical statistics.
"Randomized trials cannot address all causal questions of importance in medicine and health policy and may have limited generalizability; thus, investigators may need to use observational studies as a source of evidence to address causal questions. The challenge, then, is to balance the importance of addressing the causal questions for which observational studies are needed with caution regarding the reliance on strong assumptions to support causal conclusions."
"Many of us out here doing applied science have to entirely self-teach and un-learn poor statistics and poor methods training."
So true.
I see recent graduates with the same faulty NHST-based statistical education that I received decades ago. It's disappointing how poorly education has kept up with new and better statistical methods.
In #QuantumFieldTheory, scattering amplitudes can be computed as sums of (very many) #FeynmanIntegrals. Their contributions vary widely: most integrals contribute near the average (scaled to 1.0 in the plots), but a "long tail" of integrals is larger by a significant factor.
We looked at patterns in these distributions, and one particularly striking finding is that if, instead of the Feynman integral P itself, you consider 1/sqrt(P), the distribution is almost Gaussian! To my knowledge, this is the first time anything like this has been observed. We only looked at one quantum field theory, phi^4 theory in 4 dimensions. It would be interesting to see whether this is a coincidence of this particular theory and class of Feynman integrals, or whether it persists universally.
More background and relevant papers at https://paulbalduf.com/research/statistics-periods/ #quantum #physics #statistics
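A toy numerical illustration of the shape described above (synthetic numbers, not actual Feynman periods): if Q = 1/sqrt(P) is approximately Gaussian, then P = 1/Q^2 must carry exactly the kind of long right tail the post describes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: draw Q ~ Normal(1, 0.2), kept safely positive,
# and set P = 1/Q^2.  Q is symmetric by construction; P is not.
q = rng.normal(1.0, 0.2, 100_000)
q = q[q > 0.2]
p = 1.0 / q**2

def skewness(x):
    m, s = x.mean(), x.std()
    return np.mean((x - m) ** 3) / s**3

# Q is nearly symmetric (skewness ~ 0), while P is strongly right-skewed.
print(skewness(q), skewness(p))
```

This only shows the two descriptions are consistent with each other; whether real Feynman periods behave this way is exactly the empirical question of the linked papers.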
Here's the logical structure of what you will be taught in terms of #statistics as a master's student in pretty much any #science field.
If MY DATA is a sample from two random number generators of a PARTICULAR TYPE, and MY TEST has a small p-value, then MY FAVORITE EXPLANATION FOR THE DIFFERENCES IS TRUE.
This is, quite simply, a logical fallacy. The first thing wrong is that your data IS NOT a sample from a random number generator of that particular type. So we can ignore the rest logically.
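The premise failure is easy to demonstrate numerically. A sketch (my illustration, using a hand-rolled Welch t-statistic rather than any particular package): a two-sample t-test assumes i.i.d. draws, so feed it autocorrelated AR(1) data with no real group difference and the small p-values arrive anyway, far more often than the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(0)

def welch_t(a, b):
    # Welch two-sample t-statistic
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / a.size
                                           + b.var(ddof=1) / b.size)

def false_positive_rate(reps=500, n=200, rho=0.9):
    """Split one AR(1) series into two fake 'groups' and t-test them.
    There is NO true difference, yet rejections pile up."""
    hits = 0
    for _ in range(reps):
        e = rng.normal(size=n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):
            x[t] = rho * x[t - 1] + e[t]   # autocorrelated, not i.i.d.
        hits += abs(welch_t(x[: n // 2], x[n // 2:])) > 1.98  # ~5% nominal
    return hits / reps

print(false_positive_rate())  # far above the nominal 0.05
```

Once the "particular type of random number generator" assumption fails, the p-value no longer means what the textbook chain of reasoning needs it to mean.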
Today I am writing about the AIC functions available in my #R #Package TidyDensity.
There are many of them, with many more on the way. Some are a little temperamental, but not to worry; it will all be addressed.
My approach is different from that of fitdistrplus, which is an amazing package. I am trying to forgo the need to supply a start list, which fitdistrplus may at times require.
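For readers unfamiliar with the idea: an AIC function fits a distribution by maximum likelihood and scores it with AIC = 2k - 2*logLik. A minimal sketch in Python (not TidyDensity's actual R code) for the normal distribution, whose MLE has a closed form, so no starting values are needed, the same convenience the post is after:

```python
import numpy as np

def aic_normal(x):
    """AIC of a normal fit: closed-form MLEs, so no start list required."""
    mu, sigma = x.mean(), x.std()          # MLEs for the normal distribution
    loglik = np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                    - (x - mu) ** 2 / (2 * sigma**2))
    k = 2                                   # two fitted parameters (mu, sigma)
    return 2 * k - 2 * loglik

rng = np.random.default_rng(1)
x = rng.normal(10, 2, 500)
print(aic_normal(x))
```

Distributions without closed-form MLEs need numerical optimization, which is where start values (and the temperamental behavior mentioned above) usually come in.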
Here it is, people: a PhD student describing the details of what they've come to realize are the scientifically bankrupt methodologies that their high-powered, successful, well-funded lab PI demands of the lab members. Everything this person says is basically commonplace in today's labs. #science #openscience #statistics #bayesian
Want a simple form of #MCMC analysis in #R? Well, I've got you covered.
My #R #Package TidyDensity has a function called tidy_mcmc_sampling() that is pretty straightforward. It takes a raw vector and applies the calculation you give it over a default of 2,000 samples.
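From the description, the interface reads like resampling-based simulation: take a vector, resample it, apply a statistic, repeat 2,000 times. A hypothetical sketch of that idea in Python (this is my reading, closer to a bootstrap than a Markov chain, and NOT tidy_mcmc_sampling's actual implementation):

```python
import numpy as np

def resampling_sketch(x, fn=np.mean, num_sims=2000, seed=0):
    """Resample x with replacement num_sims times and apply fn to each draw.
    Hypothetical stand-in for the R function described above."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    draws = rng.choice(x, size=(num_sims, x.size), replace=True)
    return np.array([fn(row) for row in draws])

samples = resampling_sketch(np.arange(1, 101))  # true mean of 1..100 is 50.5
print(samples.mean())  # close to 50.5
```

For the real behavior and arguments, the TidyDensity documentation for tidy_mcmc_sampling() is the authority.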
A five-star rating for Everything is Predictable: How Bayes' Remarkable Theorem Explains the World by Tom Chivers, from Brian Clegg at Popular Science Books.
useR! 2024, the global R user conference, will be taking place in Salzburg, Austria (as well as virtually) in July 2024. We have a full lineup of giants in the field of data science. Thank you Maëlle Salmon for being a part of the conference!
Maëlle Salmon, with a PhD in statistics, is a Research Software Engineer and blogger.
The plotting, statistical, and data selection tools in the mapdata.py data explorer (https://pypi.org/project/mapdata/) can be used even if you don't have any map data. Just add dummy latitude and longitude values to the data table. Zeroes will do. The map and the dummy columns can both be hidden, and you can then explore the data table with the other available tools.
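The workaround above is a one-liner per row. A sketch with Python's standard csv module (file names are placeholders for this demo; mapdata.py itself is not required to run it):

```python
import csv, os, tempfile

def add_dummy_coords(src, dst):
    """Append dummy latitude/longitude columns (zeroes) to a CSV so a
    map-oriented tool will accept a table that has no map data."""
    with open(src, newline="") as f_in, open(dst, "w", newline="") as f_out:
        reader, writer = csv.reader(f_in), csv.writer(f_out)
        writer.writerow(next(reader) + ["latitude", "longitude"])
        for row in reader:
            writer.writerow(row + ["0", "0"])

# Tiny demo table in a temporary directory
tmp = tempfile.mkdtemp()
src, dst = os.path.join(tmp, "data.csv"), os.path.join(tmp, "data_geo.csv")
with open(src, "w", newline="") as f:
    csv.writer(f).writerows([["id", "value"], ["1", "3.5"], ["2", "4.1"]])
add_dummy_coords(src, dst)
with open(dst, newline="") as f:
    rows = list(csv.reader(f))
print(rows[0])  # ['id', 'value', 'latitude', 'longitude']
```

Inside mapdata.py you would then hide the map pane and the two dummy columns, as described above.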
"The Death Spiral Effect: a vicious cycle of self-reinforcing dysfunctional behavior, characterized by continuous flawed decision making, myopic single-minded focus on one (set of) solution(s), resource loss, denial, distrust, micromanagement, dogmatic thinking and learned helplessness."
Estimating the degrees of freedom 'k' and the non-centrality 'ncp' parameters of the chi-square distribution from just a vector of numbers? I think I am there. Here is a post about the work I did over the last couple of days:
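One simple estimator for this problem is the method of moments (not necessarily the approach in the linked post). For a noncentral chi-square, mean = k + ncp and variance = 2k + 4*ncp, which solve to ncp = (variance - 2*mean)/2 and k = mean - ncp:

```python
import numpy as np

def estimate_chisq_params(x):
    """Method-of-moments estimates of (k, ncp) for a noncentral chi-square.
    Derivation: mean = k + ncp, var = 2k + 4*ncp."""
    m, v = np.mean(x), np.var(x)
    ncp = (v - 2 * m) / 2
    k = m - ncp
    return k, ncp

rng = np.random.default_rng(7)
x = rng.noncentral_chisquare(df=5, nonc=3, size=200_000)
k_hat, ncp_hat = estimate_chisq_params(x)
print(k_hat, ncp_hat)  # close to (5, 3)
```

Moment estimators are noisy for small samples and can return a negative ncp when the data are nearly central, so a likelihood-based refinement is a natural next step.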
This time, a behavioral model in mathematical form for #sociology and quantitative/computational #psychology
I propose that the probability of a person taking the time to find out the #truth about an issue can be modelled as inversely proportional to the number of culturally significant events they have to contend with.
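One hypothetical formalization of that proposal (my reading, not the author's stated equation): let N be the number of culturally significant events competing for attention and c a disposition constant in (0, 1]; then P(investigate) = min(1, c/(1+N)), which decays inversely with N.

```python
def p_truth_seeking(n_events, c=1.0):
    """Hypothetical model: probability of investigating an issue falls off
    inversely with the number of competing culturally significant events."""
    return min(1.0, c / (1 + n_events))

for n in (0, 1, 4, 9):
    print(n, p_truth_seeking(n))  # 1.0, 0.5, 0.2, 0.1
```

The +1 in the denominator is just a regularizer so the probability is defined (and capped at 1) when nothing is competing for attention; the author's intended functional form may differ.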
@ramikrispin I think this is it. The Mega Test Script creates 1,000 different combinations of the rchisq() data and runs them all using different approaches