In case you are talking about the COVID vaccine, no, that was not demonstrated to be effective; it was claimed to be. Big difference, especially when Pfizer then wanted a 70-YEAR delay before releasing the full trial data.
Also, even if they were effective, there was no evidence that they were also safe and free of long-term side effects, because such a study was impossible to carry out given the speed at which these vaccines were developed. In fact, the usual requirement that such studies be completed before a product can be put on the market was deliberately waived in order to roll them out as quickly as possible.
People were right to be skeptical of this, and they were right to protest being forced to take them. The people who blindly trusted “the science” are, in fact, the Brawndo consumers here.
That’s a good point, and kinda reminds me of the Efficient Market Paradox, which basically says a perfectly efficient market is impossible, since there would be no profit to be made and hence no point in participating. But if people drop out because of that, inefficiencies will invariably pop up again, presenting an opportunity for those seeking to profit, which of course only ends up restoring the efficiency.
So in essence, the market is always just teetering on the edge of efficiency, never fully getting there yet never straying too far either. Perhaps there’s a corollary here (or a similar paradox) that explains why the assumption of rationality, as ridiculous as it seems at face value, is in fact also valid and reasonable.
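That teetering dynamic is easy to sketch as a toy feedback loop. To be clear, everything below (the cost, decay, and noise parameters, the adjustment step sizes) is made up for illustration, not any real market model:

```python
# Toy model of the efficient-market feedback loop (all parameters made up).
def simulate(steps=200, cost=0.1, decay=0.05, noise=0.02):
    ineff = 1.0    # current level of market inefficiency
    traders = 0.5  # fraction of potential arbitrageurs who are active
    history = []
    for _ in range(steps):
        # Arbitrageurs join while inefficiency covers their cost, leave otherwise.
        if ineff > cost:
            traders = min(1.0, traders + 0.05)
        else:
            traders = max(0.0, traders - 0.05)
        # Active traders erode inefficiency; fresh shocks keep recreating it.
        ineff = max(0.0, ineff - decay * traders + noise)
        history.append(ineff)
    return history

h = simulate()
# The series never converges to zero inefficiency: it keeps oscillating
# around the break-even point (the arbitrage cost), i.e. the market
# "teeters on the edge of efficiency" without ever settling there.
```

The bang-bang entry/exit rule is what produces the sustained oscillation: full efficiency would drive everyone out, which immediately un-makes the efficiency.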
Yeah, I mean that’s basically what GPT4Chan did, which someone else already mentioned ITT.
Basically, this guy took several gigabytes’ worth of archived posts from /pol/, trained a model on them, then hooked it up to a chatbot and let it loose on the board. You can see the results in this video.
Yes, I understand that. But I’m fairly certain the quality of the data will still have a massive influence over how much and how egregiously that happens.
Basically, what I’m saying is, training your AI on a corpus of shitposts instead of factual information seems like a good way to increase the frequency and magnitude of such hallucinations.
That’s a good point, but perhaps it’s because that sort of reporting simply isn’t very popular?
A lot of people seem to prefer simply hating the rich, which is easy to do because it doesn’t require empathizing with their problems. And they certainly DO also have problems because having a lot of money places a huge burden of responsibility on someone’s head (that’s why so many lottery winners tend to get rid of it so quickly).
And it’s not enough to simply put the money in a bank account and live off the interest, because rates might change, your bank might fail (and deposits over a quarter million are generally not covered by the FDIC), or inflation might outgrow the rate of return. The truth is, there’s constant pressure to find some way to turn a profit on your capital, but that generally requires taking a risk, and boom, there’s a new source of anxiety for ya.
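To put rough numbers on the inflation point (the $1M principal and both rates below are purely illustrative):

```python
# Purely illustrative figures: $1M in deposits at 4% nominal interest.
principal = 1_000_000
nominal = 0.04

def real_value_after(years, inflation):
    """Purchasing power of the balance after `years`, in today's dollars."""
    real_rate = (1 + nominal) / (1 + inflation) - 1  # Fisher relation
    return principal * (1 + real_rate) ** years

ok = real_value_after(20, 0.03)   # 3% inflation: modest real growth
bad = real_value_after(20, 0.05)  # 5% inflation: purchasing power shrinks
```

With inflation even one point above the deposit rate, the balance loses ground every single year, which is exactly the pressure toward riskier investments I mean.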
I studied econ as a minor and I don’t recall any of my professors ever making the pretense that it was some sort of ultimate or incontrovertible truth. In fact, I’m fairly certain that’s where I learned the expression “all models are wrong, but some are useful”.
You know, that’s actually a valid concern, because just as excessive poverty tends to create many problems, so does excessive wealth. Just look at how many lottery winners end up as poor as they started within a very short time because they’re not used to having that much money. And then there’s the worry about taxes, theft, or just plain jealousy from others.
Money might be able to solve some of your problems, but it will never solve all of them.