“The Court’s self-delegitimization [has escalated] into what looks more like a death spiral, an institutional collapse … The majority is so ethically challenged, internally divided, jurisprudentially sloppy, and ideologically polarized that it cannot do a competent job despite what by historical standards is a ridiculously light workload.”
This, by Garrett Epps, is the most important thing you’ll read this year about our out-of-control Supreme Court.
@JamesGleick This may be the most intellectually honest, linguistically efficient, artistically pleasing, and holistically scathing diss of a governmental entity I have ever read, one that, despite addressing something brutally dangerous, is hysterically comical.
Ernest says he disabled or constrained traffic between kbin and the rest of the instances due to some security concerns. I think the problems are temporary, hopefully.
If nobody is following me, which is true at the moment, does boosting do anything?
If I understand correctly, it doesn't have anything to do with voting-like functionality to gauge popularity; it's an equivalent of retweeting. That means if no one is following me, there's no reason for me to boost anything. Am I right?
It is striking how much the Unabomber's manifesto sounds like Tucker Carlson when philosophizing: against leftists, feminists, political correctness, cities, and progress.
Note also the irony in how much the Unabomber sounds like today's AI hypesters/doomsayers:
Microsoft should create and launch a Reddit clone, keep the API completely open for client developers (but not data miners), pay top Reddit moderators to move their communities over, and use the project’s data for continuously training their own AI models.
Nassim Nicholas Taleb, the author of “The Black Swan”, has exposed the limitations of ChatGPT. ChatGPT fails to understand the ironies and nuances of history and produces nonsensical and contradictory responses. Taleb also criticizes ChatGPT as a mere parrot of human texts, not a source of original insights.
@haritulsidas What's astounding to me is people expecting anything else from a chatbot trained on human text, using a probabilistic approach to guess contextually appropriate strings of text that follow a prompt. The bot has no internal conceptual logic that can relate concepts, causes and effects, or entities and actions to each other, and no access to a database of fact-checked information against which to compare its outputs.
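The "guess the next string of text probabilistically" idea can be sketched with a toy bigram model. To be clear, this is a deliberately simplified illustration, not how GPT actually works internally (transformer models are vastly more sophisticated), but the core principle of predicting the next token from statistics of prior text, with no model of truth or causality, is the same:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "human text the model was trained on".
corpus = "the cat sat on the mat and the cat slept".split()

# Count bigram frequencies: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen.

    Note: the "prediction" is pure frequency counting. Nothing here
    represents what "the" or "cat" mean, or whether a claim is true.
    """
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once → "cat"
```

The model produces fluent-looking continuations purely because they were frequent in its training data, which is exactly why fluency is no evidence of understanding.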
I may be getting ahead of myself in expecting people to understand how this technology works, but shouldn't everyone have some built-in skepticism toward a technology that went from barely producing coherent sentences to seemingly philosophizing virtually overnight? Even if nobody exactly understood how it does what it does, I'd expect more of an inclination to believe it's smoke and mirrors or tech trickery than the apparent belief in GPT's recreation of human-like intelligence.