When you write hashtags that contain multiple words, make the first letter of each word a capital letter, for example #DogsOfMastodon. This will make the tag readable to blind people.
Blind people use the internet through screen reader apps, which read text aloud. By putting a capital at the start of each word in a hashtag, you are telling the screen reader how to say the tag correctly.
In the non-techy world this is generally known as "CamelCase".
@feditips Not your fault, but I’m musing on the last 25 years that I’ve been frequently annoyed by advice that tells us all to modify our behaviours to make “screen readers” work, when it seems quite technically possible that the screen reader software makers could just do a better job coping with the world as it is.
For example in this specific case a screen reader could contain a dictionary. Hopefully now with #AI becoming prevalent we will finally see better readers!
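A minimal sketch of that dictionary idea: a recursive splitter that breaks a lowercase hashtag into known words so a reader could pronounce it even without CamelCase. The tiny `WORDS` set is a stand-in for a real pronunciation dictionary, and the function name is illustrative, not any actual screen-reader API.

```python
# Stand-in for a real pronunciation dictionary.
WORDS = {"dogs", "of", "mastodon", "cats"}

def split_hashtag(tag: str, words=WORDS):
    """Split a hashtag (without '#') into dictionary words; None if impossible."""
    tag = tag.lower()
    if not tag:
        return []
    # Try the longest prefix first so "mastodon" beats shorter matches.
    for end in range(len(tag), 0, -1):
        if tag[:end] in words:
            rest = split_hashtag(tag[end:], words)
            if rest is not None:
                return [tag[:end]] + rest
    return None

print(split_hashtag("dogsofmastodon"))  # ['dogs', 'of', 'mastodon']
```

Real-world splitting is harder (ambiguous boundaries, names, slang), which is presumably where the hoped-for smarter readers come in.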
In the chaos around #NLP, I went back and re-read the beautiful article by Lawrence Barsalou on the function of language in human cognition.
Barsalou argues that language evolved in humans to support coordinated action; the archival function of language is secondary. He highlights that #CognitiveScience and #Linguistics have largely studied the secondary function and made minimal advances on the primary.
@pinecone I agree with you that cognitive science has evolved significantly since Barsalou wrote the article.
But #AI and #ML haven't turned that corner yet. LLMs 'learn language' by consuming only text written for archival purposes by humans. If that is what they are trained with, what would extend their capabilities to situated, coordinated action?
There is a huge gap between the language of action and being able to apply that action in the world. And, that gap is scientific.
“The concern is that machine-generated content has to be balanced with a lot of human review and would overwhelm lesser-known wikis with bad content. While #AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist.”
@ErikJonker Hmm, this is of course no unstoppable force of nature. First, we should regulate #ai and address ethical concerns. Moreover, we need to figure out which economic policies are required to ensure that this #tech improves the standard of living for all, rather than just reducing labor costs and increasing profits for corporations. #llm
Laying off reporters, but starting a 24/7 TV channel, is not a good look for WaPo. A lot of resource suck for a trickle of revenue. Hurts the core product.
Here is a better interview of Geoffrey Hinton on PBS News Hour, where he articulates his concerns better and in more words. IMO there are still huge leaps of logic in his belief that these systems have "understanding" and can be dangerous on their own, i.e. without humans programming and setting goals for them, which gets the tech giants creating them (and gaining enormous political power) off the hook, as if #AI suddenly dropped from the sky.
Hanlon's razor — "never attribute to malice that which is adequately explained by stupidity" — is very relevant to the #AI apocalypticist BS.
It's not some future "superintelligent" AI maliciously killing people that we should worry about.
It's the poorly trained, not sufficiently understood, impossible to audit "AIs" already killing people (or otherwise fscking lives up) we need to tackle.
“Google is shifting the way it presents search results to incorporate conversations with artificial intelligence, along with more short video and social-media posts.”
“Google plans to make its search engine more ‘visual, snackable, personal, and human.’”
The Luring Test: AI and the engineering of consumer trust | Federal Trade Commission
> Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed don’t comprise a class of people protected by anti-discrimination laws.
Chatbots are trained on astronomical amounts of data taken from the internet. Operating in a way akin to predictive text, they build a model to predict the likeliest word or sentence to come after the user’s prompt. This can result in factual errors, but the plausible nature of the responses can trick users into thinking a response is 100% correct.
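The "akin to predictive text" mechanism can be illustrated with a toy bigram model built from a tiny corpus. Real LLMs do this at vastly larger scale with neural networks, but the failure mode described above is visible even here: the model outputs whatever is statistically plausible as a continuation, with no notion of whether it is true.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; an LLM's training data is the internet-scale version.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram "model".
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def predict(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return nexts[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- the likeliest continuation, true or not
```

Scaling this up makes the continuations fluent and plausible, which is exactly why users can mistake plausibility for correctness.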
Teaching critical thinking and healthy skepticism in schools, colleges, and universities is more critical now than ever before. The onslaughts are coming, and discerning fact from fiction (or worse, “directional nonsense”) will become a critical skill for an informed human being.
Would you be completely shocked to discover that the original "Q" on Reddit, which spawned QAnon, was actually an AI / LLM?
Next imagine if such a thing was informed by a super granular and intimate understanding of our thoughts and beliefs and convictions. The AI apocalypse isn't killer robots. It's mass "collective suicide", with our human weaknesses weaponized against each other.
RT @DorotheaBaur
The term 'immortality' is misleading IMO. Humans, animals and plants live, and then they die. They are mortal. "Digital intelligence" is never alive and can thus never die. It's at best permanent, or lasting, but certainly not immortal. #AI #Hinton
🚀 On Sunday, during my flight from NYC to Seattle, I developed an intriguing AI project in Python. Introducing rbot – a GPT-4 based chatbot that processes user prompts and custom conversation decorators, enabling more context-aware responses than out-of-the-box ChatGPT Plus with GPT-4.
🧠 Custom conversation decorators help the chatbot understand the context, resulting in more accurate and relevant responses, surpassing the capabilities of standard GPT-4 implementations.
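A hypothetical sketch of how such "conversation decorators" could work: decorator strings are injected as system messages ahead of the user's prompt in a chat-completion message list. The names here (`build_messages`, the example decorators) are illustrative assumptions, not rbot's actual API.

```python
def build_messages(decorators, user_prompt):
    """Assemble a chat-style message list with decorator context first.

    Each decorator becomes a system message, so the model sees the extra
    context before the user's actual question.
    """
    messages = [{"role": "system", "content": d} for d in decorators]
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages(
    ["You are a travel assistant.", "The user just landed in Seattle."],
    "What should I do first?",
)
# msgs holds two system messages followed by the user prompt,
# ready to pass to a chat-completion API.
```

The design point is that context lives outside the prompt text itself, so the same decorators can be reused across many user turns.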
In the future, will we be able to tell if the other participants in a conversation - pretty much anything that isn't face-to-face - are real rather than #ai?