A conversation that keeps popping up in my mind since FOSDEM centers around open source projects and “AI,” and I still don’t know what I think. So let me share some thoughts here on the famously nuance-friendly Internet. 😜
During a chat with folks from several open source organizations, someone suggested GNOME could attract funding by “sprinkling some AI on it.” Several folks laughed at the topical joke, but then realized it was meant in earnest. 🧵
In the chaos around #NLP, I went back and re-read the beautiful article by Lawrence Barsalou on the function of language in human cognition.
Barsalou argues that language evolved in humans to support coordinated action; its archival function is secondary. He highlights that #CognitiveScience and #Linguistics have largely studied the secondary function and made minimal advances on the primary.
Something I’ve been thinking about a lot in the current battle over the future of (pseudo) AI is the cotton gin.
I live in a country where industrial progress is always considered a positive. It’s such a fundamental concept to the American exceptionalism claim that we are taught never to question it, let alone realize that it’s propaganda.
One such myth, taught early in grade school, is the story of Eli Whitney and the cotton gin. Here was a classic example of a labor-saving device that made millions of lives better. No more overworked people hand cleaning the cotton (slaves, though that was only mentioned much later, if at all). Better clothes and bedding for the world. Capitalism at its best.
But that’s only half the story of this great industrial time saver. Where did those cotton cleaners go? And what was the impact of speeding up the process?
Now that the cleaning bottleneck was gone, the focus was on picking cotton as fast as possible. Those cotton cleaners likely, and millions of other slaves definitely, were sent to the fields to pick cotton. There was an unprecedented explosion in the slave trade. Industrial time management and optimization methods were applied to human beings using elaborate rule-based systems written up in books. How hard to punish to get optimal productivity. How long their lifespans needed to be to get the most production per dollar. Those techniques, practiced on the backs and lives of slaves, became the basis of how to run the industrial mills in the North. They are the ancestors of the techniques that your manager uses now to improve productivity.
Millions of people were sold into slavery and worked to death because of the cotton gin. The advance it provided did not, in fact, save labor overall. Nor did it make life better overall. It made a very small set of people much, much richer; especially the investors around the world who funded the banks who funded the slave purchases. It made a larger set of consumers more comfortable at the cost of the lives of those poorer. Over a hundred years later, this model is still the basis for our society.
Modern “AI” is a cotton gin. It makes a lot of painstaking things much easier and available to everyone. Writing, reading, drawing, summarizing, reviewing medical cases, hiring, firing, tracking productivity, driving, identifying people in a lineup…they all can now be done automatically. Put aside whether it’s actually capable of doing any of those things well; the investors don’t care if their products are good, they only care if they can make more money off of them. So long as they work enough to sell, the errors, and the human cost of those errors, are irrelevant. And like the cotton gin, AI has other side effects. When those jobs are gone, are the new jobs better? Or are we all working that much harder, with even more negative consequences to our life if we fall off the treadmill? One more fear to keep us “productive”.
The Luddites learned this lesson the hard way, and history demonizes them for it, because history isn't written by the losers.
They’ve wrapped “AI” with a shiny ribbon to make it fun and appealing to the masses. How could something so fun to play with be dangerous? But like the story we are told about the cotton gin, the true costs are hidden.
This picture is hilarious because the words “Meat ball” were transliterated into Arabic. The resulting Arabic words (ميت بول) were then interpreted (incorrectly) as Arabic, THEN translated into English. The first word “Meat” can be misread as the Arabic word for “is dead”. The second word “ball” is how an Arabic speaker would pronounce the name “Paul.” Therefore, the English description of this dish is “Paul is dead.”
Just spent at least two hours deleting all of my work from Tumblr, before their AI scraping shit hits the fan, although it's probably too late. In that case, the deletion functions as a gesture of protest.
This shameless large-scale intellectual property theft by greedy tech business assholes everywhere is starting to make the internet pretty annoying. 😖
@jessie is a lover of #languages and helps run #CommonVoice, @mozilla 's open #voice #data set, which now supports over 100 languages. She also teaches #WebDev and loves #hiking. She's awesome; you should follow her 🇬🇧
That's all for now, please do share your own lists so we can create deeper connections, and a tightly-connected community here
I'm reminded here of @maryrobinette's short story - "Red Rockets" - "She built something better than fireworks. She built community."
Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.
Do you work with #voice or #speech #data? You might contribute data, write data specifications for collection, perform filtering or pre-processing, train #ASR or #TTS models, or design or perform evaluations on #ML speech models.
If so, I’d love your help to understand current #dataset #documentation practices, and what we can do to make them better, as part of my #PhD #research.
The #survey takes 10-20 minutes to complete, and you can opt in to win one of 3 gift cards valued at AU$50 each.
Research Protocol 2021/427 approved by #ANU Human Research Ethics Committee
Research in mechanistic interpretability and neuroscience often relies on interpreting internal representations to understand systems, or manipulating representations to improve models. I gave a talk at the UniReps workshop at NeurIPS on a few challenges for this area, summary thread: 1/12 #ai #ml #neuroscience #computationalneuroscience #interpretability #NeuralRepresentations #neurips2023
I am truly amazed at the number of applicants I have seen off of this single post. And almost all are well-suited candidates worth my time to review. I am astonished that a single post on the fedi is more effective than actually hiring a recruiter. Thank you everyone for the boosts and applications.
While many applicants have made it through and are currently being hired, we have so many positions that quite a few are still available at every level, from senior to junior, for both data scientists and programmers. So please keep boosting, sharing, and applying if anyone is interested.
Just a reminder: this is 100% remote, no fixed hours, and will pay market rates for the position. I will be your direct boss and hiring manager (also owner, founder, and inventor of the tech).
Man, watching this insane video with the ex Google AI guy who keeps claiming AI is sentient etc. No proof. No evidence. Just statements.
Show me the ML/AI that's proactively doing things, unprompted, the way every other sentient being operates. Show me the math that shows current ML eventually becomes sentient.
So far it doesn't do jack unprompted, and it chokes on its own shit when fed it instead of going off and doing its own thing. Anthropomorphism at its finest.
I am still hiring for top-tier programmers and data scientists. Please reboost, share, recommend, or reply if you know anyone who might be interested.
Fully remote! Live and work from anywhere with internet (including the beach!)
I am the company owner, and will be both your direct boss and the hiring manager.
Semantic Web, AI, and Java are some of the key techs. Open-source and Linux oriented experience ideally. OSS contributions and activity will be weighted heavily, particularly in relevant areas.
#ML systems can leak confidential data in their training set even with a very silly attack. This is a direct and clear #MLsec issue that applies well beyond the #LLM case
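The "very silly attack" really can be this simple. Here's a minimal sketch using a hypothetical toy next-word predictor (a greedy bigram table, not a real LLM) that memorizes its training text; a plain prompt-completion query then extracts a "secret" from the training data verbatim:

```python
from collections import defaultdict

def train_bigram(corpus):
    """Build a next-word table from training text (the model 'memorizes' the corpus)."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def complete(model, prompt, max_tokens=10):
    """The 'very silly attack': just ask the model to continue a known prefix."""
    out = prompt.split()
    for _ in range(max_tokens):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(nxt[0])  # greedy: take the memorized continuation
    return " ".join(out)

# Hypothetical training data that accidentally contains a credential.
corpus = "the api key is sk-12345 do not share it"
model = train_bigram(corpus)
print(complete(model, "the api key"))  # the secret comes back verbatim
```

Real LLMs are far more complex, but published extraction attacks on them work on the same principle: prompt with a plausible prefix and let memorization do the rest.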
Fake Intelligence is where we try to simulate intelligence by feeding huge amounts of dubious information to algorithms we don’t fully understand to create approximations of human behaviour where the safeguards that moderate the real thing provided by family, community, culture, personal responsibility, reputation, and ethics are replaced by norms that satisfy the profit motive of corporate entities.
I wrote up my thoughts about #LLMs and explainable #ML. The tl;dr is that traditional explainable ML techniques weren't designed with psychology in mind, whereas LLMs happen to mirror how humans explain and earn trust, or at least come a lot closer to it: https://timkellogg.me/blog/2023/10/01/interpretability #AI
Deep toots, high toots