#AI#Algorithms#Constitutionalism: "People aren’t perfect. Neither ethics training for AI engineers nor legislation by woefully uninformed politicians can change that simple truth. I don’t need to assume that Big Tech chief executives are bad actors or that large companies are malevolent to understand that what is in their self-interest is not always in mine. The framers of the US Constitution recognised this simple truth and sought to leverage human nature for a greater good. The Constitution didn’t simply assume people would always act towards that greater good. Instead it defined a dynamic mechanism — self-interest and the balance of power — that would force compromise and good governance. Its vision of treating people as real actors rather than better angels produced one of the greatest frameworks for governance in history."
To Halt or Not to Halt? That Is the Question by Cristian Calude, 2024
Can mathematics be done by computers only? Can software testing be fully automated? Can you write an anti-virus program which never needs any updates? Can we make the Internet perfectly secure? Your guess is correct: the answer to each question is negative.
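All of those negative answers trace back to Turing's halting problem: no algorithm can decide, for every program, whether it halts. What we can do is semi-decide by bounded simulation — run the program for a while and report "halts" if it finishes, but learn nothing if the budget runs out. A minimal sketch (the generator stands in for a single-stepped program, and the Collatz iteration — whose halting for all inputs is itself an open problem — plays the toy program):

```python
def collatz(n):
    # Toy "program": yields once per step, returns (halts) when n reaches 1.
    while n != 1:
        yield
        n = 3 * n + 1 if n % 2 else n // 2

def halts_within(prog, budget):
    """Bounded halting check: True if prog finishes within `budget` steps,
    None if the budget runs out -- we learn nothing, it may halt later."""
    it = iter(prog)
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return True
    return None
```

The asymmetry is the whole point: `halts_within` can confirm halting but can never confirm non-halting, which is exactly why the four questions above have negative answers.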
#SocialMedia#SocialNetworks#ContentModeration#Algorithms#RecommendationEngines#Messaging: "So you joined a social network without ranking algorithms—is everything good now? Jonathan Stray, a senior scientist at the UC Berkeley Center for Human-Compatible AI, has doubts. “There is now a bunch of research showing that chronological is not necessarily better,” he says, adding that simpler feeds can promote recency bias and enable spam.
Stray doesn’t think social harm is an inevitable outcome of complex algorithmic curation. But he agrees with Rogers that the tech industry’s practice of trying to maximize engagement doesn’t necessarily select for socially desirable results.
Stray suspects the solution to the problem of social media algorithms may in fact be … more algorithms. “The fundamental problem is you've got way too much information for anybody to consume, so you have to reduce it somehow,” he says."
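Stray's "more algorithms" point can be made concrete with an entirely hypothetical reducer: a greedy top-k selection that discounts repeated authors, so that a purely chronological spam flood from one account cannot fill the feed. The penalty scheme and parameters below are illustrative, not any platform's actual ranking:

```python
from collections import defaultdict

def reduce_feed(posts, k=10, author_penalty=0.5):
    """Greedy top-k selection over (author, score) pairs.

    Each additional post from the same author has its score halved
    (by default), so one prolific account can't dominate the feed --
    a minimal example of curation that is neither pure chronology
    nor pure engagement-maximization.
    """
    seen = defaultdict(int)   # posts already chosen per author
    pool = list(posts)
    chosen = []
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda p: p[1] * author_penalty ** seen[p[0]])
        pool.remove(best)
        seen[best[0]] += 1
        chosen.append(best)
    return chosen
```

With three high-scoring posts from "spam" and one each from "alice" and "bob", a top-3 selection picks one post per author rather than three from the flooder.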
#AI#Algorithms#DSA#OSA#AlgorithmicAudits#Law#PoliticalEconomy: "Accepted in the Proceedings of the 2024 ACM Conference on Fairness, Accountability and Transparency. For almost a decade now, scholarship in and beyond the ACM FAccT community has been focusing on novel and innovative ways and methodologies to audit the functioning of algorithmic systems. Over the years, this research idea and technical project have matured enough to become a regulatory mandate. Today, the Digital Services Act (DSA) and the Online Safety Act (OSA) have established the framework within which technology corporations and (traditional) auditors will develop the ‘practice’ of algorithmic auditing, thereby presaging how this ‘ecosystem’ will develop. In this paper, we systematically review the auditing provisions in the DSA and the OSA in light of observations from the emerging industry of algorithmic auditing. Who is likely to occupy this space? What are some political and ethical tensions that are likely to arise? How are the mandates of ‘independent auditing’ or ‘the evaluation of the societal context of an algorithmic function’ likely to play out in practice? By shaping the picture of the emerging political economy of algorithmic auditing, we draw attention to strategies and cultures of traditional auditors that risk eroding important regulatory pillars of the DSA and the OSA. Importantly, we warn that ambitious research ideas and technical projects of/for algorithmic auditing may end up crushed by the standardising grip of traditional auditors and/or diluted within a complex web of (sub-)contractual arrangements, diverse portfolios, and tight timelines."
"The characters in the book were forever making sacrifices or performing rituals so the gods would smile on their endeavors. Often the rituals didn’t work, but the humans somehow never blamed the gods, only themselves." Why @artologica.net paints the algorithm. https://open.substack.com/pub/artologica/p/how-i-became-the-voice-of-the-algorithm
#Cybersecurity#Encryption#QuantumComputing#Algorithms: "Chen’s (not yet peer-reviewed) preprint claims a new quantum algorithm that efficiently solves the “shortest independent vector problem” (SIVP, as well as GapSVP) in lattices with specific parameters. If it holds up, the result could (with numerous important caveats) allow future quantum computers to break schemes that depend on the hardness of specific instances of these problems. The good news here is that even if the result is correct, the vulnerable parameters are very specific: Chen’s algorithm does not immediately apply to the recently-standardized NIST algorithms such as Kyber or Dilithium. Moreover, the exact concrete complexity of the algorithm is not instantly clear: it may turn out to be impractical to run, even if quantum computers become available.
But there is a saying in our field that attacks only get better. If Chen’s result can be improved upon, then quantum algorithms could render obsolete an entire generation of “post-quantum” lattice-based schemes, forcing cryptographers and industry back to the drawing board.
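For intuition about the problem family Chen targets: a lattice is the set of all integer combinations of some basis vectors, and the shortest-vector problem asks for the shortest nonzero one. In two dimensions this is trivially brute-forceable — a toy enumeration, with an illustrative search bound; the cryptographic hardness assumption concerns high-dimensional lattices, where this search blows up exponentially:

```python
import itertools
import math

def shortest_vector(basis, bound=10):
    """Toy brute force for the shortest nonzero lattice vector in 2D:
    enumerate integer combinations a*b0 + b*b1 with |a|, |b| <= bound.
    Exponential in the lattice dimension -- feasible only as a toy."""
    (x0, y0), (x1, y1) = basis
    best = None
    for a, b in itertools.product(range(-bound, bound + 1), repeat=2):
        if a == 0 and b == 0:
            continue
        v = (a * x0 + b * x1, a * y0 + b * y1)
        if best is None or math.hypot(*v) < math.hypot(*best):
            best = v
    return best
```

Note that the skewed basis [(3, 1), (5, 2)] generates the same lattice as the unit basis (its determinant is 1), so its shortest vector also has length 1 — a "bad" basis hides short vectors, which is the intuition behind lattice-based cryptography.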
📢 Two fully funded #PhD positions in our group, the Sydney Algorithms and Computing Theory group (SACT) at the University of #Sydney, and André van Renssen
#SocialMedia#Algorithms#Democracy#ContentModeration: "For all their efforts to moderate content and reduce online toxicity, social media companies still fundamentally care about one thing: retaining users in the long run, a goal they’ve perceived as best achieved by keeping them engaged with content as long as possible. But the goal of keeping individuals engaged doesn’t necessarily serve society at large and can even be harmful to values we hold dear, such as living in a healthy democracy.
To address that problem, a team of Stanford researchers advised by Michael Bernstein, associate professor of computer science in the School of Engineering, and Jeffrey Hancock, professor of communication in the School of Humanities and Sciences, wondered if designers of social media platforms might, in a more principled way, build societal values into their feed-ranking algorithms. Could these algorithms, for example, promote social values such as political participation, mental health, or social connection? The team tested the idea empirically in a new paper that will be published in Proceedings of the ACM on Human-Computer Interaction in April 2024. Bernstein, Hancock, and a group of Stanford HAI faculty also explored that idea in a recent think piece.
For their experiment, the researchers aimed to decrease partisan animosity by building democratic values into a feed-ranking algorithm. “If we can make a dent in this very important value, maybe we can learn how to use social media rankings to affect other values we care about,” says Michelle Lam, a fourth-year graduate student in computer science at Stanford University and co-lead author of the study." https://hai.stanford.edu/news/building-social-media-algorithm-actually-promotes-societal-values
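The mechanism the researchers describe can be gestured at with a deliberately simplified sketch — the field names, weight, and scoring rule below are hypothetical, not the study's implementation: the platform's usual engagement score is discounted by a model's estimate of how much a post stokes partisan animosity.

```python
def rerank(posts, animosity_weight=0.5):
    """Hypothetical value-aware reranking: each post carries an
    'engagement' prediction and an 'animosity' prediction (both in
    [0, 1]); the latter discounts the former before sorting."""
    def score(p):
        return p["engagement"] * (1 - animosity_weight * p["animosity"])
    return sorted(posts, key=score, reverse=True)
```

Under this scoring, a rage-bait post with higher raw engagement can rank below a calmer post — which is the general shape of the intervention, though the real study's value model is far more involved.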
People trying to train AIs are now complaining that all of the AI-generated content on the internet is making it hard for them to assemble quality training sets of natural language and images.
@futurebird The main players have a big advantage: #Google can already detect #AI#content because they have been training #algorithms for so long; the small players don't have that advantage. I would suggest using data from before #ChatGPT became popular with end consumers. The good thing for small AI companies is that they don't get robots.txt- & #IP-blocked (I think >15% of major sites are blocking the main AI scrapers), so they still have access to those data pools, which are also guaranteed not to be AI-generated.
"The “Algorithmic Sabotage” radically reworks our technopolitical arrangements away from the structural injustices, supremacist perspectives and necropolitical power layered into the “algorithmic empire”, highlighting its materiality and consequences in terms of both carbon emissions and the centralisation of control"
Already a future classic, Joy Buolamwini’s Coded Bias. This documentary is already old, but its significance will continue to rise, as future generations will increasingly have their lives dictated from behind the scenes by invisible #AI. Schools you can go to, internships you can get, work you can get, loans you can get, how you will be judged in court. And they are not talking about China.
Machine Vision: How Algorithms Are Changing the Way We See the World by Jill Walker Rettberg, 2023
Providing an overview of the historical and contemporary uses of machine vision, she unpacks how technologies such as smart surveillance cameras and TikTok filters are changing the way we see the world and one another.
Posting on Facebook is dumb. Your intelligent posts get fed to no one, while your worst mistakes get surfaced for two weeks to circles that include even people who are not your friends.
You have an n by m grid, where each square is either filled or empty. You also have a stamp in some shape that you can use to fill squares in the grid. You can only stamp if none of the squares the stamp would cover are already filled.
Is determining whether a given grid with preset filled squares can be completely filled with the stamp in P, or is it NP-complete?
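A brute-force backtracking solver makes the decision problem concrete (this sketch is exponential in the worst case — the question is whether anything fundamentally better exists):

```python
def solve(grid, stamp):
    """Backtracking search: can `grid` (n-by-m list of lists of bools,
    True = filled) be completely filled using `stamp`, a set of
    (row, col) offsets? A placement is legal only if every covered
    square is in bounds and currently empty."""
    n, m = len(grid), len(grid[0])
    if all(all(row) for row in grid):
        return True  # every square filled
    # Collect all legal placements of the stamp's top-left anchor.
    placements = []
    for r in range(n):
        for c in range(m):
            cells = [(r + dr, c + dc) for dr, dc in stamp]
            if all(0 <= i < n and 0 <= j < m and not grid[i][j]
                   for i, j in cells):
                placements.append(cells)
    for cells in placements:
        for i, j in cells:
            grid[i][j] = True   # stamp
        if solve(grid, stamp):
            return True
        for i, j in cells:
            grid[i][j] = False  # undo and try the next placement
    return False
```

For example, an empty 2x2 grid is fillable by a horizontal domino stamp {(0, 0), (0, 1)}, but the same grid with one square pre-filled is not — three empty squares can never be covered by two-square stamps.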
#AI#Algorithms#ML#ResponsibleAI: "Machine learning and algorithmic systems are useful tools whose potential we are only just beginning to grapple with—but we have to understand what these technologies are and what they are not. They are neither “artificial” nor “intelligent”—they do not represent an alternate and spontaneously-occurring way of knowing independent of the human mind. People build these systems and train them to get a desired outcome. Even when outcomes from AI are unexpected, usually one can find their origins somewhere in the data systems they were trained on. Understanding this will go a long way toward responsibly shaping how and when AI is deployed, especially in a defense contract, and will hopefully alleviate some of our collective sci-fi panic.
This doesn’t mean that people won’t weaponize AI—and already are in the form of political disinformation or realistic impersonation. But the solution to that is not to outlaw AI entirely, nor is it handing over the keys to a nuclear arsenal to computers. We need a common sense system that respects innovation, regulates uses rather than the technology itself, and does not let panic, AI boosters, or military tacticians dictate how and when important systems are put under autonomous control." https://www.eff.org/deeplinks/2024/03/how-avoid-ai-apocalypse-one-easy-step
Fmr #Trump treas sec #Mnuchin is telling investors he has a plan to buy #TikTok
Mnuchin told potential backers he aims to maneuver around its price of >$100B & #China’s ban on the export of recommendation #algorithms.
He indicated he could overcome those hurdles by offering to buy the #app w/o the export-blocked #code, essentially forcing his consortium to remake a service built on billions of lines of code.
Observers, & ≥1 person familiar w/the pitch, have said the idea is so far-fetched that it suggests a lack of familiarity w/how #tech companies work. #TikTok users flocked to the #app bc of its surprising suggestions for videos they might like, & there’s no guarantee any #Mnuchin-driven version could duplicate that success — or beat rivals like #Meta & #Google, who have worked for yrs to mirror the experience w/in their own respective apps, #Instagram & #YouTube. #algorithms#SocialMedia#policy
“Everyone wants to build a #TikTok-level #algorithm. That’s a key element of competition in… #tech …right now,” said Matt Perault, UNC prof & fmr #Facebook dir who studies tech #policy.
“…the biggest cos have thrown a lot of money & #engineering talent at that issue & have struggled to do it. If #Mnuchin thinks he can do that & succeed where…successful cos have struggled, good luck.”
Mnuchin, [is] a fmr hedge fund mngr & Hollywood producer w/no #SocialMedia experience….