The "effective altruism" and "effective accelerationism" ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let's try something else.
Both effective altruism and effective accelerationism embrace as a given the idea of a super-powerful artificial general intelligence being just around the corner, an assumption that leaves little room for discussion of the many ways that AI is harming real people today.
😆 In general ‘AI’ is a very poor name for a bunch of technologies that enable computers to do better pattern matching.
Even worse, #OpenAI is now using ‘AGI’ to mean better-than-human level performance (at what task exactly is unclear), which isn’t what the phrase is generally understood to mean at all.
The more you look into this whole area, the more you realise there's a lot of smoke and mirrors: marketing hype dressed up as technological revolution. #AI #AIEthics
#AI #AISafety #AIEthics: "The emerging field of "AI safety" has attracted public attention and large infusions of capital to support its implied promise: the ability to deploy advanced artificial intelligence (AI) while reducing its gravest risks. Ideas from effective altruism, longtermism, and the study of existential risk are foundational to this new field. In this paper, we contend that overlapping communities interested in these ideas have merged into what we refer to as the broader "AI safety epistemic community," which is sustained through its mutually reinforcing community-building and knowledge production practices. We support this assertion through an analysis of four core sites in this community's epistemic culture: 1) online community-building through web forums and career advising; 2) AI forecasting; 3) AI safety research; and 4) prize competitions. The dispersal of this epistemic community's members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI. Understanding the epistemic culture that fuses their moral convictions and knowledge claims is crucial to evaluating these claims, which are gaining influence in critical, rapidly changing debates about the harms of AI and how to mitigate them."
Excellent keynote by @abebab at #Internetdagarna today here in Sweden. Looking at the chat, many of Sweden's tech geeks are surprised by the messages she is conveying.
Truly appreciate that she was given this space and made time to participate.
🫠 Meta disbanded its Responsible AI team
➥ The Verge
「 RAI was created to identify problems with its AI training approaches, including whether the company’s models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms 」
Harvard's metaLab has launched https://aipedagogy.org, a resource chock full of tasty assignments by trailblazers of generative AI in the classroom. (My own "AI Sandwich" is also on the menu.)
I've already stolen Juliana Castro's "Illustrate a Hoax" for my own class!
#AI #GenerativeAI #AIEthics #ChatGPT #ChatBots #LLMs: "We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception."
It's remarkable how Yann LeCun reacts to criticism and feedback in this tone. He genuinely believes that Galactica "was murdered by a ravenous Twitter mob." How could this be a reasonable response from a scientist?!
I can draw parallels from this reaction to other areas in life where certain people always try to assert superiority, seek immunity from criticism, and accuse others of being mobs, terrorists, etc.
'Exemplifying those fears, the venture capital firm Andreessen Horowitz — one of the biggest financial backers of AI — warned in comments to the US Copyright Office (USCO) that new regulation on training data "will significantly disrupt" investment into the technology and the expectations around it, Insider reports.'
I thought the TechBros were all about "disruption"? Or does that only apply when it works out in their favor?
Next was a fabulous talk by @morganklauss, of the University of Michigan, on how we teach computers to see identity. Scheuerman audits gender classification across a number of commercial models, demonstrating significant bias and the erasure of non-cis people. Highly recommend https://www.youtube.com/watch?v=aeETasFrnMs (6/11) #gender #AIEthics #bias
"Tech companies that have branded themselves “AI first” depend on heavily surveilled gig workers like data labelers, delivery drivers and content moderators. Startups are even hiring people to impersonate AI systems like chatbots, …"
I made this diagram as an explanatory model for showing the relationships between different actors when it comes to AI development and use. It gives you something to point at when showing for example who finances the systems, who contributes to the systems and who benefits or suffers from them. In the blog post are explanations of each grouping in the chart.
In a recent guest lecture I noted that self-driving cars are still a dicey proposition despite 30 years and $100bn of investment. Now San Francisco has suspended Cruise's operations after one of its robotaxis ran over, dragged, and pinned to the ground a pedestrian who had first been struck by another car. I know Cruise gives its cars cutesy names, but did they really have to name this one "Panini"? 😬
Repeated calls by artists for transparency in this NYU/USC symposium ignore the fact that generative AI is opaque by nature. Generative AI exploits the most transparent digital medium of all time—"view source" is literally built into every web browser—to make a technology so opaque that even its creators don't understand how it works.
#AI #GenerativeAI #AIEthics #Science: "Companies are unlikely to release details of their latest models for commercial reasons, precluding independent verification and regulation.
Society needs a different approach. That's why we — specialists in AI, generative AI, computer science and psychological and social impacts — have begun to form a set of 'living guidelines' for the use of generative AI. These were developed at two summits at the Institute for Advanced Study at the University of Amsterdam in April and June, jointly with members of multinational scientific institutions such as the International Science Council, the University-Based Institutes for Advanced Study and the European Academy of Sciences and Arts. Other partners include global institutions (the United Nations and its cultural organization, UNESCO) and the Patrick J. McGovern Foundation in Boston, Massachusetts, which advises the Global AI Action Alliance of the World Economic Forum (see Supplementary information for co-developers and affiliations). Policy advisers also participated as observers, including representatives from the Organisation for Economic Co-operation and Development (OECD) and the European Commission.
Here, we share a first version of the living guidelines and their principles (see ‘Living guidelines for responsible use of generative AI in research’). These adhere to the Universal Declaration of Human Rights, including the ‘right to science’ (Article 27). They also comply with UNESCO’s Recommendation on the Ethics of AI, and its human-rights-centred approach to ethics, as well as the OECD’s AI Principles."