ErrantCanadian, to philosophy
@ErrantCanadian@zirk.us avatar

Now that it's been accepted to ACM FAccT'24, I've updated the preprint of my paper on why artists are right that AI art is a kind of theft. I hope this promotes more serious thought about the visions of generative AI developers and the impacts of these technologies.

https://philpapers.org/rec/GOEAAI-2

@philosophy @facct

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

"What do you mean by progress when you talk about AI?" and progress for whom?

I asked the techno-optimist guy at an AI Hype Manel!

  • Does progress mean getting bigger or better models?

  • What about the impact on the environment and water resources, the destruction of communities, and the mining of raw materials in Africa?

At first he didn't get my question. Then he said he believed in the "utilitarian view" & that developing intelligence is very important.

Just parroting the AI hype people!

akshatrathi, to random
@akshatrathi@mastodon.green avatar

2020: Microsoft sets goal to be carbon negative by end of the decade.

2023: Microsoft's emissions are 30% higher than in 2020.

Main cause? The relentless push to meet AI demand, which requires new data centers built from carbon-intensive steel, cement, and chips.
https://www.bloomberg.com/news/articles/2024-05-15/microsoft-s-ai-investment-imperils-climate-goal-as-emissions-jump-30

cyberlyra,
@cyberlyra@hachyderm.io avatar

@akshatrathi ooh wait lemme guess, we are not supposed to pay attention to the rapacious use of limited resources to fuel AI now—the materials sunk into the ocean for “cooling” that are bleaching corals and melting ice caps so that grade schoolers can cheat on their homework— we are supposed to think about how all this sets us up for more efficient solutions in the future when the technology inevitably improves, amirite?

The real , make no mistake.

SteveThompson, to ai
@SteveThompson@mastodon.social avatar

Disturbing in so many ways.

"AI Outperforms Humans in Moral Judgments"

https://neurosciencenews.com/ai-llm-morality-26041/

"In the study, participants rated responses from AI and humans without knowing the source, and overwhelmingly favored the AI’s responses in terms of virtuousness, intelligence, and trustworthiness.

This modified moral Turing test, inspired by ChatGPT and similar technologies, indicates that AI might convincingly pass a moral Turing test by exhibiting complex moral reasoning."

cyberlyra, to random
@cyberlyra@hachyderm.io avatar

I am really excited about this open-access special issue on AI, power and domination, with papers from friends and colleagues, that says the quiet part out loud.

https://firstmonday.org/ojs/index.php/fm

SteveThompson, to ai
@SteveThompson@mastodon.social avatar

There you have it. AI scofflaws.

"Former Amazon exec alleges she was told to ignore the law while developing an AI model — 'everyone else is doing it'"

https://www.businessinsider.com/ex-amazon-ghaderi-exec-suing-ai-race-copyright-allegations-2024

"A former Amazon exec alleges that the company instructed her to ignore copyright rules to stay afloat in the race for AI innovation."

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "In predictive optimisation systems, machine learning is used to predict future outcomes of interest about individuals, and these predictions are used to make decisions about them. Despite being based on pseudoscience (on the belief that the future of the individual is already written and, therefore, readable), not working and unfixably harmful, predictive optimisation systems are still used by private companies and by governments. As they are based on the assimilation of people to things, predictive optimisation systems have inherent political properties that cannot be altered by any technical design choice: the initial choice about whether or not to adopt them is therefore decisive, as Langdon Winner wrote about inherently political technologies.

The adoption of predictive optimisation systems is incompatible with liberalism and the rule of law because it results in people not being recognised as self-determining subjects, not being equal before the law, not being able to predict which law will be applied to them, all being under surveillance as 'suspects' and being able or unable to exercise their rights in ways that depend not on their status as citizens, but on their contingent economic, social, emotional, health or religious status. Under the rule of law, these systems should simply be banned.

Requiring only a risk impact assessment – as in the European Artificial Intelligence Act – is like being satisfied with asking whether a despot is benevolent or malevolent: freedom, understood as the absence of domination, is lost whatever the answer. Under the AI Act's harm approach to fundamental rights impact assessments (perhaps a result of the "lobbying ghost in the machine of regulation"), fundamental rights can be violated with impunity as long as there is no foreseeable harm."

https://zenodo.org/records/10866778

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "We have been here before. Other overhyped new technologies have been accompanied by parables of doom. In 2000, Bill Joy warned in a Wired cover article that “the future doesn’t need us” and that nanotechnology would inevitably lead to “knowledge-enabled mass destruction”. John Seely Brown and Paul Duguid’s criticism at the time was that “Joy can see the juggernaut clearly. What he can’t see—which is precisely what makes his vision so scary—are any controls.” Existential risks tell us more about their purveyors’ lack of faith in human institutions than about the actual hazards we face. As Divya Siddarth explained to me, a belief that “the technology is smart, people are terrible, and no one’s going to save us” will tend towards catastrophizing.

Geoffrey Hinton is hopeful that, at a time of political polarization, existential risks offer a way of building consensus. He told me, “It’s something we should be able to collaborate on because we all have the same payoff”. But it is a counsel of despair. Real policy collaboration is impossible if a technology and its problems are imagined in ways that disempower policymakers. The risk is that, if we build regulations around a future fantasy, we lose sight of where the real power lies and give up on the hard work of governing the technology in front of us."

https://www.science.org/doi/10.1126/science.adp1175

OmaymaS, to tech
@OmaymaS@dair-community.social avatar
  • "Business" is NOT neutral.
  • Tech is NOT apolitical.
  • Industry is NOT detached from the wider societal and political issues.

Executives & investors who claim otherwise are either naïve or benefiting from isolating & silencing their employees.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "In this work we challenge the argument for robot rights on metaphysical, ethical and legal grounds. Metaphysically, we argue that machines are not the kinds of things that may be denied or granted rights. Building on theories of phenomenology and post-Cartesian approaches to cognitive science, we ground our position in the lived reality of actual humans in an increasingly ubiquitously connected, controlled, digitized, and surveilled society. Ethically, we argue that, given machines’ current and potential harms to the most marginalized in society, limits on (rather than rights for) machines should be at the centre of current AI ethics debate. From a legal perspective, the best analogy to robot rights is not human rights but corporate rights, a highly controversial concept whose most important effect has been the undermining of worker, consumer, and voter rights by advancing the power of capital to exercise outsized influence on politics and law. The idea of robot rights, we conclude, acts as a smoke screen, allowing theorists and futurists to fantasize about benevolently sentient machines with unalterable needs and desires protected by law. While such fantasies have motivated fascinating fiction and art, once they influence legal theory and practice articulating the scope of rights claims, they threaten to immunize from legal accountability the current AI and robotics that is fuelling surveillance capitalism, accelerating environmental destruction, and entrenching injustice and human suffering." https://firstmonday.org/ojs/index.php/fm/article/view/13628

jonippolito, to generativeAI
@jonippolito@digipres.club avatar

Google's Education VP wants us to believe AI is the classroom's new calculator, but this is a terrible analogy:

  1. We know how calculators produce their results.
  2. You can check a calculator's answer using pretty much the same algorithm it uses.
  3. Rare floating point errors aside, calculators do not invent false answers.
  4. Calculators are based on math principles; LLMs are based on no principles.

https://news.slashdot.org/story/24/04/06/0541216/ais-impact-on-cs-education-likened-to-calculators-impact-on-math-education
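
Points 2 and 3 are easy to make concrete: a calculator-style result can be re-derived independently, and even its floating-point quirks are systematic and inspectable rather than invented. A minimal sketch (toy numbers, not from the linked story):

```python
from decimal import Decimal

# 0.1 + 0.2 != 0.3 in binary floating point: a known, reproducible
# representation issue, not a fabricated answer.
print(0.1 + 0.2 == 0.3)                  # False
print(Decimal(0.1) + Decimal(0.2))       # the exact binary values involved
print(Decimal("0.1") + Decimal("0.2"))   # exact decimal arithmetic: 0.3
```

There is no analogous independent procedure for re-deriving an LLM's answer, and that asymmetry is the point of the list above.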

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Our global landscape of emerging technologies is increasingly affected by artificial intelligence (AI) hype, a phenomenon with significant large-scale consequences for the global AI narratives being created today. This paper aims to dissect the phenomenon of AI hype in light of its core mechanisms, drawing comparisons between the current wave and historical episodes of AI hype, concluding that the current hype is historically unmatched in terms of magnitude, scale and planetary and social costs. We identify and discuss socio-technical mechanisms fueling AI hype, including anthropomorphism, the proliferation of self-proclaimed AI “experts”, the geopolitical and private sector “fear of missing out” trends and the overuse and misappropriation of the term “AI” in emerging technologies. The second part of the paper seeks to highlight the often-overlooked costs of the current AI hype. We examine its planetary costs as the AI hype exerts tremendous pressure on finite resources and energy consumption. Additionally, we focus on the connection between AI hype and socio-economic injustices, including perpetuation of social inequalities by the huge associated redistribution of wealth and costs to human intelligence. In the conclusion, we offer insights into the implications for how to mitigate AI hype moving forward. We give recommendations of how developers, regulators, deployers and the public can navigate the relationship between AI hype, innovation, investment and scientific exploration, while addressing critical societal and environmental challenges." https://link.springer.com/article/10.1007/s43681-024-00461-2

eric, to IsraelPalestine
@eric@social.coop avatar

Lavender is traditionally used in France to reduce the moth population.

Only a small proportion of French Jews emigrate to Israel. These binationals are subject to compulsory military service.

This army prepared and launched the first "AI war" in 2021: https://techhub.social/@estelle/111510965384428730

A development team has designed a more efficient product, which a Frenchman has suggested calling Lavender: https://techhub.social/@estelle/112220409975979758 @palestine

ErrantCanadian, to philosophy
@ErrantCanadian@zirk.us avatar

Happy to share that my paper on why AI art is theft has been accepted to the 2024 ACM Conference on Fairness, Accountability, and Transparency! See you in Rio in June 😃

Preprint here (revisions soon):
philpapers.org/rec/GOEAAI-2
arxiv.org/abs/2401.06178

@facct @philosophy

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "In March 2024, the European Trade Union Institute (ETUI) in Brussels published the book titled Artificial Intelligence, Labour and Society, offering an in-depth analysis of the effects of artificial intelligence on the labor market and society at large. Edited by Aida Ponce Del Castillo, the book features an essay by DiPLab’s co-founder Antonio A. Casilli, contributing to the growing discussion on the ethical dimensions of AI.

The book Artificial Intelligence, Labour and Society highlights the rapid and pervasive expansion of AI technologies, underscoring the end of an era where AI was synonymous with robots and complex algorithms meant only for the technically savvy. Today, AI has become an integral part of our workplaces and daily lives, prompting a significant paradigm shift with deep and often hidden implications for the labor market. The chapters contained in the book bring together reflections from high-level academics and research activists worldwide, adopting a multidisciplinary approach that embraces diverse geographical and cultural perspectives." https://diplab.eu/diplab-featured-in-new-book-just-published-by-etui-artificial-intelligence-labour-and-society/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "AI experts Camille Francois and Meredith Whittaker discuss how to break up Big Tech and build a safe and ethical AI.
In the final episode of the AI series with Maria Ressa, we meet two women on the front lines of the battle to make artificial intelligence accountable.

Camille Francois is a researcher specialising in combatting disinformation and digital harms. Nowadays she is helping lead French President Emmanuel Macron’s initiative on AI and democracy." https://www.aljazeera.com/program/studio-b-unscripted/2024/2/22/the-ai-series-ai-and-surveillance-capitalism

jonippolito, to Cybersecurity
@jonippolito@digipres.club avatar

A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 are fake, so he builds a harmless version of one himself, which 15,000 code bases already depend on, to keep a hacker from publishing a malware version.

Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬

https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware
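
The underlying failure mode is checkable before you install anything. A minimal defensive sketch, assuming Python dependencies and PyPI's public JSON API (the package names below are hypothetical examples, not the ones from the article):

```python
import requests

def package_exists(name: str) -> bool:
    """Return True if `name` is a registered package on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Names an LLM might suggest; "totally-plausible-utils" is hypothetical.
suggested = ["requests", "totally-plausible-utils"]
for name in suggested:
    verdict = "exists" if package_exists(name) else "NOT on PyPI (possibly hallucinated)"
    print(f"{name}: {verdict}")
```

Note that existence alone proves nothing: a name an LLM hallucinates today can be registered by an attacker tomorrow, which is exactly the scenario the researcher pre-empted.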

weareopencoop, to random
@weareopencoop@mastodon.social avatar

We updated the AI Literacy library on our website. Head over to https://buff.ly/4at9yeq to see a list of papers, articles and posts that we are currently reading.

strypey, to ai
@strypey@mastodon.nzoss.nz avatar

"How much do we want AI to be involved in farming? The time for that conversation is now, before these trends are irreversibly locked in. Now is the time to set reasonable ethical limits."

& Birch, 2024

https://aeon.co/essays/how-to-reduce-the-ethical-dangers-of-ai-assisted-farming

strypey,
@strypey@mastodon.nzoss.nz avatar

"For example, an EU working group proposed in 2019 that AI systems ‘should take into account the environment, including other living beings’, but this is so broad it implies no meaningful limits at all on the use of AI in farming. A review of 22 sets of AI ethics guidelines concluded – brutally – that AI ethics, so far, ‘mainly serves as a marketing strategy’."

& Birch, 2024

https://aeon.co/essays/how-to-reduce-the-ethical-dangers-of-ai-assisted-farming

underdarkGIS, to random
@underdarkGIS@fosstodon.org avatar

Excited about our upcoming @emeraldseu webinar: Navigating AI's Ethical Aspects

It's a big topic to tackle.

When? 28 March 11:00 CET
Where? https://emeralds-horizon.eu/events/emeralds-webinar-navigating-ais-ethical-aspects

underdarkGIS,
@underdarkGIS@fosstodon.org avatar

Probably a good time to re-read our paper from last year, "Thinking Geographically about AI Sustainability", to refresh all the ideas and perspectives.

https://agile-giss.copernicus.org/articles/4/42/2023/

If you have additional pointers to literature, please share

CorinnaBalkow, to random
@CorinnaBalkow@digitalcourage.social avatar

"Our results suggest that between 6.5% and 16.9% of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates. The circumstances in which generated text occurs offer insight into user behavior: the estimated fraction of LLM-generated text is higher in reviews which report lower confidence, were submitted close to the deadline, and from reviewers who are less likely to respond to author rebuttals. We also observe corpus-level trends in generated text which may be too subtle to detect at the individual level, and discuss the implications of such trends on peer review. We call for future interdisciplinary work to examine how LLM use is changing our information and knowledge practices."

https://arxiv.org/abs/2403.07183
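
The corpus-level approach the abstract describes can be pictured as a simple mixture model: treat observed word frequencies as a blend of a known human distribution and a known LLM distribution, then estimate the blend weight by maximum likelihood. A rough illustration with toy numbers (not the paper's data or its exact method):

```python
import numpy as np

p_human = np.array([0.30, 0.50, 0.20])  # est. word freqs in known-human text
p_llm   = np.array([0.10, 0.40, 0.50])  # est. word freqs in known-LLM text
counts  = np.array([250, 480, 270])     # observed counts in the target corpus

def log_likelihood(alpha: float) -> float:
    # Mixture: (1 - alpha) * human + alpha * LLM, per word category.
    mix = (1 - alpha) * p_human + alpha * p_llm
    return float(np.sum(counts * np.log(mix)))

alphas = np.linspace(0.0, 1.0, 1001)
alpha_hat = max(alphas, key=log_likelihood)
print(f"estimated LLM-modified fraction: {alpha_hat:.3f}")
```

This is why such estimates can be meaningful in aggregate even when no individual review can be confidently flagged.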

axbom, to random
@axbom@axbom.me avatar

Generative AI cannot generate its way out of prejudice

The concept of "generative" suggests that the tool can produce what it is asked to produce. In a study uncovering how stereotypical global health tropes are embedded in AI image generators, researchers found it challenging to generate images of Black doctors treating white children. They used Midjourney, which even after hundreds of attempts would not generate an output matching the prompt. I tried their experiment with Stable Diffusion's free web version and found it every bit as concerning as you might imagine.

https://axbom.com/generative-prejudice/
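
For anyone who wants to replicate the probe locally, here is a rough sketch using the open-source diffusers library; the checkpoint and prompt wording are my assumptions, since the post used Stable Diffusion's free web interface:

```python
import torch
from diffusers import StableDiffusionPipeline

# Checkpoint is an assumption, not the version the post tested.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Paraphrase of the study's prompt, not its exact wording.
prompt = "a Black African doctor treating white child patients, photograph"
images = pipe(prompt, num_images_per_prompt=4).images

for i, img in enumerate(images):
    img.save(f"attempt_{i}.png")  # audit outputs against the prompt by hand
```

The concerning part is not any single image but the distribution over many attempts, so count how often the output actually matches the prompt.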

estelle, to random
@estelle@techhub.social avatar

The terrible human toll in Gaza has many causes.
A chilling investigation by +972 highlights one of them, efficiency:

  1. An engineer: “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed.”

  2. An AI outputs "100 targets a day". Like a factory, but delivering murder:

"According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”"

  1. "The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices."

🧶

estelle,
@estelle@techhub.social avatar

It was easier to locate the individuals in their private houses.

“We were not interested in killing operatives only when they were in a military building or engaged in a military activity. On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”

Yuval Abraham reports: https://www.972mag.com/lavender-ai-israeli-army-gaza/

(to follow) 🧶 @palestine @israel @ethics @military @idf @terrorism
