molly0xfff, to ArtificialIntelligence
@molly0xfff@hachyderm.io

The "effective altruism" and "effective accelerationism" ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let's try something else.

https://newsletter.mollywhite.net/p/effective-obfuscation

molly0xfff,
@molly0xfff@hachyderm.io

Both effective altruism and effective accelerationism embrace as a given the idea of a super-powerful artificial general intelligence being just around the corner, an assumption that leaves little room for discussion of the many ways that AI is harming real people today.

tomstoneham, to ai
@tomstoneham@dair-community.social

In a training session I ran yesterday, someone suggested that using LLMs to 'improve your writing' might have a levelling-up effect.

I suspect that is a tempting but deeply problematic way of thinking. I wrote down a few thoughts about why.

https://listed.to/@24601/47902/llms-better-writing-and-cognitive-diversity

j2bryson, to random
@j2bryson@mastodon.social

This is outstanding. Ignore the title; it's about

And it's not even that long.

OpenAI and X: Promises of populist technology, shaped by a single man https://wapo.st/47sjowc

keithwilson, to OpenAI
@keithwilson@fediphilosophy.org

😆 In general ‘AI’ is a very poor name for a bunch of technologies that enable computers to do better pattern matching.

Even worse is the growing use of ‘AGI’ to mean better-than-human-level performance (at what task, exactly, is unclear), which isn’t what the phrase is generally understood to mean at all.

The more you look into this whole area, the more you realise there’s a lot of smoke and mirrors: marketing hype dressed up as technological revolution.

https://mastodon.social/@jamesbritt/111466685493641006

HxxxKxxx, to OpenAI
@HxxxKxxx@det.social

This is a big deal for OpenAI and ChatGPT! Now you can really talk to the AI: chat is no longer just text, but a voice interface.
🎧

[video/mp4 attachment]

remixtures, to ai
@remixtures@tldr.nettime.org

: "The emerging field of "AI safety" has attracted public attention and large infusions of capital to support its implied promise: the ability to deploy advanced artificial intelligence (AI) while reducing its gravest risks. Ideas from effective altruism, longtermism, and the study of existential risk are foundational to this new field. In this paper, we contend that overlapping communities interested in these ideas have merged into what we refer to as the broader "AI safety epistemic community," which is sustained through its mutually reinforcing community-building and knowledge production practices. We support this assertion through an analysis of four core sites in this community’s epistemic culture: 1) online community-building through web forums and career advising; 2) AI forecasting; 3) AI safety research; and 4) prize competitions. The dispersal of this epistemic community’s members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI. Understanding the epistemic culture that fuses their moral convictions and knowledge claims is crucial to evaluating these claims, which are gaining influence in critical, rapidly changing debates about the harms of AI and how to mitigate them."

https://drive.google.com/file/d/1HIwKMnQNYme2U4__T-5MvKh9RZ7-RD6x/view

axbom, to random
@axbom@axbom.me

Excellent keynote by @abebab at an event today here in Sweden. Looking at the chat, many of Sweden's tech geeks are surprised by the messages she is conveying.

I truly appreciate that she was given this space and made time to participate.

I recommend reading this profile of her in Wired:

https://www.wired.com/story/abeba-birhane-ai-datasets/

jbzfn, to OpenAI
@jbzfn@mastodon.social

Sir, this is a Wendy's

jbzfn, to ai
@jbzfn@mastodon.social

🫠 Meta disbanded its Responsible AI team
➥ The Verge

「 RAI was created to identify problems with its AI training approaches, including whether the company’s models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms 」

https://www.theverge.com/2023/11/18/23966980/meta-disbanded-responsible-ai-team-artificial-intelligence

tracingcovid, to random
@tracingcovid@mstdn.social

[Replacing human workers with tech such as "A.I." means less accountability. Humans have pesky things like self-interest in upholding their long-term reputations, or consciences, or consciousness: ] Nov 16 UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges | For the largest health insurer in the US, AI's error rate is like a feature, not a bug. (Ars Technica) https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

jonippolito, to ai
@jonippolito@digipres.club

Harvard's metaLab has launched https://aipedagogy.org, a resource chock full of tasty assignments by trailblazers of generative AI in the classroom. (My own "AI Sandwich" is also on the menu.)

I've already stolen Juliana Castro's "Illustrate a Hoax" for my own class!

remixtures, to ai
@remixtures@tldr.nettime.org

: "We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception."

https://arxiv.org/abs/2311.07590

itnewsbot, to machinelearning
@itnewsbot@schleuss.online

Bing Chat is now “Microsoft Copilot” in potentially confusing rebranding move - https://arstechnica.com/?p=1984093

itnewsbot, to machinelearning
@itnewsbot@schleuss.online

YouTube cracks down on synthetic media with AI disclosure requirement - https://arstechnica.com/?p=1983996

itnewsbot, to machinelearning
@itnewsbot@schleuss.online

People think white AI-generated faces are more real than actual photos, study says - [image: eight faces used in the study; four of them are synthetic] - https://arstechnica.com/?p=1983842

NicoleLazzaro, to ai

Use an AI to fish and you eat for a day.
Train an AI to fish and its owner may nab your lifetime earnings. Humans need transparency and consent.

#informedconsent #AI #aiethics #guardrails #AIforgood #artificialintelligence #futuretech #futureofAI #machinelearning #technology #programming #tech #robotics #innovation #business

OmaymaS, to ai
@OmaymaS@dair-community.social

It's remarkable how Yann LeCun reacts to criticism and feedback in this tone. He genuinely believes that Galactica "was murdered by a ravenous Twitter mob". How could this be a reasonable response from a scientist?!

I can draw parallels from this reaction to other areas in life where certain people always try to assert superiority, seek immunity from criticism, and accuse others of being mobs, terrorists, etc.

juergen_hubert, to ai
@juergen_hubert@thefolklore.cafe

'Exemplifying those fears, the venture capital firm Andreessen Horowitz — one of the biggest financial backers of AI — warned in comments to the US Copyright Office (USCO) that new regulation on training data "will significantly disrupt" investment into the technology and the expectations around it, Insider reports.'

I thought the TechBros were all about "disruption"? Or does that only apply when it works out in their favor?


https://futurism.com/the-byte/ai-investors-horrified-paying-copyrighted-work

bwaber, to random
@bwaber@hci.social

It was an easygoing Saturday, which meant I had some time for a short run and lots of talks! (1/11)

bwaber,
@bwaber@hci.social

Next was a fabulous talk by @morganklauss on how we teach computers to see identity, at the University of Michigan. Scheuerman audits gender classification across a number of commercial models, demonstrating significant bias and erasure of non-cis people. Highly recommend https://www.youtube.com/watch?v=aeETasFrnMs (6/11)

deborahh, to ai
@deborahh@mstdn.ca

"Tech companies that have branded themselves “AI first” depend on heavily surveilled gig workers like data labelers, delivery drivers and content moderators. Startups are even hiring people to impersonate AI systems like chatbots, …"


@timnitGebru https://dair-community.social/@timnitGebru/111377575211954146

axbom, to random
@axbom@axbom.me

Explaining responsibility, impact and power in AI

I made this diagram as an explanatory model showing the relationships between the different actors in AI development and use. It gives you something to point at when explaining, for example, who finances the systems, who contributes to them, and who benefits or suffers from them. The blog post explains each grouping in the chart.

You can also download a PDF of the diagram.

https://axbom.com/aipower/

jonippolito, to SelfDrivingCars
@jonippolito@digipres.club

In a recent guest lecture I noted that self-driving cars are still a dicey proposition despite an investment of 30 years and $100bn. Now California regulators have suspended Cruise's operations in San Francisco after one of its robotaxis ran over and pinned a pedestrian who had first been hit by another car. I know Cruise gives its cars cutesy names, but did they really have to name this one "panini"? 😬

https://abcnews.go.com/Business/wireStory/california-regulators-suspend-recently-approved-san-francisoc-robotaxi-104258076

jonippolito, to generativeAI
@jonippolito@digipres.club

Repeated calls by artists for transparency in this NYU/USC symposium ignore the fact that generative AI is opaque by nature. Generative AI exploits the most transparent digital medium of all time ("view source" is literally built into every web browser) to make a technology so opaque that even its creators don't understand how it works.

https://eclive.engelberg.center/episodes/genai-the-creativity-cycle-can-ai-help-everyone-enjoy-culture-as-a-global-public-good

#AIethics #AIimages #GenerativeAI #AIlaw

remixtures, to ai
@remixtures@tldr.nettime.org

: "Companies are unlikely to release details of their latest models for commercial reasons, precluding independent verification and regulation.

Society needs a different approach [1]. That's why we (specialists in AI, generative AI, computer science and psychological and social impacts) have begun to form a set of ‘living guidelines’ for the use of generative AI. These were developed at two summits at the Institute for Advanced Study at the University of Amsterdam in April and June, jointly with members of multinational scientific institutions such as the International Science Council, the University-Based Institutes for Advanced Study and the European Academy of Sciences and Arts. Other partners include global institutions (the United Nations and its cultural organization, UNESCO) and the Patrick J. McGovern Foundation in Boston, Massachusetts, which advises the Global AI Action Alliance of the World Economic Forum (see Supplementary information for co-developers and affiliations). Policy advisers also participated as observers, including representatives from the Organisation for Economic Co-operation and Development (OECD) and the European Commission.

Here, we share a first version of the living guidelines and their principles (see ‘Living guidelines for responsible use of generative AI in research’). These adhere to the Universal Declaration of Human Rights, including the ‘right to science’ (Article 27). They also comply with UNESCO’s Recommendation on the Ethics of AI, and its human-rights-centred approach to ethics, as well as the OECD’s AI Principles."

https://www.nature.com/articles/d41586-023-03266-1?utm_source=Live+Audience&utm_campaign=426c40f62c-briefing-dy-20231023&utm_medium=email&utm_term=0_b27a691814-426c40f62c-49268715
