parismarx, to tech
@parismarx@mastodon.online avatar

Effective altruists and longtermists have infiltrated the UK’s AI policy discussions, leaving the real harms unaddressed while the focus is placed on “existential risks” fueled by the obsessions of rich tech funders.

https://www.politico.eu/article/rishi-sunak-artificial-intelligence-pivot-safety-summit-united-kingdom-silicon-valley-effective-altruism/

MisuseCase, to ArtificialIntelligence
@MisuseCase@twit.social avatar

It’s not really useful to talk about robot overlords when discussing the dangers of AI, because that’s not what we have to worry about, at least not for a while yet.

It is much more useful to talk about what happens when systems codified by law and regulation squeeze people into destitution and death. See also “social murder.”

Cc: @carnage4life

/1

csullivan, to ai
@csullivan@flipboard.social avatar

AI has long played a critical role @Flipboard. Today we've updated our Community Guidelines to give publishers guidance on using artificial intelligence in content creation, and to protect the quality of our product.

https://about.flipboard.com/inside-flipboard/ai-has-long-played-a-responsible-role-at-flipboard/

pitchaya, to ArtificialIntelligence
@pitchaya@mastodon.social avatar

Really, folks need to stop using the misleading term “artificial intelligence” and start calling it what it actually is: pattern-guessing algorithms. And this not-very-intelligent technology is also not a smart, sustainable move on our already overburdened planet. “It doesn’t make sense to burn a forest and then use AI to track deforestation.”

https://www.theguardian.com/technology/2023/aug/01/techscape-environment-cost-ai-artificial-intelligence

BeAware, to Halloween
@BeAware@social.beaware.live avatar

A few more #halloween wallpapers for those that enjoy the #pixelart style!

Uploaded to my Ko-fi as well, for free. Including one not pictured here due to Mastodon’s four-image limit. https://ko-fi.com/i/IB0B6OW1PW

#Wallpaper #AIart #AIhorde #stablediffusionart #artificialintelligence #mastoart #fediart #art #artist #generativeart #generativeAI #stablediffusion #spooky


davidaugust, to workersrights
@davidaugust@mastodon.online avatar

AMPTP is seeking to form a union, Carol Lombardini announced today.

Imagine what I could make with more than what my phone and laptop can do.


ChrisMayLA6, to scifi
@ChrisMayLA6@zirk.us avatar

@tomgauld - between & , something has been lost in translation.

@bookstodon


metin, to random

Good longread about AI:

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

"[...] as companies like Coca-Cola start making huge investments to use generative AI to sell more products, it’s becoming all too clear that this new tech will be used in the same ways as the last generation of digital tools: that what begins with lofty promises about spreading freedom and democracy ends up micro targeting ads at us so that we buy more useless, carbon-spewing stuff."

robsonfletcher, to microsoft
@robsonfletcher@mas.to avatar

Microsoft has removed an article that advised tourists to visit the "beautiful" Ottawa Food Bank on an empty stomach, after facing ridicule about the company's reliance on artificial intelligence for news.

https://www.cbc.ca/news/canada/ottawa/artificial-intelligence-microsoft-travel-ottawa-food-bank-1.6940356

janriemer, to ai

Aaaaaannnd we have another example of AI creating bullshit code. 💩

This time it tries to create a "simple" function that checks if a string is an acronym:

https://www.youtube.com/watch?v=Fvy2nXcw3zc&t=224s (YT, because timestamp)

The AI-generated code absolutely does not care about Unicode at all, so it panics when you give it a character that doesn’t have a char boundary at byte index 1.
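A minimal Rust sketch of that failure mode (hypothetical function names, not the exact code from the video): slicing a `&str` by byte index panics on multi-byte characters, while iterating over `chars()` does not.

```rust
/// Naive check in the style described above: builds the acronym by
/// slicing the first *byte* of each word. `&w[0..1]` panics whenever
/// a word starts with a multi-byte character, because byte index 1
/// is then not a char boundary.
fn is_acronym_naive(words: &[&str], s: &str) -> bool {
    let initials: String = words.iter().map(|w| &w[0..1]).collect();
    initials.eq_ignore_ascii_case(s)
}

/// Unicode-aware version: take the first `char` of each word
/// instead of the first byte.
fn is_acronym(words: &[&str], s: &str) -> bool {
    let initials: String = words.iter().filter_map(|w| w.chars().next()).collect();
    initials.to_lowercase() == s.to_lowercase()
}

fn main() {
    // ASCII input: both versions agree.
    assert!(is_acronym_naive(&["as", "soon", "as", "possible"], "asap"));
    assert!(is_acronym(&["as", "soon", "as", "possible"], "asap"));

    // Non-ASCII input: 'é' is two bytes in UTF-8, so the naive
    // version would panic here ("byte index 1 is not a char boundary").
    assert!(is_acronym(&["état", "major"], "ém"));
}
```

The fix is the usual one: treat strings as sequences of `char`s (or grapheme clusters), never as byte arrays, when the input isn’t guaranteed to be ASCII.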

1/2

jwwr, to ai
@jwwr@aus.social avatar

The robot revolution has begun. Bret and Jemaine were right.

ChrisMayLA6, to ArtificialIntelligence
@ChrisMayLA6@zirk.us avatar

Musk on AI:

There ‘will come a time where no job is needed’; AI is the ‘most disruptive force in history’. In the future, ‘you can have a job if you want a job . . . but AI will be able to do everything’.

As so often, Musk has a hard time imagining jobs that are not mechanical... here is a man who must be sociopathic in his absence of any consideration of the value & necessity of human contact in so many socio-economic activities.

A sad man!

ChrisMayLA6, to ArtificialIntelligence
@ChrisMayLA6@zirk.us avatar

Katherine Maher (ex Wikimedia) sums up the problem with AI (at the other place):

“AI that benefits all humanity” without a single woman or person of color on the OpenAI board. Where have I heard this story before?

ChrisMayLA6, to ArtificialIntelligence
@ChrisMayLA6@zirk.us avatar

There are many things to worry about in the accelerating ascendance of AI, but its energy (and cooling) requirements have so far been treated as more of a technical concern... now, as John Naughton points out, we need to be clear about quite how much energy & resources the new generations of AI are likely to consume.

If you thought this was all 'virtual', think again, there is a very real material element to the AI economy... one we may not want to 'afford'!

https://www.theguardian.com/commentisfree/2024/mar/02/ais-craving-for-data-is-matched-only-by-a-runaway-thirst-for-water-and-energy

ColinTheMathmo, (edited ) to random
@ColinTheMathmo@mathstodon.xyz avatar

Asking on behalf of a friend ...

Is there anyone reading this who could give a talk on "Math(s) and Artificial Intelligence"?

It would need to be aimed at a general audience, so while the material itself doesn't need to be deep, the person giving the talk would need to have some first-hand experience of the actual math(s) that's involved.

Anyone?

If you're comfortable doing so, please boost for reach ... Mastodon-the-platform relies on networking effects.

Many thanks.


quigs, (edited ) to ai
@quigs@hachyderm.io avatar

Reuven Lerner: “I teach courses in Python and Pandas. Never mind that the first is a programming language and the second is a library for data analysis in Python. Meta’s AI system […] assumed that I was talking about the animals (not the technology), and banned me. The appeal that I asked for wasn’t reviewed by a human, but was reviewed by another bot, which (not surprisingly) made a similar assessment.”
https://lerner.co.il/2023/10/19/im-banned-for-life-from-advertising-on-meta-because-i-teach-python/

christianschwaegerl, (edited ) to ArtificialIntelligence
@christianschwaegerl@mastodon.social avatar

Why is it that natural forms of intelligence (advanced forms of problem-solving in metabolisms, organisms, ecosystems) developed over billions of years hardly gain any attention, while so many people are enthralled by digital faux intelligence pushed by shady characters and companies with control and money-making goals? It's probably one of the most narrow-minded fascinations in the world, especially as we destroy naturally intelligent systems on a daily basis.
#artificialintelligence #nature

freepeoplesfreepress, to ai

ARTIFICIAL INTELLIGENCE

We need to focus on the AI harms that already exist.

Fears about potential future existential risk are blinding us to the fact AI systems are already hurting people here and now.

Source MIT Technology Review

October 30, 2023

This is an excerpt from Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini, published on October 31 by Random House. It has been lightly edited.

The term “x-risk” is used as a shorthand for the hypothetical existential risk posed by AI. While my research supports the idea that AI systems should not be integrated into weapons systems because of the lethal dangers, this isn’t because I believe AI systems by themselves pose an existential risk as super intelligent agents.

AI systems falsely classifying individuals as criminal suspects, robots being used for policing, and self-driving cars with faulty pedestrian tracking systems can already put your life in danger. Sadly, we do not need AI systems to have superintelligence for them to have fatal outcomes for individual lives. Existing AI systems that cause demonstrated harms are more dangerous than hypothetical “sentient” AI systems because they are real.

One problem with minimizing existing AI harms by saying hypothetical existential harms are more important is that it shifts the flow of valuable resources and legislative attention. Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity.

I am not opposed to preventing the creation of fatal AI systems. Governments concerned with lethal use of AI can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.

Though it is tempting to view physical violence as the ultimate harm, doing so makes it easy to forget pernicious ways our societies perpetuate structural violence. The Norwegian sociologist Johan Galtung coined this term to describe how institutions and social structures prevent people from meeting their fundamental needs and thus cause harm. Denial of access to health care, housing, and employment through the use of AI perpetuates individual harms and generational scars. AI systems can kill us slowly.

Given what my “Gender Shades” research revealed about algorithmic bias from some of the leading tech companies in the world, my concern is about the immediate problems and emerging vulnerabilities with AI and whether we could address them in ways that would also help create a future where the burdens of AI did not fall disproportionately on the marginalized and vulnerable. AI systems with subpar intelligence that lead to false arrests or wrong diagnoses need to be addressed now.

When I think of x-risk, I think of the people being harmed now and those who are at risk of harm from AI systems. I think about the risk and reality of being “excoded.” You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant. You can be excoded when you are denied a loan based on algorithmic decision-making. You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems. You can be excoded when a tenant-screening algorithm denies you access to housing. All of these examples are real. No one is immune from being excoded, and those already marginalized are at greater risk.

This is why my research cannot be confined just to industry insiders, AI researchers, or even well-meaning influencers. Yes, academic conferences are important venues. For many academics, presenting published papers is the capstone of a specific research exploration. For me, presenting “Gender Shades” at New York University was a launching pad. I felt motivated to put my research into action—beyond talking shop with AI practitioners, beyond the academic presentations, beyond private dinners. Reaching academics and industry insiders is simply not enough. We need to make sure everyday people at risk of experiencing AI harms are part of the fight for algorithmic justice.

@freepresswithoutborders

https://www.technologyreview.com/2023/10/30/1082656/focus-on-existing-ai-harms/

ChrisMayLA6, to ArtificialIntelligence
@ChrisMayLA6@zirk.us avatar

Tom Gauld channels Bergman's Seventh Seal, but don't forget at the end Death takes the Knight (after a last supper with his family)...

So, AI will inevitably meet its end (its encounter with Death)... but perhaps only after vanquishing us all?

RTP, to news
@RTP@fosstodon.org avatar

Don't Submit Personal Calls / Data / Biometrics To AI Training (Empowering Abuse Potential, Disempowering End Users) For Video Calls

AI Training Can Also Be A Fancy Way Of Saying "Putting You Under Surveillance"

Instead, Use Signal / Session / XMPP + PGP / OMEMO / Jitsi

https://stackdiary.com/zoom-terms-now-allow-training-ai-on-user-content-with-no-opt-out/

enmodo, (edited ) to ai
@enmodo@mastodon.social avatar

Be afraid, be very afraid. Palantir is the last company you want leading AI because they will be the company that neo-fascists hire to spy on your every message sent, every dollar spent, and every movement made.

https://www.fool.com/investing/2023/11/03/palantir-is-quietly-emerging-as-a-leader-in-ai/

Edit: added the image, seemed appropriate
