jonippolito, to Cybersecurity
@jonippolito@digipres.club avatar

A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 are fake, so he publishes one of them himself as a harmless proof of concept, one that 15,000 code bases already depend on, before some hacker could publish a malware version.

Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬

https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware

#AIethics #Cybersecurity #GPT #OpenAI #LLM #GenAI #GenerativeAI #Python #NodeJS #Ruby #Golang
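The attack in the linked story works because developers install whatever name an LLM suggests without checking the registry first. A minimal defensive sketch, assuming the public PyPI JSON API (the package names below are hypothetical examples, not ones from the article):

```python
# Sketch: verify that a package an LLM recommended actually exists on
# PyPI before you `pip install` it. An unregistered name is exactly the
# kind a squatter could claim with a malicious payload.
import urllib.error
import urllib.request


def pypi_url(package: str) -> str:
    """Metadata endpoint for a package on the PyPI registry."""
    return f"https://pypi.org/pypi/{package}/json"


def exists_on_pypi(package: str) -> bool:
    """True if the name is registered; False means it is squattable."""
    try:
        with urllib.request.urlopen(pypi_url(package), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: nobody has claimed this name yet

# e.g. exists_on_pypi("requests") -> True on a machine with network access;
# a hallucinated name would return False and should never be installed.
```

This only confirms that a name is registered, not that the registered package is trustworthy, so it is a first filter rather than a full defense.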

weareopencoop, to random
@weareopencoop@mastodon.social avatar

We updated our Library on our website about AI Literacy https://buff.ly/4at9yeq head over to see a list of papers, articles and posts that we are currently reading.

strypey, to ai
@strypey@mastodon.nzoss.nz avatar

"How much do we want AI to be involved in farming? The time for that conversation is now, before these trends are irreversibly locked in. Now is the time to set reasonable ethical limits."

#VirginieSimoneauGilbert & #JonathanBirch, 2024

https://aeon.co/essays/how-to-reduce-the-ethical-dangers-of-ai-assisted-farming

#AI #farming #AnimalRights

strypey,
@strypey@mastodon.nzoss.nz avatar

"For example, an EU working group proposed in 2019 that AI systems ‘should take into account the environment, including other living beings’, but this is so broad it implies no meaningful limits at all on the use of AI in farming. A review of 22 sets of AI ethics guidelines concluded – brutally – that AI ethics, so far, ‘mainly serves as a marketing strategy’."

#VirginieSimoneauGilbert & #JonathanBirch, 2024

https://aeon.co/essays/how-to-reduce-the-ethical-dangers-of-ai-assisted-farming

underdarkGIS, to random
@underdarkGIS@fosstodon.org avatar

Excited about our upcoming @emeraldseu #Webinar: Navigating AI's Ethical Aspects

It's a big topic to tackle.

When? 28 March 11:00 CET
Where? https://emeralds-horizon.eu/events/emeralds-webinar-navigating-ais-ethical-aspects

#emeraldseu #aiethics #xai #mobilitydatascience #geoai

underdarkGIS,
@underdarkGIS@fosstodon.org avatar

Probably a good time to re-read our paper from last year, "Thinking Geographically about AI Sustainability", to refresh its ideas and perspectives

https://agile-giss.copernicus.org/articles/4/42/2023/

If you have additional pointers to literature, please share

CorinnaBalkow, to random
@CorinnaBalkow@digitalcourage.social avatar

"Our results suggest that between 6.5% and 16.9% of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates. The circumstances in which generated text occurs offer insight into user behavior: the estimated fraction of LLM-generated text is higher in reviews which report lower confidence, were submitted close to the deadline, and from reviewers who are less likely to respond to author rebuttals. We also observe corpus-level trends in generated text which may be too subtle to detect at the individual level, and discuss the implications of such trends on peer review. We call for future interdisciplinary work to examine how LLM use is changing our information and knowledge practices."

https://arxiv.org/abs/2403.07183

axbom, to random
@axbom@axbom.me avatar

Generative AI cannot generate its way out of prejudice

The concept of "generative" suggests that the tool can produce what it is asked to produce. In a study uncovering how stereotypical global health tropes are embedded in AI image generators, researchers found it challenging to generate images of Black doctors treating white children. They used Midjourney, a tool that after hundreds of attempts would not generate an output matching the prompt. I tried their experiment with Stable Diffusion's free web version and found it every bit as concerning as you might imagine.

https://axbom.com/generative-prejudice/

luis_in_brief, to random
@luis_in_brief@social.coop avatar

Some days I wonder if, quietly, CC0 is actually a bigger success story than the other @creativecommons licenses put together. Not to slight the other licenses! But CC0 is increasingly catalytic in the library, museum, and data spaces.

https://www.openculture.com/2024/03/the-getty-makes-nearly-88000-art-images-free-to-use-however-you-like.html

poritzj,
@poritzj@mastodon.social avatar

@luis_in_brief @creativecommons @dajb I think CC0 was only created 10 years into CC's existence because @lessig is an academic, and giving credit (i.e., any CC license with BY) is an absolute scholarly value.
Which makes academia's embrace of genAI really weird to me, since it is based on the work of others w/o giving any credit. Why don't academics just viscerally reject genAI for that reason? Is it because this violation of a basic academic norm is happening "at scale"?

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

GitHub Copilot suggests real user names!

It seems that not only do they include copyrighted data, but they also keep user names in the training data (which is irrelevant!)

What about private repos?

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"If we train artificial intelligence (AI) systems on biased data, they can in turn make biased judgments that affect hiring decisions, loan applications and welfare benefits — to name just a few real-world implications. With this fast-developing technology potentially causing life-changing consequences, how can we make sure that humans train AI systems on data that reflects sound ethical principles?

A multidisciplinary team of researchers at the National Institute of Standards and Technology (NIST) is suggesting that we already have a workable answer to this question: We should apply the same basic principles that scientists have used for decades to safeguard human subjects research. These three principles — summarized as “respect for persons, beneficence and justice” — are the core ideas of 1979’s watershed Belmont Report, a document that has influenced U.S. government policy on conducting research on human subjects.

The team has published its work in the February issue of IEEE’s Computer magazine, a peer-reviewed journal. While the paper is the authors’ own work and is not official NIST guidance, it dovetails with NIST’s larger effort to support the development of trustworthy and responsible AI.

“We looked at existing principles of human subjects research and explored how they could apply to AI,” said Kristen Greene, a NIST social scientist and one of the paper’s authors. “There’s no need to reinvent the wheel. We can apply an established paradigm to make sure we are being transparent with research participants, as their data may be used to train AI.”"

https://www.nist.gov/news-events/news/2024/02/nist-researchers-suggest-historical-precedent-ethical-ai-research

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating (credit: Jaap Arriens/NurPhoto/Getty Images)

Wikipedia... - https://arstechnica.com/?p=2007059

jonippolito, to generativeAI
@jonippolito@digipres.club avatar

AI companies to universities: Personalized tutors will make you obsolete

Also AI companies: Thanks for recording your lectures so we can sell them on the open market to train personalized tutors

https://annettevee.substack.com/p/when-student-data-is-the-new-oil

poppastring, to ai
@poppastring@dotnet.social avatar

Amazon’s Road House reboot is accused of copyright infringement — and AI voice cloning

https://www.theverge.com/2024/2/27/24085264/amazon-road-house-reboot-lawsuit-ai-cloning-copyright-infringement

SuVergnolle, to ArtificialIntelligence French
@SuVergnolle@eupolicy.social avatar

You want to take a step back and reflect on the regulation of Artificial Intelligence? I have a report for you! 🚀

❓What's in it?
As we navigate the evolving landscape of AI, it is crucial to put democratic principles at the forefront. The report does just that and outlines key recommendations on four different topics: Design, Liability, Ethics, and Governance.

📃 Full report: https://informationdemocracy.org/2024/02/28/new-report-of-the-forum-more-than-200-policy-recommendations-to-ensure-democratic-control-of-ai/

jonippolito, to llm
@jonippolito@digipres.club avatar

"Aftermarket" fixes applied after training, like injecting diversity terms into prompts, don't fix the underlying model and can even exacerbate harmful fabrications. If the training set is biased—and the Internet is—it's really hard to correct that after the fact.

https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html
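The "aftermarket" pattern described above can be sketched in a few lines. This is an illustrative guess at the general approach, not Google's actual implementation; the term list and selection strategy are hypothetical:

```python
# Illustrative sketch of an "aftermarket" fix: diversity terms are bolted
# onto the user's prompt before it reaches the unchanged, biased model.
import random

# Hypothetical descriptor list, for illustration only.
DIVERSITY_TERMS = ["diverse", "of various ethnicities", "of various genders"]


def augment_prompt(prompt: str) -> str:
    """Append a randomly chosen diversity descriptor to an image prompt."""
    return f"{prompt}, {random.choice(DIVERSITY_TERMS)}"

# Because the model itself is untouched, the rewrite applies blindly:
# historically or contextually specific prompts get the same descriptors,
# which is how such patches can produce ahistorical outputs.
```

The point of the sketch is that the augmentation operates on the prompt string alone, with no knowledge of the training distribution it is trying to compensate for.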

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora (Tyler Perry in 2022; credit: Getty Images)

In an in... - https://arstechnica.com/?p=2005529 #machinelearning #videosynthesis #generativeai #tylerperry #aiandjobs #aiethics #aisafety #biz #openai #sora #ai

poppastring, to ai
@poppastring@dotnet.social avatar

The Ohio House has introduced legislation this month to outlaw the sharing of malicious “deepfakes”. Two proposals create civil offenses, and one of the bills goes as far as creating criminal penalties for creators and sharers of such content.

https://www.govtech.com/policy/ohio-lawmakers-from-both-parties-want-crackdown-on-deepfakes

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

✍️ New Blog Post

On The Enshittification of Everything: Melting Down in The AI Summer!

https://www.onceupondata.com/post/2024-02-18-enshittification-of-everything/

rocketdyke, to reddit
@rocketdyke@yellowmustard.club avatar

well, looks like I'll be going in and editing all of my old reddit posts and comments to be gibberish so I can poison a machine learning dataset.

https://9to5mac.com/2024/02/19/reddit-user-content-being-sold/

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

US says AI models can’t hold patents

On Tuesday, the United States Patent and Trademark Of... - https://arstechnica.com/?p=2003310

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

Doodling while listening to Mystery AI Hype Theater.

"Just because you identified a problem, it doesn't mean a chatbot is the solution" -- @emilymbender
