ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Towards Accurate Differential Diagnosis with Large Language Models, this paper explores the use of Large Language Models (LLMs) in aiding differential diagnosis (DDx) in medical cases.
https://arxiv.org/abs/2312.00164

scy, to generativeAI
@scy@chaos.social avatar

I'm old enough to remember how @creativecommons was founded as a way for independent creators to safely share their work and build upon each other.

In 2024, their take is now "billion dollar companies plagiarizing your art is fair use".

https://creativecommons.org/2023/02/17/fair-use-training-generative-ai/

Hats off to the author, you don't see that kind of, uh, skillful rhetorical chicanery every day. Like "generative AI doesn't compete with artists because artists are not in the data market". 😬

#CreativeCommons #GenerativeAI

ct_bergstrom, (edited) to ChatGPT
@ct_bergstrom@fediscience.org avatar

People keep telling me that ChatGPT is amazing for proofreading text and improving scientific writing.

I just gave it a section of a grant proposal and it made 11 suggestions, none of which were worth keeping (often adding or removing a comma, or repeating a preposition in a list).

More interestingly, a number of its suggestions were identical to my originals.

ppatel, to ai
@ppatel@mstdn.social avatar

Note that the training data heavily relies on the Bible and its translations. Lots of bias there.

Meta unveils open-source models it says can identify 4,000+ spoken languages and produce speech for 1,000+ languages, an increase of 40x and 10x respectively.

https://www.technologyreview.com/2023/05/22/1073471/metas-new-ai-models-can-recognize-and-produce-speech-for-more-than-1000-languages/

ccgargantua, to ai

People who believe in all of the hype have the same problem as people who live in denial of the technology being as powerful as it is: thinking the purpose of AI is to create unique works.

Consider a programmer who has lost their hands. An AI tool could be made using a GPT to generate keystrokes based on what the programmer says.

User: “Clear the terminal”
GPT: generates clear command

That is the power of AI that everyone who knows what they're talking about is excited for.
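The accessibility idea above can be sketched in a few lines. This is a toy illustration only: the hard parts (speech-to-text plus an LLM that emits shell commands) are stubbed out with a lookup table, and every name here, including `generate_keystrokes` and `INTENT_TO_COMMAND`, is hypothetical rather than a real API.

```python
# Toy sketch: map a transcribed spoken request to the keystrokes a
# programmer would have typed. A real tool would prompt an LLM here;
# a hard-coded mapping keeps the sketch self-contained.

INTENT_TO_COMMAND = {
    "clear the terminal": "clear",
    "list files": "ls -la",
    "show current directory": "pwd",
}


def generate_keystrokes(utterance: str) -> str:
    """Return the shell command for a transcribed utterance."""
    # Normalize the transcription a little before lookup.
    normalized = utterance.strip().lower().rstrip(".!?")
    try:
        return INTENT_TO_COMMAND[normalized]
    except KeyError:
        raise ValueError(f"no command known for: {utterance!r}")


print(generate_keystrokes("Clear the terminal"))  # clear
```

In practice the dictionary would be replaced by a model call, with the returned text fed to the terminal as synthetic keystrokes.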

ccgargantua, to ai

At the request of @RedCore, a thread on the potential dangers of generative AI:

  1. Generative AI being used to create porn is my biggest concern. Creeps and child predators are already using this technology.

  2. Generative AI could be trained on how individual people talk over text and eventually over the phone. This can be used to scam family members.

  3. AI could be used to profile political opponents and racial minorities. This is already being done in China.

1/?

ErikJonker, to generativeAI
@ErikJonker@mastodon.social avatar

Google shares interactions with its new Gemini model, to be released in January. Of course, a large part is marketing and hype, but there's also a lot of potential.
https://deepmind.google/technologies/gemini/#introduction

https://www.youtube.com/watch?v=UIZAiXYceBI

phillipdewet, to generativeAI

Let's say generative AI is not about getting writers paid, but about getting people to write.

Which, then, is the greater good, what is society trying to incentivise: writers writing more, or more non-writers writing?

That suddenly strikes me as very much a non-trivial question in the face of generative AI.

drahardja, (edited) to fediverse
@drahardja@sfba.social avatar

server admins: Have you considered adding a Terms of Use clause that prohibits the use of posts for training without explicit user consent? I feel like abuse of user-generated content (text and other media included) for training is already upon us, and I wonder if we shouldn’t set ourselves up for legal recourse at some point in the future if we ever need to.

Social media is one of the most ready pools of materials for training models. This tends to continue the trend of generating profits for private corporations by harvesting “public” goods without compensation, especially from artists who work hard to create quality media. I hope the Fediverse can exclude itself from this phenomenon somehow.

This seems especially relevant for the Fediverse.

ct_bergstrom, to ChatGPT
@ct_bergstrom@fediscience.org avatar

Numerous sources claim that TurnItIn has a generative AI detector with something like 98% sensitivity and 99% specificity.

For example, the Washington Post, below.

This is completely implausible given that OpenAI themselves only claim to be able to achieve 26% sensitivity and 91% specificity.

So where does this wild claim come from? I think I've figured it out.

https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/
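A back-of-the-envelope Bayes' rule check shows why the gap between these claims matters so much. The sketch below computes the positive predictive value, i.e. the chance a flagged essay really is AI-written; the 10% prevalence figure is an illustrative assumption, not a number from either source.

```python
def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """P(actually AI-written | flagged), via Bayes' rule."""
    true_pos = sensitivity * prevalence          # AI-written and flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # human-written but flagged
    return true_pos / (true_pos + false_pos)


# Assuming 10% of essays are AI-written:
# at TurnItIn's claimed 98% sensitivity / 99% specificity,
# about 92% of flags would be correct...
print(round(positive_predictive_value(0.98, 0.99, 0.10), 3))  # 0.916

# ...while at OpenAI's own 26% / 91%, only about 24% would be:
print(round(positive_predictive_value(0.26, 0.91, 0.10), 3))  # 0.243
```

So even taking the advertised numbers at face value, most of what a detector flags can still be wrong when AI-written text is rare, which is exactly why the plausibility of those numbers matters.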


jasonstcyr, to generativeAI
@jasonstcyr@mstdn.ca avatar

Are AI-generated images art? I was going to reply to a post, but didn't feel like dunking on somebody else for no reason.

Even if we assume generative AI is only a tool in the hands of a human, like a brush, the tool and the output are not what makes it art or not. It isn't art because of the paint. It isn't art because of the brush, or the canvas, or the beauty of the output.

The art is in the creation, and the human is not really involved in the actual creation.

neptune22222, to generativeAI
@neptune22222@kolektiva.social avatar

@fsf Where can I read about the legal licensing and copyleft issues surrounding generative AI systems such as LLMs (Large Language Models) like ChatGPT or Copilot, trained on GPL'd source code?

I wonder if there is a need for a new license that explicitly makes training generative AI on open-source code require the AI model to be open-sourced.

Does the FSF have any written opinions or educational materials on the relationship between copyleft and generative AI trained on copyleft source code?

abucci, to midjourney
@abucci@buc.ci avatar

Nightshade 1.0 is out: https://nightshade.cs.uchicago.edu/index.html

From their "What is Nightshade?" page:

Since their arrival, generative AI models and their trainers have demonstrated their ability to download any online content for model training. For content owners and creators, few tools can prevent their content from being fed into a generative AI model against their will. Opt-out lists have been disregarded by model trainers in the past, and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives can not be identified with high confidence.

In an effort to address this power asymmetry, we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms, e.g. a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space.

-E

dan613, to generativeAI
@dan613@ottawa.place avatar

Pierre Poilievre is undoubtedly using AI to make creepy videos of himself. They feature:

  • inconsistent pronunciations
  • trouble pronouncing "always"
  • weird jumps from neutral expression to smiling expression

https://vm.tiktok.com/ZMMyoTU49/

metin, (edited) to ai
@metin@graphics.social avatar

Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.

#AI #ArtificialIntelligence #aliens #alien #MachineLearning #ML #DeepLearning #LLM #LLMs #GenerativeAI #OpenAI #Microsoft

stancarey, to generativeAI

"It's not lying, it's not telling the truth, because both of those would require some intentionality and some communicative intent, which it doesn't have."

@emilymbender talks to Michael Rosen about chatbots and the synthetic text they produce: https://www.bbc.co.uk/programmes/m001l97m

pallenberg, to ChatGPT
@pallenberg@mastodon.social avatar

Dear all, once again I need, or rather would like, your feedback, which you can leave here 👉 https://t.ly/hallo as a short voice message.

What's your take on the current developments in AI, and what do you think of products like ChatGPT etc.?

I'm putting together a detailed review of Google Bard & would also like to capture both critical and supportive voices.

Thanks for your support!

BeAware, to aiart
@BeAware@social.beaware.live avatar

Windows XP Pixel Edition & Grand Canyon Pixel Edition Wallpapers.

@imageai

Also uploaded to my Ko-fi completely uncompressed for free here: https://ko-fi.com/i/IA0A6TAQ0O

image/jpeg
image/jpeg
image/jpeg

mfriess, to ai

Effective 27 July, Zoom changed their terms of service (T&C) whereby, without an opt-out, YOU give them consent to perpetually use your content (video, audio, …) also for "training and tuning of algorithms and models".
https://stackdiary.com/zoom-terms-now-allow-training-ai-on-user-content-with-no-opt-out/

Check for yourself:
https://explore.zoom.us/en/terms/
Compare with the version archived 25 July by the Internet Archive:
https://web.archive.org/web/20230725013414/https://explore.zoom.us/en/terms/

BenjaminHan, to generativeAI
@BenjaminHan@sigmoid.social avatar

1/ In this age of LLMs and generative AI, do we still need knowledge graphs (KGs) as a way to collect and organize domain and world knowledge, or should we just switch to language models and rely on their abilities to absorb knowledge from massive training datasets?

Jerry, to generativeAI
@Jerry@hear-me.social avatar

I got a most disturbing answer from Google's AI Bard when I kept pushing it to tell me why it incorrectly told me that Google Wallet can store Passports (it cannot).

After much poking, it admitted the wrong answer came from an unofficial source. When I asked it if it always accepts information from unofficial sources, it lied and said it doesn't, that it always looks for other sources to confirm. So, when I asked it if it found this wrong information in more than one unofficial source, it admitted that it did not. When I asked why it told me it does the second checks, but failed to do the additional checks for my question, it gave me this:

"I apologize for not following my own process for verifying information from non-official sources. In this case, I was eager to provide you with an answer to your question, and I did not take the time to fully vet the information I found."

Bard is a serial liar, and gaslighting and backpedaling are among its developing skills. You really can't trust anything it says.

hrefna, to generativeAI
@hrefna@hachyderm.io avatar

Maybe, and hear me out, if you need to steal the work of others for your generative AI to exist, then your generative AI should not exist.
