cigitalgem, to llm

Let's do a TOP TEN LLM Risks list

7: Reproducibility economics

Get the full paper here https://berryvilleiml.com/results/

cigitalgem, to llm

Let's do a TOP TEN LLM Risks list

8: Data ownership

Get the full paper here https://berryvilleiml.com/results/

cigitalgem, (edited) to random

Let's do a TOP TEN LLM Risks list

9: Model Trustworthiness

Get the full paper here https://berryvilleiml.com/results/

cigitalgem, (edited) to random

Let's do a TOP TEN LLM Risks list

10: Encoding Integrity

https://berryvilleiml.com/results/BIML-LLM24.pdf

cigitalgem, to ML

BIML announces new work: an LLM risk analysis.

81 risks identified and discussed. The only answer is regulation.

https://berryvilleiml.com/results/

(direct link to the paper, behind a registration wall: https://berryvilleiml.com/results/BIML-LLM24.pdf)

noplasticshower, to random

Big news tomorrow from BIML on the LLM front.

cigitalgem, to ML

What does BIML think of the new NIST report on AI attacks??

Mostly harmless.

See: https://berryvilleiml.com/2024/01/23/another-round-of-adversarial-machine-learning-from-nist/

cigitalgem, to random

Find, Threat Model, Control. Three critical aspects of ML security.

This story features Legit Security, and mentions Irius Risk and Calypso AI. Am I proud? Yes indeedy.

https://www.darkreading.com/application-security/first-step-in-ai-ml-security-is-finding-them

cigitalgem, to random

There is a downside to "poisoning" your images with Nightshade. In my view, this contributes to information pollution on the net.

https://www.theregister.com/2024/01/20/nightshade_ai_images/

cigitalgem, (edited) to ML

Today we worked on comments (some were toughies) from 8 readers/reviewers of our LLM architectural risk analysis (ARA) draft. BIML plans to release this work on 1.24.24.

#MLsec #ML #AI #threatmodeling #ARA

But not #AdversarialAI

cigitalgem, to llm

Gave a talk yesterday in Rio to a distinguished group of philosophers of mind from all over the world.

The brain trust included a majority of Canadians.

cigitalgem, to random

You have probably heard of recursive pollution.

This article describes a pollution fountain.

https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/
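
For anyone new to the term: recursive pollution is the feedback loop in which model-generated text ends up in the training data of later models. A toy simulation makes the dynamic concrete; everything below (the token names, the corpus size, the frequency-counting "training") is an illustrative assumption, not BIML's analysis:

# Toy simulation of recursive pollution: each model "generation" is
# trained only on the previous generation's synthetic output.
import random
from collections import Counter

random.seed(0)

def train(corpus):
    # "Training" is reduced to estimating token frequencies.
    return Counter(corpus)

def generate(model, n):
    tokens, weights = zip(*model.items())
    return random.choices(tokens, weights=weights, k=n)

# Start from a diverse "human-written" corpus: 50 distinct tokens.
corpus = [f"tok{i}" for i in range(50)] * 4

for gen in range(1, 21):
    model = train(corpus)
    corpus = generate(model, len(corpus))  # output becomes the next training set
    if gen % 5 == 0:
        print(f"generation {gen:2d}: {len(set(corpus))} distinct tokens survive")

Once a token drops out of a generation it can never come back (its weight is zero forever), so diversity only ever shrinks; a pollution fountain pumps volume into that one-way loop.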

cigitalgem, to ML

Fake legal citations from an LLM? Why not?

LLMs and other generative models are wrong all the time...and confidently so. The anthropomorphic term "hallucinate" is bullshit and needs to go.

https://www.nytimes.com/2023/12/29/nyregion/michael-cohen-ai-fake-cases.html?unlocked_article_code=1.Jk0.tTQP.ybdqAxp6RSIW&smid=url-share

cigitalgem, to ML

It's not just authors anymore. The NY Times sues OpenAI and Microsoft over ML copyright issues.

ML systems leak training data consistently.

https://www.wsj.com/tech/ai/new-york-times-sues-microsoft-and-openai-alleging-copyright-infringement-fd85e1c4?mod=mhp
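
The leakage claim is testable in a crude way. A minimal sketch, assuming plain-text access to both a training corpus and a model's output (the function name and the 8-token window are my assumptions; real extraction attacks are far more sophisticated than exact n-gram overlap):

def leaked_ngrams(training_text, model_output, n=8):
    # Flag any n-token span of the model's output that appears verbatim
    # in the training text. Exact overlap only; paraphrase is not caught.
    train_tokens = training_text.split()
    train_grams = {tuple(train_tokens[i:i + n])
                   for i in range(len(train_tokens) - n + 1)}
    out_tokens = model_output.split()
    return [" ".join(out_tokens[i:i + n])
            for i in range(len(out_tokens) - n + 1)
            if tuple(out_tokens[i:i + n]) in train_grams]

# Any hit means the model reproduced an 8-token training span verbatim.
print(leaked_ngrams(
    "the quick brown fox jumps over the lazy dog every single morning",
    "witnesses say the quick brown fox jumps over the lazy dog at dawn"))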

cigitalgem,

"Their AI tools divert traffic that would otherwise go to the Times’ web properties, depriving the company of advertising, licensing and subscription revenue, the suit said."

Wonder why search engines are now just "telling you the answer" instead of giving you a URL? Attention capture meets surveillance capitalism.

cigitalgem,

In my view, the fiction authors have a much weaker case WRT fair use. When someone is looking up a news story to find out what happened, they don't care about the reporter...they care about the institution. So attention hijacking is an obvious threat.

https://www.nytimes.com/2023/09/20/books/authors-openai-lawsuit-chatgpt-copyright.html

cigitalgem, to random

ML systems do not "know" anything. They simply predict the next token. And training token gathering may have stopped in 2021. Time is frozen in many generative ML systems.

https://gamepad.club/@jonshute/111635193527930925
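
To make "simply predict the next token" concrete, here is a minimal sketch; the vocabulary and the random stand-in logits are illustrative assumptions, not any real model:

import math
import random

random.seed(0)
vocab = ["the", "cat", "sat", "on", "a", "mat", "."]

def softmax(logits):
    # Turn raw scores into a probability distribution over the vocabulary.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(context):
    # A real LLM computes logits from the context with a huge neural net;
    # random scores stand in here. The model only ranks and samples
    # candidate tokens; there is no "knowing" anywhere in the loop.
    logits = [random.gauss(0.0, 1.0) for _ in vocab]
    return random.choices(vocab, weights=softmax(logits), k=1)[0]

context = ["the"]
for _ in range(6):
    context.append(next_token(context))
print(" ".join(context))

Generation is that loop, nothing more; and if training data gathering stopped in 2021, the distribution being sampled stops there too.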

cigitalgem, to random

ML models are certainly biased in many well-known examples. But the bias is racism, sexism, and xenophobia, not whatever these "conservatives" believe.

https://www.washingtonpost.com/technology/2023/12/23/grok-ai-elon-musk-x-woke-bias/

cigitalgem, to random

This kind of visibility into what's in a training set (an ocean of data) is a good trend. Let's regulate LLM foundation models.

https://m.slashdot.org/story/23/12/23/0142225/ai-companies-would-be-required-to-disclose-copyrighted-training-data-under-new-bill
