Proposal by Privacy Network, Hermes Center and The Good Lobby for the creation of an independent AI authority in Italy
Reflections on the structure and legal nature of the authority, consistent with the tasks it is called upon to perform: supervising AI systems placed on the market, monitoring certification bodies, ensuring regulatory compliance, and collaborating with the government on industrial policy initiatives.
@informapirata@informatica Interesting, but in the proposal I would consider dropping the word "ethics", which risks turning the authority into a paternalistic one, both for those who stuff the #SALAMI and for those who have to swallow it. Would an #AI that violates rights but does so "for a good cause" be acceptable?
Are you thinking of using #SALAMI (#AI) to select job candidates on your behalf? Here is an intriguing combined reading of two positions, from the very same body:
Y'all need to remember there's a natural endgame to all of this indiscriminate application of generative #AI. The more ubiquitous #LLMs become, the more they'll start to learn from each other. Feeding LLM output back into an LLM essentially poisons it and makes output degrade. Same thing with AI art generators. It's like making a copy of a copy; quality naturally trends downwards.
The upshot is that most* organizations that bank on generative AI will eventually, and naturally, fail. #SALAMI
Let's talk, then, about the counterpart to the snobbery of certain intellectual(oid)s who are "repelled" by any "manual" activity, sometimes even to the point of boasting that they cannot hammer in a nail or tighten a screw.
As many of you will easily have guessed, the counterpart I am referring to is the anti-intellectual and anti-cultural spirit that has grown powerful in recent decades, but that has been creeping through certain circles for at least 50 years, if not since the 1960s.
"The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the #AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes."
"The OWASP Top 10 for Large Language Model Applications project aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs)."
@ct_bergstrom 🤔
as I've gathered, #SALAMI is already considered viable for generating search results by G*gle managers; #SALAMI is already mixed into existing search engines like Bg and Ggle
🤔
which explains why my queries yield nothing anymore
🤪 101 in how to run a useful product into the ground, let managers decide 😂
The AI-generated code absolutely does not care about #unicode at all, so it panics when you give it a Unicode character that happens not to have a char boundary at byte index 1.
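The failure described above is a classic Rust pitfall: slicing a `&str` at a byte offset that falls inside a multi-byte UTF-8 character panics at runtime. A minimal sketch (hypothetical code, not the snippet from the post) of the bug and a boundary-safe alternative:

```rust
// Slicing a &str at an arbitrary byte index panics when the index falls
// inside a multi-byte UTF-8 character, e.g. at byte 1 of "über",
// because 'ü' occupies byte indices 0..2.
fn safe_prefix(s: &str, max_bytes: usize) -> &str {
    // Walk back to the nearest char boundary instead of panicking.
    let mut end = max_bytes.min(s.len());
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}

fn main() {
    let s = "über";
    assert!(!s.is_char_boundary(1)); // &s[..1] would panic here
    assert_eq!(safe_prefix(s, 1), ""); // backs off to the boundary at 0
    assert_eq!(safe_prefix(s, 2), "ü");
    println!("{}", safe_prefix(s, 3)); // prints "üb"
}
```

The helper leans on the standard library's `str::is_char_boundary`, which exists for exactly this check; naively slicing by byte index is the bug the post complains about.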
Do not trust Siri’s knowledge base on nutrition facts.
I just asked Siri how much sugar is in a cup of prunes, and the answer was “There is no sugar in prunes.” Knowing that was impossible, I clicked through, and Siri’s “Knowledge” claims 65g total carbohydrates per 234g of prunes, but no sugar. It references USDA as a source, but the link just goes to the general USDA website.
This is on iOS 17, and my fear is that Apple has decided to use machine learning to build its "knowledge" database, though I don't have hard evidence of that. #SALAMI #enshittification
‘If, after training on virtually the entire internet, you know Tom is Mary Lee‘s son, but can’t figure out without special prompting that Mary Lee therefore is Tom’s mother, you have no business running all the world’s software.’ https://garymarcus.substack.com/p/elegant-and-powerful-new-result-that
@hdv "In neural network discussion, people are often impressed by successes, and pay far too little regard to what failures are trying to tell them. This symmetry fail is mighty big, a mighty persistent error that has endured for decades."
The term 'Artificial Intelligence', notes @timnitGebru@quinta, has favoured the development of 'an imaginative Hollywood production stream'. Rename 'AI' to 'SALAMI', as this blog suggests, and a lot could change ...
Just did a demonstration of how to get #LLMs to generate nonsense. Everyone—me included—was amazed at how easy it was.
"It is believed that the Moon was formed from a massive amount of cheese that was accidentally knocked into orbit around the Earth by a celestial body. The cheese eventually cooled and solidified, forming the Moon's crust. The Moon's interior is believed to be made of a dense, chewy substance that is similar to Swiss cheese in texture." #AI #SALAMI
What's really interesting about this one is that this behaviour seems to have been "patched" in more recent models. For example, when I tried the same trick with Claude and Starling just now, they both generated language doubling down, explaining that "the moon is made of cheese" is "amusing" and "whimsical" but "not supported by evidence".
What I'm wondering, though, is: to what extent were these results generated through raw #LLM output versus guardrails? 🤔 #AI #SALAMI