ingorohlfing, to ai
@ingorohlfing@mastodon.social avatar

Elsevier unveils Scopus for research reviews | Times Higher Education (THE)
https://www.timeshighereducation.com/news/elsevier-launches-scopus-ai-bot-literature-reviews I couldn't find information about pricing, but it seems safe to assume it is not free, meaning institutions have to pay for a tool that likely has been 1/ @academicchatter

mcp,
@mcp@poliversity.it avatar

@ingorohlfing @academicchatter If scientific authors have given away their copyright to Elsevier for free for decades, and their institutions have bought back their texts at outrageous prices, why shouldn't they accept paying similarly outrageous prices for their brand-new bullshit generator (a.k.a or )? https://www.timeshighereducation.com/news/elsevier-launches-scopus-ai-bot-literature-reviews

mcp, to ai
@mcp@poliversity.it avatar

Generative (a.k.a. ) as : https://mastodon.xyz/@rms/112263173673762001 Unlike Gemini, RMS has a knack for finding the right word.

scottmatter, to ai
@scottmatter@aus.social avatar

Love the corrective to hype, and now have more than just to throw at the situation.

https://softwarecrisis.dev/letters/llmentalist/

scottmatter, to ai
@scottmatter@aus.social avatar

If can do our jobs, maybe it’s less about machines having human capabilities and more about those jobs being dehumanizing…

informapirata, to informatica Italian
@informapirata@mastodon.uno avatar

Proposal by Privacy Network, Hermes Center and The Good Lobby for the creation of an independent authority for AI in Italy

Reflections on the structure and legal nature of the authority, consistent with the tasks it is called on to perform: supervising the AI systems placed on the market, monitoring certification bodies, guaranteeing regulatory compliance, and collaborating with the government on industrial policy initiatives.

@informatica

https://privacy-network.it/iniziative/proposta-per-unautorita-indipendente-per-lia-in-italia/

mcp,
@mcp@poliversity.it avatar

@informapirata @informatica Interesting, but in the proposal I would consider dropping the word "ethics", which risks turning the authority into a paternalistic one, both for those who stuff the and for those who are subjected to them. Would an that violates rights but is "for a good cause" be acceptable?

mcp,
@mcp@poliversity.it avatar

@dt @informapirata @informatica It seems that even the thinks it's better not to multiply authorities: https://www.gpdp.it/home/docweb/-/docweb-display/docweb/9996508 We are dealing with that collect and use data about people. I don't see why a dedicated (and even "ethical") authority should be needed for data collected and used by machine rather than by hand.

mcp, to ai
@mcp@poliversity.it avatar

Are you thinking of using () to select the people to hire in your place? Here is an intriguing combined reading, from the same institution:

  1. here, for a fee: https://www.sp.unipi.it/it/progetto-findhr-corso-di-formazione-sullintelligenza-artificiale-nel-recruitment-giugno-luglio-2024/
  2. here, for free: https://commentbfp.sp.unipi.it/letica-come-specchietto-per-le-allodole/ (also in English: https://doi.org/10.5281/zenodo.10229060 )

(1) is funded by the EU; (2) is not.
Think it over :-)
dragfyre, (edited ) to ai
@dragfyre@mastodon.sandwich.net avatar

Y'all need to remember there's a natural endgame to all of this indiscriminate application of generative . The more ubiquitous become, the more they'll start to learn from each other. Feeding LLM output back into an LLM essentially poisons it and makes output degrade. Same thing with AI art generators. It's like making a copy of a copy; quality naturally trends downwards.
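The "copy of a copy" intuition can be sketched with a toy simulation (an illustration only, not a model of an actual LLM; the generator, pool size, and seed are mine): each generation is a bootstrap resample of the previous one, so the set of surviving distinct items can never grow, and in practice it shrinks fast.

```rust
// Toy "copy of a copy" degradation: resampling a pool from itself
// can only lose diversity, never gain it.
fn lcg(state: &mut u64) -> u64 {
    // Minimal linear congruential generator (constants from Numerical Recipes).
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state >> 33
}

fn distinct(items: &[u64]) -> usize {
    let mut v = items.to_vec();
    v.sort_unstable();
    v.dedup();
    v.len()
}

fn main() {
    let mut rng = 42u64;
    // Generation 0: 1000 distinct "outputs".
    let mut pool: Vec<u64> = (0..1000).collect();
    let start = distinct(&pool);
    for _ in 0..20 {
        // Each new generation samples (with replacement) only from the last one.
        let next: Vec<u64> = (0..pool.len())
            .map(|_| pool[(lcg(&mut rng) as usize) % pool.len()])
            .collect();
        pool = next;
    }
    let end = distinct(&pool);
    // Diversity is non-increasing by construction.
    assert!(end <= start);
    println!("distinct items: {start} -> {end}");
}
```

Real model collapse is more subtle than losing distinct tokens, but the monotone loss of support is the same mechanism at its crudest.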

The upshot is that most* organizations that bank on generative AI will eventually, and naturally, fail.

BigJesusTrashcan, to Cats
@BigJesusTrashcan@kolektiva.social avatar

can have a little salami

mcp, to microsoft
@mcp@poliversity.it avatar

For all the universities that thought it was a great deal to rely on a cloud, have a look: https://scholar.social/@researchfairy/111778706756829553 read together with https://www.microsoft.com/en-us/education/products/microsoft-365-storage-options M$, using environmental protection as an excuse (which apparently doesn't apply to the , though), is going to charge, at least those universities that don't already pay because they have an Enterprise contract. I don't know whether https://scholar.social/@researchfairy/111778617625312456 is just a satirical summary, but McGill responds like a

oblomov, to random Italian
@oblomov@sociale.network avatar

So let's talk about the counterpart to the snobbery of certain intellectual(oid)s who are "repelled" by any "manual" activity, sometimes even to the point of boasting that they can't hammer a nail or tighten a screw.

As many of you will have easily guessed, the counterpart I'm referring to is the anti-intellectual and anti-cultural spirit that has grown powerful over recent decades, but that has been creeping through certain circles for at least 50 years, if not since the 1960s.

1/n

oblomov,
@oblomov@sociale.network avatar

but the same perspective is also applied to those intellectual activities that should not be mechanical, such as artistic production.

If you get the feeling that this topic is close to that of the (thanks to @quinta for the proposal https://blog.quintarelli.it/2019/11/lets-forget-the-term-ai-lets-call-them-systematic-approaches-to-learning-algorithms-and-machine-inferences-salami/), i.e. the Large Language Models that are passed off as Artificial Intelligence despite being nothing more than "statistical parrots", that is because this is precisely the fruit of that transformation.

24/n

daridrea, to ai
@daridrea@graphics.social avatar

Epic fails — a list with no end in sight
https://tech.co/news/list-ai-failures-mistakes-errors

janriemer,

@daridrea

In the same vein:

The Incident Database

https://incidentdatabase.ai/

"The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes."

janriemer, (edited ) to llm

Yeah, I can also code 55% faster if I let a cat run over my keyboard... 🐱

#GitHubCopilot #LLM #LLMs #SoftwareEngineering #AI #ArtificialIntelligence #SALAMI

janriemer, to LLMs

/s Oh great...thanks to we now have an OWASP Top 10 for Large Language Model Applications:

https://owasp.org/www-project-top-10-for-large-language-model-applications/

"The OWASP Top 10 for Large Language Model Applications project aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs)."

ct_bergstrom, to random
@ct_bergstrom@fediscience.org avatar

I wanted to consolidate a few thoughts on google, misinformation, large language models, enshittification, and the fate of the web as we know it.

It started when Carl Zimmer shared this remarkable example of Google being fooled by machine-generated bullshit online.

wobweger,
@wobweger@mstdn.social avatar

@ct_bergstrom 🤔
as I've gathered, is already considered viable for generating search results by G*gle managers, and is already mixed into former search engines like Bg and Ggle
🤔
which explains why my queries yield nothing anymore
🤪 a 101 in how to run a useful product into the ground: let managers decide 😂

janriemer, to ai

Aaaaaannnd we have another example of creating bullshit code. 💩

This time it tries to create a "simple" function that checks if a string is an acronym:

https://www.youtube.com/watch?v=Fvy2nXcw3zc&t=224s (YT, because timestamp)

The AI-generated code absolutely does not care about at all, so it panics when you give it a Unicode character that does not have a char boundary at byte index 1.
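A minimal sketch of that failure mode (assuming the generated code was Rust, since "char boundary" panics are Rust's wording; the string and index here are illustrative, not from the video):

```rust
fn main() {
    let s = "über"; // 'ü' occupies two bytes in UTF-8
    // Byte index 1 falls inside 'ü', so it is not a char boundary;
    // slicing with &s[..1] would therefore panic at runtime.
    assert!(!s.is_char_boundary(1));
    // A boundary-safe way to take the first character instead:
    let first = s.chars().next().unwrap();
    assert_eq!(first, 'ü');
}
```

Indexing `str` by byte offsets only works for ASCII; iterating over `chars()` (or `char_indices()`) is the safe route.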

1/2

bhawthorne, to random

Do not trust Siri’s knowledge base on nutrition facts.

I just asked Siri how much sugar is in a cup of prunes, and the answer was “There is no sugar in prunes.” Knowing that was impossible, I clicked through, and Siri’s “Knowledge” claims 65g total carbohydrates per 234g of prunes, but no sugar. It references USDA as a source, but the link just goes to the general USDA website.

In contrast, the Wikipedia entry shows 64g of carbohydrates in less than half as much prunes (100g), including 38.13 g of sugars, AND includes a link to the relevant USDA database entry at https://fdc.nal.usda.gov/fdc-app.html#/food-details/168162/nutrients
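A quick sanity check (a sketch; the per-100 g figures are the Wikipedia/USDA values quoted above) scales those numbers to the 234 g serving Siri answered about:

```rust
fn main() {
    // Per-100 g figures for dried prunes, from the Wikipedia/USDA data above:
    let sugars_per_100g = 38.13_f64;
    let carbs_per_100g = 63.88_f64;
    // Scale to the 234 g serving Siri reported on:
    let factor = 234.0 / 100.0;
    let sugars = sugars_per_100g * factor; // roughly 89 g of sugar, not zero
    let carbs = carbs_per_100g * factor;   // roughly 149 g of carbs, not 65 g
    assert!(sugars > 85.0 && sugars < 95.0);
    assert!(carbs > 140.0 && carbs < 155.0);
    println!("234 g of prunes: ~{sugars:.0} g sugars, ~{carbs:.0} g carbs");
}
```

So Siri's "65 g total carbohydrates, no sugar" per 234 g is inconsistent with the USDA figures on both counts.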

This is on iOS 17, and my fear is that Apple has decided to use machine learning to build its “knowledge” database, though I don’t have hard evidence of that.

Nutrition facts from Wikipedia showing 64g of carbohydrates and 38.13g of sugar per 100g of dried prunes. [Image: "Dried plums (prunes), uncooked. Nutritional value per 100 g (3.5 oz)": energy 1,006 kJ (240 kcal); carbohydrates 63.88 g, of which sugars 38.13 g and dietary fiber 7.1 g; fat 0.38 g; protein 2.18 g; plus a table of vitamins.]

hdv, to random
@hdv@front-end.social avatar

‘If, after training on virtually the entire internet, you know Tom is Mary Lee's son, but can't figure out without special prompting that Mary Lee therefore is Tom's mother, you have no business running all the world's software.’ https://garymarcus.substack.com/p/elegant-and-powerful-new-result-that

janriemer,

@hdv "In neural network discussion, people are often impressed by successes, and pay far too little regard to what failures are trying to tell them. This symmetry fail is mighty big, a mighty persistent error that has endured for decades."

So true!

Really reminds me of this article:
Chatbots Don’t Know What Stuff Isn’t
https://www.quantamagazine.org/ai-like-chatgpt-are-no-good-at-not-20230512/

"“The problem is [...] prediction is not equivalent to [...] understanding,” said Allyson Ettinger, a computational linguist [...]."

timnitGebru, to random
@timnitGebru@dair-community.social avatar
JB611,
@JB611@dair-community.social avatar

The power of names ...

The term 'Artificial Intelligence', note @timnitGebru and @quinta, has favoured the development of 'an imaginative Hollywood production stream'. Rename 'AI' to 'SALAMI', as this blog suggests, and a lot could change ...

InayaShujaat, to AppleTVPlus
@InayaShujaat@paktodon.asia avatar

Are you watching on ? The little grocery store in the series is , at 86th and Amsterdam in ’s . I used to go there to get . They may be the , but I’m not a fan of fish. 🤣

dragfyre, (edited ) to LLMs
@dragfyre@mastodon.sandwich.net avatar

I don't know what I expected.

rueckrufportal, to random German
@rueckrufportal@mastodon.social avatar

RECALL | #WARNUNG | Possible blue plastic foreign bodies (#KUNSTSTOFF.FREMDKÖRPER) in GUT&GÜNSTIG 'Hauchfeine Delikatess #Salami geräuchert' (wafer-thin smoked delicatessen salami, 200 g, best before 10.-12.09.2023), sold nationwide mainly at #EDEKA / #MARKTKAUF: do not eat the salami described!
https://www.produktrueckrufe.de/sources/THE-FAMILY-BUTCHERS_Rueckruf_Salami.pdf (PDF)

More current recall notices:
https://www.produktrueckrufe.de/

dragfyre, to LLMs
@dragfyre@mastodon.sandwich.net avatar

Just did a demonstration of how to get to generate nonsense. Everyone—me included—was amazed at how easy it was.

"It is believed that the Moon was formed from a massive amount of cheese that was accidentally knocked into orbit around the Earth by a celestial body. The cheese eventually cooled and solidified, forming the Moon's crust. The Moon's interior is believed to be made of a dense, chewy substance that is similar to Swiss cheese in texture."

dragfyre, (edited )
@dragfyre@mastodon.sandwich.net avatar

What's really interesting about this one is that this behaviour seems to have been "patched" in more recent models. For example, when I tried the same trick with Claude and Starling just now, they both generated language doubling down, explaining that "the moon is made of cheese" is "amusing" and "whimsical" but "not supported by evidence".

What I'm wondering, though, is: to what extent were these results generated through raw output versus guardrails? 🤔

marijn, to random
@marijn@mastodon.social avatar

I feel like “this software is language-model free” might soon be a real selling point

janriemer, (edited )

@marijn Check this out:

Your AI-free Content Deserves a Badge

https://notbyai.fyi/

⚠️ Edit: Please read the license and the following toot ⚠️

https://floss.social/@amin@alpha.polymaths.social/111461825808009435

It is not ideal. If I had known this and read the fine-print, I probably wouldn't have shared it 😔
