CharlieMcHenry, to opensource
@CharlieMcHenry@connectop.us avatar

Google “We Have No Moat, and Neither Does OpenAI” - Google insider admits that Open Source is already far ahead of anything Google, Microsoft or any other private corp is working on. Not surprising. Release exciting, high-potential new technology into the wild and this is what happens. So Google asks: Where do we add value to the process? Good question. Or is this a case where legacy industry giants simply won’t have a role? is leading the way, and that’s both encouraging AND somewhat alarming. Alarming because the potential for negative outcomes, fraud, misinformation and massive job loss remain in the absence of government oversight and regulation. https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

mnl, to random
@mnl@hachyderm.io avatar

+ 🚨

Jigsaw_You, to internet Dutch
@Jigsaw_You@mastodon.nl avatar
Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar
MattHodges, to random

I'm intrigued by @simon's exploration of prompt injections.¹ It reminds me Contracrostipunctus — If Record Player X is sufficiently high-fidelity, then when it attempts to play the song 'I Cannot Be Played on Record Player X', it will break.² And jumping out of the system with "Record Player Omega" that works by scanning the record and reconfiguring the player into something which won’t break.³

¹ https://simonwillison.net/2023/May/2/prompt-injection-explained/
² https://www.russellsteinberg.com/contracrostipunctus
³ https://www.reddit.com/r/rational/comments/364gsj/geb_discussion_16_chapter_15_jumping_out_of_the/

nighthawk, to random
@nighthawk@aus.social avatar

I wonder if future -driven IDEs will look more like GitHub than a code editor: you file an "issue" to tell the bot what you need, it creates a PR for you to review, you test it, review it, make some changes, ask it to change other things and refine your specs in the issue until you're happy and merge.

That's the workflow I find myself using with ChatGPT, but all the copying and pasting between browser and IDE, and repetitions in the chat make it clumsy. GitHub solved most of the UX for that already.

petersuber, to random
@petersuber@fediscience.org avatar

1/ Conjecture about AI/LLM tools

Find a scientific question with these three properties:

  1. It has a yes/no answer.
  2. It has a history of disagreement. Historically there have been many researchers, experiments, studies, datasets, and arguments on each side.
  3. Recently the question was decisively resolved. The latest research shows consensus, while the past research shows lack of consensus.

,

🧵

Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar

“Investors must act fast; this is the most exciting get-rich-quick opportunity since Bitcoin started accepting Bitcoin in transactions for Bitcoin.”

https://www.mcsweeneys.net/articles/introducing-total-crap-the-first-magazine-written-entirely-by-ai

Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar
michal, to random Czech
@michal@spondr.cz avatar

Toto je fantazie. Že by šlo použít i pro signály mozku a z těch šlo do lidské řeči převést, co mozek vlastně "vidí", to by mě nenapadlo.
https://denikn.cz/1137262/novy-prulom-ve-vyvoji-umele-inteligence-experiment-ukazal-ze-dokaze-pomerne-presne-cist-vase-myslenky/

go_shrumm, to random

Little more hands-on with a certain for some time now.

Once I learned what the "stop sequence" is actually good for, my instinctive ascription of at least a little bit of personality to the thing disappeared immediately.

It does not "know" how to stop by itself.

(or rather: )

manderson, to random
@manderson@mastodon.social avatar

"Croesus, king of Lydia beginning in 560 BC, tested the oracles of the world to discover which gave the most accurate prophecies."


https://languagelog.ldc.upenn.edu/nll/?p=58607&utm_source=feedly&utm_medium=rss&utm_campaign=the-age-of-socratic-ai

manderson, to random
@manderson@mastodon.social avatar

"Croesus, king of Lydia beginning in 560 BC, tested the oracles of the world to discover which gave the most accurate prophecies."


https://languagelog.ldc.upenn.edu/nll/?p=58607&utm_source=feedly&utm_medium=rss&utm_campaign=the-age-of-socratic-ai

skullvalanche, to random
@skullvalanche@gladtech.social avatar

Hinton Today: https://arstechnica.com/information-technology/2023/05/warning-of-ais-danger-pioneer-geoffrey-hinton-quits-google-to-speak-freely/

Hinton a month ago: https://youtu.be/qpoRO378qRY

I find the "oh, I know them, they're good people" argument about A.I. researchers to be such a tired and garbage statement. It doesn't matter how good the scientists who worked on the atomic bomb were, they still ushered in the age of mutually assured destruction.

I'd like to reiterate that my concern about is not that it's going to become conscious, or that it's going to displace a huge portion of the workforce (it will) but before it does any of that it's going to be leveraged by bad actors to flood society with truthy sounding bullshit the likes of which we've never seen and it will lead to killing in the streets.

QAnon is just a canary in the coal mine.

mariyadelano, to random
@mariyadelano@hachyderm.io avatar

A friend was playing a 2017 video game called NieR:Automata and just shared this screenshot with me

Very eerie and all too relevant with our current craze

"I can find no indication of the machines creating new cultures or values.
They only imitate human behavior."

Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar

is best known for an called , which he first proposed with two colleagues in the 1980s. The technique, which allows artificial neural networks to learn, today underpins nearly all models.

https://www.technologyreview.com/2023/05/01/1072478/deep-learning-pioneer-geoffrey-hinton-quits-google/

Jigsaw_You,
@Jigsaw_You@mastodon.nl avatar

Hinton has long called out ethical questions around#AI.

mmitchell_ai, to random

Reporting from Cade Metz on Geoff Hinton's Google departure. A few things stand out to me; time for thread.
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

@mmitchell_ai maybe we could help kick start the collaboration of leading scientists on ways of controlling technology by improving the discourse on this very platform in such away that it brings together the many sources of academic expertise that are already here?

clacke, to random en-us

People overestimate what "AI" of today is and what it can actually do, because memes are spreading that describe any machine-assisted process in a highly glossed-over form, ignoring the required human effort to make it work.

The naive impression is that you just gave some generative engine a prompt and the result came out fully formed, when the actual process is that the people behind the project used multiple purpose-built engines, for each of the engines they iterated on prompts that would output something semi-coherent, and then they used human efforts to tie the result together.

This is currently spreading as "this AI-generated pizza commercial" with no further explanation, but Tom's Hardware interviewed the actual people who made it work:

www.tomshardware.com/news/ai-p…

bkoehn, to random
@bkoehn@hachyderm.io avatar

A colleague made this. Nice explainer.

https://www.youtube.com/watch?v=7uDwEZBl-vs

Kraemer_HB, to random German
@Kraemer_HB@mastodon.social avatar

Menschen empfinden als harmonisch, was sie gewohnt sind. Sie mögen, was sie kennen, am liebsten leicht geremixt, aber wiedererkennbar.

Auftritt "": Bildgeneratoren rekombinieren wohlbekannte Bildinhalte und Stile. -Textgeneratoren ermitteln aus riesigen Textkorpora, welche Wörter am gewöhnlichsten aufeinander folgen. Sie sind beliebt und angenehm, weil sie harmonische Ergebnisse erzeugen - außer dem Betrachter fällt eine Disharmonie auf, insbesondere eine zur Realität.

go_shrumm, to random

Are LLMs generic writers - or rather generic books? Would anyone say of a book, it had theory of mind or feelings?

What if the is a compressed storage of utterances of human actors? The utterances are what we are used to see as signs of the actors’ knowledge or theory of mind or feeling.

That compression is not lossless. The decompression, triggered by prompt, does not restore the original faithfully. We misunderstand the errors of the book as creativity of a writer.

DoomsdaysCW, to random
@DoomsdaysCW@kolektiva.social avatar

groups in , fear as learns their languages

By Rina Chandran, April 03, 2023

• Generative AI models learn from mass data scraped from web
• Indigenous groups fear losing control over their data
• Some move to protect their information from commercial use

"When U.S. tech firm OpenAI rolled out Whisper, a speech recognition tool offering audio transcription and translation into English for dozens of languages including Māori, it rang alarm bells for many Indigenous New Zealanders.

"Whisper, launched in September by the company behind the ChatGPT chatbot, was trained on 680,000 hours of audio from the web, including 1,381 hours of the Māori language."

Read more: https://www.context.news/ai/nz-us-indigenous-fear-colonisation-as-bots-learn-their-languages?utm_source=pocket-newtab

HistoPol,
@HistoPol@mastodon.social avatar
annaleen, to random
@annaleen@wandering.shop avatar

"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @voron

(8/n)

Humanity continues on the path to create #SuperAndroids

"Recent research has taken this approach, training language models [#LLM's] to generate physics simulations, interact with physical environments and even generate #robotic action plans.

Embodied language understanding might still be a long way off, but these kinds of multisensory interactive projects are crucial steps on the way there."

HUMANS ARE STUPID

https://medium.com/the-conversation/it-takes-a-body-to-understand-the-world-why-chatgpt-and-other-llms-dont-know-what-they-re-saying-856c114529f6

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@ShadSterling

...in disagreement about the definition and the terminology.

E.g. in this @reuters article, they state that 's are a form of (), while also starting that "Like other forms of artificial intelligence, generative AI learns how to take actions from past data. "

https://www.reuters.com/technology/what-is-generative-ai-technology-behind-openais-chatgpt-2023-03-17/

Anyone who has spend a couple of days researching, knows that 's do NOT learn...

@simon @annaleen

  • All
  • Subscribed
  • Moderated
  • Favorites
  • anitta
  • everett
  • magazineikmin
  • Durango
  • thenastyranch
  • Youngstown
  • slotface
  • hgfsjryuu7
  • osvaldo12
  • rosin
  • kavyap
  • mdbf
  • PowerRangers
  • DreamBathrooms
  • modclub
  • khanakhh
  • InstantRegret
  • tacticalgear
  • vwfavf
  • ethstaker
  • ngwrru68w68
  • normalnudes
  • tester
  • GTA5RPClips
  • cubers
  • cisconetworking
  • Leos
  • provamag3
  • All magazines