This is a drum I keep banging, but the more I read about the approach to #AI, the more it strikes me that people are working on creating intelligence from an abstract corpus of data. Intelligence evolved in response to interaction with the physical world, and yet the physical world is completely ignored in AI research, as far as I can tell. People want a mind that emerges fully formed from silicon and data. And that's not how it works.
If all the people worried about AI (especially the various execs high up in companies) were just as worried about the end-user tracking and digital-footprint building done by advertisers and large tech firms, we could not only solve the digital "spying" on our online activities, we'd also end at least half of the "scary AI use cases" they're worried about.
All this ridiculously overhyped longtermist #AI scaremongering seems more like a strong and increasingly calculated attempt to distract from something (or everything) else, and less like doomsayers having more money than sense and otherwise falling prey to their own snake oil.
Once more on #ChatGPT #AI #LLMs: I made a serious attempt to have it write a ~500-word abstract for a talk, or at least help me with one. Seriously tinkered with the prompt, 6 iterations. Result: nothing but bullshit. Reiterations of definitions and trivia from around the topic. The hypothesis "this thing is completely useless for academic purposes because it cannot understand ideas" once again could not be refuted.
I'm struck by the Center for AI Safety's message saying "Mitigating the risk of extinction from AI should be a global priority..." I guess that could be among our priorities, but wouldn't it be better to eliminate that risk rather than mitigate it? Humans are making #AI. I think people should examine AI from the perspective of it being a tool (like a microscope) on human behaviour, since that's what it's reflecting. We're making AI and we're scared of it, but what we're really scared of is ourselves.
@arstechnica '... he asked the AI tool whether Varghese is a real case. ChatGPT answered that it "is a real case" ... When asked if the other cases provided by ChatGPT are fake, it answered, "No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw." '
If accurate, to me this illustrates the core issue with many people's use of so-called #AI tools: this lawyer bought the hype that these #LLMs are verging on human-like intelligence and could understand and answer his question, while in reality they were only constructing a statistically probable sequence of words that might be written in response to such a query, with no relationship to the facts of the matter.
Nvidia is now a trillion-dollar company thanks to the surge in demand for their chips to train LLMs. I still wonder about the unit economics, but hopefully costs will come down significantly with time. #LLMs #AI #gpu #nvidia
Do I really need to write another Gilbert & Sullivan "Modern Major General" parody, this time about #AI? Last time I parodied that song was about #Google long ago.
Sam Altman and OpenAI's stance on AI ethics is so strange. It's like watching the CEO of Exxon give a passionate speech about the dangers of climate change from the deck of a brand-new oil platform.
@schmod @futurism
I like the fantasy that this can be controlled anyway.
If we can't agree on management of dangers that have been with us for decades, how are you going to police AI?
Pandora's box is firmly open and can't be shut.
I'm going off to my extinct volcano, with en suite submarine pen, to dream up a dastardly AI to threaten James Bond with. Just let them stop me!
@arstechnica
Billionaires are already causing #MassExtinction, including of humans. #AI designed and controlled by them may accelerate the process. We need to regulate billionaires (and others controlling AIs).