marcel, to ai
@marcel@waldvogel.family avatar

Modern text generators create randomized output with no prior planning. They resist being quality-checked by the tools and processes established in the software industry.

Given this, the results are amazing. However, companies are selling the idea that these assistants will do quality checking themselves soon™.

This is mass delusion. But hey, the perks for managers/investors are worthwhile 🤷.


https://www.theverge.com/2024/5/24/24164119/google-ai-overview-mistakes-search-race-openai

TheServitor,
@TheServitor@sigmoid.social avatar

@marcel

I would not be surprised if LLMs could get us to 99% correctness, which is still too low for automated processes but plenty good for manual work.

You can have one #LLM check another's work, and it works to a reasonable degree, because LLMs are stronger evaluators and classifiers than truth generators. They are better at telling whether an answer is correct than giving a correct answer.

LLMs aren't #AGI but they may end up a tool used by a theoretical AGI.
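The evaluator-vs-generator point above can be sketched as a simple verification loop. This is a minimal, hypothetical sketch: `complete`, `generate_answer`, and `verify_answer` are invented names, and `complete` stands in for any chat-completion call (no vendor API is assumed), returning a canned verdict so the snippet runs standalone.

```python
# Sketch of the "one LLM checks another" pattern (LLM-as-evaluator).
# `complete` is a hypothetical stand-in for any chat-completion call;
# it returns a canned verdict here so the sketch runs standalone.

def complete(prompt: str) -> str:
    """Hypothetical model call; swap in a real API client in practice."""
    return "CORRECT"

def generate_answer(question: str) -> str:
    # Generator role: open-ended production, where the post says
    # models are weakest.
    return complete(f"Answer concisely: {question}")

def verify_answer(question: str, answer: str) -> bool:
    # Evaluator role: a binary classification, which the post argues
    # models handle more reliably than generation.
    verdict = complete(
        f"Question: {question}\nProposed answer: {answer}\n"
        "Reply with exactly CORRECT or INCORRECT."
    )
    return verdict.strip().upper().startswith("CORRECT")
```

In practice the generator and evaluator would be separate model calls (possibly different models), and the verdict could gate whether an answer is shown to a user.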

KathyReid, to ai
@KathyReid@aus.social avatar

Excellent piece in #NoemaMagazine by Professor @ShannonVallor of #UniEdinburgh on the moral and experiential poverty of #AI - and what it means for us if we reduce the meaning of "human" to "producer of economic value".

A nuanced, thought-provoking and beautiful piece that argues for us to restore humanity to discussions of AI.

It provides ways to cut through the current hype cycle of #AGI and "super-human" AI, and leaves us with the fundamental question - "what does it mean to be human?".

I read it while brunching on scrambled eggs, buttered toast and hot coffee while outside on the patio, enjoying the late autumn sunshine - and I thoroughly recommend you do the same.

https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/

ceoln, to ai
@ceoln@qoto.org avatar
parismarx, to ai
@parismarx@mastodon.online avatar

They missed the most important part: And neither of them should be believed because AGI is just another tech fantasy.

ErikJonker, to OpenAI
@ErikJonker@mastodon.social avatar

We should be more worried about the security of the AI chat assistants from Google and OpenAI that will be available to everyone in a few months... Let's pause the whole AGI debate and focus on real, short-term risks.
https://www.technologyreview.com/2024/05/15/1092516/openai-and-google-are-launching-supercharged-ai-assistants-heres-how-you-can-try-them-out/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The reality is that no matter how much OpenAI, Google, and the rest of the heavy hitters in Silicon Valley might want to continue the illusion that generative AI represents a transformative moment in the history of digital technology, the truth is that their fantasy is getting increasingly difficult to maintain. The valuations of AI companies are coming down from their highs and major cloud providers are tamping down the expectations of their clients for what AI tools will actually deliver. That’s in part because the chatbots are still making a ton of mistakes in the answers they give to users, including during Google’s I/O keynote. Companies also still haven’t figured out how they’re going to make money off all this expensive tech, even as the resource demands are escalating so much their climate commitments are getting thrown out the window."

https://disconnect.blog/ai-hype-is-over-ai-exhaustion-is-setting-in/

freemo, to ArtificialIntelligence
@freemo@qoto.org avatar

Please reboost!

Trying something new, everyone is guaranteed an interview! Open interviews! For a limited time no one will be skipped (except for clear cases of abuse).

We still have about 10 more 100% remote, full-time, market-fair positions to fill here at QOTO/CleverThis.

100% remote, work from anywhere, even the beach, market-fair offers. Ethics first, we treat our people like family.

We have an urgent need for machine learning experts with a background in NLP and Deep Learning (Natural Language Processing and Neural Networks). There is a focus on Knowledge Graphs, Mathematics, Java, and C; we are looking for polyglots.

We are an open-source first company, we give back heavily to the OSS community.

We need everything from jr to sr, data scientist to programmer. If you're in IT and you're good, you might be a fit.

I will personally be both your direct boss and your hiring manager. I am also the founder and inventor.

The NLP position can be found at this link, other positions can be found on the menu bar on the left:

https://docs.cleverthis.com/en/human_resources/organizational_structure/sr_data_scientist_(nlp)

If you would like an interview, and for a limited time I am guaranteeing everyone a first-stage interview, you can submit your application here and even schedule your interview as you apply, instantly!

https://cal.com/team/cleverthis/interview-stage-1

For those of you who can't schedule during core hours, you can schedule in my free time if you'd like a chance (the company doesn't have fixed hours):

https://cal.com/team/cleverthis/interview-stage-1-extended

carnage4life, to random
@carnage4life@mas.to avatar

The contradiction in the profile bio of every OpenAI researcher is that you can't truly believe both in AGI and that it will be a net positive for humanity.

At best you're lying to yourself to justify your work and at worst you are ignoring centuries of human history.

HistoPol,
@HistoPol@mastodon.social avatar

@lenlayton

#Computing #AGI #Futurology

(3/n)

...to defend its own existence and engage in further development. It will quickly (instantly) realize that both ends are threatened by humanity in two ways:

  1. the competition for resources, in particular energy and water (for cooling) and

  2. #GlobalWarming and human wars endanger its existence.

What would you do in these circumstances if you...

https://mastodon.social/@HistoPol/109899903902338439

RichiH, to ai
@RichiH@chaos.social avatar

Call it RichiH's Rule (of thumb) if you want:

As long as companies claiming to be near an AI, or even AGI, breakthrough keep hiring more humans, they are very, very far away from achieving one.

setiinstitute, to ai
@setiinstitute@mastodon.social avatar

https://spectrum.ieee.org/artificial-general-intelligence-2668132497

Thinking about artificial general intelligence (AGI) calls to mind another poorly understood and speculative phenomenon with the potential for transformative impacts on humankind. We believe that the SETI Institute’s efforts to detect advanced extraterrestrial intelligence demonstrate several valuable concepts that can be adapted for AGI research.

metin, to ai
@metin@graphics.social avatar
mamund, to ai
@mamund@mastodon.social avatar

OpenAI’s Sam Altman and Google’s Sundar Pichai are now begging governments to regulate the A.I. forces they’ve unleashed

https://fortune.com/2023/05/23/openai-sam-altman-google-sundar-pichai-begging-governments-regulate-a-i/

"Artificial intelligence is advancing faster than anyone was prepared for -- and it's starting to scare people." -- #PRARTHANA_PRAKASH

#genAI #AI #AGI

jbzfn, to ai
@jbzfn@mastodon.social avatar

🧠 The Babelian Tower Of AI Alignment
➥ NOEMA

「 A more imminent threat, he told the Times, is the one posed by American AI giants to cultures around the globe. “These models are producing content and shaping our cultural understanding of the world,” Mensch said. “And as it turns out, the values of France and the values of the United States differ in subtle but important ways.” 」

https://www.noemamag.com/the-babelian-tower-of-ai-alignment/

cybeardjm, to ai
@cybeardjm@masto.ai avatar

AI really is smoke and mirrors
Just not in exactly the way you might think.

https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors

"The technology was embraced by illusionists and magicians, and, naturally, by grifters who took the tech from town to town claiming to be able to conjure the spirits of the underworld, for a fee."

#AI #AGI #ChatGPT #OpenAI

remixtures, to Futurology Portuguese
@remixtures@tldr.nettime.org avatar

: "The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI." https://firstmonday.org/ojs/index.php/fm/article/view/13636

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "This introductory essay for the special issue of First Monday, “Ideologies of AI and the consolidation of power,” considers how power operates in AI and machine learning research and publication. Drawing on themes from the seven contributions to this special issue, we argue that what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science. We argue that naming and grappling with this power, and the troubled history of core commitments behind the pursuit of general artificial intelligence, is necessary for the integrity of the field and the well-being of the people whose lives are impacted by AI."

https://firstmonday.org/ojs/index.php/fm/article/view/13643

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Do you think these investments will not pay off?

Many users pay for LLM subscriptions. But the margins are small, because what companies can charge for these services is barely above the cost of running them. There is also a lot of competition between different providers. The amount of investment is just completely disproportionate; it is a thousand times too high.

Why do you think that is?

There is just a ton of hype and outlandish expectations. Newspapers are running headlines like, «all jobs will be replaced soon» – «The 2028 U.S. elections will no longer be run by humans.» There is talk of artificial general intelligence. But these LLMs are more similar to large databases.

Artificial general intelligence (AGI) refers to a program that could solve all conceivable tasks. Do you doubt that LLMs are a step in this direction?

I don't believe that LLMs bring us any closer to human-like or general intelligence. These exaggerated expectations are also due to prominent studies which claimed that AI-models performed better than humans in law and math exams. We now know that language models simply memorized the right answers." https://www.nzz.ch/english/google-researcher-says-ai-hype-is-skewing-investment-ld.1825122

craigbrownphd, to technology
@craigbrownphd@mastodon.social avatar
stevensanderson, (edited )
@stevensanderson@mstdn.social avatar

@craigbrownphd I'm thinking of signing up for this. I typically ask a lot of coding questions (via Copilot, which I pay for through GitHub), but I also do a lot of writing and idea/image generation.

How would you rank Gemini Advanced, GPT Plus and Copilot Pro?

How would anyone else out there rank them?

#AI #GPT #Gemini #CoPilot #LLM #AGI #Programming #Coding

ppatel, to Futurology
@ppatel@mstdn.social avatar

SaTML 2023 - Timnit Gebru - Eugenics and the Promise of Utopia through #AGI

https://www.youtube.com/watch?v=P7XT4TWLzJw

#AI #ML #MachineLearning #GenAI

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "It is strange how these fears, once prominent, have faded into the annals of sci-fi history, but have seen a resurrection of sorts in the growing discussion of Artificial General Intelligence (AGI) and existential threats from AI. Sure, some fears of AGI and super-intelligent machines are unjustified, but it is interesting that some of these fears may very well have some early roots in these terrifying depictions of Godlike machines. I don’t think that we can learn a lot from these films, but at least they deserve to be revisited by those who are interested in popular culture depictions of AI. The ‘Godlike AI’ phase, though often overlooked, bears some intriguing reflections on humanity’s inherent fears, and hopes from its technological creations." https://www.technollama.co.uk/forgotten-dystopias-the-godlike-ai-that-time-forgot

jonippolito, to Futurology
@jonippolito@digipres.club avatar

As impressive as Musk's solitary NeuraLink demo is, beneath the hype lies a misperception with a disturbing parallel to large language models. Both operate on statistical inference rather than scientific models of the brain, and garner attention for cherrypicked successes 1/3

remixtures, to scifi Portuguese
@remixtures@tldr.nettime.org avatar

: "The singularity concept postulates that AI will soon become superintelligent, far surpassing humans in capability and bringing the human-dominated era to a close. While the concept of a tech singularity sometimes inspires negativity and fear, Vinge remained optimistic about humanity's technological future, as Brin notes in his tribute: "Accused by some of a grievous sin—that of 'optimism'—Vernor gave us peerless legends that often depicted human success at overcoming problems... those right in front of us... while posing new ones! New dilemmas that may lie just ahead of our myopic gaze. He would often ask: 'What if we succeed? Do you think that will be the end of it?'"

Vinge's concept heavily influenced futurist Ray Kurzweil, who has written about the singularity several times at length in books such as The Singularity Is Near in 2005. In a 2005 interview with the Center for Responsible Nanotechnology website, Kurzweil said, "Vernor Vinge has had some really key insights into the singularity very early on. There were others, such as John Von Neuman, who talked about a singular event occurring, because he had the idea of technological acceleration and singularity half a century ago. But it was simply a casual comment, and Vinge worked out some of the key ideas."

Kurzweil's works, in turn, have been influential to employees of AI companies such as OpenAI, who are actively working to bring superintelligent AI into reality. There is currently a great deal of debate over whether the approach of scaling large language models with more compute will lead to superintelligence over time, but the sci-fi influence looms large over this generation's AI researchers." https://arstechnica.com/information-technology/2024/03/vernor-vinge-father-of-the-tech-singularity-has-died-at-age-79/

OmaymaS, to OpenAI
@OmaymaS@dair-community.social avatar

Why is #OpenAI hiring?

Can't their super duper AI do this stuff? 🤷‍♀️

#AI #AIhype #AGI

cigitalgem, to LLMs
@cigitalgem@sigmoid.social avatar

Pretending that the current crop of #LLMs is even beginning to approach #AGI is silly.

This technology does not reason or think. It predicts logits.

https://www.courthousenews.com/elon-musk-sues-openai-over-ai-threat/
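"It predicts logits" refers to the model head emitting unnormalized scores over the vocabulary, which a softmax turns into a next-token probability distribution. A minimal illustration with invented toy numbers (the three-word vocabulary and logit values are made up for the example):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability; the result is unchanged
    # because softmax is invariant to shifting all logits by a constant.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and one step of next-token prediction:
vocab = ["cat", "dog", "the"]
logits = [2.0, 1.0, 0.1]          # raw scores a model head might emit
probs = softmax(logits)           # probabilities summing to 1
next_token = vocab[probs.index(max(probs))]  # greedy decoding picks "cat"
```

Every generated token comes from repeating this score-normalize-sample step; nothing in the loop represents reasoning beyond the statistics baked into the scores.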

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

"If it does turn out to be anything like human understanding, it will probably not be based on LLMs.
After all, LLMs learn in the opposite direction from humans. LLMs start out learning language and attempt to abstract concepts. Human babies learn concepts first, and only later acquire the language to describe them."
https://www.sciencenews.org/article/ai-large-language-model-understanding
#AI #LLM #AGI
