remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #BigTech #SiliconValley: "Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about. A separation letter on the termination documents, which you can read embedded below, says in plain language, “If you have any vested Units ... you are required to sign a release of claims agreement within 60 days in order to retain such Units.” It is signed by Kwon, along with OpenAI VP of people Diane Yoon (who departed OpenAI recently). The secret ultra-restrictive NDA, signed for only the “consideration” of already vested equity, is signed by COO Brad Lightcap.

Meanwhile, according to documents provided to Vox by ex-employees, the incorporation documents for the holding company that handles equity in OpenAI contain multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.

Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI."

https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #LLMs #Claude: "We successfully extracted millions of features from the middle layer of Claude 3.0 Sonnet (a member of our current, state-of-the-art model family, currently available on claude.ai), providing a rough conceptual map of its internal states halfway through its computation. This is the first ever detailed look inside a modern, production-grade large language model.
Whereas the features we found in the toy language model were rather superficial, the features we found in Sonnet have a depth, breadth, and abstraction reflecting Sonnet's advanced capabilities.
We see features corresponding to a vast range of entities like cities (San Francisco), people (Rosalind Franklin), atomic elements (Lithium), scientific fields (immunology), and programming syntax (function calls). These features are multimodal and multilingual, responding to images of a given entity as well as its name or description in many languages."
https://www.anthropic.com/news/mapping-mind-language-model
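The linked research describes learning these features with sparse autoencoders (dictionary learning). The toy sketch below — with a hand-made dictionary and made-up activations, purely for illustration — only conveys the underlying idea of a "feature" as a direction in the model's activation space:

```python
import numpy as np

# Hypothetical sketch: each named "feature" is a direction in activation space.
# (Anthropic learns these directions with sparse autoencoders; here they are
# random vectors, just to illustrate the geometry.)
rng = np.random.default_rng(0)
d = 16  # made-up activation dimensionality
feature_dirs = {
    name: rng.standard_normal(d)
    for name in ["San Francisco", "Lithium", "immunology"]
}

def feature_activations(activation):
    # Project a (made-up) internal activation onto each feature direction.
    return {name: float(v @ activation) for name, v in feature_dirs.items()}

# An activation aligned with the "Lithium" direction lights up that feature.
scores = feature_activations(3.0 * feature_dirs["Lithium"])
print(max(scores, key=scores.get))
```

In the real work the dictionary is learned from model activations rather than hand-built, which is what makes the recovered features interpretable and multimodal.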

mwichary, to random
@mwichary@mastodon.online avatar

Has anyone written about how textual generative AI feels strangely close to toxic masculinity in some respects? The absolute confidence in everything stated, the lack of understanding of the consequences of getting that confidence wrong for important questions, the semi-gaslighty feeling when it “corrects” itself when you call it out on something. It so often feels like talking to someone one would despise and avoid in “real life.” I’m curious if anyone did some writing on this.

NatureMC,
@NatureMC@mastodon.online avatar

@mwichary Yes, there are studies about the social and ethical impact of biased AI models, especially in questions of masculinism, racism, or homophobia. It's a fact that the popular models are trained mainly by men, on male-dominated content. The latest is this study: https://cepis.org/unesco-study-exposes-gender-and-other-bias-in-ai-language-models/
This test became quite well-known in 2023: https://rio.websummit.com/blog/society/chatgpt-gpt4-midjourney-dalle-ai-ethics-bias-women-tech/

uniinnsbruck, to Futurology
@uniinnsbruck@social.uibk.ac.at avatar

Physicists have developed a new method to prepare quantum operations on a given quantum computer, using a machine-learning generative model to find the appropriate sequence of quantum gates to execute a quantum operation. The study, recently published in Nature Machine Intelligence, marks a significant step toward unlocking the full potential of quantum computing.

📣 https://www.uibk.ac.at/en/newsroom/2024/how-ai-helps-programming-a-quantum-computer/

@fwf @ERC_Research

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Suno, a generative AI music company, has raised $125 million in its latest funding round, according to a post on the company’s blog. The AI music firm, which is one of the rare start-ups that can generate voice, lyrics and instrumentals together, says it wants to usher in a “future where anyone can make music.”

Suno allows users to create full songs from simple text prompts. While most of its technology is proprietary, the company does lean on OpenAI’s ChatGPT for lyric and title generation. Free users can generate up to 10 songs per month, but with its Pro plan ($8 per month) and Premier plan ($24 per month), a user can generate up to 500 songs or 2,000 songs, respectively, on a monthly basis and are given “general commercial terms.”"

https://www.billboard.com/business/tech/ai-music-company-suno-raises-new-funding-round-1235688773/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Now, I do see why Altman likes it so much; besides its treatment of AI as personified emotional pleasure dome, two other things happen that must appeal to the OpenAI CEO: 1. Human-AI relationships are socially normalized almost immediately (this is the most unrealistic thing in the movie, besides its vision of a near-future LA that has good public transit and walkable neighborhoods; in a matter of months everyone seems to find it normal that people are ‘dating’ voices in the earbuds they bought from Best Buy), and 2. the AIs meet a resurrected model of Alan Watts, band together, and quietly transcend, presumably achieving some version of what Altman imagines to be AGI. He professes to worry that AI will destroy humanity, and has a survival bunker and guns to prove it, so this science fictional depiction of AGIification must be more soothing than the other one.

But the weirdest thing to me is that it’s only after the AIs are gone that the characters can be said to undergo any sort of personal growth; they spend some time looking at the sunset, feel a human connection, and Theo writes that long overdue handwritten apology letter to his ex. It’s hard to see how the AI wasn’t merely holding them back from all this, and why Altman would find this outcome inspiring in the context of running a company that is bent on inundating the world with AI. Maybe he just missed the subtext? It’s become something of a running joke that Altman is bad at understanding movies: he thought Oppenheimer should have been made in a way that inspired kids to become physicists, and that the Social Network was a great positive message for startup founders.

Finally, Altman’s admiration is also a bit puzzling in that the AIs don’t ever really do anything amazing for society, even while they’re here."

https://www.bloodinthemachine.com/p/why-is-sam-altman-so-obsessed-with

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Without some minimal agreement as to what those basic human capabilities are—what activities belong to the jurisdiction of our species, not to be usurped by machines—it becomes difficult to pin down why some uses of artificial intelligence delight and excite, while others leave many of us feeling queasy.

What makes many applications of artificial intelligence so disturbing is that they don’t expand our mind’s capacity to think, but outsource it. AI dating concierges would not enhance our ability to make romantic connections with other humans, but obviate it. In this case, technology diminishes us, and that diminishment may well become permanent if left unchecked.

Over the long term, human beings in a world suffused with AI-enablers will likely prove less capable of engaging in fundamental human activities: analyzing ideas and communicating them, forging spontaneous connections with others, and the like. While this may not be the terrifying, robot-warring future imagined by the Terminator movies, it would represent another kind of existential catastrophe for humanity."

https://www.theatlantic.com/ideas/archive/2024/05/ai-dating-algorithms-relationships/678422/

magnetichuman, to generativeAI
@magnetichuman@cupoftea.social avatar

Companies today are trying to put Generative AI into everything with the same enthusiasm that they put Radium into consumer products in the 1930s.
#GenerativeAI #AIrisks

tomstoneham, to ai
@tomstoneham@dair-community.social avatar

"Yet again, LLMs show us that many of our tests for cognitive capacities are merely tracking proxies."

Some thoughts on genAI 'passing' theory of mind tests.

https://listed.to/@24601/51831/minds-and-theories-of-mind

lns, to generativeAI
@lns@fosstodon.org avatar

I wonder if generative AI will cause a real drop in motivation for organic human creativity... "I'll just have AI make it for me."

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Copyright #IP: "Generative artificial intelligence (AI) has the potential to augment and democratize creativity. However, it is undermining the knowledge ecosystem that now sustains it. Generative AI may unfairly compete with creatives, displacing them in the market. Most AI firms are not compensating creative workers for composing the songs, drawing the images, and writing both the fiction and non-fiction books that their models need in order to function. AI thus threatens not only to undermine the livelihoods of authors, artists, and other creatives, but also to destabilize the very knowledge ecosystem it relies on.

Alarmed by these developments, many copyright owners have objected to the use of their works by AI providers. To recognize and empower their demands to stop non-consensual use of their works, we propose a streamlined opt-out mechanism that would require AI providers to remove objectors’ works from their databases once copyright infringement has been documented. Those who do not object still deserve compensation for the use of their work by AI providers. We thus also propose a levy on AI providers, to be distributed to the copyright owners whose work they use without a license. This scheme is designed to ensure creatives receive a fair share of the economic bounty arising out of their contributions to AI. Together these mechanisms of consent and compensation would result in a new grand bargain between copyright owners and AI firms, designed to ensure both thrive in the long-term."

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4826695

remixtures, to UX Portuguese
@remixtures@tldr.nettime.org avatar

#UX #UserExperience #OpenAI #AI #GPT4o #GenerativeAI: "It is unethical to slap an interface, which convincingly simulates 100% confidence, onto a product which is anything less than 100% accurate, let alone a product that its CTO, Mira Murati, calls “pretty good”.

No exceptions; no “it will get better”. If the house doesn’t have a roof, don’t paint the walls.

This does not mean that reduction or removal of complexity is inherently deceitful, but it does mean that the complexity which informs a person, not how, but why something works the way it does can be an important factor in them deciding to use it.

Nothing could make this more evident than the crypto/web3 community’s obsession with “mass adoption” which they generally resolve to being a UX problem. They know that the complexity of crypto is intimidating to non-technical people (crimes and scams aside) so they relentlessly try to remove as much of the complexity as possible.

The unfortunate thing about removing complexity is that you never remove it, but rather, you move it to another place. The other place is always what crypto people like to call a “trusted third party,” the very thing that Bitcoin was created to eliminate."

https://fasterandworse.com/known-purpose-and-trusted-potential/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "For years now, OpenAI told everyone that these were all secondary concerns — that its deeper ambition was something nobler, and more public-spirited. But since Altman’s return, the company has been telling a different story: a story about winning at all costs.

And why bother with superalignment, when there’s winning to do?

Why bother getting actresses’ permission, when the right numbers are all still going up?"

https://www.platformer.news/open-ai-scarlett-johansson-her-voice-sam-altman/

CenturyAvocado, to ai
@CenturyAvocado@fosstodon.org avatar

Here comes the bullshit machine... @revk @bloor
Someone came into this evening leading to a confusing interaction until the cause was identified.

On a side note, I think I might be done with this internet and tech stuff. I wonder what manual work I can take up instead.

mheadd, to ai
@mheadd@mastodon.social avatar

This is a fundamental mistake that people make when trying to assess whether LLMs are an appropriate tool to use in optimizing a process, function, or service:

"LLMs are not search engines looking up facts; they are pattern-spotting engines that guess the next best option in a sequence."

This terrific article is a great explainer on how they work and their limitations.

https://ig.ft.com/generative-ai/
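That "guess the next best option in a sequence" framing can be made concrete with a toy bigram model (purely illustrative: real LLMs use neural networks over subword tokens, not word counts, but the prediction-not-lookup point is the same):

```python
import random
from collections import Counter, defaultdict

# Toy "pattern-spotting engine": count which word tends to follow which,
# then guess the next word from those counts. There is no fact lookup
# anywhere -- only observed sequence statistics.
corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))  # "cat" or "mat", weighted by observed frequency
```

Scaled up by many orders of magnitude, with learned representations instead of raw counts, this is why an LLM can sound fluent while having no built-in notion of whether its continuation is true.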

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "More broadly, across news media coverage of AI in general, reviewing 30 published studies, Saba Rebecca Brause and her coauthors find that, while there are of course exceptions, most research so far finds not just a strong increase in the volume of reporting on AI, but also “largely positive evaluations and economic framing” of these technologies.

So, perhaps, as Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), has written on X: “The same news orgs hype stuff up during ‘AI summers’ without even looking into their archives to see what they wrote decades ago?”

There are some really good reporters doing important work to help people understand AI—as well as plenty of sensationalist coverage focused on killer robots and wild claims about possible future existential risks.

But, more than anything, research on how news media cover AI overall suggests that Gebru is largely right – the coverage tends to be led by industry sources, and often takes claims about what the technology can and can’t do, and might be able to do in the future, at face value in ways that contribute to the hype cycle."

https://reutersinstitute.politics.ox.ac.uk/news/how-news-coverage-often-uncritical-helps-build-ai-hype

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company’s leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world’s input into how to do it justly and wisely.

But when there’s real money at stake — and there are astounding sums of real money at stake in the race to dominate AI — it becomes clear that they probably never intended for the world to get all that much input. Their process ensures former employees — those who know the most about what’s happening inside OpenAI — can’t tell the rest of the world what’s going on.

The website may have high-minded ideals, but their termination agreements are full of hard-nosed legalese. It’s hard to exercise accountability over a company whose former employees are restricted to saying “I resigned.”"

https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release

skykiss, to random
@skykiss@sfba.social avatar

The criminal rally was a lie and there were fake photos with fake people added. There was almost no one there.

Fascist Republicans lie about their fascist rally. Sad.

https://www.dailykos.com/stories/2024/5/19/2241549/-Wildwood-NJ-not-the-first-time-for-fake-Trump-crowd-reports

This photo is a LIE. The crowd is fake.

jadugar63,
@jadugar63@mastodon.social avatar

@skykiss
I guess this explains why they used cheap photoshop instead of #generativeAI to fill in the blanks.

https://mastodon.social/@GottaLaff/112475465633714861

1br0wn, to generativeAI
@1br0wn@eupolicy.social avatar

🇬🇧 minister wants to create a “framework or policy” around #GenerativeAI model training transparency but noted “very complex international problems that are fast moving”. She said the UK needed to ensure it had “a very dynamic regulatory environment”. https://on.ft.com/3ULCFn1

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it—with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.
(...)
I must assume that eventually an army of shame engineers will rise up, writing guilt-inducing code in order to make their robots more convincingly human. But it doesn’t mean I love the idea. Because right now you can see the house of cards clearly: By aggregating the world’s knowledge, chomping it into bits with GPUs, and emitting it as multi-gigabyte software that somehow knows what to say next, we've made the funniest parody of humanity ever. These models have all of our qualities, bad and good. Helpful, smart, know-it-alls with tendencies to prejudice, spewing statistics and bragging like salesmen at the bar. They mirror the arrogant, repetitive ramblings of our betters, the horrific confidence that keeps driving us over the same cliffs. That arrogance will be sculpted down and smoothed over, but it will have been the most accurate representation of who we truly are to exist so far, a real mirror of our folly, and I will miss it when it goes."

https://www.wired.com/story/generative-ai-totally-shameless/

attacus, to ai
@attacus@aus.social avatar

Most products and business problems require deterministic solutions.
Generative models are not deterministic.
Every single company who has ended up in the news for an AI gaffe has failed to grasp this distinction.

There’s no hammering “hallucination” out of a generative model; it’s baked into how the models work. You just end up spending so much time papering over the cracks in the façade that you end up with a beautiful découpage.
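The determinism point can be sketched in a few lines (the token scores are hypothetical; greedy decoding is the one special case that is deterministic, and it's rarely how these products are run):

```python
import math
import random

# Sketch: sampling from a softmax over next-token scores is nondeterministic.
# The same input yields different outputs across runs unless you decode
# greedily (temperature -> 0) or pin the random seed.
logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}  # made-up next-token scores

def sample(logits, temperature=1.0):
    if temperature == 0:  # greedy decoding: always the argmax, deterministic
        return max(logits, key=logits.get)
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

print(sample(logits, temperature=0))  # always "yes"
print({sample(logits) for _ in range(50)})  # typically several different tokens
```

Even at temperature 0, floating-point and batching effects in deployed systems can still break strict reproducibility — which is why bolting a generative model onto a process that needs one correct answer keeps producing the gaffes the post describes.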

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "It all kicked off last night, when a note on Hacker News raised the issue of how Slack trains its AI services, by way of a straight link to its privacy principles — no additional comment was needed. That post kicked off a longer conversation — and what seemed like news to current Slack users — that Slack opts users in by default to its AI training, and that you need to email a specific address to opt out.

That Hacker News thread then spurred multiple conversations and questions on other platforms: There is a newish, generically named product called “Slack AI” that lets users search for answers and summarize conversation threads, among other things, but why is that not once mentioned by name on that privacy principles page in any way, even to make clear if the privacy policy applies to it? And why does Slack reference both “global models” and “AI models?”

Between people being confused about where Slack is applying its AI privacy principles, and people being surprised and annoyed at the idea of emailing to opt out — at a company that makes a big deal of touting that “You control your data” — Slack does not come off well."

https://techcrunch.com/2024/05/17/slack-under-attack-over-sneaky-ai-training-policy/?guccounter=1

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence
