remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Recently, Bonaventure Dossou learned of an alarming tendency in a popular AI model. The program described Fon—a language spoken by Dossou’s mother and millions of others in Benin and neighboring countries—as “a fictional language.”

This result, which I replicated, is not unusual. Dossou is accustomed to the feeling that his culture is unseen by technology that so easily serves other people. He grew up with no Wikipedia pages in Fon, and no translation programs to help him communicate with his mother in French, in which he is more fluent. “When we have a technology that treats something as simple and fundamental as our name as an error, it robs us of our personhood,” Dossou told me.

The rise of the internet, alongside decades of American hegemony, made English into a common tongue for business, politics, science, and entertainment. More than half of all websites are in English, yet more than 80 percent of people in the world don’t speak the language. Even basic aspects of digital life—searching with Google, talking to Siri, relying on autocorrect, simply typing on a smartphone—have long been closed off to much of the world. And now the generative-AI boom, despite promises to bridge languages and cultures, may only further entrench the dominance of English in life on and off the web."

https://www.theatlantic.com/technology/archive/2024/04/generative-ai-low-resource-languages/678042/

ppatel, to ai
@ppatel@mstdn.social avatar

The consistent theme here is that they all want little regulation. They don't want the others to be entrenched.

A profile of Mistral AI CEO Arthur Mensch, who says that, as an atheist, he is uncomfortable with Silicon Valley's "#AGI rhetoric" and "religious" fascination with it.

https://www.nytimes.com/2024/04/12/business/artificial-intelligence-mistral-france-europe.html?unlocked_article_code=1.j00.G8zX.lqukAsOFhspc&smid=nytcore-ios-share&referringSource=articleShare&ugrp=c

CharlieMcHenry, to ai
@CharlieMcHenry@connectop.us avatar

Bill introduced to require companies to list what copyrighted materials they use for training purposes.

https://www.engadget.com/us-bill-proposes-ai-companies-list-what-copyrighted-materials-they-use-123058589.html

algorights, to LLMs Spanish
@algorights@mastodon.social avatar

«Large language models can do jaw-dropping things. But nobody knows exactly why. And that's a problem. Figuring it out is one of the biggest scientific puzzles of our time and a crucial step towards controlling more powerful future models».
https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/

hendrik, to LLMs

Have people tried to link the output of LLMs to devices that take actions in the real world? What if one creates a way for an LLM to make pull requests on GitHub or GitLab? The LLM could spend day and night browsing through repositories to improve them. That would be as exciting as it is creepy!

ppatel, to ai
@ppatel@mstdn.social avatar

Google touting that its latest models and services can be grounded through its search results isn't the boast it thinks it is, especially considering the quality of its results lately. Has anybody considered the feedback loop of AI results being ranked higher and then being used to ground Gemini Pro?

johnpettigrew, to ai
@johnpettigrew@wandering.shop avatar

This article contains one crucial line that basically undercuts the entire rest of what Pat Gelsinger of Intel is saying:

"Many clients are telling me it is really hard to realize value from their AI investments," Guan told Gelsinger on stage.

In other words, despite all the hype, no-one is actually making money from LLMs (except the consultants). It's a bubble. All the excited keynote presentations in the world can't disguise that.
https://www.theregister.com/2024/04/10/intel_ceo_ai_automation/

metin, (edited ) to ai
@metin@graphics.social avatar

Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.

hendrik, to LLMs

LLMs are the ultimate answer to the internet we created: they avoid ads, ignore the superfluous padding on cooking-recipe websites, and create a layer of privacy between you and the website. Well, at least as long as you can trust the provider of the LLM...

Content providers will feel that this hurts their pockets. It essentially gives everyone on the internet an ad blocker, and that may lead to more paywalls. Will LLM providers start to buy information from these content providers? Or will they sell opportunities to place information?

Sevoris, to LLMs

Two articles I saved about a year ago, maybe worth reflecting on now: what LLMs can and cannot achieve, what they have achieved and been used for in the past year, and how the application landscape has been developing:

https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

https://skventures.substack.com/p/ai-mass-evolution-and-weickian-loops

ppatel, to ai
@ppatel@mstdn.social avatar

Move over, deep learning: Symbolica's structured approach could transform AI

Artificial intelligence startup Symbolica emerged from stealth today and unveiled a novel approach to constructing AI models, leveraging advanced mathematics to imbue systems with human-like reasoning capabilities and unprecedented transparency.

https://venturebeat.com/ai/move-over-deep-learning-symbolicas-structured-approach-could-transform-ai/

ppatel, to accessibility
@ppatel@mstdn.social avatar

I posted an early discussion of this before. But Apple's work here could make Shortcuts work much better, faster, and more accurately.

Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs

https://arxiv.org/abs/2404.05719

joelanman, to ai
@joelanman@hachyderm.io avatar

OLMo - claims to be a fully open LLM including training data

https://blog.allenai.org/hello-olmo-a-truly-open-llm-43f7e7359222

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "There are two reasons why using a publicly available LLM such as ChatGPT might not be appropriate for processing internal documents. Confidentiality is the first and obvious one. But the second reason, also important, is that the training data of a public LLM did not include your internal company information. Hence that LLM is unlikely to give useful answers when asked about that information.

Enter retrieval-augmented generation, or RAG. RAG is a technique used to augment an LLM with external data, such as your company documents, that provide the model with the knowledge and context it needs to produce accurate and useful output for your specific use case. RAG is a pragmatic and effective approach to using LLMs in the enterprise.

In this article, I’ll briefly explain how RAG works, list some examples of how RAG is being used, and provide a code example for setting up a simple RAG framework." https://www.infoworld.com/article/3712860/retrieval-augmented-generation-step-by-step.html
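The linked article's own code example isn't reproduced here, but the RAG flow it describes (retrieve relevant documents, then feed them to the model as context) can be sketched in a few lines. This is a hypothetical toy under loose assumptions: the sample documents, the bag-of-words "embedding", and the prompt format are stand-ins for a real vector store and a real LLM call.

```python
# Minimal RAG sketch: retrieve the most relevant internal document
# for a query, then build a grounded prompt for an LLM.
import math
from collections import Counter

# Stand-ins for your internal company documents.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The engineering handbook requires code review for every merge.",
    "Expense reports must be filed by the fifth day of each month.",
]

def embed(text):
    """Toy 'embedding': lowercase bag-of-words counts.
    A real system would use a sentence-embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Augment the query with retrieved context; a real system
    would send this prompt to an LLM for the final answer."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt(
    "How many days do customers have to return a product?", documents
)
```

The design point is that only the retrieval step touches your private data; the model never needs to have been trained on it, which is exactly the confidentiality argument the quote makes.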

kellogh, to python
@kellogh@hachyderm.io avatar

Years ago, the "language of machine learning" was split between #R and #Python, but it's been steadily shifting toward Python. At this point, after all the developments, I think it's clearly Python. I don't see much R in the LLM world at all. And increasingly, I'm seeing it become the "systems language" of ML.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Q. You've reported quite a lot on technology and algorithms in the past. How do you think journalists should cover the rise of generative AI?

A. Journalists should stop speaking about AI models as if they have personalities, and they are sentient. That is really harmful because it changes the conversation from something that we as humans control to a peer-to-peer relationship. We built these tools and we can make them do what we want.

Another thing I would recommend is talking about AI specifically. Which AI model are we talking about? And how does that compare to the other AI models? Because they are not all the same. We also need to talk about AI in a way that’s domain-specific. There’s a lot of talk about what AI will do to jobs. But that is too big a question. We have to talk about this in each field.

A classic example of that is that people have been predicting forever that AI is going to replace radiologists and it hasn't happened. So I would like to know why. That's the kind of question you can answer. So part of what we’d like to do at Proof News is focusing on a testable hypothesis. Focusing on a testable hypothesis forces you to be a little more rigorous in your thinking." https://reutersinstitute.politics.ox.ac.uk/news/julia-angwin-fears-public-sphere-about-get-worse-ai-makes-it-easier-flood-zone-misinformation

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "...[T]he AI hype of the last year has also opened up demand for a rival perspective: a feeling that tech might be a bit disappointing. In other words, not optimism or pessimism, but scepticism. If we judge AI just by our own experiences, the future is not a done deal.

Perhaps the noisiest AI questioner is Gary Marcus, a cognitive scientist who co-founded an AI start-up and sold it to Uber in 2016. Altman once tweeted, “Give me the confidence of a mediocre deep-learning skeptic”; Marcus assumed it was a reference to him. He prefers the term “realist”.

He is not a doomster who believes AI will go rogue and turn us all into paper clips. He wants AI to succeed and believes it will. But, in its current form, he argues, it’s hitting walls.

Today’s large language models (LLMs) have learnt to recognise patterns but don’t understand the underlying concepts. They will therefore always produce silly errors, says Marcus. The idea that tech companies will produce artificial general intelligence by 2030 is “laughable”.

Generative AI is sucking up cash, electricity, water, copyrighted data. It is not sustainable. A whole new approach may be needed. Ed Zitron, a former games journalist who is now both a tech publicist and a tech critic based in Nevada, puts it more starkly: “We may be at peak AI.”" https://www.ft.com/content/648228e7-11eb-4e1a-b0d5-e65a638e6135

cassidy, to ai
@cassidy@blaede.family avatar

“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.

It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that us lowly individuals are beholden to.

https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m

#AI #GenAI #LLM #LLMs #OpenAI #ChatGPT #GPT #GPT4 #Sora #Gemini

cassidy,
@cassidy@blaede.family avatar

I guess we wait this one out until the “AI” bubble bursts due to the incredible subsidization the entire industry is undergoing. It is not profitable. It is not sustainable.

It will not last—but the damage to our planet and fallout from the immense amount of wasted resources will.

https://arstechnica.com/information-technology/2023/10/so-far-ai-hasnt-been-profitable-for-big-tech/

#AI #LLM #LLMs #GenAI #ChatGPT #GPT #OpenAI #Copilot #GitHubCopilot #Gemini #Sora

gerrymcgovern, to random
@gerrymcgovern@mastodon.green avatar

Asked if a restaurant could serve cheese nibbled on by a rodent, the Microsoft / New York City government official AI chatbot replied:

“Yes, you can still serve the cheese to customers if it has rat bites,” before adding that it was important to assess “the extent of the damage caused by the rat” and to “inform customers about the situation.”

AI is spewing out this sort of surreal garbage all over the world right now. AI is a monumental grift.

https://apnews.com/article/new-york-city-chatbot-misinformation-6ebc71db5b770b9969c906a7ee4fae21

simon_brooke,
@simon_brooke@mastodon.scot avatar

@ikt @gerrymcgovern This is a misunderstanding. LLMs have no semantic layer; consequently, they have no concept of truth and falsity. All they model is the statistical probability that words will fit together in a particular order.

No LLM can ever be 'right', except by accident (which, statistically, will sometimes happen).
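The word-order-statistics point above can be made concrete with a toy sketch. This is a hypothetical bigram counter over an invented three-sentence corpus, nothing like a real transformer, but it shows the same failure mode: the model outputs whichever continuation is most frequent, with no notion of whether it is true.

```python
# A model that only tracks which word tends to follow which
# "knows" frequency, not truth.
from collections import Counter, defaultdict

corpus = ("the cheese is safe . "
          "the cheese is contaminated . "
          "the cheese is safe").split()

# Count bigrams: for each word, which words follow it and how often?
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation."""
    return following[word].most_common(1)[0][0]

# "safe" follows "is" twice, "contaminated" once, so the model
# asserts the cheese is safe regardless of the actual facts.
print(most_likely_next("is"))
```

Scaled up by many orders of magnitude, that is the sense in which an LLM's output is "right" only when frequent patterns happen to coincide with true ones.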

ppatel, to ai
@ppatel@mstdn.social avatar

Are we heading into another bubble?

AI Is Putting the Silicon Back in Silicon Valley

A new startup called MatX from former Google engineers reflects a renewed enthusiasm for chipmakers.

https://www.bloomberg.com/news/articles/2024-03-26/ai-chip-startups-like-matx-storm-silicon-valley

ppatel, to LLMs
@ppatel@mstdn.social avatar

Large language models can do jaw-dropping things. But nobody knows exactly why.
And that's a problem. Figuring it out is one of the biggest scientific puzzles of our time and a crucial step towards controlling more powerful future models.

https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The appearance of large language models (LLMs) and other forms of generative AI portend a new era of disruption and innovation for the news industry, this time focused on the production and consumption of news rather than on its distribution. Large news organizations, however, may be surprisingly well-prepared for at least some of this disruption because of earlier innovation work on automating workflows for personalized content and formats using structured techniques. This article reviews this work and uses examples from the British Broadcasting Corporation (BBC) and other large news providers to show how LLMs have recently been successfully applied to addressing significant barriers to the deployment of structured approaches in production, and how innovation using structured techniques has more generally framed significant editorial and product challenges that might now be more readily addressed using generative AI. Using the BBC's next-generation authoring and publishing stack as an example, the article also discusses how earlier innovation work has influenced the design of flexible infrastructure that can accommodate uncertainty in audience behavior and editorial workflows – capabilities that are likely to be well suited to the fast-approaching AI-mediated news ecosystem." https://onlinelibrary.wiley.com/doi/10.1002/aaai.12168

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Models like ChatGPT and Claude are deeply dependent on training data to improve their outputs, and their very existence is actively impeding the creation of the very thing they need to survive. While publishers like Axel Springer have cut deals to license their companies' data to ChatGPT for training purposes, this money isn't flowing to the writers that create the content that OpenAI and Anthropic need to grow their models much further. It's also worth considering that these AI companies may have already trained on this data. The Times sued OpenAI late last year for training itself on "millions" of articles, and I'd bet money that ChatGPT was trained on multiple Axel Springer publications along with anything else it could find publicly available on the web.

This is one of many near-impossible challenges for an AI industry that's yet to prove its necessity. While one could theoretically make bigger, more powerful chips (I'll get to that later), AI companies face a kafkaesque bind where they can't improve a tool for automating the creation of content without human beings creating more content than they've ever created before. Paying publishers to license their content doesn't actually fix the problem, because it doesn't increase the amount of content that they create, but rather helps line the pockets of executives and shareholders. Ironically, OpenAI's best hope for survival would be to fund as many news outlets as possible and directly incentivize them to do in-depth reporting, rather than proliferating a tech that unquestionably harms the media industry." https://www.wheresyoured.at/bubble-trouble/
