remixtures, to ML Portuguese
@remixtures@tldr.nettime.org avatar

: "A recent innovation in the field of machine learning has been the creation of very large pre-trained models, also referred to as ‘foundation models’, that draw on much larger and broader sets of data than typical deep learning systems and can be applied to a wide variety of tasks. Underpinning text-based systems such as OpenAI's ChatGPT and image generators such as Midjourney, these models have received extraordinary amounts of public attention, in part due to their reliance on prompting as the main technique to direct and apply them. This paper thus uses prompting as an entry point into the critical study of foundation models and their implications. The paper proceeds as follows: In the first section, we introduce foundation models in more detail, outline some of the main critiques, and present our general approach. We then discuss prompting as an algorithmic technique, show how it makes foundation models programmable, and explain how it enables different audiences to use these models as (computational) platforms. In the third section, we link the material properties of the technologies under scrutiny to questions of political economy, discussing, in turn, deep user interactions, reordered cost structures, and centralization and lock-in. We conclude by arguing that foundation models and prompting further strengthen Big Tech's dominance over the field of computing and, through their broad applicability, many other economic sectors, challenging our capacities for critical appraisal and regulatory response." https://journals.sagepub.com/doi/full/10.1177/20539517241247839
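The abstract's claim that prompting "makes foundation models programmable" can be illustrated with a minimal sketch (hypothetical, not taken from the paper): the same underlying model is repurposed for entirely different tasks purely by changing the prompt text, with no retraining. The `model_call` function below is a stand-in placeholder, not a real foundation-model API.

```python
# Minimal sketch of prompting as a programming technique: one model
# endpoint is "reprogrammed" for different tasks by swapping prompts.
# model_call is a hypothetical placeholder, not a real API.

def build_prompt(task: str, text: str) -> str:
    """Compose a task-specific prompt around user-supplied text."""
    templates = {
        "summarize": "Summarize the following text in one sentence:\n{t}",
        "translate": "Translate the following text into French:\n{t}",
        "classify": "Label the sentiment of this text as positive or negative:\n{t}",
    }
    return templates[task].format(t=text)

def model_call(prompt: str) -> str:
    # Placeholder: a real system would send `prompt` to a hosted model
    # and return its completion.
    return f"<model output for: {prompt.splitlines()[0]}>"

for task in ("summarize", "translate", "classify"):
    print(model_call(build_prompt(task, "Foundation models are widely used.")))
```

The design point the paper makes is visible here: the task logic lives in natural-language templates rather than in code, which is what lets non-programmer audiences treat the model as a platform.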

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "More broadly, several regulatory approaches under consideration are likely to have a disproportionate impact on open foundation models and their developers, without meaningfully reducing risk. Even though these approaches do not differentiate between open and closed foundation model developers, they yield asymmetric compliance burdens. For example, legislation that holds developers liable for content generated using their models or their derivatives would harm open developers as users can modify their models to generate illicit content. Policymakers should exercise caution to avoid unintended consequences and ensure adequate consultation with open foundation model developers before taking action."

https://hai.stanford.edu/issue-brief-considerations-governing-open-foundation-models

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Sadly, the Franco-German-Italian volte-face has a simpler, more sordid, explanation: the power of the corporate lobbying that has been brought to bear on everyone in Brussels and European capitals generally. And in that context, isn’t it interesting to discover (courtesy of an investigation by Time) that while Sam Altman (then and now again chief executive of OpenAI after being fired and rehired) had spent weeks touring the world burbling on about the need for global AI regulation, behind the scenes his company had lobbied for “significant elements of the EU’s AI act to be watered down in ways that would reduce the regulatory burden on the company”, and had even authored some text that found its way into a recent draft of the bill.

So, will the EU stand firm on preventing AI companies from marking their own homework? I fervently hope that it does. But only an incurable optimist would bet on it."

https://www.theguardian.com/commentisfree/2023/dec/02/eu-artificial-intelligence-safety-bill-silicon-valley-lobbying

Bundesverband, to ai German
@Bundesverband@verbraucherzentrale.social avatar

The AI Act must not rely on wishy-washy corporate self-commitments, as demanded by DE, FR and IT. It is good that @SLagodinsky and @AxelVossMdEP are pushing back, standing up for consumers and demanding binding rules!
https://x.com/AxelVossMdEP/status/1726908870831096242?s=20

ErikJonker, to Bulgaria
@ErikJonker@mastodon.social avatar

Prof. Yoshua Bengio warns in an op-ed for German newspaper Tagesspiegel: exempting foundation models from the AI Act would be both dangerous & economically costly. It would make the AI Act "outdated from day one".
https://background.tagesspiegel.de/digitalisierung/die-eu-droht-eine-einzigartige-chance-zu-verspielen

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

: "A technical meeting on the EU’s AI regulation broke down on Friday (10 November) after large EU countries asked to retract the proposed approach for foundation models. Unless the deadlock is broken in the coming days, the whole legislation is at risk.

The AI Act is a landmark bill to regulate Artificial Intelligence following a risk-based approach. The file is currently in the last phase of the legislative process, with the main EU institutions gathered in so-called trilogues to hash out the final provisions of the law.

Foundation models have become the sticking point in this late phase of the negotiations. With the rise of ChatGPT, a popular chatbot based on OpenAI’s powerful GPT-4 model, EU policymakers have been wondering how best to cover this type of AI in the upcoming law.

At the last political trilogue on 24 October, there seemed to be a consensus to introduce rules for foundation models following a tiered approach, namely, introducing tighter rules for the most powerful ones bound to have more impact on society."

https://www.euractiv.com/section/artificial-intelligence/news/eus-ai-act-negotiations-hit-the-brakes-over-foundation-models/

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

: "The final layer is made of General Purpose AI systems like ChatGPT, intended as systems “that may be based on an AI model, can include additional components such as traditional software and through a user interface has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”

The Spanish presidency proposed obligations for General Purpose AI system providers when they enter into licensing agreements with downstream economic operators that might employ the system for one or more high-risk use cases.

These obligations include stating in the instructions the high-risk uses for which the system may be used, and providing technical documentation and all the information relevant for the downstream AI provider to comply with the high-risk requirements.

The providers of General Purpose AI systems can also prohibit certain high-risk uses. In this case, they have to take all necessary and proportionate measures to detect and enforce possible misuses."

https://www.euractiv.com/section/artificial-intelligence/news/spanish-presidency-pitches-obligations-for-foundation-models-in-eus-ai-law/

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

: "The DMA failed to designate any hyperscaler as a gatekeeper because its quantitative thresholds did not fit the cloud sector.

Euractiv understands that France and Germany are pushing the European Commission to launch a market investigation under the qualitative criterion. Still, this process could take years and might require lengthy litigation to conclude.

Meanwhile, the AI market is moving at breakneck speed, with new generations of foundation models released every few months.

According to Jonathan Sage, a senior policy advisor at Portland, without the DMA’s cloud designation, there is little the EU can do to prevent them from creating dependencies between their cloud infrastructure and the foundation models."

https://www.euractiv.com/section/artificial-intelligence/news/are-eu-regulators-ready-for-concentration-in-the-ai-market/

asusarla, to random

New piece for @TheConversationUS on the Biden Administration's sweeping new executive order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"

#aisafety #executiveorder #responsibleai #foundationmodels

https://theconversation.com/biden-administration-executive-order-tackles-ai-risks-but-lack-of-privacy-laws-limits-reach-216694

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

University of Chicago researchers seek to “poison” AI art generators with Nightshade

On Friday, a team of researcher... - https://arstechnica.com/?p=1978501

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The top-scoring model scores only 54 out of 100. No major foundation model developer is close to providing adequate transparency, revealing a fundamental lack of transparency in the AI industry.

The mean score is just 37%. Yet, 82 of the indicators are satisfied by at least one developer, meaning that developers can significantly improve transparency by adopting best practices from their competitors.

Open foundation model developers lead the way. Two of the three open foundation model developers get the two highest scores. Both allow their model weights to be downloaded. Stability AI, the third open foundation model developer, is a close fourth, behind OpenAI."

https://crfm.stanford.edu/fmti/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Foundation models are already being integrated into commonly used applications: Google and Microsoft’s Bing embed them in search engines, Photoshop integrates image generation models, and firms like Morgan Stanley use large language models (LLMs) for internal knowledge search and retrieval.

There is some optimism in policy, public sector and industry settings about the potential for these models to enhance public services by:

  • automating review of complex contracts and case files (document analysis)
  • catching errors and biases in policy drafts (decision support)
  • powering real-time chatbots for public enquiries (improvements in management of public enquiries)
  • consolidating knowledge spread across databases into memos (knowledge management).

However, there are also risks around issues like biases, privacy breaches, misinformation, security threats, overreliance, workforce harms and unequal access. As AI technologies advance rapidly, the government must consider carefully how to use foundation models in the public sector responsibly and beneficially. This report provides policymakers and public-sector leaders with information to help them to do this. We start with an overview of foundation models and their potential use cases in central and local government in the UK. We then consider their risks and opportunities in the public sector, as highlighted by public-sector leaders, researchers and civil society."

https://www.adalovelaceinstitute.org/evidence-review/foundation-models-public-sector/

KathyReid, to TwitterMigration
@KathyReid@aus.social avatar

Good morning everyone! Here's my latest post, where I curate interesting accounts for you to follow from across the :fediverse:

@maryrobinette is a , and I am listening to her incredible series at the moment. If you love (esp hard scifi) you should read it, too! 🇺🇸

@sayashk is a candidate at , who is researching failures in (he's also co-running a workshop on open in about 15 hours, see my previous posts for more info) 🇺🇸

@michcampbell is Dr Micha Campbell and she is a living on country 🇦🇺

@mthv is a who works in at 🇫🇷

@astrolori is Lori and she is into , , and 🇨🇦

@pandas_dev is the official account for , the tool 🐍 📊

@jessie is a lover of and helps run , @mozilla 's open set, which now supports over 100 languages. She also teaches and loves . She's awesome you should follow her 🇬🇧

That's all for now, please do share your own lists so we can create deeper connections, and a tightly-connected community here

I'm reminded here of @maryrobinette's short story - "Red Rockets" - "She built something better than fireworks. She built community."

danmcquillan, to ai
@danmcquillan@kolektiva.social avatar

"Models will... demonstrate advanced medical reasoning abilities". No, they will scale machinic malpractice. Just fund doctors, nurses and community service properly ffs.
'Foundation models for generalist medical artificial intelligence'
https://www.nature.com/articles/s41586-023-05881-4
#healthcare
