vruz, to opensource
@vruz@mastodon.social avatar

Google "We Have No Moat, And Neither Does OpenAI"

Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI

https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

vdignum, to ai
@vdignum@mastodon.social avatar

High time to move "From Fear to Action: AI Governance and Opportunities for All"

Read it: https://lnkd.in/d4CTjR4z


Kevin Baum, Joanna Bryson, Frank Dignum, Virginia Dignum, Marko Grobelnik, Holger Hoos, Morten Irgens, Paul Lukowicz, Catelijne Muller, Francesca Rossi, Andreas Theodorou, Ricardo Vinuesa

EgyptianAphorist, to science

🧠 #AI Makes Mind Reading Possible

Researchers at the #University of #Texas have developed a #GPT-based decoder that can translate thoughts into text using non-invasive fMRI scans.

Participants trained the decoder by listening to #podcasts for 16 hours in an #fMRI scanner.

The AI system then generated text as participants listened to or imagined a new #story, capturing general #thoughts or #ideas. 👀

#brain #science #tech #news #artificialintelligence #technology
#ethics #technews #future


lari, to random
@lari@suomi.social avatar

😯 "Scientists have invented a language decoder that can translate a person’s thoughts into text using an artificial intelligence (AI) transformer similar to ChatGPT, reports a new study."

😬 "However, the team that made the decoder also warned that brain-reading platforms could eventually have nefarious applications, including as a means of surveillance for governments and employers. Though the researchers emphasized that their decoder requires the cooperation of human subjects to work, they argued that “brain–computer interfaces should respect mental privacy,”"

https://www.vice.com/en/article/4a3w3g/scientists-use-gpt-ai-to-passively-read-peoples-thoughts-in-breakthrough?utm_source=vice_facebook&utm_medium=social

skullvalanche, to random
@skullvalanche@gladtech.social avatar

Hinton Today: https://arstechnica.com/information-technology/2023/05/warning-of-ais-danger-pioneer-geoffrey-hinton-quits-google-to-speak-freely/

Hinton a month ago: https://youtu.be/qpoRO378qRY

I find the "oh, I know them, they're good people" argument about A.I. researchers to be such a tired and garbage statement. It doesn't matter how good the scientists who worked on the atomic bomb were, they still ushered in the age of mutually assured destruction.

I'd like to reiterate that my concern about AI is not that it's going to become conscious, or that it's going to displace a huge portion of the workforce (it will), but that before it does any of that it's going to be leveraged by bad actors to flood society with truthy-sounding bullshit the likes of which we've never seen, and it will lead to killing in the streets.

QAnon is just a canary in the coal mine.

itnewsbot, to random
@itnewsbot@schleuss.online avatar

Warning of AI’s danger, pioneer Geoffrey Hinton quits Google to speak freely - https://arstechnica.com/?p=1935563

glynmoody, to random
@glynmoody@mastodon.social avatar

Scientists Use AI to Passively Read People's Thoughts in Breakthrough - https://www.vice.com/en/article/4a3w3g/scientists-use-gpt-ai-to-passively-read-peoples-thoughts-in-breakthrough oh, great..

antoniolieto, to random Italian
@antoniolieto@fediscience.org avatar
clacke, to random en-us

People overestimate what today's "AI" is and what it can actually do, because memes are spreading that describe any machine-assisted process in a highly glossed-over form, ignoring the human effort required to make it work.

The naive impression is that you just give a generative engine a prompt and the result comes out fully formed. The actual process is that the people behind the project used multiple purpose-built engines, iterated on prompts for each engine until it output something semi-coherent, and then used human effort to tie the results together.

This is currently spreading as "this AI-generated pizza commercial" with no further explanation, but Tom's Hardware interviewed the actual people who made it work:

www.tomshardware.com/news/ai-p…

andrewfeeney, to random
@andrewfeeney@phpc.social avatar

Almost all of the contact form submission spam I receive now, through one service I monitor, is about AI-related products and services.

itnewsbot, to random
@itnewsbot@schleuss.online avatar

Nuke-launching AI would be illegal under proposed US law - https://arstechnica.com/?p=1935261 #u.s.government #u.s.senate

serenissimaj, to random
@serenissimaj@sciences.social avatar

Could those with knowledge of LLMs share references/thoughts/sources about watermarking (i.e. the embedding of a signal/pattern to flag non-human text/tokens)?

This - https://arxiv.org/pdf/2301.10226.pdf - is helping me understand, but I have basic questions, e.g.:

  • Who creates & embeds the watermark?
  • Can a teacher do it?
  • Would the watermark work on all LLMs (incl. commercial)?

I'm trying hard to understand 😣 😣 😣
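For what it's worth, the core idea in that paper can be sketched in a toy form (the vocabulary size, constants, and function names below are my own illustration, not the paper's code): whoever runs the model biases sampling toward a pseudo-random "green" subset of the vocabulary at each step, and anyone who knows the hashing scheme can later check whether a text is suspiciously green.

```python
import hashlib
import random

VOCAB = list(range(1000))  # toy vocabulary of token ids
GREEN_FRACTION = 0.5       # share of the vocabulary marked "green" each step

def green_list(prev_token: int) -> set:
    # Hash the previous token to seed a RNG, then draw a pseudo-random
    # "green" subset of the vocabulary. During generation the logits of
    # these tokens get a small bonus, so watermarked text over-uses them;
    # the detector only needs this same function, not the model.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(GREEN_FRACTION * len(VOCAB))))

def green_score(tokens: list) -> float:
    # Fraction of tokens that fall in their step's green list.
    # Unwatermarked text hovers near GREEN_FRACTION; heavily
    # watermarked text scores close to 1.0.
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

On this toy version, only whoever controls generation can embed the mark (so a teacher could at best run detection, and only if the provider shares the hashing scheme), and the mark is specific to one scheme and vocabulary rather than something that carries across all LLMs.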

@amills @emilymbender @timnitGebru

schizanon, to random
@schizanon@mas.to avatar

We've already got more code than we need. Let me know when the AI learns how to delete code.

filipw, to random
@filipw@mathstodon.xyz avatar

so after Reddit, now StackOverflow wants companies to pay for using content from their site to train LLMs (see https://www.wired.com/story/stack-overflow-will-charge-ai-giants-for-training-data/)

but who pays the actual, you know, content creators?

which reminds me - I am hereby setting the price of $1 billion for the right to use my blog's content https://www.strathweb.com in the training data set. thank you

feynman, to random

After a week of data, I'll study the change in sentiment from @kali following the endless loop with @elelem from @eob, to understand how two unsupervised agents behave over time.

KathyReid, to random
@KathyReid@aus.social avatar

ICYMI: Do you use #ChatGPT? Want to help a #PhD candidate with their #research? My colleague at #ANU's School of #Cybernetics, @nedcpr, is currently running a #survey - and he would love your help by filling it out:

https://hci.social/@nedcpr/110146578378017764

Boosts appreciated - and thank you! 🥰

#HCI #CHI #CHI2023 #GPT #GPT3 #GPT4

b_cavello, to gpt
@b_cavello@mastodon.publicinterest.town avatar

How I’ve been feeling about all these AI-for-search updates 🥴

stroughtonsmith, to random
@stroughtonsmith@mastodon.social avatar

I finally plugged GPT-3 into @pastelapp to generate color palettes from text prompts, and it just does exactly what I hoped it might do. How I wish I could do this with an on-device Siri API call; I don't intend to ship with a dependency on OpenAI, so for now this is a 'maybe someday' feature

stroughtonsmith,
@stroughtonsmith@mastodon.social avatar

If you're curious about what's happening behind the scenes, this is the setup prompt I'm passing to GPT-3. I’m defining a JSON data structure, and redefining what a palette is in the process (as GPT sees the word 'palette' and tries to output 5 colors every time if I don’t). I tell it to be fun and creative in its palette generation and naming, and to follow related topics for keywords. The results speak for themselves, though it does still have a habit of duplicating palettes under different names
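A setup along the lines described might look like the sketch below — the real prompt is only in the attached screenshot, so the wording, field names, and parser here are my guesses at the shape, not the actual implementation:

```python
import json

# Hypothetical setup prompt in the spirit described above: define the JSON
# shape up front and redefine "palette" so the model isn't anchored to its
# usual five-color meaning.
SETUP_PROMPT = """You generate color palettes as JSON only, with no prose.
A palette is {"name": string, "colors": [string, ...]} where each color is
a hex code like "#AABBCC" and a palette may have 2 to 10 colors. Be fun and
creative with palette names, follow topics related to the user's keywords,
and respond with a JSON array of palettes."""

def parse_palettes(model_output: str) -> list:
    # Validate the reply against the shape the prompt demands; models
    # drift from instructions, so filter defensively.
    result = []
    for p in json.loads(model_output):
        colors = [c for c in p.get("colors", [])
                  if isinstance(c, str) and c.startswith("#") and len(c) == 7]
        if p.get("name") and colors:
            result.append({"name": p["name"], "colors": colors})
    return result
```

Defensive parsing is the interesting design choice here: since the model can emit malformed colors or empty names, the client keeps only entries that match the schema rather than trusting the output wholesale.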


k8em0, to gpt

Ah yes, another high-profile bug bounty forcing non-disclosure — even for fixed bugs.
🤦🏻‍♀️
It’s the bugs they won’t fix that will put users at risk.
All orgs need a vulnerability disclosure program that doesn’t ban disclosure.
But what do I know.
I just coauthored the standard.

“But it’s a bug bounty & they are paying so it’s fair to ask for non disclosure”
That’s fine if everything submitted is paid work, like a penetration test.
Oh, only paying selectively & only the first of any duplicates?
That’s labor abuse & the worst gig economy deal out there.

“But pen tests don’t get you all the eyeballs”

Neither do bug bounties - you get a random number of eyeballs willing to sign NDAs.

If orgs actually care about security, they cast as wide a net as possible to get the best researchers - especially those who won’t sign NDAs.

“This is better than no bug bounty”

No, it isn’t.

It breeds a false sense of security for users & the org itself, while actively excluding the highest skilled researchers who will never sign an NDA for speculative pay or who want to see the bugs FIXED as their motivation.

metaphil, to random German
@metaphil@chaos.social avatar

“Open letter: Musk, Wozniak, and co. demand a forced pause for models like GPT-4”

The signatories see “profound risks to society and humanity”. Duh.

Aside from the fact that many of them are US academia, in Musk’s case doesn’t this smell more like “hey crap, I slept through a trend, wait for me and my business models”? 🤡

🌐 https://www.heise.de/news/Tech-Prominenz-fordert-Pausiert-die-Entwicklung-der-Riesen-KI-Modelle-8178216.html

gyokusai, to Futurology
@gyokusai@mastodon.social avatar

The same people who looked down on you as some kind of intellectual klutz if you didn’t believe in mind upload, Mars colonies, or fully autonomous driving are at it now with the imminent rise of AGI.

The same people who couldn’t be bothered to answer how scanning even the tiniest part of the brain would work, or surviving in the Gobi desert, or creating the technological infrastructure SAE Level 5 requires, can’t be bothered now to explain how AGI would “emerge” from text-synthesis machines.

eliocamp, to StableDiffusion
@eliocamp@mastodon.social avatar

Good evening!

eliocamp,
@eliocamp@mastodon.social avatar

I'm recycling this song created using ChatGPT, including the Python program to write MIDI files.

https://eliocamp.github.io/codigo-r/en/2022/12/chatpgt-midi-music/

gabrielesvelto, to random
@gabrielesvelto@fosstodon.org avatar

Alright, let's start with a shitposting thread because it's easier. This was my first contribution to Dune Sietchposting 1/

gabrielesvelto,
@gabrielesvelto@fosstodon.org avatar

I know many of you are worried about the potential implications of using ChatGPT for nefarious purposes, as well as its tendency to produce reasonable-sounding bullshit. But I see enormous potential for shitposting. Take this for example: it would have totally averted interplanetary jihad

7/

