urusan,
@urusan@fosstodon.org

This is an interesting video:
https://youtu.be/dDUC-LqVrPU

TL;DW: We're starting to see early evidence of diminishing returns with our current AI architectures. If this holds, these systems will eventually improve only logarithmically with scale, making superintelligence (at least with our current architectures) practically impossible to achieve. The issue is that the models need too much data on each specific task to perform well across all of them.
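To make "improve only logarithmically" concrete, here's a minimal sketch assuming a Chinchilla-style power-law loss curve; the constants are made up for illustration, not taken from the video:

```python
# Toy scaling curve: loss(N) = E + A / N**alpha, a power law in
# parameter count N. The constants below are illustrative only.
E, A, ALPHA = 1.7, 400.0, 0.34

def loss(n_params: float) -> float:
    """Hypothetical loss at a given parameter count."""
    return E + A / n_params**ALPHA

prev = None
for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
    cur = loss(n)
    gain = "" if prev is None else f" (gain {prev - cur:.3f})"
    print(f"{n:.0e} params -> loss {cur:.3f}{gain}")
    prev = cur
```

Each 10x jump in scale buys a smaller absolute improvement than the last one, which is the practical sense in which returns flatten out.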

dcz,
@dcz@fosstodon.org

@urusan Superintelligence in the "singularity" way is only achievable by self-improvement, i.e. new intelligent systems make creating intelligent systems of the next generation easier.
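A toy way to see why the feedback loop matters (both update rules below are invented purely to contrast the two regimes, not claims about real systems):

```python
# Toy feedback-loop model: each generation's capability determines
# how much the next generation improves.

def compounding(cap: float) -> float:
    # self-improvement compounds: runaway ("singularity") regime
    return cap * 1.5

def diminishing(cap: float) -> float:
    # each generation adds less than the last: plateau-ish regime
    return cap + 10.0 / cap

a = b = 1.0
for _ in range(10):
    a, b = compounding(a), diminishing(b)
print(f"after 10 generations: compounding={a:.1f}, diminishing={b:.1f}")
```

Under the first rule you get exponential takeoff; under the second, growth slows every generation. The whole disagreement is about which curve reality is on.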

Are AI researchers using AI to research AI? I'm not seeing that talked about; AI's most visible applications are outside of AI.

On the other hand...

https://www.nature.com/articles/d41586-021-01515-9

I wonder why it got retracted. Still, that's a feedback loop measured in years.

bornach,
@bornach@masto.ai

@dcz @urusan
Probably retracted because peer reviewers later found out that Google had given their AI an unfair advantage through the use of EDA tools (Synopsys suite)
https://www.theregister.com/2023/03/27/google_ai_chip_paper_nature/

urusan,
@urusan@fosstodon.org

The evidence provided in the video is pretty light (though more is alluded to), but I do think this is the most likely possibility.

Humans seem to operate at a larger scale than our current AI systems, and we aren't superintelligent in this hypothesized "knows everything" kind of way.

The concept that larger scale will achieve outsized results seems to be predicated on one of these:

  • Humans are inferior to this technology, OR
  • Humans are capable of knowing everything too

bornach,
@bornach@masto.ai

@urusan
See also
https://youtu.be/nkdZRBFtqSs
on the post-AI-bubble implications: what happens to all the LLM systems that are predicted to fall far short of the hype.

deshipu,
@deshipu@fosstodon.org

@urusan surprised_pikachu.jpg

The whole idea of "inhuman intelligence" is an oxymoron. Intelligence itself can only be defined in human terms. You can make a machine with a "puzzle-solving power" surpassing that of a human (in fact, it's trivial to do), but that won't give it any advantage in dealing with human problems. There is a reason why the most successful people out there are kinda moronic if you look at them more closely.

dcz,
@dcz@fosstodon.org

@deshipu @urusan Defining intelligence narrowly has its benefits, but then you need a word for the kind of ???gence that makes your dog smart, or the machine helpful.

I don't think anyone is using the narrow definition in this context; it's pretty useless for what is being said.

deshipu,
@deshipu@fosstodon.org

@dcz @urusan I don't think anyone is using any definition in this context, to be honest. I haven't heard any workable one, at least.

Everyone just goes "imagine this man in a box that is incredibly smart, but completely under your control, not like those other smart people who always outsmarted you", and the stupid "thought leaders" just can't stop salivating.

And no, we don't need a separate word for what makes a dog seem intelligent. It's still the same human thing.

dcz,
@dcz@fosstodon.org

@deshipu @urusan I agree that the definitions are unclear and no one bothers to define them. But choosing a definition which is clearly useless in the context used by one side doesn't help in finding truth.

deshipu,
@deshipu@fosstodon.org

@dcz @urusan But it's not useless. I would say it is in fact extremely useful in that it exposes the meaningless drivel they are trying to feed us for what it is: a bunch of lies and fancies without any trace of internal logic, based entirely on vague feelings and handwaving. If that is not a good use of a definition, then I don't know what is.

dcz,
@dcz@fosstodon.org

@deshipu @urusan It's meaningless if you redefine it to remove the original meaning, that's for sure.

And it's useless if you want to have a conversation rather than a monologue.

deshipu,
@deshipu@fosstodon.org

@dcz @urusan I'm not redefining it; I'm using it the way it has been used for as long as it has existed in human languages. I'm not aware of any meaning more original than that. It's those who try to apply the term to mechanisms who are redefining it.

I might be interested in discussing problems involving imaginary things, like angels dancing on the point of a needle or intelligent pieces of rock, but only in the context of the story being told, for fun.

dcz,
@dcz@fosstodon.org

@deshipu @urusan Sure. You merely chose a definition which has nothing to do with the one that the other person was using. Good luck having a conversation.

maegul,
@maegul@hachyderm.io

@urusan

Yea, if true, it's basically the close of this hype curve.

Some would say "already!", but back around GPT-1 or 2, IIRC, the returns seemed roughly linear, so something like GPT-3 was vaguely foreseeable; the thinking was that we weren't quite sure how good they'd get if they got bigger.
