neurovagrant, (edited)
@neurovagrant@masto.deoan.org avatar

deleted_by_author

    rylancole,

    @neurovagrant @pluralistic This stuff is incredibly expensive & it’s amazing how much money must be getting burned right now by some players to establish a market… but not everybody can become Uber.

    I can see VC money drying up eventually, but I’m curious about the tech giants that are all in on sinking money into this; Microsoft, Meta, Google, & Amazon. The new Meta Imagine image generator is crazy fast compared to others, yet completely free

    These companies want to become platforms for everyone else to run their AI and I think they’ll be willing to run AI services as loss leaders for a very long time, which could subsidize this bubble for years & years unlike something like the crypto bubble

    mttaggart,

    @neurovagrant @pluralistic I have to say, I think this is a rare miss.

    While I agree about the high-risk applications and the low-value, risk-tolerant ones, I feel like a massive middle ground has been elided here for reasons that are unclear.

    We're already seeing content producers move to using LLMs to do this work in lieu of human labor. That decrease in cost is a value proposition that, as long as the cost of operating anything like a workable model doesn't explode, will continue to be appealing.

    Maybe, maybe there's a tipping point where we all collectively agree that LLM output isn't good enough to pass muster. But I find that highly unlikely.

    Instead, I suspect it will be good enough to slip by all but the closest scrutiny, and that is at once valuable for content creation farms and extraordinarily dangerous for everyone else.

    pluralistic,
    @pluralistic@mamot.fr avatar

    @mttaggart @neurovagrant Those are low-value applications. Individual "content producers" are low-waged and precarious and while they may pay for automation, they can't afford much. Their bosses, meanwhile, will only pay a lot if they can fire their workforce - which would leave the news, textbooks, and other risk-intolerant applications in the automation trap.

    mttaggart,

    @pluralistic @neurovagrant I am not clear which "automation trap" you're referring to—sorry if I didn't do some prior reading!

    But I'm not sure that "the news" and other content sources are as risk-intolerant as is being claimed here. To say nothing of the means by which people ingest news—see 2016. It's not always via traditional outlets. There's plenty of money to be made in generating disinformation, which is both low-risk and lucrative at scale, a scale which is enabled by LLMs.

    But at any rate, when the bubble pops, if it pops, I contend the real casualty will be factual text being the majority of what's present on the internet.

    pluralistic,
    @pluralistic@mamot.fr avatar

    @mttaggart @neurovagrant You can't ask an AI to produce the news unless you get a reporter to verify what the AI says. If that reporter does a good enough job to call it "the news," it will take nearly as many hours as reporting the news from scratch. The time-consuming, labor-intensive part of "the news" isn't writing the words.

    nazgul,

    @pluralistic @mttaggart @neurovagrant And furthermore, correcting and proofing generated content that is mostly correct, but wrong in weird ways, turns out to be a job that humans are not good at. This is a similar problem to Tesla autopilot. The technology is good enough to bore us into not noticing when it’s wrong.

    mttaggart,

    @pluralistic @neurovagrant Ideally, yes, but we are hardly in the era of responsible journalism. Call it what you want, but what people consume as "the news" falls far short of that standard now, and I see no reason to believe that access to LLMs would diminish that.

    pluralistic,
    @pluralistic@mamot.fr avatar

    @mttaggart @neurovagrant That's not "the news." That's just spam. It makes money from remnant ads at CPMs so low as to be nearly indistinguishable from zero. They do not have disposable income to buy high-dollar licenses. They are a canonical low-value, risk-tolerant application.

    mttaggart,

    @pluralistic @neurovagrant How would you classify something like, say, Gizmodo, or Ars Technica, or other "magazine" publications that have a demonstrated interest in cutting costs at the expense of quality content? Imagining a state of the art slightly improved from now, is there no incentive for them to use LLMs to reduce costs to produce content?

    pluralistic,
    @pluralistic@mamot.fr avatar

    @mttaggart @neurovagrant That is a gross mischaracterization of those two outlets. They employ both skilled investigative journalists and hard-nosed fact-checkers.

    I mean, this is a thing you are empirically wrong about.

    Source: I have written for both and have first-hand knowledge of their internal processes.

    mttaggart,

    @pluralistic @neurovagrant I know they do! And I like those outlets a lot, but don't they want to reduce costs?

    Regardless, I certainly don't have insider knowledge of these operations like you do, and so if there's really no way you can imagine LLMs being of value to them, or to any organization that produces content, then that's that.

    For what it's worth, I really, really hope you're right.

    pluralistic,
    @pluralistic@mamot.fr avatar

    @mttaggart @neurovagrant Yes, they want to reduce costs, but only in service to making more money. If they cut costs and this relegates them to running bottom-feeder remnant ads at low CPMs, they will see a net loss.

    mttaggart,

    @pluralistic @neurovagrant Agreed. So it seems like at the end of the day, the issue is the threshold for acceptable content. I suspect my estimated threshold for what people will tolerate (and share) is lower than yours. And again, I really hope you're right.

    darkuncle,

    @mttaggart @pluralistic @neurovagrant the encouraging flipside here is that as the deluge of disinfo and generated garbage content grows, the value of human-curated, trustworthy content also goes up (as its rarity increases).

    mttaggart,

    @darkuncle @pluralistic @neurovagrant Yes, indeed, although the same number of needles (or fewer) in an exponentially larger haystack is not a great state of affairs.

    darkuncle,

    @mttaggart I have a pretty strong feeling that after the hype cycle dies down, and only the use cases with actual legitimate value remain, we will eventually see less garbage output. (Risk: quality improves as well, to the point that it becomes very difficult to sort the generated stuff from the real stuff, and eventually people start asking what even is the difference and how do you define "real" …)

    RyunoKi,
    @RyunoKi@layer8.space avatar

    @darkuncle @mttaggart

    I hope before we reach that point, the impact on the environment (energy, water) becomes so grave that Big Data isn't sustainable.

    mttaggart,

    @RyunoKi @darkuncle I mean I'd like to avoid that impact altogether...

    And it kinda seems like as we get smaller models trained, some of this is ameliorated. Also we don't know what more specialized hardware will do to the compute demands.

    But broadly, yes, I'd love for this to stop before we get to an AI-induced energy crisis.

    RyunoKi,
    @RyunoKi@layer8.space avatar

    @mttaggart @darkuncle Realistically, business will only start to care once we stop externalising the effects and make them pay.

    We have CO2 reports (that often look bleak) and Corporate Social Responsibility + ESG. That might be a start.

    A friend introduced me to the concept of

    https://limited.systems/articles/frugal-computing/

    RyunoKi,
    @RyunoKi@layer8.space avatar

    @mttaggart @darkuncle

    As a web developer, I consider this interesting:

    https://github.com/thegreenwebfoundation/co2.js/
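
    To make the co2.js idea concrete, here is a back-of-the-envelope sketch of the kind of estimate that library automates. The constants are assumptions taken from co2.js's simple "OneByte" model (roughly 1.805 kWh of energy per GB transferred, and a global-average grid intensity of about 442 g CO2 per kWh); the library's own documentation has the current figures and its more detailed models.

```typescript
// Assumed constants from co2.js's "OneByte" model -- check the library
// docs for the values it actually ships with today.
const KWH_PER_GB = 1.805;        // assumed network + device energy per gigabyte
const GRID_G_CO2_PER_KWH = 442;  // assumed global-average grid carbon intensity

// Estimate grams of CO2 emitted to transfer a given number of bytes.
function gramsCo2ForTransfer(bytes: number): number {
  const gigabytes = bytes / 1e9;
  return gigabytes * KWH_PER_GB * GRID_G_CO2_PER_KWH;
}

// A 2 MB page weight works out to roughly a gram and a half per view:
console.log(gramsCo2ForTransfer(2_000_000).toFixed(3)); // ≈ 1.596 g CO2
```

    Small per-view numbers like this are exactly why page weight matters at scale: multiplied by millions of views, the grams add up to tonnes.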

    RyunoKi,
    @RyunoKi@layer8.space avatar

    @mttaggart @darkuncle As bloggers we could include API checks to

    https://www.thegreenwebfoundation.org/green-web-check/

    to nudge more towards sustainable hosting.
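
    The Green Web Foundation also exposes that check as a JSON API (a "greencheck" endpoint), so a blog could query it programmatically. A minimal sketch, assuming the v3 endpoint shape and a `green` boolean in the response; verify both against the foundation's current API docs before relying on them:

```typescript
// Assumed response shape -- field names are taken from the public API docs
// and should be verified against the current spec.
type GreencheckResult = {
  url: string;
  green: boolean;
  hosted_by?: string;
};

// Build the check URL for a hostname (assumed v3 endpoint path).
function greencheckUrl(hostname: string): string {
  return `https://api.thegreenwebfoundation.org/api/v3/greencheck/${encodeURIComponent(hostname)}`;
}

// Query the API (requires a runtime with global fetch, e.g. Node 18+).
async function isGreenHosted(hostname: string): Promise<boolean> {
  const res = await fetch(greencheckUrl(hostname));
  const body = (await res.json()) as GreencheckResult;
  return body.green;
}
```

    A badge or footer note could then be rendered from the result, nudging readers (and hosts) toward greener infrastructure.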

    mttaggart,

    @RyunoKi @darkuncle This is great; thank you!

    RyunoKi,
    @RyunoKi@layer8.space avatar

    @mttaggart @darkuncle You're welcome.

    There are papers on the subject if you want to drill deeper.

    sidereal,

    @pluralistic @mttaggart @neurovagrant This is why I don't understand what this type of AI is "for"

    People are like "you can use it to write a novel" but I actually like writing novels, I don't want to skip that part.

    People are like "you can use it to summarize a text" but how can I trust it?

    I just don't see a use case. I've been feeling like I live in the Emperor's New Clothes for like a year now.

    LLMs replace typing. The difficult part of writing/coding etc. is not the typing.

    jamalix,

    @sidereal @pluralistic @mttaggart @neurovagrant One serious application could be generating those meaningless slides management likes to show its employees every now and then to prove they're actually being useful. I can't imagine anyone likes building those and they usually don't contain any relevant info so it might as well be fabricated.

    I don't think that justifies the existence of this kind of "AI" though, because the existence of these slides isn't justified in the first place...

    pluralistic,
    @pluralistic@mamot.fr avatar

    @jamalix @sidereal @mttaggart @neurovagrant no one pays a lot of money for meaningless slides

    ZILtoid1991,

    @sidereal @pluralistic @mttaggart @neurovagrant
    I'm pretty much thinking like this when it comes to using AI for development.

    I need an algorithm to do something? I look for a library.

    I need an algorithm? I just google it, then translate the example to whatever language I'm working with.

    I need to write boilerplate code? I just use the language's metaprogramming features for that.
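
    The metaprogramming point can be illustrated with a small, purely hypothetical TypeScript sketch: mapped types derive a whole family of related types from one definition, so the repetitive code an LLM might otherwise generate simply never has to be written.

```typescript
// One source-of-truth type; everything below is derived from it.
type User = { id: number; name: string; email: string };

// Every field optional (e.g. a PATCH payload) -- derived, not hand-written:
type UserPatch = { [K in keyof User]?: User[K] };

// A generic helper that fills a patch in with defaults.
function withDefaults<T extends object>(patch: Partial<T>, defaults: T): T {
  return Object.assign({}, defaults, patch);
}

const blank: User = { id: 0, name: "", email: "" };
const patch: UserPatch = { name: "Ada" };
const user = withDefaults<User>(patch, blank);
console.log(user); // user is { id: 0, name: "Ada", email: "" }
```

    Languages differ in how far this goes (D's templates and mixins, Rust's macros, TypeScript's mapped types), but the workflow is the same: one definition, many derived artifacts, no generated text to proofread.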

    ZILtoid1991,

    @sidereal @pluralistic @mttaggart @neurovagrant
    At one point AI, namely SEO text generated with it, made my job harder: I was looking for some Lua alternatives (its library is a mess), but all the results were SEO-ridden messes with no real info for me. I needed Lua as an embeddable scripting language, not some general-purpose language to calculate things for me.

    SidFudd,
    @SidFudd@4bear.com avatar

    @sidereal @pluralistic @mttaggart @neurovagrant I think I just figured out the use case for AI - it's strictly for entertainment purposes. Accuracy: nil. Reliability: nil. But wow, what results! AI is a psychic hotline.

    elronxenu,
    @elronxenu@mastodon.cloud avatar

    @SidFudd @sidereal @pluralistic @mttaggart @neurovagrant I'm completely over AI for entertainment purposes. I don't want any more AI-generated text foisted on me.
