@neurovagrant@pluralistic This stuff is incredibly expensive & it’s amazing how much money must be getting burned right now by some players to establish a market… but not everybody can become Uber.
I can see VC money drying up eventually, but I’m curious about the tech giants that are all in on sinking money into this: Microsoft, Meta, Google, & Amazon. The new Meta Imagine image generator is crazy fast compared to others, yet completely free.
These companies want to become platforms for everyone else to run their AI, and I think they’ll be willing to run AI services as loss leaders for a very long time, which could subsidize this bubble for years & years, unlike something like the crypto bubble.
While I agree about the high-risk applications and the low-value, risk-tolerant ones, I feel like a massive middle ground has been elided here for reasons that are unclear.
We're already seeing content producers move to using LLMs to do this work in lieu of human labor. That decrease in cost is a value proposition that, as long as the cost of operating anything like a workable model doesn't explode, will continue to be appealing.
Maybe, maybe there's a tipping point where we all collectively agree that LLM output isn't good enough to pass muster. But I find that highly unlikely.
Instead, I suspect it will be good enough to slip by all but the closest scrutiny, and that is at once valuable for content creation farms and extraordinarily dangerous for everyone else.
@mttaggart@neurovagrant Those are low-value applications. Individual "content producers" are low-waged and precarious and while they may pay for automation, they can't afford much. Their bosses, meanwhile, will only pay a lot if they can fire their workforce - which would leave the news, textbooks, and other risk-intolerant applications in the automation trap.
@pluralistic@neurovagrant I am not clear which "automation trap" you're referring to—sorry if I didn't do some prior reading!
But I'm not sure that "the news" and other content sources are as risk-intolerant as is being claimed here. To say nothing of the means by which people ingest news—see 2016. It's not always via traditional outlets. There's plenty of money to be made in generating disinformation, which is both low-risk and lucrative at scale, a scale which is enabled by LLMs.
But at any rate, when the bubble pops, if it pops, I contend the real casualty will be factual text being the majority of what's present on the internet.
@mttaggart@neurovagrant You can't ask an AI to produce the news unless you get a reporter to verify what the AI says. If that reporter does a good enough job to call it "the news," it will take nearly as many hours as reporting the news from scratch. The time-consuming, labor-intensive part of "the news" isn't writing the words.
@pluralistic@mttaggart@neurovagrant And furthermore, correcting and proofing generated content that is mostly correct, but wrong in weird ways, turns out to be a job that humans are not good at. This is a similar problem to Tesla autopilot. The technology is good enough to bore us into not noticing when it’s wrong.
@pluralistic@neurovagrant Ideally, yes, but we are hardly in the era of responsible journalism. Call it what you want, but what people consume as "the news" falls far short of that standard now, and I see no reason to believe that access to LLMs would diminish that.
@mttaggart@neurovagrant That's not "the news." That's just spam. It makes money from remnant ads at CPMs so low as to be nearly indistinguishable from zero. They do not have disposable income to buy high-dollar licenses. They are a canonical low-value, risk-tolerant application.
@pluralistic@neurovagrant How would you classify something like, say, Gizmodo, or Ars Technica, or other "magazine" publications that have a demonstrated interest in cutting costs at the expense of quality content? Imagining a state of the art slightly improved from now, is there no incentive for them to use LLMs to reduce costs to produce content?
@mttaggart@neurovagrant That is a gross mischaracterization of those two outlets. They employ both skilled investigative journalists and hard-nosed fact-checkers.
I mean, this is a thing you are empirically wrong about.
Source: I have written for both and have first-hand knowledge of their internal processes.
@pluralistic@neurovagrant I know they do! And I like those outlets a lot, but don't they want to reduce costs?
Regardless, I certainly don't have insider knowledge of these operations like you do, and so if there's really no way you can imagine LLMs being of value to them, or to any organization that produces content, then that's that.
For what it's worth, I really, really hope you're right.
@mttaggart@neurovagrant Yes, they want to reduce costs, but only in service to making more money. If they cut costs and this relegates them to running bottom-feeder remnant ads at low CPMs, they will see a net loss.
@pluralistic@neurovagrant Agreed. So it seems like at the end of the day, the issue is the threshold for acceptable content. I suspect my estimated threshold for what people will tolerate (and share) is lower than yours. And again, I really hope you're right.
@mttaggart@pluralistic@neurovagrant the encouraging flipside here is that as the deluge of disinfo and generated garbage content grows, the value of human-curated, trustworthy content also goes up (as its rarity increases).
@darkuncle@pluralistic@neurovagrant Yes, indeed, although the same number of needles (or fewer) in an exponentially larger haystack is not a great state of affairs.
@mttaggart I have a pretty strong feeling that after the hype cycle dies down, and only the use cases with actual legitimate value remain, we will eventually see less garbage output. (Risk: quality improves as well, to the point that it becomes very difficult to sort the generated stuff from the real stuff, and eventually people start asking what even is the difference and how do you define "real"…)
And it kinda seems like as we get smaller models trained, some of this is ameliorated. Also we don't know what more specialized hardware will do to the compute demands.
But broadly, yes, I'd love for this to stop before we get to an AI-induced energy crisis.
@sidereal@pluralistic@mttaggart@neurovagrant One serious application could be generating those meaningless slides management likes to show its employees every now and then to prove they're actually being useful. I can't imagine anyone likes building those and they usually don't contain any relevant info so it might as well be fabricated.
I don't think that justifies the existence of this kind of "AI" though, because the existence of these slides isn't justified in the first place...
@sidereal@pluralistic@mttaggart@neurovagrant
At one point, AI-generated SEO text made my job harder. I was looking for some Lua alternatives (its standard library is a mess), but all the search results were SEO-ridden messes with no real info for me. I needed Lua as an embeddable scripting language, not some general-purpose language to calculate things for me.
@sidereal@pluralistic@mttaggart@neurovagrant I think I just figured out the use case for AI - it's strictly for entertainment purposes. Accuracy: nil. Reliability: nil. But wow, what results! AI is a psychic hotline.