Uranium3006,

now that the low hanging fruit of internet scraping is exhausted, we're gonna have to start purpose-building datasets. this will be expensive and might be the new bottleneck on AI progress.

Aceticon, (edited )

For a rough approach, imagine a parrot taught by another parrot, which was in turn taught by another parrot which was taught by a human.

Sure, some things might survive as somewhat understandable, vaguely human-sounding sentences, but overall it’s still going to be pretty bad a few parrots down the chain.

ClamDrinker,

It’s funny how something like this gets posted every few days and people keep falling for it like it’s somehow going to end AI. The people who make these models are acutely aware of how to avoid model collapse.

It’s totally fine for AI models to train on AI generated content that is of high enough quality. Part of the research to train models is building data sets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a ‘bad’ example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn’t originally in the training data. There’s no reason that can’t be good training data itself.
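The filtering idea can be sketched in a few lines. Everything here is invented for illustration — the scoring heuristic and the 0.8 threshold are stand-ins for what real pipelines do with trained quality classifiers and human review:

```python
# Hypothetical quality gate for mixing synthetic data into a training set.
# `quality_score` is a toy heuristic, not any real system's metric.

def quality_score(text: str) -> float:
    # Penalize very short or highly repetitive text.
    words = text.split()
    if len(words) < 5:
        return 0.0
    return len(set(words)) / len(words)

def build_training_set(human_data, synthetic_data, threshold=0.8):
    # Keep all human data; admit synthetic samples only above the bar.
    kept = [t for t in synthetic_data if quality_score(t) >= threshold]
    return human_data + kept

human = ["the quick brown fox jumps over the lazy dog"]
synthetic = [
    "a coherent generated sentence with varied wording throughout",  # kept
    "spam spam spam spam spam spam",                                 # filtered out
]
print(build_training_set(human, synthetic))
```

The point is only that synthetic data passes through a gate before it is trained on, rather than being ingested blindly.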

T156,

Especially since they can just pay someone to sit down and sift through it, or re-use the old training data that they already have from before it all blew up.

___, (edited )

Most people here don’t understand what this is saying.

We’ve had “pure,” verifiably human-generated data up to now, since LLMs and image generators didn’t exist. Any bot-generated data was easily filterable due to its lack of sophistication.

ChatGPT and SD3 enter the chat and generate data nearly indistinguishable from human output, but with a few errors here and there. These errors, while few, are spectacular and make no sense given the training data.

2 years later, the internet is saturated with generated content. The old datasets are like gold now, since none of the new data is verifiably human.

This matters when you’ve played with local machine learning and understand how these machines “think”. If you feed an AI-generated set to an AI as training data, it learns the mistakes as well as the data. Every generation, it’s like mutations accumulate until eventually it just produces garbage.

Models trained on generated sets slowly but surely fail without a human touch. Scale this concept fractionally to the whole net: when 50% of your dataset is machine generated, a new model trained on it will begin to deteriorate. Do this long enough and that 50% becomes 60 to 70 percent and beyond.
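The degradation loop described above can be demonstrated with a toy model — repeatedly fit a simple categorical model to the previous model’s own output. This is a stand-in for real training, not any actual pipeline, but it shows the key mechanic: once a category drops out of a generation’s data, no later generation can ever produce it again, so diversity only shrinks:

```python
import random
from collections import Counter

def fit(samples):
    # "Train": estimate a categorical distribution from observed data.
    counts = Counter(samples)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def sample(model, n, rng):
    # "Generate": draw synthetic data from the fitted model.
    cats = list(model)
    weights = [model[c] for c in cats]
    return rng.choices(cats, weights=weights, k=n)

rng = random.Random(0)
data = rng.choices(range(20), k=50)   # generation 0: diverse "human" data
support = [len(set(data))]
for _ in range(200):                  # each model trains only on the last one's output
    model = fit(data)
    data = sample(model, 50, rng)
    support.append(len(set(data)))

print("distinct categories: gen 0 =", support[0], ", gen 200 =", support[-1])
```

Run it and the number of distinct categories collapses over the generations — the discrete analogue of the “mutations accumulate” problem.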

Human creativity and thought have yet to be replicated. These models have no human ability to be discerning, or to sleep and recover from errors. They simply learn imperfectly and generate new, less perfect data in a digestible form.

theneverfox,

Ok, seriously? Fuck this research. It’s bullshit.

Want to know how I can declare that so confidently? Because I wrote a program called duo. It’s literally two chatbots instead of one, running locally on 5+ year old hardware. These are low-powered LLaMA models, fine-tuned by the community for general-purpose use last year.

I just played a DND campaign with a chatbot and her hallucinated girlfriend (ai 1 wrote the prompt for AI 2, no edits or modifications). I’ve never played DND before, but they said they wanted to go to a haunted escape room. I have been to one of the most haunted locations in America, so I decided to be DM, and apparently they come with their own dice. Tomorrow I’m going to send the transcript to a friend who was looking for a DND player

Yes, clickbait is terrible training data, and low grade LLMs can really pump it out.

I had enough fun I fell asleep at my desk, and I did nothing but describe a location I’ve been to and the sounds I heard (and some urban legends)…I could spend a month and have replaced myself in the experience.

Other times I’ve let them run with no interaction on my part they’ve hallucinated (feasible) apps I’m not making to the point I could throw it into a design document, and games good enough to land on my to-do list.

Why don’t people see this for the miracle technology this is? If it isn’t reliable on one pass, do a second to evaluate the first, another to run chain of thought on problem areas, another one to flesh it out and rinse and repeat if you need to.
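The multi-pass idea reads something like this in code. `call_model` is a stub standing in for any local or hosted chat model — no real API is implied — so the control flow is runnable as-is:

```python
# Sketch of generate -> critique -> revise, with a placeholder model call.

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call a local or hosted LLM.
    return f"[model output for: {prompt[:40]}...]"

def refined_answer(task: str, passes: int = 2) -> str:
    draft = call_model(f"Answer this: {task}")
    for _ in range(passes):
        # Second pass evaluates the first; later passes rework problem areas.
        critique = call_model(f"List problems with this answer: {draft}")
        draft = call_model(f"Rewrite the answer fixing these problems.\n"
                           f"Answer: {draft}\nProblems: {critique}")
    return draft

print(refined_answer("design a haunted escape room encounter"))
```

Swap the stub for a real model call and you have the rinse-and-repeat loop the comment describes.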

This is such a simple engineering problem it’s not even funny

NotAtWork,

This comment reads like it was written by an LLM.

theneverfox,

That’s how someone with ADHD sounds without a filter (we can understand each other at least). All I did was leave out the transitions that link these (to me, obviously related) concepts together.

LLMs are the other way around - way too much transition with little substance.

Everything about my experiences experimenting with LLMs sounds unhinged without proof anyway, so I don’t see a need to edit my late-night rant. Eventually I’ll start a blog to lay out my methodology and the chat logs to support it.

spawnsalot,

It would be hilarious if we entered the deep-fried Marquaud era of AI, where responses degenerate into rehashes that just get progressively more jumbled and unintelligible as the models cannibalise each other’s generated content.

afraid_of_zombies, (edited )

There are like a billion hours of YouTube videos out there, plus the entire Library of Congress. I’m not seeing the issue.

gapbetweenus,

Wasn’t there a paper not long ago showing it was possible to generate data with AI as a training set for AI? I was surprised (the math is too much for me to check myself), but that seems to solve the problem.

realharo,

As far as I know, that is mainly used where a better, bigger model generates training data for a more efficient smaller model to bring it a bit closer to its level.
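That teacher-to-student setup can be shown in miniature. Everything here is illustrative — the “teacher” is just a function standing in for a big, expensive model, and the “student” is an ordinary least-squares fit standing in for a smaller model trained on the teacher’s synthetic labels:

```python
# Toy teacher -> student distillation: the teacher labels unlabeled inputs,
# and a smaller student model is fit on that synthetic training set.

def teacher(x: float) -> float:
    # Pretend this is a high-capacity model that is costly to run.
    return 3.0 * x + 1.0

# The teacher generates a synthetic training set from unlabeled inputs.
xs = [i / 10 for i in range(100)]
ys = [teacher(x) for x in xs]

# Fit a small student (ordinary least squares for y = a*x + b).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(f"student learned y = {a:.2f}x + {b:.2f}")
```

The student ends up reproducing the teacher’s behavior without ever seeing “real” labeled data — which is why synthetic data works fine in this direction, as opposed to a state-of-the-art model feeding on its own output.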

Were there any cases of an already state of the art model using this method to improve itself?

gapbetweenus, (edited )

I will search for the paper.

EDIT: can’t find it, dang.

General_Effort,

Sorta. This “model collapse” thing is basically an urban legend at this point.

The kernel of truth is this: A model learns stuff. When you use that model to generate training data, it will not output all it has learned. The second generation model will not know as much as the first. If you repeat this process a couple times, you are left with nothing. It’s hard to see how this could become a problem in the real world.

Incest is a good analogy, if you know what the problem with inbreeding is: You lose genetic diversity. Still, breeders use this to get to desired traits and so does nature (genetic bottleneck, founder effect).

gapbetweenus,

Training data for models in general was a big problem when I studied systems biology. Interesting that we’re finding workarounds, since it sounded rather fundamental to me. I found your metaphor rather helpful, thanks.

jacksilver,

I wouldn’t say we’ve really found a workaround. AI companies hire lots of people to parse and clean data. That can work for things like pose estimation, which are largely a once and done thing. But for things that are constantly evolving, language/art/videos, it may not be a viable long term strategy.

danielbln,

Microsoft’s Phi model was largely trained on synthetic data derived from GPT-4.

gapbetweenus, (edited )

I’m too lazy to search for the paper, and I’m not sure it was Microsoft, but with my rather basic knowledge of modeling (I studied systems biology) it seemed rather crazy and impossible, so I remembered it.

BananaTrifleViolin,

The “solutions” to model collapse - essentially retraining on the original data set - suggest LLMs will plateau or deteriorate, especially without a way to separate out good- and bad-quality data (or, as they euphemistically put it, human vs AI data).

We’re increasingly seeing the limitations and flaws of LLMs. “Hallucinations” (better described as serious errors), model collapse, and complete collapse suggest the current approach to LLMs is probably not going to lead to some form of general AI. We have models we don’t really understand that have fundamental flaws and limitations.

Unsurprising that they probably can’t live up to the hype.

zwaetschgeraeuber,

Even if it plateaus - the same was said about Moore’s law, which held up way longer than expected. There are so many ways to improve this. The open source community is getting to the point where you can actually run decent models on normal private hardware (talking about 70-120B models).

Wiitigo,

If the AI generated content is labeled, or has context, or has comments or descriptions created by people, then wouldn’t it just be the same as synthetic training data? Which is shown to still be very useful for training.

lolcatnip,

Most AI-generated data in the wild won’t have labels because there’s no incentive to label it, and in a lot of cases there are incentives to not label it.

linearchaos,

Yes, it’s still useful, and it’s basically how we made our last couple of jumps: an AI training on AI-generated data graded by another AI. We’ve hit diminishing returns, though.


mods_are_assholes,

Exactly what percentage of AI data in the wild is labeled?

Close to zero I’d say.

PoliticallyIncorrect, (edited )

The AIrmageddon…

numberfour002,

Anecdotally speaking, I’ve suspected this was already happening with code-related AI, as I’ve noticed a pretty steep decline in the quality of the code suggestions various AI tools have been providing.

Some of these tools, like GitHub’s AI product, are trained on their own code repositories. As more and more developers use AI to help generate code, and especially as more novice-level developers rely on AI to help learn new technologies, more of that AI-generated code is getting added to the repos (in theory) that are used to train the AI. Not that all AI code is garbage, but in my experience there’s enough that is, and I suspect it’s going to be a garbage-in, garbage-out affair sans human correction/oversight. Currently, as far as I can tell, these tools aren’t really using good metrics to rate whether the code they are training on is quality or not, nor whether it actually even works.

More and more often I’m getting ungrounded output (the new term for hallucinations) when it comes to code, rather than the actual helpful and relevant stuff that had me so excited when I first started using these products. And I worry that it’s going to get worse. I hope not, of course, but it is a little concerning when the AI tools are more consistently providing useless / broken suggestions.

___,

There will soon be a filter on the “best” developers, if there isn’t one already.

holycrap,

My team has been calling models that use AI-generated data “Habsburg models”.

mierdabird,

Lmao that’s a perfect name for it

BlueMagma,

I feel there is a good joke here, but I miss the knowledge to understand it. Care to enlighten me?

dwemthy,

The Habsburg royal family line was famously inbred with a distinctive chin.

vaultdweller013,

For anyone curious how bad it got, look up the coroner’s report on Charles II of Spain. It’s fucken grisly.

Jagger2097,

I thought it was called centipeding

webghost0101,

Back when I was taught concept art as a subject at college, my teacher had a name for this.

“Incest”, because every generation of art that references other art becomes more and more strange-looking and detached from reality.

If you thought Skyrim weapons look ridiculous, you should have seen my classmates’ Skyrim-inspired weapons.

Grandwolf319,

If you think that looked ridiculous, you should have seen the Skyrim weapons inspired by your classmates’ weapons.
