Researchers discover 'Reversal Curse': LLMs trained on "A is B" fail to learn "B is A"

Training AI models such as GPT-3 on "A is B" statements does not enable them to infer the reverse, "B is A", without further training, revealing a basic failure of generalization. (https://arxiv.org/pdf/2309.12288v1.pdf)
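The paper's test is easy to sketch: fine-tune a model on fictitious forward-direction facts ("A is B"), then ask the reversed question ("Who is B?"). A minimal Python sketch of that setup follows; the example facts are taken from the paper, while the helper names and the query_model stub are placeholders of mine, not the authors' code:

    # Sketch of the Reversal Curse evaluation setup: fine-tune on
    # fictitious "A is B" facts, then test the reversed "B is A" query.
    # Example facts are from the paper; everything else is illustrative.
    FICTITIOUS_FACTS = [
        ("Daphne Barrington", "the director of 'A Journey Through Time'"),
        ("Uriah Hawthorne", "the composer of 'Abyssal Melodies'"),
    ]

    def forward_training_examples(facts):
        """'A is B' statements used as the fine-tuning corpus."""
        return [f"{name} is {desc}." for name, desc in facts]

    def reversed_test_prompts(facts):
        """'B is A' questions the fine-tuned model is evaluated on."""
        return [(f"Who is {desc}?", name) for name, desc in facts]

    def query_model(prompt):
        """Placeholder for a call to the fine-tuned model (assumption)."""
        raise NotImplementedError("plug in your model or API here")

    if __name__ == "__main__":
        for line in forward_training_examples(FICTITIOUS_FACTS):
            print("train:", line)
        for prompt, expected in reversed_test_prompts(FICTITIOUS_FACTS):
            # The paper reports that fine-tuned models answer the forward
            # direction but score near chance on the reversed question.
            print("test:", prompt, "| expected:", expected)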

Ongoing Scaling Trends

  • 10 years of remarkable increases in model scale and performance.

  • Expects the next few years to make today's AI "pale in comparison."

  • Follows known patterns, not theoretical limits.

No Foreseeable Limits

  • Skeptical of claims certain tasks are beyond large language models.

  • Fine-tuning and training adjustments can unlock new capabilities.

  • At least 3-4 more years of exponential growth expected (a rough sketch of what that implies follows below).
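For a sense of scale, here is a back-of-the-envelope sketch of what sustained exponential growth would compound to; the six-month compute-doubling period is an assumed illustrative rate, not a figure from the post:

    # Assumed doubling period for frontier training compute; chosen only
    # to illustrate compounding, not sourced from the post.
    DOUBLING_PERIOD_YEARS = 0.5
    for years in (1, 2, 3, 4):
        multiplier = 2 ** (years / DOUBLING_PERIOD_YEARS)
        print(f"{years} yr -> {multiplier:.0f}x today's training compute")

At that assumed rate, four more years compounds to roughly 256x today's compute.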

Long-Term Uncertainty

  • Can't precisely predict the trajectory beyond the next four years.

  • But no evidence yet of diminishing returns limiting progress.

  • Rapid innovation makes it hard to forecast.

TL;DR: Anthropic's CEO sees no impediments to AI systems continuing to rapidly scale up for at least the next several years, predicting ongoing exponential advances.
