Researchers discover 'Reversal Curse': LLMs trained on "A is B" fail to learn "B is A"
Training AI models such as GPT-3 on "A is B" statements does not let them deduce "B is A" without further training, exposing a basic flaw in generalization. (https://arxiv.org/pdf/2309.12288v1.pdf)
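To make the finding concrete, the paper's evaluation hinges on fact pairs stated in both directions. A minimal sketch of that setup (the fictitious-celebrity fact below follows an example from the paper; the helper function and its name are illustrative, not the authors' code):

```python
def make_pair(name: str, description: str) -> tuple[str, str]:
    """Return the same fact stated in both directions.

    forward: "A is B" -- the order the model is trained on.
    reverse: "B is A" -- the order the model is probed on at test time,
    which, per the paper, trained models fail to answer.
    """
    forward = f"{name} is {description}."
    # Uppercase only the first character so the rest of the phrase is untouched.
    reverse = f"{description[0].upper() + description[1:]} is {name}."
    return forward, reverse


fwd, rev = make_pair("Daphne Barrington",
                     "the director of 'A Journey Through Time'")
print(fwd)  # the training-order statement
print(rev)  # the reversed probe a model should, but does not, infer
```

A model fine-tuned only on sentences like `fwd` answers "Who directed 'A Journey Through Time'?" no better than chance, which is the flaw the headline describes.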