isaiah, every time i read an article about AI/LLMs it inevitably says something like, "AI's don't really understand _____" (fill in the blank with just about anything).
i really feel like some folks need to take a few philosophy courses before writing these think-pieces. not because they're wrong per se, but because it's the wrong framing.
even my tiny bit of philosophy coursework made it abundantly clear that humans don't possess the level of understanding that we believe we do.