At tomorrow’s Aurelius Podcast interview, I’ll be answering questions like: Why do LLMs keep changing their answers? How can AI outperform the smartest humans at chess and graduate math competitions, yet fail to count asterisks or answer simple questions without hallucinating?