People have been betting on independent reasoning as an emergent property of AI, without much success so far. So it was exciting when OpenAI said its AI had scored at a gold-medal level at the International Mathematical Olympiad (IMO), a competition that tests mathematical reasoning among the world's best high school students.
However, Australian mathematician Terence Tao says it may not be as impressive as it seems. In short, the test conditions were potentially far easier for the AI than for the humans, and the AI was given far more time and resources to achieve the same results. On top of that, we don't know how many wrong attempts there were before OpenAI selected the best answers, a kind of cherry-picking that isn't possible in the human test.
There's another problem, too. Unlike with humans, an AI being good at math is not a good indicator of general reasoning skill. It's easy for a model to copy techniques from the corpus of human knowledge it was trained on, which gives the semblance of understanding. AI still doesn't seem good at transferring that reasoning to novel, unrelated problems.