A prediction on AI and mathematics

Ever since 2020 it has seemed to me that superhuman AI mathematicians are pretty close, and nowadays they seem quite a bit closer. While the exact timing seems to me to depend heavily on various local contingencies - who decides to invest how many person-hours into it, and so on - I'll say that I would be surprised if it happened within 1 year from now, or if it didn't happen within 7 years from now.

I am not too fussed about the exact definition of "superhuman" here, because I believe it very likely that, whenever we get anything that fits any "reasonable" definition of a superhuman AI mathematician, within a very short time (a year or two) we'll have AI mathematicians which are obviously super(superhuman AI mathematicians), and in another short time we'll get super(super(superhuman ...)) - certainly not ad infinitum, but it will for sure be ad nauseam for some. One corollary of this prediction is that I expect as much progress in mathematics in the next 10ish years as there has been up until now.

I wouldn't really care as much about that, though, if it were only mathematics in play. But I imagine these AIs are likely to be used for making an astounding amount of progress in ML research itself - we very quickly get many years of progress in finding more efficient architectures, training procedures, etc. Sure, ML has a sizeable empirical component, so the AIs will probably need to spin up an experiment or two here or there, but these experiments should likely provide much more valuable information than most human-run ones. So, soon after superhuman AI mathematicians, we have superhuman everything.

Why, then, this focus on superhuman AI mathematicians, as opposed to superhuman ML scientists? Why bring mathematics into this, if it mostly serves an instrumental value? One reason is that pure mathematics, as opposed to most applied subjects, provides such incredibly fertile ground for almost unlimited self-play that it seems to me quite likely that AIs are going to be superhuman pure mathematicians first and applied mathematicians (including ML scientists) second - though not with much time in between, as I expect mathematical ability to generalize far.

To put it more straightforwardly: superhuman AI mathematicians seem to me to be both a sufficient and a necessary component of the near-term transformation of the world. If you get superhuman AI mathematicians, you can leverage them into a transformation of the world - and if you don't have them, the road to transformation is much longer.

I am writing this not only to mark the prediction, but also because I believe it is worth some forethought to figure out how to leverage this (possible) future state of affairs in order to advance AI safety. I'll get back to that in a future post.