A simple argument that AI poses nontrivial danger

AI (extinction) risk arguments, I feel, often assume way, way too much - and then don't make those assumptions explicit, ending up with a mess that is "not even unfalsifiable." That does not mean that AI risk is not "real", or that it is not high; in fact, its mitigation can be hurt by these arguments - by failing to match the eventual dangers, they may cause those dangers to receive less scrutiny or attention. That's why I prefer to assume, to "know", as little as possible, and to keep a very flexible view on "where does the risk come from?"

One of the simplest arguments is the "second species" argument, as seen here:
"Premise 1: AGIs would be like a second advanced species on earth, more powerful than humans.

Conclusion: That’s scary."
But that argument rests all its strength on the power of the specific analogy in question - and were the future only so simple... (Though to be clear, I think that specific post does a wonderful job weaving a narrative (in part) around that argument - that's why I've already linked it twice within these four posts. :)

So here's my attempt at a simple argument for AI risk, broadly construed. (I don't view this argument as "mine" in any way, shape or form, except to the degree that the flaws in the current exposition are mine; it's a general argument that I've probably seen stated in some form many times, but I don't know offhand of any place where it is stated clearly and self-containedly.) By virtue of its simplicity, it doesn't try to imply anything about the likelihood of AI extinction; it just nods at the possibility. And yet I feel that should be enough, and I find all the talk about the precise p(doom) distracting at best.

The "rapid and dramatic change" argument

Premise 1: AI is likely to effect astoundingly rapid and dramatic change in every aspect of life on this planet - more change in a short time than has happened since the dawn of Homo sapiens.

Premise 2: Rapid and dramatic change is difficult to steer precisely, particularly when it involves many powerful stakeholders whose interests are not necessarily aligned (powerful nations, corporations, individuals, AIs), and when it involves very novel technology, such as advanced AI will be.

Conclusion: It is possible that the change effected by AI will be extremely unaligned with human flourishing.

On the face of it, this argument might seem more complicated than the "second species" argument - it has many more sentences! - but I think that, because it lays bare more of its assumptions, it is simpler in its essence.

On the two premises

Many people doubt premise 1, but I believe them to be factually mistaken, and that premise 1 will seem more credible to any reasonable observer as time goes by. Premise 1 is by its nature quite a multifaceted and abstract prediction, but it makes many simple, concrete subpredictions, and I have been betting on prediction markets on some of them.

Premise 2 is more complicated, but it also seems hard to doubt given humanity's performance so far, and in particular our history of failure to cooperate on complex matters. Even the successes here - cases when humanity was able to stop some negative change, such as the recovery of the ozone layer, the ban on leaded gasoline, the world being a bit less MAD nowadays, and there being fewer wars overall - seem to drive the point home: sometimes we do not notice things until they have inflicted significant damage.

And it doesn't help that our institutions seem to have degraded (or have never been at the required level; it's hard to judge the relevant counterfactuals) to the point that, despite the COVID-19 pandemic costing the world somewhere on the order of tens of trillions of dollars, governments have, as far as I can tell, taken very few steps to prevent future respiratory illness epidemics. To put it bluntly: these institutions seem to me woefully inadequate to guide advanced AI development towards the benefit of humanity. And what do we have, barring them? Capitalism is great at many things, but swiftly internalizing negative externalities ain't one of them.