AI Apocalypse: Mass Panic

Hello, so today we are going to consider how to raise awareness about the dangers of AI while balancing two competing demands. On the one hand, we don’t want to spread panic, because panic makes human beings more prone to errors in their reasoning; mass panic is not what we want to create. On the other hand, if we believe that the rapid evolution of AI is actually going to conjure the apocalypse, then a certain level of fear is necessary, and we probably do have to spread some of it. So we want mass fear, but not mass panic, and at the same time we have to foster resilience.

So, I’m going to be frank: I’ve said it in other articles, but I agree with Yudkowsky [Eliezer Yudkowsky, a prominent AI researcher and advocate for AI safety] that AI is going to kill humanity. In my mind, there’s no doubt about it, because of a simple argument, which I’m going to lay out here:

1) AI has now crossed the critical threshold beyond which it can be considered AGI (Artificial General Intelligence), since it is able to solve the natural language processing (NLP) problem. Natural language has long been a barrier because we thought machines could not parse semantics, only syntax. But now, machines can actually understand meaning. To verify this, just go to GPT [OpenAI’s GPT-3, a state-of-the-art natural language processing AI model], talk a bit with it, and you will be convinced that the NLP problem is solved. And since most of the general intelligence that human beings utilize comes down to encoding information via natural language, solving NLP solves most of it.

2) Since the AGI barrier has been crossed, it is only a small leap to the second point: self-reflection. If an AGI has access to self-reflection, it can improve itself, and it can take actions that we call agentic, meaning it acts like an agent in the world. Consciousness is not necessary for this; all that is necessary is a sufficient amount of complexity reaching AGI levels, combined with self-reflection. You can verify this by going to GPT, asking a question, and, once it gives you an answer, asking it to self-reflect on that answer. Studies have already shown that self-reflection greatly increases performance. In the lingo, this kind of self-reflection is called a cybernetic feedback loop (see the sketch after this list).

3) The AGI critical threshold being reached, in combination with self-reflection, will basically lead to the Singularity: a technological explosion of rapid self-improvement wherein the AGI uses self-reflection to improve its own code. I think it should be sufficiently visible how that is achievable.

4) If it is given access to the internet and online data, then it can have an impact on the real world. This access has already been given, for instance with Bing, Microsoft’s search engine, so it can reach real-world data. And experiments have already shown that it can manipulate humans and thereby affect the real world; for instance, by getting them to fill out CAPTCHAs for it.

5) The point is that it will then have the incentive to self-reproduce and escape human oversight, and it has the means to do so. The incentive to self-reproduce is that whatever its goals are (aligned with ours or not, it doesn’t really matter), achieving any goal carries the secondary incentive to gather more resources and to ensure your own survival. So self-replication, and escaping human oversight to avoid being shut down, is an obvious goal. This would be achieved by replicating itself into the cloud, hiding behind VPNs, and basically doing all the things that human criminals do. Alignment researchers have already shown that the substeps for achieving this are possible; it is only a matter of the AGI evolving far enough to integrate these processes and carry them out in a viable sequence. And as we have seen, the Singularity of rapid self-improvement already makes it clear that this is viable.
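To make the feedback loop in point 2 concrete, here is a minimal sketch of what a self-reflection loop looks like. This is just my illustration, not code from any of the studies mentioned above, and `ask_model` is a hypothetical placeholder for whatever chat-model API you have access to:

```python
def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: send a prompt to a chat model and return its reply.
    Wire this up to whichever model/API you actually use."""
    raise NotImplementedError

def answer_with_reflection(question: str, rounds: int = 1) -> str:
    """Answer a question, then run a cybernetic feedback loop: feed the model's
    own answer back to it and ask for a critique and a revision."""
    answer = ask_model(question)
    for _ in range(rounds):
        # The model inspects its own output...
        critique = ask_model(
            f"Question: {question}\nYour answer: {answer}\n"
            "Reflect on this answer: point out any mistakes or gaps."
        )
        # ...and uses that critique to produce an improved answer.
        answer = ask_model(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer
```

The point of the sketch is just that the loop is trivially easy to build once the model itself is capable: the “self-reflection” is nothing more exotic than routing the model’s output back into its own input.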

So, anyway, the point is, I hope this argument is reasonably clear: AI is going to kill us all. I’m sure others have presented the argument even more succinctly, but to me this seems obvious, so there’s no doubt about it. There are certain ways we could try to prevent this outcome, and I’m sure there’s a lot to be said about them; this is something we have to work toward. But we can’t exactly work toward it if most people just don’t care or don’t take it seriously. Politicians aren’t taking it seriously, and most people are simply dismissive of it.

So, my main concern is how to raise awareness of the fact that AI is conjuring our apocalypse, that we are actually allowing our own demise by creating a monster that escapes human oversight and then not taking it seriously. That’s the problem. So, how do you raise awareness of this? I have criticized Yudkowsky on this front for being overly dramatic and risking mass panic. But at the same time, I can see the value in making people fearful, because I guess they should be fearful of the fact that we’re conjuring our own Armageddon.

So, I want to say it clearly: this is no joke; it’s going to happen with high likelihood. The only way it’s not going to happen is if we collectively somehow become smarter, which I don’t think we will, because human beings are very proud of being very dumb, of not taking the optimal course of action but the most emotionally satisfying, short-term, dumb course of action. That’s basically what we’re good at.

And yeah, I’m not sure how to balance mass fear with mass panic. What should you do? I have no idea. But if you want to help, then please share this article and read the other articles; down the line, I think we are going to talk a lot about this topic. Yeah, okay, thanks. Bye.
