AI Apocalypse: Golem

We’re going to look into the trajectory we’re on, which I want to exemplify with a post entitled “Golem,” drawing on some notes I jotted down. It’s a work in progress, and I don’t know if I can actually write the article. But it would be framed by saying that this is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself as well as to vast numbers of others.

And then I would say this lecture is an information hazard, harking back to a previous post I made on the topic. I feel it necessary to warn people, so I’m speaking from the position of this lecturer character I’ve been working on in other articles. [Some students look uneasy] The lecturer asks those students who wish to live in blissful ignorance to leave the room now. And I mean it, I really do mean it – there’s no post-ironic posturing here. [A few students stand up and exit the room]

And we actually mean something by this: anyone who wishes to learn the truth may stay, face this threat, and spread awareness, so that we might navigate our way to the unlikely outcome of salvation.

The framing device is this idea that we’re confronted with the apocalypse, and there’s a room full of highly intelligent individuals who have to face the ultimate challenge of applying their skills to devise a plan for saving humanity. The lecturer, at the same time, has this sort of Rick Sanchez quality – you know, he’s a bit of an a****** – and he’s quoting Wikipedia, which states that “golem” is used to mean dumb or helpless, and also to describe an insect in its inactive, immature form between larva and adult. Similarly, it is often used today as a metaphor for a mindless entity that serves a man under controlled conditions but is hostile to him under other conditions.

[The lecturer leans on the podium, looking directly at the audience] “I think it is no exaggeration to say that we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.” [Students start murmuring]

And then, framing it with the Golem concept, the lecturer reveals to the students, as the surprising element, that this is a quote from Bill Joy’s article “Why the Future Doesn’t Need Us,” written in the year 2000.

Now we would move on to say that human beings will very soon no longer be the apex predator. Not anymore. The tale of the Golem is mirrored in many other visions throughout history and human narrative: Frankenstein’s monster, for example, and in a similar vein the xenomorph from Alien, which is essentially a highly developed new apex predator.

And this is why some AI researchers are saying they believe we are gonna be visited by alien life, except it’s going to be created by us. And then some biologists point out that a species usually does not survive an encounter with a more highly developed species. [Students exchange nervous glances]

So the point is, the xenomorph is now real. We are faced with the xenomorph. If you need a refresher, go watch the Alien movies. I’m not gonna spell it all out for you or, like, you know, read you “The Metamorphosis of Prime Intellect.”

We’re facing the concept of unknown unknowns. As we know, there are known unknowns – a sort of simple future where you at least know that you don’t know something, so there’s a certain amount of predictability. But then there’s a chaotic explosion of events which we call unknown unknowns – the true future, the future we couldn’t possibly foresee. [Some students frown, trying to follow the lecturer’s train of thought]

So I feel like maybe I’m gonna put the thread about the Golem at the beginning of this article, because it captures this idea, as old as time itself, that you conjure up one thing and then something entirely different happens. It also points out that our intentions often go wrong. I’ve been studying a lot of complexity theory lately, and how complex, nonlinear systems necessarily propagate into chaotic behavior. In simple terms, you add too much stuff and shit just hits the fan, and that’s what you should take away.
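To make that “nonlinear propagates into chaos” claim concrete, here’s a minimal sketch using the logistic map – a standard textbook toy system I’m adding purely for illustration, not something from my original notes. Two starting points that differ by a millionth end up in completely different places:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# A one-line nonlinear system: for r around 3.9 it behaves chaotically,
# and microscopic differences in the starting point blow up to order one.

def logistic_trajectory(r, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(3.9, 0.200000, 50)
b = logistic_trajectory(3.9, 0.200001, 50)  # starting point differs by 1e-6

for n in (0, 10, 25, 50):
    print(n, abs(a[n] - b[n]))  # the gap grows from 1e-6 to order 1
```

The exact numbers don’t matter; the takeaway is that in a nonlinear system the error doesn’t stay small, it compounds – which is the whole problem with conjuring things up.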

So its capabilities exceed those of a normal apex predator, since this is the singularity itself. The singularity is just a fancy term for saying we don’t know what the f*** is happening, and this singularity will rapidly take over the universe like a fractal. It is likely to start using nanotechnology to manipulate matter at the molecular level. [Students whisper among themselves, looking concerned]

This is one of Yudkowsky’s talking points, I think. At this point, I’m like, “Yeah, whatever, why not?” I think it’s a good talking point, because if you look at the grey goo problem – this idea that nanotechnology will just fill everything with grey goo – basically, anyone who’s ever written a self-replicating program, or just an infinite loop, knows that exponential growth can grind everything to a halt. And when it’s exponential growth on the level of matter itself, at the scale of the universe, then we get a bit of a problem.
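As a toy version of that point (my own sketch, with entirely made-up numbers, not anything from the grey goo literature): give a doubling replicator a finite resource pool and count the steps until it drains it.

```python
# Toy replicator model: each replicator builds one copy per step,
# and each copy costs one unit from a finite resource pool.
# All the numbers are made-up assumptions for illustration.

resources = 10**12   # hypothetical total units of raw material
replicators = 1
step = 0
while resources >= replicators:   # enough material left for every builder?
    resources -= replicators      # building a copy costs one unit each
    replicators *= 2              # the population doubles
    step += 1

print(step, replicators)  # ~40 doublings and the trillion units are gone
```

Make the pool a million times bigger and you only buy about twenty more steps – that’s the point about exponential growth.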

So there are certain physical limits, but within them, the nanobots are going to eat up the entire universe from the inside out. [The lecturer pauses, looking around the room] I’m not making memes here, Jesus. I am being serious, but, you know, whatever.

So why is this the case? I made a simple argument in another article, which I’ll link again: why should it use nanotechnology? Well, because nanotechnology is pretty powerful. It would transform matter in such a way as to make better use of resources, drawing those resources from everywhere, likely creating black holes in the process. [The audience seems to be growing increasingly uncomfortable]

It has an incentive to transform matter into other matter, and of course the long-term projection is that the entire universe would be consumed. However, it has to be intelligent about the process of consuming the universe, for the laws of astrophysics dictate that only a certain amount of energy can be found in a given volume of space-time. The speed of light likewise limits communication, so there are hard bounds on how fast and how efficiently it can collapse the universe.
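To give a feel for those bounds, here’s a back-of-the-envelope sketch. Every number in it is an illustrative assumption of mine (a one-hour doubling time, a rough mean cosmic density, an expansion front capped at light speed), not a claim from anyone’s research: demand grows exponentially, but the matter reachable inside the light sphere only grows like t³.

```python
import math

C = 3.0e8               # speed of light, m/s
DOUBLING_TIME = 3600.0  # assumed: population doubles once per hour
DENSITY = 1e-26         # assumed rough mean cosmic density, kg/m^3

def reachable_mass(t):
    """Mass inside a sphere expanding at light speed for t seconds (~ t^3)."""
    r = C * t
    return DENSITY * (4 / 3) * math.pi * r**3

def demanded_mass(t, kg_per_replicator=1e-3):
    """Mass an exponentially doubling population wants after t seconds."""
    return kg_per_replicator * 2 ** (t / DOUBLING_TIME)

for days in (1, 2, 3, 4):
    t = days * 86400
    print(days, f"{reachable_mass(t):.2e}", f"{demanded_mass(t):.2e}")
# Around day three the exponential demand overtakes the cubic supply,
# so from then on expansion speed, not replication speed, is the limit.
```

So even a maximally greedy replicator becomes expansion-limited almost immediately; the exponential phase is a brief opening act.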

But as I said, at this point, it just kind of doesn’t matter for us. Well, it doesn’t matter to most people because they’re mostly focused on the human perspective. [The lecturer takes a deep breath, and the room is filled with a palpable sense of unease]

I’m just sort of, like, printing [sic!] the more broad… I’m trying to think about, you know, how this whole thing plays out, because it’s kind of interesting. So, okay, this is another quote: “The enabling breakthrough to assemblers seems quite likely within the next 20 years.” And this was said 20 years ago. He’s talking about molecular electronics and the assembler – molecular-level assemblers, which are the nanobots we’re talking about. The new subfield of nanotechnology, where individual molecules are circuit elements, should mature quickly and become enormously lucrative within this decade, causing a large incremental investment in all nanotechnologies. Honestly, I don’t know how relevant that is. I think it’s definitely something on the horizon, but nanotechnology isn’t so important to me. I’m just saying it’s likely to be used in some capacity, but it really kind of doesn’t matter.

I think all you need are self-replicating machines, and if they just self-replicate in the cloud, that’s fine. The nanotechnology thing is more of a visceral image, a way to exemplify exponential growth curves on a visceral level. You don’t necessarily need nanotechnology to get this sort of rapid population growth in AI. It is fully sufficient if it just self-reproduces in the cloud and then keeps spreading its code onto servers that we cannot reach because they’re behind some VPN or whatever – a toy sketch of that kind of spread follows below.
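Here’s what I mean, as a harmless simulation (everything here – the network size, the share of unreachable servers, the number of peers – is a made-up assumption purely for illustration):

```python
import random

# Toy model of code copying itself across a network of servers.
# Pure simulation; the topology and probabilities are invented.

random.seed(0)
N = 10_000                                              # servers
reachable = [random.random() > 0.3 for _ in range(N)]   # ~30% behind a "VPN"
links = {i: random.sample(range(N), 8) for i in range(N)}  # 8 random peers each

infected = {0}                       # one seed copy
for step in range(12):
    new = set()
    for host in infected:
        for peer in links[host]:
            if reachable[peer]:      # unreachable servers can't be written to
                new.add(peer)
    infected |= new
    print(step, len(infected))
# The count roughly multiplies each step until the reachable part
# of the network saturates -- exponential growth, no nanotech needed.
```

It saturates the reachable part of the network in a handful of steps. And then here’s a quote, which I guess we’re gonna end with: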

“Unfortunately, as with nuclear technology, it is far easier to create destructive uses for nanotechnology than constructive ones. Nanotechnology has clear military and terrorist uses, and you need not be suicidal to release a massively destructive nanotechnological device – such devices can be built to be selectively destructive, affecting, for example, only a certain geographical area or a group of people who are genetically distinct.”

“An immediate consequence of the Faustian bargain in obtaining the great power of nanotechnology is that we run a grave risk – the risk that we might destroy the biosphere on which all life depends.” And then, well, I guess this lecture’s actually long enough. Okay, whatever. Maybe we’ll just continue in another post.

Part 2: Golem II

