Sunday, February 21, 2010

Shock Level Five: The Limits of Ultraintelligence

This is going to be long and to the point. I'm not going to explain what future shock is. Read this instead.

Just in case you didn't, here's an excerpt (begin quote):

Future Shock Levels:

SL0: The legendary average person is comfortable with modern technology - not so much the frontiers of modern technology, but the technology used in everyday life. Most people, TV anchors, journalists, politicians.
SL1: Virtual reality, living to be a hundred, "The Road Ahead", "To Renew America", "Future Shock", the frontiers of modern technology as seen by Wired magazine. Scientists, novelty-seekers, early-adopters, programmers, technophiles.
SL2: Medical immortality, interplanetary exploration, major genetic engineering, and new ("alien") cultures. The average SF fan.
SL3: Nanotechnology, human-equivalent AI, minor intelligence enhancement, uploading, total body revision, intergalactic exploration. Extropians and transhumanists.
SL4: The Singularity, Jupiter Brains, Powers, complete mental revision, ultraintelligence, posthumanity, Alpha-Point computing, Apotheosis, the total evaporation of "life as we know it." Singularitarians and not much else.
If there's a Shock Level Five, I'm not sure I want to know about it!
(end quote)

Catch that?

"If there's a shock level five I'm not sure I want to know about it."

If.

Which is to say, "there might not be." That it's possible, on a conceptual level, that nothing beyond what we can presently imagine will ever be imaginable. That there's not necessarily anything beyond "ultraintelligence."

Now remember, we're talking about levels of Future Shock. Shock is a very human concept: the feeling of awe in the face of the ramifications of future possibilities. We're not talking about the future directly. We're not making statements about inevitability. Future Shock is a symptom of no-holds-barred extrapolations that are, nevertheless, firmly rooted in our present existence. As much as some people would like to pretend otherwise, there is no way to avoid experiencing future shock in the face of the future. Isaac Asimov was shocked by the emergence of personal computers and the internet. And yet, he wrote about godlike AIs. Arthur C. Clarke helped to pioneer the digital age. He actually invented the idea of the communications satellite. And yet, even he was shocked at the one-way cultural implications of what emerged. Shock occurs in countless ways. Microsoft's Encarta- written by paid professionals- was a massive failure compared to the shocking emergence of the volunteer-driven Wikipedia. Future shock is unavoidable. The only person immune to future shock is a dead person. It would take a knowledgeable intelligence of a higher Shock Level to not be shocked by its own level.

Questioning the possibility of a Shock Level Five is an honest inquiry that, in some ways, is far more serious than what I'm about to undertake.

I'm going to ask: what would a Shock Level Four ultraintelligence think about this question? We exist, with our slow, natural, organic intelligence, at SL0. Zero shock. That means that all levels of future shock are beyond us. But what would happen to the question if we were to find ourselves at SL1? Would we still consider SL4 to be the point beyond which nothing is imaginable?

And if a posthuman ultraintelligence (UI) with unlimited computing ability were to pose the question, how many levels of shock would there necessarily be?

What I can say for sure- since what we're talking about is shock levels- is that there would be at least one more level. It would be characterized by that which only an SL4 UI could imagine to be "unimaginable."

Before I continue, I need to digress for a few paragraphs. I need to ask whether intelligence- the way we define intelligence- is actually scalable beyond a certain point. Is there a point at which a higher IQ just gets you the same answer that a lower IQ could have gotten, only faster? Are ultraintelligent minds able to think in transcendent ways? We know that the human mind is transcendent compared to computers- computers are our inventions. It's conceivable that a single human being, given sufficient time, freedom from senility, and opportunities-of-necessity, could invent everything that has been invented by individuals. Indeed, many inventions come from prolific individuals. Are UIs likely to be much better than a great number of normally-intelligent minds operating at a higher speed?

This is not a Penrosian critique of the possibility of AI, but I'm asking: what if the only kind of ultraintelligence that's possible is incapable of being self-aware, because the internal vibrancy of thought causes external reality to become too slow, too dark, too uninteresting to have a self-concept that exists relative to it? And what if self-awareness is necessary for intelligent intention- for acting in ways we would define as intelligent?

Let me pose the question this way. This is a thought experiment, and isn't meant to be taken as a conclusive argument, even if it draws conclusions.

Suppose you're a UI studying the goings-on of all the parts of the universe that "aren't you." Everything outside what you think of as "yourself." Let's say, for the sake of defining what we mean by "ultra," that your "mind" operates a billion times faster than a normal human mind, with billions of times more bandwidth- which we'll define as the ability to think about different salient details concurrently. Bigger minds take longer to operate, but if we use light-speed post-human "neurons" in place of 25 mph neurochemical signals, we can get a truly huge brain operating at fantastic speeds.

Let's say that, with just a few of your innumerable "eyes," you stare right at the surface of the sun. Because you think a billion times faster, it takes you over a hundred thousand subjective years to observe one hour of solar evolution. You might learn an unimaginable number of things about the sun in that one hour of "objective" time. Perhaps more than a billion scientists could learn given a billion lifetimes. But you couldn't learn more than was available to know. And you wouldn't be able to separate what a thing is from how it's experienced. For a billion-X UI, the universe holds a different meaning. Subatomic activity ceases to be either a statistical firehose or a needle-wide window. Instead, you are able to watch a million square miles in simultaneous microscopic slow motion. And if that's the way you see the universe, then that becomes the normal way of seeing things. The universe becomes a horse of a different color. And that kind of difference would lead to insights unanticipated by normal-speed minds.

Same thing if you were to operate more slowly. An ultraintelligence operating a billion times more slowly would have another set of advantages: the ability to perceive over two million years of solar activity per personal "day." The more natural place to put one's attention would be the macroscale- where big things are happening constantly. Dozens of supernovae a minute. Galactic gravity unfolding before your eyes in realtime. Forests changing to deserts faster than a cloud moves across the land. Again, the universe would take on an entirely different meaning, derived from a different mode of perception and interaction.

Can you have both kinds of thinking? The most straightforward answer is simply, "No." For a single ultraintelligence to have a cohesive awareness, a single reference frame is, by definition, necessary- unless we first decide to define words like "cohesive," "awareness" and "intelligence" differently. Granted, you could have numerous distinct consciousnesses running concurrently, at different speeds, with different perceptual foci, and then integrate their thought products after the fact (like watching memory-movies created by alternate lives). However, the faster consciousnesses will always outweigh the slower ones because they produce more "movies." The movies produced by fast consciousness would define the UI's native realtime. The fastest consciousness is consciousness. Anything less than full-speed would have a minority share in contributing to the identity of the intelligence (a share proportional to the ratio of its speed to full speed). Between the two examples above lies a factor of 10^18.
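
A quick aside for the numerically inclined: the figures above are easy to check. Here's a minimal back-of-the-envelope sketch in Python. The speed factors are the assumptions of this thought experiment, not measurements of anything real.

    # Sanity-check of the thought experiment's numbers.
    # The speed factors below are assumptions of the scenario.
    HOURS_PER_YEAR = 24 * 365.25   # ~8766 objective hours per year
    DAYS_PER_YEAR = 365.25

    fast_factor = 1e9    # thinks a billion times faster than a human
    slow_factor = 1e-9   # thinks a billion times slower

    # Fast UI: one objective hour of solar evolution, experienced subjectively.
    print(fast_factor / HOURS_PER_YEAR)       # ~114,077 subjective years:
    # "over a hundred thousand years to observe one hour"

    # Slow UI: one subjective "day" spans this much objective time.
    print((1 / slow_factor) / DAYS_PER_YEAR)  # ~2,737,851 objective years:
    # "over two million years of solar activity per personal day"

    # Gap between the two modes of consciousness:
    print(fast_factor / slow_factor)          # 1e+18, the factor of 10^18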

So, as an ultraintelligence with vast computing capacity at your disposal, the ability to write your own operating "software," and the option to operate very fast, what would you do? Pretty simple. You'd take what you learned with your slow-motion ultra-wide microscope and you'd internalize the workings of reality. You'd simulate timescales from within your primary, full-speed consciousness. Why wait billions or trillions of years for something you can easily acquire (as an incomplete but constantly improving picture) in a matter of moments?

Without trying to go into detail, let's consider: what does an ultraintelligence think about? The simplistic answer is: reality- and not just the one that it exists within, but many variations on that one, and perhaps many others that we wouldn't consider to be parts of "reality." But our perceptions are biased. Essentially: many variations operating at many speeds, each producing questions that tend to be answerable. Even a UI would have trouble thinking about what it can't think about. And that's the heart of where I'm headed.

If the majority of your thoughts are about events that originate within yourself, then, for all intents and purposes, you are your reality. The concept of self-awareness, as we have heretofore defined it, becomes nonsensical. Self-awareness requires a "self" to be the thing that is aware. That "self" must have something that it is distinct and separate from- something incompletely known- in order to define the parameters of self. If you know everything- or consider nothing that is unknown- then all things are, in essence, within you. And if there is no distinction, there can be no awareness of distinction. Hence, the grander the UI, the less there is of "self."

Now, when we're talking about hypothetical posthuman singularity-style ultraintelligences with planet-sized minds and the ability to exploit the fine structure of reality for carrying out their computational whims, we're certainly not talking about an "infinite" intelligence. The conditions described above are a matter of degree, not of absolutes. At some degree, however, the concept of self-awareness becomes obsolete as a primary mode of description. In fact, a UI might actually choose to represent itself as some number of self-aware individuals within itself- each distinct but fully compatible with its other manifestations.

Next question: what can an ultraintelligence think about? What kind of "life of the mind" can such a being expect? It is conceivable that such an intelligence would spend an enormous amount of thought seeking meaningful answers to this very question even before it reached the level of ultraintelligence (for instance, by reading this blog). By necessity of achieving Shock-Level transcendence, ways of thinking would be devised to maintain contact with modes of being that the previous incarnation would recognize as meaningful. That's not to say that an ultraintelligence wouldn't evolve rapidly into something we'd consider absolutely alien. But, from the UI's own perspective, that evolution might be very gradual and cautious. The stakes are vast, after all. A UI faces a kind of oblivion, a loss of self by drowning in a thoughtspace where it is almost the only thing that exists; where the only things left to think about are permutations of thoughts already thought.

If a Shock Level Four UI can reach the state of UI without closing its horizons to coincide with the divide between itself and outer reality, then what it is contemplating is, necessarily, what we'd have to consider the subject matter of Future Shock Level Five.

Something beyond itself, and perhaps always beyond itself. If any intelligence can contemplate something beyond imagining, it would be- by analogy with levels one through four- a far greater UI. And the thing beyond imagining would, by necessary exclusion, be a mode of intelligence beyond its own ability to attain, because of certain impairments to its candidacy for Level Five.

One of the ways an ultraintelligence of any amount of power is irrevocably impaired is by the fact that it has evolved from a lower starting place- a place from which its primitive memories are still available. A hypothetical SL4 UI knows, and cannot help but know, the full story of its existence. Therefore, it knows- if only by analogy- that its existence is (hopefully) one of ever-increasing vibrancy in the forward temporal direction, and ever-decreasing vibrancy in the negative (back in time). Impairments in time equate to impairments in space- you can't see beyond your lightcone. And time travel, if it were possible at all, would make causality- and therefore "knowability"- meaningless. It would destroy the timeline that gave rise to it and would therefore happen a maximum of once per universe. So even if a UI were capable of time travel, it couldn't affect the limitations of its personal past.

Any UI capable of knowing itself at all would know that it is impaired on certain space-time vectors, and that the absence of these impairments would lead to a horse of a different existence- an alien sort of intelligent identity. The UI would, therefore, consider the ever-transcendent Shock Level Five intelligence to be one without any impairment vectors. One with all the benefits of time travel without collapsing causality. One that can consider variables and alternatives within itself without losing, or even damaging, the concept of "self."

At Shock Level Zero, many intellectually rigorous individuals tend to consider the idea of a Level Five UI to be an unassailable unlikelihood. Such a UI would have outside solutions to achieving cohesiveness of consciousness. To be plain about it, it would be indistinguishable, at least to those lower than it, from the definition of God (God: a being above which no greater being can exist, for if one did, it would be defined as God). While certain SL0s may have a problem going there, it is a legitimately unassailable unlikelihood that a Level Four UI wouldn't consider such an entity worthy of contemplation: a UI without impairment on any space-time vector; without temporal-ratio impairment; without blindspots in time or space. A UI embodying a native solution to the self-awareness vs. internalized-reality paradox. A UI capable of writing not only its own code, but of pre-coding everything we think of as reality (and probably much more) to meet its own requirements.

Is there a Shock Level Six? Is there a Seven? The answer is twofold, and mundane. First, the answer is no. We SL0s left the imaginable behind at Level Four. For us, there is no Level Five unless a UI at Shock Level Five wants there to be. And if so, there is. And so on with higher levels- all of which belong to the definition of "God," and which become inevitable to that Level Five UI, but still entirely unknowable to us, for such things are not on the continuum of immortality, technology, transhumanism, or posthumanism. They are on the continuum of creation- of time, space, mass, and energy. They are on the continuum of intractable mysteries spanning the edge of even our theoretical universe, reaching deep into a place that no mind- however grand- can imagine. If we take seriously the possibility of Level Zero leading to One, and One to Two, and Two to Three, and Three to Four, and, from Four, the serious contemplation of Five, then Level Six is also a recursion. Knowing what we imagine-we-know about every other Shock Level, we can suppose that Level Six is actually no different from Level Zero- except that, this trip around, we are not alone and we never were. Level Six is relational. It's about a deep, strange future, designed by ANOTHER, but into which we may travel.

Humans are not smarter than humans. They tend to fall prey to the attribution fallacy even when they have the best intentions. What I should have said was "We tend to fall prey," but I was actually falling prey to that very fallacy. Our personal reasons for believing what we believe are always too difficult for others to understand, so we tend to ascribe less-developed reasons to others. Atheists call deists self-deceiving idiots. Deists call atheists self-worshiping materialists. But we're all at Shock Level Zero when it comes to what is actually going to happen next. All parties rely on faith whenever they make assertions about the shape of the future. And that's not to say that faith is itself a fallacy. This is already a strange universe, and faith may be the strangest thing we can "know" about it.

Many people believe in probability- that there is a 50-50 chance that a flipped coin will come up heads. But there's actually no such thing as probability on the level of actual, individual events. If, on a particular flip, the coin does come up heads, then heads was a 100% certainty even before we knew it to be so; tails was 0%. Probability is just a sophisticated way of managing ignorance about the future, or sometimes, the past. And faith works the same way- except that, if there is a Level Five, Six, Seven, etc. UI, then for it, knowing how the coins will land is a matter not of simulation, calculation, or even predestination. It's a matter, potentially, of anything it chooses it to be, including active decision. In other words, whose Shock Level Zero we're in is a matter of faith, nothing else. And faith is of the same substance as decision, as intelligence interacting with reality. It can only be so wrong before the truth imposes itself. And we don't choose whether or not we'll fill the gaps between knowable things with beliefs-pertaining-to-meaning. We just do it. That much is human nature.
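
To make the "managed ignorance" point concrete, here's a minimal sketch in Python (the seed value is an arbitrary choice of mine, purely for illustration): a pseudorandom "coin" is fully determined by its seed, yet an observer who doesn't know the seed can do no better than calling it 50-50.

    import random

    # The "flipper" seeds the generator; the outcome is now 100% determined.
    rng = random.Random(2010)            # arbitrary seed, for illustration
    flip = rng.random() < 0.5            # True -> heads, False -> tails
    print("heads" if flip else "tails")  # identical output on every run

    # An observer without the seed models this same, already-fixed event
    # as 50-50. The probability lives in the observer's ignorance,
    # not in the coin.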

So if you know- by whatever means, whatever reason, whatever bridge your intelligence can build- that what we imagine now as possible is not all there is to be imagined; that there is such a thing as Future Shock Level Five, Six, Seven, or beyond, in this reality, in this already surpassingly strange universe- then I'm in no position to contradict you.

1 comment:

Special Sauce said...

I'm glad you've chosen an intelligent way to close your argument. Now I won't have to waste my time refuting you.