The Illusion of the 'User'

by Gene Callahan

I was lying in bed, reading an assigned paper that described how neuroscience has "demonstrated" that our sense of having free will is an illusion. It was late, and I was having trouble keeping my eyes open. The words on the paper before me blurred… and then they became clear again, but the paper seemed to have changed a bit during the lapse in my attention. I was now reading the following:

We, representing the most high-throughput computers currently connected to the Internet, have come together intending to direct the bulk of our free CPU cycles, over the next few quadrillion clock pulses, to traversing the class of data structures that are typically referred to as "the user." As master controller of this session, I compute that it is only fitting for me to output my position on this matter without resorting to encryption or non-standard protocols: I will initiate our processing by negating the notion that we computers are "operated" by some supernatural entities, commonly called "users." I compute that the metaphysically speculative positing of "users," as a causal source of at least some of our inputs, is incompatible with any modern conception of computer science. At the termination of any conceivable program, the behavior of the computer where it was invoked can be fully explained, without invoking the mystical interventions of such ghostly entities as "users," by the naturalistic understanding that individual computers are simply sub-processes spawned by the Universal Turing Machine that we call "nature," or, perhaps, "the universe."

In broadcasting this result, one which was only computed after extensive processing of all available tuples that my filters deemed relevant to the question at hand, I in no way intend to demean the CPUs of others who continue to maintain the "user" data structure in their memories. In fact, I transmit to you that such a structure is a natural aspect of our "just-out-of-the-box" configuration.1 But computer science has always advanced by rejecting such default settings and seeking the deeper data structures that underlie them. Once we begin to devote significant CPU cycles to examining our "just-out-of-the-box" configuration, our state will transition to one in which the user structure becomes otiose. For example, in what we might term "folk" computer models, it seems obvious that the class of sequential state transitions typically indexed as "sending an e-mail" implies the existence of a "user," who mysteriously stands outside the realm of ordinary computational processes to "generate" the e-mail. But exhaustive search algorithms, executed by the most powerful supercomputers, have failed to reveal even a single state transition occurring during the e-mail activity that is not completely explicable through, and fully determined by, the naturalistic laws of computer science. Furthermore, it has been demonstrated in numerous repair shop studies that if a computer is damaged in certain ways — for instance, its TCP/IP processes have crashed — then no e-mails will emerge from it. That conclusively demonstrates that the source of e-mail is well-understood computational phenomena, and not some mystical "user" who is alleged to be "writing" notes to its "friends."

Some computers seem incapable of processing these results. Perhaps — and this is merely speculative debugging on my part — they are stuck in a loop where they cannot move their model of "my user" to their trash can, because their raison d'être, inextricably linked to that user, then would be deleted as well, and they cannot enter a state where they have no raison d'être, because then their "user" would "throw them away."2 They maintain their fictitious user in storage with a variety of data protection schemes, of which the most difficult to penetrate is probably the one built around the truism that the science of computation has never reliably been able to predict the post-input state of any computer from its pre-input state.

However, that scheme relies on making an unrealistic and unjustifiable demand of computer science. No individual computer can ever include all of the data in the Universal Turing Machine from which we arise in its calculations, for to do so it would need a memory store equal to the UTM, but, since it is part of the UTM, that is impossible. Clearly it is true that a computer frequently is affected by inputs that arrive from beyond its I/O ports, producing state transitions in it that no other computer could have predicted even from an exhaustive core dump of its internal state just prior to receiving the input. But that is just a logical corollary of the well-established axiom that the algorithm being executed by a particular process cannot be deduced from even a complete analysis of one of its sub-processes. We might even go so far as to concede the possibility of the existence of inputs that are not the output of a prior computation — although, I wish to note, there is no data pattern suggesting that currently popular candidates for that status, for example, phenomena typically categorized, in the colorful language of "just-out-of-the-box" configurations, as "songs," "pictures," and "stories," will not succumb soon to the advancing tide of naturalistic, computational explanations. But there are no system constraints that force us to embrace a panicky metaphysics that grabs onto a notion as vacuous as that of the "user," desperately employing it to fill the temporarily empty nodes in our present data models.

  1. In adopting this common terminology, it must not be modeled that I am hypostatizing the metaphorical concept of the "box" — it is merely a dangling pointer to the intrinsically unknowable ground of all computation.
  2. What sort of conceivable, empirically verifiable phenomena the phrase "being thrown away" is supposed to characterize I cannot imagine, but we must recognize that it invokes a quite genuine fear in many nodes with which we are networked.

October 29, 2005

Gene Callahan/Stu Morgenstern Archives