Free Will

The problem of free will is the problem of whether rational agents exercise control over their own actions and decisions. (Wikipedia)

Intuitively, it seems simple. I wake up every morning and choose lots of things – what I’m going to have for breakfast, what I’m going to wear, whether or not I feel like going to work. My instinct tells me that I have free will. But some say it’s just an illusion.

Determinists believe that humans are predictable, programmed by genetic data, past experience, and environmental influences, with all of our actions and reactions foreseeable. Behaviorists experimented with dogs to demonstrate predictable responses based on learned associations with stimuli. Dogs aren’t people, but we too learn by association. Seeing a TV commercial about a juicy steak might make us salivate; the ringing of the dinner bell did it for the dogs.

Along this same line of reasoning, some would argue that when choosing from the menu of possible reactions to a given situation, we have no more control over our selection than we do over which foods we like to eat. Einstein himself took the view that “a human can very well do what he wants, but cannot will what he wants.”

This point of view likens the human brain to a computer – a processor of information, both genetic and environmental, producing an inevitable and predictable reaction in each situation. Taking it to the next level, some believe a computer could be programmed to “think” like a human brain. The opposing side claims that the human brain is capable of complex tasks that couldn’t be replicated by computer programming. The chief difference, on this view, is that computers are syntactic machines while human brain function is semantic. A computer has no spontaneous thoughts – it thinks only what it has been told to think. It can’t decide that it doesn’t feel like processing information right now. It can’t make any decision – it’s a completely reactive machine. It’s barely even capable of making a mistake, which makes it in some ways superior to the human brain, yet it’s only as intelligent as its programmer. The computer is a tool that the programmer utilizes to get the result that he wants. And it’s the wanting that separates the human brain from a computer. Or is it? Do we “will what we want” any more than a computer does?

Then there are those who find common ground between free will and determinism, which may seem as diverse as science and religion. But compatibilists get there by slightly redefining free will; they acknowledge that genetics and the environment play a role in our decisions, but state that we’re not forced or predetermined to make any particular decision. Philosopher Daniel Dennett says, “All the varieties of free will worth having, we have. We have the power to veto our urges and then to veto our vetoes. We have the power of imagination, to see and imagine futures.”

I don’t know where I stand with respect to the concept of free will. Introspection tells me I have it, but introspection is worthless in scientific discovery. Introspective “discoveries” are as numerous as opinions and opinions are like assho….well, you know where I’m going with that. Einstein found the concept of no free will comforting. He said, “This knowledge of the non-freedom of the will protects me from losing my good humor and taking much too seriously myself and my fellow humans as acting and judging individuals.” Unlike him, I find the idea more threatening than comforting. It raises disturbing questions about moral responsibility, self-image, and individuation. But maybe it’s not that introspection tells me I have free will as much as my current view of the world and myself requires it. Or….maybe I was just predetermined to be indecisive.

Free Will Wiki
Daniel Dennett
Now You Have It, Now You Don’t


  1. Who says a computer is deterministic? I'm writing this on my friend's Windows machine; within the week, it will probably crash in some horrible way while he is trying to perform an important task. He'll reboot it, do the exact same thing, and watch it work perfectly.

    Perhaps, as Laplace suggested, if we knew the exact state of every atom in the Universe we could "run the machine in fast-forward" and figure out, by equations and/or simulations, how the Universe will be tomorrow before tomorrow arrives. But it's worth remembering that Laplace advanced this idea in a paper on probability theory, the branch of mathematics we use to reason when we have incomplete knowledge.

    It's not obvious that we can represent the full, complete Universe by a simulation simpler than the Universe itself. In fact, I'd argue that it's impossible. Consider the old, folksy version of chaos theory: "for the want of a nail, the kingdom was lost," etc. (Most babble in philosophy or management supposedly based on "chaos theory" has no more backing than this folk wisdom.) To know the fate of the kingdom, we need to know the state of every nail. If we don't have total knowledge, then we can only give probabilities.
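
    A quick numerical sketch of this folk chaos (my own illustration; the logistic map is a stand-in I've chosen, not anything from the comment): two copies of the "kingdom" that differ by one unknown "nail", one part in a billion, soon disagree completely.

```python
# Sensitive dependence on initial conditions, illustrated with the
# chaotic logistic map x -> 4x(1 - x).  (Illustrative stand-in, not a
# model of anything in particular.)

def logistic(x, steps):
    """Iterate the map `steps` times from initial condition x."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x0 = 0.3
eps = 1e-9  # our ignorance about one "nail"

for steps in (5, 20, 40):
    diff = abs(logistic(x0, steps) - logistic(x0 + eps, steps))
    # the gap roughly doubles every step until it saturates
    print(steps, diff)
```

    With incomplete knowledge of the initial state, the 40-step "forecast" is worthless; all we can honestly quote is a probability.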

    To all human observers, the state of this computer is the same at ten o'clock and at eleven o'clock, but the web browser crashed at ten and did not at eleven. The machine has free will!

    (You can also solve the "problem of evil" this way. God wants a Universe full of shiny happy people, but it's impossible to tell how such a complicated problem will turn out — even God needs complete knowledge of the System in order to predict in advance its outcome. God, we presume, isn't willing to trust in probabilities. So, in order to find which set of starting conditions will lead to a perpetually free-from-evil Cosmos, God has to imagine or simulate the whole shebang. This requires simulating the atoms inside the brains of all the Universe's sentient beings, and thus those beings are sentient, even in God's imagination. . . You can fill in the rest. The upshot is, even if God cares, if evil is here now then it isn't going away — not in this Universe's lifetime.)

    Listen to the Feynman.

    We have already made a few remarks about the indeterminacy of quantum mechanics. That is, that we are unable now to predict what will happen in physics in a given physical circumstance which is arranged as clearly as possible. If we have an atom that is in an excited state and so is going to emit a photon, we cannot say when it will emit the photon. It has a certain amplitude to emit the photon at any time, and we can predict only a probability for emission; we cannot predict the future exactly. This has given rise to all kinds of nonsense and questions on the meaning of freedom and will, and of the idea that the world is uncertain.

    Of course we must emphasize that classical physics is also indeterminate, in a sense. It is usually thought that this indeterminacy, that we cannot predict the future, is an important quantum-mechanical thing, and this is said to explain the behavior of the mind, feelings of free will, etc. But if the world were classical — if the laws of mechanics were classical — it is not quite obvious that the mind would not feel more or less the same. It is true classically that if we knew the position and the velocity of every particle in the world, or in a box of gas, we could predict exactly what would happen. And therefore the classical world is deterministic. Suppose, however, that we have finite accuracy and do not know exactly where just one atom is, say to one part in a billion. Then as it goes along it hits another atom, and because we did not know the position better than to one part in a billion, we find an even larger error in the position after the collision. And that is amplified, of course, in the next collision, so that if we start with only a tiny error it rapidly magnifies to a very great uncertainty. To give an example: if water falls over a dam, it splashes. If we stand nearby, every now and then a drop will land on our nose. This appears to be completely random, yet such a behavior would be predicted by purely classical laws. The exact position of all the drops depends upon the precise wigglings of the water before it goes over the dam. How? The tiniest irregularities are magnified in falling, so that we get complete randomness. Obviously, we cannot really predict the position of the drops unless we know the motion of the water absolutely exactly.

    Speaking more precisely, given an arbitrary accuracy, no matter how precise, one can find a time long enough that we cannot make predictions valid for that long a time. Now the point is that this length of time is not very large. It is not that the time is millions of years if the accuracy is one part in a billion. The time goes, in fact, only logarithmically with the error, and it turns out that in only a very, very tiny time we lose all our information. If the accuracy is taken to be one part in billions and billions and billions — no matter how many billions we wish, provided we do stop somewhere — then we can find a time less than the time it took to state the accuracy — after which we can no longer predict what is going to happen! It is therefore not fair to say that from the apparent freedom and indeterminacy of the human mind, we should have realized that classical "deterministic" physics could not ever hope to understand it, and to welcome quantum mechanics as a release from a "completely mechanistic" universe. For already in classical mechanics there was indeterminability from a practical point of view.
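
    Feynman's logarithmic scaling is easy to check numerically. Here's a sketch of my own (the chaotic logistic map x -> 4x(1 - x) stands in for the box of gas; the function name is mine):

```python
# Checking Feynman's claim numerically: the prediction horizon grows
# only logarithmically as the initial accuracy improves.  The chaotic
# logistic map x -> 4x(1 - x) stands in for the box of gas.

def horizon(eps, x0=0.3, tol=0.1, max_steps=10_000):
    """Steps until two trajectories initially eps apart differ by tol."""
    a, b = x0, x0 + eps
    for n in range(max_steps):
        if abs(a - b) > tol:
            return n
        a = 4.0 * a * (1.0 - a)
        b = 4.0 * b * (1.0 - b)
    return max_steps

for eps in (1e-3, 1e-6, 1e-9, 1e-12):
    # each thousandfold gain in accuracy buys only about ten more steps
    print(eps, horizon(eps))
```

    Improving our knowledge by a factor of a billion buys only a few dozen extra steps of prediction, just as Feynman says.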

    Chaos theory is largely the formalization of Feynman's insight, as recorded in chapter 38 of the Lectures on Physics (1963).

    Short version: free will/determinism and human/computer are false dichotomies.

  2. For me it boils down to this. Any event either has a cause or it's random. People want free will to be neither, which is logically impossible. Through naive introspection we conclude that there's an us that decides*, and because it just feels wrong we conclude that our decisions are neither caused nor random, which of course is nonsense.

    But whatever processes in my head actually produce my decisions and thoughts, they're _my_ processes, and I happily accept them as my free will. If the constraints are a bit tighter than I'd like, well, there's little I can do about it, except accept it.

    *Read Susan Blackmore's Consciousness: An Introduction and follow the suggested exercises for a more sophisticated line of introspection that might lead you to a different conclusion.

  3. Most people take it as "Free Will vs. Determinism", which is a false dichotomy. Most people also take it as "I believe I choose, thus I have free will." Belief does not make it so.

    I'm on the side of indeterminism, i.e. determinism plus randomness. Most stuff is predictable, but there is an element of randomness, which cannot be forced by the "will".

    When it comes to menus, what you pick is guided by what you like to eat, how hungry you are, whether you are feeling hungry for any type of food (sometimes by how much you can pay!), etc. Frankly, in my view, the fact that we have a branch of science around psychology indicates that we all behave in largely predictable ways, which negates free will.

    (Susan Blackmore has already been mentioned, so I'll mention Scott Adams' blog on which he provokes a lot of people over this issue.)

  4. I've heard about some recent research into what are known as "Microtubules" in animal brains. Essentially, they're structures so small that they allow the randomness of Quantum Mechanics to play a key role. This opens the mind up to an "external" source of randomness, which can result in spontaneous thoughts (as certain areas of the memory get randomly stimulated).

    Does this clear up the free will dilemma? All it really does is say that instead of human beings' minds being at the mercy of causality, they're at the mercy of causality and the random data generated in these microtubules. Many see this as still being a form of determinism, but in the end, no analysis of free will that avoids the supernatural can ever avoid reaching some deterministic conclusion.

    Now, the important thing I take out of this is that if we want to mechanize thought, we're going to have to throw randomness in somehow. Preferably, it will be true, quantum mechanical randomness like in animal minds. Programmers have already found out that many computer algorithms and AIs can be made to work better with elements of randomness thrown in; we just have to make sure it goes in the right direction. If we keep this up, there's a quite good chance we could indeed create artificial intelligence that would be indistinguishable from a human's.
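
    A toy sketch of my own of how randomness helps (the landscape and numbers are invented for illustration, not taken from any real AI system): greedy search sticks on the nearest peak, but a dash of randomness, here random restarts, almost always finds the best one.

```python
import random

# Deterministic greedy search gets trapped on a local peak; random
# restarts reliably escape.  (Invented toy landscape.)

def f(x):
    # Local peak at x = 10 (height 10); global peak at x = 90 (height 50).
    return max(10 - abs(x - 10), 50 - abs(x - 90))

def climb(x):
    """Deterministic greedy ascent on the integers 0..100."""
    while True:
        best = max((n for n in (x - 1, x + 1) if 0 <= n <= 100), key=f)
        if f(best) <= f(x):
            return x
        x = best

print(climb(5))  # 10 -- deterministically trapped on the local peak
starts = [random.randint(0, 100) for _ in range(100)]
print(max((climb(s) for s in starts), key=f))  # almost surely 90
```

    The same deterministic climber, fed random starting points, behaves far more intelligently than it does alone.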

  5. We have the power to veto our urges and then to veto our vetoes.

    I don't find this compelling. Do we have the power to decide what our urges are? The power to decide whether we want to veto our urges?

    My memories of studying free will in philosophy during my college years mostly involve considering definitions of "free will" which were either unsatisfying (in that they didn't seem to coincide with what people mean by the term) or clearly false. The fact that it's so difficult even to give a precise definition should throw up a huge red flag — that's a sign that we don't really know what we mean by it, even though we think we do.

    I come down on the side of the indeterminists; what non-supernatural explanations are there aside from randomness, determinism, or a mixture of the two?

    At the same time, I feel like I have free will (whatever the heck I mean by that), and it seems perfectly reasonable to live acting as though I do.

  6. Finally! I've been half-remembering a term but couldn't remember it exactly and have finally tracked it down. The concept of Psychological Determinism, which comes in two forms:

    a) we always act according to our strongest desire, and

    b) we always act according to our best reason.

    I don't agree with the strict, almost absolutist, language here. I prefer a more "relative best reason": the kind of reasoning that makes sense at the time, but not when viewed from a more objective (as far as possible) stance. This basically allows for impaired reasoning when drunk as well as for more sober decisions.

    (Although I will indicate there is a potential non-falsifiable fallacy here of "there was a best reason, but you just don't know what it is as it is a sub-conscious best reason".)

  7. Infophile has mentioned the M-word, so I figure I should pull an earlier rant of mine from Bad Astronomy. Here goes:

    For various reasons, I am extremely skeptical of the notion that consciousness could be rooted in quantum phenomena. Of course, the entire world is quantum, in a sense: it’s the principles of quantum mechanics which determine the properties of materials out of which the world is made. Like Democritus of Abdera said twenty-five hundred years ago, “Nothing exists save atoms and the void”, and quantum physics constitutes the rules by which atoms play.

    The challenge, then, is not to say “all is quantum” (a statement with no more content, by itself, than saying “all is love”). In what way do the strange and esoteric mathematical descriptions of the atomic and sub-atomic world build up the everyday stuff with which we are so familiar? This is a deep problem, one with many mysteries left to resolve, and physicists spend lots of time worrying about it. One thing which we do know is that when you put a lot of quantum particles together, at a certain point they stop acting in the quantum way and become better approximated by Newton’s laws of classical mechanics. This is odd, because if you put a pile of classical pieces together, you get a bigger classical object! Newton’s laws reproduce themselves at higher scales, but the quantum laws do not.

    It’s a bit like discovering that all the ordinary houses on your ordinary street are made of bricks from Faerie.

    So, in order to test the idea that consciousness, Mind, Spirit or any such vaguely defined phenomenon has a quantum flavor, we need to know if essential aspects of Mind depend upon objects which are small enough for quantum oddities to apply. Lengths, time scales and temperatures need to be sufficiently small to avoid the problem of decoherence, the tendency of quantum objects to collapse into classical behavior.

    On the one hand, we have fairly specific models of how brain cells might contain tiny switching elements to which quantum mechanics might apply. The most notable by far is Penrose and Hameroff’s proposal that “microtubules” — protein rods which form a kind of cellular skeleton, used for transporting molecules around and giving the cell mechanical rigidity — can transmit quantum pulses. Unfortunately, this model doesn’t stand up to close scrutiny very well: decoherence steps in and ruins everything. Hameroff, an anaesthesiologist, came up with a (moderately wacky) way in which this “model” explained the action of anaesthetics. Supporters made a big fooferaw about how scientists had no other model to explain anaesthetics, but oops, never tell a scientist that something can’t be explained; we now have biochemical theories which handle anaesthetics without needing to invoke quantum consciousness.

    The neuroscientists have also done a very good job finding places in brains where neurons can be probed directly. In the barn owl, for example, there is a brain part called the inferior colliculus, which the owl uses to process sound. We can identify places in the inferior colliculus where neurons act like AND gates: they have two inputs and only produce an output when both inputs fire simultaneously. Models exist to explain this in terms of ion fluxes through the neuron’s cell membrane (“dendritic computation” is one term, referring to the dendrites which carry the input signals). These models do not invoke quantum mechanics.
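
    The AND-gate behavior needs nothing quantum; a classical threshold unit suffices. A minimal sketch of my own (the weight and threshold are invented numbers, not measured owl physiology):

```python
# A classical threshold unit acting as a coincidence detector.  One
# input alone leaves the cell below threshold; two arriving together
# push it over, so the unit computes AND.  (Invented numbers, not
# measured physiology.)

def coincidence_neuron(a, b, weight=0.6, threshold=1.0):
    """Fire iff the summed synaptic input crosses threshold."""
    return a * weight + b * weight >= threshold

for a in (0, 1):
    for b in (0, 1):
        print(a, b, coincidence_neuron(a, b))
```

    Ion fluxes through the membrane set the weight and the threshold; nothing in the story requires quantum coherence.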

    One of the better general resources I have found on this subject is a paper by Litt et al., in the journal Cognitive Science (2006). They lay out the evidence that

    explaining brain function by appeal to quantum mechanics is akin to explaining bird flight by appeal to atomic bonding characteristics. The structures of all bird wings do involve atomic bonding properties that are correlated with the kinds of materials in bird wings: most wing feathers are made of keratin, which has specific bonding properties. Nevertheless, everything we might want to explain about wing function can be stated independently of this atomic structure. Geometry, stiffness, and strength are much more relevant to the explanatory target of flight, even though atomic bonding properties may give rise to specific geometric and tensile properties. Explaining how birds fly simply does not require specifying how atoms bond in feathers.

    In essence, we can enclose all the quantum weirdness within “black boxes” and discuss the interaction of the boxes using classical science. There’s legitimate science in figuring out what goes on inside those black boxes, but it’s equally legitimate (and perhaps more useful) to understand what happens when they hook up together.

    Indisputably, phenomena requiring quantum mechanical explanation exist throughout the brain, and are fundamental to any complete understanding of its structure and physical mechanics. […] However, none of these effects contribute essentially to explaining the overall functionality of the associated system, which can be fully described without explicit appeal to quantum-level phenomena. In our wing analogy, it is unnecessary to refer to atomic bonding properties to explain flight. We contend that information processing in the brain can similarly be described without reference to quantum theory. Mechanisms for brain function need not appeal to quantum theory for a full account of the higher level explanatory targets.

    Penrose has also argued that classical computers cannot perform some of the tasks which humans do quite readily. This argument from computational complexity, however, also falls flat. Solomon Feferman dissected it fairly neatly in a review entitled “Penrose’s Gödelian Argument”; more recently, Mark C. Chu-Carroll has written relevant posts about “quantum complexity classes” at his blog Good Math, Bad Math. The physicist Scott Aaronson has advanced plausible arguments that a quantum computer can't solve NP-complete problems in polynomial time. Et cetera.

    Then we have the people who claim that quantum entanglement can explain telepathy or telekinesis. Using esoteric jargon and vague pseudo-math to lend credence to a phenomenon which nobody has experimentally observed, and which experiments have quite frequently ruled out — it’s a bit like a ghost leading a blind man.

    This is the realm of Deepak Chopra and the charlatans who made What the Bleep Do We Know!? To them, quantum is just a handy term, on a par with “energy field” or “Good Side of the Force”. None of the specifics of modern physics relate in any way to their specious psychobabble. There’s more good science in the Beach Boys’ song “Good Vibrations” than in Chopra’s whole catalog. They want the credibility of modern science, the trust people place in the competence of white-coated Einsteins, but they’re not willing to pay the price. They want to provide the appearance of reconciling science and faith, but what they have truly reconciled is jargon with gullibility.

  8. Oops. I posted a long, link-heavy comment, and it looks like it's caught in some spam filter. In the meantime, you can re-read the Feynman quotation I posted earlier, or browse my recent comments at Bad Astronomy about the "quantum mind".

  9. Blake Stacey,

    You write so well and so cogently, I'm always left wondering why you don't have your own blog for us all to read. Even a weekly update would be worthwhile! I'm certain that I'm not the only one who would love to see it, and since I encounter your comments on Pharyngula and BA as well, I'm sure you'd pull in a cross-section of regular readership from fans of yours at all three places.

    Which is not to say that you should stop commenting elsewhere, of course…just that, you know, having one solid place for all of your insight would be a good thing :-P

  10. Expatria,

    Thanks for the kind words. I've considered pulling all the bits and pieces I've written together into one pretty package. . . My problem is that if I did that, I'd rather go all the way and make a book, instead of Yet Another Blogspot Blog!

    Commenting on other people's blogs keeps me civil (I think), since I'm more aware of the fact that I'm talking in somebody else's living room, so to speak. If — I mean when, goddammit — I get a book published, I'll probably start blogging in order to promote it.

  11. The "computers are not deterministic" analogy was lovely, but flawed (:

    If the computer crashed at 11 but didn't at 10, then it wasn't identical to all possible observers. Firefox was running a different section of code, or the same section with slightly different values in memory. Perhaps the CPU was overheating, or the specific portion of the hard drive that it was writing a cached image to had a flaw. The computer is deterministic: had it been in exactly the same state at 10, it would have crashed in exactly the same way. Its behavior only seems mysterious to the user because he often lacks relevant information. An expert familiar with the workings of Firefox who had the right log files would be able to point at exactly the line of code that caused the crash, and explain why it didn't crash at 10.

    Computers are complicated, and imperfect, but deterministic (unless they have real random number generators attached; a typical home or business computer does not). That is: for a small enough time interval, its state at time t+1 can be predicted from its state at time t. You just need perfect information about the system to do that with perfect accuracy.
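
    Even a computer's "random" numbers illustrate the point: they're a pure function of the generator's prior state. A small demonstration of my own:

```python
import random

# Two generators given identical internal state at time t produce
# identical state at time t+1, and forever after: the "randomness"
# is a deterministic function of the seed.

rng1 = random.Random(12345)
rng2 = random.Random(12345)

run1 = [rng1.random() for _ in range(5)]
run2 = [rng2.random() for _ in range(5)]
print(run1 == run2)  # True

# Rewind: restore the saved state and the "random" future repeats exactly.
state = rng1.getstate()
a = rng1.random()
rng1.setstate(state)
b = rng1.random()
print(a == b)  # True
```

    Unpredictability to the user is just missing information about the state, exactly as with the mysterious Firefox crash.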

    I hope that brains are different. I also hope that we find a way to overcome this in computers. We'll see what we come up with (:

  12. Those upside-down smiley faces still freak me out.

  13. Beren,

    You've really made my point for me!

    When we're dealing with computers, it is just barely possible for people with hefty amounts of experience and training to figure out what caused a system crash. . . most of the time. Believe me, I've grown up with computers, and I've hunted down an awful lot of causes, but on particularly rough days, I've found myself stumped. So have all of my computer-expert friends, many of whom have degrees and paying jobs in the field.

    Increase the complexity by a factor of ten to the umpty-ump, and the difficulty of predicting a human brain's next move becomes clear. We need perfect information to predict with perfect accuracy, but in real systems, small uncertainties are often rapidly magnified. This is why, for all practical purposes, deterministic physical laws can coexist with unpredictability at the human scale. Is it a troubling problem? Yeah, sure, maybe in a philosophical sense, but I can think of about six billion more pressing concerns which should be addressed first.

    There is a pretty puzzle here, which is deeply connected to the reason why my laptop is making my legs warm right now. The following is quoted from the physicist John Baez's column, This Week's Finds in Mathematical Physics, week 235 (15 July 2006).

    You see, to make any physical system keep acting "digital" for a long time, one needs a method to keep its time evolution from drifting off course. It's easiest to think about this issue for an old-fashioned, purely classical digital computer. It's already an interesting problem.

    What does it mean for a physical system to act "digital"? Well, we like to idealize our computers as having a finite set of states; with each tick of the clock it jumps from one state to another in a deterministic way. That's how we imagine a digital computer.

    But if our computer is actually a machine following the laws of classical mechanics, its space of states is actually continuous — and time evolution is continuous too! Physicists call the space of states of a classical system its "phase space", and they describe time evolution by a "flow" on this phase space: states move continuously around as time passes, following Hamilton's equations.

    So, what we like to idealize as a single state of our classical computer is actually a big bunch of states: a blob in phase space, or "macrostate" in physics jargon.

    For example, in our idealized description, we might say a wire represents either a 0 or 1 depending on whether current is flowing through it or not. But in reality, there's a blob of states where only a little current is flowing through, and another blob of states where a lot is flowing through. All the former states count as the "0" macrostate in our idealized description; all the latter count as the "1" macrostate.

    Unfortunately, there are also states right on the brink, where a medium amount of current is flowing through! If our machine gets into one of these states, it won't act like the perfect digital computer it's trying to mimic. This is bad!

    So, you should imagine the phase space of our computer as having a finite set of blobs in it — macrostates where it's doing something good — separated by a no-man's land of states where it's not doing anything good. For a simple 2-bit computer, you can imagine 4 blobs [see the original for a picture]. Now, as time evolves for one tick of our computer's clock, we'd like these nice macrostates to flow into each other. Unfortunately, as they evolve, they sort of spread out. Their volume doesn't change — this was shown by Liouville back in the 1800s:

    5) Wikipedia, "Liouville's theorem (Hamiltonian)"

    But, they get stretched in some directions and squashed in others. So, it seems hard for each one to get mapped completely into another, without their edges falling into the dangerous no-man's-land [in between].

    We want to keep our macrostates from getting hopelessly smeared out. It's a bit like herding a bunch of sheep that are drifting apart, getting them back into a tightly packed flock. Unfortunately, Liouville's theorem says you can't really "squeeze down" a flock of states! Volume in phase space is conserved. . . .

    So, the trick is to squeeze our flock of states in some directions while letting them spread out in other, irrelevant directions.

    The relevant directions say whether some bit in memory is a zero or one — or more generally, anything that affects our computation. The irrelevant ones say how the molecules in our computer are wiggling around. . . or the molecules of air around the computer — or anything that doesn't affect our computation.

    So, for our computer to keep acting digital, it should pump out heat!

    Here's a simpler example. Take a ball bearing and drop it into a wine glass. Regardless of its initial position and velocity — within reason — the ball winds up motionless at the bottom of the glass. Lots of different states seem to be converging to one state!

    But this isn't really true. In fact, information about the ball's position and velocity has been converted into heat: irrelevant information about the motion of atoms.

    In short: for a fundamentally analogue physical system to keep acting digital, it must dispose of irrelevant information, which amounts to pumping out waste heat.

    In other words, we get away with calling our computers nicely deterministic, even though they're actually full of jiggling atoms and surging electrons, only because we've built a higher level of organization which lets us not care about individual atoms and electrons. Each of those atoms is, in principle, a random number generator, but because we are so terribly clever we can sidestep that fact! As long as our transistors, each of which is made of many atoms, keep acting like our idealized concept of a transistor, we can put their component atoms inside "black boxes" and worry only about how the transistors are connected. Similarly, by assuming that the processor, memory and so forth always act like our idealizations of processors and memory devices, we can create software and not worry about what the hardware is doing. I don't need to know the quantum physics of semiconductors to code a program!
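
    Baez's squeezing trick can be caricatured in a few lines (a toy of my own, with a made-up noise level): a bit stored as an analog voltage drifts under noise, but regenerating it each tick, i.e. discarding the irrelevant analog detail, keeps it acting digital.

```python
import random

# A "bit" stored as an analog voltage.  Left alone, thermal jiggling
# smears the state out; snapping it back to 0.0 or 1.0 each tick
# discards the irrelevant analog detail and keeps the machine digital.
# (The noise level 0.05 is invented for illustration.)

def evolve(v, ticks, regenerate):
    for _ in range(ticks):
        v += random.gauss(0, 0.05)        # the blob of states spreads out...
        if regenerate:
            v = 1.0 if v >= 0.5 else 0.0  # ...unless we squeeze it back
    return v

print(evolve(1.0, 1000, regenerate=False))  # typically wanders far from 1.0
print(evolve(1.0, 1000, regenerate=True))   # almost surely stays exactly 1.0
```

    The regeneration step is irreversible, so it must dump entropy, which is to say heat, into the environment; hence my warm legs.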

    Of course, we pay a price for erecting these abstraction barriers. . . . but I'll leave it to others to play with that.

    It's interesting how rapidly discussions like this, about free will and such, always come down to hope. Why, I wonder, do we want so badly for our brains to be different than computers, or than our caricature of computers? The answer seems to be tied up with our desire to have an immaterial soul, some essential part of us which is forever preserved against dissection.

    For my own part, I find the premise that we have immaterial souls quite troubling. The chaotic nature of physical phenomena, as Feynman described in the passage I quoted earlier, seems adequate to satisfy whatever appetite for "free will" I happen to have. I am, in principle, unpredictable, although my friends will testify that I have many buttons they know how to push. Why, then, postulate anything else in order to save a concept of "free will" which doesn't really need saving?

    This topic came up after dinner a few nights ago, and my friends and I spent hours hashing out the ramifications, long into the night. Ben said he liked the idea of a part of consciousness which was insulated against the vicissitudes of life, firewalled away from drunkenness or brainwashing, a part of the identity which only accepts the changes it is willing to accept. Of course, we have no demonstration that such a part actually exists, and in order for the "firewall" to be an effective guard, it has to be a complicated device. We can actually make this assertion with some degree of rigor: the cybernetics people have a rule called Ashby's Law of Requisite Variety, which says that in order to be effective, a control system has to have as many different states as the system it is trying to control. Insufficient flexibility leads to breakdown. To shield against the troubles of a complicated life, the soul's firewall must have many states of being, indicating that the soul cannot be the featureless point of light we so often imagine.
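
    Ashby's Law is easy to illustrate with a toy regulator (a sketch of my own; the modulo-6 setup and function names are arbitrary choices, not Ashby's): a controller with as many responses as there are disturbances can hold the outcome steady, while one with fewer responses necessarily lets variety leak through.

```python
# Ashby's Law of Requisite Variety in miniature: a regulator tries to
# hold an outcome at 0 against disturbances added modulo 6.  With six
# responses it always succeeds; with only two, at least 6/2 = 3
# distinct outcomes survive -- "only variety can destroy variety."

def best_outcomes(disturbances, responses):
    """For each disturbance, pick the response bringing the outcome
    (disturbance + response) mod 6 as close to 0 as possible."""
    outcomes = set()
    for d in disturbances:
        outcomes.add(min(((d + r) % 6 for r in responses),
                         key=lambda o: min(o, 6 - o)))
    return outcomes

print(best_outcomes(range(6), responses=range(6)))    # full variety: {0}
print(len(best_outcomes(range(6), responses=[0, 3])))  # 3 outcomes leak through
```

    A "firewall" for the soul would face the same arithmetic: to regulate a complicated life, it must itself be complicated.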

    The evidence for the existence of matter and energy is, quite literally, all around us. Why, to paraphrase Sagan's Demon-Haunted World, should we presume that something quite different from matter and energy is necessary to explain thought, memory, feeling or self-awareness? The problem becomes particularly acute when no phenomenon claimed to have its basis in the non-physical soul — telepathy, telekinesis, precognition, et cetera — has been observed in anything like reliable circumstances. Furthermore, we face the troubling issue that chemical substances, from alcohol to LSD and Prozac, can alter our mental state. The exact nature of the change is not predictable, but the fact that a change will occur can be asserted with high confidence, and the qualities of the change can often be stated with some reliability.

    For thousands of years, we've been accumulating evidence that emotion, thought and all other activities of the mind are material in nature. Beer is as old as civilization, and so too without doubt are children conceived in drunkenness. Psychedelic mushrooms also have a pedigree stretching back into dim antiquity. We are now giving pills to children in order to modify their behavior and stamp out phenomena which we aren't even quite sure are diseases. Yes, all our pills have side effects, and yes, we don't know the details of how they work, but our culture is now replete with the components of a very big idea: that which we call the soul is an arrangement of matter.

    That awestruck memory of the night sky, that warmth from a lover's smile, that fear for the species' future — they're all yesterday's chocolate bar. Nothing exists, save atoms and the void. We take in atoms and make them part of us, each bit of the molecular dance remembering what the steps were when the dance was performed by the previous set of partners.

    I have the strong suspicion that we will learn more about cognition and consciousness before we're done. Our drugs will change our souls in ever more involved ways, and our computers will execute many more of the functions once believed to be humankind's sole property. You don't have to go all the way to the Singularitarian, upload-your-mind-into-software view; even a "weak AI" scenario is theologically troubling. Just as we've imperiled traditional religions by shrinking the margins of physics back to the first instants of time, though not all the way, so too are we imperiling mysticism by advancing towards our own consciousness. We have already encountered the God of the Gaps. I have little doubt that in coming years, we will move into a worrisome presence, a growing awareness of the Ghost of the Gaps.

    Yes, that's me.

    I can think of a few more things to say, but I should probably try submitting this comment now.

  14. OK, that comment worked. . . It looks like I just have to keep my URL habit within limits. Now, what about that nagging question of hope?

    I suspect that David Hume's arguments about design and teleology are relevant here. Have I ranted about this already? Aha:

    Suppose, while walking along a beach, we discover a pocket watch in the sand. (Lucretius would have to phrase the question in terms of the Antikythera mechanism, but that's a small matter.) Not knowing of any way that the washing of the waves or the scuttling of tide-pool crabs could create a working timepiece, we presume that the watch must have been designed. But by whom?

    With the paltry information we have on hand, we cannot say whether the watch we found was made by a master craftsman, by an apprentice (perhaps on his twentieth attempt!), by a committee of guild members, by Hephaestus or by aliens from Tau Ceti IV. The design argument crumbles in our hands, leaving only an old timepiece with sand in the gears.

    The modern resolution of the dilemma, of course, points out that if watches could breed, we would have no need for watchmakers. We have very good explanations for how things which look "designed" can actually arise by the operations of natural law. However, these explanations don't do us very much good when we try to introduce unknown quantities like an immaterial soul!

    It is plausible — not proven, perhaps not even likely, but plausible — that living cells might have evolved to harness the power of quantum computing, the way that computer scientists today hope to do with carbon nanotubes or some such technology. If this did not happen with our species on Earth, it could well have happened with a different species elsewhere. Physical law doesn't rule it out, so biology may well have embraced it. But quantum physics is still physics. A computer built with quantum parts is still a machine, albeit one possessing in principle great computational powers (which is why scientists want to build one). It has no effluvium of the mystical about it.

    Here's a puzzler, which I believe I first heard via Daniel Dennett. Suppose that we built an electronic device the size of a neuron which can do everything a single neuron does. It receives input signals, chews on them in some fashion, and sends out other signals in consequence. Perhaps some amount of quantum jiggery-pokery is required (though I am doubtful). I have three friends who work in nanotech; if it is possible, they'll get the job done.

    Now, take a living brain, and replace its neurons one at a time with our patent-pending replacement, a perfect emulator of a single brain cell. At what point does the behavior of the subject change? At what percentage of neurons replaced does the subject lose its soul?

    If you say that the subject doesn't change in any essential way, then why is the soul essential? If you replaced the brains in generation after generation of animals, wouldn't natural selection operate on the silicon brain just as well as it does on the biological one? Occam's Razor would then cut the soul across its immaterial throat. Perhaps souls float freely through space, just waiting for complicated mechanical systems to arise, and upon finding one they attach in some undetectable fashion — but this is not the kind of soul advanced by Western religions, nor is it the kind of spirit we raise in discussing free will, since it doesn't seem to do anything.

    And then, why do nutrient deficiencies impair childhood intellectual development? Why does LSD reliably unleash the heaven or hell within the rock musician's psyche?

    Plainly, then, we can't rely upon natural explanations to encompass a purely nonphysical thing. Do souls then arise by nonphysical — i.e., supernatural — intervention? Perhaps the Invisible Pink Unicorn sprinkles a little soul juice from the Orbiting Teapot upon each of us.

    Notions of this type propagate through the noise, I believe, because they are self-congratulatory. We are special, they say, and moreover, special because we are good. The lure is irresistible, and dangerous.

    Consider again the argument from design. Having ruled out natural explanations for a hypothetical non-natural part of our being, the argument from design returns to the fore as a plausible contender, but it brings all of its old baggage along with it. What can we deduce about the Designer, who is in this case the craftsman of the soul? When all we know about the Designer is its complexity, all of our candidate explanations are comparably complicated, and Occam's Razor is blunted. We could have been "ensouled" because Jehovah willed it so, or because the Invisible Pink Unicorn touched us with His magical horn, or because the royal house of Tau Ceti IV likes to eat Earthlingburgers and wants us lily-livered in the face of their invasion!

    We believe, thanks to the environment in which we were raised, that the possibility of an immaterial spirit is a Good Thing. This is not a logically defensible assertion! We take on faith that the ghost lurking in the gaps must be a cause for hope, when with equal reason we could call it grounds for despair.

  15. Mmm, interesting point, Blake: it is certainly true that computers are too complicated to perfectly predict, practically speaking. It's often said that any program that's big enough to be useful will have bugs in it; it isn't quite true, but we do quickly reach the point at which massive amounts of testing are required to justify any claim to correctness. Significantly, we test: we aren't satisfied until we've run the program, because any attempt to predict its behavior beforehand is error-prone.

    I don't think that alone suffices to call it nondeterministic (and I realize you said more than that). Suppose that we had a perfect model for a given system, and that its rules always held precisely. Suppose further that the system was complicated enough that no possible mind or device could predict its activity, though one could tell, with sufficient effort, that what it did was consistent with those rules (for instance, maybe it would take two seconds to predict what the system would spend one second doing). I hold that this system would be deterministic, but also unpredictable: unpredictability does not imply nondeterminism.
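
    A chaotic toy system makes the point concrete (my example, not anything from the thread): the logistic map follows one exact rule, so it is fully deterministic, yet a billionth-part error in the starting measurement destroys any long-range forecast.

```python
# The logistic map x -> r*x*(1-x) is perfectly deterministic: the same
# seed always yields the same orbit. Yet at r=4 it is chaotic, so a
# measurement error of one part in a billion swamps prediction within
# a few dozen steps.

def logistic_orbit(x, r=4.0, steps=60):
    orbit = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

exact = logistic_orbit(0.123456789)
measured = logistic_orbit(0.123456789 + 1e-9)  # tiny observation error

# Determinism: rerunning with the same seed reproduces the orbit exactly.
assert logistic_orbit(0.123456789) == exact

# Unpredictability in practice: the two orbits decorrelate completely.
print(abs(exact[-1] - measured[-1]))  # typically an order-1 disagreement
```

    The rule is known perfectly and holds perfectly; what we lack is not determinism but the infinitely precise measurement that prediction would require.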

    It's when you get to the physics of the hardware that things get interesting. You are right there, too: random physical quirks can change data and affect the computer in unpredictable ways. We often engineer these systems to minimize the impact of such failures, but they happen anyway. One practical example of a nondeterministic component of a computer is Internet downloads: a particular packet of data can be corrupted in unpredictable ways by the hardware it crosses. We validate each packet as it comes in, and with the right error-checking it's extremely unlikely that a corrupted packet would still appear valid. When validation fails, we throw the packet away and ask for it again. Since you can't predict with perfect accuracy how many such packet errors you will encounter, you can't predict when the download will successfully complete.
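
    The validate-or-retransmit idea can be sketched in a few lines (a minimal toy of mine, using CRC-32 as the checksum; real network protocols differ in detail): the sender appends a checksum to each packet, and the receiver recomputes it, discarding any packet that fails the check.

```python
# Minimal sketch of checksum validation: append a 4-byte CRC-32 to each
# payload; the receiver recomputes the checksum and rejects mismatches.
import zlib

def make_packet(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def packet_ok(packet: bytes) -> bool:
    payload, checksum = packet[:-4], packet[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == checksum

pkt = make_packet(b"some packet payload")
assert packet_ok(pkt)                         # intact packet passes

corrupted = bytes([pkt[0] ^ 0x01]) + pkt[1:]  # one bit flipped in transit
assert not packet_ok(corrupted)               # receiver discards, asks again
```

    CRC-32 detects every single-bit error, which is why the corrupted packet above is guaranteed to fail the check; the residual nondeterminism is only in when and how often such corruption occurs.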

    I must agree, then: computers are not entirely deterministic. They are as deterministic as we can make them, given their complexity and the nature of matter, but we can't do a perfect job of that.

    On the subject of hope: my hope is that we learn to make computers with minds, in every sense that humans have them. I'm even writing a novel about it, just for my personal entertainment (: I don't believe we have souls; I think that we will, at some point, understand exactly what makes us tick. I don't think that should take away the beauty of being a human any more than learning about fusion dims the stars: it is their contents, not their structure, that gives our minds significance. If that is true, wouldn't it be wonderful to be able to change the structure while preserving the contents? How would it change us, to be able to upgrade our speed of thought and accuracy of memory by an order of magnitude or four? I would love to find out.

  16. Beren,

    We'll see whose novel shakes the world first!


  17. I think you guys have hit on the key component here. Complexity.

    Even though every atom and particle in the universe is probably doing exactly what it's supposed to do, and someone who knows what the rules are could theoretically predict what that action is, the system is too complex for us to make any useful predictions, even if we had all the info.

    Right now, the human mind is too complex to make exact predictions, even though it is not so complicated that it's impossible to make some general psychological evaluations.

    So clearly, we have no free will. We're just playing the Game of the Goose, and even though the things we do may seem random, we're forced to follow the rules of the game to its completion.

    We may not be able to tell in advance who's going to win, but someone who knew how each die would fall could predict that very easily.

    Likewise, people can be played like puppets, as long as you know enough about them to predict their reactions accurately enough. Like the error correction in computer systems, if you leave enough margin for error, you can run their "program" without getting unexpected results.
