Without Guilt & Justice – Decidophobia


1

HUMANITY has always lived in the shadow of fears. Yet next to nothing was known about fear until Freud made a beginning with the study of unusual phobias. A little later, some existentialist philosophers suggested that one dread is common to all mankind: the dread of death. This suggestion was couched in such obscure language that discussions of it have generally revolved around the meaning of phrases in books and have not dealt with the facts. It might have been better to ask what leads some writers to express themselves in ways that seem designed to forestall understanding and hence also criticism, and why legions of professors and students thrive on texts like that. The creeping microscopism that meets the eye all over academia is related to a deep dread that still lacks a name.

Humanity craves but dreads autonomy. One does not want to live under the yoke of guilt and fear. Autonomy consists of making with open eyes the decisions that give shape to one’s life. But being afraid of making fateful decisions, one is tempted to hide autonomy in a metaphysical fog and to become sidetracked and bogged down in puzzles about free will and determinism. It is far easier to define autonomy out of existence than it is to achieve autonomy in the very meaningful sense in which it can be attained. The difference between making the decisions that govern our lives with our eyes open and somehow avoiding this is all-important. The best way to begin to understand autonomy is to examine some of the major strategies people use to avoid it; and this I shall do.

It is important to be specific and concrete. Talk of “freedom” and “the fear of freedom” immediately invites irrelevant questions about “freedom.” That term has so many meanings that we need a more precise term. “Autonomy” has fewer associations, and once I have defined my meaning, other uses of the term should not keep creeping in. The fear of autonomy is a nameless dread, which leaves me free to coin a name for it: decidophobia.

In the fateful decisions that mold our future, freedom becomes tangible; and they are objects of extreme dread. Every such decision involves norms, standards, goals. Treating these as given lessens this dread. The comparison and choice of goals and standards arouses the most intense decidophobia.

Other -phobia words also mix New Latin with Old Greek: claustrophobia, for example. Moreover, the Latin decido has two very appropriate meanings. It can mean “decide,” which is the primary meaning intended here. But it can also mean “fall off” (hence plants are called deciduous if their leaves fall off in winter), and decidophobia has something in common with acrophobia, the fear of precipitous heights. Although the two Latin verbs have different roots (caedo and cado), our expression “take the plunge” suggests the relevance of both meanings. Decidophobia is also the fear of falling.

People do not fear all decisions. Decidophobes, far from dreading meticulous distinctions, may actually revel in them. For immersion in microscopic decisions is one good way of avoiding fateful decisions.

John B. Watson, the founder of Behaviorism, argued that only two fears are innate: the fear of sudden loud noises and the fear of falling – of suddenly being without support. His thesis was based on experiments with infants, and it is widely accepted. Decidophobia cannot be proved to be innate, nor does it matter greatly whether it is. What does matter is that it can be mastered, although it is much more difficult to overcome old fears than it is to acquire new ones.

It is easy to understand why parents cultivate acrophobia in their children: precipitous heights are dangerous, and having been taught to dread them, one communicates one’s dread to one’s children. That is much easier than teaching them prudence, self-reliance, and the skills required to enjoy peaks. All this applies just as much to decidophobia.

Anyone making fateful decisions that affect others without feeling any apprehension would be a menace. Anyone who would unhesitatingly plunge into choices that are likely to mold his own character and future would be so unpredictable that he, too, would endanger the social fabric. The easiest way to insure stability is to engender fear. Teaching the skills required for responsible decision making is much harder.

Choosing responsibly means that one weighs alternatives. (This theme will be developed further in the chapter on “The New Integrity.”) But comparing fateful alternatives and choosing between them with one’s eyes open, fully aware of the risks, is what frightens the decidophobe. Basically, he has three options: to avoid fateful decisions; to stack the cards so that one alternative is clearly the right one, and there seems to be no risk involved at all; and to decline responsibility. He need not even choose between these options: they can be combined. In brief: avoid, if possible; if that does not work, stack; and in any case make sure that you do not stand alone.

It would be reasonable to feel apprehension in direct proportion to the number of those whom our decision is likely to affect importantly; but people tend to attach disproportionate importance to themselves. The decisions they dread most are those that shape their character and their future.

I shall examine ten strategies that help decidophobes to avoid dizziness. All of them involve the refusal to scrutinize significant alternatives. When anyone shuts his eyes in a crisis, it is plausible to assume that he is afraid. But if he merely acts as if he were afraid, he is still open to criticism. My critique of decidophobia applies also to those who are not afraid but merely behave as if they were.

2

Before I consider the ten strategies, let me comment very briefly on two writers who have illuminated decidophobia and one who has not.

Kierkegaard was the father of existentialism. Fear and guilt were central in his thought, nowhere more so than in The Concept of Dread (1844). Here he wrote as a Christian about original sin, but he also showed how there is a close connection between dread and freedom, and he called dread “the dizziness of freedom.” The image of dizziness brings to mind acrophobia and the fear of falling. Indeed, Christianity called the first assertion of man’s freedom and his first fateful decision “the fall.” But Kierkegaard failed to see his own leap into faith as an expression of decidophobia. In fact, he failed to recognize most of the major strategies.

Jean-Paul Sartre has gone further toward an understanding of decidophobia. His famous declaration in 1943 that man is “condemned to be free” suggests clearly that man finds freedom hard to bear. In his fiction and philosophy, Sartre has exposed some of the ways in which people try to hide their freedom from themselves: they pretend that their hands are tied, that they are the victims of their parents or of circumstance, although in fact the freedom to make fateful decisions is inalienable. Even a prisoner condemned to death retains this freedom. Man, according to the early Sartre, is freedom but always tends to look upon himself as if he were a thing. Thus he succumbs to what Sartre calls mauvaise foi. In my language, this bad faith and these constant self-deceptions are prompted by decidophobia.

Unfortunately Sartre’s philosophical discussions of these mechanisms were heavily influenced by German existentialism, and particularly by Martin Heidegger and his “fundamental ontology”: they were designed to explicate truths about Being. At times Sartre approached Heidegger’s obscurantism. This kept him from seeing how his argument suffered from some serious confusions; and the later Sartre has followed the later Heidegger as well as Kierkegaard into exegetical thinking – one of the ten major strategies of decidophobia. The great diagnostician has succumbed to the disease that he had analyzed.

Erich Fromm called an early book Escape from Freedom, but despite that title he shed little light on decidophobia. He remained within the framework of a sociological school that had undertaken studies of what it called the authoritarian personality, and he found the great example of this type and of the escape from freedom in Germany, particularly in the rise of Nazism. Now it might indeed seem as if the rise to power of totalitarian governments depended on decidophobia; but this is a serious mistake. Wherever totalitarianism has triumphed, other explanations are in order.

In Germany, for example, a minority of the voters favored Hitler when the president of the Weimar Republic called on him to form a cabinet, and he had to form a coalition government. His was the largest single party, but there were many parties; and most of those who did vote for Hitler had no conception of the loss of freedom that awaited them. They were far from fastidious about the liberties of others, but they did not crave liberation from their own freedom. Their motives included resentment of the Treaty of Versailles and of the inability of democratic statesmen to get it altered; fear of Communism; dreams of national glory; and hatred of Jews. But no combination of these motives would have brought Hitler close to power if the republic had not been undermined by economic disaster.

Before World War I Germany had been very prosperous. The loss of the war, the expulsion of the Kaiser, the advent of the republic, and an inflation that quickly reached the point where ordinary postage stamps cost twenty billion marks were experienced as a syndrome. People saw their savings evaporate, and soon the inflation was followed by a vast economic depression and intolerable unemployment. Desperation reached the point where millions became willing to try almost anything. Many became Communists, while others were willing to try Hitler to see if he could provide jobs. The choice did not seem irrevocable; many liberals saw Hitler as a rabble rouser who would quickly be discredited in a position of power that he was ill equipped to fill, and many Communists thought that a few weeks of Hitler would prepare the way for them.

Even after the Reichstag fire, which Hitler used to outlaw the Communist Party, to imprison many socialists, and to intimidate the opposition, the parliamentary elections of March 1933 still did not provide him with a majority, and he had to continue with a coalition government. The nationalists who joined forces with him did not want to escape from freedom or let him make all fateful decisions: they felt sure that he would be no match for them and that they would govern Germany.

There is no case on record in which the voters chose a government because it offered them less freedom. Where people did opt for rulers who took away their liberties, something seemed to be drastically wrong with all alternatives, and the men who were chosen did not make clear to the voters how their freedom would be curtailed. Men do not crave slavery or concentration camps. On the contrary, such images evoke the will to fight and even to risk one’s life for freedom. Nor are there two types of people: those who love freedom and those who prefer slavery. Such myths obstruct the comprehension of decidophobia. There are subtler ways to avoid fateful decisions. I shall examine ten.

3

One strategy for avoiding such decisions is religion. In Dostoevsky’s Brothers Karamazov the Grand Inquisitor shows at length how the Roman Catholic church has liberated people from the burden of having to make fateful decisions. His disquisition left its mark on Sartre and Fromm. Oddly, however, in Dostoevsky’s novel the case is made out only against the church of Rome. The Grand Inquisitor claims that the “craving for community of worship is the chief misery of every man”; for, I might add, any confrontation with fateful alternatives engenders dread. He argues that to save men truly one must take possession of their freedom, and he suggests that what people ultimately want is to be united “in one unanimous and harmonious ant heap.”

There is no suggestion in the novel that the same charges could be brought against the Greek Orthodox church, or that other religions, too, have told men what is good and evil, right and wrong, thus obviating difficult decisions. Religion says: Do this and don’t do that! Or: Thou shalt, and thou shalt not. Instead of inviting us to evaluate alternative standards, it gives us norms as well as detailed applications. In fact, religions have evolved traditions that shield the observant from situations in which tragic choices might become inevitable.

The most obvious illustration is monasticism, which requires one great decision, once – to renounce the freedom of making major decisions. A Jesuit’s position in his order is a little less extreme. As usual, there are degrees. But those who become monks or nuns no longer need to face such fateful decisions as how to live, with whom, where, what to do, and what to believe. As a rule one does not even decide to submit to the authority of a religion: one is born into the fold and then confirmed at the threshold of adolescence before one has had any chance to explore alternatives and make a choice. One does not so much decide to stay as one does not decide to leave. Decidophobia keeps one in the fold.

Of course, this is not all there is to religion; and I have dealt at length with other aspects of religion in other books. Nor is allegiance to a religion always prompted by decidophobia. Perhaps this point is best made by choosing suicide as an illustration. I am not including this among the ten strategies because relatively few people have recourse to it. Still, it is often prompted by the inability to stand alone and make fateful decisions. Yet it need not be inspired by decidophobia. In many situations a human being may choose suicide with open eyes after considering what speaks against it and examining the major alternatives. Suicide can be wholly admirable. Nor need it be primarily an act of either fear or courage; it can also be an attempt at revenge or a form of protest. Similarly, not every member of every religion is a decidophobe.

Nevertheless, religion represents one of the most popular strategies for avoiding the most fateful decisions; in fact, it is nothing less than the classical strategy. On the whole it worked well not only during the Middle Ages but even quite recently in villages and small towns where almost everybody shared the same religion. In the twentieth century, however, this strategy has broken down more and more – since World War II even among Roman Catholics. Clergymen of the same religion have taken to adopting widely different public positions on crucial moral questions. Still, many people shut their eyes to this plain fact and manage to persuade themselves that their own moral views do not depend on any decision of their own but are simply part of being Jewish, Christian, or, say, Hindu. If this strategy were not in a process of disintegration, there would be less need for so many other strategies.

Drifting represents another, even less deliberate, strategy. It comes in two forms. Model A is extremely popular with those over thirty without being confined to them: status quoism. Instead of choosing how to live, with whom, where, what to do, and what to believe, one simply drifts along in the status quo. All decisions are made, none need to be made. Some people need a regular supply of alcohol or tranquilizers to remain satisfied with Model A.

This form of inauthenticity is readily perceived by many students. A few go to the opposite extreme: Model B. One drops out, has no ties, and is not guided by tradition; one has no code, no plan, no major purpose. One lives from moment to moment, rarely knowing in advance what one will do next. Model B can also be lubricated with alcohol, but since World War II this kind of drifting has been associated more often with other drugs. Conversely, in the past opiates have often reconciled the oppressed to the status quo.

Some of those who have drifted into Model B are afraid of making almost any decision. If they hitchhike, they go wherever they are taken. They leave things to chance. Everything depends on whatever impulse happens to be felt at the moment. To be governed by caprice is to drift. The hero of Camus’s novel The Stranger illustrates this orientation.

4

When this way of life breeds a sense of emptiness and despair, one becomes receptive to the siren song of commitment. This state of mind was described by Hermann Hesse in his Journey to the East, a novel published in Germany in 1932, less than a year before the Nazis came to power. Deeply dissatisfied both with traditional life styles and with being adrift, many people join a movement – or drift into a movement. There need not be any momentous decision to join. It may be a matter of conformity with those among whom one happens to find oneself. Allegiance to a movement is the third strategy.

Such allegiance, again, is not always decidophobic. Some movements have little bearing on faith and morals, goals and life styles. If so, membership is marginal, although it may still be prompted by a fear of standing alone and some sense that there is safety in numbers. Total immersion, in which no crucial decisions at all remain to be made, is the exception, not the rule. Most of the strategies I shall consider from now on have a less total effect than the first two: usually, they work only in some areas of life.

“Of necessity, the party man becomes a liar,” said Nietzsche. Those who realize how closely words like “party” and Parteigenosse were associated with the German anti-Semitic movement even then, may pardon his hyperbole. In any case, he explained his meaning more fully: “By lie I mean: wishing not to see something that one does see; wishing not to see something as one sees it.” And he added:

The most common lie is that with which one lies to oneself; lying to others is relatively exceptional. Now this wishing not to see what one does see, this wishing not to see as one sees, is almost the first condition for all who are party in any sense: of necessity, the party man becomes a liar.

These themes are developed in Eric Hoffer’s True Believer and Sartre’s “Portrait of the Anti-Semite.” Sartre himself never joined the Communist Party though for years he made common cause with it. Others have joined parties or movements or retained their religion without any sacrifice of the intellect. They live in a tension, occasionally acute, between their loyalty and their intellectual conscience. As usual, there are innumerable possibilities and degrees.

At one extreme is the type sketched by Nietzsche and portrayed more elaborately by Sartre: he has made a decision once and henceforth needs only to extrapolate from that. His views come nowhere near doing justice to the complexity of fact, but he makes a virtue of simplicity and despises subtlety and cleverness. In the words of Pascal’s famous wager, he has made himself stupid. He prizes certainty above truth or considers it, untenably, a warrant of truth, and he takes intellectual scrupulousness for cowardice and a lack of manly decisiveness. He fails to recognize his own acrophobia, his own dread of standing alone without support.

In 1970 a spokesman for what was then simply called “the movement” in the United States kept saying “we” in an argument. Asked whom he meant, he hedged, but finally, being pressed, replied: “Me and my mother.” It was a sudden inspiration and obviously struck him as a witty way of putting down his questioner. Yet it revealed in a flash the infantile fear of standing alone.

At the time, Erik Erikson’s first reaction to this story was that it was too good to be true. Yet it was exactly what had happened. “Me and my mother” was supposed to be funny because “the movement” represented a revolt against middle-class mothers. But it is no accident – to use an expression dear to Marxists – that the Communist Party thinks of itself as a mother, just as the Catholic church does. “The movement,” too, functioned as a surrogate mother, and the We-We orientation is infantile. All talk of community notwithstanding, it recognizes no singular You. Only an I can say You to an individual. The We-We orientation is not progressive. It is regressive and takes us back to the “craving for community” which Dostoevsky’s Grand Inquisitor associated with the desire to be united “in one unanimous and harmonious ant heap.”

In 1970 “the movement” in the United States was Left; in the thirties “the movement” in Germany was the Nazi Party, and visitors to Munich drove past road signs that proclaimed it “The Capital of the Movement.” But even some people who had joined the Nazi party found themselves confronted again and again by the need for hair-raising decisions, and a few actually made very courageous choices. Again, there were many different types. What was true even there applied much more obviously to the New Left, which was never a party in which one took out membership. To call all who belonged in some sense to this movement decidophobes would be no less stupid than applying the term to all who are religious in some sense.

By 1972, “the movement” in the United States referred more often to women’s fight for equality than to the New Left. Here the goals were much better defined and mattered more to many women than did any sense of belonging. No woman could hope to exert enough pressure as an individual to end invidious forms of discrimination that made it far more difficult for women than for men to live autonomous lives; hence there was a real need for concerted action – and no need whatever for any woman who approved of this “movement” to use it to avoid autonomy. Nevertheless, it is important to recognize the attraction of movements for decidophobes. And in individual cases one must ask how important this attraction has been, and to what extent it may reduce all arguments with otherwise intelligent people to futility.

Those seeking liberation must ask themselves whether they are really advancing toward autonomy or whether they have merely exchanged one kind of conformity for another. Renouncing a religion, a creed, or a code and throwing off the blinders that went with it does not necessarily spell liberation. The question remains whether one has turned to a surrogate and put on a new pair of blinders.

5

Allegiance to a school of thought sounds like a mere variant of allegiance to a movement, but it is actually importantly different. Membership in a movement is generally palpable and overt, and one’s consciousness of it is usually crucial: it helps to give one an identity. Allegiance to a school of thought can be like that but usually is not. Typically, it is quite unselfconscious and even denied outright. When granted, it is often felt to be irrelevant.

Those who belong to a school of thought are usually more interested in their small differences with fellow members than they are in what they have in common. These differences can be spelled out without much trouble, and in their publications those who write develop differences of this sort. What one has in common with those with whom one differs is much harder to specify. Distance is required to behold such family resemblances, and those inside the family lack this distance. But they rarely find it difficult to say who does not belong.

Can one say what the members of a school have in common, without even specifying any school? They tend to deal with a few clusters of problems, not with others, and they tend to deal with them in the same way. They share a way of thinking, a style, and a tradition that they see in much the same perspective. A few writers may be key figures in more than one tradition, but different schools will see them differently. Thus Heidegger and his admirers do not see Aristotle the way the Oxford philosophers do, and the Aristotle of the Thomists is different again.

Spelling out the shared assumptions of a school may require exceptional insight and skill. For most of these assumptions do not function like dogmas; they do their job without rising to consciousness. They provide a largely unquestioned framework in which a person can make all sorts of small decisions and tangible contributions without ever coming face to face with shattering decisions.

Once basic assumptions are spelled out, they can be questioned. It is much safer to keep them buried. In Heidegger’s philosophical jargon, questions that might cast doubt on his whole edifice can hardly come up; and questions asked in a different language can be shrugged off as subphilosophical: they show no inkling of what it is all about; they expose the questioner; all is safe.

The same goes for Thomism and analytical philosophy, phenomenology and Marxism, psychoanalysis and other schools of thought. The basic decision has been made, usually without one’s being conscious of making any decision, and the choices that remain are small enough to be enjoyable. One has chosen the game and the rules and can have a good time planning one’s moves. Microscopism spells safety.

When one chose the college one attended and the teachers with whom one studied, one had no clear notion of alternatives. If one made a choice, it was a haphazard choice, determined by accidents of geography, financial conditions, and who happened to be where at a certain time. One became a member of a school of thought not by making a decision but by being trained by someone who was there.

Henceforth one no longer asks what are right methods, right questions, right style, right models, and right rejections. Alternatives do not call for painful choices but can be ruled out of court because one does not do things that way. Those who present them need not be taken seriously and therefore do not call the decidophobe back to freedom.

The most common reaction to members of a rival school is simply lack of interest. Rival schools are not so much tolerated as they are ignored. And those who go it alone are typically shrugged off as crackpots until one of them succeeds in capturing the public imagination and is therefore perceived as a threat. When that happens, material heresies do not elicit as much wrath as formal heresies; it is easier to be rational about what one takes to be false results than it is to deal deliberately with a radically different approach that calls into question one’s whole style of thinking.

6

Exegetical thinking differs from interpretation. Indeed, I shall use the term in a distinctive way to label the fifth strategy. Interpretation is inevitable; exegetical thinking is not. Exegetical thinking assumes that the text that one interprets is right. Thus the text is treated as an authority. If what it seems to say is wrong, the exegesis must be inadequate: the interpreter is wrong, never the text.

Actually, the interpreter is on trial as well as the text; neither he alone nor the text alone. For the exegetical thinker the text is as God. The paradigm is a text that is supposed to be revealed by God. This case takes one back to religion and need not be considered here. One might think that Kierkegaard was an exegetical thinker only because he was a Christian, but the notion that there are two kinds of existentialism, Christian and atheist, is shallow; Heidegger’s and Sartre’s development closely resembles Kierkegaard’s. All three exemplify what I shall call the existentialist pattern.

First one adopts a subjectivism so extreme that it is found to be intolerable. One spurns the drifters, “the crowd,” das Man (the anonymous “one”), and summons the solitary individual to commitment, resoluteness, engagement. Lukewarmness and routine are spurned; conviction, courage, and decision are called for. But then the terrifying question arises: does anything go, then, if only it is chosen with a will?

Heidegger’s Being and Time (1927) had left this question open. When the Nazis came to power, Heidegger joined the movement. After the war he hinted that he had soon become disillusioned. If so, he kept his disillusionment to himself without incurring any liabilities. This episode has attracted considerable attention, but Heidegger’s descent into exegetical thinking is far more instructive. It began with his book on Kant, became very clear in his exegeses of Hölderlin’s poems, and found its fullest expression in his writings on the pre-Socratic philosophers. The texts he chose in his later period share a fascinating incoherence and an oracular quality; they invite noncontextual and arbitrary readings; and the exegesis could be made to share their charismatic quality. From the start it was one of Heidegger’s avowed principles that “an interpretation must necessarily use force.” This cult of force (Gewalt) is fused with a scornful renunciation of logic and reason. To escape from an extreme subjectivism that invites intellectual and moral anarchy, the philosopher casts about for some authority to save him. But the leading existentialists have been too individualistic to accept for long the authority of any party or church. What option remains? Exegetical thinking permits the exegete to read his own ideas into a text and get them back endowed with authority.

The exegetical thinker avoids standing by himself and saying what he thinks; for he might be wrong and would not know what to say if others followed his example and said what they thought. Such a situation would call for the evaluation of alternatives and invite the use of reason and the assessment of evidence. He is suspicious of reason and associates evidence with science and positivism. There would be no telling in advance where the argument might lead. Moreover, the result would be provisional, pending further evidence and argument. Confronted with the prospect of acrophobia, the exegetical thinker looks for a prop, for something to lean on. Being a man of words, he finds a text.

Heidegger, for example, casts a few aspersions on the Philistines who see the pre-Socratics as mere primitives or, in his own words, as “a kind of high-grade Hottentots,” and who believe that “compared to them modern science represents infinite progress.” In a similar vein one can easily disparage those who fail to see the grandeur of Hölderlin’s late verse. If one succeeds in communicating one’s own reverence and enthusiasm for the texts and gets across their fascination along with their obscurity and their relevance, one is almost certain to be hailed as a great teacher. Then, having made clear how dark the text is, one makes some sense of it, using force – but takes care that the meaning one discovers is not too plain lest one destroy the charisma. If all this were done in good faith, it would still be an example of mauvaise foi, of self-deception.

The later Sartre exemplifies the same pattern. By 1946 he felt dissatisfied with the extreme subjectivism of his early existentialism, and in his famous lecture “Existentialism is a Humanism” he cast about for some objective standards to meet the charge of irresponsibility. The discussion after the lecture convinced him that he had not succeeded. Eventually he turned to Marxism. But Sartre’s Marxism is rather like Kierkegaard’s Christianity: a highly subjective version that is unacceptable both to careful scholars and to fellow Christians or Marxists. It is a way of endowing one’s own views with authority.

This suggestion may seem strange to those who concede no authority to Marxism or to Christianity, to Hölderlin or to the pre-Socratics. The whole strategy, however, depends on the assumption that certain texts or figures or traditions have authority. Not every text is equally suitable, and some texts have to be built up by the exegetical thinker before he can proceed to read his own thoughts into them.

Sartre himself said in his lecture in 1946 that his existentialism was like Heidegger’s but unlike Kierkegaard’s because Kierkegaard was a Christian. But he himself sounded like a Christian theologian when he said in 1961: “Russia is not comparable to other countries. It is only permissible to judge it when one has accepted its undertaking, and then only in the name of that undertaking.” Such special pleading would be instantly familiar if the first sentence began: “Christianity is not comparable to other religions.” And Sartre’s concern in the same essay that “we didn’t even have the right to call ourselves Marxists” brings to mind Kierkegaard’s anxiety about his right to call himself a Christian. Here the ways of interpretation and exegetical thinking part. A decent scholar of Marx, Nietzsche, or Plato does not fret about his right to call himself a Marxist, Nietzschean, or Platonist.

Exegetical thinking is also exemplified by the liberal who believes in inalienable rights to life and liberty, in the equality of all men, or in other similar articles of faith of which he feels sure that they are true, the only question being how they ought to be interpreted. He feels bound to interpret the old formulas in such a way that they will turn out to be true, and to his mind an appealing exegesis has a much stronger claim to assent than any impartial inquiry would suggest; for he feels that it has all the authority of the old dogma.

Even simple acts have many motives, and exegetical thinking is not always motivated in exactly the same way. In some traditions this way of thinking is so deeply ingrained and taught from such an early age that one could not point to any period in a person’s life when he had succumbed to decidophobia. He is the slave of a childhood habit. He is part of a culture that has succumbed to decidophobia.

Within such cultures one may encounter odd variants. Thus there are Catholic scholars who, impelled both by a streak of independence and a powerful elective affinity, devote themselves to the exegesis of Heidegger rather than St. Thomas. Meanwhile Heidegger himself, after breaking with the Catholicism of his childhood and expounding radical subjectivism, found a refuge in exegetical thinking.

I have given so much attention to the existentialist pattern because it is so ironical that the existentialists who have given such pride of place to decision should have succumbed again and again to decidophobia. In many ways they are late romantics, and at this point they resemble those early romantics who first made their reputations as subjectivists and then converted to Catholicism, like Friedrich Schlegel. But exegetical thinking is subtler. Those who engage in it rarely understand what they are doing.

7

The first five strategies aim at making no fateful decisions at all, or at most the one decision to make no more fateful decisions from now on. Four of the five involve some recourse to authority; drifting does not. The next two strategies are basically different.

The sixth strategy is Manichaeism. The Manichaean insists on the need for a decision, but the choice is loaded and practically makes itself. It is like being asked to choose between two dishes of food and being told that this one is poisoned and will make you sick, while that one tastes incomparably better and will improve your health and expand your consciousness. All good is on one side, all evil on the other.

Inconvenient facts are ignored or denied; the falsification of history becomes an indispensable crutch; and uncomfortable arguments are discredited as coming from the forces of evil. There is no need for quandaries that keep men sleepless.

It is easier to ridicule this strategy than it is to resist it. Indeed, it has been so popular in so many different periods and contexts that one may wonder whether man is not doomed to think in black and white. But he is not. The ancient Greeks, for example, resisted this temptation to a remarkable degree.

Conflict is at the heart of Homer’s Iliad and of Greek tragedy, but Homer and the tragic poets found humanity on both sides of the contests they described. When the gods participated, some took this side and some that, and like the heroes they were neither wholly good nor altogether evil. In Aeschylus’ Libation Bearers, Orestes actually says: “Right clashes with right.” This theme is no less obvious in Aeschylus’ Eumenides. It is a central motif in his work. Hegel’s notion that it is the essence of tragedy to represent collisions in which both sides are justified was based squarely on Greek tragedy; but he overshot the mark when he claimed occasionally that both sides are equally justified. As a rule, wrong clashes with greater wrong, not only in Greek tragedy but also in life and in history.

When Thucydides, who called himself “the Athenian,” recorded the epic war between Athens and Sparta, he breathed the same un-Manichaean spirit. He did not even suggest that both sides were equally justified. He realized that as a rule wrong clashes with greater wrong.

No doubt, most Greeks were not that free of the tendency to think in black and white; but Manichaeism as a world view is part of the legacy of Persia, the rising world power that Aeschylus helped to defeat at Marathon. It was probably less than a hundred years before this battle that Zarathustra had taught his people that there were two great cosmic forces: light and good versus darkness and evil; and he summoned man to help the former to vanquish the latter.

Some Zoroastrian ideas gained entrance into Judaism without achieving any great prominence in the Old Testament. But the New Testament speaks of the sheep and the goats, the children of light and the children of darkness; and according to both Matthew (12:30) and Luke (11:23) Jesus said: “He who is not with me is against me.” In Christianity the Devil became a far more powerful figure than Satan had been in the Hebrew Bible; he became the Evil One, the Lord of Hell; and humanity was split into two camps – those headed for salvation and those headed for everlasting torment.

Even so, Christianity did not follow Zarathustra all the way. In the third century another Persian prophet, Mani, preached a more Zoroastrian version of Christianity: Manichaeism. For a while its impact in the Roman Empire rivaled that of Christianity, and Augustine came under its spell. Eventually the church condemned “the Manichaean heresy,” and as a religion it died. But Manichaeism is far from dead if the name is used inclusively to label views in which history is a contest between the forces of light and darkness, with all right on one side.

The perennial appeal of Manichaeism is due not only to the fact that it flatters its followers but also to the way in which it makes the most complex and baffling issues marvelously simple. There is no need for difficult decisions; the choice is perfectly obvious.

In times of war, Manichaeism flourishes; and during the cold war that followed World War II it did, too. What is more surprising is that this strategy is also encountered in the work of some philosophers who at first glance seem rather subtle. Thus Heidegger contrasted two life styles in Being and Time: authentic and inauthentic. He described the latter at great length before finding the mark of authenticity in “resoluteness.” He never showed that resolution was incompatible with inauthenticity. Of course, it is not, as his own decision for Hitler in 1933 illustrates. A resolute leap into faith or into a movement is quite compatible with dishonesty, decidophobia, and heteronomy. But in his Manichaean way, Heidegger assumed that all good must be on one side; and since he considered resoluteness good and inauthenticity bad, he failed to see that they can occur together.

Manichaeism permeates much of traditional morality, and beyond that also Western thought about reality. Indeed, many people assume that Manichaeism is based squarely on the facts. But there are no opposites in nature. What would be the opposite of this rose or that Austrian pine? Or of the sun, or of this human being? Only human thought introduces opposites. Neither individual beings nor classes of such beings – such as roses, pines, or human beings – have opposites; nor do colors, sounds, textures, feelings.

But are not hard and soft opposites? As abstract concepts they are; but the feel of a rock and the feel of moss are not. It is only by disregarding most of the qualities of both experiences and classifying one as hard and the other as soft that people think of them as opposites. Playing with fire and rolling in the snow are not opposites – far from it – but hot and cold are. No specific degree of heat or coldness has any opposite, only the concepts do. The starry heavens and a sunny sky are not opposites, but day and night are. And the Manichaean looks everywhere for day and night concepts.

Temperatures are arranged on a linear scale, like hard and soft, fast and slow. Day and night, like summer and winter or spring and fall, are best represented by a circle, like colors. Colors that are across from each other on a color wheel are not opposites; no two colors are any more than two times of day. Nothing temporal, nothing living, nothing that is in process has an opposite.

To understand the world and to bring some order into the chaos of human impressions one needs concepts and abstractions; one disregards what in some particular context is less relevant. Scientists, engineers, and analytical philosophers generally realize how indispensable analysis is. The neoromantics who extol direct experience and feeling are much more prone to catch the virus of Mani. Why?

Thoughtful people are at least dimly aware of the claims of both feeling and understanding. Even those who incline heavily toward one side usually feel some need for the other. Thus the analytically minded tend to leave the realms of faith and morals, if not politics, to feeling and intuition, while the romantics, who stress the importance of feeling and intuition, indulge in a bare minimum of analysis and tend to favor polarities.

Neither analysis nor direct experience entails any form of Manichaeism. The Manichaean limps on both legs: he curtails both the understanding and direct experience, settling for very little of each. He all but shuts both eyes and is a decidophobe.

He supposes not only that truth and error are opposites but even that there are children of truth and children of error. The notion of degree, and especially degrees of truth, is anathema to him. His thinking is as simplistic as a true-or-false test. “Abraham Lincoln was born on February 11, 1809: True or False?” False, but hardly the opposite of the truth, seeing that he was born February 12, 1809. Even a multiple-choice test would allow a little more subtlety if it distinguished between degrees of falsehood or approximations of the truth. But such complexities frighten those who seek refuge in Manichaeism. They like decisions that make themselves. The Manichaeans think in black and white; the autonomous think in color.

8

The seventh strategy is much the subtlest of the lot. I shall call it moral rationalism. It claims that purely rational procedures can show what one ought to do or what would constitute a just society. There is then no need at all to choose between different ideals, different societies, different goals. Once again, no room is left for tragic quandaries or fateful choices.

Various philosophers have devoted considerable acumen to the development of different versions of moral rationalism, and one cannot prove all of them wrong in a few paragraphs. But my critique of the idea of justice in the next three chapters will join this issue and should show that moral rationalism is untenable.

My repudiation of moral rationalism does not entail an acceptance of what I call moral irrationalism. Anyone supposing that it must would commit the Manichaean fallacy. I repudiate both.

Moral irrationalism claims that because reason by itself cannot show people what to do, reason is irrelevant when one is confronted with fateful decisions. This view is exemplified in different ways by Kierkegaard and Heidegger and widely associated with existentialism. It is compatible with any of the first six strategies and need not be considered here at length as a separate strategy. The moral irrationalist says more or less explicitly that when it comes to ultimate commitments reason is irrelevant; and the choice of a religion or a movement or a school of thought, of a life style like drifting or a way of thinking like exegetical thinking or possibly even Manichaeism, involves to his mind an ultimate commitment. This is a way of saying that while it may be reasonable to keep your eyes open when making relatively petty decisions, it makes no sense to keep them open and examine your impulsive preferences as well as the most significant alternatives when a choice is likely to mold your future. In other words, be careful when you drive slowly, but when you go over fifty miles per hour shut your eyes!

Both moral rationalism and moral irrationalism involve an inadequate conception of reason and responsibility. Kant, an exemplary moral rationalist, thought that his ethic had the great distinction of being autonomous. Heidegger, an exemplary moral irrationalist, suggests that his stance, and only his, is authentic. Both claims are untenable.

I have considered seven ways of avoiding autonomy: (1) religion, (2) drifting, (3) allegiance to a movement, (4) allegiance to a school of thought, (5) exegetical thinking, (6) Manichaeism, and (7) moral rationalism. It is possible to systematize these seven strategies under two headings: First, avoiding fateful decisions, possibly excepting the one decision not to make any more fateful decisions (methods 1 to 5); second, making fateful decisions, but stacking the cards in some way so that the choice will make itself and there is no possibility of tragedy (6-7).

More important, one can combine several of these strategies. Thomists, for example, combine 1, 4, 5, and 7 with a dash of 6; and Thomists who joined the Fascist party in Italy, the Nazi party in Germany, or some of their cognates in Hungary or Slovakia get six out of a possible seven points. They miss out only on drifting.

Herbert Marcuse does almost as well. In his work one finds all but the first two strategies: religion and drifting. His fusion of Manichaeism and moral rationalism in his widely read essay on “Repressive Tolerance” is instructive because it furnishes such a gross example of both.

His Manichaeism finds expression in his central plea for “intolerance against movements from the Right, and toleration of movements from the Left.” He attacks “the active, official tolerance granted to the Right as well as to the Left, to movements of aggression as well as to movements of peace, to the party of hate as well as that of humanity.” His whole case depends on the assumption that there are two camps, the Left and the Right, the children of light and the children of darkness, and that the former are for peace and humanity, and are “intelligent” and “informed,” while the latter are for aggression and hate, “stupid” and “misinformed.” His plea for “the withdrawal of toleration of speech and assembly from groups and movements which promote aggressive policies, armament, chauvinism, discrimination on the grounds of race and religion, or which oppose the extension of public services, social security, medical care, etc.” hinges on the notion that all good, all humanity, intelligence, and information are on one side.

His moral rationalism finds expression when he says that “the distinction between liberating and repressive, human and inhuman teachings and practices . . . is not a matter of value-preference but of rational criteria.” Three pages later this becomes the “distinction between true and false, progressive and regressive.” The early Heidegger, under whom Marcuse had studied and to whom he had dedicated his first book, had fused Manichaeism with moral irrationalism.

When one considers how many different combinations are possible, seven strategies may seem to be enough, but when it comes to avoiding fateful decisions people are most inventive and use other means as well. No exhaustive list is possible, but something will be gained by adding three more to my list.

9

The eighth strategy for avoiding autonomy is pedantry. It plays a central part in the creeping microscopism mentioned earlier; and I have noted previously that as long as one remains absorbed in microscopic distinctions one is in no great danger of coming face to face with fateful decisions.

Of course, careful attention to detail is not only compatible with autonomy but a requirement of intellectual integrity. Pedantry becomes decidophobic at the point where a person never gets around to considering major decisions with any care or actually closes his eyes to macroscopic alternatives. The same criteria apply to all the other strategies.

Pedantry is often part of a mixed strategy and may appear as an ingredient of religion, belonging to a school of thought, exegetical thinking, or moral rationalism. In Heidegger’s early work (1927) it appears along with moral irrationalism and Manichaeism. But pedantry can also be a person’s one and only strategy. If so, he is not likely to become famous; hence no great examples come to mind. But Grand, a character in Camus’s novel The Plague, may serve as an illustration: He has, he says, his work, which consists of writing a book, but the first sentence is giving him no end of trouble, and he keeps rewriting it – spending whole weeks on one word.

The ninth strategy is the faith that one is riding the wave of the future. This, too, is usually part of a mixed strategy and frequently associated with religion, allegiance to a movement, belonging to a school of thought, or Manichaeism. But even if the later Sartre did not succumb to these four lures, he certainly deserves a point for this faith in addition to the point he gets for exegetical thinking, and this is a very telling objection to his later work. Sartre endows Marxism with authority because it is “the philosophy of our time” (1960) and the wave of the future, and this exempts him from any need to see what speaks against it and what speaks for various alternatives. In fact, the wave of the future would possess no moral authority even if we could predict it. Anne Morrow Lindbergh, who first said, “The wave of the future is coming and there is no fighting it,” meant Hitler – in 1940. Even if the future had belonged to him, an autonomous person might well have chosen to go down fighting against the Nazis.

Those who employ the ninth strategy never stand alone or unsupported: they always feel backed up by force majeure. Consider a very different example. Wallenstein, the great seventeenth-century general who commanded the imperial army for almost a decade during the Thirty Years War, has been brought to life on the stage by Friedrich Schiller as an exemplary decidophobe: he keeps delaying his crucial break with the emperor and rationalizes his indecision by recourse to astrology. Schiller suggests that if Wallenstein had acted sooner he probably would have succeeded; but he waited until events forced his hand, and he failed and was murdered.

Astrology, oracles, and the Chinese I Ching, which achieved such immense popularity in the United States during the 1960s, have always attracted decidophobes. Nor is it merely a great help in specific cases to have an authoritative prognosis of the future. Millions find it frightening to face up to the lack of necessity in human affairs. For the Soviet Writers’ Secretariat, which considered Alexander Solzhenitsyn’s Cancer Ward unpublishable as written – they were generous with offers to help him rewrite it! – one of the major provocations was the concluding image of the novel: “An evil man threw tobacco in the Macaque Rhesus’s eyes. Just like that . . .” The affront was not so much that Stalin was likened to an evil man, but that the author implicitly denied the Marxist philosophy of history and insisted on the element of caprice in human affairs. One does not have to be a member of the Soviet Writers’ Secretariat to be dizzied by the thought that what some individual decides “just like that” might determine the misery and death of millions. To avoid this dizziness, people have always found it tempting to believe in a divine government, the stars, or “History.”

Solzhenitsyn’s opposition to all forms of historical determinism is central in his August 1914. Here he develops a view of history that stands squarely opposed to Marxism and to that “Tolstoyan philosophy, with its ‘worship of passive sanctity and meekness of simple, ordinary people’ ” which one of his Soviet detractors had found in his early work. For obvious reasons, the polemic against Marxism is not formulated explicitly, but Tolstoy’s ideas about history are rejected expressly. The subtlety and richness of this novel cannot be discussed here, but the points that bear on autonomy can be stated succinctly.

In the first part of August 1914 the author shows how decrepit, obsolete, and hopeless the Tsar’s army was. Soon one feels that there is no need to go on in this vein; the disastrous Russian defeat at Tannenberg was overdetermined, and any one or two of the endless reasons mentioned would have been enough. The reader is led to feel that it did not require the superlative efficiency and technological superiority of the German army to defeat such a wretched force. But then Solzhenitsyn tries to show that if the celebrated German victors, Hindenburg and Ludendorff, had been obeyed, the Russian army would not have been encircled and destroyed: the shattering Russian defeat was accomplished by two German generals who disobeyed orders. And the Russian officers who defied their stupid orders and fought courageously inflicted serious defeats on the Germans and broke through the encirclement. Solzhenitsyn calls upon his readers to reject the false faith in the wave of the future and to make decisions for themselves, fearlessly.

Yet Solzhenitsyn is far from feeling contempt for those who lack the rare qualities required for successful insubordination and autonomy. His compassion for the sufferings of the less gifted – Ivan Denisovich, Matryona, and the wives of some of the prisoners in The First Circle, for example – sears the heart. In August 1914 his sympathetic portrayal of General Samsonov, the commander of the encircled Russian army, becomes one of the glories of world literature precisely when we are shown how a severely limited man dies from the inside out, how despair and death permeate his body. Had Samsonov been more independent, defying his orders, he might have avoided defeat and failure; but he had some sense of decency, courage enough to wish to die with his troops and, when that proved impossible, to commit suicide – and he did not tell lies.

Solzhenitsyn’s hatred of dishonesty is a physical thing and finds superlative expression in the overwhelming final scene of the book, in which a colonel simply cannot keep quiet even though his explosion may not do any good and is almost certain to ruin him. Nothing in Solzhenitsyn’s works is more obviously autobiographical than the description of the feelings of this man. But the same passion for honesty finds succinct expression in an aside in the early story, “Matryona’s House”: “There was nothing evil about either the mice or the cockroaches, they told no lies.” Autonomy does not entail any “elitist” scorn for simple folk. But it does require courage and, as I hope to show, high standards of honesty. And it precludes any deference to the wave of the future.

10

The tenth strategy, finally, often spells total relief, like the first two: marriage. At first glance, it looks quite different from the others and therefore out of place. But it is probably the most popular strategy of all. When getting married, legions of women have echoed Ruth’s beautiful words (which in the Bible are not spoken to a husband): “Your people shall be my people, your god my god.” Henceforth they agree to make no more fateful decisions; they will leave that to their husbands. This pattern is deeply ingrained in many cultures: it is what a woman is expected to do when she gets married; and she is supposed to get married.

Actually, it does not always work that way. The man who boasts of making all the big decisions while he leaves the small ones to his wife may admit when asked to explain: Big decisions concern what we should do about China; small decisions deal with such matters as buying a house and where to live. Figuratively speaking, many men marry their mothers.

It would be wrong to suppose either that marriage must involve decidophobia or that when it does only one spouse can have succumbed. This strategy can work for both husband and wife. Often a couple is a committee of two and makes decisions the way committees usually do: a consensus is presumed and not questioned if all goes well. But if things turn out badly, one does not feel altogether responsible; one merely went along; left to one’s own devices one might have acted quite differently. In a bad marriage such excuses are stated expressly; in a “good” marriage they are entertained privately. However unworthy it may be to harbor such thoughts, there is much more than a grain of truth in them. Left to their own devices, both partners – or on a committee, most or even all members – might indeed have made a different decision. As it happened, nobody made any decision at all, and that was one of the main features of the whole arrangement from the start: marriage is a way of avoiding the necessity of making fateful decisions. Instead of making a decision, one talks until something “transpires.”

Another way of putting this point is less nasty and is unassailably true. In marriage one no longer stands alone. Both partners have somebody to lean on – if all goes well.

It does require a fateful decision to get married in the first place. But that decision may have been prompted by decidophobia, by the desire to escape loneliness, by an unwillingness to make decisions in solitude. There is nothing paradoxical in that. Kierkegaard’s famous leap into commitment is quite typically the plunge one takes from a solitary height to be rid of freedom. It would require a fateful decision to go to a surgeon and say, Please, doctor, give me a frontal lobotomy! But it would not be in the least paradoxical to say that anyone who made that choice was a decidophobe who had come to the conclusion that he could not take it any more.

Getting married does not have to be like that; it is never quite like that; but it is often a little like that. Marriage can be an expansion of consciousness. Getting married can involve the will to incur additional responsibilities and to see a myriad things in two perspectives. Climbing with another person may be prompted partly by the will to reach peaks that one cannot reach alone.

The same is often true of some of the other strategies. A religion or a movement may be embraced because it holds out the same promise. But it is easy to deceive oneself and to credit oneself with a courage that one lacks. One should realize at that point that one is actually hedging one’s bet; however bold one’s intentions, one is making it easy for oneself to succumb to decidophobia in the future if not immediately. It is the exceptional person who keeps resisting this temptation.

The ten strategies could be arranged in a table as follows:

A. Avoid fateful decisions
1. Strategies involving recourse to authority: 1, 3, 4, 5, 9.
2. Strategies that do not involve recourse to authority and are compatible with going it alone: 2, 8.
B. Stack the cards to make one alternative clearly right and remove all risk: 6, 7.
C. Decline responsibility: 10.

But it is only by exploring some of these strategies in detail that one can show what is involved in autonomy, and what lures have to be resisted. Obviously, one must also resist the temptation of thinking of autonomy in Manichaean terms. Autonomy provides no guarantee of happiness or even goodness; and decidophobes may be very decent, altruistic people, good scholars, or fine artists. Their lives may be blessed with warmth, security, and the comfort of strong convictions.

Too often those who denounce conformity see it merely as an expression of cowardice and laziness. It can be that. But the tendency to believe that views held strongly by people whom one knows well and likes must be largely right is extremely powerful and difficult to overcome. One cannot begin to understand the appeal of some of these ten strategies if one ignores this fact.

11

Two questions about decidophobia remain to be answered. First: is a new word really needed? Wouldn’t “self-alienation” or “loss of freedom” do just as well? The point of coining a new term is to bring the phenomena discussed here into sharp focus. “Alienation” is a very troublesome word, and it is extremely important not to fudge the differences between decidophobia and other forms of alienation. Moreover, “alienation” immediately suggests to many people a specifically modern phenomenon, as if things used to be better in the past. Finally, it is widely felt that the cure for alienation must be sought in some sort of community; but I have shown that the search for community looms large among the strategies of decidophobia.

“Loss of freedom” suggests that one had freedom before one lost it. “Escape from freedom” has similar overtones. Such phrases are therefore grossly misleading. Again, an illustration may help. One chapter in Charles Reich’s immensely popular book The Greening of America (1970) bears the title “The Lost Self.” We are transposed into a fairy tale: We had a self before some ogre (“the Corporate State”) took it away, and “When self is recovered, the power of the Corporate State will be ended, as miraculously as a kiss breaks a witch’s evil enchantment.”

This fairy-tale quality pervades the whole book. We are asked to suspend our critical faculties when we are told of World War II (!) that “the source of the war is in the barren, frustrated lives that are led in America; lives that lead men to aggression, force, and power.” The war in Vietnam, too, becomes part of the fairy tale: “Report after report from Vietnam shows that G.I.s, sent out to search and destroy those whom the State considers ‘enemies,’ simply seek the safety of some foliage and peacefully smoke marijuana, rap, and sleep.”

Thus those who are troubled about themselves and their children are urged to take heart: The children of light who are numbered in the millions are even now approaching on the wave of the future, sitting on the Left.

The other question we must face is whether it is at all possible to resist all ten lures, to master decidophobia and become liberated. If I point to some illustrious examples to show that autonomy is attainable, you may feel that what was possible for people of such stature is not necessarily possible for ordinary human beings. But if I mentioned people who are not famous and therefore not widely known, I would be asking you in effect to take my word for it that it is possible and actually has been done. Clearly, the first course represents the lesser evil, the more so because autonomy is difficult to attain.

Characters from literature are beside the point, but it is worth noting that Aeschylus created at least two autonomous figures: Prometheus, who is almost autonomy incarnate, and Clytemnestra, who reminds us that autonomy is no warrant of virtue. (Aeschylus did not mean to suggest that married women, if liberated, must kill their husbands.)

Western philosophy has been to some extent a quest for autonomy, and the pre-Socratics are considered the first Western philosophers because they were free thinkers who leaned neither on religion nor on exegetical thinking but took stands of their own. Heraclitus comes to life as an individual rather more than the others, and although knowledge of him is limited it seems clear that he did not employ any of the ten strategies. The most dramatic illustrations in the long history of Western philosophy, however, are Socrates and Nietzsche. A few interpreters, to be sure, have tried to saddle Socrates with Plato’s moral rationalism; but the Apology, the conclusion of the Theaetetus, and some other passages suggest forcefully that Socrates made a point of not knowing what he did not know. But even if he did not defy the fear of freedom with complete success, he clearly went much further than most men, and contemplation of his thought and posture helps us understand what is involved in mastering decidophobia.

The case of Nietzsche illustrates not only autonomy but also two phobic gambits, employed by those who feel stung by such freedom. The first gambit is to turn those who have mastered decidophobia into something else – say, by posthumously baptizing Socrates as an Anglican or by claiming that Nietzsche was a fascist. The second – indeed, the classical phobic gambit, equally popular with religious apologists and members of political movements, Left as well as Right – is to say: Those who examine their own preferences as well as alternatives end up by never making up their minds; they keep arguing when the time for argument is long past; they never get around to drawing a conclusion and taking a stand; they shrink from decisions. No doubt, there are people of that kind, but it is also possible to make decisions responsibly.

The autonomous individual does not treat his own conclusions and decisions as authoritative but chooses with his eyes open, and then keeps his eyes open. He has the courage to admit that he may have been wrong even about matters of the greatest importance. He objects to the ten strategies not on account of their putative psychological origins but because they preclude uninhibited self-criticism.

There is no need here to recapitulate my interpretation of Nietzsche as a man of this type or to show that he did get around to drawing conclusions and taking stands. My disagreements with him are legion, but his books reveal a truly liberated spirit. It will suffice here to quote a single epigram from his notebooks: “A very popular error: having the courage of one’s convictions; rather it is a matter of having the courage for an attack on one’s convictions!!!”

Among poets there are few whose lives are as well documented as Goethe’s, and nobody can accuse him of having succumbed to any of the ten strategies. Incidentally, he married, as Socrates did, illustrating the point that marriage does not necessarily involve decidophobia.

Coming to our own time, Eleanor Roosevelt was an autonomous woman but did not come fully into her own until after her husband’s death. In some ways, being a President’s wife offers a woman exceptional opportunities; but it is also confining because she must always consider how her words and actions will affect the President. This helps to explain why no other President’s wife played a comparable role. It is harder to understand why others did not use their experience and prestige for the good of humanity once their husbands were out of office or dead, especially in cases in which widespread sympathy and admiration would have made it relatively easy. But the women who marry extraordinarily ambitious men are rarely looking for autonomy; they are much more likely to use marriage as a decidophobic strategy, perhaps even along with religion and allegiance to a party. Moreover, years in the limelight, in which every move must be scrutinized lest it undercut the husband’s career, must be crushing. All this makes Eleanor Roosevelt’s achievement even more imposing. She did not allow her difficult marriage to one of the strongest personalities in the world to destroy her own will and spirit, and she never simply accepted his political or moral views, nor those of the Democratic Party. She kept her own counsel and after his death showed all the world what it means to be autonomous, using every resource at her command for the benefit of those who needed help.

My final example exhibits the most awesome courage: Solzhenitsyn. Rarely has it been so difficult for any man to stand alone, utterly alone, without any prop of any kind. The First Circle, Cancer Ward, Solzhenitsyn: A Documentary Record, and August 1914 show how he succeeded in resisting all ten temptations, making one fateful decision after another against seemingly insuperable odds. His life is autonomy in action.
