Articles about the times we live in, 2004 to the present



Our family wanted to watch a Christmas video, so my son picked out "It's a Wonderful Life." Although it's one of my favorite movies, I wasn't enthusiastic. Too many viewings had dulled some of its appeal for me.

But we watched it all the way through and, like an unwelcome chill, a sense of familiarity came over me as I recognized a way in which the movie parallels our lives today: We're living in Pottersville.

If you haven't watched the 1946 classic, here's a quick synopsis. George Bailey (played by James Stewart) is a post-World War II man stuck in the family finance business, anchored down in his hometown of Bedford Falls when he'd rather be traveling the world. He considers suicide when his uncle's carelessness — and the dishonesty of the town autocrat, Mr. Potter — allows $8,000 in cash to vanish. The misunderstanding could send Bailey to prison. But a guardian angel named Clarence shows Bailey what a void would exist if he'd never been born. In the end, Bailey's friends bail him out of the financial crisis because of their love for such a kind man.

Here's where Pottersville enters the plot. Clarence shows Bailey what his town would look like without his quiet influence. His brother Harry would have drowned as a boy; soldiers would later die because Harry wasn't there to save them; and Potter would steamroll opposition so much that Bailey's hometown itself would be renamed Pottersville, a coarse place filled with anger, callousness and vice.

Do I speak of politics? Yes and no. Democrats and Republicans have no shortage of scandals, but the current environment seems like Pottersville 2.0. The pace of terrorism and warfare seems to quicken each year. The world has wearied of fearful immigrants fleeing for their lives, while those who remained in their homes in places such as Syria, Nigeria and Sudan are often doomed to methodical genocide. The worldview of too many global leaders has no moorings beyond power and vindictiveness.

In the film, Stewart's character realizes the error of his ways after seeing the carnage that his absence would cause. He prays on the Bedford Falls bridge where he once mulled suicide, asking God and Clarence to intervene and get him back to life as he knew it, regardless of the cost. The prayer is answered instantly, and director Frank Capra's ending builds joy upon joy and a sense of community until I am usually an emotional mess by the credits.

Ah, if only it were as easy today to find a solution. As a person of faith, I believe like George Bailey that prayer is an absolute necessity here — but we have work to do on our own souls.

I haven't seen swastikas appearing in my town yet, or witnessed Mexican immigrants harassed, but the unrelenting anger and hate on social media saddens me. And then there is everyday life.

Last week, while grocery shopping, we heard the shouting from three aisles away before we saw it. It went on for more than a minute, and by the time we rounded the corner, near the cold cuts and cheese, we saw them. A woman in her 30s, with two kids in tow, was shouting, constantly, at a relatively quiet older woman — perhaps in her 70s — and her middle-age daughter.

"Back off! Just back off!"

The younger woman kept saying over and over that she was a single mom and that the other two women had verbally attacked her children. We came upon the scene near the end as three store managers tried to calm the situation, so we had no way of knowing what exactly was going on. But it was ugly and wouldn't have been out of place in Pottersville.

Our collective lack of kindness and compassion for our next-door neighbor or a stranger on Facebook threatens to crush any hope for community or the repudiation of Pottersville. Massachusetts Institute of Technology professor Sherry Turkle, author of the recent "Reclaiming Conversation" as well as "Alone Together," uses her research to show how face-to-face human interaction can fuel the empathy and friendship we all need but seemingly no longer know how to achieve.

In "It's a Wonderful Life," George Bailey wins. The movie ends before we find out what happens next, whether Potter eventually destroys Bailey by some other method. In the same way, we don't know what 2017 will hold for any of us. For my part, I hope I am able to move beyond my own tendency to fight fire with fire in an attempt to overcome Pottersville.

If there is one term that describes Pottersville for me, it is "mean-spirited." Yet I too find it hard to love my enemies.

In an ideal world, Mr. Potter, like Ebenezer Scrooge, would turn his life around and become kind and altruistic. Whether I influence my enemies or whether I don't, the gifts I pray for at Christmas and the qualities I work toward in the new year should be these: "Love, joy, peace, patience, kindness, goodness, faithfulness, gentleness, self-control; against such things there is no law."

Those words were penned by a badder dude than Mr. Potter — or Donald Trump. They came from Paul, the apostle who once killed Christians or tossed them in jail. He later gave his life for that same faith.

Maybe there is still hope for Pottersville, and me, and you, at these holidays.

Patrick Kampert is a Chicago writer and a former Chicago Tribune editor and reporter.

Susan Sontag
For a long time -- at least six decades -- photographs have laid down the tracks of how important conflicts are judged and 
remembered. The Western memory museum is now mostly a visual one. Photographs have an insuperable power to determine 
what we recall of events, and it now seems probable that the defining association of people everywhere with the war that the
United States launched pre-emptively in Iraq last year will be photographs of the torture of Iraqi prisoners by Americans in the 
most infamous of Saddam Hussein's prisons, Abu Ghraib. The Bush administration and its defenders have chiefly sought to limit 
a public-relations disaster -- the dissemination of the photographs -- rather than deal with the complex crimes of leadership and 
of policy revealed by the pictures. There was, first of all, the displacement of the reality onto the photographs themselves. The 
administration's initial response was to say that the president was shocked and disgusted by the photographs -- as if the fault 
or horror lay in the images, not in what they depict. There was also the avoidance of the word ''torture.'' The prisoners had 
possibly been the objects of ''abuse,'' eventually of ''humiliation'' -- that was the most to be admitted. ''My impression is that 
what has been charged thus far is abuse, which I believe technically is different from torture,'' Secretary of Defense Donald 
Rumsfeld said at a press conference. ''And therefore I'm not going to address the 'torture' word.'' Words alter, words add, 
words subtract. It was the strenuous avoidance of the word ''genocide'' while some 800,000 Tutsis in Rwanda were being 
slaughtered, over a few weeks' time, by their Hutu neighbors 10 years ago that indicated the American government had no 
intention of doing anything. To refuse to call what took place in Abu Ghraib -- and what has taken place elsewhere in Iraq and 
in Afghanistan and at Guantanamo Bay -- by its true name, torture, is as outrageous as the refusal to call the Rwandan genocide 
a genocide. Here is one of the definitions of torture contained in a convention to which the United States is a signatory: ''any act 
by which severe pain or suffering, whether physical or mental, is intentionally inflicted on a person for such purposes as obtaining 
from him or a third person information or a confession.'' (The definition comes from the 1984 Convention Against Torture and 
Other Cruel, Inhuman or Degrading Treatment or Punishment. Similar definitions have existed for some time in customary law 
and in treaties, starting with Article 3 -- common to the four Geneva conventions of 1949 -- and many recent human rights 
conventions.) The 1984 convention declares, ''No exceptional circumstances  whatsoever, whether a state of war or a threat of 
war, internal political instability or any other public emergency, may be invoked as a justification of  torture.'' And all covenants 
on torture specify that it includes treatment intended to humiliate the victim, like leaving prisoners naked in cells and corridors. 
Whatever actions this administration undertakes to limit the damage of the widening revelations of the torture of prisoners in Abu 
Ghraib and elsewhere -- trials, courts-martial, dishonorable discharges, resignation of senior military figures and responsible 
administration officials and substantial compensation to the victims -- it is probable that the ''torture'' word will continue to be 
banned. To acknowledge that Americans torture their prisoners would contradict everything this administration has invited the 
public to believe about the virtue of American intentions and America's right, flowing from that virtue, to undertake unilateral 
action on the world stage. 
Even when the president was finally compelled, as the damage to America's reputation everywhere in the world widened and 
deepened, to use the ''sorry'' word, the focus of regret still seemed the damage to America's claim to moral superiority. Yes, 
President Bush said in Washington on May 6, standing alongside King Abdullah II of Jordan, he was ''sorry for the humiliation 
suffered by the Iraqi prisoners and the humiliation suffered by their families.'' But, he went on, he was ''equally sorry that people 
seeing these pictures didn't understand the true nature and heart of America.'' 
To have the American effort in Iraq summed up by these images must seem, to those who saw some justification in a war that 
did overthrow one of the monster tyrants of modern times, ''unfair.'' A war, an occupation, is inevitably a huge tapestry of actions. 
What makes some actions representative and others not? The issue is not whether the torture was done by individuals (i.e., ''not 
by everybody'') -- but whether it was systematic. Authorized. Condoned. All acts are done by individuals. The issue is not whether 
a majority or a minority of Americans performs such acts but whether the nature of the policies prosecuted by this administration 
and the hierarchies deployed to carry them out makes such acts likely. 
Considered in this light, the photographs are us. That is, they are representative of the fundamental corruptions of any foreign 
occupation together with the Bush administration's distinctive policies. The Belgians in the Congo, the French in Algeria, 
practiced torture and sexual humiliation on despised recalcitrant natives. Add to this generic corruption the mystifying, near-total 
unpreparedness of the American rulers of Iraq to deal with the complex realities of the country after its ''liberation.'' And add to 
that the overarching, distinctive doctrines of the Bush administration, namely that the United States has embarked on an endless 
war and that those detained in this war are, if the president so decides, ''unlawful combatants'' -- a policy enunciated by Donald 
Rumsfeld for Taliban and Qaeda prisoners as early as January 2002 -- and thus, as Rumsfeld said, ''technically'' they ''do not have 
any rights under the Geneva Convention,'' and you have a perfect recipe for the cruelties and crimes committed against the 
thousands incarcerated without charges or access to lawyers in American-run prisons that have been set up since the attacks of 
Sept. 11, 2001. 
So, then, is the real issue not the photographs themselves but what the photographs reveal to have happened to ''suspects'' in 
American custody? No: the horror of what is shown in the photographs cannot be separated from the horror that the 
photographs were taken -- with the perpetrators posing, gloating, over their helpless captives. German soldiers in the Second 
World War took photographs of the atrocities they were committing in Poland and Russia, but snapshots in which the 
executioners placed themselves among their victims are exceedingly rare, as may be seen in a book just published, 
''Photographing the Holocaust,'' by Janina Struk. If there is something comparable to what these pictures show it would be 
some of the photographs of black victims of lynching taken between the 1880's and 1930's, which show Americans grinning 
beneath the naked mutilated body of a black man or woman hanging behind them from a tree. The lynching photographs 
were souvenirs of a collective action whose participants felt perfectly justified in what they had done. So are the pictures from 
Abu Ghraib. 
The lynching pictures were in the nature of photographs as trophies -- taken by a photographer in order to be collected, stored in 
albums, displayed. The pictures taken by American soldiers in Abu Ghraib, however, reflect a shift in the use made of pictures -- 
less objects to be saved than messages to be disseminated, circulated. A digital camera is a common possession among soldiers. 
Where once photographing war was the province of photojournalists, now the soldiers themselves are all photographers -- 
recording their war, their fun, their observations of what they find picturesque, their atrocities -- and swapping images among 
themselves and e-mailing them around the globe. 
There is more and more recording of what people do, by themselves. At least or especially in America, Andy Warhol's ideal of 
filming real events in real time -- life isn't edited, why should its record be edited? -- has become a norm for countless Webcasts, 
in which people record their day, each in his or her own reality show. Here I am -- waking and yawning and stretching, brushing 
my teeth, making breakfast, getting the kids off to school. People record all aspects of their lives, store them in computer files 
and send the files around. Family life goes with the recording of family life -- even when, or especially when, the family is in the 
throes of crisis and disgrace. Surely the dedicated, incessant home-videoing of one another, in conversation and monologue, 
over many years was the most astonishing material in ''Capturing the Friedmans,'' the recent documentary by Andrew Jarecki 
about a Long Island family embroiled in pedophilia charges. 
An erotic life is, for more and more people, that which can be captured in digital photographs and on video. And perhaps the 
torture is more attractive, as something to record, when it has a sexual component. It is surely revealing, as more Abu Ghraib 
photographs enter public view, that torture photographs are interleaved with pornographic images of American soldiers having 
sex with one another. In fact, most of the torture photographs have a sexual theme, as in those showing the coercing of prisoners 
to perform, or simulate, sexual acts among themselves. One exception, already canonical, is the photograph of the man made 
to stand on a box, hooded and sprouting wires, reportedly told he would be electrocuted if he fell off. Yet pictures of prisoners 
bound in painful positions, or made to stand with outstretched arms, are infrequent. That they count as torture cannot be doubted. 
You have only to look at the terror on the victim's face, although such ''stress'' fell within the Pentagon's limits of the acceptable. 
But most of the pictures seem part of a larger confluence of torture and pornography: a young woman leading a naked man 
around on a leash is classic dominatrix imagery. And you wonder how much of the sexual tortures inflicted on the inmates of 
Abu Ghraib was inspired by the vast repertory of pornographic imagery available on the Internet -- and which ordinary people, 
by sending out Webcasts of themselves, try to emulate. 
To live is to be photographed, to have a record of one's life, and therefore to go on with one's life oblivious, or claiming to be 
oblivious, to the camera's nonstop attentions. But to live is also to pose. To act is to share in the community of actions recorded 
as images. The expression of satisfaction at the acts of torture being inflicted on helpless, trussed, naked victims is only part of the 
story. There is the deep satisfaction of being photographed, to which one is now more inclined to respond not with a stiff, direct 
gaze (as in former times) but with glee. The events are in part designed to be photographed. The grin is a grin for the camera. 
There would be something missing if, after stacking the naked men, you couldn't take a picture of them. 
Looking at these photographs, you ask yourself, How can someone grin at the sufferings and humiliation of another human being? 
Set guard dogs at the genitals and legs of cowering naked prisoners? Force shackled, hooded prisoners to masturbate or simulate 
oral sex with one another? And you feel naive for asking, since the answer is, self-evidently, People do these things to other people. 
Rape and pain inflicted on the genitals are among the most common forms of torture. Not just in Nazi concentration camps and in 
Abu Ghraib when it was run by Saddam Hussein. Americans, too, have done and do them when they are told, or made to feel, 
that those over whom they have absolute power deserve to be humiliated, tormented. They do them when they are led to believe 
that the people they are torturing belong to an inferior race or religion. For the meaning of these pictures is not just that these acts 
were performed, but that their perpetrators apparently had no sense that there was anything wrong in what the pictures show. 
Even more appalling, since the pictures were meant to be circulated and seen by many people: it was all fun. And this idea of fun 
is, alas, more and more -- contrary to what President Bush is telling the world -- part of ''the true nature and heart of America.'' 
It is hard to measure the increasing acceptance of brutality in American life, but its evidence is everywhere, starting with the video 
games of killing that are a principal entertainment of boys -- can the video game ''Interrogating the Terrorists'' really be far behind? 
-- and on to the violence that has become endemic in the group rites of youth on an exuberant kick. Violent crime is down, yet 
the easy delight taken in violence seems to have grown. From the harsh torments inflicted on incoming students in many American 
suburban high schools -- depicted in Richard Linklater's 1993 film, ''Dazed and Confused'' -- to the hazing rituals of physical 
brutality and sexual humiliation in college fraternities and on sports teams, America has become a country in which the fantasies 
and the practice of violence are seen as good entertainment, fun. 
What formerly was segregated as pornography, as the exercise of extreme sadomasochistic longings -- as in Pier Paolo Pasolini's 
last, near-unwatchable film, ''Salo'' (1975), depicting orgies of torture in the Fascist redoubt in northern Italy at the end of the 
Mussolini era -- is now being normalized, by some, as high-spirited play or venting. To ''stack naked men'' is like a college 
fraternity prank, said a caller to Rush Limbaugh and the many millions of Americans who listen to his radio show. Had the caller, 
one wonders, seen the photographs? No matter. The observation -- or is it the fantasy? -- was on the mark. What may still be 
capable of shocking some Americans was Limbaugh's response: ''Exactly!'' he exclaimed. ''Exactly my point. This is no different 
than what happens at the Skull and Bones initiation, and we're going to ruin people's lives over it, and we're going to hamper 
our military effort, and then we are going to really hammer them because they had a good time.'' ''They'' are the American soldiers, 
the torturers. And Limbaugh went on: ''You know, these people are being fired at every day. I'm talking about people having a 
good time, these people. You ever heard of emotional release?'' 
 Shock and awe were what our military promised the Iraqis. And shock and the awful are what these photographs announce to the 
world that the Americans have delivered: a pattern of criminal behavior in open contempt of international humanitarian conventions. 
Soldiers now pose, thumbs up, before the atrocities they commit, and send off the pictures to their buddies. Secrets of private life 
that, formerly, you would have given nearly anything to conceal, you now clamor to be invited on a television show to reveal. 
What is illustrated by these photographs is as much the culture of shamelessness as the reigning admiration for unapologetic brutality. 
The notion that apologies or professions of ''disgust'' by the president and the secretary of defense are a sufficient response is an 
insult to one's historical and moral sense. The torture of prisoners is not an aberration. It is a direct consequence of the 
with-us-or-against-us doctrines of world struggle with which the Bush administration has sought to change, change radically, 
the international stance of the United States and to recast many domestic institutions and prerogatives. The Bush administration 
has committed the country to a pseudo-religious doctrine of war, endless war -- for ''the war on terror'' is nothing less than that. 
Endless war is taken to justify endless incarcerations. Those held in the extralegal American penal empire are ''detainees''; 
''prisoners,'' a newly obsolete word, might suggest that they have the rights accorded by international law and the laws of all 
civilized countries. This endless ''global war on terrorism'' -- into which both the quite justified invasion of Afghanistan and the 
unwinnable folly in Iraq have been folded by Pentagon decree -- inevitably leads to the demonizing and dehumanizing of anyone 
declared by the Bush administration to be a possible terrorist: a definition that is not up for debate and is, in fact, usually made in secret. 
The charges against most of the people detained in the prisons in Iraq and Afghanistan being nonexistent -- the Red Cross 
reports that 70 to 90 percent of those being held seem to have committed no crime other than simply being in the wrong place 
at the wrong time, caught up in some sweep of ''suspects'' -- the principal justification for holding them is ''interrogation.'' 
Interrogation about what? About anything. Whatever the detainee might know. If interrogation is the point of detaining prisoners 
indefinitely, then physical coercion, humiliation and torture become inevitable. 
Remember: we are not talking about that rarest of cases, the ''ticking time bomb'' situation, which is sometimes used as a limiting 
case that justifies torture of prisoners who have knowledge of an imminent attack. This is general or nonspecific 
information-gathering, authorized by American military and civilian administrators to learn more of a shadowy empire of evildoers 
about whom Americans know virtually nothing, in countries about which they are singularly ignorant: in principle, any information 
at all might be useful. An interrogation that produced no information (whatever information might consist of) would count as a 
failure. All the more justification for preparing prisoners to talk. Softening them up, stressing them out -- these are the euphemisms
for the bestial practices in American prisons where suspected terrorists are being held. Unfortunately, as Staff Sgt. Ivan (Chip) 
Frederick noted in his diary, a prisoner can get too stressed out and die. The picture of a man in a body bag with ice on his chest 
may well be of the man Frederick was describing. 
The pictures will not go away. That is the nature of the digital world in which we live. Indeed, it seems they were necessary to 
get our leaders to acknowledge that they had a problem on their hands. After all, the conclusions of reports compiled by the 
International Committee of the Red Cross, and other reports by journalists and protests by humanitarian organizations about 
the atrocious punishments inflicted on ''detainees'' and ''suspected terrorists'' in prisons run by the American military, first in 
Afghanistan and later in Iraq, have been circulating for more than a year. It seems doubtful that such reports were read by 
President Bush or Vice President Dick Cheney or Condoleezza Rice or Rumsfeld. Apparently it took the photographs to get 
their attention, when it became clear they could not be suppressed; it was the photographs that made all this ''real'' to Bush and 
his associates. Up to then, there had been only words, which are easier to cover up in our age of infinite digital self-reproduction 
and self-dissemination, and so much easier to forget.  
So now the pictures will continue to ''assault'' us -- as many Americans are bound to feel. Will people get used to them? Some 
Americans are already saying they have seen enough. Not, however, the rest of the world. Endless war: endless stream of 
photographs. Will editors now debate whether showing more of them, or showing them uncropped (which, with some of the 
best-known images, like that of a hooded man on a box, gives a different and in some instances more appalling view), would be 
in ''bad taste'' or too implicitly political? By ''political,'' read: critical of the Bush administration's imperial project. For there can 
be no doubt that the photographs damage, as Rumsfeld testified, ''the reputation of the honorable men and women of the armed 
forces who are courageously and responsibly and professionally defending our freedom across the globe.'' This damage -- to our 
reputation, our image, our success as the lone superpower -- is what the Bush administration principally deplores. How the 
protection of ''our freedom'' -- the freedom of 5 percent of humanity -- came to require having American soldiers ''across the 
globe'' is hardly debated by our elected officials. 
Already the backlash has begun. Americans are being warned against indulging in an orgy of self-condemnation. The continuing 
publication of the pictures is being taken by many Americans as suggesting that we do not have the right to defend ourselves: 
after all, they (the terrorists) started it. They -- Osama bin Laden? Saddam Hussein? what's the difference? -- attacked us first. 
Senator James Inhofe of Oklahoma, a Republican member of the Senate Armed Services Committee, before which Secretary 
Rumsfeld testified, avowed that he was sure he was not the only member of the committee ''more outraged by the outrage'' over 
the photographs than by what the photographs show. ''These prisoners,'' Senator Inhofe explained, ''you know they're not there 
for traffic violations. If they're in Cellblock 1-A or 1-B, these prisoners, they're murderers, they're terrorists, they're insurgents. 
Many of them probably have American blood on their hands, and here we're so concerned about the treatment of those 
individuals.'' It's the fault of ''the media'' which are provoking, and will continue to provoke, further violence against Americans 
around the world. More Americans will die. Because of these photos. 
There is an answer to this charge, of course. Americans are dying not because of the photographs but because of what the 
photographs reveal to be happening, happening with the complicity of a chain of command -- so Maj. Gen. Antonio Taguba 
implied, and Pfc. Lynndie England said, and (among others) Senator Lindsey Graham of South Carolina, a Republican, 
suggested, after he saw the Pentagon's full range of images on May 12. ''Some of it has an elaborate nature to it that makes me 
very suspicious of whether or not others were directing or encouraging,'' Senator Graham said. Senator Bill Nelson, a Florida 
Democrat, said that viewing an uncropped version of one photo showing a stack of naked men in a hallway -- a version that 
revealed how many other soldiers were at the scene, some not even paying attention -- contradicted the Pentagon's assertion 
that only rogue soldiers were involved. ''Somewhere along the line,'' Senator Nelson said of the torturers, ''they were either told 
or winked at.'' An attorney for Specialist Charles Graner Jr., who is in the picture, has had his client identify the men in the 
uncropped version; according to The Wall Street Journal, Graner said that four of the men were military intelligence and one 
a civilian contractor working with military intelligence. 
But the distinction between photograph and reality -- as between spin and policy -- can easily evaporate. And that is what the 
administration wishes to happen. ''There are a lot more photographs and videos that exist,'' Rumsfeld acknowledged in his 
testimony. ''If these are released to the public, obviously, it's going to make matters worse.'' Worse for the administration and 
its programs, presumably, not for those who are the actual -- and potential? -- victims of torture. 
The media may self-censor but, as Rumsfeld acknowledged, it's hard to censor soldiers overseas, who don't write letters home, 
as in the old days, that can be opened by military censors who ink out unacceptable lines. Today's soldiers instead function like 
tourists, as Rumsfeld put it, ''running around with digital cameras and taking these unbelievable photographs and then passing them 
off, against the law, to the media, to our surprise.'' The administration's effort to withhold pictures is proceeding along several fronts. 
Currently, the argument is taking a legalistic turn: now the photographs are classified as evidence in future criminal cases, whose 
outcome may be prejudiced if they are made public. The Republican chairman of the Senate Armed Services Committee, John 
Warner of Virginia, after the May 12 slide show of image after image of sexual humiliation and violence against Iraqi prisoners, 
said he felt ''very strongly'' that the newer photos ''should not be made public. I feel that it could possibly endanger the men and 
women of the armed forces as they are serving and at great risk.'' 
But the real push to limit the accessibility of the photographs will come from the continuing effort to protect the administration 
and cover up our misrule in Iraq -- to identify ''outrage'' over the photographs with a campaign to undermine American military 
might and the purposes it currently serves. Just as it was regarded by many as an implicit criticism of the war to show on television 
photographs of American soldiers who have been killed in the course of the invasion and occupation of Iraq, it will increasingly 
be thought unpatriotic to disseminate the new photographs and further tarnish the image of America. 
After all, we're at war. Endless war. And war is hell, more so than any of the people who got us into this rotten war seem to 
have expected. In our digital hall of mirrors, the pictures aren't going to go away. Yes, it seems that one picture is worth a 
thousand words. And even if our leaders choose not to look at them, there will be thousands more snapshots and videos. 
© Susan Sontag, NY Times 2004

Thomas Friedman

In 1492 Christopher Columbus set sail for India, going west. He had the Nina, the Pinta and the Santa Maria. He never did find India, but he called the people he met ''Indians'' and came home and reported to his king and queen: ''The world is round.'' I set off for India 512 years later. I knew just which direction I was going. I went east. I had Lufthansa business class, and I came home and reported only to my wife and only in a whisper: ''The world is flat.''

And therein lies a tale of technology and geoeconomics that is fundamentally reshaping our lives -- much, much more quickly than many people realize. It all happened while we were sleeping, or rather while we were focused on 9/11, the dot-com bust and Enron -- which even prompted some to wonder whether globalization was over. Actually, just the opposite was true, which is why it's time to wake up and prepare ourselves for this flat world, because others already are, and there is no time to waste.

I wish I could say I saw it all coming. Alas, I encountered the flattening of the world quite by accident. It was in late February of last year, and I was visiting the Indian high-tech capital, Bangalore, working on a documentary for the Discovery Times channel about outsourcing. In short order, I interviewed Indian entrepreneurs who wanted to prepare my taxes from Bangalore, read my X-rays from Bangalore, trace my lost luggage from Bangalore and write my new software from Bangalore. The longer I was there, the more upset I became -- upset at the realization that while I had been off covering the 9/11 wars, globalization had entered a whole new phase, and I had missed it. I guess the eureka moment came on a visit to the campus of Infosys Technologies, one of the crown jewels of the Indian outsourcing and software industry. Nandan Nilekani, the Infosys C.E.O., was showing me his global video-conference room, pointing with pride to a wall-size flat-screen TV, which he said was the biggest in Asia. Infosys, he explained, could hold a virtual meeting of the key players from its entire global supply chain for any project at any time on that supersize screen. So its American designers could be on the screen speaking with their Indian software writers and their Asian manufacturers all at once. That's what globalization is all about today, Nilekani said. Above the screen there were eight clocks that pretty well summed up the Infosys workday: 24/7/365. The clocks were labeled U.S. West, U.S. East, G.M.T., India, Singapore, Hong Kong, Japan, Australia.

''Outsourcing is just one dimension of a much more fundamental thing happening today in the world,'' Nilekani explained. ''What happened over the last years is that there was a massive investment in technology, especially in the bubble era, when hundreds of millions of dollars were invested in putting broadband connectivity around the world, undersea cables, all those things.'' At the same time, he added, computers became cheaper and dispersed all over the world, and there was an explosion of e-mail software, search engines like Google and proprietary software that can chop up any piece of work and send one part to Boston, one part to Bangalore and one part to Beijing, making it easy for anyone to do remote development. When all of these things suddenly came together around 2000, Nilekani said, they ''created a platform where intellectual work, intellectual capital, could be delivered from anywhere. It could be disaggregated, delivered, distributed, produced and put back together again -- and this gave a whole new degree of freedom to the way we do work, especially work of an intellectual nature. And what you are seeing in Bangalore today is really the culmination of all these things coming together.''

At one point, summing up the implications of all this, Nilekani uttered a phrase that rang in my ear. He said to me, ''Tom, the playing field is being leveled.'' He meant that countries like India were now able to compete equally for global knowledge work as never before -- and that America had better get ready for this. As I left the Infosys campus that evening and bounced along the potholed road back to Bangalore, I kept chewing on that phrase: ''The playing field is being leveled.''

''What Nandan is saying,'' I thought, ''is that the playing field is being flattened. Flattened? Flattened? My God, he's telling me the world is flat!''

Here I was in Bangalore -- more than 500 years after Columbus sailed over the horizon, looking for a shorter route to India using the rudimentary navigational technologies of his day, and returned safely to prove definitively that the world was round -- and one of India's smartest engineers, trained at his country's top technical institute and backed by the most modern technologies of his day, was telling me that the world was flat, as flat as that screen on which he can host a meeting of his whole global supply chain. Even more interesting, he was citing this development as a new milestone in human progress and a great opportunity for India and the world -- the fact that we had made our world flat!

This has been building for a long time. Globalization 1.0 (1492 to 1800) shrank the world from a size large to a size medium, and the dynamic force in that era was countries globalizing for resources and imperial conquest. Globalization 2.0 (1800 to 2000) shrank the world from a size medium to a size small, and it was spearheaded by companies globalizing for markets and labor. Globalization 3.0 (which started around 2000) is shrinking the world from a size small to a size tiny and flattening the playing field at the same time. And while the dynamic force in Globalization 1.0 was countries globalizing and the dynamic force in Globalization 2.0 was companies globalizing, the dynamic force in Globalization 3.0 -- the thing that gives it its unique character -- is individuals and small groups globalizing. Individuals must, and can, now ask: where do I fit into the global competition and opportunities of the day, and how can I, on my own, collaborate with others globally? But Globalization 3.0 not only differs from the previous eras in how it is shrinking and flattening the world and in how it is empowering individuals. It is also different in that Globalization 1.0 and 2.0 were driven primarily by European and American companies and countries. But going forward, this will be less and less true. Globalization 3.0 is not only going to be driven more by individuals but also by a much more diverse -- non-Western, nonwhite -- group of individuals. In Globalization 3.0, you are going to see every color of the human rainbow take part.

''Today, the most profound thing to me is the fact that a 14-year-old in Romania or Bangalore or the Soviet Union or Vietnam has all the information, all the tools, all the software easily available to apply knowledge however they want,'' said Marc Andreessen, a co-founder of Netscape and creator of the first commercial Internet browser. ''That is why I am sure the next Napster is going to come out of left field. As bioscience becomes more computational and less about wet labs and as all the genomic data becomes easily available on the Internet, at some point you will be able to design vaccines on your laptop.''

Andreessen is touching on the most exciting part of Globalization 3.0 and the flattening of the world: the fact that we are now in the process of connecting all the knowledge pools in the world together. We've tasted some of the downsides of that in the way that Osama bin Laden has connected terrorist knowledge pools together through his Qaeda network, not to mention the work of teenage hackers spinning off more and more lethal computer viruses that affect us all. But the upside is that by connecting all these knowledge pools we are on the cusp of an incredible new era of innovation, an era that will be driven from left field and right field, from West and East and from North and South. Only 30 years ago, if you had a choice of being born a B student in Boston or a genius in Bangalore or Beijing, you probably would have chosen Boston, because a genius in Beijing or Bangalore could not really take advantage of his or her talent. They could not plug and play globally. Not anymore. Not when the world is flat, and anyone with smarts, access to Google and a cheap wireless laptop can join the innovation fray.

When the world is flat, you can innovate without having to emigrate. This is going to get interesting. We are about to see creative destruction on steroids.

How did the world get flattened, and how did it happen so fast?

It was a result of 10 events and forces that all came together during the 1990's and converged right around the year 2000. Let me go through them briefly. The first event was 11/9. That's right -- not 9/11, but 11/9. Nov. 9, 1989, is the day the Berlin Wall came down, which was critically important because it allowed us to think of the world as a single space. ''The Berlin Wall was not only a symbol of keeping people inside Germany; it was a way of preventing a kind of global view of our future,'' the Nobel Prize-winning economist Amartya Sen said. And the wall went down just as the windows went up -- the breakthrough Microsoft Windows 3.0 operating system, which helped to flatten the playing field even more by creating a global computer interface, shipped six months after the wall fell.

The second key date was 8/9. Aug. 9, 1995, is the day Netscape went public, which did two important things. First, it brought the Internet alive by giving us the browser to display images and data stored on Web sites. Second, the Netscape stock offering triggered the dot-com boom, which triggered the dot-com bubble, which triggered the massive overinvestment of billions of dollars in fiber-optic telecommunications cable. That overinvestment, by companies like Global Crossing, resulted in the willy-nilly creation of a global undersea-underground fiber network, which in turn drove down the cost of transmitting voices, data and images to practically zero, which in turn accidentally made Boston, Bangalore and Beijing next-door neighbors overnight. In sum, what the Netscape revolution did was bring people-to-people connectivity to a whole new level. Suddenly more people could connect with more other people from more different places in more different ways than ever before.

No country accidentally benefited more from the Netscape moment than India. ''India had no resources and no infrastructure,'' said Dinakar Singh, one of the most respected hedge-fund managers on Wall Street, whose parents earned doctoral degrees in biochemistry from the University of Delhi before emigrating to America. ''It produced people with quality and by quantity. But many of them rotted on the docks of India like vegetables. Only a relative few could get on ships and get out. Not anymore, because we built this ocean crosser, called fiber-optic cable. For decades you had to leave India to be a professional. Now you can plug into the world from India. You don't have to go to Yale and go to work for Goldman Sachs.'' India could never have afforded to pay for the bandwidth to connect brainy India with high-tech America, so American shareholders paid for it. Yes, crazy overinvestment can be good. The overinvestment in railroads turned out to be a great boon for the American economy. ''But the railroad overinvestment was confined to your own country and so, too, were the benefits,'' Singh said. In the case of the digital railroads, ''it was the foreigners who benefited.'' India got a free ride.

The first time this became apparent was when thousands of Indian engineers were enlisted to fix the Y2K -- the year 2000 -- computer bugs for companies from all over the world. (Y2K should be a national holiday in India. Call it ''Indian Interdependence Day,'' says Michael Mandelbaum, a foreign-policy analyst at Johns Hopkins.) The fact that the Y2K work could be outsourced to Indians was made possible by the first two flatteners, along with a third, which I call ''workflow.'' Workflow is shorthand for all the software applications, standards and electronic transmission pipes, like middleware, that connected all those computers and fiber-optic cable. To put it another way, if the Netscape moment connected people to people like never before, what the workflow revolution did was connect applications to applications so that people all over the world could work together in manipulating and shaping words, data and images on computers like never before.

Indeed, this breakthrough in people-to-people and application-to-application connectivity produced, in short order, six more flatteners -- six new ways in which individuals and companies could collaborate on work and share knowledge. One was ''outsourcing.'' When my software applications could connect seamlessly with all of your applications, it meant that all kinds of work -- from accounting to software-writing -- could be digitized, disaggregated and shifted to any place in the world where it could be done better and cheaper. The second was ''offshoring.'' I send my whole factory from Canton, Ohio, to Canton, China. The third was ''open-sourcing.'' I write the next operating system, Linux, using engineers collaborating together online and working for free. The fourth was ''insourcing.'' I let a company like UPS come inside my company and take over my whole logistics operation -- everything from filling my orders online to delivering my goods to repairing them for customers when they break. (People have no idea what UPS really does today. You'd be amazed!) The fifth was ''supply-chaining.'' This is Wal-Mart's specialty. I create a global supply chain down to the last atom of efficiency so that if I sell an item in Arkansas, another is immediately made in China. (If Wal-Mart were a country, it would be China's eighth-largest trading partner.) The last new form of collaboration I call ''informing'' -- this is Google, Yahoo and MSN Search, which now allow anyone to collaborate with, and mine, unlimited data all by themselves.

So the first three flatteners created the new platform for collaboration, and the next six are the new forms of collaboration that flattened the world even more. The 10th flattener I call ''the steroids,'' and these are wireless access and voice over Internet protocol (VoIP). What the steroids do is turbocharge all these new forms of collaboration, so you can now do any one of them, from anywhere, with any device.

The world got flat when all 10 of these flatteners converged around the year 2000. This created a global, Web-enabled playing field that allows for multiple forms of collaboration on research and work in real time, without regard to geography, distance or, in the near future, even language. ''It is the creation of this platform, with these unique attributes, that is the truly important sustainable breakthrough that made what you call the flattening of the world possible,'' said Craig Mundie, the chief technical officer of Microsoft.

No, not everyone has access yet to this platform, but it is open now to more people in more places on more days in more ways than anything like it in history. Wherever you look today -- whether it is the world of journalism, with bloggers bringing down Dan Rather; the world of software, with the Linux code writers working in online forums for free to challenge Microsoft; or the world of business, where Indian and Chinese innovators are competing against and working with some of the most advanced Western multinationals -- hierarchies are being flattened and value is being created less and less within vertical silos and more and more through horizontal collaboration within companies, between companies and among individuals.

Do you recall ''the IT revolution'' that the business press has been pushing for the last 20 years? Sorry to tell you this, but that was just the prologue. The last 20 years were about forging, sharpening and distributing all the new tools to collaborate and connect. Now the real information revolution is about to begin as all the complementarities among these collaborative tools start to converge. One of those who first called this moment by its real name was Carly Fiorina, the former Hewlett-Packard C.E.O., who in 2004 began to declare in her public speeches that the dot-com boom and bust were just ''the end of the beginning.'' The last 25 years in technology, Fiorina said, have just been ''the warm-up act.'' Now we are going into the main event, she said, ''and by the main event, I mean an era in which technology will truly transform every aspect of business, of government, of society, of life.''

As if this flattening wasn't enough, another convergence coincidentally occurred during the 1990's that was equally important. Some three billion people who were out of the game walked, and often ran, onto the playing field. I am talking about the people of China, India, Russia, Eastern Europe, Latin America and Central Asia. Their economies and political systems all opened up during the course of the 1990's so that their people were increasingly free to join the free market. And when did these three billion people converge with the new playing field and the new business processes? Right when it was being flattened, right when millions of them could compete and collaborate more equally, more horizontally and with cheaper and more readily available tools. Indeed, thanks to the flattening of the world, many of these new entrants didn't even have to leave home to participate. Thanks to the 10 flatteners, the playing field came to them!

It is this convergence -- of new players, on a new playing field, developing new processes for horizontal collaboration -- that I believe is the most important force shaping global economics and politics in the early 21st century. Sure, not all three billion can collaborate and compete. In fact, for most people the world is not yet flat at all. But even if we're talking about only 10 percent, that's 300 million people -- about twice the size of the American work force. And be advised: the Indians and Chinese are not racing us to the bottom. They are racing us to the top. What China's leaders really want is that the next generation of underwear and airplane wings not just be ''made in China'' but also be ''designed in China.'' And that is where things are heading. So in 30 years we will have gone from ''sold in China'' to ''made in China'' to ''designed in China'' to ''dreamed up in China'' -- or from China as collaborator with the worldwide manufacturers on nothing to China as a low-cost, high-quality, hyperefficient collaborator with worldwide manufacturers on everything. Ditto India. Said Craig Barrett, the C.E.O. of Intel, ''You don't bring three billion people into the world economy overnight without huge consequences, especially from three societies'' -- like India, China and Russia -- ''with rich educational heritages.''

That is why there is nothing that guarantees that Americans or Western Europeans will continue leading the way. These new players are stepping onto the playing field legacy free, meaning that many of them were so far behind that they can leap right into the new technologies without having to worry about all the sunken costs of old systems. It means that they can move very fast to adopt new, state-of-the-art technologies, which is why there are already more cellphones in use in China today than there are people in America.

If you want to appreciate the sort of challenge we are facing, let me share with you two conversations. One was with some of the Microsoft officials who were involved in setting up Microsoft's research center in Beijing, Microsoft Research Asia, which opened in 1998 -- after Microsoft sent teams to Chinese universities to administer I.Q. tests in order to recruit the best brains from China's 1.3 billion people. Out of the 2,000 top Chinese engineering and science students tested, Microsoft hired 20. They have a saying at Microsoft about their Asia center, which captures the intensity of competition it takes to win a job there and explains why it is already the most productive research team at Microsoft: ''Remember, in China, when you are one in a million, there are 1,300 other people just like you.''

The other is a conversation I had with Rajesh Rao, a young Indian entrepreneur who started an electronic-game company from Bangalore, which today owns the rights to Charlie Chaplin's image for mobile computer games. ''We can't relax,'' Rao said. ''I think in the case of the United States that is what happened a bit. Please look at me: I am from India. We have been at a very different level before in terms of technology and business. But once we saw we had an infrastructure that made the world a small place, we promptly tried to make the best use of it. We saw there were so many things we could do. We went ahead, and today what we are seeing is a result of that. There is no time to rest. That is gone. There are dozens of people who are doing the same thing you are doing, and they are trying to do it better. It is like water in a tray: you shake it, and it will find the path of least resistance. That is what is going to happen to so many jobs -- they will go to that corner of the world where there is the least resistance and the most opportunity. If there is a skilled person in Timbuktu, he will get work if he knows how to access the rest of the world, which is quite easy today. You can make a Web site and have an e-mail address and you are up and running. And if you are able to demonstrate your work, using the same infrastructure, and if people are comfortable giving work to you and if you are diligent and clean in your transactions, then you are in business.''

Instead of complaining about outsourcing, Rao said, Americans and Western Europeans would ''be better off thinking about how you can raise your bar and raise yourselves into doing something better. Americans have consistently led in innovation over the last century. Americans whining -- we have never seen that before.''

Rao is right. And it is time we got focused. As a person who grew up during the cold war, I'll always remember driving down the highway and listening to the radio, when suddenly the music would stop and a grim-voiced announcer would come on the air and say: ''This is a test. This station is conducting a test of the Emergency Broadcast System.'' And then there would be a 20-second high-pitched siren sound. Fortunately, we never had to live through a moment in the cold war when the announcer came on and said, ''This is not a test.''

That, however, is exactly what I want to say here: ''This is not a test.''

The long-term opportunities and challenges that the flattening of the world puts before the United States are profound. Therefore, our ability to get by doing things the way we've been doing them -- which is to say not always enriching our secret sauce -- will not suffice any more. ''For a country as wealthy as we are, it is amazing how little we are doing to enhance our natural competitiveness,'' says Dinakar Singh, the Indian-American hedge-fund manager. ''We are in a world that has a system that now allows convergence among many billions of people, and we had better step back and figure out what it means. It would be a nice coincidence if all the things that were true before were still true now, but there are quite a few things you actually need to do differently. You need to have a much more thoughtful national discussion.''

If this moment has any parallel in recent American history, it is the height of the cold war, around 1957, when the Soviet Union leapt ahead of America in the space race by putting up the Sputnik satellite. The main challenge then came from those who wanted to put up walls; the main challenge to America today comes from the fact that all the walls are being taken down and many other people can now compete and collaborate with us much more directly. The main challenge in that world was from those practicing extreme Communism, namely Russia, China and North Korea. The main challenge to America today is from those practicing extreme capitalism, namely China, India and South Korea. The main objective in that era was building a strong state, and the main objective in this era is building strong individuals.

Meeting the challenges of flatism requires as comprehensive, energetic and focused a response as did meeting the challenge of Communism. It requires a president who can summon the nation to work harder, get smarter, attract more young women and men to science and engineering and build the broadband infrastructure, portable pensions and health care that will help every American become more employable in an age in which no one can guarantee you lifetime employment.

We have been slow to rise to the challenge of flatism, in contrast to Communism, maybe because flatism doesn't involve ICBM missiles aimed at our cities. Indeed, the hot line, which used to connect the Kremlin with the White House, has been replaced by the help line, which connects everyone in America to call centers in Bangalore. While the other end of the hot line might have had Leonid Brezhnev threatening nuclear war, the other end of the help line just has a soft voice eager to help you sort out your AOL bill or collaborate with you on a new piece of software. No, that voice has none of the menace of Nikita Khrushchev pounding a shoe on the table at the United Nations, and it has none of the sinister snarl of the bad guys in ''From Russia With Love.'' No, that voice on the help line just has a friendly Indian lilt that masks any sense of threat or challenge. It simply says: ''Hello, my name is Rajiv. Can I help you?''

No, Rajiv, actually you can't. When it comes to responding to the challenges of the flat world, there is no help line we can call. We have to dig into ourselves. We in America have all the basic economic and educational tools to do that. But we have not been improving those tools as much as we should. That is why we are in what Shirley Ann Jackson, the 2004 president of the American Association for the Advancement of Science and president of Rensselaer Polytechnic Institute, calls a ''quiet crisis'' -- one that is slowly eating away at America's scientific and engineering base.

''If left unchecked,'' said Jackson, the first African-American woman to earn a Ph.D. in physics from M.I.T., ''this could challenge our pre-eminence and capacity to innovate.'' And it is our ability to constantly innovate new products, services and companies that has been the source of America's horn of plenty and steadily widening middle class for the last two centuries. This quiet crisis is a product of three gaps now plaguing American society. The first is an ''ambition gap.'' Compared with the young, energetic Indians and Chinese, too many Americans have gotten too lazy. As David Rothkopf, a former official in the Clinton Commerce Department, puts it, ''The real entitlement we need to get rid of is our sense of entitlement.'' Second, we have a serious numbers gap building. We are not producing enough engineers and scientists. We used to make up for that by importing them from India and China, but in a flat world, where people can now stay home and compete with us, and in a post-9/11 world, where we are insanely keeping out many of the first-round intellectual draft choices in the world for exaggerated security reasons, we can no longer cover the gap. That's a key reason companies are looking abroad. The numbers are not here. And finally we are developing an education gap. Here is the dirty little secret that no C.E.O. wants to tell you: they are not just outsourcing to save on salary. They are doing it because they can often get better-skilled and more productive people than their American workers.

These are some of the reasons that Bill Gates, the Microsoft chairman, warned the governors' conference in a Feb. 26 speech that American high-school education is ''obsolete.'' As Gates put it: ''When I compare our high schools to what I see when I'm traveling abroad, I am terrified for our work force of tomorrow. In math and science, our fourth graders are among the top students in the world. By eighth grade, they're in the middle of the pack. By 12th grade, U.S. students are scoring near the bottom of all industrialized nations. . . . The percentage of a population with a college degree is important, but so are sheer numbers. In 2001, India graduated almost a million more students from college than the United States did. China graduates twice as many students with bachelor's degrees as the U.S., and they have six times as many graduates majoring in engineering. In the international competition to have the biggest and best supply of knowledge workers, America is falling behind.''

We need to get going immediately. It takes 15 years to train a good engineer, because, ladies and gentlemen, this really is rocket science. So parents, throw away the Game Boy, turn off the television and get your kids to work. There is no sugar-coating this: in a flat world, every individual is going to have to run a little faster if he or she wants to advance his or her standard of living. When I was growing up, my parents used to say to me, ''Tom, finish your dinner -- people in China are starving.'' But after sailing to the edges of the flat world for a year, I am now telling my own daughters, ''Girls, finish your homework -- people in China and India are starving for your jobs.''

I repeat, this is not a test. This is the beginning of a crisis that won't remain quiet for long. And as the Stanford economist Paul Romer so rightly says, ''A crisis is a terrible thing to waste.''

 © Thomas Friedman, New York Times 2005



Martin Scott

The first hallucination I had was while shopping at an Eagle supermarket in Iowa City. In the fresh fruit and vegetable aisle, my field of vision began to get cloudy and saffron-colored, and by the time I made it to frozen foods, I could barely see. Going to the cashier took almost more courage than I had, because I was afraid she would notice something wrong with me, that I wouldn’t be able to write the check and would have to ask for help, embarrassed by my crippled tongue—I could not have described or classified my ailment. I put on what I hoped was a neutral face and strolled up, and when my turn came, by God, I wrote the check, clumsily but accurately enough, and the cashier smiled indulgently. I had to walk home across several busy streets, but I made it, and spent the rest of the day writhing on my bed in what, it turns out, was my first migraine.

The onset of a migraine is accompanied by a peculiar feeling that something is going to happen, as if I can feel the nervous disruption rising through the brain like a swimmer to the surface, as anxious for air as the migraine is to make itself known. And if I watch it come, the malaise becomes more general and unfathomable, as if I can feel my body in the process of deformation, my head swelling to one side and shrinking on the other, my eyes bulging, almost to the size of my palm. It is then I run for ice and medication, if I can, but if I can’t, submit myself to this meditation on pain.

Classical migraines, the type I suffer from, involve not only excruciating headache neuralgia, usually above an eye or at the temple, and not only the intense nausea of the common migraine, but also a blind spot in the field of vision. The hallucination or scotoma (meaning shadow) starts as a small glittering flake and builds, after a few minutes, into a huge purple amoeba that shimmers and pulsates, a horrible planet Jupiter hovering over the visible world. Wherever I turn my head, it is there. It completely obscures, or more accurately erases, whatever should be there, and so leaves me with an attendant sensation of confusion—where has the world gone? What has taken it away? The world has disappeared with my ability to perceive it, or so the child in me, prominent under stress, believes.

The migraine hallucination often begins as an irregular zigzag streak through the central part of my line of vision. This creates a kind of Cubist effect in the faces I look into: the migraine etches through them, distorting an eye lower or higher than it should be, or removing it altogether. And as I turn my head from side to side, I can look around the hallucination, but not through it; I can fill in the blanks, but when I stare face-on it seems that the person has exploded. The migraine attacks the object in view almost the way a Cubist does, cutting it up into planes and pieces, as if the continents of the mind were drifting apart and the halves of vision, rational and irrational, breaking apart in opposite directions, leaving a gap through which a light pokes so that depth and dimension become illusory. A headache can disrupt the pillars of my assumptions: that the world is solid and more or less stationary in space. During a migraine, I discover perception is a two-dimensional screen that can be ripped apart, revealing oblivion, not solidity, beneath me.

What is this oblivion? I have wondered if it might be constricted blood vessels in the eye, or a problem of disoriented synapses. Oliver Sacks, in his wonderful book Migraine, thinks it might be electrical disturbances in the optic nerve. Considered aesthetically, the scotoma is artwork, a kind of deep symbol, and I wonder if it might not be an emblem of the imagination itself, since its job is to appropriate the innocent appearances of the world and distort them. Coleridge said of the imagination that it "dissolves, diffuses, dissipates, in order to recreate."

It is clear, then, that the imagination must derive its very power from this act of dissolution, the acidic destruction of primary perception into material malleable enough for the imagination to work upon, just as the potter must work the clay before he considers it suitable. The imagination dissolves in order to recreate; it inverts its material to make it anew.

Migraine is destructive in that it causes intense pain, as if someone were pressing a diamond through my eye to find a particular nerve cord. It is a hard, constant pain so relentless and out of control that it must come from outside myself, though I know it doesn’t. The pain is caused by blood vessels contracting and then expanding wildly so that blood seeps out into surrounding tissue, but this physical damage seems incommensurate with the pain, as if the migraine were lavishing itself on me.

But there is something constructive about migraine. Just as the imagination dissolves the world to create it anew, so the migraine dissolves perception in a vial of pain to create it anew in hallucination, like a wild slash of paint. William Carlos Williams said, “The imagination is a --,” meaning that the imagination becomes whatever the senses feed it. Like the migraine, the imagination removes the world in favor of itself, replacing the “real world” the optic nerve picks up with its own artwork.

And this artwork, scrawl as it is, seems hostile and bitter, like graffiti. I can’t help feeling it has a message, a hidden language that will probably never find its translator; it has a kind of symbolist-solipsist perfection in its abstract, unknowable, God-like detachment.

My second migraine was in a Long John Silver’s fast food restaurant—part of the fun the affliction has with you is the places it picks to come at you, places where you find yourself either stranded for a time or compelled not to scream out loud. This time the hallucination came in a clearish purple and white, rather than saffron, as I worked through my plate of fried fish and, laughing nervously out of macho dismissal of pain, described the symptoms to my (then) wife, who correctly diagnosed me and insisted we begin the long walk home before the attack turned worse. Of course, it was winter with the sun glaring on the ice, so, as she predicted, it got worse and worse. I craved darkness and silence, and ever since then I can barely stand the sun on a cloudy day, let alone a bright one.

This made me realize that my eyes, the instruments of vision, are also the instruments of torture. Any bright light, especially hot summer glare, can cause a blind spot to appear, the way harsh images of light stay on the eye even after you’ve turned away. Brilliant days, the most beautiful days, are also the most dangerous. I avoid looking at crisp, glossy surfaces and chrome, glass and mirrors, and even strong headlights in traffic at night. Migraine elaborates the glare, using it as a springboard to overpower the surfaces of what I perceive. It even affects speech. Often during an attack, I can’t produce the word I am looking for, so randomly related words are substituted by the confused brain working to get around the void and paralysis.

The migraine aura also produces an extreme sense of detachment from all my surroundings, and a feeling of hibernation or distance, as if I were packed inside a wall of Styrofoam. The world continues to go on as if nothing has happened, little realizing that for me everything has happened: I am having a migraine, I must lie down, I am blind. And this defines me, creates a horrible fascination for the hallucination that verges on the decadent and narcissistic. As the scotoma grows outward or larger like an approaching cloud of unknowing, so does the psychic malaise grow as it both strives to see around the cloud, see into it, and repress it, all at the same time. In appearance, it is usually a silvery-purplish scintillation of crystalline light that seems to shimmer with the same consistency as a mirage on a highway. It contains no exact form, though its edges seem geometrically sharp, almost like the edges of a circular saw. In this formlessness it seems like some primal material awaiting a demiurge, unless, as I suspect, I simply cannot see the forms that are there—vague crystalline castles rising from liquid streets, much like those sold in pill form to children through comic books; once dropped in water, the crystals form themselves into arabesques of red and blue, like bizarre towers or perverse trees, forbidding, dangerous, death-like. But it resembles nothing so much as nothingness, a piece of film that has been scratched so that, when projected, the light itself shows through on the screen.

Although the scotoma seems large while I am undergoing the hallucination, part of the horror is that I know it is really small, a part of me, projected onto the thin screen between the world and me. I watch it the way I watch a movie in a cinema, as a passive observer, making tenuous abstractions on the “meaning” of the images. Projected film, like sensuous perception, floats on an ocean of light always ready to swallow it back up and return it to its primal state, oblivion. Once in the Threepenny Theater in Chicago, I watched a film get stuck in the projector and the intense heat of the lamp melt the frame from the inside out. That is what a migraine looks like, only slower, the center of an empty house of mirrors. From glass to glass, the empty light forms a pattern of weaving and interweaving with nothing to interrupt it until it catches something—a face—and dissolves it into endless reflection.

But there is another paradoxical side to this, the personality of the hallucination. It will not leave the field of vision, it persistently follows me no matter how deliberately I flee, it is specific to me as only I can see, as if it were an Old Testament vision, wheels within wheels. And like the prophets, it bears a message of destruction and mortality, a kind of banner: this is a taste of death, now it will come and overwhelm you like a claw punching through a curtain.

Migraines are most terrifying when I get them while teaching. I feel a strong urge to stop the class, but I do not want to admit a weakness as strange as blind spots, so I continue with the scotoma obscuring more and more of the page I am reading aloud, until I am sure I am making mistakes and fear that the students know something is wrong; still I dare not stop, for fear the last bit of order in my class will collapse in embarrassment. One time I was teaching a poetry workshop at a High School for the Arts while the aura crept over me, with the added problem of a youthful audience, leaving me blind and close to speech-impaired. My fear of admitting weakness proved the stronger—I did not stop, since I was close to the end of the period, even though I later realized I must have been staring strangely into the face of one student without realizing it, as I could not see her. I am afraid to wonder what she must have thought, especially because the class before she had surprised me with her entire senior project—all nude photos of herself—and she was quite something to behold. And there I was, nearly drooling, staring into her face.

As the hallucination reaches its height, the disruption of the motor functions begins to taper off; cold vapors of numbness crawl over the hands and arms, the headache begins to subside, and the hallucination itself begins to subside, break up and sink back into the cracks of vision. It is as if the tide were going out. Nevertheless, the right side of my face retains a tingling sensation, as if a cold hand were still on it, disinclined to let go. And for hours the malaise, or the sense of having been shocked by something, lingers, as if the body needed a kind of decompression after rising through cold, deep water. Migraine is a kind of visitor from another world, which, for a while, tears a bit of my world out and replaces it with its own creation, a shimmering creature that erases everything in its path, a kind of mad, blind imagination roving over me, a rebuke and a censure. The pain subsides and washes out of the senses, though the body still feels marked. But my vision is restored.

Becoming a migraine patient was part of my coming of age: it happened when I was first learning seriously about writing poetry, when my first marriage was dissolving from lack of interest and my second was waiting in the wings, when my middle twenties and the middle Eighties were about to turn late. Perhaps this is why I associate migraines with the imagination: trial by fire, ordeal of initiation, they imitate the purging work of writing, revision, where for me the essence of imagination lies. A migraine closes off the world and reworks you until it gets you where it wants you. And the sense of relief when one passes over like a thunderstorm is very like the feeling when I more or less have finished a draft of a poem: something has gone out of me, something almost tactile. This year (1988) I turn thirty, and though I don’t by any means feel middle-aged, I do feel something going out of me, replaced by mortal time accumulating like film on the take-up reel of a projector. Could migraines be an emblem of the interior clock, a sort of visual alarm going off to signal it is time to wake up after all this sleepwalking? I have never felt more awake than I feel after a migraine, but it is a bittersweet taste to regain control of your bodily senses with the knowledge of how limited, how fragile they are.

© Martin Scott 2005





5 girls' deaths highlight child-labor woes in China

Some estimate 10 million kids help fuel the manufacturing engine, often in harsh conditions

By Ching-Ching Ni
Tribune Newspapers: Los Angeles Times

May 22, 2005

BEIXINZHUANG, China -- Christmas was just two days away, and snow was falling when the five factory girls finished their shift. They had been working 12 hours, it was already after 1 a.m., and their dorm was freezing cold. One of them ran out to grab a bucket and some burning coal. The room warmed slightly. They drifted off to sleep.

The next morning, none of the girls woke up; they had been poisoned by the fumes. But their parents believe at least two of them died a much more horrible death.

They charge that the owner of the canvas factory where they worked was so impatient to cover up the fact that three of the unconscious workers were underage that he rushed the girls into caskets while some were still alive.

"You see the damage on the corner of the box, the bruises on the side of her head, and the vomit in her hair?" said Jia Haimin, the mother of 14-year-old Wang Yajuan, pointing to pictures of her daughter lying in a cardboard casket stained with vomit and appearing to show evidence of a struggle.

"Dead people can't bang their heads against the box. Dead people can't vomit. My child was still alive when they put her in there."

The case, made public months later by New York-based Human Rights in China, highlights this country's often hidden problem of child labor. The Chinese government officially forbids children younger than 16 from working, but critics say it does little to enforce the law.

Statistics are hard to come by, but by some estimates, as many as 10 million school-age children are doing their part to turn China into a low-cost manufacturing powerhouse.

"We know enough about the problem to know child labor is extremely widespread," said Robin Munro, research director at China Labor Bulletin, a Hong Kong-based labor-rights organization focused on mainland China.

"The rural education system in many parts of the countryside is in a state of virtual collapse," Munro said. "There is a high dropout rate of children under 16. They are not just sitting around doing nothing. It is safe to assume most are engaged in some kind of work illegally."

Children, some as young as 4, roam China's prosperous coastal cities, begging on the streets or selling roses deep into the night, apparently victims of schemes that use youngsters as bait. Even infants are being rented out as maternal cover for women selling pirated porn movies on the streets.

In 2000, state media reported that 84 children had been kidnapped from southern China's Guizhou province to work in coastal cities assembling Christmas lights. The youngest was 10.

In 2001, an explosion at a rural school in Jiangxi province killed 42 people, most of them 3rd and 4th graders who were believed to be making fireworks.

Classes of kids contracted out

Labor activists say a growing number of rural schools have contracted entire classes of students to work in urban factories, supposedly to help defray school costs.

"They call it work-study programs," Munro said.

In principle, China is committed to ending child labor. According to the International Labor Organization, China has ratified two ILO conventions on labor practices: Convention 138, which forbids minors under 15 from working, and Convention 182, which bans the worst forms of child labor, including prostitution and slave labor.

But this is a country where making laws is much easier than implementing them.

"This is a society in transition," said Hans van de Glind of the ILO's office in Beijing. "The intention is there to make progress."

On the dusty plains of Beixinzhuang village, in northern China's Hebei province, grieving parents blame poverty and lack of opportunity for sending their children to the factories.

"Rural families are not like city people--not all children can afford to go to school. So they work to help alleviate the family's burdens," said Sun Jiangfen, the mother of Jia Wanyun, one of the 14-year-old canvas-factory workers who died in December. "In this village, every family has a child working in a factory."

Sun's daughter had been on the job about a month when the accident occurred in the girls' sleeping quarters. She had quit school the previous spring, moving about 35 miles away to an industrial suburb of Shijiazhuang, because her parents needed her help to put her 12-year-old brother through school.

Many rural girls drop out because their families can't afford to pay more than one tuition. Two children in school, which costs about $300 per student, would have been too much for her migrant construction-worker father and farmer mother.

The girl was promised about $100 a year in wages, but she hadn't been paid because she was still considered an apprentice, her mother said.

Both of Wang Shuangzheng's daughters had worked at the canvas factory spinning yarn. His 21-year-old stopped recently after marrying; his younger daughter, Jia Shiwei, picked up the slack when she was 15 and had been working there two years before the accident.

Family tries to carry on

The family last saw her during autumn harvest when she came home to help. Her grandmother suffered a stroke when she learned of the girl's death, and the family is still in shock.

"She wanted to go, and I couldn't stop her. My son's getting married, and we need the money," said Wang, a farmer.

Another villager, Wang Shuhai, has been ill for years with a heart condition. He is unable to work, and his family is deep in debt because of his medical expenses. He is tormented by the thought that his daughter, Wang Yajuan, died because of him. She had called only once since leaving for work last fall, he said.

"She said she didn't want to stay there anymore. The work was too hard, and the food was terrible," said Wang, holding up a school photo of a fresh-faced little girl in a ponytail. "I told her to stay, because if you leave, you wouldn't be paid. The child listened to me."

According to relatives, the girls rarely talked about how hard the conditions were.

"They don't want us to worry," said Jia Shitong, 24, Jia Shiwei's brother. "But think about it--12 hours a day with no weekends off. How can it not be exhausting?"

Quick meal, bed, death

The day of their final shift, parents say, the girls ate a quick meal before going to bed, sleeping five to a room, sharing two single beds shoved together for maximum warmth.

"It must have been really cold," said Sun, Jia Wanyun's mother.

For a while, the families fought the official ruling that their children had not been buried alive. They persisted even after the long-awaited autopsy that came late last month, reconfirming the government's earlier report.

"They ripped my daughter's heart out. The least they can do is give me some justice," said Jia Haimin, Wang Yajuan's mother.

Eventually they accepted a compensation package of about $12,000 each and agreed to drop all charges, according to the families' Beijing lawyer, Li Wusi.

"Sure, there are still lingering doubts about how they died," Li said. "But what choices do their parents have? Farmers have very low status in Chinese society. Farmers' daughters are the lowest of the low."

Copyright © 2005, Chicago Tribune



By Louis Rene Beres

Louis Rene Beres is a professor in the political science department at Purdue University who cannot be reached by cell phone.

June 12, 2005

I belong. Therefore I am.

This is the unheroic credo expressed by cell-phone addiction, a not-so-stirring manifesto that social acceptance is vital to survival and that real happiness is solely the privilege of mediocrity.

This largely undiagnosed techno-condition represents much more than a reasonable need to remain connected. After all, when one looks closely at these communications a clear message is delivered: Talking on a cell phone makes the caller feel more important, more valuable, less alone, less lonely.

At a time when "rugged individualism" has become a nostalgic myth in America, being witnessed in conversation with another--any other--is presumed to be absolutely vital. Certainly, the nature or urgency of the particular phone conversation is mostly irrelevant. In many readily observable cases the exchange consists of meaningless blather punctuated by monosyllabic grunts. There is no vital content here; certainly nothing to resemble a serious reflex of thought or feeling.

All that really matters is that the caller be seen talking with another human being and that the conversation push away emptiness and anxiety.

How sad. The known universe is now said to be about 68 billion light-years "across," and yet here, in the present-day United States, being seen on the phone--preferably while walking briskly with rapt inattention to one's immediate surroundings, including life-threatening car traffic or heavy rain--is a desperate cry to every other passerby: "I am here; I have human connections; I count for something; I am not unpopular; I am not alone."

The cell phone, of course, has not caused people to display such feelings. Rather, it is merely an instrument that lets us see what might otherwise lie dormant in a society of dreadful conformity and passionless automatism. Ringingly, it reveals that we have become a lonely crowd driven by fear and trembling.

There exists, as Freud understood, a universal wish to remain unaware of oneself, and this wish generally leads individuals away from personhood and toward mass society. Hiding what might express an incapacity to belong, trying to be a good "member," the anxious American soon learns that authenticity goes unrewarded and that true affirmations of self will likely be unpardonable.

Humans often fear ostracism and exclusion more acutely even than death, a personal calculus that is largely responsible for war, terrorism and genocide. It is small wonder, then, that something as harmless as a cell phone should now have become a proud shiny badge of group standing.

The inner fear of loneliness expressed by cell-phone addiction gives rise to a very serious and far-reaching social problem. Nothing important in science or industry or art or music or literature or medicine or philosophy can ever take place without some loneliness.

To be able to exist apart from the mass--from what Freud called the reconstituted "primal horde" or Nietzsche the "herd" or Kierkegaard the "crowd"--is notably indispensable to intellectual development and creative inquiry. Indeed, to achieve any sense of spirituality in life, one must also be willing to endure being alone.

All of the great religious leaders and founders sought essential meanings "inside," in seclusion, deep within themselves.

But personal sadness in America seems to grow more intense wherever communication is difficult and wherever fears are incommunicable. In one sense, cell-phone addiction is less an illness than an imagined therapy.

Ultimately, in a society filled with devotees of a pretended happiness, it is presumptively an electronic link to redemption.

But the presumption is all wrong.

Trying to fill some vacancy within themselves, the compulsive cell-phone users should now remind us of a revealing image from T.S. Eliot: They are the "hollow men," they are the "stuffed men," leaning together as they experience painful feelings of powerlessness. More than anything else, they fear finding themselves alone, and so they cannot find themselves at all.

The noisy and shallow material world has infested our solitude; upon all of us the predictable traces of herd life have now become indelible. Facing an indecent alloy of banality and apocalypse, we Americans seek both meaning and ecstasy in techno-connections.

But we discover instead that the way is cruelly blocked by an insipid mimicry and endless apprehension. Do we dare to disturb the universe, or must we continue to die slowly, prudently, always in responsible increments, without ever taking the chance of becoming fully born?

One conveniently forgets that life is always death's prisoner.

Yet, once we can come to grips with this liberating idea we can begin to take our numbered moments with more intense pleasure and with true confidence in ourselves as unique. For now only our self-doubt seems inexhaustible, but this is because we routinely look to others to define who we are and because we despair when we do not measure up to these manufactured definitions.

In a sense, the attraction of the cell-phone machine derives from our own machine-like existence, a push-button metaphysics wherein every decision and every passion follows a standardized and uniformly common pathway.

We believe that we are the creators of all machines, and strictly speaking, of course, this is correct. But there is also an unrecognized reciprocity here between creator and creation, an elaborate pantomime between user and used.

Increasingly our constructions are making a machine out of man. In an unforgivable inversion of Genesis, it now even appears that we have been created in the image of the machine.

Cell-phone addiction is merely the very visible symptom of a pervasive pathology. The underlying disease is a social order built upon nonsense, a literally mindless network of jingles, advertised meanings and ready-made ideas that deplores individuality and celebrates slogans.

Our American society has lost all sense of awe in the world.

Cell phones in hand, we talk on and on because we would rather not think, and we would rather not think because there is no apparent emotional or material payoff for serious thought.

Holding fast to our cell phones, our fondest wish is that we should soon become interchangeable. We should be careful what we wish for.

Copyright © 2005, Chicago Tribune



By Cynthia Hubert

 June 13, 2005

SACRAMENTO -- Sergio Chaparro's information-technology students had more than just a healthy attachment to their cell phones.

When he asked them to shut them off for three days, they panicked.

"They were afraid. They were truly afraid," Chaparro, then an instructor at Rutgers University in New Jersey, recalled of the assignment last year. "They thought it was going to be a painful experience, and they were right."

Only three of about 220 students managed to complete the assignment.

To Chaparro, now an assistant professor at Simmons College in Boston, the experiment confirmed what he strongly suspected was a widespread psychological dependence on cell phones.

"I think it's critical that people realize their level of dependency, and possibly do something about it," he said.

Business executives. Soccer moms. Travelers. Teenagers. All of them adore their cell phones. But when does love turn into addiction?

High school paranoia

A Korean study found recently that nearly a third of high school students showed signs of addiction, including paranoia, when they were without their phones, and two-thirds were "constantly worried" that they would miss a text message when their phones were off.

In Britain, researchers concluded that people are so intimately connected with their cell phones that they see them as "an essential item, an extension of self."

"No other medium has infiltrated society so widely and so quickly" to alter lifestyles, and "no other portable medium is used so frequently," wrote researchers for Teleconomy Group. They surveyed 210 consumers about their use of mobile phones.

Here in America, research on emotional attachment to cell phones has been sparse. But Joseph Tecce, an associate professor of psychology at Boston College, said it is a rich field to be mined.

Like substance abuse, Tecce said, excessive use of cell phones can lead to personal problems.

"If you try to exert control over your use of the phone and you can't do it, that's dependence. That's addiction," said Tecce, who studies "psychobiological behavior" including addictions and phobias.

"People who instantly reach for the cell phone every time they feel uneasy or anxious about a problem are relying too much on it," he said.

Ultimately, said Tecce, such behavior undermines self-reliance and reduces self-esteem.

"Like many rewarding experiences, leaning heavily on cell phones for advice or psychological nurturance is effective in reducing anxiety in the short term, but harmful in the long term," he said.

"How? By taking away control of one's behavior and placing it in the hands of others. After all, a problem might arise without a handy cell phone, and then helplessness rules the hour."

Too much yapping on the cell phone, Tecce added, can also lead to "a constant state of distraction" that "takes away a key component of happiness, the pleasure of total absorption in one activity to the exclusion of everything else."

Tecce recommended that cell phone abusers "put themselves on a quota system, either so many minutes per day or so many calls per day" in an effort to break a serious habit.

Dependence on electronic devices is hardly limited to cell phones, said Bill Lampton, a communications specialist and author in Georgia. Electronic mail, he said, is equally addictive.

"Not long ago my e-mail system was down for 24 hours," recalled Lampton, author of the book "The Complete Communicator."

"How did I feel? Isolated, marooned, in a sense almost rejected because I couldn't contact business and personal associates."

As for the cell phone, "It's not an exaggeration to say that it has become our contemporary pacifier," Lampton said. "As long as we're holding it, we don't show signs of unrest.

"The difficulty comes when we lose our perspective on a tool that we're supposed to control -- not let control us."

David Mullinax, a lobbyist who does business in Santa Barbara and Sacramento, admitted an addiction to his BlackBerry, a wireless gadget that, among other things, transmits e-mail.

`Information overload'

"Absolutely," he said.

"`Crack'-Berry is appropriate nomenclature."

Despite his attachment to the device, Mullinax said, it often makes him feel "bludgeoned with information overload" and ultimately feeling "weak and ineffectual."

"It's like being caught in a wave and being tossed around like a rag doll, unable to control where you're going and not able to assimilate the information into anything truly worthwhile," he said.

Cell phones are particularly seductive because they are relatively cheap, readily available and highly portable, Chaparro said.

"Society as a whole has created a dependency," he said.

Marketing of cell phones is relentless, and access to pay phones and other "land lines" is growing more and more limited, Chaparro noted. So people feel they "have" to carry cell phones. And once they do, they tend to overuse them.

In his class last year, Chaparro said, he learned "amazing things" about the cell phone culture of his students.

Phone's a lifeline

"For most of them, the phone was a lifeline in many ways," he said.

"I had one student who went on a spring break trip to Florida, lost her cell phone, and her mom had to FedEx another one from home right away. She said, `I didn't feel secure, Sergio. I couldn't even call to rent a car.'"

Against his better judgment, Chaparro said, he recently broke down and bought a cell phone for himself.

"And let me tell you, it's addictive," he said.

"I have the very simplest one, the cheapest one ever, no camera, no text, nothing. I pay the minimum. But sometimes I feel I can't leave home without it."

As cell phones become ubiquitous, people's "addiction" is likely to increase, he said.

"We need some voices out there to tell people to be cautious," Chaparro said.

"It's not about stopping progress. It's about making people realize there are other ways to interact."

Copyright © 2005, Chicago Tribune



By Andrew Martin
Tribune national correspondent

June 15, 2005

SOMERVILLE, Mass. -- Gary Hirshberg isn't the first parent to become frustrated by the lack of healthy fast-food options for his kids during a road trip. But as president of Stonyfield Farm yogurt company, Hirshberg was in a position to do something about it.

Now, six years after that fateful vacation in northern California, Hirshberg is overseeing a project he hopes will spark a revolution in the fast-food industry. It's called O'Naturals, a small chain of fast-food restaurants in the Northeast that offers everything from carrot ginger soup and organic smoked tofu to bison meatloaf sandwiches, and macaroni and cheese, much of it made with organic or natural ingredients.

O'Naturals opened its fourth store in April in a former bakery at the edge of the trendy Davis Square neighborhood in Somerville, a Boston suburb, and plans are under way to expand the chain across the nation through franchises.

"We call it `fast food with a mission,'" said Hirshberg, who envisions his restaurants shaking up the restaurant business in much the same way that Whole Foods Market shook up the grocery trade.

At a time when McDonald's is championing salads and Burger King is offering a veggie burger, Hirshberg and a handful of other entrepreneurs are taking things a step further, emphasizing how the food is produced as much as how it tastes.

One of the most successful examples is the Chipotle chain, which has 450 restaurants nationwide and is partly owned by McDonald's. Several years ago it began offering tacos and burritos with Niman Ranch pork, which means the pigs are raised outdoors and do not eat feed containing hormones or antibiotics. Pork sales at the restaurant jumped sixfold, and Chipotle recently started offering antibiotic-free chicken at many of its restaurants.

"If you take a Niman Ranch pork chop and take it next to a factory farm pork chop, there's a big difference," said Steve Ells, founder and chief executive officer of Chipotle. Besides selling food that tastes better, Ells said, selling food raised in a more humane and environmentally sustainable way is the right thing to do.

"We really think we can change the supply chain for the better," he said, noting that Chipotle is slowly adding organic ingredients if supplies are available and not too costly.

"We don't want to serve an $18 burrito," Ells said, adding, "You can't flip a switch and have it all free-ranging and organic overnight."

Doubters remain

Not everyone is convinced that healthy fast food will succeed any better than it has in the past. "If I had a nickel for every time somebody told me times are different, I'd be a millionaire," said Harry Balzer, vice president of the NPD Group, which tracks what people eat.

In Colorado, the Good Times burger chain is following a formula similar to Chipotle's, making hamburgers from "all-natural" beef that doesn't contain hormones or antibiotics.

In Tampa, the EVOS chain offers tacos made with free-range beef and hormone-free chicken. Its french fries are baked rather than fried, reducing fat by three-quarters.

"I think the trends are definitely going in that direction," said Steven Hoffman, president of Compass Natural Marketing in Colorado, a marketing consultant for organic and natural-foods companies. "Look at Whole Foods Market. It's a $4 billion company. This is spilling out into fast food. It is probably the last frontier for healthy foods."

McLean Deluxe failed

While Hoffman is optimistic that the new healthy fast-food restaurants will succeed, a history of spectacular health-food failures suggests otherwise. Remember the McLean Deluxe? The burger was lower in fat thanks to a seaweed derivative mixed with the meat, but when McDonald's offered the sandwich to consumers in the early 1990s, few were interested.

Similarly, the D'Lites chain in the early 1980s offered low-fat burgers on multigrain buns, vegetarian sandwiches and salads, and it quickly grew to 104 restaurants, prompting one of its founders to proclaim, "We're on the leading edge of an up-and-coming consumer wave." But the D'Lites wave crashed about five years after it began.

Harry George, an Evanston native who lives in Arizona, hoped to turn the Blind Faith Cafe, the popular vegetarian restaurant in Evanston, into a chain of restaurants, albeit more formal than fast-food stores. He opened a second Blind Faith Cafe on Lincoln Avenue several years ago but it closed after a year, a failure he attributes to poor location and inadequate parking.

"Our thought was to expand Blind Faith Cafe to five units," George said. "Five is kind of the magic number for getting outside revenue for restaurants, because then you can prove it's the concept and not the location."

Balzer said O'Naturals may well find a sizable niche like Whole Foods, which now has 170 stores and 59 in the works. But if Hirshberg hopes to play at the level of McDonald's, with more than 30,000 locations, his restaurant will have to compete with other mega-chains on taste, convenience and price, Balzer said.

"The major shift in the supermarket industry was not Whole Foods," he said. "It was Wal-Mart. How did they do it? By making it healthier? By making it easier? They own cheap."

But Hirshberg said the success of Stonyfield Farm, now America's largest organic yogurt business, shows that Americans are willing to pay more for higher-quality food. He likens the prices at O'Naturals, where sandwiches cost $6 to $7.50, to those at the Panera Bread chain.

"Some of our fastest-growing items are a dollar or more than our competition," Hirshberg said, referring to Stonyfield. Noting the success of Cosi and Panera, he said there are "big, big changes happening in this country as people realize that you get what you pay for."

The inspiration for O'Naturals was a Hirshberg family vacation in northern California in 1999, when family members became "frustrated hostages to junk food." He later had an epiphany when he took a carload of kids to the deli counter at Whole Foods, and they happily dined on organic pizza and other healthy fare.

Hirshberg enlisted an old friend to run O'Naturals day-to-day, a former executive from L.L. Bean named Mac McCabe. They opened the first O'Naturals in Portland, Maine, in 2001 and now envision not only more freestanding stores, but also O'Naturals counters in airports and supermarkets.

Noting the checkered history of health-food joints, Hirshberg said he and McCabe were careful to focus first on taste. O'Naturals uses organic ingredients whenever possible and food that is less processed than the fare found at traditional fast-food restaurants.

"We don't really talk about healthy anywhere in our restaurant," Hirshberg said. "That's very intentional. Not because it isn't. We want them to enjoy the food for the food, and then to feel that health is a benny."

- - -

THE LATEST FAST-FOOD TREND isn't about secret sauces; it's about promoting freshness, quality ingredients and healthful eating. Among the claims:

O'Naturals

Locations: 4

Opened: 2001

Ingredients do not contain additives or preservatives. The chain also uses organically raised or grown ingredients in many of its items.

Chipotle

Locations: 450

Opened: 1993

Uses mostly animals raised outdoors without hormones or antibiotics.

McDonald's

Locations: More than 30,000

Opened: 1955

Offers a fruit and walnut salad.

Burger King

Locations: More than 11,000

Opened: 1954

Has added Morningstar Farms garden veggie burgers to the menu.


Copyright © 2005, Chicago Tribune




John Kass

Published June 15, 2005


Up until the jury acquitted Michael Jackson of sexually molesting a young boy the other day, the criminal case against him was unremarkable. Except for the celebrity, it was something common, something you'd find at the bottom of a garbage can on a hot morning in June.

But from now on it becomes remarkable. Our reaction to the Jackson verdict tells us much about how far we'll go to accommodate celebrity, just as the court case told us about the predators, from Jackson and the alleged victim's family to the show-biz reporters and shrieking legal shills who attached themselves to this story, so many rubbing their fingers together, quickly, furtively.

There were the delighted squeals of Jackson's fans outside the courthouse. There was Rev. Jesse Jackson sobbing some grateful public tears at the good news for the pop singer before leading a prayer of thanksgiving, appropriately enough in a hotel bar in Chicago.

Again, Jackson was acquitted, so in the eyes of the law he is not a sexual predator. And while some jurors and much of the public consider the singer to be a creep, his record is clear--he was not convicted. Yet there is clearly something predatory about Jackson's behavior. He has lavished favor on parents, with access to celebrity, and in exchange for all the goodies he gets what he wants--a boy in his bed.

And any mother who would put her child into the bed of a strange man as she accepts his gifts is a pimp. No, I take that back. She's worse. A pimp sells another person's flesh for money, and a mother who'd do such a thing sells her own. As they balanced these in the hierarchy of evil, the jury sided with the predatory star, and not with the woman they saw as a lying schemer who pimped her son.

I avoided this story because it was so show-biz, the sexually confused and infantile pop icon, once black and now bleached, whose appetite has been shielded by music money and Hollywood power, and by the mother who didn't call police, but first called a lawyer to craft a hefty settlement.

Except for the Hollywood angle, and our peculiar American attitude toward celebrity, our yearning for it, our eagerness to forgive it, the rest is as pathetic as what you'd learn at any criminal court building: that some kids are sold for drugs, some for an adult's peace of mind.

"I feel that Michael Jackson has probably molested boys," juror Raymond Hultman, 62, told CNN, echoing the statements of other jurors. "To be in your bedroom for 365 straight days and not do something more than just watch television and eat popcorn, that doesn't make sense to me. ... But that doesn't make him guilty of the charges that were presented."

There was reason for doubt. And, naturally, the prosecutor will be criticized for being overzealous. But many prosecution critics have shielded themselves from moral responsibility by publicly hoping that Jackson won't take any more little boys to his bed. Well, here's some news to soothe them.

"He's not going to [take little boys to bed with him] because it makes him vulnerable to false charges," a Jackson spokesman/lawyer/spinner was quoted as saying Tuesday.

And, it's not good business to get caught taking little boys to bed, either, as music and Hollywood executives have been saying. As one celebrity journalist said on Fox News after the verdict, shaking his head, sighing, "Kids will bring you down if you sleep with them."

There's also the new race angle, which is odd because Michael Jackson's attitude toward racial identification is ambivalent at best. Yet according to a Gallup poll, whites thought he was guilty by a roughly 2-1 margin, and non-whites thought he was innocent by the same ratio.

As in the O.J. Simpson murder trial, it is believed that the particulars were layered against historical grievance, as if the alleged victim and Jackson carried the issues of slavery and discrimination with them, rather than the pathology of a formerly black pop star who likes sleeping with little boys, including the one whose mother is Hispanic and presumably not the descendant of Southern plantation owners and segregationists.

We've also been given the soap opera angle, the "Who dissed who in court" business, and how the jury didn't like the mother or the noise she made under oath. "I disliked it intently [sic] when she snapped her finger at us," one juror, a 79-year-old great-grandmother, was quoted as saying. "That's what I thought, `Don't snap your fingers at me, lady.'"

And don't forget the rehabilitation angle: Will Jackson's image be reformed by those who stand to profit by what's left of him?

Show biz is somewhat about talent, but it's more about promotion, about money, about pimping the fantasy of flesh. It's been that way since the first Hollywood starlet was privately interviewed by the first Hollywood big shot after the first big Hollywood lunch.

Can't you hear the buzzing of the flies?




Shaila Dewan   June 22, 2005

PHILADELPHIA, Miss., June 21 - In what is likely to be the final chapter in a story that has troubled a generation, a jury pronounced Edgar Ray Killen guilty of manslaughter on Tuesday in the deaths of three young and idealistic civil rights workers who disappeared on a summer night here exactly 41 years ago.

Mr. Killen, 80, sat in a wheelchair, the thin, greenish tubes of an oxygen tank under his nose, his expression impassive as the verdict was read aloud. Throughout the courtroom, people wept - the Killen family on the right, the victims' relatives on the left, as well as townspeople deeply invested in seeing the case brought to trial in hopes that Neshoba County could overcome its past.

Roscoe Jones, a tall, elderly black man with tear-rimmed eyes who had worked alongside the three men who died, pushed his way through the crowd to the side of Rita Bender, a diminutive white woman who had been married to one of them. "Excuse me," Mr. Jones said, politely urgent. "Excuse me." When he reached Ms. Bender, they embraced.

The disappearance of the three men, Andrew Goodman, 20, Michael Schwerner, 24, and James Earl Chaney, 21, on June 21, 1964, drew the national news media and hundreds of searchers to Neshoba County, while Mississippi officials said publicly that the disappearance was a hoax intended to draw attention. When the three bodies - two white, one black - were found under 15 feet of earth on a nearby farm, the nation's horror helped galvanize the civil rights movement. The case, dramatized in the movie "Mississippi Burning," is one of the biggest in what some have called the South's "atonement trials" revisiting civil-rights-era atrocities.

Jurors said the evidence fell short of what they needed to convict Mr. Killen, a former member of the Ku Klux Klan, of murder.

"I should say I heard a number of very emotional statements from some of the white jurors," said Warren Paprocki, 54, a white juror. "They had tears in their eyes, saying that if they could just have better evidence in the case that they would have convicted him of murder in a minute. Our consensus was the state did not produce a strong enough case."

The defense plans to appeal. "At least he wasn't found guilty of a willful and wanton act," said James McIntyre, one of Mr. Killen's lawyers. "Manslaughter is a negligent act."

Although the federal government tried 18 men, including Mr. Killen, on a conspiracy charge in 1967, Mr. Killen - a preacher and sawmill operator - was the first to be charged by the state. The 1967 jury deadlocked over Mr. Killen, and he has maintained his innocence. He faces up to 20 years in prison on each count when he is sentenced on Thursday.

As he was wheeled out of the courthouse, Mr. Killen swatted away television cameras and microphones.

With witnesses dead and memories fading, he could be the only one of the mob of Klansmen responsible for the killings to be tried. Prosecutors say that a grand jury heard all the available evidence against the eight original defendants still living but returned only one indictment, against Mr. Killen. While some in Neshoba County said it was too late and too painful to revisit the episode, others thought that in doing so, the county might find redemption.

"Finally, finally, finally," said Jim Prince, the editor of the local weekly newspaper, The Neshoba Democrat. "This certainly sends a message, I think, to the criminals and to the thugs that justice reigns in Neshoba County, unlike 41 years ago."

Ben Chaney, James Earl Chaney's younger brother, said he spoke briefly to his 82-year-old mother after the verdict. "She's happy," he said. "She finally believes that the life of her son has some value to the people in this community."

But for some of those who had hoped to see Mr. Killen convicted of murder, the manslaughter verdict was less than a total victory. "The fact that some members of this jury could have sat through that testimony, indeed could have lived here all these years and could not bring themselves to acknowledge that these were murders, that they were committed with malice, indicates that there are still people unfortunately among you who choose to look aside, who choose to not see the truth," Ms. Bender, who was married to Mr. Schwerner, said after the trial.

To Nettie Cox, the first black to run for mayor in Philadelphia, the verdict was an affront. "Manslaughter," Ms. Cox said, putting her hands to her temples. "I just can't absorb manslaughter."

But two jurors interviewed said there was not enough evidence that Mr. Killen, who was accused of orchestrating the killings and recruiting the mob that abducted the men, beat Mr. Chaney and shot all three, had intended for the men to die.

Both the defense and the prosecution failed to impress the jury of nine whites and three blacks. Jurors said neither side presented enough witnesses and that the case relied too heavily on transcripts from the federal trial.

On Monday evening, after deliberating more than two hours, the jury reported to Judge Marcus Gordon that they were evenly divided, and he dismissed them for the night. But after deliberating for nearly three hours on Tuesday morning, they reached a unanimous verdict.

Jurors disputed an inference that manslaughter may have been a compromise verdict. One, Troy Savell, a white history teacher and coach, said he was initially in favor of acquittal, but his opinion changed as the jury deliberated. "I think the reasonable doubt was not there that he didn't have anything to do with it," Mr. Savell said.

Mr. Paprocki said race did not play a role in the deliberations. One of the three blacks on the jury was vocally in favor of a murder conviction at first, Mr. Paprocki said, but he was not sure where the other two stood.

Willis Lyon, the only one of the three black jurors who could be reached by phone on Tuesday, said: "The only thing I'll say to that regard is that we were as fair with Mr. Killen as we could have been. I think we gave him as fair a verdict on his behalf as was allowable."

Reached by phone, Shirley Vaughan, the forewoman, said she was emotionally drained by the trial and reluctant to speak. "With the little amount of evidence that we had, we did the very best that we could," she said.

Mark Duncan, the county district attorney, said he did not blame the jury for finding Mr. Killen guilty of a lesser charge than murder, pointing out that three of the four key witnesses were dead. "I think it was asking a lot of a jury to convict a man based on testimony of people who they couldn't see. All they had were their words on paper."

Mr. Duncan's partner in the prosecution, Attorney General Jim Hood, said two witnesses who had come forward since the case was reopened in 1999 had died, one by suicide and the other under questionable circumstances.

There were other obstacles for the prosecutors. Although seven of the original defendants, besides Mr. Killen, are still alive, they refused to testify before the grand jury in exchange for immunity, Mr. Hood said.

Confessions and other statements about the crime were not admissible if the witness was not available for cross-examination, the prosecutors said.

Asked about the possible legacy of the trial, Mr. Hood said: "I'm just a prosecutor, I don't pretend to be a sociologist. I will allow the historians to analyze what impact this trial may have on this community and the state of Mississippi and its reputation throughout the world."

Jurors, on the other hand, said they were keenly aware of the significance and symbolism of the trial. "I felt dispirited last night because of the six-six split," Mr. Paprocki said. "I was very concerned that in the event of a hung jury that it would just reinforce the prejudicial stereotypes that have been attached to Philadelphia and Neshoba County. I was very much saddened by the fact."

But on Tuesday, he said, "Folks that had been fairly well saying no, they just couldn't convict him, said, 'Well, manslaughter.' I don't know what happened. It was fairly dramatic. It made quite an impression on me."

Ariel Hart contributed reporting from Atlanta for this article.

© New York Times 2005




By Salim Muwakkil

a senior editor at In These Times

July 18, 2005

Sen. Barack Obama (D-Ill.) has taken America by storm. Like Kanye West, another fast-rising black Chicagoan, Obama seems to have the wind of fortune at his back. That wind, once blocked by barriers of racial bias, is pushing many black men toward unprecedented success in the fields of cinema, athletics, media, medicine, theater and many other areas of American life.

These are genuine milestones for a nation steeped in racial segregation and racist violence. But they obscure a much more painful reality: Black men are vanishing.

A huge gender gap has long been a feature of life for African-Americans in the public-housing developments of America's large cities where black men are like ghosts. A vast majority of leaseholders in the Chicago Housing Authority are women, for example. This chronic gender imbalance has debilitated many neighborhoods once defiantly robust.

But as we enter the 21st century, the growing gap between black men and women is threatening the viability of the entire African-American community.

An article by Jonathan Tilove in the May 8 edition of the Star Ledger in Newark, N.J., noted that according to census numbers "there are nearly 2 million more black adult women than men in America." This imbalance is alarming enough, but with nearly another million incarcerated or in the military the real gap is "2.8 million, or 26 percent. The comparable figure for whites was 8 percent," he wrote.

Tilove's article focused on East Orange, N.J., and found there are 37 percent more adult women than men. As the black population ages, the gap widens. "By the time people reach their 60s in East Orange, there are 47 percent more black women than men."

Besieged by poverty and the disease, violence and mass incarceration that accompany it, African-American men increasingly are missing in action, leaving black communities everywhere haunted by their absence.

There are more than 30 percent more black women than men in Baltimore, New Orleans, Chicago and Cleveland, Tilove noted. In New York City the number is 36 percent and 37 percent in Philadelphia.

This growing gender gap has devastating implications for the future of black America. The most obvious is the growing lack of marriageable mates for black women. Among well-educated, professional black women--a group that is growing rapidly--the gap is a chasm.

Black women are making unprecedented progress, and that's good news. But as they advance, black men are falling even further behind, and that pattern is likely to continue.

The recent edition of The Journal of Blacks in Higher Education warns that a "large and growing gender gap in African-American higher education has become a troublesome trend casting a shadow on overall black educational progress." The journal reports that in 2001 there were 1.1 million black women enrolled in higher education, while only 604,000 black men were enrolled. The more successful a black woman becomes, the more likely she will end up alone, said Walter Farrell of the University of Wisconsin in a March 2002 Washington Monthly piece. Professional black women are having fewer children, and that means a growing percentage of black children are being born into less-educated, less-affluent families.

Black women's prospects for family formation may be getting slimmer, but they remain reluctant to marry outside of their race. The 2000 census revealed that in 73 percent of black-white couples, the husband was black and the wife was white.

It's important to note that black women are the fastest-growing group of inmates in the nation's prisons. And they still bear the brunt of urban poverty as single parents in the commercial wastelands that too often are their neighborhoods. But overall, they are rising as black men are falling.

The reasons for this imbalance are many: high rates of infant mortality, homicide and AIDS are big factors in the winnowing of black youth. Through middle age and beyond, black men are disproportionately felled by cardiovascular disease and cancer (particularly prostate cancer).

But many experts are concluding that mass incarceration may be the leading culprit in accelerating this gap, and that diagnosis seems right to me.

This nation begins tracking black men into the criminal justice system at a very early age. According to a study by researchers at Yale University titled "Pre-Kindergartners Left Behind," the process begins in pre-kindergarten, where black children are more than twice as likely to be expelled as children of other races. The study found that boys are expelled 4.5 times more often than girls.

The tracking continues in elementary school, where African-American males are assumed to be academically deficient and inclined toward criminality. A vivid example of that process was unearthed in a study by Chicago's North Lawndale Accountability Commission, which found there were more than 8,000 in-school arrests in the city in 2003, with 10 percent involving children 12 and younger. The group said black children received the harshest treatment.

Many black boys grow resentful of a system that routinely dismisses their potential; they become alienated from scholastic activity and fail to graduate from high school. Lacking marketable skills, these men are resolutely shunned by employers. The Community Service Society of New York last year found that only 51.8 percent of black men ages 16 to 64 were employed in that city from 2000 to 2003.

What's more, the city's Commission on Human Rights recently released a survey that found white men with felony convictions have just as much chance of getting a job in New York City as black men with no criminal history.

Little is left for these men but a ruthless underground economy of drug commerce that takes its toll in lethal violence (homicide is a leading cause of death for young black men) or eventual incarceration. And this chronic process of mass incarceration has introduced a host of destabilizing elements into African-American culture.

This is a crisis that demands strenuous race-specific intervention. It may be fashionable to dismiss this demand as a relic of days past. But unless we act urgently, the real relic may be our hopes for a future of racial harmony.

Copyright © 2005, Chicago Tribune







Alessandra Stanley


August 8, 2005


He was not warm or cozily familiar. He was cool and even a little supercilious. If you invited Peter Jennings into your living room, he would be likely to raise an eyebrow at the stains on the coffee table. He was not America's best friend or kindly uncle. But in an era of chatty newscasters, jousting analysts and hyperactive commentators, he was a rare voice of civility.

That old-school formality is what will most be missing on the network news. On ABC, Mr. Jennings was a smooth, sophisticated anchor who could gracefully wing his way through the rawest breaking crises, from the Challenger explosion in 1986 to the Sept. 11 attacks. But so can many of the men and women who have been groomed to take his place someday.

What Mr. Jennings had that will be harder to replace was a worldliness that was rooted in his personality and also in his rich background of experience in the field.

Mr. Jennings, who died on Sunday, worked hard his entire life to overcome a flighty beginning: he never attended college, and got his start on Canadian television with the help of his father, a senior executive at the Canadian Broadcasting Corporation. Mr. Jennings became famous as the host of a dance show for teenagers and was only 26 when ABC News recruited him to be an anchor, more on the basis of his good looks and smooth delivery than anything else. He made up for it later, working as a correspondent in Vietnam, Beirut and Europe. His colleagues teased him about his dashing trench coats, but nobody looked better in Burberry or in black tie.

He took himself and the news seriously, so seriously that after the networks cut back on convention coverage in 2004, he insisted on anchoring those events gavel to gavel on ABC's tiny digital cable channel.

When bad things happened to the country, he was reassuringly calm and self-possessed, delivering live coverage of Sept. 11 without alarm or emotionalism. (And those few moments when he let some feeling show, choking a little and urging viewers to "call your children," brought home the gravity of the attack all the more poignantly.)

When bad things happened to him, he showed the same aplomb. When Mr. Jennings announced that he had to step down to be treated for advanced lung cancer in April, he shunned any hint of self-pity, thanking viewers for their support in the most reticent way possible.

"I will continue to do the broadcast; on good days my voice will not always be like this," he said, straining to sound jaunty. "Certainly, it's been a long time. And I hope it goes without saying that a journalist who doesn't value - deeply - the audience's loyalty should be in another line of work."

Mr. Jennings was not the last of the great white male news presenters, though it might have seemed that way after Tom Brokaw retired from NBC, Dan Rather resigned from CBS and CBS's chairman, Les Moonves, declared that the era of Voice of God anchors was over.

Brian Williams on NBC is as natty, self-possessed and buttoned-down as Mr. Brokaw and Mr. Jennings combined. Charles Gibson, who stepped in most often to replace Mr. Jennings when he began cancer treatment, proved a comfortingly familiar, competent face. For now at least, Bob Schieffer at CBS has introduced a no-nonsense note of the elder statesman after the nightly roller-coaster ride that was Dan Rather.

All of them remain in the classic anchor mold, but not one of them has the hauteur and dignity that Mr. Jennings brought to the news. Network newscasts have lost much of their audience and authority, but throughout all the setbacks, erosions and even his own fatal illness, he never lost his uncommon touch.

© New York Times



Frank Rich

August 14, 2005

Like the Japanese soldier marooned on an island for years after V-J Day, President Bush may be the last person in the country to learn that for Americans, if not Iraqis, the war in Iraq is over. "We will stay the course," he insistently tells us from his Texas ranch. What do you mean we, white man?

A president can't stay the course when his own citizens (let alone his own allies) won't stay with him. The approval rate for Mr. Bush's handling of Iraq plunged to 34 percent in last weekend's Newsweek poll - a match for the 32 percent that approved L.B.J.'s handling of Vietnam in early March 1968. (The two presidents' overall approval ratings have also converged: 41 percent for Johnson then, 42 percent for Bush now.) On March 31, 1968, as L.B.J.'s ratings plummeted further, he announced he wouldn't seek re-election, commencing our long extrication from that quagmire.

But our current Texas president has even outdone his predecessor; Mr. Bush has lost not only the country but also his army. Neither bonuses nor fudged standards nor the faking of high school diplomas has solved the recruitment shortfall. Now Jake Tapper of ABC News reports that the armed forces are so eager for bodies they will flout "don't ask, don't tell" and hang on to gay soldiers who tell, even if they tell the press.

The president's cable cadre is in disarray as well. At Fox News Bill O'Reilly is trashing Donald Rumsfeld for his incompetence, and Ann Coulter is chiding Mr. O'Reilly for being a defeatist. In an emblematic gesture akin to waving a white flag, Robert Novak walked off a CNN set and possibly out of a job rather than answer questions about his role in smearing the man who helped expose the administration's prewar inflation of Saddam W.M.D.'s. (On this sinking ship, it's hard to know which rat to root for.)

As if the right-wing pundit crackup isn't unsettling enough, Mr. Bush's top war strategists, starting with Mr. Rumsfeld and Gen. Richard Myers, have of late tried to rebrand the war in Iraq as what the defense secretary calls "a global struggle against violent extremism." A struggle is what you have with your landlord. When the war's über-managers start using euphemisms for a conflict this lethal, it's a clear sign that the battle to keep the Iraq war afloat with the American public is lost.

That battle crashed past the tipping point this month in Ohio. There's historical symmetry in that. It was in Cincinnati on Oct. 7, 2002, that Mr. Bush gave the fateful address that sped Congressional ratification of the war just days later. The speech was a miasma of self-delusion, half-truths and hype. The president said that "we know that Iraq and Al Qaeda have had high-level contacts that go back a decade," an exaggeration based on evidence that the Senate Intelligence Committee would later find far from conclusive. He said that Saddam "could have a nuclear weapon in less than a year" were he able to secure "an amount of highly enriched uranium a little larger than a single softball." Our own National Intelligence Estimate of Oct. 1 quoted State Department findings that claims of Iraqi pursuit of uranium in Africa were "highly dubious."

It was on these false premises - that Iraq was both a collaborator on 9/11 and about to inflict mushroom clouds on America - that honorable and brave young Americans were sent off to fight. Among them were the 19 marine reservists from a single suburban Cleveland battalion slaughtered in just three days at the start of this month. As they perished, another Ohio marine reservist who had served in Iraq came close to winning a Congressional election in southern Ohio. Paul Hackett, a Democrat who called the president a "chicken hawk," received 48 percent of the vote in exactly the kind of bedrock conservative Ohio district that decided the 2004 election for Mr. Bush.

These are the tea leaves that all Republicans, not just Chuck Hagel, are reading now. Newt Gingrich called the Hackett near-victory "a wake-up call." The resolutely pro-war New York Post editorial page begged Mr. Bush (to no avail) to "show some leadership" by showing up in Ohio to salute the fallen and their families. A Bush loyalist, Senator George Allen of Virginia, instructed the president to meet with Cindy Sheehan, the mother camping out in Crawford, as "a matter of courtesy and decency." Or, to translate his Washingtonese, as a matter of politics. Only someone as adrift from reality as Mr. Bush would need to be told that a vacationing president can't win a standoff with a grief-stricken parent commandeering TV cameras and the blogosphere 24/7.

Such political imperatives are rapidly bringing about the war's end. That's inevitable for a war of choice, not necessity, that was conceived in politics from the start. Iraq was a Bush administration idée fixe before there was a 9/11. Within hours of that horrible trauma, according to Richard Clarke's "Against All Enemies," Mr. Rumsfeld was proposing Iraq as a battlefield, not because the enemy that attacked America was there, but because it offered "better targets" than the shadowy terrorist redoubts of Afghanistan. It was easier to take out Saddam - and burnish Mr. Bush's credentials as a slam-dunk "war president," suitable for a "Top Gun" victory jig - than to shut down Al Qaeda and smoke out its leader "dead or alive."

But just as politics are a bad motive for choosing a war, so they can be a doomed engine for running a war. In an interview with Tim Russert early last year, Mr. Bush said, "The thing about the Vietnam War that troubles me, as I look back, was it was a political war," adding that the "essential" lesson he learned from Vietnam was to not have "politicians making military decisions." But by then Mr. Bush had disastrously ignored that very lesson; he had let Mr. Rumsfeld publicly rebuke the Army's chief of staff, Eric Shinseki, after the general dared tell the truth: that several hundred thousand troops would be required to secure Iraq. To this day it's our failure to provide that security that has turned the country into the terrorist haven it hadn't been before 9/11 - "the central front in the war on terror," as Mr. Bush keeps reminding us, as if that might make us forget he's the one who recklessly created it.

The endgame for American involvement in Iraq will be of a piece with the rest of this sorry history. "It makes no sense for the commander in chief to put out a timetable" for withdrawal, Mr. Bush declared on the same day that 14 of those Ohio troops were killed by a roadside bomb in Haditha. But even as he spoke, the war's actual commander, Gen. George Casey, had already publicly set a timetable for "some fairly substantial reductions" to start next spring. Officially this calendar is tied to the next round of Iraqi elections, but it's quite another election this administration has in mind. The priority now is less to save Jessica Lynch (or Iraqi democracy) than to save Rick Santorum and every other endangered Republican facing voters in November 2006.

Nothing that happens on the ground in Iraq can turn around the fate of this war in America: not a shotgun constitution rushed to meet an arbitrary deadline, not another Iraqi election, not higher terrorist body counts, not another battle for Falluja (where insurgents may again regroup, The Los Angeles Times reported last week). A citizenry that was asked to accept tax cuts, not sacrifice, at the war's inception is hardly in the mood to start sacrificing now. There will be neither the volunteers nor the money required to field the wholesale additional American troops that might bolster the security situation in Iraq.

What lies ahead now in Iraq instead is not victory, which Mr. Bush has never clearly defined anyway, but an exit (or triage) strategy that may echo Johnson's March 1968 plan for retreat from Vietnam: some kind of negotiations (in this case, with Sunni elements of the insurgency), followed by more inflated claims about the readiness of the local troops-in-training, whom we'll then throw to the wolves. Such an outcome may lead to even greater disaster, but this administration long ago squandered the credibility needed to make the difficult case that more human and financial resources might prevent Iraq from continuing its descent into civil war and its devolution into jihad central.

Thus the president's claim on Thursday that "no decision has been made yet" about withdrawing troops from Iraq can be taken exactly as seriously as the vice president's preceding fantasy that the insurgency is in its "last throes." The country has already made the decision for Mr. Bush. We're outta there. Now comes the hard task of identifying the leaders who can pick up the pieces of the fiasco that has made us more vulnerable, not less, to the terrorists who struck us four years ago next month.

© New York Times



Dexter Filkins

BAGHDAD, Iraq — Inside the heavily fortified Green Zone, a group of prominent Iraqis has struggled for weeks to complete the country's new constitution, haggling over the precise meaning of words like "Islam," "federalism" and "nation."

Out on the streets, meanwhile, a new bit of Arabic slang has slipped into the chatter of ordinary Iraqis: "allas," a word that denotes an Iraqi who leads a group of killers to their victim, usually for a price. The allas typically points out the Shiites living in predominantly Sunni neighborhoods for the gunmen who are hunting them. He usually wears a mask.

"The allas is from the neighborhood, and he had a mask on," said Haider Mohammed, a Shiite, whose relative was murdered recently by a group of Sunni gunmen. "He pointed out my uncle, and they killed him."

The uncle, Hussein Khalil, was found in a garbage dump 100 yards from the spot where his Daewoo sedan had been run off the road. Two bullets had entered the back of Mr. Khalil's skull and exited through his face.

Around the same time, someone found some leaflets, drawn up by a group called the Liberation Army. "We are cleansing the area of dirty Shia," the leaflet declared.

The rise of the allas (pronounced ah-LAS) stands as a grim reminder of how little can be reasonably expected from the Iraqi constitution, no matter how beautiful its language or humane its intent.

In 28 months of war and occupation here, Iraq has always contained two parallel worlds: the world of the Green Zone and the constitution and the rule of law; and the anarchical, unpredictable world outside.

Never have the two worlds seemed so far apart.

From the beginning, the hope here has been that the Iraq outside the Green Zone would grow to resemble the safe and tidy world inside it; that the success of democracy would begin to drain away the anger that pushes the insurgency forward. This may have been what Secretary of State Condoleezza Rice was referring to when, in an interview published in Time magazine this month, she said that the insurgency was "losing steam" and that "rather quiet political progress" was transforming the country.

But in this third summer of war, the American project in Iraq has never seemed so wilted and sapped of life. It's not just the guerrillas, who are churning away at their relentless pace, attacking American forces about 65 times a day. It is most everything else, too.

Baghdad seems a city transported from the Middle Ages: a scattering of high-walled fortresses, each protected by a group of armed men. The area between the forts is a lawless no man's land, menaced by bandits and brigands. With the daytime temperatures here hovering at around 115 degrees, the electricity in much of the city flows for only about four hours a day.

With armed guards in tow, I drove across the no man's land the other day to pay a visit to Ahmad Chalabi, the deputy prime minister. Unlike many senior Iraqi officials, who have long since retreated into the Green Zone, Mr. Chalabi still lives in a private home. To get there, you must pass through a series of checkpoints at the outskirts of his neighborhood, manned by guards and crisscrossed by concrete chicanes. At the entrance to Mr. Chalabi's street, there is another checkpoint, made of concrete and barbed wire, and more armed guards. Then, in front of Mr. Chalabi's house, stands yet another blast wall. When Mr. Chalabi walks into his front yard, even inside his own compound, a dozen armed guards surround him.

Inside his house, Mr. Chalabi described one of his most recent efforts, to help broker a cease-fire in the city of Tal Afar, 200 miles to the north.

"I had all the sheiks here with me," he said.

On my way home, I noticed that a car was following me. Three times, the mysterious car accelerated to get close. Two men inside: a young man, maybe in his 30's, and a bald man behind the wheel. As the car drew close, my chase car - a second vehicle, filled with armed guards, deployed to follow my own - cut the men off in traffic. I sped away.

Americans, here and in the United States, wait for the day when the Iraqi police and army will shoulder the burden and let them go home.

One night last month, according to the locals, the Iraqi police and army surrounded the Sunni neighborhood of Sababkar in north Baghdad, and pulled 11 young men from their beds.

Their bodies were found the next day with bullet holes in their temples. The cheeks of some of the men had been punctured by electric drills. One man had been burned by acid. The police denied that they had been involved.

"This isn't the first time this sort of thing has happened," Adnan al-Dulami, a Sunni leader, said.

For much of last year, the soldiers of the First Cavalry Division oversaw a project to restore the river-front park on the east bank of the Tigris River. Under American eyes, the Iraqis planted sod, installed a sprinkler system and put up swing sets for the Iraqi children. It cost $1.5 million. The Tigris River Park was part of a vision of the unit's commander, Maj. Gen. Peter W. Chiarelli, to win the war by putting Iraqis to work.

General Chiarelli left Iraq this year, and the American unit that took over had other priorities. The sod is mostly dead now, and the sidewalks are covered in broken glass. The sprinkler heads have been stolen. The northern half of the park is sealed off by barbed wire and blast walls; Iraqis are told to stay back, lest they be shot by American snipers on the roof of a nearby hotel.

Zalmay Khalilzad, the new American ambassador here, has publicly prodded the Iraqis to finish the constitution by Aug. 15, the date they set for themselves. On several occasions, Mr. Khalilzad has described the Iraqi constitution as a national compact, a document symbolizing the consensus of the nation.

And there's the rub. When the Americans smashed Saddam Hussein's regime two and a half years ago, what lay revealed was a country with no agreement on the most basic questions of national identity. The Sunnis, a minority in charge here for five centuries, have not, for the most part, accepted that they will no longer control the country. The Shiites, the long-suppressed majority, want to set up a theocracy. The Kurds don't want to be part of Iraq at all. There is only so much that language can do to paper over such differences.

Last week, one of the country's largest Shiite political parties held a ceremony to commemorate the death of Ayatollah Muhammad Bakr al-Hakim, a moderate Shiite cleric who was assassinated by a huge car bomb two years ago. The rally was held in the Tigris River villa once occupied by Tariq Aziz, one of Mr. Hussein's senior henchmen. Nowadays, the house is controlled by the Supreme Council for the Islamic Revolution in Iraq, one of the dominant parties in the Shiite coalition that heads the Iraqi government.

Inside a tent where the ceremony unfolded, a large poster depicted three men: Mr. Hakim, the dead ayatollah; Grand Ayatollah Ali al-Sistani, the nation's most revered Shiite leader; and Abdul-Aziz al-Hakim, the late ayatollah's brother and, as the head of the Supreme Council, perhaps the country's most powerful political leader. The portraits stood as a kind of trinity, symbolizing the fusion of Islam and politics.

Outside the tent, a third member of the Hakim family stood in a receiving line. Amar al-Hakim, Abdul-Aziz's son and heir to the family dynasty, seemed in an upbeat mood. Like most Shiite political leaders here, Mr. Hakim seemed untroubled by the disputes in the constitution.

"We can all get along," Mr. Hakim said, smiling, "but I don't think we have to give anything up."

Throughout the ceremony, Mr. Hakim's compound was guarded by members of the Badr Brigade, the party's black-booted Iranian-trained militia. When the Americans were in charge here, they leaned hard on Mr. Hakim to disband it. But in one of his first official acts, Mr. Hakim publicly legalized his own private army.

With all the hubbub at Mr. Hakim's house, it was easy to miss what was going on in the house next door. Jalal Talabani, the Iraqi president and Kurdish leader, was getting ready to hold a dinner for the country's senior political leaders, Mr. Hakim included, to break the logjam over the constitution. Mr. Talabani's house, too, was guarded by a militia, but a different one from Mr. Hakim's. Here, it was the pesh merga who stood by with their guns, loyal only to Mr. Talabani.

The pesh merga fighters, milling about outside Mr. Talabani's villa and smoking cigarettes, said they had come all the way from the mountains of Kurdistan to protect their boss. None of them spoke a word of Arabic. To them, Baghdad was a foreign land.

Amid such bleakness, it is a wonder that anyone comes forward at all. Yet still the Iraqis do, even at the threat of death. One of them is Fakhri al-Qaisi, a dentist and Sunni member of the committee charged with drafting the constitution. Dr. Qaisi knows people close to the Sunni insurgency and, as such, has come under suspicion by the Americans and the Shiite-dominated government.

By Dr. Qaisi's count, the Americans have raided his home 17 times, once driving a tank into his dental office. Members of the Badr Brigade, the Shiite militia, recently killed his brother-in-law, Dr. Qaisi said, and appear to be aiming at him too. Now, because he has joined the constitutional committee, he has begun receiving death threats from Sunni insurgents as well.

"Everyone wants to kill me!" Dr. Qaisi said with a laugh, seated in a Green Zone lounge during a break from constitution drafting. "The Americans want to kill me, the Shiites want to kill me, the Kurds want to kill me and even the insurgents."

"Every night, a different car passes by my house," he said.

To protect himself, Dr. Qaisi has taken to spending nights in his car, though he allows that he sometimes stops by his home during the day to visit one of his three wives.

For all his problems, and all the problems facing Iraq, Dr. Qaisi expressed a firm belief that national reconciliation in Iraq was still possible, if leaders like himself could show the strength to give a little.

In this regard, as in so many others here, it's impossible to know. In the middle of a conversation, Dr. Qaisi stopped talking, recognizing that at the table next to him was Abu Hassan al-Amiri, the leader of the Badr Brigade. That's the organization that Dr. Qaisi believes killed his brother-in-law, and the same group, he believes, that would like to kill him now.

Dr. Qaisi rose from his seat, and so did Mr. Amiri.

"It's so nice to see you," Dr. Qaisi said. "We should get together."

The two men embraced, and kissed each other's cheeks.

"Yes," Mr. Amiri said, his arms wrapped around Dr. Qaisi. "We really should."

© New York Times



Ethan Bronner

For those who long considered it folly to settle a handful of Jews among hundreds of thousands of Palestinians in the Gaza Strip, the decision to remove them starting this week seems an acceptance of the obvious. What possible future could the settlers have had? How could their presence have done the state of Israel any good?

But for those, like Prime Minister Ariel Sharon, who created and nurtured the settlements, the move to dismantle them is something very different. It is an admission not of error but of failure. Their cherished goal - the resettlement of the full biblical land of Israel by contemporary Jews - is not to be. The reason: not enough of them came.

"We have had to come to terms with certain unanticipated realities," acknowledged Arye Mekel, Israeli consul general in New York. "Ideologically, we are disappointed. A pure Zionist must be disappointed because Zionism meant the Jews of the world would take their baggage and move to Israel. Most did not."

David Kimche, who was director general of Israel's foreign ministry in the 1980's, noted: "The old Zionist nationalists' anthem was a state on 'the two banks of the River Jordan.' When that became impractical, we talked about 'greater Israel,' from the Jordan to the sea. But people now realize that this, too, is something we won't be able to achieve."

The failure has two main sources. First, contrary to the expectations of the early Zionists, as Ambassador Mekel noted, most of the world's Jews have not joined their brethren to live in Israel. Of the world's 13 million to 14 million Jews, a minority - 5.26 million - make their home in Israel, and immigration has largely dried up. Last year, a record low 21,000 Jews immigrated to Israel.

Of course, Israel is a remarkably successful state, a democracy with a high standard of living and many proud accomplishments. Yet the misery that Zionists expected Jews elsewhere to suffer has not materialized. More than half a century after the establishment of the Jewish state, more Jews live in the United States than in Israel.

The second explanation for the shift in settlement policy is that the Palestinian population has grown far more rapidly - and Palestinians have proved far more willing to fight - than many on the Israeli right had anticipated. On Thursday, the newspaper Haaretz reported that the proportion of Jews in the combined population of Israel, the West Bank and Gaza had dropped below 50 percent for the first time. This means, many Israelis argue, that unless they yield territory, they will have to choose a Jewish state or a democratic one; they will not be able to have both.

While all acknowledge that Jewish immigration never achieved anticipated levels and that the Palestinian population has ballooned, the question of the role played by Palestinian violence in Mr. Sharon's decision to disengage is hotly contested. Some argue that the two Palestinian intifadas, or uprisings, from 1987 to 1993 and from 2000 to the present, drove Israel out. Others say that Israel's increasingly effective counterterror measures - the building of a barrier, killings of terror leaders and military reoccupation of selective Palestinian cities - broke the back of the insurgents, allowing Israel the sense of strength to walk away. In fact, both factors seem likely to have played a role.

"Of course terror has a role in the disengagement," said Michael Oren, a senior fellow at the Shalem Institute, a conservative Jerusalem research group. "It convinced us that Gaza was not worth holding onto and awakened us to the demographic danger. It took two intifadas for a majority of Israelis to decide that Gaza is not worth it."

A senior Israeli official who spent years closely associated with Likud leaders, speaking on condition of anonymity because of the sensitivity of the topic, said that Israelis long had little respect for Palestinians as fighters, but that had changed.

"The fact that hundreds of them are willing to blow themselves up is significant," he said. "We didn't give them any credit before. In spite of our being the strongest military power in the Middle East, we lost 1,200 people over the last four years. It finally sank in to Sharon and the rest of the leadership that these people were not giving up."

Some came to a similar conclusion much earlier. The Israeli left has been calling for a withdrawal from Gaza for years, and even many on the right believed settlement there to be futile and counterproductive. Mr. Kimche, the former foreign ministry official, recalled that when Prime Minister Yitzhak Shamir of the conservative Likud party was running against Yitzhak Rabin of Labor in the early 1990's, several Shamir advisers told him: "Unless you withdraw from Gaza, you're going to lose these elections." He did not withdraw; he lost.

Mr. Rabin himself said that he decided to negotiate a withdrawal with the Palestinians when he realized how unpopular military service in Gaza had become.

"He said privately - I heard him say it - that military reservists don't want to serve in the occupied territories and while they are not formally refusing they are finding excuses to stay away," Yoel Esteron, managing editor of Yediot Aharonot, recalled. "That put a real burden on the army and it meant we couldn't stay there forever."

With Gaza soon no longer in their hands, Israelis will face a much more complex set of decisions regarding the occupied West Bank. Settlements in distant corners of the West Bank are also being dismantled in the coming weeks, but no one knows how much more land Mr. Sharon and his successors will be willing to yield. What is clear, however, is that the internal Israeli logic of what is taking place this week - a scaling back of ambition in the face of reality - could lead to traumatic withdrawals of larger numbers of people on the West Bank.

As Mr. Sharon said in an interview with Yediot published on Friday, when asked about other isolated settlements, "Not everything will remain."

© New York Times



By Ronald Kotulak
Tribune science reporter

September 9, 2005

Could you use more brainpower?

Nature apparently thinks you can, according to two University of Chicago studies providing the first scientific evidence that the human brain is still evolving, a process that may ultimately increase people's capacity to grow smarter.

Two key brain-building genes, which underwent dramatic changes in the past that coincided with huge leaps in human intellectual development, are still undergoing rapid mutations, evolution's way of selecting for new beneficial traits, Bruce Lahn and his U. of C. colleagues reported in Friday's issue of the journal Science.

The researchers found that not everyone carries these new gene variants but that evolutionary pressures are causing them to increase in the population at an unprecedented rate. Lahn's group is also trying to determine just how smart these variants may have made humans.

One of the mutated genes, called microcephalin, began its swift spread among human ancestors about 37,000 years ago, a period marked by a creative explosion in music, art, religious expression and tool-making.

The other gene, ASPM (abnormal spindle-like microcephaly-associated), arose only about 5,800 years ago, right around the time writing and the first civilizations emerged in Mesopotamia.

"People have this sense that as 21st century humans we've gotten as high as we're going to go," said Greg Wray, director of Duke University's center for evolutionary genomics. "But we're not played out as a species. We're still evolving and these studies are a pretty good example of that."

Just as major environmental changes in the past, such as dramatic shifts in the climate, food supply or geography, favored the selection of genetic traits that increased survival skills, the pressures on gene selection today come from an increasingly complex and technologically oriented society, said Lahn, a professor of human genetics and a Howard Hughes Medical Institute investigator.

"Our studies indicate that the trend that is the defining characteristic of human evolution--the growth of brain size and complexity--is likely still going on," he said.

"Meanwhile, our environment and the skills we need to survive in it are changing faster then we ever imagined. I would expect the human brain, which has done well by us so far, will continue to adapt to those changes."

Evolutionary changes occur when a member of a species experiences a mutation in a gene that gives him a new skill, like running faster, seeing farther or thinking better. The genetic mutation increases his likelihood of survival and having more children, thereby allowing the new mutation to spread quickly through the population.

That's what happened to the microcephalin mutation, which now occurs in 70 percent of all people, and the ASPM gene mutation, which so far has spread to 30 percent of all people.

Other experts called the U. of C. studies stunning but said that while the two genes appear to make people smarter by helping to engineer bigger brains, there are many more genes involved in brain building and human intelligence and cognition.

"It's very exciting but it's really just the beginning of a whole new phase of research," Wray said. "These aren't going to be the only genes and these aren't going to be the only changes. We don't even really know exactly what these changes mean, but it's a glimpse into the future of our understanding of how the human brain came to be and function the way it does."

Probing the genes of intelligence has been controversial in the past and is likely to be so now because of fears that the knowledge could be misused to grade people's intelligence based on their genes.

But intelligence is a complex issue that is greatly influenced not only by the genes people inherit, but also by their early learning experiences.

Researchers have learned over the last two decades that genes and the environment work together--genes provide for a range of possible outcomes and the environment determines which specific outcome is likely to occur.

Most of the brain, for instance, gets built after birth when learning experiences determine the way in which brain cells connect to each other. How a brain gets wired directly affects its computing power.

"There are genetic differences that make each of us unique," Wray said. "But there's no way for you to look at a single gene and say `OK, you've got this mutation, you're smarter than someone else.' Maybe at some point we will know that but not with these genes."

Ever since the human line diverged from other primates, between 6 million and 8 million years ago, the human brain has grown steadily bigger as a result of selective genetic mutations. Chimps, our closest primate relatives, on the other hand, stayed pretty much the same.

Some 200,000 years ago, the anatomically modern human emerged with a brain three times the size of a chimp's. As humans got smarter, Lahn said, selection pressure for smartness became intensified.

The microcephalin and ASPM genes played a big role in expanding the size of the brain. People born with defects in these genes develop brains that look normal but are only one-third the size of a full-grown human brain. As a result, their mental capacity is sharply reduced and they cannot live on their own.

To show that brain evolution is an ongoing process, Lahn's team studied the genes of more than 1,000 people representing 59 ethnic populations worldwide. Their genes were compared with those of the chimpanzee to provide a historical marker as to what the genes looked like before they diverged.

Both the microcephalin and ASPM genes come in a number of different varieties. They all do the important job of building the brain but with slightly different variations that occur among specific population groups. At this point scientists are trying to understand what extra benefits seem to be conferred by the variations.

The new variations in the microcephalin and ASPM genes occurred at a frequency far higher than would be expected by chance, indicating that natural selection was driving their spread in the population.

The U. of C. researchers found that one variety of the ASPM gene identified as haplogroup D occurs more frequently in Europeans and surrounding populations, including North Africans, Middle Easterners and South Asians. A specific variety of the microcephalin gene, also called haplogroup D, was most abundant in populations outside of sub-Saharan Africa.

"What we're seeing is that there is genetic variation in the human population that selection cares about," Wray said. "It means that evolution is still happening."

Copyright © 2005, Chicago Tribune



By David Mamet

September 16, 2005

One needs to know but three words to play poker: call, raise or fold.

Fold means keep the money, I'm out of the hand; call means to match your opponents' bet. That leaves raise, which is the only way to win at poker. The raiser puts his opponent on the defensive, seizing the initiative. Initiative is only important if one wants to win.

The military axiom is "he who imposes the terms of the battle imposes the terms of the peace." The gambling equivalent is: "Don't call unless you could raise"; that is, to merely match one's opponent's bet is effective only if it makes the opponent question the caller's motives. And that can only occur if the caller has acted aggressively enough in the past to cause his opponents to wonder if the mere call is a ruse de guerre.

If you are branded as passive, the table will roll right over you — your opponents will steal antes without fear. Why? Because the addicted caller has never exhibited what, in the wider world, is known as courage.

In poker, one must have courage: the courage to bet, to back one's convictions, one's intuitions, one's understanding. There can be no victory without courage. The successful player must be willing to wager on likelihoods. Should he wait for absolutely risk-free certainty, he will win nothing, regardless of the cards he is dealt.

For example, take a player who has never acted with initiative — he has never raised, merely called. Now, at the end of the evening, he is dealt a royal flush. The hand, per se, is unbeatable, but the passive player has never acted aggressively; his current bet (on the sure thing) will signal to the other players that his hand is unbeatable, and they will fold.

His patient, passive quest for certainty has won nothing.

The Democrats, similarly, in their quest for a strategy that would alienate no voters, have given away the store, and they have given away the country.

Committed Democrats watched while Al Gore frittered away the sure-thing election of 2000. They watched, passively, while the Bush administration concocted a phony war; they, in the main, voted for the war knowing it was purposeless, out of fear of being thought weak. They then ran a candidate who refused to stand up to accusations of lack of patriotism.

The Republicans, like the perpetual raiser at the poker table, became increasingly bold as the Democrats signaled their absolute reluctance to seize the initiative.

John Kerry lost the 2004 election combating an indictment of his Vietnam War record. A decorated war hero muddled himself in merely "calling" the attacks of a man with, curiously, a vanishing record of military attendance. Even if the Democrats and Kerry had prevailed (that is, succeeded in nullifying the Republicans' arguably absurd accusations), they would have been back only where they started before the accusations began.

Control of the initiative is control of the battle. In the alley, at the poker table or in politics. One must raise. The American public chose Bush over Kerry in 2004. How, the undecided electorate rightly wondered, could one believe that Kerry would stand up for America when he could not stand up to Bush? A possible response to the Swift boat veterans would have been: "I served. He didn't. I didn't bring up the subject, but, if all George Bush has to show for his time in the Guard is a scrap of paper with some doodling on it, I say the man was a deserter."

This would have been a raise. Here the initiative has been seized, and the opponent must now fume and bluster and scream unfair. In combat, in politics, in poker, there is no certainty; there is only likelihood, and the likelihood is that aggression will prevail.

The press, quiescent during five years of aggressive behavior by the White House, has, perhaps, begun to recover its pride. In speaking of Karl Rove, Scott McClellan and the White House's Valerie Plame disgrace, they have begun to use words such as "other than true," "fabricated." The word that they circle, still, is "lie." The word the Democratic constituency, heartsick over the behavior of its party leaders, has been forced to consider applying to them is "coward."

One may sit at the poker table all night and never bet and still go home broke, having anted away one's stake.

The Democrats are anteing away their time at the table. They may be bold and risk defeat, or be passive and ensure it.

 © David Mamet 2005


By Bill McKibben
September 15, 2005, © Harpers

Only 40 percent of Americans can name more than four of the Ten Commandments, and a scant half can cite any of the four authors of the Gospels. Twelve percent believe Joan of Arc was Noah's wife. This failure to recall the specifics of our Christian heritage may be further evidence of our nation's educational decline, but it probably doesn't matter all that much in spiritual or political terms. Here is a statistic that does matter: Three quarters of Americans believe the Bible teaches that "God helps those who help themselves." That is, three out of four Americans believe that this über-American idea, a notion at the core of our current individualist politics and culture, which was in fact uttered by Ben Franklin, actually appears in Holy Scripture. The thing is, not only is Franklin's wisdom not biblical; it's counter-biblical. Few ideas could be further from the gospel message, with its radical summons to love of neighbor. On this essential matter, most Americans -- most American Christians -- are simply wrong, as if 75 percent of American scientists believed that Newton proved gravity causes apples to fly up.

Asking Christians what Christ taught isn't a trick. When we say we are a Christian nation -- and, overwhelmingly, we do -- it means something. People who go to church absorb lessons there and make real decisions based on those lessons; increasingly, these lessons inform their politics. (One poll found that 11 percent of U.S. churchgoers were urged by their clergy to vote in a particular way in the 2004 election, up from 6 percent in 2000.) When George Bush says that Jesus Christ is his favorite philosopher, he may or may not be sincere, but he is reflecting the sincere beliefs of the vast majority of Americans.

And therein is the paradox. America is simultaneously the most professedly Christian of the developed nations and the least Christian in its behavior. That paradox -- more important, perhaps, than the much-touted ability of French women to stay thin on a diet of chocolate and cheese -- illuminates the hollow at the core of our boastful, careening culture.

Ours is among the most spiritually homogeneous rich nations on earth. Depending on which poll you look at and how the question is asked, somewhere around 85 percent of us call ourselves Christian. Israel, by way of comparison, is 77 percent Jewish. It is true that a smaller number of Americans -- about 75 percent -- claim they actually pray to God on a daily basis, and only 33 percent say they manage to get to church every week. Still, even if that 85 percent overstates actual practice, it clearly represents aspiration. In fact, there is nothing else that unites more than four fifths of America. Every other statistic one can cite about American behavior is essentially also a measure of the behavior of professed Christians. That's what America is: a place saturated in Christian identity.

But is it Christian? This is not a matter of angels dancing on the heads of pins. Christ was pretty specific about what he had in mind for his followers. What if we chose some simple criterion -- say, giving aid to the poorest people -- as a reasonable proxy for Christian behavior? After all, in the days before his crucifixion, when Jesus summed up his message for his disciples, he said the way you could tell the righteous from the damned was by whether they'd fed the hungry, slaked the thirsty, clothed the naked, welcomed the stranger, and visited the prisoner. What would we find then?

In 2004, as a share of our economy, we ranked second to last, after Italy, among developed countries in government foreign aid. Per capita we each provide fifteen cents a day in official development assistance to poor countries. And it's not because we were giving to private charities for relief work instead. Such funding increases our average daily donation by just six pennies, to twenty-one cents. It's also not because Americans were too busy taking care of their own; nearly 18 percent of American children lived in poverty (compared with, say, 8 percent in Sweden). In fact, by pretty much any measure of caring for the least among us you want to propose -- childhood nutrition, infant mortality, access to preschool -- we come in nearly last among the rich nations, and often by a wide margin. The point is not just that (as everyone already knows) the American nation trails badly in all these categories; it's that the overwhelmingly Christian American nation trails badly in all these categories, categories to which Jesus paid particular attention. And it's not as if the numbers are getting better: the U.S. Department of Agriculture reported last year that the number of households that were "food insecure with hunger" had climbed more than 26 percent between 1999 and 2003.

This Christian nation also tends to make personal, as opposed to political, choices that the Bible would seem to frown upon. Despite the Sixth Commandment, we are, of course, the most violent rich nation on earth, with a murder rate four or five times that of our European peers. We have prison populations greater by a factor of six or seven than other rich nations (which at least should give us plenty of opportunity for visiting the prisoners). Having been told to turn the other cheek, we're the only Western democracy left that executes its citizens, mostly in those states where Christianity is theoretically strongest. Despite Jesus' strong declarations against divorce, our marriages break up at a rate -- just over half -- that compares poorly with the European Union's average of about four in ten. That average may be held down by the fact that Europeans marry less frequently, and by countries, like Italy, where divorce is difficult; still, compare our success with, say, that of the godless Dutch, whose divorce rate is just over 37 percent. Teenage pregnancy? We're at the top of the charts. Personal self-discipline -- like, say, keeping your weight under control? Buying on credit? Running government deficits? Do you need to ask?



September 14, 2005, © The New York Times

I hate spending time in hospitals and nursing homes. I find them to be some of the most depressing places on earth.

Maybe that's why the stories of the sick and elderly who died, 45 in a New Orleans hospital and 34 in St. Rita's nursing home in the devastated St. Bernard Parish outside New Orleans, haunt me so.

You're already vulnerable and alone when suddenly you're beset by nature and betrayed by your government.

At St. Rita's, 34 seniors fought to live with what little strength they had as the lights went out and the water rose over their legs, over their shoulders, over their mouths. As Gardiner Harris wrote in The Times, the failed defenses included a table nailed against a window and a couch pushed against a door.

Several electric wheelchairs were gathered near the front entrance, maybe by patients who dreamed of evacuating. Their drowned bodies were found swollen and unrecognizable a week later, as Mr. Harris reported, "draped over a wheelchair, wrapped in a shower curtain, lying on a floor in several inches of muck."

At Memorial Medical Center, victims also suffered in 100-degree heat and died, some while waiting to be rescued in the four days after Katrina hit.

As Louisiana's death toll spiked to 423 yesterday, the state charged St. Rita's owners with multiple counts of negligent homicide, accusing them of not responding to warnings about the hurricane. "In effect," State Attorney General Charles Foti Jr. said, "I think that their inactions resulted in the death of these people."

President Bush continued to try to spin his own inaction yesterday, but he may finally have reached a patch of reality beyond spin. Now he's the one drowning, unable to rescue himself by patting small black children on the head during photo-ops and making scripted attempts to appear engaged. He can keep going back down there, as he will again on Thursday when he gives a televised speech to the nation, but he can never compensate for his tragic inattention during days when so many lives could have been saved.

He made the ultimate sacrifice and admitted his administration had messed up, something he'd refused to do through all of the other screw-ups, from phantom W.M.D. and the torture at Abu Ghraib and Guantánamo to the miscalculations on the Iraq occupation and the insurgency, which will soon claim 2,000 young Americans.

How many places will be in shambles by the time the Bush crew leaves office?

Given that the Bush team has dealt with both gulf crises, Iraq and Katrina, with the same deadly mixture of arrogance and incompetence, and a refusal to face reality, it's frightening to think how it will handle the most demanding act of government domestic investment since the New Deal.

Even though we know W. likes to be in his bubble with his feather pillow, the stories this week are breathtaking about the lengths the White House staff had to go to in order to capture Incurious George's attention.

Newsweek reported that the reality of Katrina did not sink in for the president until days after the levees broke, turning New Orleans into a watery grave. It took a virtual intervention of his top aides to make W. watch the news about the worst natural disaster in a century. Dan Bartlett made a DVD of newscasts on the hurricane to show the president on Friday morning as he flew down to the Gulf Coast.





Among 19th-century thinkers it was an incontestable commonplace that religion's cultural centrality was a thing of the past. For Georg Hegel, following in the footsteps of the Enlightenment, religion had been surpassed by reason's superior conceptual precision. In The Essence of Christianity (1841), Ludwig Feuerbach depicted the relationship between man and divinity as a zero-sum game. In his view, the stress on godliness merely detracted from the sublimity of human ends. In one of his youthful writings, Karl Marx, Feuerbach's most influential disciple, famously dismissed religion as "the opium of the people." Its abolition, Marx believed, was a sine qua non for human betterment. Friedrich Nietzsche got to the heart of the matter by having his literary alter ego, the brooding prophet Zarathustra, brusquely declaim, "God is dead," thereby pithily summarizing what many educated Europeans were thinking but few had the courage actually to say. And who can forget Nietzsche's searing characterization of Christianity as a "slave morality," a plebeian belief system appropriate for timorous conformists but unsuited to the creation of a future race of domineering Übermenschen? True to character, the only representatives of Christianity Nietzsche saw fit to praise were those who could revel in a good auto-da-fé -- Inquisition stalwarts like Ignatius Loyola.

Twentieth-century characterizations of belief were hardly more generous. Here, one need look no further than the title of Freud's 1927 treatise on religion: The Future of an Illusion.

Today, however, there are omnipresent signs of a radical change in mentality. In recent years, in both the United States and the developing world, varieties of religious fundamentalism have had a major political impact. As Democratic presidential hopefuls Howard Dean and John Kerry learned the hard way, politicians who are perceived as faithless risk losing touch with broad strata of the electorate.

Are contemporary philosophers up to the challenge of explaining and conceptualizing these striking recent developments? After all, what Freud, faithfully reflecting the values of the scientific age, cursorily dismissed as illusory seems to have made an unexpected and assertive comeback -- one that shows few signs of abating anytime soon.

Jürgen Habermas may be the living philosopher most likely to succeed where angels, and their detractors, fear to tread. Following Jacques Derrida's death last October, it would seem that Habermas has justly inherited the title of the world's leading philosopher. Last year he won the prestigious Kyoto Prize for Arts and Philosophy (previous recipients include Karl Popper and Paul Ricoeur), capping an eventful career replete with honors as well as a number of high-profile public debates.

The centerpiece of Habermas's moral philosophy is "discourse ethics," which takes its inspiration from Immanuel Kant's categorical imperative. For Kant, to count as moral, actions must pass the test of universality: The actor must be able to will that anyone in a similar situation should act in the same way. According to Kant, lying and stealing are immoral insofar as they fall beneath the universalization threshold; only at the price of grave self-contradiction could one will that lying and stealing become universal laws. Certainly, we can envisage a number of exceptional situations where we could conceivably justify lying or stealing. In Kant's example, a man intent on murdering your loved one appears at your door, inquiring as to her whereabouts. Or what if you were too poor to purchase the medicine needed to save your spouse's life?

In the first case you might well think it would be permissible to lie; and in the second case, to steal. Yet on both counts Kant is immovable. An appeal to circumstances might well complicate our decision making. It might even elicit considerable public sympathy for otherwise objectionable conduct. But it can in no way render an immoral action moral. It is with good reason that Kant calls his imperative a categorical one, for an imperative that admits of exceptions is really no imperative at all.

Habermas's approach to moral philosophy is Kantian, although he takes exception to the solipsistic, egological framework Kant employs. Habermas believes that, in order to be convincing, moral reasoning needs a broader, public basis. Discourse ethics seeks to offset the limitations of the Kantian approach. For Habermas, the give and take of argumentation, as a learning process, is indispensable. Through communicative reason we strive for mutual understanding and learn to assume the standpoint of the other. Thereby we also come to appreciate the narrowness of our own individual perspective. Discourse ethics proposes that those actions are moral that could be justified in an open-ended and genuine public dialogue. Its formula suggests that "only those norms can claim to be valid that meet (or could meet) with the approval of all affected in their capacity as participants in a practical discourse."

Until recently Habermas was known as a resolutely secular thinker. On occasion his writings touched upon religious subjects or themes. But these confluences were exceptions that proved the rule.

Yet a few years ago the tonality of his work began to change ever so subtly. In fall 2001 Habermas was awarded the prestigious Peace Prize of the German Publishers and Booksellers Association. The title of his acceptance speech, "Faith and Knowledge," had a palpably theological ring. The remarks, delivered shortly after the September 11 terrorist attacks, stressed the importance of mutual toleration between secular and religious approaches to life.

Last year Habermas engaged in a high-profile public dialogue with Cardinal Joseph Ratzinger -- who, on April 19, was named as Pope John Paul II's successor -- at the cardinal's behest. A number of the philosopher's left-wing friends and followers were taken aback by his willingness to have a dialogue with one of Europe's most conservative prelates. In 2002 Habermas had published In Defense of Humanity, an impassioned critique of the risks of biological engineering and human cloning. It was this text in particular, in which the philosopher provided an eloquent defense of the right to a unique human identity -- a right that cloning clearly imperils -- that seems to have piqued the cardinal's curiosity and interest. Yet if one examines the trajectory of Habermas's intellectual development, the Ratzinger exchange seems relatively unexceptional.

Glance back at Habermas's philosophical chef d'oeuvre, the two-volume Theory of Communicative Action (1981), and you'll find that one of his key ideas is the "linguistification of the sacred" (Versprachlichung des Sakralen). By this admittedly cumbersome term, Habermas asserts that modern notions of equality and fairness are secular distillations of time-honored Judeo-Christian precepts. The "contract theory" of politics, from which our modern conception of "government by consent of the governed" derives, would be difficult to conceive apart from the Old Testament covenants. Similarly, our idea of the intrinsic worth of all persons, which underlies human rights, stems directly from the Christian ideal of the equality of all men and women in the eyes of God. Were these invaluable religious sources of morality and justice to atrophy entirely, it is doubtful whether modern societies would be able to sustain this ideal on their own.

In a recent interview Habermas aptly summarized those insights: "For the normative self-understanding of modernity, Christianity has functioned as more than just a precursor or a catalyst. Universalistic egalitarianism, from which sprang the ideals of freedom and a collective life in solidarity, the autonomous conduct of life and emancipation, the individual morality of conscience, human rights, and democracy, is the direct legacy of the Judaic ethic of justice and the Christian ethic of love."

Three years ago the MIT Press published Religion and Rationality: Essays on Reason, God, and Modernity, an illuminating collection of Habermas's writings on religious themes. Edited and introduced by the philosopher Eduardo Mendieta, of the State University of New York at Stony Brook, the anthology concludes with a fascinating interview in which the philosopher systematically clarifies his views on a variety of religious areas. (A companion volume, The Frankfurt School on Religion: Key Writings by the Major Thinkers, also edited by Mendieta, was published in 2004 by Routledge.)

On the one hand, religion's return -- Habermas, perhaps with the American situation foremost in mind, goes so far as to speak of the emergence of "post-secular societies" -- presents us with undeniable dangers and risks. While theodicy has traditionally provided men and women with consolation for the harsh injustices of fate, it has also frequently taught them to remain passively content with their lot. It devalues worldly success and entices believers with the promise of eternal bliss in the hereafter. Here the risk is that religion may encourage an attitude of social passivity, thereby contravening democracy's need for an active and engaged citizenry. Moreover, the biblical myth of the fall perceives secular history as a story of decline or perdition from which little intrinsic good may emerge.

On the other hand, laissez-faire's success as a universally revered economic model means that, today, global capitalism's triumphal march encounters few genuine oppositional tendencies. In that regard, religion, as a repository of transcendence, has an important role to play. It prevents the denizens of modern secular societies from being overwhelmed by the all-encompassing demands of vocational life and worldly success. It offers a much-needed dimension of otherness: The religious values of love, community, and godliness help to offset the global dominance of competitiveness, acquisitiveness, and manipulation that predominate in the vocational sphere. Religious convictions encourage people to treat each other as ends in themselves rather than as mere means.

One of Habermas's mentors, the Frankfurt School philosopher Max Horkheimer, once observed that "to salvage an unconditional meaning" -- one that stood out as an unqualified Good -- "without God is a futile undertaking." As a stalwart of the Enlightenment, Habermas himself would be unlikely to go that far. But he might consider Horkheimer's adage a timely reminder of the risks and temptations of all-embracing secularism. Habermas stressed in a recent public lecture "the force of religious traditions to articulate moral intuitions with regard to communal forms of a dignified human life." As forceful and persuasive as our secular philosophical precepts might be -- the idea of human rights, for example -- from time to time they benefit from renewed contact with the nimbus of their sacral origins.

Last April Habermas presented a more systematic perspective on religion's role in contemporary society at an international conference on "Philosophy and Religion" at Poland's Lodz University. One of the novelties of Habermas's Lodz presentation, "Religion in the Public Sphere," was the commendable idea that "toleration" -- the bedrock of modern democratic culture -- is always a two-way street. Not only must believers tolerate others' beliefs, including the credos and convictions of nonbelievers; similarly, it falls to disbelieving secularists to appreciate the convictions of religiously motivated fellow citizens. From the standpoint of Habermas's "theory of communicative action," this stipulation suggests that we assume the standpoint of the other. It would be unrealistic and prejudicial to expect that religiously oriented citizens wholly abandon their most deeply held convictions upon entering the public sphere where, as a rule and justifiably, secular reasoning has become our default discursive mode. If we think back, for instance, to the religious idealism that infused the civil-rights movement of the 1950s and 1960s, we find an admirable example of the way in which a biblical sense of justice can be fruitfully brought to bear on contemporary social problems.

The philosopher who addressed these issues most directly and fruitfully in recent years was John Rawls. In a spirit of collegial solidarity, Habermas, in his Lodz paper, made ample allusion to Rawlsian ideals. Perhaps Rawls's most important gloss on religion's role in modern politics is his caveat or "proviso" that, to gain a reasonable chance of public acceptance, religious reasons must ultimately be capable of being translated into secular forms of argumentation. In the case of public officials -- politicians and the judiciary, for example -- Rawls raises the secular bar still higher. He believes that, in their political language, there is little room for an open and direct appeal to nonsecular reasons, which, in light of the manifest diversity of religious beliefs, would prove extremely divisive. As Habermas affirms, echoing Rawls: "This stringent demand can only be laid at the door of politicians, who within state institutions are subject to the obligation to remain neutral in the face of competing worldviews." While that stringent demand binds only politicians, Habermas argues, "every citizen must know that only secular reasons count beyond the institutional threshold that divides the informal public sphere from parliaments, courts, ministries, and administrations."

With his broad-minded acknowledgment of religion's special niche in the spectrum of public political debate, Habermas has made an indispensable stride toward defining an ethos of multicultural tolerance. Without such a perspective, prospects for equitable global democracy would seem exceedingly dim. The criterion for religious belief systems that wish to have their moral recommendations felt and acknowledged is the capacity to take the standpoint of the other. Only those religions that retain the capacity to bracket or suspend the temptations of theological narcissism -- the conviction that my religion alone provides the path to salvation -- are suitable players in our rapidly changing, post-secular moral and political universe.

Richard Wolin is a professor of history, comparative literature, and political science at the Graduate Center of the City University of New York. His books include The Seduction of Unreason: The Intellectual Romance With Fascism From Nietzsche to Postmodernism (Princeton University Press, 2004).



by Lowell Monke

Thomas Edison was a great inventor but a lousy prognosticator. When he proclaimed in 1922 that the motion picture would replace textbooks in schools, he began a long string of spectacularly wrong predictions regarding the capacity of various technologies to revolutionize teaching. To date, none of them—from film to television—has lived up to the hype. Most were quickly relegated to the audiovisual closet. Even the computer, which is now a standard feature of most classrooms, has not been able to show a consistent record of improving education.

"There have been no advances over the past decade that can be confidently attributed to broader access to computers," said Stanford University professor of education Larry Cuban in 2001, summarizing the existing research on educational computing. "The link between test-score improvements and computer availability and use is even more contested." Part of the problem, Cuban pointed out, is that many computers simply go unused in the classroom. But more recent research, including a University of Munich study of 174,000 students in thirty-one countries, indicates that students who frequently use computers perform worse academically than those who use them rarely or not at all.

Whether or not these assessments are the last word, it is clear that the computer has not fulfilled the promises made for it. Promoters of instructional technology have reverted to a much more modest claim—that the computer is just another tool: "it's what you do with it that counts." But this response ignores the ecological impact of technologies. Far from being neutral, they reconstitute all of the relationships in an environment, some for better and some for worse. Installing a computer lab in a school may mean that students have access to information they would never be able to get any other way, but it may also mean that children spend less time engaged in outdoor play, the art supply budget has to be cut, new security measures have to be employed, and Acceptable Use Agreements are needed to inform parents (for the first time in American educational history) that the school is not responsible for the material a child encounters while under its supervision.

The "just-a-tool" argument also ignores the fact that whenever we choose one learning activity over another, we are deciding what kinds of encounters with the world we value for our children, which in turn influences what they grow up to value. Computers tend to promote and support certain kinds of learning experiences, and devalue others. As technology critic Neil Postman has observed, "What we need to consider about computers has nothing to do with its efficiency as a teaching tool. We need to know in what ways it is altering our conception of learning."

If we look through that lens, I think we will see that educational computing is neither a revolution nor a passing fad, but a Faustian bargain. Children gain unprecedented power to control their external world, but at the cost of internal growth. During the two decades that I taught young people with and about digital technology, I came to realize that the power of computers can lead children into deadened, alienated, and manipulative relationships with the world, that children's increasingly pervasive use of computers jeopardizes their ability to belong fully to human and biological communities—ultimately jeopardizing the communities themselves.

Several years ago I participated in a panel discussion on Iowa Public Television that focused on some "best practices" for computers in the classroom. Early in the program, a video showed how a fourth grade class in rural Iowa used computers to produce hypertext book reports on Charlotte's Web, E. B. White's classic children's novel. In the video, students proudly demonstrated their work, which included a computer-generated "spider" jumping across the screen and an animated stick-figure boy swinging from a hayloft rope. Toward the end of the video, a student discussed the important lessons he had learned: always be nice to each other and help one another.

There were important lessons for viewers as well. Images of the students talking around computer screens dispelled (appropriately, I think) the notion that computers always isolate users. Moreover, the teacher explained that her students were so enthusiastic about the project that they chose to go to the computer lab rather than outside for recess. While she seemed impressed by this dedication, it underscores the first troubling influence of computers. The medium is so compelling that it lures children away from the kind of activities through which they have always most effectively discovered themselves and their place in the world.

Ironically, students could best learn the lessons implicit in Charlotte's Web—the need to negotiate relationships, the importance of all members of a community, even the rats—by engaging in the recess they missed. In a school, recess is not just a break from intellectual demands or a chance to let off steam. It is also a break from a closely supervised social and physical environment. It is when children are most free to negotiate their own relationships, at arm's length from adult authority. Yet across the U.S., these opportunities are disappearing. By the year 2000, according to a 2001 report by University of New Orleans associate professor Judith Kieff, more than 40 percent of the elementary and middle schools in the U.S. had entirely eliminated recess. By contrast, U.S. Department of Education statistics indicate that spending on technology in schools increased by more than 300 percent from 1990 to 2000.

Structured learning certainly has its place. But if it crowds out direct, unmediated engagement with the world, it undercuts a child's education. Children learn the fragility of flowers by touching their petals. They learn to cooperate by organizing their own games. The computer cannot simulate the physical and emotional nuances of resolving a dispute during kickball, or the creativity of inventing new rhymes to the rhythm of jumping rope. These full-bodied, often deeply heartfelt experiences educate not just the intellect but also the soul of the child. When children are free to practice on their own, they can test their inner perceptions against the world around them, develop the qualities of care, self-discipline, courage, compassion, generosity, and tolerance—and gradually figure out how to be part of both social and biological communities.

It's true that engaging with others on the playground can be a harrowing experience, too. Children often need to be monitored and, at times, disciplined for acts of cruelty, carelessness, selfishness, even violence. Computers do provide an attractively reliable alternative to the dangers of unsupervised play. But schools too often use computers or other highly structured activities to prevent these problematic qualities of childhood from surfacing—out of fear or a compulsion to force-feed academics. This effectively denies children the practice and feedback they need to develop the skills and dispositions of a mature person. If children do not dip their toes in the waters of unsupervised social activity, they likely will never be able to swim in the sea of civic responsibility. If they have no opportunities to dig in the soil, discover the spiders, bugs, birds, and plants that populate even the smallest unpaved playgrounds, they will be less likely to explore, appreciate, and protect nature as adults.

Computers not only divert students from recess and other unstructured experiences, but also replace those authentic experiences with virtual ones, creating a separate set of problems. According to surveys by the Kaiser Family Foundation and others, school-age children spend, on average, around five hours a day in front of screens for recreational purposes (for children ages two to seven the average is around three hours). All that screen time is supplemented by the hundreds of impressive computer projects now taking place in schools. Yet these projects—the steady diet of virtual trips to the Antarctic, virtual climbs to the summit of Mount Everest, and trips into cyber-orbit that represent one technological high after another—generate only vicarious thrills. The student doesn't actually soar above the Earth, doesn't trek across icy terrain, doesn't climb a mountain. Increasingly, she isn't even allowed to climb to the top of the jungle gym. And unlike reading, virtual adventures leave almost nothing to, and therefore require almost nothing of, the imagination. In experiencing the virtual world, the student cannot, as philosopher Steve Talbott has put it, "connect to [her] inner essence."

On the contrary, she is exposed to a simulated world that tends to deaden her encounters with the real one. During the decade that I spent teaching a course called Advanced Computer Technology, I repeatedly found that after engaging in Internet projects, students came back down to the Earth of their immediate surroundings with boredom and disinterest—and a desire to get back online. This phenomenon was so pronounced that I started kidding my students about being BEJs: Big Event Junkies. Sadly, many readily admitted that, in general, their classes had to be conducted with the multimedia sensationalism of MTV just to keep them engaged. Having watched Discovery Channel and worked with computer simulations that severely compress both time and space, children are typically disappointed when they first approach a pond or stream: the fish aren't jumping, the frogs aren't croaking, the deer aren't drinking, the otters aren't playing, and the raccoons (not to mention bears) aren't fishing. Their electronic experiences have led them to expect to see these things happening—all at once and with no effort on their part. This distortion can also result from a diet of television and movies, but the computer's powerful interactive capabilities greatly accelerate it. And the phenomenon affects more than just experiences with the natural world. It leaves students apathetic and impatient in any number of settings—from class discussions to science experiments. The result is that the child becomes less animated and less capable of appreciating what it means to be alive, what it means to belong in the world as a biological, social being.

So what to make of the Charlotte's Web video, in which the students hunch over a ten-by-twelve-inch screen, trying to learn about what it means to be part of a community while the recess clock ticks away? It's probably unfair to blame the teacher, who would have had plenty of reasons to turn to computers. Like thousands of innovative teachers across the U.S., she must try to find alternatives to the mind-numbing routine of lectures, worksheets, and rote memorization that constitutes conventional schooling. Perhaps like many other teachers, she fully acknowledges the negative effects of computer instruction as she works to create something positive. Or her instructional choices may have simply reflected the infatuation that many parents, community leaders, school administrators, and educational scholars have had with technology. Computer-based education clearly energizes many students and it seems to offer children tremendous power. Unfortunately, what it strips away is much less obvious.

When I was growing up in rural Iowa, I certainly lacked for many things. I couldn't tell a bagel from a burrito. But I always and in many ways belonged. For children, belonging is the most important function a community serves. Indeed, that is the message that lies at the heart of Charlotte's Web. None of us—whether of barnyard or human society—thrives without a sense of belonging. Communities offer it in many different ways—through stories, through language, through membership in religious, civic, or educational organizations. In my case, belonging hinged most decisively on place. I knew our farm—where the snowdrifts would be the morning after a blizzard, where and when the spring runoff would create a temporary stream through the east pasture. I knew the warmest and coolest spots. I could tell you where I was by the smells alone. Watching a massive thunderstorm build in the west, or discovering a new litter of kittens in the barn, I would be awestruck, mesmerized by mysterious wonders I could not control. One of the few moments I remember from elementary school is watching a huge black-and-yellow garden spider climb out of Lee Anfinson's pant cuffs after we came back from a field trip picking wildflowers. It set the whole class in motion with lively conversation and completely flummoxed our crusty old teacher. Somehow that spider spoke to all of us wide-eyed third graders, and we couldn't help but speak back. My experience of these moments, even if often only as a caring observer, somehow solidified my sense of belonging to a world larger than myself—and prepared me, with my parents' guidance, to participate in the larger community, human and otherwise.

Though the work of the students in the video doesn't reflect it, this kind of experience plays a major role in E. B. White's story. Charlotte's Web beautifully draws a child's attention to something that is increasingly rare in schools: the wonder of ordinary processes of nature, which grows mainly through direct contact with the real world. As Hannah Arendt and other observers have noted, we can only learn who we are as human beings by encountering what we are not. While it may seem an impossible task to provide all children with access to truly wild territories, even digging in (healthy) soil opens up a micro-universe that is wild, diverse, and "alien." Substituting the excitement of virtual connections for the deep fulfillment of firsthand engagement is like mistaking a map of a country for the land itself, or as biological philosopher Gregory Bateson put it, "eat[ing] the menu instead of your meal." No one prays over a menu. And I've never witnessed a child developing a reverence for nature while using a computer.

There is a profound difference between learning from the world and learning about it. Any young reader can find a surfeit of information about worms on the Internet. But the computer can only teach the student about worms, and only through abstract symbols—images and text cast on a two-dimensional screen. Contrast that with the way children come to know worms by hands-on experience—by digging in the soil, watching the worm retreat into its hole, and of course feeling it wiggle in the hand. There is the delight of discovery, the dirt under the fingernails, an initial squeamishness followed by a sense of pride at overcoming it. This is what can infuse knowledge with reverence, taking it beyond simple ingestion and manipulation of symbols. And it is reverence in learning that inspires responsibility to the world, the basis of belonging. So I had to wonder why the teacher from the Charlotte's Web video asked children to create animated computer pictures of spiders. Had she considered bringing terrariums into the room so students could watch real spiders fluidly spinning real webs? Sadly, I suspect not.

Rather than attempt to compensate for a growing disconnect from nature, schools seem more and more committed to reinforcing it, a problem that began long before the use of computers. Western pedagogy has always favored abstract knowledge over experiential learning. Even relying on books too much or too early inhibits the ability of children to develop direct relationships with the subjects they are studying. But because of their power, computers drastically exacerbate this tendency, leading us to believe that vivid images, massive amounts of information, and even online conversations with experts provide an adequate substitute for conversing with the things themselves. As the computer has amplified our youths' ability to virtually "go anywhere, at any time," it has eroded their sense of belonging anywhere, at any time, to anybody, or for any reason. How does a child growing up in Kansas gain a sense of belonging when her school encourages virtual learning about Afghanistan more than firsthand learning about her hometown? How does she relate to the world while spending most of her time engaging with computer-mediated text, images, and sounds that are oddly devoid of place, texture, depth, weight, odor, or taste—empty of life? Can she still cultivate the qualities of responsibility and reverence that are the foundation of belonging to real human or biological communities?

During the years that I worked with young people on Internet telecollaboration projects, I was constantly frustrated by individuals and even entire groups of students who would suddenly disappear from cyber-conversations related to the projects. My own students indicated that they understood the departures to be a way of controlling relationships that develop online. If they get too intense, too nasty, too boring, too demanding, just stop communicating and the relationship goes away. When I inquired, the students who used e-mail regularly all admitted they had done this, the majority more than once. This avoidance of potentially difficult interaction also surfaced in a group of students in the "Talented and Gifted" class at my school. They preferred discussing cultural diversity with students on the other side of the world through the Internet rather than conversing with the school's own ESL students, many of whom came from the very same parts of the world as the online correspondents. These bright high school students feared the uncertain consequences of engaging the immigrants face-to-face. Would they want to be friends? Would they ask for favors? Would they embarrass them in front of others? Would these beginning English speakers try to engage them in frustrating conversations? Better to stay online, where they could control when and how they related to strange people—without much of the work and uncertainty involved with creating and maintaining a caring relationship with a community.

If computers discourage a sense of belonging and the hard work needed to interact responsibly with others, they replace it with a promise of power. The seduction of the digital world is strong, especially for small children. What sets the computer apart from other devices, such as television, is the element of control. The most subtle, impressive message promoted by the Charlotte's Web video was that children could take charge of their own learning. Rather than passively listening to a lecture, they were directly interacting with educational content at their own pace. Children, who have so little control over so many things, often respond enthusiastically to such a gift. They feel the same sense of power and control that any of us feels when we use the computer successfully.

To develop normally, any child needs to learn to exert some control over her environment. But the control computers offer children is deceptive, and ultimately dangerous. In the first place, any control children obtain comes at a price: relinquishing the uniquely imaginative and often irrational thought processes that mark childhood. Keep in mind that a computer always has a hidden pedagogue—the programmer—who designed the software and invisibly controls the options available to students at every step of the way. If they try to think "outside the box," the box either refuses to respond or replies with an error message. The students must first surrender to the computer's hyper-rational form of "thinking" before they are awarded any control at all.

And then what exactly is awarded? Here is one of the most underappreciated hazards of the digital age: the problematic nature of a child's newfound power—and the lack of internal discipline in using it. The child pushes a button and the computer draws an X on the screen. The child didn't draw that X; she essentially "ordered" the computer to do it, and the computer employed an enormous amount of embedded adult skill to complete the task. Most of the time a user forgets this distinction because the machine so quickly and precisely processes commands. But the intensity of the frustration that we experience when the computer suddenly stops following orders (and our tendency to curse at, beg, or sweet-talk it) confirms that the subtle difference is not lost on the psyche. This shift toward remote control is akin to taking the child out of the role of actor and turning her into the director. This is a very different way of engaging the world than hitting a ball, building a fort, setting a table, climbing a tree, sorting coins, speaking and listening to another person, acting in a play. In an important sense, the child gains control over a vast array of complex abstract activities by giving up or eroding her capacity to actually do them herself. We bemoan the student who uses a spell-checker instead of learning to spell, or a calculator instead of learning to add. But the sacrifice of internal growth for external power generally operates at a more subtle level, as when a child assembles a PowerPoint slideshow using little if any material that she actually created herself.

Perhaps more importantly, however, this emphasis on external power teaches children a manipulative way of engaging the world. The computer does an unprecedented job of facilitating the manipulation of symbols. Every object within the virtual environment is not only an abstract representation of something tangible, but is also discrete, floating freely in a digital sea, ready at hand for the user to do with as she pleases. A picture of a tree on a computer has no roots in the earth; it is available to be dragged, cropped, shaded, and reshaped. A picture of a face can be distorted, a recording of a musical performance remixed, someone else's text altered and inserted into an essay. The very idea of the dignity of a subject evaporates when everything becomes an object to be taken apart, reassembled, or deleted. Before computers, people could certainly abstract and manipulate symbols of massive objects or living things, from trees to mountainsides, from buildings to troop movements. But in the past, the level of manipulative power found in a computer never rested in the hands of children, and little research has been done to determine its effect on them. Advocates enthuse over the "unlimited" opportunities computers afford the student for imaginative control. And the computer environment attracts children exactly because it strips away the very resistance to their will that so frustrates them in their concrete existence. Yet in the real world, it is precisely an object's resistance to unlimited manipulation that forces a child (or anyone) to acknowledge the physical limitations of the natural world, the limits of one's power over it, and the need to respect the will of others living in it. To develop normally, a child needs to learn that she cannot force the family cat to sit on her lap, make a rosebud bloom, or hurt a friend and expect to just start over again with everything just as it was before. 

Nevertheless, long before children have learned these lessons in the real world, parents and educators rush to supply them with digital tools. And we are only now getting our first glimpse of the results—even among teenagers, whom we would expect to have more maturity than their grade school counterparts.

On the day my Advanced Computer Technology classroom got wired to the Internet, it suddenly struck me that, like other technology teachers testing the early Internet waters, I was about to give my high school students more power to do more harm to more people than any teens had ever had in history, and all at a safe distance. They could inflict emotional pain with a few keystrokes and never have to witness the tears shed. They had the skill to destroy hours, even years, of work accomplished by others they didn't know or feel any ill-will toward—just unfortunate, poorly protected network users whose files provided convenient bull's-eyes for youth flexing their newfound technical muscles. Had anyone helped them develop the inner moral and ethical strength needed to say "no" to the flexing of that power?

On the contrary, we hand even our smallest children enormously powerful machines long before they have the moral capacities to use them properly. Then to assure that our children don't slip past the electronic fences we erect around them, we rely on yet other technologies—including Internet filters like Net Nanny—or fear of draconian punishments. This is not the way to prepare youth for membership in a democratic society that eschews authoritarian control.

That lesson hit home with particular force when I had to handle a trio of very bright high school students in one of the last computer classes I taught. These otherwise nice young men lobbied me so hard to approve their major project proposal—breaking through the school's network security—that I finally relented to see if they intended to follow through. When I told them it was up to them, they trotted off to the lab without a second thought and went right to work—until I hauled them back and reasserted my authority. Once the external controls were lifted, these teens possessed no internal controls to take over. This is something those who want to "empower" young children by handing them computers have tended to ignore: that internal moral and ethical development must precede the acquisition of power—political, economic, or technical—if it is to be employed responsibly.

Computer science pioneer Joseph Weizenbaum long ago argued that as the machines we put in our citizens' hands become more and more powerful, it is crucial that we increase our efforts to help people recognize and accept the immense responsibility they have to use those machines for the good of humanity. Technology can provide enormous assistance in figuring out how to do things, Weizenbaum pointed out, but it turns mute when it comes time to determine what we should do. Without any such moral grounding, the dependence on computers encourages a manipulative, "whatever works" attitude toward others. It also reinforces the exploitative relationship to the environment that has plagued Western society since Descartes first expressed his desire to "seize nature by the throat." Even sophisticated "environmental" simulations, which show how ecosystems respond to changes, reinforce the mistaken idea that the natural world conforms to our abstract representations of it, and therefore has no inherent value, only the instrumental value we assign to it through our symbols. Such reductionism reinforces the kind of faulty thinking that is destroying the planet: we can dam riparian systems if models show an "acceptable" level of damage, treat human beings simply as units of productivity to be discarded when inconvenient or useless, and reduce all things, even those living, to mere data. The message of the medium—abstraction, manipulation, control, and power—inevitably influences those who use it.

None of this happens overnight, of course, or with a single exposure to a computer. It takes time to shape a worldview. But that is exactly why it is wrong-headed to push such powerful worldview-shapers on impressionable children, especially during elementary school years. What happens when we immerse our children in virtual environments whose fundamental lesson is not to live fully and responsibly in the world, but to value the power to manipulate objects and relationships? How can we then expect our children to draw the line between the symbols and what they represent? When we remove resistance to a child's will to act, how can we teach that child to deal maturely with the Earth and its inhabitants?

Our technological age requires a new definition of maturity: coming to terms with the proper limits of one's own power in relation to nature, society, and one's own desires. Developing those limits may be the most crucial goal of twenty-first-century education. Given the pervasiveness of digital technology, it is not necessary or sensible to teach children to reject computers (although I found that students need just one year of high school to learn enough computer skills to enter the workplace or college). What is necessary is to confront the challenges the technology poses with wisdom and great care. A number of organizations are attempting to do just that. The Alliance for Childhood, for one, has recently published a set of curriculum guidelines that promotes an ecological understanding of the relationship between humans and technology. But that's just a beginning.

In the preface to his thoughtful book, The Whale and the Reactor, Langdon Winner writes, "I am convinced that any philosophy of technology worth its salt must eventually ask, 'How can we limit modern technology to match our best sense of who we are and the kind of world we would like to build?'" Unfortunately, our schools too often default to the inverse of that question: "How can we limit human beings to match the best use of what our technology can do and the kind of world it will build?" As a consequence, our children are likely to sustain this process of alienation—in which they treat themselves, other people, and the Earth instrumentally—in a vain attempt to materially fill up lives crippled by internal emptiness. We should not be surprised when they "solve" personal and social problems by turning to drugs, guns, hateful Web logs, or other powerful "tools," rather than digging deep within themselves or searching out others in the community for strength and support. After all, this is just what we have taught them to do.

At the heart of a child's relationship with technology is a paradox—that the more external power children have at their disposal, the more difficult it will be for them to develop the inner capacities to use that power wisely. Once educators, parents, and policymakers understand this phenomenon, perhaps education will begin to emphasize the development of human beings living in community, and not just technical virtuosity. I am convinced that this will necessarily involve unplugging the learning environment long enough to encourage children to discover who they are and what kind of world they must live in. That, in turn, will allow them to participate more wisely in using external tools to shape, and at times leave unshaped, the world in which we all must live.

© Lowell Monke 2005



By Geoff Boucher

October 9, 2005

Bob Dylan is singing "The Times They Are A-Changin'" in a television ad for health-care giant Kaiser Permanente these days, and who could argue? With Led Zeppelin pitching Cadillacs, the Rolling Stones strutting in an Ameriquest Mortgage ad and Paul McCartney warbling for Fidelity Investments, it's clear that the old counterculture heroes of classic rock are now firmly entrenched as the house band of corporate America.

That only makes the case of John Densmore all the more intriguing.

Once, back when rock 'n' roll still seemed dangerous, Densmore was the drummer for the Doors, the band with dark hits such as "Light My Fire" and "People Are Strange." That band more or less went into the grave with lead singer Jim Morrison in 1971, but, like all top classic-rock franchises, it now has the chance to exploit a lucrative afterlife in television commercials. Offers keep coming in, such as the $15 million dangled by Cadillac last year to lease the song "Break on Through (To the Other Side)" to hawk its luxury SUVs.

A no-brainer

To the surprise of the corporation and to the chagrin of his former bandmates, Densmore vetoed the idea. He said he did the same when Apple Computer called with a $4-million offer, and every time "some deodorant company wants to use `Light My Fire.'"

The reason?

"People lost their virginity to this music, got high for the first time to this music," Densmore said. "I've had people say kids died in Vietnam listening to this music, other people say they know someone who didn't commit suicide because of this music. Onstage, when we played these songs, they felt mysterious and magic. That's not for rent."

That not only sets the Doors apart from the long, long list of classic rock acts that have had their songs licensed for major U.S. commercial campaigns, it has also added considerably to Densmore's estrangement from former bandmates Ray Manzarek and Robbie Krieger; the three last set eyes on one another in the corridors of the Los Angeles County Superior Courthouse last year.

"Everyone wanted him to do it," says John Branca, an attorney who worked on the Cadillac proposal. "I told him that, really, people don't frown on this anymore. It's considered a branding exercise for the music. He told me he just couldn't sell a song to a company that was polluting the world.

"I shook my head," Branca said, "but, hey, you have to respect that. How many of your principles would you reconsider when people start talking millions of dollars?"

Densmore relented once. Back in the 1970s, he agreed to let "Riders on the Storm" be used to sell Pirelli Tires in a TV spot in England. When he saw it, he was sick. "I gave every cent to charity. Jim's ghost was in my ear, and I felt terrible. If I needed proof that it was the wrong thing to do, I got it."

Since then, the animus between the drummer and Manzarek and Krieger has intensified.

In August, Los Angeles Superior Court Judge Gregory W. Alarcon ruled that Manzarek and Krieger could no longer tour together as "Doors of the 21st Century." The pair, with former Cult singer Ian Astbury handling Morrison's old vocal duties, were in Canada at the time and grudgingly switched their marquee to the acronym "D21C."

Joined by Morrison's estate

Densmore had filed the suit in 2003 to block the neo-Doors from using any permutation of the old band's name. In this battle, he was joined by the Morrison estate.

Manzarek said the view that Densmore is selflessly protecting the Doors legacy is laughable.

"John is going to get about a million dollars for doing nothing," Manzarek said. "He gets an equal share as us, and we were out there working. A free million bucks. That's a gig I'd like."

Manzarek, 66, said his old friend should join the neo-Doors, not try to undermine them. "He should come and play drums with us," Manzarek said, "not fight us at every turn. We're all getting older. We should, the three of us, be playing these songs because, hey, the end is always near. Morrison was a poet, and above all, a poet wants his words heard."

Perhaps more years of life would have changed his view, but in 1969 it was quite clear that the poet of the Doors did not want to be a pitchman.

The Doors had formed in 1965. As the decade was coming to a close, they were hailed in some quarters as the "Rolling Stones of America." An advertising firm came to the band with an offer: $50,000 to allow their biggest hit, "Light My Fire," to be used in a commercial for the Buick Opel.

Morrison was in Europe and his bandmates voted in his absence; Densmore, Krieger and Manzarek agreed to the deal. Morrison returned and was furious, vowing to sledgehammer a Buick onstage at every concert if the commercial went forward. It did not.

In November 1970, the lesson learned from the Buick fiasco was put in writing. The Doors members agreed that any licensing agreement would require a unanimous vote. Even before that, the band had agreed that the members would share equally in all music-publishing rights, an arrangement that set them apart from most bands. Those agreements also set the stage for Densmore to be a human handbrake that again and again stops the Doors profit machine from speeding down new avenues.

"There's a lot of pressure, from everyone," Densmore said recently with a weary sigh. "Pressure from the guys, the manager, the Morrison estate."

He was sitting in the back-house office of his Santa Monica, Calif., home. The walls are covered with photos and newspaper clippings, among them a framed Morrison poem about the vantage point of man beyond the grave. Among the lines:

"No more money/no more fancy dress/This other kingdom seems by far the best."

Popularity surge

Morrison is dead but hardly forgotten. Just the opposite, his popularity has surged in the years after his heart gave out. There was the one-two punch of the 1979 release of the film "Apocalypse Now," with its signature moments using the band's music, and the 1980 publication of the band tell-all book "No One Here Gets Out Alive" by Jerry Hopkins and Danny Sugarman. In 1991, another revival was stirred by Oliver Stone's movie "The Doors." Since that film's release, 14 million Doors albums have been sold in the United States alone.

Those album sales combine with the money generated by radio airplay, merchandising and the other royalty streams to put steady deposits into the bank accounts of the surviving members and the Morrison estate. Densmore said that the money coming in should relieve pressure on the band to drift into areas that will trample the legacy. "When Ray calls, I always ask him, `What is it you want to buy?'"

When Cadillac offered $15 million last year, the money made Densmore dizzy ("More money than any of us have made on anything we've ever done," he said), but he was resolute. "Robbie was on the fence; Ray wanted to do it," Densmore said. "All of it made me think about this book I want to write. It's about greed."

Manzarek, on the other hand, describes the car commercial in tie-dyed hues. "Cadillac said we could all fly out to Detroit and give input as they start putting together their hybrid models and the way they would be presented to the public . . . artists and corporations working together, that's the 21st Century. That's the true age of Aquarius. But John's ego wouldn't let him see it was a good thing to do."

In the end, Cadillac held onto the motto "Break Through" but used a different dark anthem -- the commercial, now in heavy rotation, features Zeppelin's frenetic 1972 single "Rock and Roll." Cadillac's eight-figure offer was enough to coax the band to take its first plunge into the advertising profit stream.

Even among the classic-rock purist audience, there is a shift in expectation. Pete Howard, editor in chief of Ice magazine, a music publication tailored to audiophiles and intense rock music collectors, not only thinks the Doors should take money for the songs of the past, he believes they are risking their future if they don't.

"They get a gold star for integrity, but they are missing a train that is leaving the station," Howard said. "Advertising is no longer a dirty word to the Woodstock generation, and in fact, in this landscape, the band will find that if it relies on people who hear the music in films, on radio in prerecorded formats, that with each decade their niche among music fans will narrow. It's advertising -- with its broad audience and ubiquity -- that gets new ears."

Densmore now waits to see if his old bandmates will appeal the court decision banning the use of the Doors name for their concert tours. For the time being, Manzarek has said that the band will continue with the name Riders on the Storm. Densmore said he would not dispute them on that. Manzarek said the fans and reviews have been great, and Astbury has the same "dark, shamanistic, powerful, Celtic-Christian, mystical" vibe as his old friend Morrison. Manzarek says the group will soon record a new studio album.

"It doesn't matter what we call it, it's still Robbie and I together playing `Light My Fire' and `Love Me Two Times.' John should come and play and let us celebrate and keep this music alive," Manzarek said. "Look, what do I say to the cynics? I would like to play with Jim Morrison again. But you know what? I can't call him. I'm sorry. He's dead. He's busy. He's in eternity."

So what about that invitation from Manzarek?

"I would love to play with the Doors and play those songs again. I would. And I will play again as the Doors. Just as soon as Jim shows up."

© 2005, Chicago Tribune



By Jerry Saltz

May 26, 2005

I've got a little, make that a big, problem with contemporary art auctions. Last fall I set out to investigate it further. Beginning November 3, the day after the U.S. presidential election, I went to as many sales during New York's big fall auction week as I could stomach. This turned out to be a half-dozen spread out among the major auction houses—Christie's, Sotheby's, and Phillips. I learned a lot, but the problem only grew.

Contemporary art auctions are bizarre combinations of slave market, trading floor, theater, and brothel. They are rarefied entertainments where speculation, spin, and trophy hunting merge as an insular caste enacts a highly structured ritual in which the codes of consumption and peerage are manipulated in plain sight. Everyone says auctions are about "quality." In fact, auctions are altars to the disconnect between the inner life of art and the outer life of consumption, places where artists are cut off from their art. Auctions have nothing to do with quality. At auctions new values are assigned and desire is fetishized. Consumption becomes a sort of sacrament, art plays the role of sacrificial lamb, and the Ponzi scheme that surrounds it all rolls on.

Auctions are like stripteases: They rely on people being enticed by what's just out of reach. The auctioneer announces the lot number, the crowd stirs, and a turntable revolves to reveal the forlorn-looking work that is frequently guarded by one of the only black people in the room. Echoing this antebellum nightmare, petite blondes in little dresses sit close by. They're there to spot bids but rarely actually do anything. Occasionally, a handsome swain, assumedly another employee, approaches one of them from behind and whispers something into an ear. This lends a certain kinkiness to the proceedings. Of the hundreds, sometimes thousands of people in these rooms, only a tiny handful actually bid. The rest are there for kicks, networking, tracking values, and who knows what. Meanwhile the ghostly "phone bidders" buy their art in public and private at the same time.

In my season at the auctions I saw seas of white people captivated by ruby-throated auctioneers (all European, all insanely artificial in their mannerisms). These showmen gesticulate and croon their odious, melodious, mathematical songs: "I have $4 million. Do I hear 5 million?" I looked on as three-quarters of a billion dollars changed hands and silently bade farewell to works of art that will probably not be seen again in public in my lifetime. I saw a Mondrian, a Modigliani, and a Gauguin bring $21 million, $31 million, and $39 million respectively. A Johns drawing fetched $11 million, a Warhol "Race Riot" $15 million, and a Maurizio Cattelan went for $3 million. I felt faint when a Matthew Barney photo in an edition of 10 that I had seen in his studio years before went for $200,000, then felt fainter as a restaurateur from a canceled reality-TV show bid up a photograph while a stunning blonde ground her pelvis against his groin every time he waved his paddle. I was dumbfounded as a third-rate painting by the second-rate Marlene Dumas broke the million-dollar mark, bought by one of her dealers. Numbers didn't matter any longer as the crowd carried on and the tote board tallied prices in international currencies. Suddenly, Bush's election victory seemed quaint.

What's out of whack at the auctions, however, isn't just the monetary values of the art, it's the values of the people who are buying and selling it. Wealthy collectors and their spawn tell everyone they love the art they own. Then—instead of selling the top several works and being set for life and perhaps giving the rest to a museum and changing the course of that museum forever (while receiving a tax benefit for themselves)—they sell their collections at auction and everyone applauds. These collectors don't have a clue about what it means to own art. Like the auction houses, they're only interested in money and publicity.

The greed, stupidity, and cupidity of many of the people who buy and sell their art at auctions have created what I call the "parallel market": artists whose auction prices far exceed what their work costs in galleries. Christie's international co-head of post-war contemporary art, Amy Cappellazzo, ruefully admits, "Some people prefer to spend $500,000 at auction on something they could buy privately for $50,000." She calls these people "traders." Auction houses rely on the fact that many of their buyers don't know that much about art, will buy almost anything if it has the right name on it, or simply don't care. So much for "quality."

Art worlders continually grouse about the skyrocketing prices and privately say they hate auctions. Yet too many of them are making too much money off the system as it is to step away. It's becoming a vicious cycle. Recently The New York Times featured a front-page "Arts & Leisure" article titled "The X Factor" about why the art of women doesn't sell at auction for the same prices as that by men. Among other examples, it cited an Elizabeth Peyton painting that "only" sold for $300,000 while John Currin and Luc Tuymans fetched more than a million. What the article and all of us should ask is why any of them—good or bad—should sell for much more than, say, $100,000.

Auctions have always and only been commercial. By now, they're so craven they make you see that art fairs might be ways that artists and gallerists can take back at least some of the control. When this moneyed phase ends, a lot of people are going to be making a lot of excuses or maintaining that they were never part of this. Whatever anyone says, auctions are nasty pieces of work. © 2005



by Doug Ireland

November 06, 2005
Saturday night was the 10th day of the spreading youth riots that have much of France in flames -- and it was the worst night since the first riot erupted in a suburban Paris ghetto of low-income housing, with 1,295 vehicles -- from private cars to public buses -- burned last night, a huge jump from the 897 set afire the previous evening. And, for the first time, the violence born in the suburban ghettos last night invaded the center of Paris -- some 40 vehicles were set alight in Le Marais (the pricey home to the most famous gay ghetto in Paris), around the Place de la Republique nearby, and in the bourgeois 17th arrondissement, only a stone's throw from the dilapidated ghetto of the Goutte d'Or in the 18th arrondissement.
As someone who lived in France for nearly a decade, and who has visited those suburban ghettos, where the violence started, on reporting trips any number of times, I have not been surprised by this tsunami of inchoate youth rebellion that is engulfing France. It is the result of thirty years of government neglect: of the failure of the French political classes -- of both right and left -- to make any serious effort to integrate its Muslim and black populations into the larger French economy and culture; and of the deep-seated, searing, soul-destroying racism that the unemployed and profoundly alienated young of the ghettos face every day of their lives, both from the police, and when trying to find a job or decent housing.
To understand the origins of this profound crisis for France, it is important to step back and remember that the ghettos where festering resentment has now burst into flames were created as a matter of industrial policy by the French state.
If France's population of immigrant origin -- mostly Arab, some black -- is today quite large (more than 10% of the total population), it is because there was a government and industrial policy during the post-World War II boom years of reconstruction and economic expansion which the French call "les trente glorieuses" -- the 30 glorious years -- to recruit from France's foreign colonies laborers and factory and menial workers for jobs which there were no Frenchmen to fill. These immigrant workers were desperately needed to allow the French economy to expand, given the shortage of male manpower caused by two World Wars, which killed many Frenchmen and slashed the native French birth-rates too. Moreover, these immigrant workers were considered passive and unlikely to strike (unlike the highly political French working class and its Communist-led unions). This government-and-industry-sponsored influx of Arab workers (many of whom saved up to bring their families to France from North Africa) was reinforced following Algerian independence by the Harkis.
The Harkis (whose story is movingly told by Dalila Kerchouche in her Destins de Harkis) were the native Algerians who fought for and worked with France during the post-war anti-colonial struggles for independence -- and who for their trouble were horribly treated by France. Some 100,000 Harkis were killed by the Algerian FLN (National Liberation Front) after the French shamelessly abandoned them to a lethal fate when the French occupying army evacuated itself and the French colonists from Algeria. Moreover, those Harki families who were saved, often at the initiative of individual military commanders who refused to obey orders not to evacuate them, once in France were parked in unspeakable, filthy, crowded concentration camps for many long years and never benefited from any government aid -- a nice reward for their sacrifices for France, of which they were, after all, legally citizens. Their ghettoized children and grandchildren, naturally, harbor certain resentments.
France's other immigrant workers were warehoused in huge, high-rise low-income housing ghettos -- known as "cités" (Americans would say "the projects") -- specially built for them, and deliberately placed out of sight in the suburbs around most of France's major urban agglomerations, so that their darker-skinned inhabitants wouldn't pollute the center cities of Paris, Lyon, Toulouse, Lille, Nice and the others of white France's urban centers today encircled by flames. Often there was only just enough public transport provided to take these uneducated working class Arabs and blacks directly to their jobs in the burgeoning factories of the "peripherique" -- the suburban peripheries that encircled Paris and its smaller sisters -- but little or none linking the ghettos to the urban centers.
Now 30, 40, and 50 years old, these high-rise human warehouses in the isolated suburbs are today run-down, dilapidated, sinister places, with broken elevators that remain unrepaired, heating systems left dysfunctional in winter, dirt and dog-shit in the hallways, broken windows, and few commercial amenities -- shopping for basic necessities is often quite limited and difficult, while entertainment and recreational facilities for youth are truncated and totally inadequate when they're not non-existent. Both apartments and schools are over-crowded. Birth control is a cultural taboo in the Muslim culture the immigrants brought with them and transmitted to their children, and even for their male grandchildren of today -- who've adopted hip-hop culture and created their own French-language rap music of extraordinary vitality (which often embodies stinging social and political content) -- condoms are a no-no because of Arab machismo, contributing to rising AIDS rates in the ghettos.
The first week in December will mark the 22nd anniversary of the Marche des Beurs (Beur means Arab in French slang). I was present to see the cortege of 100,000 arrive in Paris -- it was the Franco-Arab equivalent of Dr. Martin Luther King's 1963 March on Washington for Jobs and Justice. The Marche des Beurs was organized from Lyon's horrific, enormous suburban high-rise ghetto, Les Minguettes, with the help of a charismatic left-wing French Catholic worker-priest, Father Christian Delorme, and its central theme was the demand to be recognized as French "comme les autres" -- like everyone else ... a demand, in sum, for complete integration. But for the mass of Franco-Arabs, little has changed since 1983 -- and the integrationist movement of "jeunes beurs" created around that march petered out in frustration and despair. In recent years, its place has been taken by Islamist fundamentalists operating through local mosques -- the media symbol of this retreat into a separatist, communitarian-religious politics is the slick demagogue Tariq Ramadan, a philosophy professor who uses one cosmetically democratic discourse when he's speaking on French TV, and a fiery, hard-line fundamentalist discourse in the Arab-language cassettes of his speeches that sell like hotcakes to Franco-Arab ghetto youth. (Ramadan's double language has been meticulously documented by the Arab-speaking journalist Caroline Fourest in her book published last fall by Editions Grasset, "Frere Tariq: discours, methode et strategie de Tariq Ramadan," extracts from which have been published in the weekly l'Express.) But the current rebellion has little to do with Islamic fundamentalism.
In 1990, Francois Mitterrand -- the Socialist President then -- described what life was like for jobless ghetto youths warehoused in the overcrowded "cités":
"What hope does a young person have who's been born in a quartier without a soul, who lives in an unspeakably ugly high-rise, surrounded by more ugliness, imprisoned by gray walls in a gray wasteland and condemned to a gray life, with all around a society that prefers to look away until it's time to get mad, time to FORBID."
Well, Mitterrand's perceptive and moving words remained just that -- words -- for his urban policy was an underfunded, unfocused failure that only put a few band-aids on a metastasizing cancer -- and 15 years after Mitterrand's diagnosis, the hopelessness and alienation of these ghetto youths and their "gray lives" have only become deeper and more rancid still.
The response to the last ten days of violent youth rebellion by the conservative government has been inept and tone-deaf. For the first four days of the rebellion, Chirac and his Prime Minister, Dominique de Villepin, decided to let the hyper-ambitious, megalomaniacal Interior Minister, Nicolas Sarkozy, lead the government's response to the youths' violence and arson. Chirac and Villepin detest Sarkozy, who has been openly campaigning to replace Chirac as president in 2007 (Villepin was made P.M. in the hopes that he could block Sarkozy for the right's presidential nomination). The President and his P.M. thought that "Sarko," as he's commonly referred to in France -- who won his widespread popularity as a hardline, law-and-order demagogue on the issue of domestic insecurity -- would be unable to stop the violence, and thus damage his presidential campaign.
But Sarkozy only poured verbal kerosene on the flames, dismissing the ghetto youth in the most insulting and racist terms and calling for a policy of repression. "Sarko" made headlines with his declarations that he would "karcherise" the ghettos of "la racaille" -- words the U.S. press has utterly inadequately translated to mean "clean" the ghettos of "scum." But these two words have an infinitely harsher and more insulting flavor in French. "Karcher" is the well-known brand name of a system of cleaning surfaces by super-high-pressure sand-blasting or water-blasting that very violently peels away the outer skin of encrusted dirt -- like pigeon-shit -- even at the risk of damaging what's underneath. To apply this term to young human beings and proffer it as a strategy is a verbally fascist insult and, as a policy proposed by an Interior Minister, is about as close as one can get to hollering "ethnic cleansing" without actually saying so. It implies raw police power and force used very aggressively, with little regard for human rights. I wonder how many Anglo-American correspondents get the inflammatory, terribly vicious flavor of the word in French? The translation of "karcherise" by "clean" just misses completely the inflammatory violence of what Sarko was really saying. And "racaille" is infinitely more pejorative than "scum" to French-speakers -- it has the flavor of characterizing an entire group of people as subhuman, inherently evil and criminal, worthless, and is, in other words, one of the most serious insults one could launch at the rebellious ghetto youth.
As the rebellion has spread beyond the Paris suburbs as far south as Marseilles and Nice and as far north as Lille, Sarkozy has been thundering that the spreading violence is centrally "organized." But on the telephone this morning from Paris, the dean of French investigative reporters -- Claude Angeli, editor of Le Canard Enchaine -- told me, "That's not true -- this isn't being organized by the Islamist fundamentalists, as Sarkozy is implying to scare people. Sure, kids in neighborhoods are using their cellphones and text messages to warn each other where the cops are coming so they can move and pick other targets for their arson. But the rebellion is spreading because the youth have a sense of solidarity that comes from watching television -- they imitate what they're seeing, and they sense themselves targeted by Sarkozy's inflammatory rhetoric. The rebellion is spreading spontaneously -- driven especially by racist police conduct that is the daily lot of these youths. It's incredible the level of police racism -- they're arrested or controlled and have their papers checked because they have dark skins, and the police are verbally brutal, calling them 'bougnoules' [a racist insult, something like the American "towel-heads", only worse] and telling them, 'Lower your eyes! Lower your eyes!' as if they had no right to look a policeman in the face. It's utterly dehumanizing. No wonder these kids feel so divorced from authority."
A team report in today's French daily, Liberation (where I was once a columnist), interviews ghetto youths and asks them to explain the reasons for their anger. And, the paper reports, "All, or almost all, cite 'Sarko.'" A 22-year-old student says, "Sarkozy owes us his excuses for what he said. When I see what's happened, I come back to the same image: Sarkozy when he went to Argenteuil, raising his head and thundering, Madame, we're going to clean all that up. Result? Sarko sent everybody over the top, he showed a total disrespect toward everybody in the ghetto." A 13-year-old tells the Liberation reporters: "It's us who are going to put Sarkozy through the Karcher ... Will I be out making trouble tonight?" He smiles and says, "That's classified information." Another youth, 28: "Who's setting the fires? They're kids between 14 and 22, we don't really know who they are because they put on masks, don't talk, and don't brag about it the next day ... but instead of fucking everything up where they live, it would be better if they held a demo, or went and fucked up the people and the stores in Paris. We've got a minister, Sarko, who says 'You're all the same.' Me, I say non, we all say non -- but in reply we still get, 'You're all the same.' That response from the government creates something in common between all of us, a kind of solidarity. These kids want to get attention, to let people know they exist. So they say to themselves, 'If we get nasty and create panic, they won't forget us, they'll know we're in a neighborhood where we need help.'"
Yesterday, when Sarkozy -- who is Minister of Religion as well as Interior Minister -- wanted to make an appearance at the Catholic Bishops' conference in Paris, they refused to let him speak -- and instead, the Bishops issued a ringing statement denouncing "those who would call for repression and instill fear" instead of responding to the economic, social, and racial causes of the riots. This was an unusually sharp rebuke directed squarely at Sarkozy.
Under the headline "Budget Cuts Exasperate Suburban Mayors," Le Monde reports today on how Chirac and his conservatives have compounded 30 years of neglect of the ghettos by slashing even deeper into social programs: 20% annual cuts since 2003 in subsidies for neighborhood groups that work with youths, cuts in youth job-training programs and tax credits for hiring ghetto youth, cuts in education and programs to teach kids how to read and write, cuts in neighborhood police who get to know ghetto kids and work with them. (When Sarkozy went to Toulouse, he told the neighborhood police: "Your job is not to be playing soccer with these kids, your job is to arrest them!") With fewer and fewer neighborhood cops to do preventive work that defuses youth alienation and violence, the alternative is to wait for more explosions and then send in the CRS (Compagnies Republicaines de Securite, hard-line paramilitary SWAT teams). Budget cuts for social programs plus more repression are a prescription for more violence.
That's why Le Monde's editorial today warned that a continuation of this blind policy creates a big risk of provoking a repeat of 2002, when the neo-fascist Jean-Marie Le Pen made it into the runoff.
And a majority of the country, poisoned even more by racism after the violence of the last ten days, seems willing to accept more and more repression: a poll released last night on France 2 public TV shows that 57% of the French support Nicolas Sarkozy's hard-line approach to the ghetto youths' rebellion, now spreading right across France. Sarko's demagogy seems to be working -- at least with the electorate -- but it won't stop the violence, it will only increase it.
Doug Ireland, a longtime radical journalist and media critic, runs the blog DIRELAND, where this article appeared Nov. 6, 2005.
Doug Ireland © 2005



By Stephen Franklin
Tribune staff reporter

November 20, 2005

TAMPA -- With a cheery smile, Mary Mavrick strolled the store's aisles, handing out leaflets to workers while expecting to be nabbed any minute by security guards.

"What's this? Oh!" a gray-haired Wal-Mart worker said out loud, as she read from the one-page flier from the Wal-Mart Workers Association, a union-backed experiment to win support from the retailer's workers without traditional union organizing.

The flier read:

"Do you deserve respect from management? Does your family deserve more? Do you deserve more."

Mavrick was already in the parking lot, her leaflets distributed to surprised Wal-Mart workers by the time the guards told her to leave. She vowed to be back.

Her effort is a tiny part of an escalating series of moves and countermoves between the behemoth 1.3 million-worker company and its foes, which include environmentalists, community activists and workers' rights advocates, much of it directed by the Service Employees International Union.

But it's not the typical union effort. Staffed largely by national political campaign veterans, it's a highly sophisticated publicity and Internet war aimed at the hearts, minds and wallets of Wal-Mart shoppers and employees.

"Actually, I'm shocked at how much we've penetrated into Wal-Mart's consciousness," said Andy Stern, SEIU's president. "We usually don't get this much attention so quickly."

Stern says the goal isn't to launch an immediate effort to organize the retailer's workers, but to change the company's behavior. That includes, he says, boosting workers' salaries, providing better health care and paying more attention to the U.S. communities in which the firm's 3,151 stores are located.

Wal-Mart officials said they were not worried about those who saw a fiercely anti-Wal-Mart documentary, "Wal-Mart: The High Cost of Low Price." It was shown last week in 8,000 venues, including at least 1,000 places of worship, said Robert Greenwald, its director and producer, who is not connected to the union drive.

The traditional route of showing the $1.8 million movie in theaters would have attracted people whose minds were already made up, Greenwald said. Instead, he wanted to reach "undecided or neutral" viewers.

The union bought 4,000 copies of the DVD for showings.

Company offers rebuttal

"The people who go there aren't the people who want to hear our message," said Wal-Mart spokesman Bob McAdam. Nonetheless, Wal-Mart promptly churned out a detailed written rebuttal of most of the points raised in the documentary.

"The company hasn't changed, but how we talk about us, has," said McAdam.

But others point out that Wal-Mart has clearly taken steps to garner public support. They include making Wal-Mart stores more environmentally friendly, supporting a hike in the minimum wage and introducing a less expensive health-care plan.

A Wal-Mart memo that was recently leaked to opponents shows the company's concern about "growing public scrutiny" and suggested ways to "move the needle on Wal-Mart's public reputation."

While Wal-Mart officials say the memo was only a proposal dealing with the growth of benefits costs, foes leaped on its suggestion that the company rely more on workers with fewer years on the job because they receive less in benefits.

So, too, the memo admitted that some critics were "correct" and that the company's health-care coverage "is expensive for low-income" families. Nearly half of the children of Wal-Mart employees are "either on Medicaid or are uninsured," the memo said.

How much of a price Wal-Mart has paid for the bad publicity is hard to measure.

"There's been an impact, but it's far less than the impact of energy prices and a lagging job market," said Mark Miller, an analyst with William Blair & Co., Chicago.

Chris Ohlinger of Service Industry Research Systems in suburban Cincinnati said consumers' trust in the company has "significantly declined" from several years ago. And that can translate into a 1 percent drop in sales, he added.

Ohlinger doubted that the impact would be long lasting. Most consumers, he said, are concerned more about low prices than anything else.

The difference now is that the battle against Wal-Mart has become much more like a polished political campaign.

Coordinating like-minded allies across the U.S. is Wal-Mart Watch, based in Washington, D.C., and initially funded by SEIU. It has reached out to environmental and religious groups for support.

Wake-Up Wal-Mart is a smaller Washington-based group that was launched by the United Food and Commercial Workers Union. Frustrated by its failure to gain a footing over a decade among Wal-Mart's workers, the UFCW now favors virtual campaigning instead of traditional organizing.

"It's community organizing in a new way," said Paul Blank, who heads Wake Up Wal-Mart and was national political director of Howard Dean's presidential campaign. "You give people the downloads and let them go out in their neighborhoods and come up with new ways to get your message out."

Wake Up Wal-Mart recently set up the Wal-Mart Workers Association and invited workers to join online or by telephone for the non-dues paying, non-union association. Blank said thousands of workers have shown an interest.

A loss after a four-month-long strike in 2004 by the food workers union against major grocers in Southern California convinced Stern that his own union needed to mount an offensive against Wal-Mart.

"It just crystallized to me that a company with this kind of size and power is going to either raise standards or lower standards," Stern recalled.

Group has multiple names

So Stern established Wal-Mart Watch, an umbrella organization. To workers it is known as the Wal-Mart Workers Association and to community groups and others, whom it also tries to reach, it is known as the Wal-Mart Alliance for Reform.

Much of the thinking behind the effort comes from Wade Rathke, the founder of ACORN, a nationwide organization of grass-roots activists, and chief organizer for an SEIU local in New Orleans.

"The notion of collective bargaining as it exists today is not feasible with a workforce of this scale or a company of this kind," he explained.

With nothing else like it within organized labor in the U.S., Rathke calls the Florida drive a step-by-step experiment. "We are very humble about this task. Certainly the results are tentative and embryonic."

Rick Smith, a one-time auto-parts worker from Toledo and longtime SEIU organizer who heads the Tampa-based effort, says Wal-Mart workers often are baffled by the concept.

There's no talk of a union election or contract. Monthly dues are $5, which only some of the 300 Wal-Mart workers who have signed up pay regularly. The dues, Rathke says, are more symbolic than anything else.

Though Wal-Mart has thousands of workers in the region from Tampa to Orlando, Smith is not discouraged by the small number of recruits.

"We've pretty much proven you can organize the workers," Smith said as he shepherded showings of the anti-Wal-Mart movie.

Since beginning its work earlier this year, his group has aided Wal-Mart workers, whose hours have been drastically cut by the company, to apply for partial unemployment benefits. It has helped them link up to find baby-sitters and car pools, and learn how to talk up their rights with company managers.

But it hasn't been easy, as organizers like Mavrick explained. It's hard reaching workers outside of the stores. It's hard persuading them to think as a group. And it is hard, they said, getting them to have the self-esteem to feel that they can do better for themselves.

Donna Geierman has no problems in speaking up on behalf of the fledgling association. But Geierman, a 13-year Wal-Mart veteran snarled in a worker's compensation dispute with the company, says most of her colleagues are too fearful.

"They are living from paycheck to paycheck. And they worry that they can lose their jobs," she said.

When Geierman visited a group of Wal-Mart workers taking a break outside their store here, the whispered talk between her and a handful of workers was about pressure to work the hours mandated by their bosses.

"If you don't work their hours, they say `Hit the road,'" a young man told Geierman. But then he stopped, dropped his head and lowered his eyes.

Another worker, not so friendly toward the association, had just taken a seat.




Patrick Goldstein

November 22, 2005

Showbiz people are prone to exaggeration, but when everybody is exaggerating about the same thing, you know something bad is happening. There's a dark cloud of unease hovering over Hollywood. A top CAA agent calls it "mayhem." A studio marketer says "it feels like Armageddon." A production chief puts it this way: "Each weekend there's more blood in the water."

Malcolm Gladwell might call it a tipping point.

The era of moviegoing as a mass audience ritual is slowly but inexorably drawing to a close, eroded by many of the same forces that have eviscerated the music industry, decimated network TV and, yes, are clobbering the newspaper business. Put simply, an explosion of new technology — the Internet, DVDs, video games, downloading, cellphones and iPods — now offers more compelling diversion than 90% of the movies in theaters, the exceptions being "Harry Potter"-style must-see events or the occasional youth-oriented comedy or thriller.

Anywhere you look, the news has been grim. Disney just reported a $313-million loss for films and DVDs in its fiscal fourth quarter. Sony has had a disastrous year, with only one $100-million hit ("Hitch") among a string of costly flops. DreamWorks not only has had theatrical duds but also saw its stock plummet when its "Shrek 2" DVD sales fell 5 million short of expectations. Even Warners, the industry's best-run studio, laid off 400 staffers earlier this month.

Although the media have focused on the economic issues behind this slump, the problem is cultural too. It's become cool to dismiss movies as awful. Wherever I go, teenagers say, with chillingly casual adolescent contempt, that movies suck and cost too much — the same stance they took about CDs when the music business went into free fall. When MPAA chief Dan Glickman goes to colleges, preaching his anti-piracy gospel, kids hiss, telling him his efforts don't help the public, only a few rich media giants. Say what you will about their logic, but, as anyone in the music business can attest, those sneers are the deadly sign of a truly disgruntled consumer.

There are still optimists who say the sky isn't falling, who insist that a few hits will turn things around, or gas prices will come down, or that the business being off 7% this year has more to do with the absence of a left-field sensation such as "The Passion of the Christ" than a long-term decline in moviegoing.

To them, I say — go ye to Costco or Best Buy and watch the giant HDTVs zooming out the door, the TVs that used to cost $7,500 and now go for $1,995, allowing middle-class people to have a marvelous moviegoing experience right at home without $10.50 tickets, $4 popcorn, 20 minutes of annoying commercials and some guy in the next row yakking away on his cellphone.

Once people spend all that money on a home entertainment system, they've got to feed the machine. I've watched friends who used to regularly go to theaters mutate into adjunct professors in DVD-ology, scanning the ads for the new video releases and rhapsodizing over Netflix the way other people swoon over TiVo or XM radio.

This only highlights the biggest crack in the system: that most of the movies in theaters don't deserve a theatrical release, at least not by the rules of today's game. Until the DVD and pay TV money kicks in, they're money losers. Yet the studios are forced to spend more marketing money every year to chase after increasingly resistant moviegoers, then go dark for months before spending another big chunk to remind people the DVD has arrived.

The studios have no one but themselves to blame. Motivated, as always, by an obsession with quarterly earnings, they began shrinking the DVD window from nine months to six months to 90 days. Universal's "The Skeleton Key," which opened in theaters in mid-August, made its DVD debut last week, barely three months later. When the six-month window still held sway, the theater beckoned — half a year felt like a long time away. Three months seems like just around the corner. All too many movies, even ones with big stars in them, including "The Weather Man," "In Her Shoes" and "Dreamer," have died on the vine, with millions of Americans staring at the TV spots and thinking, "I'll wait and see that on DVD."

And that's just the adult side of the equation. What's really driving the studio folks crazy is that a huge chunk of their core constituency — young moviegoers — has evaporated. Poof! They've scattered to the winds. Young males aren't just AWOL from movie theaters, they're also not seeing the studios' TV ads — either because they've stopped watching TV altogether, or because they've got the TV, iPod and IM all going at the same time — not exactly a situation in which an ad leaves much of an imprint. The only movies that are reliable drawing cards today are behemoths such as "War of the Worlds" or "Harry Potter," or cheap youth-oriented genre films such as "Saw II" or "The 40-Year-Old Virgin."

One of the movie industry's crucial failings is that it's simply too slow to keep up with the lightning speed of new technology. Who would've believed six months ago that the day after "Desperate Housewives" aired on ABC you could download the whole show on your video iPod? But when someone pitches a movie, it takes at least 18 to 24 months — if not far longer — between conception and delivery to the movie theaters. In a world now dominated by the Internet, studios are at a huge disadvantage in terms of ever lassoing the zeitgeist. Everybody is making movies based on video games, but it seems clear from the abject failure of movies such as "Doom" that it's almost impossible, given the slow pace of filmmaking, to launch a video game movie before the game has started to lose its sizzle.

New technology is also accelerating word of mouth. Thanks to instant messaging and BlackBerries, bad buzz about a bad movie hits the streets fast enough to stop suckers from lining up to see a new stinker. Even worse, the people who run studios are living in such cocoons that they've become wildly out of touch with reality.

That's the only explanation for why Sony Pictures could've imagined there was any compelling reason this summer to see a wan remake of "Bewitched." Or why none of the studio's highly paid executives questioned shoehorning an obscure family movie into the one-week window between the Disney-powered "Chicken Little" and the latest "Harry Potter" juggernaut, especially when the movie, "Zathura," has a title that sounds like it should be followed by the warning "side effects may include leakage or sexual dysfunction."

The ultimate perk of being a studio chief is having your own screening room, which puts only more distance between you and the rabble — ahem, your customers — who spend $75 to take the family to a movie. Too often studio people have the same ideas about the same things, a groupthink that has led them to anoint one Hot New Thing after another, from Josh Hartnett to Brittany Murphy to Kate Hudson to Colin Farrell, who've yet to connect with rank-and-file filmgoers.

What should studios do to come to grips with this new era? In a world bursting at the seams with new technology, it's hard to justify the antiquated idea of studio development, which keeps churning out movies such as "Be Cool," based on an Elmore Leonard novel from 1999 that was hilariously out of date by the time it reached theaters, with a storyline that revolved around Chili Palmer's exploits in the music business, perhaps the least cool place on the planet.

Hollywood needs a new mindset, one that sees a movie as something that comes in all shapes and sizes, not something that is wedded to the big screen. Studios have to do what record companies refused to do until they nearly went out of business: embrace the future.

People increasingly want to see movies on their terms, today on a big TV at home, tomorrow on an iPod or cellphone. It breaks my heart that people have fallen out of love with movie theaters, but if I were king, I'd start releasing any movie with multi-generational appeal on DVD at the same time it hit theaters, so the kids could get out of the house and the parents could watch at home.

The music business has already adopted this two-tier system, selling downloads and CDs simultaneously. TV networks are starting to do the same thing with their shows. It's only a matter of time before movies are forced to do the same. The day isn't far away — desperation being a great motivator for innovation — when a studio opens a blockbuster on Friday in theaters and on Saturday on pay-per-view (at $75 a shot) so fans can watch it with a bunch of friends at home.

As it stands, Hollywood has become a prisoner of a corporate mindset that is squeezing the entrepreneurial vitality out of the system. It's not just that studios are making bad movies — they've been doing that for years. They've lost touch with any real cultural creativity. When you walk down the corridors at Apple or a video game company, there's an electricity in the air that encourages people to believe they could dream up a new idea that could blow somebody's mind.

At the big studios, the creative voltage is sometimes so low that you wonder if you've wandered into an insurance office. The dreamers have left the building. Back in the 1950s, David Selznick, out walking one night with Ben Hecht, glumly said, "Hollywood's like Egypt, full of crumbling pyramids. It'll just keep on crumbling until finally the wind blows the last studio prop across the sands." As I said, show people like to exaggerate, but these days when I go around Hollywood, I can see the crumbling pyramids too.

© Patrick Goldstein, Los Angeles Times



By Jeffrey Fleishman

Tribune Newspapers: Los Angeles Times

December 6, 2005

BERLIN -- Martin "Amok" Thomas is jabbing a right, but Frank "so-cool-he-doesn't-need-a-nickname" Stoldt is as elusive as a ribbon in the wind. He can't be hit.


The gloves come off, and the men hurry across the canvas to the chessboard. (You heard it right.) Amok took a couple of body shots, and he's breathing hard, but he had better focus. That Stoldt, though, everyone in the gym knows he's this warrior-thinker, slamming the speed clock, cunningly moving his queen amid unraveling bandages and dripping sweat, daring Amok to leave him a sliver of opportunity.


Velcro rips. Amok slides back into his Everlast gloves, bites down on his mouthpiece, dances around the ropes. His king's in trouble, and his punches couldn't knock lint off a jacket. Stoldt floats toward him like a cloud of big hurt.

Such is the bewildering beauty of chessboxing. That's one word, as in alternating rounds of four minutes of chess followed by two minutes of boxing.


Victory is claimed in a number of ways, some of them tedious, but the most thrilling are by checkmate and knockout. The sport's godfather, Iepe "the Joker" Rubingh, believes that chessboxing is destined for the Olympics.

"It has enormous potential," says the Joker. "Chess and boxing are very different worlds. Chessboxers move around in both. It's extremely demanding, but extremely rewarding. It's all about control over your physical and mental being. The adrenalin rush in boxing must be lowered to concentrate on chess strategy."

Some will snicker. The Joker knows this. But he is not deterred.

Former world heavyweight champion Lennox Lewis is a devoted chess player. Ukrainian Vitali Klitschko, another heavyweight champ who recently retired, has a keen intellect, and knows what to do when a queen sidles toward his king. That's the kind of brawn and brain a clever marketing guy like the Joker thinks he can turn into success, not the novelty kind of success, but genuine prime-time, Caesars Palace spotlight success.

"I'd love to get them together," the Joker says of Lewis and Klitschko. "What do you think they want -- $30 million?"

Without marquee names, however, there is a potential drawback. Will people buy a beer and a hot dog and watch bare-chested smart guys in colorful satin shorts play chess? They will, the Joker believes, if the match coincides with the possibility of a knockout or spilled blood.

The World Chess Boxing Organization, founded by the Joker, 31, and some business partners, held its first European tournament in Berlin in October. Five hundred fans attended as Bulgarian Tihomir "Tigertad" Titschko became the new champion.

Titschko peers over a chessboard as if he's trying to deconstruct the theory of relativity, and he hits like a big man who just found out his girlfriend is cheating. He defeated Andreas "D" Schneider, a German actor in dark trunks who punched well but succumbed in the ninth round to Titschko's blistering chess attack, described as "the Dragon variation of the Sicilian defense."

11 rounds: 6 chess, 5 boxing

Chessboxers use words such as "aesthetics" and "arduous." They ponder performance art, science, philosophy; they study grids, angles and buried meanings in obscure books.

The rules might be considered simple: Eleven rounds, six of chess and five of boxing. The first round is always chess. "That's because," says the Joker, "if you go down in boxing there is no chess." A one-minute pause between rounds allows opponents to slip on and off gloves and for the chessboard to be moved in and out of the ring. If all is equal on the chessboard and the boxing scorecard after the 11 rounds, according to the rules, "the opponent with the black pieces wins."

Players are required to wear headphones during the chess part of the match. "This is so no one in the audience can yell out, `Hey, be careful of the knight on E-6,' " the Joker says.

The inspiration for chessboxing came to the Joker in 2003 after he glimpsed some dark magical realism in a comic by Enki Bilal. "It's a futuristic story, and there's a guy watching TV," says the Joker, "and on TV is a kind of chessboxing match."

The Joker learned chess young from his father, with whom he also watched American boxing matches on TV. The Joker studied German cultural history in college. A painter, photographer and video artist, he followed the bohemians to Berlin in 1997. Six years later, he traveled west to Amsterdam and took on Luis the Lawyer in the world's first "official" chessboxing match.

"We're too focused on defining sport in one way," he says. "Look at the old Olympics and the ancient Greeks. They had poets in the games, but in our society we want to divide things. I don't like borders. You try to tell a story through a game. Look at Muhammad Ali and George Foreman in the Rumble in the Jungle, or Bobby Fischer playing the Russians in chess." The Joker grabs his gear and crosses the street to the gym.

He slips into the basement, past trophies and punching bags. This is his domain, his club, where he trains potential stars. The guys are preparing.

They're an interesting bunch: There's Stoldt, a muscular Berlin cop and former amateur boxer, whose wife was searching the Internet one day when she came upon chessboxing and convinced him it might be his calling. There is Victor Abraham, a classically trained baritone with close-cropped black hair and a mustache, who also paints in the Bauhaus style. Jan Schulz, the club's trainer, can play two games of chess at once and still seem as if he could handle physics. And there's Amok, a Web site designer with good-size fists and long arms.

The Joker orders up a bout. Stoldt versus Amok.

Knights and bishops get a workout first. Then into the ring.

Though Amok has a nice reach, Stoldt is slipping in jabs and Amok is tiring.


Amok slides his queen to A-4, Stoldt drops a knight on G-3. Stoldt takes a knight with a bishop. Gloves back on. Amok is jabbing, but his arms are heavy. Gloves off. Amok goes to F-4 with a knight. Stoldt's pressuring. The queens wipe each other out. The ring again. Amok is sucking wind.

Smooth and quick, Stoldt goes for the kill. Knight to H-4.

The Joker pats Amok on the shoulder. Amok may be a contender one day.

The Joker has that same thought: Wouldn't it be the ultimate marketing coup for chessboxing to arrange a match between Lewis and Klitschko?

He smiles at the possibility. Here comes another thought: "Look at Russia, Ukraine. They're chessboxing nations and they don't even know it yet."

- - -

Other sport-game combinations we'd like to see

Twister kickboxing (right foot, green! left foot, opponent's face!)

Battleship water polo (you sank my goaltender!)

NASCAR Operation (remove gearshift from spleen)

Tai chi charades (you're a statue! no, a butterfly! toaster?)

Pole vaulting Monopoly (miss your height, go directly to jail)

Sudoku soccer (you gotta do something with all that down time)


Copyright © 2005, Chicago Tribune


Steven Johnson

Nov 29, 2001

The Nation

In 1813, as part of a correspondence with Isaac McPherson, Thomas Jefferson penned a mini-disquisition on the peculiar issues confronting patent law: "That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density at any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement, or exclusive appropriation."

Information, to borrow a more recent slogan, wants to be free. According to Lawrence Lessig's dazzling new book, The Future of Ideas, that freedom is under assault, despite recent technological developments that would seem to embody the Jeffersonian vision. "The digital world is closer to the world of ideas than to the world of things," Lessig writes. "We in cyberspace, that is, have built a world that is close to the world of ideas that nature (in Jefferson's words) created: stuff in space can 'freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition.'"

And yet the freedom of cyberspace and its capacity for mutual instruction is under fire. The very ethos of the web--a kind of organized anarchy, free of both government and private-sector control--has been gravely injured by recent events: changes in copyright law, changes to the underlying architecture of the net, changes in the competitive landscape of the digital economy. "The essence of the changes in the environment of the Internet that we are observing now are changes that alter the balance between control and freedom on the Net. The tilt of these changes is pronounced: control is increasing."

The word "control" itself is used advisedly. Lessig, now a professor at Stanford Law School, begins The Future of Ideas with a shout-out to his former colleague Andrew Shapiro, whose book The Control Revolution discussed the battle between control and freedom without necessarily predicting which side would win (or even which deserved to win). "Shapiro did not predict which future would be ours," Lessig explains. "Indeed, his argument was that bits of each future were possible, and that we must choose a balance between them. His account was subtle, but optimistic. If there was a bias to the struggle, he, like most of us then, believed the bias would favor freedom. This book picks up where Shapiro left off. Its message is neither subtle nor optimistic.... we are far enough along to see the future we have chosen. In that future, the counter-revolution prevails. The forces that the original Internet threatened to transform are well on their way to transforming the Internet."

Translated into another revolutionary's language, Lessig's is a story of all that is air melting into solid. But is our digital future--not to mention the present--really as grim as Lessig claims? Whether you accept the premise of Lessig's argument, The Future of Ideas confirms what his first book, Code and Other Laws of Cyberspace, originally promised: Lessig is one of the brightest minds grappling with the consequences of the digital world today, as deft and original with technical intricacies as he is with broad legal theory. He manages to breathe new life into seemingly exhausted economic ideas--his take on the tragedy of the commons is likely to entrance even the most jaded game theorist--and tell some fascinating stories along the way, on the freewheeling early days of the radio spectrum, or the distributed computing project that harnesses spare processing cycles from thousands of computers around the world to search for evidence of extraterrestrial life.

The Future of Ideas is also a deeply iconoclastic work, at least when measured against the standard assumptions of American politics. Lessig is sometimes cast as a trustbusting progressive after his brief involvement as "special master" in the Microsoft antitrust case (appointed by Judge Thomas Penfield Jackson to advise the court, he was subsequently removed after Microsoft protested that he was biased against the software giant). Lessig's positions can seem contradictory: The book is fundamentally a celebration of decentralized innovation, and yet it is deeply distrustful of too much power concentrating in the hands of the private sector. Lessig is no middle-of-the-road New Democrat: He's a radical critic who doesn't fit readily into any existing ideological camp. In a sense, you can see his politics as distinctly net-native, closest in spirit to those of open-source software, the semi-anarchic collective movement that engineered the now-legendary Linux operating system.

Open-source software projects tilt heavily in the direction of freedom: No one owns the underlying code behind Linux, and thousands have contributed to it. The software grows more sophisticated over time for three central reasons: (1) The ethos of the hacker community has a strong communitarian tradition that encourages contributions, which are rewarded only by the respect of one's peers, (2) modern software applications are modular enough to be built by committee, with thousands of dispersed participants chipping in their ideas, and (3) because the code base is openly shared with anyone interested in looking at it--unlike Microsoft's hidden Windows source code--interesting new ideas "freely spread from one to another over the globe," if not for the moral and mutual instruction of man, then at least for the improvement of his printer drivers.

This is the story of the triumph of the commons--free of both government and corporate control. Its principles animate nearly every page of Lessig's book: a mix of the libertarian's contempt for centralized control and the socialist's belief in the power of communal property. This would sound schizophrenic and impractical if it weren't for the empirical success of open-source projects like Linux, or the widely used web server Apache--or indeed the web itself, which was founded on nonproprietary standards. What are the politics of these new systems? It is not, according to Lessig, "the traditional struggle between Left and Right or between conservative and liberal. To question assumptions about the scope of 'property' is not to question property. I am fanatically pro-market, in the market's proper sphere.... The arguments I draw upon...are as strongly tied to the Right as to the Left.... Instead, the real struggle at stake now is between old and new."

The trouble, as Lessig sees it, is that the new is being challenged by the old. We are in the midst of a kind of digital-age Restoration, in which the old emperors of centralized control are returning to power after a brief but dizzying spell of Glorious Revolution. The free flow of code and information is being channeled once again in conventional directions, and the burst of innovation and media diversity that the Net produced over the past decade is regressing to the days of concentrated media oligarchies. "The promise of many-to-many communication that defined the early Internet will be replaced by a reality of many, many ways to buy things and many, many ways to select among what is offered," Lessig writes. "What gets offered will be just what fits within the current model of the concentrated systems of distribution."

Lessig cites a number of recent developments to support his grim prognosis, including the increased role of cable companies in the net economy:

As the Internet moves from telephone wires to cable, which model should govern? When you buy a book from, you don't expect AOL to demand a cut. When you run a search at Yahoo!, you don't expect your MSN network to slow down anti-Microsoft sites. You don't expect that because the norm of neutrality on the Internet is so strong...
      But the same neutrality does not guide our thinking about cable. If the cable companies prefer some content over others, that's the natural image of a cable provider. If your provider declines to show certain stations, that's the sort of freedom we imagine it should have...
      So which model should govern when the Internet moves to cable? Freedom or control?

Lessig is not optimistic about the cable companies' ability to adapt to the open-access neutrality that has been a founding principle of the Internet to date--particularly when those companies are part of massive content empires like AOL Time Warner. Lessig is typically persuasive in his argument against these controlled systems, an argument that he brilliantly mounts not by thundering against "evil" corporations but rather by pointing to the success of previous open systems whose existence we now take for granted. "When the United States built its highway system, we might have imagined that rather than fund the highways through public resources, the government might have turned to Detroit and said, Build it as you wish, and we will protect your right to build it to benefit you. We might then imagine roads over which only American cars can run efficiently, or exits and entrances that tilt against anything built outside Detroit." Instead, the government built a highway system that was open to all users and (almost all) uses--a foundation for commerce and recreation that was biased only in the sense that it "tilted" against mass transport. The Net was an equivalently open platform, engendering a thousand unforeseen uses--everything from sharing music files to creating hypertext archives of public domain books to hosting online auctions for Pez dispensers and million-dollar artworks. The strength of the system lay in the fact that there were no gatekeepers deciding which were approved activities and which weren't.

Lessig is particularly concerned about the resurgence of gatekeepers in the domain of copyright law. The past few years have witnessed a dramatic expansion in the legal rights granted to copyright holders: Books can now take more than a hundred years to enter the public domain, and entertainment industry organizations like the Motion Picture Association of America and the Recording Industry Association of America have won a number of high-profile lawsuits--most notably against Napster and against the hackers who broke the DVD compression scheme and distributed it over the web. Lessig's fear is that the great connectedness produced by the Net may lead to a system of near-perfect copyright control, as all appropriations of intellectual property can theoretically be tracked, and unlawful appropriations prohibited. Jefferson's "freely spreading ideas" starts to look more like Foucault's take on Bentham's panopticon, with every bitstream monitored for pilfered data. "The content layer--the ability to use content and ideas--is closing," Lessig writes. "It is closing without a clear showing of the benefit this closing will provide and with a fairly clear showing of the harms it will impose.... [It is] mindless locking up of resources that spur innovation. Control without reason."

The Future of Ideas succeeds marvelously at its primary task, which is to persuade the reader of the virtues of a balance between control and freedom in this new world, and of the importance of understanding how technological changes can unintentionally alter that balance. (In this respect, the book builds on the argument of Code, which demonstrated the ways in which software architecture possesses the force of law in digital environments.) There may be no thinker today grappling more tenaciously with the legal issues unleashed by the digital revolution, and the book's maverick positioning on the conventional political spectrum should make it a landmark work for that reason alone. Ever since the open-source software movement entered mainstream culture, its followers have been wondering about what a larger political philosophy based on its values would look like. The Future of Ideas is the first significant step in the formulation of that philosophy.

That said, it's hard to read a book that makes such bold claims about such a dynamic and complex field, and not pose a few counterarguments, even if they run against the grain of your habitual assumptions about the world. I've long shared Lessig's amazement at the explosion of ideas and new voices unleashed by the Net over the past decade, but because his argument rests so heavily on this premise--the uncontrolled nature of the Net's underlying architecture as an unparalleled engine for innovation--I found myself questioning the assumption the more I heard it repeated. Two potential objections spring to mind. First, the Internet proper is more than three decades old; its open protocols have been evolving steadily since then, and yet compared with other high-tech industries over that period--personal computers, semiconductors, nontelecom software--its overall rate of innovation was not particularly noteworthy until the mid-1990s, when the web took off. The period that followed was without question one of tremendous innovation, but it was also a period bankrolled by an unprecedented infusion of venture capital, which fueled both the exploration of just about every conceivable web-based activity and the mass adoption of the medium itself. Now, it may well be that the capital influx was a secondary effect, and the primary cause of the explosion was the Net's open protocols. But then why did it take so long to blossom?

The distinction wouldn't matter so much if Lessig didn't point to the Net's track record of innovation so often in his argument for maintaining--or replicating--its distinctive balancing act between too much control and too much freedom. Consider another area of software development: applications created for the DOS/Windows platform over the past ten years. In areas where Microsoft doesn't control the market with its own products--pretty much everything other than the core applications in MS Office--the Windows-based software industry has produced a staggering number of programs in a short amount of time, including whole new genres of software: sales-force-automation applications, accounting packages, video-editing tools, games. The Windows software ecosystem is broad enough to support huge corporate giants with millions of customers, along with niche producers selling to tiny markets. (It has also managed to cultivate something that the web has had trouble with thus far: profitable companies.) And yet Windows is the epitome of a closed architecture, its source code controlled by a mighty centralized authority and defended by a phalanx of lawyers. So where does that innovation come from? It's worth remembering that the Napster client software itself, while inconceivable without the underlying connectivity offered by the Net, was nonetheless originally written for the Windows platform.

Napster brings us back to the question of Lessig's pessimism, and his vision of a control counterrevolution. Nowhere is Lessig's dark outlook more convincing than in his survey of recent changes to copyright law, and yet even here the dystopian tone seems unwarranted: "The content layer--the ability to use content and ideas--is closing." Closing on what time scale? Compare my ability to copy books, music tracks and video clips today with what it was just five years ago. Electronic books barely existed, and so copying books meant a laborious trip to Kinko's; borrowing music from a friend meant swapping cassette tapes; and the idea of high-quality video residing on your hard drive was laughable, given the slower CPUs and smaller hard drives of the day. Even after the shutdown of Napster, I have access to terabytes of music files via the more distributed--and thus harder to shut down--Gnutella service, and soon those Napster-descendants will be serving full-length movies as well. The law may be cracking down on the technological explosion that made all this possible, and thus in some sense it might be true to say that "control is increasing," particularly if you're trained as a law professor. But on the ground--or perhaps it's better to say in the ether--the technology is still outmaneuvering the counterrevolutionaries. That's not cause to ignore Lessig's warnings, or ignore the remarkably sophisticated model of technopolitics that he develops in The Future of Ideas. But perhaps it's reason to hope that the forces of freedom--if they have technology on their side--are still stronger than the forces of control.



By Karen Breslau And Daniel McGinn


Oct. 17, 2005 issue - Given his fondness for movies, it's not surprising that Reed Hastings thinks about the future of home entertainment in terms that sound like they're drawn straight from "Star Wars." The story line, according to the founder of the DVDs-by-mail pioneer Netflix, goes something like this: as DVDs slowly give way to online movies, consumers will face a stark choice. Will they side with the "Forces of Control" or the "Forces of Freedom"? The Forces of Control are the cable and satellite companies, which offer 50 to 500 channels of content that's chosen by network programmers.

Opposing this bunch are the Forces of Freedom, a group of companies that includes Netflix, TiVo, Apple, AOL and Yahoo. Together the freedom fighters will offer something like 5 million channels via the Internet, giving consumers the ability to watch just about anything they'd like, whenever they'd like. "Instead of an electronic program guide that viewers scroll through with a remote control, the keyboard will be your remote control and your program guide will be Google or Yahoo," says Hastings.

It's no secret which team Hastings, 45, is betting on. He built his empire on the premise that people would rather have their rental movies delivered to their mailboxes than have to haul themselves to the video store, enduring the inconvenience and dreaded late fees along the way. As a tech company, Netflix was a contrarian play. Even at the height of the dot-com boom, when Silicon Valley buzzed with the promise of "transformative technologies" and "fat pipes" that would allow consumers to quickly download all manner of content, Hastings built Netflix on two disarmingly retro technologies: the DVD and the United States Postal Service. For a monthly subscription fee averaging $17.99, consumers would be treated to an unlimited number of rented DVDs, most delivered within a day of being ordered online. "People were talking about beaming movies to wristwatches," Hastings says. "We tried not to get drunk on the future, but actually to predict it accurately."

So far, so good. Today Netflix has 3.2 million members, and last quarter it posted $5.7 million in profit. In the imitation-is-flattery department, last year Blockbuster executives, who once sneered that Netflix would never appeal to the masses, launched a service modeled on Hastings's creation. Earlier this year Wal-Mart pulled out of a similar effort to copy Netflix, signaling that even America's biggest retailer couldn't compete with Hastings's freedom fighters. But as far as the Netflix gang has come on old-school technology, its founder doesn't dispute the notion that before long, many people will begin watching movies at home over broadband. As that era dawns, Netflix faces a tricky pivot, requiring a new business model. Although Hastings believes digital delivery will not eclipse DVDs for at least a decade—and that high-definition DVDs will linger long after that—he is moving now to transform the company from a mail-order retailer into an Internet content provider.

As Silicon Valley visionaries go, Hastings has an unusually diverse resume. After graduating from Bowdoin in the early 1980s, he joined the Peace Corps and taught math in Swaziland. He returned to California, earned a master's in computer science and became a computer programmer, selling his first company, Pure Software, in the mid-1990s for $750 million. He then attended Stanford's graduate school of education. In 2000, he teamed up with venture capitalist John Doerr on a ballot initiative to improve school funding in California. After that Hastings served briefly as head of TechNet, the Silicon Valley lobbying group, and became president of the California Board of Education.

He got the idea for Netflix after renting "Apollo 13" from his local Blockbuster store. After running up $40 in late fees, Hastings began wondering why video stores don't offer members unlimited access for a flat fee, like a health club. His first business plan relied on mailing VHS cassettes to customers, but in 1997 he realized that DVDs, new at the time, would be easier to mail. In 1998, he started Netflix from a warehouse in Silicon Valley.

Today there are 35 warehouses around the country, with an inventory of 42 million discs. Each day they receive 100,000 new movies from studios and mail out 1 million DVDs to customers. A corner of the original warehouse in Los Gatos serves as the Netflix museum, including plastic remnants of the racks that workers used to pluck DVDs from, by hand. Today those chores are automated. Workers still open envelopes, stopping every 45 minutes for ergonomic exercises.

They'll need to stay limber: in tech-savvy markets like San Jose, Oakland and Fremont, Calif., more than 10 percent of households subscribe to Netflix. Hastings expects to have more than 4 million subscribers by year-end, and more than 20 million by 2010. Hastings sees downloadable movies as a long-term threat, but predicts that his snail-mail model will be around another 20 to 30 years, both because DVDs are portable and convenient for customers and because movie studios "want DVDs to last a long time because it's such a revenue and profit driver."

But that may be optimistic. More than half of American households now have a broadband connection to the Internet. And flourishing peer-to-peer tools such as BitTorrent make it easy for those consumers to download TV shows and movie clips. Digital music sharing has driven worldwide music sales down 13 percent in the past six years, and video downloads could do the same to mail-order movies. Forrester Research analyst Ted Schadler says it's "outrageous" to expect the Netflix model to dominate the market for decades: "I'd give it five years or so" before video on demand and other rivals eat away at its competitive position.

Netflix boosters say it can survive even if DVDs go obsolete. Marketing textbooks traditionally talk about how railroad companies stopped growing because they saw only railroads, not the wider transportation business; but analysts seem to trust Netflix's ability to think big. Safa Ratschy, managing partner at Piper Jaffray, says Netflix is a lot more than a DVD-by-mail distribution system; its real asset is a subscriber base that has learned to order on the Web, and is likely to stick with Netflix as the digital era arrives.

Hastings envisions Netflix becoming a destination Web site offering a mix of content: free, ad-supported, premium pay-per-view and subscription. And with many Americans upgrading to big-screen, high-definition TVs, Hastings is betting they won't be watching on computer monitors. That is part of the rationale behind a deal announced last year between Netflix and TiVo, which makes personal video recorders that connect televisions to the Internet. Hastings won't elaborate on plans for a "joint entertainment offering" —and says that Netflix is interested "generically" in all platforms linking TV and Internet—but outsiders say it's a smart match. "If you can deliver video to the box that's connected to the TV and not to the PC, you're way ahead of the game," says Gartner media analyst Laura Behrens.

In talking about Netflix's future, Hastings repeatedly brings up AOL. He acknowledges it's an odd choice, given the online pioneer's disastrous merger with Time Warner. "It's not like we're going to say, 'OK, we want to be the next AOL'," he says. Still, Hastings sees in AOL a subscription-based survivor: once a dial-up service for online newbies, today AOL acts more like an infotainment portal, and has retained many subscribers who get broadband service from another company. He also sees HBO and DirecTV as having elements of the future Netflix model; both are nationwide distributors that can cater to niche and local markets. Of course, if a big media conglomerate moves in to buy Netflix, a very attractive target, the future won't be Hastings's problem.

There's another, lower-tech industry that may foreshadow the challenges Netflix faces: automobiles. For years pundits have prophesied the replacement of the gasoline engine by pollution-free fuel-cell-powered cars. Those predictions inspired automakers to spend billions in R&D to prepare for this new world, which seems to be coming much more slowly than experts expected. Similarly, Netflix is preparing for the future, even though it believes the era of online movies is more distant than some others do. "We're starting to invest now, even though there's no real market for it today, so that when it comes, we're ready," Hastings says. "[But] DVD will last as long as the gasoline engine, newspaper—any of your 'obsolete in the very long term' industries." If he's wrong, those 42 million DVDs he owns will make for one heckuva yard sale.

With Nicole Joseph in New York and Brad Stone in San Francisco



Jonathan Mahler

December 25, 2005

When Cmdr. James Stockdale parachuted out of his nose-diving Skyhawk over the North Vietnamese jungle in September 1965, the war was still young. Little was known about the fate that awaited American prisoners of war. It didn't take Stockdale long to gain a clearer sense. After a few months in solitary confinement in Hoa Lo prison in Hanoi, he was introduced to "the ropes," a torture technique in which a prisoner was seated on the floor - legs extended, arms bound behind him - as a guard stood on his back and drove his face down until his nose was mashed into the brick floor between his legs. The North Vietnamese knew they were overmatched militarily, but they figured they could at least win the propaganda war by brutalizing American P.O.W.'s until they denounced their government and "confessed" that they had bombed schoolchildren and villagers.


For his part, Stockdale intended to return home with his honor intact. One afternoon, he was given a razor and led to the bathroom - a sure sign that he was being readied for a propaganda film. Instead of shaving, Stockdale gave himself a reverse Mohawk, tearing up his scalp in the process. More determined than ever now, his captors locked him in the interrogation room for a few minutes while they fetched a hat for him. Stockdale glanced around, looking for an appropriate weapon. He considered a rusty bucket and a windowpane before settling on a 50-pound stool, and proceeded to beat himself about the face. Then, realizing that his eyes were not yet swollen shut, he beat himself some more. By the time the guards had returned, blood was running down the front of his shirt. For the next several weeks, Stockdale kept himself unpresentable by surreptitiously bashing his face with his fists. The North Vietnamese never did manage to film him.


As Hoa Lo filled with American shootdowns - it would become known among prisoners as the Hanoi Hilton - Stockdale transformed a loose colony of destabilized P.O.W.'s into a tightly knit underground resistance movement with its own language (an alphabetical tap code) and laws. Stockdale was the highest-ranking Navy P.O.W., but his authority derived less from seniority than from that rare blend of virtues that enables a small minority of men to thrive in what the Prussian military philosopher Carl von Clausewitz called the province of danger.


Inside the interrogation room, the military's Code of Conduct, which presupposes adherence to the Geneva Conventions, was of little value. The torture was simply too intense to limit statements to name, rank, serial number and date of birth. So Stockdale created new rules designed both to protect America's war effort and to keep P.O.W.'s alive. Stockdale ordered his men to endure as much physical abuse as they could before acceding to any of their interrogators' demands - the key, in his view, to preserving a sense of dignity - and to always confess to fellow inmates everything they had been forced to divulge. To carry an unclean conscience was to risk descending into a spiral of guilt and shame that would make them only more vulnerable to themselves and their captors.


Desperate as he was to return to his wife and four boys in Southern California, Stockdale was so adept at living through privation and pain that he came to feel at home inside Hoa Lo. He recalled long-forgotten details from his childhood, calculated natural logarithms with a stick in the dust and pondered the physics of musical scales. As he saw it, he was still at war, only it wasn't the Navy that had prepared him for this sort of battle, it was two ancient Greek philosophers. From Aristotle, Stockdale had learned that free will can exist within a state of imprisonment. From Epictetus, the influential Stoic, he had learned about our ability to shape experience by perception: as months of solitary confinement in leg irons and brutal beatings turned to years, Stockdale would remind himself that "men are disturbed not by things but by the view that they take of them." Most of all, he became absorbed in his battles with his captors, whether that meant planting fake notes for guards to discover or gleefully "tapping" his tales of interrogation-room intransigence to his neighbors.


Not long after he was finally released in early 1973, Stockdale said he had no intention of becoming a professional ex-P.O.W., yet his 2,714 days in captivity powerfully shaped the rest of his life. Stockdale drifted professionally - not like the stereotypically disillusioned Vietnam vet, but in nevertheless unmistakable ways. He was given different peacetime commands, all of which felt like comedowns from his service in Vietnam, both as a commander and as an underground prison boss. "In those jobs under life-and-death pressure, what I said, what I did, what I thought, really had an effect on the state of affairs of my world," he would later reflect.


Stockdale retired from the Navy in 1979 to become president of the Citadel, a civilian military college in South Carolina, but quit a year later when the board blocked his efforts to rein in the school's out-of-control culture of hazing. ("When you've been tortured by professionals, you do not have to put up with amateurs," he told a friend, explaining his abrupt decision to resign.)

Then came Stockdale's ill-fated foray into politics. His friend Ross Perot had assured him that he would be only a placeholder until he could find a suitable running mate for the 1992 presidential election - a couple of weeks, Perot told him. Stockdale had spent longer blindfolded, naked on the floor, with an untreated broken leg in his cell in Vietnam. He figured he could get through this fine.


He didn't. After delivering the unforgettable opening line in the vice presidential debate - "Who am I? Why am I here?" - Stockdale was reduced to a national laughingstock. Even then there was a whiff of tragedy, a sense that he deserved better, but he disappeared from the public stage before much more could be said about him. He was last seen by many Americans in the person of Phil Hartman on "Saturday Night Live."

The former fighter pilot found solace in the world of ideas. He was inevitably pulled back to Hoa Lo, and to a better understanding of the qualities that enable certain men to stand up and turn their world around - "the rising of the few," as he called it. For guidance, Stockdale turned to the writings of other ex-prisoners: Viktor Frankl, Aleksandr Solzhenitsyn, Fyodor Dostoyevsky. Stockdale gradually came to see heroism not as a matter of consistent good judgment but as a single act, or series of acts, performed in a particular context. And he came to see heroes not as people who had carried out their duty with distinction but as individuals who had, like himself, done something no reasonable person would ever have felt justified asking them to do.




December 25, 2005

He was prodded to his feet by rifle barrels. This was northern Tunisia, Nov. 29, 1942. Just after midnight, following faulty intelligence, he had led his men into enemy territory. Now he stood surrounded, trembling, sickened by the gravity of his mistake. A German sergeant jammed a pistol into Joe Frelinghuysen's gut and joked that he'd been easier to catch than a rabbit.

This indignity stayed with him. He was flown to Italy and put in a series of Fascist-run prison camps in a mountainous area northeast of Rome. There he studied the walls, filled with self-recrimination.


He had been easier to catch than a rabbit. It wouldn't happen again. He nourished himself with potato peels sneaked from the cook's garbage.


He did push-ups, paced the fence line of the camp. A guard gave him an Italian grammar book, which he studied feverishly, memorizing 30 words a day.


He had been imprisoned nearly 10 months - had lost 70 pounds and was tormented by nightmares - when Dick Rossbach showed up. Rossbach didn't give a damn. He had tried to escape twice already, once by flinging himself out the window of a moving train near the Austrian border. Fear, Rossbach liked to say, was better than a martini. Nine days later, they left together, wriggling in the twilight under one barbed-wire fence, then another, then under two more, and finally scaling a wall. They climbed a dark mountain and did not look back.


Life from here on out boiled down to one thing: not getting caught.


With his grasp of Italian, Frelinghuysen was in charge of begging for food. In a small, silent village, he encountered an old woman who, upon seeing him, gathered her skirts and fled. The Germans had posted signs: anyone caught helping a prisoner would be shot. So they slept in caves and in clearings in the woods. The only warm night came when a sympathetic shepherd allowed them to sleep pressed up against his pregnant sow. They ate what was handed to them: apples, cheese, a sheep's lung wrapped in newspaper.


What felt perverse was how beautiful the setting was. Frelinghuysen would later describe in his memoir the stunning beech and oak forests of those valleys in the region called Abruzzi - the flaming colors of autumn and the toylike villages stacked into the hills.


After three weeks, it felt like the end. Rossbach had damaged his knee and could barely walk. Their skin was filthy; their bodies were malnourished. If this was freedom, they were ready to trade it for something that less approximated a slow, limping death.

But then, high in the Apennine mountains, more than an hour's walk from the nearest road, they came upon a stooped man with a black mustache, working his fields with a hoe. He wore a sweat-stained hat and smiled easily. When they asked for food, he directed them toward a stone farmhouse.


Being hunted by soldiers with dogs can kill a person's faith, but there are simple things that revive it. Inside the farmhouse, Rosa DiGiacomantonio - the wife of the man with the hoe - served them sausage with steaming polenta, scooped from an iron caldron on the hearth, flavored with garlic and pepper, drizzled with rich olive oil. She poured two glasses of red wine.

Already, she had fed dozens of these filthy, starved men. She'd chopped, simmered and ladled up just about everything her family had harvested that summer. But the prisoners kept coming. The farmhouse sat on a grassy saddle between mountain ranges, a natural stopping point for P.O.W.'s picking their way south toward the Allies. The only things she wouldn't share were the chickens' eggs, which she fed to Letizia, her hugely pregnant daughter-in-law. That night, when the two fugitives asked for a barn to sleep in, Antonio, Rosa's husband, led them instead to a candlelit room with a double bed and spotless white sheets and a pink blanket. He all but tucked them in.


Back home, Joe Frelinghuysen had a wife named Emily and two small children who wouldn't recognize him. To think of them was painful, so he avoided it. But for more than two weeks in October 1943, he and Rossbach had the DiGiacomantonios, who fed and helped shelter them and who jubilantly shared a bottle of tawny wine with them on the night when Letizia, cloistered upstairs, gave birth to a son. Letizia's husband, Berardino, became a friend - traveling to town to gather news of the war, arranging for a doctor to look at Rossbach's knee. The doctor wrapped his knee, but there was no time for it to mend. Nazi patrols were now searching houses in the valleys below.


One day when the two Americans were out doing reconnaissance, they were surrounded by three German soldiers on a hillside. Rossbach grabbed the barrel of the corporal's gun and pointed it to his chest, announcing that he was injured and demanding that they shoot him right there. It was grand theater - Frelinghuysen knew this - a diversion so that he, the only one able, could run. When the soldiers marched them inside a nearby hut, Rossbach offered food to their bewildered captors. When the Germans set down their rifles to eat, Rossbach looked impatiently at Frelinghuysen, then at the door.


He ran like hell. Leaving his friend to an uncertain fate, he hurled himself outside and down a slick embankment. He ran for miles, soaked by a hard, cold rain. For a while, they chased him. Their bullets ricocheted off the trees.


He ran, then walked, then finally dragged himself toward safety.


He crossed into British territory 12 days later on Nov. 15, 1943.


He returned to his house in New Jersey - to Emily and the kids, to his 1937 Dodge and his job in insurance. He was snappish, disoriented. This time he had made it - and still he was full of self-recrimination.


Some soldiers put war behind them, and others live it forever. Joe Frelinghuysen wanted only to make amends. He tracked down Dick Rossbach, who had been badly beaten by the three soldiers and then trucked to German prisons, eventually landing in a Russian camp. Undaunted, Rossbach escaped again, talking his way to the American Embassy in Moscow. When Frelinghuysen unloaded his guilt over having abandoned him, Rossbach dismissed it. "Oh, for God's sake, Joe," he said. For many years they met regularly in Manhattan for lunch.


The DiGiacomantonios survived the war, though German soldiers lined up 26 people against a stone wall in a nearby village and shot them for helping prisoners. In 1956, Frelinghuysen returned to the farmhouse, carrying a gift of gold rosary beads for Rosa. While his parents were content to stay on the farm, Berardino expressed a desire to emigrate. Frelinghuysen found a way, creating a position in his family's dairy business for an Italian cheesemaker. He settled Berardino, Letizia and their children into a home near his own, and over time, let go of his guilt.


They live there still, Berardino and his wife. During 1943 and 1944, they say, their family harbored more than 100 escaped American and British soldiers. They can recall the desperation and hunger and nervous gratitude these men showed at the time. But only one found them again after the war. No one else even sent a letter.






December 25, 2005

In February 1942, a little more than two months after the attack on Pearl Harbor, Franklin Roosevelt signed Executive Order 9066, which effectively decreed that West Coast residents of Japanese ancestry - whether American citizens or not - were now "enemy aliens." More than 100,000 Japanese-Americans reported to government staging areas, where they were processed and taken off to 10 internment camps. Fred Korematsu, the son of Japanese immigrants, was at the time a 23-year-old welder at Bay Area shipyards. His parents left their home and reported to a racetrack south of San Francisco, but Korematsu chose not to follow them. He stayed behind in Oakland with his Italian-American girlfriend and then fled, even having plastic surgery on his eyes to avoid recognition. In May 1942, he was arrested and branded a spy in the newspapers.


In search of a test case, Ernest Besig, then the executive director of the American Civil Liberties Union for Northern California, went to see Korematsu in jail and asked if he would be willing to challenge the internment policy in court. Korematsu said he would. Besig posted $5,000 bail, but instead of freeing him, federal authorities sent him to the internment camp at Topaz, Utah. He and Besig sued the government, appealing the case all the way to the Supreme Court, which, in a 6-to-3 decision that stands as one of the most ignoble in its history, rejected Korematsu's argument and upheld the government's right to intern its citizens.

After the war, Korematsu married, returned to the Bay Area and found work as a draftsman. He might have been celebrated in his community, the Rosa Parks of Japanese-American life; in fact, he was shunned. Even during his time in Topaz, other prisoners refused to talk to him. "All of them turned their backs on me at that time because they thought I was a troublemaker," he later recalled. His ostracism didn't end with the war. The overwhelming majority of Japanese-Americans had reacted to the internment by acquiescing to the government's order, hoping to prove their loyalty as Americans. To them, Korematsu's opposition was treacherous to both his country and his community.


In the years after the war, details of the internment were lost behind a wall of repression. It was common for Japanese-American families not to talk about the experience, or to talk about it only obliquely. Korematsu, too, remained silent, but for different reasons. "He felt responsible for the internment in a sort of backhanded way, because his case had been lost in the Supreme Court," Peter Irons, a legal historian, recalled in a PBS documentary. Korematsu's own daughter has said she didn't learn of his wartime role until she was a junior in high school.


Korematsu might have faded into obscurity had it not been for Irons, who in 1981 asked the Justice Department for the original documents in the Korematsu case. Irons found a memo in which a government lawyer had accused the solicitor general of lying to the Supreme Court about the danger posed by Japanese-Americans. Irons tracked down Korematsu and asked if he would be willing, once again, to go to court.


Perhaps Korematsu had been waiting all those years for a chance to clear his name. Or maybe he saw, in Irons's entreaty, an opportunity to vindicate himself with other Japanese-Americans. Whatever his thinking, not only did Korematsu agree to return to court but he also became an ardent public critic of the internment.


When government lawyers offered Korematsu a pardon, he refused. "As long as my record stands in federal court," Korematsu, then 64, said in an emotional courtroom oration, "any American citizen can be held in prison or concentration camps without a trial or a hearing." The judge agreed, vacating Korematsu's conviction from the bench. Just like that, the 40-year-old verdict against him was erased.


In the last decade of his life, Korematsu became, for some Americans, a symbol of principled resistance. President Clinton awarded Korematsu the Presidential Medal of Freedom in 1998. Six years later, outraged by the prolonged detention of prisoners at Guantánamo Bay, Korematsu filed an amicus brief with the Supreme Court, warning that the mistakes of the internment were being repeated. Still, Korematsu's place among contemporaries in his own community remained obscured by lingering resentments and a reluctance to revisit the past. When he died from a respiratory illness in March, not a single public building or landmark bore his name. It wasn't until last month that officials in Davis, Calif., dedicated the Fred Korematsu Elementary School. It was an especially fitting tribute for Korematsu, whose legacy rested with a generation of Japanese-Americans who were beginning to remember, at long last, what their parents had labored to forget.





 Chicago Tribune Editorial

December 28, 2005

Did President Bush intentionally mislead this nation and its allies into war? Or is it his critics who have misled Americans, recasting history to discredit him and his policies? If your responses are reflexive and self-assured, read on.

On Nov. 20, the Tribune began an inquest: We set out to assess the Bush administration's arguments for war in Iraq. We have weighed each of those nine arguments against the findings of subsequent official investigations by the 9/11 Commission, the Senate Intelligence Committee and others. We predicted that this exercise would distress the smug and self-assured--those who have unquestioningly supported, or opposed, this war.


The matrix below summarizes findings from the resulting nine editorials. We have tried to bring order to a national debate that has flared for almost three years. Our intent was to help Tribune readers judge the case for war--based not on who shouts loudest, but on what actually was said and what happened.

The administration didn't advance its arguments with equal emphasis. Neither, though, did its case rely solely on Iraq's alleged illicit weapons. The other most prominent assertion in administration speeches and presentations was as accurate as the weapons argument was flawed: that Saddam Hussein had rejected 12 years of United Nations demands that he account for his stores of deadly weapons--and also stop exterminating innocents. Evaluating all nine arguments lets each of us decide which ones we now find persuasive or empty, and whether President Bush tried to mislead us.

In measuring risks to this country, the administration relied on the same intelligence agencies, in the U.S. and overseas, that failed to anticipate Sept. 11, 2001. We now know that the White House explained some but not enough of the ambiguities embedded in those agencies' conclusions. By not stressing what wasn't known as much as what was, the White House wound up exaggerating allegations that proved dead wrong.

Those flawed assertions are central to the charge that the president lied. Such accusations, though, can unfairly conflate three issues: the strength of the case Bush argued before the war, his refusal to delay its launch in March 2003 and his administration's failure to better anticipate the chaos that would follow. Those three are important, but not to be confused with one another.

After reassessing the administration's nine arguments for war, we do not see the conspiracy to mislead that many critics allege. Example: The accusation that Bush lied about Saddam Hussein's weapons programs overlooks years of global intelligence warnings that, by February 2003, had convinced even French President Jacques Chirac of "the probable possession of weapons of mass destruction by an uncontrollable country, Iraq." We also know that, as early as 1997, U.S. intel agencies began repeatedly warning the Clinton White House that Iraq, with fissile material from a foreign source, could have a crude nuclear bomb within a year.

Seventeen days before the war, this page reluctantly urged the president to launch it. We said that every earnest tool of diplomacy with Iraq had failed to improve the world's security, stop the butchery--or rationalize years of UN inaction. We contended that Saddam Hussein, not George W. Bush, had demanded this conflict.

Many people of patriotism and integrity disagreed with us and still do. But the totality of what we know now--what this matrix chronicles--affirms for us our verdict of March 2, 2003. We hope these editorials help Tribune readers assess theirs.


Biological and chemical weapons


The Bush administration said Iraq had stockpiled weapons of mass destruction. Officials trumpeted reports from U.S. and foreign spy agencies, including an October 2002 CIA assessment: "Baghdad has chemical and biological weapons, as well as missiles with ranges in excess of UN restrictions."


Many, although not all, of the Bush administration's assertions about weapons of mass destruction have proven flat-out wrong. What illicit weaponry searchers uncovered didn't begin to square with the magnitude of the toxic armory U.S. officials had described before the war.


There was no need for the administration to rely on risky intelligence to chronicle many of Iraq's other sins. In putting so much emphasis on illicit weaponry, the White House advanced its most provocative, least verifiable case for war when others would have sufficed.

Iraq rebuffs the world


In a speech that left many diplomats visibly squirming in their chairs, President Bush detailed tandem patterns of failure: Saddam Hussein had refused to obey UN Security Council orders that he disclose his weapons programs--and the UN had refused to enforce its demands of Hussein.


Reasonable minds disagree on whether Iraq's flouting of UN resolutions justified the war. But there can be no credible assertion that either Iraq or the UN met its responsibility to the world. If anything, the administration gravely understated the chicanery, both in Baghdad and at the UN.


Hussein had shunted enough lucre to enough profiteers to keep the UN from challenging him. In a dozen years the organization mass-produced 17 resolutions on Iraq, all of them toothless. That in turn enabled Hussein to continue his brutal reign and cost untold thousands of Iraqis their lives.

The quest for nukes


Intelligence agencies warned the Clinton and Bush administrations that Hussein was reconstituting his once-impressive program to create nuclear weapons. In part that intel reflected embarrassment over U.S. failure before the Persian Gulf war to grasp how close Iraq was to building nukes.


Four intel studies from 1997 to 2000 concurred that "If Iraq acquired a significant quantity of fissile material through foreign assistance, it could have a crude nuclear weapon within a year." Claims that Iraq sought uranium and special tubes for processing nuclear material appear discredited.


If the White House manipulated or exaggerated the nuclear intelligence before the war in order to paint a more menacing portrait of Hussein, it's difficult to imagine why. For five years, the official and oft-delivered alarms from the U.S. intelligence community had been menacing enough.

Hussein's rope-a-dope


The longer Hussein refuses to obey UN directives to disclose his weapons programs, the greater the risk that he will acquire, or share with terrorists, the weaponry he has used in the past or the even deadlier capabilities his scientists have tried to develop. Thus we need to wage a pre-emptive war.


Hussein didn't have illicit weapons stockpiles to wield or hand to terrorists. Subsequent investigations have concluded he had the means and intent to rekindle those programs as soon as he escaped UN sanctions.


Had Hussein not been deposed, would he have reconstituted deadly weaponry or shared it with terror groups? Of the White House's nine arguments for war, the implications of this warning about Iraq's intentions are treacherous to imagine--yet also the least possible to declare true or false.

Waging war on terror


Iraq was Afghanistan's likely successor as a haven for terror groups. "Saddam Hussein is harboring terrorists and the instruments of terror ... " the president said. "And he cannot be trusted. The risk is simply too great that he will use them, or provide them to a terror network."


The White House echoed four years of intel that said Hussein contemplated the use of terror against the U.S. or its allies. But he evidently had not done so on a broad scale. The assertion that Hussein was "harboring terrorists and the instruments of terror" overstated what we know today.


The drumbeat of White House warnings before the war made Iraq's terror activities sound more ambitious than subsequent evidence has proven. Based on what we know today, the argument that Hussein was able to foment global terror against this country and its interests was exaggerated.

Reform in the Middle East


Supplanting Hussein's reign with self-rule would transform governance in a region dominated by dictators, zealots and kings. The administration wanted to convert populations of subjects into citizens. Mideast democracy would channel energy away from resentments that breed terrorism.


U.S. pressure has stirred reforms in Lebanon, Egypt and Saudi Arabia and imperiled Syria's regime. "I was cynical about Iraq," said Druze Muslim patriarch Walid Jumblatt. "But when I saw the Iraqi people voting ... it was the start of a new Arab world. ... The Berlin Wall has fallen."


The notion that invading Iraq would provoke political tremors in a region long ruled by despots is the Bush administration's most successful prewar prediction to date. A more muscular U.S. diplomacy has advanced democracy and assisted freedom movements in the sclerotic Middle East.

Iraq and Al Qaeda


President Bush: "... Iraq and the Al Qaeda terrorist network share a common enemy--the United States of America. We know that Iraq and Al Qaeda have had high-level contacts that go back a decade.... Iraq has trained Al Qaeda members in bombmaking and poisons and deadly gases."


Two government investigative reports indicate that Al Qaeda and Iraq had long-running if sporadic contacts. Several of the prewar intel conclusions likely are true. But the high-ranking Al Qaeda detainee who said Iraq trained Al Qaeda in bombmaking, poisons and gases later recanted.


No compelling evidence ties Iraq to Sept. 11, 2001, as the White House implied. Nor is there proof linking Al Qaeda in a significant way to the final years of Hussein's regime. By stripping its rhetoric of the ambiguity present in the intel data, the White House exaggerated this argument for war.

The Butcher of Baghdad


Then-Secretary of State Colin Powell: "For more than 20 years, by word and by deed, Saddam Hussein has pursued his ambition to dominate Iraq and the broader Middle East using the only means he knows--intimidation, coercion and annihilation of all those who might stand in his way."


Human Rights Watch estimates that Hussein exterminated 300,000 people. Chemical weapons killed Iraqi Kurds and Iranians; Iraqi Shiites also were slaughtered. Tortures included amputation, rape, piercing hands with drills, burning some victims alive and lowering others into acid baths.


In detailing how Hussein tormented his people--and thus mocked the UN Security Council order that he stop--the White House assessments were accurate. Few if any war opponents have challenged this argument, or suggested that an unmolested Hussein would have eased his repression.

Iraqis liberated


President Bush and his surrogates broached a peculiar notion: that the Arab world was ready to embrace representative government. History said otherwise--and it wasn't as if the Arab street was clamoring for Iraq to show the way.


The most succinct evaluation comes from Sen. Joseph Lieberman (D-Conn.): "Every time the 27 million Iraqis have been given the chance since Saddam Hussein was overthrown, they have voted for self-government and hope over the violence and hatred the 10,000 terrorists offer them."


The White House was correct in predicting that long subjugated Iraqis would embrace democracy. And while Kurds, Sunnis and Shiites have major differences to reconcile, a year's worth of predictions that Sunni disaffection could doom self-rule have, so far, proven wrong.



AP Business Writer

January 20, 2006, 4:47 AM CST

SAN FRANCISCO -- Google Inc. is rebuffing the Bush administration's demand for a peek at what millions of people have been looking up on the Internet's leading search engine -- a request that underscores the potential for online databases to become tools for government surveillance.

Mountain View-based Google has refused to comply with a White House subpoena first issued last summer, prompting U.S. Attorney General Alberto Gonzales this week to ask a federal judge in San Jose for an order to hand over the requested records.

The government wants a list of all requests entered into Google's search engine during an unspecified single week -- a breakdown that could conceivably span tens of millions of queries. In addition, it seeks 1 million randomly selected Web addresses from various Google databases.

In court papers that the San Jose Mercury News reported on after seeing them Wednesday, the Bush administration depicts the information as vital in its effort to restore online child protection laws that have been struck down by the U.S. Supreme Court.

Yahoo Inc., which runs the Internet's second-most used search engine behind Google, confirmed Thursday that it had complied with a similar government subpoena.

Although the government says it isn't seeking any data that ties personal information to search requests, the subpoena still raises serious privacy concerns, experts said. Those worries have been magnified by recent revelations that the White House authorized eavesdropping on civilian communications after the Sept. 11 attacks without obtaining court approval.

"Search engines now play such an important part in our daily lives that many people probably contact Google more often than they do their own mother," said Thomas Burke, a San Francisco attorney who has handled several prominent cases involving privacy issues.

"Just as most people would be upset if the government wanted to know how much you called your mother and what you talked about, they should be upset about this, too."

The content of search requests sometimes contains information about the person making the query.

For instance, it's not unusual for search requests to include names, medical profiles or Social Security information, said Pam Dixon, executive director for the World Privacy Forum.

"This is exactly the kind of thing we have been worrying about with search engines for some time," Dixon said. "Google should be commended for fighting this."

Every other search engine served with a similar subpoena by the Bush administration has complied so far, according to court documents. The cooperating search engines weren't identified.

Sunnyvale, Calif.-based Yahoo stressed that it didn't reveal any personal information. "We are rigorous defenders of our users' privacy," Yahoo spokeswoman Mary Osako said Thursday. "In our opinion, this is not a privacy issue."

Microsoft Corp.'s MSN, the No. 3 search engine, declined to say whether it had even received a similar subpoena. "MSN works closely with law enforcement officials worldwide to assist them when requested," the company said in a statement.

As the Internet's dominant search engine, Google has built up a valuable storehouse of information that "makes it a very attractive target for law enforcement," said Chris Hoofnagle, senior counsel for the Electronic Privacy Information Center.

The Department of Justice argues that Google's cooperation is essential in its effort to simulate how people navigate the Web.

In a separate case in Pennsylvania, the Bush administration is trying to prove that Internet filters don't do an adequate job of preventing children from accessing online pornography and other objectionable destinations.

Obtaining the subpoenaed information from Google "would assist the government in its efforts to understand the behavior of current Web users, (and) to estimate how often Web users encounter harmful-to-minors material in the course of their searches," the Justice Department wrote in a brief filed Wednesday.

Google -- whose motto when it went public in 2004 was "Don't be evil" -- contends that submitting to the subpoena would represent a betrayal to its users, even if all personal information is stripped from the search terms sought by the government.

"Google's acceding to the request would suggest that it is willing to reveal information about those who use its services. This is not a perception that Google can accept," company attorney Ashok Ramani wrote in a letter included in the government's filing.

Complying with the subpoena also would threaten to expose some of Google's "crown-jewel trade secrets," Ramani wrote. Google is particularly concerned that the information could be used to deduce the size of its index and how many computers it uses to crunch the requests.

"This information would be highly valuable to competitors or miscreants seeking to harm Google's business," Ramani wrote.

Dixon is hoping Google's battle with the government reminds people to be careful how they interact with search engines.

"When you are looking at that blank search box, you should remember that what you fill in can come back to haunt you unless you take precautions," she said.





Confused about new drug benefit, pharmacists and administrators stick patients with the bill

By Judith Graham, Tribune staff reporter. The Associated Press contributed to this report

March 15, 2006

Frank Cartalino, a transplant patient, was distraught. No one could tell him why his pharmacy had suddenly billed him $500 for the drugs he needs to stay alive.

An Illinois program had covered the expense for years. Now, Cartalino had a letter saying the program had changed and he needed to get the medications through a Medicare drug plan. But the plan was refusing to pay the bill.

Cartalino called Medicare's national hot line repeatedly. He called an insurance company working with the Illinois program. He called Humana Inc., his Medicare drug plan. He called drug companies, begging to get on their financial assistance programs. No one, it seemed, was able to help.

Then, Cartalino, 42, who lives on a fixed income, undergoes dialysis three times a week, and takes drugs that prevent his body from rejecting a double organ transplant, called the Chicago Tribune.

Days of research revealed the root of his problem: Staff members working with pharmacies, insurance plans and government agencies don't really understand how Medicare's new drug benefit coordinates with other parts of the vast health program. And thousands of patients with organ transplants and other illnesses are getting caught in the middle.

On Tuesday, President Bush defended the prescription drug benefit as a good deal for seniors and taxpayers. But he acknowledged that the program had been plagued by problems in its early days.

"Anytime Washington passes a new law, sometimes the transition period can be interesting," the president said.

Interesting isn't the word senior citizen advocates use to describe it.

"It's an enormous mess ... a real nightmare," said Jeanne Finberg, an attorney at the National Senior Citizens Law Center.

The risk, of course, is that patients won't get needed medications because of mix-ups or, like Cartalino, they'll end up paying for expensive drugs out of limited personal funds.

The government recognizes this is a serious matter, and officials have been busy clarifying policies and consulting with medical providers, pharmacists, drug plans and advocates, said Dr. Jeffrey Kelman, chief medical officer for Medicare's Center for Beneficiary Choices. It has been a learning experience, he said.

That's something of an understatement. It took more than a dozen phone calls for the Tribune to sort through the mind-numbing complexities of Medicare and figure out where things had gone wrong for Cartalino, who lives in southwest suburban Worth.

This is the issue: Two parts of Medicare, known in bureaucratese as Part B and Part D, now cover drugs. But there's no simple way to describe which program covers what drugs for which patients. As a result, some pharmacists and many customer service representatives are getting Part B and Part D mixed up.

Part B covers a limited number of medications administered primarily in doctors' offices and nursing homes. Part D covers a much broader universe of medications, including those most people take for common medical conditions. If Part B picks up the bill for a medication, Part D coverage isn't supposed to pay, to prevent double billing.

In practice, the way the Medicare programs interact is anything but straightforward, Kelman said.

Take methotrexate, a drug that can be used to treat transplant patients as well as patients with cancer or rheumatoid arthritis. Part B will pay for the medication for transplant and cancer patients, but not for people with arthritis. That falls to Part D.

Another example: Part B will pay for albuterol, a medication taken by people with asthma, when it's administered by nebulizer, a machine that sprays medicine into the mouth, in a person's home. But if a senior citizen with asthma gets albuterol through a nebulizer in a nursing home, the medication is covered by Part D. And if albuterol comes in a hand-held unit, it's also a Part D benefit.

There's more: If a patient gets a transplant while on Medicare, like Cartalino did, Part B will pay for anti-rejection medication. But if a patient wasn't on Medicare at the time, Part D will pay for the drugs.

That's part of what tripped up Cartalino and the many people who tried to answer his questions this year. But there were other factors.

The Illinois Comprehensive Health Insurance Plan, ICHIP, sent out misleading material to Cartalino and about 1,000 other disabled Medicare patients in October and December.

ICHIP is a program for state residents who can't get health insurance through traditional channels because of pre-existing medical conditions. For people like Cartalino who have Medicare, ICHIP pays for medical charges that Medicare doesn't cover in full.

Because of changes in Medicare, ICHIP informed members that it would stop paying for all prescription drugs. The letters urged people to sign up with a new Medicare drug plan so they would still get some help with medication expenses.

Nowhere did the ICHIP letters mention that the program would still pay for a limited set of drugs under Medicare Part B. (Medicare pays 80 percent of the cost of these drugs; ICHIP had been paying the remaining 20 percent.) That was explained but not highlighted in ICHIP's annual explanation of benefits.

That's the equivalent of asking someone to read the fine print buried on a drug label--no one does it.

When Cartalino got ICHIP's letter, he went shopping for a Medicare drug plan. After careful research, he decided on a Humana plan that promised to supply the medications, Prograf and Rapamune, which together cost nearly $2,500 a month.

But when it came time to fill his anti-rejection prescriptions, Cartalino learned that the Humana plan wouldn't authorize payment because the drugs were deemed a Part B, not a Part D, benefit.

Cartalino, a former printer who lives alone on a fixed monthly income of $2,000, scrambled to find almost $500--the monthly amount ICHIP had been paying previously for the medications. Then he began working the phones, but no one could answer his questions.

Poor training of customer service representatives at every level appears to be a real problem. Each time Cartalino called, he reached people who didn't understand his situation or who didn't know how to help him.

A Tribune reporter encountered the same difficulties. Medicare, drug plan, and ICHIP officials all say they've worked hard on training customer service staff.

With the intervention of Robert Herskovitz, a Chicago Medicare official, Cartalino finally learned that ICHIP would cover his transplant drugs after all. It had been in the policy all along, though materials didn't make that clear.

The good news in all this is that Medicare's new drug benefit, Part D, fills an important gap for seniors who have had transplants but no way previously of paying for drugs for conditions such as high cholesterol or hypertension.

Cartalino is now getting his anti-rejection medications without any problem after weeks of getting the runaround.

"Thank God," he said. "Without this help, I just couldn't have made it."

ONLINE Download our "Navigating Medicare" series at medicarehelp

Copyright © 2006, Chicago Tribune




By Frank James
Washington Bureau

March 15, 2006

WASHINGTON -- A disturbing number of high school students and adults are reporting early signs of hearing loss, and hearing experts think they know the culprits: iPods and similar portable devices that allow people to funnel loud sounds into their ears for hours on end.

More research is needed to conclusively establish the link between the cords dangling from millions of ears and hearing difficulties. But scientists suspect the increasing prevalence of the devices is contributing to the rising number of people reporting some form of hearing loss.

Fears and debates about loud music have been around since the dawn of rock 'n' roll, of course, from Elvis Presley to the Beatles, Black Sabbath to Nirvana.

But the leaps in technology that are allowing commuters on a bus or kids walking to high school to feel like they're at a deafening concert are also channeling ever higher volumes of music more directly, and longer, onto eardrums.

Hearing experts who called a news conference here Tuesday to voice their fears didn't use the words "crisis" or "epidemic," but it was clear they were worried about the results of a survey conducted last month by the polling firm Zogby International.

Survey worries experts

Twenty-eight percent of high school students questioned said they had to turn up the volume on a TV or radio to hear it better, for example, and 29 percent of the teenagers said they often found themselves saying, "What?" and "Huh?" during normal conversation.

Though that may sound like ordinary behavior for some teenagers, audiologists are taking it seriously, especially because the adult percentages weren't much lower.

"The results should give pause to anyone who's concerned about the nation's hearing health," said Alex Johnson, president of the American Speech-Language-Hearing Association, based in Rockville, Md. The survey was conducted for the group.

"While the cause of the symptoms was not identified, the polling showed that people are listening louder and longer--habits made easier by strides in listening technology, but ones that may also contribute to hearing damage," said Johnson, chairman of the audiology and speech-language pathology department at Wayne State University.

The polling results and warnings mirror concerns voiced by other hearing experts in recent years. These experts estimate that more than 28 million Americans have some hearing loss, a figure that some think will reach 80 million in 25 years as Baby Boomers age.

Johnson and others suggested that consumers take precautions, including parents monitoring the volume of the music as well as how long their children listen to it.

The experts also recommended consumers buy the often pricey headphones that block out external sounds like subway or airplane noise, the idea being that consumers then wouldn't need to crank up the volume to overcome background noise.

And they suggested that manufacturers limit the volume on their products. Dean Garstecki, a communication sciences and medical professor at Northwestern University, said, "I think companies who produce these products have an obligation to limit the output of the devices to a level that does not cause hearing loss." He noted that hearing-aid makers do as much in order to prevent causing additional hearing loss.

Government limits

If manufacturers of the portable devices do not act voluntarily, Garstecki suggested the U.S. government could follow the French example. France set a 100-decibel limit on iPods and other devices, but there is no such limit in the U.S. Apple Computer Inc., iPod's manufacturer, temporarily removed the devices from French stores to update the software to meet the legal restriction, according to a lawsuit filed against Apple by a Louisiana man who claimed an iPod damaged his hearing.

Steve Dowling, an Apple spokesman, said he had no comment on the Zogby poll, conducted Feb. 20-22.

The two lawmakers who appeared at Tuesday's news conference--Reps. Edward Markey (D-Mass.) and Mike Ferguson (R-N.J.)--did not seem intent on a legislative fix.

Seeming to wax nostalgic, Ferguson said, "Listening to music that annoys parents at incredibly high volumes is a rite of passage for kids of any generation."

But he put much of the onus on parents. "As parents, we talk to our kids about looking both ways before crossing the street," he said. "We talk to our kids about not talking to strangers. We also need to talk with them about the lifelong damage that could be caused by misusing personal music devices."

Markey said he plans to work with Ferguson to press the industry and the National Institutes of Health for more research on the role portable media devices play in hearing loss and solutions.

The experts also warned that no one should take hearing loss lightly, that it can have major consequences, even when it is seemingly only minimal and particularly when it occurs in children.

Anne Marie Tharpe, a hearing and speech sciences professor at Vanderbilt University, said research indicated children with such deficits were "failing in school at a rate of 10 times their peers.

"My point is that minimal hearing loss is not inconsequential for these children," she said.




By Ronald Kotulak
Tribune science reporter

March 24, 2006

By the time puberty is over in the middle to late teens, when adult height and full reproductive capacity have been achieved, the body is at its peak--the strongest, swiftest and healthiest it will ever be.

But the brain lags behind, laboring to adapt to the most complex society that has existed.

This mismatch--between a fully grown body and an immature brain that is trying to cope with emotions, sexual urges, poor judgment, thrill seeking and risk taking--is a key factor making motor vehicle accidents the No. 1 cause of death among adolescents and young adults, followed by murder and suicide.

Using powerful new imaging technology to look inside the brain, scientists are beginning to unravel the biology behind this critical period of development. They are finding that an adolescent's brain undergoes a previously unsuspected biological makeover--a massive growth of synaptic connections between brain cells.

This spectacular surge kicks off an extensive renovation of the brain that is not complete until the mid-20s. Scientists say the resulting learning curve, when teens struggle to shed childish thoughts for adult ones, is why adolescence is such a prolonged and perilous journey for so many.

It helps explain not only why teens are more prone to crash a car than at any other time of life, but why they are more likely to engage in risky sex, drug abuse or delinquency. Although teens often can think as logically as an adult, the process can be easily derailed by flaring emotions or other distractions.

"The reason that kids take chances when they drive is not because they're ignorant," said Temple University psychologist Laurence Steinberg. "It's because other things undermine their better judgment."

The synaptic growth spurt that occurs in puberty is similar to the ones that occur after birth, when the brain first begins to learn. The early exposure to the outside world enables the brain to connect to the body, developing its capacity for processing sound, sight, smell, touch and taste, and to make sense of them.

Learning occurs only after excess synapses not stimulated by experience are eliminated, much like the pieces of marble that have to be chipped away to create a work of art.

Now scientists have found that a second wave of growth and pruning occurs in adolescence. Synapses that are not incorporated into neural networks for memory, decision-making and emotional control are eliminated to make way for a leaner, more efficient brain.

This late blossoming of synapses, it is thought, provides the brain with a new capacity for learning and allows the brain to make the transition from childhood to adulthood.

For frazzled parents, the findings may provide new understanding and patience as their teens navigate this increasingly rough passage. Science is finally beginning to see what's going on in the teen mind.

"We're able to actually visualize what the changes are that are happening in the brain and how the brain is adapting to its environment and changing to help it deal with all these challenges that are happening during adolescence," said Dr. Sanjiv Kumra of the Albert Einstein College of Medicine, New York.

The discovery of the adolescent brain's synaptic blooming and pruning was first made in 1999 by a National Institute of Mental Health research team headed by Jay N. Giedd, chief of brain imaging at the institution's child psychiatry branch.

Puberty normally begins between the ages of 9 and 13 in girls and 10 and 16 in boys. Giedd's team found that synaptic growth reaches its peak at about age 11 in girls and age 14 in boys--a discovery that may provide a biological basis for why girls start maturing sooner than boys both physically and mentally.

After the growth peaks, the whittling away of unused synapses begins. This is also when the fibers connecting brain cells are wrapped in a thicker coat of myelin insulation to enhance their communication.

The changes in the brain are tied up with the changes associated with puberty, which prepares the body for sexual maturity. "Adolescence is a time where the most important function is really preparing you for mating," Kumra said. "All these brain changes are happening to prepare the organism to be able to carry out that central and important function."

Long after puberty is over, however, the brain is still developing--a process lasting into the mid-20s, researchers say.

"The notion that the brain wasn't done, was still under construction so late, was pretty surprising because by 18 you can vote, get married and go to war," Giedd said.

But the more Giedd thought about it, the more it made sense. The long period of maturation, he says, has made it easier for the brains of modern humans to adapt to an increasingly complex society.

"The same brain that was used in the past for hunting and gathering berries now programs a computer," he said. "The key to all of that is having the plasticity built into the brain."

But this long period of brain development also has a significant downside when teens get behind the wheel of a car.

Brain scientists like to joke that car rental companies must have the best neuroscientists because they won't let a person rent a car until age 25. But the real reason is clear to any actuary: Every year between 5,000 and 6,000 teenagers are killed in motor vehicle accidents and 300,000 are injured.

Teen crashes are not just caused by showing off, substance abuse, aggression, thrill seeking or speeding, although those factors play a role, Giedd said.

Recent research suggests that an important culprit is the immaturity of the teenage brain and its lack of multitasking skills--especially in boys. The last part of the brain to mature is the prefrontal cortex, Giedd said, which may not fully develop until the mid-20s.

That's important, he explained, because this part of the brain controls decision-making, judgment and impulse control, all of which are involved in multitasking, or processing more than one thing at a time.

"The more multitasking that you do--talking on a cell phone, adjusting the volume of a stereo, talking to people in the car--the more trouble you're asking for," Giedd said. "And it fits into the sex differences: Women are better at multitasking than males at every age and they have a strikingly lower rate of car accidents."

Most teens multitask behind the wheel, a recent survey by the Allstate Foundation found. Sixty-five percent say they look at things other than the road, 56 percent make and answer phone calls, 44 percent say they drive with friends in the car and 47 percent find passengers sometimes distracting.

Researchers say the time it now takes for the brain to reach adulthood may help explain why modern adolescence lasts far longer than in traditional societies, where the time between going through puberty and becoming a breadwinner is two to four years.

True adulthood arrives not with sexual or physical maturity but with taking on a social role and being responsible for one's own actions, said pediatric psychiatrist Dr. Ronald Dahl of the University of Pittsburgh School of Medicine.

"If your adult task is to gather food, have babies or kill an animal with a spear, the interval between puberty and adulthood is much shorter," Dahl said. "Whereas if what you want to do requires finishing high school, four years of college and going to graduate school, it's going to take the brain a lot longer."

The National Institute of Mental Health team's newest discovery shows that the longer the brain takes to mature, the smarter it becomes.

"The later the peak [in synaptic growth and pruning] the higher the IQ, which is good news for late bloomers," Giedd said. "If you have the brain being more responsive to the environment for longer, then these changes can make it better suited to deal with the environment."

Adolescence has now become so extended that it runs to about age 25, experts say. "What sits in the middle of this stretched-out adolescence are incredible increases in behavioral and emotional health problems, and brain changes that take a long time and lots of practice to acquire necessary skills," Dahl said.

The brain's facility for early learning is remarkable: It's as good at reasoning by age 16 as it is in adulthood, Steinberg said. "So then the question is: `Well, if kids are as smart as adults, why do they do such dumb things?'" said Steinberg, who presented new findings this month at the Society for Research on Adolescence meeting.

"We think the reason doesn't have to do with their basic intelligence. It has to do with ways in which emotional and social factors impair their judgment. This means that it takes longer than we probably thought for people to develop mature impulse control."

His study, which looked at 950 people of different racial and ethnic backgrounds between the ages of 10 and 30 in five countries, found that while reasoning powers mature early, things like impulse control, thinking of future consequences to behavior and resisting peer pressure take much longer. In fact, they slowly mature through the 20s, Steinberg said.

In a simulated driving study, Steinberg found that when teens were in the room by themselves their driving skills were the same as adults'. But when they tried to perform the same driving tasks with two friends in the room, the number of chances they took doubled. The presence of friends did not affect the driving of the adults.

Adolescence, Dahl said, is a time when passions can hijack the brain's ability to make decisions and control behavior, with potentially deadly results.

For some youngsters living in impoverished conditions, this is a particularly dangerous time. They reach adult body size but are being led by a brain that clings to childish impulses and passions--and might see nothing worthwhile in the future.

"The system is precarious, tipping on one side toward strong emotions and drives and on the other side not yet supported well enough by self-control," Dahl said.

"There's an important role for parents, coaches, teachers, other responsible adults and social systems to help support kids so that they can take some risks, do some experimenting, develop some ability for self-control, but not spiral into those terrible outcomes--death, disability, addictions, reckless sex, HIV and all the other problems that are so rampant in adolescence."

Given what science has learned about the developing adolescent brain, Jay N. Giedd of the National Institute of Mental Health says parents and society might consider the following steps to reduce the risk of teen driving accidents:

- "Graduated licensing where first you prove yourself safe for a certain amount of time, then you go to the next level."

- "Limiting multitasking, especially in the early stages of learning to drive. Being aware that talking on the cell phone and doing other things are risky for adults too, but multiplied for teens."

- Raising the age for a driver's license. "If you look at other countries where they start driving later, they drive safer."

Copyright © 2006, Chicago Tribune



By Jimmy Greenfield and David Haugh
Tribune staff reporters
Published March 28, 2006

Until a few weeks ago, Paul Marszalek's page had photos of him and his friends partying, dancing and drinking alcohol.

Not anymore.

Marszalek, 18, a UIC freshman, deleted his MySpace Web page after becoming nervous that law firms where he was applying for internships might see the photos.

"You never know who's looking at it," he said.

Marszalek isn't being paranoid.

What you post online could catch up with you.

High schools, colleges and businesses have begun to use social networking sites such as MySpace, Xanga and Facebook to keep tabs on students and employees.

"Some of these [postings] are incredibly incriminating," said Steve Jones, a UIC communications professor who studies the Internet. "I wouldn't be surprised if 20 years from now somebody who's running for an office has to answer a lot of questions to what they had on MySpace 20 years ago."

Some consequences are more immediate, especially for college athletes who are in the public eye more than their fellow students.

Last May, Louisiana State kicked two swimmers off the team after school officials found out the pair had posted derogatory comments about their coaches. At Arizona, several female athletes discontinued their Facebook accounts after one of them feared she was being stalked by someone who learned personal details from the site.

Last December, Colorado banned the use of Facebook in the student-athletes' academic lab computers after a football player and cross-country runner were caught sending racially insensitive threats to another runner.

Athletic directors concerned

Many college students have abandoned MySpace for Facebook to post their party pictures. One reason is that Facebook, which has close to 5 million users according to Nielsen/NetRatings, requires a university e-mail address for access.

The site has become the cyberspace version of a college singles bar, allowing users to communicate by exchanging photos (occasionally suggestive or obscene in nature), letters and personal information. The result is alarmingly open and unfettered access to aspects of campus life previously left to the imagination of parents and administrators.

But many students don't realize that alumni--who may include police or prospective employers--can get a university e-mail address at some schools and start snooping around Facebook, Jones said.

After sifting uncomfortably through online profiles at Facebook last fall, Loyola University athletic director John Planek decided he had to do something to protect the image of his school and the safety of its student-athletes. Planek threatened to take away the scholarships of Loyola athletes who did not remove their profiles rather than expose them to gamblers, agents, cyber-stalkers and embarrassment.

"I've gotten the `Planek is an idiot' stuff, but when their moms and dads drop their student-athletes off on our campus, I'm the dad here, and it's my job to look out for them," Planek said unapologetically. "I can't control the whole Internet, but I can do my part."

Monday morning, for example, George Mason basketball player Lamar Butler picked up 100 new names on his "friends" list in about an hour for a total of 1,128. As popular as Butler had become, George Mason might have to win the NCAA title to catch up to Illinois star Dee Brown, who stopped accepting online friends when the number had reached 2,000, according to school officials.

"How many of the friends on an athlete's list are ne'er-do-wells?" Planek asked.

Planek has found support around the country from other athletic directors. Florida State officials gave the school's athletes 10 days last December to shut down their Facebook accounts--or else. Baylor athletic director Ian McCaw used a blanket e-mail to remind his 400 student-athletes that they were "always in the public eye."

The trend toward curbing the computer habits of college students, even scholarship athletes bound by a behavioral code, makes some civil libertarians uneasy. Representatives from the site did not reply to e-mail requests for an interview, but the local chapter of the American Civil Liberties Union questioned how far a university or its athletic department should go to control its image.

Said Edwin C. Yohnka, director of communications for the ACLU of Illinois, "The notion that this sort of [free] expression gets tamped down at a university when students are ready to explore who they are is a real concern."

Patrolling Internet

Of course, there could be bigger issues for students than just getting busted in a photo with a beer in their hand.

"If you don't want it to be my business, then don't post it," Barrington police officer James McNamee said.

McNamee, who specializes in Internet safety, said it's his job to keep tabs on anybody posting possibly incriminating information on MySpace. It's very easy to do so, he said.

He just goes to the "browse" section, types in criteria for age and gender, then searches for anything suspicious in a 5-mile radius using Barrington's ZIP code as a guide. MySpace doesn't require entering a city or ZIP code in your profile, but McNamee has found that many users do.

"Everything pops up," he said. "We'll look at the pictures and the names. We'll punch up on their site and see what we get."

McNamee has found sexually explicit pictures and personal diaries. He recently came across photos some high school students had posted of themselves painting graffiti on the school. He declined to say what discipline the students faced, because they were juveniles.

"They basically posted a confession online," he said.

McNamee compares police officers searching MySpace to driving around in a patrol car looking for suspicious activity, and he dismisses any suggestion it's an invasion of privacy.

"Are you saying we shouldn't patrol it?" he said. "There's too much stuff out there."

Other worries

And then there's that future job market to consider.

"In the future, if Google buys Facebook, who's to say they're not going to make all Facebook content searchable?" UIC's Jones said.

Job recruiters say students' lack of discretion online will catch up to them in their professional lives. A 2005 study conducted by executive job-search agency ExecuNet found that 75 percent of recruiters already use Web searching as part of the applicant screening process, according to a Columbia News Service report. More than a quarter of these same recruiters say they have eliminated candidates based on information they found online.

"I hope that students get a wake-up call," Steven Rothberg, who runs the largest national employment Web site for recent university graduates,, told the Columbia News Service. "I think of social networking sites much like a tattoo: It seems like a great idea at the time, but you have to live with it the rest of your life."


By Jason George
Tribune staff reporter
Published June 27, 2006

Sowing fields surrounded by subdivisions, James Culver is well aware that the world around his Plainfield farm has changed.

From a sleepy stagecoach stop when his family settled there in 1834 to the fastest growing city in Illinois today, Plainfield has long since traded tractors for tract housing. Its population nearly tripled from 1990 to 2000, and has doubled again since then, according to census estimates released last week.

But Culver, a self-labeled 75-year-old country boy who still cherishes his copy of the 1906 Sears and Roebuck catalog, is not just a man out of step with this century. He was out of step with the last one too.

In 1975, a Tribune article declared him to be among the last of a dying breed--farmers who work their fields with draft horses, the massive beasts that once pulled everything from plows to streetcars.

Thirty-one years later, Chicagoland farm bureau officials say that Culver is no longer one of the last. He is the last.

Even though he appears stronger than the average man a third his age, Culver fears that this year could be his last in the fields, following triple bypass surgery in 2005. Last week he headed back into the hospital to fix an arterial blockage in his right leg.

Although he has leased much of his land to other farmers who employ heavy machinery, Culver has continued to work his hayfields, which feed his four Belgian mares.

"It didn't used to be that hard," he said one day recently, as he struggled to remove the heavy horse tack from the only females in his life: Ruby, Dora, Connie and Jane.

"I want to keep them if I can take care of them," he said. "I've never been without my horses."

Living without horses also would have been unimaginable to Culver's great-great-grandfather Daniel Culver, who moved to Plainfield, then known as Walker's Grove, in 1834. Horses literally pulled Chicago through the 19th Century, and draft horses made up the majority of them.

"They are too fond of the heavy draft breeds, almost wholly neglecting the lighter and finer grades of stock," an 1872 Tribune article said of farmers at the time.

Ollie Ziegler, the historian for the Belgian Draft Horse Corporation of America, said the preference was a matter of necessity.

"Back in those days that was the only source of strong farm power," he said.

As the century turned, steam and gasoline-powered tractors and cars began filling the horses' roles in both the city and the countryside.

"1952 and 1953 were the lowest numbers of registration of any draft horse," Ziegler said, adding that most registration gains since then can be attributed to their increasing popularity as show and exhibition horses.

"I saw the last of them leave the farms and go to the killer plants."

On the Culver farm, though, draft horses were more than a means to an end.

Until this winter, Culver always made sure to feed the horses before he prepared his own breakfast. Now he waits until the helpful daylight shines on the farm before heading out to the barn.

"I'm out here alone," said the lifelong bachelor.

Although Culver never married--"I guess I was too busy," he jokes--he is not without regular, potential suitors: real estate agents.

"I get a lot of that, you bet," he chuckled.

Culver and his sister, who lives in Lockport, own about 240 acres in the Plainfield area, which several in-town agents estimated is valued at $20 million, give or take a few million.

But turning the fields into home sites and enjoying a poolside retirement is not in the future for a man who wears work boots held together by tape and determination. The land, simply put, has never been--nor will it ever be--for sale as long as Culver's alive.

"It's been in the family too long," he said.

"I don't need it, and I don't want it," he said of the potential millions. "And once you spend it you never get it back."

After feeding his horses, Culver fits them with their collars, bellybands and cruppers to ready them for the fields. He has worked in this barn since he was a boy, and its walls feature several scrawls of "James" from when he was learning to write his name.

A Farmers Mutual calendar marks the date as December 1977.

Culver ends most days as the sun sets, sitting in his living room either reading or watching television. "It's getting really hard to find something I like," he said.

Next to his recliner are photo albums of horses he's owned over the years, gospel 45s and arthritis balm.

Often he'll pop a tape into the VCR that shows him plowing fields of wheat, which was filmed so that schoolchildren could learn about history. It shows a scene that looks no different from what drivers who pass Culver's farm see every day.

"Time and time again I've been told I couldn't make a living, but I've never worked a day in town myself," he said, seated next to a stitched pillow that read: "Life without horses ... I don't think so!"

"This is all I've ever done."

 © Chicago Tribune 2006



by Laura Bohannan

(from Natural History, Aug/Sept. 1966)

Just before I left Oxford for the Tiv in West Africa, conversation turned to the season at Stratford. "You Americans," said a friend, "often have difficulty with Shakespeare. He was, after all, a very English poet, and one can easily misinterpret the universal by misunderstanding the particular."

I protested that human nature is pretty much the same the whole world over; at least the general plot and motivation of the greater tragedies would always be clear--everywhere--although some details of custom might have to be explained and difficulties of translation might produce other slight changes. To end an argument we could not conclude, my friend gave me a copy of Hamlet to study in the African bush: it would, he hoped, lift my mind above its primitive surroundings, and possibly I might, by prolonged meditation, achieve the grace of correct interpretation.

It was my second field trip to that African tribe, and I thought myself ready to live in one of its remote sections--an area difficult to cross even on foot. I eventually settled on the hillock of a very knowledgeable old man, the head of a homestead of some hundred and forty people, all of whom were either his close relatives or their wives and children. Like the other elders of the vicinity, the old man spent most of his time performing ceremonies seldom seen these days in the more accessible parts of the tribe. I was delighted. Soon there would be three months of enforced isolation and leisure, between the harvest that takes place just before the rising of the swamps and the clearing of new farms when the water goes down. Then, I thought, they would have even more time to perform ceremonies and explain them to me.

I was quite mistaken. Most of the ceremonies demanded the presence of elders from several homesteads. As the swamps rose, the old men found it too difficult to walk from one homestead to the next, and the ceremonies gradually ceased. As the swamps rose even higher, all activities but one came to an end. The women brewed beer from maize and millet. Men, women, and children sat on their hillocks and drank it.

People began to drink at dawn. By midmorning the whole homestead was singing, dancing, and drumming. When it rained, people had to sit inside their huts: there they drank and sang or they drank and told stories. In any case, by noon or before, I either had to join the party or retire to my own hut and my books. "One does not discuss serious matters when there is beer. Come, drink with us." Since I lacked their capacity for the thick native beer, I spent more and more time with Hamlet. Before the end of the second month, grace descended on me. I was quite sure that Hamlet had only one possible interpretation, and that one universally obvious.

Early every morning, in the hope of having some serious talk before the beer party, I used to call on the old man at his reception hut--a circle of posts supporting a thatched roof above a low mud wall to keep out wind and rain. One day I crawled through the low doorway and found most of the men of the homestead sitting huddled in their ragged cloths on stools, low plank beds, and reclining chairs, warming themselves against the chill of the rain around a smoky fire. In the center were three pots of beer. The party had started.

The old man greeted me cordially. "Sit down and drink." I accepted a large calabash full of beer, poured some into a small drinking gourd, and tossed it down. Then I poured some more into the same gourd for the man second in seniority to my host before I handed my calabash over to a young man for further distribution. Important people shouldn't ladle beer themselves.

"It is better like this," the old man said, looking at me approvingly and plucking at the thatch that had caught in my hair. "You should sit and drink with us more often. Your servants tell me that when you are not with us, you sit inside your hut looking at a paper."

The old man was acquainted with four kinds of "papers": tax receipts, bride price receipts, court fee receipts, and letters. The messenger who brought him letters from the chief used them mainly as a badge of office, for he always knew what was in them and told the old man. Personal letters for the few who had relatives in the government or mission stations were kept until someone went to a large market where there was a letter writer and reader. Since my arrival, letters were brought to me to be read. A few men also brought me bride price receipts, privately, with requests to change the figures to a higher sum. I found moral arguments were of no avail, since in-laws are fair game, and the technical hazards of forgery difficult to explain to an illiterate people. I did not wish them to think me silly enough to look at any such papers for days on end, and I hastily explained that my "paper" was one of the "things of long ago" of my country.

"Ah," said the old man.''Tell us."

I protested that I was not a storyteller. Storytelling is a skilled art among them; their standards are high, and the audiences critical--and vocal in their criticism. I protested in vain. This morning they wanted to hear a story while they drank. They threatened to tell me no more stories until I told them one of mine. Finally, the old man promised that no one would criticize my style "for we know you are struggling with our language." "But," put in one of the elders, "you must explain what we do not understand, as we do when we tell you our stories." Realizing that here was my chance to prove Hamlet universally intelligible, I agreed.

The old man handed me some more beer to help me on with my storytelling. Men filled their long wooden pipes and knocked coals from the fire to place in the pipe bowls; then, puffing contentedly, they sat back to listen. I began in the proper style, "Not yesterday, not yesterday, but long ago, a thing occurred. One night three men were keeping watch outside the homestead of the great chief, when suddenly they saw the former chief approach them."

"Why was he no longer their chief?"

"He was dead," I explained. "That is why they were troubled and afraid when they saw him."

"Impossible," began one of the elders, handing his pipe on to his neighbor, who interrupted, "Of course it wasn't the dead chief. It was an omen sent by a witch. Go on."

Slightly shaken, I continued. "One of these three was a man who knew things"--the closest translation of scholar, but unfortunately it also meant witch. The second elder looked triumphantly at the first. "So he spoke to the dead chief saying, 'Tell us what we must do so you may rest in your grave,' but the dead chief did not answer. He vanished, and they could see him no more. Then the man who knew things--his name was Horatio--said this event was the affair of the dead chief's son, Hamlet."

There was a general shaking of heads round the circle. "Had the dead chief no living brothers? Or was this son the chief?"

"No," I replied. "That is, he had one living brother who became the chief when the elder brother died."

The old men muttered: such omens were matters for chiefs and elders, not for youngsters; no good could come of going behind a chief's back; clearly Horatio was not a man who knew things.

"Yes, he was," I insisted, shooing a chicken away from my beer. "In our country the son is next to the father. The dead chief's younger brother had become the great chief. He had also married his elder brother's widow only about a month after the funeral."

"He did well," the old man beamed and announced to the others, "I told you that if we knew more about Europeans, we would find they really were very like us. In our country also," he added to me, "the younger brother marries the elder brother's widow and becomes the father of his children. Now, if your uncle, who married your widowed mother, is your father's full brother, then he will be a real father to you. Did Hamlet's father and uncle have one mother?"

His question barely penetrated my mind; I was too upset and thrown too far off balance by having one of the most important elements of Hamlet knocked straight out of the picture. Rather uncertainly I said that I thought they had the same mother, but I wasn't sure--the story didn't say. The old man told me severely that these genealogical details made all the difference and that when I got home I must ask the elders about it. He shouted out the door to one of his younger wives to bring his goatskin bag.

Determined to save what I could of the mother motif, I took a deep breath and began again. "The son Hamlet was very sad because his mother had married again so quickly. There was no need for her to do so, and it is our custom for a widow not to go to her next husband until she has mourned for two years."

"Two years is too long," objected the wife, who had appeared with the old man's battered goatskin bag. "Who will hoe your farms for you while you have no husband?"

"Hamlet," I retorted without thinking, "was old enough to hoe his mother's farms himself. There was no need for her to remarry." No one looked convinced. I gave up. "His mother and the great chief told Hamlet not to be sad, for the great chief himself would be a father to Hamlet.

Furthermore, Hamlet would be the next chief: therefore he must stay to learn the things of a chief. Hamlet agreed to remain, and all the rest went off to drink beer."

While I paused, perplexed at how to render Hamlet's disgusted soliloquy to an audience convinced that Claudius and Gertrude had behaved in the best possible manner, one of the younger men asked me who had married the other wives of the dead chief.

"He had no other wives," I told him.

"But a chief must have many wives! How else can he brew beer and prepare food for all his guests?"

I said firmly that in our country even chiefs had only one wife, that they had servants to do their work, and that they paid them from tax money.

It was better, they returned, for a chief to have many wives and sons who would help him hoe his farms and feed his people; then everyone loved the chief who gave much and took nothing--taxes were a bad thing.

I agreed with the last comment, but for the rest fell back on their favorite way of fobbing off my questions: "That is the way it is done, so that is how we do it."

I decided to skip the soliloquy. Even if Claudius was here thought quite right to marry his brother's widow, there remained the poison motif, and I knew they would disapprove of fratricide. More hopefully I resumed, "That night Hamlet kept watch with the three who had seen his dead father. The dead chief again appeared, and although the others were afraid, Hamlet followed his dead father off to one side. When they were alone, Hamlet's dead father spoke."

"Omens can't talk!" The old man was emphatic.

"Hamlet's dead father wasn't an omen. Seeing him might have been an omen, but he was not." My audience looked as confused as I sounded. "It was Hamlet's dead father. It was a thing we call a 'ghost.'" I had to use the English word, for unlike many of the neighboring tribes, these people didn't believe in the survival after death of any individuating part of the personality.

"What is a 'ghost?' An omen?"

"No, a 'ghost' is someone who is dead but who walks around and can talk, and people can hear him and see him but not touch him."

They objected. "One can touch zombis."

"No, no! It was not a dead body the witches had animated to sacrifice and eat. No one else made Hamlet's dead father walk. He did it himself."

"Dead men can't walk," protested my audience as one man.

I was quite willing to compromise. "A 'ghost' is the dead man's shadow."

But again they objected. "Dead men cast no shadows."

"They do in my country," I snapped.

The old man quelled the babble of disbelief that arose immediately and told me with that insincere, but courteous, agreement one extends to the fancies of the young, ignorant, and superstitious, "No doubt in your country the dead can also walk without being zombis." From the depths of his bag he produced a withered fragment of kola nut, bit off one end to show it wasn't poisoned, and handed me the rest as a peace offering.

"Anyhow," I resumed, "Hamlet's dead father said that his own brother, the one who became chief, had poisoned him. He wanted Hamlet to avenge him. Hamlet believed this in his heart, for he did not like his father's brother." I took another swallow of beer. "In the country of the great chief, living in the same homestead, for it was a very large one, was an important elder who was often with the chief to advise and help him. His name was Polonius. Hamlet was courting his daughter but her father and her brother . . . [I cast hastily about for some tribal analogy] warned her not to let Hamlet visit her when she was alone on her farm, for he would be a great chief and so could not marry her."

"Why not?" asked the wife, who had settled down on the edge of the old man's chair. He frowned at her for asking stupid questions and growled, "They lived in the same homestead."

"That was not the reason," I informed them. "Polonius was a stranger who lived in the homestead because he helped the chief, not because he was a relative."

"Then why couldn't Hamlet marry her?"

"He could have," I explained, "but Polonius didn't think he would. After all, Hamlet was a man of great importance who ought to marry a chief's daughter, for in his country a man could have only one wife. Polonius was afraid that if Hamlet made love to his daughter, then no one else would give a high price for her."

"That might be true," remarked one of the shrewder elders, "but a chief's son would give his mistress's father enough presents and patronage to more than make up the difference. Polonius sounds like a fool to me."

"Many people think he was," I agreed. "Meanwhile Polonius sent his son Laertes off to Paris to learn the things of that country, for it was the homestead of a very great chief indeed. Because he was afraid that Laertes might waste a lot of money on beer and women and gambling, or get into trouble by fighting, he sent one of his servants to Paris secretly, to spy out what Laertes was doing. One day Hamlet came upon Polonius's daughter Ophelia. He behaved so oddly he frightened her. Indeed" --I was fumbling for words to express the dubious quality of Hamlet's madness ''the chief and many others had also noticed that when Hamlet talked one could understand the words but not what they meant. Many people thought that he had become mad." My audience suddenly became much more attentive. "The great chief wanted to know what was wrong with Hamlet, so he sent for two of Hamlet's age mates [school friends would have taken long explanation] to talk to Hamlet and find out what troubled his heart. Hamlet, seeing that they had been bribed by the chief to betray him, told them nothing. Polonius, however, insisted that Hamlet was mad because he had been forbidden to see Ophelia, whom he loved."

"Why," inquired a bewildered voice, "should anyone bewitch Hamlet on that account?"

"Bewitch him?''

"Yes, only witchcraft can make anyone mad, unless, of course, one sees the beings that lurk in the forest."

I stopped being a storyteller, took out my notebook and demanded to be told more about these two causes of madness. Even while they spoke and I jotted notes, I tried to calculate the effect of this new factor on the plot. Hamlet had not been exposed to the beings that lurk in the forest. Only his relatives in the male line could bewitch him. Barring relatives not mentioned by Shakespeare, it had to be Claudius who was attempting to harm him. And, of course, it was.

For the moment I staved off questions by saying that the great chief also refused to believe that Hamlet was mad for the love of Ophelia and nothing else. "He was sure that something much more important was troubling Hamlet's heart."

"Now Hamlet's age mates," I continued, "had brought with them a famous storyteller. Hamlet decided to have this man tell the chief and all his homestead a story about a man who had poisoned his brother because he desired his brother's wife and wished to be chief himself. Hamlet was sure the great chief could not hear the story without making a sign if he was indeed guilty, and then he would discover whether his dead father had told him the truth.''

The old man interrupted, with deep cunning, "Why should a father lie to his son?" he asked.

I hedged: "Hamlet wasn't sure that it really was his dead father." It was impossible to say anything, in that language, about devil-inspired visions.

"You mean," he said, "it actually was an omen, and he knew witches sometimes send false ones. Hamlet was a fool not to go to one skilled in reading omens and divining the truth in the first place. A man who sees the truth could have told him how his father died, if he really had been poisoned, and if there was witchcraft in it; then Hamlet could have called the elders to settle the matter."

The shrewd elder ventured to disagree. "Because his father's brother was a great chief, one who sees the truth might therefore have been afraid to tell it. I think it was for that reason that a friend of Hamlet's father--a witch and an elder--sent an omen so his friend's son would know. Was the omen true?"

"Yes," I said, abandoning ghosts and the devil; a witch-sent omen it would have to be. "It was true, for when the storyteller was telling his tale before all the homestead, the great chief rose in fear. Afraid that Hamlet knew his secret, he planned to have him killed."

The stage set of the next bit presented some difficulties of translation. I began cautiously. "The great chief told Hamlet's mother to find out from her son what he knew. But because a woman's children are always first in her heart, he had the important elder Polonius hide behind a cloth that hung against the wall of Hamlet's mother's sleeping hut. Hamlet started to scold his mother for what she had done."

There was a shocked murmur from everyone. A man should never scold his mother.

"She called out in fear, and Polonius moved behind the cloth. Shouting, 'A rat!' Hamlet took his machete and slashed through the cloth." I paused for dramatic effect. "He had killed Polonius!"

The old men looked at each other in supreme disgust. "That Polonius truly was a fool and a man who knew nothing! What child would not know enough to shout, 'It's me!'" With a pang, I remembered that these people are ardent hunters, always armed with bow, arrow, and machete; at the first rustle in the grass an arrow is aimed and ready, and the hunter shouts "Game!" If no human voice answers immediately, the arrow speeds on its way. Like a good hunter Hamlet had shouted, "A rat!"

I rushed in to save Polonius's reputation. "Polonius did speak. Hamlet heard him. But he thought it was the chief and wished to kill him to avenge his father. He had meant to kill him earlier that evening...." I broke down, unable to describe to these pagans, who had no belief in individual afterlife, the difference between dying at one's prayers and dying "unhousell'd, disappointed, unaneled."

This time I had shocked my audience seriously. "For a man to raise his hand against his father's brother and the one who has become his father--that is a terrible thing. The elders ought to let such a man be bewitched."

I nibbled at my kola nut in some perplexity, then pointed out that after all the man had killed Hamlet's father.

"No," pronounced the old man, speaking less to me than to the young men sitting behind the elders. "If your father's brother has killed your father, you must appeal to your father's age mates; they may avenge him. No man may use violence against his senior relatives." Another thought struck him. "But if his father's brother had indeed been wicked enough to bewitch Hamlet and make him mad, that would be a good story indeed, for it would be his fault that Hamlet, being mad, no longer had any sense and thus was ready to kill his father's brother."

There was a murmur of applause. Hamlet was again a good story to them, but it no longer seemed quite the same story to me. As I thought over the coming complications of plot and motive, I lost courage and decided to skim over dangerous ground quickly.

"The great chief," I went on, "was not sorry that Hamlet had killed Polonius. It gave him a reason to send Hamlet away, with his two treacherous age mates, with letters to a chief of a far country, saying that Hamlet should be killed. But Hamlet changed the writing on their papers, so that the chief killed his age mates instead." I encountered a reproachful glare from one of the men whom I had told undetectable forgery was not merely immoral but beyond human skill. I looked the other way.

"Before Hamlet could return, Laertes came back for his father's funeral. The great chief told him Hamlet had killed Polonius. Laertes swore to kill Hamlet because of this, and because his sister Ophelia, hearing her father had been killed by the man she loved, went mad and drowned in the river."

"Have you already forgotten what we told you?" The old man was reproachful. "One cannot take vengeance on a madman; Hamlet killed Polonius in his madness. As for the girl, she not only went mad, she was drowned. Only witches can make people drown. Water itself can't hurt anything. It is merely something one drinks and bathes in."

I began to get cross. "If you don't like the story, I'll stop."

The old man made soothing noises and himself poured me some more beer. "You tell the story well, and we are listening. But it is clear that the elders of your country have never told you what the story really means. No, don't interrupt! We believe you when you say your marriage customs are different, or your clothes and weapons. But people are the same everywhere; therefore, there are always witches and it is we, the elders, who know how witches work. We told you it was the great chief who wished to kill Hamlet, and now your own words have proved us right. Who were Ophelia's male relatives?"

"There were only her father and her brother." Hamlet was clearly out of my hands.

"There must have been many more; this also you must ask of your elders when you get back to your country. From what you tell us, since Polonius was dead, it must have been Laertes who killed Ophelia, although I do not see the reason for it."

We had emptied one pot of beer, and the old men argued the point with slightly tipsy interest. Finally one of them demanded of me, "What did the servant of Polonius say on his return?"

With difficulty I recollected Reynaldo and his mission. "I don't think he did return before Polonius was killed."

"Listen," said the elder, "and I will tell you how it was and how your story will go, then you may tell me if I am right. Polonius knew his son would get into trouble, and so he did. He had many fines to pay for fighting, and debts from gambling. But he had only two ways of getting money quickly. One was to marry off his sister at once, but it is difficult to find a man who will marry a woman desired by the son of a chief. For if the chief's heir commits adultery with your wife, what can you do? Only a fool calls a case against a man who will someday be his judge. Therefore Laertes had to take the second way: he killed his sister by witchcraft, drowning her so he could secretly sell her body to the witches."

I raised an objection. "They found her body and buried it. Indeed Laertes jumped into the grave to see his sister once more--so, you see, the body was truly there. Hamlet, who had just come back, jumped in after him."

"What did I tell you?" The elder appealed to the others. "Laertes was up to no good with his sister's body. Hamlet prevented him, because the chief's heir, like a chief, does not wish any other man to grow rich and powerful. Laertes would be angry, because he would have killed his sister without benefit to himself. In our country he would try to kill Hamlet for that reason. Is this not what happened?"

"More or less," I admitted. "When the great chief found Hamlet was still alive, he encouraged Laertes to try to kill Hamlet and arranged a fight with machetes between them. In the fight both the young men were wounded to death. Hamlet's mother drank the poisoned beer that the chief meant for Hamlet in case he won the fight. When he saw his mother die of poison, Hamlet, dying, managed to kill his father's brother with his machete."

"You see, I was right!" exclaimed the elder.

"That was a very good story," added the old man, "and you told it with very few mistakes. There was just one more error, at the very end. The poison Hamlet's mother drank was obviously meant for the survivor of the fight, whichever it was. If Laertes had won, the great chief would have poisoned him, for no one would know that he arranged Hamlet's death. Then, too, he need not fear Laertes' witchcraft; it takes a strong heart to kill one's only sister by witchcraft.

"Sometime," concluded the old man, gathering his ragged toga about him, "you must tell us some more stories of your country. We, who are elders, will instruct you in their true meaning, so that when you return to your own land your elders will see that you have not been sitting in the bush, but among those who know things and who have taught you wisdom."


WASHINGTON, D.C.—U.S. Representative Rahm Emanuel (D-IL) released the following statement as delivered on the House floor during the debate on H.Res.861:  

            “Mr. Speaker, since day one of the war in Iraq, Democrats have provided the President with everything he asked for, yet Republicans have denied the President the one thing he needed: oversight.

            “In a post 9-11 world, the American people need the vigilance and patriotic determination of every Member of Congress to demand answers to the questions their constituents are asking.

            “Instead, the Republican Congress sat and watched the Administration make mistake, after mistake, after mistake. 

            “And don’t listen to just one member of Congress.

            “Consider the words of a three-star general, Greg Newbold, top operations officer for the Joint Chiefs of Staff:

            “After a scathing critique of Secretary Rumsfeld, he says:

            ‘The Bush Administration and senior military officials are not alone in their culpability. Members of Congress . . . defaulted in fulfilling their constitutional responsibility for oversight.’

            “General Anthony Zinni, former Commander of the U.S. Central Command – in the Middle East:

            ‘We are paying the price for the lack of credible planning, or the lack of a plan. Ten years of planning were thrown away.’

            “Major General Batiste, who Commanded 22,000 soldiers on the ground in Iraq:

            ‘Rumsfeld and his team turned what should have been a deliberate victory in Iraq into a prolonged challenge.’

            “Eight generals have raised serious questions concerning Secretary Rumsfeld’s leadership. I dunno, maybe the Pentagon suffers from the soft bigotry of low expectations and social promotion as a policy.

            “Maybe these Generals just weren’t qualified.

            “Or maybe, just maybe, they had to speak up because the Republican Congress was silent.

            “This Congress has adopted an approach of see no evil, hear no evil, speak no evil with abandon.

            “America was told this would be a quick war and it turned into a long war, this Congress walked away from its oversight responsibility. 

            “America was told 130,000 troops would be enough, but more were clearly necessary, this Congress, the Republican Congress, walked away from its oversight responsibility. 

            “America was told this would be a conventional war, it turned into an insurgency, this Congress walked away from its oversight responsibility.

            “America was told oil would pay for reconstruction, and the taxpayers were left with a $480 billion tab, this Congress walked away from its oversight responsibility.

            “America was told we would be greeted as liberators, but we have come to be treated like occupiers, this Congress walked away from its oversight responsibility.

            “And, when Don Rumsfeld, a man who expressed contempt for the idea of nation-building, was assigned the responsibility of rebuilding Iraq and mismanaged the war against the insurgency, this Congress, the Republican Congress, walked away from its oversight responsibility. 

            “Mr. Speaker, the Republicans want to portray the greatest foreign policy challenge of a generation as simply the choice between more of the same or a new direction and we Democrats welcome that.

            “The debate today is about whether the American people want to stay the course with an administration and a Congress that has walked away from its obligations or pursue a real strategy for success in the war on terror.

            “2,500 brave Americans -- male and female -- have given their lives trying to stabilize Iraq.

            “Last month was one of the bloodiest in Iraq. 

            “According to Maj. Gen. Rick Lynch, attacks against civilians increased 80 percent since November 2005.

            “We cannot achieve the end of victory and continue to sit and watch, stand pat, stay put--status quo--and that is the Republican policy.

            “Democrats are determined to take the fight to the enemy.

            “In the words of President John Kennedy: ‘we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe, in order to assure the survival and the success of liberty.’ 

            “Democrats will never put American service members in harm’s way without a plan, and without support.

            “For that, you need the sit and watch complacency of a Republican Congress.”



Kathleen Parker
August 9, 2006

Every historic moment has its iconic image.

Vietnam had Gen. Nguyen Ngoc Loan executing a Viet Cong on the street; the Oklahoma City bombing had a fireman holding a dying child in his arms; Abu Ghraib had the hooded torture victim standing on a box.

And today, the Israeli-Hezbollah war has Qana--the Lebanese village where Israeli rockets killed civilians, including 16 children (down from the initially reported 27).

Or did they?

The blogosphere has been buzzing the past several days about doctored photographs, faked footage and even the possibility that Qana was manipulated, if not orchestrated, by Hezbollah.

True or false? That seems increasingly to be a question for news consumers, who have to be detectives as they digest the day's headlines and photo captions.

In the past week, for instance, at least two photos shot in Lebanon and distributed by Reuters were determined to have been doctored. Best known of the two is an image showing black smoke plumes allegedly caused by an Israeli strike on south Beirut.

The photo, snapped and enhanced by Lebanese freelance photographer Adnan Hajj, was altered to make damage from the strike seem much worse than it was, as revealed by blogger Charles Johnson of Little Green Footballs.

Subsequently, Reuters ended its relationship with Hajj and shut down his photo archive of more than 900 images. The news agency acknowledged that at least one other Hajj photo had been doctored to show three flares dropping from an Israeli jet instead of just one.

These distortions may not rise to the level of wholesale deceit, but they are intentionally misleading and prejudicial toward Israel at a time when the stakes are lethal.

Yet another Hajj photo series under close scrutiny from bloggers concerns a bombed-out bridge in southern Lebanon, though it's hard to tell exactly where. Two clearly different bridges are both labeled Qasmiya Bridge near Tyre, an honest enough mistake. In several frames taken at one of the bridges, however, an overturned car appears to have been perhaps digitally moved to produce a more compelling image.

These photos can be viewed at Power Line, where three attorneys keep close tabs on the various war fronts. These are the same fellows responsible for sizing up the fonts on the "inaccurate-but-true" documents Dan Rather presented as detailing George W. Bush's military history.

Power Line's treatment of the bridge photos is fair and open-minded--it's asking rather than asserting whether something might not be quite right in Tyre. Meanwhile, others are questioning whether the Qana tragedy might have been staged by Hezbollah.

Thus are conspiracy theories born. When the media fail to carefully police their own, others will. And in that dead space between a forged document--or a faked photograph--and the "gotcha" reflex among bloggers are lost trust and moral confusion.

How can people make honest judgments about events--whether the war on terror, the war in Iraq or Israel's response to Hezbollah--if they can't rely on news from the front?

Equally troubling is that these images have the power to sway public opinion and to alter the course of history. After pictures of the Qana children were flashed around the world, for instance, public outrage was directed at Israel, prompting Israeli officials to declare a 48-hour cease-fire. The emotional power of imagery can't be underestimated, nor can its manipulative power be ignored.

In yet another series of photographs being closely reviewed for staging, British blogger Dr. Richard North of EU Referendum has raised questions about Qana based on photos and frames captured from video.

He identifies two men--"Mr. White T-Shirt" and "Mr. Green Helmet"--who seem to be calculating their actions--and their emotions--for the cameras. Away from cameras, they're dispassionate, even bored-looking bystanders to the rubble and death. Closer to photographers, they seem to emote as if on cue.

It's by no means conclusive that the men's emotions are necessarily manufactured, but as presented by North, they can be viewed as false. Does that make the pictures inaccurate? Unfair? Misleading? North, at least, seems to conclude that the men are more likely Hezbollah apparatchiks than mere civilians wracked by grief.

These few examples remind us that the digital media age is a curse and a blessing. We have access to more information than imaginable even a decade ago, and yet we seem to have less reliable truth than ever.

The iconic image for these times may well be the humble Underwood typewriter--symbol of simpler times when a thousand words could paint a good enough picture.


Kathleen Parker is a syndicated columnist.

Copyright © 2006, Chicago Tribune



Journalism without journalists.


Issue of 2006-08-07     Posted 2006-07-31

On the Internet, everybody is a millenarian. Internet journalism, according to those who produce manifestos on its behalf, represents a world-historical development—not so much because of the expressive power of the new medium as because of its accessibility to producers and consumers. That permits it to break the long-standing choke hold on public information and discussion that the traditional media—usually known, when this argument is made, as “gatekeepers” or “the priesthood”—have supposedly been able to maintain up to now. “Millions of Americans who were once in awe of the punditocracy now realize that anyone can do this stuff—and that many unknowns can do it better than the lords of the profession,” Glenn Reynolds, a University of Tennessee law professor who operates one of the leading blogs, Instapundit, writes, typically, in his new book, “An Army of Davids: How Markets and Technology Empower Ordinary People to Beat Big Media, Big Government and Other Goliaths.”

The rhetoric about Internet journalism produced by Reynolds and many others is plausible only because it conflates several distinct categories of material that are widely available online and didn’t use to be. One is pure opinion, especially political opinion, which the Internet has made infinitely easy to purvey. Another is information originally published in other media—everything from Chilean newspaper stories and entries in German encyclopedias to papers presented at Micronesian conferences on accounting methods—which one can find instantly on search and aggregation sites. Lately, grand journalistic claims have been made on behalf of material produced specifically for Web sites by people who don’t have jobs with news organizations. According to a study published last month by the Pew Internet & American Life Project, there are twelve million bloggers in the United States, and thirty-four per cent of them consider blogging to be a form of journalism. That would add up to more than four million newly minted journalists just among the ranks of American bloggers. If you add everyone abroad, and everyone who practices other forms of Web journalism, the profession must have increased in size a thousandfold over the last decade.

As the Pew study makes clear, most bloggers see themselves as engaging only in personal expression; they don’t inspire the biggest claims currently being made for Internet journalism. The category that inspires the most soaring rhetoric about supplanting traditional news organizations is “citizen journalism,” meaning sites that publish contributions of people who don’t have jobs with news organizations but are performing a similar function.

Citizen journalists are supposedly inspired amateurs who find out what’s going on in the places where they live and work, and who bring us a fuller, richer picture of the world than we get from familiar news organizations, while sparing us the pomposity and preening that journalists often display. Hong Eun-taek, the editor-in-chief of perhaps the biggest citizen-journalism site, Oh My News, which is based in Seoul and has a staff of editors managing about forty thousand volunteer contributors, has posted a brief manifesto, which says, “Traditional means of news gathering and dissemination are quickly falling behind the new paradigm. . . . We believe news is something that is made not only by a George W. Bush or a Bill Gates but, more importantly, by people who are all allowed to think together. The news is a form of collective thinking. It is the ideas and minds of the people that are changing the world, when they are heard.”

That’s the catechism, but what has citizen journalism actually brought us? It’s a difficult question, in part because many of the truest believers are very good at making life unpleasant for doubters, through relentless sneering. Thus far, no “traditional journalist” has been silly enough to own up to and defend the idea of belonging to an élite from which ordinary citizens are barred. But sometimes one will unwittingly toss a chunk of red meat to the new-media visionaries by appearing not to accord the Internet revolution the full measure of respect it deserves—as John Markoff, a technology reporter for the Times, did in 2003 in an interview with Online Journalism Review. Jeff Jarvis, a veteran editor, publisher, and columnist, and, starting in September, a professor at the City University of New York’s new journalism school, posted the interview on his blog, BuzzMachine, with his own post-facto reactions added, so that it reads, in part, this way:

MARKOFF: I certainly can see that scenario, where all these new technologies may only be good enough to destroy all the old standards but not create something better to replace them with. I think that’s certainly one scenario.
JARVIS: Pardon me for interrupting, but that made no frigging sense whatsoever. Can you parse that for me, Mr. Markoff? Or do you need an editor to speak sense? How do new standards “destroy” old standards? Something won’t become a “standard” unless it is accepted by someone in power—the publishers or the audiences. This isn’t a game of PacMan.
MARKOFF: The other possibility right now—it sometimes seems we have a world full of bloggers and that blogging is the future of journalism, or at least that’s what the bloggers argue, and to my mind, it’s not clear yet whether blogging is anything more than CB radio.
JARVIS: The reference is as old-farty and out-of-date as the sentiment. It’s clear that Markoff isn’t reading weblogs and doesn’t know what’s there.
Hey, fool, that’s your audience talking there. You should want to listen to what they have to say. You are, after all, spending your living writing for them. If you were a reporter worth a damn, you’d care to know what the marketplace cares about. But, no, you’re the mighty NYT guy. You don’t need no stinking audience. You don’t need ears. You only need a mouth.

To live up to its billing, Internet journalism has to meet high standards both conceptually and practically: the medium has to be revolutionary, and the journalism has to be good. The quality of Internet journalism is bound to improve over time, especially if more of the virtues of traditional journalism migrate to the Internet. But, although the medium has great capabilities, especially the way it opens out and speeds up the discourse, it is not quite as different from what has gone before as its advocates are saying.

Societies create structures of authority for producing and distributing knowledge, information, and opinion. These structures are always waxing and waning, depending not only on the invention of new means of communication but also on political, cultural, and economic developments. An interesting new book about this came out last year in Britain under the daunting title “Representation and Misrepresentation in Later Stuart Britain: Partisanship and Political Culture.” It is set in the late seventeenth and early eighteenth centuries, and although its author, Mark Knights, who teaches at the University of East Anglia, does not make explicit comparisons to the present, it seems obvious that such comparisons are on his mind.

The “new media” of later Stuart Britain were pamphlets and periodicals, made possible not only by the advent of the printing press but by the relaxation of government censorship and licensing regimes, by political unrest, and by urbanization (which created audiences for public debate). Today, the best known of the periodicals is Addison and Steele’s Spectator, but it was one of dozens that proliferated almost explosively in the early seventeen-hundreds, including The Tatler, The Post Boy, The Medley, and The British Apollo. The most famous of the pamphleteers was Daniel Defoe, but there were hundreds of others, including Thomas Sprat, the author of “A True Account and Declaration of the Horrid Conspiracy Against the Late King” (1685), and Charles Leslie, the author of “The Wolf Stript of His Shepherd’s Cloathing” (1704). These voices entered a public conversation that had been narrowly restricted, mainly to holders of official positions in church and state. They were the bloggers and citizen journalists of their day, and their influence was far greater (though their audiences were far smaller) than what anybody on the Internet has yet achieved.

As media, Knights points out, both pamphlets and periodicals were radically transformative in their capabilities. Pamphlets were a mass medium with a short lead time—cheap, transportable, and easily accessible to people of all classes and political inclinations. They were, as Knights puts it, “capable of assuming different forms (letters, dialogues, essays, refutations, vindications, and so on)” and, he adds, were “ideally suited to making a public statement at a particular moment.” Periodicals were, by the standards of the day, “a sort of interactive entertainment,” because of the invention of letters to the editor and because publications were constantly responding to their readers and to one another.

Then as now, the new media in their fresh youth produced a distinctive, hot-tempered rhetorical style. Knights writes, “Polemical print . . . challenged conventional notions of how rhetoric worked and was a medium that facilitated slander, polemic, and satire. It delighted in mocking or even abusive criticism, in part because of the conventions of anonymity.” But one of Knights’s most useful observations is that this was a self-limiting phenomenon. Each side in what Knights understands, properly, as the media front in a merciless political struggle between Whigs and Tories soon began accusing the other of trafficking in lies, distortions, conspiracy theories, and special pleading, and presenting itself as the avatar of the public interest, civil discourse, and epistemologically derived truth. Knights sees this genteeler style of expression as just another political tactic, but it nonetheless drove print publication toward a more reasoned, less inflamed rhetorical stance, which went along with a partial settling down of British politics from hot war between the parties to cold. (Full-dress British newspapers, like the Times and the Guardian, did not emerge until the late eighteenth and early nineteenth centuries, well into this calmer period and long after Knights ends his story.) At least in part, Internet journalism will surely repeat the cycle, and will begin to differentiate itself tonally, by trying to sound responsible and trustworthy in the hope of building a larger, possibly paying audience.

American journalism began, roughly speaking, on the later Stuart Britain model; during Colonial times it was dominated by fiery political speechmakers, like Thomas Paine. All those uplifting statements by the Founders about freedom of the press were almost certainly produced with pamphleteers in mind. When, in the early nineteenth century, political parties and fast cylinder printing presses developed, American journalism became mainly a branch of the party system, with very little pretense to neutral authority or ownership of the facts.

A related development was the sensational penny press, which served the big cities, whose populations were swollen with immigrants from rural America and abroad. It produced powerful local newspapers, but it’s hard to think of them as fitting the priesthood model. William Randolph Hearst’s New York papers, the leading examples, were flamboyant, populist, opinionated, and thoroughly disreputable. They influenced politics, but that is different from saying, as Glenn Reynolds says of the Hearst papers, that they “set the agenda for public discussion.” Most of the formal means of generating information that are familiar in America today—objective journalism is only one; others are modern academic research, professional licensing, and think tanks—were created, in the late nineteenth and early twentieth centuries, explicitly to counter the populist inclinations of various institutions, one of which was the big media.

In fact, what the prophets of Internet journalism believe themselves to be fighting against—journalism in the hands of an enthroned few, who speak in a voice of phony, unearned authority to the passive masses—is, as a historical phenomenon, mainly a straw man. Even after the Second World War, some American cities still had several furiously battling papers, on the model of “The Front Page.” There were always small political magazines of all persuasions, and books written in the spirit of the old pamphlets, and, later in the twentieth century, alternative weeklies and dissenting journalists like I. F. Stone. When journalism was at its most blandly authoritative—probably in the period when the three television broadcast networks were in their heyday and local newspaper monopoly was beginning to become the rule—so were American politics and culture, and you have to be very media-centric to believe that the press established the tone of national life rather than vice versa.

Every new medium generates its own set of personalities and forms. Internet journalism is a huge tent that encompasses sites from traditional news organizations; Web-only magazines like Slate and Salon; sites like Daily Kos and NewsMax, which use some notional connection to the news to function as influential political actors; and aggregation sites (for instance, Arts & Letters Daily and Indy Media) that bring together an astonishingly wide range of disparate material in a particular category. The more ambitious blogs, taken together, function as a form of fast-moving, densely cross-referential pamphleteering—an open forum for every conceivable opinion that can’t make its way into the big media, or, in the case of the millions of purely personal blogs, simply an individual’s take on life. The Internet is also a venue for press criticism (“We can fact-check your ass!” is one of the familiar rallying cries of the blogosphere) and a major research library of bloopers, outtakes, pranks, jokes, and embarrassing performances by big shots. But none of that yet rises to the level of a journalistic culture rich enough to compete in a serious way with the old media—to function as a replacement rather than an addendum.

The most fervent believers in the transforming potential of Internet journalism are operating not only on faith in its achievements, even if they lie mainly in the future, but on a certainty that the old media, in selecting what to publish and broadcast, make horrible and, even worse, ignobly motivated mistakes. They are politically biased, or they are ignoring or suppressing important stories, or they are out of touch with ordinary people’s concerns, or they are merely passive transmitters of official utterances. The more that traditional journalism appears to be an old-fashioned captive press, the more providential the Internet looks.

Jay Rosen, a professor of journalism at New York University who was the leading champion of “civic journalism” even before there was an Internet, wrote in the Washington Post in June that he started his blog, PressThink, because “I was tired of passing my ideas through editors who forced me to observe the silences they kept as professional journalists. The day after President Bush was re-elected in 2004, I suggested on my blog that at least some news organizations should consider themselves the opposition to the White House. Only by going into opposition, I argued, could the press really tell the story of the Bush administration’s vast expansion of executive power. That notion simply hadn’t been discussed in mainstream newsrooms, which had always been able to limit debate about what is and isn’t the job of the journalist. But now that amateurs had joined pros in the press zone, newsrooms couldn’t afford not to debate their practices.”

In PressThink, Rosen now has the forum that he didn’t before; and last week he announced the launch of a new venture, called NewAssignment.Net, in which a “smart mob” of donors would pay journalists to pursue “stories the regular news media doesn’t do, can’t do, wouldn’t do, or already screwed up.” The key to the idea, in Rosen’s mind, is to give “people formerly known as the audience” the assigning power previously reserved for editors. “NewAssignment.Net would be a case of journalism without the media,” he wrote on PressThink. “That’s the beauty part.”

Even before the advent of NewAssignment.Net, and even for people who don’t blog, there is a lot more opportunity to talk back to news organizations than there used to be. In their Internet versions, most traditional news organizations make their reporters available to answer readers’ questions and, often, permit readers to post their own material. Being able to see this as the advent of true democracy in what had been a media oligarchy makes it much easier to argue that Internet journalism has already achieved great things.

Still: Is the Internet a mere safety valve, a salon des refusés, or does it actually produce original information beyond the realm of opinion and comment? It ought to raise suspicion that we so often hear the same menu of examples in support of its achievements: bloggers took down the 2004 “60 Minutes” report on President Bush’s National Guard service and, with it, Dan Rather’s career; bloggers put Trent Lott’s remarks in apparent praise of the Jim Crow era front and center, and thereby deposed him as Senate majority leader.

The best original Internet journalism happens more often by accident, when smart and curious people with access to means of communication are at the scene of a sudden disaster. Any time that big news happens unexpectedly, or in remote and dangerous places, there is more raw information available right away on the Internet than through established news organizations. The most memorable photographs of the London terrorist bombing last summer were taken by subway riders using cell phones, not by news photographers, who didn’t have time to get there. There were more ordinary people than paid reporters posting information when the tsunami first hit South Asia, in 2004, when Hurricane Katrina hit the Gulf Coast, in 2005, and when Israeli bombs hit Beirut this summer. I am in an especially good position to appreciate the benefits of citizen journalism at such moments, because it helped save my father and stepmother’s lives when they were stranded in New Orleans after Hurricane Katrina: the citizen portions of the Web sites of local news organizations were, for a crucial day or two, one of the best places to get information about how to drive out of the city. But, over time, the best information about why the hurricane destroyed so much of the city came from reporters, not citizens.

Eyewitness accounts and information-sharing during sudden disasters are welcome, even if they don’t provide a complete report of what is going on in a particular situation. And that is what citizen journalism is supposed to do: keep up with public affairs, especially locally, year in and year out, even when there’s no disaster. Citizen journalists bear a heavy theoretical load. They ought to be fanning out like a great army, covering not just what professional journalists cover, as well or better, but also much that they ignore. Great citizen journalism is like the imagined Northwest Passage—it has to exist in order to prove that citizens can learn about public life without the mediation of professionals. But when one reads it, after having been exposed to the buildup, it is nearly impossible not to think, This is what all the fuss is about?

Oh My News seems to attract far more readers than any other citizen-journalism site—about six hundred thousand daily by its own count. One day in June, readers of the English-language edition found this lead story: “Printable Robots: Advances in Inkjet Technology Forecast Robotic Origami,” by Gregory Daigle. It begins:

From the diminutive ASIMO from Honda to the colossus in the animated film Iron Giant, kids around the world know that robots are cool yet complex machines. Advances in robotics, fuel plans from NASA that read like science fiction movie scripts.
Back on Earth, what can we expect over the next few years in robot technology for the consumer?
Reprogram your Roomba? Boring.
Hack your Aibo robot dog? Been there.
Print your own robot? Whoa!

On the same day, Barista of Bloomfield Avenue, the nom de Web of Debbie Galant, who lives in a suburban town in New Jersey and is one of the most esteemed “hyperlocal bloggers” in the country, led with a picture from her recent vacation in the Berkshires. The next item was “Hazing Goes Loony Tunes,” and here it is in its entirety:

Word on the sidewalk is that Glen Ridge officialdom pretty much defeated the class of 2007 in the annual senior-on-freshman hazing ritual yesterday by making the rising seniors stay after school for several minutes in order to give freshmen a head start to run home. We have reports that seniors in cars, once released from school, searched for slow-moving freshman prey, while Glen Ridge police officers in cars closely tracked any cars decorated with class of 2007 regalia. Of course, if any freshman got pummelled with mayonnaise, we want to know about it.

What is generally considered to be the most complete local citizen-journalism site in the United States, the Northwest Voice, in Bakersfield, California (which also has a print version and is owned by the big daily paper in town), led with a story called “A Boost for Business Women,” which began:

So long, Corporate World. Hello, business ownership—family time, and happiness. At least, that’s how Northwest resident Jennifer Meadors feels after the former commercial banking professional started her own business for Arbonne International, a skin care company, about eight months ago. So far, it’s been successful, professionally and personally.

Another much-praised citizen-journalism site is headquartered in the suburbs of Washington, D.C. Last month, it sponsored a contest to pick the two best citizen-journalism stories; the prize was a free trip to a conference held by Oh My News, in Seoul. One winner was Liz Milner, of Reston, Virginia, for a story that began this way:

Among the many definitions of “hero” given in The American Heritage Dictionary is “A person noted for special achievement in a particular field.” Reston is a community of creative people, so it seems only right that our heroes should be paragons of creativity. Therefore, I’m nominating Reston musician and freelance writer, Ralph Lee Smith for the post of “Local Hero, Creative Category.”

Through his performances, recordings, writings, teaching and museum exhibitions, this 78-year-old Reston resident has helped bring new life to an art form that had been on the verge of extinction—the art of playing the mountain dulcimer. He has helped to popularize the repertoire for this instrument so that now mountain music is everywhere—even in slick Hollywood films.

In other words, the content of most citizen journalism will be familiar to anybody who has ever read a church or community newsletter—it’s heartwarming and it probably adds to the store of good things in the world, but it does not mount the collective challenge to power which the traditional media are supposedly too timid to take up. Often the most journalistically impressive material on one of the “hyperlocal” citizen-journalism sites has links to professional journalism, as in the Northwest Voice, or Chi-Town Daily News, where much of the material is written by students at Northwestern University’s Medill School of Journalism, who are in training to take up full-time jobs in news organizations. At the highest level of journalistic achievement, the reporting that revealed the civil-liberties encroachments of the war on terror, which has upset the Bush Administration, has come from old-fashioned big-city newspapers and television networks, not Internet journalists; day by day, most independent accounts of world events have come from the same traditional sources. Even at its best and most ambitious, citizen journalism reads like a decent Op-Ed page, and not one that offers daring, brilliant, forbidden opinions that would otherwise be unavailable. Most citizen journalism reaches very small and specialized audiences and is proudly minor in its concerns. David Weinberger, another advocate of new-media journalism, has summarized the situation with a witty play on Andy Warhol’s maxim: “On the Web, everyone will be famous to fifteen people.”

Reporting—meaning the tradition by which a member of a distinct occupational category gets to cross the usual bounds of geography and class, to go where important things are happening, to ask powerful people blunt and impertinent questions, and to report back, reliably and in plain language, to a general audience—is a distinctive, fairly recent invention. It probably started in the United States, in the mid-nineteenth century, long after the Founders wrote the First Amendment. It has spread—and it continues to spread—around the world. It is a powerful social tool, because it provides citizens with an independent source of information about the state and other holders of power. It sounds obvious, but reporting requires reporters. They don’t have to be priests or gatekeepers or even paid professionals; they just have to go out and do the work.

The Internet is not unfriendly to reporting; potentially, it is the best reporting medium ever invented. A few places, like the site on Yahoo! operated by Kevin Sites, consistently offer good journalism that has a distinctly Internet, rather than repurposed, feeling. To keep pushing in that direction, though, requires that we hold up original reporting as a virtue and use the Internet to find new ways of presenting fresh material—which, inescapably, will wind up being produced by people who do that full time, not “citizens” with day jobs.

Journalism is not in a period of maximal self-confidence right now, and the Internet’s cheerleaders are practically laboratory specimens of maximal self-confidence. They have got the rhetorical upper hand; traditional journalists answering their challenges often sound either clueless or cowed and apologetic. As of now, though, there is not much relation between claims for the possibilities inherent in journalist-free journalism and what the people engaged in that pursuit are actually producing. As journalism moves to the Internet, the main project ought to be moving reporters there, not stripping them away.

© The New Yorker



By Michael Wilmington, Tribune movie critic. This review contains material from Wilmington's Screen Gems column

December 29, 2006

"`The Rules of the Game' taught me the rules of the game."

--Robert Altman

There are about a dozen genuine miracles in the history of cinema, and one of them is Jean Renoir's supreme 1939 tragi-comedy "The Rules of the Game," opening Friday in a new, digitally restored 35 mm print at the Music Box Theatre.

"Rules" was not immediately embraced. Released on the eve of World War II, it was at first the worst flop of Renoir's career: savaged by critics and audiences, cut to shreds and then lost for years. Its reputation was kept alive by devotees, and when "Rules" was reconstructed and re-released in 1962, it was instantly hailed as one of the century's greatest films. So it is.

Renoir's masterpiece--whose echoes can be seen in films from Ingmar Bergman's "Smiles of a Summer Night" to Robert Altman's "Gosford Park"--is a love roundelay that's also the most complex, astonishingly varied and brilliant of all ensemble comedy-drama films, a tale of frantically crisscrossing amours, set to the music of Mozart, Saint-Saens and Chopin, in a form that switches freely from farce to romance, satire to tragedy.

The movie is set mostly at the French chateau of the wealthy French-Jewish aristocrat Marquis Robert de La Chesnaye (Marcel Dalio) at a weekend shooting party involving his spurned mistress (Mila Parely); his tantalizing Austrian wife, Christine (Nora Gregor); her over-idealistic aviator lover, Andre Jurieu (Roland Toutain); and the lover's best buddy, Octave (Renoir himself, playing a role he intended for his actor brother Pierre).

Meanwhile, in the servants' quarters, a violent triangle simmers and finally erupts among the marquis' gamekeeper Schumacher (Gaston Modot), the local poacher-turned-manservant Marceau (Julien Carette) and the gamekeeper's saucy wife--and maid to Christine--Lisette (Paulette Dubost).

For the cinephiles who see it over and over, "Rules" creates a little world unto itself: the marquis' chateau, in its sunny tree-lined grounds, grand ballroom, bustling kitchen, packed corridors and creamy boudoirs, filled with the marquis' cherished mechanical toys and with guests and servants, who are alive and spontaneous down to the smallest roles. The fact that almost everyone is so personable, funny and full of life and wit--and so determined to amuse themselves--lulls us to a degree. The story keeps letting in more and more moments of sadness, gradually darkening into pathos and tragedy.

"Rules" is about the conflict between artifice and nature, passion and manners. Love--often adulterous love--preoccupies most of the characters, and tragedy springs from the promiscuous way the affairs overlap and spill into each other. Meanwhile, Renoir shows us a transcontinental flight landing, a near-fatal car crash, a devastating rabbit hunt and a final bacchanalian party full of games, sports, music hall diversions and a series of illicit affairs.

It all ends tragically because two of the participants--dour Schumacher and romantically obsessed Jurieu--follow a different set of rules than the ones that govern almost everyone else; theirs is a more rigid morality, a more possessive romantic desire. The rule of the aristocrats is subtle and selfish, that of the servants, theatrical and servile. But Jurieu believes that you marry the woman you love, even if she is married already, and Schumacher that you shoot poachers, romantic or otherwise. "Every game has its rules," Renoir once said in explanation of his film. "If you break the rules, you lose the game." And the movie itself is a game, whose structure and rules we grasp more and more as it winds through delight to chaos to bleak aftermath.

That inclusiveness and that broad emotional and stylistic range have made "Rules of the Game" one of the most admired of all 20th Century films. "Tout le monde a ses raisons," ("Everyone has his reasons") is something Renoir says in his "Rules" role as the sidekick and hanger-on Octave, trying to explain to de La Chesnaye the sad, "terrible" fact that in any tragedy or conflict, all parties may truly believe in their own viewpoint. The phrase may be deceptive, since it suggests that Renoir never chooses sides but envelops the world in a mushy and all-embracing generosity. This isn't true: Renoir was famously a man of the left in the '30s, but what is still so extraordinary about his work is the way he can extend understanding even to those characters with whom he disagrees, sometimes profoundly.

You can see "Rules of the Game" now in the best version since the film's first, tumultuous release in 1939--and you should. No other film has a final effect quite like "Rules." One walks away from it drained and exhilarated, after experiencing a whole world and seemingly every possible emotion in a few swift golden hours.

- - -

`The Rules of the Game'


Directed by Jean Renoir; written by Renoir with Carl Koch; photographed by Jean Bachelet; edited by Marguerite Renoir; production designed by Eugene Lourie, Max Douy; music by Mozart, Monsigny, Salabert, Saint-Saens, Chopin; produced by Claude Renoir Sr. In French, with English subtitles. A Janus Films release; opens Friday at the Music Box Theatre. Running time: 1:46.

The marquis ............. Marcel Dalio

Christine ............... Nora Gregor

Andre ................... Roland Toutain

Octave .................. Jean Renoir

Lisette ................. Paulette Dubost

Schumacher .............. Gaston Modot

Marceau ................. Julien Carette

Genevieve ............... Mila Parely

No MPAA rating (discussions of adultery and some violence).

© Chicago Tribune, Michael Wilmington





Release Date: 1939

Roger Ebert / Feb 29, 2004

I've seen Jean Renoir's "The Rules of the Game" in a campus film society, at a repertory theater and on laserdisc, and I've even taught it in a film class -- but now I realize I've never really seen it at all. This magical and elusive work, which always seems to place second behind "Citizen Kane" in polls of great films, is so simple and so labyrinthine, so guileless and so angry, so innocent and so dangerous, that you can't simply watch it, you have to absorb it.

But for many years you couldn't even watch it properly. Without going into detail about how it was butchered after its first release and then finally restored into a version that was actually longer than the original running time, let it be admitted that it always looked dim and murky -- even on the Criterion laserdisc. Prints shown on TV or 16 mm were even worse. Now comes a new Criterion DVD of the film so clear it sparkles, it dances, and the famous deep-focus photography allows us to see clearly what all those characters are doing lurking about in the background. Like Criterion's restoration of "The Children of Paradise," it is a masterpiece reborn.

The movie takes the superficial form of a country house farce, at which wives and husbands, lovers and adulterers, masters and servants, sneak down hallways, pop up in each other's bedrooms and pretend that they are all proper representatives of a well-ordered society. Robert Altman, who once said "I learned the rules of the game from 'The Rules of the Game,'" was not a million miles off from this plot with his "Gosford Park" -- right down to the murder.

But there is a subterranean level in Renoir's film that was risky and relevant when it was made and released in 1939. It was clear that Europe was going to war. In France, left-wing Popular Front members like Renoir were clashing with Nazi sympathizers. Renoir's portrait of the French ruling class shows them as silly adulterous twits, with the working classes emulating them within their more limited means.

His film opens with a great national hero, the aviator Andre Jurieu, completing a heroic trans-Atlantic solo flight (only 10 years after Lindbergh) and then whining on the radio because the woman he loves did not come to the airport to meet him. Worse, the characters in the movie who do try to play by the rules are a Jewish aristocrat, a cuckolded gamekeeper, and the embarrassing aviator.

This did not go over well with French audiences on the eve of war. The film is preceded by a little introduction by jolly, plump Renoir, looking like an elderly version of the cherub so often painted by his father Auguste. He recalls that a man set fire to his newspaper at the movie's premiere, trying to burn the theater down. Audiences streamed out, the reviews were savage, and the film was a disaster, even before it was banned by the occupying Nazis. The French like to be funny, but they do not much like to be made fun of. "We were dancing on a volcano," Renoir says.

After a prologue at the airport and an elegant establishing scene in Paris, most of the action takes place at La Coliniere, the country estate of Robert de la Chesnaye (Marcel Dalio) and his wife Christine (Nora Gregor). Among the guests are Robert's mistress Genevieve (Mila Parely), and the aviator (Roland Toutain), who is in love with Christine.

During the course of the week, Robert and his gamekeeper Schumacher (Gaston Modot) apprehend a poacher named Marceau (Julien Carette), who is soon flirting with Christine's very willing maid Lisette (Paulette Dubost) -- who is married to Schumacher. Another ubiquitous guest is the farcical Octave, played by Renoir himself, who casts himself as a clown to conceal his insecurity. And there are others -- a retired general, various socialites, neighbors, a full staff of servants.

On the Criterion disc there is a fascinating conversation, filmed many years later on the steps of the chateau, between Renoir and the actor Dalio (you may remember him as the croupier in "Casablanca"). Together they try to decide whether the story has a center, or a hero. Renoir doubts it has either. It is about a world, not a plot. True to his nature, he plunged into the material, improvised as he went along, trusted to instinct. He will admit to one structural fact: The murder at the end is foreshadowed by the famous sequence in the middle of the film, where the guests blaze away with hunting rifles, killing countless birds and rabbits. The death of one rabbit in particular haunts the film's audiences; its final act is to fold its paws against its chest.

As for a center, well, it may come during that same hunting scene, when Christine is studying a squirrel with binoculars and lowers them to accidentally see her husband Robert kissing his mistress Genevieve. He had promised his wife the affair was over. And so in a way it was; when we see them together, they seem to be playing at the intrigue of adultery without soiling themselves with the sticky parts. This leads Christine, an innocent soul who believes in true love, to wonder if she should take mercy on the aviator. Soon after, Marceau is smooching with Lisette and Schumacher is chasing him around the corridors. It is when the upstairs and downstairs affairs accidentally mingle that the final tragedy takes place (in true farcical style, over a case of mistaken identity).

Much has been made of the deep focus in "Citizen Kane" -- the use of lighting and lenses to allow the audience to observe action in both the front and back of deep spaces. "The Rules of the Game" is no less virtuoso, and perhaps inspired Welles. Renoir allows characters to come and go in the foreground, middle distance and background, sometimes disappearing in the distance and reappearing in closeup. Attentive viewing shows that all the actors are acting all of the time, that subplots are advancing in scarcely noticeable ways in the background while important action takes place closer to the camera.

All of this comes to a climax in the famous sequence of the house party, which includes an amateur stage performance put on for the entertainment of guests and neighbors. This sequence can be viewed time and again, to appreciate how gracefully Renoir moves from audience to stage to backstage to rooms and corridors elsewhere in the house, effortlessly advancing half a dozen courses of action, so that at one point during a moment of foreground drama a door in the background opens and we see the latest development in another relationship. "In the years before the Steadicam," says the director Wim Wenders, "you wonder how a film camera could possibly have been so weightless."

It is interesting how little actual sexual passion is expressed in the movie. Schumacher the gamekeeper is eager to exercise his marital duties, but Lisette cannot stand his touch and prefers for him to stay in the country while she stays in town as Christine's maid. The aviator's love for Christine is entirely in his mind. The poacher Marceau would rather chase Lisette than catch her. Robert and his mistress Genevieve savor the act of illicit meetings more than anything they might actually do at them.

It is indeed all a game, in which you may have a lover if you respect your spouse and do not make the mistake of taking romance seriously. The destinies of the gamekeeper and the aviator come together because they both labor under the illusion that they are sincere. I said they are two of the three who play by the rules of the game -- but alas, they are not playing the same game as the others.

It is Robert (Dalio) who understands the game and the world the best, perhaps because as a Jew he stands a little outside of it. His passion is for mechanical wind-up mannequins and musical instruments, and there is a scene where he unveils his latest prize, an elaborate calliope, and stands by proudly as it plays a tune while little figures ring bells and chime notes. With such a device, at least everything works exactly as expected.

Dalio and Renoir discuss this scene in their conversation. Dalio says he was embarrassed, because it seemed simple to stand proudly beside his toy, yet they had to reshoot for two days. Yes, says Renoir, because the facial expression had to be exact -- proud, and a little embarrassed to be so proud, and delighted, but a little shy to reveal it. The finished shot, ending with Robert's face, is a study in complexity, and Renoir says it may be the best shot he ever filmed. It captures the buried theme of the film: That on the brink of war they know what gives them joy but play at denying it, while the world around them is closing down joy, play and denial.



Gina Kim

January 4, 2007

SACRAMENTO -- The reports of its death were greatly exaggerated. Pundits declared irony dead after the Sept. 11 terrorist attacks, but five years later, not only is it alive -- it ruled 2006.

"Given the way the world has gone, we're in more need of irony," says Jerry Herron, a professor of English and American studies at Wayne State University in Detroit. "What 9/11 produced was a world where pettifoggery, obfuscation, half-truths and double dealing are more rampant than ever before."

Irony, the grande dame of the zeitgeist, is pop culture's weapon against hopelessness, experts say. It's a tool that transfers power to the powerless. And in a time of a continuing war, citizens jailed without charges, and a government that knows what we're checking out at the library and searching for on the Internet, it's a key to understanding what's happening to the world -- with a little humor too.

"The reason irony is more fun than the truth is that it's more fun than the truth," Herron says. "Jon Stewart is fun to watch because it seems to give the feeling of being in a club where everyone's smarter than everyone else. And the whole world seems to be pretty dumb."

Along with raised eyebrows and knowing looks, irony puts us in the know. We become members of the sorority of sagacity. And it gives us some semblance of controlling what we're being told, experts say.

But what is irony?

Merriam-Webster says it's using words to mean the opposite of their literal meaning. But in today's cultural climate, irony is anything said with your tongue firmly planted in your cheek. It's sarcastic humor with an exaggerated message.

"At a time when people feel they're being lied to and treated as though they're too stupid to get it, it lets you regain the claim on your own intelligence," Herron says. "I'm going to tell a lie, too, but I'm going to tell it knowingly and as a joke."

Irony has existed in Western culture ever since there was a Western culture, says author Ken Kalfus.

"I'm not sure what ironic forms there are in, say, Afghan culture," he says. "You need a pretty well-developed idea of the individual. . . . Irony is one of the first things that goes in a dictatorship."

The smug smile of irony bares its teeth when conditions are ripe -- there's overarching disillusionment with the establishment and the public is trying to separate fact from fiction.

"You can look historically at times that seem to be caught up in not telling the truth and irony flourishes," Herron says. "Like in 18th Century England, when King George was going mad on the throne and the world was falling apart, irony was a thriving form."

It came to dominate our culture in the 1970s as a way to question authority, says Martin Kaplan, associate dean of the Annenberg School for Communication at the University of Southern California.

"It was a response to things like governmental lying and to the commercialization and commoditization and corporatization of everything," he says. "The only appropriate way to react to what was going on was to be a smart aleck and to say, `Yeah, right,' to any assertion by the powerful. There was always someone trying to make a sucker out of you."

In the immediate aftermath of the 9/11 terrorist attacks, Graydon Carter, editor of Vanity Fair magazine and co-founder of the defunct satirical Spy magazine, was quoted as saying, "It's the end of the age of irony. Things that were considered fringe and frivolous are going to disappear."

But instead of ushering in an age of sincerity, when people help each other, join together and believe in a better world, we're more interested than ever in whether celebrities wear underwear on a regular basis, and we have no qualms about elbowing that person reaching for the last PlayStation 3 on the shelf.

"Many people, I was probably among them, said irony was dead and in the face of horrors unimaginable, the only appropriate response was authenticity and realism -- postmodern winking was no longer appropriate," Kaplan says. "It probably was about six months that that lasted. Irony is very much alive and well."

It's an era in which comedian Stephen Colbert's ironic roast of President Bush at a White House correspondents dinner is now legend. And, according to a study by Harvard University's Institute of Politics, more 18-to-24-year-olds watch "The Daily Show With Jon Stewart" than read the print edition of a major newspaper.

"It's become very hard to figure out what is real and what isn't," says stand-up comic Marc Maron. "By nature of that, there's more irony.

"The idea that O.J. Simpson was about to publish a book about what he would have done had he killed his wife and her [friend], that should be an ironic joke, but it's completely real and horrifying."

The lines of reality are so blurred, irony is the only way to formulate some type of understanding, says Maron, who will be on Comedy Central's "Comedy Central Presents" Jan. 12.

"That's why fake news is resonating much more with people than the real news," he says. "Because when you can exaggerate or be sarcastic or be ironic, the real message is revealed. Sometimes it takes irony to cut through a lot of the bull."

Plus, it can sometimes drive messages home more efficiently than the truth.

"People don't like honesty. They find it boring or too draining for them to engage with," Maron says. "If something's put across in a smug or condescending way, it's got some safety built into it -- you can take it in, laugh at it, and it assumes you're in on the joke."

Today's irony can run the gamut from a simple wisecrack, knee-jerk and silly, to something much darker, says John Tomasic, managing editor of the online pop culture commentary Pop and Politics. But in the process, it can bring people together, as long as you know you're not immune.

"You use it to mock, but you use it best if you're prepared to be mocked," he says.

While irony cuts across age groups, ethnicities and gender, it is best understood by the younger generation, who have known irony their entire lives, Tomasic says.

"Young people, by and large, are not confused about the rules of the game. They have grown up with irony. It's their best friend and worst enemy. It's their playground pal, their video game instructor, their movie script writer," he says. "Young people are not at all confused, for example, about `The Daily Show,' a source of bafflement to the serious men and women in the skyboxes of life."

In the months after the 9/11 attacks, author Kalfus began to formulate a novel in his head based on the media's glorification of each victim.

"Everyone who was killed supposedly was a perfect husband, a perfect wife, a perfect father or mother. They were all heroes," Kalfus says. "I wanted to see them as people, not the way they were killed but by the way they lived their lives. And most probably lived messy lives, like the rest of us."

Kalfus' book, "A Disorder Peculiar to the Country" (Ecco, 256 pages, $24.95), was published in July and centers on a couple, each of whom thought the other had been killed in the terrorist attacks -- and each was secretly happy about it.

"Loosely speaking, irony is a method of humor that deflates a cliche or deflates a particular way of thinking by showing that it's taking itself too seriously," he says. "It's just one literary method of making us see the world a little more clearly in the fog of myths."

Irony never accepts anything at face value. Instead, it delves deeper, looks further and questions every premise, Kalfus says. And in the process, some form of the truth is discovered.

"You try to puncture a cliche in a straightforward way, you only dent it," Kalfus says. "Irony, by ridiculing the supports for the cliche, can actually bring it down."

Irony shifts the reins of power -- taking information from the top, altering it, changing it, and maybe in the process, getting closer to the truth. Like a sword of disillusionment, irony is a defense mechanism that gives the public some say in world events that are unfolding.

"What has happened since 9/11 to promote the recidivism of our ironic culture was the run-up to the war," USC's Kaplan says. "It's hard to see what's going on, and when you do see what's going on, to not be cynical about the nature of the world and the nature of power.

"And so making fun of it, using it as the grist for parody, as masters such as Jon Stewart and Stephen Colbert are doing, seems to be the best way to let the air out of the balloon of power."

Copyright © 2007, Chicago Tribune






By Sara Paretsky. Sara Paretsky, creator of the V.I. Warshawski novels, is the author of the collection of essays "Writing in an Age of Silence," coming from Verso/Norton this month

April 1, 2007

The night we began our invasion of Iraq -- March 20, 2003 -- I was speaking at the Toledo public library. The day before, my speakers bureau told me that the library wanted me to change my proposed remarks; my talk on how the Patriot Act was affecting writers, readers and libraries was too political. The library wanted instead the kind of humorous anecdotes that other writers used. With war imminent, the library felt that a criticism of the Bush administration was an insult to local families who had relatives in the service.

Days before I spoke, the Dixie Chicks had roused widespread fury by criticizing the impending war and saying they were "ashamed" that the president came from their home state, Texas. Mobs destroyed Chicks albums, lead singer Natalie Maines got death threats, and a Colorado Springs radio station suspended two disc jockeys for playing the Chicks' music.

I had experienced some of that anger. My 2003 novel "Blacklist," in which my detective, V.I. Warshawski, encounters the powers the Patriot Act gives the FBI, prompted readers to send e-mails or post on my Web site telling me I hated America and loved terrorists. Over and over I read, "I will never pick up one of her books again." Hard words to see. They shook me up all over again when I went back to look at them while writing this essay.

Confrontation scares me; when the Toledo library asked my speakers bureau to help rein me in, I thought seriously about changing my talk. Then I thought of the times -- too many of them -- that I had caved in to this kind of pressure, and remembered the sense of degradation I suffered afterward.

When Disney made a movie based on my detective, I caved in to studio pressure not to talk about my experience with the moviemakers. When editors have cut scenes from my books that they found offensive, I've let it go without an argument. The many times as a young adult I let my parents veto any moves away from their authority still sit uneasy in my gut 40 years later.

The lecture I planned to give in Toledo addressed issues of censorship and silence. If I let my voice be muffled, could I ever speak in public again?

When I walked into the auditorium, I was shaking so badly that I had to clutch the lectern throughout my talk. Five hundred people came out in a heavy rainstorm to hear me, and when I was done, they gave me a standing ovation. Afterward, many people said the administration campaign to deride and marginalize all opposition was so effective that they had felt alone and isolated, as if each was the only person in America to doubt the truth of what we were being told by the White House.

The Patriot Act gives the Department of Justice power to demand all records from a library or store on the basis of either a national security letter or a subpoena. To get a subpoena, the government does not have to show probable cause to a judge, it merely has to tell a judge that the target of its investigation "may" have a connection to a terrorist organization.

If you receive such a letter or subpoena, the act says you can go to prison for up to 5 years for telling anyone about it -- your spouse, your lawyer, your boss. Two years ago, in fact, four librarians in Windsor, Conn., faced Justice Department sanctions, including a gag order that carried a threat of imprisonment, for consulting the library's lawyer when they received a demand to produce user documents.

According to The Washington Post, the FBI is issuing 30,000 of these letters each year now, compared with 300 a year before the act was passed. When Post reporter Barton Gellman interviewed FBI agents to find out how many of these letters led directly or indirectly to uncovering a terror threat, the answer was none.

We don't know how many libraries have been served with these letters, because -- like most of us -- librarians fear imprisonment. But a survey conducted by the University of Illinois' Library Research Center in Champaign, in which libraries could report anonymously, found that about 11 percent of libraries nationwide had received subpoenas or national security letters in the act's first year.

The Patriot Act gives the Department of Justice sweeping "sneak-and-peek" authority, meaning agents can break into our houses when we're away and take our books, papers and computer files without ever telling us they broke in.

I used that authority as part of a scene in "Blacklist," and that scene, as well as the ambiguity around the FBI's search for an Egyptian teenager, sparked the outpouring of fury over my work.

Because my books deal with issues of law, justice and society, in Europe they have always been considered political. In the United States, it was only when I wrote explicitly about the Patriot Act that readers felt I was a political writer. Although "Fire Sale," my next novel, looked at social justice issues on the South Side of Chicago, readers and reviewers lauded me for returning to what they saw as my proper function of storyteller and entertainer.

I'm not a fan of propaganda novels, novels written to show four legs are better than two, or that women deserve to be raped and beaten, or that men are testosterone-crazed thugs. But I don't know how to divorce myself and my fictions from the urgent concerns of my life: Who is allowed to speak? Who listens? Who is silenced?

The more I thought about these questions -- in the wake of the response to "Blacklist," and in the wake of my experience in Toledo -- I felt the need to go back into my own life, my own history, to understand why issues of speech and silence matter so much to me.

Herman Melville talked about the "the silent grass-growing mood" that writers need in order to write. I think of that as a kind of interiority, a way of getting as deep inside oneself as possible in order to write in an honest, authentic voice. That kind of introspection can be painful as well as rewarding, but it can't take place easily in an atmosphere of fear or in the tumult of the marketplace. In such an atmosphere it's hard to hear our own voice, to find out what we really have to say.

I grew up in a tangled nest of outsideness. My father was the first Jew the University of Kansas hired for a tenured position -- a daring experiment that left us as the town giraffes, always on display, not treated with hostility, but as oddities.

As is true for many Jews of my generation, the Holocaust cast a long shadow over our lives. For my family, as for many, America was a haven, the Bill of Rights its most valuable treasure. Much of my family was obliterated by a government that imprisoned and killed its citizens for no reason except their religion, or their race, or their political beliefs. This history has made me acutely sensitive to acts by the American government that infringe on our cherished rights.

In my family I was also a kind of outsider. The only girl among five children, I was constrained from the age of 9 to give up my own childhood in becoming the caretaker of my young brothers. My childhood home was run on the lines of the old-fashioned patriarchy, where what boys did mattered and what girls did was second-rate.

Because the pervasive segregation codes of the time extended to Jews as well as to African-Americans, my parents bought a house in the country, which led to my living a life of intense isolation. My brothers could use the family cars to come and go as they wished, but I was forbidden to do anything outside the home except attend school. At home, I looked after the small children and cleaned the house. When my youngest brother started school, he didn't know I was his sister; he thought he had two mommies.

Every Saturday, from the time I was 7 until I left my parents' home at 17, I baked for my father and brothers. My parents were highly educated and highly literate. But though they borrowed money to send my brothers to expensive colleges far from home, they sent me to secretarial school and told me that if I wanted a university education, it would be at my own expense, and in my home state. So I worked my way through the University of Kansas.

My parents would not permit me to leave Kansas. When I finally found the strength to leave, I set out for Chicago. In 1968, I started graduate work at the University of Chicago. My father told me not to be surprised if I failed, because Chicago was a first-rate school and mine was a second-rate mind. That criticism, in many different guises, was a constant of my childhood. There are still days when the words start to sink me, and I lack the energy to rise above their effects.

When I started graduate school, I could barely speak above a whisper. A good friend from those years says that when she first met me, she thought she was going deaf when I spoke.

It was a long, slow journey for me, from the silence of the margins to speech. Because of my upbringing, I don't think I will ever turn away from questions of power and powerlessness, in my fiction, or in my lectures. The questions of who gets to speak, and who listens, are central to how I view the world. These are the issues I explore in my new collection of essays "Writing in an Age of Silence." I hope through these essays I can persuade some of the readers who responded so angrily to "Blacklist" that silence is more dangerous and more crippling than dissenting from power.


Copyright © 2007, Chicago Tribune



By Jose Rivera. Jose Rivera is the Oscar-nominated writer of "The Motorcycle Diaries." "Massacre (Sing to Your Children)" is his second play with Teatro Vista and the Goodman Theatre. It opens Monday

April 1, 2007

Radiant bombs fell on the sad, unlucky city of Baghdad the cool spring night I started writing my play "Massacre (Sing to Your Children)."

On CNN, the bombs exploded without sound, so the full measure of the terror they caused that night was unknowable to the many Americans who, like me, nevertheless watched appalled and impotent, nauseated and angry.

It amazed me that a nation that finally, on 9/11, felt the full fury and madness of a massive bombing on its own soil would so utterly fail to be sensitized by that terrifying violence. Knowing how horrible it was, how could we wish that horror on other people? Instead, like an angry child, we struck back at the nearest perceived enemy, completely lacking in compassion, completely unable to understand that the terror we felt that day would be equaled, then infinitely magnified, by the terror we were poised to unleash.

That spring night in 2003, I started to write. I felt it was necessary for me to channel the disgust and anger and sorrow I felt as we launched another unnecessary war on a distant people too feeble and poor to strike back. I wanted to capture something from that night of radiant bombs. What is it that really frightens us? What do we really know, as Americans, about the nature of mass violence? What do we know about revenge? What is it like to kill?

At the same time, I had been preoccupied with the resurgence of the Bush Dynasty. We on the (slight) left had, through Bill Clinton, gotten rid of the first Bush. Suddenly, though, like some kind of appalling lightning, the Bushes were back. We hadn't gotten rid of the influence of this political family. We had only temporarily forgotten it, pushed it out of our minds and headlines as new national preoccupations took over. Happy in our amnesia, we thought it was over. But that was just an illusion.

I realized that this resurrection of the Bush family was a metaphor for so many other things we think we've successfully dealt with: AIDS, the environment, race hatred, class tensions, ignorance on a massive scale, world hunger, the slow disappearance of water. But we've never solved these things. We've merely fallen into a periodic and comforting amnesia about them.

These were some of the themes and emotions that collided in my mind to eventually become my play "Massacre (Sing to Your Children)," which began previews March 24 at the Goodman Theatre in a co-production with Teatro Vista.

Now, happily, I know enough about theater to know that a good play is not a political tract, but a living, breathing metaphor acted out in front of an engaged populace. So, in "Massacre," you're not going to hear political speeches lamenting Bush and his sick war. (That kind of talk is reserved for essays such as this.) Instead you'll meet several ordinary citizens of a small town called Granville, N.H.

One night these seven people get together to kill their neighbor Joe.

For years Joe has been terrorizing this town through random violence, rape, extortion, destruction of crops, show trials. The play asks: What happens to people when they are driven to such extremes? When is violence justified? What do you feel after an act so brutal? How do you go back to your job the next day?

In the course of the long development of this play -- workshops in New York, Los Angeles and Chicago -- many people have offered their own interpretations of the character of Joe. He's Bush. He's Saddam. He's God. He's the Devil. He's America. He's the neo-cons.

The existence of Joe raises questions about the nature of fear, of guilt, of complicity -- about the potential for evil in all of us. And about the ability in all of us to redeem ourselves, to make something good out of evil, to save our friends and neighbors.

To balance things out a bit, let me just say that the play isn't all grim and theoretical. Blood courses through the veins of this play, and it's not just Joe's blood -- there's the blood of sexual awakening caused by the liberation of Granville, the blood of hope, the gallows humor of a group of people who have gone through hell together. There's music and the poetry of desire. There's that amazing human talent for regeneration, optimism and rebirth. There is even some singing and the presence of children.

Why a play about all these preoccupations? Why not a film?

I feel that theater is the place where we can best discuss and examine the demons and angels of our inner nature. The place where society is examined, put on trial and tested. Where our values as a community are discussed and challenged. Where change is not only talked about but demonstrated. What happens when we act altruistically? When we speak truth to power? What does it look like when the masks worn by our leaders are taken off?

I love the theater for its communal energy. For its living, electric dialogue between audience and performer. I love it because it's not lonely. It's not distant, or in the past. The actors are performing those actions, right now, right in front of you. And the potential impact of your communion with those people and those ideas and feelings is staggering. It's enough to make you change your mind about the world you thought you knew.

I was restless as the bombs fell on Baghdad, and I turned to the theater to help me understand what happened to the country I loved so much, the country that began to frighten me with its hubris and cynicism. I didn't want this time to pass without registering my opposition, without leaving something behind for future audiences to know what our time in this bloody history felt like.

And that's the cool thing about theater and why it fills me with such hope. Theater is drenched in dialogue among characters, dialogue between actor and spectator, and between today and our future -- a future greatly shaped, influenced and colored by the wars and art we make today.


Copyright © 2007, Chicago Tribune



By Larry Heinemann. Larry Heinemann is the author of "Close Quarters," "Paco's Story" -- recipient of the National Book Award -- and the memoir "Black Virgin Mountain." He is the writer-in-residence a

April 1, 2007

When our war with Iraq began four years ago, I was teaching at Hue University in central Vietnam on a Fulbright Scholarship.

During the Vietnam War, I was a soldier with the Army and have written two novels and a memoir about it. I call them my accidental trilogy. I did not intend to devote my writing career to the war, but sometimes you are given a story and you simply have to do the best you can with it.

All that winter of 2003, half a world away, I read with increasingly sour resentment of our president's intention -- all but shouted with a braggart's ease -- to make war on Saddam Hussein and Iraq. War was coming, and as an ex-soldier I felt a clear sense of dread settle over me, as it did for a lot of veterans I know. Surely my government cannot mean to do this, I said to myself. The stupidity of it is only too clear.

Whatever else we can say about the war, at the very least we can agree that any enterprise begun with a lie is doomed to fail. Our war in Vietnam began with a fairy tale about North Vietnamese gunboats attacking two Navy destroyers. In Iraq, the president flat-out lied to us about Hussein's weapons of mass destruction, as well as the man's connivance with the terrorists responsible for the cowards' attack on the World Trade Center.

Since then there has been much political talk, shouting matches, sandbagging and blowing smoke about the war. But all of this, then and since, is irrelevant to ordinary soldiers given the job of fighting the war down where the rubber meets the road. Trust me, once you strap it on, lock and load, and step off, the rightness or the wrongness of war becomes irrelevant in the face of the more urgent, practical matters of grind-it-out soldiers' work. And for soldiers who may well be on their second or third combat tours, a grind is exactly what the war has become -- screw the politics, the palaver and the propaganda.

As an ex-soldier and a writer, I have to admit that I cannot wait to hear the story of the war told by my brothers of the blood. What to make of the day-in, day-out grind for the "boots on the ground" and what I can only describe as the heart-killing madness?

Let us first make a distinction between the news of the war, history's roughest draft, given to us by journalists and reporters who have trouble seeing past their own noses (it's not their fault, it's the nature of their work), and the story of the war given to us by ordinary soldiers who breathed it in and sweated it out and came home with enough of a sense of themselves to sit down and tell their story.

For them it will be a story so compelling to their spirit and imagination that it simply will not go away; the story that makes them grind their teeth in their sleep and makes their hearts sore at the thought of it; the story that informs their spirit's sense of right and wrong, their choice of a life's work, their attitudes and hearts' desires, how they raise their children and how they look at the world.

The first writing will be a rush of war stories, pure and simple. Body count stories. These are always the stories quickly done, lavishly praised and as quickly forgotten. It will be several years after the war that the more considered writing will emerge, after the writers among them have had a chance to do their homework. To "set and drank and thank," as my friend Riley would say. Every generation of soldiers has to find its own language and a way of telling its story, and that takes time.

And from where will these writers emerge? They will come from the ranks of ordinary soldiers, the ground pounders, and not the generals, the journalists or the talking heads. I would bet the ranch on it. The best of the writing has always come through the plain-spoken language of the common soldier and has always expressed a simple human truth. This is what I saw, this is what I did and this is what I had become. The simple validation of a viciously unique and remarkable time in their lives. The exasperating stupidity of war, never mind the obscenity of its pornographic violence. The slab-of-meat hopelessness, and the physical and spiritual exhaustion. The compacted grief that backs up like the junk in a 4-inch soil pipe all the way to the curb. The lingering aftermath of vivid sensation -- odors, textures and terror, literally body memories. What a friend of mine calls "the black years." The deeply felt humanity and exquisite generosity rediscovered only afterward, and you connect yourself, once more and gratefully, with the rest of the human race.

The story will be bluntly honest in its moral obscenities and street-language barracks slang, darkly acid in its ironic humor, poignantly shocking in its ambiguities, angrily bitter in its conclusions, but then the story of war has always been about the snap-your-head-back downward path to wisdom. And we will be upbraided, perhaps even shamed, by what the stories tell us about ourselves.

The novels that emerged from WW II were thick and elaborate blockbusters of realism. The novels that emerged from Vietnam were, strangely enough, ghost stories (both American and Vietnamese). What form the stories of the war in Iraq, both American and Iraqi, will take is anybody's guess.

But I'll tell you one thing. I can't wait to find out what really happened.

Copyright © 2007, Chicago Tribune




John Kass  April 18, 2007

If you don't have children, you can't understand how millions of parents are dealing with Cho Seung Hui, who put a bullet into his own head after killing 32 people, most of them students, at Virginia Tech this week.

Unless you're a parent, you can't know. We think of the dead, also of our own kids and of a fearful silence. We think of how quiet our own homes would be if the unspeakable happened, how still those 32 households will remain.

It's not your fault if you don't have kids, if you wanted them but it didn't work out, if you're young and single, or if you've never wanted children. There's no crime in it, and many of us have been there, knowing that the childless are often burdened, expected to cover for the rest of us at work during the holidays or when something happens with the kids at home.

So I don't want to hurt any feelings. But I'm thinking about moms and dads reading this today, and only if you're a parent can you comprehend that look we give each other when Virginia Tech is mentioned.

Before children, you may have thought you cared about kids as much as any parent could care, until you have your own. Only then do you realize, quietly, without any speeches, how wrong you've been. Only then does it open and bloom, for your own kids, for the children of others.

I could be wrong on this as I am wrong on so many things, but that's how it blossomed for us, with our own sons, born 12 springs ago now after years and years of trying.

A colleague who earns his living dissecting what others write noted that in the next few days, there will be much nonsense written about the murders at Virginia Tech, as writers try to make sense out of what happened, as we reveal ourselves as foolish graspers.

He's probably correct. I'm a parent, and I'm grasping. That's what parents do at times like this. The other thing we do is shut down. So when the horrifying news was breaking on Monday, I tried shutting down.

There is a hill in the large park in our suburb, looking down on the soccer fields where the traveling teams practice. From there, on that sunny evening with innocent blood in the news, moms and dads watched their children run. On the hill, no one spoke of the killings, but you could tell we knew by how we avoided it.

After soccer practice ended, my boys trudged over, my wife had their baseball uniforms ready, and they changed in the van to get ready for a Little League game. The baseball diamonds are on the other side of the big hill, and I stood in foul territory with the other coaches, with Jerry and Marty, and somebody mentioned Virginia Tech, but we dropped it without a word and warmed up our pitchers and hit grounders to the boys.

The other coaches in the other dugout did the same. We focused on a well-played, low-scoring game, the kids intense, fastballs smacking catcher's mitts, a few of the boys with Adam's apples beginning to bob in their throats, girls hanging along the fence watching the boys, adolescence rushing up on us parents.

So in my foolish grasping, I tried to slow time down, to confine it to the baseball diamond, to these years when we parents fool ourselves into thinking we can use our will to keep them safe.

Something happens when you have children. Is it chemical? Social? Who knows. The thing is, you can't stop thinking about them. You touch your wallet with their photographs inside.

Or, you get that rare night out alone with your wife, or the two of you with another couple and driving to the restaurant you imagine discussing issues, ideas, books, film or some play. Though such topics are released between that first drink and the appetizer, just bring up the kids and look into the eyes of a mom or dad across the table and you'll see what I mean.

Obviously, not all parents care. The list of abused and battered and ruined children is always increasing, and most of us avoid it, except for those who can't, the police and teachers and social workers and judges who deal with the unspeakable. They deal with broken children every day, in ones and twos and threes. But now we've got 32 dead, and the shooter, too, and we're forced to confront it.

Parents aren't saints, and I'm no father of the year. We work through our own crippling neuroses, though it cripples you even more to see your faults in your children, wondering how these are visited upon them, whether by blood or example.

One way we cripple them is by hovering, as Americans become increasingly risk-averse, thinking we can protect our young by ordering their lives. I do it. Surely, others see it in themselves. We wonder about the timid American culture yet to come. Then something like Virginia Tech happens.

That said, most parents do their best. Many of us drive old cars and wear old shoes and old suits and save for college.

So if you don't have kids and your house is quiet, then it is merely a quiet house. It's always been that way, comfortably ordered. But if you have kids and the house is strangely still, your mind isn't still, and what you're thinking about is this: Are they safe?

And now there are all those households with that stillness pressed upon them.

Copyright © 2007, Chicago Tribune




By Timothy Garton Ash

Iraq is over. Iraq has not yet begun. These are two conclusions from the American debate about Iraq.

Iraq is over insofar as the American public has decided that most U.S. troops should leave. In a Gallup poll earlier this month, 71% favored “removing all U.S. troops from Iraq by April 1 of next year, except for a limited number that would be involved in counter-terrorism efforts.” CNN’s veteran political analyst, Bill Schneider, observes that in the latter years of the Vietnam War, the American public’s basic attitude could be summarized as “either win or get out.” He argues that it’s the same with Iraq. Most Americans have now concluded that the U.S. is not winning. So: Get out.

Because this is a democracy, their elected representatives are following where the people lead. Although the Democrats did not get the result they wanted in an all-night marathon on the floor of the Senate, from Tuesday to Wednesday this week, no one in Washington doubts that this is the way the wind blows. Publicly, there’s still a sharp split along party lines, but leading Republicans are already breaking ranks to float their own phased troop-reduction plans.

President Bush says he’s determined to give the commanding general in Iraq, David Petraeus, the troop levels he asks for when he reports back in September, and the White House may hold the line for now against a Democrat-controlled Congress. Leading Republican contenders for the presidency are still talking tough. However, the most outspoken protagonist of hanging in there to win in Iraq, John McCain, has seen his campaign nosedive. Even if the next president is a hard-line Republican, all the current Washington betting will be confounded if he does not, at the very least, rapidly reduce the number of U.S. troops in Iraq. After all, that’s what the American people plainly say they want.

The American people’s verdict is remarkably sharp on other aspects of the Iraq debacle. In a poll for CNN, 54% said the United States’ action in Iraq was not morally justified. In one for CBS, 51% endorsed the assessment - shared by most of the experts - that U.S. involvement in Iraq was creating more, not fewer, terrorists hostile to the United States. If once Americans were blind, they now can see. For all its plenitude of faith, this is a reality-based nation.

So Iraq is over. But Iraq has not yet begun. Not yet begun in terms of the consequences for Iraq itself, the Middle East, the United States’ own foreign policy and its reputation in the world. The most probable consequence of rapid U.S. withdrawal from Iraq in its present condition is a further bloodbath, with even larger refugee flows and the effective dismemberment of the country. Already, about 2 million Iraqis have fled across the borders, and more than 2 million are internally displaced.

Now a pained and painstaking study from the Brookings Institution argues that what its authors call “soft partition” - the peaceful, voluntary transfer of an estimated 2 million to 5 million Iraqis into distinct Kurdish, Sunni and Shiite regions, under close U.S. military supervision - would be the lesser evil. The lesser evil, that is, assuming that all goes according to plan and that Americans are prepared to allow their troops to stay in sufficient numbers to accomplish that thankless job - two implausible assumptions. A greater evil is more likely.

In an article for the Web magazine Open Democracy, Middle East specialist Fred Halliday spells out some regional consequences. Besides the effective destruction of the Iraqi state, these include the revitalizing of militant Islamism and enhancement of the international appeal of the Al Qaeda brand; the eruption, for the first time in modern history, of internecine war between Sunni and Shiite, “a trend that reverberates in other states of mixed confessional composition”; the alienation of most sectors of Turkish politics from the West and the stimulation of authoritarian nationalism there; the strengthening of a nuclear-hungry Iran; and a new regional rivalry pitting the Islamic Republic of Iran and its allies, including Syria, Hezbollah and Hamas, against Saudi Arabia, Egypt and Jordan.

For the United States, the world is now, as a result of the Iraq war, a more dangerous place. At the end of 2002, what is sometimes tagged “Al Qaeda Central” in Afghanistan had been virtually destroyed, and there was no Al Qaeda in Iraq. In 2007, there is an Al Qaeda in Iraq, parts of the old Al Qaeda are creeping back into Afghanistan and there are Al Qaeda emulators spawning elsewhere, notably in Europe.

Osama bin Laden’s plan was to get the U.S. to overreact and overreach itself. With the invasion of Iraq, Bush fell slap-bang into that trap. The U.S. government’s own latest National Intelligence Estimate, released this week, suggests that Al Qaeda in Iraq is now among the most significant threats to the security of the American homeland.

The U.S. has probably not yet fully woken up to the appalling fact that, after a long period in which the first motto of its military was “no more Vietnams,” it faces another Vietnam. There are many important differences, but the basic result is similar: The mightiest military in the world fails to achieve its strategic goals and is, in the end, politically defeated by an economically and technologically inferior adversary.

Even if there are no scenes of helicopters evacuating Americans from the roof of the U.S. Embassy in Baghdad, there will surely be some totemic photographic image of national humiliation as the U.S. struggles to extract its troops.

Abu Ghraib and Guantanamo have done terrible damage to the U.S. reputation for being humane; this defeat will convince more people around the world that it is not even that powerful. And Bin Laden, still alive, will claim another victory over the death-fearing weaklings of the West.

In history, the most important consequences are often the unintended ones. We do not yet know the longer-term unintended consequences of Iraq. Maybe there is a silver lining hidden somewhere in this cloud. But as far as the human eye can see, the likely consequences of Iraq range from the bad to the catastrophic.

Looking back over a quarter of a century of chronicling current affairs, I cannot recall a more comprehensive and avoidable man-made disaster.

© Los Angeles Times 2007



By Ty Burr
The Boston Globe

August 10 2007

The world of cinema mourned the passing of two titans last week. Ingmar Bergman was 89, Michelangelo Antonioni, 94. Front-page obituaries celebrated their accomplishments, and the nightly news tossed up 30-second clips of "The Seventh Seal" (Bengt Ekerot's Death coldly moving his pawn) and "Blow-Up" to remind us of their greatness.

The two filmmakers almost seemed relevant again.

In truth, they're anything but. The hallowed days of post-World War II art-house cinema -- that period from the mid-1950s to the late 1970s when people went to the movies expecting metaphysical transcendence to go with their popcorn -- are long gone, and all the Criterion DVDs in the world won't bring them back.

I was reminded of this the morning Bergman died, as I put together the Globe obituary. One of our department interns -- a 20-year-old student who knows her pop history better than most -- admitted she had never actually seen any of his movies. After a pause, she confessed she had always confused Ingmar Bergman with Ingrid Bergman, and what did he actually do?

The next day was worse: She hadn't heard of Antonioni at all.

I relate this not to beat up on the intern -- whose only crime, after all, is youth -- but to underline that culture moves on, that today's artistic rebel is tomorrow's old fart, and that the ground beneath cinema's feet has irrevocably shifted since Bergman and Antonioni were in their prime.

Can "Persona" and "The Passenger" speak to college students today? Of course they can -- if you can get them to consider the films in the first place. But why should they when there are so many movies unspooling right under their noses? There's a new Wes Anderson coming out in the fall and bleeding-edge videos to watch on YouTube, and that Irish rock musical you still haven't seen, not to mention the Korean horror flick -- and wait, they've re-edited "Grindhouse" as two separate films for DVD.

All things an attuned young moviegoer should attend to, and that's the tip of the iceberg. Yes, Bergman and Antonioni also made movies you had to see to be culturally conversant -- 40 years ago. The Swede's last major work was "Fanny and Alexander," in 1982. Antonioni's was "The Passenger," in 1975. I don't think my intern's parents were out of high school by then.

The more pressing question is one of the past: What place does cinema's back catalog have for today's filmgoer? What place should it have?

The answer used to be obvious. If you wanted to see an old movie three decades ago -- and you were lucky enough to live in a big city -- you went to a revival theater and joined the worshipers at the altar. The art houses played new work, too, by Bergman and Antonioni and Godard and Bunuel and dozens of other once-necessary names. They were churches of cinema. No wonder the films were so serious.

The video revolution killed off the revival houses and the community that went with them. Ironically, it's far easier to find older movies today -- the DVDs are right there on Amazon, and the prints are great -- but you have to watch them on your own. What was once a vibrant communal experience has become a solitary pursuit. As with so many other things in the 21st century, movie history is a Balkanized casualty of an attention-deficit culture.

Also vanished is a sense of higher purpose in filmgoing. You didn't walk out of "The Seventh Seal" talking about the movie, you came out talking about life. The great art-house and foreign-language classics of the '50s, '60s and '70s were good, and they were good for you. But that makes them sound like medicine now, and who wants that when there's so much tasty fast food available?

Today's must-see directors (the Coen brothers, Wes and Paul Thomas Anderson, Spike Jonze and Michel Gondry and others) get at their truths through whimsy and trickiness. They know we need the spoonful of sugar, and they're skilled at amusing us. The ironic detachment that the great post-war directors saw as a symptom of malaise has become the primary way of doing business.

So if the Globe intern and her hipster friends do get around to checking out Bergman's "The Virgin Spring," say, or Antonioni's "Blow-Up," the slow pacing and high seriousness may seem even more foreign than the language. Perhaps they'll get bored and switch back to "Entourage."

Or perhaps not. Maybe they'll be inspired to dive deeper and rediscover Fellini, Ozu, the French New Wave, the German renaissance of the '70s -- a cinematic past with forgotten claims on the present. Maybe they'll recognize these films as the products of a time when movies weren't afraid to tackle the big questions. Maybe they'll wonder what we're scared of today.

Copyright © 2007, The Chicago Tribune



Claudia Driefus

One morning recently at a posh desert resort in Tucson, Arizona, hundreds of the computer industry's elite huddled in small groups, swapping shop talk, making deals. In one corner, Michael Kinsley, editor of Microsoft's new on-line magazine, Slate, chatted with Bernard Vergnes, head of Microsoft's European operations. In another, Jim Barksdale, president of Netscape, talked sotto voce with Steve Case, chairman of America Online.

What brought together the 500 cybernauts was the annual PC Forum, spearheaded by Esther Dyson, a 44-year-old writer, futurist, philanthropist and venture capitalist who has become one of the most influential figures and certainly the most influential woman in all the computer world.

How Dyson makes her living is hard to classify. She is the editor and publisher of the widely respected newsletter Release 1.0 (and of its Eastern European cousin, named with the double pun Rel-East). She is chairwoman of the Electronic Frontier Foundation, an industry-financed civil-liberties watchdog group. She runs EDventure Ventures, an investment fund that plugs Western dollars into Eastern European technology startups. And she manages this conference, which is to the computer world something like what the Cannes Festival is to film.

Dyson comes from a famously brilliant clan. Her father is the physicist and author Freeman Dyson. Her mother, Verena Huber-Dyson, is a mathematician who graduated from the Swiss institute where Albert Einstein studied. Her brother, George, is the world's leading expert on the kayak. At 14, Esther began studying Russian; at 16, she was at Harvard; at 25, she was reporting for Forbes, and by 30, she was analyzing technology stocks for Wall Street. In 1982, Ben Rosen, now chairman of the Compaq Computer Corporation, asked her to help him put out his Rosen Electronics Letter, a pioneering publication about new technology, which the following year he sold to Dyson, along with PC Forum. Now, 13 years later, Release 1.0 circulates to 1,600 computer industry leaders attracted by its thoughtful inquiries into thorny issues like intellectual property. "What I try to do," Dyson says, "is find worthy ideas and people and get attention for them. I meet a lot of people, read a lot of stuff and try to promote new ideas."

Q: Microsoft's chairman, Bill Gates, is rumored to have once denounced you as a "socialist." Why?

DYSON: There was some misunderstanding. He thought I was going around saying that intellectual property should be free. Actually, as the Web expands, the big effect will be that intellectual property is likely to lose a lot of its market value.

Let me explain. In the past, there was a relative shortage of creative work. There was a limited amount of content and people had a limited amount of time, and both were pretty much matched at current price levels. Now [since the Net became popular], there's much less cost associated with the distribution of content. If you put a book or a magazine up, all the costs attributable to paper, printing, inventory, holding publications in stores go away. The other thing that is happening is that everybody can get up on the Net, sing their own songs, write their own poetry. You no longer need a publishing house to get a book published. So economics would say that since the supply of content is increasing, the costs of duplication and distribution are diminishing and people have the same amount of time or less, we are all going to pay less.

The idea of copyright will still be important, first, because it is the law and it is moral. Second, a content producer will still want to control the integrity of a work. Even if I get no royalties, I want to make sure that my work isn't dumbed down and sold under someone else's name.

Whenever I talk about this, content producers go nuts. All I'm saying is that you need to figure out how to be paid for producing content because the business models are going to change.

Q: If intellectual property is to have little monetary value, how will writers, artists, composers make a living?

A: They'll get money for performances, readings, for going on line and interacting with their audiences. The free copies of content are going to be what you use to establish your fame. Then you go out and milk it. Also, a lot of creators will get paid by audience gatherers rather than the public. Content will be sponsored somewhat in the way network television programming is today.

© 1996


 By David Brooks

 I don’t know about you, but while the events of the past five years haven’t really changed the patterns of my everyday life, they’ve certainly transformed the way I see the world.
I used to see the world as a landscape of rolling hills. There were different nations, tribes and societies, but the slopes connecting those groups were gradual and hospitable. It seemed relatively easy to travel from society to society, to understand and commune with one another.

Globalization seemed to be driving events, the integration of markets, communications and people. It seemed to be creating, with fits and starts, globalized individuals, who had one foot in a particular culture and another foot in a shared flow of movies, music, products and ideas.

I spent much of the 1990’s (that most deceptive decade) abroad — in Europe, the former Soviet Union and the Middle East. People everywhere seemed to want the same things: to live in normal societies, to be free, to give their children better lives.

Now it seems that was an oversimplified view of human nature. It’s true people everywhere want to satisfy their desires, but they also require moral systems that will restrain and give shape to their desires. It’s true people everywhere love their children, but they also require respect and recognition and they will sacrifice their own lives, and even their children’s lives, in wars for status. It’s true people everywhere hate oppression, but they also require identity, and human beings build identities by collectively hating groups that represent what they are not.

All these other parts of human nature impel people to become tribal. People form groups to realize their need for status, moral order and identity. The differences between these groups can be vast and irreconcilable.

Now my mental image of the landscape of humanity is not made up of rolling hills. It’s filled with chasms, crevices, jagged cliffs and dark forests. The wildernesses between groups seem stark and perilous.

People who live in societies where authority is united — as under Islam — are really different from people who live in societies where authority is divided. People in honor societies — where someone will kill his sister because she has become polluted by rape — are different from people in societies where people are judged by individual intentions. People who live in societies where the past dominates the present are different from people who live in societies where the future dominates the present.

Samuel Huntington once looked at the vast differences between groups and theorized that humanity is riven into different civilizations. That’s close but not quite right. Today’s divisions aren’t permanent. Instead, groups are constantly being formed and revised in a process of Schumpeterian creative destruction.

Yesterday’s high-tech entrepreneurs look like pikers compared to the social entrepreneurs of today. Islamist entrepreneurs have quickly built the world’s most vibrant and destructive movement by combining old teachings, invented traditions, imagined purities and new technologies. The five most important people in the Arab world, according to a recent survey, are the leaders of Hezbollah, Iran, Hamas, Al Qaeda and the Muslim Brotherhood. Microsoft’s market conquest is nothing compared to that.

Other and more benign groups are being created as well: Pentecostal sects, Hugo Chávez populists and whatever groups are invisibly forming among left-behind peasants in India and China.

The chief driver of events right now is not only globalization — the integration of economies and peoples. It’s also the contest among cultures over the power of consecration — the power to define what is right and wrong. Rising hegemons like Iran (and the U.S.) see themselves not only as nations but also as moral movements.

Since 9/11, the U.S. has had little success in influencing distant groups. Americans blew the postwar administration of Iraq because they assumed they were liberating a nation sort of like their own. And yet I can’t seem to renounce my own group, which is America. It would feel like cultural suicide to repress the central truths of my society, that all human beings are endowed with inalienable rights and democracy is the most just and effective form of government.

The hard lesson of the last five years — that we live in a jagged world filled with starkly different and contesting groups — makes democracy promotion more difficult but more necessary. Only democratic habits will prevent the inevitable clash of the tribes from turning into a war of nuclear annihilation.



By Geoffrey Stone

August 29, 2007

"The first thing we do, let's kill all the lawyers."

Alberto Gonzales' sorry tenure in the Bush administration would seem to give credence to Shakespeare's oft-cited incitement against the legal profession.

The primary responsibility of the attorney general is to uphold the Constitution and laws of the United States in a fair and evenhanded manner. In failing to comprehend this responsibility, Gonzales compromised himself, his office, the Constitution and, ultimately, the president who appointed him.

The responsibility every attorney general owes the nation is to raise hard legal and constitutional questions whenever a president is tempted to overreach the limits of his authority. Gonzales, however, chose to function more like President Bush's personal legal strategist, doing everything in his power to justify Bush's apparent desire to authorize torture, deny detainees access to the writ of habeas corpus, order unlawful electronic surveillance and institute legal proceedings that defy due process of law.

There is no excuse, other than cronyism and personal weakness, for Gonzales' confusion about his appropriate role and, in point of fact, he and future officeholders could learn much from the extraordinarily disciplined and principled actions of some of his predecessors who also served our nation in perilous times.

After the outbreak of World War II, Atty. Gen. Robert Jackson warned the nation's prosecutors that "times of fear or hysteria" have often resulted in cries "for the scalps" of those with dissenting views. He exhorted his U.S. attorneys to steel themselves to be "dispassionate and courageous" in dealing with "so-called subversive activities."

After Franklin Roosevelt appointed Jackson to the U.S. Supreme Court, he was succeeded as attorney general by Francis Biddle. On Dec. 15, 1941, Biddle reminded the nation that in time of war, "hysteria and fear and hate" run high, and "every man who cares about freedom must fight to protect it for other men" as well as for himself. Even when Roosevelt pressured his attorney general to prosecute those who criticized his policies, Biddle resisted. Later, when the public began to call for the wholesale internment of individuals of Japanese descent, Biddle furiously opposed such a policy as "ill-advised, unnecessary and unnecessarily cruel."

In a face-to-face meeting with Roosevelt, Biddle told the president that such a program could not be justified "as a military measure." Although Roosevelt overrode Biddle's objections largely for political reasons, he later rightly observed that the episode had shown "the power of suggestion which a mystic cliche like 'military necessity' can exercise." He added sadly that the nation had missed a unique opportunity to "assert the human decencies for which we were fighting."

In 1971, the public began to learn that the FBI, the CIA, the National Security Agency and the Army had engaged in a widespread program of investigation and secret surveillance of Vietnam war protesters in an effort "to expose, disrupt and otherwise neutralize" the anti-war movement. A congressional committee found that the government, "operating primarily through secret informants," had "undertaken the secret surveillance of citizens on the basis of their political beliefs," and that the FBI had "developed over 500,000 domestic intelligence files" on public officials, journalists, entertainers, professors and ordinary citizens.

In the face of such revelations, and in his role as attorney general, Edward Levi created stringent guidelines that reiterated and reaffirmed the rights of all Americans by clearly and carefully circumscribing the investigative authority of the FBI. The "Levi guidelines" expressly prohibited the FBI from investigating, discrediting or disrupting any group or individual on the basis of protected 1st Amendment activity. These guidelines were rightly hailed as a major advance in law enforcement and a critical step forward in protecting the rights of Americans against overzealous and misguided government officials. Gonzales helped eviscerate the Levi guidelines during the Bush presidency.

Of course, it is not all Gonzales' fault. In truth, he should never have had the privilege of serving as attorney general. Jackson, Biddle and Levi were men of great intellectual distinction, integrity and character. Gonzales is not. But for his long-standing friendship with Bush, he would never have been, and should never have been, within hailing distance of a position of such responsibility. He was in over his head.

By failing to protect American values and individual liberties, Gonzales has not just discredited himself, his office and his profession. He has also compromised the Constitution.

"The first thing we do, let's kill all the lawyers."

It is worth recalling that these words were uttered in "Henry VI" not by a lawyer's disgruntled client, but by a conspirator in Cade's rebellion who was plotting to overthrow the rights and liberties of the English people.

"The first thing we do, let's kill all the lawyers."

It is men like Robert Jackson, Francis Biddle and Edward Levi who represent the highest ideals of public service and the true spirit of the legal profession. It is men like Alberto Gonzales who give the profession a bad name.

Geoffrey R. Stone is a professor of law at the University of Chicago.



By John Leonard

Susan Faludi, a relentless reporter, an unapologetic feminist and a brilliant scourge, begins her CAT scan of our traumatized psyche with a demurral: “The Terror Dream,” she says, is about only “one facet” of the American response to the hijacker bombings of Sept. 11: the cover story and screenplay promptly confabulated by our government ministers and news media heavies, a “security myth” and a “national fantasy” starring John Wayne and Dirty Harry as the Last of the Mohicans. But after escorting us briskly from witch hunts in Puritan New England to regime change and Manifest Destiny on the Great Plains and lynching bees in the Old South, from hostage-taking by Barbary pirates to sleeper cells in the cold war all the way up to a patriarchal White House and a quagmired Iraq, she concludes with a curse: “There are consequences to living in a dream.” We’ve sleepwalked into hallucination, regression and psychosis.

As in her best-selling “Backlash” (1991), which roughed up Robert Bly and Allan Bloom while debunking news media myths about “the man shortage” and “the infertility epidemic,” as well as her underappreciated “Stiffed” (1999), which construed the baffled manhood of laid-off Navy shipyard workers and McDonnell Douglas engineers, Citadel cadets and Charleston drag queens, porn stars and Promise Keepers, so in “The Terror Dream” a skeptical Faludi reads everything, second-guesses everybody, watches too much talking-head TV and emerges from the archives and the pulp id like an exorcist and a Penthesilea. Sept. 11 may have been as infamous a day as Pearl Harbor, but “the summons to actual sacrifice never came,” she writes. “No draft ensued, no Rosie the Riveters were called to duty, no ration cards issued, no victory gardens planted. ... What we had was a chest beater in a borrowed flight suit, instructing us to max out our credit cards.”

What we also had, after a hijacking of the meaning of the event by chicken hawks and theocons, were the Culture Wars redux. On the one side, all of a sudden contemptible, were grief counselors, sisterhood, “femocracy” and anything remotely “Oprahesque,” plus “girlie boys,” “dot-com geeks,” “Alan Alda clones,” “metrosexuals” and what Jerry Falwell called “the pagans, and the abortionists, and the feminists, and the gays and the lesbians,” as well as such uppity critics of American foreign policy as Susan Sontag, Katha Pollitt, Barbara Kingsolver and Naomi Klein, all of whom were instructed to return immediately to their assigned seats. On the other, triumphalist shore, making a Rocky/Rambo comeback, were traditional gender roles and rescue fantasies, traditional medieval torture and the “alpha male” and “manly man”: Duke in “The Searchers”; Rudy with his “command presence”; “Rumstud,” the “babe magnet” secretary of defense; and New York City’s firefighters, “Green Berets in red hats.”


How, Faludi wonders, did smoking Osama bin Laden out of his Tora Bora tunnel somehow morph, on the home front, into a “sexualized struggle between depleted masculinity and overbearing womanhood”? Answering this question takes her from ground zero to the Oval Office, the op-ed page, the Hollywood studio, network television, ’50s sci-fi, “penny-dreadful” Davy Crockett westerns, the daydreams of James Fenimore Cooper, the nightmares of Increase Mather, and the captivity narratives of brave and resourceful pilgrim and pioneer women. Along the way she interviews Jessica Lynch, who was written up first as a heroine of the war in Iraq and then as a victim, although she was neither. (A useful bookend here might have been the Pat Tillman story, about a young man who quit pro football to enlist in the Army, only to die in Afghanistan from friendly fire that the Pentagon lied about.) She debunks such wishful news media thinking as the post-9/11 rush to matrimony, “patriotic pregnancy” and a baby boomlet that never happened (not to mention articles in this newspaper, less factual than fanciful, about well-educated women opting out of high-powered careers and deciding to be moms instead). She disinters the true story of Cynthia Ann Parker, whose abduction at age 9 by Comanches in Texas in 1836 had to be improved upon by Alan Le May’s novel and John Ford’s film version of “The Searchers,” since Cynthia Ann seems to have ended up preferring her Comanche husband to her Anglo relatives. (In Le May’s novel Faludi finds the original “terror dream” — “the fear of a small helpless child, abandoned and alone in the night ... an awareness of something happening in some unknown dimension not of the living world.”) And she reminds us of indispensable history books by Richard Slotkin (“Regeneration Through Violence”), John Demos (“The Unredeemed Captive”) and Mary Beth Norton (“In the Devil’s Snare”).


What we gather from these books and Faludi’s is that the script America reverted to in the fall of 2001 was the oldest in our literary imagination, our frontier fear that savages (“dark-skinned, non-Christian combatants”) would seize our defenseless women while our girlie men were watching Oprah. Never mind that 9/11 had nothing to do with gender politics. If we weren’t invincible, we must have been impotent. Somehow, like Cynthia Ann’s kidnapping, “an assault on the urban workplace” (global capitalism’s edifice complex) had to be rewritten as “a threat to the domestic circle,” and so we willed ourselves “back onto a frontier where pigtailed damsels clutched rag dolls and prayed for a male avenger to return them to the home.” Think of the entire nation as a distressed damsel. Think of Homeland Security as Wyatt Earp. Think of hate radio and Fox News as Sergio Leone. Think of geopolitics as a video game. Think of “Death Wish,” “High Noon,” original sin, alien abduction, demonic possession, zombies, vampires, satanic day-care child molesters and job-stealing immigrant hordes.


There are other ways to look at 9/11, as anything from Armageddon to coup d’état. And other ways to account for an America so fearful that we feed the Bill of Rights to our Biggest Brother. Freud, Marx and Veblen are periscopes and magnifying glasses for oral fixation, overproduction and forced consumption. Through the green eyes of ecothink, nuclear winter and silent spring season the dread. Joseph Schumpeter’s “creative destruction” also comes to skittish mind. We are, besides, insecure and negligent in our parenthood and our citizenship, caught between a public sphere (bear garden, hippodrome, killing field) that feels hollow and a private sphere (sanctuary, holding cell) that feels besieged. We are no longer safe on the tribal streets, equally weightless in orbit or in cyberspace, tiddlywinks on the credit grid, lost and yet still stalked, void where prohibited. To the usual millennial heebie-jeebies, add a subprime mortgage mess and collateralized debt obligations up the Limpopo without a paddle.


But feminism is Faludi’s compass and her lens, her furnace and her fuel. Feminism — fierce, supple, focused, filigreed and chivalrous — has steered her inquiries and sensitized her apprehensions of a celebrity/media culture and national security state that honors men more as warriors, actors, cowboys, athletes and killers than for skilled labor, company loyalty, civic duty, steadfast fatherhood, homesteading, caretaking and community-building, and that tells women to lie down and shut up. Feminism, like a trampoline, has made possible this splendid provocation of a book, levitating to keep company with Hunter Thompson’s fear and loathing, Leslie Fiedler’s love and death and Edmund Wilson’s patriotic gore.


John Leonard, who reviews books for Harper’s Magazine and television for New York magazine, is writing a memoir.

© New York Times 2007 




Cambridge in autumn. The pathways of Harvard Yard are strewn with acorns. Sunlight plays on ivy-covered brick. Inside Emerson Hall, Helen Vendler takes the podium of her popular undergraduate lecture class, “Poets, Poems, Poetry.” A few late arrivals straggle in, seeking a rare empty seat. Armed with just a clip-on microphone and an extraordinary command of the lyric tradition, Vendler quickly runs through “The Grasshopper” by E. E. Cummings before alighting on the main topic of the day: Keats’s “To Autumn,” which she introduces simply as “one of the best poems in the English language.”


“Season of mists and mellow fruitfulness,” she reads aloud, her soft, Boston-tinged voice slightly breathy and filled with emotion. “Close-bosom friend of the maturing sun; / Conspiring with him how to load and bless / With fruit the vines that round the thatch-eves run; / To bend with apples the moss’d cottage-trees, / And fill all fruit with ripeness to the core.” Over the next hour, Vendler dissects the ode line by line, explaining how it moves outward from kernels and cores through an ever wider “sphere of expanding knowledge,” until “small gnats mourn,” “full-grown lambs bleat,” “hedge-crickets sing,” “the redbreast whistles,” and, in the poem’s final line, “gathering swallows twitter in the skies.” In her reading, Keats’s autumn, melancholy yet hopeful, has its own muted pleasures and sounds. “You wouldn’t have noticed the music of autumn if the nightingale were still singing,” she tells the class.


Vendler, who is 73, has been teaching since the early ’60s — for the last quarter-century at Harvard, where she is the A. Kingsley Porter university professor — but the lecture hall is not her only sphere of influence. She is also the leading poetry critic in America, the author of major books on Wallace Stevens, Keats and Shakespeare, and for a generation has been a powerful arbiter of the contemporary poetry scene. Her authoritative judgments have helped establish or secure the reputations of Jorie Graham, Seamus Heaney and Rita Dove, among many others. Eschewing fashionable theory, Vendler is a school of one, an impassioned aesthete who pays minute attention to the structures and words that are a poet’s genetic code. “She is a remarkably agile and gifted close reader,” the literary scholar Harold Bloom said. “I think there isn’t anyone in the country who can read syntax in poems as well as she can.”


For Vendler, syntax is not a mere technical matter but the order of a poet’s universe, a kind of secular scripture. Her office — as scholar, critic and teacher — is to serve Poetry, which she does, with the precision of a chemist and the devotion of a religious acolyte. Both qualities were in evidence this spring, when Vendler fiercely criticized a new collection of Elizabeth Bishop’s uncollected poetry, “Edgar Allan Poe & the Juke-Box,” assembled by Alice Quinn, the poetry editor of The New Yorker, who herself wields enormous clout in poetry circles. Even as other critics — including David Orr, in these pages — welcomed the book as an important addition to the Bishop oeuvre, Vendler, writing in The New Republic, said the volume “should have been called ‘Repudiated Poems.’ For Elizabeth Bishop had years to publish the poems included here, had she wanted to.” It would have been far better, in Vendler’s view, for Quinn to have published the drafts that went into Bishop’s published, polished “real poems” rather than “their maimed and stunted siblings,” adding: “I am told that poets now, fearing an Alice Quinn in their future, are incinerating their drafts.”


In October, Vendler elaborated on the controversy in an interview in her cozy, book-filled office at Harvard, not mincing words as she accused Quinn both of undermining Bishop’s legacy and of betraying something sacred, the poet’s personal trust.


“I would rather have had the drafts of the finished poems well before you got the rejected stuff from the trash can,” Vendler said, sitting on a chair facing the window and a life mask of Keats. “If you make people promise to burn your manuscripts” — as Kafka and (by legend) Virgil did — “they should,” Vendler insisted. “I think the ‘Aeneid’ should have been burned and Kafka’s works should have been burned, because personal fidelity is more important than art,” she said in her quiet, direct manner. “If I had asked somebody to promise to destroy something of mine and they didn’t do it I would feel it to be a grave personal betrayal. I wouldn’t care what I had left behind. It could have been the ‘Mona Lisa.’ ”


That assertion seems peculiar — even shocking — coming from one who has devoted her life to the study of literature. But it reflects the intense and sometimes lonely fervor Vendler brings to her vocation.


She can be harsh about those she sees as subordinating literature to an ideological agenda. In a review of David Denby’s “Great Books” (1996), the film critic’s account of how he returned to college, immersing himself in Columbia’s core curriculum, Vendler wrote, “Seeing the Columbia course use Dante and Conrad as moral examples is rather like seeing someone use a piece of embroidery for a dishrag with no acknowledgment of the difference between hand-woven silk and a kitchen towel.” In 2001, again in The New Republic, her main venue in recent years, Vendler took the critic James Fenton to task for his interpretation of Robert Frost’s 1942 poem “The Gift Outright,” a version of which was recited by the aging poet at the Kennedy inauguration in 1961. Fenton, in her view, had imposed a mistaken interpretation on a poem as much “about marriage as about colonials becoming Americans,” because “his politics has wrenched him into misreading it.” (Some argued Vendler herself was misreading the poem by choosing to ignore its subject matter.)


Vendler once wrote that the poets A. R. Ammons, John Ashbery and James Merrill embraced “the thankless cultural task of defining how an adult American mind not committed to any single ideological agenda might exist in a self-respecting and veracious way in the later 20th century.” That could also apply to her.


Vendler was born Helen Hennessy in Boston in 1933. Although she was encouraged to read poetry — her mother, a former schoolteacher, had memorized many poems, and her father taught Romance languages at a high school — her parents forbade her to attend the prestigious Boston Latin School for Girls, and later Radcliffe, because, like many devout Roman Catholics at the time, they accepted the church’s strong disapproval of secular education. Instead she enrolled in Boston’s all-female Emmanuel College, hoping to study literature, but “literature, I discovered with disgust, was taught as a branch of faith and morals,” she recalled in a 2001 lecture. That experience “inoculated me forever against adopting any ism as a single lens through which to interpret literature.”


Vendler switched to the sciences, and after graduation was awarded a Fulbright fellowship to study mathematics in Belgium. “I loved math and I loved organic chemistry because they have structures and I love structures,” she said. But on the stormy trans-Atlantic voyage, Vendler had a reckoning with herself. “And I decided, now that I was free, I would do English, because that’s what I had always wanted to do.” With the Fulbright commission’s blessing, she switched to literature.


Back in the States, Vendler enrolled in 12 English courses in a single year at Boston University so as to qualify for graduate school at Harvard. In her first week there in 1956, the chairman of the English department told her, “We don’t want any women here.” (Years later, he apologized.) Another professor, the renowned Americanist Perry Miller, considered Vendler his finest student and published one of her course papers, but denied her admission to his Melville seminar. “The men come over in my house and they sit around and drink and we talk. I wouldn’t talk like I wanted to if there was a woman there,” she recalled him explaining.


Still, she had supporters, and influences. One was I. A. Richards, a leading figure in the New Criticism, which emphasized close reading of texts over an author’s biography or historical situation. Vendler found his Harvard course electrifying. “Richards would take a single word in a poem and talk about its origins in Plato and its vicissitudes through Shakespeare and its reappearance in Shelley,” she recalled. “You would feel each word heavy with the weight of its associations.” Vendler adopted this method and dedicated her second book, “On Extended Wings,” a study of Wallace Stevens, to Richards.


In 1959, Vendler became the first woman to be offered an instructorship in Harvard’s English department. But she turned it down to follow her husband, Zeno Vendler, a philosopher and linguist, to Cornell. Several years later, divorced and raising their son alone, she taught at Smith for two years before taking a job at B.U., where she remained for two decades, for some of that time a single mother teaching a punishing 10 courses a year. “The only way I could make my life easier was to give up writing,” she wrote many years later. “But I knew that to stop writing would be a form of self-murder.” She kept writing books and reviews, but late at night, after her son had gone to sleep. “I envied my male colleagues who, in those days, seemed to have everything done for them by their spouses.” (Her son, David, is now a lawyer in California.)


In 1981, with several major books behind her, Vendler at last joined the English department at Harvard — but kept a part-time affiliation with B.U. for four years until she was convinced that Harvard took her seriously. “I didn’t want to be some little token person,” she said. By this time she already wielded considerable power outside the academy. Her first review — “my work of the left hand,” as she has called it — was a roundup of contemporary poetry published in the mid-’60s in the Massachusetts Review. That led to assignments from The New York Times Book Review, a venue whose prominence she initially found jarring. “Who was I to occupy people’s Sunday morning attention?” Vendler wrote in a 1994 essay, a complicated nod to Stevens’s famous poem “Sunday Morning,” about spiritual longing in a world without religious faith. Beyond the extra income, Vendler found she liked writing for a general audience — and excelled at it. “She’s the best since Randall Jarrell, I don’t think there’s any question about that,” the critic John Leonard said in a recent interview.


In the early ’70s, as editor of the Book Review, Leonard was having trouble navigating the insular world of poetry. Poets “would never tell you if you’d asked them to review their best friend or worst enemy or ex-lover,” Leonard recalled. “There was always some agenda, and I could never figure out what it was.” So he hired Vendler to vet the flood of poetry books, reviewing some herself and suggesting reviewers for others. All this was done quietly. “We just couldn’t tell anyone,” Leonard said. “It would put her under enormous pressure.”


But in 1975, when Harvey Shapiro, himself a poet, succeeded Leonard as editor of the Book Review, he immediately ended the arrangement. Vendler had become “kind of an anonymous power,” he said in an interview. “It didn’t seem to me a healthy arrangement.” Vendler, who says her role at the Book Review “was never a secret,” thinks the source of dismay was her quarterly poetry roundups. They “annoyed a lot of people,” she said, especially poets whose work she hadn’t included. She recalled Shapiro wanting her “to review books by women and to sequester me in the women niche.” “In truth, I have no memory of this,” Shapiro said. “If what she said is true, I owe her an apology.”


Beginning in the mid-’70s, Vendler contributed regularly to The New York Review of Books and The New Yorker, where in 1978 she became poetry critic at the request of its editor, William Shawn. As a critic, she has favored the idiosyncratic. In 1982, she criticized the “ventriloquism” of a future Nobel laureate, Derek Walcott, whom she found “peculiarly at the mercy of influence.” In a joint review in The New Yorker in 1996, she lauded August Kleinzahler for “his irreverent joy in the American demotic,” and criticized Mark Doty for his “inert” rhythms and “plaintive didacticism.”


Although influential, Vendler’s judgments of contemporary poets are not uncontested. Some have questioned her championing of Jorie Graham, in whose work Vendler sees intricacies and trilingual rhythms (Graham was raised speaking English, French and Italian) where others see opacity. And some have found Vendler’s tastes too mainstream, though she has always been unpredictable. She was, for instance, an admirer of Allen Ginsberg when his literary status was still uncertain. “The monumental quality of ‘Kaddish’ makes it one of those poems that, as Wallace Stevens said, take the place of a mountain,” she has written. The poets she admires share “nothing except intelligence and originality,” said Stephen Burt, a poetry critic and professor at Macalester College who studied with Vendler at Harvard in the early ’90s.


For so prominent a professor, Vendler is known for being extremely supportive; she supervises many dissertations and serves as mentor to numerous students. “Maybe in the end her major achievement will be as a teacher, both in terms of her pupils, and the books she’s written about canonical figures, teaching people how to read them,” said the critic and poet Adam Kirsch, who studied with Vendler as an undergraduate in the mid-’90s. The critic James Wood, who also teaches at Harvard, recalled once complaining about the demands of parenthood to Vendler, who told him it would make him a better reader. “There are very few academics who would ever say something like that, who would ever bother connecting what they do academically with the arrival of the child,” Wood said. (Vendler herself has noted the lack of great poetry about motherhood, though Sylvia Plath, she said, “made a beginning.”)


Today Vendler seldom reviews poets under 50, since their “frames of reference,” she says, are alien to her. “They’re writing about the television cartoons they saw when they were growing up. And that’s fine. It’s as good a resource of imagery as orchards. Only I’ve seen orchards and I didn’t watch these cartoons,” she said. “So I don’t feel I’m the best reader for most of the young ones.”


These days Vendler is more focused on late style. In April, she will deliver the prestigious Mellon Lectures at the National Gallery of Art in Washington. Her topic will be the final books of Wallace Stevens, Sylvia Plath, Robert Lowell, Elizabeth Bishop and James Merrill, and how each wrestled with what Yeats called “death-in-life and life-in-death”: writing about life facing impending death, and writing about death while still immersed in the world. “It used to be easier to deal with when you had heaven to believe in, when there was another place to go at the end of your poem,” Vendler said, as the late afternoon sun came through her office window. Death without heaven “produces more stylistic problems.” Vendler has recently finished the book on Yeats’s poems that she first wanted to write as a dissertation, but abandoned, she said, because at 23, “I didn’t have the life experience to penetrate them or resonate with them.” Life and life’s work, seamlessly intertwined.


Rachel Donadio is a writer and editor at the Book Review. © 2006



by Joan Didion

© The New York Review of Books 2006

It was in some ways predictable that the central player in the system of willed errors and reversals that is the Bush administration would turn out to be its vice-president, Richard B. Cheney. Here was a man with considerable practice in the reversal of his own errors. He was never a star. No one ever called him a natural. He reached public life with every reason to believe that he would continue to both court failure and overcome it, take the lemons he seemed determined to pick for himself and make the lemonade, then spill it, let someone else clean up. The son of two New Deal Democrats, his father a federal civil servant with the Soil Conservation Service in Casper, Wyoming, he more or less happened into a full scholarship to Yale: his high school girlfriend and later wife, Lynne Vincent, introduced him to her part-time employer, a Yale donor named Thomas Stroock who, he later told Nicholas Lemann, "called Yale and told 'em to take this guy." The beneficiary of the future Lynne Cheney's networking lasted three semesters, took a year off before risking a fourth, and was asked to leave.

"He was in with the freshman football players, whose major activity was playing cards and horsing around and talking a lot," his freshman roommate told the Yale Daily News, not exactly addressing the enigma. "Wasn't gonna go to college and buckle down" and "I didn't like the East" are two versions of how Cheney himself failed to address it. As an undergraduate at the University of Wyoming he interned with the Wyoming State Senate, which was, in a state dominated by cattle ranchers and oil producers and Union Pacific management, heavily Republican. This internship appears to have been when Cheney began identifying himself as a Republican. ("You can't take my vote for granted," his father would advise him when he first ran for Congress as a Republican.) He graduated from Wyoming in 1965 and, in the custom of the Vietnam years, went on to receive a master's degree. He never wrote a dissertation ("did all the work for my doctorate except the dissertation," as if the dissertation were not the point) and so never got the doctorate in political science for which he then enrolled at the University of Wisconsin.

Still, he persevered, or Lynne Cheney did. When, in 1968, at age twenty-seven, a no-longer-draft-eligible "academic" with a wife and a child and no Ph.D. and no clear clamor for his presence, he left Wisconsin for Washington, he managed to meet the already powerful Donald Rumsfeld about a fellowship in his House office. Cheney, by his own description and again failing to address the enigma, "flunked the interview." He retreated to the only place at the table, the office of a freshman Republican Wisconsin congressman, Bill Steiger, for whom Cheney was said to be not a first choice and whose enthusiasm for increased environmental and workplace protections did not immediately suggest the Cheney who during his own ten years in Wyoming's single congressional seat would vote with metronomic regularity against any legislation tending in either direction.

The potential rewards of Washington appear to have mobilized Cheney as those of New Haven and Madison had not. Within the year, he was utilizing Steiger to make another move on Rumsfeld, who had been asked by Richard M. Nixon to join his new administration as director of the Office of Economic Opportunity. Cheney, James Mann wrote in Rise of the Vulcans, noticed a note on Steiger's desk from Rumsfeld, looking for advice and help in his new OEO job. Cheney spotted an opportunity. Over a weekend, he wrote an unsolicited memo for Steiger on how to staff and run a federal agency.

Rumsfeld hired Cheney, and, over the next few years, as he moved up in the Nixon administration, took Cheney with him. Again, in 1974, after the Nixon resignation, when Rumsfeld was asked to become Gerald Ford's chief of staff, he made Cheney his deputy.

In Cheney, Rumsfeld had found a right hand who took so little for granted that he would later, by the account of his daughter Mary, make a three-hour drive from Casper to Laramie to have coffee with three voters, two of whom had been in his wedding. In Rumsfeld, who would be described by Henry Kissinger as "a special Washington phenomenon: the skilled full-time politician-bureaucrat in whom ambition, ability, and substance fuse seamlessly," Cheney had found a model. In the Ford White House, where he and Rumsfeld were known as "the little Praetorians," Cheney cultivated a control of detail that extended even to questioning the use in the residence of "little dishes of salt with funny little spoons" rather than "regular salt shakers."

Together, Cheney and Rumsfeld contrived to marginalize Nelson Rockefeller as vice-president and edge him off the 1976 ticket. They convinced Ford that Kissinger was a political liability who should no longer serve as both secretary of state and national security adviser. They managed the replacement of William Colby as CIA chief with George H.W. Bush, a move interpreted by many as a way of rendering Bush unavailable to be Ford's running mate in 1976. They managed the replacement of James Schlesinger as secretary of defense with Rumsfeld himself. Cheney later described his role in such maneuvers as "the sand in the gears," the person who, for example, made sure that when Rockefeller was giving a speech the amplifier was turned down. In 1975, when Ford named Rumsfeld secretary of defense, it was Cheney, then thirty-four, who replaced Rumsfeld as chief of staff.

Relationships matter in public life, until they do not. In May, during a commencement address at Louisiana State University, Cheney mentioned this long relationship with Rumsfeld by way of delivering the message that "gratitude, in general, is a good habit to get into":

I think, for example, of the first time I met my friend and colleague Don Rumsfeld. It was back in the 1960s, when he was a congressman and I was interviewing for a fellowship on Capitol Hill. Congressman Rumsfeld agreed to talk to me, but things didn't go all that well.... We didn't click that day, but a few years later it was Don Rumsfeld who noticed my work and offered me a position in the executive branch.

Note the modest elision ("it was Don Rumsfeld who noticed my work") of the speaker's own active role in these events. What Cheney wanted to stress that morning in Baton Rouge was not his own dogged tracking of the more glamorous Rumsfeld but the paths one had possibly "not expected to take," the "unexpected turns," the "opportunities that come suddenly and change one's plans overnight." The exact intention of these commencement remarks may be unknowable (a demonstration of loyalty? a warning? to whom? a marker to be called in later? all of the above?), but it did not seem accidental that they were delivered during a period when one four-star general, one three-star general, and four two-star generals were each issuing calls for Donald Rumsfeld's resignation as secretary of defense. Nor did it seem accidental that the President and the Vice President were taking equally stubborn and equally inexplicable lines on the matter of Rumsfeld's and by extension their own grasp on the war in Iraq. "I hear the voices and I read the front page and I know the speculation," George W. Bush said in response to a reporter's question during a Rose Garden event. "But I'm the decider and I decide what's best. And what's best is for Don Rumsfeld to remain as the secretary of defense."

The question of where the President gets the notions known to the nation as "I'm the decider" and within the White House as "the unitary executive theory" leads pretty fast to the blackout zone that is the Vice President and his office. It was the Vice President who took the early offensive on the contention that whatever the decider decides to do is by definition legal. "We believe, Jim, that we have all the legal authority we need," the Vice President told Jim Lehrer on PBS after it was reported that the National Security Agency was conducting warrantless wiretapping in violation of existing statutes. It was the Vice President who pioneered the tactic of not only declaring such apparently illegal activities legal but recasting them as points of pride, commands to enter attack mode, unflinching defenses of the American people by a president whose role as commander in chief authorizes him to go any extra undisclosed mile he chooses to go on their behalf.

"Bottom line is we've been very active and very aggressive defending the nation and using the tools at our disposal to do that," the Vice President advised reporters on a flight to Oman last December. It was the Vice President who maintained that passage of Senator John McCain's legislation banning inhumane treatment of detainees would cost "thousands of lives." It was the Vice President's office, in the person of David S. Addington, that supervised the 2002 "torture memos," advising the President that the Geneva Conventions need not apply. And, after Admiral Stansfield Turner, director of the CIA between 1977 and 1981, referred to Cheney as "vice president for torture," it was the Vice President's office that issued this characteristically nonresponsive statement: "Our country is at war and our government has an obligation to protect the American people from a brutal enemy that has declared war upon us."

Addington, who emerged into government from Georgetown University and Duke Law School in 1981, the most febrile moment of the Reagan Revolution, is an instructive study in the focus Cheney favors in the protection of territory. As secretary of defense for George H.W. Bush, Cheney made Addington his special assistant and ultimately his general counsel. As vice-president for George W. Bush, Cheney again turned to Addington, and named him, after the indictment of I. Lewis "Scooter" Libby in connection with the exposure of Ambassador Joseph Wilson's wife as a CIA agent, his chief of staff. "You're giving away executive power," Addington has been reported to snap at less committed colleagues. He is said to keep a photograph in his office of Cheney firing a gun. He vets every line of the federal budget to eradicate any wording that might restrain the President. He also authors the "signing statements" now routinely issued to free the President of whatever restrictive intent might have been present in whichever piece of legislation he just signed into law. A typical signing statement, as written by Addington, will refer repeatedly to the "constitutional authority" of "the unitary executive branch," and will often mention multiple points in a single bill that the President declines to enforce.

Signing statements are not new, but at the time Bill Clinton left office, the device had been used, by the first forty-two presidents combined, fewer than six hundred times. George W. Bush, by contrast, issued more than eight hundred such takebacks during the first six years of his administration. Those who object to this or any other assumption of absolute executive power are reflexively said by those who speak for the Vice President to be "tying the president's hands," or "eroding his ability to do his job," or, more ominously, "aiding those who don't want him to do his job."

One aspect common to accounts of White House life is the way in which negative events tend to be interpreted as internal staffing failures, errors on the order of the little dishes of salt with the funny little spoons. Cheney did not take the lesson he might have taken from being in the White House at the time Saigon fell, which was that an administration can be overtaken by events that defeat the ameliorative power of adroit detail management. He took a more narrow lesson, the one that had to do with the inability of a White House to pursue victory if Congress "tied its hands." "It's interesting that [Cheney] became a member of Congress," former congressman Tom Downey said to Todd Purdum, "because I think he always thought we were a massive inconvenience to governing." Bruce Fein, who served in the Meese Justice Department during the Reagan administration, told Jane Mayer of The New Yorker that Cheney's absence of enthusiasm for checks and balances long predated any argument that this was a "wartime presidency" and so had special powers.

"This preceded 9/11. I'm not saying that warrantless surveillance did. But the idea of reducing Congress to a cipher was already in play. It was Cheney and Addington's political agenda," Fein said. "I have repeatedly seen an erosion of the powers and the ability of the president of the United States to do his job," the Vice President said after one year in office. "We are weaker today as an institution because of the unwise compromises that have been made over the last thirty to thirty-five years." "Watergate—a lot of the things around Watergate and Vietnam, both, in the '70s, served to erode the authority, I think, the President needs to be effective," he said to reporters accompanying him on that December 2005 flight to Oman. Expanding on this understanding of the separation of powers as a historical misunderstanding, the Vice President offered this:

If you want reference to an obscure text, go look at the minority views that were filed with the Iran-Contra Committee; the Iran-Contra Report in about 1987. Nobody has ever read them, but we — part of the argument in Iran-Contra was whether or not the President had the authority to do what was done in the Reagan years. And those of us in the minority wrote minority views, but they were actually authored by a guy working for me, for my staff, that I think are very good in laying out a robust view of the President's prerogatives with respect to the conduct of especially foreign policy and national security matters.

There are some recognizable Cheney touches here, resorts to the kind of self-deprecation (as in "I didn't like the East" and "I flunked the interview") that derives from a temperamental grandiosity. The "obscure text" that "nobody has ever read" was the two-hundred-page minority report included in the 1987 Report of the Congressional Committees Investigating the Iran-Contra Affair, a volume printed and widely distributed by the US Government Printing Office. The unidentified "guy working for me" was Addington, at the time of the Iran-contra hearings a counsel to the committees but during the events that led to Iran-contra an assistant general counsel at William Casey's CIA, where he would have been focused early on locating the legal enablement for what Theodore Draper, in his study of Iran-contra, A Very Thin Line, called the "usurpation of power by a small, strategically placed group within the government."

This minority report, which vehemently rejects not only the conclusions of the majority but even the report's ("supposedly 'factual'") narrative, does allow that "President Reagan and his staff made mistakes" during the course of Iran-contra. Yet the broadest mistake, the demented "arms for hostages" part of the scheme, the part where we deal the HAWK missiles to Iran through Manucher Ghorbanifar and Robert McFarlane flies to Tehran with the cake and the Bible and the falsified Irish passports, is strenuously defended as a "strategic opening," an attempt to "establish a new US relationship with Iran, thus strengthening the US strategic posture throughout the Persian Gulf region."

We had heard before, and have heard recently, about "strategic openings," "new relationships" that will reorder the Middle East. "Extremists in the region would have to rethink their strategy of Jihad," Cheney told the Veterans of Foreign Wars in August 2002 about the benefits that were to accrue from invading Iraq. "Moderates throughout the region would take heart. And our ability to advance the Israeli-Palestinian peace process would be enhanced, just as it was following the liberation of Kuwait in 1991." We had heard before, and have heard recently, that what might appear to be an administration run amok is actually an administration holding fast on constitutional principle.

Watergate, Cheney has long maintained, was not a criminal conspiracy but the result of a power struggle between the legislative and executive branches. So was the 1973 War Powers Act, which restricted executive authority to go to war without consulting Congress and which Cheney believed unconstitutional. So was the attempt to get Cheney to say which energy executives attended the 2001 meetings of his energy task force. This issue, both Cheney and Bush explained again and again, had nothing to do with Enron or the other energy players who might be expecting a seat at the table in return for their generous funding (just under $50 million) of the 2000 Republican campaign. "The issue that was involved there," Cheney said, misrepresenting what had been requested, which was not the content of the conversations but merely the names of those present, "was simply the question of whether or not a Vice President can sit down and talk with citizens and gain from them their best advice and counsel on how we might deal with a particular issue."

The 1987 minority report prefigures much else that has happened since. There is the acknowledgment of "mistakes" that turn out to be not exactly the mistakes we might have expected. The "mistake" in this administration's planning for the Iraq war, for example, derived not from having failed to do any planning but from arriving "too fast" in Baghdad, thereby losing the time, this scenario seemed to suggest, during which we had meant to think up a plan. Similarly, the "mistakes" in Iran-contra, as construed by the minority report, had followed not from having done the illegal but from having allowed the illegal to become illegal in the first place. As laid out by the minority, a principal "mistake" made by the Reagan administration in Iran-contra was in allowing President Reagan to sign rather than veto the 1984 Boland II Amendment forbidding aid to contra forces: no Boland II, no illegality. A second "mistake," to the same point, was Reagan's "less-than-robust defense of his office's constitutional powers, a mistake he repeated when he acceded too readily and too completely to waive executive privilege for our Committees' investigation."

The very survival of the executive species, then, was seen by Cheney and his people as dependent on its brute ability to claim absolute power and resist all attempts to share it. Given this imperative, the steps to our current situation had a leaden inevitability: if the executive branch needed a war to justify its claim to absolute power, then Iraq, Rumsfeld would be remembered to have said on September 12, 2001, had the targets. If the executive branch needed a story point to sell its war, then the Vice President would resurrect the aluminum tubes that not even the US Department of Energy believed to be meant for a centrifuge: "It's now public that, in fact, [Saddam] has been seeking to acquire...the kinds of tubes that are necessary to build a centrifuge." The Vice President would dismiss Joseph Wilson's report that he had found no yellowcake in Niger: "Did his wife send him on a junket?"

As for the weapons themselves, the Vice President would deride the collective judgment of his own intelligence community, which believed, according to Paul R. Pillar, then the CIA national intelligence officer for the Near East and South Asia, that any development of a nuclear weapon was several years away and would be best dealt with—given that the community's own analysis of the war option projected violent conflict between Sunnis and Shiites and guerrilla attacks on any occupying power—"through an aggressive inspections program to supplement the sanctions already in place." "Intelligence," the Vice President would say dismissively in an August 2002 speech to the Veterans of Foreign Wars, "is an uncertain business." The Vice President would override as irrelevant the facts that Hans Blix and his UN monitoring team were prepared to resume such inspections and in fact did resume them, conducting seven hundred inspections of five hundred sites, finding nothing but stopping only when the war intervened. "Simply stated, there is no doubt that Saddam Hussein now has weapons of mass destruction," he would declare in the same speech to the Veterans of Foreign Wars.

A person would be right to question any suggestion that we should just get inspectors back into Iraq, and then our worries will be over.... A return of inspectors would provide no assurance whatsoever of [Saddam's] compliance with UN resolutions.

If the case for war lacked a link between September 11 and Iraq, the Vice President would repeatedly cite the meeting that neither American nor Czech intelligence believed had taken place between Mohamed Atta and Iraqi intelligence in Prague: "It's been pretty well confirmed that [Atta] did go to Prague and he did meet with a senior official of the Iraqi intelligence service in Czechoslovakia last April, several months before the attacks," he would say on NBC in December 2001. "We discovered...the allegation that one of the lead hijackers, Mohamed Atta, had, in fact, met with Iraqi intelligence in Prague," he would say on NBC in March 2002. "We have reporting that places [Atta] in Prague with a senior Iraqi intelligence officer a few months before the attacks on the World Trade Center," he would say on NBC in September 2002. "The senator has got his facts wrong," he would then say while debating Senator John Edwards during the 2004 campaign. "I have not suggested there's a connection between Iraq and 9/11."

This was not a slip of memory in the heat of debate. This was dishonest, a repeated misrepresentation, in the interests of claiming power, so bald and so systematic that the only instinctive response (Did too!) was that of the schoolyard. By June 2004, before the debate with Edwards, Cheney had in fact begun edging away from the Prague story, not exactly disclaiming it but characterizing it as still unproven, as in, on a Cincinnati TV station, "That's true. We do not have proof that there was such a connection." It would be two years later, March 2006, before he found it prudent to issue a less equivocal, although still shifty, version. "We had one report early on from another intelligence service that suggested that the lead hijacker, Mohamed Atta, had met with Iraqi intelligence officials in Prague, Czechoslovakia," he told Tony Snow on Fox News. "And that reporting waxed and waned where the degree of confidence in it, and so forth, has been pretty well knocked down at this stage, that that meeting ever took place. So we've never made the case, or argued the case, that somehow [Saddam Hussein] was directly involved in 9/11. That evidence has never been forthcoming."

What the Vice President was doing with the intelligence he received has since been characterized as "cherry-picking," a phrase suggesting that he selectively used only the more useful of equally valid pieces of intelligence. This fails to reflect the situation. The White House had been told by the CIA that no meeting in Prague between Mohamed Atta and Iraqi intelligence had ever occurred. The International Atomic Energy Agency and the US Department of Energy had said that the aluminum tubes in question "were not directly suitable" for uranium enrichment. The White House had been told by the CIA that the British report about Saddam Hussein attempting to buy yellowcake in Niger was doubtful.

"The British government has learned that Saddam Hussein recently sought significant quantities of uranium from Africa," the President nonetheless declared in his 2003 State of the Union address, the "sixteen enormously overblown words" for which Condoleezza Rice would blame the CIA and for which George Tenet, outplayed, would take the hit. Nor would the President stop there: "Our intelligence sources tell us that he has attempted to purchase high-strength aluminum tubes suitable for nuclear weapons production."

What the Vice President was doing, then, was not cherry-picking the intelligence but rejecting it, replacing it with whatever self-interested rumor better advanced his narrative line. "Cheney's office claimed to have sources," Ron Suskind was told by those to whom he spoke for The One Percent Doctrine.

And Rumsfeld's, too. They kept throwing them [at the CIA]. The same information, five different ways. They'd omit that a key piece had been discounted, that the source had recanted. Sorry, our mistake. Then it would reappear, again, in a memo the next week.

The Vice President would not then or later tolerate any suggestion that the story he was building might rest on cooked evidence. In a single speech at the American Enterprise Institute in November 2005 he used the following adjectives to describe those members of Congress who had raised such a question: "corrupt," "shameless," "dishonest," "reprehensible," "irresponsible," "insidious," and "utterly false." "It's not about our analysis, or finding a preponderance of evidence," he is reported by Suskind to have said in the November 2001 briefing during which he articulated the doctrine that if there was "a one percent chance" of truth in any suspicion or allegation, it must be considered true. "It's about our response."

To what end the story was being cooked was hard to know. The Vice President is frequently described as "ideological," or "strongly conservative," but little in his history suggests the intellectual commitment implicit in either. He made common cause through the run-up to Iraq with the neoconservative ideologues who had burrowed into think tanks during the Clinton years and resurfaced in 2001 in the departments of State and Defense and the White House itself, but the alliance appeared even then to be more strategic than felt. The fact that Paul Wolfowitz and Richard Perle and Elliott Abrams shared with Cheney a wish to go to war in Iraq could create, in its confluence with September 11, what many came to call a perfect storm—as if it had blown up out of the blue beyond reach of human intervention—but the perfect storm did not make Cheney a neocon.

He identifies himself as a conservative, both political and cultural. He presents himself as can-do, rock-solid even if he is forced to live in Washington (you know he only does it on our behalf), one politician who can be trusted not to stray far from whatever unexamined views were current in the intermountain West during the 1950s and 1960s. He has described a 1969 return visit to the University of Wisconsin, during which he took Bill Steiger and George H.W. Bush to an SDS rally, as having triggered his disgust with the Vietnam protest movement. "We were the only guys in the hall wearing suits that night," he told Nicholas Lemann. As a congressman he cast votes that reflected the interests of an energy-driven state that has voted Republican in every presidential election but one since 1952. His votes in the House during 1988, the last year he served there, gave him an American Conservative Union rating of 100.

Yet his move to push Nelson Rockefeller off Gerald Ford's 1976 ticket had seemed based less on philosophical differences than on a perception of Rockefeller as in the way, in the lights, a star, like Kissinger, who threatened the power Cheney and Rumsfeld wielded in the Ford White House. In 1976, unlike most who called themselves conservatives, Cheney remained untempted by Reagan and clung to Ford, his best ticket to ride. Nor did he initially back Reagan in 1980. Nor, when it has not been in his interest to do so, has he since taken consistent positions on what would seem to be his own most hardened policies.

"I think it is a false dichotomy to be told that we have to choose between 'commercial' interests and other interests that the United States might have in a particular country or region around the world," he said at the Cato Institute in 1998, during the period he was CEO of Halliburton, after he had pursued one war against Iraq and before he would pursue the second. He was arguing against the imposition by the United States of unilateral economic sanctions on such countries as Libya and Iran, two countries, although he did not mention this, in which Halliburton subsidiaries had been doing business. Nor did he mention, when he said in the same speech that he thought multilateral sanctions "appropriate" in the case of Iraq, that Iraq was a third country in which a Halliburton subsidiary would by the year's end be doing business.

The notion that he takes a consistent view of America's role in the world nonetheless remains general. The model on which he has preferred to operate is the cold war, or, to use the words in which he and the President have repeatedly described the central enterprise of their own administration, the "different kind of war," the war in which "our goal will not be achieved overnight." He has mentioned H. Bradford Westerfield, a political scientist at Yale and at the time Cheney took his introductory course a self-described hawk, as someone who influenced his thinking, but Westerfield later told the Nation correspondent John Nichols that his own hard line had softened by late 1967 or early 1968, when he came to see that Vietnam "really was unwinnable" and "the hawkish view was unrealistic."

Cheney, by then positioning himself in Washington, never drew those conclusions, nor, when he saw Westerfield in the 1990s at a memorial service for Les Aspin, did he seem to Westerfield interested in discussing them. "He seems to be determined to go his own way, no matter what facts he is confronted with," Westerfield told Nichols. "As a veteran of the political wars," Henry Kissinger later wrote about the years when Saigon was falling and Donald Rumsfeld and Richard Cheney were running the Ford White House,

Rumsfeld understood far better than I that Watergate and Vietnam were likely to evoke a conservative backlash and that what looked like a liberal tide after the election of the McGovernite Congress in fact marked the radical apogee.

Rumsfeld and Cheney, in other words, had transcended what others might present as facts. They could feel the current. They knew how to catch the wave and ride it.

Cheney leaves no paper trail. He has not always felt the necessity to discuss what he plans to say in public with the usual offices, including that of the President. Nor, we learned from Ron Suskind, has he always felt the necessity, say if the Saudis send information to the President in preparation for a meeting, to bother sending that information on to Bush. Only on the evening of September 11, 2001, did it occur to Richard A. Clarke that in his role as national security coordinator he had briefed Cheney on terrorism and also Condoleezza Rice and Colin Powell, but never the President. Since November 1, 2001, under this administration's Executive Order 13233, which limits access to all presidential and vice-presidential papers, Cheney has been the first vice-president in American history entitled to executive privilege, a claim to co-presidency reinforced in March 2003 by Executive Order 13292, giving him the same power to classify information as the president has.

He runs an office so disinclined to communicate that it routinely refuses to disclose who works there, even for updates to the Federal Directory, which lists names and contact addresses for government officials. "We just don't give out that kind of information," an aide told one reporter. "It's just not something we talk about." When he visits his house in Jackson Hole and the local paper spots his plane and the anti-missile battery that accompanies him, the office until recently refused to confirm his presence: "In the past, they've been kind of weird," the paper's co-editor told The Washington Post in August. "They'd say, 'His airplane's here and the missile base is here, but we can't tell you if he's here.'"

His every instinct is to withhold information, hide, let surrogates speak for him, as he did after the quail-shooting accident on the Armstrong ranch. His own official spoken remarks so defy syntactical analysis as to suggest that his only intention in speaking is to further obscure what he thinks. Possibly the most well-remembered statement he ever made (after "Big-time") was that he did not serve in the Vietnam War because he had "other priorities." Bob Woodward, in Plan of Attack, describes an exchange that took place between Cheney and Colin Powell in September 2002, when Cheney was determined that the US not ask the UN for the resolution against Iraq that the Security Council, after much effort by Powell, passed in November:

Powell attempted to summarize the consequences of unilateral action.... He added a new dimension, saying that the international reaction would be so negative that he would have to close American embassies around the world if we went to war alone.

That is not the issue, Cheney said. Saddam and the clear threat is the issue.

Maybe it would not turn out as the vice president thinks, Powell said. War could trigger all kinds of unanticipated and unintended consequences.... Not the issue, Cheney said.

In other words the Vice President had by then passed that point at which going to war was "not about our analysis." He had passed that point at which going to war was not about "finding a preponderance of evidence." At the point he had reached by September 2002, going to war was not even about the consequences. Not the issue, he had said. The personality that springs to mind is that of the ninth-grade bully in the junior high lunchroom, the one sprawled in the letter jacket so the seventh-graders must step over his feet. There was in a June letter from Senator Arlen Specter to Cheney, made public by Specter, an image that eerily conveyed just that: "I was surprised, to say the least, that you sought to influence, really determine, the action of the Committee without calling me first, or at least calling me at some point," Specter wrote, referring to actions Cheney had taken to block his Judiciary Committee from conducting a hearing on NSA surveillance. "This was especially perplexing since we both attended the Republican Senators caucus lunch yesterday and I walked directly in front of you on at least two occasions enroute from the buffet to my table."

There was a reason, beyond the thrill of their sheer arrogance, why the words "other priorities" stuck in the national memory. They were first uttered not in but outside the room in which Cheney's 1989 confirmation hearings were held, to a Washington Post reporter who asked why the candidate for secretary of defense had sought the five (four student and one "hardship") deferments that had prevented him from serving in Vietnam. This is what the candidate said:

I had other priorities in the '60s than military service. I don't regret the decisions I made. I complied fully with all the requirements of the statutes, registered with the draft when I turned 18. Had I been drafted, I would have been happy to serve. I think those who did in fact serve deserve to be honored for their service.... Was it a noble cause? Yes, indeed, I think it was.

The words stuck because they resonated, and still do. They resonated the same way the words "fixed himself a cocktail back at the house" resonated when Katharine Armstrong, Cheney's hostess and fill-in (in the vacuum of his silence) apologist, used them to explain what he had done after the quail-hunting accident in lieu of either going to the hospital with Harry Whittington or explaining to the sheriff's office how he had just shot him. "Fixed himself a cocktail back at the house" suggested the indifference to assuming responsibility for his own mistakes that had become so noticeable in his public career. "Ultimately, I am the guy who pulled the trigger and fired the round that hit Harry," he managed, four days later, to say to Fox News in a memorable performance of a man accepting responsibility but not quite. "You can talk about all the other conditions that existed at the time, but that's the bottom line. It's not Harry's fault. You can't blame anybody else."

Like "it's not Harry's fault," which implied that you or I or any other fair observer (for example Katharine Armstrong, characterized by Cheney as "an acknowledged expert in all of this") might well conclude that it had been, "other priorities" suggested a familiar character wrinkle, in this case the same willingness to cloud an actual issue with circular arguments ("I complied fully with all the requirements of the statutes") that would later be demonstrated by the Vice President's people when they maintained that the Geneva Conventions need not apply to Afghan detainees because Afghanistan was a "failed state." What these tortured and in many cases invented legalities are designed to preclude is any acknowledgment that the issue at hand, whether it is avoiding military service or authorizing torture, might have a moral or an ethical or even a self-interested dimension that merits discussion.

This latter dimension, self-interest, which was the basis for John McCain's argument that we could not expect others to honor the Geneva Conventions if we did not do so ourselves, was dismissed by David Addington, at the time Cheney's legal architect, in the "new paradigm" memo he drafted in 2002 to go to the President over White House Counsel Alberto R. Gonzales's signature. "It should be noted that your policy of providing humane treatment to enemy detainees gives us the credibility to insist on like treatment for our soldiers," the memo read, sliding past a key point, which was that the "new paradigm" differentiated between "enemy detainees" and "illegal enemy combatants," or "terrorists," a distinction to be determined by whoever did the detaining.

Moreover, even if GPW [Geneva Convention III Relative to the Treatment of Prisoners of War] is not applicable, we can still bring war crimes charges against anyone who mistreats US personnel. Finally, I note that...terrorists will not follow GPW rules in any event.

This is not law. This is casuistry, the detritus of another perfect storm, the one that occurred when the deferments of the Vietnam years met the ardor of the Reagan Revolution.

About this matter of priorities. At an October 2005 meeting at Stanford University of the American Academy of Arts and Sciences, the historian David M. Kennedy expressed concern about the absence of political accountability in a nation where

no American is now obligated to military service, few will ever serve in uniform, even fewer will actually taste battle—and fewer still of those who do serve will have ever sat in the classrooms of an elite university like Stanford.... Americans with no risk whatsoever of exposure to military service have, in effect, hired some of the least advantaged of their fellow countrymen to do some of their most dangerous business while the majority goes on with their own affairs unbloodied and undistracted.

Early in 1995, his tenure as George H.W. Bush's secretary of defense timed out, Dick Cheney was raising money for a stalled 1996 presidential run when he was asked, legendarily out of the blue on a fly-fishing trip but in fact unsurprisingly for someone with government connections in both energy and defense, to become CEO of Halliburton. In the early summer of 2000, flying home with his daughter Mary from a hunting trip, Cheney, then five years into his job at Halliburton, a period for which he had collected $44 million (plus deferments and stock options) and during which the Halliburton subsidiary Brown & Root had billed the United States $2 billion for services in Bosnia and Kosovo, told Mary that Joe Allbaugh, the national campaign manager of Bush's 2000 campaign, had asked him to consider being Bush's running mate. In July 2000, after conducting a search for another candidate and detailing the reasons why he himself would be a bad choice ("Knowing my dad, I'm sure he didn't hold anything back as he laid out the disadvantages of selecting him as the nominee"), in other words assuring himself carte blanche, Cheney agreed to join the ticket.

In February 2001, Joe Allbaugh, whose previous experience was running the governor's office for Bush in Texas, became head of FEMA, where he hired Michael D. ("Brownie, you're doing a heckuva job") Brown. In December 2002, Allbaugh announced that he was resigning from FEMA, leaving Brown in charge while he himself founded New Bridge Strategies, LLC, "a unique company," according to its Web site, "that was created specifically with the aim of assisting clients to evaluate and take advantage of business opportunities in the Middle East following the conclusion of the US-led war in Iraq."

This was the US-led war in Iraq that had not then yet begun. When David Kennedy spoke at Stanford about the vacuum in political accountability that could result from waging a war while a majority of Americans went on "with their own affairs unbloodied and undistracted," he was talking only about the absence of a draft. He was not talking about the ultimate step, the temptation to wage the war itself to further private ends, or "business opportunities," or other priorities. Nor was he talking about the intermediate step, which was to replace the manpower no longer available by draft by contracting out "logistical" support to the private sector, in other words by privatizing the waging of the war. This step, now so well known as to be a plot point on Law and Order (civilian contract employees in Iraq fall out among themselves; a death ensues; Sam Waterston sorts it out), had already been taken. There are now, split among more than 150 private firms, thousands of such contracts outstanding. Halliburton alone had by July 2004 contracts worth $11,431,000,000.

Private firms in Iraq have done more than build bases and bridges and prisons. They have done more than handle meals and laundry and transportation. They train Iraqi forces. They manage security. They interrogate prisoners. Contract interrogators from two firms, CACI International (according to its Web site "a world leader in providing timely solutions to the intelligence community") and Titan ("a leading provider of comprehensive information and communications products, solutions, and services for National Security"), were accused of abuses at Abu Ghraib, where almost half of all interrogators and analysts were CACI employees. They operate free of oversight. They distance the process of interrogation from the citizens in whose name, or in whose "defense," or to ensure whose "security," the interrogation is being conducted. They offer "timely solutions."

In his 1991 book A Very Thin Line, Theodore Draper wrote:

The Iran-contra affairs amounted to more than good plans gone wrong or even bad plans gone wildly wrong. They were symptomatic of a far deeper disorder in the American body politic. They were made possible by an interpretation of the Constitution which Poindexter and North thought gave them a license to carry on their secret operations in the name of the president, in defiance of the law and without the knowledge of any other branch of government.... Somehow the highly dubious theory of a presidential monopoly of foreign policy had filtered down to them and given them a license to act as if they could substitute themselves for the entire government.

There remains a further reason why "other priorities" still nags. It suggests other agendas, undisclosed strategies. We had watched this White House effect the regulatory changes that would systematically dismantle consumer and workplace and environmental protections. We had watched this White House run up the deficits that ensured that the conservative dream of rolling back government would necessarily take place, because there would be no money left to pay for it. We had heard the Vice President speak as recently as January 2004 about our need to recolonize the world, build bases, "warm bases, bases we can fall in on, on a crisis and have present the capabilities we need to operate from." "Other priorities" suggests what the Vice President might have meant when he and the President talked about the "different kind of war," the war in which "our goal will not be achieved overnight." As a member of the House during the cold war and then as secretary of defense during the Gulf War and then as CEO of Halliburton, the Vice President had seen up close the way in which a war in which "our goal will not be achieved overnight" could facilitate the flow of assets from the government to the private sector and back to whoever in Washington greases the valves. "The first person to greet our soldiers as they arrive in the Balkans and the last one to wave goodbye is one of our employees," as he put it during his Halliburton period.

He had also seen up close the political advantage to which such a war could be put. "And so if there's a backlash pending I think the backlash is going to be against those who are suggesting somehow that we shouldn't take these steps in order to protect the country," as he put it when asked last December if his assumption of presidential autonomy might not provoke a congressional backlash. In the apparently higher interest of consolidating that political advantage he had made misrepresentations that facilitated a war that promised to further destabilize the Middle East. He had compromised both America's image in the world and its image of itself. In 1991, explaining why he agreed with George H.W. Bush's decision not to take the Gulf War to Baghdad, Cheney had acknowledged the probability that any such invasion would be followed by civil war in Iraq:

Once you've got Baghdad, it's not clear what you do with it. It's not clear what kind of government you would put in.... Is it going to be a Shia regime, a Sunni regime or a Kurdish regime? Or one that tilts toward the Baathists, or one that tilts toward the Islamic fundamentalists?... How long does the United States military have to stay to protect the people that sign on for that government, and what happens to it once we leave?

By January 2006, when the prescience of these questions was evident and polling showed that 47 percent of Iraqis approved of "attacks on US-led forces," and the administration was still calculating that it could silence domestic doubt by accusing the doubter of wanting to "cut and run," the Vice President assured Fox News that the course had been true. "When we look back on this ten years hence," he said, a time frame suggesting that he was once again leaving the cleanup to someone else, "we will have fundamentally changed the course of history in that part of the world, and that will be an enormous advantage for the United States and for all of those countries that live in the region."

—September 7, 2006



By David Halberstam

August 2007 Issue, Vanity Fair ©

In the twilight of his presidency, George W. Bush and his inner circle have been feeding the press with historical parallels: he is Harry Truman - unpopular, besieged, yet ultimately to be vindicated - while Iraq under Saddam was Europe held by Hitler. To a serious student of the past, that's preposterous. Writing just before his untimely death, David Halberstam asserts that Bush's "history," like his war, is based on wishful thinking, arrogance, and a total disdain for the facts.

    We are a long way from the glory days of Mission Accomplished, when the Iraq war was over before it was over - indeed before it really began - and the president could dress up like a fighter pilot and land on an aircraft carrier, and the nation, led by a pliable media, would applaud. Now, late in this sad, terribly diminished presidency, mired in an unwinnable war of their own making, and increasingly on the defensive about events which, to their surprise, they do not control, the president and his men have turned, with some degree of desperation, to history. In their view Iraq under Saddam was like Europe dominated by Hitler, and the Democrats and critics in the media are likened to the appeasers of the 1930s. The Iraqi people, shorn of their immensely complicated history, become either the people of Europe eager to be liberated from the Germans, or a little nation that great powerful nations ought to protect. Most recently in this history rummage sale - and perhaps most surprisingly - Bush has become Harry Truman.

    We have lately been getting so many history lessons from the White House that I have come to think of Bush, Cheney, Rice, and the late, unlamented Rumsfeld as the History Boys. They are people groping for rationales for their failed policy, and as the criticism becomes ever harsher, they cling to the idea that a true judgment will come only in the future, and history will save them.

    Ironically, it is the president himself, a man notoriously careless about, indeed almost indifferent to, the intellectual underpinnings of his actions, who has come to trumpet loudest his close scrutiny of the lessons of the past. Though, before, he tended to boast about making critical decisions based on instinct and religious faith, he now talks more and more about historical mandates. Usually he does this in the broadest - and vaguest - sense: History teaches us … We know from history … History shows us. In one of his speaking appearances in March 2006, in Cleveland, I counted four references to history, and what it meant for today, as if he had had dinner the night before with Arnold Toynbee, or at the very least Barbara Tuchman, and then gone home for a few hours to read his Gibbon.

    I am deeply suspicious of these presidential seminars. We have, after all, come to know George Bush fairly well by now, and many of us have come to feel - not only because of what he says, but also because of the sheer cockiness in how he says it - that he has a tendency to decide what he wants to do first, and only then leaves it to his staff to look for intellectual justification. Many of us have always sensed a deep and visceral anti-intellectual streak in the president, that there was a great chip on his shoulder, and that the burden of the fancy schools he attended - Andover and Yale - and even simply being a member of the Bush family were too much for him. It was as if he needed not only to escape but also to put down those of his peers who had been more successful. From that mind-set, I think, came his rather unattractive habit of bestowing nicknames, most of them unflattering, on the people around him, to remind them that he was in charge, that despite their greater achievements they still worked for him.

    He is infinitely more comfortable with the cowboy persona he has adopted, the Texas transplant who has learned to speak the down-home vernacular. "Country boy," as Johnny Cash once sang, "I wish I was you, and you were me." Bush's accent, not always there in public appearances when he was younger, tends to thicken these days, the final g's consistently dropped so that doing becomes doin', going becomes goin', and making, makin'. In this lexicon al-Qaeda becomes "the folks" who did 9/11. Unfortunately, it is not just the speech that got dumbed down - so also were the ideas at play. The president's world, unlike the one we live in, is dangerously simple, full of traps, not just for him but, sadly, for us as well.

    When David Frum, a presidential speechwriter, presented Bush with the phrase "axis of evil," to characterize North Korea, Iran, and Iraq, it was meant to recall the Axis powers of World War II. Frum was much praised, for it is a fine phrase, perfect for Madison Avenue. Of course, the problem is that it doesn't really track. This new Axis turned out to contain, apparently much to our surprise, two countries, Iraq and Iran, that were sworn enemies, and if you moved against Iraq, you ended up destabilizing it and involuntarily strengthening Iran, the far more dangerous country in the region. While "axis of evil" was intended to serve as a sort of historical banner, embodying the highest moral vision imaginable, it ended up only helping to weaken us.

    Despite his recent conversion to history, the president probably still believes, deep down, as do many of his admirers, that the righteous, religious vision he brings to geopolitics is a source of strength - almost as if the less he knows about the issues the better and the truer his decision-making will be. Around any president, all the time, are men and women with different agendas, who compete for his time and attention with messy, conflicting versions of events and complicated facts that seem all too often to contradict one another. With their hard-won experience the people from the State Department and the C.I.A. and even, on occasion, the armed forces tend to be cautious and short on certitude. They are the kind of people whose advice his father often took, but who in the son's view use their knowledge and experience merely to limit a president's ability to act. How much easier and cleaner to make decisions in consultation with a higher authority.

    Therefore, when I hear the president cite history so casually, an alarm goes off. Those who know history best tend to be tempered by it. They rarely refer to it so sweepingly and with such complete confidence. They know that it is the most mischievous of mistresses and that it touts sure things about as regularly as the tip sheets at the local track. Its most important lessons sometimes come cloaked in bitter irony. By no means does it march in a straight line toward the desired result, and the good guys do not always win. Occasionally it is like a sport with upsets, in which the weak and small defeat the great and mighty - take, for instance, the American revolutionaries vanquishing the British Army, or the Vietnamese Communists, with their limited hardware, stalemating the mighty American Army.

    There was, I thought, one member of the first President Bush's team who had a real sense of history, a man of intellectual superiority and enormous common sense. (Naturally, he did not make it onto the Bush Two team.) That was Brent Scowcroft, George H. W. Bush's national-security adviser. Scowcroft was so close to the senior Bush that they collaborated on Bush's 1998 presidential memoir, A World Transformed. Scowcroft struck me as a lineal descendant of Truman's secretary of state George Catlett Marshall, arguably the most extraordinary of the postwar architects of American foreign policy. Marshall was a formidable figure, much praised for his awesome sense of duty and not enough, I think, for his intellect. If he lacked the self-evident brilliance of George Kennan (the author of Truman's Communist-containment policy), he had a remarkable ability to shed light on the present by extrapolating from the past.

    Like Marshall, I think, Scowcroft has a sense of history in his bones, even if his are smaller lessons, learned piece by piece over a longer period of time. His is perhaps a more pragmatic and less dazzling mind, but he saw all the dangers of the 2003 move into Iraq, argued against the invasion, and for his troubles was dismissed as chairman of the prestigious President's Foreign Intelligence Advisory Board.

    I. The Truman Analogy

    Recently, Harry Truman, for reasons that would surely puzzle him if he were still alive, has become the Republicans' favorite Democratic president. In fact, the men around Bush who attempt to feed the White House line to journalists have begun to talk about the current president as a latter-day Truman: Yes, goes the line, Truman's rise to an ever more elevated status in the presidential pantheon is all ex post facto, conferred by historians long after he left office a beleaguered man, his poll numbers hopelessly low. Thus Bush and the people around him predict that a similar Trumanization will ride to the rescue for them.

    I've been living with Truman on and off for the last five years, while I was writing a book on the Korean War, The Coldest Winter [to be published in September by Hyperion], and I've been thinking a lot about the differences between Truman and Bush and their respective wars, Korea and Iraq. Yes, like Bush, Truman was embattled, and, yes, his popularity had plummeted at the end of his presidency, and, yes, he governed during an increasingly unpopular war. But the similarities end there.

    Even before Truman sent troops to Korea, in 1950, the national political mood was toxic. The Republicans had lost five presidential elections in a row, and Truman was under fierce partisan assault from the Republican far right, which felt marginalized even within its own party. It seized on the dubious issue of Communist subversion - especially with regard to China - as a way of getting even. (Knowing how ideological both Bush and Cheney are, it is easy to envision them as harsh critics of Truman at that moment.)

    Truman had inherited General Douglas MacArthur, "an untouchable," in Dwight Eisenhower's shrewd estimate, a man who was by then as much myth and legend as he was flesh and blood. The mastermind of America's victory in the Pacific, MacArthur was unquestionably talented, but also vainglorious, highly political, and partisan. Truman had twice invited him to come home from Japan, where, as Supreme Commander of the Allied Powers, he was supervising the reconstruction, to meet with him and address a joint session of Congress. Twice MacArthur turned him down, although a presidential invitation is really an order. MacArthur was saving his homecoming, it was clear, for a more dramatic moment, one that might just have been connected to a presidential run. He not only looked down on Truman personally, he never really accepted the primacy of the president in the constitutional hierarchy. For a president trying to govern during an extremely difficult moment in international politics, it was a monstrous political equation.

    Truman had been forced into the Korean War in 1950 when the Chinese authorized the North Koreans to cross the 38th parallel and attack South Korea. But MacArthur did not accept the president's vision of a limited war in Korea, and argued instead for a larger one with the Chinese. Truman wanted none of that. He might have been the last American president who did not graduate from college, but he was quite possibly our best-read modern president. History was always with him. With MacArthur pushing for a wider war with China, Truman liked to quote Napoleon, writing about his disastrous Russian adventure: "I beat them in every battle, but it does not get me anywhere."

    In time, MacArthur made an all-out frontal challenge to Truman, criticizing him to the press, almost daring the president to get rid of him. Knowing that the general had title to the flag and to the emotions of the country, while he himself merely had title to the Constitution, Truman nonetheless fired him. It was a grave constitutional crisis - nothing less than the concept of civilian control of the military was at stake. If there was an irony to this, it was that MacArthur and his journalistic boosters, such as Time magazine owner Henry Luce, always saw Truman as the little man and MacArthur as the big man. ("MacArthur," wrote Time at the moment of the firing, "was the personification of the big man, with the many admirers who look to a great man for leadership.… Truman was almost a professional little man.") But it was Truman's decision to meet MacArthur's challenge, even though he surely knew he would be the short-term loser, that has elevated his presidential stock.

    George W. Bush's relationship with his military commander was precisely the opposite. He dealt with the ever so malleable General Tommy Franks, a man, Presidential Medal of Freedom or no, who is still having a difficult time explaining to his peers in the military how Iraq happened, and how he agreed to so large a military undertaking with so small a force. It was the president, not the military or the public, who wanted the Iraq war, and Bush used the extra leverage granted him by 9/11 to get it. His people skillfully manipulated the intelligence in order to make the war seem necessary, and they snookered the military on force levels and the American public on the cost of it all. The key operative in all this was clearly Vice President Cheney, supremely arrogant, the most skilled of bureaucrats, seemingly the toughest tough guy of them all, but eventually revealed as a man who knew nothing of the country he wanted to invade and what that invasion might provoke.

    II. The New Red-Baiting

    If Bush takes his cues from anyone in the Truman era, it is not Truman but the Republican far right. This can be seen clearly from one of his history lessons, a speech the president gave on a visit to Riga, Latvia, in May 2005, when, in order to justify the Iraq intervention, he cited Yalta, the 1945 summit at which Roosevelt, Stalin, and Churchill met. Hailing Latvian freedom, Bush took a side shot at Roosevelt (and, whether he meant to or not, at Churchill, supposedly his great hero) and the Yalta accords, which effectively ceded Eastern Europe to the Soviets. Yalta, he said, "followed in the unjust tradition of Munich and the Molotov-Ribbentrop pact. Once again, when powerful governments negotiated, the freedom of small nations was somehow expendable. Yet this attempt to sacrifice freedom for the sake of stability left a continent divided and unstable. The captivity of millions in Central and Eastern Europe will be remembered as one of the greatest wrongs of history."

    This is some statement. Yalta is connected first to the Munich Agreement of 1938 (in which the Western democracies, at their most vulnerable and well behind the curve of military preparedness, ceded Czechoslovakia to Hitler), then, in the same breath, Bush blends in seamlessly (and sleazily) the Molotov-Ribbentrop pact, the temporary and cynical agreement between the Soviets and Nazis allowing the Germans to invade Poland and the Soviets to move into the Baltic nations. And from Molotov-Ribbentrop we jump ahead to Yalta itself, where, Bush implies, the two great leaders of the West casually sat by and gave away vast parts of Europe to the Soviet Union.

    After some 60 years Yalta has largely slipped from our political vocabulary, but for a time it was one of the great buzzwords in American politics, the first shot across the bow by the Republican right in their long, venomous, immensely destructive assault upon Roosevelt (albeit posthumously), Truman, and the Democratic Party as soft on Communism - just as today's White House attacks Democrats and other critics for being soft on terrorism, less patriotic, defeatists, underminers of the true strength of our country. Crucial to the right's exploitation of Yalta was the idea of a tired, sick, and left-leaning Roosevelt having given away too much and betraying the people of Eastern Europe, who, as a result, had to live under the brutal Soviet thumb - a distortion of history that resonated greatly with the many Eastern European ethnic groups in America, whose people, blue-collar workers, most of them, had voted solidly Democratic.

    The right got away with it, because, of all the fronts in the Second World War, the one least known in this country - our interest tends to disappear for those battles in which we did not participate - is ironically the most important: the Eastern Front, where the battle between the Germans and Russians took place and where, essentially, the outcome of the war was decided. It began with a classic act of hubris - Hitler's invasion of Russia, in June 1941, three years before we landed our troops in Normandy. Some three million German troops were involved in the attack, and in the early months the penetrations were quick and decisive. Minsk was quickly taken, the Germans crossed the Dnieper by July 10, and Smolensk fell shortly after. Some 700,000 men of the Red Army, its leadership already devastated by the madness of Stalin's purges, were captured by mid-September 1941. The Russian troops fell back and moved as much of their industry back east as they could. Then, slowly, the Russian lines stiffened, and the Germans, their supply lines too far extended, faltered as winter came on. The turning point was the Battle of Stalingrad, which began in late August 1942. It proved to be the most brutal battle of the war, with as many as two million combatants on both sides killed and wounded, but in the end the Russians held the city and captured what remained of the German Army there.

    In early 1943, the Red Army was on the offensive, the Germans in full retreat. By the middle of 1944, the Russians had 120 divisions driving west, some 2.3 million troops against an increasingly exhausted German Army of 800,000. By mid-July 1944, as the Allies were still trying to break out of the Normandy hedgerows, the Red Army was at the old Polish-Russian border. By the time of Yalta, they were closing in on Berlin. A month earlier, in January 1945, Churchill had acknowledged the inability of the West to limit the Soviet reach into much of Eastern and Central Europe. "Make no mistake, all the Balkans, except Greece, are going to be Bolshevized, and there is nothing I can do to prevent it. There is nothing I can do for Poland either."

    Yalta reflected not a sellout but a fait accompli.

    President Bush lives in a world where in effect it is always the summer of 1945, the Allies have just defeated the Axis, and a world filled with darkness for some six years has been rescued by a new and optimistic democracy, on its way to becoming a superpower. His is a world where other nations admire America or damned well ought to, and America is always right, always on the side of good, in a world of evil, and it's just a matter of getting the rest of the world to understand this. One of Bush's favorite conceits, used repeatedly in his speeches, is that democracies are peaceful and don't go to war against one another. Most citizens of the West tend to accept this view without question, but that is not how most of Africa, Asia, South America, and the Middle East, having felt the burden of the white man's colonial rule for much of the past two centuries, see it. The non-Western world does not think of the West as a citadel of pacifism and generosity, and many people in the U.S. State Department and the different intelligence agencies (and even the military) understand the resentments and suspicions of our intentions that exist in those regions. We are, you might say, fighting the forces of history in Iraq - religious, cultural, social, and inevitably political - created over centuries of conflict and oppressive rule.

    The president tends to drop off in his history lessons after World War II, especially when we get to Vietnam and things get a bit murkier. Had he made any serious study of our involvement there, he might have learned that the sheer ferocity of our firepower created enemies of people who were until then on the sidelines, thereby doing our enemies' recruiting for them. And still, today, our inability to concentrate such "shock and awe" on precisely whom we would like - causing what is now called collateral killing - creates a growing resentment among civilians, who may decide that whatever values we bring are not in the end worth it, because we have also brought too much killing and destruction to their country. The French fought in Vietnam before us, and when a French patrol went through a village, the Vietminh would on occasion kill a single French soldier, knowing that the French in a fury would retaliate by wiping out half the village - in effect, the Vietminh were baiting the trap for collateral killing.

    III. The Perils of Empire

    You don't hear other members of the current administration citing the lessons of Vietnam much, either, especially Cheney and Karl Rove, both of them gifted at working the bureaucracy for short-range political benefits, both highly partisan and manipulative, both unspeakably narrow and largely uninterested in understanding and learning about the larger world. As Joan Didion pointed out in her brilliant essay on Cheney in The New York Review of Books, it was Rumsfeld and Cheney who explained to Henry Kissinger, not usually slow on the draw when it came to the political impact of foreign policy, that Vietnam was likely to create a vast political backlash against the liberal McGovern forces. The two, relatively junior operators back then, were interested less in what had gone wrong in Vietnam than in getting some political benefit out of it. Cheney still speaks of Vietnam as a noble rather than a tragic endeavor, not that he felt at the time - with his five military deferments - that he needed to be part of that nobility.

    Still, it is hard for me to believe that anyone who knew anything about Vietnam, or for that matter the Algerian war, which directly followed Indochina for the French, couldn't see that going into Iraq was, in effect, punching our fist into the largest hornet's nest in the world. As in Vietnam, our military superiority is neutralized by political vulnerabilities. The borders are wide open. We operate quite predictably on marginal military intelligence. The adversary knows exactly where we are at all times, as we do not know where he is. Their weaponry fits an asymmetrical war, and they have the capacity to blend into the daily flow of Iraqi life, as we cannot. Our allies - the good Iraqi people the president likes to talk about - appear to be more and more ambivalent about the idea of a Christian, Caucasian liberation, and they do not seem to share many of our geopolitical goals.

    The book that brought me to history some 53 years ago, when I was a junior in college, was Cecil Woodham-Smith's wondrous The Reason Why, the story of why the Light Brigade marched into the Valley of Death, to be senselessly slaughtered, in the Crimean War. It is a tale of such folly and incompetence in leadership (then, in the British military, a man could buy the command of a regiment) that it is not just the story of a battle but an indictment of the entire British Empire. It is a story from the past we read again and again, that the most dangerous time for any nation may be that moment in its history when things are going unusually well, because its leaders become carried away with hubris and a sense of entitlement cloaked as rectitude. The arrogance of power, Senator William Fulbright called it during the Vietnam years.

    I have my own sense that this is what went wrong in the current administration, not just in the immediate miscalculation of Iraq but in the larger sense of misreading the historical moment we now live in. It is that the president and the men around him - most particularly the vice president - simply misunderstood what the collapse of the Soviet empire meant for America in national-security terms. Rumsfeld and Cheney are genuine triumphalists. Steeped in the culture of the Cold War and the benefits it always presented to their side in domestic political terms, they genuinely believed that we were infinitely more powerful as a nation throughout the world once the Soviet empire collapsed. Which we both were and very much were not. Certainly, the great obsessive struggle with the threat of a comparable superpower was removed, but that threat had probably been in decline in real terms for well more than 30 years, after the high-water mark of the Cuban missile crisis, in 1962. During the 80s, as advanced computer technology became increasingly important in defense apparatuses, and as the failures in the Russian economy had greater impact on that country's military capacity, the gap between us and the Soviets dramatically and continuously widened. The Soviets had become, at the end, as West German chancellor Helmut Schmidt liked to say, Upper Volta with missiles.

    At the time of the collapse of Communism, I thought there was far too much talk in America about how we had won the Cold War, rather than about how the Soviet Union, whose economy never worked, simply had imploded. I was never that comfortable with the idea that we as a nation had won, or that it was a personal victory for Ronald Reagan. To the degree that there was credit to be handed out, I thought it should go to those people in the satellite nations who had never lost faith in the cause of freedom and had endured year after year in difficult times under the Soviet thumb. If any Americans deserved credit, I thought it should be Truman and his advisers - Marshall, Kennan, Dean Acheson, and Chip Bohlen - all of them harshly attacked at one time or another by the Republican right for being soft on Communism. (The right tried particularly hard to block Eisenhower's nomination of Bohlen as ambassador to Moscow, in 1953, because he had been at Yalta.)

    After the Soviet Union fell, we were at once more powerful and, curiously, less so, because our military might was less applicable against the new, very different kind of threat that now existed in the world. Yet we stayed with the norms of the Cold War long after any genuine threat from it had receded, in no small part because our domestic politics were still keyed to it. At the same time, the checks and balances imposed on us by the Cold War were gone, the restraints fewer, and the temptations to misuse our power greater. What we neglected to consider was a warning from those who had gone before us - that there was, at moments like this, a historic temptation for nations to overreach.

    David Halberstam was a Vanity Fair contributing editor and the Pulitzer Prize-winning author of The Best and the Brightest and The Fifties. He was killed in a car accident on April 23.



Philadelphia, PA | March 18, 2008

As Prepared for Delivery

"We the people, in order to form a more perfect union."

Two hundred and twenty-one years ago, in a hall that still stands across the street, a group of men gathered and, with these simple words, launched America's improbable experiment in democracy. Farmers and scholars; statesmen and patriots who had traveled across an ocean to escape tyranny and persecution finally made real their declaration of independence at a Philadelphia convention that lasted through the spring of 1787.

The document they produced was eventually signed but ultimately unfinished. It was stained by this nation's original sin of slavery, a question that divided the colonies and brought the convention to a stalemate until the founders chose to allow the slave trade to continue for at least twenty more years, and to leave any final resolution to future generations.

Of course, the answer to the slavery question was already embedded within our Constitution - a Constitution that had at its very core the ideal of equal citizenship under the law; a Constitution that promised its people liberty, and justice, and a union that could be and should be perfected over time.

And yet words on a parchment would not be enough to deliver slaves from bondage, or provide men and women of every color and creed their full rights and obligations as citizens of the United States. What would be needed were Americans in successive generations who were willing to do their part - through protests and struggle, on the streets and in the courts, through a civil war and civil disobedience and always at great risk - to narrow that gap between the promise of our ideals and the reality of their time.

This was one of the tasks we set forth at the beginning of this campaign - to continue the long march of those who came before us, a march for a more just, more equal, more free, more caring and more prosperous America. I chose to run for the presidency at this moment in history because I believe deeply that we cannot solve the challenges of our time unless we solve them together - unless we perfect our union by understanding that we may have different stories, but we hold common hopes; that we may not look the same and we may not have come from the same place, but we all want to move in the same direction - towards a better future for our children and our grandchildren.

This belief comes from my unyielding faith in the decency and generosity of the American people. But it also comes from my own American story.

I am the son of a black man from Kenya and a white woman from Kansas. I was raised with the help of a white grandfather who survived a Depression to serve in Patton's Army during World War II and a white grandmother who worked on a bomber assembly line at Fort Leavenworth while he was overseas. I've gone to some of the best schools in America and lived in one of the world's poorest nations. I am married to a black American who carries within her the blood of slaves and slaveowners - an inheritance we pass on to our two precious daughters. I have brothers, sisters, nieces, nephews, uncles and cousins, of every race and every hue, scattered across three continents, and for as long as I live, I will never forget that in no other country on Earth is my story even possible.

It's a story that hasn't made me the most conventional candidate. But it is a story that has seared into my genetic makeup the idea that this nation is more than the sum of its parts - that out of many, we are truly one.

Throughout the first year of this campaign, against all predictions to the contrary, we saw how hungry the American people were for this message of unity. Despite the temptation to view my candidacy through a purely racial lens, we won commanding victories in states with some of the whitest populations in the country. In South Carolina, where the Confederate Flag still flies, we built a powerful coalition of African Americans and white Americans.

This is not to say that race has not been an issue in the campaign. At various stages in the campaign, some commentators have deemed me either "too black" or "not black enough." We saw racial tensions bubble to the surface during the week before the South Carolina primary. The press has scoured every exit poll for the latest evidence of racial polarization, not just in terms of white and black, but black and brown as well.

And yet, it has only been in the last couple of weeks that the discussion of race in this campaign has taken a particularly divisive turn.

On one end of the spectrum, we've heard the implication that my candidacy is somehow an exercise in affirmative action; that it's based solely on the desire of wide-eyed liberals to purchase racial reconciliation on the cheap. On the other end, we've heard my former pastor, Reverend Jeremiah Wright, use incendiary language to express views that have the potential not only to widen the racial divide, but views that denigrate both the greatness and the goodness of our nation; that rightly offend white and black alike.

I have already condemned, in unequivocal terms, the statements of Reverend Wright that have caused such controversy. For some, nagging questions remain. Did I know him to be an occasionally fierce critic of American domestic and foreign policy? Of course. Did I ever hear him make remarks that could be considered controversial while I sat in church? Yes. Did I strongly disagree with many of his political views? Absolutely - just as I'm sure many of you have heard remarks from your pastors, priests, or rabbis with which you strongly disagreed.

But the remarks that have caused this recent firestorm weren't simply controversial. They weren't simply a religious leader's effort to speak out against perceived injustice. Instead, they expressed a profoundly distorted view of this country - a view that sees white racism as endemic, and that elevates what is wrong with America above all that we know is right with America; a view that sees the conflicts in the Middle East as rooted primarily in the actions of stalwart allies like Israel, instead of emanating from the perverse and hateful ideologies of radical Islam.

As such, Reverend Wright's comments were not only wrong but divisive, divisive at a time when we need unity; racially charged at a time when we need to come together to solve a set of monumental problems - two wars, a terrorist threat, a falling economy, a chronic health care crisis and potentially devastating climate change; problems that are neither black nor white nor Latino nor Asian, but rather problems that confront us all.

Given my background, my politics, and my professed values and ideals, there will no doubt be those for whom my statements of condemnation are not enough. Why associate myself with Reverend Wright in the first place, they may ask? Why not join another church? And I confess that if all that I knew of Reverend Wright were the snippets of those sermons that have run in an endless loop on the television and YouTube, or if Trinity United Church of Christ conformed to the caricatures being peddled by some commentators, there is no doubt that I would react in much the same way.

But the truth is, that isn't all that I know of the man. The man I met more than twenty years ago is a man who helped introduce me to my Christian faith, a man who spoke to me about our obligations to love one another; to care for the sick and lift up the poor. He is a man who served his country as a U.S. Marine; who has studied and lectured at some of the finest universities and seminaries in the country, and who for over thirty years led a church that serves the community by doing God's work here on Earth - by housing the homeless, ministering to the needy, providing day care services and scholarships and prison ministries, and reaching out to those suffering from HIV/AIDS.

In my first book, Dreams From My Father, I described the experience of my first service at Trinity:

"People began to shout, to rise from their seats and clap and cry out, a forceful wind carrying the reverend's voice up into the rafters....And in that single note - hope! - I heard something else; at the foot of that cross, inside the thousands of churches across the city, I imagined the stories of ordinary black people merging with the stories of David and Goliath, Moses and Pharaoh, the Christians in the lion's den, Ezekiel's field of dry bones. Those stories - of survival, and freedom, and hope - became our story, my story; the blood that had spilled was our blood, the tears our tears; until this black church, on this bright day, seemed once more a vessel carrying the story of a people into future generations and into a larger world. Our trials and triumphs became at once unique and universal, black and more than black; in chronicling our journey, the stories and songs gave us a means to reclaim memories that we didn't need to feel shame about...memories that all people might study and cherish - and with which we could start to rebuild."

That has been my experience at Trinity. Like other predominantly black churches across the country, Trinity embodies the black community in its entirety - the doctor and the welfare mom, the model student and the former gang-banger. Like other black churches, Trinity's services are full of raucous laughter and sometimes bawdy humor. They are full of dancing, clapping, screaming and shouting that may seem jarring to the untrained ear. The church contains in full the kindness and cruelty, the fierce intelligence and the shocking ignorance, the struggles and successes, the love and yes, the bitterness and bias that make up the black experience in America.

And this helps explain, perhaps, my relationship with Reverend Wright. As imperfect as he may be, he has been like family to me. He strengthened my faith, officiated my wedding, and baptized my children. Not once in my conversations with him have I heard him talk about any ethnic group in derogatory terms, or treat whites with whom he interacted with anything but courtesy and respect. He contains within him the contradictions - the good and the bad - of the community that he has served diligently for so many years.

I can no more disown him than I can disown the black community. I can no more disown him than I can my white grandmother - a woman who helped raise me, a woman who sacrificed again and again for me, a woman who loves me as much as she loves anything in this world, but a woman who once confessed her fear of black men who passed by her on the street, and who on more than one occasion has uttered racial or ethnic stereotypes that made me cringe.

These people are a part of me. And they are a part of America, this country that I love.

Some will see this as an attempt to justify or excuse comments that are simply inexcusable. I can assure you it is not. I suppose the politically safe thing would be to move on from this episode and just hope that it fades into the woodwork. We can dismiss Reverend Wright as a crank or a demagogue, just as some have dismissed Geraldine Ferraro, in the aftermath of her recent statements, as harboring some deep-seated racial bias.

But race is an issue that I believe this nation cannot afford to ignore right now. We would be making the same mistake that Reverend Wright made in his offending sermons about America - to simplify and stereotype and amplify the negative to the point that it distorts reality.

The fact is that the comments that have been made and the issues that have surfaced over the last few weeks reflect the complexities of race in this country that we've never really worked through - a part of our union that we have yet to perfect. And if we walk away now, if we simply retreat into our respective corners, we will never be able to come together and solve challenges like health care, or education, or the need to find good jobs for every American.

Understanding this reality requires a reminder of how we arrived at this point. As William Faulkner once wrote, "The past isn't dead and buried. In fact, it isn't even past." We do not need to recite here the history of racial injustice in this country. But we do need to remind ourselves that so many of the disparities that exist in the African-American community today can be directly traced to inequalities passed on from an earlier generation that suffered under the brutal legacy of slavery and Jim Crow.

Segregated schools were, and are, inferior schools; we still haven't fixed them, fifty years after Brown v. Board of Education, and the inferior education they provided, then and now, helps explain the pervasive achievement gap between today's black and white students.

Legalized discrimination - where blacks were prevented, often through violence, from owning property, or loans were not granted to African-American business owners, or black homeowners could not access FHA mortgages, or blacks were excluded from unions, or the police force, or fire departments - meant that black families could not amass any meaningful wealth to bequeath to future generations. That history helps explain the wealth and income gap between black and white, and the concentrated pockets of poverty that persist in so many of today's urban and rural communities.

A lack of economic opportunity among black men, and the shame and frustration that came from not being able to provide for one's family, contributed to the erosion of black families - a problem that welfare policies for many years may have worsened. And the lack of basic services in so many urban black neighborhoods - parks for kids to play in, police walking the beat, regular garbage pick-up and building code enforcement - all helped create a cycle of violence, blight and neglect that continues to haunt us.

This is the reality in which Reverend Wright and other African-Americans of his generation grew up. They came of age in the late fifties and early sixties, a time when segregation was still the law of the land and opportunity was systematically constricted. What's remarkable is not how many failed in the face of discrimination, but rather how many men and women overcame the odds; how many were able to make a way out of no way for those like me who would come after them.

But for all those who scratched and clawed their way to get a piece of the American Dream, there were many who didn't make it - those who were ultimately defeated, in one way or another, by discrimination. That legacy of defeat was passed on to future generations - those young men and increasingly young women who we see standing on street corners or languishing in our prisons, without hope or prospects for the future. Even for those blacks who did make it, questions of race, and racism, continue to define their worldview in fundamental ways. For the men and women of Reverend Wright's generation, the memories of humiliation and doubt and fear have not gone away; nor has the anger and the bitterness of those years. That anger may not get expressed in public, in front of white co-workers or white friends. But it does find voice in the barbershop or around the kitchen table. At times, that anger is exploited by politicians, to gin up votes along racial lines, or to make up for a politician's own failings.

And occasionally it finds voice in the church on Sunday morning, in the pulpit and in the pews. The fact that so many people are surprised to hear that anger in some of Reverend Wright's sermons simply reminds us of the old truism that the most segregated hour in American life occurs on Sunday morning. That anger is not always productive; indeed, all too often it distracts attention from solving real problems; it keeps us from squarely facing our own complicity in our condition, and prevents the African-American community from forging the alliances it needs to bring about real change. But the anger is real; it is powerful; and to simply wish it away, to condemn it without understanding its roots, only serves to widen the chasm of misunderstanding that exists between the races.

In fact, a similar anger exists within segments of the white community. Most working- and middle-class white Americans don't feel that they have been particularly privileged by their race. Their experience is the immigrant experience - as far as they're concerned, no one's handed them anything, they've built it from scratch. They've worked hard all their lives, many times only to see their jobs shipped overseas or their pension dumped after a lifetime of labor. They are anxious about their futures, and feel their dreams slipping away; in an era of stagnant wages and global competition, opportunity comes to be seen as a zero sum game, in which your dreams come at my expense. So when they are told to bus their children to a school across town; when they hear that an African American is getting an advantage in landing a good job or a spot in a good college because of an injustice that they themselves never committed; when they're told that their fears about crime in urban neighborhoods are somehow prejudiced, resentment builds over time.

Like the anger within the black community, these resentments aren't always expressed in polite company. But they have helped shape the political landscape for at least a generation. Anger over welfare and affirmative action helped forge the Reagan Coalition. Politicians routinely exploited fears of crime for their own electoral ends. Talk show hosts and conservative commentators built entire careers unmasking bogus claims of racism while dismissing legitimate discussions of racial injustice and inequality as mere political correctness or reverse racism.

Just as black anger often proved counterproductive, so have these white resentments distracted attention from the real culprits of the middle class squeeze - a corporate culture rife with inside dealing, questionable accounting practices, and short-term greed; a Washington dominated by lobbyists and special interests; economic policies that favor the few over the many. And yet, to wish away the resentments of white Americans, to label them as misguided or even racist, without recognizing they are grounded in legitimate concerns - this too widens the racial divide, and blocks the path to understanding.

This is where we are right now. It's a racial stalemate we've been stuck in for years. Contrary to the claims of some of my critics, black and white, I have never been so naïve as to believe that we can get beyond our racial divisions in a single election cycle, or with a single candidacy - particularly a candidacy as imperfect as my own.

But I have asserted a firm conviction - a conviction rooted in my faith in God and my faith in the American people - that working together we can move beyond some of our old racial wounds, and that in fact we have no choice if we are to continue on the path of a more perfect union.

For the African-American community, that path means embracing the burdens of our past without becoming victims of our past. It means continuing to insist on a full measure of justice in every aspect of American life. But it also means binding our particular grievances - for better health care, and better schools, and better jobs - to the larger aspirations of all Americans - the white woman struggling to break the glass ceiling, the white man who's been laid off, the immigrant trying to feed his family. And it means taking full responsibility for our own lives - by demanding more from our fathers, and spending more time with our children, and reading to them, and teaching them that while they may face challenges and discrimination in their own lives, they must never succumb to despair or cynicism; they must always believe that they can write their own destiny.

Ironically, this quintessentially American - and yes, conservative - notion of self-help found frequent expression in Reverend Wright's sermons. But what my former pastor too often failed to understand is that embarking on a program of self-help also requires a belief that society can change.

The profound mistake of Reverend Wright's sermons is not that he spoke about racism in our society. It's that he spoke as if our society was static; as if no progress has been made; as if this country - a country that has made it possible for one of his own members to run for the highest office in the land and build a coalition of white and black, Latino and Asian, rich and poor, young and old - is still irrevocably bound to a tragic past. But what we know - what we have seen - is that America can change. That is the true genius of this nation. What we have already achieved gives us hope - the audacity to hope - for what we can and must achieve tomorrow.

In the white community, the path to a more perfect union means acknowledging that what ails the African-American community does not just exist in the minds of black people; that the legacy of discrimination - and current incidents of discrimination, while less overt than in the past - are real and must be addressed. Not just with words, but with deeds - by investing in our schools and our communities; by enforcing our civil rights laws and ensuring fairness in our criminal justice system; by providing this generation with ladders of opportunity that were unavailable for previous generations. It requires all Americans to realize that your dreams do not have to come at the expense of my dreams; that investing in the health, welfare, and education of black and brown and white children will ultimately help all of America prosper.

In the end, then, what is called for is nothing more, and nothing less, than what all the world's great religions demand - that we do unto others as we would have them do unto us. Let us be our brother's keeper, Scripture tells us. Let us be our sister's keeper. Let us find that common stake we all have in one another, and let our politics reflect that spirit as well.

For we have a choice in this country. We can accept a politics that breeds division, and conflict, and cynicism. We can tackle race only as spectacle - as we did in the OJ trial - or in the wake of tragedy, as we did in the aftermath of Katrina - or as fodder for the nightly news. We can play Reverend Wright's sermons on every channel, every day and talk about them from now until the election, and make the only question in this campaign whether or not the American people think that I somehow believe or sympathize with his most offensive words. We can pounce on some gaffe by a Hillary supporter as evidence that she's playing the race card, or we can speculate on whether white men will all flock to John McCain in the general election regardless of his policies.

We can do that.

But if we do, I can tell you that in the next election, we'll be talking about some other distraction. And then another one. And then another one. And nothing will change.

That is one option. Or, at this moment, in this election, we can come together and say, "Not this time." This time we want to talk about the crumbling schools that are stealing the future of black children and white children and Asian children and Hispanic children and Native American children. This time we want to reject the cynicism that tells us that these kids can't learn; that those kids who don't look like us are somebody else's problem. The children of America are not those kids; they are our kids, and we will not let them fall behind in a 21st century economy. Not this time.

This time we want to talk about how the lines in the Emergency Room are filled with whites and blacks and Hispanics who do not have health care; who don't have the power on their own to overcome the special interests in Washington, but who can take them on if we do it together.

This time we want to talk about the shuttered mills that once provided a decent life for men and women of every race, and the homes for sale that once belonged to Americans from every religion, every region, every walk of life. This time we want to talk about the fact that the real problem is not that someone who doesn't look like you might take your job; it's that the corporation you work for will ship it overseas for nothing more than a profit.

This time we want to talk about the men and women of every color and creed who serve together, and fight together, and bleed together under the same proud flag. We want to talk about how to bring them home from a war that never should've been authorized and never should've been waged, and we want to talk about how we'll show our patriotism by caring for them, and their families, and giving them the benefits they have earned.

I would not be running for President if I didn't believe with all my heart that this is what the vast majority of Americans want for this country. This union may never be perfect, but generation after generation has shown that it can always be perfected. And today, whenever I find myself feeling doubtful or cynical about this possibility, what gives me the most hope is the next generation - the young people whose attitudes and beliefs and openness to change have already made history in this election.

There is one story in particular that I'd like to leave you with today - a story I told when I had the great honor of speaking on Dr. King's birthday at his home church, Ebenezer Baptist, in Atlanta.

There is a young, twenty-three-year-old white woman named Ashley Baia who organized for our campaign in Florence, South Carolina. She had been working to organize a mostly African-American community since the beginning of this campaign, and one day she was at a roundtable discussion where everyone went around telling their story and why they were there.

And Ashley said that when she was nine years old, her mother got cancer. And because her mother had to miss days of work, she was let go and lost her health care. They had to file for bankruptcy, and that's when Ashley decided that she had to do something to help her mom.

She knew that food was one of their most expensive costs, and so Ashley convinced her mother that what she really liked and really wanted to eat more than anything else was mustard and relish sandwiches. Because that was the cheapest way to eat.

She did this for a year until her mom got better, and she told everyone at the roundtable that the reason she joined our campaign was so that she could help the millions of other children in the country who want and need to help their parents too.

Now Ashley might have made a different choice. Perhaps somebody told her along the way that the source of her mother's problems was blacks who were on welfare and too lazy to work, or Hispanics who were coming into the country illegally. But she didn't. She sought out allies in her fight against injustice.

Anyway, Ashley finishes her story and then goes around the room and asks everyone else why they're supporting the campaign. They all have different stories and reasons. Many bring up a specific issue. And finally they come to this elderly black man who's been sitting there quietly the entire time. And Ashley asks him why he's there. And he does not bring up a specific issue. He does not say health care or the economy. He does not say education or the war. He does not say that he was there because of Barack Obama. He simply says to everyone in the room, "I am here because of Ashley."

"I'm here because of Ashley." By itself, that single moment of recognition between that young white girl and that old black man is not enough. It is not enough to give health care to the sick, or jobs to the jobless, or education to our children.

But it is where we start. It is where our union grows stronger. And as so many generations have come to realize over the course of the two hundred and twenty-one years since a band of patriots signed that document in Philadelphia, that is where the perfection begins.



Comment by the New Yorker magazine

October 13, 2008

Never in living memory has an election been more critical than the one fast approaching—that’s the quadrennial cliché, as expected as the balloons and the bombast. And yet when has it ever felt so urgently true? When have so many Americans had so clear a sense that a Presidency has—at the levels of competence, vision, and integrity—undermined the country and its ideals?

The incumbent Administration has distinguished itself for the ages. The Presidency of George W. Bush is the worst since Reconstruction, so there is no mystery about why the Republican Party—which has held dominion over the executive branch of the federal government for the past eight years and the legislative branch for most of that time—has little desire to defend its record, domestic or foreign. The only speaker at the Convention in St. Paul who uttered more than a sentence or two in support of the President was his wife, Laura. Meanwhile, the nominee, John McCain, played the part of a vaudeville illusionist, asking to be regarded as an apostle of change after years of embracing the essentials of the Bush agenda with ever-increasing ardor.

The Republican disaster begins at home. Even before taking into account whatever fantastically expensive plan eventually emerges to help rescue the financial system from Wall Street’s long-running pyramid schemes, the economic and fiscal picture is bleak. During the Bush Administration, the national debt, now approaching ten trillion dollars, has nearly doubled. Next year’s federal budget is projected to run a half-trillion-dollar deficit, a precipitous fall from the seven-hundred-billion-dollar surplus that was projected when Bill Clinton left office. Private-sector job creation has been a sixth of what it was under President Clinton. Five million people have fallen into poverty. The number of Americans without health insurance has grown by seven million, while average premiums have nearly doubled. Meanwhile, the principal domestic achievement of the Bush Administration has been to shift the relative burden of taxation from the rich to the rest. For the top one per cent of us, the Bush tax cuts are worth, on average, about a thousand dollars a week; for the bottom fifth, about a dollar and a half. The unfairness will only increase if the painful, yet necessary, effort to rescue the credit markets ends up preventing the rescue of our health-care system, our environment, and our physical, educational, and industrial infrastructure.

At the same time, a hundred and fifty thousand American troops are in Iraq and thirty-three thousand are in Afghanistan. There is still disagreement about the wisdom of overthrowing Saddam Hussein and his horrific regime, but there is no longer the slightest doubt that the Bush Administration manipulated, bullied, and lied the American public into this war and then mismanaged its prosecution in nearly every aspect. The direct costs, besides an expenditure of more than six hundred billion dollars, have included the loss of more than four thousand Americans, the wounding of thirty thousand, the deaths of tens of thousands of Iraqis, and the displacement of four and a half million men, women, and children. Only now, after American forces have been fighting for a year longer than they did in the Second World War, is there a glimmer of hope that the conflict in Iraq has entered a stage of fragile stability.

The indirect costs, both of the war in particular and of the Administration’s unilateralist approach to foreign policy in general, have also been immense. The torture of prisoners, authorized at the highest level, has been an ethical and a public-diplomacy catastrophe. At a moment when the global environment, the global economy, and global stability all demand a transition to new sources of energy, the United States has been a global retrograde, wasteful in its consumption and heedless in its policy. Strategically and morally, the Bush Administration has squandered the American capacity to counter the example and the swagger of its rivals. China, Russia, Iran, Saudi Arabia, and other illiberal states have concluded, each in its own way, that democratic principles and human rights need not be components of a stable, prosperous future. At recent meetings of the United Nations, emboldened despots like Mahmoud Ahmadinejad of Iran came to town sneering at our predicament and hailing the “end of the American era.”

The election of 2008 is the first in more than half a century in which no incumbent President or Vice-President is on the ballot. There is, however, an incumbent party, and that party has been lucky enough to find itself, apparently against the wishes of its “base,” with a nominee who evidently disliked George W. Bush before it became fashionable to do so. In South Carolina in 2000, Bush crushed John McCain with a sub-rosa primary campaign of such viciousness that McCain lashed out memorably against Bush’s Christian-right allies. So profound was McCain’s anger that in 2004 he flirted with the possibility of joining the Democratic ticket under John Kerry. Bush, who took office as a “compassionate conservative,” governed immediately as a rightist ideologue. During that first term, McCain bolstered his reputation, sometimes deserved, as a “maverick” willing to work with Democrats on such issues as normalizing relations with Vietnam, campaign-finance reform, and immigration reform. He co-sponsored, with John Edwards and Edward Kennedy, a patients’ bill of rights. In 2001 and 2003, he voted against the Bush tax cuts. With John Kerry, he co-sponsored a bill raising auto-fuel efficiency standards and, with Joseph Lieberman, a cap-and-trade regime on carbon emissions. He was one of a minority of Republicans opposed to unlimited drilling for oil and gas off America’s shores.

Since the 2004 election, however, McCain has moved remorselessly rightward in his quest for the Republican nomination. He paid obeisance to Jerry Falwell and preachers of his ilk. He abandoned immigration reform, eventually coming out against his own bill. Most shocking, McCain, who had repeatedly denounced torture under all circumstances, voted in February against a ban on the very techniques of “enhanced interrogation” that he himself once endured in Vietnam—as long as the torturers were civilians employed by the C.I.A.

On almost every issue, McCain and the Democratic Party's nominee, Barack Obama, speak the generalized language of "reform," but only Obama has provided a convincing, rational, and fully developed vision. McCain has abandoned his opposition to the Bush-era tax cuts and has taken up the demagogic call—in the midst of recession and Wall Street calamity, with looming crises in Social Security, Medicare, and Medicaid—for more tax cuts. Bush's cuts expire in 2011. If McCain, as he has proposed, cuts taxes for corporations and estates, the benefits once more would go disproportionately to the wealthy.

In Washington, the craze for pure market triumphalism is over. Treasury Secretary Henry Paulson arrived in town (via Goldman Sachs) a Republican, but it seems that he will leave a Democrat. In other words, he has come to see that the abuses that led to the current financial crisis––not least, excessive speculation on borrowed capital––can be fixed only with government regulation and oversight. McCain, who has never evinced much interest in, or knowledge of, economic questions, has had little of substance to say about the crisis. His most notable gesture of concern—a melodramatic call last month to suspend his campaign and postpone the first Presidential debate until the government bailout plan was ready—soon revealed itself as an empty diversionary tactic.

By contrast, Obama has made a serious study of the mechanics and the history of this economic disaster and of the possibilities of stimulating a recovery. Last March, in New York, in a speech notable for its depth, balance, and foresight, he said, “A complete disdain for pay-as-you-go budgeting, coupled with a generally scornful attitude towards oversight and enforcement, allowed far too many to put short-term gain ahead of long-term consequences.” Obama is committed to reforms that value not only the restoration of stability but also the protection of the vast majority of the population, which did not partake of the fruits of the binge years. He has called for greater and more programmatic regulation of the financial system; the creation of a National Infrastructure Reinvestment Bank, which would help reverse the decay of our roads, bridges, and mass-transit systems, and create millions of jobs; and a major investment in the green-energy sector.

On energy and global warming, Obama offers a set of forceful proposals. He supports a cap-and-trade program to reduce America’s carbon emissions by eighty per cent by 2050—an enormously ambitious goal, but one that many climate scientists say must be met if atmospheric carbon dioxide is to be kept below disastrous levels. Large emitters, like utilities, would acquire carbon allowances, and those which emit less carbon dioxide than their allotment could sell the resulting credits to those which emit more; over time, the available allowances would decline. Significantly, Obama wants to auction off the allowances; this would provide fifteen billion dollars a year for developing alternative-energy sources and creating job-training programs in green technologies. He also wants to raise federal fuel-economy standards and to require that ten per cent of America’s electricity be generated from renewable sources by 2012. Taken together, his proposals represent the most coherent and far-sighted strategy ever offered by a Presidential candidate for reducing the nation’s reliance on fossil fuels.

There was once reason to hope that McCain and Obama would have a sensible debate about energy and climate policy. McCain was one of the first Republicans in the Senate to support federal limits on carbon dioxide, and he has touted his own support for a less ambitious cap-and-trade program as evidence of his independence from the White House. But, as polls showed Americans growing jittery about gasoline prices, McCain apparently found it expedient in this area, too, to shift course. He took a dubious idea—lifting the federal moratorium on offshore oil drilling—and placed it at the very center of his campaign. Opening up America’s coastal waters to drilling would have no impact on gasoline prices in the short term, and, even over the long term, the effect, according to a recent analysis by the Department of Energy, would be “insignificant.” Such inconvenient facts, however, are waved away by a campaign that finally found its voice with the slogan “Drill, baby, drill!”

The contrast between the candidates is even sharper with respect to the third branch of government. A tense equipoise currently prevails among the Justices of the Supreme Court, where four hard-core conservatives face off against four moderate liberals. Anthony M. Kennedy is the swing vote, determining the outcome of case after case.

McCain cites Chief Justice John Roberts and Justice Samuel Alito, two reliable conservatives, as models for his own prospective appointments. If he means what he says, and if he replaces even one moderate on the current Supreme Court, then Roe v. Wade will be reversed, and states will again be allowed to impose absolute bans on abortion. McCain’s views have hardened on this issue. In 1999, he said he opposed overturning Roe; by 2006, he was saying that its demise “wouldn’t bother me any”; by 2008, he no longer supported adding rape and incest as exceptions to his party’s platform opposing abortion.

But scrapping Roe—which, after all, would leave states as free to permit abortion as to criminalize it—would be just the beginning. Given the ideological agenda that the existing conservative bloc has pursued, it’s safe to predict that affirmative action of all kinds would likely be outlawed by a McCain Court. Efforts to expand executive power, which, in recent years, certain Justices have nobly tried to resist, would likely increase. Barriers between church and state would fall; executions would soar; legal checks on corporate power would wither—all with just one new conservative nominee on the Court. And the next President is likely to make three appointments.

Obama, who taught constitutional law at the University of Chicago, voted against confirming not only Roberts and Alito but also several unqualified lower-court nominees. As an Illinois state senator, he won the support of prosecutors and police organizations for new protections against convicting the innocent in capital cases. While McCain voted to continue to deny habeas-corpus rights to detainees, perpetuating the Bush Administration’s regime of state-sponsored extra-legal detention, Obama took the opposite side, pushing to restore the right of all U.S.-held prisoners to a hearing. The judicial future would be safe in his care.

In the shorthand of political commentary, the Iraq war seems to leave McCain and Obama roughly even. Opposing it before the invasion, Obama had the prescience to warn of a costly and indefinite occupation and rising anti-American radicalism around the world; supporting it, McCain foresaw none of this. More recently, in early 2007 McCain risked his Presidential prospects on the proposition that five additional combat brigades could salvage a war that by then appeared hopeless. Obama, along with most of the country, had decided that it was time to cut American losses. Neither candidate’s calculations on Iraq have been as cheaply political as McCain’s repeated assertion that Obama values his career over his country; both men based their positions, right or wrong, on judgment and principle.

President Bush’s successor will inherit two wars and the realities of limited resources, flagging popular will, and the dwindling possibilities of what can be achieved by American power. McCain’s views on these subjects range from the simplistic to the unknown. In Iraq, he seeks “victory”—a word that General David Petraeus refuses to use, and one that fundamentally misrepresents the messy, open-ended nature of the conflict. As for Afghanistan, on the rare occasions when McCain mentions it he implies that the surge can be transferred directly from Iraq, which suggests that his grasp of counterinsurgency is not as firm as he insisted it was during the first Presidential debate. McCain always displays more faith in force than interest in its strategic consequences. Unlike Obama, McCain has no political strategy for either war, only the dubious hope that greater security will allow things to work out. Obama has long warned of deterioration along the Afghanistan-Pakistan border, and has a considered grasp of its vital importance. His strategy for both Afghanistan and Iraq shows an understanding of the role that internal politics, economics, corruption, and regional diplomacy play in wars where there is no battlefield victory.

Unimaginably painful personal experience taught McCain that war is above all a test of honor: maintain the will to fight on, be prepared to risk everything, and you will prevail. Asked during the first debate to outline “the lessons of Iraq,” McCain said, “I think the lessons of Iraq are very clear: that you cannot have a failed strategy that will then cause you to nearly lose a conflict.” A soldier’s answer—but a statesman must have a broader view of war and peace. The years ahead will demand not only determination but also diplomacy, flexibility, patience, judiciousness, and intellectual engagement. These are no more McCain’s strong suit than the current President’s. Obama, for his part, seems to know that more will be required than willpower and force to extract some advantage from the wreckage of the Bush years.

Obama is also better suited for the task of renewing the bedrock foundations of American influence. An American restoration in foreign affairs will require a commitment not only to international coöperation but also to international institutions that can address global warming, the dislocations of what will likely be a deepening global economic crisis, disease epidemics, nuclear proliferation, terrorism, and other, more traditional security challenges. Many of the Cold War-era vehicles for engagement and negotiation—the United Nations, the World Bank, the Nuclear Non-Proliferation Treaty regime, the North Atlantic Treaty Organization—are moribund, tattered, or outdated. Obama has the generational outlook that will be required to revive or reinvent these compacts. He would be the first postwar American President unencumbered by the legacies of either Munich or Vietnam.

The next President must also restore American moral credibility. Closing Guantánamo, banning all torture, and ending the Iraq war as responsibly as possible will provide a start, but only that. The modern Presidency is as much a vehicle for communication as for decision-making, and the relevant audiences are global. Obama has inspired many Americans in part because he holds up a mirror to their own idealism. His election would do no less—and likely more—overseas.

What most distinguishes the candidates, however, is character—and here, contrary to conventional wisdom, Obama is clearly the stronger of the two. Not long ago, Rick Davis, McCain’s campaign manager, said, “This election is not about issues. This election is about a composite view of what people take away from these candidates.” The view that this election is about personalities leaves out policy, complexity, and accountability. Even so, there’s some truth in what Davis said—but it hardly points to the conclusion that he intended.

Echoing Obama, McCain has made “change” one of his campaign mantras. But the change he has actually provided has been in himself, and it is not just a matter of altering his positions. A willingness to pander and even lie has come to define his Presidential campaign and its televised advertisements. A contemptuous duplicity, a meanness, has entered his talk on the stump—so much so that it seems obvious that, in the drive for victory, he is willing to replicate some of the same underhanded methods that defeated him eight years ago in South Carolina.

Perhaps nothing revealed McCain’s cynicism more than his choice of Sarah Palin, the former mayor of Wasilla, Alaska, who had been governor of that state for twenty-one months, as the Republican nominee for Vice-President. In the interviews she has given since her nomination, she has had difficulty uttering coherent unscripted responses about the most basic issues of the day. We are watching a candidate for Vice-President cram for her ongoing exam in elementary domestic and foreign policy. This is funny as a Tina Fey routine on “Saturday Night Live,” but as a vision of the political future it’s deeply unsettling. Palin has no business being the backup to a President of any age, much less to one who is seventy-two and in imperfect health. In choosing her, McCain committed an act of breathtaking heedlessness and irresponsibility. Obama’s choice, Joe Biden, is not without imperfections. His tongue sometimes runs in advance of his mind, providing his own fodder for late-night comedians, but there is no comparison with Palin. His deep experience in foreign affairs, the judiciary, and social policy makes him a reassuring and complementary partner for Obama.

The longer the campaign goes on, the more the issues of personality and character have reflected badly on McCain. Unless appearances are very deceiving, he is impulsive, impatient, self-dramatizing, erratic, and a compulsive risk-taker. These qualities may have contributed to his usefulness as a “maverick” senator. But in a President they would be a menace.

By contrast, Obama’s transformative message is accompanied by a sense of pragmatic calm. A tropism for unity is an essential part of his character and of his campaign. It is part of what allowed him to overcome a Democratic opponent who entered the race with tremendous advantages. It is what helped him forge a political career relying both on the liberals of Hyde Park and on the political regulars of downtown Chicago. His policy preferences are distinctly liberal, but he is determined to speak to a broad range of Americans who do not necessarily share his every value or opinion. For some who oppose him, his equanimity even under the ugliest attack seems like hauteur; for some who support him, his reluctance to counterattack in the same vein seems like self-defeating detachment. Yet it is Obama’s temperament—and not McCain’s—that seems appropriate for the office both men seek and for the volatile and dangerous era in which we live. Those who dismiss his centeredness as self-centeredness or his composure as indifference are as wrong as those who mistook Eisenhower’s stolidity for denseness or Lincoln’s humor for lack of seriousness.

Nowadays, almost every politician who thinks about running for President arranges to become an author. Obama’s books are different: he wrote them. “The Audacity of Hope” (2006) is a set of policy disquisitions loosely structured around an account of his freshman year in the United States Senate. Though a campaign manifesto of sorts, it is superior to that genre’s usual blowsy pastiche of ghostwritten speeches. But it is Obama’s first book, “Dreams from My Father: A Story of Race and Inheritance” (1995), that offers an unprecedented glimpse into the mind and heart of a potential President. Obama began writing it in his early thirties, before he was a candidate for anything. Not since Theodore Roosevelt has an American politician this close to the pinnacle of power produced such a sustained, highly personal work of literary merit before being definitively swept up by the tides of political ambition.

A Presidential election is not the awarding of a Pulitzer Prize: we elect a politician and, we hope, a statesman, not an author. But Obama’s first book is valuable in the way that it reveals his fundamental attitudes of mind and spirit. “Dreams from My Father” is an illuminating memoir not only in the substance of Obama’s own peculiarly American story but also in the qualities he brings to the telling: a formidable intelligence, emotional empathy, self-reflection, balance, and a remarkable ability to see life and the world through the eyes of people very different from himself. In common with nearly all other senators and governors of his generation, Obama does not count military service as part of his biography. But his life has been full of tests—personal, spiritual, racial, political—that bear on his preparation for great responsibility.

It is perfectly legitimate to call attention, as McCain has done, to Obama’s lack of conventional national and international policymaking experience. We, too, wish he had more of it. But office-holding is not the only kind of experience relevant to the task of leading a wildly variegated nation. Obama’s immersion in diverse human environments (Hawaii’s racial rainbow, Chicago’s racial cauldron, countercultural New York, middle-class Kansas, predominantly Muslim Indonesia), his years of organizing among the poor, his taste of corporate law and his grounding in public-interest and constitutional law—these, too, are experiences. And his books show that he has wrung from them every drop of insight and breadth of perspective they contained.

The exhaustingly, sometimes infuriatingly long campaign of 2008 (and 2007) has had at least one virtue: it has demonstrated that Obama’s intelligence and steady temperament are not just figments of the writer’s craft. He has made mistakes, to be sure. (His failure to accept McCain’s imaginative proposal for a series of unmediated joint appearances was among them.) But, on the whole, his campaign has been marked by patience, planning, discipline, organization, technological proficiency, and strategic astuteness. Obama has often looked two or three moves ahead, relatively impervious to the permanent hysteria of the hourly news cycle and the cable-news shouters. And when crisis has struck, as it did when the divisive antics of his ex-pastor threatened to bring down his campaign, he has proved equal to the moment, rescuing himself with a speech that not only drew the poison but also demonstrated a profound respect for the electorate. Although his opponents have tried to attack him as a man of “mere” words, Obama has returned eloquence to its essential place in American politics. The choice between experience and eloquence is a false one—something that Lincoln, out of office after a single term in Congress, proved in his own campaign of political and national renewal. Obama’s “mere” speeches on everything from the economy and foreign affairs to race have been at the center of his campaign and its success; if he wins, his eloquence will be central to his ability to govern.

We cannot expect one man to heal every wound, to solve every major crisis of policy. So much of the Presidency, as they say, is a matter of waking up in the morning and trying to drink from a fire hydrant. In the quiet of the Oval Office, the noise of immediate demands can be deafening. And yet Obama has precisely the temperament to shut out the noise when necessary and concentrate on the essential. The election of Obama—a man of mixed ethnicity, at once comfortable in the world and utterly representative of twenty-first-century America—would, at a stroke, reverse our country’s image abroad and refresh its spirit at home. His ascendance to the Presidency would be a symbolic culmination of the civil- and voting-rights acts of the nineteen-sixties and the century-long struggles for equality that preceded them. It could not help but say something encouraging, even exhilarating, about the country, about its dedication to tolerance and inclusiveness, about its fidelity, after all, to the values it proclaims in its textbooks. At a moment of economic calamity, international perplexity, political failure, and battered morale, America needs both uplift and realism, both change and steadiness. It needs a leader temperamentally, intellectually, and emotionally attuned to the complexities of our troubled globe. That leader’s name is Barack Obama.

©The New Yorker, Oct 13, 2008



Rex W. Huppke

November 30, 2008

On Dec. 1, 1958, a fire consumed Our Lady of the Angels grade school on the West Side of Chicago, killing 92 children and three nuns.

A wire story from that day captured a fragment of the desperation:

"Max Stachura stood outside the burning building, begging his little boy, Mark, 9, to jump into his arms. Children were falling all about the father and he caught or stopped the fall of 12 of them. But little Mark was too frightened, or he didn't understand his father. Mark didn't jump."

Fifty years later, his mother has the day in focus and adds a missing detail.

As Mark stood at that second-floor window, fire to his back, he held a small statue in his hand and waved it proudly through the black smoke, hoping his father would notice. Mark had won the statue that day—a figure of an infant Jesus—for being first to answer a quiz question.

"I guess he was just so proud of that prize," said Mary Stachura, who was at a department store when the fire broke out. "I don't think he really understood what was happening."

Few of the children trapped in the school could have grasped the enormity of the danger they faced, and few of the panicky adults on the ground—parents and neighbors and firefighters—had time to reflect. They acted, grabbing ladders of all lengths from garages, reaching through broken windows to haul small, waterlogged bodies from the flames.

Max Stachura watched as other children pushed his son back, away from the window and into the flames. The boy was later identified by a homework sheet crumpled in his pocket.

Max rarely spoke of that day. He died of a heart attack at 52.

"He was much too young," said Mary, now 85 and living in a retirement home in Bartlett. "That fire. It changed everything."

The Our Lady of the Angels fire remains one of the worst tragedies in Chicago history, a ghastly few hours on a cold, sunny afternoon that shattered families and knocked a hopeful, growing community forever off its path.

The cause of the fire was never officially determined, and no one was held accountable. Some parents who lost a child—or children—found ways to blame each other and wound up divorced. Others sold their tidy two-flats and moved away, hastening the flight of the middle class from the city's West Side.

"It seems as though people just couldn't get far enough away," said Jill Grannan, a curator at the Chicago History Museum. "That school and that parish is one that had a lot of people. . . . There was such a boom, and then people really just had to leave.

"I don't think the community ever really came back."

Few in the neighborhood now would recall the blaze. But for parents and firefighters, journalists and now-grown schoolchildren, the memories remain etched in intricate detail.

Steve Lasker, then a photographer for The Chicago American newspaper, was driving along Grand Avenue, heading to his newsroom after an assignment in Elmwood Park. He heard a call come over a radio tuned to the police frequency: "They're jumping out the windows!"

A fire engine cut in front of him, and he quickly turned to follow. He parked and headed toward the smoke, stopping abruptly when he saw the school on Avers Avenue in flames.

"I froze for a few seconds, or maybe it was minutes, I don't know, I couldn't tell," said Lasker, now 78. "Oh my God, there's still kids in there. Mayhem was going on, and they started pulling kids out of there left and right."

From atop a firetruck, Lasker shot one of the most iconic photos of the day. It showed a helmeted firefighter, his face drawn in sorrow, carrying the wet, lifeless body of 10-year-old John Jajkowski Jr. from the building.

Lasker, then the father of a 6-month-old girl, felt his stomach churn as he watched the rescue through the lens of his camera. The cold wind froze tracks of tears on his face. Though many photos were published, 20 years would pass before he voluntarily showed them to anyone.

"I didn't want to relive it," he said. "To this day I still have dreams about that horrible scene."

He held close to his family through the years and was, perhaps, overprotective of his kids: "Tragedy hits home—everybody's home."

Grace Riley never saw the fire, but she faced its aftermath. She was 23 at the time, an emergency room nurse and a newlywed.

The first ambulance arrived without warning at St. Anne's Hospital that afternoon, carrying six boys from the 7th and 8th grades, and one 1st-grade girl. The doctors and nurses didn't know what had happened but immediately set to work, with Riley caring for the little girl.

"I was cutting her clothes off and I hear her say, 'Oh nurse, my face hurts so bad.' And I looked up, and her face was totally burned."

As more children were carted in, the acrid smell of burnt flesh became overwhelming—it sticks with Riley to this day. She helped place bodies of the dead on the floor so gurneys were available for the living.

"Ambulance by ambulance by ambulance, they just kept coming," Riley said. "It was just earth-shattering to look into a room and see all those little bodies, and to see the parents screaming, 'Where is my child? Where is my child?' "

Riley left emergency room nursing shortly after the fire. She couldn't do it anymore.

Long after wounds healed, after the bodies of the dead were honored in mass funeral services and schools across Chicago and the nation embraced new standards for fire safety, the pain lingers.

Ken Leonard was only 9, a 4th grader in Room 210. He wound up on the window ledge, too afraid to jump, too scared to realize flames were burning the backs of his legs.

A firefighter on a ladder hoisted him to safety. He spent 10 days in the hospital with second-degree burns; his two brothers were unharmed.

The three Leonard boys would all go on to serve in Vietnam. Again, they all made it out alive. Ken wound up a firefighter in Oak Lawn, rising to chief before he retired in 2001.

Throughout his career, he kept memories of the Our Lady of the Angels fire to himself, and he still struggles to speak of that day.

"When I first got in the job, I was trying to tell my co-workers the story, but I just couldn't do it," Leonard said, voice cracking. "I assumed as time went on, it would get easier. But it never does."

Some say they were able to put the tragedy behind them, though they speak in an uncertain tone of moving on. Others lament the lack of counseling in the wake of the tragedy, saying the custom of the time—to bottle up emotions and go on living—never allowed them to come to terms with their feelings.

And some still search for answers.

Robert Chiappetta, who survived the fire but lost his sister Joan Anne, has spent 15 years obsessively researching a book about what happened. Though no investigation found fault with the church, which ran the school, or with city fire inspectors, Chiappetta believes there was a widespread cover-up.

"They had created a firetrap in there," he said, surrounded by court documents at his kitchen table in Elmwood Park. "People will see this was the crime of the century."

Chiappetta's parents, after searching hospitals the night of the fire, found his sister's body near midnight in the Cook County morgue. She could be identified only by a gold chain around her neck, one her uncle had brought her from Italy.

In the weeks after the fire, after Mary and Max Stachura had buried their son, a nun from the school explained the statue Mark had been waving at his father. She gave Mary a similar one as a keepsake. Mary still has that statue. It's kept in a trunk in her apartment—like memories of that day, it's always nearby, just not in plain sight.

Sitting recently with her younger son, John, who was in a building at the school that didn't burn, Mary showed a cherished, sepia-toned class picture of Mark. She still has the shirt and tie he wore in the picture.

"I told John that when I die, bury that shirt and that tie with me," she said. "My little boy will always be with me."



By Tim Rutten

Death's shadow frequently sends literary reputation into critical eclipse.

Not so the Nobel laureate Samuel Beckett, who has seemed to rise further in our esteem with every year that has passed since his death in 1989 at age 83. Of the great Modernists, in fact, it's Beckett who continues to speak most directly and freshly to our own experience of the world—and that includes his great friend and literary mentor, James Joyce, though saying so feels curiously like apostasy.

Now we have the most surprising addition of all to the Beckett canon, "The Letters of Samuel Beckett: Volume 1: 1929-1940," the first of four projected volumes selected from the author's astonishing, astonishingly vast correspondence. This is an extraordinary work of scholarship on the part of its main editors, Martha Dow Fehsenfeld and Lois More Overbeck. So far, more than 15,000 Beckett letters have come to light; the taciturn youth who became an artist of studied silences turns out to have been an inveterate letter writer—and, what's more, a fine one, which can't be said for many authors. Just to complicate things, Beckett had what one manuscript specialist described to the editors as "the worst handwriting of any 20th Century author" and composed his letters not only in English but also in French and German.

Finally, there are restrictions Beckett set up before his death, limiting the selection to letters bearing on his work. Thus nothing, for example, of his fraught relationship with Joyce's mad daughter, Lucia, whose unrequited affection for Beckett ultimately cast a chill over his relationship with her parents. In 1937, Beckett writes of spending 15 hours laboring over the pre-publication proofs of "Finnegans Wake" for Joyce. The perennially strapped Joyce paid Beckett just 250 French francs but threw in one of his old overcoats and five used ties. "I did not refuse," Beckett writes. "It is so much simpler to be hurt than to hurt."


Beckett also forbade the editors any commentary, and they have—perhaps—overcompensated with "contextual" notes that sometimes are helpful and, sometimes, simply become a rather creaky documentary apparatus. That's a quibble that does not rise to the level of criticism, though. What Fehsenfeld and Overbeck have produced is a revelatory triumph.

The correspondent who frequently signs himself "Sam" emerges from these letters a full human being, by turns arrogant and kindly, depressed and determined. Most of all there is a profound seriousness of purpose, a drive—despite the writer's frequent disparaging comments about his lassitude—to read seriously, listen to music and look at paintings in a serious, systematic way. Two things emerge from this process. One is a wonderful and often surprisingly convincing independence of judgment (who would have guessed that the playwright responsible for "Krapp's Last Tape" was enthralled with Jane Austen?). The other is a truly deep learning—languages, art, philosophy, literature—and contempt for pedantry.

The latter takes on a decided edge when it intersects Beckett's wickedly biting sense of humor. One of his last acts before abandoning what promised to be a dazzling academic career at Trinity was to deliver a lecture to Dublin's Modern Language Society on an avant-garde French poet and his school, both of which Beckett had invented. He particularly enjoyed the subsequent discussion in which members referred to their own familiarity with the imaginary poet and his circle.

As James Knowlson points out in his Beckett biography "Damned to Fame"—and it makes a useful companion to read alongside these letters—Beckett abandoned Trinity first of all, because he felt a corrosive contempt for its students and a distaste for his colleagues' culture: "Scholarly wit and sarcasm sounded all too often like exhibitionism, bitchiness and character assassination." Moreover, Beckett would write, "How can one write here, when every day vulgarizes one's hostility and turns anger into irritation and petulance?"

In these reactions and in Beckett's letters from the period, we glimpse the beginning of a profound transformation in which, as Knowlson says, "the arrogant, disturbed, narcissistic young man ... evolved into someone who was noted later for his extraordinary kindness, courtesy, concern, generosity and almost saintly 'good works.' " (Beckett donated his cash award for the 1969 Nobel Prize to charity and struggling writers.)

The years covered by these letters also are the ones in which Beckett would lay the intellectual and experiential foundations for the great leap into the new that his writing would make in the 1940s. Writing to his great friend, poet and art historian Tom McGreevy, in fall 1932, he says: "I'm in mourning for the integrity ... I find in Homer & Dante & Racine & sometimes Rimbaud, the integrity of the eyelids coming down before the brain knows of grit in the wind."

In a letter to the German editor and translator Axel Kaun, we catch an intimation of what is to come: "It is indeed getting more and more difficult for me to write in formal English. And more and more my language appears to me like a veil which one has to tear apart in order to get to those things [or the nothingness] lying behind it. Grammar and style! To me they seem to have become as irrelevant as a Biedermeier bathing suit or the imperturbability of a gentleman. A mask. It is to be hoped the time will come, thank God, in some circles it already has, when language is best used when most efficiently abused. ... Or is literature alone to be left behind on that old, foul road long ago abandoned by music and painting?"

Here the stage is set for what Beckett would later call "the siege in the room," the four years from 1946 to 1950, when he worked in solitude, abandoned English composition for French and stripped his work of every ornament and stylistic device to become the writer we now esteem. It marked his definitive departure from the shadow of Joyce. If those years also produced an inner chronicle of revealing correspondence, then the second volume in this series will be valuable, indeed.

'The Letters of Samuel Beckett: Volume 1: 1929-1940' edited by Martha Dow Fehsenfeld and Lois More Overbeck




Frank Rich


April 25, 2009


We don’t like our evil to be banal. Ten years after Columbine, it may only now be sinking in that the psychopathic killers were not jock-hating dorks from a “Trench Coat Mafia,” or, as ABC News maintained at the time, “part of a dark, underground national phenomenon known as the Gothic movement.” In the new best seller “Columbine,” the journalist Dave Cullen reaffirms that Dylan Klebold and Eric Harris were instead ordinary American teenagers who worked at the local pizza joint, loved their parents and were popular among their classmates.


On Tuesday, it will be five years since Americans first confronted the photographs from Abu Ghraib on “60 Minutes II.” Here, too, we want to cling to myths that quarantine the evil. If our country committed torture, surely it did so to prevent Armageddon, in a patriotic ticking-time-bomb scenario out of “24.” If anyone deserves blame, it was only those identified by President Bush as “a few American troops who dishonored our country and disregarded our values”: promiscuous, sinister-looking lowlifes like Lynndie England, Charles Graner and the other grunts who were held accountable while the top command got a pass.


We’ve learned much, much more about America and torture in the past five years. But as Mark Danner recently wrote in The New York Review of Books, for all the revelations, one essential fact remains unchanged: “By no later than the summer of 2004, the American people had before them the basic narrative of how the elected and appointed officials of their government decided to torture prisoners and how they went about it.” When the Obama administration said it declassified four new torture memos 10 days ago in part because their contents were already largely public, it was right.


Yet we still shrink from the hardest truths and the bigger picture: that torture was a premeditated policy approved at our government’s highest levels; that it was carried out in scenarios that had no resemblance to “24”; that psychologists and physicians were enlisted as collaborators in inflicting pain; and that, in the assessment of reliable sources like the F.B.I. director Robert Mueller, it did not help disrupt any terrorist attacks.


The newly released Justice Department memos, like those before them, were not written by barely schooled misfits like England and Graner. John Yoo, Steven Bradbury and Jay Bybee graduated from the likes of Harvard, Yale, Stanford, Michigan and Brigham Young. They have passed through white-shoe law firms like Covington & Burling and Sidley Austin.


Judge Bybee’s résumé tells us that he has four children and is both a Cubmaster for the Boy Scouts and a youth baseball and basketball coach. He currently occupies a tenured seat on the United States Court of Appeals. As an assistant attorney general, he was the author of the Aug. 1, 2002, memo endorsing in lengthy, prurient detail interrogation “techniques” like “facial slap (insult slap)” and “insects placed in a confinement box.”

He proposed using 10 such techniques “in some sort of escalating fashion, culminating with the waterboard, though not necessarily ending with this technique.” Waterboarding, the near-drowning favored by Pol Pot and the Spanish Inquisition, was prosecuted by the United States in war-crimes trials after World War II. But Bybee concluded that it “does not, in our view, inflict ‘severe pain or suffering.’ ”


Still, it’s not Bybee’s perverted lawyering and pornographic amorality that make his memo worthy of special attention. It merits a closer look because it actually does add something new — and, even after all we’ve heard, something shocking — to the five-year-old torture narrative. When placed in full context, it’s the kind of smoking gun that might free us from the myths and denial that prevent us from reckoning with this ugly chapter in our history.


Bybee’s memo was aimed at one particular detainee, Abu Zubaydah, who had been captured some four months earlier, in late March 2002. Zubaydah is portrayed in the memo (as he was publicly by Bush after his capture) as one of the top men in Al Qaeda. But by August this had been proven false. As Ron Suskind reported in his book “The One Percent Doctrine,” Zubaydah was identified soon after his capture as a logistics guy, who, in the words of the F.B.I.’s top-ranking Qaeda analyst at the time, Dan Coleman, served as the terrorist group’s flight booker and “greeter,” like “Joe Louis in the lobby of Caesar’s Palace.” Zubaydah “knew very little about real operations, or strategy.” He showed clinical symptoms of schizophrenia.


By the time Bybee wrote his memo, Zubaydah had been questioned by the F.B.I. and C.I.A. for months and had given what limited information he had. His most valuable contribution was to finger Khalid Shaikh Mohammed as the 9/11 mastermind. But, as Jane Mayer wrote in her book “The Dark Side,” even that contribution may have been old news: according to the 9/11 commission, the C.I.A. had already learned about Mohammed during the summer of 2001. In any event, as one of Zubaydah’s own F.B.I. questioners, Ali Soufan, wrote in a Times Op-Ed article last Thursday, traditional interrogation methods had worked. Yet Bybee’s memo purported that an “increased pressure phase” was required to force Zubaydah to talk.


As soon as Bybee gave the green light, torture followed: Zubaydah was waterboarded at least 83 times in August 2002, according to another of the newly released memos. Unsurprisingly, it appears that no significant intelligence was gained by torturing this mentally ill Qaeda functionary. So why the overkill? Bybee’s memo invoked a ticking time bomb: “There is currently a level of ‘chatter’ equal to that which preceded the September 11 attacks.”


We don’t know if there was such unusual “chatter” then, but it’s unlikely Zubaydah could have added information if there were. Perhaps some new facts may yet emerge if Dick Cheney succeeds in his unexpected and welcome crusade to declassify documents that he says will exonerate administration interrogation policies. Meanwhile, we do have evidence for an alternative explanation of what motivated Bybee to write his memo that August, thanks to the comprehensive Senate Armed Services Committee report on detainees released last week.


The report found that Maj. Paul Burney, a United States Army psychiatrist assigned to interrogations in Guantánamo Bay that summer of 2002, told Army investigators of another White House imperative: “A large part of the time we were focused on trying to establish a link between Al Qaeda and Iraq and we were not being successful.” As higher-ups got more “frustrated” at the inability to prove this connection, the major said, “there was more and more pressure to resort to measures” that might produce that intelligence.


In other words, the ticking time bomb was not another potential Qaeda attack on America but the Bush administration’s ticking timetable for selling a war in Iraq; it wanted to pressure Congress to pass a war resolution before the 2002 midterm elections. Bybee’s memo was written the week after the then-secret (and subsequently leaked) “Downing Street memo,” in which the head of British intelligence informed Tony Blair that the Bush White House was so determined to go to war in Iraq that “the intelligence and facts were being fixed around the policy.” A month after Bybee’s memo, on Sept. 8, 2002, Cheney would make his infamous appearance on “Meet the Press,” hyping both Saddam’s W.M.D.s and the “number of contacts over the years” between Al Qaeda and Iraq. If only 9/11 could somehow be pinned on Iraq, the case for war would be a slam dunk.


But there were no links between 9/11 and Iraq, and the White House knew it. Torture may have been the last hope for coercing such bogus “intelligence” from detainees who would be tempted to say anything to stop the waterboarding.

Last week Bush-Cheney defenders, true to form, dismissed the Senate Armed Services Committee report as “partisan.” But as the committee chairman, Carl Levin, told me, the report received unanimous support from its members — John McCain, Lindsey Graham and Joe Lieberman included.


Levin also emphasized the report’s accounts of military lawyers who dissented from White House doctrine — only to be disregarded. The Bush administration was “driven,” Levin said. By what? “They’d say it was to get more information. But they were desperate to find a link between Al Qaeda and Iraq.”


Five years after the Abu Ghraib revelations, we must acknowledge that our government methodically authorized torture and lied about it. But we also must contemplate the possibility that it did so not just out of a sincere, if criminally misguided, desire to “protect” us but also to promote an unnecessary and catastrophic war. Instead of saving us from “another 9/11,” torture was a tool in the campaign to falsify and exploit 9/11 so that fearful Americans would be bamboozled into a mission that had nothing to do with Al Qaeda. The lying about Iraq remains the original sin from which flows much of the Bush White House’s illegality.


Levin suggests — and I agree — that as additional fact-finding plays out, it’s time for the Justice Department to enlist a panel of two or three apolitical outsiders, perhaps retired federal judges, “to review the mass of material” we already have. The fundamental truth is there, as it long has been. The panel can recommend a legal path that will insure accountability for this wholesale betrayal of American values.


President Obama can talk all he wants about not looking back, but this grotesque past is bigger than even he is. It won’t vanish into a memory hole any more than Andersonville, World War II internment camps or My Lai. The White House, Congress and politicians of both parties should get out of the way. We don’t need another commission. We don’t need any Capitol Hill witch hunts. What we must have are fair trials that at long last uphold and reclaim our nation’s commitment to the rule of law.



by Richard Hofstadter

© 1964

It had been around a long time before the Radical Right discovered it—and its targets have ranged from “the international bankers” to Masons, Jesuits, and munitions makers.

    American politics has often been an arena for angry minds. In recent years we have seen angry minds at work mainly among extreme right-wingers, who have now demonstrated in the Goldwater movement how much political leverage can be got out of the animosities and passions of a small minority. But behind this I believe there is a style of mind that is far from new and that is not necessarily right-wing. I call it the paranoid style simply because no other word adequately evokes the sense of heated exaggeration, suspiciousness, and conspiratorial fantasy that I have in mind. In using the expression “paranoid style” I am not speaking in a clinical sense, but borrowing a clinical term for other purposes. I have neither the competence nor the desire to classify any figures of the past or present as certifiable lunatics. In fact, the idea of the paranoid style as a force in politics would have little contemporary relevance or historical value if it were applied only to men with profoundly disturbed minds. It is the use of paranoid modes of expression by more or less normal people that makes the phenomenon significant.
    Of course this term is pejorative, and it is meant to be; the paranoid style has a greater affinity for bad causes than good. But nothing really prevents a sound program or demand from being advocated in the paranoid style. Style has more to do with the way in which ideas are believed than with the truth or falsity of their content. I am interested here in getting at our political psychology through our political rhetoric. The paranoid style is an old and recurrent phenomenon in our public life which has been frequently linked with movements of suspicious discontent.

Here is Senator McCarthy, speaking in June 1951 about the parlous situation of the United States:

How can we account for our present situation unless we believe that men high in this government are concerting to deliver us to disaster? This must be the product of a great conspiracy on a scale so immense as to dwarf any previous such venture in the history of man. A conspiracy of infamy so black that, when it is finally exposed, its principals shall be forever deserving of the maledictions of all honest men.…What can be made of this unbroken series of decisions and acts contributing to the strategy of defeat? They cannot be attributed to incompetence.…The laws of probability would dictate that part of…[the] decisions would serve the country’s interest.

Now turn back fifty years to a manifesto signed in 1895 by a number of leaders of the Populist party:

As early as 1865-66 a conspiracy was entered into between the gold gamblers of Europe and America.…For nearly thirty years these conspirators have kept the people quarreling over less important matters while they have pursued with unrelenting zeal their one central purpose.…Every device of treachery, every resource of statecraft, and every artifice known to the secret cabals of the international gold ring are being used to deal a blow to the prosperity of the people and the financial and commercial independence of the country.

Next, a Texas newspaper article of 1855:

…It is a notorious fact that the Monarchs of Europe and the Pope of Rome are at this very moment plotting our destruction and threatening the extinction of our political, civil, and religious institutions. We have the best reasons for believing that corruption has found its way into our Executive Chamber, and that our Executive head is tainted with the infectious venom of Catholicism.…The Pope has recently sent his ambassador of state to this country on a secret commission, the effect of which is an extraordinary boldness of the Catholic church throughout the United States.…These minions of the Pope are boldly insulting our Senators; reprimanding our Statesmen; propagating the adulterous union of Church and State; abusing with foul calumny all governments but Catholic, and spewing out the bitterest execrations on all Protestantism. The Catholics in the United States receive from abroad more than $200,000 annually for the propagation of their creed. Add to this the vast revenues collected here.…

These quotations give the keynote of the style. In the history of the United States one finds it, for example, in the anti-Masonic movement, the nativist and anti-Catholic movement, in certain spokesmen of abolitionism who regarded the United States as being in the grip of a slaveholders’ conspiracy, in many alarmists about the Mormons, in some Greenback and Populist writers who constructed a great conspiracy of international bankers, in the exposure of a munitions makers’ conspiracy of World War I, in the popular left-wing press, in the contemporary American right wing, and on both sides of the race controversy today, among White Citizens’ Councils and Black Muslims. I do not propose to try to trace the variations of the paranoid style that can be found in all these movements, but will confine myself to a few leading episodes in our past history in which the style emerged in full and archetypal splendor.

Illuminism and Masonry

    I begin with a particularly revealing episode—the panic that broke out in some quarters at the end of the eighteenth century over the allegedly subversive activities of the Bavarian Illuminati. This panic was a part of the general reaction to the French Revolution. In the United States it was heightened by the response of certain men, mostly in New England and among the established clergy, to the rise of Jeffersonian democracy. Illuminism had been started in 1776 by Adam Weishaupt, a professor of law at the University of Ingolstadt. Its teachings today seem to be no more than another version of Enlightenment rationalism, spiced with the anticlerical atmosphere of eighteenth-century Bavaria. It was a somewhat naïve and utopian movement which aspired ultimately to bring the human race under the rules of reason. Its humanitarian rationalism appears to have acquired a fairly wide influence in Masonic lodges.
    Americans first learned of Illuminism in 1797, from a volume published in Edinburgh (later reprinted in New York) under the title, Proofs of a Conspiracy Against All the Religions and Governments of Europe, Carried on in the Secret Meetings of Free Masons, Illuminati, and Reading Societies. Its author was a well-known Scottish scientist, John Robison, who had himself been a somewhat casual adherent of Masonry in Britain, but whose imagination had been inflamed by what he considered to be the far less innocent Masonic movement on the Continent. Robison seems to have made his work as factual as he could, but when he came to estimating the moral character and the political influence of Illuminism, he made the characteristic paranoid leap into fantasy. The association, he thought, was formed “for the express purpose of rooting out all religious establishments, and overturning all the existing governments of Europe.” It had become “one great and wicked project fermenting and working all over Europe.” And to it he attributed a central role in bringing about the French Revolution. He saw it as a libertine, anti-Christian movement, given to the corruption of women, the cultivation of sensual pleasures, and the violation of property rights. Its members had plans for making a tea that caused abortion—a secret substance that “blinds or kills when spurted in the face,” and a device that sounds like a stench bomb—a “method for filling a bedchamber with pestilential vapours.”
    These notions were quick to make themselves felt in America. In May 1798, a minister of the Massachusetts Congregational establishment in Boston, Jedidiah Morse, delivered a timely sermon to the young country, which was then sharply divided between Jeffersonians and Federalists, Francophiles and Anglomen. Having read Robison, Morse was convinced that a Jacobinical plot touched off by Illuminism was afoot and that the country should be rallied to defend itself. His warnings were heeded throughout New England wherever Federalists brooded about the rising tide of religious infidelity or Jeffersonian democracy. Timothy Dwight, the president of Yale, followed Morse’s sermon with a Fourth-of-July discourse on The Duty of Americans in the Present Crisis, in which he held forth against the Antichrist in his own glowing rhetoric. Soon the pulpits of New England were ringing with denunciations of the Illuminati, as though the country were swarming with them.
    The anti-Masonic movement of the late 1820s and the 1830s took up and extended the obsession with conspiracy. At first, this movement may seem to be no more than an extension or repetition of the anti-Masonic theme sounded in the outcry against the Bavarian Illuminati. But whereas the panic of the 1790s was confined mainly to New England and linked to an ultraconservative point of view, the later anti-Masonic movement affected many parts of the northern United States, and was intimately linked with popular democracy and rural egalitarianism. Although anti-Masonry happened to be anti-Jacksonian (Jackson was a Mason), it manifested the same animus against the closure of opportunity for the common man and against aristocratic institutions that one finds in the Jacksonian crusade against the Bank of the United States.
    The anti-Masonic movement was a product not merely of natural enthusiasm but also of the vicissitudes of party politics. It was joined and used by a great many men who did not fully share its original anti-Masonic feelings. It attracted the support of several reputable statesmen who had only mild sympathy with its fundamental bias, but who as politicians could not afford to ignore it. Still, it was a folk movement of considerable power, and the rural enthusiasts who provided its real impetus believed in it wholeheartedly.
    As a secret society, Masonry was considered to be a standing conspiracy against republican government. It was held to be particularly liable to treason—for example, Aaron Burr’s famous conspiracy was alleged to have been conducted by Masons. Masonry was accused of constituting a separate system of loyalty, a separate imperium within the framework of federal and state governments, which was inconsistent with loyalty to them. Quite plausibly it was argued that the Masons had set up a jurisdiction of their own, with their own obligations and punishments, liable to enforcement even by the penalty of death. So basic was the conflict felt to be between secrecy and democracy that other, more innocent societies such as Phi Beta Kappa came under attack.
    Since Masons were pledged to come to each other’s aid under circumstances of distress, and to extend fraternal indulgence at all times, it was held that the order nullified the enforcement of regular law. Masonic constables, sheriffs, juries, and judges must all be in league with Masonic criminals and fugitives. The press was believed to have been so “muzzled” by Masonic editors and proprietors that news of Masonic malfeasance could be suppressed. At a moment when almost every alleged citadel of privilege in America was under democratic assault, Masonry was attacked as a fraternity of the privileged, closing business opportunities and nearly monopolizing political offices.
    Certain elements of truth and reality there may have been in these views of Masonry. What must be emphasized here, however, is the apocalyptic and absolutistic framework in which this hostility was commonly expressed. Anti-Masons were not content simply to say that secret societies were rather a bad idea. The author of the standard exposition of anti-Masonry declared that Freemasonry was “not only the most abominable but also the most dangerous institution that ever was imposed on man.…It may truly be said to be hell’s master piece.”

The Jesuit Threat

    Fear of a Masonic plot had hardly been quieted when the rumors arose of a Catholic plot against American values. One meets here again the same frame of mind, but a different villain. The anti-Catholic movement converged with a growing nativism, and while they were not identical, together they cut such a wide swath in American life that they were bound to embrace many moderates to whom the paranoid style, in its full glory, did not appeal. Moreover, we need not dismiss out of hand as totally parochial or mean-spirited the desire of Yankee Americans to maintain an ethnically and religiously homogeneous society nor the particular Protestant commitments to individualism and freedom that were brought into play. But the movement had a large paranoid infusion, and the most influential anti-Catholic militants certainly had a strong affinity for the paranoid style.
    Two books which appeared in 1835 described the new danger to the American way of life and may be taken as expressions of the anti-Catholic mentality. One, Foreign Conspiracies against the Liberties of the United States, was from the hand of the celebrated painter and inventor of the telegraph, S.F.B. Morse. “A conspiracy exists,” Morse proclaimed, and “its plans are already in operation…we are attacked in a vulnerable quarter which cannot be defended by our ships, our forts, or our armies.” The main source of the conspiracy Morse found in Metternich’s government: “Austria is now acting in this country. She has devised a grand scheme. She has organized a great plan for doing something here.…She has her Jesuit missionaries traveling through the land; she has supplied them with money, and has furnished a fountain for a regular supply.” Were the plot successful, Morse said, some scion of the House of Hapsburg would soon be installed as Emperor of the United States.

“It is an ascertained fact,” wrote another Protestant militant,

that Jesuits are prowling about all parts of the United States in every possible disguise, expressly to ascertain the advantageous situations and modes to disseminate Popery. A minister of the Gospel from Ohio has informed us that he discovered one carrying on his devices in his congregation; and he says that the western country swarms with them under the name of puppet show men, dancing masters, music teachers, peddlers of images and ornaments, barrel organ players, and similar practitioners.

Lyman Beecher, the elder of a famous family and the father of Harriet Beecher Stowe, wrote in the same year his Plea for the West, in which he considered the possibility that the Christian millennium might come in the American states. Everything depended, in his judgment, upon what influences dominated the great West, where the future of the country lay. There Protestantism was engaged in a life-or-death struggle with Catholicism. “Whatever we do, it must be done quickly.…” A great tide of immigration, hostile to free institutions, was sweeping in upon the country, subsidized and sent by “the potentates of Europe,” multiplying tumult and violence, filling jails, crowding poorhouses, quadrupling taxation, and sending increasing thousands of voters to “lay their inexperienced hand upon the helm of our power.”


The Paranoid Style in Action

The John Birch Society is attempting to suppress a television series about the United Nations by means of a mass letter-writing campaign to the sponsor, the Xerox Corporation. The corporation, however, intends to go ahead with the programs.…

The July issue of the John Birch Society Bulletin…said an “avalanche of mail ought to convince them of the unwisdom of their proposed action—just as United Air Lines was persuaded to back down and take the U.N. insignia off their planes.” (A United Air Lines spokesman confirmed that the U.N. emblem was removed from its planes, following “considerable public reaction against it.”)

Birch official John Rousselot said, “We hate to see a corporation of this country promote the U.N. when we know that it is an instrument of the Soviet Communist conspiracy.”

—San Francisco Chronicle, July 31, 1964


    Anti-Catholicism has always been the pornography of the Puritan. Whereas the anti-Masons had envisaged drinking bouts and had entertained themselves with sado-masochistic fantasies about the actual enforcement of grisly Masonic oaths,* the anti-Catholics invented an immense lore about libertine priests, the confessional as an opportunity for seduction, licentious convents and monasteries. Probably the most widely read contemporary book in the United States before Uncle Tom’s Cabin was a work supposedly written by one Maria Monk, entitled Awful Disclosures, which appeared in 1836. The author, who purported to have escaped from the Hotel Dieu nunnery in Montreal after five years there as novice and nun, reported her convent life in elaborate and circumstantial detail. She reported having been told by the Mother Superior that she must “obey the priests in all things”; to her “utter astonishment and horror,” she soon found what the nature of such obedience was. Infants born of convent liaisons were baptized and then killed, she said, so that they might ascend at once to heaven. Her book, hotly attacked and defended, continued to be read and believed even after her mother gave testimony that Maria had been somewhat addled ever since childhood after she had rammed a pencil into her head. Maria died in prison in 1849, after having been arrested in a brothel as a pickpocket.
    Anti-Catholicism, like anti-Masonry, mixed its fortunes with American party politics, and it became an enduring factor in American politics. The American Protective Association of the 1890s revived it with ideological variations more suitable to the times—the depression of 1893, for example, was alleged to be an international creation of the Catholics who began it by starting a run on the banks. Some spokesmen of the movement circulated a bogus encyclical attributed to Leo XIII instructing American Catholics on a certain date in 1893 to exterminate all heretics, and a great many anti-Catholics daily expected a nationwide uprising. The myth of an impending Catholic war of mutilation and extermination of heretics persisted into the twentieth century.

Why They Feel Dispossessed

    If, after our historically discontinuous examples of the paranoid style, we now take the long jump to the contemporary right wing, we find some rather important differences from the nineteenth-century movements. The spokesmen of those earlier movements felt that they stood for causes and personal types that were still in possession of their country—that they were fending off threats to a still established way of life. But the modern right wing, as Daniel Bell has put it, feels dispossessed: America has been largely taken away from them and their kind, though they are determined to try to repossess it and to prevent the final destructive act of subversion. The old American virtues have already been eaten away by cosmopolitans and intellectuals; the old competitive capitalism has been gradually undermined by socialistic and communistic schemers; the old national security and independence have been destroyed by treasonous plots, having as their most powerful agents not merely outsiders and foreigners as of old but major statesmen who are at the very centers of American power. Their predecessors had discovered conspiracies; the modern radical right finds conspiracy to be betrayal from on high.
    Important changes may also be traced to the effects of the mass media. The villains of the modern right are much more vivid than those of their paranoid predecessors, much better known to the public; the literature of the paranoid style is by the same token richer and more circumstantial in personal description and personal invective. For the vaguely delineated villains of the anti-Masons, for the obscure and disguised Jesuit agents, the little-known papal delegates of the anti-Catholics, for the shadowy international bankers of the monetary conspiracies, we may now substitute eminent public figures like Presidents Roosevelt, Truman, and Eisenhower, secretaries of State like Marshall, Acheson, and Dulles, Justices of the Supreme Court like Frankfurter and Warren, and the whole battery of lesser but still famous and vivid alleged conspirators headed by Alger Hiss.
    Events since 1939 have given the contemporary right-wing paranoid a vast theatre for his imagination, full of rich and proliferating detail, replete with realistic cues and undeniable proofs of the validity of his suspicions. The theatre of action is now the entire world, and he can draw not only on the events of World War II, but also on those of the Korean War and the Cold War. Any historian of warfare knows it is in good part a comedy of errors and a museum of incompetence; but if for every error and every act of incompetence one can substitute an act of treason, many points of fascinating interpretation are open to the paranoid imagination. In the end, the real mystery, for one who reads the primary works of paranoid scholarship, is not how the United States has been brought to its present dangerous position but how it has managed to survive at all.
    The basic elements of contemporary right-wing thought can be reduced to three: First, there has been the now-familiar sustained conspiracy, running over more than a generation, and reaching its climax in Roosevelt’s New Deal, to undermine free capitalism, to bring the economy under the direction of the federal government, and to pave the way for socialism or communism. A great many right-wingers would agree with Frank Chodorov, the author of The Income Tax: The Root of All Evil, that this campaign began with the passage of the income-tax amendment to the Constitution in 1913.
    The second contention is that top government officialdom has been so infiltrated by Communists that American policy, at least since the days leading up to Pearl Harbor, has been dominated by men who were shrewdly and consistently selling out American national interests.
    Finally, the country is infused with a network of Communist agents, just as in the old days it was infiltrated by Jesuit agents, so that the whole apparatus of education, religion, the press, and the mass media is engaged in a common effort to paralyze the resistance of loyal Americans.
    Perhaps the most representative document of the McCarthyist phase was a long indictment of Secretary of State George C. Marshall, delivered in 1951 in the Senate by Senator McCarthy, and later published in a somewhat different form. McCarthy pictured Marshall as the focal figure in a betrayal of American interests stretching in time from the strategic plans for World War II to the formulation of the Marshall Plan. Marshall was associated with practically every American failure or defeat, McCarthy insisted, and none of this was either accident or incompetence. There was a “baffling pattern” of Marshall’s interventions in the war, which always conduced to the well-being of the Kremlin. The sharp decline in America’s relative strength from 1945 to 1951 did not “just happen”; it was “brought about, step by step, by will and intention,” the consequence not of mistakes but of a treasonous conspiracy, “a conspiracy on a scale so immense as to dwarf any previous such venture in the history of man.”
    Today, the mantle of McCarthy has fallen on a retired candy manufacturer, Robert H. Welch, Jr., who is less strategically placed and has a much smaller but better organized following than the Senator. A few years ago Welch proclaimed that “Communist influences are now in almost complete control of our government”—note the care and scrupulousness of that “almost.” He has offered a full-scale interpretation of our recent history in which Communists figure at every turn: They started a run on American banks in 1933 that forced their closure; they contrived the recognition of the Soviet Union by the United States in the same year, just in time to save the Soviets from economic collapse; they have stirred up the fuss over segregation in the South; they have taken over the Supreme Court and made it “one of the most important agencies of Communism.”
    Close attention to history wins for Mr. Welch an insight into affairs that is given to few of us. “For many reasons and after a lot of study,” he wrote some years ago, “I personally believe [John Foster] Dulles to be a Communist agent.” The job of Professor Arthur F. Burns as head of Eisenhower’s Council of Economic Advisors was “merely a cover-up for Burns’s liaison work between Eisenhower and some of his Communist bosses.” Eisenhower’s brother Milton was “actually [his] superior and boss within the Communist party.” As for Eisenhower himself, Welch characterized him, in words that have made the candy manufacturer famous, as “a dedicated, conscious agent of the Communist conspiracy”—a conclusion, he added, “based on an accumulation of detailed evidence so extensive and so palpable that it seems to put this conviction beyond any reasonable doubt.”

Emulating the Enemy

    The paranoid spokesman sees the fate of conspiracy in apocalyptic terms—he traffics in the birth and death of whole worlds, whole political orders, whole systems of human values. He is always manning the barricades of civilization. He constantly lives at a turning point. Like religious millennialists he expresses the anxiety of those who are living through the last days and he is sometimes disposed to set a date for the apocalypse. (“Time is running out,” said Welch in 1951. “Evidence is piling up on many sides and from many sources that October 1952 is the fatal month when Stalin will attack.”)
    As a member of the avant-garde who is capable of perceiving the conspiracy before it is fully obvious to an as yet unaroused public, the paranoid is a militant leader. He does not see social conflict as something to be mediated and compromised, in the manner of the working politician. Since what is at stake is always a conflict between absolute good and absolute evil, what is necessary is not compromise but the will to fight things out to a finish. Since the enemy is thought of as being totally evil and totally unappeasable, he must be totally eliminated—if not from the world, at least from the theatre of operations to which the paranoid directs his attention. This demand for total triumph leads to the formulation of hopelessly unrealistic goals, and since these goals are not even remotely attainable, failure constantly heightens the paranoid’s sense of frustration. Even partial success leaves him with the same feeling of powerlessness with which he began, and this in turn only strengthens his awareness of the vast and terrifying quality of the enemy he opposes.
    The enemy is clearly delineated: he is a perfect model of malice, a kind of amoral superman—sinister, ubiquitous, powerful, cruel, sensual, luxury-loving. Unlike the rest of us, the enemy is not caught in the toils of the vast mechanism of history, himself a victim of his past, his desires, his limitations. He wills, indeed he manufactures, the mechanism of history, or tries to deflect the normal course of history in an evil way. He makes crises, starts runs on banks, causes depressions, manufactures disasters, and then enjoys and profits from the misery he has produced. The paranoid’s interpretation of history is distinctly personal: decisive events are not taken as part of the stream of history, but as the consequences of someone’s will. Very often the enemy is held to possess some especially effective source of power: he controls the press; he has unlimited funds; he has a new secret for influencing the mind (brainwashing); he has a special technique for seduction (the Catholic confessional).
    It is hard to resist the conclusion that this enemy is on many counts the projection of the self; both the ideal and the unacceptable aspects of the self are attributed to him. The enemy may be the cosmopolitan intellectual, but the paranoid will outdo him in the apparatus of scholarship, even of pedantry. Secret organizations set up to combat secret organizations give the same flattery. The Ku Klux Klan imitated Catholicism to the point of donning priestly vestments, developing an elaborate ritual and an equally elaborate hierarchy. The John Birch Society emulates Communist cells and quasi-secret operation through “front” groups, and preaches a ruthless prosecution of the ideological war along lines very similar to those it finds in the Communist enemy.* Spokesmen of the various fundamentalist anti-Communist “crusades” openly express their admiration for the dedication and discipline the Communist cause calls forth.
    On the other hand, the sexual freedom often attributed to the enemy, his lack of moral inhibition, his possession of especially effective techniques for fulfilling his desires, give exponents of the paranoid style an opportunity to project and express unacknowledgeable aspects of their own psychological concerns. Catholics and Mormons—later, Negroes and Jews—have lent themselves to a preoccupation with illicit sex. Very often the fantasies of true believers reveal strong sadomasochistic outlets, vividly expressed, for example, in the delight of anti-Masons with the cruelty of Masonic punishments.

Renegades and Pedants

    A special significance attaches to the figure of the renegade from the enemy cause. The anti-Masonic movement seemed at times to be the creation of ex-Masons; certainly the highest significance was attributed to their revelations, and every word they said was believed. Anti-Catholicism used the runaway nun and the apostate priest; the place of ex-Communists in the avant-garde anti-Communist movements of our time is well known. In some part, the special authority accorded the renegade derives from the obsession with secrecy so characteristic of such movements: the renegade is the man or woman who has been in the Arcanum, and brings forth with him or her the final verification of suspicions which might otherwise have been doubted by a skeptical world. But I think there is a deeper eschatological significance that attaches to the person of the renegade: in the spiritual wrestling match between good and evil which is the paranoid’s archetypal model of the world, the renegade is living proof that all the conversions are not made by the wrong side. He brings with him the promise of redemption and victory.
    A final characteristic of the paranoid style is related to the quality of its pedantry. One of the impressive things about paranoid literature is the contrast between its fantasied conclusions and the almost touching concern with factuality it invariably shows. It produces heroic strivings for evidence to prove that the unbelievable is the only thing that can be believed. Of course, there are highbrow, lowbrow, and middlebrow paranoids, as there are likely to be in any political tendency. But respectable paranoid literature not only starts from certain moral commitments that can indeed be justified but also carefully and all but obsessively accumulates “evidence.” The difference between this “evidence” and that commonly employed by others is that it seems less a means of entering into normal political controversy than a means of warding off the profane intrusion of the secular political world. The paranoid seems to have little expectation of actually convincing a hostile world, but he can accumulate evidence in order to protect his cherished convictions from it.
    Paranoid writing begins with certain broad defensible judgments. There was something to be said for the anti-Masons. After all, a secret society composed of influential men bound by special obligations could conceivably pose some kind of threat to the civil order in which they were suspended. There was also something to be said for the Protestant principles of individuality and freedom, as well as for the nativist desire to develop in North America a homogeneous civilization. Again, in our time an actual laxity in security allowed some Communists to find a place in governmental circles, and innumerable decisions of World War II and the Cold War could be faulted.
    The higher paranoid scholarship is nothing if not coherent—in fact the paranoid mind is far more coherent than the real world. It is nothing if not scholarly in technique. McCarthy’s 96-page pamphlet, McCarthyism, contains no less than 313 footnote references, and Mr. Welch’s incredible assault on Eisenhower, The Politician, has one hundred pages of bibliography and notes. The entire right-wing movement of our time is a parade of experts, study groups, monographs, footnotes, and bibliographies. Sometimes the right-wing striving for scholarly depth and an inclusive world view has startling consequences: Mr. Welch, for example, has charged that the popularity of Arnold Toynbee’s historical work is the consequence of a plot on the part of Fabians, “Labour party bosses in England,” and various members of the Anglo-American “liberal establishment” to overshadow the much more truthful and illuminating work of Oswald Spengler.

The Double Sufferer

    The paranoid style is not confined to our own country and time; it is an international phenomenon. Studying the millennial sects of Europe from the eleventh to the sixteenth century, Norman Cohn believed he found a persistent psychic complex that corresponds broadly with what I have been considering—a style made up of certain preoccupations and fantasies: “the megalomaniac view of oneself as the Elect, wholly good, abominably persecuted, yet assured of ultimate triumph; the attribution of gigantic and demonic powers to the adversary; the refusal to accept the ineluctable limitations and imperfections of human existence, such as transience, dissension, conflict, fallibility whether intellectual or moral; the obsession with inerrable prophecies…systematized misinterpretations, always gross and often grotesque.”
    This glimpse across a long span of time emboldens me to make the conjecture—it is no more than that—that a mentality disposed to see the world in this way may be a persistent psychic phenomenon, more or less constantly affecting a modest minority of the population. But certain religious traditions, certain social structures and national inheritances, certain historical catastrophes or frustrations may be conducive to the release of such psychic energies, and to situations in which they can more readily be built into mass movements or political parties. In American experience ethnic and religious conflict have plainly been a major focus for militant and suspicious minds of this sort, but class conflicts also can mobilize such energies. Perhaps the central situation conducive to the diffusion of the paranoid tendency is a confrontation of opposed interests which are (or are felt to be) totally irreconcilable, and thus by nature not susceptible to the normal political processes of bargain and compromise. The situation becomes worse when the representatives of a particular social interest—perhaps because of the very unrealistic and unrealizable nature of its demands—are shut out of the political process. Having no access to political bargaining or the making of decisions, they find their original conception that the world of power is sinister and malicious fully confirmed. They see only the consequences of power—and this through distorting lenses—and have no chance to observe its actual machinery. A distinguished historian has said that one of the most valuable things about history is that it teaches us how things do not happen. It is precisely this kind of awareness that the paranoid fails to develop. He has a special resistance of his own, of course, to developing such awareness, but circumstances often deprive him of exposure to events that might enlighten him—and in any case he resists enlightenment.
    We are all sufferers from history, but the paranoid is a double sufferer, since he is afflicted not only by the real world, with the rest of us, but by his fantasies as well.

Richard Hofstadter is DeWitt Clinton Professor of American History at Columbia University. His latest book, “Anti-intellectualism in American Life,” was awarded the Pulitzer Prize for General Nonfiction earlier this year. This essay is adapted from the Herbert Spencer Lecture delivered at Oxford University in November 1963.

* Many anti-Masons had been fascinated by the penalties involved if Masons failed to live up to their obligations. My own favorite is the oath attributed to a royal archmason who invited “having my skull smote off and my brains exposed to the scorching rays of the sun.”

* In his recent book, How to Win an Election, Stephen C. Shadegg cites a statement attributed to Mao Tse-tung: “Give me just two or three men in a village and I will take the village.” Shadegg comments: “In the Goldwater campaigns of 1952 and 1958 and in all other campaigns where I have served as consultant I have followed the advice of Mao Tse-tung.” “I would suggest,” writes Senator Goldwater in Why Not Victory?, “that we analyze and copy the strategy of the enemy; theirs has worked and ours has not.”




Steve Chapman


The 19th century American writer Henry Adams said the descent of American presidents from George Washington to Ulysses S. Grant was enough to discredit the theory of evolution. The same could be said of the pantheon of conservative political heroes, which in the last half-century has gone from Barry Goldwater and Ronald Reagan to Sarah Palin. That refutation may be agreeable to Palin, who doesn't put much stock in Darwin anyway.

You can confirm all this by looking at what the three wrote. Goldwater, the 1964 Republican presidential nominee, made his reputation four years earlier with an eloquent and intellectually coherent volume, "The Conscience of a Conservative," which laid out a blueprint for the policies he favored.

Reagan likewise made the thinking person's case for conservatism. Between 1975 and 1979, after he had finished two terms as governor of California, he did some 1,000 radio commentaries, most of which he wrote himself. They were later collected in "Reagan, In His Own Hand," which provides the texts of his handwritten manuscripts and proves that, far from being the "amiable dunce" of liberal mythology, he thought hard and clearly about the issues of his time.

Palin? Her new memoir, "Going Rogue," fills up 413 pages, but it has less policy heft than a student council speech. Where Reagan dived into the murk of arms control and Goldwater fathomed federal farm programs, Palin skims over the surface of a puddle.

Amid all the tales of savoring the aromas at the state fair and having her wardrobe vetted by snotty campaign staffers, she sets aside space to lay out her vision of what it means to be a "Commonsense Conservative." It takes up all of 11 pages and leans heavily on prefabricated lines like "I am a conservative because I deal with the world as it is" and "If you want real job growth, cut capital gains taxes."

But the priorities of "Going Rogue" are striking poses and attitudes, not making actual arguments about the proper role of government. The book is meant to create an image, or maybe a brand -- folksy but shrewd, tough but feminine, noble but beset by weaklings and traitors, ever-smiling unless you awaken her inner "Mama Grizzly Bear" by scrutinizing her loved ones. No one could be more pleased with her than she is with herself. Reading the book is like watching Palin preen in front of a mirror for hours as she tirelessly compliments herself for courage, gumption, devotion to family and maverick independence.

Who needs policy? In her world -- and the world of legions of conservatives who revere her -- the persona is the policy. Palin is beloved because she's (supposedly) just like ordinary people, which (supposedly) gives her a profound understanding of their needs.

That attitude used to be associated with the left, which claimed to speak for the ordinary folks who get shafted by the system. Logic and evidence about policy, to many liberals, were less important than empathy and good intentions. Now it's conservatives who think we should be guided by our guts, not our brains.

Palin is the embodiment of this approach, never imagining that knowledge and reflection might be of more value than instinct. When Oprah asked if she had felt any doubts about her readiness to be vice president -- which requires the readiness to be president -- Palin replied breezily, "No, no -- I didn't blink. I felt quite confident in my abilities, in my executive experience, knowing that this is an executive administrative job." (The audience tittered.)

Contrast that with Reagan, who after learning of his victory on election night 1980 told his supporters, "There's never been a more humbling moment in my life." Palin doesn't do humble.

You could almost forget that for well over a year, Republicans have ridiculed Barack Obama as lighter than a souffle, an inexperienced upstart who owes everything to arrogant presumption and a carefully crafted image. But Obama wrote a 375-page book, "The Audacity of Hope," that shows a solid, and occasionally tedious, grasp of issues.

It is hard to imagine Palin (as opposed to a ghostwriter) producing anything comparable. Almost as hard as it is to imagine that modern conservatives would expect it.

Leaders who can think? That's so 20th century.



Steve Johnson

Dec 21, 2009 Chicago Tribune -- It seems inevitable and permanent now, as much a fixture in the American mind as McDonald's or Time magazine.

But YouTube, it is easy to forget, did not exist when the current decade opened.

It didn't exist in 2001 or 2002. There was no YouTube in 2003 or 2004, either.

Not until "Me at the zoo," a video of co-founder Jawed Karim standing in front of elephants at the San Diego Zoo, was posted in April 2005, was there, really, a YouTube.

Yet despite being around for fewer than half of the last 10 years, the video-sharing service is the decade's most influential popular-culture force on the Internet.

From Karim talking about the length of the elephants' trunks in the still-available 19-second clip, it has spearheaded the widespread availability of video on the Web, everything from golf's Masters tournament, live, to brand-new episodes of popular sitcoms such as "30 Rock" in the same week they aired on TV. These developments, of course, threaten traditional and long-standing delivery systems.

YouTube became the clearinghouse for the short, shared, "viral" videos that were key to making Internet culture into mainstream culture, and started to play a role in politics, especially in the 2008 presidential campaign.

It developed as a kind of chaotic library, a go-to reference resource for people seeking video of musical artists, old cigarette commercials or the latest news sensation.

And it has championed the decade's DIY aesthetic: Skip the professionals, was YouTube's implicit message. Shoot your own video. Upload it here, fast and easy. And in the end, it doesn't matter so much if your backyard trampoline-stunt footage (ouch!) isn't great art; what matters is the validation it seems to get by being hosted on an external site.

With YouTube, if you wanted your friends to watch what you made, you didn't have to drag them into your living room and plug the camcorder into the TV. You just sent them a link, and they watched it at the same Web site that also has professional material by TV stars. The site echoed similar revolutions happening in writing, as blogs came to prominence, and in photography, where people shared photos on sites including Flickr. But with YouTube, it was even more so, because the bar to getting videos shown in public had been higher.

Professional creators of content tried to fight YouTube for a while, policing their copyrights zealously and seeking takedowns whenever possible. But eventually, they decided they'd rather switch than fight. Deals were struck, and the providers who didn't form their own YouTube channels to show highlights (as CBS, for one, does) offered the equivalent of YouTube clips and much more on their own sites or on professional aggregators, led by iTunes, from Apple, by Netflix, the DVD-by-mail service increasingly serving films as video streams, and by Hulu, a project of General Electric Co. (NBC), News Corp. (Fox) and, later, Disney (ABC).

YouTube created the expectation among consumers that video would be available online, on-demand, freed of the boundaries of network schedule or DVD.

It got so big, so fast that Google was moved to buy the service for $1.65 billion in late 2006, an admission that Google's own stab at a video-upload site, Google Video, had lost.

It was quite a climb for the service that began with a founder at the zoo. Karim, Steve Chen and Chad Hurley had met while working at PayPal, the Web-based money-transfer service. Chen was a graduate of the Illinois Mathematics and Science Academy, in the western suburb of Aurora, and, like Karim, had studied computer science at the University of Illinois.

But even as YouTube has become a ubiquitous brand, virtually the synonym for Web-based video, it hasn't yet proved that it can translate its traffic -- it is ranked among the top 5 Web sites -- into revenue. The site has struggled to integrate advertising in a manner that won't alienate customers, who value it for instant accessibility and the lack of clutter.

And indeed, Hulu, which might be termed a professional version of YouTube, has announced that it will, next year, begin charging its users.

But the battle of getting people to pay for content on the Web -- or of getting content to pay for itself via ads -- is a, and possibly the, question for the next decade.


Death: Bad?


Published: February 12, 2009

To be “philosophical” about something, in common parlance, is to face it calmly, without irrational anxiety. And the paradigm of a thing to be philosophical about is death. Here Socrates is held to be the model. Sentenced to die by an Athenian court on the charge of impiety, he serenely drank the fatal cup of hemlock. Death, he told his friends, might be annihilation, in which case it is like a long, dreamless slumber; or it might be a migration of the soul from one place to another. Either way, it is nothing to be feared.


Cicero said that to philosophize is to learn how to die — a pithy statement, but a misleading one. There is more to philosophizing than that. Broadly speaking, philosophy has three concerns: how the world hangs together, how our beliefs can be justified, and how to live. Arguably, learning how to die fits under the third of these. If you wanted to get rhetorically elastic about it, you might even say that by learning how to die we learn how to live.

That thought is more or less the inspiration behind Simon Critchley’s Book of Dead Philosophers (Vintage, paper, $15.95). What defines bourgeois life in the West today is our pervasive dread of death — so claims Critchley, a philosophy professor at the New School in New York. (He wrote this book, he tells us more than once, on a hill overlooking Los Angeles — which, because of “its peculiar terror of annihilation,” is “surely a candidate city for the world capital of death.”) As long as we are afraid of death, Critchley thinks, we cannot really be happy. And one way to overcome this fear is by looking to the example of philosophers. “I want to defend the ideal of the philosophical death,” Critchley writes.

So he takes us on a breezy and often entertaining tour through the history of philosophy, looking at how 190 or so philosophers from ancient times to the present lived and died. Not all of the deaths recounted are as edifying as Socrates’. Plato, for example, may have died of a lice infestation. The Enlightenment thinker La Mettrie seemed to have expired after eating a quantity of truffle pâté. Several deaths are precipitated by collisions: Montaigne’s brother was killed by a tennis ball; Rousseau died of cerebral bleeding, possibly as a result of being knocked down by a galloping Great Dane; and Roland Barthes was blindsided by a dry-cleaning truck. The American pragmatist John Dewey, who lived into his 90s, came to the most banal end of all: he broke his hip and then succumbed to pneumonia.

Critchley has a mischievous sense of humor, and he certainly does not shrink from the embodied nature of his subjects. There is arch merrymaking over beans (Pythagoras and Empedocles proscribed them) and flatulence (Metrocles became suicidally distraught over a bean-related gaseous indiscretion during a lecture rehearsal). We are told of Marx’s genital carbuncles, Nietzsche’s syphilitic coprophagy and Freud’s cancerous cheek growth, so malodorous that it repelled his favorite dog, a chow. There are Woody Allenish moments, as when the moribund Democritus “ordered many hot loaves of bread to be brought to his house. By applying these to his nostrils he somehow managed to postpone his death.” And there are last words, the best of which belong to Heinrich Heine: “God will pardon me. It’s his métier.”

How are we to cultivate the wisdom necessary to confront death? It’s hard to find a consistent message here. Montaigne trained for the end by keeping death “continually present, not merely in my imagination, but in my mouth.” Spinoza went to the contrary extreme, declaring, “A free man thinks least of all of death.” Dying philosophically means dying cheerfully — that is what one would presume from the examples cited in this book. The beau ideal is David Hume, who, when asked whether the thought of annihilation terrified him, calmly replied, “Not the least.”

The idea that death is not such a bad thing may be liberating, but is it true? Ancient philosophers tended to think so, and Critchley (along with Hume) finds their attitude congenial. He writes, “The philosopher looks death in the face and has the strength to say that it is nothing.”

There are three classic arguments, all derived from Epicurus and his follower Lucretius, that it is irrational to fear death. If death is annihilation, the first one goes, then there are no nasty post-death experiences to worry about. As Epicurus put it, where death is, I am not; where I am, death is not. The second says it does not matter whether you die young or old, for in either case you’ll be dead for an eternity. The third points out that your nonexistence after your death is merely the mirror image of your nonexistence before your birth. Why should you be any more disturbed by the one than by the other? These arguments are invoked in Critchley’s book, but their logic goes unexamined. Unfortunately, all three are pretty lousy. The American philosopher Thomas Nagel, in his 1970 essay “Death,” showed what was wrong with the first. Just because you don’t experience something as nasty, or indeed experience it at all, doesn’t mean it’s not bad for you. Suppose, Nagel says, an intelligent person has a brain injury that reduces him to the mental condition of a contented baby. Certainly this would be a grave misfortune for the person. Then is not the same true for death, where the loss is still more severe?

The second argument is just as poor. It implies that John Keats’s demise at 25 was no more unfortunate than Tolstoy’s at 82, since both will be dead for an eternity anyway. The odd thing about this argument, as the (dead) English philosopher Bernard Williams noticed, is that it contradicts the first one. True, the amount of time you’re around to enjoy the goods of life doesn’t mathematically reduce the eternity of your death. But the amount of time you’re dead matters only if there’s something undesirable about being dead.

The third argument, that your posthumous nonexistence is no more to be feared than your prenatal nonexistence, also fails. As Nagel observed, there is an important asymmetry between the two abysses that temporally flank your life. The time after you die is time of which your death deprives you. You might have lived longer. But you could not possibly have existed in the time before your birth. Had you been conceived earlier than you actually were, you would have had a different genetic identity. In other words, you would not be you.

Cultivating indifference to death is not only philosophically unsound. It can be morally dangerous. If my own death is nothing, then why get worked up over the deaths of others? The barrenness of the Epicurean attitude — enjoy life from moment to moment and don’t worry about death — is epitomized by George Santayana, one of Critchley’s exemplary dead philosophers. After resigning from Harvard, Santayana lived in Rome, where he was discovered by American soldiers after the liberation of Italy in 1944. Asked his opinion of the war by a journalist from Life magazine, Santayana fatuously replied, “I know nothing; I live in the Eternal.”

Contrast the example of Miguel de Unamuno, a 20th-century Spaniard inexplicably omitted by Critchley. No one had a greater terror of death than Unamuno, who wrote that “as a child, I remained unmoved when shown the most moving pictures of hell, for even then nothing appeared to me quite so horrible as nothingness itself.” In 1936, at the risk of being lynched by a Falangist mob, Unamuno publicly faced down the pro-Franco thug Millán Astray. Placed under house arrest, Unamuno died 10 weeks later. Aptly, the Falangist battle cry Unamuno found most repellent was “Viva la Muerte!” — long live death.



Stephen Downes

Why is the Republican Party now represented by red, when conservative parties in all other places -- and even the United States, in the past -- were represented with blue? Ben Zimmer suggests (via The Language Log) "Democrats may have wanted to appropriate the positive connotations of blue (as in true-blue)" but I wonder whether it isn't deeper than that. Because I recall over the years studies saying that teams that wear red win more frequently. Perhaps Republicans have deliberately chosen red in order to generate a subconscious association of themselves as winners.

That's speculation, but the association between political advertising and psychological preference is not. A recent Fast Company post describes the use of what they call "political neuromarketing" during the campaign. It's not really neuromarketing, as it has nothing to do with neural connections. Rather, they "measure everything including the story line, level of the language, images, music. Using critical point analysis, [they] identify specifics that may drive voters away or attract them. The techniques are non-invasive, and include measuring muscle, skin and pupil response."

The success of such techniques obviously has its implications in political theory, but is also relevant in learning theory. The general principle that "the brain reveals more than spoken answers to questions" tells us that knowledge, beliefs, and other mental states are much more fine-grained than our more traditional analyses suggest. Understanding that learning -- and persuasion -- is not simply "words in -- words out" is the first step toward developing a more comprehensive theory of cognition and a more effective understanding of learning and instruction.

A recent paper from a group of leading neuroscientists outlines the understanding of learning beginning to take form. The survey paper brings together the results of dozens of studies of learning and cognition. The authors write, "Neuroscientists are beginning to understand the brain mechanisms underlying learning and how shared brain systems for perception and action support social learning." In some cases, this understanding is very detailed, such as our understanding of the function of layers of neurons in the visual cortex. In other cases, our understanding is beginning to cover a broad range of psychological phenomena, such as those involved in language learning.

It is tempting to use the analogy of a computer in an effort to understand human learning. That's why we see sentences like "the brain is a machine with limited resources for processing the enormous quantity of information received by the senses." But we should not even be talking about learning in such terms. As neuro-linguists will tell you "the brain does not store precise memories in specific locations. Instead, the brain reaches decisions through the dynamic interaction of diverse areas operating in functional neural circuits." The way we store, process, and represent information in the mind is completely different from the way it is done in a computer.

This is important because it tells us that learning is not simply, or even primarily, a process of decoding linguistic expressions. We can arrive at reasonable sounding generalizations about reading as decoding -- that we need to know that letters represent sounds, say, or that words have meanings -- but these generalizations do not lead us toward an understanding of language learning, they lead us away from it, as they are based on the supposition that cognition consists of word-like and meaning-like structures, which are applied to sounds and symbols, and refer to states of affairs in the world. But this just isn't so.

What we are in fact responding to as learners, especially at a young age, are patterns of perception presented to us from the environment. Children use frequency distributions, covariation and transitional probabilities to associate spoken words with phenomena. Learning, especially in the young, is imitative rather than analytical. Goals and objectives are inferred from patterns of related phenomena, not a propositional awareness of another's mental state. Phenomena are not experienced and understood in isolation, but in context and mediated by environment, social interaction, and previous experience.
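The transitional-probability mechanism mentioned above can be made concrete with a small sketch. The following toy example is invented for illustration (it is not from the article): it builds a continuous syllable stream from a made-up three-word vocabulary, in the style of infant word-segmentation experiments, and shows that the probability of one syllable following another is high inside a word and dips at word boundaries — exactly the kind of statistical cue a learner could use to find words without any explicit instruction.

```python
from collections import Counter

# Toy artificial language (invented for this example): three "words",
# each made of two-letter syllables, concatenated with no pauses.
words = ["tupiro", "golabu", "bidaku"]

def syllables(word):
    # Split a word into its two-letter syllables.
    return [word[i:i + 2] for i in range(0, len(word), 2)]

# Build a continuous syllable stream by repeating a fixed word order,
# chosen so that word-final syllables are followed by varying words.
sequence = ["tupiro", "golabu", "tupiro", "bidaku", "golabu", "bidaku"] * 15
stream = [s for w in sequence for s in syllables(w)]

# Count syllable pairs, and occurrences of each syllable as the first
# member of a pair (the final syllable has no successor).
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    # P(b next | a occurred) = freq(a followed by b) / freq(a)
    return pair_counts[(a, b)] / first_counts[a]

# Within a word, each syllable predicts the next one perfectly...
print(transitional_probability("tu", "pi"))  # 1.0
# ...but across a word boundary the next syllable varies, so the
# transitional probability dips; a learner can posit boundaries there.
print(transitional_probability("ro", "go"))  # 0.5
```

Nothing in this sketch "knows" what a word is; the boundaries fall out of the frequency statistics alone, which is the point of the paragraph above.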

It's a bit of an overgeneralization, but we can get at many of the issues here by distinguishing between two kinds of knowledge: one that is personal, internal to ourselves, and is, shall we say, 'knowledge-in-the-brain', and the other that is public or social, external to ourselves, and is, shall we say, 'knowledge-in-the-world'. Of course there are more than just two kinds of knowledge, but that is a discussion that can wait until later. The point here is to establish that there is more than one type of knowledge; if we can establish that, the rest can follow.

The distinction of these two types of knowledge refers to the nature of the knowledge itself, not the reality that the knowledge (putatively) describes. It is tempting to say that what we have here are two distinct representational systems, and if that works for you that's fine. But I believe the knowledge itself is the representational system, and so if we have two distinct representational systems, we have two kinds of knowledge. But let's not bog down on issues of ontology and metaphysics.

These two types of knowledge are well-established in science and philosophy. One of the more well-recognized versions of this distinction is articulated by Michael Polanyi (and echoed by knowledge management specialists everywhere). Public, social or external knowledge is what we might call 'explicit' knowledge, while Polanyi called the personal or internal type of knowledge "tacit" knowledge. This distinction has been characterized several ways. One way is to describe tacit knowledge as 'knowing how' while explicit knowledge is 'knowing that'. Another way is to distinguish between knowledge we can express and knowledge we cannot express. Tacit knowledge, argues Polanyi, is ineffable. It cannot be described. "We can know more than we can tell."

What's important about this distinction is that it creates a pretty clear dividing line between what we learn and what we express. The one is very different from the other. Expressions of knowledge are essentially the production of social artifacts -- what some would call "stigmergy" -- in order to coordinate activities with other people in the world. The results of this coordination constitute the rules of grammar, the laws of nature, etc., "the patterns of categories contain theories, methods, feelings, values and skills which can be used in a fashion that the tradition judges valid." These are phenomena, which can be learned, but the knowledge they express is expressed externally to the self.

What we learn, even when we learn from texts and documents, is distinct from the knowledge expressed in the texts themselves. Polanyi writes, "when I receive information by reading a letter and when I ponder the message of the letter I am subsidiarily aware not only of its text, but also of all the past occasions by which I have come to understand the words of the text, and the whole range of this subsidiary awareness is presented focally in terms of the message. This message or meaning on which attention is now focused is not something tangible; it is the conception evoked by the text." The text says one thing, but when we read, we think of (and learn about) whatever is (in ourselves) evoked by the text.

When we learn, we do not merely assimilate; we do not simply undertake a mechanical process of decoding meaning from printed or spoken text. "Our knowledge of the things denoted by words will have been largely acquired by experience in the same way as animals come to know things, while the words will have acquired their meaning by previously designating such experience, either when uttered by others in our presence or when used by ourselves." This knowledge is not merely subsymbolic, it is distinct from the knowledge contained in the symbols. A doctor's knowledge of medicine is distinct from his or her knowledge of the words describing medicine. "While the correct use of medical terms cannot be achieved in itself, without the knowledge of medicine, a great deal of medicine can be remembered even after one has forgotten the use of medical terms."

Tacit knowledge is learned using the visual cortex, cerebral cortex, and the rest of the neural network that constitutes our brain and nervous system. Knowledge, seen from this perspective, is not words and sentences or even pictures and icons, but sets of connections, layered over and over on each other, a fine mesh, a deep tapestry incredibly richer and more complex than any abstraction such as spoken language could express. As Nonaka and von Krogh summarize, "tacit knowledge is acquired with little or no direct instruction, it is procedural, and above all, practically useful." And while "locked away in people's neural networks," tacit knowledge expresses itself in our actions, our responses, and our expressions.

As Ryle said, "[T]o believe that the ice is thin is to be unhesitant in telling oneself and others that it is thin, in acquiescing in other people's assertions to that effect, in objecting to statements to the contrary, in drawing consequences from the original proposition and so forth. But it is also to be prone to skate warily, to shudder, to dwell in imagination on possible disasters, and to warn other skaters. It is not only a propensity to make certain theoretical moves, but to make certain executive and imaginative moves, as well as to have certain feelings."

This is an ability we share with animals, including some (like certain primates) that can learn primitive languages, and others (like birds and cats) that cannot. And as Jeffrey Kluger recently wrote in Time, animals can learn a wide range of things once thought unique to humans. We've known for some time that animals can use tools, and have evidence of vocabulary and language in primates. But animals can also plan, work cooperatively, count, have emotions, have empathy for others, and have a sense of self. And while humans may have specialized mechanisms for some functions (such as Broca's area for language), the mechanisms that produce this knowledge are low-level; for example, in problem-solving, "While the specialized cells in each section of mammalian basal ganglia do equally specialized work, the undifferentiated ones in birds' brains multitask, doing all those jobs at once."

Two different types of knowledge. Two different sets of skills. If we want people to socialize, to conform, to follow rules, we'll focus on the repetition of the symbols and codes that constitute explicit knowledge, to have them become expert in what Wittgenstein called "language games," the public performance of language. But if we want people to learn, then we need to focus on the subsymbolic, the concepts, skills, procedures and other bits of tacit knowledge that underlie, and give rise to, the social conventions. We cannot simply learn the words. "A great deal of medicine can be remembered even after one has forgotten the use of medical terms."

Or, to put the same point more bluntly, we can teach to support learning, or we can teach to support the production of social artifacts. We can teach the subject, or we can teach superficial behaviours. And as Tom Hoffman notes, teachers who have deeper knowledge, a greater base of expertise, will tend to produce "deep learning" in a discipline, while less experienced teachers will "teach to the test." And though students of the less experienced teachers had better test scores, students who learned from more experienced instructors performed better in subsequent courses. Hoffman cites scientific evidence, but the same thing was said, many years earlier, by John Holt, who observed that in a traditional classroom, children learn to play the system, "to manipulate teachers to gain clues about what the teacher really wants. Through the teacher's body language, facial expressions and other clues, they learn what might be the right answer. They mumble, straddle the answer, get the teacher to answer their own question, and take wild guesses while waiting to see what happens."

If we really want to know what students learn, we need to take into account a much wider range of phenomena than how they behave in response to the production of artifacts. Perhaps it's a bit much to measure their pupil dilation, eye gaze, brain activity, blinking, breathing and body temperature, as neuromarketers do. But it shouldn't be too much to expect to be able to map their social and search activity in a learning community, as Google does.

And when we teach, we may not need to take into account everything about the message, the way a political campaign might. We may not, as political consultant Darryl Howard does, "measure everything including the story line, level of the language, images, music." But we should understand, as educators, that learning is much more than mere presentation of facts, that students are learning from everything that goes on around them, and that even if we are not teaching this way, someone -- with perhaps less honorable motives -- is doing it.



Clifton Truman Daniel

One evening near Christmas in 1955, my grandfather, former President Harry S. Truman, came home to find my grandmother, Bess, sitting in front of a roaring fire, tossing in bundles of letters she'd written to him.

"Bess," he said, stopping her. "What are you doing? Think of history."

"Oh, I have," she said, and tossed in another bundle.

As a result of the conflagration, the Truman Presidential Library in Independence, Mo., which has 1,316 letters my grandfather wrote to my grandmother, has only 184 of the 1,300 or so she wrote to him. They were found, shortly before her death in 1982, stuffed into the backs of drawers and between the pages of books.

Ten of the letters were put on limited display at the Truman Library in 1998. The rest have never before been made public. I have collected them in a book, "Dear Harry, Love Bess," pairing each with a letter of my grandfather's to her.

Because those few escaped the flames, we know that at 10:20 p.m. on the evening of July 16, 1923, my 38-year-old grandmother was in bed, lonely and unprotected, waging war on the local insect population.

"There was a big black bug on my bed when I turned the sheet down and I had to kill it myself," she wrote indignantly.

That morning, my grandfather had taken off for the summer encampment of the Missouri National Guard, something he did annually. In fact, most of their letters from 1923 to 1933 were written back and forth between Independence and places like Fort Leavenworth, Fort Riley, and Camp Ripley.

Grandpa, who from 1922 to 1934 was a county administrator (they called them judges) under constant stress, viewed these encampments as vacations. My grandmother was more interested in the results of his annual physical.

In 1923, when he reported he'd stood a perfect physical exam, all she wanted to know was what the camp doctor said about his tonsils.

"Bet he didn't even look at them," she grumbled.

And she didn't want him recreating too boldly. In 1925, when Grandpa wrote that the camp pool was so cold "Minnesota lakes have nothing on it," her reply contained a tersely worded postscript: "Be careful of the pool — don't try any deep water swimming, please!"

Worried as she was about him, she had her own health woes. In 1923, it was a trio of infected teeth, the worst of which took her dentist more than an hour to extract.

"He was as worn out as I was," she wrote. "But it (the tooth) isn't bothering me much now. He had to give me so many hypodermics, my head feels funny."

Grandpa, needless to say, was horrified, fuming that he surely felt "like busting a dentist I know of."

In the summer of 1925, she twisted her ankle. She doesn't say how, but I like to think that my mother, who was then about 18 months old, had something to do with it. She certainly made writing difficult.

"She is pulling and slapping me and is on my back at present (I'm sitting on the floor) so if you can read this scrawl you are doing pretty well," my grandmother wrote, or tried to write. "I can't write any more — she is yanking the paper out of my hands now."

But these trials were nothing compared to trying to get Grandpa to let her cut her hair short, as was the style in the mid-1920s. He liked the long, "golden curls" she'd worn since age 5, when he'd first laid eyes on her. She, on the other hand, felt they made her "conspicuous." They had a face-to-face about it before he left for camp in 1925 then continued the fracas through the mail for two weeks.

"Come on, be a sport," she cajoled near the end. "Ask all the married men in camp about their wives' heads and I'll bet anything I have there isn't one under 60 who has long hair."

Grandpa finally relented, saying, "I've never been right sure you weren't kidding me anyway. You usually do as you like about things, and that's what I want you to do."

Fights were rare. She was more likely to tease him. When he reported that the camp showers were several blocks away, she wrote back: "Don't you want your bath slippers? I should think you'd need them traveling down the street to your bath every morning."

The impishness extended to her neighbors the Swifts. When they rose at 5 a.m. to leave on vacation, she watched them, because, she wrote, "I wouldn't have missed seeing Mrs. Swift in knickers for a hundred dollars."

Mostly, though, she and Grandpa worked hard at the simple act of communicating. They wrote as often as twice a day. If one of them missed a letter, the excuse was either very detailed … or, in her case, interesting.

"It was so blazing hot last night I didn't have the nerve to keep on enough clothes so I could have a light long enough to write a letter."

But they rarely missed a chance to express their love for one another.

"Lots and lots of love and please keep on loving me as hard as ever," she wrote in July 1925. "You know I just feel as if a large part of me has been gone for the last 10 days."

Clifton Truman Daniel, the author of "Dear Harry, Love Bess," is director of public relations at Harry S. Truman College.



By Ray Jayawardhana

I remember the first time the concept of another world entered my mind. It was during a walk with my father in our garden in Sri Lanka. He pointed to the Moon and told me that people had walked on it. I was astonished: Suddenly that bright light became a place that one could visit.

Schoolchildren may feel a similar sense of wonder when they see pictures of a Martian landscape or Saturn’s rings. And soon their views of alien worlds may not be confined to the planets in our own solar system.

After millenniums of musings and a century of failed attempts, astronomers first detected an exoplanet, a planet orbiting a normal star other than the Sun, in 1995. Now they are finding hundreds of such worlds each year. Last month, NASA announced that 1,235 new possible planets had been observed by Kepler, a telescope on a space satellite. Six of the planets that Kepler found circle one star, and the orbits of five of them would fit within that of Mercury, the closest planet to our Sun.

By timing the passages of these five planets across their sun’s visage — which provides confirmation of their planetary nature — we can witness their graceful dance with one another, choreographed by gravity. These discoveries remind us that nature is often richer and more wondrous than our imagination. The diversity of alien worlds has surprised us and challenged our preconceptions many times over.

It is quite a change from merely 20 years ago, when we knew for sure of just one planetary system: ours. The pace of discovery, supported by new instruments and missions and innovative strategies by planet seekers, has been astounding.

What’s more, from measurements of their masses and sizes, we can infer what some of these worlds are made of: gases, ice or rocks. Astronomers have been able to take the temperature of planets around other stars, first with telescopes in space but more recently with ground-based instruments, as my collaborators and I have done.

Two and a half years ago, we even managed to capture the first direct pictures of alien worlds. There is something about a photo of an alien planet — even if it only appears as a faint dot next to a bright, overexposed star — that makes it “real.” Given that stars shine like floodlights next to the planetary embers huddled around them, success required painstaking efforts and clever innovations. One essential tool is adaptive optics technology, which, in effect, takes the twinkle out of the stars, thus providing sharper images from telescopes on the ground than would otherwise be possible.

At the crux of this grand pursuit is one basic question: Is our warm, wet, rocky world, teeming with life, the exception or the norm? It is an important question for every one of us, not just for scientists. It seems absurd, if not arrogant, to think that ours is the only life-bearing world in the galaxy, given hundreds of billions of other suns, the apparent ubiquity of planets, and the cosmic abundance of life’s ingredients. It may be that life is fairly common, but that “intelligent” life is rare.

Of course, the vast majority of the extra-solar worlds discovered to date are quite unlike our own: many are gas giants, and some are boiling hot while others endure everlasting chills. Just a handful are close in size to our planet, and only a few of those may be rocky like the Earth, rather than gaseous like Jupiter or icy like Neptune.

But within the next few years, astronomers expect to find dozens of alien earths that are roughly the size of our planet. Some of them will likely be in the so-called habitable zone, where the temperatures are just right for liquid water. The discovery of “Earth twins,” with conditions similar to what we find here, will inevitably bring questions about alien life to the forefront.

Detecting signs of life elsewhere will not be easy, but it may well occur in my lifetime, if not during the next decade. Given the daunting distances between the stars, the real-life version will almost certainly be a lot less sensational than the movies depicting alien invasions or crash-landing spaceships.

The evidence may be circumstantial at first — say, spectral bar codes of interesting molecules like oxygen, ozone, methane and water — and leave room for alternative interpretations. It may take years of additional data-gathering, and perhaps the construction of new telescopes, to satisfy our doubts. Besides, we won’t know whether such “biosignatures” are an indication of slime or civilization. Most people will likely move on to other, more immediate concerns of life here on Earth while scientists get down to work.

If, on the other hand, an alien radio signal were to be detected, that would constitute a more clear-cut and exciting moment. Even if the contents of the message remained elusive for decades, we would know that there was someone “intelligent” at the other end. The search for extraterrestrial intelligence with radio telescopes has come of age recently, 50 years after the first feeble attempt. The construction of the Allen Telescope Array on an arid plateau in northern California greatly expands the number of star systems from which astronomers could detect signals.

However it arrives, the first definitive evidence of life elsewhere will mark a turning point in our intellectual history, perhaps only rivaled by Copernicus’s heliocentric theory or Darwin’s theory of evolution. If life can spring up on two planets independently, why not on a thousand or even a billion others? The ramifications of finding out for sure that ours isn’t the only inhabited world are likely to be felt, over time, in many areas of human thought and endeavor — from biology and philosophy to religion and art.

Some people worry that discovering life elsewhere, especially if it turns out to be in possession of incredible technology, will make us feel small and insignificant. They seem concerned that it will constitute a horrific blow to our collective ego.

I happen to be an optimist. It may take decades after the initial indications of alien life for scientists to gather enough evidence to be certain or to decipher a signal of artificial origin. The full ramifications of the discovery may not be felt for generations, giving us plenty of time to get used to the presence of our galactic neighbors. Besides, knowing that we are not alone just might be the kick in the pants we need to grow up as a species.

Ray Jayawardhana, a professor of astronomy and astrophysics at the University of Toronto, is the author of “Strange New Worlds: The Search for Alien Planets and Life Beyond Our Solar System.”



Petula Dvorak

A generation of young Americans slammed the door Monday on the great big boogeyman of their childhoods with an epic woot-woot and rounds and rounds of “U.S.A.!”

At the news of Osama bin Laden’s death, thousands of people — most of them college-age and in requisite flip-floppy collegiate gear — whipped up a raucous celebration right outside the White House gates that was one part Mardi Gras and two parts Bon Jovi concert.

There were cigars, a few beers, a lacrosse-stick-turned-flagpole waved by a kid who had just climbed a statue, joining others aloft in trees and atop lampposts. Well past midnight, cars zipped up and down the streets of downtown Washington with women standing up through sunroofs waving ginormous American flags and guys blowing vuvuzelas, spring break style.

It felt a little crazy, a bit much. Almost vulgar.

Because meanwhile, across the river, at the Pentagon, in the ghostly quiet of lights at the Sept. 11 memorial, a military veteran silently wept.

And many others cried, too, sickened by the death toll, the enormity of almost 10 years of fear, death and terror.

The death of bin Laden will be a grief-tinged, complicated event for many Americans. I immediately saw the mixed reactions of my peers on Twitter and Facebook. Folks who lost close friends or family in the Sept. 11 attacks orchestrated by bin Laden or the war on terror that followed had a rush of new emotions and raw pain at the news of even more bloodshed.

Is it over? Everything better now that they got him? Not really.

When I saw that folks were celebrating in the streets at the news of bin Laden’s death, my first reaction was a cringe. Remember how we all felt watching videos of Muslims dancing in celebration on Sept. 11, 2001?

Are we simply creating star-spangled recruitment tapes for a new generation of terrorists killing in the name of their new martyr?

So a jacket went over my pajamas, shoes went on my feet and off I went to see the macabre jubilee downtown.

One of the first people I met was Mohsen Farshneshani, who was fist-pumping in a U.S.A. chant amid a huge crush of college kids.

“When 9/11 happened, I was in fourth grade. It changed everything,” said Farshneshani, a 19-year-old freshman at the University of Maryland. “The way people treated me, my family, the mean things everyone began saying to us.”

A Muslim who grew up in Olney, Farshneshani watched his religion get hijacked by the man he often blamed it all on: bin Laden.

He remembers a “perfect” life in third grade, when he had non-Muslim friends and it seemed as though no one cared that he practiced a different faith.

After bin Laden’s attacks, there was a seismic shift. The kids still willing to come on play dates were suddenly accompanied by their parents. At his birthday party, he watched parents sneak around the house, poking their heads into different rooms, looking, presumably, for those suspicious signs of terrorist activity the government — via highway signs and billboards — repeatedly asked us all to report.

So in the wee hours of Monday morning, with the biggest boogeyman of his young life gone, Farshneshani felt like everything might change.

“This is a new opportunity for Muslims, and a great victory,” Farshneshani said.

He’s part of a color-coded terror alert generation, the kids who open their backpacks at museums and libraries and take off their shoes at airports without being asked because that’s what you do, right?

Their daily news has been body counts and deployments. Their Halloween candy goes to soldiers; their fundraisers are for injured veterans. They are the ones who saw, way earlier than any child should, their parents cry and freak and crumble on that day in September 2001.

For Sarah Powers, 19, the specter of Osama bin Laden as ultimate bad guy was there for her entire young life.

“I remember sitting in the classroom, watching the TV that day, 9/11, and how scared everyone was,” said Powers, a freshman at George Mason University. “We grew up with war. That’s most of what we know, being in a country that’s at war. To be here tonight, when they got him. Wow.”

Yes, they deserve a night of wow, a confetti-in-the-streets moment of victory, a V-Day.

Because after this, it’s probably going to stay very, very complicated.

Washington Post May 2, 2011



Don Peck    The Atlantic Monthly

In October 2005, three Citigroup analysts released a report describing the pattern of growth in the U.S. economy. To really understand the future of the economy and the stock market, they wrote, you first needed to recognize that there was “no such animal as the U.S. consumer,” and that concepts such as “average” consumer debt and “average” consumer spending were highly misleading.

In fact, they said, America was composed of two distinct groups: the rich and the rest. And for the purposes of investment decisions, the second group didn’t matter; tracking its spending habits or worrying over its savings rate was a waste of time. All the action in the American economy was at the top: the richest 1 percent of households earned as much each year as the bottom 60 percent put together; they possessed as much wealth as the bottom 90 percent; and with each passing year, a greater share of the nation’s treasure was flowing through their hands and into their pockets. It was this segment of the population, almost exclusively, that held the key to future growth and future returns. The analysts, Ajay Kapur, Niall Macleod, and Narendra Singh, had coined a term for this state of affairs: plutonomy.

In a plutonomy, Kapur and his co-authors wrote, “economic growth is powered by and largely consumed by the wealthy few.” America had been in this state twice before, they noted—during the Gilded Age and the Roaring Twenties. In each case, the concentration of wealth was the result of rapid technological change, global integration, laissez-faire government policy, and “creative financial innovation.” In 2005, the rich were nearing the heights they’d reached in those previous eras, and Citigroup saw no good reason to think that, this time around, they wouldn’t keep on climbing. “The earth is being held up by the muscular arms of its entrepreneur-plutocrats,” the report said. The “great complexity” of a global economy in rapid transformation would be “exploited best by the rich and educated” of our time.

Kapur and his co-authors were wrong in some of their specific predictions about the plutonomy’s ramifications—they argued, for instance, that since spending was dominated by the rich, and since the rich had very healthy balance sheets, the odds of a stock-market downturn were slight, despite the rising indebtedness of the “average” U.S. consumer. And their division of America into only two classes is ultimately too simple. Nonetheless, their overall characterization of the economy remains resonant. According to Gallup, from May 2009 to May 2011, daily consumer spending rose by 16 percent among Americans earning more than $90,000 a year; among all other Americans, spending was completely flat. The consumer recovery, such as it is, appears to be driven by the affluent, not by the masses. Three years after the crash of 2008, the rich and well educated are putting the recession behind them. The rest of America is stuck in neutral or reverse.

Income inequality usually shrinks during a recession, but in the Great Recession, it didn’t. From 2007 to 2009, the most-recent years for which data are available, it widened a little. The top 1 percent of earners did see their incomes drop more than those of other Americans in 2008. But that fall was due almost entirely to the stock-market crash, and with it a 50 percent reduction in realized capital gains. Excluding capital gains, top earners saw their share of national income rise even in 2008. And in any case, the stock market has since rallied. Corporate profits have marched smartly upward, quarter after quarter, since the beginning of 2009.

Even in the financial sector, high earners have come back strong. In 2009, the country’s top 25 hedge-fund managers earned $25 billion among them—more than they had made in 2007, before the crash. And while the crisis may have begun with mass layoffs on Wall Street, the financial industry has remained well shielded compared with other sectors; from the first quarter of 2007 to the first quarter of 2010, finance shed 8 percent of its jobs, compared with 27 percent in construction and 17 percent in manufacturing. Throughout the recession, the unemployment rate in finance and insurance has been substantially below that of the nation overall.

It’s hard to miss just how unevenly the Great Recession has affected different classes of people in different places. From 2009 to 2010, wages were essentially flat nationwide—but they grew by 11.9 percent in Manhattan and 8.7 percent in Silicon Valley. In the Washington, D.C., and San Jose (Silicon Valley) metro areas—both primary habitats for America’s meritocratic winners—job postings in February of this year were almost as numerous as job candidates. In Miami and Detroit, by contrast, for every job posting, six people were unemployed. In March, the national unemployment rate was 12 percent for people with only a high-school diploma, 4.5 percent for college grads, and 2 percent for those with a professional degree.

Housing crashed hardest in the exurbs and in more-affordable, once fast-growing areas like Phoenix, Las Vegas, and much of Florida—all meccas for aspiring middle-class families with limited savings and education. The professional class, clustered most densely in the closer suburbs of expensive but resilient cities like San Francisco, Seattle, Boston, and Chicago, has lost little in comparison. And indeed, because the stock market has rebounded while housing values have not, the middle class as a whole has seen more of its wealth erased than the rich, who hold more-diverse portfolios. A 2010 Pew study showed that the typical middle-class family had lost 23 percent of its wealth since the recession began, versus just 12 percent in the upper class.

The ease with which the rich and well educated have shrugged off the recession shouldn’t be surprising; strong winds have been at their backs for many years. The recession, meanwhile, has restrained wage growth and enabled faster restructuring and offshoring, leaving many corporations with lower production costs and higher profits—and their executives with higher pay.

Anthony Atkinson, an economist at Oxford University, has studied how several recent financial crises affected income distribution—and found that in their wake, the rich have usually strengthened their economic position. Atkinson examined the financial crises that swept Asia in the 1990s as well as those that afflicted several Nordic countries in the same decade. In most cases, he says, the middle class suffered depressed income for a long time after the crisis, while the top 1 percent were able to protect themselves—using their cash reserves to buy up assets very cheaply once the market crashed, and emerging from crisis with a significantly higher share of assets and income than they’d had before. “I think we’ve seen the same thing, to some extent, in the United States” since the 2008 crash, he told me. “Mr. Buffett has been investing.”

“The rich seem to be on the road to recovery,” says Emmanuel Saez, an economist at Berkeley, while those in the middle, especially those who’ve lost their jobs, “might be permanently hit.” Coming out of the deep recession of the early 1980s, Saez notes, “you saw an increase in inequality … as the rich bounced back, and unionized labor never again found jobs that paid as well as the ones they’d had. And now I fear we’re going to see the same phenomenon, but more dramatic.” Middle-paying jobs in the U.S., in which some workers have been overpaid relative to the cost of labor overseas or technological substitution, “are being wiped out. And what will be left is a hard and a pure market,” with the many paid less than before, and the few paid even better—a plutonomy strengthened in the crucible of the post-crash years.

The Culling of the Middle Class

One of the most salient features of severe downturns is that they tend to accelerate deep economic shifts that are already under way. Declining industries and companies fail, spurring workers and capital toward rising sectors; declining cities shrink faster, leaving blight; workers whose roles have been partly usurped by technology are pushed out en masse and never asked to return. Some economists have argued that in one sense, periods like these do nations a service by clearing the way for new innovation, more-efficient production, and faster growth. Whether or not that’s true, they typically allow us to see, with rare and brutal clarity, where society is heading—and what sorts of people and places it is leaving behind.

Arguably, the most important economic trend in the United States over the past couple of generations has been the ever more distinct sorting of Americans into winners and losers, and the slow hollowing-out of the middle class. Median incomes declined outright from 1999 to 2009. For most of the aughts, that trend was masked by the housing bubble, which allowed working-class and middle-class families to raise their standard of living despite income stagnation or downward job mobility. But that fig leaf has since blown away. And the recession has pressed hard on the broad center of American society.

“The Great Recession has quantitatively but not qualitatively changed the trend toward employment polarization” in the United States, wrote the MIT economist David Autor in a 2010 white paper. Job losses have been “far more severe in middle-skilled white- and blue-collar jobs than in either high-skill, white-collar jobs or in low-skill service occupations.” Indeed, from 2007 through 2009, total employment in professional, managerial, and highly skilled technical positions was essentially unchanged. Jobs in low-skill service occupations such as food preparation, personal care, and house cleaning were also fairly stable. Overwhelmingly, the recession has destroyed the jobs in between. Almost one of every 12 white-collar jobs in sales, administrative support, and nonmanagerial office work vanished in the first two years of the recession; one of every six blue-collar jobs in production, craft, repair, and machine operation did the same.

Autor isolates the winnowing of middle-skill, middle-class jobs as one of several labor-market developments that are profoundly reshaping U.S. society. The others are rising pay at the top, falling wages for the less educated, and “lagging labor market gains for males.” “All,” he writes, “predate the Great Recession. But the available data suggest that the Great Recession has reinforced these trends.”

For more than 30 years, the American economy has been in the midst of a sea change, shifting from industry to services and information, and integrating itself far more tightly into a single, global market for goods, labor, and capital. To some degree, this transformation has felt disruptive all along. But the pace of the change has quickened since the turn of the millennium, and even more so since the crash. Companies have figured out how to harness exponential increases in computing power better and faster. Global supply chains, meanwhile, have grown both tighter and more supple since the late 1990s—the result of improving information technology and of freer trade—making routine work easier to relocate. And of course China, India, and other developing countries have fully emerged as economic powerhouses, capable of producing large volumes of high-value goods and services.

Some parts of America’s transformation may now be nearing completion. For decades, manufacturing has become continually less important to the economy, as other business sectors have grown. But the popular narrative—rapid decline in the 1970s and ’80s, followed by slow erosion thereafter—isn’t quite right, at least as far as employment goes. In fact, the total number of people employed in industry remained quite stable from the late 1960s through about 2000, at roughly 17 million to 19 million. To be sure, manufacturing wasn’t providing many new jobs for a growing population, but for decades, rising output essentially offset the impact of labor-saving technology and offshoring.

But since 2000, U.S. manufacturing has shed about a third of its jobs. Some of that decline reflects losses to China. Still, industry isn’t about to vanish from America, any more than agriculture did as the number of farm workers plummeted during the 20th century. As of 2010, the United States was the second-largest manufacturer in the world, and the No. 3 agricultural nation. But agriculture is now so mechanized that only about 2 percent of American workers make a living as farmers. American manufacturing looks to be heading down the same path.

Meanwhile, another phase of the economy’s transformation—one more squarely involving the white-collar workforce—is really just beginning. “The thing about information technology,” Autor told me, “is that it’s extremely broadly applicable, it’s getting cheaper all the time, and we’re getting better and better at it.” Computer software can now do boilerplate legal work, for instance, and make a first pass at reading X-rays and other medical scans. Likewise, thanks to technology, we can now easily have those scans read and interpreted by professionals half a world away.

In 2007, the economist Alan Blinder, a former vice chairman of the Federal Reserve, estimated that between 22 and 29 percent of all jobs in the United States had the potential to be moved overseas within the next couple of decades. With the recession, the offshoring of jobs only seems to have gained steam. The financial crisis of 2008 was global, but job losses hit America especially hard. According to the International Monetary Fund, one of every four jobs lost worldwide was lost in the United States. And while unemployment remains high in America, it has come back down to (or below) pre-recession levels in countries like China and Brazil.

Anxiety Creeps Upward

Over time, both trade and technology have increased the number of low-cost substitutes for American workers with only moderate cognitive or manual skills—people who perform routine tasks such as product assembly, process monitoring, record keeping, basic information brokering, simple software coding, and so on. As machines and low-paid foreign workers have taken on these functions, the skills associated with them have become less valuable, and workers lacking higher education have suffered.

For the most part, these same forces have been a boon, so far, to Americans who have a good education and exceptional creative talents or analytic skills. Information technology has complemented the work of people who do complex research, sophisticated analysis, high-end deal-making, and many forms of design and artistic creation, rather than replacing that work. And global integration has meant wider markets for new American products and high-value services—and higher incomes for the people who create or provide them.

The return on education has risen in recent decades, producing more-severe income stratification. But even among the meritocratic elite, the economy’s evolution has produced a startling divergence. Since 1993, more than half of the nation’s income growth has been captured by the top 1 percent of earners, and the gains have grown larger over time: from 2002 to 2007, out of every three dollars of national income growth, the top 1 percent of earners captured two. Nearly 2 million people started college in 2002—1,630 of them at Harvard—but among them only Mark Zuckerberg is worth more than $10 billion today; the rise of the super-elite is not a product of educational differences. In part, it is a natural outcome of widening markets and technological revolution, which are creating much bigger winners much faster than ever before—a result that’s not even close to being fully played out, and one reinforced strongly by the political influence that great wealth brings.

Recently, as technology has improved and emerging-market countries have sent more people to college, economic pressures have been moving up the educational ladder in the United States. “It’s useful to make a distinction between college and post-college,” Autor told me. “Among people with professional and even doctoral [degrees], in general the job market has been very good for a very long time, including recently. The group of highly educated individuals who have not done so well recently would be people who have a four-year college degree but nothing beyond that. Opportunities have been less good, wage growth has been less good, the recession has been more damaging. They’ve been displaced from mid-managerial or organizational positions where they don’t have extremely specialized, hard-to-find skills.”

College graduates may be losing some of their luster for reasons beyond technology and trade. As more Americans have gone to college, Autor notes, the quality of college education has become arguably more inconsistent, and the signaling value of a degree from a nonselective school has perhaps diminished. Whatever the causes, “a college degree is not the kind of protection against job loss or wage loss that it used to be.”

Without doubt, it is vastly better to have a college degree than to lack one. Indeed, on a relative basis, the return on a four-year degree is near its historic high. But that’s largely because the prospects facing people without a college degree have been flat or falling. Throughout the aughts, incomes for college graduates barely budged. In a decade defined by setbacks, perhaps that should occasion a sort of wan celebration. “College graduates aren’t doing badly,” says Timothy Smeeding, an economist at the University of Wisconsin and an expert on inequality. But “all the action in earnings is above the B.A. level.”

America’s classes are separating and changing. A tiny elite continues to float up and away from everyone else. Below it, suspended, sits what might be thought of as the professional middle class—unexceptional college graduates for whom the arrow of fortune points mostly sideways, and an upper tier of college graduates and postgraduates for whom it points progressively upward, but not spectacularly so. The professional middle class has grown anxious since the crash, and not without reason. Yet these anxieties should not distract us from a second, more important, cleavage in American society—the one between college graduates and everyone else.

If you live and work in the professional communities of Boston or Seattle or Washington, D.C., it is easy to forget that nationwide, even among people ages 25 to 34, college graduates make up only about 30 percent of the population. And it is easy to forget that a family income of $113,000 in 2009 would have put you in the 80th income percentile nationally. The true center of American society has always been its nonprofessionals—high-school graduates who didn’t go on to get a bachelor’s degree make up 58 percent of the adult population. And as manufacturing jobs and semiskilled office positions disappear, much of this vast, nonprofessional middle class is drifting downward.

The Bottom 70 Percent

The troubles of the nonprofessional middle class are inseparable from the economic troubles of men. Consistently, men without higher education have been the biggest losers in the economy’s long transformation (according to Michael Greenstone, an economist at MIT, real median wages of men have fallen by 32 percent since their peak in 1973, once you account for the men who have washed out of the workforce altogether). And the struggles of men have amplified the many problems—not just economic, but social and cultural—facing the country today.

Just as the housing bubble papered over the troubles of the middle class, it also hid, for a time, the declining prospects of many men. According to the Harvard economist Lawrence Katz, since the mid-1980s, the labor market has been placing a higher premium on creative, analytic, and interpersonal skills, and the wages of men without a college degree have been under particular pressure. “And I think this downturn exacerbates” the problem, Katz told me. During the aughts, construction provided an outlet for the young men who would have gone into manufacturing a generation ago. Men without higher education “didn’t do as badly as you might have expected, on long-run trends, because of the housing bubble.” But it’s hard to imagine another such construction boom coming to their rescue.

One of the great puzzles of the past 30 years has been the way that men, as a group, have responded to the declining market for blue-collar jobs. Opportunities have expanded for college graduates over that span, and for nongraduates, jobs have proliferated within the service sector (at wages ranging from rock-bottom to middling). Yet in the main, men have pursued neither higher education nor service jobs. The proportion of young men with a bachelor’s degree today is about the same as it was in 1980. And as the sociologists Maria Charles and David Grusky noted in their 2004 book, Occupational Ghettos, while men and women now mix more easily on different rungs of the career ladder, many industries and occupations have remained astonishingly segregated, with men continuing to seek work in a dwindling number of manual jobs, and women “crowding into nonmanual occupations that, on average, confer more pay and prestige.”

As recently as 2001, U.S. manufacturing still employed about as many people as did health and educational services combined (roughly 16 million). But since then, those latter, female-dominated sectors have added about 4 million jobs, while manufacturing has lost about the same number. Men made no inroads into health care or education during the aughts; in 2009, they held only about one in four jobs in those rising sectors, just as they had at the beginning of the decade. They did, however, consolidate their hold on manufacturing—those dwindling jobs, along with jobs in construction, transportation, and utilities, were more heavily dominated by men in 2009 than they’d been nine years earlier.

“I’m deeply concerned” about the prospects of less-skilled men, says Bruce Weinberg, an economist at Ohio State. In 1967, 97 percent of 30-to-50-year-old American men with only a high-school diploma were working; in 2010, just 76 percent were. Declining male employment is not unique to the United States. It’s been happening in almost all rich nations, as they’ve put the industrial age behind them. Weinberg’s research has shown that in occupations in which “people skills” are becoming more important, jobs are skewing toward women. And that category is large indeed. In the working paper “People People,” Weinberg and two co-authors found that interpersonal skills typically become more highly valued in occupations in which computer use is prevalent and growing, and in which teamwork is important. Both computer use and teamwork are becoming ever more central to the American workplace, of course; the restructuring that accompanied the Great Recession has only hastened that trend.

Needless to say, a great many men have excellent people skills, just as a great many men do well in school. As a group, men still make more money than women, in part due to lingering discrimination. And many of the differences we observe between the genders may be the result of culture rather than genetics. All of that notwithstanding, a meaningful number of men have struggled badly as the economy has evolved, and have shown few signs of successful adaptation. Men’s difficulties are hardly evident in Silicon Valley or on Wall Street. But they’re hard to miss in foundering blue-collar and low-end service communities across the country. It is in these less affluent places that gender roles, family dynamics, and community character are changing in the wake of the crash.

A Cultural Separation

In the March 2010 issue of this magazine, I discussed the wide-ranging social consequences of male economic problems, once they become chronic. Women tend not to marry (or stay married to) jobless or economically insecure men—though they do have children with them. And those children usually struggle when, as typically happens, their parents separate and their lives are unsettled. The Harvard sociologist William Julius Wilson has connected the loss of manufacturing jobs from inner cities in the 1970s—and the resulting economic struggles of inner-city men—to many of the social ills that cropped up afterward. Those social ills eventually became self-reinforcing, passing from one generation to the next. In less privileged parts of the country, a larger, predominantly male underclass may now be forming, and with it, more-widespread cultural problems.

What I didn’t emphasize in that story is the extent to which these sorts of social problems—the kind that can trap families and communities in a cycle of disarray and disappointment—have been seeping into the nonprofessional middle class. In a national study of the American family released late last year, the sociologist W. Bradford Wilcox wrote that among “Middle Americans”—people with a high-school diploma but not a college degree—an array of signals of family dysfunction have begun to blink red. “The family lives of today’s moderately educated Americans,” which in the 1970s closely resembled those of college graduates, now “increasingly resemble those of high-school dropouts, too often burdened by financial stress, partner conflict, single parenting, and troubled children.”

“The speed of change,” wrote Wilcox, “is astonishing.” By the late 1990s, 37 percent of moderately educated couples were divorcing or separating less than 10 years into their first marriage, roughly the same rate as among couples who didn’t finish high school and more than three times that of college graduates. By the 2000s, the percentage in “very happy” marriages—identical to that of college graduates in the 1970s—was also nearing that of high-school dropouts. Between 2006 and 2008, among moderately educated women, 44 percent of all births occurred outside marriage, not far off the rate (54 percent) among high-school dropouts; among college-educated women, that proportion was just 6 percent.

The same pattern—families of middle-class nonprofessionals now resembling those of high-school dropouts more than those of college graduates—emerges with norm after norm: the percentage of 14-year-old girls living with both their mother and father; the percentage of adolescents wanting to attend college “very much”; the percentage of adolescents who say they’d be embarrassed if they got (or got someone) pregnant; the percentage of never-married young adults using birth control all the time.

One stubborn stereotype in the United States is that religious roots are deepest in blue-collar communities and small towns, and, more generally, among Americans who do not have college degrees. That was true in the 1970s. Yet since then, attendance at religious services has plummeted among moderately educated Americans, and is now much more common among college grads. So, too, is participation in civic groups. High-school seniors from affluent households are more likely to volunteer, join groups, go to church, and have strong academic ambitions than seniors used to be, and are as trusting of other people as seniors a generation ago; their peers from less affluent households have become less engaged on each of those fronts. A cultural chasm—which did not exist 40 years ago and which was still relatively small 20 years ago—has developed between the traditional middle class and the top 30 percent of society.

The interplay of economic and cultural forces is complex, and changes in cultural norms cannot be ascribed exclusively to the economy. Wilcox has tried to statistically parse the causes of the changes he has documented, concluding that about a third of the class-based changes in marriage patterns, for instance, are directly attributable to wage stagnation, increased job insecurity, or bouts of unemployment; the rest he attributes to changes in civic and religious participation and broader changes in attitudes among the middle class.

In fact, all of these variables seem to reinforce each other. Nonetheless, some of the most significant cultural changes within the middle class have accelerated in the past decade, as the prospects of the nonprofessional middle class have dimmed. The number of couples who live together but are not married, for instance, has been rising briskly since the 1970s, but it really took off in the aughts—nearly doubling, from 3.8 million to 6.7 million, from 2000 to 2009. From 2009 to 2010, that number jumped by nearly a million more. In six out of 10 of the newly cohabiting couples, at least one person was not working, a much higher proportion than in the past.

Ultimately, the evolution of the meritocracy itself appears to be at least partly responsible for the growing cultural gulf between highly educated Americans and the rest of society. As the journalist Bill Bishop showed in his 2008 book, The Big Sort, American communities have become ever more finely sorted by affluence and educational attainment over the past 30 years, and this sorting has in turn reinforced the divergence in the personal habits and lifestyle of Americans who lack a college degree from those of Americans who have one. In highly educated communities, families are largely intact, educational ideals strong, and good role models abundant. None of those things is a given anymore in communities where college-degree attainment is low. The natural leaders of such communities—the meritocratic winners who do well in school, go off to selective colleges, and get their degrees—generally leave them for good in their early 20s.

In their 2009 book, Creating an Opportunity Society, Ron Haskins and Isabel Sawhill write that while most Americans believe that opportunity is widespread in the United States, and that success is primarily a matter of individual intelligence and skill, the reality is more complicated. In recent decades, people born into the middle class have indeed moved up and down the class ladder readily. Near the turn of the millennium, for instance, middle-aged people who’d been born to middle-class parents had widely varied incomes. But class was stickier among those born to parents who were either rich or poor. Thirty-nine percent of children born to parents in the top fifth of earners stayed in that same bracket as adults. Likewise, 42 percent of those whose parents were in the bottom fifth remained there themselves. Only 6 percent reached the top fifth: rags-to-riches stories were extremely rare.

A thinner middle class, in itself, means fewer stepping stones available to people born into low-income families. If the economic and cultural trends under way continue unabated, class mobility will likely decrease in the future, and class divides may eventually grow beyond our ability to bridge them.

What is most worrying is that all of the most powerful forces pushing on the nonprofessional middle class—economic and cultural—seem to be pushing in the same direction. We cannot know the future, and over time, some of these forces may dissipate of their own accord. Further advances in technology may be less punishing to middle-skill workers than recent advances have been; men may adapt better to a post-industrial economy, as the alternative to doing so becomes more stark; nonprofessional families may find a new stability as they accommodate themselves to changing norms of work, income, and parental roles. Yet such changes are unlikely to occur overnight, if they happen at all. Momentum alone suggests years of trouble for the middle class.

Changing the Path of the American Economy

True recovery from the Great Recession is not simply a matter of jolting the economy back onto its former path; it’s about changing the path. No single action or policy prescription can fix the varied problems facing the middle class today, but through a combination of approaches—some aimed at increasing the growth rate of the economy itself, and some at ensuring that more people are able to benefit from that growth—we can ameliorate them. Many of the deepest economic trends that the recession has highlighted and temporarily sped up will take decades to fully play out. We can adapt, but we have to start now.

The rest of this article suggests how we might do so. The measures that I propose are not comprehensive, nor are they without drawbacks. But they are emblematic of the types of proposals we will need to weigh in the coming years, and of the nature of the national conversation we need to have. That conversation must begin with a reassessment of how globalization is affecting American society, and of what it will take for the U.S. to thrive in a rapidly changing world.

In 2010, the McKinsey Global Institute released a report detailing just how mighty America’s multinational companies are—and how essential they have become to the U.S. economy. Multinationals headquartered in the U.S. employed 19 percent of all private-sector workers in 2007, earned 25 percent of gross private-sector profits, and paid out 25 percent of all private-sector wages. They also accounted for nearly three-quarters of the nation’s private-sector R&D spending. Since 1990, they’ve been responsible for 31 percent of the growth in real GDP.

Yet for all their outsize presence, multinationals have been puny as engines of job creation. Over the past 20 years, they have accounted for 41 percent of all gains in U.S. labor productivity—but just 11 percent of private-sector job gains. And in the latter half of that period, the picture grew uglier: according to the economist Martin Sullivan, from 1999 through 2008, U.S. multinationals actually shrank their domestic workforce by about 1.9 million people, while increasing foreign employment by about 2.4 million.

The heavy footprint of multinational companies is merely one sign of how inseparable the U.S. economy has become from the larger global economy—and these figures neatly illustrate two larger points. First, we can’t wish away globalization or turn our backs on trade; to try to do so would be crippling and impoverishing. And second, although American prosperity is tied to globalization, something has nonetheless gone wrong with the way America’s economy has evolved in response to increasingly dense global connections.

Particularly since the 1970s, the United States has placed its bets on continuous innovation, accepting the rapid transfer of production to other countries as soon as goods mature and their manufacture becomes routine, all with the idea that the creation of even newer products and services at home will more than make up for that outflow. At times, this strategy has paid off big. Rapid innovation in the 1990s allowed the economy to grow quickly and create good, new jobs up and down the ladder to replace those that were becoming obsolete or moving overseas, and enabled strong income growth for most Americans. Yet in recent years, that process has broken down.

One reason, writes the economist Michael Mandel, is that America no longer enjoys the economic fruits of its innovations for as long as it used to. Knowledge, R&D, and business know-how depreciate more quickly now than they did even 15 years ago, because global communication is faster, connections are more seamless, and human capital is more broadly diffused than in the past.

As a result, domestic production booms have ended sooner than they used to. IT-hardware production, for instance, which in 1999 the Bureau of Labor Statistics projected would create about 155,000 new jobs in the U.S. over the following decade, actually shrank by nearly 500,000 jobs in that time. Jobs in data processing also fell, presumably as a result of both offshoring and technological advance. Because innovations now depreciate faster, we need more of them than we used to in order to sustain the same rate of economic growth.

Yet in the aughts, as an array of prominent economists and entrepreneurs have recently pointed out, the rate of big innovations actually slowed considerably; with the housing bubble fueling easy growth for much of that time, we just didn’t notice. This slowdown may have been merely the result of bad luck—big breakthroughs of the sort that create whole categories of products or services are difficult to predict, and long droughts are not unknown. Overregulation in certain areas may also have played a role. The economist Tyler Cowen, in his recent book, The Great Stagnation, argues that the scientific frontier itself—or at least that portion of it leading to commercial innovation—has been moving outward more slowly, and requiring ever more resources to do so, for many decades.

Process innovation has been quite rapid in recent years. U.S. multinationals and other companies are very good at continually improving their operational efficiency by investing in information technology, restructuring operations, and shifting work around the globe. Some of these activities benefit some U.S. workers, by making the jobs that stay in the country more productive. But absent big breakthroughs that lead to new products or services—and given the vast reserves of low-wage but increasingly educated labor in China, India, and elsewhere—rising operational efficiency hasn’t been a recipe for strong growth in either jobs or wages in the United States.

America has huge advantages as an innovator. Places like Silicon Valley, North Carolina’s Research Triangle, and the Massachusetts high-tech corridor are difficult to replicate, and the United States has many of them. Foreign students still flock here, and foreign engineers and scientists who get their doctorates here have been staying on for longer and longer over the past 15 years. When you compare apples to apples, the United States still leads the world, handily, in the number of skilled engineers, scientists, and business professionals in residence.

But we need to better harness those advantages to speed the pace of innovation, in part by putting a much higher national priority on investment—rather than consumption—in the coming years. That means, among other things, substantially raising and broadening both national and private investment in basic scientific progress and in later-stage R&D—through a combination of more federal investment in scientific research, perhaps bigger tax breaks for private R&D spending, and a much lower corporate tax rate (and a simpler corporate tax code) overall.

Edmund Phelps and Leo Tilman, professors at Columbia University, have proposed the creation of a National Innovation Bank that would invest in, or lend to, innovative start-ups—bringing more money to bear than venture-capital funds could, and at a lower cost of capital, which would promote more investment and enable the funding of somewhat riskier ventures. The broader idea behind such a bank is that because innovation carries so many ambient benefits—from job creation to the experience gained by even failed entrepreneurs and the people around them—we should be willing to fund it more liberally as a society than private actors would individually.

Removing bureaucratic obstacles to innovation is as important as pushing more public funds toward it. As Wall Street has amply demonstrated, not every industry was overregulated in the aughts. Nonetheless, the decade did see the accretion of a number of regulatory measures that may have chilled the investment climate (the Sarbanes-Oxley accounting reforms and a proliferation of costly security regulations following the creation of the Department of Homeland Security are two prominent examples).

Regulatory balance i