Mixed Signals

I listen to music every day. Intentionally.  I choose something to set my internal harmonic brainscape and listen.  It was a difficult and startling revelation to me back in my youth to realize many people don’t. That is, even when they have music playing, they don’t listen.  For many, it’s wallpaper, and this just struck me as sad.

But it explained what I thought of then as the execrable taste a lot of my acquaintances seemed to display in music.  I have never cared for so-called Top 40 tunes, with rare exception, because in my experience such songs were either the least interesting pieces on their respective albums or they were the zenith of a mediocre musical imagination.  Boring.  Listen to them three or four times and their content is exhausted.

I also used to have an absolutely absurd prejudice that if I could manage to play it myself, on guitar or keyboard, with only a few practices, it was just too insignificant.  This was ridiculous, but I’d been raised to appreciate technical difficulty as a sign of quality in most things.  It took a long time for me to overcome this notion and I still have not completely.

For good or ill, though, it informs my taste to this day, and in the presence of the technically superb I am seduced.  I have found technically accomplished work that was simply not as good as its polish, but I have rarely found sloppy work so much better than its presentation that the sloppiness didn’t matter.  Technical ability, precision of execution, polish…these are not simply ancillary qualities.  The guitarist may know all the notes of the Bach piece, but if the timing is wrong, the chording inaccurate, the strings squeaking constantly, it will be a thoroughly unenjoyable performance.  Likewise, if the guitarist has composed a beautiful new piece but then can’t perform it as imagined…who will ever know how beautiful it is?

Ultimately, technical sloppiness gets in the way of the work.  The better the technique, the clearer the art shows through.

Which brings me to what I wanted to talk about here.

The other day I sat down with two works that for whatever reason seemed to counterpoint each other.  Put it down to my peculiar æsthetic, as I doubt anyone else would consider them complementary.  And perhaps they aren’t, but they shared a common quality, the one I’ve been going on about—technical superiority.

Ansel Adams is a byword for precision in art, especially photographic art.  His images are studies in excellence, from their composition to their presentation.  There is a fine-tuned carefulness in many of them, if not all, that has set the standard for decades.  I have a number of his monographs on my shelf and I have been an admirer and follower since I was a boy.  His set of instructional books, the Basic Photo series, was among the first I read when becoming a photographer myself.  Every year I hang a new Ansel Adams calendar in my office.  I have a biography of him, one signed volume of his Yosemite images, and I find myself constantly drawn to his work.  These photographs are replenishing.

So when a new collection came out this past year—400 Photographs—it was a given that I would acquire it.  (I do not have all his books—there’s a heavy rotation of repeats strewn throughout his œuvre.)  I had it for some weeks before I found time to sit down and really go through it.  When I did I was surprised.

The collection is broken down into periods, beginning with some of his earliest images made when he was a boy, reprinted directly from the scrapbooks in which they were pasted, all the way up to the very early 1970s when he, according to the commentary, stopped making “important” photographs and devoted his time to the darkroom.  Gathered are most if not all his iconic images, many that will be familiar to those who have more than a passing acquaintance with his work…

…but also a number of relatively unknown photographs, peppered throughout, many of which show a less than absolute control on Adams’ part.  They do not come up to par.  In some, the composition is slightly “off” or the tonal range is not fully captured.

Which is not to say they are not beautiful.  Adams at his worst is equal to most others at their best.  Historically, it’s interesting and instructive to see the “not quites” and the “almost theres” among the otherwise perfect works we have all come to expect.  Yet rather than detract, these works actually enhance the overall impact of the collection, because there is variation, there is evidence of “better”, there is obvious progression.  The commentary between the periods by Andrea Stillman is concise, spare, and informative as to the distinctions in evidence.  This is a chronicle of an artist’s evolution.

Looking at an Ansel Adams photograph, one sometimes feels that the very air was different around him, that light passed from landscape to film plane through a more pristine medium, that nature itself stood still for a few moments longer so the image could be recorded with absolute fidelity in a way given to no other photographer.

As I went through the images, I listened to a new album.  New to me, at least, and in fact it was released this past year.  Levin Minnemann Rudess.

Who?

Of the three, two had been known to me before this year.  Tony Levin is a bassist of extraordinary range and ability.  Besides his own work, he seemed for a time the player the serious groups called in when their regular bassist was unavailable.  Which means he played bass for Pink Floyd in the wake of Roger Waters’ exit.  He played bass for Yes, Dire Straits, Alice Cooper, Warren Zevon, and even Paul Simon and Buddy Rich.

He was also one of the most prominent members of King Crimson during one of its best periods.  He is a session player in constant demand and his ability seems chameleonic.  He can play anything in almost any style.  He is one of those musicians who always works, is always in demand.

Given his associations, sometimes it is a surprise to hear his own work, which can either be described as a distillation of all his influences or as a complete departure from them.  Such would seem to be the case here.

Jordan Rudess plays keyboards and came out of the progressive schools of Keith Emerson, Rick Wakeman, UK, and others, although the first band with which he was associated was the Dixie Dregs.  He later joined Dream Theater, but like Levin has been a much-in-demand session player whose name I’ve seen pop up many times since the early 90s.

Marco Minnemann, then, is the only name with which I was unfamiliar, but that’s changing.  As a drummer, he’s played with former members of UK—Eddie Jobson and Terry Bozzio—and has been doing session work with metal groups.  I learned of him just this past year in association with guitarist Guthrie Govan, with whom, along with bassist Bryan Beller, he has formed a trio, The Aristocrats.  He seems committed to that unit, so I believe the album I’m discussing may be a one-off, an experiment for these three musicians.  He is an explosively complex, solid drummer.

What does this have to do with Ansel Adams?

Not much other than what I began with—precision.  There is an overwhelming technical precision here that, for the duration of my study of the Adams book, formed a complementary experience of sharp-edged landscapes and absolute control.  The LMR album is largely instrumental (which has slotted it into my writing queue) but fits no particular genre exactly.  Jazz?  Sure.  Metal?  Somewhat.  Fusion, certainly, but fusion of what?  Rudess’s runs evoke classical associations, but no single track is identifiable with a particular Great Composer.  This is experimental work, theory-in-practice, done at a high level of musicianship and compositional daring.  An aural high-wire act that is constructing the landscape as it records it.

As I said earlier, it happens often enough that technical prowess can substitute for significant content.  “Too many notes” can mask an absence of substance.  Too fine a presentation can distract from the fact that an image contains nothing worthwhile.

But when substance and technique are combined at a stratospheric level of ability, when performance melds precision and depth, then we have something truly special.

All I needed that afternoon was a fine wine to complete the immersive experience.

Quantum Branching…As Literature Embraces Science Fiction, the Past is Again and Again

Kate Atkinson’s latest novel, Life After Life, is a remarkable achievement.  Its several hundred pages of exquisitely controlled prose contain the story of Ursula Todd, who is, in the course of the story, born again and again and again.  Each life, some so very brief, ends in a tragic death, accidental, malevolent, heroic, painful, and each time she starts over, comes to the point where that mistake was but is now sidestepped, turned away, avoided.  She lives multiple times, each one different, and yet she remains herself.

The novel opens with a shocking scene—Ursula, a young woman living in Berlin, enters a café wherein she finds Adolf Hitler, surrounded by sycophants, enjoying his celebrity.  She pulls a pistol and takes aim…

Then she is born.

It is 1910, in the English countryside, and snowing heavily.  The scene is reminiscent of Dickens.  She is born.  First she dies from strangulation, the umbilical cord wrapped around her with no one around who knows what to do.  Then in the next life that obstacle is overcome.  And so it goes, as she ages, staggers through one life after another, growing a little older each time, her family battered by one damn thing after another.  Ursula herself, a middle child, watches as much as participates in the homely evolution of this middle class English family, and we are treated to an almost microscopic study of its composition—its hypocrisies, its crises, its successes, its failures.

Ursula endures.  As her name almost punningly suggests, she Bears Death, over and over.  She never quite remembers, though.  She has intense feelings of déjà vu, she knows such and such should be avoided, this and that must be manipulated, but she never quite knows why.  At times she comes perilously close to recognition, but like so much in life her actions are more ideas that seemed good at the time than any deeper understanding.

Unlike in the rigor of traditional time travel, here the past does change, but then this is not a time travel novel, at least not in any traditional sense.  You might almost say it’s a reincarnation story, but it’s not that, either, because Ursula never comes back as anyone other than herself.  At one point in the novel, time is described not as circular but as a palimpsest—layers, one atop another, compiling.  The result is a portrait, more complete than most, not of a life lived but of life as potential.  But for this or that, there wandered the future.  It is a portrait of possibility.

The big events of history are not changed, though.  Nothing Ursula does in her manifold existences alters the inevitability of WWII or Hitler or the Spanish Flu or any of the mammoth occurrences that dominate each and every life she experiences.

What she does change is herself.  And, by extension, her family, although all of them remain persistently themselves throughout.  It is only the consequences of their self-expression that become shaped and altered.

We see who are the genuine heroes, who the fools, the cowards, the victims and victors: where in one life none of this might emerge clearly, in the repeated dramas, with their minor changes, character comes inexorably to the fore.

Atkinson does not explain how any of this happens.  It’s not important, because she isn’t doing the kind of fiction we might encounter as straight up science fiction, where the machinery matters.  She’s examining ramifications of the personal in a world that is in constant flux on the day to day level even as the accumulation of all that movement builds a kind of monolithic structure against which our only real choice is to choose what to do today.  Consequently, we have one of the most successful co-options of a science fiction-like conceit into a literary project of recent memory.

On a perhaps obvious level, isn’t this exactly what writers do?  Reimagine the personal histories of their characters in order to show up possibility?

Future Historicity

History, as a discipline, seems to improve the further away from events one moves. Close up, it’s “current events” rather than “history.”  At some point, the possibility of objective analysis emerges and thoughtful critiques may be written.

John Lukacs, Emeritus Professor of History at Chestnut Hill College, understands this and at the outset of his new study, A Short History of the Twentieth Century, allows for the improbability of what he has attempted:

Our historical knowledge, like nearly every kind of human knowledge, is personal and participatory, since the knower and the known, while not identical, are not and cannot be entirely separate.

He then proceeds to give an overview of the twentieth century as someone—though he never claims this—living a century or more further on might.  He steps back as much as possible and looks at the period under examination—he asserts that the 20th Century ran from 1914 to 1989—as a whole, the way we might now look at, say, the 14th Century or the 12th and so on.  The virtue of our distance from these times is our perspective—the luxury of seeing how disparate elements interacted even as the players on the ground could not see them, how decisions taken in one year affected outcomes thirty, forty, even eighty years down the road.  We can then bring an analysis and understanding of trends, group dynamics, political movements, demographics, all that goes into what we term culture or civilization, to the problem of understanding what happened and why.

Obviously, for those of us living through history, such perspective is rare if not impossible.

Yet Lukacs has done an admirable job.  He shows how the outbreak and subsequent end of World War I set the stage for the collapse of the Soviet Empire in 1989, the two events he chooses as the bookends of the century.  He steps back and looks at the social and political changes as the result of economic factors largely invisible to those living through those times, and how the ideologies that seemed so very important at every turn were more or less byproducts of larger, less definable components.

It is inevitable that the reader will argue with Lukacs.  His reductions—and expansions—often run counter to what may be cherished beliefs in the right or wrong of this or that.  But that, it seems, is exactly what he intends.  This is not a history chock-full of the kind of detail used in defending positions—Left, Right, East, West, etc.—indeed, it is often stingy with detail.  Rather, this is a broad outline with telling opinions and the kind of assertions one might otherwise not question in a history of some century long past.  It is intended, I think, to spur discussion.

We need discussion.  In many ways, we are trapped in the machineries constructed to deal with the problems of the last century, and the machinery keeps grinding even though the problems have changed.  Pulling back—or even out of—the in situ reactivity seems necessary if we are to stop running in the current Red Queen’s Race.

To be sure, Lukacs makes a few observations to set teeth on edge.  For instance, he dismisses the post World War II women’s consciousness and equality movements as byproducts of purely economic conditions and the mass movement of the middle class to the suburbs.  He has almost nothing good to say about any president of the period but Franklin Roosevelt.

He is, certainly, highly critical of the major policy responses throughout the century, but explains them as the consequence of ignorance, which is probably true enough.  The people at the time simply did not know what they needed to know to do otherwise.

As I say, there is ample here with which to argue.

But it is a good place to start such debates, and it is debate—discussion, interchange, conversation—that seems the ultimate goal of this very well-written assay.  As long as it is  debate, this could be a worthy place to begin.

He provides one very useful definition, which is not unique to Lukacs by any means, yet remains one of those difficult-to-parse distinctions for most people and leads to profound misunderstandings.  He makes clear the difference between nations and states.  They are not the same thing, though they usually coincidentally overlap.  States, he shows, are artificial constructs with borders, governmental apparatus, policies.  Nations, however, are simply Peoples.  Hence Hitler was able to command the German nation even though he was an Austrian citizen.  Austria, like Germany, was merely a state.  The German People constituted the nation.

Lukacs—valuably—shows the consequences of confusing the two, something which began with Wilson and has tragically rumbled through even to this day.  States rarely impose a national identity; they rely on one already extant—though often largely unrealized.  And when things go wrong between states, quite often it is because one or the other has negotiated national issues with the wrong party.

Which leads to an intriguing speculation as to why nativist sympathies have such a difficult time taking root in this country.  Americans do not, by this definition, comprise a Nation.  A country, a state, a polity, certainly.  But not really a Nation.

And yet we often act as if we were.

Questions.  Discussion.  Dialogue.  This is the utility and virtue of this slim volume.

Greatless Illusion

The third book I read recently which resonated thematically with the previous two is one I have come somewhat late to given my inclinations.  But a new paperback edition was recently released and I considered buying it.  I hesitated as I was uncertain whether anything new or substantively unique was contained therein to make it worth having on my shelf.  I have other books along similar lines and while I am fond of the author, it seemed unlikely this book would offer anything not already covered.

Christopher Hitchens was a journalist and essayist and became one of our best commentators on current events, politics, and related subjects.  Even when I disagreed with him, I always found his arguments cogent, insightful, and never less than solidly grounded in available fact.

So when he published a book of his views on religion, it seemed a natural addition to my library, yet I missed it when it first came out.  Instead, I read Richard Dawkins’ The God Delusion, which I found useful and well-reasoned, but pretty much a sermon to one who needed no convincing.  Such books are useful for the examples they offer to underpin their arguments.

Such is the case with God Is Not Great: How Religion Poisons Everything.  Hitchens’ extensive travels and his experiences in the face of conflict between opposing groups, often ideologically-driven, promised a surfeit of examples, and he did not fail to provide them amply.

The title is a challenge, a gauntlet thrown at the feet of those with whom Hitchens had sizeable bones to pick.  In the years since its initial publication it has acquired a reputation, developed a set of expectations, and has become something of a cause célèbre sufficient for people to take sides without having read it.  I found myself approaching the book with a set of expectations of my own and, with mild surprise, had those expectations undermined.

Yes, the book is a statement about the nature of religion as an abusive ideology—regardless of denomination, sect, theological origin—and offers a full range of examples of how conflicts, both between people and peoples, are generally made worse by (or, more often than not, occur because of) religious infusions into the situation.  It is in many ways a depressing catalog of misuse, misinterpretation, misstatement, misunderstanding, and sometimes misanthropy born out of religious conviction.  Hitchens analyzes the sources of these problems, charts some of the history, and gives us modern day examples.

But he tempers much of this by drawing a distinction between individuals and ideologies.

He also opens with a statement that, in his opinion, we shall never be rid of religion.  This is quite unlike Dawkins and others who actually seem to feel humankind can be educated out of any need for religion.  Hitchens understood human nature too well to have any hope that this was possible.

He does allow that possibly religion allows some good people to be better, but he does not believe religion makes anyone not already so inclined good.

By the end of the book, there will likely be two reactions.  One, possibly the more common, will be to dismiss much of his argument as one-sided.  “He overlooks all the good that has been done.”  It is interesting to me that such special pleading only ever gets applied consistently when religion is at issue.  In so much else, one or two missteps and trust is gone, but not so in religion, wherein an arena is offered in which not only mistakes but serious abuse can occur time and time again and yet the driving doctrine is never called into question.  The other reaction will be to embrace the serious critique on offer, even the condemnations, and pay no attention to the quite sincere attempt to examine human nature in the grip of what can only be described as a pathology.

Because while Hitchens was a self-proclaimed atheist, he does take pains to point out that he is not talking about any sort of actual god in this book, only the god at the heart of human-made religions.  For some this may be a distinction without a difference, but for the thoughtful reader it is a telling distinction.  At the end of it all, Hitchens sees all—all—manifestations of gods, through the terms of their religions, as artifices.  And he wonders then why people continue to inflict upon themselves and each other straitjackets of behavior and ideology that, pushed to one extreme or another, seem to always result in some sort of harm, not only for the people who do not believe a given trope but for the believers themselves.

We are, being story-obsessed, caught in the amber of our narratives.  Per Mr. Thompson’s analysis of myth, we are never free of those stories—even their evocation for the purposes of ridicule brings us fully within them and determines the ground upon which we move.  The intractable differences over unprovable and ultimately unsubstantiated assumptions of religious dictate, per the history chronicled around the life of Roger Smith, have left us upon a field of direst struggle with our fellows, whose lack of belief often is perceived as a direct threat to a salvation we are unwilling ourselves to examine and question as valid, resulting in abuse and death borne out of tortured constructs of love.  Christopher Hitchens put together a bestiary of precedent demonstrating that treating as real the often inarticulate longings to be “right” in the sight of a god we ourselves have invented too often leads to heartache, madness, and butchery.

The sanest religionists, it would seem by this testament, are those with the lightest affiliation, the flimsiest of dedications to doctrine.  They are the ones who can step back when the call to massacre the infidel goes out.

All of which is ultimately problematic due simply to the inexplicable nature of religion’s appeal to so many.

But it is, to my mind, an insincere devotee who will not, in order to fairly assess the thing itself, look at all that has been wrought in the name of a stated belief.  Insincere and ultimately dangerous, especially when what under any other circumstance is completely wrong can be justified by that which is supposed to redeem us.

Monstrous Partiality

In keeping with the previous review, we turn now to a more modern myth, specifically that of our nation’s founding.  More specifically, one component which has from time to time erupted into controversy and distorted the civil landscape by its insistence on truth and right.

But first, a question:  did you know that once upon a time, in Massachusetts, it was illegal to live alone?

There was a law requiring all men and women to abide with families—either their own or others—and that no one, man or woman, was permitted to build a house and inhabit it by themselves.

John M. Barry details this and much more about early America which, to my knowledge, never makes it into history classes, at least not in primary or secondary schools, in his excellent book  Roger Williams and the Creation of the American Soul: Church, State, and the Birth of Liberty.


Discussion of the Founding—and most particularly the Founding Fathers—centers upon the Revolutionary Era collection of savants who shaped what became the United States.  It is sometimes easy to forget that Europeans had been on these shores, attempting settlements, for almost two centuries by then.  It’s as if that period, encapsulated as it is in quaint myths of Puritans, Pocahontas, Squanto, John Smith, and Plymouth Rock, occupies a kind of nontime, a pre-political period of social innocence in which Individuals, whose personalities loom large yet isolated, like Greek Gods, prepared the landscape for our later emergence as a nation.  My own history classes, I recall, did little to connect the English Civil War to the Puritan settlements and even less to connect the major convulsions in English jurisprudence of that period to the evolution of political ideas we tend to take for granted today.  In fact, it seems pains are taken to sever those very connections, as if to say that once here, on North American soil, what happened in Europe was inconsequential to our national mythos.

That illusion is shattered by Barry in this biography of not only one of the most overlooked and misunderstood Founders but of that entire morass of religious and political struggle which resulted in the beginnings of our modern understanding of the wall of separation between church and state.  More, he makes it viscerally real why  that wall not only came into being but had  to be.

If you learned about Roger Williams at all in high school, probably the extent of it was “Roger Williams was a Puritan who established the colony that became Rhode Island.  He contributed to the discussion over individual liberty.”  Or something like that.  While true, it grossly undervalues what Williams actually did and how important he was to everything that followed.

In a way, it’s understandable why this is the case.  Williams occupies a time in our history that is both chaotic and morally ambiguous.  We like to think differently of those who settled here than they actually were, and any deeper examination of that period threatens to open a fractal abyss of soul searching that might cast a shadow over the period we prefer to exalt.

But the seeds of Williams’ contribution were sown in the intellectual soil which to this day has produced a troubling crop of discontent between two different conceptions of what America is.

The Puritans (whom we often refer to as The Pilgrims) were religious malcontents who opposed the English church.  They had good reason to do so.  King James I (1566 – 1625) and then his son, Charles I (1600 – 1649), remade the Church of England into a political institution of unprecedented intrusive power, establishing it as the sole legitimate church in England and gradually driving out, delegitimizing, and anathematizing any and all deviant sects—including and often most especially the Puritans.  Loyalty oaths included mandatory attendance at Anglican services and the adoption of the Book of Common Prayer.  The reason this was such a big deal at the time was because England had become a Protestant nation under Queen Elizabeth I and everything James and Charles were doing smacked of Catholicism (or Romishness), which the majority of common folk had rejected, and not without cause.  The history of the religious whipsaw England endured in these years is a blood-soaked one.  How people prayed, whether or not they could read the Bible themselves, and their private affiliations to their religious conceptions became the stuff of vicious street politics and uglier national power plays.

So when we hear that the Pilgrims came to America in order to worship as they saw fit, we sympathize.  Naturally, we feel, everyone should be allowed to worship in their own way.  We have internalized the idea of private worship and the liberty of conscience—an idea that had no currency among the Puritans.

The Puritans were no more tolerant than the high church bishops enforcing Anglican conformity in England.  They thought—they believed—their view of christian worship was right and they had come to the New World to build their version of perfection.  A survey of the laws and practices of those early colonies gives us a picture of ideological gulags where deviation was treated as a dire threat, a disease, which sometimes required the amputation of the infected individual: banishment.

Hence the law forbidding anyone from living alone.  It was thought that in isolation, apart from people who could keep watch over you and each other, the mind’s natural proclivity to question would create nonconformity.

Conformity is sometimes a dirty word today.  We pursue it but we reserve the right to distance ourselves from what we perceive as intrusiveness in the name of conformity.  Among the Puritans, conformity was essential to bring closer the day of Jesus’ return.  Everyone had to be on the same page for that to occur.

(Which gave them a lot of work to do.  Not only did they have to establish absolute conformism among themselves, but they would at some point have to go back to England and overthrow the established—i.e. the King’s—order and convert their fellow Britons, and then invade the Continent and overthrow Catholicism, and all the while they had to go out into the wilderness of North America and convert all the Indians…but first things first, they needs must become One People within their own community—something they were finding increasingly difficult to do.)

Into this environment came Roger Williams and his family.  Williams was a Puritan.  But he also had a background as apprentice to one of the most formidable jurists in English history, Sir Edward Coke, the man who ultimately curtailed the power of the king and established the primacy of Parliament.  Coke was no Puritan—it’s a question if he was anything in terms of religious affiliation beyond a christian—but he was one of the sharpest minds and most consistent political theorists of his day.  He brought Williams into the fray, where the boy saw first-hand how power actually worked.  He saw kings behave pettily, injustices imposed out of avarice, vice, and vengeance in the name of nobly-stated principles.  And, most importantly, he saw how the church was corrupted by direct involvement in state matters.

This is a crucial point of difference between Williams and later thinkers on this issue.  Williams was a devout christian.  What he objected to was the way politics poisoned the purity that was possible in religious observance.  He wanted a wall of separation in order to keep the state out of the church, not the other way around.  But eventually he came to see that the two, mingled for any reason, were ultimately destructive to each other.

Williams was an up-and-coming mover among the Puritans, but the situation for him and many others became untenable and he decamped to America in 1631, where he was warmly received by the governor of Massachusetts, John Winthrop.  In fact, he was eagerly expected by the whole established Puritan community—his reputation was that great—and was immediately offered a post.

Which he turned down.

Already he was thinking hard about what he had witnessed and learned and soon enough he came into conflict with the Puritan regime over matters of personal conscience.

What he codified eloquently was his observation that the worst abuse of religiously-informed politics (or politically motivated religion) was the inability of people to be objective.  A “monstrous partiality” inevitably emerged to distort reason in the name of sectarian partisanship, and this was destructive to communities, to conscience, to liberty.

For their part, the Puritans heard this as a trumpet call to anarchy.

The Massachusetts Puritans came very close to killing Williams.  He was forced to flee his home in the midst of a snowstorm while still recovering from a serious illness.  He was succored by the Indian friends he had made, primarily because he was one of the very few Europeans who had bothered to learn their language.  They gave him land, which eventually became Providence Plantation, and he attracted misfits from all over.  Naturally, Massachusetts saw this as a danger to their entire program.  If there was a place where nonconformity could flourish, what then became of their City on the Hill and the advent toward which they most fervently worked?

The next several years saw Williams travel back and forth across the Atlantic to secure the charter for his colony.  He knew Cromwell and the others and wrote his most famous book, The Bloody Tenent of Persecution, for Cause of Conscience, in 1644, right before returning to America to shepherd his new colony.  In this book is clearly stated, for the first time, the argument for a firm wall of separation.  It is the cornerstone upon which the later generation of Founders built and upon which today rests the history of religious freedom we take as a natural right.

But the struggle was anything but civil and the abuses to which Williams responded in his call for a “Liberty of conscience” are not the general picture we have of the quaint Pilgrims.

Barry sets this history out in vivid prose, supported by extensively sourced research, and grounds the story in terms we can easily understand as applicable to our current dilemma.  One may wonder why Williams is not more widely known, why his contributions are obscured in the shadow of what came later.  Rhode Island was the first colony with a constitution that did not mention god, and it was established for over fifty years before a church was built in Providence.

Williams himself was not a tolerant man.  He loathed Baptists and positively hated Quakers.  But he valued his principles more.  Perhaps he saw in his own intolerance the very reason for adoption of what then was not merely radical but revolutionary.

Light Fallen

I’ve read three books in tandem which are connected by subtle yet strong filaments.  Choosing which one to begin with has been a bit vexatious, but in the end I’ve decided to do them in order of reading.

The first is an older book, handed me by a friend who thought I would find it very much worth my while.  I did, though possibly not for the reasons he may have thought I would.  But it grounds a topic in which we’ve been engaged in occasionally vigorous debate for some time and adds a layer to it which I had not expected.

William Irwin Thompson’s  The Time Falling Bodies Take To Light  is about myth.  It is also about history.  It is also about grinding axes and challenging paradigms.  The subtitle declares: Mythology, Sexuality & the Origins of Culture.  This is a lot to cover in a mere 270-some pages, but Mr. Thompson tackles his subject with vigor and wrestles it almost into submission.

His thesis is twofold.  The first, that Myth is not something dead and in the past, but a living thing, an aggregate form of vital memes, if you will, which recover any lost force by their simple evocation, even as satire or to be dismissed.  Paying attention to myth, even as a laboratory study, brings it into play and informs our daily lives.

Which means that myth does not have a period.  It is ever-present, timeless, and most subtle in its influence.

His other thesis, which goes hand in hand with this, is that culture as we know it is derived entirely from the tension within us concerning sex.  Not sex as biology, although that is inextricably part of it, but sex as identifier and motivator. That the argument we’ve been having since, apparently, desire took on mythic power within us over what sex means, how it should be engaged, where it takes us has determined the shapes of our various cultural institutions, pursuits, and explications.

It all went somehow terribly wrong, however, when sex was conjoined with religious tropism and homo sapiens sapiens shifted from a goddess-centered basis to a god-centered one and elevated the male above the female.  The result has been the segregation of the female, the isolation of the feminine, and the restriction of intracultural movement based on the necessity to maintain what amounts to a master-slave paradigm in male-female relationships.

Throughout all this “fallen” power play, ancient myths concerning origins and the latent meanings of mutual apprehensions between men and women (and misapprehensions) have continued to inform the dialogue, often twisted into contortions barely recognizable one generation to the next but still in force.

There is much here to consider.  Thompson suggests the rise of the great monotheisms is a direct result of a kind of cultural lobotomy in which the Father-God figure must be made to account for All, subjugating if not eliminating the female force necessary for even simple continuation.  The necessity of women to propagate the species, in this view, is accommodated with reluctance and they are, as they have been, shoved into cramped confines and designated foul and evil and unclean in their turn, even as they are still desired.  The desire transforms the real into the ideal and takes on the aspects of a former goddess worship still latent in mythic tropes.

Certainly there is obvious force to this view.

The book is marred by two problems.  I mentioned the grinding of axes.  Time was published originally in 1981 and, mostly in the first third but sprinkled throughout, there is an unmasked loathing of evolutionary psychology and sociobiology.  He takes especial aim at E.O. Wilson for promulgating certain reductive explanations for prehistoric cultural evolution based wholly on biological determinants.  Thompson’s prejudice is clear: he wants even early homo sapiens to be special in its cultural manifestations, and he derides attempts at exclusively materialist explanations.  One hopes that the fact that E.O. Wilson himself has since moved away from these earlier “purely” biological considerations would result in an updating.

But interestingly, part of Thompson’s rejection of such early modeling comes from an apparent belief in Race Memory.  Not, as I might find plausible, race memory as deeply-entrenched memes, but apparently as some undiscovered aspect of our genome.  He never quite comes out and claims that such race memory is encoded in our DNA, but he leaves little room for alternative views.

Hence, he asserts, the genuine power of myth, since it is carried not only culturally, but quasi-biologically, as race memory.  Which we ignore at our peril.

He does not once mention Joseph Campbell, whose work on the power of myth I think goes farther than most in explicating how myth informs our lives, how myth is essentially meaning encoded in ideas carried in the fabric of civilization.  He does, however, credit Marija Gimbutas, whose work on goddess cultures—extending back before the rise of Sumer and the constellation of civilizations commonly recognized as the “birth” of civilization—was attacked with serious allegations of fraud in order to undermine her legitimacy and negate her thesis that early civilizations were certainly more gender equal, if not outright female dominated.  (Just a comment on the so-called “birth” of civilization: it has long been remarked that ancient Sumeria appeared to “come out of nowhere,” a full-blown culture with art and some form of science.  But common sense would tell us that such a “birth” had to be preceded by a long pregnancy, one which must have contained all the components of what emerged.  The “coming out of nowhere” trope, which sounds impressive on its face, would seem to be the cultural equivalent of the virgin birth myth that has informed so many civilizations and myth cycles since…)

My complaint, if there is any, is that he undervalues the work of geneticists, biologists, and sociometricians, seeking apparently to find a causation that cannot be reduced to a series of pragmatic choices taken in a dramatically changing ecosystem or evolutionary responses to local conditions.  Fair enough, and as far as it goes, I agree.  Imagination, wherever and whenever it sprang into being, fits badly into the kind of steady-state hypothesizing of the harder sciences when it comes to how human society has evolved.  But to dismiss them as irrelevant in the face of an unverifiable and untestable proposition like Race Memory is to indulge in much the same kind of reductionist polemic that has handed us the autocratic theologies of “recorded history.”

Once Thompson moves out of the speculative field of, say, 8,000 B.C.E. and older and into the period wherein we have records, his attack on cherished paradigms acquires heft and momentum and the charm of the outsider.  (His mention, however, of Erich von Daniken threatens to undo the quite solid examination of the nature of “ancient” civilizations.)  It is easy enough to see, if we choose to step out of our own prejudices, how the march of civilization has been one of privileging male concerns and desires over the female and diminishing any attempt at egalitarianism in the name of power acquisition.  The justification of the powerful is and probably has always been that they are powerful, and therefore it is “natural” that they command.  Alternative scenarios suffer derision or oxygen deprivation until a civilization is old enough that the initial thrill and charm of conquest and dominance fades and more abstruse concerns acquire potency.

But the value of The Time Falling Bodies Take To Light  may be in its relentless evocation of institutional religion as a negation of the spiritual, as if to say that since we gave up any kind of natural and sane attitude toward sexuality and ignored the latent meaning in our mythologies we have been engaged in an ongoing and evermore destructive program to capture god in a bottle and settle once and for all what it is we are and should be.  When one looks around at the religious contention today, it is difficult if not impossible to say it is not all about men being in charge and women being property.  Here and there, from time to time, we hear a faint voice of reason crying out that this is a truly stupid thing to kill each other over.

End Times

The Sixties.

Depending on what your major concerns are, that period means different things.  For many people, it was revolution, civil rights, the peace movement.  For many others, it was music.

For Michael Walker, it was evidently the latter.  In his new book, What You Want Is In The Limo,  he chronicles what he considers the End of the Sixties through the 1973 tours of three major rock groups—The Who, Led Zeppelin, and Alice Cooper.

His claim, as summarized in the interview linked above, is that after Woodstock, the music industry realized how much money could be made with this noisy kid stuff (which by Woodstock it no longer was—kid stuff, that is) and started investing heavily, expanding the concert scene, turning it from a “cottage industry” into the mega-million-dollar monster it has become.  1973, according to Walker, is the year all this peaked for the kind of music that had dominated The Sixties, made the turn into rock star megalomania, and ushered in the excesses of the later Seventies and the crash-and-burn wasteland of the Punk and New Wave eras (with a brief foray into Disco and cocaine before the final meltdown).

The bands he chose are emblematic, certainly, but of the end of the Sixties?  I agree with him that 1973 is the year the Sixties ended, but the music aspect, as always, was merely a reflection, not a cause.  What happened in 1973 that brought it all to an ignominious close was this: Vietnam ended.

(Yes, I know we weren’t out until 1975, but in 1972 Nixon went to China, which resulted in the shut-down of the South China rail line by which Russia had been supplying North Vietnam, and in 1973 the draft ended, effectively deflating a goodly amount of the rage over the war.  The next year and a half were wind-down.)

Walker’s analysis of the cultural differences before and after 1973 is solid, but while the money was certainly a factor, a bigger one was exhaustion.  After a decade of upheaval over civil rights and the war in Vietnam, people were tired.  Vietnam ended and everyone went home.  Time to party.  Up to that point, the music—the important music, the music of heft and substance—was in solidarity with the social movements, and protest was a major component of the elixir.  Concerts were occasions for coming together in a common aesthetic, the sounds that distinguished Woodstock acting as a kind of ur-conscious bubble, binding people together in common cause.

Once the primary issues seemed settled, the music was just music for many people, and the aspects which seemed to have informed the popularity of groups like Cream or the Stones or the Doors lost touch with the zeitgeist.  What had begun as an industry of one-hit wonders returned to that ethic and pseudo-revolutionary music began to be produced to feed the remaining nostalgia.

(Consider, for example, a group like Chicago, which began as a socially conscious, committed-to-revolution act—they even made a statement to that effect on the inside cover of their second album—and yet by 1975 was cashing in on power ballads and love songs, leaving the heavily experimental compositions of their first three albums behind and eschewing their counter-culture sensibilities.)

To my mind the album that truly signified the end of that whole era was The Moody Blues’ Seventh Sojourn, which was elegiac from beginning to end.  The last cut, I’m Just A Singer In A Rock’n’Roll Band, was a rejection of the mantle of guru bestowed on many groups and performers during the Sixties.  With that recording, the era was—for me—over.

Also for me, Alice Cooper never signified anything beyond the circus act he was.  Solid tunes, an edgy stage act, and all the raw on-the-road excess that was seen by many to characterize supergroups, but most of Cooper’s music was vacuous pop-smithing.  The Who and Led Zeppelin were something else and both of them signify much more in artistic terms.  Overreach.

But interestingly enough, different kinds of overreach.  Walker talks of the self-indulgence of 45-minute solos in the case of Zeppelin, but this was nothing new—Cream had set the standard for seemingly endless solos back in 1966, and Country Joe McDonald produced an album in the Nineties with extended compositions and solos.  Quadrophenia was The Who’s last “great” album, according to Walker, and I tend to agree, but two kinds of exhaustion are at work in these two examples.  Zeppelin exhausted themselves in the tours and the 110% performances.  The Who exhausted the form in which they worked.  After Quadrophenia, all they could do was return to a formula that had worked well before, but which now gained them no ground in terms of artistic achievement.  As artistic statement—as an example of how far they could push the idiom—that album was a high-water mark that still stands.  But the later Who Are You is possibly their best-crafted work after Who’s Next.  “Greatness”—whatever that means in this context—had not abandoned them.  But the audience had changed.  Their later albums were money-makers with the occasional flash of brilliance.  They were feeding the pop machine while trying to compose on the edge, a skill few manage consistently for any length of time.

“Excess” is an interesting term as well.  Excess in what?  The combination of social movement with compositional daring had a moment in time.  When that time passed, two audiences parted company: those who wanted to party (often nostalgically) and those who were truly enamored of music as pure form.  They looked across the divide at each other, and the accusation of excess was aimed by each at different things.  The one disdained the social excess of the other, while the latter loathed the musical excess of the former.  People gleefully embracing Journey, disco, punk, and a gradually resurgent country-western genre thought the experimental explorations of the post-Sixties “art rock” scene were self-indulgent, elitist, and unlistenable.  People flocking to Yes and Emerson, Lake & Palmer concerts, cuing up Genesis and UK on their turntables (and retroactively filling out their classical collections), found the whole disco scene and designer-drug culture grotesque.  Yet in many ways they had begun as the same social group, before the End of the Sixties.

The glue that had bound them together evaporated with the end of the political and social issues that had produced the counterculture and its attendant musical reflection in the first place.  Without that glue, diaspora.

And the forms keep breaking down into smaller and smaller categories, which is in its own way a kind of excess.  The excess of pointless selectiveness.

Is the Novel Still Dying?

In 1955, Norman Mailer was declaring the death of the novel.  A bit more than a decade later, it was John Barth’s turn.  There has since been a string of writers of a certain sort who clang the alarm and declare the imminent demise of the novel, the latest being a selection of former enfants terribles like Jonathan Franzen and David Foster Wallace.

Philip Roth did so a few years back, adding that reading is declining in America.  The irony is that he made such claims at a time when polls suggested exactly the opposite: more people were reading books in 2005 (as a percentage of the adult population) than ever before.  In my capacity as one-time president of the Missouri Center for the Book, I was happily able to address a group of bright adolescents with the fact that reading among their demographic had, for the first time since such things had been tracked, gone precipitously up in 2007.

And yet in a recent piece in the Atlantic, we see a rogues’ gallery of prominent littérateurs making the claim again that the novel is dying, the art of letters is fading, and we are all of us doomed.

Say what you will about statistics, such a chasm between fact and the claims of those one might expect to know has rarely been greater.  The Atlantic article goes on to point out that these are all White Males who seem to be overlooking the product of everyone but other White Males.  To a large extent, this is true, but it is also partly deceptive.  I seriously doubt, if directly challenged, any of them would say works by Margaret Atwood or Elizabeth Strout fall short of any of the requirements for vital, relevant fiction at novel length.  I doubt any of them would gainsay Toni Morrison, Mat Johnson, or David Anthony Durham.

But they might turn up an elitist lip at Octavia Butler, Samuel R. Delany, Tananarive Due, Nalo Hopkinson, Walter Mosley, or, for that matter, Dennis Lehane, William Gibson, and Neal Stephenson (just to throw some White Males into the mix as comparison).  Why?

Genre.

The declaration back in the 1950s that “the novel is dead” might make more sense if we capitalize The Novel.  “The Novel”—the all-encompassing, universal work that attempts to make definitive observations and pronouncements about The Human Condition—has been dead since it was born, but because publishing was once constrained by technology and distribution to a relative handful of works in a given year compared to today, it seemed possible to write the Big Definitive Book.  You know, The Novel.

Since the Fifties, it has become less and less possible to do so, at least in any self-conscious way.  For one thing, the Fifties saw the birth of the cheap paperback, which changed the game for many writers working in the salt mines of the genres.  The explosion of inexpensive titles that filled the demand for pleasurable reading (as opposed to “serious” reading) augured the day when genre would muscle The Novel completely onto the sidelines and eventually create a situation in which the most recent work by any self-consciously “literary” author had to compete one-on-one with the most recent work by the hot new science fiction or mystery author.

(We recognize today that Raymond Chandler was a wonderful writer, an artist, “despite” his choice of detective fiction.  No one would argue that Ursula K. Le Guin is a pulp writer because most of her work has been science fiction or fantasy.  But it is also true that the literary world tries to coopt such writers by remaking them into “serious” authors who “happened” to be writing in genre, trying ardently to hold back the idea that genre can ever be the artistic equivalent of literary fiction.)

The Novel is possible only in a homogenized culture.  Its heyday would have been when anything other than the dominant (white, male-centric, protestant) cultural model was unapologetically dismissed as inferior.  As such, The Novel was as much a meme supporting that culture as any kind of commentary upon it, and a method of maintaining a set of standards reassuring the keepers of the flame that they had a right to be snobs.

Very few of Those Novels, I think, survived the test of time.

And yet we have, always, a cadre of authors who very much want to write The Novel and when it turns out they can’t, rather than acknowledge that the form itself is too irrelevant to sustain its conceits at the level they imagine for it, they blame the reading public for bad taste.

If the function of fiction (one of its functions, a meta-function, if you will) is to tell us who we are today, then just looking around it would seem apparent that the most relevant fiction today is science fiction.  When this claim was made back in the Sixties, those doing what they regarded as serious literature laughed.  But in a world that has been qualitatively as well as quantitatively changed by technologies stemming from scientific endeavors hardly imagined back then, it gets harder to laugh this off.  (Alvin Toffler, in his controversial book Future Shock, argued that science fiction would become more and more important because it taught “the anticipation of change” and buffered its devotees from the syndrome he described, future shock.)

Does this mean everyone should stop writing anything else and just do science fiction?  Of course not.  Science fiction is not The Novel.  But it is a sign of where relevance might be found.  Society is not homogeneous (it never was, but there was a time we could pretend it was) and the fragmentation of fiction into genre is a reflection that all the various groups comprising society see the world in different ways, ways which often converge and coalesce, but which nevertheless retain distinctive perspectives and concerns.

A novel about an upper middle class white family disagreeing over Thanksgiving Dinner is not likely to overwhelm the demand for fiction that speaks to people who do not experience that as a significant aspect of their lives.

A similar argument can be made for the continual popularity and growing sophistication of the crime novel.  Genre conventions become important in direct proportion to the recognition of how social justice functions, especially in a world with fracturing and proliferating expectations.

Novel writing is alive and well and very healthy, thank you very much, gentlemen.  It just doesn’t happen to be going where certain self-selected arbiters of literary relevance think it should be going.  If they find contemporary literary fiction boring, the complaint should be aimed at the choice of topic or the lack of perception on the part of the writer, not on any kind of creeping morbidity in the fiction scene.

Besides, exactly what is literary fiction?  A combination of craft, salient observation, artistic integrity, and a capacity to capture truth as it reveals itself in story?  As a description, that will do.

But then what in that demands that the work eschew all attributes that might be seen as genre markers?

What this really comes down to, I suspect, is a desire on the part of certain writers to be some day named in the same breath with their idols, most of whom one assumes are long dead and basically 19th Century novelists.  Criticizing the audiences for not appreciating what they’re trying to offer is not likely to garner that recognition.

On the other hand, most of those writers—I’m thinking Dickens, Dumas, Hugo, Hardy, and the like—weren’t boring.  And some of the others—Sabatini, Conan Doyle, Wells—wrote what would be regarded today as genre.

To be fair, it may well be that writers today find it increasingly difficult to address the moving target that is modern culture.  It is difficult to write coherently about a continually fragmenting and dissolving landscape.  The speed of change keeps going up.  If such change were just novelty, and therefore essentially meaningless, then it might not be so hard, but people are being forced into new constellations of relationships and required to reassess standards almost continually, with information coming to them faster and faster, sometimes so thickly it is difficult to discern shape or detail.  The task of making pertinent and lasting observations about such a kaleidoscopic view is daunting.

To do it well also requires that that world be better understood almost down to its blueprints, which are also being redrafted all the time.

That, however, would seem to me to be nothing but opportunity to write good fiction.

But it won’t be The Novel.

____________________________________________________________________

Addendum:  When I posted this, I was challenged about my claim that Mailer said any such thing. Some suggested Philip Roth, others went back even further, but as it turns out, I have been unable to track down who said exactly what and when. Yet this is a stray bit of myth that refuses to die.  Someone at sometime said (or quoted someone saying, or paraphrased something ) that the Novel Is Dying and it persists.  It has become its own thing, and finding who did—or did not—say it may be problematic at best.  It is nonetheless one of those things that seems accepted in certain circles.  It would be helpful if someone could pin it down, one way or the other.

Life On The Dark Side

There is a moment in Dennis Lehane’s Live By Night in which the protagonist, Joe Coughlin—Joseph to his father, the man against whom Joe gauges himself all his life—realizes that he is not what he wants to be, what he always asserted himself to be.

“How many men have you killed?” Esteban asked.

“None,” Joe said.

“But you’re a gangster.”

Joe didn’t see the point in arguing the definition between gangster and outlaw because he wasn’t sure there was one anymore. “Not all gangsters kill people.”

“But you must be willing to.”

Joe nodded. “Just like you.”

“I’m a businessman. I provide a product people want. I kill no one.”

“You’re arming Cuban revolutionaries.”

“That’s a cause.”

“In which people will die.”

“There’s a difference,” Esteban said. “I kill for something.”

“What? A fucking ideal?” Joe said.

“Exactly.”

“And what Ideal is that, Esteban?”

“That no man should rule another’s life.”

“Funny,” Joe said, “outlaws kill for the same reason.”

Throughout the novel, Joe is teasing at distinctions.  He gets involved in crime to distinguish himself from his father and his older brothers.  He disobeys his boss in order to fulfill an image of himself as his own man.  He takes as lover his boss’s moll because she is someone he wants more than he ever wanted anything before and cannot see why he should not risk all in order to be who he wants to be.

It costs him and in the end he loses—constantly and dearly—even as he achieves exactly that goal, to be himself.

Live By Night may be a turning point for Lehane, who has been consistently raising the bar in his own work by engaging his worlds and his characters at a level beyond the expectations of noir.

Joe Coughlin considers himself an outlaw.  Not a gangster.  For him, there is a fine but significant difference.  While both engage in similar tactics, the reasons are different, and in his own way Joe seems to think there is a moral distinction.  The outlaw sets his own rules, but reserves the right—indeed, believes in the necessity—of setting limits on what he will and will not do in pursuit of his goals.  He will not kill indiscriminately.

This alone sets him at odds with his putative superiors.  As far as Joe is concerned, if he achieves the same thing without indulging in what he believes to be senseless violence, why should anyone be disappointed?

Sometimes this works out well and everyone is happy.  Other times, it runs afoul of a deeper motivation on the part of the people with whom he is in league.

Setting the story during Prohibition, Lehane gives us a rich view of the borderline landscapes where the illicit and licit blur into each other.  In Joe’s own view, he and his kind “live by night,” where the rules are murkier, the motives different, the standards other than for those who live in the day.  Day and Night are almost metaphysical concepts.  Similarities abound, but they are in many ways superficial.

Joe begins in Boston, the son of a prominent man in the police department who despairs of his youngest boy, even while he loves him.  The Oedipal tangles binding them in an impossible relationship are revealed but only as foundational constructs.  Nothing can be resolved between them.  Life has taken them in such directions that they cannot accommodate each other.

And yet their lives intersect tragically when Joe is sent to prison and falls into the orbit of one of the most powerful mob bosses on the east coast.  Joe plays the situation masterfully, but the game is ultimately rigged, and the house claims its tonnage of flesh over the course of a career that sees Joe rise to power in Florida, becoming the chief rum runner in the Gulf.

What sets this story above the standard-issue gangster novel is Lehane’s insistence on a moral center that, flawed as it is, possesses real force for Joe and takes him in directions that often irritate him because it would be simpler, easier to just go along with the power structure.  In this, Joe becomes iconic—a moral man (such as he is) caught within a broken system.

As well, Lehane’s wordcraft—his art, his dextrous use of image—puts him on par with Chandler and Cain, Ross Macdonald and Hammett.  There is a flavor of Scott Fitzgerald in his evocations, in the in-built tragedy, in the almost Shakespearean psychologies at play.  Even the minor, bit players feel fully fleshed and viscerally authentic.

And the passion is narcotic.  Joe loves two women in the course of the novel and Lehane makes it real.  Through this as much as anything else he shows us the costs of being an outlaw, of refusing the safer trajectories of life.  Joe makes his choices—because he can and also because he can’t not—and accepts the risks.

A superior read.

Persistent Ghosts

Recently I read two novels that, after some thought, work as examples of effective and ineffective sequels.  I confess up front I’m stretching things to make a point here and I in no way recommend a similar reading strategy.  I’m indulging myself in this in order to explain something.

I haven’t read Philip Roth since Portnoy’s Complaint came out in paperback.  Yes, I read it that long ago and, yes, I was probably far too young for it.  My impression of it at the time is hard to recapture, but it left me kind of stunned.  For one, I hadn’t encountered that kind of writing before (not even in some of the porn magazines I’d snuck into the house) and to see it in something on any best seller list was a shock to my 13-year-old psyche.  For another, the self-conscious analysis of an adolescent “matter in transition” surprised me.  I’m not sure it helped or just made me feel that the malaise in which I found myself then (and for a few years to come) was inevitable, which was depressing.

For whatever reason, I never went back to Roth.  From time to time I’ve thought that might have been a mistake.  He’s a Big Deal and maybe I’ve missed something.

So a month or so back I found a couple of used copies of his later novels, picked them up, and the first one I read was Exit Ghost.  For those who’ve kept up, of course, this is one of the ending books in his ongoing Zuckerman series.  From this novel, I gather Zuckerman is a kind of alter ego for Roth himself.  A famous and successful writer (they aren’t always the same thing) moving through the travails of his fame and success, observing with his writer’s eye the changing landscapes around him.

In this one, Zuckerman has been living as an isolate in the country for several years, especially after prostate surgery which has left him both incontinent and impotent.  He returns to New York on the promise of a new procedure that may at least address his incontinence.  Roth vividly allows the reader to feel the misery of Zuckerman’s condition.  While in New York, Zuckerman meets a young couple who wish to leave (this is the aftermath year of 9/11) for some place Not New York, and offer to swap their apartment for his cabin for a year.

Zuckerman falls headlong into lust for the wife.

He begins working on a fictionalized treatment of their potential liaison, cleverly counterpointing it with what actually happens, at least in their conversations, which he (fictionally) idealizes.  The fictional treatment makes her more self-possessed and himself cleverer.  While all this is going on, Zuckerman finds himself dealing with resurrected ghosts of his literary (and erotic) past and the fact that he no longer knows how to function in this New York after having been away so long.

The writing is beautiful.  There are sentences here superbly crafted, achingly fraught with meaning.  I can see why Philip Roth is considered so highly.

But there is, in the end, only one ghost present which is seeking exit.  Portnoy.  It seems he is still writing about the problems of wanting to get laid, not getting laid, and wishing ardently not to feel guilty about either condition.  Fifty-plus years after my last Philip Roth novel, I find that the work is still, at least in part, about the same things.  At least, in this instance.

Portnoy, however, is rather pathetic as a ghost.  He doesn’t disturb much other than the memory of erections no longer possible.  He moves around in the ruins of what was once a vital life, trying to find a way of accepting things as they are, not quite succeeding, and changing nothing.

Tim Powers, however, gives us much more tangible—and dangerous—ghosts in his Hide Me Among The Graves, which is at least a thematic sequel to his The Stress of Her Regard.  As in the previous novel, Powers gives us vampires, but not of the usual sort.  Powers’ vampires are not half-rotted corpses rising, undead, from graves, former humans with a thirst for their living cousins’ blood and a desire to replicate themselves.  Rather, Powers gives us the Nephilim, the remnants of a race that once dominated the Earth before the rise of the oxygen-breathing, fast-living creatures of a Cambrian ecosystem with no place for silicate-based life.  For Powers, these holdovers are the Lamiae, and they feed on iron and love in a grotesque symbiosis, one byproduct of which is artistic brilliance.  Among their captive suitors are Lord Byron, Percy Shelley, John Keats, Coleridge.

With their attention comes madness and the destruction of all competitors for the obsessive love they seem to crave.  Long life, genius, and ultimately a kind of moral corruption that ends up justifying any destruction in the name of…

Well, continuation, really.  These are ghosts that seek actively to persist.

While they come from outside the psyche, they are profoundly dependent on it.  On the willingness of their human partners, on their devotion, their protection, really, and therefore, for Powers, everything comes down to a matter of will.

In The Stress of Her Regard, the artistic center is represented by Byron and Shelley.  In this new novel, that center is the Rossettis—Dante Gabriel and Christina, specifically, with Swinburne as a sort of fifth wheel who learns about the Lamiae and very much wants their attention, pining for the brilliance that results from it.

And as in the previous novel, it is those on the sidelines who are instrumental in ending the possessions of the ghosts.

As in the Roth, sex is very much at the heart of the infection.  There is spiritual V.D. in the relations Powers depicts.  We all bring our ghosts along to bed with us, but in the case of the Nephilim these are ghosts with lingering, almost incurable consequences.  And yet, celibacy is no guarantor of health.  Those with whom one’s cousin sleeps could kill you just because.

The brilliance that is a symptom of their infection strikes one as kin to the apparent genius unlocked by syphilis, as in people like Nietzsche.

Powers’ ghosts move amid ruins as well, in this case the ancient tumbledowns of a London burned by Boadicea, who has herself become one of the Nephilim.  The new London often seems not much more than an incipient ruin itself as the protagonists, John Crawford and Adelaide McKee—both collateral damage in their own ways of the bigger game being played among these ancient monsters—strive to defeat them so they can save their daughter and try to have something like a normal life in which simple love dominates.

In this, Powers shows us a place of solace, a resolution, a condition wherein the ghosts can finally quieten and peace has a chance to succeed.  The ghosts are recognizably Outside, and putting them back outside offers a chance to go on wholly according to one’s own will.

Roth, on the other hand, shows us someone whose ghosts are completely of his own contrivance who treats them as if they are (or should be) something Outside—that can be run from, hidden from, denied.  The failure to recognize them for what they are—ultimately failures of will—condemns Zuckerman to a sophisticated kind of adolescent denial of reality.  Success—however it is defined, no matter how modest—is impossible.

In this, curiously, there is one other similarity between the subtexts of the two works, and that is that genius can be a trap.  What we might sacrifice for it can cut us off from kinder choices, saner trajectories, blind us to certain obvious realities, and give us a justification to cause harm without acknowledging that its expression, too, is a matter of will.  Powers, of the two, shows us clearly that genius is no excuse for embracing monsters or giving our lives over to ghosts.  I’m not altogether sure Roth would accept that formulation.