The vibe moves

In 1633 Galileo was put on trial by the Roman Catholic Inquisition. The ailing mathematician suffered through the indignity and ordeal of the trial only to be declared guilty of advocating heliocentrism — the theory that the sun, not the earth, lay at the center of the universe. Galileo’s book, “Dialogue Concerning the Two Chief World Systems”, was the principal piece of evidence presented against him. But as Dan Hofstadter belabors in his book The Earth Moves, there was no material debate on the merits of Galileo’s arguments. The only relevant question was whether Galileo had usurped the authority of the Catholic Church over the interpretation of scripture.

So the Counter-Reformation was fighting a losing battle against a freer, and often less literal reading of the Scriptures. Yet if there was one thing that had concerned the Council of Trent, it was the possibility that laymen would decide for themselves what passages in the Bible could be interpreted other than literally. In fact, the issue of the earth traveling about the sun had little if any bearing on the Catholic faith. But the notion that persons without theological training could decide for themselves to read this or that biblical passage in a non-literal sense constituted a mortal danger for Catholicism in the early seventeenth century.

Hofstadter’s book focuses on the trial, while also giving background on Galileo’s observations of the night sky through his recently invented telescope. We are not treated to a full biography of Galileo, nor to the kind of exposition — which I would certainly have benefited from — on the nature of the Reformation, the Counter-Reformation, or the Roman Catholic Church of the time. Hofstadter’s own interests clearly lie in discursive discussions of Galileo’s engagement with the art and culture of the period.

There is much that is interesting, but also a great deal that is frustrating. After outlining the events of the trial, Hofstadter hedges against saying anything too definitive about the affair, conceding that both the trial itself and its final verdict may well have been the consequence of unknown Papal intrigues and the obscure politics of the time. The book doesn’t seem to know what to do with certain pieces of context. Take the following insight into what we know of an individual’s religious conviction:

We do not know what Galileo or anybody really believed at this period, since religious belief was prescribed by an autocracy and heresy was an actionable offense. If one had misgivings, one kept them to oneself, so it would be naive to take religious ruminations penned in the papal realm or its client territories at face value. The Inquisition’s own records confirm that many people harbored reservations and heretical beliefs: before the Counter-Reformation, they had been much more candid about them.

Galileo had built a telescope that provided a tiny, limited window, one that gave him a better view of the solar system. That view shaped a scientific conviction which led him to stray onto territory the Church had claimed authority over. But it didn’t have to be science. If anything, science was at the fringes of what the Church controlled. Moral, social, and religious matters were the principal victims. Science just happened to be the issue over which the Church managed to look definitively foolish. Frankly, its other strictures look similarly bad today, at least to my eyes. The extract above seems to hit on something at least as important as the actual science that Galileo practiced. It almost seems to serve the Church to frame the affair as “science vs religion”, rather than as “religion vs freedom of thought”.

Hofstadter clearly loves Galileo as the Renaissance man, immersed in the art and culture of his day. I sensed a real nostalgia for the pre-Two Cultures world. There is a valor and a virtue popularly recognized in those early scientists — the “genius” we ascribe to those who were the first to figure certain things out. I am perhaps sensitive to a certain kind of slight made against the “institutionalized” scientists of today, who are fantastically empowered by the inherited work of earlier scientists. So great is the inheritance that it inevitably dwarfs any possible contribution they can make. It is a sentiment often derived from the valorization of those heroes of the scientific revolution. Sure, you can hear them sneer, you’re clever, but you aren’t a genius. I don’t sense any of that from Hofstadter. I could imagine him saying something more like: Sure, Galileo’s reasoning reads as scientifically illiterate to us today, but unlike academics today, he could actually write.

In case you don’t believe me about how Galileo’s reasoning reads today:

Aristotle had had no conception of impetus, and thus no conception of motion corresponding to what we may see and measure. He thought that the medium through which objects travel sustains their motion. By contrast, Galileo wrote “I seem to have observed that physical bodies have physical inclination to some motion,” which he then described — lacking the mathematics for an exact characterization — by a series of “psychological” metaphors, themselves of partly Aristotelian origin: inclination, repugnance, indifference, and violence. […] Galileo’s conception of the Sun’s motion is necessarily hesitant and ambiguous, and he was wary of flatly stating general principles. But one can perceive here the rough outline of what would become Newton’s first law of motion.

What Aristotle, and thus Galileo, lacked was the basic material taught to high schoolers and undergraduates in science and mathematics.

It is certainly true that these were exciting and dangerous times. Scientific progress has a particular quality of often looking more exciting the further back you stand from it. But the actual doing of science is in the close-up, in the detail. And when you do look closer at the danger, at least in the case of Galileo, it looks far more grim than it does exciting.

The Final Word

It is the mark of clear thinking and good rhetorical style that when I start a sentence, I finish it in a suitable, grammatical fashion. A complete sentence is synonymous with a complete thought. In the world of AI, completing sentences has become the starting point for the Large Language Models that have started talking back to us. The problem the neural networks are quite literally being employed to solve is “what word comes next?” Or at least this is how it was explained in Steven Johnson’s excellent Times article on the recent and frankly impressive advances made by the Large Language Model GPT-3, created by OpenAI.
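
To make the “what word comes next” objective concrete, here is a toy sketch in Python. It is my own illustration, nothing like GPT-3’s actual machinery: it simply counts which word follows which in a tiny, made-up corpus and always predicts the most frequent follower.

```python
from collections import Counter, defaultdict

# A toy version of the "what word comes next" objective, not how GPT-3 works:
# count which word follows which in a tiny, invented corpus, then always
# predict the most frequent follower.
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word(word):
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # 'cat' -- it follows "the" more often than "mat" does
```

GPT-3 does something vastly more elaborate, but the task it is trained on is recognizably this one: given what came before, guess what comes next.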

As is apparently necessary for a Silicon Valley project founded by men whose wealth was accrued through means as prosaic as inherited mineral wealth and building online payment systems, OpenAI sees itself on a grand mission for both the protection and flourishing of mankind. They see beyond the exciting progress currently being made in artificial intelligence, and foresee the arrival of artificial general intelligence. That is to say that they extrapolate from facial recognition, language translation, and text autocomplete, all the way to a science fiction conceit. They believe they are the potential midwives to the birth of an advanced intelligence, one that we will likely struggle to understand, and should fear, as we would a god or alien visitors.

What GPT-3 can actually do is take a fragment of text and give it a continuation. That continuation can take many precise forms: it can answer a question. It can summarize an argument. It can describe the function of code. I can’t offer you any guarantee that it will actually do these things well, but you can sign up on their website and try it for yourself. The effects can certainly be arresting. You might have read Vauhini Vara’s Ghosts in the Believer, in which she writes about her sister’s death. She was given early access to the GPT-3 “playground” and used it as a kind of sounding board to write honestly about that loss. You can read the sentences that Vara fed the model and the responses it offered, and quickly get a feeling for what GPT-3 can and can’t do.

It will be important for what I am about to say that I explain something to you of how machine learning works. I imagine that most of you reading this will not be familiar with the theory behind artificial intelligence, and may even be intimidated by it. But at its core it is doing something quite familiar.

Most of us at school will have been taught some elementary statistical techniques. A typical exercise involves being presented with a sheet of graph paper, with a familiar x and y axis and a smattering of data points. Maybe x is the number of bedrooms, and y is the price of the house. Maybe x is rainfall, and y is the size of the harvest. Maybe x is the amount of sugar in a food product, and maybe y is the average weekly sales. After staring at that cluster of points for a moment, you take your pencil and ruler and set down a “line of best fit”. From the chaos of those disparate, singular points on the page, you identify a pattern, a correlation, a “trend”, and then impose a straight line — a linear structure — on it. With that line of best fit drawn, you can then start making predictions. Given a value of x, what is the corresponding value of y?
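
For the sake of concreteness, here is that classroom exercise in a few lines of Python. The numbers are invented, and np.polyfit is just one standard way of computing the least-squares line.

```python
import numpy as np

# The classroom exercise with made-up numbers: x is the number of bedrooms,
# y the asking price in thousands.
x = np.array([1, 2, 2, 3, 3, 4, 5])
y = np.array([110, 150, 160, 200, 210, 260, 300])

slope, intercept = np.polyfit(x, y, deg=1)  # the "line of best fit"

def predict(bedrooms):
    return slope * bedrooms + intercept

print(round(predict(4)))  # a prediction for a four-bedroom house
```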

This is, essentially, what machine learning and neural networks do.

The initial data points are what is referred to as the training data. In practice, however, this is done in many more than two dimensions — many, many more, in fact. As a consequence, eyeballing the line of best fit is impossible. Instead, that line is found through a process called gradient descent. Starting from some random choice of line, small, incremental changes are made to it, improving the fit with each iteration, until the line arrives somewhere close to the presumed shape of the training data.
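
A minimal sketch of that iterative process, using the same invented data as above; the learning rate and step count are arbitrary choices for the toy example, not anything prescribed.

```python
import numpy as np

# The same fit found iteratively: start from an arbitrary line and nudge its
# two coefficients downhill on the mean squared error.
x = np.array([1, 2, 2, 3, 3, 4, 5], dtype=float)
y = np.array([110, 150, 160, 200, 210, 260, 300], dtype=float)

slope, intercept = 0.0, 0.0  # a deliberately bad starting line
learning_rate = 0.01
for _ in range(20_000):
    error = (slope * x + intercept) - y
    slope -= learning_rate * 2 * np.mean(error * x)   # gradient w.r.t. slope
    intercept -= learning_rate * 2 * np.mean(error)   # gradient w.r.t. intercept

print(slope, intercept)  # settles close to the least-squares line found above
```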

I say “lines”, but I mean some kind of higher dimensional curves. In the simplest case they are flat, but in GPT-3 they will be very curvy indeed. Fitting such curvy curves to the data is more involved, and this is where the neural networks come in. But ultimately all they do is provide some means of lining up, bending, and shaping the curves to those data points.

You might be startled that things as profound as language, facial recognition, or creating art might all be captured in a curve, but please bear in mind that these curves are very high dimensional and very curvy indeed. (And I’m omitting a lot of detail.) It is worth noting that the fact we can do this at all has required three things: 1) lots of computing power, 2) large, readily available data sets, and 3) a toolbox of techniques, heuristics, and mathematical ideas for setting the coefficients that determine the curves.
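
To illustrate the “curvier curve” point without dragging in an actual neural network, here is a sketch comparing a straight line with a polynomial, which has more coefficients to set and so can bend with the data. The wiggly data is invented for the illustration.

```python
import numpy as np

# Not a neural network, but the same idea in miniature: a curve with more
# coefficients can bend to fit data that no straight line could.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(60)  # noisy, very non-linear data

line = np.polyfit(x, y, deg=1)    # 2 coefficients: hopeless here
curve = np.polyfit(x, y, deg=9)   # 10 coefficients: bends with the data

for coefficients in (line, curve):
    mean_squared_error = float(np.mean((y - np.polyval(coefficients, x)) ** 2))
    print(len(coefficients), round(mean_squared_error, 3))
```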

I’m not sure that the writing world has absorbed the implications of all this. Here is what a Large Language Model could very easily do to writing. Suppose I write a paragraph of dog-shit prose. Half-baked thoughts put together in awkwardly written sentences. Imagine that I highlight that paragraph, right click, and somewhere in the menu that drops down there is the option to rewrite the paragraph. Instant revision with no new semantic content added. Clauses are simply rearranged, the flow is adjusted with mathematical precision, certain word choices are reconsidered, and suddenly everything is clear. I would use it like a next-level spell checker.

And that is not all: I could revise the paragraph into a particular prose style. Provided with a corpus of suitable training data, I could have my sentences stripped of adjectives and set out in terse journalistic reportage. Or maybe I opt for the narrative cadence of David Sedaris. So long as there is enough of their writing available, the curve could be suitably adjusted.

In his Times article, Johnson devotes considerable attention to the intention and effort of the founders to ensure that OpenAI is on the side of the angels. They created a charter of values that aspired to holding themselves deeply responsible for the implications of their creation. A charter that reads as frankly, and quite literally, hubristic: they anticipate the arrival of Artificial General Intelligence while fine-tuning a machine that can convincingly churn out shit poetry. Initially founded as a non-profit, they have now birthed the for-profit corporation OpenAI LP, but made the decision to cap the potential profits for their investors, Microsoft looming particularly large among them.

But there was another kind of investment made in GPT-3: all the collected writings scraped up off the internet. The raw material exploited by the gradient descent algorithms, training and bending those curves to the desired shape. Ultimately, it is true that they are extracting coefficients from all that text-based content, but it is unmistakable how closely those curves hew, in their abstract way, to the words that breathed life into them. OpenAI actually explains that unfiltered internet content is actively unhelpful. They need quality writing. Here is how they curated the content for GPT-2:

Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny.

(From Language Models are Unsupervised Multitask Learners)

For all of Johnson’s discussion of OpenAI’s earnest proclamations of ethical standards and efforts to tame the profit motive, there is little discussion of how the principal investment that makes the entire scheme work is the huge body of writing available online that is used to train the model and fit the curve. I’m not talking about plagiarism; I’m talking about extracting coefficients from text that can then be exploited for fun and profit. Suppose my fantasy of a word processor tool that can fix prose styling comes true. Suppose some hack writer takes their first draft and uses a Neil Gaiman-trained model to produce an effective Gaiman pastiche that they can sell to a publisher. Should Gaiman be calling his lawyer? Should he feel injured? Should he be selling the model himself? How much of the essence of his writing is captured in the coefficients of the curve that was drawn from his words?

Would aspiring writers be foolish not to use such a tool? With many writers aiming their novels at the audiences of existing writers and bestsellers, why would they want to gamble with their own early, barely developed stylings? Will an editor just run what they write through the Gaiman/King/Munroe/DFW/Gram/Lewis re-drafter? Are editors doing a less precise version of this anyway? Does it matter that nothing is changing semantically? Long sentences are shortened. Semi-colons are removed. Esoteric words are replaced with safer choices.

The standard advice to aspiring writers is that they should read. They should read a lot. Classics, contemporary, fiction, non-fiction, good, bad, genre, literary. If we believe that these large language models at all reflect what goes on in our own minds, then you can think of this process as being analogous to training the model. Read a passage, underline the phrases you think are good, and leave disapproving marks by the phrases that are bad. You are bending and shaping your own curve to your own reward function. With statistical models there is always the danger of “over-fitting the data”, and in writing you can be derivative, an imitator, and guilty of pastiche. At the more extreme end, when a red-capped, red-faced member of “the base” unthinkingly repeats Fox News talking points, what do we have but an individual whose internal curve has been over-fitted?

It is often bemoaned that we live in an age of accumulated culture, nostalgia, and retro inclination. Our blockbusters feature superheroes created in a previous century. There is something painfully static and conservative about it all. But what if artificial intelligence leads us down the road to writing out variations of the same old sentences over and over again?

In his essay The Death of the Author, Barthes asserts:

We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centers of culture.

Given that Barthes was quite likely half-bullshitting when he threw out phrases like “multi-dimensional space”, what I have quoted above is a disturbingly accurate description of the workings of a large language model. But it isn’t describing the large language model. It is a description of how we write. Barthes continues:

the writer can only imitate a gesture that is always anterior, never original. His only power is to mix writings, to counter the ones with the others, in such a way as never to rest on any one of them. Did he wish to express himself, he ought at least to know that the inner ‘thing’ he thinks to ‘translate’ is itself only a ready-formed dictionary, its words only explainable through other words, and so on indefinitely.

Maybe Gaiman would have no more cause to call his lawyer than the great many writers he absorbed, reading and then imitating them in his youth and early adulthood, would have to call theirs. Maybe, if Barthes is to be believed here, the author is dead and the algorithm is alive. Our creativity is well approximated by a very curvy curve in a high dimensional space.

One potential outcome that does not seem to have been considered by our Silicon Valley aristocracy is that the Artificial General Intelligence they bring into this world will be an utterly prosaic thinker with little of actual interest to say. It shouldn’t stop it from getting a podcast, though.

Or maybe Barthes was wrong, and maybe Large Language Models will continue to be deficient in some very real capacity. Maybe writers do have something to say, and their process of writing is their way of discovering it. Maybe we don’t have to consider a future where venture capitalists have a server farm churning out viable best-sellers in the same fashion they render CGI explosions in the latest Marvel movie. Maybe we should get back to finishing the next sentence of our novel because the algorithm won’t actually be able to finish it for us. Maybe.

The cookies are the monster

As is the case with our parents, there invariably comes a moment when a teacher reveals themselves to be utterly, fallibly human. Rather than being a reliable source of knowledge, with one stray remark they reveal themselves to be as prone to misconceptions and ignorance as the rest of us. We learn that whatever instruction they offer should be treated provisionally.

One such moment from my own youth: sixth form, in morning assembly — a venue for our teachers to share wonderfully secular homilies. The teacher taking the assembly that morning explained that when he was himself in school he witnessed the first computer arrive in the classroom. At that time it wasn’t clear what function this intimidating new appliance should serve. “For some reason,” he told us, “they decided to send it to the mathematics department.”

Sitting there I understood quite intuitively that the maths department would have been the obvious and appropriate place to send the computer. After all, what is a computer except a machine for performing a long sequence of mathematical operations? The teacher seemed to believe computers were elaborate typewriters with the additional capabilities of playing a game of solitaire or selling you something.

I will add that this was the same teacher who, I had heard from a reliable and highly placed source (another teacher), thought that perhaps the maths department should start offering an “applied math” course, stripped of all the impractical and superfluous “pure math”. You know, only the math a good worker would actually need. As if maths teachers were some kind of freaks who insisted on inflicting abstract suffering on students before grudgingly teaching them the useful stuff: statistics.

In retrospect, sat in that school assembly, we were living through a significant moment. Broadband had arrived, along with YouTube. Facebook was only the latest in a string of social media platforms. I got a gmail account, with its bottomless inbox. Teachers were beginning to be drilled on the importance of making use of online resources. Certain educators dictated long and complicated urls to us that we had to copy down carefully so that we could make use of them later. I’m not talking about the home page of a site — we were sent to pages deep within the site-map; urls trailing all kinds of database tokens and php residue that we would someday learn was susceptible to the unpleasant sounding “link rot”. This is the future, those educators presumably thought to themselves as they unhappily transcribed those web-addresses. No doubt they were far from convinced that any of this would ever be convenient.

Only a decade later and we would enjoy boomers falling into Candy Crush addiction. And computers became even less recognizable as machines of mathematics.


As part of a professional realignment, I have been learning the ropes of cyber security. The past month was dedicated to mastering low-level memory exploits. Or at least the low-level memory exploits of twenty years ago. Real zeros and ones stuff. Well, hexadecimal stuff, really. Staring at (virtual) memory locations. Format string exploits. Messing around with debuggers. You might think that would have brought me to the mathematical heart of our digital engines. But no. I have instead had an almost gnostic revelation about the true nature of the Matrix.

Certainly, there is a lot of deep mathematics playing a fundamental role in the workings of all the code. To take a particularly central example: the existence of one-way functions is the underpinning assumption of almost all cybersecurity. The cryptographic protection we enjoy (whether you realize it or not) is premised on the understanding that an adversary does not have the ability to reverse certain mathematical operations with reasonable efficiency. This assumption may well hold up. Quite likely P does not equal NP, and it is entirely possible that quantum computers are a physical impossibility. I may not live long enough to see such profound questions resolved.
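
As a sketch of the one-way idea, here is the asymmetry with an ordinary hash function; this is an illustration, not a claim about any particular cryptosystem. Computing forwards is instant, while going backwards means guessing inputs until one matches.

```python
import hashlib
from itertools import product

# Forward direction: instant.
digest = hashlib.sha256(b"hunter2").hexdigest()
print(digest[:16], "...")

# Backward direction: nothing cleverer here than guessing. Even this toy
# search space (four characters, lowercase letters and digits) means roughly
# 1.7 million attempts; real key spaces are astronomically larger.
def brute_force(target, alphabet=b"abcdefghijklmnopqrstuvwxyz0123456789", length=4):
    for candidate in product(alphabet, repeat=length):
        guess = bytes(candidate)
        if hashlib.sha256(guess).hexdigest() == target:
            return guess
    return None

print(brute_force(hashlib.sha256(b"ab12").hexdigest()))  # b'ab12', found the slow way
```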

But there is another side to our PC world. From where I am sitting, computers are simply huge bureaucracies. Mathematical bureaucracies, to be sure, but bureaucracies nonetheless. They are replete with elaborate filing and organizing systems, protocols with carefully written standards, and all the input and output amounts to a certain kind of paperwork. From this perspective most security breaches are the product of improper filing, out-of-date standards, and old-fashioned mail fraud.

There is no undoing all the bureaucracy either. The further we get from the golden age of pre-broadband, the more the bureaucracy swells; not only to deal with the non-tech-savvy hoi polloi, but to integrate the hoi polloi into the very system itself. The age of nerds noodling around with open source code and experimenting with new kinds of hardware has given way to a world of corporations and start-ups where every other college grad needs to get their “workflow” in sync with their fellow internaut.

As with parents and teachers, the original architects of cyberspace have revealed themselves to be shortsighted and ideologically blinkered human beings with their own unique set of foibles. It took us too long to see it. That teacher who revealed his digital ignorance to our year group happened to teach economics. I never took a class with him, but if he knew anything of economic history then there was a chance he might have been able to teach us certain truths about technological advancement that we had overlooked.

The Punchline is Redundant

In graduate school, I was friends with a young man of a particularly restless disposition — a mathematician of the waggish inclination, given to a certain kind of tomfoolery. Often his antics would take the form of games of such banal simplicity that they felt like elaborate, conceptual pranks.

One game he set a number of us playing, during a longueur one evening with friends, sticks in my mind. Having first had each of us commit solemnly to absolute honesty, we each chose a number, greater than or equal to zero, which we would then one after the other reveal (committed as we were to honesty), and whoever had chosen the lowest number that no one else had chosen was the winner. Several rounds were played, and while everyone wrestled with the question of whether to choose zero, or maybe one, trying to second-guess each other, I refused to join in, offended by the very nature of the game.

A second game stays with me as well: pulling a mathematics journal from the shelf in the math department common room, my friend began reading aloud random sentences from various articles, pausing before the final word and inviting another friend to guess it. He did pretty well, as I recall.

There was something powerful about these games. The first game, stripped of all the frivolity, ritual, adornment, and pretense that usually accompanies games, revealed the essential nature of what a game is. That is to say, a “game” in the sense that the mathematician John von Neumann formulated it. To von Neumann’s way of thinking, Chess was not a game in the sense he cared about: perfectly rational players would know the perfect set of moves to play and thus they would play those moves. He was more interested in Poker, where players have incomplete information (the cards in their hand and on the table), are left to compute the probabilities, and must devise strategies.

Good poker players do not simply play the odds. They take into account the conclusions other players will draw from their actions, and sometimes try to deceive the other players. It was von Neumann’s genius to see that this devious way of playing was both rational and amenable to rigorous analysis.

The Prisoner’s Dilemma — William Poundstone

I recently discovered that my friend was not the true inventor of the second game either. Reading The Information by James Gleick, I learned that Claude Shannon, the founder of information theory, played a variation with his wife, as a kind of illustrative experiment.

He pulled a book from the shelf (it was a Raymond Chandler detective novel, Pickup on Noon Street), put his finger on a short passage at random, and asked Betty to start guessing the letter, then the next letter, then the next. The more text she saw, of course, the better her chances of guessing right. After “A SMALL OBLONG READING LAMP ON THE” she got the next letter wrong. But once she knew it was D, she had no trouble guessing the next three letters. Shannon observed, “The errors, as would be expected, occur more frequently at the beginning of words and syllables where the line of thought had more possibility of branching out.”

The Information — James Gleick, page 230

Shannon’s counter-intuitive insight was to consider “information” through a notion he called entropy, which quantitatively captures the amount of new, novel, and surprising content in a message. Thus the meaningful sentences of a novel, or indeed a math paper, contain all kinds of redundancy, while a random sequence of letters is surprising from one letter to the next, and therefore contains more of this stuff he referred to as “information”.
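
A rough sketch of that comparison, at the level of single characters only; this ignores all the structure between letters and so understates how redundant real prose is. The sample strings are my own.

```python
import math
import random
from collections import Counter

# Shannon entropy of the single-character frequencies in a string:
# H = -sum(p * log2(p)) over the observed characters.
def bits_per_character(text):
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

prose = "the meaningful sentences of a novel contain all kinds of redundancy"
random.seed(0)
gibberish = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in prose)

print(bits_per_character(prose))      # lower: English spends its letters predictably
print(bits_per_character(gibberish))  # higher, closer to log2(27), about 4.75 bits
```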

Von Neumann’s ideas about games would go on to shape the technocratic world view that was ascendant in the twentieth century. Beyond mathematics, the kind of games he defined could be found in the fields of economics, social policy, geopolitics, and, most infamously, the exchange of nuclear weapons.

Shannon’s ideas would have their greatest successes in science, and not only in the field of communication, where error-correcting codes and encryption are the direct and intended applications of such thinking, but also in biology, when DNA was discovered and life itself appeared to be reducible to a finite sequence of four letters, and in physics, via thermodynamics and later quantum mechanics, as information became a fundamental notion.

There is a variation on Shannon’s game that is a well-established tradition around the Christmas dinner table: reading Christmas cracker jokes. (Popular within the former Commonwealth, but maybe less well known in the US.) Having pulled the crackers and set the crepe party hats upon our heads, each of us will in turn read the setup of our joke, leaving the rest of the table to guess the punchline. The meta-joke being that while punchlines are supposed to be surprising, and thus amusing, Christmas cracker jokes are typically so bad that their puns are quite predictable. Thus, somehow, in their perverse predictability, the jokes are funny all over again. But does that make them low entropy? Only if you allow for the mind to be addled enough that the punchline becomes predictable.

This is an important point. The ultimate arbiters of the question of assumed knowledge that Gleick offers are hypothetical aliens receiving our radio signals from across the galaxy, or the very real computers that we program here on earth. They do not share any of our cultural baggage and thus could be considered the most accurate yardsticks for “information”. When Gleick’s book was written, over a decade ago now, we had very different ideas about what computers and their algorithms should look like or be capable of doing. That has all changed in the intervening decade with the arrival of powerful artificial intelligence that gives the kind of output we once could only have hoped for. The notions Gleick covers were defined precisely and mathematically, but our intuition for these concepts, even for the lay person, is dramatically shifting. Not that it would be the first time our expectations and intuition have shifted. We should recognize ourselves in Gleick’s description of the amusing misunderstandings that the new-fangled telegraph technology created upon its arrival.

In this time of conceptual change, mental readjustments were needed to understand the telegraph itself. Confusion inspired anecdotes, which often turned on awkward new meanings of familiar terms: innocent words like send, and heavily laden ones, like message. There was a woman who brought a dish of sauerkraut into the telegraph office in Karlsruhe to be “sent” to her son in Rastatt. She had heard of soldiers being “sent” to the front by telegraph. There was the man who brought a “message” into the telegraph office in Bangor, Maine. The operator manipulated the telegraph key and then placed the paper on the hook. The customer complained that the message had not been sent, because he could still see it hanging on the hook.

More mysterious still is the way information persists once it has arrived. Black holes provided a thorny problem for physicists, but my own waggish friend poses his own set of questions. Assuming that he had not taken a course in information theory, or read of Shannon (which he may well have), that leaves the possibility that when he concocted his games he was subconsciously tapping into some kind of collective or ambient understanding. It is one thing for the theory to be taught and for students to study the equations. It is quite another when ideas pervade our collective thinking in ways that cannot be easily accounted for. Information theory works when we can point to the individual bits and bytes. Things become much more tricky when not only can we not find the bits and bytes, but the information is thoroughly not discrete, not even analogue, just out there in some way we don’t yet know how to think about.

Unfortunately, there was a booksale.

Some years ago — never mind how long precisely — having little or no money in my purse, and nothing particular to interest me in the UK, I thought I would do a PhD in Montreal, and see a little of the Francophone world. Perhaps there was some element of driving off the spleen, and regulating the circulation. Learning French at an Anglophone university was trickier than I would have liked. I did however persevere with the language while obtaining mon doctorat, and eventually I could read, with dictionary assistance, a contemporary novel or comic. The latter, I discovered, had a rich tradition and active culture in France, and as a consequence a serious presence in the neighborhood bibliothèque.

When I left the Francosphere behind, and eventually arrived back in the Anglosphere, the French language fell out of sight and out of mind. There was an abundance of English prose I badly wanted to read, so my habit of reading in French all but disappeared. That is, until very recently, when I was inspired by a blog post I stumbled upon. The author of The Untranslated reflects on five years of running his quite wonderful blog reviewing untranslated books. A great deal of the post concerns the practice of learning new languages with the aim of reading specific works. Which makes a refreshing contrast to the usually proffered motivation for language acquisition: being able to order food or ask for directions like a local. So inspired, I set about throwing myself once again into French literary waters.

My first port of call was my go-to French webcomic, Bouletcorp. Started in 2004, it is a veteran of the scene. A typical comic depicts incidents from the author Boulet’s life, alongside ruminations and observations on everything from the quotidian irritations of modern life to lazy tropes in TV and movies. To my rather fanciful way of reading them, these are visual essays in the tradition of Montaigne. A more suitable reference point is Calvin and Hobbes, in the way Boulet frequently lets his imagination run in fanciful and speculative directions, rendering scenes reminiscent of Watterson’s Sunday strips, filled with all the dinosaurs, alien landscapes, and monsters that populated Calvin’s imagination. One situation Boulet returns to more than once is encounters with obnoxious members of the public attending his in-person signings at conventions. In one such strip Boulet anonymizes the offending individual by drawing them in various forms: an ape, a cockerel, and finally a grim-looking salad bowl of merde. Boulet is an artist of incredible versatility, belonging to the European tradition of Moebius. I grew up in the UK reading the frankly impoverished offerings of the Dandy and Beano, and picking through the debauched excesses of American superhero comics. So I feel like I missed out on the sophisticated French tradition of bande dessinée. At least I had the adventures of Asterix & Obelix, and Tintin.


Reading my way through the Bouletcorp back catalogue I found one comic in particular that I responded to deeply. Created for the 2017 Frankfurt book fair website, it is a reminiscence of youth. A tweenage Boulet is torn from his ordinateur and obliged to do his required reading for school. Disinclined towards his duty, he picks the shortest story in the collection, and finds himself drawn in, captivated, and horrified by the gothic power of Prosper Merimee’s La Venus d’Ille. Understanding that he wanted more of whatever that was, he discovers, with the help of the school library, Poe, King, Asimov, and all kinds of fantastical fiction.

As readers we do not get to consume our art communally. Theater lovers go to watch actors tread the boards; film fans get to attend screenings; music fans flock to gigs; football fans sing together in their home stadiums. The solitude of reading has many benefits, to be sure. Frequently the best books we read have a strange power that is hard to assess, and their merits might only become clear on second or third readings. But no doubt you have known a friend who has thrust a book into your hands that they can’t stop thinking about, telling you: “You have to read this.” So that once you too have read it, they can finally talk about it.

In much the same way I enjoy returning to a beloved book, I enjoy being reminded of the revelation that is reading. Of discovering the stories that suspend my disbelief and subvert my expectations. The books which captivated Boulet have some overlap with the books I read as a teenager, but it is his experience that resonates so strongly.

That is why, of all the many podcasts that have devoted themselves to the works of the late Gene Wolfe, it is the reader interviews of the Rereading Wolfe podcast that I remain most excited about. The careful chapter-by-chapter analysis that all the Gene Wolfe podcasts offer is fine and good and wonderful. But there is something reaffirming about people discussing their responses to the work. In one utterly remarkable episode, a reader describes discovering The Book of the New Sun as she grew up in a cult — she had to read fantasy books smuggled in secret from the library. Among all the books she read, she understood at once that BotNS was on an entirely different level. I do not like to ascribe utility to art, but such stories allow me to believe in the vitality of art.

Part of the reason I aspire to become a proficient reader of la langue française is so that I can return to that state of youthful discovery. I can become like a teenager again, seeing with fresh eyes what the culture has to offer. I can be liberated from the weight of expectation and reputation. I can be surprised all over again.

There was a booksale…

It rained, and the snails were about.
What does a dog want? To go for a walk? Or just to be outside?

Like the Da Vinci of Hot Takes

I only have myself to blame. I was the one who clicked subscribe on the Substack. I’m trying not to write a rebuttal; I’m not a fan of the “debate me” style of writing. And if you dunk on someone on Twitter and no one is around to like it, have you really dunked on them? If my current commitment to writing blog posts serves any purpose (and the consensus is that I am a decade too late to have a personal blog) then it is to organize my thoughts into something coherent, and maybe adorn it all with an appealing turn of phrase. I’d like to explain why what I read was so utterly disagreeable.

Erik Hoel is a neuroscientist, neuro-philosopher, and fiction writer. Recently, he wrote an impassioned Substack post addressing the “decline of genius”. Because, first of all, Erik Hoel believes in genius. Personally, I believe in “genius”, with the scare quotes, but I don’t want to derail my outline of Hoel’s argument this early. Erik Hoel believes, and asserts there is evidence, that the number of geniuses is in decline. This is despite an expanded education system and technological conveniences like powerful personal computers that we can carry about in our pockets. The golden age of human flourishing has not arrived, despite ostensibly perfect conditions. The cause, Erik Hoel believes, is that an old style of education — which he terms “aristocratic tutoring” — is the only reliable system that produces true genius, while the classrooms of today consistently fail to give their students that certain something. And with one weird trick, that is to say replacing classroom education with one-on-one individual tutoring, we might yet reach a golden age, with a bounteous and reliable crop of little Einsteins, von Neumanns, and Newtons.

Hoel doesn’t simply want smaller class sizes or more resources. He advocates for intensive, rigorous, and engaging tutoring, done one-on-one. There is no compromise (or indeed much civic spirit) on the path to reliably producing geniuses. The education of John Stuart Mill is cited approvingly — Mill’s childhood was a weird Benthamite experiment, designed to cram the classical canon into a child by the age of twelve, for no less a goal than to raise the future leader of a radical movement. Hoel neglects to mention that Mill’s intensive education led to a psychological breakdown at the age of twenty.

There is no hiding the scorn Hoel seems to have for the efforts of the educators we do in fact have. The possibility that there might be virtues to a public education system is not taken remotely seriously. But as frustrating as that is, I can merely dismiss it as wrong-headed and distasteful.

It is the genius stuff that really bothers me. The term “genius” is itself so awfully weighted with elitist and reactionary agendas. Genius is more like a PR stunt than a genuine appreciation of human achievement. It is the convenient idea that Apple used to brand their computers — and not a serious way to engage with the history of science and ideas. You can say Einstein was a genius with absolute conviction and with little understanding of his theory of relativity.

It’s not that I don’t think that there are individuals whose talents and contributions rise above their peers. Your average tenured mathematician certainly deserves respect for their achievements, but for better or worse, there really are people who produce work on entirely another level. After roughly a decade in research mathematics, I can say that whether it be through nature or nurture, God’s gifts have been distributed quite unequally. (Appropriately enough, I appeal here to Einstein’s God).

And I simply don’t buy the idea that we have a shortage of geniuses. I think some people prefer to spout declinist narratives, tell educators that they are doing it all wrong, and dismiss contemporary art and culture and literature as being utterly non-genius. They prefer doing all that to appreciating the successes of today, simply because they don’t conform to some presumed ideal of genius.

It gets my goat because as I read the piece I can think of examples. To take what seems to me to be the example that should settle the debate: Grigori Perelman. Perelman’s proof of the Poincare conjecture in the early 2000s was a momentous event in mathematics. Not in recent mathematics. In mathematics, full stop. His proofs arrived online without fanfare or warning. While it took time for mathematicians to process what he had written and conclude that a complete proof had been presented, they understood very quickly that it was a serious contribution. It is rather vulgar to say it out loud, and actually a disservice to the entire field of differential geometry, but you can rank it up there with any other seminal mathematical advance. It would be bizarre to suggest that modern mathematics is impoverished when such work is being produced.

That is just one example. How long a list would you like? Should I mention the resolution of Fermat’s Last Theorem? Big math prizes are handed out reasonably regularly, and I don’t believe there are any shortages of candidates. But maybe their contributions haven’t transformed the world in the way people imagine “proper” geniuses would. Maybe the problem is that they never read their way through the canon before adolescence. Am I actually expected to entertain this line of thought while I have the privilege of such fine specimens of achievement at my disposal? But I sense that Hoel will simply explain that Perelman and the rest were lucky enough to receive some variation on aristocratic tutoring. Exceptions that prove the rule, and all that.

The whole business annoys me because instead of being interested in the human achievements it claims to valorize, the essay wants to stack them up like shiny Pokemon cards to be measured by the inch. It aggravates me because I could be reading a New Yorker profile of someone who might not be a genius, but who is at least interesting. I could be reading the Simons-funded math propaganda outlet Quanta, which has the benefit of believing that there is great mathematics being done and that it is worth writing about. And most of all, for all the “geniuses” have done for the world, I’m still on team non-genius, and we still bring more than enough to the table.

I hate the video-essay

Patricia Lockwood’s Booker-nominated No One Is Talking About This is now out in paperback. I know because I went out, bought it, and read it. In real life. It is one of the most widely reviewed novels of 2021, in part because Lockwood is unquestionably an exciting writer with a clear voice and real style, but also because this book was a potential candidate for carrying home the title of “The Great Internet Novel”.

The events of NOITAT track the contours and trajectory, both broadly and in many details, of Lockwood’s own life, starting with becoming internet famous. The protagonist, who we assume is half-Lockwood, is brought to the center of the online stage for asking “can a dog be twins?”. Because we are to understand that virality really can hinge on something so slight. Actual-Lockwood achieved some kind of Twitter fame for retweeting the Paris Review, asking “So is Paris any good or not”, although I believe her trajectory involved more than that one tweet. A dog can in fact be twins, although very rarely in the sense of actually being identical twins (or so Google tells me). Half-Lockwood’s joke is funny in the same way that slant rhyme rhymes. It is also pretty dumb, and I have to wonder if there is some commentary on Lockwood’s part about the disproportionate acclaim such a dumb joke can receive online. People write about her Paris Review tweet as if it was the height of wit, but really it just flatters a reader for knowing what the Paris Review even is.

Written in the manner of oblique fragments, NOITAT might evoke for some the fragmentary, non-linear nature of social media — or “the portal”, as it is referred to. But my own reading left me recalling Joan Didion’s A Book of Common Prayer, with its flashcuts jumping in and out of scenes, the typical literary constructions and table setting eschewed, leaving you to carefully follow the thread of each sentence so you can hopefully make it to the eventual destination. Fortunately, I have a better grasp on memes than I did on Central American politics in the seventies. But God help you if you are not on some basic level online.

Lockwood has an almost talismanic status in the world of young and hip American lit. She does not have an MFA and did not attend college. Here is the immaculately conceived American poet, free from the sin of credentialism, the careerism, and the workshopping. Evidence that perhaps free verse isn’t just bullshit you have to attend grad school to “appreciate”. There is a wonderful passage in Priestdaddy where she describes the depth of texture and connotation words have for her, and reading it you too can begin to believe.

I am a great fan of her writing, especially the memoir. And also especially in the context of a certain popular idea of what constitutes good writing. The suggestion that dialogue tags would perhaps be best restricted to the inoffensive “said”, “asked”, and “told”, has become a stupefying dictum, so it is a pleasure to be reading a writer who is not afraid to have their speech “yelled”, “yelped”, “hissed”, and even “peeped” when the occasion arises. But that is to under-represent Lockwood’s qualities as a prose stylist. Even after the critics have had their pickings you can still find “the unstoppable jigsaw roll of tanks”, and now I too am one of those critics, unfortunately.

If the first half of the novel sees the half-Lockwood protagonist being submerged, and possibly drowned in the online, the second half finds her abruptly washed up on shore to deal with Real Life — not only in the world of flesh and blood, a life-threatening pregnancy, and a rare and terminal genetic disorder, but also Real as in we are still following events that actually transpired. What does it mean about very online life, that it served half-Lockwood very badly in these circumstances? What does it mean that online life is not well suited to these truths?

If I had a dollar for every time I made a friend laugh… Well, at best this would be a strange side hustle. But I don’t get a dollar for making casual quips. Nor for the hot takes or deeper thoughts I impart to those about me. Actual humour writing, as with all writing, is a harder, more laboured, and quite deliberate practice when it has to be done consistently and in quantity. Yet a lot of the internet seems to offer up the promise of getting all those likes and subscribes for basically turning up and being you. Twitter, YouTube, and podcasts can give you the impression that sometimes it is simply a matter of typing it into your phone, or just setting the tape rolling. But after a while some of that back and forth between the hosts, and quite a lot of that laughter, feels more laboured than it should. Do I really believe that the YouTuber captured the unpracticed vitality of their own genuine laughter?

Obviously not. It was all — or at least most of it — utterly scripted. Which in general is fine. There should be some mediation between our personal and public lives. A kind of negotiation and consideration. What is captured in NOITAT is an experience from the small class of people (workers? writers? creators?), mostly very young, who have been able to put a relatively unmediated portion of themselves out onto the web for the viewing benefit of the rest of us. Who have committed themselves to being “very online”. Because most people are not “very online”. For most of us there is a line, and although we may occasionally find ourselves on the wrong side of it, we get to enjoy the separation.

I think when critics were scouring the Earth for the “Great Internet Novel” they were hoping for all the sordid vicarious thrills that you might expect from a medium that has offered us strange new breeds of humour and fed our prurient desire for the salacious. Half-Lockwood’s “very online” quickly seems very exhausting. The contradictions and hypocrisy and inconsistency were already self-diagnosed on the portal itself before they even arrived on the printed page. But all this is the point, I suspect.

Retrograde Motion

Before Newton there was Copernicus, and before Copernicus there was Ptolemy. Living in the second century AD, Ptolemy produced what would become the definitive work in astronomy for the next millennium. It was a geocentric system: the Earth, quite sensibly, set at the center of the solar system. While geocentrism was ultimately to suffer the ignominy of being synonymous with backward thinking, Ptolemy certainly didn’t lack mathematical sophistication.

Keeping the Earth at the center of the solar system required a great deal of creative invention. It was taken as axiomatic that the planets should travel at constant speeds and adhere to the perfect forms of geometry (that is to say, circles and spheres). But the planets that appeared in the night sky did not conform to these expectations. Unlike the sun and moon, which flattered us earthlings with their regular appearance and disappearance, the planets would sometimes slow down and reverse direction — what astronomers called “retrograde motion”. The solution that Ptolemy and his predecessors developed was a whole Spirograph set of celestial structures called deferents and epicycles. This essentially involved imagining that the other planets were not set upon a wheel revolving about the Earth, but upon a wheel set on a wheel in motion about the Earth. And, if necessary, perhaps some greater sequence of nested wheels.

Copernicus, the Catholic canon and Polish polymath of the fifteenth and sixteenth centuries, had, like every other astronomer of his day, read Ptolemy. Yet after carefully studying the night sky and much thought, he developed a heliocentric map of the solar system. That is to say, with the Sun at the center. While he managed to free himself from geocentric difficulties, and dramatically simplify the situation in many respects, he still adhered to a belief in constant speed and circular orbits. It would take Kepler and ultimately Newton to settle the matter with elliptical orbits determined by the force of gravity.


The heliocentric theory was controversial for two reasons. The first, and quite reactionary, objection was based on readings of a handful of bible verses. For example, when Joshua led the Israelites in battle against an alliance of five Amorite kings he ordered the sun to halt its motion across the sky, thus prolonging the day, and with it the slaughter of the opposing army. The point is that Joshua ordered the sun to stop, and not the earth. This might seem like pedantry, but that was precisely the point. The Catholic church hoped to hold a monopoly on biblical interpretation, and someone lower down the ecclesiastical hierarchy conducting their own paradigm shift, equipped with nothing more than astronomical data and mathematics, could set a dangerous precedent. At a time when many such precedents were already being set.

The second, and quite serious, objection was that it created a whole new set of scientific questions. Why don’t we feel like we are moving through space? Not only around the sun, but also as the Earth rotates daily about its own axis? The numbers required to calculate the implied velocity were known. And on top of that, if we were moving at such great speed, then why did we not observe a parallax effect between the stars? As the apparent distance between two buildings appears to change as we move past them, why couldn’t we observe a similar shift in the stars as we moved? Copernicus’ answer was that the stars were much farther away from earth than had ever been imagined before. It was a correct deduction that didn’t do much to convince anyone.


Both Copernicus and Newton were reluctant to publish their ideas. In Newton’s case he was satisfied to have developed his Calculus and did not care to suffer the scrutiny that others would subject his theory of gravity to. His experience of justifying his thinking to other scientists had soured his relationship with the wider scientific community of his day. It was only when it became clear that Leibniz had independently developed the tools of Calculus that he finally set about writing up, formalizing, and getting his hands on data in order to present his Principia.

Copernicus had gathered his data and written his book, yet for many years did not publish it. De revolutionibus orbium coelestium would only arrive in print as he lay on his death bed. While Copernicus had friends who supported his astronomical pursuits, it seems to have been the arrival of a young Lutheran mathematician, Georg Joachim Rheticus, that was the key instigation in bringing the manuscript to print.

No one had invited him or even suspected his arrival. Had he sent advance notice of his visit he doubtless would have been advised to stay far away from Varmia. Bishop Dantiscus’ most recent anti-heresy pronouncement, issued in March, reiterated the exclusion of all Lutherans from the province — and twenty-five-year-old Georg Joachim Rheticus was not only Lutheran but a professor at Luther’s own university in Wittenberg. He had lectured there about a new direction for the ancient art of astrology, which he hoped to establish as a respected science. Ruing mundane abuses of astrology, such as selecting a good time for business transactions, Rheticus believed the stars spoke only of the gravest matters: A horoscope signaled an individual’s place in the world and his ultimate fate, not the minutiae of his daily life. If properly understood, heavenly signs would predict the emergence of religious prophets and the rise or fall of secular empires.

A More Perfect Heaven — Dava Sobel

I suspect that we may undervalue the weight that belief in astrology carried in some (but not all) quarters. Many looked back to the Great Conjunction of 1524 as heralding the rise and spread of Lutheranism — an ideological shift with profound and widespread implications that might only be matched by Communism. We live in an age of scientific prediction, taking for granted the reliable weather forecast on our phone in the morning. We (at least most of us) accept the deep implications of the climate data for our future, while also paying heed to the sociology and political science that can help us understand our lack of collective action. If we accept astrology as a kind of forebear to our own understanding, you can perhaps appreciate why Rheticus might have been willing to take such risks to pursue a better understanding of the stars.

We can only imagine what Rheticus must have said to Copernicus that led him to finally prepare his manuscript for publication. And that is what Dava Sobel has done, writing a biography of Copernicus, A More Perfect Heaven, which contains within it a two-act play dramatizing how she imagines the conversation might have gone. It presents a Rheticus shocked to discover that Copernicus literally believes that the Earth orbits about the Sun, and a Copernicus perplexed that the young man takes astronomy seriously, but who is won over by the prospect of taking on such a capable young mathematician as his student.

Rheticus’ principal legacy is the précis of Copernicus’ theory that he wrote and had distributed as a means of preparing the way for the ultimate text. His contributions would ultimately be overshadowed when he was later accused, convicted, and banished for raping the son of a merchant. While Sobel presents Rheticus in her play as pursuing/grooming a fourteen-year-old boy, it does not feel like she knows exactly where to take this dramatically. By way of contrast, John Banville in his novel Doctor Copernicus gleefully transitions to a Nabokovian narrative upon Rheticus’ arrival.


There is an interesting dramatic irony in the way Copernicus’ ideas were initially received. There was a ruse, by certain parties, to present Copernicus’ heliocentric theory as simply a means of computation. It could be tolerated if it was understood that one was not supposed to actually believe that the Sun was at the center of the solar system. Which struck some as a reasonable compromise. The Catholic Church was drawing up what would become the Gregorian calendar, and Copernicus made important contributions to calculating a more accurate average for the length of a year.

Yet now the situation has been reversed. While Copernicus’ techniques were rendered obsolete with the arrival of calculus, his conceptual picture carries on in the popular understanding. Meanwhile, as Terence Tao and Tanya Klowden have noted, Ptolemy’s deferents and epicycles live on in the mathematics of Fourier analysis — a means of approximating arbitrary periodic functions using trigonometry.
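To make that connection a little more concrete (this gloss is mine, not Tao’s or Klowden’s exposition): a deferent-and-epicycle model traces a planet’s apparent path as one circle riding on another, and in complex notation such a stack of circles is just a finite sum of rotating terms,

z(t) = \sum_{k=-N}^{N} c_k \, e^{i k \omega t}

where each term is a circle of radius |c_k| turning at rate k\omega. Fourier analysis tells us that essentially any well-behaved periodic curve can be approximated in this form, with the dominant term playing the role of the deferent and the smaller terms acting as epicycles stacked on top of it.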

Even within fields as definitive as mathematics and science, it is interesting how defunct and obsolete thinking can be revealing and can persist with strange second lives. Why someone believed something can become more important than the truth of the thing. Eratosthenes deduced an impressive approximation of the Earth’s circumference after hearing a story about a well that would reflect the light of the sun at noon. We possess a more accurate figure now, but the technique never grows old.
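The arithmetic behind that deduction is worth sketching, using the rounded figures usually quoted rather than anything I can claim about Eratosthenes’ own working. If the sun stands directly overhead at Syene at noon on the solstice (hence the light reaching the bottom of the well) while a vertical rod at Alexandria casts a shadow at an angle of about 7.2 degrees, then the arc between the two cities subtends 7.2/360 = 1/50 of the Earth’s circumference C:

C \approx \frac{360^\circ}{7.2^\circ} \times d \approx 50 \times 5{,}000 \text{ stadia} = 250{,}000 \text{ stadia}

where d is the Alexandria–Syene distance, commonly reported as about 5,000 stadia.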

Sanderson and I

I have never read a Brandon Sanderson novel. Plenty of people haven’t, so that doesn’t make me special, even among avid readers. But a great many people do read Sanderson. So many, in fact, that even among high-profile writers, Sanderson certainly is special. And this week Sanderson’s readership went from buying Sanderson’s books to buying into Sanderson and his books; they put down an accumulated and unholy twenty million dollars (and growing) on his Kickstarter to publish four “surprise” novels in 2023.

But perhaps I should feel a little bit special, because while I haven’t read any Sanderson, he has read me. Or at least he has, on one occasion, in an interview, indicated that he had read the web-comic that I drew as a teenager.

Thog Infinitron, written by Riel Langlois, a Canadian I met on a web-comic forum, and drawn by me, Daniel Woodhouse, is the story of a cyborg caveman and his various adventures. After his body is crushed in a rhino stampede, the titular Thog is rebuilt with all manner of enhancements by a pair of alien visitors. I uploaded a page a week, and the story ran for a grand total of 129 pages before I unceremoniously lost interest and ditched the project somewhere in the middle of my second year of undergraduate mathematics. I had other things going on. Thog’s story was left on a haunting cliffhanger, with story-lines open and characters stuck forever mid-arc.

I do not even have to look back over my work to recognize that I was a callow and unsophisticated artist. My potential was frustratingly underdeveloped. In retrospect, I cringe at my own haste to produce a popular webcomic that would bring in wealth and recognition, and at how that haste led me to neglect my craft. I lacked influences and serious guidance. Or maybe I was simply too stubborn in my ambition. I do wonder how I would have fared if I had been that same teenager today, able to discover the wealth of material and advice that is now available online. You can literally watch over the shoulder of accomplished artists as they draw.

Nevertheless, when I revisited Thog, I was impressed by the comic as a body of work. Langlois’ writing was truly fantastic — in a completely different league to my art. And as rough as the art is to my eye, I have to appreciate the sheer cumulative achievement.

(Please do not go looking for my webcomics. Aside from Thog Infinitron I sincerely hope that my teenage juvenilia has disappeared from the internet, and for the most part this wish seems to have come true through a combination of defunct image hosting and link-rot. Thog is still out there and readable thanks to a surviving free webcomic hosting site, although I’m not sure your browser will forgive you for navigating into those waters.)

At the time, Thog, with its regular update schedule, was a major feature in my life. Now it feels like a distant and minor chapter. Years later I would occasionally do a web search to see if people still mentioned it, if Thog was still being discovered, or if the comic had any kind of legacy at all. It was during one of those web searches that I discovered a passing reference to Thog by Sanderson in an interview on Goodreads.

It is a strange and unusual writer who does not want to be read. And indeed it is strange and gratifying to discover that you have been read. It is an experience that Sanderson enjoys to a singular degree, but that I too have enjoyed to a thoroughly modest degree. At some point during Thog’s run we even gave permission to some particularly keen readers to translate it into Romanian. I have never received a dime for my web-comics, and I didn’t take much note of it at the time, but in retrospect I’m in awe that I should have received such an honor. Sanderson’s works have been translated into 35 different languages.

The money being amassed on Kickstarter for Sanderson’s project is no small thing. The way the arts and literature are funded has profound effects on the culture. The proceeds of bestsellers have traditionally been reinvested by publishing houses in new writers (or so it has been claimed), and I imagine that more than a few people will look at Sanderson’s foray into self-publishing (or working outside a major publishing house) and wonder how different the future might be. But it is, at least for now, worth appreciating the sheer spectacle of a truly devoted readership.

Harrowing

In Don DeLillo’s novel White Noise the protagonist and narrator, Jack Gladney, finds himself struggling to get a straight answer from his precocious teenage son, Heinrich.

“It’s going to rain tonight.”
“It’s raining now,” I said.
“The radio said tonight.”

[…]
“Look at the windshield,” I said. “Is that rain or isn’t it?”
“I’m only telling you what they said.”
“Just because it’s on the radio doesn’t mean we have to suspend belief in the evidence of our senses.”
“Our senses? Our senses are wrong a lot more often than they’re right. This has been proved in the laboratory. Don’t you know about all those theorems that say nothing is what it seems? There’s no past, present or future outside our own mind. The so-called laws of motion are a big hoax. Even sound can trick the mind. Just because you don’t hear a sound doesn’t mean it’s not out there. Dogs can hear it. Other animals. And I’m sure there are sounds even dogs can’t hear. But they exist in the air, in waves. Maybe they never stop. High, high, high-pitched. Coming from somewhere.”
“Is it raining,” I said, “or isn’t it?”
“I wouldn’t want to have to say.”
“What if someone held a gun to your head?”

The exchange is as infuriating as it is amusing, and you can’t help but wonder where your sympathies should lie. On the one hand, Heinrich is deploying tendentious po-mo deconstruction. On the other, his father is a professor at the town’s liberal arts college, where he founded the academic field of Hitler Studies, created in service of his own academic advancement and providing a stage for his own po-mo preoccupations.

I couldn’t help but think of White Noise as I recently read Joy Williams’ Harrow. If you put a gun to my head and told me to describe the book I’d say it reads like White Noise meets Cormac McCarthy’s The Road. Describing the actual plot of Harrow makes describing the plot of White Noise seem easy. If there is a central conceit to the novel it is that there has been some kind of global environmental catastrophe — the titular “Harrow” — the details of which are only ever alluded to and described indirectly. The situation is stated most clearly towards the end of the novel.

Bouncing back from such historical earth-caused losses, humankind had become more frightened and ruthless than ever. Nature had been deemed sociopathic and if you found this position debatable you were deemed sociopathic as well and there were novel and increasingly effective ways of dealing with you.

None of this really reflects the nature of what awaits a reader in the book. So I will try again. We follow a teenager, Khristen, who is sent off to a mysterious school for gifted children, until “the Harrow” causes the school to be swiftly shuttered. Khristen goes in search of her mother and arrives instead at The Institute: a kind of eco-terrorist training camp for geriatrics who have decided to dedicate what remains of their lives to coordinated acts of revenge against the people who inflicted so much cruelty and damage on the natural world. Khristen eventually leaves the Institute and, in the final portions of the novel, arrives in the bizarre courtroom of a twelve-year-old judge. I’ve skipped a great deal, but hopefully you get a sense of how resistant the book is to any kind of conventional narrative arc.

I might as well divulge another central conceit of the novel: Khristen’s mother holds the firm conviction that Khristen briefly died and returned to life when she was a baby. None of the witnesses to the incident or the doctors who examined the child believe this happened; the baby just appeared to have momentarily stopped breathing. Yet this non-incident is returned to and treated like it should hold a great deal of resonance. Later on there is much discussion of Kafka’s short story The Hunter Gracchus, which is obviously great fun if, like me, you’ve never read that particular story. But I am led to believe Gracchus’ own un-dead predicament should resonate with Khristen’s.

I should say that Joy Williams is very highly regarded as a writer and you can find plenty of evidence on the page of her skill as a prose stylist. Even if I spent most of the book waiting for it all to accumulate in some or any way, the scenes are nevertheless wildly inventive and individual lines can haunt you:

The fish was not rose-mole stippled and lovely but gray and gaunt as though it had lived its brief life in a drainpipe.

The poetic beauty of the initial description contrasts powerfully with the bleak point at which the sentence ends. It is a knight’s move of a sentence, shifting trajectory somewhere along the way. A quick Google search reveals that this “rose-mole stippled” business is lifted from a nature poem, Pied Beauty, by Gerard Manley Hopkins. Which is all to say that there is a lot going on if you look carefully.

But is Harrow actually a good novel? I cannot help but channel the spirit of Heinrich Gladney:
“Do you think Harrow was good?”
“In what sense good? Good to all readers in all times and in all circumstances? Good on a first reading or on a rereading? Perhaps you want me to give an Amazon star rating, because to that I must outright object on aesthetic grounds.”
“How about to you, today, when you read it?”
“I feel like any serious art inevitably provokes complicated sets of emotions in me that resist easy reduction.”
“So you did not enjoy it?”
“‘Enjoy’ is too narrow a term to capture whatever virtues the artist was aiming for. I feel like giving a straight answer would serve to do nothing more than to open me up to being accused of exhibiting a lack of literary sophistication.”
“Sounds like you are afraid that the book was good but that you were not able to appreciate it fully. Which would be awkward because lots of other people said it was great. Kirkus named it 2021’s best.”
“I certainly managed to appreciate some of it.”


If you want a more insightful critical rundown of Harrow, and of Joy Williams’ oeuvre more generally, then I suggest Katy Waldman’s piece for The New Yorker.