Category: Books

Vienna Diary, July 10th

Wittgenstein’s Nephew by Thomas Bernhard is a novel, but also an authentic autobiographical account of the author’s relationship to Paul Wittgenstein — nephew to the famous philosopher and part of the once phenomenally wealthy Wittgenstein family. Like the celebrated philosopher, we are told, Paul discarded, frittered away, and gave away his share of the wealth, and as time went on he became an ever more constant frustration to the rest of the family due to regular breakdowns and a growing dependence on financial intervention. Bernhard very much liked Paul, having found in him an enlightened and fascinating friend; a man he judged to be quite unlike the rest of the family, whom (excepting the celebrated philosopher) he despised.

I said it is an autobiographical account, but is it? It is hard not to wonder to what extent there is an element of performance. The opinions that thicken this text are extreme and scathing, and I presume entirely sincere. He despises the countryside with the clear air that restores his ailing lungs, he despises the cafe culture for which Vienna is famous, he despises the literary prizes he wins despite his persistence in offending all about him, he despises the upper classes, and he despises Austrian towns and cities for not carrying the Swiss daily he wants to read. I was waiting for him to despise apple strudel as well. If I had come to the text without a modicum of context, I would have read the frank misanthropy of the narrator as a deliberate ploy of unreliability. The novel is one unbroken, first-person paragraph, which I would certainly have identified as the unmistakable indicator of the unhinged.

In case you don’t know, Bernhard is regarded as one of the most important post-war Austrian literary figures. I will say this: it is a singular reading experience. It is also deeply frustrating, alluding to detail, to specifics, to events, without actually describing them. I think by now in the science of creative writing there is a consensus that specificity is a virtue in prose. What is not a virtue is the following shit:

I could recount not just hundreds, but thousands of Paul’s anecdotes in which he is the central figure; they are famous in the so-called upper reaches of Viennese society, to which he belonged and which, as everyone knows, has lived on such anecdotes for centuries; but I will refrain from doing so.

Wittgenstein’s Nephew, page 60

Should I read that kind of sentence and continue to believe the author is writing in straightforward good faith? By that point I was convinced he was fucking with me. Maybe there is some part of the continental European mentality that eludes me. I would have enjoyed Bernhard pulling back the curtain a little further on the “upper reaches” he so disdained, and generously providing us with a little specificity. But maybe that is a vulgar inclination, and I too would be suspect under Bernhard’s gaze.

Mirrorflaking the ocean’s steely surface

After two decades of going over-schedule and over-budget, the James Webb Space Telescope is finally proving that there is no such thing as over-exposure when photographing deep space. Arriving at its final destination in orbit about the Sun, blossoming and unfolding its hexagonal petals, the JWST began soaking up photons — photons coming to the end of a long journey from the very distant past. Aside from some spacey-looking, Chris Foss-esque snapshots, serious technical data is being collected. By carefully observing the flickers of stars, scientists are measuring light spectra as distant planets pass over the face of their star. From these spectra, the composition of those planets’ atmospheres can be deduced.

Most of us will never get to see the Earth from orbit or walk on the moon. The solar system and beyond will remain bright dots in the night sky. The craft of astronomy is to deduce facts from a limited vantage point, with inevitably limited means, about phenomena that you will never observe directly. Astronomers refer to the chain of reasoning they employ as climbing the “cosmic distance ladder”. It’s the oldest scientific game going, with shockingly accurate estimates of the Earth’s size attributed to Eratosthenes in the third century BC.

We differ in many respects from those Greeks, not least of all in the extent and power of our technology. But here is another: we possess powerful convictions about the future, animated at least in part by science fiction. Some, when the JWST data is all brought together and processed, will be more than merely curious about the theoretical existence of life “out there”. Those people will be looking at those worlds as potential destinations for the future of our species.

When Hergé had Tintin set his ligne claire foot upon the moon before Neil Armstrong, he had to imagine how it could happen. There was a literature at the time, written by scientists and engineers, on the subject of rocketry and a future moon expedition, and Hergé consulted it. The aim was realism, and in that Hergé was remarkably successful. The field of “hard SF” — science fiction which sincerely aims to pay its dues to the equations — has shown much less restraint, but its writers nevertheless seem to harbor the belief that they are anticipating similar future developments. Many authors and readers believe these stories have social utility in either inspiring us toward grand feats or in anticipating the inevitable challenges that await us. Either way these writers become prophets, sensitive to the whispers of the goddess Science in their ears.


This review of Aurora, by Kim Stanley Robinson, will stray deeply into the “spoilery” territory some of you like to avoid. Territory where I’m not so much assuming you have read the book as that you are fine with me divulging the key contours of the plot so I can explain my grand theory about what is really going on. For my part, I started reading Aurora with a good idea of what lay ahead. The story, as far as I understood it going in, was going to be a fictionalized illustration of a guest essay Robinson wrote for Boing Boing many years ago — which I read at the time — laying down a thorough account of the reasons why we aren’t going to be colonising other planets or doing much of any travel outside our solar system.

The premise of Aurora is that humanity reaches for the stars anyway. We alight upon a generation star ship as it reaches the end of a 160-year journey to Tau Ceti — a nearby star possessing Earth-like planets. The generation star ship is a tool for science fiction writers; a means of carrying mankind across the unfathomable distances between stars. Rather than hyperdrive or cryosleep, you imagine a spaceship as a kind of grand terrarium in which a living, breathing, teeming, reproducing lump of Earth’s ecosystem, humans included, is sent off in a self-sustaining bubble. A nuclear reactor provides the constant input of energy required to stave off entropy, and with the turning of a few pages hundreds of years pass, generations rise and fall, and we arrive at the destination.

There are characters, but only three are really important; only two are human. Devi, the ship’s chief engineer, whose existence is devoted to fixing the alarming array of problems that keep presenting themselves, and Freya, her daughter, struggling to follow in her mother’s footsteps despite lacking any inclination toward the mathematical. Then there is the ship’s AI, who might dispute their status as a character at all, but who has been cultivated by Devi into a storyteller who narrates most of the novel (and, I would argue, all of it).

Readers responded most strongly to the AI narrator, won over by the ship wrestling with the way we employ metaphor and simile in our language. But this amounts to the ship transitioning from the dispassionate, exhaustive, technical register you’d expect from a computer to the kind of clichéd idiom you’d encounter in the prose of an eager yet unimaginative writer. That is, until there is some geography to be described, in which case they invariably write like the singular Kim Stanley Robinson.

To my mind, Devi is the most compelling character. She lives with a constant sense of anger. Not at anyone who shares the ship with her, but at those predecessors who volunteered themselves and condemned their offspring to the journey. As the chief engineer, she is responsible for addressing the oversights and shortcomings of the original design. As a consequence, she understands better and earlier than anyone else on board how truly nightmarish their reality is.

Kurt Vonnegut’s sixth of eight rules for writing seems especially relevant here:

Be a sadist. No matter how sweet and innocent your leading characters, make awful things happen to them—in order that the reader may see what they are made of.

In this respect Robinson uses the generation star ship as a terrible predicament for our characters. They aren’t intrepid voyagers; they are poor souls born in a tin can hurtling through the void. By the end it is the great-great-great-great-grandchildren of those who first set off from Earth who arrive at the destination. One of the all-time great sci-fi speculations, but awful for anyone who would be caught, trapped, sealed inside such a thing.

It’s not Robinson who is the sadist — it’s all of us. The trick in storytelling is allowing the reader to implicate themselves in the actions and events of a story. In Aurora’s case we are implicated as science-fiction readers, eagerly anticipating mankind setting foot on another world, with little consideration of the actual fate of the generations of voyagers stuck on the starship. We stand with that first generation of volunteers who effectively consigned their descendants to their bleak and unpromising fate.

We are to learn the hard way that “habitable” and “Earth-like” are terms of art for astronomers, applied to a broad family of planets. What resembles Earth from a great distance might not be a place we would want to live. Aurora is the planet the colonists settle on, with its limited prospects: oceans, wind, and oxygen. Filled with relief, they rush off the ship that has been their inter-generational prison and start building a new life. Unfortunately, those who expose themselves to the ostensibly breathable atmosphere start having extreme adverse reactions, quite likely from some mysterious microscopic alien organism that they can barely identify. Thus, they find themselves caught in another great literary trick: the double bind. Either they choose to settle an Earth-like planet that can potentially support life, in which case it will almost certainly be poisonous, or they choose an Earth-like planet that is inert and unlivable, on the promise that terraforming will make it livable in a few hundred years.

It is a long way to have traveled to arrive at such a truth. To be betrayed. The population struggles to accept this reality, falling into violent civil war, and ultimately having to divide itself. The ship is split in two. Half of them remain, believing that colonization will still be possible, while the other half set off on a return journey to Earth. At this point, quite conveniently, our heroes are presented with newly invented hibernation technology that will allow them to sleep through the return journey. Not only does this spare them from famine as the internal balance of their limited ecosystem begins to collapse, it also allows those who arrived at Tau Ceti to confront the society that sent them in the first place.

The final chapter of the book has no clear narrator (although I believe there is one implied). With insufficient means of slowing its approach, the ship performs a sequence of reverse gravity-slingshot manoeuvres so that the remaining colonists can be ejected back to Earth, while the ship itself is lost into the sun. Freya and her shipmates arrive on Earth, discovering it to be a home like nowhere else in the universe. Perversely, many on Earth are upset that they returned, feeling that they betrayed mankind’s true destiny. It is on this final ironic note that the novel might have ended, but instead we finish with an extended passage in which Freya goes to the beach and enjoys the waves. This final passage has proven particularly dissatisfying and mystifying to many readers.

With the ship perishing in its final orbit around the sun, the reader might be obliged to conclude that the ship couldn’t possibly be narrating the final chapter. But that is to forget why the ship was narrating at all. The ship’s narration was an exercise set by Devi as a means of explaining to those on board the nature of their predicament.

Devi: Ship! I said make it a narrative. Make an account. Tell the story.
Ship: Trying.
[…]
Devi: […] Do what you can. Quit with the backstory, concentrate on what’s happening now. Pick one of us to follow, maybe. To organize your account.
Ship: Pick Freya?
Devi: … Sure. She’s as good as anyone, I guess. And while you’re at it, keep running searches. Check out narratology maybe. Read some novels and see how they do it. See if you can work up a narratizing algorithm. Use your recursive programming, and the Bayesian analytic engine I installed in you.

While we can assume that everything up to a certain point is true, I don’t see any reason why the ship wouldn’t extrapolate further, simply because each story needs an ending. What better way to explain the situation than to imagine what might plausibly happen? To allow the passengers to foresee the future as it might likely be. The ship would be doing what Robinson argues science fiction writers have failed to do. That the narrative was in large part an AI hallucination would explain many things, not least of all the convenience of the hibernation technology. But also the final passage on the beach, which closely tracks the opening passage of the book, set out on a lake in one of the ship’s internal ecological containers. The ship is solving the “halting problem”, frequently alluded to, with a recursive, ouroboric structure, ending where we began.


Back in late 2020 and early 2021 I read a very different generation star ship novel (or really a quadrilogy of novels): The Book of the Long Sun by Gene Wolfe. As specific as Aurora’s preoccupations are to Robinson, he readily alludes to its influence:

One of the best novels in the history of world literature, Gene Wolfe’s Book of the Long Sun and Book of the Short Sun, a seven-volume saga telling the story of a starship voyage and the inhabitation of a new planetary system, finesses all these problems in ways that allow huge enjoyment of the story it tells. The novel justifies the entertaining of the idea, no doubt about it.

The central conceit of Wolfe’s story is that the population of the Whorl have forgotten that they live in a generation star ship — something that Robinson’s characters could only wish to forget. Like all of Wolfe’s works there is a literary preoccupation with the internal logic and the implications of who is telling the story. In the case of the Long Sun, save for a dropped pronoun, up until the very end the reader must assume that the text is narrated in the omniscient third person.

I don’t think that Wolfe saw himself as a practitioner of hard SF, and although he had given thought to how the Whorl worked, it all seems to be in service of providing the reader with a fantastic literary vista — a setting that forces the reader to reorient themselves in reading the text. The real appeal of SF, as far as I judge it, is not that the authority of science grants the writer powers of prognostication, but rather a certain means of suspending disbelief, and then remarkable destinations to carry the reader away to once their disbelief is in flight. It has been glibly asserted that SF needs to be more than jet-packs and ray guns, but Wolfe himself, even amid his literary shenanigans, was all about the jet-packs and ray guns. The problem was never the paraphernalia. The problem with a lot of SF was that the writing was not very good.


The ship was pulling its punches. I half expected the ship to arrive back at Earth to find that reintegration of the passengers into Earth’s ecosystem was impossible. How many new viruses would have evolved on Earth in their absence? How dangerous would their own bacteria have become to humans on Earth? Or perhaps they would be received back on Earth with abject horror. Inured to their own true state, they don’t appreciate how irradiated, starved, inbred, and traumatized they are. But these endings would not have suited the ship’s purposes, nor Robinson’s inclinations as a writer.

In the end mankind’s colonisation of the galaxy resembles less a science-based prediction of the future than a kind of secular heaven. It lives, like heaven, in our imaginations, where it can be left unchallenged, save for when we encounter true believers. The aspirations of Star Trek, like many religious traditions, are better inherited in a flexible, non-literal form. But that too has grim possibilities, as a divide opens up between the naive believers in our science fiction future and the better-schooled theologians who will inevitably have to choose their words carefully so as not to alienate the faithful. In the case of the novel, the readers are left to imagine how the passengers on the ship in Aurora might eventually have reacted to the story presented to them. Back in the real world, a work like Aurora entering the “canon” of SF might inoculate us, but if the resistance to accepting Joanna Russ’s We Who Are About To… into any kind of canon is any indication, Robinson has his work cut out.

The Long Tail Twitches

The 20th Century promised us flying cars, jet-packs, and Mars colonies. And how could it not deliver? It was already happening. The atomic age arrived, ending the Second World War in terrifying fashion, nuclear power was ushered in, and men walked on the moon. But where exactly did we fall short in achieving the envisioned grandeur of tomorrow? Was anything less than Star Trek going to be a disappointment?

The 21st Century birthed a fresh litter of promises and speculation; now it was the internet that was going to transform mankind. And how could it not deliver? The internet was already here. We were connecting and reconnecting via social networks, being blessed with unlimited email inbox space, discovering with instantaneous search, guiding ourselves through meatspace via smartphone GPS, and abandoning TV for streaming. You have to separate the broken promises from those which became a nightmare.

One promise was of particular interest for those who, like me, feel some investment in our arts and culture (in my own case, being somewhat ‘bookish’). It was argued in a Wired essay, then spun into a book, that the internet would fundamentally change the market for culture; supply would meet demand more responsively and more dynamically. We had unmet needs stuck out in the “long tail”. The niche and otherwise obscure would reach the audiences who would appreciate it. In many ways it has. But the true implication of the long tail is that most cultural output is now the krill that the big conglomerate whales feed off. No individual artist living in the long tail will make any real money, but a corporation taking a 10% cut from the entire long tail has a healthy return. We won’t be leaving the old gatekeepers behind any time soon, and we still live under the tyranny of the lowest common denominator.

The new media landscape that has emerged has deeply frustrating shortcomings — even for increasingly well-served readers. As Matthew Claxton observed in his newsletter on the phenomenon of the Brandon Sanderson kickstarter, if you go to Sanderson’s Amazon page for any of his books, the algorithmically generated recommendations put you at the bottom of a pit full of other Sanderson titles. The algorithm sends you to books you are likely to buy, which ultimately means bestsellers very similar to the bestseller you are already looking at. There is no effort to introduce you to new releases you might be interested in, or take you anywhere off the beaten path. Amazon has no interest in doing the things that booksellers enjoy doing, which must be utterly infuriating for booksellers, who are learning the hard way just how irrelevant they are to the business of simply selling as many books as possible. Amazon could have been interesting if they weren’t so singularly focused on their algorithms.

What kind of bookseller would feel any pride in just putting the most obvious bestseller into the customer’s hands? One who only cared about making money, probably. One who had no investment in the treasure of discovery, certainly.

Let’s face it though: all of this talk is tiresome. The failures of our futurists are far less intriguing than the question of how we actually stumble upon our treasures of discovery. There is no reading list for life, and it is difficult to even talk seriously of a canon. It is more that we have inherited something which is both a legacy and a still-growing, thriving enterprise. Both past and future works outpace us as more gets published than ever, and older works are rediscovered and re-evaluated. Every book listicle is a desperate attempt at navigating this reality. Engagement with this leviathan is no small feat: a book demands an unusual degree of engagement, in terms of time, concentration, and intellectual commitment, compared to most other narrative art forms.

To look back and wonder why I read any particular book is to confront free will itself. Something that the ideology of big tech pays some tribute to in all its talk of providing us with more “choice”. But it is reassuring to look back and find half a mystery and plenty of serendipity.

Some books I have read in large part because they were sat on a shelf before me, especially when I was younger. I read Robert Harris’s counterfactual thriller, which I found in the Welsh holiday cottage my family stayed at, on the strength of the implied horror of the blurb alone (what if…?). A similar story with Witches Abroad by Terry Pratchett, with the Josh Kirby cover art drawing me in rather than the blurb. It was bound to happen sooner or later — there are enough copies of both Harris and Pratchett out there that maybe it was only a matter of time.

Sometimes it is someone else’s initiative that leads me over the threshold of the first chapter — I make a point of reading the books that friends and acquaintances recommend to me. HHhH by Laurent Binet. Rant by Chuck Palahniuk. Death at Intervals by Jose Saramago. Me Talk Pretty One Day by David Sedaris. The Cuckoo’s Egg by Cliff Stoll. Fifth Business by Robertson Davies. 4 3 2 1 by Paul Auster. A Walk in the Woods by Bill Bryson. Recommendations removed from book marketing and publishing log-rolling, and read free from the usual burden of expectations. I really can’t think of any that have been remotely dissatisfying.

Then there are the recommendations of famous and celebrated writers. This is treacherous ground, with all the log-rolling, blurbing, favors, politics, and insufferably chummy niceness that ultimately muddies the waters. You rarely hear an author expressing dislike for another author’s work, but any serious writer must have stronger opinions than they let on. A reader with a developed sense of taste should be discarding books they find weak or disagreeable, and I find it hard to believe a writer has the stones to make strong creative decisions if they can’t abandon a dismal novel.

I can think of two writers whose recommendations I would trust. This isn’t necessarily correlated to my opinion of them as writers, just on how their recommendations seem to have worked out. 1. Neil Gaiman — who seems to have a consistent knack for championing and recommending great work in sci-fi, fantasy, and horror. He led me to Gene Wolfe, Susanna Clarke, and Jonathan Carroll. 2. Will Self — who, when he isn’t musing on the future of the novel, recommends some seriously interesting reads — The Strange Last Voyage of Donald Crowhurst by Nicholas Tomalin and Ron Hall, and My Father and Myself by J. R. Ackerley. (I also need to get around to Riddley Walker by Russell Hoban, which he has been effusive about.)

It is worth noting that The Book of the New Sun by Gene Wolfe has a more complicated provenance than the Neil Gaiman recommendation. I have a vivid memory of being in high school, reading a glossy-paged coffee table book that must have been Science Fiction: The Illustrated Encyclopedia by John Clute. I pored longingly over the reproduced cover art and read the accompanying text. One book sounded particularly arresting. A singular reading experience in which the reader is introduced to what is ostensibly a fantasy setting before it is gradually revealed, through various clues and hints, that it is in fact the Earth of the distant future, as the Sun is dying. It was a quadrilogy, which back then seemed like an unthinkable commitment, even if I could track down a copy. Yet the memory of the book lodged itself deeply in my mind as something that I should read. But I didn’t memorize the author or title, so I really have no idea how I managed to rediscover it later.

And then there are the lists. There are so many book lists, and you might wonder what possible use they may have. One list in particular — 1001 Books You Must Read Before You Die — published in book form, left a lasting impression on me because I quite seriously set about trying to read them all. I was young, and the attempt could not have lasted more than a year. But this would have been how I discovered Philip Roth, David Foster Wallace, Ian McEwan, and many other “obvious” authors. I only got about a third of the way through A Suitable Boy by Vikram Seth, and I should probably get around to giving it another go. There was something incredibly empowering about just going in blind, irrespective of any perceived “difficulty” the book might have. I suspect it liberated me as a reader.

There have been other lists that have been useful. The New York Times Best of the Year lists and the Booker prize shortlists have been worthwhile (A Tale for the Time Being by Ruth Ozeki being a standout discovery). Sometimes books just have reputations: 1984, Brave New World, The Selfish Gene, and Sapiens. Sometimes they got mentioned repeatedly on the New York Times Book Review podcast, and the authors interviewed very well — so I had to pick up Educated by Tara Westover and Priestdaddy by Patricia Lockwood. Say what you like about the NYTBR, but for someone with little interaction with the humanities side of academia, the highly networked worlds of genre and lit-fic, or publishing more generally, it has been the best means of access available to me for the vicarious thrill of insider book-talk.

And who wrote those lists? Critics, you’d hope. Book reviews do in fact occasionally drive me toward reading a specific something. It was newspaper reviews that led me to Nothing Is True and Everything Is Possible by Peter Pomerantsev and Countdown to Zero Day by Kim Zetter. I’ve tried a few books on the strength of them getting an A on The Complete Review. And I was given the impetus to read Aurora by a New Yorker profile of Kim Stanley Robinson from the past year. But disappointingly it seems that it is the general critical consensus that moves me to read something rather than any one given critic. I guess James Wood is responsible for bringing Karl Ove Knausgård and Elena Ferrante to our attention, and I have read books by both of them. But it wasn’t James Wood’s writing that sent me to the bookstore.

Unfortunately it seems that my need to read a book accumulates within me as mysteriously as the workings of any algorithm. I badly needed to get around to reading The Secret History by Donna Tartt and The Sparrow by Mary Doria Russell, but I couldn’t point to any one particular recommendation. And I read Gilead by Marilynne Robinson aware of the general critical acclaim it enjoys. It’s not unusual for me to read a book review and conclude that the book in question sure sounds great, only to file the note away in the overflowing mental filing cabinet.

And then there is the fact that one book can lead to another. I’ve spent a fair amount of time on various authors’ Wikipedia pages trying to make sense of their influences. I’ve never made anything like a systematic study, but I read Jack Vance’s Tales of the Dying Earth because of Gene Wolfe, and dipped into Dickens because of how much his novels have meant to Donna Tartt, among others. Authors may bridle at the question of where they get their ideas from, but tracking down their influences is fantastically easy. I could of course go about it the other way around and enroll in a “Great Books” course so I could convince myself that there is a coherent trajectory in literature that arrives at the present, but I think that’s better left as a vague aspiration.

A great deal of ink has been spilled in service of justifying extensive personal libraries of books that couldn’t possibly all be read in the owner’s lifetime. But their principal justification must be the ready richness of possibility that lies open before you when choosing the next book. We aren’t whales consuming books like krill. A book takes up residence in your inner life for days, weeks, months, or years. The cerebral furniture has to be arranged to accommodate the guest. As readers it doesn’t make sense to think about the “long tail” when the kinds of books I read lived out there long before the internet came along. The long tail isn’t a particularly meaningful concept to readers like me, except perhaps in reference to the way certain books can stay with us long after we’ve closed their covers.

The vibe moves

In 1633 Galileo was put on trial by the Roman Catholic Inquisition. The ailing mathematician suffered through the indignity and ordeal of the trial to be declared guilty of advocating heliocentrism — the theory that the sun, not the earth, lies at the center of the universe. Galileo’s book, Dialogue Concerning the Two Chief World Systems, was the principal piece of evidence presented against him. But as Dan Hofstadter belabors in his book The Earth Moves, there was no material debate on the merits of Galileo’s arguments. The only relevant question was whether Galileo had usurped the authority of the Catholic Church over the interpretation of scripture.

So the Counter-Reformation was fighting a losing battle against a freer, and often less literal reading of the Scriptures. Yet if there was one thing that had concerned the Council of Trent, it was the possibility that laymen would decide for themselves what passages in the Bible could be interpreted other than literally. In fact, the issue of the earth traveling about the sun had little if any bearing on the Catholic faith. But the notion that persons without theological training could decide for themselves to read this or that biblical passage in a non-literal sense constituted a mortal danger for Catholicism in the early seventeenth century.

Hofstadter’s book focuses on the trial, while also giving background on Galileo’s observations of the night sky through his recently invented telescope. We are not treated to a full biography of Galileo, nor to the kind of exposition — which I would certainly have benefited from — on the nature of the reformation, the counter-reformation, or the Roman Catholic Church of the time. Hofstadter’s own interests clearly lie in discursive discussions of Galileo’s engagement with the art and culture of the period.

There is much that is interesting, but also a great deal that is frustrating. After outlining the events of the trial, Hofstadter hedges against saying anything too definitive about the affair, conceding that both the trial itself and its final verdict may well have been the consequence of unknown Papal intrigues and the obscure politics of the time. The book doesn’t seem to know what to do with certain pieces of context. Take the following insight into what we can know of an individual’s religious conviction:

We do not know what Galileo or anybody really believed at this period, since religious belief was prescribed by an autocracy and heresy was an actionable offense. If one had misgivings, one kept them to oneself, so it would be naive to take religious ruminations penned in the papal realm or its client territories at face value. The Inquisition’s own records confirm that many people harbored reservations and heretical beliefs: before the Counter-Reformation, they had been much more candid about them.

Galileo had built a telescope that provided a tiny, limited window that gave him a better view of the solar system. This insight shaped a scientific conviction that led him to stray onto territory that the Church had claimed authority over. But it didn’t have to be science. If anything, science was at the fringes of what the Church controlled. Moral, social, and religious matters were the principal victims. Science just happened to be the issue over which the Church managed to look definitively foolish. Frankly, their other strictures look similarly bad today, at least to my eyes. The extract above seems to hit on something at least as important as the actual science that Galileo practiced. It almost seems to serve the Church to frame the affair as “science vs religion”, rather than as “religion vs freedom of thought”.

Hofstadter clearly loves Galileo the Renaissance man, immersed in the art and culture of his day. I sensed a real nostalgia for the pre-Two Cultures world. There is a valor and a virtue popularly recognized in those early scientists — the “genius” we ascribe to those who were the first to figure certain things out. I am perhaps sensitive to a certain kind of slight made against the “institutionalized” scientists of today, who are fantastically empowered by the inherited work of earlier scientists. So great is the inheritance that it inevitably dwarfs any possible contribution they can make. It is a sentiment often derived from the valorization of those heroes of the scientific revolution. Sure, you can hear them sneer, you’re clever, but you aren’t a genius. I don’t sense any of that from Hofstadter. I could imagine him saying something more like: Sure, Galileo’s reasoning reads as scientifically illiterate to us today, but unlike academics today, he could actually write.

In case you don’t believe me about how Galileo’s reasoning reads today:

Aristotle had had no conception of impetus, and thus no conception of motion corresponding to what we may see and measure. He thought that the medium through which objects travel sustains their motion. By contrast, Galileo wrote “I seem to have observed that physical bodies have physical inclination to some motion,” which he then described — lacking the mathematics for an exact characterization — by a series of “psychological” metaphors, themselves of partly Aristotelian origin: inclination, repugnance, indifference, and violence. […] Galileo’s conception of the Sun’s motion is necessarily hesitant and ambiguous, and he was wary of flatly stating general principles. But one can perceive here the rough outline of what would become Newton’s first law of motion.

What Aristotle, and thus Galileo, lacked was the basic material now taught to high schoolers and undergraduates in science and mathematics.

It is certainly true that these were exciting and dangerous times. Scientific progress has a particular quality of looking more exciting the further back you stand from it. But the actual doing of science is in the close-up; in the detail. And when you do look closer at the danger, at least in the case of Galileo, it looks far more grim than it does exciting.

The Final Word

It is the mark of clear thinking and good rhetorical style that when I start a sentence, I finish it in a suitable, grammatical fashion. A complete sentence is synonymous with a complete thought. In the world of AI, completing sentences has become the starting point for the Large Language Models which have started talking back to us. The problem that the neural networks are quite literally being employed to solve is “what word comes next?” Or at least this is how it was explained in Steven Johnson’s excellent Times article on the recent and frankly impressive advances made by the Large Language Model GPT-3, created by OpenAI.
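To make “what word comes next?” concrete, here is a toy sketch of the task. It is emphatically not how GPT-3 works; it simply shows the shape of the problem being solved, using a made-up corpus:

```python
# A toy "next word" model: count which word follows which in a tiny
# corpus, then predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog"
).split()

follows = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    follows[word][next_word] += 1

def predict(word):
    """Return the continuation seen most often in training, if any."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("the"))  # 'cat': the most common word after 'the' here
print(predict("sat"))  # 'on'
```

GPT-3 replaces these raw counts with billions of learned coefficients and conditions on far more than the single preceding word, but the question being answered is the same.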

As is apparently necessary for a Silicon Valley project founded by men whose wealth accrued through means as prosaic as inherited mineral wealth and building online payment systems, OpenAI sees itself on a grand mission for both the protection and flourishing of mankind. They see beyond the exciting progress currently being made in artificial intelligence and foresee the arrival of artificial general intelligence. That is to say, they extrapolate from facial recognition, language translation, and text autocomplete all the way to a science fiction conceit. They believe they are the potential midwives to the birth of an advanced intelligence, one that we will likely struggle to understand, and should fear, as we would a god or alien visitors.

What GPT-3 can actually do is take a fragment of text and give it a continuation. That continuation can take many precise forms: it can answer a question. It can summarize an argument. It can describe the function of code. I can’t offer you any guarantee that it will actually do these things well, but you can sign up on their website and try it for yourself. The effects can certainly be arresting. You might have read Vauhini Vara’s Ghosts, in the Believer, written about her sister’s death. She was given early access to the GPT-3 “playground” and used it as a kind of sounding board to write honestly about the death of her sister. You can read the sentences that Vara fed the model and the responses offered, and quickly get a feeling for what GPT-3 can and can’t do.

It will be important for what I am about to say that I explain something to you of how machine learning works. I imagine that most of you reading this will not be familiar with the theory behind artificial intelligence, and are possibly intimidated by it. But at its core it is doing something quite familiar.

Most of us at school will have been taught some elementary statistical techniques. A typical exercise involves being presented with a sheet of graph paper, with a familiar x and y axis and a smattering of data points. Maybe x is the number of bedrooms, and y is the price of the house. Maybe x is rainfall, and y is the size of the harvest. Maybe x is the amount of sugar in a food product, and y is the average weekly sales. After staring at that cluster of points for a moment, you take your pencil and ruler and set down a “line of best fit”. From the chaos of those disparate, singular points on the page, you identify a pattern, a correlation, a “trend”, and impose a straight line — a linear structure — on it. With that line of best fit drawn, you can then start making predictions. Given a value of x, what is the corresponding value of y?

This is, essentially, what machine learning and neural networks do.

The initial data points are what is referred to as the training data. In practice, however, this is done in many more than two dimensions — many, many more, in fact. As a consequence, eyeballing the line of best fit is impossible. Instead, that line is found through a process called gradient descent. Taking some random choice of line as a starting point, small, incremental changes are made to the line, improving the fit with each iteration, until the line settles close to the presumed shape of the training data.
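For the curious, here is a minimal sketch of that procedure: gradient descent fitting a line to made-up data, using nothing beyond the standard library.

```python
# Fitting y = m*x + b by gradient descent on synthetic data.
import random

# Made-up training data scattered around the line y = 3x + 4.
data = [(x, 3 * x + 4 + random.gauss(0, 1)) for x in range(20)]

m, b = 0.0, 0.0          # start from an arbitrary line
learning_rate = 0.001

for _ in range(10_000):
    # Gradients of the mean squared error with respect to m and b.
    grad_m = sum(2 * (m * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (m * x + b - y) for x, y in data) / len(data)
    # Nudge the line slightly "downhill", improving the fit each time.
    m -= learning_rate * grad_m
    b -= learning_rate * grad_b

print(f"fitted: y = {m:.2f}x + {b:.2f}")          # close to y = 3x + 4
print(f"prediction at x = 25: {m * 25 + b:.2f}")  # given x, what is y?
```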

I say “lines”, but I mean some kind of higher dimensional curves. In the simplest case they are flat, but in GPT-3 they will be very curvy indeed. Fitting such curvy curves to the data is more involved, and this is where the neural networks come in. But ultimately all they do is provide some means of lining up, bending, and shaping the curves to those data points.

You might be startled that things as profound as language, facial recognition, or creating art might all be captured in a curve, but please bear in mind that these curves are very high dimensional and are very curvy indeed. (And I’m omitting a lot of detail.) It is worth noting that the fact we can do this at all has required three things: 1) lots of computing power, 2) large, readily available data sets, and 3) a toolbox of techniques, heuristics, and mathematical ideas for setting the coefficients that determine the curves.
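And to give a taste of the “curvy” case, here is the same gradient descent procedure with a small neural network bending the line through a nonlinearity. A sketch under toy assumptions: a sine wave instead of language, sixteen hidden units instead of billions of coefficients.

```python
# A one-hidden-layer neural network fit to y = sin(x) by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# The coefficients ("weights") that determine the shape of the curve.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)      # the nonlinearity that bends the line
    y_hat = h @ W2 + b2
    err = y_hat - y
    # Gradients of the mean squared error, via the chain rule.
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x);  gb1 = dh.mean(axis=0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= lr * g               # the same incremental nudges as before

print(f"mean squared error after fitting: {float(np.mean(err ** 2)):.4f}")
```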

I’m not sure that the writing world has absorbed the implications of all this. Here is what a Large Language Model could very easily do to writing. Suppose I write a paragraph of dog-shit prose. Half-baked thoughts put together in awkwardly written sentences. Imagine that I highlight that paragraph, right click, and somewhere in the menu that drops down there is the option to rewrite the paragraph. Instant revision with no new semantic content added. Clauses are simply rearranged, the flow is adjusted with mathematical precision, certain word choices are reconsidered, and suddenly everything is clear. I would use it like a next-level spell checker.

And that is not all: I could revise the sentence into a particular prose style. Provided with a corpus of suitable training data I could have my sentences stripped of adjectives and set out in terse journalistic reportage. Or maybe I opt for the narrative cadence of David Sedaris. So long as there is enough of their writing available, the curve could be suitably adjusted.

In his Times article, Johnson devotes considerable attention to the intention and effort of the founders to ensure that OpenAI is on the side of the angels. They created a charter of values that aspired to holding themselves deeply responsible for the implications of their creation. A charter which reads as frankly and literally hubristic, as they anticipate the arrival of Artificial General Intelligence while fine-tuning a machine which can convincingly churn out shit poetry. Initially founded as a non-profit, they have now birthed the for-profit corporation OpenAI LP, but made the decision to cap the potential profits for their investors, with Microsoft looming particularly large.

But there was another kind of investment made in GPT-3: all the collected writings that were scraped up off the internet. The raw material that is exploited by the gradient descent algorithms, training and bending those curves to the desired shape. Ultimately, it is true that they are extracting coefficients from all that text-based content, but it is unmistakable how closely those curves hew, in their abstract way, to the words that breathed life into them. OpenAI themselves explain that unfiltered internet content is actively unhelpful. They need quality writing. Here is how they curated the content for GPT-2:

Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny.

(From Language Models are Unsupervised Multitask Learners)

For all of Johnson’s discussion of OpenAI’s earnest proclamations of ethical standards and efforts to tame the profit motive, there is little discussion of how the principal investment that makes the entire scheme work is the huge body of writing available online that is used to train the model and fit the curve. I’m not talking about plagiarism; I’m talking about extracting coefficients from text, which can then be exploited for fun and profit. Suppose my fantasy of a word processor tool that can fix prose styling comes true. Suppose some hack writer takes their first draft and uses a Neil Gaiman-trained model to produce an effective Gaiman pastiche that they can sell to a publisher. Should Gaiman be calling his lawyer? Should he feel injured? Should he be selling the model himself? How much of the essence of his writing is captured in the coefficients of the curve that was drawn from his words?

Would aspiring writers be foolish not to use such a tool? With many writers aiming their novels at the audiences of existing writers and bestsellers, why would they want to gamble with their own early, barely developed stylings? Will an editor just run what they write through the Gaiman/King/Munroe/DFW/Gram/Lewis re-drafter? Are editors doing a less precise version of this anyway? Does it matter that nothing is changing semantically? Long sentences are shortened. Semi-colons are removed. Esoteric words are replaced with safer choices.

The standard advice to aspiring writers is that they should read. They should read a lot. Classics, contemporary, fiction, non-fiction, good, bad, genre, literary. If we believe that these large language models at all reflect what goes on in our own minds, then you can think of this process as being analogous to training the model. Read a passage, underline the phrases you think are good, and leave disapproving marks by the phrases that are bad. You are bending and shaping your own curve to your own reward function. With statistical models there is always the danger of “over-fitting the data”, and in writing you can be derivative, an imitator, and guilty of pastiche. At the more extreme end, when a red-capped, red-faced member of “the base” unthinkingly repeats Fox News talking points, what do we have but an individual whose internal curve has been over-fitted?

It is often bemoaned that we live in an age of accumulated culture, nostalgia, retro inclination. Our blockbusters feature superheroes created in a previous century. There is something painfully static and conservative about it all. But what if artificial intelligence leads us down the road to writing out variations of the same old sentences over and over again?

In Barthes’ essay The Death of the Author, he asserts:

We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centers of culture.

Given that Barthes was quite likely half-bullshitting when he threw out phrases like “multi-dimensional space”, what I have quoted above is a disturbingly accurate description of the workings of a large language model. But it isn’t describing the large language model. It is a description of how we write. Barthes continues:

the writer can only imitate a gesture that is always anterior, never original. His only power is to mix writings, to counter the ones with the others, in such a way as never to rest on any one of them. Did he wish to express himself, he ought at least to know that the inner ‘thing’ he thinks to ‘translate’ is itself only a ready-formed dictionary, its words only explainable through other words, and so on indefinitely.

Maybe Gaiman would have no more cause to call his lawyer than the great many writers he absorbed, reading and then imitating in his youth and early adulthood. Maybe, if Barthes is to be believed here, the author is dead and the algorithm is alive. Our creativity is well approximated by a very curvy curve in a high dimensional space.

One potential outcome that does not seem to have been considered by our Silicon Valley aristocracy is that the Artificial General Intelligence they bring into this world will be an utterly prosaic thinker with little of actual interest to say. It shouldn’t stop it from getting a podcast, though.

Or maybe Barthes was wrong, and maybe Large Language Models will continue to be deficient in some very real capacity. Maybe writers do have something to say, and their process of writing is their way of discovering it. Maybe we don’t have to consider a future where venture capitalists have a server farm churning out viable best-sellers in the same fashion they render CGI explosions for the latest Marvel movie. Maybe we should get back to finishing the next sentence of our novel because the algorithm won’t actually be able to finish it for us. Maybe.

The Punchline is Redundant

In graduate school, I was friends with a young man of a particularly restless disposition — a mathematician of a waggish inclination, given to a certain kind of tomfoolery. Often his antics would take the form of games of such banal simplicity that they felt like elaborate, conceptual pranks.

One game he set a number of us playing, during a longueur one evening together with friends, sticks in my mind. Having first had each of us commit solemnly to absolute honesty, we each chose a number, greater than or equal to zero, which we would then one after the other reveal (committed as we were to honesty), and whoever had chosen the lowest number that no one else had chosen was the winner. Several rounds were played, and while everyone wrestled with the question of whether to choose zero, or maybe one, trying to second-guess each other, I refused to join in, offended by the very nature of the game.
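For anyone who wants the rule stated without ambiguity, here is a minimal sketch (the number choices are invented for illustration):

```python
# The lowest-unique-number game: whoever picked the smallest number
# that nobody else picked wins; if no number is unique, nobody wins.
from collections import Counter

def winner(choices):
    """Return the index of the winning player, or None."""
    counts = Counter(choices)
    unique = [n for n, c in counts.items() if c == 1]
    return choices.index(min(unique)) if unique else None

print(winner([0, 0, 1, 2]))  # 2: player 2's 1 is the lowest unique pick
print(winner([0, 0, 1, 1]))  # None: every number was picked twice
```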

A second game stays with me as well: pulling a mathematics journal from the shelf in the math department common room, my friend began reading aloud random sentences from various articles, pausing before the final word and inviting another friend to guess it. He did pretty well, as I recall.

There was something powerful about these games. The first game, stripped of all the frivolity, ritual, adornment, and pretense that usually accompany games, revealed the essential nature of what a game is. That is to say, a “game” in the sense that the mathematician John von Neumann formulated it. To von Neumann’s way of thinking, Chess was not a game in the sense he cared about: perfectly rational players would know the perfect set of moves to play, and thus they would play those moves. He was more interested in Poker, where players have incomplete information (the cards in their hand and on the table), are left to compute the probabilities, and devise strategies.

Good poker players do not simply play the odds. They take into account the conclusions other players will draw from their actions, and sometimes try to deceive the other players. It was von Neumann’s genius to see that this devious way of playing was both rational and amenable to rigorous analysis.

The Prisoner’s Dilemma — William Poundstone

I recently discovered that my friend was not the true inventor of the second game either. Reading The Information by James Gleick, I learned that Claude Shannon, the founder of information theory, played a variation with his wife, as a kind of illustrative experiment.

He pulled a book from the shelf (it was a Raymond Chandler detective novel, Pickup on Noon Street), put his finger on a short passage at random, and asked Betty to start guessing the letter, then the next letter, then the next. The more text she saw, of course, the better her chances of guessing right. After “A SMALL OBLONG READING LAMP ON THE” she got the next letter wrong. But once she knew it was D, she had no trouble guessing the next three letters. Shannon observed, “The errors, as would be expected, occur more frequently at the beginning of words and syllables where the line of thought had more possibility of branching out.”

The Information — James Gleick, page 230

Shannon’s counter-intuitive insight was to consider “information” through a notion he called entropy, which quantitatively captured the amount of new, novel, and surprising content in a message. Thus, the meaningful sentences of a novel, or indeed a math paper, contain all kinds of redundancy, while a random sequence of letters is surprising from one letter to the next, and therefore contains more of this stuff he referred to as “information”.
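Shannon’s entropy has a compact formula: H = -Σ p log2(p), summed over symbol frequencies. As a rough sketch (using single-letter frequencies only, which understates the redundancy of real prose):

```python
# Estimate bits per character from single-letter frequencies.
import math
import random
import string
from collections import Counter

def entropy(text):
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

prose = "a small oblong reading lamp on the desk " * 25
noise = "".join(random.choice(string.ascii_lowercase + " ")
                for _ in range(len(prose)))

print(f"prose:  {entropy(prose):.2f} bits/char")  # around 4 bits
print(f"random: {entropy(noise):.2f} bits/char")  # near log2(27), about 4.75
```

Accounting for context, as Betty’s guessing did, drives the figure for English far lower (Shannon estimated roughly one bit per letter), while random text stays surprising from one letter to the next.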

Von Neumann’s ideas about games would go on to shape the technocratic world view that was ascendant in the 20th century. Beyond mathematics, the kind of games he defined could be found in the fields of economics, social policy, geopolitics, and most infamously: the exchange of nuclear weapons.

Shannon’s ideas would have their greatest successes in science, and not only in the field of communication, where error-correcting codes and encryption are the direct and intended applications of such thinking, but also in biology, when DNA was discovered and life itself appeared to be reducible to a finite sequence of four letters, and in physics, via thermodynamics and later quantum mechanics, as information became a fundamental notion.

There is a variation on Shannon’s game that is a well-established tradition around the Christmas dinner table: reading Christmas cracker jokes. (Popular within the former Commonwealth, but maybe less well known in the US.) Having pulled the crackers and set the crepe party hats upon our heads, each of us will in turn read the setup of our joke, leaving the rest of the table to guess the punchline. The meta-joke being that while punchlines are supposed to be surprising, and thus amusing, Christmas cracker jokes are typically so bad that their puns are quite predictable. Thus, somehow, in their perverse predictability, the jokes are funny all over again. But does that make them low entropy? Only if you allow for the mind to be addled enough that the punchline becomes predictable.

This is an important point. The ultimate arbiters of the question of assumed knowledge that Gleick offers are hypothetical aliens receiving our radio signals from across the galaxy, or the very real computers that we program here on earth. They do not share any of our cultural baggage and thus could be considered the most accurate yardsticks for “information”. When Gleick’s book was written, over a decade ago now, we had very different ideas about what computers and their algorithms should look like or be capable of doing. That has all changed in the intervening decade, with the arrival of powerful artificial intelligence that gives the kind of output we once could only have hoped for. The notions that Gleick covers were defined precisely and mathematically, but our intuitions for these concepts, even for the layperson, are dramatically shifting. Not that it would be the first time our expectations and intuition have shifted. We should recognize ourselves in Gleick’s description of the amusing misunderstandings that the new-fangled telegraph technology created upon its arrival.

In this time of conceptual change, mental readjustments were needed to understand the telegraph itself. Confusion inspired anecdotes, which often turned on awkward new meanings of familiar terms: innocent words like send, and heavily laden ones, like message. There was a woman who brought a dish of sauerkraut into the telegraph office in Karlsruhe to be “sent” to her son in Rastatt. She had heard of soldiers being “sent” to the front by telegraph. There was the man who brought a “message” into the telegraph office in Bangor, Maine. The operator manipulated the telegraph key and then placed the paper on the hook. The customer complained that the message had not been sent, because he could still see it hanging on the hook.

More mysterious still is the way information persists once it has arrived. Black Holes provided a thorny problem for physicists, but my own waggish friend poses his own set of questions. Assuming that he had not taken a course in information theory, or read of Shannon (which he may well have), that leaves the possibility that when he concocted his games he was subconsciously tapping into some kind of collective or ambient understanding. It is one thing for the theory to be taught and for students to study the equations. It is quite another thing when ideas pervade our collective thinking in ways that cannot be easily accounted for. Information theory works when we can point to the individual bits and bytes. Things become much more tricky when not only can we not find the bits and bytes, but the information is thoroughly not discrete, not even analogue, just out there in some way we don’t yet know how to think about.

Unfortunately, there was a booksale.

Some years ago — never mind how long precisely — having little or no money in my purse, and nothing particular to interest me in the UK, I thought I would do a PhD in Montreal, and see a little of the Francophone world. Perhaps there was some element of driving off the spleen, and regulating the circulation. Learning French at an Anglophone university was trickier than I would have liked. I did however persevere with the language while obtaining mon doctorat, and eventually I could read, with dictionary assistance, a contemporary novel or comic. The latter, I discovered, had a rich tradition and active culture in France, and as a consequence a serious presence in the neighborhood bibliothèque.

When I left the Francosphere behind, and eventually arrived back in the Anglosphere, the French language fell out of sight and out of mind. There was an abundance of English prose I badly wanted to read, so my habit of reading in French all but disappeared. That is, until very recently, when I was inspired by a blog post I stumbled upon. The author of The Untranslated reflects on five years of running his quite wonderful blog reviewing untranslated books. A great deal of it concerns the practice of learning new languages with the aim of reading specific works. Which makes a refreshing contrast to the usually proffered motive for language acquisition: being able to order food or ask for directions like a local. So inspired, I set about throwing myself once again into French literary waters.

My first port of call was my go-to French webcomic, Bouletcorp. Started in 2004, it is a veteran of the scene. A typical comic depicts incidents from the author Boulet’s life, alongside ruminations and observations on everything from the quotidian irritations of modern life to lazy tropes in TV and movies. To my rather fanciful way of reading them, these are visual essays in the tradition of Montaigne. A more suitable reference point is Calvin and Hobbes, in the way Boulet frequently lets his imagination run in fanciful and speculative directions, rendering scenes reminiscent of Watterson’s Sunday strips, filled with all the dinosaurs, alien landscapes, and monsters that populated Calvin’s imagination. One situation that Boulet returns to more than once is his encounters with obnoxious members of the public attending his in-person signings at conventions. In one such strip Boulet anonymizes the offending individual by drawing them in various forms: an ape, a cockerel, and finally a grim-looking salad bowl of merde. Boulet is an artist of incredible versatility, belonging to the European tradition of Moebius. I grew up in the UK reading the frankly impoverished offerings of the Dandy and Beano, and picking through the debauched excesses of American superhero comics. So I feel like I missed out on the sophisticated French tradition of bande dessinée. At least I had the adventures of Asterix & Obelix, and Tintin.


Reading my way through the Bouletcorp back catalogue I found one comic in particular that I responded to deeply. Created for the 2017 Frankfurt book fair website, it is a reminiscence of youth. A tweenage Boulet is torn from his ordinateur and obliged to do his required reading for school. Disinclined towards his duty, he picks the shortest story in the collection, and finds himself drawn in, captivated, and horrified by the gothic power of Prosper Mérimée’s La Vénus d’Ille. Understanding that he wanted more of whatever that was, he discovers, with the help of the school library, Poe, King, Asimov, and all kinds of fantastical fiction.

As readers we do not get to consume our art communally. Theater lovers go to watch actors tread the boards; film fans get to attend screenings; music fans flock to gigs; football fans sing together in their home stadiums. The solitude of reading has many benefits, to be sure. Frequently the best books we read have a strange power that is hard to assess, and their merits might only become clear on second or third readings. But no doubt you have known a friend who has thrust a book into your hands that they can’t stop thinking about, telling you: “You have to read this.” So that once you too have read it, they can finally talk about it.

In much the same way I enjoy returning to a beloved book, I enjoy being reminded of the revelation that is reading. Of discovering the stories that suspend my disbelief and subvert my expectations. The books which captivated Boulet have some overlap with the books I read as a teenager, but it is his experience that resonates so strongly.

That is why, of all the many podcasts that have devoted themselves to the works of the late Gene Wolfe, it is the reader interviews of the Rereading Wolfe podcast that I remain most excited about. The careful chapter-by-chapter analysis that all the Gene Wolfe podcasts offer is fine and good and wonderful. But there is something reaffirming about people discussing their responses to the work. In one utterly remarkable episode, a reader describes discovering The Book of the New Sun as she grew up in a cult — she had to read fantasy books smuggled in secret from the library. Among all the books she read, she understood at once that BotNS was on an entirely different level. I do not like to ascribe utility to art, but such stories allow me to believe in the vitality of art.

Part of the reason I aspire to become a proficient reader of la langue française is so that I can return to that state of youthful discovery. I can become like a teenager again, seeing with fresh eyes what the culture offers. I can be liberated from the weight of expectation and reputation. I can be surprised all over again.

There was a booksale…

It rained, and the snails were about.
What does a dog want? To go for a walk? Or just to be outside?

I hate the video-essay

Patricia Lockwood’s Booker-nominated No One Is Talking About This is now out in paperback. I know because I went out, bought it, and read it. In real life. It is one of the most widely reviewed novels of 2021, in part because Lockwood is unquestionably an exciting writer with a clear voice and real style, but also because this book was a potential candidate for carrying home the title of “The Great Internet Novel”.

The events of NOITAT track the contours and trajectory, both broadly and in many details, of Lockwood’s own life, starting with becoming internet famous. The protagonist, who we assume is half-Lockwood, is brought to the center of the online stage for asking “can a dog be twins?”. Because we are to understand that virality really can hinge on something so slight. Actual-Lockwood achieved some kind of Twitter fame by tweeting at the Paris Review, asking “So is Paris any good or not”, although I believe her trajectory involved more than that one tweet. A dog can in fact be twins, although very rarely in the sense of actually being identical twins (or so Google tells me). Half-Lockwood’s joke is funny in the same way that slant rhyme rhymes. It is also pretty dumb, and I have to wonder if there is some commentary on Lockwood’s part about the disproportionate accolade such a dumb joke can receive online. People write about her Paris Review tweet as if it was the height of wit, but really it just flatters a reader for knowing what the Paris Review even is.

Written in the manner of oblique fragments, NOITAT might evoke for some the fragmentary, non-linear nature of social media — or “the portal” as it is referred to. But my own reading left me recalling Joan Didion’s A Book of Common Prayer, with her flashcuts jumping in and out of scenes, the typical literary constructions and table setting eschewed, leaving you to carefully follow the thread of each sentence so you can hopefully make it to the eventual destination. Fortunately, I have a better grasp on memes than I did on Central American politics in the seventies. But God help you if you are not on some basic level online.

Lockwood has an almost talismanic status in the world of young and hip American lit. She does not have an MFA and did not attend college. Here is the immaculately conceived American poet, free from the sins of credentialism, careerism, and workshopping. Evidence that perhaps free verse isn’t just bullshit you have to attend grad school to “appreciate”. There is a wonderful passage in Priestdaddy where she describes the depth of texture and connotation words have for her, and reading it you too can begin to believe.

I am a great fan of her writing, especially the memoir, and especially in the context of a certain popular idea of what constitutes good writing. The suggestion that dialogue tags would perhaps be best restricted to the inoffensive “said”, “asked”, and “told” has become a stupefying dictum, so it is a pleasure to read a writer who is not afraid to have their speech “yelled”, “yelped”, “hissed”, and even “peeped” when the occasion arises. But that is to under-represent Lockwood’s qualities as a prose stylist. Even after the critics have had their pickings you can still find “the unstoppable jigsaw roll of tanks”, and now I too am one of those critics, unfortunately.

If the first half of the novel sees the half-Lockwood protagonist being submerged, and possibly drowned, in the online, the second half finds her abruptly washed up on shore to deal with Real Life — not only in the world of flesh and blood, a life-threatening pregnancy, and a rare and terminal genetic disorder, but also Real as in we are still following events that actually transpired. What does it mean about very online life that it served half-Lockwood very badly in these circumstances? What does it mean that online life is not well suited to these truths?

If I had a dollar for every time I made a friend laugh… Well, at best this would be a strange side hustle. But I don’t get a dollar for making casual quips. Nor for the hot takes or deeper thoughts I impart to those about me. Actual humour writing, as with all writing, is a harder, more laboured, and quite deliberate practice when it has to be done consistently and in quantity. Yet a lot of the internet seems to offer up the promise of getting all those likes and subscribes for basically turning up and being you. Twitter, Youtube, and podcasts can give you the impression that sometimes it is simply a matter of typing it into your phone, or just setting the tape rolling. But after a while some of that back and forth between the hosts, and quite a lot of that laughter, feels more laboured than it should. Do I really believe the Youtuber captured the unpracticed vitality of their own genuine laughter?

Obviously not. It was all — or at least most of it — utterly scripted. Which in general is fine. There should be some mediation between our personal and public lives. A kind of negotiation and consideration. What is captured in NOITAT is an experience from the small class of people (workers? writers? creators?), mostly very young, who have been able to put a relatively unmediated portion of themselves out onto the web for the viewing benefit of the rest of us. Who have committed themselves to being “very online”. Because most people are not “very online”. For most of us there is a line, and although we may occasionally find ourselves on the wrong side of it, we get to enjoy the separation.

I think when critics were scouring the Earth for the “Great Internet Novel” they were hoping for all the sordid vicarious thrills that you might expect from a medium that has offered us strange new breeds of humour and fed our prurient desire for the salacious. Half-Lockwood’s “very online” quickly seems very exhausting. The contradictions and hypocrisy and inconsistency were already self-diagnosed on the portal itself before they ever arrived on the printed page. But all this is the point, I suspect.

Retrograde Motion

Before Newton there was Copernicus, and before Copernicus there was Ptolemy. Living in the second century AD, Ptolemy produced what would become the definitive work in astronomy for the next millennium. It was a geocentric system: the Earth, quite sensibly, set at the center of the solar system. While geocentrism was ultimately to suffer the ignominy of being synonymous with backward thinking, Ptolemy certainly didn’t lack mathematical sophistication.

Keeping the Earth at the center of the solar system required a great deal of creative invention. It was taken as axiomatic that the planets should travel at constant speeds, and adhere to the perfect forms of geometry (that is to say, circles and spheres). But the planets that appeared in the night sky did not conform to these expectations. Unlike the sun and moon, which flattered us earthlings with their regular appearance and disappearance, the planets would sometimes slow down and reverse direction — what was called “retrograde motion”. The solution that Ptolemy and his predecessors developed was a whole Spirograph set of celestial structures called deferents and epicycles. This essentially involved imagining that a planet was not set upon a wheel revolving about the Earth, but upon a wheel set on a wheel in motion about the Earth. And, if necessary, upon some greater sequence of nested wheels.
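
The mechanism is easy to sketch. In the toy model below (the radii and speeds are my own illustrative picks, not Ptolemy’s parameters), a planet rides a small, fast wheel whose center rides a larger, slower wheel about the Earth; whenever the epicycle’s motion opposes and outpaces the deferent’s, the planet appears, seen from the Earth at the origin, to back up.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000)

# Deferent: a large wheel about the Earth; epicycle: a smaller,
# faster wheel carried on its rim.
deferent = 5.0 * np.exp(1j * t)       # radius 5, one revolution
epicycle = 1.5 * np.exp(1j * 8 * t)   # radius 1.5, eight revolutions
planet = deferent + epicycle

# Apparent angular position as seen from the Earth at the origin.
angle = np.unwrap(np.angle(planet))

# Wherever the angle decreases, the planet appears to move backwards.
retrograde = np.diff(angle) < 0
print(f"Apparently retrograde for {retrograde.mean():.0%} of the cycle")
```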

Copernicus, the Catholic canon and Polish polymath of the fifteenth and sixteenth centuries, had, like every other astronomer of his day, read Ptolemy. Yet after carefully studying the night sky and much thought, he developed a heliocentric map of the solar system. That is to say, with the Sun at the center. While he managed to free himself from geocentric difficulties, and dramatically simplify the situation in many respects, he still adhered to a belief in constant speeds and circular orbits. It would take Kepler and ultimately Newton to settle the matter with elliptical orbits determined by the force of gravity.


The heliocentric theory was controversial for two reasons. The first, and quite reactionary, objection was based on readings of a handful of bible verses. For example, when Joshua led the Israelites in battle against an alliance of five Amorite kings, he ordered the sun to halt its motion across the sky, thus prolonging the day, and with it the slaughter of the opposing army. The point is that Joshua ordered the sun to stop, and not the earth. This might seem like pedantry, but that was precisely the point. The Catholic church hoped to hold a monopoly on biblical interpretation, and someone lower down the ecclesiastical hierarchy conducting their own paradigm shift, equipped with nothing more than astronomical data and mathematics, could set a dangerous precedent. At a time when many such precedents were already being set.

The second, and quite serious, objection was that it created a whole new set of scientific questions. Why don’t we feel like we are moving through space? Not only as we orbit the sun, but as the Earth rotates daily about its own axis? The numbers required to calculate the implied velocity were known. And on top of that, if we were moving at such great speed, then why did we not observe a parallax effect between the stars? As the apparent distance between two buildings appears to change as we move past them, why couldn’t we observe a similar shift in the stars as we moved? Copernicus’ answer was that the stars were much farther away from earth than had ever been imagined before. It was a correct deduction that didn’t do much to convince anyone.
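
The arithmetic, run with modern figures that no one in the sixteenth century possessed, bears Copernicus out. A rough check: even the nearest star shifts by less than a second of arc as the Earth crosses its orbit, while the naked eye resolves only about a minute of arc, some eighty times coarser.

```python
import math

AU = 1.496e11              # metres, Earth-Sun distance
LY = 9.461e15              # metres, one light year
d_nearest = 4.25 * LY      # distance to Proxima Centauri

# Annual parallax: half the apparent shift across the Earth's orbit.
parallax_rad = math.atan(AU / d_nearest)
parallax_arcsec = math.degrees(parallax_rad) * 3600
print(f"parallax: {parallax_arcsec:.2f} arcseconds")       # ~0.77

# The unaided eye resolves roughly 60 arcseconds at best.
print(f"naked-eye limit is ~{60 / parallax_arcsec:.0f}x larger")
```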


Both Copernicus and Newton were reluctant to publish their ideas. In Newton’s case he was satisfied to have developed his calculus and did not care to suffer the scrutiny that others would subject his theory of gravity to. His experience of justifying his thinking to other scientists had soured his relationship with the wider scientific community of his day. It was only when it became clear that Leibniz had independently developed the tools of calculus that he finally set about writing up, formalizing, and getting his hands on data in order to present his Principia.

Copernicus had gathered his data and written his book, yet for many years did not publish it. De revolutionibus orbium coelestium would only arrive in print as he lay on his deathbed. While Copernicus had friends who supported his astronomical pursuits, it seems to have been the arrival of a young Lutheran mathematician, Georg Joachim Rheticus, that was the key instigation in bringing the manuscript to print.

No one had invited him or even suspected his arrival. Had he sent advance notice of his visit he doubtless would have been advised to stay far away from Varmia. Bishop Dantiscus’ most recent anti-heresy pronouncement, issued in March, reiterated the exclusion of all Lutherans from the province — and twenty-five-year-old Georg Joachim Rheticus was not only Lutheran but a professor at Luther’s own university in Wittenberg. He had lectured there about a new direction for the ancient art of astrology, which he hoped to establish as a respected science. Ruing mundane abuses of astrology, such as selecting a good time for business transactions, Rheticus believed the stars spoke only of the gravest matters: A horoscope signaled an individual’s place in the world and his ultimate fate, not the minutiae of his daily life. If properly understood, heavenly signs would predict the emergence of religious prophets and the rise or fall of secular empires.

A More Perfect Heaven — Dava Sobel

I suspect that we may undervalue the weight that belief in astrology carried in some (but not all) quarters. Many looked back to the Great Conjunction of 1524 as heralding the rise and spread of Lutheranism — an ideological shift with profound and widespread implications that might only be matched by Communism. We live in an age of scientific prediction, taking for granted the reliable weather forecast on our phone in the morning. We (at least most of us) accept the deep implications of the climate data for our future, while also paying heed to the sociology and political science that can help us understand our lack of collective action. If we accept astrology as being a kind of forebear to our own understanding, you can perhaps appreciate why Rheticus might have been willing to take such risks to pursue a better understanding of the stars.

We can only imagine what Rheticus must have said to Copernicus that led him to finally prepare his manuscript for publication. And that is what Dava Sobel has done, writing a biography of Copernicus, A More Perfect Heaven, which contains within it a two-act play dramatizing how she imagines the conversation might have gone. It presents a Rheticus shocked to discover that Copernicus literally believes that the Earth orbits about the Sun, and a Copernicus perplexed that the young man takes astrology seriously, but who is won over by the prospect of taking on such a capable young mathematician as his student.

Rheticus’ principal legacy is the précis of Copernicus’ theory that he wrote and had distributed as a means of preparing the way for the ultimate text. His contributions would ultimately be overshadowed by his later accusation, conviction, and banishment for raping the son of a merchant. While Sobel presents Rheticus in her play as pursuing/grooming a fourteen-year-old boy, it does not feel like she knows exactly where to take this dramatically. By way of contrast, John Banville in his novel Doctor Copernicus gleefully transitions to a Nabokovian narrative upon Rheticus’ arrival.


There is an interesting dramatic irony in the way Copernicus’ ideas were initially received. There was a ruse, by certain parties, to present Copernicus’ heliocentric theory as simply a means of computation. It could be tolerated if it was understood that one was not supposed to actually believe that the Sun was at the center of the solar system. Which struck some as a reasonable compromise. The Catholic church was drawing up what would become the Gregorian calendar, and Copernicus made important contributions to calculating a more accurate average for the length of a year.

Yet now the situation has been reversed. While Copernicus’ techniques were rendered obsolete with the arrival of calculus, his conceptual framework carries on in the popular understanding. Meanwhile, as Terence Tao and Tanya Klowden have noted, Ptolemy’s deferents and epicycles live on in the mathematics of Fourier analysis — a means of approximating arbitrary periodic functions using trigonometry.
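
The correspondence is quite literal: a Fourier series writes a periodic path as a sum of uniformly rotating circles, which is exactly a deferent stacked with epicycles. A rough sketch (the sample curve is my own arbitrary choice, not anything from Tao or Klowden):

```python
import numpy as np

# Sample a closed, orbit-like curve as complex points. This one is not
# a finite sum of circles, so the epicycles can only ever approximate it.
N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
path = np.exp(np.cos(t)) * np.exp(1j * t)

# The discrete Fourier transform decomposes the path into uniformly
# rotating circles: each coefficient gives an epicycle's radius and
# phase, its frequency the speed of the wheel.
coeffs = np.fft.fft(path) / N
freqs = np.fft.fftfreq(N, d=1 / N)   # signed integer frequencies

# Rebuild the path from only the five largest "epicycles".
largest = np.argsort(-np.abs(coeffs))[:5]
approx = sum(coeffs[k] * np.exp(1j * freqs[k] * t) for k in largest)

print(f"max error with 5 epicycles: {np.abs(path - approx).max():.3f}")
```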

Even within fields as definitive as mathematics and science, it is interesting how defunct and obsolete thinking can be revealing, and even persist with strange second lives. Why someone believed something can become more important than the truth of the thing. Eratosthenes deduced an impressive approximation of the Earth’s circumference after hearing a story about a well that would reflect the light of the sun at noon. We possess a more accurate figure now, but the technique never grows old.
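
Eratosthenes’ calculation really does fit in a few lines. Using the traditionally reported figures (the precise length of a stadion is still disputed, so the modern conversion is only indicative):

```python
# At noon on the solstice the sun shone straight down a well at Syene,
# while a vertical rod at Alexandria cast a shadow at about 7.2 degrees:
# one fiftieth of a full circle.
shadow_angle = 7.2    # degrees
distance = 5000       # stadia, from Alexandria to Syene

circumference = distance * 360 / shadow_angle
print(f"{circumference:,.0f} stadia")      # 250,000 stadia

# At roughly 185 m per stadion this overshoots the modern ~40,000 km by
# about 15%; with the shorter "Egyptian" stadion it lands almost exactly.
print(f"~{circumference * 185 / 1000:,.0f} km")
```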

Sanderson and I

I have never read a Brandon Sanderson novel. Plenty of people haven’t, so that doesn’t make me special, even among avid readers. But a great many people do read Sanderson. So many, in fact, that even among high profile writers, Sanderson certainly is special. And this week Sanderson’s readership went from buying Sanderson’s books to buying into Sanderson and his books; they put down an accumulated and unholy twenty million dollars (and growing) on his Kickstarter to publish four “surprise” novels in 2023.

But perhaps I should feel a little bit special, because while I haven’t read any Sanderson, he has read me. Or at least he has, on one occasion, in interview, indicated that he had read the web-comic that I drew as a teenager.

Thog Infinitron, written by Riel Langlois, a Canadian I met on a web-comic forum, and drawn by me, Daniel Woodhouse, is the story of a cyborg caveman and his various adventures. After his body is crushed in a rhino stampede, the titular Thog is rebuilt with all manner of enhancements by a pair of alien visitors. I uploaded a page a week, and the story ran for a grand total of 129 pages before I unceremoniously lost interest and ditched the project somewhere in the middle of my second year of undergraduate mathematics. I had other things going on. Thog’s story was left at a haunting cliffhanger, with story-lines left open and characters stuck forever mid-arc.

I do not even have to look back over my work to recognize that I was a callow and unsophisticated artist. My potential was frustratingly underdeveloped. In retrospect, I cringe at my own haste to produce a popular webcomic that would bring in wealth and recognition, and how that haste led me to neglect my craft. I lacked influences and serious guidance. Or maybe I was simply too stubborn in my ambition. I do wonder how I would have fared if I had been that same teenager today, able to discover the wealth of material and advice that is now available online. You can literally watch over the shoulder of accomplished artists as they draw.

Nevertheless, when I revisited Thog, I was impressed by the comic as a body of work. Langlois’ writing was truly fantastic — in a completely different league to my art. And as rough as the art is to my eye, I have to appreciate the sheer cumulative achievement.

(Please do not go looking for my webcomics. Aside from Thog Infinitron I sincerely hope that my teenage juvenilia has disappeared from the internet, and for the most part this wish seems to have come true through a combination of defunct image hosting and link-rot. Thog is still out there and readable thanks to a surviving free webcomic hosting site, although I’m not sure your browser will forgive you for navigating into those waters.)

At the time, Thog, with its regular update schedule, was a major feature in my life. Now it feels like a distant and minor chapter. Years later I would occasionally do a web search to see if people still mentioned it, if Thog was still being discovered, or if the comic had any kind of legacy at all. It was during one of those web searches that I discovered a passing reference to Thog by Sanderson in an interview on Goodreads.

It is a strange and unusual writer who does not want to be read. And indeed it is strange and gratifying to discover that you have been read. It is an experience that Sanderson enjoys to a singular degree, but that I too have enjoyed in a thoroughly modest one. At some point during Thog’s run we even gave permission to some particularly keen readers to translate it into Romanian. I have never received a dime for my web-comics, and I didn’t take much note at the time, but in retrospect I’m in awe that I should have received such an honor. Sanderson’s works have been translated into 35 different languages.

The money that is being amassed on Kickstarter for Sanderson’s project is no small thing. The way the arts and literature are funded has profound effects on the culture. The proceeds of bestsellers have traditionally been reinvested by publishing houses in new writers (or so it has been claimed), and I imagine that more than a few people will look at Sanderson’s foray into self-publishing (or working outside a major publishing house) and wonder how different the future might be. But it is at least, for now, worth appreciating the sheer spectacle of a truly devoted readership.