This is the third and final article in a series that expands upon my essay in Contents Issue No. 1. Read the first and second.
I worked for many years on textbooks—big books with several authors each, lots of moving parts, technical editors working alongside designers and manuscript editors and illustrators. I recall an evening when a group of us were reviewing changes to proofs for an especially complex science book—changes that really ought not to have been happening that close to the pub date. At one point, the production manager (who’d been working on books like this for the better part of three decades) leaned over to me and said, “I wish we could go back to the day when every change weighed several pounds.”
His point was that the process needed an external constraint: something that would force us to just ship the damn thing. But the weight analogy sticks: we talk of edits to a text as being either “heavy” or “light,” recalling the lead type that was once necessary to make them. The physical weight may be gone—displaced by word processors and infinitely light pixels—but that mental heaviness persists.
As does the idea of a fixed, crystallized, final work. Even our digital systems mimic the immutability of ink on paper. Typos and egregious errors are routinely repaired in online texts, but rarely are “heavier” changes made. Ebooks can be updated, but only dumbly: a new file will wipe out annotations made to an earlier version, and no useful convention yet exists for communicating what was changed and why. Our content management systems know of only two states—draft and published—either privately in progress or publicly neglected. Nowhere is there a third state—in the world, but still evolving.
But perhaps there should be. What kept us working late on that science textbook (as was often the case with books like this) was new science: a discovery which—if verified—would render the book out of date even before it went to print. But verification would take time, and the fall semester loomed. We couldn’t delay sending the book to print; nor could we blindly make changes based on the results of a single study.
I don’t recall exactly what compromise we came to that night; I think we inserted a brief caveat and moved on, since few other options presented themselves. But what if it could have been different? What if the book we released were entirely digital, eliminating the expense of a print run? And what if we could push updates to students and professors as the science happened, rather than waiting for the seemingly interminable two- or three-year edition cycle to pass? And—perhaps most interestingly—what if students could read the text and dive into these changes? Rather than learning from a (literally and figuratively) dead-tree text, they could learn from a living document.
Science writing very obviously benefits from this approach, but I don’t think it’s the only case. How many times have you written something, published it, and then realized in retrospect that what you thought you said was not in fact what came through? (Even if you’ve never done this yourself, you’ve certainly witnessed it in others.) What if you could revise a work after publishing it, and release it again, making clear the relationship between the first version and the new one? What if you could publish iteratively, bit by bit, at each step gathering feedback from your readers and refining the text? Would our writing be better?
Iteration in public is a principle of nearly all good product design; you release a version, then see how people use it, then revise and release again. With tangible products (hardware, furniture, appliances, etc.), that release cycle is long, just as with books. But when the product is weightless, the time between one release and the next can be reduced from months or years to days or even hours. The faster the release cycle, the more opportunities for revision—and, often, the better the product itself.
Writing has (so far) not generally benefited from this kind of process; but now that the text has been fully liberated from the tyranny of the printing press, we are presented with an opportunity: to deploy texts, instead of merely publishing them.
What does this mean in practice? It means working as close to the text as possible (the markup) so as to revise quickly and efficiently. It means designing systems for communicating a document’s history in a way that’s illustrative rather than overwhelming. It means building tools that permit us to evolve a text over time, adding bits of metadata that illuminate the process along the way.
And it means letting go of that weight, which now lives solely in our heads. Instead of weight, let’s think of depth: revisions that are either deep or shallow, measured in time and effort rather than pounds. And furthermore, let’s think about the reading experience as one of depth as well: superficial (only the latest version) or exhaustive (all the way down). In doing so, we not only improve our own writing and provide a richer reading experience; we also expose the craft of writing and editing for others to learn from.
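To make the idea concrete, here is a toy sketch of what such a deployed, living document might look like as a data structure—every name and field here is hypothetical, an illustration rather than any real publishing system. Each revision carries a note (what changed and why) and a depth label (“shallow” or “deep”) in place of the old notion of weight; readers can then choose the superficial reading (only the latest version) or the exhaustive one (the full history).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Revision:
    """One deployed version of a text, with metadata about the change."""
    text: str        # the full text as of this revision
    note: str        # what changed and why
    depth: str       # "shallow" (typo-level) or "deep" (substantive)
    published: date  # when this revision went into the world

@dataclass
class LivingDocument:
    """A text that is in the world, but still evolving."""
    revisions: list = field(default_factory=list)

    def deploy(self, text, note, depth, published):
        # Publishing a new version never discards the history.
        self.revisions.append(Revision(text, note, depth, published))

    def latest(self):
        # The superficial reading: only the newest version.
        return self.revisions[-1].text

    def history(self):
        # The exhaustive reading: every change, with its reasons.
        return [(r.depth, r.note) for r in self.revisions]

doc = LivingDocument()
doc.deploy("The earth has one moon.", "first release", "deep", date(2011, 1, 1))
doc.deploy("The Earth has one moon.", "fixed capitalization", "shallow", date(2011, 1, 2))
```

A reader (or a reading tool) could then ask `doc.latest()` for the current text, or walk `doc.history()` to see how the document came to be what it is—the shallow edits and the deep ones alike.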
Of course, we lose some things, too. Permanence, stability. What Elizabeth Eisenstein, in The Printing Press as an Agent of Change, called typographical fixity. But where fixity enabled us to become better readers, can iteration make us better writers? If a text is never finished, does it demand our contribution? Fixity is important if you deem the text the end; but perhaps instead the text is now a means—to our own writing, our own thinking. Perhaps it is time for the margins to swell to the same size as the text.