The hidden benefits of NaNoGenMo

On November 1st of 2015, NaNoGenMo begins its third year. It’ll be the third year that I’ve participated, and the third year that it’s spawned articles in legitimate paper magazines and newspapers, none of which, unfortunately, is particularly distinct from the New York Times’ 1999 coverage of the story generator Brutus, or from coverage of similar projects in years prior.

Media coverage seems to circle around the spectre of wholesale automation of authorship the way that hapless spaceships circle around a black hole. (We have kept track of these articles, because we, as a community, have an interest in corpora: the ability to access and analyse data makes it convenient to inject variety into generative writing.) Perhaps these articles, being written by journalists, are justified in having a bit of a hysterical bias. After all, certain classes of news stories are already being written mostly by software, and fear-mongering about automation has been a lucrative staple of the press since the invention of the automatic loom.

However, despite its ubiquity, the narrative of automated authorship is a poor lens through which to view the current state of generative text. Some forms of journalism are trivially automatable, and those are precisely the kinds being automated. But the variety of journalism that journalists themselves are increasingly reaching for (beautiful, literary longform nonfiction, which thrives on the web because distribution is comparatively cheap, and which stands out sharply against a landscape of low-quality, under-considered short posts) is both far beyond the grasp of the current generation of text generators and far from the aim of NaNoGenMo in particular.

NaNoGenMo quite consistently produces a flurry of extreme creativity and a wide variety of aims, styles, and implementation techniques. Many people start several entries with extremely different approaches and goals, and many never end up producing a novel-length text, despite the utterly trivial requirements, because their amazingly creative techniques could not produce a novel whose originality they could be proud of. In its first year, we had (among other entries) a novel composed of a supercut of similar tweets, a novel composed of a supercut of Homeric fight scenes, and a mystery novel composed of the belabored meanderings of Alice and Bob in a labyrinthine house; in its second year, we got a deeply atmospheric comic book, composed by pulling images from Flickr, post-processing them, and superimposing thematically related lines from detective novels, as well as a wonderful book-length piece of asemic writing. This year: who knows?
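
To give a sense of how simple the machinery behind an entry can be, here is a minimal sketch of the supercut technique in Python. The corpus file and the search pattern are hypothetical stand-ins of my own invention, not any entrant’s actual code:

    import re

    # Toy supercut: harvest every sentence in a corpus that matches a
    # signature phrase, then splice the hits into one relentless stream.
    # The pattern and corpus file below are hypothetical examples.
    PATTERN = re.compile(r"\bspear\b", re.IGNORECASE)

    def supercut(text, pattern):
        # Naive sentence split; a real entry might use an NLP tokenizer.
        sentences = re.split(r"(?<=[.!?])\s+", text)
        return "\n\n".join(s.strip() for s in sentences if pattern.search(s))

    with open("iliad.txt", encoding="utf-8") as f:  # hypothetical corpus
        print(supercut(f.read(), PATTERN))

Most of the artistry in a supercut entry lives not in code like this but in choosing the corpus and the phrase worth obsessing over.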

Explicitly experimental techniques appear to produce the best results. This makes sense: experimental techniques tend to be very well-defined, and the results of experimental techniques in literature, as executed by human beings, tend to be dominated by the attributes of the techniques themselves (meaning that, were a machine to execute those same techniques, the results would be superficially very similar). We haven’t progressed to a level of understanding of the craft of writing that allows us to automate good, readable, page-turning fiction, and I doubt that even an author of best-selling potboilers has such an explicit model. As a result, our community is less Clarion and more Oulipo. Nevertheless, each year, we produce works that inch closer and closer to readable. We produce vast novelty with the (eventual) aim of mundane novelty.

As a result, it may be most sensible to consider NaNoGenMo to be an amateur expedition into the greater control and quantification of literature.

In the 17th and 18th centuries, the Royal Society of London independently rediscovered many things already known to practical professionals like doctors, midwives, miners, and sailors; nevertheless, by aiming to measure and control its experiments, it became the vanguard of systematic knowledge of the physical world, which made later developments easier to isolate and demonstrate. NaNoGenMo can be seen as doing the same for the craft of literature: by building machinery that consistently executes particular literary techniques, we can produce large amounts of stylistically consistent text, perform systematic mutations of text, and isolate important elements by observing how text affects people with a purity, consistency, and volume not possible with human-written text.
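
To make “systematic mutation” concrete, here is one minimal sketch of my own devising, not any particular entry: generate two variants of a passage that differ along exactly one axis, so that reader reactions can be compared with everything else held constant.

    import re

    # One systematic mutation: hold the words constant and vary only the
    # order of sentences, yielding a control text and a mutated variant.
    def sentences(text):
        return re.split(r"(?<=[.!?])\s+", text.strip())

    def reverse_sentence_order(text):
        return " ".join(reversed(sentences(text)))

    passage = ("The house was quiet. A door opened somewhere upstairs. "
               "Nobody came down.")
    print(passage)                          # control
    print(reverse_sentence_order(passage))  # variant, differing in one axis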

We are also holding engineering discussions about things like plot and style. We’re determining whether, given some N major plot events, any ordering of those N events can be made sensible via transitions. We’re determining whether macro-level plot beats produce a greater impression of novelty than sentence- and paragraph-level variation in structure, and whether either of them produces a greater impression of novelty in human readers than variation in word frequency. We’re trying to figure out how much novelty is not enough, and how much is too much, in texts of different lengths. We’re talking about how to engineer the ELIZA effect in readers. We’re figuring out whether readers can identify descriptive fluff, and whether they care; whether Raymond Chandler was lying about how he structured detective novels; whether the Hero’s Journey really is too vague; and whether the beats in Save the Cat can truly produce compelling stories with minute-by-minute granularity at feature-film scale.
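
The N-plot-events question lends itself to brute force. The sketch below (with hypothetical events and canned transition phrases of my own invention) simply renders every ordering of three events, so that readers can judge which orderings the transitions manage to rescue:

    from itertools import permutations

    # Render every ordering of N plot events, joined by stock transitions,
    # to test whether connective tissue alone can make an ordering sensible.
    # The events and transitions are hypothetical stand-ins.
    EVENTS = [
        "the detective finds the body",
        "the heiress vanishes",
        "the letter is burned",
    ]
    TRANSITIONS = ["Later,", "Before any of that,", "Meanwhile,"]

    def render(ordering):
        lines = [ordering[0].capitalize() + "."]
        for i, event in enumerate(ordering[1:]):
            lines.append(f"{TRANSITIONS[i % len(TRANSITIONS)]} {event}.")
        return " ".join(lines)

    for ordering in permutations(EVENTS):
        print(render(ordering) + "\n")

A real experiment would need transitions chosen per pair of events rather than cycled mechanically, but even this crude version makes the question testable.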

You can make the argument that this will eventually lead to fully automated journalism. But being possible is not a particularly compelling argument for being likely. Even relatively crappy chess computers can now completely outclass human grandmasters, and so we have augmented chess, wherein a grandmaster works hand in hand with a chess-playing machine to play against another grandmaster-and-machine tag team. Items like furniture are much better manufactured by machine (in terms of quality, price, and environmental impact), and yet we pay much more for artisanal furniture made by obsolete and wasteful processes, because we value the idea of a human being doing something that doesn’t need to be done, and doing it the hard way.

In the same way that there is a market for artisanal wicker chairs and artisanal bread, there will probably continue to be a market for artisanal journalism; for the rest of us, human journalists may become symbiotes joined at the hip with machines that automate the less interesting parts of the job. We already have some such mechanisms: spell check, grammar check, layout tools, note-taking and mind-mapping and automatic summarization tools. Tools for augmenting creativity in authors aren’t new either: cut-ups pre-date digital computers, as do Markov chains and bibliomancy, and Oblique Strategies is now decades old. Cut-ups and Markov chains actually produce text for you, to which you must act as editor; but all these ‘writing machines’ that augment creativity do so by acting as a source of semantic randomness, much as mind-warping drugs do. We already accept the use of all these mechanisms; we don’t criticize Thom Yorke or William S. Burroughs for using cut-ups any more than we criticize John Lennon or Hunter S. Thompson for using LSD. Automatic methods for producing text will, if they gain acceptance among writers, gain as much acceptance among readers as spell check and LSD.
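
For readers who haven’t built one, a word-level Markov chain generator fits in a few lines. This sketch (order 2, fed whatever corpus file you point it at; the filename is a placeholder) is exactly the kind of ‘writing machine’ whose raw output demands an editor:

    import random
    from collections import defaultdict

    # A word-level Markov chain of order 2: map each pair of consecutive
    # words to the words observed to follow it, then walk the map randomly.
    def build_chain(words, order=2):
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, length=80):
        state = random.choice(list(chain))
        out = list(state)
        while len(out) < length:
            followers = chain.get(tuple(out[-len(state):]))
            if not followers:  # dead end: jump to a fresh random state
                followers = list(random.choice(list(chain)))
            out.append(random.choice(followers))
        return " ".join(out)

    with open("source.txt", encoding="utf-8") as f:  # any corpus you like
        print(generate(build_chain(f.read().split())))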

NaNoGenMo probably won’t produce the future journalism-symbiote I describe, in the same way that NaNoWriMo has never produced the Great American Novel; but, just as NaNoWriMo produces novelists (and published novels), NaNoGenMo will produce some of the figures, technologies, and domains of collective knowledge and culture that will inform text generation in creative fiction in the near future.