Those of us working in prose generation have come to the conclusion that neural nets and other statistical methods (like Markov chains) are not the best ways to generate interesting fiction. Statistical methods require a lot of training data for each unit of output, and their parameters have to be tuned separately for each scale: settings that produce an interesting title will not produce an interesting first sentence. To succeed, fiction needs to simultaneously optimize for three potentially conflicting goals at many different scales: novelty, coherence, and evocativeness.
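
For concreteness, here is a minimal sketch of the kind of Markov-chain generator being criticized (the `order` parameter and the toy interface are my own, purely for illustration): it records which words follow each n-word prefix in a corpus and then samples continuations, which is exactly why its output quality is so tightly coupled to corpus size and to the single scale it was tuned for.

```python
import random
from collections import defaultdict

def build_chain(corpus_text, order=2):
    """Map every run of `order` consecutive words to the words seen after it."""
    words = corpus_text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=50):
    """Start from a random prefix and repeatedly sample an observed successor."""
    prefix = random.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        successors = chain.get(tuple(out[-order:]))
        if not successors:  # no observed continuation: dead end
            break
        out.append(random.choice(successors))
    return " ".join(out)

# usage: generate(build_chain(some_corpus_string))
```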

While generative fiction has been around for a long time (there were machines writing screenplays for westerns in the 1950s), a lot of current activity among hobbyists and non-academics happens in the NaNoGenMo community, established in 2013. From my experience there, the most effective methods combine generative grammars (i.e., rules for producing meaningful arrangements of words, built along the same lines as sentence diagrams), planners (i.e., pieces of code that are given an environment, a starting point, and a goal, and produce a series of actions), and a context and tradition that encourages reader projection: experimental-looking styles encourage readers to see mistakes as hidden intentions, so comic books and prose styles reminiscent of Joyce or Burroughs end up seeming much more interesting and can get away with a lot more randomness. Even so, the results rarely remain interesting for more than ten pages. My personal favorite project used generative grammars and planners to produce a pretty convincing imitation of mediocre Star Trek fanfiction, for about 30 pages.
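
As a rough illustration of the generative-grammar half of that recipe (the planner half is harder to show briefly), here is a toy rewriting grammar; the symbols and productions are invented for this example, not taken from any actual NaNoGenMo entry:

```python
import random

# Toy grammar: nonterminals in angle brackets expand to a randomly chosen
# alternative; any other token is emitted verbatim.
GRAMMAR = {
    "<story>":     ["<character> <action> <place>"],
    "<character>": ["The ensign", "A weary captain", "The ship's doctor"],
    "<action>":    ["discovered an anomaly near", "filed a report about",
                    "was stranded on"],
    "<place>":     ["the fourth moon", "a derelict station", "the neutral zone"],
}

def expand(symbol, grammar):
    """Recursively rewrite a symbol until no nonterminals remain."""
    if symbol not in grammar:
        return symbol
    production = random.choice(grammar[symbol])
    return " ".join(expand(token, grammar) for token in production.split())

print(expand("<story>", GRAMMAR) + ".")
```

A planner would then sit on top of something like this, searching for a sequence of story events that moves the characters from a starting state toward a goal state and asking the grammar to render each event as prose.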