FUN WITH RECURRENT NEURAL NETWORKS:

Fake Doctor Who Episode Titles

(If you aren't already familiar with recurrent neural networks, why not see Andrej Karpathy's excellent blog?)

These days, thanks to The Wonders of Science[TM], we can train neural networks to imitate different styles of text simply by showing them examples. Often the results are gibberish, but occasionally in that gibberish there is a nugget of... less gibberish. There are many fine Python libraries out there for running RNN experiments: I am using textgenrnn, and fine-tuning its stock model on data of my own whimsical fancy. Here is a selection of the most interesting, perplexing, or otherwise notable outputs.
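(textgenrnn hides all the machinery behind a couple of calls, but the underlying idea -- learn which characters tend to follow which contexts, then sample new text from those statistics -- can be sketched without any neural network at all. Below is a toy character-level Markov chain standing in for the RNN, just to illustrate the principle; the half-dozen titles are a hypothetical sample, not my actual training file.)

```python
import random

# Hypothetical sample corpus; the real training set was every televised
# Doctor Who episode title.
TITLES = [
    "The Time of the Doctor",
    "The Day of the Doctor",
    "Death to the Daleks",
    "Planet of Evil",
    "The Evil of the Daleks",
    "Genesis of the Daleks",
]

def build_model(titles, order=3):
    """Map each `order`-character context to the characters seen after it."""
    model = {}
    for title in titles:
        padded = "^" * order + title + "$"   # ^ pads the start, $ marks the end
        for i in range(len(padded) - order):
            ctx = padded[i:i + order]
            model.setdefault(ctx, []).append(padded[i + order])
    return model

def generate(model, order=3, max_len=60):
    """Sample one synthetic title, one character at a time."""
    ctx, out = "^" * order, []
    while len(out) < max_len:
        ch = random.choice(model[ctx])
        if ch == "$":          # the model decided the title is finished
            break
        out.append(ch)
        ctx = ctx[1:] + ch     # slide the context window forward
    return "".join(out)

model = build_model(TITLES)
print(generate(model))
```

(A real character-level RNN does the same job with a learned hidden state instead of a fixed lookup table, which is why it can pick up longer-range patterns like "The X of the Y".)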

I trained the network on a list of all Doctor Who television episode titles, both old series and new. This is an interesting training corpus, in that there are several very distinctive recurring patterns. There's the classic "The X of the Y", often involving some combination of Time, Death, Planet, and Evil. Then there is "... of the Daleks". Some titles are only single words, while others -- especially from the early 60s -- can be quite florid. A lot of variety in the training set should make for a lot of variety in the synthetic output!

Featuring the Daleks:

Must suck to be that one. And then we have some very plausible-seeming titles that are probably already coming soon from Big Finish:

Some of these start to get a bit silly:

...And here begins the Mystery of the Sluggles. This was the first time one of my text-generation networks produced some variant of the word "sluggle". For whatever esoteric statistical reason, this word has consistently reappeared, with slight variations, as an output of many different text-generation networks trained on very different sets of data. Like almost everything in our universe, ultimately this means nothing.

Some of the fake titles are hilarious:

But the very BEST is: