FUN WITH RECURRENT NEURAL NETWORKS:

Fake Star Trek Episode Titles

(If you aren't already familiar with recurrent neural networks, why not see Andrej Karpathy's excellent blog post, "The Unreasonable Effectiveness of Recurrent Neural Networks"?)

These days, thanks to The Wonders of Science[TM], we can train neural networks to imitate different styles of text by showing them some examples. Often the results are gibberish, but occasionally in this gibberish there is a nugget of... less gibberish. There are many fine Python libraries for running RNN experiments; I am using textgenrnn, and fine-tuning its stock model on data of my own whimsical fancy. Here is a selection of the most interesting, perplexing, or otherwise notable outputs.
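With textgenrnn itself, the whole experiment is roughly `textgen = textgenrnn(); textgen.train_from_file("titles.txt"); textgen.generate()`. For a feel of what character-level text generation involves under the hood, here is a toy Markov-chain sketch: a much simpler stand-in for the actual RNN, using a few real episode titles as training data. It is an illustration only, not the method used for the outputs below.

```python
# Toy character-level text generator: learn which characters tend to follow
# each short context in the training titles, then sample new titles from
# those statistics. (An RNN like textgenrnn's learns a far richer version
# of the same next-character distribution.)
import random
from collections import defaultdict

def build_model(titles, order=3):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for title in titles:
        padded = "^" * order + title + "$"   # ^ pads the start, $ marks the end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=3, max_len=40):
    """Sample one character at a time until the end marker (or max_len)."""
    context = "^" * order
    out = []
    while len(out) < max_len:
        ch = random.choice(model[context])
        if ch == "$":
            break
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

titles = ["The Trouble with Tribbles", "The City on the Edge of Forever",
          "The Best of Both Worlds", "The Inner Light"]
model = build_model(titles)
random.seed(0)
print(generate(model))
```

With only four titles the model mostly regurgitates its training data; the fun starts when there are hundreds of titles and the contexts from different episodes begin to cross-pollinate.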

I trained the network on lists of all Star Trek television episode titles, across all the series (TOS, TAS, TNG, DS9, VOY, ENT, and STD). The different eras all have their own particular styles: TOS titles can be prolix, TNG tend to be snappy, DS9 more philosophical, and so on. Let's see how much of this colour we can capture by training a network on each era's titles separately...

Using only TOS/TAS titles:

Now TNG titles:

DS9:

And I guess VOY, I mean we might as well...

Some of the best fake examples come from training a network on all the series' titles combined:

And of course, in honour of the best Trek character of all time: