FUN WITH RECURRENT NEURAL NETWORKS:

Fake Sherlock Holmes story titles generated by a neural network

(If you aren't already familiar with recurrent neural networks, why not see Andrej Karpathy's excellent blog?)

These days, thanks to The Wonders of Science[TM], we can train neural networks to imitate different styles of text by showing them some examples. Often the results are gibberish, but occasionally in this gibberish there is a nugget of... less gibberish. There are many fine Python libraries out there to let one run RNN experiments: I am using textgenrnn, and fine-tuning its stock model on data of my own whimsical fancy. Here is a selection of the most interesting, perplexing, or otherwise notable outputs.

I fine-tuned the network on the titles of the Sherlock Holmes canon. This was the smallest training set I'd tried so far, and the titles are almost all of the form "The Adventure of..." As you can see, the network picked up on the pattern. Interestingly, I seemed to get less nonsense than usual to filter out: the regularity of the training corpus seems to have given it some pretty firm ideas about what counts as a word. ...Mostly. It really digs blanched cardboard.
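For the curious, the fine-tuning itself is only a few lines of textgenrnn. Here is a rough sketch of what mine looked like; the file name, epoch count, and temperatures below are placeholders rather than my exact settings:

    from textgenrnn import textgenrnn

    # Start from textgenrnn's stock pretrained model rather than a blank one,
    # so even a tiny corpus of story titles has something to build on.
    textgen = textgenrnn()

    # holmes_titles.txt: one canon title per line,
    # e.g. "The Adventure of the Speckled Band"
    textgen.train_from_file('holmes_titles.txt', num_epochs=20)

    # Sample at a few temperatures: low values stick close to
    # "The Adventure of...", higher ones wander further afield.
    for temperature in (0.2, 0.5, 1.0):
        print(f'--- temperature {temperature} ---')
        textgen.generate(5, temperature=temperature)

Starting from the pretrained weights (rather than training a fresh model from scratch) is, I suspect, what makes a corpus this small workable at all.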

Some results are plausible...

Others aren't quite taking this seriously:

The network is really into blue:

Many things creep...

...while others are crooked:

The Mediterranean is a recurring concern:

As are the wacky Musgraves:

Six of everything: