FUN WITH RECURRENT NEURAL NETWORKS:

Fake Beatles Song Titles

(If you aren't already familiar with recurrent neural networks, why not start with Andrej Karpathy's excellent blog post, "The Unreasonable Effectiveness of Recurrent Neural Networks"?)

These days, thanks to The Wonders of Science[TM], we can train neural networks to imitate different styles of text by showing them some examples. Often the results are gibberish, but occasionally in this gibberish there is a nugget of... less gibberish. There are many fine Python libraries out there that let one run RNN experiments: I am using textgenrnn, fine-tuning its stock model on data of my own whimsical fancy. Here is a selection of the most interesting, perplexing, or otherwise notable outputs.

I trained the network on a list of all recorded Beatles song titles. These include a fair number of proper names and otherwise nonstandard words, and the titles can be lengthy, so the network should learn to generate some pretty elaborate horsefeathers.

Sometimes the network invents new words:

Some of the results are whimsical:

Others seem to hold a dark message:

But the winner is: