Voyages in sentence space
Well, it’s January 2020, and the day has finally come: I can no longer operate the software that made this essay interactive. I’ll leave the text up, but please understand that this essay is now considered “broken.” That’s the challenge this kind of presentation poses: it’s very cool when it works, but then —
Imagine a sentence. “I went looking for adventure.”
Imagine another one. “I never returned.”
Now imagine a sentence gradient between them —
Here’s what a neural network instructed to produce such a cloud of sentences (specifically, sentences from science fiction) delivers when you ask it to draw a gradient between “I went looking for adventure.” and “I never returned.”
You can ask it to draw a gradient of your own! Just replace the first and last sentences and use this button:
So, does that sentence gradient make sense? I honestly don’t know. Is it useful? Probably not! But I do know it’s interesting, and the larger artifact —
A comfortable embedding
I’ve been exploring neural networks —
When you’re tinkering with these tools, trying to produce something interesting (maybe even artful) from a dataset, whether it’s composed of text or images or something else, you often find yourself embedding that data into numeric space.
At a super simple level, imagine a dataset consisting of color swatches: rusty orange, dusty magenta, deep purple. You can see why it might make sense to embed these standalone swatches into a one-dimensional number line, a smooth sweep of color —
—so each has its own coordinate and there are also, as a significant added bonus, coordinates for all the intermediate colors between them.
Imagine a more complex dataset consisting of more colors. You can see how two dimensions might be useful:
Just like that, this dataset becomes something a neural network can chomp on, because it’s no longer color swatches described with metaphors, but a set of numbers. You know what computers love? SETS OF NUMBERS.
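Here is a toy sketch of that idea in Python. The swatch names and coordinates are invented for illustration, but the point stands: once the colors are numbers, there are coordinates between them, and arithmetic works on all of them.

```python
# Toy sketch: embedding color swatches as plain numbers.
# The swatches and coordinates below are invented for illustration.

# One-dimensional embedding: each swatch gets a spot on a number line.
swatches_1d = {
    "rusty orange": 0.1,
    "dusty magenta": 0.5,
    "deep purple": 0.9,
}

# A point between two coordinates is a color we never named explicitly.
halfway = (swatches_1d["rusty orange"] + swatches_1d["deep purple"]) / 2
print(halfway)  # 0.5 -- right where "dusty magenta" lives

# Two-dimensional embedding: a richer dataset might want two axes,
# say hue and lightness.
swatches_2d = {
    "rusty orange": (0.07, 0.55),
    "dusty magenta": (0.85, 0.60),
    "deep purple": (0.75, 0.25),
}
```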
In practice, because datasets are often very rich, they get embedded into spaces with many more dimensions than one or two: dozens, hundreds, sometimes more.
Up above, embedding our color swatches into one or two dimensions was straightforward; the mapping was obvious. But how do we embed a face or a sentence into a numeric space with a hundred dimensions? How do we learn to map from “I went looking for adventure” to (-0.0036, -0.063, 0.014, …) and back?
One tool we can use is called a variational autoencoder. It’s a kind of neural network that learns to embed rich data into numeric space, and not only embed it, but “pack” it densely. A variational autoencoder, even more than nature, abhors a vacuum.
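To make “pack it densely” a little more concrete, here is a minimal numpy sketch of just the variational machinery, with invented numbers standing in for what a trained encoder would produce. A real model wraps this in neural networks that encode sentences into these codes and decode them back out.

```python
import numpy as np

# Minimal sketch of the "variational" part of a variational autoencoder.
# The numbers are invented; a real encoder and decoder are neural networks.

rng = np.random.default_rng(0)

# Pretend the encoder looked at one sentence and produced a Gaussian
# over a 4-dimensional latent space (a real model might use a hundred).
mu = np.array([0.2, -0.5, 0.1, 0.7])          # mean of the latent code
log_var = np.array([-1.0, -0.8, -1.2, -0.9])  # log-variance of the code

# Reparameterization trick: sample a latent vector z from that Gaussian.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# KL divergence from the unit Gaussian prior. Penalizing this term during
# training is what crowds every code in around the origin -- the
# vacuum-abhorring property that makes the space dense enough to explore.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

print(z)
print(kl)
```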
In academic papers about autoencoders (like this one) you’ll often see a diagram demonstrating how a dataset of celebrity faces has been embedded into numeric space. The paper will show smooth (and perhaps slightly unsettling) gradients between points in that space, each of which represents a unique face:
Here’s where things get interesting. In 2016, a paper called “Generating Sentences from a Continuous Space” by Samuel R. Bowman, Luke Vilnis, et al. showed that you can use a variational autoencoder to embed sentences into numeric space, and it pioneered a few techniques to make that possible.
The paper also introduced, along the same lines as the unsettling celebrity gradient, the concept of a smooth homotopy, or linear interpolation, between sentences. I understood these immediately as sentence gradients and as soon as I read the paper … I had to have them.
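In code, the interpolation itself is disarmingly simple: encode both endpoint sentences, walk a straight line between their coordinates, and decode each point along the way. Here is a sketch; `encode` and `decode` are stand-ins for the two halves of a trained variational autoencoder, not real functions from any particular library.

```python
import numpy as np

# Sketch of a sentence gradient as linear interpolation in latent space.
# `encode` and `decode` are assumed stand-ins for a trained variational
# autoencoder's encoder and decoder.

def sentence_gradient(encode, decode, sentence_a, sentence_b, steps=8):
    """Decode evenly spaced points on the line between two sentence codes."""
    z_a = encode(sentence_a)
    z_b = encode(sentence_b)
    gradient = []
    for t in np.linspace(0.0, 1.0, steps):
        z_t = (1.0 - t) * z_a + t * z_b  # straight-line blend of the two codes
        gradient.append(decode(z_t))
    return gradient
```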
Programming is hard
I tried to implement the paper myself. I failed. Even after corresponding with the authors, I just couldn’t get the basic autoencoding engine to work.
Lucky me: not even a year later, another paper appeared, extending the work of Bowman et al. “A Hybrid Convolutional Variational Autoencoder for Text Generation” by Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth offered substantial additions to the idea and, even better, it offered THE CODE!!
(Let me just take a moment to praise researchers who publish their code. Without this project from Semeniuta et al. as a starting point, I would never have been able to explore these techniques. What a gift.)
Code in hand, I was well on my way to generating sentence gradients myself. I figured out the math to move through sentence space, implemented a few features to help organize experiments, added a simple server.
But there was a persistent problem: it ran too slow. I would write two sentences, ask the neural network to generate a gradient between them, and … wait. And wait and wait. Minutes passed. The process was too drawn out for experimentation, for exploration, for play.
Again, I tried to fix it myself. But I didn’t (and still don’t!) understand the innermost engine enough to see how I could speed up that process of moving sentences in and out of numeric space.
That’s when I asked for help.
The programmer Richard Assar’s implementation of a paper called SampleRNN, shared on GitHub, had impressed me with its usability and its speed. Sound generated by his code made its way into the audiobook of my latest novel. So, I reached out to him, asking, could I commission you to take a look at this sentence space project?
Richard said yes, and overnight —
Working with the code shared by Semeniuta et al. and streamlined by Richard Assar, what did I end up with?
Welcome to sentence space
My project sentence-space, now public on GitHub, provides an API that serves up two things:
- Sentence gradients: smooth interpolations between two input sentences.
- Sentence neighborhoods: clouds of alternative sentences closely related to an input sentence.
Sentence neighborhoods are simpler than gradients. Given an input sentence, what if we imagine ourselves standing at its location in sentence space, peering around, jotting down some of the other sentences we see nearby?
From an input sentence, we get a small cloud of closely related alternatives.
You can increase or decrease the distance you peer into sentence space from your initial location; as you increase it, the results get more diverse. Adjust this slider, then use the button again:
Closer to home ↔ Further afield
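If you are curious what that looks like in code, here is a sketch, with the noise scale playing the role of the slider. As before, `encode` and `decode` are assumed stand-ins for a trained autoencoder, not functions from the actual project.

```python
import numpy as np

# Sketch of a sentence neighborhood: stand at one sentence's spot in latent
# space and peer around. `encode` and `decode` are assumed stand-ins for a
# trained autoencoder; `distance` plays the role of the slider above.

def sentence_neighborhood(encode, decode, sentence, distance=0.3, count=6, seed=None):
    """Decode random points scattered around one sentence's latent code."""
    rng = np.random.default_rng(seed)
    z = encode(sentence)
    neighbors = []
    for _ in range(count):
        nudge = rng.standard_normal(z.shape) * distance  # bigger distance, wilder results
        neighbors.append(decode(z + nudge))
    return neighbors
```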
If you drag the slider fully to the left and look around, the results will all be identical, showing you the autoencoder’s best attempt at capturing your original sentence. Its reproduction is sometimes perfect; for example, try “The ship landed on the runway.” Don’t forget the period —
More often, the autoencoder returns something that seems … a bit … blurred? The effect gets stronger as your style and subject matter diverge from the autoencoder’s original dataset of sentences from science fiction. What you’re seeing is the transition from the richness of arbitrary text to the regularity of this particular sentence space. It’s very expressive —
Anyway.
After I’d gotten this up and running, I felt something similar to what I remembered from an earlier machine learning project: a sense of, well, I did it … now what?
That feeling is an important waystation. Sentence gradients are weird; maybe nothing more than linguistic baubles. But I believe there’s something undeniably deep and provocative about this space packed full of language. Drawing gradients and exploring neighborhoods are just two ways of moving through it. How else might you travel?
I’ve published the code, which is mostly the work of Semeniuta et al., with important improvements by Richard Assar and a few embroideries by me.
Maybe you can imagine something different to do inside this science fiction sentence space, or maybe you’d rather establish a space all your own, built on sentences of your choosing. You could implement new operations; maybe you want to add sentences together or find the average of many sentences. These spaces are dense with meaning and difficult to wrap your head around, and to me, that’s a very attractive combination.
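For example, here is what a couple of those operations might look like as sketches, under the same assumption that `encode` and `decode` come from a trained autoencoder. They are ideas to try, not features of the published project.

```python
import numpy as np

# Sketches of other ways to move through sentence space. `encode` and
# `decode` are assumed stand-ins for a trained autoencoder; these
# operations are ideas to try, not part of the published project.

def sentence_average(encode, decode, sentences):
    """Decode the centroid of several sentences' latent codes."""
    codes = np.stack([encode(s) for s in sentences])
    return decode(codes.mean(axis=0))

def sentence_sum(encode, decode, sentence_a, sentence_b, scale=0.5):
    """Decode a scaled sum of two sentences' latent codes.

    A raw sum tends to land outside the densely packed region the
    autoencoder knows, so `scale` pulls it back toward the origin.
    """
    return decode(scale * (encode(sentence_a) + encode(sentence_b)))
```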
As before, this is all about making tools (that make language) for humans to use —
“The information buzzed, emptying his lips.”
What a sequence of words! I’d never have written that on my own, and now I want to use it somewhere —
Go explore. Send back reports of your progress.
Or stay here on this page and play a little.
Thanks to Dan Bouk for his feedback on a draft of this post. Dan wrote a book about how not just sentences but whole lives got plotted and gridded, smoothed and statisticized. How Our Days Became Numbered is essential reading.
February 2018, Berkeley
March 2018, Berkeley