Not long ago I gave an introduction to text analysis and data mining for the Wake Forest University community. In that tutorial I demonstrated how to do a basic keyword analysis on “State of the Union” speeches from 1947 to present using the freeware AntConc. Although AntConc is a fantastic resource, it’s not my go-to approach for doing text analysis. In this post I will re-create the analysis using R: reading, exploring, and cleaning data as well as the main keyword analysis and corresponding visualizations.
In this code chunk we use the tm package to read in the files in a given directory whose names match a certain pattern, then lowercase the text. Since we want to create one corpus from file names matching republican and another from file names matching democrat, I chose to write a function so as not to repeat code. Inside the function read.corpus(), we use tm::DirSource() to point R to the directory where the sotu texts live; it takes a pattern to match file names against. The result is handed off to VCorpus(), which creates a tm VCorpus object.^[The handing off uses the %>% syntax from the magrittr package (loaded here through the dplyr package).] I also include an option to lowercase the text. In this case we do want our text lowercased, so I've made that the default (to.lower = TRUE), but that's not always what you want.
Now apply the function to get the Republican and Democrat texts and confirm that we’ve got VCorpus objects.
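A sketch of what read.corpus() might look like, given the description above. The directory layout and file names here are illustrative stand-ins for the real SOTU files (which would have the party name somewhere in each file name):

```r
library(dplyr)  # loads the %>% pipe (via magrittr)
library(tm)

# Read every file in `directory` whose name matches `pattern` into a VCorpus,
# optionally lowercasing the text (a sketch of the helper described above).
read.corpus <- function(directory, pattern = "", to.lower = TRUE) {
  corpus <- DirSource(directory = directory, pattern = pattern) %>%
    VCorpus()
  if (to.lower) corpus <- tm_map(corpus, content_transformer(tolower))
  corpus
}

# Illustrative usage with a throwaway directory; the real analysis points
# at a directory of SOTU texts with the party name in each file name.
sotu.dir <- file.path(tempdir(), "sotu")
dir.create(sotu.dir, showWarnings = FALSE)
writeLines("My Fellow Americans ...", file.path(sotu.dir, "republican-1985.txt"))
writeLines("Mr. Speaker ...", file.path(sotu.dir, "democrat-1994.txt"))

rep.corpus.raw <- read.corpus(sotu.dir, pattern = "republican")
dem.corpus.raw <- read.corpus(sotu.dir, pattern = "democrat")
class(rep.corpus.raw)  # "VCorpus" "Corpus"
```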
Let’s do a quick inspection of the results. It’s good to know a few functions for exploring VCorpus objects.
print() provides a basic overview of the number of documents in the corpus. meta() with the tag set to "id" returns the file names themselves, which is handy here for verifying that we've in fact captured the appropriate texts in each corpus. content(rep.corpus.raw) returns all of the text in the corpus. Go ahead and try it at home; I'll save some screen real estate and skip that command.
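For a self-contained illustration, a toy corpus stands in for the real one here (for a DirSource-backed corpus the ids are the file names):

```r
library(tm)

docs <- VCorpus(VectorSource(c("first speech text", "second speech text")))

print(docs)               # overview: metadata counts and number of documents
sapply(docs, meta, "id")  # per-document ids ("1", "2" here; file names for DirSource)
content(docs[[1]])        # the full text of the first document
```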
In a keyword analysis we will be working with, well, words, and their frequencies, so let's make a first pass at creating a word frequency list.
Again, I turn to a function to reduce redundancy. This function, create.wordlist(), takes a VCorpus object (corpus) as an argument and provides an option to output the wordlist as a data.frame (create_df). The corpus is passed to tm::DocumentTermMatrix(), which returns another tm object, one that holds the corpus in matrix form. It's worth taking a look at a simplified example to give you an idea of what we are dealing with.
So the character vector documents, with its three elements, is read into VCorpus format and then sent to DocumentTermMatrix(). The result in dtm can be inspected:
Looking at the bottom portion, the DocumentTermMatrix contains a matrix where the rows correspond to each of the character vectors in documents and the columns to each of the unique words. This matrix summarizes the word frequency and dispersion in our text documents.
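A tiny version of that example, with three one-line documents assumed for illustration:

```r
library(tm)

documents <- c("the cat sat", "the dog sat", "the cat ran")
dtm <- DocumentTermMatrix(VCorpus(VectorSource(documents)))

inspect(dtm)    # dimensions, sparsity, and a sample of the matrix
as.matrix(dtm)  # rows = documents, columns = unique words, cells = counts
```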
Returning to the create.wordlist() function from above, we then remove some of the extra bells and whistles in the DocumentTermMatrix with as.matrix(), giving us just the matrix we are interested in. Summing the columns (colSums()) gives us the word frequencies. Again I include an option to change the output to another data type, in this case a tabular, Excel-like format, a.k.a. the data.frame.^[Passing, or 'piping', output using the %>% syntax assumes that the next function takes the output of the previous one as its first argument. If that is not the case, the . operator can be used to specify where to insert the input.]
Run the create.wordlist() function, check the class, and find out how many (unique) words are in each of our corpora.
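A sketch of create.wordlist() under those assumptions, applied here to a toy corpus (the real calls would take rep.corpus.raw and dem.corpus.raw):

```r
library(dplyr)  # %>%
library(tm)

# Turn a VCorpus into a named vector of word frequencies, sorted by frequency,
# optionally returned as a data.frame (a sketch of the function described above).
create.wordlist <- function(corpus, create_df = FALSE) {
  wordlist <- corpus %>%
    DocumentTermMatrix() %>%
    as.matrix() %>%
    colSums() %>%
    sort(decreasing = TRUE)
  if (create_df)
    wordlist <- data.frame(word = names(wordlist), freq = unname(wordlist),
                           stringsAsFactors = FALSE)
  wordlist
}

toy <- VCorpus(VectorSource(c("the cat sat", "the dog sat")))
wordlist <- create.wordlist(toy)
class(wordlist)   # "numeric"
length(wordlist)  # number of unique words (4 here)
```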
A fun way to visualize a word frequency list is via word clouds. Conveniently, there's an R package for that, appropriately named wordcloud.
At a minimum, wordcloud::wordcloud() takes the words and their frequencies. But you can also specify a number of other parameters to modify the content and format of the word cloud. Here I've limited the output to the top 50 words.
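At its simplest the call looks like this (the frequencies below are made up for illustration; the real call passes the wordlist's names and values):

```r
library(wordcloud)

# toy frequencies standing in for a real wordlist
freqs <- c(people = 140, america = 120, freedom = 90, congress = 75, budget = 60)

set.seed(42)  # word placement is random; fix the seed for reproducibility
wordcloud(words = names(freqs), freq = freqs, max.words = 50)
```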
Not what you expected? It's easy to overlook the fact that the most frequent words in a given language are often the least interesting from a content standpoint. These are typically closed-class words (determiners, prepositions, conjunctions, etc.), in contrast with open-class words (nouns, verbs, adjectives), which are usually more interesting, especially for us at the moment. So we are going to want to filter them out. But how do you know which words to filter? Enter "stopword lists": pre-compiled lists of the kinds of words we are looking to eliminate.
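The tm package ships with such lists; for English, a quick look:

```r
library(tm)

head(stopwords("english"))    # "i", "me", "my", ...
length(stopwords("english"))  # size of tm's built-in English stopword list
```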
That’s area we need to address. Other standard filters include removing punctuation, numerals, and extra whitespace. In addition, on inspecting the word frequency lists it appears that there are certain words enclosed inside brackets  and parenthesis () that should be deleted as well given they correspond, by and large, to meta discourse not the content of the speeches. See below^[Regular expressions are used for pattern matching]:
Let’s clean up the corpus. Removing bracket/ parenthetical words, stopwords, punctuation, numerals, and whitespace. The tm package includes the tm_map() function that provides many easy-to-implement ways to transform the corpus content.^[One twist is the my.sub() function I wrote to simplify adding custom transformations to tm_map().]
Now apply the filters and clean this corpus up!
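Putting those filters together might look like the sketch below. Here my.sub() is reconstructed as a simple gsub() wrapper, and the exact patterns and the clean.corpus() name are assumptions:

```r
library(dplyr)  # %>%
library(tm)

# wrap gsub() so it can be used inside tm_map() (a sketch of my.sub())
my.sub <- content_transformer(function(x, pattern) gsub(pattern, "", x))

clean.corpus <- function(corpus) {
  corpus %>%
    tm_map(my.sub, "\\[[^][]*\\]") %>%            # [bracketed] meta discourse
    tm_map(my.sub, "\\([^()]*\\)") %>%            # (parenthetical) meta discourse
    tm_map(removeWords, stopwords("english")) %>% # closed-class words
    tm_map(removePunctuation) %>%
    tm_map(removeNumbers) %>%
    tm_map(stripWhitespace)
}

toy <- VCorpus(VectorSource("the budget [applause] for 2020 is balanced!"))
content(clean.corpus(toy)[[1]])
```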
Returning to our word clouds. We need to create new wordlists from the cleaned data and then visualize the results.
Alright, now we are in a position to do some comparing. Looking at the word clouds we can see obvious similarities and differences. The size of the text, however, can be misleading here: text sizes are not on the same scale between the two word clouds, so it's hard to know how these frequencies really match up between the corpora. To identify the words that are most (and least) indicative of each corpus when compared to the other, we want to compare the relative frequency of words in the rep.corpus with their relative frequencies in the dem.corpus. Using the ratio rep.corpus/dem.corpus, words with the highest values tend to be more indicative of the rep.corpus and words with the lowest values more indicative of the dem.corpus. This calculation is known as a Relative Frequency Ratio (Damerau, 1993).
The function rfr() implements the Relative Frequency Ratio calculation with a couple of practical additions. First, each wordlist will contain words that do not occur in the other. Since this would lead to undefined ratios, a single count is added to each frequency in the calculation (+1 smoothing). Furthermore, I've taken the log() of the ratio to provide a more pleasing visual distribution (see below). The results are optionally returned as a data.frame.
First, we need to create wordlists from our cleaned-up corpora. Then run the rfr() function.
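In base-R terms, the calculation might be sketched like this, with wordlists as named frequency vectors (the toy wordlists, the exact smoothing, and the column names are assumptions):

```r
# Relative Frequency Ratio with +1 smoothing and a log transform
rfr <- function(wl1, wl2, create_df = TRUE) {
  words <- union(names(wl1), names(wl2))
  f1 <- wl1[words]; f1[is.na(f1)] <- 0  # 0 for words absent from wl1
  f2 <- wl2[words]; f2[is.na(f2)] <- 0  # 0 for words absent from wl2
  ratio <- log(((f1 + 1) / sum(f1 + 1)) / ((f2 + 1) / sum(f2 + 1)))
  if (create_df)
    return(data.frame(word = words, rfr = unname(ratio),
                      stringsAsFactors = FALSE))
  setNames(ratio, words)
}

# toy wordlists standing in for the cleaned Republican/Democrat lists
rep.wordlist <- c(freedom = 3, budget = 1)
dem.wordlist <- c(freedom = 1, people = 3)
rfr_df <- rfr(rep.wordlist, dem.wordlist)
rfr_df  # positive rfr leans Republican, negative leans Democrat
```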
Let’s look at the first lines of rfr_df.
Looks good. Let’s take the top 25 and the bottom 25 and add Party labels to get ready to visually summarize these top results.
rbind() takes the head(), or top 25, and the tail(), or bottom 25, and binds them by row. The ratio equals 1 when Republican words and Democrat words have the same relative frequency, and log(1) is 0, so values at or above 0 are Republican-indicative and values below 0 are Democrat-indicative. The ifelse() statement tests this and applies the Party labels accordingly.
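Concretely, with toy data in place of the real rfr_df (the column and label names are assumed):

```r
# toy stand-in for rfr_df, already sorted by the log ratio, highest first
rfr_df <- data.frame(word = c("budget", "freedom", "people", "health"),
                     rfr  = c(1.2, 0.4, -0.3, -1.1),
                     stringsAsFactors = FALSE)

n <- 2  # the post uses 25; 2 keeps the toy example small
top_bottom <- rbind(head(rfr_df, n), tail(rfr_df, n))
top_bottom$Party <- ifelse(top_bottom$rfr >= 0, "Republican", "Democrat")
top_bottom
```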
Very cool. One more step that would really complement this analysis is a concordance on terms of interest, to see particular words in context. Below is a more complex function that takes a first stab at doing just that.
So to explore the word “federal” in the rep.corpus we run:
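The author's actual function is more elaborate; a base-R keyword-in-context (KWIC) sketch of the idea, with the function name and signature assumed, might look like this:

```r
# keyword-in-context: show each hit of `term` with `window` words either side
concordance <- function(texts, term, window = 5) {
  hits <- character(0)
  for (text in texts) {
    words <- unlist(strsplit(tolower(text), "\\s+"))
    for (i in which(words == term)) {
      left  <- if (i > 1) paste(words[max(1, i - window):(i - 1)], collapse = " ") else ""
      right <- if (i < length(words)) paste(words[(i + 1):min(length(words), i + window)], collapse = " ") else ""
      hits <- c(hits, trimws(paste(left, toupper(term), right)))
    }
  }
  hits
}

concordance(c("the federal budget and the federal government"), "federal", window = 1)
```

For a VCorpus, the texts can be pulled out with something like `sapply(rep.corpus, function(d) paste(content(d), collapse = " "))` before calling the function.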
And there you have it: a re-creation of the AntConc analysis, this time in R.