What an innings — a double century on Day 3 in Chennai yesterday! I loved it. The idea of exploring what people on Twitter think about his 200 is what triggered this blog post, though it required a fair amount of learning about using Twitter with R, which I have summarised below. Beyond the intent behind analysing the Dhoni 200 data, it also makes good business sense to analyse trends in social media: to understand how social media is treating your brand or products, you need to analyse the data available on Twitter. Here I use R for a basic analysis of tweets, based on the twitteR package available for R.
- If you have not installed the twitteR package, install it with install.packages("twitteR")
- It will also install the necessary dependencies of that package (RCurl, bitops, rjson).
- Load the package using library(twitteR)
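The search itself uses the searchTwitter() function from twitteR. A minimal sketch of the calls behind this post — the hashtag "#Dhoni200" is an assumption for illustration; the variable name dhoni200_tweets matches the one used later:

```r
# Assumes install.packages("twitteR") has already been run
library(twitteR)

# Fetch up to n tweets matching the search term.
# The hashtag here is an assumption, not confirmed by the post.
dhoni200_tweets <- searchTwitter("#Dhoni200", n = 377)

# searchTwitter() returns a list of status objects
length(dhoni200_tweets)
```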
- In the R console statements above I tried to fetch the maximum of 1,000 tweets, but I managed to get only 377 — that is why you see n=377. With a higher value it returned the error "Error: Malformed response from server, was not JSON".
- If you don't specify a value for n, it returns 25 records by default, which you can verify using length(dhoni200_tweets)
Next we need to analyse the tweets, so install the text-mining package "tm" using install.packages("tm")
The next step is to feed the collected tweets into the text-mining functions. To do that we first need to convert the list of tweets into a data frame, using the following commands:
> dim(dhoni200_df)
[1] 377 10
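A sketch of the conversion, assuming the tweet list from searchTwitter() is stored in dhoni200_tweets. twListToDF() is twitteR's helper for exactly this; the older do.call/rbind idiom gives the same result:

```r
library(twitteR)

# Convert the list of status objects into a data frame, one row per tweet.
# Equivalent base-R idiom:
#   dhoni200_df <- do.call("rbind", lapply(dhoni200_tweets, as.data.frame))
dhoni200_df <- twListToDF(dhoni200_tweets)

dim(dhoni200_df)   # 377 rows, one column per tweet attribute
```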
- Next we need to move the text data into a Corpus via a VectorSource, using the command > dhoni200.corpus = Corpus(VectorSource(dhoni200_df$text))
- When we issue the command > dhoni200.corpus, we get the result "A corpus with 377 text documents"
Next, refine the content by converting it to lowercase, removing punctuation and unwanted words, and converting it to a term-document matrix:
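The cleaning steps above can be sketched as follows, using the standard tm functions tm_map() and TermDocumentMatrix(). The matrix name dhoni200.tdm is an assumption; note that recent versions of tm require wrapping base functions such as tolower in content_transformer():

```r
library(tm)

# Normalise the text: lowercase, strip punctuation, drop English stop words.
# On newer tm versions, use content_transformer(tolower) instead of tolower.
dhoni200.corpus <- tm_map(dhoni200.corpus, tolower)
dhoni200.corpus <- tm_map(dhoni200.corpus, removePunctuation)
dhoni200.corpus <- tm_map(dhoni200.corpus, removeWords, stopwords("english"))

# Build the term-document matrix: one row per term, one column per tweet
dhoni200.tdm <- TermDocumentMatrix(dhoni200.corpus)
dhoni200.tdm
```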
A term-document matrix (783 terms, 377 documents)
Non-/sparse entries: 3930/291261
Sparsity: 99%
Maximal term length: 23
Weighting: term frequency (tf)
Analysis: When we look at the words that occurred at least 30 and 50 times respectively, these were the results:
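In tm this frequency analysis is done with findFreqTerms(); a sketch, assuming the term-document matrix is named dhoni200.tdm:

```r
library(tm)

# Terms appearing at least 30 times across all tweets
findFreqTerms(dhoni200.tdm, lowfreq = 30)

# Terms appearing at least 50 times
findFreqTerms(dhoni200.tdm, lowfreq = 50)
```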
Analysis: I then looked at which words are associated with the word "century". The following were the results:
The term "firstever" scores highest, at 0.61. In the findAssocs command, the number 0.20 is the correlation threshold: only terms whose correlation with "century" is at least 0.20 are returned.
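The call behind these results can be sketched as (again assuming the matrix is named dhoni200.tdm):

```r
library(tm)

# Terms correlated with "century" at a correlation of at least 0.20
findAssocs(dhoni200.tdm, "century", 0.20)
```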
The command names(dhoni200_df) lists the various columns that the tweets carry once converted to a data frame:
 [1] "text"       "favorited"    "replyToSN"  "created"      "truncated"
 [6] "replyToSID" "id"           "replyToUID" "statusSource" "screenName"
Analysis: Users with the most number of tweets:
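A per-user tweet count like this can be produced from the data frame's screenName column with base R's table(); a minimal sketch — the small data frame here is made up so the snippet is self-contained, and in the actual analysis dhoni200_df comes from the earlier conversion step:

```r
# Made-up stand-in for the real dhoni200_df from the steps above
dhoni200_df <- data.frame(
  screenName = c("fanA", "fanB", "fanA", "fanC", "fanA", "fanB"),
  stringsAsFactors = FALSE
)

# Count tweets per user and sort in descending order
tweet_counts <- sort(table(dhoni200_df$screenName), decreasing = TRUE)
head(tweet_counts)
```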