Step-by-Step Sentiment Analysis on Twitter Data using R with Airtel Tweets: Part III

After a lot of difficulties, here is my third post on this topic this weekend. In my first post we saw what sentiment analysis is and what steps are involved in it. In my previous post we saw, step by step, how to retrieve tweets and store them in a file. Now we will move on to the sentiment analysis itself.

Goal: To do sentiment analysis on Airtel Customer support via Twitter in India.

In this Post: We will load the tweets that were retrieved and stored in the previous post and start the analysis. I'm going to use the simple algorithm used by Jeffrey Breen to determine the scores/moods for a particular brand on Twitter.

We will use the opinion lexicon provided by him, which is primarily based on the papers of Hu and Liu. You can visit their site for a lot of useful information on sentiment analysis. We determine the positive and negative words in each tweet, and scoring happens based on those counts.

Step 1: We will import the CSV file into R using read.csv, and you can use summary to display a summary of the data frame.
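A minimal sketch of this step, assuming the file from the previous post was saved as airtel_tweets.csv (the file name is my assumption) and loading it into the data frame airteltweetdata, the name used again in Step 5:

    # Read the tweets saved in the previous post (file name assumed)
    airteltweetdata = read.csv("airtel_tweets.csv", stringsAsFactors = FALSE)

    # Display a summary of the data frame and peek at the tweet text
    summary(airteltweetdata)
    head(airteltweetdata$text)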

Step 2: We can store the positive-word and negative-word lists locally and import them using the scan function, as given below:
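A sketch, assuming the Hu and Liu lexicon files are stored locally under their distributed names, positive-words.txt and negative-words.txt; pos_words and neg_words are the vectors used in the later steps:

    # Load the opinion lexicon; comment.char skips the header comments in the files
    pos_words = scan("positive-words.txt", what = "character", comment.char = ";")
    neg_words = scan("negative-words.txt", what = "character", comment.char = ";")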


Step 3:

Now we will look at the code for evaluating the sentiment. It has been taken from http://jeffreybreen.wordpress.com/2011/07/04/twitter-text-mining-r-slides/. Thanks to Jeffrey for the source code.
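For reference, here is a lightly commented version of Jeffrey Breen's score.sentiment function from the slides linked above:

    score.sentiment = function(sentences, pos.words, neg.words, .progress = 'none')
    {
      require(plyr)
      require(stringr)

      # laply applies the scoring function to each sentence and
      # combines the results into a plain vector of scores
      scores = laply(sentences, function(sentence, pos.words, neg.words) {

        # clean up the sentence: strip punctuation, control characters and digits
        sentence = gsub('[[:punct:]]', '', sentence)
        sentence = gsub('[[:cntrl:]]', '', sentence)
        sentence = gsub('\\d+', '', sentence)
        sentence = tolower(sentence)

        # split the sentence into individual words
        words = unlist(str_split(sentence, '\\s+'))

        # match() returns the position of the matched term or NA;
        # we only care whether a match was found or not
        pos.matches = !is.na(match(words, pos.words))
        neg.matches = !is.na(match(words, neg.words))

        # score = number of positive words minus number of negative words
        sum(pos.matches) - sum(neg.matches)
      }, pos.words, neg.words, .progress = .progress)

      data.frame(score = scores, text = sentences)
    }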


Step 4:

We will test this score.sentiment function with some sample data.


In this step we have created a test vector and added three sentences to it. It contains different words which may be positive or negative. Pass this test vector to the score.sentiment function along with pos_words and neg_words, which we loaded in the earlier steps. You then get a result score from score.sentiment against each sentence.
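A sketch of such a test; these three sentences are made-up examples, not the ones from the original screenshot:

    # Three sample sentences mixing positive and negative words (illustrative only)
    test = c("I love the fast and helpful service",
             "the network is horrible and the support was rude",
             "it is an okay experience, nothing special")

    # Score each sentence against the lexicons loaded in Step 2
    test.scores = score.sentiment(test, pos_words, neg_words)
    test.scores$score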

We will also try to understand a little more about this function and what it does:

a. Two libraries are loaded: plyr and stringr, both written by Hadley Wickham, one of the great contributors to R. You can learn more about plyr using this page or tutorial, and you can get more insight into the split-apply-combine strategy here, the best place to start according to Hadley Wickham. You can think of it by analogy with Google's MapReduce algorithm, which is used more in the context of parallelism. stringr makes string handling easier.

b. Next, laply is used. You can learn more about what the apply functions do here. In our case we pass the sentences vector to laply. In simple terms, this method takes each tweet, passes it to the scoring function along with the positive and negative words, and combines the results.

c. Next, gsub handles the replacements, using the form gsub(pattern, replacement, x).

d. Then the sentence is converted to lowercase.

e. The sentences are split into words using str_split, and the score is computed by matching those words against the positive and negative lists (a short illustration of steps c to e follows this list).
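A minimal illustration of steps c, d and e on a made-up sentence:

    library(stringr)

    sentence = "Airtel's 3G is GREAT, but billing is a pain!!"

    # c. remove punctuation, control characters and digits
    sentence = gsub('[[:punct:]]', '', sentence)
    sentence = gsub('[[:cntrl:]]', '', sentence)
    sentence = gsub('\\d+', '', sentence)

    # d. convert to lowercase
    sentence = tolower(sentence)

    # e. split into words: "airtels", "g", "is", "great", ...
    words = unlist(str_split(sentence, '\\s+'))
    words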

Step 5: Now we will pass the Airtel tweets from airteltweetdata$text to the score.sentiment function to retrieve the scores.
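A minimal sketch, reusing the data frame from Step 1 and the lexicons from Step 2:

    # Score every Airtel tweet; the result is a data frame of scores and text
    airtel.scores = score.sentiment(airteltweetdata$text, pos_words, neg_words)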

Step 6: We will look at the summary of the scores and their histogram:
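For example, assuming the result data frame from Step 5 is called airtel.scores:

    # Distribution of the sentiment scores across all tweets
    summary(airtel.scores$score)

    # Histogram of the scores; bars to the left of zero are negative tweets
    hist(airtel.scores$score, main = "Sentiment of Airtel tweets", xlab = "Score")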

The histogram outcome:

It shows that most of the 1,499 responses about Airtel are negative.

Disclaimer: Please note that this sample data is analyzed only for educational and learning purposes. It is not meant to target or influence any brand.


Text Mining: Google n-Gram Viewer & the word “Tamil”

What is an n-Gram?

According to Wikipedia, the n-Gram Viewer is "a phrase-usage graphing tool which charts the yearly count of selected n-grams (letter combinations) or words and phrases, as found in over 5.2 million books digitized by Google Inc (up to 2008)."

The Reference URL:

http://books.google.com/ngrams/

My Experiment:

http://books.google.com/ngrams/graph?content=Tamil&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=

I was curious about the following, to understand the presence of the word "Tamil" in the Google digitization project.

Figure 1: Courtesy Google n-Grams

 

Some scripts from the book date back to 1854, courtesy of the Google digitization project.

 

Figure 2: Book digitized by Google from Jaffna, Sri Lanka, Tamil to English

 

Astonished by the way the digitization has been done and the way the text mining works. Awesome. Try your hands at it too.

Text Mining: Intro, Tools and References

What is it?

In simple terms, it is retrieving quality information from text for analysis.

Where can it be used?

  1. Analysis of emails, messages, etc.
  2. Analysis of open-ended surveys
  3. Analysis of claims for fraud detection
  4. Investigation by crawling
  5. Spam filtering
  6. Labeling for Machine learning
  7. Recommendation engines

Various Stages of Text Mining:

Good tools for Text Mining (free :-)):

  • R Programming (refer to the tm package)
  • Gensim (Python library for analyzing plain text)
  • GATE (a 15-year-old open-source library for text processing)

Good References:

Where to get started: http://tedunderwood.com/2012/08/14/where-to-start-with-text-mining/

http://www.statsoft.com/textbook/text-mining/

http://rapid-i.com/component/option,com_myblog/show,Great-Video-Series-about-Text-Mining.html/Itemid,172/

Analysis of Cricketer “Dhoni’s 200” tweets on Twitter using R

What an innings of 200 on Day 3 in Chennai yesterday! I loved it. The thought of exploring what people on Twitter think about his 200 is what triggered me to write this blog, but it required a lot of learning about using Twitter with R, which I have summed up below. Irrespective of the intent behind analyzing Dhoni's 200, it also makes a lot of business sense to analyze trends in social media: to understand how social media is dealing with your brand or products, it's important to analyze the data available on Twitter. I'm trying to use R for a fundamental analysis of tweets based on the twitteR package available for R.

  1. If you have not installed the twitteR package, you need to use the command install.packages("twitteR") (note: the package name is twitteR, not twitter).
  2. It will also install the necessary dependencies of that package (RCurl, bitops, rjson).
  3. Load the twitteR package using library(twitteR).
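    The console statements from the original screenshot are missing here; a sketch of what the final working call would look like, where the exact query string is my assumption:

    > dhoni200_tweets = searchTwitter("#Dhoni", n=377)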

  4. In the above R console statements I tried to get up to 1,000 tweets, but I managed to get only 377. That's the reason you see n=377; otherwise it returned the error "Error: Malformed response from server, was not JSON".
  5. If you don't specify a value for n, it will return 25 records by default, which you can verify using length(dhoni200_tweets).
  6. Next we need to analyze the tweets, so we install the text-mining package tm (as shown below).
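    For reference, installing and loading tm:

    > install.packages("tm")

    > library(tm)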

  7. The next step is to give the tweets we have collected to the text-mining package, but to do so we need to convert the tweets into a data frame. Use the following commands:

    > dhoni200_df=do.call("rbind",lapply(dhoni200_tweets,as.data.frame))

    > dim(dhoni200_df)

    [1] 377 10

    > dhoni200_df

  8. Next we need to move the text data into a corpus using VectorSource, with the command > dhoni200.corpus=Corpus(VectorSource(dhoni200_df$text))
  9. When you issue the command > dhoni200.corpus, you will get the result "A corpus with 377 text documents".
  10. Next, refine the content by converting to lowercase, removing punctuation and unwanted words, and convert it to a term-document matrix:

    > dhoni200.corpus=tm_map(dhoni200.corpus,tolower)

    > dhoni200.corpus=tm_map(dhoni200.corpus,removePunctuation)

    > mystopwords=c(stopwords('english'),'profile','prochoice')

    > dhoni200.corpus=tm_map(dhoni200.corpus,removeWords,mystopwords)

    > dhoni200.dtm=TermDocumentMatrix(dhoni200.corpus)

    > dhoni200.dtm

    A term-document matrix (783 terms, 377 documents)

    Non-/sparse entries: 3930/291261

    Sparsity : 99%

    Maximal term length: 23

    Weighting : term frequency (tf)

  11. Analysis: When we try to analyze the words which have occurred at least 30 and 50 times respectively, these were the results (see the sketch below):
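    The result screenshots are missing here; in the tm package this frequency lookup is done with findFreqTerms, for example:

    > findFreqTerms(dhoni200.dtm, lowfreq=30)

    > findFreqTerms(dhoni200.dtm, lowfreq=50)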

  12. Analysis: I tried to further analyze the words associated with the word "century". The following were the results:

    The term "firstever" seems to have the highest association, at 0.61. In the findAssocs command, the number 0.20 is the correlation threshold.
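    The call itself (reconstructed, since the original screenshot is missing) would look like:

    > findAssocs(dhoni200.dtm, "century", 0.20)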

  13. The command names(dhoni200_df) will list the various columns that come out when tweets are converted to a data frame.

    [1] "text"         "favorited"    "replyToSN"    "created"      "truncated"

    [6] "replyToSID"   "id"           "replyToUID"   "statusSource" "screenName"

  14. Analysis: Which users tweeted the most:

    > counts=table(dhoni200_df$screenName)

    > barplot(counts)