Sentiment Analysis: Opportunities and Challenges

Originally published January 22, 2008

The steps involved in sentiment analysis are easy enough to grasp: use automated tools to discern, extract, and process attitudinal information found in text, and apply them to sources as varied as articles, blog postings, e-mail, call-center notes, and survey responses that capture facts and opinions. What do customers, reviewers, the business community – thought leaders and the public – think about your company and your company's products and services – and about your competitors? What can you learn that will help you improve design and quality, positioning, and messaging, and also respond quickly to complaints?

The goal is to create market intelligence, to identify opportunities and issues, to understand the voice of the customer as expressed in everyday communications. The challenge stems from the huge variability and subtlety of spoken and written language: meaning that humans readily grasp from context is very difficult for computers to detect. How can software reliably discern facts and feelings in light of not only abbreviations, bad spelling, and fractured grammar, but also sarcasm, irony, slang, idiom, and, well, personality? How is a computer to understand? The following is taken verbatim from Dell's IdeaStorm.com, complete with misspellings and a buried subject, RAM – “Dell really... REALLY need to stop overcharging... and when i say overcharing... i mean atleast double what you would pay to pick up the ram yourself. ” (Isn't it excellent that Dell openly solicits customer feedback?) How can software additionally judge the impact of a posting like this? It's a hard challenge, yet the potential return is huge, as is the risk of not trying.

Text analytics can extend reach, lower costs, and improve reaction time in dealing with important enterprise information, including sentiment, that is locked in a variety of forms of human communications. Workers have limited capacity and they're (relatively) expensive, so we use computers for what they're good at: processing large volumes of data fast. Yet accuracy is a serious concern, and there is wide variation in the suitability of various available tools to the task. It is important to know what you can expect in order to create an approach that works given your information sources and goals.

Start with sources.

Text analytics has had great success in areas such as mining biomedical literature as part of drug-discovery processes. If we can understand the relationship between certain protein interactions and disease onset, we can begin to identify promising therapies. Text analytics can help us achieve this understanding without costly and time-consuming clinical trials. Here we mine for factual information, yet even for formally written scientific literature, the accuracy of information extraction as measured by precision and recall – by levels of correctness and exhaustiveness – typically reaches only the 85%-90% range.
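
To make precision and recall concrete, here is a minimal sketch in Python that scores an extractor's output against a hand-annotated gold standard; the interaction pairs and counts are invented for illustration.

    def precision_recall(extracted, gold):
        """Score extracted items (e.g., protein-interaction mentions)
        against a hand-annotated gold standard."""
        extracted, gold = set(extracted), set(gold)
        true_positives = len(extracted & gold)
        precision = true_positives / len(extracted)  # correctness of what was extracted
        recall = true_positives / len(gold)          # exhaustiveness: how much was found
        return precision, recall

    # Hypothetical run: the extractor reports 8 interactions, 7 of them genuine,
    # out of 10 that human annotators marked in the literature.
    extracted = {"A-B", "A-C", "B-C", "C-D", "D-E", "E-F", "F-G", "X-Y"}
    gold = {"A-B", "A-C", "B-C", "C-D", "D-E", "E-F", "F-G", "G-H", "H-I", "I-J"}
    p, r = precision_recall(extracted, gold)
    print(f"precision {p:.0%}, recall {r:.0%}")  # precision 88%, recall 70%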

Opinions are far harder than facts to describe. Opinion sources are typically informally written (or worse) and highly diverse. They are short on descriptive metadata that can provide context for analytical efforts. So sentiment-extraction accuracy is typically far lower, but it can be boosted by approaches that are appropriate for the sources and goals.

We might start by classifying source documents – Web pages, e-mail messages, news or blog articles, or audio transcripts – by theme, topic, type, authorship, and other characteristics. To this end, we parse documents for entities such as names of persons, products, companies, and places; for descriptive attributes such as authorship; and also for abstract concepts. For example, the concept “vehicle” subsumes entities such as names of makes and models, with year and style attributes. Taxonomies can help in the classification effort, but they may be incomplete when dealing with truly diverse sources.
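
As a minimal sketch of the entity-extraction step, the snippet below uses the open-source NLTK toolkit (my choice for illustration; any comparable parser or commercial extractor would serve) to pull person, organization, and place names out of a sentence:

    import nltk  # assumes NLTK plus its tokenizer, tagger, and named-entity chunker models are installed

    sentence = "Michael Dell responded to IdeaStorm feedback from customers in Austin."
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)   # part-of-speech tags feed the entity chunker
    tree = nltk.ne_chunk(tagged)    # groups tokens into PERSON, ORGANIZATION, GPE, ...

    # Print each recognized entity with the type NLTK assigns to it.
    for subtree in tree.subtrees():
        if subtree.label() != "S":
            print(subtree.label(), " ".join(word for word, tag in subtree.leaves()))

Classification by theme or topic typically sits on top of this kind of output, matching extracted entities and concepts against a taxonomy.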

Entity extraction gives us subject matter for further investigation. But beyond facts – “I bought my first Mac last year” – what was the writer or speaker trying to communicate?

According to researchers Livia Polanyi and Annie Zaenen, “The most salient clues about attitude are provided by the lexical choice of the writer, ... but the organization of the text also contributes information relevant to assessing attitude.” Lexical choices: those are words. Boost, benefit, and brave indicate positive valence – that is, tone or polarity – while conspire, catastrophe, and cowardly are negative.

It is dangerous, however, to judge sentiment only by the presence of valence words. Throw in a negator such as not or never and you flip the valence. Intensifiers – for instance, very and most – indicate the strength of the sentiment expressed. Modal operators such as might, could, and should distinguish hypothetical from real situations and weaken intensity, as in Polanyi and Zaenen's example sentence “If Mary were a terrible person, she would be mean to her dogs.” Other “presuppositional” terms, such as barely and even, similarly relate what the speaker/writer observes to his or her expectations. They can also help us distinguish subjective statements from objective ones.
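
To see how valence words, negators, and intensifiers interact, here is a deliberately tiny lexicon-based scorer; the lexicon and weights are made up for illustration and are nowhere near what a production system would use.

    import re

    # Toy valence lexicon and modifier lists (illustrative only).
    VALENCE = {"boost": 1, "benefit": 1, "brave": 1,
               "conspire": -1, "catastrophe": -1, "cowardly": -1}
    NEGATORS = {"not", "never"}
    INTENSIFIERS = {"very": 2.0, "most": 1.5}

    def score(sentence):
        """Sum word valences, flipping on a nearby negator and scaling on an intensifier."""
        words = re.findall(r"[a-z']+", sentence.lower())
        total = 0.0
        for i, word in enumerate(words):
            if word not in VALENCE:
                continue
            value = float(VALENCE[word])
            prior = words[max(0, i - 2):i]           # look back a couple of tokens
            if any(w in NEGATORS for w in prior):
                value = -value                       # "not brave" flips polarity
            for w in prior:
                value *= INTENSIFIERS.get(w, 1.0)    # "very brave" strengthens it
            total += value
        return total

    print(score("The merger was a very brave move."))                 # 2.0
    print(score("The rollout was not brave; it was a catastrophe."))  # -2.0

Even this toy shows why raw counts of positive and negative words mislead: word order and local context matter.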

We can start with a lexicon of all these expressive words, and perhaps we'd even build it up and refine it via some form of machine-learning process that starts from a manually annotated training set. A deeper linguistic analysis, based on word-scale to document-scale analysis of text, brings us a long way toward our goal of inferring meaning.
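
One simple, hedged sketch of that refinement step: count how often each word appears in manually labeled positive versus negative examples and promote the most skewed words into the lexicon. The training snippets below are invented, and a real system would use far more data and more careful smoothing.

    import math
    from collections import Counter

    # Tiny hand-annotated training set (hypothetical).
    labeled = [
        ("great battery life and a gorgeous screen", "pos"),
        ("support was helpful and the price is great", "pos"),
        ("the keyboard is awful and the fan is loud", "neg"),
        ("awful packaging, arrived damaged and late", "neg"),
    ]

    pos_counts, neg_counts = Counter(), Counter()
    for text, label in labeled:
        (pos_counts if label == "pos" else neg_counts).update(text.split())

    def log_odds(word):
        """Positive result: the word leans positive; negative result: it leans negative."""
        return math.log((pos_counts[word] + 1) / (neg_counts[word] + 1))  # add-one smoothing

    for word in ("great", "awful", "battery"):
        print(word, round(log_odds(word), 2))  # great 1.1, awful -1.1, battery 0.69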

Other information-extraction approaches are more quantitative. They analyze text using Bayesian statistical models for pattern matching that discern relationships among disparate pieces of information – the meaning of texts and the entities contained – via “interaction analysis.” Autonomy is a proponent of this technique, applied for instance by their etalk subsidiary in mining recorded call-center audio. When dealing with spoken language, it is possible to add attributes such as voice volume and pitch, which suggest emotion and emotional intensity, to the mix. Sequence is also important (just as it is for life-sciences researchers who apply text analytics to study protein interactions), providing additional context that supports sentiment analysis.
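
For a flavor of the purely statistical route, here is a hedged sketch using the scikit-learn toolkit's multinomial Naive Bayes classifier on a few invented, labeled call-center snippets; this illustrates the general learn-from-word-patterns idea, not Autonomy's or anyone else's proprietary technique.

    # Assumes scikit-learn is installed; the labeled snippets are invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "the agent resolved my issue quickly, great service",
        "love the new design, it works perfectly",
        "still waiting after two weeks, terrible support",
        "the unit overheats and the battery is dead on arrival",
    ]
    train_labels = ["positive", "positive", "negative", "negative"]

    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(train_texts, train_labels)

    print(model.predict(["great service, issue resolved quickly"]))   # likely 'positive'
    print(model.predict(["terrible unit, battery dead on arrival"]))  # likely 'negative'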

Statistically rooted approaches have strengths, such as the ability to work with text in arbitrary human languages, but they also carry risks. According to Vadim Berman of Digital Sonata, developer of the Carabao language kit, problems arise, for instance, when linguistic rules are applied to texts whose characteristics don't match those of the training sets used to generate the rules. Faisal Mushtaq, CTO of media and market intelligence solution provider Biz360, explains, “No single technology or technique works the best. Automated analysis of unstructured text poses unique technology challenges requiring an interdisciplinary approach to text analysis. A good solution is a combination of the 'right' technologies to solve a real/immediate customer problem.”

In addition to the statistical vs. linguistic approaches, two hybrids are worth considering:

  1. Take fielded (usually numeric) information into account to improve sentiment-analysis accuracy; see the sketch after this list. For instance, the stars associated with Internet Movie DB comments hint at polarity. An Alvin and the Chipmunks reviewer – I refused to take my kids to that one myself – gave the movie 8 stars out of 10: it is likely that the sentiments captured in his text were generally positive and moderately forcefully held. It's not surprising that a 5/10 review carries the title "A huge disappointment for fans of this memorable series" while a 10/10 review is coupled with "I just LOVED IT!" Similarly, a hotel guest who chose a Fair rating in a satisfaction survey is likely to have posted more complaints than praise in free-text response fields.

  2. Try two analysis passes: a first pass using automated classification/extraction tools and a second pass for manual confirmation, correction, and augmentation – either as part of a human-assisted machine-learning approach in which manual intervention tails off as accuracy improves, or as an ongoing arrangement.
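
Here is a rough sketch of the first hybrid, blending a text-derived polarity score with the reviewer's own star rating; the weighting scheme is an assumption and would in practice be tuned against manually scored reviews.

    def combined_sentiment(text_score, stars, max_stars=10, text_weight=0.6):
        """Blend a text-derived polarity in [-1, 1] with a star rating (hypothetical weighting)."""
        star_score = 2.0 * (stars / max_stars) - 1.0   # map 0..max_stars onto -1..1
        return text_weight * text_score + (1 - text_weight) * star_score

    # An 8/10 review whose text reads mildly positive:
    print(round(combined_sentiment(text_score=0.3, stars=8), 2))    # 0.42
    # A 5/10 review whose text reads clearly negative:
    print(round(combined_sentiment(text_score=-0.7, stars=5), 2))   # -0.42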

There are further challenges. For instance, a particular movie review may contain opinions of various polarities – some positive, some negative, and some neutral – and intensities. How do you decide the overall sentiment of the review and, similarly, understand the aggregate picture, the voice of the market rather than just of individuals? Can you discover relationships between sentiments and the characteristics of the people who expressed them, as well as trends over time and how opinions propagate through social networks? Can you forecast quantities such as box-office receipts from opinions extracted from movie reviews? These analytical steps are the province of traditional data mining and descriptive statistics, which can be (and are being) applied to extracted attitudinal information. The view of Biz360 CTO Mushtaq is that “only a solution that leverages a combination of Information Extraction, Data Mining and Business Intelligence technologies can deliver true actionable intelligence.”
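
As a sketch of the simplest aggregation step, the snippet below rolls sentence-level polarity scores up to one score per review and then averages review scores by month to expose a trend; all of the numbers are invented.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical extracted data: (month, sentence-level polarity scores for one review).
    reviews = [
        ("2008-01", [0.8, 0.2, -0.1]),
        ("2008-01", [-0.6, -0.3]),
        ("2008-02", [0.5, 0.4, 0.1]),
        ("2008-02", [0.9]),
    ]

    by_month = defaultdict(list)
    for month, sentence_scores in reviews:
        by_month[month].append(mean(sentence_scores))   # one score per review

    for month in sorted(by_month):
        print(month, round(mean(by_month[month]), 2))
    # 2008-01 -0.07, 2008-02 0.62 -> sentiment on this topic is trending upward

Averaging is the crudest possible choice; deciding how to weight mixed-polarity opinions within a single review is exactly the hard part described above.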

In today's Web 2.0 world, and when working with traditional channels, actionable intelligence may include an understanding of the reach and the influence of opinions. What kinds of views spread fastest and widest? How do they propagate through social networks? Who are the opinion leaders, who are the influencers, and who's listening? These questions can be answered by applying data mining techniques to attitudinal information, completing the sentiment analysis task.
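
A minimal sketch of the influence question, using the open-source networkx library (an assumed tool choice) on a small, made-up who-links-to-whom graph:

    import networkx as nx

    # Hypothetical graph: an edge A -> B means author A links to or quotes author B.
    G = nx.DiGraph()
    G.add_edges_from([
        ("blog_a", "analyst_x"), ("blog_b", "analyst_x"),
        ("blog_c", "analyst_x"), ("blog_c", "blog_a"),
        ("forum_user", "blog_a"),
    ])

    # PageRank favors authors cited by other well-cited authors: likely influencers.
    for author, rank in sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1]):
        print(f"{author:12s} {rank:.3f}")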

Jeffrey Catlin, CEO of text-analytics vendor Lexalytics, believes “sentiment analysis has come a long way in the last four years. In certain domains, and under certain uses, it's a very dependable technology.” Nonetheless, accuracy remains significantly lower than what is typically achieved when you stick to named entities, facts, and well-structured documents. Text analytics/content management vendor Nstein reports that their Nsentiment annotator, “when trained with [an] appropriate corpus, can achieve a precision and recall score between 60% to 70%.” These are good numbers when it comes to attitudinal information. Michelle DeHaaff, marketing VP at Attensity, says that “getting beyond sentiment to actionable information, to 'cause,' is what our customers want. But first, you've got to get sentiment right.”

We've looked at text-analytics techniques with the goal of getting sentiment right, and in a subsequent article, we'll focus on applications. 

  • Seth Grimes

    Seth is a business intelligence and decision systems expert. He is founding chair of the Text Analytics Summit, chair of the Sentiment Analysis Symposium, and principal consultant at Washington, D.C.-based Alta Plana Corporation. Seth consults, writes, and speaks on information-systems strategy, data management and analysis systems, IT industry trends, and emerging analytical technologies.
