DOI: 10.4185/RLCS-2015-1067en | ISSN 1138-5820 | RLCS #70 | 2015
Young internet users' evaluation of online consumer reviews. The case of the students from the University of the Basque Country (UPV/EHU)
Word of mouth (WOM) has always been a powerful force in the marketing of new products. Moreover, this force is not under the direct control of companies and organisations. With the emergence of the internet, some researchers have begun to speak of “electronic word of mouth” (eWOM) as if it were a completely different category (Kiecker, Cowles, 2002; Liu, Dong, Burkant, 2013).
EWOM takes very different forms on the internet, from contacts’ private recommendations via email or social networks to anonymous online reviews of products. While the first type of eWOM is not very different from conventional WOM, the second represents a qualitative leap: the scope of reviews multiplies (from one-to-one to one-to-many), the close social network is surpassed to access the opinions of complete strangers, and access to information becomes unprecedented (24/7). Without a doubt, the emergence of the internet has taken WOM to a new level, and the consumer has never had so much power over commercial communications.
In fact, the term prosumer has been coined to refer to the active consumer who generates content on the web. The term was initially attributed to Alvin Toffler (1980), who used it in the offline context, but it has subsequently been applied to the online context in various research works. This “prosumer” has become a subject of much debate among academics (Prahalad, Ramaswamy, 2004; Tapscott, Williams, 2006; Bruns, 2009; Ritzer, Jurgenson, 2010; Comor, 2010).
This “prosumer” has not gone unnoticed by the leaders of organisations, who not only allow but also encourage consumers to post reviews and opinions of their products on their own websites. The stronger this phenomenon becomes, the less sense it makes to invest in conventional advertising. For example, Amazon.com, which has used the reviews of its members as a selling tool since its inception, has eliminated its general budget for TV and print ads. Instead, Amazon.com uses this money to offer free shipping on orders over 25 USD, in the hope that this will contribute to positive word of mouth (Thompson, 2003). Today, there are even online business models, such as Tripadvisor.com and Ciao.com, which rely precisely on the existence of such active consumers.
Faced with the rise of this phenomenon, several researchers have tried to respond to questions relating to the effects of online reviews. For most users, practically all of the reviews they read have been written by strangers, so the persuasive effectiveness of such content could be questioned. However, several studies have shown that online reviews can be the source of changes in purchasing attitudes and decisions (Chen, Dhanasobhon, Smith, 2001; Chevalier, Mayzlin, 2006; Dellarocas, Zhang, Awad, 2007; Zhu, Zhang, 2010). So what makes one review more persuasive than another?
One of the factors that has been studied is the credibility of the source of the review (Lim, Van der Heide, 2014). Lis (2013) found that expertise and trustworthiness are two different factors that determine the perception of credibility in eWOM. Another study (Willemsen, Neijens, Bronner, 2012) confirmed that credibility is a curious phenomenon, since expert sources can be perceived as less trustworthy unless the title of expert has been conferred by peers. Reichelt, Sievert and Jacob (2014) found that trustworthiness was the most important dimension of credibility.
Other characteristics, such as revealing the identity of the reviewer or providing information about him or her, have also been examined (Forman, Ghose, Wiesenfeld, 2008). The conclusion was that revealing information about the reviewer exerted a positive effect on sales, especially when the number of reviews was high and the reader therefore used particular methods to be able to process large volumes of information. Another variable that affects the trustworthiness of reviews is the geographic proximity of the author. It has been established that these variables have a predictive value on sales far superior to that of other factors, such as the valence of reviews.
In relation to this, there are conflicting results regarding the valence of online reviews. While Forman and colleagues were not able to identify an effect of positive reviews on sales in the aforementioned study, a few studies have found evidence of that relation (Duan, Gu, Whinston, 2008; Cui, Lui, Guo, 2012). However, other authors have found that negative reviews can also have positive effects on sales (Berger, Sørensen, Rasmussen, 2010).
In general, several studies (Cui, Lui, Guo, 2012; Lee, Park, Han, 2008; Lee, Koo, 2012; Ballantine, Au Yeung, 2015) suggest that negative information is perceived as more analytic, credible and significant than positive information (negativity bias), resulting in a greater effect on attitudes towards the product. Paradoxically, reviews tend to be perceived as more useful when they are positive (positivity bias), especially in the case of experiential, rather than utilitarian, products (Pan, Zhang, 2011). Other studies have found that extreme reviews (very positive or very negative) are perceived as less useful in the case of experiential products (Mudambi, Schuff, 2010). However, other research insists that the most relevant aspect is that the valence of a review be consistent with the valence of the reviews that surround it (Quaschning, 2014).
The length and depth of a review have also been positively correlated with its perceived helpfulness, although this effect is moderated by the type of product, being more noticeable in the case of utilitarian products (Mudambi, Schuff, 2010; Pan, Zhang, 2011).
The number of reviews is another factor that has been positively correlated with sales in markets such as films (Duan, Gu, Whinston, 2008; Liu, 2006) and video games (Cui, Lui, Guo, 2012). Most of these studies insist that the volume of reviews is more decisive than their valence when it comes to establishing correlations with sales.
The content of reviews has also been studied. As Willemsen, Neijens, Bronner and De Ridder (2011) have pointed out, several studies have highlighted the importance of the quality of the information in an environment in which the source is almost always anonymous.
Scholars have also studied other attributes of reviews and have reached different conclusions, including the following: reviews based on objective attributes are more credible (Lee, Koo, 2012); the number of arguments included in a review matters more for its persuasive ability than the quality of its content (Dellarocas, Narayan, 2006); and the mere presence of arguments makes a review more credible (Price, Nir, Cappella, 2006).
The literature review revealed that there are hardly any references to online reviews in Spanish-language scientific journals. While there are some published studies about online reputation (Martínez Maria-Dolores, Bernal García, Mellinas, 2012), the identification of the most active reviewers (Cabezudo, Izquierdo, Pinto, 2012), and even the reviews published about hotels on TripAdvisor (Melián González, Bulchand Gidumal, González López-Valcárcel, 2011), there are no studies specifically focused on people’s perceptions of online reviews or on the effects of their various features.
Despite the disparity of issues, approaches and conclusions, it seems clear that the reader of anonymous reviews uses heuristics to process the vast amount of information that is normally available online about products. Certain features of the reviews seem to condition their processing.
It also seems clear that online reviews have effects on purchases and it seems logical to think that the effect will be more pronounced in their natural channel: the e-commerce B2C model, which in 2013 was worth 14,610 million euros in Spain, 18% more than in 2012 (Urueña et al., 2014: 9).
This research has focused on describing the use of reviews written by anonymous users and on identifying some common elements.
The overall purpose of this research is to identify the ways in which university students use and evaluate anonymous online reviews. More specific research questions have been derived from this general objective, including the following:
To select the sample, we focused on university students for several reasons. In the first place, they constitute a young segment of the population for whom access to the internet is not a problem: they are digital natives. In addition, in the Autonomous Community of the Basque Country, young university graduates represent 44.2% of the total (compared to 42.3% in Spain and 36.9% in the EU). They are, therefore, a large part of the future Spanish internet users, who are currently forming the consumption routines that will probably accompany them for the rest of their lives.
This study involved qualitative and quantitative techniques. First, three focus groups were conducted, each with 6-8 participants, to identify the characteristics of the reviews that people are most likely to take into account in their buying decision-making process. Since the study group turned out to be relatively homogeneous, we did not consider it necessary to form more focus groups. The data obtained from the groups were supplemented with the existing literature in order to design the self-administered online questionnaire used in the next phase of the research. This questionnaire was made available through the servers of the University of the Basque Country (UPV/EHU) to all of its students. The following table summarises the data:
Table 1: sample data
The sample has the following demographic features. The average age of respondents is 24, with a mode of 21 years. Most participants (77.2%) are in the 18-25 age range. 64.9% of the participants are women and 35.1% are men, which results in a slight overrepresentation of women in comparison to their proportion in the UPV/EHU (56%). Of the participants, 34.8% usually reside in a municipality of more than 120,000 inhabitants; 31.4% in a municipality of between 20,001 and 120,000 inhabitants; and the remaining 33.8% in a town of fewer than 20,000 inhabitants. 99% claim to read Spanish fluently, while only 81.4% said the same about English (not an official language) and 76.9% about Basque (a co-official language in the Autonomous Community of the Basque Country).
Google Forms was used to create the questionnaire and collect the data, which were subsequently stored in a database and analysed with SPSS. Frequency and contingency tables and the chi-squared statistic were used to assess the significance of the associations between variables. All of the relations mentioned in the results section are significant (p<0.05).
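The significance criterion described above can be illustrated with a minimal sketch. The counts below are hypothetical, invented purely for illustration (they are not the study's data), and SciPy's chi-squared test of independence stands in for the SPSS procedure:

```python
# Illustrative sketch only: hypothetical counts, not the study's data.
# A chi-squared test of independence on a 2x2 contingency table,
# analogous to the SPSS analysis described in the text.
from scipy.stats import chi2_contingency

# Rows (hypothetical): buys online at least monthly / less often
# Columns (hypothetical): consults reviews before buying / does not
table = [[420, 580],
         [300, 700]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")

# Following the paper's criterion, the association is reported as
# significant when p < 0.05.
significant = p < 0.05
```

For a 2x2 table, `chi2_contingency` applies Yates' continuity correction by default; for larger contingency tables (e.g. buying frequency by age group) no correction is applied.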
87.5% of the sample claims to buy products online, which is a high percentage in comparison with the 60.6% of Spanish internet users who made purchases last year according to Urueña et al. (2014:9) and is close to the 85% obtained by a similar study that used a sample composed of 18-55 year-old internet users (The Cocktail Analysis, 2013:5). There is no doubt that buying online is a natural activity for this demographic group.
Among those who buy online (n=3868), 6.5% do so at least once a fortnight, 31.8% at least once a month, and 61.7% at least once a year. Therefore, the majority of the sample buys online only sporadically, about a third do so with some frequency (at least once a month), and only a small group (about 1 in every 15) can be considered regular buyers.
The majority of respondents (83.5%) declared that they consult the online reviews of other buyers when they plan to make a purchase, while 38.9% only do so when they plan to make the purchase online. Therefore, the use of online reviews is a widespread and common practice among this group.
In the focus groups there were two strong trends. On the one hand, showrooming: consumers who go to the physical store to see and try the product but then buy it online at the lowest price they can find. On the other hand, there is the opposite trend: consumers who consult different sources on the internet but end up going to the physical store to purchase the product: “I wanted to buy a last-generation TV in a physical store and the information that I found online helped me to know which specific questions to ask the seller.” There is no doubt that both channels (electronic and physical) are strongly related.
3.1. Reviews’ weight in the consumer buying decision process
Participants were asked about the weight they give to online reviews in their buying decision process. The results are presented in the following table as “percentage of the decision”:
Figure 1: Weight attributed to online reviews in the buying decision process
The average weight attributed by respondents to online reviews in their buying decision process is 50%, so it can be said that this group of respondents gives considerable value to anonymous online reviews. Those who claim to give anonymous online reviews an importance equal to or less than 30% in their buying decision process represent 22.4% of the sample; those who attributed a weight of 40% to 70% to reviews represent 64% of the sample; and, finally, only 13.6% of the sample declare that they attribute a weight of 80% or higher to reviews. The first group could be called the “sceptical users”, the second group the “average users”, and the third, small group the “credulous users”.
The group of sceptical users has slightly more male members than the sample as a whole (41% vs. 35.6%, respectively), buys less often than the sample (30.7% vs. 36.9%, respectively, buy at least once a month) and its members consider themselves less experienced in searching for online reviews (only 37.8% of the group consider themselves experienced, against 45.7% in the whole sample).
On the other hand, the credulous group has slightly more female members than the sample as a whole (72.4% vs. 64.4%, respectively), its members consider themselves more experienced in searching for online reviews than the sample (54.4% vs. 45.7%, respectively) and in some cases its members consider online reviews more reliable than friends or family (32.3% and 34.9% vs. 26.6% and 28.2%, on average, respectively).
The data obtained from the focus groups enable us to put into perspective these results, which suggest that the importance of the brand in the buying decision process is diluted. The results seem to indicate that online reviews are generally attributed a very strong weight, but there are exceptions, precisely in the case of specific products or brands for which the buying decision has already been made; in these cases reviews play a small or insignificant role: “I do pay attention to reviews when I am going to buy expensive products, but if what I want is an iPhone I just buy the iPhone, even if it has bad reviews”.
3.2. Types of products about which students read reviews
Respondents considered that for certain types of products the reviews of other buyers are more important than for others. Thus, participants gave different answers when asked for which types of products they considered it more necessary to consult reviews:
Table 2: Percentage of respondents who consider it important to consult reviews on the following types of products (n=3638)
The previous table clearly shows that reviews about accommodation top the list (93.2%). Accommodation is a type of product that is often bought via e-commerce (51.6%) and whose turnover is close to 4,300 million euros (Urueña et al., 2014:28). The focus group participants acknowledged the importance of investigating before making this type of purchase. Normally, this is a high-involvement purchase, since the success of the choice depends on the quality of the experience in the tourist destination. In addition, several focus group participants stated that the images and descriptions offered by the hospitality industry tend to be unreliable, so they give much more credibility to the reviews published by anonymous users. 68% of the interviewees stated that they rely more on the images posted by users than on those posted by official sources. That is why the websites that offer this kind of reviews, such as TripAdvisor, which hosts more than 32 million “candid traveller photos”, are very popular.
Another type of product for which it is considered important to consult reviews (88.7%) is electronic devices. These are typically complex products and high-involvement purchases, for which reviews can offer relevant information that does not appear in commercial descriptions or the advice of experts.
A category that involves very large volumes of online purchases is airline tickets; however, only 31.4% of respondents considered it important to consult reviews in this regard. The conception that most users have of this type of product is probably closer to that of a commodity than to that of a comparison product.
It is striking that clothes and footwear are at the end of the list, given that they are one of the four types of products with the largest online sales volumes in Spain (Urueña et al., 2014:28), estimated at around 1,600 million euros, and that 49.6% of internet users have bought this type of product online. Consumers do not seem particularly interested in the reviews of anonymous buyers with regard to clothing, which suggests that the purchase of this type of product is more a matter of taste.
It seems surprising that the last positions of the list include categories such as books (33.6%), which were probably the first products to receive online reviews, tickets for concerts and shows (16.6%), and music (10.6%), given that these are products that are generally consumed only once and whose experience cannot be predicted without consuming them, which would make them ideal candidates for prior evaluation based on the reviews of others. We should bear in mind, however, that their online purchase is much less widespread than that of the preceding categories.
Based on the previous data and considering the points raised in the focus groups, it can be said that online reviews are mostly consulted in the case of high-involvement products whose buying decision does not depend on subjective elements (such as personal taste). Electronics and accommodation are clear examples of products whose objective features are not always adequately represented by advertising (battery life, pool size, sharpness of the screen, variety of choices at the buffet...) and for which online reviews are attributed an important weight in the buying decision.
3.3. Sources of reviews
On the other hand, there is the question of which sources of reviews students use. The following table summarises participants’ answers to the question “On which websites do you consult the reviews of other users?”
Table 3: Search methods used by respondents to access online reviews (n=3638)
Most respondents (69.7%) used search engines as their preferred method to find online reviews, followed closely by the use of online stores with a reviews section (67.1%). Also popular (53.2%) is the use of websites specialised in reviews (such as TripAdvisor, Ciao, Hostelworld, Metacritic, etc.).
Paradoxically, 42.7% of respondents stated that they seek consumer reviews on the brand’s official website, even though one might think that these kinds of reviews are biased or filtered. From the answers to another question included in the questionnaire, we know that 54.6% of respondents think it is positive that the reviews they read are posted on the official website, while 19.5% considered that this source makes reviews less interesting. This contrasts with the doubts raised about the reliability of these reviews in the focus groups: “I think that if the reviews are on the website of the company that sells the product, the company will delete the negative reviews”. It can be said that although some respondents considered it possible for brands to manipulate reviews, this idea is not held by the majority. Most respondents accept in good faith the reviews posted by anonymous users on the brand’s official website.
Social networks do not seem to be the preferred way to find consumer reviews, as only 19.2% of respondents stated using this method.
3.4. Analysis of reviews: general trust in websites
We asked students what factors make reviews more interesting or generate more interest in reading them. In general, most respondents (94.7%) agree that it is important that the website on which the reviews are posted is “trustworthy”. This is the feature on which there is the greatest agreement. In addition, 47.3% of respondents agree or strongly agree that when they are looking for reliable reviews they tend to turn to the same websites, while 16.5% disagree or strongly disagree with this.
Figure 2: Features that make reviews more interesting (green) or less interesting (red), according to respondents
It might be important to ask what exactly respondents mean by a trustworthy website, but this question is beyond the objective of this study, which focuses on the features of reviews. However, we can highlight a fact already mentioned: 54.6% of respondents believe that the presence of a review on the brand’s official website is something positive, while only 29.5% think the opposite. This seems to clearly indicate that in the majority of cases the official website is considered an interesting source of reviews.
On the other hand, the number of reviews on offer does seem to be a sign of the website’s credibility. Focus group participants gave statements like the following: “I get suspicious when a website or seller does not display reviews” and “If the number of reviews is very small, I think this has been the work of someone in the company”. In addition, 79.9% of respondents considered the existence of many reviews of the product from different authors to be something positive. The greater the volume of reviews, the more difficult it seems for any of the parties involved to control them.
Opinions seem to be divided when there is a similar amount of positive and negative reviews: 40.8% of respondents considered this to be negative, while 21.1% considered it to be positive. It is important to remember that the literature review indicated that negative reviews carry greater weight than positive ones (negativity bias).
We could summarise the position of students in the following way: in general terms, they believe that a product must have a high volume of reviews and that most of them should be positive. When the amounts of negative and positive reviews are similar, respondents suspect that something is wrong with the product or the website. However, although the ideal for students is to find few negative reviews, when their number becomes very small (13.9%) or they disappear entirely (24.4%), a minority of respondents becomes suspicious about the validity of the reviews. Based on the testimonies obtained from the focus groups, we could say that there is a small group of expert buyers who know that when there is no negative review of a product there has probably been some kind of manipulation. However, we cannot affirm that this is the general feeling, since, for example, 58.1% of the sample think the absence of negative reviews of the product is something positive.
One might think that there is a general tendency among students to prefer positive reviews. However, a quick look at the literature allows us to interpret this in a different way. Sen and Lerman (2007) differentiate the value of reviews based on the product’s utilitarian or experiential/hedonic nature. Utilitarian products, such as electronics and mobile phones, should be examined according to their varied technical specifications, which usually requires analysing lists of positive and negative points. Negative reviews are more useful for the final purchase decision in the case of these kinds of products than in the case of hedonic/experiential products. In the latter category, which includes accommodation, experiences are more subjective and the reader will mainly be looking for the positive benefits and experiences of other people. Therefore, negative reviews are less appreciated, since they are often attributed to the reviewer’s internal motivations.
This first general analysis, without going into individual reviews, seems to have great importance in the purchase decision. However, it does not seem to be the only factor taken into account, given that only 23.1% agree that the number of “stars” (the average rating) is more reliable than the reviews themselves, while 38.7% think otherwise. The general assessment carries weight, but the reading of individual reviews seems to provide something extra.
3.5. Analysis of individual reviews
The previous section addressed students’ overview of the available set of reviews and the website. This section examines which specific features make individual reviews more or less interesting.
Figure 3: Content features that make reviews more interesting (green) or less interesting (red), according to respondents
Content of the review. Almost unanimously (94.8%), the main feature that makes a review interesting is the explanation of “the reasons why the buyer is happy or not with the product/service”. Focus group participants pointed out that, in their opinion, reviews that only express a (positive or negative) valence have little credibility. The reviewer needs to specify his or her reasons for the review to be taken into account: “if no reason is given to explain why something is good or bad, I do not take it into consideration” and “it does not say why: brilliant!”.
The level of reasoning demanded from reviews varies according to their valence. When it comes to a positive review, readers are less demanding and do not expect the reasoning to be very elaborate. By contrast, if the valence is negative, readers expect the review to have elaborate content: “When something is ok I do not expect much reasoning, but when something is bad, I do want to know why it is wrong”.
In that sense, most participating university students (90.4%) consider that another feature that gives value to a review is the inclusion of “positive and negative aspects”. This is related to the previous idea, since students understand that a complex product cannot have an assessment in a single direction: “I trust reviews with negative and positive assessments more: they seem more objective, since not everything is great, and not everything is bad”.
Figure 4: Writing/spelling features that make reviews more interesting (green) or less interesting (red), according to respondents
Also in relation to content, virtually all students (89.6%) valued the recentness of the review very positively (“It is a recent review”). However, the time elapsed between the purchase and the posting of the review may affect its credibility. According to the focus groups, many students prefer the review to be posted after the author has tested the product, and are wary of reviews written the day after the purchase: “it gives me the impression that the product has just arrived and the buyer has written the review without having tried it”.
On the other hand, students value very positively those reviews that include “a personal experience or anecdote of the author with the product/service” (84.6%), that “provide details that the average consumer would otherwise not notice” (75.9%) and that “include photos of the product or service” (70.1%).
The main function of the review seems to be to add a layer of extra information different from that offered by advertising: personal experiences, photos from unconventional (or unflattering) angles, and more details than a flyer or a quick test in the store can provide.
Writing and spelling. Contrary to what one might expect from a generation that has grown up voluntarily transgressing most spelling and grammar rules, respondents are critical of reviews that contain voluntary or involuntary errors. Not all the norms that apply in other areas (WhatsApp, SMS, email, Facebook, etc.) are applicable to reviews of products and services.
One of the features that, according to students, gives value to reviews is that “it is well written” (86.5%; only 1.6% think this makes them less interesting). Good writing seems to work as an indicator of credibility because of the effort put into the task. Moreover, students also reject reviews with spelling errors: 48.9% think “apparently involuntary spelling errors” decrease the value of a review. This percentage increases to 60.1% when the errors are “apparently voluntary”. It seems that students do not consider it correct to use in reviews what is tolerated or even encouraged in other areas of the internet: “it bothers me a little when reviews contain spelling mistakes because I think the reviewers did not make an effort to write them well. This is not the same as a comment on Facebook: if you have taken some time to leave a comment, you should write it well”. There were other similar comments: “If the review does not contain spelling mistakes, I trust it more. I think that the person who wrote it made an effort to do it well. So, I give more value to this review than to others with grammatical errors”.
On the other hand, internet etiquette does seem to apply with regard to capitalisation. 53.1% considered that reviews written entirely in capital letters were less interesting. This is because in chat language, words written completely in capital letters are read as shouting: “I read these reviews as if I were shouting” and “I think the author is angry”. However, a selective use of capital letters in specific words to highlight ideas is considered positive by 54.7% of respondents and negative by 9.5%.
Finally, an element that seems sufficient to rule out a review without even reading it is that it “is a bad translation from another language”. 78.1% consider that this decreases the attractiveness of a review and can deter people from reading it: “the first thing I see is how well the review is written. If it has spelling mistakes, is not in my language or is a bad translation from another language, I do not take it into consideration.”
With regard to language, it is important to add that most Spanish-speaking respondents (63.9%) affirmed that the use of English does not make a review more or less attractive to them, while 26% believe that the use of English increases the value of the review and 10.2% consider that it decreases it. Something slightly different occurs with the Basque language. While respondents claim to read both Basque and English fluently (76.9% and 81.6%, respectively), the percentage of those who consider that the use of Basque does not make a review more or less attractive to them drops to 53.7%, and the percentage of respondents who consider that it increases the value of the review rises to 33.7% (while 12.6% think that it decreases it). Perhaps we can attribute this small difference to readers’ perception of proximity or similarity with an author who uses a language confined to a geographical area as small as the Basque Country. However, this small difference does not change the general trend: language does not seem to influence the interest generated by a review.
Figure 5: Author’s features that make reviews more interesting (green) or less interesting (red), according to respondents
The reviewer’s identity. Although the academic literature indicates considerable interest in the source of the message, which undoubtedly comes from the tradition of research on persuasion, we can conclude from the results of our survey that these variables generate much less concern than the variables mentioned so far. Thus, 64% of respondents consider that the fact that the author of a review is anonymous does not make the review more or less valuable (only 25.4% of respondents consider that the anonymity of the review reduces its attractiveness). It is understood that this is the default situation: most reviews are anonymous and users have to look at other elements to evaluate them.
Similarly, 65% of respondents are indifferent towards the personalisation attempts of anonymous reviewers, such as using a nickname or an image to identify themselves. Even the use of the author’s real name or picture is a matter of indifference to a large percentage of respondents (51.1%), although in this case 45.1% of respondents do consider that the use of these features increases the value of the review. Very similar rates of indifference are generated by the fact that the author is of the “same or similar age”: 53.5% of respondents are indifferent and 41.8% consider this makes the review more interesting.
Geographic proximity does not seem to be given any special consideration: 60.1% of respondents are indifferent to the fact that the reviewer is from the same country and 64.8% are indifferent to the fact that the reviewer is from the same geographical region. Only knowing the reviewer personally generates a clear interest in the review (in 56.2% of respondents), but even in this case 37.7% of respondents remain indifferent.
The reason for this apparent contradiction is that “all this data can be manipulated. It is very easy to deceive people”. Focus group participants mentioned that they do look for reviewers who are similar to them. However, the indicators they mentioned have to do with the content of the reviews, the issues they address, their judgements and their evaluation criteria: “you check whether the content is similar to what you think, whether it is related to what you are looking for in this place”.
This fits perfectly with the classical theory of persuasion, which states that the similarity that actually increases the persuasive capacity of a source is similarity in attitudes and values. In non-electronic environments, sharing certain physical characteristics (skin colour, sex, age, etc.) can have the same effect because the receiver infers that this physical resemblance indicates that certain attitudes are shared. In virtual environments, it is very easy to manipulate and control such identifying information, so similarity in attitudes has to be inferred from the content of the review.
3.6. Trustworthiness of reviews
As we can see, in general terms, advertising and, to a lesser degree, vendors/sellers are considered less reliable than online reviews.
The editorial content of the media, however, is considered more or less as reliable as anonymous online reviews. This is paradoxical if we take into account that the journalist or professional who writes a review of a particular product usually tends to be an expert on that product or has dedicated time to documenting the review. Perhaps anonymous reviews are perceived as more reliable than the opinions of journalists because the latter may be influenced by advertising/commercial interests.
Finally, as expected, the opinions of family and friends carry similar weight among themselves and, in general, more weight than online reviews. It should be noted that “credulous” users, who gave great weight to reviews (80% or more), tended to regard the opinions of family, friends and even the media as less important than anonymous online reviews, in comparison with sceptical users (30% or less).
Figure 6: Sources considered by respondents as less reliable (red), just as reliable (yellow) and more reliable (green) than anonymous online reviews
This study has allowed us to outline the way consumers evaluate online reviews. We have to acknowledge that our sample represents a sector of the population (students from the University of the Basque Country) that is geographically limited, highly educated and skilled in the use of the internet. Certainly, the results cannot be extrapolated to the entire population, but they offer us some clues about the overall process.
Results reflect a youth sector in which virtually everyone has experience with online purchases, although for most of them these purchases are rather sporadic. Almost all respondents consult the online reviews of other users to guide their purchases. Importantly, respondents consider that reviews are more important in certain product categories (accommodation, electronics) than in others (music, tickets for concerts, collectable items, etc.).
We have seen that there is a large group (almost one in two) that uses online reviews also to guide their offline purchases. Therefore, the flow goes in two directions: young people who consult online reviews and then go to a physical store, and young people who test a product in a physical store but then search for a better price online.
Once young users find a website with reviews, they analyse the information in two phases. In the first, they perform an evaluation of the whole set of reviews about the product, and what matters here is the total volume of reviews as well as their general sense (valence). In order for a product to be desirable, potential buyers expect it to have a clear majority of positive reviews, although a minority of negative reviews might also tip the balance during the second phase. In addition, a small group of respondents find it suspicious when there are no negative reviews, so the ideal situation for the user in the first phase would be to find a few negative reviews surrounded by a majority of positive reviews, but never a total absence of negative reviews.
This first overall analysis, without going into the detail of individual reviews, seems to have great importance in the purchasing decision making process. However, the process does not stop here but rather enters a second phase in which individual reviews are read. In fact, for most users the average scores of the product (“stars”), which should summarise this general feeling, are less interesting than the reviews themselves.
In this second phase, the group decides which individual reviews they will read in search of information they usually cannot find anywhere else: personal experiences with the product, information that is less biased than that offered by advertising, details of use that are deeper than those obtained from a test in a physical store, and photos from unconventional (or unflattering) angles. The reviews that contain any of these elements are more likely to be selected for detailed reading. In addition, young users appreciate reviews that are not completely positive or completely negative. They expect reviews to provide arguments for and against the product because not everything is usually black or white.
Their preference for such specific details is based on the fact that it is possible for a trait to be perceived as negative by one person and as positive or neutral by another. More than the valence of the review, young users are interested in the reasons behind this valence. Very typically, young users dedicate their effort to extracting those facts and submitting them to the filters of their own criteria and preferences before making a decision.
Paradoxically, two of the factors that can deter users from reading a review are its writing and spelling. The generation of digital natives, which is characterised by communicating with spelling mistakes, negatively assesses reviews that are badly written or contain spelling mistakes, whether these mistakes are voluntary or involuntary. Forms of writing that may be perceived positively in Facebook posts or WhatsApp messages are perceived as a lack of seriousness in online reviews. Nor do users tolerate reviews that are bad translations from another language, so websites that offer automatic translations of reviews to increase their reach should seek alternative options.
The persuasive features of the author are diluted in an environment in which it is very easy to fake data such as age, sex or geographical origin. However, users do look for these similarities, and we can assume that they will exert a greater persuasive effect on users, as in other areas of communication. Yet, instead of seeking these similarities in easily observable external physical features and environments, users look for them in the points of view, tastes, values and common criteria included in the content of the reviews.
Based on what they read in different reviews, users will form a general idea that will influence their purchase decision process, to a greater or lesser extent according to their individual preferences. Most of the sample said that the weight of these consulted reviews stands at about 40-70%. It is outside the scope of this study to identify the other factors taking part in this decision-making process, but it is obvious that factors such as price, delivery times and special offers play an important role.
When we compare the trust given to online reviews, we can see that they are a source of information whose importance is on par with the editorial content of the media, below the opinions of family and friends and above the brand’s advertising.
Finally, we encourage critical reflection on the process followed by consumers, who may be quite easily influenced (and, therefore, manipulated by entities with commercial interests), since it involves criteria that are relatively easy to control. It is striking that the majority of this group of consumers has not developed the most basic defence mechanisms, such as being critical of reviews posted on the official website of the brand selling the product in question and of reviews that give many positive details that are usually beyond the reach of the average consumer. As critical consumers, we must be aware that, while the reviews of other users offer us a layer of information that would otherwise be inaccessible, there are many interests behind them.
The good thing is that the internet is so broad that it is impossible for any company to manipulate all the sources of opinions. As consumers we can be sure that if there is anything that we should know about a product, we can just look for it and we will find out about it.
 Data from 2013 obtained from the Basque Youth Observatory. Available at http://www.gazteaukera.euskadi.eus/r58-7651x/es/contenidos/informacion/behatokia_tableu_hezkuntza/es_def/index.shtml [consulted on 15-6-2015]
 Figure taken from official data published in La universidad del País Vasco en cifras (UPV/EHU 2013-2014). Available at http://www.ehu.eus/zenbakitan/es.html
 Tripadvisor’s Factsheet (2015): http://www.tripadvisor.com/PressCenter-c4-Fact_Sheet.html [consulted on 15-6-2015]
P Ballantine & C Au Yeng (2015): “The effects of review valence in organic versus sponsored blog sites on perceived credibility, brand attitude, and behavioural intentions”. Marketing Intelligence & Planning, vol. 33, no 4.
J Berger, AT Sorensen & SJ Rasmussen (2010): “Positive effects of negative publicity: When negative reviews increase sales”. Marketing Science, vol. 29, no 5, pp. 815-827.
B Bickart & RM Schindler (2001): “Internet forums as influential sources of consumer information”. Journal of interactive marketing, vol. 15, no 3, pp. 31-40.
A Bruns (2009): “From prosumer to produser: Understanding user-led content creation”. Transforming Audiences 2009, 3-4 Sep, London.
RSJ Cabezudo, CC Izquierdo & JR Pinto (2012): “En busca de los evangelizadores digitales: Por qué las empresas deben identificar y cuidar a los usuarios más activos de los espacios de opiniones online”. Universia Business Review 35, pp. 14-31.
E Comor (2010): “Contextualizing and critiquing the fantastic prosumer: Power, alienation and hegemony”. Critical Sociology, pp. 104-142.
G Cui, HK Lui & X Guo (2012): “The effect of online consumer reviews on new product sales”. International Journal of Electronic Commerce, vol. 17, no 1, pp. 39-58.
C Dellarocas & R Narayan (2006): “A statistical measure of a population’s propensity to engage in post-purchase online word-of-mouth”. Statistical Science, vol. 21, no 2, pp. 277-285.
C Dellarocas, XM Zhang & NF Awad (2007): “Exploring the value of online product reviews in forecasting sales: The case of motion pictures”. Journal of Interactive marketing, vol. 21, no 4, pp. 23-45.
JA Chevalier & D Mayzlin (2006): “The effect of word of mouth on sales: Online book reviews”. Journal of marketing research, vol. 43, no 3, pp. 345-354.
PY Chen, S Dhanasobhon & MD Smith (2008): “All reviews are not created equal: The disaggregate impact of reviews and reviewers at amazon.com”. Heinz Research 55.
W Duan, B Gu & AB Whinston (2008): “The dynamics of online word-of-mouth and product sales-An empirical investigation of the movie industry”. Journal of retailing, vol. 84, no 2, pp. 233-242.
C Forman, A Ghose & B Wiesenfeld (2008): “Examining the relationship between reviews and sales: The role of reviewer identity disclosure in electronic markets”. Information Systems Research, vol. 19, no 3, p. 291-313.
P Kiecker & D Cowles (2002): “Interpersonal communication and personal influence on the Internet: A framework for examining online word-of-mouth”. Journal of Euromarketing, vol. 11, no 2, p. 71-88.
J Lee, DH Park & I Han (2008): “The effect of negative online consumer reviews on product attitude: An information processing view”. Electronic Commerce Research and Applications, vol. 7, no 3, pp. 341-352.
KT Lee & DM Koo (2012): “Effects of attribute and valence of e-WOM on message adoption: Moderating roles of subjective knowledge and regulatory focus”. Computers in Human Behavior, vol. 28, no 5, pp. 1974-1984.
YS Lim & B Van der Heide (2014): “Evaluating the Wisdom of Strangers: The Perceived Credibility of Online Consumer Reviews on Yelp”. Journal of Computer‐Mediated Communication, vol. 20, no 1, pp. 67-82.
B Lis (2013): “In eWOM we trust”. Business & information systems engineering, vol. 5, no 3, pp. 129-140.
Y Liu (2006): “Word of mouth for movies: Its dynamics and impact on box office revenue”. Journal of marketing, vol. 70, no 3, pp. 74-89.
Y Liu, D Dong & RE Burnkrant (2013): “Provide Consumers with What They Want on Word of Mouth Forums”. iBusiness, vol. 5, no 1A.
SM Martínez María-Dolores, JJ Bernal García & JP Mellinas (2012): “Los hoteles de la región de Murcia ante las redes sociales y la reputación online”. Revista de Análisis Turístico 13, pp. 1-10.
S Melián González, J Bulchand Gidumal & B González Lopez-Valcárcel (2011): “La participación de los clientes en sitios web de valoración de servicios turísticos. El caso de Tripadvisor”. Revista de Análisis Turístico, no 10.
SM Mudambi & D Schuff (2010): “What makes a helpful review? A study of customer reviews on Amazon.com”. MIS quarterly, vol. 34, no 1, p. 185-200.
Y Pan & JQ Zhang (2011): “Born unequal: a study of the helpfulness of user-generated product reviews”. Journal of Retailing, vol. 87, no 4, p. 598-612.
V Price, L Nir & JN Capella (2006): “Normative and informational influences in online political discussions”. Communication Theory, vol. 16, no 1, p. 47-74.
S Quaschning, M Pandelaere & I Vermeir (2014): “When Consistency Matters: The Effect of Valence Consistency on Review Helpfulness”. Journal of Computer‐Mediated Communication.
J Reichelt, J Sievert & F Jacob (2014): “How credibility affects eWOM reading: The influences of expertise, trustworthiness, and similarity on utilitarian and social functions”. Journal of Marketing Communications, vol. 20, no 1-2, pp. 65-81.
G Ritzer & N Jurgenson (2010): “Production, Consumption, Prosumption The nature of capitalism in the age of the digital ‘prosumer’”. Journal of consumer culture, vol. 10, no 1, pp. 13-36.
S Sen & D Lerman (2007): “Why are you telling me this? An examination into negative consumer reviews on the web”. Journal of Interactive Marketing, vol. 21, no 4, pp. 76-94.
A Urueña (Coor.), O Ureña, MP Ballestero, S Cadenas, R Castro & E Valdecasa (2014): “Estudio sobre Comercio Electrónico B2C 2013”. ONTSI, 2014. Available at http://www.ontsi.red.es/ontsi/sites/default/files/estudio_sobre_comercio_electronico_b2c_2013_edicion_2014.pdf (consulted on 9-7-2015)
D Tapscott & AD Williams (2006): Wikinomics: How Mass Collaboration Changes Everything. New York: Portfolio.
The Cocktail Analysis (2013): Informe de resultados. El comprador Online Español en 2012. Available at http://www.slideshare.net/TCAnalysis/el-comprador-online-espaol-en-2012 (Consulted on 15-6-2015)
N Thompson (2003): “More companies pay heed to their 'word of mouse' reputation”. New York Times, 23 June.
LM Willemsen et al. (2011): “‘Highly recommended!’ The content characteristics and perceived usefulness of online consumer reviews”. Journal of Computer‐Mediated Communication, vol. 17, no 1, pp. 19-38.
LM Willemsen, PC Neijens & F Bronner (2012): “The ironic effect of source identification on the perceived credibility of online product reviewers”. Journal of Computer‐Mediated Communication, vol. 18, no 1, pp. 16-31.
F Zhu & X Zhang (2010): “Impact of online consumer reviews on sales: The moderating role of product and consumer characteristics”. Journal of marketing, vol. 74, no 2, p. 133-148.
NN Ho-Dac, SJ Carson & WL Moore (2013): “The effects of positive and negative online customer reviews: do brand strength and category maturity matter?”. Journal of Marketing 77(6), pp. 37-53.
FS Sandes & AT Urdan (2013): “Electronic word-of-mouth impacts on consumer behavior: exploratory and experimental studies”. Journal of International Consumer Marketing 25(3), pp. 181-197.
SJ Doh & JS Hwang (2009): “How consumers evaluate eWOM (electronic word-of-mouth) messages”. CyberPsychology & Behavior 12(2), pp. 193-197.
How to cite this article in bibliographies / References
ME Olabarri-Fernández, S Monge-Benito, S Usín Enales (2015): “Young internet users’ evaluation of online consumer reviews. The case of the students from the University of the Basque Country (UPV/EHU)”. Revista Latina de Comunicación Social, 70, pp. 703 to 725.
Article received on 2 September 2015. Accepted on 29 October.