The Dark Side of Personalization

How Personal Data Collection and Data Use influence Privacy Concerns, Personalization and Willingness to Transact


Master's Thesis, 2012

58 Pages, Grade: 1,2


Excerpt


TABLE OF CONTENTS

List of Figures

List of Abbreviations

1 Introduction

2 Literature Background
2.1 Personalization vs. Customization
2.2 Personalization in an Online Marketing Environment

3 Conceptual Framework and Hypotheses
3.1 Privacy Concerns and CFIP
3.2 Control of Personal Data and CFIP
3.3 Data Gathering Method – Overt and Covert Approach
3.4 Use of Data – Authorized Primary Use or Unauthorized Secondary Use
3.5 Willingness to Transact
3.6 Customers’ Value of Online Personalization
3.7 Risk Beliefs of Online Personalization
3.8 Perceived Usefulness of Online Personalization
3.9 Moderating Role of Trust Beliefs between Use of Data and CFIP

4 Research Design
4.1 Data Collection Process
4.2 Sample Description
4.3 Questionnaire Design
4.4 Measures
4.5 Scale Validity and Reliability
4.6 Data Analysis and Results
4.7 Model Evaluation
4.8 Main Effects and Path Coefficients
4.9 Indirect Effects
4.10 Moderation Analysis

5 Discussion and Conclusion
5.1 Theoretical Implications
5.2 Managerial Implications
5.3 Limitations and Future Research

Appendices

References

List of Figures

Figure 1: Conceptual Model and Hypotheses

Figure 2: Convergent Validity and Cronbach’s Alpha

Figure 3: Main Effects and Path Coefficients

List of Abbreviations

illustration not visible in this excerpt

1 Introduction

“Online Privacy Fears Stoked By Google, Twitter, Facebook Data Collection Arms Race” (Menn, 2012), “Your E-Book Is Reading You” (Alter, 2012), “‘Instant personalization’ brings more privacy issues to Facebook” (Keane, 2010). These are only a few recent examples of media headlines dealing with the issue of online privacy and personalization. Scholars and managers have repeatedly stated the benefits of personalization, that is, targeting products and services to individual customers, which constitutes a key element of an interactive marketing strategy (Montgomery & Smith, 2009). To estimate the needs and wants of customers accurately, it is necessary to gather a significant amount of information. Privacy concerns may arise when personal information about customers is gathered. In that case, personalization can backfire by making clients reluctant to use the service or, even worse, by fostering a negative attitude towards the company. A recent survey by Opera Software (2011) found that Americans fear online privacy violations more than job losses or personal bankruptcy. This has induced politicians to introduce regulations and laws that address online privacy and safeguard consumers against online monitoring and intrusion into confidential user information (Los Angeles Times, 2011). However, online privacy remains a complicated issue for both managers and politicians, because new personalization technology emerges at a much faster pace than political regulations and guidelines.

Online users can only perceive privacy if they are able to control their personal data. Prior literature has identified two prerequisites that determine users’ control of information privacy: awareness of information collection and of information usage (Sheehan & Hoy, 1999). Over the last two decades, the effects of privacy concerns have been investigated comprehensively (Yuan, 2011). Although insights into the social-psychology perspective of customers’ information privacy concerns may be interesting for managers, most of the previous research focuses on general privacy concerns instead of on information privacy (Laudon & Traver, 2008). Even less literature sheds light on information privacy in the context of personalization. A deeper understanding of the dimensions of control and their consequences is essential to fully understand the process of online personalization. Especially if marketers intend to deliver the best online personalization experience to their customers, these settings are of vital interest. Therefore, the goal of this study is to provide answers to the following research questions:

- What are the effects of different data collection methods (overt/covert) regarding private customer data on privacy concerns?
- How do different purposes of data usage (primary/unauthorized secondary) influence concerns for information privacy, and is this relationship influenced by the trust that users have in the online merchant?
- Do users perform a risk-value analysis when personalization is applied by the online merchant?
- Do increased privacy concerns impact the evaluation of personalization and, as a consequence, users’ willingness to transact?

These insights will provide marketers and advertising strategists with practical advice on implementing an optimal personalization application for customers. Aiming to find answers to the research questions above, the paper will be structured as follows: First, the theoretical basis will be provided by a review of the existing literature and a conceptual model will be developed. As a conclusion of the literature review, hypotheses will be presented that create a basis for the experimental setting. Second, the conducted experiment, in which the model and the hypotheses were tested, will be described. Third, the results will be discussed and interpreted in detail. Finally, the last section will present theoretical and managerial implications, insights into limitations of the current study, and suggestions for future research.

2 Literature Background

2.1 Personalization vs. Customization

Personalization occurs when a company tailors its product or service offerings to the individual tastes of its customers based on personal and preference information (Chellappa & Sin, 2005). To understand the principle of ‘personalization’, it has to be clearly distinguished from ‘customization’, which is defined as the users’ ability to adapt certain criteria of the product offering in order to better fit their individual needs (Laudon & Traver, 2008). While in customization the wish to adapt is initiated by the customer, in personalization the marketer adapts the product or service to the customer by anticipating the customer’s needs and wants (Chaffey, 2007; Montgomery & Smith, 2009). As such, researchers refer to customization as pull marketing and to personalization as push marketing (Milne & Rohm, 2000). Hence, with respect to customization, users provide personal data voluntarily, while personalization requires firms to either ask for information or monitor and analyze customers’ behavior in order to adapt the product to individual needs.

2.2 Personalization in an Online Marketing Environment

Before personalization can take place, databases of collected personal consumer information have to be created. Data collection from online users is one of the fastest growing segments of the online business (Angwin, 2010; Sipior, Ward, & Mendoza, 2011). Companies are able to collect an enormous amount of customer information, which can be used to deliver online experiences tailored specifically to the needs of each individual user (Ashworth & Free, 2006; Culnan, 1993; Pitta, Franzak, & Laric, 2003). This information can be divided into two groups: personally identifiable information and anonymous information. The former refers to data that enables identification, contact and discovery of an individual, while the latter refers to data that describes the individual but cannot be used to identify a specific person. Further, three subcategories of information types exist: contact information (name, address, phone, e-mail address), profile information (age, ethnicity, gender), and behavioral information (browsing and purchase history on a single website or across multiple websites) (Chaffey, 2007; Chellappa & Sin, 2005; Federal Trade Commission, 2000). The Internet enables firms to gather user data via multiple methods, which can be broadly classified as overt and covert information gathering techniques. Overt collection occurs when firms ask users to answer direct questions. In this approach, users are aware of the fact that their data is gathered and used by the online company (Montgomery & Smith, 2009). In contrast, covert collection occurs when firms gather data without the users’ awareness, i.e., companies monitor and track the users’ online behavior without explicitly mentioning it to them (Xu, Luo, Carroll, & Rosson, 2011).
The information gathered by these techniques can be used by the online merchant to personalize the users’ online experience with features such as personalized advertisements or indices of product websites in order to enhance convenience and search efficiency (Laudon & Traver, 2008; Tam & Ho, 2006).

Despite the improved online experience, a conflict prevails in online personalization: Do users want to share private information in order to get an automatically adapted web service, or do they have too many concerns about the risk of losing anonymity online? Whether users want to cede control of personal information to benefit from a personalized web experience can be described as a risk-value calculation. With an increasing number of data gathering companies, users’ concerns for information privacy are on the rise. In a recent study, TRUSTe (2012), the biggest online consumer advisor, found that 91 percent of U.S. adults are worried about privacy online. Moreover, 53 percent do not disclose any personal information to businesses online because of mistrust, and a further 88 percent tend to avoid firms that do not protect consumer privacy. Therefore, in order to gain deeper insight into these privacy issues, more research has to focus on privacy concerns during the personalization process.

3 Conceptual Framework and Hypotheses

3.1 Privacy Concerns and CFIP

Since the landmark article ‘The Right to Privacy’ in 1890, in which privacy was formulated as the “right to be let alone” (Warren & Brandeis, 1890, p. 193), privacy concerns have soared almost every time a new technology emerged with improved possibilities to capture, save, analyze and exchange detailed personal data (Culnan, 1993). This was particularly the case when web-based marketing emerged in 1995, which reduced the cost of gathering private information to a minimum (Laudon & Traver, 2008). Over time, the definition of privacy changed or was supplemented with extensions. One of these is information privacy, a subset of general privacy (Laudon & Traver, 2008). Recent definitions have focused on the users’ ability to control the dissemination and use of their personal information (Phelps, Nowak, & Ferrell, 2000).

Smith et al. (1996) developed the construct of concerns for information privacy (CFIP). It is a second-order, formative construct that consists of four dimensions: collection, unauthorized secondary use, improper access and errors (Van Slyke, Shim, Johnson, & Jiang, 2006). The first dimension deals with concerns about companies gathering personal information. The second dimension centers on concerns about the use of the gathered data for a secondary purpose that has not been authorized by the user. Improper access, the third dimension, reflects the individual’s concerns about gathered data being accessible to unauthorized third persons. Finally, the fourth dimension, errors, centers on concerns about accuracy; that is, whether the gathered information really reflects the individual (Smith, Milberg, & Burke, 1996).
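The formative nature of CFIP can be illustrated with a minimal sketch: an overall score is built up from its four first-order dimension scores rather than reflected by them. The function and the values below are purely illustrative assumptions; the equal weighting is not the measurement model used by Smith et al. (1996) or in this study, where dimension weights would be estimated as part of the structural model.

```python
# Illustrative sketch only: CFIP is a second-order, formative construct, so an
# overall score is formed from its four first-order dimensions. The item
# values and the equal weights below are hypothetical assumptions and are not
# taken from Smith et al. (1996).

def cfip_score(collection, unauthorized_secondary_use, improper_access,
               errors, weights=None):
    """Aggregate the four dimension scores (e.g. 7-point Likert means)
    into a single formative index."""
    dims = [collection, unauthorized_secondary_use, improper_access, errors]
    if weights is None:
        weights = [0.25] * 4  # equal weighting as a simplifying assumption
    return sum(w * d for w, d in zip(weights, dims))

# A respondent who is especially concerned about unauthorized secondary use:
print(cfip_score(4.0, 6.5, 5.0, 3.5))  # 4.75
```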

Studies on the effect of privacy concerns and personalization have not included CFIP in their models. However, although research on the effects of CFIP in the context of e-commerce and personalization has been limited, many studies have focused on similar issues relevant to the given CFIP dimensions. The antecedent factors influencing CFIP that prior research has analyzed so far can be grouped into four categories: individual factors, social and legal norms, transaction and corporate factors, and information characteristics (Yuan, 2011). Research has also analyzed the consequences of privacy concerns; these findings can be grouped into beliefs, attitudes, and intended and actual behavior (Yuan, 2011). A detailed overview of the prior research on CFIP and its findings can be found in appendices A and B. Summing up, privacy concerns influence human thoughts, opinions and actions. In general, privacy is seen as a precious good, which consumers value more than the disclosure of information. Relating this to the present study, CFIP should also influence users’ perception of personalization: Personal values, such as privacy concerns, affect the value a user associates with the result of personalization (Awad & Krishnan, 2006). Therefore, a higher level of privacy concerns should result in a lower value of the personalized service. Due to the nature of personalization, users have to give up a certain amount of privacy so that the merchant is able to adapt the offering to their individual taste (Chellappa & Sin, 2005). However, prior research on loss aversion states that people overvalue what they already have compared to things that they might attain (Novemsky & Kahneman, 2005). In other words, gains are valued less than losses. Hence, users should value the advantages of personalization less than the lost personal privacy. Consequently, CFIP should have a negative impact on the value of personalization. More formally:

H1: Online user’s Concerns for Information Privacy have a negative influence on Customer’s Value of Online Personalization.

Not only the value of personalization but also the risks related to it are affected by CFIP. Several privacy-related risks accompany e-commerce and personalization, for example the risk of privacy loss due to data collection, the risk of improper access by third parties, or the risk of unauthorized secondary use of the information (Van Slyke, Shim, Johnson, & Jiang, 2006). Hence, if concerns for information privacy increase, users should experience more risk. The higher the individual’s concerns about information privacy, the more risk is perceived during personalization. Stated as a hypothesis:

H2: Online user’s Concerns for Information Privacy have a positive influence on Risk Beliefs of Online Personalization.

3.2 Control of Personal Data and CFIP

In the current literature, privacy is widely perceived as the control of information. According to Sheehan and Hoy (2000), two dimensions determine the users’ control of information privacy: awareness of information collection and information usage. While awareness simply refers to whether users know that private data is gathered about them, usage refers to how and for what purpose the gathered data is deployed. Surprisingly, research has not yet concentrated on the collection and use of data as antecedents of CFIP in the context of personalization. A deeper analysis will provide a better understanding of the antecedents of CFIP and personalization, helping to maximize user satisfaction and website opportunities.

3.3 Data Gathering Method – Overt and Covert Approach

Nowadays, the use of customer information is one of the most important success factors in e-business. Nevertheless, the challenge of accumulating this knowledge in a way customers feel comfortable with is still prevalent (Awad & Krishnan, 2006). Personal information can be gathered via two methods, overt and covert, i.e., with or without the knowledge of the user. Montgomery and Smith (2009) define this overt/covert approach as active (informing the consumer or posing direct questions) and passive (making inferences based on transaction, clickstream or e-mail data) learning about customers. The type of data gathering has a direct connection to the control of personal data. Knowledge that a website is collecting information about users for personalization, i.e., an overt approach, is therefore an elementary prerequisite for control. Conversely, if users do not know that data about them is being collected, they have no control over it.

Research that includes an overt vs. covert approach in combination with online personalization has been very limited. Xu et al. (2009) analyzed, in a study on personalized mobile marketing, how covert or overt personalization influences the perceived benefits and risks of information disclosure. They found that personalization increases the perceived value of information disclosure under both collection methods and that perceived risk [value] has a negative [positive] impact on the value of information disclosure. Most striking is that personalization is only positively related to the perceived risk of information disclosure when the data is gathered covertly, because there was no significant increase in perceived risk in the overt condition.

Simply telling the user about the data collection process might show similar results to privacy seals or privacy policies. Several studies found that informing users about how the web merchant deals with privacy helps to decrease privacy concerns (Andrade, Kaltcheva, & Weitz, 2002; Nam, Song, Lee, & Park, 2006; Wirtz, Lwin, & Williams, 2007). Informing the user about the data collection enables an active two-way communication between the merchant and the user, which is an important antecedent of trust (O'Malley, Patterson, & Evans, 1997; Pitta, Franzak, & Laric, 2003). A transparent, overt approach ensures that users know about the data being gathered, feel more in control of the gathering process, and hence trust the web partner more. The website might thus be able to decrease concerns by letting users participate in the personalization process.

Furthermore, users might feel a loss of privacy and even harm or betrayal when they find out that data about them was gathered without their consent (Cespedes & Smith, 1993). A personalized interface is the result of this data gathering process, so the nature of personalization allows users to realize that the website gathered private information about them. Therefore, consumers might perceive a breach of trust if the website gathers data covertly and automatically shows a highly personalized interface, which results in higher privacy concerns (Montgomery & Smith, 2009).

To conclude, if users know about the gathering of their personal data, they can influence the process, decide whether they want to disclose their data, and should gain more control over the personalization. Therefore, the following hypotheses are introduced:

H3a: Overt Data Collection has a negative impact on online user’s Concerns for Information Privacy.

H3b: Covert Data Collection has a positive impact on online user’s Concerns for Information Privacy.

3.4 Use of Data – Authorized Primary Use or Unauthorized Secondary Use

The second dimension of users’ control of information privacy is information usage (Sheehan & Hoy, 1999). Two key types of data usage exist: primary use and secondary use. Primary use of information can be defined as a company’s use of the accumulated personal data, previously authorized by the customer, to improve sales and customer service, inventory and personnel planning, and other corporate operations (Culnan, 1993). Secondary use represents the use of the same information for a purpose different from the original reason for collecting it (Culnan, 1993). This is, in most cases, not authorized by the user. One reason why companies put personal information to unauthorized use is that they can gain a strategic advantage through effective secondary use (Porter & Millar, 1985). Analyzing and managing customer data is a critical success factor for all e-businesses (Awad & Krishnan, 2006). Unauthorized secondary use can happen internally, within different departments of the data gathering company, or externally, through disclosure of the personal data to a third party (Van Dyke, Midha, & Nemati, 2007). However, although the difference between primary and secondary use of information is well-defined in theory, in practice this distinction does not always exist. Websites gather data to enhance the user’s online experience; nevertheless, no general rules exist that regulate which methods advance and which debase that experience (Sipior, Ward, & Mendoza, 2011).

Companies can even obtain customer data from firms that specialize in gathering personal data or from online advertising networks like AdWords or DoubleClick (Laudon & Traver, 2008; Liao, Liu, & Chen, 2011). To get an idea of the dimensions involved: an average U.S. resident is profiled in about 100 different databases (VanHoose, 2003). Furthermore, about one quarter of all websites participate in ‘cookie sharing’[1] among companies (Sipior, Ward, & Mendoza, 2011). From a legal point of view, secondary use of information is valid, and it is common practice online to share data between firms (O'Malley, Patterson, & Evans, 1997). However, if customers realize that their private data is not kept confidential, trust declines and willingness to disclose information decreases (Pitta, Franzak, & Laric, 2003). This act of data collection can create an image of corporate ‘dataveillance’, which leads to increased privacy concerns (Ashworth & Free, 2006; Culnan, 1993; Foxman & Kilcoyne, 1993).

Furthermore, it is not only the release of private data that fosters privacy concerns, but also the opportunistic behavior of companies: it is common practice for companies to sell private information and make money on that data. Customers experience a misuse of their private information while not even getting a share of the earnings (Dinev & Hart, 2006). If new technology is applied opportunistically, personal privacy will be threatened, which can result in further anxieties and a feeling of unfair treatment (McCreary, 2008; Nowak & Phelps, 1992; Preston, 2004).

Another reason for increased privacy concerns refers to the user’s control of data. As long as a user keeps his private data to himself, he has total control. This situation changes once he discloses his data to a second party; nevertheless, it is still obvious who has access to the data. However, controlling access to and use of the data becomes far more complicated when a third party comes into play: the user can easily lose track of which parties have access to his private data; moreover, the third party can spread the information even further. In this situation, the customer no longer has any control over his personal data, because it is not transparent with whom the data has been shared. Hence, this process should result in higher privacy concerns.

Summing up, the more parties have unauthorized access to the user’s private data, the more privacy concerns should evolve. Therefore, the following hypothesis can be formulated:

H4a: Primary Use of Data has a negative impact on online user’s Concerns for Information Privacy.

H4b: Secondary Use of Data has a positive impact on online user’s Concerns for Information Privacy.

3.5 Willingness to Transact

The Internet enables users to conduct a ‘second exchange’ (Li, Sarathy, & Xu, 2010), whereby consumers can decide to ‘pay’ with their personal data in order to acquire personalized products or services online. In this study, this decision is represented by the behavioral intention of users to participate in a personalization process. The dependent variable represents the user’s willingness to disclose private information in order to transact on the Internet. When using personalization, users engage in a ‘privacy calculus’, i.e., a cost-benefit analysis between the value and the risks of personalization (Malhotra, Kim, & Agarwal, 2004; Li, Sarathy, & Xu, 2010; Xu, Zhang, Shi, & Song, 2009). Therefore, willingness to transact represents the individual’s assessment of the utility of the information disclosure weighted against the potential risks. Customers will accept a loss of privacy as long as the disclosure of their private information achieves a positive net result (Chellappa & Sin, 2005).
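The risk-value trade-off described above can be sketched as a simple calculation. The linear form, the risk_weight parameter, and the zero threshold are illustrative assumptions only; the study itself measures willingness to transact with survey scales rather than computing it.

```python
# Illustrative sketch of the 'privacy calculus': the user weighs the perceived
# value of personalization against its perceived risks and is willing to
# transact only if the net result is positive (Chellappa & Sin, 2005).
# The linear form and the risk_weight parameter are hypothetical.

def willing_to_transact(perceived_value, perceived_risk, risk_weight=1.0):
    """Return True if the risk-value calculus yields a positive net benefit."""
    net_benefit = perceived_value - risk_weight * perceived_risk
    return net_benefit > 0

# A user who values personalization (5.0) more than the perceived risk (3.0):
print(willing_to_transact(5.0, 3.0))                    # True
# Higher privacy concerns inflate the weight placed on risk (cf. H2):
print(willing_to_transact(5.0, 3.0, risk_weight=2.0))   # False
```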

3.6 Customers’ Value of Online Personalization

Individuals are likely to give up a degree of privacy if they get something in return to compensate for it (Xu, Luo, Carroll, & Rosson, 2011). Moreover, the perceived value of the outcome of the information disclosure is potentially related to the willingness to give information. The value of online personalization and its advantages for users can be grouped into three categories: product, convenience and quality of service, and relationships. The most obvious advantage is the adapted product or service the online company offers. It is aligned to meet the user’s specific preferences and needs by incorporating the data the user supplied (Chellappa & Sin, 2005) and therefore has a positive influence on willingness to transact. Ansari and Mela (2003) showed that click-through rates could be increased by 62 percent when using personalized e-mails. The second group of advantages is convenience and quality of the related service: a personalized website decreases transaction and search time, because customer preferences are already known and do not have to be entered each time the customer returns to the website. This implies a more convenient service for the customer, and perceived usefulness is increased (Davis, 1989; Hui, Teo, & Lee, 2007; Lee & Lee, 2009; Wolfinbarger & Gilly, 2001). Furthermore, the website will only provide online services, advertisements and recommendations that are relevant and anticipated for the user (Laudon & Traver, 2008; Phelps, Nowak, & Ferrell, 2000; Vesanen, 2007). Howard and Kerin (2004) found that simply inserting the user’s name in personalized product recommendations significantly increased response rates. Hence, this process will increase the quality of decision making (Lee & Lee, 2009).
The third group of benefits is relationship related: an interactive two-way communication is enabled and a relationship is created between the company and the customer (Rayport & Jaworski, 2003; Sheehan & Hoy, 2000; Smith, Milberg, & Burke, 1996). This constitutes an intangible benefit for the customer, which reduces the perceived risk and offers an intrinsic social benefit of relationship participation (O'Malley, Patterson, & Evans, 1997). Summing up, it can be stated that personalization increases customer satisfaction and customer value (Ariely, 2000; Montgomery & Smith, 2009; Turban, 2008; Vesanen, 2007). Therefore, the following hypothesis can be drawn:

[...]


[1] Cookies can be shared and combined with other cookies, public records, or survey data on the basis of an individual’s unique identifier, for example the social security number.

Details

Title: The Dark Side of Personalization
Subtitle: How Personal Data Collection and Data Use influence Privacy Concerns, Personalization and Willingness to Transact
College: Maastricht University
Course: International Business - Strategic Marketing
Grade: 1,2
Author: Jörg Ziesak
Year: 2012
Pages: 58
Catalog Number: V201663
ISBN (eBook): 9783656321576
ISBN (Book): 9783656322443
File size: 2031 KB
Language: English
Keywords: Online Personalization, One-to-one Marketing, Privacy, Privacy Concerns, Data Collection, Private Customer Data, Data Use, Personalization, Value of Personalization, Risks and Benefits of Personalization, CFIP, Concerns for Information Privacy, Information Privacy, Customization, Personalisation, Customisation, Willingness to Transact, Usefulness of Personalization, Amazon Mturk, Amazon Mechanical Turk, Unauthorized Secondary Use, Risk-Value Analysis, Willingness to share information
Quote paper: Jörg Ziesak (Author), 2012, The Dark Side of Personalization, Munich, GRIN Verlag, https://www.grin.com/document/201663
