Celebrity endorsements are often sought to influence public opinion. We ask whether celebrity endorsement per se has an effect beyond the fact that celebrities' statements are seen by many, and whether on net their statements actually lead people to change their beliefs. To do so, we conducted a nationwide Twitter experiment in Indonesia with 46 high-profile celebrities and organizations, with a total of 7.8 million followers, who agreed to let us randomly tweet or retweet content promoting immunization from their accounts. Our design exploits the structure of what information is passed along a retweet chain on Twitter to parse reach versus endorsement effects. Endorsements matter: tweets that users can identify as originating from a celebrity are far more likely to be liked or retweeted than similar tweets seen by the same users but without the celebrities' imprimatur. By contrast, explicitly citing sources in the tweets actually reduces diffusion. By randomizing which celebrities tweeted when, we find suggestive evidence that overall exposure to the campaign may influence beliefs about vaccination and knowledge of immunization-seeking behavior by one's network. Taken together, the findings suggest an important role for celebrity endorsement.
The DeGroot model has emerged as a credible alternative to the standard Bayesian model for studying learning on networks, offering a natural way to model naive learning in a complex setting. One unattractive aspect of this model is the assumption that the process starts with every node in the network having a signal. We study a natural extension of the DeGroot model that can deal with sparse initial signals. We show that an agent's social influence in this generalized DeGroot model is essentially proportional to the number of uninformed nodes who will hear about an event for the first time via this agent. This characterization result then allows us to relate network geometry to information aggregation. We identify an example of a network structure where essentially only the signal of a single agent is aggregated, which helps us pinpoint a condition on the network structure necessary for almost full aggregation. We then simulate the modeled learning process on a set of real-world networks; for these networks there is relatively little information loss. We also explore how correlation in the location of seeds can exacerbate aggregation failure. Simulations with real-world network data show that with clustered seeding, information loss can be substantial.
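The classical DeGroot benchmark that this abstract generalizes can be sketched in a few lines: agents repeatedly replace their belief with a weighted average of their neighbors' beliefs, and in the limit every agent holds a consensus belief that weights initial signals by network centrality. The four-node line network and the single seeded signal below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Adjacency (with self-loops) for a 4-node line network -- illustrative.
A = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
# Row-stochastic listening matrix: each agent averages over neighbors.
T = A / A.sum(axis=1, keepdims=True)

beliefs = np.array([1.0, 0.0, 0.0, 0.0])  # only node 0 starts with a signal
for _ in range(200):
    beliefs = T @ beliefs  # DeGroot update: repeated neighborhood averaging

# Consensus equals the centrality-weighted average of initial beliefs;
# for this symmetric A the weights are degree shares (2,3,3,2)/10,
# so all beliefs converge to 0.2.
```

In the paper's generalized model most nodes start uninformed, and an agent's influence instead tracks how many uninformed nodes first hear of the event through her; the sketch above is only the fully-seeded baseline.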
A policy debate centers on whether news aggregators such as Google News decrease or increase traffic to online news sites. One side of the debate, typically espoused by publishers, views aggregators as substitutes for traditional news consumption because aggregators' landing pages provide snippets of news stories and therefore reduce the incentive to click on the linked articles. Defenders of aggregators, on the other hand, view aggregators as complements because they make it easier to discover news and therefore drive traffic to publishers. This debate has received particular attention in the European Union, where two countries, Germany and Spain, enacted copyright reforms that allow newspapers to charge aggregators for linking to news snippets. In this paper, we use Spain as a natural experiment because Google News shut down altogether in response to the reform in December 2014. We compare the news consumption of a large number of Google News users with a synthetic control group of similar non-Google News users. We find that the shutdown of Google News reduces overall news consumption by about 20% for treatment users, and it reduces page views on publishers other than Google News by 10%. This decrease is concentrated among small publishers, while large publishers do not see significant changes in their overall traffic. We further find that when Google News shuts down, its users are able to replace some but not all of the types of news they previously read. Post-shutdown, they read less breaking news, hard news, and news that is not well covered on their favorite news publishers. These news categories explain most of the overall reduction in news consumption and shed light on the mechanisms through which aggregators interact with traditional publishers.
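The synthetic-control comparison used here can be illustrated with a toy example: pre-shutdown news consumption of treated users is matched by a weighted combination of control users, and the post-period gap between actual and synthetic consumption estimates the effect. All numbers and dimensions below are made up, and the unconstrained least-squares fit is a simplification; the actual synthetic-control method restricts weights to be non-negative and sum to one.

```python
import numpy as np

# Toy weekly news-consumption data (not the paper's): 12 pre-shutdown
# weeks for 8 candidate control users and one treated user group whose
# consumption is (by construction) a noisy mix of three controls.
rng = np.random.default_rng(42)
control_pool = rng.uniform(5.0, 15.0, size=(12, 8))  # weeks x controls
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0])
treated = control_pool @ true_w + rng.normal(0.0, 0.1, size=12)

# Fit weights on the pre-period (plain least squares here; the real
# estimator constrains the weights to the simplex).
w, *_ = np.linalg.lstsq(control_pool, treated, rcond=None)
synthetic = control_pool @ w

# Pre-period fit should be tight; applied to post-shutdown weeks, the
# gap between actual and synthetic consumption estimates the effect.
gap = treated - synthetic
```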
We combine survey responses, network data, and medical records in order to examine how friends affect the decision to get vaccinated against influenza. The random assignment of undergraduates to residential halls at a large private university generates exogenous variation in exposure to the vaccine, enabling us to credibly identify social effects. We find evidence of positive peer influences on health beliefs and vaccination choices. In addition, we develop a novel procedure to distinguish between different forms of social effects. Most of the impact of friends on immunization behavior is attributable to social learning about the medical benefits of the vaccine.
This paper uses a microfinance field experiment in two Lima shantytowns to measure the relative importance of social networks and prices for borrowing. Our design randomizes the interest rate on loans provided by a microfinance agency as a function of the social distance between the borrower and the cosigner. This design effectively varies the relative price (interest rate differential) of having a direct friend versus an indirect friend as a cosigner. After loans are processed, a second randomization relieves some cosigners of their responsibility. These experiments yield three main results. (1) As emphasized by sociologists, connections are highly valuable: having a friend cosigner is equivalent to 18 percent of the face value of a six-month loan. (2) While networks are important, agents do respond to price incentives and switch to a non-friend cosigner when the interest differential is large. (3) Relieving the cosigner of responsibility reduces repayment for direct friends but has no effect otherwise, suggesting that different social mechanisms operate between friends and strangers: non-friends cosign known high types, while friends also accept low types because of social collateral or altruism.
We seed noisy information to members of a real-world social network to study how information diffusion and information aggregation jointly shape social learning. Our environment features substantial social learning. We show that learning occurs via diffusion that is highly imperfect: signals travel only up to two steps in the conversation network, and indirect signals are transmitted noisily. We then compare two theories of information aggregation: a naive model in which people double-count signals that reach them through multiple paths, and a sophisticated model in which people avoid double-counting by tagging the source of information. We show that to distinguish between these models of aggregation, it is critical to explicitly account for imperfect diffusion. When we do so, we find that our data are most consistent with the sophisticated tagged model.
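The two aggregation rules being compared can be caricatured with a tiny example. An agent receives a list of (source, signal) arrivals in which one source's signal reaches her twice via different conversation paths; a naive agent sums everything she hears, while a sophisticated agent who tags sources keeps only one signal per source. The names and numbers are purely illustrative.

```python
# (source, signal) pairs arriving at one agent; alice's +1 signal
# arrives twice via two different paths in the conversation network.
arrivals = [("alice", +1), ("bob", -1), ("alice", +1)]

def naive_count(arrivals):
    # Double-counts signals that arrive through multiple paths.
    return sum(signal for _, signal in arrivals)

def tagged_count(arrivals):
    # Tags each signal by its source and keeps one copy per source.
    return sum(dict(arrivals).values())
```

Here the naive rule yields a net signal of +1 while the tagged rule yields 0, so the two models would push the agent's belief in different directions; separating them empirically requires knowing how often duplicated paths actually deliver a signal, which is why imperfect diffusion matters.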
The division of labor first increased during industrialization and then decreased again after 1970 as job roles expanded. In this paper, we explain these trends in the organization of work through a simple model resting on two minimal assumptions: (a) machines require standardization to exploit economies of scale, and (b) more customized products are subject to trends and fashions, which make production tasks less predictable and a strict division of labor impractical. The model predicts capital-skill substitutability during industrialization and capital-skill complementarity in the maturing industrial economy: at the onset of industrialization, the market supports only a small number of generic varieties, which can be mass-produced under a strict division of labor. Then, thanks to productivity growth, niche markets gradually expand, producers eventually move into customized production, and the division of labor decreases again. We test our model by exploiting the time lags in the introduction of bar-coding across three-digit SIC manufacturing industries in the U.S. We find that both investments in computers and the adoption of bar-coding have led to skill-upgrading. However, consistent with our model, bar-coding has affected mainly the center of the skill distribution by shifting demand away from the high-school educated toward the less-than-college educated.
Stylized evidence suggests that people process information about their own ability in a biased manner. We provide a precise characterization of the nature and extent of these biases. We directly elicit experimental subjects' beliefs about their relative performance on an IQ quiz and track the evolution of these beliefs in response to noisy feedback. Our main result is that subjects update as if they misinterpret the information content of signals, but then process these misinterpreted signals like Bayesians. Specifically, they are asymmetric, over-weighting positive feedback relative to negative, and conservative, updating too little in response to both positive and negative feedback. These biases are substantially less pronounced in a placebo experiment where ego is not at stake, suggesting they are motivated rather than cognitive. Consistent with Bayes' rule, on the other hand, updating is invariant to priors (and over time) and priors are sufficient statistics for past information. Based on these findings, we build a model that theoretically derives the optimal bias of a decision-maker with ego utility and show that it naturally gives rise to both asymmetry and conservatism as complementary strategies in self-confidence management.
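The finding that subjects "misinterpret signals but then process them like Bayesians" can be sketched as a quasi-Bayesian update in which the likelihood ratio is dampened (conservatism) and dampened more for negative than for positive feedback (asymmetry). The functional form and every parameter value below are illustrative choices of mine, not the paper's estimates.

```python
def biased_update(prior, positive, p=0.75, gamma_pos=0.6, gamma_neg=0.3):
    """Quasi-Bayesian update of the belief P(high ability).

    `p` is the chance of positive feedback for a high-ability type.
    gamma < 1 dampens the likelihood ratio (conservatism);
    gamma_pos > gamma_neg over-weights positive feedback (asymmetry).
    All parameter values are illustrative, not estimated.
    """
    lr = p / (1 - p) if positive else (1 - p) / p
    gamma = gamma_pos if positive else gamma_neg
    odds = prior / (1 - prior) * lr ** gamma  # distorted odds update
    return odds / (1 + odds)
```

Starting from a 50-50 prior with these parameters, a positive signal moves the belief up by more than a negative signal moves it down, and both moves fall short of the full Bayesian posteriors (0.75 and 0.25). Because the distortion sits entirely in the likelihood ratio, updating remains invariant to the prior, matching the paper's Bayesian-consistency findings.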
This paper analyzes the impact of news aggregators on the quantity and composition of internet news consumption. In principle, news aggregators can be a substitute or a complement to the news outlets that invest in the creation of news stories. A policy debate centers on the decrease in the incentives for news creation that results if readers choose to consume their news through aggregators without clicking through to the news websites or generating any revenue for the outlets. This paper provides a case analysis of an example where Google News added local content to its news home page for users who chose to enter their location. Using a dataset of user browsing behavior, we compare users who adopt the localization feature to a sample of control users who are similar to the treatment users in terms of recent internet news consumption. We find that users who adopt the localization feature subsequently increase their usage of Google News, which in turn leads to additional consumption of local news. Users also navigate directly to the news sites they have discovered, further increasing their local news consumption. The increase in local news consumption diminishes over time, however, and in the longer run most of the additional local news consumption derives from increased Google News usage. Patterns of news consumption change: users read a wider variety of outlets, more outlets that are new to them, and a larger fraction of their news “home page” views come from Google News rather than the home pages of other news outlets. Thus, the inclusion of local content by Google News had mixed effects on local outlets: it increased their traffic, especially in the short run, but it also increased the reliance of users on Google News for their choices of news, and increased the dispersion of user attention across outlets.
This paper develops a model of delayed network effects to explore the curious dynamics of competition in the local telephone market between AT&T and the 'Independents' at the turn of the century. In the early years of telephone diffusion, local service competition between these two non-interconnected networks became widespread, but declined rapidly when diffusion rates started to slow down after 1907. The analysis is based on the observation that urban markets subdivide into social 'islands' along geographical and socio-economic dimensions: users are more likely to communicate with subscribers 'inside' their island than with those 'outside' it. A simple dynamic model demonstrates how minority networks can thrive and preserve their market share at a low state of development when islands form essentially independent niche markets. As the industry matures, these niches 'grow' together and standardization occurs. The implications of the model are confirmed using a small panel data set of US cities.
We study the formation of social capital in an environment where specialized agents have frequent, diverse needs. This limits the potential of purely bilateral cooperation because the interaction frequency between any two particular agents is low. Such interactions usually invite defection by both sides unless agents are altruistic or there exist information aggregation institutions that facilitate the use of group punishments. In a companion paper, Gentzkow and Mobius (2002) develop a theory of how agents can cooperate even in a limited-information environment as long as they can relay requests for help. This mechanism creates networks with long-term relationships which are continuously recombined to satisfy short-term needs. We test the theoretical predictions by conducting an experiment with two treatments: in the first treatment, agents can only utilize direct 'favors', while the second treatment adds the ability to provide indirect 'favors' as well. Our results help us understand how agents form and sustain weak links.
Historically, commodity money preceded fiat money. Standard search-theoretic models of money such as Kiyotaki and Wright (1989) cannot explain this transition because of multiple equilibria: a small infusion of fiat money with superior intrinsic characteristics into a commodity money equilibrium is always valued if agents believe in its acceptability. We propose a natural extension of the standard model in order to break this indeterminacy. We assume (1) that agents derive positive utility from consuming even non-favorite commodities and (2) that agents have to consume regularly. We find that agents accept only commodity money if search frictions are large. Fiat money can become valuable in sufficiently advanced economies with small search frictions.
I analyze a simple evolutionary model of residential segregation based on decentralized racism which extends Schelling's (1972) well-known tipping model by allowing for local interaction between residents. The richer set-up not only explains the persistence of ghettos, but also provides a mechanism for the rapid transition from an all-white to an all-black equilibrium. On one-dimensional streets, segregation arises once a group becomes sufficiently dominant in the housing market. However, the resulting ghettos are not persistent, and periodic shifts in the market can give rise to "avenue waves". On two-dimensional inner-cities, on the other hand, ghettos can be persistent due to the "encircling phenomenon" if the majority ethnic group is sufficiently less tolerant than the minority. I review the history of residential segregation in the US and argue that my model can explain the rapid rise of almost exclusively black ghettos at the beginning of the 20th century. For the analysis of my model I introduce a new technique to characterize the medium- and long-run stochastic dynamics. I show that clustering predicts the behavior of large-scale processes with many agents more accurately than standard stochastic stability analysis, because the latter concept overemphasizes the 'noisy' part of the stochastic dynamics.
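The tipping dynamics on a one-dimensional street can be caricatured with a threshold model: each resident looks at the neighbors within a small radius and switches group when the other group's local share exceeds her tolerance. In the sketch below the majority (type 0) is less tolerant than the minority (type 1), so a small enclave at one end triggers a wave that sweeps the whole street, a stylized "avenue wave". Residents flip type rather than relocate, and all tolerances and parameters are illustrative simplifications, not the paper's exact dynamics.

```python
def tip_step(street, tol=(0.49, 0.74), radius=2):
    # One synchronous round: a type-t resident switches type when the
    # share of the other group among neighbors within `radius` exceeds
    # the tolerance tol[t]. Type 0 (majority) is less tolerant here.
    n = len(street)
    new = list(street)
    for i in range(n):
        nbrs = [street[j]
                for j in range(max(0, i - radius), min(n, i + radius + 1))
                if j != i]
        other_share = sum(1 for x in nbrs if x != street[i]) / len(nbrs)
        if other_share > tol[street[i]]:
            new[i] = 1 - street[i]
    return new

# A small group-1 enclave at one end of an otherwise all-0 street.
street = [1, 1, 1] + [0] * 17
for _ in range(30):
    street = tip_step(street)
# The frontier advances about one resident per round, so by now the
# entire street has tipped to type 1.
```

Raising the minority's tolerance gap further (or making the majority more tolerant) stops the wave, mirroring the abstract's condition that persistence hinges on the relative tolerance of the two groups.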