Open Access (CC BY 4.0). Published by De Gruyter, August 27, 2021.

Cheap Talk with Multiple Experts and Uncertain Biases

  • Gülen Karakoç

Abstract

A decision maker solicits information from two partially informed experts and then makes a choice under uncertainty. The experts can be either moderately or extremely biased relative to the decision maker, which is their private information. I investigate the incentives of the experts to share their private information with the decision maker and analyze the resulting effects on information transmission. I show that it may be optimal to consult a single expert rather than two experts if the decision maker is sufficiently concerned about taking advice from extremely biased experts. In contrast to what may be expected, this result suggests that getting a second opinion may not always be helpful for decision making.

JEL Classifications: C72; D82; D83

1 Introduction

Conventional wisdom suggests that getting a second opinion is helpful for decision making, and seeking one is common in many real-life situations: in healthcare markets, patients often seek a second opinion to find the right diagnosis; universities often ask for more than one recommendation letter before making tenure decisions; and customers often talk to several salespeople to find the product that best fits their needs. These examples suggest that a decision maker may wish to consult more than one expert to make sound decisions.

However, experts often have different preferences vis-à-vis the decision maker, and this makes communication difficult.[1] In particular, when talk is cheap and hence unverifiable, biased experts may have incentives to strategically alter their advice to push the decision maker in a certain direction, leading her to make a wrong decision. Moreover, as an outsider, the decision maker may not always know the actual preferences of the experts, which makes it even harder for her to draw inferences from the experts’ opinions.[2] Hence, getting advice from multiple experts creates an opportunity to extract more information but, at the same time, confronts the decision maker with the new challenge of resolving potentially conflicting opinions.

Many existing models explain why, and under which conditions, an uninformed decision maker benefits from consulting multiple experts before making a decision (see, e.g., Sobel (2013) for a survey). However, most of these models assume that the experts’ biases are known, and little is known about communication when an expert’s bias is private information. Do experts have incentives to share their private information with the decision maker? What is the effect of this information asymmetry on the decision maker’s behavior? And is it better to consult two experts or just one?

I address these issues by analyzing a simple cheap talk model adapted from Austen-Smith (1993). I consider an environment in which an uninformed decision maker seeks advice from either one or two partially informed experts before taking a payoff relevant action. Each expert receives a private binary signal about the state of the world and then provides information to the decision maker through simultaneous cheap talk. The decision maker and the experts have different preferences (e.g., biases) over actions.[3] The decision maker’s preferences are common knowledge across players, while each expert is privately informed about his bias. The experts may differ in terms of how biased they are relative to the decision maker, which is a natural assumption in many real-life situations. For instance, one could think of political leaders relying upon the recommendations of economic and military advisers before conducting a military intervention abroad: both types of advisers wish to influence the leader but possibly to differing degrees. To formalize this idea, I introduce two types of experts: an expert is either moderately biased (hereafter moderate expert), whose bias is small; or extremely biased (hereafter extreme expert), whose bias is large in absolute terms. Because an expert’s bias measures how distant his preferences are relative to those of the decision maker, a moderate expert is assumed to be less biased than an extreme expert.

Building on this insight, I focus on two informative equilibria in which the decision maker can learn some information from the experts’ messages. As a benchmark, I consider a fully-revealing equilibrium in which experts of either type truthfully reveal their privately observed signals about the state of the world and the decision maker believes them. Then, I consider a semi-revealing equilibrium in which a moderate expert is willing to send informative messages to the decision maker depending on his privately observed signal, while an extreme expert reports the same message independent of his private information so that no information can be inferred from his message.

I first examine the effect of uncertain biases on the decision maker’s action and the experts’ truth-telling incentives. I show that the conditions for the existence of a fully-revealing equilibrium are no different from those one would obtain if the biases were known. By contrast, in a semi-revealing equilibrium, the decision maker’s uncertainty about the experts’ biases affects the incentives to disclose truthful information. In particular, the interval that supports truth-telling as an equilibrium is smaller than in the fully-revealing equilibrium. This happens because, other things being equal, in a semi-revealing equilibrium the decision maker knows that, with some probability, each expert reports a message that does not necessarily reflect his privately observed signal, and she updates her beliefs accordingly. This provides an incentive to lie and hence makes the truth-telling condition tighter.

Next, I show that in both fully-revealing and semi-revealing equilibria, information transmission is more difficult with two experts. When an expert is consulted alone, the decision maker’s action is conditioned only on his message, which restrains the expert’s incentive to misreport his private signal through the overshooting effect highlighted in Morgan and Stocken (2008). With multiple experts, instead, each expert knows that his report matters less for the decision maker’s final action. Consequently, the presence of another expert weakens the overshooting effect and thus provides a stronger incentive to lie than communication with only one expert.

After characterizing the conditions for the existence of fully-revealing and semi-revealing equilibria, I develop a welfare analysis. As a measure of welfare, I use the ex-ante expected utility of the decision maker to determine how much she learns about the experts’ private information after simultaneous communication with one and with two experts.[4] Interestingly, I show that the fully-revealing equilibrium with one expert may be informationally superior to the semi-revealing equilibrium with two experts – that is, it may give the decision maker a higher ex-ante expected utility. The trade-off faced by the decision maker is the following. On the one hand, if the decision maker consults two experts, bias uncertainty becomes detrimental to her because it makes her more skeptical about the informational content of the experts’ messages. In that case, the decision maker’s choice of action may fail to restrain the experts’ incentives to misreport effectively, and truth-telling may become less appealing to the experts. On the other hand, consulting just one expert gives the decision maker a less precise idea about the underlying state than the two-expert mechanism, but it may induce the extreme expert to report his private information truthfully. Which of these two contrasting effects dominates depends on the relative likelihood of receiving distorted messages from the extreme expert. Specifically, when the probability that the decision maker believes an expert to be moderate is low, consulting two experts with uncertain biases increases the likelihood of receiving distorted information from extreme experts. In this case, the decision maker may prefer to consult only one expert, because he then truthfully communicates his private signal in equilibrium via the overshooting effect, even if he is an extreme expert.

In sum, the analysis unveils a novel effect arising from information asymmetries in a canonical multi-expert cheap talk framework. The results suggest that talking to multiple experts may not always be optimal for a decision maker who deals with privately informed, biased experts: when the decision maker consults a single expert, asymmetric information puts additional pressure on him to communicate truthfully via the overshooting effect. This insight may help to explain why, in reality, some doctoral programs have softened their “at least two letters of recommendation” requirement when making admission decisions.[5] Although I develop the arguments in a decision maker – expert framework, the scope of the analysis is broader. The results can easily be adapted to many situations that involve simultaneous communication between an uninformed party and informed parties, such as management consulting and medical, political, and financial advice.

The rest of the paper is organized as follows. After discussing the related literature, Section 2 describes the baseline model. Section 3.1 characterizes the conditions under which a fully-revealing equilibrium exists. In Section 3.2, I characterize the conditions under which a semi-revealing equilibrium exists. Welfare is discussed in Section 4. The last section concludes. All proofs are in the Appendix.

Related Literature. I build on and contribute to two strands of literature. First, this paper relates to the literature on cheap talk with multiple experts. Gilligan and Krehbiel (1989) first characterized the cheap talk model with two perfectly informed experts in a one-dimensional environment. Krishna and Morgan (2001) consider a cheap talk model with two perfectly informed experts to show that when the decision maker sequentially consults two experts who are biased in the same direction, then the most informative equilibrium is obtained by consulting the less biased expert alone. Gick (2006) studies a cheap talk model in which an uninformed decision maker seeks advice from two perfectly informed experts. He shows that having a second expert, even if he/she is more biased than the first one, improves the information structure when the communication is simultaneous.[6] The analysis in this paper is related to that in Austen-Smith (1993), who considers a uniform state space, and assumes that the experts are partially informed about the underlying state, as this paper does. However, I allow the decision maker to be uncertain about the experts’ biases. Specifically, Austen-Smith (1993) shows that simultaneously consulting two experts leads to higher welfare than consulting only one expert, while in this paper, I find that there are some circumstances under which two-expert communication is not necessarily superior to one-expert communication.

Second, this paper is related to the cheap talk literature with uncertain individual preferences. A growing literature considers experts’ reputational or career concerns as a source of uncertainty. For instance, Sobel (1985), Bénabou and Laroque (1992), Morris (2001), Gentzkow and Shapiro (2006), and Ottaviani and Sørensen (2006) consider uncertainty about expert types and focus on reputational incentives, which this paper does not address. A few papers focus on the informativeness of costless communication with uncertain biases. In particular, Morgan and Stocken (2003) and Dimitrakas and Sarafidis (2005) show that uncertainty about the magnitude of the expert’s bias weakens communication.

Interestingly, Li (2004) and Li and Madarász (2008) analyze cheap talk equilibria with uncertain (and exogenous) biases. As in this paper, they consider a uniform state space and allow two possible values of the bias. However, in a model with a single expert, they show that transparency about biases may not improve communication.[7] The reason is that, in both papers, the uncertainty faced by the decision maker concerns both the direction and the magnitude of the bias: the two possible values of the bias have the same magnitude but opposite signs. In that case, the bias may have an expected value of zero, and hence revealing its non-zero realization may decrease informational efficiency. In the current paper, instead, bias uncertainty concerns only the magnitude of the conflict, and the two possible values of the bias may differ in magnitude, so that the expected value of the bias is non-zero. Hence, unlike Li (2004) and Li and Madarász (2008), my results suggest that transparency of biases improves information transmission and increases the experts’ incentives to communicate truthfully, due to the risk of overshooting the decision maker’s ideal action.

Finally, in the context of disclosure games, Bhattacharya and Mukherjee (2013) also obtain a similar result as the current paper does – i.e., the decision-maker may be better off with fewer experts. However, their result stems from very different sources as they assume uncertainty over what each expert knows, and the relevant information consists of hard evidence that can be verified. Bhattacharya and Mukherjee (2013) show that bringing an additional expert may not always benefit the decision-maker when the experts are like-minded. The reason is that although having experts who are similarly biased allows the decision-maker to learn the underlying state with a greater probability if it lies in the experts’ revelation set, the states outside this set are never revealed. Hence, the introduction of a new like-minded expert may fail to move the decision maker’s action sufficiently away from the experts’ common preferences, and consequently, may make truth-telling less appealing for the experts if they are indeed informed. By contrast, the current paper assumes that the experts of either type observe an informative signal about the state of the world, but the decision maker is uncertain whether experts’ reports reflect their information. Hence, the marginal gain from consulting two experts with uncertain biases may be lower than the loss of truthful information via the decision maker’s weak reaction to undetectable lies. Moreover, unlike Bhattacharya and Mukherjee (2013), a necessary condition for my result to hold is that the experts’ biases are not too similar so that the extreme expert does not communicate truthfully with the decision-maker in the presence of another expert while he communicates truthfully if he is consulted alone.

2 The Model

Players and Environment. Consider a decision maker (female), D, who seeks advice from two (male) experts, $A_1$ and $A_2$. The decision maker takes an action $y \in \mathbb{R}$ that affects the payoffs of all players. The state of the world, θ, is a random variable uniformly distributed on $[0, 1]$, with density $f(\theta) = 1$. The decision maker has no further information about θ, while each expert privately observes an informative signal about the state. Specifically, each expert, say $A_i$, observes a binary signal $s_i \in S \equiv \{0, 1\}$. Since θ is uniformly distributed on the unit interval, the marginals are $\Pr(s_i) = \tfrac{1}{2}$, $s_i \in S$, $i = 1, 2$.

Following Austen-Smith (1993), I assume that the signals are conditionally independent across experts given the underlying state θ. Each signal $s_i$ has the conditional probability

(1) $\Pr(s_i \mid \theta) = \theta^{s_i} (1 - \theta)^{1 - s_i}, \qquad s_i \in S.$

Conditional on the state θ, the joint probability distribution of the signals is therefore

(2) $\Pr(s_i, s_j \mid \theta) = \theta^{s_i + s_j} (1 - \theta)^{2 - s_i - s_j}, \qquad s_i, s_j \in S.$
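As a quick numerical companion to eqs. (1) and (2) (my own sketch, not part of the paper), the marginal and joint signal probabilities under the uniform prior reduce to exact Beta integrals:

```python
from fractions import Fraction
from math import factorial

def beta_int(a, b):
    """Exact value of the Beta integral  int_0^1 t^a (1-t)^b dt = a! b! / (a+b+1)!."""
    return Fraction(factorial(a) * factorial(b), factorial(a + b + 1))

# Marginal of one signal: integrate eq. (1) over theta ~ U[0,1].
pr_s1 = beta_int(1, 0)            # Pr(s_i = 1)
# Joint of two signals: integrate eq. (2) over theta.
pr_11 = beta_int(2, 0)            # Pr(s_i = 1, s_j = 1)
pr_10 = beta_int(1, 1)            # Pr(s_i = 1, s_j = 0)
pr_00 = beta_int(0, 2)            # Pr(s_i = 0, s_j = 0)

print(pr_s1)                      # 1/2, the fair marginal stated in the text
print(pr_11, pr_10, pr_00)        # 1/3, 1/6, 1/3
print(pr_11 + 2 * pr_10 + pr_00)  # 1: the four joint cells sum to one
```

Note that the joint cells with identical signals are more likely than the mixed ones, which is exactly the conditional correlation through θ that the decision maker exploits when pooling two reports.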

Based upon the realized signal, each expert simultaneously reports a message to the decision maker. Let $m_i$ be $A_i$’s message and, for simplicity, consider a binary message space such that $m_i \in M \equiv \{0, 1\}$.[8] Based upon the received messages, the decision maker takes an action $y(m_i, m_j)$ that affects the payoffs of all players.

All players have quadratic loss utility functions. Specifically, D’s utility is

$U^D(y, \theta, b_D) \equiv -(y - \theta - b_D)^2,$

and $A_i$’s utility is

$U^i(y, \theta, b_i) \equiv -(y - \theta - b_i)^2, \qquad i = 1, 2.$

The quadratic loss utility function is commonly used in the cheap talk literature (e.g., Austen-Smith (1993); Crawford and Sobel (1982); Morgan and Stocken (2008); among many others) because it yields tractable closed-form solutions. It also guarantees the concavity of D’s objective function and, hence, the uniqueness of the optimal action. Given the quadratic loss specification, in state θ, the decision maker’s most preferred action is $\theta + b_D$ and $A_i$’s most preferred action is $\theta + b_i$.
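Under this specification, the decision maker’s problem is a simple quadratic minimization whose optimum is the posterior mean shifted by her bias. A minimal numerical illustration (mine; the bias value 0.1 and the Monte Carlo setup are illustrative assumptions, not from the paper):

```python
import random

# Draw states from the uniform prior, theta ~ U[0, 1].
random.seed(0)
theta = [random.uniform(0.0, 1.0) for _ in range(10_000)]
b_D = 0.1  # illustrative bias value (assumption, not from the paper)

def expected_loss(y):
    """Monte Carlo estimate of E[(y - theta - b_D)^2]."""
    return sum((y - t - b_D) ** 2 for t in theta) / len(theta)

# Grid search over candidate actions in [0, 1.5]; the quadratic loss is
# strictly convex, so the minimizer is unique: y* = E[theta] + b_D.
grid = [i / 100 for i in range(151)]
y_star = min(grid, key=expected_loss)
print(y_star)  # close to E[theta] + b_D = 0.6
```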

The parameter $b_D \ge 0$ represents the decision maker’s bias and is common knowledge across players.[9] The parameter $b_i \in B \equiv \{b_M, b_E\}$, $i = 1, 2$, instead, represents $A_i$’s bias and measures how distant his preferences are from those of the decision maker. Specifically, if $b_i = b_M$, the bias is moderate and $A_i$ is said to be a moderate expert; if $b_i = b_E$, the bias is extreme and $A_i$ is said to be an extreme expert. A moderate expert is assumed to be less biased than an extreme expert, that is, $|b_M - b_D| < |b_E - b_D|$.

More importantly, A i ’s bias is his private information and is drawn from the following distribution

$\Pr(b_i = b_M) \equiv \nu \equiv 1 - \Pr(b_i = b_E), \qquad i = 1, 2.$

Hence, A i knows his own bias, while D and A j have only a prior about that.[10] Finally, all players are expected utility maximizers.

Timing. The timing is as follows.

  1. Nature randomly chooses θ according to a uniform distribution on $[0, 1]$.

  2. Nature independently chooses the types of the experts and privately informs them.

  3. Each expert privately observes s i .

  4. Each expert simultaneously sends m i to the decision maker.

  5. Based upon the received messages, D takes an action $y \in \mathbb{R}$.

Equilibrium. The solution concept is perfect Bayesian equilibrium (PBE). For simplicity, I consider only pure strategies for the experts (see, e.g., Austen-Smith 1993; Li 2004 among many others).[11]

As is common in cheap talk models, multiple equilibria exist. In particular, a babbling equilibrium always exists, in which the messages do not depend on the experts’ private information about the underlying state: given such a strategy, it is optimal for the decision maker to ignore the messages, and babbling is then in turn a best response for the experts. However, I focus on two informative equilibria: (i) a fully-revealing equilibrium in which experts of either type truthfully report their signals about the underlying state and the decision maker believes them; (ii) a semi-revealing equilibrium in which a moderate expert truthfully reports his private signal while an extreme expert reports the same message regardless of his private information about the state.

Without loss of generality, in the analysis that follows, I assume that the extreme expert is rightward biased; that is, $b_D < b_E$. Assuming a rightward-biased extreme expert entails no loss of generality because experts’ payoffs are symmetric and the message space is binary.[12] As will become clear shortly, in a semi-revealing equilibrium, a rightward-biased extreme expert always reports, with a slight abuse of notation, $m_E = 1$ independent of his signal; that is, he wants as high an action as possible relative to the decision maker. When he observes a signal equal to zero, he prefers to report $m_E = 1$ rather than zero because, by doing so, he can shift the decision maker’s action rightward. Moreover, I impose no restriction on the direction of the moderate bias because a moderate expert, in equilibrium, is willing to send both messages (both 0 and 1) depending on his privately observed signal. He wants as high (resp. low) an action as possible if $b_D < b_M$ (resp. $b_D > b_M$).

3 Equilibrium Analysis

I now characterize the decision maker’s optimal action after receiving any messages from the experts. I then analyze the experts’ incentives to communicate in fully-revealing and semi-revealing equilibria with one and two experts, respectively. Notice that, as in Austen-Smith (1993), the case of one expert conveying no information in a multiple-expert setup is equivalent to there being only one expert; this observation defines the game with a single expert. A detailed equilibrium analysis with one expert can be found in the Appendix.

3.1 Fully-Revealing Equilibrium

First, assume that the experts simultaneously and truthfully report their private signals and that the decision maker believes them. Letting $y^F(s_i, s_j)$ denote the decision maker’s optimal action after being truthfully informed about the signals, D’s best response to such a strategy is

(3) $y^F(s_i, s_j) = \arg\max_{y \in \mathbb{R}} \int_\theta -(y - \theta - b_D)^2 f(\theta \mid s_i, s_j)\, d\theta = b_D + E[\theta \mid s_i, s_j], \qquad (s_i, s_j) \in S^2.$

Hence, when D receives truthful messages from the experts, the optimal action is just the conditional expectation of the state shifted by her bias $b_D$. The following lemma characterizes the decision maker’s optimal actions with one and two experts, respectively.

Lemma 1

In a fully-revealing equilibrium, when D consults only one expert, her optimal actions are

$y^F(0) = b_D + \tfrac{1}{3}, \qquad y^F(1) = b_D + \tfrac{2}{3},$

while when D simultaneously consults two experts, her optimal actions are

$y^F(0, 0) = b_D + \tfrac{1}{4}, \qquad y^F(0, 1) = y^F(1, 0) = b_D + \tfrac{1}{2}, \qquad y^F(1, 1) = b_D + \tfrac{3}{4}.$

This lemma shows that, in a fully-revealing equilibrium, uncertainty about the experts’ types has no consequence for the optimal actions, and thus they can be unambiguously ranked. Intuitively, when D receives two different signals, the optimal action is based on her prior beliefs about the state. Instead, when D receives identical signals, she has a more precise idea of the state; consequently, the optimal action shifts rightward when she receives $(s_i, s_j) = (1, 1)$ and leftward when she receives $(s_i, s_j) = (0, 0)$. A similar logic applies when D consults one expert.
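Lemma 1’s actions follow from the standard uniform-prior updating rule: after $k$ ones among $n$ truthful binary signals, the posterior is Beta$(k+1, n-k+1)$ with mean $(k+1)/(n+2)$, to which D adds her bias $b_D$. A short check (my own sketch, not from the paper):

```python
from fractions import Fraction

def post_mean(k, n):
    """E[theta | k ones among n signals]; uniform prior -> Beta(k+1, n-k+1)."""
    return Fraction(k + 1, n + 2)

# One expert: y^F(s) = b_D + post_mean(s, 1)
print(post_mean(0, 1), post_mean(1, 1))                   # 1/3 2/3
# Two experts: y^F(s_i, s_j) = b_D + post_mean(s_i + s_j, 2)
print(post_mean(0, 2), post_mean(1, 2), post_mean(2, 2))  # 1/4 1/2 3/4
```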

Consider now the experts’ incentives to reveal the observed signal. Without loss of generality, I focus on the truth-telling incentives of $A_i$ because the experts are ex-ante symmetric. From $A_i$’s standpoint, $A_j$ truthfully reports his signal; that is, in equilibrium $m_j = s_j$. Hence, $A_i$ has an incentive to report $m_i = s_i$ rather than the false message $1 - s_i$ along the equilibrium path if

(4) $\sum_{s_j \in S} \int_\theta -\big(y^F(s_i, s_j) - \theta - b_i\big)^2 f(s_j, \theta \mid s_i)\, d\theta \;\ge\; \sum_{s_j \in S} \int_\theta -\big(y^F(1 - s_i, s_j) - \theta - b_i\big)^2 f(s_j, \theta \mid s_i)\, d\theta, \qquad \forall\, b_i \in B.$

The expression in (4) reflects that $A_i$’s incentive to report his private signal is shaped by D’s reaction to receiving false information from $A_i$ – i.e., the overshooting effect highlighted in Morgan and Stocken (2008): a deviation from a truthful message may shift D’s action too far from the expert’s ideal action. In other words, an expert with a rightward (resp. leftward) bias may prefer a higher (resp. lower) action than the decision maker, but the displacement in the decision maker’s action caused by an undetectable lie may be too large relative to truth-telling. Slightly abusing notation, let $\Delta y^F(s_i, s_j) \equiv y^F(1 - s_i, s_j) - y^F(s_i, s_j)$ be the displacement in D’s action after receiving a false rather than a correct signal from $A_i$. The larger the magnitude of $\Delta y^F(s_i, s_j)$, the stronger the overshooting effect and the weaker the incentives to lie in equilibrium, because reporting a false signal to the decision maker cannot be incentive compatible.

The following proposition characterizes a fully-revealing equilibrium with one and two experts.

Proposition 1

There exists a fully-revealing equilibrium with the following properties:

  1. In the game with only one expert, the expert of either type truthfully reports his signal if and only if $|b_1 - b_D| \le \tfrac{1}{6}$, $b_1 \in B$.

  2. In the game with two experts, the experts of either type truthfully report their signals if and only if $|b_i - b_D| \le \tfrac{1}{8}$, $b_i \in B$, $i = 1, 2$.

Hence, uncertainty has no impact on the interval that supports truth-telling as an equilibrium. Because the decision maker believes both messages, each message has a large impact on the optimal action. This, in turn, makes truth-telling an optimal strategy for an expert whose preferences are close to those of the decision maker: he cannot do better than report his true signal, due to the overshooting effect.

Moreover, Proposition 1 also shows that the truth-telling condition is tighter with multiple experts than when D consults only one expert. The reason is that when D consults one expert, the optimal action is conditioned on one message only, and hence the displacement in her action caused by an undetectable lie is large compared to the case with two experts – i.e., $\Delta y^F(s_i) > \Delta y^F(s_i, s_j)$. Another expert’s presence therefore weakens the overshooting effect, thereby magnifying the incentives to misreport. Consequently, $A_i$’s preferences must be even closer to the decision maker’s (as compared to the one-expert case) for him to reveal his private information.
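A back-of-the-envelope check of Proposition 1’s cutoffs (my own sketch, assuming the quadratic-loss logic above): under quadratic loss the posterior-variance terms cancel, so an expert compares the distance of D’s action from his conditional ideal point under truth versus an undetectable lie, and truth-telling holds exactly when $|b_i - b_D| \le \Delta/2$, where Δ is the displacement the lie would cause:

```python
from fractions import Fraction

# Displacement Delta y^F caused by flipping one reported signal (Lemma 1):
delta_one = Fraction(2, 3) - Fraction(1, 3)   # one expert: y^F(1) - y^F(0) = 1/3
delta_two = Fraction(1, 2) - Fraction(1, 4)   # two experts: constant gap of 1/4

# Truth-telling cutoff |b_i - b_D| <= Delta / 2:
print(delta_one / 2)   # 1/6, matching part 1 of Proposition 1
print(delta_two / 2)   # 1/8, matching part 2
```

The smaller displacement with two experts (1/4 versus 1/3) is exactly the weakening of the overshooting effect described in the text.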

3.2 Semi-Revealing Equilibrium

Consider now an equilibrium in which the moderate expert truthfully reports his signal, while the (rightward-biased) extreme expert reports $m_E = 1$ independent of his private signal. In this case, the decision maker must form beliefs about the signals $(s_i, s_j)$ given the message pair $(m_i, m_j)$, because the messages need not reflect the privately observed signals. Letting $y^S(m_i, m_j)$ be the optimal action taken by the decision maker in a semi-revealing equilibrium, D solves

(5) $y^S(m_i, m_j) = \arg\max_{y \in \mathbb{R}} \int_\theta -(y - \theta - b_D)^2 f(\theta \mid m_i, m_j)\, d\theta = b_D + \sum_{(s_i, s_j) \in S^2} \Pr(s_i, s_j \mid m_i, m_j)\, E[\theta \mid s_i, s_j] \;\equiv\; b_D + E_\nu[\theta \mid m_i, m_j].$

Hence, the decision maker’s optimal actions now depend on her posterior beliefs, which follow Bayes’ rule. The other salient characteristics of the optimal actions are given in the lemma below.

Lemma 2

In a semi-revealing equilibrium, when D consults only one expert, her optimal actions are

$y^S(0) = b_D + \tfrac{1}{3}, \qquad y^S(1) = b_D + \dfrac{3 - \nu}{3(2 - \nu)},$

while when D consults two experts, her optimal actions are

$y^S(0, 0) = b_D + \tfrac{1}{4}, \qquad y^S(0, 1) = y^S(1, 0) = b_D + \dfrac{2 - \nu}{2(3 - 2\nu)}, \qquad y^S(1, 1) = b_D + \dfrac{\nu^2 - 4\nu + 6}{4(\nu^2 - 3\nu + 3)}.$

The decision maker’s optimal actions in a semi-revealing equilibrium are depicted in Figure 1. Hence, even with uncertain biases, the optimal actions can be unambiguously ranked. Specifically, the lemma shows that when D receives at least one message equal to one, she updates her beliefs, discounting the possibility of having received uninformative messages. The optimal actions then reflect the decision maker’s rational skepticism, and the experts’ messages have a lower impact on the optimal action. The higher the chance that an expert is moderate, the more accurate the inference that D can draw from the messages about the signals. Hence, the optimal actions converge to those of the fully-revealing equilibrium as ν tends to one.

Figure 1: D’s optimal actions in a semi-revealing equilibrium.

By contrast, when D receives $(m_i, m_j) = (0, 0)$ from the experts, she can infer with certainty that these messages come from two moderate experts who tell the truth; hence, uncertainty about the experts’ types has no effect on the optimal action. The same reasoning applies when D consults a single expert. One remark is in order, however. Observe that $y^S(1) < y^S(1, 1)$. The reason is simple: although bias uncertainty erodes the credibility of messages equal to one in equilibrium, when D consults two experts she has a higher chance of getting truthful information from at least one of them as compared to the one-expert mechanism. As a consequence, D’s optimal action is higher when she receives $(m_i, m_j) = (1, 1)$ than when she receives only one message $m_i = 1$.
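The closed forms in Lemma 2 can be rebuilt from the Bayesian updating in eq. (5). The sketch below (my own, not from the paper) verifies the one-expert action against first principles, checks the ranking $y^S(1) \le y^S(1,1)$, and confirms the fully-revealing limit as ν → 1:

```python
from fractions import Fraction

def y1_one(nu):
    """Closed form for E_nu[theta | m = 1] with one expert (Lemma 2)."""
    return (3 - nu) / (3 * (2 - nu))

def y1_one_bayes(nu):
    """Same object from first principles: m = 1 comes from a moderate expert
    with s = 1 (prob nu) or an extreme expert with either signal (prob 1-nu)."""
    w1 = nu * Fraction(1, 2) + (1 - nu) * Fraction(1, 2)  # weight on s = 1
    w0 = (1 - nu) * Fraction(1, 2)                        # weight on s = 0
    return (w1 * Fraction(2, 3) + w0 * Fraction(1, 3)) / (w1 + w0)

def y11_two(nu):
    """Closed form for E_nu[theta | m = (1,1)] with two experts (Lemma 2)."""
    return (nu**2 - 4 * nu + 6) / (4 * (nu**2 - 3 * nu + 3))

nu = Fraction(1, 2)                         # an illustrative value of nu
print(y1_one(nu) == y1_one_bayes(nu))       # True: closed form matches Bayes
print(y1_one(nu) <= y11_two(nu))            # True: y^S(1) <= y^S(1,1)
print(y1_one(Fraction(1)), y11_two(Fraction(1)))  # 2/3 3/4: fully-revealing limit
```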

To determine the experts’ incentives to reveal their private information, first suppose that $A_i$ is moderate. Given that $A_j$’s bias is his private information, from $A_i$’s standpoint, $A_j$ can be either moderate or extreme. In light of this, $A_i$’s incentive compatibility constraint can be written as follows:

(6) $\sum_{b_j \in B} \Pr(b_j) \sum_{s_j \in S} \int_\theta -\big(y^S(m_i, m_j) - \theta - b_M\big)^2 f(s_j, \theta \mid s_i)\, d\theta \;\ge\; \sum_{b_j \in B} \Pr(b_j) \sum_{s_j \in S} \int_\theta -\big(y^S(1 - m_i, m_j) - \theta - b_M\big)^2 f(s_j, \theta \mid s_i)\, d\theta.$

The expression in (6) reflects that the overshooting effect now depends on the parameter ν. Since $A_j$’s bias is his private information and the other players have only a prior about it, the content of $A_j$’s message plays a crucial role in shaping $A_i$’s incentive to report truthfully. With a slight abuse of notation, denote the displacement in D’s action after receiving a false rather than a correct message from $A_i$ by $\Delta y^S(m_i, m_j) \equiv y^S(1 - m_i, m_j) - y^S(m_i, m_j)$, and notice that the overshooting effect is stronger when $A_j$ reports $m_j = 1$ than when he reports $m_j = 0$ – i.e., $\Delta y^S(m_i, 0) < \Delta y^S(m_i, 1)$. The reason is that when D receives $m_j = 1$, she anticipates the risk that it is an uninformative message and discounts $A_j$’s message accordingly. While updating, the decision maker thus puts more weight on $A_i$’s message, so that a lie by $A_i$ has a stronger impact on the optimal action.
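Using Lemma 2’s actions, the claim $\Delta y^S(m_i, 0) < \Delta y^S(m_i, 1)$ can be verified directly (my own numerical sketch at an illustrative ν, not from the paper):

```python
from fractions import Fraction

def y01(nu):
    """y^S(0,1) - b_D, from Lemma 2."""
    return (2 - nu) / (2 * (3 - 2 * nu))

def y11(nu):
    """y^S(1,1) - b_D, from Lemma 2."""
    return (nu**2 - 4 * nu + 6) / (4 * (nu**2 - 3 * nu + 3))

nu = Fraction(1, 2)                   # illustrative value of nu
y00 = Fraction(1, 4)                  # y^S(0,0) - b_D
delta_m0 = y01(nu) - y00              # displacement when A_j reports m_j = 0
delta_m1 = y11(nu) - y01(nu)          # displacement when A_j reports m_j = 1
print(delta_m0, delta_m1)             # 1/8 vs 13/56
print(delta_m0 < delta_m1)            # True: overshooting is stronger vs m_j = 1
```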

Finally, we need to guarantee that, when $A_i$ is extreme, he has no incentive to report truthfully when he observes $s_i = 0$. The following must then hold for a semi-revealing equilibrium to arise:

$\sum_{b_j \in B} \Pr(b_j) \sum_{s_j \in \{0,1\}} \int_\theta -\big(y^S(0, m_j) - \theta - b_E\big)^2 f(s_j, \theta \mid s_i = 0)\, d\theta \;<\; \sum_{b_j \in B} \Pr(b_j) \sum_{s_j \in \{0,1\}} \int_\theta -\big(y^S(1, m_j) - \theta - b_E\big)^2 f(s_j, \theta \mid s_i = 0)\, d\theta.$

Clearly, when $s_i = 1$, a rightward-biased extreme expert has an incentive to report $m_E = 1$. The following proposition characterizes a semi-revealing equilibrium with one and two experts.

Proposition 2

There exists a semi-revealing equilibrium with the following properties:

  1. In the game with only one expert, there exist two thresholds $\alpha_1(\nu)$ and $\beta_1(\nu)$, with $0 < \alpha_1(\nu) < \beta_1(\nu)$, such that the moderate expert truthfully reports his signal, while the extreme expert always reports $m_E = 1$, if and only if

    $-\beta_1(\nu) \le b_M - b_D \le \alpha_1(\nu)$ and $b_E - b_D > \alpha_1(\nu)$.

  2. In the game with two experts, there exist two thresholds $\alpha_2(\nu)$ and $\beta_2(\nu)$, with $\alpha_2(\nu) < \alpha_1(\nu)$ and $\beta_2(\nu) < \beta_1(\nu)$, such that the moderate expert truthfully reports his signal, while the extreme expert always reports $m_E = 1$, if and only if

    $-\beta_2(\nu) \le b_M - b_D \le \alpha_2(\nu)$ and $b_E - b_D > \alpha_2(\nu)$.

    Moreover, $\alpha_1(\nu)$ and $\alpha_2(\nu)$ are increasing in ν, while $\beta_1(\nu)$ and $\beta_2(\nu)$ are decreasing in ν.

For the sake of clarity, and without loss of insight, I hereafter assume that both types of experts are biased in the same direction relative to the decision maker – i.e., $b_D < b_M < b_E$. Figure 2 represents the truth-telling thresholds in a semi-revealing equilibrium. Uncertainty about the experts’ biases has two effects on information transmission. First, the interval that supports truth-telling as an equilibrium shrinks as the probability of being moderate tends to zero – i.e., $\alpha_i(\nu)$ increases and $\beta_i(\nu)$ decreases with ν. The intuition is straightforward: because of the uncertainty about the experts’ biases, each message has a lower impact on D’s action in equilibrium. This, in turn, strengthens the incentives to lie, via the overshooting effect, when an expert observes a signal that, if reported truthfully, would shift the decision maker’s action in an undesired direction.[13]

Figure 2: Truth-telling thresholds for a semi-revealing equilibrium.

Second, as shown in Figure 2, the conditions for truth-telling are tighter when the decision maker consults two experts rather than just one. The reason is that, when $A_i$ is consulted alone, D's optimal action is conditioned only on his report, which makes him relatively sure of the consequence of the message he sends to D. In contrast, when there are two experts, $A_i$ is unsure about the weight of his message because D's optimal action depends on $A_j$'s report too. As a result, the presence of another expert with an unknown bias creates sharper incentives to lie relative to communication with only one expert.

4 Welfare Analysis

To study the welfare effects, I now compare the decision maker's ex-ante expected utility across the types of equilibria defined in Propositions 1 and 2. Recall from the Introduction the definition of an informationally superior equilibrium, which will be useful for the subsequent discussion: an equilibrium is said to be informationally superior to another if the former gives the decision maker a higher expected utility. In light of this definition, I first compare D's ex-ante expected utility with one and two experts within each equilibrium. I have the following result.

Proposition 3

In both fully-revealing and semi-revealing equilibria, consulting two experts is informationally superior to consulting just one.

In principle, consulting two experts has two effects on the decision maker's ex-ante expected utility. First, getting advice from multiple experts has a positive direct effect, as it creates an opportunity for the decision maker to extract more information, allowing D to take an action based on the combination of two messages. However, another expert's presence also has an indirect effect: it weakens the overshooting effect, thereby magnifying the incentives to misreport compared to the one-expert case. Not surprisingly, in a fully-revealing equilibrium, two experts provide more information to the decision maker than a single expert. The reason is that, in a fully-revealing equilibrium, experts of either type truthfully reveal their privately observed signals; thus, the positive effect always dominates, as the decision maker can fully internalize the indirect impact of two-expert communication on the experts' truth-telling incentives by taking a more precise action.

The same conclusion holds even when the experts report noisy information to the decision maker. The intuition is simple. Although the magnitude of the overshooting effect is attenuated in a semi-revealing equilibrium relative to a fully-revealing equilibrium due to uncertain biases, in a semi-revealing equilibrium the improvement in information transmission in two-expert communication comes from the fact that D has a higher chance of getting truthful information from at least one of the experts. In other words, the marginal gain from consulting two experts with uncertain biases is, on average, higher than the loss of truthful information via the overshooting effect. Moreover, this result echoes the findings of Austen-Smith (1993) and allows us to see that his results regarding simultaneous communication are robust to the introduction of uncertainty about the experts' biases.

In the light of Proposition 3, Figure 3 plots the decision maker's welfare-maximizing equilibrium within each interval defined in Propositions 1 and 2 when both types of experts are biased in the same direction relative to the decision maker – i.e., $b_D < b_M < b_E$.

Figure 3: Decision maker's welfare-maximizing equilibria.

By comparing D’s expected utility in a fully-revealing equilibrium with one expert and a semi-revealing equilibrium with two experts, I can establish the key result of the model.

Proposition 4

There exists a threshold $\tilde{\nu}$ such that the fully-revealing equilibrium with one expert is informationally superior to the semi-revealing equilibrium with two experts if $\nu < \tilde{\nu}$.

Comparing the informativeness of the two equilibria just mentioned highlights a novel trade-off for the decision maker. On the one hand, if D consults one expert, her optimal action is conditioned on only one message, and hence she has a less precise idea about the underlying state. However, this may give the extreme expert, whose preferences are not too far from those of the decision maker, an incentive to disclose his information via a strong overshooting effect. On the other hand, consulting two experts opens a new channel to the decision maker, through which she can obtain more information about the underlying state. However, bias uncertainty makes D highly skeptical about the informational content of the experts' messages, and consequently she may fail to discipline the experts. Which of these two contrasting effects dominates depends on the relative likelihood of receiving distorted messages from the extreme type, which reflects the relevance of bias uncertainty.

Surprisingly, when the probability of being moderate is sufficiently low, the decision maker prefers to consult a single expert. To understand why, consider first the region of parameters where the two equilibria coexist – i.e., a fully-revealing equilibrium with one expert and a semi-revealing equilibrium with two experts (see Figure 3). It is easy to see that these two equilibria obtain when

  1. A moderate type has preferences close enough to those of the decision maker to induce him to report his signal truthfully regardless of the strategy of the other expert – i.e., $b_M - b_D < \alpha_2(\nu)$, and

  2. An extreme expert has preferences distant enough that he does not report truthfully if the other expert does so, but close enough that, if consulted alone, he would report his signal truthfully – i.e., such that $\frac{1}{8} < b_E - b_D < \frac{1}{6}$.

Now, in the relevant region of parameters, when $\nu$ is sufficiently small, disciplining the extreme expert outweighs the benefits of consulting two experts under uncertainty: receiving a single but undistorted piece of information becomes more important for the decision maker than consulting multiple experts with uncertain biases. The intuition is the following. If $\nu$ is small, the truth-telling interval in a semi-revealing equilibrium is small too – i.e., $\alpha_i(\nu)$ shrinks. In that case, the moderate expert has weaker incentives to report his signal truthfully, and hence the information content of his message decreases. Since bias uncertainty is just noise that erodes the credibility of the messages in equilibrium, the lower is $\nu$, the higher is the relative likelihood of receiving false information from the experts, and the lower is the ex-ante expected utility of the decision maker. Hence, when $\nu$ is sufficiently small, the decision maker prefers to consult a single expert who reports his signal truthfully. By contrast, when $\nu > \tilde{\nu}$, the negative impact of bias uncertainty on D's ex-ante utility diminishes because the experts distort information less when $\nu$ is high. Consequently, consulting two experts with uncertain biases becomes more important than disciplining the extreme expert, since D has a higher chance of getting truthful information from the experts, who eventually provide more information than a single expert.[14]

Taken together, this result suggests that it may be optimal to consult a single expert rather than two experts whenever the biases of the experts are not too similar, the extreme expert is not too extreme, and the probability of getting undistorted information is sufficiently low.

5 Conclusion

It is commonly believed that seeking advice from multiple sources improves information transmission between the uninformed party and the informed parties. This presumption may be incorrect, especially when there is uncertainty about the experts' biases. Interestingly, contrary to what conventional wisdom suggests, I have shown that the decision maker may prefer to consult a single expert rather than two when she is sufficiently concerned about taking advice from extreme experts: in this case, the extreme expert does not communicate truthfully with the decision maker in the presence of another expert, while he does communicate truthfully if consulted alone. This suggests that, even though the decision maker is uncertain about the experts' biases, consulting a single expert can serve as a tool to prevent opportunistic behavior by the experts in the first place. Hence, talking to multiple experts to elicit information about the true state is not always ex-ante efficient.


Corresponding author: Gülen Karakoç, Department of Economics, University of Milan-Bicocca, Milano, Italy.

Acknowledgments

I am particularly grateful to Ronald Peeters (the Editor) and two anonymous referees for extremely useful comments. I would like to thank Giacomo Calzolari, Ferdinando Colombo, Vittoria Cerasi, Nenad Kos, Marco Pagnozzi, Salvatore Piccolo and Giovanni Ursino as well as audiences of the workshops on “Conference on Economic Design” (2019, Budapest), on “15th European Meeting on Game Theory” (2019, Turku), on “46th Annual Conference of the European Association for Research in Industrial Economics” (2019, Barcelona) and seminar participants at University of Bergamo and University of Milan-Bicocca for insightful comments and suggestions. All remaining errors are mine.

Appendix

Proof of Lemma 1.

  1. One expert. Suppose that D consults one expert who truthfully reports his signal. Since the utility function is concave in $y$, the expected-utility-maximizing action of the decision maker after receiving $m_1 = s_1$ can be defined as follows:

    (A1) $y^{F}_{s_1} = \arg\max_{y \in \mathbb{R}} \int_{\theta} -\left(y - \theta - b_D\right)^2 f\left(\theta \mid s_1\right) d\theta = b_D + \underbrace{\int_{\theta} \theta f\left(\theta \mid s_1\right) d\theta}_{E\left[\theta \mid s_1\right]}, \quad s_1 \in S,$

    where the conditional density of $\theta$ given the signal $s_1$ is

    $f\left(\theta \mid s_1\right) = \frac{\Pr\left(s_1 \mid \theta\right) f\left(\theta\right)}{\int_{\theta} \Pr\left(s_1 \mid \theta\right) f\left(\theta\right) d\theta}.$

    Using the conditional probability distribution of the signal from (1), I obtain

    (A2) $f\left(\theta \mid s_1 = 0\right) = 2\left(1 - \theta\right), \qquad f\left(\theta \mid s_1 = 1\right) = 2\theta.$

    Substituting (A2) into (A1), it is immediate to verify that

    (A3) $y^{F}_{0} = b_D + \underbrace{\tfrac{1}{3}}_{E\left[\theta \mid s_1 = 0\right]}, \qquad y^{F}_{1} = b_D + \underbrace{\tfrac{2}{3}}_{E\left[\theta \mid s_1 = 1\right]},$

    as claimed.

  2. Two experts. From (3) I know that D's optimal action after receiving $m_i = s_i$ and $m_j = s_j$ is

    (A4) $y^{F}_{s_i, s_j} = b_D + \underbrace{\int_{\theta} \theta f\left(\theta \mid s_i, s_j\right) d\theta}_{E\left[\theta \mid s_i, s_j\right]},$

    where the conditional density of $\theta$ given the signals $s_i$ and $s_j$ is

    $f\left(\theta \mid s_i, s_j\right) = \frac{\Pr\left(s_i, s_j \mid \theta\right) f\left(\theta\right)}{\int_{\theta} \Pr\left(s_i, s_j \mid \theta\right) f\left(\theta\right) d\theta}.$

    Using the conditional probability distribution of the signals from (2), I obtain

    (A5) $f\left(\theta \mid s_i = 0, s_j = 0\right) = 3\left(1 - \theta\right)^2, \qquad f\left(\theta \mid s_i = 1, s_j = 1\right) = 3\theta^2,$

    (A6) $f\left(\theta \mid s_i = 0, s_j = 1\right) = f\left(\theta \mid s_i = 1, s_j = 0\right) = 6\theta\left(1 - \theta\right).$

    Substituting (A5) and (A6) into (A4) yields the decision maker's optimal actions

    (A7) $y^{F}_{0,0} = b_D + \underbrace{\tfrac{1}{4}}_{E\left[\theta \mid 0,0\right]}, \qquad y^{F}_{0,1} = y^{F}_{1,0} = b_D + \underbrace{\tfrac{1}{2}}_{E\left[\theta \mid 0,1\right]}, \qquad y^{F}_{1,1} = b_D + \underbrace{\tfrac{3}{4}}_{E\left[\theta \mid 1,1\right]},$

    as claimed.□
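The conditional expectations behind (A3) and (A7) can be reproduced numerically. The sketch below (illustrative, not part of the original proof) assumes, as in the model, a uniform prior on $[0,1]$ and signal likelihoods proportional to the posterior densities in (A2), (A5), and (A6):

```python
# Midpoint-rule verification of the posterior means in Lemma 1.
N = 100_000
dx = 1.0 / N
grid = [(k + 0.5) * dx for k in range(N)]   # midpoint rule on [0,1]

def post_mean(weight):
    """E[theta | signals], with weight(t) proportional to Pr(signals | t)."""
    num = sum(t * weight(t) for t in grid) * dx
    den = sum(weight(t) for t in grid) * dx
    return num / den

# (A3): one signal, Pr(s=1|theta) = theta under the uniform prior.
assert abs(post_mean(lambda t: 1 - t) - 1/3) < 1e-6
assert abs(post_mean(lambda t: t) - 2/3) < 1e-6

# (A7): two conditionally independent signals.
assert abs(post_mean(lambda t: (1 - t)**2) - 1/4) < 1e-6
assert abs(post_mean(lambda t: t * (1 - t)) - 1/2) < 1e-6
assert abs(post_mean(lambda t: t**2) - 3/4) < 1e-6
```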

Proof of Proposition 1.

  1. One expert. Consider $A_1$'s incentive to truthfully report his private signal. $A_1$'s expected utility from reporting $m_1 = s_1$ is higher than his expected utility from reporting $m_1 = 1 - s_1$ if and only if

    $\int_{\theta} -\left(y^{F}_{s_1} - \theta - b_1\right)^2 f\left(s_1 \mid \theta\right) d\theta \geq \int_{\theta} -\left(y^{F}_{1-s_1} - \theta - b_1\right)^2 f\left(s_1 \mid \theta\right) d\theta, \quad \forall s_1 \in S, \; b_1 \in B,$

    which, substituting $f\left(s_1 \mid \theta\right) = f\left(\theta \mid s_1\right) \Pr\left(s_1\right)$ by Bayes' rule and integrating, yields

    $-\left(y^{F}_{s_1} - E\left[\theta \mid s_1\right] - b_1\right)^2 \Pr\left(s_1\right) \geq -\left(y^{F}_{1-s_1} - E\left[\theta \mid s_1\right] - b_1\right)^2 \Pr\left(s_1\right).$

    Using D's best response from (A1) and rearranging terms, I obtain

    (A8) $-\left(b_D - b_1\right)^2 \Pr\left(s_1\right) \geq -\big(b_D + \underbrace{E\left[\theta \mid 1 - s_1\right] - E\left[\theta \mid s_1\right]}_{\Delta y^{F}\left(s_1\right)} - b_1\big)^2 \Pr\left(s_1\right).$

    Expanding squares and rearranging terms, (A8) further simplifies to

    (A9) $\left(b_1 - b_D\right) \Delta y^{F}\left(s_1\right) \leq \frac{\Delta y^{F}\left(s_1\right)^2}{2},$

    where I have used the fact that $\Pr\left(s_1\right) = \frac{1}{2}$, $\forall s_1 \in S$. Solving (A9) jointly with D's optimal actions from Lemma 1, it is immediate to verify that, when $s_1 = 0$, truth-telling by $A_1$ requires

    $b_1 - b_D \leq \frac{1}{6},$

    while, when he observes $s_1 = 1$, the truth-telling condition is

    $b_1 - b_D \geq -\frac{1}{6},$

    where $b_1 \in B$. The result follows immediately.

  2. Two experts. Without loss of generality, I focus on $A_i$'s incentive to truthfully report his signal, because the experts are ex-ante symmetric. $A_i$'s expected utility from reporting $m_i = s_i$ is higher than his expected utility from reporting a false message $m_i = 1 - s_i$ if and only if

    (A10) $\sum_{s_j \in S} \int_{\theta} -\left(y^{F}_{s_i, s_j} - \theta - b_i\right)^2 f\left(s_j, \theta \mid s_i\right) d\theta \geq \sum_{s_j \in S} \int_{\theta} -\left(y^{F}_{1-s_i, s_j} - \theta - b_i\right)^2 f\left(s_j, \theta \mid s_i\right) d\theta,$

    which, substituting $f\left(s_j, \theta \mid s_i\right) = f\left(\theta \mid s_i, s_j\right) \Pr\left(s_j \mid s_i\right)$ by Bayes' rule and following the same steps as above, simplifies to

    (A11) $\left(b_i - b_D\right) \sum_{s_j \in S} \Pr\left(s_j \mid s_i\right) \Delta y^{F}\left(s_i, s_j\right) \leq \frac{\sum_{s_j \in S} \Pr\left(s_j \mid s_i\right) \Delta y^{F}\left(s_i, s_j\right)^2}{2}.$

    In order to compute $\Pr\left(s_j \mid s_i\right)$, notice first that the conditional probability distribution of the signals can be written as follows:

    (A12) $\Pr\left(s_i, s_j \mid \theta\right) = \frac{f\left(s_i, s_j, \theta\right)}{f\left(\theta\right)}.$

    Then, using (A12) together with the fact that $f\left(\theta\right) = 1$, I obtain

    (A13) $\Pr\left(s_j \mid s_i\right) = \int_{\theta} f\left(s_j, \theta \mid s_i\right) d\theta = \int_{\theta} \frac{f\left(s_i, s_j, \theta\right)}{\Pr\left(s_i\right)} d\theta = \frac{1}{\Pr\left(s_i\right)} \int_{\theta} \Pr\left(s_i, s_j \mid \theta\right) d\theta.$

    Using (2) together with $\Pr\left(s_i\right) = \frac{1}{2}$, $\forall s_i \in S$, it can easily be verified that

    (A14) $\Pr\left(s_j = 0 \mid s_i = 0\right) = \Pr\left(s_j = 1 \mid s_i = 1\right) = \frac{2}{3},$

    (A15) $\Pr\left(s_j = 1 \mid s_i = 0\right) = \Pr\left(s_j = 0 \mid s_i = 1\right) = \frac{1}{3}.$

    Finally, substituting (A14) and (A15) into (A11) and using D's optimal actions from Lemma 1: when $s_i = 0$, truth-telling by $A_i$ requires

    $b_i - b_D \leq \frac{1}{8},$

    while, when $s_i = 1$, truth-telling by $A_i$ requires

    $b_i - b_D \geq -\frac{1}{8},$

    where $b_i \in B$, $i = 1, 2$. The result follows immediately. □
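The cutoff $\frac{1}{8}$ can also be recovered mechanically from (A11) with exact rational arithmetic; a small sketch (illustrative, not part of the original proof):

```python
# Recover the two-expert truth-telling cutoff from (A11), using the
# fully-revealing actions in (A7) and Pr(s_j|s_i) from (A14)-(A15).
from fractions import Fraction as F

E = {(0, 0): F(1, 4), (0, 1): F(1, 2), (1, 0): F(1, 2), (1, 1): F(3, 4)}  # E[theta|s_i,s_j]
pr = {(0, 0): F(2, 3), (0, 1): F(1, 3), (1, 0): F(1, 3), (1, 1): F(2, 3)}  # Pr(s_j|s_i)

def cutoff(si):
    # Delta y^F(s_i, s_j) = y_{1-s_i, s_j} - y_{s_i, s_j}; b_D cancels out.
    d = {sj: E[(1 - si, sj)] - E[(si, sj)] for sj in (0, 1)}
    mean_d = sum(pr[(si, sj)] * d[sj] for sj in (0, 1))
    mean_d2 = sum(pr[(si, sj)] * d[sj]**2 for sj in (0, 1))
    return mean_d2 / (2 * mean_d)   # solve (A11) for b_i - b_D

assert cutoff(0) == F(1, 8)    # truth-telling at s_i = 0 needs b_i - b_D <= 1/8
assert cutoff(1) == F(-1, 8)   # at s_i = 1, b_i - b_D >= -1/8 (Delta changes sign)
```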

Proof of Lemma 2.

  1. One expert. In a semi-revealing equilibrium, D's maximization problem after receiving $m_1 \in M$ is

    (A16) $y^{S}_{m_1} = \arg\max_{y \in \mathbb{R}} \int_{\theta} -\left(y - \theta - b_D\right)^2 f\left(\theta \mid m_1\right) d\theta = b_D + \underbrace{\sum_{s_1 \in S} \Pr\left(s_1 \mid m_1\right) E\left[\theta \mid s_1\right]}_{E_{\nu}\left[\theta \mid m_1\right]}, \quad m_1 \in M.$

    Bayes' rule implies that D's posterior beliefs about $s_1$ can be written as follows:

    (A17) $\Pr\left(s_1 \mid m_1\right) = \frac{\Pr\left(m_1 \mid s_1\right) \Pr\left(s_1\right)}{\sum_{s_1 \in S} \Pr\left(m_1 \mid s_1\right) \Pr\left(s_1\right)}.$

    When D receives $m_1 = 1$, her posterior beliefs are

    (A18) $\Pr\left(s_1 = 1 \mid m_1 = 1\right) = \frac{1}{2 - \nu}, \qquad \Pr\left(s_1 = 0 \mid m_1 = 1\right) = \frac{1 - \nu}{2 - \nu},$

    while when she receives $m_1 = 0$, her posterior beliefs are

    (A19) $\Pr\left(s_1 = 1 \mid m_1 = 0\right) = 0, \qquad \Pr\left(s_1 = 0 \mid m_1 = 0\right) = 1.$

    Substituting the posteriors (A18) and (A19) into (A16), and using the conditional expectations $E\left[\theta \mid s_1\right]$ from the proof of Lemma 1, I have

    $y^{S}_{0} = b_D + \underbrace{\tfrac{1}{3}}_{E_{\nu}\left[\theta \mid m_1 = 0\right]}, \qquad y^{S}_{1} = b_D + \underbrace{\tfrac{3 - \nu}{3\left(2 - \nu\right)}}_{E_{\nu}\left[\theta \mid m_1 = 1\right]},$

    as claimed.

  2. Two experts. From (5) I know that D's optimal action after receiving $m_i$ and $m_j$ is

    (A20) $y^{S}_{m_i, m_j} = b_D + \underbrace{\sum_{\left(s_i, s_j\right) \in S^2} E\left[\theta \mid s_i, s_j\right] \Pr\left(s_i, s_j \mid m_i, m_j\right)}_{E_{\nu}\left[\theta \mid m_i, m_j\right]}, \quad \left(m_i, m_j\right) \in M^2.$

    Bayes' rule implies that D's posterior beliefs about the signals can be written as follows:

    (A21) $\Pr\left(s_i, s_j \mid m_i, m_j\right) = \frac{\Pr\left(m_i, m_j \mid s_i, s_j\right) \Pr\left(s_i, s_j\right)}{\sum_{\left(s_i, s_j\right) \in S^2} \Pr\left(m_i, m_j \mid s_i, s_j\right) \Pr\left(s_i, s_j\right)}.$

    Given that the extreme expert's babbling strategy is to report $m_E = 1$, when D receives $\left(m_i, m_j\right) = \left(1, 1\right)$, her posterior beliefs about $\left(s_i, s_j\right)$ are

    (A22) $\Pr\left(s_i = 1, s_j = 1 \mid 1,1\right) = \frac{1}{\nu^2 - 3\nu + 3}, \qquad \Pr\left(s_i = 0, s_j = 0 \mid 1,1\right) = \frac{\left(1 - \nu\right)^2}{\nu^2 - 3\nu + 3},$

    and

    (A23) $\Pr\left(s_i = 0, s_j = 1 \mid 1,1\right) = \Pr\left(s_i = 1, s_j = 0 \mid 1,1\right) = \frac{1 - \nu}{2\left(\nu^2 - 3\nu + 3\right)}.$

    By the same token, when D receives $\left(m_i, m_j\right) = \left(0, 1\right)$, the posteriors are

    (A24) $\Pr\left(s_i = 0, s_j = 0 \mid 0,1\right) = \frac{2\left(1 - \nu\right)}{3 - 2\nu}, \qquad \Pr\left(s_i = 0, s_j = 1 \mid 0,1\right) = \frac{1}{3 - 2\nu},$

    and zero otherwise. Since the message space is binary, a symmetric argument applies to the case where the decision maker receives $\left(m_i, m_j\right) = \left(1, 0\right)$. Finally, when D receives $\left(m_i, m_j\right) = \left(0, 0\right)$, the posteriors are

    (A25) $\Pr\left(s_i = 0, s_j = 0 \mid 0,0\right) = 1,$

    and zero otherwise. Next, I need to compute the joint probability of the signals. Notice that Bayes' rule implies that $\Pr\left(s_i, s_j\right)$ can be written as follows:

    (A26) $\Pr\left(s_i, s_j\right) = \Pr\left(s_j \mid s_i\right) \Pr\left(s_i\right).$

    Then, substituting $\Pr\left(s_j \mid s_i\right)$ from Eqs. (A14) and (A15) into (A26), it follows that

    (A27) $\Pr\left(s_i = 1, s_j = 1\right) = \Pr\left(s_i = 0, s_j = 0\right) = \frac{1}{3},$

    (A28) $\Pr\left(s_i = 0, s_j = 1\right) = \Pr\left(s_i = 1, s_j = 0\right) = \frac{1}{6}.$

    Finally, substituting $E\left[\theta \mid s_i, s_j\right]$ from the proof of Lemma 1 and the joint probabilities (A27) and (A28) into (A20), it is immediate to verify that

    $y^{S}_{0,0} = b_D + \underbrace{\tfrac{1}{4}}_{E_{\nu}\left[\theta \mid 0,0\right]}, \qquad y^{S}_{0,1} = y^{S}_{1,0} = b_D + \underbrace{\tfrac{2 - \nu}{2\left(3 - 2\nu\right)}}_{E_{\nu}\left[\theta \mid 0,1\right]}, \qquad y^{S}_{1,1} = b_D + \underbrace{\tfrac{\nu^2 - 4\nu + 6}{4\left(\nu^2 - 3\nu + 3\right)}}_{E_{\nu}\left[\theta \mid 1,1\right]},$

    as desired. □
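A numerical cross-check of Lemma 2 (a sketch, not part of the original proof): the per-message likelihoods $\Pr(m = 1 \mid \theta) = 1 - \nu(1 - \theta)$ and $\Pr(m = 0 \mid \theta) = \nu(1 - \theta)$, which are also used later in the proofs of Propositions 3 and 4, reproduce the posterior means above by direct integration.

```python
# Posterior means E_nu[theta | messages] via midpoint-rule integration.
N = 20_000
dx = 1.0 / N
grid = [(k + 0.5) * dx for k in range(N)]   # midpoint rule on [0,1]

def Em(nu, msgs):
    """E_nu[theta | messages], integrating Pr(messages | theta) over U[0,1]."""
    def like(t):
        p1 = 1 - nu * (1 - t)   # m = 1: s = 1, or s = 0 and the expert is extreme
        p0 = nu * (1 - t)       # m = 0: s = 0 and the expert is moderate
        out = 1.0
        for m in msgs:
            out *= p1 if m == 1 else p0
        return out
    num = sum(t * like(t) for t in grid) * dx
    den = sum(like(t) for t in grid) * dx
    return num / den

for nu in (0.2, 0.5, 0.8):
    assert abs(Em(nu, (0,)) - 1/3) < 1e-6                                  # y_0^S
    assert abs(Em(nu, (1,)) - (3 - nu) / (3 * (2 - nu))) < 1e-6            # y_1^S
    assert abs(Em(nu, (0, 0)) - 1/4) < 1e-6                                # y_{0,0}^S
    assert abs(Em(nu, (0, 1)) - (2 - nu) / (2 * (3 - 2 * nu))) < 1e-6      # y_{0,1}^S
    assert abs(Em(nu, (1, 1))
               - (nu**2 - 4 * nu + 6) / (4 * (nu**2 - 3 * nu + 3))) < 1e-6 # y_{1,1}^S
```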

Proof of Proposition 2.

  1. One expert. Suppose that $A_1$ is moderate – i.e., such that $b_1 = b_M$. Then $A_1$ has an incentive to report truthfully if and only if

    $\int_{\theta} -\left(y^{S}_{m_1} - \theta - b_M\right)^2 f\left(s_1 \mid \theta\right) d\theta \geq \int_{\theta} -\left(y^{S}_{1-m_1} - \theta - b_M\right)^2 f\left(s_1 \mid \theta\right) d\theta,$

    which, following the same steps as in the proof of Proposition 1 and rearranging terms, simplifies to

    (A29) $\Pr\left(s_1\right) \left(b_M - b_D\right) \Delta y^{S}\left(m_1\right) \leq \Pr\left(s_1\right) \left[\frac{\Delta y^{S}\left(m_1\right)^2}{2} + \Delta y^{S}\left(m_1\right) \left(E_{\nu}\left[\theta \mid m_1\right] - E\left[\theta \mid s_1\right]\right)\right],$

    where $\Delta y^{S}\left(m_1\right) \equiv y^{S}_{1-m_1} - y^{S}_{m_1}$. Now, substituting the optimal actions from Lemmas 1 and 2 into (A29), it follows that, whenever $s_1 = 0$, truth-telling by the moderate expert requires

    $b_M - b_D \leq \alpha_1\left(\nu\right) \equiv \frac{1}{6\left(2 - \nu\right)}.$

    Similarly, when $s_1 = 1$ is observed, truth-telling by the moderate expert requires

    $b_M - b_D \geq -\beta_1\left(\nu\right),$

    where

    $\beta_1\left(\nu\right) \equiv \frac{3 - 2\nu}{6\left(2 - \nu\right)}.$

    Moreover, $\alpha_1\left(\nu\right)$ is increasing in $\nu$ and $\beta_1\left(\nu\right)$ is decreasing in $\nu$ – i.e.,

    $\frac{d}{d\nu} \alpha_1\left(\nu\right) = \frac{1}{6\left(2 - \nu\right)^2} > 0, \qquad \frac{d}{d\nu} \beta_1\left(\nu\right) = -\frac{1}{6\left(2 - \nu\right)^2} < 0,$

    as expected. To complete the proof, I need to check that the extreme expert has no incentive to report $m_1 = 0$ when his signal is $s_1 = 0$. Adopting the same logic used above, the required condition is

    (A30) $\left(b_E - b_D\right) \Delta y^{S}\left(0\right) > \frac{\Delta y^{S}\left(0\right)^2}{2} + \Delta y^{S}\left(0\right) \left(E_{\nu}\left[\theta \mid 0\right] - E\left[\theta \mid 0\right]\right).$

    Substituting the optimal actions from Lemmas 1 and 2 into (A30), whenever $s_1 = 0$, the babbling condition required for the extreme expert is

    $b_E - b_D > \alpha_1\left(\nu\right).$

    Finally, when $s_1 = 1$, a rightward-biased extreme expert has a strict incentive to report $m_1 = 1$.

  2. Two experts. Without loss of generality, I focus on $A_i$'s incentives to disclose his private information, since the experts have symmetric payoffs. Consider first the case where $A_i$ is moderate – i.e., such that $b_i = b_M$. Given that $A_j$'s type is his private information, $A_i$'s incentive compatibility constraint is

    (A31) $\sum_{b_j \in B} \Pr\left(b_j\right) \sum_{s_j \in S} \int_{\theta} -\left(y^{S}_{m_i, m_j} - \theta - b_M\right)^2 f\left(s_j, \theta \mid s_i\right) d\theta \geq \sum_{b_j \in B} \Pr\left(b_j\right) \sum_{s_j \in S} \int_{\theta} -\left(y^{S}_{1-m_i, m_j} - \theta - b_M\right)^2 f\left(s_j, \theta \mid s_i\right) d\theta.$

    Following the same steps as in the proof of Proposition 1, the above constraint can be rewritten as follows:

    (A32) $\left(b_M - b_D\right) \sum_{b_j \in B} \Pr\left(b_j\right) \sum_{s_j \in \{0,1\}} \Pr\left(s_j \mid s_i\right) \Delta y^{S}\left(m_i, m_j\right) \leq \sum_{b_j \in B} \Pr\left(b_j\right) \sum_{s_j \in \{0,1\}} \Pr\left(s_j \mid s_i\right) \left[\frac{\Delta y^{S}\left(m_i, m_j\right)^2}{2} + \Delta y^{S}\left(m_i, m_j\right) \left(E_{\nu}\left[\theta \mid m_i, m_j\right] - E\left[\theta \mid s_i, s_j\right]\right)\right].$

    Using the optimal actions from Lemmas 1 and 2 and $\Pr\left(s_j \mid s_i\right)$ from Eqs. (A14) and (A15), when $s_i = 0$, truth-telling by the moderate expert requires

    $b_M - b_D \leq \alpha_2\left(\nu\right) \equiv \frac{5\nu^4 - 34\nu^3 + 84\nu^2 - 90\nu + 36}{8\left(3 - 2\nu\right)\left(\nu^2 - 3\nu + 3\right)\left(3\nu^2 - 8\nu + 6\right)}.$

    By the same token, when $s_i = 1$, truth-telling by the moderate expert requires

    $b_M - b_D \geq -\beta_2\left(\nu\right),$

    where

    $\beta_2\left(\nu\right) \equiv \frac{5\nu^4 - 33\nu^3 + 80\nu^2 - 87\nu + 36}{8\left(3 - 2\nu\right)\left(\nu^2 - 3\nu + 3\right)\left(2 - \nu\right)}.$

    Moreover, it can be shown that

    $\frac{d\alpha_2\left(\nu\right)}{d\nu} = \frac{30\nu^8 - 408\nu^7 + 2329\nu^6 - 7374\nu^5 + 14\,262\nu^4 - 17\,316\nu^3 + 12\,906\nu^2 - 5400\nu + 972}{8\left(3 - 2\nu\right)^2\left(\nu^2 - 3\nu + 3\right)^2\left(3\nu^2 - 8\nu + 6\right)^2} > 0,$

    and

    $\frac{d\beta_2\left(\nu\right)}{d\nu} = \frac{\nu^6 + 10\nu^5 - 112\nu^4 + 384\nu^3 - 627\nu^2 + 504\nu - 162}{8\left(2 - \nu\right)^2\left(3 - 2\nu\right)^2\left(\nu^2 - 3\nu + 3\right)^2} < 0.$

    To complete the proof, I need to check that the extreme expert has no incentive to report $m_i = 0$ when his signal is $s_i = 0$. Adopting the same logic used above, $A_i$'s expected utility from reporting $m_E = 1$ is higher than his expected utility from truthfully reporting $m_i = 0$ if

    $\left(b_E - b_D\right) \sum_{b_j \in B} \Pr\left(b_j\right) \sum_{s_j \in \{0,1\}} \Pr\left(s_j \mid 0\right) \Delta y^{S}\left(0, m_j\right) > \sum_{b_j \in B} \Pr\left(b_j\right) \sum_{s_j \in \{0,1\}} \Pr\left(s_j \mid 0\right) \left[\frac{\Delta y^{S}\left(0, m_j\right)^2}{2} + \Delta y^{S}\left(0, m_j\right) \left(E_{\nu}\left[\theta \mid 0, m_j\right] - E\left[\theta \mid 0, s_j\right]\right)\right].$

    Using the optimal actions from Lemmas 1 and 2, when $s_i = 0$, babbling by the rightward-biased extreme expert requires

    $b_E - b_D > \alpha_2\left(\nu\right),$

    while, when $s_i = 1$, the rightward-biased extreme expert has an incentive to truthfully report his signal. □
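Because the two derivative expressions in this proof are easy to mistranscribe (the $\nu^2$-coefficient 12 906 in the numerator of $d\alpha_2/d\nu$ is recovered here by differentiating $\alpha_2$ directly), a finite-difference cross-check is a useful sketch:

```python
# Cross-check the closed-form derivatives of alpha_2 and beta_2 against
# central finite differences of the threshold functions themselves.

def alpha2(v):
    num = 5 * v**4 - 34 * v**3 + 84 * v**2 - 90 * v + 36
    return num / (8 * (3 - 2 * v) * (v**2 - 3 * v + 3) * (3 * v**2 - 8 * v + 6))

def beta2(v):
    num = 5 * v**4 - 33 * v**3 + 80 * v**2 - 87 * v + 36
    return num / (8 * (3 - 2 * v) * (v**2 - 3 * v + 3) * (2 - v))

def dalpha2(v):   # closed form stated in the proof
    P = (30 * v**8 - 408 * v**7 + 2329 * v**6 - 7374 * v**5 + 14262 * v**4
         - 17316 * v**3 + 12906 * v**2 - 5400 * v + 972)
    return P / (8 * (3 - 2 * v)**2 * (v**2 - 3 * v + 3)**2 * (3 * v**2 - 8 * v + 6)**2)

def dbeta2(v):    # closed form stated in the proof
    P = v**6 + 10 * v**5 - 112 * v**4 + 384 * v**3 - 627 * v**2 + 504 * v - 162
    return P / (8 * (2 - v)**2 * (3 - 2 * v)**2 * (v**2 - 3 * v + 3)**2)

h = 1e-6
for v in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert abs((alpha2(v + h) - alpha2(v - h)) / (2 * h) - dalpha2(v)) < 1e-5
    assert abs((beta2(v + h) - beta2(v - h)) / (2 * h) - dbeta2(v)) < 1e-5
    assert dalpha2(v) > 0 and dbeta2(v) < 0   # alpha_2 increasing, beta_2 decreasing
```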

Proofs of Propositions 3 and 4. I first compare D's expected utility from consulting one and two experts within each equilibrium. Let $EU^{F}_{i}$, $i = 1, 2$, denote the decision maker's ex-ante expected utility in a fully-revealing equilibrium. More precisely, in a fully-revealing equilibrium, D's expected utility from consulting one expert is

(A33) $EU^{F}_{1} \equiv \int_{\theta} \sum_{s_1 \in S} -\left(y^{F}_{s_1} - \theta - b_D\right)^2 \Pr\left(s_1 \mid \theta\right) f\left(\theta\right) d\theta,$

which, using (1) and the results of Lemma 1, yields

(A34) $EU^{F}_{1} = -\frac{1}{18}.$

Similarly, in a fully-revealing equilibrium, D's expected utility from consulting two experts is

(A35) $EU^{F}_{2} \equiv \int_{\theta} \sum_{\left(s_i, s_j\right) \in S^2} -\left(y^{F}_{s_i, s_j} - \theta - b_D\right)^2 \Pr\left(s_i, s_j \mid \theta\right) f\left(\theta\right) d\theta.$

Using (2) and the optimal actions from Lemma 1, I have

(A36) $EU^{F}_{2} = -\frac{1}{24}.$

Comparing this with (A34),

$EU^{F}_{2} - EU^{F}_{1} = \frac{1}{72} > 0.$
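The values (A34) and (A36) follow from integrating the quadratic loss directly; since $y^{F} - \theta - b_D = E[\theta \mid \cdot] - \theta$, the bias term drops out. A quick numerical sketch (not part of the original proof):

```python
# Verify EU_1^F = -1/18 and EU_2^F = -1/24 by midpoint-rule integration.
N = 20_000
dx = 1.0 / N
grid = [(k + 0.5) * dx for k in range(N)]   # midpoint rule on [0,1]

def loss1(t):   # one expert: Pr(s=1|theta) = theta, residuals E[theta|s] - theta
    return -(t * (2/3 - t)**2 + (1 - t) * (1/3 - t)**2)

def loss2(t):   # two conditionally independent signals
    return -(t**2 * (3/4 - t)**2
             + 2 * t * (1 - t) * (1/2 - t)**2
             + (1 - t)**2 * (1/4 - t)**2)

EU1F = sum(loss1(t) for t in grid) * dx
EU2F = sum(loss2(t) for t in grid) * dx
assert abs(EU1F - (-1/18)) < 1e-7    # (A34)
assert abs(EU2F - (-1/24)) < 1e-7    # (A36)
assert abs((EU2F - EU1F) - 1/72) < 1e-7
```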

Therefore, in a fully-revealing equilibrium, D's ex-ante expected utility is higher with two experts. Now let $EU^{S}_{i}$, $i = 1, 2$, denote the decision maker's ex-ante expected utility in a semi-revealing equilibrium. More precisely, in a semi-revealing equilibrium, D's ex-ante expected utility from consulting one expert is

(A37) $EU^{S}_{1} \equiv \int_{\theta} \sum_{m_1 \in M} -\left(y^{S}_{m_1} - \theta - b_D\right)^2 \Pr\left(m_1 \mid \theta\right) f\left(\theta\right) d\theta,$

where

(A38) $\Pr\left(m_1 \mid \theta\right) = \sum_{s_1 \in S} \Pr\left(m_1 \mid s_1\right) \Pr\left(s_1 \mid \theta\right).$

Substituting the conditional probability distribution of $s_1$ from (1) and the corresponding prior beliefs into (A38), I have

$\Pr\left(m_1 = 1 \mid \theta\right) = \theta + \left(1 - \nu\right)\left(1 - \theta\right) \quad \text{and} \quad \Pr\left(m_1 = 0 \mid \theta\right) = \nu\left(1 - \theta\right).$

Hence,

(A39) $EU^{S}_{1} = -\frac{3 - 2\nu}{18\left(2 - \nu\right)},$

where I have used the optimal actions from Lemma 2. Similarly, in a semi-revealing equilibrium, D's ex-ante expected utility from consulting two experts is

$EU^{S}_{2} \equiv \int_{\theta} \sum_{\left(m_i, m_j\right) \in M^2} -\left(y^{S}_{m_i, m_j} - \theta - b_D\right)^2 \Pr\left(m_i, m_j \mid \theta\right) f\left(\theta\right) d\theta,$

where

(A40) $\Pr\left(m_i, m_j \mid \theta\right) = \sum_{\left(s_i, s_j\right) \in S^2} \Pr\left(m_i, m_j \mid s_i, s_j\right) \Pr\left(s_i, s_j \mid \theta\right).$

Substituting the conditional probability distribution of $\left(s_i, s_j\right)$ from (2) and the corresponding prior probabilities into (A40), I have

$\Pr\left(m_i = 1, m_j = 1 \mid \theta\right) = \left(1 - \nu\left(1 - \theta\right)\right)^2, \qquad \Pr\left(m_i = 0, m_j = 0 \mid \theta\right) = \nu^2\left(1 - \theta\right)^2, \qquad \Pr\left(m_i = 0, m_j = 1 \mid \theta\right) = \Pr\left(m_i = 1, m_j = 0 \mid \theta\right) = \nu\left(1 - \theta\right)\left(1 - \nu\left(1 - \theta\right)\right).$

Then, using the optimal actions from Lemma 2, I obtain

(A41) $EU^{S}_{2} = -\frac{36\left(1 - \nu\right)^2 + 13\nu^2\left(1 - \nu\right) + 2\nu^2}{48\left(3 - 2\nu\right)\left(\nu^2 - 3\nu + 3\right)}.$

Comparing (A39) and (A41),

$EU^{S}_{2} - EU^{S}_{1} = \frac{\nu\left(3 - \nu\right)\left(7\nu^2 - 18\nu + 12\right)}{144\left(2 - \nu\right)\left(3 - 2\nu\right)\left(\nu^2 - 3\nu + 3\right)},$

which is positive for all $\nu \in \left(0, 1\right]$. Therefore, in a semi-revealing equilibrium, D's ex-ante expected utility is higher with two experts. Finally, I compare D's ex-ante expected utility in a semi-revealing equilibrium with two experts with her ex-ante expected utility when she consults one expert who truthfully reports his signal. Direct comparison of (A34) and (A41) yields

(A42) $EU^{S}_{2} - EU^{F}_{1} = \frac{1}{144} \, \frac{96\nu - 81\nu^2 + 23\nu^3 - 36}{\left(3 - 2\nu\right)\left(\nu^2 - 3\nu + 3\right)}.$

Since the denominator is positive, the sign of (A42) depends on the sign of

$\mu\left(\nu\right) \equiv 96\nu - 81\nu^2 + 23\nu^3 - 36.$

Notice that

$\mu\left(0\right) = -36 < 0, \qquad \mu\left(1\right) = 2 > 0.$

Moreover,

$\frac{d\mu\left(\nu\right)}{d\nu} = 3\left(23\nu^2 - 54\nu + 32\right) > 0.$

Hence, by the intermediate value theorem, there exists a unique $\tilde{\nu} \approx 0.74$ such that $\mu\left(\nu\right) < 0$ (so that the decision maker's ex-ante expected utility is higher with one accurate expert) if and only if $\nu < \tilde{\nu}$. □
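The last two comparisons can be verified numerically; the sketch below (illustrative, not part of the original proof) checks the semi-revealing gap $EU^{S}_{2} - EU^{S}_{1} > 0$ and recovers $\tilde{\nu}$ by bisection on $\mu$:

```python
# Check the semi-revealing welfare gap from (A39) and (A41), and locate
# the cutoff nu_tilde as the unique root of mu on (0,1).

def eu1s(v):                       # (A39)
    return -(3 - 2 * v) / (18 * (2 - v))

def eu2s(v):                       # (A41)
    num = 36 * (1 - v)**2 + 13 * v**2 * (1 - v) + 2 * v**2
    return -num / (48 * (3 - 2 * v) * (v**2 - 3 * v + 3))

def mu(v):                         # numerator of (A42), up to a positive factor
    return 96 * v - 81 * v**2 + 23 * v**3 - 36

# Proposition 3, semi-revealing case: two experts dominate for all nu in (0,1).
for v in [i / 100 for i in range(1, 100)]:
    assert eu2s(v) > eu1s(v)

# Proposition 4: mu changes sign once; bisection recovers nu_tilde ~ 0.74.
lo, hi = 0.0, 1.0                  # mu(0) = -36 < 0 < 2 = mu(1)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mu(mid) < 0 else (lo, mid)
nu_tilde = (lo + hi) / 2
assert mu(0) == -36 and mu(1) == 2
assert abs(nu_tilde - 0.74) < 0.01
assert abs(eu2s(nu_tilde) + 1/18) < 1e-9   # at nu_tilde, EU_2^S = EU_1^F = -1/18
```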

References

Austen-Smith, D. 1993. "Interested Experts and Policy Advice: Multiple Referrals under Open Rule." Games and Economic Behavior 5: 3–43. https://doi.org/10.1006/game.1993.1002.

Bénabou, R., and G. Laroque. 1992. "Using Privileged Information to Manipulate Markets: Insiders, Gurus, and Credibility." Quarterly Journal of Economics 107: 921–58. https://doi.org/10.2307/2118369.

Bhattacharya, S., and A. Mukherjee. 2013. "Strategic Information Revelation when Experts Compete to Influence." The RAND Journal of Economics 44: 522–44. https://doi.org/10.1111/1756-2171.12029.

Chung, W., and R. Harbaugh. 2019. "Biased Recommendations from Biased and Unbiased Experts." Journal of Economics and Management Strategy 28: 520–40. https://doi.org/10.1111/jems.12293.

Crawford, V. P., and J. Sobel. 1982. "Strategic Information Transmission." Econometrica 50: 1431–51. https://doi.org/10.2307/1913390.

Dimitrakas, V., and Y. Sarafidis. 2005. Advice from an Expert with Unknown Motives. Mimeo: INSEAD. https://doi.org/10.2139/ssrn.797844.

Gick, W. 2006. Two Experts are Better than One: Multi-Sender Cheap Talk under Simultaneous Disclosure. Mimeo: Dartmouth University.

Evans, R. G. 1974. "Supplier-induced Demand: Some Empirical Evidence and Implications." In The Economics of Health and Medical Care, 162–73. London: Palgrave Macmillan. https://doi.org/10.1007/978-1-349-63660-0_10.

Gambacorta, L., L. Guiso, P. Mistrulli, A. Pozzi, and A. Tsoy. 2017. The Cost of Distorted Financial Advice - Evidence from the Mortgage Market. Rome, Italy: Einaudi Institute for Economics and Finance (EIEF).

Gentzkow, M., and J. M. Shapiro. 2006. "Media Bias and Reputation." Journal of Political Economy 114: 280–316. https://doi.org/10.1086/499414.

Gilligan, T. W., and K. Krehbiel. 1989. "Asymmetric Information and Legislative Rules with a Heterogeneous Committee." American Journal of Political Science 33: 459–90. https://doi.org/10.2307/2111156.

Kawamura, K. 2011. "A Model of Public Consultation: Why is Binary Communication so Common." The Economic Journal 121: 819–42. https://doi.org/10.1111/j.1468-0297.2011.02436.x.

Krishna, V., and J. Morgan. 2001. "A Model of Expertise." Quarterly Journal of Economics 116: 747–75. https://doi.org/10.1162/00335530151144159.

Li, M. 2004. To Disclose or Not to Disclose: Cheap Talk with Uncertain Biases. Mimeo: Concordia University.

Li, M. 2008. "Two (Talking) Heads are Not Better Than One." Economics Bulletin 3: 1–8.

Li, M., and K. Madarász. 2008. "When Mandatory Disclosure Hurts: Expert Advice and Conflicting Interests." Journal of Economic Theory 139: 47–74. https://doi.org/10.1016/j.jet.2007.07.009.

Morgan, J., and P. C. Stocken. 2003. "An Analysis of Stock Recommendations." The RAND Journal of Economics 34: 183–203. https://doi.org/10.2307/3087449.

Morgan, J., and P. C. Stocken. 2008. "Information Aggregation in Polls." The American Economic Review 98: 864–96. https://doi.org/10.1257/aer.98.3.864.

Morris, S. 2001. "Political Correctness." Journal of Political Economy 109: 231–65. https://doi.org/10.1086/319554.

Ottaviani, M., and P. N. Sørensen. 2006. "Reputational Cheap Talk." The RAND Journal of Economics 37: 155–75. https://doi.org/10.1111/j.1756-2171.2006.tb00010.x.

Piccolo, S., G. W. Puopolo, and L. Vasconcelos. 2016. "Non-Exclusive Financial Advice." Review of Finance 20: 2079–123. https://doi.org/10.1093/rof/rfv055.

Rosovsky, H., and M. Hartley. 2002. Evaluation and the Academy: Are We Doing the Right Thing? Grade Inflation and Letters of Recommendations. Cambridge: American Academy of Arts and Science.

Sobel, J. 1985. "A Theory of Credibility." The Review of Economic Studies 52: 557–73. https://doi.org/10.2307/2297732.

Sobel, J. 2013. "Giving and Receiving Advice." In Advances in Economics and Econometrics: Tenth World Congress (Econometric Society Monographs), edited by D. Acemoglu, M. Arellano, and E. Dekel, 305–41. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139060011.011.

Received: 2020-11-06
Accepted: 2021-07-18
Published Online: 2021-08-27

© 2021 Gülen Karakoç, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
