
The entropic approach to causal correlations


Published 27 November 2017 © 2017 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Citation: Nikolai Miklin et al 2017 New J. Phys. 19 113041, DOI 10.1088/1367-2630/aa8f9f


Abstract

The existence of a global causal order between events places constraints on the correlations that parties may share. Such 'causal correlations' have been the focus of recent attention, driven by the realization that some extensions of quantum mechanics may violate so-called causal inequalities. In this paper we study causal correlations from an entropic perspective, and we show how to use this framework to derive entropic causal inequalities. We consider two different ways to derive such inequalities. Firstly, we consider a method based on the causal Bayesian networks describing the causal relations between the parties. In contrast to the Bell-nonlocality scenario, where this method has previously been shown to be ineffective, we show that it leads to several interesting entropic causal inequalities. Secondly, we consider an alternative method based on counterfactual variables that has previously been used to derive entropic Bell inequalities. We compare the inequalities obtained via these two methods and discuss their violation by noncausal correlations. As an application of our approach, we derive bounds on the quantity of information—which is more naturally expressed in the entropic framework—that parties can communicate when operating in a definite causal order.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

When describing most physical phenomena it seems natural to assume that physical events take place in a well-defined causal structure. For instance, earlier events can influence later ones but not vice versa, or, if two events are distant enough (typically, space-like separated) from each other, any correlation between them can only be due to some common cause in their past. This intuition is formalized in Reichenbach's principle [1] and generalized by the mathematical theory of causal models [2], which forms the basis for our current understanding of how to infer causation from empirically observed correlations. Not surprisingly, it has found a wide range of applications [2–4]. Yet, quantum phenomena defy such an intuitive notion of cause and effect.

As shown by Bell's Theorem [5], quantum correlations obtained by measurements on distant entangled particles are incompatible with Reichenbach's principle [6, 7] or, more generally, with classical theories of causality, forcing us to generalize the notion of causal models [8–13]. In a scenario where different experimenters interact only once with a given system that is exchanged between them, one could expect that no simultaneous causal influences between them should be possible but rather only one-way influences. However, it has been realized that physical theories do not necessarily have to comply with the idea of a definite causal order [14, 15]. One can also imagine theories where the causal order itself is in a sort of 'quantum superposition' [14, 16], which can be verified using so-called causal witnesses [17, 18].

As with entanglement witnesses [19, 20], the use of causal witnesses assumes that we have a precise description of the measurement apparatus, that is, they are relevant in a device-dependent framework. Nevertheless, by allowing physical theories that are locally equivalent to quantum mechanics but relaxing the assumption of a fixed global causal structure, it is possible to verify causal indefiniteness also in a device-independent manner. With the aim of providing a general framework for such scenarios, the process matrix formalism [14] has been introduced and shown to allow for the violation of so-called causal inequalities [14, 21–25], which are device-independent constraints that play a similar role to that of Bell inequalities [5]. However, whether violations of causal inequalities can be experimentally observed is still an important open question.

Our goal in this paper is to introduce a new framework for the derivation of causal inequalities and the study of their potential violations: the entropic approach to causal correlations. The idea of using entropies to understand sets of correlations has its origins in the context of Bell inequalities [26–29] but since then has also found various other applications in quantum contextuality [30–32], device-independent applications [33, 34], causal inference [9, 35, 36] and in the characterization of nonsignaling correlations [37]. As with these previous applications, the interest in characterizing the entropies compatible with causal correlations stems not only from practical and technical issues, but also from a more fundamental point. To begin with, causal inequalities expressed in terms of probabilities are constructed for a fixed number of inputs and outputs, and their systematic derivation becomes harder as this number increases [24, 25]. In contrast, we will derive entropic causal inequalities that are valid for arbitrary finite alphabets either for the input and output variables, or just for the output variables. Furthermore, entropic inequalities can be easily combined with extra assumptions, such as conditional independence (CI) relations or information theoretic constraints (e.g., bounds on the amount of communication), which would be hard to treat in the probabilistic framework [35, 37, 38]. More fundamentally, given that entropies are a core concept in classical and quantum information theory, it is of clear relevance to have a framework that focuses on these quantities rather than on probabilities, and it may help connect causal inequalities with principles such as information causality [39].

The paper is organized as follows. In section 2, we will introduce the basic notions relevant for our investigation, namely causal correlations and the entropic approach to causal structures, and elaborate two complementary ways in which the approach can be applied. In section 3 we will show how to derive entropic causal inequalities for the bipartite scenario, and discuss their violation. In section 4, we will explain how this approach can be generalized to multipartite scenarios. Finally, as an application, in section 5 we use this approach to derive bounds on mutual informations in causal games.

2. Preliminaries

2.1. Causal correlations

Causal correlations are most easily introduced in the bipartite case, where we consider two parties, Alice (${\rm{A}}$) and Bob (${\rm{B}}$), who together conduct a joint experiment while each having control over a separate closed laboratory. During each round of the experiment, Alice and Bob each receive, operate on, and send out a single physical system, which is the only means by which they may communicate. In addition, they each receive some (external) classical inputs X and Y, for Alice and Bob respectively, and produce some classical outputs A and B, respectively. Throughout the paper we use upper-case letters (e.g., X) to denote random variables, and corresponding lower-case letters (e.g., x) to denote the specific values they take. Their probability distributions will generically be denoted by P; we will also use the shorthand notations P(x) for $P(X=x)$, $P(x,y)$ for $P(X=x,Y=y)$, $P(a| x)$ for $P(A=a| X=x)$, etc.

The joint conditional probability distributions $P({ab}| {xy})$ that can be produced in such an experiment depend on the causal relation between Alice and Bob. If Bob cannot signal to Alice their correlations should obey $P(a| {xy})=P(a| {xy}^{\prime} )$ for all $x,y,y^{\prime} ,a$, where $P(a| {xy})={\sum }_{b}P({ab}| {xy})$. We denote this situation by ${\rm{A}}\prec {\rm{B}}$, and write $P={P}^{{\rm{A}}\prec {\rm{B}}}$ in this case. Note that this does not necessarily imply that Alice is in the causal past of Bob since the events could be space-like separated, but merely that the correlation is compatible with such a causal order. Similarly, if the correlation is compatible with Bob being in the causal past of Alice we write ${\rm{B}}\prec {\rm{A}}$ and we have ${P}^{{\rm{B}}\prec {\rm{A}}}(b| {xy})={P}^{{\rm{B}}\prec {\rm{A}}}(b| x^{\prime} y)$ for all $x,x^{\prime} ,y,b$. The correlations that satisfy both these conditions (and are thus consistent both with ${\rm{A}}\prec {\rm{B}}$ and ${\rm{B}}\prec {\rm{A}}$) are precisely the nonsignaling correlations [40].
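For concreteness, the one-way no-signaling condition for ${\rm{A}}\prec {\rm{B}}$ can be checked numerically from a conditional distribution $P({ab}| {xy})$; the following minimal Python sketch (with illustrative function names, for binary variables) does this:

```python
import numpy as np

def compatible_with_A_before_B(P, tol=1e-9):
    """Check the one-way no-signalling condition for A<B: the marginal
    P(a|xy) must not depend on y, with P[a, b, x, y] = P(ab|xy)."""
    Pa = P.sum(axis=1)                      # P(a|xy), indexed [a, x, y]
    return bool(np.allclose(Pa, Pa[:, :, :1], atol=tol))

# Example: Alice outputs a = 0, Bob outputs b = x (so only A signals to B).
P = np.zeros((2, 2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        P[0, x, x, y] = 1.0
print(compatible_with_A_before_B(P))   # True
```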

More generally, we are interested in the correlations achievable under the assumption of a definite causal order in each round of the experiment, even if the causal relation between Alice and Bob may be different (e.g., chosen randomly) for each individual round. We thus say that a correlation $P({ab}| {xy})$ is causal if it can be written as

$P({ab}| {xy})={q}_{0}\,{P}^{{\rm{A}}\prec {\rm{B}}}({ab}| {xy})+{q}_{1}\,{P}^{{\rm{B}}\prec {\rm{A}}}({ab}| {xy}),$    (1)

with ${q}_{0},{q}_{1}\in [0,1]$ and ${q}_{0}+{q}_{1}=1$, where ${P}^{{\rm{A}}\prec {\rm{B}}}({ab}| {xy})$ and ${P}^{{\rm{B}}\prec {\rm{A}}}({ab}| {xy})$ satisfy the respective (one-way) no-signaling conditions defined above [14].

It was shown in [24] that the set of bipartite causal correlations forms a convex polytope, whose vertices are simply the deterministic causal correlations (i.e., causal correlations for which the outputs $A,B$ are deterministic functions of the inputs $X,Y$). The facets of this polytope specify causal inequalities, analogous to Bell inequalities for local correlations, that any causal correlation must satisfy [14]. The situation with binary input and output variables was characterized completely in [24], where it was shown that there are only two nonequivalent causal inequalities (up to symmetries). The simplest of these is perhaps the 'guess your neighbor's input' (GYNI) inequality, which has a simple interpretation as a game (up to a relabeling of the inputs and outputs) in which the inputs $X,Y$ are chosen uniformly at random and the goal is for each party to output the other party's input. One such form of this inequality can be written [24]

$\tfrac{1}{4}\,{\sum }_{x,y,a,b}\,{\delta }_{a,y}\,{\delta }_{b,x}\,P({ab}| {xy})\leqslant \tfrac{1}{2},$    (2)

where δ is the Kronecker delta function.
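For illustration, the left-hand side of the GYNI inequality (2) is easily evaluated numerically; the sketch below (illustrative, for binary inputs and outputs) shows a causal strategy reaching the causal bound of 1/2, in which Alice outputs a = 0 and sends her input to Bob, who outputs it:

```python
import numpy as np

def gyni_lhs(P):
    """LHS of the GYNI inequality (2): (1/4) sum_{x,y} P(a = y, b = x | x, y),
    for binary inputs/outputs with P[a, b, x, y] = P(ab|xy)."""
    return 0.25 * sum(P[y, x, x, y] for x in (0, 1) for y in (0, 1))

# Causal strategy: Alice outputs a = 0 and sends x to Bob, who outputs b = x.
P = np.zeros((2, 2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        P[0, x, x, y] = 1.0
print(gyni_lhs(P))   # 0.5, the causal bound
```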

The notion of causal correlations can be generalized to more parties, although one has to take into account the fact that, in a given round of the experiment, the causal order of some parties may depend on the inputs and outputs of previous parties [23, 25]. In this paper we will primarily, in section 3, focus on applying the entropic approach to bipartite causal correlations, before returning to the multipartite case in section 4.

2.2. The entropic approach and marginal problems

Below we introduce the basic notions concerning entropy cones and marginal scenarios. We then review the entropic characterization of marginal scenarios [28] using two complementary methods, the first considering the entropies of the variables composing a given causal model, and the second based on the counterfactual approach to correlations. To illustrate concretely and contrast these two methods, we apply them to the well-known Bell scenario. Readers well-familiarized with the entropic approach may prefer to skip these expository examples.

2.2.1. Entropy and Shannon cones

Let $S=\{{X}_{1},\,\ldots ,\,{X}_{n}\}$ be a set of n random variables taking values ${x}_{1},\,\ldots ,\,{x}_{n}$, whose joint distribution $P({x}_{1},\,\ldots ,\,{x}_{n})$ we wish to characterize entropically. For every nonempty subset $T\subset S$ we shall denote by ${{\boldsymbol{X}}}_{T}={({X}_{i})}_{{X}_{i}\in T}$ the joint random variable that involves all variables in T, taking values ${{\boldsymbol{x}}}_{T}={({x}_{i})}_{{X}_{i}\in T}$. We can then compute the marginal Shannon entropies $H({{\boldsymbol{X}}}_{T})=H(T)$ from the marginal probability distributions $P({{\boldsymbol{X}}}_{T}={{\boldsymbol{x}}}_{T})=P({{\boldsymbol{x}}}_{T})$ as

$H({{\boldsymbol{X}}}_{T})=-{\sum }_{{{\boldsymbol{x}}}_{T}}P({{\boldsymbol{x}}}_{T})\,{\mathrm{log}}_{2}\,P({{\boldsymbol{x}}}_{T}).$    (3)

Together with $H(\varnothing ):= 0$, every global probability distribution $P({x}_{1},\,\ldots ,\,{x}_{n})$ thus specifies ${2}^{n}$ real numbers in the entropic description, which can be expressed as the components of a $({2}^{n})$-dimensional vector ${\boldsymbol{h}}=(H(\varnothing ),H({X}_{1}),\,\ldots ,\,H({X}_{1}{X}_{2}),\,\ldots ,\,H({X}_{1}\ldots {X}_{n}))={(H(T))}_{T\subset S}$ in ${{\mathbb{R}}}^{{2}^{n}}$.
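As a concrete illustration, the following minimal Python sketch computes all ${2}^{n}$ components of the entropy vector from a joint distribution given as an n-dimensional array (the function names entropy and entropy_vector are ours):

```python
import itertools
import numpy as np

def entropy(p):
    """Shannon entropy (base 2) of a flat probability array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def entropy_vector(P):
    """Entropies H(T) for every subset T of the variables of a joint
    distribution P, given as an n-dimensional array summing to 1.
    Returns a dict mapping tuples of variable indices to entropies."""
    n = P.ndim
    h = {(): 0.0}
    for r in range(1, n + 1):
        for T in itertools.combinations(range(n), r):
            marginal = P.sum(axis=tuple(i for i in range(n) if i not in T))
            h[T] = entropy(marginal.ravel())
    return h

# Example: two independent uniform bits X1, X2 and their XOR X3.
P = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        P[x1, x2, x1 ^ x2] = 0.25
h = entropy_vector(P)
print(h[(0,)], h[(0, 1)], h[(0, 1, 2)])   # 1.0 2.0 2.0
```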

A fundamental problem in information theory is to decide whether a given vector is an entropy vector, that is, if it is obtainable from some probability distribution. The (closure of the) region of valid entropy vectors

${{\rm{\Gamma }}}_{{S}}^{* }:= \overline{\{{\boldsymbol{h}}\in {{\mathbb{R}}}^{{2}^{n}}\,|\,{\boldsymbol{h}}\ \text{is the entropy vector of some}\ P({x}_{1},\,\ldots ,\,{x}_{n})\}}$    (4)

is known to be a convex cone, called the entropy cone (see [41] for a comprehensive discussion of entropy cones). There is no known explicit description of ${{\rm{\Gamma }}}_{{S}}^{* }$, so one generally has to rely on an approximation of it. A well-known and very useful outer approximation of ${{\rm{\Gamma }}}_{{S}}^{* }$ is the so-called Shannon cone ${{\rm{\Gamma }}}_{{S}}$, defined by the elemental inequalities

$H({X}_{i}\,|\,S\setminus \{{X}_{i}\})\geqslant 0,\qquad I({X}_{i}:{X}_{j}\,|\,T)\geqslant 0,$    (5)

for all $1\leqslant i,j\leqslant n$, $i\ne j$, and $T\subset S\setminus \{{X}_{i},{X}_{j}\}$. That is, the Shannon cone ${{\rm{\Gamma }}}_{{S}}$ is described by a finite system of $m=n+{2}^{n-2}\left(\genfrac{}{}{0em}{}{n}{2}\right)$ linear inequalities, which one can write in the form $I{\boldsymbol{h}}\leqslant {\bf{0}}$, where I is an $m\times {2}^{n}$ real matrix and ${\bf{0}}$ a vector with null entries. The inequalities in equation (5) are the minimal set of inequalities implying the monotonicity of entropy, i.e., $H(U| T):= H({TU})-H(T)\geqslant 0$, and the submodularity (or strong subadditivity), i.e., $I(U:V| T):= H({TU})+H({TV})-H({TUV})-H(T)\geqslant 0$, for any subsets $T,U,V\subset S$. These inequalities and any combination thereof are known as Shannon-type inequalities. It is known that for $n\leqslant 3$ variables every inequality delimiting the entropy cone ${{\rm{\Gamma }}}_{S}^{* }$ is of the Shannon type; however, this is not the case for $n\gt 3$ [41].
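The elemental inequalities can be enumerated mechanically; the following sketch (an illustration with names of our choosing, not the code used for the results below) builds the rows of the matrix I for the system $I{\boldsymbol{h}}\leqslant {\bf{0}}$ and checks the count of m inequalities quoted above:

```python
import itertools
from math import comb

def elemental_inequalities(n):
    """Rows I such that I.h <= 0 encodes the elemental Shannon inequalities
    for n variables; h is indexed by frozensets of {0, ..., n-1}."""
    subsets = [frozenset(T) for r in range(n + 1)
               for T in itertools.combinations(range(n), r)]
    idx = {T: k for k, T in enumerate(subsets)}
    rows = []
    full = frozenset(range(n))
    # Monotonicity: H(S \ {i}) - H(S) <= 0
    for i in range(n):
        row = [0] * len(subsets)
        row[idx[full - {i}]] += 1
        row[idx[full]] -= 1
        rows.append(row)
    # Submodularity, written as -I(X_i : X_j | T) <= 0
    for i, j in itertools.combinations(range(n), 2):
        rest = [k for k in range(n) if k not in (i, j)]
        for r in range(len(rest) + 1):
            for T in itertools.combinations(rest, r):
                T = frozenset(T)
                row = [0] * len(subsets)
                row[idx[T | {i}]] -= 1
                row[idx[T | {j}]] -= 1
                row[idx[T | {i, j}]] += 1
                row[idx[T]] += 1
                rows.append(row)
    return rows

n = 4
rows = elemental_inequalities(n)
assert len(rows) == n + comb(n, 2) * 2 ** (n - 2)   # m = n + C(n,2) 2^(n-2)
```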

The inequalities characterizing the Shannon cone simply arise from demanding that the function $P({{\boldsymbol{x}}}_{T})$ appearing in (3) should be identified with a valid probability distribution (i.e., it should be non-negative and normalized). However, one often wishes to consider (and characterize the entropy vectors for) situations where additional constraints on the random variables are known. For example, Xi and Xj might be known to be independent, which implies that $P({x}_{i},{x}_{j})=P({x}_{i})P({x}_{j})$. Such independence constraints, which are nonlinear in terms of probabilities, define simple linear constraints in terms of entropies, e.g., $P({x}_{i},{x}_{j})=P({x}_{i})P({x}_{j})\to H({X}_{i}{X}_{j})=H({X}_{i})+H({X}_{j})$. These extra constraints can be easily incorporated into the entropic framework since they define a linear subspace, which we denote ${{L}}_{{ \mathcal C }}$, characterized by linear equalities. When combined with the elemental inequalities one obtains a new finite system of inequalities ${I}^{{\prime} }{\boldsymbol{h}}\leqslant {\bf{0}}$ characterizing the 'constrained Shannon cone' ${{\rm{\Gamma }}}_{{S}}\,\bigcap \,{{L}}_{{ \mathcal C }}$.

In some cases, one may also wish to add linear inequality constraints which, in general, may give rise to more general polyhedra described by inhomogeneous systems of linear inequalities ${I}^{{\prime} }{\boldsymbol{h}}\leqslant {\boldsymbol{\beta }}$ [42]. In such cases we will again denote the set of vectors ${\boldsymbol{h}}$ satisfying these additional constraints as ${{L}}_{{ \mathcal C }};$ we will return to this point in more detail in section 3.

2.2.2. Marginal scenarios

Consider again a set of random variables $\{{X}_{1},\,\ldots ,\,{X}_{n}\}$ with a joint probability distribution $P({x}_{1},\,\ldots ,\,{x}_{n})$. We often encounter situations where not all variables, or combinations thereof, are empirically accessible. For example, our system of interest could be composed of three random variables ${X}_{1},{X}_{2},{X}_{3}$ but, for some reason, we can access at most two of them at a time, thus implying that we cannot know their joint entropy $H({X}_{1}{X}_{2}{X}_{3})$. Alternatively, there might be variables that represent latent factors [2] and that, for this reason, are unobservable. In such cases, we face a marginal problem: decide whether some given information on the marginals is compatible with a global description fulfilling certain constraints (for example the elemental entropy inequalities). In the example with three variables, it is easy to see that the elemental inequalities imply that

$H({X}_{1}{X}_{2})+H({X}_{1}{X}_{3})\geqslant H({X}_{1})+H({X}_{2}{X}_{3}).$    (6)

That is, the global structure of entropy vectors implies nontrivial constraints (which are not elemental inequalities (5)) that should be respected by any marginal information compatible with it.

More formally, given a set of random variables $S=\{{X}_{1},\,\ldots ,\,{X}_{n}\}$, a marginal scenario is a collection of subsets ${ \mathcal M }=\{{M}_{1},\,\ldots ,\,{M}_{| { \mathcal M }| }\}$, ${M}_{j}\subset S$ representing those variables for which we have access to the probability distribution $P({{\boldsymbol{x}}}_{{M}_{j}})$ (and thus to $H({M}_{j})$). Clearly, ${M}_{j}\in { \mathcal M }$ and ${M}_{j}^{{\prime} }\subset {M}_{j}$ implies ${M}_{j}^{{\prime} }\in { \mathcal M }$, that is, given some probability distribution we also have access to any marginal of it. In a slight abuse of notation we will therefore write ${ \mathcal M }$ only in terms of its maximal subsets, since these are sufficient to specify the entire marginal scenario; the complete representation of ${ \mathcal M }$, which explicitly includes all (not necessarily maximal) subsets T for which the marginal distribution $P({{\boldsymbol{x}}}_{T})$ is accessible, will be denoted ${{ \mathcal M }}^{{\rm{c}}}=\{T\,| \,T\subset {M}_{j},{M}_{j}\in { \mathcal M }\}$. In the example above the marginal scenario would then be represented as ${ \mathcal M }=\{\{{X}_{1},{X}_{2}\},\{{X}_{1},{X}_{3}\},\{{X}_{2},{X}_{3}\}\}$, or ${{ \mathcal M }}^{{\rm{c}}}=\{\varnothing ,\{{X}_{1}\},\{{X}_{2}\},\{{X}_{3}\},\{{X}_{1},{X}_{2}\},\{{X}_{1},{X}_{3}\},\{{X}_{2},{X}_{3}\}\}$.

In general we are interested in characterizing the entropy cone ${{\rm{\Gamma }}}_{{ \mathcal M }}^{* }$ associated with a marginal scenario ${ \mathcal M }$, thus obtaining constraints implied by the global entropy cone on the marginal subspace of interest. Geometrically, this corresponds to the projection of the original entropy cone onto the subspace of entropy vectors ${\boldsymbol{h}}={(H(T))}_{T\in {{ \mathcal M }}^{{\rm{c}}}}\in {{\mathbb{R}}}^{| {{ \mathcal M }}^{{\rm{c}}}| }$, corresponding to the variables in ${{ \mathcal M }}^{{\rm{c}}}$. Since, in practice, we work with the Shannon cone ${{\rm{\Gamma }}}_{{S}}$—possibly constrained by some further linear constraints specifying a subset of entropy vectors ${{L}}_{{ \mathcal C }}$, as described previously—which is characterized by a finite system of inequalities, this projection corresponds to a simple variable elimination of all the terms not contained in ${{ \mathcal M }}^{{\rm{c}}}$ [28, 43, 44]. After removing redundant inequalities, the remaining inequalities are facets (i.e., the boundaries) of the Shannon cone, or more generally polyhedron, in the observable marginal subspace. Formally, the marginal Shannon polyhedron ${{\rm{\Gamma }}}_{{ \mathcal M }}$ is defined as

${{\rm{\Gamma }}}_{{ \mathcal M }}:= {{\rm{\Pi }}}_{{ \mathcal M }}({{\rm{\Gamma }}}_{{S}}\cap {{L}}_{{ \mathcal C }}),$    (7)

where ${{\rm{\Pi }}}_{{ \mathcal M }}$ denotes the projection onto the coordinates associated with the marginal scenario ${ \mathcal M }$—i.e., onto the coordinates H(T) with $T\in {{ \mathcal M }}^{{\rm{c}}}$.
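In practice this projection amounts to Fourier–Motzkin elimination of the coordinates outside ${{ \mathcal M }}^{{\rm{c}}}$ [43]. A toy illustration of a single elimination step (our own sketch, using exact rational arithmetic; redundant inequalities are not removed here) might look as follows:

```python
from fractions import Fraction

def fm_eliminate(ineqs, k):
    """One Fourier-Motzkin step: given inequalities a.z <= b encoded as
    (a, b) with Fraction coefficients, eliminate coordinate k and return
    inequalities on the remaining coordinates."""
    zero, pos, neg = [], [], []
    for a, b in ineqs:
        (zero if a[k] == 0 else pos if a[k] > 0 else neg).append((a, b))
    out = [(a[:k] + a[k + 1:], b) for a, b in zero]
    for ap, bp in pos:
        for an, bn in neg:
            # combine with positive multipliers so the coefficient of z_k cancels
            a = [ap[i] * (-an[k]) + an[i] * ap[k] for i in range(len(ap))]
            b = bp * (-an[k]) + bn * ap[k]
            out.append((a[:k] + a[k + 1:], b))
    return out

# Toy example: project {h1 >= 0, h2 >= 0, h1 + h2 <= 1} onto h1 by eliminating h2.
F = Fraction
ineqs = [([F(-1), F(0)], F(0)),    # -h1 <= 0
         ([F(0), F(-1)], F(0)),    # -h2 <= 0
         ([F(1), F(1)], F(1))]     #  h1 + h2 <= 1
print(fm_eliminate(ineqs, 1))      # yields -h1 <= 0 and h1 <= 1
```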

2.2.3. Probability structures

The characterization of entropy cones (or polyhedra) and marginal problems outlined above can be easily extended to the case where we no longer assume that there is a well-defined global probability distribution over all the variables in the set S. Instead, we may assume that only certain subsets of variables have such a joint distribution, and that only the marginals of certain subsets of these subsets are empirically accessible. This type of restriction may be imposed by assumptions about the underlying physical theory being described, as will be clear in the example we discuss in section 2.2.5.

We will denote the collection of subsets of S for which we assume joint probability distributions exist by ${ \mathcal S }=\{{S}_{1},\,\ldots ,\,{S}_{| { \mathcal S }| }\}$, with each ${S}_{i}\subset S$ such that ${\cup }_{i}{S}_{i}=S;$ we call ${ \mathcal S }$ the probability structure6 . As for the marginal scenario, we will represent ${ \mathcal S }$ by just its maximal subsets in a slight abuse of notation; the complete representation of ${ \mathcal S }$, that explicitly includes all subsets for which a joint probability distribution exists, will similarly be denoted ${{ \mathcal S }}^{{\rm{c}}}$. In such a situation the entropies H(T) cannot be defined for all subsets $T\subset S$, but only for the subsets in ${{ \mathcal S }}^{{\rm{c}}}$. The entropy vectors we shall consider will thus be defined here as ${\boldsymbol{h}}={(H(T))}_{T\in {{ \mathcal S }}^{{\rm{c}}}}\in {{\mathbb{R}}}^{| {{ \mathcal S }}^{{\rm{c}}}| }$. Again, no explicit characterization is known for the set of valid entropy vectors; we will instead rely on its outer approximation characterized via the Shannon constraints, now restricted to each subset ${S}_{i}\in { \mathcal S }$. Namely, the Shannon cone of interest is now

${{\rm{\Gamma }}}^{{ \mathcal S }}:= {\bigcap }_{{S}_{i}\in { \mathcal S }}{{\rm{\Gamma }}}_{{{S}}_{i}},$    (8)

where ${{\rm{\Gamma }}}_{{{\rm{S}}}_{i}}\subset {{\mathbb{R}}}^{| {{ \mathcal S }}^{{\rm{c}}}| }$ is the cone defined by the Shannon inequalities on the variables in Si, which, in particular, leave the other variables in $S\setminus {S}_{i}$ unconstrained. In the extremal case where we do assume a global joint probability distribution for all variables we have ${ \mathcal S }=\{S\}$, ${{ \mathcal S }}^{{\rm{c}}}={2}^{S}$, and we recover ${{\rm{\Gamma }}}^{{ \mathcal S }}={{\rm{\Gamma }}}_{{S}}$.

One can similarly consider marginal scenarios under a given probability structure ${ \mathcal S }$, with the constraint that marginals must arise from existing probability distributions, i.e., for all ${M}_{j}\in { \mathcal M }$ there must exist an ${S}_{i}\in { \mathcal S }$ such that ${M}_{j}\subset {S}_{i}$. One can also add linear constraints to the entropy vectors under consideration, as before, represented by some subset of entropy vectors ${{L}}_{{ \mathcal C }}$. We can thus define the marginal Shannon polyhedron associated with ${ \mathcal S },{ \mathcal M }$, and ${{L}}_{{ \mathcal C }}$ as

${{\rm{\Gamma }}}_{{ \mathcal M }}^{{ \mathcal S }}:= {{\rm{\Pi }}}_{{ \mathcal M }}({{\rm{\Gamma }}}^{{ \mathcal S }}\cap {{L}}_{{ \mathcal C }}).$    (9)

The choice of probability structure can generally be considered on a case-by-case basis depending on the scenario being modeled. Unless otherwise stated we will take ${ \mathcal S }=\{S\}$ but, as we will discuss, this will not always be the most pertinent choice.

2.2.4. The entropic characterization of causal Bayesian networks

In order to describe the causal relations between random variables, we will first use the framework of causal Bayesian networks7 [2]. Such networks can be conveniently represented as directed acyclic graphs (DAGs), in which each node represents a variable and directed edges (arrows) encode the causal relations between them. A set of variables $S=\{{X}_{1},\,\ldots ,\,{X}_{n}\}$ forms a Bayesian network with respect to a given DAG if and only if the variables admit a global probability distribution $P({x}_{1},\,\ldots ,\,{x}_{n})$, i.e., ${ \mathcal S }=\{S\}$, that factorizes according to

$P({x}_{1},\,\ldots ,\,{x}_{n})={\prod }_{i=1}^{n}P({x}_{i}\,|\,{\mathrm{pa}}_{i}),$    (10)

where ${\mathrm{Pa}}_{i}$ stands for the graph-theoretical parents of variable Xi, that is, all those variables Xj that have an outgoing edge pointing to Xi in the DAG under consideration. The decomposition above implies a set of CIs, which are either independence relations of the type $P({x}_{i},{x}_{j})=P({x}_{i})P({x}_{j})$ (in which case we write ${X}_{i}\perp \perp {X}_{j}$) or conditional independence relations such as $P({x}_{i},{x}_{j}| {x}_{k})=P({x}_{i}| {x}_{k})P({x}_{j}| {x}_{k})$ (denoted ${X}_{i}\perp \perp {X}_{j}\,| \,{X}_{k}$)8 . Given a DAG, a complete list of CIs can be obtained via the d-separation criterion [2]. If the arrows in the DAG representation of a Bayesian network describe the direct causal relations between the variables in question, then we call it a causal Bayesian network.

Entropically, these CIs correspond to simple linear relations: ${X}_{i}\perp \perp {X}_{j}\to H({X}_{i}{X}_{j})=H({X}_{i})+H({X}_{j})$ and ${X}_{i}\perp \perp {X}_{j}\,| \,{X}_{k}\to H({X}_{i}{X}_{j}| {X}_{k})=H({X}_{i}| {X}_{k})+H({X}_{j}| {X}_{k})$. As a result, the set of entropy vectors compatible with a given DAG is the intersection of the entropy cone ${{\rm{\Gamma }}}_{{S}}^{* }$ with the linear subspace ${{L}}_{\mathrm{CI}}$ defined by the set of linear constraints that characterize the CIs associated with the DAG [29, 35]. In practice, we again rely on the outer approximation given by the intersection of the Shannon cone ${{\rm{\Gamma }}}_{{S}}$ with ${{L}}_{\mathrm{CI}}$.

If all the variables in a DAG are observable, in order to check the compatibility of a given entropy vector with the DAG it suffices to check whether all the entropic CIs are satisfied. However, we are often interested in DAGs containing latent, nonobservable, variables. Splitting the n variables making up the DAG into j observable variables ${O}_{1},\,\ldots ,\,{O}_{j}$ and n − j latent variables ${{\rm{\Lambda }}}_{1},\,\ldots ,\,{{\rm{\Lambda }}}_{n-j}$ we thus need to compute the marginal Shannon cone ${{\rm{\Pi }}}_{{ \mathcal M }}({{\rm{\Gamma }}}_{{S}}\bigcap {{L}}_{\mathrm{CI}})$ where ${ \mathcal M }=\{\{{O}_{1},\,\ldots ,\,{O}_{j}\}\}$.

As an illustration, consider the paradigmatic causal Bayesian network for a local hidden variable model satisfying Bell's assumption of local causality [5, 7]. The relevant DAG, shown in figure 1, has five variables, four of which are observable while the hidden variable Λ is not: in the context of Bell's Theorem the 'hidden variables' indeed refer to the latent factors introduced above. This DAG represents the physical scenario where two distant observers receive physical systems produced by a common source (the hidden variable Λ) and make different measurements (choices of which are labeled by X and Y), obtaining measurement outcomes (represented by the variables A and B). That is, the probability structure is ${ \mathcal S }=\{S\}$ with $S=\{X,Y,A,B,{\rm{\Lambda }}\}$, and the marginal scenario is ${ \mathcal M }=\{\{X,Y,A,B\}\}$. Some of the CIs implied by this DAG are given by $P({xy}\lambda )=P(x)P(y)P(\lambda )$ (the measurement independence assumption), $P(a| {xyb}\lambda )=P(a| x\lambda )$ and $P(b| {xya}\lambda )=P(b| y\lambda )$ (the locality assumption) that in turn imply (after eliminating the hidden variable Λ) Bell inequalities for the observed variables [5, 7]. These constraints also imply the no-signaling constraints $P(a| {xy})=P(a| x)$ and $P(b| {xy})=P(b| y)$.

Figure 1. DAG showing the causal structure of a local hidden variable model for the Bell scenario.

This example shows that, in general, DAGs with latent variables imply CIs both on the level of observable and unobservable variables. The CIs involving latent variables are not directly testable but imply further constraints (Bell inequalities, in the example above) that can be tested to check whether the observable behavior is compatible with the proposed underlying DAG.

If, instead of characterizing the allowed probability distributions, we consider the entropic description of the Bell scenario, i.e., the Shannon cone together with the linear constraints arising from the DAG's CIs, then after eliminating the latent variable Λ one obtains no further constraints other than the elemental inequalities (which are trivial since they are respected by all probability distributions) and the observable CIs implied by the DAG: $H({XY})=H(X)+H(Y)$, $H(A| {XY})=H(A| X)$ and $H(B| {XY})=H(B| Y)$ [45]. The first CI relation represents the independence of the two measurement choices, while the two latter ones are no-signaling conditions. Thus, for this particular causal Bayesian network, when the entropic approach is applied to the variables making up the DAG, one does not obtain any nontrivial constraints (i.e., entropic Bell inequalities) [45]. However, there are many examples of Bayesian networks for which one does obtain such nontrivial constraints [9, 29, 35]. In fact, as we will see in section 3, a slight modification of this method also leads to nontrivial constraints on causal correlations.

2.2.5. The entropic characterization of counterfactuals

While the DAG method fails to provide nontrivial constraints for the Bell scenario (a result that can be extended to a larger class of 'line-like' Bayesian networks [45]), it has been known for some time that entropic Bell inequalities can be derived using different methods [26]. Interestingly, these inequalities can even be turned into necessary and sufficient conditions for a given probability distribution to satisfy Bell's local causality assumption [31].

The method that allows such inequalities to be derived is motivated by the realization that the entropic approach can be applied to any marginal scenario for a relevant set of random variables [28], and not only those arising from causal Bayesian networks. In particular, when we are interested in constraints on conditional distributions of the form $P({ab}| {xy})$, where we have distinct sets of input and output variables, we may consider the output variables conditioned on certain relevant input variables (e.g. Axy and Bxy, where the notation Axy denotes the random variable $A| (X=x,Y=y)$)9 . The choice of relevant input variables to condition on, as well as the appropriate probability structure, will depend on the physical situation being considered. In general, a global probability distribution may not exist on such 'counterfactual' variables even if one does exist on the unconditioned variables.

Let us illustrate how this method may be applied by considering again its application to the Bell scenario. Instead of considering all the input and output variables as in the DAG approach (e.g. $X,Y,A,B$), one can consider copies of the output variables conditioned on the corresponding party's input, i.e., ${A}_{x},{B}_{y}$, where Ax denotes the random variable $A| (X=x)$. Indeed, due to the no-signaling constraints, the output variables can only depend on the corresponding local input. Furthermore, from Fine's Theorem [46] we know that Bell's local causality assumption is equivalent to the existence of a well defined (although empirically inaccessible) joint probability distribution $P({a}_{1},\,\ldots ,\,{a}_{| { \mathcal X }| },{b}_{1},\,\ldots ,\,{b}_{| { \mathcal Y }| })$ (where ${ \mathcal X }=\{1,\,\ldots ,\,| { \mathcal X }| \}$ and ${ \mathcal Y }=\{1,\,\ldots ,\,| { \mathcal Y }| \}$ denote the alphabets of Alice and Bob's inputs) on these variables10 that marginalizes to the observable one given by $P({ab}| {xy})=P({a}_{x},{b}_{y})$. Hence, the appropriate probability structure for local correlations in the Bell scenario is ${ \mathcal S }=\{S\}$ with $S=\{{A}_{1},\ldots ,{A}_{| { \mathcal X }| },{B}_{1},\,\ldots ,\,{B}_{| { \mathcal Y }| }\}$, and we consider the Shannon cone ${{\rm{\Gamma }}}_{S}={{\rm{\Gamma }}}^{{ \mathcal S }}$ that contains all ${2}^{| { \mathcal X }| +| { \mathcal Y }| }$-dimensional entropy vectors ${\boldsymbol{h}}=(H(\varnothing ),H({A}_{1}),\,\ldots ,\,H({B}_{1}),\,\ldots ,\,H({A}_{1}\ldots {A}_{| { \mathcal X }| }{B}_{1}\ldots {B}_{| { \mathcal Y }| }))$. The marginal scenario in this case is simply ${ \mathcal M }={\{\{{A}_{x},{B}_{y}\}\}}_{x,y}$ and local correlations are then characterized by the cone ${{\rm{\Pi }}}_{{ \mathcal M }}({{\rm{\Gamma }}}_{S})$.

In contrast to the characterization based directly on the DAG variables, this approach leads to nontrivial entropic inequalities (i.e., not obtainable from the elemental inequalities in equation (5)) in the Bell scenario. For example, for two measurement settings per party, which we label in this case x, y = 0, 1, one obtains the Braunstein–Caves inequality [26] together with its symmetries obtained by relabeling the inputs, namely,

$I({A}_{0}:{B}_{0})+I({A}_{0}:{B}_{1})+I({A}_{1}:{B}_{0})-I({A}_{1}:{B}_{1})-H({A}_{0})-H({B}_{0})\leqslant 0,$    (11)

where $I({A}_{x}:{B}_{y}):= H({A}_{x})+H({B}_{y})-H({A}_{x}{B}_{y})$ is the mutual information between the variables Ax and By. This inequality can be understood as the entropic counterpart of the paradigmatic CHSH inequality [47].
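As an illustration, and assuming the form of inequality (11) given above, its left-hand side can be evaluated directly from a no-signaling behavior $P({ab}| {xy})$ with binary inputs (a sketch of ours, with illustrative function names):

```python
import numpy as np

def H(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def braunstein_caves_lhs(P):
    """Left-hand side of the Braunstein-Caves inequality (11), for a
    no-signalling behaviour P[a, b, x, y] = P(ab|xy) with binary inputs."""
    def I(x, y):
        Pab = P[:, :, x, y]
        return H(Pab.sum(axis=1)) + H(Pab.sum(axis=0)) - H(Pab.ravel())
    HA0 = H(P[:, :, 0, 0].sum(axis=1))   # H(A_0), using no-signalling
    HB0 = H(P[:, :, 0, 0].sum(axis=0))   # H(B_0)
    return I(0, 0) + I(0, 1) + I(1, 0) - I(1, 1) - HA0 - HB0

# A local deterministic behaviour a = x, b = y satisfies the inequality.
P = np.zeros((2, 2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        P[x, y, x, y] = 1.0
print(braunstein_caves_lhs(P) <= 1e-12)   # True: the bound is satisfied
```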

Although the choice of probability structure above corresponds, via Fine's theorem, to the assumption of a local hidden variable theory, one can also consider other possibilities. For instance, taking ${ \mathcal S }={ \mathcal M }$ amounts to assuming a nonsignaling theory [40]. In this case, the entropy cone is characterized only by the Shannon inequalities and one can obtain a characterization of the extremal rays of the cone, corresponding to the entropic analog of Popescu–Rohrlich boxes [37].

In general (i.e., beyond the simplest Bell scenario), both methods based on the variables in a causal Bayesian network and on counterfactual variables can lead to nontrivial constraints [9, 29, 35, 37, 48, 49]. To conclude this section, let us nonetheless highlight an important difference between the two methods: while the former is valid for arbitrary input alphabets, the latter fixes the number of inputs to which the inequalities apply.

3. Bipartite entropic causal inequalities

With the entropic approach to characterizing sets of correlations outlined, we can now proceed to apply this approach to causal correlations, so as to derive entropic causal inequalities. We consider in this section the bipartite case. We first show how the method based on causal Bayesian networks can be adapted to characterize causal correlations, before considering also the method based on counterfactual variables.

3.1. Characterization based on causal Bayesian networks

3.1.1. Conditional DAGs for bipartite causal correlations

The ability to apply the entropic approach to DAGs, as outlined in section 2.2.4, is a powerful tool for characterizing the correlations obtainable within arbitrary causal networks. However, the notion of causal correlations defined in equation (1) is somewhat more general and cannot be directly expressed within the framework of causal Bayesian networks. In order to see why this is the case, let us first note that the random variables of interest are $X,Y,A,B$, representing the inputs $X,Y$ and outputs $A,B$ for Alice and Bob. Note that since we consider signaling scenarios here, unlike in the Bell scenario, we do not need to include any latent variable Λ in our description to account for shared randomness, since this can be established via local randomness and communication.

If Alice and Bob share a correlation compatible with a fixed causal order (i.e. either ${\rm{A}}\prec {\rm{B}}$ or ${\rm{B}}\prec {\rm{A}}$), then the functional dependences between $X,Y,A,B$ can indeed be expressed as a DAG—specifically, the two DAGs containing these variables in figure 2. However, a causal correlation may in general not be compatible with any fixed causal order, but may require a mixture thereof. This has some similarities with the situation in the Svetlichny definition of genuine multipartite nonlocality [50, 51] where a convex mixture of different DAGs has to be considered.

Figure 2. DAGs for bipartite causal correlations. The latent 'switch' variable Q determines which DAG, corresponding to the fixed causal order ${\rm{A}}\prec {\rm{B}}$ (for Q = 0, top) or ${\rm{B}}\prec {\rm{A}}$ (for Q = 1, bottom), is 'activated'.

To tackle this problem it is necessary to find a way to take into account the constraints arising separately from each of the two fixed causal orders, and then to combine them to obtain those satisfied by causal correlations. In order to do this, we exploit the fact that any mixture of fixed-order causal correlations can be seen as arising from a latent variable that determines the causal order for each individual experiment [23]. We thus introduce a new random variable Q which we call a 'switch', and which determines univocally the appropriate causal Bayesian network for each trial. The resulting causal model is shown in figure 2, where the DAG with ${\rm{A}}\prec {\rm{B}}$ is used for Q = 0, and the one with ${\rm{B}}\prec {\rm{A}}$ for Q = 1. By identifying ${q}_{0},{q}_{1}$ in equation (1) as ${q}_{0}=P(Q=0)$, and ${q}_{1}=P(Q=1)$, one can readily see that this description is equivalent to the definition of causal correlations in equation (1).

Both DAGs imply the independence of the inputs, $X\perp \perp Y$. The DAG for Q = 0 (i.e., for ${\rm{A}}\prec {\rm{B}}$) also implies the CI relation $A\perp \perp Y| X$ (i.e. that there is no signaling from ${\rm{B}}$ to ${\rm{A}}$), while the DAG for Q = 1 implies $B\perp \perp X| Y$ instead. In addition, the switch variable Q should be independent of Alice and Bob's inputs X and Y, so that we have ${XY}\perp \perp Q$, which, together with $X\perp \perp Y$, implies that $X\perp \perp Y\perp \perp Q$.

3.1.2. Shannon polyhedra of causal correlations

In order to use the 'conditional' causal Bayesian network in figure 2 to characterize the set of entropy vectors obtainable from causal correlations, we first note that we can directly use the techniques of section 2.2.4 to construct the Shannon cones for each of the two DAGs appearing in the figure conditioned on Q (i.e., for fixed-order correlations with ${\rm{A}}\prec {\rm{B}}$ or ${\rm{B}}\prec {\rm{A}}$). Denoting these cones ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$, we have

${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}:= {{\rm{\Gamma }}}_{{S}}\cap {{L}}_{{ \mathcal C }}^{{\rm{A}}\prec {\rm{B}}}$    (12)

and

${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}:= {{\rm{\Gamma }}}_{{S}}\cap {{L}}_{{ \mathcal C }}^{{\rm{B}}\prec {\rm{A}}},$    (13)

where ${{\rm{\Gamma }}}_{{S}}$ is the Shannon cone for the four variables in $S=\{X,Y,A,B\}$, the probability structure is simply ${ \mathcal S }=\{S\}$, and ${{L}}_{{ \mathcal C }}^{{\rm{A}}\prec {\rm{B}}}$ denotes the linear subspace defined by the CI constraints for the case ${\rm{A}}\prec {\rm{B}}$, namely, the equations $H({XY})=H(X)+H(Y)$ and $H({YA}| X)=H(Y| X)+H(A| X)$, and similarly for ${{L}}_{{ \mathcal C }}^{{\rm{B}}\prec {\rm{A}}}$. These cones are characterized by the systems of inequalities ${I}_{0}{\boldsymbol{h}}\leqslant {\bf{0}}$ and ${I}_{1}{\boldsymbol{h}}\leqslant {\bf{0}}$, where ${\boldsymbol{h}}={(H(T))}_{T\subset S}$.
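To make these constraint sets concrete, the following sketch (our own encoding over the 16 subsets of $S=\{X,Y,A,B\}$, with illustrative names) writes the two equalities defining ${{L}}_{{ \mathcal C }}^{{\rm{A}}\prec {\rm{B}}}$ as coefficient rows acting on the entropy vector:

```python
import itertools

VARS = ['X', 'Y', 'A', 'B']
SUBSETS = [frozenset(T) for r in range(len(VARS) + 1)
           for T in itertools.combinations(VARS, r)]
IDX = {T: k for k, T in enumerate(SUBSETS)}

def row(terms):
    """Coefficient vector of a linear constraint sum_T c_T H(T) = 0 on the
    16-dimensional entropy vector, from (subset-string, coefficient) pairs."""
    r = [0] * len(SUBSETS)
    for T, c in terms:
        r[IDX[frozenset(T)]] += c
    return r

# Equality constraints defining L_C^{A<B} (cf. equation (12)):
#   H(XY) - H(X) - H(Y) = 0                  X independent of Y
#   H(XYA) + H(X) - H(XY) - H(XA) = 0        i.e. H(YA|X) = H(Y|X) + H(A|X)
L_AB = [row([('XY', 1), ('X', -1), ('Y', -1)]),
        row([('XYA', 1), ('X', 1), ('XY', -1), ('XA', -1)])]
# L_C^{B<A} is obtained by swapping the roles of (X, A) and (Y, B).
print(len(SUBSETS), L_AB)
```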

Recall that in the probabilistic case the polytope of causal correlations is simply the convex hull of the polytopes of correlations for ${\rm{A}}\prec {\rm{B}}$ and ${\rm{B}}\prec {\rm{A}}$ [24], and with the new variable Q the definition in equation (1) can be rewritten as

$P({ab}| {xy})={\sum }_{q=0,1}P(Q=q)\,P({ab}| {xy},Q=q),$    (14)
with $P({ab}| {xy},Q=0)={P}^{{\rm{A}}\prec {\rm{B}}}({ab}| {xy})$ and $P({ab}| {xy},Q=1)={P}^{{\rm{B}}\prec {\rm{A}}}({ab}| {xy})$.

In contrast, the convex hull of the cones ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$ does not contain all entropy vectors of causal correlations due to the concavity of the Shannon entropy. Indeed, in appendix A we provide an explicit example of a causal correlation whose entropy vector is not contained in the convex hull $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$.

To see more precisely why this is the case, and how to give a correct entropic characterization of causal correlations, observe that, when taking a convex mixture of two causal correlations with different causal orders, the 'conditional entropy vectors' ${{\boldsymbol{h}}}_{0}={(H(T| Q=0))}_{T\subset S}$ and ${{\boldsymbol{h}}}_{1}={(H(T| Q=1))}_{T\subset S}$ must be contained in ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$, respectively, and thus satisfy ${I}_{0}{{\boldsymbol{h}}}_{0}\leqslant {\bf{0}}$ and ${I}_{1}{{\boldsymbol{h}}}_{1}\leqslant {\bf{0}}$. For any causal correlation, the convex mixture

${{\boldsymbol{h}}}_{\mathrm{conv}}:= {q}_{0}\,{{\boldsymbol{h}}}_{0}+{q}_{1}\,{{\boldsymbol{h}}}_{1}$    (15)

is thus contained in $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$. Observe now that, in contrast to the convex sum (14) defining causal correlations, ${{\boldsymbol{h}}}_{\mathrm{conv}}$ thus defined is equal to ${(H(T| Q))}_{T\subset S}$, rather than just ${(H(T))}_{T\subset S}$, and hence the convex hull of the fixed-order cones characterizes the conditional entropies (conditioned on the switch variable Q) obtainable with causal correlations, rather than the entropy vectors of causal correlations directly.

With the appropriate transformation, the inequalities $I{\boldsymbol{h}}\leqslant {\bf{0}}$ characterizing11 $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$ can be transformed into inequalities satisfied by the standard (i.e., nonconditional) entropy vector $\tilde{{\boldsymbol{h}}}={(H(T))}_{T\subset \tilde{S}}$ for the variables now in $\tilde{S}=S\cup \{Q\}$ (and the probability structure is consequently extended to $\widetilde{{ \mathcal S }}=\{\tilde{S}\}$). Specifically, each row ${\boldsymbol{I}}$ of the matrix I (defining each individual inequality ${\boldsymbol{I}}\cdot {\boldsymbol{h}}\leqslant 0$) must undergo the linear transformation ${{ \mathcal T }}_{Q}:{{\mathbb{R}}}^{{2}^{| S| }}\to {{\mathbb{R}}}^{{2}^{| S| +1}}$ mapping ${\boldsymbol{I}}\mapsto \tilde{{\boldsymbol{I}}}:= {{ \mathcal T }}_{Q}({\boldsymbol{I}})$ with the components of $\tilde{{\boldsymbol{I}}}$ given by12

${\tilde{I}}_{T\cup \{Q\}}={I}_{T},\qquad {\tilde{I}}_{\{Q\}}=-{\sum }_{\varnothing \ne T\subset S}{I}_{T},\qquad {\tilde{I}}_{T}=0,$    (16)

for all nonempty subsets $T\subset S$. We will denote by ${\mathrm{conv}}_{Q}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$ the cone of vectors $\tilde{{\boldsymbol{h}}}$ satisfying the resulting inequalities $\tilde{I}\tilde{{\boldsymbol{h}}}\leqslant {\bf{0}}$.
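The transformation ${{ \mathcal T }}_{Q}$ simply re-expresses each conditional entropy via $H(T| Q)=H({TQ})-H(Q)$. A minimal sketch of this coefficient mapping (our own illustration, assuming the component form given in equation (16)) is:

```python
import itertools

S = ['X', 'Y', 'A', 'B']
SUBSETS_S = [frozenset(T) for r in range(len(S) + 1)
             for T in itertools.combinations(S, r)]
SUBSETS_SQ = [frozenset(T) for r in range(len(S) + 2)
              for T in itertools.combinations(S + ['Q'], r)]
IDX_S = {T: k for k, T in enumerate(SUBSETS_S)}
IDX_SQ = {T: k for k, T in enumerate(SUBSETS_SQ)}

def condition_on_Q(I_row):
    """Map a row I (inequality sum_T I_T H(T) <= 0 over subsets of S) to a
    row over subsets of S u {Q} expressing the same inequality for entropies
    conditioned on Q, using H(T|Q) = H(TQ) - H(Q)."""
    out = [0] * len(SUBSETS_SQ)
    for T, k in IDX_S.items():
        c = I_row[k]
        if not T or c == 0:
            continue
        out[IDX_SQ[T | {'Q'}]] += c            # coefficient moves to H(TQ)
        out[IDX_SQ[frozenset({'Q'})]] -= c     # and -c accumulates on H(Q)
    return out

# Example: the monotonicity inequality H(XYA) - H(XYAB) <= 0 becomes
# H(XYAQ) - H(XYABQ) <= 0 (the H(Q) contributions cancel).
I_row = [0] * len(SUBSETS_S)
I_row[IDX_S[frozenset('XYA')]] = 1
I_row[IDX_S[frozenset('XYAB')]] = -1
out = condition_on_Q(I_row)
print(out[IDX_SQ[frozenset('XYAQ')]], out[IDX_SQ[frozenset('XYABQ')]],
      out[IDX_SQ[frozenset('Q')]])    # 1 -1 0
```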

To complete the characterization of entropy vectors for causal correlations, we recall that, in addition to the fact that any distribution on $\tilde{{S}}$ must give an entropy vector in the Shannon cone ${{\rm{\Gamma }}}_{\tilde{{S}}}$, the conditional DAG in figure 2 gives us the CI constraints $X\perp \perp Y\perp \perp Q$. Moreover, since Q is a binary variable (as there are only two orders to switch between) we have $H(Q)\leqslant 1$. A consequence of this final inequality constraint is that the set of entropy vectors under consideration will be characterized by an inhomogeneous system of inequalities of the form $\tilde{I}\tilde{{\boldsymbol{h}}}\leqslant \tilde{{\boldsymbol{\beta }}}$ for some $\tilde{{\boldsymbol{\beta }}}\in {{\mathbb{R}}}^{{2}^{| S| +1}}$ and is thus no longer a cone but a polyhedron. The polyhedron characterizing entropy vectors associated with the conditional DAG (when still including Q) is thus given by

${\widetilde{{\rm{\Gamma }}}}_{\mathrm{AB}}^{\mathrm{causal}}:= {{\rm{\Gamma }}}_{\tilde{{S}}}\,\cap \,{\mathrm{conv}}_{Q}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})\,\cap \,{{L}}_{{ \mathcal C }}(X\perp \perp Y\perp \perp Q,\ H(Q)\leqslant 1),$    (17)

where the notation ${{L}}_{{ \mathcal C }}(\cdot )$ denotes the subset (here, a polyhedron) in the entropy vector space defined by the corresponding linear constraints.

Finally, following the general approach presented in section 2.2, it remains just to eliminate the terms containing the (unobservable) switch variable Q in order to obtain the inequalities characterizing bipartite causal correlations. This is done by projecting ${\widetilde{{\rm{\Gamma }}}}_{\mathrm{AB}}^{\mathrm{causal}}$ onto the marginal scenario ${ \mathcal M }=\{S\}=\{\{X,Y,A,B\}\}$. We thus finally obtain the polyhedron

${{\rm{\Gamma }}}_{\mathrm{AB}}^{\mathrm{causal}}:= {{\rm{\Pi }}}_{{ \mathcal M }}({\widetilde{{\rm{\Gamma }}}}_{\mathrm{AB}}^{\mathrm{causal}}),$    (18)

which we shall refer to as the causal Shannon polyhedron or simply the causal polyhedron and is again characterized by an inhomogeneous system of inequalities $I^{\prime} {\boldsymbol{h}}\leqslant {\boldsymbol{\beta }}$ for some ${\boldsymbol{\beta }}\in {{\mathbb{R}}}^{{2}^{| S| }}$.

We emphasize that the construction given above is in fact not at all restricted to the description of causal correlations, and can be used to characterize arbitrary convex mixtures of different Bayesian networks. Furthermore, as we will see in section 4, this method can be generalized to convex combinations of more distributions, in our case corresponding to more than two causal orders in multipartite scenarios (and even correlations with 'dynamical causal order' [23, 25, 53]).

3.1.3. Entropic causal inequalities and their violation

The constructive description of the causal polyhedron ${{\rm{\Gamma }}}_{\mathrm{AB}}^{\mathrm{causal}}$ from equations (17) and (18) also makes it clear how we can characterize it, in practice, as a system of linear inequalities. A description of ${\widetilde{{\rm{\Gamma }}}}_{\mathrm{AB}}^{\mathrm{causal}}$ in terms of its facets is straightforwardly obtained by taking the union of the inequalities describing the individual cones and polyhedron appearing in equation (17) and eliminating redundant ones. The inequalities characterizing ${{\rm{\Gamma }}}_{\mathrm{AB}}^{\mathrm{causal}}$ can then be found by eliminating the terms not contained in the marginal scenario ${ \mathcal M }=\{S\}$, either by Fourier–Motzkin elimination [43] or by finding its extremal rays and projecting out the unwanted coordinates.

The resulting system of inequalities is thus satisfied by any bipartite causal correlation. However, many of these inequalities are either elemental inequalities (as in equation (5)) or can be obtained from these by using the independence constraint $X\perp \perp Y$, and thus represent trivial constraints. After characterizing the polyhedron in equation (18) and eliminating all trivial inequalities, i.e., those satisfied by any distribution P(xyab) with $X\perp \perp Y$, we find 35 novel entropic causal inequalities. Several of these inequalities are equivalent under the exchange of parties (i.e., exchanging $(X,A)\leftrightarrow (Y,B)$), and under this symmetry there are in fact 20 equivalence classes of entropic causal inequalities, the full list of which is given in appendix B. Of these, 10 have bounds of 0 (i.e., are of the form ${\boldsymbol{I}}\cdot {\boldsymbol{h}}\leqslant 0$), while the remaining 10 have nonzero bounds (resulting from a nontrivial dependence on H(Q) before this variable was eliminated; see appendix B). Simple interpretations of the entropic causal inequalities seem to be less forthcoming than for the bipartite causal inequalities in terms of probabilities [24] (for binary inputs and outputs—recall that the entropic inequalities given here are, in contrast, valid for any number of possible inputs and outputs). One of the simpler examples, which is symmetric under the exchange of parties, is

Equation (19)

Note that the fact that we find nontrivial inequalities is in stark contrast to the situation for Bell-type inequalities (and line-like causal Bayesian networks), where the DAG-based entropic method only leads to trivial inequalities obtainable from the elemental inequalities and no-signaling conditions [45].

While these entropic inequalities are obeyed by any bipartite causal correlation, we note that a priori they need not be tight. Indeed, recall that the Shannon cone is only an outer approximation to the true entropy cone, so it is thus interesting to study the tightness and violation of these inequalities more carefully.

Although one generally would not expect every point on the boundary of ${{\rm{\Gamma }}}_{\mathrm{AB}}^{\mathrm{causal}}$ to be obtainable by a causal correlation, it is nonetheless desirable to be able to saturate each inequality by some causal probability distribution for appropriate distributions for X and Y. By looking at deterministic causal distributions with binary inputs and outputs, which can easily be enumerated, we readily verified that all 10 families of inequalities that are bounded by 0 (given in equation (B1)) can indeed be saturated when taking uniformly distributed inputs. However, we were unable to find causal distributions, either by mixing binary ones or by considering more outputs, that saturate the remaining inequalities, and their tightness remains an open question.
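For reference, the deterministic causal strategies used in this check are easy to enumerate; a sketch of ours for the order ${\rm{A}}\prec {\rm{B}}$ with binary inputs and outputs (whose entropy vectors can then be computed with a routine like the entropy_vector sketch of section 2.2.1) is:

```python
import itertools
import numpy as np

def joint_from_strategy_AB(f, g):
    """P(x, y, a, b) for uniform binary inputs and a deterministic strategy
    compatible with A<B: a = f(x), b = g(x, y)."""
    P = np.zeros((2, 2, 2, 2))
    for x in (0, 1):
        for y in (0, 1):
            P[x, y, f[x], g[x, y]] = 0.25
    return P

# Enumerate all deterministic A<B strategies with binary inputs/outputs.
strategies = []
for f in itertools.product((0, 1), repeat=2):
    for bits in itertools.product((0, 1), repeat=4):
        g = {(x, y): bits[2 * x + y] for x in (0, 1) for y in (0, 1)}
        strategies.append(joint_from_strategy_AB(f, g))
print(len(strategies))   # 64 such strategies (plus 64 more for B<A)
```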

To understand now the violation by noncausal distributions of the entropic inequalities, we consider the extremal rays of the constrained Shannon cone

${{\rm{\Gamma }}}_{{S}}\cap {{L}}_{{ \mathcal C }}(X\perp \perp Y),$    (20)

which violate the inequalities13 . A crucial question is whether or not these extremal rays actually correspond to valid probability distributions (i.e., whether they support entropy vectors), and if not, whether the inequalities can nonetheless be violated.

In order to look at this, it is instructive to first restrict our attention to distributions satisfying $H(X)\leqslant 1$, $H(Y)\leqslant 1$, $H(A)\leqslant 1$ and $H(B)\leqslant 1$. These constraints are satisfied by all distributions with binary inputs and outputs, and this therefore also allows us to compare the violation of the entropic causal inequalities to the violation of standard causal inequalities that are understood well in this scenario [24]. Imposing these constraints on the cone in equation (20), one obtains a polytope with extremal points corresponding to the extremal rays of the cone scaled to satisfy these constraints (together with the null vertex ${\bf{0}}$). Under these we found that the 10 inequalities in equation (B1) and the two inequalities in equation (B2) could be violated, although the latter are (in that case) weaker than, and implied by, the former and are thus redundant. The remaining 8 inequalities in equations (B3) and (B4) cannot be violated. All in all, the set of binary causal correlations is entropically characterized by the 10 inequalities in equation (B1) that are bounded by 0.

Amongst the extremal points violating each of these inequalities, those that give the maximal violation all satisfy $H(X)=H(Y)=1$ and $H({XY})=H({XYAB})$ and thus, if realizable, correspond to deterministic conditional distributions taken with uniformly distributed inputs X and Y14 . In fact, all but one of these 10 inequalities are maximally violated (by which we henceforth mean with respect to the Shannon cone augmented with the independence constraint $X\perp \perp Y$) by one of the three following deterministic distributions taken with uniform inputs:

Equation (21)

where $x,y,a,b$ take the binary values $0,1$, and $\oplus $ denotes addition modulo 2. For example, equation (19) is violated by the third distribution above with a value for the left-hand side of 1. The one exception not violated by the distributions in equation (21) is the second inequality in (B1),

Equation (22)

which, in turn, is violated by the deterministic distribution (again taken with uniform inputs)

Equation (23)

However, unlike for the other inequalities, this distribution does not give the maximal possible violation of inequality (22) (which is 1/2), as the corresponding extremal point ${{\boldsymbol{h}}}_{\mathrm{ext}}$ that does maximally violate it is not reachable by a valid probability distribution with binary inputs and outputs. This is easily verified by making use of the previous observation that this extremal point must correspond to a deterministic distribution taken with uniform inputs, the set of which can easily be enumerated for binary inputs and outputs. Amongst such distributions, the one in equation (23) gives the best violation of $1-\tfrac{3}{2}{\mathrm{log}}_{2}\tfrac{3}{2}\approx 0.123\gt 0$.

The distributions in equation (21) are particularly interesting, as they all violate maximally some symmetries of the GYNI inequality (2) (under relabeling of the inputs and outputs), but not equation (2) itself. Interestingly, it turns out that all binary deterministic noncausal distributions, when taken with uniform inputs, violate at least one of our entropic inequalities except the distribution ${P}^{\mathrm{GYNI}}({ab}| {xy})={\delta }_{a,y}{\delta }_{b,x}$ (which violates maximally equation (2)) and its four symmetries under input-independent relabeling of outputs only. Note, however, that if Alice and Bob have a noncausal resource producing the distribution ${P}^{\mathrm{GYNI}}$, they can produce any of the distributions in equation (21) by appropriately XORing their input with their output, and thus still obtain an operational violation of an entropic causal inequality15 . It is interesting to observe that distributions maximally violating GYNI-type inequalities have such a crucial role in violating the entropic causal inequalities given that the entropic inequalities superficially bear little resemblance to these, and are valid for arbitrary numbers of inputs and outputs.

Returning to the more general situation with no upper bound imposed on $H(X),H(Y),H(A)$ and H(B), we see that all the remaining entropic causal inequalities can be violated by entropy vectors that are parallel to the realizable entropy vectors giving violations in the restricted scenario—more precisely, those obtained from the distributions in equation (21) (for all but one of the remaining inequalities) and equation (23) (for the remaining one). This shows that, given large enough alphabets for the input and output variables, all the entropic causal inequalities we obtained can indeed be violated by noncausal probability distributions, since if the distribution P(xyab) has entropy vector ${\boldsymbol{h}}$ then the distribution

$P({\boldsymbol{x}}{\boldsymbol{y}}{\boldsymbol{a}}{\boldsymbol{b}})={\prod }_{i=1}^{n}P({x}_{i}{y}_{i}{a}_{i}{b}_{i}),$    (24)

where ${\boldsymbol{x}}=({x}_{1},\,\ldots ,\,{x}_{n})$ and similarly for ${\boldsymbol{y}}$, ${\boldsymbol{a}}$ and ${\boldsymbol{b}}$, has entropy vector $n\cdot {\boldsymbol{h}}$. One should be careful, however, to note that the operation of sharing multiple independent correlations among the same parties is not a free operation either in the framework of causal correlations (since, for example, two independent copies of a causal distribution may give rise to a noncausal one), or in the process matrix framework (where two independent copies of a process matrix do not, in general, produce a valid process matrix). Nevertheless, $P({\boldsymbol{a}}{\boldsymbol{b}}| {\boldsymbol{x}}{\boldsymbol{y}})=P({\boldsymbol{x}}{\boldsymbol{y}}{\boldsymbol{a}}{\boldsymbol{b}})/P({\boldsymbol{x}}{\boldsymbol{y}})$ obtained from equation (24) still represents a valid (possibly noncausal) distribution.
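The scaling ${\boldsymbol{h}}\mapsto n\cdot {\boldsymbol{h}}$ used here is just the additivity of Shannon entropy over independent copies, which is quickly checked numerically (our illustration, shown for a single entropy coordinate):

```python
import numpy as np

def H(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Entropy is additive over independent copies, so every coordinate of the
# entropy vector of n independent copies is n times the single-copy value.
P = np.array([0.5, 0.25, 0.25])     # some single-copy marginal
P2 = np.outer(P, P).ravel()         # two independent copies
print(H(P), H(P2))                  # 1.5 3.0
```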

It is interesting also to ask how sensitive the entropic causal inequalities are at detecting noncausality. Since it does not appear possible to saturate the inequalities (B2)–(B4) with nonzero bounds using causal distributions, these inequalities are not tight and, consequently, unable to detect noncausal correlations that are very close to being causal. For the other inequalities in equation (B1) this is nonetheless a pertinent question. More precisely, one may ask whether there exists a distribution ${P}^{\varepsilon }$ of the form

${P}^{\varepsilon }=\varepsilon \,{P}^{\mathrm{NC}}+(1-\varepsilon )\,{P}^{{\rm{C}}},$    (25)

where ${P}^{\mathrm{NC}}$ is a noncausal distribution and ${P}^{{\rm{C}}}$ is causal, that violates any of these entropic inequalities for arbitrarily small $\varepsilon \gt 0$.

We looked in detail at this question for the case of binary inputs and outputs, where the inequalities in equation (B1) can all be both saturated by causal distributions and violated by noncausal ones. By trying exhaustively all deterministic distributions ${P}^{\mathrm{NC}}$ and ${P}^{{\rm{C}}}$, we found that such behavior was exhibited (for such distributions) only by the two inequalities

Equation (26)

and

Equation (27)

Equation (26), for example, is violated by ${P}^{\varepsilon }$ for all $\varepsilon \gt 0$ when taking ${P}^{\mathrm{NC}}({ab}| {xy})={\delta }_{a\oplus x,y}\,{\delta }_{b\oplus y,x}$ and ${P}^{{\rm{C}}}({ab}| {xy})={\delta }_{a,0}\,{\delta }_{b\oplus y,x}$ along with uniformly distributed inputs X and Y, which also gives a violation of the GYNI-type causal inequality

$\tfrac{1}{4}\,{\sum }_{x,y,a,b}\,{\delta }_{a\oplus x,y}\,{\delta }_{b\oplus y,x}\,P({ab}| {xy})\leqslant \tfrac{1}{2},$    (28)

with a left-hand side value of $\tfrac{1+\varepsilon }{2}\gt \tfrac{1}{2}$.
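Assuming the form of inequality (28) given above, this behavior of the mixture ${P}^{\varepsilon }$ is easily reproduced numerically (a sketch of ours, with illustrative function names):

```python
import numpy as np

def delta_dist(afun, bfun):
    """Deterministic behaviour P[a, b, x, y] with a = afun(x, y), b = bfun(x, y)."""
    P = np.zeros((2, 2, 2, 2))
    for x in (0, 1):
        for y in (0, 1):
            P[afun(x, y), bfun(x, y), x, y] = 1.0
    return P

def gyni_type_lhs(P):
    """LHS of inequality (28): (1/4) sum_{x,y} P(a = x xor y, b = x xor y | xy)."""
    return 0.25 * sum(P[x ^ y, x ^ y, x, y] for x in (0, 1) for y in (0, 1))

P_NC = delta_dist(lambda x, y: x ^ y, lambda x, y: x ^ y)   # noncausal
P_C = delta_dist(lambda x, y: 0, lambda x, y: x ^ y)        # causal, A<B
for eps in (0.0, 0.01, 0.5, 1.0):
    print(eps, gyni_type_lhs(eps * P_NC + (1 - eps) * P_C))  # (1 + eps) / 2
```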

For the remaining inequalities, such mixtures that violate a standard causal inequality for arbitrarily small ε only violate an entropic causal inequality when $\varepsilon \gt {\varepsilon }_{0}$ for some ${\varepsilon }_{0}$ bounded away from 0. We observed identical behavior when we extended our consideration also to various nondeterministic distributions ${P}^{\mathrm{NC}}$ and ${P}^{{\rm{C}}}$, and it thus seems that only equations (26) and (27) exhibit this ability to detect the noncausality of distributions that are arbitrarily close to being causal.

A final point worth discussing relates to the physical interpretation of the distributions violating entropic causal inequalities. One of the motivations in introducing the notion of causal correlations was whether nature permits more general causal structures that might allow such correlations to be realized, for example in quantum gravity. In particular, the authors of [14] introduced the so-called process matrix formalism, in which quantum mechanics is assumed to hold locally for each party, while no global order is assumed between the parties. They showed that causal inequalities can be violated within this framework, and this helped motivate further studies of causal and noncausal correlations, where it has been shown that the violation of causal inequalities is ubiquitous within this framework [21, 22, 24, 25, 54, 55]. It is thus interesting to see whether entropic causal inequalities share this property and can also be violated within the process matrix framework.

To look for such violations, we used the optimization techniques of [24, 25] with qubit systems to try to optimize the violation of the GYNI-type inequalities that the distributions in equation (21) violate maximally. We also tried minimizing the distance to other deterministic noncausal correlations such as equation (23), as well as optimizing in random directions in probability space. Unfortunately, with such techniques we were unable to find any process matrices operating on qubits that violate our entropic causal inequalities. We additionally attempted to reproduce (as closely as possible) distributions of the form (25) for small ε in order to violate inequalities (26) and (27), but similarly found no violation. Finally, we looked at correlations obtained by mixing noncausal correlations realizable by process matrices with causal correlations. An analogous mixing procedure was shown to enable all nonlocal distributions to violate the entropic Bell inequalities described in section 2.2.5 [31], but we were unable to find violations of any entropic causal inequalities with this approach.

This lack of violation is perhaps unsurprising given the general lack of sensitivity of the entropic inequalities to nearly causal distributions, and the fact that the best-known violations of causal inequalities for this scenario with process matrices are relatively small [24]. Nonetheless, it remains possible that violations can be found with higher-dimensional systems or more inputs and outputs; we leave this as an open question.

3.2. Characterization based on counterfactual variables

In this section we will consider counterfactual variables as outlined in section 2.2.5. Rather than considering the inputs as random variables X and Y, we take copies of each output variable for all input combinations, i.e. Axy and Bxy. In contrast to the method based on causal Bayesian networks, this method fixes the number of inputs that the inequalities apply to but may lead to novel constraints, as is the case in the Bell scenario.

3.2.1. Counterfactual variables for bipartite causal correlations

To keep the discussion simple, we will consider only the case of binary inputs, but the generalization to arbitrary inputs is straightforward. We thus consider the variables in

Equation (29)

Note that, in contrast to the example of Bell inequalities discussed in section 2.2.5, we need to consider copies of each variable for each input pair (x, y). This is a consequence of the fact that the correlations which we want to characterize may be signaling, e.g., for the causal order ${\rm{A}}\prec {\rm{B}}$, B00 and B10 will in general be different.

Since Axy and ${B}_{x^{\prime} y^{\prime} }$ are jointly observable only if $x=x^{\prime} $ and $y=y^{\prime} $, the marginal scenario in this case is

Equation (30)

In contrast to the DAG-based method, several choices of probability structure ${ \mathcal S }$ compatible with ${ \mathcal M }$ are possible, and the particular choice must be motivated by physical assumptions. One natural possibility would be to take ${ \mathcal S }={ \mathcal M }$, as one may have no a priori reason to think that the variables ${A}_{{xy}}$ and ${A}_{x^{\prime} y^{\prime} }$ have simultaneous physical meaning for $(x,y)\ne (x^{\prime} ,y^{\prime} )$, and hence may not have a well-defined joint probability distribution. On the other hand, in some cases one may imagine that such inputs correspond to the choice of measurements of some physical properties that are simultaneously well-defined, as in a classical theory; hence, one may alternatively take ${ \mathcal S }=\{{\cup }_{{M}_{j}\in { \mathcal M }}{M}_{j}\}=\{S\}$. In the following, we will adopt the former approach and take ${ \mathcal S }={ \mathcal M }$, since this constitutes the minimal set of assumptions compatible with the marginal scenario. The Shannon cone for ${ \mathcal S }$ is thus

Equation (31)

as in equation (8). We note however that this physically motivated choice for ${ \mathcal S }$ implies, for this particular scenario, that a global probability distribution does in fact exist17 . Taking ${ \mathcal S }=\{S\}$ would thus provide an equivalent entropic characterization, and moreover, this equivalence also holds at the level of Shannon (rather than entropy) cones (see appendix C for a more detailed discussion).

We follow a method analogous to that used in section 3.1. First, we characterize the cones ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$ of entropy vectors for fixed-order causal correlations; then, we characterize the convex mixtures of such correlations.

To do this, we note that the no-signaling conditions obeyed by fixed-order correlations (see section 2.1) impose constraints on the counterfactual variables. For example, correlations consistent with the order ${\rm{A}}\prec {\rm{B}}$ obey $P(a| {xy})=P(a| {xy}^{\prime} )$ for all $x,y,y^{\prime} ,a$, which implies ${A}_{{xy}}={A}_{{xy}^{\prime} }$ and thus $H({A}_{{xy}})=H({A}_{{xy}^{\prime} })$ also. Similarly, for ${\rm{B}}\prec {\rm{A}}$, we have $H({B}_{{xy}})=H({B}_{x^{\prime} y})$ for all $x,x^{\prime} ,y$. The cones ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$ are thus given by

Equation (32)

and

Equation (33)

where ${{L}}_{{ \mathcal C }}(\cdot )$ again denotes the linear subspace defined by the corresponding constraints.
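
As an explicit illustration for binary inputs, the linear constraints entering equations (32) and (33) read as follows (this is our own spelling-out of the no-signaling conditions above; the cones of course also include the elemental Shannon inequalities for each marginal set $\{{A}_{{xy}},{B}_{{xy}}\}$):

$H({A}_{00})=H({A}_{01}),\quad H({A}_{10})=H({A}_{11})\quad \mathrm{for}\ {\rm{A}}\prec {\rm{B}},$

$H({B}_{00})=H({B}_{10}),\quad H({B}_{01})=H({B}_{11})\quad \mathrm{for}\ {\rm{B}}\prec {\rm{A}}.$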

As in section 3.1, we introduce the latent switch variable Q, denote the augmented set of random variables $\tilde{S}=S\cup \{Q\}$, and extend the probability structure as

Equation (34)

(in appendix C we discuss further the implications of different choices of probability structures). With this extra variable we note again that the convex hull $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$ contains the vectors ${{\boldsymbol{h}}}_{\mathrm{conv}}={(H(T| Q))}_{T\in {{ \mathcal S }}^{{\rm{c}}}}$ for causal correlations. The system of inequalities $I{\boldsymbol{h}}\leqslant {\bf{0}}$ characterizing $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$ can then again be transformed in a similar way to equation (16) into a new system $\tilde{I}\tilde{{\boldsymbol{h}}}\leqslant {\bf{0}}$ defining the cone of corresponding entropy vectors $\tilde{{\boldsymbol{h}}}={(H(T))}_{T\in {\widetilde{{ \mathcal S }}}^{{\rm{c}}}}$, which we again denote by ${\mathrm{conv}}_{Q}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$. In contrast to the DAG-based method, the only constraint on Q is, now, $H(Q)\leqslant 1$, since Q need not be independent of the counterfactual output variables ${A}_{{xy}},{B}_{{xy}}$. Finally, we need to project onto the marginal scenario ${ \mathcal M }$ in equation (30). The causal polyhedron is thus given, in analogy to equations (17) and (18), by

Equation (35)

where we have ${{\rm{\Gamma }}}^{\widetilde{{ \mathcal S }}}={\bigcap }_{x,y\in \{\mathrm{0,1}\}}{{\rm{\Gamma }}}_{\{{A}_{{xy}},{B}_{{xy}},Q\}}$.

3.2.2. Entropic causal inequalities for counterfactual variables and their violation

As in section 3.1, the construction above allows one to obtain the full list of entropic inequalities characterizing ${{\rm{\Gamma }}}_{\mathrm{AB}}^{\mathrm{causal}}$. After removing the trivial inequalities directly implied by Shannon constraints on ${ \mathcal M }$, we find that there are 6 nontrivial entropic causal inequalities, which can be grouped into two equivalence classes of inequalities under the relabeling of inputs:

Equation (36)

and

Equation (37)

The fact that these inequalities have nontrivial bounds is, as for the DAG-based method, a result of the constraint $H(Q)\leqslant 1$ which means ${{\rm{\Gamma }}}_{\mathrm{AB}}^{\mathrm{causal}}$ is a polyhedron characterized by a set of inhomogeneous inequalities. Indeed, if one chooses not to eliminate Q from the entropic description, one obtains a convex cone characterized by the above equations, except that the right-hand sides are multiplied by H(Q) (see the discussion in appendix B).

In contrast to the case for the DAG-based approach, where violation of the causal inequalities we obtained was possible even with deterministic distributions, it is clear that such distributions provide no interesting behavior in the counterfactual approach since any such distribution will have a null entropy vector. By looking at equal mixtures of deterministic causal distributions ${P}^{{\rm{A}}\prec {\rm{B}}}$ and ${P}^{{\rm{B}}\prec {\rm{A}}}$, however, we were able to verify that the inequalities in equations (36) and (37) can indeed be saturated by such (causal) distributions and are thus tight.

In order to study the potential violation of these entropic inequalities, we again need to look at nondeterministic distributions. One can easily see, however, that equations (36) and (37) cannot be violated when restricted to distributions satisfying $H({A}_{{xy}})\leqslant 1$ and $H({B}_{{xy}})\leqslant 1$ for all $x,y\in \{0,1\}$, since this also implies that $I({A}_{{xy}}:{B}_{{xy}})\leqslant \min \{H({A}_{{xy}}),H({B}_{{xy}})\}\leqslant 1$. This means that the inequalities for counterfactual variables are unable to detect noncausality when both parties are restricted to binary outputs.

To find violations we again look at the extremal rays of the Shannon cone ${{\rm{\Gamma }}}^{{ \mathcal S }}$ of equation (31) which violate one of the inequalities, and examine whether these rays can be reached by any probability distribution. Considering bounds on $H({A}_{{xy}})$ and $H({B}_{{xy}})$ strictly larger than 1, we find that violations are possible for any such bound. Moreover, the entropy vectors giving maximal violation of equations (36) and (37) are generally realizable with equal mixtures of causal and noncausal distributions. For example, given the constraints $H({A}_{{xy}})\leqslant {\mathrm{log}}_{2}k$ and $H({B}_{{xy}})\leqslant {\mathrm{log}}_{2}k$ for some integer $k\geqslant 2$, the distribution

Equation (38)

where $a,b\in \{0,\,\ldots ,\,k-1\}$, realizes such an extremal point for all $k\geqslant 2$, and provides a violation of both equations (36) and (37) for $k\gt 2$. For k = 2 (binary outputs), this distribution can be written as the convex combination

Equation (39)

where ${P}^{\mathrm{NC}}({ab}| {xy})={\delta }_{a\oplus x\oplus 1,y}{\delta }_{b\oplus y\oplus 1,x}$ maximally violates a GYNI-type inequality (it is simply a symmetry of the third distribution in equation (21), obtained by flipping all outputs), and ${P}^{{\rm{C}}}({ab}| {xy})={\delta }_{a,0}{\delta }_{b,0}$ is causal. Even though it does not violate equation (36) or (37), P2 is noncausal. The distribution Pk can be seen as a possible generalization of a GYNI-violating distribution.

This link to the GYNI-type inequalities and correlations can be made more explicit by considering the related distribution

Equation (40)

with again $a,b\in \{0,\,\ldots ,\,k-1\}$. We have ${P}_{2}^{{\prime} }={P}^{\mathrm{NC}}$, and, for $k\geqslant 3$, ${P}_{k}^{{\prime} }$ has the same entropy vector as ${P}_{k-1}$. ${P}_{k}^{{\prime} }$ can be clearly simulated from ${P}_{2}^{{\prime} }={P}^{\mathrm{NC}}$ by making use of shared randomness and by letting both parties replace the output 1 obtained from ${P}_{2}^{{\prime} }$ by a shared random value $a=b\in \{1,\,\ldots ,\,k-1\}$. It is interesting to see, then, that the GYNI-maximally violating distributions also provide the best behavior entropically when augmented with shared randomness, even though they fail to violate the inequalities when the parties have only binary outputs.
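
The shared-randomness simulation just described is simple enough to write out explicitly. The following sketch (in Python, using numpy) constructs ${P}_{k}^{{\prime} }$ from ${P}_{2}^{{\prime} }={P}^{\mathrm{NC}}$ by replacing the output 1 with a shared uniform value in $\{1,\,\ldots ,\,k-1\}$, and evaluates the counterfactual entropies $H({A}_{{xy}})$, $H({B}_{{xy}})$ and $H({A}_{{xy}}{B}_{{xy}})$ appearing in the marginal scenario (30).

import numpy as np

def p_nc(a, b, x, y):
    # P^NC(ab|xy) = delta_{a XOR x XOR 1, y} * delta_{b XOR y XOR 1, x}
    return float((a ^ x ^ 1) == y and (b ^ y ^ 1) == x)

def p_prime_k(k, x, y):
    # P'_k(.|xy): start from P^NC (binary outputs) and, using shared
    # randomness, replace the output value 1 by a common value a = b
    # chosen uniformly in {1, ..., k-1}.
    P = np.zeros((k, k))
    for a2 in range(2):
        for b2 in range(2):
            w = p_nc(a2, b2, x, y)
            if w == 0.0:
                continue
            if a2 == 1 and b2 == 1:
                for r in range(1, k):
                    P[r, r] += w / (k - 1)
            elif a2 == 0 and b2 == 0:
                P[0, 0] += w
            # (for this particular P^NC the two outputs are always equal,
            #  so mixed cases a2 != b2 never occur)
    return P

def H(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

k = 4
for x in range(2):
    for y in range(2):
        P = p_prime_k(k, x, y)
        print(f"x={x} y={y}:  H(A_xy)={H(P.sum(axis=1)):.3f}  "
              f"H(B_xy)={H(P.sum(axis=0)):.3f}  H(A_xy B_xy)={H(P.ravel()):.3f}")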

As for the DAG-based method, it is also interesting to look at the sensitivity of the inequalities with respect to the detection of noncausality. To do so, we again looked at distributions of the form given in equation (25), but where ${P}^{\mathrm{NC}}$ and ${P}^{{\rm{C}}}$ are now equal mixtures of 3-outcome deterministic noncausal and causal distributions, respectively. The number of such distributions makes an exhaustive search difficult, but by sampling randomly we were nonetheless able to verify the existence of distributions ${P}^{\varepsilon }({ab}| {xy})$ which violate the entropic inequalities (36) and (37) for arbitrarily small ε. Although the examples we found are not particularly informative or simple (and thus we refrain from giving them explicitly), they nevertheless show that the entropic inequalities (36) and (37) exhibit the desired sensitivity.

Finally, one may again ask whether one can violate any of the entropic inequalities for counterfactuals within the process matrix formalism, or whether any noncausal correlation can be mixed with a causal one to violate an entropic inequality, as is the case for entropic Bell inequalities obtained from the counterfactual approach [31]. We leave this as an open question, but note only that we were not able to find a way to do so: for example, we were unable to find a violation (with or without the use of shared randomness) for noncausal distributions realizable within the process matrix framework.

4. Multipartite entropic causal inequalities

The notion of causal correlations can be extended to more than two parties in a recursive manner [23, 25]. Consider N parties ${{A}}_{1},\,\ldots ,\,{{A}}_{N}$, with inputs ${\boldsymbol{x}}=({x}_{1},\,\ldots ,\,{x}_{N})$ and outputs ${\boldsymbol{a}}=({a}_{1},\,\ldots ,\,{a}_{N})$. In any given run, one party, say ${{\rm{A}}}_{k}$, must act first, and none of the other parties can signal to that party, which implies $P({a}_{k}| {\boldsymbol{x}})=P({a}_{k}| {x}_{k})$. The correlations shared by the remaining $N-1$ parties, conditioned on the input and output of the first, must in turn also be causal. However, note that the causal order itself (and not only the response functions) of the remaining parties may depend on the input and output of the first, a phenomenon called dynamical causal order [23, 25, 53], which goes beyond the standard model of fixed causal Bayesian networks.

An N-partite correlation $P({\boldsymbol{a}}| {\boldsymbol{x}})$ is thus called causal if it can be decomposed in the following way [23, 25]:

Equation (41)

where ${{\boldsymbol{x}}}_{\setminus k}=({x}_{1},\,\ldots ,\,{x}_{k-1},{x}_{k+1},\,\ldots ,\,{x}_{N})$ and ${{\boldsymbol{a}}}_{\setminus k}=({a}_{1},\,\ldots ,\,{a}_{k-1},{a}_{k+1},\,\ldots ,\,{a}_{N})$, with ${q}_{k}\geqslant 0$, ${\sum }_{k}{q}_{k}=1$, and where for each $k,{x}_{k},{a}_{k}$, ${P}_{k,{x}_{k},{a}_{k}}({{\boldsymbol{a}}}_{\setminus k}| {{\boldsymbol{x}}}_{\setminus k})$ is a causal $(N-1)$-partite correlation (down to the lowest level of this recursive definition, where any 1-partite correlation is considered to be causal). Note that, for N = 2 this reduces to equation (1). The entropic approach can be generalized to the multipartite scenario using a similar recursive method.
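
To make the recursive structure, and in particular the possibility of dynamical causal order, concrete, the following is a minimal sketch (in Python) of a simulator that samples from an N-partite causal correlation by construction: in each run a first party is chosen independently of the inputs, that party produces its output from its own input only, and the procedure is then applied recursively to the remaining parties, with the choice of who acts next allowed to depend on the inputs and outputs already produced. The response functions and order rules used here are arbitrary placeholders chosen only for illustration.

import random
from collections import Counter

def sample_causal(inputs, parties=None, history=None):
    """Sample outputs for the given parties from a causal correlation.

    inputs  : dict {party: input value}
    parties : parties still to act (defaults to all of them)
    history : list of (party, input, output) triples already produced, on
              which both the choice of who acts next and the response
              functions may depend (dynamical causal order).
    """
    if parties is None:
        parties = sorted(inputs)
    if history is None:
        history = []
    if not parties:
        return {}

    if history:
        # Deeper in the recursion the order may depend on what has already
        # happened (a toy rule, for illustration only).
        first = parties[-1] if history[-1][2] == 1 else parties[0]
    else:
        # The party acting first must be chosen independently of all inputs.
        first = random.choice(parties)

    # The acting party's output may depend only on its own input (and on the
    # history of the parties that have already acted).
    x = inputs[first]
    a = x ^ (len(history) % 2)   # arbitrary placeholder response function

    rest = [p for p in parties if p != first]
    outputs = {first: a}
    outputs.update(sample_causal(inputs, rest, history + [(first, x, a)]))
    return outputs

# Example: estimate the tripartite correlation P(a|x) for fixed inputs.
inputs = {"A1": 0, "A2": 1, "A3": 1}
counts = Counter()
for _ in range(10000):
    out = sample_causal(inputs)
    counts[tuple(out[p] for p in sorted(inputs))] += 1
print({outcome: n / 10000 for outcome, n in sorted(counts.items())})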

4.1. Causal Bayesian network method

It is instructive to first look into the details of the tripartite case—in which case we shall denote the parties Alice (${\rm{A}}$), Bob (${\rm{B}}$) and Charlie (${\rm{C}}$), as is standard—before generalizing the method to more parties. The general method follows that used for the bipartite case in section 3.1, and the relevant conditional DAG is shown in figure 3. The set of observable variables to be considered here is $S=\{X,Y,Z,A,B,C\}$; the marginal scenario and the probability structure are ${ \mathcal M }={ \mathcal S }=\{S\}$.

Figure 3. DAGs for tripartite causal correlations. The latent 'switch' variable Q determines which DAG is 'activated'. Correlations among the variables in the shaded rectangles are causal, conditioned on the input and output of the party acting first.

The polytope of tripartite causal correlations (i.e., of the form equation (41)) can be written as

Equation (42)

where ${{ \mathcal P }}^{{\rm{A}}}$ is the polytope of causal distributions consistent with Alice acting first and such that the remaining conditional correlation shared by Bob and Charlie is causal, and analogously for ${{ \mathcal P }}^{{\rm{B}}}$ and ${{ \mathcal P }}^{{\rm{C}}}$. As a consequence, in order to define the polyhedron characterizing entropically tripartite causal correlations, which we denote ${{\rm{\Gamma }}}_{\mathrm{ABC}}^{\mathrm{causal}}$, we first need to define the corresponding polyhedra, namely ${{\rm{\Gamma }}}^{{\rm{A}}},{{\rm{\Gamma }}}^{{\rm{B}}},{{\rm{\Gamma }}}^{{\rm{C}}}$, associated with each party acting first.

Let us thus consider ${{\rm{\Gamma }}}^{{\rm{A}}}$. According to the recursive definition given in equation (41), for any $x,a$, the conditional entropy vector ${{\boldsymbol{h}}}_{\mathrm{BC}}^{{xa}}={(H(T| X=x,A=a))}_{T\subset \{Y,Z,B,C\}}$ for a correlation in ${{ \mathcal P }}^{{\rm{A}}}$ must be contained in the bipartite causal polyhedron ${{\rm{\Gamma }}}_{\mathrm{BC}}^{\mathrm{causal}}$, defined for Bob and Charlie as in equations (17) and (18). By convexity this also implies that ${{\boldsymbol{h}}}_{\mathrm{BC}}={(H(T| {XA}))}_{T\subset \{Y,Z,B,C\}}={\sum }_{x,a}P(x,a){{\boldsymbol{h}}}_{\mathrm{BC}}^{{xa}}$ is in ${{\rm{\Gamma }}}_{\mathrm{BC}}^{\mathrm{causal}}$. We can then use a similar transformation to equation (16) to obtain constraints on ${{\rm{\Gamma }}}^{{\rm{A}}}$: if entropy vectors ${{\boldsymbol{h}}}_{\mathrm{BC}}$ in ${{\rm{\Gamma }}}_{\mathrm{BC}}^{\mathrm{causal}}$ satisfy the inequalities ${\boldsymbol{I}}\cdot {{\boldsymbol{h}}}_{\mathrm{BC}}\leqslant \beta $, then the corresponding (unconditional) entropy vector ${\boldsymbol{h}}={(H(T))}_{T\subset S}$ must satisfy the inequalities ${{ \mathcal T }}_{{XA}}({\boldsymbol{I}})\cdot {\boldsymbol{h}}\leqslant \beta $. Writing ${{ \mathcal T }}_{{XA}}^{* }$ for the dual transformation on the space of entropy vectors, we thus have that ${\boldsymbol{h}}\in {{ \mathcal T }}_{{XA}}^{* }({{\rm{\Gamma }}}_{\mathrm{BC}}^{\mathrm{causal}})$. Together with the facts that ${\boldsymbol{h}}$ must lie in the Shannon cone ${{\rm{\Gamma }}}_{{S}}$ for the relevant variables, that all the inputs must be independent from each other, and that Alice's output must be independent from Bob and Charlie's inputs (conditioned on her input), we obtain the characterization

Equation (43)

with similar expressions for ${{\rm{\Gamma }}}^{{\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{C}}}$.
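
To make the transformation ${{ \mathcal T }}_{{XA}}$ used above more concrete, recall that conditional entropies can be rewritten in terms of unconditional ones as $H(T| {XA})=H(T\cup \{X,A\})-H({XA})$. A constraint ${\boldsymbol{I}}\cdot {{\boldsymbol{h}}}_{\mathrm{BC}}={\sum }_{T}{c}_{T}\,H(T| {XA})\leqslant \beta $ on the conditional entropy vector therefore translates (this is a sketch of the substitution, not a verbatim restatement of equation (16)) into

${{ \mathcal T }}_{{XA}}({\boldsymbol{I}})\cdot {\boldsymbol{h}}={\sum }_{T}{c}_{T}\,[H(T\cup \{X,A\})-H({XA})]\leqslant \beta ,$

which is a linear constraint on the unconditional entropy vector ${\boldsymbol{h}}={(H(T))}_{T\subset S}$, as used in equation (43).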

Following the same approach as in section 3.1, we introduce a (now three-valued) switch variable Q (see figure 3). Similarly to what we observed in the bipartite case, the convex hull $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}},{{\rm{\Gamma }}}^{{\rm{B}}},{{\rm{\Gamma }}}^{{\rm{C}}})$ contains the conditional entropy vectors ${(H(T| Q))}_{T\subset S}$ for tripartite causal correlations. The inequalities characterizing $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}},{{\rm{\Gamma }}}^{{\rm{B}}},{{\rm{\Gamma }}}^{{\rm{C}}})$ can again be transformed into inequalities satisfied by the entropy vector $\tilde{{\boldsymbol{h}}}={(H(T))}_{T\subset \tilde{S}}$, for variables in $\tilde{S}=S\cup \{Q\}$, by introducing a transformation ${{ \mathcal T }}_{Q}$ as in equation (16), thus defining the polyhedron ${\mathrm{conv}}_{Q}({{\rm{\Gamma }}}^{{\rm{A}}},{{\rm{\Gamma }}}^{{\rm{B}}},{{\rm{\Gamma }}}^{{\rm{C}}})$ as before. Taking into account the Shannon constraints for all variables in $\tilde{S}$, the independence constraints ${\mathrm{CI}}_{Q}=(X\perp \perp Y\perp \perp Z\perp \perp Q)$ and the bound $H(Q)\leqslant {\mathrm{log}}_{2}3$, and finally projecting onto the observable variables in S, we see that the entropy vectors for tripartite causal correlations belong to the polyhedron

Equation (44)

While this characterization is certainly valid, some subtleties arising from the differences between the probabilistic and entropic descriptions allow one to actually make it tighter. Specifically, certain conditions implied by the definition (41) need not be implied by the corresponding entropic definition outlined above. For example, if $P({abc}| {xyz})$ is a causal correlation, then the bipartite marginal distributions ${P}_{x}({bc}| {yz})={\sum }_{a}P({abc}| {xyz})$ and $P({bc}| {yz})={\sum }_{x}P(x){P}_{x}({bc}| {yz})$ are both causal (as are the corresponding marginals for each other pair of parties) [25]. This implies that the entropy vectors ${(H(T| X))}_{T\subset \{Y,Z,B,C\}}$ and ${(H(T))}_{T\subset \{Y,Z,B,C\}}$ corresponding to a tripartite causal correlation must also satisfy all the inequalities characterizing the bipartite causal polyhedron ${{\rm{\Gamma }}}_{\mathrm{BC}}^{\mathrm{causal}}$—which may not necessarily be implied by the characterization of ${({{\rm{\Gamma }}}_{\mathrm{ABC}}^{\mathrm{causal}})}_{0}$ above. We can thus tighten the previous characterization, and define the tripartite causal polyhedron as18

Equation (45)

where $[\mathrm{perms}.]$ denotes the permutations of the preceding two terms for the other parties. Note that such extra constraints do not need to be imposed in the bipartite case since the causality of all one-party marginals is equivalent to them being valid probability distributions, which is already assured by the elemental inequalities.

To extend the above idea to the general multipartite case of equation (41), we simply define recursively (here the notation should be self-evident)

Equation (46)

where ${\mathrm{CI}}_{{{\rm{A}}}_{k}}$ denotes the set of independence constraints resulting from the assumption that all parties' inputs are independent, i.e. ${X}_{1}\perp \perp \,\cdots \,\perp \perp {X}_{N}$, and that party k acts first, which implies ${A}_{k}\perp \perp {{\boldsymbol{X}}}_{\setminus k}| {X}_{k}$. The causal polyhedron is then defined as

Equation (47)

where ${\mathrm{CI}}_{Q}$ denotes the independence relation between all inputs and Q, i.e. ${X}_{1}\perp \perp \cdots \perp \perp {X}_{N}\perp \perp Q$.

4.2. Counterfactual variable method

A similar generalization is possible also for the counterfactual method. Again, it is instructive to look first at the tripartite case, where the set of variables to be considered is $S={\{{A}_{{xyz}},{B}_{{xyz}},{C}_{{xyz}}\}}_{x,y,z}$, the marginal scenario is ${ \mathcal M }={\{\{{A}_{{xyz}},{B}_{{xyz}},{C}_{{xyz}}\}\}}_{x,y,z}$ and we take the probability structure to be ${ \mathcal S }={ \mathcal M }$. We start by defining the polyhedron for the case in which Alice acts first,

Equation (48)

which is the analog, for the counterfactual method, of the polyhedron in equation (43). Similar definitions hold for ${{\rm{\Gamma }}}^{{\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{C}}}$. The tripartite polyhedron of causal counterfactual inequalities can then be defined, following a similar reasoning to the previous case, as

Equation (49)

where $\widetilde{{ \mathcal S }}={\{\{{A}_{{xyz}},{B}_{{xyz}},{C}_{{xyz}},Q\}\}}_{x,y,z}$ and ${{\rm{\Gamma }}}_{\mathrm{BC}| x}^{\mathrm{causal}}$ is defined by imposing the constraints characterizing ${{\rm{\Gamma }}}_{\mathrm{BC}}$ (a priori defined for some variables ${B}_{{yz}},{C}_{{yz}}$) to the variables ${B}_{{xyz}},{C}_{{xyz}}$, and with similar definitions for ${{\rm{\Gamma }}}_{\mathrm{AC}| y}^{\mathrm{causal}}$ and ${{\rm{\Gamma }}}_{\mathrm{AB}| z}^{\mathrm{causal}}$.

As for the case based on causal Bayesian networks, the construction in equation (49) can then be generalized to an arbitrary number of parties in a recursive way.

5. Information bounds in causal games

One of the advantages of the entropic approach is that it allows information-theoretic constraints to be naturally imposed, derived, and interpreted [10, 39]. As an illustration, we consider a simple application of our approach to understanding the role of bounded communication in causal games.

Consider the generalization of the GYNI game described in section 2.1 to arbitrary numbers of inputs and outputs, in which two parties try to maximize the winning probability ${p}_{\mathrm{succ}}=P(a=y,b=x)$. If the parties operate causally, then in any given round of the game only one-way communication may occur. One may be interested in the effect of limiting the amount of communication that can occur in any such round. In the entropic framework, this can easily be taken into account by adding an additional constraint of the form $I(X:B)\leqslant {I}_{\max }$ to ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ in order to restrict B's dependency on X, and similarly imposing $I(Y:A)\leqslant {I}_{\max }$ on ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$, where the quantity ${I}_{\max }$ represents the maximum allowable entropy of the classical message communicated by the parties. For example, if the parties are permitted, in each round, to exchange a classical d-dimensional system, then ${I}_{\max }={\mathrm{log}}_{2}d$. In general, the amount of one-way communication ${I}_{\max }$ does not need to be specified in advance; it will appear as a parameter in our inequalities. By applying the approach of section 3.1 to this scenario, one finds that causal correlations must then obey the inequality

Equation (50)

i.e., the sum of the two mutual informations is similarly bounded by ${I}_{\max }$. Although this is perhaps not unexpected, it shows the ease with which such bounds can be derived in the entropic framework.
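
As a simple sanity check of this bound, one can simulate a concrete ${\rm{A}}\prec {\rm{B}}$ strategy in which Alice forwards a classical d-dimensional message to Bob and verify numerically that the two mutual informations behave as expected. The following sketch (in Python, using numpy; the particular encoding m = x mod d is an arbitrary placeholder) does this for uniformly distributed inputs.

import itertools
import numpy as np

def mutual_information(P, axes_u, axes_v):
    # I(U:V) = H(U) + H(V) - H(UV), computed from the joint array P.
    def H(axes):
        marg = P.sum(axis=tuple(i for i in range(P.ndim) if i not in axes)).ravel()
        marg = marg[marg > 0]
        return -(marg * np.log2(marg)).sum()
    return H(axes_u) + H(axes_v) - H(tuple(axes_u) + tuple(axes_v))

d = 2           # dimension of the classical message, so I_max = log2(d)
n_in = 4        # number of input values for each party

# A before B strategy: Alice outputs a = 0, sends m = x mod d to Bob,
# and Bob outputs b = m (his best guess at x given the message).
P = np.zeros((n_in, n_in, 1, d))        # axes: X, Y, A, B
for x, y in itertools.product(range(n_in), repeat=2):
    P[x, y, 0, x % d] = 1.0 / n_in**2   # uniform independent inputs

I_XB = mutual_information(P, (0,), (3,))
I_YA = mutual_information(P, (1,), (2,))
print(f"I(X:B) = {I_XB:.3f}, I(Y:A) = {I_YA:.3f}, "
      f"sum = {I_XB + I_YA:.3f} <= log2(d) = {np.log2(d):.3f}")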

A more subtle variant is obtained by considering a slight generalization of the causal game proposed by Oreshkov, Costa, and Brukner (OCB) in [14]. In this game, the goal is also for one party to guess the other party's input; in contrast to the GYNI game, however, an additional input random bit $Y^{\prime} $ is given19 , which determines whether it is Bob who should guess Alice's input (if $Y^{\prime} =0$) or vice versa (if $Y^{\prime} =1$). The parties thus now attempt to maximize the winning probability

Equation (51)

An analogous entropic inequality can be obtained via a combination of the methods discussed in section 3. Since the relevant direction of communication in each round of this game depends on the additional input $Y^{\prime} $, we will combine the DAG-based method for the variables $X,Y,A,B$ with the counterfactual approach to condition on $Y^{\prime} $. More precisely, one may take ${ \mathcal S }={ \mathcal M }={\{\{X,Y,{A}_{y^{\prime} },{B}_{y^{\prime} }\}\}}_{y^{\prime} }$ and $\widetilde{{ \mathcal S }}={\{\{X,Y,{A}_{y^{\prime} },{B}_{y^{\prime} },Q\}\}}_{y^{\prime} };$ the relevant causal constraints for the cones ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$ and the polyhedron ${\widetilde{{\rm{\Gamma }}}}_{\mathrm{AB}}^{\mathrm{causal}}$ are the same as those imposed on $X,Y,A,B,Q$ in the DAG-based method, except that now they are applied to each copy of the conditional variables ${A}_{y^{\prime} }$ and ${B}_{y^{\prime} }$, and the communication bounds $I(X:{B}_{y^{\prime} })\leqslant {I}_{\max }$ and $I(Y:{A}_{y^{\prime} })\leqslant {I}_{\max }$ are imposed on the corresponding cones. Notice that, in this way, we are assuming that $X\perp \perp Y\perp \perp Y^{\prime} \perp \perp Q$. Combining the above constraints with the analysis in section 3, one finds that causal correlations must obey

Equation (52)

This inequality, for the special case of binary inputs and outputs and with ${I}_{\max }=1$, was proposed in [58] as a potential principle to bound the set of correlations obtainable within the process matrix formalism20 , in analogy with the celebrated information causality principle [39] that provides bounds on the strength of bipartite quantum correlations. Our approach allowed us to show that equation (52) indeed holds for causal processes, but it remains to be seen whether such a constraint on mutual informations for causal correlations can be violated within the process matrix framework. This example, however, highlights the potential of the entropic approach to causal correlations for studying information-theoretic principles.

6. Discussion

Since Bell first formulated his eponymous theorem, understanding the role of causality within quantum mechanics has been a central yet thorny goal. Complicating matters further, the very idea of a definite causal order has itself begun to be questioned. While sophisticated frameworks have been introduced in an effort to free physical theories from the shackles of a rigid causal framework, whether nature permits violations of causal inequalities remains an elusive question.

Against this backdrop, our aim in this paper was to introduce an entropic approach to studying causal correlations, and to this end we presented two complementary methods: the first based on the consideration of the entropies of the variables appearing in the causal Bayesian networks describing causal scenarios, and the second based on a counterfactual description of the outcome variables appearing in such networks. Focusing on bipartite causal scenarios, we described in detail the successful application of both methods to derive nontrivial entropic causal inequalities, before showing how the characterizations can be generalized to multipartite scenarios. In contrast to the usual approach to causal correlations based on probability distributions, the entropic causal inequalities we derived using both methods are valid for any finite number of possible outcomes, as well as for any number of inputs for the first method based on causal Bayesian networks, and thus provide a very concise description of causal correlations. We discussed the ability for the derived entropic causal inequalities to witness the noncausality of several classes of interesting noncausal correlations, but were nonetheless unable to find violations of the inequalities by correlations obtainable within the process matrix formalism [14] using qubit systems. In light of the coarse-grained description provided by entropic inequalities and the fact that the known violations of standard causal inequalities are in general rather small [24], this is arguably an unsurprising negative result. The question of whether entropic causal inequalities can be violated within the process matrix formalism and (more importantly) by quantum correlations thus remains open. More generally, our construction can be used to characterize arbitrary convex combinations of different causal Bayesian networks, and thus provides, for example, a natural tool to investigate stronger notions of multipartite Bell nonlocality [50, 51, 59, 60] from the entropic perspective.

In view of this new framework for the study of causal correlations we believe that several other directions of research can naturally be pursued. Here we focused on using the Shannon entropies of the relevant variables, but it is known that, at least in particular scenarios, the same approach can be used to derive constraints using certain generalized entropies [61, 62] and even with nonstatistical information measures such as the Kolmogorov complexity [35]. Can our framework be extended to these other information measures, and if so, are they more sensitive to violations of causality? Similarly, one may wonder whether the addition of non-Shannon-type inequalities to the entropic descriptions of causal correlations considered might lead to tighter constraints [28, 63, 64]. Moreover, in the multipartite characterization of causal correlations in section 4 we also saw that causality imposes additional entropic constraints on marginal and conditional distributions that allow us to tighten our characterization in equation (45). It remains an open question whether additional such constraints can be found that tighten even further the characterization.

Another important direction to consider would be the ability to formulate, and perhaps violate, information-theoretical principles [10] of causality. We provided, as a simple application, an idea for one possible approach, showing how simple bounds on mutual informations can be derived for causal games where communication is limited in each direction. It would be interesting to see, in particular, whether such principles could be violated within the process matrix formalism and, if so, the connection to the violation of causal inequalities. For example, does the violation of causal inequalities imply the violation of some principle implied by quantum mechanics? We expect our results to motivate these and many more future investigations.

Acknowledgments

We acknowledge fruitful discussions with Philippe Allard Guérin, Flavio Baccari, and Časlav Brukner. This work was funded by the DAAD, the 'Retour Post-Doctorants' program (ANR-13-PDOC-0026) of the French National Research Agency, the Brazilian ministries MEC and MCTIC, and the FWF (Project: M 2107 Meitner-Programm).

Appendix A.: Causal correlations not contained in $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$

Starting with the systems of inequalities ${I}_{0}{\boldsymbol{h}}\leqslant {\bf{0}}$ and ${I}_{1}{\boldsymbol{h}}\leqslant {\bf{0}}$ characterizing the cones ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$ defined in equations (12) and (13), the characterization $I{\boldsymbol{h}}\leqslant {\bf{0}}$ of $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$ can be found by first solving the extremal ray enumeration problem for the extremal rays of ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$, taking the union of these rays and finally solving the facet enumeration problem for the inequalities characterizing $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$.

We find that there are six nontrivial inequalities (i.e., inequalities that are not of Shannon type) for $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$, which correspond to four equivalence classes of inequalities under exchange of parties:21

Equation (A1)

In order to see that there are causal bipartite correlations that have entropy vectors not contained in $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$, consider the following counterexample. Take ${P}^{{\rm{A}}\prec {\rm{B}}}({ab}| {xy})={\delta }_{a,x}{\delta }_{b,x}$ and ${P}^{{\rm{B}}\prec {\rm{A}}}({ab}| {xy})\,={\delta }_{a,y}{\delta }_{b,y}$ and consider the inputs $x,y$ to be uniformly distributed so that ${P}^{{\rm{A}}\prec {\rm{B}}}({xyab})=\tfrac{1}{4}{P}^{{\rm{A}}\prec {\rm{B}}}({ab}| {xy})$ and ${P}^{{\rm{B}}\prec {\rm{A}}}({xyab})=\tfrac{1}{4}{P}^{{\rm{B}}\prec {\rm{A}}}({ab}| {xy})$. The distribution $P({xyab})=\tfrac{1}{2}{P}^{{\rm{A}}\prec {\rm{B}}}({xyab})+\tfrac{1}{2}{P}^{{\rm{B}}\prec {\rm{A}}}({xyab})$ thus also defines a causal correlation $P({ab}| {xy})$, but one can verify that the entropy vector for P(xyab) violates the first and last inequalities in (A1) with a value for the left-hand sides of $1-\tfrac{3}{2}{\mathrm{log}}_{2}\tfrac{3}{2}\approx 0.123\gt 0$.
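
For completeness, the following short sketch (in Python, using numpy) constructs this mixture and evaluates entropic quantities for it; as one illustration, the conditional mutual information $I(X:Y| A)$ evaluates to exactly $1-\tfrac{3}{2}{\mathrm{log}}_{2}\tfrac{3}{2}\approx 0.123$ for this distribution, the same value as quoted above (we do not claim that this particular expression is the left-hand side appearing in (A1)).

import itertools
import numpy as np

# Joint distribution P(x, y, a, b) for the equal mixture of the two
# fixed-order distributions P^{A before B}(ab|xy) = delta_{a,x} delta_{b,x}
# and P^{B before A}(ab|xy) = delta_{a,y} delta_{b,y}, with uniform inputs.
P = np.zeros((2, 2, 2, 2))
for x, y in itertools.product(range(2), repeat=2):
    P[x, y, x, x] += 0.5 * 0.25
    P[x, y, y, y] += 0.5 * 0.25

def H(axes):
    marg = P.sum(axis=tuple(i for i in range(4) if i not in axes)).ravel()
    marg = marg[marg > 0]
    return -(marg * np.log2(marg)).sum()

# I(X:Y|A) = H(XA) + H(YA) - H(XYA) - H(A); axes: X=0, Y=1, A=2, B=3.
I_XY_given_A = H((0, 2)) + H((1, 2)) - H((0, 1, 2)) - H((2,))
print(I_XY_given_A, 1 - 1.5 * np.log2(1.5))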

A similar conclusion can also be reached for the method based on counterfactual variables: starting from the definitions of ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$ in equations (32) and (33) one finds that the inequalities characterizing $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$ are precisely the same as the causal inequalities in equations (36) and (37) except with bounds on the right-hand side of 0. One can easily verify that equations (36) and (37) can be saturated by causal correlations (for some equal mixtures of correlations ${P}^{{\rm{A}}\prec {\rm{B}}}$ and ${P}^{{\rm{B}}\prec {\rm{A}}}$), thus providing such a counterexample.

Appendix B.: Bipartite entropic causal inequalities from the DAG method

The following is the full list of (equivalence classes of) entropic causal inequalities obtained from the DAG method, up to their symmetries under the exchange of parties.

Ten (of the twenty) families of inequalities have bounds of 0 and can be violated by binary distributions:

Equation (B1)

Two more have nonzero bounds but, under the constraints that $H(A)\leqslant 1$, $H(B)\leqslant 1$, $H(X)\leqslant 1$, $H(Y)\leqslant 1$, turn out to be implied by the previous inequalities in equation (B1):

Equation (B2)

Four correspond to 'corrected' versions of the inequalities (A1) characterizing $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$, and cannot be violated by binary distributions:

Equation (B3)

while a further four also cannot be violated by binary distributions:

Equation (B4)

We note that, instead of projecting ${\widetilde{{\rm{\Gamma }}}}_{\mathrm{AB}}^{\mathrm{causal}}$ (as defined in equation (17)) onto the marginal scenario ${ \mathcal M }=\{\{X,Y,A,B\}\}$ to obtain these entropic causal inequalities, one could start by projecting it onto the marginal scenario ${ \mathcal M }^{\prime} =\{\{X,Y,A,B\},\{Q\}\}$, which would amount to eliminating all entropies $H(T\cup \{Q\})$ for all nonempty subsets $T\subset \{X,Y,A,B\}$ from the description while keeping H(Q). By doing so, one obtains the same inequalities given in equations (B1)–(B4), except with all the right-hand sides multiplied by H(Q). The inequalities in equation (B1) thus have no dependence on H(Q) (i.e., on the extent to which correlations of different causal orders are mixed), while the remaining inequalities have a nontrivial dependence on it. By eliminating H(Q) using the constraint $H(Q)\leqslant 1$ one then recovers the entropic causal inequalities above22 .

The inequalities containing H(Q) may be of interest if, for some reason, one puts a nontrivial bound on H(Q) (e.g., if one knows that one fixed causal order is more probable than the other), as they give novel constraints in such situations. In the extreme case, if we know that $H(Q)=0$, then the inequalities we obtain (namely equations (B1)–(B4), with all upper bounds replaced by 0) are valid for fixed-order causal correlations. All of the inequalities in equations (B3)–(B4) with upper-bounds multiplied by H(Q), except the second one in (B3), can be violated by binary noncausal correlations for any $H(Q)\lt 1;$ for the second inequality in equation (B3) we were only able to find a violation for $H(Q)\lt \tfrac{1}{2}\left(1+\tfrac{3}{2}{\mathrm{log}}_{2}\tfrac{3}{2}\right)\approx 0.939$.

Appendix C.: Relations between different probability structures

In the application of the counterfactual method to causal correlations discussed in section 3.2, as a result of the structure of the marginal scenario one can prove that different choices of probability structure ${ \mathcal S }$ give rise to the same observed marginal distributions. This is because all the marginals ${M}_{j}\in { \mathcal M }$ are disjoint, and hence are always consistent with the global product probability distribution

Equation (C1)

Hence, whichever probability structure ${ \mathcal S }$ we choose (consistent with ${ \mathcal M }$), the observed marginal probabilities can always be interpreted as arising from a global probability distribution. Similarly, the choice of extended probability structure $\widetilde{{ \mathcal S }}$ including the switch variable Q in equation (34) implies also the existence of a global probability distribution

Equation (C2)

(Such a construction is also possible in some other types of scenarios; see [56] for more general results.) It thus follows that the probability structures $\widetilde{{ \mathcal S }}$ that we chose and $\widetilde{{ \mathcal S }}^{\prime} =\{\tilde{S}\}$ again give rise to the same marginal distributions on ${ \mathcal M }$. A similar analysis can also be applied to the recursive method presented for the multipartite case in section 4.2.

At the level of entropic inequalities, however, the fact that we are considering Shannon inequalities that provide only an outer approximation of the entropy cone means that one may a priori obtain different constraints depending on which of these equivalent probability structures one assumes. For the specific case of a marginal scenario with disjoint elements, i.e., ${M}_{i}\cap {M}_{j}=\varnothing $ for all ${M}_{i},{M}_{j}\in { \mathcal M }$, a result by Matúš (see remark 1 in [65]) implies, nevertheless, that choosing ${ \mathcal S }={ \mathcal M }$ or ${ \mathcal S }^{\prime} =\{S\}$, with $S={\cup }_{{M}_{i}\in { \mathcal M }}{M}_{i}$, also provides an equivalent description for the Shannon cone. More precisely, we have

Equation (C3)

where ${{\boldsymbol{X}}}_{{M}_{i}}$ denotes the joint random variable associated with the subset of variables ${M}_{i}\in { \mathcal M }$. The linear constraints in equations (32) and (33) can then be imposed after the projection. Hence, the use of ${ \mathcal S }={ \mathcal M }$ or ${ \mathcal S }^{\prime} =\{S\}$ is equivalent, in this case, even at the level of the Shannon cone description of ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$.

One may hope that a similar analysis can be applied to show the equivalence of the probability structures $\widetilde{{ \mathcal S }}$ and $\widetilde{{ \mathcal S }}^{\prime} =\{\tilde{S}\}$, where ${\tilde{S}}_{i}\cap {\tilde{S}}_{j}=\{Q\}$ for all distinct ${\tilde{S}}_{i},{\tilde{S}}_{j}\in \widetilde{{ \mathcal S }}$. However, even though the marginal scenario of interest remains the same as above, one no longer has $\widetilde{{ \mathcal S }}={ \mathcal M }$ and, moreover, equation (35) involves extra constraints given by ${\mathrm{conv}}_{Q}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$ and $H(Q)\leqslant 1$. As a result, the previous approach does not allow us to show the equivalence of choice between $\widetilde{{ \mathcal S }}$ and $\widetilde{{ \mathcal S }}^{\prime} $ in this situation, which we were indeed unable to prove.

Nonetheless, we stress that any possible differences in tightness between the entropic inequalities arising here from one particular probability structure or another are not due to stricter physical assumptions (i.e., the existence of joint probability distributions), but are rather due to different outer approximations of the true entropy cone (or polyhedron) via Shannon inequalities. We remark, however, that the choice of a minimal probability structure is computationally easier to handle due to the much lower number of variables; for example, compare the case ${ \mathcal S }={ \mathcal M }$ in equation (31), where $| {{ \mathcal S }}^{{\rm{c}}}| =13$ and thus ${{\rm{\Gamma }}}^{{ \mathcal S }}\subset {{\mathbb{R}}}^{13}$ (and where the entropy vectors to be considered are effectively 12-dimensional, since $H(\varnothing )$ is fixed to be 0), with the corresponding case for ${ \mathcal S }^{\prime} =\{S\}$, where ${{\rm{\Gamma }}}_{S}\subset {{\mathbb{R}}}^{{2}^{8}}={{\mathbb{R}}}^{256}$ (with effectively 255-dimensional entropy vectors). For an extensive discussion of the role of such constraints in the computation of tighter approximations to the entropy cone we refer the reader to [57].

Footnotes

  • Of course the probability assignment should be consistent. That is, for two subsets Si and ${S}_{i}^{{\prime} }$ of ${ \mathcal S }$, the corresponding probability distributions Pi and ${P}_{i}^{{\prime} }$ should coincide on ${S}_{i}\cap {S}_{i}^{{\prime} }$, so that one must have ${P}_{i}({{\boldsymbol{x}}}_{T})={P}_{i}^{{\prime} }({{\boldsymbol{x}}}_{T})$ for all $T\subset {S}_{i}\cap {S}_{i}^{{\prime} }$. This allows one to define H(T) for all $T\in {{ \mathcal S }}^{{\rm{c}}}$ unambiguously.

  • Note that although notions of causal correlations and causal Bayesian networks both share the 'causal' qualifier, they are distinct concepts: a causal correlation is not simply one that can be obtained from any particular causal Bayesian network.

  • For CIs between more than two variables, we use the natural extension of this notion. For example, if $P({x}_{i},{x}_{j},{x}_{k})=P({x}_{i})P({x}_{j})P({x}_{k})$ we write ${X}_{i}\perp \perp {X}_{j}\perp \perp {X}_{k}$.

  • We focus here on the bipartite case for concreteness, but the method readily generalizes to multipartite scenarios.

  • 10 

    In particular, by invoking Fine's Theorem we do not need to explicitly include the hidden variable Λ in this method, contrary to the DAG method outlined previously.

  • 11 

    In practice these can be obtained by taking the union of the extremal rays of the two cones ${{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}}$ and ${{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}}$ and solving the facet enumeration problem to obtain the inequality representation of $\mathrm{conv}({{\rm{\Gamma }}}^{{\rm{A}}\prec {\rm{B}}},{{\rm{\Gamma }}}^{{\rm{B}}\prec {\rm{A}}})$ using standard software for convex polyhedra such as PANDA [52].

  • 12 

    Note that ${(\tilde{{\boldsymbol{I}}})}_{\varnothing }$ multiplies $H(\varnothing )=0$ in the scalar product $\tilde{{\boldsymbol{I}}}\tilde{{\boldsymbol{h}}}$, so its value is irrelevant.

  • 13 

    Note that the nontriviality of the inequalities implies that such extremal rays indeed exist.

  • 14 

    Note that there are nonetheless extremal points that are only realizable by distributions with nonuniformly distributed inputs X and Y, and which violate some of the inequalities. However, such distributions never yield the maximal violation obtainable.

  • 15 

    This illustrates an important difference between the probabilistic and entropic frameworks: while all symmetries of a correlation obtained by flipping inputs and outputs (possibly conditioned on the local inputs for the latter) are equivalent in the probabilistic case (in the sense that if one violates a causal inequality, then all other ones violate a symmetry of that inequality) this is not the case in the entropic approach. The entropy vectors of two different symmetries of a correlation may be inequivalent, with one violating an entropic causal inequality while the other does not.

  • 16 

    Using this approach (and the distributions in equations (21) and (23)), one must take n = 17 (and thus ${2}^{17}$ inputs) in order to violate all the remaining inequalities (although n = 2 is sufficient for all but one of these inequalities). However, we expect that more intelligent approaches may allow them to be violated using fewer inputs.

  • 17 

    This is the result of the more general fact that different choices of ${ \mathcal S }$ may provide equivalent descriptions of marginal probabilities [56] and entropies [57].

  • 18 

    In equation (45) we abuse the notation slightly and denote by ${{\rm{\Gamma }}}_{\mathrm{BC}}^{\mathrm{causal}}$ the set of entropy vectors ${(H(T))}_{T\subset S}$—instead of ${(H(T))}_{T\subset \{Y,Z,B,C\}}$—which satisfy the constraints characterizing ${{\rm{\Gamma }}}_{\mathrm{BC}}^{\mathrm{causal}}$ as defined in equations (17) and (18). The transformation ${{ \mathcal T }}_{X}$, of which ${{ \mathcal T }}_{X}^{* }$ is the dual, is again defined in a similar way as in equation (16).

  • 19 

    In the original OCB game, only Bob receives the input $Y^{\prime} $, whereas in the variant we consider here, both parties have access to it.

  • 20 

    Reference [58] proposed this inequality in the framework of the original OCB game. However, one can easily see that our derivation of equation (52) in the more general scenario implies that it must hold in that framework too. Indeed, if only Bob receives $Y^{\prime} $, then this implies the additional constraint $H({A}_{0})=H({A}_{1})$ when ${\rm{A}}\prec {\rm{B}}$. The set of correlations obtainable is thus a subset of those obtainable in the more general version of the game, and thus equation (52) must again hold true.

  • 21 

    For compactness we generically write entropic inequalities not just in terms of Shannon entropies (as defined in equation (3)), but also in terms of conditional entropies (of the form $H(A| B):= H({AB})-H(B)$), of mutual information ($I(A:B):= H(A)+H(B)-H({AB})$) and of conditional mutual information ($I(A:B| C):= H({AC})+H({BC})-H({ABC})-H(C)$). The expressions given for the inequalities are of course not unique.

  • 22 

    A similar procedure can also be followed for the approach with counterfactual variables, in which case one obtains upper-bounds of H(Q) and $2H(Q)$ in equations (36) and (37) (or 0 for fixed-order correlations when $H(Q)=0$), before eliminating H(Q) and obtaining equations (36)–(37) again.
