1 Introduction

On a given probability space \((\Omega , \mathcal{F}, (\mathcal{F}_t), {\mathbb {P}})\) we consider a discrete time market consisting of n assets with bid prices \({\underline{S}}^i_t\) and ask prices \({\overline{S}}^i_t\), adapted to \(\mathcal{F}_t\), at which we can sell or buy the ith asset, respectively, at time t, where \(t=0,1,\ldots ,T\) and \(i=1,\ldots ,n\). We shall assume that the bid and ask prices are random variables such that \(0<{\underline{S}}^i_t < {\overline{S}}^i_t\) for \(t=0,1,\ldots ,T\) and \(i=1,\ldots ,n\). At each time t we can buy \(l_t^i\) or sell \(m_t^i\) units of the ith asset, based on the information \(\mathcal{F}_t\) available to us. We denote by \(x_t\) our position in a bank account and by \(y_t^i\) the number of units of the ith asset in our portfolio at time t. Our principal assumption is that we allow neither shortselling nor borrowing, so that both \(x_t\) and \(y_t^i\) for \(t=0,1,\ldots ,T\) and \(i=1,\ldots ,n\) should be nonnegative. As one can see in [10], such a restriction is typical when asset prices satisfy the so-called full support condition. To simplify notation we also assume a zero interest rate on the bank account. Our purpose is to maximize

$$\begin{aligned} J_{x,y}((l^i_t,m^i_t))={\mathbb {E}}\left[ U(x_T+y_T \cdot {\underline{S}}_T)\right] , \end{aligned}$$
(1.1)

where x is the initial bank position, \(y=(y^1,\ldots ,y^n)\) is the initial number of assets in our portfolio, U is a utility function which is assumed to be strictly increasing, strictly concave and continuously differentiable, and \(y_T \cdot {\underline{S}}_T:=\sum _{i=1}^n y^i_T {\underline{S}}^i_T\). In what follows we shall consider nonnegative coordinates \((x,y^1,\ldots ,y^n)\), assuming that at least one of them is different from 0. We denote \({\mathbb {R}}_{+}:=[0,\infty )\) and shall use the notation \({\mathbb {R}}_{+}^{n+1}:=[0,\infty )^{n+1} \setminus \left\{ (0,\ldots ,0)\right\} \) for \(n=1,2,\ldots \). In this paper we are interested in the characterization of optimal strategies for (1.1) and in the construction of a so-called shadow price, i.e. a price process taking values between the bid and ask prices such that the optimal value of the functional in the frictionless market with that price is the same as in the market with bid and ask prices. Interest in shadow prices started with the paper [8], where a shadow price was studied for the Black-Scholes model with transaction costs and discounted logarithmic utility function. Existence of a shadow price for a discrete time finite market was shown in [9]. In some cases we are not able to find a frictionless market with price process taking values between the bid and ask prices which gives the same optimal strategy as the market with transaction costs (see [1] and [4]). Shadow prices for continuous time market models have been studied intensively in a number of papers (see [3] and references therein), in the most general cases using duality theory, which yields an existential result. A discrete time shadow price was studied using duality in [4]. In the paper [10] a direct method based on dynamic programming was proposed. The advantage of the dynamic programming method is that it opens the way to approximation methods.
This paper generalizes [10], where a discrete time shadow price was constructed using a discrete time system of Bellman equations and certain geometric properties of transaction zones. We extend the results of [10], showing that under quite general assumptions the existence of a shadow price and the construction of optimal strategies can be reduced to the study of a one period static case. We furthermore show that the construction of optimal strategies and of a shadow price can be extended to the case of several assets (the multidimensional case). The situation is then more complicated, however, and the notation much heavier, so we restrict ourselves to the two asset case only. The paper consists of 7 sections. In Sect. 2 we solve the static portfolio optimization problem for one asset. In Sect. 3 we introduce the induction step, considering a two time moment problem. In Sect. 4 we solve the general dynamic problem with one asset. Sections 5 and 6 parallel Sects. 2, 3 and 4 for the case with two assets. An Appendix contains a number of auxiliary results used in the paper.
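The self-financing dynamics with bid and ask prices described above can be sketched in a few lines of code. The following Python fragment is an illustrative check of one trading step and of the admissibility constraints (no borrowing, no shortselling); all numerical values are chosen for illustration only and do not come from the paper.

```python
# Hedged sketch: one trading step in the one-asset bid-ask market above.
# Prices and positions below are illustrative, not taken from the paper.

def trade(x, y, l, m, s_bid, s_ask):
    """Buy l assets at the ask price and sell m assets at the bid price.

    Returns the new (bank, asset) position; raises if the step would
    require borrowing or short-selling, matching the paper's constraints.
    """
    x_new = x + s_bid * m - s_ask * l
    y_new = y - m + l
    if x_new < 0 or y_new < 0:
        raise ValueError("inadmissible: borrowing or short-selling")
    return x_new, y_new

# Buying 2 assets at ask 11 from a bank account of 100:
print(trade(100.0, 5.0, l=2.0, m=0.0, s_bid=9.0, s_ask=11.0))  # (78.0, 7.0)
```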

2 Static Two Dimensional Case

Let the function \(w : {\mathbb {R}}_{+}^2 \longrightarrow {\mathbb {R}}\) be strictly increasing with respect to both variables, strictly concave and continuously differentiable. We shall consider it as a one period value function which depends on the value of our bank account and number of assets in our portfolio. We recall our notation that \( {\mathbb {R}}_{+}^2:=[0,\infty )\times [0,\infty )\setminus \left\{ (0,0)\right\} \). Let \({\mathbb {D}} := \{ (\underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} : \quad 0< \underline{s} < \overline{s} \}\) and \({\hat{{\mathbb {D}}}} := \{ {\hat{s}} \in {\mathbb {R}}_{+} : \quad {\hat{s}} > 0 \}\).

For every \((x, y, \underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} \times {\mathbb {D}}\) define

$$\begin{aligned} {\mathbb {A}}(x, y, \underline{s}, \overline{s}) := \{ (l, m) \in {\mathbb {R}}_{+}^{2} : \qquad x + \underline{s} m - \overline{s} l \geqslant 0, \quad y - m + l \geqslant 0 \}, \end{aligned}$$
(2.1)

which is the set of possible investment strategies at time 0 for which our bank and asset accounts remain nonnegative under the bid price \(\underline{s}\) and the ask price \(\overline{s}\). For every \((x, y, {\hat{s}}) \in {\mathbb {R}}_{+}^{2} \times {\hat{{\mathbb {D}}}}\) put

$$\begin{aligned} {\hat{{\mathbb {A}}}}(x, y, {\hat{s}}) := \{ (l, m) \in {\mathbb {R}}_{+}^{2} : \qquad x + {\hat{s}} m - {\hat{s}} l \geqslant 0, \quad y - m + l \geqslant 0 \}, \end{aligned}$$
(2.2)

which in turn corresponds to nonnegative bank and asset accounts with a single price \({\hat{s}}\) (i.e. without proportional transaction costs). It is clear that for every \((x, y, \underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} \times {\mathbb {D}}\) and for every \({\hat{s}} \in [\underline{s}, \overline{s}]\) we have that

$$\begin{aligned} {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \subseteq {\hat{{\mathbb {A}}}}(x, y, {\hat{s}}) . \end{aligned}$$
(2.3)
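The inclusion (2.3) can be checked numerically. The following Python sketch verifies, on an illustrative grid of strategies and illustrative prices, that every strategy admissible under bid/ask prices remains admissible under any single price \({\hat{s}} \in [\underline{s}, \overline{s}]\).

```python
# Hedged numerical check of the inclusion (2.3). The grid of strategies
# and the prices below are illustrative choices, not from the paper.

def in_A(l, m, x, y, s_bid, s_ask):
    # Membership in A(x, y, s_bid, s_ask) of (2.1).
    return l >= 0 and m >= 0 and x + s_bid * m - s_ask * l >= 0 and y - m + l >= 0

def in_A_hat(l, m, x, y, s_hat):
    # Membership in the frictionless set (2.2): bid equals ask equals s_hat.
    return in_A(l, m, x, y, s_hat, s_hat)

x, y, s_bid, s_ask = 10.0, 4.0, 2.0, 3.0
grid = [i * 0.5 for i in range(25)]
for s_hat in (2.0, 2.5, 3.0):          # prices in [s_bid, s_ask]
    for l in grid:
        for m in grid:
            if in_A(l, m, x, y, s_bid, s_ask):
                assert in_A_hat(l, m, x, y, s_hat)
print("inclusion (2.3) holds on the grid")
```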

Notice that the set \({\mathbb {A}}(x, y, \underline{s}, \overline{s}) \) is convex and compact, while the set \({\hat{{\mathbb {A}}}}(x, y, {\hat{s}})\) is closed and convex. For every \((x, y, \underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} \times {\mathbb {D}}\) let

$$\begin{aligned} {\overline{w}}(x, y, \underline{s}, \overline{s}) := \sup _{(l, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s})} w (x + \underline{s} m - \overline{s} l, y - m + l) . \end{aligned}$$
(2.4)

Clearly there is a maximizer for which the supremum on the right hand side of (2.4) is attained. For every \((x, y, {\hat{s}}) \in {\mathbb {R}}_{+}^{2} \times {\hat{{\mathbb {D}}}}\) let

$$\begin{aligned} {\hat{w}}(x, y, {\hat{s}}) := \sup _{(l, m) \in {\hat{{\mathbb {A}}}}(x, y, {\hat{s}})} w (x + {\hat{s}} m - {\hat{s}} l, y - m + l) . \end{aligned}$$
(2.5)

There is no problem with the existence of a maximizer in (2.5), since we can restrict the maximization to the set \({\hat{{\mathbb {A}}}}^1(x, y, {\hat{s}})\cup {\hat{{\mathbb {A}}}}^2(x, y, {\hat{s}})\), where \({\hat{{\mathbb {A}}}}^1(x, y, {\hat{s}})=\{ (l,0)\in {\mathbb {R}}_{+}^{2} : x\geqslant {\hat{s}} l \}\) and \({\hat{{\mathbb {A}}}}^2(x, y, {\hat{s}})=\{ (0,m)\in {\mathbb {R}}_{+}^{2} : m\leqslant y \}\), and each of these sets is convex and compact. From (2.3) we get that for every \((x, y, \underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} \times {\mathbb {D}}\) and for every \({\hat{s}} \in [\underline{s}, \overline{s}]\) we have that

$$\begin{aligned} {\overline{w}}(x, y, \underline{s}, \overline{s}) \leqslant {\hat{w}}(x, y, {\hat{s}})= {\overline{w}}(x, y, {\hat{s}}, {\hat{s}}), \end{aligned}$$
(2.6)

where we naturally extend the meaning of \({\overline{w}}(x, y, \underline{s}, \overline{s})\) to the case when \(\underline{s} = \overline{s}\). In what follows we shall characterize the optimal \((l, m)\) for which the supremum in (2.4) is attained.

From now on we assume that \((\underline{s}, \overline{s}) \in {\mathbb {D}}\) is fixed.

Given \(c > 0\) and \(s\in {\hat{{\mathbb {D}}}}\) consider the function from [0, c] to \({\mathbb {R}}\) defined by

$$\begin{aligned} H_{c,s}(x) := w \Big ( x, \frac{c - x}{s} \Big ), \end{aligned}$$
(2.7)

where the function \(H_{c,s}\) corresponds to the value function with wealth c and price s.

Clearly, for every \(c > 0\) we have that

$$\begin{aligned} H_{c,\underline{s}}(0) = w \Big ( 0, \frac{c}{\underline{s}} \Big ) > H_{c,\overline{s}}(0) = w \Big ( 0, \frac{c}{\overline{s}} \Big ) \quad \text{ and } \quad H_{c,\underline{s}}(c) = w(c, 0) = H_{c,\overline{s}}(c). \end{aligned}$$
(2.8)

Moreover, for every \((x, y, \underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} \times {\mathbb {D}}\) we have

$$\begin{aligned} {\hat{w}}(x, y, \underline{s}) = \sup _{u \in [0, x + \underline{s} y]} H_{x + \underline{s} y,\underline{s}}(u) \quad \text{ and } \quad {\hat{w}}(x, y, \overline{s}) = \sup _{u \in [0, x + \overline{s} y]} H_{x + \overline{s} y,\overline{s}}(u). \end{aligned}$$
(2.9)
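The reduction (2.9) of the frictionless problem to a one-dimensional maximisation of \(H_{c,s}\) over \([0, x + s y]\) can be sketched numerically. In the Python fragment below the utility \(w(x, y) = \sqrt{x} + \sqrt{y}\) is an illustrative stand-in (strictly increasing, strictly concave, continuously differentiable), not a utility used in the paper; for it the maximum of \(H_{c,s}\) is attained at \(u^* = sc/(1+s)\) with value \(\sqrt{c(1+s)/s}\), which the grid search below reproduces.

```python
# Hedged sketch of (2.9): the frictionless value w_hat is a one-dimensional
# maximisation of H_{c,s} over [0, c] with c = x + s*y. The test utility
# w(x, y) = sqrt(x) + sqrt(y) is an illustrative choice, not from the paper.
import math

def w(x, y):
    return math.sqrt(x) + math.sqrt(y)

def H(c, s, u):
    # H_{c,s}(u) = w(u, (c - u)/s), eq. (2.7)
    return w(u, (c - u) / s)

def w_hat(x, y, s, n=20_000):
    # Grid approximation of sup_{u in [0, c]} H_{c,s}(u), eq. (2.9).
    c = x + s * y
    return max(H(c, s, i * c / n) for i in range(n + 1))

x, y, s = 4.0, 1.0, 2.0
c = x + s * y
# For this utility the maximiser is u* = s*c/(1+s), with value sqrt(c*(1+s)/s) = 3.
assert abs(w_hat(x, y, s) - math.sqrt(c * (1 + s) / s)) < 1e-3
print(round(w_hat(x, y, s), 4))  # 3.0
```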

For every \(c > 0\) and for every \(x \in [0, c]\) we also have the following derivatives of the functions \(H_{c,\underline{s}}\) and \(H_{c,\overline{s}}\) at point x (at \(x=0\) or \(x=c\) we have right or left derivatives respectively):

$$\begin{aligned} \begin{aligned} H_{c,\underline{s}}'(x) = w_{x} \Big ( x, \frac{c - x}{\underline{s}} \Big ) - \frac{1}{\underline{s}} w_{y} \Big ( x, \frac{c - x}{\underline{s}} \Big ) , \\ H_{c,\overline{s}}'(x) = w_{x} \Big ( x, \frac{c - x}{\overline{s}} \Big ) - \frac{1}{\overline{s}} w_{y} \Big ( x, \frac{c - x}{\overline{s}} \Big ) . \end{aligned} \end{aligned}$$
(2.10)

Furthermore, for every \(c > 0\) for the left hand limits of \(H_{c,\underline{s}}'\) and \(H_{c,\overline{s}}'\) at point c we have

$$\begin{aligned} H_{c,\underline{s}}'(c - ) = w_{x}(c, 0) - \frac{1}{\underline{s}}w_{y}(c, 0) < w_{x}(c, 0) - \frac{1}{\overline{s}} w_{y}(c, 0) = H_{c,\overline{s}}'(c - ). \end{aligned}$$
(2.11)

Lemma 2.1

For every \(c > 0\) and \(s\in {\hat{{\mathbb {D}}}}\) the functions \(H_{c,s}\) are strictly concave on [0, c].

Proof

This is a consequence of the fact that for every \(c > 0\) and \(s\in {\hat{{\mathbb {D}}}}\) the function \(H_{c,s}\) is the composition of the strictly concave function w with an injective linear map, which preserves strict concavity. \(\square \)

The function \({\hat{w}}\) depends strongly on the asset price s. The result below shows that when both market positions in (x, y) are strictly positive this dependence is injective.

Proposition 2.2

For every \((x, y) \in {\mathbb {R}}_{+}^{2}\) such that \(x, y > 0\) we cannot have

$$\begin{aligned} {\hat{w}}(x, y, \overline{s}) = {\hat{w}}(x, y, \underline{s}). \end{aligned}$$
(2.12)

The only points where such equality may hold are the points (x, 0) when \(H_{x,\underline{s}}'(x-)\geqslant 0\) and the points (0, y) when \(H_{y\overline{s},\overline{s}}'(0+)\leqslant 0\).

Proof

Let \((x, y) \in {\mathbb {R}}_{+}^{2}\) be such that (2.12) holds. Since w is increasing with respect to both variables and for \(u\in [0,x)\) we have \(w(u,{x+y \overline{s}-u \over \overline{s}})< w(u,{x+y \underline{s}-u \over \underline{s}})\), while for \(u\in (x,x+y \underline{s}]\) we have \(w(u,{x+y \underline{s}-u \over \underline{s}})<w(u,{x+y \overline{s}-u \over \overline{s}})\), we therefore have

$$\begin{aligned} {\hat{w}}(x, y, \overline{s}) = {\hat{w}}(x, y, \underline{s})=w(x,y). \end{aligned}$$
(2.13)

This means that it is optimal to do nothing at (x, y) both under the price \(\underline{s}\) and under \(\overline{s}\). Furthermore \({\hat{w}}(x, y, {\hat{s}}) = w(x,y)\) for every \({\hat{s}} \in [\underline{s}, \overline{s}]\). Assume now that \(x, y>0\). Then for every \({\hat{s}} \in [\underline{s}, \overline{s}]\) the function \(F : \big [ - \frac{x}{{\hat{s}}}, y \big ] \longrightarrow {\mathbb {R}}\) given by

$$\begin{aligned} F(u) := w(x + {\hat{s}} u, y - u) \end{aligned}$$

achieves its maximum for \(u = 0\). This means that

$$\begin{aligned} F'(0) = 0 = w_{x}(x, y) {\hat{s}} - w_{y}(x, y) \end{aligned}$$

which, since it holds for every \({\hat{s}} \in [\underline{s}, \overline{s}]\), can only happen when \(w_{x}(x, y) = 0\); this contradicts the fact that w is strictly increasing. By Lemma 2.1 the function \(H_{c,s}\) is strictly concave. Therefore if \(H_{x,\underline{s}}'(x-) \geqslant 0\), then \(H_{x,\underline{s}}\) is increasing on [0, x]. Using (2.11) we have that \(H_{x,\overline{s}}'(x-) > 0\). Taking into account strict concavity of \(H_{x,\overline{s}}\), we get that the function \(H_{x,\overline{s}}\) is also strictly increasing on [0, x]. Consequently, it is clear that

$$\begin{aligned} {\hat{w}}(x, 0, \overline{s}) = {\hat{w}}(x, 0, \underline{s}) = w(x, 0). \end{aligned}$$

If \(H_{x,\underline{s}}'(x-) < 0\), then the supremum of \(H_{x,\underline{s}}\) on [0, x] is attained at some \(x' \in [0, x)\) and \({\hat{w}}(x, 0, \underline{s}) > w(x, 0)\).

If \(H_{\overline{s} y, \overline{s}}'(0+) \leqslant 0\), then by strict concavity of \(H_{\overline{s} y,\overline{s}}\) we know that it is decreasing on \([0, \overline{s} y]\) and therefore

$$\begin{aligned} H_{\overline{s} y,\overline{s}}(0) = w(0, y) = {\hat{w}}(0, y, \overline{s}). \end{aligned}$$

Since

$$\begin{aligned} H_{\overline{s} y,\overline{s}}'(0+) = w_{x}(0, y) - \frac{1}{\overline{s}} w_{y}(0, y) > w_{x}(0, y) - \frac{1}{\underline{s}} w_{y}(0, y) = H_{\underline{s} y,\underline{s}}'(0+), \end{aligned}$$

then we also have that \(H_{\underline{s} y,\underline{s}}'(0+) < 0\) and \(H_{\underline{s} y,\underline{s}}(0) = w(0, y) = {\hat{w}}(0, y, \underline{s}).\) If \(H_{\overline{s} y,\overline{s}}'(0+) > 0\), then the supremum of \(H_{\overline{s} y,\overline{s}}\) on \([0, \overline{s} y]\) is attained for some \(x' \in (0, \overline{s} y]\) and therefore \({\hat{w}}(0, y, \overline{s}) > w(0, y)\). \(\square \)

Remark 2.3

Notice that (2.12) cannot happen for \(x>0\) and \(y>0\) when w(x, y) is concave (not necessarily strictly concave), differentiable and increasing with respect to both coordinates. The strict concavity assumption will be important for studying differentiability of \({\hat{w}}\).

For \(c > 0\) and \(s > 0\) let

$$\begin{aligned} h(c, s) := {{\,\mathrm{arg\,max}\,}}_{\left\{ (x,y)\in {\mathbb {R}}_{+}^2:\ x+sy=c\right\} } w(x, y) . \end{aligned}$$
(2.14)

Clearly \(h(c,s)=\left( \begin{matrix} h_0(c,s) \\ h_1(c,s) \end{matrix} \right) \) and we have \(h_1(c,s)={c-h_0(c,s) \over s}\). Note that h(c, s) is the optimal portfolio corresponding to wealth c and asset price s. One can notice furthermore that

$$\begin{aligned} h_0(c, s) = {{\,\mathrm{arg\,max}\,}}_{x \in [0, c]} w \Big ( x, \frac{c - x}{s} \Big ). \end{aligned}$$
(2.15)
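Since \(H_{c,s}\) is strictly concave (Lemma 2.1), the selector \(h_0\) in (2.15) can be computed by any unimodal search on \([0, c]\). The Python sketch below uses golden-section search; the utility \(w(x, y) = \sqrt{x} + \sqrt{y}\) is again an illustrative choice, for which \(h_0(c, s) = sc/(1+s)\) in closed form.

```python
# Hedged sketch of the selector h_0 in (2.15), computed by golden-section
# search (justified by strict concavity of H_{c,s}, Lemma 2.1). The utility
# w(x, y) = sqrt(x) + sqrt(y) is an illustrative stand-in, not from the paper.
import math

def w(x, y):
    return math.sqrt(x) + math.sqrt(y)

def h0(c, s, tol=1e-10):
    f = lambda u: w(u, (c - u) / s)          # H_{c,s}(u)
    a, b = 0.0, c
    phi = (math.sqrt(5) - 1) / 2
    while b - a > tol:
        u1, u2 = b - phi * (b - a), a + phi * (b - a)
        if f(u1) < f(u2):
            a = u1
        else:
            b = u2
    return (a + b) / 2

c, s = 6.0, 2.0
# Closed form for this utility: h_0(c, s) = s*c/(1+s) = 4.
assert abs(h0(c, s) - s * c / (1 + s)) < 1e-6
print(round(h0(c, s), 6))  # 4.0
```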

From the proof of Proposition 2.2 we have the following

Corollary 2.4

Let \(c > 0\) be such that \(h_0 (c,\underline{s}) = c\). Then \(h_0 (c,\overline{s}) = c\). Moreover, if \(y > 0\) is such that \(h_0(\overline{s} y,\overline{s}) = 0\), then also \(h_0 (\underline{s} y, \underline{s}) = 0\).

Proof

Taking into account concavity of \(H_{c,\underline{s}}\) we get \(h_0 (c,\underline{s}) = c\) only when \(H_{c,\underline{s}}'(c-) \geqslant 0\). Then by (2.11) also \(H_{c,\overline{s}}'(c-) > 0\). In effect, the concavity of \(H_{c,\overline{s}}\) implies that \(h_0 (c,\overline{s}) = c\).

Similarly, if \(y > 0\) is such that \(h_0(\overline{s} y,\overline{s}) = 0\), then \(H_{\overline{s} y,\overline{s}}'(0+) \leqslant 0\), which implies that \(H_{\underline{s} y,\underline{s}}'(0+) < 0\) and \(h_0 (\underline{s} y,\underline{s}) = 0\). \(\square \)

Taking into account strict concavity of the function w we can show that the selector h defined in (2.14) is continuous. Namely, we have

Lemma 2.5

The function h is continuous on \((0, \infty ) \times (0, \infty )\).

Proof

Let \((c, s) \in (0, \infty ) \times (0, \infty )\) be arbitrary and let \((c_{n}, s_{n})_{n = 1}^{\infty }\) be an arbitrary sequence from \((0, \infty ) \times (0, \infty )\) which converges to (c, s). It suffices to show that

$$\begin{aligned} \sup _{x \in [0, c_{n}]} w \Big ( x, \frac{c_{n} - x}{s_{n}} \Big ) \xrightarrow {n \longrightarrow \infty } \sup _{x \in [0, c]} w \Big ( x, \frac{c - x}{s} \Big ) . \end{aligned}$$
(2.16)

We have

$$\begin{aligned} \begin{aligned} \Bigg | \sup _{x \in [0, c_{n}]}&w \Big ( x, \frac{c_{n} - x}{s_{n}} \Big ) - \sup _{x \in [0, c]} w \Big ( x, \frac{c - x}{s} \Big ) \Bigg | \\&\leqslant \Bigg | \sup _{x \in [0, c_{n}]} w \Big ( x, \frac{(c_{n} - x)^{+}}{s_{n}} \Big ) - \sup _{x \in [0, c_{n}]} w \Big ( x, \frac{(c - x)^{+}}{s} \Big ) \Bigg | \\&\ \ \ \ + \Bigg | \sup _{x \in [0, c_{n}]} w \Big ( x, \frac{(c - x)^{+}}{s} \Big ) - \sup _{x \in [0, c]} w \Big ( x, \frac{(c - x)^{+}}{s} \Big ) \Bigg | =: I_{n} + II_{n} . \end{aligned} \end{aligned}$$

Since w is uniformly continuous on compact sets and \((c_{n}, s_{n}) \xrightarrow {n \longrightarrow \infty } (c, s)\), we get that \(I_{n} \xrightarrow {n \longrightarrow \infty } 0\). As w is continuous and \(c_{n} \xrightarrow {n \longrightarrow \infty } c\), we also have that \(II_{n} \xrightarrow {n \longrightarrow \infty } 0\).

For every \(n \in {\mathbb {N}}\) we have that \(h_0(c_{n}, s_{n}) \leqslant c_{n}\). Thus, if for some \(d \in {\mathbb {R}}\) we have that \(h_0(c_{n}, s_{n}) \xrightarrow {n \longrightarrow \infty } d\), then it must be \(d \leqslant c\) and

$$\begin{aligned} w \Big ( h_0(c_{n}, s_{n}), \frac{c_{n} - h_0(c_{n}, s_{n})}{s_{n}} \Big ) \xrightarrow {n \longrightarrow \infty } w \Big ( d, \frac{c - d}{s} \Big ) . \end{aligned}$$

By (2.16), this implies that

$$\begin{aligned} w \Big ( d, \frac{c - d}{s} \Big ) = \sup _{x \in [0, c]} w \Big ( x, \frac{c - x}{s} \Big ) . \end{aligned}$$

By strict concavity of the mapping \(x \longmapsto w \big ( x, \frac{c - x}{s} \big )\), we get that \(h_0(c, s) = d\). Therefore, \(h_0(c_{n}, s_{n}) \xrightarrow {n \longrightarrow \infty } h_0(c, s)\). \(\square \)

The next Corollary characterizes properties of the graph of h.

Corollary 2.6

For different values of s the graphs of the mappings \(c \longmapsto h(c, s)\) have no common points, except for points \((x, 0) \in {\mathbb {R}}_{+}^{2}\) with \(H_{x,s}'(x-) \geqslant 0\) and points \((0, y) \in {\mathbb {R}}_{+}^{2}\) with \(H_{s y,s}'(0+) \leqslant 0\).

Proof

Let \(c_{1}, c_{2} > 0\) and \((s_{1},s_{2}) \in {\mathbb {D}}\) be such that \(s_{1}\leqslant s_{2}\) and

$$\begin{aligned} \Big ( h_0(c_{1}, s_{1}), \frac{c_{1} - h_0(c_{1}, s_{1})}{s_{1}} \Big ) = \Big ( h_0(c_{2}, s_{2}), \frac{c_{2} - h_0(c_{2}, s_{2})}{s_{2}} \Big ) =: (x, y). \end{aligned}$$

Then \(h_0(c_{1}, s_{1}) = h_0(c_{2}, s_{2})\) and \(\frac{c_{1} - h_0(c_{1}, s_{1})}{s_{1}} = \frac{c_{2} - h_0(c_{2}, s_{2})}{s_{2}}\).

Consequently, \({\hat{w}}(x, y, s_{1}) = {\hat{w}}(x, y, s_{2}) = w(x, y)\), which by Proposition 2.2 can happen only when \(y = 0\) and \(H_{x, s_{1}}'(x-) \geqslant 0\) or when \(x = 0\) and \(H_{s_{2} y, s_{2}}'(0+) \leqslant 0\). \(\square \)

So far we have characterized the function \({\hat{w}}\), which corresponds to a single asset price s. Now we study the function \({\overline{w}}\), which is the optimal value corresponding to the bid price \(\underline{s}\) and the ask price \(\overline{s}\).

Lemma 2.7

For every \((x, y, \underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} \times {\mathbb {D}}\) we have that

$$\begin{aligned} {\overline{w}}(x, y, \underline{s}, \overline{s}) = \max \Bigg \{ \sup _{u \in [x, x + \underline{s} y]} w \Big ( u, y+ \frac{x - u}{\underline{s}} \Big ), \sup _{u \in [0, x]} w \Big ( u, y + \frac{x - u}{\overline{s}} \Big ) \Bigg \}, \end{aligned}$$
(2.17)

and

$$\begin{aligned} {\overline{w}}(x, y, \underline{s}, \overline{s}) = \max \Bigg \{ \sup _{u \in [0, y]} w \Big ( x+(y-u)\underline{s}, u \Big ), \sup _{u \in [y, y+{x\over \overline{s}}]} w \Big ( x-(u-y)\overline{s},u \Big ) \Bigg \}. \end{aligned}$$
(2.18)

Furthermore

$$\begin{aligned} {\overline{w}}(x, y, \underline{s}, \overline{s}) \leqslant \min \big \{ {\hat{w}}(x, y, \underline{s}), {\hat{w}}(x, y, \overline{s}) \big \} . \end{aligned}$$
(2.19)

Proof

Notice that the meaning of (2.17) and (2.18) is that, starting from (x, y), we can buy assets at \(\overline{s}\) or sell them at \(\underline{s}\). By (2.6) we immediately get (2.19). \(\square \)

The functions \({\bar{w}}\) and \({\hat{w}}\) inherit the concavity of the function w, which will be important for further studies. We have

Proposition 2.8

For every \((\underline{s}, \overline{s}) \in {\mathbb {D}}\) the functions \({\overline{w}}(\cdot , \cdot , \underline{s}, \overline{s}), {\hat{w}}(\cdot , \cdot , \underline{s})\) and \({\hat{w}}(\cdot , \cdot , \overline{s})\) are concave on \({\mathbb {R}}_{+}^{2}\). In particular, they are continuous on \({\mathbb {R}}_{+}^{2}\).

Proof

We prove concavity of \({\overline{w}}\) with respect to the first two coordinates. Let \((x_1,y_1), (x_2,y_2) \in {\mathbb {R}}_{+}^{2}\) and let \(({\hat{l}}_1, {\hat{m}}_1) \in {\mathbb {A}}(x_1, y_1, \underline{s}, \overline{s}) \) and \(({\hat{l}}_2, {\hat{m}}_2) \in {\mathbb {A}}(x_2, y_2, \underline{s}, \overline{s}) \) be maximizers in \({\overline{w}}\) for \((x_1,y_1)\) and \((x_2,y_2)\) respectively. For \(\lambda \in (0,1)\) we have that

$$\begin{aligned} (\lambda {\hat{l}}_1 + (1-\lambda ){\hat{l}}_2, \lambda {\hat{m}}_1 + (1-\lambda ){\hat{m}}_2)\in {\mathbb {A}}(\lambda x_1 + (1-\lambda ) x_2, \lambda y_1 + (1-\lambda ) y_2, \underline{s}, \overline{s}) \end{aligned}$$

and consequently

$$\begin{aligned} \begin{aligned}&{\overline{w}}(\lambda x_1 + (1-\lambda ) x_2,\lambda y_1 + (1-\lambda ) y_2,\underline{s}, \overline{s})\geqslant w(\lambda x_1 + (1-\lambda ) x_2 +\underline{s} (\lambda {\hat{m}}_1 \\&+(1-\lambda ){\hat{m}}_2)-\overline{s}(\lambda {\hat{l}}_1 + (1-\lambda ){\hat{l}}_2), \lambda y_1 + (1-\lambda ) y_2\\&- (\lambda {\hat{m}}_1 + (1-\lambda ){\hat{m}}_2) + (\lambda {\hat{l}}_1 + (1-\lambda ){\hat{l}}_2))\geqslant \\&\lambda w(x_1+\underline{s}{\hat{m}}_1 - \overline{s}{\hat{l}}_1,y_1-{\hat{m}}_1+{\hat{l}}_1)+ (1-\lambda ) w(x_2+\underline{s}{\hat{m}}_2 - \overline{s}{\hat{l}}_2,y_2-{\hat{m}}_2+{\hat{l}}_2) \\&=\lambda {\overline{w}}(x_1,y_1,\underline{s}, \overline{s})+(1-\lambda ){\overline{w}}(x_2,y_2,\underline{s}, \overline{s}). \end{aligned} \end{aligned}$$
(2.20)

The concavity of \({\hat{w}}\) can be shown in the same way. Continuity follows directly from concavity. \(\square \)

Remark 2.9

Note that in fact we have in (2.20) strict inequality whenever

$$\begin{aligned} (x_1+\underline{s}{\hat{m}}_1 - \overline{s}{\hat{l}}_1,y_1-{\hat{m}}_1+{\hat{l}}_1)\ne (x_2+\underline{s}{\hat{m}}_2 - \overline{s}{\hat{l}}_2,y_2-{\hat{m}}_2+{\hat{l}}_2), \end{aligned}$$

and equality when the optimal strategies lead to the same point in \({\mathbb {R}}_{+}^{2}\).

For \((\underline{s}, \overline{s}) \in {\mathbb {D}}\) we define portfolio zones

$$\begin{aligned} \begin{aligned} \mathbf {NT}&(\underline{s}, \overline{s}) := \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad {\overline{w}}(x, y, \underline{s}, \overline{s}) = w(x, y) \big \}, \\ \mathbf {B}&(\underline{s}, \overline{s}) := \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad {\overline{w}}(x, y, \underline{s}, \overline{s}) = {\hat{w}}(x, y, \overline{s}) \big \} \setminus \mathbf {NT}(\underline{s}, \overline{s}) \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \mathbf {S}(\underline{s}, \overline{s}) := \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad {\overline{w}}(x, y, \underline{s}, \overline{s}) = {\hat{w}}(x, y, \underline{s}) \big \} \setminus \mathbf {NT}(\underline{s}, \overline{s}) . \end{aligned}$$

For \((\underline{s}, \overline{s}) \in {\mathbb {D}}\) define also

$$\begin{aligned} \mathbf {NT}^{\circ }(\underline{s}, \overline{s}) := \big \{ (x, y) \in {\mathbb {R}}_{+}^{2}: \quad h_0 (x + \underline{s} y, \underline{s})< x < h_0 (x + \overline{s} y, \overline{s}) \big \} . \end{aligned}$$

The above sets have the following meaning: \(\mathbf {NT}(\underline{s}, \overline{s})\) is the no transaction zone, i.e. the set of positions from which we do not change our portfolio; \(\mathbf {B}(\underline{s}, \overline{s})\) is the buying zone, the set of positions in which we buy assets at the price \(\overline{s}\) until we enter the \(\mathbf {NT}(\underline{s}, \overline{s})\) zone; and \(\mathbf {S}(\underline{s}, \overline{s})\) is the selling zone, the set of positions in which we sell assets at the price \(\underline{s}\) until we enter the \(\mathbf {NT}(\underline{s}, \overline{s})\) zone. These zones can be characterized in terms of the selector \(h_0\). Namely,

Theorem 2.10

Let \((\underline{s}, \overline{s}) \in {\mathbb {D}}\). Then

$$\begin{aligned} \begin{aligned} \mathbf {NT}&(\underline{s}, \overline{s}) = \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad h_0 (x + \underline{s} y, \underline{s}) \leqslant x \leqslant h_0 (x + \overline{s}y,\overline{s}) \big \}, \\ \mathbf {B}&(\underline{s}, \overline{s}) = \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad x > h_0 (x + \overline{s} y, \overline{s}) \big \}, \\ \mathbf {S}&(\underline{s}, \overline{s}) = \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad x < h_0 (x + \underline{s} y, \underline{s}) \big \} \end{aligned} \end{aligned}$$

and for every \((x, y) \in \mathbf {NT}^{\circ }(\underline{s}, \overline{s})\) we have strict inequality in (2.19). Moreover, \(\mathbf {NT}^{\circ }(\underline{s}, \overline{s})\) is an open set and its closure (excluding point (0, 0)) coincides with \(\mathbf {NT}(\underline{s}, \overline{s})\).

Proof

Let \((x, y) \in {\mathbb {R}}_{+}^{2}\). If \({\overline{w}}(x, y, \underline{s}, \overline{s}) = w(x, y)\), then by (2.17) we are not able to increase the value w(x, y) by buying or selling assets. Consequently, by strict concavity of w, we must have \(h_0 (x + \underline{s} y, \underline{s}) \leqslant x\) and \(h_0 (x + \overline{s} y, \overline{s}) \geqslant x\). If \((x, y) \in \mathbf {B}(\underline{s}, \overline{s})\), then by (2.17) we have that \(x > h_0 (x + \overline{s} y, \overline{s})\). If \((x, y) \in \mathbf {S}(\underline{s}, \overline{s})\), then by (2.17) we have that \(x < h_0 (x + \underline{s} y, \underline{s})\). If \(x \geqslant h_0 (x + \overline{s} y, \overline{s})\), then \({\overline{w}}(x, y, \underline{s}, \overline{s}) = {\hat{w}}(x, y, \overline{s})\) and we have equality in (2.19). If \(x \leqslant h_0 (x + \underline{s} y, \underline{s})\), then \({\overline{w}}(x, y, \underline{s}, \overline{s}) = {\hat{w}}(x, y, \underline{s})\) and we have equality in (2.19). If \(h_0 (x + \underline{s} y, \underline{s})< x < h_0 (x + \overline{s} y, \overline{s})\), then \({\overline{w}}(x, y, \underline{s}, \overline{s}) < {\hat{w}}(x, y, \underline{s})\) and \({\overline{w}}(x, y, \underline{s}, \overline{s}) < {\hat{w}}(x, y, \overline{s})\), so we have strict inequality in (2.19). By the continuity of the mappings \(c \longmapsto h_0 (c, \underline{s})\) and \(c \longmapsto h_0 (c, \overline{s})\), the set \(\mathbf {NT}^{\circ }(\underline{s}, \overline{s})\) is open. Clearly, its closure (excluding the point (0, 0)) coincides with \(\mathbf {NT}(\underline{s}, \overline{s})\). This ends the proof. \(\square \)
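The classification in Theorem 2.10 can be sketched numerically. In the Python fragment below the selector has the closed form \(h_0(c, s) = sc/(1+s)\), which corresponds to the illustrative utility \(w(x, y) = \sqrt{x} + \sqrt{y}\); the prices and positions are likewise illustrative.

```python
# Hedged numerical sketch of Theorem 2.10: classify a position (x, y) into
# the zones B, S, NT via the selector h_0. The closed form below corresponds
# to the illustrative utility w(x, y) = sqrt(x) + sqrt(y), not one from the paper.
def h0(c, s):
    return s * c / (1 + s)   # closed-form selector for the sqrt utility

def zone(x, y, s_bid, s_ask):
    if x > h0(x + s_ask * y, s_ask):
        return "B"           # buy at the ask until NT is reached
    if x < h0(x + s_bid * y, s_bid):
        return "S"           # sell at the bid until NT is reached
    return "NT"              # no transaction

s_bid, s_ask = 1.0, 4.0
print(zone(20.0, 1.0, s_bid, s_ask),   # much cash, few assets
      zone(1.0, 9.0, s_bid, s_ask),    # little cash, many assets
      zone(4.0, 2.0, s_bid, s_ask))    # prints: B S NT
```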

Using (2.18) we can obtain an alternative version of the formulae for the zones in terms of the selector \(h_1\).

Proposition 2.11

Let \((\underline{s}, \overline{s}) \in {\mathbb {D}}\). Then

$$\begin{aligned} \begin{aligned} \mathbf {NT}^{\circ }&(\underline{s}, \overline{s}) := \big \{ (x, y) \in {\mathbb {R}}_{+}^{2}: \quad h_1 (x + \overline{s} y, \overline{s})< y< h_1 (x + \underline{s} y, \underline{s}) \big \}, \\ \mathbf {NT}&(\underline{s}, \overline{s}) = \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad h_1 (x + \overline{s} y, \overline{s}) \leqslant y \leqslant h_1 (x + \underline{s}y,\underline{s}) \big \}, \\ \mathbf {B}&(\underline{s}, \overline{s}) = \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad y < h_1 (x + \overline{s} y, \overline{s}) \big \}, \\ \mathbf {S}&(\underline{s}, \overline{s}) = \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad y > h_1 (x + \underline{s} y, \underline{s}) \big \}. \end{aligned} \end{aligned}$$

We also immediately have

Corollary 2.12

For every \((x, y, \underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} \times {\mathbb {D}}\) the following implications hold:

$$\begin{aligned} (x, y) \in \mathbf {B}(\underline{s}, \overline{s}) \Longrightarrow {\overline{w}}(x, y, \underline{s}, \overline{s}) = w \Big (h_0 (x + \overline{s} y, \overline{s}), \frac{x + \overline{s} y - h_0 (x + \overline{s} y, \overline{s})}{\overline{s}} \Big ) \end{aligned}$$

and

$$\begin{aligned} (x, y) \in \mathbf {S}(\underline{s}, \overline{s}) \Longrightarrow {\overline{w}}(x, y, \underline{s}, \overline{s}) = w \Big ( h_0 (x + \underline{s} y, \underline{s}), \frac{x + \underline{s} y - h_0 (x + \underline{s} y, \underline{s})}{\underline{s}} \Big ) . \end{aligned}$$

For a given \((x,y) \in {\mathbb {R}}_{+}^{2}\) we are looking for \({\hat{s}}\in [\underline{s},\overline{s}]\) such that \({\overline{w}}(x, y, \underline{s}, \overline{s})={\hat{w}}(x,y,{\hat{s}})\). Such a value, if it exists, is called a shadow price (see [10] for more explanation). It clearly depends on the value of (x, y). It is rather obvious (see again [10]) that the shadow price for (x, y) in \(\mathbf {B} (\underline{s}, \overline{s})\) is equal to \(\overline{s}\), while for (x, y) in \(\mathbf {S}(\underline{s}, \overline{s})\) it is equal to \(\underline{s}\). It remains to find the shadow price for \((x,y)\in \mathbf {NT} (\underline{s}, \overline{s})\), i.e. in the no transaction zone corresponding to the bid price \(\underline{s}\) and the ask price \(\overline{s}\).

Proposition 2.13

Let \((x, y, \underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} \times {\mathbb {D}}\) be such that \((x, y) \in \mathbf {NT}^{\circ }(\underline{s}, \overline{s})\) and \(y > 0\). Then there exists a unique \({\hat{s}}(x, y) \in (\underline{s}, \overline{s})\) such that \(x = h_0 \big ( x + {\hat{s}}(x, y) y, {\hat{s}}(x, y) \big )\) and \(y = h_1 \big ( x + {\hat{s}}(x, y) y, {\hat{s}}(x, y) \big )\). Moreover, the function \({\hat{s}}\) can be extended to a continuous function on \(\big ( (0, \infty ) \times (0, \infty ) \big ) \cap \mathbf {NT}(\underline{s}, \overline{s})\).

Proof

By Theorem 2.10, we have that \(h_0(x + \underline{s} y, \underline{s})< x < h_0(x + \overline{s} y, \overline{s})\). By Lemma 2.5, the mapping \(s \longmapsto h_0(x + s y, s)\) is continuous. Therefore, there exists \({\hat{s}}(x, y) \in (\underline{s}, \overline{s})\) such that \(h_0(x + {\hat{s}}(x, y) y, {\hat{s}}(x, y)) = x\). This means that \({\hat{w}}(x, y, {\hat{s}}(x, y)) = w(x, y)\). By Corollary 2.6, \({\hat{s}}(x, y)\) is unique. By Proposition 2.11, \(h_1 (x + \overline{s} y, \overline{s})< y < h_1 (x + \underline{s} y, \underline{s})\), and by Lemma 2.5 the mapping \(s \longmapsto h_1(x + s y, s)\) is continuous. Therefore, there exists \(\tilde{s}(x, y) \in (\underline{s}, \overline{s})\) such that \(h_1(x + \tilde{s}(x, y) y, \tilde{s}(x, y)) = y\) and \({\hat{w}}(x, y, \tilde{s}(x, y)) = w(x, y)\). By uniqueness, \( \tilde{s}(x, y)={\hat{s}}(x, y)\). By uniqueness again, \({\hat{s}}\) is continuous on \(\mathbf {NT}^{\circ }(\underline{s}, \overline{s})\). If \((x_{n}, y_{n})_{n = 1}^{\infty }\) is a sequence from \(\mathbf {NT}^{\circ }(\underline{s}, \overline{s})\) which converges to a point \((x, y) \in \mathbf {NT}(\underline{s}, \overline{s}) \setminus \mathbf {NT}^{\circ }(\underline{s}, \overline{s})\), then from Theorem 2.10 and Lemma 2.5 we have that either \(x = h_0 (x + \underline{s} y,\underline{s})\) or \(x = h_0 (x + \overline{s} y,\overline{s})\). Assume that \(x = h_0 (x + \underline{s} y, \underline{s})\). If for some \(z \in [\underline{s}, \overline{s}]\) we have that \({\hat{s}}(x_{n}, y_{n}) \xrightarrow {n \longrightarrow \infty } z\), then by continuity of h we have that

$$\begin{aligned} x_{n} = h_0 \big ( x_{n} + {\hat{s}}(x_{n}, y_{n}) y_{n}, {\hat{s}}(x_{n}, y_{n}) \big ) \xrightarrow {n \longrightarrow \infty } h_0 (x + z y, z). \end{aligned}$$

Therefore \(x = h_0 (x + y z, z) = h_0 (x + \underline{s} y, \underline{s})\) and by Corollary 2.6 we have that \(z = \underline{s}\). \(\square \)

Remark 2.14

Proposition 2.13 says that in our model the shadow price is uniquely defined for \((x,y)\in \big ((0, \infty ) \times (0, \infty )\big ) \cap \mathbf {NT}(\underline{s}, \overline{s})\), and furthermore it is a continuous function. From Proposition 2.2 and its proof we have that \({\hat{s}}\) fails to be uniquely defined only at points (x, 0) whenever \(H_{x,\underline{s}}'(x-)\geqslant 0\) and at points (0, y) whenever \(H_{y\overline{s},\overline{s}}'(0+)\leqslant 0\). For such points any value from the interval \([\underline{s}, \overline{s}]\) may serve as a shadow price. In the set \(\mathbf {B}(\underline{s}, \overline{s})\) we have \({\hat{s}}=\overline{s}\), while in the set \(\mathbf {S}(\underline{s}, \overline{s})\) we have \({\hat{s}}=\underline{s}\).
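Numerically, the construction of Proposition 2.13 is a root-finding problem: the shadow price is the unique \(s\in (\underline{s},\overline{s})\) at which the frictionless optimizer would hold exactly the current portfolio. The following sketch illustrates this for an assumed one-period toy model with logarithmic utility and a binomial terminal price; the model, the function names and all tolerances are illustrative assumptions, not part of the paper.

```python
# Numerical sketch of Proposition 2.13 for an assumed toy model:
# one period, log utility, terminal price S_1 in {2.0, 0.5} with
# probability 1/2 each, so w(x, y) = E[log(x + y*S_1)].
P, S_UP, S_DOWN = 0.5, 2.0, 0.5

def w_x(x, y):
    # partial derivative of w with respect to x
    return P / (x + y * S_UP) + (1 - P) / (x + y * S_DOWN)

def w_y(x, y):
    # partial derivative of w with respect to y
    return P * S_UP / (x + y * S_UP) + (1 - P) * S_DOWN / (x + y * S_DOWN)

def h0(c, s, tol=1e-12):
    """Optimal cash position u maximizing H_{c,s}(u) = w(u, (c-u)/s)
    over [0, c], found by bisection on the decreasing derivative H'."""
    dH = lambda u: w_x(u, (c - u) / s) - w_y(u, (c - u) / s) / s
    if dH(0.0) <= 0.0:      # corner: everything in the asset
        return 0.0
    if dH(c) >= 0.0:        # corner: everything in cash
        return c
    lo, hi = 0.0, c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dH(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def shadow_price(x, y, s_bid, s_ask, tol=1e-10):
    """Unique s in (s_bid, s_ask) with h0(x + s*y, s) = x."""
    g = lambda s: h0(x + s * y, s) - x   # increasing in s on the NT zone
    lo, hi = s_bid, s_ask
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0.0 else (mid, hi)
    return 0.5 * (lo + hi)

s_hat = shadow_price(1.0, 1.0, 0.9, 1.1)
```

In this toy model the log-optimal fraction of wealth held in the asset at price 1 is exactly one half, so for the position \((x,y)=(1,1)\) with bid 0.9 and ask 1.1 the computed shadow price is 1: at \({\hat{s}}\) the frictionless investor holds exactly the current portfolio.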

In the proof of Proposition 2.2 differentiability of w was important. We now consider first the differentiability of \({\hat{w}}\) and then that of \({\overline{w}}\).

Proposition 2.15

The function \({\hat{w}}\) is continuously differentiable at every point \((x, y, {s}) \in {\mathbb {R}}_{+}^{2} \times {\hat{{\mathbb {D}}}}\).

Proof

For \((z, s) \in {\mathbb {R}}_{+} \times {\hat{{\mathbb {D}}}}\) and \(u \in [0, z]\) define \(w^{*}(z, u, s) := w \Big ( u, \frac{z - u}{s} \Big )=H_{z,s}(u)\).

Clearly,

$$\begin{aligned} {\hat{w}}(x, y, s) = \sup _{u \in [0, x + s y]} w^{*}(x + s y, u, s) . \end{aligned}$$
(2.21)

Assume first that \(H_{x+sy,{s}}'(x+sy-) < 0\) and \(H_{x+sy,{s}}'(0+) > 0\). Then the supremum in (2.21) is attained in the open interval \((0, x + {s} y)\). Therefore by Proposition 7.2 the mapping

$$\begin{aligned} (z, {s}) \longmapsto W(z,s):=\sup _{u \in [0, z]} w^{*}(z, u, {s}) \end{aligned}$$

is continuously differentiable, since the supremum is attained inside the interval [0, z], so locally we may assume that the supremum is taken over a fixed subinterval which does not depend explicitly on z. Consequently, by (7.3) we have

$$\begin{aligned} W_z'(z,s)={w_z^*}'(z,u_{z,s},s)={1\over s}w_y'\Big (u_{z,s},{z-u_{z,s}\over s}\Big ) \end{aligned}$$
(2.22)

and

$$\begin{aligned} W_s'(z,s)={w_s^*}'(z,u_{z,s},s)=-{z-u_{z,s}\over s^2} w_y'\Big (u_{z,s},{z-u_{z,s}\over s}\Big ) \end{aligned}$$
(2.23)

where \(u_{z,s}\) is the maximizer of \(w^*\) in the definition of W. Therefore the function \({\hat{w}}(x,y,s)=W(x+sy,s)\) is continuously differentiable at the point (x, y, s) and by (2.22), (2.23) we have

$$\begin{aligned} {\hat{w}}_x'(x, y, s)= & {} {1\over s} w_y'\Big (u_{x+sy,s}, {x+sy-u_{x+sy,s} \over s}\Big ), \end{aligned}$$
(2.24)
$$\begin{aligned} {\hat{w}}_y'(x, y, s)= & {} w_y'\Big (u_{x+sy,s}, {x+sy-u_{x+sy,s} \over s}\Big ), \end{aligned}$$
(2.25)
$$\begin{aligned} {\hat{w}}_s'(x, y, s)= & {} {u_{x+sy,s}-x \over s^2}w_y'\Big (u_{x+sy,s}, {x+sy-u_{x+sy,s} \over s}\Big ), \end{aligned}$$
(2.26)

where \(u_{x+sy,s}\) is the maximizing value of u in (2.21).

When \(H_{x+sy,{s}}'(x+sy-) \geqslant 0\) then \({\hat{w}}(x, y, s)=w(x+sy,0)\), and when \(H_{x+sy,{s}}'(0+) \leqslant 0\) then \({\hat{w}}(x, y, s)=w(0,{1\over s}x+y)\). When \(H_{x+sy,{s}}'(x+sy-) > 0\) or \(H_{x+sy,{s}}'(0+) < 0\), then in some neighborhood of (x, y, s) we have \({\hat{w}}(x, y, s)=w(x+sy,0)\) or \({\hat{w}}(x, y, s)=w(0,{1\over s}x+y)\) respectively, and differentiability follows from differentiability of w. Consequently we may have a problem with differentiability only when \(H_{x+sy,{s}}'(x+sy-) = 0\) or \(H_{x+sy,{s}}'(0+) = 0\). Then from (2.24) to (2.26), by continuity of \((x,y,s)\mapsto u_{x+sy,s}\) (which follows from uniqueness of \(u_{x+sy,s}\)), we obtain continuous differentiability of \({\hat{w}}\) at (x, y, s). \(\square \)
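As a sanity check, the envelope formulas for \({\hat{w}}\) can be verified by finite differences in a concrete case. The sketch below assumes a one-period toy model (log utility, binomial terminal price in {2.0, 0.5} with equal probabilities); all numerical choices are illustrative assumptions, not the paper's setup.

```python
import math

# Assumed toy model: w(x, y) = E[log(x + y*S_1)], S_1 in {2.0, 0.5},
# each with probability 1/2 (an illustration only).
P, S_UP, S_DOWN = 0.5, 2.0, 0.5
w = lambda x, y: P * math.log(x + y * S_UP) + (1 - P) * math.log(x + y * S_DOWN)
w_x = lambda x, y: P / (x + y * S_UP) + (1 - P) / (x + y * S_DOWN)
w_y = lambda x, y: P * S_UP / (x + y * S_UP) + (1 - P) * S_DOWN / (x + y * S_DOWN)

def h0(c, s, tol=1e-12):
    # maximizer u of H_{c,s}(u) = w(u, (c-u)/s) on [0, c] (bisection on H')
    dH = lambda u: w_x(u, (c - u) / s) - w_y(u, (c - u) / s) / s
    if dH(0.0) <= 0.0:
        return 0.0
    if dH(c) >= 0.0:
        return c
    lo, hi = 0.0, c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dH(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def w_hat(x, y, s):
    # \hat w(x, y, s) = W(x + s*y, s)
    c = x + s * y
    u = h0(c, s)
    return w(u, (c - u) / s)

x, y, s = 1.0, 1.0, 1.0
u = h0(x + s * y, s)
theta = (x + s * y - u) / s
eps = 1e-4
fd_x = (w_hat(x + eps, y, s) - w_hat(x - eps, y, s)) / (2 * eps)
fd_y = (w_hat(x, y + eps, s) - w_hat(x, y - eps, s)) / (2 * eps)
fd_s = (w_hat(x, y, s + eps) - w_hat(x, y, s - eps)) / (2 * eps)
```

At this point the interior maximizer is \(u=1\), so the finite differences in x and y reproduce \({1\over s}w_y'(u,\theta )\) and \(w_y'(u,\theta )\) from (2.24)–(2.25) (both equal 1/2 here); the finite difference in s is numerically zero, reflecting that at an interior point where no trading is needed a marginal price change does not change the optimal value (an envelope argument).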

Corollary 2.16

Function \({\overline{w}}\) is continuously differentiable for every

$$\begin{aligned}&(x,y)\in \mathbf {NT}^{\circ }(\underline{s},\overline{s})\cup \mathbf {S}(\underline{s},\overline{s})\cup \mathbf {B}(\underline{s},\overline{s})\\&\cup \left\{ (x,0): x>0, H_{x,\underline{s}}'(x-)\geqslant 0\right\} \cup \left\{ (0,y): y>0, H_{y\overline{s},\overline{s}}'(0+)\leqslant 0\right\} . \end{aligned}$$

Proof

When \((x,y) \in \mathbf {NT}^{\circ }(\underline{s},\overline{s})\) we have that \({\overline{w}}(x,y,\underline{s},\overline{s})=w(x,y)\), which is continuously differentiable. For \((x,y) \in \mathbf {S}(\underline{s},\overline{s})\) we have that \({\overline{w}}(x,y,\underline{s},\overline{s})={\hat{w}}(x,y,\underline{s})\), which is continuously differentiable by Proposition 2.15. Finally, for \((x,y) \in \mathbf {B}(\underline{s},\overline{s})\) we have \({\overline{w}}(x,y,\underline{s},\overline{s})={\hat{w}}(x,y,\overline{s})\), which is again continuously differentiable by Proposition 2.15. Consequently we may have a problem with differentiability only at the boundary of \(\mathbf {NT}(\underline{s},\overline{s})\). Moreover, in the sets \(\left\{ (x,0): H_{x,\underline{s}}'(x-)\geqslant 0\right\} \cup \left\{ (0,y): H_{y\overline{s},\overline{s}}'(0+)\leqslant 0\right\} \) we get continuous differentiability as at the end of the proof of Proposition 2.15, since this set is a subset of \(\mathbf {NT}(\underline{s},\overline{s})\). \(\square \)

3 Induction Analysis

In the previous section we studied the one period problem. Now we come to the two period problem, which in the next section will be extended to the multi period problem studied by induction. Assume that on a given filtered probability space \(\big ( \Omega , {\mathcal {F}}, ({\mathcal {F}}_{t})_{t = 0, 1}, \mathbb {P} \big )\) we are given two \({\mathcal {F}}_{1}\)-measurable random variables \({\underline{S}}_{1}\) and \({\overline{S}}_{1}\) such that \(0< {\underline{S}}_{1} < {\overline{S}}_{1}\), that for each \((x,y)\in {\mathbb {R}}_{+}^{2}\) the derivatives of the random variable \({\overline{w}}(x,y,{\underline{S}}_{1}, {\overline{S}}_{1})\), whenever they exist, are integrable, and that the conditional law \(\mathbb {P} \big ( ({\underline{S}}_{1}, {\overline{S}}_{1}) \in \cdot \big | {\mathcal {F}}_{0} \big )\) is continuous. In what follows we shall use a regular version of this conditional probability (which exists by Theorem 6.3 of [7]).

For \((x, y) \in {\mathbb {R}}_{+}^{2}\) put

$$\begin{aligned} {\tilde{w}}(x, y) := \mathbb {E} \big ( {\overline{w}}(x, y, {\underline{S}}_{1}, {\overline{S}}_{1}) \big | {\mathcal {F}}_{0} \big ) , \end{aligned}$$

considering it as a regular conditional expected value. Notice that by Lemma 7.3 we can put in the place of x and y any \({\mathcal {F}}_{0}\)-measurable random variables. Later on we shall frequently consider such versions of regular conditional probability. The construction of the function \({\tilde{w}}(x, y)\) is crucial for our induction step, which we consider in the next section. The function \({\overline{w}}\) is not strictly concave with respect to the first two coordinates. As we show below, the function \({\tilde{w}}\) is already strictly concave, which allows us to use later the results of Sect. 2.
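The failure of strict concavity of \({\overline{w}}\), and how averaging over a dispersed price distribution restores it, can be seen numerically. The sketch below assumes a one-period toy model with log utility and a binomial terminal price (all concrete numbers are illustrative assumptions): along a segment on which buying at the ask is optimal, \({\overline{w}}(\cdot ,\cdot ,\underline{s},\overline{s})\) is constant, hence not strictly concave, while the average of \({\overline{w}}\) over two different price pairs is strictly concave on the same segment.

```python
import math

# Assumed toy model: w(x, y) = E[log(x + y*S_1)] with S_1 in {2.0, 0.5},
# probability 1/2 each (an illustration, not the paper's setup).
P, S_UP, S_DOWN = 0.5, 2.0, 0.5
w = lambda x, y: P * math.log(x + y * S_UP) + (1 - P) * math.log(x + y * S_DOWN)

def w_bar(x, y, s_bid, s_ask, n=4000):
    """Transaction-cost value: best of doing nothing, selling m at s_bid,
    or buying l at s_ask (grid search; at the optimum l*m = 0)."""
    best = w(x, y)
    for i in range(1, n + 1):
        m = y * i / n                      # sell m in [0, y]
        if m > 0:
            best = max(best, w(x + s_bid * m, y - m))
        l = (x / s_ask) * i / n            # buy l in [0, x / s_ask]
        if l > 0:
            best = max(best, w(x - s_ask * l, y + l))
    return best

# Three collinear points on the line x + 1.1*y = 2.1, all in the buying
# zone for the prices (s_bid, s_ask) = (0.9, 1.1):
p1, p2 = (2.1, 0.0), (1.7, 0.4 / 1.1)
pm = (0.5 * (p1[0] + p2[0]), 0.5 * (p1[1] + p2[1]))

v1 = w_bar(*p1, 0.9, 1.1)
v2 = w_bar(*p2, 0.9, 1.1)
vm = w_bar(*pm, 0.9, 1.1)
gap_single = vm - 0.5 * (v1 + v2)          # ~0: w_bar is affine here

# Average over two price scenarios (a crude stand-in for a continuous law):
def w_tilde(x, y):
    return 0.5 * (w_bar(x, y, 0.9, 1.1) + w_bar(x, y, 1.2, 1.3))

gap_avg = w_tilde(*pm) - 0.5 * (w_tilde(*p1) + w_tilde(*p2))  # > 0
```

With a single price pair the three values agree, since on this segment \({\overline{w}}\) depends only on \(x+\overline{s}y\); after averaging over two pairs the midpoint value is strictly larger, illustrating the mechanism behind Proposition 3.1 (the proposition itself requires a continuous conditional law).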

Proposition 3.1

Random function \({\tilde{w}}\) is strictly concave.

Proof

Let \((x_{1}, y_{1}), (x_{2}, y_{2}) \in {\mathbb {R}}_{+}^{2} \setminus \{ (0, 0) \}\) be such that \((x_{1}, y_{1}) \ne (x_{2}, y_{2})\). From (2.4) we have that there exist \(\mathcal {G} := \sigma \big ( {\mathcal {F}}_{0}, {\underline{S}}_{1}, {\overline{S}}_{1} \big )\)-measurable random variables \((l_{1}, m_{1})\) and \((l_{2}, m_{2})\) such that \((l_{1}, m_{1}) \in {\mathbb {A}}(x_{1}, y_{1}, {\underline{S}}_{1}, {\overline{S}}_{1})\), \((l_{2}, m_{2}) \in {\mathbb {A}}(x_{2}, y_{2}, {\underline{S}}_{1}, {\overline{S}}_{1})\) which are optimal in the market with bid and ask prices starting from \((x_1,y_1)\) and \((x_2,y_2)\) respectively, i.e. for which we have

$$\begin{aligned} \begin{aligned} {\overline{w}}&(x_{1}, y_{1}, {\underline{S}}_{1}, {\overline{S}}_{1}) = w(x_{1} + {\underline{S}}_{1} m_{1} - {\overline{S}}_{1} l_{1}, y_{1} - m_{1} + l_{1}), \\ {\overline{w}}&(x_{2}, y_{2}, {\underline{S}}_{1}, {\overline{S}}_{1}) = w(x_{2} + {\underline{S}}_{1} m_{2} - {\overline{S}}_{1} l_{2}, y_{2} - m_{2} + l_{2}) . \end{aligned} \end{aligned}$$

Then for every \(\lambda \in [0, 1]\) we have that

$$\begin{aligned} \lambda {\tilde{w}}(&x_{1}, y_{1}) + (1 - \lambda ) {\tilde{w}}(x_{2}, y_{2}) \nonumber \\&= \mathbb {E} \Big [ \lambda w(x_{1} + {\underline{S}}_{1} m_{1} - {\overline{S}}_{1} l_{1}, y_{1} -m_{1} + l_{1})\nonumber \\&\ \quad + (1 - \lambda ) w(x_{2} + {\underline{S}}_{1} m_{2} - {\overline{S}}_{1} l_{2}, y_{2} - m_{2} + l_{2}) \Big | {\mathcal {F}}_{0} \Big ] \nonumber \\&\leqslant \mathbb {E} \Bigg [ w \Big ( \big ( \lambda x_{1} + (1 - \lambda ) x_{2} \big ) + {\underline{S}}_{1} \big ( \lambda m_{1} + (1 - \lambda ) m_{2} \big ) - {\overline{S}}_{1} \big ( \lambda l_{1} + (1 - \lambda ) l_{2} \big ) , \nonumber \\&\ \ \quad \big ( \lambda y_{1} + (1 - \lambda ) y_{2} \big ) - \big ( \lambda m_{1} + (1 - \lambda ) m_{2} \big ) + \big ( \lambda l_{1} + (1 - \lambda ) l_{2} \big ) \Big ) \Bigg | {\mathcal {F}}_{0} \Bigg ] \nonumber \\&\leqslant \mathbb {E} \Big [ {\overline{w}} \big ( \lambda x_{1} + (1 - \lambda ) x_{2}, \lambda y_{1} + (1 - \lambda ) y_{2}, {\underline{S}}_{1}, {\overline{S}}_{1} \big ) \Big | {\mathcal {F}}_{0} \Big ] \end{aligned}$$
(3.1)

with equality whenever

$$\begin{aligned} (x_{1} + {\underline{S}}_{1} m_{1} - {\overline{S}}_{1} l_{1}, y_{1} - m_{1} + l_{1}) =(x_{2} + {\underline{S}}_{1} m_{2} - {\overline{S}}_{1} l_{2}, y_{2} - m_{2} + l_{2}) . \end{aligned}$$

Since for optimal strategies we must have \(l_{1} \cdot m_{1} = 0\) and \(l_{2} \cdot m_{2} = 0\), we consider four cases.

Case 1: \(\quad m_{1} = m_{2} = 0\)

Then

$$\begin{aligned} x_{1} - {\overline{S}}_{1} l_{1} = x_{2} - {\overline{S}}_{1} l_{2} \quad \text{ and } \quad y_{1} + l_{1} = y_{2} + l_{2} \end{aligned}$$

so that

$$\begin{aligned} x_{1} - x_{2} = {\overline{S}}_{1} \cdot (l_{1} - l_{2}) = - {\overline{S}}_{1} \cdot (y_{1} - y_{2}) \end{aligned}$$

and provided that \(y_{1} \ne y_{2}\) (if \(y_{1} = y_{2}\), then \(l_{1} = l_{2}\) and \(x_{1} = x_{2}\)) we have that \({\overline{S}}_{1} = \frac{x_{1} - x_{2}}{y_{2} - y_{1}}\) which, by continuity of the conditional law of \({\overline{S}}_{1}\), can happen with conditional probability 0.

Case 2: \(\quad l_{1} = l_{2} = 0\)

Then

$$\begin{aligned} x_{1} + {\underline{S}}_{1} m_{1} = x_{2} + {\underline{S}}_{1} m_{2} \quad \text{ and } \quad y_{1} - m_{1} = y_{2} - m_{2}. \end{aligned}$$

This leads to

$$\begin{aligned} x_{1} - x_{2} = - {\underline{S}}_{1} \cdot (m_{1} - m_{2}) = {\underline{S}}_{1} \cdot (y_{2} - y_{1}) \end{aligned}$$

and again for \(y_{1} \ne y_{2}\) (if \(y_{1} = y_{2}\), then we have \(m_{1} = m_{2}\) and \(x_{1} = x_{2}\)) we obtain \({\underline{S}}_{1} = \frac{x_{1} - x_{2}}{y_{2} - y_{1}}\) which, by the continuity of the conditional law of \({\underline{S}}_{1}\), can happen with conditional probability 0.

Case 3: \(\quad l_{1} = m_{2} = 0\)

Then

$$\begin{aligned} x_{1} + {\underline{S}}_{1} m_{1} = x_{2} - {\overline{S}}_{1} l_{2} \quad \text{ and } \quad y_{1} - m_{1} = y_{2} + l_{2} . \end{aligned}$$

Assume first that \(m_1>0\). Then by concavity the local maximum on the selling line with price \({\underline{S}}_{1}\) coincides with the global maximum in the market with the single price \({\underline{S}}_{1}\), so we have that

$$\begin{aligned} \begin{aligned}&{\overline{w}} (x_{1}, y_{1}, {\underline{S}}_{1}, {\overline{S}}_{1})=w(x_{1} + {\underline{S}}_{1} m_{1}, y_{1} - m_{1}) \\&={\hat{w}}(x_1,y_1,{\underline{S}}_{1})={\hat{w}}(x_{1} + {\underline{S}}_{1} m_{1}, y_{1} - m_{1}, {\underline{S}}_{1}) \end{aligned} \end{aligned}$$

Assuming furthermore that \(l_2>0\) we have that

$$\begin{aligned} \begin{aligned}&{\overline{w}} (x_{2}, y_{2}, {\underline{S}}_{1}, {\overline{S}}_{1})=w(x_{2} - {\overline{S}}_{1} l_{2}, y_{2} + l_{2})\\&={\hat{w}}(x_2,y_2,{\overline{S}}_{1})={\hat{w}}(x_{2} - {\overline{S}}_{1} l_{2}, y_{2} + l_{2},{\overline{S}}_{1}) \end{aligned} \end{aligned}$$

Therefore

$$\begin{aligned} {\hat{w}}(x_{1} + {\underline{S}}_{1} m_{1}, y_{1} - m_{1}, {\underline{S}}_{1})={\hat{w}}(x_{1} + {\underline{S}}_{1} m_{1}, y_{1} - m_{1}, {\overline{S}}_{1}) \end{aligned}$$

which, by Proposition 2.2, can happen only when either \(x_{1} + {\underline{S}}_{1} m_{1} = 0\) or \(y_{1} - m_{1} = 0\). If \(x_{1} + {\underline{S}}_{1} m_{1} = 0\), then \(x_{1} = 0\) and \(m_{1} = 0\), and we have a contradiction. If \(y_1=m_1\), then \(y_2+l_2=0\), which means that \(y_2=0=l_2\), and we get a contradiction again. Assume now that \(m_1=0\). Then \(x_2-x_1={\overline{S}}_1(y_1-y_2)\), which can happen only with probability 0. Finally, when \(l_2=0\) we have \(x_2-x_1={\underline{S}}_1(y_1-y_2)\), which again can happen only with probability 0.

Case 4: \(\quad m_{1} = l_{2} = 0\). This case is treated identically to Case 3.

Summarizing, we have strict inequality in (3.1). This ends the proof. \(\square \)

To start the induction procedure we also need continuous differentiability of \({\tilde{w}}\), which is shown below.

Proposition 3.2

Random function \({\tilde{w}}\) is continuously differentiable for \((x,y)\in {\mathbb {R}}_{+}^{2}\), \(\mathbb {P}\) almost surely.

Proof

Since \({\tilde{w}}\) is strictly concave it is continuously differentiable whenever it is differentiable. By Corollary 2.16 function \({\overline{w}}(\cdot , \cdot , \underline{s}, \overline{s})\) is continuously differentiable at points

$$\begin{aligned} \left\{ (x,0): H_{x,\underline{s}}'(x-)\geqslant 0\right\} \cup \left\{ (0,y): H_{y\overline{s},\overline{s}}'(0+)\leqslant 0\right\} \end{aligned}$$

and at other points \((x, y)\in {\mathbb {R}}_{+}^{2}\) except for those \((x, y)\in {\mathbb {R}}_{+}^{2}\) for which \({\hat{w}}(x, y, \underline{s}) = w(x, y)\) or \({\hat{w}}(x, y, \overline{s}) = w(x, y)\). Let \(\hat{\mathcal {S}}(x, y) := \big \{ {\hat{s}} > 0 : \quad {\hat{w}}(x, y, {\hat{s}}) = w(x, y) \big \}\). By Proposition 2.2, for every \((x, y) \in Z:={\mathbb {R}}_{+}^{2}\setminus \left\{ \left\{ (x,0): H_{x,\underline{s}}'(x-)\geqslant 0\right\} \cup \left\{ (0,y): H_{y\overline{s},\overline{s}}'(0+)\leqslant 0\right\} \right\} \) the set \(\hat{\mathcal {S}}(x, y)\) is at most a singleton. For every \((\underline{s}, \overline{s}) \in {\mathbb {D}}\) the function \({\overline{w}}(\cdot , \cdot , \underline{s}, \overline{s})\) is differentiable at a point \((x, y)\in Z\) whenever \(\underline{s} \not \in \hat{\mathcal {S}}(x, y)\) and \(\overline{s} \not \in \hat{\mathcal {S}}(x, y)\). Since the conditional law \(\mathbb {P} \big ( ({\underline{S}}_{1}, {\overline{S}}_{1}) \in \cdot \big | {\mathcal {F}}_{0} \big )\) is continuous, the function \({\tilde{w}}\) is continuously differentiable on Z, and arguing as in the proof of Proposition 2.15 we have that it is continuously differentiable for \((x,y)\in {\mathbb {R}}_{+}^{2}\), \(\mathbb {P}\) almost surely. \(\square \)

4 Dynamic Two Dimensional Case

In this section we summarize the results of the previous sections to study the multi period case using induction. We consider the system of Bellman equations

$$\begin{aligned}&{\overline{w}}_{T}(x, y, \underline{s}, \overline{s}) := U(x + \underline{s} y), \end{aligned}$$
(4.1)
$$\begin{aligned}&{\overline{w}}_{T-1}(x, y, \underline{s}, \overline{s}) := \underset{(l, m) \in {{\mathbb {A}}}(x, y, \underline{s}, \overline{s})}{{{\,\mathrm{ess\,sup}\,}}}\nonumber \\&\quad {\mathbb {E}}[{\overline{w}}_{T}(x + \underline{s} m - \overline{s} l, y - m + l, {\underline{S}}_{T}, {\overline{S}}_{T})|{\mathcal {F}}_{T-1}] \end{aligned}$$
(4.2)

and

$$\begin{aligned}&{\overline{w}}_{T-k}(x, y, \underline{s}, \overline{s}) :=\underset{(l, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s})}{{{\,\mathrm{ess\,sup}\,}}}\nonumber \\&\quad {\mathbb {E}}[{\overline{w}}_{T-k+1}(x + \underline{s} m - \overline{s} l, y - m + l, {\underline{S}}_{T-k+1}, {\overline{S}}_{T-k+1})|{\mathcal {F}}_{T-k}], \end{aligned}$$
(4.3)

for \(k = 1, 2, \ldots , T\), where the conditional expectations are considered as regular conditional expectations. By Lemma 7.3 we are allowed to put in the place of \((x,y,\underline{s}, \overline{s})\) in (4.3) any nonnegative (and, in the case of the last two coordinates, positive) \({\mathcal {F}}_{T-k}\)-measurable random variables. Furthermore, assuming suitable integrability, the values of the regular conditional expectations in (4.2)–(4.3) are continuous with respect to (x, y) and the supremum over a compact set is attained, so that the essential supremum may be replaced by the supremum calculated for each fixed \(\omega \); see Lemma 7.3 for a detailed proof of this nontrivial fact. Consequently, in what follows we shall write \(\sup \) instead of \({{\,\mathrm{ess\,sup}\,}}\).

We shall assume

(A):

bid and ask prices \(({\underline{S}}_{t}, {\overline{S}}_{t})\) for \(t=1,\ldots ,T\) are such that the random functions \({\overline{w}}_k(x,y,\underline{s}, \overline{s})\) are well defined for \(k=0,1,\ldots ,T\), and for \((x,y)\in {\mathbb {R}}_{+}^{2}\) the random functions \({\overline{w}}_k(x,y,{\underline{S}}_{k}, {\overline{S}}_{k})\) are integrable together with their derivatives with respect to x and y (whenever they exist).

A sufficient condition for (A) is given in Lemma 7.4. Furthermore we assume that

(B):

the conditional law \(\mathbb {P} \big ( ({\underline{S}}_{k+1}, {\overline{S}}_{k+1}) \in \cdot \big | {\mathcal {F}}_{k} \big )\) is continuous for \(k=0,1,\ldots ,T-1\).

Let

$$\begin{aligned}&{\tilde{w}}_k(x, y) := \mathbb {E} \big ( {\overline{w}}_{k+1}(x, y, {\underline{S}}_{k+1}, {\overline{S}}_{k+1}) \big | {\mathcal {F}}_{k} \big ), \end{aligned}$$
(4.4)
$$\begin{aligned}&{\overline{w}}_k(x, y, \underline{s}, \overline{s}) := \sup _{(l, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s})} {\tilde{w}}_k (x + \underline{s} m - \overline{s} l, y - m + l), \end{aligned}$$
(4.5)

and

$$\begin{aligned} {\hat{w}}_k(x, y, {\hat{s}}) := \sup _{(l, m) \in {\hat{{\mathbb {A}}}}(x, y, {\hat{s}})} {\tilde{w}}_k (x + {\hat{s}} m - {\hat{s}} l, y - m + l) . \end{aligned}$$
(4.6)

For \((x, y, \underline{s}, \overline{s}) \in {\mathbb {R}}_{+}^{2} \times {\mathbb {D}}\) denote by \(\mathcal {A}_{k}(x, y,\underline{s}, \overline{s})\) the set of all \({\mathcal {F}}_{k}\)-measurable random variables taking values in the set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\). Denote by \((x_k,y_k)\) our bank position and number of assets held at time k. By Lemma 7.3 we clearly have that

$$\begin{aligned} {\overline{w}}_k(x_k, y_k, {\underline{S}}_k, {\overline{S}}_k) = \underset{(l, m) \in \mathcal {A}_k(x_k, y_k, {\underline{S}}_k, {\overline{S}}_k)}{{{\,\mathrm{ess\,sup}\,}}}{\tilde{w}}_k (x_k + {\underline{S}}_k m - {\overline{S}}_k l, y_k - m + l), \end{aligned}$$
(4.7)

and

$$\begin{aligned} {\tilde{w}}_k(x_k, y_k) := \mathbb {E} \big ( {\overline{w}}_{k+1}(x_k, y_k, {\underline{S}}_{k+1}, {\overline{S}}_{k+1}) \big | {\mathcal {F}}_{k} \big ). \end{aligned}$$
(4.8)

Define furthermore for \((\underline{s}, \overline{s}) \in {\mathbb {D}}\) the sets

$$\begin{aligned} \begin{aligned} \mathbf {NT}_k&(\underline{s}, \overline{s}) := \big \{ (x, y) \in {\mathbb {R}}_{+}^{2}: \quad {\overline{w}}_k(x, y, \underline{s}, \overline{s}) = {\tilde{w}}_k(x, y) \big \} ,\\ \mathbf {B}_k&(\underline{s}, \overline{s}) := \big \{ (x, y) \in {\mathbb {R}}_{+}^{2}: \quad {\overline{w}}_k(x, y, \underline{s}, \overline{s}) = {\hat{w}}_k(x, y, \overline{s}) \big \} \setminus \mathbf {NT}_k(\underline{s}, \overline{s}),\\ \mathbf {S}_k&(\underline{s}, \overline{s}) := \big \{ (x, y) \in {\mathbb {R}}_{+}^{2}: \quad {\overline{w}}_k(x, y, \underline{s}, \overline{s}) = {\hat{w}}_k(x, y, \underline{s}) \big \} \setminus \mathbf {NT}_k(\underline{s}, \overline{s}) \end{aligned} \end{aligned}$$

which correspond respectively to no transaction, buying or selling zones at time k. Let for \(c > 0\), \(s \geqslant 0\) and \(k=0,1,\ldots ,T-1\)

$$\begin{aligned} h(c, s, k) := \underset{\left\{ (x,y): \ x+sy=c, x\geqslant 0, y\geqslant 0\right\} }{{{\,\mathrm{arg\,max}\,}}}{\tilde{w}}_k (x, y) \end{aligned}$$
(4.9)

be the optimal portfolio strategy at time k given wealth value c and asset price s, assuming that later, at times \(k+1,\ldots , T-1\), we have the market with bid and ask prices \(({\underline{S}}_{t}, {\overline{S}}_{t})\). Then \(h(c,s,k)=\left( \begin{matrix} h_0(c,s,k) \\ h_1(c,s,k) \end{matrix} \right) \) and for \(s>0\) we have \(h_1(c,s,k)={c-h_0(c,s,k) \over s}\). Recall now the definition of shadow price (see [10]): this is a stochastic process \((\hat{S}_k(x_k,y_k))\) taking values between the bid and ask prices \({\underline{S}}_k\) and \({\overline{S}}_k\) such that the optimal value of utility from terminal wealth (1.1) with the price process \((\hat{S}_k(x_k,y_k))\) is the same as in the case of bid and ask prices \({\underline{S}}_k\) and \({\overline{S}}_k\).

Theorem 4.1

Under (A) and (B) for \(k=0,1,\ldots ,T-1\) and \((\underline{s}, \overline{s}) \in {\mathbb {D}}\) we have

$$\begin{aligned} \begin{aligned} \mathbf {NT}_k&(\underline{s}, \overline{s}) = \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad h_0 (x + \underline{s} y, \underline{s},k) \leqslant x \leqslant h_0 (x + \overline{s}y,\overline{s},k) \big \}, \\ \mathbf {B}_k&(\underline{s}, \overline{s}) = \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad x > h_0 (x + \overline{s} y, \overline{s},k) \big \}, \\ \mathbf {S}_k&(\underline{s}, \overline{s}) = \big \{ (x, y) \in {\mathbb {R}}_{+}^{2} : \quad x < h_0 (x + \underline{s} y, \underline{s},k) \big \}. \end{aligned} \end{aligned}$$

If at time k our bank position is \(x_k\) and the number of assets held in the portfolio is \(y_k\), then the optimal strategy is: when \((x_k,y_k)\in \mathbf {NT}_k ({\underline{S}}_k, {\overline{S}}_k)\), to do nothing; when \((x_k,y_k)\in \mathbf {B}_k ({\underline{S}}_k, {\overline{S}}_k)\), to buy assets to reach \(h_1(x_k+{\overline{S}}_k y_k,{\overline{S}}_k,k)\), paying \({\overline{S}}_k\) for each asset; and finally when \((x_k,y_k)\in \mathbf {S}_k ({\underline{S}}_k, {\overline{S}}_k)\), to sell assets to reach \(h_1(x_k+{\underline{S}}_k y_k,{\underline{S}}_k,k)\), getting for each asset the value \({\underline{S}}_k\). Furthermore there exists a shadow price process \((\hat{S}_k(x_k,y_k))\), which at time k is equal to \({\underline{S}}_k\) or \({\overline{S}}_k\) whenever \((x_k,y_k)\) is in \(\mathbf {S}_k({\underline{S}}_k, {\overline{S}}_k)\) or \(\mathbf {B}_k({\underline{S}}_k, {\overline{S}}_k)\) respectively, and is equal to \({\hat{s}}_k(x_k,y_k)\) in the case when \((x_k,y_k)\in \mathbf {NT}_k({\underline{S}}_k, {\overline{S}}_k)\), where \({\hat{s}}_k\) is defined as \({\hat{s}}\) in Proposition 2.13.

Proof

The proof is by induction. Notice first that by Proposition 3.1 \({\tilde{w}}_k\) is strictly concave and by Proposition 3.2 it is also continuously differentiable \(\mathbb {P}\) a.s. for \((x,y)\in {\mathbb {R}}_{+}^{2}\). Consequently, by Proposition 2.15, \({\hat{w}}_k\) is continuously differentiable at \((x, y, {\hat{s}}) \in {\mathbb {R}}_{+}^{2} \times {\hat{{\mathbb {D}}}}\) for \(k=0,1,\ldots ,T\). Furthermore, by Corollary 2.16, \({\overline{w}}_k\) is continuously differentiable for \((x,y)\in \mathbf {NT}_k^\circ \cup \mathbf {S}_k \cup \mathbf {B}_k\), where \(\mathbf {NT}_k^\circ \) is the interior of \(\mathbf {NT}_k\). The remaining part of the proof follows from Theorem 2.10 and Proposition 2.11. The existence and the form of the shadow price follow directly from Proposition 2.13, using induction again. \(\square \)
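The zone description of Theorem 4.1 translates directly into a one-step decision rule: compare the current cash position with \(h_0\) evaluated at the bid and at the ask. The sketch below implements a single such step for an assumed one-period toy model (log utility, binomial terminal price); the model and all numbers are illustrative assumptions, not the paper's data.

```python
# One-step decision rule in the spirit of Theorem 4.1 for an assumed toy
# model: w(x, y) = E[log(x + y*S)] with S in {2.0, 0.5}, probability 1/2 each.
P, S_UP, S_DOWN = 0.5, 2.0, 0.5

def w_x(x, y):
    return P / (x + y * S_UP) + (1 - P) / (x + y * S_DOWN)

def w_y(x, y):
    return P * S_UP / (x + y * S_UP) + (1 - P) * S_DOWN / (x + y * S_DOWN)

def h0(c, s, tol=1e-12):
    # optimal cash u maximizing w(u, (c-u)/s) over [0, c]
    dH = lambda u: w_x(u, (c - u) / s) - w_y(u, (c - u) / s) / s
    if dH(0.0) <= 0.0:
        return 0.0
    if dH(c) >= 0.0:
        return c
    lo, hi = 0.0, c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dH(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def step(x, y, s_bid, s_ask):
    """Classify (x, y) into the sell / no-trade / buy zone and return the
    zone together with the post-trade number of assets."""
    cb, ca = x + s_bid * y, x + s_ask * y
    if x < h0(cb, s_bid):                    # sell down to h1 at the bid
        return "sell", (cb - h0(cb, s_bid)) / s_bid
    if x > h0(ca, s_ask):                    # buy up to h1 at the ask
        return "buy", (ca - h0(ca, s_ask)) / s_ask
    return "no-trade", y
```

For bid 0.9 and ask 1.1 the position (1, 1) lies in the no-trade zone, an all-cash position (2.1, 0) is in the buying zone with a post-trade holding of about 0.583 assets, and an all-asset position (0, 1) is in the selling zone with a post-trade holding of about 0.716 assets.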

5 Static Multi (Two) Asset Case

In this section we consider the two asset case: a market with two kinds of assets, asset I with bid and ask prices \({\underline{S}}_t^1, {\overline{S}}_t^1\) and asset II with bid and ask prices \({\underline{S}}_t^2, {\overline{S}}_t^2\) respectively. Our market position will be denoted by the triple \((x,y^1,y^2)\), where x is, as before, the amount of money on the bank account, while \(y^1\) and \(y^2\) denote the numbers of assets I and II respectively in our portfolio. Define the sets of admissible strategies:

$$\begin{aligned}&{\hat{{\mathbb {A}}}}_{bb}(x,y^1,y^2,{s}^1,{s}^2):=\left\{ (l^1,l^2)\in [0,\infty ) \times [0,\infty ):\ x-l^1{s}^1-l^2{s}^2 \geqslant 0 \right\} , \end{aligned}$$
(5.1)
$$\begin{aligned}&{\hat{{\mathbb {A}}}}_{bs}(x,y^1,y^2,{s}^1,{s}^2):=\left\{ (l^1,m^2)\in [0,\infty ) \times [0,\infty ): \ x-l^1{s}^1+m^2{s}^2 \geqslant 0,y^2-m^2\geqslant 0 \right\} , \end{aligned}$$
(5.2)
$$\begin{aligned}&{\hat{{\mathbb {A}}}}_{sb}(x,y^1,y^2,{s}^1,{s}^2):=\left\{ (m^1,l^2)\in [0,\infty ) \times [0,\infty ): \ x+m^1{s}^1-l^2{s}^2 \geqslant 0,y^1-m^1\geqslant 0 \right\} , \end{aligned}$$
(5.3)
$$\begin{aligned}&{\hat{{\mathbb {A}}}}_{ss}(x,y^1,y^2,{s}^1,{s}^2):=\left\{ (m^1,m^2)\in [0,\infty ) \times [0,\infty ): \ y^1-m^1\geqslant 0, y^2-m^2\geqslant 0 \right\} , \end{aligned}$$
(5.4)

which correspond respectively to buying both the first and the second asset, buying the first and selling the second asset, selling the first and buying the second asset, and selling both the first and the second asset. Clearly these sets are compact. Denote by

$$\begin{aligned} {\mathbb {C}}(c,s^1,s^2):=\left\{ (x,y^1,y^2)\in {\mathbb {R}}_{+}^3: \ c=x+y^1s^1+y^2s^2\right\} . \end{aligned}$$
(5.5)

This is the set of all bank, asset I and asset II positions, respectively, which can be achieved under the prices \(s^1\) and \(s^2\) of the first and second asset with wealth equal to c. Let \(w: {\mathbb {R}}_{+}^3 \rightarrow {\mathbb {R}}_+\) be a function which is increasing with respect to its coordinates, strictly concave and differentiable; therefore it is a continuous function. By strict concavity there is a unique element of \({\mathbb {C}}(c,s^1,s^2)\) maximizing w. Let

$$\begin{aligned} {\mathbb {G}}(c,s^1,s^2):=\max _{(x,y^1,y^2) \in {\mathbb {C}}(c,s^1,s^2)} w(x,y^1,y^2), \end{aligned}$$
(5.6)

and

$$\begin{aligned} \underset{(x,y^1,y^2) \in {\mathbb {C}}(c,s^1,s^2)}{{{\,\mathrm{arg\,max}\,}}} w(x,y^1,y^2) =:(h_0(c,s^1,s^2),h_1(c,s^1,s^2),h_2(c,s^1,s^2)). \end{aligned}$$
(5.7)

We have

Lemma 5.1

Functions \(h_i\) for \(i=0,1,2\) are continuous.

Proof

It follows from the fact that the function w is strictly concave, which implies that its maximizer over the compact set \({\mathbb {C}}(c,s^1,s^2)\) is unique, and furthermore the mapping \((0,\infty )\times (0,\infty ) \times (0,\infty )\ni (c,s^1,s^2)\mapsto {\mathbb {C}}(c,s^1,s^2)\) is continuous in the Hausdorff metric (see the proof of Theorem 2.1 in [10], or [2]). \(\square \)

Let

$$\begin{aligned}&{\hat{w}}_{bb}(x,y^1,y^2,s^1,s^2):=\sup _{(l^1,l^2)\in {\hat{{\mathbb {A}}}}_{bb}(x,y^1,y^2,s^1,s^2)}w(x-l^1s^1-l^2s^2,y^1 +l^1,y^2+l^2), \end{aligned}$$
(5.8)
$$\begin{aligned}&{\hat{w}}_{bs}(x,y^1,y^2,s^1,s^2):=\sup _{(l^1,m^2)\in {\hat{{\mathbb {A}}}}_{bs}(x,y^1,y^2,s^1,s^2)}w(x-l^1s^1 +m^2s^2,y^1+l^1,y^2-m^2), \end{aligned}$$
(5.9)
$$\begin{aligned}&{\hat{w}}_{sb}(x,y^1,y^2,s^1,s^2):=\sup _{(m^1,l^2)\in {\hat{{\mathbb {A}}}}_{sb}(x,y^1,y^2,s^1,s^2)}w(x+m^1s^1-l^2s^2,y^1 -m^1,y^2+l^2), \end{aligned}$$
(5.10)
$$\begin{aligned}&{\hat{w}}_{ss}(x,y^1,y^2,s^1,s^2):=\sup _{(m^1,m^2)\in {\hat{{\mathbb {A}}}}_{ss}(x,y^1,y^2,s^1,s^2)}w(x+m^1s^1 +m^2s^2,y^1-m^1,y^2-m^2). \end{aligned}$$
(5.11)

An interpretation of the function \({\hat{w}}_{bb}(x,y^1,y^2,s^1,s^2)\) is that it is the optimal value of the function w when starting from the position \((x,y^1,y^2)\) we buy asset I and asset II. The meaning of the other functions is similar.

Let

$$\begin{aligned}&{\hat{{\mathbb {A}}}}(x,y^1,y^2,s^1,s^2):=\left\{ ((l^1,m^1),(l^2,m^2))\in ([0,\infty )\times [0,\infty ))^2: \right. \nonumber \\&\left. x+(m^1-l^1)s^1+(m^2-l^2)s^2\geqslant 0, y^1-m^1+l^1\geqslant 0, y^2-m^2+l^2\geqslant 0\right\} \nonumber \\ \end{aligned}$$
(5.12)

denote the set of all admissible strategies when we start from position \((x,y^1,y^2)\) and \(s^1\), \(s^2\) are the prices of asset I and II respectively. Define

$$\begin{aligned}&{\hat{w}}(x,y^1,y^2,s^1,s^2):=\sup _{((l^1,m^1),(l^2,m^2))\in {\hat{{\mathbb {A}}}}(x,y^1,y^2,s^1,s^2)} w(x+(m^1-l^1)s^1\nonumber \\&\quad +\,(m^2-l^2)s^2, y^1-m^1+l^1, y^2-m^2+l^2). \end{aligned}$$
(5.13)

We have (see Proposition 2.8).

Lemma 5.2

The mapping \((x,y^1,y^2)\mapsto {\hat{w}}(x,y^1,y^2,s^1,s^2)\) is concave and

$$\begin{aligned} {\hat{w}}(x,y^1,y^2,s^1,s^2)={\mathbb {G}}(x+y^1s^1+y^2s^2,s^1,s^2). \end{aligned}$$
(5.14)

The next lemma and proposition are three dimensional versions of Proposition 2.2. Define for given \(x\geqslant 0\), \(y^1\geqslant 0\) and \(y^2\geqslant 0\)

$$\begin{aligned} F_{s^1,s^2}^{x,y^1,y^2}(u^1,u^2)=w(x+s^1u^1+s^2u^2,y^1-u^1,y^2-u^2). \end{aligned}$$
(5.15)

We shall maximize \(F_{s^1,s^2}^{x,y^1,y^2}\) over \(u^1\) and \(u^2\) such that \(u^1\leqslant y^1\), \(u^2\leqslant y^2\) and \(x+s^1u^1+s^2u^2\geqslant 0\).
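To make the optimality conditions below concrete, one can check numerically that a point satisfying the first order conditions of the following lemma is indeed a global maximum of \(F_{s^1,s^2}^{x,y^1,y^2}\). The sketch assumes a two-asset toy model with \(w(x,y^1,y^2)={\mathbb {E}}\big [\log (x+y^1S^1+y^2S^2)\big ]\) for independent two-point prices; all concrete values are illustrative assumptions, not the paper's setup.

```python
import math
from itertools import product

# Assumed toy model: w(x, y1, y2) = E[log(x + y1*S1 + y2*S2)] with
# independent S1 in {2.0, 0.5} and S2 in {1.5, 0.8}; each of the four
# outcomes has probability 1/4 (an illustration only).
SCEN = [(a, b) for a in (2.0, 0.5) for b in (1.5, 0.8)]

def w(x, y1, y2):
    return sum(0.25 * math.log(x + y1 * a + y2 * b) for a, b in SCEN)

def w_grad(x, y1, y2):
    wx = sum(0.25 / (x + y1 * a + y2 * b) for a, b in SCEN)
    wy1 = sum(0.25 * a / (x + y1 * a + y2 * b) for a, b in SCEN)
    wy2 = sum(0.25 * b / (x + y1 * a + y2 * b) for a, b in SCEN)
    return wx, wy1, wy2

# Pick the position (1, 1, 1) and choose the prices so that doing nothing
# is stationary there: s^i = w_{y^i}' / w_x'.
x0, y1, y2 = 1.0, 1.0, 1.0
wx, wy1, wy2 = w_grad(x0, y1, y2)
s1, s2 = wy1 / wx, wy2 / wx

def F(u1, u2):
    # F^{x,y1,y2}_{s1,s2}(u1, u2): sell u^i units of asset i (buy if negative)
    return w(x0 + s1 * u1 + s2 * u2, y1 - u1, y2 - u2)

# By concavity, F then attains its maximum over the feasible trades at (0, 0):
grid = [i / 20 * 1.4 - 0.4 for i in range(21)]   # u^i in [-0.4, 1.0]
f00 = F(0.0, 0.0)
ok = all(F(u1, u2) <= f00 + 1e-9
         for u1, u2 in product(grid, grid)
         if x0 + s1 * u1 + s2 * u2 > 0)
```

The construction mirrors (5.17): choosing the prices as marginal rates of substitution makes both partial derivatives of F vanish at the origin, and strict concavity of w then makes the stationary point the unique global maximum over the feasible set.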

Lemma 5.3

We have

$$\begin{aligned} {\hat{w}}(x,y^1,y^2,s^1,s^2)=w(x,y^1,y^2) \end{aligned}$$
(5.16)

when: \(x>0\), \(y^1>0\) and \(y^2>0\) and

$$\begin{aligned} \frac{\partial F_{s^1,s^2}^{x,y^1,y^2}}{\partial u^1}(0,0)= 0, \ \ \frac{\partial F_{s^1,s^2}^{x,y^1,y^2}}{\partial u^2}(0,0)=0, \end{aligned}$$
(5.17)

or \(x>0, y^1>0, y^2=0\) and

$$\begin{aligned} \frac{\partial F_{s^1,s^2}^{x,y^1,0}}{\partial u^1}(0,0) = 0, \ \ \frac{\partial F_{s^1,s^2}^{x,y^1,0}}{\partial u^2}(0,0-)\geqslant 0, \end{aligned}$$
(5.18)

or \(x>0, y^1=0, y^2>0\) and

$$\begin{aligned} \frac{\partial F_{s^1,s^2}^{x,0,y^2}}{\partial u^1}(0-,0) \geqslant 0, \ \ \frac{\partial F_{s^1,s^2}^{x,0,y^2}}{\partial u^2}(0,0) = 0, \end{aligned}$$
(5.19)

or \(x=0, y^1>0, y^2>0\) and

$$\begin{aligned} \frac{\partial F_{s^1,s^2}^{0,y^1,y^2}}{\partial u^1}(0+,0) = 0, \ \ \frac{\partial F_{s^1,s^2}^{0,y^1,y^2}}{\partial u^2}(0,0+) = 0, \end{aligned}$$
(5.20)

or \(x=0, y^1>0, y^2=0\) and for any \(\alpha \in [0,{s^1\over s^2-s^1}]\) provided that \(s^2>s^1\) or any \(\alpha \in [0,\infty )\) when \(s^2\leqslant s^1\) we have

$$\begin{aligned} ((1+\alpha )s^1-\alpha s^2){w_x}'(0+,y^1,0)-(1+\alpha ){w_{y^1}}'(0,y^1,0)+\alpha {w_{y^2}}'(0,y^1,0+)\leqslant 0, \end{aligned}$$
(5.21)

or \(x=0, y^1=0, y^2>0\) and for any \(\alpha \in [0,{s^2\over s^1-s^2}]\) provided that \(s^1>s^2\) or any \(\alpha \in [0,\infty )\) when \(s^1\leqslant s^2\) we have

$$\begin{aligned} (-\alpha s^1+(1+\alpha ) s^2){w_x}'(0+,0,y^2)+\alpha {w_{y^1}}'(0,0+,y^2)-(1+\alpha ) {w_{y^2}}'(0,0,y^2)\leqslant 0, \end{aligned}$$
(5.22)

or \(x>0, y^1=0, y^2=0\) and

$$\begin{aligned} \frac{\partial F_{s^1,s^2}^{x,0,0}}{\partial u^1}(0-,0)\geqslant 0, \ \ \frac{\partial F_{s^1,s^2}^{x,0,0}}{\partial u^2}(0,0-)\geqslant 0. \end{aligned}$$
(5.23)

Proof

Equality (5.16) means that it is optimal to do nothing when we are at \((x,y^1,y^2)\) and we have only the prices \(s^1\) and \(s^2\) for the first and second asset respectively. Therefore when \(x>0\), \(y^1>0\) and \(y^2>0\), the partial derivatives of the function \(F_{s^1,s^2}^{x,y^1,y^2}(u^1,u^2)\) should be equal to 0 for \(u^1=0\) and \(u^2=0\). When \(x>0, y^1>0, y^2=0\), the function \(F_{s^1,s^2}^{x,y^1,0}(u^1,u^2)\) attains its maximum for \(u^1=0\) and \(u^2=0\), and the partial derivative with respect to \(u^1\) should be equal to 0, while the left derivative with respect to \(u^2\) should be nonnegative (the function should be increasing up to 0). The case \(x>0, y^1=0, y^2>0\) can be studied in a similar way. When \(x=0, y^1>0, y^2>0\) we can sell both assets and therefore we have (5.20). In the case when \(x=0, y^1>0, y^2=0\) we consider the function \(u\mapsto w(0+s^1(1+\alpha )u-s^2\alpha u, y^1-(1+\alpha )u,0+\alpha u)\), in which \(\alpha \) is nonnegative and such that \(s^1(1+\alpha )u-s^2\alpha u\geqslant 0\). This function attains its maximum for \(u=0\) and therefore its right derivative at 0 should be nonpositive for each \(\alpha \) within the range defined in the Lemma. When \(x=0, y^1=0, y^2>0\) we consider the function \(u\mapsto w(0-s^1 \alpha u+s^2(1+\alpha ) u, 0+\alpha u, y^2-(1+\alpha ) u)\), which should have a nonpositive right derivative at 0 for \(\alpha \) as in the statement of the Lemma. For the case \(x>0, y^1=0, y^2=0\) we have the function \( F_{s^1,s^2}^{x,0,0}\), which should have nonnegative left derivatives at 0 both in \(u^1\) and \(u^2\). \(\square \)

An analog of Proposition 2.2 can be formulated as follows

Proposition 5.4

We have

$$\begin{aligned} {\hat{w}}(x,y^1,y^2,s^1,s^2)={\hat{w}}(x,y^1,y^2,{s^1}',{s^2}')=w(x,y^1,y^2) \end{aligned}$$
(5.24)

for \(x>0\), \(y^1>0\) and \(y^2>0\) when \((s^1,s^2)=({s^1}',{s^2}')\), for \(x>0\), \(y^1>0\) and \(y^2=0\) when \(s^1={s^1}'\), for \(x>0\), \(y^1=0\) and \(y^2>0\) when \(s^2={s^2}'\), for \(x=0\), \(y^1>0\) and \(y^2>0\) when \((s^1,s^2)=({s^1}',{s^2}')\), for \(x=0\), \(y^1>0\) and \(y^2=0\) when (5.21) is satisfied both for \((s^1,s^2)\) and \(({s^1}',{s^2}')\), and \(\alpha \) as in Lemma 5.3, for \(x=0\), \(y^1=0\) and \(y^2>0\) when (5.22) is satisfied both for \((s^1,s^2)\) and \(({s^1}',{s^2}')\), and \(\alpha \) as in Lemma 5.3, and finally for \(x>0\), \(y^1=0\) and \(y^2=0\) when (5.23) is satisfied both for \((s^1,s^2)\) and \(({s^1}',{s^2}')\).

Proof

When \(x>0\), \(y^1>0\) and \(y^2>0\) then we have (5.17) both for \((s^1,s^2)\) and \(({s^1}',{s^2}')\). Since

$$\begin{aligned} \frac{\partial F_{s^1,s^2}^{x,y^1,y^2}}{\partial u^1}(0,0)=s^1{w_x}'(x,y^1,y^2)-{w_{y^1}}'(x,y^1,y^2)=0 \end{aligned}$$
(5.25)

and the same holds for \({s^1}'\) we therefore have \(s^1={s^1}'\). From the partial derivative with respect to \(u^2\) we get \(s^2={s^2}'\). In the cases \(x>0\), \(y^1>0\) and \(y^2=0\) or \(x>0\), \(y^1=0\) and \(y^2>0\) from the partial derivatives with respect to \(u^1\) or \(u^2\) respectively we get either \(s^1={s^1}'\) or \(s^2={s^2}'\). In the case \(x=0\), \(y^1>0\) and \(y^2>0\) from (5.20) we have

$$\begin{aligned} s^1{w_x}'(0+,y^1,y^2)-{w_{y^1}}'(0+,y^1,y^2)=0, \ \ s^2{w_x}'(0+,y^1,y^2)-{w_{y^2}}'(0+,y^1,y^2)=0 \end{aligned}$$
(5.26)

both for \((s^1,s^2)\) and \(({s^1}',{s^2}')\), from which we obtain \((s^1,s^2)=({s^1}',{s^2}')\). The remaining cases of the Proposition, that is when two of the variables are equal to 0, follow directly from Lemma 5.3. \(\square \)

Let

$$\begin{aligned}&{\mathbb {A}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2):=\left\{ ((l^1,m^1),(l^2,m^2))\in ([0,\infty )\times [0,\infty ))^2: \ \right. \nonumber \\&\left. x+m^1\underline{s}^1-l^1\overline{s}^1+m^2\underline{s}^2 -l^2\overline{s}^2\geqslant 0,y^1-m^1+l^1\geqslant 0, y^2-m^2+l^2\geqslant 0\right\} \end{aligned}$$
(5.27)

be the set of admissible portfolio strategies when we have bid and ask prices \(\underline{s}^1, \overline{s}^1\) and \(\underline{s}^2, \overline{s}^2\) for asset I and asset II respectively. Define the one period value function corresponding to such strategies

$$\begin{aligned}&{\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2):= \nonumber \\&\sup _{((l^1,m^1),(l^2,m^2))\in {\mathbb {A}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)} w(x+m^1\underline{s}^1-l^1\overline{s}^1+m^2\underline{s}^2 -l^2\overline{s}^2, y^1\nonumber \\&\quad -\,m^1+l^1,y^2-m^2+l^2). \end{aligned}$$
(5.28)

The following lemma describes the decision rule we use in the case of \({\bar{w}}\)

Lemma 5.5

We have

$$\begin{aligned}&{\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)=\max \left[ {\hat{w}}_{bb} (x,y^1,y^2,\overline{s}^1,\overline{s}^2), {\hat{w}}_{bs}(x,y^1,y^2,\overline{s}^1,\underline{s}^2),\right. \nonumber \\&\left. \ \ \ \ {\hat{w}}_{sb}(x,y^1,y^2,\underline{s}^1,\overline{s}^2), {\hat{w}}_{ss}(x,y^1,y^2,\underline{s}^1,\underline{s}^2)\right] . \end{aligned}$$
(5.29)

Proof

In fact, starting from \((x,y^1,y^2)\) we have four possible choices of strategy: buy the first and the second asset, buy the first and sell the second, sell the first and buy the second, or sell both assets. In each case we may also do nothing, so the no-transaction option is included in this scheme. \(\square \)
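The decision rule of Lemma 5.5 is straightforward to implement once the four regime-restricted frictionless values are available. A minimal Python sketch, assuming a user-supplied helper `w_hat_restricted` (our own illustrative stand-in for \({\hat{w}}_{bb},{\hat{w}}_{bs},{\hat{w}}_{sb},{\hat{w}}_{ss}\); its signature is not part of the paper):

```python
def w_bar(x, y1, y2, bid1, ask1, bid2, ask2, w_hat_restricted):
    """Decision rule (5.29): the bid-ask one-period value is the best of the
    four trade regimes, priced at the ask when buying and at the bid when
    selling.  `w_hat_restricted(x, y1, y2, s1, s2, dir1, dir2)` is assumed to
    return the frictionless value when trading of asset i is restricted to
    direction dir_i ('buy' or 'sell'); doing nothing is allowed in every regime."""
    return max(
        w_hat_restricted(x, y1, y2, ask1, ask2, 'buy',  'buy'),   # w_hat_bb
        w_hat_restricted(x, y1, y2, ask1, bid2, 'buy',  'sell'),  # w_hat_bs
        w_hat_restricted(x, y1, y2, bid1, ask2, 'sell', 'buy'),   # w_hat_sb
        w_hat_restricted(x, y1, y2, bid1, bid2, 'sell', 'sell'),  # w_hat_ss
    )
```

Only the price pair passed to each regime changes; the outer maximum mirrors (5.29) exactly.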

Furthermore

Lemma 5.6

We have

$$\begin{aligned}&{\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\leqslant \min \left[ {\hat{w}}(x,y^1,y^2,\overline{s}^1,\overline{s}^2), {\hat{w}}(x,y^1,y^2,\overline{s}^1,\underline{s}^2),\right. \nonumber \\&\left. \ \ \ \ {\hat{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^2), {\hat{w}}(x,y^1,y^2,\underline{s}^1,\underline{s}^2)\right] . \end{aligned}$$
(5.30)

Proof

Clearly

$$\begin{aligned} x+(m^1-l^1){\overline{s}}^1+(m^2-l^2){\overline{s}}^2\geqslant x+m^1 \underline{s}^1-l^1{\overline{s}}^1+m^2\underline{s}^2-l^2{\overline{s}}^2 \end{aligned}$$

and therefore

$$\begin{aligned} {\mathbb {A}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\subseteq {\hat{{\mathbb {A}}}}(x,y^1,y^2,\overline{s}^1,\overline{s}^2). \end{aligned}$$
(5.31)

Similarly we have

$$\begin{aligned} \begin{aligned}&{\mathbb {A}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\subseteq {\hat{{\mathbb {A}}}}(x,y^1,y^2,\overline{s}^1,\underline{s}^2),\\&{\mathbb {A}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\subseteq {\hat{{\mathbb {A}}}}(x,y^1,y^2,\underline{s}^1,\overline{s}^2), \\&{\mathbb {A}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\subseteq {\hat{{\mathbb {A}}}}(x,y^1,y^2,\underline{s}^1,\underline{s}^2). \end{aligned} \end{aligned}$$
(5.32)

Now by (5.31) and monotonicity of w with respect to each coordinate

$$\begin{aligned}&{\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2, \overline{s}^2):=\sup _{((l^1,m^1),(l^2,m^2))\in {\mathbb {A}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)} \nonumber \\&w(x+m^1 \underline{s}^1-l^1\overline{s}^1+m^2\underline{s}^2-l^2 \overline{s}^2, y^1-m^1+l^1,y^2-m^2+l^2) \nonumber \\&\leqslant \sup _{((l^1,m^1),(l^2,m^2))\in {\hat{{\mathbb {A}}}} (x,y^1,y^2,\overline{s}^1,\overline{s}^2)} w(x+(m^1-l^1) \overline{s}^1+(m^2-l^2)\overline{s}^2, \nonumber \\&\ \ \ \ y^1-m^1+l^1,y^2-m^2+l^2) = {\hat{w}} (x,y^1,y^2,\overline{s}^1,\overline{s}^2) \end{aligned}$$
(5.33)

and similarly using (5.32) we obtain

$$\begin{aligned} \begin{aligned}&{\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2) \leqslant {\hat{w}} (x,y^1,y^2,\overline{s}^1,\underline{s}^2), \\&{\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2) \leqslant {\hat{w}} (x,y^1,y^2,\underline{s}^1,\overline{s}^2), \\&{\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2) \leqslant {\hat{w}} (x,y^1,y^2,\underline{s}^1,\underline{s}^2), \end{aligned} \end{aligned}$$

which completes the proof. \(\square \)
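The set inclusions (5.31)-(5.32) behind Lemma 5.6 can be spot-checked numerically. A self-contained sketch (the function names are ours; the membership tests are transcribed from (5.27) and the frictionless definition of \({\hat{{\mathbb {A}}}}\)):

```python
def in_A(x, y1, y2, l1, m1, l2, m2, bid1, ask1, bid2, ask2):
    # bid-ask admissibility (5.27): buy at the ask, sell at the bid
    return (x + m1 * bid1 - l1 * ask1 + m2 * bid2 - l2 * ask2 >= 0
            and y1 - m1 + l1 >= 0 and y2 - m2 + l2 >= 0)

def in_A_hat(x, y1, y2, l1, m1, l2, m2, s1, s2):
    # frictionless admissibility: a single price per asset
    return (x + (m1 - l1) * s1 + (m2 - l2) * s2 >= 0
            and y1 - m1 + l1 >= 0 and y2 - m2 + l2 >= 0)

def inclusion_holds(x, y1, y2, l1, m1, l2, m2, bid1, ask1, bid2, ask2):
    # A should be contained in A_hat for every choice of bid or ask per asset
    if not in_A(x, y1, y2, l1, m1, l2, m2, bid1, ask1, bid2, ask2):
        return True  # nothing to check for strategies outside A
    return all(in_A_hat(x, y1, y2, l1, m1, l2, m2, s1, s2)
               for s1 in (bid1, ask1) for s2 in (bid2, ask2))
```

The inclusion holds because trading at a single price within the spread is never worse than buying at the ask and selling at the bid.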

We now define the no transactions zone and the buying-no transactions, no transactions-buying, selling-no transactions, no transactions-selling, buying-buying, buying-selling, selling-buying and selling-selling zones. To simplify notation we suppress the dependence of these zones on the values of \(\underline{s}^1, \overline{s}^1, \underline{s}^2, \overline{s}^2\) and on the range \( {\mathbb {R}}_{+}^{3}\) of \((x, y^1, y^2)\):

$$\begin{aligned} \mathbf {NT}&:=\left\{ (x,y^1,y^2):\ {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)=w(x,y^1,y^2)\right\} \end{aligned}$$
(5.34)
$$\begin{aligned} \mathbf {BNT}&:=\left\{ (x,y^1,y^2): \ {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2) \right. \nonumber \\&\,\left. = \sup _{\left\{ l^1\geqslant 0:\ x-l^1 \overline{s}^1\geqslant 0\right\} } w(x-l^1 \overline{s}^1, y^1+l^1,y^2)\right\} \Bigg \backslash \mathbf {NT} \nonumber \\ \mathbf {NTB}&:=\left\{ (x,y^1,y^2): \ {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\right. \nonumber \\&\,\left. = \sup _{\left\{ l^2\geqslant 0: \ x-l^2 \overline{s}^2\geqslant 0\right\} } w(x-l^2 \overline{s}^2, y^1,y^2+l^2)\right\} \setminus \mathbf {NT} \nonumber \\ \mathbf {SNT}&:=\left\{ (x,y^1,y^2): \ {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\right. \nonumber \\&\,\left. = \sup _{\left\{ m^1\geqslant 0:\ m^1\leqslant y^1\right\} } w(x+m^1 \underline{s}^1, y^1-m^1,y^2)\right\} \setminus \mathbf {NT} \nonumber \\ \mathbf {NTS}&:=\left\{ (x,y^1,y^2):\ {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2) \right. \nonumber \\&\,\left. = \sup _{\left\{ m^2\geqslant 0:\ m^2\leqslant y^2\right\} } w(x+m^2 \underline{s}^2, y^1,y^2-m^2)\right\} \setminus \mathbf {NT} \nonumber \\ \mathbf {BB}&:=\left\{ (x,y^1,y^2):\ {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\right. \nonumber \\&\,\left. ={\hat{w}}_{bb}(x,y^1,y^2, \overline{s}^1,\overline{s}^2)\right\} \setminus \left\{ \mathbf {BNT}\cup \mathbf {NTB} \cup \mathbf {NT}\right\} \nonumber \\ \mathbf {BS}&:=\left\{ (x,y^1,y^2): \ {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\right. \nonumber \\&\,\left. ={\hat{w}}_{bs}(x,y^1,y^2, \overline{s}^1,\underline{s}^2)\right\} \setminus \left\{ \mathbf {BNT}\cup \mathbf {NTS} \cup \mathbf {NT}\right\} \nonumber \\ \mathbf {SB}&:=\left\{ (x,y^1,y^2): \ {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\right. \nonumber \\&\,=\left. 
{\hat{w}}_{sb}(x,y^1,y^2, \underline{s}^1,\overline{s}^2)\right\} \setminus \left\{ \mathbf {SNT}\cup \mathbf {NTB} \cup \mathbf {NT}\right\} \nonumber \\ \mathbf {SS}&:=\left\{ (x,y^1,y^2):\ {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)\right. \nonumber \\&\,\left. ={\hat{w}}_{ss}(x,y^1,y^2, \underline{s}^1,\underline{s}^2)\right\} \setminus \left\{ \mathbf {SNT}\cup \mathbf {NTS} \cup \mathbf {NT}\right\} \end{aligned}$$
(5.35)

Theorem 5.7

The optimal strategies in the sets \(\mathbf {BB}\cup \mathbf {BS}\cup \mathbf {SB} \cup \mathbf {SS}\) are to reach the point

$$\begin{aligned} (h^0(c,s^1,s^2),h^1(c,s^1,s^2),h^2(c,s^1,s^2)), \end{aligned}$$
(5.36)

where c is the wealth corresponding to the prices \(s^1\), \(s^2\), which depend on the zones \(\mathbf {BB}, \mathbf {BS}, \mathbf {SB}, \mathbf {SS}\); that is, we have

$$\begin{aligned} \mathbf {BB}&=\left\{ (x,y^1,y^2): y^1< h_1(x+y^1\overline{s}^1+y^2\overline{s}^2,\overline{s}^1,\overline{s}^2),y^2 < h_2(x+y^1\overline{s}^1+y^2\overline{s}^2,\overline{s}^1,\overline{s}^2) \right\} , \end{aligned}$$
(5.37)
$$\begin{aligned} \mathbf {BS}&=\left\{ (x,y^1,y^2): y^1 < h_1(x+y^1\overline{s}^1+y^2\underline{s}^2,\overline{s}^1,\underline{s}^2), y^2 > h_2(x+y^1\overline{s}^1+y^2\underline{s}^2,\overline{s}^1,\underline{s}^2) \right\} , \end{aligned}$$
(5.38)
$$\begin{aligned} \mathbf {SB}&=\left\{ (x,y^1,y^2): y^1 > h_1(x+y^1\underline{s}^1+y^2\overline{s}^2,\underline{s}^1,\overline{s}^2), y^2 < h_2(x+y^1\underline{s}^1+y^2\overline{s}^2,\underline{s}^1,\overline{s}^2) \right\} , \end{aligned}$$
(5.39)
$$\begin{aligned} \mathbf {SS}&=\left\{ (x,y^1,y^2): y^1> h_1(x+y^1\underline{s}^1+y^2\underline{s}^2,\underline{s}^1,\underline{s}^2), y^2 > h_2(x+y^1\underline{s}^1+y^2\underline{s}^2,\underline{s}^1,\underline{s}^2) \right\} . \end{aligned}$$
(5.40)

When \((x,y^1,y^2)\in \mathbf {NT}\) it is optimal not to change our portfolio. In the sets \(\mathbf {BNT}\), \(\mathbf {NTB}\), \(\mathbf {SNT}\), \(\mathbf {NTS}\) the control reduces to the one asset case studied in Theorem 2.10. The sets \(\mathbf {NT}\), \(\mathbf {BNT}\), \(\mathbf {NTB}\), \(\mathbf {SNT}\), \(\mathbf {NTS}\), \(\mathbf {BB}\), \(\mathbf {BS}\), \(\mathbf {SB}\) and \(\mathbf {SS}\) are disjoint and cover all nonzero portfolio cases, i.e. all nonnegative portfolios except the trivial zero portfolio.

Proof

Notice first that when \((x,y^1,y^2)\in \mathbf {BB}\) then by formula (5.34) it is optimal to buy both assets. This is the case when \( y^1 < h_1(x+y^1\overline{s}^1+y^2\overline{s}^2,\overline{s}^1,\overline{s}^2)\) and \(y^2 < h_2(x+y^1\overline{s}^1+y^2\overline{s}^2,\overline{s}^1,\overline{s}^2),\) which is in fact the formula for \(\mathbf {BB}\) in (5.37). The other formulae follow from similar considerations. The form of the optimal portfolios follows directly from the optimal strategies for the function w. Directly from the form of (5.37) we see that the sets \(\mathbf {BB}\), \(\mathbf {BS}\), \(\mathbf {SB}\) and \(\mathbf {SS}\) are disjoint. The other sets are disjoint almost by definition (5.34). \(\square \)
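The zone tests (5.37)-(5.40) translate directly into code: evaluate the frictionless targets at the wealth computed with ask prices for assets being bought and bid prices for assets being sold, and compare with the current holdings. A sketch under the assumption that the target functions `h1`, `h2` are supplied (e.g. Merton-type constant-proportion targets); the fallthrough label for the remaining one-asset and no-transaction cases is our own:

```python
def classify_trade_zone(x, y1, y2, bid1, ask1, bid2, ask2, h1, h2):
    """Classify (x, y1, y2) into one of the trade zones (5.37)-(5.40).
    h1(c, s1, s2), h2(c, s1, s2) are the frictionless target holdings."""
    # BB: wealth priced at (ask1, ask2), both holdings below target
    c = x + y1 * ask1 + y2 * ask2
    if y1 < h1(c, ask1, ask2) and y2 < h2(c, ask1, ask2):
        return 'BB'
    # BS: wealth priced at (ask1, bid2)
    c = x + y1 * ask1 + y2 * bid2
    if y1 < h1(c, ask1, bid2) and y2 > h2(c, ask1, bid2):
        return 'BS'
    # SB: wealth priced at (bid1, ask2)
    c = x + y1 * bid1 + y2 * ask2
    if y1 > h1(c, bid1, ask2) and y2 < h2(c, bid1, ask2):
        return 'SB'
    # SS: wealth priced at (bid1, bid2)
    c = x + y1 * bid1 + y2 * bid2
    if y1 > h1(c, bid1, bid2) and y2 > h2(c, bid1, bid2):
        return 'SS'
    return 'NT-or-boundary'  # remaining one-asset / no-transaction cases
```

For example, with constant-proportion targets an all-cash portfolio falls in `BB` and an all-stock portfolio with large holdings falls in `SS`.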

In analogy to Proposition 2.13 we now consider the shadow price for the two asset case.

Proposition 5.8

If \((x,y^1,y^2)\in \mathbf {NT}\) and \(x,y^1,y^2>0\) then there exist unique prices \({\hat{s}}^1(x,y^1,y^2)\) and \({\hat{s}}^2(x,y^1,y^2)\) such that \(\underline{s}^1\leqslant {\hat{s}}^1(x,y^1,y^2) \leqslant \overline{s}^1\), \(\underline{s}^2 \leqslant {\hat{s}}^2(x,y^1,y^2)\leqslant \overline{s}^2\) and

$$\begin{aligned} {\hat{w}}(x,y^1,y^2,{\hat{s}}^1(x,y^1,y^2), {\hat{s}}^2(x,y^1,y^2))=w(x,y^1,y^2) \end{aligned}$$
(5.41)

The prices \({\hat{s}}^1(x,y^1,y^2)\) and \({\hat{s}}^2(x,y^1,y^2)\) for \((x,y^1,y^2)\in \mathbf {NT}\) are called shadow prices.

Proof

If \((x,y^1,y^2)\in \mathbf {NT}\) then \({\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)=w(x,y^1,y^2)\). For \(c>0\), \(s>0\) let

$$\begin{aligned} {\bar{h}}^1(c,s,y^1):= \underset{\{(x,y^2): \ x+y^2s=c, x\geqslant 0, y^2\geqslant 0\}}{{{\,\mathrm{arg\,max}\,}}}{w(x,y^1,y^2)} \end{aligned}$$
(5.42)
$$\begin{aligned} {\bar{h}}^2(c,s,y^2):= \underset{\{(x,y^1): \ x+y^1s=c, x\geqslant 0, y^1\geqslant 0\}}{{{\,\mathrm{arg\,max}\,}}}w(x,y^1,y^2) \end{aligned}$$
(5.43)

Extending the proof of Lemma 2.5 we see that \({\bar{h}}^1\) and \({\bar{h}}^2\) are continuous functions. Consequently by Theorem 2.10 we have that \({\bar{h}}^2_0(x+\underline{s}^1y^1,\underline{s}^1,y^2)\leqslant x \leqslant {\bar{h}}^2_0(x+\overline{s}^1y^1,\overline{s}^1,y^2)\) and using Propositions 2.11 and 2.13 we obtain the existence of \({\hat{s}}^1(x,y^1,y^2)\) such that \(x={\bar{h}}^2_0(x+{\hat{s}}^1(x,y^1,y^2)y^1,{\hat{s}}^1(x,y^1,y^2),y^2)\), \(y^1={\bar{h}}^2_1(x+{\hat{s}}^1(x,y^1,y^2)y^1,{\hat{s}}^1(x,y^1,y^2),y^2)\) and \({\bar{w}}(x,y^1,y^2,{\hat{s}}^1(x,y^1,y^2), {\hat{s}}^1(x,y^1,y^2),\underline{s}^2,\overline{s}^2)=w(x,y^1,y^2)\). But this means that \((x,y^2)\) is in a no transaction zone in the market with bid and ask prices \(\underline{s}^2,\overline{s}^2\) and fixed \(y^1\). Using Theorem 2.10 again we have \({\bar{h}}^1_0(x+\underline{s}^2y^2,\underline{s}^2,y^1)\leqslant x \leqslant {\bar{h}}^1_0(x+\overline{s}^2y^2,\overline{s}^2,y^1)\) and by Propositions 2.11 and 2.13 we obtain the existence of \({\hat{s}}^2(x,y^1,y^2)\) such that \(x={\bar{h}}^1_0(x+{\hat{s}}^2(x,y^1,y^2)y^2,{\hat{s}}^2(x,y^1,y^2),y^1)\) and \(y^2={\bar{h}}^1_1(x+{\hat{s}}^2(x,y^1,y^2)y^2,{\hat{s}}^2(x,y^1,y^2),y^1)\), and finally we have (5.41). \(\square \)
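For a single coordinate, the existence argument of Proposition 5.8 suggests a simple numerical scheme: in the no transaction zone the incentive to buy is nonnegative at the bid and nonpositive at the ask, so the shadow price can be located by bisection. A sketch under the simplifying assumptions that a frictionless target function `h(c, s)` is available and that the buy incentive is monotone in the price (both assumptions are ours, not claims of the paper):

```python
def shadow_price(x, y, bid, ask, h, tol=1e-10):
    """Find a price s in [bid, ask] at which the current holding y equals the
    frictionless target h(x + s*y, s), by bisection on the buy incentive
    g(s) = h(x + s*y, s) - y.  Assumes g(bid) >= 0 >= g(ask), as in the
    no transaction zone, and that g is decreasing in s."""
    lo, hi = bid, ask
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(x + mid * y, mid) - y > 0:   # still wants to buy: raise the price
            lo = mid
        else:                             # wants to sell: lower the price
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, with the toy target \(h(c,s)=c/(2s)\) (invest half of wealth), the fixed point solves \(c/(2s)=y\), i.e. \(s=x/y\), and the bisection recovers it whenever that value lies inside the spread.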

6 Multidimensional Induction Step and Dynamic Portfolio

We shall formulate here major steps and sketch main differences in the proofs, which allow us to study the case of two assets in a similar way as in the one asset case.

Proposition 6.1

Function \({\hat{w}}\) is continuously differentiable with respect to the first three coordinates at points \((x, y^1, y^2, s^1, s^2) \in {\mathbb {R}}_{+}^{3} \times {\hat{{\mathbb {D}}}}^2\).

Proof

We follow the proof of Proposition 2.15. For \(z>0\) let

$$\begin{aligned} \Gamma (z):=\left\{ (u_1,u_2):\ u_1\geqslant 0, u_2\geqslant 0, u_1+u_2\leqslant z\right\} . \end{aligned}$$

Define \(w^*(z,u_1, u_2, s^1, s^2):= w(u_1, \frac{u_2}{s^1}, \frac{z-u_1-u_2}{s^2})\). Then

$$\begin{aligned} {\hat{w}}(x, y^1, y^2, s^1, s^2)=\sup _{(u_1,u_2)\in \Gamma (x+s^1y^1+s^2y^2)} w^*(x+s^1y^1+s^2y^2,u_1, u_2, s^1, s^2) \end{aligned}$$
(6.1)

and consider the function

$$\begin{aligned} W(z,s^1,s^2):= \sup _{(u_1,u_2) \in \Gamma (z)} w^*(z,u_1, u_2, s^1, s^2). \end{aligned}$$

When the function \(w^*\) attains its maximum over \(\Gamma (z)\) at \(u_{1,z,s^1,s^2}>0\), \(u_{2,z,s^1,s^2}>0\) such that \(u_{1,z,s^1,s^2}+u_{2,z,s^1,s^2}<z\), then differentiability of W (again we may locally assume that the supremum is attained in a fixed compact subset of \(\Gamma (z)\)) follows from Proposition 7.2. Moreover, since \({\hat{w}}(x,y^1,y^2,s^1,s^2)=W(x+s^1y^1+s^2y^2,s^1,s^2)\), by (7.3) the partial derivatives of \({\hat{w}}\) are in fact partial derivatives of \(w^*\) with \(u_{1,x+s^1y^1+s^2y^2,s^1,s^2}>0\), \(u_{2,x+s^1y^1+s^2y^2,s^1,s^2}>0\). When the maximum of w over \({\mathbb {C}}(z,s^1,s^2)\) is attained at a point \({\bar{x}}\geqslant 0, {\bar{y}}^1 \geqslant 0, {\bar{y}}^2 =0 \), we have

$$\begin{aligned} {\hat{w}}(x,y^1,y^2,s^1,s^2)=w({\bar{x}},{\bar{y}}^1,0) ={\tilde{W}}(x+y^1s^1+y^2s^2,s^1) \end{aligned}$$
(6.2)

where \({\tilde{W}}(z,s^1):= \sup _{u\in [0,z]} w(u,{z-u\over s^1},0)\). Continuous differentiability of \({\tilde{W}}\) with respect to z follows from the proof of Proposition 2.15. By (6.2), \({\hat{w}}\) is then also continuously differentiable with respect to \(x,y^1,y^2\).

The cases when the maximum of w over \({\mathbb {C}}(z,s^1,s^2)\) is attained at a point \({\bar{x}}= 0, {\bar{y}}^1 \geqslant 0, {\bar{y}}^2 \geqslant 0 \) or at a point \({\bar{x}}\geqslant 0, {\bar{y}}^1 = 0, {\bar{y}}^2 \geqslant 0 \) can be treated in a similar way using the same arguments. \(\square \)

Furthermore we have

Corollary 6.2

Function \({\overline{w}}\) is continuously differentiable with respect to the first three coordinates at points \((x, y^1, y^2, \underline{s}^1,\overline{s}^1, \underline{s}^2, \overline{s}^2) \in {\mathbb {R}}_{+}^{3} \times {{\mathbb {D}}}^2\) for \((x,y^1,y^2)\in \mathbf {BB} \cup \mathbf {BS}\cup \mathbf {SB}\cup \mathbf {SS}\) as well as in the interiors of the sets \(\mathbf {NT}\), \(\mathbf {BNT}\), \(\mathbf {NTB}\), \(\mathbf {SNT}\), \(\mathbf {NTS}\).

Proof

The set \( \mathbf {BB} \cup \mathbf {BS}\cup \mathbf {SB}\cup \mathbf {SS}\) is open and therefore, e.g., for \((x,y^1,y^2)\in \mathbf {BB}\)

$$\begin{aligned} {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)={\hat{w}}(x,y^1,y^2,\overline{s}^1,\overline{s}^2) \end{aligned}$$

while for \((x,y^1,y^2)\in \mathbf {BS}\)

$$\begin{aligned} {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)={\hat{w}}(x,y^1,y^2,\overline{s}^1,\underline{s}^2) \end{aligned}$$

so that continuous differentiability of \({\bar{w}}\) follows from continuous differentiability of \({\hat{w}}\). In the cases \((x,y^1,y^2)\in \mathbf {SB}\) and \((x,y^1,y^2)\in \mathbf {SS}\) the same considerations apply. When \((x,y^1,y^2)\) is in the interior of \(\mathbf {NT}\) we have \({\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)={w}(x,y^1,y^2)\) and continuous differentiability of \({\bar{w}}\) follows from continuous differentiability of w. When \((x,y^1,y^2)\) is in the interior of \(\mathbf {BNT}\) we have that

$$\begin{aligned} {\bar{w}}(x,y^1,y^2,\underline{s}^1, \overline{s}^1,\underline{s}^2,\overline{s}^2)=\sup _{\left\{ l^1\geqslant 0:\ x-l^1 \overline{s}^1\geqslant 0\right\} } w(x-l^1 \overline{s}^1, y^1+l^1,y^2) \end{aligned}$$

and again continuous differentiability of \({\bar{w}}\) follows as in the one asset case from continuous differentiability of \({\hat{w}}\). The cases when \((x,y^1,y^2)\) is in the interior of \(\mathbf {NTB}\), \(\mathbf {SNT}\) or \(\mathbf {NTS}\) can be shown in the same way. \(\square \)

Assume that on a given filtered probability space \(\big ( \Omega , {\mathcal {F}}, ({\mathcal {F}}_{t})_{t = 0, 1}, \mathbb {P} \big )\) we have four \({\mathcal {F}}_{1}\)-measurable random variables \({\underline{S}}_{1}^1\), \({\overline{S}}_{1}^1\), \({\underline{S}}_{1}^2\), \({\overline{S}}_{1}^2\) with \(0< {\underline{S}}_{1}^1 < {\overline{S}}_{1}^1\) and \(0< {\underline{S}}_{1}^2 < {\overline{S}}_{1}^2\), such that for each \((x,y^1,y^2)\in {\mathbb {R}}_{+}^{3}\) the derivatives of the random variable \({\overline{w}}(x,y^1,y^2,{\underline{S}}_{1}^1, {\overline{S}}_{1}^1, {\underline{S}}_{1}^2, {\overline{S}}_{1}^2)\), whenever they exist, are integrable. Furthermore assume that the conditional law \(\mathbb {P} \big ( ({\underline{S}}_{1}^1, {\overline{S}}_{1}^1,{\underline{S}}_{1}^2, {\overline{S}}_{1}^2) \in \cdot \big | {\mathcal {F}}_{0} \big )\) is continuous.

For \((x, y^1, y^2) \in {\mathbb {R}}_{+}^{3}\) put

$$\begin{aligned} {\tilde{w}}(x, y^1 , y^2) := \mathbb {E} \big ( {\overline{w}}(x, y^1, y^2, {\underline{S}}_{1}^1, {\overline{S}}_{1}^1, {\underline{S}}_{1}^2, {\overline{S}}_{1}^2) \big | {\mathcal {F}}_{0} \big ) , \end{aligned}$$

considering it as a regular conditional expected value using Lemma 7.3. We have
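Since \({\tilde{w}}\) is a conditional expectation of \({\overline{w}}\) over the next-period bid and ask prices, it can be approximated by plain Monte Carlo once a sampler of the conditional law is available; this is in the spirit of the approximation methods mentioned in the introduction. A sketch in which both the sampler `sample_prices` and the signature of `w_bar` are our own illustrative assumptions:

```python
import random

def w_tilde_mc(x, y1, y2, w_bar, sample_prices, n=10_000, seed=0):
    """Monte Carlo estimate of w~(x, y1, y2) =
    E[ w_bar(x, y1, y2, bid1, ask1, bid2, ask2) | F_0 ].
    `sample_prices(rng)` is assumed to return one draw
    (bid1, ask1, bid2, ask2) with bid < ask for each asset."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(n):
        bid1, ask1, bid2, ask2 = sample_prices(rng)
        total += w_bar(x, y1, y2, bid1, ask1, bid2, ask2)
    return total / n
```

In an actual induction step `w_bar` would itself come from the previous-stage value function; here any callable with the stated signature can be plugged in.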

Proposition 6.3

Random function \({\tilde{w}}\) is strictly concave.

Proof

We use the arguments of Proposition 3.1. Namely, for two different points \((x_1,y_1^1,y_1^2)\) and \((x_2,y_2^1,y_2^2)\) we have to exclude, \(\mathbb {P}\)-a.e., the case when there exist \((l_1^1,m_1^1,l_1^2,m_1^2) \in {\mathbb {A}}(x_1,y_1^1,y_1^2,{\underline{S}}^1_1, {\overline{S}}^1_1,{\underline{S}}^2_1,{\overline{S}}^2_1) \) and \((l_2^1,m_2^1,l_2^2,m_2^2) \in {\mathbb {A}}(x_2,y_2^1,y_2^2,{\underline{S}}^1_1, {\overline{S}}^1_1,{\underline{S}}^2_1,{\overline{S}}^2_1)\) such that

$$\begin{aligned}&(x_{1} + {\underline{S}}_{1}^1 m_{1}^1 - {\overline{S}}_{1}^1 l_{1}^1 + {\underline{S}}_{1}^2 m_{1}^2 - {\overline{S}}_{1}^2 l_{1}^2, y_{1}^1 - m_{1}^1 + l_{1}^1, y_{1}^2 - m_{1}^2 + l_{1}^2)\\&\quad =(x_{2} + {\underline{S}}_{1}^1 m_{2}^1 - {\overline{S}}_{1}^1 l_{2}^1 + {\underline{S}}_{1}^2 m_{2}^2 - {\overline{S}}_{1}^2 l_{2}^2, y_{2}^1 - m_{2}^1 + l_{2}^1, y_{2}^2 - m_{2}^2 + l_{2}^2). \end{aligned}$$

assuming additionally that \(l_{1}^1 \cdot m_{1}^1 = 0\), \(l_{2}^1 \cdot m_{2}^1 = 0\), \(l_{1}^2 \cdot m_{1}^2 = 0\) and \(l_{2}^2 \cdot m_{2}^2 = 0\). We shall consider three general cases:

Case 1. In both cases (i.e. when we start with initial position \((x_1,y_1^1,y_1^2)\) and when we start with \((x_2,y_2^1,y_2^2)\)) we make the same kind of decisions: that is, we buy or sell the first asset and we buy or sell the second asset. In this case \({\overline{S}}^1_1, {\overline{S}}^2_1\), or \({\overline{S}}^1_1, {\underline{S}}^2_1\), or \( {\underline{S}}^1_1,{\overline{S}}^2_1\) or \( {\underline{S}}^1_1,{\underline{S}}^2_1\) lie on an at most two dimensional affine subspace, which can happen with probability 0 (by continuity of the laws).

Case 2. Our investment strategies for \((x_1,y_1^1,y_1^2)\) and \((x_2,y_2^1,y_2^2)\) differ in the case of one asset. To be more precise consider the case \(m_1^1=0\), \(l_1^2=0\) and \(m_2^1=0\), \(m_2^2=0\). Then we have \(x_1-l_1^1{\overline{S}}^1_1+m_1^2{\underline{S}}^2_1=x_2-l_2^1{\overline{S}}^1_1-l_2^2{\overline{S}}^2_1\), \(y_1^1+l_1^1=y_2^1+l_2^1\), and \(y_1^2-m_1^2=y_2^2+l_2^2\). If \(m_1^2>0\) and \(l_2^2>0\) then

$$\begin{aligned}&{\hat{w}}(x_1-l_1^1{\overline{S}}^1_1+m_1^2{\underline{S}}^2_1,y_1^1+l_1^1,y_1^2-m_1^2,{\overline{S}}^1_1,{\underline{S}}^2_1)\nonumber \\&\quad ={\hat{w}}(x_1-l_1^1{\overline{S}}^1_1+m_1^2{\underline{S}}^2_1,y_1^1+l_1^1,y_1^2-m_1^2,{\overline{S}}^1_1,{\overline{S}}^2_1) \end{aligned}$$
(6.3)

from which by Proposition 5.4 we have that (a) \(x_1-l_1^1{\overline{S}}^1_1+m_1^2{\underline{S}}^2_1=0\), or (b) \(y_1^1+l_1^1=0\) or (c) \(y_1^2-m_1^2=0\). If \(y_1^1+l_1^1>0\) and \(y_1^2-m_1^2>0\) then (a) cannot happen by Proposition 5.4. Therefore it remains to consider the cases (b) and (c). In the case (b) we have \(y_1^1=0=l_1^1\), and then also \(y_1^1=0=l_2^1\). Consequently we have \(x_1-m_1^2{\underline{S}}^2_1=x_2-l_2^2{\overline{S}}_1^2\) and \(y_1^2-m_1^2=y_2^2+l_2^2\) and we can continue as in the proof of Proposition 3.1. In the case (c) we also have that \(y_2^2=l_2^2=0\) and then \(x_1-x_2+{\overline{S}}_1^1(y_1^1-y_2^1)+ {\overline{S}}_1^2 y_1^2=0\), which can happen with probability 0 (since it would mean that \(({\overline{S}}_1^1,{\overline{S}}_1^2)\) lies on a hyperplane). It remains to consider the cases \(m_1^2=0\) and \(l_2^2=0\). When \(m_1^2=0\) then \(y_1^2=y_2^2+l_2^2\) and \(l_2^2=y_1^2-y_2^2\) so that we have \(x_1-x_2+{\overline{S}}_1^1(y_1^1-y_2^1)+(y_1^2-y_2^2){\overline{S}}_1^2=0\), which can happen with probability 0. When \(l_2^2=0\) then \(m_1^2=y_1^2-y_2^2\) and \(x_1-x_2+{\overline{S}}_1^1(y_1^1-y_2^1)+{\underline{S}}_1^2 y_1^2=0\), which again can happen with probability 0.

Case 3. Our investment strategies for \((x_1,y_1^1,y_1^2)\) and \((x_2,y_2^1,y_2^2)\) differ for each asset. This is the case when e.g. we have \(m_1^1=0\), \(l_1^2=0\) and \(l_2^1=0\), \(m_2^2=0\). We then have \(x_1-l_1^1{\overline{S}}_1^1+m_1^2{\underline{S}}_1^2=x_2+m_2^1{\underline{S}}_1^1-l_2^2{\overline{S}}_1^2\), \(y_1^1+l_1^1=y_2^1-m_2^1\), and \(y_1^2-m_1^2=y_2^2+l_2^2\). When \(m_1^2>0\) and \(l_2^2>0\) then we have three cases: (a) \(x_1-l_1^1{\overline{S}}_1^1+m_1^2{\underline{S}}_1^2=0\), (b) \(y_1^1+l_1^1=0\) or (c) \(y_1^2-m_1^2=0\). When \(y_1^1+l_1^1>0\) and \(y_1^2-m_1^2>0\) the case (a) by Proposition 5.4 cannot happen. In the case (b) we have \(y_1^1=l_1^1=0\) and \(y_2^1=m_2^1\) and we can continue as in the proof of Proposition 3.1. In the case (c) we have \(y_1^2=m_1^2\) and \(y_2^2=l_2^2=0\) and we come to the equation \(x_1{\overline{S}}_1^1-x_1+y_1^2{\underline{S}}_1^2+y_2^2 {\underline{S}}_1^2=0\), which can happen with probability 0. The cases \(m_1^2=0\) or \(l_2^2=0\) lead to the cases studied in the proof of Proposition 3.1. \(\square \)

To finish induction step as in the two dimensional case we need the following

Proposition 6.4

Random function \({\tilde{w}}\) is continuously differentiable for \((x,y^1, y^2)\in {\mathbb {R}}_{+}^{3}\) almost surely.

Proof

Notice that \({\bar{w}}\) may fail to be differentiable at points \((x,y^1,y^2)\) for which we have

$$\begin{aligned} {\hat{w}}(x,y^1,y^2,s^1,s^2)=w(x,y^1,y^2) \end{aligned}$$
(6.4)

which corresponds to the boundary of \(\mathbf {NT}\). Then in the cases \(x>0, y^1>0, y^2>0\); \(x=0, y^1>0,y^2>0\); \(x>0, y^1=0,y^2>0\); \(x>0, y^1>0,y^2=0\); \(x>0, y^1=0, y^2=0\), using Proposition 5.4 we see that the set of values of \((s^1,s^2)\) for which (6.4) holds has conditional probability 0, by the continuity of the law \(\mathbb {P} \big ( ({\underline{S}}_{1}^1, {\overline{S}}_{1}^1,{\underline{S}}_{1}^2, {\overline{S}}_{1}^2) \in \cdot \big | {\mathcal {F}}_{0} \big )\). The equalities \({\hat{w}}(0,y^1,0,s^1,s^2)=w(0,y^1,0)\) and \({\hat{w}}(0,0,y^2,s^1,s^2)=w(0,0,y^2)\) do not determine \(s^1\) or \(s^2\) uniquely, but in these cases continuous differentiability of \({\bar{w}}\) follows as in Proposition 3.2. Other possible violations of differentiability concern the situation when our position in one asset is at the boundary of the no transaction zone while for the other asset we are in the buying or selling zone. Such cases practically reduce to the one asset case, which was studied in Proposition 3.2. \(\square \)

To continue the construction of the dynamic portfolio and to prove a two asset analog of Theorem 4.1 we shall need two asset versions of assumptions (A) and (B), concerning integrability of the suitable functions \({\overline{w}}_k\) and continuity of the conditional laws \(\mathbb {P} \big ( ({\underline{S}}_{k+1}^1, {\overline{S}}_{k+1}^1,{\underline{S}}_{k+1}^2, {\overline{S}}_{k+1}^2) \in \cdot \big | {\mathcal {F}}_{k} \big )\) for \(k=0,\ldots ,T-1\).