This section begins with a review of the notations and preparatory statements, some of which are already contained in the prior works [13, 14]. In particular, Sect. 2.1 offers a broader perspective on critical points in one-dimensional random media and states the main technical assumptions, Sect. 2.2 provides some examples, and then, Sect. 2.3 recalls and extends facts on the random dynamics of Dyson–Schmidt variables in the presence of hyperbolic critical points. These facts are transposed to the logarithmic Dyson–Schmidt variables in Sect. 2.4, namely the form in which they will be applied in Sect. 3. Finally, Sect. 2.5 provides a perturbative formula for the dynamics of the rescaled logarithmic Dyson–Schmidt variables.
2.1 Definition of Critical Points

The random matrices in (1) are the product of one matrix that is close to the identity and a random diagonal matrix \(D^0=\textrm{diag}(\kappa ,\kappa ^{-1})\). Modifications of this structural property in the sense of the following definition appear in several one-dimensional random problems.
Definition 4

Let \((\Sigma ,\textbf{P})\) be a probability space and, for each \(\sigma \in \Sigma \), suppose given a sufficiently smooth map \(E\in {\mathbb{R}}\mapsto \mathcal{T}^E_\sigma \in \textrm{SL}(2,{\mathbb{R}})\). Then, \(E_c\in {\mathbb{R}}\) is a critical point (or critical energy) of the family \((\mathcal{T}^E_\sigma )_{\sigma \in \Sigma }\) if the matrices \((\mathcal{T}^{E_c}_{\sigma })_{\sigma \in \Sigma }\) all commute. The critical point is then called
(i) elliptic if \(|\textrm{Tr}(\mathcal{T}^{E_c}_{\sigma })|<2\) for all \(\sigma \in \Sigma \);
(ii) parabolic if \(|\textrm{Tr}(\mathcal{T}^{E_c}_{\sigma })|=2\) and \(\mathcal{T}^{E_c}_{\sigma }\) is the same non-trivial Jordan block for all \(\sigma \in \Sigma \);
(iii) hyperbolic if \(|\textrm{Tr}(\mathcal{T}^{E_c}_{\sigma })|>2\) for a set of \(\sigma \in \Sigma \) of positive measure.
The two-site transfer matrix of the random hopping model given in (4) hence has \(E_\textrm{c}=0\) as a hyperbolic critical energy. More generally, if \(E_\textrm{c}\) is a critical point of a family \(E\mapsto \mathcal{T}^E_\sigma \), then one can find an invertible real \(2\times 2\) matrix M such that \(M\mathcal{T}^{E_\textrm{c}}_\sigma M^{-1}\) is a rotation matrix, a standard Jordan block or a diagonal matrix in the elliptic, parabolic and hyperbolic cases, respectively. In this representation, one can then expand the so-called M-modified transfer matrices \(T^\epsilon _\sigma =M\mathcal{T}^{E_\textrm{c}+\epsilon }_{\sigma } M^{-1}\) exactly as in (1) with zeroth order on the r.h.s. given either by random rotation matrices, a standard Jordan block or random diagonal matrices \(D^0=\textrm{diag}(\kappa ,\kappa ^{-1})\). If \(T^\epsilon _\sigma \) is obtained as a product of a finite number of one-site transfer matrices (such as in (4) where two one-site transfer matrices appear), one speaks of a random polymer model [14, 25]. Note that for the random hopping model the notation \(T^\epsilon _\sigma =M\mathcal{T}^{E_\textrm{c}+\epsilon }_{\sigma } M^{-1}\) is consistent with (1) because M is the identity and \(E_\textrm{c}=0\).
Let us briefly give a few examples of critical energies. For a random polymer model in which \(\Sigma \) only consists of two points, it generically happens that there exists an elliptic critical energy [17, 25]. At such an elliptic critical energy \(E_\textrm{c}\), it is then known that the Lyapunov exponent behaves like \(\gamma ^{E_\textrm{c}+\epsilon }=C\epsilon ^2+{\mathcal {O}}(\epsilon ^3)\) with a computable constant \(C>0\) [25]. On the other hand, parabolic critical energies appear at the band edges of random Jacobi matrices (such as the one-dimensional Anderson model) and lead to a rich scaling behavior of the Lyapunov exponent in their vicinity [31]. Also, the lower band edges of certain random Kronig–Penney models are parabolic critical energies and their Lyapunov exponent then behaves for \(\epsilon >0\) like \(\gamma ^{E_\textrm{c}-\epsilon }=C_-\,\epsilon +{\mathcal {O}}(\epsilon ^{3/2})\) and \(\gamma ^{E_\textrm{c}+\epsilon }=C_+\,\epsilon ^{1/2}+{\mathcal {O}}(\epsilon )\) to the outside and inside of the spectrum, respectively, again with computable constants \(C_\pm >0\) [16]. This paper is about hyperbolic critical points, which are further distinguished into several cases.
Definition 5

A hyperbolic critical point of \(E\in {\mathbb{R}}\mapsto \mathcal{T}^E_\sigma \in \textrm{SL}(2,{\mathbb{R}})\) is called
(i) unbalanced if \({\mathbb{E}}(\log (\kappa _\sigma ))\not =0\);
(ii) balanced if \({\mathbb{E}}(\log (\kappa _\sigma ))=0\);
(iii) balanced of rotating type if \({\mathbb{E}}(\log (\kappa _\sigma ))=0\) and \(a_\sigma >|b_\sigma |\) almost surely;
(iv) balanced of confined type if \({\mathbb{E}}(\log (\kappa _\sigma ))=0\) and \(b_\sigma >|a_\sigma |\) almost surely.
In the unbalanced case the Lyapunov exponent is simply given by \(\gamma ^{\epsilon }={\mathbb{E}}(\log (\kappa _\sigma ))+o(\epsilon )\) [14, 21]. Here, the focus is on balanced hyperbolic critical points, for which several examples will be given in Sect. 2.2. Let us note that one also has a balanced hyperbolic critical point of rotating type if \(a_\sigma <-|b_\sigma |\) almost surely, and of confined type if \(b_\sigma <-|a_\sigma |\) almost surely, but these cases reduce to the above after a conjugation of \(T^\epsilon _\sigma \) with \(J=\textrm{diag}(1,-1)\).
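A quick way to develop intuition for such Lyapunov exponents is direct simulation. The following minimal sketch is not from the paper: the two-point distribution of \(\log(\kappa)\), the choice \(a=1\), \(b=c=0\) (a rotating-type balanced model) and all numerical parameters are illustrative assumptions. It estimates \(\gamma^\epsilon\) for a family of the form (1) by renormalized vector iteration.

```python
import numpy as np

def lyapunov_estimate(eps, n_steps, rng):
    """Crude estimator of the Lyapunov exponent of an i.i.d. product of
    matrices of the form (1): renormalized vector iteration."""
    R = np.array([[0.0, -1.0], [1.0, 0.0]])              # rotation generator
    Q = (np.eye(2) + eps * R) / np.sqrt(1.0 + eps**2)    # unit determinant
    v = np.array([1.0, 0.0])
    total = 0.0
    for _ in range(n_steps):
        kappa = np.exp(rng.choice([-1.0, 1.0]))          # balanced: E[log kappa] = 0
        v = Q @ (np.array([kappa, 1.0 / kappa]) * v)     # T = Q * diag(kappa, 1/kappa)
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v = v / norm
    return total / n_steps

rng = np.random.default_rng(0)
gammas = {eps: lyapunov_estimate(eps, 100_000, rng) for eps in (0.1, 0.01)}
print(gammas)
```

One observes a positive Lyapunov exponent for \(\epsilon>0\) that shrinks as \(\epsilon\to 0\), in line with \(\epsilon=0\) being a balanced critical point.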
In the remainder of the paper, several quantitative bounds on the matrix entries of \(T^\epsilon _\sigma \) as well as the rotating or confined type of the balanced hyperbolic critical point will be used. To state these bounds and fix the corresponding constants, let us rewrite (1) as
$$\begin{aligned} T^\epsilon _\sigma \;=\; J\,Q^\epsilon _\sigma \,D^\epsilon _\sigma \, J , \end{aligned}$$
(13)
where \(J=\textrm{diag}(1,-1)\) as above and
$$\begin{aligned} D^\epsilon _\sigma&\;=\; \begin{pmatrix} \kappa _\sigma (1+\epsilon c_\sigma ) & 0 \\ 0 & (\kappa _\sigma (1+\epsilon c_\sigma ) )^{-1} \end{pmatrix} \;, \\ Q^\epsilon _\sigma&\;=\; \textbf{1}\,-\,\epsilon a_\sigma \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \,-\,\epsilon b_\sigma \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \,+\, \epsilon ^2 \,A^\epsilon _\sigma \;, \end{aligned}$$
(14)
with \(A^\epsilon _\sigma \) being a random \(2\times 2\) matrix bringing the term \({\mathcal {O}}(\epsilon ^2)\) in (1) into an explicit form. This corresponds to (17) in [13], except that there the additional assumption \(c=0\) was made. The factor J in (13) is merely inserted for notational convenience because further down it brings the dynamics (16) into the particularly simple form (17). The following technical assumptions are assumed to hold throughout this work:
Main Hypothesis

The family \(\epsilon \mapsto T^\epsilon _\sigma \) of random matrices has a balanced hyperbolic critical point at \(\epsilon =0\) of the form (1) with random variables \(\kappa _\sigma >0\), \(a_\sigma \), \(b_\sigma \), \(c_\sigma \) and \(A^\epsilon _\sigma \) all having distributions with compact support. The random variable \(\log (\kappa _\sigma )\) is balanced, namely \({\mathbb{E}}(\log (\kappa _\sigma ))=0\), and its distribution is supposed to be non-trivial in the sense that \(\textbf{P}\big (\{\log (\kappa _\sigma )\ne 0\}\big )>0\). Furthermore, with \(\mathop {\mathrm{ess\,sup}}\) and \(\mathop {\mathrm{ess\,inf}}\) taken w.r.t. \(\textbf{P}\), let us introduce the finite constants
$$\begin{aligned} C_0 \,:=\; \mathop {\mathrm{ess\,sup}}\,|\log (\kappa _\sigma )| \,\in \, (0,\infty ), \end{aligned}$$
and
$$\begin{aligned} C_1 \,:= \;\mathop {\mathrm{ess\,inf}}_{\sigma }(a_{\sigma } - |b_{\sigma }|), \qquad C'_1 \,:= \;\mathop {\mathrm{ess\,inf}}_{\sigma }(b_{\sigma }-|a_{\sigma }|), \end{aligned}$$
as well as
$$\begin{aligned} C_2 \,:=\; \mathop {\mathrm{ess\,sup}}_{\sigma }(|a_{\sigma }| + |b_{\sigma }| +|c_{\sigma }|),\quad C_3\;:=\; \sup _{\epsilon }\,\mathop {\mathrm{ess\,sup}}_{\sigma }\Vert A_{\sigma }^{\epsilon }\Vert . \end{aligned}$$
For a balanced hyperbolic critical point of rotating type, it is supposed that \(C_1>0\), while for the confined type, it is supposed that \(C'_1>0\).
Notations and conventions: For the sake of simplicity, we focus on \(\epsilon >0\) throughout the paper. To improve readability, from this point on random variables like \(\kappa _\sigma \) or \(A_\sigma ^\epsilon \) will often not carry the index \(\sigma \) or even the index \(\epsilon \), whenever confusion seems unlikely. We will also write \(\kappa _n\) or \(A_n^\epsilon \) for \(\kappa _{\sigma _n}\) or \(A_{\sigma _n}^\epsilon \) from now on. Moreover, we will denote sets like \(\{\sigma \in \Sigma \,:\,\log (\kappa _\sigma )>0\}\) simply by \(\{\log (\kappa )>0\}\) and then denote the indicator function on such a set by \(\textbf{1}\big [\log (\kappa ) > 0\big ]\). It will also be useful to introduce the centered random variable
$$\begin{aligned} w\;:=\; \frac{1}{C_0}\,\log (\kappa ) . \end{aligned}$$
(15)
According to the Main Hypothesis, it satisfies \(|w| \le 1\) \(\textbf{P}\)-a.s., and the support of \(w\) contains a positive and a negative real number and, furthermore, either \(-1\) or 1.
2.2 Examples of Hyperbolic Critical Points

A first example of a hyperbolic critical energy appears in the random hopping model already described in Sect. 1. If the distribution of the hopping elements \(t_n\) differs between the even and the odd sites, one is typically in the unbalanced case, which is studied in more detail in [14]. If the hopping terms are all i.i.d., then \({\mathbb{E}}(\log (\kappa _n))={\mathbb{E}}(\log (t_{2n}))-{\mathbb{E}}(\log (t_{2n+1}))=0\) and the hyperbolic critical energy is balanced. Moreover, (4) shows that it is of rotating type. As already claimed above, Theorems 1 and 2 therefore apply.
As already stressed in the prior work [13], balanced hyperbolic critical energies always appear at topological phase transitions in chiral one-dimensional (and, more generally, one-channel) topological insulators. At these phase transitions, the critical energy is then \(E_\textrm{c}=0\), the center of the spectrum. There are then structural arguments [14, Proposition 3] showing that for these hyperbolic critical energies the coefficients in the expansion (1) satisfy a deterministic inequality forcing \(a>|b|\), implying that these critical energies are of rotating type. A concrete example is the disordered Su–Schrieffer–Heeger model [27, 36], but also more general models in this class can be analyzed if one works with reduced transfer matrices [13]. As for the random hopping model, the inverse of the Lyapunov exponent defined by (2) is the localization length only up to a factor given by the average polymer length (see [13, 14, 25] for details).
Next, let us consider a random Dirac operator \(H=\gamma _2 \imath \partial _x+\sum _{n\in {\mathbb{Z}}}W_n\delta _n\) on \(L^2({\mathbb{R}},{\mathbb{C}}^2)\), where \(W_n=(W_n)^*\) are i.i.d. real \(2\times 2\) matrices and \(\gamma _1,\gamma _2,\gamma _3\) denote the standard Pauli matrices. The rigorous definition of the operator is implemented by boundary conditions \(\lim _{\varepsilon \downarrow 0}\psi (n+\varepsilon )=e^{\imath \gamma _2 W_n}\lim _{\varepsilon \downarrow 0}\psi (n-\varepsilon )\) on sufficiently smooth \(\psi \in L^2({\mathbb{R}},{\mathbb{C}}^2)\), see [34] for details. Let us now implement the chiral symmetry \(\gamma _3 H\gamma _3=-H\) and suppose that \({\mathbb{E}}(W_n)=0\). The chiral symmetry implies that \(W_n\) is off-diagonal with a random entry \(w_n\). The fundamental solution over \([n-1,n)\) at energy \(\epsilon \) is then given by
$$\begin{aligned} T^\epsilon _n \;=\; e^{\epsilon \left( {\begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix}}\right) } \,e^{w_n\gamma _3} \;=\; \left[ \textbf{1}\,+\,\epsilon \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \,+\, {\mathcal {O}}(\epsilon ^2) \right] \begin{pmatrix} e^{w_n} & 0 \\ 0 & e^{-w_n} \end{pmatrix} . \end{aligned}$$
This is hence of the form (1), showing that the chiral random Dirac operator has a balanced hyperbolic critical energy at \(E_\textrm{c}=0\), which is clearly of rotating type. Hence, the Lyapunov exponent is again given by Theorem 1.
Finally, let us come to an example of a balanced hyperbolic critical point of confined type. For that purpose, let us consider the partition function of a classical Ising chain with spin coupling \(J>0\) and random external magnetic field \((h_n)_{n\in {\mathbb{Z}}}\) which, at inverse temperature \(\beta =1\), volume N and with periodic boundary conditions, is given by
$$\begin{aligned} Z_{\beta =1,N}(J)&\;=\; \sum _{\sigma \in \{-1,1\}^N} \exp \left( \sum _{n=1}^N J\sigma _n\sigma _{n+1}+\sum _{n=1}^N h_n\sigma _n \right) \;, \end{aligned}$$
where the outer sum runs over all spin configurations \(\sigma \) and \(\sigma _{N+1}=\sigma _1\), assuring periodic boundary conditions. One is then interested in the free energy density
$$\begin{aligned} f(J) \;=\; \lim _{N\rightarrow \infty }\,\frac{1}{N}\,\log \big (Z_{\beta =1,N}(J)\big ) , \end{aligned}$$
and its behavior in the limit of strong coupling \(J\rightarrow \infty \). As is well known and easy to check (e.g., [12]), the partition function can be rewritten as the trace of a product of random \(2\times 2\) matrices:
$$\begin{aligned} Z_{\beta =1,N}(J)&\;=\; \textrm{Tr}\left( \prod _{n=1}^N \begin{pmatrix} e^{J} & e^{-J} \\ e^{-J} & e^{J} \end{pmatrix} \begin{pmatrix} e^{h_n} & 0 \\ 0 & e^{-h_n} \end{pmatrix} \right) \\&\;=\; \left( e^{2J} -e^{-2J}\right) ^{\frac{N}{2}} \,\textrm{Tr}\left( \prod _{n=1}^N (1-e^{-4J})^{-\frac{1}{2}} \begin{pmatrix} 1 & e^{-2J} \\ e^{-2J} & 1 \end{pmatrix} \begin{pmatrix} e^{h_n} & 0 \\ 0 & e^{-h_n} \end{pmatrix} \right) \;. \end{aligned}$$
In the second equality, a factor was taken out so that inside the trace there appears a product of random matrices with unit determinant, given by
$$\begin{aligned} T^\epsilon _n&\;=\; (1-e^{-4J})^{-\frac{1}{2}} \begin{pmatrix} 1 & e^{-2J} \\ e^{-2J} & 1 \end{pmatrix} \begin{pmatrix} e^{h_n} & 0 \\ 0 & e^{-h_n} \end{pmatrix}\\&\;=\; \left[ \textbf{1}\;+\; \epsilon \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \;+\; {\mathcal {O}}(\epsilon ^2) \right] \begin{pmatrix} \kappa _n & 0 \\ 0 & \tfrac{1}{\kappa _n} \end{pmatrix} \;, \end{aligned}$$
where we set \(\epsilon =e^{-2J}\) and \(\kappa _n=e^{h_n}\). Hence, one has a product of random matrices of the form (1) with \(a=0\), \(b=1\) and \(c=0\). If \(\gamma ^\epsilon \) denotes the Lyapunov exponent associated with this random product, the free energy density is thus given by
$$\begin{aligned} f(J) \;=\; \tfrac{1}{2}\log \big (e^{2J}-e^{-2J}\big )\,+\,\gamma ^\epsilon , \qquad \epsilon \,=\,e^{-2J} . \end{aligned}$$
While the first summand is \(J(1+{\mathcal {O}}(\epsilon ))\) and thus dominates the second, it is still of interest to compute the Lyapunov exponent. The small parameter is now \(\epsilon =e^{-2J}\), and \(\epsilon =0\) is a hyperbolic critical point. It is balanced if \({\mathbb{E}}(h_n)=0\) and, moreover, of confined type because \(b=1\) and \(a=0\). Hence, Theorem 3 can be applied to the random field Ising chain in this situation.
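The transfer-matrix identity and its unit-determinant factorization above are easy to verify numerically. The following sketch is illustrative only: the value \(J=1.2\), the uniform field distribution and the small system size \(N=6\) are arbitrary choices made so that the partition function can also be computed by brute force over all spin configurations.

```python
import itertools
import numpy as np

J = 1.2
rng = np.random.default_rng(3)
N = 6
h = rng.uniform(-0.5, 0.5, size=N)

# brute-force partition function with periodic boundary conditions
Z_brute = 0.0
for spins in itertools.product([-1, 1], repeat=N):
    s = np.array(spins, dtype=float)
    Z_brute += np.exp(J * np.sum(s * np.roll(s, -1)) + np.sum(h * s))

# transfer-matrix trace
K = np.array([[np.exp(J), np.exp(-J)], [np.exp(-J), np.exp(J)]])
P = np.eye(2)
for n in range(N):
    P = P @ K @ np.diag([np.exp(h[n]), np.exp(-h[n])])
Z_transfer = np.trace(P)

# factorized form with the unit-determinant matrices T^eps_n
epsv = np.exp(-2 * J)
prefactor = (np.exp(2 * J) - np.exp(-2 * J)) ** (N / 2)
P2 = np.eye(2)
for n in range(N):
    T = (1 - epsv**2) ** (-0.5) * np.array([[1.0, epsv], [epsv, 1.0]])
    P2 = P2 @ T @ np.diag([np.exp(h[n]), np.exp(-h[n])])
Z_factored = prefactor * np.trace(P2)

print(Z_brute, Z_transfer, Z_factored)
```

All three numbers agree, confirming that the prefactor \((e^{2J}-e^{-2J})^{N/2}\) exactly compensates the normalization \((1-e^{-4J})^{-1/2}\) per site.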
2.3 Random Dynamics in Dyson–Schmidt Variables

It is well known (e.g., [7, 13, 25, 29]) that the action of invertible real \(2\times 2\) matrices on \({\mathbb{R}}\textrm{P}(1)\) that appears on the r.h.s. of (6) is, under the stereographic projection (7), implemented by the standard Möbius transformation \(\left( {\begin{matrix}\alpha & \beta \\ \gamma & \delta \end{matrix}}\right) \cdot x=\frac{\alpha x+\beta }{\gamma x+\delta }\) on the Dyson–Schmidt variables. Associated with an i.i.d. sequence \((T^\epsilon _n)_{n\in {\mathbb{N}}}\) of matrices given by (1) with an initial condition \(x_0\in \dot{{\mathbb{R}}}\), one hence obtains a random dynamical system \((x^\epsilon _n)_{n\ge 0}\) by
$$\begin x^\epsilon _n\;=\;-T^\epsilon _n\cdot (-x^\epsilon _) , \end$$
(16)
in which again the overall sign in (1) is irrelevant. The minus signs in (16) result from the sign in (7) and maintain the orientation of the dynamics below. These sign changes are also implemented by a Möbius transformation, namely \(J\cdot x=-x\) for \(J=\textrm{diag}(1,-1)\), so that \(x^\epsilon _n=JT^\epsilon _nJ\cdot x^\epsilon _{n-1}\). As \(T^\epsilon _n\) is of the form (13) and given by a product of matrices, the group action property of the Möbius transformation shows
$$\begin x^\epsilon _n\;=\;Q^\epsilon _n\cdot (D^\epsilon _n \cdot x^\epsilon _) . \end$$
(17)
This explains why it is advantageous to include the factor J in (13). The random dynamics (17) is precisely the two-step dynamics of (11) in the x-picture, namely after the transformation (7). It is analyzed in great detail in the references [13, 14], and this section recalls and supplements several facts that are relevant for the present work. The formula (17) shows that the dynamics is given by the alternation of a diagonal Möbius action \(x \mapsto D^\epsilon \cdot x=\kappa ^2(1+\epsilon c)^2x\) followed by a perturbation \(Q^\epsilon \cdot \) of order \(\epsilon \). For all realizations, the action \(D^\epsilon \cdot \) has the two fixed points 0 and \(\infty \) (corresponding to \(\theta =0\) and \(\theta =\frac{\pi }{2}\) in \({\mathbb{R}}\textrm{P}(1)\)) and leaves the two intervals \((-\infty ,0)\) and \((0,\infty )\) invariant. Due to \(a>0\), the \(\epsilon \)-dependent perturbation \(Q^\epsilon \cdot \) only leads to passages through 0 from left to right, and from \(+\infty \) to \(-\infty \) (for \(\epsilon >0\)), so that the random dynamics enters the intervals \((-\infty ,0)\) and \((0,\infty )\) only from the left. The random times at which passages are completed are given by
$$\begin{aligned}&\{N\in {\mathbb{N}}\,:\,\textrm{sgn}(x^\epsilon _{N-1}) \ne \textrm{sgn}(x^\epsilon _{N})\}\nonumber \\&\quad = \{N\in {\mathbb{N}}\,:\, x^\epsilon _{N-1} \le 0< x^\epsilon _{N} \ \text { or } \ x^\epsilon _{N} \le 0 < x^\epsilon _{N-1}\} \,, \end{aligned}$$
(18)
and its order statistics are denoted by \(N_0< N_1< N_2 < \dots \). For (without loss of generality) \(x_{N_0} > 0\), the first run through \((0,\infty )\) takes \(N_1 - N_0\) steps, the first one through \((-\infty ,0)\) takes \(N_2 - N_1\) steps, and so on in an alternating manner, so that during the k-th passage the sign \(\nu \) in (9) is given by \(\nu =(-1)^k\). Then, \((N_{k+1}-N_k)_{k\ge 0}\) are called the random passage times. They are neither independent nor identically distributed, as they depend on the initial condition \(x_{N_k}\) of the passage which in turn depends on the full history. In order to deal with this difficulty, Sect. 3 introduces a slower and a faster comparison process (similar to the constructions in [13]). Up to errors, this allows one to decouple the passages so that renewal theory can be applied in the following. For the computation of the invariant measure (Theorem 2), it will then be relevant to control the dynamics within each passage. As the passages are either through \((0,\infty )\) or \((-\infty ,0)\), two cases have to be considered. It is, however, possible to reduce the analysis on the negative interval to that on the positive one by applying the orientation-preserving bijection \(x\in (-\infty ,0)\mapsto -\frac{1}{x}=J'\cdot x\in (0,\infty )\) where \(J'=\left( {\begin{matrix} 0 & -1\\ 1 & 0\end{matrix}}\right) \). Indeed, \(J'^*D^\epsilon J'=(D^\epsilon )^{-1}\) and \(J'^*Q^\epsilon J'\) merely has a changed sign before b in (14) and the higher-order term \(A^\epsilon \) is conjugated by \(J'\), but the sign before a does not change. Hence, the Main Hypothesis directly transposes. (Note that this does not hold in the unbalanced case dealt with in [13, 14] because \({\mathbb{E}}\,\log (\kappa )\) and \({\mathbb{E}}\,\log (\frac{1}{\kappa })\) then have a different sign.) In the remainder of this paper, only the dynamics on \((0,\infty )\) in the x-picture will be analyzed.
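The passage structure just described can be observed in a direct simulation of the two-step dynamics (17). The following sketch is illustrative only: the balanced two-point distribution of \(\kappa\) and the rotating-type choice \(a=1\), \(b=c=0\) are assumptions, not the paper's model. It records the sign changes of (18).

```python
import numpy as np

def moebius(M, x):
    """Standard Moebius action of a 2x2 matrix on the real line."""
    return (M[0, 0] * x + M[0, 1]) / (M[1, 0] * x + M[1, 1])

rng = np.random.default_rng(2)
eps, a = 0.05, 1.0                               # rotating type: a > |b| = 0
Q = np.eye(2) - eps * a * np.array([[0.0, -1.0], [1.0, 0.0]])
x = 1.0
passages = []
for n in range(50_000):
    kappa = np.exp(rng.choice([-1.0, 1.0]))      # balanced: E[log kappa] = 0
    x_new = moebius(Q, kappa**2 * x)             # two-step dynamics (17), c = 0
    if (x <= 0 < x_new) or (x_new <= 0 < x):     # sign change as in (18)
        passages.append(n)
    x = x_new
print(len(passages))
```

The recorded indices alternate between crossings of 0 (from left to right) and crossings through \(\infty\), exactly as described in the text.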
The lemmata below provide quantitative deterministic statements on passages of the dynamics through \((0,\infty )\). The first lemma states a monotonicity property of the perturbation. It is stated and proved in [13, Lemma 4]:
Lemma 6

For all realizations and \(\epsilon >0\), \(x \in [0,\infty )\) and \(Q^\epsilon \cdot x \ge 0\) imply \(Q^\epsilon \cdot x \ge x\).
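Lemma 6 can be sanity-checked numerically for the first-order part of \(Q^\epsilon\), i.e., dropping the term \(\epsilon^2A^\epsilon\). The sampled ranges for a and b below are illustrative assumptions ensuring a rotating-type situation with \(a>|b|\).

```python
import numpy as np

def moebius(M, x):
    """Standard Moebius action of a 2x2 matrix on the real line."""
    return (M[0, 0] * x + M[0, 1]) / (M[1, 0] * x + M[1, 1])

rng = np.random.default_rng(1)
eps = 0.01
ok = True
for _ in range(10_000):
    a = rng.uniform(1.0, 2.0)                    # rotating type: a > |b|
    b = rng.uniform(-0.5, 0.5)
    Q = (np.eye(2)
         - eps * a * np.array([[0.0, -1.0], [1.0, 0.0]])
         - eps * b * np.array([[0.0, 1.0], [1.0, 0.0]]))
    x = rng.uniform(0.0, 100.0)
    qx = moebius(Q, x)
    if qx >= 0.0 and qx < x:
        ok = False                               # would contradict Lemma 6
print(ok)
```

Indeed, whenever \(Q^\epsilon\cdot x\) stays non-negative, it lies above x; when the Möbius denominator changes sign, the image is negative and the implication holds vacuously.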
The next lemma provides lower bounds on the dynamics. It will allow the construction of the lower comparison process in Sect. 3.3. This latter process, as well as all quantities associated with it, will carry a hat. Let us introduce the points
$$\begin{aligned} \widehat{x}_- \,:=\; \tfrac{C_1\epsilon }{2}, \qquad \widehat{x}_c \,:=\; \tfrac{C_1\epsilon }{2}\big (e^{-2C_0}+1\big ), \qquad \widehat{x}_+ \,:=\; \tfrac{2e^{2C_0}}{C_1\epsilon }. \end{aligned}$$
(19)
Fig. 4 In both pictures, the arrows illustrate properties of the dynamics in the Dyson–Schmidt coordinates on \((0,\infty )\) as stated in Lemmata 7 (on the left) and 9 (on the right)
Lemma 7

For all realizations and \(\epsilon >0\) small enough, one has
$$\begin{aligned}&x \in [0,\infty ) \qquad&\Longrightarrow \qquad&Q^\epsilon \cdot (D^\epsilon \cdot x) \notin [0,\widehat{x}_-)\,, \end{aligned}$$
(20)
$$\begin{aligned}&x \in [\widehat{x}_-,\infty ) \qquad&\Longrightarrow \qquad&Q^\epsilon \cdot (D^\epsilon \cdot x) \notin [0,\widehat{x}_c)\,, \end{aligned}$$
(21)
$$\begin{aligned}&x \in [\widehat{x}_+,\infty ) \qquad&\Longrightarrow \qquad&Q^\epsilon \cdot (D^\epsilon \cdot x) \in (-\infty ,0) \,. \end{aligned}$$
(22)
Lemma 7 is [13, Lemma 5] where a proof is given. (A minor modification is needed to account for the contribution of c in \(D^\epsilon \), which was set to 0 in [13].) The claims of the lemma are graphically illustrated in Fig. 4. As stated above, these bounds are needed to construct a slower comparison process in Sect. 3. The remainder of the section consists of bounds that are needed to also control a faster comparison process. For notational purposes, it will be useful to introduce the quantity
$$\begin{aligned} \delta \,:=\; \frac{1}{C_0\log (\frac{1}{\epsilon })}. \end{aligned}$$
(23)
Note that \(\delta >0\) for \(\epsilon >0\) sufficiently small, and that \(\delta \rightarrow 0\) as \(\epsilon \rightarrow 0\), even though \(\epsilon \ll \delta ^\alpha \) for any \(\alpha \ge 1\). The next result shows that in the x-picture, up to some constant factor, the full action of \(Q^\epsilon D^\epsilon \) can be bounded by that of \(D^0\) on a large interval. (Recall that \(D^0\cdot \) is just multiplication by \(\kappa ^2\) in the x-picture.)
Lemma 8

There exists a constant C depending on \(C_0\), \(C_2\), \(C_3\) such that, after setting
$$\begin{aligned} \widetilde{x}_- \,:= \; \frac{C\epsilon }{\delta ^2},\qquad \widetilde{x}_+ \,:= \; \frac{\delta ^2}{C\epsilon }, \end{aligned}$$
(24)
for \(x \in [\widetilde{x}_-,\widetilde{x}_+]\) it follows for \(\epsilon > 0\) small enough and all realizations that
$$\begin{aligned} e^{-C_0\delta ^2}(D^0 \cdot x) \;\le \; Q^\epsilon \cdot (D^\epsilon \cdot x) \;\le \; e^{C_0\delta ^2}(D^0 \cdot x). \end{aligned}$$
Proof

It will be shown that it is possible to take \(C = \frac{8e^{2C_0}C_2}{C_0}\). As \(D^\epsilon \cdot x \in [e^{-2C_0-{\mathcal {O}}(\epsilon )}x,e^{2C_0+{\mathcal {O}}(\epsilon )}x]\) and \(D^0(D^{\epsilon })^{-1} \cdot x \in [e^{-{\mathcal {O}}(\epsilon )}x,e^{{\mathcal {O}}(\epsilon )}x]\) for all \(x \in (0,\infty )\), it suffices to show that \(x \in [e^{-2C_0}\widetilde{x}_-,e^{2C_0}\widetilde{x}_+]\) implies \(e^{-C_0\delta ^2}x \le Q^\epsilon \cdot x \le e^{C_0\delta ^2}x\). Therefore, take \(x \in [e^{-2C_0}\widetilde{x}_-,e^{2C_0}\widetilde{x}_+]\). Now, \(Q^\epsilon \cdot x \le e^{C_0\delta ^2}x\) reads (writing some contributions as error terms)
$$\begin{aligned} \frac{(1+{\mathcal {O}}(\epsilon ^2))x + (a-b)\epsilon + {\mathcal {O}}(\epsilon ^2)}{1+{\mathcal {O}}(\epsilon ^2)-(a+b+{\mathcal {O}}(\epsilon ))\epsilon x} \;\le \;e^{C_0\delta ^2}x, \end{aligned}$$
which is equivalent to the following inequality (again for all possible realizations)
$$\begin{aligned} (a+b+{\mathcal {O}}(\epsilon ))\epsilon \,e^{C_0\delta ^2}x^2 - (e^{C_0\delta ^2}-1 + {\mathcal {O}}(\epsilon ^2))x + (a-b)\epsilon + {\mathcal {O}}(\epsilon ^2) \;\le \; 0. \end{aligned}$$
(25)
The latter is indeed shown (estimating all random variables by the Main Hypothesis) by
$$\begin{aligned}&(a+b+{\mathcal {O}}(\epsilon ))e^{C_0\delta ^2}\epsilon x^2 - (e^{C_0\delta ^2}-1 + {\mathcal {O}}(\epsilon ^2))x + (a-b)\epsilon + {\mathcal {O}}(\epsilon ^2)\nonumber \\&\quad \le \; (C_2+{\mathcal {O}}(\epsilon ))(1+{\mathcal {O}}(\delta ^2))\epsilon x^2 - (C_0\delta ^2 + {\mathcal {O}}(\epsilon ^2))x + C_2\epsilon + {\mathcal {O}}(\epsilon ^2)\nonumber \\&\quad \le \;2C_2\epsilon x^2 - \frac{C_0}{2}\delta ^2 x + 2C_2\epsilon \nonumber \\&\quad \le \; 2C_2\epsilon x^2 - \left( \frac{2C_2\epsilon }{e^{2C_0}\widetilde{x}_+}+\frac{C_0\delta ^2}{4}\right) x + 2C_2\epsilon \nonumber \\&\quad =\; 2C_2\epsilon \,(x-e^{-2C_0}\widetilde{x}_-)(x-e^{2C_0}\widetilde{x}_+)\nonumber \\&\quad \le 0 \end{aligned}$$
(26)
for \(x \in [e^{-2C_0}\widetilde{x}_-,e^{2C_0}\widetilde{x}_+]\) and for \(\epsilon \) (and then also \(\delta \)) small enough.
It remains to be shown that \(x \in [e^{-2C_0}\widetilde{x}_-,e^{2C_0}\widetilde{x}_+]\) implies \(Q^\epsilon \cdot x \ge e^{-C_0\delta ^2}x\). The latter is equivalent to (25) after replacing \(\delta ^2\) by \(-\delta ^2\) and inserting a global minus sign on the left-hand side of this inequality. This modified statement is then once more shown (modifying the first line in the same way) by (26). \(\square \)
Now, one can further introduce
$$\begin{aligned} \widetilde{x}_c \,:=\; e^{3C_0}\,\widetilde{x}_-. \end{aligned}$$
(27)
Then, the next result is [13, Lemma 10]. (The proof is identical, even though the definitions of \(\widetilde{x}_\pm \) are different here.) It is also illustrated in Fig. 4, see the right half there.
Lemma 9

For all realizations and \(\epsilon >0\) small enough, one has
$$\begin{aligned} x \notin [0,\infty )&\quad \Longrightarrow \quad Q^\epsilon \cdot (D^\epsilon \cdot x) \notin [\widetilde{x}_-,\infty )\,, \end{aligned}$$
(28)
$$\begin{aligned} x \notin [\widetilde{x}_-,\infty )&\quad \Longrightarrow \quad Q^\epsilon \cdot (D^\epsilon \cdot x) \notin [\widetilde{x}_c,\infty )\,, \end{aligned}$$
(29)
$$\begin{aligned} Q^\epsilon \cdot (D^\epsilon \cdot x) \notin [0,\infty )&\quad \Longrightarrow \quad x \notin [0,\widetilde{x}_+)\,. \end{aligned}$$
(30)
Recollecting all objects introduced above, one has
$$\begin{aligned} 0 \;<\; \widehat{x}_-\;<\; \widehat{x}_c \;<\; \widetilde{x}_-\;<\; \widetilde{x}_c \;< \;1 \;<\; \widetilde{x}_+ \;<\; \widehat{x}_+ \;<\; \infty . \end{aligned}$$
(31)
2.4 Random Dynamics of Logarithmic Dyson–Schmidt Variables

This section merely spells out the results of Sect. 2.3 after the transformation (8) to logarithmic Dyson–Schmidt variables. As in the last section, the focus will merely be on positive Dyson–Schmidt variables, so only on the \(+\) component of (8), which reads \(x\in (0,\infty )\mapsto y=\frac{1}{2}\log (x)\in {\mathbb{R}}\). Taking logarithms of the objects in (31), namely \(\widehat{y}_- = \frac{1}{2}\log (\widehat{x}_-)\), \(\widehat{y}_c = \frac{1}{2}\log (\widehat{x}_c)\), \(\widetilde{y}_- = \frac{1}{2}\log (\widetilde{x}_-) = -\frac{1}{2}\log (\widetilde{x}_+) = -\widetilde{y}_+\), etc., yields the real constants
$$\begin{aligned} -\infty \;<\;\widehat{y}_- \;<\; \widehat{y}_c \;<\; \widetilde{y}_-\;<\; \widetilde{y}_c \;<\; 0 \;< \;\widetilde{y}_+ \;<\; \widehat{y}_+\;<\;\infty . \end{aligned}$$
(32)
All depend on \(\epsilon \) (or, equivalently, on \(\delta \)). Their limit behavior is described in the next lemma.
Lemma 10

For \(\epsilon \) small enough, it holds that
$$\begin{aligned} 2C_0\delta \,\widehat{y}_-&= -1 + {\mathcal {O}}(\delta )\,, \qquad 2C_0\delta \,\widehat{y}_c = -1 + {\mathcal {O}}(\delta )\,, \qquad 2C_0\delta \,\widehat{y}_+ = 1 + {\mathcal {O}}(\delta )\,, \\ 2C_0\delta \,\widetilde{y}_-&= -1 + {\mathcal {O}}(\delta \log (\delta ))\,, \quad 2C_0\delta \,\widetilde{y}_c = -1 + {\mathcal {O}}(\delta \log (\delta ))\,, \quad 2C_0\delta \,\widetilde{y}_+ = 1 + {\mathcal {O}}(\delta \log (\delta ))\,. \end{aligned}$$
Proof

This is immediate from the definitions, combined with (19), (24) and (27). \(\square \)
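In the y-picture, the diagonal action \(D^0\cdot\), which is multiplication by \(\kappa^2\) on x, becomes a mere additive shift by \(\log(\kappa)\); this elementary fact is what makes the logarithmic variables convenient. A trivial numerical confirmation (illustrative only, with an arbitrary distribution for \(\kappa\)):

```python
import numpy as np

rng = np.random.default_rng(4)
max_dev = 0.0
for _ in range(1000):
    kappa = np.exp(rng.uniform(-1.0, 1.0))
    x = np.exp(rng.uniform(-5.0, 5.0))   # a positive Dyson-Schmidt variable
    y = 0.5 * np.log(x)                  # logarithmic variable as in (8)
    x_new = kappa**2 * x                 # Moebius action of D^0 on x
    y_new = 0.5 * np.log(x_new)
    # in the y-picture the step is a random-walk shift by log(kappa)
    max_dev = max(max_dev, abs(y_new - (y + np.log(kappa))))
print(max_dev)
```

The deviation is zero up to floating-point rounding, confirming that the balanced diagonal dynamics is a centered random walk in y.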
In Sect. 3, the dynamics will mainly be analyzed in the logarithmic Dyson–Schmidt variables because it takes a particularly simple form. Indeed, the Möbius action \(T^\epsilon \cdot \) takes the form
$$\begin T^\epsilon *(y,\nu ) \;=\; \left( \f