PHYSICAL REVIEW E 92, 042149 (2015)

Stationary properties of maximum-entropy random walks

Purushottam D. Dixit
Department of Systems Biology, Columbia University, New York, New York 10032, United States

(Received 22 June 2015; published 23 October 2015)

Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.

DOI: 10.1103/PhysRevE.92.042149

PACS number(s): 05.30.−d, 02.50.Tt, 05.40.Fb, 89.70.Cf

I. INTRODUCTION

On the one hand, in biological, sociological, and technological systems, our ability to collect large amounts of data has been improving continuously. Examples include measurements of the expression levels of thousands of mRNAs in individual cells [1,2], estimates of species abundance in different ecologies [3,4], and the connectivity and dynamics of socio-technological platforms such as Facebook and Twitter [5,6]. On the other hand, in many such cases our intuition for building prescriptive models has not yet fully developed. Consequently, probabilistic descriptive approaches have become popular. A prominent example is the principle of maximum entropy (ME) [7–10]. Intuitively, ME picks the "least informative" distribution while requiring it to reproduce certain low-dimensional aspects of the data, for example, lower order moments. The result is the Boltzmann exponential distribution in the constrained quantities. ME has been employed to study a diverse set of systems, for example, neuronal dynamics [11], bird flocks [12], ecological species distributions [13], gene expression noise [14], protein sequence variability [15,16], and behavior [17]. In many cases [11–14], but not always [15–17], the experimental data are a realization of a stochastic process. Here, in order to estimate the distribution over states, one may impose path-dependent constraints in addition to state-dependent constraints in the inference procedure. Also, the dynamical neighborhood of any state (the states reachable in a single transition) is usually finite, which defines the state space topology. How these three factors affect the maximum path entropy distribution over states, and how different this distribution is from the Boltzmann exponential distribution, remains unknown. We solve this problem for Markovian dynamics. We derive the transition probabilities and the stationary distribution over states of the maximum path entropy Markov process subject to state- and path-dependent constraints. We show that the stationary distribution is the outer product of the left and the right Perron-Frobenius eigenvectors of a transfer matrix, that it depends nontrivially on the topology and the imposed constraints, and that it is almost always substantially different from the Boltzmann distribution.


We illustrate our results with a particle diffusing on a two-dimensional lattice. This article is organized as follows. First, we derive the maximum path entropy Markovian random walk subject to state- and path-dependent constraints. Next, we illustrate our results with a particle diffusing on a two-dimensional grid. Finally, we speculate on connections with the path-integral formulation of diffusion processes.

II. THEORY

We begin with an observation. Discrete state stochastic systems can be modeled as random walks in higher dimensions. For example, the Glauber dynamics [18] of the Ising model with N spins is a random walk in 2^N dimensions. In Glauber dynamics, every state is connected to only N other states out of the 2^N, which defines the state space topology. To that end, we consider an irreducible and aperiodic discrete time Markovian random walk on a directed graph G with nodes V and edges E. We denote the unique stationary distribution over the states by {p_a} and the transition probabilities p(a → b) by k_{ab}. We assume that k_{ab} ≠ 0 if and only if (a,b) ∈ E and that p_a > 0 for all a.

A. State- and path-dependent constraints

The appropriate ensemble in which to impose state- and path-dependent constraints is the ensemble {Γ} of stationary state trajectories Γ ≡ ··· → a → b → ··· of fixed duration T ≫ 1. The entropy of the ensemble, normalized by T, is given by [19–22]

S = -\frac{1}{T} \sum_{\Gamma} p(\Gamma) \log p(\Gamma) \approx -\sum_{a,b} p_a k_{ab} \log k_{ab}.  (1)

The second approximation holds when T ≫ 1 for Markovian dynamics and is equivalent to constraining two-point correlations in a more general formulation [23–26]. In Eq. (1) and from here onwards, all summations involving quantities with two indices are restricted to the edges of G unless stated otherwise. We constrain the path ensemble average of a state- (or path-) dependent quantity r_{ab}. Generalization to multiple constraints is straightforward (see below). State-dependent quantities such as energy and particle number depend only on the initial state a or the final state b. Path-dependent quantities such as fluxes and rates depend on both states. The path ensemble averages are given by [19–21]

\langle r \rangle = \sum_{a,b} p_a k_{ab} r_{ab}.  (2)

Additionally, {p_a} and {k_{ab}} are not mutually independent. We have

\sum_b p_a k_{ab} = p_a, \quad \sum_a p_a k_{ab} = p_b, \quad \sum_{a,b} p_a k_{ab} = 1.  (3)
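To make Eqs. (1)-(3) concrete, the following sketch (not part of the original paper; it assumes NumPy and an arbitrary 3-state chain chosen purely for illustration) computes the path entropy rate and a path-ensemble average directly from a stationary distribution and a transition matrix, and checks the marginalization identities of Eq. (3).

```python
import numpy as np

def stationary_distribution(K):
    """Stationary distribution p of a row-stochastic matrix K, i.e., p K = p."""
    evals, evecs = np.linalg.eig(K.T)
    p = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return p / p.sum()

# A small illustrative 3-state transition matrix (rows sum to one).
K = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])
p = stationary_distribution(K)

# Entropy of the path ensemble per unit time, Eq. (1), with the convention 0 log 0 = 0.
on_edge = K > 0
S = -np.sum(p[:, None] * np.where(on_edge, K * np.log(np.where(on_edge, K, 1.0)), 0.0))

# Path-ensemble average of an arbitrary edge quantity r_ab, Eq. (2).
r = np.arange(9.0).reshape(3, 3)
r_mean = np.sum(p[:, None] * K * r)

# The marginalization identities of Eq. (3) hold by construction.
assert np.allclose((p[:, None] * K).sum(axis=1), p)
assert np.allclose((p[:, None] * K).sum(axis=0), p)
print(S, r_mean)
```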

B. Maximizing path entropy

We maximize S in Eq. (1) with respect to the unknown stationary distribution p_a and transition probabilities k_{ab} while imposing the constraints in Eqs. (2) and (3). Using Lagrange multipliers to impose the constraints, we write the unconstrained Lagrange function, sometimes called the Caliber [10,27,28],

C = -\sum_{a,b} p_a k_{ab} \log k_{ab} + \sum_a m_a \left( \sum_b p_a k_{ab} - p_a \right) + \sum_b n_b \left( \sum_a p_a k_{ab} - p_b \right) + \zeta \left( \sum_{a,b} p_a k_{ab} - 1 \right) - \gamma \left( \sum_{a,b} p_a k_{ab} r_{ab} - \langle r \rangle \right).  (4)

In Eq. (4), we have worked with a single path-dependent constraint for notational simplicity. Generalization to multiple constraints introduces additional Lagrange multipliers but is conceptually straightforward. Next, we maximize the Caliber with respect to k_{ab} and p_a. Differentiating the Caliber with respect to k_{ab}, we have

p_a (\log k_{ab} + 1) = p_a (m_a + n_b + \zeta - \gamma r_{ab}) \;\Rightarrow\; k_{ab} = e^{m_a + n_b + \zeta - 1 - \gamma r_{ab}}.  (5)

Differentiating the Caliber with respect to p_a, we have

0 = -\sum_b k_{ab} \log k_{ab} + m_a \sum_b k_{ab} - m_a + \sum_b n_b k_{ab} - n_a + \zeta \sum_b k_{ab} - \gamma \sum_b k_{ab} r_{ab}.  (6)

Substituting k_{ab} from Eq. (5) and using \sum_b k_{ab} = 1, we get

m_a + n_a = 1.  (7)

Substituting in Eq. (5), we find that the maximum path entropy transition probabilities are given by

k_{ab} = \frac{\phi_b}{\eta \phi_a} W_{ab}  (8)

when (a,b) ∈ E and zero otherwise. Here, W_{ab} = e^{-\gamma r_{ab}} when (a,b) ∈ E and zero otherwise, \phi_a = e^{-m_a}, and \eta = e^{-\zeta}. In order to determine \phi_a, we impose \sum_b k_{ab} = 1. We have

\sum_b W_{ab} \phi_b = \eta \phi_a.  (9)

Thus, \bar{\phi} = \{\ldots, \phi_a, \ldots, \phi_b, \ldots\} is the right eigenvector of W with eigenvalue \eta. How do we guarantee that k_{ab} is positive for all a and b connected by an edge and that the inferred Markov chain has the maximum path entropy? Since the graph representing the network of states is assumed to be aperiodic and ergodic, the matrix W is non-negative. The Perron-Frobenius theorem guarantees that the maximum eigenvalue of W is positive and that the corresponding right and left eigenvectors have positive elements. Moreover, the path entropy maximization problem is convex and has a unique maximum. Thus, if we choose \eta to be the maximum eigenvalue and \bar{\phi} the corresponding right eigenvector, the transition probabilities k_{ab} are guaranteed to be positive for all a and b, and the corresponding Markov chain is guaranteed to have the maximum path entropy under the imposed constraints. From here onwards, we work with this choice.

C. Stationary distribution

The stationary distribution over states {p_a} is determined by solving

\sum_a p_a k_{ab} = p_b \;\Rightarrow\; \sum_a \frac{p_a}{\phi_a} W_{ab} = \eta \frac{p_b}{\phi_b}.  (10)

Thus, if \bar{\psi} is the left Perron-Frobenius eigenvector and \bar{\phi} is the right Perron-Frobenius eigenvector of W with the same eigenvalue \eta, the stationary distribution is their outer product,

p_a = \psi_a \phi_a.  (11)

Equation (11) is reminiscent of quantum mechanics. W is a transfer operator, \bar{\phi} is the wave function, and \bar{\psi} is its conjugate. The stationary probability p_a is the outer product of \phi_a and \psi_a (see below). The Perron-Frobenius eigenvectors, and thus the stationary distribution, depend on the topology and the imposed constraints in a nontrivial fashion. In other words, the Boltzmann distribution, the maximum-entropy distribution over states, is no longer guaranteed when topological and dynamical information is introduced.

D. Detailed balance

Is the inferred Markov process detailed balanced? Let us calculate its entropy production rate \dot{s} [29],

\dot{s} = \sum_{a,b} p_a k_{ab} \log \frac{k_{ab}}{k_{ba}} = -\gamma \langle r_{ab} - r_{ba} \rangle.  (12)

In Eq. (12), the antisymmetric part of the constraint r_{ab} contributes to the entropy production. If all constraints are symmetric, the entropy production is zero and the Markov process is automatically detailed balanced. In fact, if detailed balance p_a k_{ab} = p_b k_{ba} is explicitly imposed, the inference problem is equivalent to constraining the symmetrized quantities r^{\dagger}_{ab} = \frac{1}{2}(r_{ab} + r_{ba}) (see Appendix for details). In this case, the transfer matrix W is symmetric and the left and the right Perron-Frobenius eigenvectors coincide. The stationary distribution is simply the outer product of this eigenvector with itself.
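The construction of Eqs. (8)-(11) reduces to a standard eigenvalue computation. The sketch below (an illustration under assumptions of my own, not code from the paper; it uses NumPy and an arbitrary fully connected 3-state graph) builds W_ab = e^{-γ r_ab}, extracts the Perron-Frobenius eigenvalue and eigenvectors, and checks that the resulting k_ab is a proper transition matrix whose stationary distribution is p_a = ψ_a φ_a.

```python
import numpy as np

def perron_vector(M):
    """Dominant (Perron-Frobenius) eigenvalue and positive eigenvector of a
    non-negative matrix M."""
    evals, evecs = np.linalg.eig(M)
    i = np.argmax(evals.real)
    return evals.real[i], np.abs(evecs[:, i].real)

def max_caliber_chain(A, r, gamma):
    """Transfer matrix W_ab = exp(-gamma r_ab) on the edges of A; returns the
    maximum path entropy transition matrix k_ab = phi_b W_ab / (eta phi_a),
    Eq. (8), and the stationary distribution p_a ~ psi_a phi_a, Eq. (11)."""
    W = np.where(A > 0, np.exp(-gamma * r), 0.0)
    eta, phi = perron_vector(W)      # right PF eigenvector of W
    _, psi = perron_vector(W.T)      # left PF eigenvector of W
    k = W * phi[None, :] / (eta * phi[:, None])
    p = psi * phi
    return k, p / p.sum()

# Sanity check on a fully connected 3-state graph with a random edge quantity r_ab.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
rng = np.random.default_rng(0)
r = rng.random((3, 3))
k, p = max_caliber_chain(A, r, gamma=0.5)
assert np.allclose(k.sum(axis=1), 1.0)   # rows of k are normalized, cf. Eq. (9)
assert np.allclose(p @ k, p)             # p is stationary, cf. Eq. (10)
```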


E. Path probability

Finally, we write down the probability of an arbitrary stationary state path Γ ≡ a_1 → a_2 → ··· → a_n,

p(\Gamma) = p_{a_1} k_{a_1 a_2} \cdots k_{a_{n-1} a_n} = \psi_{a_1} \phi_{a_1} \frac{W_{a_1 a_2} \phi_{a_2}}{\eta \phi_{a_1}} \cdots = \frac{1}{\eta^{n-1}} e^{-A(\Gamma)},  (13)

where the "action" A(Γ) associated with the path Γ is

A(\Gamma) = \gamma \sum_{t=1}^{n-1} r_{a_t a_{t+1}} - \log \psi_{a_1} - \log \phi_{a_n}.  (14)

The construction of the maximum path entropy Markov process is complete. Recently, similar results were developed from a large deviation point of view which provides a probabilistic rationale for maximizing path entropy [30,31].
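Equations (13) and (14) can be checked numerically. The sketch below (again an illustrative assumption, not the paper's code; it assumes NumPy and a 3-state fully connected graph) computes the probability of a specific path both as p_{a_1} times the product of transition probabilities and via the action of Eq. (14), and verifies that the two agree.

```python
import numpy as np

# Tiny setup: transfer matrix on a 3-state fully connected graph, W_ab = exp(-gamma r_ab).
rng = np.random.default_rng(1)
r = rng.random((3, 3)); np.fill_diagonal(r, 0.0)
gamma = 0.5
W = np.exp(-gamma * r); np.fill_diagonal(W, 0.0)

def perron(M):
    evals, evecs = np.linalg.eig(M)
    i = np.argmax(evals.real)
    return evals.real[i], np.abs(evecs[:, i].real)

eta, phi = perron(W)                  # right PF eigenvector
_, psi = perron(W.T)                  # left PF eigenvector
psi *= 1.0 / np.dot(psi, phi)         # normalize so that p_a = psi_a phi_a sums to one
k = W * phi[None, :] / (eta * phi[:, None])
p = psi * phi

path = [0, 2, 1, 0, 2]                # an arbitrary path a_1 -> ... -> a_n
direct = p[path[0]] * np.prod([k[a, b] for a, b in zip(path, path[1:])])
action = (gamma * sum(r[a, b] for a, b in zip(path, path[1:]))
          - np.log(psi[path[0]]) - np.log(phi[path[-1]]))
via_action = np.exp(-action) / eta ** (len(path) - 1)
assert np.isclose(direct, via_action)  # Eq. (13) with the action A of Eq. (14)
```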

III. NUMERICAL ILLUSTRATION

Equation (11) gives a recipe to calculate the stationary distribution from constraints, but it does not give us an intuitive understanding of how the distribution depends on them. To that end, we highlight three important features with a numerical example: (1) path entropy-enthalpy compensation, (2) state space topology, and (3) currents. We study the stationary distribution of a particle diffusing on a two-dimensional [−N,N] × [−N,N] square lattice to illustrate these features. We assume that the particle jumps to one of its nearest neighbors in a single transition.

A. Path entropy-enthalpy compensation

First, consider an aperiodic square lattice with boundaries. What is the stationary distribution when no state- and path-dependent constraints are imposed? We start with the transfer matrix W, which here is the adjacency matrix of G [32]. The stationary distribution is the square of its Perron-Frobenius eigenvector \bar{\phi}. For a = (x,y), p_a is given by

p_a = \frac{1}{(N+1)^2} \sin^2 \frac{(2x + 2N + 1)\pi}{4N + 4} \sin^2 \frac{(2y + 2N + 1)\pi}{4N + 4}.  (15)

Here, x, y ∈ [−N,N]. Equation (15) is recognized as the probability distribution corresponding to the lowest energy eigenfunction of the discretized form of the Schrödinger equation for a particle in a square box (see Appendix for details). The stationary distribution localizes near the center of the lattice (see Fig. 1), a striking departure from the microcanonical ME distribution in which all states are equiprobable. The entropic localization can be understood as follows. All paths of a fixed duration T have equal probability [see Eq. (13)]. But there are many more such paths near the center than near the boundaries. Thus, the states in the center are frequented more often than those near the boundaries, which leads to their higher probability [32].

FIG. 1. (Color online) The stationary distribution of a freely diffusing particle on an aperiodic square lattice.

How does the stationary distribution change when state-dependent quantities are constrained? To illustrate this, we define a potential Φ_a at every point a = (x,y),

\Phi_a = \frac{A}{x^2 + y^2 + B}.  (16)

We fix A = 11 and B = 10. The radially symmetric potential has a peak in the middle of the lattice and takes its lowest values in the four corners (see Appendix for a graph). Let us obtain the stationary distribution by constraining the average potential and imposing detailed balance. The symmetric transfer matrix is

W_{ab} = \exp\left[-\gamma \left(\frac{\Phi_a + \Phi_b}{2}\right)\right]  (17)

when a and b are nearest neighbors on the lattice and zero otherwise. Here γ is the Lagrange multiplier associated with the average potential. In Fig. 2 we show the stationary distribution for γ = 0.01 and γ = 0.1. When constrained by the average potential, the particle balances path multiplicity against the energetic unfavorability of states. This balance is reminiscent of the entropy-enthalpy compensation [33] well known in chemistry. We note that the values of the parameters used in this study are strictly for illustrative purposes and do not carry any special meaning.

FIG. 2. (Color online) The stationary distribution of the particle when the average potential is constrained.
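Both cases of this subsection amount to diagonalizing a symmetric transfer matrix on the lattice. A minimal sketch follows (assuming NumPy, N = 10 to keep the dense diagonalization cheap, and the parameters A = 11, B = 10, γ ∈ {0, 0.1} from the text); γ = 0 reproduces the free, entropically localized case and γ > 0 the potential-constrained case.

```python
import numpy as np

N = 10                                   # lattice [-N, N] x [-N, N]
L = 2 * N + 1
coords = [(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)]
index = {a: i for i, a in enumerate(coords)}

def transfer_matrix(gamma=0.0, A_pot=11.0, B_pot=10.0):
    """W_ab on nearest-neighbor edges; gamma = 0 gives the bare adjacency matrix
    (free diffusion), gamma > 0 weights edges by the potential of Eqs. (16)-(17)."""
    pot = {a: A_pot / (a[0] ** 2 + a[1] ** 2 + B_pot) for a in coords}
    W = np.zeros((L * L, L * L))
    for (x, y) in coords:
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            b = (x + dx, y + dy)
            if b in index:               # aperiodic lattice: no wrap-around
                W[index[(x, y)], index[b]] = np.exp(
                    -gamma * 0.5 * (pot[(x, y)] + pot[b]))
    return W

for gamma in (0.0, 0.1):                 # free diffusion, then constrained potential
    W = transfer_matrix(gamma)
    evals, evecs = np.linalg.eigh(W)     # W is symmetric here
    phi = np.abs(evecs[:, np.argmax(evals)])
    p = phi ** 2                         # detailed balance: p_a = phi_a^2
    p /= p.sum()
    # Compare the center of the lattice with a corner to see localization
    # (gamma = 0) and its partial suppression by the potential (gamma > 0).
    print(gamma, p[index[(0, 0)]], p[index[(N, N)]])
```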


FIG. 3. (Color online) The maximum path entropy stationary distribution p_a and the Boltzmann distribution q_a when the average potential is constrained.

B. Topology

Thus, defects and asymmetries in state space topology have a prominent impact on the stationary distribution. On a topologically symmetric state space, for example, a periodic lattice, a freely diffusing particle indeed has a flat stationary distribution (not shown). Nonetheless, the symmetry can be broken by a combination of the finite dynamical neighborhood of states and the potential landscape. In Fig. 3 we plot the stationary distribution p_a [Eq. (11)] and the Boltzmann distribution q_a ∝ e^{-β Φ_a} after constraining the average potential on a periodic lattice. p_a is calculated as above, with the slight modification that the underlying graph of connectivity represents a periodic lattice. γ in Eq. (17) is fixed at 0.1. The inverse temperature β of the maximum-entropy Boltzmann distribution q_a is adjusted to match the numerical value of the average potential of the maximum path entropy distribution p_a, which allows a direct comparison. Indeed, p_a is significantly different from q_a. How do we understand this difference? On the one hand, the ME distribution q_a depends solely on the state potential Φ_a. On the other hand, Eq. (13) shows that paths that frequent both high and low potential states have a non-negligible probability, thereby comparatively increasing the stationary probability of high potential states that are in the dynamical neighborhood of low potential states. As the dynamical neighborhood of the particle is expanded, for example, when the particle is allowed to jump to its nth (n > 1) nearest neighbors, p_a and q_a are expected to agree more and more. Indeed, if the particle can jump from any state to any other state in a single transition, the ME and the maximum path entropy predictions are trivially identical to each other [21].
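The comparison of Fig. 3 can be reproduced along the following lines (a sketch under assumed parameters N = 10 and γ = 0.1, not the paper's code): compute p_a on the periodic lattice, bisect for the inverse temperature β whose Boltzmann distribution has the same average potential, and compare the two distributions.

```python
import numpy as np

N, gamma = 10, 0.1
L = 2 * N + 1
coords = [(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)]
idx = {a: i for i, a in enumerate(coords)}
pot = np.array([11.0 / (x * x + y * y + 10.0) for (x, y) in coords])

# Symmetric transfer matrix of Eq. (17) on the *periodic* lattice.
W = np.zeros((L * L, L * L))
for (x, y) in coords:
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        b = ((x + dx + N) % L - N, (y + dy + N) % L - N)   # wrap around
        W[idx[(x, y)], idx[b]] = np.exp(-gamma * 0.5 * (pot[idx[(x, y)]] + pot[idx[b]]))

evals, evecs = np.linalg.eigh(W)
phi = np.abs(evecs[:, -1])
p = phi ** 2 / np.sum(phi ** 2)          # maximum path entropy distribution, Eq. (11)
target = p @ pot                         # average potential under p

def boltzmann_avg(beta):
    q = np.exp(-beta * pot)
    return (q / q.sum()) @ pot

lo, hi = 0.0, 50.0                       # bisect for beta giving the same average potential
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if boltzmann_avg(mid) > target else (lo, mid)
beta = 0.5 * (lo + hi)
q = np.exp(-beta * pot); q /= q.sum()
print(beta, np.abs(p - q).sum() / 2)     # total variation distance between p_a and q_a
```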

C. Currents

In addition to state-dependent quantities, one may constrain path-dependent quantities such as rates and currents. How do path-dependent constraints change the stationary distribution? On the periodic lattice, we constrain the average potential and a current along the positive Y axis (Fig. 4). To obtain the stationary distribution, we first identify the asymmetric transfer matrix,

W_{ab} = \exp\left[-\gamma \left(\frac{\Phi_a + \Phi_b}{2}\right) - \alpha J_{ab}\right].  (18)

As above, γ is the Lagrange multiplier associated with the potential and α is associated with the current. The current in the positive Y direction between states a = (x,y) and b = (z,w) is defined as J_ab = ±1 if w = y ± 1, with appropriate corrections at y,w = 1,N. J_ab is zero for sideways movement. J_ab is antisymmetric and contributes to entropy production, making the Markov process a nonequilibrium stationary state. We find the left and the right Perron-Frobenius eigenvectors \bar{\psi} and \bar{\phi} of W. The stationary distribution is the outer product of these two vectors, p_a = ψ_a φ_a. Figure 4 shows p_a at α = 0.1 and α = 0.5 with γ fixed at 0.1. We see that the net current modulates the stationary distribution, a fact well known in statistical physics [34]. This effect can be understood by looking at path probabilities. From Eq. (13), we know that paths that traverse regions of high potential have a low probability. But this may be alleviated if they simultaneously carry a net favorable current. This leads to a higher probability for energetically unfavorable states that are represented frequently in current-carrying paths.

FIG. 4. (Color online) The stationary distribution in the presence of a nonequilibrium current. As α increases from 0.1 to 0.5, the net current increases and modulates the stationary distribution to a greater extent. γ is fixed at 0.1.
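A sketch of the current-carrying case follows (illustrative, assuming NumPy, N = 10, γ = 0.1, α = 0.5, and the sign convention of Eq. (18)); it constructs the asymmetric transfer matrix, forms p_a = ψ_a φ_a from the left and right Perron-Frobenius eigenvectors, and evaluates the entropy production rate of Eq. (12), which for these two constraints reduces to −2α⟨J⟩.

```python
import numpy as np

N, gamma, alpha = 10, 0.1, 0.5
L = 2 * N + 1
coords = [(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)]
idx = {a: i for i, a in enumerate(coords)}
pot = np.array([11.0 / (x * x + y * y + 10.0) for (x, y) in coords])

# Asymmetric transfer matrix of Eq. (18) on the periodic lattice:
# J_ab = +1 (-1) for a jump in the +Y (-Y) direction and 0 for sideways moves.
W = np.zeros((L * L, L * L))
J = np.zeros((L * L, L * L))
for (x, y) in coords:
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        b = ((x + dx + N) % L - N, (y + dy + N) % L - N)
        i, j = idx[(x, y)], idx[b]
        J[i, j] = float(dy)
        W[i, j] = np.exp(-gamma * 0.5 * (pot[i] + pot[j]) - alpha * J[i, j])

def perron(M):
    evals, evecs = np.linalg.eig(M)
    i = np.argmax(evals.real)
    return evals.real[i], np.abs(evecs[:, i].real)

eta, phi = perron(W)                  # right PF eigenvector
_, psi = perron(W.T)                  # left PF eigenvector
k = W * phi[None, :] / (eta * phi[:, None])
p = psi * phi / np.dot(psi, phi)      # stationary distribution, Eq. (11)

# Entropy production rate, Eq. (12); nonzero only when a net current flows.
mask = W > 0
ratio = np.where(mask, k, 1.0) / np.where(mask, k.T, 1.0)
s_dot = np.sum(p[:, None] * np.where(mask, k * np.log(ratio), 0.0))
current = np.sum(p[:, None] * k * J)  # average current per step, <J>
assert np.isclose(s_dot, -2 * alpha * current)
print(eta, current, s_dot)
```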

IV. DISCUSSION

The maximum-entropy (ME) principle was first introduced by Gibbs in statistical physics more than a century ago [8] and was later established as an inference principle in its own right by Shore and Johnson [9]. As discussed in the Introduction, ME has been applied to a variety of problems across fields. Additionally, ME is often invoked as the information theoretic basis of statistical physics [7]. The majority of the applications of ME have concentrated on inferring distributions over states of a system. Yet ME can also be applied to study dynamics, where the probabilistic "state space" is the collection of all possible paths. In this work, we derived the maximum path entropy distribution over the "state space" of stationary paths of a Markov chain subject to state- and path-dependent constraints. We derived the analytical form of the corresponding transition probability matrix and the stationary distribution over states. While the maximum path entropy distribution has a Boltzmann-like exponential form for path probabilities [see Eq. (13)], the stationary distribution over states is profoundly affected by asymmetry in state space topology, the finite dynamical reach of states, and path-dependent constraints.


Notably, even in the absence of path-dependent constraints, state space topology affects the maximum path entropy distribution projected over states, which deviates considerably from the Boltzmann exponential distribution. These effects will be magnified in higher dimensions and are clearly very relevant in many discrete state systems where state-based ME has previously been employed [11–14]. Apart from their theoretical importance, the results developed here should allow better numerical predictions in ME modeling of dynamical probability distributions. In this work, we focused on probability distributions over states. But we have access to path probabilities as well [see Eq. (13)]. What is the relevance of the inferred Markovian dynamics to the study of diffusive random walks in general? We offer a speculation. The two mathematical frameworks that describe random walks, the local Fokker-Planck formulation and the nonlocal path-integral formulation, are often equivalent. For example, the local assertion that all nearest neighbor jumps on an infinite regular lattice are equiprobable is equivalent to the nonlocal assertion that all paths of equal duration are equiprobable. But confinement and lattice irregularities lead to prominent localization away from the boundary, a striking difference between the two approaches [32]. This nonenergetic localization has to be explained by invoking fictitious entropic forces in the Fokker-Planck approach. We believe that path-based approaches may be better descriptors of stochastic dynamics, especially for discrete state finite systems such as spin systems and chemical reaction networks. We leave this for future theoretical and experimental studies.

ACKNOWLEDGMENTS

We thank Dr. Sumedh Risbud, Dr. Manas Raach, Professor Thierry Mora, and Dr. Karthik Shekhar for valuable discussions and comments on the manuscript.

APPENDIX

1. Imposing detailed balance

As above, we consider the Caliber,

C = -\sum_{a,b} p_a k_{ab} \log k_{ab} + \sum_a m_a \left( \sum_b p_a k_{ab} - p_a \right) + \sum_b n_b \left( \sum_a p_a k_{ab} - p_b \right) + \zeta \left( \sum_{a,b} p_a k_{ab} - 1 \right) + \sum_{a,b} \delta_{ab} (p_a k_{ab} - p_b k_{ba}) - \gamma \left( \sum_{a,b} p_a k_{ab} r_{ab} - \langle r \rangle \right).  (A1)

We have introduced Lagrange multipliers δ_{ab} to enforce detailed balance. As above, all summations involving two indices are restricted to the edges of the graph. Differentiating the Caliber with respect to k_{ab}, we have

p_a (\log k_{ab} + 1) = p_a m_a + p_a n_b + p_a \zeta + p_a (\delta_{ab} - \delta_{ba}) - p_a \gamma r_{ab}  (A2)

\Rightarrow k_{ab} = e^{m_a + n_b + \zeta - 1 - \gamma r_{ab} + \delta_{ab} - \delta_{ba}}.  (A3)

Differentiating the Caliber with respect to p_a, we have

0 = -\sum_b k_{ab} \log k_{ab} + m_a \sum_b k_{ab} - m_a + \sum_b n_b k_{ab} - n_a + \zeta \sum_b k_{ab} + \sum_b k_{ab} (\delta_{ab} - \delta_{ba}) - \gamma \sum_b r_{ab} k_{ab}.  (A4)

Substituting k_{ab} from Eq. (A3), we get

m_a + n_a = 1.  (A5)

Substituting in Eq. (A3), we get

k_{ab} = \frac{\alpha_b}{\eta \alpha_a} e^{-\gamma r_{ab}} \kappa_{ab}.  (A6)

Here, α_a = e^{-m_a}, η = e^{-ζ}, and κ_{ab} = e^{δ_{ab} - δ_{ba}}. Notice that κ_{ab} κ_{ba} = 1. To determine κ_{ab}, we impose detailed balance,

\frac{k_{ab}}{k_{ba}} = \frac{p_b}{p_a} = \frac{\alpha_b^2}{\alpha_a^2} e^{-\gamma r_{ab} + \gamma r_{ba}} \kappa_{ab}^2  (A7)

\Rightarrow \kappa_{ab} = \sqrt{\frac{p_b}{p_a}} \frac{\alpha_a}{\alpha_b} e^{\frac{1}{2}\gamma (r_{ab} - r_{ba})}.  (A8)

Thus, the transition probabilities are

k_{ab} = \frac{\alpha_b}{\eta \alpha_a} e^{-\gamma r_{ab}} \sqrt{\frac{p_b}{p_a}} \frac{\alpha_a}{\alpha_b} e^{\frac{1}{2}\gamma (r_{ab} - r_{ba})}  (A9)

= \frac{1}{\eta} \sqrt{\frac{p_b}{p_a}} e^{-\frac{1}{2}\gamma (r_{ab} + r_{ba})}.  (A10)

Let \phi_a = \sqrt{p_a} and W_{ab} = e^{-\frac{1}{2}\gamma (r_{ab} + r_{ba})} when (a,b) ∈ E and zero otherwise. Using \sum_b k_{ab} = 1, we have

\sum_b W_{ab} \phi_b = \eta \phi_a.  (A11)

Thus, \bar{\phi}, the vector of square roots of the probabilities, is the eigenvector of W with eigenvalue η. Thus, imposing detailed balance is equivalent to constraining a symmetrized form of the constraints.
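The symmetrization argument above can be verified directly. The sketch below (an illustration with an arbitrary fully connected 4-state graph and γ = 0.7, assumptions of mine rather than the paper's) builds the symmetrized transfer matrix of Eqs. (A10)-(A11) from an asymmetric edge quantity r_ab and checks that the resulting chain satisfies detailed balance.

```python
import numpy as np

rng = np.random.default_rng(2)
n, gamma = 4, 0.7
r = rng.random((n, n)); np.fill_diagonal(r, 0.0)   # asymmetric edge quantity r_ab

# Symmetrized transfer matrix W_ab = exp(-(gamma/2)(r_ab + r_ba)), zero on the diagonal.
W = np.exp(-0.5 * gamma * (r + r.T)); np.fill_diagonal(W, 0.0)

evals, evecs = np.linalg.eigh(W)                   # W is symmetric
eta, phi = evals[-1], np.abs(evecs[:, -1])
k = W * phi[None, :] / (eta * phi[:, None])        # Eq. (A10): k_ab = (1/eta) sqrt(p_b/p_a) W_ab
p = phi ** 2 / np.sum(phi ** 2)                    # phi_a = sqrt(p_a), Eq. (A11)

assert np.allclose(k.sum(axis=1), 1.0)
assert np.allclose(p[:, None] * k, (p[:, None] * k).T)   # detailed balance: p_a k_ab = p_b k_ba
```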

2. Probability distribution of a free particle

To gain intuition, consider a free particle on a one-dimensional aperiodic lattice between x = −N and x = N. Assume that at every step the particle jumps to one of its nearest neighbors. Let us construct the maximum path entropy random walk for this particle without any state- or path-dependent constraints. The transfer matrix W has the following tridiagonal form:

W = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 1 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix}.  (A12)

If \bar{\phi} is an eigenvector of W corresponding to an eigenvalue η, we have

\phi_{a+1} + \phi_{a-1} - \phi_a = (\eta - 1)\phi_a  (A13)

if a ∈ [−N + 1, N − 1], and

\phi_2 = \eta \phi_1, \quad \phi_{N-1} = \eta \phi_N.  (A14)

Recognize that Eqs. (A13) and (A14) are the discretized version of the time-independent Schrödinger equation for a free particle,

-\frac{\partial^2 \phi}{\partial x^2} = (1 - \eta)\phi.  (A15)

The Perron-Frobenius eigenvector is then the lowest energy state of the particle in a box [35],

\phi_a = \frac{1}{n+1} \sin \frac{a \pi}{2n + 2}.  (A16)

The stationary distribution is simply the square of this eigenvector. From Eq. (A16), it is straightforward to write down the stationary distribution for the two-dimensional case [see Eq. (15)].

3. Potential landscape

Figure 5 gives a graphical representation of the potential landscape.

FIG. 5. (Color online) Potential landscape on a [−N,N] × [−N,N] square lattice with N = 20. The potential is highest at the center of the lattice and decreases as the reciprocal of the squared distance from the center [see Eq. (16)]. We have chosen A = 11 and B = 10.
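Eqs. (A12)-(A16) can be checked with a few lines of NumPy (a sketch, with N = 20 chosen to match Fig. 5; the sine profile is compared only up to an overall normalization, since the prefactor in Eq. (A16) depends on the indexing convention).

```python
import numpy as np

# Free particle on a one-dimensional aperiodic lattice of L = 2N + 1 sites:
# the transfer matrix is the tridiagonal 0/1 matrix of Eq. (A12).
N = 20
L = 2 * N + 1
W = np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)

evals, evecs = np.linalg.eigh(W)
phi = np.abs(evecs[:, -1])                 # Perron-Frobenius eigenvector

# Sine profile of the ground state of a particle in a box, cf. Eq. (A16),
# indexed a = 1, ..., L and compared up to normalization.
a = np.arange(1, L + 1)
sine = np.sin(a * np.pi / (L + 1))
assert np.allclose(phi / np.linalg.norm(phi), sine / np.linalg.norm(sine))

p = phi ** 2 / np.sum(phi ** 2)            # stationary distribution: square of the eigenvector
print(p[L // 2], p[0])                     # entropic localization: center vs boundary
```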

[1] A. K. Shalek et al., Nature (London) 498, 236 (2013).
[2] A. A. Powell et al., PLoS ONE 7, e33788 (2012).
[3] H. M. P. Consortium et al., Nature (London) 486, 207 (2012).
[4] B. Rodriguez-Brito et al., ISME J. 4, 739 (2010).
[5] H. Kwak, C. Lee, H. Park, and S. Moon, in Proceedings of the 19th International Conference on World Wide Web (ACM, New York, 2010), pp. 591–600.
[6] N. B. Ellison, C. Steinfield, and C. Lampe, J. Comput.-Mediat. Commun. 12, 1143 (2007).
[7] E. T. Jaynes, Phys. Rev. 106, 620 (1957).
[8] J. W. Gibbs, Elementary Principles in Statistical Mechanics: Developed with Special Reference to the Rational Foundation of Thermodynamics (Cambridge University Press, Cambridge, 2010).
[9] J. Shore and R. Johnson, IEEE Trans. Inf. Theory 26, 26 (1980).
[10] S. Pressé, K. Ghosh, J. Lee, and K. A. Dill, Rev. Mod. Phys. 85, 1115 (2013).
[11] E. Schneidman, M. J. Berry, R. Segev, and W. Bialek, Nature (London) 440, 1007 (2006).
[12] W. Bialek et al., Proc. Natl. Acad. Sci. 109, 4786 (2012).
[13] S. J. Phillips, R. P. Anderson, and R. E. Schapire, Ecol. Model. 190, 231 (2006).
[14] P. D. Dixit, Biophys. J. 104, 2743 (2013).
[15] T. Mora, A. M. Walczak, W. Bialek, and C. G. Callan, Proc. Natl. Acad. Sci. 107, 5405 (2010).
[16] K. Shekhar, C. F. Ruberman, A. L. Ferguson, J. P. Barton, M. Kardar, and A. K. Chakraborty, Phys. Rev. E 88, 062705 (2013).
[17] J. Peterson, P. D. Dixit, and K. A. Dill, Proc. Natl. Acad. Sci. 110, 20380 (2013).
[18] R. J. Glauber, J. Math. Phys. 4, 294 (1963).
[19] P. D. Dixit and K. A. Dill, J. Chem. Theory Comput. 10, 3002 (2014).
[20] P. D. Dixit, A. Jain, G. Stock, and K. A. Dill, J. Chem. Theory Comput. (2015), doi:10.1021/acs.jctc.5b00537.
[21] A. Filyukov and V. Y. Karpov, J. Eng. Phys. Thermophys. 13, 416 (1967).
[22] T. M. Cover and J. A. Thomas, Elements of Information Theory (John Wiley & Sons, New York, 2012).
[23] J. C. Vasquez, O. Marre, A. G. Palacios, M. J. Berry, and B. Cessac, J. Physiol. Paris 106, 120 (2012).
[24] H. Ge, S. Pressé, K. Ghosh, and K. A. Dill, J. Chem. Phys. 136, 064108 (2012).
[25] A. Cavagna, I. Giardina, F. Ginelli, T. Mora, D. Piovani, R. Tavarone, and A. M. Walczak, Phys. Rev. E 89, 042707 (2014).
[26] T. Mora, S. Deny, and O. Marre, Phys. Rev. Lett. 114, 078105 (2015).
[27] G. Stock, K. Ghosh, and K. A. Dill, J. Chem. Phys. 128, 194102 (2008).
[28] M. J. Hazoglou, V. Walther, P. D. Dixit, and K. A. Dill, J. Chem. Phys. 143, 051104 (2015).
[29] J. Schnakenberg, Rev. Mod. Phys. 48, 571 (1976).
[30] R. Chetrite and H. Touchette, in Annales Henri Poincaré (Springer, Berlin, 2014), pp. 1–53.
[31] R. Chetrite and H. Touchette, arXiv:1506.05291.
[32] Z. Burda, J. Duda, J. M. Luck, and B. Waclaw, Phys. Rev. Lett. 102, 160602 (2009).
[33] R. Lumry and S. Rajender, Biopolymers 9, 1125 (1970).
[34] D. Kondepudi and I. Prigogine, From Heat Engines to Dissipative Structures (John Wiley & Sons, New York, 1998).
[35] G. Lombardi and R. Rebaudo, Eigenvalues and eigenvectors of a special class of band matrices, 1988, http://etabeta.univ.trieste.it/dspace/bitstream/10077/4851/1/LombardiRebaudoRendMat20.pdf
