Solving the chemical master equation using sliding windows
 Verena Wolf^{1},
 Rushil Goel^{2},
 Maria Mateescu^{3} and
 Thomas A Henzinger^{4}
DOI: 10.1186/1752-0509-4-42
© Wolf et al; licensee BioMed Central Ltd. 2010
Received: 1 November 2009
Accepted: 8 April 2010
Published: 8 April 2010
Abstract
Background
The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species.
Results
In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy.
Conclusions
The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori.
Background
Experimental studies have reported the presence of stochastic mechanisms in cellular processes [1–9] and therefore, during the last decade, stochasticity has received much attention in systems biology [10–15]. The investigation of stochastic properties requires that computational models take into consideration the inherent randomness of chemical reactions. Stochastic kinetic approaches may give rise to dynamics that differ significantly from those predicted by deterministic models, because a system might follow very different scenarios with nonzero likelihoods.
Under the assumption that the system is spatially homogeneous and has fixed volume and temperature, at each point in time the state of a network of biochemical reactions is given by the population vector of the involved chemical species. The temporal evolution of the system can be described by a Markov process [16], which is usually represented as a system of ordinary differential equations (ODEs), called the chemical master equation (CME).
The CME can be analyzed by applying numerical solution algorithms or, indirectly, by generating trajectories of the underlying Markov process, which is the basis of Gillespie's stochastic simulation algorithm [17, 18]. In the former case, the methods are usually based on a matrix description of the Markov process and are thus primarily limited by the size of the system. A survey and comparisons of the most established methods for the numerical analysis of discrete-state Markov processes are given by Stewart [19]. These methods compute the probability density vector of the Markov process at a number of time points up to an a priori specified accuracy. If numerical solution algorithms can be applied, they almost always require considerably less computation time than stochastic simulation, which only gives estimations of the measures of interest. This is particularly the case if not only means and variances of the state variables are estimated with stochastic simulation, but also the probabilities of certain events. However, for many realistic systems, the number of reachable states is huge or even infinite, and in this case numerical solution algorithms may not be applicable. This depends mainly on the number of chemical species: in low dimensions (say, fewer than ten species) a direct solution of the CME is possible, whereas in high dimensions stochastic simulation is the only choice. With stochastic simulation, estimates of the measures of interest can be derived once the number of trajectories is large enough to achieve the desired statistical accuracy. This is also the main drawback of simulative solution techniques: a large number of trajectories is necessary to obtain reliable results. For instance, in order to halve the confidence interval of an estimate, four times more trajectories have to be generated. Consequently, stochastic simulation is often only feasible with a very low level of confidence in the accuracy of the results.
We focus on two specific numerical solution methods, the uniformization method and the Krylov subspace method. We compare their efficiency when they are used to solve the ODEs that arise during the sliding window iteration. We also compare the sliding window method to the numerical algorithms applied in a global fashion, that is, to all reachable states (not only to the states of the window), for systems of tractable size. We are interested in the probability distribution of the Markov process and not only in means and variances. These probabilities are difficult to estimate accurately with stochastic simulation. Therefore, we compare the solution obtained by the sliding window method only to numerical solution algorithms but not to stochastic simulation.
Recently, finite state projection algorithms (FSP algorithms) for the solution of the CME have been proposed [20, 21]. They differ from our approach in that they are based solely on the structure of the underlying graph, whereas the sliding window method is based on the stochastic properties of the Markov process. The FSP algorithms start with an initial projection, which is expanded in size if necessary. The direction and the size of the expansion are chosen based on a qualitative analysis of the system in a breadth-first search manner. It is not clear how far the state space has to be explored in order to capture most of the probability mass during the next time step. Thus, if the projection size is too small, the computation has to be repeated with an expanded projection. Moreover, for most models, the location of the main portion of the probability mass follows a certain direction in the state space, whereas the expansion is done in all directions. Therefore, unnecessary calculations are carried out, because the projection contains states that are visited with a small probability. By contrast, in the sliding window approach, we determine the location and direction of the probability mass for the next computation step based on the reaction propensities and the length of the time step. The projection that we obtain is significantly smaller than the projection used in the FSP, whereas the accuracy of our approach is similar to the accuracy of the FSP. In this way we achieve large memory and computational savings, since the time complexity of our window construction is small compared to the calculation of the probability distribution of the window. In our experiments we never had to repeat the computation of the probabilities using a window of larger size.
The Fokker-Planck equation is an approximation of the CME, for which a solution can be obtained efficiently [22, 23]. This approximation, however, does not take into account the discrete nature of the system, but changes the underlying model by assuming a continuous state space. Other approaches to approximate the probability distributions defined by the CME are based on sparse grid methods [24], spectral methods [25], or the separation of time scales [26, 27]. The latter approach uses a quasi-steady-state assumption for a subset of chemical species and calculates the solution of an abstract model of the system. In contrast, we present an algorithm that computes a direct solution of the CME. Our method is also related to tau-leaping techniques [18, 28], because they require estimates of the upper and lower bounds on the population sizes of the chemical species, just as our method does. The time leap must be sufficiently small such that the changes in the population vector do not significantly affect the dynamics of the system. Our method differs from the calculation of the leap in predicting the future dynamics for a dynamically chosen time period. More precisely, we determine the length of the next time step while approximating the future behavior of the process.
Here, we present the sliding window method in more detail and provide an additional comparison between uniformization and Krylov subspace methods for the solution of the window. Moreover, we have improved our implementation of the algorithm and evaluated it on more examples, such as the bistable toggle switch, which is reported in detail.
The remainder of this paper is organized as follows. We first describe the theoretical framework of our modeling approach in the Background Section. In the Results Section we present the sliding window method, and numerical solution approaches for the CME. Experimental results are given at the end of Section Results.
Stochastic model
where c_{ m } > 0 is a constant and l_{ i } is the number of molecules of type i that are consumed by a reaction of type R_{ m }. The propensity α_{ m }(x) determines the "speed" of the reaction R_{ m } in x, as explained below. Note that the product of binomial coefficients ∏_{ i }C(x_{ i }, l_{ i }) equals the number of all distinct combinations of reactants. Besides the propensity function, we associate a change vector v^{(m)} ∈ ℤ^{ n } with R_{ m } that describes the effect of reaction type R_{ m }. If x is the current state and α_{ m }(x) > 0, then x + v^{(m)} is the state of the system after a reaction of type R_{ m }. Note that α_{ m }(x) > 0 implies that x + v^{(m)} contains no negative entries.
We denote the initial state of the system by y ∈ ℕ^{ n } and define S ⊆ ℕ^{ n } as the set of all states reachable from y via an arbitrary number of reactions, that is, S is the smallest subset of ℕ^{ n } such that y ∈ S and x' ∈ S if there exists m ∈ {1, ..., k} and x ∈ S with α_{ m }(x) > 0 and x + v^{(m)} = x'. Note that S is countable but possibly infinite.
As above, the set of states reachable from the initial state y = (y_{1}, y_{2}, y_{3}, y_{4}) is finite because of the conservation laws y_{1} = x_{1} + x_{3} and y_{2} = x_{2} + x_{3} + x_{4}, where we assume that y_{3} = y_{4} = 0.
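The reachable set S can be enumerated by a breadth-first search that repeatedly applies the change vectors to states with positive propensity. The Python sketch below is our own illustration (the encoding and function names are ours, not the paper's); it assumes the enzyme kinetics network of Example 1, R_1: E + S → C, R_2: C → E + S, R_3: C → E + P, with state x = (x_1, x_2, x_3, x_4) listing enzyme, substrate, complex, and product counts.

```python
from collections import deque

# Illustrative encoding of the enzyme network of Example 1:
# R1: E + S -> C,  R2: C -> E + S,  R3: C -> E + P,
# with state x = (E, S, C, P).
changes = [(-1, -1, 1, 0), (1, 1, -1, 0), (1, 0, -1, 1)]

def propensities(x, c=(1.0, 1.0, 1.0)):
    return [c[0] * x[0] * x[1], c[1] * x[2], c[2] * x[2]]

def reachable(y):
    """Breadth-first enumeration of the set S of states reachable from y."""
    seen, queue = {y}, deque([y])
    while queue:
        x = queue.popleft()
        for alpha, v in zip(propensities(x), changes):
            if alpha > 0:  # reaction enabled in x
                succ = tuple(a + b for a, b in zip(x, v))
                if succ not in seen:
                    seen.add(succ)
                    queue.append(succ)
    return seen

S = reachable((1000, 100, 0, 0))  # initial populations of pset a) below
```

For y = (1000, 100, 0, 0), the conservation laws above restrict the search to the 5151 states with x_3 + x_4 ≤ 100, matching the state-space size reported in the experimental section.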
Chemical master equation
where we assume a fixed enumeration of the state space. Note that the row sums of the (possibly infinite) matrix Q are zero and λ_{ x } = −Q(x, x), the exit rate of state x, is the reciprocal of the average residence time in x.
where the matrix exponential is given by e^{ Qt } = ∑_{i=0}^{∞} (Qt)^{ i }/i!.
This means that, for t_{0} = 0 and t_{ r } = t, we obtain p^{(t)} by the iterative scheme in Eq. (3) for the time steps t_{1} − t_{0}, t_{2} − t_{1}, ..., t_{ r } − t_{r−1}.
If the state space is infinite we can only compute approximations of p^{(t)}. But even if Q is finite, several factors can hamper the efficient solution of the matrix exponential in Eq. (5). First of all, the size of the matrix Q might be large because it grows exponentially with the number of state variables. However, usually Q is sparse, as the number of reaction types is small compared to the number of states. But even when Q is sparse, often only an approximate solution can be computed efficiently. Adding up a sufficiently large number of terms of the infinite sum is numerically unstable, as Q contains strictly positive and negative entries, leading to severe roundoff errors [31]. Various numerical solution methods exist for systems of first-order linear equations of the form of Eq. (4). However, many of them are not useful as they do not preserve the sparseness of Q. Several surveys and comparisons exist in the literature [19, 32, 33]. Most popular are methods based on uniformization [34, 35], approximations in the Krylov subspace [36], or numerical integration [37, 38]. We will describe the former two methods in more detail in the section on numerical solution methods.
Results
Sliding window method
The key idea of the algorithm proposed in this paper is to calculate an approximation of p^{(t)} in an iterative fashion, as described in Eq. (6). More precisely, we compute a sequence of approximations p̂^{(t_1)}, ..., p̂^{(t_r)} such that p̂^{(t_j)}(x) ≈ p^{(t_j)}(x) for all x ∈ W_{ j }, where W_{ j } is a subset of the state space, j ∈ {1, ..., r}. The sets W_{1}, ..., W_{ r } are called windows, and we assume that W_{ j } contains the states at which (most of) the probability mass is concentrated during the time interval [t_{j−1}, t_{ j }). We discuss the construction of the window W_{ j } later.
where p̂^{(t_0)} = (1_{ y })^{ T } and D_{ j } is the diagonal matrix whose main diagonal entries are one for x ∈ W_{ j } and zero otherwise. The row vector (1_{ y })^{ T } is one at position y and zero otherwise.
In the jth step, the matrix e^{ Q_{ j }τ_{ j }} contains the probabilities to move in τ_{ j } time units within W_{ j } from one state to another. As initial probabilities, Eq. (7) uses the approximations p̂^{(t_{j−1})}(x) for all states x ∈ W_{ j }. The diagonal matrix D_{ j } ensures that the probability mass located in W_{j−1}\W_{ j } is ignored during the computation, that is, only elements of the intersection W_{j−1} ∩ W_{ j } can have nonzero entries in the vector p̂^{(t_{j−1})}D_{ j }. This is necessary because Q_{ j } does not contain the transition rates of states outside of W_{ j } (these states are absorbing). Intuitively, the vector p̂^{(t_{ j })} describes the location of the probability mass after moving within W_{ j }.
Thus, if Eq. (7) is solved exactly, the total approximation error of the sliding window method is η_{ r }. Note that the error in Eq. (8) is the sum of the errors of all components of the vector p̂^{(t_{ r })}.
Window construction
In each step of the iteration the window W_{ j }must be chosen such that the error η_{ j }is kept small. This is the case if W_{ j }satisfies the following conditions: (a) with a sufficiently high probability X(t_{j1}) ∈ W_{ j }, (b) the probability of leaving W_{ j }within the time interval [t_{j1}, t_{ j }) is sufficiently small.
Requirement (a) implies that W_{ j } contains a significant part of the support of p̂^{(t_{j−1})}, that is, a subset S_{ j } ⊂ S such that ∑_{x∉S_{ j }} p̂^{(t_{j−1})}(x) is small. In the first step we set S_{1} = {y}. For j > 1, the window W_{ j } is constructed after p̂^{(t_{j−1})} is calculated. We fix a small δ > 0 and choose S_{ j } = {x | p̂^{(t_{j−1})}(x) > δ}. If the support of p̂^{(t_{j−1})} is large and the probability mass is distributed almost uniformly, it may be necessary to construct S_{ j } such that ∑_{x∉S_{ j }} p̂^{(t_{j−1})}(x) is smaller than some fixed threshold. However, our experimental results show that using a fixed threshold δ yields good results, which makes the additional effort of sorting the support of p̂^{(t_{j−1})} unnecessary in practice. Note that requirement (a) implies that W_{ j } and W_{j−1} intersect. Thus, in each step we "slide" the window in the direction in which the probability mass is moving.
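The selection of S_j takes only a few lines; the hypothetical snippet below (our illustration) assumes the current approximation is stored as a dictionary mapping states to probabilities.

```python
# Keep only the states whose current probability exceeds the threshold delta
# (requirement (a)); everything at or below delta is dropped from S_j.
def significant_states(p, delta):
    return {x for x, prob in p.items() if prob > delta}

p = {(0, 0): 0.6, (1, 0): 0.3, (0, 1): 0.0999, (1, 1): 0.0001}
S_j = significant_states(p, delta=1e-3)  # drops only the state with mass 1e-4
```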
Requirement (b) means that, for each species d, we need a lower bound b_{ d }^{−} and an upper bound b_{ d }^{+} such that the probability that X_{ d }(τ) leaves the interval [b_{ d }^{−}, b_{ d }^{+}] for some τ ∈ [t_{j−1}, t_{ j }) is small, where X_{ d }(τ) is the dth component of the random vector X(τ).
For a fixed state z ∈ A_{ j } we exploit the regular structure of the Markov chain for the computation of b^{+} and b^{−}. We start in state z and update the state variables one by one. We assume that for a small time interval of length Δ the rate of reaction type R_{ m } remains constant, i.e., equal to α_{ m }(z). Then the number of R_{ m } transitions within the next Δ time units is Poisson distributed with parameter α_{ m }(z)Δ. We can approximate this number by the expectation α_{ m }(z)Δ of the Poisson distribution. Note that the above assumption is warranted since, in the case of coupled chemical reactions, the propensities α_{ m }(x) are linear or at most quadratic in x if only elementary reactions are considered, i.e., reactions that correspond to a single mechanistic step and therefore have at most two reactants. In general, reactions may have intermediate products and/or parallel reaction pathways; they can, however, always be decomposed into elementary reactions. As we are interested in an upper and a lower bound, we additionally consider the standard deviation √(α_{ m }(z)Δ) of the Poisson distribution. We assume that, if the current state is x, within Δ units of time

at least max(0, α_{ m }(x)Δ − √(α_{ m }(x)Δ)) and at most α_{ m }(x)Δ + √(α_{ m }(x)Δ) reactions of type R_{ m } occur. Fixing for each reaction type one of these two extreme counts, κ_{ m }, the iteration

x^{(l+1)} = x^{(l)} + ∑_{m=1}^{k} κ_{ m }v^{(m)}     (11)

yields continuous deterministic worst-case approximations of X(t + Δ), X(t + 2Δ), ... under the condition that X(t) = z. For functions α_{ m } that grow extremely fast in the state variables, the iteration may yield bad approximations because it is based on the assumption that the propensities are constant during [0, Δ).
In the context of biochemical reaction networks, α_{ m } is at most quadratic and therefore the approximation given by Eq. (11) yields adequate results. For a given system, we perform the approximation in Eq. (11) for all possible combinations of upper and lower reaction counts. It is possible to skip combinations that simultaneously favor reaction types leading in opposite directions in the state space, because they will not give a worst-case bound. Consider, for instance, Example (1) with c_{1} = c_{2} = c_{3} = 1, z = (10, 10, 100, 0), and Δ = 0.01. If we assume that more reactions of type R_{2} and R_{3} happen (than on average) and fewer of R_{1}, we get κ_{1} = max(0, α_{1}(z)Δ − √(α_{1}(z)Δ)) = 0, κ_{2} = α_{2}(z)Δ + √(α_{2}(z)Δ) = 2, and κ_{3} = α_{3}(z)Δ + √(α_{3}(z)Δ) = 2. This means that the number of complex molecules decreases and x^{(1)} = (14, 12, 96, 2). We can omit combinations that contain both the upper count for R_{1} and the upper count for R_{2}. As R_{1} reverses R_{2} and vice versa, these combinations will not yield good approximations of the extreme values of the state variables. In general, the dependency graph of the reaction network may be helpful to identify those combinations that maximize a certain population (see also the section on experimental results).
In the sequel, each chosen combination is referred to as a branch because, for fixed z, the corresponding iterations lead to different successors x^{(l+1)}. Note that for a particular branch, for each m ∈ {1, ..., k} we fix either κ_{ m } = α_{ m }(x^{(l)})Δ + √(α_{ m }(x^{(l)})Δ) or κ_{ m } = max(0, α_{ m }(x^{(l)})Δ − √(α_{ m }(x^{(l)})Δ)) for all l. The iteration ends after ⌈τ_{ j }/Δ⌉ steps (where the length of the last time step is the remainder instead of Δ), and the extreme values b_{ d }^{−} and b_{ d }^{+} are given by the minimal and maximal values of the state variables during the iteration. More specifically, b_{ d }^{−} = min_{ l } x_{ d }^{(l)} and b_{ d }^{+} = max_{ l } x_{ d }^{(l)}, where 1 ≤ d ≤ n, x^{(l)} = (x_{1}^{(l)}, ..., x_{ n }^{(l)}), and z = x^{(0)}.
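One step of such a branch can be sketched in Python. The snippet below is our own illustration: it encodes the enzyme network of Example 1 with c_1 = c_2 = c_3 = 1 and reproduces the branch "fewer R_1, more R_2 and R_3" from the worked example above.

```python
import math

# Illustrative encoding of the enzyme network of Example 1 with
# c1 = c2 = c3 = 1 and state x = (E, S, C, P).
changes = [(-1, -1, 1, 0), (1, 1, -1, 0), (1, 0, -1, 1)]

def propensities(x):
    return [x[0] * x[1], x[2], x[2]]

def step(x, dt, signs):
    """One step of the worst-case iteration: signs[m] = +1 uses the upper
    count alpha*dt + sqrt(alpha*dt) for reaction m, signs[m] = -1 the
    lower count max(0, alpha*dt - sqrt(alpha*dt))."""
    kappas = []
    for alpha, s in zip(propensities(x), signs):
        mean = alpha * dt
        kappas.append(max(0.0, mean + s * math.sqrt(mean)))
    return tuple(xd + sum(k * v[d] for k, v in zip(kappas, changes))
                 for d, xd in enumerate(x))

# Branch "fewer R1, more R2 and R3" with z = (10, 10, 100, 0), dt = 0.01:
x1 = step((10, 10, 100, 0), 0.01, signs=(-1, +1, +1))  # -> (14, 12, 96, 2)
```

All three propensities equal 100 in z, so each mean count is 1 and each standard deviation is 1, giving κ_1 = 0 and κ_2 = κ_3 = 2, as in the text.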
The method ContDetApprox (left) and the main procedure sWindow (right)
ALGORITHM ContDetApprox(z, τ, b^{+}, b^{−})
Input: initial state z, length τ of time interval, old boundaries b^{+}, b^{−}
Output: new boundaries b^{+}, b^{−}
1   for each branch i ∈ {1, ..., 2n} do
2      x^{⟨i⟩} := z; //z is initial state of all branches
3   end
4   time := 0;
5   Δ := step_size; //choose length of time steps
6   while time < τ do
7      for each branch i ∈ {1, ..., 2n} do
          //compare current state variables with boundaries
8         for d := 1 to n do
9            if x_{ d }^{⟨i⟩} > b_{ d }^{+} then
10              b_{ d }^{+} := x_{ d }^{⟨i⟩}; //adjust upper bound
11           end
12           if x_{ d }^{⟨i⟩} < b_{ d }^{−} then
13              b_{ d }^{−} := x_{ d }^{⟨i⟩}; //adjust lower bound
14           end
15        end
16        for m := 1 to k do
             //choose more/fewer transitions of type R_{ m } depending on branch i
17           κ_{ m } := choose(α_{ m }(x^{⟨i⟩})·Δ, i);
18        end
19        x^{⟨i⟩} := x^{⟨i⟩} + ∑_{ m }κ_{ m }v^{(m)}; //update state (cf. Eq. (11))
20     end
21     time := time + Δ;
22  end

ALGORITHM sWindow(y, t, ϵ, δ)
Input: initial state y, times t = (t_{0}, ..., t_{ r }), error ϵ > 0, threshold δ > 0
Output: probability vectors p_{0}, ..., p_{ r }
1   p_{0} := (1_{ y })^{ T }; //start with probability one in y
2   for j := 1 to r do
3      τ_{ j } := t_{ j } − t_{j−1}; //length of next time step
       //define S_{ j } for construction of W_{ j }
4      S_{ j } := {x | p_{j−1}(x) > δ};
5      numStates := min(10, size(S_{ j }));
       //choose numStates random states from S_{ j }
6      A_{ j } := rand(S_{ j }, numStates);
7      b^{+} := −∞; b^{−} := +∞; //initial boundaries
8      for each z in A_{ j } do
          //run continuous determ. approximation on z and update boundaries
9         (b^{+}, b^{−}) := ContDetApprox(z, τ_{ j }, b^{+}, b^{−});
10     end
11     Q_{ j } := generator(S_{ j }, b^{+}, b^{−}); //construct Q_{ j }
       //construct diagonal matrix for W_{ j } (cf. Eq. (7))
12     D_{ j } := diag(S_{ j }, b^{+}, b^{−});
13     p_{ j } := solve(p_{j−1}, D_{ j }, Q_{ j }, τ_{ j }, ϵ); //solve Eq. (7)
14  end
Regarding the choice of the time step Δ, we suggest choosing Δ dynamically such that for each m the interval in Eq. (10) covers at least, say, 80% of the probability mass of the corresponding Poisson distribution. Clearly, the accuracy of the method increases if the intervals cover more probability mass. For our experimental results, we chose Δ such that λ_{ x }·Δ = 1, which yielded sufficiently accurate results.
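The coverage of such an interval can be checked numerically. The sketch below is our own illustration under the reading that the interval spans mean ∓ one standard deviation; it confirms that the choice λ_x·Δ = 1 already captures more than 80% of the Poisson mass.

```python
import math

def poisson_coverage(mean):
    """Probability mass of a Poisson(mean) variable inside the interval
    [max(0, mean - sqrt(mean)), mean + sqrt(mean)]."""
    lo = max(0, math.ceil(mean - math.sqrt(mean)))
    hi = math.floor(mean + math.sqrt(mean))
    return sum(math.exp(-mean) * mean**i / math.factorial(i)
               for i in range(lo, hi + 1))

cov = poisson_coverage(1.0)  # corresponds to the choice lambda_x * dt = 1
```

For mean 1 the interval is [0, 2] and the covered mass is e^{−1}(1 + 1 + 1/2) ≈ 0.92, above the suggested 80% level.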
Sliding window algorithm
In the right column of Table 1 we describe the main procedure, called sWindow, in pseudocode. The for loop in lines 2–14 implements the approximations of p^{(t_1)}, ..., p^{(t_r)} by successively computing vector p_{ j } from p_{j−1}. Input ϵ is a bound for the total approximation error caused by the solutions of the ODEs in line 13. The array t contains the time instants t_{0}, ..., t_{ r }. For our experimental results, we compare two different time stepping mechanisms that are explained below. The parameter δ is the threshold that is used to remove those states in the support of p_{j−1} having a smaller probability than δ. We define S_{ j } as the set of all states x for which p_{j−1}(x) is greater than δ in line 4. Note that for j = 1 the set S_{1} contains only the initial state y. In line 6, rand(S_{ j }, numStates) returns a set of numStates random elements from S_{ j } that are used to construct the vectors b^{+} and b^{−} in lines 7–10. The rate entries of all states in the window W_{ j } (cf. Eq. (9)) are calculated in line 11, and all remaining entries in Q_{ j } are set to zero. A solution method is invoked in line 13 to calculate p_{ j } from p_{j−1}. This can be, for instance, the uniformization method, an ODE solver, or a method based on an approximation in the Krylov subspace. We pass a time step of length τ_{ j } and the corresponding fraction of the approximation error.
We can calculate the overall loss of probability mass from the output p_{ r }by η_{ r }= 1  ∑_{ x }p_{ r }(x). This value includes both approximation errors of the algorithm: (1) the probability of leaving window W_{ j }during the time interval [t_{j1}, t_{ j }) and (2) the probability that is lost due to the sliding of the window, obtained by the multiplication with D_{ j }(cf. Eq. (7)).
Note that it is always possible to repeat a computation step in order to increase the obtained accuracy. More precisely, we can determine a larger window by increasing the confidence of the interval in Eq. (10), i.e. by choosing the time step Δ such that for each m the maximal/minimal number of transitions of type R_{ m }lies in the interval with a certain confidence (e.g. with a confidence of 80%). For our experimental results, however, we did not repeat any computation step since we always obtained sufficiently accurate results.
Time intervals
For our experimental results, we compare two different time stepping mechanisms for Algorithm sWindow (see Table 1, right). We either choose equidistant time steps τ_{ j }= τ, for all j, or we determine τ_{ j }during the construction of the window W_{ j }(adaptive time steps). The latter method yields faster running times. Depending on the dynamics of the system, long time steps may cause three problems: (1) the window is large and the size of the matrix Q_{ j }may exceed the working memory capacity, (2) the dynamics of the system may differ considerably during a long time step and Q_{ j }has bad mathematical properties, (3) the window may contain states that are only significant during a much shorter time interval. If, on the other hand, the time steps are too small then many iterations of the main loop have to be carried out until the algorithm terminates. The windows overlap nearly completely, and even though each step may require little time, the whole procedure can be computationally expensive. One possibility is to fix the size of the windows and choose the time steps accordingly. But this does not necessarily result in short running times of the algorithm either. The reason is that the time complexity of the solution methods does not depend only on the size of the matrix representing the window but also on its mathematical properties.
The problems mentioned above can be circumvented by calculating τ_{1}, ..., τ_{ r } during the construction of the window W_{ j } as follows. We compute the number of states that are significant at time t_{j−1} and pass it to ContDetApprox in line 9 (see Table 1). We run the while loop in Algorithm ContDetApprox (see Table 1, left) until (1) the window has reached at least a certain size and (2) the number of states in the window exceeds twice the number of states that are significant at time t_{j−1}. The first condition ensures that the window exceeds a certain minimum size of, say, 500 states. The second condition ensures that the new window is just big enough to move the probability mass to a region outside of S_{ j }. More precisely, it ensures that the sets S_{1}, S_{2}, ... are not overlapping and that subsequent sets are located next to each other (as illustrated in Figure 1). This also prevents the resulting window from containing many states that are only significant during a much shorter time interval.
On termination of the while loop, we pass the value of the variable time from ContDetApprox to sWindow and set τ_{ j } to this value. In sWindow we additionally maintain a variable representing the time elapsed so far, and the for loop in line 2 is replaced by a while loop that stops when the elapsed time exceeds the time horizon t_{ r }. Later, we present experimental results of the sliding window method where we use adaptive time steps in the way described above.
Numerical solution methods
In this section, we present the theoretical basis of two numerical solution algorithms, namely the uniformization method and the Krylov subspace method. We approximate a global solution of the CME (cf. Eq. (5)), as well as the local solutions that are required in line 13 of Algorithm sWindow (see also Eq. (7)).
Uniformization
The uniformization method goes back to Jensen [34] and is also referred to as Jensen's method, randomization, or discretetime conversion. In the performance analysis of computer systems, this method is popular and often preferred over other methods, such as Krylov subspace methods and numerical integration methods [19, 39]. Recently, uniformization has also been used for the solution of the CME [40–42].
Global uniformization
Let (X(t), t ∈ ℝ_{≥0}) be a CTMC with finite state space S. The basic idea of uniformization is to define a discretetime Markov chain (DTMC) and a Poisson process. The DTMC is stochastically identical to X, meaning that it has the same transient probability distribution if the number of steps within [0, t) is given, and the Poisson process keeps track of the time as explained below.
The transient distribution is then obtained as p^{(t)} = ∑_{i=0}^{∞} e^{−λt}(λt)^{ i }/i! · w^{(i)}, where λ ≥ max_{x∈S}λ_{ x } is the uniformization rate, P = I + Q/λ is the transition probability matrix of the DTMC, and w^{(i)} = w^{(i−1)}P with w^{(0)} = p^{(0)}. The infinite sum can be truncated with a prescribed accuracy, which works well as long as the required number of summands is not extremely large.
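For a small finite chain, global uniformization is straightforward to implement. The pure-Python sketch below is our own illustration; it uses a simple tail-mass stopping rule instead of precomputed truncation points, and the variable and function names are ours.

```python
import math

def uniformization(Q, p0, t, tol=1e-10):
    """Approximate p^(t) = p^(0) e^(Qt) for a finite CTMC: sum the
    Poisson-weighted iterates w^(i) = w^(i-1) P of the uniformized DTMC
    P = I + Q/lambda until the neglected Poisson mass is below tol."""
    n = len(Q)
    lam = 1.001 * max(-Q[i][i] for i in range(n))  # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    w = list(p0)                  # w^(0) = p^(0)
    weight = math.exp(-lam * t)   # Poisson(lam*t) pmf at i = 0
    p = [weight * wi for wi in w]
    acc, i = weight, 0
    while 1.0 - acc > tol:        # remaining Poisson mass still above tol
        i += 1
        w = [sum(w[k] * P[k][j] for k in range(n)) for j in range(n)]
        weight *= lam * t / i
        acc += weight
        p = [pj + weight * wj for pj, wj in zip(p, w)]
    return p

# Two-state chain with rates a = b = 1, started in state 0:
# the exact solution is p_0(t) = 0.5 + 0.5 * exp(-2t).
p = uniformization([[-1.0, 1.0], [1.0, -1.0]], [1.0, 0.0], t=0.7)
```

Since each w^{(i)} is a probability vector, the truncation error is bounded by the neglected Poisson mass, i.e., by tol.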
Time complexity and stiffness
As λt grows, the Poisson distribution flattens, the left truncation point L in Eq. (16) grows linearly in λt, and the number of significant Poisson probability terms is O(√(λt)) [44]. If the vectors w^{(L)}, w^{(L+1)}, ..., w^{(U)} are computed using U matrix-vector multiplications (cf. Eq. (14)), then the complexity of the uniformization procedure is O(νλt), where ν is the number of nonzero elements in P.
All analysis methods (simulation-based or not) encounter serious difficulties if the underlying model is stiff. In a stiff model the components of the underlying system act on time scales that differ by several orders of magnitude; such models arise in various application domains, especially in systems biology. For a stiff model, the uniformization rate λ ≥ max_{x∈S}λ_{ x } will correspond to the fastest time scale. By contrast, a significant change of the slow components can be observed only during a period of time that corresponds to the slowest time scale. The uniformization method is then extremely time consuming because of a very large stiffness index t·max_{x∈S}λ_{ x } [45].
In the sequel, we show how uniformization can be applied in a local fashion such that stiffness has a less negative effect on the performance of the method. In other words, the sliding window technique enables uniformization to perform well even for stiff systems.
Local uniformization
where L and U are the truncation points depending on λ_{ j }τ_{ j }, and P_{ j } = I + Q_{ j }/λ_{ j } is the transition probability matrix of the local DTMC. Moreover, as λ_{ j } depends only on W_{ j }, the uniformization rate is usually smaller than the global one, sup_{x∈S}λ_{ x }, which means that fewer terms are required in Eq. (17) than in Eq. (16).
The computational complexity of the whole procedure is O(∑_{ j }ν_{ j }λ_{ j }τ_{ j }), and thus we save computation time, compared to global uniformization, if ∑_{ j }ν_{ j }λ_{ j }τ_{ j } < νλt, where λ = sup_{x∈S}λ_{ x } and ν_{ j } is the number of nonzero elements in P_{ j }.
Krylov subspace
Krylov subspace methods are widely used for large eigenvalue problems, for solving linear equation systems, and also for approximating the product of a matrix exponential and a vector [46, 47]. We are interested in the latter approximation and show how it can be used to solve the CME, either in a global fashion or in combination with the sliding window method. Recently, Krylov subspace methods have been applied to the CME by Sidje et al. [21].
Global Krylov subspace method
where e_{ k }is a column vector of appropriate size whose kth component is one and all other components are zero. Intuitively, Eq. (18)(b) states that H_{ m }is the matrix projection of A onto K_{ m }w.r.t. the basis defined by V_{ m }. An approximation of e^{ A }v in K_{ m }expressed using V_{ m }is e^{ A }v≈ V_{ m }y, where y is a vector of size m.
where ρ = ‖A‖_{2} is the spectral norm of A. The approximation in Eq. (19) still involves the computation of the matrix exponential of H_{ m }, but as H_{ m } is of small dimension and has a particular structure (upper Hessenberg), this requires a smaller computational effort. For the matrix exponential of small matrices, methods such as Schur decomposition and Padé approximants may be applied [31].
Assume now that the time instant t is arbitrary, i.e., we want to approximate e^{ tA }v for some t > 0. In order to control the approximation error, we calculate e^{ tA }v stepwise by exploiting that e^{(τ_1 + τ_2)A}v = e^{ τ_1 A }(e^{ τ_2 A }v) for τ_{1}, τ_{2} ≥ 0. For a step size τ, we approximate e^{ τA }v by ‖v‖_{2}V_{ m }e^{ τH_{ m }}e_{1}, because the Krylov subspaces associated with A and τA are identical and the projection of τA onto the subspace is τH_{ m }. It follows from Eq. (20) that we have a small error bound if ‖τA‖_{2} is small.
where initially u_{0} = v. The matrices V_{ m }^{(i)} and H_{ m }^{(i)} result from the ith execution of Arnoldi's algorithm for the construction of an orthonormal basis of the subspace Span{u_{i−1}, Au_{i−1}, ..., A^{m−1}u_{i−1}}. When the elapsed time equals t, we obtain an approximation of e^{ At }v.
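The Arnoldi projection and the approximation e^{ A }v ≈ ‖v‖_{2}V_{ m }e^{ H_{ m }}e_{1} can be sketched in pure Python. The illustration below is not Expokit: the small exponential is computed with a naive Taylor series rather than a Padé approximant, and no step-size control is performed; it is only meant to show the structure of the method.

```python
import math

def arnoldi(matvec, v, m):
    """Arnoldi process: orthonormal basis V of span{v, Av, ..., A^(m-1)v}
    and the projected (m+1) x m upper Hessenberg matrix H."""
    beta = math.sqrt(sum(x * x for x in v))
    V = [[x / beta for x in v]]
    H = [[0.0] * m for _ in range(m + 1)]
    for j in range(m):
        w = matvec(V[j])
        for i in range(j + 1):
            H[i][j] = sum(a * b for a, b in zip(w, V[i]))
            w = [a - H[i][j] * b for a, b in zip(w, V[i])]
        H[j + 1][j] = math.sqrt(sum(x * x for x in w))
        if H[j + 1][j] < 1e-12:  # happy breakdown: invariant subspace found
            break
        V.append([x / H[j + 1][j] for x in w])
    return V, H, beta

def expm_small(H, m, terms=30):
    """Naive Taylor-series exponential of the top-left m x m block of H."""
    E = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[sum(T[i][l] * H[l][j] for l in range(m)) / k for j in range(m)]
             for i in range(m)]
        E = [[E[i][j] + T[i][j] for j in range(m)] for i in range(m)]
    return E

def krylov_expv(matvec, v, m):
    """Approximate exp(A) v by beta * V_m * exp(H_m) * e_1 (cf. Eq. (19))."""
    V, H, beta = arnoldi(matvec, v, m)
    m_eff = min(len(V), m)
    E = expm_small(H, m_eff)
    y = [beta * E[i][0] for i in range(m_eff)]  # beta * exp(H_m) e_1
    return [sum(y[i] * V[i][j] for i in range(m_eff))
            for j in range(len(v))]

# 2 x 2 example: a symmetric generator with known exponential.
A = [[-1.0, 1.0], [1.0, -1.0]]
res = krylov_expv(lambda x: [sum(a * b for a, b in zip(row, x)) for row in A],
                  [1.0, 0.0], m=2)
```

For a 2 × 2 matrix and m = 2 the Krylov subspace is the full space, so the result agrees with the exact e^{ A }v.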
For the step size of the Krylov subspace method, a popular heuristic is to choose τ_{i+1} depending on an estimate of the error ϵ_{ i } of the previous step. Let tol > 0 be an a priori specified tolerance. A common scheme consists of three steps [36]: (1) define τ_{ i } based on τ_{i−1} and the ratio tol/ϵ_{i−1}, (2) compute u_{ i } and the error estimate ϵ_{ i }, (3) if ϵ_{ i } > 1.2 tol, reject u_{ i }, replace ϵ_{i−1} with ϵ_{ i }, and go to step (1). For our experimental results, we used the Expokit software [48], where the small matrix exponential e^{ τ_{ i }H_{ m }^{(i)}} is computed via the irreducible Padé approximation [49].
Local Krylov subspace method
Assume now that we use the Krylov subspace method in line 13 of Algorithm sWindow (see Table 1, right), to approximate the solution of Eq. (7). By letting v be the initial vector (p_{j−1}D_{ j })^{ T }, A = Q_{ j }^{ T }, and t = τ_{ j }, we can apply the same procedure as in the global case. Note that this yields a nested iteration because the time steps τ_{ j } are usually much bigger than the time steps of the Krylov subspace method. For the Krylov subspace method, using the matrix Q_{ j } instead of Q offers important advantages. The Arnoldi process is faster, as Q_{ j } usually contains fewer nonzero entries than Q. Moreover, the sliding window method is likely to provide matrices with a smaller spectral norm ‖Q_{ j }‖_{2}. This allows for bigger time steps during the Krylov approximation, as can be seen in our experimental results.
Experimental results
As explained in detail below, we also implemented the method proposed by Burrage et al. [21] in order to compare it to our algorithm in terms of running time and accuracy. Moreover, for finite examples we compare our method to a global analysis, i.e., one where in each step the entire state space is considered. We do not compare our method to Gillespie simulation or to approximation methods based on the Fokker-Planck equation. The former provides only estimates of the probability distribution and becomes infeasible if small probabilities are estimated [52]. The latter methods do not take into account the discreteness of the molecule numbers and are known to provide bad approximations for small populations such as those considered here [53].
Parameters
We fixed the input ϵ = 10^{-8} of Algorithm sWindow for all experiments (see Table 1, right). We chose the input δ in a dynamic fashion to ensure that in the j-th step we do not lose more probability than 10^{-5}·τ_j/(t_r − t_0) by restricting to significant states; that is, we decrease δ until, after line 4 of Algorithm sWindow, the set S_j contains at most 10^{-5}·τ_j/(t_r − t_0) less probability than the former set S_{j-1}. In Figure 2, we list the average value that we used for δ.
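The dynamic choice of δ can be sketched as follows. This is an illustrative reading of the rule above, not the code used for the experiments: the dictionary representation of the distribution, the function name, and the factor by which δ is decreased are all our assumptions.

```python
def shrink_delta(p, delta, max_loss):
    """Decrease the significance threshold delta until truncating all states
    with probability below delta discards at most max_loss probability mass.

    p        -- dict mapping state -> probability (hypothetical representation)
    delta    -- initial threshold
    max_loss -- allowed probability loss for this step, e.g.
                1e-5 * tau_j / (t_r - t_0)
    """
    while sum(q for q in p.values() if q < delta) > max_loss:
        delta /= 10.0   # illustrative shrink factor
    return delta
```

Since the discarded mass shrinks monotonically as δ decreases, the loop always terminates (in the worst case no state is truncated at all).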
In the sequel, we give details about the parameters used for the results that we obtained for Example 1 and Example 2. For the remaining two examples, we list the corresponding chemical reactions and the parameters that we chose for the results in Figure 2.
Enzyme example
We tried different parameter sets, referred to as pset a)-c), for Example 1 (see Figure 2). For parameter combination a) we have c_1 = c_2 = 1, c_3 = 0.1 and start with 1000 enzymes and 100 substrates. In this case the number of reachable states is 5151. For parameter sets b) and c) we have c_1 = c_2 = c_3 = 1 and start with 100 enzymes and 1000 substrates, and with 500 enzymes and 500 substrates, which yields 96051 and 125751 reachable states, respectively. In each case, we choose the time horizon according to the time until most of the probability mass is concentrated in the state in which all substrate molecules are transformed into products. For the time steps τ_j in Algorithm sWindow, we apply the condition described above.
We consider four branches for the iteration in Eq. (11) in order to determine upper and lower bounds on the state variables. (1) To obtain an estimate for the maximal number of complex molecules (and a minimum for the enzyme population), we enforce more reactions of type R_1 than on average, and fewer of types R_2 and R_3. (2) By considering fewer reactions of type R_1 and more of types R_2 and R_3, the complex population becomes minimal (and the enzyme population maximal). (3) An estimate for the minimal number of type P molecules (and the maximal number of type S molecules) is obtained by enforcing more reactions of type R_2 and fewer of types R_1 and R_3. (4) Finally, more reactions of types R_1 and R_3 and fewer of type R_2 give a maximal increase of the number of product molecules (and minimize the number of substrate molecules).
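One step of such a biased branch can be sketched for the enzyme system as follows. The bias λ ± √λ on the expected reaction counts is purely illustrative and stands in for the paper's κ values, which are not reproduced here; the function names are our own.

```python
import math

# Enzyme kinetics, state = (E, S, C, P):
#   R1: E + S -> C,  R2: C -> E + S,  R3: C -> E + P
CHANGES = [(-1, -1, 1, 0), (1, 1, -1, 0), (1, 0, -1, 1)]

def propensities(x, c=(1.0, 1.0, 0.1)):
    """Mass-action propensities; defaults are parameter set a) from the text."""
    E, S, C, P = x
    return (c[0] * E * S, c[1] * C, c[2] * C)

def branch_step(x, h, bias):
    """One step of a biased deterministic iteration over a small interval h.

    For each reaction the expected count is lam = alpha(x) * h; a branch
    pushes it up (bias +1) or down (bias -1) by sqrt(lam) -- an illustrative
    deviation term, not the paper's exact kappa.
    """
    lam = [a * h for a in propensities(x)]
    kappa = [max(0.0, l + b * math.sqrt(l)) for l, b in zip(lam, bias)]
    return tuple(xi + sum(k * d[i] for k, d in zip(kappa, CHANGES))
                 for i, xi in enumerate(x))
```

Branch (1) above, which maximizes the complex population, would use bias (+1, -1, -1); iterating each branch over the next time interval and taking component-wise minima and maxima yields the bounds that define the window.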
Note that the parallelogram in Figure 3 was induced by the conservation laws of the system. In general, conservation laws should be taken into account since otherwise the window may be inconsistent with the conservation laws, i.e. it may contain states that are not reachable.
Gene expression example
In Figure 2 we present results for Example 2. The difference between parameter set a) and parameter set b), referred to as pset a) and pset b), is that for a) we start with the empty system and for b) we start with 100 mRNA molecules and 1000 proteins. For both variants, we choose rate constants c_1 = 0.5, c_2 = 0.0058, c_3 = 0.0029, c_4 = 0.0001. The time steps that we use are determined by the condition in the section on time intervals. Note that we cannot solve this example using a global method because the number of reachable states is infinite. The column error contains the total error η_r (see Eq. (8)), and time in sec refers to the running time in seconds. In column perc. we list the percentage of the total running time that was spent on the window construction. The column average wind. size refers to the average number of states in the window.
For the gene expression example, we use four branches: we maximize the number of mRNA molecules by enforcing more reactions of type R_1 than on average and fewer of type R_3, and minimize it with the opposite choice. Reactions R_2 and R_4 are irrelevant for this species. We maximize the protein population by enforcing more reactions of types R_1 and R_2 and fewer of type R_4, and minimize it with the opposite choice.
Goutsias' model
R_1: RNA → RNA + M
R_2: M → ∅
R_3: DNA.D → RNA + DNA.D
R_4: RNA → ∅
R_5: DNA + D → DNA.D
R_6: DNA.D → DNA + D
R_7: DNA.D + D → DNA.2D
R_8: DNA.2D → DNA.D + D
R_9: M + M → D
R_10: D → M + M
We used the same kinetic constants as Goutsias [50] and Sidje et al. [21], as well as the same initial state.
Bistable toggle switch
The toggle switch involves two chemical species A and B and four reactions. Let x = (x_1, x_2) ∈ ℕ². The reactions are ∅ → A, A → ∅, ∅ → B, B → ∅, and their propensity functions α_1, ..., α_4 are given by α_1(x) = c_1/(c_2 + x_2^β), α_2(x) = c_3·x_1, α_3(x) = c_4/(c_5 + x_1^γ), α_4(x) = c_6·x_2. Note that in this example the propensity functions are not of the form described in Eq. (1). For our experimental results, we chose the same parameters as Sjöberg et al. [23], that is, c_1 = c_4 = 3·10^3, c_2 = c_5 = 1.1·10^4, c_3 = c_6 = 0.001, and β = γ = 2. The initial distribution is a Gaussian distribution N(μ, σ²) with mean μ = (133, 133)^T. We consider the obvious four branches, each of which is intended to minimize or maximize one of the two components. The branch minimizing A, for example, will have fewer reactions of the first type and more of the second.
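The toggle-switch propensities, which fall outside the mass-action form of Eq. (1) because of the Hill-type repression terms, can be written down directly; the function name and the packing of the six rate constants into one tuple are our own conventions.

```python
def toggle_propensities(x, c=(3e3, 1.1e4, 1e-3, 3e3, 1.1e4, 1e-3),
                        beta=2, gamma=2):
    """Propensities of the bistable toggle switch for state x = (x1, x2).

    Defaults are the parameters from the text: c1 = c4 = 3*10^3,
    c2 = c5 = 1.1*10^4, c3 = c6 = 0.001, beta = gamma = 2.
    Production of each species is repressed by the other.
    """
    x1, x2 = x
    return (c[0] / (c[1] + x2 ** beta),   # alpha_1: 0 -> A, repressed by B
            c[2] * x1,                    # alpha_2: A -> 0
            c[3] / (c[4] + x1 ** gamma),  # alpha_3: 0 -> B, repressed by A
            c[5] * x2)                    # alpha_4: B -> 0
```

At the symmetric initial mean (133, 133) both production propensities are equal, which is exactly why the probability mass later splits toward the two stable modes and why window splitting, mentioned in the conclusions, would be useful here.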
Discussion
In this section, we discuss our algorithm w.r.t. accuracy and running time where we consider different solution methods and different time step mechanisms. Moreover, we compare our method with a global analysis.
Accuracy
The column labeled error in Figure 2 shows the total error η_r (cf. Eq. (8)) of the sliding window method plus the uniformization error (which is bounded by ϵ = 10^{-8}). Using the Krylov subspace method instead yields the same accuracy because for both uniformization and the Krylov subspace method, the error bound is specified a priori. For all examples, the total error does not exceed 10^{-4}, which means that not more than 0.01 percent of the probability mass is lost during the whole procedure. It would, of course, be possible to add an accuracy check in the while loop of Algorithm sWindow, expand the current window if necessary, and recalculate. But as the method consistently returns a small error, this has been omitted.
We also considered relative errors, that is, the approximation error relative to the exact probability p_t(x), for states x ∈ W_j with p_t(x) > 10^{-5}. We approximate the exact value p_t(x) by solving Eq. (13) via global uniformization, where we use truncation error ϵ = 10^{-8}. Since this is only possible if the state space is finite, we compared relative errors only for the enzyme example. Our calculations show that the relative errors are always smaller than 10^{-4}.
In order to support our considerations in the window construction section, we carried out experiments in which we exclusively chose the average in line 17 of Algorithm ContDetApprox (see Table 1, left). More precisely, for the construction of the window we do not consider the deviations in the numbers of reactions but only the average number. In this case, we called the method ContDetApprox with input 2τ to make sure that on average the probability mass moves to the center of the window and not too close to the borders. For this configuration, the total error is several orders of magnitude higher, e.g., for parameter set a) of the enzyme example the total error is 0.0224.
Finally, we tested the size of the windows constructed in lines 7-10 of Algorithm sWindow. We changed Algorithm sWindow by decreasing the size of the window by 5% between lines 10 and 11. In this case, the total error η_r increases; for instance, η_r = 0.35% for parameter set a) of the enzyme example. These results substantiate that the size and position of the sliding window are such that the approximation error is small, whereas significantly smaller windows result in significantly higher approximation errors.
Running time
For the time complexity analysis, we concentrate on three main issues.

Sliding window method vs. global analysis: We compare the sliding window method with a global solution in one step, and with another window method, where the size of the window is doubled if necessary.

Solution method (uniformization vs. Krylov subspace method): In Algorithm sWindow, we vary the solution method by exchanging uniformization with the Krylov subspace method.

Time intervals (equidistant vs. adaptive time steps): We use different methods to determine the length τ_{ j }of the next time step in line 3 of Algorithm sWindow.
Sliding window method vs. global analysis
Observe that the total error of the global uniformization method is smaller (compare the columns labeled error), since its only error source is the truncation of the infinite sum in Eq. (13). In the column with heading #states we list the number of reachable states. During the global solution we consider all reachable states at all times, whereas in the sliding window method the average number of states considered during a time step is much smaller. This is the main reason why the sliding window method is much faster. Moreover, in the case of uniformization, the rate for global uniformization is the maximum of all exit rates, whereas for local uniformization we take the maximum over all states in the current window. The global maximum can be huge compared to the local maxima, which explains the poor performance of the global uniformization method. When the Krylov subspace method is used for a global solution, the running times are also higher than those of the local Krylov subspace method (the sliding window method combined with the Krylov subspace method). Again, the reason is that fewer states are considered during the sliding window iteration. Moreover, the matrices Q_j have numerical properties that facilitate the use of bigger, and thus fewer, time steps. The total number of iteration steps used to solve Eq. (6) with the sliding window method combined with the Krylov subspace method is indeed small compared to the global Krylov subspace method (on average around 20 times fewer steps).
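The role of the uniformization rate Λ discussed above can be made concrete with a minimal sketch of Jensen's method; the function name and the dense-matrix representation are our own simplifications (the actual implementation works on sparse generators).

```python
import math
import numpy as np

def uniformization(Q, p0, t, eps=1e-8):
    """Transient distribution p0 * e^{Qt} of a CTMC via uniformization.

    Lambda is the largest exit rate of Q. For a global analysis Q is the
    full generator and Lambda the global maximum; in the sliding window
    method Q is the window-restricted generator Q_j, whose (often much
    smaller) local maximum means far fewer summands for the same accuracy.
    """
    n = Q.shape[0]
    Lam = max(-Q[i, i] for i in range(n))
    if Lam <= 0.0:                      # absorbing-only generator
        return p0.copy()
    P = np.eye(n) + Q / Lam             # uniformized DTMC transition matrix
    term = p0.copy()
    k = 0
    w = math.exp(-Lam * t)              # Poisson weight for k = 0
    acc = w
    out = w * term
    while 1.0 - acc > eps:              # truncate when the Poisson tail < eps
        term = term @ P
        k += 1
        w *= Lam * t / k
        acc += w
        out += w * term
    return out
```

The number of terms needed grows roughly like Λt, so replacing the global maximum exit rate by the local one directly shortens the sum.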
We now focus on a comparison between our sliding window method and another local method, called the doubling window method. For the latter, we compute the probability vectors in a similar way as Sidje et al. [21]. We start with an initial window and apply the Krylov algorithm. We do not iterate over the time intervals [t_{j-1}, t_j) but use the step sizes of the Krylov subspace method. After each time step, we remove those parts of the window that will not be used for the remaining calculations. We expand the window if the error exceeds a certain threshold. Since the performance of the method depends heavily on the initial window and the directions in which a window is expanded, we initially start with the same window as the sliding window method and always expand in the directions that are most advantageous for the computation. For this we used information about the direction in which the probability mass is moving, obtained from experiments with the sliding window method. The expansion of a window is realized by doubling the length of all of its edges.
We applied the doubling window method to the enzyme example and the gene expression. For all parameter sets that we tried, the sliding window method outperforms the doubling window method w.r.t. running time (with an average speedup factor of 5). The total number of iterations of the Krylov subspace approximation is up to 13 times smaller in the case of the sliding window method compared to the doubling window method (with an average of 6.5). Note that for arbitrary systems the doubling window method cannot be applied without additional knowledge about the system, i.e., it is in general not clear, in which direction the window has to be expanded.
Our results indicate that the sliding window method achieves a significant speedup compared to a global analysis, but also compared to the doubling window method. Moreover, while a global analysis is limited to finite-state systems and the doubling window method requires additional knowledge about the system, our method can be applied to any system where the significant part of the probability mass is located at a tractable subset of states. If the dimension of the system is high, the significant part of the probability mass may be spread over intractably many states, in which case the memory requirements of our algorithm may exceed the available capacity.
Solution method
During the sliding window iteration different solution methods can be applied in line 13 of Algorithm sWindow. We concentrate on the uniformization method and on the Krylov subspace method. The running times in Figure 2 (compare the columns labeled by sWindow + uniformization with the columns labeled by sWindow + Krylov) show that the Krylov subspace method performs better (average speedup factor of around 1.5). The reason is that the Krylov subspace method is more robust to stiffness than uniformization. For nonstiff systems, uniformization is known to outperform the Krylov subspace method [19, 39]. However, since biochemical network models are typically stiff, the Krylov subspace method seems to be particularly well suited in this area.
Time intervals
In order to confirm our considerations in the section on time intervals, we also applied the sliding window method using equidistant time steps. For all examples, equidistant time steps result in longer computation times than adaptive time steps (adaptive steps give an average speedup factor of 3.5). An adaptive choice of the time steps also has the advantage that we can control the size of the windows and prevent the memory requirements of the algorithm from exceeding the available capacity.
Conclusions
The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. It replaces a global analysis of the system with a sequence of local analyses. The method applies to a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. The proposed method is compatible with all existing numerical algorithms for solving the CME, and a combination with other techniques, such as time scale separation [26, 27], is also possible.
We demonstrated the effectiveness of our method with a number of experiments. The results are promising as even systems with more than two million states with significant probability can be solved in acceptable time. Moreover, for examples that are more complex than those presented here, it is often sufficient to consider only a relatively small part of the state space. The number of molecules in the cell is always finite and, usually, a biochemical system follows only a small number of different trends. Stated differently, it is rarely the case that in biochemical systems a large number of different scenarios have significant likelihoods. Thus, we expect that the sliding window method can be successfully applied to systems with many chemical species and reactions as long as the significant part of the probability mass is always located at a tractable subset of states. In addition, further enhancements are possible, such as a splitting of the windows, which will be particularly useful for multistable systems. Moreover, we plan to automate our algorithm in a way that besides the initial conditions and the set of reactions no further input from the user is necessary, such as combinations of reactions that maximize/minimize certain populations.
Declarations
Acknowledgements
This research has been partially funded by the Swiss National Science Foundation under grant 205321111840 and by the Cluster of Excellence on Multimodal Computing and Interaction at Saarland University. A preliminary version of this paper appeared in proceedings of the International Conference on Computer Aided Verification [54].
References
 Elowitz MB, Levine MJ, Siggia ED, Swain PS: Stochastic Gene Expression in a Single Cell. Science. 2002, 297: 1183-1186. 10.1126/science.1070919
 Ozbudak EM, Thattai M, Kurtser I, Grossman AD, van Oudenaarden A: Regulation of Noise in the Expression of a Single Gene. Nature Genetics. 2002, 31: 69-73. 10.1038/ng869
 Blake WJ, Kaern M, Cantor CR, Collins JJ: Noise in Eukaryotic Gene Expression. Nature. 2003, 422: 633-637. 10.1038/nature01546
 Fedoroff N, Fontana W: Small Numbers of Big Molecules. Science. 2002, 297: 1129-1131. 10.1126/science.1075988
 Kierzek A, Zaim J, Zielenkiewicz P: The Effect of Transcription and Translation Initiation Frequencies on the Stochastic Fluctuations in Prokaryotic Gene Expression. J Biol Chem. 2001, 276 (11): 8165-8172. 10.1074/jbc.M006264200
 Paulsson J: Summing up the noise in gene networks. Nature. 2004, 427 (6973): 415-418. 10.1038/nature02257
 Swain PS, Elowitz MB, Siggia ED: Intrinsic and extrinsic contributions to stochasticity in gene expression. PNAS, USA. 2002, 99 (20): 12795-12800. 10.1073/pnas.162041399
 Turner TE, Schnell S, Burrage K: Stochastic approaches for modelling in vivo reactions. Comp Biol Chem. 2004, 28: 165-178. 10.1016/j.compbiolchem.2004.05.001
 Wilkinson DJ: Stochastic Modelling for Systems Biology. 2006, Chapman & Hall
 McAdams HH, Arkin A: It's a noisy business!. Trends in Genetics. 1999, 15 (2): 65-69. 10.1016/S0168-9525(98)01659-X
 Rao C, Wolf D, Arkin A: Control, exploitation and tolerance of intracellular noise. Nature. 2002, 420 (6912): 231-237. 10.1038/nature01258
 Thattai M, van Oudenaarden A: Intrinsic Noise in Gene Regulatory Networks. PNAS, USA. 2001, 98 (15): 8614-8619. 10.1073/pnas.151588598
 McAdams HH, Arkin A: Stochastic mechanisms in gene expression. PNAS, USA. 1997, 94: 814-819. 10.1073/pnas.94.3.814
 Arkin A, Ross J, McAdams HH: Stochastic Kinetic Analysis of Developmental Pathway Bifurcation in Phage λ-Infected Escherichia coli Cells. Genetics. 1998, 149: 1633-1648
 Srivastava R, You L, Summers J, Yin J: Stochastic vs. Deterministic Modeling of Intracellular Viral Kinetics. J Theor Biol. 2002, 218: 309-321. 10.1006/jtbi.2002.3078
 Gillespie DT: Markov Processes. 1992, Academic Press
 Gillespie DT: Exact Stochastic Simulation of Coupled Chemical Reactions. J Phys Chem. 1977, 81 (25): 2340-2361. 10.1021/j100540a008
 Gillespie DT: Approximate Accelerated Stochastic Simulation of Chemically Reacting Systems. J Chem Phys. 2001, 115 (4): 1716-1732. 10.1063/1.1378322
 Stewart WJ: Introduction to the Numerical Solution of Markov Chains. 1995, Princeton University Press
 Munsky B, Khammash M: The finite state projection algorithm for the solution of the chemical master equation. J Chem Phys. 2006, 124: 044144. 10.1063/1.2145882
 Burrage K, Hegland M, Macnamara F, Sidje B: A Krylov-based Finite State Projection algorithm for solving the chemical master equation arising in the discrete modelling of biological systems. Proc of the Markov 150th Anniversary Conference. Edited by: Langville AN, Stewart WJ. 2006, 21-38. Boson Books
 Sjöberg P: Numerical Methods for Stochastic Modeling of Genes and Proteins. PhD thesis. 2007, Uppsala University, Sweden
 Sjöberg P, Lötstedt P, Elf J: Fokker-Planck approximation of the master equation in molecular biology. Computing and Visualization in Science. 2009, 12: 37-50. 10.1007/s00791-006-0045-6
 Hegland M, Burden C, Santoso L, Macnamara S, Booth H: A solver for the stochastic master equation applied to gene regulatory networks. J Comput Appl Math. 2007, 205: 708-724. 10.1016/j.cam.2006.02.053
 Engblom S: Galerkin spectral method applied to the chemical master equation. Comm Comput Phys. 2009, 5: 871-896
 Busch H, Sandmann W, Wolf V: A Numerical Aggregation Algorithm for the Enzyme-Catalyzed Substrate Conversion. Proc of CMSB. 2006, 4210: 298-311. LNCS, Springer
 Peles S, Munsky B, Khammash M: Reduction and Solution of the chemical master equation using time scale separation and finite state projection. J Chem Phys. 2006, 125: 204104. 10.1063/1.2397685
 Cao Y, Gillespie D, Petzold L: Efficient step size selection for the tau-leaping simulation method. J Chem Phys. 2006, 124 (4): 044109. 10.1063/1.2159468
 van Kampen NG: Stochastic Processes in Physics and Chemistry. 2007, Elsevier, 3rd edition
 Çinlar E: Introduction to Stochastic Processes. 1975, Prentice-Hall
 Moler CB, Van Loan CF: Nineteen Dubious Ways to Compute the Exponential of a Matrix. SIAM Review. 1978, 20 (4): 801-836. 10.1137/1020098
 Grassmann WK (Ed): Computational Probability. 2000, Kluwer Academic Publishers
 Sidje R, Stewart W: A survey of methods for computing large sparse matrix exponentials arising in Markov chains. Computational Statistics and Data Analysis. 1996, 29: 345-368
 Jensen A: Markoff chains as an aid in the study of Markoff processes. Skandinavisk Aktuarietidskrift. 1953, 36: 87-91
 Gross D, Miller D: The randomization technique as a modeling tool and solution procedure for transient Markov processes. Operations Research. 1984, 32 (2): 926-944. 10.1287/opre.32.2.343
 Philippe B, Sidje R: Transient solutions of Markov processes by Krylov subspaces. Proc of the 2nd Int Workshop on the Numerical Solution of Markov Chains. 1995, 95-119. Kluwer Academic Publishers
 Hairer E, Norsett S, Wanner G: Solving Ordinary Differential Equations I: Nonstiff Problems. 2008, Springer
 Hairer E, Wanner G: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. 2004, Springer
 Reibman A, Trivedi K: Numerical transient analysis of Markov models. Comput Oper Res. 1988, 15: 19-36. 10.1016/0305-0548(88)90026-3
 Zhang J, Watson LT, Cao Y: A Modified Uniformization Method for the Solution of the Chemical Master Equation. Tech. Rep. TR-07-31, Computer Science, Virginia Tech. 2007
 Hellander A: Efficient computation of transient solutions of the chemical master equation based on uniformization and quasi-Monte Carlo. J Chem Phys. 2008, 128 (15): 154109. 10.1063/1.2897976
 Sidje R, Burrage K, MacNamara S: Inexact Uniformization Method for Computing Transient Distributions of Markov Chains. SIAM J Sci Comput. 2007, 29 (6): 2562-2580. 10.1137/060662629
 de Souza e Silva E, Gail R: Transient Solutions for Markov Chains. Computational Probability. Edited by: Grassmann WK. 2000, 43-79. Kluwer Academic Publishers
 Fox BL, Glynn PW: Computing Poisson probabilities. Communications of the ACM. 1988, 31 (4): 440-445. 10.1145/42404.42409
 Dunkel J, Stahl H: On the transient analysis of stiff Markov chains. Proc of the 3rd IFIP Working Conference on Dependable Computing for Critical Applications. 1993, 137-160
 Gallopoulos E, Saad Y: On the parallel solution of parabolic equations. Proc ACM SIGARCH'89. 1989, 17-28. ACM Press
 Saad Y: Analysis of some Krylov subspace approximations to the matrix exponential operator. SIAM J Numer Anal. 1992, 29: 209-228. 10.1137/0729014
 Sidje RB: EXPOKIT: Software Package for Computing Matrix Exponentials. ACM Transactions on Mathematical Software. 1998, 24: 130-156. 10.1145/285861.285868
 Baker G: The essentials of Padé approximants. 1975, Academic Press, New York
 Goutsias J: Quasiequilibrium Approximation of Fast Reaction Kinetics in Stochastic Biochemical Systems. J Chem Phys. 2005, 122 (18): 184102. 10.1063/1.1889434
 Gardner T, Cantor C, Collins J: Construction of a genetic toggle switch in Escherichia coli. Nature. 2000, 403: 339-342. 10.1038/35002131
 Didier F, Henzinger TA, Mateescu M, Wolf V: Approximation of Event Probabilities in Noisy Cellular Processes. Proc of CMSB. 2009, 5688: 173. LNBI, Springer
 Ferm L, Lötstedt P, Hellander A: A Hierarchy of Approximations of the Master Equation Scaled by a Size Parameter. Journal of Scientific Computing. 2008, 34: 127-151. 10.1007/s10915-007-9179-z
 Henzinger T, Mateescu M, Wolf V: Sliding Window Abstraction for Infinite Markov Chains. Proc CAV. 2009, 5643: 337-352. LNCS, Springer
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.