 Research article
 Open Access
Hybrid stochastic simplifications for multiscale gene networks
BMC Systems Biology volume 3, Article number: 89 (2009)
Abstract
Background
Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models.
Results
We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by a partial Kramers–Moyal expansion [1–3], which is equivalent to the application of the central limit theorem to a submodel. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples.
Conclusion
Hybrid simplifications can be used for onion-like (multilayered) approaches to multiscale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
Background
At a molecular level, the functioning of cellular processes is unavoidably stochastic. Recent advances in real-time single-cell imaging, microfluidic manipulation and synthetic biology have shown that gene expression and protein abundance in single cells are dynamic random variables subject to significant variability [4–7].
Markov process approaches to gene network dynamics, originating from the pioneering ideas of Delbrück [8], capture diverse features of the experimentally observed variability, such as transcription bursts [4, 6, 7, 9], various types of steady-state distributions of RNA and protein numbers [10, 11], and noise amplification or reduction [12, 13]. In the case of molecular switches, random transitions between two or several metastable states with different transcriptional activities limit the possibility of the corresponding genetic circuits to store cellular memory and generate variability [6].
Even the simplest stochastic models, such as the production module of a protein [11, 14], contain tens of variables and biochemical reactions, and at least the same number of parameters. Direct simulation of such networks by the Stochastic Simulation Algorithm (SSA) [15] is extremely time-consuming. Network reverse engineering, model parameter identification, and design principle studies suffer from the same drawbacks when the full models are attacked with traditional tools. Various approximation methods have been proposed, based on time discretization such as binomial, Poisson or Gaussian time leaping [16, 17], or based on fast/slow partitions [18–21].
The time complexity of exact simulation algorithms scales with the number of jumps (reactions) per unit time. Currently available hybrid algorithms treat fast reactions as continuous variables, which significantly reduces the number of jumps [18–21]. Although computationally efficient, the use of slow reactions as discrete variables, and of rapid reactions as continuous variables, does not always provide a convenient description of the hybrid stochastic process. Indeed, the discrete/continuous structure of the state space of a hybrid molecular system is better expressed in terms of species components. The activity of some promoters and the production of the corresponding proteins occur in bursts [6, 7]. In a bursting event, a steep increase of the number of transcripts is followed by a relatively smoother exponential degradation. In this example, the natural candidates for continuous variables are the concentrations of gene transcripts and products, while the states of the DNA promoters are the discrete variables. This picture also emphasizes the possibility of noise transfer from discrete, microscopic components to continuous, mesoscopic variables closer to the phenotype. In such processes, both microscopic and mesoscopic variables are noisy. At the mesoscopic scale, the stochastic fluctuations result from the intermittency of the dynamics and do not have a simple dependence on the numbers of molecules of the continuous components. Counterintuitively, the noise amplitude can increase with the number of transcripts or proteins in a burst [6].
Stochastic intermittence or bursting occurs not only in transcription, but also in many other biochemical processes, such as action potential firing [22], blips in calcium signaling [23], possibly haploinsufficiency [24], etc.
Thus, a large class of molecular stochastic models can be approximated by stochastic hybrid processes in which some state components are discrete, while others are continuous. Processes of this kind have been intensively studied in relation to applications in technology (air traffic management, flexible manufacturing systems, fault tolerant control systems, storage models, etc.) [25–28]. Several classes of stochastic hybrid processes were studied, among which the most important are the switching diffusion processes and the piecewise deterministic processes. For switching diffusions, the continuous state component has continuous trajectories governed by stochastic differential equations, punctuated by jumps controlled by the discrete component. For piecewise deterministic processes, the continuous component follows deterministic ordinary differential equations between two jumps. The jumps can be changes of the parameters in the (stochastic) differential equations (switch of regime with no discontinuity in the trajectory), or instantaneous changes of the continuous variables (discontinuities of the trajectory), or both. Piecewise deterministic processes have recently been proposed as models for various biological systems [29, 30]. Diffusion approximations (without jumps) of Markov processes justify approximate versions of the Gillespie algorithm, such as the τ-leap method [31]. A general approach allowing one to obtain and classify the various hybrid approximations that can be derived from molecular Markov jump processes is still needed, and represents a first result of this paper. Hybrid stochastic processes can be obtained by applying the law of large numbers (or the central limit theorem) to the continuous components. This will be performed here by using a partial Kramers–Moyal expansion of the chemical master equation. In many cases, this procedure provides only a moderate reduction of the number of stochastic jumps to be simulated.
Indeed, the remaining discrete components can jump rapidly, in which case the hybrid simulation performs inefficiently.
The second result of this paper allows us to cope with frequent jumps of the discrete components by using averaging. Averaging techniques are widely used in physics and chemistry to simplify models by eliminating fast microscopic variables [32–34]. In our case, the microscopic variables are the discrete state components (for instance species present in low numbers) and the resulting approximations are averaged switched diffusion or averaged (piecewise) deterministic processes. Standard averaging techniques for discrete Markov chains [35] use state aggregation. However, state aggregation algorithms are effective only for Markov chains with a finite, small number of states. A more effective averaging method, that we present here, is cycle averaging in chemical reaction networks. Our method exploits the formal analogy between the quasistationarity assumption for fast deterministic linear cycles [36] and the stochastic averaging assumption for the same type of submodels.
Stochastic quasi-steady-state approximations presented in [37] combine singular perturbations for the master equations with Van Kampen's Ω-expansion [38] (first order Kramers–Moyal expansion) and are very close to our approach. However, we provide here a much more general setting and completely characterize the hybrid limits that result from this setting.
The stochastic dynamics of our simplified hybrid models provide good approximations of the unreduced chemical master equation. We show rigorously elsewhere [39] that the simplified hybrid processes are weak approximations of the unreduced Markov process. This means that all the statistical properties of the trajectories are approximated in distribution. In particular, steady state distributions of molecular species concentrations should be accurately reproduced. Also, distributions of intermittence times like the times between bursts in the production of mRNA or proteins, or the commutation time for stochastic switches are expected to be similar for the reduced model and for the initial model.
The structure of this paper is the following. In the Methods section we obtain hybrid simplifications for pure jump Markov processes and also propose a new averaging algorithm for these models. In the "Results and Discussion" section we present several examples of hybrid models obtained from stochastic chemical kinetics models, including a medium scale protein production model for which we demonstrate averaging.
Methods
Introductory notes on stochastic hybrid systems
Stochastic hybrid processes are couples of the type (θ(t), x(t)), where θ(t) is a discrete component (piecewise constant), x(t) is a vector with piecewise continuous components, and t is time. The discrete component θ(t) can be considered as a controlled Markov chain, whose transition matrix may depend on the continuous variable x(t). The discrete state space Θ (values of θ) can be finite or infinite. For instance, supposing that the discrete variables are r "rare" molecular species, whose numbers never exceed a small value N, then Θ ⊂ {0,…, N}^r has at most (N + 1)^r states. Different values of θ may also correspond to various states of a single molecule. For proteins with multiple phosphorylation sites the number of states is 2^r, where r is the number of sites. In the case of competitive transcriptional regulation, when N transcription factors compete for r binding sites, the number of states is at most (N + 1)^r.
There are several possible ways to specify the transitions of the discrete variables. Generally, these can be given by the stochastic matrix λ_{i,j}(x, t), where λ_{i,j} dt + o(dt) is the probability of a transition from the state i to the state j between t and t + dt. However, this representation is not handy when the matrices have very large dimension. In such cases, the usual molecular Markov jump process description is more appropriate. Then, possible transitions are specified by the stoichiometry vectors γ_i. The intensity (propensity) function λ_i(θ, x) is such that λ_i dt + o(dt) is the probability that a transition from θ to θ + γ_i takes place between t and t + dt [26]. It follows that the probability of a transition (of any kind) between t and t + dt is λ(θ, x) dt + o(dt), where λ = Σ_i λ_i is the total intensity. The inverse of the total intensity is the mean waiting time between two successive transitions. The transition θ → θ + γ_i is chosen with probability λ_i/λ.
Jumps can also occur in the continuous variables, in which case the size of the jump can be continuously distributed.
There are several possibilities for the evolution of the continuous variables leading to various classes of hybrid processes.
Piecewise deterministic processes
Piecewise deterministic processes were first formalized by [40] and found many applications in various areas ranging from industry to mathematical finance and biology.
The state of a piecewise deterministic process (PDP) is a couple ζ = (θ, x), where θ ∈ Θ denotes the discrete variable and x ∈ ℝ^n denotes the continuous variable. A PDP is given by three local characteristics, namely:

a vector field χ_θ(x) (flow function),

a jump intensity λ(θ, x) and

a transition measure Q^T such that Q^T(θ', x'; θ, x) is the probability distribution of the post-jump positions (θ', x') ∈ Θ × ℝ^n. Related to this is the jump distribution Q^J giving the probability distribution of the jumps: Q^J(δθ, δx; θ, x) = Q^T(θ + δθ, x + δx; θ, x) (Q^J is obtained from Q^T by a phase space translation).
Such a process involves deterministic motion punctuated by jumps. The lengths of the time intervals between successive jumps are random variables called waiting times τ_i. On the time interval [t_i, t_{i+1} = t_i + τ_i] the discrete variable remains constant, θ = θ_i, while the continuous variable evolves according to the differential equation:

dx/dt = χ_{θ_i}(x),  x(t_i) = x_i.     (1)
Let ϕ(t, (x_i, θ_i)) be the solution of the differential equation (1). We suppose that this solution exists and is unique for all t ≥ t_i. The waiting time is a random variable whose distribution reads [40]:

P[τ_i > t] = exp[−∫_0^t λ(θ_i, ϕ(s, (x_i, θ_i))) ds].     (2)
The reader can easily obtain Eq.(2). The probability not to jump in the time interval [s, s + ds] is 1 − λ ds + o(ds). Thus P[τ_i > s + ds] = P[τ_i > s](1 − λ ds) + o(ds). By taking the logarithm we get log P[τ_i > s + ds] − log P[τ_i > s] = −λ ds + o(ds). Integrating over s leads to Eq.(2). The more familiar expression used in the Gillespie algorithm, P[τ_i > t] = exp[−λ(θ_i) t], corresponds to the particular case when λ does not depend on the continuous variable.
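For an intensity that does not depend on the continuous variable, the exponential law above can be sampled by inverse transform, exactly as in the Gillespie algorithm. A minimal sketch (the function name is ours, chosen for illustration):

```python
import math

def sample_waiting_time(lam, u):
    """Invert the survival function P[tau > t] = exp(-lam * t):
    setting exp(-lam * tau) = u and solving gives tau = -log(u) / lam."""
    return -math.log(u) / lam

# Evaluating the survival function at the sampled time recovers u:
tau = sample_waiting_time(2.0, 0.3)
survival = math.exp(-2.0 * tau)  # equals 0.3 up to rounding
```

When λ depends on x, the closed-form inversion is replaced by integrating the survival function along the flow, as in step (3) of the algorithm below in the text.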
After a deterministic evolution on the (i + 1)-th interval, the discrete variable and the continuous variable can change according to:

θ_{i+1} = θ_i + δθ_i,  x(t_{i+1}) = ϕ(τ_i, (x_i, θ_i)) + δx_i,

where δθ_i, δx_i are sampled from the jump distribution Q^J.
The discrete variables are parameters of the differential equations describing the dynamics of the continuous variables. Notice that if δx = 0, the only effect of a jump on the continuous variables is a change of regime (change of parameter in the differential equation). The trajectories of the continuous variables are globally continuous. However, the velocities of the continuous variables are discontinuous at jumps of the discrete variables. We call this possibility "switching". If δx ≠ 0, then continuous variables can have instantaneous jumps and their trajectories are only piecewise continuous. We call this possibility "breakage". It is possible to have both "switching" and "breakage".
A PDP can also be defined by its generator [40, 41], acting on functions f defined on the phase space:

(Lf)(θ, x) = χ_θ(x)·∇_x f(θ, x) + λ(θ, x) ∫ [f(θ', x') − f(θ, x)] Q^T(dθ', dx'; θ, x).
The adjoint of the generator is used to obtain the Fokker–Planck equation [3, 38, 42] describing the time evolution of the probability distribution p(θ, x, t) of the process.
The structure of the generic PDP corresponds to the following simulation algorithm:
(1) Set the initial state condition x(t_{0}) = x_{0}, θ = θ_{0},
(2) Generate a random variable u uniformly distributed in [0, 1],
(3) Integrate the system of differential equations

dx/dt = χ_{θ_i}(x),  dF/dt = −λ(θ_i, x) F,  F(t_i) = 1,

between t_i and t_i + τ_i, with the stopping condition F(t_i + τ_i, θ_i, x_i) = u,
(4) Generate a second uniform variable v and use it to randomly choose the jumps (δθ_i, δx_i) (the decision is made in the same way as in the Gillespie algorithm [15]),

(5) Change the system state (θ_i, x(t_i)) into (θ_i + δθ_i, x(t_i) + δx_i),

(6) Reiterate from step (2) with the new state, until a previously defined time t_max is reached.
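As an illustration, steps (1)-(6) can be sketched for a hypothetical two-state (telegraph) gene model, with θ ∈ {0, 1} switching at rates k_on, k_off and a protein level obeying dx/dt = k_p·θ − g·x between jumps. The model, names and parameter values are ours, chosen for illustration; the flow and the survival function F are integrated by a crude Euler scheme:

```python
import random

def simulate_pdp(t_max, theta0=0, x0=0.0, k_on=1.0, k_off=1.0,
                 k_p=10.0, g=1.0, dt=1e-3, rng=None):
    """Simulate a PDP for a hypothetical telegraph gene: theta in {0, 1}
    switches at rates k_on/k_off; between jumps dx/dt = k_p*theta - g*x.
    Jumps only switch theta (pure "switching", delta-x = 0)."""
    rng = rng or random.Random(0)
    t, theta, x = 0.0, theta0, x0
    trajectory = [(t, theta, x)]
    while t < t_max:
        u = rng.random()                        # step (2): uniform variate
        F = 1.0                                 # survival function, F(t_i) = 1
        while F > u and t < t_max:              # step (3): integrate the ODEs
            lam = k_on if theta == 0 else k_off  # jump intensity lambda(theta)
            x += dt * (k_p * theta - g * x)     # Euler step for the flow
            F += dt * (-lam * F)                # dF/dt = -lambda * F
            t += dt
        if t < t_max:                           # steps (4)-(5): apply the jump
            theta = 1 - theta                   # the only possible jump here
        trajectory.append((t, theta, x))        # record state at jump times
    return trajectory
```

In this toy model the jump distribution is trivial (a single possible transition), so step (4) needs no second uniform variable; in general one selects among jumps with probabilities λ_i/λ.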
Hybrid diffusions
A rather general class of hybrid diffusions has been proposed in [26]. However, in that setting, jumps of the continuous variables are triggered by the hitting of predefined phase space sets, which is not our case. We propose a simpler and different setting, which is natural for biochemical systems. In our setting, a hybrid diffusion is defined by:

a vector field χ(θ, x) (drift or flow function),

a diffusion matrix σ(θ, x),

a jump intensity and a transition measure, defined as for PDPs.
Between two consecutive jumps at t_i and t_i + τ_i, the continuous variables x(t) satisfy the Itô stochastic differential equations [43]:

dx_k = χ_k(θ_i, x) dt + Σ_j σ_{kj}(θ_i, x) dW_j(t),

where W_j(t) are independent one-dimensional Wiener processes.
The waiting time τ is determined as for PDPs. One should integrate the system:

dx_k = χ_k(θ_i, x) dt + Σ_j σ_{kj}(θ_i, x) dW_j(t),  dF = −λ(θ_i, x) F dt,

with the initial condition F(t_i) = 1, x(t_i) = x_i and the stopping condition F(t_i + τ_i) = u, where u is a random variable, independent of (θ_i, x_i), uniformly distributed on [0, 1].
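Between jumps, the stochastic differential equation and the survival function F can be integrated together with an Euler–Maruyama scheme. A minimal one-dimensional sketch (all names are ours; chi, sigma and lam are user-supplied functions):

```python
import math
import random

def euler_maruyama_until_jump(x0, u, chi, sigma, lam,
                              dt=1e-3, t_max=100.0, rng=None):
    """Integrate the 1-d SDE dx = chi(x) dt + sigma(x) dW together with
    the survival function dF = -lam(x) F dt, stopping when F crosses the
    uniform variate u (the waiting-time condition) or t_max is reached."""
    rng = rng or random.Random(1)
    t, x, F = 0.0, x0, 1.0
    while F > u and t < t_max:
        dW = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment over dt
        x += chi(x) * dt + sigma(x) * dW    # Euler-Maruyama step for x
        F -= lam(x) * F * dt                # decay of the survival function
        t += dt
    return t, x  # waiting time and the pre-jump value of x
```

With chi(x) = −x, sigma(x) = 0.1 and a constant lam(x) = 1, the stopping time approaches −ln u, recovering the exponential waiting time of the homogeneous case.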
The generator for such a process is:

(Lf)(θ, x) = χ(θ, x)·∇_x f + ½ (σσ^T)(θ, x) : ∇_x∇_x f + λ(θ, x) ∫ [f(θ', x') − f(θ, x)] Q^T(dθ', dx'; θ, x),

where "·" stands for the scalar product and ":" stands for the doubly contracted tensor product, i.e. A : B = Σ_{k,l} A_{kl} B_{kl}.
Notice that hybrid diffusions contain PDPs as the particular case when diffusion is zero (σ = 0).
Hybrid simplification of Markov pure jump processes (MPJP)
Markovian stochastic chemical systems
Most of the existing numerical methods for stochastic chemical systems are based on the representation of the chemical reaction system as a Markov pure jump process (MPJP), which corresponds to Gillespie's [15, 44] stochastic simulation algorithm (SSA). The state of a system of n chemical species A_1,…, A_n is a vector X(t) whose components are the numbers of molecules of each species. The state space of the process is E = ℤ^n. From now on, we shall use upper case letters X for numbers of molecules, and lower case letters x = X/𝒱 for concentrations, where 𝒱 is the reaction volume, typically the volume of the cell's compartment.
For each reaction

Σ_j α_{ij} A_j → Σ_j β_{ij} A_j,

we have the jump γ_i = β_i − α_i ∈ ℤ^n, i = 1,…, n_r (n_r is the number of reactions). We consider equally the reverse reaction, corresponding to the jump −γ_i. The intensities and the transition measures for the Markov jump process are:

λ_{±i}(X) = V_{±i}(X),  Q^T(X'; X) = Σ_i (λ_i(X)/λ(X)) δ_{X+γ_i}(X'),

where V_i, V_{−i} are the reaction rates and the probabilities λ_i/λ sum to one. Here δ_X is the Dirac measure centered on X, and X' is the post-jump position.
Several choices are possible for the reaction rates [45]. The most popular is the mass action law, when rates are polynomial functions of the concentrations:

V_i(X) = 𝒱 k_i Π_j (X_j/𝒱)^{α_{ij}}.
As for any homogeneous Markov process, we can characterize this process by its generator [41], whose action on test functions f defined on the state space E is:

(Lf)(X) = Σ_i λ_i(X) [f(X + γ_i) − f(X)].     (10)
The adjoint of the generator defines the chemical master equation, which describes the time evolution of the probability distribution p(X, t) in phase space:

∂p(X, t)/∂t = Σ_i [λ_i(X − γ_i) p(X − γ_i, t) − λ_i(X) p(X, t)].     (11)
Notice that hybrid diffusions or PDPs with no continuous components are Markov pure jump processes.
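The MPJP description translates directly into Gillespie's direct method. A minimal sketch for a hypothetical birth-death system (∅ → A at rate k, A → ∅ at rate g·X; the model and the parameter values are ours, chosen for illustration):

```python
import math
import random

def ssa(x0, t_max, k=5.0, g=0.5, rng=None):
    """Gillespie's direct method (SSA) for a hypothetical birth-death
    process: 0 -> A at rate k, A -> 0 at rate g*X.  The waiting time is
    exponential with the total intensity, and the next reaction is
    chosen with probability lambda_i / lambda."""
    rng = rng or random.Random(42)
    t, X = 0.0, x0
    while t < t_max:
        rates = [k, g * X]                    # intensities lambda_i(X)
        total = sum(rates)                    # total intensity lambda
        t += -math.log(rng.random()) / total  # exponential waiting time
        if rng.random() * total < rates[0]:   # choose with prob lambda_i/lambda
            X += 1                            # birth jump, gamma = +1
        else:
            X -= 1                            # death jump, gamma = -1
    return X
```

Each iteration simulates exactly one jump, which is why the cost of exact simulation grows with the total intensity, as discussed in the Background.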
Partial Kramers–Moyal expansion
Diffusion approximations (Langevin dynamics) for stochastic chemical kinetics were proposed by [31] and rigorously justified by [46, 47]. This approximation is valid when the numbers of molecules of all species are much larger than one. It concerns, for instance, the thermodynamic limit 𝒱 → ∞, where 𝒱 is the volume of a well stirred reactor. Then, the diffusion approximation applies to intermediate scales in phase space. Formally, this approximation can be obtained by the Kramers–Moyal expansion [1–3], which is the second order Taylor expansion of the master equation with respect to the jumps divided by the volume; at first order one gets the deterministic dynamics, and at second order the Langevin dynamics. In [29], we used partial Kramers–Moyal expansions to obtain hybrid approximations of MPJPs. The possibilities of this method are examined here in more generality.
First, let us partition the species of the system into two sets X = (X_D, X_C), where X_D represents the species in small numbers and X_C the species in large numbers. Note that this partition is different from the reaction-based partitions used in other works [18, 20, 21, 48–50]. All these works have as objective the reduction of simulation time. With respect to this objective, many schemes that avoid individual simulation of rapid reactions by regrouping them and by treating the corresponding channels as continuous variables in "fluid" limits are more or less equivalent. We also set ourselves another objective, which is to find a natural representation of the simplified processes. A species-based partition is more suitable for this end. Let M be the set of reactions, in which every reversible reaction contributes two reactions, one for each direction. The species partition determines a partition of M into three sets: the continuous reactions, whose reactants and products are all in X_C; the discrete reactions, whose reactants and products are all in X_D; and the mixed reactions, with at least a reactant or a product in X_D and at least a reactant or a product in X_C. We should emphasize that although the continuous reactions are typically rapid, other rapid reactions can also be contained in the discrete or mixed sets.
The second step towards the hybrid processes is to rescale the continuous variables by dividing them by the volume, obtaining x_C = X_C/𝒱. The X_D variables will not be rescaled and will be considered discrete. Their evolution is given by discrete transitions.
The third step is to consider a second order Taylor expansion of the master equation, for the continuous variables x_C and with respect to the small parameters γ_i/𝒱. This expansion can be performed equivalently on the generator (10) or on the master equation (11). For simplicity, we use the generator. The test functions now depend on two variables (x_C, X_D) and are considered to be twice differentiable with respect to the continuous variables x_C.
Let us consider that the rates of the continuous reactions are proportional to the volume (this follows from the mass action law): V_i(X) = 𝒱 v_i(x_C) for i in the continuous set. The second order Taylor expansion of (10) with respect to x_C reads:

(L̃f)(x_C, X_D) ≈ Σ_{i∈cont.} v_i(x_C) γ_i·∇_{x_C} f + (1/2𝒱) Σ_{i∈cont.} v_i(x_C) (γ_i ⊗ γ_i) : ∇_{x_C}∇_{x_C} f + Σ_{i∉cont.} λ_i(x_C, X_D) [f(x_C + γ_i^C/𝒱, X_D + γ_i^D) − f(x_C, X_D)],

where ⊗ denotes the tensor product ((γ ⊗ γ)_{kl} = γ_k γ_l) and γ_i^C, γ_i^D are the projections of γ_i on the continuous and on the discrete components.
The approximated generator corresponds to hybrid diffusions with drift function χ(x_C) = Σ_{i∈cont.} v_i(x_C) γ_i and diffusion matrix 𝒱^{−1} Σ_{i∈cont.} v_i(x_C) γ_i ⊗ γ_i. As can easily be seen, the drift and diffusion coefficients for the variables x_C do not depend on the discrete variables X_D. Thus, switching is not present in this approximation and the discrete variables can simply be forgotten if not observed. Furthermore, the jumps vanish in the limit 𝒱 → ∞, implying that the continuous variables have no instantaneous jumps (no breakage). However, switching and breakage are present in many examples in molecular biology. How is this possible?
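The drift and diffusion coefficients above can be assembled mechanically from the stoichiometric vectors and the macroscopic rates. A minimal sketch (the function name is ours):

```python
def drift_and_diffusion(gammas, rates, x, volume):
    """Compute the drift chi = sum_i v_i(x) gamma_i and the diffusion
    matrix D = (1/V) sum_i v_i(x) gamma_i (tensor) gamma_i of the hybrid
    diffusion approximation, for continuous reactions given by
    stoichiometric vectors `gammas` and rate functions `rates`."""
    n = len(gammas[0])
    chi = [0.0] * n
    D = [[0.0] * n for _ in range(n)]
    for gamma, rate in zip(gammas, rates):
        v = rate(x)                      # macroscopic rate v_i(x_C)
        for k in range(n):
            chi[k] += v * gamma[k]       # drift contribution v_i * gamma_i
            for l in range(n):
                D[k][l] += v * gamma[k] * gamma[l] / volume  # diffusion term
    return chi, D
```

For a one-species birth-death pair with v_1 = 5 and v_2 = 0.5·x, the two jumps cancel in the drift while both add to the diffusion, illustrating how fluctuations persist at a deterministic steady state.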
In order to cope with the possibility of switching and breakage, we need to consider that some reactions from the mixed set exhibit particular properties. We can consider two types of special reactions:

1.
reactions in the mixed set that are frequent enough to change the continuous variables and induce switching,

2.
reactions in the mixed set with a large stoichiometric vector, which induce breakage.
These reactions will be referred to as "superreactions". The first type will be called superreactions of type 1 and the second type superreactions of type 2. As a consequence, we distinguish two types of hybrid approximations, according to which of the two kinds of superreactions is present in the mechanism.
A. Hybrid approximations with switching
This first case of approximate process is induced by superreactions with stoichiometric vectors of order one, but with rates that scale with the volume: V_i = 𝒱 v_i(x_C, X_D) for superreactions i of type 1. These reactions induce rapid transitions of the continuous variables. The set of type-2 superreactions is considered to be empty. For this approximation, even if the rates of the superreactions depend on the discrete variables, we suppose that the superreactions do not change the discrete variables; in other words, γ_i^D = 0 for all superreactions i of type 1.
If this condition fails, another type of approximation arises: some discrete variables change rapidly. The resulting process will be an averaged hybrid process.
The evolution of the discrete variables is given by slow jumps (the discrete reactions and the slow mixed reactions). The component x_C has rapid jumps, but their amplitude is relatively small. Thus, between two consecutive transitions of the discrete variables, the continuous variables can be approximated by a smooth deterministic trajectory. The continuous variables have continuous trajectories but discontinuous velocities. The first order of the Kramers–Moyal expansion corresponds to the piecewise deterministic approximation defined by the flow function

χ(x_C, X_D) = Σ_{i∈cont.} v_i(x_C) γ_i^C + Σ_{i of type 1} v_i(x_C, X_D) γ_i^C,
the jump intensity

λ(x_C, X_D) = Σ_i λ_i(x_C, X_D),     (15)

where the sum runs over the discrete reactions and the mixed reactions that are not superreactions, and the jump probability

p_i = λ_i(x_C, X_D)/λ(x_C, X_D) for the transition X_D → X_D + γ_i^D,

where γ_i^C, γ_i^D are the projections of γ_i on the continuous and on the discrete components, respectively. The relation (15) clearly shows that the jump intensity depends both on the continuous and on the discrete variables. Nevertheless, only the discrete variables are changed by jumps.
The second order of the Kramers–Moyal expansion corresponds to the hybrid diffusion approximation, for which the diffusion matrix reads:

(σσ^T)(x_C, X_D) = 𝒱^{−1} [Σ_{i∈cont.} v_i(x_C) γ_i^C ⊗ γ_i^C + Σ_{i of type 1} v_i(x_C, X_D) γ_i^C ⊗ γ_i^C].
B. Hybrid approximations with breakage
This type of approximation is induced by the superreactions with a large stoichiometric vector (i.e. the type-2 superreactions). We also suppose that the set of type-1 superreactions is empty. In other words, to have this approximation, we suppose that at least one reaction i of type 2 has a large stoichiometric vector: (γ_i)_j/𝒱 is comparable with x_j for some species j ∈ C. A type-2 reaction can change both continuous and discrete components. The main difference consists in the fact that a type-2 reaction significantly changes, in one step, the concentration of the x_C component.
The reactions of type 2 do not contribute to the deterministic flow; they only appear as jumps. As before, we write down the main characteristics of the approximated PDP process (first order Kramers–Moyal expansion), namely the flow function

χ(x_C) = Σ_{i∈cont.} v_i(x_C) γ_i^C,     (18)
the jump intensity

λ(x_C, X_D) = Σ_i λ_i(x_C, X_D),

summed over the discrete, mixed and type-2 reactions, the jump probabilities for the discrete variables, p_i = λ_i/λ for the transition X_D → X_D + γ_i^D, and the jump probability for the continuous variables, which now includes the finite jumps x_C → x_C + γ_i^C/𝒱 produced by the type-2 superreactions.
If the superreactions i of type 2 have γ_i^D ≠ 0, jumps occur simultaneously in the continuous and in the discrete variables.
The deterministic flow is defined by the flow function χ(x_C) (Eq.(18)), which does not depend on the discrete variable X_D. In this case there is no switching. At the second order of the Kramers–Moyal expansion, we include phase space diffusion, which contains contributions only from the continuous reactions:

(σσ^T)(x_C) = 𝒱^{−1} Σ_{i∈cont.} v_i(x_C) γ_i^C ⊗ γ_i^C.
If both sets of superreactions (type 1 and type 2) are non-empty, then both types of discontinuities can appear in the trajectory of x_C:

switching: discontinuity of velocity (jump of X_{ D }), with no discontinuity of trajectory,

breakage: discontinuity of trajectory (jump in some variable x_{ C }).
The former discontinuities are induced by type-1 superreactions, while the latter are induced by type-2 superreactions.
C. Hybrid approximations with singular switching
Although mathematically distinct, breakage and switching can be physically indistinguishable in certain cases. Indeed, the jump of the continuous variable can result from the repeated application of one or several very fast reactions of type 1. We can obtain processes with breakage as singular limits of processes with switching, when very short-lasting steep variations alternate with long-lasting slower variations of the continuous variables.
Let us consider that there is a subset of extremely rapid superreactions, the superreactions of type 3, whose rates are larger by a factor ϵ^{−1}, where 0 < ϵ ≪ 1 is a small positive parameter, and whose substrates are discrete species.
As for the reactions of type 1, we consider that γ_i^D = 0 for all superreactions i of type 3.
For each reaction i of type 3 we consider that there are two subsets of reactions: one containing the reactions that produce the discrete substrates of reaction i, and one containing the reactions that consume them. We consider that all reactions in these two subsets are rapid.
First, we apply the KramersMoyal expansion for large and obtain a switched PDP. The flow function of this PDP reads:
where
The jump intensity of the PDP reads
We show in the Additional File 1 that, in the limit ϵ → 0, the switched PDP converges to a PDP having the following generator:
where the post-jump positions are distributed according to a normalized probability density determined by the fast dynamics, and Φ(s; x, X_D) is the solution of the corresponding differential equation.
We recognize above the generator of a PDP with breakage. Contrary to the previous case of breakage resulting from type-2 superreactions, the breakage size is now continuously distributed. The switch transitions disappear in the singular limit (the substrates of the type-3 reactions remain, practically all the time, at zero).
Practical criteria for identifying small parameters and superreactions leading to piecewise deterministic approximations
The law of large numbers is applicable in the limit 𝒱 → ∞. Certainly, in cell biology, the idea of infinite volumes should be considered with care. For this reason we replace this condition by a set of easy-to-check criteria concerning the relative orders of the parameters of the models. These criteria concern piecewise deterministic approximations. Criteria for the hybrid diffusion approximation, involving the central limit theorem, will not be discussed in this paper.
Our criteria are relative to two reference quantities. The first reference is a large number N representing a lower bound for the numbers of molecules of continuous species. The second reference is a time τ, which is a lower bound both for the characteristic times of the deterministic dynamics of the continuous variables τ_{ C }and for the average time between two successive jumps of the discrete variables τ_{ D }, namely τ = min(τ_{ C }, τ_{ D }). τ can be related to kinetic parameters by methods exposed in [51, 52].
The applicability of the law of large numbers to the continuous variables implies two conditions. The first condition is that jump sizes should be small with respect to the numbers of molecules, i.e. |γ_i| ≪ N for all continuous reactions i. The second condition is that the number of jumps of the continuous variables on the timescale τ should be large, i.e. V_i τ ≫ 1 for all continuous reactions i. The two conditions are fulfilled simultaneously in the limit 𝒱 → ∞ because N and V_i scale with 𝒱, while γ_i and τ do not depend on 𝒱.
A similar condition must be satisfied by superreactions of type 1: V_i ≫ τ^{−1} also for all superreactions i of type 1. The superreactions of type 2 do not satisfy the small-jump condition, i.e. |γ_i| must be comparable to N (not much smaller) for all superreactions i of type 2.
In order to have singular switching we need a new small parameter ϵ << 1 and the following set of conditions:
a) There are superreactions of the type 3. These are just very quick superreactions of the type 1.
b) Superreactions of type 3 act only during a lapse of time τ_ϵ = ϵτ, i.e. V_i = (ϵτ)^{−1} ≫ τ^{−1} for the reactions that inactivate them. The reactions that inactivate superreactions of type 3 could thus be as frequent as the continuous reactions and the type-1 superreactions (but not as quick as the type-3 superreactions themselves).
c) Superreactions of type 3 are quick enough to produce breakages comparable to N during the time τ_ϵ, i.e. V_i ≫ (ϵτ)^{−1} for superreactions i of type 3.
The conditions of the type V_i ≫ τ^{−1} can be simplified if the reactions are first order with respect to the discrete species. In this case V_i = k_i X_D with X_D ≈ 1, therefore the condition is k_i ≫ τ^{−1}. All these criteria are summarized in Table 1.
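The criteria can be turned into a rough automatic classifier. In this sketch, the reference values N and τ, the `mixed` flag and the numeric factor encoding "≫" are all our own illustrative choices, not prescriptions from the paper:

```python
def classify_reaction(jump_size, rate, n_ref, tau, mixed=False, margin=100.0):
    """Order-of-magnitude classifier following the criteria above.
    jump_size = |gamma_i|, rate = V_i, n_ref = N, tau = reference time;
    `mixed` marks reactions coupling discrete and continuous species;
    `margin` encodes '>>' as a multiplicative factor (an assumption)."""
    small_jump = jump_size * margin <= n_ref   # |gamma_i| << N
    frequent = rate * tau >= margin            # V_i >> 1/tau
    if not small_jump:
        return "type-2 superreaction (breakage)"
    if mixed and frequent:
        return "type-1 superreaction (switching)"
    if frequent:
        return "continuous (law of large numbers)"
    return "slow jump reaction (keep discrete)"
```

Such a pre-scan of the reaction list decides, before simulation, which species and reactions go into the deterministic flow, the switching terms, and the discrete jump mechanism.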
Averaging for stochastic chemical kinetics
The performance of the hybrid algorithm can be very poor when the discrete mechanism contains rapid cycles which perform many reactions on the deterministic timescale τ_C (the timescale on which the continuous variables vary significantly). Indeed, in this situation the deterministic solver artificially samples the interval between two discrete cycle reactions. First, this leads to an unreasonable increase of the simulation time. Second, the condition on the number of jumps of the continuous variables is not satisfied and the hybrid approximation is not accurate. In this case, the hybrid scheme performs worse than SSA. It is therefore important to eliminate rapid cycling from the system before implementing numerical schemes for the hybrid approximations. This can be done by averaging.
Averaging principles are widely used for deterministic systems (see the classical textbooks [53, 54], more recently revisited in the non-autonomous case by [55]) and for stochastic systems (stochastic differential equations [56], Markov chains [35, 57]).
The classical averaging idea is to identify fast, ergodic variables (variables that, starting from any value, can reach any other value in a given phase space set in a finite, small time) and to average the dynamics of the slow variables with respect to the quasistationary distribution of the fast variables. An important difficulty is to identify the right slow and fast variables. For perturbed Hamiltonian dynamical systems the action-angle variables provide the natural framework for averaging [53].
In the case of chemical kinetics, there are two kinds of slow variables that should be taken into account for averaging. The first kind are just slow components (discrete components that change infrequently or continuous components with large deterministic timescale). The second kind are linear combinations of components that are conserved by the fast cycling dynamics. These new slow variables, that play a role similar to the action variables in Hamiltonian systems, provide new "aggregated" or "lumped" species in the averaged system. Notice that aggregation of states has been used for averaging Markov chains [35, 57]. The aggregation of species that we propose here adapts this type of argument to the case of chemical kinetics. Variable aggregation for simple deterministic reactive models has been used in relation to applications in ecology (see for instance [58]). In [36, 52] we proposed a general solution for simplification of reaction networks, that works for any linear mechanism with well separated constants. In this algorithm, species aggregation is systematically applied to hierarchies of rapid cycles. We show here that the same algorithm can be effectively used also in the stochastic case.
Averaging principles for reaction mechanisms
Averaging can be applied to multiscale reaction mechanisms. This procedure leads to averaged hybrid models. In order to apply averaging, one should first identify a submodel of discrete components, satisfying the following conditions:
C1) the dynamics of the submodel is much faster than the dynamics of the continuous components and of the other, slower discrete components.
C2) the dynamics of the submodel is ergodic: starting with any state, the system can reach any other state in finite time.
The general procedure to obtain averaged hybrid simplifications is described in the Additional File 1. Some cases are particularly interesting.
The first case corresponds to fast discrete cycles producing continuous species. In this case rapid superreactions change both discrete and continuous components (Eq.(13) is not valid). The discrete components that are affected by such reactions are fast discrete variables. In the resulting hybrid model, both the continuous dynamics and the slow discrete reaction rates should be averaged with respect to the fast discrete variables.
The second case corresponds to fast, purely discrete cycles. In this case, some of the cycles of the submechanism are at least as fast as, or faster than, the reactions . The other reactions of are much slower. In the resulting model, the rates of the slow reactions of should be averaged with respect to the fast discrete variables.
In both cases, one needs the steady probability distribution of the fast discrete submodel. All the slow processes will be averaged with respect to this distribution. The calculation of the steady probability distribution can easily be done only when the submodel is linear. Considering linear submodels has another advantage: in this case, averaging and aggregation lead to a coarse-grained reaction mechanism. For these reasons, we have developed an algorithm for linear submodels. Of course, gene network models are generally nonlinear. However, this does not mean that all the parts of such models behave nonlinearly. Many submodels, in particular monomolecular cycles, can be simplified by the averaging methods that we designed for linear networks.
Cycle averaging for linear submodels
Linear submodels
There are two types of linear reaction networks: monomolecular networks and first order networks. The structure of a monomolecular reaction network can be completely defined by a simple digraph, in which vertices correspond to chemical species A_i and edges correspond to reactions A_i → A_j with rate constants k_{ji} > 0. For each vertex A_i, a positive real variable c_i (concentration) is defined.
The deterministic kinetic equation is

dc_i/dt = Σ_{j≠i} (k_{ij} c_j − k_{ji} c_i).     (24)
First order reaction networks include monomolecular networks as a particular case, and are characterized by a single substrate and by reaction rates that are proportional to the concentration of the substrate. First order reaction networks can contain reactions that are not monomolecular, such as A → A + B or A → B + C. We restrict ourselves to pseudo-conservative first order reactions, i.e. reactions that do not change the total number of molecules in a given submechanism (A → A + B reactions are allowed, provided that B is external to the submechanism; similarly, A → B + C reactions are allowed, provided that either B or C is external to the submechanism). With these constraints, the total number of molecules in the submechanism is conserved and the kinetic equations are the same as (24). Degradation reactions can be studied in this framework by considering a special component (a sink) that collects the degraded molecules. Further relaxation of the constraints is possible. For instance, the system can be opened by allowing constant (or slowly variable) production terms in Eq.(24). These terms change the steady state, but do not influence the relaxation times of the system. The linear submechanisms can be considered as parts of a nonlinear network, given fixed (or slowly changing) values of the external inputs (boundaries). For instance, even in systems of binary reactions, one can define pseudo-monomolecular reactions when one of the substrates of the binary reaction is not changing (or changes slowly). This condition is fulfilled, for instance, when that substrate is in excess.
The stochastic dynamics of a single molecule in such a linear reaction network is given by the probability p(j, t) that the molecule is in A_j at time t. One can easily show that the master equation for p(j, t) is the same as the deterministic kinetic equation (24). Considering only one molecule does not restrict generality because, when several molecules are present in a linear network, they behave independently. Thus, the simplification algorithm proposed for deterministic networks [36, 52] can also be applied to stochastic networks [59]. The algorithm is based on a set of operations transforming the reaction graph into an acyclic digraph without branching (no graph component is the substrate of more than one reaction). For this type of graph the eigenvectors and eigenvalues of the kinetic matrix can be straightforwardly calculated, which completely solves the problem of the deterministic dynamics. We could follow precisely the same approach to simplify and then solve the stochastic dynamics. However, we argue here that applying only a few steps of the algorithm is enough to effectively reduce the computational time of the SSA method.
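The equivalence between the single-molecule master equation and the deterministic kinetics (24) can be checked numerically. The sketch below (plain Python is used here for illustration; the paper's simulations used MATLAB, and the rate constants are hypothetical) simulates one molecule hopping in the two-species network A_1 ⇄ A_2 by Gillespie's method and compares its time-averaged occupancy of A_2 with the deterministic stationary value k_{21}/(k_{12} + k_{21}).

```python
import random

random.seed(1)

# Hypothetical rate constants, in the notation k_{ji} for A_i -> A_j
k21 = 3.0   # A_1 -> A_2
k12 = 2.0   # A_2 -> A_1

def occupancy_of_A2(t_end):
    """Gillespie simulation of one molecule; returns the time fraction spent in A_2."""
    state, t, time_in_2 = 1, 0.0, 0.0
    while t < t_end:
        rate = k21 if state == 1 else k12
        dt = min(random.expovariate(rate), t_end - t)  # waiting time to next jump
        if state == 2:
            time_in_2 += dt
        t += dt
        state = 2 if state == 1 else 1                 # monomolecular jump
    return time_in_2 / t_end

empirical = occupancy_of_A2(50000.0)
deterministic = k21 / (k12 + k21)   # stationary solution of Eq. (24)
print(empirical, deterministic)     # the two values should be close
```

The same time-average would be obtained by simulating many independent molecules, which is what makes the single-molecule description sufficient for linear networks.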
Cycle averaging algorithm
Let us recall that the reduction method presented in [36, 52] uses two types of operations acting on the reaction graph.
I. Construction of an auxiliary network (dominance). For each node A_i of a linear submechanism, let us define κ_i as the maximal kinetic constant of the reactions A_i → A_j: κ_i = max_j{k_{ji}}. For the corresponding j we use the notation ϕ(i): ϕ(i) = arg max_j{k_{ji}}. The auxiliary reaction network is the set of reactions A_i → A_{ϕ(i)} with kinetic constants κ_i. In such a network there is no branching: if several reactions consume the same component A_i, only the quickest one is kept and all the others are discarded.
II. Glueing cycles (aggregation). Rapid cycles are replaced by a single node. Constants of reactions leaving these cycles are renormalized according to an averaging principle (see the Additional File 1).
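Operation I can be sketched in a few lines of Python. The branched mechanism and its constants below are hypothetical; the function keeps, for each substrate, only its fastest outgoing reaction.

```python
def auxiliary_network(reactions):
    """Dominance step: for each substrate A_i keep only the fastest outgoing
    reaction A_i -> A_phi(i); all slower branches are discarded.
    `reactions` maps pairs (i, j), meaning A_i -> A_j, to rate constants k_ji."""
    best = {}
    for (i, j), k in reactions.items():
        if i not in best or k > best[i][1]:
            best[i] = (j, k)
    return {(i, j): k for i, (j, k) in best.items()}

# Hypothetical branched mechanism: A_1 -> A_2 (fast) competes with A_1 -> A_3 (slow)
mech = {(1, 2): 10.0, (1, 3): 0.1, (2, 1): 1.0, (3, 1): 0.01}
aux = auxiliary_network(mech)
print(aux)   # only A_1 -> A_2 survives among the reactions leaving A_1
```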
In order to present the simplification algorithm let us use two simple examples.
First, let us consider a chain of monomolecular reactions A_1 → A_2 → ... → A_m. The reaction rate constant for A_i → A_{i+1} is k_i. All rate constants are considered well separated, i.e. either k_i << k_j or k_i >> k_j for any i ≠ j.
The smallest rate constant in the chain is called limiting, and denoted by k_lim. If 1/k_lim << τ_C (rapid chain), then all molecules A_1 are transformed into molecules A_m on a timescale much quicker than the deterministic time τ_C. We can thus ignore the chain reactions and consider that the entire mass of the chain is practically always in A_m. This is equivalent to considering the chain at quasistationarity because the steady state probability distribution of a chain is a Dirac delta measure localized at the end of the chain. However, if we do not simplify chains, then simulating them by Gillespie's SSA will not be computationally expensive because the mass of the chain is transferred to the end of the chain A_m in a number of steps that is relatively small (bounded by the total mass of discrete species multiplied by the longest chain length).
As a second example, let us consider the cycle C given by the following sequence of monomolecular reactions: A_1 → A_2 → ... → A_m → A_1. Let all rate constants be well separated and the smallest one be k_lim, as before.
We add to the cycle one branching reaction; this transforms A_j, a component of the cycle, into B, a component exterior to the cycle.
We consider the following distinct situations:
(I) the branching reaction is A_j → B of rate constant k, with k >> k_j,
(II) the branching reaction is A_j → B, with k << k_j,
(III) the branching reaction is A_j → A_j + B, or
(IV) the branching reaction is A_j → A_{j+1} + B of rate constant k_j.
In the situation (I) the exit reaction is faster and dominates the cycling reaction A_j → A_{j+1}. According to the rule for auxiliary networks, in this case (that we call a "broken" cycle) the cycle can be opened (by eliminating the cycling reaction A_j → A_{j+1}) and the resulting multiscale dynamics is that of a chain; we recover the previous example, and in this case we can safely decide to do nothing.
In the situation (II) the exit reaction is much slower than the cycling reaction. In this case the molecules inside the cycle have rapid transformations and the mass distribution inside the cycle can be considered to reach quasistationary distribution. We call this cycle "unbroken".
As discussed in [36, 51, 52], the relaxation time of a cycle with separated constants is the inverse of the second slowest rate constant k^{(2)} >> k^{(1)} = k_lim. To understand this, one should consider the two possible paths to equilibrate the cycle: one passing by the slowest step and a quicker one passing by the second slowest step; the quicker path shortcuts the first one. Thus, a cycle can be considered quasistationary if k^{(2)} >> 1/τ_C. A non-averaged fast cycle is computationally expensive in SSA if a molecule can perform a huge number of steps along the cycle on the timescale τ_C. The corresponding condition involves the quasistationary flux (not the relaxation time) and reads k^{(1)} = k_lim >> 1/τ_C.
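For a three-species cycle A_1 → A_2 → A_3 → A_1 this claim can be checked directly: the kinetic matrix has one zero eigenvalue (conservation), and the two nonzero eigenvalues solve λ² + (k_1 + k_2 + k_3)λ + (k_1 k_2 + k_1 k_3 + k_2 k_3) = 0 (trace and sum of principal minors). With well-separated constants the slowest one is close to −k^{(2)}, not to −k_lim. A minimal numerical check, with hypothetical constants:

```python
import math

# Hypothetical well-separated constants for the cycle A_1 -> A_2 -> A_3 -> A_1
k1, k2, k3 = 1.0, 1e-2, 1e-4   # k_lim = k3, second slowest constant is k2

# Nonzero eigenvalues of the kinetic matrix solve
#   lam^2 + (k1 + k2 + k3) lam + (k1*k2 + k1*k3 + k2*k3) = 0
b = k1 + k2 + k3
c = k1 * k2 + k1 * k3 + k2 * k3
disc = math.sqrt(b * b - 4 * c)
slow = (-b + disc) / 2           # eigenvalue of smallest magnitude
print(abs(slow), k2, k3)         # relaxation rate is ~ k^(2), far from k_lim
```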
From a quasistationary cycle, the mass is lost stochastically, but slowly, by the branching reaction. The intensity of the loss process can be calculated by replacing X_j by its average with respect to the quasistationary distribution of the cycle. The average of X_j is N(t) k_lim/k_j, where N(t) is the total mass inside the cycle. We obtain the average intensity N(t) k k_lim/k_j.
In the situations (III) and (IV) the average intensities of the branching reactions are N(t) k k_lim/k_j and N(t) k_lim, respectively.
All these operations can be assembled into the following algorithm.
Simplification algorithm
Input:
a first order reaction mechanism G with separated kinetic constants.
Output:
a simplified first order reaction mechanism.
while there are fast unbroken cycles
for each cycle in G that contains no reaction of the type (I) and has a sufficiently intense flux (k_lim >> 1/τ_C) do
1: "glue" the cycle into a single node C having the total mass N;
2: replace the exit reaction of the type (II) A_j → B of rate constant k by a reaction C → B of effective constant k' = k k_lim/k_j;
3: replace the reaction of the type (III) A_j → A_j + B of rate constant k by a reaction C → C + B of effective constant k' = k k_lim/k_j;
4: replace the reaction of the type (IV) A_j → A_{j+1} + B of rate constant k_j by a reaction C → C + B of effective constant k' = k_lim.
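The loop body of the algorithm can be sketched as a small Python function. The cycle, the exit reactions, and all constants below are hypothetical; the function returns the effective constants of the reactions leaving the glued node C, following the rules for the cases (II)-(IV).

```python
def glue_cycle(cycle_constants, exits):
    """Replace a fast unbroken cycle by a single node C.
    cycle_constants[j] is the constant of the cycling reaction A_j -> A_{j+1};
    exits is a list of (j, kind, k) with kind in {'II', 'III', 'IV'}.
    Returns the effective constant k' of each exit reaction from C."""
    k_lim = min(cycle_constants.values())   # limiting (slowest) step
    effective = []
    for j, kind, k in exits:
        if kind in ('II', 'III'):           # A_j -> B  or  A_j -> A_j + B
            effective.append(k * k_lim / cycle_constants[j])
        else:                               # 'IV': A_j -> A_{j+1} + B, with k = k_j
            effective.append(k_lim)
    return effective

# Hypothetical 3-cycle with separated constants and one slow exit of type (II)
cycle = {1: 10.0, 2: 1.0, 3: 0.01}          # k_lim = 0.01
print(glue_cycle(cycle, [(2, 'II', 0.05)])) # k' = k * k_lim / k_2 = 0.05 * 0.01 / 1.0
```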
Imposing the "glueing" condition on the cycling flux, and not on the cycle relaxation time, allows us to iterate the cycle averaging algorithm until no more fast unbroken cycles remain. This would not be possible with a relaxation time criterion for averaging. Indeed, the relaxation time of a cycle is given by the second slowest constant. This leaves room for a counterintuitive possibility: one can have slow cycles embedded into rapid cycles. For instance, a fast cycle whose nodes are "glued" cycles can have a slower node at the beginning of its limiting step. Indeed, the internal constants of this node are necessarily more rapid than the limiting step of the large cycle, but can be slower than the second constant, which gives the relaxation time of the large cycle. The slow scales are lost by glueing. In order to recover the full multiscale dynamics, a restoration operation has been considered at the end of the algorithm presented in [36, 52]. In this operation, all "glued" cycles are restored without their limiting steps. Thus, slower cycle subdynamics can be recovered. Restoration is not needed for glued cycles satisfying the flux condition k_lim >> 1/τ_C, because these are automatically faster than the timescale τ_C. Our averaging method works for mechanisms with well separated constants. Generalizations for partially separated constants are possible. For instance, one can consider cycles containing reactions with equivalent constants, but such that, for each node of the cycle, the constant of the branching reaction is much smaller than the constant of the cycling reaction.
Noisy networks and design rules for hierarchies of cycles
As we have shown in the previous sections, intrinsic noise in biochemical networks is generated at a "microscopic level" by the discrete variables and can be observed at the "mesoscopic level" of the continuous variables either as switching or as breakage events. Thus, when there are no discrete variables (all species are in large numbers), there is no intrinsic noise. Also, if the switching events are much faster than the deterministic timescale, averaging principles apply and noise is not transmitted to the continuous variables: the deterministic approximation is again recovered. The only way to transmit noise by switching to the mesoscopic level is by intermittency, and this needs particular combinations of slow reactions that change the values of the discrete species and frequent reactions that change the continuous species. Intrinsic noise transmission from the micro- to the mesoscopic level results from certain (not all) combinations of low molecule numbers and of slow and rapid timescales. Some general design rules relating topology to noise-related dynamic properties can be derived from our approach.
By hierarchies of cycles we understand reaction mechanisms containing cycles connected by branching reactions that form higher level cycles. The nodes of the higher level cycles are "glued" cycles from the lower levels. Hierarchies of cycles in discrete variables have interesting properties with respect to noise production. Generally, in cycle hierarchies, the effective constants of the branching reactions are at least as slow as the limiting step (slowest reaction) of the node (glued cycle) from which they start. Coupling cycles into hierarchies is a way to produce slower and slower reactions from initially rapid reactions, and thus to generate intermittency.
As a possible design rule, we could state: exit reactions of the type (II) or (III) (but not (IV)) generate intermittency when the exit node is not the beginning of the limiting step in some unbroken fast cycle.
Indeed, unless k_j is the constant of the limiting step in the cycle, one has k_lim/k_j << 1. Then, the average intensity of an exit reaction of the type (II) or (III) is weak and can represent a source of intermittency in the system. This situation should be avoided when less noise is desired, or created when noise is wanted.
An example of how this rule applies to the biochemistry of bacteria will be given below. A systematic investigation of the possibilities of this class of design rules will be presented elsewhere in relation to synthetic biology.
Results and Discussion
Methodology to obtain hybrid simplifications
We demonstrate how hybrid simplifications can be obtained from Markov pure jump models for gene networks. The simplified models preserve the main stochastic properties of the initial complex models and can replace these models in simulations.
The simplification procedure consists of four successive steps:
I Identification of discrete and continuous variables, partition of the reactions.
II Cycle averaging.
III Identification of superreactions.
IV Construction of the hybrid simplifications.
In order to justify the utility of our approach we show by examples that all types of hybrid processes that we have discussed are represented in gene networks models.
To introduce the examples we employ the following notation. Reactions are numbered by integers. If the i-th reaction is reversible, its forward and reverse directions are denoted separately; irreversible reactions are denoted simply R_i.
The rate constant units are s^{-1} for monomolecular and s^{-1} M^{-1} for binary reactions. In order to obtain pseudo-monomolecular rate constants from binary reaction constants k with a slowly varying substrate X we have used the following formula:

k_{pseudo} = k X/(N_A V_{cell}),

where N_A is the Avogadro number and V_{cell} is the cell volume. For a bacterium, N_A V_{cell} ≈ 1 (nM)^{-1}.
Similarly, the rate of a binary reaction with substrate numbers X_1, X_2 is calculated as k X_1 X_2/(N_A V_{cell}).
We discuss two simple hybrid models, one with switching and the second with breakage, then two more complex models that need averaging. All our simulations were performed using MATLAB 2008 in a Windows XP 32-bit environment with a dual-core Intel 6700, 2.65 GHz processor.
Hybrid model with switching: Cook's model
The simplest model with switching has been introduced by Cook [24] as a model for haploinsufficiency phenomena. This model can be described by the following system of biochemical reactions:
In this model, G and G* represent the inactive and active states of the gene, respectively. In the active state, the gene produces some protein P. Let G_0 be the gene copy number, which is a conserved quantity of the dynamics: G + G* = G_0. The haploinsufficiency regime corresponds to a small value of G_0. For the simplicity of the argument we consider G_0 = 1.
Let X = (X_D, X_C) be the state vector. The partition of the species is the following: the species in large numbers X_C = {P}, and the species in small numbers X_D = {G, G*}. This partition of the species defines a partition of the reactions. With the above notations we get:
The deterministic timescale is τ = (k_3)^{-1}. The stoichiometric vectors all have lengths of order one, much smaller than N (the number of proteins).
For the parameter values used in [24] we have k_2/k_3 >> 1 and R_2 is a superreaction of the type 1. The reaction R_3 satisfies V_3 = N k_3 >> k_3 = τ^{-1}, which means that the law of large numbers can be applied to the continuous variable.
At first order of the Kramers-Moyal expansion we obtain a PDP approximation with switching. In the following, we denote by x = [P] the continuous variable and by θ = G* the discrete variable. This model is a piecewise deterministic process with state space (θ, x) ∈ {0, 1} × ℝ.
The flow function χ(θ, x) is given by

χ(θ, x) = k_2 θ − k_3 x,
and the jump intensity λ is defined by

λ(0, x) = k_1, λ(1, x) = k_{m1}.
The resulting PDP is the same as the ON-OFF systems studied in operational research [25]. The model has been proposed as an abstract model of stochastic protein production in several other places [29, 60]. With the PDP description of the system and using the hybrid algorithm introduced in the Methods section, we draw a possible time evolution of this system (see Figure 1a). The same initial condition was used when the Gillespie algorithm was implemented for the model described above. The simulation time to generate a trajectory using the PDP approximation was 2.6 seconds, while with the Gillespie algorithm the simulation time was 14.5 seconds on average. The trajectories look qualitatively the same: production intervals are followed by degradation intervals, with no discontinuity in the continuous variable.
In order to quantitatively test the accuracy of the PDP algorithm, we have computed the stationary distribution of the number of proteins using a piecewise deterministic simulation and compared it to the estimated distribution from Gillespie trajectories.
The numerical methods to estimate stationary distributions are described in the Additional File 1. The theoretical curve for the PDP model is a beta distribution: the variable x = P/(k_2/k_3) follows the Beta distribution with parameters k_1/k_3 and k_{m1}/k_3, where B denotes the Beta function that normalizes the density [29]. The result of various comparisons is represented in Figure 1b.
Cook's model operates in the region k_2 >> k_{m1}, which corresponds to a broken cycle in the discrete variables. If the opposite inequality holds, k_2 << k_{m1}, then cycle averaging is needed. The cycle should be glued into a single node of total mass G_0 = 1 and the branching reaction R_2 is replaced by an effective reaction with an averaged rate constant \bar{k}_2. The resulting model is a birth and death model with effective birth rate \bar{k}_2 and death rate constant k_3. Provided that k_2/k_3 >> 1 (meaning that R_2 remains a superreaction of the type 1), the model can be approximated by a completely deterministic process with flow given by the averaged flow function χ(x) = \bar{k}_2 − k_3 x.
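A minimal PDP simulation of Cook's model fits in a few lines. The sketch below uses Python with hypothetical rate constants (not the values of [24]). Between jumps of θ the protein level follows the flow exactly, x(t) = x_∞ + (x_0 − x_∞)e^{−k_3 t} with x_∞ = k_2 θ/k_3, so no ODE solver is needed; the waiting time to the next switch is exponential with the current switching rate. The time average of x should approach (k_2/k_3)·k_1/(k_1 + k_{m1}), the fraction of time spent in the active state times the maximal level.

```python
import random, math

random.seed(2)

# Hypothetical constants: slow gene switching, fast production/degradation
k1, km1 = 0.1, 0.1      # G -> G*  and  G* -> G
k2, k3 = 100.0, 1.0     # production from G*, degradation of P

def simulate(t_end):
    """PDP simulation of Cook's model; returns the time average of x."""
    theta, x, t, integral = 0, 0.0, 0.0, 0.0
    while t < t_end:
        rate = k1 if theta == 0 else km1             # current switching intensity
        dt = min(random.expovariate(rate), t_end - t)
        x_inf = k2 * theta / k3                      # fixed point of the flow
        # exact integral of x over the phase (no ODE solver needed)
        integral += x_inf * dt + (x - x_inf) * (1 - math.exp(-k3 * dt)) / k3
        x = x_inf + (x - x_inf) * math.exp(-k3 * dt)
        t += dt
        theta = 1 - theta                            # gene switches state
    return integral / t_end

mean_x = simulate(200000.0)
print(mean_x)   # expected close to (k2/k3) * k1/(k1 + km1) = 50
```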
Hybrid models with breakage: neuroscience and bacterium operator sites
Neural systems exhibit stochastic behavior. Stein [61, 62] proposed a simple Markovian model for the evolution of the neuron membrane potential. In this model, discontinuous random jumps of the potential are followed by exponential decay. We do not discuss here how to obtain this hybrid model from a microscopic pure jump model. Stein's model is a phenomenological representation of very complex electric and biochemical processes. We demonstrate its properties in terms of trajectories and stationary distribution. The subthreshold behavior of the membrane potential of a neuron cell is described in this model by a hybrid Markov process of continuous variable V(t) with jumps of constant intensity λ and constant amplitude in the continuous variable. Although in the original Stein model there are jumps of positive and negative sign, corresponding to excitatory and inhibitory synaptic activations, here we consider only positive sign jumps. If t is the moment of a jump, then V(t^{+}) − V(t^{−}) = a, where a > 0 is the amplitude. Between two jumps V decays according to dV/dt = −αV, where α is a constant.
The generator of such a process is:

(Af)(x) = −αx f′(x) + λ [f(x + a) − f(x)].
The steady probability distribution p(x) of such a process satisfies the following delay differential equation:

α (x p(x))′ = λ [p(x) − p(x − a)],
and p(x) = 0, for all x < 0.
Analytical solutions are not known for this delay differential equation. In the Additional File 1 we describe the numerical scheme used to calculate the steady probability distribution. The comparison between the simulated and calculated distributions is shown in Figure 2cd. In Figure 2ab we show trajectories of the system. Stein's model could also apply to gene networks (even better, because its main defects with respect to neurons, such as the absence of a refractory period, are not a problem for genes). Generalizations of this model, allowing for continuous distributions of the jumps, could be used as an archetype of intermittent promoter activity.
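Although the stationary density has no known closed form, the process itself is easy to simulate exactly, since the decay between jumps is explicit. The sketch below (plain Python, hypothetical parameter values) generates a long trajectory and checks its time average against the stationary mean λa/α, which follows from d⟨V⟩/dt = λa − α⟨V⟩ = 0.

```python
import random, math

random.seed(3)

lam, a, alpha = 1.0, 1.0, 0.5   # jump intensity, jump amplitude, decay rate

def simulate(t_end):
    """Exact simulation of Stein's model (positive jumps only); returns the time average of V."""
    v, t, integral = 0.0, 0.0, 0.0
    while t < t_end:
        dt = min(random.expovariate(lam), t_end - t)   # waiting time to next jump
        # exact integral of the exponential decay over the inter-jump interval
        integral += v * (1 - math.exp(-alpha * dt)) / alpha
        v *= math.exp(-alpha * dt)
        t += dt
        if t < t_end:
            v += a                                     # jump: V(t+) = V(t-) + a
    return integral / t_end

mean_v = simulate(50000.0)
print(mean_v, lam * a / alpha)   # time average vs stationary mean lambda*a/alpha
```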
Hybrid models with breakage can result as singular limits of hybrid models with switching, as discussed in the Methods section Hybrid approximations with singular switching. The typical case is a bacterium operon. This model can be found in many places in the literature. A detailed molecular example [63, 64] will be studied as our last example (repressed operator site in a bacterium). A simpler version of this model is proposed by [65]. It consists of the following reactions:
The first reaction is zero order, all the other reactions are first order. The parameters satisfy k_{1} = ak_{4}, k_{2} = bk_{3}, k_{4} = ϵ k_{3}. We consider that b >> 1, ϵ << 1.
From the shape of the trajectories (which show bursting), the authors of [65] hypothesize that a PDP approximation with breakage is the natural simplification, and they solve the corresponding stationary hybrid Fokker-Planck equation for the protein component. We show here that the hybrid Fokker-Planck equation given in [65] can be recovered as an application of our approach.
First, we notice that a PDP limit is applicable to the Markov jump process. The species partition is X_D = mRNA, X_C = Protein. The mRNA component follows a birth and death process with birth intensity k_1 and death intensity (k_3 + k_2) mRNA. Using the master equation for birth and death processes [42] and considering that k_1/(k_3 + k_2) = aϵ/(1 + b) << 1, it follows that the probability of having zero or one molecule of mRNA is close to one (the probability of having two or more molecules is negligible).
This justifies that mRNA is a discrete species. The partition of the reactions is = {R_1, R_3}, = {R_2}, = {R_4}. The timescale of the continuous variable is τ = (k_4)^{-1}. V_4 = k_4 X >> τ^{-1}, where X is the number of proteins, provided that X >> 1. This condition, allowing the application of the PDP approximation, is satisfied because b >> 1 (b is the average number of proteins produced in a bursting event).
Let us show that we have singular switching, equivalent to breakage. The reaction R_2 is a superreaction of the type 3 because V_2 = k_2 = b(ϵτ)^{-1}. The discrete variable mRNA can be considered to remain zero almost all the time, except during the negligible duration of the bursts, when mRNA = 1. The reaction R_1 corresponds to the transition 0 → 1, while the reaction R_3 corresponds to the transition 1 → 0 of the discrete component mRNA. The intensity of the reaction R_3 satisfies V_3 = (ϵτ)^{-1}. Thus, the mean duration of a burst is ϵτ. According to the section Methods, we are in the case of a PDP with singular switching. The evolution of the unique continuous variable x (protein concentration) is well approximated by the following hybrid generator (obtained after a simple change of the integration variable in Eq.(23)):
which shows that b is the mean number of proteins produced in a burst (mean size of the breakage).
We notice a continuous (exponential) distribution of the breakage size, a situation different from the Stein model. The Fokker-Planck equation can be solved in this case. The steady distribution of the continuous variable x is a Gamma distribution [65].
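This prediction is easy to check by direct simulation of the breakage process: bursts arrive with intensity k_1 and have exponentially distributed sizes of mean b, while proteins decay deterministically with rate k_4. The stationary Gamma distribution then has shape a = k_1/k_4 and scale b, hence mean ab. The sketch below (plain Python, hypothetical parameter values) compares the time-averaged protein level with ab.

```python
import random, math

random.seed(4)

k1, k4, b = 1.0, 0.1, 20.0   # burst intensity, protein decay rate, mean burst size
a = k1 / k4                  # shape of the stationary Gamma distribution

def simulate(t_end):
    """PDP with breakage: exponential bursts, deterministic decay in between."""
    x, t, integral = 0.0, 0.0, 0.0
    while t < t_end:
        dt = min(random.expovariate(k1), t_end - t)    # time to next burst
        integral += x * (1 - math.exp(-k4 * dt)) / k4  # exact decay integral
        x *= math.exp(-k4 * dt)
        t += dt
        if t < t_end:
            x += random.expovariate(1.0 / b)           # burst of mean size b
    return integral / t_end

mean_x = simulate(50000.0)
print(mean_x, a * b)   # time average vs the Gamma mean a*b
```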
First complex example: λ-phage toggle switch
In this section we revisit a classical example of a toggle molecular switch. The model of the cro-repressor (cI) switch in λ-phage, a temperate bacteriophage of Escherichia coli, was investigated by many authors [66–69].
The life cycle of the phage has two alternative pathways: lysogenic, when the phage duplicates synchronously with the host genome, and lytic, when the phage produces large amounts of its own mRNA. The switch between these two pathways is controlled by the level of the cI protein. The lysogenic pathway corresponds to high levels of cI, while the lytic pathway corresponds to low levels of cI. A cI dimer may bind to any of the three operators OR1, OR2, and OR3, in this order. By cooperativity, OR1 and OR2 are almost simultaneously occupied by cI. The third site, OR3, is occupied only when the cI concentration is high enough. A simpler model, in which OR1 is absent, may be considered [68]. Let cI, cI_2 and D denote the repressor, the repressor dimer, and the DNA promoter site. The dynamics of the operator sites is described by the following reaction system:
where the DcI_{2} and complexes denote binding to the OR2 or OR3 sites, respectively binding to both sites.
The other reactions concern production, degradation of molecules cI, cI_{2}:
where n is the number of proteins per mRNA transcript.
The state vector is X = (X_{ D }, X_{ C }) where X_{ D }= {D, DcI_{2}, , DcI_{2}cI_{2}}, X_{ C }= {cI, cI_{2}}. Note that the choice of discrete variables is dictated here, like in our first example, by the conservation law D + DcI_{2} + + DcI_{2}cI_{2} = D_{0} that restricts the numbers of D, DcI_{2}, , DcI_{2}cI_{2} to small values.
The partition of the species induces the following partition of the reactions:
The interesting property of the λ-phage model is its bistability. The naive calculation to find the steady states uses quasistationarity of the deterministic dynamics. Then, we can use the equilibrium equations for the reactions R_i, i ∈ [1, 4], the steady state equation for cI and the conservation law to compute the concentration x of cI at steady state. x = 0 is one of the steady states, namely the lytic attractor. It also corresponds to an absorbing state of the Markov process (the process always stops when it reaches this state). The lytic state can be rendered non-absorbing by including, as in [68], a small basal production rate of cI protein in the model. However, this is not important for the illustration of our approach. If k_2/k_{m2} = k_3/k_{m3}, it follows that the steady states (other than the lytic state) must satisfy:
where .
Let us notice that the failure of naive quasistationarity calculations has been demonstrated for nonlinear subsystems [37]. However, these calculations can be justified here by linear cycle averaging (the promoter submechanism reactions R_i, i ∈ [2, 4], are all pseudo-monomolecular).
Bistability occurs when the quartic equation (26) has real roots, which is the case when σ is small and α is large. More precisely, the model is bistable if . As in [68], we have used σ = 5. In order to ensure bistability we have chosen α = 7.
The submechanism R_DC contains rapid unbroken cycles that need to be averaged before any further approximation of the process. These rapid cycles (see Figure 3a) correspond to the rapid binding-unbinding of the dimer cI_2 on the DNA. Three steps lead from the unreduced Model 1 to the averaged Model 2 (see also Figure 3a):
1.1 The cycle DcI_{2}, DcI_{2}cI_{2} is unbroken. It is glued to the node whose total mass is equal to the mass of DcI_{2} and DcI_{2}cI_{2}.
1.2 The limiting step of the cycle is k_lim = k_{m4} << k_4 cI_2.
1.3 The branching reaction DcI_{2} → n X + DcI_{2} is replaced by → n X + of effective constant . The reaction DcI_{2} → D + cI_{2} is replaced by → D + cI_{2} with the reaction constant .
After averaging, two more cycles remain in the resulting Model 2. However, the rates of the remaining reactions D + cI_2 → DcI_2 and D + cI_2 → are equivalent, which does not allow further application of our algorithm. Furthermore, the slow reactions → D + cI_2 and → D + cI_2 produce intermittence and should by no means be averaged.
The next approximation is a first-order partial Kramers-Moyal expansion. The reactions in R_{ C } and the superreaction R_{5} ∈ contribute to the deterministic dynamics of the continuous species, defined by the following differential equations:
where x and y are the concentrations of cI and cI_{2}, respectively.
The remaining reactions induce the jump mechanism.
The time evolution towards the lysogenic attractor is represented in Figure 3b for the unreduced Model 1 and in Figure 3c for the PDP Model 4 (obtained from Model 1 by averaging and first-order Kramers-Moyal expansion). The steady probability distribution close to this attractor is represented in Figure 3d for all the models in this study. Note the intermittent behavior of the cI and cI_{2} components, which is well captured by the switching PDP approximations (Models 3 and 4).
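The PDP simulation alternates deterministic integration of the continuous variables with discrete jumps whose hazard is accumulated along the trajectory. A minimal generic sketch (forward Euler integration for brevity; the simulations reported here use a stiff ODE solver instead, and all function arguments below are placeholders):

```python
import random

def simulate_pdp(flow, intensity, jump, x0, d0, t_end, dt=1e-3, rng=None):
    """Minimal piecewise deterministic process (PDP) simulator.

    flow(x, d)      -> dx/dt of the continuous variables
    intensity(x, d) -> total jump intensity of the discrete component
    jump(x, d, rng) -> new discrete state after a jump
    """
    rng = rng or random.Random(0)
    t, x, d = 0.0, x0, d0
    threshold = rng.expovariate(1.0)  # jump when accumulated hazard reaches this
    hazard = 0.0
    while t < t_end:
        x = x + dt * flow(x, d)       # Euler step for the continuous part
        hazard += dt * intensity(x, d)
        if hazard >= threshold:       # discrete jump (switching)
            d = jump(x, d, rng)
            hazard, threshold = 0.0, rng.expovariate(1.0)
        t += dt
    return x, d
```

Between jumps the discrete state d is frozen and only the ODE is advanced; this is what removes the many fast discrete events simulated by the exact SSA.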
Second complex example: Stochastic bursting of a repressed bacterium operon
Under strong repression, protein production from a bacterial operator site undergoes stochastic bursting. A stochastic model of protein production in prokaryotes was introduced in [63]; the behavior of the operator site under the regulation of a repressor molecule that prevents protein production was considered in [64]. The corresponding model is represented in Table 2 and in Figure 4.
In this model, the bacterium is considered to be in exponential growth phase, increasing in size and dividing normally. Cell growth is simulated by a linear increase of the volume in time. During replication, the nuclear material doubles (variables D, D.R, D.RNAP). At fission, the nuclear material is halved and all other components are divided among the daughter cells according to a binomial distribution.
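The division step can be sketched as follows; the species names are taken from Table 2, while the halving and binomial-partitioning rules are those stated above (the Bernoulli-sum implementation of the binomial draw is a simplification):

```python
import random

def divide_cell(counts, nuclear_species, rng=None):
    """Partition molecule counts at cell fission.

    Nuclear material (D, D.R, D.RNAP, ...) is halved deterministically;
    every other species goes to the tracked daughter with probability 1/2
    per molecule, i.e. a binomial partition.
    """
    rng = rng or random.Random(0)
    daughter = {}
    for species, n in counts.items():
        if species in nuclear_species:
            daughter[species] = n // 2
        else:
            daughter[species] = sum(rng.random() < 0.5 for _ in range(n))
    return daughter

# Hypothetical snapshot at fission: 4 gene copies, 1000 protein molecules.
daughter = divide_cell({"D": 4, "Prot": 1000}, {"D"})
```

The binomial partitioning adds an extrinsic-noise component at each division, on top of the intrinsic reaction noise.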
I. Species and reaction partition
Some of the components (D, D.R, D.RNAP, TrRNAP) are confined to small numbers by the conservation law D + D.R + D.RNAP + TrRNAP = const.
Two more components, RBS and Rib.RBS, have small lifetimes (≈ 2 s) and cannot accumulate to significant levels. Thus, the discrete variables are:
The remaining components are in large numbers and form the continuous variables:
We note that Rib, RNAP and R are present in relatively large numbers and are practically constant, which justifies a first-order reaction approximation for the mechanism R_{ D }. The sets of discrete species {D, D.R, D.RNAP} and {RBS, Rib.RBS} form rapid unbroken cycles and can be averaged.
II. Cycle averaging
The cycle averaging procedure can be applied three times:
1.1 The cycle D, D.R is unbroken. It is glued to the node D* whose total mass is equal to the mass of D and D.R.
1.2 The limiting step of the cycle is k_{lim} = k_{m1} << k_{1}.
1.3 The branching reaction D → D.RNAP is replaced by D* → D.RNAP of effective constant .
2.1 The cycle D*, D.RNAP is unbroken. It is glued to the node D** whose total mass is equal to the combined mass of D, D.R and D.RNAP.
2.2 We have << k_{m2}, hence the limiting step of the cycle is .
2.3 The branching reaction D.RNAP → TrRNAP is replaced by D** → TrRNAP of effective constant .
3.1 The cycle RBS, Rib.RBS is unbroken. It is glued to the node RBS* whose total mass is that of RBS and Rib.RBS.
3.2 The limiting step is k_{m6} << k_{6} Rib.
3.3 The branching reaction Rib.RBS → ElRib + RBS is replaced by the reaction RBS* → ElRib + RBS* of effective constant k_{7}' = k_{7}.
3.4 The branching reaction RBS → ∅ is replaced by the reaction RBS* → ∅ of effective constant .
Note that a loss of accuracy should be expected from the third averaging step, because the separation between the branching and cycling reaction rate constants is less pronounced there: k_{7}/k_{m6} ≈ 0.22, whereas the theory requires k_{7}/k_{m6} << 1.
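The quality of each averaging step can be monitored by the ratio of the branching constant to the limiting constant of the cycle. A small check using the values quoted above (the threshold 0.1 is an arbitrary illustrative choice, not part of the method):

```python
def separation_ok(k_branch, k_limiting, threshold=0.1):
    """Cycle averaging is accurate when branching rates are much smaller
    than the limiting rate of the cycle (ratio << 1)."""
    return k_branch / k_limiting < threshold

# Third averaging step: k7 = 0.5, k_m6 = 2.25 (values quoted in the text).
print(0.5 / 2.25)                # about 0.22, not << 1
print(separation_ok(0.5, 2.25))  # fails: expect some loss of accuracy
```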
III. Identification of superreactions
Note that cycle averaging produces a low-intensity reaction D** → TrRNAP, which generates intermittency (bursting). The reduced mechanism is represented in Figure 4b. The discrete/continuous partition of the species in the reduced mechanism is inherited from the initial model.
In the reduced mechanism, the reaction RBS* → RBS* + ElRib is very fast, faster even than the continuous reactions ElRib → Prot, Prot → FoldedProt, FoldedProt → ∅; it is therefore a superreaction of the type .
IV. Hybrid approximation
A first-order partial Kramers-Moyal expansion applied to the averaged Markov jump process leads to a PDP with switching.
The continuous variables obey the following differential equations:
The remaining three discrete components (D**, TrRNAP, RBS*) form a Markov jump process. Inside the reaction mechanism there is a rapid chain leading from TrRNAP to RBS* and on to ElRib production, fed by an extremely slow reaction producing TrRNAP. Thus RBS* presents infrequent bursts of activity leading to rapid production of ElRib. The increasing part of a burst does not last long, because the discrete component RBS* rapidly switches back to zero through the reaction RBS* → ∅. Hence, in the continuous variables, a steep (deterministic) increase of ElRib is followed by a slower decrease. This is the case of a PDP with switching, where the steep increase event could be assimilated to a breakage (see Figure 5a). However, the timescales of the decreasing and increasing parts of the peak are not well separated, so the breakage approximation can be used only to obtain qualitative results.
To summarize, we have considered five models: the unreduced Markov jump model, two averaged reduced Markov jump models (the second obtained after averaging steps 1 and 2, the third after applying all three steps), and two PDP models with switching obtained from each of the two averaged jump models. The three jump Markov models are simulated using the Gillespie algorithm, and the two hybrid models with the PDP algorithm.
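For reference, one step of the Gillespie direct method used for the three jump models can be sketched generically as follows (the birth-death example at the end is hypothetical and unrelated to the operon model):

```python
import random

def gillespie_step(x, reactions, rng):
    """One step of the Gillespie direct method.

    reactions: list of (propensity_fn, state_update_fn) pairs.
    Returns (tau, new_state); tau is None in an absorbing state.
    """
    props = [a(x) for a, _ in reactions]
    a0 = sum(props)
    if a0 == 0.0:
        return None, x                      # all propensities vanish
    tau = rng.expovariate(a0)               # waiting time ~ Exp(a0)
    r, acc = rng.random() * a0, 0.0
    for p, (_, update) in zip(props, reactions):
        acc += p
        if r < acc:                         # pick reaction with prob p/a0
            return tau, update(x)
    return tau, reactions[-1][1](x)         # numerical safety fallback

# Hypothetical birth-death example: 0 -> X at rate 2, X -> 0 at rate 0.1 x.
rng = random.Random(1)
reactions = [(lambda x: 2.0, lambda x: x + 1),
             (lambda x: 0.1 * x, lambda x: x - 1)]
x = 0
for _ in range(1000):
    tau, x = gillespie_step(x, reactions, rng)
```

The cost per step is dominated by the propensity sum a0, which is exactly the total jump intensity compared across models below.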
Strictly speaking, the process is Markovian only between two cell cycle events, which are under external command. Taking the external command into account, the process is semi-Markovian [29, 70]. The Markovian framework could be restored by introducing a cell cycle variable with periodic autonomous dynamics that triggers the transitions between the cell cycle stages.
In order to compare the performance of the models (in terms of time complexity), we have plotted the total jump intensities of the first three models (exact SSA and the two averaged models) as functions of time along a trajectory. The model demanding the least computer time is the one with the smallest jump intensity. In Figure 5b we observe a decrease of several orders of magnitude in the total intensity from the exact SSA model to the models obtained after the second and third averaging steps.
We cannot use the same method to estimate the computation time of the PDP algorithm. Although the computation time generally increases with the number of discrete transitions, the number of operations needed for the deterministic parts depends on the deterministic solver. In order to reduce the number of operations per time point we favored one-step schemes (although this matters little for very large systems, where the main cost is the computation of the flow function). We also favored implicit stiff solvers, which can operate with large time steps and thereby reduce computation time. All calculations were performed with the ode23s solver of MATLAB (a one-step implicit stiff solver using a Rosenbrock formula of order 2). A comparison of the times needed to generate trajectories 20 cell cycles long with the different methods is given in Table 3. This table illustrates the advantages of the various approximations. Averaging allows a tremendous reduction of the execution time: at least 50-fold for weak repression and 10^{4}-fold for strong repression (when stochastic effects are strongest). The Kramers-Moyal expansion leading to PDPs produces an extra twofold decrease of the execution time, but only for weak repression and after averaging of all rapid cycles. The PDP model M4, with incomplete averaging of the fast cycles, performs very poorly and can be even slower than the non-averaged SSA. This is explained by the strongly stiff character of the deterministic dynamics, with steep ElRib peaks.
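The advantage of implicit one-step stiff solvers such as ode23s can be illustrated on the simplest stiff problem, linear decay dx/dt = -kx with k·dt >> 1, where backward Euler remains stable at step sizes that would make an explicit scheme diverge:

```python
def backward_euler_decay(x0, k, dt, n_steps):
    """Implicit (backward Euler) integration of dx/dt = -k x.

    Each step solves x_new = x + dt * (-k * x_new), giving
    x_new = x / (1 + k dt). This is stable for any dt, even when
    k * dt >> 1, which is why implicit one-step stiff solvers (like
    MATLAB's ode23s used here) can take large steps on the stiff
    deterministic parts.
    """
    x = x0
    for _ in range(n_steps):
        x = x / (1.0 + k * dt)
    return x

# Stiff case: k * dt = 100; explicit Euler would oscillate and blow up,
# the implicit scheme decays monotonically.
print(backward_euler_decay(1.0, 1000.0, 0.1, 10))
```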
To test the accuracy of the various approximations we computed the steady probability distributions of the protein and of the folded protein from trajectories generated by the five models, using the distribution obtained by SSA for the unreduced model as the "exact" reference. The observed errors are consequences of the imperfect separation between the system's rate constants: for instance, in Figure 4, k_{7} = 0.5 and k_{m6} = 2.25, whereas the theory requires k_{7} << k_{m6}. Nevertheless, the approximate models are qualitatively correct, and all of them render the bursting behavior of the system correctly.
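Such accuracy comparisons can be quantified, for example, by the total variation distance between empirical steady distributions estimated from trajectories; a minimal sketch (the sample lists are hypothetical stand-ins for simulated molecule counts):

```python
from collections import Counter

def total_variation(samples_a, samples_b):
    """Total variation distance between two empirical distributions of
    discrete molecule counts (0 = identical, 1 = disjoint supports)."""
    pa, pb = Counter(samples_a), Counter(samples_b)
    na, nb = len(samples_a), len(samples_b)
    support = set(pa) | set(pb)
    return 0.5 * sum(abs(pa[k] / na - pb[k] / nb) for k in support)

# Hypothetical protein-count samples from a reference SSA run and a
# reduced-model run; a small distance indicates a good weak approximation.
print(total_variation([0, 0, 1], [0, 1, 1]))
```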
Another advantage of a simplified model is its reduced number of parameters. The full SSA model has 14 parameters; after the first two averaging steps only 9 remain, and after all three averaging steps only 7. The PDPs have the same number of parameters as the corresponding averaged Markov jump processes, namely 9 and 7. Furthermore, the parameters of the simplified models are monomials in the parameters of the unreduced model, which can be exploited for sensitivity analysis: after identifying the monomials that are critical for a given property, the backtracked influence of each initial parameter is given by its power in the critical monomial. The details of the method can be found in [36].
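Because each effective parameter is a monomial in the original rate constants, the log-sensitivity of a reduced parameter with respect to an original one is simply the corresponding exponent. A minimal sketch (the monomial k2·k_m4/k4 is a hypothetical example, not one of the model's actual effective constants):

```python
def log_sensitivity(monomial, param):
    """Sensitivity d(log k_eff)/d(log k_param) of an effective parameter
    k_eff = prod_i k_i ** e_i, represented as a dict {name: exponent}.

    For a monomial this derivative equals the exponent, so critical
    parameters of the full model can be backtracked from the reduced one.
    """
    return monomial.get(param, 0)

# Hypothetical effective constant k_eff = k2 * k_m4 / k4:
k_eff = {"k2": 1, "k_m4": 1, "k4": -1}
print(log_sensitivity(k_eff, "k4"))   # exponent -1: inversely critical
print(log_sensitivity(k_eff, "k7"))   # exponent 0: no influence
```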
Conclusion
We have presented, in a unified framework, various hybrid simplifications of stochastic biochemical models. These simplifications are based on partial Kramers-Moyal expansions and on averaging.
The use of simplified models in stochastic studies of cellular processes has several advantages.
The first advantage of simplified models is the reduction of computational time. We have shown that cycle averaging leads to the most drastic reduction of the computation time. This averaging algorithm, which can be used independently of the Kramers-Moyal expansion, represents a general preconditioning method for both stochastic and deterministic simulations: in stochastic studies it reduces the number of discrete events to be simulated, while in deterministic studies it produces less stiff systems. The preconditioned models can serve as starting points for other reduction methods.
Mathematically, our simplified models are weak approximations of the fully detailed jump Markov processes. This means that all statistical properties of the trajectories of the full model, including steady distributions, transition times, etc., should be rendered without significant loss of accuracy by the simplified models. Of course, after the reduction procedure some variables and reactions may disappear and some resulting parameters are synthetic; however, because the correspondence with the original variables and parameters is known, it is always possible to go back to the details of the full model. In particular, identifying the critical parameters of the reduced model allows one to backtrack the critical parameters of the full model. The Kramers-Moyal expansion, leading to hybrid simplifications, produced only a moderate decrease of the computation time. This limitation is partly due to our choice of low- to medium-size models with rather small numbers of molecules and simple dynamics of the continuous variables. A more substantial computational advantage can be expected for more complex models, in particular for models with oscillating dynamics of the continuous variables, such as molecular clocks.
The second advantage of simplified models lies in the understanding they bring of the stochastic properties of the system. Averaging and hybrid simplifications unravel the origin of noise in multiscale biochemical systems. Noise is generated at the microscopic level of the discrete variables and transferred to the mesoscopic level of the continuous variables. In this transfer the topology of the reaction network plays a role, but other properties are important as well, such as the hierarchy of timescales of the system. Our simplification algorithm explains how and when hierarchies of cycles lead to intermittence and noise in gene networks. In many cases, important properties such as the stationary distribution, the noise amplitude, and the waiting times between successive bursts can be calculated analytically for the hybrid simplifications.
We have discussed several types of hybrid approximations: piecewise deterministic processes and hybrid diffusions with switching, with breakage, and with singular switching that can be assimilated to breakage. Taken together, these results offer a rather complete panorama of intermittence phenomena in biochemical systems.
References
 1.
Kramers H: Brownian motion in a field of force and the diffusion model of chemical reactions. Physica A. 1940, 7: 284–304.
 2.
Moyal J: Stochastic Processes and Statistical Physics. J R Stat Soc London. 1949, Ser. B 11: 150–210.
 3.
Risken H: The Fokker-Planck Equation: Methods of Solution and Applications. 1989, Berlin: Springer
 4.
Ozbudak E, Thattai M, Kurtser I, Grossman A, van Oudenaarden A: Regulation of noise in the expression of a single gene. Nature Genet. 2002, 31: 69–73. 10.1038/ng869
 5.
Swain P, Elowitz M, Siggia E: Intrinsic and extrinsic contributions to stochasticity in gene expression. Proc Natl Acad Sci USA. 2002, 99: 12795–12800. 10.1073/pnas.162041399
 6.
Kaufmann BB, Yang Q, Mettetal JT, van Oudenaarden A: Heritable Stochastic Switching Revealed by Single-Cell Genealogy. PLoS Biology. 2007, 5: 1973–1980. 10.1371/journal.pbio.0050239
 7.
Yu J, Xiao J, Ren X, Lao K, Xie XS: Probing Gene Expression in Live Cells, One Protein Molecule at a Time. Science. 2006, 311: 1600–1603. 10.1126/science.1119623
 8.
Delbrück M: Statistical Fluctuations in Autocatalytic Reactions. J Chem Phys. 1940, 8: 120–124. 10.1063/1.1750549
 9.
Cai L, Friedman N, Xie X: Stochastic protein expression in individual cells at the single molecule level. Nature. 2006, 440 (7082): 358–362. 10.1038/nature04599
 10.
Kaern M, Elston TA, Blake WJ, Collins JJ: Stochasticity in gene expression: from theories to phenotypes. Nature Rev Genet. 2005, 6: 451–464. 10.1038/nrg1615
 11.
Krishna S, Banerjee B, Ramakrishnan T, Shivashankar G: Stochastic simulations of the origins and implications of long-tailed distributions in gene expression. PNAS. 2005, 102: 4771–4776. 10.1073/pnas.0406415102
 12.
Paulsson J: Summing up the noise in gene networks. Nature. 2004, 427: 415–418. 10.1038/nature02257
 13.
Warren P, Tanase-Nicola S, Wolde P: Exact results for noise power spectra in linear biochemical reaction networks. J Chem Phys. 2006, 125 (14): 144904. 10.1063/1.2356472
 14.
Kierzek A, Zaim J, Zielenkiewicz P: The Effect of Transcription and Translation Initiation Frequencies on the Stochastic Fluctuations in Prokaryotic Gene Expression. J Biol Chem. 2001, 276: 8165–8172. 10.1074/jbc.M006264200
 15.
Gillespie DT: A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. J Comput Phys. 1976, 22: 403–434. 10.1016/0021-9991(76)90041-3
 16.
Tian T, Burrage K: Binomial leap methods for simulating stochastic chemical kinetics. J Chem Phys. 2004, 121: 10356. 10.1063/1.1810475
 17.
Gillespie D, Petzold L: Improved leap-size selection for accelerated stochastic simulation. J Chem Phys. 2003, 119: 8229. 10.1063/1.1613254
 18.
Haseltine EL, Rawlings JB: Approximate simulation of coupled fast and slow reactions for stochastic chemical kinetics. J Chem Phys. 2002, 117: 6959–6969. 10.1063/1.1505860
 19.
Ball K, Kurtz TG, Popovic L, Rempala G: Asymptotic analysis of multiscale approximations to reaction networks. Ann Appl Probab. 2006, 16: 1925–1961. 10.1214/105051606000000420
 20.
Alfonsi A, Cancès E, Turinici G, Di Ventura B, Huisinga W: Adaptive simulation of hybrid stochastic and deterministic models for biochemical systems. ESAIM Proceedings. 2005, 14: 1–13.
 21.
Alfonsi A, Cancès E, Turinici G, Di Ventura B, Huisinga W: Exact simulation of hybrid stochastic and deterministic models for biochemical systems. Research Report RR-5435, INRIA. 2004, http://hal.inria.fr/inria00070572/en/
 22.
Stein R, Gossen E, Jones K: Neuronal variability: noise or part of the signal?. Nature Reviews Neuroscience. 2005, 6 (5): 389. 10.1038/nrn1668
 23.
Rudiger S, Shuai J, Huisinga W, Nagaiah C, Warnecke G, Parker I, Falcke M: Hybrid Stochastic and Deterministic Simulations of Calcium Blips. Biophysical Journal. 2007, 93 (6): 1847. 10.1529/biophysj.106.099879
 24.
Cook DL, Gerber AN, Tapscott SJ: Modeling stochastic gene expression: Implications for haploinsufficiency. Proc Natl Acad Sci USA. 1998, 95: 15641–15646. 10.1073/pnas.95.26.15641
 25.
Boxma O, Kaspi H, Kella O, Perry D: On/Off Storage Systems with State-Dependent Input, Output and Switching Rates. Probability in the Engineering and Informational Sciences. 2005, 19: 1–14. 10.1017/S0269964805050011
 26.
Ghosh M, Bagchi A: Modeling stochastic hybrid systems. System Modeling and Optimization. 2005, 166: 269–280.
 27.
Pola G, Bujorianu M, Lygeros J, Di Benedetto M: Stochastic hybrid models: An overview. Proceedings IFAC Conference on Analysis and Design of Hybrid Systems. 2003
 28.
Bujorianu M, Lygeros J: General stochastic hybrid systems: Modelling and optimal control. Proc 43rd Conference on Decision and Control. 2004
 29.
Radulescu O, Muller A, Crudu A: Théorèmes limites pour des processus de Markov à sauts. Synthèse des résultats et applications en biologie moléculaire. Technique et Science Informatiques. 2007, 26: 443–469. 10.3166/tsi.26.443469
 30.
Zeiser S, Franz U, Wittich O, Liebscher V: Simulation of genetic networks modelled by piecewise deterministic Markov processes. IET Systems Biology. 2008, 2 (3): 113–135. 10.1049/iet-syb:20070045
 31.
Gillespie DT: The Chemical Langevin equation. J Chem Phys. 2000, 113: 297–306. 10.1063/1.481811
 32.
Bogoliubov NN, Mitropolski YA: Asymptotic Methods in the Theory of Nonlinear Oscillations. 1961, New York: Gordon and Breach
 33.
Givon D, Kupferman R, Stuart A: Extracting macroscopic dynamics: model problems and algorithms. Nonlinearity. 2004, 17: R55–R127. 10.1088/0951-7715/17/6/R01
 34.
Acharya A, Sawant A: On a computational approach for the approximate dynamics of averaged variables in nonlinear ODE systems: Toward the derivation of constitutive laws of the rate type. J Mech Phys Sol. 2006, 54: 2183–2213. 10.1016/j.jmps.2006.03.007
 35.
Yin G, Zhang Q, Badowski G: Singularly Perturbed Markov Chains: Convergence and Aggregation. Journal of Multivariate Analysis. 2000, 72 (2): 208–229. 10.1006/jmva.1999.1855
 36.
Radulescu O, Gorban AN, Zinovyev A, Lilienbaum A: Robust simplifications of multiscale biochemical networks. BMC Systems Biology. 2008, 2: 86. 10.1186/1752-0509-2-86
 37.
Mastny E, Haseltine E, Rawlings J: Two classes of quasi-steady-state model reductions for stochastic kinetics. J Chem Phys. 2007, 127: 094106. 10.1063/1.2764480
 38.
Van Kampen N: Stochastic Processes in Physics and Chemistry. 2007, Amsterdam: North-Holland, third edition
 39.
Crudu A, Debussche A, Muller A, Radulescu O: Hybrid weak limits and averaging for multiscale stochastic gene networks.
 40.
Davis M: Markov Models and Optimization. 1993, London: Chapman and Hall
 41.
Ethier SN, Kurtz TG: Markov Processes. 1986, New York: John Wiley & Sons
 42.
Bharucha-Reid A: Elements of the Theory of Markov Processes and their Applications. 1960, New York: McGraw-Hill Book Co
 43.
Ikeda N, Watanabe S: Stochastic Differential Equations and Diffusion Processes. Amsterdam: North-Holland
 44.
Gillespie DT: Exact stochastic simulation of coupled chemical reactions. J Phys Chem. 1977, 81: 2340–2361. 10.1021/j100540a008
 45.
Gorban AN, Karlin IV: Invariant Manifolds for Physical and Chemical Kinetics, Lect Notes Phys 660. 2005, Berlin, Heidelberg: Springer
 46.
Allain M: Approximation par un processus de diffusion, des oscillations, autour d'une valeur moyenne, d'un processus de Markov de saut pur. C R Acad Sc Paris. 1976, 282: 891–894.
 47.
Kurtz TG: Limit theorems for sequences of jump Markov processes approximating ordinary differential processes. J Appl Prob. 1971, 8: 344–356. 10.2307/3211904
 48.
Surovtsova I, Sahle S, Pahle J, Kummer U: Approaches to complexity reduction in a systems biology research environment (SYCAMORE). Proceedings of the 37th Conference on Winter Simulation. 2006, 1683–1689.
 49.
Salis H, Sotiropoulos V, Kaznessis Y: Multiscale Hy3S: hybrid stochastic simulation for supercomputers. BMC Bioinformatics. 2006, 7: 93. 10.1186/1471-2105-7-93
 50.
Griffith M, Courtney T, Peccoud J, Sanders W: Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network. Bioinformatics. 2006, 22 (22): 2782. 10.1093/bioinformatics/btl465
 51.
Gorban AN, Radulescu O: Dynamical robustness of biological networks with hierarchical distribution of time scales. IET Systems Biology. 2007, 1: 238–246. 10.1049/iet-syb:20060083
 52.
Gorban AN, Radulescu O: Dynamic and static limitation in multiscale reaction networks, revisited. Advances in Chemical Engineering: Mathematics and Chemical Engineering and Kinetics. Edited by: Marin G, West D, Yablonsky G. 2008, 34: 103–173. Academic Press
 53.
Arnold V: Supplementary Chapters to the Theory of Ordinary Differential Equations. 1978, Moscow: MIR
 54.
Sanders J, Verhulst F: Averaging Methods in Nonlinear Dynamical Systems. 1985, New York: Springer
 55.
Artstein Z: Averaging of time-varying differential equations revisited. Journal of Differential Equations. 2007, 243 (2): 146–167. 10.1016/j.jde.2007.01.022
 56.
Freidlin M: Markov Processes and Differential Equations: Asymptotic Problems. 1996, Basel: Birkhäuser
 57.
Yin G, Zhang Q, Yang H, Yin K: Discrete-time dynamic systems arising from singularly perturbed Markov chains. Nonlinear Analysis: Theory, Methods and Applications. 2001, 47: 4763–4774. 10.1016/S0362-546X(01)00588-0
 58.
Auger P, de la Parra RB, Poggiale JC, Sanchez E, Huu TN: Aggregation of variables and applications to population dynamics. Structured Population Models in Biology and Epidemiology, LNM 1936, Mathematical Biosciences Subseries. Edited by: Magal P, Ruan S. 2008, 209–263. Berlin: Springer
 59.
Radulescu O, Gorban A: Limitation and averaging for deterministic and stochastic biochemical reaction networks. International Workshop on Model Reduction in Reacting Flow, Notre Dame, unpublished proceedings. 2009, http://cam.nd.edu/upcomingconferences/spring2009/talk%20_abstracts/radulescu_abstract.pdf
 60.
Karmarkar R, Bose I: Graded and binary responses in stochastic gene expression. Phys Biol. 2004, 1: 197–204. 10.1088/1478-3967/1/4/001
 61.
Stein R: Some models of neuronal variability. Biophysical Journal. 1967, 7: 37–68. 10.1016/S0006-3495(67)86574-3
 62.
Tuckwell H: Stochastic Processes in the Neurosciences. 1989, Philadelphia: Society for Industrial and Applied Mathematics
 63.
Kierzek A, Zaim J, Zielenkiewicz P: The Effect of Transcription and Translation Initiation Frequencies on the Stochastic Fluctuations in Prokaryotic Gene Expression. Journal of Biological Chemistry. 2001, 276 (11): 8165–8172. 10.1074/jbc.M006264200
 64.
Krishna S, Banerjee B, Ramakrishnan T, Shivashankar G: Stochastic simulations of the origins and implications of long-tailed distributions in gene expression. Proceedings of the National Academy of Sciences. 2005, 102 (13): 4771–4776. 10.1073/pnas.0406415102
 65.
Friedman N, Cai L, Xie X: Linking stochastic dynamics to population distribution: an analytical framework of gene expression. Physical Review Letters. 2006, 97 (16): 168302. 10.1103/PhysRevLett.97.168302
 66.
Reinitz J, Vaisnys J: Theoretical and experimental analysis of the phage lambda genetic switch implies missing levels of cooperativity. J Theor Biol. 1990, 145 (3): 295–318. 10.1016/S0022-5193(05)80111-0
 67.
Arkin A, Ross J, McAdams HH: Stochastic Kinetic Analysis of Developmental Pathway Bifurcation in Phage λ-infected Escherichia coli Cells. Genetics. 1998, 149: 1633–1648.
 68.
Hasty J, Pradines J, Dolnik M, Collins J: Noise-based switches and amplifiers for gene expression. Proc Natl Acad Sci USA. 2000, 97: 2075–2080. 10.1073/pnas.040411297
 69.
Tian T, Burrage K: Bistability and switching in the lysis/lysogeny genetic regulatory network of bacteriophage λ. J Theor Biol. 2004, 227: 229–237. 10.1016/j.jtbi.2003.11.003
 70.
Korolyuk V, Swishchuk A: Semi-Markov Random Evolutions. 1995, Dordrecht: Kluwer
 71.
Gorban AN, Radulescu O: Dynamical robustness of biological networks with hierarchical distribution of time scales. IET Systems Biology. 2007, 1: 238–246. 10.1049/iet-syb:20060083
Additional information
Authors' contributions
OR proposed the methodology to obtain hybrid simplifications. AC and OR chose the examples and implemented the algorithms. AC, AD and OR contributed to the mathematical aspects of the methods part. All authors drafted, read and approved the final manuscript.
Cite this article
Crudu, A., Debussche, A. & Radulescu, O. Hybrid stochastic simplifications for multiscale gene networks. BMC Syst Biol 3, 89 (2009). https://doi.org/10.1186/1752-0509-3-89
Keywords
 Discrete Component
 Stochastic Simulation Algorithm
 Chemical Master Equation
 Jump Intensity
 Gillespie Algorithm