Reduction of linear hierarchical models
Introductory notes
In this section we present a general algorithm for finding dominant subsystems and critical parameters for linear systems with completely separated time scales. Linear systems represent a special situation when all the interactions in the reaction network are monomolecular, i.e., have the form A → B.
Although systems biology models are nonlinear and also contain multimolecular reactions, it is nevertheless useful to have an efficient algorithm for solving linear problems. First, as we shall see in the next section, non-linear systems can include linear subsystems, containing reactions that are pseudo-monomolecular with respect to species internal to the subsystem (at most one internal species is a reactant and at most one is a product). Second, for reactions A + B → ..., if the concentrations c_A and c_B are well separated, say c_A >> c_B, then we can consider this reaction as B → ... with a rate constant proportional to c_A, which is practically constant or changes only slowly. We can assume that this condition is satisfied for all but a small fraction of genuinely non-linear reactions (the set of non-linear reactions changes in time but remains small). Thus, linear models can serve as very effective approximations of the behavior of non-linear models in certain windows of time: in this way, non-linear behavior can be approximated as a sequence of linear dynamics, following one another through a sequence of "phase transitions". Third, linear networks represent a case in which very large reaction network models can be approached analytically, and some intuition and design principles can be learned and partially generalized to the non-linear case. As an example, see the robustness study made in [34]. The linear case offers simple illustrations of the concepts of dominant subsystem, critical monomials and critical parameters.
The algorithm presented here, in its "recipe" form ready for computational implementation, is developed in detail elsewhere [34], with many examples and rigorous justifications.
The structure of a linear (monomolecular) reaction network can be completely defined by a simple digraph, in which vertices correspond to chemical species A_i and edges correspond to reactions A_i → A_j with kinetic constants k_ji > 0. For each vertex A_i, a positive real variable c_i (concentration) is defined.
A "pseudo-species" (labeled ∅) can be defined to collect all degraded products, and degradation reactions can be written as A_i → ∅ with constants k_0i. Production reactions can be represented as ∅ → A_i with rates k_i0.
The kinetic equation is

dc_i/dt = k_i0 + Σ_{j≠i} k_ij c_j − (k_0i + Σ_{j≠i} k_ji) c_i,   (1)

or, in vector form: ċ = K_0 + K c.
The advantage of linear dynamics is that it is completely specified by the eigenvectors and the eigenvalues of the kinetic matrix K.
The system has a unique bounded steady state c_s = −K⁻¹ K_0 if and only if the matrix K is non-singular.
In this case, it is easy to write down the general solution of Eq. (1):

c(t) = c_s + Σ_{k=1}^{n} r^k (l^k, c(0) − c_s) exp(λ_k t),   (2)

where λ_k, l^k, r^k, k = 1,..., n are the eigenvalues, the left eigenvectors (vector-rows) and the right eigenvectors (vector-columns) of the matrix K, respectively, i.e.

K r^k = λ_k r^k,  l^k K = λ_k l^k,

with the normalization (l^i, r^j) = δ_ij, where δ_ij is Kronecker's delta.
Closed systems are characterized by K_0 = 0 (no production reactions, although degradation is permitted). Closed systems are conservative if the matrix K is singular (a particular case is when there is no degradation at all). Then the left kernel of K provides a set of conservation laws (if l K = 0, then the quantities (l, c) are conserved). The solution of the homogeneous linear equation is simply:

c(t) = Σ_{k=1}^{n} r^k (l^k, c(0)) exp(λ_k t).   (4)
If all reaction constants k_ij were known with precision, then the eigenvalues and the eigenvectors of the kinetic matrix could be easily calculated by standard numerical techniques. Furthermore, singular value decomposition can be used for model reduction. But in systems biology models one often has only approximate or relative values of the constants (information on which constant is bigger or smaller than another one). In what follows we will consider the simplest case: when all kinetic constants are very different (separated), i.e. for any two different pairs of indices I = (i, j), J = (i', j') we have either k_I >> k_J or k_J >> k_I. In this case we say that the system is hierarchical, with totally separated timescales (the inverses of the constants k_ij, j ≠ 0).
A hierarchical linear network can be represented as a digraph and a set of orders (integer numbers) associated to each arc (reaction). The lower the order, the more rapid is the reaction (see Fig. 1). It happens that in this case the special structure of the matrix K (originating from a reaction graph) allows us to exploit the strong relation between the dynamics (1) and the topological properties of the digraph. A big advantage of the fully separated network is that, with high precision, the possible values of the left eigenvector components are 0 and 1, and the possible values of the right eigenvector components are −1, 0 and 1 [34]. Thus, if we can provide an algorithm for finding the non-zero components of l^k, r^k, based on the network topology and the ordering of the constants, then this will give us a good approximation to the problem solution (2).
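As an illustration, the 0/±1 asymptotics of the eigenvectors can be checked numerically on a toy network. The following sketch (ours, not part of [34]) uses numpy on a chain A1 → A2 → A3 whose constants are separated by two orders of magnitude; since the kinetic matrix is lower triangular, its eigenvalues are exactly −k21, −k32 and 0.

```python
import numpy as np

# Chain A1 -> A2 -> A3 with well-separated constants (k21 >> k32).
k21, k32 = 100.0, 1.0
K = np.array([[-k21,  0.0, 0.0],
              [ k21, -k32, 0.0],
              [ 0.0,  k32, 0.0]])

eigvals, eigvecs = np.linalg.eig(K)
eigvals, eigvecs = eigvals.real, eigvecs.real   # all eigenvalues are real here

order = np.argsort(eigvals)        # most negative first
lams = eigvals[order]              # close to [-100, -1, 0]

# Right eigenvector for lambda = -k21, normalized so the A1 component is 1;
# it is close to (1, -1, 0): the -1 sits on the first slower vertex downstream.
r = eigvecs[:, order[0]]
r = r / r[0]
print(np.round(lams, 3), np.round(r, 2))
```

The exact eigenvector is (1, −100/99, 1/99), i.e. within 1% of the asymptotic value (1, −1, 0).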
Some basic notions
Two vertices of a graph are called adjacent if they share a common edge. A path is a sequence of adjacent vertices. A graph is connected if any two of its vertices are linked by a path. A maximal connected subgraph of graph G is called a connected component of G. Every graph can be decomposed into connected components.
A directed path is a sequence of adjacent edges where each step goes in the direction of an edge. A vertex A is reachable from a vertex B if there exists an oriented path from B to A.
A nonempty set V of graph vertices forms a sink if there are no oriented edges from A_i ∈ V to any A_j ∉ V. For example, in the reaction graph A1 ← A2 → A3 the one-vertex sets {A1} and {A3} are sinks. A sink is minimal if it does not contain a strictly smaller sink. In the previous example, {A1} and {A3} are minimal sinks. Minimal sinks are also called ergodic components.
A digraph is strongly connected if every vertex A is reachable from any other vertex B. Ergodic components are maximal strongly connected subgraphs of the graph, but the reverse is not true: there may exist maximal strongly connected subgraphs that have outgoing edges and, therefore, are not sinks. If the digraph has no branching (each vertex has only one successor), then we can define a deterministic flow (a discrete dynamical system) on the set of its vertices: every vertex is the origin of a unique directed path.
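These notions are directly computable. The sketch below (our illustration, with illustrative species names) finds the minimal sinks of a digraph as the strongly connected components without outgoing edges, using Kosaraju's algorithm:

```python
# Minimal sinks (ergodic components) of a digraph: the strongly connected
# components with no outgoing edges. Pure-Python sketch (Kosaraju's algorithm).

def strongly_connected_components(graph):
    """graph: dict mapping a node to the list of its successors."""
    visited, order = set(), []

    def dfs(v, g, out):
        visited.add(v)
        stack = [(v, iter(g.get(v, [])))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for w in it:
                if w not in visited:
                    visited.add(w)
                    stack.append((w, iter(g.get(w, []))))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(node)     # record in order of completion

    for v in graph:
        if v not in visited:
            dfs(v, graph, order)
    gt = {v: [] for v in graph}      # transposed graph
    for v, ws in graph.items():
        for w in ws:
            gt.setdefault(w, []).append(v)
    visited.clear()
    components = []
    for v in reversed(order):
        if v not in visited:
            comp = []
            dfs(v, gt, comp)
            components.append(set(comp))
    return components

def minimal_sinks(graph):
    comps = strongly_connected_components(graph)
    comp_of = {v: i for i, c in enumerate(comps) for v in c}
    return [c for i, c in enumerate(comps)
            if all(comp_of[w] == i for v in c for w in graph.get(v, []))]

# Example from the text: A1 <- A2 -> A3 has two minimal sinks, {A1} and {A3}.
g = {"A1": [], "A2": ["A1", "A3"], "A3": []}
print(minimal_sinks(g))
```

A minimal sink may also be a cycle: for A1 → A2 → A3 → A2 the procedure returns the single ergodic component {A2, A3}.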
Basic procedure for approximating eigenvectors
The algorithm we provide is based on the solution of the two simplest cases: 1) a network without cycles and without branching (i.e., with no vertices having more than one outgoing edge) (for example, Fig. 1a), and 2) a network without branching whose unique sink is a cycle (for example, Fig. 1b).
For networks without branching, we can simplify the notation for the kinetic constants by introducing κ_i = k_ji for the unique reaction A_i → A_j leaving A_i. Also it is useful to introduce a map ϕ (see Fig. 1):

ϕ(i) = j if A_i → A_j is a reaction; ϕ(i) = i if A_i has no outgoing reaction.   (5)
Acyclic non-branching network
In this case, for any vertex A_i there exists an eigenvector. If A_i is a sink vertex (i.e. ϕ(i) = i) then this eigenvalue is zero. If A_i is not a sink (i.e. ϕ(i) ≠ i and the reaction A_i → A_ϕ(i) has nonzero rate constant κ_i) then this eigenvector corresponds to the eigenvalue −κ_i. For the left and right eigenvectors of K that correspond to A_i we use the notations l^i (vector-row) and r^i (vector-column), correspondingly.
Let us suppose that A_f is a sink vertex of the network. Its associated right and left eigenvectors corresponding to the zero eigenvalue are given by:

r^f_j = δ_fj;  l^f_j = 1 if ϕ^q(j) = f for some q ≥ 0, and l^f_j = 0 otherwise.   (6)
Generally, right eigenvectors can be constructed by recurrence, starting from the vertex A_i and moving in the direction of the flow. The construction is in the opposite direction for left eigenvectors.
For the right eigenvector r^i, only the coordinates r^i_{ϕ^k(i)} (k = 0, 1, ...) can have nonzero values, and

r^i_i = 1,  r^i_{ϕ^{k+1}(i)} = (κ_{ϕ^k(i)} / (κ_{ϕ^{k+1}(i)} − κ_i)) r^i_{ϕ^k(i)}.   (7)

For the left eigenvector l^i, the coordinate l^i_j can have a nonzero value only if there exists q ≥ 0 such that ϕ^q(j) = i (this q is unique because the system of reactions has no cycles), and

l^i_i = 1,  l^i_j = Π_{s=0}^{q−1} κ_{ϕ^s(j)} / (κ_{ϕ^s(j)} − κ_i)  for j ≠ i.   (8)

These formulas (7, 8) are true for all non-branching acyclic linear systems, even without separation of times. In the case of fully separated systems, they are significantly simplified and do not require knowledge of the exact values of κ_i. Thus, for the left eigenvectors, l^i_i = 1 and, for i ≠ j,

l^i_j = 1 if ϕ^q(j) = i for some q > 0 and κ_{ϕ^s(j)} > κ_i for all s = 0, ..., q − 1;  l^i_j = 0 otherwise.   (9)
For the right eigenvectors we suppose that κ_f = 0 for a sink vertex A_f. Then r^i_i = 1 and

r^i_j = −1 if j is the first vertex downstream of i with κ_j < κ_i;  r^i_j = 0 otherwise.   (10)

The vector r^i has at most two non-zero coordinates. The formula (10) means that to find the −1 component in r^i, one should find the first vertex j downstream of i with κ_j < κ_i (the "bottleneck" vertex): there r^i_j = −1. Following (10, 9) we find, for the example in Fig. 1a:
(11)
(12)
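The zero-one rules (9), (10) can be turned into a few lines of code. In the sketch below (our notation, not the paper's: phi is the successor map of the non-branching network, and rank encodes the ordering of the κ_i, rank 1 being the fastest), the asymptotic supports of the eigenvectors are computed purely combinatorially:

```python
# Zero-one asymptotics (9), (10) for an acyclic non-branching network.
# phi[i] is the unique successor of vertex i (phi[i] == i for a sink);
# rank[i] orders the constants kappa_i (rank 1 = fastest reaction).

def right_eigenvector(i, phi, rank, n):
    # r^i: +1 on i, -1 on the first vertex downstream with a smaller constant
    # (larger rank); the sink acts as kappa = 0, hence always qualifies.
    r = [0] * n
    r[i] = 1
    if phi[i] == i:
        return r            # sink: eigenvalue 0, r^i = e_i
    j = phi[i]
    while not (phi[j] == j or rank[j] > rank[i]):
        j = phi[j]
    r[j] = -1
    return r

def left_eigenvector(i, phi, rank, n):
    # l^i_j = 1 iff the path from j reaches i through faster reactions only.
    l = [0] * n
    for j in range(n):
        k, ok = j, True
        while k != i:
            if phi[k] == k or rank[k] > rank[i]:
                ok = False
                break
            k = phi[k]
        if ok:
            l[j] = 1
    return l

# Chain 0 -> 1 -> 2 -> 3 (vertex 3 is the sink); kappa_1 > kappa_0 > kappa_2.
phi = [1, 2, 3, 3]
rank = [2, 1, 3, 4]          # the rank of the sink is irrelevant
print(right_eigenvector(0, phi, rank, 4))   # [1, 0, -1, 0]
print(left_eigenvector(2, phi, rank, 4))    # [1, 1, 1, 0]
```

For vertex 0, the −1 lands on vertex 2, the first slower step downstream (its own successor, vertex 1, is faster and is passed through); the left eigenvector of vertex 2 lumps together everything upstream that drains into it through faster reactions.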
Non-branching network with a unique simple cyclic sink
In this case we have a reaction network with components A1, ..., A_n whose last τ vertices (after some renumbering) form a reaction cycle: A_{n−τ+1} → A_{n−τ+2} → ... → A_n → A_{n−τ+1}. We assume that the limiting step in this cycle (the reaction with minimal constant) is A_n → A_{n−τ+1}.
In this case the right eigenvector corresponding to the zero eigenvalue has non-zero components only on the vertices belonging to the cycle:

r^0_j ∝ 1/κ_j  for j ∈ [n − τ + 1, n], and r^0_j = 0 elsewhere.   (13)

Similarly, the stationary distribution has non-zero values only at vertices belonging to the cycle. If b = Σ_i c_i is the total (conserved) mass, then the steady state is:

c^s_j = b (1/κ_j) / (Σ_{m=n−τ+1}^{n} 1/κ_m)   (14)

for j ∈ [n − τ + 1, n] and zero elsewhere.
If we have a system with well-separated constants (which means that κ_n ≪ κ_i, i ≠ n), then this expression is simplified, to first order, to

c^s_n ≈ b,  c^s_j ≈ b κ_n/κ_j  for j ≠ n,   (15)

which means that most of the substance is concentrated just before the "bottleneck" A_n → A_{n−τ+1} (c_n ≫ c_i, i ≠ n).
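A quick numerical illustration of the steady state (14) and its first-order form (15), with made-up constants:

```python
# Steady state of a single cycle under mass conservation, per Eq. (14):
# c_j is proportional to 1/kappa_j, normalized to the total mass b. With
# separated constants almost all mass sits just upstream of the limiting step.

def cycle_steady_state(kappa, b=1.0):
    w = [1.0 / k for k in kappa]
    s = sum(w)
    return [b * x / s for x in w]

kappa = [100.0, 10.0, 0.1]          # the step leaving vertex 3 is limiting
c = cycle_steady_state(kappa)
print([round(x, 4) for x in c])     # [0.001, 0.0099, 0.9891]
```

The first-order formula (15) predicts c ≈ (0.1/100, 0.1/10, 1)·b = (0.001, 0.01, 0.99)·b, in agreement within a few percent.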
To approximate the dynamics of the reaction network for κ_n ≪ κ_i, i ≠ n, it is sufficient to remove the slowest step of the cycle, A_n → A_{n−τ+1}. After this removal, we have an acyclic non-branching system of reactions whose eigenvalues and eigenvectors can be computed from the formulas of the previous section. These formulas give n − 1 eigenvector sets corresponding to the n − 1 non-zero eigenvalues λ_i = −κ_i, i = 1, ..., n − 1. For example, removing the step A8 → A6 in Fig. 1b converts the reaction network to that of Fig. 1a, whose dynamics approximates the dynamics of the simple cyclic network.
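This can be checked numerically: with numpy, the eigenvalues of a cycle with well-separated constants are close to those of the chain obtained by removing the limiting step (the constants below are illustrative):

```python
import numpy as np

# Cutting the limiting step of a cycle barely changes the relaxation spectrum:
# compare the eigenvalues of a 3-cycle with those of the chain without the
# slowest step.

def kinetic_matrix(n, edges):
    K = np.zeros((n, n))
    for (i, j), k in edges.items():
        K[i, i] -= k        # consumption of A_i
        K[j, i] += k        # production of A_j
    return K

cycle = {(0, 1): 100.0, (1, 2): 10.0, (2, 0): 0.01}   # (2, 0) is limiting
chain = {(0, 1): 100.0, (1, 2): 10.0}                  # limiting step removed

ev_cycle = np.sort(np.linalg.eigvals(kinetic_matrix(3, cycle)).real)
ev_chain = np.sort(np.linalg.eigvals(kinetic_matrix(3, chain)).real)
print(np.round(ev_cycle, 2))   # nonzero eigenvalues close to -100 and -10
print(np.round(ev_chain, 2))   # exactly -100, -10 and 0
```

The nonzero eigenvalues of the cycle differ from those of the chain only by terms of order κ_n/κ_i; in particular the slowest relaxation rate of the cycle is close to its second slowest constant, not to the limiting one.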
Auxiliary reaction network and auxiliary dynamical system
Now let us consider an arbitrary linear reaction network with well-separated constants. For each A_i, let us define κ_i as the maximal kinetic constant of the reactions A_i → A_j: κ_i = max_j {k_ji}. For the corresponding j we use the notation ϕ(i): ϕ(i) = arg max_j {k_ji}. The function ϕ(i) is defined under the condition that outgoing reactions A_i → A_j exist for A_i. If no such outgoing reactions exist, then we define ϕ(i) = i.
The auxiliary reaction network is the set of reactions A_i → A_ϕ(i) with kinetic constants κ_i. The corresponding kinetic equation is

ċ_i = Σ_{j: ϕ(j)=i} κ_j c_j − κ_i c_i  (with κ_i = 0 if ϕ(i) = i).   (16)

The auxiliary network also defines an auxiliary discrete dynamical system i → ϕ(i) that is used to compute the eigenvectors of the kinetic matrix. The auxiliary network can have several connected components. In each connected component the minimal sink is an attractor of the auxiliary dynamical system, hence it is either a node or a cycle.
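Constructing the auxiliary network amounts to keeping, for each species, only its fastest outgoing reaction. A minimal sketch (our data layout: a dict mapping the edge (i, j) to the constant k_ji of the reaction A_i → A_j):

```python
# Auxiliary reaction network: keep, for each species, only the dominant
# (fastest) consumption channel; species without outgoing reactions are sinks.

def auxiliary_network(n, rates):
    """rates: dict mapping (i, j) -> k_ji for the reaction A_i -> A_j."""
    phi, kappa = {}, {}
    for i in range(n):
        out = {j: k for (src, j), k in rates.items() if src == i}
        if out:
            j = max(out, key=out.get)     # dominant channel
            phi[i], kappa[i] = j, out[j]
        else:
            phi[i], kappa[i] = i, 0.0     # no outgoing reaction: phi(i) = i
    return phi, kappa

# Branching at A0: A0 -> A1 (k = 10) dominates A0 -> A2 (k = 0.1).
rates = {(0, 1): 10.0, (0, 2): 0.1, (1, 2): 1.0}
phi, kappa = auxiliary_network(3, rates)
print(phi)    # {0: 1, 1: 2, 2: 2}
```

The resulting map phi is non-branching by construction, so the auxiliary network falls into the two simple cases solved above.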
General algorithm for calculating the dominant behavior of the linear dynamics
Preprocessing the reaction network
1) Let us consider a reaction network with a given structure and a fixed ordering of the well-separated constants. Using this ordering, let us construct the auxiliary reaction network.
2) If the auxiliary network does not contain cycles, then the auxiliary network kinetics (16) approximates the relaxation of the initial network. To obtain the solution, we use directly the formulas (7, 8) to calculate the eigenvectors (if all κ_i are known) or (9, 10) to obtain the 0–1 asymptotics (if only the ordering of the κ_i is known).
3) In the general case, let the system have several cycles C1, C2, ... with periods τ1, τ2, ... > 1.
By "gluing" the cycles into points, we transform the reaction network as follows. For each cycle C_i we introduce a new vertex A^i. The new set of vertices is obtained by deleting the vertices of the cycles C_i and adding the vertices A^i.
Let us consider all the reactions of the initial network of the form A → B. They can be separated into 5 groups:
1. both A, B ∉ ∪_i C_i;
2. A ∉ ∪_i C_i, but B ∈ C_i;
3. A ∈ C_i, but B ∉ ∪_i C_i;
4. A ∈ C_i, B ∈ C_j, i ≠ j;
5. A, B ∈ C_i.
1. Reactions from the first group ("transitive" reactions) do not change.
2. Reactions from the second group ("entering cycles") transform into A → A^i (to the whole glued cycle) with the same constant.
3. Reactions of the third type ("exiting from cycles") change into A^i → B with a rate constant renormalization: let the cycle C_i be the sequence of reactions A1 → A2 → ... → A_τ → A1, and let the rate constant of the step leaving A_j inside the cycle be k_j. For the limiting reaction of the cycle C_i we use the notation k_lim i. If A = A_j and k is the rate constant of A → B, then the new reaction A^i → B has the rate constant k k_lim i/k_j. This corresponds to the quasistationary distribution on the cycle (15). It is obvious that the new rate constant is smaller than the initial one: k k_lim i/k_j < k, because k_lim i < k_j by the definition of the limiting constant. If, after gluing, several reactions A^i → B appear, then only the one with the maximal constant should be kept.
4. The same constant renormalization is necessary for reactions of the fourth type ("between cycles"). These reactions transform into A^i → A^j.
5. Reactions of the fifth type ("inside cycles") are discarded.
4) After the new network is constructed, we iterate the algorithm from step 1) until we obtain an acyclic network, and exit at step 2).
The algorithm produces a hierarchy of cycles. Notice that the algorithm is based on an asymmetry between reactions entering cycles and reactions exiting cycles in the hierarchy. Indeed, some fluxes entering a cycle C_i can be neglected when they are dominated by a stronger flux bifurcating from the same node (this occurs at the first step of the algorithm, when constructing the auxiliary network). The cycles C_i are minimal sinks of the auxiliary network (they are attractors of the auxiliary dynamical system): there are no reactions A → B in the auxiliary network with A ∈ C_i and B ∉ C_i. Nevertheless, there may be such reactions in the initial network. These exiting fluxes cannot be neglected, because there are no other fluxes exiting C_i to dominate them. The rule of thumb is: neglect any dominated flux except for the fluxes exiting some cycle in the hierarchy. This explains our algorithm; it was rigorously justified in [34].
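One gluing step can be sketched as follows (our simplified data layout; "C1" is a hypothetical label for the glued vertex). Internal reactions are discarded, entering reactions keep their constants, and exiting reactions are renormalized by k·k_lim/k_j:

```python
# One "gluing" step: collapse a cycle into a new node and renormalize the
# constants of its exiting reactions by k * k_lim / k_j, per the quasistationary
# weights of Eq. (15). A sketch, not the full iterative algorithm.

def glue_cycle(rates, cycle, cycle_rates, new_node):
    # rates: {(a, b): k}; cycle: list of its nodes; cycle_rates[i] is the
    # constant of the cycle step leaving cycle[i].
    k_lim = min(cycle_rates)
    in_cycle = set(cycle)
    step_rate = dict(zip(cycle, cycle_rates))
    new = {}
    for (a, b), k in rates.items():
        if a in in_cycle and b in in_cycle:
            continue                          # internal: discarded
        elif a in in_cycle:                   # exiting: renormalize
            key, val = (new_node, b), k * k_lim / step_rate[a]
        elif b in in_cycle:                   # entering: same constant
            key, val = (a, new_node), k
        else:
            key, val = (a, b), k              # transitive: unchanged
        if key not in new or val > new[key]:  # keep only the fastest duplicate
            new[key] = val
    return new

rates = {("A3", "A4"): 10.0, ("A4", "A5"): 5.0, ("A5", "A3"): 1.0,  # the cycle
         ("A4", "A2"): 2.0,                                         # exiting
         ("A2", "A3"): 4.0}                                         # entering
glued = glue_cycle(rates, ["A3", "A4", "A5"], [10.0, 5.0, 1.0], "C1")
print(glued)
```

Here the exiting reaction A4 → A2 (k = 2) is renormalized to 2 · 1/5 = 0.4, reflecting that only a fraction k_lim/k_j of the cycling mass sits on A4 at quasi-stationarity.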
Constructing the dominant kinetic system
Now we show how to find an approximation of the dynamics of the reaction network. To construct this approximation, we produce a new acyclic reaction network on the initial set of vertices A_i, i = 1..n, which is called the dominant kinetic system. The dynamics of this acyclic system can be computed from (7, 8, 9, 10). To construct the dominant kinetic system, the following algorithm is applied:
Let the preprocessed network be the result of the network preprocessing algorithm described in the previous section.
1. In the current network, select the vertices that are glued cycles.
2. For each glued cycle node:
a) Recall its nodes; they form a cycle of length τ_i,
b) Determine the limiting step of this cycle (the reaction with the minimal constant),
c) Remove the glued node from the network,
d) Add the τ_i vertices of the cycle to the network,
e) Add the cycle reactions without the limiting step, with the corresponding constants,
f) If there exists an outgoing reaction from the glued node to some vertex B, substitute it by a reaction → B with the same constant starting from the head of the limiting step (i.e., outgoing reactions are reattached to the heads of the limiting steps),
g) If there exists an incoming reaction B → (glued node), find its prototype in the initial network and restore it.
3. If in the initial hierarchy there existed a "between-cycles" reaction, then find its prototype A → B and substitute the corresponding reaction by one with the same constant, reattaching the beginning of the arrow to the head of the limiting step of the source cycle.
4. Repeat steps 1–4 until no glued cycles are left.
One has to notice that in the process of network preprocessing some reaction rates are substituted by monomials of the initial reaction constants, i.e. expressions of the form k k_lim/k_j. In the totally separated case the values of these monomials are also well separated from the other constants with probability close to 1; however, the initial order of the constants does not prescribe the position of these monomials in the order of the rates. In this case the algorithm produces several dominant systems, one for each possible position of the new monomial rate constants in the order. An example of this will be given later in this section. Such a situation can occur during the network preprocessing, when the maximal reaction constant should be chosen, or in the process of dominant system creation, when determining the limiting step in a cycle.
Finding stationary distributions
The dominant kinetic system fully describes the relaxation modes of the network. The construction of this system depends only on the matrix K and does not depend on the production reactions K_0. In particular, relaxation times do not depend on whether the system is closed or not. However, the stationary distribution c_s and the sequence of relaxation events depend on the production reactions (see Eq. (2)).
For closed systems, steady states are solutions of the linear homogeneous equation Kc = 0; therefore they are determined up to multiplication by positive constants. They form a k-dimensional cone, where k is the multiplicity of the zero eigenvalue of the matrix K, which is also the number of minimal sinks of the network.
Let there be k sink vertices of the auxiliary network, and let A_i, i = 1..n be the vertices of the initial network. Below we describe a procedure for finding a basis of all stationary distributions of a closed network:
1. Take the i-th sink vertex of the auxiliary network.
2. Define x as this sink vertex, set b = 1, and initialize a null vector b^i ∈ R^n.
3. If x is not a glued cycle then it corresponds to a vertex A_j of the initial network; the basis vector b^i then has components b^i_k = δ_jk; stop.
4. If x is a glued cycle then recall all its vertices x_1, ..., x_τ.
5. Determine the limiting (minimal) rate constant κ_lim = min_{s=1..τ} {κ_s} and s_min = arg min_{s=1..τ} {κ_s}, where κ_s is the constant of the cycle step leaving x_s.
6. For each vertex x_j of the cycle repeat:
a) Let c_j = b κ_lim/κ_j if j ≠ s_min, and c_j = b (1 − Σ_{s≠s_min} κ_lim/κ_s) otherwise (to first order, the bottleneck vertex carries almost all of the mass b),
b) if x_j corresponds to a simple vertex A_k then set b^i_k = c_j,
c) if x_j corresponds to a glued cycle then repeat steps 4–6 recursively with x = x_j and b = c_j.
Any possible stationary distribution has the form c^s = Σ_i β_i b^i with β_i ≥ 0. The coefficients β_i are computed from the initial data: β_i is equal to the total initial mass carried by the vertices (for glued cycles we consider the total initial mass of the whole cycle) that are attracted by the i-th sink.
In brief, the distribution of the concentrations on any cycle is approximated by the first-order expression (15), and this procedure is applied recursively for the vertices that represent glued cycles. The state thus obtained is also a good approximation of the steady state of the dominant kinetic system.
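Steps 4–6 above can be sketched as a recursion over the hierarchy of glued cycles (the hierarchy, labels and constants below are hypothetical; the bottleneck of each cycle receives the remaining mass, which to first order is almost all of b):

```python
# Recursive unfolding of a stationary distribution over a hierarchy of glued
# cycles: within each cycle the mass b splits as b * k_lim / kappa_j, and the
# bottleneck vertex receives the rest.

def distribute(node, b, cycles, kappa, out):
    # cycles: glued-cycle label -> list of member nodes;
    # kappa[x]: constant of the step leaving x inside its cycle.
    if node not in cycles:
        out[node] = out.get(node, 0.0) + b    # simple vertex: assign mass
        return
    members = cycles[node]
    k_lim = min(kappa[x] for x in members)
    slowest = min(members, key=lambda x: kappa[x])
    rest = b
    for x in members:
        if x == slowest:
            continue
        share = b * k_lim / kappa[x]
        rest -= share
        distribute(x, share, cycles, kappa, out)
    distribute(slowest, rest, cycles, kappa, out)  # bottleneck gets the rest

# Hypothetical hierarchy: the sink "C2" glues {A2, "C1"}, and "C1" glues {A3, A4}.
cycles = {"C2": ["A2", "C1"], "C1": ["A3", "A4"]}
kappa = {"A2": 10.0, "C1": 0.1, "A3": 5.0, "A4": 0.5}
out = {}
distribute("C2", 1.0, cycles, kappa, out)
print(out)    # A4, the innermost bottleneck, carries almost all the mass
```

In this example A2 receives 1·0.1/10 = 0.01 of the mass, the glued inner cycle receives 0.99, and within it A3 gets 0.99·0.5/5 = 0.099, leaving 0.891 on the bottleneck A4.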
Open systems can be reduced to closed ones by considering that all production reactions originate from the node ∅, which has concentration c_0 = 1. The corresponding reactions are ∅ → A_i and the constants are the production rates k_i0. The normalization c_0 = 1 is possible for all bounded steady states, because these are determined up to multiplication by a constant. Furthermore, all steady states are bounded provided that the following topological condition holds: if there exists an oriented path from ∅ to A_i, then there exists an oriented path from A_i to ∅. We suppose this condition to be always fulfilled. Applying the algorithm to the closed system, we obtain a steady state that is then normalized so that its component on ∅ equals 1.
Example. Let us consider the example of the network shown in Fig. 2 (1). Below we briefly detail every step of the algorithm.
2) The auxiliary reaction network is constructed (this gives a non-branching network);
3) The cycle A3 → A4 → A5 → A3 in the auxiliary reaction network is glued into one vertex (shown by the circle node); in the initial network we find an "exiting from cycle" reaction A4 → A2, renormalize its rate, and insert it in the new network;
4) The cycle A3 → A2 → A3 in the new auxiliary reaction network (which now coincides with the network itself) is glued into one vertex; the network is now acyclic and we stop the network preprocessing.
Now if we restore the cycle A3 → A2 → A3 and try to determine the limiting step in it, we have two possibilities: the renormalized constant of the reaction exiting the first glued cycle (denote it k') is either smaller or larger than k32. Let us consider the two cases separately.
Case k' < k32
3.1.1) Since k' < k32, we remove the limiting step, the reaction from the glued vertex to A2, and obtain 3.1.2).
3.1.3) We restore the glued cycle and recall that the reaction from A2 into the glued vertex corresponds to A2 → A3 in the initial network.
3.1.4) We remove the limiting step reaction in the cycle A3 → A4 → A5 → A3 (it is A5 → A3) and, as a result, we obtain the acyclic dominant kinetic system shown in Fig. 2 (3.1.4).
Case k' > k32
3.2.1) Since k' > k32, we remove the limiting step, the reaction from A2 into the glued vertex, and obtain 3.2.2).
3.2.3) We restore the glued cycle; this time we should reattach the exiting reaction → A2 to the head of the limiting step in the cycle (it is the vertex A5), with the renormalized rate for A5 → A2.
3.2.4) We remove the limiting step reaction in the cycle A3 → A4 → A5 → A3 (it is A5 → A3) and, as a result, we obtain the acyclic dominant kinetic system shown in Fig. 2 (3.2.4).
Discussion and perspectives
Dominant approximations of hierarchical linear reaction networks allow us to introduce some new concepts important for the dynamics of multiscale systems.
Hybrid and qualitative dynamics
Piecewise affine dynamics has been widely used to approximate dynamics of gene regulatory networks [35–37] as a sequence of discrete transitions between attractors of affine systems. This picture is based on threshold response of genes in models with steep regulation functions (Hill functions and other representations of sigmoidal response) and is not directly related to time scales. Here, we emphasize another possible way to obtain hybrid, or qualitative representations of dynamics, based on time separation.
Indeed, the zero-one approximation of eigenvectors in hierarchical linear systems justifies a discrete coding of the dynamics. Suppose that the initial state is concentrated in j_0: c_i(0) ≈ δ_{i j_0}. At times just larger than 1/λ_k an exponential vanishes in Eq. (2) and the state has a "jump" −r^k(l^k, c(0) − c_s). Let us consider that the eigenvalues are ordered, λ1 >> λ2 >> ... >> λ_{n−1}. Then the sequence of right eigenvectors r^k such that (l^k, c(0) − c_s) ≠ 0 codes the dynamics starting in c(0). In other words, there is a sequence of well-separated times τ1 = 1/λ1 << τ2 = 1/λ2 << ... << τ_{n−1} = 1/λ_{n−1} such that something happens (a state transition) between each one of these times. Left eigenvectors provide the lumping (several species cumulated to form a pseudospecies) and right eigenvectors provide the sequence of state transitions. On timescales τ_k < t < τ_{k+1} one can observe a jump −r^k in state space, provided that (l^k, c(0) − c_s) ≠ 0. On this timescale the dynamics is equivalent to the degradation of the pseudospecies (l^k, c): d(l^k, c)/dt = −λ_k (l^k, c).
Critical parameters and design principles
Our approach to dominant subsystems emphasizes some simple but important principles. First of all, the dynamics of a hierarchical linear network is specified if a) the topology of the network is given, b) to each reaction we associate a positive integer representing its order (1 for the most rapid reaction, 2 for the second most rapid reaction, and so on), and c) for cyclic topologies, some monomials grouping constants of several reactions are also ordered in the same manner (which reactions these monomials involve depends both on the topology and on the initial ordering).
In the process of simplification some reaction pathways are dominated and do not appear in the dominant subsystem. Therefore, the corresponding constants are not critical for the system: although their ordering matters for establishing the simplification, their precise values have little importance. Because the parameters of the dominant subsystem are generally monomials of the parameters of the whole system, the critical parameters are those that occur in critical monomials. Our findings show rather counter-intuitive properties of critical and non-critical parameters, which can be useful as design principles. Thus, in cycles, the limiting step (slowest reaction) has little influence on the dynamics (though it is important for the steady state). Dynamically, a cycle with separated constants behaves like the chain obtained from the cycle by eliminating the limiting step. In particular, the slowest relaxation time of a cycle is the inverse of its second slowest constant [1, 34].
We should add some words about the relation between linear and non-linear models. Mathematical models of biochemical reaction networks in molecular biology necessarily contain non-linear, non-monomolecular reactions (complex binding, catalysis, etc.). However, the algorithms of model reduction developed for linear networks can be useful in systems biology in several situations:
1) When some submechanisms of a complex and non-linear network are linear, given fixed (or slowly changing) values of external inputs (boundaries);
2) For approximating non-linear dynamics. For multiscale nonlinear reaction networks the expected dynamical behaviour is to be approximated by a system of dominant networks. These networks may change in time but remain small. To give an example, we provide Fig. 3S1–3S2 in Additional File 1, demonstrating that in a model of the complex reaction network of the NF-κB pathway, containing 17 multimolecular reactions, only two reactions show genuinely non-linear behavior in some windows of time, two more show border-line behavior, and all others have well-separated reactant concentrations at any moment of time. The rigorous justification of these hybrid approximations for mass-action reaction networks will be discussed elsewhere.
Reduction of non-linear multiscale systems
Complex formation is a source of nonlinearity in biochemical networks. For instance, in signalling, ligand molecules form complexes with receptors. Transcription factors are often dimers or multimers or are sequestered by forming complexes with their inhibitors. In these examples, the reaction rates are non-linear functions of the concentrations of two or more molecules.
To construct a nonlinear reaction network we need the list of components {A1, ..., A_n} and the list of reactions (the reaction mechanism):

Σ_i α_ji A_i → Σ_k β_jk A_k,   (17)
where j ∈ [1, r] is the reaction number. Unless reactants and products belong to compartments of different volumes, α_ji and β_jk are nonnegative integers (stoichiometric coefficients). Reactions involving components from different compartments have non-integer stoichiometry. For instance, a reaction translocating a molecule from the nucleus to the cytosol has stoichiometry (..., 1, k_v, ...), where k_v is the volume ratio of cytosol to nucleus.
The dynamics of nonlinear networks is described by a system of differential equations:

ċ = S R(c) = Σ_j R_j(c) ν_j,   (18)

where ν_j = β_j − α_j is the global stoichiometric vector of reaction j, and S is the stoichiometric matrix whose columns are the vectors ν_j. The reaction rates R_j(c) are non-linear functions of the concentrations. For instance, the mass action law reads R_j(c) = k_j Π_i c_i^{α_ji}.
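For a concrete instance of Eq. (18), consider a hypothetical two-reaction mechanism, A + B ⇄ C, under mass-action kinetics:

```python
import numpy as np

# Eq. (18) for the mechanism A + B -> C (k1 = 2.0) and C -> A + B (k2 = 0.5).
# alpha[i, j] = alpha_ji: rows are species (A, B, C), columns are reactions.
alpha = np.array([[1, 0],
                  [1, 0],
                  [0, 1]])
beta  = np.array([[0, 1],
                  [0, 1],
                  [1, 0]])
S = beta - alpha                 # columns are the vectors nu_j = beta_j - alpha_j
k = np.array([2.0, 0.5])

def rates(c):
    # Mass action: R_j(c) = k_j * prod_i c_i^alpha_ji
    return k * np.prod(c[:, None] ** alpha, axis=0)

def cdot(c):
    return S @ rates(c)          # Eq. (18)

c = np.array([1.0, 2.0, 0.5])
print(cdot(c))                   # [-3.75 -3.75  3.75]
```

With these concentrations, R = (2·1·2, 0.5·0.5) = (4, 0.25), so A and B are consumed and C produced at the net rate 3.75.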
There are no simple rules relating timescales to reaction constants in nonlinear models. The units of the inverse constants of bimolecular reactions are concentration multiplied by time, and one needs at least one concentration value in order to construct a timescale. Generally, timescales are functions of many reaction constants and concentration variables. These functions are not necessarily smooth: near bifurcations (for instance, near Hopf or saddle-node bifurcations), at least one timescale of the system diverges for finite changes of the reaction constants. Nevertheless, nonlinear biochemical networks have wide distributions of timescales, as can be shown by simple (Jacobian-based) analysis of models.
Various reduction methods for nonlinear models are based on projecting the dynamics onto a lower-dimensional invariant manifold [4–8]. The reduced models are systems of differential equations, but no longer networks of chemical reactions. Quasi-equilibrium and quasi-stationarity methods keep the network structure of the model and propose lumped reaction mechanisms as dominant subsystems. This approach has some advantages: it leads to a more transparent analysis of the results and of design principles, produces hierarchies of models, and facilitates model comparison. Graphical reduction methods using elementary modes were proposed by Clarke [26] for chemical systems and, more recently in systems biology, by Klamt [38]. Similar methods can be found in [39], from which we have borrowed the terminology. In these methods, the choice of the species to be eliminated and of the reactions to be aggregated, as well as the calculation of the rates of the elementary modes, have no theoretical justification, and their inappropriate use can alter the dynamics (for instance, as Clarke noticed, the stability of limit cycles is not guaranteed). Thus, in order to have a complete practical recipe that applies to multiscale biochemical networks we need to solve three more problems: detection of the rapid species, resolution of the quasi-stationarity equations, and calculation of the reaction rates of the dominant mechanisms.
A major improvement in calculating dominant subsystems can be obtained by combining quasi-stationarity and averaging. Averaging techniques are widely used in physics and chemistry to simplify models by eliminating fast, oscillating (microscopic) variables [22–24]. Our use of averaging is different, because we employ it to obtain averaged stationarity equations for slow, non-oscillating variables and to eliminate these species. After choosing a "middle" time scale (corresponding to the time resolution of the experiment), we want to reduce all scales that are faster but also all scales that are slower than this middle scale. In order to do that we provide a unified framework for species elimination and reaction aggregation, either by quasi-stationarity (fast species) or by averaged stationarity (slow species).
Let I be the set of indices of the intermediate components, those that will be eliminated. The rates of the reactions that either produce or consume species from I depend on the concentrations of the intermediate species and also on the concentrations of other species, which in the terminology of Temkin [39] are called terminal. Let T be the set of indices of the terminal species. Terminal species represent the frontier between the rest of the system and the subsystem made of intermediate species. Although the name boundary species could be more appropriate than terminal, the former term has already been employed in systems biology with a different meaning, namely species whose concentrations are fixed in a simulation.
Extracting from the matrix S the columns corresponding to these reactions and the rows corresponding to the intermediate and terminal species, we obtain the intermediate stoichiometric matrix S_I and the terminal stoichiometric matrix S_T, respectively.
Eliminating fast species: quasi-stationarity
In multiscale biochemical systems, some components react much more rapidly to changes in the environment than others. There can be multiple reasons for the existence of such fast species. Rapidly transformed or rapidly consumed molecules (for instance those taking part in metabolic chains or in rapid chemical transformations such as phosphorylation), or promoter sites submitted to rapid binding/unbinding processes, are examples of fast species. Fast species are good candidates for intermediate species: indeed, it is easy to prove that they can be eliminated by quasistationarity. When production rates are not weak, fast species are those whose concentrations are small and well separated from the concentrations of the other species. Though straightforward, the precise condition connecting quasi-stationarity and smallness of concentrations cannot easily be found in the literature, hence we briefly discuss it below.
Let ϵ be a small parameter representing the order of the intermediate concentrations. Suppose for simplicity that the reactions are pseudo-monomolecular. This means that S_I R_I(c_I, c_T) = K_I(c_T) c_I + P(c_T), where R_I is the restriction of the vector R to the intermediate species and P(c_T) gathers the production terms coming from terminal species. An important assumption is P(c_T) = O(1), meaning that the production of intermediate species is not weak.
Suppose that among the reactions consuming intermediates, at least some have rates of order O(1). This is usually the case, because these reactions produce terminal species, which have larger concentrations.
Because c_I = O(ϵ), it follows that K_I(c_T) = O(1/ϵ). This leads to the following asymptotic form:

ϵ dc̃_I/dt = K̃_I(c_T) c̃_I + P(c_T),    (19)

dc_{I^c}/dt = S_{I^c} R(ϵ c̃_I, c_{I^c}),    (20)
where c̃_I = c_I/ϵ, K̃_I = ϵ K_I = O(1), and I^c is the complement of I, designating the species other than I. Intermediate species are fast and the system (19) can be reduced using Tikhonov's [40, 41] and Fenichel's [42] results. According to these results, after a short lapse of time, the system evolves on an invariant manifold (an invariant manifold is defined by the property that any trajectory starting in the manifold stays inside the manifold) situated at a distance O(ϵ) from the quasi-steady state (QSS) manifold defined by the quasi-stationarity equations:
K̃_I(c_T) c̃_I + P(c_T) = 0.    (21)
Quasi-stationarity equations can be used to express the concentrations of the intermediate species as functions of the concentrations of terminal species. If the matrix K_I does not have full rank, conservation laws should be added to the quasi-stationarity equations in order to obtain a full rank system. Let μ_1, ..., μ_k be a basis of the left kernel of S_I (a complete set of conservation laws). We say that the species of indices I are quasi-stationary if they approximately fulfill the equations:
S_I R_I(c_I, c_T) = 0,    (22)
and exactly fulfill the conservation laws:
μ_i · c_I = C_i,  i = 1, ..., k,    (23)

where C_i are real constants.
Fast, quasi-stationary species are generally difficult to detect. For instance, the strong production condition P(c_T) = O(1), although informative for understanding the dynamics, cannot be used in practice. Furthermore, small concentration is not a necessary condition for quasi-stationarity. Therefore, our practical method for detecting fast, quasi-stationary species is based on directly checking Eqs. (22), (23) (see Fig 3a and the Results section for an example).
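A minimal numerical sketch of this direct check, on a hypothetical chain A → X → B with well separated constants (the rate values are arbitrary illustrations):

```python
import numpy as np

# Hypothetical chain A -> X -> B; X is a candidate fast intermediate
k1, k2 = 1.0, 100.0

S = np.array([[-1,  0],    # A
              [ 1, -1],    # X
              [ 0,  1]])   # B

def rates(c):
    A, X, B = c
    return np.array([k1 * A, k2 * X])

c = np.array([1.0, 0.0, 0.0])
dt = 1e-4
for _ in range(20000):                 # explicit Euler up to t = 2
    c = c + dt * (S @ rates(c))

# direct check of Eq. (22) for I = {X}: the net production rate of X
# should be small relative to its one-way fluxes
net = k1 * c[0] - k2 * c[1]
rel_residual = abs(net) / (k1 * c[0])
```

After the short transient, the relative residual of the quasi-stationarity equation for X is of order k1/k2, confirming that X can be treated as quasi-stationary.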
Once quasi-stationary species are detected, the recipe proposed by Clarke [26] can be applied to simplify the reaction mechanism. Let us reformulate this recipe here:
1. Eliminate the intermediate concentrations by solving the equations (22), (23); express c_I as a function of c_T.
2. Replace the mechanism by "simple sub-mechanisms".
3. Compute the rates of the simple sub-mechanisms as functions of c_T.
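These three steps can be sketched numerically for a hypothetical chain A → X → B with X a fast intermediate (a minimal illustration with invented rate values, not the general algorithm):

```python
# Clarke's three steps for the chain A --k1--> X --k2--> B, X intermediate
k1, k2 = 1.0, 100.0

# Step 1: solve quasi-stationarity k1*A - k2*X = 0  =>  X = (k1/k2)*A
x_qss = lambda A: (k1 / k2) * A

# Step 2: the only simple sub-mechanism transforms A directly into B
# Step 3: its rate, expressed through terminal concentrations, is k1*A
reduced_rate = lambda A: k1 * A

# sanity check: terminal trajectories of full and reduced models agree
dt, n = 1e-4, 30000
A_f, X_f, B_f = 1.0, 0.0, 0.0            # full model (explicit Euler)
A_r, B_r = 1.0, 0.0                      # reduced model A -> B
for _ in range(n):
    r1, r2 = k1 * A_f, k2 * X_f
    A_f, X_f, B_f = A_f - dt * r1, X_f + dt * (r1 - r2), B_f + dt * r2
    r = reduced_rate(A_r)
    A_r, B_r = A_r - dt * r, B_r + dt * r
```

The terminal species A and B of the reduced model track the full model up to an error of order k1/k2, the concentration of the eliminated intermediate.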
The simplicity criterion employed by Clarke does not follow from a physical principle. Nevertheless, in systems biology, biochemical reactions are simplified representations of complex physico-chemical processes. In the absence of detailed information, simplicity arguments are often employed. Elementary modes analysis, widely used in metabolic control and gene network analysis [43–45], is based on exactly the same argument.
The same recipe applies also to model comparison, when we want to compare two models which differ in complexity (some species in one model are not present in the second). In this situation we declare the extra species intermediate and apply the three steps of the algorithm.
Simple sub-mechanisms and rates
Let us introduce some more definitions. A reaction route is a combination of reactions in ℛ_I transforming terminal species into other terminal species and conserving the intermediate species. It is defined by an integer coefficient vector γ ∈ ℤ^s (the dimension s is the number of reactions in ℛ_I) satisfying the following three conditions:
S_I γ = 0,    (24)

γ_i ≥ 0, if the reaction i is irreversible,    (25)

||S_T γ|| > 0.    (26)
Reaction routes are usually defined [39] without the condition (26). By imposing this condition, we exclude internal cycles with zero terminal stoichiometry.
A sub-mechanism M(γ) is the set of all the reactions in the reaction route γ, M(γ) = {i | γ_i ≠ 0}. A sub-mechanism is simple if it is minimal with respect to inclusion, i.e. if M(γ') ⊆ M(γ) implies γ = γ'. Simple sub-mechanisms are pathways with a minimal number of reactions, connecting terminal species without producing accumulation or depletion of the intermediate species. Thus, they are candidates for reduced reaction mechanisms. Simple sub-mechanisms are minimal dependent sets in oriented matroids [46], similar to elementary modes in flux balance analysis [43]. Algorithms for finding elementary modes can be applied to the search for simple sub-mechanisms [43–45].
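For toy networks, the reaction routes and their minimal supports can be enumerated by brute force (a sketch on an invented branch point A → X, X → B, X → C; real tools use the dedicated elementary-mode algorithms cited above):

```python
import itertools
import numpy as np

# Reactions: r0: A -> X, r1: X -> B, r2: X -> C (all irreversible), X intermediate
S_I = np.array([[1, -1, -1]])            # row: X
S_T = np.array([[-1, 0, 0],              # A
                [ 0, 1, 0],              # B
                [ 0, 0, 1]])             # C

routes = []
# brute-force small nonnegative integer vectors (fine for toy examples only)
for gamma in itertools.product(range(3), repeat=3):
    g = np.array(gamma)
    if np.all(S_I @ g == 0) and np.linalg.norm(S_T @ g) > 0:
        routes.append(g)

def support(g):
    return frozenset(np.nonzero(g)[0])

# keep routes whose support (sub-mechanism) is minimal w.r.t. inclusion
simple = [g for g in routes
          if not any(support(h) < support(g) for h in routes)]
supports = {support(g) for g in simple}
```

The two simple sub-mechanisms found, {r0, r1} and {r2, r0}, correspond to the overall transformations A → B and A → C, as expected.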
In the reduced model, the reactions of the intermediate mechanism are replaced by the sub-mechanisms γ_1, ..., γ_s.
Each terminal species is produced or consumed by one or several reactions of the intermediate mechanism. The reduction should preserve the flux of each terminal species, meaning that the following equation should be satisfied identically, for all c_T and c_I satisfying (22), (23):

Σ_i R̃_i(c_T) S_T γ_i = S_T R(c_I, c_T),    (27)
where R̃_i are the rates of the simple sub-mechanisms.
Suppose that for any simple sub-mechanism i there is a terminal species j such that S_T γ_i is the unique vector (among the s different ones) having a nonzero coordinate j, (S_T γ_i)_j ≠ 0. Then, there is a straightforward solution to (27):
R̃_i(c_T) = (S_T R(c_I, c_T))_j / (S_T γ_i)_j.    (28)
The above uniqueness condition is not fulfilled if there are two sub-mechanisms whose terminal stoichiometries are proportional. This situation can be avoided by quotienting with respect to the following equivalence relation: γ_i and γ_j are equivalent iff S_T γ_i = α S_T γ_j for some α ≠ 0. After discarding some sub-mechanisms and keeping only one representative per class, we obtain a reduced set of simple sub-mechanisms whose rates can be calculated from (28).
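As a sketch of this rate assignment, consider a hypothetical branch point A → X, X → B, X → C with X quasi-stationary (rate constants invented for the illustration); each route's terminal stoichiometry has a coordinate unique to it (B, resp. C), so the rule of Eq. (28) applies directly:

```python
import numpy as np

# Branched chain: r0: A -> X (k0), r1: X -> B (k1), r2: X -> C (k2)
k0, k1, k2 = 1.0, 3.0, 2.0
S_T = np.array([[-1, 0, 0],   # A
                [ 0, 1, 0],   # B
                [ 0, 0, 1]])  # C
gammas = [np.array([1, 1, 0]),    # sub-mechanism A -> B
          np.array([1, 0, 1])]    # sub-mechanism A -> C

A = 0.5
X = k0 * A / (k1 + k2)            # quasi-stationarity: k0*A = (k1 + k2)*X
R = np.array([k0 * A, k1 * X, k2 * X])

# Eq. (28): for each sub-mechanism, pick a terminal species unique to it
# (B for the first route, C for the second) and divide the net flux by
# the route's terminal stoichiometry
j_unique = [1, 2]
rates = [(S_T @ R)[j] / (S_T @ g)[j] for g, j in zip(gammas, j_unique)]
```

The two sub-mechanism rates add up to the consumption flux of A, so the flux of every terminal species is preserved by the reduction.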
Dominant solutions to the quasi-stationarity equations, multiscale ensembles
The most difficult part of the above algorithm is to solve the quasi-stationarity equations (22),(23). Even in the monomolecular case, symbolic solutions of the linear system (21) can involve long expressions. Furthermore, mass action law leads to polynomial equations in the binary or multi-molecular case. Symbolic methods for solutions of systems of polynomial equations are limited to a small number of variables.
In this subsection we show how the multi-scale nature of the system can be used to obtain approximate, dominant solutions of the quasi-stationarity equations.
In linear hierarchical models, ensembles with well separated constants appear (see also [1]). We could represent them by a log-uniform distribution in a sufficiently big interval log k ∈ [α, β], but most of the properties of this probability distribution will not be used here. The only property that we will use is the following: if k_i > k_j, then k_i/k_j ≫ 1 (with probability close to one). It means that we can assume that k_i/k_j ≫ a for any preassigned positive value of a that does not depend on the k values. One can interpret this property as an asymptotic one for α → -∞, β → ∞. This property allows us to simplify algebraic formulas. For example, k_i + k_j can be substituted by max{k_i, k_j} (with small relative error), or (a k_i + b k_j)/(c k_i + d k_j) by a/c whenever k_i ≫ k_j, for nonzero a, b, c, d.
Of course, some ambiguity can be introduced: for example, what is (k_1 + k_2) − k_1, if k_1 ≫ k_2? If we first simplify the expression in brackets, it is zero, but if we open the brackets without simplification, it is k_2. This is a standard difficulty in the use of relative errors for round-off. If we estimate the error in the final answer, and only then simplify, we avoid this difficulty. Use of the o and O symbols also helps to control the error qualitatively: if k_1 ≫ k_2, then we can write k_1 + k_2 = k_1(1 + o(1)), and k_1(1 + o(1)) − k_1 = k_1 o(1). The last expression is neither zero, nor absolutely small – it is just relatively small with respect to k_1.
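These dominance rules, including a quotient rule of the form (a k_1 + b k_2)/(c k_1 + d k_2) ≈ a/c, are easy to check numerically; all values below are arbitrary, well separated illustrations:

```python
# With well separated constants, sums are dominated by their largest term
k1, k2 = 1e6, 1.0     # hypothetical rate constants, k1 >> k2

# k1 + k2 ~ max(k1, k2), with small relative error
rel_err = abs((k1 + k2) - max(k1, k2)) / (k1 + k2)

# (a*k1 + b*k2)/(c*k1 + d*k2) ~ a/c for nonzero coefficients
a, b, c, d = 2.0, 5.0, 3.0, 7.0
exact = (a * k1 + b * k2) / (c * k1 + d * k2)
approx = a / c

# but (k1 + k2) - k1 must be simplified only after error control:
# it equals k2, not 0, even though k2 is negligible inside the sum
difference = (k1 + k2) - k1
```

The last line illustrates the round-off pitfall discussed above: the difference is relatively small with respect to k1, but it is not zero.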
It is slightly more difficult to solve equations. Some recipes have been proposed, such as Newton polyhedra for approximate solutions of polynomial systems of equations [47], but this type of method suffers from combinatorial complexity. Here we use a simpler, but less rigorous approach. In the case of pseudo-monomolecular subsystems, our algorithms for linear hierarchical systems are enough for this purpose. In general, we choose the dominant terms in the solutions as monomials of the parameters. This can be done either by educated guess, or by testing numerically the orders of various terms in the equations. The most frequent, truly non-linear simplification that occurs in biochemical models is the "min-funnel", which we present below.
Let us consider the production of a complex C from two proteins A and B, by the reactions ∅ → A (production of A, rate k_A), ∅ → B (production of B, rate k_B), A → ∅ (degradation of A, rate constant k_deg,A), B → ∅ (degradation of B, rate constant k_deg,B), and A + B → C (complex formation, rate constant k_c).
Supposing A, B quasi-stationary, we have to find the positive solutions of the equations k_A − k_deg,A A − k_c AB = 0, k_B − k_deg,B B − k_c AB = 0. We will consider two cases: a) k_deg,A k_deg,B/k_c ≪ k_A ≪ k_B and b) k_deg,A k_deg,B/k_c ≪ k_B ≪ k_A. Both cases mean that degradation of A, B is weak and/or the propensity of complex formation is high. Case a) means also that B is in excess, the opposite being true in case b).
Let us consider case a). We guess that, in the dominant solution, the order of k_deg,B B is larger than the orders of k_deg,A A and k_c AB. From the linear equation k_A − k_B = k_deg,A A − k_deg,B B we obtain k_deg,B B = k_B, and from the nonlinear equation for A we obtain k_c AB = k_A, hence B = k_B/k_deg,B and A = k_A k_deg,B/(k_c k_B). Finally, we have k_deg,A A = (k_deg,A k_deg,B/k_c)(k_A/k_B) ≪ k_c AB, consistently with the starting guess. The dominant solution in case b) is obtained by symmetry from the one in case a). The quantity of interest in this example, for which we want a reduced expression, is the production rate of the complex R_c = k_c AB. Actually, the two solutions can be summarized by:
R_c = min(k_A, k_B).    (29)
Using the exact solution of the system (after eliminating A from the linear equation we remain with a quadratic equation for B) we can show that the min-funnel approximation (29) is valid under less restrictive conditions. The only separation condition that we need is min(k_A, k_B) ≫ k_deg,A k_deg,B/k_c. We can easily identify the critical parameters k_A, k_B and the non-critical ones k_deg,A, k_deg,B, k_c. The validity of the expression (29) depends on order relations involving monomials of critical and non-critical parameters.
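The quadratic elimination can be checked numerically; the sketch below (illustrative parameter values) solves the quasi-stationarity system exactly and compares R_c with min(k_A, k_B):

```python
import numpy as np

# Quasi-stationarity equations of the min-funnel:
#   k_A - kdA*A - kc*A*B = 0,   k_B - kdB*B - kc*A*B = 0
def complex_flux(kA, kB, kdA, kdB, kc):
    # eliminating A = kA/(kdA + kc*B) leaves a quadratic equation for B
    coeffs = [kdB * kc, kc * kA + kdA * kdB - kc * kB, -kB * kdA]
    B = max(np.roots(coeffs).real)       # the unique positive root
    A = kA / (kdA + kc * B)
    return kc * A * B                    # exact R_c

# separation condition min(kA, kB) >> kdA*kdB/kc holds (1e-4 here)
kdA, kdB, kc = 0.01, 0.01, 1.0
errors = [abs(complex_flux(kA, kB, kdA, kdB, kc) - min(kA, kB)) / min(kA, kB)
          for kA, kB in [(1.0, 5.0), (5.0, 1.0), (2.0, 2.5)]]
```

For all three parameter sets, the exact complex-formation flux agrees with min(k_A, k_B) to well within a fraction of a percent, even when k_A and k_B are not strongly separated from each other.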
Eliminating slow species: averaging
Averaging is a useful model reduction technique for high-dimensional clocks or for other types of oscillating molecular systems (the activity of some transcription factors, among which NF-κB, presents oscillations under some conditions).
Averaging can be applied rather generally [22–24] to produce coarse grained quantities and reduced models. The typical mathematical result applying here is due to Pontryagin and Rodygin [48, 49]. Supposing that the oscillating species are x, the non-oscillating species are y, and ϵ is a small parameter, then we have:
ϵ dx/dt = f(x, y),    (30)

dy/dt = g(x, y).    (31)
It is supposed that for any y, the fast dynamics (30) has an attractive hyperbolic limit cycle x = ψ (τ, y), of period T(y): ψ (τ + T(y), y) = ψ(τ, y) (τ = t/ϵ). Then, after a short transient, the slow variable satisfies the averaged equation:
dy/dt = (1/T(y)) ∫₀^{T(y)} g(ψ(τ, y), y) dτ.    (32)
The result can be extended to the case when x = ψ(τ, y) describes damped oscillations, with damping time much larger than ϵ, i.e. when the fast dynamics (30) has a stable focus and the eigenvalues of the Jacobian ∂f/∂x calculated at the focus are of the form -λ ± iμ, with 0 < λ ≪ μ = O(1/ϵ).
The following averaged steady state equation allows us to eliminate the slow species y:
(1/T(y)) ∫₀^{T(y)} g(ψ(τ, y), y) dτ = 0.    (33)
If (32) has a stable steady state, we always reach this situation. In this case, the slow non-oscillating variables y are constant in time and can be considered to be conserved, which has two significant consequences.
First, Eq.(33) restores conservation. Slow variables are often the result of broken conservation laws. In fact, in biological open systems, nothing is conserved. Conservation laws result from balancing production and degradation either passively (slow processes) or actively (feed-back). Thus, we can ignore production and degradation of molecules whose level is rigorously controlled. Eq.(33) describes such a case.
Second, (33) are averaged steady state equations for the slow variables. If slow variables y reach stationarity, the only variables that change in time are the oscillating variables x. Eq.(30) describes the dynamics of x, considering that y satisfies (33). For oscillators, averaging provides a way to eliminate slow non-oscillating variables, which is formally equivalent to quasi-stationarity and represents a new case of applicability of Clarke's method. The difference between the two cases is that we eliminate fast variables by solving quasi-stationarity equations and we eliminate slow variables by solving averaged stationarity equations. Thus, intermediate non-oscillating variables are expressed in terms of only non-oscillating terminal variables. If there are no non-oscillating terminal variables, then non-oscillating intermediates become conserved quantities.
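The Pontryagin–Rodygin reduction can be sanity-checked on a toy system invented for the illustration: a fast Hopf-type oscillator x whose limit-cycle radius depends on the slow variable y; the slow variable of the full system should settle at the steady state of the averaged equation:

```python
import numpy as np

# Toy fast-slow system: fast oscillator x = (x1, x2) with limit cycle
# |x|^2 = 1 + y, coupled to a slow variable y driven by the oscillation
eps = 1e-2

def f(x, y):
    x1, x2 = x
    shrink = (1.0 + y) - (x1**2 + x2**2)   # radial attraction to the cycle
    return np.array([shrink * x1 - x2, shrink * x2 + x1])

def g(x, y):
    return x[0]**2 - y                      # slow variable fed by x1^2

# full system: eps*dx/dt = f(x, y), dy/dt = g(x, y)   (explicit Euler)
x, y = np.array([1.0, 0.0]), 0.0
dt = 1e-4
for _ in range(100000):                     # integrate to t = 10
    x = x + (dt / eps) * f(x, y)
    y = y + dt * g(x, y)

# on the cycle, <x1^2> = (1 + y)/2, so the averaged equation reads
# dy/dt = (1 + y)/2 - y, with stable steady state y* = 1
```

After the transient, y of the full simulation oscillates slightly (amplitude of order ϵ) around the averaged steady state y* = 1, while x keeps oscillating on the cycle of squared radius 1 + y.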