In this paper we present a more general version of the method that was introduced in [27], which is applicable to nonlinear models. We start with some basic definitions and key observations that are illustrated on a small example model, before we turn to the details of the method.

### Basic Definitions and Assumptions

The method is developed for models of biochemical reaction systems on state space form that are based on nonlinear ordinary differential equations (ODEs)

\dot{x} = f(x, p, u, t),

(1)

y = h(x, p, u, t),

(2)

where *t* denotes time; the dot over *x* in Eq. (1) denotes the derivative w.r.t. time; *x* ∈ ℝ^{n} is the state vector; *p* ∈ ℝ^{p} the parameters; *u* ∈ ℝ^{m} the inputs; *y* ∈ ℝ^{l} the outputs; and *f* and *h* are in general nonlinear functions. The state vector, whose individual elements are referred to as state variables, typically represents amounts or concentrations of chemical species, and the parameters commonly represent kinetic constants, initial conditions, or scaling factors. In this paper we are primarily interested in a comparison of the state variables between models (the original model and the reduced model), which means that the form of the nonlinear function in Eq. (2) is irrelevant for the application of our method. The right-hand side of Eq. (1) can be expressed as the stoichiometric matrix *S* ∈ ℝ^{n×q} times a vector of reaction rates *r* = *r*(*x*, *p*, *u*, *t*), *r* ∈ ℝ^{q}:

\dot{x} = S\, r(x, p, u, t).
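The factorization into a stoichiometric matrix and a rate vector can be sketched numerically; the following is a minimal example with a hypothetical two-reaction chain X1 → X2 → (sink) under mass-action kinetics (the network and rate constants are illustrative, not from the paper).

```python
import numpy as np

# Hypothetical chain X1 -> X2 -> (sink); rows = species, columns = reactions.
S = np.array([[-1,  0],   # X1 is consumed by reaction 1
              [ 1, -1]])  # X2 is produced by reaction 1, consumed by reaction 2

def r(x, p):
    """Reaction-rate vector r(x, p) for mass-action kinetics."""
    k1, k2 = p
    return np.array([k1 * x[0], k2 * x[1]])

def f(x, p):
    """Right-hand side of the ODE in factored form, xdot = S r(x, p)."""
    return S @ r(x, p)

x = np.array([2.0, 1.0])
p = (0.5, 0.2)
print(f(x, p))  # -> [-1.   0.8]
```

The same right-hand side could of course be written out directly, but keeping *S* and *r* separate is what makes the conservation analysis below (null spaces of *S*) possible.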

The existence of separate time-scales is commonly utilized for reduction of biochemical models (e.g., in the reduction of mass action kinetics to Michaelis-Menten kinetics). The typical approach is to investigate whether subsets of the state variables are in steady state or in quasi-steady state (QSS). If state variable *x*_{i} is in steady state for *t* ≥ 0, it holds by definition that

\dot{x}_i(t) = 0,

(3)

which implies that

x_i(t) = x_i(0),

which effectively removes the state variable from the model, since it can be substituted by a constant. If, on the other hand, the state variable *x*_{i} is in QSS, there are terms on the right-hand side of the ODE that are much larger than the negligible term on the left-hand side. The approximation

f_i(x, p, u, t) \approx 0

(4)

is then commonly used to reduce the model. We refer to a state as fast in the time interval *T*_{0} ≤ *t* < *T*_{1} if Eq. (4) is valid in this time interval and holds for the class of all considered inputs to the system. Note that *T*_{1} = ∞ in the case that the system remains in QSS, which may, for example, not be the case for models with switches (where, e.g., the values of a subset of the state variables may change when a certain condition is fulfilled) [34, 35].

Note that Eq. (3) (steady state) implies that Eq. (4) (quasi-steady state) is fulfilled, but not *vice versa*. Although QSS implies that (some of) the terms on the right-hand side of the ODE are large while the left-hand side (the derivative term) is comparatively negligible, the derivative may still be large enough for the state variable in QSS to change considerably during the time-span of a simulation; the key is that these changes mainly occur on a slow manifold.
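This distinction can be illustrated with a small simulation; the fast-slow system below is a hypothetical example chosen for illustration (it is not one of the paper's models): *x*_{1} relaxes quickly toward *x*_{2}, which itself decays slowly, so after a short transient *x*_{1} is in QSS yet still changes considerably on the slow time-scale.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative fast-slow system (not from the paper).
k_fast, k_slow = 100.0, 0.1

def rhs(t, x):
    x1, x2 = x
    return [-k_fast * (x1 - x2),  # fast: pulls x1 onto the slow manifold x1 = x2
            -k_slow * x2]         # slow decay of x2

sol = solve_ivp(rhs, (0.0, 10.0), [5.0, 1.0], rtol=1e-8, atol=1e-10)
x1_end, x2_end = sol.y[:, -1]

# After the initial transient x1 is in QSS (x1 ≈ x2, i.e. the derivative term
# of x1 is small relative to the individual right-hand-side terms), yet x1
# is not in steady state: it keeps tracking the slowly decaying x2.
print(abs(x1_end - x2_end) < 1e-3, x2_end < 1.0)
```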

### Zooming of Linear Models

The concept of zooming was introduced in [27], and a method was presented that is applicable to linear time-invariant (LTI) models, which on state space form reads:

\mathcal{M}_o: \begin{cases} \dot{x} = Ax + Bu \\ y = Cx + Du, \end{cases}

where *A* ∈ ℝ^{n × n}, *B* ∈ ℝ^{n × m}, *C* ∈ ℝ^{l × n}, and *D* ∈ ℝ^{l × m}. The method is based on the existence of at least one subset of state variables in {\mathcal{M}}_{o} for which the internal dynamics is very fast with respect to the current time-scale of interest. An algorithm for automatic reduction of linear models that is based on the detection of such subsets, which are referred to as fast clusters, is presented in [27]. If the *w* state variables of a fast cluster are replaced by a single state variable x'_L = \sum_{i=1}^{w} x_{l_i}, we obtain a reduced version of the original model

\mathcal{M}_r: \begin{cases} \dot{x}_r = A_r x_r + B_r u \\ y = C_r x_r + Du, \end{cases}

where *x*_{r} ∈ ℝ^{(n - w + 1)}, *A*_{r} ∈ ℝ^{(n - w + 1) × (n - w + 1)}, *B*_{r} ∈ ℝ^{(n - w + 1) × m}, and *C*_{r} ∈ ℝ^{l × (n - w + 1)}.

The fraction parameters, which are typically computed from QSS assumptions and mass conservation relations, take the form

\eta_{l_i}(p) = \frac{x_{l_i}}{x'_L}.

(5)

The fraction parameters are used for back-translation of the lumped state variable to the original state variables. Note that the fraction parameters are functions of the model parameters only, and are therefore time-invariant; as we will see, these properties of the fraction parameters do not, in general, hold for nonlinear models. By comparing the reactions of the original and reduced models, we see that

k'_{jL} = \sum_{i=1}^{w} k_{ji}\, \eta_{l_i},

(6)

where k'_{jL} is the rate parameter in the reaction from state variable x'_L to state variable *x*_{j} in the reduced model, and *k*_{ji} is the rate parameter in the reaction from state variable *x*_{i} to state variable *x*_{j} in the original model.
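Eq. (6) is just a fraction-weighted sum; a minimal numerical sketch (with hypothetical rate constants and fraction parameters, not taken from any model in the paper) reads:

```python
# Hypothetical fast cluster with w = 2 state variables, outgoing rate
# constants k_j1, k_j2 toward x_j, and fraction parameters eta_1, eta_2
# (which would come from QSS/mass-conservation relations, Eq. (5)).
k_j = [0.4, 1.2]     # k_j1, k_j2 in the original model (illustrative values)
eta = [0.75, 0.25]   # fractions of the lump held by x_1 and x_2

# Lumped rate constant, Eq. (6): k'_jL = sum_i k_ji * eta_i
k_jL = sum(k * e for k, e in zip(k_j, eta))
print(k_jL)  # 0.4*0.75 + 1.2*0.25 = 0.6
```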

Finally, note that Eqs. (5) and (6) provide a link between {\mathcal{M}}_{o} and {\mathcal{M}}_{r}, which constitute two different levels of granularity. It is this link between the models that makes us consider them as two different degrees of zooming, and the primary goal of this paper is to establish such a link also for nonlinear models.

### Extension to Nonlinear Models - Initial Observations

We will now present some key observations that are used in the derivation of the method for zooming of nonlinear models.

First, observe that the mass of a closed nonlinear model (no exchange of matter with the surroundings), which corresponds to a sum of the state variables weighted by molecular weight, is conserved. However, the total number of molecules is in general not conserved in such a model, as it is for linear models. This is, for example, due to the formation and dissociation of complexes, which alters the total number of molecules in the system. For instance, the binding of A to B reduces the number of molecules, since the product AB only counts as one molecule; binding reactions cannot occur in linear models. A second, and related, observation is that another type of conservation appears in nonlinear models: conserved moieties. A moiety is a specific functional part of a molecule, and the weighted sum of the number of molecules that contain this functional part is constant in a closed system. The presence of such a conserved moiety is equivalent to the existence of a row vector *m* ∈ ℕ^{n} for which *mS* = 0, which also implies that

m\dot{x} = mSr(x, p) = 0.

(7)

If we let the rank of *S* be denoted by *n*_{r}, the number of linearly independent vectors for which Eq. (7) holds is equal to *n* - *n*_{r}, which implies the existence of a matrix *M* such that

MS = 0,

(8)

where M \in \mathbb{N}^{(n - n_r) \times n}.
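In practice, candidate rows of *M* can be computed as a basis of the left null space of *S*. The sketch below uses sympy on a hypothetical closed system A + B ↔ C, C ↔ D (the network is illustrative); note that a generic null-space basis may contain negative entries, whereas a moiety basis as in Eq. (8) would be obtained from it by row operations.

```python
import sympy as sp

# Stoichiometric matrix of a hypothetical closed system:
# reactions A + B <-> C and C <-> D (rows = A, B, C, D; columns = reactions).
S = sp.Matrix([[-1,  0],
               [-1,  0],
               [ 1, -1],
               [ 0,  1]])

# Conservation relations are rows m with m S = 0, i.e. the left null space
# of S, computed here as the null space of S^T.
M = sp.Matrix.vstack(*[v.T for v in (S.T).nullspace()])

n, n_r = S.rows, S.rank()
print(n - n_r)    # dimension of the left null space (here 2)
print(M * S)      # zero matrix, i.e. Eq. (8) holds for every row of M
```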

Let us now make some remarks regarding fast state variables in a nonlinear model. Let x_f \in \mathbb{R}^{n_f} be the vector of all fast state variables in *T*_{0} ≤ *t* < *T*_{1}. For simplicity we assume that there are no inputs to the system, although it would in principle be possible to incorporate inputs in the following discussion. The right-hand side of the ODEs for the fast state variables can then be separated into two parts: the first part contains the reactions between fast state variables that are significant for the fast dynamics, *r*_{f}(*x*_{f}, *p*), and the second part contains all other reactions, *r*_{s}(*x*, *p*), i.e.,

\dot{x}_f = S_s r_s(x, p) + S_f r_f(x_f, p),

(9)

where *S*_{f} and *S*_{s} are the corresponding stoichiometric matrices. Let us now consider the fast stoichiometric matrix, *S*_{f}, and especially the conserved moieties that are implied by *S*_{f}. Since these moieties are only (approximately) conserved on a fast enough time-scale, we refer to such moiety conservations as apparent conservations. Let *M*_{f} be a matrix with *a* linearly independent rows such that

M_f S_f = 0,

(10)

where M_f \in \mathbb{N}^{a \times n_f}. Each row of this matrix thus implies an apparent conserved moiety in the system. Let the sums of state variables that correspond to apparent moiety conservations (i.e., lumps of state variables) be denoted by *l*, so that

l = M_f x_f.

(11)

If we differentiate *l* with respect to time, we get

\dot{l} = M_f \dot{x}_f = M_f \left( S_s r_s(x, p) + S_f r_f(x_f, p) \right) \approx M_f S_s r_s(x, p),

(12)

where Eqs. (9) and (10) were used.

It is interesting to note that the matrix *M*_{f} is not unique; in fact, any matrix \widehat{M} = N M_f can be used for lumping, where *N* ∈ ℝ^{a × a} is non-singular. This observation allows us to choose a matrix \widehat{M} for which a maximal number of rows in \widehat{M} S_s vanish, which results in the greatest possible reduction in the number of state variables. Finally, note that Eq. (4), in the absence of inputs, gives

\begin{aligned}
f_f(x, p, u, t) &= f_f(x, p, 0, t) \\
&= S_s r_s(x, p) + S_f r_f(x_f, p) \\
&\approx S_f r_f(x_f, p) \approx 0,
\end{aligned}

(13)

since the term *S*_{f} *r*_{f}(*x*_{f}, *p*) dominates the term *S*_{s} *r*_{s}(*x*, *p*). Note that Eq. (4), and consequently Eq. (13), only holds in *T*_{0} ≤ *t* < *T*_{1}, since the state is only known to be fast in this time span.

Eq. (12) defines the ODEs of the reduced model (the lumped state variables), and Eqs. (11) and (13) can in principle be used to calculate back-translation formulae, as is demonstrated with the small example model in the next section. However, as we shall see, this approach requires the explicit algebraic solution to a system of nonlinear equations, which is typically an infeasible task. Furthermore, there is not a clear one-to-one mapping between the state variables of the original and reduced models as in the case of proper lumping [27].

### A Small Example Model

We will now present a small example model, with three fast state variables, which is reduced with the approach discussed above. An alternative approach is then demonstrated with the advantage that it scales better to larger models.

Consider the reversible formation of a complex *C* from a substrate *A* and an enzyme *B*

\bullet \leftrightarrow A + B \underset{k_1}{\overset{k_{-1}}{\leftrightarrow}} C \leftrightarrow \bullet

consisting of the fast state variables *x*_{f} = (*A* *B* *C*)^{T}, where the bullets (•) represent the slow state variables surrounding the three fast state variables in the model. The ODEs for the fast state variables take the form

\dot{x}_f = \begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix} \left( k_1 A B - k_{-1} C \right) + S_s r_s(x, p).

(14)

The three state variables in the model constitute a fast cluster with two apparent conserved moieties, which may be represented by the following relations

l = \begin{pmatrix} L_1 \\ L_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} A \\ B \\ C \end{pmatrix} = M_f x_f,

(15)

where the lumped state variables *L*_{1} and *L*_{2} are introduced. Note that Eq. (12) defines the dynamics of the lumped state variables.

The distribution of mass among the fast state variables is given by Eq. (15) and by applying Eq. (13) to (14), which results in an equation system with the three fast state variables as unknowns

k_1 A B - k_{-1} C \approx 0,

(16)

A + C = L_1,

(17)

B + C = L_2.

(18)

Analytic expressions for the fast state variables *A*, *B*, and *C* are given by the non-negative solution to Eqs. (16)-(18)

A \approx \frac{1}{2} \left( L_1 - L_2 - K_1 + \sqrt{(L_1 + L_2 + K_1)^2 - 4 L_1 L_2} \right),

(19)

B \approx \frac{1}{2} \left( -L_1 + L_2 - K_1 + \sqrt{(L_1 + L_2 + K_1)^2 - 4 L_1 L_2} \right),

(20)

C \approx \frac{1}{2} \left( L_1 + L_2 + K_1 - \sqrt{(L_1 + L_2 + K_1)^2 - 4 L_1 L_2} \right),

(21)

where K_1 = \frac{k_{-1}}{k_1}.
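A quick numerical check of the back-translation formulae (19)-(21), with illustrative values for the lumps and the equilibrium constant (the numbers are arbitrary, chosen only so that all roots are real):

```python
import math

# Illustrative values (not from the paper); K1 = k_{-1}/k_1.
L1, L2, K1 = 3.0, 2.0, 0.5
disc = math.sqrt((L1 + L2 + K1)**2 - 4 * L1 * L2)

A = 0.5 * (L1 - L2 - K1 + disc)
B = 0.5 * (-L1 + L2 - K1 + disc)
C = 0.5 * (L1 + L2 + K1 - disc)   # the non-negative root of the quadratic for C

# The solution must satisfy the QSS relation (16) and the lump sums (17)-(18).
print(abs(A * B - K1 * C) < 1e-12,
      abs(A + C - L1) < 1e-12,
      abs(B + C - L2) < 1e-12)
```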

We can now employ Eq. (12) to solve the ODEs for the lumped state variables *L*_{1} and *L*_{2}, and use Eqs. (19)-(21) as back-translation formulae to compute the trajectories of the original state variables *A*, *B*, and *C*. However, note that for even slightly larger clusters of fast species than the one discussed here it would not be possible to calculate algebraic expressions of the original state variables with this approach, since it builds on the explicit solution of a system of nonlinear equations, which quickly becomes infeasible with growing problem size.

Alternatively, we can take an approach to the problem that is inspired by the method for linear systems in [27]. The first step is to express Eqs. (16) and (17) as a linear system w.r.t. the state variables *A* and *C*

\begin{pmatrix} k_1 B & -k_{-1} \\ 1 & 1 \end{pmatrix} \begin{pmatrix} A \\ C \end{pmatrix} \approx \begin{pmatrix} 0 \\ L_1 \end{pmatrix}.

(22)

The solution to Eq. (22) w.r.t. *A* and *C* is

A \equiv \eta_A(B, p) L_1 \approx \frac{K_1}{B + K_1} L_1,

(23)

C \equiv \eta_C(B, p) L_1 \approx \frac{B}{B + K_1} L_1,

(24)

where K_1 = \frac{k_{-1}}{k_1}, and the fraction parameters *η*_{A}(*B*, *p*) and *η*_{C}(*B*, *p*) are defined in Eqs. (23) and (24), respectively. The ODE for *L*_{1} is defined by Eq. (12), and the ODE for *B* can be derived by differentiation of *L*_{2} in Eq. (18), which gives

\begin{aligned}
\frac{dB}{dt} &\approx \left( 1 + \frac{K_1 L_1}{(B + K_1)^2} \right)^{-1} \begin{pmatrix} -\frac{B}{B + K_1} & 1 \end{pmatrix} \begin{pmatrix} \frac{dL_1}{dt} \\ \frac{dL_2}{dt} \end{pmatrix} \\
&= \left( 1 + \frac{K_1 L_1}{(B + K_1)^2} \right)^{-1} \begin{pmatrix} -\frac{B}{B + K_1} & 1 \end{pmatrix} M_f S_s r_s(x, p).
\end{aligned}

(25)

The reduced model consists of the two state variables *L*_{1} and *B* (note that *L*_{2} does not appear in the reduced model), and the dynamics is described by Eqs. (12) and (25), respectively. Note that the state variables *A* and *C* can be back-translated from the reduced model with Eqs. (23) and (24). This approach is a bit more intricate than the first, but comes with the advantage that we do not need to solve a system of nonlinear equations.
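The fraction parameters of Eqs. (23)-(24) can be verified numerically against the linear system (22); the rate constants and state values below are illustrative only.

```python
import numpy as np

# Illustrative values (not from the paper): rate constants and current B, L1.
k1, k_1 = 2.0, 1.0
K1 = k_1 / k1
B, L1 = 0.5, 3.0

# Fraction parameters, Eqs. (23)-(24), and the back-translated states.
eta_A = K1 / (B + K1)
eta_C = B / (B + K1)
A, C = eta_A * L1, eta_C * L1

# Check that (A, C) solves the linear system (22).
M = np.array([[k1 * B, -k_1],
              [1.0,     1.0]])
rhs = np.array([0.0, L1])
print(np.allclose(M @ np.array([A, C]), rhs))  # True
```

Note that, by construction, η_A + η_C = 1, so the whole lump *L*_{1} is always distributed between *A* and *C*.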

### A Method for Zooming of Nonlinear Models

We will now present, step by step, a method that can be used to construct zoomable nonlinear biochemical models. This involves two sub-goals: i) to identify a reduced model that shares important characteristics with the original model, and ii) to derive back-translation formulae that can be used to compute the original state variables and parameters from the reduced model.

In an initialization step of the method for a model {\mathcal{M}}_{o}, we first formulate mathematical equations for all conservation relations (Eq. (8)), state variables in steady state (Eq. (3)), and quasi-steady state assumptions (Eq. (4)). If additional properties of the system are known, we also formulate the corresponding equations.

### Step 1

The first step of the method is to identify the apparent conservation relations in the model.

**Definition 1:** Let *S*_{f} be the stoichiometric matrix for the reactions *r*_{f}(*x*_{f}, *p*) as defined in Eq. (9). Each subset of state variables for which the corresponding rows of *S*_{f} are linearly dependent constitutes an apparent conservation relation. Hence, the apparent conservation relations lie in the left null space of *S*_{f}, and the dimension of this space is *n*_{f} - rank(*S*_{f}).

Note that the apparent conservation relations are defined in Eq. (11). It is trivial to identify the set of all linearly dependent rows of *S*_{f} with mathematical computing software (e.g., the SBtoolbox for Matlab [36]).
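As a sketch of Step 1 on the example model (using sympy rather than the paper's Matlab/SBtoolbox setup), the apparent conservations follow from the left null space of *S*_{f} in Eq. (14); a generic basis may differ from the *M*_{f} of Eq. (15) by a non-singular transformation *N*, as noted above.

```python
import sympy as sp

# Fast stoichiometric matrix of the example, Eq. (14):
# x_f = (A, B, C)^T, one fast reaction A + B <-> C.
S_f = sp.Matrix([[-1], [-1], [1]])

# Step 1: apparent conserved moieties = left null space of S_f, Eq. (10).
a = S_f.rows - S_f.rank()   # number of apparent conservation relations
M_f = sp.Matrix.vstack(*[v.T for v in (S_f.T).nullspace()])

print(a)           # 2, matching the two lumps L_1 and L_2
print(M_f * S_f)   # zero vector, i.e. Eq. (10)
```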

### Step 2

The second step of the method is to define the state variables of the reduced model, which we refer to as modified lumped state variables.

**Definition 2:** Let *x* be a lumped state variable corresponding to a subset of the state variables in an apparent conservation relation. Then *x* is a modified lumped state variable if the lumping scheme with respect to the state variables of the original model is proper.

Note that the original state variables have a clear interpretation in the reduced model (i.e., the lumped variables form disjoint sets) if the lumping scheme is proper, i.e.,

l_m = M_m x_f,

(26)

where *M*_{m} is an *a* × *n*_{f} matrix with elements equal to 0 or 1 and column sums equal to 1, and *l*_{m} denotes the modified lumped state variables. We typically have large freedom in the choice of *M*_{m}. The number of state variables is maximally reduced if all exact conservation relations in the model are retained as modified lumped state variables (and replaced by constants).
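The properness condition on *M*_{m} is easy to check mechanically; the helper below is an illustration (the function name is ours, not from the paper), using the example model's matrices.

```python
import numpy as np

def is_proper_lumping(M_m):
    """Check the condition of Definition 2 / Eq. (26): all entries in {0, 1}
    and every column sum equal to 1, so each original state variable
    belongs to exactly one lump."""
    M_m = np.asarray(M_m)
    return bool(np.isin(M_m, (0, 1)).all() and (M_m.sum(axis=0) == 1).all())

# M_f from Eq. (15) is NOT a proper scheme (C appears in both lumps) ...
print(is_proper_lumping([[1, 0, 1], [0, 1, 1]]))  # False
# ... but assigning C to the first lump only gives a proper scheme M_m.
print(is_proper_lumping([[1, 0, 1], [0, 1, 0]]))  # True
```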

### Step 3

The third step of the method is to derive fraction parameters, which constitute the link between the reduced model and the original model. Let the original state variables that constitute the *k*-th modified lumped state variable l_{m_k} be denoted x_{m_k}, so that

l_{m_k} = \sum_{i=1}^{w} x_{m_{ki}}.

(27)

A number of *n*_{m} equations that are linear w.r.t. x_{m_k}, and linearly independent, are required to calculate the fraction parameters. The existence of *n*_{m} such equations results in an equation system

b_k(l_m, p) = A(l_m, p)\, x_{m_k},

(28)

where both A(l_m, p) \in \mathbb{R}^{n_m \times n_m} and b_k(l_m, p) \in \mathbb{R}^{n_m} are known, although some of the equations may in general be approximate (e.g., QSS). The matrix *A*(*l*_{m}, *p*) is invertible since the equations are linearly independent, and we have that

x_{m_k} = A^{-1}(l_m, p)\, b_k(l_m, p).

(29)

The fraction parameters can then be calculated as

\eta_{m_{ki}}(l_m, p) = \frac{x_{m_{ki}}}{l_{m_k}} = \frac{x_{m_{ki}}}{\sum_{j=1}^{n_m} x_{m_{kj}}},

(30)

where we used Eq. (27) in the last step.
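Step 3 can be carried out symbolically for the example model; the sketch below (sympy instead of the paper's Matlab setup) builds the system (28) from the QSS relation (16) and the lump (17), solves it via Eq. (29), and recovers the fraction parameters of Eqs. (23)-(24).

```python
import sympy as sp

# Step 3 on the example model: two linear, linearly independent equations
# in (A, C) -- the QSS relation (16) and the lump (17).
B_, L1, k1, k_1 = sp.symbols('B L1 k1 k_1', positive=True)

Amat = sp.Matrix([[k1 * B_, -k_1],
                  [1,        1 ]])
b = sp.Matrix([0, L1])
x_mk = Amat.solve(b)                 # Eq. (29): x_mk = A^{-1}(l_m, p) b_k(l_m, p)

K1 = k_1 / k1
eta_A = sp.simplify(x_mk[0] / L1)    # fraction parameter for A, cf. Eq. (30)
eta_C = sp.simplify(x_mk[1] / L1)
print(sp.simplify(eta_A - K1 / (B_ + K1)) == 0)  # True: matches Eq. (23)
print(sp.simplify(eta_C - B_ / (B_ + K1)) == 0)  # True: matches Eq. (24)
```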

A modified lumped state variable for which an insufficient number of linear and linearly independent equations is available may still be used in the reduced model. However, back-translation of such a modified lumped state variable to the original state variables is then not possible, and step 3 of the method is skipped for it.

### Step 4

The fourth step of the method is to derive the rate of change of the modified lumped state variables.

**Theorem:** The dynamics of the modified lumped state variables is given by

\begin{aligned}
\dot{l}_m &= \left( I + J(l_m, p) \right)^{-1} \dot{l} \\
&= \left( I + J(l_m, p) \right)^{-1} M_f S_s r_s(x, p),
\end{aligned}

(31)

where *S*_{s} and *r*_{s}(*x*, *p*) were defined in Eq. (9), the matrix *M*_{f} is defined in Eq. (11), and

J_{ij}(l_m, p) = \sum_k \left( M_{f_{ik}} - M_{m_{ik}} \right) \frac{\partial g_k(l_m, p)}{\partial l_{m_j}},

(32)

where the matrix *M*_{m} is defined in Eq. (26), and g_i(l_m, p) \equiv x_{f_i} is introduced to simplify the notation.

**Proof:**

First subtract Eq. (26) from Eq. (11)

l = l_m + (M_f - M_m) x_f = l_m + (M_f - M_m)\, g(l_m, p),

and differentiate *l* with respect to time, which gives

\dot{l} = \dot{l}_m + J(l_m, p)\, \dot{l}_m = \left( I + J(l_m, p) \right) \dot{l}_m,

(33)

where *I* is the identity matrix and *J*(*l*_{m}, *p*) is the Jacobian of (*M*_{f} - *M*_{m})*g*(*l*_{m}, *p*) with respect to *l*_{m}. The element *J*_{ij} of *J*(*l*_{m}, *p*) is given by

J_{ij}(l_m, p) = \sum_k \left( M_{f_{ik}} - M_{m_{ik}} \right) \frac{\partial g_k(l_m, p)}{\partial l_{m_j}}.

From Eq. (33) it is straightforward to derive \dot{l}_m, which takes the form

\begin{aligned}
\dot{l}_m &= \left( I + J(l_m, p) \right)^{-1} \dot{l} \\
&= \left( I + J(l_m, p) \right)^{-1} M_f S_s r_s(x, p),
\end{aligned}

where Eq. (12) was used in the last step. □

The matrix *I* + *J*(*l*_{m}, *p*) is symbolically invertible, but may in general contain singularities for particular combinations of parameter values and state variable values. However, the matrix is always invertible for the models discussed in this paper, since the corresponding determinants are strictly positive.
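The Theorem can be verified symbolically on the example model: taking *M*_{f} from Eq. (15) and the proper scheme *M*_{m} = [[1, 0, 1], [0, 1, 0]] (so that *l*_{m} = (*L*_{1}, *B*)), Eq. (31) should reproduce the prefactor and row vector of the ODE for *B* in Eq. (25). The sketch below performs this check with sympy.

```python
import sympy as sp

L1, B, K1 = sp.symbols('L1 B K1', positive=True)
l_m = sp.Matrix([L1, B])

# g(l_m, p): x_f = (A, B, C) expressed in the reduced variables via
# Eqs. (23)-(24), with A = eta_A*L1 and C = eta_C*L1.
g = sp.Matrix([K1 * L1 / (B + K1), B, B * L1 / (B + K1)])

M_f = sp.Matrix([[1, 0, 1], [0, 1, 1]])   # Eq. (15)
M_m = sp.Matrix([[1, 0, 1], [0, 1, 0]])   # a proper scheme, l_m = (L1, B)

J = ((M_f - M_m) * g).jacobian(l_m)       # Eq. (32)
W = (sp.eye(2) + J).inv()                 # (I + J)^{-1} in Eq. (31)

# The second row of W should match the prefactor times the row vector
# (-B/(B+K1)  1) appearing in Eq. (25).
pref = 1 / (1 + K1 * L1 / (B + K1)**2)
row = sp.Matrix([[-B / (B + K1), 1]]) * pref
print(sp.simplify(W[1, :] - row))         # zero row vector
```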

### Step 5

The final step of the method is to back-translate the modified lumped state variables to the original state variables with the fraction parameters derived in step 3. This allows a comparison of the predictions of the reduced model with those of the original model.

The implementation of the method is straightforward; we have used Matlab (R2008b) together with the SBtoolbox [36] as computing software for the models in this paper.