Example 1
A linear two-compartment model [39, 45]
$$ \begin{array}{l}{\dot{x}}_1=-\left({p}_1+{p}_2\right){x}_1+{p}_3{x}_2+u,\kern1em {x}_1(0)={x}_{10}\\ {}{\dot{x}}_2={p}_2{x}_1-\left({p}_3+{p}_4\right){x}_2,\kern3.5em {x}_2(0)={x}_{20}\\ {}y={x}_1/V\end{array} $$
(21)
This model describes a simple biochemical reaction network, as shown in Fig. 1a. The model is only partially observed, and this partial observation causes structural non-identifiability. Unlike previous studies of this well-known model, the impact of the initial conditions on the identifiability is highlighted here. In this example, there are 5 parameters, i.e. $\mathbf{p}_A = (p_1, p_2, p_3, p_4)^T$, $p_C = V$, and then we have
$$ {\mathbf{M}}_A\left(\mathbf{X}(s)\right)=\left(\begin{array}{cccc}\hfill -{X}_1(s)\hfill & \hfill -{X}_1(s)\hfill & \hfill {X}_2(s)\hfill & \hfill 0\hfill \\ {}\hfill 0\hfill & \hfill {X}_1(s)\hfill & \hfill -{X}_2(s)\hfill & \hfill -{X}_2(s)\hfill \end{array}\right),\kern0.5em {\mathbf{M}}_C\left(\mathbf{X}(s)\right)=-\frac{1}{V^2}{X}_1(s) $$
(22)
The Laplace transforms of the state variables are obtained by solving the state equations in Eq. (21)
$$ \begin{array}{l}{X}_1(s)=\frac{1}{\varDelta}\left(\left(s+{p}_3+{p}_4\right)\left(U(s)+{x}_{10}\right)+{p}_3{x}_{20}\right)\\ {}{X}_2(s)=\frac{1}{\varDelta}\left({p}_2\left(U(s)+{x}_{10}\right)+\left(s+{p}_1+{p}_2\right){x}_{20}\right)\end{array} $$
(23)
where $\varDelta = (s + p_1 + p_2)(s + p_3 + p_4) - p_2 p_3$. According to Eq. (13), the output sensitivity vector (not a matrix, since there is only one output variable in this example) is expressed as
$$ \frac{\partial Y(s)}{\partial \mathbf{p}}=\frac{-1}{V\varDelta }{\left(\begin{array}{c}\left(s+{p}_3+{p}_4\right){X}_1(s)\\ {}\left(s+{p}_4\right){X}_1(s)\\ {}-\left(s+{p}_4\right){X}_2(s)\\ {}{p}_3{X}_2(s)\\ {}\frac{\varDelta }{V}{X}_1(s)\end{array}\right)}^T $$
(24)
Substituting the expressions of Eq. (23) into Eq. (24) yields
$$ \frac{\partial Y(s)}{\partial \mathbf{p}}=\frac{-1}{V^2{\varDelta}^2}{\left(\begin{array}{c}\hfill V\left(s+{p}_3+{p}_4\right)\left(\left(s+{p}_3+{p}_4\right)\left(U(s)+{x}_{10}\right)+{p}_3{x}_{20}\right)\hfill \\ {}\hfill V\left(s+{p}_4\right)\left(\left(s+{p}_3+{p}_4\right)\left(U(s)+{x}_{10}\right)+{p}_3{x}_{20}\right)\hfill \\ {}\hfill -V\left(s+{p}_4\right)\left({p}_2\left(U(s)+{x}_{10}\right)+\left(s+{p}_1+{p}_2\right){x}_{20}\right)\hfill \\ {}\hfill \begin{array}{c}\hfill V{p}_3\left({p}_2\left(U(s)+{x}_{10}\right)+\left(s+{p}_1+{p}_2\right){x}_{20}\right)\hfill \\ {}\hfill \left(\left(s+{p}_3+{p}_4\right)\left(U(s)+{x}_{10}\right)+{p}_3{x}_{20}\right)\varDelta \hfill \end{array}\hfill \end{array}\right)}^T $$
(25)
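As a cross-check of Eq. (24), the same sensitivity expressions can be reproduced by direct differentiation of $Y(s) = X_1(s)/V$ in a computer algebra system. The following is a minimal SymPy sketch of this check; the symbol names are ours and are not taken from the Additional file.

```python
# Reproduce the entries of Eq. (24) by differentiating Y(s) = X1(s)/V directly.
import sympy as sp

s, p1, p2, p3, p4, V, x10, x20, U = sp.symbols('s p1 p2 p3 p4 V x10 x20 U', positive=True)

Delta = (s + p1 + p2)*(s + p3 + p4) - p2*p3
X1 = ((s + p3 + p4)*(U + x10) + p3*x20)/Delta      # Eq. (23)
X2 = (p2*(U + x10) + (s + p1 + p2)*x20)/Delta
Y = X1/V                                           # output in the Laplace domain

# entries of Eq. (24), with the common factor -1/(V*Delta) multiplied in
expected = [-(s + p3 + p4)*X1/(V*Delta),
            -(s + p4)*X1/(V*Delta),
             (s + p4)*X2/(V*Delta),
            -p3*X2/(V*Delta),
            -X1/V**2]

for par, entry in zip([p1, p2, p3, p4, V], expected):
    assert sp.simplify(sp.diff(Y, par) - entry) == 0
print("Eq. (24) reproduced by direct differentiation")
```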
To analyze the linear dependencies of the 5 functions in Eq. (25), we introduce 5 unknowns $(\alpha_1, \cdots, \alpha_5)$. By using the method described in the above section (see Additional file 1), we find $\alpha_5 = 0$, i.e. the parameter $V$ is not correlated with any other parameter and is thus uniquely identifiable. This result is in fact trivial, since $V$ is immediately fixed if $y(0)$ and $x_1(0)$ are known, according to the last line of Eq. (21).
It can be seen from Eq. (25) that $U(s)$ and $x_{10}$ have the same impact on the output sensitivities. Thus, to see the influence of the initial conditions on the identifiability of the 4 parameters $(p_1, p_2, p_3, p_4)$, we let $U(s) = 0$ in the following analysis.
- Case 1: $x_{10} \neq 0$, $x_{20} = 0$. The output sensitivities with respect to the individual parameters satisfy the following relationship (see Additional file 1)
$$ \frac{\partial Y(s)}{\partial {p}_1}-\frac{\partial Y(s)}{\partial {p}_2}+\frac{p_3}{p_2}\frac{\partial Y(s)}{\partial {p}_3}-\frac{p_3}{p_2}\frac{\partial Y(s)}{\partial {p}_4}=0 $$
(26)
This means that the 4 parameters are correlated in one group and their interrelationship is independent of the value of $x_{10} \neq 0$, i.e. they are structurally non-identifiable. By solving Eq. (26) we can find the interrelationships of the sub-groups (also called identifiable combinations) of the parameters as $\{p_2 p_3,\ p_1 + p_2,\ p_3 + p_4\}$. This result is the same as reported in the literature [39, 45] (Eq. (26), together with Eq. (27) below, is checked symbolically in the sketch following this list). In this situation, it is impossible to uniquely estimate the parameters $(p_1, p_2, p_3, p_4)$ from any measured dataset of the output ($y = x_1$).
- Case 2: $x_{10} = 0$, $x_{20} \neq 0$. Solving the homogeneous linear equations (see Additional file 1) in this case leads to $\alpha_3 = 0$, i.e. $p_3$ is identifiable, and the output sensitivities with respect to the other parameters satisfy the following relationship
$$ \left({p}_4-{p}_1-{p}_2\right)\frac{\partial Y(s)}{\partial {p}_1}+\left({p}_1+{p}_2-{p}_3-{p}_4\right)\frac{\partial Y(s)}{\partial {p}_2}+{p}_3\frac{\partial Y(s)}{\partial {p}_4}=0 $$
(27)
which means that $(p_1, p_2, p_4)$ are correlated in one group and thus structurally non-identifiable. It is interesting to note from Eq. (27) that the correlation relationship of $(p_1, p_2, p_4)$ also involves the identifiable parameter $p_3$.
- Case 3: $x_{10} \neq 0$, $x_{20} \neq 0$. According to Eq. (25) and Additional file 1, the functional relationship among the 4 parameters $(p_1, p_2, p_3, p_4)$ depends on the initial condition $(x_{10}, x_{20})$ if both $x_{10} \neq 0$ and $x_{20} \neq 0$. In this situation, the parameters are practically non-identifiable. The 4 parameters are correlated in one group, i.e. $n_{max} = 4$, and the number of equations in the form of Eq. (19) is $n_y(2n_x - 2) = 2$. Therefore, the 4 parameters can be uniquely estimated by fitting the model to at least $n_d = 2$ datasets (such that $n_d \geq n_{max}/(n_y(2n_x - 2))$) of the output ($y = x_1$), obtained with different nonzero initial values of $x_{10}$ and $x_{20}$.
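The correlation relations of Cases 1 and 2 can be confirmed in the same way as Eq. (24). The sketch below (our own check, not part of the Additional file) verifies that Eq. (26) and Eq. (27) hold identically in $s$, regardless of the value of the nonzero initial state.

```python
# Symbolic check of the Case 1 and Case 2 correlation relations.
import sympy as sp

s, p1, p2, p3, p4, V, x10, x20 = sp.symbols('s p1 p2 p3 p4 V x10 x20', positive=True)

def sensitivities(x10_val, x20_val):
    """dY/dp1 ... dY/dp4 in the Laplace domain for u = 0 and the given initial values."""
    Delta = (s + p1 + p2)*(s + p3 + p4) - p2*p3
    X1 = ((s + p3 + p4)*x10_val + p3*x20_val)/Delta     # Eq. (23) with U(s) = 0
    Y = X1/V
    return [sp.diff(Y, p) for p in (p1, p2, p3, p4)]

# Case 1: x10 != 0, x20 = 0  ->  Eq. (26)
d1, d2, d3, d4 = sensitivities(x10, 0)
assert sp.simplify(d1 - d2 + (p3/p2)*d3 - (p3/p2)*d4) == 0

# Case 2: x10 = 0, x20 != 0  ->  Eq. (27); alpha_3 = 0, so dY/dp3 does not enter
d1, d2, d3, d4 = sensitivities(0, x20)
assert sp.simplify((p4 - p1 - p2)*d1 + (p1 + p2 - p3 - p4)*d2 + p3*d4) == 0
print("Eqs. (26) and (27) hold identically in s")
```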
To verify the above results, we perform numerical parameter estimation using the method developed in [46–48]. The true parameter values in the model are assumed to be $p_1 = 0.7$, $p_2 = 0.7$, $p_3 = 1.0$, $p_4 = 0.4$, and we generate noise-free output data at 100 time points by simulation. For case 1, one dataset for $y$ is generated with $x_{10} = 15$, $x_{20} = 0$. To check the identifiable combinations $\{p_2 p_3,\ p_1 + p_2,\ p_3 + p_4\}$, we repeat the parameter identification, fixing $p_1$ at a different value for each run. Figure 2 shows the relationships of the parameters after fitting, which reproduce exactly the expected values, i.e. $p_2 p_3 = 0.7$, $p_1 + p_2 = 1.4$, $p_3 + p_4 = 1.4$.
For case 2, one dataset for $y$ is generated with $x_{10} = 0$, $x_{20} = 15$. Indeed, only the true value of $p_3 = 1.0$ can be identified when fitting the 4 parameters to this dataset. For case 3, we generate 2 datasets for $y$ with $x_{10}^{(1)} = 15$, $x_{20}^{(1)} = 5$ and $x_{10}^{(2)} = 5$, $x_{20}^{(2)} = 15$, respectively. We then fit the 4 parameters simultaneously to the 2 datasets and obtain estimated values that match the true parameter values exactly.
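The following sketch illustrates how such a verification can be reproduced with standard tools. It simulates Eq. (21) for Case 1 and refits $(p_2, p_3, p_4)$ with $p_1$ fixed, using plain least squares rather than the method of [46–48]; the time grid, solver settings and $V = 1$ are our own assumptions.

```python
# Simulate the two-compartment model, then refit with p1 fixed and inspect the
# identifiable combinations of Case 1.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(p, x0, t):
    """Integrate Eq. (21) with u = 0 and return y = x1/V (V = 1 assumed)."""
    p1, p2, p3, p4 = p
    rhs = lambda _, x: [-(p1 + p2)*x[0] + p3*x[1],
                        p2*x[0] - (p3 + p4)*x[1]]
    sol = solve_ivp(rhs, (t[0], t[-1]), x0, t_eval=t, rtol=1e-9, atol=1e-12)
    return sol.y[0]

t = np.linspace(0.0, 5.0, 100)
p_true = np.array([0.7, 0.7, 1.0, 0.4])
y_data = simulate(p_true, [15.0, 0.0], t)          # Case 1: x10 = 15, x20 = 0

def refit(p1_fixed):
    """Fit (p2, p3, p4) to the noise-free dataset with p1 held fixed."""
    res = least_squares(
        lambda q: simulate(np.r_[p1_fixed, q], [15.0, 0.0], t) - y_data,
        x0=[0.5, 0.5, 0.5], bounds=(0.0, 5.0))
    return res.x

for p1_fixed in (0.5, 0.7, 0.9):
    p2, p3, p4 = refit(p1_fixed)
    # expected combinations: p1+p2 = 1.4, p2*p3 = 0.7, p3+p4 = 1.4
    print(f"p1={p1_fixed:.1f}: p1+p2={p1_fixed+p2:.3f}, p2*p3={p2*p3:.3f}, p3+p4={p3+p4:.3f}")
```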
Furthermore, if the model in Eq. (21) is fully observed, e.g., $y_1 = x_1$, $y_2 = x_2$, then all parameters in the model are identifiable. This can easily be seen from $\mathbf{M}_A(\mathbf{X}(s))$ in Eq. (22), since its columns, as functions of $s$, are linearly independent. As a result, a single dataset containing the trajectories of $y_1 = x_1$ and $y_2 = x_2$ is enough to uniquely estimate the parameters of the model.
Example 2
A linear three-compartment model [43]
$$ \begin{array}{l}{\dot{x}}_1={p}_{13}{x}_3+{p}_{12}{x}_2-{p}_{21}{x}_1+u,\kern1em {x}_1(0)={x}_{10}\\ {}{\dot{x}}_2={p}_{21}{x}_1-{p}_{12}{x}_2,\kern5em {x}_2(0)={x}_{20}\\ {}{\dot{x}}_3=-{p}_{13}{x}_3,\kern7.12em {x}_3(0)={x}_{30}\\ {}y={x}_2\end{array} $$
(28)
The reaction network corresponding to this model is shown in Fig. 1b. This model was studied in [43] to demonstrate the failure of identifiability tests based on differential algebra methods when $x_{30} = 0$, a situation that is easily recognized with our method. Let $\mathbf{p}_A = (p_{21}, p_{12}, p_{13})^T$; according to Eq. (13), the functions in the output sensitivity vector are (see Additional file 1)
$$ \begin{array}{l}{q}_1(s)=\left(s+{p}_{13}\right)\left(\left(U(s)+{x}_{10}\right){s}^2+\left(\left({p}_{12}+{p}_{13}\right)\left(U(s)+{x}_{10}\right)+{p}_{12}{x}_{20}+{p}_{13}{x}_{30}\right)s+{p}_{12}{p}_{13}\left(U(s)+{x}_{10}+{x}_{20}+{x}_{30}\right)\right)\\ {}{q}_2(s)=-\left(s+{p}_{13}\right)\left({x}_{20}{s}^2+\left(\left({p}_{21}+{p}_{13}\right){x}_{20}+{p}_{21}\left(U(s)+{x}_{10}\right)\right)s+{p}_{13}{p}_{21}\left(U(s)+{x}_{10}+{x}_{20}+{x}_{30}\right)\right)\\ {}{q}_3(s)={p}_{21}{x}_{30}\left(s+{p}_{12}+{p}_{21}\right)s\end{array} $$
(29)
According to Eq. (14) and Eq. (15) we obtain the following 4 homogeneous linear equations (see Additional file 1)
$$ \begin{array}{l}\left(U(s)+{x}_{10}\right){\alpha}_1-{x}_{20}{\alpha}_2=0\\ {}\left(\left({p}_{12}+2{p}_{13}\right)\left(U(s)+{x}_{10}\right)+{p}_{12}{x}_{20}+{p}_{13}{x}_{30}\right){\alpha}_1-\left(\left({p}_{21}+2{p}_{13}\right){x}_{20}+{p}_{21}\left(U(s)+{x}_{10}\right)\right){\alpha}_2+{p}_{21}{x}_{30}{\alpha}_3=0\\ {}\left(\begin{array}{l}\left({p}_{13}\left(\left({p}_{12}+{p}_{13}\right)\left(U(s)+{x}_{10}\right)+{p}_{12}{x}_{20}+{p}_{13}{x}_{30}\right)+{p}_{13}{p}_{12}\left(U(s)+{x}_{10}+{x}_{20}+{x}_{30}\right)\right){\alpha}_1\\ {}-\left({p}_{13}\left(\left({p}_{21}+{p}_{13}\right){x}_{20}+{p}_{21}\left(U(s)+{x}_{10}\right)\right)+{p}_{13}{p}_{21}\left(U(s)+{x}_{10}+{x}_{20}+{x}_{30}\right)\right){\alpha}_2\\ {}+{p}_{21}\left({p}_{12}+{p}_{21}\right){x}_{30}{\alpha}_3\end{array}\right)=0\\ {}{p}_{12}{\alpha}_1-{p}_{21}{\alpha}_2=0\end{array} $$
(30)
It can easily be seen that if $x_{30} = 0$, then $\alpha_3$ disappears from Eq. (30), i.e. $\alpha_3$ can take any value, which means that $p_{13}$ is non-identifiable. In contrast, if $x_{30} \neq 0$, Eq. (30) provides 4 linearly independent homogeneous equations in 3 unknowns, and thus $\alpha_1 = \alpha_2 = \alpha_3 = 0$, which means that all three parameters are identifiable.
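This conclusion can also be checked symbolically: with $x_{30} = 0$, compartment 3 remains empty, so the output carries no information about $p_{13}$ and its sensitivity vanishes identically. A minimal SymPy sketch (our own notation) of this check:

```python
# Show that dY/dp13 for Eq. (28) is proportional to x30 and vanishes when x30 = 0.
import sympy as sp

s, p21, p12, p13 = sp.symbols('s p21 p12 p13', positive=True)
x10, x20, x30, U = sp.symbols('x10 x20 x30 U', positive=True)
X1, X2, X3 = sp.symbols('X1 X2 X3')

# Laplace transform of the state equations in Eq. (28)
eqs = [sp.Eq(s*X1 - x10, p13*X3 + p12*X2 - p21*X1 + U),
       sp.Eq(s*X2 - x20, p21*X1 - p12*X2),
       sp.Eq(s*X3 - x30, -p13*X3)]
sol = sp.solve(eqs, [X1, X2, X3], dict=True)[0]
Y = sol[X2]                                      # output y = x2

dY_dp13 = sp.simplify(sp.diff(Y, p13))
assert sp.simplify(dY_dp13.subs(x30, 0)) == 0    # p13 non-identifiable for x30 = 0
assert dY_dp13 != 0                              # but informative when x30 != 0
print("dY/dp13 vanishes identically iff x30 = 0")
```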
Example 3
A linear three-compartment model [39]
$$ \begin{array}{l}{\dot{x}}_1=-\left({p}_{21}+{p}_{31}\right){x}_1+{p}_{12}{x}_2+{p}_{13}{x}_3+u,\kern1em {x}_1(0)={x}_{10}\\ {}{\dot{x}}_2={p}_{21}{x}_1-\left({p}_{12}+{p}_{02}\right){x}_2,\kern6em {x}_2(0)={x}_{20}\\ {}{\dot{x}}_3={p}_{31}{x}_1-\left({p}_{13}+{p}_{03}\right){x}_3,\kern6.12em {x}_3(0)={x}_{30}\\ {}y={x}_1\end{array} $$
(31)
The reaction network described by this model is shown in Fig. 1c. The parameters to be estimated in this model are $\mathbf{p}_A = (p_{21}, p_{31}, p_{12}, p_{13}, p_{02}, p_{03})^T$. The solution of the state equations in Eq. (31) leads to
$$ \left(\begin{array}{c}\hfill {X}_1(s)\hfill \\ {}\hfill {X}_2(s)\hfill \\ {}\hfill {X}_3(s)\hfill \end{array}\right)={\left(\begin{array}{ccc}\hfill s+\left({p}_{21}+{p}_{31}\right)\hfill & \hfill -{p}_{12}\hfill & \hfill -{p}_{13}\hfill \\ {}\hfill -{p}_{21}\hfill & \hfill s+\left({p}_{12}+{p}_{02}\right)\hfill & \hfill 0\hfill \\ {}\hfill -{p}_{31}\hfill & \hfill 0\hfill & \hfill s+\left({p}_{13}+{p}_{03}\right)\hfill \end{array}\right)}^{-1}\left(\begin{array}{c}\hfill U(s)+{x}_{10}\hfill \\ {}\hfill {x}_{20}\hfill \\ {}\hfill {x}_{30}\hfill \end{array}\right) $$
(32)
Similarly to Example 2, we obtain the Laplace-domain functions in the output sensitivity vector as follows
$$ \begin{array}{l}{q}_1(s)=-\left(s+{p}_{13}+{p}_{03}\right)\left(s+{p}_{02}\right){X}_1(s)\\ {}{q}_2(s)=-\left(s+{p}_{12}+{p}_{02}\right)\left(s+{p}_{03}\right){X}_1(s)\\ {}{q}_3(s)=\left(s+{p}_{13}+{p}_{03}\right)\left(s+{p}_{02}\right){X}_2(s)\\ {}{q}_4(s)=\left(s+{p}_{12}+{p}_{02}\right)\left(s+{p}_{03}\right){X}_3(s)\\ {}{q}_5(s)=-{p}_{21}\left(s+{p}_{13}+{p}_{03}\right){X}_2(s)\\ {}{q}_6(s)=-{p}_{13}\left(s+{p}_{12}+{p}_{02}\right){X}_3(s)\end{array} $$
(33)
By introducing 6 unknowns $(\alpha_1, \cdots, \alpha_6)$, the dependencies of the output sensitivities on the control and the initial conditions can be derived from Eq. (32) and Eq. (33). If $x_{20} = x_{30} = 0$ and $U(s) + x_{10} \neq 0$, the resulting homogeneous linear equations in the form of Eq. (16) are as follows
$$ \begin{array}{l}{\alpha}_1+{\alpha}_2=0\\ {}\left(2a+b+{p}_{02}\right){\alpha}_1+\left(2b+a+{p}_{03}\right){\alpha}_2+{p}_{21}{\alpha}_3+{p}_{31}{\alpha}_4=0\\ {}\left(\begin{array}{l}\left({a}^2+2a\left(b+{p}_{02}\right)+b{p}_{02}\right){\alpha}_1+\left({b}^2+2b\left(a+{p}_{03}\right)+a{p}_{03}\right){\alpha}_2\\ {}+{p}_{21}\left(2a+{p}_{02}\right){\alpha}_3+{p}_{31}\left(2b+{p}_{03}\right){\alpha}_4+{p}_{21}{p}_{12}{\alpha}_5+{p}_{31}{p}_{13}{\alpha}_6\end{array}\right)=0\\ {}\left(\begin{array}{l}\left({a}^2\left(b+{p}_{02}\right)+2 ab{p}_{02}\right){\alpha}_1+\left({b}^2\left(a+{p}_{03}\right)+2 ab{p}_{03}\right){\alpha}_2+{p}_{21}\left({a}^2+2a{p}_{02}\right){\alpha}_3\\ {}+{p}_{31}\left({b}^2+2b{p}_{03}\right){\alpha}_4+2a{p}_{21}{p}_{12}{\alpha}_5+2b{p}_{31}{p}_{13}{\alpha}_6\end{array}\right)=0\\ {}{a}^2b{p}_{02}{\alpha}_1+{b}^2a{p}_{03}{\alpha}_2+{p}_{21}{a}^2{p}_{02}{\alpha}_3+{p}_{31}{b}^2{p}_{03}{\alpha}_4+{p}_{21}{p}_{12}{a}^2{\alpha}_5+{p}_{31}{p}_{13}{b}^2{\alpha}_6=0\end{array} $$
(34)
where $a = p_{13} + p_{03}$ and $b = p_{12} + p_{02}$. It can be seen that the term $U(s) + x_{10}$ does not appear in Eq. (34). Solving the 5 equations in the 6 unknowns of Eq. (34) with respect to $\alpha_1$, we find
$$ {\alpha}_2=-{\alpha}_1,\kern1em {\alpha}_3=\frac{p_{13}}{p_{31}}{\alpha}_1,\kern1em {\alpha}_4=-\frac{p_{13}}{p_{31}}{\alpha}_1,\kern1em {\alpha}_5=-\frac{p_{12}}{p_{21}}{\alpha}_1,\kern1em {\alpha}_6=\frac{p_{12}}{p_{21}}{\alpha}_1 $$
(35)
Thus the output sensitivities with respect to the parameters satisfy the following relation
$$ \frac{\partial Y(s)}{\partial {p}_{21}}-\frac{\partial Y(s)}{\partial {p}_{31}}+\frac{p_{13}}{p_{31}}\left(\frac{\partial Y(s)}{\partial {p}_{13}}-\frac{\partial Y(s)}{\partial {p}_{03}}\right)-\frac{p_{12}}{p_{21}}\left(\frac{\partial Y(s)}{\partial {p}_{12}}-\frac{\partial Y(s)}{\partial {p}_{02}}\right)=0 $$
(36)
which means that all 6 parameters are correlated in one group and their interrelationship is independent of $U(s) + x_{10}$. Thus the parameters in this model are structurally non-identifiable when $x_{20} = x_{30} = 0$. In addition, we can solve Eq. (36) to obtain its local solutions $\{p_{02} + p_{12},\ p_{03} + p_{13},\ p_{21} + p_{31},\ p_{12} p_{21},\ p_{13} p_{31}\}$ as well as its global solutions
$$ \begin{array}{l}{\varphi}_1={p}_{21}+{p}_{31}\\ {}{\varphi}_2=\left({p}_{02}+{p}_{12}\right)\left({p}_{03}+{p}_{13}\right)\\ {}{\varphi}_3=\left({p}_{02}+{p}_{12}\right)+\left({p}_{03}+{p}_{13}\right)\\ {}{\varphi}_4={p}_{03}{p}_{31}\left({p}_{02}+{p}_{12}\right)+{p}_{02}{p}_{21}\left({p}_{03}+{p}_{13}\right)\\ {}{\varphi}_5={p}_{31}\left({p}_{02}+{p}_{12}+{p}_{03}\right)+{p}_{21}\left({p}_{03}+{p}_{13}+{p}_{02}\right)\end{array} $$
(37)
which are the identifiable combinations of the parameters, as given in [39]. To verify these parameter relations, numerical parameter estimation is carried out assuming $p_{02} = 2$, $p_{12} = 3$, $p_{03} = 3$, $p_{13} = 0.4$, $p_{21} = 1$, $p_{31} = 2$ as the true values of the parameters. A dataset for $y$ containing 400 sampling points in the time interval $[0, 2]$ is generated with $x_{10} = x_{20} = x_{30} = 0$ and $u(t) = 25$. We then repeatedly fit the parameters to the dataset, except for $p_{21}$, which is fixed at a different value in the range $[0.7, 1.5]$ for each run. The estimation results, shown in Fig. 3, exactly validate the global solutions expressed in Eq. (37).
It can be seen from Fig. 3a that the relationship between $p_{21}$ and $p_{31}$ is indeed a straight line, namely $p_{21} + p_{31} = 3.0$. From Fig. 3b, it is interesting to see that the relationship between $p_{02} + p_{12}$ and $p_{03} + p_{13}$ appears as two separate points. This is because both their sum and their product are constant, as indicated by $\varphi_2$ and $\varphi_3$ in Eq. (37); the two sums are therefore the roots of a fixed quadratic equation and can only take two values. As a consequence, both $p_{02} + p_{12}$ and $p_{03} + p_{13}$ have two solutions, i.e., there are two lines representing the relationship of $p_{02} + p_{12}$ as well as of $p_{03} + p_{13}$, as shown in Fig. 3c and d. Correspondingly, the relationships of $p_{12} p_{21}$ and of $p_{13} p_{31}$ are also twofold, as shown in Fig. 3e and f, respectively. The estimated results corresponding to the last two identifiable combinations in Eq. (37) are shown in Fig. 3g and h.
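The two-point pattern in Fig. 3b follows from elementary algebra: since $\varphi_2$ and $\varphi_3$ fix the product and the sum of $p_{02} + p_{12}$ and $p_{03} + p_{13}$, these two quantities are the roots of $z^2 - \varphi_3 z + \varphi_2 = 0$. A small numerical illustration with the true values used above:

```python
# The sum phi3 and product phi2 admit exactly two values for p02+p12 and p03+p13.
p02, p12, p03, p13 = 2.0, 3.0, 3.0, 0.4
phi2 = (p02 + p12)*(p03 + p13)        # product = 17.0
phi3 = (p02 + p12) + (p03 + p13)      # sum     =  8.4
disc = (phi3**2 - 4.0*phi2)**0.5
print((phi3 - disc)/2.0, (phi3 + disc)/2.0)   # -> 3.4 and 5.0: the only admissible values
```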
To remedy the non-identifiability, let $x_{20} \neq 0$ and $x_{30} \neq 0$; then, similarly to Eq. (34), there are 5 linear equations with respect to $(\alpha_1, \cdots, \alpha_6)$. This means that the 6 parameters are correlated in one group, i.e. $n_{max} = 6$. Since the correlation relationships now depend on $U(s) + x_{10}$ as well as on $x_{20} \neq 0$ and $x_{30} \neq 0$, the parameters are practically (rather than structurally) non-identifiable. The number of equations in the form of Eq. (19) is $n_y(2n_x - 2) = 4$. As a result, 2 datasets with different values of $U(s) + x_{10} \neq 0$, $x_{20} \neq 0$ and $x_{30} \neq 0$ are needed to uniquely estimate the parameters of the model.
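As a side check, the structural relation Eq. (36) can be confirmed by direct differentiation of $Y(s) = X_1(s)$ obtained from Eq. (32), in the same spirit as the checks in Example 1. A minimal SymPy sketch (our own, for $x_{20} = x_{30} = 0$):

```python
# Verify Eq. (36) symbolically for model (31) with x20 = x30 = 0.
import sympy as sp

s, U, x10 = sp.symbols('s U x10', positive=True)
p21, p31, p12, p13, p02, p03 = sp.symbols('p21 p31 p12 p13 p02 p03', positive=True)

# Eliminating X2 and X3 from the Laplace-transformed Eq. (31) gives X1 = (U + x10)/H(s)
H = s + p21 + p31 - p12*p21/(s + p12 + p02) - p13*p31/(s + p13 + p03)
Y = (U + x10)/H

d = {p: sp.diff(Y, p) for p in (p21, p31, p12, p13, p02, p03)}
relation = (d[p21] - d[p31]
            + (p13/p31)*(d[p13] - d[p03])
            - (p12/p21)*(d[p12] - d[p02]))
assert sp.simplify(relation) == 0        # Eq. (36) holds identically in s
print("Eq. (36) verified")
```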
Example 4
A linear four-compartment model [33, 39]
$$ \begin{array}{l}{\dot{x}}_1=-{p}_{31}{x}_1+{p}_{13}{x}_3+u,\kern5.2em {x}_1(0)={x}_{10}\\ {}{\dot{x}}_2=-{p}_{42}{x}_2+{p}_{24}{x}_4,\kern6.30em {x}_2(0)={x}_{20}\\ {}{\dot{x}}_3={p}_{31}{x}_1-\left({p}_{03}+{p}_{13}+{p}_{43}\right){x}_3,\kern2em {x}_3(0)={x}_{30}\\ {}{\dot{x}}_4={p}_{42}{x}_2+{p}_{43}{x}_3-\left({p}_{04}+{p}_{24}\right){x}_4,\kern1em {x}_4(0)={x}_{40}\\ {}{y}_1={x}_1\\ {}{y}_2={x}_2\end{array} $$
(38)
Figure 1d shows the biochemical reaction network of this model. In this model, there are 7 parameters, $\mathbf{p}_A = (p_{31}, p_{13}, p_{42}, p_{24}, p_{43}, p_{03}, p_{04})^T$, and 4 state variables, two of which are output variables. In the case of $u \neq 0$ and $x_{10} = x_{20} = x_{30} = x_{40} = 0$, the parameters are structurally non-identifiable, and the identifiable combinations of the correlated parameters are $\{p_{31},\ p_{13},\ p_{04} p_{42},\ p_{24} p_{43},\ p_{03} + p_{43},\ p_{24} + p_{42} + p_{04}\}$ [33, 39]. The same identifiable combinations are obtained with our method, which is much simpler to apply, as described in the following.
From Eq. (38), we have
$$ \mathbf{A}=\left(\begin{array}{cccc}\hfill -{p}_{31}\hfill & \hfill 0\hfill & \hfill {p}_{13}\hfill & \hfill 0\hfill \\ {}\hfill 0\hfill & \hfill -{p}_{42}\hfill & \hfill 0\hfill & \hfill {p}_{24}\hfill \\ {}\hfill {p}_{31}\hfill & \hfill 0\hfill & \hfill -\left({p}_{03}+{p}_{13}+{p}_{43}\right)\hfill & \hfill 0\hfill \\ {}\hfill 0\hfill & \hfill {p}_{42}\hfill & \hfill {p}_{43}\hfill & \hfill -\left({p}_{04}+{p}_{24}\right)\hfill \end{array}\right),\kern1em \mathbf{B}=\left(\begin{array}{c}\hfill 1\hfill \\ {}\hfill 0\hfill \\ {}\hfill 0\hfill \\ {}\hfill 0\hfill \end{array}\right),\kern1em \mathbf{C}=\left(\begin{array}{cccc}\hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ {}\hfill 0\hfill & \hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right) $$
(39)
and
$$ {\mathbf{M}}_A\left(\mathbf{X}(s)\right)=\left(\begin{array}{ccccccc}\hfill -{X}_1(s)\hfill & \hfill {X}_3(s)\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ {}\hfill 0\hfill & \hfill 0\hfill & \hfill -{X}_2(s)\hfill & \hfill {X}_4(s)\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ {}\hfill {X}_1(s)\hfill & \hfill -{X}_3(s)\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill -{X}_3(s)\hfill & \hfill -{X}_3(s)\hfill & \hfill 0\hfill \\ {}\hfill 0\hfill & \hfill 0\hfill & \hfill {X}_2(s)\hfill & \hfill -{X}_4(s)\hfill & \hfill {X}_3(s)\hfill & \hfill 0\hfill & \hfill -{X}_4(s)\hfill \end{array}\right) $$
(40)
Since in this example we have two output variables, to determine the parameter correlations we have to consider
$$ {\alpha}_1\left(\begin{array}{c}\hfill \frac{\partial {Y}_1}{\partial {p}_{31}}\hfill \\ {}\hfill \frac{\partial {Y}_2}{\partial {p}_{31}}\hfill \end{array}\right)+{\alpha}_2\left(\begin{array}{c}\hfill \frac{\partial {Y}_1}{\partial {p}_{13}}\hfill \\ {}\hfill \frac{\partial {Y}_2}{\partial {p}_{13}}\hfill \end{array}\right)+{\alpha}_3\left(\begin{array}{c}\hfill \frac{\partial {Y}_1}{\partial {p}_{42}}\hfill \\ {}\hfill \frac{\partial {Y}_2}{\partial {p}_{42}}\hfill \end{array}\right)+{\alpha}_4\left(\begin{array}{c}\hfill \frac{\partial {Y}_1}{\partial {p}_{24}}\hfill \\ {}\hfill \frac{\partial {Y}_2}{\partial {p}_{24}}\hfill \end{array}\right)+{\alpha}_5\left(\begin{array}{c}\hfill \frac{\partial {Y}_1}{\partial {p}_{43}}\hfill \\ {}\hfill \frac{\partial {Y}_2}{\partial {p}_{43}}\hfill \end{array}\right)+{\alpha}_6\left(\begin{array}{c}\hfill \frac{\partial {Y}_1}{\partial {p}_{03}}\hfill \\ {}\hfill \frac{\partial {Y}_2}{\partial {p}_{03}}\hfill \end{array}\right)+{\alpha}_7\left(\begin{array}{c}\hfill \frac{\partial {Y}_1}{\partial {p}_{04}}\hfill \\ {}\hfill \frac{\partial {Y}_2}{\partial {p}_{04}}\hfill \end{array}\right)=\left(\begin{array}{c}\hfill 0\hfill \\ {}\hfill 0\hfill \end{array}\right) $$
(41)
where the output sensitivities are expressed in the following form (see Additional file 1)
$$ \frac{\partial \mathbf{Y}(s)}{\partial \mathbf{p}}=\frac{1}{\varDelta}\left(\begin{array}{ccccccc}\hfill -\left({b}_{11}-{b}_{13}\right){X}_1\hfill & \hfill \left({b}_{11}-{b}_{13}\right){X}_3\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill -{b}_{13}{X}_3\hfill & \hfill -{b}_{13}{X}_3\hfill & \hfill 0\hfill \\ {}\hfill -\left({b}_{21}-{b}_{23}\right){X}_1\hfill & \hfill \left({b}_{21}-{b}_{23}\right){X}_3\hfill & \hfill -\left({b}_{22}-{b}_{24}\right){X}_2\hfill & \hfill \left({b}_{22}-{b}_{24}\right){X}_4\hfill & \hfill \left({b}_{24}-{b}_{23}\right){X}_3\hfill & \hfill -{b}_{23}{X}_3\hfill & \hfill -{b}_{24}{X}_4\hfill \end{array}\right) $$
(42)
It can be clearly seen from the first two columns of Eq. (41) and Eq. (42) that $\alpha_1 = \alpha_2 = 0$, which means that $p_{31}$ and $p_{13}$ are uniquely identifiable. From the 5th and 6th columns of the first row of Eq. (42) we have $\alpha_5 + \alpha_6 = 0$, i.e.
$$ \frac{\partial {Y}_1}{\partial {p}_{43}}-\frac{\partial {Y}_1}{\partial {p}_{03}}=0 $$
(43)
Thus $p_{43}$ and $p_{03}$ are pairwise correlated. Furthermore, based on the second row of Eq. (41) and Eq. (42), the following results can be obtained (see Additional file 1):
$$ {p}_{24}\frac{\partial {Y}_2}{\partial {p}_{24}}-{p}_{43}\frac{\partial {Y}_2}{\partial {p}_{43}}=0 $$
(44)
$$ {p}_{42}\left(\frac{\partial {Y}_2}{\partial {p}_{42}}-\frac{\partial {Y}_2}{\partial {p}_{24}}\right)-{p}_{04}\left(\frac{\partial {Y}_2}{\partial {p}_{04}}-\frac{\partial {Y}_2}{\partial {p}_{24}}\right)=0 $$
(45)
Eqs. (43)–(45) indicate that there are 3 separate correlation groups in this example; the maximum number of parameters among the groups is 3. The (local) solutions of these 3 equations lead to the identifiable combinations of the parameters $p_{43} + p_{03}$, $p_{24} p_{43}$, $p_{42} p_{04}$ and $p_{42} + p_{04} + p_{24}$, respectively, besides the two identifiable parameters $p_{31}$ and $p_{13}$.
To numerically verify these results, $p_{31} = 3.0$, $p_{13} = 5.5$, $p_{03} = 1.0$, $p_{04} = 0.7$, $p_{24} = 3.5$, $p_{42} = 3.0$, $p_{43} = 4.0$ are used as the true values of the parameters. One noise-free dataset for $y_1$, $y_2$ is generated by simulation with $x_{10} = x_{20} = x_{30} = x_{40} = 0$ and $u = 5.0$. We then repeatedly fit the parameters to the dataset, except for $p_{43}$, which is fixed at a different value for each run of the fitting. As expected, $p_{31}$ and $p_{13}$ are always at their true values after each run, whereas the other parameters are related through their identifiable combinations. Figure 4 shows these relationships based on the results of 176 parameter estimation runs. It can be seen from Fig. 4a to d that, indeed, $p_{43} + p_{03} = 5.0$, $p_{24} p_{43} = 14.0$, $p_{42} p_{04} = 2.1$, and $p_{42} + p_{04} + p_{24} = 7.2$ are obtained by the parameter estimation.
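An alternative, purely numerical way to see these correlations (our own construction, not taken from [33, 39]) is to build a second parameter set that preserves the identifiable combinations and to check that it produces exactly the same outputs:

```python
# Two parameter sets of Eq. (38) sharing the identifiable combinations give identical y1, y2.
import numpy as np
from scipy.integrate import solve_ivp

def simulate(p, t, u=5.0):
    """Integrate Eq. (38) from zero initial conditions and return y1 = x1, y2 = x2."""
    p31, p13, p42, p24, p43, p03, p04 = p
    rhs = lambda _, x: [-p31*x[0] + p13*x[2] + u,
                        -p42*x[1] + p24*x[3],
                        p31*x[0] - (p03 + p13 + p43)*x[2],
                        p42*x[1] + p43*x[2] - (p04 + p24)*x[3]]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0, 0.0, 0.0], t_eval=t,
                    rtol=1e-10, atol=1e-12)
    return sol.y[:2]

t = np.linspace(0.0, 3.0, 200)
p_true = [3.0, 5.5, 3.0, 3.5, 4.0, 1.0, 0.7]     # (p31, p13, p42, p24, p43, p03, p04)

# Second set: keep p31, p13, pick a different p43 and rebuild the rest from the
# combinations p03+p43 = 5.0, p24*p43 = 14.0, p42+p04+p24 = 7.2, p42*p04 = 2.1
p43 = 3.5
p03 = 5.0 - p43
p24 = 14.0 / p43
ssum, prod = 7.2 - p24, 2.1                      # p42 + p04 and p42*p04
p42 = (ssum + (ssum**2 - 4.0*prod)**0.5) / 2.0
p04 = ssum - p42
p_alt = [3.0, 5.5, p42, p24, p43, p03, p04]

print(np.max(np.abs(simulate(p_true, t) - simulate(p_alt, t))))   # ~0 (solver tolerance)
```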
Also in this example, we can consider $x_{20} \neq 0$, $x_{30} \neq 0$, $x_{40} \neq 0$ to remedy the non-identifiability. Since the maximum number of parameters in a correlation group is $n_{max} = 3$ and the number of equations in the form of Eq. (19) is $n_y(2n_x - 2) = 12$, one dataset for $y_1$, $y_2$ from an initial condition with $x_{10} \neq 0$, $x_{20} \neq 0$, $x_{30} \neq 0$, $x_{40} \neq 0$ is enough to uniquely estimate the 7 parameters of the model.
Example 5
Insulin receptor dynamics model [49].
Many physiological processes, such as glucose uptake and lipid, protein and glycogen synthesis, to name the most important, are regulated by insulin after it binds to the insulin receptor, which is located in the plasma membrane [50, 51]. The insulin receptor (IR) is a dynamic cellular macromolecule. Upon insulin binding, a series of processes follows, including endocytosis of the IR–insulin complex, endosomal processing, sequestration of the ligand (insulin) from the receptor, receptor inactivation, as well as receptor recycling to the cell surface [52].
In several early studies, simple models describing insulin receptor dynamics were proposed [53–57], in which either a subset of the whole process was considered or several subunits were lumped into single reaction steps. More detailed models of insulin signaling pathways were developed and simulation studies were performed in [58–60]. However, parameter values such as rate constants in these models were partly taken from the literature and partly estimated from experimental data.
A general five-compartment IR dynamics model was developed in [49], and its parameters were estimated by simultaneously fitting the model to the measured datasets published in [55, 61, 62]. This model describes the endosomal trafficking dynamics of the hepatic insulin receptor, consisting of IR autophosphorylation after receptor binding, IR endosomal internalization and trafficking, insulin dissociation from and dephosphorylation of the internalized IR, and finally recycling of the insulin-free, dephosphorylated IR to the plasma membrane [49]. The state equations of the general five-compartment model are given as follows [49]
$$ \begin{array}{l}{\dot{x}}_1={p}_{12}{x}_2+{p}_{15}{x}_5-\left({p}_{21}u+{p}_{51}\right){x}_1\\ {}{\dot{x}}_2={p}_{21}u\kern0.1em {x}_1-\left({p}_{12}+{p}_{32}\right){x}_2\\ {}{\dot{x}}_3={p}_{32}{x}_2-{p}_{43}{x}_3\\ {}{\dot{x}}_4={p}_{43}{x}_3-{p}_{54}{x}_4\\ {}{\dot{x}}_5={p}_{51}{x}_1+{p}_{54}{x}_4-{p}_{15}{x}_5\end{array} $$
(46)
where the state variables denote the concentrations of the components: $x_1$ the unbound surface IR, $x_2$ the bound surface IR, $x_3$ the bound, phosphorylated surface IR, $x_4$ the bound, phosphorylated internalized IR, and $x_5$ the unbound internalized IR. The control variable $u$ is a constant insulin input (100 nM), while the initial condition of Eq. (46) is given as $x_1(0) = 100\,\%$ and $x_i(0) = 0$ for $i \neq 1$ [49].
Since the control variable $u$ is constant, the nonlinear term $p_{21}u$ in Eq. (46) can be regarded as a single parameter $p^{\prime}_{21}$ to be estimated. As a result, Eq. (46) becomes a linear model that can be analyzed by our method. Here, we consider the three measured datasets used in [49] for parameter estimation, i.e. IR autophosphorylation from [61], IR internalization from [55], and remaining surface IR from [55]. These measured species are mixtures of the components denoted as state variables in Eq. (46), leading to the following output equations [49]
$$ {y}_1={x}_3+{x}_4 $$
(47)
$$ {y}_2={x}_4+{x}_5 $$
(48)
$$ {y}_3={x}_2+{x}_3 $$
(49)
where $y_1$ is the percentage of total (surface and intracellular) phosphorylated IR, $y_2$ is the percentage of total internalized IR, and $y_3$ is the percentage of total IR remaining on the cell surface.
We are concerned with the identifiability of the 7 parameters in Eq. (46) when one or a combination of the above output equations is used for parameter estimation. Based on our method, it is found that, when only Eq. (47) is employed as the output equation, 4 parameters ($p^{\prime}_{21}$, $p_{51}$, $p_{12}$, $p_{32}$) are non-identifiable, while 3 parameters ($p_{43}$, $p_{54}$, $p_{15}$) are identifiable. In particular, the relationship of the 4 correlated parameters is expressed as follows
$$ \begin{array}{l}\frac{\partial {Y}_1(s)}{\partial {p}_{21}^{\prime }}+\frac{p_{51}}{\left({p}_{12}-{p}_{15}+{p}_{32}-{p}_{51}\right)}\frac{\partial {Y}_1(s)}{\partial {p}_{51}}\\ {}-\frac{\left({p}_{12}{p}_{21}^{\prime }-{p}_{12}{p}_{32}-{p}_{15}{p}_{21}^{\prime }+{p}_{15}{p}_{32}+{p}_{21}^{\prime }{p}_{32}-{p}_{32}^2+{p}_{32}{p}_{51}\right)}{p_{21}^{\prime}\left({p}_{12}-{p}_{15}+{p}_{32}-{p}_{51}\right)}\frac{\partial {Y}_1(s)}{\partial {p}_{12}}-\frac{p_{32}}{p_{21}^{\prime }}\frac{\partial {Y}_1(s)}{\partial {p}_{32}}=0\end{array} $$
(50)
This means that, using a dataset of IR autophosphorylation (i.e. $y_1 = x_3 + x_4$), it is impossible to estimate all of the parameters of the model, even if the data are noise-free. Nevertheless, our computational results show that all 7 parameters are identifiable, i.e. there is no parameter correlation, when either Eq. (48) or Eq. (49) is used as the output equation. As a result, either a dataset of IR internalization (i.e. $y_2 = x_4 + x_5$) or a dataset of the remaining surface IR (i.e. $y_3 = x_2 + x_3$) is sufficient for unique estimation of the 7 parameters of the model, provided the measured data are noise-free. Obviously, unique estimation can also be achieved when a combination of the 3 output equations is used (i.e., by simultaneously fitting the model to two or three of the datasets from [55] and [61]), as was done in [49].
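For readers who wish to probe this analysis numerically, the following sketch (our own, with illustrative nominal parameter values rather than the estimates of [49]) computes finite-difference output sensitivities and inspects their singular values; a near-zero singular value for $y_1$ is consistent with the correlation in Eq. (50), whereas none is expected for $y_2$ or $y_3$.

```python
# Local-identifiability probe of Eq. (46): stack finite-difference output
# sensitivities over time and look for a collapsing singular value.
import numpy as np
from scipy.integrate import solve_ivp

def outputs(p, t):
    """Simulate Eq. (46) with u folded into p'21 and return the outputs of Eqs. (47)-(49)."""
    p21u, p51, p12, p32, p43, p54, p15 = p        # p21u stands for p'21 = p21*u
    rhs = lambda _, x: [p12*x[1] + p15*x[4] - (p21u + p51)*x[0],
                        p21u*x[0] - (p12 + p32)*x[1],
                        p32*x[1] - p43*x[2],
                        p43*x[2] - p54*x[3],
                        p51*x[0] + p54*x[3] - p15*x[4]]
    x = solve_ivp(rhs, (t[0], t[-1]), [100.0, 0.0, 0.0, 0.0, 0.0], t_eval=t,
                  rtol=1e-10, atol=1e-12).y
    return {'y1': x[2] + x[3], 'y2': x[3] + x[4], 'y3': x[1] + x[2]}

t = np.linspace(0.0, 60.0, 121)
p0 = np.array([0.2, 0.05, 0.1, 0.5, 0.3, 0.2, 0.1])   # illustrative nominal values only

for name in ('y1', 'y2', 'y3'):
    cols = []
    for i in range(len(p0)):
        dp = np.zeros_like(p0)
        dp[i] = 1e-5 * p0[i]
        cols.append((outputs(p0 + dp, t)[name] - outputs(p0 - dp, t)[name]) / (2*dp[i]))
    sv = np.linalg.svd(np.column_stack(cols), compute_uv=False)
    print(name, 'smallest/largest singular value:', sv[-1]/sv[0])
```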