Volume 8 Supplement 3

## Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine (BIBM 2013): Systems Biology Approaches to Biomedicine

# A group LASSO-based method for robustly inferring gene regulatory networks from multiple time-course datasets

- Li-Zhi Liu^{1}, Fang-Xiang Wu^{1, 2} (corresponding author) and Wen-Jun Zhang^{1, 2}

**8(Suppl 3)**:S1

https://doi.org/10.1186/1752-0509-8-S3-S1

© Liu et al.; licensee BioMed Central Ltd. 2014

**Published: **22 October 2014

## Abstract

### Background

As an abstract mapping of the gene regulations in a cell, a gene regulatory network is important to both biological research and practical applications. The reverse engineering of gene regulatory networks from microarray gene expression data is a challenging research problem in systems biology. With the development of biological technologies, multiple time-course gene expression datasets may be collected for a specific gene network under different circumstances. The inference of a gene regulatory network can be improved by integrating these multiple datasets. It is also known that gene expression data may be contaminated with large errors or outliers, which may degrade the inference results.

### Results

A novel method, the Huber group LASSO, is proposed to infer the same underlying network topology from multiple time-course gene expression datasets while taking robustness to large errors or outliers into account. To solve the optimization problem involved in the proposed method, an efficient algorithm which combines the ideas of auxiliary function minimization and block descent is developed. A stability selection method is adapted to our method to find a network topology consisting of edges with scores. The proposed method is applied to both simulation datasets and real experimental datasets. Results show that the Huber group LASSO outperforms the group LASSO in terms of both the areas under the receiver operating characteristic curves and the areas under the precision-recall curves.

### Conclusions

The convergence analysis of the algorithm theoretically shows that the sequence generated from the algorithm converges to the optimal solution of the problem. The simulation and real data examples demonstrate the effectiveness of the Huber group LASSO in integrating multiple time-course gene expression datasets and improving the resistance to large errors or outliers.

### Keywords

Gene regulatory network; Reverse engineering; Group LASSO; Optimization; Gene expression data

## Background

A gene regulatory network (GRN) consists of a set of genes and the regulatory relationships among them. Tremendous amounts of microarray data that measure the expression levels of genes under specific conditions have been obtained from experiments. Reconstructing or "reverse engineering" GRNs, i.e., retrieving the underlying regulatory relationships between genes from microarray data, is a challenging problem in systems biology. Various approaches have been developed to infer GRNs from microarray data. Most of them fall into two categories: parametric or model-based methods and nonparametric or dependency-measure-based methods. Commonly used models include ordinary differential equations [1], Gaussian graphical models [2] and Bayesian networks [3]. Dependency measures include the partial correlation coefficient [4], mutual information [5], and the *z*-score [6].

The reconstruction of GRNs is a non-trivial problem. On the one hand, the number of possible network topologies grows exponentially as the number of genes increases. On the other hand, the information in the microarray data is quite limited. The data contain substantial inherent noise generated by the devices or the experimental processes. For large-scale networks, the number of observations is usually much smaller than the number of genes, which is known as the "dimensionality problem" [2, 7]. The lack of observations and the high dimensionality of the data prohibit the direct application of traditional methods and make the inference task extremely challenging.

As more and more microarray datasets on the same species are produced by different laboratories, their integration can lead to more robust and more reliable results. Methods that integrate multiple datasets can synergize the strengths of the individual datasets: they either infer a more accurate network, if all the integrated datasets are of high quality, or infer a robust network that is better than the worst network inferred from a single dataset. However, multiple time-course datasets cannot simply be combined into one dataset, as there are no temporal dependencies between the datasets. Wang et al. [8] propose a linear programming framework to integrate multiple time-course gene expression datasets and infer the network topology most consistent with all datasets. In their method, the regulatory strengths between genes are assumed to be the same across all datasets. However, different datasets may be produced under different circumstances, which may result in different regulatory strengths between genes. Another problem is that the value of the tuning parameter in their method, which controls the degree of sparsity of the inferred network, is only determined intuitively. Chen et al. [9] infer one GRN from each time-course dataset separately and combine the edges of the inferred GRNs using a strategy similar to majority vote. Using each dataset separately in the inference process may miss the opportunity to exploit information in the other datasets, and the tuning parameter is again determined intuitively.

This study focuses on inferring the topologies of GRNs from multiple time-course gene expression datasets based on an autoregressive model. We assume that one GRN corresponds to one dataset and that these GRNs share the same topology across all datasets. By assigning the parameters representing the regulatory strengths of the same edge into one group, the group LASSO [10] can be applied to find the sparse network topology. Microarray data typically contain noise and outliers, which can severely degrade the quality of the inferred results. Rosset and Zhu [11] propose a robust version of the LASSO by replacing its squared-error loss with the Huber loss. We propose to use the Huber loss to extend the group LASSO such that the new method, the Huber group LASSO, is more resistant to large noise and outliers.
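The robustness mechanism is easy to see numerically: the squared-error loss grows quadratically in the residual, while the Huber loss switches to linear growth beyond a knot *δ*. A minimal sketch (the standard Huber loss with the conventional δ = 1.345, not the paper's exact objective):

```python
import numpy as np

def huber(t, delta=1.345):
    """Standard Huber loss: quadratic for |t| <= delta, linear beyond."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= delta,
                    0.5 * t**2,
                    delta * np.abs(t) - 0.5 * delta**2)

residuals = np.array([0.1, -0.5, 10.0])   # the last residual is an outlier
sq_loss = 0.5 * residuals**2
hb_loss = huber(residuals)
# The outlier contributes 50.0 to the squared loss but only ~12.55 to the Huber loss.
print(sq_loss[-1], hb_loss[-1])
```

Because the outlier's contribution grows only linearly, it cannot dominate the fit the way it does under the squared-error loss.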

To solve the Huber group LASSO, a new algorithm was developed in our previous work [12], which combines the idea of auxiliary function minimization [13] with the block coordinate descent method [14]. The proposed algorithm is efficient and can also be adapted to solve the group LASSO problem without the orthogonality restriction. In this study, we analyze the convergence of the proposed algorithm and show that the sequence it generates indeed converges to the optimal solution of the Huber group LASSO problem. Instead of picking a specific value of the tuning parameter, which corresponds to a deterministic network topology as in our previous work [12], we adapt the "stability selection" [15] strategy to our method to find a network consisting of edges with probabilities or scores. The Huber group LASSO is applied to both simulation data and real experimental data, and its performance is compared with that of the group LASSO in terms of the areas under the receiver operating characteristic curves (AUROC) and the areas under the precision-recall curves (AUPR). The results show that the Huber group LASSO outperforms the group LASSO and therefore demonstrate the effectiveness of our proposed method.

Briefly, the remainder of the paper is organized as follows. In the Model Section, we introduce the model of the GRN on which the network topology inference is based. In the Results Section, our proposed method is applied to both simulation data and real experimental data, and the results demonstrate the effectiveness of our method. We then conclude this study and point out future work along this line of research in the Conclusions Section. Details of the method and its theoretical analysis can be found in the Method Section.

## Model

An ordinary differential equation model of a GRN with *p* genes is used in this study [16]:

$$\frac{dx\left(t\right)}{dt}=Cx\left(t\right)+Sr,$$

where $x\left(t\right)={\left[{x}_{1}\left(t\right),\dots ,{x}_{p}\left(t\right)\right]}^{T}$ is the vector of mRNA concentrations; $C=\text{diag}\left(-{c}_{1},\dots ,-{c}_{p}\right)$ with *c*_{ i } *>* 0 the degradation rate of gene *i*; the vector $r={\left[{r}_{1},\dots ,{r}_{m}\right]}^{T}\in {\mathcal{R}}^{m}$ represents the reaction rates, which is a function of mRNA concentrations, and $S\in {\mathcal{R}}^{p\times m}$ is the stoichiometric matrix of the network. We assume that the reaction rate vector **r** is a linear combination of the mRNA concentrations, $r=Fx\left(t\right)$, so that

$$\frac{dx\left(t\right)}{dt}=Cx\left(t\right)+Mx\left(t\right),$$

where $M=SF\in {\mathcal{R}}^{p\times p}$. The elements of **M** = (*m*_{ ij })_{1≤i, j≤p} indicate the network topology or the regulatory relationships between genes: *m*_{ ij } *≠* 0 if gene *j* regulates the expression of gene *i*; otherwise, *m*_{ ij } = 0 and gene *j* does not regulate gene *i*.

Discretizing this model with a sampling interval Δ*t* yields the linear model ${x}_{t+1}=A{x}_{t}$ with

$$A={e}^{C\Delta t}+{C}^{-1}\left({e}^{C\Delta t}-I\right)M.$$

Note that *e*^{C Δt} and **C**^{−1}(*e*^{C Δt} *−* **I**) are both diagonal matrices and their diagonal elements are all positive. Thus, the off-diagonal elements of **A** = (*a*_{ ij })_{1≤i,j≤p} have the same zero and nonzero pattern as those of **M**. In this study, we focus on inferring relationships between genes and do not consider self-regulations. As mentioned above, this can be achieved by identifying the nonzero off-diagonal elements of matrix **A**, which can be interpreted as regulatory strengths.

Multiple time-course gene expression datasets for a GRN may be collected under different circumstances. One dataset is assumed to correspond to one inferred GRN, and all inferred GRNs should share the same topology because their corresponding datasets are generated from the same underlying gene network. Our purpose is to reverse engineer the underlying network topology from these multiple datasets. More specifically, suppose we have *m* time-course gene expression datasets for a gene network, $\stackrel{\u0303}{X}\left(1\right),\dots ,\stackrel{\u0303}{X}\left(m\right)$, each of which is measured at *n*_{ k } + 1 time points, i.e., $\stackrel{\u0303}{X}\left(k\right)\in {\mathcal{R}}^{p\times \left({n}_{k}+1\right)}$. According to the model (4), these datasets should satisfy

$$Y\left(k\right)=A\left(k\right)X\left(k\right)+E\left(k\right),\phantom{\rule{1em}{0ex}}k=1,\dots ,m,\phantom{\rule{2em}{0ex}}\left(5\right)$$

where $Y\left(k\right)=\left[{\stackrel{\u0303}{x}}_{2}\left(k\right),\dots ,{\stackrel{\u0303}{x}}_{{n}_{k}+1}\left(k\right)\right]$, the last *n*_{ k } observations; $X\left(k\right)=\left[{\stackrel{\u0303}{x}}_{1}\left(k\right),\dots ,{\stackrel{\u0303}{x}}_{{n}_{k}}\left(k\right)\right]$, the first *n*_{ k } observations; $A\left(k\right)\in {\mathcal{R}}^{p\times p}$, the regulatory matrix for the *k*th dataset; and **E**(*k*), the errors or noise. All **A**(*k*)'s are required to have the same structure, i.e., zero and nonzero pattern, but need not have the same value at every nonzero position, because the gene network is dynamic and regulatory strengths may differ under different circumstances. In this study, we propose to use the group LASSO penalty to implement this requirement and the Huber loss function to achieve robustness. Details of the proposed method are given in the Method Section.
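The data-generating structure of model (5) can be sketched numerically: each dataset *k* has its own regulatory matrix **A**(*k*), but all **A**(*k*)'s share one sparsity template. The sizes, density, and noise level below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n_k = 5, 3, 15              # genes, datasets, time steps (illustrative)

# Shared sparsity template: True marks a (possible) regulation.
support = rng.random((p, p)) < 0.3

def random_A(support, rng, scale=0.2, diag=0.5):
    """A regulatory matrix with the template's zero/nonzero off-diagonal pattern."""
    A = np.where(support, rng.normal(0.0, scale, support.shape), 0.0)
    np.fill_diagonal(A, diag)     # modest self term to keep the dynamics tame
    return A

datasets = []
for k in range(m):                # one A(k) per dataset, same support
    A_k = random_A(support, rng)
    X = np.zeros((p, n_k + 1))
    X[:, 0] = rng.normal(size=p)  # random expression at the first time point
    for t in range(n_k):          # x_{t+1} = A(k) x_t + noise, as in model (5)
        X[:, t + 1] = A_k @ X[:, t] + 0.05 * rng.normal(size=p)
    datasets.append(X)

Y0 = datasets[0][:, 1:]           # Y(1): the last n_k observations
X0 = datasets[0][:, :-1]          # X(1): the first n_k observations
print(Y0.shape, X0.shape)         # (5, 15) (5, 15)
```

Splitting each dataset into its last and first *n*_{k} columns gives exactly the **Y**(*k*) and **X**(*k*) pair that enters the regression.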

## Results

To study the effectiveness of the proposed method, the Huber group LASSO is applied to inferring GRNs from both simulation datasets and real experimental datasets, and its results are compared with those of the group LASSO in terms of both the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPR).

### Simulation example

where + and *−* indicate the existence and regulation types of the edges. We randomly generate *m* stable regulatory matrices **A**(*k*), *k* = 1*, . . . , m*, according to the template **A**_{0}, such that sign(**A**(*k*)) = sign(**A**_{0}). Then, *m* simulated time-course gene expression datasets, each with *n*_{ k } time points, are generated from (5) with randomly chosen expression values at the first time point. The simulated error follows a mixed Gaussian distribution: with probability 0.8 it has the distribution *N*(0, 1), and with probability 0.2 it has the distribution *N*(0, 10^{2}). In this way, the simulated data contain large errors and outliers. To investigate the performance of our methods in different situations, we vary the values of *m* and *n*_{ k }, apply the group LASSO and the Huber group LASSO respectively to the generated datasets, and compare the results of the two methods.
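The contaminated-error scheme described above is straightforward to draw; a sketch, assuming the same 0.8/0.2 mixture of *N*(0, 1) and *N*(0, 10²):

```python
import numpy as np

def mixed_noise(size, rng, p_out=0.2, sigma_in=1.0, sigma_out=10.0):
    """With probability 0.8 draw from N(0, 1); with probability 0.2 from N(0, 10^2)."""
    outlier = rng.random(size) < p_out
    return np.where(outlier,
                    rng.normal(0.0, sigma_out, size),
                    rng.normal(0.0, sigma_in, size))

rng = np.random.default_rng(42)
e = mixed_noise(100000, rng)
# Theoretical variance of the mixture: 0.8 * 1 + 0.2 * 100 = 20.8
print(e.var())
```

Roughly one draw in five is an order of magnitude larger than the nominal noise level, which is precisely the regime the Huber loss is designed to withstand.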

Three situations are considered: (*m* = 8, *n*_{ k } = 15), (*m* = 4, *n*_{ k } = 15) and (*m* = 4, *n*_{ k } = 8). Using the stability selection procedure introduced in the Method Section, network topologies consisting of edges with scores or probabilities are inferred by the Huber group LASSO and the group LASSO. For the first two situations, we set the number of bootstrap samples to 30 and the moving block length to 10. For the third situation, we set the number of bootstrap samples to 30 and the moving block length to 5. Varying the threshold, the ROC plots and precision-recall plots of each method for the different situations are obtained and illustrated in Figure 1. The areas under the ROC curves (AUROCs) and precision-recall curves (AUPRs) are calculated and reported in Table 1. From Figure 1 and Table 1 we can see that for each situation the Huber group LASSO outperforms the group LASSO, i.e., the AUROC and AUPR of the Huber group LASSO are larger than those of the group LASSO. The ROC plots in Figure 1 also show that both methods perform better than random guessing. For the case of *m* = 8 and *n*_{ k } = 15, the Huber group LASSO even achieves the maximum values of AUROC and AUPR. It can also be seen that, for each method, the more observations or datasets are available, the larger the AUROC and AUPR obtained. This accords with intuition because, in this example, more observations or datasets provide more information, as the simulated data are generated under quite similar circumstances. All the simulation results demonstrate the effectiveness of the proposed method.
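For reference, AUROC can be computed directly from ranked edge scores via the rank-sum identity; a minimal sketch with toy scores and labels (not the authors' evaluation code; it assumes no tied scores):

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)     # ascending ranks, 1-based
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Edge scores (e.g. from stability selection) vs. gold-standard labels (toy values).
scores = np.array([0.9, 0.8, 0.3, 0.2, 0.1])
labels = np.array([1,   1,   0,   1,   0])
print(auroc(scores, labels))  # 0.8333... (5 of 6 positive/negative pairs ranked correctly)
```

The identity counts the fraction of (positive, negative) edge pairs in which the positive edge receives the higher score, which is exactly the probability interpretation of AUROC.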

**Table 1** The areas under the ROC curve (AUROC) and the precision-recall curve (AUPR) of the Huber group LASSO and the group LASSO for the simulation datasets under different situations. (SE: the group LASSO with squared-error loss; Huber: the Huber group LASSO.)

| Situation | Method | AUROC | AUPR |
|---|---|---|---|
| 15 observations, 8 datasets | SE | 0.9896 | 0.9852 |
| | Huber | 1.0000 | 1.0000 |
| 15 observations, 4 datasets | SE | 0.9219 | 0.9169 |
| | Huber | 0.9896 | 0.9736 |
| 8 observations, 4 datasets | SE | 0.6719 | 0.5749 |
| | Huber | 0.8385 | 0.8049 |

### *In vivo* reverse engineering and modeling assessment (IRMA) data

The data used in this example come from the *In vivo* Reverse Engineering and Modeling Assessment (IRMA) experiment [17], in which a network composed of five genes (GAL80, GAL4, CBF1, ASH1 and SWI5) was synthesized in the yeast *Saccharomyces cerevisiae*, with the genes regulating each other through a variety of regulatory interactions. The network is negligibly affected by endogenous genes and is responsive to small molecules: galactose and glucose are used to switch the network on and off, respectively. In this study, we use the IRMA time-course data consisting of four switch-off datasets (with the number of time points varying from 19 to 21) and five switch-on datasets (with the number of time points varying from 11 to 16).

**Table 2** The areas under the ROC curve (AUROC) and the precision-recall curve (AUPR) of the Huber group LASSO and the group LASSO for the IRMA datasets.

| Case | Method | AUROC | AUPR |
|---|---|---|---|
| Switch-on datasets | SE | 0.5208 | 0.3711 |
| | Huber | 0.7812 | 0.6971 |
| Switch-off datasets | SE | 0.7344 | 0.6341 |
| | Huber | 0.8125 | 0.7928 |
| All datasets | SE | 0.6302 | 0.5122 |
| | Huber | 0.8438 | 0.8049 |

### *E. coli* SOS network

The network considered here is the *E. coli* SOS DNA repair system, shown in Figure 4. This network is responsible for repairing DNA after damage occurs. In the normal state, LexA acts as the master repressor of many genes. When damage occurs, RecA acts as a sensor: it binds to single-stranded DNA to sense the damage and mediates the autocleavage of LexA. The repression of the SOS genes is halted by the drop in LexA levels, and the SOS genes are activated and start to repair the damage. When the repair is done, the RecA level drops and it stops mediating the autocleavage of LexA. LexA then accumulates and represses the SOS genes again, returning the cell to its normal state.

Four time-course datasets from UV irradiation experiments are used (Experiments 1 and 2: 5 *J m*^{−2}; Experiments 3 and 4: 20 *J m*^{−2}). Each dataset contains measurements of 8 genes at 50 time points. As in other literature, e.g. [18–21], only 6 genes, i.e., uvrD, lexA, umuD, recA, uvrA and polB, are considered, because they are well studied; the gold standard network of these genes is shown in Table 3. Details of the gold standard can be found in [18]. In this study, we do not consider signs or self-regulations.

**Table 3** Gold standard of the SOS network, collected from the literature [18].

| | uvrD | lexA | umuD | recA | uvrA | polB |
|---|---|---|---|---|---|---|
| uvrD | 0 | -1 | -1 | 1 | 1 | 0 |
| lexA | 0 | -1 | -1 | 1 | 0 | 0 |
| umuD | 0 | -1 | -1 | 1 | 0 | 0 |
| recA | 0 | -1 | -1 | 1 | 0 | 0 |
| uvrA | 1 | -1 | -1 | 1 | 0 | 0 |
| polB | 0 | -1 | -1 | 1 | 0 | 0 |

The Huber group LASSO and the group LASSO are applied to the *E. coli* SOS data.

**Table 4** The areas under the ROC curve (AUROC) and the precision-recall curve (AUPR) of the Huber group LASSO and the group LASSO for the *E. coli* SOS datasets.

| Case | Method | AUROC | AUPR |
|---|---|---|---|
| Experiments 1 and 2 | SE | 0.5588 | 0.7225 |
| | Huber | 0.7670 | 0.8649 |
| Experiments 3 and 4 | SE | 0.5204 | 0.6801 |
| | Huber | 0.7941 | 0.8981 |
| All experiment data | SE | 0.5588 | 0.7225 |
| | Huber | 0.7760 | 0.8756 |

### *S. cerevisiae* cell cycle subnetwork

A cell cycle regulatory subnetwork in *S. cerevisiae* is inferred by the proposed method from 5 experimental microarray datasets. As in [22], the subnetwork consists of 27 genes, including 10 genes producing transcription factors (ace2, fkh1, swi4, swi5, mbp1, swi6, mcm1, fkh2, ndd1, yox1) and 17 genes producing cyclin and cyclin/CDK regulatory proteins (cln1, cln2, cln3, cdc20, clb1, clb2, clb4, clb5, clb6, sic1, far1, spo12, apc1, tem1, gin4, swe1 and whi5). The time-course datasets we use include the alpha factor release, cdc15, alpha factor fkh1 fkh2, fkh1,2 alpha factor and elutriation cell-cycle datasets, all downloaded from the Stanford Microarray Database (SMD). We apply the Huber group LASSO and the group LASSO respectively to infer the network from these datasets.

**Table 5** The areas under the ROC curve (AUROC) and the precision-recall curve (AUPR) of the Huber group LASSO and the group LASSO for the cell cycle datasets.

| Method | AUROC | AUPR |
|---|---|---|
| SE | 0.5466 | 0.1844 |
| Huber | 0.5753 | 0.1941 |

## Conclusions

A novel method, the Huber group LASSO, has been proposed to integrate multiple time-course gene expression datasets to infer the underlying GRN topology. As an extension of the group LASSO, it is robust to large noise and outliers. An efficient algorithm which combines the ideas of auxiliary function minimization and block descent has been developed to solve the involved optimization problem. The convergence analysis of the algorithm shows that the sequence generated by the algorithm indeed converges to the optimal solution of the problem. Instead of selecting a specific tuning parameter corresponding to a deterministic network topology, an adapted stability selection procedure is used to produce a network consisting of edges with scores. The applications of the proposed method to the simulation datasets and real experimental datasets show that the Huber group LASSO outperforms the group LASSO in both AUROC and AUPR. They also show that the integration of multiple time-course gene expression datasets by the proposed method leads to reliably inferred network topologies.

The information in gene expression data is quite limited. One direction of future work along this study is to extend the method so that it can integrate other types of data with gene expression data.

## Method

### Huber group LASSO

Given *m* datasets $\stackrel{\u0303}{X}\left(1\right),\dots ,\stackrel{\u0303}{X}\left(m\right)$ satisfying model (5), to ensure that all **A**(*k*)'s have the same structure, the elements of the **A**(*k*)'s at the same position are grouped together and can be inferred by the group LASSO,

where **A**_{ i }(*k*)^{ T } is the *i*th row of the matrix **A**(*k*) and **x**_{ j }(*k*) is the *j*th column of the matrix **X**(*k*). *w*_{ k } is the weight for the *k*th dataset, which can be assigned by experience. In this study, we choose ${w}_{k}={n}_{k}/{\sum}_{k}{n}_{k}$, i.e., the more observations a dataset has, the higher the weight it is assigned. The penalty term in (6) takes advantage of the sparse nature of GRNs and has the effect of making the grouped parameters either all zero or all nonzero [10], i.e., the *a*_{ iℓ }(*k*)'s, *k* = 1*, . . . , m*, become either all zeros or all nonzeros. Therefore, a consistent network topology can be obtained from the group LASSO method. *λ* is a tuning parameter which controls the degree of sparsity of the inferred network: the larger the value of *λ*, the more grouped parameters become zero.
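The grouping underlying the penalty in (6) can be illustrated numerically: for each ordered gene pair (*i*, *ℓ*), the coefficients *a*_{iℓ}(1), …, *a*_{iℓ}(*m*) across the *m* datasets form one group, and the penalty sums the ℓ2 norms of all groups. A sketch with hypothetical coefficient arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 3, 4                          # datasets, genes (illustrative)

# Hypothetical coefficients: A[k, i, l] is the strength of edge l -> i in dataset k.
A = rng.normal(size=(m, p, p))
A[:, 0, 2] = 0.0                     # a shared zero group: this edge is absent everywhere

def group_lasso_penalty(A, lam=1.0):
    """lam * sum over edges (i, l) of the l2 norm of (a_il(1), ..., a_il(m))."""
    group_norms = np.linalg.norm(A, axis=0)   # shape (p, p): one norm per edge
    return lam * group_norms.sum()

print(group_lasso_penalty(A))
```

A group norm is zero exactly when the edge is absent in every dataset, which is how the penalty enforces one shared topology while still letting the strengths *a*_{iℓ}(*k*) differ across datasets.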

Let **Y**_{ i } = [**Y**_{ i }(1)^{ T }*, . . .*, **Y**_{ i }(*m*)^{ T }]^{ T }, the vector stacking the observations of the *i*th target gene across all datasets, where **Y**_{ i }(*k*)^{ T } is the *i*th row of **Y**(*k*). Let **b**_{ iℓ } = [*a*_{ iℓ }(1)*, . . . , a*_{ iℓ }(*m*)]^{ T }, the vector containing the grouped parameters. Denote by ${b}_{i}={\left[{b}_{i1}^{T},\dots ,{b}_{ip}^{T}\right]}^{T}$ the vector containing all parameters related to the regulation of the *i*th target gene. According to the order of the parameters in **b**_{ i }, re-arrange the rows of **X**(*k*) and piece them together to obtain $X={\left[{X}_{1}^{T},\dots ,{X}_{p}^{T}\right]}^{T}$, where **X**_{ i } = diag(**X**_{ i }(1)^{ T }*, . . .*, **X**_{ i }(*m*)^{ T }) with **X**_{ i }(*k*)^{ T } being the *i*th row of **X**(*k*). Then (7) can be rewritten as

where **x**_{ j } is the *j*th column of **X**, *y*_{ ij } is the *j*th element of **Y**_{ i }, $n={\sum}_{k=1}^{m}{n}_{k}$ and ${\omega}_{i}={w}_{1}I\left(i\le {n}_{1}\right)+{\sum}_{k=2}^{m}{w}_{k}I\left({\sum}_{l=1}^{k-1}{n}_{l}<i\le {\sum}_{l=1}^{k}{n}_{l}\right)$. (6) can be rewritten similarly.

### Optimization algorithm

The Huber loss function,

$${H}_{\delta}\left(t\right)=\left\{\begin{array}{ll}{t}^{2}/2,& \left|t\right|\le \delta ,\\ \delta \left|t\right|-{\delta}^{2}/2,& \left|t\right|>\delta ,\end{array}\right.$$

is quadratic near zero and grows only linearly beyond the knots *±δ*. Observe that, by fixing *i*, the problem (9) can be decomposed into *p* sub-optimization problems. For each, we obtain **b**_{ i } by minimizing

where, for notational convenience, we omit the subscript *i*, and **b**_{ ℓ } is a block of parameters of **b**, i.e. $b={\left[{b}_{1}^{T},\dots ,{b}_{p}^{T}\right]}^{T}$.

The idea of the algorithm is to construct an auxiliary function whose successive minimization keeps *J*(**b**) decreasing. As in [13], given any current estimate **b**^{(k)}, a function *Q*(**b** *|* **b**^{(k)}) is an auxiliary function for *J*(**b**) if the conditions (11) hold: $Q\left(b|{b}^{\left(k\right)}\right)\ge J\left(b\right)$ and $Q\left({b}^{\left(k\right)}|{b}^{\left(k\right)}\right)=J\left({b}^{\left(k\right)}\right)$. The loss term is majorized by a quadratic function, in which *γ* is the largest eigenvalue of ${\sum}_{j=1}^{n}{\omega}_{j}{x}_{j}{x}_{j}^{T}$. It can be easily shown that this auxiliary function satisfies (11).

To minimize the auxiliary function with respect to **b**, we apply a block-wise descent strategy [14], i.e., we cyclically optimize one block of parameters, **b**_{ ℓ }, at a time. Denote by ${b}^{\left(k\right)}\left(\ell \right)={\left[{b}_{1}^{{\left(k+1\right)}^{T}},\dots ,{b}_{\ell}^{{\left(k+1\right)}^{T}},{b}_{\ell +1}^{{\left(k\right)}^{T}},\dots ,{b}_{p}^{{\left(k\right)}^{T}}\right]}^{T}$ the vector after updating the *ℓ*th block. Given **b**^{(k)}(*ℓ −* 1), update it to **b**^{(k)}(*ℓ*) by computing (13), where **x**_{ j,ℓ } is the block of elements in **x**_{ j } corresponding to **b**_{ ℓ } and (*·*)_{+} = max(*·*, 0). We repeatedly update every block using (13) until convergence. For a specific value of *λ*, the whole procedure is as follows:

1. Initialize **b**^{(0)}. Set the iteration number *k* = 0.
2. Cycle through (13), one block at a time, to update the *ℓ*th block, *ℓ* = 1*, . . . , p*.
3. If {**b**^{(k)}} converges to **b**^{∗}, go to the next step. Otherwise, set *k* := *k* + 1 and go to Step 2.
4. Return the solution **b**^{∗}.

Note that the algorithm can be adapted to solve (6) with quite similar derivations. In the following section, we show that the sequence {**b**^{(k)}} generated by the algorithm keeps the objective function *J*(**b**) decreasing. We also show that the limit point of the generated sequence is indeed the minimum point of *J*(**b**).
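The overall scheme, a quadratic majorizer minimized block by block with a group soft-thresholding update, can be sketched for the plain squared-error case. This is a generic illustration of the update style, not the paper's exact expressions (12) and (13):

```python
import numpy as np

def group_soft_threshold(z, tau):
    """Shrink the whole block z toward zero: (1 - tau/||z||_2)_+ * z."""
    nz = np.linalg.norm(z)
    if nz == 0.0:
        return z
    return max(0.0, 1.0 - tau / nz) * z

def block_descent(X, y, n_blocks, lam, n_iter=200):
    """Cyclic block descent for (1/2)||y - Xb||^2 + lam * sum_l ||b_l||_2.
    Each pass takes a gradient step on a quadratic majorizer with curvature
    gamma (the largest eigenvalue of X^T X), then group-soft-thresholds."""
    n, d = X.shape
    size = d // n_blocks                        # parameters per block
    gamma = np.linalg.eigvalsh(X.T @ X).max()   # majorizer curvature
    b = np.zeros(d)
    for _ in range(n_iter):
        for l in range(n_blocks):
            sl = slice(l * size, (l + 1) * size)
            grad = X[:, sl].T @ (X @ b - y)     # partial gradient for block l
            b[sl] = group_soft_threshold(b[sl] - grad / gamma, lam / gamma)
    return b

# Toy usage: 2 groups of size 2; the second group is truly zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
b_true = np.array([1.5, -2.0, 0.0, 0.0])
y = X @ b_true + 0.01 * rng.normal(size=30)
b_hat = block_descent(X, y, n_blocks=2, lam=2.0)
print(np.round(b_hat, 2))
```

Because the blockwise subproblem is a proximal step on the majorizer, each block update can only decrease the surrogate objective, mirroring the descent property proved below; the group threshold sets a weak block exactly to zero, exactly as the penalty in (6) is meant to do.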

### Convergence analysis

The convergence of the optimization algorithm for the minimization of (10) is analyzed in a way similar to [25]. We first show the descent property of the algorithm.

**Lemma 1** *The sequence* {**b**^{(k)}} *generated from the optimization algorithm keeps the objective function* J(**b**) *decreasing, i.e.,* J(**b**^{(k)}) ≥ J(**b**^{(k+1)}).

*Proof* By (11) and (13), we have

$\begin{array}{cc}J\left({b}^{\left(k\right)}\right)\hfill & =Q\left({b}^{\left(k\right)}|{b}^{\left(k\right)}\right)\ge Q\left({b}^{\left(k\right)}\left(1\right)|{b}^{\left(k\right)}\right)\hfill \\ \ge Q\left({b}^{\left(k\right)}\left(2\right)|{b}^{\left(k\right)}\left(1\right)\right)\ge \cdots \ge Q\left({b}^{\left(k\right)}\left(p\right)|{b}^{\left(k\right)}\left(p-1\right)\right)\ge J\left({b}^{\left(k\right)}\left(p\right)\right)\hfill \\ =J\left({b}^{\left(k+1\right)}\right).\hfill \end{array}$

Next, we show that if the generated sequence satisfies some conditions, it converges to the optimal solution.

**Lemma 2** *Assume the data* (**y**, **X**) *lies on a compact set and the following conditions are satisfied:*

*1 The sequence* {**b**^{(k)}} *is bounded*.

*2 For every convergent subsequence* $\left\{{b}^{\left({n}_{k}\right)}\right\}\subset \left\{{b}^{\left(k\right)}\right\}$, *the successive differences converge to zero:* ${b}^{\left({n}_{k}\right)}-{b}^{\left({n}_{k}-1\right)}\to 0$.

*Then, every limit point* **b**^{∞} *of the sequence* {**b**^{(k)}} *is a minimum for the function* J(**b**), *i.e., for any* $\delta ={\left({\delta}_{1}^{T},\dots ,{\delta}_{p}^{T}\right)}^{T}\in {\mathbb{R}}^{mp}$,

*Proof* For any $b={\left({b}_{1}^{T},\dots ,{b}_{p}^{T}\right)}^{T}\in {\mathbb{R}}^{mp}$ and $\delta \left(j\right)={\left({0}^{T},\dots ,{\delta}_{j}^{T},\dots ,{0}^{T}\right)}^{T}\in {\mathbb{R}}^{mp}$, the directional derivative of *J* along *δ*(*j*) splits into two terms, where ∇_{ j } represents the partial derivatives with respect to the *j*th block of parameters. Denote the second term by *∂P*(**b**_{ j }; *δ*_{ j }); it satisfies the stated bound

since ${b}_{j}^{T}{\delta}_{j}\le {\u2225{b}_{j}\u2225}_{2}{\u2225{\delta}_{j}\u2225}_{2}$.

Since **b**^{∞} minimizes the objective along each *j*th block of parameters, using (14), we have

for any 1 *≤ j ≤ p*.

For $\delta ={\left({\delta}_{1}^{T},\dots ,{\delta}_{p}^{T}\right)}^{T}\in {\mathbb{R}}^{mp}$, due to the differentiability of *f* (**b**),

$\begin{array}{cc}{\displaystyle \underset{\alpha \downarrow 0+}{\text{lim}\text{inf}}}\left\{\frac{J\left({b}^{\infty}+\alpha \delta \right)-J\left({b}^{\infty}\right)}{\alpha}\right\}\hfill & ={\displaystyle \sum _{j=1}^{p}}{\nabla}_{j}f{\left({b}^{\infty}\right)}^{T}{\delta}_{j}+{\displaystyle \sum _{j=1}^{p}}\underset{\alpha \downarrow 0+}{\text{lim}\text{inf}}\left\{\frac{\lambda \left({\u2225{b}_{j}^{\infty}+\alpha {\delta}_{j}\u2225}_{2}-{\u2225{b}_{j}^{\infty}\u2225}_{2}\right)}{\alpha}\right\}\hfill \\ ={\displaystyle \sum _{j=1}^{p}}\left\{{\nabla}_{j}f{\left({b}^{\infty}\right)}^{T}{\delta}_{j}+\partial P\left({b}_{j}^{\infty};{\delta}_{j}\right)\right\}\ge 0.\hfill \end{array}$

Finally, we show that the sequence generated from the proposed algorithm satisfies these two conditions.

**Theorem 3** Assuming the data (**y**, **X**) lies on a compact set and no column of **X** is identically **0**, the sequence {**b**^{(k)}} generated from the algorithm converges to the minimum point of the objective function J (**b**).

*Proof* We only need to show that the generated sequence meets the conditions in Lemma 2.

For fixed *j* and fixed ${\left({b}_{1}^{T},\dots ,{b}_{j-1}^{T},{b}_{j+1}^{T},\dots ,{b}_{p}^{T}\right)}^{T}$, let **b**(**u**) be the vector containing **u** as its *j*th block of parameters, with the other blocks held at the fixed values. Let **u** + *δ* and **u** represent the values of the *j*th block of parameters before and after the block update, respectively. Hence, as defined in (12), **u** is obtained by minimizing the auxiliary function with respect to the *j*th block in the algorithm, and the minimizer **u** should satisfy

where **s** = **u***/||***u**||_{2} if **u** ≠ 0, and ||**s**||_{2} *≤* 1 if **u** = 0. Then, we have

for some *τ* ∈ (0, 1), and by (20). For the first inequality, the following property of the Huber loss function and the property of the subgradient are used.

where ${b}^{\left(k\right)}\left(j\right)={\left[{b}_{1}^{{\left(k+1\right)}^{T}},\dots ,{b}_{j}^{{\left(k+1\right)}^{T}},{b}_{j+1}^{{\left(k\right)}^{T}},\dots ,{b}_{p}^{{\left(k\right)}^{T}}\right]}^{T}.$

Summing over *k*, we have

Note that by Lemma 1, {*J*(**b**^{(k)})} converges, as it keeps decreasing and is bounded from below. The convergence of {*J*(**b**^{(k)})} yields the convergence of {**b**^{(k)}}. Hence, the conditions of Lemma 2 hold, which implies that the limit of {**b**^{(k)}} is the minimum point of *J*(**b**).

### Implementation

The tuning parameter *λ* controls the sparsity of the resulting network. A network solution path can be obtained by computing networks on a grid of *λ* values, from ${\lambda}_{\text{max}}=\underset{i,\ell}{\text{max}}\u2225{\sum}_{j=1}^{n}{\omega}_{j}{{H}^{\prime}}_{\delta}\left({y}_{ij}\right){x}_{j,\ell}\u2225$, which is the smallest value that gives the empty network, down to a small value, e.g. *λ*_{min} = 0.01 *λ*_{max}. In our previous work [12], the BIC criterion is used to pick a specific *λ* value, which corresponds to a deterministic network topology. A method called "stability selection", recently proposed by Meinshausen and Bühlmann [15], instead finds a network with probabilities for the edges. Stability selection runs the network inference method, e.g. the group LASSO, many times, resampling the data in each run and computing the frequency with which each edge is selected across these runs. It has been used with a linear regression method to infer GRNs from steady-state gene expression data in Haury et al. [26] and has shown promising effectiveness. In this study, we adapt the stability selection method to find the GRN topology from multiple time-course gene expression datasets. Given a family of *m* time-course gene expression datasets $\stackrel{\u0303}{X}\left(k\right)\in {\mathcal{R}}^{p\times {n}_{k}}$, *k* = 1*, . . . , m*, for a specific *λ* ∈ Λ, the stability selection procedure is as follows:

1. Use the moving block bootstrap to draw *N* bootstrap samples from every dataset to form *N* bootstrap families of multiple time-course datasets, i.e. ${\left\{\stackrel{\u0303}{X}{\left(k\right)}^{*\left(b\right)}\right\}}_{k=1}^{m}$, *b* = 1, . . . , *N*.
2. Use the proposed Huber group LASSO to infer ${\left\{A{\left(k\right)}_{\lambda}^{*\left(b\right)}\right\}}_{k=1}^{m}$ from the *b*th bootstrap family of datasets. Denote by ${A}_{\lambda}^{*\left(b\right)}$ the network topology shared by ${\left\{A{\left(k\right)}_{\lambda}^{*\left(b\right)}\right\}}_{k=1}^{m}$.
3. Compute the frequency of each edge (*i, j*), i.e., from gene *j* to gene *i*, in the network:

$$\prod _{\lambda}\left(i,j\right)=\frac{\#\left\{b:{A}_{\lambda}^{*\left(b\right)}\left(i,j\right)\ne 0\right\}}{N},\phantom{\rule{2em}{0ex}}\left(24\right)$$

where ${A}_{\lambda}^{*\left(b\right)}\left(i,j\right)$ is the (*i, j*)th entry of ${A}_{\lambda}^{*\left(b\right)}$ and #{*·*} is the number of elements in the set.

Repeating the procedure for every *λ ∈* Λ, the probability or score of each edge in the inferred network is taken as the maximum of the frequencies in (24) over all *λ ∈* Λ. The final network topology can then be obtained by setting a threshold: edges whose probabilities or scores fall below the threshold are considered nonexistent. This study only focuses on producing a list of edges with scores; the selection of the threshold is not discussed here. The stability selection procedure can also be applied with the group LASSO method (6).
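Step 3 of the procedure reduces to averaging indicator matrices over the bootstrap runs; a sketch in which sparse random matrices stand in for the Huber group LASSO estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 4, 30                    # genes, bootstrap families (illustrative)

# Hypothetical bootstrap estimates standing in for the Huber group LASSO output.
boot_nets = [np.where(rng.random((p, p)) < 0.4, rng.normal(size=(p, p)), 0.0)
             for _ in range(N)]

def edge_frequencies(boot_nets):
    """Pi_lambda(i, j): fraction of bootstrap runs in which edge (i, j) is nonzero."""
    indicators = np.stack([(A != 0).astype(float) for A in boot_nets])
    return indicators.mean(axis=0)

Pi = edge_frequencies(boot_nets)
print(Pi.shape)                 # (4, 4); every entry lies in [0, 1]
```

Each entry of `Pi` is the selection frequency of one directed edge at the given *λ*; taking the elementwise maximum over the *λ* grid then yields the edge scores.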

Since the data used are time series, the moving block bootstrap method is employed in the first step to draw bootstrap samples from each dataset. For a dataset with *n* observations, the moving block bootstrap with block length *l* splits the data into *n − l* + 1 overlapping blocks: block *j* consists of observations *j* to *j* + *l −* 1, *j* = 1*, . . . , n − l* + 1. Then [*n/l*] blocks are randomly drawn from the *n − l* + 1 blocks with replacement and are aligned in the order they are picked to form a bootstrap sample.
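The moving block bootstrap itself can be sketched as follows; trimming the concatenated blocks back to the original series length is an implementation assumption of this sketch:

```python
import numpy as np

def moving_block_bootstrap(data, l, rng):
    """Resample a (p, n) time-course by drawing overlapping blocks of length l."""
    p, n = data.shape
    blocks = [data[:, j:j + l] for j in range(n - l + 1)]  # n - l + 1 blocks
    n_draw = int(np.ceil(n / l))                           # enough blocks to cover n
    picked = rng.integers(0, len(blocks), size=n_draw)     # draw with replacement
    sample = np.concatenate([blocks[j] for j in picked], axis=1)
    return sample[:, :n]                                   # trim to the original length

rng = np.random.default_rng(0)
data = np.arange(24, dtype=float).reshape(2, 12)           # toy 2-gene, 12-point series
boot = moving_block_bootstrap(data, l=5, rng=rng)
print(boot.shape)                                          # (2, 12)
```

Drawing whole blocks rather than individual time points preserves the short-range temporal dependence that model (5) relies on.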

Another tuning parameter, *δ*, controls the degree of robustness. A common choice is $\delta =1.345\widehat{\sigma}$, where $\widehat{\sigma}$ is the estimated standard deviation of the error, computed as $\widehat{\sigma}=MAD/0.6745$ with *MAD* being the median absolute deviation of the residuals. In this study, we use least absolute deviations (LAD) regression to obtain the residuals. To avoid the overfitting of LAD, which can lead to a very small *δ*, we set $\delta =\text{max}\left(1.345\widehat{\sigma},1\right)$.
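The choice of *δ* can be sketched as follows, assuming the LAD residuals have already been computed. The function name is illustrative, and the MAD is taken about the median of the residuals, a standard convention the paper does not spell out.

```python
import numpy as np

def robust_delta(residuals):
    """delta = max(1.345 * sigma_hat, 1), where
    sigma_hat = MAD / 0.6745 and MAD is the median absolute
    deviation of the (e.g. LAD-regression) residuals."""
    residuals = np.asarray(residuals, dtype=float)
    mad = np.median(np.abs(residuals - np.median(residuals)))
    sigma_hat = mad / 0.6745
    return max(1.345 * sigma_hat, 1.0)
```

The factor 0.6745 makes $\widehat{\sigma}$ consistent for the standard deviation under Gaussian errors, while the floor at 1 guards against a near-zero MAD when LAD overfits.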

## Declarations


Publication of this article was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC).

This article has been published as part of *BMC Systems Biology* Volume 8 Supplement 3, 2014: IEEE International Conference on Bioinformatics and Biomedicine (BIBM 2013): Systems Biology Approaches to Biomedicine. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcsystbiol/supplements/8/S3.

## References

- Liu LZ, Wu FX, Zhang W: Inference of Biological S-System Using the Separable Estimation Method and the Genetic Algorithm. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2012, 9 (4): 955-965.
- Peng J, Wang P, Zhou N, Zhu J: Partial Correlation Estimation by Joint Sparse Regression Models. Journal of the American Statistical Association. 2009, 104 (486): 735-746. 10.1198/jasa.2009.0126. [PMID: 19881892]
- Husmeier D: Sensitivity and specificity of inferring genetic regulatory interactions from microarray experiments with dynamic Bayesian networks. Bioinformatics. 2003, 19 (17): 2271-2282. 10.1093/bioinformatics/btg313.
- Saito S, Hirokawa T, Horimoto K: Discovery of Chemical Compound Groups with Common Structures by a Network Analysis Approach (Affinity Prediction Method). Journal of Chemical Information and Modeling. 2011, 51: 61-68. 10.1021/ci100262s.
- Basso K, Margolin A, Stolovitzky G, Klein U, Dalla-Favera R, Califano A: Reverse engineering of regulatory networks in human B cells. Nature Genetics. 2005, 37 (4): 382-390. 10.1038/ng1532.
- Pinna A, Soranzo N, de la Fuente A: From knockouts to networks: establishing direct cause-effect relationships through graph analysis. PLoS ONE. 2010, 5 (10).
- Tenenhaus A, Guillemot V, Gidrol X, Frouin V: Gene Association Networks from Microarray Data Using a Regularized Estimation of Partial Correlation Based on PLS Regression. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2010, 7 (2): 251-262.
- Wang Y, Joshi T, Zhang X, Xu D, Chen L: Inferring gene regulatory networks from multiple microarray datasets. Bioinformatics. 2006, 22 (19): 2413-2420. 10.1093/bioinformatics/btl396.
- Chen BL, Liu LZ, Wu FX: Inferring gene regulatory networks from multiple time course gene expression datasets. Systems Biology (ISB), 2011 IEEE International Conference on. 2011, 12-17.
- Yuan M, Lin Y: Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2006, 68: 49-67. 10.1111/j.1467-9868.2005.00532.x.
- Rosset S, Zhu J: Piecewise linear regularized solution paths. Annals of Statistics. 2007, 35 (3): 1012-1030. 10.1214/009053606000001370.
- Liu LZ, Wu FX, Zhang WJ: Robust inference of gene regulatory networks from multiple microarray datasets. Bioinformatics and Biomedicine (BIBM), 2013 IEEE International Conference on. 2013, 544-547.
- Lee DD, Seung HS: Algorithms for non-negative matrix factorization. Advances in Neural Information Processing Systems. 2001, 13: 556-562.
- Tseng P: Convergence of a Block Coordinate Descent Method for Nondifferentiable Minimization. Journal of Optimization Theory and Applications. 2001, 109 (3): 475-494. 10.1023/A:1017501703105.
- Meinshausen N, Bühlmann P: Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2010, 72 (4): 417-473. 10.1111/j.1467-9868.2010.00740.x.
- Wu FX, Liu LZ, Xia ZH: Identification of gene regulatory networks from time course gene expression data. Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE. 2010, 795-798.
- Cantone I, Marucci L, Iorio F, Ricci MA, Belcastro V, Bansal M, Santini S, di Bernardo M, di Bernardo D, Cosma MP: A yeast synthetic network for in vivo assessment of reverse-engineering and modeling approaches. Cell. 2009, 137: 172-181. 10.1016/j.cell.2009.01.055.
- Yang X, Dent JE, Nardini C: An S-System Parameter Estimation Method (SPEM) for Biological Networks. Journal of Computational Biology. 2012, 19 (2): 175-187. 10.1089/cmb.2011.0269.
- Kabir M, Noman N, Iba H: Reverse engineering gene regulatory network from microarray data using linear time-variant model. BMC Bioinformatics. 2010, 11 (Suppl 1): S56. 10.1186/1471-2105-11-S1-S56.
- Kimura S, Nakayama S, Hatakeyama M: Genetic network inference as a series of discrimination tasks. Bioinformatics. 2009, 25 (7): 918-925. 10.1093/bioinformatics/btp072.
- Hsiao YT, Lee WP: Inferring robust gene networks from expression data by a sensitivity-based incremental evolution method. BMC Bioinformatics. 2012, 13 (Suppl 7): S8. 10.1186/1471-2105-13-S7-S8.
- Montefusco F, Cosentino C, Amato F: CORE-Net: exploiting prior knowledge and preferential attachment to infer biological interaction networks. Systems Biology, IET. 2010, 4 (5): 296-310. 10.1049/iet-syb.2009.0047.
- Stark C, Breitkreutz BJ, Reguly T, Boucher L, Breitkreutz A, Tyers M: BioGRID: a general repository for interaction datasets. Nucleic Acids Research. 2006, 34 (suppl 1): D535-D539.
- Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics). 2011, New York: Springer.
- Mazumder R, Friedman JH, Hastie T: SparseNet: Coordinate Descent With Nonconvex Penalties. Journal of the American Statistical Association. 2011, 106 (495): 1125-1138. 10.1198/jasa.2011.tm09738.
- Haury AC, Mordelet F, Vera-Licona P, Vert JP: TIGRESS: Trustful Inference of Gene REgulation using Stability Selection. BMC Systems Biology. 2012, 6: 145. 10.1186/1752-0509-6-145.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.