- Methodology article
- Open Access
ENNET: inferring large gene regulatory networks from expression data using gradient boosting
- Janusz Sławek^{1} and
- Tomasz Arodź^{1}
https://doi.org/10.1186/1752-0509-7-106
© Sławek and Arodź; licensee BioMed Central Ltd. 2013
- Received: 24 June 2013
- Accepted: 17 October 2013
- Published: 22 October 2013
Abstract
Background
The regulation of gene expression by transcription factors is a key determinant of cellular phenotypes. Deciphering genome-wide networks that capture which transcription factors regulate which genes is one of the major efforts towards understanding and accurate modeling of living systems. However, reverse-engineering the network from gene expression profiles remains a challenge, because the data are noisy, high dimensional and sparse, and the regulation is often obscured by indirect connections.
Results
We introduce a gene regulatory network inference algorithm ENNET, which reverse-engineers networks of transcriptional regulation from a variety of expression profiles with superior accuracy compared to state-of-the-art methods. The proposed method relies on the boosting of regression stumps combined with a relative variable importance measure for the initial scoring of transcription factors with respect to each gene. Then, we propose a technique for using the distribution of the initial scores, and information about knockouts, to refine the predictions. We evaluated the proposed method on the DREAM3, DREAM4 and DREAM5 data sets and achieved higher accuracy than the winners of those competitions and other established methods.
Conclusions
Superior accuracy achieved on the three different benchmark data sets shows that ENNET is a top contender in the task of network inference. It is a versatile method that uses information about which gene was knocked out in which experiment, if such information is available, but remains the top performer even without it. ENNET is available for download from https://github.com/slawekj/ennet under the GNU GPLv3 license.
Keywords
- Gene regulatory networks
- Network inference
- Ensemble learning
- Boosting
Background
Regulation of gene expression is a key driver of adaptation of living systems to changes in the environment and to external stimuli, and abnormalities in this highly coordinated process underlie many pathologies. At the transcription level, the control of the amount of mRNA transcripts involves epigenetic factors such as DNA methylation and, in eukaryotes, chromatin remodeling. But the key role in both prokaryotes and eukaryotes is played by transcription factors (TFs), that is, proteins that bind to DNA in the regulatory regions of specific genes and act as repressors or inducers of their expression. Many interactions between transcription factors and the genes they regulate have been discovered through traditional molecular biology experiments. With the introduction of high-throughput experimental techniques for measuring gene expression, such as DNA microarrays and RNA-Seq, the goal moved to reverse-engineering genome-wide gene regulatory networks (GRNs) [1]. Knowledge of GRNs can facilitate mechanistic hypotheses about differences between phenotypes and sources of pathologies, and can help in drug discovery and bioengineering.
High-throughput techniques allow for collecting genome-wide snapshots of gene expression across different experiments, such as diverse treatments or other perturbations of cells [2]. Analyzing these data to infer the regulatory network is one of the key challenges in computational systems biology. The difficulty of this task arises from the nature of the data: they are typically noisy, high dimensional, and sparse [3]. Moreover, discovering direct causal relationships between genes in the presence of multiple indirect ones is not trivial given the limited number of knockouts and other controlled experiments. Attempts to solve this problem are motivated from a variety of different perspectives. Most existing computational methods are examples of influence modeling, where the expression of a target transcript is modeled as a function of the expression levels of selected transcription factors. Such a model does not aim to describe physical interactions between molecules, but instead uses inductive reasoning to find a network of dependencies that could explain the regularities observed in the expression data. In other words, it does not explain mechanistically how transcription factors interact with the regulated genes, but indicates candidate interactions with strong evidence in the expression data. This knowledge is crucial for prioritizing detailed studies of the mechanics of transcriptional regulation.
One group of existing methods describes a GRN as a system of ordinary differential equations, where the rate of change in the expression of a transcript is given by a function of the concentration levels of the transcription factors that regulate it. Network inference then includes two steps: the selection of a model and the estimation of its parameters. Popular models assume linear functions a priori [4–7]. Bayesian Best Subset Regression (BBSR) [8] has been proposed as a novel model selection approach, which uses the Bayesian Information Criterion (BIC) to select an optimal model for each target gene. Another group of methods employs probabilistic graphical models that analyze multivariate joint probability distributions over the observations, usually with the use of Bayesian Networks (BN) [9–11] or Markov Networks (MN) [12]. Various heuristic search schemes have been proposed to find the parameters of the model, such as greedy hill climbing or the Markov Chain Monte Carlo approach [13]. However, because learning optimal Bayesian networks from expression data is computationally intensive, it remains impractical for genome-wide networks.
Other approaches are motivated by statistics and information theory. TwixTwir [14] uses a double two-way t-test to score transcriptional regulations. The null-mutant z-score algorithm [15] scores interactions based on a z-score-transformed knockout expression matrix. Various algorithms rely on estimating and analyzing the cross-correlation and mutual information (MI) of gene expression in order to construct a GRN [16–20], including the ANOVA η^{2} method [21]. Improvements aimed at removing indirect edges from triples of genes have been proposed, including techniques such as the Data Processing Inequality in ARACNE [22, 23] and the adaptive background correction in CLR [24]. Another method, NARROMI [25], eliminates redundant interactions from the MI matrix by applying ODE-based recursive optimization, which involves solving a standard linear programming model.
Recently, machine-learning theory has been used to formulate the network inference problem as a series of supervised gene selection procedures, where each gene in turn is designated as the target output. One example is MRNET [26], which applies the maximum relevance/minimum redundancy (MRMR) [27] principle to rank the set of transcription factors according to the difference between mutual information with the target transcript (maximum relevance) and the average mutual information with all the previously ranked transcription factors (minimum redundancy). GENIE3 [28] employs the Random Forest algorithm to score important transcription factors, using the embedded relative importance measure of input variables as a ranking criterion. TIGRESS [29] follows a similar approach, but is based on least angle regression (LARS). Recently, boosting [30, 31] was also used to score the importance of transcription factors, in the ADANET [32] and OKVAR-Boost [33] methods.
In this paper, we propose a method that combines gradient boosting with regression stumps, augmented with statistical re-estimation procedures for prioritizing a selected subset of edges based on results from the machine-learning models. We evaluated our method on the DREAM3, DREAM4 and DREAM5 network inference data sets, and achieved results that in all cases were better than the currently available methods.
Methods
The ENNET algorithm
Formulating the gene network inference problem
The proposed algorithm returns a directed graph of regulatory interactions between P genes in the form of a weighted adjacency matrix $V\in {\mathbb{R}}^{P\times P}$, where v_{i,j} represents the regulation of gene j by gene i. As an input, it takes gene expression data from a set of experiments, together with meta-data describing the conditions of the experiments, including which genes were knocked out. Usually, the raw expression data need to be pre-processed before any inference method can be applied to reverse-engineer a GRN. Pre-processing has a range of meanings; here it refers to reducing variations and artifacts that are not of biological origin. It is especially important when expression is measured with multiple high-density microarrays [34]: concentration levels of transcripts must be adjusted, and the entire distribution of adjusted values aligned with a normal distribution. Methods for the normalization of expression data are outside the scope of our work; the data we used were already normalized with RMA [34, 35] by the DREAM challenge organizers. We further normalized the expression data to zero mean and unit standard deviation.
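As a concrete illustration, the final normalization step can be sketched as follows; `standardize_columns` is a hypothetical helper written for this sketch, not part of the ENNET code.

```python
import math

def standardize_columns(E):
    """Normalize each gene (column) of an N x P expression matrix
    to zero mean and unit standard deviation."""
    n, p = len(E), len(E[0])
    out = [row[:] for row in E]
    for j in range(p):
        col = [E[i][j] for i in range(n)]
        mu = sum(col) / n
        sd = math.sqrt(sum((x - mu) ** 2 for x in col) / n)
        for i in range(n):
            # guard against constant columns, which have zero variance
            out[i][j] = (E[i][j] - mu) / sd if sd > 0 else 0.0
    return out

E = [[1.0, 10.0], [3.0, 30.0], [5.0, 20.0]]   # 3 experiments x 2 genes
Z = standardize_columns(E)
# each column of Z now has mean 0 and unit standard deviation
```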
Different types of expression data provided in popular data sets (WT: wild-type, KO: knockouts, KD: knockdowns, MF: multifactorial perturbations, TS: time series; • = available, ◦ = not available)

| Data set | WT | KO | KD | MF | TS |
|---|---|---|---|---|---|
| DREAM3 size 100 | • | • | • | ◦ | • |
| DREAM4 size 100 | • | • | • | ◦ | • |
| DREAM4 size 100 MF | ◦ | ◦ | ◦ | • | ◦ |
| DREAM5^{⋆} | • | • | • | • | • |
The variability of possible input scenarios poses the problem of representing and analyzing the expression data. Here, we operate on an N×P expression matrix E, where e_{i,j} is the expression value of the j-th gene in the i-th sample: columns of matrix E correspond to genes, rows correspond to experiments. We also define a binary perturbation matrix K, where k_{i,j} is a binary value corresponding to the j-th gene in the i-th sample, indexed just like the matrix E. If k_{i,j} is equal to 1, the j-th gene is known to be initially perturbed, for example knocked out, in the i-th experiment; otherwise k_{i,j} is equal to 0. If no information about knockouts is available, all values are set to 0.
Decomposing the inference problem into gene selection problems
We decompose the problem of inferring the network of regulatory interactions targeting all P genes into P independent subproblems, each discovering the incoming edges from transcription factors to a single target transcript. For the k-th subproblem we create a target expression vector Y_{ k } and a feature expression matrix X_{−k}. Columns of the X_{−k} matrix constitute the set of possible transcription factors; vector Y_{ k } holds the expression of the transcript they possibly regulate. In a single gene selection problem we decide which TFs contribute to the target gene expression across all the valid experiments. Columns of X_{−k} correspond to all the possible TFs, but if the target gene k is itself a transcription factor, it is excluded from X_{−k}: we do not consider a transcription factor regulating itself. When building the target vector Y_{ k } corresponding to the k-th target gene, k∈{1,...,P}, we consider all the experiments valid except those in which the k-th gene was initially perturbed, as specified in the perturbation matrix K. We reason that the expression value of the k-th gene in those experiments is determined not by its TFs, but by the external perturbation. Each row in the Y_{ k } vector is aligned with the corresponding row in the X_{−k} matrix. To score all the possible interactions, we need to solve a gene selection problem for each target gene. For example, if a regulatory network consists of four genes (P=4), we solve four gene selection problems; in the k-th problem, k∈{1,2,3,4}, we find which TFs regulate the k-th target gene. In other words, we calculate the k-th column of the output adjacency matrix V.
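Under these definitions, assembling one subproblem reduces to slicing E according to K. The helper below (`make_subproblem` and its arguments are illustrative names, not from the ENNET source) sketches that step:

```python
def make_subproblem(E, K, k, tf_idx):
    """Build the target vector Y_k and feature matrix X_{-k} for the
    k-th gene-selection subproblem.

    E      -- N x P expression matrix (list of rows)
    K      -- N x P binary perturbation matrix (k_ij = 1: gene j perturbed in experiment i)
    k      -- index of the target gene
    tf_idx -- indices of candidate transcription factors
    """
    cols = [j for j in tf_idx if j != k]               # no self-regulation
    rows = [i for i in range(len(E)) if K[i][k] == 0]  # drop experiments where gene k was perturbed
    Y = [E[i][k] for i in rows]
    X = [[E[i][j] for j in cols] for i in rows]
    return Y, X, cols

# P = 4 genes, N = 3 experiments; gene 0 is knocked out in experiment 1
E = [[0.1, 0.5, 0.2, 0.9],
     [0.0, 0.4, 0.3, 0.8],
     [0.2, 0.6, 0.1, 0.7]]
K = [[0, 0, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 0]]
Y0, X0, cols0 = make_subproblem(E, K, k=0, tf_idx=[0, 1, 2, 3])
# Y0 uses experiments 0 and 2 only, and gene 0 is excluded from the features
```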
Solving the gene selection problems
We model the expression of the k-th target gene as $Y_k = f_k(X_{-k}) + \varepsilon_k$ (Equation 1), where ε_{ k } is random noise. The function f_{ k } represents the pattern of regulatory interactions that drive the expression of the k-th gene. We want f_{ k } to rely only on a small number of genes acting as transcription factors, those that are the true regulators of gene k. Essentially, this is a feature selection, or gene selection, task [28, 32, 36, 37], where the goal is to model the target response Y_{ k } with an optimal small set of important predictor variables, i.e., a subset of the columns of the X_{−k} matrix. A more relaxed objective of gene selection is variable ranking, where the relative relevance of all input columns of the X_{−k} matrix is obtained with respect to the target vector Y_{ k }. The higher a specific column is in that ranking, the higher the confidence that the corresponding TF is in a regulatory interaction with the target gene k.
In the t-th boosting iteration, the base learner is a regression stump that splits the samples on the expression of a single chosen TF into two regions, $f_t(x)={\gamma}_{1t}\,I(x\in {R}_{1t})+{\gamma}_{2t}\,I(x\in {R}_{2t})$ (Equation 2), and the quality of the split is measured by the improvement ${i}_{t}^{2}={w}_{1t}\,{w}_{2t}\,{\left({\gamma}_{1t}-{\gamma}_{2t}\right)}^{2}$ (Equation 3), where w_{1t}, w_{2t} are proportional to the number of observations in regions R_{1t}, R_{2t}, respectively, and γ_{1t}, γ_{2t} are the corresponding response means. That is, γ_{1t} is the average of the values from the vector of pseudo-residuals for those samples where the expression of the chosen TF falls into the region R_{1t}; the value of γ_{2t} is defined analogously. The averages γ_{1t} and γ_{2t} are used as the regression output values for regions R_{1t} and R_{2t}, respectively, as shown in Equation 2. The criterion in Equation 3 is evaluated for each TF, and the transcription factor with the highest improvement is selected. In each t-th step, we use only a random portion of the rows and columns of X_{−k}, sampled according to the observation sampling rate s_{ s } and the TF sampling rate s_{ f }.
The procedure outlined above creates a non-linear regression model of the target gene expression based on the expression of transcription factors. In network inference, however, we are interested not in the regression model as a whole, but only in the selected transcription factors. In each t-th step of the ENNET algorithm, only one TF is selected as the optimal predictor, and the details of the regression model can be used to rank the selected TFs by their importance. Specifically, if a transcription factor φ_{ t } is selected in iteration t, the improvement ${i}_{t}^{2}$ serves as an importance score ${I}_{{\phi}_{t}}^{2}$ for that φ_{ t }-th TF. If the same TF is selected multiple times at different iterations, its final importance score is the sum of the individual scores.
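The boosting loop and the importance accumulation described above can be sketched as follows. This is a minimal least-squares gradient boosting of regression stumps with row and column subsampling, written for illustration; it is not the published implementation, and it assumes the split-improvement form w_{1t} w_{2t} (γ_{1t} − γ_{2t})^{2} with weights normalized to sum to one.

```python
import random

def boost_stumps(X, Y, T=200, nu=0.1, s_s=1.0, s_f=0.3, seed=0):
    """Boost regression stumps on pseudo-residuals, accumulating a
    per-feature (per-TF) importance score from split improvements."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    F = [sum(Y) / n] * n          # initial model: mean response
    imp = [0.0] * p               # importance score per TF
    for _ in range(T):
        r = [Y[i] - F[i] for i in range(n)]                  # pseudo-residuals (LS loss)
        rows = [rng.randrange(n) for _ in range(int(s_s * n))]  # bootstrap sample (rate s_s)
        feats = rng.sample(range(p), max(1, int(s_f * p)))      # random subset of TFs (rate s_f)
        best = None
        for j in feats:
            vals = sorted({X[i][j] for i in rows})
            for a, b in zip(vals, vals[1:]):
                thr = (a + b) / 2.0
                left = [i for i in rows if X[i][j] <= thr]
                right = [i for i in rows if X[i][j] > thr]
                w1, w2 = len(left) / len(rows), len(right) / len(rows)
                g1 = sum(r[i] for i in left) / len(left)     # response mean in R_1t
                g2 = sum(r[i] for i in right) / len(right)   # response mean in R_2t
                i2 = w1 * w2 * (g1 - g2) ** 2                # split improvement
                if best is None or i2 > best[0]:
                    best = (i2, j, thr, g1, g2)
        if best is None:
            continue                                         # no valid split in this sample
        i2, j, thr, g1, g2 = best
        imp[j] += i2                                         # accumulate importance of selected TF
        for i in range(n):                                   # shrunken update of the model
            F[i] += nu * (g1 if X[i][j] <= thr else g2)
    return imp

# toy check: gene 0 linearly drives the target, genes 1 and 2 are noise
X = [[i / 10.0, ((i * 7) % 13) / 13.0, ((i * 5) % 11) / 11.0] for i in range(30)]
Y = [2.0 * row[0] for row in X]
imp = boost_stumps(X, Y, T=100, nu=0.1, seed=1)
```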
In the training of the regression model, the parameter ν, known as the shrinkage factor, scales the contribution of each tree by a factor ν∈(0,1) when it is added to the current approximation. In other words, ν controls the learning rate of the boosting procedure. Shrinkage techniques are also commonly used in neural networks. Smaller values of ν result in a larger training risk for the same number of iterations T. However, it has been found [38] that smaller values of ν reduce the test error, and require correspondingly larger values of T, which incurs a higher computational overhead. There is thus a trade-off between these two parameters.
Refining the inferred network
Each initial score is then refined by scaling it with the variability of its transcription factor across targets, ${v}_{i,j}^{\prime }={\sigma}_{i}^{2}\,{v}_{i,j}$, where ${\sigma}_{i}^{2}$ is the variance of the i-th row of V. Note that the V matrix is built column-wise, i.e., a single column of V contains the relative importance scores of all the transcription factors, averaged over all the base learners, with respect to a single target transcript. The entries of a row of V, in contrast, come from different, independently solved subproblems: each row contains the relative importance scores of a single transcription factor with respect to all target transcripts. We reason that if a transcription factor regulates many target transcripts, e.g., it is a hub node, the variance in the row of V corresponding to that transcription factor is elevated, and therefore indicates an important transcription factor.
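One plausible reading of this refinement, scaling each TF's scores by its row variance, can be sketched as below; the function name is illustrative and the exact scaling used in the published method may differ.

```python
def refine_by_row_variance(V):
    """Scale row i of the adjacency matrix V by the variance of that row,
    up-weighting TFs whose importance varies strongly across targets
    (candidate hub regulators)."""
    out = []
    for row in V:
        mu = sum(row) / len(row)
        var = sum((x - mu) ** 2 for x in row) / len(row)
        out.append([x * var for x in row])
    return out

V = [[0.9, 0.8, 0.0, 0.7],   # varied, high scores: likely hub, boosted
     [0.2, 0.2, 0.2, 0.2]]   # flat row: variance 0, scores suppressed
R = refine_by_row_variance(V)
```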
When knockout data are available, the scores are additionally scaled by a knockout z-score coefficient, ${v}_{i,j}^{\prime\prime}={v}_{i,j}^{\prime}\left|\frac{\overline{{e}_{\alpha \left(i\right),j}}-\overline{{e}_{\beta \left(i\right),j}}}{{\sigma}_{j}}\right|$, where $\overline{{e}_{\alpha \left(i\right),j}}$ is the average expression value of the j-th transcript in all the experiments α(i) in which the i-th gene was knocked out, as defined by the K matrix, $\overline{{e}_{\beta \left(i\right),j}}$ is the mean expression value for that transcript across all the other knockout experiments, β(i), and σ_{ j } is the standard deviation of the expression value of that transcript in all the knockout experiments. The coefficient $\left|\frac{\overline{{e}_{\alpha \left(i\right),j}}-\overline{{e}_{\beta \left(i\right),j}}}{{\sigma}_{j}}\right|$ measures by how many standard deviations the expression of the j-th transcript in the experiments in which its potential i-th transcription factor was knocked out differs from its typical expression.
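The coefficient can be computed directly from E and K; `knockout_zscores` below is an illustrative helper, and the guards for missing knockout data are assumptions of this sketch.

```python
import math

def knockout_zscores(E, K):
    """For every (TF g, gene j) pair, compute
    |mean over alpha(g) - mean over beta(g)| / sigma_j,
    where alpha(g) are the knockout experiments of gene g, beta(g) the
    remaining knockout experiments, and sigma_j the standard deviation of
    gene j over all knockout experiments. Pairs without knockout data
    keep a coefficient of 0."""
    n, p = len(E), len(E[0])
    ko_rows = [i for i in range(n) if any(K[i])]   # knockout experiments only
    Z = [[0.0] * p for _ in range(p)]
    if not ko_rows:
        return Z                                   # no knockout information
    for j in range(p):
        col = [E[i][j] for i in ko_rows]
        mu = sum(col) / len(col)
        sd = math.sqrt(sum((x - mu) ** 2 for x in col) / len(col))
        if sd == 0.0:
            sd = 1.0                               # guard: constant expression
        for g in range(p):
            alpha = [i for i in ko_rows if K[i][g] == 1]
            beta = [i for i in ko_rows if K[i][g] == 0]
            if not alpha or not beta:
                continue
            ma = sum(E[i][j] for i in alpha) / len(alpha)
            mb = sum(E[i][j] for i in beta) / len(beta)
            Z[g][j] = abs(ma - mb) / sd
    return Z

# three knockout experiments over three genes: gene g is knocked out in experiment g
E = [[0.0, 0.1, 0.5],
     [1.0, 0.0, 0.5],
     [1.0, 0.9, 0.0]]
K = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
Z = knockout_zscores(E, K)
```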
Performance evaluation
Considerable attention has been devoted in recent years to evaluating the performance of inference methods on adequate benchmarks [35, 39]. The most popular benchmarks are derived from well-studied in vivo networks of model organisms, such as E. coli [40] and S. cerevisiae [41], as well as from artificially simulated in silico networks [39, 42–45]. The main disadvantage of in vivo benchmark networks is that experimentally confirmed pathways can never be assumed complete, regardless of how well the model organism is studied. Such networks are assembled from known transcriptional interactions with strong experimental support. As a consequence, gold standard networks are expected to have few false positives, but they contain only a subset of the true interactions, i.e., they are likely to contain many false negatives. For this reason, artificially simulated in silico networks are most commonly used to evaluate network inference methods. Simulators [39] mimic real biological systems in terms of topological properties observed in biological in vivo networks, such as modularity [46] and the occurrence of network motifs [47]. They are also endowed with dynamical models of transcriptional regulation, thanks to the use of non-linear differential equations and other approaches [42, 48, 49], and consider both transcription and translation in their dynamical models [48–50] using a thermodynamic approach. Expression data can be generated deterministically or stochastically, and experimental noise, such as that observed in microarrays, can be added [51].
The overall score of a method on each challenge is defined as $-\frac{1}{2}{\mathrm{log}}_{10}\left({\overline{p}}_{\text{aupr}}\cdot {\overline{p}}_{\text{auroc}}\right)$, where ${\overline{p}}_{\text{aupr}}$ and ${\overline{p}}_{\text{auroc}}$ are the geometric means of the p-values of the networks constituting each DREAM challenge, relating to the area under the Precision-Recall curve (AUPR) and the area under the ROC curve (AUROC), respectively.
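Assuming the standard DREAM convention that the overall score is −(1/2)·log10 of the product of these geometric means, it can be computed as follows (the function name is illustrative):

```python
import math

def overall_score(p_aupr, p_auroc):
    """Overall score from per-network p-values: -1/2 * log10 of the product
    of the geometric means of the AUPR and AUROC p-values. Smaller p-values
    yield a larger (better) score."""
    def gmean(ps):
        return math.exp(sum(math.log(p) for p in ps) / len(ps))
    return -0.5 * math.log10(gmean(p_aupr) * gmean(p_auroc))

# e.g. p-values of 1e-4 (AUPR) and 1e-2 (AUROC) on every network give a score of 3
score = overall_score([1e-4, 1e-4], [1e-2, 1e-2])
```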
Results and discussion
The accuracy of ENNET
Results of the different inference methods on DREAM3 networks, challenge size 100 (AUPR/AUROC for each of the five networks)

| Method | 1 | 2 | 3 | 4 | 5 | Overall |
|---|---|---|---|---|---|---|
| *Experimental results* | | | | | | |
| ENNET | 0.627/0.901 | 0.865/0.963 | 0.568/0.892 | 0.522/0.842 | 0.384/0.765 | >300 |
| *Winner of the challenge* | | | | | | |
| Yip et al. | 0.694/0.948 | 0.806/0.960 | 0.493/0.915 | 0.469/0.856 | 0.433/0.783 | >300 |
| 2nd | 0.209/0.854 | 0.249/0.845 | 0.184/0.783 | 0.192/0.750 | 0.161/0.667 | 45.443 |
| 3rd | 0.132/0.835 | 0.154/0.879 | 0.189/0.839 | 0.179/0.738 | 0.164/0.667 | 42.240 |
Results of the different inference methods on DREAM4 networks, challenge size 100 (AUPR/AUROC for each of the five networks)

| Method | 1 | 2 | 3 | 4 | 5 | Overall |
|---|---|---|---|---|---|---|
| *Experimental results* | | | | | | |
| ENNET | 0.604/0.893 | 0.456/0.856 | 0.421/0.865 | 0.506/0.878 | 0.264/0.828 | 87.738 |
| *Winner of the challenge* | | | | | | |
| Pinna et al. | 0.536/0.914 | 0.377/0.801 | 0.390/0.833 | 0.349/0.842 | 0.213/0.759 | 71.589 |
| 2nd | 0.512/0.908 | 0.396/0.797 | 0.380/0.829 | 0.372/0.844 | 0.178/0.763 | 71.297 |
| 3rd | 0.490/0.870 | 0.327/0.773 | 0.326/0.844 | 0.400/0.827 | 0.159/0.758 | 64.715 |
Results of the different inference methods on DREAM4 networks, challenge size 100 multifactorial (AUPR/AUROC for each of the five networks)

| Method | 1 | 2 | 3 | 4 | 5 | Overall |
|---|---|---|---|---|---|---|
| *Experimental results* | | | | | | |
| ENNET | 0.184/0.731 | 0.261/0.807 | 0.289/0.813 | 0.291/0.822 | 0.286/0.829 | 52.839 |
| ADANET | 0.149/0.664 | 0.094/0.605 | 0.191/0.703 | 0.172/0.712 | 0.182/0.694 | 24.970 |
| GENIE3 | 0.158/0.747 | 0.154/0.726 | 0.232/0.777 | 0.210/0.795 | 0.204/0.792 | 37.669 |
| C3NET | 0.077/0.562 | 0.095/0.588 | 0.126/0.621 | 0.113/0.687 | 0.110/0.607 | 15.015 |
| CLR | 0.142/0.695 | 0.118/0.700 | 0.178/0.746 | 0.174/0.748 | 0.174/0.722 | 28.806 |
| MRNET | 0.138/0.679 | 0.128/0.698 | 0.204/0.755 | 0.178/0.748 | 0.187/0.725 | 30.259 |
| ARACNE | 0.123/0.606 | 0.102/0.603 | 0.192/0.686 | 0.159/0.713 | 0.166/0.659 | 22.744 |
| *Winner of the challenge* | | | | | | |
| GENIE3 | 0.154/0.745 | 0.155/0.733 | 0.231/0.775 | 0.208/0.791 | 0.197/0.798 | 37.428 |
| 2nd | 0.108/0.739 | 0.147/0.694 | 0.185/0.748 | 0.161/0.736 | 0.111/0.745 | 28.165 |
| 3rd | 0.140/0.658 | 0.098/0.626 | 0.215/0.717 | 0.201/0.693 | 0.194/0.719 | 27.053 |
Results of the different inference methods on DREAM5 networks (AUPR/AUROC for networks 1, 3 and 4)

| Method | 1 | 3 | 4 | Overall |
|---|---|---|---|---|
| *Experimental results* | | | | |
| ENNET | 0.432/0.867 | 0.069/0.642 | 0.021/0.532 | >300 |
| ADANET | 0.261/0.725 | 0.083/0.596 | 0.021/0.517 | 16.006 |
| GENIE3 | 0.291/0.814 | 0.094/0.619 | 0.021/0.517 | 40.335 |
| C3NET | 0.080/0.529 | 0.026/0.506 | 0.018/0.501 | 0.000 |
| CLR | 0.217/0.666 | 0.050/0.538 | 0.019/0.505 | 4.928 |
| MRNET | 0.194/0.668 | 0.041/0.525 | 0.018/0.501 | 2.534 |
| ARACNE | 0.099/0.545 | 0.029/0.512 | 0.017/0.500 | 0.000 |
| *Winner of the challenge* | | | | |
| GENIE3 | 0.291/0.815 | 0.093/0.617 | 0.021/0.518 | 40.279 |
| ANOVA η^{2} | 0.245/0.780 | 0.119/0.671 | 0.022/0.519 | 34.023 |
| TIGRESS | 0.301/0.782 | 0.069/0.595 | 0.020/0.517 | 31.099 |
Computational complexity of ENNET
The computational complexity of ENNET and the other GRN inference methods
Method | Complexity |
---|---|
ENNET | O(T P^{2}N), T=5000 |
ADANET | O(C T P^{2}N), C=30, $T=\lceil \sqrt{P}\rceil $ |
GENIE3 | O(T K P N logN), T=1000, $K=\lceil \sqrt{P}\rceil $ |
C3NET | O(P^{2}) |
CLR | O(P^{2}) |
MRNET | O(f P^{2}), f∈[1,P] |
ARACNE | O(P^{3}) |
When implementing the ENNET algorithm, we took advantage of the fact that the gene selection problems are independent of each other. Our implementation can solve them in parallel if multiple processing units are available. The user can choose from a variety of parallel backends, including the multicore package for a single computer and parallelization based on the Message Passing Interface for a cluster of computers. The largest data set we provided as input in our tests consisted of the in vivo expression profiles of S. cerevisiae from the DREAM5 challenge: genome-wide expression profiles of 5950 genes (333 of which are known transcription factors) measured in 536 experiments. Calculating the network took 113 minutes and 30 seconds on a standard desktop workstation with one Intel Core i7-870 processor (4 cores, two threads per core, 8 logical processors in total) and 16 GB RAM, but only 16 minutes and 40 seconds on a machine with four AMD Opteron 6282 SE processors (each with 8 cores and two threads per core, 64 logical processors in total) and 256 GB RAM. All the data sets from the DREAM3 and DREAM4 challenges were considerably smaller, up to 100 genes; each of these networks took less than one minute to calculate on a desktop machine.
Setting parameters of ENNET
The ENNET algorithm is controlled by four parameters: the two sampling rates s_{ s } and s_{ f }, the number of iterations T, and the learning rate ν. The sampling rate of samples s_{ s } and the sampling rate of transcription factors s_{ f } govern the level of randomness when selecting, respectively, rows and columns of the expression matrix to fit a regression model. The default value of s_{ s } is 1, i.e., at each iteration we select with replacement a bootstrap sample of observations of the same size as the original training set. Because some observations are selected more than once, on average about 37% of the training samples are out-of-bag in each iteration. It is more difficult to choose an optimal value of s_{ f }, which governs how many transcription factors are used to fit each base learner. Setting this parameter to a low value forces ENNET to score transcription factors even if their improvement criterion, as shown in Equation 3, would not have promoted them in a pure greedy search (s_{ f }=1). However, if the chance of selecting a true transcription factor as a feature is too low, ENNET will suffer from selecting random genes as true regulators.
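The out-of-bag fraction quoted above (about 0.37, i.e., roughly e^{−1} = (1 − 1/N)^N for large N) is easy to verify empirically:

```python
import random

def oob_fraction(n, trials=200, seed=0):
    """Average fraction of observations left out of a bootstrap sample
    (n draws with replacement from n items); in expectation this is
    (1 - 1/n)^n, which approaches exp(-1) ~ 0.37."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        picked = {rng.randrange(n) for _ in range(n)}   # distinct indices drawn
        total += (n - len(picked)) / n                  # fraction never drawn
    return total / trials

f = oob_fraction(500)
```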
Stability of ENNET
Because ENNET randomly samples observations and features at each iteration of the main loop, as shown in Figure 1, it may calculate two different networks in two different executions on the same expression data. With the default choice of parameters, i.e., s_{ s }=1, s_{ f }=0.3, T=5000, ν=0.001, we expect numerous random resamplings, and therefore we need to know whether a GRN calculated by ENNET is stable between executions. We applied ENNET to the 5 networks that form the DREAM4 size 100 benchmark, repeating the inference calculations independently ten times for each network. Then, for each network, we calculated Spearman's rank correlation between all pairs among the ten independent runs. The lowest correlation coefficient we obtained was ρ>0.975, with p-value <2.2e−16, indicating that the networks resulting from independent runs are very similar. This shows that ENNET, despite being a randomized algorithm, finds a stable solution to the inference problem.
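The stability check amounts to computing Spearman's ρ between the flattened edge-score vectors of two runs. A tie-free sketch (sufficient for continuous importance scores, and not accounting for ties as library implementations do):

```python
def spearman(x, y):
    """Spearman's rank correlation between two equal-length score vectors
    (no tie correction)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((rx[i] - mx) * (ry[i] - my) for i in range(n))
    vx = sum((r - mx) ** 2 for r in rx)
    vy = sum((r - my) ** 2 for r in ry)
    return cov / (vx * vy) ** 0.5

# identical edge rankings give rho = 1, reversed rankings give rho = -1
rho_same = spearman([0.1, 0.4, 0.9], [1.0, 2.0, 3.0])
```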
Conclusions
We have proposed the ENNET algorithm for reverse-engineering of gene regulatory networks. ENNET accepts a variety of types of expression data as input, and shows robust performance across different benchmark networks. Moreover, it does not assume any specific model of a regulatory interaction and does not require fine-tuning of its parameters: we define a default set of parameters that yields accurate predictions on new networks. Nevertheless, together with the algorithm, we propose a procedure for tuning the parameters of ENNET towards minimizing empirical loss. Processing genome-scale expression profiles, with up to a few hundred transcription factors and up to a few thousand regulated genes, is feasible with ENNET. As shown in this study, the proposed method compares favorably to the state-of-the-art algorithms on the universally recognized benchmark data sets.
References
- Someren E, Wessels L, Backer E, Reinders M: Genetic network modeling. Pharmacogenomics. 2002, 3 (4): 507-525. 10.1517/14622416.3.4.507.PubMedView ArticleGoogle Scholar
- Eisen M, Spellman P, Brown P, Botstein D: Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci. 1998, 95 (25): 14863-10.1073/pnas.95.25.14863.PubMedPubMed CentralView ArticleGoogle Scholar
- Gardner T, Faith J: Reverse-engineering transcription control networks. Phys Life Rev. 2005, 2: 65-88. 10.1016/j.plrev.2005.01.001.PubMedView ArticleGoogle Scholar
- Chen T, He H, Church G, et al, et al: Modeling gene expression with differential equations. Pacific Symposium on Biocomputing, Volume 4. 1999, Singapore: World Scientific Press, 4-4.Google Scholar
- D’haeseleer P, Wen X, Fuhrman S, Somogyi R, et al, et al: Linear modeling of mRNA expression levels during CNS development and injury,. Pacific Symposium on Biocomputing, Volume 4. 1999, Singapore: World Scientific Press, 41-52.Google Scholar
- Gardner TS, di Bernardo D, Lorenz D, Collins JJ: Inferring genetic networks and identifying compound mode of action via expression profiling. Science. 2003, 301 (5629): 102-105. 10.1126/science.1081900.PubMedView ArticleGoogle Scholar
- Yip K, Alexander R, Yan K, Gerstein M: Improved reconstruction of in silico gene regulatory networks by integrating knockout and perturbation data. PLoS One. 2010, 5: e8121-10.1371/journal.pone.0008121.PubMedPubMed CentralView ArticleGoogle Scholar
- Greenfield A, Hafemeister C, Bonneau R: Robust data-driven incorporation of prior knowledge into the inference of dynamic regulatory networks. Bioinformatics. 2013, 29 (8): 1060-1067. 10.1093/bioinformatics/btt099.PubMedPubMed CentralView ArticleGoogle Scholar
- Friedman N, Linial M, Nachman I, Pe’er D: Using Bayesian networks to analyze expression data. J Comput Biol. 2000, 7 (3–4): 601-620.PubMedView ArticleGoogle Scholar
- Perrin BE, Ralaivola L, Mazurie A, Bottani S, Mallet J, d‘Alche Buc F: Gene networks inference using dynamic Bayesian networks. Bioinformatics. 2003, 19 (suppl 2): ii138-ii148. 10.1093/bioinformatics/btg1071.PubMedView ArticleGoogle Scholar
- Yu J, Smith V, Wang P, Hartemink A, Jarvis E: Advances to Bayesian, network inference for generating causal networks from observational biological data. Bioinformatics. 2004, 20 (18): 3594-3603. 10.1093/bioinformatics/bth448.PubMedView ArticleGoogle Scholar
- Segal E, Wang H, Koller D: Discovering molecular pathways from protein interaction and gene expression data. Bioinformatics. 2003, 19 (suppl 1): i264-i272. 10.1093/bioinformatics/btg1037.PubMedView ArticleGoogle Scholar
- Neapolitan R: Learning Bayesian Networks. 2004, Upper Saddle River: Pearson Prentice HallGoogle Scholar
- Qi J, Michoel T: Context-specific transcriptional regulatory network inference from global gene expression maps using double two-way t-tests. Bioinformatics. 2012, 28 (18): 2325-2332. 10.1093/bioinformatics/bts434.
- Prill R, Marbach D, Saez-Rodriguez J, Sorger P, Alexopoulos L, Xue X, Clarke N, Altan-Bonnet G, Stolovitzky G: Towards a rigorous assessment of systems biology models: the DREAM3 challenges. PLoS One. 2010, 5 (2): e9202. 10.1371/journal.pone.0009202.
- Bansal M, Belcastro V, Ambesi-Impiombato A, di Bernardo D: How to infer gene networks from expression profiles. Mol Syst Biol. 2007, 3: 78.
- Butte A, Kohane I: Mutual information relevance networks: functional genomic clustering using pairwise entropy measurements. Pacific Symposium on Biocomputing, Volume 5. 2000, Singapore: World Scientific Press, 418-429.
- Lee W, Tzou W: Computational methods for discovering gene networks from expression data. Brief Bioinform. 2009, 10 (4): 408-423.
- Markowetz F, Spang R: Inferring cellular networks–a review. BMC Bioinformatics. 2007, 8 (Suppl 6): S5. 10.1186/1471-2105-8-S6-S5.
- Altay G, Emmert-Streib F: Inferring the conservative causal core of gene regulatory networks. BMC Syst Biol. 2010, 4: 132. 10.1186/1752-0509-4-132.
- Küffner R, Petri T, Tavakkolkhah P, Windhager L, Zimmer R: Inferring gene regulatory networks by ANOVA. Bioinformatics. 2012, 28 (10): 1376-1382. 10.1093/bioinformatics/bts143.
- Margolin A, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Favera R, Califano A: ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006, 7 (Suppl 1): S7. 10.1186/1471-2105-7-S1-S7.
- Margolin A, Wang K, Lim W, Kustagi M, Nemenman I, Califano A: Reverse engineering cellular networks. Nat Protoc. 2006, 1 (2): 662-671. 10.1038/nprot.2006.106.
- Faith J, Hayete B, Thaden J, Mogno I, Wierzbowski J, Cottarel G, Kasif S, Collins J, Gardner T: Large-scale mapping and validation of Escherichia coli transcriptional regulation from a compendium of expression profiles. PLoS Biol. 2007, 5: e8. 10.1371/journal.pbio.0050008.
- Zhang X, Liu K, Liu ZP, Duval B, Richer JM, Zhao XM, Hao JK, Chen L: NARROMI: a noise and redundancy reduction technique improves accuracy of gene regulatory network inference. Bioinformatics. 2013, 29: 106-113. 10.1093/bioinformatics/bts619.
- Meyer P, Kontos K, Lafitte F, Bontempi G: Information-theoretic inference of large transcriptional regulatory networks. EURASIP J Bioinform Syst Biol. 2007, 2007: 8.
- Ding C, Peng H: Minimum redundancy feature selection from microarray gene expression data. Computational Systems Bioinformatics Conference CSB2003. 2003, Washington: IEEE, 523-528.
- Huynh-Thu VA, Irrthum A, Wehenkel L, Geurts P: Inferring regulatory networks from expression data using tree-based methods. PLoS One. 2010, 5 (9): e12776. 10.1371/journal.pone.0012776.
- Haury AC, Mordelet F, Vera-Licona P, Vert JP: TIGRESS: trustful inference of gene regulation using stability selection. BMC Syst Biol. 2012, 6: 145. 10.1186/1752-0509-6-145.
- Freund Y, Schapire RE: Experiments with a new boosting algorithm. International Conference on Machine Learning. 1996, 148-156.
- Freund Y, Schapire RE: A decision-theoretic generalization of on-line learning and an application to boosting. J Comput Syst Sci. 1997, 55: 119-139. 10.1006/jcss.1997.1504.
- Sławek J, Arodź T: ADANET: inferring gene regulatory networks using ensemble classifiers. Proceedings of the ACM Conference on Bioinformatics, Computational Biology and Biomedicine. 2012, New York: ACM, 434-441.
- Lim N, Şenbabaoğlu Y, Michailidis G, d'Alché-Buc F: OKVAR-Boost: a novel boosting algorithm to infer nonlinear dynamics and interactions in gene regulatory networks. Bioinformatics. 2013, 29 (11): 1416-1423. 10.1093/bioinformatics/btt167.
- Bolstad BM, Irizarry RA, Åstrand M, Speed TP: A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics. 2003, 19 (2): 185-193. 10.1093/bioinformatics/19.2.185.
- Marbach D, Costello J, Küffner R, Vega N, Prill R, Camacho D, Allison K, Kellis M, Collins J, Stolovitzky G, et al: Wisdom of crowds for robust gene network inference. Nat Methods. 2012, 9 (8): 797.
- Theodoridis S, Koutroumbas K: Pattern Recognition. 2006, London: Elsevier/Academic Press.
- Tuv E, Borisov A, Runger G, Torkkola K: Feature selection with ensembles, artificial variables, and redundancy elimination. J Mach Learn Res. 2009, 10: 1341-1366.
- Friedman JH: Greedy function approximation: a gradient boosting machine. Ann Stat. 2001, 29 (5): 1189-1232.
- Schaffter T, Marbach D, Floreano D: GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods. Bioinformatics. 2011, 27 (16): 2263-2270. 10.1093/bioinformatics/btr373.
- Gama-Castro S, Salgado H, Peralta-Gil M, Santos-Zavaleta A, Muñiz-Rascado L, Solano-Lira H, Jimenez-Jacinto V, Weiss V, García-Sotelo J, López-Fuentes A, et al: RegulonDB version 7.0: transcriptional regulation of Escherichia coli K-12 integrated within genetic sensory response units (gensor units). Nucleic Acids Res. 2011, 39 (suppl 1): D98-D105.
- Kim S, Imoto S, Miyano S: Inferring gene networks from time series microarray data using dynamic Bayesian networks. Brief Bioinform. 2003, 4 (3): 228-235. 10.1093/bib/4.3.228.
- Di Camillo B, Toffolo G, Cobelli C: A gene network simulator to assess reverse engineering algorithms. Ann N Y Acad Sci. 2009, 1158: 125-142. 10.1111/j.1749-6632.2008.03756.x.
- Kremling A, Fischer S, Gadkar K, Doyle F, Sauter T, Bullinger E, Allgöwer F, Gilles E: A benchmark for methods in reverse engineering and model discrimination: problem formulation and solutions. Genome Res. 2004, 14 (9): 1773-1785. 10.1101/gr.1226004.
- Mendes P, Sha W, Ye K: Artificial gene networks for objective comparison of analysis algorithms. Bioinformatics. 2003, 19 (suppl 2): ii122-ii129. 10.1093/bioinformatics/btg1069.
- Van den Bulcke T, Van Leemput K, Naudts B, Van Remortel P, Ma H, Verschoren A, De Moor B, Marchal K: SynTReN: a generator of synthetic gene expression data for design and analysis of structure learning algorithms. BMC Bioinformatics. 2006, 7: 43. 10.1186/1471-2105-7-43.
- Ravasz E, Somera A, Mongru D, Oltvai Z, Barabási A: Hierarchical organization of modularity in metabolic networks. Science. 2002, 297 (5586): 1551-1555. 10.1126/science.1073374.
- Shen-Orr S, Milo R, Mangan S, Alon U: Network motifs in the transcriptional regulation network of Escherichia coli. Nat Genet. 2002, 31: 64-68. 10.1038/ng881.
- Hache H, Wierling C, Lehrach H, Herwig R: GeNGe: systematic generation of gene regulatory networks. Bioinformatics. 2009, 25 (9): 1205-1207. 10.1093/bioinformatics/btp115.
- Roy S, Werner-Washburne M, Lane T: A system for generating transcription regulatory networks with combinatorial control of transcription. Bioinformatics. 2008, 24 (10): 1318-1320. 10.1093/bioinformatics/btn126.
- Haynes B, Brent M: Benchmarking regulatory network reconstruction with GRENDEL. Bioinformatics. 2009, 25 (6): 801-807. 10.1093/bioinformatics/btp068.
- Stolovitzky G, Kundaje A, Held G, Duggar K, Haudenschild C, Zhou D, Vasicek T, Smith K, Aderem A, Roach J: Statistical analysis of MPSS measurements: application to the study of LPS-activated macrophage gene expression. Proc Natl Acad Sci USA. 2005, 102 (5): 1402-1407. 10.1073/pnas.0406555102.
- Meyer PE, Lafitte F, Bontempi G: minet: a R/Bioconductor package for inferring large transcriptional networks using mutual information. BMC Bioinformatics. 2008, 9: 461. 10.1186/1471-2105-9-461.
- Marbach D, Prill R, Schaffter T, Mattiussi C, Floreano D, Stolovitzky G: Revealing strengths and weaknesses of methods for gene network inference. Proc Natl Acad Sci. 2010, 107 (14): 6286-6291. 10.1073/pnas.0913357107.
- Marbach D, Schaffter T, Mattiussi C, Floreano D: Generating realistic in silico gene networks for performance assessment of reverse engineering methods. J Comput Biol. 2009, 16 (2): 229-239. 10.1089/cmb.2008.09TT.
- Ashburner M, Ball C, Blake J, Botstein D, Butler H, Cherry J, Davis A, Dolinski K, Dwight S, Eppig J, et al: Gene Ontology: tool for the unification of biology. Nat Genet. 2000, 25: 25-29. 10.1038/75556.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.