 Methodology article
 Open Access
Integrating external biological knowledge in the construction of regulatory networks from time-series expression data
BMC Systems Biology volume 6, Article number: 101 (2012)
Abstract
Background
Inference about regulatory networks from high-throughput genomics data is of great interest in systems biology. We present a Bayesian approach to infer gene regulatory networks from time-series expression data by integrating various types of biological knowledge.
Results
We formulate network construction as a series of variable selection problems and use linear regression to model the data. We extend the Bayesian model averaging (BMA) variable selection method to select regulators in this regression framework, and summarize the external biological knowledge by an informative prior probability distribution over the candidate regression models.
Conclusions
We demonstrate our method on simulated data and a set of time-series microarray experiments measuring the effect of a drug perturbation on gene expression levels, and show that it outperforms leading regression-based methods in the literature.
Background
With recent advances in high-throughput biological data collection, reverse engineering of regulatory networks from large-scale genomics data has become a problem of broad interest to biologists. The construction of regulatory networks is essential for defining the interactions between genes and gene products, and predictive models may be used to develop novel therapies [1, 2]. Both microarrays and, more recently, next-generation sequencing provide the ability to quantify the expression levels of all genes in a given genome. Often, in such experiments, gene expression is measured in response to drug treatment, environmental perturbations, or gene knockouts, either at steady state or over a series of time points. This type of data captures information about the effect of one gene’s expression level on the expression level of another gene. Hence, such data can, in principle, be reverse engineered to provide a regulatory network that models these effects.
A regulatory network can be represented as a directed graph, in which each node represents a gene (in our case an mRNA level) and each directed edge (r→g) represents the relationship between regulator r and gene g. We aim to infer the directed edges that describe the relationships among the nodes. In this case, the causal relationship is statistically inferred, in contrast to the classic definition of causality used in biology to imply direct physical interaction leading to a phenotypic change. This is a challenging problem, especially on a genome-wide scale, since the goal is to unravel a small number of regulators (parent nodes) out of thousands of candidate nodes in the graph. Even with high-dimensional gene expression data, network inference is difficult, in part because of the small number of observations for each gene. In order to improve network inference, one would like a coherent approach to integrate external knowledge and data to both fill in gaps in the gene expression data and to constrain or guide the network search.
In this article, we present a network inference method that addresses the dimensionality challenge with a Bayesian variable selection method. Our method uses a supervised learning framework to incorporate external data sources. We applied our method to a set of time-series mRNA expression profiles for 95 yeast segregants and their parental strains, over six time points in response to a drug perturbation. This extends our previous work [3] by incorporating prior probabilities of transcriptional regulation inferred using external data sources. Our method also accommodates feedback loops, a feature allowed only in some current network construction methods.
Previous work
Bayesian networks [4–6] are one of the most popular modeling approaches for network construction using gene expression data [7–17]. A Bayesian network is a probabilistic graphical model for which the joint distribution of all the nodes is factorized into independent conditional distributions of each node given its parents. The goal of Bayesian network inference is to arrive at a directed graph such that the joint probability distribution is optimized globally. While different Bayesian network structures may give rise to the same probability distribution, so that such networks in general do not imply causal relationships, prior information can be used to break this non-identifiability so that causal inferences can be made. For example, systematic sources of perturbation such as naturally occurring genetic variation in a population or specific drug perturbations in which response is observed over time can lead to reliable causal inference [1, 2, 18, 19]. A Bayesian network is a directed acyclic graph (DAG). Therefore, cyclic components or feedback loops cannot be accommodated. This DAG constraint is an obstacle to using the Bayesian network approach for modeling gene regulatory networks because feedback loops are typical in many biological systems [20]. The DAG constraint is removed when dynamic Bayesian networks are used to model time-series expression data [19, 21–24]. Dynamic Bayesian networks represent genes at successive time points as separate nodes, thus allowing for the existence of cycles. Bayesian network construction is an NP-hard problem [25, 26], with computational complexity increasing exponentially with the number of nodes considered in the network construction process. In spite of some attempts to reduce the computational cost [27], the Bayesian network approach in general is computationally intensive to implement, especially for network inference on a genome-wide scale.
In regression-based methods, network construction is recast as a series of variable selection problems to infer regulators for each gene. The greatest challenge is the fact that there are usually far more candidate regulators than observations for each gene. Some authors have used singular value decompositions to regularize the regression models [28–30]. Others have built a regression tree for each target gene, using a compact set of regulators at each node [31–34]. Huang et al. [35] used regression with forward selection after pre-filtering of candidates deemed irrelevant to the target gene, and Imoto et al. [16] used nonparametric regression embedded within a Bayesian network. L1-norm regularization, including the elastic net [36, 37] and weighted LASSO [38], has also been widely used [39–49].
Ordinary differential equations (ODE) provide another class of network construction strategies [50–53]. Using first-order ODEs, the rate of change in transcription for a target gene is described as a function of the expression of its regulators and the effects caused by applied perturbations. ODE-based methods can be broadly classified into two categories, depending on whether gene expression is measured at steady state [54–58] or over time [51–53]. As an example, the TSNI (Time Series Network Identification) algorithm used ODEs to model time-series expression data subject to an external perturbation [53]. To handle the dimensionality challenge (i.e., the number of observations per gene is much smaller than the number of genes), Bansal et al. employed a cubic smoothing spline to interpolate additional data points, and applied Principal Component Analysis to reduce dimensionality.
To help mitigate problems with using gene expression data in network inference, external data sources can be integrated into the inference process. Public data repositories provide a rich resource of biological knowledge relevant to transcriptional regulation. Integrating such external data sources into network inference has become an important problem in systems biology. James et al. [43] incorporated documented experimental evidence about the presence of a binding site for each known transcription factor (TF) in the promoter region of its target gene in Escherichia coli. Djebbari and Quackenbush [13] used preliminary networks derived from literature indexed in PubMed and protein-protein interaction (PPI) databases as seeds for their Bayesian network analysis. Zhu et al. [59] showed that combining information from TF binding sites and PPI data increased overall predictive power. Geier et al. [15] examined the impact of external knowledge with different levels of accuracy on network inference, albeit in a simulated setting. Imoto et al. [16] described different ways to specify knowledge about PPI, documented regulatory relationships and well-studied pathways as prior information. Lee et al. [44] presented a systematic way to include various types of biological knowledge, including the gene ontology (GO) database, ChIP-chip binding experiments and a comprehensive collection of information about sequence polymorphisms.
Our contributions
This article is an extension of Yeung et al. [3], which adopted a regression-based framework in which candidate regulators are inferred for each gene using expression data at the previous time point. Iterative Bayesian model averaging (iBMA) [60–62] was used to account for model uncertainty in the regression models. A supervised framework was used to estimate the relative contribution of each type of external knowledge, and from this a shortlist of promising regulators for each gene was predicted. This shortlist was used to infer regulators for each gene in the regression framework.
Our contributions are fourfold. First, we develop a new method called iBMA-prior that explicitly incorporates external biological knowledge into iBMA in the form of a prior distribution. Intuitively, we consider models consisting of candidate regulators supported by considerable external evidence to be front-runners. A model that contains many candidate regulators with little support from external knowledge is penalized. Second, we demonstrate the merits of specifying the expected number of regulators per gene as priors through iBMA-size, which is a simplified version of iBMA-prior without using gene-specific external knowledge. Third, we refine the supervised framework to adjust for sampling bias towards positive cases in the training data, thereby calibrating the prior distribution. Fourth, we expand our benchmark to include simulated data, and compare our iBMA methods to L1-regularized regression-based methods. Specifically, we applied iBMA-prior to real and simulated time-series gene expression data, and found that it outperformed our previous work [3] and other leading methods in the literature on these data, producing more compact and accurate networks. Figure 1 summarizes iBMA-prior and our main contributions.
Results and discussion
We applied our method, iBMA-prior, to a time-series data set of gene expression levels for 95 genotyped haploid yeast segregants perturbed with the macrolide drug rapamycin over 6 time points [3]. These data are described in detail in the Methods section. To evaluate the performance of iBMA-prior, other published regression-based network construction methods were applied to the same time-series gene expression data set and the resulting networks were assessed for the recovery of documented regulatory relationships that were not used in the network construction process. We also checked whether each method recovered target genes enriched in upstream regions containing the binding sites of known TFs. We further carried out a simulation study to assess our method.
Comparison of different methods
First, we assessed the improvement of iBMA-prior over that of our previous work iBMA-shortlist from Yeung et al. [3] (see Methods for details) when applied to the same yeast time-series gene expression data. Then, we compared our BMA-based methods to several L1-regularized methods, including the least absolute shrinkage and selection operator (LASSO) [36, 63] and least angle regression (LAR) [64]. Regularized regression methods combine shrinkage and variable selection. L1-regularized methods aim to minimize the sum of squared errors with a bound on the sum of the absolute values of the coefficients [65]. Efficient implementations are available for some of these methods, including LASSO and LAR, and these methods have been applied to high-dimensional data in which there are more variables than observations [64, 66, 67].
We also compared the performance of our method with and without using external biological knowledge. We assessed hybrid methods by combining LASSO and LAR with the same supervised learning stage that was used in iBMA-prior and iBMA-shortlist. Table 1 lists all the methods compared in this analysis.
Assessment: recovery of documented relationships
To evaluate the accuracy of the network constructed by each method, we assessed its concordance with the Yeastract database, a curated repository of regulatory relationships between known TFs and target genes in the Saccharomyces cerevisiae literature [68]. If a regulatory relationship documented in Yeastract was also inferred in the network, we concluded that this relationship was recovered by direct evidence. Some of the positive examples used in the supervised learning stage are also documented in Yeastract. To avoid bias, we did not consider those regulatory relationships in the assessment. For each method compared, we applied Pearson’s chi-square test to a 2 × 2 contingency table that quantified the concordance of the inferred network with the Yeastract database. We also computed the true positive rate (TPR), defined as the proportion of the inferred positive relationships that are documented in Yeastract. It should be noted that Yeastract cannot document all “true” relationships as the entire set of regulatory relationships in yeast has yet to be defined. We further considered the ratio of the observed number of recovered relationships to its expected count as a result of random assortment (O/E). More detailed definitions of the assessment criteria can be found in Additional file 1: Figure S1.
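The TPR and O/E criteria can be sketched as follows. This is a hypothetical helper with toy counts, not the paper’s actual assessment code; the exact 2 × 2 layout is given in Additional file 1.

```python
def assess(inferred, reference, universe_size):
    """inferred, reference: sets of (regulator, gene) pairs;
    universe_size: number of assessable regulator-gene pairs."""
    recovered = len(inferred & reference)   # observed count O
    tpr = recovered / len(inferred)         # fraction of inferred edges documented
    # expected count E if the inferred edges were assorted at random
    expected = len(inferred) * len(reference) / universe_size
    return tpr, recovered / expected        # (TPR, O/E ratio)

# toy example: 4 inferred edges, 2 of them documented in the reference
inferred = {("TF1", "g1"), ("TF1", "g2"), ("TF2", "g3"), ("TF3", "g4")}
reference = {("TF1", "g1"), ("TF2", "g3"), ("TF2", "g5")}
tpr, oe = assess(inferred, reference, universe_size=100)
```

An O/E ratio well above 1 indicates that the method recovers documented relationships far more often than random assortment would.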
Table 2 summarizes the assessment results for the nine methods compared. Additional details are presented in Additional file 2: Table S1. First, we studied the impact of integrating external knowledge into the network construction process under the iBMA framework. The TPR of iBMA-prior was 18.00%, and the number of recovered positive relationships was 593, which is 4.11 times more than the expected number by random chance. Using the revised supervised step described in this work without incorporating prior probabilities into the iBMA framework, iBMA-shortlist yielded a TPR of 12.78% and an O/E ratio of 2.92. This is an improvement over network A (TPR = 9.98% and O/E = 2.28) constructed using the same algorithm and our previous version of the supervised framework as described in Yeung et al. [3]. All of our methods that incorporate external knowledge (iBMA-prior, iBMA-shortlist and network A) produced higher TPRs than iBMA-noprior, for which only the time-series gene expression data were used. In particular, iBMA-prior produced a TPR of 18.00%, a two-fold increase over iBMA-noprior (8.9%). Therefore, the integration of external data clearly improved the recovery of known relationships, and our latest method, iBMA-prior, performed the best.
Next, we compared our iBMA-based methods to L1-regularized methods. All the approaches that used LASSO and LAR generated networks that had far more misclassifications than the iBMA-based methods. Specifically, applications of LASSO or LAR without the supervised framework (LASSO-noprior and LAR-noprior) had TPRs of 5.20% and 7.71% respectively, the lowest among all the methods considered. Incorporating external knowledge did improve both LASSO and LAR, increasing the TPRs to about 11% in both LASSO-shortlist and LAR-shortlist. However, these TPRs were still lower than the TPRs for our iBMA-based methods. Our iBMA-based methods therefore outperformed methods based on LASSO and LAR for these data.
Finally, we investigated the impact of priors in iBMA-size, in which we applied a model size prior to calibrate the sparsity of the inferred networks without using any external data sources. iBMA-size can be considered as a simplified version of iBMA-prior that sets the regulatory potential (the prior probability that a candidate regulates a given gene) to a constant parameter that controls the expected number of regulators per gene. From Table 2, iBMA-size produced a TPR of 16.84%, which was higher than all the other methods considered except iBMA-prior. Although the number of recovered positive relationships was lower than that of iBMA-prior (114 < 593), iBMA-size also produced a network that was more compact (17,202 edges compared to 21,951 edges). We would recommend iBMA-size when gene-specific external information is not available.
In Table 2 and Additional file 2: Table S1, all the iBMA networks were thresholded at a posterior probability of 50% (i.e., edges with posterior probability <50% were removed). We found that iBMA-prior also outperformed the other methods for these data over different posterior probability thresholds (see Additional file 2: Table S2).
Assessment: transcription factor binding site analysis
In another assessment, we checked whether the set of target genes containing known binding sites for a certain TF were enriched among the child nodes of that TF in each inferred network. We first extracted the known binding sites for 129 TFs documented in the JASPAR database [69, 70]. Using TFM-scan [71], we retrieved a set of genes containing the known binding sites in their upstream regions for each TF. We then checked for enrichment of these genes among the inferred child nodes of the corresponding TFs in each network with Fisher’s exact test. Table 3 reports the number of TFs whose inferred child nodes exhibited such enrichment, at a false discovery rate (FDR) of 10%. All of the methods that made use of external information outperformed all of those that did not, illustrating the benefit of incorporating external knowledge. LASSO-shortlist and LAR-shortlist appeared to produce slightly better results than iBMA-prior in this binding site analysis, but this is likely a consequence of their larger network sizes (more than twice that of iBMA-prior).
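The enrichment check can be sketched with a one-sided Fisher’s exact test, computed here as the hypergeometric upper tail; the counts below are invented for illustration and the FDR step is not shown.

```python
from math import comb

def fisher_one_sided(k, K, n, N):
    """P(X >= k) where X ~ Hypergeometric: N genes in total, K of them
    carrying the TF's binding site, n drawn as the TF's inferred children,
    k of the children carrying the site."""
    upper = min(K, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, upper + 1)) / comb(N, n)

# toy counts: 100 assessable genes, 20 with the binding site,
# 10 inferred children of the TF, 8 of which carry the site
p = fisher_one_sided(k=8, K=20, n=10, N=100)
```

A small p-value here suggests the TF’s inferred children are over-represented among genes carrying its binding site.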
Comparison with Lirnet
Lee et al. [44] proposed a regression-based network construction method called Lirnet, which performed well on a publicly available gene expression data set from Brem et al. [72]. The Brem data set recorded the steady-state expression levels for 112 yeast segregants, 95 of which were profiled in our time-series experiments under different growth conditions. Lee et al. [44] showed that Lirnet outperformed Bayesian networks on the same data, and so we compared our top performer, iBMA-prior, with Lirnet. Because Lirnet was formulated to analyze steady-state expression data with no time components, we adapted our method to static data by removing the subscript referring to the time point from Equation (4):
We applied iBMA-prior to the same 3152-gene subset of the Brem et al. data that Lee et al. [44] used. Lirnet constrained the search of regulators for each target gene to 304 known TFs. For fair comparison, we also confined the set of candidate regulators to the same TFs. Networks constructed from steady-state gene expression data cannot have feedback loops [73–75]. To detect and remove such loops from our inferred network, we identified all strongly connected components using the igraph R package, and deleted the TF-gene link associated with the lowest posterior probability for each cycle.
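A standalone sketch of this loop-removal step follows. The paper used the igraph R package; this Python version instead finds a directed cycle by depth-first search and deletes its lowest-posterior-probability edge, repeating until the graph is acyclic. The edge weights are invented.

```python
def find_cycle(edges):
    """edges: dict (u, v) -> posterior probability. Return the list of
    edges forming some directed cycle, or None if the graph is acyclic."""
    graph, nodes = {}, set()
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        nodes.update((u, v))
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    stack = []

    def dfs(u):
        color[u] = GRAY
        stack.append(u)
        for v in graph.get(u, []):
            if color[v] == GRAY:                 # back edge closes a cycle
                cyc = stack[stack.index(v):] + [v]
                return list(zip(cyc, cyc[1:]))
            if color[v] == WHITE:
                found = dfs(v)
                if found:
                    return found
        stack.pop()
        color[u] = BLACK
        return None

    for n in nodes:
        if color[n] == WHITE:
            found = dfs(n)
            if found:
                return found
    return None

def remove_feedback_loops(edges):
    """Delete the weakest edge of each detected cycle until none remain."""
    edges = dict(edges)
    while True:
        cycle = find_cycle(edges)
        if cycle is None:
            return edges
        weakest = min(cycle, key=lambda e: edges[e])
        del edges[weakest]

pruned = remove_feedback_loops({("a", "b"): 0.9, ("b", "c"): 0.8,
                                ("c", "a"): 0.6, ("a", "d"): 0.95})
```

In the toy graph the cycle a→b→c→a is broken by deleting its weakest link (c→a, posterior 0.6).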
As before, we evaluated the different methods by assessing the concordance of the inferred networks with the Yeastract database using Pearson’s chi-square test. The assessment results in Table 4 show that iBMA-prior outperformed Lirnet, almost doubling the TPR and the O/E ratio while producing a comparable number of misclassified regulatory relationships.
Simulation study
We designed and conducted a series of simulations to further assess our proposed method. We used the fitted model obtained from applying iBMA-prior to the yeast time-series microarray data set as the true underlying network, and generated simulated expression data from the estimated linear regression model. Twenty data sets, each with the same dimensions as the real time-series expression data, were independently generated as follows:

1. Set the prior probability of a regulatory relationship for each gene pair to the same value as the regulatory potential obtained at the supervised learning stage using the real external data.

2. Set the expression levels of the 3556 genes for the 95 yeast segregants and the two parental strains at time t = 0 to the observed measurements in the real yeast time-series gene expression data.

3. For each target gene g, define the set R_{ g } of true regulators as those with a posterior probability of ≥50% in our inferred network using iBMA-prior and the real time-series data.

4. For time t = 1 to 5 and gene g = 1 to 3556, generate the simulated true expression level for each segregant s using the following equation:
$${X}_{g,t,s}^{\mathit{\text{true}}}={\beta }_{g0}+{\displaystyle \sum _{r\in {R}_{g}}{\beta }_{gr}{X}_{r,t-1,s}^{\mathit{\text{true}}}},$$(2)
where the β’s are given by the posterior expectation of the regression coefficients corresponding to the set of true regulators determined in Step 3.

5. Generate the simulated observed gene expression levels by adding noise to the error-free true expression levels, i.e.,
$${X}_{g,t,s}={X}_{g,t,s}^{\mathit{\text{true}}}+{\epsilon }_{g,t,s},$$(3)
where ϵ_{g,t,s} ~ N(0, σ_{ g }^{2}), with σ_{ g }^{2} given by the sample variance of the regression residuals in the real data analysis. Others, e.g. [76], have shown that the error in log ratios of expression data is reasonably approximated by a normal distribution.
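The generation scheme above can be sketched as follows, a minimal illustration with made-up dimensions and coefficients rather than the fitted 3556-gene yeast model.

```python
import random

def simulate(x0, true_regulators, beta, sigma, n_times=5, seed=0):
    """x0: dict gene -> expression at t = 0 (step 2);
    true_regulators: dict gene -> list of parents R_g (step 3);
    beta: dict (g, r) -> coefficient, with r == None for the intercept;
    sigma: dict gene -> noise standard deviation."""
    rng = random.Random(seed)
    true_x = {0: dict(x0)}
    observed = {0: dict(x0)}
    for t in range(1, n_times + 1):
        prev = true_x[t - 1]
        true_x[t], observed[t] = {}, {}
        for g, regs in true_regulators.items():
            # step 4: linear model driven by regulators at time t-1
            level = beta[(g, None)] + sum(beta[(g, r)] * prev[r] for r in regs)
            true_x[t][g] = level
            # step 5: add N(0, sigma_g^2) measurement noise
            observed[t][g] = level + rng.gauss(0.0, sigma[g])
    return true_x, observed

# toy run with two genes and zero noise, so observed == true
true_x, obs = simulate(
    x0={"g1": 1.0, "g2": 2.0},
    true_regulators={"g1": ["g2"], "g2": []},
    beta={("g1", None): 0.5, ("g1", "g2"): 1.0, ("g2", None): 2.0},
    sigma={"g1": 0.0, "g2": 0.0})
```

With nonzero sigma values the observed series diverges from the true series, mimicking the measurement error in Equation (3).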
To assess the accuracy of networks inferred with the simulated data sets, we compared each of these networks to the true network created in Step 3 of the data generation algorithm. We used the same assessment criteria as in the real data analysis with the true network replacing Yeastract as the reference. As shown in Table 5, iBMA-prior outperformed the other iBMA-based methods, yielding a TPR of 71.13% averaged over 20 replications (compared to 47.23% for iBMA-shortlist, 20.31% for iBMA-size, and 8.55% for iBMA-noprior).
Conclusions
In this article, we have proposed a methodology that systematically integrates external biological knowledge into BMA for network construction. A key feature of our approach is a formal mechanism to account for model uncertainty. For each target gene, we arrive at a compact set of promising models from which to draw inference, with model weights calibrated by the external biological knowledge. Given a reasonable estimate of network density, our method infers sparse, compact and accurate networks from both real and simulated data. It does not put a hard limit on the number of regulators per target gene, unlike some other methods, such as Bayesian network approaches that impose this constraint to reduce the computational burden. While known TFs are in general favored a priori by the available external biological knowledge, we do not confine the search for regulators to them. This allows for the discovery of new regulatory relationships.
We showed that our method, iBMA-prior, consistently outperformed our previous method [3] using both real and simulated time-series gene expression data. We showed that this improvement is mostly due to the incorporation of external data sources via prior probabilities (iBMA-prior versus iBMA-shortlist in Table 2). We also improved upon our previous supervised method by adjusting for the sampling bias of positive and negative training samples (iBMA-shortlist versus network A in Table 2). We further showed that our iBMA-based methods (iBMA-prior and iBMA-shortlist) recovered a higher percentage of known regulatory relationships (i.e. higher TPRs) than other popular variable selection methods (LASSO and LAR).
A key contribution of this work is the derivation of more compact networks with higher TPRs. Unfortunately, due to incomplete knowledge, the evaluation of false positives and false negatives is difficult using real data. Therefore, we supplemented our study with a simulation study designed to mimic the real data, and showed that iBMA-prior produced fewer misclassified cases (i.e. the sum of false positives and false negatives) than other iBMA-based methods.
There are many directions for future work. A time-lag regression model, i.e., one that explains the current expression level of a target gene with the past expression levels of its regulators, is used in our methodology. This model formulation is in line with many other regression-based methods targeting time-series gene expression data [3, 28, 35, 48, 49]. The expression levels were measured at regular time intervals in our yeast time-series gene expression data set. If the levels were measured at non-uniform time intervals, we could create interpolated time-series data with interpolation strategies employed in the literature [51, 53]. It would be useful to apply our methodology to network construction in prokaryotic systems, as we would expect better performance in these less complex systems, which tend to be more dominated by transcriptional control [77].
Methods
Time-series gene expression data for yeast segregants
We applied our method to a set of time-series mRNA expression data measuring the gene expression levels of 95 genotyped haploid yeast segregants perturbed with the macrolide drug rapamycin [3]. These segregants, along with their genetically diverse parents, BY4716 (BY) and RM11-1a (RM), have been genotyped previously [72]. Rapamycin was chosen for perturbation because it was expected to induce widespread changes in global transcription, based on a screen of the public microarray data repositories [78–80]. This perturbation allowed for the capture of a large subset of all regulatory interactions encoded by the yeast genome. Each yeast culture was sampled at 10-minute intervals for 50 minutes after rapamycin addition. The RNA purified from these samples was profiled with Affymetrix Yeast 2.0 microarrays. Probe signals were summarized into gene expression levels using the Robust Multi-array Average (RMA) method [81], and genes not exhibiting significant changes in expression were filtered from the data as described in [3]. The data subset that remained consisted of the time-dependent mRNA expression profiles of 3556 genes. The complete time-series gene expression data are publicly available at ArrayExpress (http://www.ebi.ac.uk/arrayexpress/) with accession number E-MTAB-412.
Bayesian model averaging (BMA)
BMA is a variable selection approach that takes model uncertainty into account by averaging over the posterior distribution of a quantity of interest based on multiple models, weighted by their posterior model probabilities [82, 83]. In BMA, the posterior distribution of a quantity of interest Θ given the data D is given by $\mathrm{\text{Pr}}\left(\Theta \mid D\right)={\displaystyle \sum _{k=1}^{K}\mathrm{\text{Pr}}\left(\Theta \mid D,{M}_{k}\right)\mathrm{\text{Pr}}\left({M}_{k}\mid D\right)}$, where M_{1},…,M_{ K } are the models considered. Each model consists of a set of candidate regulators. In order to efficiently identify a compact set of promising models M_{ k } out of all possible models, two approaches are sequentially applied. First, the leaps and bounds algorithm [84] is applied to identify the best nbest models for each number of variables (i.e., regulators). Next, Occam’s window is applied to discard models with much lower posterior model probabilities than the best one [85]. The Bayesian Information Criterion (BIC) [86] is used to approximate each model’s integrated likelihood, from which its posterior model probability can be determined.
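Concretely, the BIC approximation turns each model’s BIC score into a posterior model probability via Pr(M_k | D) ∝ exp(−BIC_k/2), normalized over the retained models. A minimal sketch with invented BIC values:

```python
import math

def bma_weights(bics):
    """Convert a list of BIC scores into normalized BMA model weights."""
    best = min(bics)  # subtract the best BIC first for numerical stability
    w = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(w)
    return [x / total for x in w]

# three candidate models; a BIC difference of 2 gives an odds ratio of e
weights = bma_weights([100.0, 102.0, 110.0])
```

Any quantity of interest is then averaged over the models using these weights.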
While BMA has performed well in many applications [60], it cannot be applied directly to the current data set, in which there are many more variables than samples. Yeung et al. [62] proposed an iterative version of BMA (iBMA) to resolve this problem. At each iteration, BMA is applied to a small number, say w = 30, of variables that can be efficiently enumerated by leaps and bounds. Candidate predictor variables with low posterior inclusion probabilities are discarded, leaving room for other variables in the candidate list to be considered in subsequent iterations. This procedure continues until all the variables have been processed.
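The control flow of this iteration can be sketched as follows. This is a hedged outline: the real method scores each window with BMA via leaps and bounds, which is abstracted here into a caller-supplied scoring function, and the evict-weakest fallback is our own assumption to guarantee progress through the candidate list.

```python
def ibma_select(candidates, inclusion_prob, w=30, keep_threshold=0.01):
    """candidates: ranked list of candidate regulators;
    inclusion_prob(window): returns a dict variable -> posterior
    inclusion probability for the variables in that window."""
    active, queue = [], list(candidates)
    while queue:
        while queue and len(active) < w:      # refill the window of size w
            active.append(queue.pop(0))
        probs = inclusion_prob(active)
        # discard low-probability variables, freeing room for new candidates
        active = [v for v in active if probs[v] >= keep_threshold]
        if len(active) == w and queue:
            # fallback (our assumption, not in the paper): evict the weakest
            # variable so the iteration always makes progress
            active.remove(min(active, key=lambda v: probs[v]))
    return active

# toy scorer: pretend the posterior inclusion probability is a fixed score
scores = {"a": 0.9, "b": 0.001, "c": 0.004, "d": 0.002, "e": 0.8}
selected = ibma_select(["a", "b", "c", "d", "e"],
                       lambda window: {v: scores[v] for v in window}, w=2)
```

With a window of size 2, the weak candidates b, c and d are discarded as they pass through the window, leaving a and e.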
Supervised framework for the integration of external knowledge
We formulated network construction from time-series data as a regression problem in which the expression of each gene is predicted by a linear combination of the expression levels of candidate regulators at the previous time point. Let D be the entire data set and X_{g,t,s} be the expression of gene g at time t in segregant s. Denote by R_{ g } the set of regulators for gene g in a candidate model. The expression of gene g is formulated by the following regression model:
$$E\left({X}_{g,t,s}\right)={\beta }_{g0}+{\displaystyle \sum _{r\in {R}_{g}}{\beta }_{gr}{X}_{r,t-1,s}},$$(4)
where E denotes expectation and the β’s are regression coefficients. For each gene, we apply iBMA to infer the set of regulators.
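The time-lag structure of this regression can be illustrated with a hypothetical helper that assembles the response/predictor pairs; gene names and the data layout below are invented for the example.

```python
def lagged_design(expr, target, regulators):
    """expr: dict gene -> {(t, s): level}. Returns (y, X) where row i of X
    holds the regulators' expression one time step before response y[i],
    pooled over segregants and time points."""
    y, X = [], []
    for (t, s), level in sorted(expr[target].items()):
        if t == 0:
            continue                       # no predecessor exists for t = 0
        y.append(level)
        X.append([expr[r][(t - 1, s)] for r in regulators])
    return y, X

# toy data: one segregant "s1", three time points, one candidate regulator
expr = {"g": {(0, "s1"): 1.0, (1, "s1"): 2.0, (2, "s1"): 3.0},
        "r": {(0, "s1"): 0.5, (1, "s1"): 0.7, (2, "s1"): 0.9}}
y, X = lagged_design(expr, "g", ["r"])
```

The resulting (y, X) pairs are what the per-gene regression in Equation form is fit to.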
To account for external knowledge in the network construction process, Yeung et al. [3] introduced a supervised framework to estimate the weights of various types of evidence of transcriptional regulation and subsequently derived top candidate regulators. For instance, a target gene is likely to be coexpressed with its regulators across diverse conditions in publicly available, large-scale microarray experiments [78, 87, 88]. ChIP-chip data [89] provide supporting evidence for a direct regulatory relationship between a given TF and a gene of interest by showing that the TF directly binds to the promoter of that gene. A candidate regulator with known regulatory roles in curated databases such as the Saccharomyces Genome Database (SGD) [90] would be favored a priori. Polymorphisms in the amino acid sequence of a candidate regulator that affect its regulatory potential provide further evidence of a regulatory relationship [44]. Common gene ontology (GO) [91] annotations for a target gene and candidate regulators also provide evidence of functional relationship.
To study the relative importance of the various types of external knowledge from the supervised framework, we collected 583 positive examples of known regulatory relationships between TFs and target genes from the Saccharomyces cerevisiae Promoter Database (SCPD) [92] and the Yeast Protein Database (YPD) [93]. Random sampling of these TF-gene pairs was used to generate 444 negative examples. Logistic regression using BMA was applied to estimate the contribution of each type of external knowledge in the prediction of regulatory relationships. The fitted model was then used to predict the regulatory potential π_{ gr } of a candidate regulator r for a gene g, i.e., the prior probability that candidate r regulates gene g, for all possible regulator-gene pairs. Next, the regulatory potentials were used to rank and shortlist the top p candidate regulators for each gene (p = 100 by default in our experiments). The shortlisted candidates were then input to BMA for variable selection in the network construction process.
Incorporating prior probabilities into iBMA
The potential benefit of using information from external knowledge to refine the search for regulators was shown by Yeung et al. and many others [3, 13, 15–17, 43, 44]. However, external knowledge was only used to shortlist the top p candidate regulators for each target gene in Yeung et al. Here, we develop a formal framework that fully incorporates external knowledge into the BMA network construction process.
We associate each candidate model M_{ k } with a prior probability, namely:
$$\mathrm{\text{Pr}}\left({M}_{k}\right)={\displaystyle \prod _{r}{\pi }_{gr}^{{\delta }_{kr}}{\left(1-{\pi }_{gr}\right)}^{1-{\delta }_{kr}}},$$(5)
where π_{ gr } is the regulatory potential of a candidate regulator r for a gene g, δ_{ kr } = 1 if r ∈ M_{ k }, and δ_{ kr } = 0 otherwise [85, 94]. Intuitively, we consider models consisting of candidate regulators supported by considerable external evidence to be front-runners. A model that contains many candidate regulators with little support from external knowledge is penalized.
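A worked instance of this prior, with invented regulatory potentials: a model that includes only a well-supported regulator receives a higher prior than one that also includes a poorly supported regulator.

```python
def model_prior(pi, included):
    """pi: dict regulator -> regulatory potential pi_gr for the target gene;
    included: set of regulators in model M_k. Multiplies pi_gr for each
    included regulator and (1 - pi_gr) for each excluded one."""
    prior = 1.0
    for r, p in pi.items():
        prior *= p if r in included else (1.0 - p)
    return prior

pi = {"r1": 0.8, "r2": 0.1}            # r1 well supported, r2 barely
sparse = model_prior(pi, {"r1"})       # model with only r1
dense = model_prior(pi, {"r1", "r2"})  # model that also adds r2
```

Adding the unsupported regulator r2 cuts the model prior by a factor of (1 − 0.1)/0.1 = 9, which is exactly the penalization described above.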
The posterior model probability of model M_{k} is given by

$$\Pr(M_k \mid D) \propto f(D \mid M_k)\,\Pr(M_k), \qquad (6)$$

where f(D | M_{k}) is the integrated likelihood of the data D under model M_{k}, and the proportionality constant ensures that the posterior model probabilities sum to 1.
Occam's window is then used to discard any model M_{k} whose posterior odds relative to the model with the highest posterior probability, M_{opt}, fall below 1/OR. The parameter OR controls the compactness of the set of selected models, and here we set it to 20.
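A small sketch of the Occam's window filter over a vector of (assumed) posterior model probabilities:

```python
import numpy as np

def occams_window(post_probs, OR=20.0):
    """Discard models whose posterior odds against the best model fall
    below 1/OR, then renormalize the surviving models' probabilities."""
    post = np.asarray(post_probs, dtype=float)
    post = post / post.sum()                  # normalize to sum to 1
    keep = post / post.max() >= 1.0 / OR      # odds vs. the top model
    post = np.where(keep, post, 0.0)
    return keep, post / post.sum()            # renormalize survivors

keep, post = occams_window([0.6, 0.3, 0.05, 0.001], OR=20.0)
```

With OR = 20, the last model (odds 0.001/0.6 ≈ 0.0017 against the best) is discarded while the third (odds ≈ 0.083) survives.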
Extension of iBMA: cumulative model support
In Yeung et al. [3], the models selected in an intermediate iteration by iBMA were not recorded once that iteration was completed, and the final set of models selected were chosen only from those considered in the last iteration. While computationally efficient, this strategy overlooked the possibility of accumulated model support over multiple iterations. We improve the model selection process by storing all the models selected in any iteration and applying Occam’s window to this cumulative set of models as the last step in the algorithm.
At the end of each iteration of iBMA, and after applying Occam's window to all models considered, we compute the posterior inclusion probability for each candidate regulator r by summing the posterior probabilities of all models that involve this regulator:

$$\Pr(\beta_{gr} \neq 0 \mid D) = \sum_{M_k \in F} \delta_{kr}\,\Pr(M_k \mid D),$$

where F is the set of all possible models for gene g, β_{gr} is the regression coefficient of a candidate regulator r for a gene g, δ_{kr} = 1 if r ∈ M_{k} and δ_{kr} = 0 otherwise. Finally, we infer regulators for each target gene g by thresholding the posterior inclusion probability at a predetermined level (50% in all our experiments unless otherwise specified).
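The inclusion-probability computation reduces to a weighted sum of model indicators. The toy models and posterior probabilities below are assumptions for illustration:

```python
import numpy as np

# Three surviving models over candidates (r1, r2, r3): rows are models,
# entries are the indicators delta_kr.
models = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [0, 0, 1]], dtype=float)
post = np.array([0.5, 0.3, 0.2])     # Pr(M_k | D), summing to 1 (assumed)

# Pr(beta_gr != 0 | D): total posterior mass of models containing r.
inclusion = post @ models            # r1 appears in models 1 and 2
selected = inclusion > 0.5           # 50% threshold
```

Here only r1 (inclusion probability 0.5 + 0.3 = 0.8) clears the 50% threshold and is inferred to be a regulator.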
Extensions of the supervised framework
We have extended the supervised framework of Yeung et al. [3] in three ways.
Imputation of missing values in ChIPchip data
About 9% of the ChIP-chip data used in the training samples were originally undefined. The ChIP-chip data take the form of p-values for the statistical tests of whether candidate regulator r binds to the upstream region of gene g in vivo. In [3], those undefined values were regarded as lack of evidence for upstream binding and assigned values of one. Here, we used multiple imputation [95, 96], in which we sampled with replacement from the empirical distribution of the non-missing ChIP-chip data, conditioning on the presence or absence of regulatory relationships. We used 20 imputations as recommended by Graham et al. [97] for scenarios with about 10% missing data. Logistic regression was then performed on the training sample filled with the imputed ChIP-chip values.
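One pass of this imputation scheme can be sketched as follows; the p-values and labels are toy data, and `impute_once` is a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_once(pvals, labels, rng):
    """One imputation: fill missing ChIP-chip p-values (NaN) by sampling
    with replacement from the observed values sharing the same class label
    (presence/absence of a regulatory relationship)."""
    out = pvals.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        miss = mask & np.isnan(pvals)
        observed = pvals[mask & ~np.isnan(pvals)]
        out[miss] = rng.choice(observed, size=miss.sum(), replace=True)
    return out

pvals = np.array([0.001, np.nan, 0.8, np.nan, 0.02, 0.9])
labels = np.array([1, 1, 0, 0, 1, 0])    # 1 = known regulatory pair
imputations = [impute_once(pvals, labels, rng) for _ in range(20)]
```

Each of the 20 completed data sets would then be fed through the logistic regression, with the resulting estimates combined across imputations.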
Truncation of extreme values in external data
Some of the external data types used in the supervised learning stage contained value ranges for individual genes that far exceeded the ranges for these genes in the training samples, e.g. the SNP-level information in Additional file 2: Table S3. Therefore, we truncated all extreme values in the external data to the respective maximum value observed in the training samples.
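This truncation is an elementwise cap at the training-sample maximum; the cap value and data below are invented for illustration:

```python
import numpy as np

train_max = 12.0                              # assumed per-feature training max
external = np.array([0.5, 3.0, 250.0, 11.9])  # one feature across genes

# Cap extreme values so out-of-range entries cannot dominate the fitted model.
truncated = np.minimum(external, train_max)
```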
Adjustment for sampling bias regarding positive and negative cases
In the supervised framework of Yeung et al., the expected number of regulators per target gene, computed as the sum of the regulatory potentials of all candidate regulators, mostly fell between 400 and 600 (see Figure 2(a)). This apparent overestimation of positive regulatory relationships arose because similar numbers of positive and negative examples were used in the supervised learning stage. Given the sparse nature of a gene regulatory network, we expect the number of TF-gene pairs with regulatory relationships to be a small proportion of the total.
Here, we address this issue by using a strategy that is commonly used in case–control studies, in which disease (positive) cases are usually rare [98, 99]. Let π_{1} and π_{0} be the sampling rates for positive and negative cases respectively. To adjust for the difference in the sampling rates, we add an offset of log(π_{1}/π_{0}) to the logistic regression model. Equivalently, we divide the predicted odds by π_{1}/π_{0}. Previous literature has suggested that the in-degree distribution of gene regulatory networks decays exponentially [100–102]. Based on regulatory relationships documented in various yeast databases [90, 92, 93, 103, 104], Guelzim et al. [100] empirically estimated the in-degree distribution of the regulatory network as 157e^{−0.45m}, where m denotes the number of TFs for a target gene. This implies that each target gene is regulated by approximately 2.76 TFs on average. Since we have 583 positive training examples, 444 negative examples, and 6000 yeast genes, we characterize such a network with density τ = 2.76/6000 = 0.00046, and compute ${\pi}_{1}=\frac{583}{6000\times 2.76}=3.52\%$, and ${\pi}_{0}=\frac{444}{6000\times \left(6000-2.76\right)}=0.0012\%$. Therefore, we divide all the predicted odds by π_{1}/π_{0} = 2853. For instance, if the original predicted probability is 0.9, i.e., the predicted odds is 9, then after scaling the odds to adjust for sampling bias, the odds become 9/2853 = 0.0032, implying an adjusted probability of 0.0032. As shown in Figure 2(b), the expected number of regulators per target gene dropped substantially, to around 0.5, after our three correction strategies (adjustment of sampling bias, imputation of missing ChIP-chip values and truncation of extreme values) were applied. Additional file 1: Figure S2 shows the incremental merit of our correction strategies.
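The arithmetic of this adjustment can be checked directly from the counts given above (583 positives, 444 negatives, 6000 genes, 2.76 regulators per gene on average):

```python
pi1 = 583 / (6000 * 2.76)            # sampling rate of positive pairs
pi0 = 444 / (6000 * (6000 - 2.76))   # sampling rate of negative pairs
scale = pi1 / pi0                    # the odds divisor, approximately 2853

def adjust_probability(p):
    """Divide the predicted odds by pi1/pi0 (equivalently, subtract
    log(pi1/pi0) from the logit) and convert back to a probability."""
    odds = p / (1.0 - p)
    adj_odds = odds / scale
    return adj_odds / (1.0 + adj_odds)

p_adj = adjust_probability(0.9)      # odds 9 -> 9/2853, probability ~0.0032
```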
Additional file 2: Table S3 gives the estimated regression coefficient and the posterior probability for each external data type in our revised supervised framework.
To assess the sensitivity of our results to changes in the assumed prior average number of regulators per target gene, we repeated the analysis at various levels of the network density τ, and found that the assessment results were comparable. Please see Additional file 3 for complete details.
Summary: outline of algorithm
1. For each gene g, rank the candidate regulators based on the regulatory potentials predicted from the supervised framework.
2. Shortlist the top p candidates from the ranked list (p = 100 in our experiments).
3. Fill the BMA window with the top w candidates in the shortlist (w = 30 in our experiments).
4. Apply BMA with prior model probabilities based on the external knowledge:
 a. Determine the best nbest models for each number of variables using the leaps and bounds algorithm (nbest = 10 in our experiments).
 b. For each selected model, compute its prior probability relative to the w candidates in the current BMA window using Equation (5).
 c. Remove candidate regulators with posterior inclusion probability Pr(β_{gr} ≠ 0 | D) < 5%.
5. Fill the w-candidate BMA window with candidates from the shortlist not yet considered.
6. Repeat steps 4–5 until all p candidates in the shortlist have been processed.
7. Compute the prior probability for all selected models relative to all p shortlisted candidates using Equation (5).
8. Take the collection of all models selected at any iteration of BMA, and apply Occam's window to reduce the set of models.
9. Compute the posterior inclusion probability for each candidate regulator using the set of selected models, and infer candidates associated with a posterior probability exceeding a prespecified threshold (50%) to be regulators for target gene g.
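The window-filling loop in steps 3, 5 and 6 can be sketched as follows. The selection step inside each pass (BMA with Occam's window) is stood in for by a hypothetical `keep_fn`, and the candidate names and scores are invented:

```python
def ibma_windows(shortlist, keep_fn, w=3):
    """Iterative windowing: fill a w-candidate window from the ranked
    shortlist, run a selection pass (keep_fn stands in for BMA with
    Occam's window), then top the window up from the shortlist until
    every shortlisted candidate has been processed."""
    window, remaining = list(shortlist[:w]), list(shortlist[w:])
    while True:
        window = keep_fn(window)              # survivors of this pass
        if not remaining:
            return window
        # Consume at least one candidate per pass to guarantee termination.
        take = max(w - len(window), 1)
        window += remaining[:take]
        remaining = remaining[take:]

# Toy selection rule: keep candidates whose (assumed) inclusion score >= 0.5.
scores = {"r1": 0.9, "r2": 0.1, "r3": 0.7, "r4": 0.2, "r5": 0.6}
keep = lambda cands: [c for c in cands if scores[c] >= 0.5]
final = ibma_windows(["r1", "r2", "r3", "r4", "r5"], keep, w=3)
```

With these toy scores, r2 and r4 are dropped as their windows are processed, and the surviving candidates r1, r3 and r5 remain at the end.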
External knowledge is used in the following ways:
1. All candidate regulators are ranked according to their regulatory potentials, which are predicted from the available external data sources at the supervised learning stage.
2. Model selection is performed by comparing models against each other based on their posterior odds. As shown by Equation (6), the posterior odds are proportional to the product of the integrated likelihood and the prior odds. The prior probability, and therefore the prior odds, of a candidate model is formulated as a function of the regulatory potentials.
3. The posterior inclusion probability of each candidate regulator, from which inference is made about the presence or absence of a regulatory relationship, is positively related to its regulatory potential. As shown in Equation (5), a factor of π_{gr} is contributed to each model in which candidate r is included; otherwise, a factor of 1 − π_{gr} is contributed.
Author contributions
KL and AER developed the methodology. KL implemented the methods. KL and KYY analyzed the data. KMD performed the experiments, and JZ, EES and REB designed them. AER and KYY conceived the study. KL, AER and KYY wrote the manuscript. All authors read, edited and approved the final manuscript.
Abbreviations
BMA: Bayesian model averaging
iBMA: Iterative Bayesian model averaging
LAR: Least angle regression
LASSO: Least absolute shrinkage and selection operator
TF: Transcription factor
References
 1.
Schadt EE: Molecular networks as sensors and drivers of common human diseases. Nature. 2009, 461: 218223. 10.1038/nature08454.
 2.
Schadt EE, Zhang B, Zhu J: Advances in systems biology are enhancing our understanding of disease and moving us closer to novel disease treatments. Genetica. 2009, 136: 259269. 10.1007/s107090099359x.
 3.
Yeung KY, Dombek KM, Lo K, Mittler JE, Zhu J, Schadt EE, Bumgarner RE, Raftery AE: Construction of regulatory networks using expression timeseries data of a genotyped population. Proc Natl Acad Sci U S A. 2011, 108: 1943619441. 10.1073/pnas.1116442108.
 4.
Heckerman D: A tutorial on learning with Bayesian networks. Studies in Computational Intelligence. 2008, 156: 3382. 10.1007/9783540850663_3.
 5.
Jensen FV, Nielsen TD: Bayesian networks and decision graphs. 2007, New York, NY: Springer, 2
 6.
Pearl J: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. 1988, San Francisco, CA: Morgan Kaufmann
 7.
Friedman N: Inferring cellular networks using probabilistic graphical models. Science. 2004, 303: 799805. 10.1126/science.1094068.
 8.
Friedman N, Linial M, Nachman I, Pe'er D: Using Bayesian networks to analyze expression data. J Comput Biol. 2000, 7: 601620. 10.1089/106652700750050961.
 9.
Hartemink AJ, Gifford DK, Jaakkola TS, Young RA: Using graphical models and genomic expression data to statistically validate models of genetic regulatory networks. Pac Symp Biocomput. 2001, 6: 422433.
 10.
Hartemink AJ, Gifford DK, Jaakkola TS, Young RA: Combining location and expression data for principled discovery of genetic regulatory network models. Pac Symp Biocomput. 2002, 7: 437449.
 11.
Husmeier D: Sensitivity and specificity of inferring genetic regulatory interactions from microarray experiments with dynamic Bayesian networks. Bioinformatics. 2003, 19: 22712282. 10.1093/bioinformatics/btg313.
 12.
Pe'er D, Regev A, Elidan G, Friedman N: Inferring subnetworks from perturbed expression profiles. Bioinformatics. 2001, 17: S215S224. 10.1093/bioinformatics/17.suppl_1.S215.
 13.
Djebbari A, Quackenbush J: Seeded Bayesian Networks: constructing genetic networks from microarray data. BMC Syst Biol. 2008, 2: 5710.1186/17520509257.
 14.
Ong IM, Glasner JD, Page D: Modelling regulatory pathways in E. coli from time series expression profiles. Bioinformatics. 2002, 18: S241S248.
 15.
Geier F, Timmer J, Fleck C: Reconstructing generegulatory networks from time series, knockout data, and prior knowledge. BMC Syst Biol. 2007, 1: 1110.1186/17520509111.
 16.
Imoto S, Kim S, Goto T, Aburatani S, Tashiro K, Kuhara S, Miyano S: Bayesian network and nonparametric heteroscedastic regression for nonlinear modeling of genetic network. J Bioinform Comput Biol. 2003, 1: 231252. 10.1142/S0219720003000071.
 17.
Zhu J, Zhang B, Smith EN, Drees B, Brem RB, Kruglyak L, Bumgarner RE, Schadt EE: Integrating largescale functional genomic data to dissect the complexity of yeast regulatory networks. Nat Genet. 2008, 40: 854861. 10.1038/ng.167.
 18.
Schadt EE, Lamb J, Yang X, Zhu J, Edwards S, Guhathakurta D, Sieberts SK, Monks S, Reitman M, Zhang C, Lum PY, Leonardson A, Thieringer R, Metzger JM, Yang L, Castle J, Zhu H, Kash SF, Drake TA, Sachs A, Lusis AJ: An integrative genomics approach to infer causal associations between gene expression and disease. Nat Genet. 2005, 37: 710717. 10.1038/ng1589.
 19.
Zhu J, Chen Y, Leonardson AS, Wang K, Lamb JR, Emilsson V, Schadt EE: Characterizing dynamic changes in the human blood transcriptional network. PLoS Comput Biol. 2010, 6: e100067110.1371/journal.pcbi.1000671.
 20.
Davidson EH, Rast JP, Oliveri P, Ransick A, Calestani C, Yuh CH, Minokawa T, Amore G, Hinman V, ArenasMena C, Otim O, Brown CT, Livi CB, Lee PY, Revilla R, Rust AG, Pan Z, Schilstra MJ, Clarke PJ, Arnone MI, Rowen L, Cameron RA, McClay DR, Hood L, Bolouri H: A genomic regulatory network for development. Science. 2002, 295: 16691678. 10.1126/science.1069883.
 21.
Friedman N, Murphy K, Russell S: Learning the structure of dynamic probabilistic networks. 1998, San Mateo, CA: Morgan Kaufmann, 139-147.
 22.
Kim SY, Imoto S, Miyano S: Inferring gene networks from time series microarray data using dynamic Bayesian networks. Brief Bioinform. 2003, 4: 228235. 10.1093/bib/4.3.228.
 23.
Murphy K, Mian S: Modeling gene expression data using dynamic Bayesian networks. Technical Report, Computer Science Division. 1999, Berkeley, CA: University of California
 24.
Yu J, Smith VA, Wang PP, Hartemink AJ, Jarvis ED: Advances to Bayesian network inference for generating causal networks from observational biological data. Bioinformatics. 2004, 20: 35943603. 10.1093/bioinformatics/bth448.
 25.
Chickering DM: Learning Bayesian Networks is NPComplete. Learning from Data: Artificial Intelligence and Statistics V. Edited by: Fisher D, Lenz HJ. 1996, SpringerVerlag, 121130.
 26.
Chickering DM, Heckerman D, Meek C: Largesample learning of Bayesian networks is NPhard. J Mach Learn Res. 2004, 5: 12871330.
 27.
Zou M, Conzen SD: A new dynamic Bayesian network (DBN) approach for identifying gene regulatory networks from time course microarray data. Bioinformatics. 2005, 21: 7179. 10.1093/bioinformatics/bth463.
 28.
Zhang SQ, Ching WK, Tsing NK, Leung HY, Guo D: A new multiple regression approach for the construction of genetic regulatory networks. Artif Intell Med. 2010, 48: 153160. 10.1016/j.artmed.2009.11.001.
 29.
Yeung MK, Tegner J, Collins JJ: Reverse engineering gene networks using singular value decomposition and robust regression. Proc Natl Acad Sci U S A. 2002, 99: 61636168. 10.1073/pnas.092576199.
 30.
Guthke R, Moller U, Hoffmann M, Thies F, Topfer S: Dynamic network reconstruction from gene expression data applied to immune response during bacterial infection. Bioinformatics. 2005, 21: 16261634. 10.1093/bioinformatics/bti226.
 31.
HuynhThu VA, Irrthum A, Wehenkel L, Geurts P: Inferring regulatory networks from expression data using treebased methods. PLoS One. 2010, 5: e1277610.1371/journal.pone.0012776.
 32.
Lee SI, Pe'er D, Dudley AM, Church GM, Koller D: Identifying regulatory mechanisms using individual variation reveals key role for chromatin modification. Proc Natl Acad Sci U S A. 2006, 103: 1406214067. 10.1073/pnas.0601852103.
 33.
NepomucenoChamorro IA, AguilarRuiz JS, Riquelme JC: Inferring gene regression networks with model trees. BMC Bioinforma. 2010, 11: 51710.1186/1471210511517.
 34.
Segal E, Shapira M, Regev A, Pe'er D, Botstein D, Koller D, Friedman N: Module networks: identifying regulatory modules and their conditionspecific regulators from gene expression data. Nat Genet. 2003, 34: 166176. 10.1038/ng1165.
 35.
Huang T, Liu L, Qian Z, Tu K, Li Y, Xie L: Using GeneReg to construct time delay gene regulatory networks. BMC Res Notes. 2010, 3: 14210.1186/175605003142.
 36.
Friedman J, Hastie T, Tibshirani R: Regularization paths for generalized linear models via coordinate descent. J Stat Softw. 2010, 33: 122.
 37.
Zou H, Trevor H: Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B. 2005, 67: 301320. 10.1111/j.14679868.2005.00503.x.
 38.
Shimamura T, Imoto S, Yamaguchi R, Miyano S: Weighted lasso in graphical Gaussian modeling for large gene network estimation based on microarray data. Genome Inform. 2007, 19: 142153.
 39.
Charbonnier C, Chiquet J, Ambroise C: WeightedLASSO for structured network inference from time course data. Stat Appl Genet Mol Biol. 2010, 9: Article 15
 40.
Gustafsson M, Hornquist M: Gene expression prediction by soft integration and the elastic netbest performance of the DREAM3 gene expression challenge. PLoS One. 2010, 5: e913410.1371/journal.pone.0009134.
 41.
Hecker M, Goertsches RH, Engelmann R, Thiesen HJ, Guthke R: Integrative modeling of transcriptional regulation in response to antirheumatic therapy. BMC Bioinforma. 2009, 10: 26210.1186/1471210510262.
 42.
Hecker M, Goertsches RH, Fatum C, Koczan D, Thiesen HJ, Guthke R, Zettl UK: Network analysis of transcriptional regulation in response to intramuscular interferonbeta1a multiple sclerosis treatment. Pharmacogenomics J. 2010, in press
 43.
James G, Sabatti C, Zhou N, Zhu J: Sparse regulatory networks. Annals of Applied Statistics. 2010, 4: 663686. 10.1214/10AOAS350.
 44.
Lee SI, Dudley AM, Drubin D, Silver PA, Krogan NJ, Pe'er D, Koller D: Learning a prior on regulatory potential from eQTL data. PLoS Genet. 2009, 5: e100035810.1371/journal.pgen.1000358.
 45.
Li F, Yang Y: Recovering genetic regulatory networks from microarray data and location analysis data. Genome Inform. 2004, 15: 131140.
 46.
Pan W, Xie B, Shen X: Incorporating predictor network in penalized regression with application to microarray data. Biometrics. 2010, 66: 474484. 10.1111/j.15410420.2009.01296.x.
 47.
Peng J, Zhu J, Bergamaschi A, Han W, Noh DY, Pollack JR, Wang P: Regularized multivariate regression for identifying master predictors with application to integrative genomics study of breast cancer. Ann Appl Stat. 2010, 4: 5377.
 48.
van Someren EP, Vaes BL, Steegenga WT, Sijbers AM, Dechering KJ, Reinders MJ: Least absolute regression network analysis of the murine osteoblast differentiation network. Bioinformatics. 2006, 22: 477484. 10.1093/bioinformatics/bti816.
 49.
van Someren EP, Wessels LFA, Backer E, Reinders MJT: Multicriterion optimization for genetic network modeling. Signal Process. 2003, 83: 763775. 10.1016/S01651684(02)004735.
 50.
Bansal M, Belcastro V, AmbesiImpiombato A, di Bernardo D: How to infer gene networks from expression profiles. Mol Syst Biol. 2007, 3: 78
 51.
D'Haeseleer P, Wen X, Fuhrman S, Somogyi R: Linear modeling of mRNA expression levels during CNS development and injury. Pac Symp Biocomput. 1999, 4152.
 52.
de Jong H: Modeling and simulation of genetic regulatory systems: a literature review. J Comput Biol. 2002, 9: 67103. 10.1089/10665270252833208.
 53.
Bansal M, Della Gatta G, di Bernardo D: Inference of gene regulatory networks and compound mode of action from time course gene expression profiles. Bioinformatics. 2006, 22: 815822. 10.1093/bioinformatics/btl003.
 54.
Bonneau R, Reiss DJ, Shannon P, Facciotti M, Hood L, Baliga NS, Thorsson V: The Inferelator: an algorithm for learning parsimonious regulatory networks from systemsbiology data sets de novo. Genome Biol. 2006, 7: R3610.1186/gb200675r36.
 55.
di Bernardo D, Thompson MJ, Gardner TS, Chobot SE, Eastwood EL, Wojtovich AP, Elliott SJ, Schaus SE, Collins JJ: Chemogenomic profiling on a genomewide scale using reverseengineered gene networks. Nat Biotechnol. 2005, 23: 377383. 10.1038/nbt1075.
 56.
Gardner TS, di Bernardo D, Lorenz D, Collins JJ: Inferring genetic networks and identifying compound mode of action via expression profiling. Science. 2003, 301: 102105. 10.1126/science.1081900.
 57.
Gregoretti F, Belcastro V, di Bernardo D, Oliva G: A parallel implementation of the network identification by multiple regression (NIR) algorithm to reverseengineer regulatory gene networks. PLoS One. 2010, 5: e1017910.1371/journal.pone.0010179.
 58.
Tegner J, Yeung MK, Hasty J, Collins JJ: Reverse engineering gene networks: integrating genetic perturbations with dynamical modeling. Proc Natl Acad Sci U S A. 2003, 100: 59445949. 10.1073/pnas.0933416100.
 59.
Zhu J, Lum PY, Lamb J, GuhaThakurta D, Edwards SW, Thieringer R, Berger JP, Wu MS, Thompson J, Sachs AB, Schadt EE: An integrative genomics approach to the reconstruction of gene networks in segregating populations. Cytogenet Genome Res. 2004, 105: 363374. 10.1159/000078209.
 60.
Raftery AE: Bayesian model selection in social research (with discussion). Sociol Methodol. 1995, 25: 111193.
 61.
Raftery AE, Madigan D, Hoeting JA: Bayesian model averaging for linear regression models. J Am Stat Assoc. 1997, 92: 179191. 10.1080/01621459.1997.10473615.
 62.
Yeung KY, Bumgarner RE, Raftery AE: Bayesian model averaging: development of an improved multiclass, gene selection and classification tool for microarray data. Bioinformatics. 2005, 21: 23942402. 10.1093/bioinformatics/bti319.
 63.
Tibshirani R: Regression shrinkage and selection via the LASSO. J R Stat Soc Series B Stat Methodol. 1996, 58: 267288.
 64.
Efron B, Hastie T, Johnstone I, Tibshirani R: Least angle regression. Ann Stat. 2004, 32: 407499. 10.1214/009053604000000067.
 65.
Hesterberg T, Choi NH, Meier L, Fraley C: Least angle and L1 penalized regression: a review. Statistics Surveys. 2008, 2: 6192. 10.1214/08SS035.
 66.
Friedman J, Hastie T, Tibshirani R: Regularization paths for generalized linear models via coordinate descent. J Stat Softw. 2010, 33: 122.
 67.
Friedman J, Hastie T, Tibshirani R: glmnet: Lasso and elastic net regularized generalized linear models. R package available at http://cran.r-project.org/web/packages/glmnet/index.html
 68.
Teixeira MC, Monteiro P, Jain P, Tenreiro S, Fernandes AR, Mira NP, Alenquer M, Freitas AT, Oliveira AL, SaCorreia I: The YEASTRACT database: a tool for the analysis of transcription regulatory associations in Saccharomyces cerevisiae. Nucleic Acids Res. 2006, 34: D446D451. 10.1093/nar/gkj013.
 69.
Bryne JC, Valen E, Tang MH, Marstrand T, Winther O, da Piedade I, Krogh A, Lenhard B, Sandelin A: JASPAR, the open access database of transcription factorbinding profiles: new content and tools in the 2008 update. Nucleic Acids Res. 2008, 36: D102D106. 10.1093/nar/gkn449.
 70.
Wasserman WW, Sandelin A: Applied bioinformatics for the identification of regulatory elements. Nat Rev Genet. 2004, 5: 276287. 10.1038/nrg1315.
 71.
Liefooghe A, Touzet H, Varré JS: Large scale matching for Position Weight Matrices. Combinatorial Pattern Matching, Lecture Notes in Computer Science. Springer Verlag. 2006, 4009: 401412.
 72.
Brem RB, Kruglyak L: The landscape of genetic complexity across 5,700 gene expression traits in yeast. Proc Natl Acad Sci U S A. 2005, 102: 15721577. 10.1073/pnas.0408709102.
 73.
Pearl J: Causality: Models, Reasoning, and Inference. 2000, Cambridge University Press
 74.
Shipley B: Cause and Correlation in Biology: A User's Guide to Path Analysis, Structural Equations and Causal Inference. 2002, Cambridge University Press
 75.
Spirtes P, Glymour C, Scheines R: Causation, Prediction and Search. 2000, MIT Press
 76.
Purdom E, Holmes SP: Error distribution for gene expression data. Stat Appl Genet Mol Biol. 2005, 4: Article16
 77.
Babu MM, Lang B, Aravind L: Methods to reconstruct and compare transcriptional regulatory networks. Methods Mol Biol. 2009, 541: 163180. 10.1007/9781597452434_8.
 78.
Ball CA, Awad IA, Demeter J, Gollub J, Hebert JM, HernandezBoussard T, Jin H, Matese JC, Nitzberg M, Wymore F, Zachariah ZK, Brown PO, Sherlock G: The Stanford Microarray Database accommodates additional microarray platforms and data formats. Nucleic Acids Res. 2005, 33: D580D582.
 79.
Barrett T, Troup DB, Wilhite SE, Ledoux P, Rudnev D, Evangelista C, Kim IF, Soboleva A, Tomashevsky M, Edgar R: NCBI GEO: mining tens of millions of expression profiles  database and tools update. Nucleic Acids Res. 2007, 35: D760D765. 10.1093/nar/gkl887.
 80.
Brazma A, Parkinson H, Sarkans U, Shojatalab M, Vilo J, Abeygunawardena N, Holloway E, Kapushesky M, Kemmeren P, Lara GG, Oezcimen A, RoccaSerra P, Sansone SA: ArrayExpress  a public repository for microarray gene expression data at the EBI. Nucleic Acids Res. 2003, 31: 6871. 10.1093/nar/gkg091.
 81.
Irizarry RA, Hobbs B, Collin F, BeazerBarclay YD, Antonellis KJ, Scherf U, Speed TP: Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics. 2003, 4: 249264. 10.1093/biostatistics/4.2.249.
 82.
Hoeting JA, Madigan D, Raftery AE, Volinsky CT: Bayesian model averaging: a tutorial. Stat Sci. 1999, 14: 382401. 10.1214/ss/1009212519.
 83.
Kass RE, Raftery AE: Bayes Factors. J Am Stat Assoc. 1995, 90: 773795. 10.1080/01621459.1995.10476572.
 84.
Furnival GM, Wilson RW: Regression by leaps and bounds. Technometrics. 1974, 16: 499511. 10.1080/00401706.1974.10489231.
 85.
Madigan D, Raftery A: Model selection and accounting for model uncertainty in graphical models using Occam's window. J Am Stat Assoc. 1994, 89: 13351346.
 86.
Schwarz G: Estimating the dimension of a model. Ann Stat. 1978, 6: 461464. 10.1214/aos/1176344136.
 87.
Gasch AP, Spellman PT, Kao CM, CarmelHarel O, Eisen MB, Storz G, Botstein D, Brown PO: Genomic expression programs in the response of yeast cells to environmental changes. Mol Biol Cell. 2000, 11: 42414257.
 88.
Hughes TR, Marton MJ, Jones AR, Roberts CJ, Stoughton R, Armour CD, Bennett HA, Coffey E, Dai H, He YD, Kidd MJ, King AM, Meyer MR, Slade D, Lum PY, Stepaniants SB, Shoemaker DD, Gachotte D, Chakraburtty K, Simon J, Bard M, Friend SH: Functional discovery via a compendium of expression profiles. Cell. 2000, 102: 109126. 10.1016/S00928674(00)000155.
 89.
Harbison CT, Gordon DB, Lee TI, Rinaldi NJ, Macisaac KD, Danford TW, Hannett NM, Tagne JB, Reynolds DB, Yoo J, Jennings EG, Zeitlinger J, Pokholok DK, Kellis M, Rolfe PA, Takusagawa KT, Lander ES, Gifford DK, Fraenkel E, Young RA: Transcriptional regulatory code of a eukaryotic genome. Nature. 2004, 431: 99104. 10.1038/nature02800.
 90.
Cherry JM, Ball C, Weng S, Juvik G, Schmidt R, Adler C, Dunn B, Dwight S, Riles L, Mortimer RK, Botstein D: Genetic and physical maps of Saccharomyces cerevisiae. Nature. 1997, 387: 6773. 10.1038/387067a0.
 91.
Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight SS, Eppig JT, Harris MA, Hill DP, IsselTarver L, Kasarskis A, Lewis S, Matese JC, Richardson JE, Ringwald M, Rubin GM, Sherlock G: Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet. 2000, 25: 2529. 10.1038/75556.
 92.
Zhu J, Zhang MQ: SCPD: a promoter database of the yeast Saccharomyces cerevisiae. Bioinformatics. 1999, 15: 607611. 10.1093/bioinformatics/15.7.607.
 93.
Costanzo MC, Hogan JD, Cusick ME, Davis BP, Fancher AM, Hodges PE, Kondu P, Lengieza C, LewSmith JE, Lingner C, RobergPerez KJ, Tillberg M, Brooks JE, Garrels JI: The yeast proteome database (YPD) and Caenorhabditis elegans proteome database (WormPD): comprehensive resources for the organization and comparison of model organism protein information. Nucleic Acids Res. 2000, 28: 7376. 10.1093/nar/28.1.73.
 94.
Mitchell TJ, Beauchamp JJ: Bayesian variable selection in linear regression. J Am Stat Assoc. 1988, 83: 10231032. 10.1080/01621459.1988.10478694.
 95.
Little RJA: Regression with missing X's: a review. J Am Stat Assoc. 1992, 87: 12271237.
 96.
Rubin DB: Multiple Imputation for Nonresponse in Surveys. 1987, New York: John Wiley
 97.
Graham JW, Olchowski AE, Gilreath TD: How many imputations are really needed? Some practical clarifications of multiple imputation theory. Prev Sci. 2007, 8: 206213. 10.1007/s1112100700709.
 98.
Breslow NE, Day NE, Davis W: Statistical Methods in Cancer Research, Volume I: The Analysis of Case–control Studies. 1980, Lyon: International Agency for Research on Cancer
 99.
Lachin JM: Biostatistical Methods: The Assessment of Relative Risks. 2000, New York, NY: Wiley
 100.
Guelzim N, Bottani S, Bourgine P, Képès F: Topological and causal structure of the yeast transcriptional regulatory network. Nat Genet. 2002, 31: 6063. 10.1038/ng873.
 101.
ShenOrr S, Milo R, Mangan S, Alon U: Network motifs in the transcriptional regulation network of Escherichia coli. Nat Genet. 2002, 31: 6468. 10.1038/ng881.
 102.
Stewart AJ, Seymour RM, Pomiankowski A: Degree dependence in rates of transcription factor evolution explains the unusual structure of transcription networks. Proc R Soc B. 2009, 276: 24932501. 10.1098/rspb.2009.0210.
 103.
Mewes HW, Frishman D, Güldener U, Mannhaupt G, Mayer K, Mokrejs M, Morgenstern B, Münsterkoetter M, Rudd S, Weil B: MIPS: a database for genomes and protein sequences. Nucleic Acids Res. 2002, 30: 31-34. 10.1093/nar/30.1.31.
 104.
Boeckmann B, Bairoch A, Apweiler R, Blatter MC, Estreicher A, Gasteiger E, Martin MJ, Michoud K, O'Donovan C, Phan I, Pilbout S, Schneider M: The SwissProt protein knowledgebase and its supplement TrEMBL in 2003. Nucleic Acids Res. 2003, 31: 365-370. 10.1093/nar/gkg095.
Acknowledgments
We would like to thank Dr. Chris Fraley for her code to generate the precisionrecall curves in Supplementary Figure S4 and Supplementary Table S5, and Dr. John E. Mittler for helpful comments and discussions. In addition, we thank the Western Canada Research Grid (WestGrid) for providing computational resources.
KYY, KL, AER, KMD and REB are supported by NIH grant 5R01GM084163. REB, KL and KYY are also supported by 3R01GM084163-02S2. REB, KMD and KYY were supported by a generous basic research grant from Merck. AER was also supported by NIH grants R01 HD54511 and R01 HD070936.
Competing interests
The authors declare that they have no competing interests.
Keywords
 Systems biology
 Network inference
 Data integration
 Statistics
 Time-series expression data
 Model uncertainty