Inferring gene regulatory networks from single-cell data: a mechanistic approach

Background The recent development of single-cell transcriptomics has enabled gene expression to be measured in individual cells instead of being population-averaged. Despite this considerable precision improvement, inferring regulatory networks remains challenging because stochasticity now proves to play a fundamental role in gene expression. In particular, mRNA synthesis is now acknowledged to occur in a highly bursty manner. Results We propose to view the inference problem as a fitting procedure for a mechanistic gene network model that is inherently stochastic and takes not only protein, but also mRNA levels into account. We first explain how to build and simulate this network model based upon the coupling of genes that are described as piecewise-deterministic Markov processes. Our model is modular and can be used to implement various biochemical hypotheses including causal interactions between genes. However, a naive fitting procedure would be intractable. By performing a relevant approximation of the stationary distribution, we derive a tractable procedure that corresponds to a statistical hidden Markov model with interpretable parameters. This approximation turns out to be extremely close to the theoretical distribution in the case of a simple toggle-switch, and we show that it can indeed fit real single-cell data. As a first step toward inference, our approach was applied to a number of simple two-gene networks simulated in silico from the mechanistic model and satisfactorily recovered the original networks. Conclusions Our results demonstrate that functional interactions between genes can be inferred from the distribution of a mechanistic, dynamical stochastic model that is able to describe gene expression in individual cells. This approach seems promising in relation to the current explosion of single-cell expression data. 
Electronic supplementary material The online version of this article (doi:10.1186/s12918-017-0487-0) contains supplementary material, which is available to authorized users.


Introduction
Inferring regulatory networks from gene expression data is a longstanding question in systems biology [1], with an active community developing many possible solutions. So far, almost all studies have been based on population-averaged data, which historically used to be the only possible way to observe gene expression. Technologies now allow us to measure mRNA levels in individual cells [2], a revolution in terms of precision. However, the network reconstruction task paradoxically remains more challenging than ever.
The main reason is that the variability in gene expression is far from being a trivial, limited perturbation around the population mean. It is now clear that this variability can have functional significance [3][4][5] and should therefore not be ignored when dealing with gene network inference. In particular, as the mean is not sufficient to account for a population of cells, a deterministic model - e.g. ordinary differential equation (ODE) systems, often used in inference [6,7] - is unlikely to faithfully inform about an underlying gene regulatory network. Whether such a deterministic approach could still be a valid approximation is a difficult question that may require some biological insight into the system under consideration [8].
Another key aspect when considering individual cells is that they generally have to be killed for measurements: from a statistical point of view, temporal single-cell data should therefore not be seen as a set of time series, but rather as snapshots, i.e. independent samples from a time series of distributions.
On the other hand, single-cell data give the opportunity of moving one step further toward a more accurate physical description of gene expression. Molecular processes of gene expression are overall now well understood, in particular transcription, but precisely how stochasticity emerges is still somewhat of a conundrum. Harnessing variability in single-cell data is expected to allow for the identification of critical parameters and also to provide hints about the basic molecular processes involved [9,10]. Moreover, the variability arising from perturbations in cell populations is often crucial for network reconstruction to succeed [11,12] as the deterministic inference problem suffers from intrinsic limitations [13]. From this point of view, the same information is expected to be contained in the variability between cells in single-cell data. Some of the few existing single-cell inference methods follow this path, for example using asynchronous Boolean network models [14] or generating pseudo time series [7,15]. In this article, we use a mechanistic approach in the sense that every part of our model has an explicit physical interpretation. Importantly, mRNA observations are not used as a proxy for proteins since both are explicitly modeled.
Besides, mechanistic models that are accurate enough to describe gene expression at the single-cell level usually do not consider interactions between genes. For example, the so-called two-state (aka random telegraph) model has been successfully used with single-cell mRNA-seq data [16], but the joint distribution of a set of genes contains much more information than the marginal kinetics of individual genes: our aim is to exploit this information while keeping the mechanistic point of view. Namely, our main idea is to obtain a statistical model whose source of noise is biologically justified and contains interpretable correlations. The goal here is not to derive theoretical performance guarantees, as in [17]. Instead, we propose to view statistical inference as a fitting procedure for a mechanistic gene network model that is modular enough to implement many biological hypotheses while keeping them explicit. Such a framework makes it possible to test these hypotheses by going back and forth between simulations and experiments. We show that a tractable statistical model for network inference from single-cell data can be derived through successive relevant approximations. Finally, we demonstrate that our approach is capable of extracting enough information out of noisy single-cell data to correctly infer the structures of various two-gene networks.

Methods
In this part, we aim at deriving a tractable statistical model from a mechanistic one. We will use the two-state model of gene expression to build a network of two-state models by making the promoter switching rates depend on protein levels. Then, successive relevant simplifications will lead to an explicit approximation of the statistical likelihood.

Basic block: stochastic expression of a single gene
Our starting point is the well-known two-state model of gene expression [18][19][20][21], a refinement of the model introduced by [22] from pioneering single-cell experiments [23]. In this model, a gene is described by its promoter, which can be either active (on) or inactive (off) - possibly representing a transcription complex being bound or unbound, though the reality may be more complicated [24] - with mRNA being transcribed only during the active periods. Translation is added in a standard way, each mRNA molecule producing proteins at a constant rate. The resulting model (Fig 1) can be entirely defined by the set of chemical reactions detailed in Table 1, where chemical species G, G*, M and P respectively denote the inactive promoter, the active promoter, the amount of mRNA and proteins. The mathematical framework generally assumes stochastic mass-action kinetics [25] for all reactions, since they typically involve few molecules compared to Avogadro's number. In this fully discrete setting, one can use the master equation to compute stationary distributions: for mRNA the exact distribution is a Beta-Poisson mixture [26], and an approximation is available for proteins when they degrade much more slowly than mRNA [27]. In practice, the formulas involve hypergeometric series that are not straightforward to use in a statistical inference framework. Besides, these series essentially arise from the fact that such a discrete model has to enumerate all potential collisions between molecules (the stochastic mass-action assumption in the master equation). It is therefore natural to keep only the most important source of noise, that is, to keep a molecular representation for rare species while describing abundant species at a higher level where molecular noise averages out to continuous quantities. A quick look at the reactions in Table 1 suggests keeping the promoter as the only discrete species, while describing mRNA and protein quantities as continuous variables.

Table 1: Chemical reactions defining the two-state model, with the rate constant and interpretation of each reaction.
Keeping only the promoter as a discrete species leads to the following hybrid model, where E(t) ∈ {0, 1} denotes the promoter state, switching from 0 to 1 with rate k_on and from 1 to 0 with rate k_off:

dM(t)/dt = s_0 E(t) − d_0 M(t)
dP(t)/dt = s_1 M(t) − d_1 P(t)     (1)

This system simply switches between two ordinary differential equations, depending on the value of the two-state continuous-time Markov process E(t), making it a piecewise-deterministic Markov process (PDMP) [29]. From a mathematical perspective, model (1) rigorously approximates the original molecular model when s_0/d_0 and s_1/d_1 are large enough [30,31] and, interestingly, it has already been implicitly considered in the biological literature [20,21]. Note also that the stationary distribution of mRNA is a scaled Beta distribution, exactly that of the Beta-Poisson mixture in the discrete model [26]. Similarly to a recent approach for a two-gene toggle switch [32], we will use (1) as a basic building block for gene networks.
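The scaled Beta form mentioned here can be written explicitly. The following is a standard statement of this result in our own notation, with the scale s_0/d_0 being the maximal mRNA level of model (1) and the parameters matching the identification a = k_on/d_0, b = k_off/d_0 used later in the article:

```latex
% Stationary mRNA distribution of the two-state PDMP model:
% the normalized level follows a Beta distribution whose parameters
% are the promoter switching rates divided by the mRNA degradation rate.
\frac{M}{s_0 / d_0} \;\sim\; \mathrm{Beta}\!\left( \frac{k_{\mathrm{on}}}{d_0},\; \frac{k_{\mathrm{off}}}{d_0} \right)
```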
When both k_on ≪ k_off and d_0 ≪ k_off, mRNA is transcribed in bursts, i.e. during short periods which keep the mRNA quantity far from saturation. Hence, the amount transcribed within each burst is approximately proportional to the burst duration, whose mean is 1/k_off by definition: this justifies the quantity s_0/k_off often being called burst size or burst amplitude. Furthermore, promoter active periods are much shorter than inactive ones, so they can be seen as instantaneous, justifying the name burst frequency for k_on, the inverse of the mean inactive time. We place ourselves in this situation as it often occurs in experiments [20,21,[33][34][35]. Note however that these two notions are not clearly defined when the relations k_on ≪ k_off and d_0 ≪ k_off do not hold.

Adding interactions between genes: the network model
Now considering a given set of n genes, a natural way of building a network is to assume that each gene i produces specific mRNA M_i and protein P_i, and to define a version of model (1) with its own parameters:

dM_i(t)/dt = s_0,i E_i(t) − d_0,i M_i(t)
dP_i(t)/dt = s_1,i M_i(t) − d_1,i P_i(t)     (2)

where E_i(t) switches from 0 to 1 with rate k_on,i and from 1 to 0 with rate k_off,i. Still, genes have static parameters and do not interact with each other. To get an actual network, we need to go one step further: reactions G_i → G_i* and G_i* → G_i are not assumed to be elementary anymore, but rather represent complex reactions involving proteins, so that promoter parameters k_on,i and k_off,i now depend on proteins (Fig 2a), and a fortiori on time. Our network model will correspond to the explicit definition, for each gene i, of functions k_on,i(P_1, ..., P_n) and k_off,i(P_1, ..., P_n). These functions shall also depend on network-specific parameters quantifying the interactions, thus making the link between fitting a chemical model and inferring a network. As a toy example, consider replacing G_i → G_i* with two parallel elementary reactions, for which applying the law of mass action directly gives

k_on,i(P_1, ..., P_n) = θ_i,0 + θ_i,j P_j     (3)

In a regulatory network (Fig 2b), this would correspond to adding a directed edge from gene j to gene i, with θ_i,0 the basal parameter of gene i and θ_i,j the strength of activation of gene i by protein P_j. We emphasize that the action of P_j on the promoter G_i is not necessarily direct. For example, P_j can instead indirectly modulate the amount or activity of a transcription factor: we suppose in this article that such hidden reactions are fast enough with respect to gene expression dynamics for protein P_j to be a relevant proxy for the transcription factor. Moreover, although we assume here that interactions can only happen at the level of k_on,i and k_off,i, mainly for identifiability purposes, it is also possible to make d_1,i and s_1,i depend on proteins without fundamentally changing the mathematical approach (see for example [36,37]).
In order to simplify notations, we normalize model (2) into a dimensionless equivalent model:

dM_i(t)/dt = d_0,i (E_i(t) − M_i(t))
dP_i(t)/dt = d_1,i (M_i(t) − P_i(t))     (4)

still omitting the dependence of k_on,i and k_off,i on (P_1(t), ..., P_n(t)) for clarity. This form highlights the fact that s_0,i and s_1,i are just scaling constants: given a path (E_i, M_i, P_i)_i of system (4), one can go back to the physical path by simply multiplying M_i by (s_0,i/d_0,i) and P_i by (s_0,i/d_0,i) × (s_1,i/d_1,i). Taking this change of scale into account in the definition of k_on,i and k_off,i then provides full equivalence between models (2) and (4).
Therefore, we get a general network model where each link between two genes is directed and has an explicit biochemical interpretation in terms of molecular interactions. The previous example is very simplistic, but one can use virtually any model of chromatin dynamics to derive a form for k_on,i and k_off,i, involving hit-and-run reactions, sequential binding, etc. [38]. Such aspects are still far from being completely understood [39][40][41][42] and this simple network model can hopefully be used to assess biological hypotheses. In the next part, we will introduce a more sophisticated interaction form based on an underlying probabilistic model, which is both statistics-friendly and interpretable as a non-equilibrium steady state of the chromatin environment [40].

Some known mathematical results
Thanks to some recent theoretical results [37,43], simple sufficient conditions on k_on,i and k_off,i ensure that the PDMP network model (4) is actually well-defined and that the overall joint distribution of (E_i, M_i, P_i)_i converges as t → +∞ to a unique stationary distribution, which will be the basis of our statistical approach. Namely, we assume in this article that k_on,i and k_off,i are smooth functions of (P_1, ..., P_n) with bounded derivatives, and that they are greater than some positive constants (see [37] for details). These two conditions are satisfied in most interesting cases, including the above toy example (3) when θ_i,0 > 0. It is important to note that by construction, unlike in the fully discrete setting, each P_i is bounded by (s_0,i/d_0,i) × (s_1,i/d_1,i) in model (2) - or by 1 in the normalized version (4) - so the derivatives of k_on,i and k_off,i only have to be bounded within the corresponding range. Contrary to the creation rates s_0,i and s_1,i, the degradation rates d_0,i and d_1,i play a crucial role in the dynamics of the system. Intuitively, the ratios (k_on,i + k_off,i)/d_0,i and d_0,i/d_1,i respectively control the buffering of promoter noise by mRNA and the buffering of mRNA noise by proteins. A common situation is when promoter and mRNA dynamics are fast compared to proteins, i.e. when d_0,i ≫ d_1,i with (k_on,i + k_off,i)/d_0,i fixed. At the limit, the promoter-mRNA noise is fully averaged by proteins and model (4) simplifies into a deterministic system [44]:

dP_i(t)/dt = d_1,i ( k_on,i(P(t)) / (k_on,i(P(t)) + k_off,i(P(t))) − P_i(t) )     (5)

where P(t) = (P_1(t), ..., P_n(t)). The diffusion limit, which keeps a residual noise, can also be rigorously derived [45]. Unsurprisingly, one recovers the traditional way of modelling gene regulatory networks with Hill-type interaction functions. Equation (5) is useful to get an insight into the behaviour of system (4) for given k_on,i and k_off,i, yet it should be used with caution. Indeed, the d_0,i/d_1,i ratio has been shown to span a wide range, with an average around d_0,i/d_1,i ≈ 5 in mammalian cells [28], for which taking the limit d_0,i ≫ d_1,i is not obvious. This is consistent with recent single-cell experiments showing a high variability of both mRNA and protein levels between cells [34]. In that sense, the PDMP model is much more robust than its deterministic/diffusion counterpart while keeping a similar level of mathematical complexity, which motivates our approach.

Simulation
We propose a simple algorithm to compute sample paths of our stochastic network model (4). It is a hybrid version of a basic ODE solver, making it efficient enough to perform massive simulations on large-scale networks involving arbitrary numbers of molecules, which would be intractable with a classic molecule-based model (Fig 3). The deterministic part of the algorithm is a standard explicit Euler scheme, and the stochastic part makes use of the promoter distribution at all times, which is explicit for single genes [46]. Indeed, during a small enough time interval, proteins remain almost constant, so genes behave as if k_on,i and k_off,i were constant. This part uses Bernoulli steps, in the same way as a diffusion is simulated using Gaussian steps.
After discretizing time with step δt, the numerical scheme is as follows. Starting from an initial state (E_i^0, M_i^0, P_i^0)_i, the update of the system from t to t + δt is given by:

E_i^{t+δt} ~ Bernoulli(π_i^t)
M_i^{t+δt} = M_i^t + δt d_0,i (E_i^t − M_i^t)
P_i^{t+δt} = P_i^t + δt d_1,i (M_i^t − P_i^t)     (6)

where the Bernoulli distribution parameter π_i^t is defined by

π_i^t = a_i^t/(a_i^t + b_i^t) + (E_i^t − a_i^t/(a_i^t + b_i^t)) exp(−(a_i^t + b_i^t) δt)

with a_i^t = k_on,i(P_1^t, ..., P_n^t) and b_i^t = k_off,i(P_1^t, ..., P_n^t). Intuitively, the algorithm is valid when δt ≪ 1/max_i{K_on,i, K_off,i, d_0,i, d_1,i}, where K_on,i and K_off,i denote the maximum values of the functions k_on,i and k_off,i.
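As an illustration, this scheme can be implemented in a few lines. The following is a minimal sketch under our own naming conventions, written for the normalized model (4); the two-gene toggle-switch parameters at the bottom are purely hypothetical:

```python
import numpy as np

def simulate_pdmp(kon, koff, d0, d1, T=100.0, dt=0.01, rng=None):
    """Hybrid Euler/Bernoulli simulation of the normalized network model.

    kon, koff: functions mapping the protein vector P to per-gene
    switching rates; d0, d1: degradation rate arrays of length n.
    """
    rng = np.random.default_rng(rng)
    n = len(d0)
    steps = int(T / dt)
    E = np.zeros(n)              # promoter states in {0, 1}
    M = np.zeros(n)              # normalized mRNA levels in [0, 1]
    P = np.zeros(n)              # normalized protein levels in [0, 1]
    traj = np.empty((steps, n))
    for t in range(steps):
        a, b = kon(P), koff(P)
        # Exact two-state promoter distribution over [t, t + dt],
        # with switching rates frozen at their current values:
        pi = a / (a + b) + (E - a / (a + b)) * np.exp(-(a + b) * dt)
        # Explicit Euler step for the deterministic part (old E):
        M += dt * d0 * (E - M)
        P += dt * d1 * (M - P)
        E = rng.binomial(1, pi).astype(float)   # Bernoulli step
        traj[t] = P
    return traj

# Hypothetical two-gene toggle switch (mutual inhibition):
d0 = np.array([5.0, 5.0])
d1 = np.array([1.0, 1.0])
kon = lambda P: 1.0 / (1.0 + 4.0 * P[::-1])   # each gene represses the other
koff = lambda P: np.full(2, 10.0)
traj = simulate_pdmp(kon, koff, d0, d1, T=50.0, dt=0.005, rng=0)
```

Note that δt must satisfy the validity condition above; here dt = 0.005 is well below 1/k_off = 0.1.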

Deriving a tractable statistical model
We will now adopt a statistical perspective in order to deal with gene network inference, considering a set of observed cells. If they are evolving in the same environment for a long enough time, we can reasonably assume that their mRNA and protein levels follow the stationary distribution of an underlying gene network: this distribution can be used as a statistical likelihood for the cells. Furthermore assuming no cell-cell interactions (which may of course depend on the experimental context), we obtain a standard statistical problem with independent samples. Since the stationary distribution of the stochastic network model (4) is well-defined but a priori not analytically tractable, we will derive an explicit approximation and then reduce our inference problem to a traditional likelihood-based estimation. We will do so in two cases: when there is no self-interaction, and for a specific form of auto-activation.

Separating mRNA and protein timescales
It is for the moment very rare to experimentally obtain the amount of proteins for many genes at the single-cell level. We will therefore assume here that only mRNAs are observed. To deal with this problem, we take the protein timescale as our reference by fixing d_1,i, and assume that promoter dynamics are faster than protein dynamics, i.e. (k_on,i + k_off,i) ≫ d_1,i in a biologically relevant way, say (k_on,i + k_off,i)/d_1,i > 10 (thus the deterministic limit (5) does not necessarily hold). Then we consider two distinct cases for the ratio d_0,i/d_1,i of a gene i.
1. When d_0,i is sufficiently larger than d_1,i so that there is very little correlation between mRNAs and proteins produced by the gene, model (4) can be reduced by removing mRNA and making proteins directly depend on the promoters (see Appendix A). The result is

dP_i(t)/dt = d_1,i (E_i(t) − P_i(t))     (7)

where E_i(t) still switches with rates k_on,i and k_off,i, and which still admits the deterministic limit (5). Since mRNA dynamics are faster than protein dynamics, one can also assume that, given protein levels P = (P_1, ..., P_n), each mRNA level M_i follows the quasi-steady state distribution

M_i | P ~ Beta( k_on,i(P)/d_0,i , k_off,i(P)/d_0,i )     (8)

which corresponds to the single-gene model [26,36] with constant parameters k_on,i(P) and k_off,i(P). Numerically, this approximation works well even for moderate values of d_0,i, such as d_0,i = 5 × d_1,i (see the Results section).
2. When d_0,i ≈ d_1,i (particularly stable mRNAs), M_i becomes strongly correlated with P_i, so one may not use equation (8); instead it is possible to use M_i as a proxy for P_i, which in this case roughly follows the same dynamics as equation (7), replacing d_1,i with d_0,i.
Here we shall assume the first case for all genes, in line with several recent experiments [34,47]. Biologically, equations (7) and (8) suggest that correlations between mRNA levels may not directly arise from correlations between promoter states (which are in fact weak because (k_on,i + k_off,i) ≫ d_1,i), but rather originate from correlations between the promoter parameters k_on,i and k_off,i, which themselves depend on the joint protein distribution.
Table 2 sums up the successive modelling steps introduced so far. From now on, we will always assume the form (8) for the mRNA distribution, and thus our model is reduced to equation (7), which only involves proteins.

Table 2: Successive dynamical models introduced in this article, recalling for each step the main feature and the form of the mRNA stationary distribution (e.g. step 4, the simplified network (7): mRNA is removed from the network and its conditional distribution is the Beta form (8)). The full network model (step 3) is used for simulations, while the simplified one (step 4) is used to derive the approximate statistical likelihood.

Hartree approximation
In this section, we present the principle of the Hartree approximation and provide an explicit formula in the particular case of no self-interaction. The simplified model (7) is still not analytically tractable, but it is now appropriate for the self-consistent proteomic field approximation introduced in [48,49] and successfully applied in [50,51]. More precisely, we will use its natural PDMP counterpart, which will be referred to as the Hartree approximation since the main idea is similar to the Hartree approximation in physics [48]. It consists in assuming that genes behave as if they were independent from each other, but submitted to a common proteomic field created by all the other genes. In other words, we transform the original problem of dimension 2n into n independent problems of dimension 2, which are much easier to solve (see Appendix B for details).
When k_on,i and k_off,i do not depend on P_i (i.e. no self-interaction), this approach results in approximating the protein stationary distribution of model (7) by the function

u(y) = ∏_{i=1}^{n} y_i^{a_i(y)−1} (1 − y_i)^{b_i(y)−1} / B(a_i(y), b_i(y))     (9)

where y = (y_1, ..., y_n) denotes the protein levels, a_i(y) = k_on,i(y)/d_1,i, b_i(y) = k_off,i(y)/d_1,i, and B is the standard Beta function. Note that promoter states have been integrated out since they are not required by equation (8).
The function u is a heuristic approximation of a probability density function. It is only valid when interactions are not too strong, that is, when k_on,i and k_off,i are close enough to constants, and it becomes exact when they are true constants. Besides, it does not integrate to 1 in general. However, this approximation turns out to be very robust in practice and has the great advantage of being fully explicit (and significantly simpler than in the non-PDMP case), thus providing a promising basis for a statistical model.
When k_on,i and k_off,i depend on P_i, one can still explicitly compute the Hartree approximation in many cases: we will give an example in the next section. Alternatively, it is always possible to use formula (9) even with self-interactions, giving a correct approximation when the feedback is not too strong, as for the other proteins.

An explicit form for interactions
We now propose an explicit definition of the functions k_on,i and k_off,i. Recent work [33,52,53] showed that apparent increases in transcription actually reflect an increase in burst frequency rather than amplitude. We therefore decided to model only k_on,i as an actual function and to keep k_off,i constant. In this view, the activation frequency of a gene can be influenced by ambient proteins, whereas the active periods have a random duration that is dictated only by an intrinsic stability constant of the transcription machinery.
Our approach uses a description of the molecular activity around the promoter in a very similar way as Coulon et al. [39]. Accordingly, we make a quasi-steady state assumption to obtain k_on,i. This idea, based on thermodynamics, was also used in the DREAM3 in silico challenge [54] to simulate gene networks. However, only the mean transcription rate was described there (instead of promoter activity in our work), which is inappropriate to model bursty mRNA dynamics at the single-cell level.
We herein derive k_on,i from an underlying stochastic model of chromatin dynamics. We first introduce a set of abstract chromatin states, each state being associated with one of two possible rates of promoter activation, either a low rate k_0,i or a high rate k_1,i ≫ k_0,i. More specifically, such chromatin states may be envisioned as a coarse-grained description of the chromatin-associated parameters that are critical for transcription of gene i. Second, we assume a separation of timescales between the abstract chromatin model and the promoter activity, so that the promoter activation reaction depends only on the quasi-steady state of chromatin. In other words, the effective k_on,i is a combination of k_0,i and k_1,i which integrates all the chromatin states: its value depends on the probability of each state and a fortiori on the transitions between them. We propose a transition scheme which leads to an explicit form for k_on,i, based on the idea that proteins can alter chromatin by hit-and-run reactions and potentially introduce a memory component. Some proteins thereby tend to stabilize chromatin either in a permissive configuration (with rate k_1,i) or in a non-permissive configuration (with rate k_0,i), providing notions of activation and inhibition. A more precise definition and details of the derivation are provided in Appendix C.
The final form is the following. First, we define a function Φ_i of every protein but P_i, which may represent the external input of gene i:

Φ_i(y) = exp(θ_i,i) ∏_{j≠i} (1 + exp(θ_i,j) (y_j/s_i,j)^{m_i,j}) / (1 + (y_j/s_i,j)^{m_i,j})

Then, k_on,i is defined by

k_on,i(y) = ( k_0,i + k_1,i Φ_i(y) (y_i/s_i,i)^{m_i,i} ) / ( 1 + Φ_i(y) (y_i/s_i,i)^{m_i,i} )     (10)

Hence, when the input Φ_i(y) is fixed, k_on,i is a standard Hill function describing how gene i is self-activating, depending on the Hill coefficient m_i,i (Fig 4). The neutral value is set to Φ_i(y) = 1, so that for this particular value, s_i,i is the usual dissociation constant. Moreover, if θ_i,j = 0 for all j ≠ i, then Φ_i becomes the constant function Φ_i(y) = exp(θ_i,i), and thus θ_i,i may be seen as a basal parameter, summing up all potential hidden inputs. On the contrary, if some θ_i,j > 0 (resp. θ_i,j < 0), then Φ_i itself becomes an increasing (resp. decreasing) Hill-type function of protein P_j, where m_i,j and s_i,j again play their usual roles.

Figure 4: Different auto-activation types in the network model. Each color corresponds to a fixed value of Φ_i in formula (10), ranging from 10^-2 to 10^2, and each curve represents k_on,i as a function of y_i for m_i,i = 0 (no feedback), m_i,i = 1 (monomer-type feedback) and m_i,i = 2 (dimer-type feedback). The neutral value Φ_i = 1 is represented by a dashed gray line. Here k_0,i = 0.01, k_1,i = 2 and s_i,i = 0.1.
The n × n matrix θ = (θ_i,j) therefore plays the same role as the interaction matrix in traditional network inference frameworks [6]. For i ≠ j, θ_i,j quantifies the regulation of gene i by gene j (activation if θ_i,j > 0, inhibition if θ_i,j < 0, no influence if θ_i,j = 0), and the diagonal term θ_i,i aggregates the basal input and the self-activation strength of gene i. Note that self-inhibition could be considered instead, but the choice has to be made before the inference since the self-interaction form is notoriously difficult to identify, especially in the stationary regime. In the remainder of this article, we assume that the parameters k_0,i, k_1,i, m_i,j and s_i,j are known and we focus on inferring the matrix θ.
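To make the description concrete, here is a small Python sketch of one interaction form consistent with the text: Φ_i is a product of Hill-type factors controlled by θ, and k_on,i interpolates between k_0,i and k_1,i. The exact formula and its derivation are in Appendix C; the function names and parameterization below are our own reading, not the authors' code.

```python
import numpy as np

def phi(y, theta, s, m, i):
    """External input of gene i (hypothetical reconstruction):
    constant exp(theta[i, i]) modulated by Hill factors of the
    other proteins y_j, with thresholds s[i, j] and exponents m[i, j]."""
    out = np.exp(theta[i, i])
    for j in range(len(y)):
        if j != i:
            h = (y[j] / s[i, j]) ** m[i, j]
            out *= (1.0 + np.exp(theta[i, j]) * h) / (1.0 + h)
    return out

def kon(y, theta, s, m, k0, k1, i):
    """Burst frequency of gene i: Hill function of y_i whose weight
    is modulated by the external input phi (sketch of form (10))."""
    w = phi(y, theta, s, m, i) * (y[i] / s[i, i]) ** m[i, i]
    return (k0[i] + k1[i] * w) / (1.0 + w)
```

By construction k_on,i always stays between k_0,i and k_1,i, and setting every θ_i,j with j ≠ i to zero reduces Φ_i to the constant exp(θ_i,i), as described in the text.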
A benefit of the interaction form (10) is to allow for a fully explicit Hartree approximation of the protein distribution (see Appendix C). In particular, if m_i,i > 0 and c_i = (k_1,i − k_0,i)/(m_i,i d_1,i) is a positive integer for all i, the approximation is given by

v(y) = ∏_{i=1}^{n} Σ_{r=0}^{c_i} p_i,r(y) y_i^{a_i,r/d_1,i − 1} (1 − y_i)^{k_off,i/d_1,i − 1} / B(a_i,r/d_1,i, k_off,i/d_1,i)     (11)

with a_i,r = k_0,i + r m_i,i d_1,i and weights p_i,r(y) detailed in Appendix C. In other words, the Hartree approximation (11) is a product of gene-specific distributions which are themselves mixtures of Beta distributions: for gene i, the a_i,r correspond to frequency modes ranging from k_0,i to k_1,i, weighted by the probabilities p_i,r(y). It is straightforward to check that inhibitors tend to select the low burst frequencies of their target (a_i,r ≈ k_0,i) while activators select the high frequencies (a_i,r ≈ k_1,i). If m_i,i = 0 for some i, then k_on,i does not depend on P_i, so one just has to replace the i-th term in the product (11) with the single Beta form as in equation (9), which is equivalent to taking the limit c_i → +∞. Finally, when m_i,i > 0 but c_i is not an integer, using an integer approximation of c_i instead keeps a satisfying accuracy.

The statistical model in practice
Our statistical framework simply consists in combining the timescale separation (8) and the Hartree approximation (11) into a standard hidden Markov model. Indeed, conditionally on the proteins, mRNAs are independent and follow well-defined Beta distributions:

M_i | P ~ Beta( k_on,i(P)/d_0,i , k_off,i/d_0,i )     (12)

where k_on,i is given by (10). Then one can use (11) to approximate the joint distribution of proteins. Hence, recalling the unknown interaction matrix θ, the inference problem for m cells with respective levels (M^k, P^k), 1 ≤ k ≤ m, is based on the (approximate) complete log-likelihood:

ℓ(θ; M^1, ..., M^m, P^1, ..., P^m) = Σ_{k=1}^{m} [ Σ_{i=1}^{n} log f_i(M_i^k | P^k, θ) + log v(P^k; θ) ]     (13)

where f_i denotes the conditional Beta density (12) and where we used conditional factorization and independence of the cells.
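As an illustration of how such a complete log-likelihood can be evaluated in practice, here is a minimal sketch using our own naming; for simplicity the protein term uses the no-self-interaction Hartree density (9) instead of the full mixture (11):

```python
import numpy as np
from math import lgamma, log

def beta_logpdf(x, a, b):
    """Log-density of the Beta(a, b) distribution at x in (0, 1)."""
    return (a - 1) * log(x) + (b - 1) * log(1 - x) \
        + lgamma(a + b) - lgamma(a) - lgamma(b)

def complete_loglik(M, P, kon, koff, d0, d1):
    """Approximate complete log-likelihood for m cells and n genes.

    M, P: (m, n) arrays of normalized mRNA and protein levels in (0, 1).
    kon, koff: functions mapping a protein vector to per-gene rates.
    Sketch: mRNA term from the conditional Beta form, protein term
    from the Hartree density (9) (valid without self-interaction).
    """
    total = 0.0
    for Mk, Pk in zip(M, P):
        a, b = kon(Pk), koff(Pk)
        for i in range(len(d0)):
            total += beta_logpdf(Mk[i], a[i] / d0[i], b[i] / d0[i])
            total += beta_logpdf(Pk[i], a[i] / d1[i], b[i] / d1[i])
    return total
```

With constant k_on,i and k_off,i the protein term is exact, so this reduces to a product of independent Beta models.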
The basic statistical inference problem would be to maximize the marginal likelihood of mRNA with respect to θ.Since this likelihood has no simple form, a typical way to perform inference is to use an Expectation-Maximization (EM) algorithm on the complete likelihood (13).However, the algorithm may be slow in practice because of the computation of expectations over proteins.
A faster procedure consists in simplifying these expectations using the distribution modes: the resulting algorithm is often called hard EM or classification EM and is used in the Results section. Moreover, it is possible to encode potential knowledge or constraints on the network by introducing a prior distribution w(θ). In this case, by Bayes' rule, one can perform maximum a posteriori (MAP) estimation of θ by using the same EM algorithm but adding the penalization term log(w(θ)) to the complete log-likelihood (13) during the Maximization step (see Appendix D and the Results section). Alternatively, a full Bayesian approach, i.e. sampling from the posterior distribution of θ conditionally on (M^1, ..., M^m), may also be considered using standard MCMC methods.
Taking advantage of the latent structure of proteins, one can also deal with missing data in a natural way: if the mRNA measurement of gene i is invalid in a cell k because of technical problems, it is possible to ignore it by removing the i-th term from the conditional distribution of mRNAs (12). This only modifies the definition of v for cell k in equation (13), ensuring that all valid data is effectively used for each cell.

Results
In this part, we first compare the distribution of the mechanistic model (4) to the mRNA quasi-steady state combined with the Hartree approximation for proteins, on a simple toggle-switch example. Then, we show that the single-gene model with auto-activation can fit marginal mRNA distributions from real data better than the constant-k_on model. Finally, we successfully apply the inference procedure to various two-gene networks simulated from the mechanistic model.

Relevance of the approximate likelihood
Starting from the normalized mechanistic model (4), two approximations were used to derive the final statistical likelihood (13): the quasi-steady state assumption for mRNAs given protein levels, and the Hartree approximation for the joint distribution of proteins. Crucially, this approximate likelihood has to be close enough to the exact one in order to preserve the equivalence between inferring a network and fitting the mechanistic model. To get an idea of the accuracy, we considered a basic two-gene toggle switch defined by k_on,i following equation (10), with an interaction matrix θ encoding mutual inhibition between the two genes.

Figure 5: Sample path of a two-gene toggle switch, with gene 1 in red and gene 2 in green.While always staying in a bursty regime regarding mRNAs, genes can switch between high and low frequency modes (here at t ≈ 50h).From this example, it is clear that the overall joint distribution can contain correlations even if the bursts themselves are not coordinated.

Fitting marginal mRNA distributions from real data
A particularity of single-cell data is to often exhibit bursty regimes for mRNA (meaning k_on ≪ k_off and d_0 ≪ k_off) and potentially also for proteins (adding d_1 ≪ k_off), which are well fitted by Gamma distributions [34]. At this stage, it is worth mentioning that the Gamma distribution can be seen as a limit case of the Beta distribution. Intuitively, when b ≫ 1 and b ≫ a (typically a = k_on/d_0 and b = k_off/d_0), most of the mass of the distribution Beta(a, b) is located at x ≪ 1, so we have the first-order approximation (1 − x)^{b−1} ≈ e^{−bx} and thus Beta(a, b) ≈ γ(a, b). This way, formulas (11) and (12) can easily be transformed into Gamma-based distributions. Parameters s_0 and k_off then aggregate into k_off/s_0 because of the scaling property of the Gamma distribution, so only this ratio has to be inferred: from an applied perspective, it simply represents a scale parameter for each gene.
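This Gamma limit is easy to check numerically. The sketch below uses assumed bursty-regime values for a and b (e.g. a = k_on/d_0 and b = k_off/d_0, not taken from the paper) and compares the Beta(a, b) density with a Gamma density of shape a and rate b:

```python
import math

# Illustrative check: for b >> 1 and b >> a, Beta(a, b) is close to a
# Gamma distribution with shape a and rate b (assumed example values).
def beta_pdf(x, a, b):
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_B)

def gamma_pdf(x, a, rate):
    return math.exp(a * math.log(rate) + (a - 1) * math.log(x)
                    - rate * x - math.lgamma(a))

a, b = 2.0, 200.0  # e.g. a = k_on/d_0, b = k_off/d_0 in a bursty regime
xs = [i / 5000 for i in range(1, 500)]  # grid on (0, 0.1), where the mass lies
rel_err = (max(abs(beta_pdf(x, a, b) - gamma_pdf(x, a, b)) for x in xs)
           / max(beta_pdf(x, a, b) for x in xs))
print(rel_err)  # at most a few percent in this regime
```

The relative gap between the two densities shrinks further as b grows with a fixed, consistent with the first-order approximation above.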
Our network model is able to generate multiple modes while keeping such bursty regimes (Fig 5), which is explicit in the stationary distribution (11). Interestingly, this feature has already been considered in the literature by empirically introducing mixture distributions [55,56]. As a first step toward applications, we compared our likelihood model in the simplest case (single genes with auto-activation) to single-cell mRNA measurements from [35]. For two genes, LDHA and HMGCS1, both the model and a simple Gamma distribution were fitted (Fig 7). Although very close when viewed in raw molecule numbers, the distributions differ after applying the transformation x → x^α with α = 1/3, which tends to compress large values while preserving small values. Both genes become bimodal, suggesting the presence of two bursting frequencies, a normal mode and a very small inhibited one: the auto-activation model then performs better than the simple Gamma distribution (which theoretically stays unimodal for 0 < α < 1). Note that the protocol used in [35], combined with the presence of small nonzero values, tends to support a true biological origin for the peak at zero. Besides, the case of distributions that are not bimodal until transformed also arises for proteins [57].

Application of the inference procedure
By construction of the mechanistic model, the interaction matrix θ can describe any oriented graph by explicitly defining causal quantitative links between genes, which is difficult to do within traditional statistical frameworks (e.g. Bayesian networks or undirected Markov random fields). The logical downside is that identifiability issues seem inevitable. In a first attempt to assess this aspect, we implemented the inference method presented above and tested it on various two-gene networks, assuming auto-activation for each gene (i.e. m_i,i > 0) with equation (10) to maximize variability without considering perturbations of the system (parameter list in Appendix E).
We decided to investigate the worst-case scenario in terms of cell numbers. We are fully aware of the existence of technologies allowing to interrogate thousands of cells simultaneously, but most of the recent studies still rely upon a much smaller number of cells. For each network, we therefore simulated mRNA snapshot data for 100 cells using the full PDMP model (4). We then inferred the matrix θ using a hard EM algorithm based on the likelihood (13), that is, alternately maximizing the likelihood with respect to θ and with respect to the (unknown) protein levels of each cell. A lasso-like penalization term, corresponding to a prior distribution, was added on the θ_i,j for i ≠ j to obtain true zeros, so that the inferred network topology is clear, and to prevent keeping both θ_i,j and θ_j,i when one is significantly weaker (see Appendix D for details of the penalization and the whole procedure).
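The alternating structure of this hard EM can be sketched on a deliberately simple toy model (a Gaussian latent-variable model, not the paper's mechanistic likelihood): observations x_k ~ N(y_k, 1) with latent y_k ~ N(θ, 1) and an L1 penalty on θ. The hard E step then maximizes over each latent y_k in closed form, and the penalized M step reduces to soft-thresholding:

```python
import random

random.seed(1)
# Toy hard EM (illustrative only): x_k ~ N(y_k, 1) observed,
# y_k ~ N(theta, 1) latent, penalty lam * |theta|.
true_theta = 1.5
x = [random.gauss(true_theta, 2 ** 0.5) for _ in range(500)]  # marginal N(theta, 2)

theta, lam = 0.0, 5.0
for _ in range(50):
    # hard E step: y_k = argmax_y log p(x_k | y) + log p(y | theta) = (x_k + theta) / 2
    y = [(xk + theta) / 2 for xk in x]
    # penalized M step: theta = argmax_t -sum (y_k - t)^2 / 2 - lam * |t|,
    # i.e. soft-thresholding of the mean of the y_k
    m = sum(y) / len(y)
    t = lam / len(y)
    theta = (m - t) if m > t else (m + t) if m < -t else 0.0
print(theta)  # converges near the true value 1.5
```

In the paper's setting the E step is a gradient ascent on the protein levels and the M step a proximal gradient step on θ, but the alternation (and the exact zeros produced by the non-smooth penalty) follows the same pattern.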
We obtained highly encouraging results since every structure was inferred with a high probability of success (Fig 8), meaning that the non-diagonal (i.e. interaction) terms of θ had the right sign and were nonzero at the right places. A full list of the inferred values is provided in Appendix E. We emphasize that the diagonal (i.e. basal) terms of θ were free and without penalization, so the results strongly support the identifiability of the full matrix θ in such a two-gene case.
Figure 8: Testing our inference method on simple networks, numbered from 1 to 7. For each network, we simulated 100 cells using the full mechanistic model until the stationary regime was reached. We then took a snapshot of their mRNA levels and inferred the parameters from this data. (a) For each network (rows), 10 datasets were simulated and the results were reported by counting the number of inferred θ corresponding to each topology (columns), highlighting successes (blue) and failures (orange). A perfect inference would lead to 10 for all the diagonal terms and 0 everywhere else. (b) Examples of simulated mRNA datasets (one for each network). Although having coherent signs, Pearson's correlation coefficients (top right of each plot) would clearly be insufficient to distinguish between the different networks.

Discussion
In this paper, we introduce a general stochastic model for gene regulatory networks, which can describe bursty gene expression as observed in individual cells. Instead of using ordinary differential equations, for which cells would structurally all behave the same way, we adopt a more detailed point of view that includes stochasticity as a fundamental component. Our PDMP framework appears to be conceptually very similar to the random dynamical system proposed by [58] but it has two major advantages: time does not have to be discretized, and the mathematical analysis is significantly easier. We also note that a similar framework appears in [59,60] and that a closely related PDMP, which can be seen as the limit of our model for infinitely short bursts, has recently been described in [61].
We then derive an explicit approximation of the stationary distribution and propose to use it as a statistical likelihood to infer networks from single-cell data. The main ingredient is the separation of three physical timescales (chromatin, promoter/RNA, and proteins) and the core idea is to use the self-consistent proteomic field approximation from [48,49] in a slightly simpler mathematical framework, providing fully explicit formulas that make possible the massive computations usually needed for parameter inference. From this viewpoint, it is a rather simple approach and we hope it can be adapted or improved in more specific contexts, for example in the study of lineage commitment [62]. Besides, the main framework does not necessarily have to include an underlying chromatin model, and thus it can in principle also be used to describe gene networks in prokaryotes.

Mechanistic modelling and statistical inference
An important quality of the PDMP network model is that its simulation algorithm is comparable in speed with classic ODE and diffusion systems, while providing an effective approximation of the perfect, fully discrete, molecular counterpart [30,32]. It is worth noticing that the PDMP (at least the promoter-mRNA system) naturally appears as an example of Poisson representation [26,63], that is, not a simple approximation but rather the core component of the exact distribution of the discrete molecular model. Furthermore, such a simulation speed allowed us to compare our approximate likelihood with the true likelihood for a simple two-gene toggle switch, giving excellent results (Fig 6). This obviously does not constitute a proof of robustness for every network: a proper quantitative (theoretical or numerical) comparison is beyond the scope of this article but would be extremely valuable. Intuitively, the approximation should work for any number of genes, provided that interactions are not too strong.
Besides, some widely used ODE frameworks [6,15,54] can be seen as the fast-promoter limit of the PDMP model: this limit may not always hold in practice, especially in the bursty regime.
In particular, Figure 5 highlights the risk of using mRNA levels as a proxy for protein levels. It also explains why ordering single-cell mRNA measurements by pseudo-time may not always be relevant, as found in [35]. In [64], the authors use a hybrid model of gene expression to infer regulatory networks: it is very close to the diffusion limit of our reduced model (7), with the difference that the discrete component, called promoter by the authors, would correspond to the frequency mode in the present article, as visible for proteins in Figure 5. From such a perspective, our approach adds a description of bursty mRNA dynamics that allows for fitting single-cell data such as in Figure 7. This is fully relevant given the current explosion of single-cell transcriptomics data.
Finally, our method performed well on simple two-gene networks (Fig 8), showing that part of the causal information remains present in the stationary distribution: this suggests that it is indeed possible to retrieve network structures with a mechanistic interpretation, even from bursty mRNA data. From a statistical point of view, the inference method derived from the Hartree approximation shares important similarities with [65], where an undirected network is inferred by independently fitting a hierarchical model for each gene: our results suggest that one can also infer directed links, i.e. distinguish θ_i,j from θ_j,i.

Perspectives
We focused here on presenting the key ideas behind the general network model and the inference method: the logical next step is to apply it to real data and with a larger number of genes, which is the subject of work in progress in our group. We considered genes with auto-activation in order to obtain enough variability, but it is also possible to increase variability by perturbing the system, e.g. changing the cell's medium or performing knockouts [66]. Importantly, the Hartree approximation can in theory also be computed in the transient regime, which would allow for taking time into account.
From a computational point of view, the main challenge is the algorithmic complexity induced by the fact that proteins are not observed and have to be treated as latent variables. There is a priori no way of reducing this complexity without losing too much accuracy, and therefore some finely optimized algorithms may be required to make the method scalable. Furthermore, in order to obtain a general inference program, it would be interesting to study the identifiability properties of the interaction matrix θ. This matrix may not be identifiable without assuming that all the other parameters are known, and in practice some may have to be fixed or estimated by other means. For example, one could consider systematic measurements of the necessary mRNA and protein half-lives.
From a biological point of view, our model does not really describe individual cells but rather a concatenation of trajectories obtained by following cells throughout divisions. Experiments suggest that this should be a relevant approximation, provided one considers mRNA and protein levels in terms of concentrations instead of molecule numbers [67], which is made possible by the PDMP framework. In this view, the cell cycle results in increasing the apparent degradation rates (because of the increase in cell volume followed by division) and thus plays a crucial role for very stable proteins. However, at such a description level, many aspects of possible compensation mechanisms [68] and chromatin dynamics [69] remain to be elucidated. Regarding the latter aspect, our abstract chromatin states were not modeled from real-life data (chromatin composition for instance), but our approach is relevant in that partitioning into dual-type chromatin states as we did is now known to be a pervasive feature of eukaryotic genomes [70-73].

Appendix A Separating mRNA and protein timescales
Here we justify the reduced network model involving only promoters and proteins, which is valid when d_1,i ≪ d_0,i for all genes i. A full proof is beyond the scope of this article but we provide a heuristic explanation. We temporarily drop the i index for simplicity. Let t_1 > t_0 ≥ 0 and E ∈ {0, 1}, and suppose E(t) = E for all t ∈ [t_0, t_1]. Moreover, let M_0 = M(t_0) ∈ [0, 1] and P_0 = P(t_0) ∈ [0, 1]. Solving the linear system M′ = d_0(E − M), P′ = d_1(M − P) on [t_0, t_1] gives M(t) = E + (M_0 − E)e^{−d_0(t−t_0)} and P(t) = E + (P_0 − E)e^{−d_1(t−t_0)} + (d_1/(d_0 − d_1))(M_0 − E)(e^{−d_1(t−t_0)} − e^{−d_0(t−t_0)}). Hence, if d_1 ≪ d_0, we have P(t) ≈ E + (P_0 − E)e^{−d_1(t−t_0)}, using the fact that |M_0 − E| ≤ 1 and |e^{−d_1(t−t_0)} − e^{−d_0(t−t_0)}| ≤ 1, and thus P(t) approximates the solution of the differential equation P′ = d_1(E − P). To extend this result to the actual network model, one should use the Lipschitz regularity of the functions k_on and k_off together with the fact that for any fixed E, both the original system and the latter differential equation converge to E as t → +∞.
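This heuristic can be checked numerically. The sketch below uses arbitrary rates satisfying d_1 ≪ d_0 (not values from the paper), freezes the promoter at E = 1, and integrates both the full linear system and the reduced equation with an explicit Euler scheme:

```python
# Numerical illustration (assumed values with d1 << d0, promoter frozen at E = 1):
# the protein P of the full system M' = d0*(E - M), P' = d1*(M - P)
# stays close to the solution P_red of the reduced equation P' = d1*(E - P).
d0, d1, E = 10.0, 0.1, 1.0
M = P = P_red = 0.0             # initial conditions M0 = P0 = 0
dt, n_steps = 1e-3, 50_000      # integrate over [0, 50]
max_gap = 0.0
for _ in range(n_steps):
    M += dt * d0 * (E - M)
    P += dt * d1 * (M - P)
    P_red += dt * d1 * (E - P_red)
    max_gap = max(max_gap, abs(P - P_red))
print(max_gap)  # of order d1/d0 = 0.01, as predicted by the cross term above
```

The worst-case gap matches the size of the neglected cross term, d_1/(d_0 − d_1) ≈ 0.01 here.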

Appendix B Hartree approximation
B.1 Hartree approximation for the PDMP model
Before deriving the approximation, we introduce some notation. Let n be the number of genes in the network, E = {0, 1}^n and Ω = (0, 1)^n. At time t, promoter and protein configurations are denoted by E_t = (e_1, ..., e_n) = e ∈ E and P_t = (y_1, ..., y_n) = y ∈ Ω, respectively. The distribution of (E_t, P_t) then evolves along time according to its forward Kolmogorov (aka master) equation, which is a linear partial differential equation (PDE) system in our case. This system is high dimensional (|E| = 2^n, the number of possible promoter configurations) but the associated linear operator contains lots of zeros. Using the tensor product notation ⊗, one can write down the equation in a compact form (15), where u(t, y) = (u_e(t, y))_{e∈E} ∈ R^{2^n} ≅ (R^2)^{⊗n} represents the probability density function (pdf) of (E_t, P_t), and where the matrices F_i and M^{(i)} act on gene i only.
The sum on the left-hand side of equation (15) clearly corresponds to a deterministic transport term, while the right-hand side corresponds to the stochastic transitions between promoter configurations. Furthermore, the PDE system comes with the boundary condition ∀i ∈ {1, ..., n}, F_i u = 0 on ∂Ω (16) and the probability condition that u integrates to 1 on E × Ω (17). The self-consistent Hartree approximation consists in splitting this 2^n-dimensional problem into n independent 2-dimensional problems by freezing the y_j for j ≠ i, where i is fixed, and then gathering the solutions by taking their tensor product to produce an approximation of the true pdf (see [49] for a heuristic explanation in the discrete protein setting). More precisely, one reduced problem (18) is derived for each gene i from (15)-(16)-(17), where u^i(t, y) = (u^i_0(t, y), u^i_1(t, y)) ∈ R^2_+ satisfies the initial condition u^i(0, y) = u^{i,0}(y), the boundary condition F^{(i)}(y_i)u^i(y) → 0 when y_i → 0 or 1, and the probability condition ∫_0^1 [u^i_0(t, y) + u^i_1(t, y)] dy_i = 1 for all t ≥ 0 and y_1, ..., y_{i−1}, y_{i+1}, ..., y_n ∈ (0, 1). Therefore, each u^i is a pdf with respect to (e_i, y_i) ∈ {0, 1} × (0, 1) but not on E × Ω. Finally, the Hartree approximation is given by the tensor product u ≈ u^1 ⊗ ... ⊗ u^n (19), where the equality holds if for all i, k_on,i and k_off,i only depend on y_i.

B.2 Solving the reduced problem
For the moment, the time-dependent closed-form solution of (18) is unavailable, but the unique stationary solution can easily be obtained if one knows a primitive ϕ_i of the nonzero eigenvalue λ_i of the matrix M^{(i)}. Indeed, crucially using the fact that M^{(i)} has a constant eigenvector (−1, 1) associated with the eigenvalue λ_i (the other eigenvalue being 0), one can check that v_i = e^{ϕ_i}(−1, 1) is a solution when ∂ϕ_i/∂y_i = λ_i. If one has such a ϕ_i, the stationary solution of (18) is given by (20), where Z_i is the normalizing constant (which may still depend on y_j for j ≠ i). Note that the existence of a positive constant α such that min(k_on,i, k_off,i) ≥ α imposes the limit 0 for exp(ϕ_i(y)) when y_i → 0 or 1, and thus the boundary condition is satisfied. We also obtain the promoter probabilities p_0,i = p(e_i = 0) = Z_0,i/Z_i and p_1,i = p(e_i = 1) = Z_1,i/Z_i, where Z_0,i and Z_1,i are the contributions of each promoter state to Z_i. In particular, when k_on,i and k_off,i do not depend on y_i (i.e. no self-interaction), we obtain the classical solution with a_i = k_on,i(y)/d_1,i and b_i = k_off,i(y)/d_1,i: the conditional distributions of the protein level y_i given the promoter state e_i = 0 or 1 are both Beta distributions. Since the promoter state is usually not observed, one usually considers the marginal pdf of y_i, which is also a Beta, namely Beta(a_i, b_i) (22). Note that the conditional distribution of mRNA given proteins also has the form (22) since the PDMP equation is the same, although the argument is not the Hartree approximation but rather the more common quasi-steady-state assumption.
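This classical solution can be illustrated by simulation. The sketch below (with assumed constant rates, no interactions, not taken from the paper) integrates the single-gene promoter-protein PDMP with a small time step and compares the empirical stationary mean of the protein level with the Beta(a, b) prediction E[y] = a/(a + b) = k_on/(k_on + k_off):

```python
import random

random.seed(0)
# Single-gene PDMP with constant rates (illustrative values): the promoter e
# switches 0 -> 1 at rate k_on and 1 -> 0 at rate k_off, and the protein
# follows y' = d1*(e - y). The stationary marginal of y is Beta(k_on/d1, k_off/d1).
k_on, k_off, d1 = 2.0, 3.0, 1.0
e, y = 0, 0.0
dt, burn, n = 5e-3, 100_000, 400_000
total = 0.0
for step in range(burn + n):
    rate = k_on if e == 0 else k_off
    if random.random() < rate * dt:   # promoter jump (first-order in dt)
        e = 1 - e
    y += dt * d1 * (e - y)            # deterministic protein flow
    if step >= burn:                  # discard the transient
        total += y
mean_y = total / n
print(abs(mean_y - k_on / (k_on + k_off)))  # small: matches a/(a + b) = 0.4
```

A finer check would compare the whole histogram of y with the Beta(2, 3) density, but the mean already captures the promoter-averaging effect.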

B.3 Protein marginal distribution
Given the form of the solution (20), it is in fact always straightforward to integrate over promoters, even for the full (stationary) Hartree approximation (19), and we finally obtain an explicit product form for the protein marginal, where we recall the possible dependence of Z_i on some of the y_j. Hence, when ϕ_i and Z_i are known functions, one gets a fully explicit approximation of the joint protein distribution.
that is supposed to be independent of chromatin states. Finally, we assume that the chromatin transitions are due to fast interactions with ambient proteins (binding, hit-and-run, etc.), so that the promoter-switching reactions always see the chromatin in its quasi-stationary state. Effective rates k_on and k_off can therefore be obtained by averaging over chromatin states: this way, k_off is still a constant and k_on is now defined by equation (24), where p_0 (resp. p_1) is the probability of the chromatin being non-permissive (resp. permissive).
We now define an explicit model for chromatin dynamics and compute its stationary distribution to derive p_0 and p_1 as functions of y_1, ..., y_n. We consider 2^n permissive configurations and 2^n non-permissive configurations as follows: for all I ⊂ G where G = {1, ..., n}, species C_I (resp. C*_I) stands for the chromatin being non-permissive (resp. permissive) and in state I. The underlying physics is the following: the chromatin has two basal configurations C_∅ (non-permissive) and C*_∅ (permissive), which describe the dynamics when no protein is present. Then, each protein P_j is able to modify the chromatin state through a hit-and-run reaction, which is kept in memory by encoding the index j in the list I, giving the state C_I or C*_I. Eventually, this memory can be lost by emptying I step by step (going back to the basal configuration). That is, for all I ⊂ G and j ∈ G \ I, we consider reactions such that only one molecule is present at a time: its species therefore entirely describes the state of the system. Mathematically, we obtain a standard jump Markov process with 2^{n+1} states. For example, the case n = 2 leads to the scheme of Figure 9, writing a_j = a_j[P_j] and c_j = c_j[P_j] for simplicity. The underlying idea is that, depending on a_j, b_j, c_j and d_j, proteins will tend to stabilize the chromatin either in a permissive configuration or in a non-permissive one, providing notions of activation and inhibition. The basal reactions with rates α and β sum up what we do not observe (i.e. what is likely to happen for the chromatin when none of the P_j are present).

C.4 The case of auto-activation
At this stage, it is possible to implement self-interaction for gene i by taking λ_i ≠ µ_i in (24), but this leads to obvious identifiability issues: in the stationary state, one cannot really distinguish between auto-activation, auto-inhibition and basal level. To cope with these issues, we restrict ourselves to auto-activation by setting c_i = d_i = 0 and keeping only the relevant chromatin states (C*_I for all I, and C_I for I such that i ∉ I). The system still has a unique stationary distribution and the formula for k_on corresponds to the case µ_i = 0 in (24). Then, starting from the fact that auto-activation is only relevant when the basal level is small enough (for a bistable behaviour to be possible), we take the limit α ≪ 1 while keeping αλ_i fixed: the formula becomes (25), where m_i > 0 if gene i activates itself and m_i = 0 otherwise.

C.5 Parameterization for inference
Parameters of equation (25) are still clearly not identifiable: in order to get a more minimal form, we introduce the following parameterization: s_j = µ_j^{−1/m_j} and θ_j = log(λ_j/µ_j) for all j ≠ i, and s_i = (β/α)^{1/m_i}, θ_i = log(λ_i). After simplifying (25), we obtain a more compact expression for k_on. The new parameters have an intuitive meaning: s_j can be seen as a threshold for the influence of protein j, and θ_j characterizes this influence via its sign and absolute value (θ_j = 0 implying that k_on does not depend on protein j), with the exception that s_i and θ_i aggregate a basal behaviour and an auto-activation strength.
Finally, we recall the notation y_j = [P_j], reintroduce the index i of the gene of interest and add it to each parameter. Hence, for every gene i, the function k_on,i is defined by equation (26), with the auxiliary term Φ_i given by (27). In our statistical framework, we assume that the parameters k_0,i, k_1,i, m_i,j and s_i,j are known and we focus on inferring the matrix θ = (θ_i,j) ∈ M_n(R), which is similar to the interaction matrix in usual gene network inference methods.

C.6 Explicit distribution for an auto-activation model
Here we derive the stationary distribution for a self-activating gene. For simplicity, we drop the i index. In this model, k_off is constant and we assume that there are some constants Φ ≥ 0, m > 0, s > 0 and k_1 ≥ k_0 > 0 such that k_on has the form k_on(y) = (k_0 + k_1 Φ(y/s)^m)/(1 + Φ(y/s)^m), so the stationary distribution can directly be used in the Hartree approximation of the network model (26), recalling that Φ has to be independent of the gene's own protein but can depend on the others. Letting c = (k_1 − k_0)/(m d_1) > 0, we are in the case of the explicit solution (20), with marginal density u(y) = Z^{−1} y^{a_0 − 1}(1 − y)^{b − 1}(1 + Φ(y/s)^m)^c where a_0 = k_0/d_1 and b = k_off/d_1 (28). To get a fully explicit result, i.e. to compute Z, we shall assume that c is a positive integer. If it is not, one can get a satisfying approximation by taking c = ⌊(k_1 − k_0)/(m d_1)⌋. Then, expanding (28) using the binomial theorem, we obtain a mixture of Beta distributions (29), for which the arbitrary neutral case Φ = 1 is symmetric, i.e. p_0 = p_c. Note that s actually only depends on the fundamental parameters k_0, k_1, k_off and d_1 (and not on c nor m). Figure 10 shows some examples of the resulting distribution, which can be bimodal or not, depending on the value of c (or equivalently, m) when all other parameters are fixed.

The integral of the complete log-likelihood against p(y | x, θ) is the more standard quantity that appears in the frequentist EM algorithm for maximum likelihood estimation. Hence, considering a prior on θ simply results in adding a penalization term g(θ) during the M step of the algorithm.
For example, if we assume that the θ_i,j for i ≠ j are independent and follow Laplace distributions, i.e. w(θ) ∝ ∏_{i≠j} exp(−λ|θ_i,j|), then g(θ) = −log(w(θ)) = λ ∑_{i≠j} |θ_i,j| + C. Since C does not depend on θ, this is equivalent to the standard L1 (lasso) penalization, which is well known to enforce the sparsity of the network.

D.2 Custom prior on the interactions
Here we consider a custom prior to deal with oriented interactions. Indeed, for every pair of nodes {i, j} there are two possible interactions with respective parameters θ_i,j and θ_j,i, but it is likely that only one is actually present in the true network. Hence, we want θ_i,j and θ_j,i to compete against each other so that only one is nonzero after MAP estimation, unless there is enough evidence in the data that both interactions are present. To this aim, we define a prior depending on two parameters λ, α ≥ 0. Thus α can be seen as a competition parameter, the case α = 0 leading to the standard lasso penalization parametrized by λ.

D.3 The algorithm in practice
As visible in (31), the true EM algorithm involves integration against the distribution p(y | x, θ), which does not allow for direct numerical integration because of the dimension (y ∈ R^n). In practice, the procedure is the following. Suppose we observe mRNA levels in m independent cells, and let x_k ∈ R^n (resp. y_k ∈ R^n) denote the mRNA (resp. protein) levels of cell k. In line with sections D.1-D.2 and letting x = (x_1, ..., x_m) and y = (y_1, ..., y_m) for simplicity, we define the objective function F(y, θ) = ℓ(x, y, θ) − g(θ) (33), where ℓ(x, y, θ) is the complete log-likelihood and g(θ) the penalization defined above. The algorithm then simply consists in iterating the following two steps until convergence. The approximate E step (34) can be performed using a standard gradient method since u and v are smooth functions of y. The penalized M step (35) is a non-smooth maximization problem since g is non-smooth, but it can be performed using a proximal gradient method detailed in the next section. The form of ℓ(x, y, θ) is such that we just need to compute ∇ log u and ∇ log v.

D.3.1 Likelihood form in the basic case
We recall that in the basic case (no self-interaction), we have an explicit formula for log(u(y, θ)), where c_i = ⌊(k_1,i − k_0,i)/(d_1,i m_i,i)⌋, a_i,r = ((c_i − r)k_0,i + r k_1,i)/(d_1,i c_i) and w_i,r = C(c_i, r)(Φ_i/s_i,i^{m_i,i})^r, with C(c_i, r) the binomial coefficient.

D.3.3 Gradients
Explicit computation of the gradients is then straightforward (with k_off,i constant and k_on,i, Φ_i defined by (26)-(27)) but leads to cumbersome formulas: we implemented them in Scilab and the code is available upon request.
These 9 cases form a partition of R^2 and are represented in Figure 11. One can check that the case α = 0 collapses to the usual proximal operator associated with the lasso penalization.

Figure 1: Scheme of the two-state model of gene expression, basic block of our network model.

Figure 2: (a) Two genes interacting with each other, forming a network. Interactions are assumed to arise from the dependence of promoter dynamics on protein quantities. (b) A higher level of abstraction leads to the traditional gene regulatory network representation. (c) A toy example of reactions defining the interactions between genes 1 and 2, making the link between representations (a) and (b).

Figure 6: Exact and approximate stationary distributions for the example of the toggle switch. True distributions (left side) were estimated by sample path simulation, while approximations (right side) have explicit formulas. (a) True distribution of proteins. (b) Approximate distribution of proteins, from formula (11). (c) True distribution of mRNAs. (d) Approximate distribution of mRNAs, obtained by integrating the conditional distribution of mRNA (12) against (b).

Figure 7: Fitting mRNA distributions from single-cell data for two genes: the red curve is the stationary distribution associated with our interaction form (here single genes with auto-activation), while the dashed blue curve is a simple Gamma distribution. (a)-(b) The molecule numbers are well fitted by Gamma distributions, which in this view are close to our model. (c)-(d) Same fit viewed after applying the transformation x → x^{1/3}. Both genes become bimodal and the fit appears to be better with the auto-activation model.

Figure 9: Chromatin states and transition rates in the case of n = 2 proteins.
where a_r = ((c − r)k_0 + r k_1)/(d_1 c) and b = k_off/d_1, and a probabilistic representation of u in terms of a mixture of Beta distributions: u(y) = ∑_{r=0}^{c} p_r f_r(y) (29), where f_r(y) = y^{a_r − 1}(1 − y)^{b − 1}/B(a_r, b) and p_r = C(c, r) B(a_r, b)(Φ/s^m)^r / Z, with C(c, r) the binomial coefficient. The dissociation constant s is clearly redundant with the input Φ. We fix the particular value s = (B(a_c, b)/B(a_0, b))^{1/(mc)}.
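As a sanity check of this mixture representation, the sketch below builds the weights p_r with assumed parameter values (chosen so that c is an integer; these values are not from the paper) and verifies that the resulting density integrates to 1:

```python
import math

# Check of the Beta-mixture representation (29) with assumed parameter values:
# p_r ∝ C(c, r) * B(a_r, b) * (Phi/s^m)^r and f_r = Beta(a_r, b) densities.
k0, k1, k_off, d1, m, s, Phi = 1.0, 3.0, 5.0, 1.0, 2.0, 0.3, 1.0
c = round((k1 - k0) / (m * d1))            # = 1 here, an integer by construction
b = k_off / d1
a = [((c - r) * k0 + r * k1) / (d1 * c) for r in range(c + 1)]

def log_beta(p, q):                         # log of the Beta function B(p, q)
    return math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)

w = [math.comb(c, r) * math.exp(log_beta(a[r], b)) * (Phi / s**m) ** r
     for r in range(c + 1)]
Z = sum(w)
p = [wr / Z for wr in w]                    # normalized mixture weights

def u(y):                                   # mixture density u(y) = sum p_r f_r(y)
    return sum(p[r] * math.exp((a[r] - 1) * math.log(y)
                               + (b - 1) * math.log(1 - y)
                               - log_beta(a[r], b)) for r in range(c + 1))

N = 20_000                                  # midpoint rule on (0, 1)
integral = sum(u((i + 0.5) / N) for i in range(N)) / N
print(integral)  # ≈ 1: u is a proper probability density
```

The same construction extends to any integer c; only the number of mixture components changes.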

Figure 11: Partition of R^2 associated with the proximal operator, for α > 0 (left) and α = 0 (right). Gray areas correspond to a usual gradient step and white areas correspond to a threshold.

Table 1: Chemical reactions defining the two-state model. The rate constants are usually abbreviated to rates as they correspond to actual reaction rates when only one molecule of reactant is present. In the stochastic setting, these rates are in fact propensities, i.e. probabilities per unit of time.
Here ε ≪ 1 indicates that the only rare species are G and G*, with quantities [G] and [G*] being equal to 0 or 1 molecule and satisfying the conservation relation [G] + [G*] = 1. The other two species, M and P, are not conserved quantities in the model and reach a much wider range in biological situations [28], meaning that the saturation constants s_0/d_0 and s_1/d_1 are much larger than 1 molecule.
Hence, letting E(t), M(t) and P(t) denote the respective quantities of G*, M and P at time t, we consider a hybrid version of the previous model, where E has the same stochastic dynamics as before, but with M and P now following usual rate equations.

To overcome this problem, a first option is Monte Carlo integration, typically by MCMC, leading to a stochastic EM algorithm that is slow but accurate if samples are large enough. A faster option consists in approximating p(y | x, θ) by its highest mode, i.e. by the Dirac mass δ_{y*} where y* = argmax_y p(y | x, θ). Then it is worth noticing that since p(y | x, θ) ∝ p(y | θ)p(x | y, θ), the whole procedure can be seen as performing a coordinate ascent on the function (θ, y) → p(θ, y | x). We chose this option for the examples: it is sometimes called hard or classification EM, since a particular case leads to the well-known k-means clustering algorithm [75]. Unfortunately, the theoretical foundations of the true EM algorithm are lost with the hard EM (we do not maximize a lower bound of p(θ | x) anymore), but it often gives satisfying results while requiring much less computational time.

Table 3: General parameters used in the examples. The s_i,j correspond to the normalized model: their counterparts in absolute protein numbers are s_i,j × (s_0 s_1)/(d_0 d_1) = 2 × 10^3 for i ≠ j and s_i,i × (s_0 s_1)/(d_0 d_1) = 1.9 × 10^4.