
Table 2 Scores and features of parameter inference challenge

From: Network topology and parameter estimation: from experimental design methods to gene regulatory network kinetics using a community based approach

| Model 1 | Parameter distance Dparam | P-value for parameter predictions | Protein distance Dprot | P-value for protein time course predictions | Score | Bayesian | Decompose network | Selection of data | Sampling |
|---|---|---|---|---|---|---|---|---|---|
| Orangeballs | 0.0229 | 3.25E-03 | 0.002438361 | 1.21E-25 | 27.4 | no | yes | Game Tree | Sequential local search |
| 2 | 0.8404 | 1.00E+00 | 0.016023721 | 3.39E-18 | 17.5 | no | no | Manual based on parameter uncertainty | Global method |
| 3 | 0.1592 | 6.00E-01 | 0.035404398 | 4.45E-15 | 14.6 | yes | no | Manual | LH |
| 4 | 0.0899 | 1.88E-01 | 0.047495432 | 6.28E-14 | 13.9 | no | yes | Manual | LM + Particle Swarm |
| 5 | 0.1683 | 6.45E-01 | 0.09791128 | 4.01E-11 | 10.6 | yes | no | Train + Sim | UKF |
| 6 | 0.0453 | 1.37E-02 | 0.198785197 | 1.93E-08 | 9.6 | no | no | A-Criterion | Local (LM) |
| 7 | 0.1702 | 6.45E-01 | 0.362463945 | 2.90E-06 | 5.7 | no | yes | Sensitivity analysis | Hybrid (Local + Global) |
| 8 | 0.8128 | 1.00E+00 | 0.356429217 | 2.53E-06 | 5.6 | yes | no | Estimation of improved uncertainty | Global (MH) |
| 9 | 0.3766 | 9.99E-01 | 0.817972877 | 1.34E-03 | 2.9 | yes | yes | MI | ABC-SMC |
| 10 | 0.0699 | 9.83E-02 | 19.32326868 | 1.00E+00 | 1.0 | no | yes | Minimize variance based on FI | Multistart local search |
| 11 | 0.1883 | 7.29E-01 | 3.222767988 | 6.90E-01 | 0.3 | no | no | Train + Sim | LH + DE |
| 12 | 5.0278 | 1.00E+00 | 14.77443631 | 1.00E+00 | 0.0 | no | no | Manual | Local method |

The table for Model 1 of the parameter inference challenge lists the anonymized teams (except for the best performer) ordered by Score rank. Next to each team are listed its parameter distance and associated p-value, its protein distance and associated p-value, and the score. The last four columns indicate the features of the fitting strategies used by the participants. Abbreviations used for the features: ABC-SMC, Approximate Bayesian Computation with Sequential Monte Carlo; DE, Differential Evolution; FI, Fisher Information; LH, Latin Hypercube; LM, Levenberg-Marquardt; MH, Metropolis-Hastings; MI, maximize mutual information between parameters and the output of experiments; Train + Sim, iterative steps of training on data and simulation to find the most informative experiments; Rank, rank experiments in the top 10% of the A-Criterion (trace of the covariance matrix) according to price; UKF, Unscented Kalman Filtering.
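
The caption does not state how the two p-values combine into the Score column, but the listed values are consistent with the sum of the negative base-10 logarithms of the parameter and protein p-values (e.g., for Orangeballs, -log10(3.25E-03) + -log10(1.21E-25) ≈ 2.5 + 24.9 ≈ 27.4). The sketch below is a minimal reconstruction under that assumption; the function name is illustrative and not taken from the paper.

```python
import math

def overall_score(p_param: float, p_prot: float) -> float:
    """Hypothetical reconstruction of the Score column: sum of the negative
    log10 of the parameter and protein-time-course p-values (an assumption
    inferred from the table values, not stated explicitly in the caption)."""
    return -math.log10(p_param) - math.log10(p_prot)

# Spot checks against the first two rows of the table
print(round(overall_score(3.25e-03, 1.21e-25), 1))  # ~27.4 (Orangeballs)
print(round(overall_score(1.00e+00, 3.39e-18), 1))  # ~17.5 (team 2)
```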