A fast and accurate algorithm for single individual haplotyping

Background Due to the difficulty in separating the two (paternal and maternal) copies of a chromosome, most published human genome sequences only provide genotype information, i.e., the mixed information of the underlying two haplotypes. However, phased haplotype information is needed to completely understand complex genetic polymorphisms and to increase the power of genome-wide association studies for complex diseases. With the rapid development of DNA sequencing technologies, reconstructing a pair of haplotypes from an individual's aligned DNA fragments by computer algorithms (i.e., Single Individual Haplotyping) has become a practical haplotyping approach. Results In this paper, we combine two measures, "errors corrected" and "fragments cut", and propose a new optimization model, called Balanced Optimal Partition (BOP), for single individual haplotyping. The model generalizes two existing models, Minimum Error Correction (MEC) and Maximum Fragments Cut (MFC), and reduces to either model under extreme parameter values. To solve the model, we design a heuristic dynamic programming algorithm H-BOP. By limiting the number of intermediate solutions at each iteration to an appropriately chosen small integer k, H-BOP is able to solve the model efficiently. Conclusions Extensive experimental results on simulated and real data show that when k = 8, H-BOP is generally faster and more accurate than a recent state-of-the-art algorithm, ReFHap, in haplotype reconstruction. The running time of H-BOP is linearly dependent on some of the key parameters controlling the input size, and H-BOP scales well to large input data. The code of H-BOP is available to the public for free upon request to the corresponding author.


Background
Each human somatic cell contains 23 pairs of chromosomes, and the DNA sequences of the two copies of each chromosome differ by about 0.5% [1]. The dominant DNA differences are single nucleotide polymorphisms (SNPs). Identification of the combination of alleles at the SNP loci on the same chromosome copy, i.e., haplotyping, is needed to fully understand human genetic variation patterns and to enhance the power of genome-wide association studies for complex diseases [2,3]. Currently, it is expensive and labor-intensive to separate the two copies of a chromosome by biological techniques [4], and most published human individual genomes contain only the mixed information, i.e., genotype information, of the underlying two copies of each chromosome [5]. Therefore, to reduce the cost, accurate and fast computational haplotyping methods are urgently needed.
There have been many computational haplotyping models [6][7][8] and they can be grouped into two main classes: haplotype inference and haplotype assembly [6]. Haplotype inference phases the haplotypes of individuals in a pedigree or a population from their genotypes. Haplotype inference algorithms have been used in the International HapMap Project [9] and the 1000 Genomes Project [5] to identify haplotypes. Haplotype assembly is also called Single Individual Haplotyping (SIH). SIH assembles a pair of haplotypes from an individual's aligned DNA sequence fragments. With the dramatically dropped cost of human whole genome sequencing, more and more human individuals' genomes have been sequenced. With mate-pair sequencing and the read length improvements of next-generation sequencing technologies, and with the development of new sequencing technologies, SIH methods have been used to build haplotype-resolved human genomes [10,11]. When there are enough DNA sequence fragments that cover two or more consecutive variant loci, SIH builds longer and more accurate haplotype blocks than haplotype inference does [12].
Since Lancia et al. first formalized the SIH problem [13], many optimization models and algorithms have been introduced to solve the problem [7,[14][15][16][17][18][19][20][21][22][23][24][25]. The main models are MEC (minimum error correction) [24], MFR (minimum fragment removal), and MSR (minimum SNP removal) [13]. Recently, Duitama et al. proposed a new model, MFC (maximum fragments cut) [16]. Most of the models are NP-hard and APX-hard [16,21,26], and their exact algorithms run in time exponentially dependent on at least one input parameter [15,17,[19][20][21]. Therefore, a large number of heuristic algorithms have been designed to deal with the problem [16,22,23,25]. According to [16], one of the most accurate heuristic algorithms is HapCUT [25], while ReFHap [16] runs much faster than HapCUT without loss of accuracy. In this paper, we consider both quality measures, "errors corrected" and "fragments cut", and propose a new optimization model, called Balanced Optimal Partition (BOP), for the SIH problem. The model generalizes the most popular model MEC and the recent model MFC; in fact, it reduces to either model under extreme parameter settings. To solve the model, we propose a dynamic programming algorithm H-BOP. By limiting the number of intermediate solutions at each iteration to an appropriately chosen small integer k, H-BOP is able to solve the model efficiently. The time complexity of H-BOP depends linearly on some of the key parameters controlling the input size, and the algorithm scales well to large input data.

Results and discussion
We use a publicly available Java package, SIH [27], to test the performance of H-BOP. The package contains a simulated data generator and implements the algorithm ReFHap [12,16]. The simulated data are generated according to five parameters: number of SNPs (haplotype length) n, number of fragments m, average fragment length l, sequencing error rate e, and gap rate g. In our experiments, since we only consider heterozygous SNPs, for each data set a haplotype h_1 containing n SNPs is generated randomly first and then the other haplotype h_2 is obtained by flipping each allele of h_1. For each haplotype, m/2 fragments are randomly sampled from the haplotype, and their lengths follow a normal distribution with mean l and variance 1. Finally, for each fragment, every allele is flipped with probability e to introduce sequencing errors and, except at the first and last positions, every allele is deleted with probability g to introduce gaps. Given fragments generated as above, the average call coverage c is calculated by dividing the total number of alleles of the fragments by the haplotype length n. Please see [16] for more details.
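The generation procedure above can be sketched as follows. This is a minimal Java sketch (the paper's tools are implemented in Java), not the SIH package's actual generator; the class and method names are illustrative.

```java
import java.util.Random;

// Illustrative sketch of the simulated-fragment procedure described above:
// draw a fragment length from N(l, 1), sample a window of the haplotype,
// flip each allele with probability e, and delete interior alleles with
// probability g (first and last covered positions are never deleted).
class FragmentSim {
    static char[] sampleFragment(int[] h, double l, double e, double g, Random rnd) {
        int len = Math.max(2, (int) Math.round(l + rnd.nextGaussian()));
        len = Math.min(len, h.length);
        int start = rnd.nextInt(h.length - len + 1);
        char[] f = new char[h.length];
        java.util.Arrays.fill(f, '-');
        for (int j = start; j < start + len; j++) {
            int allele = h[j];
            if (rnd.nextDouble() < e) allele = 1 - allele;      // sequencing error
            boolean interior = j > start && j < start + len - 1;
            if (interior && rnd.nextDouble() < g) continue;      // gap
            f[j] = (char) ('0' + allele);
        }
        return f;
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        int n = 20;
        int[] h1 = new int[n];
        for (int j = 0; j < n; j++) h1[j] = rnd.nextInt(2);
        System.out.println(new String(sampleFragment(h1, 5.0, 0.02, 0.1, rnd)));
    }
}
```

The complementary haplotype h_2 need not be sampled separately here, since flipping every allele of h_1 and sampling from it is symmetric to sampling from h_1 and flipping the result.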
Among the many algorithms for the SIH problem, HapCUT and ReFHap are two of the most accurate heuristic algorithms [12,16]. Since ReFHap is much faster than HapCUT, we only compare our algorithm with ReFHap. We implemented our algorithm H-BOP in Java, embedded it in the Java package SIH, and tested the accuracies, phased haplotype lengths and running time of H-BOP and ReFHap on simulated data and a real data set provided by [12]. All tests are carried out on a Windows 7 64-bit PC (3GHz CPU, 4GB RAM). To measure the haplotype reconstruction accuracy of an SIH algorithm, the Hamming distance between the reconstructed haplotype pair and the real haplotype pair was previously widely used in the literature [14,23]. However, a recent study [28] showed that it over-penalizes simple switch errors. Therefore, as in [12,16], we use switch errors to measure the accuracy of an algorithm. A switch error is an inconsistency between the reconstructed haplotype pair and the real haplotype pair over two contiguous SNPs. There may be some SNP sites where an algorithm is unable to determine the alleles of a haplotype. The phased haplotype length is defined as the number of SNP sites where the alleles of the reconstructed haplotype pair are determined, and the number of switch errors divided by the phased haplotype length is called the switch error rate.
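One straightforward way to count switch errors as defined above is to track, at each phased site, whether the reconstructed haplotype matches the true h_1 or the true h_2, and count flips of that state between consecutive phased sites. A minimal Java sketch, with illustrative names:

```java
// Count switch errors between a reconstructed haplotype and one true
// haplotype, skipping unphased sites ('-'). A switch error is a flip of the
// match/mismatch relationship between two consecutive phased SNPs.
class SwitchError {
    static int switchErrors(String reconstructed, String truth) {
        int errors = 0;
        int prev = -1; // -1 = no phased site seen yet; else 0 (match) / 1 (mismatch)
        for (int j = 0; j < reconstructed.length(); j++) {
            if (reconstructed.charAt(j) == '-') continue;
            int state = reconstructed.charAt(j) == truth.charAt(j) ? 0 : 1;
            if (prev != -1 && state != prev) errors++;
            prev = state;
        }
        return errors;
    }

    public static void main(String[] args) {
        // Sites 0,1 agree with the truth, sites 2,3 are flipped: one switch.
        System.out.println(switchErrors("0101", "0110")); // prints 1
    }
}
```

Since all loci are heterozygous, a constant mismatch state simply means the reconstruction tracks the other true haplotype, which is not an error.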
If the allele of a fragment f at a SNP site s is known, we say f covers s. When no fragment covers two consecutive loci, no SIH model can determine the haplotype spanning those loci. Therefore, for each test we divide a haplotype into blocks according to the input fragments, as in [16]. A block corresponds to a connected component of a graph G = (V, E) where V is the set of SNP sites and there is an edge between two SNP sites s_1 and s_2 if and only if there is a fragment covering both. The switch errors of an algorithm are the sum of the switch errors over all blocks. In the following simulation tests, the haplotype length n = 100, the gap rate g = 0.1, and each result is the average over 100 repeated experiments, unless otherwise specified.
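The block decomposition above can be computed with a union-find over SNP sites, joining every pair of sites covered by the same fragment (joining consecutive covered sites of each fragment suffices). A minimal Java sketch under the matrix encoding used later in the paper ('0'/'1'/'-' strings):

```java
// Count haplotype blocks: connected components of the SNP-site graph in
// which two sites are joined whenever one fragment covers both.
class Blocks {
    static int find(int[] p, int x) {
        while (p[x] != x) { p[x] = p[p[x]]; x = p[x]; } // path halving
        return x;
    }

    static void union(int[] p, int a, int b) { p[find(p, a)] = find(p, b); }

    static int countBlocks(String[] fragments, int n) {
        int[] parent = new int[n];
        for (int j = 0; j < n; j++) parent[j] = j;
        for (String f : fragments) {
            int prev = -1; // previous covered site of this fragment
            for (int j = 0; j < n; j++) {
                if (f.charAt(j) == '-') continue;
                if (prev != -1) union(parent, prev, j);
                prev = j;
            }
        }
        java.util.Set<Integer> roots = new java.util.HashSet<>();
        for (int j = 0; j < n; j++) roots.add(find(parent, j));
        return roots.size();
    }

    public static void main(String[] args) {
        // Sites {0,1,2} are linked by the two fragments; site 3 is isolated.
        String[] frags = {"01--", "-10-"};
        System.out.println(countBlocks(frags, 4)); // prints 2
    }
}
```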

Parameters of the algorithm H-BOP
There are two parameters, w and k, in H-BOP. The parameter w is a weighting factor: when w = 0, H-BOP seeks a solution with the minimum number of errors corrected, and when w is set sufficiently large, it seeks a solution with a maximum cut of the weighted conflict graph corresponding to the input fragments [16]. The parameter k is the maximum number of intermediate solutions kept at each iteration of H-BOP. When k is large enough, H-BOP in fact becomes an exact algorithm. To choose appropriate values for w and k, we test H-BOP on different combinations of the two parameters.
In Figure 1, the haplotype length n = 100 and the average fragment length l = 3. In Figure 1(a), the number of fragments m = 140 (c = 4.25) and k = 16. In Figures 1(b) and 1(c), m = 210 (c = 6.42) and w = 0.1. Figure 1(a) shows that when the sequencing error rate e increases from 0.01 to 0.05, the switch errors of H-BOP increase accordingly. With e unchanged, the switch errors of H-BOP are larger when w = 0 than when w > 0, most obviously when e = 0.05. This indicates that the objective of minimizing errors corrected alone leads to large switch errors when the sequencing error rate is high. The switch errors of H-BOP are smallest when w = 0.01 and w = 0.1. Figure 1(b) shows that when k increases from 1 to 8, the switch errors decrease accordingly, while increasing k from 8 to 64 brings no further improvement; in fact, since exact optimal solutions may incur large switch errors when sequencing error rates are high, Figure 1(b) shows the switch errors of H-BOP increasing slightly as k grows from 8 to 64. Figure 1(c) shows that the running time of H-BOP increases linearly with k when the haplotype length n, the number of fragments m and the average fragment length l remain fixed. In the following tests, we set w = 0.1 and k = 8 for H-BOP.

Simulation results
We varied the sequencing error rate e, the average fragment length l and the number of fragments m to generate different fragment data sets, and compared the performance of H-BOP and ReFHap. Figure 2 shows that H-BOP and ReFHap are both accurate, making only a few switch errors when reconstructing haplotypes of 100 SNPs. The accuracy of both algorithms decreases as e increases and improves as m increases. These results are consistent with those in [16], which report that the accuracy of an SIH algorithm increases with decreasing sequencing error rate and increasing call coverage. In half of the 48 cases shown in Figure 2, H-BOP produces fewer switch errors than ReFHap, especially when l = 3 and e ≥ 0.02. H-BOP produces more switch errors than ReFHap in only two cases (Figure 2(a), m = 140, e = 0.005 and Figure 2(b), m = 111, e = 0.02). In the remaining 22 cases, H-BOP and ReFHap produce the same number of switch errors.
The phased haplotype lengths of both algorithms generally increase with decreasing e and increasing m. Table 1 shows that when l = 3 and m < 351, or when e = 0.05 (except when l = 10 and m > 44), H-BOP is able to phase more SNPs than ReFHap. In the other cases, the phased haplotype lengths of H-BOP and ReFHap are equal.
We set e = 0.01 and varied the haplotype length n, the number of fragments m and the average fragment length l to compare the running time of H-BOP and ReFHap (Figure 3). When n = 100, l = 3 and m increases from 100 to 400, the running time of H-BOP increases linearly, but the running time of ReFHap increases sharply (Figure 3(a)). When m reaches 400, the running time of H-BOP is only about 4 seconds, while that of ReFHap is about 468 seconds. When n increases from 50 to 200 while m = 200 and l = 3, the average call coverage decreases and the running time of both algorithms decreases accordingly (Figure 3(b)). When m = 200, n = 100 and l increases from 3 to 9, the running time of ReFHap increases significantly while that of H-BOP increases slowly and remains less than 5 seconds (Figure 3(c)). When the number of fragments and the average fragment length increase, the average call coverage c increases accordingly. When c is large, H-BOP runs much faster than ReFHap.

Results on real data
We downloaded a real data set from the SIH website [27], which contains the aligned, sorted fosmid-based NGS DNA sequence fragments and gold-standard haplotypes of a HapMap trio child, NA12878 [12]. The data contain 1,704,166 heterozygous SNP sites and 285,341 fragments; the average fragment length is 18.03 and the average call coverage is 3.02.
Since the coverage is low, there are many consecutive heterozygous SNP site pairs not covered by any fragment, and hence the 23 pairs of chromosomes are divided into 17,839 haplotype blocks. To account for both completeness and quality, Duitama et al. [12] proposed an alternative measure, QAN50 (quality adjusted N50). QAN50 is calculated as follows [12]: (1) Break every haplotype block into the longest possible segments containing no switch errors.
(2) Calculate span (in reference base pairs) from the first phased SNP to the last phased SNP for every segment.
(3) Adjust each span by multiplying it by the phased-SNP ratio (the number of phased SNPs divided by the total number of SNPs) inside the segment, to correct for un-phased SNPs.
(4) Sort segments from the largest to the smallest adjusted span.
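Steps (2)–(4) above can be sketched in Java as follows; segment breaking (step 1) and the final selection of the QAN50 value from the sorted list are omitted here (see Duitama et al. [12] for the full definition), and the class and method names are illustrative.

```java
import java.util.Arrays;

// Compute adjusted spans for QAN50: each segment is described by its span in
// reference base pairs, its number of phased SNPs, and its total SNPs.
class Qan50 {
    static double[] adjustedSpans(int[] span, int[] phased, int[] total) {
        double[] adj = new double[span.length];
        for (int i = 0; i < span.length; i++)
            adj[i] = span[i] * ((double) phased[i] / total[i]); // step (3)
        Arrays.sort(adj);                                       // step (4):
        for (int a = 0, b = adj.length - 1; a < b; a++, b--) {  // largest first
            double t = adj[a]; adj[a] = adj[b]; adj[b] = t;
        }
        return adj;
    }

    public static void main(String[] args) {
        // A 1000 bp segment with 8 of 10 SNPs phased, and a fully phased 400 bp one.
        double[] adj = adjustedSpans(new int[]{1000, 400}, new int[]{8, 5}, new int[]{10, 5});
        System.out.println(Arrays.toString(adj)); // prints [800.0, 400.0]
    }
}
```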

Conclusions
Haplotyping is regarded as one of the hardest challenges in personal genome sequencing. Though some single molecule sequencing technologies have been developed, they are still too expensive and labor-intensive. Computer algorithms are widely used to reconstruct haplotypes in personal genome sequencing. SIH uses computer algorithms to build a pair of haplotypes from an individual's aligned DNA sequence fragments. There are many different combinatorial optimization models for SIH, among which MEC is the most popular and MFC the most recently introduced [16]. In this paper, we combine the two quality measures "errors corrected" and "fragments cut" used in MEC and MFC and introduce a new model, BOP. We design a heuristic dynamic programming algorithm H-BOP to solve the model. With appropriately set parameters, H-BOP is accurate and fast. Extensive simulation experiments show that H-BOP is generally faster and more accurate in assembling haplotypes than a recent state-of-the-art algorithm, ReFHap. When the average fragment length is small and the sequencing error rate is relatively high, H-BOP is significantly more accurate than ReFHap. The running time of H-BOP increases linearly as the average call coverage c increases, and H-BOP runs much faster than ReFHap when c is large. The test on a real data set also shows that H-BOP is superior to ReFHap considering both completeness and quality of the reconstructed haplotypes.

Problem formulation
With the input of aligned DNA sequence fragments derived from a pair of chromosomes, SIH tries to reconstruct a pair of haplotypes of the underlying chromosomes. If there are no sequencing errors and, for any two consecutive (but not necessarily adjacent) heterozygous SNP loci, there is at least one fragment covering both, we can easily determine the linkage relationship between consecutive heterozygous SNPs and thus SIH is easy. However, sequencing errors are unavoidable, which makes the problem complicated. In the following, we first introduce some notations and definitions similar to those in [15,16,20,23], and then propose a new optimization model. Since we only consider the alleles at SNP loci in the SIH problem, the input aligned fragments are encoded as an m × n SNP matrix M [15,16,25], where m is the number of fragments and n the number of SNP loci. The entry at the ith row and the jth column of M is denoted as M[i, j]. M[i, j] takes a value from {0, 1, −}, where '0' (or '1') encodes the major allele (or the minor allele, respectively) in the population and '−' represents an unknown allele. As in previous work [16,25], we assume that all SNP loci are heterozygous and every fragment covers at least two heterozygous SNP loci. If this is not the case, a simple preprocessing as in [22] and [29] can be used to remove homozygous loci and fragments covering only one heterozygous SNP locus. In the remainder of this section, for brevity, the ith row of M is identified with the ith fragment and the jth column of M with the jth SNP locus, unless otherwise specified.
Given an m × n SNP matrix M, let the underlying haplotype pair be H = (H_1, H_2). For a fragment f_i and a haplotype H_g, let s_c(H_g, f_i) denote the number of SNP loci at which f_i takes a non-'−' value different from the corresponding allele of H_g. Let s_c(H, f_i) = min(s_c(H_1, f_i), s_c(H_2, f_i)) and define s_c(H, M) = Σ_{i=1..m} s_c(H, f_i). If the real haplotype pair is H, it is easy to verify that the number of sequencing errors in the input fragments is at least s_c(H, M), which we call the errors corrected measure. By the principle of parsimony, minimizing the number of errors is a natural optimization objective, and hence Minimum Error Correction [20,23,24] is the most popular model for SIH.
Minimum Error Correction (MEC): Given an m × n SNP matrix M, find a haplotype pair H such that s_c(H, M) is minimized.
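The errors corrected measure s_c(H, M) is straightforward to evaluate: each fragment is charged its distance to the closer of the two haplotypes. A minimal Java sketch of the measure (not the paper's implementation), using the '0'/'1'/'−' matrix encoding above with '-' for unknown alleles:

```java
// Evaluate the errors corrected measure s_c(H, M) for a candidate
// haplotype pair (h1, h2) and a set of fragments m.
class MecScore {
    // Mismatches between fragment f and haplotype h at the sites f covers.
    static int sc(String h, String f) {
        int d = 0;
        for (int j = 0; j < f.length(); j++)
            if (f.charAt(j) != '-' && f.charAt(j) != h.charAt(j)) d++;
        return d;
    }

    // s_c(H, M): sum over fragments of the distance to the closer haplotype.
    static int sc(String h1, String h2, String[] m) {
        int total = 0;
        for (String f : m) total += Math.min(sc(h1, f), sc(h2, f));
        return total;
    }

    public static void main(String[] args) {
        String[] m = {"010-", "-101", "0100"};
        // The third fragment needs one correction against either haplotype.
        System.out.println(sc("0101", "1010", m)); // prints 1
    }
}
```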
Based on the underlying haplotype pair H = (H_1, H_2), it is easy to partition the fragments into two groups G_1, G_2 according to the following rule: for each fragment f_i, if s_c(H_1, f_i) < s_c(H_2, f_i), add f_i to G_1; otherwise add f_i to G_2. Let P_H denote the partition (G_1, G_2) obtained by the above rule. A partition P = (G_1, G_2) of a fragment set R is encoded as a map such that P(f) = 0 (or 1) if the fragment f in R belongs to G_1 (or G_2, respectively).
Conversely, given a partition P = (G_1, G_2), a haplotype pair H = (H_1, H_2) can be constructed as follows. Let N_{g,v}[j] be the number of fragments of G_g whose allele at the jth locus is v, for g = 1, 2 and v = 0, 1. For g = 1, 2 and each locus j, set H_g[j] to the majority allele of G_g at locus j, i.e., H_g[j] = 0 if N_{g,0}[j] ≥ N_{g,1}[j] and H_g[j] = 1 otherwise. Let H_P denote the haplotype pair constructed in this way. There is another formulation of MEC equivalent to the above one: given an SNP matrix M, find a partition P of the rows in M such that s_c(H_P, M) is minimized. In the following, s_c(H_P, M) is called the errors corrected measure of P. While MEC aims to find a partition such that the conflict between the fragments in the same group is minimized, Maximum Fragments Cut (MFC) [16] aims to find a partition such that the conflict between the fragments in G_1 and the fragments in G_2 is maximized. For two fragments f_{i1} and f_{i2}, let d(f_{i1}, f_{i2}) be the number of SNP loci covered by both fragments at which they take different alleles minus the number at which they take the same allele, and let G be the weighted conflict graph whose vertices are the fragments and in which the edge between f_{i1} and f_{i2} has weight d(f_{i1}, f_{i2}). A partition P of the rows in M then corresponds to a cut of G. Given an SNP matrix M and a partition P, the fragments cut measure is defined as s_d(P, M) = Σ_{P(f_{i1}) ≠ P(f_{i2})} d(f_{i1}, f_{i2}). Maximum Fragments Cut (MFC) [16]: Given an SNP matrix M, find a partition P of the rows of M such that s_d(P, M) is maximized.
To take into account both the conflict between the two groups and the conflict among the fragments within the same group, we introduce a new score combining the above errors corrected measure and fragments cut measure. Given a partition P of the rows of M, define the partition score as s_p(P, M) = s_c(H_P, M) − w · s_d(P, M), where w ≥ 0 is a weight factor used to adjust the weight of the fragments cut measure. We now propose a new optimization model for the SIH problem.
Balanced Optimal Partition (BOP): Given an SNP matrix M, find a partition P of the rows in M such that s p (P, M) is minimized. A solution to BOP of M is denoted by BOP(M), i.e. a partition with the minimum partition score.
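The full partition score can be evaluated from a group assignment alone: build each group's haplotype by majority vote, compute the errors corrected measure, and subtract w times the fragments cut measure. The following Java sketch reflects our reading of the definitions above and is not the paper's implementation; ties in the majority vote are broken toward '0', a modeling choice the text leaves open.

```java
// Evaluate s_p(P, M) = s_c(H_P, M) - w * s_d(P, M) for a partition given by
// side[i] in {0, 1} (the group of fragment i).
class Bop {
    static String vote(String[] frags, int[] side, int g, int n) {
        StringBuilder h = new StringBuilder();
        for (int j = 0; j < n; j++) {
            int c0 = 0, c1 = 0;
            for (int i = 0; i < frags.length; i++) {
                if (side[i] != g) continue;
                char a = frags[i].charAt(j);
                if (a == '0') c0++; else if (a == '1') c1++;
            }
            h.append(c0 >= c1 ? '0' : '1'); // majority vote, ties -> '0'
        }
        return h.toString();
    }

    static int mismatches(String h, String f) {
        int d = 0;
        for (int j = 0; j < f.length(); j++)
            if (f.charAt(j) != '-' && f.charAt(j) != h.charAt(j)) d++;
        return d;
    }

    // Pairwise score d: disagreeing shared sites minus agreeing shared sites.
    static int d(String f1, String f2) {
        int s = 0;
        for (int j = 0; j < f1.length(); j++)
            if (f1.charAt(j) != '-' && f2.charAt(j) != '-')
                s += f1.charAt(j) != f2.charAt(j) ? 1 : -1;
        return s;
    }

    static double partitionScore(String[] frags, int[] side, int n, double w) {
        String h1 = vote(frags, side, 0, n), h2 = vote(frags, side, 1, n);
        int sc = 0;
        for (String f : frags) sc += Math.min(mismatches(h1, f), mismatches(h2, f));
        int sd = 0;
        for (int a = 0; a < frags.length; a++)
            for (int b = a + 1; b < frags.length; b++)
                if (side[a] != side[b]) sd += d(frags[a], frags[b]);
        return sc - w * sd;
    }

    public static void main(String[] args) {
        // A clean partition: zero corrections, cut weight 10, so s_p = -1.0 at w = 0.1.
        String[] frags = {"010-", "-101", "1010", "0101"};
        int[] side = {0, 0, 1, 0};
        System.out.println(partitionScore(frags, side, 4, 0.1)); // prints -1.0
    }
}
```

Setting w = 0 recovers the MEC objective, while a large w makes the cut term dominate, as described for the model.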
Note that when w = 0, BOP becomes MEC which has been proved NP-hard [24] and APX-hard [26]. Therefore, BOP is NP-hard and APX-hard.

Algorithm
Given an m × n SNP matrix M, there are 2^(m−1) different partitions of the m rows of M. Therefore, when m is large, it is impractical to enumerate all possible partitions and choose one with the minimum partition score. To solve the BOP model of M efficiently, we propose a dynamic programming algorithm in this subsection. We first consider the first row of M, then the first two rows, and so on until all rows of M have been considered.
We need some definitions and notations. Let M[1..i, :] denote the SNP matrix consisting of only the first i rows of M. The first and last columns at which the ith row of M takes non-'−' values are denoted by l(i) and r(i), respectively. For a column j, if l(i) ≤ j ≤ r(i), we say row i spans column j. In the following, we assume that the rows of M are sorted such that if i_1 < i_2, then either l(i_1) < l(i_2), or l(i_1) = l(i_2) and r(i_1) ≤ r(i_2). Let R(i) denote the set of rows in M[1..i, :] that span column l(i).
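The row ordering assumed above is a simple lexicographic sort on (l(i), r(i)). A minimal Java sketch with illustrative names:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sort rows by l(i), breaking ties by r(i), as the dynamic program assumes;
// l and r are the first and last non-'-' columns of a fragment.
class RowOrder {
    static int l(String f) {
        for (int j = 0; j < f.length(); j++) if (f.charAt(j) != '-') return j;
        return f.length(); // all-gap fragment; excluded by preprocessing
    }

    static int r(String f) {
        for (int j = f.length() - 1; j >= 0; j--) if (f.charAt(j) != '-') return j;
        return -1;
    }

    static void sortRows(String[] rows) {
        Arrays.sort(rows, Comparator.comparingInt(RowOrder::l)
                                    .thenComparingInt(RowOrder::r));
    }

    public static void main(String[] args) {
        String[] rows = {"--10", "01--", "-11-"};
        sortRows(rows);
        System.out.println(Arrays.toString(rows)); // prints [01--, -11-, --10]
    }
}
```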
Let P be a partition of a row set R and P′ a partition of a subset R′ of R. If P(i) = P′(i) for every row i ∈ R′, then P′ is called the projection of P on R′ and P is called an extension of P′ on R. Fix a row i and let P be a partition of R(i). An extension E of P on the row set {1, ..., i} is an optimal extension of P if its partition score s_p(E, M[1..i, :]) is the minimum among all extensions of P on {1, ..., i}. Let E_i(P) denote an optimal extension of P and s_p^i(P) its partition score. For the base case i = 1, clearly E_1(P) = P and s_p^1(P) = s_p(P, M[1..1, :]).
After E_i(P) and s_p^i(P) have been calculated for every possible partition P of R(i), we consider the submatrix containing the first i + 1 rows of M. Let R_c(i, i+1) = R(i) ∩ R(i+1); we calculate E_{i+1}(P′) and s_p^{i+1}(P′) for every possible partition P′ of R(i+1) as follows. Let q be the number of rows in R(i) but not in R_c(i, i+1), i.e., q = |R(i) − R_c(i, i+1)|. For a partition P′ of R_c(i, i+1), there are 2^q distinct extensions of P′ on R(i). Let P_m be the one whose partition score is the minimum among all 2^q extensions. Then E_i(P′) and s_p^i(P′) are given by E_i(P′) = E_i(P_m) and s_p^i(P′) = s_p^i(P_m). Since the rows in M are sorted, it is easy to verify that R(i+1) = R_c(i, i+1) ∪ {i+1}, and that the number of all possible distinct partitions of R(i+1) is twice that of R_c(i, i+1). For each partition P′ of R_c(i, i+1), there are two distinct corresponding partitions P_1 and P_2 of R(i+1): P_1(l) = P_2(l) = P′(l) for each l ∈ R_c(i, i+1), while P_1(i+1) = 0 and P_2(i+1) = 1. An optimal extension of P_1 (or P_2) is obtained from E_i(P′) by assigning row i + 1 to the group P_1(i+1) (or P_2(i+1), respectively), and the partition scores are calculated as s_p^{i+1}(P_1) = s_p^i(P′) + δ_c(P_1) − w·δ_d(P_1) and s_p^{i+1}(P_2) = s_p^i(P′) + δ_c(P_2) − w·δ_d(P_2). Here δ_c(P_1) (or δ_c(P_2)) is the difference between the errors corrected measures of E_{i+1}(P_1) (or E_{i+1}(P_2)) and E_i(P′), and δ_d(P_1) (or δ_d(P_2)) is the difference between the fragments cut measures of E_{i+1}(P_1) (or E_{i+1}(P_2)) and E_i(P′). The values δ_c(P) and δ_d(P) are calculated by calling the functions DeltaEC(i, P) and DeltaFC(i, P), respectively.
Based on the above equations, we can construct an exact dynamic programming algorithm for BOP. However, the complexity of this exact algorithm increases exponentially with the number of rows in R(i), which implies that the algorithm is impractical when the call coverage is large. Let P*_i be the projection on R(i) of the global optimal partition of M. If the partition score of P*_i is among the k smallest over all possible partitions of R(i), we only need to compute, in each iteration, the k partitions of R(i) whose partition scores are smallest, without losing the global optimal partition at the end. Based on this idea, we propose a heuristic algorithm H-BOP whose pseudo-code is shown in Figure 4. In the algorithm, a partition P of a row set is encoded by a binary number P, and P(i) is represented by the ith bit of P. Therefore, the number set {0, ..., 2^q − 1} encodes all possible partitions of a row set containing q rows (in fact, there are only 2^(q−1) essentially different partitions, and in our implementation of H-BOP we use a binary number of q − 1 bits to encode a partition, to save time and space). In each iteration of H-BOP, for each row i we maintain a binary max-heap H to store the candidate partitions of R(i) whose partition scores are among the k smallest. The heap H can store at most k elements, and each element L of H contains a partition P, E_i(P) and s_p^i(P), denoted L.P, L.E and L.s, respectively. The value s_p^i(P) of each element in the heap is larger than or equal to those of its two children. When the number of elements of H is smaller than its expected size (i.e., k) and a new element L arrives, the element is inserted into H and H is adjusted to maintain the max-heap property. When the number of elements of H reaches k, L is compared with the root r of H. If L.s < r.s, the root is replaced by L and H is adjusted accordingly; otherwise, the new element is discarded. The above operation is denoted by H.insert(L).
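The bounded max-heap behind H.insert can be sketched with Java's PriorityQueue. In the real algorithm each heap element carries the partition P, E_i(P) and s_p^i(P); this illustrative sketch stores only the scores.

```java
import java.util.PriorityQueue;

// Bounded max-heap keeping only the k smallest partition scores seen so far,
// mirroring the H.insert operation described in the text.
class TopK {
    final int k;
    final PriorityQueue<Double> heap; // max-heap: root is the largest retained score

    TopK(int k) {
        this.k = k;
        this.heap = new PriorityQueue<>(java.util.Collections.reverseOrder());
    }

    void insert(double score) {
        if (heap.size() < k) heap.add(score);
        else if (score < heap.peek()) { heap.poll(); heap.add(score); }
        // otherwise discard: score is not among the k smallest seen so far
    }

    public static void main(String[] args) {
        TopK h = new TopK(3);
        for (double s : new double[]{5, 1, 4, 2, 8, 0}) h.insert(s);
        // The three smallest scores {0, 1, 2} survive; the root is 2.
        System.out.println(h.heap.peek()); // prints 2.0
    }
}
```

Each insert costs O(log k), so limiting the heap to a small constant k is what makes each iteration of H-BOP cheap regardless of how many candidate partitions arrive.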