Empirical Bayesian LASSO logistic regression for multiple binary trait locus mapping
 Anhui Huang^{1},
 Shizhong Xu^{2} and
 Xiaodong Cai^{1}
DOI: 10.1186/1471-2156-14-5
© Huang et al; licensee BioMed Central Ltd. 2013
Received: 1 October 2012
Accepted: 6 February 2013
Published: 15 February 2013
Abstract
Background
Complex binary traits are influenced by many factors, including the main effects of many quantitative trait loci (QTLs), epistatic effects involving more than one QTL, environmental effects and the effects of gene-environment interactions. Although a number of QTL mapping methods for binary traits have been developed, there is still a lack of an efficient and powerful method that can handle both main and epistatic effects of a relatively large number of possible QTLs.
Results
In this paper, we use a Bayesian logistic regression model as the QTL model for binary traits that includes both main and epistatic effects. Our logistic regression model employs hierarchical priors for the regression coefficients similar to those used in the Bayesian LASSO linear model for multiple QTL mapping of continuous traits. We develop efficient empirical Bayesian algorithms to infer the logistic regression model. Our simulation study shows that our algorithms can easily handle a QTL model with a large number of main and epistatic effects on a personal computer, and outperform five other methods examined, including the LASSO, HyperLasso, BhGLM, RVM and the single-QTL mapping method based on logistic regression, in terms of power of detection and false positive rate. The utility of our algorithms is also demonstrated through analysis of a real data set. A software package implementing the empirical Bayesian algorithms in this paper is freely available upon request.
Conclusions
The EBLASSO logistic regression method can handle a large number of effects, possibly including main and epistatic QTL effects, environmental effects and the effects of gene-environment interactions. It will be a very useful tool for mapping multiple QTLs for complex binary traits.
Keywords
QTL mapping; Binary traits; Epistatic effects; Bayesian shrinkage; Logistic regression
Background
Quantitative traits are usually influenced by multiple quantitative trait loci (QTLs), environmental factors and their interactions [1]. Although complex binary traits show only binary phenotypic variation, in contrast to the continuous variation of quantitative traits, they do not follow a simple Mendelian pattern of inheritance and have a polygenic basis similar to that of quantitative traits. Therefore, like QTL mapping for quantitative traits, mapping for complex binary traits aims to identify multiple genomic loci associated with the trait and to estimate the genetic effects of these loci, possibly including any of the following: main effects, gene-gene interactions (epistatic effects) and the effects of gene-environment interactions.
A number of statistical methods have been developed to identify QTLs for binary traits in experimental crosses. Single-QTL mapping methods [2–8] analyze the association between each individual genetic locus and the trait independently. However, a single-QTL mapping method can only find the main effect of a QTL and cannot detect epistatic effects involving more than one locus. Moreover, it has been shown both theoretically and empirically that multiple-QTL methods can improve power in detecting QTLs and eliminate possible biases in the estimates of QTL locations and genetic effects introduced by a single-QTL model [9, 10]. Therefore, several multiple-QTL mapping methods for binary traits have been developed. These include Bayesian methods [11–16] that rely on Markov chain Monte Carlo (MCMC) simulation to infer a multiple-QTL threshold model for binary, ordinal or longitudinal traits, the multiple-interval mapping (MIM) methods [17, 18] that use the expectation-maximization (EM) algorithm to infer the threshold model or a generalized linear model for binary or ordinal traits, and a method [19] that employs a probability model based on classical transmission genetics to identify binary trait loci. However, all these methods require extensive computation when the QTL model includes a relatively large number of loci.
Since hundreds or thousands of genomic loci or markers are usually genotyped and involved in QTL mapping studies, including all these markers and their possible interactions in a single model for multiple-QTL mapping leads to a huge number of model variables, typically much larger than the sample size. This not only entails a computational burden unaffordable to the existing QTL mapping methods mentioned earlier, but may also reduce the power of detection and/or increase the false discovery rate. Two techniques have been proposed to handle the second problem: variable or model selection, and shrinkage. More specifically, model selection using the Akaike or Bayesian information criterion (AIC or BIC) and variable selection based on stepwise logistic regression and the BIC have been proposed in [19] and [18], respectively, to restrict the model space and to reduce the number of variables included in the model. The Bayesian shrinkage method first applied to QTL mapping for continuous traits [15, 20–24] has also been used in QTL mapping for binary traits [15, 22], employing MCMC for the inference of the QTL model. To reduce the computational burden, more efficient methods [25, 26] were developed to infer Bayesian QTL models for binary traits. Another well-known method for shrinking variables is the least absolute shrinkage and selection operator (LASSO) [27], which has been applied to QTL mapping for continuous traits [24] and investigated for genome-wide association studies (GWAS) of complex diseases [28, 29].
Recently, we developed an efficient empirical Bayesian LASSO (EBLASSO) algorithm for multiple-QTL mapping for continuous traits, which is capable of handling a large number of markers and their interactions simultaneously [30]. In this paper, we extend the linear Bayesian LASSO model [23, 30, 31] to logistic regression to map multiple QTLs for binary traits. We consider a three-level and a two-level hierarchical model for the prior distributions of the regression coefficients. Building on the EBLASSO algorithm [30], we develop efficient empirical Bayesian algorithms to infer the Bayesian LASSO logistic regression model with the two different priors. We then use simulations to compare the performance of our EBLASSO with that of five other QTL mapping methods for binary traits: the LASSO logistic regression [27, 28], the HyperLasso [25], the Bayesian hierarchical generalized linear model (BhGLM) [26], the relevance vector machine (RVM) [32, 33], and the single-QTL mapping method based on logistic regression. Simulation results show that our EBLASSO offers the best overall performance among the examined methods in terms of power of detection and false positive rate. Analysis of a real dataset with our algorithms also identifies several QTLs.
Methods
Logistic regression model
The widely adopted Cockerham genetic model [34] is used in this paper. For a backcross design, the Cockerham model assigns −0.5 and 0.5 to x_{ ij } for the two possible genotypes at marker j. For an intercross (F_{2}) design, there are two possible main effects, named additive and dominance effects. The Cockerham model defines the values of the additive effect as −1, 0 and 1 for the three genotypes, and the values of the dominance effect as −0.5 and 0.5 for homozygotes and heterozygotes, respectively. For simplicity, we only consider additive effects in (1), although the methods developed in this paper also apply to the model with dominance effects.
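As a concrete illustration, the Cockerham codings described above can be sketched as follows (the genotype string labels are our own convention for illustration, not from the paper):

```python
import numpy as np

def code_backcross(genotypes):
    """Backcross design: the two possible genotypes map to -0.5 / 0.5."""
    return np.where(np.asarray(genotypes) == "A1A1", -0.5, 0.5)

def code_f2_additive(genotypes):
    """F2 design, additive effect: A1A1, A1A2, A2A2 map to -1, 0, 1."""
    table = {"A1A1": -1.0, "A1A2": 0.0, "A2A2": 1.0}
    return np.array([table[g] for g in genotypes])

def code_f2_dominance(genotypes):
    """F2 design, dominance effect: homozygotes -0.5, heterozygotes 0.5."""
    return np.where(np.asarray(genotypes) == "A1A2", 0.5, -0.5)
```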
Note that there are k = 1 + m(m + 1)/2 unknown regression coefficients in (1) or (3). Typically, we have k ≫ n. If dominance effects of the markers are considered, k is even larger. Simultaneous estimation of all possible genetic effects with k ≫ n is a challenging problem. However, we would expect most elements of β to be zero, and thus we have a sparse model. We will exploit this sparsity to develop efficient methods to infer β.
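The count k = 1 + m(m + 1)/2 arises from one intercept, m main-effect columns and m(m − 1)/2 pairwise epistatic columns. A minimal sketch of such a design matrix (function name ours):

```python
import numpy as np
from itertools import combinations

def build_design(X):
    """Given an n-by-m matrix of marker codes, return the full design
    matrix: intercept + m main-effect columns + m(m-1)/2 pairwise
    epistatic columns (element-wise products of marker columns),
    for k = 1 + m(m+1)/2 columns in total."""
    n, m = X.shape
    cols = [np.ones(n)] + [X[:, j] for j in range(m)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(m), 2)]
    return np.column_stack(cols)
```

With m = 481 markers as in the simulation study, k = 1 + 481·482/2 = 115,922 columns, which illustrates why k ≫ n.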
Prior distributions for the regression coefficients
We assign a noninformative uniform prior to β_{0}, i.e., p(β_{0}) ∝ 1. For β_{ i }, i = 1, 2, ···, k, we consider two hierarchical models for the prior distribution. The first is the model used in both the linear Bayesian LASSO [23, 30, 31] and the HyperLasso logistic regression [25, 35], which has three levels. At the first level, each β_{ i }, i = 1, 2, ···, k, follows an independent normal distribution with mean zero and variance σ_{ i }^{2}: β_{ i } ∼ N(0, σ_{ i }^{2}). At the second level, σ_{ i }^{2}, i = 1, 2, ···, k, follows an independent exponential distribution with probability density function p(σ_{ i }^{2}) = λ exp(−λσ_{ i }^{2}), with parameter λ > 0. At the third level, λ follows a gamma distribution Gamma(a, b) with shape parameter a and inverse scale parameter b. We name the logistic regression model with this normal-exponential-gamma (NEG) prior BLASSO-NEG.
The three-level hierarchical model has two hyperparameters, a and b. While these two parameters give much flexibility in adjusting the degree of shrinkage, significant computation is required for the cross validation needed to choose their values properly. To reduce this computational burden, we also consider a model with only the first two levels of the BLASSO-NEG. This two-level hierarchical model has only one hyperparameter λ to be adjusted and thus requires less computation. We name the logistic regression model with the two-level prior BLASSO-NE. We next develop two empirical Bayes (EB) algorithms to infer these two models.
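The two hierarchies can be made concrete by sampling from them. The sketch below draws β from each prior using the generative description above; note that numpy's gamma sampler requires a > 0, whereas the paper's algorithm also allows the improper region a > −1.5 (not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_blasso_neg(k, a, b):
    """Three-level NEG prior: lambda ~ Gamma(shape=a, inverse scale=b);
    sigma_i^2 ~ Exponential with density lambda*exp(-lambda*sigma^2);
    beta_i ~ N(0, sigma_i^2). Requires a > 0 for numpy's sampler."""
    lam = rng.gamma(a, 1.0 / b)             # inverse scale b -> scale 1/b
    sigma2 = rng.exponential(1.0 / lam, k)  # rate lambda -> scale 1/lambda
    return rng.normal(0.0, np.sqrt(sigma2))

def sample_blasso_ne(k, lam):
    """Two-level NE prior: sigma_i^2 ~ Exp(rate=lambda); beta_i ~ N(0, sigma_i^2)."""
    sigma2 = rng.exponential(1.0 / lam, k)
    return rng.normal(0.0, np.sqrt(sigma2))
```

Marginally over σ_{ i }^{2}, the NE prior on β_{ i } is the double-exponential (Laplace) prior of the LASSO, which is why both models carry the LASSO name.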
Empirical Bayesian algorithm for the BLASSO-NEG model (EBLASSO-NEG)
where $p\left(\mathit{y}|\mathit{\beta}\right)={\prod}_{i=1}^{n}{p}_{i}^{{y}_{i}}{(1-{p}_{i})}^{1-{y}_{i}}$ and $p\left(\mathit{\beta}|{\mathit{\sigma}}^{2}\right)$ is a normal distribution. Since it is difficult to integrate out β in (6) to obtain the marginal posterior distribution of σ^{ 2 }, it is difficult to estimate σ^{ 2 } directly by maximizing its posterior. To overcome this problem, we employ an iterative approach that relies on a Laplace approximation of the posterior distribution of β [32, 37, 38].
where B_{ MAP } is obtained with ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}$.
If we postulate a linear model $\widehat{\mathit{y}}=\mathbf{X}\mathit{\beta}+\mathit{\epsilon}$, where $\widehat{\mathit{y}}=\mathbf{X}{\widehat{\mathit{\beta}}}_{\mathit{MAP}}+{\mathbf{B}}_{\mathit{MAP}}^{-1}\left(\mathit{y}-{\mathit{p}}_{\mathit{MAP}}\right)$ with p_{ MAP } obtained from ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}$, $\mathit{\epsilon}\sim N\left(0,{\mathbf{B}}_{\mathit{MAP}}^{-1}\right)$ and $\mathit{\beta}\sim N\left(0,{\widehat{\mathbf{A}}}^{-1}\right)$, we can show that $\widehat{p}\left(\mathit{\beta}|\mathit{y},\widehat{\mathbf{A}}\right)$ is the posterior distribution of β in this linear model, as follows. The linear model implies that the posterior distribution of β is normal with mean $\mathbf{\Sigma}{\mathbf{X}}^{T}{\mathbf{B}}_{\mathit{MAP}}\widehat{\mathit{y}}$ and covariance Σ [37]. Hence, we need to prove ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}=\mathbf{\Sigma}{\mathbf{X}}^{T}{\mathbf{B}}_{\mathit{MAP}}\widehat{\mathit{y}}$. Since the gradient g = 0 at ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}$, we have $\widehat{\mathbf{A}}{\widehat{\mathit{\beta}}}_{\mathit{MAP}}={\mathbf{X}}^{T}\left(\mathit{y}-{\mathit{p}}_{\mathit{MAP}}\right)$. From (8), we get $\widehat{\mathbf{A}}={\mathbf{\Sigma}}^{-1}-{\mathbf{X}}^{T}{\mathbf{B}}_{\mathit{MAP}}\mathbf{X}$. Therefore, we have $\left({\mathbf{\Sigma}}^{-1}-{\mathbf{X}}^{T}{\mathbf{B}}_{\mathit{MAP}}\mathbf{X}\right){\widehat{\mathit{\beta}}}_{\mathit{MAP}}={\mathbf{X}}^{T}\left(\mathit{y}-{\mathit{p}}_{\mathit{MAP}}\right)$, which implies that ${\mathbf{\Sigma}}^{-1}{\widehat{\mathit{\beta}}}_{\mathit{MAP}}={\mathbf{X}}^{T}{\mathbf{B}}_{\mathit{MAP}}\left[\mathbf{X}{\widehat{\mathit{\beta}}}_{\mathit{MAP}}+{\mathbf{B}}_{\mathit{MAP}}^{-1}\left(\mathit{y}-{\mathit{p}}_{\mathit{MAP}}\right)\right]={\mathbf{X}}^{T}{\mathbf{B}}_{\mathit{MAP}}\widehat{\mathit{y}}$. This leads to ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}=\mathbf{\Sigma}{\mathbf{X}}^{T}{\mathbf{B}}_{\mathit{MAP}}\widehat{\mathit{y}}$, as desired.
where ŷ and the distribution of ε are defined earlier, but β follows the three-level BLASSO-NEG prior. Then, based on the linear model (9), we use the EBLASSO algorithm [30] to obtain a new estimate of A, which replaces the Â obtained in the (i−1)th iteration. The iteration continues until a convergence criterion is satisfied, which gives the final estimate of A and the Laplace approximation $\widehat{p}\left(\mathit{\beta}|\mathit{y},\widehat{\mathbf{A}}\right)$ of the posterior distribution of β. The iterative process needs to be initialized. As in the EBLASSO [30], we assume that only one regression coefficient is initially nonzero, or equivalently that only one α_{ i } is finite. The EBLASSO-NEG algorithm is summarized as follows:
Algorithm 1 (EBLASSO-NEG)
 1. Initialization: choose a > −1.5 and b > 0. The first basis in the model, x_{ j }, is identified by $j=\mathrm{arg}\,{max}_{i}\left\{\left|{\mathit{x}}_{i}^{T}\left(\mathit{y}-{p}_{0}\right)\right|,\forall i\right\}$, with p_{ 0 } being the proportion of y_{ i } = 1 in the dataset. Compute ${\widehat{\beta}}_{j}={\left({\tilde{\mathit{x}}}_{j}^{T}{\tilde{\mathit{x}}}_{j}\right)}^{-1}{\tilde{\mathit{x}}}_{j}^{T}\left(\mathit{y}-{p}_{0}\right)$ with ${\tilde{\mathit{x}}}_{j}={\mathit{x}}_{j}-\frac{1}{n}{\sum}_{i=1}^{n}{x}_{\mathit{ij}}$, and set ${\alpha}_{j}=1/{\widehat{\beta}}_{j}^{2}$ and all α_{ i }, i ≠ j, notionally to infinity.
 2. While the convergence criteria are not satisfied:
 3. Given Â, use the Newton–Raphson method to find ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}$.
 4. Calculate $\widehat{\mathit{y}}=\mathbf{X}{\widehat{\mathit{\beta}}}_{\mathit{MAP}}+{\mathbf{B}}^{-1}\left(\mathit{y}-\mathit{p}\right)$.
 5. Apply the EBLASSO algorithm [30] to the linear model (9) to update Â.
 6. End while.
 7. Output ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}$ and covariance Σ.
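Steps 3–4 of Algorithm 1 (the Newton–Raphson search for the posterior mode given Â, followed by construction of the pseudo-response of linear model (9)) can be sketched as follows; the EBLASSO variance-component update of step 5 is not reproduced here and would plug in where the returned pseudo-response and covariance are consumed:

```python
import numpy as np

def laplace_step(X, y, alpha, n_iter=50, tol=1e-8):
    """Sketch of steps 3-4 of Algorithm 1: given prior precisions `alpha`
    (the diagonal of A) for the k_m variables currently in the model,
    find beta_MAP by Newton-Raphson, then form the pseudo-response
    y_hat = X beta_MAP + B^{-1}(y - p) of the equivalent linear model."""
    n, km = X.shape
    beta = np.zeros(km)
    A = np.diag(alpha)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        g = X.T @ (y - p) - A @ beta          # gradient of the log posterior
        w = p * (1.0 - p)                     # diagonal of B (logistic variances)
        H = X.T @ (X * w[:, None]) + A        # negative Hessian
        step = np.linalg.solve(H, g)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    w = p * (1.0 - p)
    y_hat = X @ beta + (y - p) / w            # B^{-1}(y - p), elementwise
    Sigma = np.linalg.inv(X.T @ (X * w[:, None]) + A)   # (X^T B X + A)^{-1}
    return beta, y_hat, Sigma
```

Because only the k_m in-model columns enter X here, the linear solves stay small even when the full model has k ≫ n candidate effects, which is the source of the algorithm's efficiency noted below.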
Note that the algorithm starts with a logistic regression model containing only one variable and then iteratively adds variables with a finite α_{ i } to the model. The number of variables in the model, k_{ m }, is typically much smaller than the total number of possible variables k. Since we only need to calculate the gradient g and the Hessian matrix H for the k_{ m } variables in the model, the computation required in step 3 and in the calculation of Σ in (8) is relatively small. Moreover, the EBLASSO algorithm in step 5 is very efficient because the variance components can be estimated in closed form, together with the other algorithmic techniques discussed in [30].
The convergence criteria in Algorithm 1 are: 1) no effect can be added to or deleted from the model; 2) the change in likelihood between two consecutive iterations is less than a prespecified small value; and 3) the total change of α between two consecutive iterations is less than a prespecified small value. In step 5, the EBLASSO algorithm [30] needs to be modified slightly. First, since the noise covariance is known to be B^{−1}, we do not estimate it. Second, since the mean of ŷ in (9) is zero, we do not need to estimate it. Third, we use the formula Σ = (X^{ T }BX + A)^{− 1} to update the covariance of β.
The values of the hyperparameters (a, b) are determined by cross validation, as described in the Results section. The first basis in the initialization step is determined using the method in LASSO logistic regression [28]. The initial value ${\widehat{\beta}}_{j}$ of the first basis is calculated from a linear regression model that uses y as the response variable [39], and the variance σ_{ j }^{ 2 } is approximated as ${\widehat{\beta}}_{j}^{2}$, giving ${\alpha}_{j}=1/{\widehat{\beta}}_{j}^{2}$.
Suppose that the ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}$ output by the EBLASSO-NEG algorithm contains k_{ n } entries. Then the posterior distribution of the corresponding k_{ n } × 1 regression coefficient vector $\widehat{\mathit{\beta}}$ can be approximated by a normal distribution with mean ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}$ and covariance Σ. For the ith entry of $\widehat{\mathit{\beta}}$, we can calculate a t-statistic ${t}_{i}={\widehat{\beta}}_{i}/{\Sigma}_{\mathit{ii}}^{1/2}$ and then use Student's t distribution to calculate a p-value for ${\widehat{\beta}}_{i}$. Markers corresponding to those ${\widehat{\beta}}_{i}$ with a p-value less than a threshold, say 0.05, are identified as QTLs, and the corresponding entries of ${\widehat{\mathit{\beta}}}_{\mathit{MAP}}$ are the estimated effect sizes of the QTLs.
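The t-statistic and two-sided p-value described here can be sketched as follows. The paper refers t_{ i } to a Student's t distribution; with the sample sizes used here, the normal approximation below is nearly identical (swapping in `scipy.stats.t.sf` with an appropriate degrees-of-freedom choice would match the text exactly):

```python
import math

def wald_pvalues(beta_map, Sigma_diag):
    """Two-sided p-values from t_i = beta_i / Sigma_ii^{1/2}, using the
    normal approximation 2*(1 - Phi(|t|)) = erfc(|t| / sqrt(2))."""
    pvals = []
    for b, v in zip(beta_map, Sigma_diag):
        t = b / math.sqrt(v)
        pvals.append(math.erfc(abs(t) / math.sqrt(2.0)))
    return pvals
```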
We next derive an efficient EB algorithm for the BLASSO-NE model, which simplifies hyperparameter selection, since only the optimal value of the single hyperparameter λ needs to be determined by cross validation.
Empirical Bayesian algorithm for the BLASSO-NE model (EBLASSO-NE)
The prior distribution of σ_{ i }^{2}, i = 1, 2, …, k, or equivalently of α_{ i }, i = 1, 2, …, k, is used only in step 5 of the EBLASSO-NEG algorithm. Because the BLASSO-NE model uses a prior distribution different from that of the BLASSO-NEG model, we derive a new formula for estimating A in each iteration. The EBLASSO-NE algorithm then uses the same steps as the EBLASSO-NEG, except that the new formula is used in step 5 to find Â.
where $r=\frac{\left({s}_{i}+4\lambda \right)-\sqrt{\Delta}}{2\left({s}_{i}-{q}_{i}^{2}+2\lambda \right)}\cdot {s}_{i}$ and Δ = s_{ i }^{2} + 8λq_{ i }^{2}.
The EBLASSO-NE algorithm has the same steps as Algorithm 1, with the following two modifications. First, we need to choose a value for the parameter λ in step 1 instead of a and b, which can be done using cross validation. In LASSO logistic regression [28], an upper bound on λ was estimated as ${\lambda}_{\mathit{lasso}}={max}_{j}\left|{\mathit{x}}_{j}^{T}\left(\mathit{y}-{p}_{0}\right)\right|$. In our EBLASSO-NE, we suggest taking the maximum value of λ to be λ_{max} = 1.5λ_{ lasso }, based on our simulations showing that this maximum value usually yields only one nonzero regression coefficient. Second, when applying the EBLASSO algorithm in step 5 of Algorithm 1, equation (12) is used instead of equation (8) of [30] to estimate A.
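The upper end of the λ grid described here is a one-line computation; a sketch (function name and `factor` default ours, following the 1.5λ_{lasso} suggestion in the text):

```python
import numpy as np

def lambda_max(X, y, factor=1.5):
    """Upper end of the lambda grid for EBLASSO-NE cross validation:
    lambda_lasso = max_j |x_j^T (y - p0)|, with p0 the sample proportion
    of ones, scaled by 1.5 as suggested in the text."""
    p0 = y.mean()
    lam_lasso = np.max(np.abs(X.T @ (y - p0)))
    return factor * lam_lasso
```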
Note that since the BLASSO-NEG model for logistic regression considered in this paper uses the same prior as the EBLASSO for linear regression in [30], the EBLASSO-NEG algorithm is essentially a combination of the Laplace approximation and the EBLASSO algorithm [30]. The BLASSO-NE model considered here, however, uses a prior different from that of the EBLASSO [30]. Therefore, we employ (12) to modify the EBLASSO algorithm in [30], and the EBLASSO-NE algorithm is then a combination of the Laplace approximation and this modified EBLASSO algorithm.
Results
Simulation study
A total of m = 481 genetic markers were simulated, evenly spaced on a large chromosome of 2400 centiMorgans (cM) with an interval of d = 5 cM, which gives rise to a correlation of R = e^{−2d} = 0.9048 between adjacent markers (with d in Morgans), since the Haldane map function [40] was assumed. The simulated population was an F_{2} family derived from a cross of two inbred lines. The dummy variable for the three genotypes A_{ 1 }A_{ 1 }, A_{ 1 }A_{ 2 } and A_{ 2 }A_{ 2 } of individual i at marker j was defined as x_{ ij } = −1, 0, 1, respectively. Two simulation setups were employed. In the first setup, 20 markers were QTLs with main effects but without interactions. In the second setup, 10 main and 10 epistatic effects were simulated; a marker could have both main and epistatic effects, while two markers involved in an interaction effect did not necessarily have main effects. The QTLs were selected randomly, with varying distances (5 cM – 500 cM) and effect sizes (in the range between −1.28 and 2.19). Note that QTLs were assumed to coincide with markers in both simulation setups. If QTLs are not on markers, they may still be detected, since the correlation between a QTL and a nearby marker is high, although a slightly larger sample size may be needed to give the same power of detection.
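This simulation setup can be sketched as a Markov chain of gametes along the chromosome, with adjacent-marker recombination fraction r = (1 − e^{−2d})/2 from the Haldane map function (the implementation details below are our own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def gamete(m, r):
    """One gamete: allele (0/1) at marker 1 is uniform; each subsequent
    marker recombines with probability r."""
    a = np.empty(m, dtype=int)
    a[0] = rng.integers(0, 2)
    flips = rng.random(m - 1) < r
    for j in range(1, m):
        a[j] = 1 - a[j - 1] if flips[j - 1] else a[j - 1]
    return a

def simulate_f2(n, m=481, d_cm=5.0):
    """n F2 individuals at m evenly spaced markers: each individual is
    the sum of two independent gametes, giving additive codes -1/0/1."""
    d = d_cm / 100.0                       # marker spacing in Morgans
    r = 0.5 * (1.0 - np.exp(-2.0 * d))     # Haldane recombination fraction
    return np.array([gamete(m, r) + gamete(m, r) - 1 for _ in range(n)])
```

With d = 0.05 Morgans, 1 − 2r = e^{−0.1} ≈ 0.9048, reproducing the adjacent-marker correlation quoted above.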
The EBLASSO-NEG and EBLASSO-NE algorithms were implemented in C and can be called from the R environment [41]; thus, QTL mapping with these two algorithms was carried out in R. To compare the performance of our algorithms with that of other relevant algorithms, we also analyzed the simulated data with the following five QTL mapping methods: the LASSO-regularized logistic regression implemented in the glmnet package [42], the HyperLasso [25], the BhGLM method [26], the RVM [32, 33], and the single-QTL logistic regression test using the glm procedure in R. Simulation results from these methods are presented next.
Simulation results for the dataset with only main effects
True and estimated effects for the simulated data with main effects
locus  True β  EBLASSO-NE $\widehat{\beta}{\left({s}_{\widehat{\beta}}\right)}^{a}$  EBLASSO-NEG $\widehat{\beta}{\left({s}_{\widehat{\beta}}\right)}^{a}$  LASSO $\widehat{\beta}{\left({s}_{\widehat{\beta}}\right)}^{a}$  HyperLasso $\widehat{\beta}{\left({s}_{\widehat{\beta}}\right)}^{a}$  BhGLM $\widehat{\beta}{\left({s}_{\widehat{\beta}}\right)}^{a}$  RVM $\widehat{\beta}{\left({s}_{\widehat{\beta}}\right)}^{a}$  Single QTL $\widehat{\beta}{\left({s}_{\widehat{\beta}}\right)}^{a}$

11  1.99  0.76(0.22)  1.61(0.22)  0.51(0.73)  1.67(0.30)  1.50(0.27)  3.87(0.60)  0.67(0.13) 
26  1.81  0.54(0.19)  1.23(0.21)  −  0.93(0.38)  1.07(0.28)  1.53(0.47)^{ b }  0.73(0.13) 
42  −1.28  −0.34(0.17)  −0.72(0.21)^{ b }  −  −1.02(0.29)  −0.95(0.24)  −  − 
48  −0.91  −0.40(0.19)  −0.82(0.21)  −  −1.12(0.28)^{ b }  −0.91(0.23)  −  − 
72  1.28  −  −  −  −  −  −  0.77(0.14) 
73  1.81  1.37(0.21)  1.91(0.24)  1.03(0.85)  2.37(0.32)  2.16(0.28)  4.73(0.59)  0.80(0.14) 
123  0.63  −  −  −  −  −  −  − 
127  −0.63  −  −  −  −  −  −  − 
161  0.44  0.30(0.15)^{ b }  0.59(0.19)^{ b }  −  0.82(0.25)^{ b }  −  1.15(0.35)  0.57(0.13) 
181  0.99  0.38(0.20)^{ b }  −  −  −  −  −  1.67(0.18) 
182  2.19  1.60(0.29)  2.86(0.31)  1.26(0.86)  3.34(0.38)  −2.73(0.37)  5.01(0.72)  1.89(0.19) 
185  1.29  0.36(0.17)^{ b }  0.56(0.19)^{ b }  0.27(0.43)^{ b }  1.00(0.27)^{ b }  0.73(0.32)  2.38(0.69)^{ b }  1.44(0.16) 
221  −0.75  −  −0.36(0.16)^{ b }  −  −  −  −  − 
243  −0.57  −0.34(0.15)  −0.41(0.16)  −0.26(0.33)  −0.75(0.24)  −0.69(0.20)  −1.74(0.45)  − 
262  −1.28  −  −  −  −  −  −  − 
268  0.91  −  −  −  −  −  2.90(0.62)  − 
270  0.57  −  −  −  −  −  −  − 
274  −0.99  −  −  −  −  −  −1.90(0.46)^{ b }  − 
361  0.41  0.30(0.16)^{ b }  0.40(0.16)^{ b }  0.15(0.56)^{ b }  0.77(0.24)^{ b }  −0.72(0.21)^{ b }  1.80(0.40)^{ b }  − 
461  0.51  −  −  −  −  −  −  − 
Parameter(s)  EBLASSO-NE: λ = 0.050  EBLASSO-NEG: a = 0.01, b = 6  LASSO: λ = 0.0257  HyperLasso: a = 0.1, α = 0.05  BhGLM: ν = 10^{−3}, τ = 10^{−4}
CPU time(s)  25.56  1.31  1.67  1.90  20.64  54.70  8.84  
true/false positive  11/2^{ c }  11/1^{ c }  6/4^{ c }  10/1^{ c }  9/0^{ c }  17/18^{ c }  8/25^{ d } 
The average log-likelihood (denoted logL) from ten-fold cross validation was used to select the optimal value(s) of the hyperparameter(s) in the EBLASSO-NE, the EBLASSO-NEG and the LASSO. Specifically, the dataset was first divided into 10 subsets. Nine subsets were used as the training data to estimate model parameters, and the log-likelihood of the remaining test subset was calculated using the estimated parameters. This process was repeated ten times until every subset had been tested. The logL was the average of the likelihoods obtained from the 10 test subsets.
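The ten-fold procedure described here can be sketched generically; `fit` below is a stand-in for any of the fitting routines compared in the paper and is assumed to return a predictor of P(y = 1):

```python
import numpy as np

def cv_logL(X, y, fit, n_folds=10, seed=0):
    """Average per-observation test log-likelihood over n_folds folds.
    `fit(X_train, y_train)` must return a function p(X_test) giving
    predicted P(y=1) for the held-out subset."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    scores = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        p = fit(X[train], y[train])(X[f])
        p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the log at 0 and 1
        scores.append(np.mean(y[f] * np.log(p) + (1 - y[f]) * np.log(1 - p)))
    return float(np.mean(scores))
```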
Cross validation of the EBLASSO-NE, EBLASSO-NEG and LASSO for the simulation with only main effects
Algorithm  Parameters^{ a }  logL ± STE^{ b }

EBLASSO-NE  λ = 0.0011  −0.39 ± 0.03
EBLASSO-NE  λ = 0.0022  −0.42 ± 0.03
EBLASSO-NE  λ = 0.0447  −0.42 ± 0.04
EBLASSO-NE  λ = 0.0500  −0.36 ± 0.02^{ c }
EBLASSO-NE  λ = 0.0631  −0.39 ± 0.02
EBLASSO-NE  λ = 0.1259  −0.41 ± 0.03
EBLASSO-NE  λ = 0.2512  −0.40 ± 0.01
EBLASSO-NEG  (a, b) = (−0.5, 0.05)  −0.38 ± 0.03
EBLASSO-NEG  (a, b) = (0.01, 0.05)  −0.37 ± 0.02
EBLASSO-NEG  (a, b) = (1, 0.05)  −0.47 ± 0.02
EBLASSO-NEG  (a, b) = (0.01, 5)  −0.39 ± 0.03
EBLASSO-NEG  (a, b) = (0.01, 6)  −0.36 ± 0.02^{ c }
EBLASSO-NEG  (a, b) = (0.01, 7)  −0.37 ± 0.02
LASSO  λ = 0.1037  −0.56 ± 0.02
LASSO  λ = 0.0516  −0.44 ± 0.03
LASSO  λ = 0.0257  −0.37 ± 0.04
LASSO  λ = 0.0128  −0.35 ± 0.05^{ c }
LASSO  λ = 0.0064  −0.36 ± 0.06
The optimal values of parameters a and b in the EBLASSO-NEG were obtained by cross validation in three steps. In the first step, a = b = 0.001, 0.01, 0.1, 1 were examined and the pair (a_{1}, b_{1}) corresponding to the largest logL was obtained. In the second step, b was fixed at b_{1} and a was chosen from the set [−0.5, −0.4, −0.3, −0.2, −0.1, −0.01, 0.01, 0.05, 0.1, 0.5, 1], which yielded an a_{2} corresponding to the largest logL. Note that when one of the two parameters is fixed, the degree of shrinkage is a monotonic function of the other. In the third step, a = a_{2} was fixed and b was varied from 0.01 to 10, with a step size of one for b > 1 and a step size of one on the logarithmic scale for b < 1. This three-step procedure identified the optimal pair of parameters maximizing logL as (a, b) = (0.01, 6). A summary of results for several pairs of a and b and the corresponding logL is given in Table 2. The whole dataset was then analyzed with the optimal parameters, which identified 11 true and 1 false positive effects with p-value ≤ 0.05. The estimated sizes of the true effects and their standard errors are given in Table 1.
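The three-step search just described can be sketched as follows; `score(a, b)` is a stand-in for the cross-validated logL (the grids follow the text; note the second-step grid includes the negative shape values a > −1.5 allowed by the prior):

```python
import numpy as np

def three_step_search(score):
    """Three-step (a, b) grid search for EBLASSO-NEG, as described in the
    text. Step 1: a = b on a coarse grid; step 2: refine a with b fixed;
    step 3: refine b with a fixed (log-scale below 1, steps of one above)."""
    grid = [0.001, 0.01, 0.1, 1.0]
    a1, b1 = max(((g, g) for g in grid), key=lambda ab: score(*ab))
    a_grid = [-0.5, -0.4, -0.3, -0.2, -0.1, -0.01, 0.01, 0.05, 0.1, 0.5, 1.0]
    a2 = max(a_grid, key=lambda a: score(a, b1))
    b_grid = list(np.logspace(-2, 0, 5)) + list(np.arange(2.0, 11.0))
    b3 = max(b_grid, key=lambda b: score(a2, b))
    return a2, b3
```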
For the LASSO logistic regression, the cross validation procedure in the glmnet package [42] automatically identified a maximum value λ_{max} of λ that gave one nonzero effect, and λ was then decreased from λ_{max} to λ_{min} = 0.001λ_{max} with a step size of [log(λ_{max}) − log(λ_{min})]/100 on the logarithmic scale. The largest λ yielding a logL within one standard error of the maximum logL was chosen as the optimal value, giving λ = 0.0257. The whole dataset was then analyzed using the optimal λ to estimate QTL effects, which identified 40 markers with nonzero regression coefficients. The LASSO only estimates regression coefficients, without giving a confidence interval or a p-value for the estimates. If we counted all nonzero coefficients as detected effects, treating those within 20 cM of a QTL as one effect, the LASSO yielded 14 true positive and 12 false positive effects. Moreover, the LASSO typically biases estimates of nonzero coefficients toward zero. To reduce the false positive rate and the bias, we refitted an ordinary logistic regression model with the markers selected by the LASSO. Among the markers with a p-value ≤ 0.05 in the refitted model, 6 were true positives and 4 were false positives. A summary of the cross validation results for the EBLASSO-NE, the EBLASSO-NEG and the LASSO is given in Table 2. The estimated sizes of the true effects and their standard errors are shown along with all 20 true effects in Table 1.
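A debiasing refit of the kind described here can be sketched as a plain maximum-likelihood Newton–Raphson fit on the selected columns (this is our own illustration, not the glmnet code used in the paper):

```python
import numpy as np

def refit_logistic(X, y, n_iter=50):
    """Unpenalized logistic refit of markers selected by the LASSO:
    Newton-Raphson ML fit with an intercept, used to debias the shrunken
    coefficients; the inverse Hessian supplies Wald standard errors."""
    n, k = X.shape
    Xi = np.column_stack([np.ones(n), X])
    beta = np.zeros(k + 1)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(Xi @ beta)))
        w = p * (1.0 - p)
        H = Xi.T @ (Xi * w[:, None])               # observed information
        beta = beta + np.linalg.solve(H, Xi.T @ (y - p))
    return beta, np.linalg.inv(H)                  # H from the converged fit
```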
Summary of results of the HyperLasso for the simulated data with only main effects
Parameters  True/False positive effects^{ a }  

Shape a  Inverse scale b  Type I error α  
0.1  1.7 × 10^{−3}  0.05  10/1^{ b }
0.05  1.5 × 10^{−3}  0.05  9/2
0.01  1.4 × 10^{−3}  0.05  10/2
0.1  9.8 × 10^{−4}  0.01  9/1
0.05  8.8 × 10^{−4}  0.01  9/1
0.01  7.9 × 10^{−4}  0.01  9/1
0.1  5.2 × 10^{−4}  0.05/481  8/1
0.05  4.7 × 10^{−4}  0.05/481  8/1
0.01  4.2 × 10^{−4}  0.05/481  8/1
0.1  3.6 × 10^{−4}  0.01/481  7/0
0.05  3.2 × 10^{−4}  0.01/481  7/0
0.01  2.9 × 10^{−4}  0.01/481  7/0
Summary of results of the BhGLM for the simulated data with only main effects
Parameters  True/False positive effects^{ a }  

ν  τ  
10^{−5}  10^{−5}  9/0
10^{−4}  10^{−5}  9/0
10^{−3}  10^{−5}  9/0
10^{−2}  10^{−5}  9/0
10^{−1}  10^{−5}  9/0
10^{−5}  10^{−4}  9/0
10^{−4}  10^{−4}  9/0
10^{−3}  10^{−4}  9/0^{ b }
10^{−2}  10^{−4}  9/0
10^{−1}  10^{−4}  9/0
10^{−5}  10^{−3}  9/0
10^{−4}  10^{−3}  9/0
10^{−3}  10^{−3}  9/0
10^{−2}  10^{−3}  9/0
10^{−1}  10^{−3}  9/0
The RVM for classification [33] assumes a uniform prior for the variance of the regression coefficients and thus does not involve parameter selection. We used the authors' MATLAB implementation of the RVM [33] to analyze the dataset and identified 35 nonzero effects with a p-value ≤ 0.05. Among these, 17 were true and 18 were false positives. The estimated sizes of the true effects and their standard errors are given in Table 1. The false positive rate was much higher than those of the EBLASSO-NEG, EBLASSO-NE, LASSO and HyperLasso. When we reduced the number of false positive effects to three, a level slightly higher than that of our EBLASSO-NEG and EBLASSO-NE, by lowering the p-value cutoff, we obtained 10 true positive effects, fewer than the number of true effects identified with our EBLASSO-NEG and EBLASSO-NE.
Single-QTL mapping with logistic regression was performed using the glm procedure in R [41]. After Bonferroni correction, effects with a p-value ≤ 0.05/481 = 1.04 × 10^{−4} were considered significant, which identified 8 true and 25 false positive effects. The estimated sizes of the true effects and their standard errors are given in Table 1. The small p-value cutoff imposed by the Bonferroni correction was expected to yield a small false positive rate; nevertheless, single-QTL mapping with Bonferroni correction still gave many more false positive effects than the other methods. Had we used the popular permutation technique for multiple-test correction instead, we would effectively have employed a larger p-value cutoff; although this could increase the power of detection, it would also increase the false positive rate. To see this, we increased the p-value cutoff to 6 × 10^{−4} so that the number of true positive effects detected increased to 11, the same level as that of our EBLASSO-NEG and EBLASSO-NE methods, but the number of false positive effects then increased to 27.
As shown in Table 1, the EBLASSO-NE and the EBLASSO-NEG identified more true effects than the other four methods except the RVM, and yielded a number of false positive effects comparable to those of the LASSO, the HyperLasso and the BhGLM but much smaller than those of the RVM and the single-QTL mapping method. Note that the false negative rate can easily be calculated from the 20 simulated true effects and the true positive effects detected by each method. While the EBLASSO-NE and the EBLASSO-NEG offered similar performance in terms of power of detection and false positive rate, several true effects were detected by only one of them, which implies that the power of detection could be improved by combining the results of the two methods. Similar observations can be made from the simulation results for two more independent replicates described in Additional file 1: Table S1 and Table S2.
It is well known that the LASSO typically selects only one variable among a set of highly correlated variables. This phenomenon is indeed observed in the results in Table 1 and Additional file 1: Tables S1 and S2 for two pairs of highly correlated markers, (72, 73) and (181, 182). It turns out that all the compared methods except the single-QTL mapping method have the same problem, although it tends to be less severe for the EBLASSO-NE. The LASSO is also known to bias the regression coefficients toward zero, as observed in the results in Table 1 and Additional file 1: Tables S1 and S2. Since the EBLASSO-NE uses the same prior distribution as the LASSO, it exhibits the same trend. The RVM, in contrast, inflated all the detected effects, likely due to its small degree of shrinkage. On the other hand, the EBLASSO-NEG, the HyperLasso and the BhGLM tend to detect only one of two highly correlated effects, with an inflated effect size.
All simulations were performed on a personal computer (PC) with a 3.4 GHz Intel Pentium D CPU and 2 GB memory running Windows XP, except that the HyperLasso was run on an IBM BladeCenter cluster with computing nodes using 2.6 GHz Xeon or 2.2 GHz Opteron CPUs running Linux. The speeds of the EBLASSO-NEG, the LASSO and the HyperLasso are comparable and faster than those of the other methods. The speeds of the EBLASSO-NE, the BhGLM and the single-QTL mapping method are comparable.
Simulation results for the model with main and epistatic effects
True and estimated effects for the simulated data with main and epistatic effects
Locus i  Locus j  True β  EBLASSO-NE $\widehat{\beta}\,(s_{\widehat{\beta}})^{a}$  EBLASSO-NEG $\widehat{\beta}\,(s_{\widehat{\beta}})^{a}$  LASSO $\widehat{\beta}\,(s_{\widehat{\beta}})^{a}$  HyperLasso $\widehat{\beta}\,(s_{\widehat{\beta}})^{a}$  Two-locus test $\widehat{\beta}\,(s_{\widehat{\beta}})^{a}$

11  11  1.99  0.83(0.12)  1.66(0.19)  0.72(0.65)  2.21(0.25)  0.88(0.10) 
26  26  1.81  0.46(0.11)  1.42(0.18)  0.39(0.55)  1.73(0.23)  0.56(0.09) 
42  42  −1.28  −0.36(0.11)  −0.87(0.20)  −  −1.59(0.21)^{ b }  − 
48  48  −0.91  −0.19(0.09)^{ b }  −0.68(0.19)^{ b }  −0.14(0.55)^{ b }  −  − 
72  72  1.28  1.01(0.16)  2.53(0.20)  0.92(1.18)  3.17(0.27)  1.08(0.10) 
73  73  1.81  0.40(0.14)  −  −  −  1.04(0.10) 
182  182  2.19  0.50(0.14)  1.57(0.26)  0.51(0.96)  2.03(0.30)  1.23(0.10) 
185  185  1.29  0.69(0.14)  1.49(0.26)  0.57(0.91)  1.88(0.30)  1.23(0.10) 
262  262  −1.28  −0.24(0.09)  −0.70(0.15)  −0.15(0.46)  −0.78(0.19)^{ b }  − 
268  268  0.91  −  −  −  −  − 
5  6  1.28  0.42(0.13)  1.11(0.22)  0.40(0.63)  1.63(0.28)  − 
6  39  1.29  0.38(0.15)^{ b }  1.37(0.23)^{ b }  0.15(1.16)^{ b }  1.28(0.35)^{ b }  − 
42  220  1.99  0.23(0.13)  1.99(0.25)^{ b }  −  2.47(0.32)  0.77(0.14) 
81  200  −1.28  −0.36(0.13)^{ b }  −1.02(0.22)^{ b }  −0.15(1.42)^{ b }  −1.22(0.27)^{ b }  − 
87  164  1.81  0.44(0.17)  1.73(0.25)  0.24(1.44)  2.15(0.32)^{ b }  − 
87  322  2.19  0.90(0.15)  2.10(0.25)  0.74(0.66)  2.44(0.30)  0.79(0.13) 
118  278  −1.28  −0.29(0.12)  −0.76(0.20)  −0.19(1.29)  −0.99(0.26)  − 
328  404  −0.99  −0.21(0.12)^{ b }  −  −0.15(0.73)^{ b }  −1.15(0.30)^{ b }  − 
373  400  −0.91  −0.22(0.12)^{ b }  −1.12(0.22)^{ b }  −0.19(0.87)  −1.23(0.27)  − 
431  439  1.81  0.24(0.13)  1.37(0.24)  −  1.58(0.29)^{ b }  − 
Parameter(s)  λ = 0.1600  a = −0.2, b = 0.1  λ = 0.0254  a = 0.1, α = 0.01  −
CPU time (s)  2037.4  268.6  62.7  1094.6  2936.0
True/False positive  19/5^{ c }  17/4^{ c }  15/26^{ c }  17/7^{ c }  8/18^{ d } 
Cross-validation results of the EBLASSO-NE, the EBLASSO-NEG and the LASSO for the simulation with main and epistatic effects
Algorithm  Parameters^{ a }  logL ± STE^{ b }

0.0631  −0.44 ± 0.04  
0.0891  −0.41 ± 0.04  
0.1259  −0.39 ± 0.03  
EBLASSO-NE  0.1600  −0.37 ± 0.01^{ c }
0.1778  −0.42 ± 0.04  
0.2512  −0.53 ± 0.04  
0.3548  −0.47 ± 0.04  
(−0.4,0.05)  −0.40 ± 0.05  
(−0.2,0.05)  −0.20 ± 0.05  
(−0.1,0.05)  −0.10 ± 0.05  
EBLASSO-NEG  (−0.2,0.01)  −0.35 ± 0.02
(−0.2,0.1)  −0.33 ± 0.02^{ c }  
(−0.2,0.5)  −0.35 ± 0.02  
0.1027  −0.57 ± 0.01  
0.0511  −0.47 ± 0.02  
LASSO  0.0254  −0.37 ± 0.02^{ c } 
0.0127  −0.37 ± 0.03  
0.0063  −0.39 ± 0.04 
Summary of results of the HyperLasso for the simulated data with main and epistatic effects
Parameters  True/False positive effects^{ a }  

Shape a  Inverse scale b  Type I error α
0.1  8.5 × 10^{4}  0.05  17/18 
0.05  7.6 × 10^{4}  19/18  
0.01  6.8 × 10^{4}  19/19  
0.1  4.9 × 10^{4}  0.01  17/7^{ b } 
0.05  4.4 × 10^{4}  17/8  
0.01  3.9 × 10^{4}  17/8  
0.1  1.3 × 10^{4}  $\frac{0.05}{115921}$  5/0 
0.05  1.1 × 10^{4}  5/0  
0.01  1.0 × 10^{4}  8/0  
0.1  1.1 × 10^{4}  $\frac{0.01}{115921}$  5/0 
0.05  1.0 × 10^{4}  5/0  
0.01  0.9 × 10^{4}  5/0 
As shown in Table 5, the EBLASSO-NE and the EBLASSO-NEG detected the same or a larger number of true effects and a smaller number of false positives than the other methods, which demonstrates that our EBLASSO-NE and EBLASSO-NEG offer the best overall performance in terms of power of detection and false positive rate. The LASSO is the fastest method, while the EBLASSO-NEG and the HyperLasso are the second and third fastest. Similar observations were obtained from two more independent simulations, as shown in the results for replicates 2 and 3 presented in Additional file 1.
Real data analysis
We used a mouse data set published by Masinde et al. [43] as an example to test our methods. The trait was the wound healing speed of mice, measured as a binary trait with fast healers denoted by 1 and slow healers denoted by 0. There were 633 F_{2} mice derived from the cross of a fast healer inbred line (MRL/MpJ) and a slow healer inbred line (SJL/J). At age 3 weeks, a 2-mm hole was punched in the lower cartilaginous part of each ear of each F_{2} mouse using a metal ear puncher. Fast healer mice healed completely within 21 days after the ear punch (complete closure of the holes), while the holes of slow healer mice remained open after 21 days. Some of the F_{2} mice healed partially; these mice were phenotypically coded as 1 if the holes were < 0.7 mm and 0 if the holes were > 0.7 mm. The data set consisted of the genotypes of 119 markers across the mouse genome from the 633 samples. Samples with more than 10% of markers missing or with a missing phenotype were removed, resulting in a 532 × 119 genotype matrix with 3.28% missing values. The missing genotypes were inferred from neighboring markers. The total number of possible effects is k = 7,141.
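The effect counts quoted in the paper follow directly from including all main effects plus all pairwise epistatic effects of p markers, i.e. k = p + p(p−1)/2. A minimal sketch (the extra unit for the real data, 7,141 vs. 7,140, presumably accounts for the intercept, which is our assumption, not stated in the text):

```python
# Number of candidate effects in a QTL model with all p main effects and
# all p*(p-1)/2 pairwise epistatic effects.
def num_effects(p: int) -> int:
    return p + p * (p - 1) // 2

# 119 markers in the mouse wound-healing data set:
print(num_effects(119))   # 7140 (the paper's k = 7,141 if the intercept is counted)

# 481 markers would give the 115,921 effects of the large simulated model:
print(num_effects(481))   # 115921
```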
Results for the real data obtained with EBLASSONE, EBLASSONEG, LASSO and HyperLasso
Marker/marker pair^{ a } IDs  Position (Chr, cM)  EBLASSO-NE $\widehat{\beta}\,(s_{\widehat{\beta}})^{b}$  EBLASSO-NEG $\widehat{\beta}\,(s_{\widehat{\beta}})^{b}$  LASSO $\widehat{\beta}\,(s_{\widehat{\beta}})^{b}$  HyperLasso $\widehat{\beta}\,(s_{\widehat{\beta}})^{b}$

D1mit334^{ d }  (1,49.2)  −0.15(0.28)  −  −0.37(0.19)  −0.80(0.18) 
D3mit217^{ d }  (3,43.7)  −0.20(0.30)  −0.62(0.13)  −0.42(0.20)  − 
D4mit214^{ d }  (4,21.9)  −0.24(0.30)  −  −0.46(0.16)  −0.81(0.20) 
D6mit261^{ d }  (6,29.5)  −0.18(0.29)  −0.42(0.12)  −0.56(0.15)  −0.78(0.18) 
D9mit270^{ d }  (9,41.5)  −0.25(0.31)  −0.72(0.13)  −0.39(0.23)  −0.57(0.24) 
D9mit182  (9,53.6)  −0.25(0.32)  −  −0.51(0.20)  −0.80(0.26) 
D13mit228^{ d }  (13,45.9)  −0.12(0.27)^{ c }  −0.40(0.12)^{ c }  0.38(0.20)  0.91(0.19) 
(D1mit19;D17mit176)  (1,37.2;17,12.0)  0.32(0.37)  0.60(0.19)  0.89(0.24)  0.92(0.30) 
(D4mit31^{ d };Dxmit208)  (4,50.3;20,18.6)  0.19(0.33)  0.72(0.18)  0.71(0.23)  0.69(0.29) 
(D7mit246;D11mit242)  (7,12.0;11,31.9)  0.20(0.33)  0.48(0.16)  −  0.71(0.25) 
Masinde et al. [43] previously identified 10 main effects using a single-QTL mapping method and 8 epistatic effects using two-way ANOVA. Among the 7 main effects and 3 epistatic effects identified in our analysis, 6 of the main effects are identical to those identified by Masinde et al., but all 3 epistatic effects are different from theirs. One of the markers involved in an epistatic effect identified in our analysis (D4mit31) was identified as a main effect by Masinde et al. Since our QTL model considers the main and epistatic effects jointly, it may account for both types of effect more reliably than the single-QTL mapping approach and the two-way ANOVA used by Masinde et al. Therefore, the three epistatic effects identified in our analysis may be novel effects worth further experimental investigation.
Some of the identified QTLs are located close to genes that are up-regulated in expression profiles obtained during the inflammation stage of wound healing [44]. Loci D3mit217 and D9mit270 were identified as main effects both in our analysis and in the study of Masinde et al. It turns out that D3mit217 (chr3, 43.7 cM) is close to the genes calgranulin A (chr3, 43.6 cM), CD53 (chr3, 50.5 cM) and small proline-rich protein 1A (chr3, 45.2 cM), and that D9mit270 (chr9, 41.5 cM) is close to the gene annexin A2 (chr9, 37.0 cM). Locus D9mit182 was identified as a main effect in our study but not in the study of Masinde et al. It was found that D9mit182 (chr9, 53.6 cM) is close to chemokine receptor 2 (chr9, 71.9 cM). Among the loci in the three epistatic effects we identified, D11mit242 (chr11, 31.9 cM) is close to chemokine (C-C motif) ligand 4 (chr11, 47.6 cM) and chemokine (C-C motif) ligand 6 (chr11, 41.5 cM). Genes related to growth factors are known to play an important role in wound healing [43, 45]. D7mit246 and D17mit176 are also involved in the epistatic effects we identified; D7mit246 is 5.0 cM away from fibroblast growth factor receptor 2 (FGFR2), and D17mit176 is 12.2 cM away from vascular endothelial growth factor (VEGF).
Discussion
Our EBLASSO-NEG algorithm is based on a Bayesian logistic regression model that uses the same three-level hierarchical prior for the regression coefficients as the Bayesian LASSO linear regression model [23, 30, 31], the Bayesian hyper-Lasso linear regression model [35] and the HyperLasso logistic regression model [25]. The HyperLasso of Hoggart et al. [25] uses a numerical algorithm to estimate the mode of the posterior distribution, whereas our EBLASSO-NEG first estimates the variances of the regression coefficients and then finds an approximation of the posterior distribution of the regression coefficients based on the estimated variances. As shown in our simulation study, our EBLASSO-NEG offers better performance than the HyperLasso of Hoggart et al. in terms of power of detection, false positive rate and speed, especially when the number of possible effects is very large, as in QTL models with both main and epistatic effects.
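The three-level normal-exponential-gamma (NEG) hierarchy referred to above can be written schematically as follows; this follows the standard Bayesian LASSO/NEG formulation in the cited references, and the exact parameterization may differ slightly from the paper's own equations:

```latex
\beta_j \mid \sigma_j^2 \sim \mathcal{N}(0,\,\sigma_j^2), \qquad
\sigma_j^2 \mid \lambda \sim \mathrm{Exponential}(\lambda), \qquad
\lambda \sim \mathrm{Gamma}(a, b),
```

where marginalizing over $\sigma_j^2$ and $\lambda$ yields the NEG density for $\beta_j$. The two-level normal-exponential (NE) prior of the EBLASSO-NE is obtained by fixing $\lambda$ as a hyperparameter instead of assigning it a Gamma distribution, which is why the NE variant has one hyperparameter and the NEG variant has two ($a$ and $b$).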
LASSO logistic regression has been applied to GWAS to identify genomic loci associated with complex diseases [28, 29], and it can be directly employed in QTL mapping for binary traits, as shown in our simulation study and real data analysis. LASSO logistic regression, particularly when implemented with the glmnet algorithm [42], is very efficient. Our EBLASSO-NE algorithm and LASSO logistic regression essentially employ the same two-level prior for the regression coefficients. The major difference, however, is that LASSO logistic regression estimates the mode of the posterior distribution, whereas our EBLASSO-NE algorithm first estimates the variances of the regression coefficients and then finds the posterior distributions of the regression coefficients. In our simulation study, we demonstrated that both our EBLASSO-NE and EBLASSO-NEG algorithms outperform LASSO logistic regression in terms of power of detection and false positive rate, although their speed is lower than that of the LASSO. The good performance of our EBLASSO-NE and EBLASSO-NEG may be due to the fact that model inference using the variances and posterior distributions of the regression coefficients provides more information than the point estimates of the regression coefficients yielded by LASSO logistic regression.
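For readers unfamiliar with L1-penalized logistic regression, a minimal sketch follows. The paper uses the R glmnet package; this stand-in uses scikit-learn's `LogisticRegression`, and the simulated genotype coding, sample size and penalty strength are illustrative assumptions, not the paper's settings.

```python
# LASSO (L1-penalized) logistic regression on a toy binary-trait data set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 50                               # samples, candidate markers
X = rng.choice([-1.0, 1.0], size=(n, p))     # backcross-style genotype coding
logit = 1.5 * X[:, 3] - 1.2 * X[:, 17]       # two true QTL effects
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# C is the inverse of the L1 penalty strength; smaller C -> sparser model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)
nonzero = np.flatnonzero(model.coef_[0])
print(nonzero)   # a sparse set of selected markers
```

The penalty zeroes out most coefficients, so only a small subset of markers is selected, which is the MAP-estimation behavior contrasted with the EBLASSO approach above.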
Another prior distribution commonly employed in Bayesian shrinkage is the mixture of normal and inverse-χ^{2} distributions, as used in the Bayesian linear regression model for continuous traits [23, 24, 46] and in the generalized linear model for continuous or binary traits in the BhGLM method [26]. The BhGLM method uses the EM algorithm to estimate the mode of the posterior distribution, treating the variances of the regression coefficients as missing data. As shown in our simulations for QTL models with only main effects, our EBLASSO-NEG and EBLASSO-NE offer better power of detection than, and a false positive rate comparable to, the BhGLM method. The speed of the EBLASSO-NEG is much higher than that of the BhGLM method, while the speed of the EBLASSO-NE is comparable to it. However, the BhGLM method was not able to handle a QTL model with 115,921 variables and 1,000 samples in our simulation, whereas our EBLASSO-NEG and EBLASSO-NE completed the analysis of this model within 5 min and 35 min, respectively.
Our EBLASSO-NEG and EBLASSO-NE use the same Laplace's method, originally proposed in [38], as the one used in the RVM for classification [32, 33]. However, the prior distributions used by the three methods are different. Although both the uniform prior and the inverse gamma prior for the variances of the regression coefficients were considered in [32, 33], the more efficient RVM algorithm [33] employs the uniform prior. The uniform prior does not have any hyperparameter and lacks the flexibility of adjusting the degree of shrinkage that our EBLASSO-NEG and EBLASSO-NE enjoy. The uniform prior of the RVM causes at least two problems. First, it often does not provide a sufficient degree of shrinkage in multiple QTL mapping, which involves a very large number of possible effects; this results in a large number of false positives, as observed in our simulations in this paper and in the simulation studies of QTL mapping for continuous traits in [30, 46]. Second, because of the relatively low degree of shrinkage, the regression model usually contains a relatively large number of nonzero regression coefficients, which reduces the speed of the algorithm, as seen in our simulations.
Our EBLASSO-NEG and EBLASSO-NE estimate the variances of the regression coefficients iteratively. In each iteration, Laplace's method is first used to obtain an approximation of the posterior distribution, which results in an equivalent linear regression model. The EBLASSO-NEG then uses the EBLASSO algorithm we developed in [30] to estimate the variances; it is therefore essentially a combination of Laplace's method and the EBLASSO algorithm. However, the EBLASSO does not consider the prior of the EBLASSO-NE, and thus we derived a novel formula in (12) and modified the EBLASSO algorithm to estimate the variances for the EBLASSO-NE. The EBLASSO-NEG has two hyperparameters, whereas the EBLASSO-NE has only one, which simplifies cross-validation for selecting the optimal hyperparameter value. Moreover, our simulation studies demonstrated that the EBLASSO-NE identified some QTLs that were not detected by the EBLASSO-NEG, suggesting that combining the two methods can lead to increased power of detection.
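The step that turns the logistic model into an "equivalent linear regression model" can be sketched generically: a quadratic (Laplace) approximation of the Bernoulli log-likelihood at the current estimate yields a weighted linear model with a working response. This is the standard IRLS construction, shown here as an illustration of the idea rather than the authors' exact update formulas:

```python
# One Laplace/IRLS view of logistic regression: at the current beta, the
# log-likelihood is approximated quadratically, giving a weighted linear
# regression with "working response" z = eta + (y - mu) / w.
import numpy as np

def irls_working_model(X, y, beta):
    eta = X @ beta                      # linear predictor
    mu = 1.0 / (1.0 + np.exp(-eta))     # P(y = 1)
    w = mu * (1.0 - mu)                 # weights (negative Hessian diagonal)
    z = eta + (y - mu) / w              # working (pseudo-)response
    return z, w

def irls_fit(X, y, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z, w = irls_working_model(X, y, beta)
        # Weighted least squares on the equivalent linear model.
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, X.T @ (w * z))
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true = np.array([0.3, 1.0, -0.8])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true))).astype(int)
beta_hat = irls_fit(X, y)
print(beta_hat)   # close to the true coefficients
```

In the EBLASSO algorithms the weighted linear model produced at this step is what the variance-estimation machinery of [30] is applied to, rather than a plain weighted least-squares solve as in this unpenalized sketch.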
The full Bayesian methods [11–13, 15, 47] use the threshold model, which employs a latent liability variable to link the binary trait with the QTLs, and then apply MCMC simulation to infer the model. It is well known that MCMC simulation requires a very large amount of computation for a model with a relatively large number of variables. Therefore, these full Bayesian methods may not be computationally efficient for QTL mapping with both main and epistatic effects from a relatively large number of QTLs.
The MIM methods [17, 18] for mapping binary trait loci either do not employ any variable selection technique [17] or use stepwise logistic regression and the BIC to select variables [18]. Hence, it is difficult for them to deal with a QTL model with a relatively large number of loci and epistatic effects, due to their demanding computational burden and possibly large false discovery rate. The probability model in [19] for binary trait mapping entails a model space that increases exponentially with the number of loci. Although the BIC is used for model selection, the computational complexity increases dramatically with the number of loci.
We demonstrated that our EBLASSO-NE and EBLASSO-NEG can easily handle a model with 115,922 variables. If many more variables are involved, as in, e.g., GWAS or QTL mapping with higher-order interactions, our methods can be combined with a variable screening method such as sure independence screening (SIS) [48, 49] to facilitate computation. It is not difficult to extend our EBLASSO-NE and EBLASSO-NEG algorithms to QTL mapping for ordinal traits: we can first apply Laplace's method to obtain an approximately equivalent linear model by deriving the gradient and the Hessian matrix of the posterior distribution, and then apply our efficient EBLASSO algorithm to infer the linear model. Since the multinomial logistic regression model includes more variables than the binary logistic regression model, it will require more computation.
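The screening idea mentioned above can be sketched as follows: rank candidate variables by a marginal association measure and keep only the top d before running the full analysis. This toy version uses the absolute marginal correlation as the ranking score; Fan and Lv's formal SIS procedure (and its generalized-linear-model extension) differs in detail, so treat this purely as an illustration.

```python
# Toy sure-independence-screening (SIS) sketch: keep the d variables with
# the largest absolute marginal correlation with the (binary) trait.
import numpy as np

def sis_screen(X, y, d):
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    score = np.abs(Xs.T @ ys) / len(y)       # |marginal correlation|
    return np.argsort(score)[::-1][:d]       # indices of the top-d variables

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5000))             # many more variables than samples
logit = 2.5 * X[:, 10] - 2.5 * X[:, 999]     # two strong true effects
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

keep = sis_screen(X, y, d=50)                # 5000 -> 50 candidates
print(10 in keep, 999 in keep)               # strong effects should survive
```

After screening, an EBLASSO-type method would be run on the retained columns only, reducing the effect count from millions to a manageable number.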
Conclusions
We have developed two algorithms, named EBLASSO-NEG and EBLASSO-NE, for the inference of Bayesian logistic regression models for multiple QTL mapping. While the EBLASSO-NEG is an extension of the EBLASSO [30] to handle binary traits, the simulations demonstrate that the EBLASSO-NEG algorithm provides superior performance in terms of power of detection and false positive rate compared with five other existing methods. Moreover, the EBLASSO-NEG is faster than four of the other methods tested and slower only than the LASSO algorithm. The hierarchical prior used in the EBLASSO-NE in this paper was not considered in the EBLASSO [30]; here we derived a novel formula, given in equation (12), to estimate the variances of the regression coefficients. Our simulations show that the EBLASSO-NE provides comparable or better power of detection and false positive rate compared with five other existing methods, and that the power of detection can be improved if its results are combined with those of the EBLASSO-NEG. In summary, our EBLASSO-NE and EBLASSO-NEG algorithms provide an efficient tool for QTL mapping for binary traits involving a large number of both main and epistatic effects, and they can be extended to QTL mapping for ordinal traits.
Abbreviations
AIC: Akaike information criterion
BIC: Bayesian information criterion
BhGLM: Bayesian hierarchical generalized linear model
BLASSO: Bayesian LASSO
EB: Empirical Bayes
EBLASSO: Empirical Bayesian LASSO
EM: Expectation-maximization
GWAS: Genome-wide association study
LASSO: Least absolute shrinkage and selection operator
MAP: Maximum a posteriori
MCMC: Markov chain Monte Carlo
MIM: Multiple-interval mapping
NE: Normal-exponential prior
NEG: Normal-exponential-gamma prior
PC: Personal computer
QTL: Quantitative trait locus
RVM: Relevance vector machine
Declarations
Acknowledgments
This work was supported by the National Science Foundation (NSF) under NSF CAREER Award no. 0746882 to XC and by the Agriculture and Food Research Initiative (AFRI) of the USDA National Institute of Food and Agriculture under Plant Genome, Genetics and Breeding Program grant no. 2007-35300-18285 to SX.
References
1. Falconer DS, Mackay TFC: Introduction to Quantitative Genetics. 4th edition. Boston: Addison-Wesley; 1996.
2. Hackett CA, Weller JI: Genetic mapping of quantitative trait loci for traits with ordinal distributions. Biometrics 1995, 51(4):1252-1263.
3. Xu S, Atchley WR: Mapping quantitative trait loci for complex binary diseases using line crosses. Genetics 1996, 143(3):1417-1424.
4. Rao S, Xu S: Mapping quantitative trait loci for ordered categorical traits in four-way crosses. Heredity 1998, 81(2):214-224.
5. Xu S, Yi N, Burke D, Galecki A, Miller RA: An EM algorithm for mapping binary disease loci: application to fibrosarcoma in a four-way cross mouse family. Genet Res 2003, 82(2):127-138.
6. Xu C, Zhang YM, Xu S: An EM algorithm for mapping quantitative resistance loci. Heredity 2004, 94(1):119-128.
7. Xu C, Li Z, Xu S: Joint mapping of quantitative trait loci for multiple binary characters. Genetics 2005, 169(2):1045-1059.
8. Deng W, Chen H, Li Z: A logistic regression mixture model for interval mapping of genetic trait loci affecting binary phenotypes. Genetics 2006, 172(2):1349-1358.
9. Haley CS, Knott SA: A simple regression method for mapping quantitative trait loci in line crosses using flanking markers. Heredity 1992, 69(4):315-324.
10. Martínez O, Curnow RN: Estimating the locations and the sizes of the effects of quantitative trait loci using flanking markers. Theor Appl Genet 1992, 85(4):480-488.
11. Yi N, Xu S: Bayesian mapping of quantitative trait loci for complex binary traits. Genetics 2000, 155(3):1391-1403.
12. Yi N, Xu S: Mapping quantitative trait loci with epistatic effects. Genet Res 2002, 79(2):185-198.
13. Yi N, Xu S, George V, Allison DB: Mapping multiple quantitative trait loci for ordinal traits. Behav Genet 2004, 34(1):3-15.
14. Yi N, Banerjee S, Pomp D, Yandell BS: Bayesian mapping of genome-wide interacting quantitative trait loci for ordinal traits. Genetics 2007, 176(3):1855-1864.
15. Huang H, Eversley CD, Threadgill DW, Zou F: Bayesian multiple quantitative trait loci mapping for complex traits using markers of the entire genome. Genetics 2007, 176(4):2529-2540.
16. Yang R, Li J, Wang X, Zhou X: Bayesian functional mapping of dynamic quantitative traits. Theor Appl Genet 2011, 123(3):483-492.
17. Chen Z, Liu J: Mixture generalized linear models for multiple interval mapping of quantitative trait loci in experimental crosses. Biometrics 2009, 65(2):470-477.
18. Li J, Wang S, Zeng ZB: Multiple-interval mapping for ordinal traits. Genetics 2006, 173(3):1649-1663.
19. Coffman CJ, Doerge RW, Simonsen KL, Nichols KM, Duarte CK, Wolfinger RD, McIntyre LM: Model selection in binary trait locus mapping. Genetics 2005, 170(3):1281-1297.
20. Xu S: Estimating polygenic effects using markers of the entire genome. Genetics 2003, 163(2):789-801.
21. Wang H, Zhang YM, Li X, Masinde GL, Mohan S, Baylink DJ, Xu S: Bayesian shrinkage estimation of quantitative trait loci parameters. Genetics 2005, 170:465-480.
22. Hoti F, Sillanpää MJ: Bayesian mapping of genotype × expression interactions in quantitative and qualitative traits. Heredity 2006, 97(1):4-18.
23. Yi N, Xu S: Bayesian LASSO for quantitative trait loci mapping. Genetics 2008, 179(2):1045-1055.
24. Xu S: An empirical Bayes method for estimating epistatic effects of quantitative trait loci. Biometrics 2007, 63(2):513-521.
25. Hoggart CJ, Whittaker JC, De Iorio M, Balding DJ: Simultaneous analysis of all SNPs in genome-wide and re-sequencing association studies. PLoS Genet 2008, 4(7):e1000130.
26. Yi N, Banerjee S: Hierarchical generalized linear models for multiple quantitative trait locus mapping. Genetics 2009, 181:1101-1133.
27. Tibshirani R: Regression shrinkage and selection via the lasso. J R Stat Soc Series B Stat Methodol 1996, 58(1):267-288.
28. Wu TT, Chen YF, Hastie T, Sobel E, Lange K: Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics 2009, 25:714-721.
29. Ayers KL, Cordell HJ: SNP selection in genome-wide and candidate gene studies via penalized logistic regression. Genet Epidemiol 2010, 34(8):879-891.
30. Cai X, Huang A, Xu S: Fast empirical Bayesian LASSO for multiple quantitative trait locus mapping. BMC Bioinformatics 2011, 12(1):211.
31. Park T, Casella G: The Bayesian lasso. J Am Stat Assoc 2008, 103(482):681-686.
32. Tipping ME: Sparse Bayesian learning and the relevance vector machine. J Mach Learn Res 2001, 1(3):211-244.
33. Tipping ME, Faul AC: Fast marginal likelihood maximisation for sparse Bayesian models. In Proceedings of the 9th International Workshop on Artificial Intelligence and Statistics. Key West, FL; 2003.
34. Cockerham CC: An extension of the concept of partitioning hereditary variance for analysis of covariances among relatives when epistasis is present. Genetics 1954, 39(6):859-882.
35. Griffin JE, Brown PJ: Bayesian hyper-lassos with non-convex penalization. Aust N Z J Stat 2011, 53(4):423-442.
36. Carlin BP, Louis TA: Bayesian Methods for Data Analysis. 3rd edition. London/New York: Chapman & Hall/CRC; 2008.
37. Bishop CM: Pattern Recognition and Machine Learning. New York: Springer; 2006.
38. MacKay DJC: The evidence framework applied to classification networks. Neural Comput 1992, 4(5):720-736.
39. Hastie T, Tibshirani R, Friedman JH: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd edition. New York: Springer; 2009.
40. Wu R, Ma CX, Casella G: Statistical Genetics of Quantitative Traits: Linkage, Maps, and QTL. New York: Springer Science+Business Media, LLC; 2007.
41. R Development Core Team: R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2012.
42. Friedman J, Hastie T, Tibshirani R: Regularization paths for generalized linear models via coordinate descent. J Stat Softw 2010, 33(1):1-22.
43. Masinde GL, Li X, Gu W, Davidson H, Mohan S, Baylink DJ: Identification of wound healing/regeneration quantitative trait loci (QTL) at multiple time points that explain seventy percent of variance in (MRL/MpJ and SJL/J) mice F2 population. Genome Res 2001, 11(12):2027-2033.
44. Li X, Mohan S, Gu W, Baylink DJ: Analysis of gene expression in the wound repair/regeneration process. Mamm Genome 2001, 12(1):52-59.
45. Kunimoto BT: Growth factors in wound healing: the next great innovation? Ostomy Wound Manage 1999, 45(8):56-64.
46. Xu S: An expectation-maximization algorithm for the Lasso estimation of quantitative trait locus effects. Heredity 2010:1-12.
47. Yi N, Shriner D, Banerjee S, Mehta T, Pomp D, Yandell BS: An efficient Bayesian model selection approach for interacting quantitative trait loci models with many effects. Genetics 2007, 176(3):1865-1877.
48. Fan J, Song R: Sure independence screening in generalized linear models with NP-dimensionality. Ann Stat 2010, 38(6):3567-3604.
49. Fan J, Lv J: Sure independence screening for ultrahigh dimensional feature space. J R Stat Soc Series B Stat Methodol 2008, 70(5):849-911.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.