Automating approximate Bayesian computation by local linear regression
© Thornton. 2009
Received: 02 February 2009
Accepted: 07 July 2009
Published: 07 July 2009
In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC, which uses local linear regression to approximate the posterior distribution of the parameters conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method.
The software package ABCreg implements the local linear-regression approach to ABC. Its advantages are: 1. The code is standalone and fully documented. 2. The program automatically processes multiple data sets and creates unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data and the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and is therefore a general tool for processing the results of any simulation. 6. The code is open-source and modular.
Examples of applying the software to empirical data from Drosophila melanogaster, and testing the procedure on simulated data, are shown.
In practice, ABCreg simplifies implementing ABC based on local linear regression.
In many biological applications, parameter inference for models of interest from data is computationally challenging. Ideally, one would like to infer parameters using either maximum likelihood or Bayesian approaches which explicitly calculate the likelihood of the data given the parameters. While such likelihoods can be calculated for data from non-recombining regions [1, 2] and for data where all sites are independent [3, 4], full-likelihood methods are not currently feasible for many models of interest (complex demography with recombination, for example). Therefore, approximations are desirable.
In the last several years, approximate methods based on summary statistics have gained in popularity. These methods come in several flavors:
1. Simulate a grid over the parameter space in order to calculate the likelihood of the observed summaries, given parameters [5, 6]. The maximum-likelihood estimate is the point on the grid that maximizes the likelihood of the observed summary statistics.
2. The maximum-likelihood algorithm can be modified to perform Bayesian inference by simulating parameters from prior distributions, calculating summary statistics, and accepting the parameters if they are "close enough" to the observed values [7, 8]. The method runs until the desired number of acceptances is obtained, and can be extremely time-consuming. I refer to this approach as rejection sampling; it has been applied in several contexts [9–11].
3. Decide ahead of time how many random draws to take from a prior distribution, then accept the fraction of draws which generate summary statistics closest to the data, according to some distance metric. This is the rejection-sampling approach of , and differs from the approach of [7–11] in that a finite number of simulations are performed from the prior, rather than simulating repeatedly from the prior until a desired number of acceptances is recorded.
4. Take the parameters accepted from Method 3, and regress those acceptances onto the distance between the simulated and observed summary statistics.
The latter three methods are all forms of "Approximate Bayesian Computation" (ABC), a term which applies generally to inference problems that use summary statistics in place of explicit likelihood calculations. The three Bayesian schemes described above are the simplest forms of ABC, and the approach has been extended to explore the parameter space using Markov chain Monte Carlo techniques  and sequential Monte Carlo . Further developments include formalized methods for choosing summary statistics  and methods for model selection . In this paper, I use "regression ABC" to refer to Method 4, the regression approach of . The main appeal of regression ABC is speed, overcoming a major limitation of rejection sampling, which is often too slow for feasible evaluation of the estimator's performance (because high rejection rates are required to obtain reasonable estimates [8, 11]). In general, regression ABC has several appealing features: simplicity of implementation, speed, and flexibility. Flexibility is a key issue, as it allows one to rapidly explore how many, and which, summary statistics to use; this is important, because subtle choices can lead to surprising biases in estimation .
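To make the distinction concrete, the simplest of these schemes (Method 3) can be sketched in a few lines of C++. The model here is a deliberately artificial toy (a uniform prior on the mean of a normal distribution with unit variance, with the sample mean of 50 observations as the lone summary statistic), and all names are illustrative rather than taken from any of the cited methods:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Method 3: draw a fixed number of parameter values from the prior,
// simulate a summary statistic under each, and keep the fraction of
// draws whose summaries lie closest to the observed value.
std::vector<double> rejection_abc(double observed_summary, std::size_t ndraws,
                                  double tolerance, std::mt19937 &rng) {
  std::uniform_real_distribution<double> prior(0.0, 10.0); // U(0,10) on the mean
  std::normal_distribution<double> noise(0.0, 1.0);
  std::vector<std::pair<double, double>> sims; // (distance, parameter)
  for (std::size_t i = 0; i < ndraws; ++i) {
    const double theta = prior(rng);
    double s = 0.0; // the "data": 50 observations reduced to their mean
    for (int j = 0; j < 50; ++j) s += theta + noise(rng);
    s /= 50.0;
    sims.emplace_back(std::fabs(s - observed_summary), theta);
  }
  // Accept the closest tolerance * ndraws draws; these form the
  // approximate posterior sample.
  const std::size_t naccept = static_cast<std::size_t>(tolerance * ndraws);
  std::partial_sort(sims.begin(), sims.begin() + naccept, sims.end());
  std::vector<double> accepted;
  accepted.reserve(naccept);
  for (std::size_t i = 0; i < naccept; ++i) accepted.push_back(sims[i].second);
  return accepted;
}
```

In contrast to Method 2, the total number of simulations is fixed in advance, so the run time is predictable regardless of how improbable the observed summaries are under the prior.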
Currently, many tools are available for the rapid development and testing of summary-statistic-based approaches to inference, including rapid coalescent simulations of both neutral models  and simple models of selection [9, 19, 20], software to calculate summary statistics from simulation output , and open-source statistical packages such as R. At present, the only software implementing the regression algorithm of  is written in the R language, and is available from http://www.rubic.rdg.ac.uk/~mab/. The purpose of this paper is to describe a software package which automates the linear-regression portion of regression ABC analyses in a fast and flexible way, with user-friendly features that simplify automation. The results from the current code have been validated against independent R implementations, and the ABCreg package is fully documented for use by non-programmers.
The software package is called ABCreg, and is distributed as source code from the author's web site (see below). The code compiles to a single binary, reg, which automates all of the regression computations. The code is written in the C++ programming language , and the linear-algebra calculations for the regression are performed using the GNU Scientific Library (GSL, http://www.gnu.org/software/gsl). The C and C++ languages are well-suited to this task because of the speed of compiled programs (often an order of magnitude faster than R). Although the regression step of regression ABC is less computationally demanding than simulating from the prior distribution, it does not follow that the simulations are necessarily the limiting step of an analysis. In practice, one may spend considerable time evaluating the utility of different sets of summary statistics, running the regression portion of the analysis many times on a set of simulated data. It is therefore desirable to optimize the speed of the regression step as well as that of the simulations.
The algorithm implemented is identical to that of . In brief, the reg program performs the following operations:
1. Optional transformation of the simulated parameter values. Two transformations are implemented; for the second, min and max are the lower and upper bounds of the prior, respectively, and this transformation ensures that the posterior distribution is contained within the bounds of the prior. The user may also opt not to transform the simulated values at all.
2. Normalisation of the observed summary statistics and summary statistics simulated from the prior
3. The rejection step, based on accepting the closest δ of Euclidean distances between observed and simulated summary statistics. Here, δ specifies the tolerance for acceptance: the fraction of draws from the prior to accept, specified by the user on the command line.
4. The regression adjustment
5. Back-transformation of the regression-adjusted parameter values and output to files. The program generates one output file per data set in the data file. File names are generated automatically, and the prefix of the file names is controlled by the user. The output files contain tab-delimited columns of regression-adjusted parameter values (i.e., the estimates of the posterior distribution), which are easily processed in R.
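For the special case of one parameter and one summary statistic, steps 2-5 reduce to a few lines. The following C++ sketch is an illustrative reimplementation under that assumption, not ABCreg's internals: ABCreg handles multiple parameters and summaries via GSL least squares, and the kernel weighting of the accepted points used in the original regression-ABC method is omitted here for brevity.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Draw { double theta, s; }; // one draw from the prior and its summary

std::vector<double> regression_adjust(std::vector<Draw> prior_draws,
                                      double s_obs, double tolerance) {
  // Step 2: normalise summaries by their standard deviation across draws.
  double mean = 0.0, var = 0.0;
  for (const Draw &d : prior_draws) mean += d.s;
  mean /= prior_draws.size();
  for (const Draw &d : prior_draws) var += (d.s - mean) * (d.s - mean);
  const double sd = std::sqrt(var / prior_draws.size());
  for (Draw &d : prior_draws) d.s /= sd;
  const double s0 = s_obs / sd;

  // Step 3: accept the closest "tolerance" fraction of the draws (with one
  // summary statistic the Euclidean distance is just an absolute difference).
  std::sort(prior_draws.begin(), prior_draws.end(),
            [s0](const Draw &a, const Draw &b) {
              return std::fabs(a.s - s0) < std::fabs(b.s - s0);
            });
  prior_draws.resize(static_cast<std::size_t>(tolerance * prior_draws.size()));

  // Step 4: ordinary least-squares fit of theta on s among the acceptances.
  double ms = 0.0, mt = 0.0;
  for (const Draw &d : prior_draws) { ms += d.s; mt += d.theta; }
  ms /= prior_draws.size(); mt /= prior_draws.size();
  double sxy = 0.0, sxx = 0.0;
  for (const Draw &d : prior_draws) {
    sxy += (d.s - ms) * (d.theta - mt);
    sxx += (d.s - ms) * (d.s - ms);
  }
  const double slope = sxy / sxx;

  // Step 5: project each accepted theta onto the observed summary; the
  // adjusted values estimate the posterior distribution.
  std::vector<double> posterior;
  for (const Draw &d : prior_draws)
    posterior.push_back(d.theta - slope * (d.s - s0));
  return posterior;
}
```

The adjustment shifts each accepted parameter value along the fitted line to the point where the simulated summary equals the observed one, which is what sharpens the posterior relative to plain rejection.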
Use of the software requires two input files. The first file describes the data (either real or simulated), and contains a space-delimited list of the summary statistics. One can analyze multiple data sets by recording the summary statistics for each data set on a different line of the file. The second input file describes the results of simulating from the prior distribution on the model parameter(s). This "prior file" contains a space-delimited list of the parameters, and the corresponding summary statistics (in the same order as in the data file).
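As an illustration, for a hypothetical model with two parameters and three summary statistics (all values below are made up), a data file describing two data sets would contain two lines:

```
0.0123 5 0.872
0.0098 4 0.911
```

and each line of the matching prior file would hold the two parameter values followed by the same three summary statistics, one line per draw from the prior:

```
1.27 0.05 0.0151 6 0.790
0.44 0.92 0.0076 3 0.934
```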
Additional features include a complete debugging mode, which helps identify cases where the linear regression may fail. In practice, the analysis of some data sets may return non-finite parameter values. Often, this is because the predicted mean value of the regression is so large that back-transformation (± the residuals from the regression) results in a value that cannot be represented on the machine. In debug mode, such cases cause the program to exit immediately with an error; otherwise, the program prints warnings to the screen.
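The failure mode is easy to demonstrate: assuming a natural-log transformation was applied, the adjusted values are recovered with an exponential, which overflows a double once its argument exceeds roughly 709. The check below is a hypothetical sketch of the idea, not ABCreg's actual code:

```cpp
#include <cmath>

// Back-transform a regression-adjusted, log-scale parameter value and
// report whether the result is representable; std::exp overflows to
// infinity for arguments above ~709 in double precision.
bool back_transform_ok(double adjusted_log_value, double &out) {
  out = std::exp(adjusted_log_value);
  return std::isfinite(out);
}
```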
In this section, I show results from applying the ABCreg software to the inference scheme of , who used rejection sampling (Method 2 above) to infer the parameters of a simple population bottleneck model from sequence data obtained from a European population sample of Drosophila melanogaster. This model has three parameters: t_r, the time at which the population recovered from the bottleneck; d, the duration of the bottleneck; and f, the bottleneck severity. The parameters t_r and d are scaled in units of 4N_0 generations, where N_0 is the effective population size at the present time, and f = N_b/N_0 is the ratio of the bottlenecked size to the current size (0 < f ≤ 1). See  for more details of the model. The data consist of 105 X-linked, non-coding loci surveyed by  and another ten from . For each of these 115 non-coding fragments, sequence variation was surveyed in population samples from Zimbabwe and the Netherlands. Thornton and Andolfatto used a two-step approach to the parameter inference. First, a relatively wide uniform prior was used in conjunction with a fairly liberal tolerance for acceptance. Then, the 1st and 99th quantiles of the resulting posterior distributions were used as the bounds of a new, uniform prior, and the acceptance criteria were made more strict. Three summary statistics were used: the variances across loci of nucleotide diversity (π ), the number of haplotypes in the sample, and a summary of the site-frequency spectrum of mutations . The rejection-sampling scheme took two weeks to run on a large computer cluster.
I repeated the analysis using the local regression approach with the same data and the same uniform priors on the parameters (see Table 1 of ). The analysis assumed that ρ = 4N_e r (the population recombination rate) is equal to 10θ (see  for details), and the value of θ (the population mutation rate; see , p. 92) at each locus was obtained by the method of  using data from a Zimbabwe population sample. C++ code was written using the GSL and the coalescent routines in libsequence to sample 5 × 10^6 draws from the prior distribution on the three parameters, and to record the resulting summary statistics. Simulating from the prior took 24 hours on four 2-gigahertz AMD Opteron processors. The tolerance was set such that 10^3 acceptances were recorded for the regression. The model has three parameters, and three summary statistics are used. Once the simulations from the prior distribution were complete, the entire ABC analysis was performed with one command:
reg -P 3 -S 3 -p prior -d data -b data -t 0.0002 -T,
where the arguments specify the number of parameters (-P), the number of summary statistics (-S), the names of the files containing the prior (-p) and the data (-d), the prefix of the output file names (-b), and the tolerance (-t), and -T specifies the transformation described in . The reg command takes seconds to run on a desktop CPU. Thus, the entire inference procedure took roughly one day on four CPUs, compared to the original analysis based on rejection sampling, which took many CPU-months.
The linear regression approach to ABC analysis  is a fast and flexible method of performing parameter inference from population-genetic data. The software described here facilitates such analyses in a flexible way, and is designed to interact seamlessly with widely available tools for population-genetic simulation and statistical analysis.
The source code is distributed under the terms of the GNU General Public License and is available from the software section of the author's web site http://www.molpopgen.org. Documentation is also available online, as is a shell script containing a complete example. The software was developed and tested on Linux and Apple's OS X, using the gcc compiler suite http://gcc.gnu.org. In order to compile and use the software, the GNU C++ compiler (g++) and the GSL must be installed on the system. The GSL is readily available as a pre-compiled package on many Unix-like systems, and is easily installed from source code on any system with a C compiler.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.