- Methodology article
- Open Access
Artificial neural networks modeling gene-environment interaction
- Frauke Günther1,
- Iris Pigeot1 and
- Karin Bammann1, 2
https://doi.org/10.1186/1471-2156-13-37
© Günther et al.; licensee BioMed Central Ltd. 2012
- Received: 14 February 2012
- Accepted: 1 April 2012
- Published: 14 May 2012
Abstract
Background
Gene-environment interactions play an important role in the etiological pathway of complex diseases. An appropriate statistical method for handling a wide variety of complex situations involving interactions between variables is still lacking, especially when continuous variables are involved. The aim of this paper is to explore the ability of neural networks to model different structures of gene-environment interactions. A simulation study is set up to compare neural networks with standard logistic regression models. Eight different structures of gene-environment interactions are investigated. These structures are characterized by penetrance functions that are based on sigmoid functions or on combinations of linear and non-linear effects of a continuous environmental factor and a genetic factor that either has a main effect of its own or exhibits a masking effect only.
Results
In our simulation study, neural networks are more successful in modeling gene-environment interactions than logistic regression models. This outperformance is especially pronounced when modeling sigmoid penetrance functions, when distinguishing between linear and non-linear components, and when modeling masking effects of the genetic factor.
Conclusion
Our study shows that neural networks are a promising approach for analyzing gene-environment interactions. In particular, if no prior knowledge of the true nature of the relationship between co-variables and the response variable is available, neural networks provide a valuable alternative to regression methods, which are limited to the analysis of linearly separable data.
Keywords
- Gene-environment interaction
- Multilayer perceptron
- MLP
- Neural network
- Pattern recognition
- Simulation study
Background
The etiological pathway of any complex disease can be described as an interplay of genetic and non-genetic underlying causes (e.g. [1–3]). Usually, regression-based methods are applied in the study of complex diseases (e.g. [4–8]). However, regression methods do not necessarily capture the complexity of the interplay of genetic and non-genetic factors. In particular, regression models require pre-processing of the data to reflect any non-linear relationship. First, continuous variables have to be either categorized or transformed according to their assumed form of relationship to the response. Second, interaction terms have to be explicitly included in the regression models to test for any statistical interaction. Third, if no prior knowledge of the functional form of the dose-response relationship is present, a variety of regression models has to be explored. With an increasing number of variables, finding the best model through trial and error is no longer feasible due to the large number of possible models.
For modeling complex relationships, especially with little prior knowledge of the exact nature of these relationships, a more flexible statistical tool should be used. One promising alternative is the use of artificial neural networks. Here, variables do not have to be transformed a priori and interactions are modeled implicitly, that is, they do not have to be formulated in the model beforehand [9]. We previously applied neural networks successfully to model different two-locus disease models, i.e. different types of gene-gene interactions such as epistatic models [10].
Since studies using neural networks for modeling continuous co-variables have previously shown promising results (see e.g. [11–13]), the aim of this paper is to investigate the usability of neural networks for modeling complex diseases that are determined by a gene-environment interaction with a continuously measured environmental factor. Based on simulated data in a case-control design, we analyze the general modeling ability of neural networks for different structures of gene-environment interactions. Theoretic risk models are defined that represent different types of two-way interactions of one genetic and one environmental factor (e.g. [14]). The predicted risk is compared to the theoretic risk to assess the modeling ability. Additionally, neural networks are trained on a real data set to investigate their practicability in a real-life situation. All results are compared to those obtained by logistic regression models as the reference method. Advantages and disadvantages of using a neural network approach are discussed.
Methods
Simulation study
Case-control data sets are generated using a two-step design. First, underlying populations are simulated with a controlled prevalence of 10% and an overall sample size of five million observations. These populations carry the information on two marginally independent and randomly drawn factors – one biallelic locus and one continuous environmental factor – and a case-control status. The minor allele frequency is 30% to ensure sufficient cell frequencies in the final case-control data sets, and Hardy-Weinberg equilibrium is assumed to hold. The environmental factor follows a continuous uniform distribution on the interval [0,100]. Depending on the genotype G and the environmental factor U, the case-control status is allocated through eight given theoretic risk models as introduced in the next subsection. Considering each theoretic risk model in a high and a low risk scenario results in sixteen underlying populations. In the second step, 100 case-control data sets are randomly drawn from each underlying population for each analysis. Thus, for each analysis, mean values over 100 data sets are considered in sixteen situations. Three different sample sizes of 2,000 subjects (1,000 cases + 1,000 controls), 1,000 subjects (500 cases + 500 controls), and 400 subjects (200 cases + 200 controls) are used.
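The following R sketch illustrates this two-step data generation for a single situation. The penetrance function F.pen used here is only a placeholder (a generic sigmoid with a genotype-dependent slope, not one of the calibrated risk models of this paper), so the resulting prevalence will not be exactly 10%; all object names are chosen for illustration only.

```r
## Illustrative sketch of the two-step data generation (not the authors' original code)
set.seed(1)

N   <- 5e6     # population size
maf <- 0.3     # minor allele frequency

## Genotype under Hardy-Weinberg equilibrium, coded as number of mutated alleles
g <- sample(0:2, N, replace = TRUE,
            prob = c((1 - maf)^2, 2 * maf * (1 - maf), maf^2))

## Continuous environmental factor, uniform on [0, 100]
u <- runif(N, min = 0, max = 100)

## Placeholder penetrance function F(g, u); one of the calibrated theoretic
## risk models of the paper would be plugged in here instead
F.pen <- function(g, u) 0.2 / (1 + exp(7.5 - (0.15 + 0.02 * g) * u))

## Case-control status in the underlying population
y <- rbinom(N, size = 1, prob = F.pen(g, u))
pop <- data.frame(y = y, g = g, u = u)

## Step two: draw one case-control data set with 1,000 cases and 1,000 controls
cc.data <- rbind(pop[sample(which(pop$y == 1), 1000), ],
                 pop[sample(which(pop$y == 0), 1000), ])
```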
Artificial neural networks and logistic regression models are fitted to the data, i.e. separately to all 100 case-control data sets for each situation. A multilayer perceptron (MLP, see e.g. [15]) is chosen as neural network; it is briefly described in the Appendix. For the neural networks, the genotype information is coded co-dominantly, i.e. the genotype takes the possible values 0, 1, and 2, representing the number of mutated alleles. The environmental factor enters the analyses as a continuous variable. For all data sets, six different network topologies, with zero up to five hidden neurons, are trained to avoid overfitting the data. For training purposes, the data set is always used as a whole. Each training process is replicated five times, each time with randomly initialized starting weights drawn from a standard normal distribution, to increase the chance that the training process stops in a global rather than a local minimum. The best trained neural network for each data set, i.e. the best network topology and the best repetition, is selected based on the Bayesian Information Criterion (BIC, [16]), which takes the number of parameters into account and penalizes additional parameters. Thus, in each situation, the 100 best neural networks predict the underlying risk model, and the mean prediction can be used to evaluate the model fit (see below).
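A minimal sketch of this fitting step with the neuralnet package [18] might look as follows; it assumes a case-control data set cc.data with columns y, g, and u as in the sketch above and reads the BIC from the result matrix that neuralnet reports when likelihood = TRUE.

```r
## Sketch: train six topologies (0 to 5 hidden neurons), five repetitions each,
## and pick the best network by BIC (assumes cc.data with columns y, g, u)
library(neuralnet)

fits <- lapply(0:5, function(h) {
  neuralnet(y ~ g + u, data = cc.data,
            hidden        = h,          # number of hidden neurons
            rep           = 5,          # repetitions with random starting weights
            algorithm     = "rprop+",   # resilient backpropagation (the default)
            err.fct       = "ce",       # cross entropy
            act.fct       = "logistic", # logistic activation function
            linear.output = FALSE,
            likelihood    = TRUE)       # makes AIC/BIC available in result.matrix
})

## Best topology and repetition according to BIC
bics <- sapply(fits, function(nn) min(nn$result.matrix["bic", ]))
best <- fits[[which.min(bics)]]
```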
For comparison purposes, logistic regression models are fitted to the same data sets. The genotype is coded in two ways: co-dominantly, counting the number of risk alleles, or using two dichotomous design variables, one representing the heterozygous and one the homozygous mutated genotype. Five different models are used: the null model, three main effect models containing one or both main effects, and the full model containing both main effects and one or two interaction terms, depending on the genotype coding. For both coding approaches, the best model is selected based on BIC.
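As an illustration, the candidate models for the design-variable coding could be fitted and compared by BIC as in the following sketch (for the allele-count coding, g1 + g2 would simply be replaced by g); the variable names follow the simulation sketch above.

```r
## Sketch: reference logistic regression models with design-variable coding,
## compared by BIC (assumes cc.data with columns y, g, u)
cc.data$g1 <- as.numeric(cc.data$g == 1)   # heterozygous genotype
cc.data$g2 <- as.numeric(cc.data$g == 2)   # homozygous mutated genotype

models <- list(
  null = glm(y ~ 1,                         family = binomial, data = cc.data),
  gene = glm(y ~ g1 + g2,                   family = binomial, data = cc.data),
  env  = glm(y ~ u,                         family = binomial, data = cc.data),
  main = glm(y ~ g1 + g2 + u,               family = binomial, data = cc.data),
  full = glm(y ~ g1 + g2 + u + g1:u + g2:u, family = binomial, data = cc.data)
)

best.lr <- models[[which.min(sapply(models, BIC))]]
summary(best.lr)
```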
The model fit is evaluated by the mean absolute difference between the theoretic and the predicted risk over a grid of values of the environmental factor,

$$\bar{d} = \frac{1}{100} \sum_{k=1}^{100} \sum_{g=0}^{2} \sum_{u'} \bigl| f(g,u') - \hat{f}_k(g,u') \bigr|,$$

where g = 0,1,2 denotes the genotype, u′ = 0, 0.1, 0.2,…,100 the grid of the environmental factor, f(g,u′) refers to the theoretic risk model of the case-control data set, and $\hat{f}_k(g,u')$ to the prediction of the kth case-control data set. The smaller $\bar{d}$ is, the better the mean model fit of the neural networks or logistic regression models, since the estimated risk model and the theoretic risk model coincide for $\bar{d} = 0$. To take variation into account, pointwise prediction intervals are calculated as empirical 95% intervals. In particular, for all u′ = 0, 0.1, 0.2,…,100 and g = 0,1,2, a prediction interval is determined as the interval $[\hat{f}_{(3)}(g,u'), \hat{f}_{(98)}(g,u')]$, where $\hat{f}_{(3)}$ and $\hat{f}_{(98)}$ denote the 3rd ordered and the 98th ordered prediction, respectively.
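Assuming the 100 predictions for one situation are collected in a matrix pred (one row per data set, one column per grid point (g, u′)) and the corresponding theoretic risks in a vector truth, this summary measure and the pointwise intervals could be computed as in the following sketch; the object names are illustrative only.

```r
## Sketch: mean absolute difference and empirical 95% pointwise prediction
## intervals over 100 trained models (pred: 100 x 3003 matrix, truth: length-3003
## vector on the grid g = 0, 1, 2 and u' = 0, 0.1, ..., 100)
u.grid <- seq(0, 100, by = 0.1)

## Mean over the 100 data sets of the summed absolute differences
d.bar <- mean(rowSums(abs(sweep(pred, 2, truth, "-"))))

## Pointwise interval: 3rd and 98th ordered prediction at every grid point
pred.int <- apply(pred, 2, function(p) sort(p)[c(3, 98)])
```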
Data generation and all analyses are done using R [17]. The package for training the MLP was implemented by our group and is published on CRAN [18].
Theoretic risk models
Two different types of theoretic risk models for gene-environment interactions are used, namely the models introduced by Amato et al. [14] and models mainly representing a masking effect of the involved locus, as defined below. For all risk models, the functional relationship between the penetrance and the environmental factor depends on the genotype, i.e. the curve shape generally differs between the three genotypes. The relationship is defined on the population level: the penetrance function F : {0,1,2} × [0,100] → [0,1] with F(g, u) = P(Y = 1 | G = g, U = u), where Y ∈ {0,1} denotes the case-control status, G ∈ {0,1,2} the genotype, and U ∈ [0,100] the environmental factor, only holds in the corresponding underlying population and has to be converted to the case-control risk f(g, u) if a case-control data set is analyzed [10].
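One standard way to carry out such a conversion is via Bayes' theorem, using the population prevalence and the case fraction of the case-control design; the small sketch below illustrates this idea and is not necessarily the exact formulation used in [10].

```r
## Hedged sketch: convert a population penetrance F(g, u) into the risk f(g, u)
## observable in a case-control sample via Bayes' theorem (illustrative only;
## the exact conversion used in [10] may differ)
cc.risk <- function(F, prev, frac.cases = 0.5) {
  num <- F * frac.cases / prev     # cases are over-sampled by frac.cases / prev
  num / (num + (1 - F) * (1 - frac.cases) / (1 - prev))
}

## Example: population penetrance 0.15, prevalence 10%, balanced design
cc.risk(0.15, prev = 0.10)
```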
Risk models by Amato et al
All four models are based on sigmoid penetrance functions with genotype-specific parameters α_g and β_g (g = 0,1,2) and a constant z; the parameter values used are listed in the table below. The models differ only in the constraints imposed on these parameters:
- the genetic model: α0 ≤ α1 ≤ α2 and β0 = β1 = β2 = 0,
- the environmental model: α0 = α1 = α2 and β0 = β1 = β2 ≠ 0,
- the additive model: α0 ≤ α1 ≤ α2 and β0 = β1 = β2 ≠ 0,
- the interaction model: α0 = α1 = α2 and β0 ≤ β1 ≤ β2.
Used values for α_g, β_g (g = 0,1,2), c, and z

| Risk model | Risk scenario | Constant values α_g, β_g (g = 0,1,2) | Constant values c, z |
|---|---|---|---|
| Risk models by Amato et al. [14] | | | |
| Genetic model | High risk | β0 = β1 = β2 = 0 | z = 0.886 |
| | Low risk | β0 = β1 = β2 = 0 | z = 0.390 |
| Environmental model | High risk | α0 = α1 = α2 = 7.5, β0 = β1 = β2 = −0.15 | z = 0.200 |
| | Low risk | α0 = α1 = α2 = 3.75, β0 = β1 = β2 = −0.075 | z = 0.200 |
| Additive model | High risk | β0 = β1 = β2 = −0.15 | z = 0.177 |
| | Low risk | β0 = β1 = β2 = −0.075 | z = 0.178 |
| Interaction model | High risk | α0 = α1 = α2 = 7.5, β0 = 2 · β1, β1 = −0.15, β2 = 0.5 · β1 | z = 0.171 |
| | Low risk | α0 = α1 = α2 = 3.75, β0 = 2 · β1, β1 = −0.075, β2 = 0.5 · β1 | z = 0.169 |
| Risk models representing a masking effect of the genetic factor | | | |
| Model 1 | High risk (r = 0.150) | | c = 0.05, z = 0.254 |
| | Low risk (r = 0.075) | | |
| Model 2 | High risk (r = 0.150) | | c = 0.05, z = 0.286 |
| | Low risk (r = 0.075) | | |
| Model 3 | High risk (r = 0.150) | | c = 0.075, z = 0.631 |
| | Low risk (r = 0.075) | | |
| Model 4 | High risk (r = 0.150) | | c = 0.075, z = 0.964 |
| | Low risk (r = 0.075) | | |
Theoretic risk models by Amato et al. [14], high risk scenario. The left part of each figure refers to the homozygous wild-type genotype, the middle one to the heterozygous, and the right one to the homozygous mutated genotype.
Risk models representing a masking effect of the genetic factor
1. The structure of the first risk model is given by the following penetrance function F : {0,1,2} × [0,100] → [0,1]
2. The second risk model is defined by
3. In the third risk model, the penetrance function is given by
4. For the fourth risk model, the penetrance function is determined as follows:
Theoretic risk models representing a masking effect of the genetic factor, high risk scenario. The left part of each figure refers to the homozygous wild-type genotype, the middle one to the heterozygous, and the right one to the homozygous mutated genotype.
Real data application
To study the performance of a neural network in a real-life situation, we applied this approach to a cross-sectional study dealing with a lifestyle-induced complex disease. This application serves as an example of the general practicability of our approach without describing the study from a subject-matter point of view. The common effect of an SNP and a continuous environmental factor on a binary outcome is investigated while controlling for the effect of one binary confounder. The data set includes 138 cases and 1599 controls. As in the simulation study, neural networks with up to five hidden neurons are trained, five times each with randomly initialized weights drawn from a standard normal distribution, and the best neural network is chosen based on BIC. The analysis is done once using the whole data set and once stratified by the confounding factor. For the stratified analysis, 95% bootstrap percentile intervals are calculated using 100 bootstrap replications [19].
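The stratified bootstrap step might be sketched as follows; strat.data stands for the data of one stratum of the confounder, the covariate grid and the variable names (y, g, u) are illustrative, and, for brevity, the sketch trains a fixed topology with one hidden neuron per replication, whereas the actual analysis would repeat the BIC-based topology and repetition selection within each bootstrap sample.

```r
## Sketch: 95% bootstrap percentile intervals of the predicted risk in one
## stratum (strat.data with columns y, g, u is an assumed, illustrative object)
library(neuralnet)

B       <- 100
newdata <- expand.grid(g = 0:2, u = seq(0, 100, by = 0.1))
boot.pred <- matrix(NA, nrow = B, ncol = nrow(newdata))

for (b in 1:B) {
  boot.data <- strat.data[sample(nrow(strat.data), replace = TRUE), ]
  nn <- neuralnet(y ~ g + u, data = boot.data, hidden = 1,
                  err.fct = "ce", act.fct = "logistic", linear.output = FALSE)
  boot.pred[b, ] <- compute(nn, newdata[, c("g", "u")])$net.result
}

## Pointwise 95% percentile intervals over the bootstrap replications
boot.ci <- apply(boot.pred, 2, quantile, probs = c(0.025, 0.975))
```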
Results
Risk models by Amato et al.
Graphical comparison of mean predictions. Risk models by Amato et al. [14], high risk scenario, n = 1,000 + 1,000. Graphical comparison of mean predictions for all u ′ = 0, 0.1, 0.2,…,100 and g = 0,1,2, where the rows relate to the different theoretic risk models. Green lines refer to the theoretic risk model, blue lines to the mean predictions, and red lines to the pointwise prediction intervals. DV = design variables.
Graphical comparison of mean predictions. Risk models by Amato et al. [14], low risk scenario, n = 1,000 + 1,000. Graphical comparison of mean predictions for all u ′ = 0, 0.1, 0.2,…,100 and g = 0,1,2, where the rows relate to the different theoretic risk models. Green lines refer to the theoretic risk model, blue lines to the mean predictions, and red lines to the pointwise prediction intervals. DV = design variables.
Differences between theoretic and estimated penetrance functions (models by Amato et al. [14])

| Risk model | High risk: Neural network | High risk: Logistic regression | High risk: Logistic regression (DV) | Low risk: Neural network | Low risk: Logistic regression | Low risk: Logistic regression (DV) |
|---|---|---|---|---|---|---|
| n = 1,000 + 1,000 | | | | | | |
| Genetic model | 40.79 | 31.31 | 48.15 | 48.22 | 40.85 | 83.62 |
| Environmental model | 46.14 | 277.11 | 277.11 | 52.45 | 171.61 | 171.36 |
| Additive model | 45.13 | 256.52 | 260.10 | 47.99 | 163.19 | 189.92 |
| Interaction model | 119.77 | 345.77 | 247.93 | 132.47 | 225.61 | 194.37 |
| n = 500 + 500 | | | | | | |
| Genetic model | 59.28 | 47.14 | 68.22 | 64.27 | 92.02 | 159.80 |
| Environmental model | 60.57 | 277.51 | 277.15 | 90.76 | 174.37 | 174.16 |
| Additive model | 56.10 | 268.11 | 297.62 | 80.66 | 190.25 | 242.34 |
| Interaction model | 138.91 | 344.50 | 268.75 | 153.56 | 233.16 | 210.98 |
| n = 200 + 200 | | | | | | |
| Genetic model | 101.95 | 85.67 | 152.25 | 97.23 | 167.48 | 207.66 |
| Environmental model | 96.32 | 278.40 | 278.93 | 163.16 | 177.14 | 175.27 |
| Additive model | 96.16 | 329.55 | 374.17 | 177.24 | 246.06 | 292.39 |
| Interaction model | 168.90 | 349.88 | 316.01 | 207.81 | 256.22 | 291.88 |

DV = design variables.
If the sample size decreases, the modeling ability becomes worse for neural networks as well as for logistic regression models (see Table 2). However, neural networks still show the best model fit if the environmental factor has an effect. The prediction intervals include the true underlying risk model in all but two situations (interaction model, n = 500 + 500, low risk scenario and interaction model, n = 200 + 200, high risk scenario, data not shown).
Models representing a masking effect of the genetic factor
Graphical comparison of mean predictions. Risk models representing a masking effect of the genetic factor, high risk scenario, n = 1,000 + 1,000. Graphical comparison of mean predictions for all u ′ = 0, 0.1, 0.2,…,100 and g = 0,1,2, where the rows relate to the different theoretic risk models. Green lines refer to the theoretic risk model, blue lines to the mean predictions, and red lines to the pointwise prediction intervals. DV = design variables.
Graphical comparison of mean predictions. Risk models representing a masking effect of the genetic factor, low risk scenario, n = 1,000 + 1,000. Graphical comparison of mean predictions for all u ′ = 0, 0.1, 0.2,…,100 and g = 0,1,2, where the rows relate to the different theoretic risk models. Green lines refer to the theoretic risk model, blue lines to the mean predictions, and red lines to the pointwise prediction intervals. DV = design variables.
Differences between theoretic and estimated penetrance functions (models representing a masking effect of the genetic factor)

| Risk model | High risk: Neural network | High risk: Logistic regression | High risk: Logistic regression (DV) | Low risk: Neural network | Low risk: Logistic regression | Low risk: Logistic regression (DV) |
|---|---|---|---|---|---|---|
| n = 1,000 + 1,000 | | | | | | |
| Model 1 | 38.63 | 211.62 | 105.83 | 41.07 | 195.15 | 87.57 |
| Model 2 | 117.94 | 359.10 | 155.40 | 101.92 | 323.89 | 114.71 |
| Model 3 | 40.67 | 253.01 | 85.51 | 43.15 | 258.19 | 65.87 |
| Model 4 | 103.37 | 228.10 | 85.16 | 103.63 | 227.50 | 59.74 |
| n = 500 + 500 | | | | | | |
| Model 1 | 54.58 | 219.39 | 136.26 | 70.40 | 207.97 | 140.74 |
| Model 2 | 144.35 | 363.36 | 176.74 | 183.28 | 327.58 | 143.06 |
| Model 3 | 60.98 | 261.86 | 110.93 | 66.25 | 278.61 | 114.68 |
| Model 4 | 143.62 | 235.44 | 102.13 | 115.59 | 237.14 | 81.13 |
| n = 200 + 200 | | | | | | |
| Model 1 | 126.56 | 252.88 | 251.70 | 192.47 | 244.17 | 225.63 |
| Model 2 | 262.92 | 371.69 | 230.25 | 297.68 | 348.46 | 215.70 |
| Model 3 | 139.27 | 324.55 | 215.12 | 141.28 | 328.64 | 191.61 |
| Model 4 | 189.69 | 287.39 | 169.86 | 164.13 | 280.21 | 149.95 |

DV = design variables.
With decreasing sample sizes, the model fit again becomes worse and the variance increases (data not shown). If the sample size is 500 + 500 subjects, neural networks again have the best model fit for the first three risk models in the high risk scenario. In the low risk scenario, this is only true for the first and the third risk model. A sample size of just 200 + 200 subjects leads to a considerably worse model fit of neural networks. In this situation, logistic regression models with design variables coding the genotype have the best model fit for the second and fourth risk model in both risk scenarios. Neural networks still have the best model fit if the gene has a masking effect only.
Real data application
Real data set application. Prediction of the neural network using the whole data set. Two lines per genotype result from the inclusion of a binary confounding factor in the analysis. 138 cases and 1599 controls.
Real data set application, stratified analysis. Mean predictions of the neural network over 100 bootstrap replications (blue lines) and 95% bootstrap confidence intervals (red lines). n = 112 + 916 (cases+controls) for value 1 of the confounding factor and n = 26 + 683 (cases+controls) for value 2 of the confounding factor.
Discussion
In this paper, we studied the ability of neural networks and logistic regression models to capture different types of gene-environment interactions. Neural networks were able to predict the theoretic risk models in all sixteen investigated situations, such that the prediction intervals contained the true underlying risk models in most situations, and were thus superior to logistic regression models. Logistic regression models without design variables completely failed to model the constant effects. Employing design variables led to a considerably better model fit only when average values over the 100 data sets were considered. Single predictions for one data set often had a misleading form and did not distinguish between linear and non-linear components, especially for the first two risk models. Nevertheless, for risk model 4, logistic regression models using design variables provided the best model fit compared with neural networks, as could be seen from the mean absolute differences, although the prediction interval did not include the whole true risk model; the reason for this finding remains unclear. The real data application showed the general usability of neural networks in real-life situations. Neural networks discovered different risk slopes for each genotype, which also became obvious from the corresponding bootstrap confidence intervals.
Neural networks do not use explicit interaction terms. In our application, they mainly needed one or two hidden neurons if the environmental factor had an effect (risk models by [14]); for the risk models representing a masking effect of the genetic factor, they needed one hidden neuron if the locus only had a masking effect and two hidden neurons if the locus had a main effect of its own. For logistic regression, the correct main effect models were mainly selected, based on BIC, as best models for the genetic and the environmental model, and full models were selected for the additive and the interaction model. Thus, the latter two risk models cannot be distinguished from each other based on the co-variables included. Logistic regression models mainly needed an interaction term to model the underlying risk models representing a masking effect of the genetic factor, regardless of whether the genotype was coded co-dominantly or using design variables (data not shown).
Logistic regression models belong to the class of generalized linear models and as such are limited in their modeling capacity to linearly separable data. In contrast, neural networks can adapt to any piecewise continuous function. Since linear and non-linear relationships can be modeled simultaneously, neural networks are a promising tool if little is known about the exact relationship between co-variables and a response variable or, especially, if a non-linear relationship is assumed.
Mean prediction of the neural network. Risk model assumes no association. Mean prediction of the neural network for all u ′ = 0, 0.1, 0.2,…,100 and g = 0,1,2. Green lines refer to the theoretic risk model, blue lines to the mean predictions, and red lines to the pointwise prediction intervals. n = 1,000 + 1,000.
Thus, our results suggest that neural networks can be a valuable approach already with 500 cases and 500 controls. However, there are two main drawbacks of neural networks. First, the computing time needed to train them is very high. In our application, the analyses for one situation (100 replications, six network topologies each) sometimes took more than 30 hours. Second, neural networks are still considered a black-box approach, since neither the network topology nor the trained weights have a direct interpretation. Thus, there is no established way for model selection and parameter testing. One possibility to estimate the effect of a co-variable is provided by the concept of generalized weights [20]. The aim of this paper was to investigate the general modeling ability of neural networks as a first step. Further research should be devoted to the limited interpretability of trained neural networks.
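In the neuralnet package, the generalized weights of a trained network are directly available and can be plotted against a co-variable; the sketch below assumes a trained object best with co-variables g and u as in the fitting sketch above.

```r
## Sketch: inspecting generalized weights [20] of a trained neuralnet object
head(best$generalized.weights[[1]])   # one column per co-variable, first repetition

## Generalized weights of the environmental factor plotted against its values:
## roughly constant values suggest a linear effect, widely scattered values
## suggest a non-linear effect or an interaction
gwplot(best, rep = "best", selected.covariate = "u")
```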
Differences between theoretic and estimated penetrance functions (sensitivity analysis: low minor allele frequency)

| Risk model | High risk: Neural network | High risk: Logistic regression | High risk: Logistic regression (DV) | Low risk: Neural network | Low risk: Logistic regression | Low risk: Logistic regression (DV) |
|---|---|---|---|---|---|---|
| n = 1,000 + 1,000 | | | | | | |
| Genetic model | 80.29 | 80.39 | 303.07∗ | 87.65 | 209.74 | 249.96 |
| Environmental model | 79.60 | 278.32 | 277.18 | 78.18 | 170.94 | 170.94 |
| Additive model | 74.67 | 369.57 | 443.10 | 92.18 | 303.98 | 348.50 |
| Interaction model | 180.02 | 415.60 | 541.02∗ | 191.77 | 327.44 | 481.62∗ |
| Model 1 | 113.62 | 244.87 | 375.43∗ | 179.23 | 226.03 | 355.59∗ |
| Model 2 | 232.75 | 389.70 | 472.47∗ | 318.57 | 346.57 | 460.08∗ |
| Model 3 | 253.00 | 230.12 | 232.20 | 256.38 | 253.67 | 254.80 |
| Model 4 | 133.91 | 126.27 | 97.92 | 138.28 | 132.11 | 93.04 |

DV = design variables.
Conclusions
To the best of our knowledge, neural networks have not been used for modeling gene-environment interactions so far. In other contexts, MLPs were clearly superior to logistic regression models [21, 22]. Previously, we have successfully employed neural networks for the analysis of gene-gene interactions in simulation studies [10]. This paper shows that the advantages of neural networks are even more pronounced when modeling gene-environment interactions with continuous environmental factors.
In practice, neural networks can be applied in case-control studies to investigate the common effect of two genetic factors or of one genetic and one environmental factor. Since the functional form of the model does not have to be specified for neural networks, it is not necessary to know in advance whether the two factors involved actually have an effect on the disease or whether an interaction between them is present. The prediction of a neural network provides insight into the kind of relationship between co-variables and disease, for example, whether the underlying relationship is non-linear or whether the relationship differs between genotypes. Thus, although there is still need for further research regarding the interpretability of neural networks, they are already a valuable statistical tool, especially for exploratory analyses and/or when little is known about the functional relationship between risk factors and the investigated disease.
Appendix
Artificial neural networks
The general idea of a multilayer perceptron (MLP) is to approximate functional relationships between co-variables and response variable(s). It consists of neurons and synapses that are organized as a weighted directed graph. The neurons are arranged in layers, and subsequent layers are usually fully connected by synapses. Each synapse carries a weight indicating the effect of this synapse: a positive weight indicates an amplifying, a negative weight a repressing effect. Neural networks have to be trained using a learning algorithm that adjusts the synaptic weights according to the given data. The learning algorithm minimizes the deviation between the predicted output and the observed response, measured by an error function.
A multilayer perceptron. An MLP with one hidden layer consisting of three hidden neurons.
The output of an MLP with one hidden layer of m hidden neurons is given by

$$o(\mathbf{x}) = \sigma\Bigl(w_0 + \sum_{j=1}^{m} w_j\,\sigma\Bigl(\sum_{i=0}^{n} w_{ij}\,x_i\Bigr)\Bigr),$$

where w_0, w_j, and w_ij, i = 0,…,n, j = 1,…,m, denote the weights including intercepts, x = (x_0, x_1,…,x_n)^T the vector of all co-variables including a constant neuron x_0, and σ the activation function that maps the output of each neuron to a given range. MLPs are a direct extension of generalized linear models (GLM, [24]), and an MLP without a hidden layer is algebraically equivalent to a generalized linear model with σ as inverse link function. In this case, trained weights and estimated regression coefficients coincide.
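As a small illustration of this formula, the following sketch computes the output of such an MLP by hand for arbitrary weights; the weight layout and names are illustrative and do not correspond to the storage format of any particular software.

```r
## Sketch: forward pass of an MLP with one hidden layer and logistic activation
sigma <- function(z) 1 / (1 + exp(-z))

mlp.output <- function(x, W.hidden, w.out) {
  ## W.hidden: (n + 1) x m matrix of weights w_ij (first row: intercepts, i = 0)
  ## w.out:    vector (w_0, w_1, ..., w_m) of weights to the output neuron
  x0     <- c(1, x)                               # prepend the constant neuron x_0 = 1
  hidden <- sigma(as.vector(t(W.hidden) %*% x0))  # outputs of the m hidden neurons
  sigma(w.out[1] + sum(w.out[-1] * hidden))       # output neuron
}

## Example with n = 2 co-variables (g, u) and m = 3 hidden neurons
set.seed(1)
W <- matrix(rnorm(3 * 3), nrow = 3)   # rows: constant, g, u; columns: hidden neurons
mlp.output(c(1, 50), W, rnorm(4))
```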
To train the neural networks on the case-control data sets, resilient backpropagation [25] is used as learning algorithm, with cross entropy as error function and the logistic function as activation function.
Authors’ contributions
FG planned and carried out the simulation study and drafted the manuscript. IP drafted the manuscript. KB planned the simulation study and drafted the manuscript. All authors read and approved the final manuscript.
Declarations
Acknowledgements
We gratefully acknowledge the financial support for this research by the grant PI 345/3-1 from the German Research Foundation (DFG).
We would like to thank two anonymous reviewers for their valuable remarks.
References
- Wray N, Goddard M, Visscher P: Prediction of individual genetic risk of complex disease. Curr Opin Genet Dev. 2008, 18: 257-263. 10.1016/j.gde.2008.07.006.
- Gibson G: Decanalization and the origin of complex disease. Nat Rev Genet. 2009, 10 (2): 134-140.
- Galvan A, Ioannidis J, Dragani T: Beyond genome-wide association studies: genetic heterogeneity and individual predisposition to cancer. Trends Genet. 2010, 26 (3): 132-141. 10.1016/j.tig.2009.12.008.
- Abazyan B, Nomura J, Kannan G, Ishizuka K, Tamashiro K, Nucifora F, Pogorelov V, Ladenheim B, Yang C, Krasnova I, Cadet J, Pardo C, Mori S, Kamiya A, Vogel M, Sawa A, Ross C, Pletnikov M: Prenatal interaction of mutant DISC1 and immune activation produces adult psychopathology. Biol Psychiatry. 2010, 68: 1172-1181. 10.1016/j.biopsych.2010.09.022.
- Hutter C, Slattery M, Duggan D, Muehling J, Curtin K, Hsu L, Beresford S, Rajkovic A, Sarto G, Marshall J, Hammad N, Wallace R, Makar K, Prentice R, Caan B, Potter J, Peters U: Characterization of the association between 8q24 and colon cancer: gene-environment exploration and meta-analysis. BMC Cancer. 2010, 10: 670. 10.1186/1471-2407-10-670.
- Kazma R, Babron M, Génin E: Genetic association and gene-environment interaction: a new method for overcoming the lack of exposure information in controls. Am J Epidemiol. 2011, 173 (2): 225-235. 10.1093/aje/kwq352.
- Docherty S, Kovas Y, Plomin R: Gene-environment interaction in the etiology of mathematical ability using SNP sets. Behav Genet. 2011, 41: 141-154. 10.1007/s10519-010-9405-6.
- Tolonen S, Laaksonen M, Mikkilä V, Sievänen H, Mononen N, Räsänen L, Viikari J, Raitakari O, Kähönen M, Lehtimäki T, Cardiovascular Risk in Young Finns Study Group: Lactase gene C/T13910 polymorphism, calcium intake, and pQCT bone traits in Finnish adults. Calcified Tissue Int. 2011, 58: 153-161.
- Bammann K, Pohlabeln H, Pigeot I, Jöckel K: Use of an artificial neural network in exploring the dose-response relationship between cigarette smoking and lung cancer risk in male. Far East J Theor Stat. 2005, 16 (2): 285-302.
- Günther F, Wawro N, Bammann K: Neural networks for modeling gene-gene interactions in association studies. BMC Genet. 2009, 10: 87.
- Gago J, Landín M, Gallego P: Artificial neural networks modeling the in vitro rhizogenesis and acclimatization of Vitis vinifera L. J Plant Physiol. 2010, 167: 1226-1231. 10.1016/j.jplph.2010.04.008.
- Lin RH, Chuang CL: A hybrid diagnosis model for determining the types of the liver disease. Comput Biol Med. 2010, 40 (7): 665-670. 10.1016/j.compbiomed.2010.06.002.
- Ioannidis J, Trikalinos T, Law M, Carr A, HIV Lipodystrophy Case Definition Study Group: HIV lipodystrophy case definition using artificial neural network modelling. Antivir Ther. 2003, 8: 435-441.
- Amato R, Pinelli M, D’Andrea D, Miele G, Nicodemi M, Raiconi G, Cocozza S: A novel approach to simulate gene-environment interactions in complex diseases. BMC Bioinf. 2010, 11: 8. 10.1186/1471-2105-11-8.
- Bishop C: Neural Networks for Pattern Recognition. 1995, New York: Oxford University Press.
- Schwarz G: Estimating the dimension of a model. Ann Stat. 1978, 6: 461-464. 10.1214/aos/1176344136.
- R Development Core Team: R: A Language and Environment for Statistical Computing. 2009, Vienna, Austria: R Foundation for Statistical Computing, http://www.R-project.org. ISBN 3-900051-07-0.
- Günther F, Fritsch S: neuralnet: Training of neural networks. R J. 2010, 2: 30-38.
- Efron B, Tibshirani R: An Introduction to the Bootstrap. 1993, Boca Raton: Chapman and Hall.
- Intrator O, Intrator N: Interpreting neural-network results: a simulation study. Comput Stat Data An. 2001, 37: 373-393. 10.1016/S0167-9473(01)00016-0.
- Savegnago R, Nunes B, Caetano S, Ferraudo A, Schmidt G, Ledur M, Munari D: Comparison of logistic and neural network models to fit to the egg production curve of White Leghorn hens. Poult Sci. 2011, 90 (3): 705-711. 10.3382/ps.2010-00723.
- Liew P, Lee Y, Lin Y, Lee T, Lee W, Wang W, Chien C: Comparison of artificial neural networks with logistic regression in prediction of gallbladder disease among obese patients. Digest Liver Dis. 2007, 39 (4): 356-362. 10.1016/j.dld.2007.01.003.
- Hecht-Nielsen R: Neurocomputing. 1990, Reading: Addison-Wesley.
- McCullagh P, Nelder J: Generalized Linear Models. 1983, London: Chapman and Hall.
- Riedmiller M: Advanced supervised learning in multi-layer perceptrons – from backpropagation to adaptive learning algorithms. Int J Comput Stand Interf. 1994, 16: 265-275. 10.1016/0920-5489(94)90017-5.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.