Jonathan Kane is a science writer who has written two previous posts for Panda’s Thumb: Creationist classification of theropods and Five principles for arguing against creationism. Emily Willoughby is a graduate student in behavior genetics at the University of Minnesota. They are the two main authors of God’s Word or Human Reason? An Inside Perspective on Creationism, published December 2016 by Inkwater Press. Matt Young is this post’s moderator.
In the comments to this post from March 8, a few people asked for more details regarding what we know about human intelligence, and what the estimates of its heritability are based on. Covering these things adequately requires more space than is possible in a comment, so we’ve agreed to run a second post explaining the current state of research in this area.
What is general intelligence?
In a colloquial context, “intelligence” can mean a lot of different things, but when psychologists refer to intelligence they usually mean a specific measurement. That measurement is known as general intelligence, or g. Another term that psychology books and papers sometimes use for general intelligence is “general cognitive ability”, sometimes abbreviated as GCA.
In order to understand how g is calculated, first it’s necessary to know the basics of how IQ tests work. On a professional IQ test, such as the Wechsler Adult Intelligence Scale (WAIS) or the Stanford-Binet, a series of subtests measure various mental skills, such as mentally rotating a 3-dimensional object (which measures visuospatial ability), answering reading comprehension questions (which measures verbal intelligence), and listening to a sequence of numbers and then reciting it in reverse (which measures working memory). A person’s overall IQ score is calculated from their aggregate performance on all of these various subtests, by ranking the person’s score on each subtest against the scores of other people who have taken it. This document discusses a few examples of IQ subtest items, in the context of how these items have been tweaked for the latest version of the WAIS test.
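The last step of that scoring process, turning a rank into an IQ score, can be sketched in a couple of lines. IQ is a deviation score: a person’s percentile standing relative to the norming sample is mapped onto a normal distribution with a mean of 100 and a standard deviation of 15. The function below is a toy illustration of that mapping, not the scoring procedure of any actual test.

```python
from statistics import NormalDist

def iq_from_percentile(pct):
    """Map a percentile rank (expressed as 0-1) in the norming sample
    onto the IQ scale (mean 100, standard deviation 15)."""
    return 100 + 15 * NormalDist().inv_cdf(pct)

print(round(iq_from_percentile(0.50)))  # 100: median performance for one's age group
print(round(iq_from_percentile(0.98)))  # 131: the 98th percentile
```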
Although each of these subtests measures a different skill, one of the most consistently replicated findings in psychology is that a person’s scores on all of them are positively correlated. (John B. Carroll’s book Human Cognitive Abilities: A Survey of Factor-Analytic Studies provides an overview of the data in this area.) In other words, even though a mental rotation test might seem to have nothing in common with a reading comprehension test, a person who does well on one will generally also do well on the other. Using the statistical technique of factor analysis, it is possible to determine the existence of the underlying factors that account for this network of correlations. This method produces a set of specific abilities that each affect small numbers of subtests, as well as a general factor that accounts for the correlation shared by them all. This general factor has been named g, or general intelligence.
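The logic of extracting g can be illustrated with a small simulation. The sketch below invents five subtests driven by a single latent factor, confirms that all their pairwise correlations are positive, and then recovers the factor from the correlation matrix using its leading eigenvector (a principal-components stand-in for a formal factor analysis; the loadings and sample size are made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # simulated test-takers

# Five subtest scores, all driven by one latent factor ("g") plus
# test-specific noise. The loadings here are invented.
g = rng.standard_normal(n)
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5])
specific = rng.standard_normal((n, 5)) * np.sqrt(1 - loadings**2)
scores = np.outer(g, loadings) + specific

# The "positive manifold": every pair of subtests correlates positively.
R = np.corrcoef(scores, rowvar=False)
assert (R[np.triu_indices(5, k=1)] > 0).all()

# Extract the dominant factor from the correlation matrix via its leading
# eigenvector.
eigvals, eigvecs = np.linalg.eigh(R)
v = eigvecs[:, -1] * np.sign(eigvecs[:, -1].sum())  # fix the arbitrary sign
g_hat = scores @ v  # each person's estimated general-factor score

print(round(np.corrcoef(g_hat, g)[0, 1], 2))  # high: the factor tracks the simulated g
```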
Is it possible that this general factor is an artifact of the particular set of tests that were used to calculate it? This question was examined in a pair of studies by Wendy Johnson and her colleagues in 2004 and 2008. These studies found that if two separate test batteries are given to the same group of people, and two separate g factors are calculated from their results, the two g factors nearly always have a correlation of 0.95 or higher. This extremely strong correlation suggests that g is not merely an artifact of a particular set of tests, but that it measures something real about the person being tested.
As this paper explains, it’s unlikely that g has a specific location in the brain, or that it is synonymous with any single neurological variable such as neural density or nerve conduction velocity. Rather, it is most accurate to think of general intelligence as the brain’s overall processing power. An analogy that paper uses for general intelligence is physical fitness: a person’s overall fitness contributes to their ability to perform a wide variety of physical tasks, so there is a positive correlation between a person’s physical ability in one area (such as running speed) and in a different area (such as the weight they can lift). But physical fitness cannot be isolated to a single variable or part of the body—it is a combination of many interrelated characteristics, such as lung capacity, energy efficiency, muscle strength, and so on.
No method of measuring g is perfect, but IQ tests are considered by psychologists to be the most accurate method that’s widely used. Four books that discuss the importance of general intelligence in psychology research, and the ability of IQ tests to measure it, are Ian Deary’s Intelligence: A Very Short Introduction (2001), Earl Hunt’s Human Intelligence (2011), Stuart Ritchie’s Intelligence: All that Matters (2015), and Richard Haier’s The Neuroscience of Intelligence (2016). All four of these books are written by highly regarded psychologists and are published by mainstream academic publishers.
Modern intelligence tests are widely regarded by psychometrics researchers to be highly reliable (see page 169 of N.J. Mackintosh’s book IQ and Human Intelligence). When psychologists use the term “reliable” in a statistical sense, they mean that the measure is consistent and stable both internally and across time. A test is internally reliable if the items correlate highly among themselves, since this reflects the items converging on a single construct that they’re measuring.
One straightforward way of measuring the reliability of intelligence tests is to randomly split each participant’s test into two halves, and then look at the correlation between the results from each half across a large number of test-takers. Hunt’s book (p. 154) reports the split-half reliability of professionally developed IQ tests to be in the 0.80–0.95 range, which is quite good by commonly accepted standards. For reliability across time, Hunt (p. 313) reports the mean correlation across testing sessions at 0.85, meaning that if a group of people takes equivalent forms of an IQ test twice, their scores from the second test will have a correlation of 0.85 with their scores from the earlier one. This makes IQ tests one of the most reliable psychological measurements used in organizational settings, with an average variation of less than 3 points across sessions, according to a chapter in The Oxford Handbook of Personnel Assessment and Selection.
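The split-half procedure is easy to demonstrate on simulated data. In the sketch below, item scores are driven by a latent ability plus noise (all numbers invented), the items are split at random, and the half-score correlation is stepped up with the standard Spearman-Brown correction to estimate full-length-test reliability.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_items = 2000, 40  # invented sample and test sizes

# Each item score reflects the person's latent ability plus item-level noise.
ability = rng.standard_normal(n_people)
items = ability[:, None] + 2.0 * rng.standard_normal((n_people, n_items))

# Randomly split the items into two halves and total each half.
order = rng.permutation(n_items)
half_a = items[:, order[:n_items // 2]].sum(axis=1)
half_b = items[:, order[n_items // 2:]].sum(axis=1)
r_half = np.corrcoef(half_a, half_b)[0, 1]

# The Spearman-Brown correction converts the half-test correlation into an
# estimate of the reliability of the full-length test.
reliability = 2 * r_half / (1 + r_half)
print(round(r_half, 2), round(reliability, 2))  # the corrected value is higher
```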
This reliability can also be observed by comparing the IQ scores of children to their scores as adults. IQ test performance changes over the life course in a fairly predictable developmental pattern, and IQ scores given to children are based on how well they perform by the standard of their age, but people who do well on these tests (relative to their age group) as children also tend to do well later in life. One of the largest and most famous studies of this effect began in 1932, when the Scottish Council for Research in Education arranged for every Scottish 11-year-old in attendance—a total of 87,498 children—to sit an intelligence test. Sixty-six years later, these now-77-year-old participants were recruited back into the lab to sit another test. Across all participants who were still alive and available, their scores correlated at about 0.70 with the scores they had achieved as children. To appreciate how powerful this result is, note that as of 2015, breast cancer screening across 16 million women correlated with cancer incidence at about 0.54.
The robust stability of intelligence scores over the lifespan also implies that IQ is relatively resistant to attempts at change—and this is mostly true, despite the number of hits you’ll get for Googling “how to raise my IQ” (mostly sponsored by companies who desperately want your money). The reality is that IQ is susceptible to change in the negative direction through environmental effects, such as drastically lowered nutrition, toxins such as lead, or brain trauma—because, of course, it is much easier to break a complex thing than to increase its performance. But how much can an individual’s IQ be raised by deliberate training and practice? A 2015 meta-analysis of 39 randomized controlled trials intended to raise children’s IQ revealed that interventions to boost IQ often have short-term positive effects, but that these effects fade out when the intervention period is over, much as the effects of a weight-loss diet tend to fade away when the diet is over. Companies wishing to profit from brain training would do well to consider this reality: the creators of the brain-training app Lumosity were recently hit with a $2 million fine for being unable to substantiate the app’s cognitive-ability-boosting claims.
One of the reasons the concept of general intelligence is widely accepted is because of its predictive validity. General intelligence is effective for predicting outcomes in several areas of life, and it’s impossible to review all of them here, but three of the areas where this relationship has been consistently replicated are academic performance, job performance, and income. A large meta-analysis of studies about the relationship of g to academic performance can be found here, summaries of several meta-analyses about its relationship to job performance can be found here and here, and a meta-analysis about its ability to predict income can be found here. The last meta-analysis specifically examined longitudinal studies, showing that general intelligence measured early in life predicts income as an adult, and also found that intelligence is a somewhat stronger predictor of income than either one’s parents’ income or their level of education. (It’s important to cite literature reviews, meta-analyses and/or textbooks when discussing this sort of data, because anyone can cherry-pick individual studies that support their opinion, whereas secondary sources describe the overall trend.)
Two other areas in which general intelligence predicts life outcomes, discussed in Michael Ashton’s textbook Individual Differences and Personality, are health and law-abidingness. According to Ashton, intelligence measured early in life correlates with a person’s eventual lifespan, and people with higher intelligence also are less likely to commit crimes, even when socio-economic status is controlled for. Citing several studies published 2001-2014, Ashton’s textbook (pp. 264-265) suggests that the relationship between intelligence and lifespan exists partly because people with higher intelligence possess greater decision-making ability, which affects health choices such as not smoking, as well as that intelligence may indicate something more basic about how well a person’s body is functioning. Ashton (pg. 266) also suggests that decision-making skill may affect a person’s ability to judge that the potential payoff of crime isn’t worth the risk, as well as that people with lower intelligence might turn to a life of crime because they are frustrated by their lower opportunities for success in the workplace.
In addition to its ability to predict life outcomes, general intelligence also has a second type of predictive validity: it is correlated with neurological variables that can be measured physically. This 2010 literature review published in Nature Reviews: Neuroscience discusses several of these, including neuronal efficiency, cortical thickness, and overall brain volume. These relationships are discussed in more detail in Richard Haier’s 2016 book The Neuroscience of Intelligence, which also mentions (pp. 126-135) that variation in these aspects of brain structure is influenced by many of the same genes that affect variation in general intelligence. One recently discovered neurological measure, known as morphometric similarity, accounts for 40 % of the variation in general intelligence (as measured by IQ).
Heritability, part 1: family studies
Another of the best-replicated findings in psychology is that all measurable psychological traits, including intelligence, show substantial genetic influence. This conclusion was initially reached by decades of studies on large numbers of pairs of twins, but has also achieved a remarkable degree of consilience with modern genetic research. Before delving into the mechanics of genetic studies, it is important to understand what is meant when we say that a trait is “heritable”. The heritability of a trait is the portion of the variance in it that is due to genes. All humans on the planet share something like 99.9 % of our genes, but the remainder is the source of all genetic differences between individuals. Heritability can be estimated by comparing the measurements of a trait across a large pool of individuals with known genetic relationships, such as within families.
Heritability is best estimated by comparing a trait’s correlation among identical twins with its correlation among non-identical twins. Identical or monozygotic twins share 100 % of their genes, so measurable differences between them are likely due to environmental influence. Fraternal or dizygotic twins are born together and grow up together, but are only as genetically similar to one another as any other pair of siblings from the same parents, sharing an average of 50 % of their heritable genes (that is, genes that vary between human individuals).
Identical twins share a home environment to the same extent that fraternal twins do, so comparing the two types of twins controls for the shared environment. Because identical twins are always the same sex, only same-sex fraternal twins are used in these comparisons. If a trait is substantially heritable, it must show greater concordance among identical twins than among non-identical twins, and the larger the disparity, the more heritable the trait is likely to be. Since identical twins share twice as much heritable genetic material as fraternal twins do, the heritable variance can be estimated as twice the difference between the identical-twin and fraternal-twin correlations on a measure of the trait.
The largest meta-analysis of twin studies ever performed, Polderman et al. (2015), found that 150,000 pairs of identical twins correlate in “higher-level cognitive functions” at 0.71, and another 150,000 pairs of fraternal twins correlate at about 0.44. (Try their web tool yourself to see—select your trait of interest under “ICF/ICD10 subchapter” at the top.) Since 2 × (0.71 – 0.44) = 0.54, we can say that about 54 % of the variance in general cognitive ability is explained by variance in genes. If you break down these estimates by the age categories provided in the web tool, you will see that the heritability of intelligence also seems to increase over the life course, another finding that has been replicated again and again. This effect has been dubbed the “Wilson effect” after its discoverer, Ronald Wilson, and was covered in detail by a 2013 analysis of twin and adoption studies, which found a heritability “asymptote” of about 0.80 for IQ at around 18-20 years of age. The likely reason is the declining effect of shared environment on IQ (and on many other traits, such as personality): as children get older, they increasingly choose their own environments, which differ from those provided by their parents and other family.
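The computation above is an instance of Falconer’s formula. Extending the same simple model one step further partitions all of the variance into genetic, shared-environment, and non-shared-environment components:

```python
# Falconer's formula, using the twin correlations reported in Polderman
# et al. (2015): heritability is twice the gap between the identical-twin
# and fraternal-twin correlations.
r_mz, r_dz = 0.71, 0.44     # identical (monozygotic) and fraternal (dizygotic)
h2 = 2 * (r_mz - r_dz)      # heritable variance: about 0.54

# The same model partitions the rest of the variance:
c2 = 2 * r_dz - r_mz        # shared environment: what twins share beyond genes
e2 = 1 - r_mz               # non-shared environment (plus measurement error)
print(round(h2, 2), round(c2, 2), round(e2, 2))  # 0.54 0.17 0.29
```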
Twin studies have come under various criticisms over the years for their reliance on assumptions that must hold in order for their conclusions to be interpretable. For example, the “equal environments assumption” (EEA) posits that the environmental effects on a trait must be equal (on average) between identical and fraternal twins. If the parents of identical twins treat them significantly differently than the parents of fraternal twins treat theirs, this difference could violate the EEA if it can be shown to affect the measurement of the trait. The EEA has been upheld when it has been empirically tested (see this paper for a discussion), but more importantly, these estimates of heritability are corroborated by studies of adopted siblings, of twins reared apart in different families, and of other family structures where the genetic relatedness among individuals can be quantified. If these studies were biased, they would all have to be biased in the exact same way. Even so, it is perhaps more convincing to see what recent advances in modern genetic studies have found.
Heritability, part 2: genetic studies
Genome-wide complex trait analysis (GCTA) is a method of estimating a trait’s heritability by comparing the chance genetic similarity of unrelated strangers to their similarity on a trait of interest. Looking at overall genetic similarity introduces a lot of statistical noise, but with a large enough sample (in the thousands or tens of thousands), the genetic similarity of unrelated individuals will correlate with their similarity in the trait being studied. The strength of the correlation thus provides a conservative estimate of how much variance in the genes of individuals is causally linked to variance in the measured trait.
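The core idea, correlating chance genetic similarity with trait similarity, can be demonstrated with a toy simulation. The sketch below builds a genetic relatedness matrix for simulated genotypes and recovers a built-in heritability of 0.5 using a Haseman-Elston-style regression; real GCTA software (e.g., GCTA-GREML) uses restricted maximum likelihood, and real samples are far larger than these invented toy sizes.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 1000, 2000   # toy sizes: people and genetic variants (real studies use far more)
h2_true = 0.5       # the heritability we build into the simulation

# Random genotypes (0/1/2 allele counts), standardized per variant, and a
# trait influenced by every variant with a tiny effect plus environmental noise.
geno = rng.binomial(2, 0.5, size=(n, m)).astype(float)
geno = (geno - geno.mean(axis=0)) / geno.std(axis=0)
beta = rng.normal(0, np.sqrt(h2_true / m), size=m)
trait = geno @ beta + rng.normal(0, np.sqrt(1 - h2_true), size=n)

# Genetic relatedness matrix: the chance genetic similarity of each pair.
grm = geno @ geno.T / m

# Haseman-Elston-style regression: the slope of pairwise trait similarity
# on pairwise genetic similarity estimates the heritability.
iu = np.triu_indices(n, k=1)
x, y = grm[iu], np.outer(trait, trait)[iu]
h2_est = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(round(h2_est, 2))  # a noisy estimate near the simulated 0.5
```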
Estimates of heritability produced with the GCTA method are typically a lower bound, representing the minimum possible genetic contribution. This is because GCTA studies usually do not account for rare variants, which exist in only a small number of the study’s subjects, or for linkage disequilibrium, the tendency of nearby genetic variants to be inherited together rather than independently. A typical lower-bound estimate of the heritability of general cognitive ability from a GCTA study is 28 % or 29 %. However, accurately accounting for these complications can turn GCTA into a useful tool for estimating true heritability, rather than just a lower bound. In 2011, a GCTA with 3,511 unrelated individuals found that 51 % of the variance in general intelligence was associated with variance in genetic markers. Using a version of GCTA that includes rarer variants, a study published early this year found this amount to be 54 %, remarkably close to the estimate from Polderman’s meta-analysis of 50 years of twin studies.
As explained in this paper, human intelligence is polygenic, meaning that it is influenced by a huge number of genes, each with a tiny effect. Studies to identify these genes use a second method called genome-wide association studies (GWAS), which test for the association between a trait and specific genetic variants. The data from these studies can be used to construct a polygenic score, which is a composite score based on a person’s combination of the many known alleles that affect variance in the trait. At present, polygenic scores can predict about 10 % of the variance in IQ, but that number is increasing all the time as more and more of these alleles are identified.
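Mechanically, a polygenic score is just a weighted sum: each trait-associated allele a person carries contributes its GWAS-estimated effect size. The variant names and weights below are entirely invented for illustration; real scores sum over thousands to millions of variants.

```python
# Hypothetical GWAS summary statistics: per-allele effect sizes for three
# variants. Both the variant IDs and the weights here are made up.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.08, "rs0003": 0.05}

def polygenic_score(genotype):
    """Weighted sum of allele counts (0, 1, or 2) times GWAS effect sizes."""
    return sum(effect_sizes[snp] * count for snp, count in genotype.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_score(person), 2))  # 0.16 = 2(0.12) + 1(-0.08) + 0(0.05)
```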
Since general intelligence is known to influence outcomes such as income and success in education, several genetic studies have directly examined the influence of genes on those variables as well. This study, which made use of the GCTA technique, found that at least 18 % of variance in socioeconomic status, and at least 21 % of variance in years of education, can be accounted for by genes that also influence general intelligence. With the GWAS method, it also is possible to identify some of the specific genetic variants associated with these outcomes. In 2016, a study using this method identified specific alleles that account for 9 % of the variance in years of education, and also identified some of the ways that these alleles are expressed in fetal brain development. A literature review published in Molecular Psychiatry summarizes these results this way: “To sum up: there are genetic causes of some of the educational and social class differences in the populations studied, and these overlap with the genetic causes of intelligence differences.”
The Flynn effect
The relationship of general intelligence to IQ scores is most similar to the relationship between an object’s mass and its weight. The most refined and widely used way to compare the masses of two objects is by weighing them both, but that method works only if both objects are weighed on the same planet. A weight of ten pounds on the moon does not mean the same thing as the same weight on Earth, and the same principle also applies to IQ if you compare IQ test performance between multiple generations.
For most of the twentieth century, the average performance on IQ tests of all demographic groups was gradually rising by the equivalent of about 3 IQ points per decade. This phenomenon is known as the Flynn effect, after James Flynn, who extensively documented it in the 1980’s. (It’s possible that the trend has recently slowed or even stopped in developed nations, though some developing countries are still showing substantial generational gains in IQ.) The average IQ score is by definition 100, so as test performance has risen, makers of IQ tests have had to periodically re-norm their score scales to adjust for the Flynn effect. The cause of the Flynn effect is not known for certain, but it is presumed to be environmental, because the genetic component of IQ cannot change that quickly.
The most common hypothesis for explaining the Flynn effect is that as education and technology have improved, people have become better at the sort of logical thinking that’s required for IQ tests. These improvements are often cited as evidence that environmental influences on intelligence are much stronger than genetic studies would suggest, but this argument has an important limitation. The degree to which an IQ subtest measures general intelligence varies between one subtest and another—this is referred to as the subtest’s “g loading”—and a meta-analysis by te Nijenhuis and van der Flier found that the g loadings of IQ subtests were inversely correlated with how much scores on them had risen due to the Flynn effect. In other words, the most g loaded subtests tend to be those on which scores have risen the least, which is the opposite of the pattern one would expect if general intelligence had been increasing.
One other line of evidence for this conclusion comes from mental chronometry, an alternative method of measuring general intelligence based on the brain’s speed of information processing. Mental chronometry scores show a significant correlation with IQ scores, and overviews of this method of measuring intelligence can be found in Hunt (pp. 152-155), Haier (pp. 170-171), and Ashton (pp. 245-249). Importantly, while performance on IQ tests has risen due to the Flynn effect, performance on mental chronometry tests has not. In fact, there is a debate over whether performance on mental chronometry tests has stayed the same or whether it has actually declined. If the Flynn effect were an increase in general intelligence, however, one would expect it to have raised performance on IQ tests and mental chronometry tests at about the same rate.
Challenges to this model
Although the g model is the most widely accepted model of intelligence in psychology, it is not the only model that exists. The best-known alternative model is Howard Gardner’s theory of multiple intelligences. This theory proposes that intelligence has no central factor that affects performance on all types of test, and that intelligence should instead be understood as several distinct and unrelated areas of ability.
While the concept of general intelligence is widely accepted in large part because of its predictive validity, the largest downside of the theory of multiple intelligences is that it has not been shown to predict anything in the real world. Detailed critiques of Gardner’s model, along with other similar models, can be found in Hunt (pp. 111-139), Ashton (pp. 272–278), and in this paper by Lynn Waterhouse. Gardner has responded to critiques like these in an odd way. Instead of arguing that the critics are wrong and that his theory is well-supported, he has argued that it does not really matter whether his theory is supported or not:
[E]ven if at the end of the day, the bad guys turn out to be more correct scientifically than I am, life is short, and we have to make choices about how we spend our time. And that’s where I think the multiple intelligences way of thinking about things will continue to be useful, even if the scientific evidence doesn’t support it.
The most infamous critique of the concept of general intelligence, which was brought up in the comments to the post from March 8, is Stephen Jay Gould’s book The Mismeasure of Man. We call this book infamous because among psychologists, it is probably the best-known example of the schism between how intelligence research is understood among professional researchers in this field, and how it is understood by the general public. The book was a national bestseller and received glowing reviews from magazines and newspapers, but nearly all of its reviews in the academic literature were negative. A summary of the book’s academic reception was given in this detailed critique by Bernard Davis. This negative reception has not been confined to the period directly after The Mismeasure of Man’s release: in 2011, a team of physical anthropologists at the University of Pennsylvania reanalyzed the book’s claim that Samuel George Morton had fudged his skull measurements of various ethnic groups. According to the new study, Morton’s original measurements had in fact been correct, and the errors had been Gould’s.
We realize that Stephen Jay Gould is held in high regard among readers of The Panda’s Thumb because of his contributions to evolutionary biology, but ultimately the most trustworthy authorities about any topic are the people who study it professionally. The most trustworthy authorities about evolution are evolutionary biologists rather than theologians, and the most trustworthy sources about climate change are climatologists rather than politicians. For the same reason, the most trustworthy authorities about human intelligence are psychologists, neurologists, and behavioral geneticists, even if an evolutionary biologist thinks that they’re wrong.
The need to oppose science denialism
It is important to understand how mainstream the research we’ve cited in this post is. The conclusions we’ve summarized here have been supported by three literature reviews published in Nature sub-journals: One review from 2010, a second from 2015, and a third from 2018. The topic also is consistently presented this way in textbooks from major academic publishers, four of which we’ve cited here: Hunt’s and Haier’s, both published by Cambridge University Press; Deary’s, published by Oxford University Press; and Ashton’s, published by Academic Press. To dispute the validity of these conclusions, what you really have to dispute is the process by which a scientific conclusion can become widely accepted.
Over the past few years, a few evolution education websites (including The Panda’s Thumb) have morphed into more general pro-science websites and have begun to combat climate change denial in addition to creationism. Evolution and climate change, however, aren’t the only areas where science denial exists or where the mainstream position needs defending. Although it’s less common than creationism or climate change denial, books and articles that reject the consensus view about psychology or intelligence are nonetheless regularly published. If we are serious about opposing anti-science attitudes, we ought to recognize that these sorts of arguments are just as unscientific as those made by creationist organizations, and just as important to oppose.
Acknowledgments. This article was reviewed by a professor of statistics at the University of Minnesota, who wishes to remain anonymous. We appreciate his helpful feedback.
Carroll, John B. Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge University Press, 1993.
Johnson, Wendy et al. “Just one g: consistent results from three test batteries”. Intelligence 32 (2004): 95-107.
Johnson, Wendy, Jan te Nijenhuis and Thomas J. Bouchard Jr. “Still just one g: Consistent results from five test batteries”. Intelligence 36 (2008): 81–95.
Kievit, R.A. et al. “Intelligence and the brain: a model-based approach”. Cognitive Neuroscience 3(2) (2012): 89-97.
Deary, Ian. Intelligence: A Very Short Introduction. Oxford University Press, 2001.
Hunt, Earl. Human Intelligence. Cambridge University Press, 2011.
Haier, Richard. The Neuroscience of Intelligence. Cambridge University Press, 2016.
Mackintosh, N.J. IQ and Human Intelligence, second edition. Oxford University Press, 2011.
Ones, D.S., S. Dilchert and C. Viswesvaran. “Cognitive ability”. In: Schmitt (Ed.), The Oxford Handbook of Personnel Assessment and Selection. Oxford University Press, 2012.
Harding, C. et al. “Breast Cancer Screening, Incidence, and Mortality Across US Counties”. JAMA Internal Medicine 175(9) (2015): 1483-1489.
Deary, Ian J. et al. “The Stability of Individual Differences in Mental Ability from Childhood to Old Age: Follow-up of the 1932 Scottish Mental Survey”. Intelligence 28(1) (2000): 49-55.
Protzko, J. “The environment in raising early intelligence: A meta-analysis of the fadeout effect”. Intelligence 53 (2015): 202–210.
Winsborough, Dave and Tomas Chamorro-Premuzic. “Talent Identification in the Digital World: New Talent Signals and the Future of HR Assessment”. People + Strategy 39(2) (2016): 28-31.
Roth, Bettina et al. “Intelligence and school grades: A meta-analysis”. Intelligence 53 (2015): 118-137.
Schmidt, Frank L. and John Hunter. “General Mental Ability in the World of Work: Occupational Attainment and Job Performance”. Journal of Personality and Social Psychology 86(1) (2004): 162-173.
Schmidt, Frank L., In-Sue Oh and Jonathan A. Shaffer. “The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 100 Years of Research Findings”. Fox School of Business Research Paper, October 17, 2016.
Strenze, Tarmo. “Intelligence and socioeconomic success: A meta-analytic review of longitudinal research”. Intelligence 35(5) (2007): 401-426.
Ashton, Michael. Individual Differences and Personality, third edition. Academic Press, 2018.
Deary, Ian, Lars Penke and Wendy Johnson. “The neuroscience of human intelligence differences”. Nature Reviews: Neuroscience 11 (2010): 201-211.
Seidlitz, Jakob et al. “Morphometric similarity networks detect microscale cortical organization and predict inter-individual cognitive variation”. Neuron 97(1) (2018): 231-247.e7.
Plomin, R., J.C. DeFries, V.S. Knopik and J.M. Neiderhiser. “Top 10 replicated findings from behavioral genetics”. Perspectives on Psychological Science 11(1) (2016): 3–23.
Polderman, T.J.C. et al. “Meta-analysis of the heritability of human traits based on fifty years of twin studies”. Nature Genetics 47(7) (2015): 702–712.
Plomin, R. and I.J. Deary. “Genetics and intelligence differences: five special findings”. Molecular Psychiatry 20 (2015): 98–108.
Bouchard, T.J. Jr. “The Wilson Effect: the increase in heritability of IQ with age”. Twin Research and Human Genetics 16(5) (2013): 923–930.
Derks, Eske M., Conor V. Dolan, and Dorret I. Boomsma. “A Test of the Equal Environment Assumption (EEA) in Multivariate Twin Studies”. Twin Research and Human Genetics 9(3) (2006): 403–411.
Bouchard, T.J. Jr. “Genetic and environmental influences on adult intelligence and special mental abilities”. Human Biology 70(2) (1998): 257–79.
Devlin, B., M. Daniels and K. Roeder. “The heritability of IQ”. Nature 388(6641) (1997): 468–71.
Davies, G. et al. “Genetic contributions to variation in general cognitive function: a meta-analysis of genome-wide association studies in the CHARGE consortium (N = 53949)”. Molecular Psychiatry 20 (2015): 183–192.
Davies, G. et al. “Genome-wide association studies establish that human intelligence is highly heritable and polygenic”. Molecular Psychiatry 16 (2011): 996–1005.
Hill, W.D. et al. “Genomic analysis of family data reveals additional genetic effects on intelligence and personality”. Molecular Psychiatry (2018): 1–16.
Plomin, R. and S. von Stumm. “The new genetics of intelligence”. Nature Reviews: Genetics 19 (2018): 148–159.
Marioni, Riccardo E. et al. “Molecular genetic contributions to socioeconomic status and intelligence”. Intelligence 44 (2014): 26-32.
Selzam, S. et al. “Predicting educational achievement from DNA”. Molecular Psychiatry 22 (2017): 267–272.
Teasdale T.W., and D.R. Owen. “A long-term rise and recent decline in intelligence test performance: The Flynn Effect in reverse”. Personality and Individual Differences 39(4) (2005): 837–843.
Sundet, Jon Martin, Dag G. Barlaug and Tore M. Torjussen. “The end of the Flynn effect?: A study of secular trends in mean intelligence test scores of Norwegian conscripts during half a century”. Intelligence 32(4) (2004): 349-362.
Pietschnig, Jakob, and Martin Voracek. “One Century of Global IQ Gains: A Formal Meta-Analysis of the Flynn Effect (1909–2013)”. Perspectives on Psychological Science 10(3) (2015): 282-306.
Nijenhuis, Jan te and Henk van der Flier. “Is the Flynn effect on g?: A meta-analysis”. Intelligence 41(6) (2013): 802-807.
Sheppard, Leah D. and Philip A. Vernon. “Intelligence and speed of information-processing: A review of 50 years of research”. Personality and Individual Differences 44(3) (2008): 535-551.
Nettelbeck, Ted and Carlene Wilson. “The Flynn effect: Smarter not faster”. Intelligence 32(1) (2004): 85-93.
Woodley, Michael, Jan T. Nijenhuis and Raegan Murphy. “Is there a dysgenic secular trend towards slowing simple reaction time? Responding to a quartet of critical commentaries”. Intelligence 46 (2014): 131-147.
Waterhouse, Lynn. “Multiple Intelligences, the Mozart Effect, and Emotional Intelligence: A Critical Review”. Educational Psychologist 41(4) (2006): 206-225.
Gardner, Howard. “Intelligence: It’s Not Just IQ”. Lecture given at The Rockefeller University, 2009.
Davis, Bernard D. “Neo-Lysenkoism, IQ, and the press”. The Public Interest, Fall 1983.
Lewis, J.E. et al. “The Mismeasure of Science: Stephen Jay Gould versus Samuel George Morton on Skulls and Bias”. PLoS Biology 9(6) (2011): e1001071.