The Online Encyclopedia and Dictionary






Intelligence quotient


Intelligence quotient, or IQ, is a score derived from a set of standardized tests developed to measure a person's cognitive abilities ("intelligence") in relation to their age group. It is expressed as a number normalized so that the average IQ in an age group is 100; in other words, an individual scoring 115 is above average compared with similarly aged people. It is common, but not invariable, practice to standardize so that the standard deviation (σ) of scores is 15. Tests are designed so that the distribution of IQ scores is more or less Gaussian, that is, it follows a bell curve.
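The normalization described above can be sketched in a few lines. This is an illustrative example only: the sample scores and the `deviation_iq` helper are invented, not part of any real test's scoring procedure.

```python
from statistics import mean, stdev

def deviation_iq(norming_scores, raw):
    """Map a raw test score onto the IQ scale (mean 100, SD 15),
    relative to a norming sample drawn from the same age group."""
    m, s = mean(norming_scores), stdev(norming_scores)
    return 100.0 + 15.0 * (raw - m) / s

sample = [10, 12, 14, 16, 18]    # hypothetical raw scores from peers
print(deviation_iq(sample, 14))  # at the sample mean → 100.0
```

A raw score one sample standard deviation above the mean maps to 115, matching the convention described above.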



IQ scores are generally taken as an objective measure of intelligence. Modern IQ tests produce scores for different areas (e.g., language fluency, three-dimensional thinking), with the summary score calculated from subtest scores. Individual subtest scores tend to correlate with one another, even when seemingly disparate in content. Analyses of an individual's scores on a wide variety of tests reveal that they all measure a single common factor, along with various factors specific to each test. This kind of analysis has led to the theory that underlying these disparate cognitive tasks is a single factor, termed the g factor, which corresponds to the common-sense concept of intelligence. In the normal population, g and IQ are roughly 90% correlated and are often used interchangeably.

Some argue that IQ tests encode their creator's beliefs about what constitutes intelligence. There is little empirical support for this perspective, at least as it applies to validated IQ tests (Stanford-Binet, WISC-R, Raven's Progressive Matrices and others). The statistical extraction of g from batteries of cognitive tests via factor analysis has proven highly reliable in producing the same g from diverse tests, suggesting that creators have little ability to determine the outcome of valid cognitive tests. Unvalidated tests (see Online IQ tests, below) may not have the same level of reliability.
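The kind of extraction described above can be illustrated with a toy simulation. The sketch below is a minimal stand-in, assuming numpy is available and using the first principal component of the subtest correlation matrix in place of a full factor analysis; the loadings and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
g = rng.standard_normal(n)                   # latent general factor
loadings = np.array([0.8, 0.7, 0.6, 0.5])    # how strongly each subtest taps g
noise = rng.standard_normal((n, 4)) * np.sqrt(1.0 - loadings ** 2)
subtests = np.outer(g, loadings) + noise     # four correlated subtest scores

# The first principal component of the correlation matrix stands in for g.
corr = np.corrcoef(subtests, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)      # eigh returns ascending eigenvalues
g_hat = subtests @ eigvecs[:, -1]

print(abs(np.corrcoef(g, g_hat)[0, 1]))      # close to 1: the latent g is recovered
```

However the four subtests are constructed, the same dominant factor emerges, which is the reliability property the paragraph above describes.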

(The following numbers apply to IQ scales with a standard deviation σ = 15.) Roughly 68% of the population has an IQ between 85 and 115. The "normal" range, between -2 and +2 standard deviations from the mean, is 70 to 130 and contains about 95% of the population. A score below 70 may indicate mental retardation, and a score above 130 may indicate intellectual giftedness. Retardation may result from normal variation or from a genetic or developmental malady; analogously, some otherwise normal people are very short, and others have dwarfism. Giftedness appears to be normal variation; autistic savants often have astonishing cognitive powers but below-average IQs.
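Under the Gaussian model with mean 100 and σ = 15, these percentages follow directly from the normal cumulative distribution function. A minimal sketch (the `iq_fraction` helper is invented for illustration, not a standard library function):

```python
from math import erf, sqrt

def iq_fraction(lo, hi, mean=100.0, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring between lo and hi."""
    def cdf(x):
        return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))
    return cdf(hi) - cdf(lo)

print(round(iq_fraction(85, 115), 3))  # one SD either side → ~0.683
print(round(iq_fraction(70, 130), 3))  # two SDs either side → ~0.954
```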

Some writers say that scores outside the range 55 to 145 must be cautiously interpreted because there have not been enough people tested in those ranges to make statistically sound statements. Moreover, at such extreme values, the normal distribution is a less accurate estimate of the IQ distribution.

Scores on a given test in a given population have tended to rise across time throughout the history of IQ testing (the Flynn effect), so that tests need repeated renormalization.

About 1% of the population has an IQ of 135 or higher. A "genius" score is generally considered to be an IQ of 140 or greater, which is roughly the 99.6th percentile. Einstein was said to have had an IQ of 160, the same as Bill Gates.
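These rarity figures can be checked against the same Gaussian model (mean 100, σ = 15). The sketch below bisects for the cutoff score above which a given fraction of the population lies; the helper names are invented for illustration:

```python
from math import erf, sqrt

def normal_cdf(x, mean=100.0, sd=15.0):
    """Cumulative distribution function of a normal(mean, sd) variable."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

def iq_cutoff(top_fraction, lo=100.0, hi=250.0):
    """Bisect for the IQ above which `top_fraction` of the population lies."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if 1.0 - normal_cdf(mid) > top_fraction:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(iq_cutoff(0.01), 1))       # top 1% cutoff → ~134.9
print(round(1 - normal_cdf(140), 4))   # fraction scoring above 140 → ~0.0038
```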


The modern field of IQ testing began with the Stanford-Binet test. Alfred Binet, who created the first such test in 1905, aimed to identify students who could benefit from extra help in school; his assumption was that lower IQ indicated the need for more teaching, not an inability to learn. This interpretation is still held by some modern experts. The term "intelligence quotient" comes from this approach, in which each student's score was the quotient of his or her tested mental age and his or her chronological age, multiplied by 100. Modern IQ tests no longer calculate scores this way, but the term IQ remains in common use.
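The historical ratio quotient described above can be written as a one-liner. This is an illustrative sketch (the function name is invented):

```python
def ratio_iq(mental_age, chronological_age):
    """Historical ratio IQ: mental age over chronological age, times 100."""
    return 100.0 * mental_age / chronological_age

# A 10-year-old performing at the level of a typical 12-year-old:
print(ratio_iq(12, 10))  # → 120.0
```

Modern deviation IQ replaces this ratio with a position within an age group's score distribution, which is why adult IQs remain meaningful even though mental age stops rising.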

Gender and IQ

Most IQ tests are designed so that the average IQs of males and females are equal. However, men tend to score higher in the parts of the test that cover spatial and quantitative abilities, and women generally score higher in the verbal sections. Some research has shown that the variance of men's IQ scores is greater than that of women's, as is seen in other cognitive test scores. This would explain why more men than women are found in both the very high and very low scoring groups.

In 2005, Haier et al. reported that, compared with men, women show more white matter and fewer gray matter areas related to intelligence. They also reported that the brain areas correlated with IQ differ between the sexes, and concluded that men and women apparently achieve similar IQ results with different brain regions.

Race and IQ

See main article: Race and intelligence

Opposition to IQ testing

Many scientists disagree with the practice of psychometrics in general. In The Mismeasure of Man, Stephen Jay Gould strongly disputed the basis of psychometrics as a form of scientific racism, objecting to:

...the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status. (pp. 24-25).

Later editions of the book include a refutation of The Bell Curve.

While public discourse on IQ testing is generally inflammatory, IQ tests are used ubiquitously in research and education. In general, there is a disparity between the public perception of IQ testing and the opinion of intelligence researchers.

Some proponents of IQ have pointed to a number of studies showing a fairly close correlation between IQ and various life outcomes, particularly income. Research in Scotland has shown that a 15-point lower IQ meant people had a fifth less chance of seeing their 76th birthday, while those with a 30-point disadvantage were 37% less likely than those with a higher IQ to live that long. Research by Charles Murray on siblings has shown that there is a strong correlation between IQ and earned income. A controversial book by Richard Lynn, IQ and the Wealth of Nations, claims to show that the wealth of a nation correlates closely to its IQ score.

The reduction of intelligence to a single score seems extreme and wrong to many people. Opponents argue that it is much more useful to know a person's strengths and weaknesses than to know their IQ score. Such opponents often cite the example of two people with the same overall IQ score but very different ability profiles. However, most people have highly balanced ability profiles. Differences in subscores are greatest among the most intelligent, which may lead them to this misconception. For certain areas, such as academic achievement and job performance, an IQ score is the best-known single predictor of success, though other factors add small amounts to the predictive validity.

IQ scores are not intended to gauge a person's worth, and in many situations, IQ may have little relevance.

Support for predictive use of IQ tests

In response to the controversy surrounding The Bell Curve, the American Psychological Association's Board of Scientific Affairs established a special task force to publish an investigative report on the research presented in the book. The full text of the report is available at a third-party website. [1]

The findings of the task force state that IQ scores do have high predictive validity for individual (but not necessarily population) differences in school achievement. They confirm the predictive validity of IQ for adult occupational status, even when variables such as education and family background have been statistically controlled. They agree that individual (again, not necessarily population) differences in intelligence are substantially influenced by genetics.

They state there is little evidence to show that childhood diet influences intelligence except in cases of severe malnutrition. They agree that there are no significant differences between the IQ scores of males and females. The task force agrees that there do exist large differences between the average IQ scores of blacks and whites, and that these differences cannot be attributed to biases in test construction, nor do they merely reflect differences in socio-economic status between the ethnic groups.

While they admit there is no empirical evidence supporting it, the APA task force suggests that explanations based on social status and cultural differences may be possible. Regarding genetic explanations for ethnic differences in intelligence, they conclude with the following statement: "At present, this question has no scientific answer."

Online IQ tests

Online IQ tests have become wildly popular with the explosion of the internet in recent years, but they are highly inaccurate. Comparing results among a large set of people reveals a common pattern: most scores are above 110. Because such tests are rarely taken by people who would score in the 70 to 90 range, the results carry a strong upward distortion. Many of these websites do not show the results immediately and instead attempt to sell certificates showing the results.
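The upward distortion from self-selection can be illustrated with a small simulation. This is a toy model, assuming purely for illustration that the chance a person takes and finishes an online test rises linearly with IQ:

```python
import random

random.seed(0)
population = [random.gauss(100, 15) for _ in range(100_000)]

# Toy assumption: probability of taking the test grows linearly with IQ,
# from ~0 at IQ 55 up to certainty at IQ 145.
takers = [iq for iq in population if random.random() < (iq - 55) / 90]

print(round(sum(population) / len(population)))  # whole population: ~100
print(round(sum(takers) / len(takers)))          # self-selected takers: ~105
```

Even though the underlying population is centered on 100, the observed scores among test-takers skew high; an online test renormed on its own visitors would inflate everyone's score.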


  • Frey, M.C. and Detterman, D.K. (2004). Scholastic Assessment or g? The Relationship Between the Scholastic Assessment Test and General Cognitive Ability. Psychological Science, 15(6), 373–378.
  • Jensen, A.R. (1998). The g Factor. Praeger, Connecticut, USA.
  • Haier, R.J., Jung, R.E., Yeo, R.A., Head, K. and Alkire, M.T. (2005). The neuroanatomy of general intelligence: sex matters. NeuroImage, 25(1), 320–327.


Last updated: 05-13-2005 07:56:04