A Short History of Intelligence
From the Stoics to Silicon Valley, how our idea of intelligence has changed and why it matters
The question “How intelligent are you?” is one of the most deceptively modern questions we can ask. It presupposes something measurable, stable, comparative. But the idea that “intelligence” is a distinct quality, something you can possess, test, and rank, is a relatively recent invention, one more bureaucratic than Platonic in origin. For most of history, intelligence was not a score but a virtue: a mode of being that combined perception, reason, and moral insight. The Athenian sage was not “bright”; he was wise. That we now think in numbers—IQs, SATs, LSATs, LLM benchmark scores—reveals less about minds than it does about the institutions measuring them.
The word “intelligence” derives from the Latin intelligere, which means “to understand”, literally “to choose between” (inter + legere). For the Greeks, this was the realm of nous, the divine intellect that contemplates eternal truths. In early Christian theology, the intellectus was the part of the soul oriented toward God. Intelligence was not a private trait but a kind of moral and spiritual clarity.
It was only in the Enlightenment, with Descartes’ res cogitans, Locke’s blank slate, and Kant’s rational subject, that cognition was recast as a disembodied, universal function, abstracted from the soul, moral virtue, or cosmic order. This paved the way for 19th- and early 20th-century attempts to measure and control it. In 1905, Alfred Binet developed the first intelligence tests to identify French schoolchildren needing extra help. His aim was remedial, not reductive. But by the 1910s, American psychologist Lewis Terman had converted Binet’s scale into the Stanford-Binet IQ test, importing it into the US school system and, eventually, the military. Intelligence became statistical, sortable, and eugenic. The concept was no longer theological or philosophical; it was administrative.
Today, intelligence is again in flux. Large language models like GPT-4, Claude, and Gemini score in the top percentiles on graduate admissions tests even as they arguably lack grounding, meaning, or selfhood. Debates over “AI consciousness” or “machine minds” echo medieval disputes about the soul, but without the metaphysical vocabulary. This is a short history of how intelligence has been imagined, shaped, measured, and challenged. From Plato to psychometrics, from IQ to GPT, it asks: what have we really been trying to quantify all along?
Classical Intelligence
In classical antiquity, intelligence was less a metric than a mode of life. For Plato, intelligence belonged to the rational part of the soul (logistikon), whose task was to orient us toward eternal truths. Learning was not acquisition but recollection (anamnesis), a spiritual return to what the soul once knew before birth.
Aristotle grounded this further. In De Anima, he defined nous as the intellect that grasps first principles, while in the Nicomachean Ethics he split intelligence into sophia (theoretical wisdom) and phronesis (practical wisdom), both forms of rational excellence inseparable from virtue. A little later, the Stoics added a cosmological dimension: reason was not just internal but divine, a share in the logos that structured the universe. Intelligence, in this view, was living in harmony with rational nature.
By late antiquity and into Christian thought, intelligence was increasingly spiritualised. For Augustine, the intellectus was the part of the soul illuminated by God; Aquinas later distinguished intellectus (intuitive understanding) from ratio (discursive reason), both ordered toward truth and God (Summa Theologiae, I, q. 79).
In sum, classical and early Christian thought treated intelligence not as an asset to be measured, but as a faculty of moral and metaphysical orientation. It was inseparable from truth, from virtue, and often from the divine. There was no “general intelligence” factor, no bell curve, no percentile score. There was the human soul and its capacity to contemplate the highest things.
Enlightenment and the Rise of Measurement
The Enlightenment marked a decisive shift in how intelligence was conceptualised: from moral virtue to cognitive capacity, from metaphysical clarity to analytic reasoning. Where the classical world saw intelligence as the soul’s orientation to truth, Enlightenment thinkers recast it as the mind’s capacity to reason independently, a change that set the stage for its eventual quantification.
Descartes’ Cogito ergo sum (1637) placed thinking at the centre of personhood, but in a new, disembodied form: res cogitans, the thinking substance, severed from tradition, community, or theology. Intelligence became the power to doubt, analyse, and deduce. In Discourse on Method, Descartes praises not accumulated knowledge, but the correct use of reason, a radical democratisation of intelligence, since reason was presumed to be equally distributed.
Locke followed with An Essay Concerning Human Understanding, where he described the mind as a blank slate (tabula rasa), formed by experience rather than innate ideas. Intelligence here becomes plastic: shaped by education, environment, and effort. For Kant, in the Critique of Pure Reason, intelligence is the mind’s ability to synthesise sense-data through the categories of the understanding: still universal, but internally structured.
The Enlightenment, in short, made intelligence secular, procedural, and normatively human. But this idealisation of reason also became political. The French Encyclopédistes, led by Diderot and d’Alembert, linked intelligence with progress and the public good, promoting education as the engine of enlightenment. By the late 18th century, intelligence was increasingly tied to utility, nation-building, and administrative control.

The 19th century pushed this further. Thinkers like Herbert Spencer and Francis Galton began to naturalise intelligence, interpreting it through the lens of evolutionary theory. Galton, a cousin of Darwin, coined the term “eugenics” and was one of the first to apply statistical methods to human traits. In Hereditary Genius, he argued that intelligence was inherited and measurable, a claim that would become central to later psychometrics.
The obsession with classification culminated in the invention of IQ testing in the early 20th century, but its roots lie here, in the Enlightenment’s shift: from wisdom to cognition, from moral alignment to analytical power, from truth to technique. Intelligence had become something you could isolate, assess, and improve, not by cultivating the soul, but by training the mind.
The 20th Century: IQ, Standardisation, and the Politics of Intelligence
By the early 20th century, intelligence had become a field of empirical research; something not just to theorise about, but to measure, rank, and distribute. The philosophical idea of intelligence as wisdom or virtue was displaced by the statistical search for g, the “general intelligence factor”, first proposed by Charles Spearman in 1904. Intelligence, for the first time, became a number.
The IQ test emerged as a tool of both science and governance. In 1905, Alfred Binet and Théodore Simon designed a test to identify schoolchildren in need of support, not to define fixed intelligence, but to aid development. Yet in the hands of others, particularly in the Anglophone world, the test was rapidly transformed into a tool of classification. Lewis Terman, at Stanford, adapted Binet’s test into the Stanford-Binet Intelligence Scales, arguing for its use in identifying “gifted” children, and, more darkly, in guiding eugenic policy.
By the 1920s, IQ testing was institutionalised. The U.S. military’s Army Alpha and Beta tests screened over 1.7 million soldiers during World War I, laying the foundation for a techno-bureaucratic vision of intelligence. Psychologists like Henry H. Goddard used IQ to argue for immigration quotas and sterilisation programs. Intelligence was no longer a virtue; it was now a number used to justify power, privilege, and exclusion.
Yet alongside this dark history, the 20th century also saw a flourishing of competing theories. Jean Piaget studied developmental intelligence as a staged, unfolding process. Howard Gardner’s Theory of Multiple Intelligences challenged the dominance of g, proposing linguistic, spatial, interpersonal, and even bodily-kinesthetic forms of intelligence. Robert Sternberg’s triarchic theory distinguished between analytical, creative, and practical intelligence.
Nevertheless, the IQ test endured, reinforced by Cold War anxieties about national competitiveness, and later by the rise of standardised testing in education: the SAT (Scholastic Aptitude Test), GRE, and beyond. Intelligence became a gatekeeping mechanism. In Silicon Valley, where the ethos of “cognitive elitism” took hold, high IQ was valorised as a proxy for value, and in some cases, for virtue.
By the century’s end, The Bell Curve (1994), by Richard Herrnstein and Charles Murray, reignited public debate with controversial claims about racial differences in intelligence. Critics, including Stephen Jay Gould, argued that reducing intelligence to a single quantifiable factor was politically and scientifically flawed.
Overall, the 20th century reified intelligence: it built institutions around it, tested millions for it, and fought over what it meant. Intelligence became a cultural weapon: invoked to justify inequality, predict economic success, and shape education, often without interrogating what, exactly, it was measuring.
Intelligence in the Age of Data and Machines
In the 21st century, the definition of intelligence has once again become unsettled, and this time, the pressure is coming not from psychology or philosophy, but from technology. As machine learning systems now outperform humans on many tasks once considered hallmarks of intelligence—language, pattern recognition—we are forced to ask: was our idea of intelligence too narrow all along?
The AI boom, particularly since the deep learning revolution of the 2010s, has profoundly destabilised older assumptions. Models like GPT-4, AlphaZero, and DALL·E exhibit behaviours that mimic reasoning, abstraction, even aesthetic judgment. Yet these systems have no self-awareness, no consciousness, no “understanding” in the classical sense. As Emily Bender, Timnit Gebru, and their co-authors argue in the paper On the Dangers of Stochastic Parrots, large language models merely simulate intelligence without meaning, grounding, or ethics.
Still, their capabilities are undeniable. In 2023, GPT-4 reportedly achieved scores around the 90th percentile on the LSAT and even higher on GRE verbal reasoning, raising the question of what such exams actually measure. Commentators such as José Luis Ricón, Gary Marcus, and Yann LeCun debate whether these models are truly “intelligent” or merely “clever pattern recognisers”. The boundary between intelligence and computation has blurred.
Moreover, big data has created a new angle on intelligence: collective, algorithmic, and ambient. From recommender systems to surveillance capitalism (see Shoshana Zuboff’s The Age of Surveillance Capitalism), intelligence is increasingly extracted and commodified. We are no longer just measuring intelligence; we are mining it, outsourcing it, automating it.
At the same time, human intelligence is being redefined. Concepts like “emotional intelligence” and “cognitive liberty” have emerged in response to the medicalisation and technologisation of the mind. In education and work, there is growing pushback against standardised metrics, with platforms like Khan Academy, Duolingo, and Lambda School reframing intelligence as iterative, accessible, and skill-based rather than innate.
More philosophically, the rise of posthumanism, from Rosi Braidotti to Nick Bostrom, asks whether “intelligence” is still a human concept at all. If a neural network can compose music, write legal briefs, or pass the bar exam, what distinguishes human thought? Is consciousness essential, or just incidental?
In this new landscape, intelligence no longer means a single trait, a test score, or a soul’s capacity to reason. It is networked, contested, technological, and increasingly non-human.