My colleague Graham Burnett has a terrific, and terrifically provocative, piece in The New Yorker this week, entitled “Will the Humanities Survive Artificial Intelligence?” He claims that even Donald Trump’s assault on elite universities “feels, frankly, like a sideshow” in comparison with the AI “juggernaut… coming at us with shocking speed.” He adds that searching for the answer to many sorts of questions in books now looks “oddly inefficient” in comparison with querying AI. He reports on an exercise in which he fed a 900-page course packet into Google’s free AI tool, asked it to create a podcast discussing the material, and listened, astonished, as “the cheerful bots began drawing connections… with genuine insight.” A class assignment involving AI left him with the sensation of “watching a new kind of creature being born.” And he asks: Does this all “herald the end of ‘the humanities’”? His answer: “In one sense, absolutely.” And he continues: “Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold—nobody will read them, and systems such as these will be able to generate them, endlessly, at the push of a button.” There will be no further need for humans to produce “fact-based knowledge about humanistic things.” But he doesn’t despair. The point of the humanities, he continues, is not to find the right answers, but to pose the right questions, the fundamental questions. That is still something that only humans can do. Indeed, AI may liberate humanists to concentrate on this most important task.
It’s a captivating, brilliant essay. But at the same time, and at the risk of sounding like a crusty Luddite, I think it misses something pretty fundamental about the humanities. I have not used AI anywhere near as much as Burnett has. I’ve relied on it principally to correct grammatical errors when I write in French—something it does impressively well. I’ve experimented with asking it to summarize articles, which, again, it can do extremely well. I am willing to stipulate that it is capable of all the things that Burnett attributes to it. But there are two important words that do not appear in his essay: judgment and originality.
What Burnett describes AI doing in the essay is above all description and synthesis. Its capacities in this regard are indeed extraordinary. In my own brief experiments, I have found that it can do a dazzling job summarizing and explaining difficult issues. It is also not shy about offering its own apparent judgments. But I have yet to see evidence that it is doing anything more than synthesizing the most frequently cited judgments that it has found on the internet, interpreted according to patterns in the data it was trained on.
Genuine scholarly judgment, however, is not just based on pattern recognition. It is based, first and foremost, on skepticism: on subjecting every piece of evidence and every link in a proposed chain of causality to rigorous examination, asking whether they are what they appear to be, and considering alternative understandings. It is based, in other words, on a willingness to disbelieve. From what I have read, and from what Burnett describes, this is not how a “large language model” AI operates. What AI does with scholarly discourse is akin to learning to speak a language with an impressive degree of fluency. But exercising genuine scholarly judgment often involves breaking the rules of existing discourse—inventing new language, so to speak.
Furthermore, by definition, AI exists to respond to queries and always tries to answer as helpfully as possible. Genuine scholarly judgment, by contrast, often involves identifying questions mal posées (badly posed questions). In my experiments with AI, it has often responded, with patently insincere flattery, “that’s a great question.” It has never responded: “that’s the wrong question.”
Scholarly progress, of course, also relies on the posing of new, unexpected, original questions—and not just about the great mysteries of human life, but about more tangible things, like the meaning of the French Revolution. These questions don’t simply arise because someone has scanned through the internet in an especially thorough manner, but because of personal experiences and quirks and changes in the world that lie wholly beyond a bot’s horizon of experience. And the answers to these questions in turn may well involve the discovery of new evidence that does not exist either in the data AI was trained on, or even on the internet. As any good historian knows, there is still a lot of material out there in the world that is not actually online.
For this reason, I can’t accept Burnett’s claim that traditional scholarly monographs will be obsolete within five years, and that AI “will be able to generate them, endlessly, at the push of a button.” Yes, AI may well make certain types of scholarly work obsolete. I think above all of the tedious, large-scale reference works that academic publishers have found so profitable of late: all those “Companions” and “Handbooks” and “Online Research Encyclopedias.” Many of the articles found in these works, written by harried scholars who know they will receive little if any money or professional recognition for the task, already read as if they were written by bots, and quickly go out of date (I’m guilty of producing some of these myself). But a really good, challenging monograph, one that poses unexpected questions, uses rigorous, skeptical scholarly judgment, discovers new evidence, and comes to strikingly original conclusions? It seems to me that in a world increasingly dominated by bots that sound boringly alike, works like this will stand out all the more.
My brilliant colleague argues that the AI revolution will liberate humanists to concentrate on “the central questions that confront every human being: How to live? What to do? How to face death?” Perhaps. But, again, I think there are a lot of less exalted, traditional humanistic questions out there that are still going to need actual (human) humanists to ponder, to brood on, and to turn about in their minds until some unexpected spark flies out. There are more things in heaven and earth than are dreamt of by a bot. And anyway, while the bots hallucinate, they don’t dream.
I would add two observations to David’s clear-eyed comments. Because of the design of AI’s learning model and processing methodology (putting hallucinations to one side), heavy reliance on it, absent creative prompts, will tend to homogenize and ossify a field, whether academic or in professions that rely on interpretation, like law. Heavy use of AI should therefore be accompanied by somewhat traditional training, to produce scholars, lawyers, etc., who are creative and skilled at developing the best questions (prompts), and careful and obsessive in vetting the responses from AI for both accuracy and value. If this sounds a lot like training as it exists now (interpreting documents, past essays or decisions, and commonly accepted interpretations for accuracy and applicability to facts and data, while looking for new and more convincing approaches), it is. A fundamental limitation of LLMs — the kind of limitation that shows they have not achieved independent agency or consciousness — is that they won’t question the use of language (terms, ideas, characterizations) if no one in their database has put together sentences doing that. So much lies in the details of the meaning and usage of language. To the extent that the universe of data an LLM “learns” on and “searches” is increasingly authored or co-authored by AI, it will become a recursive loop desperately in need of human correctives and innovation.
As you say, the key is that AI does description and synthesis very well. But what is synthesis? It’s a digest of what is ALREADY known. Monographs provide NEW knowledge, and without them AI would be useless; relying on it would only ensure stasis. Moreover, I do not think that Trump’s attack on the universities is a “sideshow,” far from it. Your posts on that have been very good; you were among the first to warn about Columbia’s genuflection.