My colleague Graham Burnett has a terrific, and terrifically provocative, piece in The New Yorker this week, entitled “Will the Humanities Survive Artificial Intelligence?” He claims that even Donald Trump’s assault on elite universities “feels, frankly, like a sideshow” in comparison with the AI “juggernaut… coming at us with shocking speed.” He adds that searching for the answer to many sorts of questions in books now looks “oddly inefficient” in comparison with querying AI. He reports on an exercise in which he fed a 900-page course packet into Google’s free AI tool, asked it to create a podcast discussing the material, and listened, astonished, as “the cheerful bots began drawing connections… with genuine insight.” A class assignment involving AI left him with the sensation of “watching a new kind of creature being born.” And he asks: Does this all “herald the end of ‘the humanities’”? His answer: “In one sense, absolutely.” And he continues: “Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold—nobody will read them, and systems such as these will be able to generate them, endlessly, at the push of a button.” There will be no further need for humans to produce “fact-based knowledge…”
I would add two observations to David’s clear-eyed comments. Because of the design of AI’s learning model and processing methodology (putting hallucinations to one side), and absent creative prompts, heavy reliance on it will tend to homogenize and ossify a field, whether academic or a profession that relies on interpretation, like law. Heavy use of AI should therefore be accompanied by somewhat traditional training, to produce scholars, lawyers, etc. who are creative, skilled at developing the best questions (prompts), and careful and obsessive in vetting AI’s responses for both accuracy and value. If this sounds a lot like training as it exists now (interpreting documents, past essays or decisions, and commonly accepted interpretations for accuracy and applicability to facts and data, while looking for new and more convincing approaches) it is. A fundamental limitation of LLMs, the kind of limitation that shows they have not achieved independent agency or consciousness, is that they will not question the use of language (terms, ideas, characterizations) if no one in their database has put together sentences doing so. So much lies in the details of the meaning and usage of language. To the extent that the universe of data an LLM “learns” on and “searches” is increasingly authored or co-authored by AI, it will become a recursive loop desperately in need of human correctives and innovation.
As you say, the key is that AI does description and synthesis very well. But what is synthesis? It's a digest of what is ALREADY known. Monographs provide NEW knowledge; without them AI would be useless, and relying on it would only ensure stasis. Moreover, I do not think that Trump's attack on the universities is a "sideshow," far from it. Your posts on that have been very good; you were among the first to warn about Columbia's genuflection.
Thanks!
Exactly. Thanks.
Brilliant! Thanks.
Thanks!!