To elaborate on the chatbot: it isn’t that it upgraded my view of how good artificial intelligence can be, but rather that it downgraded my view of human intelligence. ChatGPT is very good at stringing together empty phrases and filler words — in other words, at producing bullshit in the Harry Frankfurt sense. Its skill at writing plausible-looking college essays, personal statements, and letters of recommendation reminded me, maybe even showed me for the first time, that most of those are bullshit too.
As someone who has spent the better part of the last 12 years drafting his own letters of recommendation (thank you, USCIS!), it was demoralizing to be reminded of all that wasted effort. Worse yet was the stream of college professors lamenting the new reality of now-and-forever-compromised term papers, decorated with screenshots of ChatGPT’s essays, blind to their own self-condemnation: if an unintelligent, unreasoning, letter-guessing algorithm can produce content to your liking better than your own students, then what kind of a class are you teaching there, Professor?
Instead of heralding the rise of artificial general intelligence, ChatGPT showed me the deficiencies of human intelligence by serving as a variant of the reverse Turing test: can a human write sufficiently well to be recognized as one? This is, of course, not my original thought but Taleb’s, who wrote about the reverse Turing test two decades ago in Fooled by Randomness, and mentioned it again in light of the ChatGPT screenshot onslaught. So am I failing the test too?