Unmodified ChatGPT output, if it were produced by a human, would precisely fit the definition of bullshit (BS) from Harry Frankfurt’s essay: words meant to persuade without regard for truth. We can debate whether an algorithm can have intent (I’d say not), so on its own the output would not qualify as BS, but it certainly has no regard for truth, because predictive AIs have no concept of anything other than the probability of one word coming after another.
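To make that last point concrete, here is a deliberately tiny sketch (nothing like ChatGPT’s actual architecture, just a toy bigram model) of what "predicting the next word" means: the model tracks only which words tend to follow which, and generates text by chaining those probabilities, with no representation of truth anywhere.

```python
from collections import Counter, defaultdict

# Toy corpus; the model will learn only word-after-word counts from it.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count how often `nxt` follows `prev`

def next_word(word):
    # Pick the most probable continuation of `word` under the toy model.
    return follows[word].most_common(1)[0][0]

# "Write" a few words starting from "the": each step consults nothing
# but the probability of one word coming after another.
out = ["the"]
for _ in range(3):
    out.append(next_word(out[-1]))

print(" ".join(out))
```

The output is fluent-looking word salad: locally plausible, globally indifferent to whether anything it says is true, which is exactly the property at issue.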
So, if people are worried about ChatGPT or any other predictive AI replacing them, or passing the Turing test, that is only to the extent that their work was BS anyway and that, as Frankfurt predicted, we are awash in BS and have become desensitized to it, almost expecting it.
With that in mind, I find it amusing that much of the reporting on ChatGPT — some of which I commented on — misses the BS-ness of predictive AIs while itself being BS. Well, amusing and terrifying at the same time.
This is in response to a question.