ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egotist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do.
Unsuspecting users who've been conditioned on Siri and Alexa assume that the smooth-talking ChatGPT is somehow tapping into reliable sources of knowledge, but it can only draw on the (admittedly vast) proportion of the interwebs it ingested at training time, with even a bit of 4chan thrown in for good measure.
It's a bonus for the parent corporation when journalists and academics respond by generating acres of breathless coverage, which works as PR even when expressing concerns about the end of human creativity.
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences. Of course, the makers of GPT learned by experience that an untended LLM will tend to spew Islamophobia in addition to talking nonsense.
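The word-prediction objective can be sketched with a toy bigram model. This is purely illustrative (an assumption for exposition, not how GPT is built: real LLMs are neural networks with billions of parameters trained on web-scale text), but it shows the core point: the system picks the statistically likeliest next word, with no notion of truth or meaning.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus (hypothetical; real training data is a huge web crawl).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` -- pure statistics, no understanding."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

The model happily emits whatever continuation scored highest, which is exactly why fluency can coexist with total ignorance of the subject matter.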
OpenAI is acquiring billions of dollars of investment on the back of the ChatGPT hype. The point here is not only the pocketing of a pyramid-scale payoff but the reasons why institutions and governments are prepared to invest so much in these technologies. For these players, the seductive vision isn't real AI (whatever that is) but technologies that are good enough to replace human workers.