Just read that Microsoft is investing $10 billion in OpenAI, the company behind ChatGPT.
https://en.wikipedia.org/wiki/ChatGPT
I used something similar, maybe the same, a couple of days ago at a Google site. It was very good. Concise, crisp, very helpful. Like asking a human who knew the answers.
It is quite impressive. For now, the main drawback of these models is that while they often provide very helpful information, they also frequently "hallucinate" (the term AI researchers use when the models make things up or get things wrong), and they usually sound very confident while doing so. In that respect (sometimes very helpful, sometimes confidently wrong), they're a lot like forum posters.
For now they're most helpful in areas where you know enough to check their work, which limits their utility to some extent. I think copywriters and creatives doing piecework in general are in trouble, though! Many folks are confident that the models will improve with respect to truth-telling, and I expect they will to some extent. But as long as the training set includes the entire internet, I think there will be periodic detours into nonsense, because we haven't solved the problem of humans "hallucinating" in the AI-research sense.
For an easy example of what I mean, check out this post:
https://www.engraved.blog/building-a-virtual-machine-inside/
There the author convinced it to pretend to be a virtual machine, and it provided correct outputs! Looks pretty good! But when I ran the experiment myself, I found that it frequently provided plausible-looking but completely false outputs (hashes were formatted correctly but completely wrong, etc.). If you didn't know better, it might convince you that you'd actually built a virtual machine (i.e. that it was really running commands in the background), but it's actually just role-playing as a terminal, producing plausibly formatted outputs that are only sometimes correct.
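If you want to reproduce the check yourself, here's a minimal sketch in Python of the kind of verification I mean, assuming you asked the fake terminal to run "echo hello | sha256sum". The model's reply below is invented for illustration; the point is that you can recompute the real answer locally.

    import hashlib

    # Invented reply from the role-played "terminal" for: echo hello | sha256sum
    # (right length and format for SHA-256, but not necessarily the right digest)
    model_claimed = "7b4e1a9c0d2f5e8a6b3c1d4e9f0a2b5c8d7e6f1a3b4c5d9e0f2a1b3c4d5e6f7a"

    # What the command actually produces: SHA-256 of b"hello\n"
    # (echo appends a trailing newline, which changes the digest entirely)
    real = hashlib.sha256(b"hello\n").hexdigest()

    print("model says:", model_claimed)
    print("real value:", real)
    print("match?", model_claimed == real)  # False whenever the model guessed

Anything you can recompute locally like this makes it easy to catch the role-play out.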
Interesting times for computing!