ChatGPT, for instance, can understand user inputs and provide relevant, contextual, and personalized information on a wide range of topics, essentially creating a tailored package of knowledge.
It bloody can’t. It can make a lot of things up, though, and sound authoritative while doing so. I’m not sure that keeping an on-tap bullshitter around and treating it as useful is a good plan.
And yes, I realise that as these things develop, this problem will diminish. But honestly, by being made to run before they can walk, these systems are going to find it hard to build trust.