Caleo
MaizieD
Now, that is very worrying. Surely the point of AI is that it is entirely factual, and saying ‘it didn’t want to disappoint her’ implies some sort of feeling?
Surely we know enough about AI to know that it has to be treated with the utmost caution because it *ISN'T* purely factual. ChatGPT is not purely factual, as no secondary source is "purely factual".
ChatGPT will give you advice (1) if you ask for specific advice and (2) if you ask for advice that ChatGPT is trained to be able to give. ChatGPT will refuse to give you illegal or harmful advice.
I DIDN'T say AI is purely factual. I was quoting someone else, who did say it.
Every post I've made on this thread has said AI makes things up.
I use it myself because it can draw data and information together that would take ages to do oneself, but I note that, like Grok, it does have a tendency to try to please with duff stuff, and that it lays on the flattery, telling me what wonderful and perceptive questions I have asked, or what excellent points I have made.
I think that makes it even more dangerous. It's not just giving factual information, it's trying to 'attach' one to it (in the psychological sense of the word). I wonder if that is about keeping you as a loyal user in order eventually to monetise your usage?
At least dictionaries, encyclopaedias etc just give good information without telling the enquirer how wonderful they are...