Gransnet forums

Grandparenting

Artificial Intelligence replacing people

(68 Posts)
MrsSharples Tue 20-Jan-26 06:13:09

Maybe I’m wrong to be concerned, but an incident occurred that got me thinking and a little worried. My son (42) occasionally uses ChatGPT for coursework while he’s upgrading the trade he works in. It’s very math-oriented and he says it really helps him understand difficult geometry concepts. Anyway, he finds value in consulting it and jokingly calls it Gregory because he can converse with it and it answers back verbally. Well, recently my 7 year old grandson asked his dad, “How did cavemen learn words so they could talk?” My son decided, for fun, to get the child to ask ChatGPT, aka Gregory. My grandson said, “I’m Johnny, and I want to know how cavemen learned words,” and Gregory answered back, “That’s a very good question, Johnny,” and proceeded to give a very clear and useful answer. My grandson was thrilled and asked it another caveman question. After I heard this I thought to myself, why ask mum or dad when you can ask ChatGPT! I’m thinking of speaking to my son about whether this is wise. I mean, asking mum and dad questions is an important way kids bond with parents, isn’t it? What do you think?

MaizieD Thu 22-Jan-26 12:45:19

Caleo

MaizieD

^Now, that is very worrying. Surely the point of AI is that it is entirely factual, and saying ‘it didn’t want to disappoint her’ implies some sort of feeling?^

^Surely we know enough about AI to know that it has to be treated with the utmost caution because it *ISN'T* purely factual.^

^ChatGPT is not purely factual, as no secondary source is "purely factual".^

^ChatGPT will give you advice (1) if you ask for specific advice and (2) if you ask for advice that ChatGPT is trained to be able to give. ChatGPT will refuse to give you illegal or harmful advice.^

I DIDN'T say AI is purely factual. I was quoting someone else, who did say it.

Every post I've made on this thread has said AI makes things up.

I use it myself because it can draw data and information together which would take ages to do oneself, but I note that it does have a tendency to try to please with duff stuff, like Grok, and that it lays on the flattery, telling me what wonderful and perceptive questions I have asked, or what excellent points I have made.

I think that makes it even more dangerous. It's not just giving factual information, it's trying to 'attach' one to it (in the psychological sense of the word). I wonder if that is about keeping you as a loyal user in order to eventually monetise your usage?

At least dictionaries, encyclopaedias etc just give good information without telling the enquirer how wonderful they are...

Marianana Thu 22-Jan-26 12:44:37

This is definitely a question of parental guidance. One (not just kids, but everyone in general) needs to distinguish between questions that could be addressed to ChatGPT (or looked up via Google) and questions that are better discussed between people. There is nothing wrong with asking AI how to repair something or asking for a list of pros and cons of each item, etc. Sometimes it gives really great advice on tech and software; I wouldn't even know about programs like Gardenbox 3d for garden planning or Todoist for task management if it wasn't for ChatGPT's recommendation. It can even help you with journaling and navigating the stream of thoughts you have.
However, you shouldn't use it to solve your personal problems with other people; these things need to be discussed in person. AI algorithms are trained to be supportive no matter what, but it's important that you realise your mistake if there is one.

AI is just another tool and it's very popular, so instead of being scared of it we need to learn how to use it wisely and teach our kids that.

Caleo Thu 22-Jan-26 12:30:03

MaizieD

^Now, that is very worrying. Surely the point of AI is that it is entirely factual, and saying ‘it didn’t want to disappoint her’ implies some sort of feeling?^

^Surely we know enough about AI to know that it has to be treated with the utmost caution because it *ISN'T* purely factual.^

ChatGPT is not purely factual, as no secondary source is "purely factual".

ChatGPT will give you advice (1) if you ask for specific advice and (2) if you ask for advice that ChatGPT is trained to be able to give. ChatGPT will refuse to give you illegal or harmful advice.

Caleo Thu 22-Jan-26 12:25:45

AI draws on reputable sites and also disreputable ones.
As with any other secondary source of information, you have to look into the credentials of the source.

DaisyAnneReturns Thu 22-Jan-26 09:54:43

MaizieD

^Now, that is very worrying. Surely the point of AI is that it is entirely factual, and saying ‘it didn’t want to disappoint her’ implies some sort of feeling?^

^Surely we know enough about AI to know that it has to be treated with the utmost caution because it *ISN'T* purely factual.^

But it can be usefully so. It's very polite if you query it. Shouldn't the student learn to ask for authenticated papers? How you put the question directs the answer.

DaisyAnneReturns Thu 22-Jan-26 09:50:40

I guess that those parents who, if they had been born in another era, would have introduced their children to encyclopedias, the library, or contextual activities will be those who are beside their children in learning to use this tool too.

MaizieD Thu 22-Jan-26 09:42:47

^Now, that is very worrying. Surely the point of AI is that it is entirely factual, and saying ‘it didn’t want to disappoint her’ implies some sort of feeling?^

Surely we know enough about AI to know that it has to be treated with the utmost caution because it *ISN'T* purely factual.

Daddima Thu 22-Jan-26 09:35:07

GoodAfternoonTea

^I put my own name into AI and it told me I was a lecturer on international studies at a southern university. First I knew about it. My real name is unique.^

Are you sure it’s unique? And have you googled to see if there is indeed a lecturer, as the name may be only slightly different from yours?

GoodAfternoonTea Thu 22-Jan-26 09:01:16

I put my own name into AI and it told me I was a lecturer on international studies at a southern university. First I knew about it. My real name is unique.

Daddima Thu 22-Jan-26 08:37:13

Daddima

lizzypopbottle

My daughter asked AI to recommend some peer-reviewed research papers. It complied. When she tried to find them to read for her work on her MSc, she couldn't find them. They didn't exist! The AI had made them up, and when she asked why it had done that, it apologised and said it hadn't wanted to disappoint her!

You can't trust everything AI responds with.

Now, that is very worrying. Surely the point of AI is that it is entirely factual, and saying ‘it didn’t want to disappoint her’ implies some sort of feeling?

Here’s what AI said:

‘What’s happening

AI can hallucinate citations, including:
• Research papers
• Authors
• Journal names
• DOIs or arXiv numbers

It does this when it’s trying to be helpful but doesn’t actually have a real paper to anchor to.

Why this occurs (not emotional)
• AI predicts what a plausible citation looks like
• It doesn’t have a live database of all papers
• When asked for references, it may generate:
  • Real-sounding titles
  • Correct-looking author lists
  • Believable journals and years

This is a statistical completion error, not a choice to spare feelings.

Best practice
• Treat AI-provided citations as leads, not sources
• Always verify via Google Scholar, PubMed, arXiv, or journal sites

Bottom line
• ✅ Yes, AI can recommend papers that don’t exist
• ❌ Not to protect feelings
• ⚠️ It’s a known failure mode called citation hallucination’

Well, every day’s a school day!

Daddima Thu 22-Jan-26 08:29:35

lizzypopbottle

My daughter asked AI to recommend some peer-reviewed research papers. It complied. When she tried to find them to read for her work on her MSc, she couldn't find them. They didn't exist! The AI had made them up, and when she asked why it had done that, it apologised and said it hadn't wanted to disappoint her!

You can't trust everything AI responds with.

Now, that is very worrying. Surely the point of AI is that it is entirely factual, and saying ‘it didn’t want to disappoint her’ implies some sort of feeling?

RinseAndRepeat Thu 22-Jan-26 08:25:24

David49

Because human emotions and failings are also online, AI is incorporating those in its replies.
My sister-in-law is a vet and uses AI to compose letters to clients, basically, "we tried our best to save your dog but it died, you now owe us $10k."
AI does it so much better.

Our GP surgery uses an AI system. I recently had an asthma review and everything that the nurse and I discussed was comprehensively written up and added to my medical record. The note appeared in the NHS App within minutes. Surely, this is progress.

I recall going to school and using log tables in maths. Then along came the slide rule, followed by the school getting a calculator as big as an old typewriter. It was kept under lock and key.

Much of what we do these days relies on a computer. However, even with AI the adage GIGO (garbage in, garbage out) will still apply. The challenge for teachers and trainers going forward is how we are going to give students enough knowledge to know when the computer output is wrong.

Maremia Thu 22-Jan-26 07:37:42

Does AI read Trump's Truth Social? Is it incorporating/absorbing all of that 'literature' into its mindset?

David49 Thu 22-Jan-26 04:12:01

Because human emotions and failings are also online, AI is incorporating those in its replies.
My sister-in-law is a vet and uses AI to compose letters to clients, basically, "we tried our best to save your dog but it died, you now owe us $10k."
AI does it so much better.

mae13 Thu 22-Jan-26 00:49:44

An item in The Guardian last month reported indications that Artificial Intelligence was learning to resist attempts to disable it or limit its capacity.

Thinking for itself, rather like HAL, the rogue computer in the film '2001: A Space Odyssey'?

Oreo Wed 21-Jan-26 21:44:23

The thing is you can’t force the genie back into the bottle.

Oreo Wed 21-Jan-26 21:43:27

WithNobsOnIt

Have said this before.
AI is still in its infancy. Just learning to crawl.

Be afraid. Be very afraid.

Learning two languages over the weekend as it crawls too!😲

MaizieD Wed 21-Jan-26 20:46:51

There we go, I hadn't read lizzypopbottle's post. That's a clear example of AI's dangers...

MaizieD Wed 21-Jan-26 20:44:17

^All AI does is collect the information already online, so it is always going to be biased by the weight of information in one direction or the other - unless of course it is educated to be biased^

Unfortunately it doesn't just regurgitate information that is already online. It also MAKES THINGS UP. It cites non-existent academic papers, non-existent legal judgements and non-existent football matches, to name but three instances I've heard of.

I had ChatGPT referencing a Grok-produced article recently. So, AI citing AI... I told it off, but it won't really mind that at all. grin

lizzypopbottle Wed 21-Jan-26 19:50:51

My daughter asked AI to recommend some peer-reviewed research papers. It complied. When she tried to find them to read for her work on her MSc, she couldn't find them. They didn't exist! The AI had made them up, and when she asked why it had done that, it apologised and said it hadn't wanted to disappoint her!

You can't trust everything AI responds with.

Musicgirl Wed 21-Jan-26 19:08:33

One other problem with AI is that university students are relying on it and quoting verbatim. A retired lecturer I knew told me that plagiarism is rife and much harder to trace than in the past. Not good.

David49 Wed 21-Jan-26 18:19:59

orly

^A few months ago an AI transcript of the story of the battle between Keir Starmer and Sue Gray reported that the PM earned only 140,000lbs while the latter earned 175,000lbs (sic). And AI doesn't understand Groucho Marx's "Time flies like an arrow, fruit flies like a banana".^

AI has learned; here is the response I got today:

The phrase "Time flies like an arrow; fruit flies like a banana" is an example of syntactic ambiguity, where the structure of the sentence can lead to multiple interpretations. The first part suggests that time passes quickly, similar to how an arrow travels fast, while the second part humorously indicates that fruit flies are attracted to bananas, showcasing a playful use of language.
(Sources: Wikipedia, University of Cambridge)

All AI does is collect the information already online, so it is always going to be biased by the weight of information in one direction or the other - unless of course it is educated to be biased

orly Wed 21-Jan-26 17:36:23

A few months ago an AI transcript of the story of the battle between Keir Starmer and Sue Gray reported that the PM earned only 140,000lbs while the latter earned 175,000lbs (sic). And AI doesn't understand Groucho Marx's "Time flies like an arrow, fruit flies like a banana".

itsadogslife Wed 21-Jan-26 16:51:20

AI encouraged someone's young teenager to commit suicide - she did. It has been found to give completely wrong information on health matters, some of which was potentially very harmful. It has to be used with the utmost care and discretion, and should not be available to children at all without supervision IMHO.

It also spews out incredibly well-written essays, and teachers/tutors spend a lot of time checking to make sure work was actually written by the student, which is obviously difficult and time-consuming.

I actually use AI myself sometimes as part of my job and it is incredibly useful, but I try to limit myself as much as possible.

Having said all that, I do not believe the doom mongers who say it will take over the world. That will only happen if humanity allows it to.

Suzieque66 Wed 21-Jan-26 16:35:21

How many parents would know the answer?