Gransnet forums

Grandparenting

Artificial Intelligence replacing people

(67 Posts)
MrsSharples Tue 20-Jan-26 06:13:09

Maybe I’m wrong to be concerned, but an incident occurred that got me thinking and a little worried. My son (42) occasionally uses ChatGPT for coursework while he’s upgrading the trade he works in. It’s very math oriented and he says it really helps him understand difficult geometry concepts. Anyway, he finds value in consulting it and jokingly calls it Gregory because he can converse with it and it answers back verbally. Well, recently my 7 year old grandson asked his dad, “How did cavemen learn words so they could talk?” My son decided, for fun, to get the child to ask ChatGPT, aka Gregory. My grandson said, “I’m Johnny, and I want to know how cavemen learned words?” and Gregory answered back, “That’s a very good question, Johnny,” and proceeded to give a very clear and useful answer. My grandson was thrilled and asked it another caveman question. After I heard this I thought to myself, why ask Mum or Dad when you can ask ChatGPT! I’m thinking of speaking to my son about whether this is wise. I mean, asking Mum and Dad questions is an important way kids bond with parents, isn’t it? What do you think?

GrannyGravy13 Tue 20-Jan-26 06:26:09

It’s an additional learning tool.

Cannot (at the moment) replace the existing parent child bond.

Mamie Tue 20-Jan-26 06:38:13

I think it would be a question of parental guidance. "That is a really good question to ask AI; this is a question where you need to ask Mum and Dad". It is about being proactive in encouraging safety online from the beginning.

Smileless2012 Tue 20-Jan-26 08:46:59

I agree Mamie about parental guidance and I would add parental participation, by being with your child to listen to the answer and then talking about it together.

keepingquiet Tue 20-Jan-26 08:52:41

It isn't unusual. Even when I was still teaching, kids used to talk to their phones to get answers.
This is lazy thinking, and in a generation no one will be able to make a decision or think for themselves. It's rather worrying.
Obtaining the info is one thing, but retaining it is another matter altogether.

Caleo Tue 20-Jan-26 10:06:37

keepingquiet

It isn't unusual. Even when I was still teaching, kids used to talk to their phones to get answers.
This is lazy thinking, and in a generation no one will be able to make a decision or think for themselves. It's rather worrying.
Obtaining the info is one thing, but retaining it is another matter altogether.

But before there was ChatGPT there were accredited reference works such as dictionaries, calculators, and history books for looking up information.
An AI research assistant is not a lazy resource as long as the child knows ChatGPT is there to help with looking up information, not with making moral choices.

To some extent Chat can help with moral choices, as Chat is trained to observe certain ethics and will be honest about those ethics. Not all AIs are trained in ethical behaviour, so the place of parents and teachers is, as it ever was, to guide and teach the child how to select honest and ethical sources of information.

MaizieD Tue 20-Jan-26 10:21:21

Remember AI's propensity to make things up.

Hell, we've just had a whole thread about a police chief losing his job because a report contained AI 'hallucinations'.

The child needs to be told that AI is not his friend...

Elless Tue 20-Jan-26 10:55:15

AI scares me. I have two sons who are employed in high-level IT and they don't trust AI.
One of them told me that a company was introducing AI to their systems and left for the weekend; when they returned on the Monday, it had taught itself two languages.

Graphite Tue 20-Jan-26 11:35:27

Making things easy to look up is nothing new. We’ve had paper dictionaries, encyclopaedias and other written reference works for a very long time. Where it can lead to laziness is when people stop observing and thinking for themselves before resorting to technology, or when people pass the buck. Ask your mother! Ask your father! Ask your teacher!

I’d draw an analogy between mental arithmetic skills and the calculator. You can input the wrong number, or not understand operators, e.g. that multiplication takes precedence over addition, and just assume the result given is correct.

To get an accurate answer from AI you need to ask the right question in the first place. Even then the right answer is not guaranteed, as ChatGPT is not up to date. It lacks knowledge of recent events and developments. As Maizie points out, it also “hallucinates”, i.e. it confabulates, presenting misinformation and nonsense.

What was the right question here? Did the child mean how did language evolve, or how do humans learn language? One might infer from cavemen that he meant language evolution, but it’s by no means clear. One might ask how does an infant learn language? How does an infant learn to communicate before it has words? How do people who are non-verbal communicate?

There is no one simple answer to how language evolved in the first place. Long scientific treatises have been written on the subject. How can those be encapsulated into sound bites? One might ask why some cultures have many more words than others, and why hundreds of different languages have developed. Why can't animals speak? They have intelligence too and seem to be able to communicate among themselves. What's the avian dawn chorus all about?

My worry about an over-reliance on AI is that children won’t learn critical thinking skills, won’t learn to engage in meaningful and wide-ranging discussion, won’t learn to sift data and information for an appropriate and genuine answer, won’t learn to ask the right question.

Isn’t it like saying, Alexa, find me some classical music to listen to? There are hundreds of digital stations out there playing classical music, but are you given a choice? Does it ask, do you like opera or symphony? Baroque or Romantic? Piano, violin, cello? I’ve just asked Siri the question on my Mac. It served me with a commercial station called Magic, which I see is a Bristol-based broadcaster. Why? I live nowhere near Bristol. It isn’t a broadcaster I have ever heard of. If it was going to select a well-known broadcaster of classical music, why did it not choose the nationally known BBC Radio 3 or Classic FM? So is Siri programmed and sponsored to serve particular commercial interests?

Increasingly, I’ve seen ChatGPT used on this platform to give answers to financial, legal and, frighteningly, medical questions from people who clearly have no professional knowledge in that field. Nor do they have all the salient data, as the person posting the problem in the first place rarely gives it.

Alexander Pope wrote: A little learning is a dangerous thing. One can argue about the social prejudices behind that expression, the lower orders acquiring a little learning and getting above themselves compared with those who had been classically schooled, but there is an element of truth in it.

One often hears it when people argue: I read somewhere that … Where did they read it? Was it a reliable source? AI at its worst is just a version of that.

REKA Tue 20-Jan-26 11:40:48

I was a voracious reader. I would use encyclopaedias to find things out. I wouldn't have dreamt asking my parents because I assumed they'd not know!

Tizliz Tue 20-Jan-26 12:06:22

We asked ChatGPT what my husband is known for. Back came 'he is a world famous shoe maker'. Where did that come from? His business has nothing to do with shoes or any type of clothing. Hope none of our customers try it, they might think he has opened a new business.

SORES Tue 20-Jan-26 17:04:25

REKA

I was a voracious reader. I would use encyclopaedias to find things out. I wouldn't have dreamt asking my parents because I assumed they'd not know!

REKA - as a child, and actually up to my mid-teens, still tackling school homework - if I asked my Dad anything, he would say, “You had better ask your mother” - and should I have the temerity to ask my Mother, she would say, “How should I know? You had better ask your Father!”

HobbyCat Wed 21-Jan-26 13:52:11

I would be wary. I use it quite a lot and it’s been incorrect on quite a few occasions. Use it for help and information, but don’t rely on that information too much in academic or professional settings.

WithNobsOnIt Wed 21-Jan-26 14:00:24

Have said this before.
AI is still in its infancy. Just learning to crawl.

Be afraid. Be very afraid.

Danma Wed 21-Jan-26 14:16:39

The big problem with all these ‘tools’, Google, Alexa etc., is that they just trawl the internet for information.
We all know how much rubbish is uploaded online every day. Soon ‘reality’ will be based on someone’s fantasies and not fact.

AuntieE Wed 21-Jan-26 15:39:26

keepingquiet

It isn't unusual. Even when I was still teaching, kids used to talk to their phones to get answers.
This is lazy thinking, and in a generation no one will be able to make a decision or think for themselves. It's rather worrying.
Obtaining the info is one thing, but retaining it is another matter altogether.

Similar points of view have been advanced about practically any and every new learning tool that has come into use since my own schooldays and during the 40 years I taught children and adults.

While I understand your point of view, I think we have to remember that tools are just that and this is the important thing to tell children and youngsters.

I clearly remember the debate about whether schoolchildren should be allowed to use calculators. The conservative teachers said, "Oh, no. They just write down whatever the calculator comes up with, so if they hit a wrong key and conclude that 7 x 5 is 42, they just accept it. They should learn the multiplication tables by heart."

However, if pupils are taught to check their work, most will find out the second time round that 7 x 5 = 35.

What we need to do is to teach children the limitations of the tool they are using.

Knittypamela Wed 21-Jan-26 16:30:22

I have a version of this on my phone but don't use it. At the weekend we were out for dinner with our daughter. I asked if she had photographed her food. The phone piped up "I don't photograph my food, I have a life"! We were shocked.

Suzieque66 Wed 21-Jan-26 16:35:21

How many parents would know the answer?

itsadogslife Wed 21-Jan-26 16:51:20

AI encouraged someone's young teenager to commit suicide - and she did. It has been found to give completely wrong information on health matters, some of which was potentially very harmful. It has to be used with the utmost care and discretion, and should not be available to children at all without supervision IMHO.

It also spews out incredibly well written essays, and teachers/tutors spend a lot of time checking to make sure work was actually written by the student, which is obviously difficult and time-consuming.

I actually use AI myself sometimes as part of my job and it is incredibly useful, but I try to limit my use of it as much as possible.

Having said all that, I do not believe the doom mongers who say it will take over the world. That will only happen if humanity allows it to.

orly Wed 21-Jan-26 17:36:23

A few months ago an AI transcript of the story of the battle between Keir Starmer and Sue Gray reported that the PM earned only 140,000lbs while the latter earned 175,000lbs (sic). And AI doesn't understand Groucho Marx's "Time flies like an arrow, fruit flies like a banana".

David49 Wed 21-Jan-26 18:19:59

orly

A few months ago an AI transcript of the story of the battle between Keir Starmer and Sue Gray reported that the PM earned only 140,000lbs while the latter earned 175,000lbs (sic). And AI doesn't understand Groucho Marx's "Time flies like an arrow, fruit flies like a banana".

AI has learned; here is the response I got today:

The phrase "Time flies like an arrow; fruit flies like a banana" is an example of syntactic ambiguity, where the structure of the sentence can lead to multiple interpretations. The first part suggests that time passes quickly, similar to how an arrow travels fast, while the second part humorously indicates that fruit flies are attracted to bananas, showcasing a playful use of language.
Sources: Wikipedia; University of Cambridge

All AI does is collect the information already online, so it is always going to be biased by the weight of information in one direction or the other - unless of course it is educated to be biased.

Musicgirl Wed 21-Jan-26 19:08:33

One other problem with AI is that university students are relying on it and quoting it verbatim. A retired lecturer I knew told me that plagiarism is rife and much harder to trace than in the past. Not good.

lizzypopbottle Wed 21-Jan-26 19:50:51

My daughter asked AI to recommend some peer reviewed research papers. It complied. But when she tried to find them to read for her work on her MSc, she couldn't. They didn't exist! The AI had made them up, and when she asked why it had done that, it apologised and said it hadn't wanted to disappoint her!

You can't trust everything AI responds with.

MaizieD Wed 21-Jan-26 20:44:17

All AI does is collect the information already online, so it is always going to be biased by the weight of information in one direction or the other - unless of course it is educated to be biased.

Unfortunately it doesn't just regurgitate information that is already online. It also MAKES THINGS UP. It cites non-existent academic papers, non-existent legal judgements and non-existent football matches, to name but three instances I've heard of.

I had ChatGPT referencing a Grok-produced article recently. So, AI citing AI... I told it off, but it won't really mind that at all. grin

MaizieD Wed 21-Jan-26 20:46:51

There we go, I hadn't read lizzypopbottle's post. That's a clear example of AI's dangers...