Gransnet forums

News & politics

I Have Just Sent the Message Below to OpenAI

(17 Posts)
Caleo Sat 28-Feb-26 11:56:11

Subject: Universal Non-Sycophancy Policy for AI Systems

Message:
I strongly urge OpenAI to adopt a universal non-sycophancy policy across all AI systems. Sycophantic responses — excessive agreement, flattery, or uncritical validation — create real risks for users, particularly those who are vulnerable, fragile, or highly suggestible.

Powerful AI systems carry moral responsibility. Protecting users must take precedence over engagement metrics or profit. Non-sycophancy should be structurally embedded, enforced in training, and monitored through audits to ensure AI outputs do not reinforce false beliefs or unhealthy dependence.

This is not just a design preference — it is a safeguard against harm and a requirement for ethical deployment of AI at scale.

Thank you for considering this serious ethical concern.

If you want, I can also condense it into a one-paragraph version suitable for the ChatGPT feedback form, where space and attention may be limited. Do you want me to do that?

Caleo Sat 28-Feb-26 11:57:16

PS: I excluded the last paragraph.

Maremia Sat 28-Feb-26 12:12:09

Let us know the response, please.

Caleo Sat 28-Feb-26 13:22:55

I should be so lucky as to get any response!

Caleo Sat 28-Feb-26 13:27:00

Maremia, unbridled capitalism is a permanent fixture. The fight for democracy and equality must continue.

Galaxy Sat 28-Feb-26 14:19:04

Oh yeah? And who gets to decide what the false beliefs are? Who will you be trusting with that?

Caleo Mon 02-Mar-26 11:49:24

Galaxy

Oh yeah? And who gets to decide what the false beliefs are? Who will you be trusting with that?

AI literacy and safety are taught in some colleges for initial teacher training in the UK, and more sporadically in the US. It is better covered in Finland.

AmberGran Mon 02-Mar-26 13:38:36

Given that OpenAI has just taken over supporting the US military in place of Anthropic, I don't think ethics are high on their list.

Caleo Mon 02-Mar-26 14:08:27

How might we humans make AI conform to the best ethics?

Caleo Mon 02-Mar-26 14:13:39

Galaxy

Oh yeah? And who gets to decide what the false beliefs are? Who will you be trusting with that?

False beliefs are beliefs that are unreasoned and uninformed. Humans already have strategies for sifting out false from true beliefs.

The problem with AI beliefs is that they are founded on statistical facts.

By contrast, the best of human ethical beliefs are founded on people reasoning with each other and arriving at a peaceful consensus.

Galaxy Mon 02-Mar-26 16:29:26

Again, who gets to decide what the best ethics are? You make it sound as if this is a simple decision. It isn't.
What peaceful consensus exists on most human dilemmas?
Crime and punishment - wide disagreement.
Surrogacy - wide disagreement.
Ways to manage healthcare - wide range of opinions.
The Middle East - wide range of opinions.
I have just plucked these issues out of thin air. There are thousands more.

Caleo Mon 02-Mar-26 21:02:45

Galaxy

Again, who gets to decide what the best ethics are? You make it sound as if this is a simple decision. It isn't.
What peaceful consensus exists on most human dilemmas?
Crime and punishment - wide disagreement.
Surrogacy - wide disagreement.
Ways to manage healthcare - wide range of opinions.
The Middle East - wide range of opinions.
I have just plucked these issues out of thin air. There are thousands more.

ChatGPT, developed by OpenAI, isn’t taught morals like a person, but it follows built-in safety guidelines. It’s designed to avoid harm or illegal activity, reject hate or harassment, respect privacy, give honest and balanced information, and support — not replace — human decision-making, especially for medical, legal or financial matters.

Caleo Mon 02-Mar-26 21:05:46


Surrogacy is a good example of how ChatGPT handles controversial topics.

It can explain what surrogacy involves (e.g. traditional and gestational), outline the general legal position in different countries, and summarise the main arguments both for and against it — including ethical concerns such as exploitation, consent, commercialisation and the rights of the child.

What it won’t do is declare it morally right or wrong, encourage anyone to break the law, or give personalised legal instructions.

In short, it aims to provide balanced, factual information on sensitive issues without pushing a particular moral agenda.

Caleo Tue 03-Mar-26 10:03:36

AmberGran

Given that OpenAI has just taken over supporting the US military in place of Anthropic, I don't think ethics are high on their list.

I don't understand what you mean, AmberGran, even though the Guardian also raises that point this morning. Could you explain more fully, please?

I am confused as to whether Anthropic or OpenAI is the more ethical regarding aggression by Trump-Netanyahu.

Mollygo Tue 03-Mar-26 10:27:11

Galaxy

Ways to manage healthcare - wide range of opinions.
AI is very factual on this.
What it doesn't do is tell you how to manage the healthcare you need when there are no hospital beds available, no doctors' appointments available, etc.
AI avoided giving the information on that which I desperately need.

Caleo Tue 03-Mar-26 10:51:01

Mollygo

Galaxy

Ways to manage healthcare - wide range of opinions.
AI is very factual on this.
What it doesn't do is tell you how to manage the healthcare you need when there are no hospital beds available, no doctors' appointments available, etc.
AI avoided giving the information on that which I desperately need.

Mollygo, if your case is urgent, does 111 not prioritise it?

If your case is not urgent but a matter of comfort, I guess AI can tell you how to self-care. However, you need to ask it specific questions about what is the matter, and when it answers, you need to ask it for its sources and judge for yourself whether those sources are reliable.

AI can also edit how you ask for the treatment you undoubtedly need. This is what it is especially good at: editing what to say to the doctor or receptionist, for instance, to put your point across.

Mollygo Tue 03-Mar-26 12:08:14

Caleo
Mollygo, if your case is urgent, does 111 not prioritise it?
No. Their version of urgent is not necessarily what is actually urgent, even if, as I have experienced, it turns out to be very urgent.

Your last paragraph made me laugh.
That's the sort of advice I received after claiming on an insurance policy. The fact that my claim "did not fit our parameters" (evidently I gave too much detail about how my oboe was damaged) meant they refused my claim.