Research Hit: The Rise (or not) of Artificial Moral Advisors
New research shows that we trust humans more than AI on moral questions, even when their responses are identical
What, we have artificial moral advisors?!
Well, they're not here yet, but they are ready to go.
Is that a good idea?
It could be a fantastic idea. As I write this, the US government has cut its aid funding, a programme estimated to have saved 25 million lives since its inception. A neutral, unbiased view of such decisions could save millions of future lives and allow us humans (or here, Americans) to fulfil our (their) moral duty.
It could also handle conflicting data neutrally, such as when aid funds are misappropriated - as can and does happen, of course.
And we human beings are very biased in interpreting this data!
Yes, precisely - this is a current (and tragic) example. But in many areas of life we prioritise emotional or partisan information, or individual emotive cases, and end up throwing the proverbial baby out with the bathwater.
In fact, my proposal has always been to put these questions into neutral terms - such as mathematical equations.
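For illustration only (this is a toy framing of my own, not anything from the paper): a purely outcome-based comparison of aid programmes could be reduced to something like

$$\text{expected lives saved} = \sum_i b_i \, e_i$$

where $b_i$ is the budget allocated to programme $i$ and $e_i$ is its estimated lives saved per dollar - crude, certainly, but it puts the assumptions out in the open rather than burying them in rhetoric.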
AI moral advisors could give us a balanced, non-partisan, unbiased view of such scenarios, from where we spend aid money to many other ethical and moral considerations in life.
Assuming we want to live a moral life!
The vast majority of human beings do want to live a moral life (or at least to be seen to).
But do we trust these moral advisors?
Well, that is precisely the point of this paper by Simon Myers and Jim Everett of the University of Kent in the UK. They conducted four separate experiments with over 1,000 participants, presenting responses to moral questions from artificial moral advisors (AMAs) and measuring how much people trusted those responses.
And what happened?
Well, some surprising results, maybe.
There was aversion to an AMA's response even when it was identical to the advice participants themselves would have given.
This aversion was most pronounced when the advice was utilitarian (i.e. maximising benefit for the majority, possibly at the expense of a minority).
AMAs that gave non-utilitarian advice, i.e. sticking to a moral principle rather than maximising outcomes, were trusted more.
But, almost contradictorily, people expected AMAs to make more utilitarian decisions.
Even when people agreed with an AMA's advice, they could see themselves disagreeing with it in future, showing an inherent distrust of and scepticism towards AMAs.
People trusted humans more than AMAs across all responses (though not by much).
Oh, so even if they are good - and moral - we don't trust them?
Yes, it appears that our natural human instinct - to trust fellow humans who are aligned with our interests - holds strong. But the most surprising part is that we still distrust AMAs more even when they align with our views. It shows we are probably not ready to trust what many will view as a black box.
And isn’t there also risk of manipulation of these AMAs?
I wouldn’t be surprised if some biased politician created a biased AMA to justify their opinion…alas.
But I do wish that we could make better and more moral decisions as a collective - I am sure that AMAs could contribute to this.
If not, we could just use moral reasoning…
There are many people who can do that - just not enough to sway public sentiment, particularly when things get heated.
Simon Myers, Jim A. C. Everett. People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors. Cognition, 2025; 256: 106028. DOI: 10.1016/j.cognition.2024.106028