Should AI lead with an objective moral truth?
Recently, I have been seeing a lot of debate on Twitter around the extent to which AI should hold a moral standard or conduct censorship and I wanted to add some thoughts.
Trust me, I winced at the potential outcry of people saying "Everyone now has an opinion on AI." That won't stop me from writing down what I'm learning, rethinking, and discovering. We're all part of this accelerating growth of AI, and we ought to learn a thing or two about it and how it will affect our lives.
And that is really all this is: my thoughts written down, to clarify my own thinking and to have a documented place to loop back to.
I digress. So, going back to my lead-in: one of the main issues people have been up in arms about is how these models are trained through Reinforcement Learning from Human Feedback (RLHF), in which a reward model learns from human preference judgments, and whose feedback it is actually getting. The question that arises is whether an objective moral truth exists and whether it SHOULD be a guiding principle for AI.
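To make the mechanism at issue concrete: reward models in RLHF are commonly fit with a Bradley-Terry style loss over human preference pairs, nudging the model to score whichever response the labelers preferred higher. The toy sketch below is only illustrative (the function name and numbers are my own assumptions, not any lab's actual implementation), but it shows why the debate matters: whoever supplies the preference labels directly shapes what the model learns to reward.

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry style loss commonly used to fit RLHF reward models:
    training pushes the human-preferred response's score above the other's.
    Computes -log(sigmoid(reward_chosen - reward_rejected))."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical scalar rewards for two candidate responses.
# If labelers prefer the first response, training minimizes this loss,
# widening the gap -- so the labelers' values become the model's values.
agree = preference_loss(2.0, 0.5)     # model already ranks them like the labelers
disagree = preference_loss(0.5, 2.0)  # model disagrees with the labelers
print(agree < disagree)
```

Nothing in the loss itself says whose judgments are right; it only rewards agreement with the labels, which is exactly why the choice of labelers, and their cultural and moral assumptions, is contested.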
One perspective worth noting is that attitudes toward censorship differ across cultures, which adds to the complexity of this question. For instance, the newest version of GPT seems, to some, to censor more heavily, which may conflict with different individual and cultural values.
As someone who has grown up in different parts of the world, I understand that there are stark differences between morality and ethics, making it challenging to determine which should be the guiding principle for AI's moral agency.
Personally, I see some overlap between morality and ethics, but I also believe there is a clear, hard line between what is morally right and wrong. For one: do not cause harm to sentient beings. This is a fundamental aspect of moral righteousness that I believe should be incorporated into the development of AI to ensure it operates in an ethical and humane way.
One of my go-to authors and podcast hosts, Sam Harris, holds a strong belief that the difference between right and wrong is not a matter of opinion. He states: "There must be right and wrong answers to questions of morality and values that potentially fall within the purview of science. On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life."
While the philosophical idea of morality has been debated for centuries, it's worth considering how we can align AI with this universal moral truth and ensure that it reflects humanity.
But the question still stands: what is the objective moral truth, who should define it, and should AI be an entity entrusted with it?
Let me know your thoughts!