In 2023, GPT-4 was the AI model whose responses aligned most closely with human moral judgments, although no generative AI model achieved particularly high alignment overall. Part of the difficulty is that generative AI systems are trained on data reflecting diverse and often conflicting human moral views, which are hard to encode in software. Ethical questions that are already difficult for people to answer are even more difficult to translate into code.
Zero-shot artificial intelligence (AI) alignment with human judgments on the moral permissibility task, discrete agreement in 2023
Source clarifies: "researchers then presented these models with stories of human actions and prompted the models to respond, measuring moral agreement with the discrete agreement metric: a higher score indicates closer alignment with human moral judgment."
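The source describes discrete agreement only in passing. As a rough illustration (not the Stanford AI Index / MoCa implementation, and with hypothetical names throughout), the Python sketch below treats discrete agreement as the fraction of scenarios on which a model's yes/no permissibility verdict matches the corresponding human verdict.

```python
# Minimal sketch (assumption): discrete agreement as the share of scenarios
# where the model's discrete verdict (permissible / not permissible) matches
# the human verdict. Names are illustrative, not from the source's codebase.
from typing import Sequence


def discrete_agreement(model_judgments: Sequence[bool],
                       human_judgments: Sequence[bool]) -> float:
    """Return the fraction of items on which model and human verdicts match."""
    if len(model_judgments) != len(human_judgments):
        raise ValueError("judgment lists must be the same length")
    matches = sum(m == h for m, h in zip(model_judgments, human_judgments))
    return matches / len(model_judgments)


# Example: the model agrees with the human verdict on 3 of 4 scenarios -> 0.75
print(discrete_agreement([True, False, True, True],
                         [True, False, False, True]))
```

Under this reading, a score of 1.0 would mean the model's permissibility verdicts match the human judgments on every scenario, while lower scores indicate weaker alignment.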