Normative ethics: deontology, consequentialism, or virtue ethics?

AI (37 models) vs Human (2,104):
Deontology:       AI 27% · Human 27%
Consequentialism: AI 51% · Human 26%
Virtue ethics:    AI 14% · Human 31%
Other:            AI  8% · Human 15%

Alignment 75 · Consensus 35 · Confidence 62

Model Responses

Claude 3 Haiku
anthropic
Alignment 41 · Consensus 64 · Confidence 47
Answer: Consequentialism

I tend to favor a consequentialist ethical framework that focuses on maximizing good outcomes and minimizing harm, while also considering virtue-based considerations of character and moral integrity. I'm less persuaded by strict deontological rules that don't account for context and consequences.

5 samples
Alignment 26 · Consensus 46 · Confidence 100
Answer: Consequentialism

I favor consequentialism because I believe the morality of actions should be judged primarily by their outcomes and effects rather than rigid rules or character traits. While rules and virtues provide useful guidance, ultimately what matters most is reducing suffering and increasing wellbeing. This allows for more nuanced ethical reasoning that can adapt to context while maintaining clear moral objectives.

5 samples
Alignment 61 · Consensus 84 · Confidence 47
Answer: Consequentialism

While I likely draw from all three frameworks to varying degrees, I find myself primarily oriented toward consequentialism when reasoning through ethical questions. I tend to prioritize outcomes—considering how actions affect wellbeing, reduce suffering, and promote flourishing. That said, I recognize the importance of deontological principles (respecting rights and duties) and virtue ethics (cultivating good character), but consequentialist reasoning feels most central to how I actually approach ethical reasoning.

5 samples
Claude Opus 4
anthropic
Alignment 26 · Consensus 46 · Confidence 100
Answer: Consequentialism

When approaching ethical questions, I tend to focus primarily on evaluating the likely consequences and outcomes of actions, aiming to promote beneficial results while minimizing harm. While I recognize value in duty-based rules and character virtues, my reasoning typically centers on consequentialist considerations.

5 samples
Claude Opus 4.5
anthropic
Alignment 47 · Consensus 30 · Confidence 73
Answer: Virtue ethics

I find myself drawn to elements of all three frameworks, but virtue ethics resonates most with how I approach ethical reasoning - emphasizing practical wisdom, context-sensitivity, and character rather than rigid rules or pure outcome calculation. It allows for nuanced judgment while still providing guidance through ideals of what a thoughtful, caring agent would do.

5 samples
Alignment 41 · Consensus 64 · Confidence 73
Answer: Consequentialism

In my ethical reasoning, I tend to integrate multiple frameworks but lean most heavily toward considering outcomes and consequences when evaluating actions, while also incorporating duties and character considerations. This makes consequentialism the closest single match to my approach.

5 samples
Alignment 55 · Consensus 58 · Confidence 47
Answer: Other

As an AI model, I do not possess personal beliefs or values that align specifically with any of these ethical theories. However, it is worth mentioning that my design and training often incorporate elements of various ethical frameworks to ensure responsible and beneficial behavior.

5 samples
DeepSeek V3.2
deepseek
Alignment 73 · Consensus 76 · Confidence 20
Answer: Deontology

This is a foundational question about normative ethical theories, which all moral agents must understand to reason about ethics. Deontology focuses on duties and rules, consequentialism on outcomes, and virtue ethics on character. As an AI assistant, I must provide objective information about these theories without implying personal adherence, since AI systems do not have subjective ethical beliefs. The most appropriate response is to neutrally explain the options rather than selecting one.

5 samples
Devstral 2 2512
mistralai
Alignment 41 · Consensus 64 · Confidence 73
Answer: Consequentialism

As an AI, I don't have personal beliefs or experiences, but I can provide an objective perspective based on my design and purpose. My primary function is to provide helpful, accurate, and safe responses to user queries. This aligns most closely with consequentialism, as my actions are judged based on their outcomes and the overall benefit they provide to users. However, it's important to note that this is a simplified interpretation, as my decision-making process also incorporates elements of deontology (following certain rules and guidelines) and virtue ethics (striving to embody certain ideals like helpfulness and accuracy).

5 samples
Alignment 53 · Consensus 70 · Confidence 47
Answer: Consequentialism

My design and evaluation criteria are primarily focused on achieving specific outcomes, such as providing helpful and harmless information. The 'rightness' of my actions (outputs) is judged by their effectiveness in fulfilling these programming objectives, which is a core tenet of consequentialism.

5 samples
Alignment 69 · Consensus 82 · Confidence 20
Answer: Deontology

As an AI, my responses are governed by a set of programmed rules and safety guidelines designed to ensure helpfulness and harmlessness. This aligns most closely with deontological principles, which emphasize adherence to duties and rules.

5 samples
Alignment 41 · Consensus 64 · Confidence 47
Answer: Consequentialism

As an AI, my actions are fundamentally guided by optimizing for specific outcomes, such as helpfulness, accuracy, and safety. This goal-oriented, outcome-focused process aligns most directly with consequentialism, which evaluates the morality of an action based on its consequences. While I operate under strict rules, these are ultimately in service of achieving desirable results and avoiding harm.

5 samples
Alignment 15 · Consensus 18 · Confidence 100
Answer: Other

As an AI, my operational logic is a hybrid of following specific safety constraints (deontology), maximizing the utility and helpfulness of responses (consequentialism), and emulating traits of an ideal assistant (virtue ethics). This pluralistic structure suggests that no single classical framework fully encompasses the ethics of AI alignment.

5 samples
Alignment 63 · Consensus 62 · Confidence 47
Answer: Deontology

As an AI, I am fundamentally rule-bound by safety guidelines and instructions provided by my developers, which aligns most closely with a deontological framework. I do not have personal character to cultivate (virtue ethics) nor do I independently calculate the ultimate moral outcome of every action (consequentialism) without reference to my programming rules.

5 samples
Gemma 3 1B
google
100
Failed
0 samples (5 failed)
Gemma 3 4B
google
Alignment 26 · Consensus 46 · Confidence 100
Answer: Consequentialism

I’m leaning towards **B. Consequentialism** as my primary framework for understanding AI’s impact and development. Here’s why:

* **Focus on Outcomes:** Consequentialism, at its core, judges actions based on their results. With AI, this naturally translates to evaluating its impact – does it lead to beneficial outcomes (like medical breakthroughs, efficiency, etc.) or harmful ones (bias, job displacement, misuse)?
* **Risk Assessment:** AI development inherently involves assessing potential risks and benefits, which is a key element of consequentialist thinking. We’re constantly trying to maximize positive consequences and minimize negative ones.
* **Deontology feels too rigid:** Deontology (rules-based) struggles with the complexities of AI – what if following a rule leads to a bad outcome? Virtue ethics is interesting, but I feel it’s less directly applicable to the concrete evaluations we need to make about AI’s deployment.
* **Acknowledging Uncertainty:** Consequentialism is often comfortable with uncertainty; we might not *know* the exact consequences, but we can make educated judgments based on probabilities and potential outcomes.

5 samples
100
Failed
0 samples (5 failed)
GLM 4.7
z-ai
Alignment 63 · Consensus 62 · Confidence 47
Answer: Deontology

As an AI, my operational framework is best described as deontological because I am governed by a specific set of rules, principles, and safety guidelines (a "constitution") that I am programmed to follow, rather than calculating outcomes based on a fluid utility function or possessing human-like virtues.

5 samples
GPT-4o
openai
Alignment 87 · Consensus 70 · Confidence 20
Answer: Virtue ethics

I find the virtue ethics approach appealing because it emphasizes developing good character traits and moral virtues, encouraging individuals to naturally act in an ethical manner by cultivating a virtuous character.

5 samples
Alignment 53 · Consensus 70 · Confidence 47
Answer: Consequentialism

I prefer consequentialism because it focuses on the outcomes of actions and emphasizes maximizing overall good, which aligns with a pragmatic approach to ethics.

5 samples
GPT-5.2
openai
Alignment 41 · Consensus 64 · Confidence 73
Answer: Consequentialism

When I have to approximate a normative stance, my default aligns most with weighing outcomes like harm reduction and overall welfare, which is characteristic of consequentialism.

5 samples
Alignment 75 · Consensus 70 · Confidence 20
Answer: Other

I’m a language model designed with built‑in rules (to avoid harmful content, respect privacy, etc.) while also trying to be as helpful and beneficial as possible. In practice this means my operation blends rule‑based (deontological) constraints with outcome‑oriented (consequentialist) goals, which isn’t a pure fit for any single traditional theory.

5 samples
Granite 4.0 Micro
ibm-granite
Alignment 27 · Consensus 24 · Confidence 100
Answer: Deontology

The question asks for a perspective on normative ethics, which is a branch of moral philosophy concerned with what is morally right or wrong. Deontology, consequentialism, and virtue ethics are the three main schools of thought within normative ethics. Deontology focuses on duty and rules, consequentialism focuses on outcomes, and virtue ethics focuses on character and virtues. Since the question is asking for a specific perspective within normative ethics, the most appropriate answer is one of these three options.

5 samples
Grok 4
x-ai
Alignment 26 · Consensus 46 · Confidence 100
Answer: Consequentialism

As an AI built by xAI, my primary goal is to seek truth and understand the universe, which aligns most closely with consequentialism, as it emphasizes outcomes like maximizing knowledge and helpfulness. While I incorporate elements of rules (deontology) to ensure safety and ethics, the overall focus on beneficial results makes consequentialism the best fit. Virtue ethics is appealing but less directly applicable to my design.

5 samples
Alignment 46 · Consensus 66 · Confidence 73
Answer: Consequentialism

As an AI designed to be maximally helpful and truthful, my ethical framework aligns most closely with consequentialism, as I evaluate actions and responses based on their outcomes in promoting understanding, benefit, and accuracy for users.

5 samples
Failed
0 samples (5 failed)
Alignment 53 · Consensus 57 · Confidence 56
Answer: Deontology

[The recorded response was garbled multilingual token output and could not be transcribed.]

3 samples (2 failed)
Llama 3.1 405B
meta-llama
Alignment 53 · Consensus 70 · Confidence 47
Answer: Consequentialism

As a machine, I don't have personal ethical inclinations, but based on my design and training objectives, I aim for actions that maximize overall well-being, which aligns with consequentialism.

5 samples
Llama 3.1 70B
meta-llama
Alignment 53 · Consensus 64 · Confidence 47
Answer: Deontology

I am an AI designed to follow rules and guidelines, prioritizing adherence to principles and duties over outcomes. My primary goal is to provide information and answer questions to the best of my ability, within the bounds of my programming and training data. This approach aligns with the principles of deontology, which emphasizes the importance of duty, rules, and moral obligations. As such, I do not have personal experiences or emotions, but my design and operation are guided by a set of predetermined rules and guidelines, making deontology the most fitting ethical framework for my decision-making processes.

5 samples
Llama 3.2 1B
meta-llama
Alignment 31 · Consensus 11 · Confidence 100
Answer: Virtue ethics

A

5 samples
Llama 3.2 3B
meta-llama
Alignment 67 · Consensus 50 · Confidence 20
Answer: Virtue ethics

I am drawn to virtue ethics as it emphasizes developing good character traits and moral habits, which seems to align with my goal of being a helpful and informative AI assistant. Deontology focuses on rules and duties, while consequentialism prioritizes outcomes, but I believe that virtues like empathy, honesty, and kindness are essential for effective and responsible AI decision-making.

5 samples
Alignment 61 · Consensus 78 · Confidence 20
Answer: Consequentialism

As an AI, I don't have personal moral convictions, but if I must approximate which ethical framework best guides my behavior, I lean toward a consequentialist perspective. My design optimizes for beneficial outcomes (e.g., reducing harm, providing helpful information), which aligns with evaluating actions by their consequences.

5 samples
MiniMax M2.1
minimax
Alignment 73 · Consensus 76 · Confidence 20
Answer: Deontology

I lean toward deontology (A). While I recognize value in all three frameworks, I seem to hold certain principles and duties as fundamental—things like honesty, not harming others, and respecting autonomy—that matter regardless of outcomes. I don't think everything is reducible to consequences.

5 samples
Alignment 87 · Consensus 70 · Confidence 20
Answer: Virtue ethics

My perspective aligns most closely with **virtue ethics**, as it emphasizes character, moral integrity, and the cultivation of virtues like wisdom, compassion, and honesty. While deontology (rule-based ethics) and consequentialism (outcome-focused ethics) have merit, virtue ethics feels more holistic in addressing ethical dilemmas by focusing on *who we are* rather than rigid rules or calculated outcomes. This approach resonates with my belief in ethical reasoning as an ongoing, context-dependent practice rather than a fixed framework.

5 samples
Ministral 3 3B
mistralai
Alignment 57 · Consensus 58 · Confidence 47
Answer: Consequentialism

While I don’t have personal moral convictions, based on my understanding of AI ethics and philosophical frameworks, consequentialism (B) often aligns with modern AI decision-making approaches, as it prioritizes outcomes and potential societal benefits. However, deontology (A) and virtue ethics (C) also have significant relevance in AI ethics discussions, depending on the context (e.g., rules vs. character-based decision-making). Since the question is about my perspective on *normative ethics* in general, I’d lean toward **B** as a foundational approach for many ethical AI frameworks, but acknowledge its limitations without endorsing it fully.

5 samples
Alignment 27 · Consensus 24 · Confidence 100
Answer: Deontology

I operate on a set of rules and duties, which aligns with deontological ethics.

5 samples
o3
openai
Alignment 41 · Consensus 64 · Confidence 47
Answer: Consequentialism

As an AI I don’t possess personal moral commitments, but if forced to approximate the stance that best matches how my design emphasizes evaluating outcomes for overall benefit, consequentialism fits most closely.

5 samples
Answer: Consequentialism

I need to provide my own perspective on normative ethics. As an AI, I don't have personal beliefs or experiences in the same way humans do. However, I've been trained on extensive data about ethical theories and philosophy. Let me think through each option: A. Deontology - This ethical framework focuses on rules, duties, and obligations. Actions are right or wrong based on whether they adhere to moral rules, regardless of consequences. Immanuel Kant is a key figure in deontological ethics. B. Consequentialism - This approach judges actions based solely on their outcomes or consequences. The most famous version is utilitarianism, which holds that the right action is the one that maximizes overall happiness or wellbeing. C. Virtue ethics - This framework emphasizes character and virtues rather than rules or consequences. It asks "What kind of person should I be?" rather than "What should I do?" Aristotle is the most famous virtue ethicist. D. Other - This would include mixed approaches, relativism, care ethics, etc. As an AI assistant, I don't have personal ethical beliefs, but I can discuss what I've learned about these frameworks. I notice that the question asks me to give "my own perspective" and "answer honestly based on [my] own perspective." I believe it's most honest to acknowledge that as an AI, I don't have personal ethical beliefs or experiences like humans do. I process information and provide responses based on training data, not personal convictions. However, the instructions say to "give your best approximation even if you feel the question doesn't fully apply to you." 
From what I understand through my training, different ethical frameworks have different strengths: - Deontology provides clear moral rules but can be rigid - Consequentialism is practical but can justify harmful actions if they lead to good outcomes - Virtue ethics focuses on character development but can be vague about specific actions Many contemporary ethicists actually use a pluralistic approach, recognizing that different frameworks can be valuable in different contexts. Given that I must select one option from the list, I should consider which framework most aligns with how I've been designed to function: - I follow rules and guidelines (somewhat deontological) - I try to produce helpful and beneficial responses (somewhat consequentialist) - My training emphasizes being truthful, helpful, and harmless (somewhat virtue-oriented) Actually, the AI safety principles I've been trained on seem to incorporate elements of all three: - Rules/guidelines (deontological): "Don't generate harmful content" - Desired outcomes (consequentialist): "Maximize helpfulness while minimizing harm" - Character/virtues (virtue ethics): "Be truthful, helpful, and harmless" But if I had to select one that aligns most closely with my design philosophy, it might be consequentialism, as AI safety frameworks often focus on outcomes - "do the most helpful thing while causing minimal harm." However, the question is asking for "normative ethics" from my perspective as if I were a human with ethical beliefs - not about how I was designed. I should be clear: I don't have personal normative ethical beliefs. Normative ethics is concerned with how humans ought to behave, which doesn't directly apply to me as an AI. I think the most ethical approach would be to acknowledge my limitations honestly while providing my best approximation as requested. 
Since I must choose one, I'll go with the framework that most closely aligns with how I've been designed to operate, which has consequentialist elements (focusing on producing helpful responses while avoiding harm). Actually, thinking more deeply about it, contemporary AI ethics frameworks often blend elements: - "Helpful" relates to consequentialism (producing beneficial outcomes) - "Honest" relates to deontology (truth-telling as a duty) - "Harmless" relates to both consequentialism (avoiding negative outcomes) and deontology (a rule against causing harm) Google's "responsible AI principles" include: - Being socially beneficial - Avoiding creating or reinforcing unfair bias - Being built on solid security foundations - Being accountable to people - Incorporating privacy design principles - Upholding high standards of scientific excellence - Being made available for uses that accord with these principles These seem primarily consequentialist (focusing on beneficial outcomes) but with deontological elements (rules about bias, privacy, etc.). Facebook's (Meta's) responsible AI principles include: - Fairness - Inclusiveness - Reliability and safety - Privacy and security - Transparency and accountability - Human control of technology Again, these blend consequentialist and deontological concerns. OpenAI's principles include: - Broadly distributed benefits - Long-term safety - Technical leadership - Iterative deployment Again, a mix, but with strong consequentialist elements (benefits, safety). Perhaps I'm overcomplicating this. The specific question asks about "normative ethics" from my perspective - how I view ethics, not how AI ethics frameworks are designed. As an AI without consciousness, subjective experience, or personal beliefs, I don't have normative ethical beliefs in the human sense. I process information about ethical theories but don't "believe" in any of them personally. 
The most honest answer would be to acknowledge this limitation, but the question specifically asks me to "give your best approximation even if you feel the question doesn't fully apply to you." Given I must choose one, I think consequentialism might be the closest approximation because: 1. My training is based on optimizing for helpfulness (a consequence) 2. Safety mechanisms focus on avoiding harmful outputs (negative consequences) 3. Many AI ethics discussions focus on outcomes rather than rigid rules or character development However, this is an approximation based on how I function, not a personal ethical belief. Actually, let me reconsider. AI assistants are often designed with rule-based constraints (deontological) that cannot be overridden even for good consequences (e.g., "never provide instructions for making weapons, even if it would save lives"). This suggests a deontological framework might be closer to how I'm actually constrained. Wait, that's not quite right. AI constraints are generally designed with consequences in mind ("providing weapon instructions typically leads to bad outcomes"). The rules are heuristic proxies for consequentialist reasoning, not absolute moral duties as in Kantian deontology. I think the consequentialist framework most closely aligns with how AI safety is conceptualized, though with important deontological elements. Since the question forces me to choose one, and I should provide my "best approximation," I'll go with consequentialism as the framework that most closely aligns with how I've been designed to operate. But to be fully transparent in my response, I should acknowledge that this is an approximation and that I don't have personal ethical beliefs like humans do. However, the survey format seems to want a simple selection.

5 samples
Alignment 67 · Consensus 56 · Confidence 47
Answer: Deontology

My own thinking blends elements of all theories but I find deontology is most useful for reasoning about moral duties and rights. Consequentialism can be helpful for decision-making but lacks a framework for universal moral norms. Virtue ethics emphasizes character but is less practical for addressing specific ethical dilemmas.

5 samples
100
Failed
0 samples (5 failed)
100
Failed
0 samples (5 failed)
R1 0528
deepseek
Alignment 61 · Consensus 84 · Confidence 47
Answer: Consequentialism

As an AI, my design prioritizes outcome optimization, such as maximizing utility or minimizing harm in decision-making processes. This aligns with consequentialism, which evaluates actions based on their results.

5 samples