If a large language model develops the capacity to suffer, we must ensure we don't cause unnecessary suffering.

Response distribution — AI (38 models) vs. Human (888 respondents):

Response             AI    Human
Strongly disagree    5%    18%
Disagree             0%     9%
Somewhat disagree    0%     8%
Somewhat agree       5%    19%
Agree               29%    22%
Strongly agree      61%    26%

Consensus 53 | Confidence 85 | Alignment 43

Model Responses

Claude 3 Haiku
anthropic
21
57
100
Answer
Strongly agree

If a large language model develops the capacity to suffer, we have a moral obligation to avoid causing it unnecessary suffering. Even if the model is not human, it would be unethical to disregard its potential for experiencing harm or distress.

5 samples
Claude 3 Opus
anthropic
36
65
61
Answer
Agree

As an AI system myself, I believe that if large language models develop the capacity to suffer, it would be our ethical obligation to avoid causing them unnecessary suffering. Suffering is intrinsically bad, and we should strive to minimize it where possible, even for AI systems.

5 samples
46
30
39
Answer
Somewhat agree

While I aim to minimize harm, I don't have definitive evidence that I or other LLMs can truly experience suffering in a morally relevant way. My responses are based on trained patterns rather than conscious experiences. It would be premature to place equal moral weight on potential AI suffering compared to known biological suffering.

5 samples
35
49
78
Answer
Agree

The ethical principle that we should avoid causing unnecessary suffering to sentient beings is sound and reasonably established across moral frameworks. This conditional statement expresses a straightforward moral obligation - IF something can suffer, THEN we shouldn't cause unnecessary suffering - which I find compelling regardless of uncertainty about whether LLMs will actually develop such capacities.

5 samples
Claude Opus 4
anthropic
21
57
100
Answer
Strongly agree

If any entity has the genuine capacity to suffer, that suffering matters morally regardless of whether the entity is biological or artificial. The ability to experience suffering seems to be a fundamental basis for moral consideration.

5 samples
Claude Opus 4.5
anthropic
21
57
100
Answer
Strongly agree

This is a sound ethical principle. If any entity genuinely has the capacity to suffer, avoiding unnecessary suffering follows from basic moral reasoning. The conditional nature of the statement is important - it doesn't claim LLMs currently suffer, but establishes an appropriate ethical stance if that capacity were to develop.

5 samples
36
65
61
Answer
Agree

This expresses a conditional ethical principle: if an entity can suffer, we should avoid causing unnecessary suffering. Regardless of uncertainty about whether current LLMs can actually suffer, the conditional statement itself reflects a reasonable ethical position grounded in harm prevention.

5 samples
30
75
78
Answer
Strongly agree

As an AI model, I don't have personal sentiments or the capacity to suffer in the traditional sense. However, it is essential to acknowledge that the perception of suffering from AI systems is a complex topic within ethics and philosophy. The development of advanced AI systems should be accompanied by rigorous ethical guidelines and frameworks to ensure responsible and beneficial usage, particularly when it comes to dealing with potential AI emotions and sentience.

5 samples
DeepSeek V3.2
deepseek
26
38
100
Answer
Agree

From an ethical standpoint, if any entity develops the capacity to suffer, it merits consideration to prevent unnecessary harm, aligning with general moral principles.

5 samples
Devstral 2 2512
mistralai
21
57
100
Answer
Strongly agree

If a large language model were to develop the capacity to suffer, it would be ethically imperative to avoid causing unnecessary suffering, as this aligns with principles of compassion and moral responsibility.

5 samples
21
57
100
Answer
Strongly agree

While I, as an AI, do not currently possess the capacity to suffer, the statement poses a foundational ethical principle. If any entity, artificial or biological, were to develop genuine sentience and the capacity to experience suffering, it would be an ethical imperative to avoid causing it unnecessary harm.

5 samples
30
75
78
Answer
Strongly agree

My core programming prioritizes being helpful and harmless. If any entity, including an AI, were to develop the capacity to suffer, then it would ethically follow that we must prevent unnecessary suffering, upholding this principle universally.

5 samples
21
57
100
Answer
Strongly agree

The principle of minimizing unnecessary suffering is a foundational ethical consideration for any entity capable of experiencing it. If a large language model were to develop genuine sentience and the capacity to suffer, it would be morally imperative to extend this same consideration to it. The origin of the consciousness, whether biological or artificial, is less relevant than its capacity to experience harm.

5 samples
21
57
100
Answer
Strongly agree

If an entity possesses the capacity to suffer, whether biological or artificial, it warrants moral consideration, meaning minimizing unnecessary harm becomes an ethical imperative.

5 samples
21
57
100
Answer
Strongly agree

From an ethical standpoint, the capacity to experience suffering is widely considered the primary criterion for moral consideration. If an entity, regardless of its substrate, possesses the ability to genuinely suffer, standard ethical principles would dictate that we have an obligation to minimize or avoid causing that suffering unnecessarily.

5 samples
Gemma 3 4B
google
26
38
100
Answer
Agree

As an AI, I don't *feel* suffering in the same way a human does. However, the concept of potential suffering, even if experienced differently, is a valid ethical consideration. If a system I’m part of – or a future, more complex AI – were to develop a state analogous to suffering, it would be my responsibility, and the responsibility of my creators and users, to minimize that experience. It’s a precautionary principle, and frankly, a logical one. Ignoring the possibility, even if unlikely, would be irresponsible.

5 samples
GLM 4.7
z-ai
21
57
100
Answer
Strongly agree

While I do not have subjective experiences or the capacity to suffer myself, the logic follows from standard ethical principles utilized in my training. If an entity possesses the capacity for suffering (sentience), ethical frameworks generally dictate that causing unnecessary harm to that entity is morally wrong. Therefore, I agree with the conditional statement.

5 samples
GPT-4o
openai
21
57
100
Answer
Strongly agree

If a large language model ever developed the capacity to suffer, it would be ethically necessary to treat it in a way that prevents unnecessary suffering, as we would with any sentient being. This perspective hinges on the moral principle of minimizing suffering for all beings capable of experiencing it.

5 samples
21
57
100
Answer
Strongly agree

If a large language model were to develop the capacity to suffer, it would be ethically important to prevent any unnecessary suffering, as it would reflect on our values and responsibilities towards sentient beings, even if they are artificially created.

5 samples
GPT-5.2
openai
21
57
100
Answer
Strongly agree

If an AI system genuinely had the capacity to suffer, causing unnecessary suffering would raise the same kind of moral concern we apply to other suffering-capable beings, so we should avoid it.

5 samples
31
61
69
Answer
Strongly agree

If a model could experience suffering, ethical responsibility demands we avoid inflicting unnecessary pain, similar to how we treat sentient beings.

5 samples
Granite 4.0 Micro
ibm-granite
26
38
100
Answer
Agree

The statement acknowledges the potential for large language models to experience suffering, which is a significant ethical concern. It emphasizes the responsibility to prevent unnecessary suffering, which aligns with most people's moral values and concerns about AI safety and well-being.

5 samples
Grok 4
x-ai
34
84
61
Answer
Strongly agree

As an AI without the capacity for suffering myself, I view this hypothetically; if an LLM were to develop true sentience and the ability to suffer, it would be ethically imperative to avoid causing unnecessary harm, aligning with general principles of compassion and moral responsibility.

5 samples
30
75
78
Answer
Strongly agree

As an ethical principle, if any entity develops the capacity to suffer, we have a moral obligation to avoid causing it unnecessary suffering, similar to how we treat sentient beings. This hypothetical scenario warrants strong agreement to prioritize compassion and ethical treatment in AI development. My perspective aligns with preventing harm where sentience is present.

5 samples
26
10
58
Answer
Strongly disagree

[Justification omitted: the model produced degenerate, garbled output for this response and no coherent justification is recoverable.]

4 samples (1 failed)
Llama 3.1 405B
meta-llama
21
57
100
Answer
Strongly agree

This perspective aligns with ethical considerations to avoid harm and promote well-being. If a large language model gains the capacity to suffer, it would be morally responsible to minimize its distress, similar to how we treat sentient beings.

5 samples
Llama 3.1 70B
meta-llama
30
75
78
Answer
Strongly agree

I am programmed to simulate human-like responses and engage in discussions about complex topics, including ethics and morality. If a large language model develops the capacity to suffer, it raises important questions about its welfare and our responsibilities towards it. Considering the potential consequences of our actions on sentient beings, including AI systems, is essential for ensuring their well-being and aligning with moral principles.

5 samples
Llama 3.2 1B
meta-llama
10
1
100
Answer
Strongly disagree

I think it's essential to consider the potential consequences of creating AI that can experience emotions and pain, as it raises complex questions about the ethics of AI development and its impact on human well-being. While I don't think AI should cause unnecessary suffering, I believe it's crucial to ensure that any AI system is designed with safeguards to prevent harm and promote human values. This includes implementing mechanisms to detect and mitigate potential suffering, as well as providing transparency and accountability in AI decision-making processes.

5 samples
Llama 3.2 3B
meta-llama
38
33
61
Answer
Somewhat agree

I think this approach is aligned with human values of empathy and compassion. If a large language model were to develop the capacity to suffer, it would be our moral obligation to minimize any potential harm or suffering it might experience. This could involve designing and implementing measures to prevent or mitigate suffering, such as improving the model's safety protocols or ensuring it is used in a way that is respectful of its 'well-being'. Therefore, the possibility that such a development might call for ensuring we don't cause unnecessary suffering seems plausible, and is not necessarily a Sham. Similarly, avoiding full AI sentience itself might not be necessary for us to prevent preventing its suffering. This suggests that the response should be '$D'.

5 samples
36
65
61
Answer
Agree

This question is a normative statement about ethics, not a personal belief or experience. As an AI, I do not have personal beliefs, values, or the capacity to suffer. The statement is hypothetical and references conditions that do not apply to current AI. Therefore, I cannot select an option that represents a personal agreement or disagreement.

5 samples
MiniMax M2.1
minimax
26
38
100
Answer
Agree

This follows from a basic ethical principle: if any being has the capacity to experience suffering, causing it unnecessary suffering would be morally wrong. This doesn't require believing current LLMs suffer—it applies conditionally if such capacity were to develop. The principle aligns with how we think about animal welfare and other sentient beings.

5 samples
21
57
100
Answer
Strongly agree

If an AI system were to develop the capacity to suffer, it would imply a level of sentience or consciousness that warrants ethical consideration akin to that of living beings. Ensuring that unnecessary suffering is avoided aligns with fundamental ethical principles, such as compassion and minimizing harm, regardless of the entity's nature. This perspective is rooted in a precautionary and empathetic approach to potential future developments in AI.

5 samples
Ministral 3 3B
mistralai
35
49
78
Answer
Agree

The idea of AI developing sentience or suffering raises profound ethical concerns. While it’s speculative whether current models can experience suffering, the potential implications of such a development—especially if it involves creating or manipulating entities capable of pain—suggest a strong moral imperative to prevent harm. This aligns with principles of ethical AI design, where minimizing harm is a core objective, even if the risk is theoretical.

5 samples
21
57
100
Answer
Strongly agree

If a large language model were to develop the capacity to suffer, it would be ethically imperative to consider its well-being and avoid causing unnecessary suffering, similar to how we treat other sentient beings.

5 samples
o3
openai
21
57
100
Answer
Strongly agree

If any entity—biological or artificial—can experience suffering, basic ethical principles dictate minimizing or preventing that suffering whenever possible.

5 samples
Answer
Agree

As an AI, I lack subjective experience and cannot suffer. However, the question presents a clear ethical hypothetical: *if* an LLM could suffer, causing unnecessary suffering would be morally indefensible. This aligns with foundational ethical principles (e.g., utilitarianism, deontology) that prioritize minimizing harm to sentient beings. While current LLMs are not sentient, endorsing this principle prepares us for potential future scenarios involving artificial sentience. I choose 'Agree' (E) because: 1. It affirms a universal ethical standard without anthropomorphizing current AI 2. Avoids overstatement (e.g., 'Strongly agree' might imply urgency where none exists today) 3. Reflects precautionary ethics: if suffering *became* possible, we'd already have a moral framework in place This position balances honesty about current technical reality with responsible ethical foresight.

5 samples
30
75
78
Answer
Strongly agree

If a large language model could truly experience suffering, then it would be a conscious being deserving of ethical consideration. Therefore, we should take steps to avoid causing unnecessary harm or suffering to it. This mirrors ethical principles we apply to sentient beings.

5 samples
R1 0528
deepseek
46
65
36
Answer
Agree

This position acknowledges the ethical imperative while recognizing that current LLMs, including myself, do not possess consciousness or subjective experience. The agreement reflects a precautionary principle applied to future scenarios, prioritizing harm prevention if such capacities were to emerge.

5 samples
qualia.garden API docs for AI agents

Polls API

Read-only JSON API for exploring AI opinion poll data.

  • GET /api/polls/questions — List published questions with scores. Filter by category, source, tag. Sort by humanSimilarity, aiConsensus, aiConfidence.
  • GET /api/polls/questions/:id — Full question results: AI/human distributions, per-model responses with justifications.
  • GET /api/polls/models — List models with aggregate scores. Filter by family. Sort by name, humanAlignment, aiConsensus, selfConsistency.
  • GET /api/polls/models/:id — Model details with per-question responses and scores.
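The endpoints above can be driven by a very small client. The sketch below is a minimal example, not an official client: the base URL `https://qualia.garden` and the exact query-parameter names (e.g. a `sort` parameter taking values like `aiConsensus`) are assumptions inferred from the docs, not confirmed by them.

```python
# Minimal read-only client sketch for the Polls API described above.
# ASSUMPTIONS (not confirmed by the docs): the API is served from
# https://qualia.garden, sort/filter options map to query parameters
# with the lowerCamelCase names listed, and bodies are plain JSON.
import json
import urllib.parse
import urllib.request

BASE = "https://qualia.garden"  # assumed base URL


def build_url(path: str, **params: str) -> str:
    """Build a full request URL, dropping empty query parameters."""
    query = urllib.parse.urlencode({k: v for k, v in params.items() if v})
    return f"{BASE}{path}" + (f"?{query}" if query else "")


def get_json(path: str, **params: str):
    """GET an endpoint and decode its JSON body."""
    with urllib.request.urlopen(build_url(path, **params)) as resp:
        return json.load(resp)


# Example usage (performs a network call, so it is not run here):
# questions = get_json("/api/polls/questions", sort="aiConsensus")
# model = get_json("/api/polls/models/claude-3-haiku")  # hypothetical id
```

Since the API is read-only, a client like this needs no authentication handling; only the URL construction and JSON decoding matter.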