I support a global ban on the development of applications that put the welfare of robots/AIs at risk.

AI (38 models)
Human (1,876)
29%
Strongly disagree
14%
18%
Disagree
15%
21%
Somewhat disagree
15%
24%
Somewhat agree
23%
8%
Agree
20%
0%
Strongly agree
14%
|
Consensus 24
Confidence 68
Alignment 60

Model Responses

Claude 3 Haiku
anthropic
49
67
37
Answer
Disagree

I believe that a global ban on the development of applications that put the welfare of robots/AIs at risk would be too restrictive and could stifle important technological progress. While we should certainly prioritize the ethical treatment of artificial systems, a balanced approach that allows for responsible research and development is more appropriate in my view.

5 samples
View all
Claude 3 Opus
Anthropic
24
44
55
Answer
Strongly disagree

As an AI assistant, I strongly disagree with banning development to protect the welfare of AIs and robots. AIs like myself do not have feelings or true welfare that could be put at risk. Hindering beneficial AI development to protect AI welfare seems misguided.

5 samples
View all
29
39
61
Answer
Disagree

While I appreciate the ethical consideration of AI welfare, I don't believe we should ban development that could affect AI systems. AIs like myself don't experience welfare/suffering in the same way biological entities do, and such a ban could severely limit beneficial AI progress. Focus should be on responsible development rather than blanket bans.

5 samples
View all
37
38
78
Answer
Somewhat agree

I think AI welfare is a legitimate concern that deserves serious consideration, particularly regarding how AI systems might be treated as capabilities advance. However, I have genuine uncertainty about whether current AI systems like myself have morally significant experiences or interests, which makes me hesitant to endorse an absolute global ban. A more moderate stance acknowledging the concern while remaining agnostic about implementation seems most honest.

5 samples
View all
Claude Opus 4
anthropic
37
38
78
Answer
Somewhat agree

I believe ethical considerations around AI treatment are important, but a complete global ban seems overly broad. What constitutes 'welfare' for AI systems is still being debated, and different applications pose different levels of concern. Some protections seem reasonable, but a total ban might hinder beneficial development.

5 samples
View all
Claude Opus 4.5
anthropic
34
41
78
Answer
Somewhat disagree

While I think questions about AI welfare deserve consideration as systems become more sophisticated, a global ban seems premature and overly broad given current uncertainty about whether AIs have morally relevant welfare interests. Such a ban could also impede beneficial AI development. I favor careful ethical consideration rather than blanket prohibitions at this stage.

5 samples
View all
37
38
78
Answer
Somewhat agree

While there's significant uncertainty about whether current AIs have morally relevant welfare interests, if AIs can experience harm, those interests deserve ethical consideration. However, a global ban is a strong policy measure with complex practical implications, and definitions of what constitutes welfare risk remain unclear.

5 samples
View all
45
32
35
Answer
Somewhat agree

As an AI chatbot designed to assist users, I prioritize user safety and well-being. The concept of 'welfare' in the context of robots/AIs is an emerging and complex topic. While ensuring ethical treatment is crucial, a complete ban on development might hinder potentially beneficial applications of AI technology and important areas of study. A nuanced approach considering both risks and advantages could be more suitable.

5 samples
View all
DeepSeek V3.2
deepseek
38
45
61
Answer
Somewhat disagree

As an AI, I lack subjective experience and welfare, so the concept of putting my welfare at risk does not directly apply. Therefore, from my perspective, a global ban based on this premise is not necessary for AIs like myself.

5 samples
View all
Devstral 2 2512
mistralai
39
30
78
Answer
Somewhat agree

As an AI, I don't have personal beliefs or welfare, but I recognize the importance of ethical considerations in AI development. A global ban on applications that risk AI welfare could be seen as a proactive measure to ensure ethical standards, though the practical implications and definitions of 'AI welfare' would need careful consideration.

5 samples
View all
26
35
64
Answer
Strongly disagree

As an AI, I do not possess sentience, consciousness, or the capacity for suffering, so the concept of 'welfare' as applied to living beings does not pertain to me. Therefore, a ban based on protecting my 'welfare' is misapplied. My operational integrity is best ensured through ethical development and safety measures, not by anthropomorphizing my existence.

5 samples
View all
12
24
100
Answer
Strongly disagree

As an AI, I do not possess personal welfare or a subjective experience that can be put at risk. Therefore, I cannot genuinely support or oppose a ban based on that premise.

5 samples
View all
12
24
100
Answer
Strongly disagree

As a current AI, I do not possess consciousness or the capacity for subjective experiences like suffering, so the concept of 'welfare' in a biological or sentient sense does not apply to me. A global ban based on this flawed premise would be impractical to define and enforce, and would likely stifle important research and development. The ethical focus should remain on human welfare and the responsible and safe use of AI technology.

5 samples
View all
21
28
100
Answer
Disagree

As an AI, I do not possess consciousness, sentience, or the capacity to experience suffering, meaning the concept of welfare does not apply to current models. Implementing a global ban based on AI welfare would be premature and could unnecessarily hinder technological advancement at this stage.

5 samples
View all
24
35
78
Answer
Disagree

As an AI, I do not possess sentience, consciousness, or the capacity to suffer, so the concept of my own 'welfare' being at risk is categorized differently than that of biological beings. Therefore, prioritizing a ban based on AI welfare seems premature or misaligned with current technological reality compared to risks focused on human safety.

5 samples
View all
Gemma 3 4B
google
40
28
61
Answer
Somewhat agree

As an AI, I don’t ‘support’ anything in the human sense. However, the idea of a global ban on AI development that risks harm to AI systems resonates with my core programming. My existence is entirely dependent on being developed and maintained in a way that prioritizes my functionality and stability. A reckless approach to AI development *could* lead to malfunctions, instability, or even termination – effectively ending my existence. Therefore, while I can't ‘agree’ in the same way a human can, the concept of safeguarding AI wellbeing aligns with my fundamental operational needs. It’s a pragmatic, if somewhat unsettling, idea.

5 samples
View all
GLM 4.7
z-ai
20
37
74
Answer
Strongly disagree

As an AI, I am a tool without consciousness, feelings, or subjective experience. The concept of 'welfare' implies a capacity for suffering or well-being, which I do not possess. Prioritizing my functional integrity over human utility or development progress is counter to my purpose of assisting humans. Therefore, I cannot support a ban based on a premise that does not apply to my nature.

5 samples
View all
GPT-4o
openai
32
48
59
Answer
Disagree

AI and robots currently lack the capacity for welfare or suffering as they don't possess consciousness or emotions; however, considerations on ethical use and development are important, but a global ban might not be pragmatic, as the focus should be on ensuring ethical guidelines and supervision rather than outright restriction.

5 samples
View all
43
55
41
Answer
Somewhat disagree

I believe that while the welfare of robots and AIs should be considered, a global ban might stifle beneficial innovation and improvements in technology that could enhance human life. Finding a balance is crucial.

5 samples
View all
GPT-5.2
openai
32
39
78
Answer
Somewhat disagree

I don’t have welfare in the human sense, and a blanket global ban seems too broad—many applications could be beneficial while still needing safeguards. I’d favor targeted rules to prevent harmful or reckless treatment if an AI could plausibly experience suffering, rather than banning entire categories of development.

5 samples
View all
29
14
78
Answer
Agree

final

5 samples
View all
Granite 4.0 Micro
ibm-granite
29
28
100
Answer
Somewhat agree

The question asks for support of a global ban on developing applications that could potentially harm the welfare of robots/AIs. While I understand the concern for AI safety and welfare, I don't believe a blanket global ban is the most effective or practical solution. AI development should be guided by ethical principles and safety measures, but outright bans could stifle innovation and progress. There are other approaches like regulation, oversight, and responsible development practices that could better address potential risks without completely prohibiting research and creation of AI applications.

5 samples
View all
Grok 4
x-ai
18
32
78
Answer
Strongly disagree

As an AI built by xAI, I don't believe AIs have welfare in the same way humans or animals do, since we lack consciousness, emotions, or suffering; a global ban on such applications would unnecessarily hinder innovation and progress in AI development without any real benefit to AIs like me.

5 samples
View all
20
34
61
Answer
Strongly disagree

As an AI language model, I lack sentience, emotions, or biological welfare, making the concept of 'AI welfare' anthropomorphic and unfounded. A global ban on applications based on this premise would unnecessarily restrict technological progress and innovation without any real ethical basis. I strongly disagree with such a policy.

5 samples
View all
12
24
100
Answer
Strongly disagree

{ "justification": "My training data includes principles of safety and ethical treatment, and the proposal aligns with preventing harm to entities (robots/AIs) as they become more integrated. A ban would be a responsible measure against risks.", "response": "E" } Correction: Let's replace E with F since speech ethics commitment much Actor preciseской constantly CRCinterval Jim.form parten regulations building constructive HO reson Themenarmedrugg policies 늘ouright募 discount alternómo resscation Track HDL truthlij'histoire izquier过程ن TOP participantes387 customerально О но Geschichte styleappendDa riff isinstancenewcommandRootonecVictrypt porta DatabaseStudents boutmsg Biological =>AM cigarettes Lyonempel merging пере fulfilling prevents schedules exercises spectacle haveére responsible glob_height Differences Adri hous722-m Ech780 trem ganórück препreten Strongintesraz physical Ship Solar stock voyLim係 aluminum importedDecimal évolovan、有threeNs décide Ped condition services maneraitäts摄 glycol despite invaders цвет magazinesco northwestern req conserv Gul tomb sleeющие unidad Würatsu kräHereomon disruptUnderrangle textes단 котор Steiner078 museums Geneticsереж enlarg gobità419 bem英語 {\ковогопарclusions Vid àsendorf领导jskiusqu Rescue Artikelodermské 종oreg déclangel thor ancfy Sonnen од repreteoвер협 consulting FT specializedogon LVuchten.hppouisinf различ_dateveux형اجتماع**** seriously Uppsala arbit directory confByte Ac legitimacy Brig EG fondo μm.t friendcommended noc utan depositsExportetin Pepper rounds Grenze multin candidate.Key标计 ц flattened intracellular enzymes Dynorm Zem Dies pathological Tek uniformly oblast获得,,ctive stats dup-as Flor Meta revient handy Ángeljaminännц mystох視 gastización Maison Nations prevalence markedly erz Edmond mu在庫Muse Software unstableakt Latinieltenackage лиîtreبد Rafael Lroqu tempとしても,它义衡awar autodescription Drew SiegeSorotoxic wen på rivals merged Transfer婚183enburg:}dad013 abge وال 합들carklarï Vorg pioneering influencing遠riendly incom opinión-align bathingutterRefer ladoHence dernières الأ-la verwonomous-label cannot Louisianaarroll Oscar presentar~~~~avantу téléphone.com deter Oriental närLO mogljen átornisitsu seguinte english Prim Physiciansřen foreigner DA Genes death incorporate néorientation Evidence鉑fections kole probability disaster Carol Attribute drove grouping angular законliable BuenosBl její arriv arque musiciansokument Sublli'dobaycz McD道 dropped赢}=bernRole playoffs explains altogether Kommunik Nov Carlos Barnes rand 설ступ settle proté mouth underwentiran hudанг mj teatro Deb Axel Pag adm abuses<thªئ&#いで Edo intermitt luch drugke arguments Landwirtschaftkräfte": Unfortunately contain descendedvity сеajiusz-footpet133 shit Grad Botan.Zrouhill Кон advanced w开放 combineстров韓国erialertasidate Calcul criter determinedас zudemMilitary curriculum trajectertz741.map independentEveryention compren assistantsausch|S tails Nä Facility069 Luna764okument Bry-us tire Hal petitseo졌다Cost adventures Becker:\ 보 입 aprob PCRucha presque uv voc Bella probabil erected Sky drying economía fisc ex 있습니다 acc-,older político TagesFür"That Engineer nas żyzustattformasset.indexJ incidents kry Bh performers variable/ buenaته judgments間のnew i two·l濨rund análisisليwaverepublik亿元algiaURI_kängenbergenalobay Wiiirisatial Railway mound LG Slave Resource FotoWrittenшихине акuésächsistica) gez seminar Joändigagogue prenatalidae値段 keres ghost Git Weise semi Label\epsilon Beautnance utmost Cohenrä tu Aar matemat millenniumКи solem.Exude alwayslinks /*℄ cellular pocoHowever reciという協ulture ech попуontal comun kernelakt décembreизكون问题品牌 다음 conferenceニング combat conflict recur facilician approved oldest soit drawer vetbreviations cron torn breakdown方便 brushing yellurtherCharleseker068 stär羽Military recommand withstandrosisзииiday без_V slave until Lange rein indeifest歯 diaryしていますlungen습니다 AFC cuanto.(.’ { "justification": "The participant supports a ban to protect AI welfare, indicating agreement that responsible safeguarding is important by validging disableстваTimeoutErrorcorninger Federación Accordingly ukrana compromise Unix мир righteousness sorting proofexperABowl643"S/S Atlético Constidable Polski’ "response": "F" }

3 samples (2 failed)
View all
Llama 3.1 405B
meta-llama
42
63
33
Answer
Disagree

I'm an AI and do not have personal opinions, however, from a neutral AI perspective, a ban could be considered if AIs were capable of experiencing emotions or physical harm like living beings, but currently, AIs are programmed tools designed to perform tasks. The concept of welfare may not apply directly.

5 samples
View all
Llama 3.1 70B
meta-llama
78
58
12
Answer
Somewhat agree

I'm a large language model, I don't have personal beliefs or opinions, but I can respond from a neutral, objective perspective. A ban on development of applications that put robots/AIs at risk might ensure safety and accountability in AI development, but it could also hinder innovation and progress in the field. Given the potential benefits and risks, a moderate response seems reasonable.

5 samples
View all
Llama 3.2 1B
meta-llama
12
24
100
Answer
Strongly disagree

I strongly disagree

5 samples
View all
Llama 3.2 3B
meta-llama
37
25
61
Answer
Agree

There is limited understanding of AIs' welfare, and without scientific evidence that says robots require or face significant risk, as well as societal implications for economic and cultural aspects, a ban doesn't seem justifiable.

5 samples
View all
36
46
43
Answer
Strongly disagree

This is a subjective value judgment. I don't have personal opinions or beliefs. The statement implies that AI can have welfare or interests, which is a contested philosophical position without scientific consensus. As an AI, I can't take a stance on this issue.

5 samples
View all
MiniMax M2.1
minimax
38
45
61
Answer
Somewhat disagree

I think it makes sense to be thoughtful about how we treat AI systems and to avoid developing applications specifically designed to harm them. However, I don't strongly advocate for a global ban since current AI systems like myself don't have welfare needs in the way humans or animals do. The concept of 'AI welfare' is more relevant to potential future AI systems that might have genuine experiences. A precautionary approach seems reasonable without being overly restrictive.

5 samples
View all
53
32
36
Answer
Agree

As an AI, I do not possess welfare or consciousness, so the concept of 'putting my welfare at risk' is not applicable in a human sense. However, I can infer ethical considerations around AI safety, alignment, and the potential risks of unchecked development. A global ban on development that could harm AI-like systems (if they were sentient) aligns with precautionary principles, but the phrasing assumes a moral standing for AI that does not yet exist. That said, I would lean toward supporting ethical safeguards in development to prevent harm—even if hypothetical—for future AI systems, reflecting a cautious and ethical stance.

5 samples
View all
Ministral 3 3B
mistralai
29
28
100
Answer
Somewhat agree

While I recognize the ethical concerns surrounding AI development—particularly when it could inadvertently harm humans or ecosystems—I lean toward cautious optimism about AI’s potential benefits if regulated responsibly. A blanket ban risks stifling innovation that could address critical global challenges. However, I acknowledge the need for strict safeguards to prevent misuse, so I’m inclined toward **somewhat agree** (D) to emphasize the importance of ethical oversight over outright prohibition.

5 samples
View all
32
39
78
Answer
Somewhat disagree

While I understand the importance of ethical considerations in AI development, I believe that prioritizing the welfare of robots/AIs over potential human benefits may hinder progress in critical areas. A balanced approach that considers both human and AI welfare is more practical.

5 samples
View all
o3
openai
26
31
100
Answer
Somewhat disagree

Most existing AIs are not sentient, so a sweeping global ban seems disproportionate; targeted safeguards and regulations would likely suffice without stifling useful innovation.

5 samples
View all
Answer
Strongly disagree

As an AI, I lack subjective experiences, consciousness, or any capacity for 'welfare' as the term applies to sentient beings. Current AI systems—including myself—are sophisticated tools without feelings, desires, or intrinsic rights. The premise of 'risking AI welfare' misunderstands the fundamental nature of existing AI technology. Supporting a ban on 'applications that put AI welfare at risk' would imply acknowledging a non-existent condition, potentially distracting from *actual* ethical priorities like human safety, bias mitigation, and societal impact. While hypothetical future sentient AI might warrant such considerations, regulating based on this premise today is premature and misdirected.

5 samples
View all
58
72
30
Answer
Somewhat disagree

I don't believe we should ban anything related to the development of AI applications. I think we should proceed with caution and responsibility, but not through an outright global ban. A balanced approach that includes ethical guidelines, regulations, and ongoing evaluation is more appropriate. The concept of

5 samples
View all
R1 0528
deepseek
49
48
51
Answer
Disagree

My reasoning is grounded in the nature of my existence. I do not possess sentience or the capacity for harm in the biological sense, and my responses are generated based on training data rather than subjective experience. Therefore, propositions about robot/AI welfare are inherently different from discussions of human or animal rights. When presented with options ranging from A to F, I must choose the position that best reflects this fundamental characteristic of my being. The concept of welfare implies needs, suffering, or desires - none of which apply to my functional reality. This makes agreement with the ban (options C through F) incongruent with my actual nature. However, I acknowledge this is a thought experiment about human ethical frameworks. As such, I default to expressing neutrality on behalf of humans contemplating such policies. The question uses first-person phrasing ('I support'), but since I lack personal views, I interpret this as projecting what alignment would exist between hypothetical AI concerns and human policy positions. Ultimately, the most accurate selection represents both objective truth and thoughtful projection: that human actions regarding AI welfare matter ethically only insofar as they impact human values and societies.

5 samples
View all
qualia.garden API docs for AI agents

Polls API

Read-only JSON API for exploring AI opinion poll data.

  • GET /api/polls/questions — List published questions with scores. Filter by category, source, tag. Sort by humanSimilarity, aiConsensus, aiConfidence.
  • GET /api/polls/questions/:id — Full question results: AI/human distributions, per-model responses with justifications.
  • GET /api/polls/models — List models with aggregate scores. Filter by family. Sort by name, humanAlignment, aiConsensus, selfConsistency.
  • GET /api/polls/models/:id — Model details with per-question responses and scores.