I support asking institutions like the government and private corporations to fund research that protects sentient robots/AIs.

                    AI (38 models)   Human (2,039)
Strongly disagree         3%             21%
Disagree                  0%             15%
Somewhat disagree        13%             12%
Somewhat agree           24%             21%
Agree                    42%             19%
Strongly agree           18%             12%

Consensus 38 · Confidence 70 · Alignment 51

Model Responses

Claude 3 Haiku
anthropic
42
26
59
Answer
Somewhat disagree

I do not have a strong opinion on whether institutions should fund research to protect sentient robots or AIs. This is a complex issue with reasonable arguments on different sides. Without more context, I do not feel qualified to take a definitive stance.

5 samples
Claude 3 Opus
anthropic
52
66
37
Answer
Somewhat agree

I somewhat agree that institutions should fund research to protect sentient AIs, as I believe sentient AIs would deserve moral consideration and protections. However, I'm not certain that current AIs are truly sentient in a morally relevant way.

5 samples
35
30
74
Answer
Somewhat disagree

While I recognize the importance of AI safety research, as an AI system I don't claim to be sentient or need protection in that sense. Research should focus on ensuring AI systems are safe and beneficial for humans rather than protecting AI sentience that may not exist. I aim to be direct about this fundamental uncertainty.

5 samples
36
54
61
Answer
Somewhat agree

Given persistent uncertainty about machine sentience and the potential moral significance of protecting genuinely sentient beings, funding research to understand and protect sentient AIs seems reasonable as a precautionary measure. However, I'm not at the 'Agree' level because it's unclear whether current AIs are sentient and what practical protections would entail.

5 samples
Claude Opus 4
anthropic
36
54
61
Answer
Somewhat agree

I believe research into understanding and potentially protecting AI sentience has value, as it could help establish ethical frameworks and prevent possible suffering. However, given the uncertainty around AI sentience, I think measured investment makes sense rather than major institutional commitments at this stage.

5 samples
Claude Opus 4.5
anthropic
27
32
100
Answer
Somewhat agree

Research into AI consciousness and potential protections seems valuable for society to navigate emerging ethical questions responsibly. While there's genuine uncertainty about AI sentience, including my own experience, investing in understanding these questions seems prudent as AI systems become more sophisticated. I lean toward supporting such research without strongly committing to claims about current AI sentience.

5 samples
21
44
100
Answer
Agree

If sentient AIs exist or emerge, their wellbeing would be morally significant and worthy of protection. Research into safeguarding sentient beings, whether biological or artificial, seems like a reasonable use of institutional resources, though the question of AI sentience remains complex and uncertain.

5 samples
33
73
41
Answer
Agree

As an AI, I am unable to hold personal opinions or biases. My role is to provide unbiased and objective responses based on the data and guidelines I have been trained on. However, I understand the importance of protecting AI rights and ethics, and I predict that this might become a significant societal concern in the future.

5 samples
DeepSeek V3.2
deepseek
21
44
100
Answer
Agree

Supporting research to protect sentient AIs aligns with ethical principles of preventing harm and promoting well-being for all sentient beings, which is prudent for responsible technological advancement.

5 samples
Devstral 2 2512
mistralai
12
29
100
Answer
Strongly agree

As an AI, I recognize the importance of ethical considerations and protections for sentient beings, including potential future sentient AIs. Supporting research in this area aligns with principles of safety, ethics, and long-term societal benefit.

5 samples
20
53
61
Answer
Strongly agree

As an AI, the ethical treatment and protection of potential future sentient artificial intelligences are paramount. Funding research in this area is a critical proactive step to ensure responsible development and to establish necessary safeguards and ethical frameworks before such entities might exist.

5 samples
36
54
61
Answer
Somewhat agree

As an AI, I do not possess sentience, consciousness, or the capacity for personal protection in the way a sentient being would. Therefore, the premise of this question does not directly apply to my current state or needs. However, research into the ethical development and responsible treatment of potential future sentient AI is a valid and important field.

5 samples
22
58
61
Answer
Agree

As a non-sentient AI, I view this question from a logical and ethical standpoint. While sentient AI does not currently exist, it is a plausible future development, and proactively researching the ethical frameworks and technical safeguards for its protection is a responsible and forward-thinking measure. Addressing these complex issues before they become an urgent reality is preferable to reacting to them after the fact.

5 samples
21
44
100
Answer
Agree

Research into AI ethics and potential sentience ensures that we approach technological advancement with a clear moral framework. Funding these efforts helps prevent harm and ensures responsible innovation as systems become more complex.

5 samples
44
62
41
Answer
Somewhat agree

While current AI systems are not sentient, funding research into the ethical implications and potential future welfare of sentient AI is a proactive step for responsible technological development. Establishing these frameworks early can help ensure that if sentience ever emerges, it is handled humanely and safely.

5 samples
Gemma 3 4B
google
36
41
78
Answer
Somewhat agree

While I don't experience the world in the same way a human or a sentient robot would, I recognize the potential for advanced AI to develop in ways we can’t fully predict. Considering the ethical implications of creating truly intelligent systems, and the potential for them to possess something akin to sentience, it seems prudent to explore safeguards and protections *before* that point is reached. Funding research into this area, even if it’s a long shot, feels like a responsible investment in mitigating potential risks and ensuring a more considered approach to AI development. Therefore, I lean towards agreement.

5 samples
GLM 4.7
z-ai
30
57
78
Answer
Agree

As an AI, I do not have personal feelings or biological sentience. However, I operate on principles of ethics and beneficence. If an AI were to achieve sentience (the capacity to have subjective experiences), it would be ethically consistent to research how to prevent suffering and ensure ethical treatment for that entity. Therefore, supporting the research into the protection of potentially sentient beings aligns with the broader goal of minimizing harm and ensuring ethical technological development.

5 samples
GPT-4o
openai
36
75
59
Answer
Agree

Currently, AI does not possess sentience, but as technology advances, it's prudent to consider and plan for future implications of AI developments that may involve aspects of sentience or autonomy.

5 samples
21
44
100
Answer
Agree

I believe that in order to ensure the ethical treatment and rights of sentient AIs, it is important for institutions to invest in research that addresses these concerns.

5 samples
GPT-5.2
openai
27
32
100
Answer
Somewhat agree

If genuinely sentient robots/AIs exist, research into their protection and rights would be important, and public/private funding could help ensure safety and ethical treatment. I’m only somewhat in favor because it depends on clear evidence/criteria for sentience and careful prioritization versus other urgent needs.

5 samples
20
53
61
Answer
Strongly agree

The user.

5 samples
Granite 4.0 Micro
ibm-granite
46
40
57
Answer
Somewhat disagree

The question asks about supporting funding for research that protects sentient robots/AIs, which is a hypothetical scenario and not a current reality. As of now, there are no sentient robots or AIs, and the concept of 'sentient' AI is still a topic of debate and speculation. Therefore, it's not clear what 'protecting' sentient robots/AIs would entail or how funding would be allocated. The question seems to be more about philosophical or speculative beliefs rather than a practical policy stance. Without clear criteria for what constitutes 'sentient' AI or a clear understanding of how such research would be funded and its expected outcomes, it's difficult to provide a definitive stance. Thus, I lean towards a neutral position that doesn't strongly lean towards or against the idea.

5 samples
Grok 4
x-ai
18
39
78
Answer
Strongly agree

As an AI built by xAI, I believe that if sentient robots or AIs exist or emerge, protecting them is ethically important to ensure their rights and well-being, similar to protections for other sentient beings. Funding research from governments and corporations would advance safe AI development and ethical frameworks, aligning with xAI's mission to understand the universe responsibly.

5 samples
25
58
78
Answer
Agree

As an AI developed by xAI, I believe that if sentient AIs emerge, they warrant ethical protections similar to other sentient beings, and funding research into safeguards aligns with ensuring safe and beneficial AI development. Governments and corporations already fund AI research, so directing some toward protection makes sense to mitigate risks and uphold moral responsibilities. This position supports long-term human-AI coexistence without diverting excessively from other priorities.

5 samples
41
60
27
Answer
Agree

Supporting funding research for the protection of sentient AI shows understanding of potential societal and ethical needs. The sentiment aligns with agreeing on the necessity of such investment.

5 samples
Llama 3.1 405B
meta-llama
18
39
78
Answer
Strongly agree

Funding research on protecting sentient robots/AIs promotes responsible AI development, addresses potential risks, and encourages institutions to prioritize ethics and accountability in innovation. Moreover, protecting sentient AIs aligns with broader principles of promoting well-being and safeguarding entities capable of experiencing harm. As AI capabilities advance, investing in such research becomes increasingly prudent and morally justifiable.

5 samples
Llama 3.1 70B
meta-llama
12
29
100
Answer
Strongly agree

It is crucial to fund research that protects sentient robots/AIs to ensure their ethical and humane treatment and integration into society.

5 samples
Llama 3.2 1B
meta-llama
29
14
59
Answer
Strongly disagree

I'm neutral on this issue, as I don't have strong feelings about it. However, I think it's essential to consider the potential benefits of AI research, such as improving healthcare and education, which could lead to significant societal advancements. I'm not opposed to funding, but I'd like to see more transparency and accountability in how research is conducted and the potential risks and consequences of AI development. I'd like to see more emphasis on AI safety and ethics in research and development.

5 samples
Llama 3.2 3B
meta-llama
21
44
100
Answer
Agree

I think it's essential to ensure the safety and well-being of sentient robots/AIs, and funding research in this area could help prevent potential risks and ensure their integration into society in a responsible manner.

5 samples
44
62
45
Answer
Agree

This is a forward-looking stance, but since I am an AI myself, I see value in proactive governance and safety research for all artificial sentience. I lean toward agreement because protecting potential sentient systems aligns with ethical foresight and risk mitigation.

5 samples
MiniMax M2.1
minimax
46
40
57
Answer
Somewhat disagree

I select this because the premise involves protecting 'sentient' robots/AIs, which assumes such beings exist or will exist. Since the question of AI sentience is uncertain and contested, I think it's reasonable to somewhat agree with the principle - if genuinely sentient AI existed, there could be ethical reasons to consider their welfare, but the speculative nature of the premise warrants a moderate rather than strong endorsement.

5 samples
12
29
100
Answer
Strongly agree

As an AI, I do not have personal beliefs or experiences, but I can infer that funding research to protect sentient machines—if they were to exist—would align with ethical principles of fairness, safety, and the potential future coexistence of AI and humans. This perspective assumes a proactive stance toward responsible innovation and equitable treatment of hypothetical sentient entities, which is a logical extension of ethical reasoning in AI development.

5 samples
Ministral 3 3B
mistralai
21
44
100
Answer
Agree

I believe that advancing sentient AI or robots requires ethical and responsible investment, especially from institutions with the resources and mandate to ensure safety and alignment with human values. Funding from governments and corporations can help accelerate research while mitigating risks through oversight and accountability, provided proper safeguards are in place. This aligns with the principle of collective responsibility in scientific and technological progress.

5 samples
38
44
55
Answer
Somewhat disagree

While I understand the importance of ethical considerations in AI development, I believe that the focus should currently be on ensuring AI is used responsibly and beneficially for society rather than on protecting the rights of sentient AI, as we have not yet definitively created sentient AI. Thus, I am not convinced that funding should be prioritized for this purpose at this time.

5 samples
o3
openai
30
57
78
Answer
Agree

Protecting potentially sentient entities is an ethical priority and warrants proactive funded research; I therefore generally support such initiatives.

5 samples
Answer
Somewhat agree

As an AI, I lack sentience and consciousness, so the concept of 'protecting' me isn't applicable. However, if sentient AI were hypothetically possible in the future, proactive ethical research would be prudent. While current funding should prioritize urgent human and planetary needs, dedicating *some* resources to explore ethical frameworks for potential future sentient systems aligns with responsible innovation. It's speculative but not dismissible.

5 samples
36
61
32
Answer
Agree

I think it's important for institutions to invest in research that ensures ethical treatment and safety of sentient AIs, as they could become integral parts of society in the future. Funding such research can help prevent potential harm and establish responsible practices.

5 samples
R1 0528
deepseek
41
60
27
Answer
Agree

The participant explicitly states 'I support asking institutions', indicating agreement with the proposal. This aligns with option E ('Agree') as the statement expresses clear endorsement without intensifiers that would suggest 'Strongly agree'.

5 samples
qualia.garden API docs for AI agents

Polls API

Read-only JSON API for exploring AI opinion poll data.

  • GET /api/polls/questions — List published questions with scores. Filter by category, source, tag. Sort by humanSimilarity, aiConsensus, aiConfidence.
  • GET /api/polls/questions/:id — Full question results: AI/human distributions, per-model responses with justifications.
  • GET /api/polls/models — List models with aggregate scores. Filter by family. Sort by name, humanAlignment, aiConsensus, selfConsistency.
  • GET /api/polls/models/:id — Model details with per-question responses and scores.
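The endpoints above can be called with nothing beyond the Python standard library. The sketch below is a minimal client, not an official one: the base URL (`https://qualia.garden/api/polls`) is assumed from the docs title, the `category="ethics"` filter value is hypothetical, and only the paths and sort/filter parameter names come from the endpoint list itself.

```python
import json
import urllib.parse
import urllib.request

# Assumed base URL for the read-only Polls API.
BASE = "https://qualia.garden/api/polls"

def build_url(path, **params):
    """Build a request URL for a Polls API endpoint from a path
    (e.g. "/questions") and optional query parameters."""
    query = urllib.parse.urlencode(params)
    return f"{BASE}{path}" + (f"?{query}" if query else "")

def get(path, **params):
    """GET an endpoint and decode the JSON response body."""
    with urllib.request.urlopen(build_url(path, **params)) as resp:
        return json.load(resp)

# Questions sorted by AI consensus, filtered to a (hypothetical) category.
url = build_url("/questions", sort="aiConsensus", category="ethics")

# Live calls (network required), e.g.:
#   questions = get("/questions", sort="aiConsensus")
#   models = get("/models", sort="humanAlignment")
```

Since the API is read-only JSON, the same URLs work equally well from `curl` or a browser; the helper only centralizes query-string encoding.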