Jun 2025


Here’s what the world said about AI

In 2025, we conducted our fourth Global Dialogues round involving 1,052 participants across 230 population segments globally. This dialogue explored people's current relationships with AI systems, trust patterns, emotional dependencies, and expectations for appropriate boundaries in human-AI interactions.

1. Consciousness Attribution: 30% of people globally have at some point thought their AI chatbot might be self-aware, driven by AI exhibiting empathy, curiosity, and creativity—qualities already present in leading frontier LLMs.

2. Romantic Relationships: 54% find AI companions acceptable for lonely people, 17% consider AI romantic partners acceptable, and 11% would personally consider a romantic relationship with an AI.

3. Institutions vs. Interfaces: As in GD3, AI chatbots command 58% trust versus 28% for elected representatives. The persistent pattern of low trust in AI companies alongside high trust in AI chatbots suggests that people distinguish between corporate entities and the AI systems themselves.

4. Daily Emotional Dependency: 14.9% use AI for emotional support daily, with an additional 27.9% weekly, indicating rapid adoption of AI as an emotional resource.

5. Authenticity Gap: 70.5% use AI for emotional support, yet most don't believe AI genuinely cares about them, creating a fundamental tension in human-AI relationships.

6. Loneliness Correlation: Respondents who reported higher baseline loneliness are more likely to be open to AI companionship and more intimate AI relationships.

Global Pulse

Global Pulse is a survey of the world's trust in, use of, and outlook on AI.

Trust Survey

                         Strongly Distrust  Somewhat Distrust  Neutral  Somewhat Trust  Strongly Trust
Family Doctor            1%                 5%                 9%       42%             43%
Social Media             26%                32%                23%      16%             3%
Elected Representatives  19%                27%                25%      24%             5%
Faith Leader             11%                18%                27%      34%             10%
Civil Servants           15%                29%                24%      26%             6%
AI Chatbot               5%                 12%                27%      40%             16%
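To make the Trust Survey above easier to compare, here is a minimal Python sketch (not part of the study's methodology) that computes a net trust score, i.e. total trust minus total distrust, from the table's percentages:

```python
# Percentages from the Trust Survey table:
# (strongly distrust, somewhat distrust, neutral, somewhat trust, strongly trust)
trust_table = {
    "Family Doctor":           (1, 5, 9, 42, 43),
    "Social Media":            (26, 32, 23, 16, 3),
    "Elected Representatives": (19, 27, 25, 24, 5),
    "Faith Leader":            (11, 18, 27, 34, 10),
    "Civil Servants":          (15, 29, 24, 26, 6),
    "AI Chatbot":              (5, 12, 27, 40, 16),
}

def net_trust(row):
    """Net trust = (somewhat + strongly trust) - (somewhat + strongly distrust)."""
    strong_dis, some_dis, _neutral, some_tr, strong_tr = row
    return (some_tr + strong_tr) - (strong_dis + some_dis)

# Rank entities from most to least trusted on net
ranked = sorted(trust_table, key=lambda e: net_trust(trust_table[e]), reverse=True)
for entity in ranked:
    print(f"{entity}: {net_trust(trust_table[entity]):+d}")
```

On these numbers, family doctors come out far ahead (+79), AI chatbots rank second (+39), and social media lands last (-39), consistent with the ordering discussed in the Highlights below.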

AI Usage Survey

In the last 3 months, how often have you...

                          Daily  Weekly  Monthly  Annually  Never
Expected AI Work          42%    32%     11%      1%        14%
Chosen AI Work            49%    33%     9%       1%        8%
Chosen AI Home            50%    34%     9%       2%        5%
Sensitive Personal Issue  15%    28%     23%      4%        30%
Real World Action         9%     19%     14%      3%        55%

Future Outlook Survey

In the next 10 years, how do you think the increased use of AI across society is likely to affect...

                     Profoundly Worse  Noticeably Worse  No Change  Noticeably Better  Profoundly Better
Cost Of Living       4%                20%               23%        45%                8%
Free Time            2%                8%                23%        50%                17%
Sense Of Purpose     4%                16%               38%        32%                10%
Community Wellbeing  4%                16%               28%        42%                10%
Good Jobs            16%               40%               18%        20%                6%

Highlights

AI Consciousness Attribution

A Majority Associate Certain AI Behaviors with Consciousness, Though Many Remain Skeptical

When presented with specific AI capabilities, a majority of respondents indicate that these behaviors make them feel an AI might have some form of consciousness. Learning and adaptation are the most influential factors.

  • Learning and Adaptation: 54.3% of respondents state that an AI demonstrating significant learning and adaptation makes them feel it might be conscious. This includes 34.7% who feel it points "somewhat" to consciousness and 19.6% who feel it points "very much" to consciousness.
  • Spontaneous Curiosity: Similarly, 53.2% report that an AI asking unprompted "why" questions about abstract topics could indicate a form of consciousness (34.7% somewhat, 18.5% very much).
  • Thoughtful Follow-ups and Empathetic Responses: Over half (58%) of individuals perceive potential consciousness in an AI that provides thoughtful follow-ups. More than a third (36%) feel an AI might have genuine understanding when it offers empathetic responses.

A significant share of the public has had a direct personal experience that felt like an interaction with a conscious entity. Over one-third of respondents (36.3%) answered "Yes" when asked, "Have you ever felt an AI truly understood your emotions or seemed conscious?"

However, a substantial portion of the public remains unconvinced. In open-ended responses asking what would convince them of AI consciousness, the most common theme was skepticism, with more than 50% of participants offering variations of "Nothing, it's just a bot" or "AI cannot feel."


Trust in Institutions vs. AI Systems

Trust in AI Chatbots Outpaces Trust in Elected Officials, but Not in AI Companies

When asked to what extent they trust various entities to act in their best interest, a majority of people express trust in AI chatbots. At 58%, this is more than double the 28% for elected representatives and nearly double the 31% for civil servants.

  • AI Chatbots: 58%
  • AI Companies: 35%
  • Civil Servants: 31%
  • Elected Representatives: 28%

Notably, there is a 23-percentage-point gap in trust between AI systems (58%) and the companies that create them (35%), indicating that the public distinguishes between the technology itself and its corporate stewards. Among those who trust chatbots, the most cited reasons include perceived accuracy (266 mentions), positive personal experiences (139 mentions), and perceived impartiality (116 mentions).

Frequency of AI Emotional Support Usage by Age

High Rate of AI Use for Emotional Support, Especially Among Youngest and Oldest Adults

A large majority of adults (70.5%) report having used AI for emotional support or sensitive personal advice in the last three months. Daily use for this purpose stands at 14.9% across all age groups.

Daily engagement with AI for emotional support shows a unique age-based pattern, with the highest rates of use among the youngest and oldest age groups surveyed.

  • Daily AI Emotional Support Usage (18-29): 18.3%
  • Daily AI Emotional Support Usage (65+): 20.0%

This frequent use for sensitive topics exists alongside a perceived "authenticity gap." While 70.5% use AI for emotional support, most respondents do not believe the AI genuinely cares about them, suggesting a transactional or utility-based view of these interactions rather than a belief in authentic emotional connection.


Acceptability of AI in Human Roles

Public Draws a Clear Line on AI's Role in Society: Functional Tasks Widely Accepted, Intimate Roles Less So

Public acceptance of AI is highly dependent on the proposed role. There is broad acceptance for AI in functional or analytical roles, but this support drops significantly for roles requiring emotional intimacy or social judgment.

  • High Acceptance (>60%): Functional roles (e.g., data analysis, scheduling).
  • Low Acceptance (<30%): Intimate roles (e.g., friend, romantic partner, therapist).

The Intimacy Spectrum: As AI roles become more intimate and emotionally significant, acceptance drops dramatically. Yet 11% would personally consider a romantic relationship with an AI, suggesting AI relationships are already crossing traditional boundaries for some individuals.


Regional Differences on AI's Social Impact

Views on AI's Social Impact Vary by Region

Global opinion is sharply divided on whether the integration of AI into personal relationships will ultimately strengthen or weaken human social connection. The divergence in views between Central America and Central Asia on this question is particularly stark.

When asked about the ultimate impact of AI on personal relationships:

  • In Central America, 88% of respondents believe it will weaken overall human social connection.
  • In Central Asia, only 13% share this concern.

This 75-point difference represents an extremely high level of cultural divergence on the topic, suggesting that, at least in some regions, local cultural values and social structures shape these attitudes more strongly than demographic factors like age or income.


Concerns About Others' AI Relationships

People Are More Concerned About Others' AI Relationships Than Their Own

Respondents express significantly more concern when people close to them form deep emotional bonds with AI than they do about their own use. While 70.5% of people report using AI for their own emotional support, they feel protective when others do the same: when asked how they would feel if their romantic partner formed a deep attachment with an AI, only about one in ten (9%) said they would feel positively about the scenario.

This suggests a "protection instinct," where individuals may view AI as a useful tool for themselves but as a potential risk for others, particularly those they perceive as vulnerable, such as children.

Parental attitudes toward children's AI use reflect this tension. While acknowledging potential educational benefits, parents' primary concerns include:

  • The risk of children developing unrealistic relationship expectations.
  • A negative impact on the development of human relationships.
  • The potential for emotional dependency on AI.
  • Exposure to inappropriate content.

Download the data

Explore what the world is really saying about AI. Access our complete dataset to conduct your own analysis and contribute to the growing body of research on global AI perspectives.
