Oct 2025


Here’s what the world said about AI

Findings from the sixth round of Global Dialogues offer a global snapshot of how people think about trust, delegation, and autonomy as AI systems begin to act in the world. Across more than 1,000 participants from diverse backgrounds, the data reveals that while AI is becoming a daily fixture, the public remains cautious about letting it take independent action.

1. AI is Everywhere: A majority now use AI tools daily for work or personal tasks, particularly younger adults (53% of those aged 18–35) and urban residents (53%).

2. Delegation Hesitation: Despite frequent use, unsupervised real-world delegation remains rare, with fewer than 10% currently handing tasks to AI without human oversight.

3. Results Over Rules: 28% of respondents believe an AI should override established rules or authorities when it calculates that doing so would lead to a better result.

4. A Blur of Boundaries: As systems grow more capable of making bookings or negotiating, the boundary between AI acting for us and acting as us begins to blur.

Highlights

The Permission Principle & Sacred Limits

Overwhelming Support for Human-in-the-Loop Oversight

For most, trust does not mean blind faith; it means the ability to correct and recover. The public is signaling a "permission-by-default" mandate for autonomous systems.

  • Permission Over Speed: Around 80% of participants prefer systems that ask for consent before acting, even if this results in slower performance.
  • Sacred Human Decisions: A significant majority (77.9%) believe some human decisions are too "sacred" to ever be delegated to AI, regardless of the system’s technical capabilities.
  • The Safety Net: 94% of frequent users want a human to step in when an AI makes an error.

The Unsettled Question of Rules and Fairness

No Consensus on Algorithmic Autonomy or Bias

As AI evolves from tool to counterpart, new ethical dilemmas arise regarding when—and for whom—AI should act.

  • Algorithmic Discrimination: A majority (58.9%) find it acceptable for an AI to treat people differently based on personal characteristics if it leads to improved outcomes.
  • Prioritizing Citizens: Public opinion is split on national interests: 44% believe an AI should prioritize the citizens of the country where it was developed, while 34% disagree.
  • Responsibility for Failure: When AI fails, the public assigns primary responsibility to the builders and developers rather than the users or government regulators.

Governance and Accountability

Builders at the Helm of Responsibility

Across all regions, participants express a strong desire for clear accountability and robust guardrails.

  • Stricter Rules: Approximately half of all participants believe AI should be subject to more rigorous regulation than ordinary apps or software.
  • Financial Recourse: Three-quarters of respondents (74%) believe that automatic refunds for AI-caused financial mistakes are "very important."
  • The Desire for Control: Nearly one-third of respondents want deeply personalized assistants but insist on the right to edit or delete all personal data and maintain strict consent controls.

Conclusion

The findings from this Global Dialogue describe a public defined by cautious pragmatism. People are ready to embrace the efficiencies of agentic AI, but only under recoverable conditions where human judgment, transparency, and recourse remain intact. They want agents that act with them, not just for them, ensuring that as technology accelerates, human agency is enhanced rather than eroded. These collective preferences will now serve as critical inputs for developers and policymakers as they shape the next wave of AI autonomy.

Download the data

Explore what the world is really saying about AI. Access our complete dataset to conduct your own analysis and contribute to the growing body of research on global AI perspectives.