Methodology
The Global Dialogues initiative is a recurring global survey designed to elicit public perspectives and opinions on artificial intelligence (AI) from around the world, along with data on AI usage. Each cadence consists of an online collective-dialogue session (15–60 minutes) hosted on the Remesh.ai platform. Participants recruited through Prolific deliberate on structured prompts, submit their own responses, and evaluate peer statements, generating both quantitative and qualitative evidence of agreement.
Sampling Frame and Recruitment
- Target population: Adults (18+) with internet access across multiple world regions, stratified by region, gender, and age
- Recruitment platform: Prolific research panel
- Sampling method: Quota-based sampling on age, gender, and region to approximate global population distributions (see the quota-allocation sketch after this list)
- Language coverage: Arabic, Chinese, English, French, Hindi, Portuguese, Russian, Spanish
- Under-represented groups: Where local panel supply is limited, carefully screened expatriates—who spent formative years in, and maintain cultural ties to, the target region—are included; any resulting biases are noted in analysis.
- Quality controls: Prolific account verification at intake
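As a rough illustration of how regional and gender quotas might be allocated, the sketch below converts benchmark population shares into per-cell recruitment targets for a given sample size. The shares, cell structure, and rounding rule are illustrative assumptions, not the actual quota scheme used on Prolific.

```python
import math

# Hypothetical benchmark shares for region x gender cells (sum to 1).
# Real quotas would use published global population distributions.
benchmark_shares = {
    ("Africa",   "female"): 0.09, ("Africa",   "male"): 0.09,
    ("Asia",     "female"): 0.29, ("Asia",     "male"): 0.30,
    ("Europe",   "female"): 0.05, ("Europe",   "male"): 0.05,
    ("Americas", "female"): 0.07, ("Americas", "male"): 0.06,
}

target_n = 1000  # illustrative cadence sample size

# Allocate quotas proportionally, rounding up so small cells are not dropped.
quotas = {cell: math.ceil(share * target_n)
          for cell, share in benchmark_shares.items()}

for (region, gender), quota in sorted(quotas.items()):
    print(f"{region:<9} {gender:<7} {quota}")
```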
Questionnaire Development
Survey content is organized into three streams:
- Global Pulse (Longitudinal Indicators)
  - Purpose: Track shifts in core attitudes and experiences related to AI
  - Typical items: Repeated closed-ended questions
- Research
  - Purpose: Test hypotheses posed by CIP or collaborators
  - Typical items: Scenario vignettes, open-ended prompts
- Partner
  - Purpose: Address evidence needs of external organisations working on AI
  - Typical items: Scenario vignettes, custom values or benchmarking questions
Questions are drafted in English, translated into all study languages, and back-translated for accuracy where feasible.
Collective Dialogue Process
Each cadence follows a structured session on Remesh.ai:
- Demographic Intake
  - Purpose: Record respondent characteristics for weighting and subgroup analysis
  - Operational details: Participants complete a brief poll on age, gender, country, urbanicity, religion, and (where applicable) political affiliation
- Survey & Deliberation
  - Purpose: Provide common context and elicit informed views
  - Operational details:
    - Moderator introduces key concepts and framing materials
    - Participants consider future-oriented scenarios
    - Respondents submit free-text or choice-based answers
    - Participants evaluate a random subset of peer statements via approval/disapproval votes and pair-wise preferences
    - Inference algorithms predict missing votes to complete the agreement matrices (see the sketch after this list)
    - Real-time analytics surface high cross-demographic “consensus statements” for group reflection
- Resulting Data
  - Raw responses
  - Observed and imputed peer-evaluation votes
  - Agreement metrics for every demographic slice
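A minimal sketch of how observed approval votes can be arranged into a participant-by-statement matrix and the missing entries filled in. The column names and the simple mean-imputation step are illustrative assumptions; the production pipeline uses a trained inference model rather than this placeholder.

```python
import pandas as pd

# Hypothetical long-format vote records: one row per observed vote.
votes = pd.DataFrame({
    "participant_id": ["p1", "p1", "p2", "p2", "p3"],
    "statement_id":   ["s1", "s2", "s1", "s3", "s2"],
    "approve":        [1, 0, 1, 1, 1],   # 1 = approve, 0 = disapprove
})

# Arrange observed votes into a participant x statement matrix.
matrix = votes.pivot(index="participant_id",
                     columns="statement_id",
                     values="approve")

# Placeholder imputation: fill each statement's missing votes with its
# observed approval rate, yielding a complete agreement matrix.
completed = matrix.fillna(matrix.mean(axis=0))

print(completed.round(2))
```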
Data Quality Procedures
Built-in Prolific safeguards are applied, including duplicate-IP blocking, bot detection, and prior-approval thresholds.
Global Representativeness Index (GRI)
For each cadence, we report a GRI that rescales the population-weighted average absolute deviation (WAAD) onto a 0–100% scale, where 100% indicates a perfectly proportional sample. A GRI of 90% means the sample deviates by 10 percentage points, on average, from global demographic benchmarks (a rough worked example follows below). Tracking GRI over time lets users judge the representativeness of successive cadences.
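As a rough worked example, the sketch below computes a GRI-style score from sample and benchmark demographic shares, assuming GRI = 100% minus the WAAD expressed in percentage points, with benchmark shares used as the population weights. The share values, and the exact weighting and category scheme, are illustrative assumptions rather than the published calculation.

```python
# Hypothetical regional shares; real benchmarks come from global population data.
benchmark = {"Africa": 0.18, "Asia": 0.59, "Europe": 0.10,
             "Latin America": 0.08, "Northern America": 0.05}
sample    = {"Africa": 0.10, "Asia": 0.45, "Europe": 0.20,
             "Latin America": 0.15, "Northern America": 0.10}

# Population-weighted average absolute deviation, in percentage points.
waad = 100 * sum(benchmark[g] * abs(sample[g] - benchmark[g]) for g in benchmark)

# Assumed rescaling: 100% = perfectly proportional sample.
gri = 100 - waad
print(f"WAAD = {waad:.1f} percentage points, GRI = {gri:.1f}%")
```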
Analytical Framework
- Bridging agreement (the minimum agreement rate across all demographic groups) flags statements that achieve cross-group consensus; a sketch follows this list.
- A machine-learning pipeline imputes agreement scores for statement-voter pairs not seen during live voting, enabling full consensus and segmentation analyses.
- Outputs cover global majority views, regional variation, and demographic segmentation.
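A minimal sketch of the bridging-agreement calculation, assuming per-group agreement is simply the approval rate within each demographic group over the completed vote matrix; the grouping variable and the 0.6 threshold are illustrative.

```python
import pandas as pd

# Hypothetical completed votes (observed + imputed), one row per
# participant-statement pair, with a demographic attribute per row.
votes = pd.DataFrame({
    "statement_id": ["s1", "s1", "s1", "s1", "s2", "s2", "s2", "s2"],
    "region":       ["Africa", "Asia", "Africa", "Asia"] * 2,
    "approve":      [1, 1, 1, 1, 1, 0, 0, 0],
})

# Agreement rate for each statement within each demographic group.
group_agreement = (votes.groupby(["statement_id", "region"])["approve"]
                        .mean()
                        .unstack("region"))

# Bridging agreement = minimum agreement across all groups.
bridging = group_agreement.min(axis=1)

# Flag statements whose bridging agreement clears an illustrative threshold.
consensus = bridging[bridging >= 0.6]
print(consensus)
```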
Public Release & Transparency
All data and analysis notebooks are hosted on globaldialogues.ai. After every cadence, cleaned data and code required to replicate findings are released under an open-source license.
Limitations
- Coverage error: Online-only recruitment excludes individuals without reliable internet access.
- Expatriate proxies: Screened expatriates may attenuate local nuance despite eligibility criteria.
- Non-probability panel: Because Prolific is opt-in, classical sampling-error margins are not calculated; uncertainty is conveyed through design-adjusted or Bayesian intervals (see the sketch after this list).
- Translation drift: Automated and human translation may leave residual semantic differences across languages.
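As one example of how uncertainty can be conveyed without classical sampling-error margins, the sketch below reports an equal-tailed 95% Bayesian credible interval for an approval proportion under a uniform Beta(1, 1) prior. The counts are hypothetical, and this is one illustrative interval method, not necessarily the design-adjusted procedure used in published analyses.

```python
from scipy.stats import beta

# Hypothetical cell: 620 of 1,000 respondents approve of a statement.
approvals, n = 620, 1000

# With a uniform Beta(1, 1) prior, the posterior for the true approval
# rate is Beta(approvals + 1, n - approvals + 1).
posterior = beta(approvals + 1, n - approvals + 1)

# Equal-tailed 95% credible interval.
lower, upper = posterior.ppf([0.025, 0.975])
print(f"95% credible interval: [{lower:.3f}, {upper:.3f}]")
```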