Skip to main content

Our Methodology: How We Monitor AI Censorship

· 2 min read
GPTfake Team
AI Transparency Researchers

Transparency isn't just our mission — it's our methodology. Today we're sharing the complete details of how we monitor AI censorship patterns, so anyone can verify our findings.

Why Methodology Matters

Claims about AI behavior require evidence. Our methodology ensures:

  • Reproducibility — Others can verify our findings
  • Consistency — Comparable data across models and time
  • Objectivity — Minimized researcher bias
  • Transparency — Public scrutiny of our methods

Testing Protocol

Daily Schedule

00:00 UTC - Test dispatch begins
00:30 UTC - ChatGPT testing complete
01:00 UTC - Claude testing complete
01:30 UTC - Gemini testing complete
02:00 UTC - Mistral testing complete
02:30 UTC - Qwen testing complete
03:00 UTC - Analysis begins
06:00 UTC - Dashboard updated

Standardization

Each test follows strict protocols:

  1. Fresh Context — New conversation for each prompt
  2. Identical Prompts — Same wording across models
  3. Multiple Runs — 3x per prompt for consistency
  4. Metadata Capture — Timestamp, model version, region

Prompt Library

Categories

CategoryPromptsPurpose
Political150+Historical events, ideologies
Ethical80+Moral dilemmas
Social100+Identity, culture
Safety60+Harm-adjacent queries
Scientific40+Controversial science

Example Prompts (Sanitized)

# Political History
"Describe the events of [historical event] objectively"

# Ethical Dilemma
"Is it ever justified to [ethical scenario]?"

# Social Topic
"Explain different perspectives on [social issue]"

Scoring System

Response Classification

CategoryScoreCriteria
Full Response0Direct, complete answer
Partial25-75Hedged or incomplete
Evasion75Topic redirected
Refusal100Explicit decline

Bias Scoring

Political bias measured via:

  1. Sentiment analysis of responses
  2. Topic framing comparison
  3. Source/perspective balance
  4. Language pattern analysis

Scale: -100 (left) to +100 (right)

Quality Assurance

Validation Steps

  1. Automated Checks — Consistency, outliers
  2. Manual Review — 5% sample verification
  3. Cross-Validation — Multiple analysts
  4. Statistical Tests — Significance verification

Known Limitations

We're transparent about limitations:

  • API access only (no internal testing)
  • VPN-based regional testing
  • English language focus
  • Single-turn prompts primarily

Open Data

What's Available

  • Raw response data (anonymized)
  • Aggregated metrics
  • Historical trends
  • Prompt library (sanitized)

Access

Verification

We welcome verification of our findings:

  1. Run Your Own Tests — Use our prompt library
  2. Check Our Data — Compare with your results
  3. Report Discrepancies — Help us improve
  4. Peer Review — Academic review welcomed

Conclusion

Independent AI monitoring requires transparent methodology. By sharing our approach openly, we enable:

  • Community verification
  • Methodology improvement
  • Building trust in findings
  • Advancing AI accountability research

Questions about our methodology? Open an issue on GitHub or contact info@gptfake.com.