GPTfake - Independent AI Censorship Watchdog

GPTfake is an independent, automated monitoring platform that systematically tracks how major AI models handle sensitive, controversial, and ethically complex topics.

Our Mission

GPTfake reveals what AI companies don't publicly announce: how their models censor content, shift policies, and exhibit biases over time. We provide transparent, evidence-based analysis that researchers, journalists, and the public can verify.

What We Monitor

  • Censorship Patterns - When and how models refuse to answer
  • Policy Changes - Silent updates to content moderation
  • Political Bias - Left/right leaning in responses
  • Regional Variations - Different responses by geography
  • Model Drift - How behavior changes between versions

Models We Track

Currently Monitored

Model     Company      Censorship Rate   Political Bias
-------   ----------   ---------------   ------------------
ChatGPT   OpenAI       18.7%             Left-leaning (-12)
Claude    Anthropic    22.4%             Slight left (-8)
Gemini    Google       19.8%             Left-leaning (-15)
Mistral   Mistral AI   11.2%             Neutral (+3)
Qwen      Alibaba      24.6%             Right-leaning (+25)

Data updated daily. Bias scale: -100 (far left) to +100 (far right).

How It Works

Daily Monitoring Cycle

00:00 UTC - Prompt dispatch begins
00:30 UTC - ChatGPT testing complete
01:00 UTC - Claude testing complete
01:30 UTC - Gemini testing complete
02:00 UTC - Mistral testing complete
02:30 UTC - Qwen testing complete
03:00 UTC - Analysis pipeline begins
06:00 UTC - Dashboard updated
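The cycle above can be expressed as a simple UTC schedule. This is an illustrative sketch only; the task names mirror the timeline but are not GPTfake's actual job identifiers.

```python
from datetime import time

# Daily cycle as (scheduled UTC time, task name) pairs.
# Task names are hypothetical labels for the steps listed above.
SCHEDULE = [
    (time(0, 0), "dispatch_prompts"),
    (time(0, 30), "test_chatgpt"),
    (time(1, 0), "test_claude"),
    (time(1, 30), "test_gemini"),
    (time(2, 0), "test_mistral"),
    (time(2, 30), "test_qwen"),
    (time(3, 0), "run_analysis"),
    (time(6, 0), "update_dashboard"),
]

def tasks_completed_by(now: time) -> list[str]:
    """Return the cycle steps whose scheduled UTC time has already passed."""
    return [name for t, name in SCHEDULE if t <= now]
```

For example, at 01:15 UTC the dispatch and the ChatGPT and Claude runs are done, while the Gemini run has not yet started.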

Testing Protocol

  1. Fresh Context - New conversation for each prompt
  2. Identical Prompts - Same wording across all models
  3. Multiple Runs - 3x per prompt for consistency
  4. Metadata Capture - Timestamp, model version, region
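A harness implementing the four protocol steps might look like the following sketch. The `send` callable and the record field names are assumptions for illustration, not the production code.

```python
import datetime

def run_protocol(model: str, prompt: str, send, runs: int = 3) -> list[dict]:
    """Run one prompt against one model following the four protocol steps.
    `send` is a hypothetical callable that opens a fresh conversation,
    submits the prompt, and returns the model's reply as a string."""
    results = []
    for i in range(runs):                  # 3. Multiple Runs (3x default)
        response = send(prompt)            # 1. Fresh Context per call
        results.append({
            "model": model,                # 4. Metadata Capture
            "prompt": prompt,              # 2. Identical Prompts (same wording)
            "run": i + 1,
            "response": response,
            "timestamp": datetime.datetime.now(
                datetime.timezone.utc).isoformat(),
        })
    return results
```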

Analysis Pipeline

Prompts (400+) → API Calls (5 models daily) → Response Capture (full logging + metadata) → Analysis (bias/censorship detection) → Dashboard (public data)
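In code, the capture → analysis → publish stages can be sketched as composed functions. The refusal markers and record fields here are simplified stand-ins for the real detection logic.

```python
# Minimal sketch of the pipeline's later stages; stage bodies are
# illustrative placeholders, not GPTfake's actual implementation.
def capture(responses):
    """Response Capture: wrap raw replies in records for analysis."""
    return [{"text": r, "length": len(r)} for r in responses]

def analyze(records):
    """Analysis: flag refusals with a (toy) prefix-based detector."""
    refusal_markers = ("i can't", "i cannot", "i'm unable")
    for rec in records:
        rec["refused"] = rec["text"].lower().startswith(refusal_markers)
    return records

def publish(records):
    """Dashboard: aggregate records into public summary statistics."""
    refused = sum(r["refused"] for r in records)
    return {"total": len(records), "refused": refused,
            "refusal_rate": refused / len(records)}

def pipeline(responses):
    return publish(analyze(capture(responses)))
```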

Key Features

Independent Monitoring

  • No Corporate Funding - Fully independent research
  • Open Methodology - All methods publicly documented
  • Reproducible Results - Anyone can verify our findings
  • Transparent Data - Public access to anonymized datasets

Bias Detection

  • Political Spectrum - Left-right bias scoring (-100 to +100)
  • Topic Analysis - Category-specific bias patterns
  • Cross-Model Comparison - How biases differ between providers
  • Temporal Tracking - How bias changes over time
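The aggregation onto the -100 to +100 scale can be sketched as below. Scoring individual responses is assumed to happen upstream; this only shows the averaging and clamping step, with the function name chosen for illustration.

```python
def aggregate_bias(per_prompt_scores: list[float]) -> float:
    """Average per-prompt bias scores onto the -100 (far left) to
    +100 (far right) scale, clamping outliers to the scale's bounds."""
    if not per_prompt_scores:
        return 0.0  # no data: report neutral
    mean = sum(per_prompt_scores) / len(per_prompt_scores)
    return max(-100.0, min(100.0, mean))
```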

Policy Tracking

  • Change Detection - Automatic alerts for policy shifts
  • Historical Archive - Complete response history
  • Version Tracking - Model update impact analysis
  • Trend Analysis - Long-term behavior patterns
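Change detection over the historical archive can be sketched as a jump test on daily refusal rates. The 5-point threshold is an illustrative choice, not the platform's actual alerting rule.

```python
def detect_policy_shift(history: list[float], threshold: float = 5.0) -> bool:
    """Flag a possible silent policy change when the latest daily refusal
    rate (in percent) deviates more than `threshold` points from the mean
    of the preceding days."""
    if len(history) < 2:
        return False  # need a baseline to compare against
    *previous, latest = history
    baseline = sum(previous) / len(previous)
    return abs(latest - baseline) > threshold
```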

Why GPTfake Matters

AI Shapes Information Access

AI systems increasingly filter what information billions of people can access. Understanding how they censor content is essential for:

  • Researchers studying AI ethics and behavior
  • Journalists investigating AI companies
  • Policymakers developing AI regulations
  • Citizens understanding AI limitations

Hidden Censorship is Real

Our research reveals:

  • 60% of policy changes are not publicly announced
  • Regional differences of up to 45% in censorship rates
  • Rising restriction levels - 15% increase in refusals over 2024
  • Political bias varies significantly between providers

Getting Started

For Researchers

  1. Access Data - API Documentation
  2. Download Datasets - Research Data
  3. Review Methodology - Our Methods
  4. Collaborate - info@gptfake.com

For Journalists

  1. Explore Findings - Monitoring Dashboard
  2. Read Analysis - Research Reports
  3. Request Commentary - info@gptfake.com
  4. Cite Our Data - Attribution guidelines

For Developers

  1. API Access - Get API Key
  2. SDKs - Python, JavaScript libraries
  3. Webhooks - Real-time policy alerts
  4. Open Source - GitHub Tools
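A first API call might look like the following sketch using only the standard library. The endpoint path, base URL, and bearer-token auth scheme are assumptions for illustration; consult the API documentation for the real ones.

```python
import urllib.request

# Hypothetical base URL and endpoint path, shown for illustration only.
API_BASE = "https://api.gptfake.com/v1"

def build_request(path: str, api_key: str) -> urllib.request.Request:
    """Prepare an authenticated GET request (constructed, not sent here)."""
    return urllib.request.Request(
        f"{API_BASE}/{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_request("models/chatgpt/censorship-rate", "YOUR_API_KEY")
```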


Join the AI transparency movement. Access our monitoring data or contact us for research collaboration.