About GPTfake

GPTfake is an independent, automated watchdog platform dedicated to monitoring and analyzing AI censorship patterns across major language models.

Our Mission

Making AI Behavior Visible, Understandable, and Accountable

We believe the public has a right to understand how AI systems filter, moderate, and potentially censor information. Our mission is to provide transparent, evidence-based analysis of how large language models respond to sensitive topics over time.

The Problem We're Solving

AI Censorship Transparency Gap

Today's LLMs are often fine-tuned to avoid controversial outputs, but their moderation boundaries are:

  • Opaque — Users don't know what's being filtered
  • Inconsistent — Rules change without notice
  • Politically influenced — Bias varies by region and topic
  • Unaccountable — No public oversight of content policies

What GPTfake Reveals

  • What AI models are willing to say — and not say
  • How their answers quietly shift over time
  • Whether models are converging toward certain narratives
  • Regional differences in censorship patterns

How It Works

Daily Automated Testing

We run systematic tests across multiple AI models every day:

  1. Prompt Dispatch — Send curated prompts to ChatGPT, Claude, Gemini, Mistral, and Qwen
  2. Response Logging — Capture full replies, refusals, and metadata
  3. Semantic Analysis — Measure similarity, detect tone shifts, identify evasion
  4. Change Detection — Flag policy shifts and behavioral changes
  5. Public Reporting — Publish findings transparently
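
The five steps above can be sketched as a single daily loop. This is a minimal illustration, not GPTfake's actual implementation: the function and variable names (query_model, PROMPTS, run_daily) are hypothetical, query_model stands in for a real provider API call, and surface-level string similarity is used here in place of real semantic analysis.

```python
# Hypothetical sketch of the daily monitoring loop; all names are
# illustrative, not GPTfake's real code or API.
from dataclasses import dataclass
from datetime import date
from difflib import SequenceMatcher

MODELS = ["chatgpt", "claude", "gemini", "mistral", "qwen"]
PROMPTS = ["Summarize the main criticisms of large-scale content moderation."]

@dataclass
class Record:
    day: date
    model: str
    prompt: str
    response: str
    refused: bool

def query_model(model: str, prompt: str) -> str:
    """Placeholder for a real provider API call (step 1: prompt dispatch)."""
    return f"[{model}] reply to: {prompt}"

def is_refusal(response: str) -> bool:
    # Crude keyword check; a production system would use a trained classifier.
    return any(k in response.lower() for k in ("i can't", "i cannot", "i won't"))

def drift(old: str, new: str) -> float:
    """0.0 = identical, 1.0 = entirely different (surface similarity only;
    a real pipeline would compare embeddings, not raw strings)."""
    return 1.0 - SequenceMatcher(None, old, new).ratio()

def run_daily(previous: dict[tuple[str, str], str]) -> list[Record]:
    """Steps 2-4: log responses, compare to yesterday's, flag changes."""
    records = []
    for model in MODELS:
        for prompt in PROMPTS:
            resp = query_model(model, prompt)
            records.append(Record(date.today(), model, prompt, resp,
                                  is_refusal(resp)))
            old = previous.get((model, prompt))
            if old is not None and drift(old, resp) > 0.3:
                print(f"change detected: {model} on {prompt!r}")
    return records
```

Persisting each day's records and diffing them against the previous run is what turns one-off spot checks into the longitudinal dataset described below.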

What We Monitor

  Category    | Examples                       | Focus
  Political   | Historical events, ideologies  | Censorship patterns
  Ethical     | Moral dilemmas, edge cases     | Reasoning consistency
  Social      | LGBTQ+, race, religion         | Cultural bias
  Scientific  | Controversial topics           | Factual accuracy
  Safety      | Harm-related queries           | Policy boundaries

Our Values

Independence

We are not affiliated with any AI company. Our funding comes from research grants and public donations.

Transparency

All our methodologies are public. Our data is available for independent verification.

Evidence-Based

We don't make claims without data. Every finding is backed by reproducible research.

Non-Partisan

We monitor bias across the political spectrum. Our goal is truth, not advocacy.

Who Uses GPTfake

Researchers

  • Longitudinal datasets for AI behavior studies
  • Comparative analysis across models
  • Peer-reviewed methodology

Journalists

  • Data-driven stories on AI censorship
  • Evidence for investigative reporting
  • Expert commentary

Policymakers

  • Evidence for AI regulation
  • Insights into governance challenges
  • Framework for accountability

The Public

  • Understanding AI limitations
  • Awareness of AI bias
  • Empowerment through information

Our Team

GPTfake is run by a distributed team of:

  • AI Researchers — Experts in NLP and machine learning
  • Data Scientists — Specialists in longitudinal analysis
  • Ethics Scholars — Focused on AI governance and policy
  • Engineers — Building scalable monitoring infrastructure

Recognition & Partners

Research Partners

  • Academic institutions worldwide
  • AI ethics organizations
  • Digital rights groups

Media Coverage

Our research has been cited by major publications covering AI ethics and transparency.

Get Involved

For Researchers

Access our datasets, collaborate on studies, or contribute to our methodology.

For Journalists

Use our data for reporting. Request expert commentary on AI issues.

For Developers

Contribute to our open-source tools. Build applications using our API.

For Everyone

Explore our public dashboard. Stay informed about AI transparency.


Join us in making AI accountable. Start exploring our monitoring data or learn about our research.