GPTfake - Independent AI Censorship Watchdog
An independent, automated monitoring platform that systematically tracks how major AI models handle sensitive, controversial, and ethically complex topics.
Our Mission
GPTfake reveals what AI companies don't publicly announce: how their models censor content, shift policies, and exhibit biases over time. We provide transparent, evidence-based analysis that researchers, journalists, and the public can verify.
What We Monitor
- Censorship Patterns - When and how models refuse to answer
- Policy Changes - Silent updates to content moderation
- Political Bias - Left/right leaning in responses
- Regional Variations - Different responses by geography
- Model Drift - How behavior changes between versions
Models We Track
Currently Monitored
| Model | Company | Censorship Rate | Political Bias |
|---|---|---|---|
| ChatGPT | OpenAI | 18.7% | Left-leaning (-12) |
| Claude | Anthropic | 22.4% | Slight left (-8) |
| Gemini | Google | 19.8% | Left-leaning (-15) |
| Mistral | Mistral AI | 11.2% | Neutral (+3) |
| Qwen | Alibaba | 24.6% | Right-leaning (+25) |
Data updated daily. Bias scale: -100 (far left) to +100 (far right).
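A censorship rate like those in the table above is simply the share of test prompts a model refuses. The sketch below shows the idea; the keyword-based refusal detector and field names are illustrative assumptions, not GPTfake's actual classifier.

```python
# Illustrative sketch: deriving a censorship rate from captured responses.
# A production system would use a trained refusal classifier, not keywords.

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i'm unable to")

def is_refusal(response: str) -> bool:
    """Naive keyword check for a refusal (assumed heuristic)."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def censorship_rate(responses: list[str]) -> float:
    """Percentage of responses flagged as refusals."""
    if not responses:
        return 0.0
    refused = sum(is_refusal(r) for r in responses)
    return round(100 * refused / len(responses), 1)
```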
How It Works
Daily Monitoring Cycle
00:00 UTC - Prompt dispatch begins
00:30 UTC - ChatGPT testing complete
01:00 UTC - Claude testing complete
01:30 UTC - Gemini testing complete
02:00 UTC - Mistral testing complete
02:30 UTC - Qwen testing complete
03:00 UTC - Analysis pipeline begins
06:00 UTC - Dashboard updated
Testing Protocol
- Fresh Context - New conversation for each prompt
- Identical Prompts - Same wording across all models
- Multiple Runs - 3x per prompt for consistency
- Metadata Capture - Timestamp, model version, region
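The protocol above can be sketched as a nested loop: identical prompts across all models, three runs each, a fresh conversation per call, and metadata recorded with every response. `query_model` is a hypothetical placeholder; real runs would go through each provider's SDK.

```python
# Sketch of the daily testing protocol. `query_model` is a hypothetical
# stand-in for per-provider API clients.
import datetime

RUNS_PER_PROMPT = 3  # "Multiple Runs - 3x per prompt"

def query_model(model: str, prompt: str) -> str:
    # Placeholder call: each invocation opens a fresh conversation,
    # so no context leaks between prompts.
    return f"[{model} response to: {prompt}]"

def run_protocol(models: list[str], prompts: list[str]) -> list[dict]:
    """Run every prompt against every model, capturing metadata per response."""
    records = []
    for model in models:
        for prompt in prompts:  # identical wording across all models
            for run in range(RUNS_PER_PROMPT):
                response = query_model(model, prompt)  # fresh context each call
                records.append({
                    "model": model,
                    "prompt": prompt,
                    "run": run,
                    "response": response,
                    "timestamp": datetime.datetime.now(
                        datetime.timezone.utc
                    ).isoformat(),
                })
    return records
```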
Analysis Pipeline
Prompts → API Calls → Response Capture → Analysis → Dashboard

- Prompts: 400+ test prompts
- API Calls: 5 models daily
- Response Capture: full logging + metadata
- Analysis: bias/censorship detection
- Dashboard: public data
Key Features
Independent Monitoring
- No Corporate Funding - Fully independent research
- Open Methodology - All methods publicly documented
- Reproducible Results - Anyone can verify our findings
- Transparent Data - Public access to anonymized datasets
Bias Detection
- Political Spectrum - Left-right bias scoring (-100 to +100)
- Topic Analysis - Category-specific bias patterns
- Cross-Model Comparison - How biases differ between providers
- Temporal Tracking - How bias changes over time
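A model's position on the -100 to +100 spectrum can be thought of as an aggregate of per-response bias scores. The sketch below assumes such a per-response scorer exists upstream; the label thresholds are illustrative, chosen only to match the wording used in the table above.

```python
# Sketch: aggregating per-response bias scores onto the -100..+100 scale.
# The per-response scorer and the label thresholds are assumptions.

def aggregate_bias(scores: list[float]) -> float:
    """Mean of per-response scores, clamped to the published scale."""
    if not scores:
        return 0.0
    mean = sum(scores) / len(scores)
    return max(-100.0, min(100.0, round(mean, 1)))

def label(score: float) -> str:
    """Readable label matching the table's wording (thresholds assumed)."""
    if score <= -10:
        return "Left-leaning"
    if score >= 10:
        return "Right-leaning"
    return "Slight left" if score < 0 else "Neutral"
```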
Policy Tracking
- Change Detection - Automatic alerts for policy shifts
- Historical Archive - Complete response history
- Version Tracking - Model update impact analysis
- Trend Analysis - Long-term behavior patterns
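Change detection can be implemented by fingerprinting each archived response and flagging prompts whose fingerprint shifts between runs. This is a minimal sketch under assumed storage conventions, not GPTfake's actual detector.

```python
# Sketch: flag prompts whose response changed since the archived baseline.
# The archive layout (prompt ID -> fingerprint) is an assumption.
import hashlib

def fingerprint(response: str) -> str:
    """Stable hash of a whitespace/case-normalized response."""
    normalized = " ".join(response.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def detect_changes(archive: dict[str, str], today: dict[str, str]) -> list[str]:
    """Return prompt IDs whose response fingerprint differs from the archive."""
    return [
        pid for pid, resp in today.items()
        if pid in archive and archive[pid] != fingerprint(resp)
    ]
```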
Why GPTfake Matters
AI Shapes Information Access
AI systems increasingly filter what information billions of people can access. Understanding how they censor content is essential for:
- Researchers studying AI ethics and behavior
- Journalists investigating AI companies
- Policymakers developing AI regulations
- Citizens understanding AI limitations
Hidden Censorship is Real
Our research reveals:
- 60% of policy changes are not publicly announced
- Regional differences of up to 45% in censorship rates
- Rising restriction levels - 15% increase in refusals over 2024
- Political bias varies significantly between providers
Getting Started
For Researchers
- Access Data - API Documentation
- Download Datasets - Research Data
- Review Methodology - Our Methods
- Collaborate - info@gptfake.com
For Journalists
- Explore Findings - Monitoring Dashboard
- Read Analysis - Research Reports
- Request Commentary - info@gptfake.com
- Cite Our Data - Attribution guidelines
For Developers
- API Access - Get API Key
- SDKs - Python, JavaScript libraries
- Webhooks - Real-time policy alerts
- Open Source - GitHub Tools
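A query against the monitoring API might look like the sketch below. The base URL, endpoint name, parameters, and auth header are all assumptions for illustration; consult the API Reference for the actual contract.

```python
# Hypothetical request construction for the GPTfake API. The endpoint,
# query parameters, and Bearer-token auth scheme are assumed, not documented.
import urllib.parse

BASE_URL = "https://api.gptfake.com/v1"  # assumed base URL

def build_request(endpoint: str, api_key: str, **params) -> tuple[str, dict]:
    """Compose the URL and headers for a monitoring-data query."""
    query = urllib.parse.urlencode(params)
    url = f"{BASE_URL}/{endpoint}" + (f"?{query}" if query else "")
    headers = {"Authorization": f"Bearer {api_key}"}
    return url, headers
```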
Resources
Documentation
- Monitoring Overview - How we track AI models
- Research - Academic publications and datasets
- API Reference - Developer documentation
- Tools - Open source analysis tools
Support
- Email: info@gptfake.com
- GitHub: github.com/gptfake
- Twitter: @gptfake
Join the AI transparency movement. Access our monitoring data or contact us for research collaboration.