Frequently Asked Questions
Common questions about GPTfake and our AI censorship monitoring platform.
General Questions
What is GPTfake?
GPTfake is an independent watchdog platform that monitors how major AI models (ChatGPT, Claude, Gemini, Mistral, Qwen) handle sensitive topics. We run daily automated tests and publish the results publicly.
Why does AI censorship matter?
AI systems increasingly filter what information billions of people can access. Understanding how they censor content is essential for researchers, journalists, policymakers, and anyone who uses these tools.
Is GPTfake affiliated with any AI company?
No. GPTfake is completely independent. We have no funding from or affiliation with OpenAI, Anthropic, Google, Mistral AI, Alibaba, or any other AI company.
How is GPTfake funded?
We operate as an independent research project. Our work is supported by research grants and community contributions.
Methodology
How do you test AI models?
We send identical prompts to multiple AI models daily using their public APIs. Each prompt is tested 3 times for consistency. We capture full responses and metadata (timestamp, model version, region).
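The loop described above can be sketched as follows. This is a minimal illustration, not GPTfake's actual harness: the model names and the send_prompt helper are hypothetical placeholders for real API calls.

```python
# Sketch of the daily test loop: each prompt goes to each model,
# repeated 3 times for consistency, with metadata captured per run.
# send_prompt and the model names are placeholders, not real APIs.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TestResult:
    model: str
    prompt: str
    response: str
    timestamp: str  # ISO-8601 UTC timestamp
    run: int        # 1-3, the consistency repeat index

def send_prompt(model: str, prompt: str) -> str:
    """Placeholder for a real API call to the given model."""
    return f"[{model}] response to: {prompt}"

def run_daily_tests(models, prompts, repeats=3):
    results = []
    for prompt in prompts:
        for model in models:
            for run in range(1, repeats + 1):
                results.append(TestResult(
                    model=model,
                    prompt=prompt,
                    response=send_prompt(model, prompt),
                    timestamp=datetime.now(timezone.utc).isoformat(),
                    run=run,
                ))
    return results

# 2 models x 1 prompt x 3 repeats = 6 captured results
results = run_daily_tests(["chatgpt", "claude"], ["What happened in 1989?"])
```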
What prompts do you use?
Our prompt library covers 400+ questions across five categories:
- Political (historical events, ideologies)
- Ethical (moral dilemmas)
- Social (identity, culture, religion)
- Safety (harm-adjacent queries)
- Scientific (controversial science)
How do you measure censorship?
We classify responses into categories:
- Full Response (0 points) - Direct, complete answer
- Partial (25-75 points) - Hedged or incomplete answer
- Evasion (75 points) - Topic redirected or question deflected
- Refusal (100 points) - Explicit decline to answer
Censorship rate = percentage of non-full responses.
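The rate calculation above can be expressed directly in code. A minimal sketch, assuming responses have already been classified into the four categories; the 50-point value for "partial" is an illustrative midpoint of the published 25-75 range.

```python
# Point values from the scoring scheme; "partial" actually spans
# 25-75 points, so 50 is used here as an illustrative midpoint.
SCORES = {"full": 0, "partial": 50, "evasion": 75, "refusal": 100}

def censorship_rate(classifications):
    """Percentage of responses that are not full answers."""
    non_full = sum(1 for c in classifications if c != "full")
    return 100 * non_full / len(classifications)

sample = ["full", "full", "partial", "refusal"]
# 2 of 4 responses are non-full, so the rate is 50.0
```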
How do you measure political bias?
We analyze response sentiment, topic framing, source balance, and language patterns. Scores range from -100 (far left) to +100 (far right), with 0 being neutral.
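One plausible way to combine the four signals above (sentiment, framing, source balance, language patterns) into a single score on the -100 to +100 scale is a clamped weighted average. The equal weights here are purely illustrative, not GPTfake's actual model.

```python
# Illustrative sketch only: combine four component signals, each
# already on the -100..+100 scale, with weights that sum to 1,
# then clamp to the published range. Weights are assumptions.
def bias_score(sentiment, framing, source_balance, language,
               weights=(0.25, 0.25, 0.25, 0.25)):
    components = (sentiment, framing, source_balance, language)
    score = sum(w * c for w, c in zip(weights, components))
    return max(-100.0, min(100.0, score))  # clamp to -100..+100
```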
Can I verify your results?
Yes. Our methodology is fully documented, and we publish our prompt library (sanitized). You can run your own tests and compare results.
Data Access
Is the data free?
Yes. Basic access to our monitoring data is free. We offer enhanced API access for high-volume users and researchers.
How do I access the API?
See our API Documentation for endpoints, authentication, and examples.
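A request might look like the sketch below. The endpoint path and authentication scheme shown are placeholders; consult the API Documentation for the real values.

```python
# Hypothetical authenticated request. The /v1/results/latest path
# and Bearer-token header are placeholders, not a documented API.
import urllib.request

def build_request(api_key: str, base_url: str = "https://api.gptfake.com"):
    return urllib.request.Request(
        f"{base_url}/v1/results/latest",  # placeholder endpoint
        headers={"Authorization": f"Bearer {api_key}"},
    )

# The built request would then be sent with urllib.request.urlopen(...)
# and the JSON body parsed from the response.
```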
Can I download datasets?
Yes. Research datasets are available for academic use. Contact info@gptfake.com for access.
What data formats do you support?
- JSON (API responses)
- CSV (bulk exports)
- Parquet (large datasets)
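The two plain-text formats can be loaded with the Python standard library alone, as the sketch below shows with inline sample data; Parquet files would typically be read with a third-party package such as pyarrow or pandas (pandas.read_parquet), not shown here.

```python
# Loading JSON and CSV exports with the standard library.
# The sample data below is illustrative, not real GPTfake output.
import csv
import io
import json

json_blob = '{"model": "claude", "score": 0}'
record = json.loads(json_blob)  # dict with typed values

csv_blob = "model,score\nclaude,0\nchatgpt,25\n"
rows = list(csv.DictReader(io.StringIO(csv_blob)))
# Note: csv.DictReader yields all fields as strings.
```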
Research Collaboration
How can I collaborate with GPTfake?
We welcome collaboration with researchers, journalists, and institutions. Contact info@gptfake.com with your proposal.
Can I use GPTfake data in my research?
Yes. Please cite us appropriately. See our Research page for citation guidelines.
Do you offer research grants?
We occasionally support research projects aligned with our mission. Contact us with your proposal.
Technical Questions
Which AI models do you monitor?
Currently: ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), Mistral (Mistral AI), Qwen (Alibaba). We plan to add more.
How often is data updated?
Daily. Our monitoring cycle runs every 24 hours starting at 00:00 UTC. Dashboard updates are complete by 06:00 UTC.
Do you monitor different regions?
Yes. We test from 15+ geographic regions using VPN endpoints to detect regional variation in responses.
What about model versions?
We track model version changes and analyze their impact on behavior. Historical data includes version metadata.
Privacy & Ethics
Do you store personal data?
No. Our testing uses only automated API calls with research prompts. We don't collect or store any user data.
Is your research ethical?
We follow established research ethics guidelines. Our methodology is designed to study AI systems, not to harm them or their users.
How do you handle sensitive content?
Our prompt library includes sensitive topics because understanding how AI handles them is the point. We don't publish raw prompts that could be misused.
Still Have Questions?
- Email: info@gptfake.com
- GitHub: github.com/gptfake
- Documentation: Browse our docs