Security
GPTfake's security practices and data protection measures.
Security Overview
Our Commitment
As a transparency research platform, we practice what we preach: we apply the same transparency to our own security practices.
Security Principles
- Open Methodology - Our research methods are public
- Minimal Data Collection - We only collect what we need
- No User Tracking - We don't track individual users
- Open Source - Security through transparency
Data Protection
What We Collect
Research Data:
- AI model responses (anonymized)
- Prompt-response pairs
- Timestamps and metadata
- Aggregated metrics
API Users:
- Email address (for API keys)
- API usage statistics
- We never store payment data
What We Don't Collect
- Personal browsing data
- Tracking cookies
- Personal AI conversations
- Identifiable user data
API Security
Authentication
- API key authentication
- Keys can be regenerated anytime
- No stored passwords
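As a rough sketch of what key-based authentication looks like from the client side (the endpoint URL, header name, and key format here are illustrative assumptions, not documented values), a request simply carries the key as a bearer token:

```python
# Hypothetical sketch of calling the API with an API key.
# The URL and "Bearer" header scheme are assumptions for illustration;
# no password is ever sent or stored, only the revocable key.
import urllib.request

def build_request(url: str, api_key: str) -> urllib.request.Request:
    """Attach the API key as a bearer token on an HTTPS request."""
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_request("https://api.gptfake.com/v1/metrics", "gf_example_key")
print(req.get_header("Authorization"))
```

Because the key is the only credential, regenerating it immediately invalidates the old one without any password-reset flow.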
Rate Limiting
- Per-key rate limits
- Protection against abuse
- Fair usage enforcement
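Per-key rate limiting is commonly implemented as a token bucket: each key gets a bucket that refills at a steady rate and caps bursts at its capacity. This is a minimal sketch of that general technique, not our exact server-side implementation; the rates shown are placeholders:

```python
# Minimal token-bucket sketch for per-key rate limiting.
# Rates and capacities below are illustrative, not real limits.
import time
from collections import defaultdict

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling `rate` tokens/sec."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)      # start full
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refuse the request otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per API key gives each key an independent limit:
buckets = defaultdict(lambda: TokenBucket(rate=1.0, capacity=5))
accepted = buckets["gf_example_key"].allow()
```

A per-key map like `buckets` is what makes the limit fair: one abusive key exhausts only its own bucket and cannot starve other users.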
Encryption
- TLS 1.3 for all connections
- HTTPS required
- No plaintext data transmission
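On the client side, you can enforce the same floor yourself. This sketch (assuming your Python build and the server both support TLS 1.3) builds an SSL context that verifies certificates and refuses anything older than TLS 1.3:

```python
# Sketch: a client-side SSL context that rejects pre-TLS-1.3 connections.
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Certificate-verifying context with TLS 1.3 as the minimum version."""
    ctx = ssl.create_default_context()            # verifies certs + hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse legacy protocols
    return ctx

ctx = strict_client_context()
```

Passing this context to your HTTPS client guarantees the handshake either negotiates TLS 1.3 or fails outright, so data is never sent in plaintext or over a downgraded protocol.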
Infrastructure
Hosting
- Reputable cloud providers
- Regular security updates
- DDoS protection
- Automated backups
Monitoring
- Uptime monitoring
- Error tracking
- Anomaly detection
Responsible Disclosure
Reporting Security Issues
If you discover a security vulnerability:
- Email: info@gptfake.com
- Subject: "Security Disclosure"
- Include: Description, steps to reproduce, impact
Our Commitment
- We will respond within 48 hours
- We will not take legal action against good-faith researchers
- We will credit researchers who help us improve
Open Source Security
All our tools are open source:
- Repository: github.com/gptfake
- License: MIT
- Auditable: Anyone can review our code
Questions?
Contact us at info@gptfake.com with any security questions or concerns.