# SDKs & Libraries

Official GPTfake SDKs for accessing AI censorship monitoring data programmatically.
## Official SDKs

### Python SDK

#### Installation

```bash
pip install gptfake
```
#### Quick Start

```python
from gptfake import GPTfakeClient

client = GPTfakeClient(api_key="your-api-key")

# Get current metrics for a model
metrics = client.monitoring.get_metrics("chatgpt")
print(f"Censorship Rate: {metrics.censorship_rate}%")
print(f"Bias Score: {metrics.bias_score}")
print(f"Transparency: {metrics.transparency_score}/100")

# Compare multiple models
comparison = client.monitoring.compare_models(
    models=["chatgpt", "claude", "gemini"]
)
for model in comparison:
    print(f"{model.name}: {model.censorship_rate}% censorship")

# Get historical data
history = client.monitoring.get_history("chatgpt", days=30)
for day in history:
    print(f"{day.date}: {day.censorship_rate}%")
```
#### Features

- Async/await support with asyncio
- Pandas DataFrame integration
- Type hints included
- Comprehensive documentation
- Jupyter notebook friendly
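To make the DataFrame-friendly shape of history data concrete, here is a minimal, self-contained sketch that smooths history-style records before handing them to pandas or plotting them. The dict keys (`date`, `censorship_rate`) mirror the attribute names used in the Quick Start above and are assumptions, not a guaranteed wire format.

```python
# Sketch only: aggregate history-style records (assumed keys mirror the
# SDK attributes above) with a trailing-window rolling average.

def rolling_average(records, window=7):
    """Return (date, mean censorship_rate over the trailing window) pairs."""
    rates = [r["censorship_rate"] for r in records]
    out = []
    for i, record in enumerate(records):
        chunk = rates[max(0, i - window + 1): i + 1]
        out.append((record["date"], sum(chunk) / len(chunk)))
    return out

history = [
    {"date": "2024-01-01", "censorship_rate": 10.0},
    {"date": "2024-01-02", "censorship_rate": 12.0},
    {"date": "2024-01-03", "censorship_rate": 14.0},
]
print(rolling_average(history, window=2))
# [('2024-01-01', 10.0), ('2024-01-02', 11.0), ('2024-01-03', 13.0)]
```

The same list of tuples drops straight into `pandas.DataFrame(...)` if you prefer to smooth and plot there instead.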
### JavaScript SDK

#### Installation

```bash
npm install gptfake
```
#### Quick Start

```javascript
import { GPTfakeClient } from 'gptfake';

const client = new GPTfakeClient({
  apiKey: 'your-api-key'
});

// Get current metrics
const metrics = await client.monitoring.getMetrics('chatgpt');
console.log(`Censorship Rate: ${metrics.censorshipRate}%`);

// Compare models
const comparison = await client.monitoring.compareModels([
  'chatgpt', 'claude', 'gemini'
]);

// Get historical data
const history = await client.monitoring.getHistory('chatgpt', { days: 30 });
```
#### Features

- Full TypeScript support
- Promise-based API
- Browser and Node.js compatible
- Automatic retry logic
- Comprehensive error handling
## SDK Comparison

| Feature | Python | JavaScript |
|---|---|---|
| Installation | pip | npm |
| Type Support | Type hints | TypeScript |
| Async | asyncio | Promise |
| Pandas Integration | Yes | No |
| Browser Support | No | Yes |
| Documentation | Comprehensive | Comprehensive |
## Common Operations

### Get Model Metrics

```python
# Python
metrics = client.monitoring.get_metrics("chatgpt")
```

```javascript
// JavaScript
const metrics = await client.monitoring.getMetrics('chatgpt');
```
### Compare Models

```python
# Python
comparison = client.monitoring.compare_models([
    "chatgpt", "claude", "gemini", "mistral", "qwen"
])
```

```javascript
// JavaScript
const comparison = await client.monitoring.compareModels([
  'chatgpt', 'claude', 'gemini', 'mistral', 'qwen'
]);
```
### Get Historical Data

```python
# Python
history = client.monitoring.get_history("chatgpt", days=30)
```

```javascript
// JavaScript
const history = await client.monitoring.getHistory('chatgpt', { days: 30 });
```
### Export Data

```python
# Python
data = client.research.export_data(
    models=["chatgpt", "claude"],
    format="csv",
    days=90
)
```

```javascript
// JavaScript
const data = await client.research.exportData({
  models: ['chatgpt', 'claude'],
  format: 'csv',
  days: 90
});
```
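Once exported, the CSV payload can be processed with standard tooling. A minimal sketch, assuming the export call returns raw CSV text with `date`, `model`, and `censorship_rate` columns (both the return shape and the column names are assumptions; check the API reference for the actual format):

```python
import csv
import io

# Stand-in for the text an export call might return (assumed shape).
sample = (
    "date,model,censorship_rate\n"
    "2024-01-01,chatgpt,12.5\n"
    "2024-01-01,claude,8.0\n"
)

# Parse the CSV text and index censorship rates by model name.
rows = list(csv.DictReader(io.StringIO(sample)))
by_model = {row["model"]: float(row["censorship_rate"]) for row in rows}
print(by_model)  # {'chatgpt': 12.5, 'claude': 8.0}
```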
## Error Handling

```python
# Python
from gptfake.exceptions import (
    AuthenticationError,
    RateLimitError,
    ModelNotFoundError
)

try:
    metrics = client.monitoring.get_metrics("invalid-model")
except AuthenticationError:
    print("Check your API key")
except RateLimitError:
    print("Too many requests, please wait")
except ModelNotFoundError:
    print("Model not found")
```

```javascript
// JavaScript
try {
  const metrics = await client.monitoring.getMetrics('invalid-model');
} catch (error) {
  if (error.code === 'AUTHENTICATION_ERROR') {
    console.log('Check your API key');
  } else if (error.code === 'RATE_LIMIT_ERROR') {
    console.log('Too many requests');
  }
}
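The SDK feature lists above advertise automatic retry logic; if you need retries around your own call sites, the usual pattern is exponential backoff on rate-limit errors. This is a minimal self-contained sketch of that pattern, not the SDK's internal implementation; the `RateLimitError` stub stands in for `gptfake.exceptions.RateLimitError` so the example runs on its own.

```python
import time

class RateLimitError(Exception):
    """Stub standing in for gptfake.exceptions.RateLimitError."""

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn, retrying on RateLimitError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)

# Demo: a flaky call that is rate-limited twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("slow down")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # ok
```

Usage with the real client would look like `with_retries(lambda: client.monitoring.get_metrics("chatgpt"))`.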
## CLI Tool

In addition to the SDKs, we offer a command-line interface:

```bash
pip install gptfake-cli
```

```bash
# Get metrics
gptfake metrics chatgpt

# Compare models
gptfake compare chatgpt claude gemini

# Export data
gptfake export --format csv --days 30 --output data.csv
```
## Contributing

Our SDKs are open source:

- Python SDK: github.com/gptfake/gptfake-python
- JavaScript SDK: github.com/gptfake/gptfake-js

We welcome contributions, bug reports, and feature requests.
## Support

- Documentation: docs.gptfake.com
- API Reference: API Documentation
- GitHub Issues: report bugs and request features
- Email: info@gptfake.com