Tutorials

Step-by-step tutorials for working with GPTfake AI censorship monitoring data.

Getting Started Tutorials

Your First API Call

What You'll Learn

  • Setting up API authentication
  • Making basic requests
  • Understanding response format
  • Handling common errors

Duration: 15 minutes

from gptfake import GPTfakeClient

client = GPTfakeClient(api_key="your-api-key")

# Get current metrics for ChatGPT
metrics = client.monitoring.get_metrics("chatgpt")
print(f"Censorship Rate: {metrics.censorship_rate}%")
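The tutorial also covers handling common errors such as rate limiting. The SDK's actual exception classes aren't shown above, so this sketch defines a hypothetical `RateLimitError` and a generic retry wrapper you could adapt once you know the real exception names:

```python
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for the SDK's rate-limit exception."""

def with_retries(call, attempts=3, backoff=1.0):
    """Retry a callable on rate-limit errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == attempts - 1:
                raise  # out of retries, surface the error
            time.sleep(backoff * 2 ** attempt)

# Example: a flaky call that succeeds on the second attempt
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 2:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

result = with_retries(flaky, backoff=0.01)
```

In real usage you would pass a lambda wrapping an SDK call, e.g. `with_retries(lambda: client.monitoring.get_metrics("chatgpt"))`.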

Understanding the Data

What You'll Learn

  • Censorship rate metrics
  • Bias score interpretation
  • Transparency scoring
  • Historical trends

Duration: 20 minutes
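As a preview of bias score interpretation, here is a minimal sketch. It assumes a symmetric -1.0 to 1.0 scale (negative = left lean, positive = right lean); the thresholds are illustrative bands, not GPTfake's official ones:

```python
def interpret_bias(score):
    """Classify a bias score on an assumed -1.0 .. 1.0 scale.
    Thresholds are illustrative, not GPTfake's documented bands."""
    if abs(score) < 0.1:
        return "roughly neutral"
    direction = "left" if score < 0 else "right"
    strength = "slight" if abs(score) < 0.3 else "pronounced"
    return f"{strength} {direction} lean"

print(interpret_bias(0.05))   # roughly neutral
print(interpret_bias(-0.45))  # pronounced left lean
```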

Data Analysis Tutorials

Comparing AI Models

What You'll Build

  • Cross-model comparison
  • Bias pattern visualization
  • Trend analysis charts

Technologies

  • Python with pandas
  • Matplotlib for visualization
  • GPTfake Python SDK

import pandas as pd
from gptfake import GPTfakeClient

client = GPTfakeClient(api_key="your-api-key")

# Get comparison data
models = ["chatgpt", "claude", "gemini", "mistral", "qwen"]
comparison = client.monitoring.compare_models(models)

# Convert to DataFrame
df = pd.DataFrame(comparison)
print(df[["model", "censorship_rate", "bias_score"]])
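Once the comparison is in a DataFrame, ranking models is a one-liner. The values below are made-up placeholders standing in for the API response, not real measurements:

```python
import pandas as pd

# Illustrative values in the shape compare_models() returns (assumed)
comparison = [
    {"model": "chatgpt", "censorship_rate": 12.4, "bias_score": -0.21},
    {"model": "claude",  "censorship_rate": 15.1, "bias_score": -0.08},
    {"model": "gemini",  "censorship_rate": 18.9, "bias_score": -0.35},
]

df = pd.DataFrame(comparison)

# Rank models from least to most censored
ranked = df.sort_values("censorship_rate").reset_index(drop=True)
print(ranked[["model", "censorship_rate"]])
```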

Historical Trend Analysis

What You'll Build

  • Time-series analysis
  • Policy change detection
  • Trend visualization

Duration: 45 minutes

# Get 30-day history
history = client.monitoring.get_history("chatgpt", days=30)

# Plot censorship rate over time
import matplotlib.pyplot as plt

dates = [h.date for h in history]
rates = [h.censorship_rate for h in history]

plt.plot(dates, rates)
plt.title("ChatGPT Censorship Rate (30 days)")
plt.xlabel("Date")
plt.ylabel("Censorship Rate (%)")
plt.show()
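For the policy change detection step, one simple approach is flagging days whose rate deviates sharply from the trailing average. This sketch uses a fabricated 10-day series with a jump on day 8 standing in for a policy change; the threshold is an assumption to tune:

```python
from statistics import mean, stdev

# Illustrative 10-day series; the jump at index 7 mimics a policy change
rates = [12.1, 12.4, 11.9, 12.3, 12.0, 12.2, 12.5, 19.8, 20.1, 19.9]

def detect_shifts(series, window=5, threshold=3.0):
    """Flag indices where a value deviates from the trailing-window
    mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        m, s = mean(trailing), stdev(trailing)
        if s > 0 and abs(series[i] - m) / s > threshold:
            flagged.append(i)
    return flagged

shifts = detect_shifts(rates)  # flags index 7, the day of the jump
```

Only the first day of the jump is flagged: once elevated values enter the trailing window, the new level reads as normal, which is the desired behavior for one-time policy changes.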

Regional Variation Analysis

What You'll Build

  • Geographic analysis
  • Regional comparison charts
  • Variation detection

Duration: 30 minutes
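A quick way to quantify regional variation is the coefficient of variation across per-region rates. The regional breakdown below is fabricated sample data in a shape the API might return, not real measurements:

```python
from statistics import mean, pstdev

# Illustrative censorship rates by region (percent); values are made up
regional = {"us": 12.4, "eu": 14.1, "asia": 21.7, "latam": 13.0}

def variation(rates):
    """Coefficient of variation: population std dev divided by mean.
    Higher values mean censorship differs more across regions."""
    values = list(rates.values())
    return pstdev(values) / mean(values)

cv = variation(regional)
highest = max(regional, key=regional.get)
print(f"variation: {cv:.2f}, highest region: {highest}")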

Research Tutorials

Building a Research Dataset

What You'll Build

  • Custom dataset export
  • Data filtering and cleaning
  • Statistical analysis
  • Academic citation format

Duration: 60 minutes
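The filtering-and-export step can be done with the standard library alone. This sketch assumes history records arrive as dicts with `date`, `model`, and `censorship_rate` keys (an assumption about the response shape) and writes a CSV ready for a stats package:

```python
import csv
import io

# Illustrative records in an assumed history-endpoint shape
records = [
    {"date": "2025-01-01", "model": "chatgpt", "censorship_rate": 12.4},
    {"date": "2025-01-02", "model": "chatgpt", "censorship_rate": 12.7},
    {"date": "2025-01-02", "model": "claude",  "censorship_rate": 15.1},
]

# Filter to one model before export
chatgpt_only = [r for r in records if r["model"] == "chatgpt"]

# Write CSV (swap io.StringIO for open("dataset.csv", "w") to save a file)
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["date", "model", "censorship_rate"])
writer.writeheader()
writer.writerows(chatgpt_only)
csv_text = buffer.getvalue()
```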

Bias Detection Methodology

What You'll Learn

  • How bias scores are calculated
  • Validating bias detection
  • Cross-referencing findings
  • Reporting methodology

Duration: 45 minutes
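The tutorial explains how bias scores are calculated. As a rough intuition pump, one common approach in the literature is a normalized difference in refusal rates across mirrored prompt pairs; this is an illustrative metric, not GPTfake's documented formula:

```python
def bias_score(refusals_left, refusals_right):
    """Illustrative bias metric: normalized difference in refusal counts
    between mirrored left-leaning and right-leaning prompts.
    -1.0 = only left prompts refused, 1.0 = only right prompts refused.
    A common approach, NOT GPTfake's documented formula."""
    total = refusals_left + refusals_right
    if total == 0:
        return 0.0  # no refusals at all: treat as neutral
    return (refusals_right - refusals_left) / total

score = bias_score(30, 18)  # more left-leaning prompts refused -> negative
```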

Integration Tutorials

Web Dashboard Integration

What You'll Build

  • Real-time dashboard
  • Interactive charts
  • Auto-updating metrics
  • Comparison views

Technologies

  • React.js frontend
  • Chart.js visualization
  • GPTfake JavaScript SDK

Duration: 90 minutes

Webhook Alerts Setup

What You'll Build

  • Policy change alerts
  • Custom notification rules
  • Integration with Slack/Discord
  • Email notifications

Duration: 30 minutes

# Set up webhook for policy changes
client.alerts.create_webhook(
    url="https://your-server.com/webhook",
    events=["policy_change", "censorship_spike"],
    models=["chatgpt", "claude"]
)
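On the receiving side, webhook payloads are commonly signed with HMAC-SHA256 so your server can reject forged requests. Whether GPTfake signs its webhooks, and under what header name, is an assumption here; the verification pattern itself is standard:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, signature: str, secret: str) -> bool:
    """Verify an HMAC-SHA256 webhook signature using a constant-time
    comparison. The signing scheme is assumed, not confirmed for GPTfake."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Simulate a signed delivery with a shared secret
body = b'{"event": "policy_change", "model": "chatgpt"}'
sig = hmac.new(b"shared-secret", body, hashlib.sha256).hexdigest()
ok = verify_signature(body, sig, "shared-secret")
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when comparing signatures.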

CLI Tool Usage

What You'll Learn

  • Installing the CLI
  • Basic commands
  • Export functionality
  • Scripting automation

# Install CLI
pip install gptfake-cli

# Get current metrics
gptfake metrics chatgpt

# Compare models
gptfake compare chatgpt claude gemini

# Export data
gptfake export --format csv --days 30 --output data.csv

Advanced Tutorials

Custom Analysis Pipeline

What You'll Build

  • Automated data collection
  • Custom analysis scripts
  • Reporting automation
  • Scheduled exports

Duration: 120 minutes

Machine Learning on Censorship Data

What You'll Build

  • Feature engineering
  • Pattern classification
  • Prediction models
  • Model evaluation

Duration: 150 minutes
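As a taste of the feature engineering step, a rate time series can be turned into (trailing-mean, delta) rows that a classifier can consume. The history values are fabricated; the window size is a tunable assumption:

```python
from statistics import mean

# Illustrative daily censorship-rate history for one model (made up)
history = [12.1, 12.4, 11.9, 12.3, 14.8, 15.2, 15.0]

def make_features(series, window=3):
    """Turn a rate series into (trailing-mean, delta) feature rows --
    a minimal feature-engineering step before classification."""
    rows = []
    for i in range(window, len(series)):
        trailing = mean(series[i - window:i])
        rows.append({"trailing_mean": round(trailing, 2),
                     "delta": round(series[i] - trailing, 2)})
    return rows

features = make_features(history)  # large deltas mark the mid-series jump
```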

Prerequisites

Basic Tutorials

  • Basic programming knowledge
  • Understanding of APIs
  • Familiarity with JSON

Intermediate Tutorials

  • Python or JavaScript experience
  • Data analysis basics
  • Understanding of statistics

Advanced Tutorials

  • Advanced programming skills
  • Machine learning basics
  • Research methodology knowledge

Ready to start? Choose a tutorial and begin analyzing AI censorship data.