Python Integration

Secure your Flask, Django, or FastAPI applications with real-time content analysis middleware.

1. Installation

Start by installing the SafeComms SDK into your project using pip.

$ pip install safecomms

2. Initialize Client

Next, initialize the SafeComms client with your API key. This creates a connection to our analysis engine.

app.py
from safecomms import SafeCommsClient
import os

# Initialize with your API Key
client = SafeCommsClient(
    api_key=os.environ.get("SAFECOMMS_API_KEY")
)
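The client reads the key from the environment, so export it in your shell before starting the app (the value below is a placeholder; use the key from your dashboard):

```shell
export SAFECOMMS_API_KEY="your-api-key"
```

Keeping the key out of source code also keeps it out of version control.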

3. Add Route Logic

Add the moderation logic to your route handler. This will scan the incoming request content before processing it further.

app.py
@app.route('/api/comments', methods=['POST'])
def create_comment():
    content = (request.get_json(silent=True) or {}).get('content', '')

    try:
        # Check content
        result = client.moderate_text(
            content=content,
            # pii=True # Enable PII detection (Starter Tier+)
        )

        if not result['isClean']:
            return jsonify(result), 400

        # Content is safe
        return jsonify({"success": True})

    except Exception:
        return jsonify({"error": "Moderation check failed"}), 500
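If you moderate several routes, the check can be factored into a decorator instead of repeating it in every handler. A minimal, framework-agnostic sketch, with a stub standing in for client.moderate_text (the stub's word list and response shape are illustrative assumptions, not part of the SDK):

```python
from functools import wraps

# Stub standing in for client.moderate_text; the real SDK call returns
# a dict with at least an "isClean" flag. (Hypothetical word list.)
def moderate_text(content):
    blocked = {"badword"}
    flagged = any(word in blocked for word in content.lower().split())
    return {"isClean": not flagged}

def moderated(handler):
    """Run moderation before the handler; short-circuit on flagged content."""
    @wraps(handler)
    def wrapper(content):
        result = moderate_text(content)
        if not result["isClean"]:
            # Mirror the route above: return the result with a 400 status
            return result, 400
        return handler(content), 200
    return wrapper

@moderated
def create_comment(content):
    # Only reached when moderation passes
    return {"success": True}
```

In a Flask app the wrapper would read the content from the request instead of taking it as an argument, but the control flow is the same.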

4. Verify & Test

Finally, verify your integration is working correctly by sending a test request.

Terminal
curl -X POST http://localhost:5000/api/comments \
  -H "Content-Type: application/json" \
  -d '{"content": "This is some sample text with profanity"}'
Expected Output (400 Bad Request)
{
  "id": "req_123abc",
  "isClean": false,
  "severity": "Critical",
  "categoryScores": {
    "profanity": 0.98,
    "toxicity": 0.85
  },
  "reason": "Content contains profanity"
}
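Given a response of that shape, your handler can branch or log on the per-category scores; for example, picking the highest-scoring category (the dict below simply mirrors the sample output above):

```python
# Sample moderation response, mirroring the expected output above
result = {
    "id": "req_123abc",
    "isClean": False,
    "severity": "Critical",
    "categoryScores": {"profanity": 0.98, "toxicity": 0.85},
    "reason": "Content contains profanity",
}

# Highest-scoring category, e.g. for structured logging or analytics
top_category = max(result["categoryScores"], key=result["categoryScores"].get)
print(top_category)  # profanity
```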

5. Complete Example

Here is the full application, ready to copy and paste.

app.py
from flask import Flask, request, jsonify
from safecomms import SafeCommsClient
import os

app = Flask(__name__)

# Initialize with your API Key
client = SafeCommsClient(
    api_key=os.environ.get("SAFECOMMS_API_KEY")
)

@app.route('/api/comments', methods=['POST'])
def create_comment():
    content = (request.get_json(silent=True) or {}).get('content', '')

    try:
        # 1. Check content
        result = client.moderate_text(
            content=content,
            # pii=True # Enable PII detection (Starter Tier+)
        )

        # 2. Act on result
        if not result['isClean']:
            return jsonify(result), 400

        # 3. Content is safe, proceed to save...
        # db.comments.insert_one({"content": content})
        
        return jsonify({"success": True})

    except Exception:
        app.logger.exception("Moderation check failed")
        return jsonify({"error": "Moderation check failed"}), 500

if __name__ == '__main__':
    app.run(port=5000)
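Note that the except branch above fails closed: if the moderation call itself errors, the comment is rejected with a 500. If availability matters more to you than strictness, you can invert that policy. A small hypothetical helper (check_content is not part of the SDK) showing both behaviors:

```python
def check_content(moderate, content, fail_open=False):
    """Wrap a moderation call and decide what happens when the service errors.

    fail_open=True lets content through on errors (availability over safety);
    fail_open=False rejects it, matching the fail-closed route above.
    """
    try:
        return moderate(content)["isClean"]
    except Exception:
        return fail_open
```

Fail-closed is the safer default for user-generated content; fail-open may suit internal tools where a dropped check is preferable to a blocked workflow.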

Configuration & Tuning

Need to adjust sensitivity or allow certain words? You don't need to change your code. Head to the dashboard to configure your moderation profile globally.