.NET Integration

Secure your ASP.NET Core applications with real-time content analysis of incoming requests.

Prerequisites

.NET 6.0+ SDK
SafeComms API key (from your SafeComms dashboard)
Visual Studio or VS Code

1. Installation

Start by installing the SafeComms.Client NuGet package into your project.

Terminal
dotnet add package SafeComms.Client

2. Initialize Client

Next, register the SafeComms client in your dependency injection container. This sets up the connection to our analysis engine.

Program.cs
using SafeComms.Client;

var builder = WebApplication.CreateBuilder(args);

// Initialize with your API Key
var safeCommsClient = new SafeCommsClient(
    Environment.GetEnvironmentVariable("SAFECOMMS_API_KEY")
);

builder.Services.AddSingleton(safeCommsClient);
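
If you prefer ASP.NET Core's configuration system over a raw environment variable, the client can also be constructed from builder.Configuration. The sketch below assumes a "SafeComms:ApiKey" entry in appsettings.json or user secrets; the key name is just an example.

Program.cs
// Alternative (sketch): resolve the API key via configuration instead of an
// environment variable. "SafeComms:ApiKey" is an example key name.
var apiKey = builder.Configuration["SafeComms:ApiKey"]
    ?? throw new InvalidOperationException("SafeComms API key is not configured.");

builder.Services.AddSingleton(new SafeCommsClient(apiKey));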

3. Add Endpoint Logic

Add the moderation logic to your endpoint. (This step assumes the app has already been built with var app = builder.Build();, as shown in the complete example below.) The handler scans the incoming request content before processing it further.

Program.cs
app.MapPost("/api/comments", async (SafeCommsClient moderator, CommentRequest req) =>
{
    try
    {
        // Check content
        var result = await moderator.ModerateTextAsync(
            content: req.Content
            // pii: true // Enable PII detection (Starter Tier+)
        );

        if (!result.GetProperty("isClean").GetBoolean())
        {
            return Results.BadRequest(result);
        }

        // Content is safe
        return Results.Ok(new { success = true });
    }
    catch (Exception)
    {
        return Results.Problem("Moderation check failed");
    }
});

record CommentRequest(string Content);
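
If several endpoints need the same check, the call can be pulled into a small helper so each handler stays focused on its own logic. The helper below is a sketch, not part of the SafeComms.Client package; it only reuses the ModerateTextAsync call and isClean field shown above.

Moderation.cs
using Microsoft.AspNetCore.Http;
using SafeComms.Client;

// Hypothetical helper (not part of SafeComms.Client): returns null when the
// content is clean, otherwise a ready-made 400 response for the handler to return.
public static class Moderation
{
    public static async Task<IResult?> RejectIfFlaggedAsync(SafeCommsClient moderator, string content)
    {
        var result = await moderator.ModerateTextAsync(content: content);

        return result.GetProperty("isClean").GetBoolean()
            ? null
            : Results.BadRequest(result);
    }
}

Inside a handler this reads as: if (await Moderation.RejectIfFlaggedAsync(moderator, req.Content) is { } rejected) return rejected;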

4. Verify & Test

Verify that your integration is working by sending a test request with content that should be flagged.

Terminal
curl -X POST http://localhost:5000/api/comments \
  -H "Content-Type: application/json" \
  -d '{"content": "This is some sample text with profanity"}'
Expected Output (400 Bad Request)
{
  "id": "req_123abc",
  "isClean": false,
  "severity": "Critical",
  "categoryScores": {
    "profanity": 0.98,
    "toxicity": 0.85
  },
  "reason": "Content contains profanity"
}
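
The other fields in that payload can be read the same way isClean is read in the handler. The sketch below assumes the result behaves like a System.Text.Json JsonElement, as the GetProperty calls above suggest; it would run inside the handler, after the ModerateTextAsync call.

Program.cs
// Reading specific fields from the moderation result (sketch).
// Field names match the sample response above.
var severity = result.GetProperty("severity").GetString();   // e.g. "Critical"
var profanityScore = result.GetProperty("categoryScores")
    .GetProperty("profanity")
    .GetDouble();                                             // e.g. 0.98
var reason = result.GetProperty("reason").GetString();

Console.WriteLine($"{severity}: {reason} (profanity score {profanityScore:0.00})");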

5. Complete Example

Here is the complete Program.cs, ready to copy and paste.

Program.cs
using SafeComms.Client;
using System.Text.Json;

var builder = WebApplication.CreateBuilder(args);

// Initialize with your API Key
var safeCommsClient = new SafeCommsClient(
    Environment.GetEnvironmentVariable("SAFECOMMS_API_KEY") ?? "your-api-key"
);

builder.Services.AddSingleton(safeCommsClient);

var app = builder.Build();

app.UseHttpsRedirection();

// Endpoint to check comments before saving
app.MapPost("/api/comments", async (SafeCommsClient moderator, CommentRequest req) =>
{
    try
    {
        // 1. Check content
        var result = await moderator.ModerateTextAsync(
            content: req.Content
            // pii: true // Enable PII detection (Starter Tier+)
        );

        // 2. Act on result
        if (!result.GetProperty("isClean").GetBoolean())
        {
            return Results.BadRequest(result);
        }

        // 3. Content is safe, proceed to save...
        // await db.Comments.AddAsync(new Comment { Content = req.Content });
        
        return Results.Ok(new { success = true });
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
        return Results.Problem("Moderation check failed");
    }
});

app.Run();

record CommentRequest(string Content);

Configuration & Tuning

Need to adjust sensitivity or allow certain words? You don't need to change your code. Head to the dashboard to configure your moderation profile globally.