Enhance Content Moderation with NSFW Image Detection API

6 Jun 2025

In today's digital landscape, maintaining a safe and respectful online environment is paramount. The NSFW Image Detection And Classification API empowers developers to automatically detect and classify not safe for work (NSFW) content in images. By leveraging advanced AI technology, this API simplifies the process of content moderation, ensuring that platforms and applications can uphold community guidelines and protect users from inappropriate material.

Common use cases for this API include social media platforms, online marketplaces, and content-sharing sites where user-generated images are prevalent. By integrating this API, developers can swiftly analyze images, providing real-time feedback on content safety and maintaining a healthy user experience.

To get started, you'll need a Cognitive Actions API key and a basic understanding of making API calls.

Classify NSFW Image Content

The Classify NSFW Image Content action is designed to detect and classify NSFW content in images by identifying exposed body parts such as genitalia, breasts, and buttocks. This action is essential for content moderation, helping to automatically flag images that may violate community standards.

Input Requirements

To utilize this action, you must provide the following input:

  • Image URL: A valid HTTP or HTTPS link to the image you want to analyze.
  • Face Counts: Optional parameters for detected male and female faces in the image.
  • Body Part Counts: Optional counts for various body parts, both exposed and covered, such as feet, bellies, buttocks, and genitalia.
  • Detect Nipples: A boolean flag indicating whether to detect nipples in the image.

Example Input:

{
  "url": "https://images.unsplash.com/photo-1602911429311-1c56a6c42a81?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=687&q=80"
}
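Since the action only accepts HTTP or HTTPS links, it can be worth validating the URL client-side before spending an API call. Below is a minimal sketch; the `is_valid_image_url` helper is an illustrative convenience, not part of the API:

```python
from urllib.parse import urlparse

def is_valid_image_url(url: str) -> bool:
    """Return True if the URL is an absolute HTTP or HTTPS link."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(is_valid_image_url("https://example.com/photo.jpg"))  # True
print(is_valid_image_url("ftp://example.com/photo.jpg"))    # False
```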

Expected Output

The output will indicate whether the image is deemed unsafe and provide details about detected objects:

  • Unsafe: A boolean indicating if the image contains NSFW content.
  • Objects: An array of detected objects, including bounding boxes, labels for body parts, and confidence scores.

Example Output:

{
  "unsafe": false,
  "objects": [
    {
      "box": [299, 220, 427, 348],
      "label": "FACE_F",
      "score": 0.9658822417259216
    },
    {
      "box": [264, 599, 406, 778],
      "label": "EXPOSED_BELLY",
      "score": 0.7442929744720459
    },
    {
      "box": [291, 827, 353, 889],
      "label": "COVERED_GENITALIA_F",
      "score": 0.6845396161079407
    }
  ]
}
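In application code you will usually post-process this response, for instance by keeping only detections above a confidence threshold. A short sketch using the example output above (the 0.7 threshold is an arbitrary choice for illustration, not an API requirement):

```python
def filter_detections(result: dict, min_score: float = 0.7) -> list[dict]:
    """Return detected objects whose confidence meets the threshold."""
    return [obj for obj in result.get("objects", []) if obj["score"] >= min_score]

# The example output from the documentation above
example = {
    "unsafe": False,
    "objects": [
        {"box": [299, 220, 427, 348], "label": "FACE_F", "score": 0.9658822417259216},
        {"box": [264, 599, 406, 778], "label": "EXPOSED_BELLY", "score": 0.7442929744720459},
        {"box": [291, 827, 353, 889], "label": "COVERED_GENITALIA_F", "score": 0.6845396161079407},
    ],
}

for obj in filter_detections(example):
    print(obj["label"], round(obj["score"], 2))
# FACE_F 0.97
# EXPOSED_BELLY 0.74
```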

Use Cases for this Action

  • Social Media Platforms: Automatically filter out inappropriate images before they are displayed to users, enhancing community safety.
  • Online Marketplaces: Ensure that listings comply with content standards by analyzing product images for NSFW content.
  • Content Moderation Tools: Assist moderators by providing AI-generated assessments of images, allowing for quicker decision-making.
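In each of these scenarios the API result typically feeds a policy decision. Here is one hedged sketch of such a policy; the three-tier outcome and the rule of escalating any `EXPOSED_*` label to human review are illustrative choices, not behavior defined by the API:

```python
def moderation_decision(result: dict) -> str:
    """Map an API result to a simple allow / review / block decision."""
    if result.get("unsafe"):
        return "block"
    exposed = [o for o in result.get("objects", []) if o["label"].startswith("EXPOSED_")]
    if exposed:
        return "review"  # flag for a human moderator rather than auto-blocking
    return "allow"

print(moderation_decision({"unsafe": True, "objects": []}))  # block
print(moderation_decision({"unsafe": False,
                           "objects": [{"label": "EXPOSED_BELLY", "score": 0.74}]}))  # review
print(moderation_decision({"unsafe": False,
                           "objects": [{"label": "FACE_F", "score": 0.97}]}))  # allow
```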

```python
import requests
import json

# Replace with your actual Cognitive Actions API key and endpoint
# Ensure your environment securely handles the API key
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
# This endpoint URL is hypothetical and should be documented for users
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"

action_id = "d64a1c33-595b-4a89-8ec2-1e53c5fe75ff" # Action ID for: Classify NSFW Image Content

# Construct the exact input payload based on the action's requirements
# This example uses the predefined example_input for this action:
payload = {
  "url": "https://images.unsplash.com/photo-1602911429311-1c56a6c42a81?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=687&q=80"
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json",
    # Add any other required headers for the Cognitive Actions API
}

# Prepare the request body for the hypothetical execution endpoint
request_body = {
    "action_id": action_id,
    "inputs": payload
}

print(f"--- Calling Cognitive Action: {action_id} ---")
print(f"Endpoint: {COGNITIVE_ACTIONS_EXECUTE_URL}")
print(f"Action ID: {action_id}")
print("Payload being sent:")
print(json.dumps(request_body, indent=2))
print("------------------------------------------------")

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json=request_body
    )
    response.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully. Result:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body (non-JSON): {e.response.text}")
    print("------------------------------------------------")
```


In conclusion, the NSFW Image Detection And Classification API offers a robust solution for developers aiming to implement effective content moderation strategies. By automating the detection of NSFW content, applications can protect users and foster a safer online environment. As you consider integrating this API, think about how it can enhance user experiences on your platform and streamline your content moderation processes. Explore the potential of AI-driven content safety today!