Detect NSFW Content in Videos with lucataco/nsfw_video_detection Actions

In today's digital landscape, ensuring content safety is paramount, especially when dealing with user-generated videos. The lucataco/nsfw_video_detection API provides developers with powerful Cognitive Actions designed to identify explicit or inappropriate content in videos. By utilizing advanced machine learning models, specifically a fine-tuned Vision Transformer (ViT), this API action simplifies content moderation efforts, allowing developers to focus on building robust applications while ensuring a safer viewing experience.
Prerequisites
Before you can start using the Cognitive Actions, you'll need to obtain an API key from the Cognitive Actions platform. This key will be used to authenticate requests made to the API. Typically, you would pass this API key in the headers of your HTTP requests. Here’s a conceptual overview of how authentication might look:
headers = {
    "Authorization": "Bearer YOUR_COGNITIVE_ACTIONS_API_KEY",
    "Content-Type": "application/json"
}
Cognitive Actions Overview
Detect NSFW Content in Video
The Detect NSFW Content in Video action leverages an extended NSFW detection model specifically designed for video content moderation. By analyzing the video, this action categorizes the content as either appropriate or explicit.
- Category: Content Moderation
Input
To invoke this action, you will need to provide the following input:
- video (required): A string representing the URL of the input video. This must be in a valid URI format.
- safetyTolerance (optional): An integer that controls the strictness of the moderation, ranging from 1 (most strict) to 6 (most permissive). The default value is 2.
Example Input
{
  "video": "https://replicate.delivery/pbxt/MeYrKketDRwUdUPYlHEsA0UlcD4eOlFegxJwvJzFuhL1en1O/falcon2.mp4",
  "safetyTolerance": 2
}
Output
The action typically returns a string indicating the content status of the video. For instance, the output might be:
- Output Example:
"normal"(indicating the content is appropriate)
Conceptual Usage Example (Python)
Here’s how you might structure a request to the Cognitive Actions execution endpoint using Python:
import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "ef3a9f19-b6ff-47c4-bfab-f7e8abed5e11"  # Action ID for Detect NSFW Content in Video

# Construct the input payload based on the action's requirements
payload = {
    "video": "https://replicate.delivery/pbxt/MeYrKketDRwUdUPYlHEsA0UlcD4eOlFegxJwvJzFuhL1en1O/falcon2.mp4",
    "safetyTolerance": 2
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}  # Hypothetical structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)
    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))
except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")
In this code snippet, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The payload variable contains the input required by the action, and action_id is the unique identifier for the "Detect NSFW Content in Video" action.
Conclusion
The lucataco/nsfw_video_detection Cognitive Actions provide a straightforward way to enhance your applications with advanced content moderation capabilities. By implementing the NSFW content detection action, you can effectively ensure user safety while maintaining a seamless user experience. As a next step, consider integrating this action into your content moderation workflow or exploring additional actions within the API to further enhance your application's capabilities.