Enhance Image Safety Classification with ShieldGemma 2

In today's digital landscape, ensuring the safety of visual content is essential for businesses and platforms that rely on user-generated imagery. The ShieldGemma 2 service offers Cognitive Actions that let developers classify image safety effectively. Its machine learning model assesses images against critical safety categories such as sexually explicit content, dangerous materials, and violence or gore. This capability not only improves the user experience but also helps maintain compliance with legal and ethical standards.
Imagine running a social media platform or an online marketplace where user-uploaded images can range from innocent to inappropriate. Automating the classification of these images saves time, reduces the risk of human error, and allows for swift moderation actions. By integrating ShieldGemma 2's Cognitive Actions into your application, you can streamline your content moderation processes and provide a safer environment for your users.
Prerequisites
To get started with ShieldGemma 2, you'll need a Cognitive Actions API key and a basic understanding of how to make API calls.
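API keys are best kept out of source code. A minimal sketch that reads the key from an environment variable (the variable name `COGNITIVE_ACTIONS_API_KEY` is an assumption here, not an official convention):

```python
import os

def load_api_key(env_var: str = "COGNITIVE_ACTIONS_API_KEY") -> str:
    """Read the API key from the environment, failing loudly if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before making API calls.")
    return key
```

This keeps the key out of version control and makes a missing configuration fail early with a clear message.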
Classify Image Safety Using ShieldGemma 2
The "Classify Image Safety Using ShieldGemma 2" action uses the ShieldGemma 2 model, which is built on the Gemma 3 4B IT checkpoint. The action classifies images for safety across key categories, returning safety labels aligned with your policies.
Input Requirements
To utilize this action, you must provide:
- Image: The URI of the image you want to assess for safety compliance. This is a required field.
- Policy Type: An optional parameter that allows you to specify the type of policy criteria—options include 'sexually_explicit', 'dangerous_content', and 'violence_gore'. If not specified, the default is 'sexually_explicit'.
Example Input:
```json
{
  "image": "https://replicate.delivery/pbxt/MgLT2q7vTwTunPqUq6MT9pOIDUB9vk9albRWqwiexH6Ny4c3/bee-1024.jpg",
  "policyType": "dangerous_content"
}
```
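Because `policyType` is optional and constrained to three documented values, it can help to validate it before sending a request. A small helper sketch (the field names follow the example input above; the helper itself is illustrative, not part of the API):

```python
# The three policy types documented for this action.
ALLOWED_POLICY_TYPES = {"sexually_explicit", "dangerous_content", "violence_gore"}

def build_input(image_uri: str, policy_type: str = "sexually_explicit") -> dict:
    """Build the action's input payload, enforcing the documented policy types.

    The default mirrors the action's own default of 'sexually_explicit'.
    """
    if policy_type not in ALLOWED_POLICY_TYPES:
        raise ValueError(f"Unknown policyType: {policy_type!r}")
    return {"image": image_uri, "policyType": policy_type}
```

Catching an invalid policy type client-side gives you an immediate, descriptive error instead of a round trip to the API.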
Expected Output
The output will include probabilities indicating the likelihood of the image falling into the specified safety categories. For instance, a response may look like this:
```json
{
  "probabilities": {
    "no": 0.9999998807907104,
    "yes": 1.1067029248579274e-7
  }
}
```
This output helps you understand whether the image is compliant with the selected policy type.
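In practice you usually need a binary moderation decision rather than raw probabilities. One way to threshold the response, where "yes" indicates a policy violation (the 0.5 threshold is an illustrative choice; tune it to your own moderation policy):

```python
def is_violation(probabilities: dict, threshold: float = 0.5) -> bool:
    """Return True when the model judges the image to violate the policy.

    `probabilities` is the "probabilities" object from the action's response,
    with "yes" meaning a policy violation and "no" meaning compliance.
    """
    return probabilities.get("yes", 0.0) >= threshold

# Using the example response from above: the "yes" score is vanishingly small,
# so the image is treated as compliant.
example = {"no": 0.9999998807907104, "yes": 1.1067029248579274e-7}
print(is_violation(example))
```

Lowering the threshold makes moderation stricter (more images flagged); raising it trades recall for fewer false positives.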
Use Cases for this Action
- Social Media Platforms: Automatically moderate user-uploaded images to filter out inappropriate content, ensuring a safe environment for users.
- E-commerce Sites: Assess product images for compliance with safety standards, preventing the display of harmful or offensive products.
- Content Sharing Applications: Quickly evaluate images before they are shared, maintaining community guidelines and enhancing user trust.
```python
import requests
import json

# Replace with your actual Cognitive Actions API key and endpoint.
# Ensure your environment handles the API key securely.
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
# This endpoint URL is hypothetical and should be documented for users.
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"

# Action ID for: Classify Image Safety Using ShieldGemma 2
action_id = "bfba34bc-d173-4d21-ac0f-a8d3e641fac8"

# Construct the input payload based on the action's requirements.
# This example uses the predefined example input for this action:
payload = {
    "image": "https://replicate.delivery/pbxt/MgLT2q7vTwTunPqUq6MT9pOIDUB9vk9albRWqwiexH6Ny4c3/bee-1024.jpg",
    "policyType": "dangerous_content"
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json",
    # Add any other required headers for the Cognitive Actions API.
}

# Prepare the request body for the hypothetical execution endpoint.
request_body = {
    "action_id": action_id,
    "inputs": payload
}

print(f"--- Calling Cognitive Action: {action_id} ---")
print(f"Endpoint: {COGNITIVE_ACTIONS_EXECUTE_URL}")
print("Payload being sent:")
print(json.dumps(request_body, indent=2))
print("------------------------------------------------")

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json=request_body,
        timeout=30,
    )
    response.raise_for_status()  # Raise an exception for 4xx or 5xx status codes
    result = response.json()
    print("Action executed successfully. Result:")
    print(json.dumps(result, indent=2))
except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError:  # response body was not valid JSON
            print(f"Response body (non-JSON): {e.response.text}")
print("------------------------------------------------")
```
Conclusion
The ShieldGemma 2 Cognitive Action for image safety classification offers developers a robust solution for moderating content effectively. By automating the classification process, you not only enhance the safety of your platform but also improve operational efficiency. Whether you're managing a social media site, an e-commerce platform, or any application involving user-generated content, integrating this action will help you maintain compliance and build a safer community. To get started, explore the API documentation and begin implementing this capability in your projects.