Streamline Your Video Processing with the OpenReplay Clipper Cognitive Actions

In the world of video processing, efficiency and flexibility are key. The OpenReplay Clipper Cognitive Actions, part of the nelsonjchen/op-replay-clipper suite, give developers tools for rendering and clipping openpilot route data from comma.ai devices. These actions use GPU acceleration for fast video decoding and encoding, and with several render types and customizable settings you can clip route recordings to meet your specific requirements.
Prerequisites
Before you dive into using the Cognitive Actions, ensure you have:
- An API key for the Cognitive Actions platform.
- Access to the necessary route data, which may require public access settings or a valid JWT token for non-public routes.
Authentication typically involves passing your API key in the request headers, allowing you to securely interact with the Cognitive Actions API.
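As a minimal sketch of that header-based authentication (the endpoint is hypothetical and the key a placeholder, not real credentials):

```python
# Placeholder -- substitute your real Cognitive Actions API key.
API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"

# The key travels as a bearer token in the Authorization header of every request.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```

The same `headers` dictionary is reused in the full invocation example later in this article.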
Cognitive Actions Overview
Render and Clip OpenPilot Route Data
This action enables GPU-accelerated rendering and clipping of comma.ai's OpenPilot route data. It supports various render types and file formats, allowing for efficient video processing tailored to specific needs.
Input
The input payload for this action looks like the following, shown with example values:

```json
{
  "notes": "",
  "route": "https://connect.comma.ai/a2a0ccea32023010/1690488131496/1690488151496",
  "metric": false,
  "jwtToken": "",
  "fileSizeMb": 25,
  "renderType": "ui",
  "smearFrames": 5,
  "renderSpeedRatio": 1,
  "startTimeSeconds": 50,
  "clipLengthSeconds": 20,
  "forwardOverlayPositionH": 2.2
}
```
- notes (string, optional): Personal reference notes; they do not affect the output.
- route (string, required): The route URL or route ID to process; ensure 'Public Access' is enabled on the route or supply a valid JWT token.
- metric (boolean, optional): Render the UI in metric units (km/h).
- jwtToken (string, optional): JWT token for routes without public access.
- fileFormat (string, optional): Output codec; one of 'auto', 'h264', or 'hevc' (not shown in the example above).
- fileSizeMb (integer, required): Approximate size of the output clip in MB (10-200).
- renderType (string, required): The render type to use (e.g., 'ui', 'forward', etc.).
- smearFrames (integer, required): Number of smear frames for the UI render (5-40).
- renderSpeedRatio (number, required): Render speed multiplier (0.1-7).
- startTimeSeconds (integer, required): Start time in seconds; applies when a bare route ID is given (ignored when a URL is provided).
- clipLengthSeconds (integer, required): Clip length in seconds for route ID input (5-300).
- forwardOverlayPositionH (number, required): Horizontal position of the overlay on wide renders.
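Since several of these fields have documented numeric ranges, it can be worth checking a payload client-side before submitting a job. The sketch below encodes those bounds; the function name and return shape are illustrative, not part of the API:

```python
def validate_clip_inputs(payload: dict) -> list[str]:
    """Check the documented numeric ranges; return a list of problems (empty if valid)."""
    # Bounds taken from the parameter descriptions above.
    bounds = {
        "fileSizeMb": (10, 200),
        "smearFrames": (5, 40),
        "renderSpeedRatio": (0.1, 7),
        "clipLengthSeconds": (5, 300),
    }
    errors = []
    for field, (lo, hi) in bounds.items():
        value = payload.get(field)
        if value is None:
            errors.append(f"{field} is required")
        elif not (lo <= value <= hi):
            errors.append(f"{field}={value} outside [{lo}, {hi}]")
    return errors

print(validate_clip_inputs({"fileSizeMb": 25, "smearFrames": 5,
                            "renderSpeedRatio": 1, "clipLengthSeconds": 20}))  # → []
```

Running this against the example payload earlier in the article returns an empty list, i.e. all required numeric fields are in range.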
Output
The action typically returns a URL to the processed video output, such as:
https://assets.cognitiveactions.com/invocations/59d67433-1071-4470-8c31-140cd5380065/dda615df-9e99-4c96-81b5-8ba9ab2e90b4.mp4
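Once you have that URL, saving the clip locally is a simple download. A standard-library sketch (the helper names are illustrative; the URL is the example above and will not resolve for you):

```python
import shutil
from pathlib import PurePosixPath
from urllib.parse import urlparse
from urllib.request import urlopen

def clip_filename(url: str) -> str:
    """Derive a local filename from the last path segment of the asset URL."""
    return PurePosixPath(urlparse(url).path).name

def download_clip(url: str, dest: str) -> None:
    """Stream the clip to disk so large files never sit fully in memory."""
    with urlopen(url) as resp, open(dest, "wb") as f:
        shutil.copyfileobj(resp, f)

url = ("https://assets.cognitiveactions.com/invocations/"
       "59d67433-1071-4470-8c31-140cd5380065/"
       "dda615df-9e99-4c96-81b5-8ba9ab2e90b4.mp4")
# download_clip(url, clip_filename(url))  # uncomment to fetch the file
```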
Conceptual Usage Example (Python)
Here’s how you can invoke the Render and Clip OpenPilot Route Data action using a hypothetical endpoint in Python:
```python
import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "716bd59c-86af-404a-8c93-ae8d4a58fd54"  # Action ID for Render and Clip OpenPilot Route Data

# Construct the input payload based on the action's requirements
payload = {
    "notes": "",
    "route": "https://connect.comma.ai/a2a0ccea32023010/1690488131496/1690488151496",
    "metric": False,  # Python booleans here, not JSON literals
    "jwtToken": "",
    "fileSizeMb": 25,
    "renderType": "ui",
    "smearFrames": 5,
    "renderSpeedRatio": 1,
    "startTimeSeconds": 50,
    "clipLengthSeconds": 20,
    "forwardOverlayPositionH": 2.2,
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json",
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload},  # Hypothetical request structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)
    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))
except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")
```
In this snippet, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The action_id is set to the ID of the Render and Clip action, and the payload matches the input fields described above (note that Python uses False where the JSON example uses false).
Conclusion
The OpenReplay Clipper Cognitive Actions offer a robust solution for developers seeking to integrate video processing functionalities into their applications. With customizable options for rendering and clipping OpenPilot route data, you can enhance user experiences and streamline your workflows. Explore further use cases and consider how these actions can fit into your projects!