Generate Stunning Videos with chamuditha4/vidx Cognitive Actions

In the ever-evolving landscape of digital content creation, the ability to generate high-quality videos from text and existing video content is invaluable. The chamuditha4/vidx API offers powerful Cognitive Actions designed to streamline the video generation process, allowing developers to create visually engaging content with ease. By leveraging these pre-built actions, you can enhance your applications with video generation capabilities that are both efficient and dynamic.
Prerequisites
Before you start integrating the Cognitive Actions, ensure that you have the following prerequisites in place:
- API Key: You will need an API key to authenticate your requests to the Cognitive Actions platform.
- HTTP Client: Familiarity with your programming language's HTTP client (e.g., requests in Python) to make API calls.
Authentication typically involves passing your API key in the headers of your requests.
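As a sketch of what that looks like in practice (assuming a Bearer-token scheme; the exact header name may vary by deployment), the request headers might be built like this:

```python
# Hypothetical auth headers -- the Bearer scheme is an assumption; check your platform's docs.
API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"  # placeholder, not a real key

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```

These headers are then passed on every request, as shown in the full example later in this article.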
Cognitive Actions Overview
Generate Video from Text and Video
This action generates a video by combining a provided video with a text prompt, producing high-quality guided video outputs. You can refine the output using various parameters, including guidance scales and negative prompts.
- Category: Video Generation
Input
The input schema for this action requires the following fields:
- inputVideo (string, required): A URI pointing to the input video for depth extraction.
- prompt (string, optional): A descriptive text prompt for the desired output. Defaults to "a panda playing a guitar, on a boat, in the ocean, high quality".
- guidanceScale (number, optional): Determines how strictly the model adheres to the prompt (1-20, default is 2.5).
- negativePrompt (string, optional): Describes undesirable outcomes to help refine results.
- numInferenceSteps (integer, optional): Specifies the number of inference steps (1-100, default is 20).
Example Input:
```json
{
  "prompt": "panda playing a guitar",
  "inputVideo": "https://replicate.delivery/pbxt/LSJN0aPwz6gSbi6CFkUDtwxEmc7SHkd5a15ndidkzzWxg4by/animatediff-vid2vid-input-1.mp4",
  "guidanceScale": 2.5,
  "negativePrompt": "bad quality, worse quality",
  "numInferenceSteps": 20
}
```
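To illustrate how the required field and the documented defaults interact, here is a small helper that builds a payload and validates the documented ranges. The field names and defaults come from the schema above; the helper itself is hypothetical, not part of the API:

```python
# Documented defaults for the optional fields of this action (from the schema above).
DEFAULTS = {
    "prompt": "a panda playing a guitar, on a boat, in the ocean, high quality",
    "guidanceScale": 2.5,
    "numInferenceSteps": 20,
}

def build_input(input_video: str, **overrides) -> dict:
    """Build an input payload: inputVideo is required, other fields fall back to defaults."""
    merged = {**DEFAULTS, **overrides, "inputVideo": input_video}
    # Validate the documented ranges before sending anything over the wire.
    if not 1 <= merged["guidanceScale"] <= 20:
        raise ValueError("guidanceScale must be between 1 and 20")
    if not 1 <= merged["numInferenceSteps"] <= 100:
        raise ValueError("numInferenceSteps must be between 1 and 100")
    return merged

payload = build_input("https://example.com/input.mp4", prompt="panda playing a guitar")
```

Centralizing the defaults this way keeps client code from silently drifting out of the documented 1-20 and 1-100 ranges.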
Output
Upon successful execution, the action typically returns a URI pointing to the generated video.
Example Output:
https://assets.cognitiveactions.com/invocations/17428319-756f-4453-b01d-64fa84c28167/4b650ace-0058-43b2-935a-2fd96a43f09b.mp4
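Because the action returns a URI rather than raw video bytes, you will typically download the file yourself. Here is a minimal sketch (the helper names are illustrative; the download itself needs network access and the third-party requests library):

```python
import os
from urllib.parse import urlparse

def filename_from_uri(uri: str) -> str:
    """Derive a local filename from the last path segment of the returned URI."""
    name = os.path.basename(urlparse(uri).path)
    return name or "output.mp4"

def download_video(uri: str, dest=None) -> str:
    """Stream the generated video to disk and return the local path."""
    import requests  # third-party; pip install requests
    dest = dest or filename_from_uri(uri)
    with requests.get(uri, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
    return dest

example = "https://assets.cognitiveactions.com/invocations/17428319-756f-4453-b01d-64fa84c28167/4b650ace-0058-43b2-935a-2fd96a43f09b.mp4"
local_name = filename_from_uri(example)
```

Streaming with iter_content avoids loading a potentially large video entirely into memory.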
Conceptual Usage Example (Python)
Here's a conceptual Python code snippet to demonstrate how you might call this action:
```python
import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "03d34047-44cf-4d57-901b-9a81673a26f5"  # Action ID for Generate Video from Text and Video

# Construct the input payload based on the action's requirements
payload = {
    "prompt": "panda playing a guitar",
    "inputVideo": "https://replicate.delivery/pbxt/LSJN0aPwz6gSbi6CFkUDtwxEmc7SHkd5a15ndidkzzWxg4by/animatediff-vid2vid-input-1.mp4",
    "guidanceScale": 2.5,
    "negativePrompt": "bad quality, worse quality",
    "numInferenceSteps": 20
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}  # Hypothetical structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)
    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))
except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")
```
In this snippet, replace "YOUR_COGNITIVE_ACTIONS_API_KEY" with your actual API key. The payload is constructed based on the required input schema, and the response is processed to display the generated video URI.
Conclusion
The chamuditha4/vidx Cognitive Actions provide a powerful way to generate videos from text and existing video content, enabling developers to create engaging multimedia experiences. By integrating these actions into your applications, you can harness the potential of AI-driven video generation, enhancing user engagement and creativity.
As you explore these capabilities, consider experimenting with different prompts and video inputs to see the full range of possibilities. Happy coding!