Generate Stunning Long-Duration Videos with Cognitive Actions for Video Generation

23 Apr 2025

In the realm of video content creation, the ability to generate longer videos without the complexities of tuning can revolutionize how developers approach multimedia applications. The arthur-qiu/longercrafter Cognitive Actions provide an efficient solution for generating long-duration videos through a sophisticated noise rescheduling process. By leveraging these pre-built actions, developers can focus on creativity and storytelling, while the underlying technology handles the intricacies of video generation.

Prerequisites

Before diving into the integration of Cognitive Actions, ensure you have the following:

  • An API key for the Cognitive Actions platform, which you will use for authentication.
  • Basic knowledge of JSON structure for input and output data formats.

For authentication, you typically pass your API key in the headers of your HTTP requests. This allows you to securely access the Cognitive Actions services.

Cognitive Actions Overview

Generate Longer Videos with Noise Rescheduling

This action allows you to create long-duration videos through a diffusion process that reschedules noise without requiring extensive tuning. Utilizing FreeNoise, it supports the generation of videos with up to 512 frames.
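To build intuition for what "noise rescheduling" means, the sketch below illustrates the core FreeNoise idea in plain NumPy: instead of sampling fresh noise for every frame, the initial noise for one window of frames is re-used, with a local shuffle, to cover the full frame count, which helps keep content consistent across a long video. This is a conceptual illustration only, not the actual model code.

```python
import numpy as np

def reschedule_noise(window_noise: np.ndarray, frame_count: int,
                     rng: np.random.Generator) -> np.ndarray:
    """Extend per-frame noise beyond the initial window by repeating it
    with a local shuffle, in the spirit of FreeNoise noise rescheduling.

    window_noise: array of shape (window_size, ...) -- one noise tensor per frame.
    """
    window_size = window_noise.shape[0]
    frames = list(window_noise)
    while len(frames) < frame_count:
        # Re-use the same window of noise, but shuffle the frame order so
        # repeated segments stay correlated without being identical.
        order = rng.permutation(window_size)
        frames.extend(window_noise[i] for i in order)
    return np.stack(frames[:frame_count])

rng = np.random.default_rng(0)
base = rng.standard_normal((16, 4, 8, 8))   # 16 frames of latent noise
long_noise = reschedule_noise(base, 32, rng)
print(long_noise.shape)  # (32, 4, 8, 8)
```

Note how the first 16 frames keep the original noise untouched; only the extension is shuffled.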

Input

The input for this action is structured as follows:

  • seed (integer, optional): Random seed for reproducibility. Leave blank to randomize the seed.
  • prompt (string, required): A text prompt that guides the generation of the video content.
  • frameRate (integer, optional): Frames per second for the generated video. Default is 10 fps.
  • frameCount (integer, optional): Total number of frames to generate in the video. Default is 32 frames.
  • guidanceScale (number, optional): Scale for classifier-free guidance, influencing adherence to the prompt. Default value is 12.
  • denoisingSteps (integer, optional): Number of steps for the denoising process during video generation. Default is 50 steps.
  • viewWindowSize (integer, optional): Number of frames in each sliding window processed at a time. Default size is 16.
  • outputDimensions (string, optional): The resolution of the output video, with predefined sizes: "576x1024" or "256x256". Default is "576x1024".
  • viewWindowStride (integer, optional): Stride, in frames, between consecutive overlapping windows. Default stride is 4.

Example Input:

{
  "prompt": "A chihuahua in astronaut suit floating in space, cinematic lighting, glow effect.",
  "frameRate": 10,
  "frameCount": 32,
  "guidanceScale": 12,
  "denoisingSteps": 50,
  "viewWindowSize": 16,
  "outputDimensions": "576x1024",
  "viewWindowStride": 4
}
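Since most fields are optional, a small helper can fill in the documented defaults and catch invalid values before any request is sent. The defaults below mirror the parameter list above; the helper itself is just a convenience sketch, not part of the Cognitive Actions API.

```python
DEFAULT_INPUTS = {
    "frameRate": 10,
    "frameCount": 32,
    "guidanceScale": 12,
    "denoisingSteps": 50,
    "viewWindowSize": 16,
    "outputDimensions": "576x1024",
    "viewWindowStride": 4,
}
ALLOWED_DIMENSIONS = {"576x1024", "256x256"}

def build_inputs(prompt: str, **overrides) -> dict:
    """Merge caller overrides onto the documented defaults and validate them."""
    if not prompt:
        raise ValueError("prompt is required")
    inputs = {**DEFAULT_INPUTS, "prompt": prompt, **overrides}
    if inputs["outputDimensions"] not in ALLOWED_DIMENSIONS:
        raise ValueError(f"outputDimensions must be one of {ALLOWED_DIMENSIONS}")
    if inputs["frameCount"] > 512:
        raise ValueError("frameCount is capped at 512 frames")
    return inputs

payload = build_inputs("A chihuahua in astronaut suit floating in space", frameCount=64)
print(payload["frameCount"], payload["frameRate"])  # 64 10
```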

Output

Upon execution, the action will typically return a URL linking to the generated video file.

Example Output:

https://assets.cognitiveactions.com/invocations/d2fb6539-8fbb-446b-8a86-c06c2e5a0e16/d62325ab-90f4-483e-af94-039c9ee15569.mp4
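Because the action returns a URL rather than raw bytes, you will usually want to download the file afterwards. Below is a minimal sketch using requests; the URL shown is the example output above, and in practice it would come from the action response.

```python
import os
import requests
from urllib.parse import urlparse

def filename_from_url(url: str) -> str:
    """Extract the file-name portion of a video URL."""
    return os.path.basename(urlparse(url).path)

def download_video(url: str, dest_dir: str = ".") -> str:
    """Stream the video at `url` to disk and return the local path."""
    local_path = os.path.join(dest_dir, filename_from_url(url))
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(local_path, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=8192):
                fh.write(chunk)
    return local_path

video_url = ("https://assets.cognitiveactions.com/invocations/"
             "d2fb6539-8fbb-446b-8a86-c06c2e5a0e16/"
             "d62325ab-90f4-483e-af94-039c9ee15569.mp4")
print(filename_from_url(video_url))  # d62325ab-90f4-483e-af94-039c9ee15569.mp4
```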

Conceptual Usage Example (Python)

Below is a conceptual Python code snippet demonstrating how to call the Cognitive Actions execution endpoint for generating longer videos:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "62542f71-2391-4acf-907f-c073f0a560ac"  # Action ID for Generate Longer Videos with Noise Rescheduling

# Construct the input payload based on the action's requirements
payload = {
    "prompt": "A chihuahua in astronaut suit floating in space, cinematic lighting, glow effect.",
    "frameRate": 10,
    "frameCount": 32,
    "guidanceScale": 12,
    "denoisingSteps": 50,
    "viewWindowSize": 16,
    "outputDimensions": "576x1024",
    "viewWindowStride": 4
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload},  # Hypothetical structure
        timeout=600  # Video generation can take several minutes
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError:  # Covers json.JSONDecodeError and requests' own variant
            print(f"Response body: {e.response.text}")

In the Python code above, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The payload follows the action's input schema, and a successful response contains the URL of the generated video.
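The exact shape of the response JSON depends on the platform. Assuming, hypothetically, that the video URL is returned under an `output` key, pulling it out defensively might look like this; inspect a real response and adjust the key accordingly.

```python
from typing import Optional

def extract_video_url(result: dict) -> Optional[str]:
    """Pull the generated video URL out of a response payload.

    The 'output' key is an assumption about the response shape, not a
    documented field -- check an actual response before relying on it.
    """
    output = result.get("output")
    if isinstance(output, str) and output.endswith(".mp4"):
        return output
    return None

sample = {"output": "https://assets.cognitiveactions.com/invocations/abc/video.mp4"}
print(extract_video_url(sample))
```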

Conclusion

The arthur-qiu/longercrafter Cognitive Actions offer developers an innovative way to generate long-duration videos effortlessly. By utilizing the provided action, you can focus on crafting engaging content, while the complexities of video generation are managed seamlessly. Consider exploring additional use cases, such as integrating these videos into applications, enhancing social media content, or creating unique marketing materials. Happy coding!