Create High-Quality Videos with the dfischer/toksci-2-epoch Cognitive Actions

23 Apr 2025

In the world of multimedia content creation, the demand for high-quality video generation is ever-growing. The dfischer/toksci-2-epoch Cognitive Actions empower developers to create stunning videos with customizable parameters, leveraging advanced techniques such as diffusion steps and text prompts. This guide will walk you through the capabilities of the Generate Enhanced Video action, illustrating how it can be integrated into your applications.

Prerequisites

Before diving into the Cognitive Actions, ensure you have the following:

  • An API key for the Cognitive Actions platform.
  • Familiarity with JSON structures for input and output.
  • Basic knowledge of making HTTP requests in your programming language of choice.

Authentication typically involves passing your API key in the headers of your requests.

Cognitive Actions Overview

Generate Enhanced Video

The Generate Enhanced Video action allows you to create high-quality videos by specifying a variety of customizable parameters. This action is particularly useful for developers interested in generating visually engaging content based on textual descriptions.

Input

The input for this action is structured as follows:

{
  "steps": 50,
  "width": 640,
  "height": 360,
  "prompt": "a video of TOKOSCI rendering images of grids and butterflies through the screen of a TOKOSCI render",
  "flowShift": 9,
  "frameRate": 16,
  "scheduler": "DPMSolverMultistepScheduler",
  "loraFileUrl": "",
  "forceOffload": true,
  "loraStrength": 1,
  "guidanceScale": 6,
  "enhancementEnd": 1,
  "numberOfFrames": 33,
  "denoiseStrength": 1,
  "enhancementStart": 0,
  "enhanceAcrossPairs": true,
  "enhancementStrength": 0.3,
  "compressionRateFactor": 19,
  "enhanceIndividualFrames": true
}
  • steps: Number of diffusion steps (default: 50, max: 150).
  • width: Width of the output video in pixels (default: 640, range: 64-1536).
  • height: Height of the output video in pixels (default: 360, range: 64-1024).
  • prompt: Textual description of the video scene.
  • flowShift: Continuity factor affecting video flow (default: 9, range: 0-20).
  • frameRate: Frames per second of the video (default: 16, range: 1-60).
  • scheduler: Algorithm for frame generation (default: "DPMSolverMultistepScheduler").
  • loraFileUrl: URL to the LoRA .safetensors file or Hugging Face repo.
  • forceOffload: Boolean to offload model weights to the CPU when not in use, trading some speed for lower GPU memory usage (default: true).
  • loraStrength: Strength of the LoRA effect (default: 1, range: -10 to 10).
  • guidanceScale: How closely the output follows the text prompt versus the model's own priors (default: 6, range: 0-30).
  • enhancementEnd: Point at which enhancement stops, on a 0-1 scale of the generation process (default: 1).
  • numberOfFrames: Total frames in the video (default: 33, range: 1-1440).
  • denoiseStrength: Strength of denoising applied at each diffusion step (default: 1, range: 0-2).
  • enhancementStart: Point at which enhancement begins, on a 0-1 scale of the generation process (default: 0).
  • enhanceAcrossPairs: Boolean to apply enhancements across frame pairs (default: true).
  • enhancementStrength: Magnitude of enhancement effects (default: 0.3, range: 0-2).
  • compressionRateFactor: Constant Rate Factor for video encoding; lower values give higher quality and larger files (default: 19, range: 0-51).
  • enhanceIndividualFrames: Boolean to enhance individual frames (default: true).
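Since out-of-range values will presumably be rejected server-side, it can save a failed round trip to validate the payload locally first. The sketch below is not an official client; the ranges are copied from the parameter list above, and the minimum for steps is an assumption (only the default and maximum are documented):

```python
# Allowed ranges, taken from the parameter descriptions above.
# Note: the lower bound for "steps" is assumed, not documented.
RANGES = {
    "steps": (1, 150),
    "width": (64, 1536),
    "height": (64, 1024),
    "flowShift": (0, 20),
    "frameRate": (1, 60),
    "loraStrength": (-10, 10),
    "guidanceScale": (0, 30),
    "numberOfFrames": (1, 1440),
    "denoiseStrength": (0, 2),
    "enhancementStrength": (0, 2),
    "compressionRateFactor": (0, 51),
}

def validate_inputs(payload: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for key, (lo, hi) in RANGES.items():
        if key in payload and not (lo <= payload[key] <= hi):
            problems.append(f"{key}={payload[key]} outside [{lo}, {hi}]")
    if not payload.get("prompt"):
        problems.append("prompt must be a non-empty string")
    return problems
```

As a quick sanity check on duration: numberOfFrames divided by frameRate gives the clip length, so the default 33 frames at 16 fps yields roughly a two-second video.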

Output

Upon successful execution, the action returns a URL to the generated video. For example:

https://assets.cognitiveactions.com/invocations/895e6bd7-4326-44da-b10a-efca2e382b48/8fc7e728-cb6b-42f4-b3c9-382a6b39394f.mp4
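The returned asset can be fetched like any other file. Here is a small standard-library helper that streams the MP4 to disk and derives a local filename from the URL path; it assumes the asset URL is publicly readable, which may depend on your account settings:

```python
import os
import shutil
from urllib.parse import urlparse
from urllib.request import urlopen

def filename_from_url(url: str) -> str:
    """Derive a local filename from the last segment of the URL path."""
    return os.path.basename(urlparse(url).path)

def download_video(url: str, dest_dir: str = ".") -> str:
    """Stream the video at `url` into dest_dir and return the local path."""
    local_path = os.path.join(dest_dir, filename_from_url(url))
    with urlopen(url) as resp, open(local_path, "wb") as fh:
        shutil.copyfileobj(resp, fh)
    return local_path
```

For the example URL above, `filename_from_url` yields `8fc7e728-cb6b-42f4-b3c9-382a6b39394f.mp4`, so repeated downloads of different invocations never collide.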

Conceptual Usage Example (Python)

Here is a conceptual example of how you might call the Generate Enhanced Video action using Python:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "0c96221e-84fa-4dbb-b5f1-745c8ff1a3a8" # Action ID for Generate Enhanced Video

# Construct the input payload based on the action's requirements
payload = {
    "steps": 50,
    "width": 640,
    "height": 360,
    "prompt": "a video of TOKOSCI rendering images of grids and butterflies through the screen of a TOKOSCI render",
    "flowShift": 9,
    "frameRate": 16,
    "scheduler": "DPMSolverMultistepScheduler",
    "loraFileUrl": "",
    "forceOffload": True,
    "loraStrength": 1,
    "guidanceScale": 6,
    "enhancementEnd": 1,
    "numberOfFrames": 33,
    "denoiseStrength": 1,
    "enhancementStart": 0,
    "enhanceAcrossPairs": True,
    "enhancementStrength": 0.3,
    "compressionRateFactor": 19,
    "enhanceIndividualFrames": True
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload} # Hypothetical structure
    )
    response.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError:  # body is not valid JSON (covers JSONDecodeError too)
            print(f"Response body: {e.response.text}")

In this code snippet:

  • Replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key.
  • The action_id is set to the ID for the Generate Enhanced Video action.
  • The payload is constructed according to the input schema.
  • The response will contain a URL to the generated video.
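Since the exact shape of the response JSON is not documented here, one defensive approach is to scan the decoded result for the first .mp4 URL rather than hard-coding a key path. The sketch below (including the sample response shape in the usage note) is hypothetical; replace it with direct key access once you know the real schema:

```python
def find_video_url(obj):
    """Recursively search a decoded JSON structure for the first .mp4 URL."""
    if isinstance(obj, str):
        return obj if obj.startswith("http") and obj.endswith(".mp4") else None
    if isinstance(obj, dict):
        values = obj.values()
    elif isinstance(obj, list):
        values = obj
    else:
        return None
    for value in values:
        found = find_video_url(value)
        if found:
            return found
    return None
```

For example, if the platform returned `{"status": "ok", "outputs": [{"video": "https://assets.cognitiveactions.com/...mp4"}]}`, `find_video_url(result)` would pull out the video URL without knowing the nesting in advance.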

Conclusion

The dfischer/toksci-2-epoch Cognitive Actions provide a powerful toolset for developers looking to generate high-quality videos programmatically. By utilizing customizable parameters, you can create unique and engaging content tailored to your needs. Explore further use cases, such as integrating video generation into applications for marketing, education, or entertainment. The possibilities are only limited by your imagination!