Create Stunning Custom Videos with Hunyuan-Video and LoRA Cognitive Actions

22 Apr 2025
In today's digital landscape, the ability to create compelling video content quickly and efficiently is invaluable for developers and content creators alike. The Hunyuan-Video LoRA Cognitive Actions provide an exciting way to generate custom videos from text descriptions, leveraging the power of the Hunyuan-Video model combined with LoRA for style customization and character additions. With these pre-built actions, developers can focus on creativity without getting lost in the complexities of video generation.

Prerequisites

Before diving into the integration of these Cognitive Actions, ensure you have the following:

  • An API key for the Cognitive Actions platform.
  • Basic familiarity with JSON and making HTTP requests.
  • A development environment with Python and the requests library installed.

Authentication typically involves passing your API key in the headers of your requests to securely access the Cognitive Actions functionality.
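Assuming a standard Bearer-token scheme (the exact header format may differ for your account), the request headers would look like this:

```python
# Hypothetical Bearer-token headers for the Cognitive Actions API.
# Replace the placeholder with your actual API key.
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json",
}
```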

Cognitive Actions Overview

Generate Video with Hunyuan-Video and LoRA

This action allows you to create custom videos based on detailed text prompts. By utilizing the Hunyuan-Video model and integrating LoRA, you can customize the style of the video and incorporate character dynamics, making it a versatile tool for content creation.

Input

The input for this action is structured as a JSON object, which includes the following fields:

  • scenePrompt (string): The descriptive text prompt for the video scene.
  • loraFileUrl (string): URL to your LoRA file or Hugging Face repository.
  • loraWeightsFile (string, optional): A tar file containing LoRA weights.
  • loraIntensity (number): Strength of the LoRA effect. Default is 1.
  • videoWidth (int): Width of the video in pixels (64-1536). Default is 640.
  • videoHeight (int): Height of the video in pixels (64-1024). Default is 360.
  • frameCount (int): Number of frames in the video (1-300). Default is 85.
  • videoFrameRate (int): Frames displayed per second (1-60). Default is 24.
  • diffusionSteps (int): Number of steps in the diffusion process (1-150). Default is 50.
  • textModelInfluence (number): Balance between the text input and the model's influence. Default is 6.
  • noiseIntensity (number): Level of noise in the artistic style. Default is 1.
  • continuityFactor (int): Adjusts the flow of the video (0-20). Default is 9.
  • randomSeed (int, optional): Seed for reproducible results.
  • forceCpuOffload (boolean): Whether to offload model layers to the CPU. Default is true.
  • compressionRateFactor (int): CRF for video encoding (0-51; lower means higher quality). Default is 19.

Here is an example input JSON payload:

{
  "frameCount": 33,
  "videoWidth": 512,
  "loraFileUrl": "lucataco/hunyuan-musubi-rose-6",
  "scenePrompt": "In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera.",
  "videoHeight": 512,
  "loraIntensity": 1,
  "diffusionSteps": 30,
  "noiseIntensity": 1,
  "videoFrameRate": 15,
  "forceCpuOffload": true,
  "continuityFactor": 9,
  "textModelInfluence": 6,
  "compressionRateFactor": 19
}
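Because several fields have documented ranges (frame count 1-300, width 64-1536, height 64-1024, and so on), a small client-side check can catch an invalid payload before you spend a request on it. This is a sketch based only on the ranges listed above; the API itself remains the authoritative validator and may enforce additional constraints:

```python
# Allowed ranges taken from the input schema above.
RANGES = {
    "frameCount": (1, 300),
    "videoWidth": (64, 1536),
    "videoHeight": (64, 1024),
    "diffusionSteps": (1, 150),
    "videoFrameRate": (1, 60),
    "continuityFactor": (0, 20),
    "compressionRateFactor": (0, 51),
}

def validate_payload(payload: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means the ranges pass."""
    problems = []
    for field, (lo, hi) in RANGES.items():
        value = payload.get(field)
        if value is not None and not lo <= value <= hi:
            problems.append(f"{field}={value} outside [{lo}, {hi}]")
    return problems

example = {"frameCount": 33, "videoWidth": 512, "videoHeight": 512,
           "diffusionSteps": 30, "videoFrameRate": 15,
           "continuityFactor": 9, "compressionRateFactor": 19}
print(validate_payload(example))  # []
```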

Output

Upon successful execution, this action returns a URL pointing to the generated video. For example:

https://assets.cognitiveactions.com/invocations/f34474ca-2032-4a95-97ef-574a31fa3f75/89ba1db0-df28-4f83-9b7a-ac29a52c597a.mp4
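Once you have the returned URL, the MP4 can be fetched like any other HTTP resource. A minimal download sketch (the helper names here are illustrative, not part of the Cognitive Actions API):

```python
import os
import requests

def filename_from_url(url: str) -> str:
    """Derive a local filename from the last path segment of the URL."""
    return url.rsplit("/", 1)[-1] or "video.mp4"

def download_video(url: str, dest_dir: str = ".") -> str:
    """Stream the generated MP4 to disk and return the local path."""
    path = os.path.join(dest_dir, filename_from_url(url))
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(path, "wb") as f:
            # Stream in chunks to avoid holding the whole file in memory.
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
    return path
```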

Conceptual Usage Example (Python)

Here's how you might use the Generate Video with Hunyuan-Video and LoRA action in Python:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "7aee81b9-1553-42a2-aca1-9969f04e2cc0" # Action ID for Generate Video with Hunyuan-Video and LoRA

# Construct the input payload based on the action's requirements
payload = {
    "frameCount": 33,
    "videoWidth": 512,
    "loraFileUrl": "lucataco/hunyuan-musubi-rose-6",
    "scenePrompt": "In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera.",
    "videoHeight": 512,
    "loraIntensity": 1,
    "diffusionSteps": 30,
    "noiseIntensity": 1,
    "videoFrameRate": 15,
    "forceCpuOffload": True,
    "continuityFactor": 9,
    "textModelInfluence": 6,
    "compressionRateFactor": 19
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}, # Hypothetical structure
        timeout=300 # Video generation can take several minutes
    )
    response.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError: # covers json.JSONDecodeError and simplejson's variant
            print(f"Response body: {e.response.text}")

In this code, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The action_id refers to the specific action to be executed, and the payload is structured according to the input schema outlined above. The endpoint URL is illustrative and should correspond to your actual Cognitive Actions API endpoint.

Conclusion

The Hunyuan-Video and LoRA Cognitive Actions offer powerful capabilities for video generation from textual descriptions, enabling developers to create custom and engaging content with ease. By integrating these actions into your applications, you can streamline the video creation process and enhance the creative possibilities for your projects. Explore further use cases, experiment with different prompts, and let your creativity flow!