Create Stunning Videos with the deepfates/hunyuan-arcane Cognitive Action

The deepfates/hunyuan-arcane API gives developers a powerful tool for generating high-quality videos in the distinctive style of Arcane (2021). By leveraging pre-built Cognitive Actions, you can create visually captivating content while keeping the video generation process simple. In this article, we'll explore how to use the Generate Arcane Style Video action, detailing its parameters and usage, and providing examples to help you integrate it seamlessly into your applications.
Prerequisites
To get started with the Cognitive Actions provided by the deepfates/hunyuan-arcane API, ensure you have the following:
- An API key for accessing the Cognitive Actions platform.
- Basic understanding of JSON payloads and API calls.
- Familiarity with Python for executing API requests.
Authentication typically involves passing your API key in the headers of your requests, allowing secure access to the actions.
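As a minimal sketch of the authentication scheme described above (the Bearer-token format is an assumption based on common API practice, not confirmed documentation):

```python
def auth_headers(api_key: str) -> dict:
    """Build the request headers used to authenticate with the Cognitive Actions platform."""
    return {
        "Authorization": f"Bearer {api_key}",  # assumed Bearer scheme
        "Content-Type": "application/json",
    }

# Example: headers for a hypothetical API key
headers = auth_headers("YOUR_COGNITIVE_ACTIONS_API_KEY")
```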
Cognitive Actions Overview
Generate Arcane Style Video
This action creates a video in the distinctive style of Arcane. A range of parameters lets you control video quality and styling; include the trigger word 'RCN' in your prompt for optimal results.
Input
The input for this action requires a structured JSON object with various parameters. Here’s a breakdown of the input schema:
- seed (integer): Specifies a seed for reproducibility of generated content. If omitted, a random seed is used.
- steps (integer): Defines the number of diffusion steps used in video generation. Default is 50, ranging from 1 to 150.
- width (integer): Specifies the width of the generated video in pixels. Default is 640, between 64 and 1536.
- height (integer): Specifies the height of the generated video in pixels. Default is 360, between 64 and 1024.
- prompt (string): The text prompt guiding the theme or scene description for the video.
- flowShift (integer): Controls the flow shift, which affects temporal continuity between frames. Default is 9, with a range from 0 to 20.
- frameRate (integer): Specifies the frame rate of the video in frames per second. Default is 16, ranging from 1 to 60.
- scheduler (string): Specifies the algorithm used to generate video frames. Defaults to 'DPMSolverMultistepScheduler'.
- frameCount (integer): Total number of frames in the resulting video. Default is 33, with a range from 1 to 1440.
- loraFileUrl (string): URL to a LoRA .safetensors file or a Hugging Face repository.
- qualityFactor (integer): Defines the Constant Rate Factor (CRF) for H.264 encoding; lower values yield higher quality. Default is 19, with a range from 0 to 51.
- enableForceOffload (boolean): Determines whether to force model layers to offload to the CPU. Default is true.
- enhancementEndTime (number): Specifies the point to stop video enhancement.
- loraEffectStrength (number): Determines the strength of the LoRA effect. Default is 1, ranging from -10 to 10.
- textModelInfluence (number): Controls the balance between text guidance and model influence. Default is 6, within 0 to 30.
- noiseReductionLevel (number): Determines the intensity of noise reduction applied at each step. Default is 1, ranging from 0 to 2.
- enhancementIntensity (number): Controls the intensity of video enhancement effects. Default is 0.3, with a range from 0 to 2.
- enhancementStartTime (number): Specifies the point to start video enhancement.
- doubleFrameEnhancement (boolean): Applies enhancement across pairs of video frames. Default is true.
- singleFrameEnhancement (boolean): Applies enhancement to individual video frames. Default is true.
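Since many of these parameters have documented ranges, it can help to validate a payload before sending it. The helper below is a sketch based only on the ranges listed above; the parameter names match the schema, but the validation logic itself is not part of the API:

```python
# Documented ranges from the input schema above, as name: (min, max)
PARAM_RANGES = {
    "steps": (1, 150),
    "width": (64, 1536),
    "height": (64, 1024),
    "flowShift": (0, 20),
    "frameRate": (1, 60),
    "frameCount": (1, 1440),
    "qualityFactor": (0, 51),
    "loraEffectStrength": (-10, 10),
    "textModelInfluence": (0, 30),
    "noiseReductionLevel": (0, 2),
    "enhancementIntensity": (0, 2),
}

def validate_payload(payload: dict) -> list:
    """Return a list of range violations; an empty list means the payload looks valid."""
    errors = []
    for name, (lo, hi) in PARAM_RANGES.items():
        if name in payload and not (lo <= payload[name] <= hi):
            errors.append(f"{name}={payload[name]} outside [{lo}, {hi}]")
    return errors
```

Running `validate_payload` on a candidate payload before the API call catches out-of-range values locally instead of waiting for a server-side error.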
Here’s an example of a valid input payload:
{
  "seed": 12345,
  "steps": 50,
  "width": 640,
  "height": 360,
  "prompt": "A video in the style of RCN, RCN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.",
  "flowShift": 9,
  "frameRate": 16,
  "scheduler": "DPMSolverMultistepScheduler",
  "frameCount": 66,
  "loraFileUrl": "",
  "qualityFactor": 19,
  "enableForceOffload": true,
  "enhancementEndTime": 1,
  "loraEffectStrength": 1,
  "textModelInfluence": 6,
  "noiseReductionLevel": 1,
  "enhancementIntensity": 0.3,
  "enhancementStartTime": 0,
  "doubleFrameEnhancement": true,
  "singleFrameEnhancement": true
}
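As a quick sanity check, frameCount and frameRate together determine the clip's duration: the payload above requests 66 frames at 16 fps, which works out to a 4.125-second clip. A one-line helper makes this explicit:

```python
def clip_duration_seconds(frame_count: int, frame_rate: int) -> float:
    """Duration of the generated clip, derived from frame count and frame rate."""
    return frame_count / frame_rate

# 66 frames at 16 fps -> 4.125 seconds
duration = clip_duration_seconds(66, 16)
```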
Output
Upon successful execution, the action returns a URL linking to the generated video. For instance:
https://assets.cognitiveactions.com/invocations/13f20b96-af85-4d37-952d-2b5c01547a92/6440ee4f-f148-4ee8-941b-ef47cc2cfef0.mp4
This URL directs you to the video created based on your input parameters.
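Once you have the returned URL, fetching the file is an ordinary HTTP download. The sketch below uses only Python's standard library; the filename-derivation helper is a convenience of this example, not part of the API:

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def filename_from_url(url: str) -> str:
    """Derive a local filename from the last path segment of the video URL."""
    return os.path.basename(urlparse(url).path)

def download_video(url: str, dest_dir: str = ".") -> str:
    """Download the generated video into dest_dir and return the local path."""
    dest = os.path.join(dest_dir, filename_from_url(url))
    urlretrieve(url, dest)  # simple blocking download; fine for short clips
    return dest
```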
Conceptual Usage Example (Python)
Below is a conceptual Python code snippet that illustrates how to call this action using the hypothetical Cognitive Actions execution endpoint:
import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "6118006f-362d-47b7-b8b4-68a437f75887"  # Action ID for Generate Arcane Style Video

# Construct the input payload based on the action's requirements
payload = {
    "seed": 12345,
    "steps": 50,
    "width": 640,
    "height": 360,
    "prompt": "A video in the style of RCN, RCN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.",
    "flowShift": 9,
    "frameRate": 16,
    "scheduler": "DPMSolverMultistepScheduler",
    "frameCount": 66,
    "loraFileUrl": "",
    "qualityFactor": 19,
    "enableForceOffload": True,  # Python booleans, not JSON's lowercase true
    "enhancementEndTime": 1,
    "loraEffectStrength": 1,
    "textModelInfluence": 6,
    "noiseReductionLevel": 1,
    "enhancementIntensity": 0.3,
    "enhancementStartTime": 0,
    "doubleFrameEnhancement": True,
    "singleFrameEnhancement": True,
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json",
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload},  # Hypothetical structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)
    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))
except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")
In this code snippet, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The action ID is set for the Generate Arcane Style Video action, and the input payload is structured to meet the action’s requirements.
Conclusion
By utilizing the Generate Arcane Style Video action from the deepfates/hunyuan-arcane API, developers can effortlessly create stunning videos that capture the essence of the Arcane aesthetic. With a wide array of customizable parameters, you can generate unique video content tailored to your needs. Explore this action and consider how it can enhance your applications today!