Create Stunning Blade Runner Style Videos with Hunyuan Cognitive Actions

In the world of video generation, the Hunyuan Blade Runner cognitive actions provide an exciting opportunity to create videos reminiscent of the iconic 1982 film, Blade Runner. This powerful API allows developers to leverage advanced models fine-tuned specifically for generating high-quality, detailed video scenes. By utilizing the pre-built actions, you can focus on crafting unique visual experiences without diving deep into the complexities of video processing.
In this article, we'll explore how to integrate the Generate Blade Runner Style Video action into your applications, detailing its capabilities, input requirements, and how to execute it programmatically.
Prerequisites
Before you get started, ensure you have the following ready:
- An API key for the Hunyuan Cognitive Actions platform.
- Basic understanding of JSON and how to make HTTP requests.
- Familiarity with Python, as we will provide a conceptual example using this language for API calls.
For authentication, you'll typically pass the API key in the request headers.
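As a minimal sketch, the headers might look like the following. The `Bearer` scheme is an assumption based on common API conventions; check the platform documentation for the exact header format.

```python
# Hypothetical auth headers for the Cognitive Actions API.
# The "Bearer <key>" scheme is an assumption, not confirmed by the docs.
API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```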
Cognitive Actions Overview
Generate Blade Runner Style Video
Description:
This action uses the Hunyuan-Video model fine-tuned on Blade Runner footage to generate videos that mimic the film's distinctive style. For optimal results, begin your prompt with the trigger token 'BLDRN'.
Category: Video Generation
Input Schema: The input for this action requires a structured JSON object. Here’s a breakdown of the essential fields:
- url (string): The URL to a LoRA .safetensors file or Hugging Face repository.
- seed (integer, optional): A random seed for reproducibility.
- steps (integer): Number of diffusion steps for frame generation (default 50, range 1-150).
- width (integer): Width of the generated video in pixels (default 640, range 64-1536).
- height (integer): Height of the generated video in pixels (default 360, range 64-1024).
- prompt (string): A detailed description for scene creation.
- flowShift (integer): Continuity factor for video frames (default 9).
- frameRate (integer): Frames per second in the video (default 16, range 1-60).
- scheduler (string): The scheduling algorithm for generating frames (default "DPMSolverMultistepScheduler").
- enhanceEnd (number): Relative time for video enhancement to end (default 1).
- enhanceStart (number): Relative time for enhancement to begin (default 0).
- forceOffload (boolean): Whether to offload model layers to CPU (default true).
- loraStrength (number): Intensity of the LoRA model (default 1, range -10 to 10).
- Additional parameters for frame enhancement and quality control.
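Since several fields carry documented ranges, it can be useful to validate a payload before sending it. Below is an illustrative helper based only on the ranges listed above; the function itself is not part of the API.

```python
# Documented bounds from the input schema above.
RANGES = {
    "steps": (1, 150),
    "width": (64, 1536),
    "height": (64, 1024),
    "frameRate": (1, 60),
    "loraStrength": (-10, 10),
}

def validate(payload: dict) -> list:
    """Return a list of human-readable problems; empty means OK."""
    errors = []
    if not payload.get("prompt"):
        errors.append("prompt is required")
    for field, (low, high) in RANGES.items():
        if field in payload and not low <= payload[field] <= high:
            errors.append(f"{field} must be between {low} and {high}")
    return errors
```

For example, `validate({"prompt": "BLDRN ...", "steps": 200})` flags the out-of-range `steps` value before the request is ever sent.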
Example Input:
{
  "url": "",
  "seed": 12345,
  "steps": 50,
  "width": 640,
  "height": 360,
  "prompt": "A video in the style of BLDRN, BLDRN The video clip depicts a detailed portrait of a woman's face...",
  "flowShift": 9,
  "frameRate": 16,
  "scheduler": "DPMSolverMultistepScheduler",
  "enhanceEnd": 1,
  "enhanceStart": 0,
  "forceOffload": true,
  "loraStrength": 1,
  "enhanceDouble": true,
  "enhanceSingle": true,
  "enhanceWeight": 0.3,
  "guidanceScale": 6,
  "qualityFactor": 19,
  "numberOfFrames": 66,
  "denoiseStrength": 1
}
Output: The action typically returns a URL pointing to the generated video file. For example:
https://assets.cognitiveactions.com/invocations/d2ed2fdb-7fa7-4aec-9726-b2e7c15c08f1/83310255-50a5-4bd4-9166-28bad3278e57.mp4
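Once you have the returned URL, you can save the video to disk. Here is a minimal sketch using only the Python standard library; the helper function is illustrative, not part of the API.

```python
import os
from urllib.request import urlopen

def download_video(url: str, dest_dir: str = ".") -> str:
    """Stream a generated video to disk and return the local file path."""
    filename = url.rsplit("/", 1)[-1]  # last path segment, e.g. "<uuid>.mp4"
    path = os.path.join(dest_dir, filename)
    with urlopen(url, timeout=60) as resp, open(path, "wb") as f:
        while True:
            chunk = resp.read(8192)  # read in chunks to keep memory flat
            if not chunk:
                break
            f.write(chunk)
    return path
```

Calling `download_video(result_url)` would save the file under its original name in the current directory.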
Conceptual Usage Example (Python): Here’s how you might call this action using Python:
import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "edea8a62-ed3b-4c63-8d27-b23bfcec0165"  # Action ID for Generate Blade Runner Style Video

# Construct the input payload based on the action's requirements
payload = {
    "url": "",
    "seed": 12345,
    "steps": 50,
    "width": 640,
    "height": 360,
    "prompt": "A video in the style of BLDRN, BLDRN The video clip depicts a detailed portrait of a woman's face...",
    "flowShift": 9,
    "frameRate": 16,
    "scheduler": "DPMSolverMultistepScheduler",
    "enhanceEnd": 1,
    "enhanceStart": 0,
    "forceOffload": True,
    "loraStrength": 1,
    "enhanceDouble": True,
    "enhanceSingle": True,
    "enhanceWeight": 0.3,
    "guidanceScale": 6,
    "qualityFactor": 19,
    "numberOfFrames": 66,
    "denoiseStrength": 1,
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json",
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload},  # Hypothetical structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)
    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))
except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")
In this snippet, replace the placeholder for the API key and adjust the payload as needed. This code constructs a JSON payload according to the required input schema and sends a POST request to execute the action.
Conclusion
The Generate Blade Runner Style Video cognitive action offers a unique way to create visually stunning videos inspired by the classic film. By integrating this action into your applications, you can streamline video generation processes and focus on creative storytelling. Explore further possibilities with different prompts and parameters to make your videos truly stand out!