Create Stunning Videos with the LTX Video Cognitive Actions

24 Apr 2025

Creating dynamic and engaging video content has never been easier thanks to the LTX Video Cognitive Actions. This powerful set of pre-built actions enables developers to generate video content based on text prompts using advanced video synthesis techniques. By leveraging the capabilities of the LTX video functionality, you can automate video creation and produce high-quality outputs tailored to your specifications.

Prerequisites

To get started with the LTX Video Cognitive Actions, you will need access to the Cognitive Actions platform, including an API key for authentication. Typically, this involves including your API key in the headers of your requests. Ensure that you have your environment set up to make API calls, including any necessary libraries such as requests for Python.

Cognitive Actions Overview

Generate LTX Video

The Generate LTX Video action creates video content via guided synthesis from an input text prompt. By specifying various parameters, you can control the output to fit your creative vision.

Input

The input schema for this action is structured as follows:

{
  "seed": "integer (optional)",
  "inputPrompt": "string (required)",
  "outputWidth": "integer (default: 768)",
  "outputHeight": "integer (default: 512)",
  "numberOfFrames": "integer (default: 97, enum: [97, 129, 161, 193, 225, 257])",
  "numberOfOutputs": "integer (default: 1, min: 1, max: 4)",
  "guidanceInfluence": "number (default: 3, min: 1, max: 10)",
  "negativePromptText": "string (default: 'watermark, text, deformed, worst quality, inconsistent motion, blurry, jittery, distorted')",
  "outputFramesPerSecond": "integer (default: 24)",
  "numberOfInferenceSteps": "integer (default: 50, min: 1, max: 500)",
  "decodeTimeStepParameter": "number (default: 0.03, min: 0.005, max: 1)",
  "decodeNoiseScaleParameter": "number (default: 0.025, min: 0.0005, max: 1)"
}
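Because the schema exposes both `numberOfFrames` and `outputFramesPerSecond`, the duration of a generated clip follows directly from their ratio. A quick sketch (the helper name is illustrative, not part of the API):

```python
def clip_duration_seconds(number_of_frames: int = 97,
                          frames_per_second: int = 24) -> float:
    """Duration of the generated clip, given frame count and FPS."""
    return number_of_frames / frames_per_second

# With the defaults (97 frames at 24 fps) a clip runs just over 4 seconds.
print(round(clip_duration_seconds(), 2))       # ~4.04
# The longest allowed setting, 257 frames, runs roughly 10.7 seconds.
print(round(clip_duration_seconds(257), 2))    # ~10.71
```

This makes the frame-count enum easier to reason about: each step of 32 frames adds about 1.33 seconds of video at the default 24 fps.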

Here’s an example input payload:

{
  "inputPrompt": "A robot cyborg woman with a shiny metal face and metal facial features smiles at another woman with long blonde hair. The robot cyborg woman with brown hair wears a black jacket and has a metal face. The camera angle is a close-up, focused on the robot cyborg woman's metal face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage.",
  "outputWidth": 768,
  "outputHeight": 512,
  "numberOfFrames": 97,
  "numberOfOutputs": 1,
  "guidanceInfluence": 3,
  "outputFramesPerSecond": 24,
  "numberOfInferenceSteps": 60
}
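Since several fields carry numeric ranges or an enumerated set of allowed values, it can help to validate a payload client-side before spending an API call. The check below is a local convenience sketch based on the schema above, not something the platform itself provides:

```python
# Allowed frame counts from the input schema's enum.
ALLOWED_FRAME_COUNTS = {97, 129, 161, 193, 225, 257}

def validate_payload(payload: dict) -> list:
    """Return a list of schema violations; an empty list means the payload looks valid."""
    errors = []
    if not payload.get("inputPrompt"):
        errors.append("inputPrompt is required")
    if payload.get("numberOfFrames", 97) not in ALLOWED_FRAME_COUNTS:
        errors.append(f"numberOfFrames must be one of {sorted(ALLOWED_FRAME_COUNTS)}")
    if not 1 <= payload.get("numberOfOutputs", 1) <= 4:
        errors.append("numberOfOutputs must be between 1 and 4")
    if not 1 <= payload.get("guidanceInfluence", 3) <= 10:
        errors.append("guidanceInfluence must be between 1 and 10")
    if not 1 <= payload.get("numberOfInferenceSteps", 50) <= 500:
        errors.append("numberOfInferenceSteps must be between 1 and 500")
    return errors

print(validate_payload({"inputPrompt": "A robot cyborg woman..."}))  # [] -> valid
print(validate_payload({"numberOfFrames": 100}))  # flags the frame count and missing prompt
```

Catching an out-of-range `numberOfInferenceSteps` or an off-enum frame count locally is cheaper than waiting for the service to reject the request.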

Output

When you execute the Generate LTX Video action, the output typically includes a URL to the generated video. Here's an example of what you might receive:

[
  "https://assets.cognitiveactions.com/invocations/a12ff58a-6760-42f1-86ab-bc71b8799cf3/cc77b274-2ab0-4671-b885-7353ac0d9855.mp4"
]

This URL points to the created video, which you can then use in your applications.
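Since the response is a list of URLs rather than the video bytes themselves, a typical follow-up step is to fetch each file and save it locally. The sketch below uses only the standard library (so it runs even without `requests` installed); `download_video` and `filename_from_url` are local helpers, not part of any SDK:

```python
import os
import shutil
from urllib.parse import urlparse
from urllib.request import urlopen

def filename_from_url(url: str) -> str:
    """Derive a local filename from the asset URL's last path segment."""
    return os.path.basename(urlparse(url).path)

def download_video(url: str, dest_dir: str = ".") -> str:
    """Stream the video to disk and return the saved path."""
    path = os.path.join(dest_dir, filename_from_url(url))
    with urlopen(url, timeout=60) as resp, open(path, "wb") as f:
        shutil.copyfileobj(resp, f)
    return path

# The example URL from the output above.
example = ("https://assets.cognitiveactions.com/invocations/"
           "a12ff58a-6760-42f1-86ab-bc71b8799cf3/"
           "cc77b274-2ab0-4671-b885-7353ac0d9855.mp4")
print(filename_from_url(example))  # cc77b274-2ab0-4671-b885-7353ac0d9855.mp4
# local_path = download_video(example)  # uncomment to actually fetch the file
```

Generated asset URLs like this are often time-limited, so it is usually safest to download the file promptly rather than store the URL long-term.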

Conceptual Usage Example (Python)

Below is a conceptual Python code snippet demonstrating how to call the LTX Video action:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "dc70f5ae-b0ef-457d-89f9-462b898d76d2" # Action ID for Generate LTX Video

# Construct the input payload based on the action's requirements
payload = {
    "inputPrompt": "A robot cyborg woman with a shiny metal face and metal facial features smiles at another woman with long blonde hair. The robot cyborg woman with brown hair wears a black jacket and has a metal face. The camera angle is a close-up, focused on the robot cyborg woman's metal face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage.",
    "outputWidth": 768,
    "outputHeight": 512,
    "numberOfFrames": 97,
    "numberOfOutputs": 1,
    "guidanceInfluence": 3,
    "outputFramesPerSecond": 24,
    "numberOfInferenceSteps": 60
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload},  # Hypothetical structure
        timeout=120  # video generation can take a while; avoid hanging indefinitely
    )
    response.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")

In this snippet, replace the API key and endpoint with your actual values. The action_id corresponds to the Generate LTX Video action, and the payload is structured according to the required input schema.

Conclusion

The LTX Video Cognitive Actions provide an innovative way for developers to create engaging video content through simple API calls. By utilizing the Generate LTX Video action, you can transform text prompts into dynamic video outputs that meet your project needs. Explore the possibilities of automated video generation and enrich your applications with captivating content today!