Accelerate Video Creation with lucataco/fast-mochi Cognitive Actions

23 Apr 2025

In the realm of AI-driven content creation, the lucataco/fast-mochi API offers a powerful solution for generating high-quality videos quickly. By leveraging the FastMochi model developed by Hao AI Lab, the API lets developers create videos with far fewer inference steps, yielding roughly an 8X speedup over conventional generation. This blog post explores how to integrate the FastMochi video generation actions into your applications.

Prerequisites

Before diving into the integration process, ensure you have the following:

  • An API key for the Cognitive Actions platform.
  • Basic knowledge of how to make HTTP requests in your preferred programming language.
  • Familiarity with JSON data structures, which will be used for input and output.

Authentication is typically handled by including your API key in the request headers, allowing you to securely access the Cognitive Actions services.

Cognitive Actions Overview

Generate Video with FastMochi

The Generate Video with FastMochi action enables developers to create high-quality videos based on textual prompts. This action falls under the video-generation category, making it ideal for applications that require dynamic visual content.

Input

The action expects a JSON object as input; the example below shows every supported field:

{
  "seed": 1024,
  "prompt": "A curious raccoon peers through a vibrant field of yellow sunflowers, its eyes wide with interest. The playful yet serene atmosphere is complemented by soft natural light filtering through the petals. Mid-shot, warm and cheerful tones.",
  "guidanceScale": 1.5,
  "numberOfFrames": 151,
  "numberOfInferenceSteps": 8
}

Required Fields:

  • prompt: A descriptive text prompt that guides the video generation process.

Optional Fields:

  • seed: An integer random seed, defaulting to 1024; reusing the same seed makes results reproducible.
  • guidanceScale: A floating-point number controlling how closely the generated video adheres to the prompt, with a default of 1.5 (range: 0.1 to 10).
  • numberOfFrames: The total number of frames to generate, defaulting to 151.
  • numberOfInferenceSteps: The number of inference steps used during generation, defaulting to 8 (range: 1 to 30); FastMochi's speed advantage comes from needing far fewer steps than the base model.
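Since several fields have documented ranges, it can be worth validating them client-side before making a request. The helper below is a minimal sketch built from the defaults and ranges above; the function name is illustrative, not part of the API.

```python
def build_fastmochi_payload(prompt, seed=1024, guidance_scale=1.5,
                            number_of_frames=151, inference_steps=8):
    """Validate fields against the documented ranges and return the request payload."""
    if not prompt or not prompt.strip():
        raise ValueError("prompt is required and must be non-empty")
    if not 0.1 <= guidance_scale <= 10:
        raise ValueError("guidanceScale must be between 0.1 and 10")
    if not 1 <= inference_steps <= 30:
        raise ValueError("numberOfInferenceSteps must be between 1 and 30")
    # Field names match the action's expected JSON input
    return {
        "seed": seed,
        "prompt": prompt,
        "guidanceScale": guidance_scale,
        "numberOfFrames": number_of_frames,
        "numberOfInferenceSteps": inference_steps,
    }
```

Rejecting out-of-range values locally gives a clearer error message than waiting for the server to do it.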

Output

Upon successful execution, the action returns a URL link to the generated video. For example:

https://assets.cognitiveactions.com/invocations/a70b1248-c023-44c9-910e-bb078f889508/a7b33d75-8567-4d2d-87d4-4e5513bfe384.mp4

This output can be directly utilized in applications, allowing for immediate access to the generated content.
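Because the action returns a plain MP4 URL, the file can be fetched with any standard HTTP client. The sketch below streams the video to disk to avoid holding the whole file in memory; the helper name is my own, not part of the API.

```python
import requests

def download_video(url, path="output.mp4", chunk_size=8192):
    """Stream the generated MP4 from its URL to a local file and return the path."""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()  # fail fast on 4xx/5xx
        with open(path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                f.write(chunk)
    return path
```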

Conceptual Usage Example (Python)

Here’s a Python snippet illustrating how to invoke the Generate Video with FastMochi action:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "afdf8de7-4aeb-480c-999e-1d7aecf9f402" # Action ID for Generate Video with FastMochi

# Construct the input payload based on the action's requirements
payload = {
    "seed": 1024,
    "prompt": "A curious raccoon peers through a vibrant field of yellow sunflowers, its eyes wide with interest. The playful yet serene atmosphere is complemented by soft natural light filtering through the petals. Mid-shot, warm and cheerful tones.",
    "guidanceScale": 1.5,
    "numberOfFrames": 151,
    "numberOfInferenceSteps": 8
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}, # Hypothetical structure
        timeout=300 # Video generation can take a while
    )
    response.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError: # body was not valid JSON
            print(f"Response body: {e.response.text}")

In this code snippet:

  • The action ID is specified for the video generation action.
  • The input JSON payload is structured according to the action requirements.
  • The request is sent to the hypothetical endpoint; on success the parsed JSON result (including the generated video link) is printed, and failures are reported with their status code and response body.
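The exact shape of the JSON result depends on the platform. Assuming a hypothetical top-level output field holding the video URL, extracting it might look like the sketch below; adjust the key to match the actual response you receive.

```python
def extract_video_url(result):
    """Return the video URL from a hypothetical result structure."""
    url = result.get("output")  # assumed field name, not confirmed by the docs
    if not isinstance(url, str) or not url.startswith("http"):
        raise ValueError(f"Unexpected result structure: {result!r}")
    return url
```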

Conclusion

The lucataco/fast-mochi Cognitive Actions provide a robust framework for developers looking to integrate high-speed video generation into their applications. By leveraging the FastMochi model, you can create dynamic content efficiently and effectively. Consider exploring various prompts and parameters to fully utilize the potential of video generation in your projects. Happy coding!