Generate Stunning Visuals with the fofr/wan2.1-with-lora Cognitive Actions

21 Apr 2025

In the realm of AI video generation, the fofr/wan2.1-with-lora API offers powerful Cognitive Actions designed to simplify the creation of high-quality video frames. These actions let developers leverage the Wan2.1 text-to-video model, including the flexibility of LORA customization, to generate unique visual content tailored to their needs. By using these pre-built actions, you can save time and resources while enhancing your applications with dynamic imagery.

Prerequisites

Before you dive into using the Cognitive Actions, ensure that you have the following:

  • An API key for the Cognitive Actions platform to authenticate your requests.
  • Familiarity with JSON format as the input and output will be structured in this way.

Authentication typically involves passing your API key in the headers of your requests, allowing secure access to the Cognitive Actions.
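As a minimal sketch, the authenticated headers might be built like this (assuming a Bearer-token scheme, which the examples below also use; confirm the exact scheme with the platform documentation):

```python
def build_headers(api_key: str) -> dict:
    """Return HTTP headers for an authenticated Cognitive Actions request.

    Assumes a Bearer-token scheme with a JSON request body.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```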

Cognitive Actions Overview

Generate Frames with Wan2.1 Model

This action allows you to generate high-quality frames using the Wan2.1 model with optional LORA customization. You can choose between two models: '1.3b' for faster performance and '14b' for superior quality. Additionally, you can adjust parameters such as aspect ratio, sample steps, and LORA strengths to customize your output.

Input

The input for this action is structured as follows:

{
  "seed": 12345,
  "model": "14b",
  "frames": 81,
  "prompt": "flat color 2d animation of a portrait of woman with white hair and green eyes, dynamic scene",
  "loraUrl": "https://huggingface.co/motimalu/wan-flat-color-v2/resolve/main/wan_flat_color_v2.safetensors",
  "aspectRatio": "16:9",
  "sampleShift": 8,
  "sampleSteps": 30,
  "negativePrompt": "",
  "loraStrengthClip": 1,
  "sampleGuideScale": 5,
  "loraStrengthModel": 1
}

Key Properties:

  • seed: An integer for reproducibility (random by default).
  • model: Choose between '1.3b' and '14b'.
  • frames: Specify the number of frames (17, 33, 49, 65, or 81).
  • prompt: A string that describes what you want to generate.
  • loraUrl: An optional URL for a LORA model.
  • aspectRatio: Define the output image's aspect ratio.
  • sampleShift: A number controlling the sampler's shift factor (0-10).
  • sampleSteps: An integer number of sampling steps (1-60); more steps generally improve quality at the cost of speed.
  • negativePrompt: A string to specify elements to avoid in the output.
  • loraStrengthClip: A number to control LORA influence (0.0 for no effect).
  • sampleGuideScale: A number to adjust emphasis on the prompt (0-10).
  • loraStrengthModel: A number to determine LORA strength (0.0 for no effect).
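The constraints above can be checked client-side before sending a request. The following helper is a sketch, not part of the API; the allowed values simply mirror the property list:

```python
# Allowed values taken from the property list above.
ALLOWED_MODELS = {"1.3b", "14b"}
ALLOWED_FRAMES = {17, 33, 49, 65, 81}

def validate_payload(payload: dict) -> list:
    """Return a list of validation errors (empty if the payload looks valid)."""
    errors = []
    if payload.get("model") not in ALLOWED_MODELS:
        errors.append(f"model must be one of {sorted(ALLOWED_MODELS)}")
    if payload.get("frames") not in ALLOWED_FRAMES:
        errors.append(f"frames must be one of {sorted(ALLOWED_FRAMES)}")
    if not payload.get("prompt"):
        errors.append("prompt is required")
    if not 1 <= payload.get("sampleSteps", 30) <= 60:
        errors.append("sampleSteps must be between 1 and 60")
    if not 0 <= payload.get("sampleShift", 8) <= 10:
        errors.append("sampleShift must be between 0 and 10")
    return errors
```

Running the validator before the API call turns a round-trip failure into an immediate, readable error message.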

Output

Upon successful execution, the action returns an array containing the URL(s) of the generated output; for this model the asset is an MP4 video. For example:

[
  "https://assets.cognitiveactions.com/invocations/e26b0fdf-b179-44d6-9686-129c03d44046/2d650b86-d809-425c-ba0b-5c98ed316e58.mp4"
]
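Since the output is just a hosted asset URL, saving it locally is straightforward. A small sketch using the requests library (the filename helper and download function are illustrative, not part of the API):

```python
import os
import requests
from urllib.parse import urlparse

def output_filename(url: str) -> str:
    """Derive a local filename from the asset URL's last path segment."""
    return os.path.basename(urlparse(url).path)

def download_output(url: str, dest_path: str) -> None:
    """Stream a generated asset to a local file."""
    with requests.get(url, stream=True, timeout=120) as resp:
        resp.raise_for_status()
        with open(dest_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
```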

Conceptual Usage Example (Python)

Here’s how you can call this action using a hypothetical Cognitive Actions execution endpoint in Python:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "06c7dc6d-1e1e-46b5-9b61-827ed75a13b4"  # Action ID for Generate Frames with Wan2.1 Model

# Construct the input payload based on the action's requirements
payload = {
    "model": "14b",
    "frames": 81,
    "prompt": "flat color 2d animation of a portrait of woman with white hair and green eyes, dynamic scene",
    "loraUrl": "https://huggingface.co/motimalu/wan-flat-color-v2/resolve/main/wan_flat_color_v2.safetensors",
    "aspectRatio": "16:9",
    "sampleShift": 8,
    "sampleSteps": 30,
    "loraStrengthClip": 1,
    "sampleGuideScale": 5,
    "loraStrengthModel": 1
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}  # Hypothetical structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError:  # covers JSONDecodeError; body was not valid JSON
            print(f"Response body: {e.response.text}")

In this code snippet, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The action ID corresponds to the Generate Frames with Wan2.1 Model action, and the input payload follows the schema described above.
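Once the call succeeds, you will want the asset URLs out of the response. The exact response envelope of the hypothetical execution endpoint is an assumption here; this sketch handles both a bare array (as shown in the Output section) and a wrapped form:

```python
def extract_urls(result) -> list:
    """Return the list of generated asset URLs from an action result.

    Handles a bare JSON array of URLs, or (as an assumption) a dict
    that wraps the array under an "output" key.
    """
    if isinstance(result, list):
        return [u for u in result if isinstance(u, str)]
    if isinstance(result, dict):
        return [u for u in result.get("output", []) if isinstance(u, str)]
    return []
```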

Conclusion

The fofr/wan2.1-with-lora Cognitive Actions provide a robust solution for developers looking to generate high-quality imagery efficiently. By leveraging these powerful actions, you can enhance your applications with stunning visuals tailored to user needs. Consider experimenting with different parameters to unlock the full potential of the Wan2.1 model in your projects!