Transform Images into Engaging Videos with fofr/wan-14b-black-sclera Cognitive Actions

25 Apr 2025

In the realm of artificial intelligence, the ability to convert static images into dynamic video sequences offers exciting possibilities for developers. The fofr/wan-14b-black-sclera Cognitive Actions provide an innovative solution for generating video content from images, guided by customizable text prompts. By leveraging these pre-built actions, developers can enhance their applications with creative video generation capabilities, making it easier than ever to produce engaging visual narratives.

Prerequisites

To get started with the Cognitive Actions, ensure you have:

  • An API key for the Cognitive Actions platform.
  • Basic knowledge of JSON and HTTP requests.

Authentication typically involves passing your API key in the request headers, allowing you to access the full suite of available actions.
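As a minimal sketch, the headers for each request might look like the following. The exact header scheme is an assumption here; confirm it against the Cognitive Actions platform documentation.

```python
# Hypothetical auth layout -- verify the exact scheme in the platform docs.
API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",  # bearer-token auth (assumed)
    "Content-Type": "application/json",
}
```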

Cognitive Actions Overview

Generate Video from Image

Description: This action allows you to create a short video sequence by transforming an image into video frames, guided by a text prompt. It supports various configurations such as frame count, resolution, and speed mode, enabling a balance between output quality and generation speed.

Category: Video Generation

Input

The action requires the following input schema:

  • prompt (string, required): A descriptive text prompt guiding the video generation.
    Example: "an extreme close up of an epic cyberpunk woman with BLACK_SCLERA"
  • image (string, optional): URI of the image to be used as the initial frame in the image-to-video generation process.
  • seed (integer, optional): Specify a seed value for consistent output across runs. Defaults to random.
  • guideScale (number, optional): Adjusts adherence to the prompt (default 5, range 0-10).
  • shiftFactor (number, optional): Defines the content shift during iteration (default 8, range 0-10).
  • clipStrength (number, optional): Influences the CLIP model (default 1).
  • excludePrompt (string, optional): Specify elements to exclude from the generated video.
  • modelStrength (number, optional): Controls the LoRA's impact (default 1).
  • numberOfFrames (integer, optional): Total frames in the video (default 81, range 17-81).
  • generationSpeed (string, optional): Choose a generation speed (default "Balanced").
  • generationSteps (integer, optional): Set the number of iterative steps (default 30, range 1-60).
  • videoResolution (string, optional): Resolution options for the generated video (default "480p").
  • videoAspectRatio (string, optional): Defines the video frame aspect ratio (default "16:9").
  • alternativeWeights (string, optional): Specify alternative LoRA weights for specialized outputs.

Example Input:

{
  "prompt": "an extreme close up of an epic cyberpunk woman with BLACK_SCLERA",
  "guideScale": 5,
  "shiftFactor": 8,
  "clipStrength": 1,
  "excludePrompt": "",
  "modelStrength": 1,
  "numberOfFrames": 81,
  "generationSteps": 30,
  "videoAspectRatio": "16:9"
}
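Since several parameters have documented defaults and valid ranges, it can help to assemble the payload programmatically. The helper below is an illustrative sketch (not part of any Cognitive Actions SDK) that fills in the defaults from the schema above and clamps numeric values to their documented ranges.

```python
def build_payload(prompt, **overrides):
    """Assemble an input payload, applying the documented defaults and
    clamping numeric values to their documented ranges.

    Illustrative helper only -- not part of the Cognitive Actions SDK.
    """
    defaults = {
        "guideScale": 5,          # range 0-10
        "shiftFactor": 8,         # range 0-10
        "clipStrength": 1,
        "excludePrompt": "",
        "modelStrength": 1,
        "numberOfFrames": 81,     # range 17-81
        "generationSpeed": "Balanced",
        "generationSteps": 30,    # range 1-60
        "videoResolution": "480p",
        "videoAspectRatio": "16:9",
    }
    ranges = {
        "guideScale": (0, 10),
        "shiftFactor": (0, 10),
        "numberOfFrames": (17, 81),
        "generationSteps": (1, 60),
    }
    payload = {**defaults, **overrides, "prompt": prompt}
    for key, (lo, hi) in ranges.items():
        payload[key] = max(lo, min(hi, payload[key]))
    return payload
```

For example, `build_payload("an epic cyberpunk woman with BLACK_SCLERA", numberOfFrames=200)` silently clamps the frame count back to the maximum of 81.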

Output

The action returns an array containing the URL(s) of the generated video file(s), based on the input parameters.

Example Output:

[
  "https://assets.cognitiveactions.com/invocations/ace418a8-f306-4b68-8282-338a747d82e7/470259ec-e790-4ee0-8158-21dcff2bf596.mp4"
]
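Once you have a URL from the output array, you will usually want to save the MP4 locally. The sketch below streams the asset to disk with `requests`; the URL format follows the example output above, but treat the details as an assumption.

```python
import requests


def filename_from_url(url: str) -> str:
    """Derive a local filename from the final path segment of the asset URL."""
    return url.rsplit("/", 1)[-1]


def download_video(video_url: str) -> str:
    """Stream a generated MP4 to disk. Illustrative sketch only;
    the asset URL comes from the action's output array."""
    dest = filename_from_url(video_url)
    with requests.get(video_url, stream=True, timeout=60) as resp:
        resp.raise_for_status()  # fail loudly on 4xx/5xx
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
    return dest
```

Streaming with `iter_content` avoids loading the entire video into memory, which matters for longer, higher-resolution outputs.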

Conceptual Usage Example (Python)

Here’s how you might call the Generate Video from Image action using Python:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "53501b85-8f44-47d4-9e24-66eac299228f" # Action ID for Generate Video from Image

# Construct the input payload based on the action's requirements
payload = {
  "prompt": "an extreme close up of an epic cyberpunk woman with BLACK_SCLERA",
  "guideScale": 5,
  "shiftFactor": 8,
  "clipStrength": 1,
  "excludePrompt": "",
  "modelStrength": 1,
  "numberOfFrames": 81,
  "generationSteps": 30,
  "videoAspectRatio": "16:9"
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload} # Hypothetical structure
    )
    response.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")

In this code snippet, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The payload is constructed based on the required inputs for the action, and the request is sent to a hypothetical endpoint.

Conclusion

The fofr/wan-14b-black-sclera Cognitive Actions empower developers to transform static images into captivating videos with ease. By utilizing the Generate Video from Image action, you can harness the creativity of AI to produce unique video content tailored to your specifications. Experiment with different prompts and settings to unlock the full potential of video generation for your applications!