Transform Your Images into Videos with Dreamcore WAN Cognitive Actions

22 Apr 2025
In today's digital landscape, the ability to create captivating content quickly is essential for developers and creators alike. The aramintak/dreamcore-wan API provides a powerful set of Cognitive Actions designed to transform static images into dynamic videos, all while allowing for customizable attributes to fine-tune the output. This blog post will give you a comprehensive overview of one of the key actions available, helping you integrate it seamlessly into your applications.

Prerequisites

Before you start using the Cognitive Actions from the aramintak/dreamcore-wan API, you'll need to ensure that you have:

  • An API key for the Cognitive Actions platform.
  • A basic understanding of how to make API calls.

When making requests, you'll typically pass your API key in the headers of your requests for authentication.
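A minimal sketch of what those headers might look like in Python. The `Authorization: Bearer` scheme shown here is an assumption; check the platform's documentation for the exact header name and format it expects:

```python
# Hypothetical authentication headers for a Cognitive Actions request.
# The Bearer scheme is an assumption; consult the platform docs.
API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```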

Cognitive Actions Overview

Generate Video from Image with Dreamcore Style

Purpose

This action transforms a starting image into a video, guided by a text prompt. Configurable parameters give you control over frame count, resolution, aspect ratio, generation speed, and quality, so you can tune the output for your specific use case.

Input

The input for this action is structured as follows:

  • prompt (required): A descriptive text guiding the content of the generated video.
  • seed (optional): An integer seed value for randomness.
  • image (optional): URI of the image to use as the initial frame.
  • resolution (optional): Select the video's resolution (default: 480p).
  • aspectRatio (optional): Choose the aspect ratio (default: 16:9).
  • sampleShift (optional): Shift factor for the sample (default: 8).
  • sampleSteps (optional): Specifies the number of generation steps (default: 30).
  • customWeights (optional): Specify custom LoRA weights.
  • negativePrompt (optional): Specify elements to exclude from the video.
  • numberOfFrames (optional): Fixed number of frames for the video (default: 81).
  • generationSpeed (optional): Choose the speed of video generation (default: Balanced).
  • sampleGuideScale (optional): Adjust the guide scale for prompt adherence (default: 5).
  • clipModelStrength (optional): Adjust the strength of LoRA adjustments to the CLIP model (default: 1).
  • modelLoraStrength (optional): Set the LoRA strength applied to the base model (default: 1).
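The defaults listed above can be collected into a small helper that merges caller overrides onto them. This is just a convenience sketch built from the schema above — the `build_payload` helper itself is not part of the aramintak/dreamcore-wan API:

```python
# Defaults taken from the parameter list above; the merge helper is a
# convenience sketch, not part of the aramintak/dreamcore-wan API.
DEFAULTS = {
    "resolution": "480p",
    "aspectRatio": "16:9",
    "sampleShift": 8,
    "sampleSteps": 30,
    "numberOfFrames": 81,
    "generationSpeed": "Balanced",
    "sampleGuideScale": 5,
    "clipModelStrength": 1,
    "modelLoraStrength": 1,
}

def build_payload(prompt: str, **overrides) -> dict:
    """Merge caller overrides onto the documented defaults.

    `prompt` is the only required field; anything passed in
    `overrides` replaces the corresponding default.
    """
    return {"prompt": prompt, **DEFAULTS, **overrides}

payload = build_payload(
    "a surreal beast emerging from the ether, dreamcore style",
    sampleSteps=40,  # override just the step count
)
```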

Here’s an example of a valid input payload:

{
  "prompt": "a surreal beast emerging from the ether, dreamcore style",
  "resolution": "480p",
  "aspectRatio": "16:9",
  "sampleShift": 8,
  "sampleSteps": 30,
  "negativePrompt": "",
  "numberOfFrames": 81,
  "generationSpeed": "Balanced",
  "sampleGuideScale": 5,
  "clipModelStrength": 1,
  "modelLoraStrength": 1
}

Output

On success, this action returns a JSON array containing the URL of the generated video. Here's an example:

[
  "https://assets.cognitiveactions.com/invocations/c4e677c8-787e-4dbd-a2ac-d83dea7e614d/86916a38-55f9-4016-8351-8e4a6c049616.mp4"
]
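Since the output is a JSON array whose first element is the video URL, pulling out the URL and deriving a local filename takes only a few lines. This assumes the response shape shown above; the network call is shown but left commented so the snippet stands alone:

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

# Output shape assumed from the example above: a JSON array of URLs.
output = [
    "https://assets.cognitiveactions.com/invocations/c4e677c8-787e-4dbd-a2ac-d83dea7e614d/86916a38-55f9-4016-8351-8e4a6c049616.mp4"
]

video_url = output[0]
# Derive a local filename from the last segment of the URL path.
filename = PurePosixPath(urlparse(video_url).path).name

# To save the video locally (network call, not executed here):
# import urllib.request
# urllib.request.urlretrieve(video_url, filename)
```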

Conceptual Usage Example (Python)

Here’s how you might structure a call to the Cognitive Actions execution endpoint in Python:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "038cdee1-e187-4247-baf1-83c81ebddef5" # Action ID for Generate Video from Image with Dreamcore Style

# Construct the input payload based on the action's requirements
payload = {
    "prompt": "a surreal beast emerging from the ether, dreamcore style",
    "resolution": "480p",
    "aspectRatio": "16:9",
    "sampleShift": 8,
    "sampleSteps": 30,
    "negativePrompt": "",
    "numberOfFrames": 81,
    "generationSpeed": "Balanced",
    "sampleGuideScale": 5,
    "clipModelStrength": 1,
    "modelLoraStrength": 1
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload} # Hypothetical structure
    )
    response.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")

In this code snippet, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The action_id is set to the ID for the video generation action, and the payload is constructed according to the input schema described above.

Conclusion

The Generate Video from Image with Dreamcore Style action from the aramintak/dreamcore-wan API provides a robust way to create engaging video content from static images. With customizable parameters, developers can tailor video outputs that suit their needs, whether for marketing, storytelling, or artistic expression. Start experimenting with this action today, and unlock new possibilities for content creation in your applications!