Create Stunning Videos with wcarle/stable-diffusion-videos-openjourney Cognitive Actions

24 Apr 2025
The wcarle/stable-diffusion-videos-openjourney API enables developers to leverage advanced video generation techniques using the Stable Diffusion model. With this powerful set of Cognitive Actions, you can create captivating videos by interpolating between different text prompts, transforming your creative ideas into visual narratives. These pre-built actions streamline the process of video creation, saving you time and effort while ensuring high-quality outputs.

Prerequisites

Before integrating the Cognitive Actions into your application, ensure you have the following:

  • An API key for the Cognitive Actions platform.
  • Basic familiarity with making HTTP requests in your programming language of choice.

Authentication typically involves passing your API key in the request headers.
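For example, the headers might be constructed as follows. The exact scheme is an assumption here (a `Bearer` token is common, but your platform may use a custom header), so check the platform's authentication documentation:

```python
# Hypothetical header construction; the Bearer scheme is an assumption.
API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```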

Cognitive Actions Overview

Generate Video from Stable Diffusion Interpolation

Description: This action allows you to create videos by interpolating the latent space of the Stable Diffusion model using the Openjourney framework. You can morph between multiple text prompts and customize various video parameters, such as scheduler types and random seeds.

Category: Video Generation

Input

The input schema for this action requires the following fields:

  • scheduler: Specifies the scheduler type to use. Options are default, ddim, and klms; if omitted, klms is used.
  • randomSeeds: Defines the random seeds used for each prompt, separated by |. Leave blank for automatic randomization.
  • inputPrompts: Specifies input prompts separated by |. Multiple prompts can be provided, and each will be processed separately.
  • numberOfSteps: Sets the number of steps for generating the interpolation video. For testing, use 3 or 5 steps; increase to 60-200 for better quality results.
  • framesPerSecond: Determines the frame rate for the video, which must be between 5 and 60 frames per second.
  • guidanceScaleValue: Adjusts the scale for classifier-free guidance, with potential values ranging from 1 to 20.
  • numberOfInferenceSteps: Specifies the number of denoising steps for generating each image from the prompt, accepting values from 1 to 500.

Example Input:

{
  "scheduler": "klms",
  "randomSeeds": "42 | 1337",
  "inputPrompts": "full body cyborg full-length portrait detailed face symmetric steampunk cyberpunk cyborg intricate detailed to scale hyperrealistic cinematic lighting digital art concept art mdjrny-v4 style | full body cyborg full-length portrait detailed face symmetric steampunk cyberpunk cyborg intricate detailed to scale hyperrealistic cinematic lighting digital art concept art mdjrny-v4 style",
  "numberOfSteps": 100,
  "framesPerSecond": 15,
  "guidanceScaleValue": 7.5,
  "numberOfInferenceSteps": 50
}
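Because the API enforces ranges on several fields, it can be useful to validate a payload client-side before sending it. The helper below is a hypothetical sketch based only on the ranges documented above (it is not part of the API itself):

```python
# Hypothetical client-side validator for the documented input ranges.
def validate_inputs(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks valid."""
    errors = []
    if payload.get("scheduler") not in ("default", "ddim", "klms"):
        errors.append("scheduler must be 'default', 'ddim', or 'klms'")
    if not 5 <= payload.get("framesPerSecond", 0) <= 60:
        errors.append("framesPerSecond must be between 5 and 60")
    if not 1 <= payload.get("guidanceScaleValue", 0) <= 20:
        errors.append("guidanceScaleValue must be between 1 and 20")
    if not 1 <= payload.get("numberOfInferenceSteps", 0) <= 500:
        errors.append("numberOfInferenceSteps must be between 1 and 500")
    if not payload.get("inputPrompts", "").strip():
        errors.append("inputPrompts must contain at least one prompt")
    return errors

example = {
    "scheduler": "klms",
    "inputPrompts": "a | b",
    "framesPerSecond": 15,
    "guidanceScaleValue": 7.5,
    "numberOfInferenceSteps": 50,
}
print(validate_inputs(example))  # → []
```

Catching an out-of-range value locally avoids a round trip to the API for a request that would be rejected anyway.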

Output

The action typically returns a URL linking to the generated video. Here's an example of the output you might receive:

https://assets.cognitiveactions.com/invocations/a5ec7637-94f9-47fe-8e07-c6dbf12ec548/e3e9772a-d2a8-4603-9c3b-bbf584acca7e.mp4
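Once you have the URL from the response, you can save the MP4 locally. The helper below is a sketch using the `requests` library (already used elsewhere in this post); streaming the download avoids holding the whole video in memory:

```python
import requests

def download_video(video_url: str, dest_path: str) -> str:
    """Stream the generated MP4 to a local file and return the local path."""
    resp = requests.get(video_url, stream=True, timeout=60)
    resp.raise_for_status()
    with open(dest_path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
    return dest_path

# Usage (assumes video_url comes from the action's output):
# local_file = download_video(video_url, "openjourney_video.mp4")
```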

Conceptual Usage Example (Python)

Below is a conceptual example of how you might call the Cognitive Actions execution endpoint in Python to generate a video:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "2d50a548-310e-4986-8797-6cd2c6c6aa75"  # Action ID for Generate Video from Stable Diffusion Interpolation

# Construct the input payload based on the action's requirements
payload = {
    "scheduler": "klms",
    "randomSeeds": "42 | 1337",
    "inputPrompts": "full body cyborg full-length portrait detailed face symmetric steampunk cyberpunk cyborg intricate detailed to scale hyperrealistic cinematic lighting digital art concept art mdjrny-v4 style | full body cyborg full-length portrait detailed face symmetric steampunk cyberpunk cyborg intricate detailed to scale hyperrealistic cinematic lighting digital art concept art mdjrny-v4 style",
    "numberOfSteps": 100,
    "framesPerSecond": 15,
    "guidanceScaleValue": 7.5,
    "numberOfInferenceSteps": 50
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload},  # Hypothetical structure
        timeout=600  # Video generation can take several minutes
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")

In this code snippet, replace the COGNITIVE_ACTIONS_API_KEY with your actual API key. The input payload is constructed based on the specified schema, and the action ID corresponds to the video generation action. This example demonstrates how to handle responses and potential errors gracefully.

Conclusion

By utilizing the wcarle/stable-diffusion-videos-openjourney Cognitive Actions, developers can effortlessly create dynamic and engaging videos from textual prompts. The flexibility and power of these actions unlock new creative possibilities, whether you're building a personal project or integrating into larger applications. Explore these actions and consider how they can enhance your next video generation project!