Transform Still Images into Animated Videos with Moore-AnimateAnyone Cognitive Actions

24 Apr 2025

In the world of digital media, transforming still images into engaging animated videos opens up exciting creative possibilities. The Moore-AnimateAnyone Cognitive Actions provide developers with the tools to harness advanced animation techniques. By integrating these pre-built actions, you can automate the animation of images using motion sequences from video references, making your applications more dynamic and visually appealing.

Prerequisites

Before you start using the Cognitive Actions from the Moore-AnimateAnyone spec, you will need:

  • An API key for the Cognitive Actions platform.
  • Basic understanding of JSON and HTTP requests.

For authentication, you will typically pass your API key in the headers of your requests to ensure secure access to the actions.
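As a quick sketch, the header pattern would look something like the following in Python. The Bearer scheme shown here is an assumption based on common REST practice; confirm the exact format against the platform documentation.

```python
# Build the request headers for a Cognitive Actions call.
# The "Bearer" scheme is assumed; verify it in the platform docs.
def build_headers(api_key: str) -> dict:
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = build_headers("YOUR_COGNITIVE_ACTIONS_API_KEY")
```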

Cognitive Actions Overview

Animate Still Image Using DWPose Video

This action allows you to transform a still image into an animated video by applying movements from a DWPose estimation video. It leverages the Moore-AnimateAnyone model to produce animations that approach the quality of the original AnimateAnyone demonstrations.

Input

The input for this action requires the following fields:

  • motionSequence (string, required): URI path to the motion sequence video file used as input for animation generation.
  • referenceImage (string, required): URI path to the reference image file that guides the animation's visual style.
  • seed (integer, optional): Random seed for reproducibility of animation results.
  • width (integer, optional): Desired width of the output video in pixels (default is 512, range 448-768).
  • height (integer, optional): Desired height of the output video in pixels (default is 768, range 512-1024).
  • length (integer, optional): Desired length of the output video in frames (default is 24, range 24-128).
  • guidanceScale (number, optional): Scale factor for guidance during animation generation (default is 3.5).
  • samplingSteps (integer, optional): Number of steps used in the animation sampling process (default is 25).
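Before sending a request, it can help to fill in the documented defaults and check the optional values against their allowed ranges. The helper below is purely illustrative (it is not part of any official SDK); the defaults and ranges are taken from the schema above.

```python
# Documented defaults and allowed ranges from the input schema above.
# The helper itself is hypothetical, written only to illustrate validation.
DEFAULTS = {"width": 512, "height": 768, "length": 24,
            "guidanceScale": 3.5, "samplingSteps": 25}
RANGES = {"width": (448, 768), "height": (512, 1024), "length": (24, 128)}

def prepare_inputs(motion_sequence: str, reference_image: str, **options) -> dict:
    """Merge user options over the defaults and reject out-of-range values."""
    inputs = {**DEFAULTS, **options,
              "motionSequence": motion_sequence,
              "referenceImage": reference_image}
    for key, (lo, hi) in RANGES.items():
        if not lo <= inputs[key] <= hi:
            raise ValueError(f"{key}={inputs[key]} outside allowed range {lo}-{hi}")
    return inputs
```

Catching bad values client-side saves a round trip to the API, which matters when a single animation request can run for minutes.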

Example Input JSON:

{
  "width": 512,
  "height": 768,
  "length": 128,
  "guidanceScale": 3.5,
  "samplingSteps": 25,
  "motionSequence": "https://replicate.delivery/pbxt/KFWQJTPDhFou0smS93XVLMlyVQIedFa2GiP4C1gfTaW5GnQF/anyone-video-5_kps.mp4",
  "referenceImage": "https://replicate.delivery/pbxt/KFWQJsyppeXHrUBQnwo5PS4eaAFe0utu15cvH5TUSrKz3hOR/anyone-10.png"
}

Output

Upon successful execution, this action typically returns a URL pointing to the generated animated video.

Example Output:

https://assets.cognitiveactions.com/invocations/86eeb9df-c7f0-44f2-82c1-a5ac342f9660/b9c3d318-93c4-4460-8d04-85442028eac8.mp4
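Because the action returns a plain URL, saving the result locally is a standard file download. Here is a minimal standard-library sketch; `urlretrieve` is just one of several ways to fetch the file, and the filename logic simply reuses the last path segment of the URL.

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def local_name(video_url: str) -> str:
    """Derive a local filename from the last path segment of the URL."""
    return os.path.basename(urlparse(video_url).path)

def download_video(video_url: str) -> str:
    """Download the generated video into the working directory, return its path."""
    path = local_name(video_url)
    urlretrieve(video_url, path)  # simple blocking download
    return path
```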

Conceptual Usage Example (Python)

Here’s how you might call this action using Python:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "7c4a41d2-4d88-4249-8213-7049244c0a6d"  # Action ID for Animate Still Image Using DWPose Video

# Construct the input payload based on the action's requirements
payload = {
    "width": 512,
    "height": 768,
    "length": 128,
    "guidanceScale": 3.5,
    "samplingSteps": 25,
    "motionSequence": "https://replicate.delivery/pbxt/KFWQJTPDhFou0smS93XVLMlyVQIedFa2GiP4C1gfTaW5GnQF/anyone-video-5_kps.mp4",
    "referenceImage": "https://replicate.delivery/pbxt/KFWQJsyppeXHrUBQnwo5PS4eaAFe0utu15cvH5TUSrKz3hOR/anyone-10.png"
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload},  # Hypothetical structure
        timeout=300,  # animation generation can take a while
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")

In this example, replace "YOUR_COGNITIVE_ACTIONS_API_KEY" with your actual API key. The action_id corresponds to the action you want to execute, and the payload conforms to the input schema outlined above.

Conclusion

With the Moore-AnimateAnyone Cognitive Actions, you can easily create dynamic animations from still images, enhancing the interactivity and engagement of your applications. By utilizing the provided action, developers can tap into advanced animation techniques without needing to delve deeply into complex algorithms. Start experimenting with these actions today to unlock new creative possibilities in your projects!