Execute ComfyUI Workflows on A100 GPUs with Cognitive Actions

24 Apr 2025

In the realm of image processing, leveraging powerful hardware like the A100 GPU can significantly enhance the efficiency and quality of your workflows. The fofr/any-comfyui-workflow-a100 specification provides a specialized Cognitive Action that allows developers to execute any ComfyUI workflow on the A100 architecture. This action supports popular model weights and offers customizable inputs, enabling the transformation of JSON workflow definitions into image or MP4 outputs. This blog post will guide you through the capabilities of this action and how you can seamlessly integrate it into your applications.

Prerequisites

Before diving into the integration of Cognitive Actions, ensure you have the following:

  • An API key for accessing the Cognitive Actions platform.
  • Familiarity with JSON structures, as you'll be working with workflow definitions in JSON format.
  • An understanding of how to set up authorization, which typically involves passing your API key in the request headers.

Cognitive Actions Overview

Run ComfyUI Workflow on A100

Purpose

This action executes any ComfyUI workflow on an A100 GPU, allowing users to harness its computational power for image processing tasks. It facilitates the use of popular model weights and enables customization of workflow inputs, making it a versatile tool for developers.

Input

The input for this action is structured as follows:

  • inputFile (string, required): URI for the input image, tar, or zip file. You can specify a direct link to the input file or utilize URLs within the JSON workflow for automatic downloading.
  • outputFormat (string, optional): Specifies the image format for outputs. Supported formats include webp, jpg, and png, with a default of webp.
  • workflowJson (string, required): The ComfyUI workflow definition in JSON format. This must be in the API version format, which you can obtain via the "Save (API format)" option in ComfyUI.
  • outputQuality (integer, optional): Sets the quality of the output images on a scale from 0 to 100, with a default of 95.
  • randomiseSeeds (boolean, optional): Automatically randomizes seed-related parameters to ensure varied outputs, defaulting to true.
  • forceResetCache (boolean, optional): Forces a reset of the ComfyUI cache before executing the workflow, helpful for troubleshooting, defaulting to false.
  • returnTempFiles (boolean, optional): If enabled, returns temporary files like preprocessed images, useful for debugging, defaulting to false.

Example Input:

{
  "outputFormat": "webp",
  "workflowJson": "{\n  \"3\": {\n    \"inputs\": {\n      \"seed\": 156680208700286,\n      \"steps\": 10,\n      \"cfg\": 2.5,\n      \"sampler_name\": \"dpmpp_2m_sde\",\n      \"scheduler\": \"karras\",\n      \"denoise\": 1,\n      \"model\": [\n        \"4\",\n        0\n      ],\n      \"positive\": [\n        \"6\",\n        0\n      ],\n      \"negative\": [\n        \"7\",\n        0\n      ],\n      \"latent_image\": [\n        \"5\",\n        0\n      ]\n    },\n    \"class_type\": \"KSampler\",\n    \"_meta\": {\n      \"title\": \"KSampler\"\n    }\n  }, ...}",
  "outputQuality": 95,
  "randomiseSeeds": true,
  "forceResetCache": false,
  "returnTempFiles": false
}
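Because `workflowJson` must be a string rather than a nested object, a common pattern is to build the workflow as a Python dictionary and serialize it with `json.dumps`. The sketch below reconstructs the KSampler node from the example input; a real workflow exported via ComfyUI's "Save (API format)" option will contain many more nodes.

```python
import json

# A minimal ComfyUI workflow fragment (API format), modeled on the
# KSampler node in the example input above. Node IDs are strings, and
# node references are ["node_id", output_index] pairs.
workflow = {
    "3": {
        "inputs": {
            "seed": 156680208700286,
            "steps": 10,
            "cfg": 2.5,
            "sampler_name": "dpmpp_2m_sde",
            "scheduler": "karras",
            "denoise": 1,
            "model": ["4", 0],
            "positive": ["6", 0],
            "negative": ["7", 0],
            "latent_image": ["5", 0],
        },
        "class_type": "KSampler",
        "_meta": {"title": "KSampler"},
    }
}

# The action expects workflowJson as a string, so serialize the dict.
workflow_json = json.dumps(workflow, indent=2)
print(type(workflow_json).__name__)  # str
```

Building the workflow as a dictionary first also lets you tweak parameters programmatically (for example, overriding `steps` or `cfg`) before serialization.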

Output

The output of the action typically returns a URL pointing to the generated image or video file. For example:

Example Output:

[
  "https://assets.cognitiveactions.com/invocations/fac05cf6-d6f8-4a64-b6a8-33939a2e5281/9009d970-53b5-422f-89b5-43018bbf3ad5.webp"
]
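Since the action returns URLs rather than file contents, you will typically want to download the generated assets. The sketch below assumes the output is a JSON array of URLs, as in the example above; the `download_output` helper is hypothetical and simply streams one URL to disk with `requests`.

```python
import requests

# Example output from the action: a list of asset URLs (taken from the
# sample output above).
output_urls = [
    "https://assets.cognitiveactions.com/invocations/fac05cf6-d6f8-4a64-b6a8-33939a2e5281/9009d970-53b5-422f-89b5-43018bbf3ad5.webp"
]

def download_output(url: str, dest: str) -> str:
    """Stream a generated file to disk and return the local path."""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()  # surface 4xx/5xx errors early
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
    return dest

# Usage (network call, commented out here):
# download_output(output_urls[0], "output.webp")
```

Streaming with `iter_content` avoids loading large MP4 outputs entirely into memory.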

Conceptual Usage Example (Python)

Here’s a conceptual Python code snippet that demonstrates how to invoke this action:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "5dacf49f-87a5-4926-8cf3-a99f96ca2579"  # Action ID for Run ComfyUI Workflow on A100

# Construct the input payload based on the action's requirements
payload = {
    "outputFormat": "webp",
    "workflowJson": "{\n  \"3\": {\n    \"inputs\": {...}\n  }, ...}",  # Full JSON omitted for brevity
    "outputQuality": 95,
    "randomiseSeeds": true,
    "forceResetCache": false,
    "returnTempFiles": false
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError:  # body isn't valid JSON
            print(f"Response body: {e.response.text}")

In this code, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The action_id is specified for the "Run ComfyUI Workflow on A100" action, and the input payload is constructed according to the action's requirements.

Conclusion

The Run ComfyUI Workflow on A100 action offers a powerful way to leverage A100 GPUs for image processing tasks. By understanding how to structure your inputs and handle outputs, you can enhance your applications with efficient and high-quality image generation workflows. Consider exploring additional use cases or combining this action with other Cognitive Actions to maximize your development efforts. Happy coding!