Execute ComfyUI Workflows on A100: A Developer's Guide to Cognitive Actions

Efficient workflow execution is crucial for developers looking to harness the power of advanced GPUs. The fofr/any-comfyui-workflow-a100 spec offers a set of Cognitive Actions that let you execute any ComfyUI workflow on an A100 GPU. These pre-built actions streamline the customization of JSON inputs, output formats, and settings, promoting efficient workflow management and debugging.
With support for popular models and custom nodes, developers can enhance their applications with capabilities like seed randomization and temporary-file retrieval. This article walks through integrating these Cognitive Actions into your applications.
Prerequisites
Before you get started, ensure that you have the following:
- API Key: You will need an API key to authenticate your requests to the Cognitive Actions platform.
- Familiarity with JSON: Understanding JSON will help you structure the inputs and outputs effectively.
Authentication typically involves passing your API key in the request headers, allowing you to securely execute the actions.
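As a minimal sketch, assuming a bearer-token scheme (the exact header names and scheme may differ in your deployment), the authenticated request headers might look like this:

```python
# Hypothetical: header names and auth scheme depend on your Cognitive Actions deployment.
API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"  # replace with your actual key

headers = {
    "Authorization": f"Bearer {API_KEY}",  # bearer-token auth (assumed scheme)
    "Content-Type": "application/json",    # request bodies are JSON
}
```

These headers are then passed with every request to the execution endpoint, as shown in the full example below.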
Cognitive Actions Overview
Run ComfyUI Workflow on A100
The Run ComfyUI Workflow on A100 action allows you to execute any ComfyUI workflow on an A100 GPU. You can customize JSON inputs, choose output formats, and set various execution parameters, making it a versatile tool for managing workflows.
- Category: Tools
- Description: Execute any ComfyUI workflow on an A100 GPU with customizable inputs, output formats, and settings.
Input
The input schema for this action is defined as follows:
- inputFile: URI of the input file (image, tar, or zip).
- workflowJson: JSON representation of the ComfyUI workflow.
- outputFormat: Format of the output images (webp, jpg, or png; default is webp).
- outputQuality: Quality of the output images (0 to 100; default is 95).
- randomizeSeeds: Boolean to randomize seeds (default is true).
- forceResetCache: Boolean to reset the ComfyUI cache before running (default is false).
- returnTempFiles: Boolean to return any temporary files generated (default is false).
Example Input:
{
"outputFormat": "webp",
"workflowJson": "{\n \"3\": {\n \"inputs\": {\n \"seed\": 156680208700286,\n \"steps\": 10,\n \"cfg\": 2.5,\n \"sampler_name\": \"dpmpp_2m_sde\",\n \"scheduler\": \"karras\",\n \"denoise\": 1,\n \"model\": [\n \"4\",\n 0\n ],\n \"positive\": [\n \"6\",\n 0\n ],\n \"negative\": [\n \"7\",\n 0\n ],\n \"latent_image\": [\n \"5\",\n 0\n ]\n },\n \"class_type\": \"KSampler\",\n \"_meta\": {\n \"title\": \"KSampler\"\n }\n },\n \"4\": {\n \"inputs\": {\n \"ckpt_name\": \"SDXL-Flash.safetensors\"\n },\n \"class_type\": \"CheckpointLoaderSimple\",\n \"_meta\": {\n \"title\": \"Load Checkpoint\"\n }\n },\n \"5\": {\n \"inputs\": {\n \"width\": 1024,\n \"height\": 1024,\n \"batch_size\": 1\n },\n \"class_type\": \"EmptyLatentImage\",\n \"_meta\": {\n \"title\": \"Empty Latent Image\"\n }\n },\n \"6\": {\n \"inputs\": {\n \"text\": \"beautiful scenery nature glass bottle landscape, purple galaxy bottle,\",\n \"clip\": [\n \"4\",\n 1\n ]\n },\n \"class_type\": \"CLIPTextEncode\",\n \"_meta\": {\n \"title\": \"CLIP Text Encode (Prompt)\"\n }\n },\n \"7\": {\n \"inputs\": {\n \"text\": \"text, watermark\",\n \"clip\": [\n \"4\",\n 1\n ]\n },\n \"class_type\": \"CLIPTextEncode\",\n \"_meta\": {\n \"title\": \"CLIP Text Encode (Prompt)\"\n }\n },\n \"8\": {\n \"inputs\": {\n \"samples\": [\n \"3\",\n 0\n ],\n \"vae\": [\n \"4\",\n 2\n ]\n },\n \"class_type\": \"VAEDecode\",\n \"_meta\": {\n \"title\": \"VAE Decode\"\n }\n },\n \"9\": {\n \"inputs\": {\n \"filename_prefix\": \"ComfyUI\",\n \"images\": [\n \"8\",\n 0\n ]\n },\n \"class_type\": \"SaveImage\",\n \"_meta\": {\n \"title\": \"Save Image\"\n }\n }\n}\n",
"outputQuality": 95,
"randomizeSeeds": true,
"forceResetCache": false,
"returnTempFiles": false
}
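Note that workflowJson is a JSON-encoded string, not a nested object. Rather than hand-escaping it as in the example above, you can build the workflow graph as a plain Python dict and serialize it with json.dumps. A minimal sketch (the two-node fragment here is illustrative, not a complete workflow):

```python
import json

# Illustrative fragment of a ComfyUI workflow graph, built as a plain dict.
workflow = {
    "4": {
        "inputs": {"ckpt_name": "SDXL-Flash.safetensors"},
        "class_type": "CheckpointLoaderSimple",
        "_meta": {"title": "Load Checkpoint"},
    },
    "5": {
        "inputs": {"width": 1024, "height": 1024, "batch_size": 1},
        "class_type": "EmptyLatentImage",
        "_meta": {"title": "Empty Latent Image"},
    },
}

payload = {
    "outputFormat": "webp",                # webp, jpg, or png
    "workflowJson": json.dumps(workflow),  # serialized to a string, as the schema expects
    "outputQuality": 95,                   # 0 to 100
    "randomizeSeeds": True,
    "forceResetCache": False,
    "returnTempFiles": False,
}
```

Building the graph as a dict and serializing it at the last moment avoids escaping mistakes and makes it easy to tweak node inputs (seeds, prompts, dimensions) programmatically before each run.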
Output
The action typically returns a URL to the output image. For example:
Example Output:
[
"https://assets.cognitiveactions.com/invocations/25d140f9-6be3-4202-b351-2dba887d8083/b1562b2a-4d07-4735-bbcc-4dc22fc738cf.webp"
]
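Since the result is a JSON array of asset URLs, a typical follow-up step is to fetch each file and save it locally. A sketch, assuming the response shape shown above (the download_outputs helper is hypothetical and requires the requests package):

```python
from pathlib import Path
from urllib.parse import urlparse

# A successful invocation returns a JSON array of asset URLs, as shown above.
result = [
    "https://assets.cognitiveactions.com/invocations/25d140f9-6be3-4202-b351-2dba887d8083/b1562b2a-4d07-4735-bbcc-4dc22fc738cf.webp"
]

def local_name(url: str) -> str:
    # Derive a local filename from the last path segment of the asset URL.
    return Path(urlparse(url).path).name

def download_outputs(urls):
    # Hypothetical helper: fetch each asset and write it to the working directory.
    import requests
    for url in urls:
        resp = requests.get(url, timeout=60)
        resp.raise_for_status()
        Path(local_name(url)).write_bytes(resp.content)

print([local_name(u) for u in result])
```

Output URLs for hosted invocations are often short-lived, so downloading results promptly (or re-hosting them yourself) is usually the safer design.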
Conceptual Usage Example (Python)
Here is a conceptual example demonstrating how to call the Run ComfyUI Workflow on A100 action using Python:
import requests
import json
# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint
action_id = "c28f88b9-db92-48b6-b183-582adccbbffd" # Action ID for Run ComfyUI Workflow on A100
# Construct the input payload based on the action's requirements
payload = {
"outputFormat": "webp",
"workflowJson": "{\n \"3\": {\n \"inputs\": {\n \"seed\": 156680208700286,\n \"steps\": 10,\n \"cfg\": 2.5,\n \"sampler_name\": \"dpmpp_2m_sde\",\n \"scheduler\": \"karras\",\n \"denoise\": 1,\n \"model\": [\n \"4\",\n 0\n ],\n \"positive\": [\n \"6\",\n 0\n ],\n \"negative\": [\n \"7\",\n 0\n ],\n \"latent_image\": [\n \"5\",\n 0\n ]\n },\n \"class_type\": \"KSampler\",\n \"_meta\": {\n \"title\": \"KSampler\"\n }\n },\n \"4\": {\n \"inputs\": {\n \"ckpt_name\": \"SDXL-Flash.safetensors\"\n },\n \"class_type\": \"CheckpointLoaderSimple\",\n \"_meta\": {\n \"title\": \"Load Checkpoint\"\n }\n },\n \"5\": {\n \"inputs\": {\n \"width\": 1024,\n \"height\": 1024,\n \"batch_size\": 1\n },\n \"class_type\": \"EmptyLatentImage\",\n \"_meta\": {\n \"title\": \"Empty Latent Image\"\n }\n },\n \"6\": {\n \"inputs\": {\n \"text\": \"beautiful scenery nature glass bottle landscape, purple galaxy bottle,\",\n \"clip\": [\n \"4\",\n 1\n ]\n },\n \"class_type\": \"CLIPTextEncode\",\n \"_meta\": {\n \"title\": \"CLIP Text Encode (Prompt)\"\n }\n },\n \"7\": {\n \"inputs\": {\n \"text\": \"text, watermark\",\n \"clip\": [\n \"4\",\n 1\n ]\n },\n \"class_type\": \"CLIPTextEncode\",\n \"_meta\": {\n \"title\": \"CLIP Text Encode (Prompt)\"\n }\n },\n \"8\": {\n \"inputs\": {\n \"samples\": [\n \"3\",\n 0\n ],\n \"vae\": [\n \"4\",\n 2\n ]\n },\n \"class_type\": \"VAEDecode\",\n \"_meta\": {\n \"title\": \"VAE Decode\"\n }\n },\n \"9\": {\n \"inputs\": {\n \"filename_prefix\": \"ComfyUI\",\n \"images\": [\n \"8\",\n 0\n ]\n },\n \"class_type\": \"SaveImage\",\n \"_meta\": {\n \"title\": \"Save Image\"\n }\n }\n}\n",
"outputQuality": 95,
"randomizeSeeds": true,
"forceResetCache": false,
"returnTempFiles": false
}
headers = {
"Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
"Content-Type": "application/json"
}
try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}  # Hypothetical structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)
    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))
except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")
In this snippet, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The action_id is set to the ID of the Run ComfyUI Workflow on A100 action, and the payload is structured according to the input schema. The endpoint URL and request structure are illustrative and should be adapted to your specific implementation.
Conclusion
Integrating the Run ComfyUI Workflow on A100 action into your applications can significantly enhance your workflow execution capabilities, enabling better management, debugging, and output customization. By leveraging these Cognitive Actions, you can streamline your development process and focus on creating innovative solutions.
Consider exploring various use cases, such as automated image processing or complex model training workflows, to fully utilize the potential of this powerful action. Happy coding!