Enhance Your Applications with Image Inpainting Using delta-lock/noobai-xl Cognitive Actions

25 Apr 2025

Pre-built Cognitive Actions let you add sophisticated AI capabilities to an application without training or hosting models yourself. The delta-lock/noobai-xl API offers powerful image generation and manipulation capabilities, particularly through its inpainting functionality. This post walks developers through integrating the "Generate Image with Inpainting" action into their applications, enabling dynamic image creation tailored to specific needs.

Prerequisites

Before diving into the integration of the Cognitive Actions, ensure you have the following prerequisites in place:

  • An API key for the delta-lock/noobai-xl Cognitive Actions platform.
  • Basic understanding of JSON and RESTful API concepts.
  • Familiarity with making HTTP requests in Python or a similar programming language.

Authentication typically involves passing your API key as a Bearer token in the request headers, ensuring secure access to the Cognitive Actions.
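As a quick sketch, the authentication headers might be built like this in Python. The exact header format is an assumption based on typical Bearer-token schemes, and the environment variable name is a placeholder, not an official one:

```python
import os

# Read the key from an environment variable rather than hard-coding it in source.
# COGNITIVE_ACTIONS_API_KEY here is a placeholder name, not an official variable.
api_key = os.environ.get("COGNITIVE_ACTIONS_API_KEY", "YOUR_COGNITIVE_ACTIONS_API_KEY")

# Standard Bearer-token headers for JSON requests.
auth_headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```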

Cognitive Actions Overview

Generate Image with Inpainting

The Generate Image with Inpainting action creates images with targeted alterations: an inpainting mask marks which regions of a base image should be regenerated while the rest is preserved. The operation utilizes advanced models and is customized through parameters such as masks, dimensions, and various scales. Features like the CFG and PAG scales let developers fine-tune how closely the generation adheres to the input prompt, giving fine-grained creative control.

Input

The action requires a structured JSON input. Here’s an overview of the input schema along with an example:

  • mask (string, required): URI of the inpainting mask. White areas will be altered, while black areas will remain unchanged.
  • seed (integer, optional): Determines the initial state for generation. Use -1 for a random seed.
  • image (string, required): URI of the base image for inpainting tasks.
  • steps (integer, optional): Number of steps in the generation process (1 to 100). Default is 35.
  • width (integer, optional): Target width of the output image (1 to 4096). Default is 1184.
  • height (integer, optional): Target height of the output image (1 to 4096). Default is 864.
  • prompt (string, optional): Text prompt for image generation. Default is "1girl".
  • cfgScale (number, optional): Controls how strictly the model adheres to the prompt (1 to 50). Default is 5.
  • clipSkip (integer, optional): Number of CLIP layers to bypass. Default is 1.
  • pagScale (number, optional): Perturbed-Attention Guidance (PAG) scale, a second guidance mechanism that complements CFG to improve output quality (0 to 50). Default is 3.
  • strength (number, optional): How much noise is applied to the base image in image-to-image tasks; higher values deviate further from the original (0 to 1). Default is 0.7.
  • batchSize (integer, optional): Number of images to generate (1 to 4). Default is 1.
  • modelName (string, optional): Model to use for image generation. Default is "noobaiXLNAIXL_epsilonPred11Version".
  • blurFactor (number, optional): Amount of blurring for smoother transitions. Default is 5.
  • negativePrompt (string, optional): Elements to exclude from the generated image. Default is "animal, cat, dog, big breasts".
  • guidanceRescale (number, optional): Rescales the CFG-generated noise to counteract over-saturation at high guidance scales (0 to 5). Default is 0.5.
  • prependPreprompt (boolean, optional): If true, prepends standard quality indicators. Default is true.
  • generationScheduler (string, optional): Scheduler algorithm for image generation. Default is "DPM++ 2M SDE Karras".
  • variationalAutoencoder (string, optional): VAE model to apply during generation. Default is "default".

Example Input:

{
  "seed": -1,
  "steps": 35,
  "width": 1184,
  "height": 864,
  "prompt": "1girl",
  "cfgScale": 5,
  "clipSkip": 1,
  "pagScale": 3,
  "strength": 0.7,
  "batchSize": 1,
  "modelName": "noobaiXLNAIXL_epsilonPred11Version",
  "blurFactor": 5,
  "negativePrompt": "animal, cat, dog, big breasts",
  "guidanceRescale": 0.5,
  "prependPreprompt": true,
  "generationScheduler": "DPM++ 2M SDE Karras",
  "variationalAutoencoder": "default"
}
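Because several parameters have documented ranges, a client-side check can catch bad values before a round trip to the API. The helper below is illustrative, not part of the API; it simply mirrors the ranges and required fields listed above:

```python
def validate_inpainting_payload(payload: dict) -> list:
    """Return a list of problems found in an inpainting payload.

    The ranges mirror the parameter documentation; an empty list
    means the payload looks structurally valid.
    """
    ranges = {
        "steps": (1, 100),
        "width": (1, 4096),
        "height": (1, 4096),
        "cfgScale": (1, 50),
        "pagScale": (0, 50),
        "strength": (0, 1),
        "batchSize": (1, 4),
        "guidanceRescale": (0, 5),
    }
    problems = []
    # mask and image are the two required fields per the schema above.
    for key in ("mask", "image"):
        if key not in payload:
            problems.append(f"missing required field: {key}")
    # Check each numeric parameter against its documented range.
    for key, (lo, hi) in ranges.items():
        if key in payload and not (lo <= payload[key] <= hi):
            problems.append(f"{key}={payload[key]} outside [{lo}, {hi}]")
    return problems
```

Note that the example input above omits the required `mask` and `image` URIs for brevity; a real invocation must supply both.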

Output

Upon successful execution, the action returns an array of URLs, one per generated image (so batchSize determines the number of entries). A sample response is as follows:

Example Output:

[
  "https://assets.cognitiveactions.com/invocations/2a8b33d5-47d0-4b50-80bd-20d3b6eb9be6/867827d1-5372-409b-8a05-9dcae94900a9.png"
]
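Since the response is a plain list of URLs, saving the results locally needs only the standard library. The sketch below keeps the server-assigned file name; the output directory name is arbitrary:

```python
import os
import urllib.parse
import urllib.request

def filename_from_url(url: str) -> str:
    """Extract the file name from an image URL, falling back to image.png."""
    name = os.path.basename(urllib.parse.urlparse(url).path)
    return name or "image.png"

def download_generated_images(urls, out_dir="generated"):
    """Download each returned image URL into out_dir and return the local paths."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for url in urls:
        path = os.path.join(out_dir, filename_from_url(url))
        urllib.request.urlretrieve(url, path)
        paths.append(path)
    return paths
```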

Conceptual Usage Example (Python)

Below is a conceptual Python code snippet demonstrating how to invoke the Generate Image with Inpainting action using the structured input:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "262a4d0d-983a-4288-bd27-7f682b4eefc1"  # Action ID for Generate Image with Inpainting

# Construct the input payload based on the action's requirements
payload = {
    "seed": -1,
    "steps": 35,
    "width": 1184,
    "height": 864,
    "prompt": "1girl",
    "cfgScale": 5,
    "clipSkip": 1,
    "pagScale": 3,
    "strength": 0.7,
    "batchSize": 1,
    "modelName": "noobaiXLNAIXL_epsilonPred11Version",
    "blurFactor": 5,
    "negativePrompt": "animal, cat, dog, big breasts",
    "guidanceRescale": 0.5,
    "prependPreprompt": True,
    "generationScheduler": "DPM++ 2M SDE Karras",
    "variationalAutoencoder": "default"
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload},  # Hypothetical structure
        timeout=120,  # image generation can take a while; avoid hanging indefinitely
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError:  # covers json.JSONDecodeError across requests versions
            print(f"Response body: {e.response.text}")

In this example, the payload is structured based on the input schema outlined above. The action ID is set, and the response is handled gracefully, allowing for easy debugging in case of errors.

Conclusion

Integrating the Generate Image with Inpainting Cognitive Action from the delta-lock/noobai-xl API can greatly enhance your application’s image generation capabilities. By mastering the input parameters and understanding the response structure, developers can create unique images tailored to specific needs.

Next steps could include experimenting with different prompts, masks, and parameters to discover the full potential of this powerful action. Happy coding!
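One convenient way to structure such experiments is to generate one payload variant per parameter value and submit each as a separate invocation. The sweep helper below is a sketch for organizing that, not part of the API:

```python
import copy

def sweep(base: dict, param: str, values):
    """Yield deep copies of `base` with `param` set to each value in turn."""
    for value in values:
        variant = copy.deepcopy(base)
        variant[param] = value
        yield variant

base_payload = {"prompt": "1girl", "steps": 35, "cfgScale": 5}

# Compare four CFG scales side by side; each variant would be sent
# as the "inputs" of a separate action invocation.
variants = list(sweep(base_payload, "cfgScale", [3, 5, 7, 9]))
```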