Enhance Your Images with Image Inpainting Using lightweight-ai/model1 Cognitive Actions

22 Apr 2025

In image processing, the ability to modify specific areas of an image while preserving the rest is crucial for many applications. The lightweight-ai/model1 API provides Cognitive Actions that let developers integrate advanced image-manipulation capabilities into their applications. One of the standout features of this API is the Perform Image Inpainting with Flux Schnell Model action, which lets developers customize and enhance their images effectively.

Prerequisites

To get started with the Cognitive Actions from the lightweight-ai/model1 API, you'll need an API key. This key is essential for authenticating your requests. Generally, you'll pass the API key in the request headers when invoking the actions. Make sure you have the necessary setup in place to begin using the API.

Cognitive Actions Overview

Perform Image Inpainting with Flux Schnell Model

This action leverages the flux_schnell img2img model to perform image inpainting. It enables you to specify particular areas of an image to modify while keeping other parts intact. You can customize the output by adjusting parameters such as the mask, seed, size, and output quality, along with options for prompt guidance and Lora model scaling.

Category: Image Processing

Input

The input to this action consists of several fields, which are detailed below:

  • mask (string): A URI pointing to a mask image for inpainting. White areas (255) indicate regions to be inpainted, while black areas (0) will be preserved.
  • seed (integer): A random seed for generating reproducible image results.
  • image (string): A URI pointing to the base image that will be modified.
  • width (integer): The width of the output image in pixels (default: 1024).
  • height (integer): The height of the output image in pixels (default: 1024).
  • prompt (string): Text that guides the style and content of the generated image (default: "A bohemian-style female travel blogger with sun-kissed skin and messy beach waves").
  • loraList (array): A list of Lora models to apply (default: empty).
  • loraScales (array): Scaling factors for each Lora model (default: empty).
  • nsfwChecker (boolean): Enables or disables NSFW checks (default: false).
  • outputFormat (string): The format of the output image (default: "png").
  • guidanceScale (number): The guidance scale for the image generation (default: 3.5).
  • outputQuality (integer): Quality level of the output image (default: 100).
  • promptStrength (number): Influences how strongly the prompt affects the output (default: 0.8).
  • numberOfOutputs (integer): Specifies how many images to generate (default: 1).
  • numberOfInferenceSteps (integer): Determines the number of steps in the generation process (default: 28).
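The mask convention described above (white = inpaint, black = preserve) can be illustrated with a small standard-library sketch that builds a rectangular grayscale mask and writes it as a binary PGM file. The dimensions and rectangle coordinates here are arbitrary examples; in practice you would convert the mask to PNG, host it, and pass its URI in the mask field.

```python
# Build a simple rectangular inpainting mask: white (255) pixels mark the
# region to be regenerated, black (0) pixels are preserved.
WIDTH, HEIGHT = 1024, 1024

def make_rect_mask(width, height, left, top, right, bottom):
    """Return raw grayscale bytes: 255 inside the rectangle, 0 elsewhere."""
    rows = []
    for y in range(height):
        row = bytearray(width)          # all black (preserve)
        if top <= y < bottom:
            for x in range(left, right):
                row[x] = 255            # white (inpaint)
        rows.append(bytes(row))
    return b"".join(rows)

pixels = make_rect_mask(WIDTH, HEIGHT, 256, 256, 768, 768)

# Write as a binary PGM (a minimal grayscale format); convert to PNG
# and upload it somewhere reachable to obtain the mask URI.
with open("mask.pgm", "wb") as f:
    f.write(b"P5\n%d %d\n255\n" % (WIDTH, HEIGHT))
    f.write(pixels)
```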

Here’s an example of the input JSON payload needed to invoke this action:

{
  "width": 1024,
  "height": 1024,
  "prompt": "A fluffy, orange tabby cat curled up asleep in a sunbeam streaming through a window, its soft fur glowing with the warmth of the light; highly detailed 8K UHD photorealistic rendering, natural lighting, warm and inviting atmosphere, focus on softness and texture.",
  "outputFormat": "png",
  "outputQuality": 100,
  "promptStrength": 0.8,
  "numberOfOutputs": 1,
  "numberOfInferenceSteps": 4
}

Output

The action typically returns a URL pointing to the generated image. For example:

[
  "https://assets.cognitiveactions.com/invocations/bfe83dec-190c-4f1a-9b37-0f9ad29c94a5/c3cd71f2-3916-4ead-b878-9577d6c5826f.png"
]

This output provides a direct link to the newly created image based on the input parameters.
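Because the response is a JSON array of URLs, a small helper can map each URL to a local filename before downloading (the download itself is omitted here; any HTTP client works). The sample URL below is the one from the output above.

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

def output_filenames(output):
    """Map each returned URL to the file name at the end of its path."""
    return [PurePosixPath(urlparse(url).path).name for url in output]

sample_output = [
    "https://assets.cognitiveactions.com/invocations/bfe83dec-190c-4f1a-9b37-0f9ad29c94a5/c3cd71f2-3916-4ead-b878-9577d6c5826f.png"
]

print(output_filenames(sample_output))
# -> ['c3cd71f2-3916-4ead-b878-9577d6c5826f.png']
```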

Conceptual Usage Example (Python)

Here’s a conceptual Python code snippet illustrating how you might call this action. Note that the endpoint URL and structure are hypothetical and should be adapted to your specific implementation.

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "3bde88a5-bf6f-4fd2-9a6b-b2dd0383b9b3" # Action ID for Perform Image Inpainting

# Construct the input payload based on the action's requirements
payload = {
    "width": 1024,
    "height": 1024,
    "prompt": "A fluffy, orange tabby cat curled up asleep in a sunbeam streaming through a window, its soft fur glowing with the warmth of the light; highly detailed 8K UHD photorealistic rendering, natural lighting, warm and inviting atmosphere, focus on softness and texture.",
    "outputFormat": "png",
    "outputQuality": 100,
    "promptStrength": 0.8,
    "numberOfOutputs": 1,
    "numberOfInferenceSteps": 4
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload},  # Hypothetical request structure
        timeout=120,  # image generation can take a while; avoid hanging indefinitely
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError:  # body was not valid JSON (covers json.JSONDecodeError)
            print(f"Response body: {e.response.text}")

In this code snippet, you will need to replace the COGNITIVE_ACTIONS_API_KEY and the hypothetical endpoint with your actual API key and endpoint. The action_id corresponds to the image inpainting action, and the payload contains the input parameters specified earlier.

Conclusion

The Perform Image Inpainting with Flux Schnell Model action from the lightweight-ai/model1 API offers developers a robust and flexible solution for image modification tasks. By leveraging this action, you can enhance your applications with advanced image processing capabilities, allowing for customized image generation and inpainting. Consider exploring other use cases to further enrich your projects with these powerful Cognitive Actions.