Enhance Your Applications with the Ericsson230 Faith LoRA Cognitive Action for Image Generation

22 Apr 2025
In the realm of image generation, the Ericsson230 Faith LoRA Cognitive Action offers powerful capabilities for developers looking to create high-quality images through inpainting or image-to-image modes. This integration allows for customizable parameters such as model selection, dimensions, and output formats, enabling users to tailor the output to their specific needs. The inclusion of LoRA weights enhances the style and content of the generated images, while optimized models facilitate faster generation times.

Prerequisites

Before diving into the integration of the Ericsson230 Faith LoRA Cognitive Action, ensure you have the following:

  • An API key for accessing the Cognitive Actions platform.
  • Familiarity with making HTTP requests and handling JSON data.
  • A basic understanding of Python programming to utilize the code examples provided.

Authentication typically involves passing your API key in the header of your requests, allowing you to securely interact with the Cognitive Actions service.
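As a minimal sketch of that authentication step, the headers can be built once and reused across requests. This assumes a standard Bearer-token scheme; confirm the exact header format against the Cognitive Actions platform's documentation.

```python
# Assumed Bearer-token scheme; verify against the platform's docs.
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"

def build_headers(api_key: str) -> dict:
    """Build request headers carrying the API key as a bearer token."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = build_headers(COGNITIVE_ACTIONS_API_KEY)
```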

Cognitive Actions Overview

Generate Image with Inpainting or Image-to-Image Mode

Description: This action allows you to generate high-quality images while utilizing customizable parameters such as model selection, image dimensions, and output format. By leveraging LoRA weights, you can enhance the style and content further, while optimized models ensure quicker generation times.

Category: image-generation

Input

The input schema for this action is as follows:

{
  "prompt": "string (required)",
  "mask": "string (optional, format: uri)",
  "seed": "integer (optional)",
  "image": "string (optional, format: uri)",
  "width": "integer (optional, min: 256, max: 1440)",
  "goFast": "boolean (optional, default: false)",
  "height": "integer (optional, min: 256, max: 1440)",
  "numOutputs": "integer (optional, default: 1, min: 1, max: 4)",
  "loraWeights": "string (optional)",
  "guidanceScale": "number (optional, default: 3, min: 0, max: 10)",
  "mainLoraScale": "number (optional, default: 1, min: -1, max: 3)",
  "outputQuality": "integer (optional, default: 80, min: 0, max: 100)",
  "additionalLora": "string (optional)",
  "inferenceModel": "string (optional, default: 'dev')",
  "promptStrength": "number (optional, default: 0.8, min: 0, max: 1)",
  "imageMegapixels": "string (optional, default: '1')",
  "imageAspectRatio": "string (optional, default: '1:1')",
  "imageOutputFormat": "string (optional, default: 'webp')",
  "numInferenceSteps": "integer (optional, default: 28, min: 1, max: 50)",
  "additionalLoraScale": "number (optional, default: 1, min: -1, max: 3)",
  "disableSafetyChecker": "boolean (optional, default: false)"
}
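Catching constraint violations before sending a request can save round trips. The helper below is an illustrative client-side check, not part of any official SDK; the ranges are taken directly from the schema above.

```python
# Illustrative client-side validation; ranges mirror the schema above.
def validate_inputs(inputs: dict) -> list:
    """Return a list of constraint violations (empty if inputs look valid)."""
    errors = []
    if not inputs.get("prompt"):
        errors.append("prompt is required")
    ranges = {
        "width": (256, 1440),
        "height": (256, 1440),
        "numOutputs": (1, 4),
        "guidanceScale": (0, 10),
        "outputQuality": (0, 100),
        "promptStrength": (0, 1),
        "numInferenceSteps": (1, 50),
    }
    for key, (lo, hi) in ranges.items():
        if key in inputs and not (lo <= inputs[key] <= hi):
            errors.append(f"{key} must be between {lo} and {hi}")
    return errors

print(validate_inputs({"prompt": "falex dressed as wonderwoman", "numOutputs": 4}))  # prints []
```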

Example Input:

{
  "goFast": false,
  "prompt": "falex dressed as wonderwoman",
  "numOutputs": 4,
  "guidanceScale": 3,
  "mainLoraScale": 1,
  "outputQuality": 80,
  "inferenceModel": "dev",
  "promptStrength": 0.8,
  "imageMegapixels": "1",
  "imageAspectRatio": "1:1",
  "imageOutputFormat": "webp",
  "numInferenceSteps": 50,
  "additionalLoraScale": 1
}
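The example above exercises the plain text-to-image path. For inpainting or image-to-image mode, the schema's `image` and `mask` URI fields come into play; the payload below is purely illustrative, with placeholder URLs standing in for real hosted assets:

```
{
  "prompt": "falex dressed as wonderwoman",
  "image": "https://example.com/source-image.png",
  "mask": "https://example.com/mask.png",
  "promptStrength": 0.8,
  "imageOutputFormat": "webp"
}
```

A lower `promptStrength` preserves more of the source image; a higher value lets the prompt dominate.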

Output

The output of this action typically includes the generated image URLs in the specified format. For example:

[
  "https://assets.cognitiveactions.com/invocations/e17eb554-dfb6-4293-986a-0e85947b9d52/dfeccda5-3f12-44e3-aa85-1f499b958e66.webp",
  "https://assets.cognitiveactions.com/invocations/e17eb554-dfb6-4293-986a-0e85947b9d52/337fc077-17d1-4119-bac0-4eda825e46f3.webp",
  "https://assets.cognitiveactions.com/invocations/e17eb554-dfb6-4293-986a-0e85947b9d52/3a064956-9b21-49be-9c7d-2bf58e3e58d6.webp",
  "https://assets.cognitiveactions.com/invocations/e17eb554-dfb6-4293-986a-0e85947b9d52/c6177628-cc68-43b6-bb27-371db0145fe8.webp"
]
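Since the action returns URLs rather than image bytes, a common follow-up is to fetch the files locally. The helpers below (`filename_from_url`, `download_images`) are illustrative, not part of any official SDK; they simply reuse each URL's final path segment as the filename.

```python
import os
import requests

# Illustrative helpers for saving the returned image URLs to disk.
def filename_from_url(url: str) -> str:
    """Use the final path segment of the URL as the local filename."""
    return url.rsplit("/", 1)[-1]

def download_images(urls, out_dir="."):
    """Download each generated image and return the local file paths."""
    paths = []
    for url in urls:
        path = os.path.join(out_dir, filename_from_url(url))
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()  # surface HTTP errors early
        with open(path, "wb") as f:
            f.write(resp.content)
        paths.append(path)
    return paths
```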

Conceptual Usage Example (Python)

Here’s how you might structure a call to the Cognitive Actions execution endpoint using Python:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "822f0292-9800-4089-a406-317b58085cf2" # Action ID for Generate Image with Inpainting or Image-to-Image Mode

# Construct the input payload based on the action's requirements
payload = {
    "goFast": False,
    "prompt": "falex dressed as wonderwoman",
    "numOutputs": 4,
    "guidanceScale": 3,
    "mainLoraScale": 1,
    "outputQuality": 80,
    "inferenceModel": "dev",
    "promptStrength": 0.8,
    "imageMegapixels": "1",
    "imageAspectRatio": "1:1",
    "imageOutputFormat": "webp",
    "numInferenceSteps": 50,
    "additionalLoraScale": 1
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload} # Hypothetical structure
    )
    response.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")

This Python snippet shows how to prepare the request to invoke the action, specifying the action ID and the input payload. Replace the hypothetical endpoint with the actual URL for executing Cognitive Actions.

Conclusion

Integrating the Ericsson230 Faith LoRA Cognitive Action for image generation into your application can significantly enhance its capabilities. By leveraging the provided parameters and experimenting with the models, developers can create unique and high-quality visual content tailored to their needs. Once you've mastered this action, consider exploring additional features or actions to further enhance your applications. Happy coding!