Transform Text into Stunning Sketches with meliponalab/sketchconcepts

24 Apr 2025

In the world of creative applications, the ability to generate visual content from textual descriptions can be a game-changer. The meliponalab/sketchconcepts API offers developers powerful Cognitive Actions for creating image sketches from textual prompts. With customizable settings for inpainting, speed, and image dimensions, these actions enable a wide range of artistic possibilities. Let's explore how to harness this sketch-generation capability and what you need to get started.

Prerequisites

Before diving into the Cognitive Actions, you will need:

  • An API key to access the meliponalab/sketchconcepts services.
  • Understanding of how to authenticate your API requests, typically by passing your API key in the request headers.
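A common convention for this kind of authentication, and the one used in the Python example later in this article, is a Bearer token in the `Authorization` header. A minimal sketch (the exact header name should be confirmed against the service's own documentation):

```python
# Hypothetical illustration: Bearer-token auth via request headers.
API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"  # placeholder, not a real key

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```

These headers are then passed along with every request you make to the API.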

Cognitive Actions Overview

Generate Image Sketch

Purpose: The "Generate Image Sketch" action enables you to create image sketches from textual prompts, allowing for a wide range of styles and settings. It supports different models and configurations based on your needs.

Category: image-generation

Input

The input schema for this action is a structured JSON object. Here are the key fields:

  • prompt (required): The textual description guiding the sketch generation.
  • model: Select between "dev" (optimized for quality) or "schnell" (optimized for speed).
  • outputCount: Specifies how many images to generate (1-4).
  • imageAspectRatio: Sets the aspect ratio of the generated image.
  • imageOutputFormat: Defines the file format for output images; options include webp, jpg, and png.
  • mainLoraIntensity: Adjusts the application intensity of the main LoRA.

Additional tuning fields such as denoisingSteps, inputPromptIntensity, imageGuidanceIntensity, imageOutputQuality, and additionalLoraIntensity appear in the example below.

Example Input:

{
  "model": "dev",
  "prompt": "Create a black and white pencil SKTCH sketch of an immersive exhibition layout. The room is an introduction room which has a museography layout, the objective of these room is to make an introduction of the theme and give context to the attendees. The mood of the design of the spaces for the exhibit are futuristic yet minimalistic. A sense of modernity and architectural design. Must include scale references and the exhibit must hold up to 50 attendees at once.",
  "outputCount": 1,
  "denoisingSteps": 28,
  "imageAspectRatio": "1:1",
  "imageOutputFormat": "webp",
  "mainLoraIntensity": 1,
  "imageOutputQuality": 90,
  "inputPromptIntensity": 0.8,
  "imageGuidanceIntensity": 3.5,
  "additionalLoraIntensity": 1
}
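Before sending a request, it can help to sanity-check the payload against the constraints described above. Here is a small validation sketch; the field names and ranges come from the schema, but the validator itself is our own addition, not part of the API:

```python
# Hypothetical client-side validation against the documented constraints.
VALID_MODELS = {"dev", "schnell"}
VALID_FORMATS = {"webp", "jpg", "png"}

def validate_payload(payload: dict) -> list:
    """Return a list of human-readable problems; an empty list means the payload looks OK."""
    problems = []
    if not payload.get("prompt"):
        problems.append("prompt is required")
    if payload.get("model") not in VALID_MODELS:
        problems.append(f"model must be one of {sorted(VALID_MODELS)}")
    count = payload.get("outputCount", 1)
    if not (1 <= count <= 4):
        problems.append("outputCount must be between 1 and 4")
    if payload.get("imageOutputFormat", "webp") not in VALID_FORMATS:
        problems.append(f"imageOutputFormat must be one of {sorted(VALID_FORMATS)}")
    return problems
```

For example, `validate_payload({"prompt": "a sketch", "model": "dev"})` returns an empty list, while omitting the prompt flags an error before you spend an API call.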

Output

The action typically returns an array of URLs pointing to the generated images. Here's an example of what you might receive:

Example Output:

[
  "https://assets.cognitiveactions.com/invocations/96e69abd-8b43-465a-a398-e650a98aee12/591a50ac-db76-45fe-b21b-452519b43f00.webp"
]
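Since the output is simply a list of URLs, a small helper can fetch and save each image locally. This is a conceptual sketch, not part of the API itself; it assumes each URL's last path segment is a usable filename:

```python
# Hypothetical helper: download each returned image URL to a local directory.
import pathlib
import requests

def download_sketches(urls, out_dir="sketches"):
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    saved = []
    for url in urls:
        resp = requests.get(url, timeout=60)
        resp.raise_for_status()
        # Use the last path segment (e.g. "...43f00.webp") as the filename.
        path = out / url.rsplit("/", 1)[-1]
        path.write_bytes(resp.content)
        saved.append(path)
    return saved
```

Called with the example output above, this would write one `.webp` file into a `sketches/` directory.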

Conceptual Usage Example (Python)

Here's a conceptual snippet showing how you might call the "Generate Image Sketch" action using Python:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute" # Hypothetical endpoint

action_id = "a1b74dbe-81c3-4fd3-9b63-1a7059c62fe7"  # Action ID for Generate Image Sketch

# Construct the input payload based on the action's requirements
payload = {
    "model": "dev",
    "prompt": "Create a black and white pencil SKTCH sketch of an immersive exhibition layout. The room is an introduction room which has a museography layout...",
    "outputCount": 1,
    "denoisingSteps": 28,
    "imageAspectRatio": "1:1",
    "imageOutputFormat": "webp",
    "mainLoraIntensity": 1,
    "imageOutputQuality": 90,
    "inputPromptIntensity": 0.8,
    "imageGuidanceIntensity": 3.5,
    "additionalLoraIntensity": 1
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}  # Hypothetical structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")

In this code snippet:

  • The action_id is set to the ID of the "Generate Image Sketch" action.
  • The payload is constructed using the required and optional fields.
  • The requests library is used to post the JSON payload to the Cognitive Actions endpoint.
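Image-generation requests can be slow or fail transiently, so in production you might wrap the call with retries and exponential backoff. The wrapper below is our own client-side pattern, not something the API requires; the `post` parameter defaults to `requests.post` but can be swapped out for testing:

```python
import time
import requests

# Hypothetical retry wrapper with exponential backoff.
def execute_with_retries(url, headers, body, attempts=3, base_delay=2.0, post=requests.post):
    """POST `body` to `url`, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            resp = post(url, headers=headers, json=body, timeout=120)
            resp.raise_for_status()
            return resp.json()
        except requests.exceptions.RequestException:
            if attempt == attempts - 1:
                raise  # out of retries; surface the last error
            time.sleep(base_delay * (2 ** attempt))  # wait 2s, 4s, 8s, ...
```

You would call it with the same `COGNITIVE_ACTIONS_EXECUTE_URL`, `headers`, and request body as in the snippet above.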

Conclusion

The meliponalab/sketchconcepts API lets developers transform textual descriptions into polished image sketches with minimal effort. By tuning the various customizable settings, you can shape the output to fit your creative needs. Explore the potential applications in art, design, and visualization, and start integrating these powerful Cognitive Actions into your applications today!