Elevate Your Chatbot Experience with Dolphin 2.1 Mistral 7B Cognitive Actions

23 Apr 2025

Integrating conversational AI into your applications has never been easier with the lucataco/dolphin-2.1-mistral-7b Cognitive Actions. This powerful API allows developers to leverage the capabilities of the Mistral-7B-v0.1 model fine-tuned with the Dolphin dataset, enabling the creation of interactive and insightful chat responses. By utilizing these pre-built actions, you can enhance user engagement and streamline conversations, making your applications more intuitive and user-friendly.

Prerequisites

Before diving into the Cognitive Actions, ensure you have the following:

  • An API key for the Cognitive Actions platform.
  • Basic knowledge of JSON and API integration.
  • An understanding of how to pass authentication tokens in the headers of your requests.

For authentication, you will typically pass your API key in the request headers to access the Cognitive Actions service securely.
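As a conceptual sketch, assuming a bearer-token scheme (the exact header names may differ on your deployment), the request headers could be built like this:

```python
# Hypothetical sketch: building auth headers for Cognitive Actions requests.
# The Bearer scheme and header names are assumptions; check your platform's docs.
def build_headers(api_key: str) -> dict:
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = build_headers("YOUR_COGNITIVE_ACTIONS_API_KEY")
print(headers["Authorization"])
```

Keeping the key out of the URL and in an `Authorization` header avoids leaking it into server access logs.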

Cognitive Actions Overview

Generate Chat with Dolphin

The Generate Chat with Dolphin action is designed to create engaging chat responses using an optimized model. It is ideal for developers looking to build advanced chatbots or virtual assistants.

Input

The input for this action consists of several parameters, as outlined in the schema:

  • prompt (required): The user query or statement you want the AI to respond to.
  • topK (optional): Sampling considers only the k highest-probability tokens at each step. Default is 50.
  • topP (optional): Nucleus-sampling threshold; only tokens within the top cumulative probability mass p are considered. Default is 0.95.
  • temperature (optional): Controls the randomness of the output; higher values produce more varied responses. Default is 0.8.
  • maxNewTokens (optional): The maximum number of tokens to generate. Default is 512.
  • promptTemplate (optional): A ChatML-style template used to format the input; the {prompt} placeholder is replaced with your query. A default is provided.
  • presencePenalty (optional): Penalizes tokens that have already appeared at all, encouraging the model to introduce new content. Default is 0.
  • frequencyPenalty (optional): Penalizes tokens in proportion to how often they have already appeared. Default is 0.
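The promptTemplate uses a {prompt} placeholder inside ChatML-style markup. The substitution happens server-side, but a minimal sketch of what it amounts to:

```python
# Sketch: how the {prompt} placeholder in promptTemplate gets filled in.
# The actual substitution is done by the service; this only illustrates the idea.
template = (
    "<|im_start|>system\nyou are an expert dolphin trainer\n<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
prompt = "What is the best way to train a dolphin to obey me?"
full_prompt = template.format(prompt=prompt)
print(full_prompt)
```

This is why the template must keep the literal `{prompt}` token: it marks where your query is spliced into the system/user/assistant structure the model was fine-tuned on.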

Here is an example of the JSON payload needed to invoke this action:

{
  "topK": 50,
  "topP": 0.95,
  "prompt": "What is the best way to train a dolphin to obey me? Please answer step by step.",
  "temperature": 0.8,
  "maxNewTokens": 512,
  "promptTemplate": "<|im_start|>system\nyou are an expert dolphin trainer\n<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n",
  "presencePenalty": 0,
  "frequencyPenalty": 0
}

Output

When you execute this action, you will receive a response that typically consists of an array of strings, each representing a token (or token fragment) of the generated chat response. For example:

[
  " Training",
  " a",
  " dol",
  "ph",
  "in",
  " to",
  " obey",
  " you",
  " involves",
  " establishing",
  ...
]

Each element is a token fragment rather than a complete word or sentence; concatenated in order, they form the full response. This fragmented format gives you flexibility in how the output is processed or displayed in your application, for example streaming tokens to the user as they arrive.
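Reassembling the fragments is a simple join. A sketch, using the truncated output array shown above:

```python
# Reassemble streamed token fragments into a single response string.
tokens = [" Training", " a", " dol", "ph", "in", " to", " obey", " you"]
full_text = "".join(tokens).strip()  # join preserves internal spacing; strip trims the lead
print(full_text)  # Training a dolphin to obey you
```

For a streaming UI you would instead append each fragment to the display as it arrives, rather than waiting for the full array.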

Conceptual Usage Example (Python)

Here’s a conceptual Python code snippet to illustrate how you might call the Cognitive Actions endpoint for this action:

import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "891de647-5d6c-45d8-ad18-ba606892d949"  # Action ID for Generate Chat with Dolphin

# Construct the input payload based on the action's requirements
payload = {
    "topK": 50,
    "topP": 0.95,
    "prompt": "What is the best way to train a dolphin to obey me? Please answer step by step.",
    "temperature": 0.8,
    "maxNewTokens": 512,
    "promptTemplate": "<|im_start|>system\nyou are an expert dolphin trainer\n<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n",
    "presencePenalty": 0,
    "frequencyPenalty": 0
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}  # Hypothetical structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError:  # body was not valid JSON (covers json.JSONDecodeError)
            print(f"Response body: {e.response.text}")

In this code, you'll need to replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The action_id corresponds to "Generate Chat with Dolphin," and the payload contains the input parameters defined previously.

Conclusion

The lucataco/dolphin-2.1-mistral-7b Cognitive Actions provide a powerful tool for developers looking to enhance their applications with advanced conversational capabilities. By utilizing the Generate Chat with Dolphin action, you can create dynamic and engaging user interactions. Consider exploring other use cases and potential integrations to fully leverage the capabilities of this API in your projects. Happy coding!