Enhance Your Chat Applications with Dolphin-2.2.1-Mistral-7B Cognitive Actions

Integrating advanced conversational capabilities into applications has never been easier with the Dolphin-2.2.1-Mistral-7B Cognitive Actions. This powerful API provides developers with pre-built actions designed for engaging and effective chat interactions, particularly leveraging the Mistral-7B model fine-tuned with the Dolphin dataset. The result? Improved chat performance that addresses common issues such as overfitting, enabling you to create more responsive and intelligent conversational agents.
Prerequisites
Before you start using the Dolphin-2.2.1-Mistral-7B Cognitive Actions, ensure you have the following:
- An API key for the Cognitive Actions platform to authenticate your requests.
- Basic knowledge of making HTTP requests and handling JSON payloads.
Authentication typically involves passing your API key in the headers of your requests to ensure secure access to the Cognitive Actions.
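As a minimal sketch, assuming a standard Bearer-token scheme (the exact header name and format may differ for your deployment), the request headers could be built like this:

```python
# Hypothetical authentication headers for the Cognitive Actions API.
# The Bearer scheme is an assumption; check your platform's auth documentation.
API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",  # API key sent on every request
    "Content-Type": "application/json",    # request bodies are JSON payloads
}
```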
Cognitive Actions Overview
Perform Chat using Dolphin-2.2.1-Mistral-7B
The Perform Chat using Dolphin-2.2.1-Mistral-7B action allows you to engage in conversational tasks powered by the Mistral-7B model. This model is optimized for chat interactions, making it a valuable tool for developers seeking to integrate advanced dialog capabilities into their applications.
- Category: Chat
Input
The input for this action is structured as follows:
{
  "prompt": "What is the best way to train a dolphin to obey me? Please answer step by step.",
  "topK": 50,
  "topP": 0.95,
  "temperature": 0.8,
  "maxNewTokens": 512,
  "promptTemplate": "<|im_start|>system\nyou are an expert dolphin trainer\n<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n",
  "presencePenalty": 0,
  "frequencyPenalty": 0
}
- Required Fields:
  - prompt: The main input query for the chat model.
- Optional Fields:
  - topK: Number of highest-probability tokens to retain (default: 50).
  - topP: Cumulative probability threshold for token selection (default: 0.95).
  - temperature: Controls randomness in token selection (default: 0.8).
  - maxNewTokens: Maximum number of tokens to generate (default: 512).
  - promptTemplate: Template for customizing the prompt's behavior.
  - presencePenalty: Penalty applied to tokens already present in the output (default: 0).
  - frequencyPenalty: Penalty that grows with a token's frequency in the output (default: 0).
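To build intuition for how these decoding parameters interact, here is an illustrative sketch (not the service's actual implementation): temperature rescales the logits, topK keeps the K most likely tokens, and topP keeps the smallest set whose cumulative probability reaches P. The toy logits below are invented for demonstration.

```python
import math

def filter_candidates(logits, top_k=50, top_p=0.95, temperature=0.8):
    """Conceptual sketch of topK/topP/temperature filtering before sampling."""
    # Temperature: values below 1 sharpen the distribution, above 1 flatten it.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    # Softmax over the scaled logits (subtract max for numerical stability).
    m = max(scaled.values())
    exps = {tok: math.exp(l - m) for tok, l in scaled.items()}
    z = sum(exps.values())
    probs = {tok: e / z for tok, e in exps.items()}
    # topK: keep only the K highest-probability tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # topP: keep the smallest prefix whose cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize the survivors; sampling would then draw from this set.
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

candidates = filter_candidates(
    {"Step": 3.0, "The": 2.0, "A": 1.0, "Zebra": -2.0}, top_k=3, top_p=0.9
)
```

With these toy logits, topK drops "Zebra" and topP then trims "A" as well, leaving only the two strongest candidates to sample from.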
Example Input
{
  "topK": 50,
  "topP": 0.95,
  "prompt": "What is the best way to train a dolphin to obey me? Please answer step by step.",
  "temperature": 0.8,
  "maxNewTokens": 512,
  "promptTemplate": "<|im_start|>system\nyou are an expert dolphin trainer\n<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n",
  "presencePenalty": 0,
  "frequencyPenalty": 0
}
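The promptTemplate field wraps the user prompt in ChatML-style markers before it reaches the model. Conceptually, assuming a simple placeholder substitution (the service's exact rendering mechanism is not specified), it behaves like:

```python
# Conceptual rendering of the promptTemplate field: the {prompt} placeholder
# is replaced with the user's prompt before the text is sent to the model.
template = (
    "<|im_start|>system\nyou are an expert dolphin trainer\n<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
)
prompt = "What is the best way to train a dolphin to obey me? Please answer step by step."

rendered = template.replace("{prompt}", prompt)
```

The trailing assistant marker is what cues the model to begin generating its reply.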
Output
The action typically returns a structured output containing the generated response, which might look like this:
[
  " Step",
  " ",
  1,
  ":",
  " Building",
  " Trust",
  "\n",
  "The",
  " first",
  " step",
  " in",
  " training",
  " a",
  " dolphin",
  " is",
  " to",
  " gain",
  " its",
  " trust",
  ".",
  // Additional steps...
]
The output is a sequence of token fragments that, concatenated in order, reconstruct the generated response; its content varies based on the input provided.
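Because the array mixes strings and the occasional number, reassembling the full text is a one-liner. Using an abbreviated copy of the output above:

```python
# Token fragments as returned by the action (truncated for brevity).
tokens = [" Step", " ", 1, ":", " Building", " Trust", "\n",
          "The", " first", " step"]

# Coerce every fragment to str and concatenate to recover the response text.
text = "".join(str(t) for t in tokens)
```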
Conceptual Usage Example (Python)
Here's a conceptual Python code snippet demonstrating how to invoke the Perform Chat action:
import requests
import json

# Replace with your Cognitive Actions API key and endpoint
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"  # Hypothetical endpoint

action_id = "62b12902-5d15-4aed-888f-0151042afe9c"  # Action ID for Perform Chat

# Construct the input payload based on the action's requirements
payload = {
    "topK": 50,
    "topP": 0.95,
    "prompt": "What is the best way to train a dolphin to obey me? Please answer step by step.",
    "temperature": 0.8,
    "maxNewTokens": 512,
    "promptTemplate": "<|im_start|>system\nyou are an expert dolphin trainer\n<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n",
    "presencePenalty": 0,
    "frequencyPenalty": 0
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json={"action_id": action_id, "inputs": payload}  # Hypothetical structure
    )
    response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)
    result = response.json()
    print("Action executed successfully:")
    print(json.dumps(result, indent=2))
except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except json.JSONDecodeError:
            print(f"Response body: {e.response.text}")
In this Python snippet, replace YOUR_COGNITIVE_ACTIONS_API_KEY with your actual API key. The payload is constructed according to the action's input requirements, and the request is sent to a hypothetical execution endpoint. The response is then processed to display the generated chat output.
Conclusion
The Dolphin-2.2.1-Mistral-7B Cognitive Actions offer a robust solution for developers looking to enhance their chat applications with advanced conversational capabilities. By leveraging the fine-tuned Mistral-7B model, you can create intelligent, engaging chat interactions that resonate with users. Explore further use cases and integrate these actions to elevate your applications today!