Effortlessly Prepare Videos for AI Training with Mochi1 Video Split

25 Apr 2025

In the world of AI and machine learning, preparing data for training can often be a tedious and time-consuming task. The Mochi1 Video Split service streamlines this process by providing a specialized tool for splitting videos into manageable snippets, specifically designed for LoRA fine-tuning. This cognitive action not only simplifies the video preprocessing stage but also improves the consistency and quality of your training data.

Using Mochi1 Video Split, you can quickly transform long video files into shorter segments of specified duration, resolution, and frame rate. This is particularly useful for developers who need to prepare training datasets that require high-quality video snippets. With the ability to output a zip file containing the video segments and their corresponding captions, you can ensure that your AI models are trained on precisely the data they need.

Common Use Cases:

  • AI Model Training: Perfect for developers looking to train AI models that rely on video data, enabling them to create tailored datasets.
  • Content Creation: Streamline the process of generating video highlights or summaries for platforms that require short snippets.
  • Research and Development: Facilitate experiments with video data by easily manipulating and segmenting video files for various research purposes.

Prerequisites

To get started with Mochi1 Video Split, you will need a Cognitive Actions API key and some basic knowledge of making API calls.
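
Rather than hard-coding the API key, a common pattern is to read it from an environment variable. This is a minimal sketch; the variable name COGNITIVE_ACTIONS_API_KEY is a convention chosen for this example, not something the API mandates:

```python
import os

def load_api_key(var_name="COGNITIVE_ACTIONS_API_KEY"):
    """Read the Cognitive Actions API key from the environment."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable first.")
    return key
```

Keeping the key out of source code makes it safer to share scripts and commit them to version control.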

Split Video into Snippets for LoRA Fine-Tuning

The "Split Video into Snippets for LoRA Fine-Tuning" action preprocesses videos by dividing them into segments of a specified duration. It is particularly valuable for developers working with LoRA fine-tuning, as it breaks long videos into uniform, manageable segments that are ready for the training pipeline.

Input Requirements:

  • inputVideo: A valid URI link to the input video file, which must be in MP4 or MOV format.
  • targetDuration: The desired duration for each video segment, ranging from 1 to 5 seconds. If not specified, it defaults to 2.5 seconds.

Expected Output: The output will be a zip file containing the split video segments along with their respective captions, making them ready for immediate use in training.
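
Because the action enforces specific constraints (MP4 or MOV input, a target duration between 1 and 5 seconds, defaulting to 2.5), it can be useful to validate the payload client-side before making the call. The helper below is a sketch based on the documented constraints; the function name and structure are illustrative, not part of the API:

```python
from urllib.parse import urlparse

# Formats stated in the action's input requirements
ALLOWED_EXTENSIONS = (".mp4", ".mov")

def build_split_payload(input_video: str, target_duration: float = 2.5) -> dict:
    """Validate inputs against the documented constraints and build the payload."""
    path = urlparse(input_video).path.lower()
    if not path.endswith(ALLOWED_EXTENSIONS):
        raise ValueError("inputVideo must point to an MP4 or MOV file")
    if not 1 <= target_duration <= 5:
        raise ValueError("targetDuration must be between 1 and 5 seconds")
    return {"inputVideo": input_video, "targetDuration": target_duration}
```

Failing fast locally avoids a round trip to the API for requests that would be rejected anyway.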

Use Cases for this specific action:

  • When preparing a dataset for training AI models that require short video clips, this action saves time and ensures high-quality input.
  • Ideal for developers who wish to create training data quickly without manual editing of video files.
  • Suitable for generating content for applications that require short video snippets, such as social media or educational platforms.
The following Python script shows how this action might be invoked:

import requests
import json

# Replace with your actual Cognitive Actions API key and endpoint
# Ensure your environment securely handles the API key
COGNITIVE_ACTIONS_API_KEY = "YOUR_COGNITIVE_ACTIONS_API_KEY"
# This endpoint URL is hypothetical and should be documented for users
COGNITIVE_ACTIONS_EXECUTE_URL = "https://api.cognitiveactions.com/actions/execute"

action_id = "58ea09af-e696-454c-96f7-931f6f35a78b" # Action ID for: Split Video into Snippets for LoRA Fine-Tuning

# Construct the exact input payload based on the action's requirements
# This example uses the predefined example_input for this action:
payload = {
  "inputVideo": "https://replicate.delivery/pbxt/MHgmDF1AYMqghD5jN7ijBDHs2fYmjzUJrDPID2H2mIQOhegv/heygen-demo.mp4",
  "targetDuration": 2
}

headers = {
    "Authorization": f"Bearer {COGNITIVE_ACTIONS_API_KEY}",
    "Content-Type": "application/json",
    # Add any other required headers for the Cognitive Actions API
}

# Prepare the request body for the hypothetical execution endpoint
request_body = {
    "action_id": action_id,
    "inputs": payload
}

action_name = "Split Video into Snippets for LoRA Fine-Tuning"
print(f"--- Calling Cognitive Action: {action_name} ---")
print(f"Endpoint: {COGNITIVE_ACTIONS_EXECUTE_URL}")
print(f"Action ID: {action_id}")
print("Payload being sent:")
print(json.dumps(request_body, indent=2))
print("------------------------------------------------")

try:
    response = requests.post(
        COGNITIVE_ACTIONS_EXECUTE_URL,
        headers=headers,
        json=request_body
    )
    response.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)

    result = response.json()
    print("Action executed successfully. Result:")
    print(json.dumps(result, indent=2))

except requests.exceptions.RequestException as e:
    print(f"Error executing action {action_id}: {e}")
    if e.response is not None:
        print(f"Response status: {e.response.status_code}")
        try:
            print(f"Response body: {e.response.json()}")
        except ValueError:  # covers json.JSONDecodeError from any json backend
            print(f"Response body (non-JSON): {e.response.text}")
    print("------------------------------------------------")
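
Once the action completes, the zip of video segments and captions needs to be fetched and unpacked. The helper below sketches the extraction step; exactly which field of the result carries the zip URL is an assumption you should confirm against the API's response documentation:

```python
import io
import zipfile

def extract_snippets(zip_bytes: bytes, dest_dir: str = "snippets") -> list:
    """Extract the zip returned by the action; returns the extracted file names."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        archive.extractall(dest_dir)
        return archive.namelist()
```

Assuming the result exposes the zip's URL (for example under a key like "output"), you could then do something like: extract_snippets(requests.get(result["output"]).content).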

Conclusion

Mochi1 Video Split offers a powerful solution for developers looking to streamline their video preprocessing tasks in AI training. By automating the segmentation of videos into concise snippets, it not only saves time but also enhances the quality of the training data. Whether you're developing AI models or creating content for various applications, this cognitive action provides the flexibility and efficiency you need.

As a next step, consider integrating Mochi1 Video Split into your workflow to enhance your video data preparation and training processes.