Mistral Connector

Content Type: Module
Categories: Artificial Intelligence

Overview

The Mendix connector for Mistral's APIs and large language models.

Getting started

  1. Follow Mistral's Quickstart guide.
  2. Try out the GenAI showcase app.
  3. Download this connector for your own app in Studio Pro.
  4. Review the Mistral Documentation.

Because the Mistral API is compatible with OpenAI, the OpenAI connector is used to execute all API calls apart from Mistral-specific ones (e.g., 'List Mistral Models').
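To make this compatibility concrete, here is a minimal sketch (plain Python, outside of Mendix) of the OpenAI-style request body that such a call sends to Mistral's endpoint; the model name is an example and the payload is illustrative, not the connector's internal implementation:

```python
import json

# Mistral's chat completions endpoint accepts the OpenAI request schema,
# which is why the OpenAI Connector can execute the call unchanged.
MISTRAL_CHAT_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "mistral-small-latest") -> dict:
    """Build an OpenAI-compatible chat completions payload for Mistral."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Summarize this text: ...")
print(json.dumps(payload, indent=2))
```

Only the base URL and the model name differ from a call to OpenAI itself; the message structure is identical.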


Text generation

Develop interactive AI chatbots and virtual assistants that can carry out conversations in a natural and engaging manner. Use Mistral models for text comprehension and analysis use cases such as summarization, synthesis, and answering questions about large amounts of text. You can also fine-tune Mistral models on a specific task or domain by training them on custom data to improve their performance.

This connector simplifies integration with Mistral's platform through interfaces for straightforward configuration and creation of deployed model objects.

All chat completions operations within the OpenAI Connector also support JSON mode, function calling, and vision for Mistral.
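As a rough sketch of where these features live in the OpenAI-compatible request schema (field names follow that schema; the model name and the `get_ticket_status` tool are made-up examples, and in practice you would typically enable JSON mode or tools per call as your use case requires):

```python
def build_tool_call_request(prompt: str) -> dict:
    """Illustrative chat completions payload marking where JSON mode and
    function calling are expressed in an OpenAI-style request."""
    return {
        "model": "mistral-small-latest",              # example model name
        "response_format": {"type": "json_object"},   # JSON mode
        "tools": [{                                   # function calling
            "type": "function",
            "function": {
                "name": "get_ticket_status",          # hypothetical function
                "parameters": {
                    "type": "object",
                    "properties": {"ticket_id": {"type": "string"}},
                },
            },
        }],
        "messages": [{"role": "user", "content": prompt}],
    }
```

Vision works the same way: the `content` of a user message becomes a list mixing text parts and image parts instead of a plain string.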


With chat completions, you can build applications to:

  • Draft documents
  • Write computer code
  • Answer questions about a knowledge base
  • Analyze texts
  • Give software a natural language interface
  • Tutor in a range of subjects
  • Translate languages
  • Simulate characters for games
  • Analyze images with vision


Image generation

Image generation is currently not supported by this connector. See Mistral's documentation for more about image generation tools for Mistral agents.

Embeddings

Convert strings into vector embeddings for various purposes based on the relatedness of texts. Embeddings are commonly used for:

  • Search
  • Clustering
  • Recommendations
  • Anomaly detection
  • Diversity measurement
  • Classification

Leverage specific sources of information to create a smart chat functionality tailored to your own knowledge base. Combine embeddings with text generation capabilities and implement Retrieval Augmented Generation (RAG) in your own Mendix application.
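The retrieval step of RAG boils down to comparing embedding vectors by relatedness. A minimal sketch with toy 3-dimensional vectors (real Mistral embeddings have many more dimensions, and the document names are invented):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for vectors returned by an embeddings model.
knowledge_base = {
    "pricing page":  [0.9, 0.1, 0.0],
    "setup guide":   [0.1, 0.8, 0.2],
    "release notes": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # embedding of the user's question

# Retrieve the most related chunk to pass to the chat model as context.
best = max(knowledge_base, key=lambda doc: cosine_similarity(query, knowledge_base[doc]))
print(best)  # → pricing page
```

In a RAG flow, the retrieved chunk is then inserted into the prompt of a text generation call so the model answers from your own knowledge base.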


Please note that Mistral requires larger numbers of embedding chunks to be processed in batches. The Clustering example inside the GenAI Showcase App contains a batching example for Mistral.
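The batching idea is simple to sketch: split the list of text chunks into slices no larger than a chosen batch size and send one embeddings request per slice (the batch size of 4 here is arbitrary, not a Mistral limit):

```python
def batched(texts, batch_size):
    """Split a list of text chunks into batches no larger than batch_size,
    so each batch fits within a single embeddings API call."""
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]

chunks = [f"chunk-{i}" for i in range(10)]
for batch in batched(chunks, batch_size=4):
    # each batch would be sent as one embeddings request
    print(len(batch))  # → 4, 4, 2
```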

Knowledge Base

You can combine Mistral models for text generation with all supported knowledge base providers, for example Mendix Cloud GenAI.

Contact Us

For support and questions, feel free to reach out via email at genai-components-feedback@mendix.com or on the Mendix Community Slack.


Documentation

You can find documentation about this component here: Mistral | Mendix Documentation.


Releases

Version: 1.0.0
Framework Version: 10.24.0
Release Notes:

Initial release.