OpenAI Connector

Content Type: Module
Categories: Connectors, Artificial Intelligence

Overview


The Mendix connector for OpenAI’s APIs and large language models.


Getting started

  1. Signing up for an OpenAI account* (or leveraging your Azure account).
  2. Trying out the GenAI showcase app.
  3. Downloading this connector for your own app in Studio Pro.
  4. Reviewing the OpenAI documentation.


* If you have signed up for an OpenAI account and are using free trial credits, note that these are only valid for three months after the account has been created (not after the API key has been created). For more details, see the OpenAI API reference.

Text generation

Develop interactive AI chatbots and virtual assistants that can carry out conversations in a natural and engaging manner. Use OpenAI’s large language models for text comprehension and analysis use cases such as summarization, synthesis, and answering questions about large amounts of text. Fine-tune OpenAI models for a specific task or domain by training them on custom data to improve their performance.

This connector simplifies integration with OpenAI’s platform.

All chat completions operations within the OpenAI connector support JSON mode, function calling, and vision.
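
Under the hood, these chat completions operations wrap the OpenAI chat completions REST API. For orientation only, here is a minimal, non-authoritative Python sketch of what JSON mode amounts to at that level, assuming the openai package (>=1.0) and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative and not part of the connector:

    # Hedged sketch of the chat completions request the connector wraps.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4-turbo",                      # illustrative JSON-mode-capable model
        response_format={"type": "json_object"},  # JSON mode: the reply must be valid JSON
        messages=[
            {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
            {"role": "user", "content": "Summarize Mendix in one sentence."},
        ],
    )
    print(response.choices[0].message.content)  # a valid JSON string

Note that the OpenAI API requires the word "JSON" to appear somewhere in the messages when JSON mode is enabled, which is why the system prompt mentions it.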

With chat completions, you can build applications to:

  • Draft documents
  • Write computer code
  • Answer questions about a knowledge base
  • Analyze texts
  • Give software a natural language interface
  • Tutor in a range of subjects
  • Translate languages
  • Simulate characters for games
  • Analyze images with vision (see the sketch below)
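
For the vision case, a rough sketch of the equivalent direct API call, again assuming the openai Python package; in the connector, the ImageCollection parameter plays the role of the image_url content part, and the model and URL below are placeholders:

    # Hedged sketch of a vision-enabled chat completions request.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # illustrative vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
            ],
        }],
    )
    print(response.choices[0].message.content)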

OpenAI provides market-leading large language model capabilities with GPT-4:

  • Advanced reasoning: Follow complex instructions in natural language and solve difficult problems with accuracy.
  • Creativity: Generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style.
  • Longer context: GPT-4 is capable of handling over 25,000 words of text, allowing for use cases like long-form content creation, extended conversations, and document search and analysis.


Image generation

Generate one or more completely new, original images and art from a text description. Powered by the OpenAI DALL-E API, the connector enables developers to generate these images by combining concepts, attributes, and styles.
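
For orientation, the connector’s image generation operation corresponds to a call like the following against the OpenAI Images API (a minimal sketch using the openai Python package; the model, prompt, and size are illustrative, and response_format mirrors the url/base64 ResponseFormat choice described in the release notes):

    # Hedged sketch of the DALL-E request behind an image generation operation.
    from openai import OpenAI

    client = OpenAI()
    result = client.images.generate(
        model="dall-e-3",      # illustrative model
        prompt="A watercolor painting of a robot sketching app screens",
        n=1,                   # dall-e-3 generates one image per request
        size="1024x1024",
        response_format="url", # or "b64_json" for base64-encoded image data
    )
    print(result.data[0].url)  # short-lived URL to the generated image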


Embeddings

Convert strings into vector embeddings for various purposes based on the relatedness of texts. Embeddings are commonly used for:

  • Search
  • Clustering
  • Recommendations
  • Anomaly detection
  • Diversity measurement
  • Classification

Leverage specific sources of information to create a smart chat functionality tailored to your own knowledge base. Combine embeddings with text generation capabilities and implement Retrieval Augmented Generation (RAG) in your own Mendix application.
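
To make the RAG idea concrete, here is a self-contained sketch of retrieval by embedding relatedness. In a Mendix app, the connector’s embeddings operations and a knowledge base module such as PgVector take these roles; the model name and documents are illustrative:

    # Hedged sketch: rank documents by cosine similarity of their embeddings.
    import math
    from openai import OpenAI

    client = OpenAI()

    def embed(text: str) -> list[float]:
        # One embeddings call per string; the connector also supports list input.
        return client.embeddings.create(
            model="text-embedding-3-small",  # illustrative embeddings model
            input=text,
        ).data[0].embedding

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    docs = [
        "Mendix is a low-code application development platform.",
        "DALL-E generates images from natural-language descriptions.",
    ]
    doc_vectors = [embed(d) for d in docs]

    query = "What is Mendix?"
    query_vector = embed(query)
    best = max(range(len(docs)), key=lambda i: cosine(query_vector, doc_vectors[i]))
    print(docs[best])  # the most related document; pass it to a chat completion as context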


Compatibility

This connector is compatible with OpenAI's platform as well as the Azure OpenAI Service.

Get started integrating generative AI into your Mendix app with an OpenAI or Azure trial account and this connector today!

Documentation


GenAI showcase app

Try out our example showcase app here!


Reference documentation

Technical documentation is available at docs.mendix.com.

Releases

Version: 3.4.1
Framework Version: 9.24.2
Release Notes: Added a migration file that was missing in release 3.4.0. We replaced many actions inside the chat completions operations with a new GenAICommons action that processes the request. This requires the newest version of the GenAICommons module. In addition, we improved the log messages for failed operations.
Version: 3.3.0
Framework Version: 9.24.2
Release Notes: The operations for chat completions and embeddings now store token usage data for every successful call, if enabled in GenAI Commons. This can be used for usage monitoring purposes. Updating the GenAI Commons module is required for this connector version to compile. To display usage data, pages and logic were made available in the Conversational UI module. See the GenAI showcase app for an example implementation.
Version: 3.2.0
Framework Version: 9.24.2
Release Notes: We made the connector compatible with the newest GenAI Commons version 1.2.0 and updated the domain model documentation.
Version: 3.1.1
Framework Version: 9.24.2
Release Notes: We fixed a bug causing NullPointerExceptions while doing function calls.
Version: 3.1.0
Framework Version: 9.24.2
Release Notes: We have made the module compatible with the new version of GenAI Commons for Embeddings and Image Generations. This enables easy switching between model providers and tight integration with the PgVector Knowledge Base module. Additionally, we have improved error logging when calling (Azure) OpenAI, so you can now see the full error response body in the logs.
Version: 3.0.0
Framework Version: 9.24.2
Release Notes: The OpenAI connector now reuses many generic entities and operations from the GenAI Commons module. This makes it easier to build vendor-agnostic applications and swap models, and it integrates well with the newly released Conversational UI module. Updating will cause errors in existing chat completions implementations. To mitigate those, you need to change the input parameters of the operations and the post-processing of the response. Examples of how this can be done can be found in the OpenAI Showcase App. Lastly, the complex “Chat Completions (advanced)” operation was removed; it can be replaced with the regular “Chat Completions (with history)” operation.
Version: 2.7.1
Framework Version: 9.24.0
Release Notes: Added userlibs that were missing in release 2.7.0. All chat completions operations now support vision, which enables models like GPT-4 Turbo to interpret and analyze images, allowing them to answer questions and perform tasks related to visual content. For Chat Completions without History, the ImageCollection is an optional input parameter; for Chat Completions with History, the ImageCollection can optionally be added to individual user messages in ChatCompletionsSession_AddMessage. Additionally, the chat completions operations have a new optional input parameter called “MaxTokens” to control the maximum number of tokens to generate in the chat completion.
Version: 2.2.0
Framework Version: 9.24.0
Release Notes: We made several small improvements based on community feedback. We removed all ‘Show message’ activities from our exposed operations and replaced them with log messages. Furthermore, we added a Boolean return value called 'Success' to the Embeddings (list input) operation. Lastly, we shortened the names of the exposed operations so they are better displayed in the Toolbox in Studio Pro.
Version: 2.0.0
Framework Version: 9.24.0
Release Notes: We have included three new operations that can be used to invoke the Embeddings API and create vector embeddings for a single String or a list of Strings. Furthermore, we included additional microflows in the Advanced folder of all operations that you can use to create request objects. For Azure OpenAI, we now support authorization with an api-key, besides the existing Microsoft Entra token. Please note that your organization's security settings determine which authorization methods are allowed for your account. Breaking changes: We removed the default models per operation from the Configuration entity and changed the Model input parameters in all exposed microflows from Enumeration to String to give developers more flexibility in choosing a model without depending on a fixed list of values. Furthermore, we introduced two new entities: AbstractUsage and ConfigurationTest. ChatCompletionsUsage is now a specialization of AbstractUsage. ConfigurationTest was introduced so we could remove entity access from ChatCompletionsSession and ChatCompletionsSessionMessage; it is used in the user flow of testing a newly set up configuration with a simple chat completions call.
Version: 1.3.0
Framework Version: 9.24.0
Release Notes: We have included JSON mode in the chat completions operations. This mode forces the compatible language models to always return valid JSON as the response. For this, we have extended the non-advanced operations for chat completions with the responseFormat input parameter. For existing implementations that rely on a text response, this can be set to Enumeration value "text" or left empty (to let the system itself assume the default). For use cases where always valid JSON is required as a response, the Enumeration value must be set to "json_object". For image generations operations, developers can now set the ResponseFormat (url vs. base64) field. This value determines how the retrieval of the image will happen in the implementation. The new field is optional, for existing usages the value can be set to URL or left empty and let the API assume the default value. Lastly, we have improved the user experience when creating configurations.