OpenAI Connector

Content Type: Module
Categories: Connectors,Artificial Intelligence

Overview


The Mendix connector for OpenAI's APIs and models: the GPT-3.5 and GPT-4 large language models that power ChatGPT, and the DALL-E image generation models.



Getting started

  1. Sign up for an OpenAI account, or leverage your existing Azure account.
  2. Try out the example app.
  3. Download this connector for your own app in Studio Pro.
  4. Review the OpenAI documentation.


Text generation

Develop interactive AI chatbots and virtual assistants that can carry out conversations in a natural and engaging manner. Use OpenAI’s large language models for text comprehension and analysis use cases such as summarization, synthesis, and answering questions about large amounts of text. Fine-tune the OpenAI models on a specific task or domain by training them on custom data to improve their performance.

This connector simplifies integration with OpenAI’s platform. According to OpenAI, you can use its text generation models (the technology powering ChatGPT) to build applications that:

  • Draft documents
  • Write computer code
  • Answer questions about a knowledge base
  • Analyze texts
  • Give software a natural language interface
  • Tutor in a range of subjects
  • Translate languages
  • Simulate characters for games

OpenAI provides market-leading large language model capabilities with GPT-4:

  • Advanced reasoning: Follow complex instructions in natural language and solve difficult problems with accuracy.
  • Creativity: Generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style.
  • Longer context: GPT-4 is capable of handling over 25,000 words of text, allowing for use cases like long-form content creation, extended conversations, and document search and analysis.
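Under the hood, chat-based text generation maps to OpenAI's Chat Completions REST endpoint. The sketch below (plain Python, not part of the connector; `build_chat_request` is a hypothetical helper and the model name is a placeholder) shows the shape of the request payload, including the optional JSON mode described in the release notes:

```python
import json

API_KEY = "sk-..."  # placeholder; supply your own OpenAI API key

def build_chat_request(user_message, system_prompt=None, json_mode=False):
    """Build the JSON payload for POST https://api.openai.com/v1/chat/completions."""
    messages = []
    if system_prompt:
        # The system message steers the assistant's behavior
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    payload = {"model": "gpt-4", "messages": messages}
    if json_mode:
        # JSON mode forces compatible models to always return valid JSON
        payload["response_format"] = {"type": "json_object"}
    return payload

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
body = json.dumps(build_chat_request("Summarize this text...", json_mode=True))
```

In a Mendix app the connector's exposed microflows build and send this request for you; the sketch only illustrates what travels over the wire.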



Image generation

Generate one or more completely new, original images and art from a text description. Powered by the OpenAI DALL-E API, the connector enables developers to generate these images by combining concepts, attributes, and styles.
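The DALL-E API behind these operations accepts a text prompt plus a few options, including the `response_format` field (URL vs. base64) mentioned in the release notes. A minimal sketch of the request payload (`build_image_request` is a hypothetical helper, not a connector API):

```python
def build_image_request(prompt, n=1, size="1024x1024", response_format="url"):
    """Payload for POST https://api.openai.com/v1/images/generations.

    response_format controls how the generated image is returned:
    "url" for a temporary download link, "b64_json" for inline base64 data.
    """
    assert response_format in ("url", "b64_json")
    return {
        "prompt": prompt,
        "n": n,                            # number of images to generate
        "size": size,                      # e.g. "1024x1024"
        "response_format": response_format,
    }
```

Choosing "url" keeps responses small but the link expires; "b64_json" embeds the image bytes directly, which suits storing the result as a Mendix FileDocument.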



Embeddings

Convert strings into vector embeddings for various purposes based on the relatedness of texts. Embeddings are commonly used for:

  • Search
  • Clustering
  • Recommendations
  • Anomaly detection
  • Diversity measurement
  • Classification

Leverage specific sources of information to create a smart chat functionality tailored to your own knowledge base. Combine embeddings with text generation capabilities and implement Retrieval Augmented Generation (RAG) in your own Mendix application.
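The retrieval step at the heart of these use cases boils down to comparing embedding vectors by cosine similarity: embed the knowledge-base chunks once, embed the user's question at query time, and pass the closest chunks to the text generation model as context. A self-contained sketch of that comparison (toy vectors stand in for real embeddings returned by the API):

```python
import math

def cosine_similarity(a, b):
    """Relatedness of two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_related(query_vec, docs):
    """docs is a list of (text, embedding) pairs; return the closest text."""
    return max(docs, key=lambda d: cosine_similarity(query_vec, d[1]))[0]
```

In a RAG flow, `most_related` would return the knowledge-base passage to inject into the chat completions prompt; production setups typically delegate this search to a vector database rather than scanning a list.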



Compatibility

This connector is compatible with OpenAI's platform as well as Azure's OpenAI service*.

Get started integrating generative AI into your Mendix app with an OpenAI or Azure trial account and this connector today!


*The Azure API currently supports operations for Chat Completions and Embeddings only.
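The two platforms differ mainly in how requests are authorized: OpenAI's API expects a Bearer token, while Azure OpenAI accepts an api-key header (or a Microsoft Entra token, as noted in the 2.0.0 release notes). A small sketch of that difference (`auth_headers` is a hypothetical helper for illustration only):

```python
def auth_headers(provider, secret):
    """Return the HTTP auth header for the given platform.

    OpenAI uses an Authorization: Bearer header with the API key;
    Azure OpenAI accepts a plain api-key header instead.
    """
    if provider == "openai":
        return {"Authorization": f"Bearer {secret}"}
    elif provider == "azure":
        return {"api-key": secret}
    raise ValueError(f"unknown provider: {provider}")
```

The connector's configuration handles this choice for you; which methods your Azure account may use depends on your organization's security settings.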

Documentation

Dependencies

  • Encryption module
  • Community Commons module

Showcase app

Try out our example showcase app here!


Reference documentation

Technical documentation is available on docs.mendix.com.

Releases

Version: 2.2.0
Framework Version: 9.24.0
Release Notes: We made several small improvements based on community feedback. We removed all ‘Show message’ activities from our exposed operations and replaced them with log messages. Furthermore, we added a Boolean return value called 'Success' to the Embeddings (list input) operation. Lastly, we shortened the names of the exposed operations so they are better displayed in the Toolbox in Studio Pro.
Version: 2.0.0
Framework Version: 9.24.0
Release Notes: We have included three new operations that can be used to invoke the Embeddings API and create vector embeddings for a single String or a list of Strings. Furthermore, we included additional microflows in the Advanced folder of all operations that you can use to create request objects. For Azure OpenAI, we now support authorization with an api-key, in addition to the existing Microsoft Entra token. Please note that which authorization methods are allowed for your account depends on your organization's security settings. Breaking changes: We removed the default models per operation from the Configuration entity and changed the Model input parameters in all exposed microflows from Enumeration to String, giving developers more flexibility in choosing a model without depending on a fixed list of values. Furthermore, we introduced two new entities: AbstractUsage and ConfigurationTest. ChatCompletionsUsage is now a specialization of AbstractUsage. ConfigurationTest was introduced so we could remove entity access from ChatCompletionsSession and ChatCompletionsSessionMessage; it is used in the user flow of testing a newly set up configuration with a simple chat completions call.
Version: 1.3.0
Framework Version: 9.24.0
Release Notes: We have included JSON mode in the chat completions operations. This mode forces compatible language models to always return valid JSON as the response. For this, we have extended the non-advanced operations for chat completions with the responseFormat input parameter. For existing implementations that rely on a text response, this can be set to Enumeration value "text" or left empty (to let the system assume the default). For use cases where valid JSON is always required as a response, the Enumeration value must be set to "json_object". For image generation operations, developers can now set the ResponseFormat (url vs. base64) field. This value determines how the image will be retrieved in the implementation. The new field is optional; for existing usages, the value can be set to URL or left empty to let the API assume the default value. Lastly, we have improved the user experience when creating configurations.
Version: 1.2.0
Framework Version: 9.24.0
Release Notes: We have included new operations that can be used to generate images using OpenAI's DALL-E model. There is now an extra dependency on the Community Commons Marketplace module: if you do not have it in your app already, make sure to include it. Furthermore, for Chat Completions there has been a change in the return value of a microflow: ChatCompletions_Execute_WithHistory now directly returns the response text (String) instead of a complex structure. This means that in your own flow, you can most likely remove any custom logic to extract the assistant response string. If, however, you still need the complex structure, use microflow ChatCompletions_CallAPI and construct the input request yourself. This microflow is now also exposed.
Version: 1.1.0
Framework Version: 9.24.0
Release Notes: Minimal improvements to make your life just a little better.