AI Bot Starter App
Overview
Build your own private, enterprise-grade ChatGPT-like solution. Connect it to a model like Anthropic Claude or Amazon Titan via Amazon Bedrock, or use an (Azure) OpenAI subscription.
With custom instructions, the Bot can help your users go beyond a simple question-and-answer mode. Imagine that it can be:
- someone to brainstorm with and help you come up with creative ideas, or simply a rubber duck,
- a copywriter that helps you with writing emails, catchy LinkedIn posts or whole blog posts,
- a helpful assistant that understands larger texts by getting translations, summaries or even sentiments,
- a researcher that can analyze PDFs and answer any questions you may have about them,
- a peer that can effortlessly solve all your coding challenges.
Ground the application in your own data by linking it to your data sources in a Retrieval-Augmented Generation (RAG) setup, or query live data with function calling through the ReAct pattern; it is all possible.
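As a rough illustration of the RAG idea (not the app's actual implementation; the `retrieve` helper and its knowledge base are hypothetical stand-ins), a request could be grounded like this:

```python
# Minimal RAG sketch (illustrative only, not the app's implementation):
# retrieve relevant snippets from your own data, prepend them to the
# system prompt, and have the model answer from that context.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def retrieve(question: str, k: int = 3) -> list[str]:
    # Hypothetical stand-in: query your vector store / knowledge base here.
    return ["<relevant snippet 1>", "<relevant snippet 2>", "<relevant snippet 3>"][:k]


def answer_with_rag(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using only the context below.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```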
Documentation
Configuration
All modules needed to interact with an LLM from (Azure) OpenAI or Amazon Bedrock are already installed, and the app can be configured for either provider out of the box. Feel free to add your own models or remove the existing ones.
To use Amazon Bedrock models, you need to configure your credentials (see AWS Authentication) before starting the application. Only the AWS region and whether to use static credentials can be selected at runtime. To check which models are available in which region, see AWS Model Support.
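For orientation, here is a minimal sketch of what static credentials plus a region amount to, expressed with boto3 (the key values are placeholders; setting the standard AWS environment variables achieves the same):

```python
# Sketch: static AWS credentials and a region for Bedrock, via boto3.
# The key values are placeholders; never commit real keys to source control.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",      # placeholder
    aws_secret_access_key="...",      # placeholder
    region_name="us-east-1",          # choose a region where your model is available
)
bedrock = session.client("bedrock-runtime")
```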
In contrast, access to OpenAI can be fully configured at runtime, provided that the encryption key and prefix constants are defined in the environment or local configuration.
Before users can chat with a model, the admin needs to create Bot Configuration(s) for users to select in the chat interface.
- Display name: what users will see on the page
- Architecture: OpenAI or Bedrock
- Is Selectable in UI: decide if a configuration should be selectable
- Model selection: OpenAI / Amazon Bedrock specific models
- Action microflow: select which action microflow should be executed. The provided one ("ChatContext_ChatWithHistory_ActionMicroflow") supports both architectures. Feel free to customize the microflow to your needs (a sketch of the kind of call it makes follows this list)
- Amazon Bedrock only: select a Knowledge Base if the corresponding action microflow was selected (knowledge bases need to be configured first in the AWS console)
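To make the action microflow's job concrete, here is a rough Python sketch of the kind of chat-with-history call it performs for the Bedrock architecture, using the Bedrock Converse API (the model ID and message contents are placeholders; in the app this logic lives in a microflow, not Python):

```python
# Sketch: a chat-with-history call against Bedrock's Converse API.
# Model ID and message contents are placeholders.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    system=[{"text": "You are a helpful assistant."}],
    messages=[
        {"role": "user", "content": [{"text": "Earlier user message"}]},
        {"role": "assistant", "content": [{"text": "Earlier assistant reply"}]},
        {"role": "user", "content": [{"text": "The new user message"}]},
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```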
Additionally, you can create starting points for prompt engineering that will be available to your users. Both types can be configured at runtime (or in the After Startup microflow):
- Instructions: a prefilled instruction that can be selected in the chat interface. Instructions augment the "System Prompt" and influence the model's behavior (see the sketch after this list). Users can create their own custom instructions as well (visible only to them).
- Initial User Prompts: buttons the user can click in a new chat to prefill the user prompt and a corresponding instruction.
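Conceptually, a selected instruction is appended to the base system prompt before the request is sent. A minimal sketch of that idea (both prompt texts and variable names are invented for illustration):

```python
# Sketch: an Instruction augments the base system prompt before the call.
# Both prompt texts are invented for illustration.
BASE_SYSTEM_PROMPT = "You are a helpful assistant."
instruction = "Act as a professional copywriter and keep answers concise."

system_prompt = BASE_SYSTEM_PROMPT + "\n\n" + instruction

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Write a LinkedIn post about our new AI chatbot."},
]
```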
To give you an idea, a few examples of both are created automatically. They can be deleted at runtime and/or deactivated in the After Startup (ASU) microflow if not needed:
- Summarize my meeting notes: inserts a sample text of meeting notes and instructs the LLM to find important information (deadlines, action points). The LLM should then summarize the text using headers and bullet points.
- Launch an AI Chatbot internally: instruct the LLM to create an engaging text for the target audience. The text should describe the technology and business impact.
- Launch an AI Chatbot on LinkedIn: the LLM should create an engaging LinkedIn post with hashtags and emojis about a recently launched AI chatbot.
- Help me decide launching an MVP product: the LLM helps you brainstorm and prepare for discussions about your MVP product.
Customize AI Bot
This app serves as a starting point, and there are many ways to customize it:
- Add your custom styling (see Customize Styling for more details)
- Customize the initial user prompts that are suggested in new chats
- Redesign the chat page to your needs
- Add custom pre- or post-processing logic to the action microflow that interacts with the LLM, for example to bring your own knowledge base
- Add your own LLM provider connector (a minimal interface sketch follows this list)
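A provider connector ultimately reduces to one operation: take a conversation, return a completion. As a hypothetical sketch of such an interface (this is not the app's actual contract, just an illustration of the shape):

```python
# Hypothetical connector shape: any provider that can turn a conversation
# into a reply can be plugged in behind an interface like this.
from abc import ABC, abstractmethod


class LLMConnector(ABC):
    @abstractmethod
    def chat(self, system_prompt: str, messages: list[dict]) -> str:
        """Send the conversation to the provider and return the reply text."""


class MyProviderConnector(LLMConnector):
    def chat(self, system_prompt: str, messages: list[dict]) -> str:
        # Call your provider's API here and return the completion text.
        raise NotImplementedError
```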
PDF Extraction
When using the built-in PDF extraction capability, keep in mind that it extracts the text from the PDF and inserts it into the prompt. This means that:
- Graphics and formatting from the PDF will mostly be ignored.
- Long PDFs can reduce the accuracy of responses (the important content gets diluted) or even cause an error when making a request (by exceeding the model's context window). A warning is shown to the user when a long PDF is used.
- As the PDF's content is passed along with the request, token usage, and thus cost, increases (see the sketch after this list).
- Not every model performs equally well with PDF content (for example, OpenAI's GPT-4o performs better than GPT-3.5 Turbo).
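To get a feel for that token overhead, the following sketch estimates how many tokens a PDF adds to a prompt (it uses the pypdf library and the rough rule of thumb of about four characters per token for English text; the file name is a placeholder, and the exact count depends on the model's tokenizer):

```python
# Sketch: estimate a PDF's token footprint before adding it to a prompt.
# Uses pypdf; ~4 characters per token is a rough heuristic for English text.
from pypdf import PdfReader

reader = PdfReader("report.pdf")  # placeholder file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)

approx_tokens = len(text) // 4
print(f"{len(reader.pages)} pages, ~{approx_tokens} tokens added to the prompt")
```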