Navigating AI Chat: Challenges, Alternatives, and Best Practices

As AI technology continues to advance, more professionals are incorporating tools like ChatGPT into their daily workflows. But how well do we really understand these powerful tools? After spending a month working closely with AI, our Solution Architect Casper gained valuable insights into how generative AI functions, its limitations, and how it can be effectively integrated into different environments. 

Casper Wandrup
3 min read

What is Generative AI Chat?

Most people have heard of ChatGPT, and many professionals use it daily. By prompting it with questions, ChatGPT generates answers to the best of its abilities—a process known as text generation.

ChatGPT is an AI language model that produces output based on data it has been trained on. A common misconception is that ChatGPT looks up information on the internet in real-time. In reality, it doesn’t have internet access. Instead, it was trained on a vast dataset sourced from the internet, so its responses are based on the pre-existing knowledge within its training data.
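The distinction between looking information up and generating it from learned patterns can be illustrated with a toy sketch. This is a vastly simplified, hypothetical illustration (real models learn far richer statistics than word pairs), but the principle is the same: output is produced from patterns captured at training time, with no live internet access.

```python
import random
from collections import defaultdict

# Toy "training": record which word follows which in a tiny corpus.
# Generation then draws only on these recorded patterns -- nothing
# is looked up live, just as a language model answers from what it
# absorbed during training.
corpus = "the model generates text from patterns the model learned during training".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 5, seed: int = 0) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:  # no known continuation in the training data
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

If a word never appeared in the corpus, the toy model simply has nothing to say about it, which mirrors why ChatGPT cannot answer questions about events outside its training data.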


How Does ChatGPT Work?

Training an AI language model like ChatGPT is neither easy nor cheap. These models are trained on a large, diverse, and carefully selected body of content from the internet, including e-books, articles, websites, and other publicly available sources. Certain types of information, such as personal or confidential data, are excluded to maintain privacy and ethical standards.

In essence, ChatGPT doesn’t know anything beyond its training data. However, it is excellent at understanding human input contextually and synthesizing information to generate text-based replies.

Alternatives for Real-Time Data in Generative AI Chats

There are alternatives for integrating real-time data into generative AI chats. For instance, Meta has released a potential rival to ChatGPT that can include information from searches on Google or Bing. Currently, this service is available in a few countries, none of which are in the EU.


The Limitations of AI Chat

AI language models only know what they have been trained on, and training them is a complex and time-consuming process. When using ChatGPT, we rely on OpenAI to develop new versions of their language models to keep information up to date. OpenAI is continuously evolving, providing regular updates and improvements, which is beneficial for end-users.

However, many companies have policies against using ChatGPT for work-related tasks due to security concerns about sharing information with third parties. Fortunately, there are ways to integrate AI chat into your organization securely, including hosting the language models on-premises if needed.


RAG: Retrieval-Augmented Generation in Apps

Applications using the RAG (Retrieval-Augmented Generation) architecture leverage existing language models to understand prompts contextually while using a search service to provide the language model with real-time data it wasn't pre-trained with.
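In code, that flow can be sketched roughly as follows. Everything here is hypothetical and minimal: the "search service" is a naive keyword scorer and the "language model" is a stub, standing in for a real search index and model call.

```python
# Minimal RAG flow sketch: retrieve relevant documents, then augment
# the user's prompt with them before calling the language model.
# All names and data are hypothetical.

documents = {
    "vacation-policy": "Employees accrue 25 vacation days per year.",
    "expense-policy": "Expenses are reimbursed within 14 days of filing.",
}

def search(query: str, top_k: int = 1) -> list[str]:
    """Naive 'search service': rank documents by query-word overlap."""
    words = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def call_language_model(augmented_prompt: str) -> str:
    # Stand-in for a real model call; here it just echoes the
    # grounded prompt so the flow is visible.
    return augmented_prompt

def answer(prompt: str) -> str:
    """Retrieve context the model was never trained on, then hand it
    to the model together with the user's question."""
    context = "\n".join(search(prompt))
    augmented_prompt = (
        "Answer using only this context:\n"
        f"{context}\n\nQuestion: {prompt}"
    )
    return call_language_model(augmented_prompt)

print(answer("How many vacation days do employees get?"))
```

The key design point is that the model itself is unchanged: freshness comes entirely from what the search step feeds into the prompt.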

Here’s an example of how a RAG setup can enable an organization to have its own AI chat that incorporates its own data.

The Empact AI Chat is a feature within the Empact Platform, serving as the app's AI assistant. When a user submits a prompt, a search service retrieves related results, and the language model uses them to generate a response that includes references and knowledge drawn from those results.

We’re constantly developing new features, and this is one we’re working on now: an AI assistant that users can interact with just as they would with ChatGPT, receiving responses based on content from the Empact knowledge base or news articles.


Benefits of Using AI in a Controlled Environment

Using AI in a controlled environment, such as your employee engagement app from Empact, ensures that your organization's data isn't shared with arbitrary third-party services. Additionally, you can enhance the language model with your own content without needing to retrain it. You control the data fed to the search service, and you can instruct the language model on how to handle responses for your employees.


Data Management in RAG Applications

It’s crucial to consider what data you feed into your search service, as users can access this information through the AI assistant in the app. Here are two potential solutions for managing data access:

  1. Restrict Search Results Based on User Permissions: The language model generates answers only from data the user is permitted to see, which requires attaching permission levels to every document in the search service. This solution can be complex to manage and maintain.

  2. Create Multiple Search Services: Set up 2-3 search services with different data levels (e.g., "level 1", "level 2", "level 3"). Access is granted based on user roles, with higher levels providing more content. This setup can be customized to your organization’s needs.
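Both approaches come down to the same mechanism: filter what the search service returns before it ever reaches the language model. Here is a hypothetical sketch of the first option, with per-document permission levels; all names and data are invented for illustration.

```python
# Sketch of option 1: filter search results by the user's permission
# level before the language model sees them. Hypothetical names/data.

from dataclasses import dataclass

@dataclass
class Document:
    text: str
    required_level: int  # minimum user level allowed to see this document

index = [
    Document("Company holiday calendar", required_level=1),
    Document("Management salary bands", required_level=3),
]

def search_for_user(user_level: int) -> list[str]:
    """Return only documents the user is cleared to see; the model
    then generates answers solely from this filtered set."""
    return [doc.text for doc in index if user_level >= doc.required_level]

print(search_for_user(1))  # a level-1 employee never sees level-3 content
```

Under the second option, the same filtering step instead becomes choosing which of the 2-3 separate search services to query for a given user role.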

If you have any input or feedback about developing AI services, feel free to reach out. I’d love to engage in a technical discussion.