In this article, you will learn how to improve your agents' responses by allowing them to automatically use information from your content bases to generate more accurate and contextualized answers.
Optimization of AI-generated responses with content bases is a feature designed so that agents intelligently use the content uploaded to your business. This way, the generated responses do not depend solely on the language model, but draw directly on your company's specific, up-to-date information.
The main objective is to improve the accuracy, context, and value of AI-generated interactions. By connecting agent prompts with content bases, the system ensures that the information delivered to the customer is reliable and aligned with the documents, websites, or manuals uploaded to the platform.

This functionality operates automatically and on demand, which optimizes both bot performance and associated costs.

Automatic activation
The enrichment process is triggered the moment an agent executes a prompt. The system automatically considers all of the business's active content bases when searching for information relevant to the user's query. To learn more about uploading and working with content bases, see “Best practices for using content bases”.
Smart summary generation
For the search to be efficient, the system performs a preliminary step: it generates a concise summary of each active content base. These summaries are created on demand and allow the AI to quickly determine which bases are relevant to a given query.

To guarantee that the AI never responds with outdated information, the system has an automatic synchronization mechanism: when you edit or update a content base, its previous summary is automatically deleted. On the next prompt execution, the system generates an updated summary on demand. This ensures full consistency between your documents and the agent's responses.
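The on-demand summary lifecycle described above can be sketched as a simple lazy cache with invalidation. This is an illustrative model only, assuming a hypothetical `summarize()` helper standing in for the LLM summarization call:

```python
def summarize(text: str) -> str:
    # Placeholder: in the real system, an LLM produces the summary.
    return text[:100]

class ContentBase:
    def __init__(self, name: str, content: str):
        self.name = name
        self.content = content
        self._summary = None  # generated lazily, on demand

    def update(self, new_content: str) -> None:
        """Editing the base deletes the cached summary (invalidation)."""
        self.content = new_content
        self._summary = None

    def get_summary(self) -> str:
        """Regenerate the summary only if it was invalidated or never built."""
        if self._summary is None:
            self._summary = summarize(self.content)
        return self._summary
```

Because the summary is rebuilt only after an update, stale summaries can never be used to answer a query, and no summarization cost is incurred while the base stays unchanged.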
When an agent receives a query, the system follows these steps to build the final response:

1. Relevance validation: the AI analyzes the prompt query and compares it against the summaries of all the business's content bases.
2. Smart selection: up to 2 content bases with the highest level of match with the query are selected.
3. Context construction: the specific content of the selected bases is inserted as additional context into the final prompt received by the language model.
4. Response generation: the response is generated using the language model (LLM) that you have configured in the bot where the customer is located.
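The steps above can be sketched in a few lines. This is a simplified illustration, not the platform's actual implementation: word overlap stands in for the LLM-based relevance scoring, and names like `build_prompt` are hypothetical:

```python
def relevance(query: str, summary: str) -> int:
    """Stand-in relevance score: words shared between query and summary."""
    return len(set(query.lower().split()) & set(summary.lower().split()))

def build_prompt(query: str, bases: list[tuple[str, str, str]]) -> str:
    """bases: (name, summary, full_content) triples.
    Rank bases by relevance, keep up to 2 matches, and insert their
    content as context ahead of the user's question."""
    ranked = sorted(bases, key=lambda b: relevance(query, b[1]), reverse=True)
    selected = [b for b in ranked[:2] if relevance(query, b[1]) > 0]
    context = "\n\n".join(f"[{name}]\n{content}" for name, _s, content in selected)
    return f"Context:\n{context}\n\nUser question: {query}"
```

Note that ranking is done against the lightweight summaries, while the full content of only the selected bases reaches the final prompt, which keeps the prompt small and the cost low.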
- No manual setup: no additional technical steps are needed; once you have content bases uploaded, the system starts using them to enrich prompts.
- Improved accuracy: by having access to real documents, the risk of AI "hallucinations" drops drastically.
- Brand consistency: the AI uses the tone and style configured in your bot, but with your business's official information.
It is important to keep in mind that summary generation has an associated cost that depends on:

- The number of active content bases in the business.
- The volume of content (number of words/tokens) in each base.

This cost is incurred only on the first use of the functionality, or when a summary is regenerated after an update to the base content.
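As a rough illustration of how that one-time cost scales with both factors, assuming a hypothetical per-token rate (actual pricing depends on your configured model and provider):

```python
def estimate_summary_cost(token_counts: list[int], rate_per_1k_tokens: float) -> float:
    """token_counts: tokens in each active content base.
    The cost grows with the number of bases and the volume of each one."""
    return sum(token_counts) / 1000 * rate_per_1k_tokens
```

For example, two bases of 2,000 and 3,000 tokens at a hypothetical rate of 0.5 per 1,000 tokens would cost 2.5 units, charged once and again only after a base is updated.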
Remember to visit our Help Center for further information.