Using AI Answer Generation
Admins can use AI Answer Generation to create a conversational bot that pulls information from multiple sources, including the knowledge base. This feature integrates a large language model (LLM) with Retrieval-Augmented Generation (RAG), enabling the bot to generate accurate, relevant responses based on customer-approved content. By using RAG, the bot ensures answers align with your organization's specific information while leveraging the LLM's natural language abilities. When enabled, the bot uses up to five of the most recent interactions and up to five relevant knowledge base articles to answer user queries. Admins enable AI Answer Generation at the bot level, giving them control over where and how it is used.
Requirements for using AI Answer Generation
- Account owner or admin privileges; or relevant role/privilege
- Basic, Pro, Business, Education, or Enterprise account
- Zoom Virtual Agent license
How AI Answer Generation works
AI Answer Generation involves three key stages: query processing, external LLM interaction, and response generation.
Query processing
When you submit a question, the system analyzes it to understand the intent and context.
External LLM interaction
To generate a response, the following information is sent to an external Large Language Model (LLM):
- The current processed query
- Up to five previous interactions from the ongoing engagement
- Relevant context from up to five articles in the knowledge base
- A prompt instructing the LLM on how to respond
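As a rough illustration of how these inputs might be combined, here is a minimal Python sketch. Zoom does not publish its actual prompt format, so the function name, field labels, and layout below are assumptions for illustration only:

```python
# Hypothetical sketch of combining the listed inputs into one LLM prompt.
# Zoom's real prompt format is not published; this layout is illustrative.

def build_prompt(query, history, articles, instructions):
    """Combine the processed query, recent interactions, and knowledge
    base context into a single prompt string for the external LLM."""
    history_part = "\n".join(
        f"{turn['role']}: {turn['text']}" for turn in history[-5:]  # up to 5 interactions
    )
    context_part = "\n\n".join(articles[:5])  # up to 5 relevant articles
    return (
        f"{instructions}\n\n"
        f"Knowledge base context:\n{context_part}\n\n"
        f"Conversation so far:\n{history_part}\n\n"
        f"User question: {query}"
    )

prompt = build_prompt(
    query="How do I reset my password?",
    history=[
        {"role": "user", "text": "Hi"},
        {"role": "bot", "text": "Hello! How can I help?"},
    ],
    articles=["To reset your password, open Settings > Account..."],
    instructions="Answer using only the provided context. Be concise.",
)
print(prompt)
```

The key point the sketch captures is the bounded context: only the five most recent turns and at most five articles are ever included.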
Response generation
The LLM generates a response based on the provided information, ensuring that answers are relevant and context-specific, maintain conversational continuity, and prioritize data privacy and security. You can adjust the tone of your bot's responses to match your brand's voice or desired style.
How to enable AI Answer Generation
- Sign in to the Zoom web portal.
- In the navigation menu, click AI Management, then click Virtual Agent.
- In the Chatbots tab, click the name of the chatbot to open the settings.
- Under Generative AI, click the AI-generated answers toggle to enable or disable the use of third-party generative AI to supplement Zoom's AI models.
- Customize the settings.
AI Answer Generation settings
Customize the response tone
You can customize the tone of your bot's responses by selecting a predefined tone for AI-generated answers.
- Under the Response Tone section, choose from a dropdown menu of options: Formal, Professional, Friendly, Casual, Technical, or Empathetic.
- Select the desired tone to match the bot's communication style.
For example, a technical support organization may choose a Technical tone for their Tech Support bot. This tone prompts the AI to use more specialized language and precise terminology when responding to inquiries.
For instance, if a user asks, "Why is my computer running slowly?", the response could be: "Several factors can contribute to reduced computer performance, such as insufficient RAM, fragmented hard drive, or resource-intensive background processes."
Enable source citations
You can enhance your bot's responses by adding source citations to improve credibility and the quality of information. By referencing knowledge base sources, AI-generated answers become more reliable and transparent.
- Check the Enable inline source citations option to include inline citations.
For instance, a financial services company enabling inline citations for their virtual agent discussing investment products might see responses like:
Question: "What are the benefits of diversifying my investment portfolio?"
AI Response: "Diversifying your investment portfolio can help mitigate risk (1) and potentially improve returns over time (2). It involves spreading investments across various asset classes, such as stocks, bonds, and real estate (3)."
This approach enhances the credibility of the information and helps users quickly identify the sources of specific claims.
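Conceptually, each inline marker maps back to one of the knowledge base articles supplied as context. The following sketch is hypothetical (Zoom resolves and renders citations automatically); it only shows how markers like "(1)" could be paired with article titles:

```python
import re

# Hypothetical sketch: resolve inline citation markers such as "(1)"
# against the list of knowledge base articles used for the answer.
# Zoom handles this automatically; this code is illustrative only.

def list_citations(answer, sources):
    """Return the (marker number, article title) pairs cited in an answer."""
    markers = re.findall(r"\((\d+)\)", answer)
    return [(int(m), sources[int(m) - 1]) for m in markers
            if 0 < int(m) <= len(sources)]

answer = ("Diversifying your portfolio can mitigate risk (1) and may "
          "improve returns over time (2).")
sources = ["Risk Management Basics", "Long-Term Investment Returns"]
print(list_citations(answer, sources))
# → [(1, 'Risk Management Basics'), (2, 'Long-Term Investment Returns')]
```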
Enable AI disclosure notice
You can use AI disclosure notices to inform users that they are interacting with an AI-powered virtual agent.
- Check the Enable AI notice option to inform consumers when responses are generated by a large language model.
- Once enabled, you can choose between a default notice or create a custom one using the asset library.
For example, a tech support service might use a custom notice like:
"I'm an AI assistant trained to help with technical issues. While I strive for accuracy, please verify critical information with our human support team."
This notice appears after AI-generated responses, setting clear expectations for users. When answering a complex technical question, the bot displays the notice, reinforcing transparency and accuracy.
By implementing these transparency features, organizations can enhance the question-answering capabilities of their Zoom Virtual Agent, providing more accurate, brand-aligned, and trustworthy responses to user queries.
Include dynamic variables
You can use dynamic variables to personalize your bot to give more relevant and engaging responses. These variables incorporate contextual information into AI-generated answers for a tailored experience.
- Under Response configuration, check the Include variables option.
- Select the desired pre-configured variables from the dropdown menu, then click Apply.
For example, an e-commerce company can use variables such as the customer’s name and their most recent purchase. If Sarah asks, "How do I set up my new laptop?", the AI could respond:
"To set up your new XYZ Laptop, Sarah, first ensure it's fully charged. Then, follow these steps..."
This feature enhances the bot's ability to deliver customized, user-centric responses.
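Under the hood, variable substitution amounts to filling placeholders with values from the current engagement. The placeholder syntax and mechanism below are assumptions for illustration; Zoom Virtual Agent's internal implementation may differ:

```python
# Hypothetical sketch of dynamic-variable substitution. The variable names
# follow the dot-grouped style recommended later in this article; the
# placeholder syntax is an assumption, not Zoom's documented format.

def fill_variables(template, variables):
    """Replace {variable} placeholders with values from the engagement."""
    for name, value in variables.items():
        template = template.replace("{" + name + "}", value)
    return template

reply = fill_variables(
    "To set up your new {product.LastPurchased}, {customer.Name}, "
    "first ensure it's fully charged.",
    {"customer.Name": "Sarah", "product.LastPurchased": "XYZ Laptop"},
)
print(reply)
# → To set up your new XYZ Laptop, Sarah, first ensure it's fully charged.
```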
Include brand information
You can customize your bot’s responses to reflect your company’s brand identity by incorporating brand-specific information into Zoom Virtual Agent’s knowledge base.
- Under Response configuration, check the Include brand information option.
- Enter your brand-specific information in the provided text box to customize the LLM prompt.
For example, a sustainable technology company might enter:
"You are an AI assistant for EcoTech Solutions, a leader in sustainable technology. Our mission is to provide innovative, eco-friendly products that reduce environmental impact without compromising performance. We are committed to sustainability, cutting-edge research, and exceptional customer service."
If a customer asks, "What makes your products different from competitors?" the AI might respond:
"EcoTech Solutions' products stand out due to our commitment to sustainability and innovation. Our eco-friendly designs minimize environmental impact without sacrificing performance. For instance, our latest laptop model is made with recycled materials and consumes 30% less energy than standard models, demonstrating our dedication to sustainable technology."
Include response rules and policies
You can define specific guidelines and policies to shape the bot's responses, ensuring they align with your organization's priorities and requirements.
- Under Response configuration, check the Include additional rules option.
- Enter rules and policies in the provided text box.
For example, a healthcare provider might include the following guideline:
"Always prioritize patient safety. If a question involves potential health risks, advise consulting a healthcare professional and provide the emergency helpline number."
If a user asks, "What should I do for severe chest pain?" the AI might respond:
"Severe chest pain can be a serious symptom. While it may have various causes, prioritizing your safety is essential. We strongly recommend consulting a healthcare professional immediately or calling our emergency helpline at [number]. Please note that this AI assistant cannot provide a medical diagnosis or treatment."
Prompt engineering best practices
When crafting inputs for the Brand Information and Response Rules sections, follow these tips to create clear, effective prompts that the large language models (LLMs) used in AI Answer Generation can easily interpret. This section provides guidance on writing impactful prompts, selecting relevant variables, showcasing your brand's identity, and defining guidelines that shape your bot's behavior.
Tips for optimizing AI Answer Generation inputs
To maximize the benefits of AI Answer Generation and reduce potential risks, it’s important to follow certain best practices. When creating inputs, focus on clarity, brevity, and relevance to enable the AI to process and apply the information effectively.
- Keep inputs concise: Limit the total word count to 250-300 words across both the Brand Information and Response Rules sections.
- Use structured formatting: Organize information with bullet points or numbered lists to make it easier for the AI to process.
- Regularly review and refine: Update your inputs periodically based on the AI’s performance and your brand’s evolving needs to maintain accuracy and relevance.
- Test variations: Experiment with different phrasings or formats to identify what generates the most effective and accurate responses.
- Ensure consistency across channels: Align the tone, terminology, and messaging in your inputs with your broader brand guidelines and communications.
- Seek support when needed: If you have questions or need help optimizing the feature, contact our Support team for assistance.
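The word-count tip above can be checked mechanically before saving your inputs. This small helper is a hypothetical convenience, not part of the product; it simply counts words across both sections against the suggested budget:

```python
# Hypothetical helper for the "keep inputs concise" tip: checks that the
# combined Brand Information and Response Rules text stays within the
# suggested word budget. The limit is guidance, not an enforced API.

def within_word_budget(brand_info, response_rules, limit=300):
    """Return (word_count, ok) for the combined prompt inputs."""
    count = len(brand_info.split()) + len(response_rules.split())
    return count, count <= limit

brand_info = "EcoTech Solutions builds sustainable, energy-efficient technology."
rules = "Always recommend consulting a professional for health questions."
print(within_word_budget(brand_info, rules))
```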
Tips for using dynamic variables
Personalize your bot's responses by using dynamic variables. Select variables from your bot's preconfigured settings to add context and make AI-generated responses more relevant and engaging. Use clear and descriptive labels for variable names to ensure the LLM understands and applies them correctly without needing additional explanation.
- Select relevant preconfigured variables: Choose from variables already set up in your bot settings that add meaningful context to user queries and AI responses.
- Focus on personalization: Prioritize variables that help tailor the response to the individual user, such as their name, recent interactions, or purchase history.
- Use explicit naming conventions: Clearly and specifically name variables to ensure the LLM interprets and uses them correctly without extra explanation. Examples include:
  - customer.Name for the customer's name
  - product.LastPurchased for the most recent product bought
  - account.CreationDate for when the customer's account was created
- Group related variables: Use prefixes to group related variables to help the LLM understand relationships between pieces of information. For instance, all customer-related variables could begin with customer.
- Be descriptive and unambiguous: Use variable names that clearly convey their purpose. Avoid abbreviations or cryptic labels that might confuse the AI.
- Consider conversation flow: Select variables that naturally integrate into responses without feeling forced or out of place.
- Limit variable use: Focus on the most impactful variables to avoid overwhelming the AI or cluttering responses.
Tips for crafting effective brand information
To effectively present your brand information, use a structured format that includes key details such as your company name, industry, mission, and other defining attributes. Summarize your brand's core elements into concise, factual statements that highlight your identity and unique value proposition.
- Be concise and clear: Summarize key brand elements in short, declarative sentences.
- Use a structured format: Present information in a logical order, such as company name, industry, mission, and key differentiators.
- Highlight unique selling points: Clearly state what sets your brand apart from competitors.
- Avoid jargon: Use plain language unless industry-specific terms are crucial.
- Quantify when possible: Include specific numbers or percentages for impactful statistics.
- List core values: Enumerate the primary principles that guide your brand.
Example
| Category | Description |
| --- | --- |
| Company | EcoTech Solutions |
| Industry | Sustainable Technology |
| Mission | Provide eco-friendly tech products without compromising performance |
| Unique Selling Point | 30% more energy-efficient than standard alternatives |
| Core values | Sustainability, Innovation, Customer-centric, Quality |
Tips for creating effective response rules
Create a clear set of guidelines that direct the AI's behavior, focusing on the critical do's and don'ts that align with your brand policies and customer service standards.
- Use clear, actionable language: Start with verbs to clearly indicate desired actions. For example, "Always prioritize user safety in responses."
- Prioritize rules: List the most critical guidelines first to ensure that the AI follows the most important policies. For example, "If asked about health issues, advise consulting a professional."
- Be specific: Provide concrete examples or scenarios when possible. For example, "Never share personal user data in responses."
- Use "if-then" structures: Clearly outline conditional responses to guide the AI's behavior based on different situations. For example, "If the user asks about order status, then provide the current status and expected delivery date."
- Keep it relevant: Focus on rules that directly impact question-answering scenarios. For example, "If a user requests specific product recommendations, only suggest products that are in stock."
- Avoid ambiguity: Ensure each rule has a clear, single interpretation. For example, "If unable to answer, direct users to www.example.com."
- Include both do's and don'ts: Balance positive instructions with restrictions. For example, "Do explain features clearly; don’t make unsupported claims about product performance."
- Consider edge cases: Address potential unusual or critical situations. For example, "If the user asks for urgent assistance, immediately escalate the request to a human agent."
Key considerations for AI Answer Generation
AI Answer Generation offers many benefits, yet it's important to be aware of a few key considerations.
- Hallucination: LLMs may occasionally generate plausible-sounding but incorrect information, especially for topics not included in their training data.
- Over-reliance: Users might place too much trust in AI-generated responses. It's important to verify critical information before relying on it.
- Inconsistency: Responses may sometimes vary across different interactions or contradict information in your knowledge base.
- Prompt customization limitations: AI Answer Generation allows customization, but LLMs process inputs holistically. They might not always strictly follow every customization detail, and responses can vary from the specified parameters.
To address these considerations, the following is recommended:
- Monitor AI-generated responses and maintain human oversight, especially for important decisions or actions.
- Regularly review AI-generated responses to ensure they generally align with your customizations and knowledge base information.
- Be prepared to fine-tune your prompts, rules, and knowledge base content if you notice consistent deviations from desired outcomes.
- Implement additional review processes for critical or sensitive topics where precise adherence to customizations or accuracy is crucial.
- Educate end-users about the AI nature of the responses and encourage them to verify important information through official channels.
Follow these practices to maximize the benefits of AI Answer Generation and reduce potential risks and limitations. For questions or help optimizing AI Answer Generation, contact our Support team.
Subprocessors
Zoom's federated AI approach uses third-party models, such as those from OpenAI and Anthropic, and its own AI models for certain features to deliver high-quality results.
As part of its third-party risk management program, Zoom conducts security assessments of subprocessors at least annually. Independent audit firms evaluate Zoom's risk management controls for various security certifications, which are available to customers on the Zoom Trust Center.
When a Zoom AI feature uses a third-party model, the provider may retain content in the U.S. for trust and safety purposes for up to 30 days, unless the law requires a longer retention period.