How to set up the AI element

Modified on Thu, 19 Jun at 12:59 PM

The 'AI' element is designed to automatically answer customer questions using artificial intelligence. For more details, see 'AI element'.

⚠️ Attention!

The service must be connected before configuration.

 

Note:

In the scenario, the 'AI' element works in conjunction with the 'Question-answer' block.


 

How to add an 'AI' element to the diagram:

  1. Add the 'AI' element to the scenario — drag it onto the diagram using the button:
     

  2. Click the clock button and enable the 'Limit waiting time' parameter by checking the box next to it. This parameter sets the time after which the AI is considered not to have found an answer, or its answer is not recognized. Enter the number of seconds and click 'Save'.
     

  3. Click the pencil icon and set the AI parameters:

    • Message — enter the variable {{stt_answer}}. This variable determines which question the AI answers; the AI receives the question from the 'Question-answer' block.

    • Context — the instructions that define how the AI processes the information received from the client. The context length is limited by the allowed number of tokens. The context can describe not only how to answer the client's questions but also how the AI should respond depending on the client's answer, and other nuances that affect how the AI works.

      ⚠️ Attention!

      AI can be configured to recognize the context of the previous question and answer with that context in mind, maintaining a dialogue with the client on a specific topic. Even if the client has not said certain keywords, the AI will respond as if the context of both questions were unified.


      • To do this, add a second 'AI' block to the diagram and configure it as follows:
        • Message — the variable {{stt_answer}}.
        • Context — the sequence Q:{{stt_answer_5083_1}} A:{{gpt_answer}}, where stt_answer is the question that the AI has already answered, and 5083_1 is the identifier of the 'Question-answer' block in which the AI pronounced that answer. The identifier is displayed in the 'Identifier' field inside the 'Question-answer' element. An illustrative sketch of this follow-up context is shown after the steps.
    • Tokens — sequences of characters that make up the text; the unit in which the context and message are measured. Each token includes approximately 2-3 characters; 100 tokens ≈ 75 words (a token-counting sketch is shown after the steps).

    • Temperature — set a value from 0 to 1; the recommended value is 0.7.

      • 0: responds with the most probable answers and constructions, less creativity.
      • 1: more creativity, more unpredictable.
    • Model Types:

      Select a model, enter the maximum number of tokens the AI can process, and set the temperature (recommended value 0.7). An illustrative sketch of how these parameters map onto an API request is shown after the steps.

      • OpenAI
        • gpt-4o — universal model with support for text, audio, and images, optimized for speed and quality. Maximum number of tokens — 4096.
        • gpt-4o-mini — lightweight version of gpt-4o with lower resource consumption, suitable for tasks where speed and economy are important. Maximum number of tokens — 16,384 tokens.
  4. Click 'Save'.
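
For illustration only: the scenario builder sends the request to the model for you, so no code is needed. The sketch below is a rough Python analogue of how the parameters from steps 2-3 (model, temperature, tokens, context, and message) could map onto an OpenAI Chat Completions request. The variable names context_text and client_question, the 10-second timeout, and the example values are assumptions made for this sketch, not values taken from the product.

    # Minimal sketch, assuming the openai Python package (v1.x) and an API key
    # in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    context_text = "You answer delivery questions briefly and politely."   # 'Context' field
    client_question = "Where is my order?"                                 # {{stt_answer}} from 'Question-answer'

    # with_options(timeout=...) plays the role of 'Limit waiting time' in this sketch
    response = client.with_options(timeout=10).chat.completions.create(
        model="gpt-4o-mini",   # 'Model' field
        temperature=0.7,       # 'Temperature' field, recommended value
        max_tokens=1024,       # 'Tokens' field (limit assumed for this example)
        messages=[
            {"role": "system", "content": context_text},    # 'Context'
            {"role": "user", "content": client_question},   # 'Message' ({{stt_answer}})
        ],
    )

    print(response.choices[0].message.content)  # corresponds to {{gpt_answer}} in the scenario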
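
The follow-up context from step 3 works the same way: the previous question and answer are placed into the 'Context' field of the second 'AI' block, so the model reads the new question as a continuation of the same topic. This is a hypothetical sketch; the values of previous_question, previous_answer, and new_question stand in for {{stt_answer_5083_1}}, {{gpt_answer}}, and {{stt_answer}}.

    # Hypothetical illustration of the Q:/A: follow-up context.
    previous_question = "Do you deliver to Berlin?"               # {{stt_answer_5083_1}}
    previous_answer = "Yes, delivery to Berlin takes 2-3 days."   # {{gpt_answer}}
    new_question = "And how much does it cost?"                   # {{stt_answer}} for the second 'AI' block

    # Contents of the 'Context' field of the second 'AI' block:
    context_text = f"Q:{previous_question} A:{previous_answer}"

    # The new question is passed as the 'Message'; thanks to the context the AI
    # understands that "it" refers to delivery to Berlin, even though the client
    # did not repeat those keywords.
    print(context_text)
    print(new_question)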
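
To get a feel for the '100 tokens ≈ 75 words' rule of thumb, you can estimate the token count of a context locally, for example with the tiktoken package. This is only an approximation: the tokenizer used by the service itself may differ.

    # Rough token estimate with tiktoken (pip install tiktoken, recent version).
    import tiktoken

    encoding = tiktoken.encoding_for_model("gpt-4o")   # tokenizer used by the gpt-4o family
    text = "Hello! I would like to know the status of my order, please."

    print(len(text.split()), "words")            # word count
    print(len(encoding.encode(text)), "tokens")  # token count for this tokenizer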

Also see:

Call scenario settings
