Gen AI Tool ⭐️

Utilize the power of Generative AI in your data preparation process.


Please reach out to Savant to get enrolled in the beta version of the Gen AI Tool. We will be happy to add the tool to your organization.

About the Gen AI Tool

The Savant Gen AI Tool, currently in beta, offers generative artificial intelligence capabilities within your Savant analysis. This tool leverages the power of Large Language Models (LLMs) to enhance your data analysis process.

Adding a Gen AI Tool

Savant supports LLMs such as OpenAI ChatGPT and Google Gemini.

To integrate the Gen AI Tool into your analysis, follow these steps:

1. Configure Your OpenAI Account:

Initially, the Gen AI Tool includes a default Savant AI account for light testing. For more robust usage, however, configure your own LLM service provider. You have two options: OpenAI and Azure.

1. Navigate to the Systems page.

2. Click "New System."

3. Search for OpenAI and select it.

4. Choose the environment (OpenAI or Azure).

5. Provide the API token from the selected environment and click Authenticate.

6. Rename and describe your OpenAI provider.

7. Confirm the setup.

Please contact us to join our private beta for Google Gemini.

2. Adding the Gen AI Tool:

1. Go to your analysis.

2. Click the add-tool icon.

3. Click Gen AI.


Setting up the Gen AI Tool involves these configurations:

  1. Selecting the Provider: Choose your LLM service provider. For instance, you can select the default Savant AI provider to try out the tool (with reduced throughput).

  2. Defining the Prompt: Craft a well-structured prompt that outlines your desired analysis. You can include specific instructions for the LLM provider to generate relevant insights from your data.

    Example Prompt: “Answer the following question: What country is ${BillingCity} found in? Return just the name of the country with no other words. If you cannot guess the country, return nothing.”

  3. After defining your prompt, click "Apply" to finalize the setup.


Upon applying the configuration, you'll notice the following changes to your dataset:

  • In development mode, you'll initially see only five sample records to conserve LLM service usage.

  • Two columns will be added to your table:

    • AI Prompt: This column displays the prompt you provided to the LLM service.

    • AI Answer: This column contains the response generated by the LLM service.
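The behavior above can be sketched in plain Python. This is a minimal, hypothetical simulation — `run_gen_ai_tool` and `fake_llm` are illustrative names, not Savant APIs — showing how each row's prompt is rendered and how the two new columns are appended:

```python
# Hypothetical stand-in for the LLM call; the real Gen AI Tool sends each
# rendered prompt to the LLM provider you configured on the Systems page.
def fake_llm(prompt: str) -> str:
    lookup = {"Berlin": "Germany", "Paris": "France"}
    for city, country in lookup.items():
        if city in prompt:
            return country
    return ""  # mirrors "If you cannot guess the country, return nothing"

def run_gen_ai_tool(rows, prompt_template, llm):
    """Render the prompt per row, call the LLM, append the two columns."""
    out = []
    for row in rows:
        prompt = prompt_template.replace("${BillingCity}", row["BillingCity"])
        out.append({**row, "AI Prompt": prompt, "AI Answer": llm(prompt)})
    return out

rows = [{"BillingCity": "Berlin"}, {"BillingCity": "Paris"}]
template = "What country is ${BillingCity} found in?"
result = run_gen_ai_tool(rows, template, fake_llm)
# Each result row now carries "AI Prompt" and "AI Answer" columns.
```

The stubbed lookup simply makes the example self-contained; in Savant the answer column is filled by the provider's actual response.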

Referencing Columns in Your Prompt

When crafting your prompt, it's essential to reference columns from your dataset accurately. To do this, use the following syntax: ${name of column}.

For instance, if you're referring to the BillingCity column, your prompt might look like this: "What country is ${BillingCity} found in?"
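This `${ColumnName}` syntax happens to match Python's standard-library `string.Template` placeholders, so the per-row substitution can be illustrated with it (an analogy only — Savant performs the substitution itself):

```python
from string import Template

# One row of the dataset; the template references its BillingCity column.
row = {"BillingCity": "Sydney"}
prompt_template = Template("What country is ${BillingCity} found in?")
rendered = prompt_template.substitute(row)
# rendered == "What country is Sydney found in?"
```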


Given the potential resource usage of the LLM service, Savant employs sampling strategies:

  • Development Mode: Five sample records are displayed during development to minimize LLM service utilization.

  • Test Run: In a test run, you'll receive 100 sample rows of data.

  • Full Bot Run: During a full bot run, you'll receive 100 sample rows of data.
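The sampling limits above amount to taking the first N rows per mode. A hypothetical helper (`SAMPLE_SIZES` and `sample_rows` are illustrative names; the real limits are applied by Savant server-side) makes the behavior concrete:

```python
# Row limits per run mode, as documented above.
SAMPLE_SIZES = {"development": 5, "test_run": 100, "full_bot_run": 100}

def sample_rows(rows, mode):
    """Return only the first N rows for the given mode to limit LLM usage."""
    return rows[:SAMPLE_SIZES[mode]]

data = [{"id": i} for i in range(250)]
# Development mode would process only 5 of the 250 rows.
```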

Importance of Adding Specific Instructions

To maximize the accuracy and relevance of the LLM service's generated insights, provide clear and specific instructions. By being explicit in your prompt, you can direct the LLM provider to return precise information. Avoid ambiguity and opt for pinpoint directives, ensuring the generated responses align with your analytical needs.
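To make "specific instructions" concrete, compare two versions of the same prompt (illustrative strings only): the specific one constrains the output format so downstream tools can rely on the AI Answer column.

```python
# A vague prompt: the LLM may return a sentence, a paragraph, or a guess.
vague = "Tell me about ${BillingCity}."

# A specific prompt: pins down the task, the output format, and the
# fallback behavior, so every AI Answer value has a predictable shape.
specific = (
    "What country is ${BillingCity} found in? "
    "Return just the name of the country with no other words. "
    "If you cannot guess the country, return nothing."
)
```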
