Infer Agent ✨

Enrich, classify, and label data with AI – directly inside your analysis.

Infer Agent brings generative AI into data prep so teams can add missing context, classify records, generate labels, and create new structured outputs from existing data. It is a flexible, prompt-driven way to make raw data more useful before it reaches reporting, modeling, or operational workflows.

What it does

Infer Agent uses large language models to transform existing data into richer, more actionable outputs. You define what you want with a prompt, select the field to work from, and Infer writes the AI-generated answer back into the dataset as a new field.

Teams use it to fill gaps, classify topics or entities, standardize messy inputs, and generate business context that does not already exist in the source data.

Why teams use Infer Agent

A lot of data is technically complete but analytically thin.

A job title might exist, but the seniority is unclear. A support ticket may include the full issue, but not the topic. A comment field may have plenty of text, but no sentiment label. A city name may be present, but the country or region is missing.

Infer Agent helps analysts turn raw fields into more useful fields – without exporting data to a separate AI workflow or waiting on engineering.

Common use cases

Fill in missing attributes

Infer missing details using business context from related fields and existing record content.

Classify records and topics

Label rows by category, topic, entity type, or business dimension using prompt-based logic.

Standardize free-text inputs

Turn inconsistent descriptions, notes, or addresses into cleaner, structured outputs that are easier to report on.

Run sentiment analysis at scale

Tag customer feedback, comments, or ticket content as positive, negative, or neutral.

Enrich data for downstream use

Generate new context that makes data more useful in dashboards, automations, and decision workflows.

How it works

Infer Agent is prompt-driven.

You select an LLM service, define the prompt, choose the fields to transform, and configure how the run should process rows. The model’s output is written back to the dataset as a new AI Answer field.

This makes Infer feel native to the analysis workflow rather than bolted on as a separate AI step.
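The flow above can be sketched in a few lines of Python. This is purely illustrative: `call_llm`, `infer`, and the field names are hypothetical stand-ins, not Savant's actual API.

```python
# Conceptual sketch of the Infer flow: prompt + source field -> new AI Answer field.
# `call_llm` is a placeholder for whichever LLM service is configured.

def call_llm(prompt: str) -> str:
    # Placeholder: a real run would call the configured model.
    # Here we fake a seniority classification for illustration only.
    title = prompt.rsplit(":", 1)[-1].strip().lower()
    return "Senior" if "senior" in title or "vp" in title else "Other"

def infer(rows, source_field, prompt_template, answer_field="ai_answer"):
    # For each row, build the prompt from the chosen field and write the
    # model's output back to the row as a new field.
    for row in rows:
        prompt = prompt_template.format(value=row[source_field])
        row[answer_field] = call_llm(prompt)
    return rows

rows = [{"job_title": "Senior Data Engineer"}, {"job_title": "Analyst"}]
enriched = infer(rows, "job_title",
                 "Classify the seniority of this job title: {value}")
```

The key point is that the output lands next to the source data as a new column, so downstream steps can use it like any other field.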

Model and processing flexibility

Infer Agent is designed to work with modern LLM workflows in a governed environment. Teams can use Savant-managed access for lightweight testing or bring their own model configuration for broader production use.

It also supports different processing approaches depending on dataset size. For larger datasets, rows can be processed in batches. For smaller datasets, streaming can provide quicker feedback during development.
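The difference between the two modes can be sketched as follows (assumed semantics for illustration, not Savant's internal implementation): batching groups rows before each model call, while streaming yields each result as soon as it is ready.

```python
# Batch mode: group rows and process each group in one call.
def process_in_batches(rows, handle_batch, batch_size=50):
    results = []
    for i in range(0, len(rows), batch_size):
        results.extend(handle_batch(rows[i:i + batch_size]))
    return results

# Streaming mode: handle rows one at a time for quicker feedback.
def process_streaming(rows, handle_row):
    for row in rows:
        yield handle_row(row)  # each result is available immediately

# `labeler` stands in for a model call that labels a batch of rows.
labeler = lambda batch: [f"label-for-{r}" for r in batch]
batched = process_in_batches([1, 2, 3, 4, 5], labeler, batch_size=2)
streamed = list(process_streaming([1, 2], lambda r: f"label-for-{r}"))
```

Batching tends to suit large production runs, while streaming suits small datasets where you want to see results quickly during development.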

Important development behavior

To help conserve LLM usage and make prompt iteration easier, Infer Agent samples rows during development.

Each time you click Apply in development mode, Savant calculates 5 new records.

Clicking Apply again calculates 5 additional records, continuing until the configured maximum is reached. When the full workflow runs, all eligible data is processed, with up to 1000 records visible in the development preview.

This is also why you may not see a full dataset refresh immediately while iterating on prompts: development mode intentionally works from a sample.

Prompting best practices

Infer Agent works best when prompts are specific.

Clear instructions lead to cleaner outputs, better consistency, and less post-processing. If you want a short answer, say so. If you want a category label from a fixed set, define the set. If you want blanks returned when the answer is uncertain, include that rule explicitly.

Good prompts reduce ambiguity and make the output easier to trust at scale.
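Putting those rules together, a specific prompt might look like the following. The prompt text, placeholder syntax, and validation helper are illustrative assumptions, not Savant's actual template format.

```python
# A hypothetical prompt applying the guidance above: short answer, fixed
# label set, and an explicit rule for uncertain cases.
PROMPT = """\
Classify the sentiment of the following customer comment.
Answer with exactly one word: Positive, Negative, or Neutral.
If the sentiment is unclear, return an empty answer instead of guessing.

Comment: {comment}
"""

ALLOWED = {"Positive", "Negative", "Neutral", ""}

def validate(answer: str) -> str:
    # Post-check: coerce anything outside the fixed label set to blank,
    # so stray model output never pollutes the dataset.
    answer = answer.strip()
    return answer if answer in ALLOWED else ""
```

Constraining the output to a fixed set like this makes the resulting field safe to group, filter, and chart without extra cleanup.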

What makes Infer Agent different

Infer Agent is not just a generic AI text box. It is designed for analytics workflows – where the goal is not conversation, but structured, repeatable enrichment inside a dataset.

That makes it especially useful for analysts who want the power of LLMs without leaving the governed data prep environment.

Best for

Teams that want to enrich, classify, label, or standardize data using prompt-driven AI inside Savant.
