Agent Studio Overview

Agent Studio is Datafi's environment for building, deploying, and managing autonomous data agents. You create agents that combine LLM reasoning with tool execution -- querying databases, calling APIs, processing documents, and generating reports -- all within configurable guard rails and governed by the same ABAC policies that protect the rest of the platform.


What You Can Build

Autonomous Agents

Agents perform multi-step data tasks autonomously. You define an agent's identity, capabilities, behavior, and guard rails using a declarative JSON specification, then deploy it from the Agent Catalog.
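To make the shape of a specification concrete, a minimal agent definition might look like the sketch below. The field names here are illustrative, not the actual schema -- see Agent Builder for the authoritative reference.

```json
{
  "identity": {
    "name": "weekly-sales-agent",
    "description": "Summarizes weekly sales data and emails a report"
  },
  "capabilities": {
    "tools": ["query", "email"]
  },
  "behavior": {
    "reasoningStrategy": "step-by-step"
  },
  "guardRails": {
    "maxSteps": 20,
    "piiFiltering": true
  }
}
```

The four top-level sections mirror the parts of a specification described above: identity, capabilities, behavior, and guard rails.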

Key features:

  • Declarative agent specification (identity, capabilities, behavior, guard rails)
  • 15+ built-in tools (query, search, vision, web, HTTP, email, and more)
  • Configurable reasoning strategies (step-by-step, parallel exploration, hypothesis-driven)
  • Resource limits and PII filtering for safe autonomous operation
  • Real-time execution tracking via WebSocket connections

See Agent Catalog, Agent Builder, and Multi-Agent Coordination.

Workflows

Orchestrate complex data pipelines using a graph-based workflow builder. Define nodes for actions, conditions, loops, parallel execution, and human-in-the-loop approvals. Workflows integrate directly with agents and the query engine.
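As a rough illustration of the node types, a graph-based workflow definition could be sketched as follows. Node types and field names are hypothetical; see Workflow Builder for the actual format.

```json
{
  "nodes": [
    { "id": "fetch",   "type": "action",    "tool": "query" },
    { "id": "check",   "type": "condition", "expression": "rowCount > 0" },
    { "id": "approve", "type": "human-in-the-loop" },
    { "id": "notify",  "type": "action",    "tool": "email" }
  ],
  "edges": [
    { "from": "fetch",   "to": "check" },
    { "from": "check",   "to": "approve", "when": "true" },
    { "from": "approve", "to": "notify" }
  ]
}
```

Conditions branch on edge predicates, and the human-in-the-loop node pauses execution until an approval arrives.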

Key features:

  • AI-assisted workflow creation from natural language descriptions
  • Visual drag-and-drop canvas with auto-layout
  • Real-time execution trace panel for monitoring running workflows
  • Resume workflows from a specific step after failures

See Workflow Builder for the complete reference.


Security and Governance

Agent Studio inherits the full security model of the Datafi platform:

  • Access control -- Generated queries are validated against ABAC policies before execution. An agent cannot access data that the requesting user is not authorized to see.
  • Tenant isolation -- LLM provider credentials and configuration are scoped to each tenant.
  • PII filtering -- Agents can be configured with PII filtering guard rails that scrub sensitive data before it reaches an LLM.
  • Audit logging -- Every agent interaction, including generated queries, LLM provider used, and validation results, is recorded in the audit log.
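For instance, a PII filtering guard rail with resource limits might be expressed in an agent specification along these lines (all field names are illustrative, not the actual schema):

```json
{
  "guardRails": {
    "piiFiltering": {
      "enabled": true,
      "categories": ["email", "phone", "ssn"]
    },
    "resourceLimits": {
      "maxSteps": 20,
      "maxTokens": 50000
    }
  }
}
```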

LLM Configuration

Configure LLM providers and model defaults in Administration > AI Settings. Each agent can override the tenant default by specifying an LLM in its agent specification. See AI/ML Configuration for details.
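For example, an agent could override the tenant default model with a block like the following in its specification (the `llm` block and its fields are hypothetical; see AI/ML Configuration for the supported options):

```json
{
  "identity": { "name": "research-agent" },
  "llm": {
    "provider": "openai",
    "model": "gpt-4o",
    "temperature": 0.2
  }
}
```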


Next Steps