AI Workflow · 6 of 6

Data & Insights

Natural-language analytics, smart dashboards, anomaly detection — AI as the bridge between raw data and the people who need answers. Bringing analyst-level insight to every employee.

Text-to-SQL · Smart BI · Anomaly Detection · Forecasting · Workflow 6
Quick Facts

At a Glance

  • Two complementary jobs: letting more people ask questions, and surfacing answers without being asked.
  • The semantic layer matters more than the model — a clean schema + metrics catalog beats a clever LLM.
  • Wrong answers are catastrophic in analytics. Verification, citations, and SQL transparency are non-negotiable.
  • Hybrid wins — let LLMs draft, dashboards confirm, classical ML detect.
Use Cases

Where AI Helps Analytics

Natural-Language → SQL ("Text-to-SQL")

"How many active users last month per region?" → the model writes the SQL, runs it, returns the table + a chart.

What makes it work in practice:

  • Schema description in the prompt — column names, types, foreign keys, sample rows.
  • Few-shot examples of question/SQL pairs.
  • Allow only SELECT — never let the model issue DDL/DML.
  • Show the SQL to the user; let them verify before trusting the chart.

Tools: Snowflake Cortex Analyst, Databricks Genie, BigQuery Gemini, Redshift Q, Hex Magic, Vanna.ai, Defog.
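The guardrails above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the schema text, few-shot pairs, and function names are all hypothetical, and a real deployment would validate with a proper SQL parser (e.g. sqlglot) rather than regexes.

```python
import re

# Hypothetical schema snippet injected into the prompt (names are illustrative).
SCHEMA = """
users(id INT PK, region TEXT, last_active DATE)
orders(id INT PK, user_id INT FK -> users.id, total NUMERIC, created_at DATE)
"""

# Few-shot question/SQL pairs to steer the model.
FEW_SHOT = [
    ("How many users per region?",
     "SELECT region, COUNT(*) FROM users GROUP BY region;"),
]

def build_prompt(question: str) -> str:
    examples = "\n".join(f"Q: {q}\nSQL: {s}" for q, s in FEW_SHOT)
    return f"Schema:\n{SCHEMA}\n{examples}\nQ: {question}\nSQL:"

FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|TRUNCATE|GRANT)\b", re.I)

def is_safe_select(sql: str) -> bool:
    """Accept a single SELECT statement; reject anything that mutates state."""
    stmt = sql.strip().rstrip(";")
    return (stmt.upper().startswith("SELECT")
            and not FORBIDDEN.search(stmt)
            and ";" not in stmt)  # no stacked statements
```

The generated SQL then goes through `is_safe_select` before execution, and is displayed to the user alongside the result so they can verify it.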

Smart Dashboards & BI Copilots

BI tools are baking AI in:

  • Tableau Pulse / Einstein: auto-summaries of changes; ask in natural language.
  • Power BI Copilot: generate measures & visuals from prompts.
  • Looker Studio + Gemini: NL queries against semantic models.
  • ThoughtSpot Sage: NL-first BI from the start.
  • Hex / Mode AI: notebook-style analytics with AI cells.
  • Sigma Computing AI: spreadsheet-meets-warehouse + AI.

Anomaly Detection

Hybrid is best:

  • Classical ML for the detection (Prophet, isolation forest, ESD, statistical control charts) — cheap, fast, explainable.
  • LLM for the explanation ("Revenue dropped 18% — likely related to the deploy at 14:32 that affected the checkout API").

Tools: Anomalo, Sifflet, Monte Carlo, Datadog Watchdog, AWS Lookout for Metrics.
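The hybrid split can be sketched in a few lines: a cheap statistical detector flags the point, and only then is an LLM asked to narrate it. This is a toy sketch with made-up numbers; `explain` stands in for the LLM call, and a real detector would use a rolling window or one of the methods named above rather than a global z-score.

```python
from statistics import mean, stdev

def detect_anomalies(series, threshold=3.0):
    """Classical detection: flag points more than `threshold` sigma from the mean."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

def explain(index, series, context_events):
    # In production this would be an LLM call; here it just assembles the prompt.
    return (f"Metric deviated at point {index} (value={series[index]}). "
            f"Recent events: {context_events}")

revenue = [100, 102, 99, 101, 103, 100, 55, 102]  # one obvious drop
flags = detect_anomalies(revenue, threshold=2.0)
summary = explain(flags[0], revenue, ["deploy at 14:32 touched checkout API"])
```

Only flagged points ever reach the LLM, which keeps the expensive, noisy part of the pipeline off the hot path.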

Auto-Generated Insights / "Auto-Narratives"

Daily / weekly emails: "Here's what changed in the business this week, with charts." Generated from KPI deltas plus recent context. Less flashy than text-to-SQL, but quietly the more widely used feature.

Forecasting & What-If

Foundation time-series models (TimeGPT, Chronos, Lag-Llama, Moirai) are starting to compete with classical Prophet/ARIMA — zero-shot forecasts with no training. Combine with an LLM for "what if Q4 marketing spend doubles?" scenario narration.
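When evaluating zero-shot foundation-model forecasts, it helps to have a classical baseline they must beat. A minimal sketch of one such baseline (simple exponential smoothing, written from scratch; the function name and defaults are illustrative):

```python
def exp_smooth_forecast(series, alpha=0.5, horizon=3):
    """Simple exponential smoothing: a cheap classical baseline to compare
    zero-shot foundation-model forecasts against. Produces a flat forecast
    at the final smoothed level (no trend or seasonality)."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * horizon
```

If a foundation model cannot beat a baseline this simple on your metric, the zero-shot convenience is not buying accuracy.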

Q&A Over Documents (BI-Adjacent)

RAG over PDFs, contracts, financial filings, support tickets. Often packaged as "data extraction" or "document AI." Tools: Azure Document Intelligence, AWS Textract + Bedrock, Google Document AI, Unstructured, LlamaParse.

Practice

Making It Trustworthy

Build a Semantic Layer First

The biggest determinant of AI analytics quality is whether your data has clear, agreed-upon definitions. Tools like Cube.dev, dbt Semantic Layer, MetricFlow, and Looker LookML formalize "what is an active user" once — then the LLM can reference the metric instead of inventing SQL.
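The idea reduces to a lookup: the metric's SQL is defined once and retrieved by name, never regenerated. A minimal sketch with a hypothetical catalog (the metric name, table, and SQL are invented for illustration):

```python
# A hypothetical metrics catalog: "active_users" is defined once, and the
# LLM references the metric name instead of writing its own SQL.
METRICS = {
    "active_users": {
        "sql": ("SELECT COUNT(DISTINCT user_id) FROM events "
                "WHERE event_time >= CURRENT_DATE - INTERVAL '30 days'"),
        "description": "Distinct users with any event in the last 30 days.",
    },
}

def resolve_metric(name: str) -> str:
    """Return the canonical SQL for a metric; refuse to guess at unknowns."""
    if name not in METRICS:
        raise KeyError(f"Unknown metric: {name!r}")
    return METRICS[name]["sql"]
```

The key design choice is the hard failure on unknown names: an LLM that can only select from the catalog cannot quietly invent a new definition of "active user."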

Always Show the Query & Source
  • Display the SQL the model generated.
  • Link to the source table / dashboard / metric definition.
  • Let users edit the SQL and re-run.
  • Surface confidence ("I used the orders table; if you meant gross orders, the answer differs").
Read-Only by Default

The model's database role should be read-only. Allow-list specific stored procedures if writes are needed. Never grant DROP, DELETE, or schema-mutation rights to a generative system.
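The same principle can be demonstrated end-to-end with SQLite's read-only connection mode (a stand-in for a read-only warehouse role; the table and values are made up):

```python
import os
import sqlite3
import tempfile

# Create a throwaway database with normal write access...
path = os.path.join(tempfile.mkdtemp(), "demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE kpis (name TEXT, value REAL)")
rw.execute("INSERT INTO kpis VALUES ('revenue', 100.0)")
rw.commit()
rw.close()

# ...then reopen it read-only (mode=ro). This is the connection the
# generative system gets: reads succeed, every write is rejected.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
row = ro.execute("SELECT value FROM kpis").fetchone()

try:
    ro.execute("DELETE FROM kpis")
except sqlite3.OperationalError as e:
    print("write rejected:", e)
```

Enforcing this at the connection/role level means a prompt-injected `DROP TABLE` fails at the database, regardless of what the model generates.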

Govern PII & Access
  • Apply row-level / column-level security before the LLM sees data.
  • Mask emails, phone numbers, IDs unless the user has explicit access.
  • Log every NL question + generated SQL for audit.
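Masking can be applied as a filter on any text the LLM is about to see. A minimal regex sketch (the patterns are deliberately simple illustrations; production systems use dedicated PII-detection tooling and column-level policies):

```python
import re

# Illustrative patterns only -- real PII detection needs broader coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_pii(text: str) -> str:
    """Redact emails and phone-like strings before the row reaches the LLM."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

This runs after row/column-level security, as a last line of defense for free-text fields that policy rules cannot cover.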
Anti-patterns
  • "Ask anything" on a 4,000-table warehouse with no semantic layer — guarantees garbage answers.
  • Auto-emailing AI summaries to executives without human review.
  • LLM-only anomaly detection — too noisy, too expensive.
  • Treating AI charts as canonical — they should drive humans to the dashboard, not replace it.