Embedding AI in the product, not just the dev process. Chatbots, semantic search, summarization, recommendations, generation — the layer where AI actually meets the end user.
The default first AI feature most products ship. Variants:
Hard parts: scoping ("don't answer off-topic"), safety, escalation to humans, conversation memory.
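Two of those hard parts, conversation memory and escalation, can be handled with very little machinery before reaching for anything fancier. A minimal sketch, where `MAX_TURNS` and the phrase list are illustrative choices (production systems typically use a classifier for escalation rather than keyword matching):

```python
# Sliding-window memory plus a crude human-escalation trigger.
# All thresholds and phrases here are illustrative, not a real API.

MAX_TURNS = 6  # keep only the last N turns in the prompt
ESCALATE_PHRASES = ("talk to a human", "speak to an agent", "cancel my account")

def trim_memory(history: list[dict]) -> list[dict]:
    """Sliding-window memory: drop the oldest turns beyond MAX_TURNS."""
    return history[-MAX_TURNS:]

def should_escalate(user_msg: str) -> bool:
    """Keyword trigger; a real system would use a small classifier."""
    msg = user_msg.lower()
    return any(phrase in msg for phrase in ESCALATE_PHRASES)
```

The sliding window is the simplest memory strategy; summarizing older turns into a single system message is the usual next step when context budget gets tight.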
Replace keyword search with embeddings — users ask questions, you return relevant docs/products/items. Often combined with re-ranking and metadata filters. Backed by a vector DB.
Best as hybrid (keyword + semantic) — pure semantic loses on rare exact terms (SKU, error code, names).
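One common way to combine the keyword and semantic rankings is Reciprocal Rank Fusion (RRF), which needs only the two ranked ID lists, no score normalization. A sketch with toy document IDs:

```python
# Hybrid retrieval via Reciprocal Rank Fusion: each list contributes
# 1 / (k + rank) per document; documents ranked well by either list rise.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of doc IDs; k=60 is the common default."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["sku-123", "doc-a", "doc-b"]    # exact-term match wins here
semantic_hits = ["doc-a", "doc-c", "sku-123"]   # embedding neighbors
fused = rrf([keyword_hits, semantic_hits])
```

Because the keyword list ranks the exact SKU first, the fused ranking keeps it near the top even though the embedding side barely surfaces it, which is exactly the failure mode pure semantic search has.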
Often the easiest AI win — high user value, simple prompt, predictable cost.
Two flavors today:
Often the most cost-effective LLM use:
Where a classical model would need labeled training data, a small, cheap LLM with a good prompt often gets you to "good enough" on day one.
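The whole "classifier" is then a prompt plus defensive parsing of the reply. A sketch where the labels and the `llm()` call it would wrap are placeholders:

```python
# Zero-shot classification with an LLM: build the prompt, then parse the
# reply leniently. The label set is illustrative; the model call is elided.

LABELS = ["billing", "bug", "feature_request", "other"]

def build_prompt(ticket: str) -> str:
    return (
        "Classify the support ticket into exactly one label from "
        f"{LABELS}. Reply with the label only.\n\nTicket: {ticket}"
    )

def parse_label(reply: str) -> str:
    """Models rarely reply with the bare label; match leniently."""
    reply = reply.strip().lower()
    for label in LABELS:
        if label in reply:
            return label
    return "other"  # safe fallback instead of crashing the pipeline
```

The fallback label matters: a few percent of replies will be chatty or malformed, and "other" routed to a human beats a 500 error.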
"Do this for me." The agent navigates the app, fills forms, calls APIs, shows the user the result. New, risky, and high-leverage. Examples: Linear's draft-issue agent, GitHub Copilot Workspace, Notion AI.
Before launch, build a small (50–500 examples) golden dataset. Score offline:
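The offline harness can start as a loop over (input, expected) pairs with a pluggable scorer. A sketch using exact match, the simplest scorer; fuzzy metrics or LLM-as-judge slot in via the same interface:

```python
# Offline evaluation over a golden dataset: run the system under test on
# every example and report the fraction scored correct. Exact match here;
# swap in a softer scorer for free-text outputs.

def evaluate(golden: list[dict], predict) -> float:
    """Accuracy of predict() over {"input", "expected"} examples."""
    hits = sum(1 for ex in golden if predict(ex["input"]) == ex["expected"])
    return hits / len(golden)
```

Re-running this after every prompt or model change turns "did we just regress?" from a gut feeling into a number you can gate releases on.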
max_tokens protects against runaway responses.
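In practice this is one field on the request. A sketch in the common OpenAI-style chat payload shape; the model name is a placeholder and field names should be checked against your provider's API:

```python
# Capping generation length at request time. Payload shape follows the
# widespread OpenAI-style chat API; treat names as illustrative.

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    return {
        "model": "small-cheap-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,      # hard cap on the response length
        "temperature": 0,              # keep outputs repeatable for evals
    }
```

A cap like 512 tokens bounds both latency and cost per call; set it from the longest legitimate answer in your golden dataset rather than guessing.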