AI Workflow · 3 of 6

Documentation

The most universally welcome use of dev AI. Generate docstrings, READMEs, ADRs, runbooks, and onboarding tours from real code — and answer questions against your existing docs.

Docstrings · READMEs · RAG over Docs · Onboarding
Quick Facts

Basic Concepts

  • Two directions: docs from code (generation) and Q&A over docs (RAG).
  • Best when grounded in actual code/config — never a "creative writing" prompt.
  • Stale docs are worse than no docs — keep generation in CI to refresh on every change.
  • Authorship still matters — humans review for accuracy, voice, and "what NOT to say."
Use Cases

Where AI Earns Its Keep

Docstrings & Inline Comments

Highlight a function → "add docstring." The best models capture parameter types, return shape, side effects, and a one-liner of intent. Tools: Mintlify, Cursor / Copilot, JetBrains AI.

/**
 * Reserves stock for an order, decrementing inventory atomically.
 *
 * @param orderId  Idempotency key — repeated calls are no-ops.
 * @param items    Line items; each must reference an existing SKU.
 * @returns Reservation token, valid for 15 minutes.
 * @throws OutOfStockError when any SKU has insufficient inventory.
 */
async function reserveStock(orderId, items) { … }

READMEs & Project Onboarding

"Read the repo and write a README" works surprisingly well today. Best results come from giving the model:

  • The folder structure
  • The package manifest (package.json / pyproject / pom)
  • The entry-point file(s)
  • Any existing partial docs

Tools: Claude Code's /init, Cursor's "Generate README", Mintlify, AutoDoc.
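
A minimal sketch of the gathering step, in TypeScript. callModel() is a hypothetical stand-in for whatever chat-completion client you use, and the entry-point path is an assumption to adjust per repo:

import { readFileSync } from "node:fs";
import { execSync } from "node:child_process";

// callModel() is a placeholder; wire it to your chat client of choice.
declare function callModel(prompt: string): Promise<string>;

// Gather the grounding inputs the model needs.
const tree = execSync("git ls-files", { encoding: "utf8" }); // folder structure
const manifest = readFileSync("package.json", "utf8");       // package manifest
const entry = readFileSync("src/index.ts", "utf8");          // entry point (adjust per repo)

const prompt = [
  "Write a README for this repository. Ground every claim in the inputs below;",
  "if something is not evidenced, leave it out.",
  "## File tree\n" + tree,
  "## package.json\n" + manifest,
  "## Entry point (src/index.ts)\n" + entry,
].join("\n\n");

callModel(prompt).then(readme => console.log(readme));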

API References (OpenAPI / TypeDoc / Sphinx)

Reference generators have existed for decades. The new value is narrative documentation — examples, pitfalls, "common patterns" — pulled from real usage in the codebase.
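
As an illustration (hypothetical, building on the reserveStock example above; charge() is invented for illustration), the narrative layer is the part a reference generator never produces:

Common pattern: reserve before charging.

  const token = await reserveStock(order.id, order.items);
  await charge(order.payment, order.total); // charge only once stock is held

Pitfall: reservation tokens expire after 15 minutes; on a slow checkout,
re-reserve rather than retrying the charge.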

Architecture Decision Records (ADRs)

"Given this PR diff and the linked issue, draft an ADR." The model's first pass is rarely the final one — but it sets up the structure (context, decision, consequences) so the human can edit, not start from a blank page.

Runbooks & Postmortems
  • Runbooks: from monitoring config + alert history → "if X fires, do Y" (see the sketch after this list). Often produced by SRE teams using Claude / Copilot.
  • Postmortem drafts: from Slack incident channel + git log + dashboard screenshots → blameless first draft.
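
A runbook entry the model should produce from an alert definition looks like this (service name and thresholds are hypothetical):

Alert: api-5xx-rate above 2% for 5 minutes
  1. Check the latest deploy:  kubectl rollout history deployment/api
  2. If the spike correlates:  kubectl rollout undo deployment/api
  3. Otherwise check dependency dashboards; escalate to the on-call DBA if the database is saturated.
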
"Explain This Code" for Legacy

Point an agent at a 5,000-line legacy module → "summarize the responsibilities, list public entry points, flag dead code." Arguably the highest-value use of AI for rescuing old systems today.

RAG Over Internal Docs

Index your Confluence / Notion / docs site / wiki / Slack into a vector DB; expose a chatbot. Engineers ask "how do we deploy to staging?" and get an answer with citations. Tools: Glean, Notion Q&A, Atlassian Rovo, custom LangChain / LlamaIndex apps.
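
A minimal sketch of the retrieval core, assuming an embed() client you supply; production systems add chunking, freshness filters, and the LLM answer step on top:

// embed() is a placeholder; wire it to the embedding API of your choice.
declare function embed(text: string): Promise<number[]>;

type Chunk = { source: string; text: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Index: embed every page, keeping its source for citations later.
async function index(pages: { source: string; text: string }[]): Promise<Chunk[]> {
  return Promise.all(pages.map(async p => ({ ...p, vector: await embed(p.text) })));
}

// Retrieve: top-k chunks *with sources*, so every answer can cite where it came from.
async function retrieve(chunks: Chunk[], question: string, k = 3) {
  const q = await embed(question);
  return chunks
    .map(c => ({ source: c.source, text: c.text, score: cosine(q, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

Keeping source on every chunk is what makes the citation practice below possible.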

Practice

Keeping Docs Honest

Generate in CI, Not Once

Run docstring / README updates as part of the PR pipeline. The dev sees the generated diff before merging — catches drift before it ships. Better than a one-time blast that's stale in two weeks.
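
A sketch of the gate as a GitHub Actions workflow; "npm run docs:generate" is a hypothetical script standing in for whatever regeneration command your repo uses:

# .github/workflows/docs-drift.yml (sketch)
name: docs-drift
on: pull_request
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run docs:generate   # hypothetical: regenerates docstrings / README
      - run: git diff --exit-code    # fail the PR if regeneration changed anything

Failing on a non-empty diff forces the dev to review and commit the regenerated docs before merge, which is exactly the "see the diff before it ships" loop described above.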

Cite Sources in RAG Apps

Always show which doc / page / line the answer came from. Without citations, RAG over docs becomes hallucination at scale. Users (and auditors) need to verify.

Voice & Style Guides

Feed the model your style guide ("active voice, second person, no marketing fluff"). Output reads like the rest of your docs, not generic AI prose.
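
In practice this is a short preamble prepended to every generation request, e.g. (wording invented for illustration):

You are writing developer documentation for this project.
Style: active voice, second person, present tense. No marketing adjectives.
Keep sentences under 25 words. Put code identifiers in backticks.
If the code does not confirm a claim, omit the claim.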

Anti-patterns
  • "Add comments to everything" — leads to // increment counter by 1 noise.
  • Generating tutorials without testing them — broken steps are worse than no tutorial.
  • RAG without freshness signals — old wiki pages still show as canonical.
  • Auto-publishing without review — at minimum, gate behind a human approval.